prompt: stringlengths 1.62k–99.6k
answer: stringlengths 4–245
Relavent Documents: Document 0::: Following is a list of dams and reservoirs in Puerto Rico. The below list is incomplete. The National Inventory of Dams, maintained by the U.S. Army Corps of Engineers, defines any "major dam" as being tall with a storage capacity of at least , or of any height with a storage capacity of . Dams and reservoirs in Puerto Rico Lago Caonillas, Utuado, Puerto Rico Electric Power Authority (PREPA) Lago Carite, Guayama, PREPA Carraízo Dam and Lago Loíza, Trujillo Alto, Puerto Rico Aqueducts and Sewers Authority (PRASA) Lago Cerrillos, Ponce, United States Army Corps of Engineers (USACE) Lago de Cidra, Cidra, PRASA Coamo Dam, between Coamo and Santa Isabel, PREPA Lago Dos Bocas, between Arecibo and Utuado municipalities, PREPA Lago Guajataca, between San Sebastián, Quebradillas and Isabela municipalities, PREPA Lago Guayabal, Juana Díaz, PREPA Lago El Guineo (Río Toro Negro), Villalba, PREPA Lago Garzas, Peñuelas, PREPA Lago La Plata, Toa Alta, PRASA Patillas Dam, Patillas, PREPA Portugués Dam, Ponce, USACE Río Blanco Project, Naguabo, PREPA El Salto #1 and El Salto #2, Comerío Lago Toa Vaca, Villalba, PRASA Yauco Project, Yauco, PREPA Gallery See also List of rivers in Puerto Rico References External links USGS List of Reservoirs in Puerto Rico Document 1::: Magnetic 2D materials or magnetic van der Waals materials are two-dimensional materials that display ordered magnetic properties such as antiferromagnetism or ferromagnetism. After the discovery of graphene in 2004, the family of 2D materials has grown rapidly. There have since been reports of several related materials, all except for magnetic materials. But since 2016 there have been numerous reports of 2D magnetic materials that can be exfoliated with ease, similarly to graphene. The first few-layered van der Waals magnetism was reported in 2017 (Cr2Ge2Te6, and CrI3). One reason for this seemingly late discovery is that thermal fluctuations tend to destroy magnetic order for 2D magnets more easily compared to 3D bulk. It is also generally accepted in the community that low dimensional materials have different magnetic properties compared to bulk. This academic interest that transition from 3D to 2D magnetism can be measured has been the driving force behind much of the recent works on van der Waals magnets. Much anticipated transition of such has been since observed in both antiferromagnets and ferromagnets: FePS3, Cr2Ge2Te6, CrI3, NiPS3, MnPS3, Fe3GeTe2 Although the field has been only around since 2016, it has become one of the most active fields in condensed matter physics and materials science and engineering. There have been several review articles written up to highlight its future and promise. Overview Magnetic van der Waals materials is a new addition to the growing list of 2d materials. The special feature of these new materials is that they exhibit a magnetic ground state, either antiferromagnetic or ferromagnetic, when they are thinned down to very few sheets or even one layer of materials. Another, probably more important, feature of these materials is that they can be easily produced in few layers or monolayer form using simple means such as scotch tape, which is rather uncommon among other magnetic materials like oxide magnets. Interest in these Document 2::: Unified Video Decoder (UVD, previously called Universal Video Decoder) is the name given to AMD's dedicated video decoding ASIC. There are multiple versions implementing a multitude of video codecs, such as H.264 and VC-1. 
UVD was introduced with the Radeon HD 2000 Series and is integrated into some of AMD's GPUs and APUs. UVD occupies a considerable amount of the die surface at the time of its introduction and is not to be confused with AMD's Video Coding Engine (VCE). As of AMD Raven Ridge (released January 2018), UVD and VCE were succeeded by Video Core Next (VCN). Overview The UVD is based on an ATI Xilleon video processor, which is incorporated onto the same die as the GPU and is part of the ATI Avivo HD for hardware video decoding, along with the Advanced Video Processor (AVP). UVD, as stated by AMD, handles decoding of H.264/AVC, and VC-1 video codecs entirely in hardware. The UVD technology is based on the Cadence Tensilica Xtensa processor, which was originally licensed by ATI Technologies Inc. in 2004. UVD/UVD+ In early versions of UVD, video post-processing is passed to the pixel shaders and OpenCL kernels. MPEG-2 decoding is not performed within UVD, but in the shader processors. The decoder meets the performance and profile requirements of Blu-ray and HD DVD, decoding H.264 bitstreams up to a bitrate of 40 Mbit/s. It has context-adaptive binary arithmetic coding (CABAC) support for H.264/AVC. Unlike video acceleration blocks in previous generation GPUs, which demanded considerable host-CPU involvement, UVD offloads the entire video-decoder process for VC-1 and H.264 except for video post-processing, which is offloaded to the shaders. MPEG-2 decode is also supported, but the bitstream/entropy decode is not performed for MPEG-2 video in hardware. Previously, neither ATI Radeon R520 series' ATI Avivo nor NVidia Geforce 7 series' PureVideo assisted front-end bitstream/entropy decompression in VC-1 and H.264 - the host CPU performed this work. UVD han Document 3::: Sliplining is a technique for repairing leaks or restoring structural stability to an existing pipeline. It involves installing a smaller, "carrier pipe" into a larger "host pipe", grouting the annular space between the two pipes, and sealing the ends. Sliplining has been used since the 1940s. The most common material used to slipline an existing pipe is high-density polyethylene (HDPE), but fiberglass-reinforced pipe (FRP) and PVC are also common. Sliplining can be used to stop infiltration and restore structural integrity to an existing pipe. The most common size is (8"-60"), but sliplining can occur in any size given appropriate access and a new pipe small or large enough to install. Installation methods There are two methods used to install a slipline: continuous and segmental. Continuous sliplining uses a long continuous pipe, such as HDPE, Fusible PVC, or Welded Steel Pipe, that are connected into continuous pieces of any length prior to installation. The continuous carrier pipe is pulled through the existing host pipe starting at an insertion pit and continuing to a receiving pit. Either the insertion pit, the receiving pit, or both can be manholes or other existing access points if the size and material of the new carrier pipe can manoeuvre the existing facilities. Segmental sliplining is very similar to continuous sliplining. The difference is primarily based on the pipe material used as the new carrier pipe. When using any bell and spigot pipe such as FRP, PVC, HDPE or Spirally Welded Steel Pipe, the individual pieces of pipe are lowered into place, pushed together, and pushed along the existing pipe corridor. Using either method the annular space between the two pipes must be grouted. 
In the case of sanitary sewer lines, the service laterals must be reconnected via excavation. Advantages Sliplining is generally a very cost-effective rehabilitation method. It is also very easy to install and requires tools and equipment widely available to any pi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a sag in geological terms? A. A raised area of land B. A depressed, persistent, low area C. A type of mountain D. A large body of water Answer:
B. A depressed, persistent, low area
Relavent Documents: Document 0::: Madrid Río is an urban park in the Spanish capital Madrid, built along an urban stretch of the Manzanares River following the burial of the M-30 bypass road in this area. It is the result of a project led by the architect Ginés Garrido, who won the international ideas competition organised by the Madrid City Council in 2005 to redevelop the area. The project started with the idea of recovering the banks of the Manzanares River for the use and enjoyment of the citizens. The section of the river that is now known as Madrid Río is the section that was boxed in by the M-30 bypass road, a road that isolated the river between the two directions of the highway as well as creating a barrier and fracture between the two sides of the city, the district of Arganzuela on the left bank, and the districts of Latina, Carabanchel and Usera on the right bank. The connection of the M-30 with the A-5 motorway, the road to Extremadura, separated the city in an impassable way from Casa de Campo, Madrid's largest park. The project involved the undergrounding of the M-30 in this area as well as that section of the A-5 running parallel to Casa de Campo. There are seven dams that regulate the river as it passes through the city. They receive the waters of the Manzanares River after passing through the Santillana reservoir, in Manzanares el Real, and the El Pardo reservoir, in the municipality of Madrid, which is why they are numbered from 3 to 9. Their mechanisms and locks have been repaired and the dams have been used for the new system of crossings. Initially, the project for the renaturation of the Manzanares River as it passes through Madrid Río contemplated the opening of all the dams, except the last one, to create the conditions that would make it possible for the Madrid Río rowing school to train, but finally, contrary to what was first agreed and due to pressure from the local residents, it was also decided to also open the last one so that the river could flow freely. The water level has been dropped as the natural flow of the river has been restored. Accessible wooden boards and fish ladders have been added to encourage the continuity of the underwater fauna along the river. There has been a noticeable improvement in avian biodiversity along the river with herons and kingfishers being regular visitors. The Madrid Río has received the Veronica Rudge Green Prize in Urban Design from Harvard University's Graduate School of Design in 2015. The architects were Ginés Garrido (of Burgos & Garrido), Porras La Casta, Rubio & Álvarez-Sala, and West 8. Notes External links Document 1::: Digistar is the first computer graphics-based planetarium projection and content system. It was designed by Evans & Sutherland and released in 1983. The technology originally focused on accurate and high quality display of stars, including for the first time showing stars from points of view other than Earth's surface, travelling through the stars, and accurately showing celestial bodies from different times in the past and future. Beginning with the Digistar 3 the system now projects full-dome video. Projector Unlike modern full-dome systems, which use LCD, DLP, SXRD, or laser projection technology, the Digistar projection system was designed for projecting bright pinpoints of light representing stars. This was accomplished using a calligraphic display, a form of vector graphics, rather than raster graphics. The heart of the Digistar projector is a large cathode-ray tube (CRT). 
A phosphor plate is mounted atop the tube, and light is then dispersed by a large lens with a 160 degree field of view to cover the planetarium dome. The original lens bore the inscription: "August 1979 mfg. by Lincoln Optical Corp., L.A., CA for Evans and Sutherland Computer Corp., SLC, UT, Digital planetarium CRT projection lens, 43mm, f2.8, 160 degree field of view". The coordinates of the stars and wire-frame models to be displayed by the projector were stored in computer RAM in a display list. The display would read each set of coordinates in turn and drive the CRT's electron beam directly to those coordinates. If the electron beam was enabled while being moved a line would be painted on the phosphor plate. Otherwise, the electron beam would be enabled once at its destination and a star would be painted. Once all coordinates in the display list had been processed, the display would repeat from the top of the display list. Thus, the shorter the display list the more frequently the electron beam would refresh the charge on a given point on the phosphor plate, making the projection of t Document 2::: David M. Young Jr. (October 20, 1923 – December 21, 2008) was an American mathematician and computer scientist who was one of the pioneers in the field of modern numerical analysis/scientific computing. Contributions Young is best known for establishing the mathematical framework for iterative methods (a.k.a. preconditioning). These algorithms are now used in computer software on high performance supercomputers for the numerical solution of large sparse linear systems arising from problems involving partial differential equations. See, in particular, the successive over-relaxation (SOR) and symmetric successive over-relaxation (SSOR) methods. When Young first began his research on iterative methods in the late 1940s, there was some skepticism with the idea of using iterative methods on the new computing machines to solve industrial-size problems. Ever since Young's ground-breaking Ph.D. thesis, iterative methods have been used on a wide range of scientific and engineering applications with a variety of new iterative methods having been developed. Education and career Young earned a bachelor's degree in 1944 from the Webb Institute of Naval Architecture. After service in the U.S. Navy during part of World War II, he went to Harvard University to study mathematics and was awarded a master's degree in 1947 and a Ph.D in 1950, working under the supervision of Professor Garrett Birkhoff. Young began his academic career at the University of Maryland, College Park and he was the first to teach a mathematics course focusing mainly on numerical analysis and computer programming. After several years working in the aero-space industry in Los Angeles, he joined the faculty of the University of Texas at Austin, Texas, in 1958. Young was the founding Director of the university Computation Center and then the research Center for Numerical Analysis (CNA) in 1970. He would become the Ashbel Smith Professor of Mathematics and Computer Sciences as well as a founding member of the I Document 3::: Cooling load is the rate at which sensible and latent heat must be removed from the space to maintain a constant space dry-bulb air temperature and humidity. Sensible heat into the space causes its air temperature to rise while latent heat is associated with the rise of the moisture content in the space. 
The building design, internal equipment, occupants, and outdoor weather conditions may affect the cooling load in a building using different heat transfer mechanisms. The SI units are watts. Overview The cooling load is calculated to select HVAC equipment that has the appropriate cooling capacity to remove heat from the zone. A zone is typically defined as an area with similar heat gains, similar temperature and humidity control requirements, or an enclosed space within a building with the purpose to monitor and control the zone's temperature and humidity with a single sensor e.g. thermostat. Cooling load calculation methodologies take into account heat transfer by conduction, convection, and radiation. Methodologies include heat balance, radiant time series, cooling load temperature difference, transfer function, and sol-air temperature. Methods calculate the cooling load in either steady state or dynamic conditions and some can be more involved than others. These methodologies and others can be found in ASHRAE handbooks, ISO Standard 11855, European Standard (EN) 15243, and EN 15255. ASHRAE recommends the heat balance method and radiant time series methods. Differentiation from heat gains The cooling load of a building should not be confused with its heat gains. Heat gains refer to the rate at which heat is transferred into or generated inside a building. Just like cooling loads, heat gains can be separated into sensible and latent heat gains that can occur through conduction, convection, and radiation. Thermophysical properties of walls, floors, ceilings, and windows, lighting power density (LPD), plug load density, occupant density, and equipment efficiency The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What was the main goal of the Madrid Río project initiated by Ginés Garrido? A. To construct a new highway B. To recover the banks of the Manzanares River for public use C. To build luxury apartments along the river D. To create a dam system for flood prevention Answer:
B. To recover the banks of the Manzanares River for public use
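As an illustration of the successive over-relaxation (SOR) iteration named in the David M. Young passage above, here is a minimal Python sketch of the method for a square linear system Ax = b. The function name sor_solve, the relaxation factor, tolerance, and the example matrix are illustrative choices and not taken from the source.

import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for Ax = b (A assumed suitable, e.g. diagonally dominant or SPD)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel style sweep: entries x[:i] already hold this sweep's new values.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Example with an arbitrary small diagonally dominant system.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(sor_solve(A, b))   # agrees with np.linalg.solve(A, b)

For 1 < omega < 2 each step over-relaxes the plain Gauss-Seidel update, which is where the acceleration Young analyzed comes from; omega = 1 reduces the sketch to Gauss-Seidel.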
Relavent Documents: Document 0::: In surveying, triangulation is the process of determining the location of a point by measuring only angles to it from known points at either end of a fixed baseline by using trigonometry, rather than measuring distances to the point directly as in trilateration. The point can then be fixed as the third point of a triangle with one known side and two known angles. Triangulation can also refer to the accurate surveying of systems of very large triangles, called triangulation networks. This followed from the work of Willebrord Snell in 1615–17, who showed how a point could be located from the angles subtended from three known points, but measured at the new unknown point rather than the previously fixed points, a problem called resectioning. Surveying error is minimized if a mesh of triangles at the largest appropriate scale is established first. Points inside the triangles can all then be accurately located with reference to it. Such triangulation methods were used for accurate large-scale land surveying until the rise of global navigation satellite systems in the 1980s. Principle Triangulation may be used to find the position of the ship when the positions of A and B are known. An observer at A measures the angle α, while the observer at B measures β. The position of any vertex of a triangle can be calculated if the position of one side, and two angles, are known. The following formulae are strictly correct only for a flat surface. If the curvature of the Earth must be allowed for, then spherical trigonometry must be used. Calculation With being the distance between A and B gives: Using the trigonometric identities tan α = sin α / cos α and sin(α + β) = sin α cos β + cos α sin β, this is equivalent to: therefore: From this, it is easy to determine the distance of the unknown point from either observation point, its north/south and east/west offsets from the observation point, and finally its full coordinates. History Triangulation today is used for Document 1::: In the Taroom district in the Dawson River valley of Queensland, Australia, a boggomoss (pl. boggomossi or boggomosses) is a mound spring. Boggomosses range in form from small muddy swamps to elevated peat bogs or swamps, up to 150 meters across scattered among dry woodland communities, which form part of the Springsure Group of Great Artesian Basin springs. They are rich in invertebrates and form a vital chain of permanently moist oases in an otherwise dry environment. The origin of the term boggomoss is not known, but is most likely a compound of the words bog and moss. "Boggomoss creek" in the Parish of Fernyside appears on very early maps. Environment The spring flow rate is usually in the range of 0.5 to 2.0 litres per second (8th magnitude spring), however some large boggomosses have a flow rate of up to 10.0 litres per second (7th magnitude spring). A report commissioned by the Queensland Department of Environment defined four boggomoss vegetation types with distinct environmental relations: Group 1 associated with sandy and relatively infertile soils. Group 2 associated with fertile and heavy surface soils and relatively large mounds. Group 3 associated with fertile and heavy surface soils, but with little or no mound development (probably young springs). Group 4 are linear in shape and flood prone at the base of a gorge Sources Adclarkia dawsonensis (Boggomoss Snail, Dawson Valley Snail) Document 2::: A col in geomorphology is the lowest point on a mountain ridge between two peaks. 
It may also be called a gap or pass. Particularly rugged and forbidding cols in the terrain are usually referred to as notches. They are generally unsuitable as mountain passes, but are occasionally crossed by mule tracks or climbers' routes. Derived from the French ("collar, neck") from Latin collum, "neck", the term tends to be associated more with mountain than hill ranges. The distinction with other names for breaks in mountain ridges such as saddle, wind gap or notch is not sharply defined and may vary from place to place. Many double summits are separated by prominent cols. The height of a summit above its highest col (called the key col) is effectively a measure of a mountain's topographic prominence. Cols lie on the line of the watershed between two mountains, often on a prominent ridge or arête. For example, the highest col in Austria, the ("Upper Glockner Col", ) lies between the Kleinglockner () and Grossglockner () mountains, giving the Kleinglockner a minimum prominence of 17 metres. See also Saddle (landform) Document 3::: Mammoplasia is the normal or spontaneous enlargement of human breasts. Mammoplasia occurs normally during puberty and pregnancy in women, as well as during certain periods of the menstrual cycle. When it occurs in males, it is called gynecomastia and is considered to be pathological. When it occurs in females and is extremely excessive, it is called macromastia (also known as gigantomastia or breast hypertrophy) and is similarly considered to be pathological. Mammoplasia may be due to breast engorgement, which is temporary enlargement of the breasts caused by the production and storage of breast milk in association with lactation and/or galactorrhea (excessive or inappropriate production of milk). Mastodynia (breast tenderness/pain) frequently co-occurs with mammoplasia. During the luteal phase (latter half) of the menstrual cycle, due to increased mammary blood flow and/or premenstrual fluid retention caused by high circulating concentrations of estrogen and/or progesterone, the breasts temporarily increase in size, and this is experienced by women as fullness, heaviness, swollenness, and a tingling sensation. Mammoplasia can be an effect or side effect of various drugs, including estrogens, antiandrogens such as spironolactone, cyproterone acetate, bicalutamide, and finasteride, growth hormone, and drugs that elevate prolactin levels such as D2 receptor antagonists like antipsychotics (e.g., risperidone), metoclopramide, and domperidone and certain antidepressants like selective serotonin reuptake inhibitors (SSRIs) and tricyclic antidepressants (TCAs). The risk appears to be less with serotonin-norepinephrine reuptake inhibitors (SNRIs) like venlafaxine. The "atypical" antidepressants mirtazapine and bupropion do not increase prolactin levels (bupropion may actually decrease prolactin levels), and hence there may be no risk with these agents. Other drugs that have been associated with mammoplasia include D-penicillamine, bucillamine, neothetazone, ciclosporin, The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main argument presented by critics of the GAARlandia hypothesis regarding the colonization of the Greater Antilles? A. They support the hypothesis based on geological evidence. B. They believe oceanic dispersal is the best explanation for colonization. C. They argue that multiple lineages colonized simultaneously. D. 
They find ample support from studies of individual lineages. Answer:
B. They believe oceanic dispersal is the best explanation for colonization.
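The triangulation passage above derives the position of an observed point from a baseline and two angles, but the formulas were lost in extraction. The following is a small Python sketch of the standard flat-surface computation that passage describes; the function name and the numeric values are illustrative, not from the source.

import math

def triangulate(baseline, alpha, beta):
    """Locate a point C observed from the two ends A and B of a baseline
    of known length. alpha and beta are the angles at A and B, measured
    from the baseline toward C, in radians. Returns (x, d): the distance
    from A to the foot of the perpendicular along the baseline, and the
    perpendicular distance of C from the baseline."""
    # d = baseline * sin(alpha) * sin(beta) / sin(alpha + beta)
    d = baseline * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)
    x = d / math.tan(alpha)   # offset of the foot point from A along the baseline
    return x, d

# Example with made-up numbers: a 1 km baseline, angles of 60 and 50 degrees.
x, d = triangulate(1000.0, math.radians(60), math.radians(50))
print(f"offset along baseline: {x:.1f} m, distance from baseline: {d:.1f} m")

As the passage notes, this planar result is only an approximation over large distances, where spherical trigonometry is required.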
Relavent Documents: Document 0::: Glycoprotein 130 (also known as gp130, IL6ST, IL6R-beta or CD130) is a transmembrane protein which is the founding member of the class of tall cytokine receptors. It forms one subunit of the type I cytokine receptor within the IL-6 receptor family. It is often referred to as the common gp130 subunit, and is important for signal transduction following cytokine engagement. As with other type I cytokine receptors, gp130 possesses a WSXWS amino acid motif that ensures correct protein folding and ligand binding. It interacts with Janus kinases to elicit an intracellular signal following receptor interaction with its ligand. Structurally, gp130 is composed of five fibronectin type-III domains and one immunoglobulin-like C2-type (immunoglobulin-like) domain in its extracellular portion. Characteristics The members of the IL-6 receptor family are all complex with gp130 for signal transduction. For example, IL-6 binds to the IL-6 Receptor. The complex of these two proteins then associates with gp130. This complex of 3 proteins then homodimerizes to form a hexameric complex which can produce downstream signals. There are many other proteins which associate with gp130, such as cardiotrophin 1 (CT-1), leukemia inhibitory factor (LIF), ciliary neurotrophic factor (CNTF), oncostatin M (OSM), and IL-11. There are also several other proteins which have structural similarity to gp130 and contain the WSXWS motif and preserved cysteine residues. Members of this group include LIF-R, OSM-R, and G-CSF-R. Loss of gp130 gp130 is an important part of many different types of signaling complexes. Inactivation of gp130 is lethal to mice. Homozygous mice who are born show a number of defects including impaired development of the ventricular myocardium. Haematopoietic effects included reduced numbers of stem cells in the spleen and liver. Signal transduction gp130 has no intrinsic tyrosine kinase activity. Instead, it is phosphorylated on tyrosine residues after complexing with other Document 1::: Peter Cameron (1847 – 1 December 1912 in New Mills, Derbyshire) was an English amateur entomologist who specialised in Hymenoptera. An artist, Cameron worked in the dye industry and in calico printing. He described many new species; his collection, including type material, is now in the Natural History Museum. He suffered from poor health and lack of employment. Latterly, he lived in New Mills and was supported by scholarships from the Royal Society. He loaned specimens to Jean-Jacques Kieffer, a teacher and Catholic priest in Bitche, Lorraine, who also named species after Cameron. Some of Cameron's taxonomic work is not very well regarded. Upon his death Claude Morley wrote, "Peter Cameron is dead, as was announced by most of the halfpenny papers on December 4th. What can we say of his life? Nothing; for it concerns us in no way. What shall we say of his work? Much, for it is entirely ours, and will go down to posterity as probably the most prolific and chaotic output of any individual for many years past." Similarly, American entomologist Richard M. Bohart "wound up with the thankless task of sorting through Cameron's North American contributions to a small group of wasps known as the Odynerini. Of the hundred or so names Cameron proposed within the group, almost all, Bohart found, were invalid." The Panamanian oak gall wasp Callirhytis cameroni described 2014 is named in his honor. 
Works A Monograph of the British Phytophagous Hymenoptera Ray Society (1882–1893) Hymenoptera volumes of the Biologia Centrali-Americana, volumes 1–2 (1883–1900) and (1888–1900) A complete list is given in external links below. External links Publications of Peter Cameron Manuscript collection BHL Hymenoptera Orientalis: or contributions to a knowledge of the Hymenoptera of the Oriental zoological region. Manchester :Literary and Philosophical Society,1889–1903. Digital Version of Biologia Centrali-Americana Document 2::: Buildings and structures Buildings c. 1250 Western towers and north rose window of Notre Dame de Paris in the Kingdom of France are built. Hexham Abbey, England completed (begun c. 1170). Konark Sun Temple in Odisha built. 1250 Château de Spesbourg, Holy Roman Empire built. Ponts Couverts, Strasbourg, opened. 1252 – Church of Alcobaça Monastery in Portugal completed. c. 1252 – The Franciscan abbey of Claregalway Friary, in Claregalway, County Galway, Ireland is commissioned by Norman knight John de Cogan. 1253 – Construction of the upper Basilica of San Francesco d'Assisi in Assisi, designed by Elia Bombardone, is completed. 1255 – New Gothic choir of the Tournai Cathedral in the Kingdom of France built. 1256 – Construction of Hermann Castle in present-day Estonia is started. 1257 – Construction of the Basilica di Santa Chiara in Assisi begun. 1258 – The main body of the Salisbury Cathedral (begun in 1220) in Salisbury, England, is completed. Births c. 1250 Giovanni Pisano, Italian sculptor, painter, and architect (d. c. 1315) Deaths none listed References Document 3::: Sonata was a 3D building design software application developed in the early 1980s and now regarded as the forerunner of today's building information modeling applications. Sonata was commercially released in 1986, having been developed by Jonathan Ingram independently and was sold to T2 Solutions (renamed from GMW Computers in 1987 - which was eventually bought by Alias|Wavefront), and was sold as a successor to GMW's RUCAPS. It ran on workstation computer hardware (by contrast, other 2D computer-aided design (CAD) systems could run on personal computers). The system was not expensive, according to Michael Phiri. Reiach Hall purchased "three Sonata workstations on Silicon Graphics machines, at a total cost of approximately £2000 each" [1990 prices]. Approximately 1,000 seats were sold between 1985 and 1992. However, as a BIM application, in addition to geometric modelling, it could model complete buildings, including complex parametrics, costs and staging of the construction process. Archicad founder Gábor Bojár has acknowledged that Sonata "was more advanced in 1986 than Archicad at that time", adding that it "surpassed already the matured definition of 'BIM' specified only about one and a half decade later". Many projects were designed and built using Sonata, including Peddle Thorp Architect's Rod Laver Arena in 1987, and Gatwick Airport North Terminal Domestic Facility by Taylor Woodrow. The US-based architect HKS used the software in 1992 to design a horse racing facility (Lone Star Park in Grand Prairie, Texas) and subsequently purchased the successor product, Reflex. Target Australia Pty. Ltd. the Australian discount department store retailer bought two Sonata licences in 1992 to replace two RUCAPS workstations originally from Coles Supermarkets. The software was run on two Silicon Graphics IRIS Indigo workstations. Staff were trained to use the software including the parametric language. 
The simple but powerful parametrics enable productivity gains in doc The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What significant event occurred in 2006 regarding Inspur's branding? A. Inspur changed its name from Langchao. B. Inspur was acquired by Microsoft. C. Inspur launched its first cloud computing product. D. Inspur established a joint venture with VMware. Answer:
A. Inspur changed its name from Langchao.
Relavent Documents: Document 0::: EL Aquilae, also known as Nova Aquilae 1927 was a nova that appeared in 1927. It was discovered by Max Wolf on photographic plates taken at Heidelberg Observatory on 30 and 31 July 1927 when it had a photographic magnitude of 9. Subsequent searches of plates taken at the Harvard College Observatory showed the nova was fainter than magnitude 11.1 on 8 June 1927 and had flared to magnitude 6.4 on 15 June 1927. It declined from peak brightness at an average rate of 0.105 magnitudes per day, making it a fast nova, and ultimately dimmed to about magnitude 21. The 14.5 magnitude change from peak brightness to quiescence was unusually large for a nova. All novae are binary stars, with a "donor" star orbiting a white dwarf so closely that matter is transferred from the donor to the white dwarf. Pagnotta & Schaefer argued that the donor star for the EL Aquilae system is a red giant, based on its position in an infrared color–color diagram. Tappert et al. suggest that Pagnotta & Schaefer misidentified EL Aquilae, and claim that EL Aquilae is probably an intermediate polar, a nova with a main sequence donor star, based on its eruption amplitude and color. Notes References Document 1::: The Barton decarboxylation is a radical reaction in which a carboxylic acid is converted to a thiohydroxamate ester (commonly referred to as a Barton ester). The product is then heated in the presence of a radical initiator and a suitable hydrogen donor to afford the decarboxylated product. This is an example of a reductive decarboxylation. Using this reaction it is possible to remove carboxylic acid moieties from alkyl groups and replace them with other functional groups. (See Scheme 1) This reaction is named after its developer, the British chemist and Nobel laureate Sir Derek Barton (1918–1998). Mechanism The reaction is initiated by homolytic cleavage of a radical initiator, in this case 2,2'-azobisisobutyronitrile (AIBN), upon heating. A hydrogen is then abstracted from the hydrogen source (tributylstannane in this case) to leave a tributylstannyl radical that attacks the sulfur atom of the thiohydroxamate ester. The N-O bond of the thiohydroxamate ester undergoes homolysis to form a carboxyl radical which then undergoes decarboxylation and carbon dioxide (CO2) is lost. The remaining alkyl radical (R·) then abstracts a hydrogen atom from remaining tributylstannane to form the reduced alkane (RH). (See Scheme 2) The tributyltin radical enters into another cycle of the reaction until all thiohydroxamate ester is consumed. N-O bond cleavage of the Barton ester can also occur spontaneously upon heating or by irradiation with light to initiate the reaction. In this case a radical initiator is not required but a hydrogen-atom (H-atom) donor is still necessary to form the reduced alkane (RH). Alternative H-atom donors to tributylstannane include tertiary thiols and organosilanes. The relative expense, smell, and toxicity associated with tin, thiol or silane reagents can be avoided by carrying the reaction out using chloroform as both solvent and H-atom donor. It is also possible to functionalize the alkyl radical by use of other radical trapping species (X-Y + R· Document 2::: Before data.europa.eu, the EU Open Data Portal was the point of access to public data published by the EU institutions, agencies and other bodies. On April 21, 2021 it was consolidated to the data.europa.eu portal, together with the European Data Portal: a similar initiative aimed at the EU Member States. 
Public data can be used and reused for commercial or non‑commercial purposes. The portal was a key instrument of the EU open data strategy. By ensuring easy and free access to data, their innovative use and economic potential can be enhanced. The goal of the portal was also to make the institutions and other EU bodies more transparent and accountable. Legal basis and launch of the portal Launched in December 2012, the portal was formally established by Commission Decision of 12 December 2011 (2011/833/EU) on the reuse of Commission documents to promote accessibility and reuse. Based on this decision, all the EU institutions were invited - and are still today - to publish information such as open data and to make it accessible to the public whenever possible. The operational management of the portal was the task of the Publications Office of the European Union. Implementation of EU open data policy was the responsibility of the Directorate General for Communications Networks, Content and Technology (DG CONNECT) of the European Commission. This is still true today with data.europa.eu. Features The portal enabled users to search, explore, link, download and easily re-use data for commercial or non-commercial purposes, through a common metadata catalogue. From the portal, users could access data published on the websites of the various institutions, agencies and other bodies of the EU. Semantic technologies offered additional functionalities. The metadata catalogue could be searched via an interactive search engine and through SPARQL queries. Users could suggest data they think is missing on the portal and give feedback on the quality of data obtainable. The Document 3::: The Eastern Trough Area Project, commonly known as ETAP, is a network of nine smaller oil and gas fields in the Central North Sea covering an area up to 35 km in diameter. There are a total of nine different fields, six operated by BP and another three operated by Shell, and together, they are a rich mix of geology, chemistry, technology and equity arrangements. Development The ETAP complex was sanctioned for development in 1995 with first hydrocarbons produced in 1998. The original development included Marnock, Mungo, Monan and Machar from BP and Heron, Egret, Skua from Shell. In 2002, BP brought Mirren and Madoes on stream. With these nine fields, the total reserves of ETAP are approximately of oil, of natural gas condensate and of natural gas. A single central processing facility (CPF) sits over the Marnock field and serves as a hub for all production and operations of the asset including all processing and export and a base for expedition to the Mungo NUI. The CPF consists of separate platforms for operations and accommodation linked by two 60 m bridges. The Processing, drilling and Riser platform (PdR), contains the process plant and the export lines, a riser area to receive production fluids from the other ETAP fields and the wellheads of Marnock. The Quarters and Utilities platform (QU) provides accommodation for up to 157 personnel operating this platform or travelling onwards to the Mungo NUI. This partitioning of accommodation and operations into two platforms, adds an extra element of safety, a particular concern for the designers coming only a few years after the Cullen report on the Piper Alpha disaster. Liquids are exported to Kinneil at Grangemouth through the Forties pipeline system. Gas is exported by the Central Area Transmission System to Teesside. 
Apart from Mungo, which has surface wellheads on a NUI, all other fields use subsea tie-backs. A tenth field, Fidditch, is currently under development by BP. (which has now been put on The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the maximum transfer rate of the HP Precision bus in burst mode? A. 23 MB/s B. 32 MB/s C. 5 MB/s D. 8 MB/s Answer:
A. 23 MB/s
Relavent Documents: Document 0::: Weight loss, in the context of medicine, health, or physical fitness, refers to a reduction of the total body mass, by a mean loss of fluid, body fat (adipose tissue), or lean mass (namely bone mineral deposits, muscle, tendon, and other connective tissue). Weight loss can either occur unintentionally because of malnourishment or an underlying disease, or from a conscious effort to improve an actual or perceived overweight or obese state. "Unexplained" weight loss that is not caused by reduction in calorific intake or increase in exercise is called cachexia and may be a symptom of a serious medical condition. Intentional Intentional weight loss is the loss of total body mass as a result of efforts to improve fitness and health, or to change appearance through slimming. Weight loss is the main treatment for obesity, and there is substantial evidence this can prevent progression from prediabetes to type 2 diabetes with a 7–10% weight loss and manage cardiometabolic health for diabetic people with a 5–15% weight loss. Weight loss in individuals who are overweight or obese can reduce health risks, increase fitness, and may delay the onset of diabetes. It could reduce pain and increase movement in people with osteoarthritis of the knee. Weight loss can lead to a reduction in hypertension (high blood pressure), however whether this reduces hypertension-related harm is unclear. Weight loss is achieved by adopting a lifestyle in which fewer calories are consumed than are expended. Depression, stress or boredom may contribute to unwanted weight gain or loss depending on the individual, and in these cases, individuals are advised to seek medical help. A 2010 study found that dieters who got a full night's sleep lost more than twice as much fat as sleep-deprived dieters. Though hypothesized that supplementation of vitamin D may help, studies do not support this. The majority of dieters regain weight over the long term. According to the UK National Health Service and the Di Document 1::: Arthur Mellen Wellington (December 20, 1847 – May 17, 1895) was an American civil engineer who wrote the 1877 book The Economic Theory of the Location of Railways. The saying that An engineer can do for a dollar what any fool can do for two is an abridgement of a statement made in this work (see below). Wellington was involved in the design and construction of new railways in Mexico. He was chief engineer of the Toledo and Canada Southern Railroad. He was the editor of the Engineering News. The pioneering effort of Wellington in engineering economics in the 1870s was continued by John Charles Lounsbury Fish with the publication of Engineering Economics: First Principles in 1923 and the first publication of the Principles of Engineering Economy in 1930 by Eugene L. Grant. Early life and works He was born on December 25, 1847, in Waltham, Massachusetts. In 1878, he married Agnes Bates, and they had two children. Wellington was a descendant of Roger Wellington, an early settler of the Massachusetts Bay Colony in 1636 and Benjamin Wellington. In 1863, Wellington graduated from the Boston Latin School and then studied engineering with John Benjamin Henck, a prominent civil engineer practicing in Boston. While his work with Henck took place during the American Civil War, he studied mechanical engineering and passed the examination for an assistant engineer in the United States Navy but with the end of the War, never received an appointment. 
Surveyor and locating engineer Wellington left Henck's office in 1866 to work as a surveyor in the engineers corps at the Brooklyn Parks department on the Prospect Park project under Frederick Law Olmsted. In 1868, he took a position as a surveyor on a locating party for the Blue Ridge railroad in South Carolina in charge of a series of explorations to find possible routes for the railroad. Wellington left the South Carolina road and went on to practice location engineering for the Dutchess & Columbia railroad in New York state. He Document 2::: This page provides supplementary chemical data on vitexin. Material Safety Data Sheet The handling of this chemical may incur notable safety precautions. It is highly recommend that you seek the Material Safety Datasheet (MSDS) for this chemical from a reliable source such as eChemPortal, and follow its directions. Sigma Aldrich MSDS from SDSdata.org Spectral data References Document 3::: In the hyperbolic plane, as in the Euclidean plane, each point can be uniquely identified by two real numbers. Several qualitatively different ways of coordinatizing the plane in hyperbolic geometry are used. This article tries to give an overview of several coordinate systems in use for the two-dimensional hyperbolic plane. In the descriptions below the constant Gaussian curvature of the plane is −1. Sinh, cosh and tanh are hyperbolic functions. Polar coordinate system The polar coordinate system is a two-dimensional coordinate system in which each point on a plane is determined by a distance from a reference point and an angle from a reference direction. The reference point (analogous to the origin of a Cartesian system) is called the pole, and the ray from the pole in the reference direction is the polar axis. The distance from the pole is called the radial coordinate or radius, and the angle is called the angular coordinate, or polar angle. From the hyperbolic law of cosines, we get that the distance between two points given in polar coordinates is Let , differentiating at : we get the corresponding metric tensor: The straight lines are described by equations of the form where r0 and θ0 are the coordinates of the nearest point on the line to the pole. Quadrant model system The Poincaré half-plane model is closely related to a model of the hyperbolic plane in the quadrant Q = {(x,y): x > 0, y > 0}. For such a point the geometric mean and the hyperbolic angle produce a point (u,v) in the upper half-plane. The hyperbolic metric in the quadrant depends on the Poincaré half-plane metric. The motions of the Poincaré model carry over to the quadrant; in particular the left or right shifts of the real axis correspond to hyperbolic rotations of the quadrant. Due to the study of ratios in physics and economics where the quadrant is the universe of discourse, its points are said to be located by hyperbolic coordinates. Cartesian-style coordinate systems I The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What does the term "xystum" refer to in architecture? A. A covered portico of a gymnasium B. An open path or promenade C. A type of decorative wall D. A specific style of basilica Answer:
B. An open path or promenade
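The hyperbolic-coordinates passage above states that the distance between two points given in polar coordinates follows from the hyperbolic law of cosines, but the formula itself was lost in extraction. The sketch below evaluates the standard form of that law for curvature -1; the function name and the sample coordinates are illustrative assumptions.

import math

def hyperbolic_distance(r1, theta1, r2, theta2):
    """Distance between two points of the hyperbolic plane (curvature -1)
    given in polar coordinates, via the hyperbolic law of cosines:
    cosh d = cosh r1 * cosh r2 - sinh r1 * sinh r2 * cos(theta2 - theta1)."""
    cosh_d = (math.cosh(r1) * math.cosh(r2)
              - math.sinh(r1) * math.sinh(r2) * math.cos(theta2 - theta1))
    return math.acosh(cosh_d)

# Sanity checks: two points on the same ray, then a right angle at the pole.
print(hyperbolic_distance(1.0, 0.0, 3.0, 0.0))           # -> 2.0
print(hyperbolic_distance(1.0, 0.0, 1.0, math.pi / 2))   # exceeds the Euclidean sqrt(2)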
Relavent Documents: Document 0::: A deadband or dead-band (also known as a dead zone or a neutral zone) is a band of input values in the domain of a transfer function in a control system or signal processing system where the output is zero (the output is 'dead' - no action occurs). Deadband regions can be used in control systems such as servoamplifiers to prevent oscillation or repeated activation-deactivation cycles (called 'hunting' in proportional control systems). A form of deadband that occurs in mechanical systems, compound machines such as gear trains is backlash. Voltage regulators In some power substations there are regulators that keep the voltage within certain predetermined limits, but there is a range of voltage in-between during which no changes are made, such as between 112 and 118 volts (the deadband is 6 volts), or between 215 and 225 volts (deadband is 10 volts). Backlash Gear teeth with slop (backlash) exhibit deadband. There is no drive from the input to the output shaft in either direction while the teeth are not meshed. Leadscrews generally also have backlash and hence a deadband, which must be taken into account when making position adjustments, especially with CNC systems. If mechanical backlash eliminators are not available, the control can compensate for backlash by adding the deadband value to the position vector whenever direction is reversed. Hysteresis versus Deadband Deadband is different from hysteresis. With hysteresis, there is no deadband and so the output is always in one direction or another. Devices with hysteresis have memory, in that previous system states dictate future states. Examples of devices with hysteresis are single-mode thermostats and smoke alarms. Deadband is the range in a process where no changes to output are made. Hysteresis is the difference in a variable depending on the direction of travel. Thermostats Simple (single mode) thermostats exhibit hysteresis. For example, the furnace in the basement of a house is adjusted automatically by Document 1::: A micronucleus is a small nucleus that forms whenever a chromosome or a fragment of a chromosome is not incorporated into one of the daughter nuclei during cell division. It usually is a sign of genotoxic events and chromosomal instability. Micronuclei are commonly seen in cancerous cells and may indicate genomic damage events that can increase the risk of developmental or degenerative diseases. Micronuclei form during anaphase from lagging acentric chromosomes or chromatid fragments caused by incorrectly repaired or unrepaired DNA breaks or by nondisjunction of chromosomes. This improper segregation of chromosomes may result from hypomethylation of repeat sequences present in pericentromeric DNA, irregularities in kinetochore proteins or their assembly, a dysfunctional spindle apparatus, or flawed anaphase checkpoint genes. Micronuclei can contribute to genome instability by promoting a catastrophic mutational event called chromothripsis. Many micronucleus assays have been developed to test for the presence of these structures and determine their frequency in cells exposed to certain chemicals or subjected to stressful conditions. The term micronucleus may also refer to the smaller nucleus in ciliate protozoans, such as the Paramecium. In mitosis it divides by fission, and in conjugation a pair of gamete micronuclei undergo reciprocal fusion to form a zygote nucleus, which gives rise to the macronuclei and micronuclei of the individuals of the next cycle of fission. 
Discovery Micronuclei in newly formed red blood cells in humans are known as Howell-Jolly bodies because these structures were first identified and described in erythrocytes by hematologists William Howell and Justin Jolly. These structures were later found to be associated with deficiencies in vitamins such as folate and B12. The relationship between formation of micronuclei and exposure to environmental factors was first reported in root tip cells exposed to ionizing radiation. Micronucleus inducti Document 2::: Controlled vocabularies provide a way to organize knowledge for subsequent retrieval. They are used in subject indexing schemes, subject headings, thesauri, taxonomies and other knowledge organization systems. Controlled vocabulary schemes mandate the use of predefined, preferred terms that have been preselected by the designers of the schemes, in contrast to natural language vocabularies, which have no such restriction. In library and information science In library and information science, controlled vocabulary is a carefully selected list of words and phrases, which are used to tag units of information (document or work) so that they may be more easily retrieved by a search. Controlled vocabularies solve the problems of homographs, synonyms and polysemes by a bijection between concepts and preferred terms. In short, controlled vocabularies reduce unwanted ambiguity inherent in normal human languages where the same concept can be given different names and ensure consistency. For example, in the Library of Congress Subject Headings (a subject heading system that uses a controlled vocabulary), preferred terms—subject headings in this case—have to be chosen to handle choices between variant spellings of the same word (American versus British), choice among scientific and popular terms (cockroach versus Periplaneta americana), and choices between synonyms (automobile versus car), among other difficult issues. Choices of preferred terms are based on the principles of user warrant (what terms users are likely to use), literary warrant (what terms are generally used in the literature and documents), and structural warrant (terms chosen by considering the structure, scope of the controlled vocabulary). Controlled vocabularies also typically handle the problem of homographs with qualifiers. For example, the term pool has to be qualified to refer to either swimming pool or the game pool to ensure that each preferred term or heading refers to only one concept. Types u Document 3::: The Brendel–Bormann oscillator model is a mathematical formula for the frequency dependence of the complex-valued relative permittivity, sometimes referred to as the dielectric function. The model has been used to fit to the complex refractive index of materials with absorption lineshapes exhibiting non-Lorentzian broadening, such as metals and amorphous insulators, across broad spectral ranges, typically near-ultraviolet, visible, and infrared frequencies. The dispersion relation bears the names of R. Brendel and D. Bormann, who derived the model in 1992, despite first being applied to optical constants in the literature by Andrei M. Efimov and E. G. Makarova in 1983. Around that time, several other researchers also independently discovered the model. The Brendel-Bormann oscillator model is aphysical because it does not satisfy the Kramers–Kronig relations. The model is non-causal, due to a singularity at zero frequency, and non-Hermitian. These drawbacks inspired J. Orosco and C. F. M. 
Coimbra to develop a similar, causal oscillator model. Mathematical formulation The general form of an oscillator model is given by
$$\varepsilon_r(\omega) = \varepsilon_\infty + \sum_j \chi_j(\omega)$$
where $\varepsilon_r$ is the relative permittivity, $\varepsilon_\infty$ is the value of the relative permittivity at infinite frequency, $\omega$ is the angular frequency, and $\chi_j$ is the contribution from the $j$th absorption mechanism oscillator. The Brendel-Bormann oscillator $\chi_j^{BB}$ is related to the Lorentzian oscillator $\chi_j^L$ and Gaussian oscillator $\chi_j^G$, given by
$$\chi_j^L(\omega) = \frac{s_j}{\omega_{0j}^2 - \omega^2 - i\Gamma_j\omega}, \qquad \chi_j^G(\omega) = \frac{1}{\sqrt{2\pi}\,\sigma_j}\exp\!\left(-\frac{\omega^2}{2\sigma_j^2}\right)$$
where $s_j$ is the Lorentzian strength of the $j$th oscillator, $\omega_{0j}$ is the Lorentzian resonant frequency of the $j$th oscillator, $\Gamma_j$ is the Lorentzian broadening of the $j$th oscillator, and $\sigma_j$ is the Gaussian broadening of the $j$th oscillator. The Brendel-Bormann oscillator is obtained from the convolution of the two aforementioned oscillators in the manner of
$$\chi_j^{BB}(\omega) = \frac{1}{\sqrt{2\pi}\,\sigma_j}\int_{-\infty}^{\infty}\exp\!\left(-\frac{(x-\omega_{0j})^2}{2\sigma_j^2}\right)\frac{s_j}{x^2 - \omega^2 - i\Gamma_j\omega}\,dx,$$
which yields
$$\chi_j^{BB}(\omega) = \frac{i\sqrt{\pi}\,s_j}{2\sqrt{2}\,a_j\sigma_j}\left[w\!\left(\frac{a_j - \omega_{0j}}{\sqrt{2}\,\sigma_j}\right) + w\!\left(\frac{a_j + \omega_{0j}}{\sqrt{2}\,\sigma_j}\right)\right]$$
where $w$ is the Faddeeva function and $a_j = \sqrt{\omega^2 + i\Gamma_j\omega}$. The square root in the definition of $a_j$ must be taken such that its imaginary component is positive. This is achieved by: The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What was the primary cause of the Gerrards Cross Tunnel collapse as confirmed by a later investigation in 2022? A. Heavy rainfall B. Insufficient safety checks C. Backfilling operation D. Design flaws Answer:
C. Backfilling operation
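To make the Brendel-Bormann formulation above concrete, here is a small Python sketch that evaluates the closed form with SciPy's Faddeeva function scipy.special.wofz. The function name bb_susceptibility and all parameter values are illustrative, and the prefactor follows the reconstruction given above rather than a verified primary source, so treat it as a sketch to check against the literature.

import numpy as np
from scipy.special import wofz   # Faddeeva function w(z)

def bb_susceptibility(omega, s, omega0, gamma, sigma):
    """One Brendel-Bormann oscillator contribution chi_j(omega), per the
    closed form reconstructed above; all quantities in the same
    angular-frequency units, values here purely illustrative."""
    omega = np.asarray(omega, dtype=complex)
    # a_j = sqrt(omega^2 + i*gamma*omega), branch with positive imaginary part.
    a = np.sqrt(omega**2 + 1j * gamma * omega)
    a = np.where(a.imag < 0, -a, a)
    pref = 1j * np.sqrt(np.pi) * s / (2.0 * np.sqrt(2.0) * sigma * a)
    return pref * (wofz((a - omega0) / (np.sqrt(2.0) * sigma))
                   + wofz((a + omega0) / (np.sqrt(2.0) * sigma)))

# Illustrative single-oscillator permittivity on a small frequency grid.
w_grid = np.linspace(0.1, 5.0, 5)
eps = 1.0 + bb_susceptibility(w_grid, s=1.0, omega0=2.0, gamma=0.1, sigma=0.2)
print(eps)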
Relavent Documents: Document 0::: 4 Camelopardalis is a probable multiple star in the northern constellation of Camelopardalis, located 177 light years away from the Sun, based upon parallax. With a combined apparent visual magnitude of 5.29, it is visible to the naked eye as a faint, white-hued star. The pair have a relatively high proper motion, traversing the celestial sphere at an angular rate of per year. The system's proper motion makes it a candidate for membership in the IC 2391 supercluster. They are moving away from the Earth with a heliocentric radial velocity of 22.5 km/s. The brighter member, designated component A, is classified as an Am star, which indicates that the spectrum shows abnormalities of certain elements. It is an estimated 560 million years old and is spinning with a projected rotational velocity of 75 The star has 2.01 times the mass of the Sun and 2.57 times the Sun's radius. It is radiating 18 times the Sun's luminosity from its photosphere at an effective temperature of 7,700 K. There is a faint, magnitude 9.49 companion at an angular separation of – component B; the pair most likely form a binary systemwith a period of about 90 years. There is also a 13th-magnitude visual companion away which shares a common proper motion and parallax. Another listed companion, a 12th-magnitude star nearly away, is probably unrelated. References External links HR 1511 CCDM J04480+5645 Image 4 Camelopardalis Document 1::: AIMStar was a proposed antimatter-catalyzed nuclear pulse propulsion craft that uses clouds of antiprotons to initiate fission and fusion within fuel pellets. A magnetic nozzle derives motive force from the resulting explosions. The design was studied during the 1990s by Penn State University. The craft was designed to reach a distance on the order of 10,000 AU from the Sun, with a travel time of 50 years, and a coasting velocity of approximately 960 km/s after the boost phase (roughly 1/300th of the speed of light). The probe would be able to study the interstellar medium as well as reach Alpha Centauri. The project would require more antimatter than we are capable of producing. In addition, some technical hurdles need to be surpassed before it would be feasible. See also ICAN-II - A similar concept that uses less antimatter but more fission propellant Nuclear pulse propulsion References External links Antimatter (PSU) Document 2::: Structural complexity is a science of applied mathematics that aims to relate fundamental physical or biological aspects of a complex system with the mathematical description of the morphological complexity that the system exhibits, by establishing rigorous relations between mathematical and physical properties of such system. Structural complexity emerges from all systems that display morphological organization. Filamentary structures, for instance, are an example of coherent structures that emerge, interact and evolve in many physical and biological systems, such as mass distribution in the Universe, vortex filaments in turbulent flows, neural networks in our brain and genetic material (such as DNA) in a cell. In general information on the degree of morphological disorder present in the system tells us something important about fundamental physical or biological processes. Structural complexity methods are based on applications of differential geometry and topology (and in particular knot theory) to interpret physical properties of dynamical systems. 
such as relations between kinetic energy and tangles of vortex filaments in a turbulent flow or magnetic energy and braiding of magnetic fields in the solar corona, including aspects of topological fluid dynamics. Literature References Document 3::: {{DISPLAYTITLE:Lp space}} In mathematics, the spaces are function spaces defined using a natural generalization of the -norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue , although according to the Bourbaki group they were first introduced by Frigyes Riesz . spaces form an important class of Banach spaces in functional analysis, and of topological vector spaces. Because of their key role in the mathematical analysis of measure and probability spaces, Lebesgue spaces are used also in the theoretical discussion of problems in physics, statistics, economics, finance, engineering, and other disciplines. Preliminaries The -norm in finite dimensions The Euclidean length of a vector in the -dimensional real vector space is given by the Euclidean norm: The Euclidean distance between two points and is the length of the straight line between the two points. In many situations, the Euclidean distance is appropriate for capturing the actual distances in a given space. In contrast, consider taxi drivers in a grid street plan who should measure distance not in terms of the length of the straight line to their destination, but in terms of the rectilinear distance, which takes into account that streets are either orthogonal or parallel to each other. The class of -norms generalizes these two examples and has an abundance of applications in many parts of mathematics, physics, and computer science. For a real number the -norm or -norm of is defined by The absolute value bars can be dropped when is a rational number with an even numerator in its reduced form, and is drawn from the set of real numbers, or one of its subsets. The Euclidean norm from above falls into this class and is the -norm, and the -norm is the norm that corresponds to the rectilinear distance. The -norm or maximum norm (or uniform norm) is the limit of the -norms for , given by: For all the -norms and maximum norm satisfy the properties of a "length function" (or norm), that is: only the zero vector has zero length, the length of the vector is positive homogeneous with respect to multiplication by a scalar (positive homogeneity), and the length of the sum of two vectors is no larger than the sum of lengths of the vectors (triangle inequality). Abstractly speaking, this means that together with the -norm is a normed vector space. Moreover, it turns out that this space is complete, thus making it a Banach space. Relations between -norms The grid distance or rectilinear distance (sometimes called the "Manhattan distance") between two points is never shorter than the length of the line segment between them (the Euclidean or "as the crow flies" distance). Formally, this means that the Euclidean norm of any vector is bounded by its 1-norm: This fact generalizes to -norms in that the -norm of any given vector does not grow with : For the opposite direction, the following relation between the -norm and the -norm is known: This inequality depends on the dimension of the underlying vector space and follows directly from the Cauchy–Schwarz inequality. In general, for vectors in where This is a consequence of Hölder's inequality. 
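As a concrete illustration of the finite-dimensional case discussed above, the sketch below computes the p-norm, i.e. the quantity (sum over i of |x_i|^p)^(1/p), for several values of p, treats the maximum norm as the p = infinity case, and checks the two relations quoted in the text (the 2-norm is bounded by the 1-norm, and the p-norm does not grow as p increases). A minimal numpy sketch, not taken from the source.

```python
import numpy as np

def p_norm(x, p):
    """(sum_i |x_i|^p)^(1/p) for finite p; the maximum norm for p = inf."""
    x = np.abs(np.asarray(x, dtype=float))
    if np.isinf(p):
        return x.max()
    return (x**p).sum() ** (1.0 / p)

x = np.array([3.0, -4.0, 1.0, 0.5])
ps = [1, 1.5, 2, 3, 10, np.inf]
norms = [p_norm(x, p) for p in ps]
for p, n in zip(ps, norms):
    print(f"p = {p}: ||x||_p = {n:.4f}")

# ||x||_2 <= ||x||_1, and ||x||_p is non-increasing as p increases
assert p_norm(x, 2) <= p_norm(x, 1)
assert all(a >= b - 1e-12 for a, b in zip(norms, norms[1:]))
```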
When In for the formula defines an absolutely homogeneous function for however, the resulting function does not define a norm, because it is not subadditive. On the other hand, the formula defines a subadditive function at the cost of losing absolute homogeneity. It does define an F-norm, though, which is homogeneous of degree Hence, the function defines a metric. The metric space is denoted by Although the -unit ball around the origin in this metric is "concave", the topology defined on by the metric is the usual vector space topology of hence is a locally convex topological vector space. Beyond this qualitative statement, a quantitative way to measure the lack of convexity of is to denote by the smallest constant such that the scalar multiple of the -unit ball contains the convex hull of which is equal to The fact that for fixed we have shows that the infinite-dimensional sequence space defined below, is no longer locally convex. When There is one norm and another function called the "norm" (with quotation marks). The mathematical definition of the norm was established by Banach's Theory of Linear Operations. The space of sequences has a complete metric topology provided by the F-norm on the product metric: The -normed space is studied in functional analysis, probability theory, and harmonic analysis. Another function was called the "norm" by David Donoho—whose quotation marks warn that this function is not a proper norm—is the number of non-zero entries of the vector Many authors abuse terminology by omitting the quotation marks. Defining the zero "norm" of is equal to This is not a norm because it is not homogeneous. For example, scaling the vector by a positive constant does not change the "norm". Despite these defects as a mathematical norm, the non-zero counting "norm" has uses in scientific computing, information theory, and statistics–notably in compressed sensing in signal processing and computational harmonic analysis. Despite not being a norm, the associated metric, known as Hamming distance, is a valid distance, since homogeneity is not required for distances. spaces and sequence spaces The -norm can be extended to vectors that have an infinite number of components (sequences), which yields the space This contains as special cases: the space of sequences whose series are absolutely convergent, the space of square-summable sequences, which is a Hilbert space, and the space of bounded sequences. The space of sequences has a natural vector space structure by applying scalar addition and multiplication. Explicitly, the vector sum and the scalar action for infinite sequences of real (or complex) numbers are given by: Define the -norm: Here, a complication arises, namely that the series on the right is not always convergent, so for example, the sequence made up of only ones, will have an infinite -norm for The space is then defined as the set of all infinite sequences of real (or complex) numbers such that the -norm is finite. One can check that as increases, the set grows larger. For example, the sequence is not in but it is in for as the series diverges for (the harmonic series), but is convergent for One also defines the -norm using the supremum: and the corresponding space of all bounded sequences. It turns out that if the right-hand side is finite, or the left-hand side is infinite. Thus, we will consider spaces for The -norm thus defined on is indeed a norm, and together with this norm is a Banach space. 
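To make the sequence-space discussion concrete, the sketch below truncates the harmonic sequence (1, 1/2, 1/3, ...): its partial sums of |a_k| grow without bound while the partial sums of |a_k|^2 stabilise near pi^2/6, illustrating why the sequence lies in the p-summable space only for p > 1. It also counts non-zero entries (the so-called zero "norm") and shows that the count is unchanged by scaling, i.e. it is not homogeneous, and it checks the real polarization identity for the square-summable (Hilbert) case. A minimal numpy sketch, not from the source.

```python
import numpy as np

n = np.arange(1, 100001)
harmonic = 1.0 / n

# Partial sums of |a_k|^p: divergent for p = 1, convergent for p = 2
print("sum |a_k|   over 1e5 terms:", harmonic.sum())        # keeps growing (~ log N)
print("sum |a_k|^2 over 1e5 terms:", (harmonic**2).sum())   # approaches pi^2/6 ~ 1.6449

# The zero "norm": number of non-zero entries; scaling does not change the count
x = np.array([0.0, 3.0, 0.0, -1.0, 2.0])
print(np.count_nonzero(x), np.count_nonzero(10 * x))        # both 3: not homogeneous

# Square-summable (Hilbert) case: the inner product can be recovered from the norm
# via the real polarization identity <u, v> = (||u+v||^2 - ||u-v||^2) / 4.
rng = np.random.default_rng(0)
u, v = rng.normal(size=8), rng.normal(size=8)
assert np.isclose(np.dot(u, v),
                  (np.linalg.norm(u + v) ** 2 - np.linalg.norm(u - v) ** 2) / 4)
```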
General ℓp-space In complete analogy to the preceding definition one can define the space over a general index set (and ) as where convergence on the right means that only countably many summands are nonzero (see also Unconditional convergence). With the norm the space becomes a Banach space. In the case where is finite with elements, this construction yields with the -norm defined above. If is countably infinite, this is exactly the sequence space defined above. For uncountable sets this is a non-separable Banach space which can be seen as the locally convex direct limit of -sequence spaces. For the -norm is even induced by a canonical inner product called the , which means that holds for all vectors This inner product can expressed in terms of the norm by using the polarization identity. On it can be defined by Now consider the case Define where for all The index set can be turned into a measure space by giving it the discrete σ-algebra and the counting measure. Then the space is just a special case of the more general -space (defined below). Lp spaces and Lebesgue integrals An space may be defined as a space of measurable functions for which the -th power of the absolute value is Lebesgue integrable, where functions which agree almost everywhere are identified. More generally, let be a measure space and When , consider the set of all measurable functions from to or whose absolute value raised to the -th power has a finite integral, or in symbols: To define the set for recall that two functions and defined on are said to be , written , if the set is measurable and has measure zero. Similarly, a measurable function (and its absolute value) is (or ) by a real number written , if the (necessarily) measurable set has measure zero. The space is the set of all measurable functions that are bounded almost everywhere (by some real ) and is defined as the infimum of these bounds: When then this is the same as the essential supremum of the absolute value of : For example, if is a measurable function that is equal to almost everywhere then for every and thus for all For every positive the value under of a measurable function and its absolute value are always the same (that is, for all ) and so a measurable function belongs to if and only if its absolute value does. Because of this, many formulas involving -norms are stated only for non-negative real-valued functions. Consider for example the identity which holds whenever is measurable, is real, and (here when ). The non-negativity requirement can be removed by substituting in for which gives Note in particular that when is finite then the formula relates the -norm to the -norm. Seminormed space of -th power integrable functions Each set of functions forms a vector space when addition and scalar multiplication are defined pointwise. That the sum of two -th power integrable functions and is again -th power integrable follows from although it is also a consequence of Minkowski's inequality which establishes that satisfies the triangle inequality for (the triangle inequality does not hold for ). That is closed under scalar multiplication is due to being absolutely homogeneous, which means that for every scalar and every function Absolute homogeneity, the triangle inequality, and non-negativity are the defining properties of a seminorm. Thus is a seminorm and the set of -th power integrable functions together with the function defines a seminormed vector space. 
In general, the seminorm is not a norm because there might exist measurable functions that satisfy but are not equal to ( is a norm if and only if no such exists). Zero sets of -seminorms If is measurable and equals a.e. then for all positive On the other hand, if is a measurable function for which there exists some such that then almost everywhere. When is finite then this follows from the case and the formula mentioned above. Thus if is positive and is any measurable function, then if and only if almost everywhere. Since the right hand side ( a.e.) does not mention it follows that all have the same zero set (it does not depend on ). So denote this common set by This set is a vector subspace of for every positive Quotient vector space Like every seminorm, the seminorm induces a norm (defined shortly) on the canonical quotient vector space of by its vector subspace This normed quotient space is called and it is the subject of this article. We begin by defining the quotient vector space. Given any the coset consists of all measurable functions that are equal to almost everywhere. The set of all cosets, typically denoted by forms a vector space with origin when vector addition and scalar multiplication are defined by and This particular quotient vector space will be denoted by Two cosets are equal if and only if (or equivalently, ), which happens if and only if almost everywhere; if this is the case then and are identified in the quotient space. Hence, strictly speaking consists of equivalence classes of functions. The -norm on the quotient vector space Given any the value of the seminorm on the coset is constant and equal to denote this unique value by so that: This assignment defines a map, which will also be denoted by on the quotient vector space This map is a norm on called the . The value of a coset is independent of the particular function that was chosen to represent the coset, meaning that if is any coset then for every (since for every ). The Lebesgue space The normed vector space is called or the of -th power integrable functions and it is a Banach space for every (meaning that it is a complete metric space, a result that is sometimes called the [[Riesz–Fischer theorem#Completeness of Lp, 0 < p ≤ ∞|Riesz–Fischer theorem]]). When the underlying measure space is understood then is often abbreviated or even just Depending on the author, the subscript notation might denote either or If the seminorm on happens to be a norm (which happens if and only if ) then the normed space will be linearly isometrically isomorphic to the normed quotient space via the canonical map (since ); in other words, they will be, up to a linear isometry, the same normed space and so they may both be called " space". The above definitions generalize to Bochner spaces. In general, this process cannot be reversed: there is no consistent way to define a "canonical" representative of each coset of in For however, there is a theory of lifts enabling such recovery. Special cases For the spaces are a special case of spaces; when are the natural numbers and is the counting measure. More generally, if one considers any set with the counting measure, the resulting space is denoted For example, is the space of all sequences indexed by the integers, and when defining the -norm on such a space, one sums over all the integers. The space where is the set with elements, is with its -norm as defined above. Similar to spaces, is the only Hilbert space among spaces. 
In the complex case, the inner product on is defined by Functions in are sometimes called square-integrable functions, quadratically integrable functions or square-summable functions, but sometimes these terms are reserved for functions that are square-integrable in some other sense, such as in the sense of a Riemann integral . As any Hilbert space, every space is linearly isometric to a suitable where the cardinality of the set is the cardinality of an arbitrary basis for this particular If we use complex-valued functions, the space is a commutative C*-algebra with pointwise multiplication and conjugation. For many measure spaces, including all sigma-finite ones, it is in fact a commutative von Neumann algebra. An element of defines a bounded operator on any space by multiplication. When If then can be defined as above, that is: In this case, however, the -norm does not satisfy the triangle inequality and defines only a quasi-norm. The inequality valid for implies that and so the function is a metric on The resulting metric space is complete. In this setting satisfies a reverse Minkowski inequality, that is for This result may be used to prove Clarkson's inequalities, which are in turn used to establish the uniform convexity of the spaces for . The space for is an F-space: it admits a complete translation-invariant metric with respect to which the vector space operations are continuous. It is the prototypical example of an F-space that, for most reasonable measure spaces, is not locally convex: in or every open convex set containing the function is unbounded for the -quasi-norm; therefore, the vector does not possess a fundamental system of convex neighborhoods. Specifically, this is true if the measure space contains an infinite family of disjoint measurable sets of finite positive measure. The only nonempty convex open set in is the entire space. Consequently, there are no nonzero continuous linear functionals on the continuous dual space is the zero space. In the case of the counting measure on the natural numbers (i.e. ), the bounded linear functionals on are exactly those that are bounded on , i.e., those given by sequences in Although does contain non-trivial convex open sets, it fails to have enough of them to give a base for the topology. Having no linear functionals is highly undesirable for the purposes of doing analysis. In case of the Lebesgue measure on rather than work with for it is common to work with the Hardy space whenever possible, as this has quite a few linear functionals: enough to distinguish points from one another. However, the Hahn–Banach theorem still fails in for . Properties Hölder's inequality Suppose satisfy . If and then and This inequality, called Hölder's inequality, is in some sense optimal since if and is a measurable function such that where the supremum is taken over the closed unit ball of then and Generalized Minkowski inequality Minkowski inequality, which states that satisfies the triangle inequality, can be generalized: If the measurable function is non-negative (where and are measure spaces) then for all Atomic decomposition If then every non-negative has an , meaning that there exist a sequence of non-negative real numbers and a sequence of non-negative functions called , whose supports are pairwise disjoint sets of measure such that and for every integer and and where moreover, the sequence of functions depends only on (it is independent of ). 
These inequalities guarantee that for all integers while the supports of being pairwise disjoint implies An atomic decomposition can be explicitly given by first defining for every integer and then letting where denotes the measure of the set and denotes the indicator function of the set The sequence is decreasing and converges to as Consequently, if then and so that is identically equal to (in particular, the division by causes no issues). The complementary cumulative distribution function of that was used to define the also appears in the definition of the weak -norm (given below) and can be used to express the -norm (for ) of as the integral where the integration is with respect to the usual Lebesgue measure on Dual spaces The dual space of for has a natural isomorphism with where is such that . This isomorphism associates with the functional defined by for every is a well defined continuous linear mapping which is an isometry by the extremal case of Hölder's inequality. If is a -finite measure space one can use the Radon–Nikodym theorem to show that any can be expressed this way, i.e., is an isometric isomorphism of Banach spaces. Hence, it is usual to say simply that is the continuous dual space of For the space is reflexive. Let be as above and let be the corresponding linear isometry. Consider the map from to obtained by composing with the transpose (or adjoint) of the inverse of This map coincides with the canonical embedding of into its bidual. Moreover, the map is onto, as composition of two onto isometries, and this proves reflexivity. If the measure on is sigma-finite, then the dual of is isometrically isomorphic to (more precisely, the map corresponding to is an isometry from onto The dual of is subtler. Elements of can be identified with bounded signed finitely additive measures on that are absolutely continuous with respect to See ba space for more details. If we assume the axiom of choice, this space is much bigger than except in some trivial cases. However, Saharon Shelah proved that there are relatively consistent extensions of Zermelo–Fraenkel set theory (ZF + DC + "Every subset of the real numbers has the Baire property") in which the dual of is Embeddings Colloquially, if then contains functions that are more locally singular, while elements of can be more spread out. Consider the Lebesgue measure on the half line A continuous function in might blow up near but must decay sufficiently fast toward infinity. On the other hand, continuous functions in need not decay at all but no blow-up is allowed. More formally: If : if and only if does not contain sets of finite but arbitrarily large measure (e.g. any finite measure). If : if and only if does not contain sets of non-zero but arbitrarily small measure (e.g. the counting measure). Neither condition holds for the Lebesgue measure on the real line while both conditions holds for the counting measure on any finite set. As a consequence of the closed graph theorem, the embedding is continuous, i.e., the identity operator is a bounded linear map from to in the first case and to in the second. Indeed, if the domain has finite measure, one can make the following explicit calculation using Hölder's inequality leading to The constant appearing in the above inequality is optimal, in the sense that the operator norm of the identity is precisely the case of equality being achieved exactly when -almost-everywhere. 
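Both Hölder's inequality and the finite-measure embedding constant discussed above can be checked numerically. In the sketch below, vectors play the role of functions on a finite set: with the counting measure, the sum of |fg| is bounded by the product of the p- and q-norms when 1/p + 1/q = 1, and with the uniform probability measure on n points the p-norm is bounded by the q-norm for p below q (the constant, a power of the total measure, equals 1 there). A minimal sketch, not from the source.

```python
import numpy as np

rng = np.random.default_rng(1)
f, g = rng.normal(size=50), rng.normal(size=50)
p, q = 3.0, 1.5  # conjugate exponents: 1/3 + 2/3 = 1

def lp_norm(h, p, weights):
    """Discrete Lp norm with the given non-negative weights (the measure)."""
    return (weights * np.abs(h) ** p).sum() ** (1.0 / p)

# Hoelder's inequality with the counting measure (all weights equal to 1)
w = np.ones_like(f)
assert (w * np.abs(f * g)).sum() <= lp_norm(f, p, w) * lp_norm(g, q, w) + 1e-12

# Embedding on a probability space: ||f||_1.5 <= ||f||_3 (constant is 1 here)
w = np.full_like(f, 1.0 / f.size)
assert lp_norm(f, 1.5, w) <= lp_norm(f, 3.0, w) + 1e-12
print("both inequalities hold on this sample")
```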
Dense subspaces Let and be a measure space and consider an integrable simple function on given by where are scalars, has finite measure and is the indicator function of the set for By construction of the integral, the vector space of integrable simple functions is dense in More can be said when is a normal topological space and its Borel –algebra. Suppose is an open set with Then for every Borel set contained in there exist a closed set and an open set such that for every . Subsequently, there exists a Urysohn function on that is on and on with If can be covered by an increasing sequence of open sets that have finite measure, then the space of –integrable continuous functions is dense in More precisely, one can use bounded continuous functions that vanish outside one of the open sets This applies in particular when and when is the Lebesgue measure. For example, the space of continuous and compactly supported functions as well as the space of integrable step functions are dense in . Closed subspaces If is any positive real number, is a probability measure on a measurable space (so that ), and is a vector subspace, then is a closed subspace of if and only if is finite-dimensional ( was chosen independent of ). In this theorem, which is due to Alexander Grothendieck, it is crucial that the vector space be a subset of since it is possible to construct an infinite-dimensional closed vector subspace of (which is even a subset of ), where is Lebesgue measure on the unit circle and is the probability measure that results from dividing it by its mass Applications Statistics In statistics, measures of central tendency and statistical dispersion, such as the mean, median, and standard deviation, can be defined in terms of metrics, and measures of central tendency can be characterized as solutions to variational problems. In penalized regression, "L1 penalty" and "L2 penalty" refer to penalizing either the norm of a solution's vector of parameter values (i.e. the sum of its absolute values), or its squared norm (its Euclidean length). Techniques which use an L1 penalty, like LASSO, encourage sparse solutions (where the many parameters are zero). Elastic net regularization uses a penalty term that is a combination of the norm and the squared norm of the parameter vector. Hausdorff–Young inequality The Fourier transform for the real line (or, for periodic functions, see Fourier series), maps to (or to ) respectively, where and This is a consequence of the Riesz–Thorin interpolation theorem, and is made precise with the Hausdorff–Young inequality. By contrast, if the Fourier transform does not map into Hilbert spaces Hilbert spaces are central to many applications, from quantum mechanics to stochastic calculus. The spaces and are both Hilbert spaces. In fact, by choosing a Hilbert basis i.e., a maximal orthonormal subset of or any Hilbert space, one sees that every Hilbert space is isometrically isomorphic to (same as above), i.e., a Hilbert space of type Generalizations and extensions Weak Let be a measure space, and a measurable function with real or complex values on The distribution function of is defined for by If is in for some with then by Markov's inequality, A function is said to be in the space weak , or if there is a constant such that, for all The best constant for this inequality is the -norm of and is denoted by The weak coincide with the Lorentz spaces so this notation is also used to denote them. The -norm is not a true norm, since the triangle inequality fails to hold. 
Nevertheless, for in and in particular In fact, one has and raising to power and taking the supremum in one has Under the convention that two functions are equal if they are equal almost everywhere, then the spaces are complete . For any the expression is comparable to the -norm. Further in the case this expression defines a norm if Hence for the weak spaces are Banach spaces . A major result that uses the -spaces is the Marcinkiewicz interpolation theorem, which has broad applications to harmonic analysis and the study of singular integrals. Weighted spaces As before, consider a measure space Let be a measurable function. The -weighted space is defined as where means the measure defined by or, in terms of the Radon–Nikodym derivative, the norm for is explicitly As -spaces, the weighted spaces have nothing special, since is equal to But they are the natural framework for several results in harmonic analysis ; they appear for example in the Muckenhoupt theorem: for the classical Hilbert transform is defined on where denotes the unit circle and the Lebesgue measure; the (nonlinear) Hardy–Littlewood maximal operator is bounded on Muckenhoupt's theorem describes weights such that the Hilbert transform remains bounded on and the maximal operator on spaces on manifolds One may also define spaces on a manifold, called the intrinsic spaces of the manifold, using densities. Vector-valued spaces Given a measure space and a locally convex space (here assumed to be complete), it is possible to define spaces of -integrable -valued functions on in a number of ways. One way is to define the spaces of Bochner integrable and Pettis integrable functions, and then endow them with locally convex TVS-topologies that are (each in their own way) a natural generalization of the usual topology. Another way involves topological tensor products of with Element of the vector space are finite sums of simple tensors where each simple tensor may be identified with the function that sends This tensor product is then endowed with a locally convex topology that turns it into a topological tensor product, the most common of which are the projective tensor product, denoted by and the injective tensor product, denoted by In general, neither of these space are complete so their completions are constructed, which are respectively denoted by and (this is analogous to how the space of scalar-valued simple functions on when seminormed by any is not complete so a completion is constructed which, after being quotiented by is isometrically isomorphic to the Banach space ). Alexander Grothendieck showed that when is a nuclear space (a concept he introduced), then these two constructions are, respectively, canonically TVS-isomorphic with the spaces of Bochner and Pettis integral functions mentioned earlier; in short, they are indistinguishable. space of measurable functions The vector space of (equivalence classes of) measurable functions on is denoted . By definition, it contains all the and is equipped with the topology of convergence in measure. When is a probability measure (i.e., ), this mode of convergence is named convergence in probability. The space is always a topological abelian group but is only a topological vector space if This is because scalar multiplication is continuous if and only if If is -finite then the weaker topology of local convergence in measure is an F-space, i.e. a completely metrizable topological vector space. 
Moreover, this topology is isometric to global convergence in measure for a suitable choice of probability measure The description is easier when is finite. If is a finite measure on the function admits for the convergence in measure the following fundamental system of neighborhoods The topology can be defined by any metric of the form where is bounded continuous concave and non-decreasing on with and when (for example, Such a metric is called Lévy-metric for Under this metric the space is complete. However, as mentioned above, scalar multiplication is continuous with respect to this metric only if . To see this, consider the Lebesgue measurable function defined by . Then clearly . The space is in general not locally bounded, and not locally convex. For the infinite Lebesgue measure on the definition of the fundamental system of neighborhoods could be modified as follows The resulting space , with the topology of local convergence in measure, is isomorphic to the space for any positive –integrable density See also Notes References . . . . . . External links Proof that Lp spaces are complete The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the significance of Lp spaces in mathematics and other disciplines according to the text? A. They are only useful in abstract mathematics. B. They are crucial for mathematical analysis and have applications in physics, statistics, and finance. C. They are primarily concerned with the geometric properties of shapes. D. They are only relevant in the study of finite-dimensional vector spaces. Answer:
B. They are crucial for mathematical analysis and have applications in physics, statistics, and finance.
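The penalized-regression remark in the Lp-space document above (L1 penalties encourage sparse solutions, squared-L2 penalties only shrink coefficients) can be illustrated with the one-dimensional proximal updates of the two penalties. This is a minimal numpy sketch of the general idea, not code from LASSO or any particular library; the function names prox_l1 and prox_l2_squared are illustrative.

```python
import numpy as np

def prox_l1(v, lam):
    """argmin_w 0.5*(w - v)^2 + lam*|w|  ->  soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_l2_squared(v, lam):
    """argmin_w 0.5*(w - v)^2 + lam*w^2  ->  plain shrinkage."""
    return v / (1.0 + 2.0 * lam)

v = np.array([3.0, 0.4, -0.2, -2.5, 0.05])
print("L1 penalty:  ", prox_l1(v, lam=0.5))           # small entries become exactly 0
print("L2^2 penalty:", prox_l2_squared(v, lam=0.5))   # all entries merely shrink
```

The exact zeros produced by the L1 update are why the L1 penalty is associated with sparsity, while the squared-L2 update never zeroes a coefficient.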
Relevant Documents: Document 0::: Tesamorelin (INN; trade name Egrifta SV) is a synthetic form of growth-hormone-releasing hormone (GHRH) which is used in the treatment of HIV-associated lipodystrophy, approved initially in 2010. It is produced and developed by Theratechnologies, Inc. of Canada. The drug is a synthetic peptide consisting of all 44 amino acids of human GHRH with the addition of a trans-3-hexenoic acid group. Mechanism of action Tesamorelin is the N-terminally modified compound based on the 44-amino-acid sequence of human GHRH. This modified synthetic form is more potent and stable than the natural peptide. It is also more resistant to cleavage by dipeptidyl aminopeptidase than human GHRH. It stimulates the synthesis and release of endogenous GH, with an increase in the level of insulin-like growth factor 1 (IGF-1). The released GH then binds to receptors present on various body organs and regulates body composition. This regulation is mainly due to a combination of anabolic and lipolytic mechanisms. However, it has been found that the main mechanisms by which Tesamorelin reduces body fat mass are lipolysis followed by a reduction in triglyceride levels. Contraindication Tesamorelin therapy may cause glucose intolerance and increase the risk of type 2 diabetes. It is contraindicated in pregnancy (category X) because it may cause harm to the fetus. It is also contraindicated in patients affected by hypothalamic-pituitary axis disruption due to pituitary gland tumor, head irradiation and hypopituitarism. Adverse effects Injection site erythema, peripheral edema, injection site pruritus and diarrhea. Document 1::: A demining robot is a robotic land vehicle that is designed for detecting the exact location of land mines and clearing them. Demining by conventional methods can be costly and dangerous for people. Environments that are dull, dirty, or otherwise dangerous to humans may be well-suited for the use of demining robots. Models Uran-6 Uran-6 is a demining robot model used by the Russian Federation in Syria and Ukraine. The Uran-6 is a short-range, remotely piloted robot. Limitations of this robot include the need for human operators to be within a few hundred feet. MV-4 Dok-Ing MV-4 Dok-Ing is a demining robot model used by the Republic of Croatia. See also Lawnmower robot Vacuum cleaner robot References Document 2::: Melt is the working material in the steelmaking process, in making glass, and when forming thermoplastics. In thermoplastics, melt is the plastic at its forming temperature, which can vary depending on how it is being used. For steelmaking, it refers to steel in liquid form. See also Wax melter Crucible References Notes Bibliography Document 3::: This is a list of municipalities in Finland. There are a total of 308 municipalities, of which 114 have both a Finnish and a Swedish name. These municipalities are listed by the name in the local majority language, with the name in the other national language provided in parentheses. Finnish is the majority language in 99 of these 114 municipalities, while Swedish is the majority language in the remaining 15 municipalities. The four municipalities that are wholly or partly within the Sami native region have their names also given in the local Sami languages.
The source table lists, for each municipality: Municipality, Administrative center, Location on map, Land area (km2), Population, and Density (/km2). The numeric values and map locations are not preserved in this extract; the entries shown run alphabetically and include Alavieska, Alavus, Askola, Aura, Enonkoski, Enontekiö (Hetta), Eura, Eurajoki, Forssa, Geta (Vestergeta), Haapavesi, Hailuoto, Halsua, Hamina, Hammarland (Kattby), Hankasalmi, Hanko, Harjavalta, Hartola, Hattula (Parola), Heinola, Heinävesi, Helsinki, Hirvensalmi, Humppila, Hyrynsalmi, Ii (Iin Hamina), Iisalmi, Ikaalinen, Ilmajoki, Ilomantsi, Imatra (Mansikkala), Isojoki, Jakobstad, Joensuu, Jokioinen, Joroinen, Joutsa, Juuka, Juva, Kaarina, Kaavi, Kajaani and Kalajoki, after which the table is truncated. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What was the primary goal of the Brotherhood in "The Invisible Empire"? A. To create a new world religion B. To gain control over the entire world C. To protect the innocent from evil D. To develop advanced technology for peace Answer:
B. To gain control over the entire world
Relevant Documents: Document 0::: Majorana 1 is a hardware device developed by Microsoft, with potential applications to quantum computing. It is the first device produced by Microsoft intended for use in quantum computing. It is an indium arsenide-aluminium hybrid device that admits superconductivity at low temperatures. Microsoft claims that it shows some signals of hosting boundary Majorana zero modes. The device can fit eight qubits. Majorana zero modes, if confirmed, could have potential application to making topological qubits, and eventually a large-scale topological quantum computer. In its February 2025 announcement, Microsoft claimed that the Majorana 1 represents progress in its long-running project to create a quantum computer based on topological qubits. The announcement has generated both excitement and skepticism within the scientific community, in the absence of definitive public evidence that the Majorana 1 device exhibits Majorana zero modes. Background Quantum computing research has historically faced challenges in achieving qubit stability and scalability. Traditional qubits, such as those based on superconducting circuits or trapped ions, are highly susceptible to noise and decoherence, which can introduce errors in computations. To overcome these limitations, researchers have been exploring various approaches to building more robust and fault-tolerant quantum computers. Topological qubits, first theorized in 1997 by Alexei Kitaev and Michael Freedman, offer a promising solution by encoding quantum information in a way that is inherently protected from environmental disturbances. This protection stems from the topological properties of the system, which are resistant to local perturbations. Microsoft's approach, based on Majorana fermions in semiconductor-superconductor heterostructures, is one of several efforts to realize topological quantum computing. Controversy Microsoft's quantum hardware has been the subject of controversy since its high-profile retracted article. Document 1::: Plants may reproduce sexually or asexually. Sexual reproduction produces offspring by the fusion of gametes, resulting in offspring genetically different from either parent. Vegetative reproduction produces new individuals without the fusion of gametes, resulting in clonal plants that are genetically identical to the parent plant and each other, unless mutations occur. In asexual reproduction, only one parent is involved. Asexual reproduction Asexual reproduction does not involve the production and fusion of male and female gametes. Asexual reproduction may occur through budding, fragmentation, spore formation, regeneration and vegetative propagation. Asexual reproduction is a type of reproduction where the offspring comes from one parent only, thus inheriting the characteristics of the parent. Asexual reproduction in plants occurs in two fundamental forms, vegetative reproduction and agamospermy. Vegetative reproduction involves a vegetative piece of the original plant producing new individuals by budding, tillering, etc. and is distinguished from apomixis, which is a replacement of sexual reproduction, and in some cases involves seeds. Apomixis occurs in many plant species such as dandelions (Taraxacum species) and also in some non-plant organisms. For apomixis and similar processes in non-plant organisms, see parthenogenesis.
Natural vegetative reproduction is a process mostly found in perennial plants, and typically involves structural modifications of the stem or roots and in a few species leaves. Most plant species that employ vegetative reproduction do so as a means to perennialize the plants, allowing them to survive from one season to the next and often facilitating their expansion in size. A plant that persists in a location through vegetative reproduction of individuals gives rise to a clonal colony. A single ramet, or apparent individual, of a clonal colony is genetically identical to all others in the same colony. The distance that a plant can move during vegetative reproduction is limited, though some plants can produce ramets from branching rhizomes or stolons that cover a wide area, often in only a few growing seasons. In a sense, this process is not one of reproduction but one of survival and expansion of biomass of the individual. When an individual organism increases in size via cell multiplication and remains intact, the process is called vegetative growth. However, in vegetative reproduction, the new plants that result are new individuals in almost every respect except genetic. A major disadvantage of vegetative reproduction is the transmission of pathogens from parent to offspring. It is uncommon for pathogens to be transmitted from the plant to its seeds (in sexual reproduction or in apomixis), though there are occasions when it occurs. Seeds generated by apomixis are a means of asexual reproduction, involving the formation and dispersal of seeds that do not originate from the fertilization of the embryos. Hawkweeds (Hieracium), dandelions (Taraxacum), some species of Citrus and Kentucky blue grass (Poa pratensis) all use this form of asexual reproduction. Pseudogamy occurs in some plants that have apomictic seeds, where pollination is often needed to initiate embryo growth, though the pollen contributes no genetic material to the developing offspring. Other forms of apomixis occur in plants also, including the generation of a plantlet in replacement of a seed or the generation of bulbils instead of flowers, where new cloned individuals are produced. Structures A rhizome is a modified underground stem serving as an organ of vegetative reproduction; the growing tips of the rhizome can separate as new plants, e.g., polypody, iris, couch grass and nettles. Prostrate aerial stems, called runners or stolons, are important vegetative reproduction organs in some species, such as the strawberry, numerous grasses, and some ferns. Adventitious buds form on roots near the ground surface, on damaged stems (as on the stumps of cut trees), or on old roots. These develop into above-ground stems and leaves. A form of budding called suckering is the reproduction or regeneration of a plant by shoots that arise from an existing root system. Species that characteristically produce suckers include elm (Ulmus) and many members of the rose family such as Rosa, Kerria and Rubus. Bulbous plants such as onion (Allium cepa), hyacinths, narcissi and tulips reproduce vegetatively by dividing their underground bulbs into more bulbs. Other plants like potatoes (Solanum tuberosum) and dahlias reproduce vegetatively from underground tubers. Gladioli and crocuses reproduce vegetatively in a similar way with corms. Gemmae are single cells or masses of cells that detach from plants to form new clonal individuals. These are common in Liverworts and mosses and in the gametophyte generation of some filmy fern. 
They are also present in some Club mosses such as Huperzia lucidula . They are also found in some higher plants such as species of Drosera. Usage The most common form of plant reproduction used by people is seeds, but a number of asexual methods are used which are usually enhancements of natural processes, including: cutting, grafting, budding, layering, division, sectioning of rhizomes, roots, tubers, bulbs, stolons, tillers, etc., and artificial propagation by laboratory tissue cloning. Asexual methods are most often used to propagate cultivars with individual desirable characteristics that do not come true from seed. Fruit tree propagation is frequently performed by budding or grafting desirable cultivars (clones), onto rootstocks that are also clones, propagated by stooling. In horticulture, a cutting is a branch that has been cut off from a mother plant below an internode and then rooted, often with the help of a rooting liquid or powder containing hormones. When a full root has formed and leaves begin to sprout anew, the clone is a self-sufficient plant, genetically identical. Examples include cuttings from the stems of blackberries (Rubus occidentalis), African violets (Saintpaulia), verbenas (Verbena) to produce new plants. A related use of cuttings is grafting, where a stem or bud is joined onto a different stem. Nurseries offer for sale trees with grafted stems that can produce four or more varieties of related fruits, including apples. The most common usage of grafting is the propagation of cultivars onto already rooted plants, sometimes the rootstock is used to dwarf the plants or protect them from root damaging pathogens. Since vegetatively propagated plants are clones, they are important tools in plant research. When a clone is grown in various conditions, differences in growth can be ascribed to environmental effects instead of genetic differences. Sexual reproduction Sexual reproduction involves two fundamental processes: meiosis, which rearranges the genes and reduces the number of chromosomes, and fertilization, which restores the chromosome to a complete diploid number. In between these two processes, different types of plants and algae vary, but many of them, including all land plants, undergo alternation of generations, with two different multicellular structures (phases), a gametophyte and a sporophyte. The evolutionary origin and adaptive significance of sexual reproduction are discussed in the pages Evolution of sexual reproduction and Origin and function of meiosis. The gametophyte is the multicellular structure (plant) that is haploid, containing a single set of chromosomes in each cell. The gametophyte produces male or female gametes (or both), by a process of cell division, called mitosis. In vascular plants with separate gametophytes, female gametophytes are known as mega gametophytes (mega=large, they produce the large egg cells) and the male gametophytes are called micro gametophytes (micro=small, they produce the small sperm cells). The fusion of male and female gametes (fertilization) produces a diploid zygote, which develops by mitotic cell divisions into a multicellular sporophyte. The mature sporophyte produces spores by meiosis, sometimes referred to as reduction division because the chromosome pairs are separated once again to form single sets. In mosses and liverworts, the gametophyte is relatively large, and the sporophyte is a much smaller structure that is never separated from the gametophyte. 
In ferns, gymnosperms, and flowering plants (angiosperms), the gametophytes are relatively small and the sporophyte is much larger. In gymnosperms and flowering plants the megagametophyte is contained within the ovule (that may develop into a seed) and the microgametophyte is contained within a pollen grain. History of sexual reproduction of plants Unlike animals, plants are immobile, and cannot seek out sexual partners for reproduction. In the evolution of early plants, abiotic means, including water and, much later, wind, transported sperm for reproduction. The first plants were aquatic, as described in the page Evolutionary history of plants, and released sperm freely into the water to be carried with the currents. Primitive land plants such as liverworts and mosses had motile sperm that swam in a thin film of water or were splashed in water droplets from the male reproductive organs onto the female organs. As taller and more complex plants evolved, modifications in the alternation of generations evolved. In the Paleozoic era progymnosperms reproduced by using spores dispersed on the wind. The seed plants including seed ferns, conifers and cordaites, which were all gymnosperms, evolved about 350 million years ago. They had pollen grains that contained the male gametes for protection of the sperm during the process of transfer from the male to female parts. It is believed that insects fed on the pollen, and plants thus evolved to use insects to actively carry pollen from one plant to the next. Seed-producing plants, which include the angiosperms and the gymnosperms, have a heteromorphic alternation of generations with large sporophytes containing much-reduced gametophytes. Angiosperms have distinctive reproductive organs called flowers, with carpels, and the female gametophyte is greatly reduced to a female embryo sac, with as few as eight cells. Each pollen grain contains a greatly reduced male gametophyte consisting of three or four cells. The sperm of seed plants are non-motile, except for two older groups of plants, the Cycadophyta and the Ginkgophyta, which have flagella. Flowering plants Flowering plants, the dominant plant group, reproduce both by sexual and asexual means. Their distinguishing feature is that their reproductive organs are contained in flowers. Sexual reproduction in flowering plants involves the production of separate male and female gametophytes that produce gametes. The anther produces pollen grains that contain male gametophytes. The pollen grains attach to the stigma on top of a carpel, in which the female gametophytes (inside ovules) are located. Plants may either self-pollinate or cross-pollinate. The transfer of pollen (the male gametophytes) to the female stigmas is called pollination. After pollination occurs, the pollen grain germinates to form a pollen tube that grows through the carpel's style and transports male nuclei to the ovule to fertilize the egg cell and central cell within the female gametophyte in a process termed double fertilization. The resulting zygote develops into an embryo, while the triploid endosperm (one sperm cell plus a binucleate female cell) and female tissues of the ovule give rise to the surrounding tissues in the developing seed. The fertilized ovules develop into seeds within a fruit formed from the ovary. When the seeds are ripe they may be dispersed together with the fruit or freed from it by various means to germinate and grow into the next generation.
Pollination Plants that use insects or other animals to move pollen from one flower to the next have developed greatly modified flower parts to attract pollinators and to facilitate the movement of pollen from one flower to the insect and from the insect to the next flower. Flowers of wind-pollinated plants tend to lack petals and/or sepals; typically large amounts of pollen are produced and pollination often occurs early in the growing season before leaves can interfere with the dispersal of the pollen. Many trees and all grasses and sedges are wind-pollinated. Plants have a number of different means to attract pollinators including color, scent, heat, nectar glands, edible pollen and flower shape. Along with modifications involving the above structures, two other conditions play a very important role in the sexual reproduction of flowering plants: the first is the timing of flowering and the other is the size or number of flowers produced. Often plant species have a few large, very showy flowers while others produce many small flowers; often flowers are collected together into large inflorescences to maximize their visual effect, becoming more noticeable to passing pollinators. Flowers are attraction strategies and sexual expressions are functional strategies used to produce the next generation of plants, with pollinators and plants having co-evolved, often to some extraordinary degrees, very often rendering mutual benefit. The largest family of flowering plants is the orchids (Orchidaceae), estimated by some specialists to include up to 35,000 species, which often have highly specialized flowers that attract particular insects for pollination. The stamens are modified to produce pollen in clusters called pollinia, which become attached to insects that crawl into the flower. The flower shapes may force insects to pass by the pollen, which is "glued" to the insect. Some orchids are even more highly specialized, with flower shapes that mimic the shape of insects to attract them to attempt to 'mate' with the flowers, a few even have scents that mimic insect pheromones. Another large group of flowering plants is the Asteraceae or sunflower family with close to 22,000 species, which also have highly modified inflorescences composed of many individual flowers called florets. Heads with florets of one sex, when the flowers are pistillate or functionally staminate, or made up of all bisexual florets, are called homogamous and can include discoid and liguliflorous type heads. Some radiate heads may be homogamous too. Plants with heads that have florets of two or more sexual forms are called heterogamous and include radiate and disciform head forms. Ferns Ferns typically produce large diploid sporophytes with stems, roots, and leaves. On fertile leaves sporangia are produced, grouped together in sori and often protected by an indusium. If the spores are deposited onto a suitable moist substrate they germinate to produce short, thin, free-living gametophytes called prothalli that are typically heart-shaped, small and green in color. The gametophytes produce both motile sperm in the antheridia and egg cells in separate archegonia. After rains or when dew deposits a film of water, the motile sperm are splashed away from the antheridia, which are normally produced on the top side of the thallus, and swim in the film of water to the archegonia where they fertilize the egg.
To promote out crossing or cross-fertilization the sperm is released before the eggs are receptive of the sperm, making it more likely that the sperm will fertilize the eggs of the different thallus. A zygote is formed after fertilization, which grows into a new sporophytic plant. The condition of having separate sporophyte and gametophyte plants is called alternation of generations. Other plants with similar reproductive strategies include Psilotum, Lycopodium, Selaginella and Equisetum. Bryophytes The bryophytes, which include liverworts, hornworts and mosses, can reproduce both sexually and vegetatively. The life cycles of these plants start with haploid spores that grow into the dominant form, which is a multicellular haploid gametophyte, with thalloid or leaf-like structures that photosynthesize. The gametophyte is the most commonly known phase of the plant. Bryophytes are typically small plants that grow in moist locations and like ferns, have motile sperm which swim to the ovule using flagella and therefore need water to facilitate sexual reproduction. Bryophytes show considerable variation in their reproductive structures, and a basic outline is as follows: Haploid gametes are produced in antheridia and archegonia by mitosis. The sperm released from the antheridia respond to chemicals released by ripe archegonia and swim to them in a film of water and fertilize the egg cells, thus producing zygotes that are diploid. The zygote divides repeatedly by mitotic division and grows into a diploid sporophyte. The resulting multicellular diploid sporophyte produces spore capsules called sporangia. The spores are produced by meiosis, and when ripe, the capsules burst open to release the spores. In some species each gametophyte is one sex while other species may be monoicous, producing both antheridia and archegonia on the same gametophyte which is thus hermaphrodite. Algae Sexual reproduction in the multicellular facultatively sexual green alga Volvox carteri is induced by oxidative stress. A two-fold increase in cellular reactive oxygen species (associated with oxidative stress) activates the V. carteri genes needed for sexual reproduction. Exposure to antioxidants inhibits the induction of sex in V. carteri. It was proposed on the basis of these observations that sexual reproduction emerged in V. carteri evolution as an adaptive response to oxidative stress and the DNA damage induced by reactive oxygen species. Oxidative stress induced DNA damage may be repaired during the meiotic event associated with germination of the zygospore and the start of a new generation. Dispersal and offspring care One of the outcomes of plant reproduction is the generation of seeds, spores, and fruits that allow plants to move to new locations or new habitats. Plants do not have nervous systems or any will for their actions. Even so, scientists are able to observe mechanisms that help their offspring thrive as they grow. All organisms have mechanisms to increase survival in offspring. Offspring care is observed in the Mammillaria hernandezii, a small cactus found in Mexico. A cactus is a type of succulent, meaning it retains water when it is available for future droughts. M. hernandezii also stores a portion of its seeds in its stem, and releases the rest to grow. This can be advantageous for many reasons. By delaying the release of some of its seeds, the cactus can protect these from potential threats from insects, herbivores, or mold caused by micro-organisms. 
A study found that the presence of adequate water in the environment causes M. Hernandezii to release more seeds to allow for germination. The plant was able to perceive a water potential gradient in the surroundings, and act by giving its seeds a better chance in this preferable environment. This evolutionary strategy gives a better potential outcome for seed germination. External links Simple Video Tutorial on Reproduction in Plant Document 2::: Foraminoplasty is a type of endoscopic surgery used to operate on the spine. It is considered a minimally invasive surgery technique and its endoscopic laser is legally regulated. Although most patients have benefited from foraminoplasty, the National Institute for Health and Care Excellence does not fully support it due to it not completing its randomised controlled clinical trial. External links http://ijssurgery.com/10.14444/1026 https://www.nice.org.uk/guidance/ipg31/informationforpublic Document 3::: A voltage doubler is an electronic circuit which charges capacitors from the input voltage and switches these charges in such a way that, in the ideal case, exactly twice the voltage is produced at the output as at its input. The simplest of these circuits is a form of rectifier which take an AC voltage as input and outputs a doubled DC voltage. The switching elements are simple diodes and they are driven to switch state merely by the alternating voltage of the input. DC-to-DC voltage doublers cannot switch in this way and require a driving circuit to control the switching. They frequently also require a switching element that can be controlled directly, such as a transistor, rather than relying on the voltage across the switch as in the simple AC-to-DC case. Voltage doublers are a variety of voltage multiplier circuits. Many, but not all, voltage doubler circuits can be viewed as a single stage of a higher order multiplier: cascading identical stages together achieves a greater voltage multiplication. Voltage doubling rectifiers Villard circuit The Villard circuit, conceived by Paul Ulrich Villard, consists simply of a capacitor and a diode. While it has the great benefit of simplicity, its output has very poor ripple characteristics. Essentially, the circuit is a diode clamp circuit. The capacitor is charged on the negative half cycles to the peak AC voltage (Vpk). The output is the superposition of the input AC waveform and the steady DC of the capacitor. The effect of the circuit is to shift the DC value of the waveform. The negative peaks of the AC waveform are "clamped" to 0 V (actually −VF, the small forward bias voltage of the diode) by the diode, therefore the positive peaks of the output waveform are 2Vpk. The peak-to-peak ripple is an enormous 2Vpk and cannot be smoothed unless the circuit is effectively turned into one of the more sophisticated forms. This is the circuit (with diode reversed) used to supply the negative high voltage for the The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main disadvantage of vegetative reproduction in plants? A. It requires two parents for offspring. B. It limits genetic diversity among offspring. C. It is not effective for perennial plants. D. It does not allow for the use of seeds. Answer:
B. It limits genetic diversity among offspring.
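The Villard-circuit passage in the entry above states that the clamp shifts the input waveform up by the peak voltage, so the output peaks at about 2Vpk with 2Vpk of peak-to-peak ripple. Below is a minimal numerical sketch of that ideal behaviour in Python; the 10 V amplitude, the 50 Hz frequency, and the neglect of the diode forward drop and load current are illustrative assumptions, not values from the text.

```python
import numpy as np

# Ideal Villard (diode-clamp) stage: the capacitor charges to the peak input
# voltage on the negative half-cycle, so the output is simply the input
# shifted up by Vpk.  Diode drop and load current are ignored (assumption).
V_PK = 10.0                          # assumed peak input amplitude, volts
t = np.linspace(0.0, 0.04, 2001)     # two cycles at an assumed 50 Hz
v_in = V_PK * np.sin(2 * np.pi * 50.0 * t)
v_out = v_in + V_PK                  # negative peaks clamped to ~0 V

print(f"min(v_out) = {v_out.min():6.2f} V  (clamped near 0 V)")
print(f"max(v_out) = {v_out.max():6.2f} V  (about 2*Vpk = {2 * V_PK:.0f} V)")
print(f"p-p ripple = {np.ptp(v_out):6.2f} V  (equal to 2*Vpk)")
```

The large ripple shown here is why, as the excerpt notes, the bare clamp output cannot be smoothed without extending the circuit into one of the more sophisticated doubler forms.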
Relavent Documents: Document 0::: In organometallic chemistry, agostic interaction refers to the intramolecular interaction of a coordinatively-unsaturated transition metal with an appropriately situated C−H bond on one of its ligands. The interaction is the result of two electrons involved in the C−H bond interaction with an empty d-orbital of the transition metal, resulting in a three-center two-electron bond. It is a special case of a C–H sigma complex. Historically, agostic complexes were the first examples of C–H sigma complexes to be observed spectroscopically and crystallographically, due to intramolecular interactions being particularly favorable and more often leading to robust complexes. Many catalytic transformations involving oxidative addition and reductive elimination are proposed to proceed via intermediates featuring agostic interactions. Agostic interactions are observed throughout organometallic chemistry in alkyl, alkylidene, and polyenyl ligands. History The term agostic, derived from the Ancient Greek word for "to hold close to oneself", was coined by Maurice Brookhart and Malcolm Green, on the suggestion of the classicist Jasper Griffin, to describe this and many other interactions between a transition metal and a C−H bond. Often such agostic interactions involve alkyl or aryl groups that are held close to the metal center through an additional σ-bond. Short interactions between hydrocarbon substituents and coordinatively unsaturated metal complexes have been noted since the 1960s. For example, in tris(triphenylphosphine) ruthenium dichloride, a short interaction is observed between the ruthenium(II) center and a hydrogen atom on the ortho position of one of the nine phenyl rings. Complexes of borohydride are described as using the three-center two-electron bonding model. The nature of the interaction was foreshadowed in main group chemistry in the structural chemistry of trimethylaluminium. Characteristics of agostic bonds Agostic interactions are best demonstrated Document 1::: Street reclaiming is the process of converting, or otherwise returning streets to a stronger focus on non-car use — such as walking, cycling and active street life. It is advocated by many urban planners and urban economists, of widely varying political points of view. Its primary benefits are thought to be: Decreased automobile traffic with less noise pollution, fewer automobile accidents, reduced smog and air pollution Greater safety and access for pedestrians and cyclists Less frequent surface maintenance than car-driven roads Reduced summer temperatures due to less asphalt and more green spaces Increased pedestrian traffic which also increases social and commercial opportunities Increased gardening space for urban residents Better support for co-housing and infirm residents, e.g. suburban eco-villages built around former streets Campaigns An early example of street reclamation was the Stockholm carfree day in 1969. Some consider the best advantages to be gained by redesigning streets, for example as shared space, while others, such as campaigns like "Reclaim the Streets", a widespread "dis-organization", run a variety of events to physically reclaim the streets for political and artistic actions, often called street parties. David Engwicht is also a strong proponent of the concept that street life, rather than physical redesign, is the primary tool of street reclamation. See also References External links RTS — Reclaim the Streets Park(ing) – 22 September What if everyone had a car? 
by the BBC World News Document 2::: In game theory an uncorrelated asymmetry is an arbitrary asymmetry in a game which is otherwise symmetrical. The name 'uncorrelated asymmetry' is due to John Maynard Smith who called payoff relevant asymmetries in games with similar roles for each player 'correlated asymmetries' (note that any game with correlated asymmetries must also have uncorrelated asymmetries). The explanation of an uncorrelated asymmetry usually makes reference to "informational asymmetry". Which may confuse some readers, since, games which may have uncorrelated asymmetries are still games of complete information . What differs between the same game with and without an uncorrelated asymmetry is whether the players know which role they have been assigned. If players in a symmetric game know whether they are Player 1, Player 2, etc. (or row vs. column player in a bimatrix game) then an uncorrelated asymmetry exists. If the players do not know which player they are then no uncorrelated asymmetry exists. The information asymmetry is that one player believes he is player 1 and the other believes he is player 2. Therefore, "informational asymmetry" does not refer to knowledge in the sense of an information set in an extensive form game. The concept of uncorrelated asymmetries is important in determining which Nash equilibria are evolutionarily stable strategies in discoordination games such as the game of chicken. In these games the mixing Nash is the ESS if there is no uncorrelated asymmetry, and the pure conditional Nash equilibria are ESSes when there is an uncorrelated asymmetry. The usual applied example of an uncorrelated asymmetry is territory ownership in the hawk-dove game. Even if the two players ("owner" and "intruder") have the same payoffs (i.e., the game is payoff symmetric), the territory owner will play Hawk, and the intruder Dove, in what is known as the 'Bourgeois strategy' (the reverse is also an ESS known as the 'anti-bourgeois strategy', but makes little biologi Document 3::: HD 28700 (HR 1433) is a solitary star in the southern constellation Caelum. It has an apparent magnitude of 6.12, making it visible to the naked eye under ideal conditions. Parallax measurements place the object at a distance of 384 light years and is currently receding with a heliocentric radial velocity of . HD 28700 has a stellar classification of K1 III, indicating that it is a red giant. It has three times the Sun's mass and has expanded to ten times its radius. It radiates at 56 times the Sun's luminosity from its swollen photosphere at an effective temperature of , giving it an orange hue. HD 28700 has a projected rotational velocity too low to be measured accurately due to it being less than . HD 28700 has 120% the abundance of iron relative to the Sun. At a modeled age of 377 million years, HD 28700 is on the red giant branch fusing hydrogen in a shell around an inert helium core. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is one of the primary advantages of using the Schmidt–Kalman filter over increasing the dimensionality of the state space? A. Improved accuracy in bias estimation B. Reduction in computational complexity C. Enhanced ability to observe residual biases D. Simplicity in linear state transition models Answer:
B. Reduction in computational complexity
Relavent Documents: Document 0::: NGC 2197 is an centrally condensed open cluster within the Large Magellanic Cloud, in the Dorado constellation. It is estimated to be 400 million years old. Document 1::: The Temple of Friendship () is a small, round building in Sanssouci Park, Potsdam, in Germany. It was built by King Frederick II of Prussia in memory of his sister, Princess Wilhelmine of Prussia, who died in 1758. The building, in the form of a classical temple, was built south of the park's main boulevard between 1768 and 1770 by architect Carl von Gontard. It complements the Temple of Antiquities, which lies due north of the boulevard on an axis with the Temple of Friendship. The First Pavilion in Neuruppin A notable precursor of the Temple of Friendship was the smaller Temple of Apollo constructed in 1735 at Neuruppin, where Crown Prince Frederick (later Frederick II) resided from 1732 to 1735 as the commander of a regiment stationed there. The first building designed by Georg Wenzeslaus von Knobelsdorff, the Temple of Apollo was situated in the Amalthea Garden, a flower and vegetable garden created by Frederick. The Temple of Apollo was an open, round temple, although in 1791 it was enclosed by brick walls between its columns. In August 1735, Frederick wrote to his sister Wilhelmine, who at that time was already married and living in Bayreuth: "The garden house is a temple of eight Doric columns holding up a domed roof. On it stands a statue of Apollo. As soon as it is finished, we shall offer sacrifices in it – naturally to you, dear sister, protectress of the fine arts." The Pavilion in Sanssouci Park To honor the memory of Wilhelmine, Frederick chose, as he had in Neuruppin, the form of an open, round temple with a shallow domed roof supported by eight Corinthian columns. This architectural structure, the monopteros type, has its origins in ancient Greece, where such buildings were erected over cult statues and tombstones. In a shallow alcove at the back wall of the temple is a life-sized statue of Wilhelmine of Bayreuth, holding a book in her hand. The marble figure is from the workshop of the sculptor brothers Johann David and Johann Lorenz Wilh Document 2::: IC 335 is an edge-on lenticular galaxy about 60 million light years (18 million parsecs) away, in the constellation Fornax. It is part of the Fornax Cluster. IC 335 appears very similar to NGC 4452, a lenticular galaxy in Virgo. Both galaxies are edge-on, meaning that their characteristics, like spiral arms, are hidden. Lenticular galaxies like these are thought to be intermediate between spiral galaxies and elliptical galaxies, and like elliptical galaxies, they have very little gas for star formation. IC 335 may have once been a spiral galaxy that ran out of interstellar medium, or it may have collided with a galaxy in the past and thus used up all of its gas (see interacting galaxy). References External links Document 3::: Ortho effect is an organic chemistry phenomenon where the presence of a chemical group at the at ortho position or the 1 and 2 position of a phenyl ring, relative to the carboxylic compound changes the chemical properties of the compound. This is caused by steric effects and bonding interactions along with polar effects caused by the various substituents which are in a given molecule, resulting in changes in its chemical and physical properties. The ortho effect is associated with substituted benzene compounds. 
There are three main ortho effects in substituted benzene compounds: steric hindrance causes benzoic acids with a substituent in the ortho position to become stronger acids; steric inhibition of protonation causes ortho-substituted anilines to become weaker bases than their meta- and para-substituted isomers; and in electrophilic aromatic substitution of disubstituted benzene compounds, steric effects determine the regioselectivity of the incoming electrophile. Ortho substituted benzoic acids When a substituent group is located at the ortho position to the carboxyl group in a substituted benzoic acid compound, the compound becomes more acidic, surpassing the unmodified benzoic acid. Generally ortho-substituted benzoic acids are stronger acids than their meta and para isomers. Mechanism of action When ortho substitution occurs in benzoic acid, steric hindrance causes the carboxyl group to twist out of the plane of the benzene ring. The twisting inhibits the resonance of the carboxyl group with the phenyl ring, leading to increased acidity of the carboxyl group. This increased acidity contrasts with the reduced acidity caused by destabilizing cross-conjugation. The destabilizing cross-conjugation causes decreased acidity of benzoic acid compared to formic acid. pKa values The table given below shows pKa values of various monosubstituted benzoic acids. Ortho substituted aniline The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the role of the Eco1/Ctf7 gene in the establishment of sister chromatid cohesion during the cell cycle? A. It is required for maintaining cohesion after S phase. B. It is necessary for the initial loading of cohesin on chromosomes. C. It acetylates lysine residues in the Smc3 subunit of cohesin during S phase. D. It interacts with chromatin-associated cohesin only in G1 phase. Answer:
C. It acetylates lysine residues in the Smc3 subunit of cohesin during S phase.
Relavent Documents: Document 0::: In quantum mechanics, the Schrödinger equation describes how a system changes with time. It does this by relating changes in the state of the system to the energy in the system (given by an operator called the Hamiltonian). Therefore, once the Hamiltonian is known, the time dynamics are in principle known. All that remains is to plug the Hamiltonian into the Schrödinger equation and solve for the system state as a function of time. Often, however, the Schrödinger equation is difficult to solve (even with a computer). Therefore, physicists have developed mathematical techniques to simplify these problems and clarify what is happening physically. One such technique is to apply a unitary transformation to the Hamiltonian. Doing so can result in a simplified version of the Schrödinger equation which nonetheless has the same solution as the original. Transformation A unitary transformation (or frame change) can be expressed in terms of a time-dependent Hamiltonian and unitary operator . Under this change, the Hamiltonian transforms as: . The Schrödinger equation applies to the new Hamiltonian. Solutions to the untransformed and transformed equations are also related by . Specifically, if the wave function satisfies the original equation, then will satisfy the new equation. Derivation Recall that by the definition of a unitary matrix, . Beginning with the Schrödinger equation, , we can therefore insert the identity at will. In particular, inserting it after and also premultiplying both sides by , we get . Next, note that by the product rule, . Inserting another and rearranging, we get . Finally, combining (1) and (2) above results in the desired transformation: . If we adopt the notation to describe the transformed wave function, the equations can be written in a clearer form. For instance, can be rewritten as , which can be rewritten in the form of the original Schrödinger equation, The original wave function can be recovered as . Relation to Document 1::: Ogataea is a genus of ascomycetous yeasts in the family Saccharomycetaceae. It was separated from the former genus Hansenula via an examination of their 18S and 26S rRNA partial base sequencings by Yamada et al. 1994. The genus name of Ogataea is in honour of Koichi Ogata (x - 1977), who was a Japanese microbiologist from the Department of Agricultural Chemistry, Faculty of Agriculture, at Kyoto University. It was stated in the journal; "The genus is named in honor of the late Professor Dr. Koichi Ogata, Department of Agricultural Chemistry, Faculty of Agriculture, Kyoto University, Kyoto, Japan, in recognition of his studies on the oxidation and assimilation of methanol (C1 compound) in methanol-utilizing yeasts." The genus was circumscribed by Yuzo Yamada, Kojiro Maeda and Kozaburo Mikata in Biosc., Biotechn. Biochem. vol.58 (Issue 7) on page 1253 in 1994. Diagnosis Like other yeasts, also the species within the genus Ogataea are single-celled or build pseudohyphae of only a few elongated cells; true hyphae are not formed. They are able to reproduce sexually or asexually. The latter case happens with a cell division by multilateral budding on a narrow base with spherical to ellipsoidal budded cells. In the sexual reproduction the asci are deliquescent and may be unconjugated or show conjugation between a cell and its bud or between independent cells. Asci produce one to four, sometimes more, ascospores which are hat-shaped, allantoid or spherical with a ledge. 
The species are homothallic or infrequently heterothallic. Ogataea cells are able to ferment glucose or other sugars and some species assimilate nitrate. All known species are able to utilize methanol as carbon source. The predominant ubiquinone is coenzyme Q-7 and the diazonium blue B test is negative. Some species are used and cultured for microbiological and genetic research e.g. Ogataea polymorpha, Ogataea minuta or Ogataea methanolica. Ogataea minuta (Wickerham) Y. Yamada, K. Maeda & Mikata is th Document 2::: An explosimeter is a gas detector which is used to measure the amount of combustible gases present in a sample. When a percentage of the lower explosive limit (LEL) of an atmosphere is exceeded, an alarm signal on the instrument is activated. The device, also called a combustible gas detector, operates on the principle of resistance proportional to heat—a wire is heated, and a sample of the gas is introduced to the hot wire. Combustible gases burn in the presence of the hot wire, thus increasing the resistance and disturbing a Wheatstone bridge, which gives the reading. A flashback arrestor is installed in the device to avoid the explosimeter igniting the sample external to the device. Note, that the detection readings of an explosimeter are only accurate if the gas being sampled has the same characteristics and response as the calibration gas. Most explosimeters are calibrated to methane, hydrogen, and carbon monoxide. Explosimeters are important because workspaces may contain a flammable or explosive atmosphere due to the accumulation of flammable gases or vapors. Sparks from ordinary battery-powered portable equipment, including cameras, cell phones, laptop computers, or anything else located on the job site may serve as an ignition source. The explosimeter warns the user of dangerous atmospheric conditions before a possible explosion can occur. Explosimetry Explosimetry simply means the measurement of flammable or explosive conditions, normally in the atmosphere around us. In modern times, jobsites both above ground and below ground can have a wide range of dangerous flammable materials present. The danger of these flammable materials are mitigated by detection systems. Explosimetry sensors are integrated into stationary and portable devices to detect the concentration of the calibrated gas in air. The explosimeter is an example of a detection system with an explosimetry sensor in it. For explosimeters to work properly they must be calibrated for a parti Document 3::: The Chicago Lake Tunnel was the first of several tunnels built from the city of Chicago's shore on Lake Michigan two miles out into the lake to access unpolluted fresh water far from the city's sewage. Waterborne disease in early Chicago In the early decades of its existence, the growing city was only about three feet above the surface of Lake Michigan, and the areas of early European settlement were flat and sandy with a high water table. European settlers in Chicago only needed to dig 6 to 12 feet to create a private well. The same settlers, however, would also dig privy vaults for human waste nearby. Because the sandy soil topped a layer of hard clay, human waste would sink from the outhouse, meet the impervious clay, and travel laterally into the freshwater supply. As a result, Chicago suffered numerous widespread outbreaks of waterborne diseases. 
The Chicago Board of Health was organized in 1835, in response to the threat of a cholera epidemic, and later outbreaks of cholera in 1852 and 1854 killed thousands. Chesbrough's Water Plan In 1855 the city’s newly formed Board of Sewerage Commissioners hired the 42-year-old Ellis S. Chesbrough (1813–1886), the first city engineer of Boston to study the problem. In 1863, Chesbrough completed a design for a water and sewer system for the city that included a tunnel five feet wide and lined with brick that would extend through the clay bed of Lake Michigan to a distance of 10,567 feet. Work started in 1864 and the tunnel was opened in 1867. Construction Gravity forces water into the tunnel through a structure called a crib. The crib for the Lake Tunnel was forty feet high and had five sides. Each side was fifty-eight feet long. The crib had outer, middle, inner walls bolted together, and each was sealed with caulk and tar in the same way ships of the day were made. The crib comprised fifteen separate water-tight compartments with an opening at the bottom twenty-five feet in diameter referred to as "the well," which d The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What role does the Foundation for Biomedical Research (FBR) play in relation to animal research? A. It advocates against the use of animals in research. B. It informs various groups about the necessity of animal research in medicine. C. It solely focuses on legislative activism related to animal rights. D. It provides funding exclusively for research on nonhuman primates. Answer:
B. It informs various groups about the necessity of animal research in medicine.
Relavent Documents: Document 0::: This is a list of simultaneous localization and mapping (SLAM) methods. The KITTI Vision Benchmark Suite website has a more comprehensive list of Visual SLAM methods. List of methods EKF SLAM FastSLAM 1.0 FastSLAM 2.0 L-SLAM (Matlab code) QSLAM GraphSLAM Occupancy Grid SLAM DP-SLAM Parallel Tracking and Mapping (PTAM) LSD-SLAM (available as open-source) S-PTAM (available as open-source) ORB-SLAM (available as open-source) ORB-SLAM2 (available as open-source) ORB-SLAM3 (available as open-source) OrthoSLAM MonoSLAM GroundSLAM CoSLAM SeqSlam iSAM (Incremental Smoothing and Mapping) CT-SLAM (Continuous Time) - referred to as Zebedee (SLAM) RGB-D SLAM BranoSLAM Kimera (open-source) Wildcat-SLAM References Document 1::: The Debus–Radziszewski imidazole synthesis is a multi-component reaction used for the synthesis of imidazoles from a 1,2-dicarbonyl, an aldehyde, and ammonia or a primary amine. The method is used commercially to produce several imidazoles. The process is an example of a multicomponent reaction. The reaction can be viewed as occurring in two stages. In the first stage, the dicarbonyl and two ammonia molecules condense with the two carbonyl groups to give a diimine: In the second stage, this diimine condenses with the aldehyde: However, the actual reaction mechanism is not certain. This reaction is named after Heinrich Debus and . A modification of this general method, where one equivalent of ammonia is replaced by an amine, affords N-substituted imidazoles in good yields. This reaction has been applied to the synthesis of a range of 1,3-dialkylimidazolium ionic liquids by using various readily available alkylamines. References Document 2::: Heteroduplex analysis (HDA) is a method in biochemistry used to detect point mutations in DNA (Deoxyribonucleic acid) since 1992. Heteroduplexes are dsDNA molecules that have one or more mismatched pairs, on the other hand homoduplexes are dsDNA which are perfectly paired. This method of analysis depend up on the fact that heteroduplexes shows reduced mobility relative to the homoduplex DNA. heteroduplexes are formed between different DNA alleles. In a mixture of wild-type and mutant amplified DNA, heteroduplexes are formed in mutant alleles and homoduplexes are formed in wild-type alleles. There are two types of heteroduplexes based on type and extent of mutation in the DNA. Small deletions or insertion create bulge-type heteroduplexes which is stable and is verified by electron microscope. Single base substitutions creates more unstable heteroduplexes called bubble-type heteroduplexes, because of low stability it is difficult to visualize in electron microscopy. HDA is widely used for rapid screening of mutation of the 3 bp p.F508del deletion in the CFTR gene. Document 3::: Repeated implantation failure (RIF) is the repeated failure of the embryo to implant onto the side of the uterus wall following IVF treatment. Implantation happens at 6–7 days after conception and involves the embedding of the growing embryo into the mothers uterus and a connection being formed. A successful implantation can be determined by using an ultrasound to view the sac which the baby grows in, inside the uterus. However, the exact definition of RIF is debated. Recently the most commonly accepted definition is when a woman under 40 has gone through three unsuccessful cycles of IVF, when in each cycle four good quality eggs have been transferred. Repeated implantation failure should not be confused with recurrent IVF failure. 
Recurrent IVF failure is a much broader term and includes all repeated failures to get pregnant from IVF. Repeated implantation failure specifically refers to those failures due to unsuccessful implantation into the uterus wall. An unsuccessful implantation can result from problems with the mother or with the embryo. It is essential that the mother and embryo are able to communicate with each other during all stages of pregnancy, and an absence of this communication can lead to an unsuccessful implantation and a further unsuccessful pregnancy. Contributing maternal factors During implantation, the embryo must cross the epithelial layer of the maternal endometrium before invading and implanting in the stroma layer. Maternal factors, including congenital uterine abnormalities, fibroids, endometrial polyps, intrauterine adhesions, adenomyosis, thrombophilia and endometriosis, can reduce the chances of implantation and result in RIF. Congenital uterine abnormalities Congenital uterine abnormalities are irregularities in the uterus which occur during the mother's foetal development. Hox genes Two Hox genes have been identified to assist in the development and receptivity of the uterus and endometrium, Hoxa10 and Hoxa11. Hoxa10 has been s The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What significant event occurred on September 9, 1942, in the Siskiyou National Forest? A. The establishment of the forest B. A bombing incident by a Japanese airplane C. The first recorded fire in the forest D. The combination of Rogue River and Siskiyou National Forests Answer:
B. A bombing incident by a Japanese airplane
Relavent Documents: Document 0::: The Commodore LCD (sometimes known in short as the CLCD) is an unreleased LCD-equipped laptop made by Commodore International. It was presented at the January 1985 Consumer Electronics Show, but never released. The CLCD was not directly compatible with other Commodore home computers, but its built-in Commodore BASIC 3.6 interpreter could run programs written in the Commodore 128's BASIC 7.0, as long as these programs did not include system-specific POKE commands. Like the Commodore 264 and Radio Shack TRS-80 Model 100 series computers, the CLCD had several built-in ROM-based office application programs. The CLCD featured a 1 MHz Rockwell 65C102 CPU (a CMOS 6502 variant) and 32 KB of RAM (expandable to 64 KB internally). The BASIC interpreter and application programs were built into 96 KB of ROM. References Document 1::: The Bas Saharan Basin () is an artesian aquifer system which covers most of the Algerian and Tunisian Sahara and extends to Libya, enclosing the whole of the Grand Erg Oriental. References Document 2::: WTX (for Workstation Technology Extended) was a motherboard form factor specification introduced by Intel at the IDF in September 1998, for its use at high-end, multiprocessor, multiple-hard-disk servers and workstations. The specification had support from major OEMs (Compaq, Dell, Fujitsu, Gateway, Hewlett-Packard, IBM, Intergraph, NEC, Siemens Nixdorf, and UMAX) and motherboard manufacturers (Acer, Asus, Supermicro and Tyan) and was updated (1.1) in February 1999. , the specification has been discontinued and the URL www.wtx.org no longer hosts a website and has not been owned by Intel since at least 2004. This form factor was geared specifically towards the needs of high-end systems, and included specifications for a WTX power supply unit (PSU) using two WTX-specific 24-pin and 22-pin Molex connectors. The WTX specification was created to standardize a new motherboard and chassis form factor, fix the relative processor location, and allow for high volume airflow through a portion of the chassis where the processors are positioned. This allowed for standard form factor motherboards and chassis to be used to integrate processors with more demanding thermal management requirements. Bigger than ATX, maximum WTX motherboard size was . This was intended to provide more room in order to accommodate higher numbers of integrated components. WTX computer cases were backwards compatible with ATX motherboards (but not vice versa), and sometimes came equipped with ATX power supplies. Document 3::: A launch loop, or Lofstrom loop, is a proposed system for launching objects into orbit using a moving cable-like system situated inside a sheath attached to the Earth at two ends and suspended above the atmosphere in the middle. The design concept was published by Keith Lofstrom and describes an active structure maglev cable transport system that would be around 2,000 km (1,240 mi) long and maintained at an altitude of up to 80 km (50 mi). A launch loop would be held up at this altitude by the momentum of a belt that circulates around the structure. This circulation, in effect, transfers the weight of the structure onto a pair of magnetic bearings, one at each end, which support it. Launch loops are intended to achieve non-rocket spacelaunch of vehicles weighing 5 metric tons by electromagnetically accelerating them so that they are projected into Earth orbit or even beyond. 
This would be achieved by the flat part of the cable which forms an acceleration track above the atmosphere. The system is designed to be suitable for launching humans for space tourism, space exploration and space colonization, and provides a relatively low 3g acceleration. History Launch loops were described by Keith Lofstrom in November 1981 Reader's Forum of the American Astronautical Society News Letter, and in the August 1982 L5 News. In 1982, Paul Birch published a series of papers in Journal of the British Interplanetary Society which described orbital rings and described a form which he called Partial Orbital Ring System (PORS). The launch loop idea was worked on in more detail around 1983–1985 by Lofstrom. It is a fleshed-out version of PORS specifically arranged to form a mag-lev acceleration track suitable for launching humans into space, but whereas the orbital ring used superconducting magnetic levitation, launch loops use electromagnetic suspension (EMS). Description Consider a large cannon on an island that shoots a shell into the high atmosphere. The shell will follow a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary purpose of a memory protection unit (MPU) in computer systems? A. To manage virtual memory for applications B. To provide memory protection and prevent unauthorized access C. To enhance the speed of data processing D. To control power consumption in processors Answer:
B. To provide memory protection and prevent unauthorized access
Relavent Documents: Document 0::: A caruncle is defined as 'a small, fleshy excrescence that is a normal part of an animal's anatomy'. Within this definition, caruncles in birds include wattles (or dewlaps), combs, snoods, and earlobes. The term caruncle is derived from Latin caruncula, the diminutive of carō, "flesh". Taxonomy Caruncles are carnosities, often of bright colors such as red, blue, yellow or white. They can be present on the head, neck, throat, cheeks or around the eyes of some birds. They may be present as combs or crests and other structures near the beak, or, hanging from the throat or neck. Caruncles may be featherless, or, have small scattered feathers. In some species, they may form pendulous structures of erectile tissue, such as the snood of the domestic turkey. Caruncles are sometimes secondary sexual characteristics, having a more intense color or even a different color, developing as the male reaches sexual maturity. Function Caruncles are also ornamental elements used by males to attract females to breeding. Having large caruncles or colorful bright ones indicates high levels of testosterone, that they are well-fed birds able to elude other predators thus showing the good quality of their genes. It has been proposed that these organs are also associated with genes which encode resistance to disease. It is believed that for birds living in tropical regions, the caruncles also play a role in thermoregulation, making the blood cool faster when flowing through them. Turkeys In turkeys, the term usually refers to small, bulbous, fleshy protuberances found on the head, neck and throat, with larger structures particularly at the bottom of the throat. The wattle is a flap of skin hanging under the chin connecting the throat and head and the snood is a highly erectile appendage emanating from the forehead. Both sexes of turkey possess caruncles, although they are more pronounced in the male. Usually they are pale, but when the male becomes excited or during courtship, the Document 1::: Apache Flink is an open-source, unified stream-processing and batch-processing framework developed by the Apache Software Foundation. The core of Apache Flink is a distributed streaming data-flow engine written in Java and Scala. Flink executes arbitrary dataflow programs in a data-parallel and pipelined (hence task parallel) manner. Flink's pipelined runtime system enables the execution of bulk/batch and stream processing programs. Furthermore, Flink's runtime supports the execution of iterative algorithms natively. Flink provides a high-throughput, low-latency streaming engine as well as support for event-time processing and state management. Flink applications are fault-tolerant in the event of machine failure and support exactly-once semantics. Programs can be written in Java, Python, and SQL and are automatically compiled and optimized into dataflow programs that are executed in a cluster or cloud environment. Flink does not provide its own data-storage system, but provides data-source and sink connectors to systems such as Apache Doris, Amazon Kinesis, Apache Kafka, HDFS, Apache Cassandra, and ElasticSearch. Development Apache Flink is developed under the Apache License 2.0 by the Apache Flink Community within the Apache Software Foundation. The project is driven by 119 committers and over 340 contributors. Overview Apache Flink's dataflow programming model provides event-at-a-time processing on both finite and infinite datasets. 
At a basic level, Flink programs consist of streams and transformations. “Conceptually, a stream is a (potentially never-ending) flow of data records, and a transformation is an operation that takes one or more streams as input, and produces one or more output streams as a result.” Apache Flink includes two core APIs: a DataStream API for bounded or unbounded streams of data and a DataSet API for bounded data sets. Flink also offers a Table API, which is a SQL-like expression language for relational stream and batch processing Document 2::: Erich Otto Engel (29 September 1866, in Alt-Malisch Frankfurt – 11 February 1944, in Dachau) was a German entomologist, who specialised in Diptera. He was a graphic artist and administrator of the Diptera collection in Zoologische Staatssammlung München. Selected works Engel, E. O. (1925) Neue paläarktische Asiliden (Dipt.). Konowia. 4, 189-194. Engel, E. O., & Cuthbertson A. (1934) Systematic and biological notes on some Asilidae of Southern Rhodesia with a description of a species new to science. Proceedings of the Rhodesia Scientific Association. 34, Engel, E.O. 1930. Asilidae (Part 24), in E. Lindner (ed.) Die Fliegen der Paläarktischen Region, vol. 4. Schweizerbart'sche, Stuttgart. 491 pp. Engel, E.O. 1938-1954 Empididae. in Lindner, E. (Ed.). Die Fliegen der Paläarktischen Region, vol.4, 28, 1-400. Engel, E. O., & Cuthbertson A. (1939). Systematic and biological notes on some brachycerous Diptera of southern Rhodesia. Journal of the Entomological Society of Southern Africa. 2, 181–185. References Anonym 1936: [Engel, E. O.] Insektenbörse, Stuttgart 53 (37) Horn, W. 1936: [Engel, E. O.] Arb. morph. taxon. Ent. Berlin-Dahlem, Berlin 3 (4) 301* Reiss, F. 1992: Die Sektion Diptera der Zoologischen Staatssammlung München. Spixiana Suppl., München 17 : 72-82, 9 Abb. 72-75, Portrait External links DEI Portrait Document 3::: In mathematics, the Browder–Minty theorem (sometimes called the Minty–Browder theorem) states that a bounded, continuous, coercive and monotone function T from a real, separable reflexive Banach space X into its continuous dual space X∗ is automatically surjective. That is, for each continuous linear functional g ∈ X∗, there exists a solution u ∈ X of the equation T(u) = g. (Note that T itself is not required to be a linear map.) The theorem is named in honor of Felix Browder and George J. Minty, who independently proved it. See also Pseudo-monotone operator; pseudo-monotone operators obey a near-exact analogue of the Browder–Minty theorem. References (Theorem 10.49) The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary purpose of botanical gardens in Bulgaria? A. To showcase only native species B. To preserve global plant diversity C. To serve as private recreational spaces D. To exclusively collect endemic species Answer:
B. To preserve global plant diversity
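The Apache Flink excerpt above describes programs as streams plus transformations: a stream is a (possibly never-ending) flow of records and a transformation turns one or more input streams into output streams. The sketch below illustrates that dataflow idea with plain Python generators only; it deliberately does not use the Flink API, and the record values and the map/filter stages are made-up examples.

```python
from typing import Iterable, Iterator

# A "stream" here is just any iterable of records; each "transformation"
# consumes one stream lazily and yields a new one, so stages can be chained
# much like Flink pipelines chain DataStream operators.
def map_stream(stream: Iterable[int], fn) -> Iterator[int]:
    for record in stream:
        yield fn(record)

def filter_stream(stream: Iterable[int], predicate) -> Iterator[int]:
    for record in stream:
        if predicate(record):
            yield record

source = range(1, 6)  # bounded source: records 1..5
pipeline = filter_stream(map_stream(source, lambda x: x * 10), lambda x: x > 20)

print(list(pipeline))  # [30, 40, 50]
```

Because each stage is lazy, the same chaining works over an unbounded source as well, which mirrors the bounded/unbounded distinction the excerpt draws between Flink's DataSet and DataStream APIs.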
Relavent Documents: Document 0::: Introduced by Martin Hellman and Susan K. Langford in 1994, the differential-linear attack is a mix of both linear cryptanalysis and differential cryptanalysis. The attack utilises a differential characteristic over part of the cipher with a probability of 1 (for a few rounds—this probability would be much lower for the whole cipher). The rounds immediately following the differential characteristic have a linear approximation defined, and we expect that for each chosen plaintext pair, the probability of the linear approximation holding for one chosen plaintext but not the other will be lower for the correct key. Hellman and Langford have shown that this attack can recover 10 key bits of an 8-round DES with only 512 chosen plaintexts and an 80% chance of success. The attack was generalised by Eli Biham et al. to use differential characteristics with probability less than 1. Besides DES, it has been applied to FEAL, IDEA, Serpent, Camellia, and even the stream cipher Phelix. References Document 1::: Sir Richard Spencer (1593 − 1 November 1661) was an English nobleman, gentleman, knight, and politician who sat in the House of Commons from 1621 to 1629 and in 1661. He supported the Royalist cause in the English Civil War. Early life Spencer was the son of Robert Spencer, 1st Baron Spencer of Wormleighton and his wife Margaret, daughter of Sir Francis Willoughby and Elizabeth Lyttelton. He was baptised on 21 October 1593. He was educated at Corpus Christi College, Oxford in 1609 and was awarded BA in 1612. His brothers were Sir Edward Spencer, Sir John Spencer, and William Spencer, 2nd Baron Spencer of Wormleighton. Parliamentary career In 1621, Spencer was selected Member of Parliament for Northampton. He was a student of Gray's Inn in 1624. In 1624 and 1625 he was re-elected MP for Northampton. He became a gentleman of the bedchamber in 1626. He was re-elected MP for Northampton in 1626 and 1628 and sat until 1629 when King Charles decided to rule without parliament for eleven years. Post-parliament actions He was a J.P. for Kent by 1636. Spencer stood for parliament for Kent for the Long Parliament in November 1640, but withdrew before the election took place. He was imprisoned twice by the Long Parliament for promoting the moderate Kentish petition of 1642. He was a commissioner of array for the King in 1642, stood security for loans amounting to £60,000 and helped to raise two regiments of horse, which he commanded at the Battle of Edgehill. He gave up his command in 1643 and was claimed the benefit of the Oxford Articles. He settled £40 a year on the minister of Orpington and was allowed to compound for a mere £300. However he became involved in the canal schemes of William Sandys and his financial condition became precarious. In 1651 he obtained a pass to France for himself and his family and settled in Brussels. He returned to England in about 1653 and was imprisoned and forced to pay off his debts. At the Restoration, Spencer petitioned unsucce Document 2::: The Declaration of Tokyo is a set of international guidelines for physicians concerning torture and other cruel, inhuman or degrading treatment or punishment in relation to detention and imprisonment, which was adopted in October 1975 during the 29th General assembly of the World Medical Association, and later editorially updated by the WMA in France, May 2005 and 2006. 
It declares torture to be "contrary to the laws of humanity", and antithetical to the "higher purpose" of the physician, which is to "alleviate the distress of his or her fellow human being." The policy states that doctors should refuse to participate in, condone, or give permission for torture, degradation, or cruel treatment of prisoners or detainees. According to the policy, a prisoner who refuses to eat should not be fed artificially against their will, provided that they are judged to be rational. References External links WMA Declaration of Tokyo - Guidelines for Physicians Concerning Torture and other Cruel, Inhuman or Degrading Treatment or Punishment in Relation to Detention and Imprisonment Document 3::: A hawk is a tool used to hold a plaster, mortar, or a similar material, so that the user can repeatedly, quickly and easily get some of that material on the tool which then applies it to a surface. A hawk consists of a board about 13 inches square with a perpendicular handle fixed centrally on the reverse. The user holds the hawk horizontally with the non-dominant hand and applies the material on the hawk with a tool held in the dominant hand. Hawks are most often used by plasterers, along with finishing trowels, to apply a smooth finish of plaster to a wall. Brick pointers use hawks to hold mortar while they work. Hawks are also used to hold joint compound for taping and jointing. The name "hawk" probably derives from the way the object rides on the user's arm, like a bird of prey. References de:Glättkelle The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the three main stages of the human action cycle as proposed by Donald A. Norman? A. Goal formation, Execution, Evaluation B. Planning, Execution, Review C. Formation, Execution, Assessment D. Goal setting, Action, Outcome Answer:
A. Goal formation, Execution, Evaluation
Relavent Documents: Document 0::: James Wood (14 December 1760 – 23 April 1839) was a mathematician, and Master of St John's College, Cambridge. In his later years he was Dean of Ely. Life Wood was born in Holcombe, Bury where his father ran an evening school and taught his son the elements of arithmetic and algebra. From Bury Grammar School he proceeded to St John's College, Cambridge in 1778, graduating as senior wrangler in 1782. On graduating he became a fellow of the college and in his long tenure there produced several successful academic textbooks for students of mathematics. Between 1795 and 1799 his The principles of mathematics and natural philosophy, was printed, in four volumes, by J. Burges. Vol.I: 'The elements of algebra', by Wood; Vol.II: 'The principles of fluxions' by Samuel Vince; Vol.III Part I: 'The principles of mechanics" by Wood; and Vol.III Part II: "The principles of hydrostatics" by Samuel Vince; Vol.IV "The principles of astronomy" by Samuel Vince. Three other volumes -"A treatise on plane and spherical trigonometry" and "The elements of the conic sections" by Samuel Vince (1800) and "The elements of optics" by Wood (1801" may have been issued as part of the series. Wood remained for sixty years at St. John's, serving as both President (1802–1815) and Master (1815–1839); on his death in 1839 he was interred in the college chapel and bequeathed his extensive library to the college, comprising almost 4,500 printed books on classics, history, mathematics, theology and travel, dating from the 17th to the 19th centuries. Wood was also ordained as a priest in 1787 and served as Dean of Ely from 1820 until his death. Publications The Elements of Algebra (1795) The Principles of Mechanics (1796) The Elements of Optics (1798) References Notes Other sources W. W. Rouse Ball, A History of the Study of Mathematics at Cambridge University, 1889, repr. Cambridge University Press, 2009, , p. 110 Document 1::: The Dublin Institute of Technology (DIT) School of Electrical and Electronic Engineering (SEEE) was the largest and one of the longest established Schools of Electrical and Electronic Engineering in Ireland. It was located at the DIT Kevin Street Campus in Dublin City, as part of the College of Engineering & Built Environment (CEBE). In 2019, DIT along with the Institute of Technology Blanchardstown (ITB) and the Institute of Technology Tallaght (ITT) became the founding institutes of the new Technological University Dublin (TU Dublin). Overview The DIT School of Electrical and Electronic Engineering was the largest education provider of Electrical and Electronic Engineering in Ireland in terms of programme diversity, staff and student numbers, covering a wide range of engineering disciplines including; Communications Engineering, Computer Engineering, Power Engineering, Electrical Services Engineering, Control Engineering, Energy Management and Electronic Engineering. The school included well established research centres in areas such as photonics, energy, antennas, communications and electrical power, with research outputs in; biomedical engineering, audio engineering, sustainable design, assistive technology and health informatics. Educational courses in technical engineering commenced at Kevin St. Dublin 8 in 1887. 
The school sought accreditation for its programmes from the appropriate Irish and international professional body organisations, such as the Institution of Engineers of Ireland, offering education across the full range of third level National Framework of Qualifications (NFQ) Levels, from Level 6 to Level 10 (apprenticeships to post-doctoral degrees). During the 2010s, the school delivered 23 individual programmes to over 1,200 students and had an output of approximately 350 graduates per year. In 2015, the head of the school was Professor Michael Conlon. In 2015, Electrical & Electronic Engineering was expected to be the first School from the C Document 2::: David Negrete Fernández () was a Mexican colonel who participated in the Mexican Revolution. He was also a musician. Biography David fought alongside military officer Felipe Ángeles as a part of División del Norte. He married Emilia Moreno Anaya, who bore him: Consuelo Negrete Moreno Jorge Alberto Negrete Moreno Emilia Negrete Moreno Teresa Negrete Moreno David Negrete Moreno Rubén Negrete Moreno David was a math teacher in Mexico City. He was also a father-in-law of Elisa Christy and María Félix. References Document 3::: A super seeder is a no-till planter, towed behind a tractor, that sows (plants) especially wheat seeds in rows directly without any prior seedbed preparation. It is operated with the PTO of the tractor and is connected to it with three-point linkage. The Super Seeder is an advanced agricultural machine than Happy seeder, engineered to revolutionize traditional farming methods. It offers an efficient, time-saving solution, allowing farmers to sow wheat seeds directly after rice harvest without the need for prior stubble burning, thereby contributing significantly to environmental preservation. It is mostly used to sow wheat seeds after the paddy harvest in North Indian states. Importance in stubble burning management Traditional wheat farming involves a series of steps such as clearing the harvested paddy fields by burning the leftover stubble, tilling the soil, and then sowing the seeds. This process is not only time-consuming and labor-intensive but also environmentally harmful. Stubble burning contributes to air pollution, exacerbating climate change effects and posing health risks to local communities. The Super Seeder eliminates these adverse effects by providing an integrated solution for planting wheat seeds. This machine cuts and uproots the paddy straw, sows the wheat seeds, and deposits the straw over the sown area as mulch in a single pass. Various Workshops are also organised by state agricultural universities for awareness regarding Importance of super seeder in Stubble Burning Management. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What significant achievement did White Widow accomplish in 1995? A. It became a medical cannabis strain. B. It won the Cannabis Cup. C. It was developed by Mr. Nice Seedbank. D. It was renamed Black Widow. Answer:
B. It won the Cannabis Cup.
Relavent Documents: Document 0::: In the healthcare industry, information continuity is the process by which information relevant to a patient's care is made available to both the patient and the provider at the right place and the right time, to facilitate ongoing health care management and continuity of care. This is an extension of the concept of "Continuity of Care," which is defined by the American Academy of Family Physicians in their Continuity of Care definition as "the process by which the patient and the physician are cooperatively involved in ongoing health care management toward the goal of high quality, cost-effective medical care." There is a non-Information Technology reference to "Informational continuity" — the use of information on past events and personal circumstances to make current care appropriate for each individual. This exists with "Management continuity" and "Relational continuity." Information continuity in the information technology sense may exist alongside physical care continuity, such as when a medical chart arrives with a patient to the hospital. Information continuity may also be separate, such as when a patient's electronic records are sent to a treating physician before the patient arrives at a care site. Creating information continuity in health care typically involves the use of health information technology to link systems using standards. Information continuity will become more and more important as patients in health care systems expect that their treating physicians have all of their medical information across the health care spectrum. This use of this term in health information technology initiated at Seattle, Washington, at the Group Health Cooperative non-profit care system to describe activities including data sharing, allergy and medication reconciliation, and interfacing of data between health care institutions. See also Health care continuity References Document 1::: The following is a list of the 50 most populous incorporated cities in the U.S. state of Ohio. The population is according to the 2018 census estimates from the United States Census Bureau. Document 2::: Backward chaining (or backward reasoning) is an inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications. In game theory, researchers apply it to (simpler) subgames to find a solution to the game, in a process called backward induction. In chess, it is called retrograde analysis, and it is used to generate table bases for chess endgames for computer chess. Backward chaining is implemented in logic programming by SLD resolution. Both rules are based on the modus ponens inference rule. It is one of the two most commonly used methods of reasoning with inference rules and logical implications – the other is forward chaining. Backward chaining systems usually employ a depth-first search strategy, e.g. Prolog. Usage Backward chaining starts with a list of goals (or a hypothesis) and works backwards from the consequent to the antecedent to see if any data supports any of these consequents. An inference engine using backward chaining would search the inference rules until it finds one with a consequent (Then clause) that matches a desired goal. If the antecedent (If clause) of that rule is not known to be true, then it is added to the list of goals (for one's goal to be confirmed one must also provide data that confirms this new rule). 
For example, suppose a new pet, Fritz, is delivered in an opaque box along with two facts about Fritz: Fritz croaks Fritz eats flies The goal is to decide whether Fritz is green, based on a rule base containing the following four rules: If X croaks and X eats flies – Then X is a frog If X chirps and X sings – Then X is a canary If X is a frog – Then X is green If X is a canary – Then X is yellow With backward reasoning, an inference engine can determine whether Fritz is green in four steps. To start, the query is phrased as a goal assertion that is to be proven: "Fritz is green". 1. Fri Document 3::: Forest–savanna mosaic is a transitory ecotone between the tropical moist broadleaf forests of Equatorial Africa and the drier savannas and open woodlands to the north and south of the forest belt. The forest–savanna mosaic consists of drier forests, often gallery forest, interspersed with savannas and open grasslands. Flora This band of marginal savannas bordering the dense dry forest extends from the Atlantic coast of Guinea to South Sudan, corresponding to a climatic zone with relatively high rainfall, between 800 and 1400 mm. It is an often unresolvable, complex of secondary forests and mixed savannas, resulting from intense erosion of primary forests by fire and clearing. The vegetation ceases to have an evergreen character, and becomes more and more seasonal. A species of acacia, Faidherbia albida, marks, with its geographical distribution, the Guinean area of the savannas together with the area of the forest-savanna, arboreal and shrub, and a good part of the dense dry forest with prevalently deciduous trees. Ecoregions The World Wildlife Fund recognizes several distinct forest-savanna mosaic ecoregions: The Guinean forest–savanna mosaic is the transition between the Upper and Lower Guinean forests of West Africa and the West Sudanian savanna. The ecoregion extends from Senegal on the west to the Cameroon Highlands on the east. The Dahomey Gap is a region of Togo and Benin where the forest-savanna mosaic extends to the coast, separating the Upper and Lower Guinean forests. The Northern Congolian forest–savanna mosaic lies between the Congolian forests of Central Africa and the East Sudanian savanna. It extends from the Cameroon Highlands in the west to the East African Rift in the east, encompassing portions of Cameroon, Central African Republic, Democratic Republic of the Congo, and southwestern Sudan. The Western Congolian forest–savanna mosaic lies southwest of the Congolian forest belt, covering portions of southern Gabon, southern Republic of the Co The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary goal of information continuity in healthcare as defined in the text? A. To ensure that patients have access to all healthcare facilities B. To facilitate ongoing health care management and continuity of care C. To maintain a patient's personal information securely D. To reduce the cost of medical services Answer:
B. To facilitate ongoing health care management and continuity of care
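The backward-chaining excerpt above walks the classic "Fritz" example: start from the goal "Fritz is green" and work backwards through rule consequents until known facts are reached. Below is a minimal Python sketch of that search; the rule encoding, the prove function name, and the simple string substitution for the variable X are illustrative assumptions rather than anything specified in the excerpt, which is cut off mid-derivation.

```python
facts = {"Fritz croaks", "Fritz eats flies"}

# Each rule is (antecedent goals, consequent); X is the rule variable.
rules = [
    (["X croaks", "X eats flies"], "X is a frog"),
    (["X chirps", "X sings"],      "X is a canary"),
    (["X is a frog"],              "X is green"),
    (["X is a canary"],            "X is yellow"),
]

def prove(goal: str, subject: str = "Fritz") -> bool:
    """Backward chaining: a goal holds if it is a known fact, or if some rule
    concludes it and all of that rule's antecedents can themselves be proven."""
    if goal in facts:
        return True
    for antecedents, consequent in rules:
        if consequent.replace("X", subject) == goal:
            if all(prove(a.replace("X", subject), subject) for a in antecedents):
                return True
    return False

print(prove("Fritz is green"))   # True:  croaks + eats flies -> frog -> green
print(prove("Fritz is yellow"))  # False: "Fritz chirps" cannot be proven
```

The goal-directed, recursive order of these calls is the depth-first search strategy the excerpt says backward-chaining systems such as Prolog usually employ.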
Relavent Documents: Document 0::: Antifragility is a property of systems in which they increase in capability to thrive as a result of stressors, shocks, volatility, noise, mistakes, faults, attacks, or failures. The concept was developed by Nassim Nicholas Taleb in his book, Antifragile, and in technical papers. As Taleb explains in his book, antifragility is fundamentally different from the concepts of resiliency (i.e. the ability to recover from failure) and robustness (that is, the ability to resist failure). The concept has been applied in risk analysis, physics, molecular biology, transportation planning, engineering, aerospace (NASA), and computer science. Taleb defines it as follows in a letter to Nature responding to an earlier review of his book in that journal: Antifragile versus robust/resilient In his book, Taleb stresses the differences between antifragile and robust/resilient. "Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better." The concept has now been applied to ecosystems in a rigorous way. In their work, the authors review the concept of ecosystem resilience in its relation to ecosystem integrity from an information theory approach. This work reformulates and builds upon the concept of resilience in a way that is mathematically conveyed and can be heuristically evaluated in real-world applications: for example, ecosystem antifragility. The authors also propose that for socio-ecosystem governance, planning or in general, any decision making perspective, antifragility might be a valuable and more desirable goal to achieve than a resilience aspiration. In the same way, Pineda and co-workers have proposed a simply calculable measure of antifragility, based on the change of “satisfaction” (i.e., network complexity) before and after adding perturbations, and apply it to random Boolean networks (RBNs). They also show that several well known biological networks such as Arabidopsis thaliana cell-cycle are as ex Document 1::: Otto Schmiedeknecht (8 September 1847 Bad Blankenburg, Thüringen- 11 February 1936, Blankenburg) was a German entomologist who specialised in Hymenoptera. Selected works 1902-1936.Opuscula Ichneumonologica. Blankenburg in Thüringen.1902pp. 1907.Hymenopteren Mitteleuropas. Gustav Fischer. Jena. 804pp. 1914.Die Schlupfwespen (Ichneumonidae) Mitteleuropas, insbesondere deutschlands. In: Schoeder C. "Die Insekten Mitteleuropas". Franckh'sche Verlagshandlung, Stuttgart. pp. 113–170. References Möller, R. 2000: [Schmiedeknecht, O.] Rudolst. Naturhist. Schr. 10 83-90 (Under 'Opuscula ichneumonlogica) Oehlke, J. 1968: Über den Verbleib der Hymenopteren-Typen Schmiedeknechts. Beitr. Ent. , Berlin 18: 319-327 Sources Stefan Vidal (2005). The history of Hymenopteran parasitoid research in Germany, Biological Control, 32 : 25-33. Document 2::: The 12-bit ND812, produced by Nuclear Data, Inc., was a commercial minicomputer developed for the scientific computing market. Nuclear Data introduced it in 1970 at a price under $10,000 (). Description The architecture has a simple programmed I/O bus, plus a DMA channel. The programmed I/O bus typically runs low to medium-speed peripherals, such as printers, teletypes, paper tape punches and readers, while DMA is used for cathode ray tube screens with a light pen, analog-to-digital converters, digital-to-analog converters, tape drives, disk drives. 
The word size, 12 bits, is large enough to handle unsigned integers from 0 to 4095 – wide enough for controlling simple machinery. This is also enough to handle signed numbers from -2048 to +2047. This is higher precision than a slide rule or most analog computers. Twelve bits could also store two six-bit characters (note, six-bit isn't enough for two cases, unlike "fuller" ASCII character set). "ND Code" was one such 6-bit character encoding that included upper-case alphabetic, digit, a subset of punctuation and a few control characters. The ND812's basic configuration has a main memory of 4,096 twelve-bit words with a 2 microsecond cycle time. Memory is expandable to 16K words in 4K word increments. Bits within the word are numbered from most significant bit (bit 11) to least significant bit (bit 0). The programming model consists of four accumulator registers: two main accumulators, J and K, and two sub accumulators, R and S. A rich set of arithmetic and logical operations are provided for the main accumulators and instructions are provided to exchange data between the main and sub accumulators. Conditional execution is provided through "skip" instructions. A condition is tested and the subsequent instruction is either executed or skipped depending on the result of the test. The subsequent instruction is usually a jump instruction when more than one instruction is needed for the case where the test fails. Document 3::: The Health Products and Food Branch (HPFB) of Health Canada manages the health-related risks and benefits of health products and food by minimizing risk factors while maximizing the safety provided by the regulatory system and providing information to Canadians so they can make healthy, informed decisions about their health. HPFB has ten operational Directorates with direct regulatory responsibilities: Biologics and Genetic Therapies Directorate Food Directorate Marketed Health Products Directorate (with responsibility for post-market surveillance) Medical Devices Directorate Natural Health Products Directorate Office of Nutrition Policy and Promotion Pharmaceutical Drugs Directorate Policy, Planning and International Affairs Directorate Resource Management and Operations Directorate Veterinary Drugs Directorate Extraordinary Use New Drugs Extraordinary Use New Drugs (EUNDs) is a regulatory programme under which, in times of emergency, drugs can be granted regulatory approval under the Food and Drug Act and its regulations. An EUND approved through this pathway can only be sold to federal, provincial, territorial and municipal governments. The text of the EUNDs regulations is available. On 25 March 2011 and after the pH1N1 pandemic, amendments were made to the Food and Drug Regulations (FDR) to include a specific regulatory pathway for EUNDs. Typically, clinical trials in human subjects are conducted and the results are provided as part of the clinical information package of a New Drug Submission (NDS) to Health Canada, the federal authority that reviews the safety and efficacy of human drugs. Health Canada recognizes that there are circumstances in which sponsors cannot reasonably provide substantial evidence demonstrating the safety and efficacy of a therapeutic product for NDS as there are logistical or ethical challenges in conducting the appropriate human clinical trials. 
The EUND pathway was developed to allow a mechanism for authorization of th The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is one common benchmark used to define a small state in terms of population? A. Less than 500,000 people B. Less than 1 million people C. Less than 1.5 million people D. Less than 2 million people Answer:
B. Less than 1 million people
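Document 2 in the block above (the ND812 minicomputer) notes that one 12-bit word can hold two six-bit character codes. As a small worked illustration — the shift-and-mask scheme and the example codes below are assumptions made here, since the actual "ND Code" table is not given in the text — packing and unpacking two 6-bit codes into a 12-bit word looks like this:

# Packing two 6-bit character codes into one 12-bit word (illustrative sketch;
# the example codes are arbitrary, not the historical "ND Code" assignments).
def pack(hi, lo):
    """Place `hi` in bits 11..6 and `lo` in bits 5..0 of a 12-bit word."""
    assert 0 <= hi < 64 and 0 <= lo < 64
    return (hi << 6) | lo

def unpack(word):
    """Split a 12-bit word back into its two 6-bit codes."""
    assert 0 <= word < 4096
    return (word >> 6) & 0o77, word & 0o77

word = pack(0o01, 0o02)
print(oct(word))                       # 0o102
print([oct(c) for c in unpack(word)])  # ['0o1', '0o2']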
Relavent Documents: Document 0::: The Fischer oxazole synthesis is a chemical synthesis of an oxazole from a cyanohydrin and an aldehyde in the presence of anhydrous hydrochloric acid. This method was discovered by Emil Fischer in 1896. The cyanohydrin itself is derived from a separate aldehyde. The reactants of the oxazole synthesis itself, the cyanohydrin of an aldehyde and the other aldehyde itself, are usually present in equimolar amounts. Both reactants usually have an aromatic group, which appear at specific positions on the resulting heterocycle. A more specific example of Fischer oxazole synthesis involves reacting mandelic acid nitrile with benzaldehyde to give 2,5-diphenyl-oxazole. History Fischer developed the Fischer oxazole synthesis during his time at Berlin University. The Fischer oxazole synthesis was one of the first syntheses developed to produce 2,5-disubstituted oxazoles. Mechanism The Fischer oxazole synthesis is a type of dehydration reaction which can occur under mild conditions in a rearrangement of the groups that would not seem possible. The reaction occurs by dissolving the reactants in dry ether and passing through the solution dry, gaseous hydrogen chloride. The product, which is the 2,5-disubstituted oxazole, precipitates as the hydrochloride and can be converted to the free base by the addition of water or by boiling with alcohol. The cyanohydrins and aldehydes used for the synthesis are usually aromatic, however there have been instances where aliphatic compounds have been used. The first step of the mechanism is the addition of gaseous HCl to the cyanohydrin 1. The cyanohydrin abstracts the hydrogen from HCl while the chloride ion attacks the carbon in the cyano group. This first step results in the formation of an iminochloride intermediate 2, probably as the hydrochloride salt. This intermediate then reacts with the aldehyde; the lone pair of the nitrogen attacks the electrophilic carbonyl carbon on the aldehyde. The following step results in an SN2 attack fo Document 1::: Base Number (BN) is a measurement of basicity that is expressed in terms of the number of milligrams of potassium hydroxide per gram of oil sample (mg KOH/g). BN is an important measurement in petroleum products, and the value varies depending on its application. BN generally ranges from 6–8 mg KOH/g in modern lubricants, 7–10 mg KOH/g for general internal combustion engine use and 10–15 mg KOH/g for diesel engine operations. BN is typically higher for marine grade lubricants, approximately 15-80 mg KOH/g, as the higher BN values are designed to increase the operating period under harsh operating conditions, before the lubricant requires replacement. Oil Additives An oil formulation consists of the base or stock oil and oil additives. Most oil formulations contain basic additives and detergents, designed to react with and neutralise acids, preventing damage to engine parts, including corrosion of metal surfaces. Potentiometric determination Although IP Standard test methods exist, the more common methods for BN are ASTM standardised, such as the potentiometric titration for fresh oils (Test method BN ASTM D2896). A sample is typically dissolved in a pre-mixed solvent of chlorobenzene and acetic acid and titrated with standardised perchloric acid in glacial acetic acid for fresh oil samples. The end point is detected using a glass electrode which is immersed in an aqueous solution containing the sample, and connected to a voltmeter/potentiometer. 
This causes an ion exchange in the outer solvated layer at the glass membrane, so a change in potential is generated which can be measured by the electrode. When the end point of the chemical reaction is reached, which is shown by an inflection point on the titration curve using a specified detection system, the amount of titrant required is used to generate a result which is reported in milligrams of potassium hydroxide equivalent per gram of sample (mg of KOH/g). Potentiometric titration for used oils (Test method BN A Document 2::: Henry Bickersteth, 1st Baron Langdale, PC (18 June 1783 – 18 April 1851), a member of the prominent Bickersteth family, was an English physician, law reformer, and Master of the Rolls. Early life and education Langdale was born on 18 June 1783 at Kirkby Lonsdale, the third son of Henry Bickersteth, a surgeon, and Elizabeth Batty. His younger brother was Rev. Edward Bickersteth, whose son Edward Henry became Bishop of Exeter and whose grandson Edward was Bishop of South Tokyo. By the advice of his uncle, Dr. Robert Batty, in October 1801, he went to Edinburgh to pursue his medical studies, and in the following year was called home to take his father's practice in his temporary absence. Disliking the idea of settling down in the country as a general practitioner, young Bickersteth determined to become a London physician. With a view to obtaining a medical degree, on 22 June 1802 his name was entered in the books of Gonville and Caius College, Cambridge, and, on 27 October in the same year, he was elected a scholar on the Hewitt foundation. Owing to his intense application to work, his health broke down after his first term. Career A change of scene being deemed necessary to insure his recovery, he obtained, through Dr. Batty, the post of medical attendant to Edward, fifth earl of Oxford, who was then on a tour in Italy. After his return from the continent he continued with the Earl of Oxford until 1805, when he returned to Cambridge. At this time he wanted to enter the army, but his parents disapproved. After three years he was senior Smith's mathematical prizeman of his year (1808), Miles Bland, Charles James Blomfield and Adam Sedgwick being among the competitors. He graduated senior wrangler from Gonville and Caius College, Cambridge in 1808 and after training as a physician like his father, he turned to law. Having taken his degree, Bickersteth was immediately elected a fellow of his college, and thereupon made up his mind to enter the profession of the law. Document 3::: In statistics, a dual-flashlight plot is a type of scatter-plot in which the standardized mean of a contrast variable (SMCV) is plotted against the mean of a contrast variable representing a comparison of interest . The commonly used dual-flashlight plot is for the difference between two groups in high-throughput experiments such as microarrays and high-throughput screening studies, in which we plot the SSMD versus average log fold-change on the y- and x-axes, respectively, for all genes or compounds (such as siRNAs or small molecules) investigated in an experiment. As a whole, the points in a dual-flashlight plot look like the beams of a flashlight with two heads, hence the name dual-flashlight plot. With the dual-flashlight plot, we can see how the genes or compounds are distributed into each category in effect sizes, as shown in the figure. Meanwhile, we can also see the average fold-change for each gene or compound. The dual-flashlight plot is similar to the volcano plot. 
In a volcano plot, the p-value (or q-value), instead of SMCV or SSMD, is plotted against average fold-change . The advantage of using SMCV over p-value (or q-value) is that, if there exist any non-zero true effects for a gene or compound, the estimated SMCV goes to its population value whereas the p-value (or q-value) for testing no mean difference (or zero contrast mean) goes to zero when the sample size increases . Hence, the value of SMCV is comparable whereas the value of p-value or q-value is not comparable in experiments with different sample size, especially when many investigated genes or compounds do not have exactly zero effects. The dual-flashlight plot bears the same advantage that the SMCV has, as compared to the volcano plot. See also Effect size SSMD SMCV Contrast variable ANOVA Volcano plot (statistics) Further reading Zhang XHD (2011) "Optimal High-Throughput Screening: Practical Experimental Design and Data Analysis for Genome-scale RNAi Research, Cambridge Uni The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the genus of the Bean leafroll virus (BLRV)? A. Luteovirus B. Retrovirus C. Adenovirus D. Picornavirus Answer:
A. Luteovirus
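Document 3 in the block above describes the dual-flashlight plot as SSMD on the y-axis against average log fold-change on the x-axis, one point per gene or compound. The sketch below simulates data and draws such a plot; the simulated values, variable names, and the particular two-group SSMD estimator used (mean difference divided by the square root of the summed group variances, assuming independent groups and log-scale data) are illustrative assumptions rather than details from the source.

# Dual-flashlight plot sketch: SSMD vs. average log2 fold-change per gene (simulated data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_genes, n_rep = 1000, 6
control = rng.normal(0.0, 1.0, size=(n_genes, n_rep))
treated = control + rng.normal(0.0, 1.0, size=(n_genes, n_rep))
treated[:50] += 2.0                                      # a handful of genes with a real effect

diff = treated.mean(axis=1) - control.mean(axis=1)       # average log2 fold-change
ssmd = diff / np.sqrt(treated.var(axis=1, ddof=1) + control.var(axis=1, ddof=1))

plt.scatter(diff, ssmd, s=5, alpha=0.5)
plt.xlabel("average log2 fold-change")
plt.ylabel("SSMD")
plt.title("Dual-flashlight plot (simulated data)")
plt.show()

The resulting scatter shows the characteristic two-headed "flashlight" shape, with the 50 genes given a true effect pulled away from the central cluster.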
Relavent Documents: Document 0::: James Jeffrey Bull is a professor of biology at the University of Idaho and also Joseph J. and Jeanne M. Lagowski Regents Professor (Emeritus) in Molecular Biology at the University of Texas at Austin. He is best known for his influential 1983 monograph, Evolution of Sex Determining Mechanisms. In the early 1990s, he changed the focus of his work to experimental evolution and phylogenetics, and has since had considerable success in both fields. His work in experimental evolution involves observing genetic and phenotypic changes in bacteria and bacteriophages, the viruses that attack bacteria. In 2003 he was elected a member of the American Academy of Arts and Sciences. In 2016 he was elected to the National Academy of Sciences. After becoming emeritus at UT Austin, Bull moved to the University of Idaho in 2019. Bibliography Evolution of sex determining mechanisms. 1983. Menlo Park, California: The Benjamin/Cummings Publishing Company, Inc. References External links Home page Document 1::: Tyndallization is a process from the nineteenth century for sterilizing substances, usually food, named after its inventor John Tyndall, that can be used to kill heat-resistant endospores. Although now considered dated, it is still occasionally used. A simple and effective sterilizing method commonly used today is autoclaving: heating the substance being sterilized to for 15 minutes in a pressured system. If autoclaving is not possible because of lack of equipment, or the need to sterilize something that will not withstand the higher temperature, unpressurized heating for a prolonged period at a temperature of up to , the boiling point of water, may be used. The heat will kill any bacterial cells; however, bacterial spores capable of later germinating into bacterial cells may survive. Tyndallization can be used to destroy the spores. Tyndallization essentially consists of heating the substance to boiling point (or just a little below boiling point) and holding it there for 15 minutes, three days in succession. After each heating, the resting period will allow spores that have survived to germinate into bacterial cells; these cells will be killed by the next day's heating. During the resting periods the substance being sterilized is kept in a moist environment at a warm room temperature, conducive to germination of the spores. When the environment is favourable for bacteria, it is conducive to the germination of cells from spores, and spores do not form from cells in this environment (see bacterial spores). The Tyndallization process is usually effective in practice. But it is not considered completely reliable — some spores may survive and later germinate and multiply. It is not often used today, but is used for sterilizing items that cannot withstand pressurized heating, such as plant seeds. See also Autoclave References Document 2::: Meat and bone meal (MBM) is a product of the rendering industry. It is typically about 48–52% protein, 33–35% ash, 8–12% fat, and 4–7% water. It is primarily used in the formulation of animal feed to improve the amino acid profile of the feed. Feeding of MBM to cattle is thought to have been responsible for the spread of BSE (mad cow disease); therefore, in most parts of the world, MBM is no longer allowed in feed for ruminant animals. However, it is still used to feed monogastric animals. MBM is widely used in the United States as a low-cost animal protein in dog food and cat food. 
In Europe, some MBM is used as ingredients in pet food, but the majority is now used as a fossil-fuel replacement for energy generation, as a fuel in cement kilns, landfilling or incineration. History In the UK, after the 1987 discovery that BSE could cause vCJD, the original feed ban was introduced in 1988 to prevent ruminant protein being fed to ruminants. In addition, it has been illegal to feed ruminants with all forms of mammalian protein (with specific exceptions) since November 1994 and to feed any farmed livestock, including fish and horses, with mammalian meat and bone meal (mammalian MBM) since 4 April 1996. Regulation (EC) No.999/2001 introduced EU-wide regulations, which relaxed UK controls. In 2000, UK supermarket chain Co-op Food was still calling for "a legally-binding Europe-wide ban on the feeding of animal waste to farm animals". They opined that the practice was tantamount to cannibalism. According to the BBC at the time there was no ban on the use in livestock feed of animal blood, tallow, poultry offal and feather meal. European categories In Europe (before the 2002 EU Regulation), animal by-products were classified into two categories: "high risk" or "low risk" products. Since 2002, "processed animal protein" (PAP) and other animal by-products, authorized or not for various uses, are categorized into three categories (by the European Regulation 2002) according Document 3::: Biodermogenesi is a procedure adopted in dermatology to promote skin regeneration. The system makes it possible to reorganize the skin layers and stimulate the regeneration of collagen, elastic fibers and basal cells through the reactivation of intra- and extracellular exchange. The therapy is based on the combined delivery of electromagnetic fields, vacuum and electrostimulation (V-EMF therapy). The literature shows that it is generally pleasant and relaxing for patients and has been found to be effective in the anti-aging therapy, of stretch marks and scars, always in the absence of side effects. See Also V-EMF therapy The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is torque ripple in electric motors primarily characterized by? A. A constant output torque B. A periodic increase or decrease in output torque C. A fixed maximum torque value D. A reduction in motor efficiency Answer:
B. A periodic increase or decrease in output torque
Relavent Documents: Document 0::: The aphotic zone (aphotic from Greek prefix + "without light") is the portion of a lake or ocean where there is little or no sunlight. It is formally defined as the depths beyond which less than 1 percent of sunlight penetrates. Above the aphotic zone is the photic zone, which consists of the euphotic zone and the disphotic zone. The euphotic zone is the layer of water in which there is enough light for net photosynthesis to occur. The disphotic zone, also known as the twilight zone, is the layer of water with enough light for predators to see but not enough for the rate of photosynthesis to be greater than the rate of respiration. The depth at which less than one percent of sunlight reaches begins the aphotic zone. While most of the ocean's biomass lives in the photic zone, the majority of the ocean's water lies in the aphotic zone. Bioluminescence is more abundant than sunlight in this zone. Most food in this zone comes from dead organisms sinking to the bottom of the lake or ocean from overlying waters. The depth of the aphotic zone can be greatly affected by such things as turbidity and the season of the year. The aphotic zone underlies the photic zone, which is that portion of a lake or ocean directly affected by sunlight. The Dark Ocean In the ocean, the aphotic zone is sometimes referred to as the dark ocean. Depending on how it is defined, the aphotic zone of the ocean begins between depths of about to and extends to the ocean floor. The majority of the ocean is aphotic, with the average depth of the sea being deep; the deepest part of the sea, the Challenger Deep in the Mariana Trench, is about deep. The depth at which the aphotic zone begins in the ocean depends on many factors. In clear, tropical water sunlight can penetrate deeper and so the aphotic zone starts at greater depths. Around the poles, the angle of the sunlight means it does not penetrate as deeply so the aphotic zone is shallower. If the water is turbid, suspended material can bloc Document 1::: A spring house, or springhouse, is a small building, usually of a single room, constructed over a spring. While the original purpose of a springhouse was to keep the spring water clean by excluding fallen leaves, animals, etc., the enclosing structure was also used for refrigeration before the advent of ice delivery and, later, electric refrigeration. The water of the spring maintains a constant cool temperature inside the spring house throughout the year. Food that would otherwise spoil, such as meat, fruit, or dairy products, could be kept there, safe from animal depredations as well. Springhouses thus often also served as pumphouses, milkhouses and root cellars. The Tomahawk Spring spring house at Tomahawk, West Virginia, was listed on the National Register of Historic Places in 1994. Gallery See also Ice house (building) Smokehouse Windcatcher External links Document 2::: Mycroft was a free and open-source software virtual assistant that uses a natural language user interface. Its code was formerly copyleft, but is now under a permissive license. It was named after a fictional computer from the 1966 science fiction novel The Moon Is a Harsh Mistress. Unusually for a voice-controlled assistant, Mycroft did all of its processing locally, not on a cloud server belonging to the vendor. It could access online resources, but it could also function without an internet connection. In early 2023, Mycroft AI ceased development. A community-driven platform continues with OpenVoiceOS. 
History Inspiration for Mycroft came when Ryan Sipes and Joshua Montgomery were visiting a makerspace in Kansas City, MO, where they came across a simple and basic intelligent virtual assistant project. They were interested in the technology, but did not like its inflexibility. Montgomery believes that the burgeoning industry of intelligent personal assistance poses privacy concerns for users, and has promised that Mycroft will protect privacy through its open source machine learning platform. Mycroft AI, Inc., has won several awards, including the prestigious Techweek's KC Launch competition in 2016. They were part of the Sprint Accelerator 2016 class in Kansas City and joined 500 Startups Batch 20 in February 2017. The company accepted a strategic investment from Jaguar Land Rover during this same time period. The company had raised more than $2.5 million from institutional investors before they opted to offer shares of the company to the public through StartEngine, an equity crowdfunding platform. In early 2023, Mycroft AI ceased development. Software Mycroft voice stack Mycroft provides free software for most parts of the voice stack. Wake Word Mycroft does Wake Word spotting, also called keyword spotting, through its Precise Wake Word engine. Prior to Precise becoming the default Wake Word engine, Mycroft employed PocketSphinx. Instead of being ba Document 3::: G protein-coupled receptor 126 also known as VIGR and DREG is a protein encoded by the ADGRG6 gene. GPR126 is a member of the adhesion GPCR family. Adhesion GPCRs are characterized by an extended extracellular region often possessing N-terminal protein modules that is linked to a TM7 region via a domain known as the GPCR-Autoproteolysis INducing (GAIN) domain. GPR126 is all widely expressed on stromal cells. The N-terminal fragment of GPR126 contains C1r-C1s, Uegf and Bmp1 (CUB), and PTX-like modules. Ligand GPR126 was shown to bind collagen IV and laminin-211 promoting cyclic adenosine monophosphate (cAMP) to mediate myelination. Signaling Upon lipopolysaccharide (LPS) or thrombin stimulation, expression of GPR126 is induced by MAP kinases in endothelial cells. During angiogenesis, GPR126 promotes protein kinase A (PKA)–cAMP-activated signaling in endothelial cells. Forced GPR126 expression in COS-7 cells enhances cAMP levels by coupling to heterotrimeric Gαs/i proteins. Function GPR126 has been identified in genomic regions associated with adult height, more specially trunk height, pulmonary function and adolescent idiopathic scoliosis. In the vertebrate nervous system, many axons are surrounded by a myelin sheath to conduct action potentials rapidly and efficiently. Applying a genetic screen in zebrafish mutants, Talbot’s group demonstrated that GPR126 affects the development of myelinated axons. GPR126 drives the differentiation of Schwann cells through inducing cAMP levels, which causes Oct6 transcriptional activities to promote myelin gene activity. Mutation of gpr126 in zebrafish affects peripheral myelination. Monk’s group demonstrated domain-specific functions of GPR126 during Schwann cells development: the NTF is necessary and sufficient for axon sorting, while the CTF promotes wrapping through cAMP induction to regulate early and late stages of Schwann cells development. Outside of neurons, GPR126 function is required for heart and inner ear development. GPR126 stimulates VEGF signaling and angiogenesis by modulating VEGF receptor 2 (VEGFR2) expression through STAT5 and GATA2 in endothelial cells. 
Disease Mouse models have shown GPR126 deletion to affect cartilage biology and spinal column development, supporting findings that variants of GPR126 have been associated with adolescent idiopathic scoliosis, and mutations have been shown to be responsible for severe arthrogryposis multiplex congenita. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main function of GPR126 in the development of Schwann cells, as described in the text? A. It inhibits axon sorting. B. It promotes myelin gene activity through cAMP induction. C. It decreases cAMP levels in Schwann cells. D. It blocks VEGF signaling in endothelial cells. Answer:
B. It promotes myelin gene activity through cAMP induction.
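Document 0 in the block above defines the aphotic zone as beginning where less than 1 percent of surface sunlight remains, and notes that this depth varies with water clarity. A back-of-the-envelope way to see that variation is to assume simple exponential (Beer–Lambert) attenuation of light with depth; both that assumption and the attenuation coefficients below are illustrative and are not figures taken from the text.

# Depth of the 1 % light level (start of the aphotic zone), assuming
# exponential attenuation I(z) = I0 * exp(-Kd * z). Kd values are illustrative guesses.
import math

def one_percent_depth(kd):
    """Depth (m) at which irradiance falls to 1 % of its surface value."""
    return math.log(100.0) / kd   # solve 0.01 = exp(-Kd * z) for z

for label, kd in [("clear tropical water", 0.03), ("turbid coastal water", 0.3)]:
    print(f"{label}: Kd = {kd} 1/m -> ~{one_percent_depth(kd):.0f} m")

With these assumed coefficients the 1 % depth drops from roughly 150 m in clear water to about 15 m in turbid water, consistent with the text's point that turbidity and sun angle strongly shift where the aphotic zone begins.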
Relavent Documents: Document 0::: Bridgwater tidal barrier will be a flood control gate located on the River Parrett in Bridgwater, Somerset, England. The River Parrett is tidal for some upstream of Bridgwater, and the combination of flooding on the Somerset Levels and high tides reaching up the Bristol Channel, have a detrimental effect on the whole area. In 2022, a tidal flood gate was approved to be installed at a cost of £249 million, which is expected to be operational by 2027. History Historically, flooding on the River Parrett has occurred when both excess rainwater and high tides in the Bristol Channel, backflow upstream on the river. In December 1929, serious flooding upstream at Lyng and Athelney was in danger of overwhelming those villages, and to prevent this, the locals suggested cutting the dykes, but this would release a "tidal wave" high, and combined with near incoming tide, it was feared mass flooding would occur in the Bridgwater area. However, by early 1930, the locals had abandoned this idea and they seem "resigned to their fate." Several times, locals on the Somerset Levels have complained that their settlements have been sacrificed to save Bridgwater, but one Environment Agency official noted that that is what the Somerset Levels are supposed to do; retain the floodwater and release it slowly. Serious floods occurred in 1960, and as a result, defences against flooding were built along the Parrett catchment. One of the suggestions put forward after the 2014 floods was to build a giant lagoon in Bridgwater Bay which could generate electricity through the flowing of the tides, but could be allowed to store fresh floodwater and release it into the sea at low tide. Future flooding is based on modelling and estimates from the Environment Agency, which detail an increase of 20% of peak flow in all watercourses, coupled with a sea level rise of by the year 2100. In December 2019, proposals for the barrier were submitted in response to severe flooding in Somerset in 2014. The Document 1::: Difelikefalin, sold under the brand name Korsuva, is an opioid peptide used for the treatment of moderate to severe itch. It acts as a peripherally-restricted, highly selective agonist of the κ-opioid receptor (KOR). Difelikefalin acts as an analgesic by activating KORs on peripheral nerve terminals and KORs expressed by certain immune system cells. Activation of KORs on peripheral nerve terminals results in the inhibition of ion channels responsible for afferent nerve activity, causing reduced transmission of pain signals, while activation of KORs expressed by immune system cells results in reduced release of proinflammatory, nerve-sensitizing mediators (e.g., prostaglandins). Difelikefalin was approved for medical use in the United States in August 2021. The U.S. Food and Drug Administration considers it to be a first-in-class medication. Society and culture Legal status On 24 February 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Kapruvia, intended for treatment of moderate-to-severe pruritus associated with chronic kidney disease. The applicant for this medicinal product is Vifor Fresenius Medical Care Renal Pharma France. Difelikefalin was approved for medical use in the European Union in April 2022. Research It is under development by Cara Therapeutics as an intravenous agent for the treatment of postoperative pain. 
An oral formulation has also been developed. Due to its peripheral selectivity, difelikefalin lacks the central side effects like sedation, dysphoria, and hallucinations of previous KOR-acting analgesics such as pentazocine and phenazocine. In addition to use as an analgesic, difelikefalin is also being investigated for the treatment of pruritus (itching). Difelikefalin has completed phase II clinical trials for postoperative pain and has demonstrated significant and "robust" clinical Document 2::: The World Vegetable Center (WorldVeg) (), previously known as the Asian Vegetable Research and Development Center (AVRDC), is an international, nonprofit institute for vegetable research and development. It was founded in 1971 in Shanhua, southern Taiwan, by the Asian Development Bank, Taiwan, South Korea, Japan, the Philippines, Thailand, the United States and South Vietnam. WorldVeg aims to reduce malnutrition and alleviate poverty in developing nations through improving production and consumption of vegetables. History The World Vegetable Center was founded as the Asian Vegetable Research and Development Center (AVRDC) in 1971 by the Asian Development Bank, Taiwan, South Korea, Japan, the Philippines, Thailand, the United States and South Vietnam. The main campus was opened in 1973. In 2008 the center was rebranded as the World Vegetable Center. For the first 20 years of its existence the World Vegetable Center was a major global sweet potato research center with over 1,600 accessions in their first two years of operation. In 1991 the World Vegetable Center chose to end its sweet potato research due to high costs and other institutions with a tighter focus coming into existence. The WVC duplicated and transferred its research and germplasm to the International Potato Center and Taiwan Agricultural Research institute. Research and development The use of vegetables as crops that are of high worth is important in the Sustainable Development Goals of the United Nations Development Program and the World Vegetable Center. The vegetables bred by the Center can be used in poorer areas, where they can serve as an important source of income and can help fight micronutrient deficiencies. The Center's current crop portfolio focuses on several groups of globally important vegetables, according to the WorldVeg: solanaceous crops: (tomato, sweet pepper, chili pepper, eggplant) bulb alliums (onion, shallot, garlic) cucurbits (Cucurbitaceae): (cucumbers, pumpkins) Indigeno Document 3::: An heirloom plant, heirloom variety, heritage fruit (Australia and New Zealand), or heirloom vegetable (especially in Ireland and the UK) is an old cultivar of a plant used for food that is grown and maintained by gardeners and farmers, particularly in isolated communities of the Western world. These were commonly grown during earlier periods in human history, but are not used in modern large-scale agriculture. In some parts of the world, it is illegal to sell seeds of cultivars that are not listed as approved for sale. The Henry Doubleday Research Association, now known as Garden Organic, responded to this legislation by setting up the Heritage Seed Library to preserve seeds of as many of the older cultivars as possible. However, seed banks alone have not been able to provide sufficient insurance against catastrophic loss. In some jurisdictions, like Colombia, laws have been proposed that would make seed saving itself illegal. 
Many heirloom vegetables have kept their traits through open pollination, while fruit varieties such as apples have been propagated over the centuries through grafts and cuttings. The trend of growing heirloom plants in gardens has been returning in popularity in North America and Europe. Origin Before the industrialization of agriculture, a much wider variety of plant foods were grown for human consumption, largely due to farmers and gardeners saving seeds and cuttings for future planting. From the 16th century through the early 20th century, the diversity was huge. Old nursery catalogues were filled with plums, peaches, pears and apples of numerous varieties, and seed catalogs offered legions of vegetable varieties. Valuable and carefully selected seeds were sold and traded using these catalogs along with useful advice on cultivation. Since World War II, agriculture in the industrialized world has mostly consisted of food crops which are grown in large, monocultural plots. In order to maximize consistency, few varieties of each type of crop are grown. These varieties are often selected for their productivity and their ability to ripen at the same time while withstanding mechanical picking and cross-country shipping, as well as their tolerance to drought, frost, or pesticides. This form of agriculture has led to a 75% drop in crop genetic diversity. While heirloom gardening has maintained a niche community, in recent years it has seen a resurgence in response to the industrial agriculture trend. In the Global South, heirloom plants are still widely grown, for example, in the home gardens of South and Southeast Asia. Before World War II, the majority of produce grown in the United States was heirlooms. In the 21st century, numerous community groups all over the world are working to preserve historic varieties to make a wide variety of fruits, vegetables, herbs, and flowers available again to the home gardener, by renovating old orchards, sourcing historic fruit varieties, engaging in seed swaps, and encouraging community participation. Heirloom varieties are an increasingly popular way for gardeners and small farmers to connect with traditional forms of agriculture and the crops grown in these systems. Growers also cite lower costs associated with purchasing seeds, improved taste, and perceived improved nutritional quality as reasons for growing heirlooms. In many countries, hundreds or even thousands of heirloom varieties are commercially available for purchase or can be obtained through seed libraries and banks, seed swaps, or community events. Heirloom varieties may also be well suited for market gardening, farmer's market sales, and CSA programs. A primary drawback to growing heirloom varieties is lower disease resistance compared to many commercially available hybrid varieties. Common disease problems, such as verticillium and fusarium wilt, may affect heirlooms more significantly than non-heirloom crops. Heirloom varieties may also be more delicate and perishable. In recent years, research has been conducted into improving the disease resistance of heirlooms, particularly tomatoes, by crossing them with resistant hybrid varieties. Requirements The term heirloom to describe a seed variety was first used in the 1930s by horticulturist and vegetable grower J.R. Hepler to describe bean varieties handed down through families. However, the current definition and use of the word heirloom to describe plants is fiercely debated. 
One school of thought places an age or date point on the cultivars. For instance, one school says the cultivar must be over 100 years old, others 50 years old, and others prefer the date of 1945, which marks the end of World War II and roughly the beginning of widespread hybrid use by growers and seed companies. Many gardeners consider 1951 to be the latest year a plant could have originated and still be called an heirloom, since that year marked the widespread introduction of the first hybrid varieties. It was in the 1970s that hybrid seeds began to proliferate in the commercial seed trade. Some heirloom varieties are much older; some are apparently pre-historic. Another way of defining heirloom cultivars is to use the definition of the word heirloom in its truest sense. Under this interpretation, a true heirloom is a cultivar that has been nurtured, selected, and handed down from one family member to another for many generations. Additionally, there is another category of cultivars that could be classified as "commercial heirlooms": cultivars that were introduced many generations ago and were of such merit that they have been saved, maintained and handed down—even if the seed company has gone out of business or otherwise dropped the line. Additionally, many old commercial releases have actually been family heirlooms that a seed company obtained and introduced. Regardless of a person's specific interpretation, most authorities agree that heirlooms, by definition, must be open-pollinated. They may also require open-pollinated varieties to have been bred and stabilized using classic breeding practices. While there is currently one genetically modified tomato available to home growers, it is generally agreed that no genetically modified organisms can be considered heirloom cultivars. Another important point of discussion is that without the ongoing growing and storage of heirloom plants, the seed companies and the government will control all seed distribution. Most, if not all, hybrid plants, if they do not have sterile seeds and can be regrown, will not be the same as the original hybrid plant, thus ensuring the dependency on seed distributors for future crops. Writer and author Jennifer A. Jordan describes the term "heirloom" as a culturally constructed concept that is only relevant due to the relatively recent loss of many crop varieties: "It is only with the rise of industrial agriculture that [the] practice of treating food as a literal heirloom has disappeared in many parts of the world—and that is precisely when the heirloom label emerges. ...[T]he concept of an heirloom becomes possible only in the context of the loss of actual heirloom varieties, of increased urbanization and industrialization as fewer people grow their own food, or at least know the people who grow their food." Collection sites The heritage fruit trees that exist today are clonally descended from trees of antiquity. Heirloom roses are sometimes collected (nondestructively as small cuttings) from vintage homes and from cemeteries, where they were once planted at gravesites by mourners and left undisturbed in the decades since. Modern production methods and the rise in population have largely supplanted this practice. UK and EU law and national lists In the UK and Europe, it is thought that many heritage vegetable varieties (perhaps over 2,000) have been lost since the 1970s, when EEC (now EU) laws were passed making it illegal to sell any vegetable cultivar not on the national list of any EEC country. 
This was set up to help in eliminating seed suppliers selling one seed as another, guarantee the seeds were true to type, and that they germinated consistently. Thus, there were stringent tests to assess varieties, with a view to ensuring they remain the same from one generation to the next. However, unique varieties were lost for posterity. These tests (called DUS) assess "distinctness", "uniformity", and "stability". But since some heritage cultivars are not necessarily uniform from plant to plant, or indeed within a single plant—a single cultivar—this has been a sticking point. "Distinctness" has been a problem, moreover, because many cultivars have several names, perhaps coming from different areas or countries (e.g., carrot cultivar Long Surrey Red is also known as "Red Intermediate", "St. Valery", and "Chertsey"). However, it has been ascertained that some of these varieties that look similar are in fact different cultivars. On the other hand, two that were known to be different cultivars were almost identical to each other, thus one would be dropped from the national list in order to clean it up. Another problem has been the fact that it is somewhat expensive to register and then maintain a cultivar on a national list. Therefore, if no seed breeder or supplier thinks it will sell well, no one will maintain it on a list, and so the seed will not be re-bred by commercial seed breeders. In recent years, progress has been made in the UK to set up allowances and less stringent tests for heritage varieties on a B national list, but this is still under consideration. When heirloom plants are not being sold, however, laws are often more lenient. Because most heirloom plants are at least 50 years old and grown and swapped in a family or community they fall under the public domain. Another worldwide alternative is to submit heirloom seeds to a seedbank. These public repositories in turn maintain and disperse these genetics to anyone who will use them appropriately. Typically, approved uses are breeding, study, and sometimes, further distribution. US state law There are a variety of intellectual property protections and laws that are applied to heirloom seeds, which can often differ greatly between states. Plant patents are based on the Plant Patent Act of 1930, which protects plants grown from cuttings and division, while under intellectual property rights, the Plant Variety Protection Act of 1970 (PVPA) shields non-hybrid, seed-propagated plants. However, seed breeders can only shelter their variety for 20 years under PVPA. There are also a couple of exceptions under the PVPA which allow growers to cultivate, save seeds, and sell the resultant crops, and give breeders allowances to use PVPA protected varieties as starter material as long as it constitutes less than half of the breeding material. There are also seed licenses which may place restrictions on the use of seeds or trademarks that guard against the use of certain plant variety names. In 2014, the Pennsylvania Department of Agriculture caused a seed-lending library to shut down and promised to curtail any similar efforts in the state. The lending library, hosted by a town library, allowed gardeners to "check out" a package of open-pollinated seed, and "return" seeds kept from the crop grown from those seeds. 
The Department of Agriculture said that this activity raises the possibility of "agro-terrorism", and that a Seed Act of 2004 requires the library staff to test each seed packet for germination rate and whether the seed was true to type. In 2016 the department reversed this decision, and clarified that seed libraries and non-commercial seed exchanges are not subject to the requirements of the Seed Act. Food justice In disputed Palestine, some heirloom growers and seed savers see themselves as contributing a form of resistance against the privatization of agriculture, while also telling stories of their ancestors, defying violence, and encouraging rebellion. The Palestinian Heirloom Seed Library (PHSL), founded by writer and activist Vivien Sansour, breeds and maintains a selection of traditional crops from the region, seeking to "preserve and promote heritage and threatened seed varieties, traditional Palestinian farming practices, and the cultural stories and identities associated with them." Some scholars have additionally framed the increasing control of Israeli agribusiness corporations over Palestinian seed supplies as an attempt to suppress food sovereignty and as a form of subtle ecocide. In January 2012, a conflict over seed access erupted in Latvia when two undercover investigators from the Latvian State Plant Protection Agency charged an independent farm with the illegal sale of unregistered heirloom tomato seeds. The agency suggested that the farm choose a small number of varieties to officially register and to abandon the other approximately 800 varieties grown on the farm. This infuriated customers as well as members of the general public, many of whom spoke out against what was seen as an overly strict interpretation of the law. The scandal further escalated with a series of hearings held by agency officials, during which residents called for a reexamination of seed registration laws and demanded greater citizen participation in legal and political matters relating to agriculture. In Peru and Ecuador, genes from heirloom tomato varieties and wild tomato relatives have been the subject of patent claims by the University of Florida. These genes have been investigated for their usefulness in increasing drought and salt tolerance and disease resistance, as well as improving flavor, in commercial tomatoes. The American genomics development company Evolutionary Genomics identified genes found in Galapagos tomatoes that may increase sweetness by up to 25% and as of 2023 has filed an international patent application on the usage of these genes. Native heirloom and landrace crop varieties and their stewards are sometimes subject to theft and biopiracy. Biopiracy may negatively impact communities that grow these heirloom varieties through loss of profits and livelihoods, as well as litigation. One infamous example is the case of Enola bean patent, in which a Texas corporation collected heirloom Mexican varieties of the scarlet runner bean and patented them, and then sued the farmers who had supplied the seeds in the first place to prevent them from exporting their crops to the US. The 'Enola' bean was granted 20-year patent protection in 1999, but subsequently underwent numerous legal challenges on the grounds that the bean was not a novel variety. In 2004, DNA fingerprinting techniques were used to demonstrate that 'Enola' was functionally identical to a yellow bean grown in Mexico known as Azufrado Peruano 87. 
The case has been widely cited as a prime example of biopiracy and misapplication of patent rights. Native communities in the United States and Mexico have drawn particular attention to the importance of traditional and culturally appropriate seed supplies. The Traditional Native American Farmers Association (TNAFA) is an Indigenous organization aiming to "revitalize traditional agriculture for spiritual and human need" and advocating for traditional methods of growing, preparing, and consuming plants. In concert with other organizations, TNAFA has also drafted a formal Declaration of Seed Sovereignty and worked with legislators to protect Indigenous heritage seeds. Indigenous peoples are also at the forefront of the seed rematriation movement to bring lost seed varieties back to their traditional stewards. Rematriation efforts are frequently directed at institutions such as universities, museums, and seed banks, which may hold Indigenous seeds in their collection that are inaccessible to the communities from which they originate. In 2018, the Seed Savers Exchange, the largest publicly accessible seed bank in the United States, rematriated several heirloom seed varieties back to Indigenous communities. Activism Activism surrounding food justice, farmers' rights, and seed sovereignty frequently overlap with the promotion and usage of heirloom crop varieties. International peasant farmers' organization La Via Campesina is credited with the first usage of the term "food sovereignty" and campaigns for agrarian reform, seed freedom, and farmers' rights. It currently represents more than 150 social movement organizations in 56 countries. Numerous other organizations and collectives worldwide participate in food sovereignty activism, including the US Food Sovereignty Alliance, Food Secure Canada, and the Latin American Seeds Collective in North and South America; the African Center for Biodiversity (ACB), the Coalition for the Protection of African Genetic Heritage (COPAGEN), and the West African Peasant Seed Committee (COASP) in Africa; and the Alliance for Sustainable and Holistic Agriculture (ASHA), Navdanya, and the Southeast Asia Regional Initiatives for Community Empowerment (SEARICE) in Asia. In a 2022 BBC interview, Indian environmental activist and scholar Vandana Shiva stated that "Seed is the source of life. Seed is the source of food. To protect food freedom, we must protect seed freedom." Other writers have pushed back against the promotion and proliferation of heirloom crop varieties, connecting their usage to the impacts of colonialism. Quoting American author and educator Martín Prechtel in his article in The Guardian, Chris Smith writes that To keep seeds alive, clear, strong and open-pollinated, purity as the idea of a single pure race must be understood as the ironic insistence of imperial minds. Writer and journalist Brendan Borrell calls heirloom tomatoes "the tomato equivalent of the pug—that 'purebred' dog with the convoluted nose that snorts and hacks when it tries to catch a breath" and claims that selection for unique size, shape, color, and flavor has hampered disease resistance and hardiness in heirlooms. Future More attention is being put on heirloom plants as a way to restore genetic diversity and feed a growing population while safeguarding the food supply of diverse regions. Specific heirloom plants are often selected, saved, and planted again because of their superior performance in a particular locality. 
Over many crop cycles these plants develop unique adaptive qualities to their environment, which empowers local communities and can be vital to maintaining the genetic resources of the world. Some debate has occurred regarding the perceived improved nutritional qualities of heirloom varieties compared to modern cultivars. Anecdotal reports claim that heirloom vegetables are more nutritious or contain more vitamins and minerals than more recently developed vegetables. Current research does not support the claim that heirloom varieties generally contain a greater concentration of nutrients; however, nutrient concentration and composition does appear to vary between different cultivars. Nevertheless, heirloom varieties may still contain the genetic basis for useful traits that can be employed to improve modern crops, including for human nutritional qualities. Heirloom varieties are also critical to promoting global crop diversity, which has generally declined since the middle of the 20th century. Heirloom crops may contain genetic material that is distinct from varieties typically grown in monocrop systems, many of which are hybrid varieties. Monocrop systems tend to be vulnerable to disease and pest outbreaks, which can decimate whole industries due to the genetic similarity between plants. Some organizations have employed seed banks and vaults to preserve and protect crop genetics against catastrophic loss. One of the most notable of these seed banks is the Svalbard Global Seed Vault located in Svalbard, Norway, which safeguards approximately 1.2 million seed samples with capacity for up to 4.5 million. Some writers and farmers have criticized the apparent reliance on seed vaults, however, and argue that heirloom and rare varieties are better protected against extinction when actively planted and grown than stored away with no immediate influence on crop genetic diversity. Examples Bhutanese red rice Black rice Heirloom tomato See also Ark of Taste Biodiversity Community gardening History of gardening Association Kokopelli Landrace List of organic gardening and farming topics Local food Orthodox seed Rare breed Recalcitrant seed Seed saving Seedbank Slow Food Kyoyasai, a specific class of Japanese heirloom vegetables originating around Kyoto, Japan. References Further reading External links What is an heirloom vegetable? Heirloom Vegetables from the Home and Garden Information Center at Clemson University FAO/IAEA Programme Mutant Variety Database FDA Statement of Policy - Foods Derived from New Plant Varieties DEFRA - Plant varieties and seeds The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a primary drawback of growing heirloom varieties compared to commercially available hybrid varieties? A. Higher disease resistance B. Greater productivity C. Lower disease resistance D. Increased shelf life Answer:
C. Lower disease resistance
Relavent Documents: Document 0::: A dynamic shear rheometer, commonly known as DSR, is used for research and development as well as for quality control in the manufacture of a wide range of materials. Dynamic shear rheometers have been used since 1993 when Superpave was used for characterising and understanding high temperature rheological properties of asphalt binders in both the molten and solid state and is fundamental in order formulate the chemistry and predict the end-use performance of these materials. This is done by deriving the complex modulus (G*) from the storage modulus (elastic response, G') and loss modulus (viscous behaviour, G") yielding G* as a function of stress over strain. It is used to characterize the viscoelastic behavior of asphalt binders at intermediate temperatures from . Document 1::: A modillion is an ornate bracket, more horizontal in shape and less imposing than a corbel. They are often seen underneath a cornice which helps to support them. Modillions are more elaborate than dentils (literally translated as small teeth). All three are selectively used as adjectival historic past participles (corbelled, modillioned, dentillated) as to what co-supports or simply adorns any high structure of a building, such as a terrace of a roof (a flat area of a roof), parapet, pediment/entablature, balcony, cornice band or roof cornice. Modillions occur classically under a Corinthian or a Composite cornice but may support any type of eaves cornice. They may be carved or plain. See also Glossary of architecture Gallery References Document 2::: Heterotopia is a concept elaborated by philosopher Michel Foucault to describe certain cultural, institutional and discursive spaces that are somehow "other": disturbing, intense, incompatible, contradictory or transforming. Heterotopias are "worlds within worlds": both similar to their surroundings, and contrasting with or upsetting them. Foucault provides examples: ships, cemeteries, bars, brothels, prisons, gardens of antiquity, fairs, Muslim baths and many more. Foucault outlines the notion of heterotopia on three occasions between 1966 and 1967. A lecture given by Foucault to a group of architects in 1967 is the most well-known explanation of the term. His first mention of the concept is in his preface to The Order of Things, and refers to texts rather than socio-cultural spaces. Etymology Heterotopia follows the template established by the notions of utopia and dystopia. The prefix hetero- is from Ancient Greek ἕτερος (héteros, "other, another, different") and is combined with the Greek morpheme τόπος (topos) and means "place". A utopia is an idea or an image that is not real but represents a perfected version of society, such as Thomas More's book or Le Corbusier's drawings. As Walter Russell Mead has written, "Utopia is a place where everything is good; dystopia is a place where everything is bad; heterotopia is where things are different — that is, a collection whose members have few or no intelligible connections with one another." In Foucault Foucault uses the term heterotopia () to describe spaces that have more layers of meaning or relationships to other places than immediately meet the eye. In general, a heterotopia is a physical representation or approximation of a utopia, or a parallel space (such as a prison) that contains undesirable bodies to make a real utopian space possible. Foucault explains the link between utopias and heterotopias using the example of a mirror. 
A mirror is a utopia because the image reflected is a "placeless place", an Document 3::: The Planetary Science Decadal Survey is a serial publication of the United States National Research Council produced for NASA and other United States Government Agencies such as the National Science Foundation. The documents identify key questions facing planetary science and outlines recommendations for space and ground-based exploration ten years into the future. Missions to gather data to answer these big questions are described and prioritized, where appropriate. Similar decadal surveys cover astronomy and astrophysics, earth science, and heliophysics. As of 2022 there have been three "Decadals", one published in April 2002 for the decade from 2003 to 2013, one published on March 7, 2011 for the decade from 2013 to 2022, and one published on April 19, 2022 for the decade from 2023 to 2032. Before the decadal surveys Planetary Exploration, 1968-1975, published in 1968, recommended missions to Jupiter, Mars, Venus, and Mercury in that order of priority. Report of Space Science, 1975 recommended exploration of the outer planets. Strategy for Exploration of the Inner Planets, 1977–1987 was published in 1977. Strategy for the Exploration of Primitive Solar-System Bodies--Asteroids, Comets, and Meteoroids, 1980–1990 was published in 1980. A Strategy for Exploration of the Outer Planets, 1986-1996 was published in 1986. Space Science in the Twenty-First Century – Imperatives for the Decades 1995 to 2015, published in 1988, recommended a focus on "Galileo-like missions to study Saturn, Uranus and Neptune" including a mission to rendezvous with Saturn's rings and study Titan. It also recommended study of the moon with a "Lunar Geoscience Orbiter", a network of lunar rovers and sample return from the lunar surface. The report recommended a Mercury Orbiter to study not only that planet but provide some solar study as well. A "Program of Extensive Study of Mars" beginning with the Mars Pathfinder mission was planned for 1995 to be followed up by one in 1998 to ret The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What was the primary purpose of the DreamLab app developed by Imperial College London and the Vodafone Foundation? A. To entertain users with games B. To research various health issues and climate change C. To improve smartphone battery life D. To provide social networking features Answer:
B. To research various health issues and climate change
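The dynamic shear rheometer excerpt above derives the complex modulus G* from the storage modulus G' and the loss modulus G''. As a minimal illustrative sketch, not drawn from the excerpt and using made-up moduli, the magnitude |G*| (the ratio of peak stress to peak strain in the oscillatory test) and the phase angle follow directly from G' and G''; the final line computes G*/sin(delta), the quantity Superpave commonly uses as a high-temperature rutting parameter for binders.

import math

# Hypothetical (made-up) moduli for an asphalt binder, in pascals.
G_storage = 2.0e6   # G', the elastic (storage) part
G_loss = 5.0e6      # G'', the viscous (loss) part

G_star = math.hypot(G_storage, G_loss)            # |G*| = sqrt(G'^2 + G''^2) = peak stress / peak strain
delta_rad = math.atan2(G_loss, G_storage)         # phase angle between stress and strain
rutting = G_star / math.sin(delta_rad)            # G*/sin(delta), Superpave high-temperature rutting parameter

print(f"|G*| = {G_star:.3e} Pa, delta = {math.degrees(delta_rad):.1f} deg, G*/sin(delta) = {rutting:.3e} Pa")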
Relavent Documents: Document 0::: Jason X: Planet of the Beast is a 2005 British science fiction horror novel written by Nancy Kilpatrick and published by Black Flame. A tie-in to the Friday the 13th series of American horror films, it is the third in a series of five Jason X novels published by Black Flame and involves undead cyborg Jason Voorhees running amok on G7, a space station orbiting Planet #666. Plot G7, a research station orbiting Planet #666, receives distress signals from an approaching derelict vessel, Black Star 13. The crew of The Revival, a spaceship docked with G7, boards and investigates Black Star 13, the crew of which had found a waste disposal rocket adrift in space before being slaughtered by the vessel's only passenger, undead cyborg Jason Voorhees. Jason attacks The Revival, causing it and Black Star 13 to crash on Planet #666. Professor Claude Bardox, the lead scientist on G7, discovers Jason was aboard Black Star 13. Bardox, believing Jason's nanotechnology-enhanced physiology is the key to genetic breakthroughs, especially in the field of cloning, sets out to retrieve samples of Jason's DNA. Jason rips Bardox's prosthetic arm off and murders Bardox's assistant, Emery Peterson, and pilot, Felicity Lawrence. Despite this, Bardox is successful in bringing samples of Jason's genetic material back to G7, unaware Jason climbed aboard his shuttle to escape Planet #666. After Jason slays two of the station's personnel, Andre Desjardines and Doctor Brandi Essex Williams, Bardox, oblivious to Jason's presence on G7, knocks the rest of the station's crew out by tampering with the air and sets to work modifying the genetic samples he took from Jason. Bardox wants to create a new, perfect breed of human with Jason's DNA, which he uses to artificially inseminate one of his subordinates, a fellow geneticist named London Jefferson. Bardox believes humanity is being held back by oppressive morality and physical frailty and also wants to show up and impress his abusive father back on Ea Document 1::: The Mining Research and Development Establishment (MRDE) was a division of the National Coal Board. Its site in Bretby, Derbyshire is now a commercial business park. History MRDE's function was research into and testing of mining equipment and procedures. It was created in 1969 with a merger between the Central Engineering Establishment (CEE) and the Mining Research Establishment (MRE). MRE was set up in 1951 to work on projects in conjunction with National Coal Board (NCB) headquarters divisions such as the Production Department and Scientific Department. It was based at Isleworth in West London. CEE was created in 1954 to work on research and development projects in conjunction with other departments, and was based at Bretby. In 1985 the MRDE merged with the Mining Department. Awards It won the Queens Award for Technological Achievement in 1991 for its extraction drum for dust and frictional ignition control. Structure The site was on the south side of the A511 in the south of Derbyshire. See also National Coal Board References Document 2::: To help compare different orders of magnitude, the following list describes various speed levels between approximately 2.2×10^-18 m/s and 3.0×10^8 m/s (the speed of light). Values in bold are exact.
List of orders of magnitude for speed See also Typical projectile speeds - also showing the corresponding kinetic energy per unit mass Neutron temperature References Document 3::: In computing, a segmentation fault (often shortened to segfault) or access violation is a failure condition raised by hardware with memory protection, notifying an operating system (OS) the software has attempted to access a restricted area of memory (a memory access violation). On standard x86 computers, this is a form of general protection fault. The operating system kernel will, in response, usually perform some corrective action, generally passing the fault on to the offending process by sending the process a signal. Processes can in some cases install a custom signal handler, allowing them to recover on their own, but otherwise the OS default signal handler is used, generally causing abnormal termination of the process (a program crash), and sometimes a core dump. Segmentation faults are a common class of error in programs written in languages like C that provide low-level memory access and few to no safety checks. They arise primarily due to errors in use of pointers for virtual memory addressing, particularly illegal access. Another type of memory access error is a bus error, which also has various causes, but is today much rarer; these occur primarily due to incorrect physical memory addressing, or due to misaligned memory access – these are memory references that the hardware cannot address, rather than references that a process is not allowed to address. Many programming languages have mechanisms designed to avoid segmentation faults and improve memory safety. For example, Rust employs an ownership-based model to ensure memory safety. Other languages, such as Lisp and Java, employ garbage collection, which avoids certain classes of memory errors that could lead to segmentation faults. Overview A segmentation fault occurs when a program attempts to access a memory location that it is not allowed to access, or attempts to access a memory location in a way that is not allowed (for example, attempting to write to a read-only location, or to overwrite part The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What does the 10G-EPON standard primarily support in terms of data transmission rates for upstream and downstream directions? A. 1 Gbit/s upstream and 10 Gbit/s downstream B. 10 Gbit/s upstream and 1 Gbit/s downstream C. 10 Gbit/s in both upstream and downstream directions D. 25 Gbit/s upstream and 10 Gbit/s downstream Answer:
C. 10 Gbit/s in both upstream and downstream directions
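The segmentation fault excerpt above describes how an illegal memory access is turned into a signal (typically SIGSEGV) that usually terminates the process. As a small, deliberately destructive sketch, assuming a CPython interpreter on a platform where address 0 is unmapped, the crash can be provoked from Python through ctypes; faulthandler is enabled first so the interpreter reports the fault before dying. Run it only in a throwaway process, since the interpreter does not survive the call.

import ctypes
import faulthandler

# Ask CPython to print a traceback when a fatal signal such as SIGSEGV arrives.
faulthandler.enable()

# Read memory at address 0. On typical platforms that page is not mapped into the
# process, so the hardware faults, the OS delivers SIGSEGV, and the default handler
# terminates the interpreter (possibly producing a core dump).
ctypes.string_at(0)

print("not reached")  # only executes if, unusually, address 0 happens to be readable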
Relavent Documents: Document 0::: Distributed.net is a volunteer computing effort that is attempting to solve large scale problems using otherwise idle CPU or GPU time. It is governed by Distributed Computing Technologies, Incorporated (DCTI), a non-profit organization under U.S. tax code 501(c)(3). Distributed.net is working on RC5-72 (breaking RC5 with a 72-bit key). The RC5-72 project is on pace to exhaust the keyspace in just under 37 years as of January 2025, although the project will end whenever the required key is found. RC5 has eight unsolved challenges from RSA Security, although in May 2007, RSA Security announced that they would no longer be providing prize money for a correct key to any of their secret key challenges. distributed.net has decided to sponsor the original prize offer for finding the key as a result. In 2001, distributed.net was estimated to have a throughput of over 30 TFLOPS. , the throughput was estimated to be the same as a Cray XC40, as used in the Lonestar 5 supercomputer, or around 1.25 petaFLOPs. History A coordinated effort was started in February 1997 by Earle Ady and Christopher G. Stach II of Hotjobs.com and New Media Labs, as an effort to break the RC5-56 portion of the RSA Secret-Key Challenge, a 56-bit encryption algorithm that had a $10,000 USD prize available to anyone who could find the key. Unfortunately, this initial effort had to be suspended as the result of SYN flood attacks by participants upon the server. A new independent effort, named distributed.net, was coordinated by Jeffrey A. Lawson, Adam L. Beberg, and David C. McNett along with several others who would serve on the board and operate infrastructure. By late March 1997 new proxies were released to resume RC5-56 and work began on enhanced clients. A cow head was selected as the icon of the application and the project's mascot. The RC5-56 challenge was solved on October 19, 1997, after 250 days. The correct key was "0x532B744CC20999" and the plaintext message read "The unknown message is Document 1::: ETUT (European Training network in collaboration with Ukraine for electrical Transport) is a research project funded by the European Commission's Horizon 2020 program under the Marie Sktodowska-Curie Actions Innovative Training Networks (MSCA- ITN) scheme. The project, undertaken by a collaborative effort of the University of Twente, the University of Nottingham, and Dnipro National University of Railway Transport, aims to develop efficient interfacing technology for more-electric transport amidst the ever-increasing demand in transportation systems which contribute to increased carbon dioxide emissions. The project has employed 12 Early Stage Researches (doctoral candidates) who will work closely with six industrial partners to improve upon the existing electrical and energy storage systems that will help in alleviating the reliance on non-renewable energy sources for large-scale transportation systems such as railways and maritime transport. The project is segregated into two main groups with one focusing on power electronics for efficient use of energy resources in power delivery, and the other on electromagnetic compatibility of such systems. History With the increasing use of solid-state devices in transportation systems, a number of electromagnetic interference and interoperability problems are occurring which could hinder the electrification of transport on a global scale. 
This requires a closer inspection of the interaction of power electronic converters with equipment of the information and communications technology. With this in mind, MSCA ETUT brought together researchers from different academic backgrounds spanning all over the globe to contrive innovative ways of countering these problems. Jan Abraham Ferreira (IEEE Fellow, former President of IEEE Power Electronics Society ), Frank Leferink (IEEE Fellow and Director EMC at Thales Nederland), Patrick Wheeler (IEEE Fellow and Global Director of the University of Nottingham's Institute of Aerospace Tech Document 2::: The 833 cents scale is a musical tuning and scale proposed by Heinz Bohlen based on combination tones, an interval of 833.09 cents, and, coincidentally, the Fibonacci sequence. The golden ratio is , which as a musical interval is 833.09 cents (). In the 833 cents scale this interval is taken as an alternative to the octave as the interval of repetition, however the golden ratio is not regarded as an equivalent interval (notes 833.09 cents apart are not "the same" in the 833 cents scale the way notes 1200 cents apart are in traditional tunings). Other music theorists such as Walter O'Connell, in his 1993 "The Tonality of the Golden Section", and Lorne Temes in 1970, appear to have also created this scale prior to Bohlen's discovery of it. Derivation Starting with any interval, take the interval produced by the highest original tone and the closest combination tone. Then do the same for that interval. These intervals "converge to a value close to 833 cents. That means nothing else than that for instance for an interval of 144:89 (833.11 cents) both the summation and the difference tone appear...again 833 cents distance from this interval". For example, 220 Hz and 220 Hz (unison) produce combination tones at 0 and 440 Hz. 440 Hz is an octave above 220 Hz. 220 Hz and 440 Hz produce combination tones at 220 Hz and 660 Hz. 660 Hz is a perfect fifth (3:2) above 440 Hz, and produce combination tones at 220 Hz and 1,100 Hz. 1,100 Hz is a major sixth (5:3) above 660 Hz, and produce combination tones at 440 Hz and 1,760 Hz. 1100 Hz and 1760 Hz are a minor sixth (8:5), and so on. "It is by the way unimportant which interval we choose as a starting point for the above exercise; the result is always 833 cent." Once the interval of 833.09 cents is determined, a stack of them is produced: Two stacks are also produced on 3:2 and its inverse 4:3 to provide steps 2 and 5, creating a two dimensional lattice. Given that the golden ratio is an irrational number there are three infin Document 3::: GI Monocerotis, also known as Nova Monocerotis 1918, was a nova that erupted in the constellation Monoceros during 1918. It was discovered by Max Wolf on a photographic plate taken at the Heidelberg Observatory on 4 February 1918. At the time of its discovery, it had a photographic magnitude of 8.5, and had already passed its peak brightness. A search of plates taken at the Harvard College Observatory showed that it had a photographic magnitude of 5.4 on 1 January 1918, so it would have been visible to the naked eye around that time. By March 1918 it had dropped to ninth or tenth magnitude. By November 1920 it was a little fainter than 15th magnitude. A single pre-eruption photographic detection of GI Monocerotis exists, showing its magnitude was 15.1 before the nova event. GI Monoceros dropped by 3 magnitudes from its peak in about 23 days, making it a "fast nova". 
Long after the nova eruption, six small outbursts with a mean amplitude of 0.9 magnitudes were detected when the star was monitored from the year 1991 through 2000. Radio emission from the nova has been detected at the JVLA in the C (5 GHz), X (8 GHz) and K (23 GHz) bands. All novae are binary stars, with a "donor" star orbiting a white dwarf. The two stars are so close together that matter is transferred from the donor star to the white dwarf. Worpel et al. report that the orbital period for the binary is probably 4.33 hours, and there is a 48.6 minute period which may represent the rotation period for the white dwarf. Their X-ray observations indicate that GI Mon is a non-magnetic cataclysmic variable star, meaning that the material lost from the donor star forms an accretion disk around the white dwarf, rather than flowing directly to the surface of the white dwarf. It is estimated that the donor star is transferring of material to the accretion disk each year. A 1995 search for an optically resolved nova remnant using the Anglo-Australian Telescope was unsuccessful. References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main characteristic that distinguishes Sarcoscypha dudleyi from Sarcoscypha coccinea? A. The color of the fruit body B. The size of the spores C. The presence and number of oil droplets in the spores D. The habitat preferences Answer:
C. The presence and number of oil droplets in the spores
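The distributed.net excerpt above mentions exhausting the RC5-72 keyspace in just under 37 years. A rough back-of-the-envelope check, based only on the two figures quoted there (2^72 candidate keys and a 37-year horizon), gives the implied aggregate key-test rate:

KEYSPACE = 2 ** 72                        # number of possible 72-bit RC5 keys
SECONDS_PER_YEAR = 365.25 * 24 * 3600
horizon_seconds = 37 * SECONDS_PER_YEAR   # the "just under 37 years" figure quoted above

keys_per_second = KEYSPACE / horizon_seconds
print(f"implied aggregate rate: {keys_per_second:.2e} key tests per second")  # on the order of 10**12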
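The 833 cents scale excerpt above leaves its defining numbers implicit: the golden ratio is (1 + sqrt(5))/2, about 1.618, and an interval ratio r spans 1200 * log2(r) cents. The short sketch below (my own check, not taken from the excerpt) verifies the 833.09-cent figure and shows ratios of consecutive Fibonacci numbers closing in on it:

import math

def cents(ratio):
    """Size of a frequency ratio in cents: 1200 * log2(ratio)."""
    return 1200 * math.log2(ratio)

phi = (1 + math.sqrt(5)) / 2              # the golden ratio, about 1.618
print(f"golden ratio as an interval: {cents(phi):.2f} cents")   # about 833.09

# Ratios of consecutive Fibonacci numbers converge to the golden ratio, so their
# sizes in cents converge to the same ~833.09-cent interval.
fib = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
for low, high in zip(fib, fib[1:]):
    print(f"{high}:{low} -> {cents(high / low):7.2f} cents")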
Relavent Documents: Document 0::: The Warkworth Radio Astronomical Observatory is a radio telescope observatory, located just south of Warkworth, New Zealand, about 50 km north of the Auckland CBD. It was established by the Institute for Radio Astronomy and Space Research, Auckland University of Technology. The WARK12M 12m Radio Telescope was constructed in 2008. In 2010, a licence to operate the Telecom New Zealand 30m dish was granted, which led to the commissioning of the WARK30M 30m Radio Telescope. The first observations made in conjunction with the Australian Long Baseline Array took place in 2011. The observatory was purchased by Space Operations New Zealand Limited (SpaceOps NZ) in July 2023 after the university decided to close the facility. Because of its wider ambit, the facility is now called the Warkworth Space Centre. History The Warkworth Space Centre is on land that was first developed for long-range telecommunications, operated by the New Zealand Post Office and opening on 17 July 1971. The station, primarily connecting to fourth generation Intelsat satellites, was used for satellite telephone circuits and television, including the broadcast of the 1974 British Commonwealth Games, held in Christchurch. The shallow valley site was chosen as it was sheltered from winds and radio noise, and the horizon elevation of only 5° allowed the station to be useful for transmissions to low orbit satellites. The original 30-metre antenna was decommissioned on 18 June 2008 and demolished. A second antenna and station building were opened on 24 July 1984. This was removed from commercial service in November 2010, after which the Auckland University of Technology upgraded the motor drives and began using it for radio astronomy. Technical information A hydrogen maser is installed on-site to provide the very accurate timing required by VLBI observations. The observatory has a 10 Gbit/s connection to the REANNZ network, providing high speed data transfers for files and e-VLBI as well as linking Document 1::: Indian hedgehog homolog (Drosophila), also known as IHH, is a protein which in humans is encoded by the IHH gene. This cell signaling protein is in the hedgehog signaling pathway. The several mammalian variants of the Drosophila hedgehog gene (which was the first named) have been named after the various species of hedgehog; the Indian hedgehog is honored by this one. The gene is not specific to Indian hedgehogs. Function The Indian hedgehog protein is one of three proteins in the mammalian hedgehog family, the others being desert hedgehog (DHH) and sonic hedgehog (SHH). It is involved in chondrocyte differentiation, proliferation and maturation especially during endochondral ossification. It regulates its effects by feedback control of parathyroid hormone-related peptide (PTHrP). Indian Hedge Hog, (Ihh) is one of three signaling molecules from the Hedgehog (Hh) gene family. Genes of the Hh family, Sonic Hedgehog (Shh), Desert Hedgehog (Dhh) and Ihh regulate several fetal developmental processes. The Ihh homolog is involved in the formation of chondrocytes during the development of limbs. The protein is released by small, non-proliferating, mature chondrocytes during endochondral ossification. Recently, Ihh mutations are shown to cause brachydactyly type A1 (BDA1), the first Mendelian autosomal dominant disorder in humans to be recorded. There are seven known mutations to Ihh that cause BDA1. 
Of particular interest, are mutations involving the E95 residue, which is thought to be involved with proper signaling mechanisms between Ihh and its receptors. In a mouse model, mice with mutations to the E95 residue were found to have abnormalities to their digits. Ihh may also be involved in endometrial cell differentiation and implantation. Studies have shown progesterone to upregulate Ihh expression in the murine endometrium, suggesting a role in implantation. Ihh is suspected to be involved in the downstream regulation of other signaling molecules that are known to pla Document 2::: Differential effects play a special role in certain observational studies in which treatments are not assigned to subjects at random, where differing outcomes may reflect biased assignments rather than effects caused by the treatments. Definition For two treatments, differential effects is the effect of applying one treatment in lieu of the other. Differential effects are not immune to differential biases, whose possible consequences are examined by sensitivity analysis. Methods In statistics and data science, causality is often tested via regression analysis. Several methods can be used to distinguish actual differential effects from spurious correlations. First, the balancing score (namely propensity score) matching method can be implemented for controlling the covariate balance. Second, the difference-in-differences (DID) method with a parallel trend assumption (2 groups would show a parallel trend if neither of them experienced the treatment effect) is a useful method to reduce the impact of extraneous factors and selection bias. The differential effect of treatments (DET) was explored using several examples and models. In the biomedicine area, differential effects of early hippocampal pathology were investigated on episodic and semantic memory. The differential effects of apolipoproteins E3 and E4 were also examined on neuronal growth in vitro. Document 3::: The Redfish standard is a suite of specifications that deliver an industry standard protocol providing a RESTful interface for the management of servers, storage, networking, and converged infrastructure. History The Redfish standard has been elaborated under the SPMF umbrella at the DMTF in 2014. The first specification with base models (1.0) was published in August 2015. In 2016, Models for BIOS, disk drives, memory, storage, volume, endpoint, fabric, switch, PCIe device, zone, software/firmware inventory & update, multi-function NICs), host interface (KCS replacement) and privilege mapping were added. In 2017, Models for Composability, Location and errata were added. There is work in progress for Ethernet Switching, DCIM, and OCP. In August 2016, SNIA released a first model for network storage services (Swordfish), an extension of the Redfish specification. 
Industry adoption Redfish support on server Advantech SKY Server BMC Dell iDRAC BMC with minimum iDRAC 7/8 FW 2.40.40.40, iDRAC9 FW 3.00.00.0 Fujitsu iRMCS5 BMC HPE iLO BMC with minimum iLO4 FW 2.30, iLO5 and more recent HPE Moonshot BMC with minimum FW 1.41 Lenovo XClarity Controller (XCC) BMC with minimum XCC FW 1.00 Supermicro X10 BMC with minimum FW 3.0 and X11 with minimum FW 1.0 IBM Power Systems BMC with minimum OpenPOWER (OP) firmware level OP940 IBM Power Systems Flexible Service Processor (FSP) with minimum firmware level FW860.20 Cisco Integrated Management Controller with minimum IMC SW Version 3.0 Redfish support on BMC Insyde Software Supervyse BMC OpenBMC a Linux Foundation collaborative open-source BMC firmware stack American Megatrends MegaRAC Remote Management Firmware Vertiv Avocent Core Insight Embedded Management Systems Software using Redfish APIs OpenStack Ironic bare metal deployment project has a Redfish driver. Ansible has multiple Redfish modules for Remote Management including redfish_info, redfish_config, and redfish_command ManageIQ Apache CloudStack Redf The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the original name given to Byssomerulius corium when it was first described? A. Thelephora corium B. Byssomerulius corium C. Irpicaceae corium D. Christiaan corium Answer:
A. Thelephora corium
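The Redfish excerpt above describes a RESTful, HTTPS-based management interface. As a minimal sketch using the third-party requests package, with a placeholder host name and placeholder credentials, and with certificate verification disabled because BMCs commonly ship self-signed certificates, the service root that the standard defines at /redfish/v1/ can be queried like any other JSON endpoint:

import requests  # third-party HTTP client, assumed available

BMC = "https://bmc.example.com"   # placeholder address of a Redfish-capable BMC
AUTH = ("admin", "password")      # placeholder credentials

# The Redfish service root lives at /redfish/v1/ and returns JSON describing the
# collections (Systems, Chassis, Managers, ...) that the service exposes.
resp = requests.get(f"{BMC}/redfish/v1/", auth=AUTH, verify=False, timeout=10)
resp.raise_for_status()
root = resp.json()

print(root.get("RedfishVersion"))
print(root.get("Systems", {}).get("@odata.id"))  # e.g. a link such as /redfish/v1/Systems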
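The differential-effects excerpt above names difference-in-differences (DID) as a way to separate a treatment effect from common trends. A bare-bones sketch of the point estimate, with made-up group means and assuming the parallel-trend condition the excerpt mentions actually holds:

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences: change in the treated group minus change in the control group."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical outcome means before and after an intervention.
print(did_estimate(treat_pre=10.0, treat_post=15.0, control_pre=9.0, control_post=11.0))  # 3.0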
Relavent Documents: Document 0::: The commutation cell is the basic structure in power electronics. It is composed of two electronic switches (today, a high-power semiconductor, not a mechanical switch). It was traditionally referred to as a chopper, but since switching power supplies became a major form of power conversion, this new term has become more popular. The purpose of the commutation cell is to "chop" DC power into square wave alternating current. This is done so that an inductor and a capacitor can be used in an LC circuit to change the voltage. This is, in theory, a lossless process; in practice, efficiencies above 80-90% are routinely achieved. The output is usually run through a filter to produce clean DC power. By controlling the on and off times (the duty cycle) of the switch in the commutation cell, the output voltage can be regulated. This basic principle is the core of most modern power supplies, from tiny DC-DC converters in portable devices to massive switching stations for high voltage DC power transmission. Connection of two power elements A Commutation cell connects two power elements, often referred to as sources, although they can either produce or absorb power. Some requirements to connect power sources exist. The impossible configurations are listed in figure 1. They are basically: a voltage source cannot be shorted, as the short circuit would impose a zero voltage which would contradict the voltage generated by the source; in an identical way, a current source cannot be placed in an open circuit; two (or more) voltage sources cannot be connected in parallel, as each of them would try to impose the voltage on the circuit; two (or more) current sources cannot be connected in series, as each of them would try to impose the current in the loop. This applies to classical sources (battery, generator) and capacitors and inductors: At a small time scale, a capacitor is identical to a voltage source and an inductor to a current source. Connecting two capacitors with dif Document 1::: Computed tomography of the chest or chest CT is a group of computed tomography scan protocols used in medical imaging to evaluate the lungs and search for lung disorders. Contrast agents are sometimes used in CT scans of the chest to accentuate or enhance the differences in radiopacity between vascularized and less vascularized structures, but a standard chest CT scan is usually non-contrasted (i.e. "plain") and relies on different algorithms to produce various series of digitalized images known as view or "window". Modern detail-oriented scans such as high-resolution computed tomography (HRCT) is the gold standard in respiratory medicine and thoracic surgery for investigating disorders of the lung parenchyma (alveoli). Contrasted CT scans of the chest are usually used to confirm diagnosis of for lung cancer and abscesses, as well as to assess lymph node status at the hila and the mediastinum. CT pulmonary angiogram, which uses time-matched ("phased") protocols to assess the lung perfusion and the patency of great arteries and veins, particularly to look for pulmonary embolism. References Document 2::: Rachel E. Scherr is an American physics educator, currently an associate professor of physics at the University of Washington Bothell. Her research includes studies of responsive teaching and active learning, video and gestural analysis of classroom behavior, and student understanding of energy and special relativity. 
Education and career In high school, Scherr worked as an "explainer" at the Exploratorium in San Francisco. She majored in physics at Reed College, where she used the support of a Watson Fellowship for a year abroad studying physics education in Europe, Asia, and Africa. After graduating in 1993 she went to the University of Washington for graduate study in physics and physics education, earning a master's degree in 1996 and completing her Ph.D. in 2001. Her dissertation, An investigation of student understanding of basic concepts in special relativity, was jointly supervised by Lillian C. McDermott and Stamatis Vokos. After a short-term stint at Evergreen State College, she became a postdoctoral researcher and research assistant professor at the University of Maryland from 2001 to 2010. She moved to Seattle Pacific University as a research scientist in 2008, and to her present position at the University of Washington Bothell in 2018. She was chair of the Topical Group on Physics Education Research of the American Physical Society (APS) for the 2016 term. Recognition Scherr was elected as a Fellow of the American Physical Society in 2017, after a nomination from the APS Topical Group on Physics Education Research, "for foundational research into energy learning and representations, application of video analysis methods to study physics classrooms, and physics education research community leadership". References External links Home page Document 3::: Vaccine-naive is a lack of immunity, or immunologic memory, to a disease because the person has not been vaccinated. There are a variety of reasons why a person may not have received a vaccination, including contraindications due to preexisting medical conditions, lack of resources, previous vaccination failure, religious beliefs, personal beliefs, fear of side-effects, phobias to needles, lack of information, vaccine shortages, physician knowledge and beliefs, social pressure, and natural resistance. Effect on herd immunity Communicable diseases, such as measles and influenza, are more readily spread in vaccine-naive populations, causing frequent outbreaks. Vaccine-naive persons threaten what epidemiologists call herd immunity. This is because vaccinations provide not just protection to those who receive them, but also provide indirect protection to those who remain susceptible because of the reduced prevalence of infectious diseases. Fewer individuals available to transmit the disease reduce the incidence of it, creating herd immunity. See also Immune system Outbreak Pulse vaccination strategy Vaccine References External links Centers for Disease Control and Prevention (CDC) immunization schedules CDC Vaccine contraindications The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary distinguishing feature of Lepiota babruzalka that can be observed microscopically? A. The length of the stem B. The color of the gills C. The variably shaped cystidia on the edges of the gills D. The size of the cap Answer:
C. The variably shaped cystidia on the edges of the gills
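The commutation-cell excerpt above notes that controlling the on and off times of the switch (the duty cycle) regulates the output voltage once the LC network has filtered the chopped waveform back to DC. For the common step-down (buck) arrangement, and ignoring losses, which is an idealisation rather than a claim from the excerpt, the filtered output is just the duty-cycle-weighted average of the input:

def buck_output_voltage(v_in, duty_cycle):
    """Ideal, lossless step-down (buck) stage: Vout = D * Vin with 0 <= D <= 1."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must lie between 0 and 1")
    return duty_cycle * v_in

def duty_cycle_for(v_in, v_out):
    """Duty cycle that regulates an ideal buck stage to the requested output."""
    return v_out / v_in

print(buck_output_voltage(12.0, 0.4))   # 4.8 V
print(duty_cycle_for(12.0, 5.0))        # about 0.417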
Relavent Documents: Document 0::: In theoretical computer science, Arden's rule, also known as Arden's lemma, is a mathematical statement about a certain form of language equations. Background A (formal) language is simply a set of strings. Such sets can be specified by means of some language equation, which in turn is based on operations on languages. Language equations are mathematical statements that resemble numerical equations, but the variables assume values of formal languages rather than numbers. Among the most common operations on two languages A and B are the set union A ∪ B, and their concatenation A⋅B. Finally, as an operation taking a single operand, the set A* denotes the Kleene star of the language A. Statement of Arden's rule Arden's rule states that the set A*⋅B is the smallest language that is a solution for X in the linear equation X = A⋅X ∪ B where X, A, B are sets of strings. Moreover, if the set A does not contain the empty word, then this solution is unique. Equivalently, the set B⋅A* is the smallest language that is a solution for X in X = X⋅A ∪ B. Application Arden's rule can be used to help convert some finite automatons to regular expressions, as in Kleene's algorithm. See also Regular expression Nondeterministic finite automaton Notes References Arden, D. N. (1960). Delayed logic and finite state machines, Theory of Computing Machine Design, pp. 1-35, University of Michigan Press, Ann Arbor, Michigan, USA. (open-access abstract) John E. Hopcroft and Jeffrey D. Ullman, Introduction to Automata Theory, Languages, and Computation, Addison-Wesley Publishing, Reading Massachusetts, 1979. . Chapter 2: Finite Automata and Regular Expressions, p.54. Arden, D.N. An Introduction to the Theory of Finite State Machines, Monograph No. 12, Discrete System Concepts Project, 28 June 1965. Document 1::: Inconel is a nickel-chromium-based superalloy often utilized in extreme environments where components are subjected to high temperature, pressure or mechanical loads. Inconel alloys are oxidation- and corrosion-resistant. When heated, Inconel forms a thick, stable, passivating oxide layer protecting the surface from further attack. Inconel retains strength over a wide temperature range, attractive for high-temperature applications where aluminum and steel would succumb to creep as a result of thermally-induced crystal vacancies. Inconel's high-temperature strength is developed by solid solution strengthening or precipitation hardening, depending on the alloy. Inconel alloys are typically used in high temperature applications. Common trade names for various Inconel alloys include: Alloy 625: Inconel 625, Chronin 625, Altemp 625, Sanicro 625, Haynes 625, Nickelvac 625 Nicrofer 6020 and UNS designation N06625. Alloy 600: NA14, BS3076, 2.4816, NiCr15Fe (FR), NiCr15Fe (EU), NiCr15Fe8 (DE) and UNS designation N06600. Alloy 718: Nicrofer 5219, Superimphy 718, Haynes 718, Pyromet 718, Supermet 718, Udimet 718 and UNS designation N07718. History The Inconel family of alloys was first developed before December 1932, when its trademark was registered by the US company International Nickel Company of Delaware and New York. A significant early use was found in support of the development of the Whittle jet engine, during the 1940s by research teams at Henry Wiggin & Co of Hereford, England a subsidiary of the Mond Nickel Company, which merged with Inco in 1928. The Hereford Works and its properties including the Inconel trademark were acquired in 1998 by Special Metals Corporation. 
Specific data Composition Inconel alloys vary widely in their compositions, but all are predominantly nickel, with chromium as the second element. Properties When heated, Inconel forms a thick and stable passivating oxide layer protecting the surface from further attack. Inconel retains stren Document 2::: City Brain (sometimes treated as an improper noun by sources and rendered as city brain; ) is a software system that utilizes artificial intelligence and data collection for urban management. Developed by Chinese tech company Alibaba Group, the City Brain systems have been adopted by local governments throughout the country as well as in Kuala Lumpur, the capital of Malaysia. Currently, these systems are mostly used for traffic management, but they have been used and expanded to cover other needs as well. City Brain systems have been promoted as making cities "smarter" and improving their residents' quality of life. However, the systems have also faced criticism for issues relating to privacy, cost, and their use of surveillance. History and overview The first City Brain system was announced and developed in 2016 by Alibaba Cloud for its home city of Hangzhou. First aiming to curb the city's high level of traffic congestion, it was initially "given control" of traffic lights in Xiaoshan District, where it increased traffic speed by 15%. This led to its adoption by the rest of the city in 2017, where it has seen praise for continuing to reduce congestion and aiding first responders to travel faster. In addition to traffic light management, the system also analyzes camera feeds to detect accidents and alert authorities. In the following years, many other local governments in China sought their own such systems. In addition, Kuala Lumpur, the capital of Malaysia, announced its adoption of a City Brain system in 2018 to deal with traffic. City Brain's use in Kuala Lumpur was Alibaba's first overseas implementation of the platform. By September 2019, Alibaba stated that 22 Chinese cities, including Macau, as well as Kuala Lumpur, had City Brain systems. The scope of these systems has expanded, with some being used to track pollution, alert authorities to illegal gatherings and possible conflicts of interest/corruption in government outsourcing, and aid in contact tr Document 3::: Sir Francis Rolle (1630–1686) was an English lawyer and politician who sat in the House of Commons at various times between 1656 and 1685. Early life Rolle was the only son of Henry Rolle of Shapwick in Somerset, who was Chief Justice of the King's Bench and his wife Margaret Bennett. He entered Inner Temple in 1646 and was admitted at Emmanuel College, Cambridge on 25 January 1647. He was called to the bar in 1653. Career In 1656, Rolle was elected Member of Parliament for Somerset in the Second Protectorate Parliament. He succeeded his father to the estate at Shapwick in 1656 and became JP for Somerset until July 1660, In 1657 he was commissioner for assessment for Somerset and Hampshire. He was commissioner for militia in 1659 and JP for Hampshire from 1659 to July 1660. He was commissioner for assessment for Somerset and Hampshire from January 1660 to 1680 and commissioner for militia in March 1660. In April 1660 he was elected MP for Bridgwater in the Convention Parliament. He was commissioner for sewers for Somerset in August 1660 and was JP for Somerset from September 1660 to 1680. He was High Sheriff of Hampshire from 1664 to 1665 and was knighted on 1 March 1665. 
He also became a freeman of Portsmouth in 1665. In 1669 he was briefly MP for Bridgwater again but was removed on petition. He was High Sheriff of Somerset from 1672 to 1673 and was commissioner for inquiry for Finkley forest and for New Forest in 1672. He was commissioner for inquiry for the New Forest again in 1673. In 1675 he was elected MP for Hampshire for the Cavalier Parliament and was made freeman of Winchester 1675. He was commissioner for recusants for Somerset and Hampshire in 1675 and commissioner for inquiry for the New Forest in 1676 and in 1679. In May 1679 he was elected MP for Bridgwater, and in October 1679 he was elected MP for Hampshire. He was elected MP for Hampshire again in 1681. In 1685 he was committed to the Tower of London on 26 June before he could join Monmouth who The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary purpose of the International Ideographs Core (IICore)? A. To provide a complete set of CJK Unified Ideographs characters B. To create a subset of characters for devices with limited capabilities C. To replace the ISO 10646/Unicode standard entirely D. To standardize all Chinese character encodings Answer:
B. To create a subset of characters for devices with limited capabilities
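The Arden's rule excerpt above states that the smallest solution of X = A·X ∪ B is A*·B, and that the solution is unique when A does not contain the empty word. A small worked instance of my own, chosen for brevity, takes A = {"a"} and B = {"b"}: the solution is the regular language a*b, which can be checked both with a regular expression and by verifying the fixed-point equation on all words up to a length bound.

import re

# Arden's rule: the smallest solution of X = A.X ∪ B is A*.B.
# With A = {"a"} and B = {"b"}, that solution is the regular language a*b.
assert re.fullmatch(r"a*b", "b")
assert re.fullmatch(r"a*b", "aaab")
assert re.fullmatch(r"a*b", "ba") is None

# Check the equation X = {"a"}X ∪ {"b"} on all words up to a length bound.
N = 6
X = {"a" * k + "b" for k in range(N + 1)}   # a*b truncated to words of length <= N + 1
rhs = {"a" + w for w in X} | {"b"}          # {"a"}X ∪ {"b"}
assert {w for w in rhs if len(w) <= N + 1} == X
print("fixed-point equation holds on the truncated language")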
Relavent Documents: Document 0::: A contract manufacturer (CM) is a manufacturer that contracts with a firm for components or products (in which case it is a turnkey supplier). It is a form of outsourcing. A contract manufacturer performing packaging operations is called copacker or a contract packager. Brand name companies focus on product innovation, design and sales, while the manufacturing takes place in independent factories (the turnkey suppliers). Most turnkey suppliers specialize in simply manufacturing physical products, but some are also able to handle a significant part of the design and customization process if needed. Some turnkey suppliers specialize in one base component (e.g. memory chips) or a base process (e.g. plastic molding). Business model In a contract manufacturing business model, the hiring firm approaches the contract manufacturer with a design or formula. The contract manufacturer will quote the parts based on processes, labor, tooling, and material costs. Typically a hiring firm will request quotes from multiple CMs. After the bidding process is complete, the hiring firm will select a source, and then, for the agreed-upon price, the CM acts as the hiring firm's factory, producing and shipping units of the design on behalf of the hiring firm. Job production is, in essence, manufacturing on a contract basis, and thus it forms a subset of the larger field of contract manufacturing. But the latter field also includes, in addition to jobbing, a higher level of outsourcing in which a product-line-owning company entrusts its entire production to a contractor, rather than just outsourcing parts of it. Industries that use the practice Many industries use this process, especially the aerospace, defense, computer, semiconductor, energy, medical, food manufacturing, personal care, packaging, and automotive fields. Some types of contract manufacturing include CNC machining, complex assembly, aluminum die casting, grinding, broaching, gears, and forging. The pharmaceutical ind Document 1::: The 2N7000 is an N-channel, enhancement-mode MOSFET used for low-power switching applications. The 2N7000 is a widely available and popular part, often recommended as useful and common components to have around for hobbyist use. Packaged in a TO-92 enclosure, the 2N7000 is rated to withstand 60 volts and can switch 200 millamps. Applications The 2N7000 has been referred to as a "FETlington" and as an "absolutely ideal hacker part." The word "FETlington" is a reference to the Darlington-transistor-like saturation characteristic. A typical use of these transistors is as a switch for moderate voltages and currents, including as drivers for small lamps, motors, and relays. 
In switching circuits, these FETs can be used much like bipolar junction transistors, but have some advantages: high input impedance of the insulated gate means almost no gate current is required consequently no current-limiting resistor is required in the gate input MOSFETs, unlike PN junction devices (such as LEDs) can be paralleled because resistance increases with temperature, although the quality of this load balance is largely dependent on the internal chemistry of each individual MOSFET in the circuit The main disadvantages of these FETs over bipolar transistors in switching are the following: susceptibility to cumulative damage from static discharge prior to installation circuits with external gate exposure require a protection gate resistor or other static discharge protection Non-zero ohmic response when driven to saturation, as compared to a constant junction voltage drop in a bipolar junction transistor Other devices Many other n-channel MOSFETs exist. Some part number are 2N7002, BS170, VQ1000J, and VQ1000P. They may have different pin outs, packages, and electrical properties. The BS250P is "a good p-channel analog of the 2N7000." References Document 2::: A dam is a water reservoir in the ground, confined by a barrier, embankment or excavation, on a pastoral property or similar. The term is found widely in South African, Australian and New Zealand English, and several other English dialects, such as that of Yorkshire. The term can be found in the old English folk song Three Jolly Rogues: The expression "farm dam" has this meaning unambiguously, and where the barrier or embankment is intended, it may be referred to as the "dam wall". Usage examples Examples from Australia: An example from New Zealand: Examples from South Africa: Document 3::: Bark isolates are chemicals which have been extracted from bark. Prominent medical examples are salicylic acid (active metabolite of aspirin) and paclitaxel (Taxol). The pharmacology of bark isolates is an ongoing topic of medical research. See also The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a "farm dam" as described in the text? A. A type of plant B. A water reservoir with a barrier C. A folk song D. An excavation site Answer:
B. A water reservoir with a barrier
Relavent Documents: Document 0::: Tinea incognita, also spelled tinea incognito, is a fungal infection of the skin that generally looks odd for a typical tinea infection. The border of the skin lesion is usually blurred and it appears to have florid growth. It generally occurs following the application of a steroid cream to what at first is thought to be eczema. Continued application results in expansion of the fungal infection which appears unrecognisable. Occasionally, secondary infection with bacteria occurs with concurrent pustules and impetigo. Cause The use of a topical steroid is the most common cause. Frequently, a combination topical steroid and antifungal cream is prescribed by a physician. These combinations include betamethasone dipropionate and clotrimazole (trade name Lotrisone) and triamcinolone acetonide and clotrimazole. In areas of open skin, these combinations are acceptable in treating fungal infection of the skin. In areas where the skin is occluded (groin, buttock crease, armpit), the immunosuppression by the topical steroid might be significant enough to cause tinea incognita to occur even in the presence of an effective antifungal. Diagnosis Clinical suspicion arises especially if the eruption is on the face, ankle, legs, or groin. A history of topical steroid or immunosuppressive agent is noted. Confirmation is with a skin scraping and either fungal culture or microscopic exam with potassium hydroxide solution. Characteristic hyphae are seen running through the squamous epithelial cells. Treatment The removal of the offending topical steroid or immunosuppressive agent and treatment with a topical antifungal is often adequate. If the tinea incognita is extensive or involves hair bearing areas, treatment with a systemic antifungal may be indicated. Notes References Document 1::: The Kerrison Predictor was one of the first fully automated anti-aircraft fire-control systems. It was used to automate the aiming of the British Army's Bofors 40 mm guns and provide accurate lead calculations through simple inputs on three main handwheels. The predictor could aim a gun at an aircraft based on simple inputs like the observed speed and the angle to the target. Such devices had been used on ships for gunnery control for some time, and versions such as the Vickers Predictor were available for larger anti-aircraft guns intended to be used against high-altitude bombers. Kerrison's analog computer was the first to be fast enough to be used in the demanding high-speed low-altitude role, which involved very short engagement times and high angular rates. The design was also adopted for use in the United States, where it was produced by Singer Corporation as the M5 Antiaircraft Director, later updated as the M5A1 and M5A2. The M6 was mechanically identical, differing only in running on UK-style 50 Hz power. History By the late 1930s, both Vickers and Sperry had developed predictors for use against high-altitude bombers. However, low-flying aircraft presented a very different problem, with very short engagement times and high angular rates of motion, but at the same time less need for ballistic accuracy. Machine guns had been the preferred weapon against these targets, aimed by eye and swung by hand, but these no longer had the performance needed to deal with the larger and faster aircraft of the 1930s. The British Army's new Bofors 40 mm guns were intended as their standard low-altitude anti-aircraft weapons. 
However, existing gunnery control systems were inadequate for the purpose; the range was too far to "guess" the lead, but at the same time close enough that the angle could change faster than the gunners could turn the traversal handles. Trying to operate a calculating gunsight at the same time was an added burden on the gunner. Making matters worse Document 2::: Oregon Tool, Inc. is an American company that manufactures saw chain and other equipment for the forestry, agriculture, and construction industries. Based in Portland, Oregon, Oregon Tool globally manufactures their products in ten different plants across five countries. Oregon Tool produces and markets saw chain, chain saw bars and sprockets, battery operated lawn and garden equipment, lawn mower blades, string trimmer line, concrete cutting saws and chain, and agricultural cutting equipment for OEMs, dealers, and end-user markets. Oregon Tool employs approximately 3,300 people across the world in 17 global locations. History Joseph Buford Cox founded the Oregon Saw Chain Company in 1947. An avid inventor, Cox designed the modern saw chain after witnessing a timber beetle larvae chewing through some timber in a Pacific Northwest forest. The saw chain he ultimately created serves as the foundation for the modern chipper chain design, and influenced other forms of modern saw chain design. Known as Biomimetics, Cox solved a complex problem by taking inspiration from nature. After experimenting with casting techniques, Cox later founded Precision Castparts Corp. In 1953, John D. Gray acquired the company and changed the name to Omark Industries. In the 1980s, Omark began researching and adopting just-in-time manufacturing processes. By visiting factories in Japan, Omark studied examples of lean manufacturing. The concepts kept the saw chain products viable in the export market during a period with a strong dollar. In 1985, Omark Industries was purchased by Blount Industries, Inc. and its founder, Winton M. Blount. In 1993, Blount Industries, Inc. was renamed Blount International, Inc., and shifted its focus from construction to manufacturing. In 1997, Blount purchased Frederick Manufacturing Corp. of Kansas City, Missouri and added lawnmower blades and garden products to its portfolio. In 1999, Blount was acquired by Lehman Brothers Merchant Banking. In 2002, Blou Document 3::: Hendrik Elingsz van Rijgersma (born 5 January 1835 in Lemmer, Province of Friesland, the Netherlands, died 4 March 1877 in St. Martin) was a Dutch naturalist, physician, amateur botanist, malacologist and ichthyologist. Biography Rijgersma became a physician in 1858, and practiced medicine in the small town of Jisp and on the island of Marken. In 1861 he married Maria Henriette Gräfing; they had seven children. When slavery was abolished in the Dutch colonies in 1863, he was one of six physicians appointed to provide medical care to the liberated slaves on the island of St. Martin in the Netherlands Antilles, where he served as government physician until his untimely death at the age of 42. There he collected many fossils, plants, birds, reptiles, fishes, mollusks, crustaceans and insects. Hendrik van Rijgersma was an excellent painter and left to posterity many, mostly unpublished, drawings, sketches and water colors of plants, shells and other subjects. His animal collections were sent by him to the Academy of Natural Sciences of Philadelphia, of which he was a corresponding member. The plants he sent to the Berlin herbarium were destroyed. 
There apparently are also plants he collected at the National Herbarium of the Netherlands at Leyden. In the Swedish Museum of Natural History, there are 129 plants collected by van Rijgersma, of which 74 have illustrations. A species of snake, Alsophis rijgersmaei, is named in his honor. Works by and about Rijgersma WorldCat References Sources Biography by Mia Ehn, Swedish Museum of Natural History Ehn, Mia & Zanoni, T. A. The herbarium and botanical art of Hendrik Elingsz van Rijgersma, Taxon 51: 513–520, 2002. External links Plants collected by Rijgersma at the Swedish Museum of Natural History The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary function of lipoproteins in the body? A. To transport oxygen in the blood B. To transport hydrophobic lipid molecules in water C. To provide structural support to cells D. To store energy in muscle tissues Answer:
B. To transport hydrophobic lipid molecules in water
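The Kerrison Predictor excerpt above describes automating the lead calculation for a fast, low-flying target. The sketch below is not the Kerrison mechanism, which was an electromechanical analog computer; it is only the simplest textbook intercept relation, assuming a target crossing perpendicular to the line of sight and a projectile flying straight at constant speed (no drag or gravity), in which case sin(lead) = target speed / projectile speed.

import math

def lead_angle_deg(target_speed, projectile_speed):
    """Lead angle in degrees for a target crossing perpendicular to the line of sight,
    assuming a straight, constant-speed projectile: sin(lead) = vt / vp."""
    return math.degrees(math.asin(target_speed / projectile_speed))

# Illustrative numbers only: a 100 m/s aircraft against an ~880 m/s muzzle velocity.
print(f"{lead_angle_deg(100.0, 880.0):.1f} degrees lead")  # about 6.5 degrees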
Relavent Documents: Document 0::: Workerism is a political theory that emphasizes the importance of or glorifies the working class. Workerism, or , was of particular significance in Italian left-wing politics, being largely embraced in Italian political groups ranging from Italian communists to Italian anarchists. As revolutionary praxis Workerism (or ) is a political analysis, whose main elements were to merge into autonomism, that starts out from the power of the working class. Michael Hardt and Antonio Negri, known as operaist and autonomist writers, offer a definition of , quoting from Karl Marx as they do so: builds on Marx's claim that capital reacts to the struggles of the working class; the working class is active and capital reactive. Technological development: Where there are strikes, machines will follow. "It would be possible to write a whole history of the inventions made since 1830 for the sole purpose of providing capital with weapons against working-class revolt." (Capital, Vol. 1, Chapter 15, Section 5) Political development: The factory legislation in England was a response to the working class struggle over the length of the working day. "Their formulation, official recognition and proclamation by the State were the result of a long class struggle." (Capital, Vol. 1, Chapter 10, Section 6) takes this as its fundamental axiom: the struggles of the working class precede and prefigure the successive re-structurations of capital. The workerists followed Marx in seeking to base their politics on an investigation of working class life and struggle. Through translations made available by Danilo Montaldi and others, they drew upon previous activist research in the United States by the Johnson–Forest Tendency and in France by the group Socialisme ou Barbarie. The Johnson–Forest Tendency had studied working class life and struggles within the Detroit auto industry, publishing pamphlets such as "The American Worker" (1947), "Punching Out" (1952) and "Union Committeemen and Wildcat Document 1::: In mathematics, linearization (British English: linearisation) is finding the linear approximation to a function at a given point. The linear approximation of a function is the first order Taylor expansion around the point of interest. In the study of dynamical systems, linearization is a method for assessing the local stability of an equilibrium point of a system of nonlinear differential equations or discrete dynamical systems. This method is used in fields such as engineering, physics, economics, and ecology. Linearization of a function Linearizations of a function are lines—usually lines that can be used for purposes of calculation. Linearization is an effective method for approximating the output of a function at any based on the value and slope of the function at , given that is differentiable on (or ) and that is close to . In short, linearization approximates the output of a function near . For example, . However, what would be a good approximation of ? For any given function , can be approximated if it is near a known differentiable point. The most basic requisite is that , where is the linearization of at . The point-slope form of an equation forms an equation of a line, given a point and slope . The general form of this equation is: . Using the point , becomes . Because differentiable functions are locally linear, the best slope to substitute in would be the slope of the line tangent to at . 
While the concept of local linearity applies the most to points arbitrarily close to , those relatively close work relatively well for linear approximations. The slope should be, most accurately, the slope of the tangent line at . Visually, the accompanying diagram shows the tangent line of at . At , where is any small positive or negative value, is very nearly the value of the tangent line at the point . The final equation for the linearization of a function at is: For , . The derivative of is , and the slope of at is . Example To find , Document 2::: A certificate policy (CP) is a document which aims to state what are the different entities of a public key infrastructure (PKI), their roles and their duties. This document is published in the PKI perimeter. When in use with X.509 certificates, a specific field can be set to include a link to the associated certificate policy. Thus, during an exchange, any relying party has an access to the assurance level associated with the certificate, and can decide on the level of trust to put in the certificate. RFC 3647 The reference document for writing a certificate policy is, , . The RFC proposes a framework for the writing of certificate policies and Certification Practice Statements (CPS). The points described below are based on the framework presented in the RFC. Main points Architecture The document should describe the general architecture of the related PKI, present the different entities of the PKI and any exchange based on certificates issued by this very same PKI. Certificate uses An important point of the certificate policy is the description of the authorized and prohibited certificate uses. When a certificate is issued, it can be stated in its attributes what use cases it is intended to fulfill. For example, a certificate can be issued for digital signature of e-mail (aka S/MIME), encryption of data, authentication (e.g. of a Web server, as when one uses HTTPS) or further issuance of certificates (delegation of authority). Prohibited uses are specified in the same way. Naming, identification and authentication The document also describes how certificates names are to be chosen, and besides, the associated needs for identification and authentication. When a certification application is filled, the certification authority (or, by delegation, the registration authority) is in charge of checking the information provided by the applicant, such as his identity. This is to make sure that the CA does not take part in an identity theft. Key generation The ge Document 3::: Meteodyn WT, commonly known as Meteodyn is a wind energy software program that uses computational fluid dynamics (CFD) to conduct wind resource assessment. Developed and marketed by Meteodyn, Meteodyn WT was first released in September 2009. The software quantifies the wind resource in a desired terrain in order to assess the feasibility of a proposed wind farm. The software's objective is to design the most profitable wind farm. This is achieved by taking into account the measured wind data at a measurement tower and the terrain conditions. Both of these are essential to be able to obtain the wind conditions and therefore the wind resources of the desired terrain. Meteodyn WT has been validated with actual wind measurements by independent studies. Meteodyn WT is used by wind turbine manufacturers, wind farm developers, consulting firms and wind farm operators. 
Graphical user interface The current version of Meteodyn WT displays all projects in one world map; this map already includes terrain and roughness data. Features Meteodyn WT features a geographical data management tool, a meteorological data processing tool, a LIDAR correction tool, a wind turbine creation tool, a wind turbines micro-siting tool, a wind atlas tool and an auto-convergence tool. It also includes the following functions: wind resource mapping, wake effect computation, annual energy production (AEP) with / without wake effect, directional wind shear at each turbine, wind and turbulence matrices at each turbine, IEC compliance, automatic report generation, losses and uncertainties. Compatibility with other software The current version of Meteodyn WT exports to the following wind energy software formats: .wrg, .rsf, .wrb, .fmv and .flowres. Google earth and Surfer export formats are also available. Solution method Meteodyn WT uses computational fluid dynamics (CFD) which directly takes into consideration the geometry of the terrain in question. The software solves the Navier-Stokes equa The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the six parameters used to classify a rock mass in the Rock Mass Rating (RMR) system? A. Uniaxial compressive strength, RQD, spacing of discontinuities, condition of discontinuities, groundwater conditions, orientation of discontinuities B. Density, porosity, uniaxial compressive strength, spacing of discontinuities, moisture content, orientation of discontinuities C. Rock quality designation, ground stability, water saturation, temperature, orientation of discontinuities, cohesion D. Uniaxial compressive strength, permeability, spacing of discontinuities, weathering, groundwater conditions, cohesion Answer:
A. Uniaxial compressive strength, RQD, spacing of discontinuities, condition of discontinuities, groundwater conditions, orientation of discontinuities
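The linearization passage above breaks off just as its worked example begins, so a brief illustrative completion may help; the function and evaluation point below are assumptions chosen for illustration, not a reconstruction of the truncated text.

    L(x) = f(a) + f'(a)\,(x - a)
    f(x) = \sqrt{x}, \quad a = 4: \quad f(4) = 2, \quad f'(4) = \tfrac{1}{4}
    L(4.001) = 2 + \tfrac{1}{4}(0.001) = 2.00025 \approx \sqrt{4.001}

The approximation works because 4.001 is close to the known differentiable point a = 4, exactly as the passage describes.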
Relavent Documents: Document 0::: ISO 31-11:1992 was the part of international standard ISO 31 that defines mathematical signs and symbols for use in physical sciences and technology. It was superseded in 2009 by ISO 80000-2:2009 and subsequently revised in 2019 as ISO-80000-2:2019. It included definitions for symbols for mathematical logic, set theory, arithmetic and complex numbers, functions and special functions and values, matrices, vectors, and tensors, coordinate systems, and miscellaneous mathematical relations. References and notes Document 1::: VFDB also known as Virulence Factor Database is a database that provides scientist quick access to virulence factors in bacterial pathogens. It can be navigated and browsed using genus or words. A BLAST tool is provided for search against known virulence factors. VFDB contains a collection of 16 important bacterial pathogens. Perl scripts were used to extract positions and sequences of VF from GenBank. Clusters of Orthologous Groups (COG) was used to update incomplete annotations. More information was obtained by NCBI. VFDB was built on Windows operation systems on DELL PowerEdge 1600SC servers. Document 2::: Transposition is the process by which a specific genetic sequence, known as a transposon, is moved from one location of the genome to another. Simple, or conservative transposition, is a non-replicative mode of transposition. That is, in conservative transposition the transposon is completely removed from the genome and reintegrated into a new, non-homologous locus, the same genetic sequence is conserved throughout the entire process. The site in which the transposon is reintegrated into the genome is called the target site. A target site can be in the same chromosome as the transposon or within a different chromosome. Conservative transposition uses the "cut-and-paste" mechanism driven by the catalytic activity of the enzyme transposase. Transposase acts like DNA scissors; it is an enzyme that cuts through double-stranded DNA to remove the transposon, then transfers and pastes it into a target site. A simple, or conservative, transposon refers to the specific genetic sequence that is moved via conservative transposition. These specific genetic sequences range in size, they can be hundreds to thousands of nucleotide base-pairs long. A transposon contains genetic sequences that encode for proteins that mediate its own movement, but can also carry genes for additional proteins. Transposase is encoded within the transposon DNA and used to facilitate its own movement, making this process self-sufficient within organisms. All simple transposons contain a transposase encoding region flanked by terminal inverted repeats, but the additional genes within the transposon DNA can vary. Viruses, for example, encode the essential viral transposase needed for conservative transposition as well as protective coat proteins that allow them to survive outside of cells, thus promoting the spread of mobile genetic elements. "Cut-and-paste" transposition method The mechanism by which conservative transposition occurs is called the "cut-and-paste" method, which involves five main steps: The transposase enzyme is bound to the inverted repeated sequences flanking the ends of the transposon Inverted repeats define the ends of transposons and provide recognition sites for transposase to bind. The formation of the transposition complex In this step the DNA bends and folds into a pre-excision synaptic complex so the two transposases enzymes can interact. 
The interaction of these transposases activates the complex; transposase makes double stranded breaks in the DNA and the transposon is fully excised. The transposase enzymes locate, recognize and bind to the target site within the target DNA. Transposase creates a double stranded break in the DNA and integrates the transposon into the target site. Both the excision and insertion of the transposon leaves single or double stranded gaps in the DNA, which are repaired by host enzymes such as DNA polymerase. Scientific application Current researchers have developed gene transfer systems on the basis of conservative transposition which can integrate new DNA in both invertebrates and vertebrate genomes. Scientists alter the genetic sequence of a transposon in a laboratory setting, then insert this sequence into a vector which is then inserted into a target cell. The transposase coding region of these transposons is replaced by a gene of interest intended to be integrated into the genome. Conservative transposition is induced by the expression of transposase from another source within the cell, since the transposon no longer contains the transposase coding region to be self sufficient. Generally a second vector is prepared and inserted into the cell for expression of transposase. This technique is used in transgenesis and insertional mutagenesis research fields. The Sleeping Beauty transposon system is an example of gene transfer system developed for use in vertebrates. Further development in integration site preferences of transposable elements is expected to advance the technologies of human gene therapy. Document 3::: The persistent random walk is a modification of the random walk model. A population of particles are distributed on a line, with constant speed , and each particle's velocity may be reversed at any moment. The reversal time is exponentially distributed as , then the population density evolves according towhich is the telegrapher's equation. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary function of the enzyme transposase in conservative transposition? A. To replicate the transposon B. To cut and paste the transposon into the genome C. To repair DNA gaps after transposon insertion D. To encode additional proteins for transposon movement Answer:
B. To cut and paste the transposon into the genome
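Purely as an illustrative aside to the cut-and-paste mechanism described above, the short Python sketch below mimics non-replicative excision and reinsertion of a transposon within a string "genome"; the sequence, coordinates, and function name are hypothetical, and the toy model ignores real biology such as transposase binding to inverted repeats and gap repair by host enzymes.

    def conservative_transposition(genome, start, end, target):
        # Toy model of "cut-and-paste" transposition on a string genome.
        # The element at genome[start:end] is fully excised (non-replicative)
        # and reintegrated at index `target` of the remaining sequence.
        transposon = genome[start:end]                  # "cut"
        remainder = genome[:start] + genome[end:]
        if not 0 <= target <= len(remainder):
            raise ValueError("target site lies outside the remaining genome")
        return remainder[:target] + transposon + remainder[target:]   # "paste"

    # Hypothetical example: move the 6-base element to a new, non-homologous locus.
    genome = "ACGT" + "TTAAGG" + "CCCCGGGGACGTACGT"
    print(conservative_transposition(genome, start=4, end=10, target=12))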
Relavent Documents: Document 0::: An inertial reference unit (IRU) is a type of inertial sensor which uses gyroscopes (electromechanical, ring laser gyro or MEMS) and accelerometers (electromechanical or MEMS) to determine a moving aircraft’s or spacecraft’s change in rotational attitude (angular orientation relative to some reference frame) and translational position (typically latitude, longitude and altitude) over a period of time. In other words, an IRU allows a device, whether airborne or submarine, to travel from one point to another without reference to external information. Another name often used interchangeably with IRU is Inertial Measurement Unit. The two basic classes of IRUs/IMUs are "gimballed" and "strapdown". The older, larger gimballed systems have become less prevalent over the years as the performance of newer, smaller strapdown systems has improved greatly via the use of solid-state sensors and advanced real-time computer algorithms. Gimballed systems are still used in some high-precision applications where strapdown performance may not be as good. See also Air data inertial reference unit Inertial measurement unit External links Optical Inertial Reference Units (IRUs) Document 1::: The log-spectral distance (LSD), also referred to as log-spectral distortion or root mean square log-spectral distance, is a distance measure between two spectra. The log-spectral distance between spectra and is defined as p-norm: where and are power spectra. Unlike the Itakura–Saito distance, the log-spectral distance is symmetric. In speech coding, log spectral distortion for a given frame is defined as the root mean square difference between the original LPC log power spectrum and the quantized or interpolated LPC log power spectrum. Usually the average of spectral distortion over a large number of frames is calculated and that is used as the measure of performance of quantization or interpolation. Meaning When measuring the distortion between signals, the scale or temporality/spatiality of the signals can have different levels of significance to the distortion measures. To incorporate the proper level of significance, the signals can be transformed into a different domain. When the signals are transformed into the spectral domain with transformation methods such as Fourier transform and DCT, the spectral distance is the measure to compare the transformed signals. LSD incorporates the logarithmic characteristics of the power spectra, and it becomes effective when the processing task of the power spectrum also has logarithmic characteristics, e.g. human listening to the sound signal with different levels of loudness. Moreover, LSD is equal to the cepstral distance which is the distance between the signals' cepstrum when the p-numbers are the same by Parseval's theorem. Other Representations As LSD is in the form of p-norm, it can be represented with different p-numbers and log scales. For instance, when it is expressed in dB with L2 norm, it is defined as: . When it is represented in the discrete space, it is defined as: where and are power spectra in discrete space. Document 2::: In chemistry, a non-covalent interaction differs from a covalent bond in that it does not involve the sharing of electrons, but rather involves more dispersed variations of electromagnetic interactions between molecules or within a molecule. The chemical energy released in the formation of non-covalent interactions is typically on the order of 1–5 kcal/mol (1000–5000 calories per 6.02 molecules). 
Non-covalent interactions can be classified into different categories, such as electrostatic, π-effects, van der Waals forces, and hydrophobic effects. Non-covalent interactions are critical in maintaining the three-dimensional structure of large molecules, such as proteins and nucleic acids. They are also involved in many biological processes in which large molecules bind specifically but transiently to one another (see the properties section of the DNA page). These interactions also heavily influence drug design, crystallinity and design of materials, particularly for self-assembly, and, in general, the synthesis of many organic molecules. The non-covalent interactions may occur between different parts of the same molecule (e.g. during protein folding) or between different molecules and therefore are discussed also as intermolecular forces. Electrostatic interactions Ionic Ionic interactions involve the attraction of ions or molecules with full permanent charges of opposite signs. For example, sodium fluoride involves the attraction of the positive charge on sodium (Na+) with the negative charge on fluoride (F−). However, this particular interaction is easily broken upon addition to water, or other highly polar solvents. In water ion pairing is mostly entropy driven; a single salt bridge usually amounts to an attraction value of about ΔG =5 kJ/mol at an intermediate ion strength I, at I close to zero the value increases to about 8 kJ/mol. The ΔG values are usually additive and largely independent of the nature of the participating ions, except for transition metal ions etc. These interactions can also be seen in molecules with a localized charge on a particular atom. For example, the full negative charge associated with ethoxide, the conjugate base of ethanol, is most commonly accompanied by the positive charge of an alkali metal salt such as the sodium cation (Na+). Hydrogen bonding A hydrogen bond (H-bond), is a specific type of interaction that involves dipole–dipole attraction between a partially positive hydrogen atom and a highly electronegative, partially negative oxygen, nitrogen, sulfur, or fluorine atom (not covalently bound to said hydrogen atom). It is not a covalent bond, but instead is classified as a strong non-covalent interaction. It is responsible for why water is a liquid at room temperature and not a gas (given water's low molecular weight). Most commonly, the strength of hydrogen bonds lies between 0–4 kcal/mol, but can sometimes be as strong as 40 kcal/mol In solvents such as chloroform or carbon tetrachloride one observes e.g. for the interaction between amides additive values of about 5 kJ/mol. According to Linus Pauling the strength of a hydrogen bond is essentially determined by the electrostatic charges. Measurements of thousands of complexes in chloroform or carbon tetrachloride have led to additive free energy increments for all kind of donor-acceptor combinations. Halogen bonding Halogen bonding is a type of non-covalent interaction which does not involve the formation nor breaking of actual bonds, but rather is similar to the dipole–dipole interaction known as hydrogen bonding. In halogen bonding, a halogen atom acts as an electrophile, or electron-seeking species, and forms a weak electrostatic interaction with a nucleophile, or electron-rich species. The nucleophilic agent in these interactions tends to be highly electronegative (such as oxygen, nitrogen, or sulfur), or may be anionic, bearing a negative formal charge. 
As compared to hydrogen bonding, the halogen atom takes the place of the partially positively charged hydrogen as the electrophile. Halogen bonding should not be confused with halogen–aromatic interactions, as the two are related but differ by definition. Halogen–aromatic interactions involve an electron-rich aromatic π-cloud as a nucleophile; halogen bonding is restricted to monatomic nucleophiles. Van der Waals forces Van der Waals forces are a subset of electrostatic interactions involving permanent or induced dipoles (or multipoles). These include the following: permanent dipole–dipole interactions, alternatively called the Keesom force dipole-induced dipole interactions, or the Debye force induced dipole-induced dipole interactions, commonly referred to as London dispersion forces Hydrogen bonding and halogen bonding are typically not classified as Van der Waals forces. Dipole–dipole Dipole-dipole interactions are electrostatic interactions between permanent dipoles in molecules. These interactions tend to align the molecules to increase attraction (reducing potential energy). Normally, dipoles are associated with electronegative atoms, including oxygen, nitrogen, sulfur, and fluorine. For example, acetone, the active ingredient in some nail polish removers, has a net dipole associated with the carbonyl (see figure 2). Since oxygen is more electronegative than the carbon that is covalently bonded to it, the electrons associated with that bond will be closer to the oxygen than the carbon, creating a partial negative charge (δ−) on the oxygen, and a partial positive charge (δ+) on the carbon. They are not full charges because the electrons are still shared through a covalent bond between the oxygen and carbon. If the electrons were no longer being shared, then the oxygen-carbon bond would be an electrostatic interaction. Often molecules contain dipolar groups, but have no overall dipole moment. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out. This occurs in molecules such as tetrachloromethane. Note that the dipole-dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole. See atomic dipoles. Dipole-induced dipole A dipole-induced dipole interaction (Debye force) is due to the approach of a molecule with a permanent dipole to another non-polar molecule with no permanent dipole. This approach causes the electrons of the non-polar molecule to be polarized toward or away from the dipole (or "induce" a dipole) of the approaching molecule. Specifically, the dipole can cause electrostatic attraction or repulsion of the electrons from the non-polar molecule, depending on orientation of the incoming dipole. Atoms with larger atomic radii are considered more "polarizable" and therefore experience greater attractions as a result of the Debye force. London dispersion forces London dispersion forces are the weakest type of non-covalent interaction. In organic molecules, however, the multitude of contacts can lead to larger contributions, particularly in the presence of heteroatoms. They are also known as "induced dipole-induced dipole interactions" and present between all molecules, even those which inherently do not have permanent dipoles. Dispersive interactions increase with the polarizability of interacting groups, but are weakened by solvents of increased polarizability. 
They are caused by the temporary repulsion of electrons away from the electrons of a neighboring molecule, leading to a partially positive dipole on one molecule and a partially negative dipole on another molecule. Hexane is a good example of a molecule with no polarity or highly electronegative atoms, yet is a liquid at room temperature due mainly to London dispersion forces. In this example, when one hexane molecule approaches another, a temporary, weak partially negative dipole on the incoming hexane can polarize the electron cloud of another, causing a partially positive dipole on that hexane molecule. In absence of solvents hydrocarbons such as hexane form crystals due to dispersive forces ; the sublimation heat of crystals is a measure of the dispersive interaction. While these interactions are short-lived and very weak, they can be responsible for why certain non-polar molecules are liquids at room temperature. π-effects π-effects can be broken down into numerous categories, including π-stacking, cation-π and anion-π interactions, and polar-π interactions. In general, π-effects are associated with the interactions of molecules with the π-systems of arenes. π–π interaction π–π interactions are associated with the interaction between the π-orbitals of a molecular system. The high polarizability of aromatic rings lead to dispersive interactions as major contribution to so-called stacking effects. These play a major role for interactions of nucleobases e.g. in DNA. For a simple example, a benzene ring, with its fully conjugated π cloud, will interact in two major ways (and one minor way) with a neighboring benzene ring through a π–π interaction (see figure 3). The two major ways that benzene stacks are edge-to-face, with an enthalpy of ~2 kcal/mol, and displaced (or slip stacked), with an enthalpy of ~2.3 kcal/mol. The sandwich configuration is not nearly as stable of an interaction as the previously two mentioned due to high electrostatic repulsion of the electrons in the π orbitals. Cation–π and anion–π interaction Cation–pi interactions can be as strong or stronger than H-bonding in some contexts. Anion–π interactions are very similar to cation–π interactions, but reversed. In this case, an anion sits atop an electron-poor π-system, usually established by the presence of electron-withdrawing substituents on the conjugated molecule Polar–π Polar–π interactions involve molecules with permanent dipoles (such as water) interacting with the quadrupole moment of a π-system (such as that in benzene (see figure 5). While not as strong as a cation-π interaction, these interactions can be quite strong (~1-2 kcal/mol), and are commonly involved in protein folding and crystallinity of solids containing both hydrogen bonding and π-systems. In fact, any molecule with a hydrogen bond donor (hydrogen bound to a highly electronegative atom) will have favorable electrostatic interactions with the electron-rich π-system of a conjugated molecule. Hydrophobic effect The hydrophobic effect is the desire for non-polar molecules to aggregate in aqueous solutions in order to separate from water. This phenomenon leads to minimum exposed surface area of non-polar molecules to the polar water molecules (typically spherical droplets), and is commonly used in biochemistry to study protein folding and other various biological phenomenon. The effect is also commonly seen when mixing various oils (including cooking oil) and water. 
Over time, oil sitting on top of water will begin to aggregate into large flattened spheres from smaller droplets, eventually leading to a film of all oil sitting atop a pool of water. However the hydrophobic effect is not considered a non-covalent interaction as it is a function of entropy and not a specific interaction between two molecules, usually characterized by entropy.enthalpy compensation. An essentially enthalpic hydrophobic effect materializes if a limited number of water molecules are restricted within a cavity; displacement of such water molecules by a ligand frees the water molecules which then in the bulk water enjoy a maximum of hydrogen bonds close to four. Examples Drug design Most pharmaceutical drugs are small molecules which elicit a physiological response by "binding" to enzymes or receptors, causing an increase or decrease in the enzyme's ability to function. The binding of a small molecule to a protein is governed by a combination of steric, or spatial considerations, in addition to various non-covalent interactions, although some drugs do covalently modify an active site (see irreversible inhibitors). Using the "lock and key model" of enzyme binding, a drug (key) must be of roughly the proper dimensions to fit the enzyme's binding site (lock). Using the appropriately sized molecular scaffold, drugs must also interact with the enzyme non-covalently in order to maximize binding affinity binding constant and reduce the ability of the drug to dissociate from the binding site. This is achieved by forming various non-covalent interactions between the small molecule and amino acids in the binding site, including: hydrogen bonding, electrostatic interactions, pi stacking, van der Waals interactions, and dipole–dipole interactions. Non-covalent metallo drugs have been developed. For example, dinuclear triple-helical compounds in which three ligand strands wrap around two metals, resulting in a roughly cylindrical tetracation have been prepared. These compounds bind to the less-common nucleic acid structures, such as duplex DNA, Y-shaped fork structures and 4-way junctions. Protein folding and structure The folding of proteins from a primary (linear) sequence of amino acids to a three-dimensional structure is directed by all types of non-covalent interactions, including the hydrophobic forces and formation of intramolecular hydrogen bonds. Three-dimensional structures of proteins, including the secondary and tertiary structures, are stabilized by formation of hydrogen bonds. Through a series of small conformational changes, spatial orientations are modified so as to arrive at the most energetically minimized orientation achievable. The folding of proteins is often facilitated by enzymes known as molecular chaperones. Sterics, bond strain, and angle strain also play major roles in the folding of a protein from its primary sequence to its tertiary structure. Single tertiary protein structures can also assemble to form protein complexes composed of multiple independently folded subunits. As a whole, this is called a protein's quaternary structure. The quaternary structure is generated by the formation of relatively strong non-covalent interactions, such as hydrogen bonds, between different subunits to generate a functional polymeric enzyme. Some proteins also utilize non-covalent interactions to bind cofactors in the active site during catalysis, however a cofactor can also be covalently attached to an enzyme. 
Cofactors can be either organic or inorganic molecules which assist in the catalytic mechanism of the active enzyme. The strength with which a cofactor is bound to an enzyme may vary greatly; non-covalently bound cofactors are typically anchored by hydrogen bonds or electrostatic interactions. Boiling points Non-covalent interactions have a significant effect on the boiling point of a liquid. Boiling point is defined as the temperature at which the vapor pressure of a liquid is equal to the pressure surrounding the liquid. More simply, it is the temperature at which a liquid becomes a gas. As one might expect, the stronger the non-covalent interactions present for a substance, the higher its boiling point. For example, consider three compounds of similar chemical composition: sodium n-butoxide (C4H9ONa), diethyl ether (C4H10O), and n-butanol (C4H9OH). The predominant non-covalent interactions associated with each species in solution are listed in the above figure. As previously discussed, ionic interactions require considerably more energy to break than hydrogen bonds, which in turn are require more energy than dipole–dipole interactions. The trends observed in their boiling points (figure 8) shows exactly the correlation expected, where sodium n-butoxide requires significantly more heat energy (higher temperature) to boil than n-butanol, which boils at a much higher temperature than diethyl ether. The heat energy required for a compound to change from liquid to gas is associated with the energy required to break the intermolecular forces each molecule experiences in its liquid state. Document 3::: In mechanical engineering, a helix angle is the angle between any helix and an axial line on its right, circular cylinder or cone. Common applications are screws, helical gears, and worm gears. The helix angle references the axis of the cylinder, distinguishing it from the lead angle, which references a line perpendicular to the axis. Naturally, the helix angle is the geometric complement of the lead angle. The helix angle is measured in degrees. Concept In terms specific to screws, the helix angle can be found by unraveling the helix from the screw, representing the section as a right triangle, and calculating the angle that is formed. Note that while the terminology directly refers to screws, these concepts are analogous to most mechanical applications of the helix angle. The helix angle can be expressed as: where l is lead of the screw or gear rm is mean radius of the screw thread or gear Applications The helix angle is crucial in mechanical engineering applications that involve power transfer and motion conversion. Some examples are outlined below, though its use is much more widely spread. Screw Cutting a single helical groove into a screw-stock cylinder yields what is referred to as a single-thread screw. Similarly, one may construct a double-thread screw provided that the helix angle of the two cuts is the same, and that the second cut is positioned in the uncut material between the grooves of the first. For certain applications, triple and quadruple threads are in use. The helix may be cut either right hand or left hand. In screws especially, the helix angle is essential for calculating torque in power screw applications. The maximum efficiency for a screw is defined by the following equations: Where is the helix angle, is the friction angle, and is the maximum efficiency. 
The friction value is dependent on the materials of the screw and interacting nut, but ultimately the efficiency is controlled by the helix angle. The efficiency can be plotted The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What type of non-covalent interaction is primarily responsible for the aggregation of non-polar molecules in aqueous solutions to minimize their exposure to water? A. Hydrogen bonding B. Electrostatic interactions C. Hydrophobic effect D. Van der Waals forces Answer:
C. Hydrophobic effect
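As a side note to the helix-angle passage above, whose expression was lost in extraction, the standard relation between the helix angle, the lead l, and the mean radius r_m can be illustrated with assumed numbers; the values below are hypothetical and serve only to show the arithmetic.

    \text{helix angle} = \arctan\!\left(\frac{2\pi r_m}{l}\right)
    r_m = 5\ \text{mm},\ l = 10\ \text{mm}: \quad \arctan\!\left(\frac{2\pi \cdot 5}{10}\right) = \arctan(\pi) \approx 72.3^\circ

Consistent with the passage, the corresponding lead angle is the geometric complement, about 17.7 degrees.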
Relavent Documents: Document 0::: Wind engineering is a subset of mechanical engineering, structural engineering, meteorology, and applied physics that analyzes the effects of wind in the natural and the built environment and studies the possible damage, inconvenience or benefits which may result from wind. In the field of engineering it includes strong winds, which may cause discomfort, as well as extreme winds, such as in a tornado, hurricane or heavy storm, which may cause widespread destruction. In the fields of wind energy and air pollution it also includes low and moderate winds as these are relevant to electricity production and dispersion of contaminants. Wind engineering draws upon meteorology, fluid dynamics, mechanics, geographic information systems, and a number of specialist engineering disciplines, including aerodynamics and structural dynamics. The tools used include atmospheric models, atmospheric boundary layer wind tunnels, and computational fluid dynamics models. Wind engineering involves, among other topics: Wind impact on structures (buildings, bridges, towers) Wind comfort near buildings Effects of wind on the ventilation system in a building Wind climate for wind energy Air pollution near buildings Wind engineering may be considered by structural engineers to be closely related to earthquake engineering and explosion protection. Some sports stadiums such as Candlestick Park and Arthur Ashe Stadium are known for their strong, sometimes swirly winds, which affect the playing conditions. History Wind engineering as a separate discipline can be traced to the UK in the 1960s, when informal meetings were held at the National Physical Laboratory, the Building Research Establishment, and elsewhere. The term "wind engineering" was first coined in 1970. Alan Garnett Davenport was one of the most prominent contributors to the development of wind engineering. He is well known for developing the Alan Davenport wind-loading chain or in short "wind-loading chain" that describes how Document 1::: A spica splint is a type of orthopedic splint used to immobilize the thumb and/or wrist while allowing the other digits freedom to move. It is used to provide support for thumb injuries (ligament instability, sprain or muscle strain), gamekeeper's thumb, osteoarthritis, de Quervain's syndrome or fractures of the scaphoid, lunate, or first metacarpal. It is also suitable for post-operative use or after removal of a hand/thumb cast. References Document 2::: SMSS J114447.77–430859.3 or J1144 or J1144–4308 is a very bright (unbeamed) quasar (g = 14.5 ABmag, K = 11.9 Vegamag) and a supermassive black hole, that appears from Earth to be in the constellation Centaurus at RA 11h44m and Declination –43, near the Southern Cross (Crux). The SkyMapper Southern Survey (SMSS) was used to ascertain its spectral properties. J1144 was identified during a search for binary stars. Despite being relatively bright, it had escaped classification as a quasar in earlier searches, which avoided the crowded fields near the galactic equator. After examining various data sets, the study group determined that J1144 is the most intrinsically luminous quasar known over the last ~9 Gyr of cosmic history, having a luminosity 8 times greater than 3C 273 in Virgo. 
According to the lead researcher, Dr Christofer Onken of the Australian National University, black holes are themselves not visible because their gravity is so great that not even light can escape them, but they are observable through the matter that swirls around them. References Document 3::: Wheal Maid (also Wheal Maiden) is a former mine in the Camborne-Redruth-St Day Mining District, 1.5 km east of St Day. Between 1800 and 1840, profits are said to have been up to £200,000. In 1852, the mine was amalgamated with Poldice Mine and Carharrack Mine and worked as St Day United mine. Throughout the 1970s and 1980s, the mine site was turned into large lagoons and used as a tip for two other nearby mines: Mount Wellington and Wheal Jane. There were suggestions that the mine could be used as a landfill site for rubbish imported from New York and a power plant that would produce up to 40 megawatts of electricity; the concept was opposed by local residents and by Cornwall County Council, with Doris Ansari, the chair of the council's planning committee, saying that the idea "[did] not seem right for Cornwall". The site was bought from Carnon Enterprises by Gwennap District Council for a price of £1 in 2002. An investigation by the Environment Agency that concluded in 2007 found that soil near the mine had high levels of arsenic, copper and zinc contamination, and by 2012 it was deemed too hazardous for human activity. The mine gains attention during dry spells, when the lagoons dry up, leaving brightly coloured stains on the pit banks and bed. 2014 murder In 2014, a 72-year-old man from Falmouth died at the site after what was initially thought to be a cycling accident. It was later found that the man had been murdered. A 34-year-old was found guilty and sentenced to life, to serve at least 28 years. References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary function of a spica splint? A. To immobilize the entire hand B. To provide support for thumb injuries and allow movement of other digits C. To enhance flexibility in the wrist D. To assist in strengthening the hand muscles Answer:
B. To provide support for thumb injuries and allow movement of other digits
Relavent Documents: Document 0::: A solar sibling is a star that formed in the same star cluster as the Sun. Stars that have been proposed as candidate solar siblings include HD 162826, HD 175740, and HD 186302. There is as yet no confirmed solar sibling and studies disagree on the most likely candidates; for example, a 2016 study suggested that previous candidates, including HD 162826 and HD 175740, are unlikely to be solar siblings. A study using Gaia DR2 data published in 2020 found that HD 186302 is unlikely to be a solar sibling, while identifying a new candidate, "Solar Sibling 1", designated 2MASS J19354742+4803549. This star is also known as Kepler-1974 or KOI-7368, and was found in 2022 to be a member of a stellar association that is 40 million years old, much younger than the Sun, so it cannot be a solar sibling. A 2019 study of the comet C/2018 V1 (Machholz–Fujikawa–Iwamoto) found that it may be an interstellar object, and identified two stars it may have originated from (Gaia DR2 1927143514955658880 and 1966383465746413568), which could be candidate solar siblings. This comet is not confirmed to have an interstellar origin and could be a more typical Oort cloud object. References Further reading Document 1::: Medical technology assessment (MTA) is the objective evaluation of a medical technology regarding its safety and performance, its (future) impact on clinical and non-clinical patient outcomes as well as its interactive effects on economical, organizational, social, juridical and ethical aspects of healthcare. Medical technologies are assessed both in absolute terms and in comparison to other (combinations of) medical technologies, procedures, treatments or ‘doing-nothing’. The aim of MTA is to provide objective, high-quality information that relevant stakeholders use for decision-making about for example development, pricing, market access and reimbursement of new medical technologies. As such, MTA is similar to health technology assessment (HTA), except that HTA has a wider scope and may include assessments of for example organizational or financial interventions. The classical approach of MTA is to evaluate technologies after they enter the marketplace. Yet, a growing number of researchers and policy-makers argue that new technologies should be evaluated before they diffuse into routine clinical practice. MTA of biomedical innovations in a very early stage of development could improve health outcomes, minimise wrong investment and prevent social and ethical conflicts. One particular method within the area of early MTA is constructive technology assessment (CTA). CTA is particularly appropriate for the early assessment of dynamic technologies that are implemented under uncertain circumstances. CTA is based on the idea that during the course of technology development, choices are constantly being made about the form, the function, and the use of that technology. Especially in early stages, technologies are not always stable, nor are its specifications and neither is its use, as both technology and environment will mutually influence each other. In recent years, CTA has developed from assessing the (clinical) impact of a new technology to a much broader approach, Document 2::: In mathematics, specifically in control theory, subspace identification (SID) aims at identifying linear time invariant (LTI) state space models from input-output data. 
SID does not require that the user parametrizes the system matrices before solving a parametric optimization problem and, as a consequence, SID methods do not suffer from problems related to local minima that often lead to unsatisfactory identification results. History SID methods are rooted in the work by the German mathematician Leopold Kronecker (1823–1891). Kronecker showed that a power series can be written as a rational function when the rank of the Hankel operator that has the power series as its symbol is finite. The rank determines the order of the polynomials of the rational function. In the 1960s the work of Kronecker inspired a number of researchers in the area of Systems and Control, like Ho and Kalman, Silverman and Youla and Tissi, to store the Markov parameters of an LTI system into a finite dimensional Hankel matrix and derive from this matrix an (A,B,C) realization of the LTI system. The key observation was that when the Hankel matrix is properly dimensioned versus the order of the LTI system, the rank of the Hankel matrix is the order of the LTI system and the SVD of the Hankel matrix provides a basis of the column space observability matrix and row space of the controllability matrix of the LTI system. Knowledge of this key spaces allows to estimate the system matrices via linear least squares. An extension to the stochastic realization problem where we have knowledge only of the Auto-correlation (covariance) function of the output of an LTI system driven by white noise, was derived by researchers like Akaike. A second generation of SID methods attempted to make SID methods directly operate on input-output measurements of the LTI system in the decade 1985–1995. One such generalization was presented under the name of the Eigensystem Realization Algorithm (ERA) made use of spe Document 3::: In computer architecture, cache coherence is the uniformity of shared resource data that is stored in multiple local caches. In a cache coherent system, if multiple clients have a cached copy of the same region of a shared memory resource, all copies are the same. Without cache coherence, a change made to the region by one client may not be seen by others, and errors can result when the data used by different clients is mismatched. A cache coherence protocol is used to maintain cache coherency. The two main types are snooping and directory-based protocols. Cache coherence is of particular relevance in multiprocessing systems, where each CPU may have its own local cache of a shared memory resource. Overview In a shared memory multiprocessor system with a separate cache memory for each processor, it is possible to have many copies of shared data: one copy in the main memory and one in the local cache of each processor that requested it. When one of the copies of data is changed, the other copies must reflect that change. Cache coherence is the discipline which ensures that the changes in the values of shared operands (data) are propagated throughout the system in a timely fashion. The following are the requirements for cache coherence: Write Propagation Changes to the data in any cache must be propagated to other copies (of that cache line) in the peer caches. Transaction Serialization Reads/Writes to a single memory location must be seen by all processors in the same order. Theoretically, coherence can be performed at the load/store granularity. However, in practice it is generally performed at the granularity of cache blocks. 
Definition Coherence defines the behavior of reads and writes to a single address location. In a multiprocessor system, consider that more than one processor has cached a copy of the memory location X. The following conditions are necessary to achieve cache coherence: In a read made by a processor P to a location X that follows a write b The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What phenomenon describes the behavior of wavepackets launched through disordered media that eventually return to their starting points? A. Quantum tunneling effect B. Quantum boomerang effect C. Quantum entanglement effect D. Quantum displacement effect Answer:
B. Quantum boomerang effect
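The subspace-identification passage above sketches the Ho–Kalman idea: stack Markov parameters in a Hankel matrix, take its SVD to obtain bases for the observability and controllability spaces, and recover the system matrices by least squares. The Python sketch below is a minimal single-input single-output illustration under assumed dimensions and a synthetic system; it is not the later input–output SID algorithms (e.g. ERA variants) that the text surveys.

    import numpy as np

    def ho_kalman(markov, n):
        # Minimal Ho-Kalman realization from scalar Markov parameters
        # markov[k] = C A^k B. The model order n is a user-supplied assumption.
        m = len(markov) // 2
        H = np.array([[markov[i + j] for j in range(m)] for i in range(m)])          # Hankel matrix
        H_shift = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])
        U, s, Vt = np.linalg.svd(H)
        O = U[:, :n] * np.sqrt(s[:n])                    # extended observability matrix
        Ctrb = np.sqrt(s[:n])[:, None] * Vt[:n, :]       # extended controllability matrix
        A = np.linalg.pinv(O) @ H_shift @ np.linalg.pinv(Ctrb)
        B = Ctrb[:, :1]
        C = O[:1, :]
        return A, B, C

    # Synthetic system (assumed) used only to generate impulse-response data.
    A_true = np.array([[0.9, 0.2], [0.0, 0.5]])
    B_true = np.array([[1.0], [1.0]])
    C_true = np.array([[1.0, 0.0]])
    markov = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item() for k in range(12)]

    A, B, C = ho_kalman(markov, n=2)
    print(np.linalg.eigvals(A))   # eigenvalues close to 0.9 and 0.5 up to numerical error

The recovered (A, B, C) agree with the true system only up to a similarity transform, which is all a realization promises.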
Relavent Documents: Document 0::: Virtual instrumentation is the use of customizable software and modular measurement hardware to create user-defined measurement systems. Overview Traditional hardware instrumentation systems are made up of fixed hardware components, such as digital multimeters and oscilloscopes that are completely specific to their stimulus, analysis, or measurement function. Because of their hard-coded function, these systems are more limited in their versatility than virtual instrumentation systems. The primary difference between hardware instrumentation and virtual instrumentation is that software is used to replace a large amount of hardware. The software enables complex and expensive hardware to be replaced by already purchased computer hardware; e. g. analog-to-digital converter can act as a hardware complement of a virtual oscilloscope, a potentiostat enables frequency response acquisition and analysis in electrochemical impedance spectroscopy with virtual instrumentation. The concept of a synthetic instrument is a subset of the virtual instrumentation concept. A synthetic instrument is a kind of virtual instrumentation that is purely software defined. A synthetic instrument performs a specific synthesis, analysis, or measurement function on completely generic, measurement agnostic hardware. Virtual instrumentation can still have measurement-specific hardware, and tend to emphasize modular hardware approaches that facilitate this specificity. Hardware supporting synthetic instrumentation is by definition not specific to the measurement, nor is it necessarily (or usually) modular. Leveraging commercially available technologies, such as the PC and the analog-to-digital converter, virtual instrumentation has grown significantly since its inception in the late 1970s. Additionally, software packages like National Instruments' LabVIEW and other graphical programming languages helped grow adoption by making it easier for non-programmers to develop systems. The newly updated te Document 1::: Nestin is a protein that in humans is encoded by the NES gene. Nestin (acronym for neuroepithelial stem cell protein) is a type VI intermediate filament (IF) protein. These intermediate filament proteins are expressed mostly in nerve cells where they are implicated in the radial growth of the axon. Seven genes encode for the heavy (NF-H), medium (NF-M) and light neurofilament (NF-L) proteins, nestin and α-internexin in nerve cells, synemin α and desmuslin/synemin β (two alternative transcripts of the DMN gene) in muscle cells, and syncoilin (also in muscle cells). Members of this group mostly preferentially coassemble as heteropolymers in tissues. Steinert et al. has shown that nestin forms homodimers and homotetramers but does not form IF by itself in vitro. In mixtures, nestin preferentially co-assembles with purified vimentin or the type IV IF protein internexin to form heterodimer coiled-coil molecules. Gene Structurally, nestin has the shortest head domain (N-terminus) and the longest tail domain (C-terminus) of all the IF proteins. Nestin is of high molecular weight (240kDa) with a terminus greater than 500 residues (compared to cytokeratins and lamins with termini less than 50 residues). After subcloning the human nestin gene into plasmid vectors, Dahlstrand et al. determined the nucleotide sequence of all coding regions and parts of the introns. 
In order to establish the boundaries of the introns, they used the polymerase chain reaction (PCR) to amplify a fragment made from human fetal brain cDNA using two primers located in the first and fourth exon, respectively. The resulting 270 base pair (bp) long fragment was then sequenced directly in its entirety, and intron positions precisely located by comparison with the genomic sequence. Putative initiation and stop codons for the human nestin gene were found at the same positions as in the rat gene, in regions where overall similarity was very high. Based on this assumption, the human nestin gene encod Document 2::: The dynamic aperture is the stability region of phase space in a circular accelerator. For hadrons In the case of protons or heavy ion accelerators, (or synchrotrons, or storage rings), there is minimal radiation, and hence the dynamics is symplectic. For long term stability, tiny dynamical diffusion (or Arnold diffusion) can lead an initially stable orbit slowly into an unstable region. This makes the dynamic aperture problem particularly challenging. One may be considering stability over billions of turns. A scaling law for Dynamic aperture vs. number of turns has been proposed by Giovannozzi. For electrons For the case of electrons, the electrons will radiate which causes a damping effect. This means that one typically only cares about stability over thousands of turns. Methods to compute or optimize dynamic aperture The basic method for computing dynamic aperture involves the use of a tracking code. A model of the ring is built within the code that includes an integration routine for each magnetic element. The particle is tracked many turns and stability is determined. In addition, there are other quantities that may be computed to characterize the dynamics, and can be related to the dynamic aperture. One example is the tune shift with amplitude. There have also been other proposals for approaches to enlarge dynamic aperture, such as: References Document 3::: NGC 855 is a star-forming dwarf elliptical galaxy located in the Triangulum constellation. The discovery and a first description (as H 26 613) was realized by William Herschel on 26 October 1786 and the findings made public through his Catalogue of Nebulae and Clusters of Stars, published the same year. NGC 855's relative velocity to the cosmic microwave background is 343 ± 18 km/s (343 ± 18) km/s, corresponding to a Hubble distance of 5.06 ± 0.44 Mpc (~16.5 million ly). There is some uncertainty about its precise distance since two surface brightness fluctuation measurements give a distance of 9.280 ± 0.636 Mpc (~30.3 million ly), a range outside the Hubble distance determined by the galaxy's redshift survey. Star formation Using infrared data collected from two regions in the center of the galaxy by the Spitzer Space Telescope, astronomers were able to suggest NGC 855 to be a star-forming galaxy. Its HI distribution (Neutral atomic hydrogen emission lines) suggests the star-forming activity might have been triggered by a minor merger. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the temporary equilibrium method primarily used for in economic analysis? A. To analyze interdependent variables of different speeds B. To forecast long-term market trends C. To determine fixed prices in the market D. To calculate the total supply of goods Answer:
A. To analyze interdependent variables of different speeds
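The dynamic-aperture passage above explains that the basic method is to build a one-turn model of the ring in a tracking code, track particles for many turns, and record which initial amplitudes remain stable. The Python sketch below does this for a deliberately simple, hypothetical one-turn map (a linear rotation plus a quadratic, sextupole-like kick); the tune, turn count, and amplitude limit are assumptions for illustration and do not model any real accelerator.

    import math

    def survives(x0, px0, tune=0.205, turns=1000, limit=10.0):
        # Track one particle through an assumed one-turn map:
        # a quadratic (sextupole-like) kick followed by a linear rotation.
        # Returns True if the amplitude stays below an arbitrary limit,
        # standing in for the stability test a real tracking code performs.
        mu = 2.0 * math.pi * tune
        c, s = math.cos(mu), math.sin(mu)
        x, px = x0, px0
        for _ in range(turns):
            px = px + x * x                           # nonlinear kick
            x, px = c * x + s * px, -s * x + c * px   # linear rotation
            if x * x + px * px > limit * limit:
                return False
        return True

    # Scan initial amplitudes to see where motion stops being stable.
    for amp in [0.05, 0.1, 0.2, 0.4, 0.8]:
        print(amp, survives(amp, 0.0))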
Relavent Documents: Document 0::: The relativistic Breit–Wigner distribution (after the 1936 nuclear resonance formula of Gregory Breit and Eugene Wigner) is a continuous probability distribution with the following probability density function, where is a constant of proportionality, equal to (This equation is written using natural units, .) It is most often used to model resonances (unstable particles) in high-energy physics. In this case, is the center-of-mass energy that produces the resonance, is the mass of the resonance, and is the resonance width (or decay width), related to its mean lifetime according to (With units included, the formula is Usage The probability of producing the resonance at a given energy is proportional to , so that a plot of the production rate of the unstable particle as a function of energy traces out the shape of the relativistic Breit–Wigner distribution. Note that for values of off the maximum at such that (hence for the distribution has attenuated to half its maximum value, which justifies the name width at half-maximum for . In the limit of vanishing width, the particle becomes stable as the Lorentzian distribution sharpens infinitely to where is the Dirac delta function (point impulse). In general, can also be a function of ; this dependence is typically only important when is not small compared to , and the phase space-dependence of the width needs to be taken into account. (For example, in the decay of the rho meson into a pair of pions.) The factor of that multiplies should also be replaced with (or etc.) when the resonance is wide. The form of the relativistic Breit–Wigner distribution arises from the propagator of an unstable particle, which has a denominator of the form (Here, is the square of the four-momentum carried by that particle in the tree Feynman diagram involved.) The propagator in its rest frame then is proportional to the quantum-mechanical amplitude for the decay utilized to reconstruct that resonance, The resulting probability distribution is proportional to the absolute square of the amplitude, so then the above relativistic Breit–Wigner distribution for the probability density function. The form of this distribution is similar to the amplitude of the solution to the classical equation of motion for a driven harmonic oscillator damped and driven by a sinusoidal external force. It has the standard resonance form of the Lorentz, or Cauchy distribution, but involves relativistic variables here The distribution is the solution of the differential equation for the amplitude squared w.r.t. the energy energy (frequency), in such a classical forced oscillator, or rather with Resonant cross-section formula The cross-section for resonant production of a spin- particle of mass by the collision of two particles with spins and is generally described by the relativistic Breit–Wigner formula: where is the centre-of-mass energy of the collision, , is the centre-of-mass momentum of each of the two colliding particles, is the resonance's full width at half maximum, and is the branching fraction for the resonance's decay into particles and . If the resonance is only being detected in a specific output channel, then the observed cross-section will be reduced by the branching fraction () for that decay channel. Gaussian broadening In experiment, the incident beam that produces resonance always has some spread of energy around a central value. Usually, that is a Gaussian/normal distribution. 
The resulting resonance shape in this case is given by the convolution of the Breit–Wigner and the Gaussian distribution: This function can be simplified by introducing new variables, to obtain where the relativistic line broadening function has the following definition: is the relativistic counterpart of the similar line-broadening function for the Voigt profile used in spectroscopy (see also § 7.19 of ). Document 1::: The Connaissance des temps (English: Knowledge of the Times) is an official yearly publication of astronomical ephemerides in France. Until just after the French Revolution, the title appeared as Connoissance des temps, and for several years afterwards also as Connaissance des tems. Since 1984 it has appeared under the title Ephémérides astronomiques: Annuaire du Bureau des longitudes. History Connaissance des temps is the oldest such publication in the world, published without interruption since 1679 (originally named La Connoissance des Temps ou calendrier et éphémérides du lever & coucher du Soleil, de la Lune & des autres planètes), when the astronomer Jean Picard (1620–1682) obtained from the King the right to create the annual publication. The first eight editors were: 1679–1684: Jean Picard (1620–1682) 1685–1701: Jean Le Fèvre (1650–1706) 1702–1729: (1660–1733) 1730–1734: Louis Godin (1704–1760) 1735–1759: Giovanni Domenico Maraldi (1709–1788) 1760–1775: Joseph Jérôme Lefrançois de Lalande (1732–1807) 1776–1787: Edme-Sébastien Jeaurat (1725–1803) 1788–1794: Pierre Méchain (1744–1804) Other notable astronomers who edited the Connaissance des temps were: Alexis Bouvard (1767–1843) Bureau des Longitudes Rodolphe Radau (1835–1911) Marie Henri Andoyer (1862–1929) Among the other prestigious national astronomical ephemerides, The Nautical Almanac was established in Great Britain in 1767, and the Berliner Astronomisches Jahrbuch in 1776. Contents The volumes of the Connaissance des temps had two parts: a section of ephemerides, containing various tables articles giving a deeper coverage of various topics, often written by famous astronomers Document 2::: Energy subsidies are government payments that keep the price of energy lower than market rate for consumers or higher than market rate for producers. These subsidies are part of the energy policy of the United States. According to Congressional Budget Office testimony in 2016, an estimated $10.9 billion in tax preferences was directed toward renewable energy, $4.6 billion went to fossil fuels, and $2.7 billion went to energy efficiency or electricity transmission. According to a 2015 estimate by the Obama administration, the US oil industry benefited from subsidies of about $4.6 billion per year. A 2017 study by researchers at Stockholm Environment Institute published in the journal Nature Energy estimated that "tax preferences and other subsidies push nearly half of new, yet-to-be-developed oil investments into profitability, potentially increasing US oil production by 17 billion barrels over the next few decades." Overview of energy subsidies Biofuel subsidies In the United States, biofuel subsidies have been justified on the following grounds: energy independence, reduction in greenhouse gas emissions, improvements in rural development related to biofuel plants and farm income support. Several economists from Iowa State University found "there is no evidence to disprove that the primary objective of biofuel policy is to support farm income." 
Consumer subsidies Consumers who purchase hybrid vehicles are eligible for a tax credit that depends upon the type of vehicle and the difference in fuel economy in comparison to vehicles of similar weights. These credits range from several hundred dollars to a few thousand dollars. Homeowners can receive a tax credit up to $500 for energy-efficient products like insulation, windows, doors, as well as heating and cooling equipment. Homeowners who install solar electric systems can receive a 30% tax credit and homeowners who install small wind systems can receive a tax credit up to $4000. Geothermal heat pumps also qualif Document 3::: The Advanced Message Queuing Protocol (AMQP) is an open standard application layer protocol for message-oriented middleware. The defining features of AMQP are message orientation, queuing, routing (including point-to-point and publish-and-subscribe), reliability and security. AMQP mandates the behavior of the messaging provider and client to the extent that implementations from different vendors are interoperable, in the same way as SMTP, HTTP, FTP, etc. have created interoperable systems. Previous standardizations of middleware have happened at the API level (e.g. JMS) and were focused on standardizing programmer interaction with different middleware implementations, rather than on providing interoperability between multiple implementations. Unlike JMS, which defines an API and a set of behaviors that a messaging implementation must provide, AMQP is a wire-level protocol. A wire-level protocol is a description of the format of the data that is sent across the network as a stream of bytes. Consequently, any tool that can create and interpret messages that conform to this data format can interoperate with any other compliant tool irrespective of implementation language. Overview AMQP is a binary application layer protocol, designed to efficiently support a wide variety of messaging applications and communication patterns. It provides flow controlled, message-oriented communication with message-delivery guarantees such as at-most-once (where each message is delivered once or never), at-least-once (where each message is certain to be delivered, but may do so multiple times) and exactly-once (where the message will always certainly arrive and do so only once), and authentication and/or encryption based on SASL and/or TLS. It assumes an underlying reliable transport layer protocol such as Transmission Control Protocol (TCP). The AMQP specification is defined in several layers: (i) a type system, (ii) a symmetric, asynchronous protocol for the transfer of messages fro The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the significance of the relativistic Breit–Wigner distribution in high-energy physics? A. It models stable particles. B. It describes resonances of unstable particles. C. It calculates the speed of light. D. It establishes the mass-energy equivalence. Answer:
B. It describes resonances of unstable particles.
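The Breit–Wigner passage in the prompt above refers to displayed formulas (the convolution with a Gaussian beam-energy spread and the resulting relativistic line-broadening function) that were lost in extraction. Below is a minimal numerical sketch of that convolution, assuming the standard relativistic Breit–Wigner kernel k / ((E² − M²)² + M²Γ²) with its commonly quoted normalisation constant; the function names and the Z-boson-like example values are illustrative only, not part of the original text.

import numpy as np

def rel_breit_wigner(E, M, Gamma):
    # Relativistic Breit-Wigner line shape f(E) = k / ((E^2 - M^2)^2 + M^2 Gamma^2).
    # k uses the commonly quoted closed form; it only scales the curve.
    gamma = np.sqrt(M**2 * (M**2 + Gamma**2))
    k = 2.0 * np.sqrt(2.0) * M * Gamma * gamma / (np.pi * np.sqrt(M**2 + gamma))
    return k / ((E**2 - M**2)**2 + M**2 * Gamma**2)

def gaussian_broadened_bw(E, M, Gamma, sigma, n_sigma=8.0, n_points=2001):
    # Approximates V(E) = integral of f(E') * N(E - E'; 0, sigma) dE' by direct
    # quadrature over a +/- n_sigma window, where the Gaussian weight is non-negligible.
    E = np.atleast_1d(E).astype(float)
    out = np.empty_like(E)
    for i, e in enumerate(E):
        ep = np.linspace(e - n_sigma * sigma, e + n_sigma * sigma, n_points)
        gauss = np.exp(-(e - ep)**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
        out[i] = np.trapz(rel_breit_wigner(ep, M, Gamma) * gauss, ep)
    return out

# Illustrative usage: a Z-like resonance (M = 91.19, Gamma = 2.50, in GeV) observed
# with a 1 GeV Gaussian beam-energy spread; the broadened peak is lower and wider
# than the bare Breit-Wigner evaluated at the same energies.
energies = np.linspace(85.0, 97.0, 7)
print(gaussian_broadened_bw(energies, M=91.19, Gamma=2.50, sigma=1.0))
print(rel_breit_wigner(energies, M=91.19, Gamma=2.50))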
Relavent Documents: Document 0::: Resumption of meiosis occurs as a part of oocyte meiosis after meiotic arrest has occurred. In females, meiosis of an oocyte begins during embryogenesis and will be completed after puberty. A primordial follicle will arrest, allowing the follicle to grow in size and mature. Resumption of meiosis will resume following an ovulatory surge (ovulation) of luteinising hormone (LH). Meiotic arrest Meiosis was initially discovered by Oscar Hertwig in 1876 as he examined the fusion of the gametes in sea urchin eggs. In 1890, August Weismann, concluded that two different rounds of meiosis are required and defined the difference between somatic cells and germ cells. Studies regarding meiotic arrest and resumption have been difficult to attain because, within females, the oocyte is inaccessible. The majority of research was conducted by removing the follicles and artificially maintaining the oocyte in meiotic arrest. Despite this allowing the gain of knowledge on meiosis in oocytes, the results of this methodology may be difficult to interpret and apply to humans. During oogenesis, meiosis arrests twice. The main arrest occurs during the diplotene stage of prophase 1, this arrest lasts until puberty. The second meiotic arrest then occurs after ovulation during metaphase 2 and lasts for a much shorter time than the first arrest. Meiotic arrest occurs mainly due to increased cAMP levels in the oocyte, which regulates key regulator cyclin kinase complex maturation promoting factor (MPF). cGMPs produced by somatic follicular cells further regulate cAMP concentration in the oocyte. Meiotic resumption in mammals Meiotic resumption is visually manifested as “germinal vesicle breakdown” (GVBD), referring to the primary oocyte nucleus. GVBD is the process of nuclear envelope dissolution and chromosome condensation similar to mitotic prophase. In females, the process of folliculogenesis begins during fetal development. Folliculogenesis is the maturation of ovarian follicles. Primordial germ-cells (PGC’S) undergo meiosis leading to the formation of primordial follicles. At birth, meiosis arrests at the diplotene phase of prophase I. Oocytes will remain in this state until the time of puberty. At the time of ovulation a surge of LH initiates the resumption of meiosis and oocytes enter the second cycle, which is known as oocyte maturation. Meiosis is then arrested again during metaphase 2 until fertilisation. At fertilisation meiosis then resumes which results in the disassociation from the 2nd polar body, meaning maturation of the oocyte is now complete. Meiotic resumption signalling Cyclic adenosine monophosphate levels (cAMP) Elevated concentrations of intra-oocyte cAMP regulates meiotic arrest and prevents meiotic resumption. Intracellular cAMP constantly activates PKA, which then activates nuclear kinase Weel/MtyI. Weel/Mtyl inhibits cell division cycle 25B (CDC25B) which is a main activator for Cyclin-dependent kinase (CDK). This leads to the inactivation of maturation promoting factor (MPF) as MPF comprises CDK and Cyclin B. MPF is an essential regulator for M-phase transition and plays a key role in meiotic resumption in oocytes and its post-GVBD activities. Hence, a high level of cAMP indirectly inactivates MPF, preventing meiotic resumption. GPCR3-Gs-ADCY Cascade The production of cAMP is maintained by the intra-oocyte GPCR-GS-ADCY cascade. Inhibition of Gs protein in mouse oocyte leads to meiotic resumption. 
Gs protein-coupled receptor 3 (GPCR3) KO mice was found to present with spontaneous meiotic resumption as well, which was preventable with the administration of GPCR3 RNA into the oocyte. GPCR3 can be found to be present in the oocyte membrane and functions to sustain a minimal level of cAMP, preventing meiotic resumption. In the oocyte, the effector enzyme of GPR is adencylate cyclase (ADCY). It acts as a catalyst converting adenosine triphosphate (ATP) to cAMP, maintaining cAMP levels within the oocyte, preventing meiotic resumption. Somatic follicular cells and cyclic guanosine monophosphate (cGMP) The removal of oocyte from the follicle results in spontaneous meiotic resumption which implicates the role of somatic follicular cells in meiotic arrest. cGMP is produced by guanylyl cyclase present the granulosa cells, in particular, natriuretic peptide receptor 2 (NPR2) and natriuretic peptide precursor-C (NPPC) that can be found in the cumulus and mural granulosa cells respectively (in mice, pigs and human). cGMP produced by these granulosa cells rapidly diffuse into the oocyte through gap junctions and inhibits cAMP-phosphodiesterase 3A (cAMP-PDE3A). cAMP-PDE3A functions as a catalyst for the breakdown of cAMP to AMP within the oocyte. Hence, somatic follicular cells produce cGMP inhibit cell resumption via maintain intra-oocyte cAMP levels. Inosine 5’ monophosphate (IMP) dehydrogenase (IMPDH) Previous studies have demonstrated that treatment of mouse oocytes with IMPDH  inhibitors induced gonadotropin-independent meiotic resumption in vivo. IMPDH is a rate limiting enzyme that catalyses IMP to xanthosine monophosphate (XMP). It can induce meiotic resumption as XMP produced is ultimately converted to cGMP through a series of enzymatic activities. In addition, IMPDH maintains hypoxanthine (HX) levels in the follicular fluid. The HX concentration inhibits cAMP-PDE activity in vitro. Lutensing Hormone (LH) It is commonly known that monthly surge of preovulatory LH from the pituitary gland promotes meiotic resumption. First, LH signaling dephosphorylates and inactivates NPR2 guanylyl cyclase. This results in a rapid decrease in cGMP levels in the granulosa cells and the oocytes through the gap junctions. PDE5 is also activated, increasing cGMP hydrolysis. In mouse follicles, the concentration of cGMP drops from ~2-5 μM to ~100nM within a minute from exposure to LH. The decreasing cGMP concentration occurs in a sequential fashion, from the mural granulosa cells, the cumulus granulosa cells and finally the oocyte. The diffusion of cGMP out of the oocyte promotes meiotic resumption. It proposed that the diffusion of cGMP away from the oocyte occurs before LH-induced closure of gap junctions between somatic cells, could be an “augment step to further guarantee a low level of cGMP within the oocyte or cumulus granulosa”. It is also believed that LH-induced cGMP decrease in granulosa cells is only part of the mechanism, with the full mechanism remaining unexplained. Document 1::: Marianne Suhr MRICS (born c. 1969) is an English Chartered Building Surveyor, writer, and expert on historic buildings. She co-presented the television series Restoration with Ptolemy Dean and Griff Rhys Jones. Work Suhr trained with the Society for the Protection of Ancient Buildings and runs training courses for them. She also writes articles for national newspapers and magazines, including contributions to Period Living. 
In October 2005, Suhr joined Richard Chartres, Bishop of London at St Giles in the Fields, London, to launch a new maintenance project for the capital's historic churches. In Restoration, she toured the United Kingdom looking for restoration projects, repairing old buildings, mixing building mud and dowsing. On her own account, she restored a derelict thatched cottage in Kibworth Beauchamp, Leicestershire, with a friend. On completion of the project, in May 2005, she advised "Don't expect to make money on a project if you're going to do it properly. It involves a lot of time, expensive materials and specialist craftsmen. Estimate the cost and duration of the work, then double both." In 2006, with her partner Richard, she renovated a timber-framed house in Oxfordshire as a home. On British house-hunters in France, she has said: Publications Urban Renewal Berlin: Experience Examples Prospects (Senate Building and Housing Department, 1991) Old House Handbook: a Practical Guide to Care and Repair (with Roger Hunt) (London, Frances Lincoln Publishers, 2008, ) Old House Eco Handbook: a Practical Guide to Retrofitting for Energy-Efficiency & Sustainability (with Roger Hunt) (London, Frances Lincoln Publishers, 2013, ) References External links Profile: Marianne Suhr Building surveyor and co-presenter of the BBC's Restoration series at British Library Direct Document 2::: A branch plant economy is an economy that hosts many branch plants (i.e. factories or firms near the base of a supply chain/command chain), but does not host headquarters. In particular, the term was used in arguments that countries must develop independent companies, as a form of economic nationalism, to create better jobs and avoid having managerial positions filled only by corporate workers from outside the country. The term was used in the 1970s to describe Canadian reliance on US headquartered corporations or Scottish reliance on English-headquartered corporations but may have fallen out of mainstream use. Some opinion pieces still use the terminology to decry reliance on outside states, especially with regards to Canada’s relationship with the United States. References Document 3::: VirusBuster Ltd. was a Hungarian IT security software vendor. The fully Hungarian owned company developed software under the brand name "VirusBuster" for the Hungarian and international market to protect users' computers from malware programs and other IT security threats. In August 2012, VirusBuster Ltd. announced the discontinuation of its antivirus products. History The legal and trade predecessor of the firm began to develop antiviral products at the end of the 1980s. At that time, it released the software product "Virsec", at a time other well known vendors appeared with similar solutions. In 1992, Virsec was renamed to "VirusBuster". At the later official formation of the firm, this name became the name of the company as well. In 1997 the official foundation of VirusBuster Ltd took place. In the next years, the company produced mainly anti-viral products for the Hungarian market, primarily for the governmental sector. Between 2000 and 2003 VirusBuster steadily extended its portfolio. The developments manifested in the increasing number of protected platforms and systems, also in new layered security products to target new threats, and in the management system for the firm's security products. With all of these achievements, the company could offer effective security solutions for every organizational customers. 
In 2001, the Australian Leprechaun licensed VirusBuster's antiviral technology to integrate it into its own security products. However, Leprechaun ceased operations in 2004. Between 2003 and 2006, VirusBuster strengthened its Eastern European and worldwide market positions. The antiviral engine of the company became an independent product, and thus it was integrated into several other firms' security solutions. These were the years, when VirusBuster appeared on the American market. In August 2012, Agnitum, the PC security expert and manufacturer of the Outpost range of security products, had announced the acquisition of antivirus technology and The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary role of cyclic adenosine monophosphate (cAMP) in the process of meiotic arrest in oocytes? A. It promotes meiotic resumption by activating MPF. B. It inhibits meiotic resumption by regulating MPF activity. C. It facilitates the breakdown of the nuclear envelope. D. It increases the production of cGMP in granulosa cells. Answer:
B. It inhibits meiotic resumption by regulating MPF activity.
Relavent Documents: Document 0::: The following is a list of ecoregions in Rwanda, according to the Worldwide Fund for Nature (WWF). Terrestrial ecoregions By major habitat type: Tropical and subtropical moist broadleaf forests Albertine Rift montane forests Tropical and subtropical grasslands, savannas, and shrublands Victoria Basin forest–savanna mosaic Montane grasslands and shrublands Ruwenzori-Virunga montane moorlands Freshwater ecoregions By bioregion: Great Lakes Lake Victoria Basin Document 1::: The Çocuktepe Dam is a gravity dam under construction on the Güzeldere River (a tributary of the Great Zab) in Çukurca district of Hakkâri Province, southeast Turkey. Under contract from Turkey's State Hydraulic Works, İnelsan İnşaat began construction on the dam in 2008 and a completion date has not been announced. Construction on the Gölgeliyamaç Dam immediately upstream began in 2008 as well but was cancelled due to poor geology. The reported purpose of the dam is water storage and it can also support a hydroelectric power station in the future. Another purpose of the dam which has been widely reported in the Turkish press is to reduce the freedom of movement of militants of the Kurdistan Workers' Party (PKK). Blocking and flooding valleys in close proximity to the Iraq–Turkey border is expected to help curb cross-border PKK smuggling and deny caves in which ammunition can be stored. A total of 11 dams along the border; seven in Şırnak Province and four in Hakkâri Province were implemented for this purpose. In Hakkâri are the Gölgeliyamaç (since cancelled) and Çocuktepe Dams on the Güzeldere River and the Aslandağ and Beyyurdu Dams on the Bembo River. In Şırnak there is the Silopi Dam on the Hezil River and the Şırnak, Uludere, Balli, Kavşaktepe, Musatepe and Çetintepe Dams on the Ortasu River. Construction was still ongoing as of March 2019. See also List of dams and reservoirs in Turkey References Document 2::: The Sitara Arm Processor family, developed by Texas Instruments, features ARM9, ARM Cortex-A8, ARM Cortex-A9, ARM Cortex-A15, and ARM Cortex-A53 application cores, C66x DSP cores, imaging and multimedia acceleration cores, industrial communication IP, and other technology to serve a broad base of applications. Development using Sitara processors is supported by the open source Beagle community as well as Texas Instruments' open source development community. Products featuring Sitara Arm Nest, a learning thermostat Netgate SG-1000, a micro firewall based on pfSense MOD/MOD Live, makes GPS-enabled Micro Optics Displays (MOD) for snow goggles BeagleBone Black single board computer BeagleBoard-X15 single board computer Lego Mindstorms EV3 – Lego Mindstorms EV3 bricks use the ARM9 TI Sitara AM1x The Sitara family Sitara Arm processors available today include: Document 3::: Photon polarization is the quantum mechanical description of the classical polarized sinusoidal plane electromagnetic wave. An individual photon can be described as having right or left circular polarization, or a superposition of the two. Equivalently, a photon can be described as having horizontal or vertical linear polarization, or a superposition of the two. The description of photon polarization contains many of the physical concepts and much of the mathematical machinery of more involved quantum descriptions, such as the quantum mechanics of an electron in a potential well. 
Polarization is an example of a qubit degree of freedom, which forms a fundamental basis for an understanding of more complicated quantum phenomena. Much of the mathematical machinery of quantum mechanics, such as state vectors, probability amplitudes, unitary operators, and Hermitian operators, emerge naturally from the classical Maxwell's equations in the description. The quantum polarization state vector for the photon, for instance, is identical with the Jones vector, usually used to describe the polarization of a classical wave. Unitary operators emerge from the classical requirement of the conservation of energy of a classical wave propagating through lossless media that alter the polarization state of the wave. Hermitian operators then follow for infinitesimal transformations of a classical polarization state. Many of the implications of the mathematical machinery are easily verified experimentally. In fact, many of the experiments can be performed with polaroid sunglass lenses. The connection with quantum mechanics is made through the identification of a minimum packet size, called a photon, for energy in the electromagnetic field. The identification is based on the theories of Planck and the interpretation of those theories by Einstein. The correspondence principle then allows the identification of momentum and angular momentum (called spin), as well as energy, with the photon. Polarization of classical electromagnetic waves Polarization states Linear polarization The wave is linearly polarized (or plane polarized) when the phase angles are equal, This represents a wave with phase polarized at an angle with respect to the x axis. In this case the Jones vector can be written with a single phase: The state vectors for linear polarization in x or y are special cases of this state vector. If unit vectors are defined such that and then the linearly polarized polarization state can be written in the "x–y basis" as Circular polarization If the phase angles and differ by exactly and the x amplitude equals the y amplitude the wave is circularly polarized. The Jones vector then becomes where the plus sign indicates left circular polarization and the minus sign indicates right circular polarization. In the case of circular polarization, the electric field vector of constant magnitude rotates in the x–y plane. If unit vectors are defined such that and then an arbitrary polarization state can be written in the "R–L basis" as where and We can see that Elliptical polarization The general case in which the electric field rotates in the x–y plane and has variable magnitude is called elliptical polarization. The state vector is given by Geometric visualization of an arbitrary polarization state To get an understanding of what a polarization state looks like, one can observe the orbit that is made if the polarization state is multiplied by a phase factor of and then having the real parts of its components interpreted as x and y coordinates respectively. That is: If only the traced out shape and the direction of the rotation of is considered when interpreting the polarization state, i.e. only (where and are defined as above) and whether it is overall more right circularly or left circularly polarized (i.e. whether or vice versa), it can be seen that the physical interpretation will be the same even if the state is multiplied by an arbitrary phase factor, since and the direction of rotation will remain the same. 
In other words, there is no physical difference between two polarization states and , between which only a phase factor differs. It can be seen that for a linearly polarized state, M will be a line in the xy plane, with length 2 and its middle in the origin, and whose slope equals to . For a circularly polarized state, M will be a circle with radius and with the middle in the origin. Energy, momentum, and angular momentum of a classical electromagnetic wave Energy density of classical electromagnetic waves Energy in a plane wave The energy per unit volume in classical electromagnetic fields is (cgs units) and also Planck units: For a plane wave, this becomes: where the energy has been averaged over a wavelength of the wave. Fraction of energy in each component The fraction of energy in the x component of the plane wave is with a similar expression for the y component resulting in . The fraction in both components is Momentum density of classical electromagnetic waves The momentum density is given by the Poynting vector For a sinusoidal plane wave traveling in the z direction, the momentum is in the z direction and is related to the energy density: The momentum density has been averaged over a wavelength. Angular momentum density of classical electromagnetic waves Electromagnetic waves can have both orbital and spin angular momentum. The total angular momentum density is For a sinusoidal plane wave propagating along axis the orbital angular momentum density vanishes. The spin angular momentum density is in the direction and is given by where again the density is averaged over a wavelength. Optical filters and crystals Passage of a classical wave through a polaroid filter A linear filter transmits one component of a plane wave and absorbs the perpendicular component. In that case, if the filter is polarized in the x direction, the fraction of energy passing through the filter is Example of energy conservation: Passage of a classical wave through a birefringent crystal An ideal birefringent crystal transforms the polarization state of an electromagnetic wave without loss of wave energy. Birefringent crystals therefore provide an ideal test bed for examining the conservative transformation of polarization states. Even though this treatment is still purely classical, standard quantum tools such as unitary and Hermitian operators that evolve the state in time naturally emerge. Initial and final states A birefringent crystal is a material that has an optic axis with the property that the light has a different index of refraction for light polarized parallel to the axis than it has for light polarized perpendicular to the axis. Light polarized parallel to the axis are called "extraordinary rays" or "extraordinary photons", while light polarized perpendicular to the axis are called "ordinary rays" or "ordinary photons". If a linearly polarized wave impinges on the crystal, the extraordinary component of the wave will emerge from the crystal with a different phase than the ordinary component. In mathematical language, if the incident wave is linearly polarized at an angle with respect to the optic axis, the incident state vector can be written and the state vector for the emerging wave can be written While the initial state was linearly polarized, the final state is elliptically polarized. The birefringent crystal alters the character of the polarization. Dual of the final state The initial polarization state is transformed into the final state with the operator U. 
The dual of the final state is given by where is the adjoint of U, the complex conjugate transpose of the matrix. Unitary operators and energy conservation The fraction of energy that emerges from the crystal is In this ideal case, all the energy impinging on the crystal emerges from the crystal. An operator U with the property that where I is the identity operator and U is called a unitary operator. The unitary property is necessary to ensure energy conservation in state transformations. Hermitian operators and energy conservation If the crystal is very thin, the final state will be only slightly different from the initial state. The unitary operator will be close to the identity operator. We can define the operator H by and the adjoint by Energy conservation then requires This requires that Operators like this that are equal to their adjoints are called Hermitian or self-adjoint. The infinitesimal transition of the polarization state is Thus, energy conservation requires that infinitesimal transformations of a polarization state occur through the action of a Hermitian operator. Photons: connection to quantum mechanics Energy, momentum, and angular momentum of photons Energy The treatment to this point has been classical. It is a testament, however, to the generality of Maxwell's equations for electrodynamics that the treatment can be made quantum mechanical with only a reinterpretation of classical quantities. The reinterpretation is based on the theories of Max Planck and the interpretation by Albert Einstein of those theories and of other experiments. Einstein's conclusion from early experiments on the photoelectric effect is that electromagnetic radiation is composed of irreducible packets of energy, known as photons. The energy of each packet is related to the angular frequency of the wave by the relationwhere is an experimentally determined quantity known as the reduced Planck constant. If there are photons in a box of volume , the energy in the electromagnetic field isand the energy density is The photon energy can be related to classical fields through the correspondence principle that states that for a large number of photons, the quantum and classical treatments must agree. Thus, for very large , the quantum energy density must be the same as the classical energy density The number of photons in the box is then Momentum The correspondence principle also determines the momentum and angular momentum of the photon. For momentumwhere is the wave number. This implies that the momentum of a photon is Angular momentum and spin Similarly for the spin angular momentumwhere is field strength. This implies that the spin angular momentum of the photon isthe quantum interpretation of this expression is that the photon has a probability of of having a spin angular momentum of and a probability of of having a spin angular momentum of . We can therefore think of the spin angular momentum of the photon being quantized as well as the energy. The angular momentum of classical light has been verified. A photon that is linearly polarized (plane polarized) is in a superposition of equal amounts of the left-handed and right-handed states. Upon absorption by an electronic state, the angular momentum is "measured" and this superposition collapses into either right-hand or left-hand, corresponding to a raising or lowering of the angular momentum of the absorbing electronic state, respectively. Spin operator The spin of the photon is defined as the coefficient of in the spin angular momentum calculation. 
A photon has spin 1 if it is in the state and −1 if it is in the state. The spin operator is defined as the outer product The eigenvectors of the spin operator are and with eigenvalues 1 and −1, respectively. The expected value of a spin measurement on a photon is then An operator S has been associated with an observable quantity, the spin angular momentum. The eigenvalues of the operator are the allowed observable values. This has been demonstrated for spin angular momentum, but it is in general true for any observable quantity. Spin states We can write the circularly polarized states aswhere s = 1 for and s = −1 for . An arbitrary state can be writtenwhere and are phase angles, θ is the angle by which the frame of reference is rotated, and Spin and angular momentum operators in differential form When the state is written in spin notation, the spin operator can be written The eigenvectors of the differential spin operator are To see this, note The spin angular momentum operator is Nature of probability in quantum mechanics Probability for a single photon There are two ways in which probability can be applied to the behavior of photons; probability can be used to calculate the probable number of photons in a particular state, or probability can be used to calculate the likelihood of a single photon to be in a particular state. The former interpretation violates energy conservation. The latter interpretation is the viable, if nonintuitive, option. Dirac explains this in the context of the double-slit experiment: Some time before the discovery of quantum mechanics people realized that the connection between light waves and photons must be of a statistical character. What they did not clearly realize, however, was that the wave function gives information about the probability of one photon being in a particular place and not the probable number of photons in that place. The importance of the distinction can be made clear in the following way. Suppose we have a beam of light consisting of a large number of photons split up into two components of equal intensity. On the assumption that the beam is connected with the probable number of photons in it, we should have half the total number going into each component. If the two components are now made to interfere, we should require a photon in one component to be able to interfere with one in the other. Sometimes these two photons would have to annihilate one another and other times they would have to produce four photons. This would contradict the conservation of energy. The new theory, which connects the wave function with probabilities for one photon gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. Interference between two different photons never occurs.—Paul Dirac, The Principles of Quantum Mechanics, 1930, Chapter 1 Probability amplitudes The probability for a photon to be in a particular polarization state depends on the fields as calculated by the classical Maxwell's equations. The polarization state of the photon is proportional to the field. The probability itself is quadratic in the fields and consequently is also quadratic in the quantum state of polarization. In quantum mechanics, therefore, the state or probability amplitude contains the basic probability information. 
In general, the rules for combining probability amplitudes look very much like the classical rules for composition of probabilities: [The following quote is from Baym, Chapter 1] The probability amplitude for two successive probabilities is the product of amplitudes for the individual possibilities. For example, the amplitude for the x polarized photon to be right circularly polarized and for the right circularly polarized photon to pass through the y-polaroid is the product of the individual amplitudes. The amplitude for a process that can take place in one of several indistinguishable ways is the sum of amplitudes for each of the individual ways. For example, the total amplitude for the x polarized photon to pass through the y-polaroid is the sum of the amplitudes for it to pass as a right circularly polarized photon, plus the amplitude for it to pass as a left circularly polarized photon, The total probability for the process to occur is the absolute value squared of the total amplitude calculated by 1 and 2. Uncertainty principle Mathematical preparation For any legal operators the following inequality, a consequence of the Cauchy–Schwarz inequality, is true. If B A ψ and A B ψ are defined, then by subtracting the means and re-inserting in the above formula, we deduce where is the operator mean of observable X in the system state ψ and Here is called the commutator of A and B. This is a purely mathematical result. No reference has been made to any physical quantity or principle. It simply states that the uncertainty of one operator times the uncertainty of another operator has a lower bound. Application to angular momentum The connection to physics can be made if we identify the operators with physical operators such as the angular momentum and the polarization angle. We have then which means that angular momentum and the polarization angle cannot be measured simultaneously with infinite accuracy. (The polarization angle can be measured by checking whether the photon can pass through a polarizing filter oriented at a particular angle, or a polarizing beam splitter. This results in a yes/no answer that, if the photon was plane-polarized at some other angle, depends on the difference between the two angles.) States, probability amplitudes, unitary and Hermitian operators, and eigenvectors Much of the mathematical apparatus of quantum mechanics appears in the classical description of a polarized sinusoidal electromagnetic wave. The Jones vector for a classical wave, for instance, is identical with the quantum polarization state vector for a photon. The right and left circular components of the Jones vector can be interpreted as probability amplitudes of spin states of the photon. Energy conservation requires that the states be transformed with a unitary operation. This implies that infinitesimal transformations are transformed with a Hermitian operator. These conclusions are a natural consequence of the structure of Maxwell's equations for classical waves. Quantum mechanics enters the picture when observed quantities are measured and found to be discrete rather than continuous. The allowed observable values are determined by the eigenvalues of the operators associated with the observable. In the case angular momentum, for instance, the allowed observable values are the eigenvalues of the spin operator. These concepts have emerged naturally from Maxwell's equations and Planck's and Einstein's theories. They have been found to be true for many other physical systems. 
In fact, the typical program is to assume the concepts of this section and then to infer the unknown dynamics of a physical system. This was done, for instance, with the dynamics of electrons. In that case, working back from the principles in this section, the quantum dynamics of particles were inferred, leading to Schrödinger's equation, a departure from Newtonian mechanics. The solution of this equation for atoms led to the explanation of the Balmer series for atomic spectra and consequently formed a basis for all of atomic physics and chemistry. This is not the only occasion in which Maxwell's equations have forced a restructuring of Newtonian mechanics. Maxwell's equations are relativistically consistent. Special relativity resulted from attempts to make classical mechanics consistent with Maxwell's equations (see, for example, Moving magnet and conductor problem). The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What describes the relationship between the polarization state of a photon and classical electromagnetic waves? A. The polarization state is unrelated to classical electromagnetic waves. B. The polarization state is a simple linear function of classical electromagnetic waves. C. The polarization state is identical to the Jones vector used for classical wave polarization. D. The polarization state can only be described using quantum mechanics, with no classical counterpart. Answer:
C. The polarization state is identical to the Jones vector used for classical wave polarization.
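The photon-polarization article in the prompt above leans on Jones vectors, unitary crystal operators, and probability amplitudes whose displayed equations were dropped in extraction. Below is a small NumPy sketch of the same machinery, assuming the usual two-component Jones-vector conventions (the sign convention for right versus left circular polarization varies between texts) and an idealised wave-plate-style model of the birefringent crystal; all names and example values are illustrative only.

import numpy as np

# Linear-polarization basis states |x>, |y> as Jones vectors.
x = np.array([1.0, 0.0], dtype=complex)
y = np.array([0.0, 1.0], dtype=complex)

# Circular states |R>, |L> = (|x> -/+ i|y>)/sqrt(2); one common sign convention.
R = (x - 1j * y) / np.sqrt(2)
L = (x + 1j * y) / np.sqrt(2)

def linear(theta):
    # Jones vector for light linearly polarized at angle theta to the x axis.
    return np.cos(theta) * x + np.sin(theta) * y

def birefringent(phi):
    # Idealised birefringent crystal: a relative phase phi between the component
    # along the optic axis (taken as x) and the perpendicular component.
    return np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])

psi_in = linear(np.deg2rad(30))   # linearly polarized input state
U = birefringent(np.pi / 3)       # crystal adding a 60-degree relative phase
psi_out = U @ psi_in              # generally elliptically polarized output

# Unitarity <=> energy conservation: U^dagger U = I and the norm is preserved.
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.isclose(np.vdot(psi_out, psi_out).real, 1.0)

# Probability amplitudes in the circular (R-L) basis, and the probability of the
# photon passing an x-oriented polaroid: |<x|psi>|^2 = cos^2(theta) = 0.75 here.
amp_R, amp_L = np.vdot(R, psi_in), np.vdot(L, psi_in)
print(abs(np.vdot(x, psi_in))**2, abs(amp_R)**2 + abs(amp_L)**2)

The diagonal phase matrix is the textbook Jones matrix of a wave plate up to an overall phase; it is used here only to make the unitary/Hermitian bookkeeping of the article concrete, not as a claim about any specific crystal discussed in the source.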
Relavent Documents: Document 0::: A torsion box consists of two thin layers of material (skins) on either side of a lightweight core, usually a grid of beams. It is designed to resist torsion under an applied load. A hollow core door is probably the most common example of a torsion box (stressed skin) structure. The principle is to use less material more efficiently. The torsion box uses the properties of its thin surfaces to carry the imposed loads primarily through tension while the close proximity of the enclosed core material compensates for the tendency of the opposite side to buckle under compression. Torsion boxes are used in the construction of structural insulated panels for houses, wooden tables and doors, skis, snowboards, and airframes - especially wings and vertical stabilizers. Document 1::: esmGFP is an artificial green fluorescent protein designed using the AI model ESM3, developed by EvolutionaryScale. The protein does not exist in nature and was generated through a simulation of 500 million years of molecular evolution. Development Scientists at EvolutionaryScale and the Arc Institute developed esmGFP by training ESM3 on a dataset of 770 billion protein sequences. The AI model, designed to predict and generate protein structures, mimicked evolutionary processes over 500 million simulated years to create functional proteins beyond those found in nature. EvolutionaryScale, founded by former researchers from Meta, developed ESM3 as one of the largest AI models applied to protein design. The model's ability to generate new fluorescent proteins has attracted significant investment. At its core, ESM3 functions similarly to a language model, predicting protein sequences and structures much like an AI predicts words in a sentence. Scientists prompted ESM3 to generate a fluorescent protein by focusing on residues responsible for fluorescence. Through iterative refinement, the AI designed a protein that shares 58% sequence similarity with its closest known counterpart, a fluorescent protein from the bubble-tip sea anemone (Entacmaea quadricolor). The resulting protein was synthesized and tested in a lab, where it successfully exhibited fluorescence, demonstrating that AI-driven protein design can produce functional biomolecules that nature never evolved. Applications Green fluorescent proteins are widely used in biological research, particularly for tagging and tracking cellular processes. AI-designed proteins like esmGFP could lead to advancements in medicine, environmental science, and synthetic biology. Potential applications include enzyme development for plastic degradation, novel disease treatments, and tools for exploring protein evolution. Scientific Review The research on esmGFP was initially released as a preprint and later peer-rev Document 2::: D'Arcy Island is an island in Haro Strait, south of Sidney Island and east of the Saanich Peninsula (Vancouver Island). It is the southernmost of the Gulf Islands and is part of the Gulf Islands National Park Reserve. History The island was used as a leper colony for Chinese immigrants from 1891 to 1924, when the inhabitants were moved to Bentinck Island, closer to Victoria. Ruins of the time's buildings still visible. D'Arcy Island's proximity to the border with the United States was exploited by American bootlegger Roy Olmstead in the smuggling of Canadian liquor, primarily whisky, to Washington State. 
His operation would transport the liquor from Victoria, British Columbia, to islands in Haro Strait, including D'Arcy, for later pickup by smaller craft that would move the contraband during rough weather, making it more difficult for the Coast Guard to detect them. D'Arcy was declared a marine park in 1961 and included as part of the Gulf Islands National Park Reserve in 2003. Access D'Arcy is accessible by private watercraft only. Camping Gulf Islands National Park Reserve offers seven marine-accessible backcountry campsites on D'Arcy. Facilities are limited to pit toilets and picnic tables. There is no drinking water available, and no campfires are permitted. References External links Gulf Islands National Park Reserve Document 3::: Charismatic megafauna are animal species that are large—in the category that they represent—with symbolic value or widespread popular appeal, and are often used by environmental activists to gain public support for environmentalist goals. In this definition, animals such as penguins or bald eagles are megafauna because they are among the largest animals within the local animal community, and they disproportionately affect their environment. The vast majority of charismatic megafauna species are threatened and endangered by issues such as overhunting, poaching, black market trade, climate change, habitat destruction, and invasive species. In a 2018 study, the top twenty most charismatic megafauna included the tiger, lion, and elephant. Use in conservation Charismatic species are often used as flagship species in conservation programs, as they are supposed to affect people's feelings more. However, being charismatic does not protect species against extinction; all of the 10 most charismatic species are currently endangered, and only the giant panda shows a demographic growth from an extremely small population. Beginning early in the 20th century, efforts to reintroduce extirpated charismatic megafauna to ecosystems have been an interest of a number of private and non-government conservation organizations. Species have been reintroduced from captive breeding programs in zoos, such as the wisent (the European bison) to Poland's Białowieża Forest. These and other reintroductions of charismatic megafauna, such as Przewalski's horse to Mongolia, have been to areas of limited, and often patchy, range compared to the historic ranges of the respective species. Environmental activists and proponents of ecotourism seek to use the leverage provided by charismatic and well-known species to achieve more subtle and far-reaching goals in species and biodiversity conservation. By directing public attention to the diminishing numbers of giant panda due to habitat loss, for examp The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is transrepression in molecular biology? A. The process by which one protein enhances the activity of another protein B. A mechanism where one protein inhibits the activity of another protein through interaction C. The interaction of multiple proteins leading to cell division D. The activation of a transcription factor to increase gene expression Answer:
B. A mechanism where one protein inhibits the activity of another protein through interaction
Relavent Documents: Document 0::: In particle physics, the crypton is a hypothetical superheavy particle, thought to exist in a hidden sector of string theory. It has been proposed as a candidate particle to explain the dark matter content of the universe. Cryptons arising in the hidden sector of a superstring-derived flipped SU(5) GUT model have been shown to be metastable with a lifetime exceeding the age of the universe. Their slow decays may provide a source for the ultra-high-energy cosmic rays (UHECR). References Document 1::: The Hong Kong International Medical Devices and Supplies Fair is a trade fair organised by the Hong Kong Trade Development Council, held annually at the Hong Kong Convention and Exhibition Centre. The 2009 fair attracted over 150 exhibitors from 12 countries and regions. Several themed zones which include Medical Device, Medical Supplies and Disposables, and Tech Exchange help buyers connect with the right suppliers. Major exhibit categories Accident and Emergency Equipment Building Technology and Hospital Furniture Chinese Medical Devices Communication, Systems and Information Technology Dental Equipment and Supplies Diagnostics Electromedical Equipment / Medical Technology Laboratory Equipment Medical Components and Materials Medical Supplies and Disposables Physiotherapy / Orthopaedic / Rehabilitation Technology Textiles Document 2::: Addiction is a state characterized by compulsive engagement in rewarding stimuli, despite adverse consequences. The process of developing an addiction occurs through instrumental learning, which is otherwise known as operant conditioning. Neuroscientists believe that drug addicts’ behavior is a direct correlation to some physiological change in their brain, caused by using drugs. This view believes there is a bodily function in the brain causing the addiction. This is brought on by a change in the brain caused by brain damage or adaptation from chronic drug use. In humans, addiction is diagnosed according to diagnostic models such as the Diagnostic and Statistical Manual of Mental Disorders, through observed behaviors. There has been significant advancement in understanding the structural changes that occur in parts of the brain involved in the reward pathway (mesolimbic system) that underlies addiction. Most research has focused on two portions of the brain: the ventral tegmental area, (VTA) and the nucleus accumbens (NAc). The VTA is the portion of the mesolimbic system responsible for spreading dopamine to the whole system. The VTA is stimulated by ″rewarding experiences″. The release of dopamine by the VTA induces pleasure, thus reinforcing behaviors that lead to the reward. Drugs of abuse increase the VTA's ability to project dopamine to the rest of the reward circuit. These structural changes only last 7–10 days, however, indicating that the VTA cannot be the only part of the brain that is affected by drug use, and changed during the development of addiction. The nucleus accumbens (NAc) plays an essential part in the formation of addiction. Almost every drug with addictive potential induces the release of dopamine into the NAc. In contrast to the VTA, the NAc shows long-term structural changes. Drugs of abuse weaken the connections within the NAc after habitual use, as well as after use then withdrawal. Structural changes of learning Learning by experience occurs through modifications of the structural circuits of the brain. 
These circuits are composed of many neurons and their connections, called synapses, which occur between the axon of one neuron and the dendrite of another. A single neuron generally has many dendrites which are called dendritic branches, each of which can be synapsed by many axons. Along dendritic branches there can be hundreds or even thousands of dendritic spines, structural protrusions that are sites of excitatory synapses. These spines increase the number of axons from which the dendrite can receive information. Dendritic spines are very plastic, meaning they can be formed and eliminated very quickly, in the order of a few hours. More spines grow on a dendrite when it is repetitively activated. Dendritic spine changes have been correlated with long-term potentiation (LTP) and long-term depression (LTD). LTP is the way that connections between neurons and synapses are strengthened. LTD is the process by which synapses are weakened. For LTP to occur, NMDA receptors on the dendritic spine send intracellular signals to increase the number of AMPA receptors on the post synaptic neuron. If a spine is stabilized by repeated activation, the spine becomes mushroom shaped and acquires many more AMPA receptors. This structural change, which is the basis of LTP, persists for months and may be an explanation for some of the long-term behavioral changes that are associated with learned behaviors including addiction to drugs. Research methodologies Animal models Animal models, especially rats and mice, are used for many types of biological research. The animal models of addiction are particularly useful because animals that are addicted to a substance show behaviors similar to human addicts. This implies that the structural changes that can be observed after the animal ingests a drug can be correlated with an animal's behavioral changes, as well as with similar changes occurring in humans. Administration protocols Administration of drugs that are often abused can be done either by the experimenter (non-contingent), or by a self-administration (contingent) method. The latter usually involves the animal pressing a lever to receive a drug. Non-contingent models are generally used for convenience, being useful for examining the pharmacological and structural effects of the drugs. Contingent methods are more realistic because the animal controls when and how much of the drug it receives. This is generally considered a better method for studying the behaviors associated with addiction. Contingent administration of drugs has been shown to produce larger structural changes in certain parts of the brain, in comparison to non-contingent administration. Types of drugs All abused drugs directly or indirectly promote dopamine signaling in the mesolimbic dopamine neurons which project from the ventral tegmental area to the nucleus accumbens (NAc). The types of drugs used in experimentation increase this dopamine release through different mechanisms. Opiates Opiates are a class of sedative with the capacity for pain relief. Morphine is an opiate that is commonly used in animal testing of addiction. Opiates stimulate dopamine neurons in the brain indirectly by inhibiting GABA release from modulatory interneurons that synapse onto the dopamine neurons. GABA is an inhibitory neurotransmitter that decreases the probability that the target neuron will send a subsequent signal. Stimulants Stimulants used regularly in neuroscience experimentation are cocaine and amphetamine. 
These drugs induce an increase in synaptic dopamine by inhibiting the reuptake of dopamine from the synaptic cleft, effectively increasing the amount of dopamine that reaches the target neuron. The reward pathway The reward pathway, also called the mesolimbic system of the brain, is the part of the brain that registers reward and pleasure. This circuit reinforces the behavior that leads to a positive and pleasurable outcome. In drug addiction, the drug-seeking behaviors become reinforced by the rush of dopamine that follows the administration of a drug of abuse. The effects of drugs of abuse on the ventral tegmental area (VTA) and the nucleus accumbens (NAc) have been studied extensively. Drugs of abuse change the complexity of dendritic branching as well as the number and size of the branches in both the VTA and the NAc. [7] By correlation, these structural changes have been linked to addictive behaviors. The effect of these structural changes on behavior is uncertain and studies have produced conflicting results. Two studies have shown that an increase in dendritic spine density due to cocaine exposure facilitates behavioral sensitization, while two other studies produce contradicting evidence. In response to drugs of abuse, structural changes can be observed in the size of neurons and the shape and number of the synapses between them. The nature of the structural changes is specific to the type of drug used in the experiment. Opiates and stimulants produce opposite effects in structural plasticity in the reward pathway. It is not expected that these drugs would induce opposing structural changes in the brain because these two classes of drugs, opiates and stimulants, both cause similar behavioral phenotypes. Both of these drugs induce increased locomotor activity acutely, escalated self-administration chronically, and dysphoria when the drug is taken away. Although their effects on structural plasticity are opposite, there are two possible explanations as to why these drugs still produce the same indicators of addiction: Either these changes produce the same behavioral phenotype when any change from baseline is produced, or the critical changes that cause the addictive behavior cannot be quantified by measuring dendritic spine density. Opiates decrease spine density and dendrite complexity in the nucleus accumbens (NAc). Morphine decreases spine density regardless of the treatment paradigm (with one exception: "chronic morphine increases spine number on orbitofrontal cortex (oPFC) pyramidal neurons"). Either chronic or intermittent administration of morphine will produce the same effect. The only case where opiates increase dendritic density is with chronic morphine exposure, which increases spine density on pyramidal neurons in the orbitofrontal cortex. Stimulants increase spinal density and dendritic complexity in the nucleus accumbens (NAc), ventral tegmental area (VTA), and other structures in the reward circuit. Ventral tegmental area There are neurons with cell bodies in the VTA that release dopamine onto specific parts of the brain, including many of the limbic regions such as the NAc, the medial prefrontal cortex (mPFC), dorsal striatum, amygdala, and the hippocampus. The VTA has both dopaminergic and GABAergic neurons that both project to the NAc and mPFC. GABAergic neurons in the VTA also synapse on local dopamine cells. In non-drug models, the VTA dopamine neurons are stimulated by rewarding experiences. 
A release of dopamine from the VTA neurons seems to be the driving action behind drug-induced pleasure and reward. Exposure to drugs of abuse elicits LTP at excitatory synapses on VTA dopamine neurons. Excitatory synapses in brain slices from the VTA taken 24 hours after a single cocaine exposure showed an increase in AMPA receptors in comparison to a saline control. Additional LTP could not be induced in these synapses. This is thought to be because the maximal amount of LTP had already been induced by the administration of cocaine. LTP is only seen on the dopamine neurons, not on neighboring GABAergic neurons. This is of interest because the administration of drugs of abuse increases the excitation of VTA dopamine neurons, but does not increase inhibition. Excitatory inputs into the VTA will activate the dopamine neurons 200%, but do not increase activation of GABA neurons which are important in local inhibition. This effect of inducing LTP in VTA slices 24 hours after drug exposure has been shown using morphine, nicotine, ethanol, cocaine, and amphetamines. These drugs have very little in common except that they are all potentially addictive. This is evidence supporting a link between structural changes in the VTA and the development of addiction. Changes other than LTP have been observed in the VTA after treatment with drugs of abuse. For example, neuronal body size decreased in response to opiates. Although the structural changes in the VTA invoked by exposure to an addictive drug generally disappear after a week or two, the target regions of the VTA, including the NAc, may be where the longer-term changes associated with addiction occur during the development of the addiction. Nucleus accumbens The nucleus accumbens plays an integral role in addiction. Almost every addictive drug of abuse induces the release of dopamine into the nucleus accumbens. The NAc is particularly important for instrumental learning, including cue-induced reinstatement of drug-seeking behavior. It is also involved in mediating the initial reinforcing effects of addictive drugs. The most common cell type in the NAc is the GABAergic medium spiny neuron. These neurons project inhibitory connections to the VTA and receive excitatory input from various other structures in the limbic system. Changes in the excitatory synaptic inputs into these neurons have been shown to be important in mediating addiction-related behaviors. It has been shown that LTP and LTD occurs at NAc excitatory synapses. Unlike the VTA, a single dose of cocaine induces no change in potentiation in the excitatory synapses of the NAc. LTD was observed in the medium spiny neurons in the NAc following two different protocols: a daily cocaine administration for five days or a single dose followed by 10–14 days of withdrawal. This suggests that the structural changes in the NAc are associated with long-term behaviors (rather than acute responses) associated with addiction such as drug seeking. Human relevance Relapse Neuroscientists studying addiction define relapse as the reinstatement of drug-seeking behavior after a period of abstinence. The structural changes in the VTA are hypothesized to contribute to relapse. As the molecular mechanisms of relapse are better understood, pharmacological treatments to prevent relapse are further refined. Risk of relapse is a serious and long-term problem for recovering addicts. 
An addict can be forced to abstain from using drugs while they are admitted in a treatment clinic, but once they leave the clinic they are at risk of relapse. Relapse can be triggered by stress, cues associated with past drug use, or re-exposure to the substance. Animal models of relapse can be triggered in the same way. Search for a cure for addiction The goal of addiction research is to find ways to prevent and reverse the effects of addiction on the brain. Theoretically, if the structural changes in the brain associated with addiction can be blocked, then the negative behaviors associated with the disease should never develop. Structural changes associated with addiction can be inhibited by NMDA receptor antagonists which block the activity of NMDA receptors. NMDA receptors are essential in the process of LTP and LTD. Drugs of this class are unlikely candidates for pharmacological prevention of addiction because these drugs themselves are used recreationally. Examples of NMDAR antagonists are ketamine, dextromethorphan (DXM), phencyclidine (PCP). Document 3::: An upper tropospheric cyclonic vortex is a vortex, or a circulation with a definable center, that usually moves slowly from east-northeast to west-southwest and is prevalent across Northern Hemisphere's warm season. Its circulations generally do not extend below in altitude, as it is an example of a cold-core low. A weak inverted wave in the easterlies is generally found beneath it, and it may also be associated with broad areas of high-level clouds. Downward development results in an increase of cumulus cloudy and the appearance of circulation at ground level. In rare cases, a warm-core cyclone can develop in its associated convective activity, resulting in a tropical cyclone and a weakening and southwest movement of the nearby upper tropospheric cyclonic vortex. Symbiotic relationships can exist between tropical cyclones and the upper level lows in their wake, with the two systems occasionally leading to their mutual strengthening. When they move over land during the warm season, an increase in monsoon rains occurs History of research Using charts of mean 200-hectopascal circulation for July through August (located above sea level) to locate the circumpolar troughs and ridges, trough lines extend over the eastern and central North Pacific and over the North Atlantic. Case studies of upper tropospheric cyclones in the Atlantic and Pacific have been performed by using airplane reports (winds, temperatures and heights), radiosonde data, geostationary satellite cloud imagery, and cloud-tracked winds throughout the troposphere. It was determined they were the origin of an upper tropospheric cold-core lows, or cut-off lows. Characteristics The tropical upper tropospheric cyclone has a cold core, meaning it is stronger aloft than at the Earth's surface, or stronger in areas of the troposphere with lower pressures. This is explained by the thermal wind relationship. It also means that a pool of cold air aloft is associated with the feature. If both an upper The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What role does the nucleus accumbens (NAc) play in addiction according to the text? A. It is responsible for the initial pleasure derived from drug use. B. It regulates dopamine release from the ventral tegmental area (VTA). C. It is involved in mediating the initial reinforcing effects of addictive drugs. D. 
It exclusively processes the withdrawal symptoms associated with addiction. Answer:
C. It is involved in mediating the initial reinforcing effects of addictive drugs.
Relavent Documents: Document 0::: In mathematics, the Tonelli–Hobson test gives sufficient criteria for a function ƒ on R2 to be an integrable function. It is often used to establish that Fubini's theorem may be applied to ƒ. It is named for Leonida Tonelli and E. W. Hobson. More precisely, the Tonelli–Hobson test states that if ƒ is a real-valued measurable function on R2, and either of the two iterated integrals or is finite, then ƒ is Lebesgue-integrable on R2. References Document 1::: Londina Illustrata. Graphic and Historic Memorials of Monasteries, Churches, Chapels, Schools, Charitable Foundations, Palaces, Halls, Courts, Processions, Places of Early Amusement and Modern & Present Theatres, In the Cities and Suburbs of London & Westminster was a book published in two volumes by Robert Wilkinson in 1819 & 1825, that had initially been released with William Herbert as groups of engravings between 1808 and 1819 which featured topographical illustrations by some of the foremost engravers and illustrators of the day, of the cities of London and Westminster, the county of Middlesex and some areas south of the River Thames, then in Surrey, such as Southwark. Most of the plates carry names of the draughtsman and engraver. A few early artists are included such as Wenceslaus Hollar. More recent draughtsmen included Robert Blemmell Schnebbelie, Frederick Nash, William Capon, George Jones, H. Gardner, George Shepherd, William Goodman, C.J.M. Whichelo, John Carter, Fellows, C. Westmacott, E. Burney, Bartholomew Howlett, Thomas H. Shepherd, Banks, Ravenhill, William Oram. Engravers include James Stow, T. Dale, Bartholomew Howlett, John Whichelo, W. Wise, Samuel Rawle, T. Bourne, H. Cook, M. Springsguth, Wenceslaus Hollar, Joseph Skelton, Israel Silvestre, Richard Sawyer, S. Springsguth junr., Taylor. Document 2::: The following is a list of the 21 largest civil settlements, reached between the United States Department of Justice and pharmaceutical companies from 2001 to 2017, ordered by the size of the total civil settlement. Some of these matters also resolved criminal fines and penalties, listed in parentheses, but these amounts are not considered when ranking these settlements as some of the settlements listed did not have a criminal component. Thus, this article provides the most accurate list of the largest civil-only portion of settlements between pharmaceutical companies and the Department of Justice. Because this article focuses on civil portions of settlements, some of the larger total settlements do not make the list. For example, Purdue Pharmaceuticals entered an agreement with the United States, pleading guilty to felony misbranding of OxyContin with intent to defraud and mislead under sections 33 1(a) and 333(a)(2) of the FD&C Act and agreed to pay more than $600 million, but only $160 million was allocated to resolve civil claims under the False Claims Act, while the remainder was allocated to resolve criminal claims and private claims. Legal claims against the pharmaceutical industry have varied widely over the past two decades, including Medicare and Medicaid fraud driven by off-label promotion, and inadequate manufacturing practices. With respect to off-label promotion, specifically, a federal court recognized that bills submitted to Medicare or Medicaid driven by off-label promotion as a violation of the False Claims Act for the first time in Franklin v. Parke-Davis, leading to a $430 million settlement. 
The civil portion of this settlement was $190 million, and it is the last settlement included in the below table. See also Pharmaceutical fraud List of off-label promotion pharmaceutical settlements Document 3::: The Intel Cluster Ready certification is a marketing program from Intel. It is aimed at hardware and software vendors in the low-end and mid-range cluster market. To get certified, systems have to fulfill a minimum set of cluster-specific requirements. This way, vendors of parallel software can build their applications on a basic cluster platform, trusting certain components to be present. Other drivers, libraries and tools will have to be provided by the software vendor or its partners, or by a system integrator. Description The program was announced in June 2007. The nodes of an Intel Cluster Ready compliant cluster are based on Xeon server processors and PC hardware, interconnected through Ethernet or InfiniBand. The operating system is a Linux distribution conforming to a specific file system layout. Also included are Intel's closed source but publicly available parallel libraries: the Message Passing Interface, Threading Building Blocks, and Math Kernel Library. Intel only specifies the requirements a cluster has to fulfill to get certified. The specific implementation is the responsibility of the platform vendor. Intel's Cluster Checker checks the system's compliance. It is not only deployed by the vendor, the integrator and the end user to verify the system, it can also be used to troubleshoot an operational cluster. While cluster hardware gets certified, software can be registered as well. Intel provides a minimal cluster infrastructure where software vendors can run their package, test scripts and test data. After successful completion, the application gets registered as being Intel Cluster Ready compliant. The Cluster Ready program is free for both hardware and software vendors. The Intel Cluster Ready program does not primarily include the high-end clusters used for scientific calculations at universities and research institutes. It aims to commoditize the parallel systems used for industrial and commercial applications. According to IDC, more than h The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary characteristic of a person classified as a "morning lark" according to the text? A. They usually stay up late at night. B. They feel most energetic just after waking up in the morning. C. They prefer to work in evening shifts. D. They have a low body temperature in the morning. Answer:
B. They feel most energetic just after waking up in the morning.
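The Tonelli–Hobson excerpt earlier in this block (Document 0 before the previous question) lost its two displayed iterated integrals, most likely rendered as images in the source. A standard reconstruction of the missing condition is sketched below in LaTeX; the notation is assumed, not recovered from the original.

% Tonelli-Hobson test: if f is measurable on R^2 and either iterated
% integral of |f| is finite, then f is Lebesgue-integrable on R^2.
\[
\int_{\mathbb{R}}\!\left(\int_{\mathbb{R}} |f(x,y)|\,dx\right)dy < \infty
\qquad\text{or}\qquad
\int_{\mathbb{R}}\!\left(\int_{\mathbb{R}} |f(x,y)|\,dy\right)dx < \infty
\;\Longrightarrow\; f \in L^{1}(\mathbb{R}^{2}).
\]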
Relavent Documents: Document 0::: In chemistry, a dihydrogen bond is a kind of hydrogen bond, an interaction between a metal hydride bond and an OH or NH group or other proton donor. With a van der Waals radius of 1.2 Å, hydrogen atoms do not usually approach other hydrogen atoms closer than 2.4 Å. Close approaches near 1.8 Å, are, however, characteristic of dihydrogen bonding. Boron hydrides An early example of this phenomenon is credited to Brown and Heseltine. They observed intense absorptions in the IR bands at 3300 and 3210 cm−1 for a solution of (CH3)2NHBH3. The higher energy band is assigned to a normal N−H vibration whereas the lower energy band is assigned to the same bond, which is interacting with the B−H. Upon dilution of the solution, the 3300 cm−1 band increased in intensity and the 3210 cm−1 band decreased, indicative of intermolecular association. Interest in dihydrogen bonding was reignited upon the crystallographic characterization of the molecule H3NBH3. In this molecule, like the one studied by Brown and Hazeltine, the hydrogen atoms on nitrogen have a partial positive charge, denoted Hδ+, and the hydrogen atoms on boron have a partial negative charge, often denoted Hδ−. In other words, the amine is a protic acid and the borane end is hydridic. The resulting B−H...H−N attractions stabilize the molecule as a solid. In contrast, the related substance ethane, H3CCH3, is a gas with a boiling point 285 °C lower. Because two hydrogen centers are involved, the interaction is termed a dihydrogen bond. Formation of a dihydrogen bond is assumed to precede formation of H2 from the reaction of a hydride and a protic acid. A very short dihydrogen bond is observed in NaBH4·2H2O with H−H contacts of 1.79, 1.86, and 1.94 Å. Coordination chemistry Protonation of transition metal hydride complexes is generally thought to occur via dihydrogen bonding. This kind of H−H interaction is distinct from the H−H bonding interaction in transition metal complexes having dihydrogen bound to a meta Document 1::: Ali Progri (October 10, 1929 - March 24, 2009) was an Albanian engineer. He was a participant in World War II and afterwards was graduated in Kiev, Ukraine (then part of Soviet Union). He had a remarkable career and held various high-ranking positions during his lifetime. Ali Progri was awarded the distinguished title Merited Teacher of Albania. Early life and education Ali Progri was born in the village of Progër in Albania, on 19 October 1929 and was the son of Muharrem Progri. He took part actively in the Albanian Resistance of World War II and after finishing his high school in Albania he went to Kiev, Ukraine in 1948 for his academic studies. In 1953 he was graduated from Kiev Polytechnic Institute, achieving high results. Career After returning to Albania he was active in the field of engineering and was among the founders of the modern University of Tirana in 1957 (even though the first academic institute, The Pedagogical Institute of Tirana was founded on December 20, 1946) and eventually he earned the position of vice-dean of the Faculty of Civil Engineering. Thanks to his experience and intellectual profile he would be chosen as Director in the Directory of Professional Middle Education in the Council of Ministers of Albania. As a distinguished engineer and expert in the field of applied mechanics he also earned the position of the Chief Technology Officer of the Train Factory in Tirana. 
Nevertheless, he is most notable for his 27 years as the Principal of the Polytechnic High School of Tirana. For his contribution in the Albanian Resistance of World War II he was awarded by the Parliament of Albania () the Medal of Liberation and Memory and for the experience and contribution in the education field, he gained the honorific title Merited Teacher of Albania''. The engineer Ali Progri also held the title Professor. He died in Tirana, on 24 March 2009, at the age of 80. References Document 2::: Arbitrariness is the quality of being "determined by chance, whim, or impulse, and not by necessity, reason, or principle". It is also used to refer to a choice made without any specific criterion or restraint. Arbitrary decisions are not necessarily the same as random decisions. For example, during the 1973 oil crisis, Americans were allowed to purchase gasoline only on odd-numbered days if their license plate was odd, and on even-numbered days if their license plate was even. The system was well-defined and not random in its restrictions; however, since license plate numbers are completely unrelated to a person's fitness to purchase gasoline, it was still an arbitrary division of people. Similarly, schoolchildren are often organized by their surname in alphabetical order, a non-random yet an arbitrary method—at least in cases where surnames are irrelevant. Philosophy Arbitrary actions are closely related to teleology, the study of purpose. Actions lacking a telos, a goal, are necessarily arbitrary. With no end to measure against, there can be no standard applied to choices, so all decisions are alike. Note that arbitrary or random methods in the standard sense of arbitrary may not qualify as arbitrary choices philosophically if they were done in furtherance of a larger purpose (such as the examples above for the purposes of establishing discipline in school and avoiding overcrowding at gas stations). Nihilism is the philosophy that believes that there is no purpose in the universe, and that every choice is arbitrary. According to nihilism, the universe contains no value and is essentially meaningless. Because the universe and all of its constituents contain no higher goal for us to make subgoals from, all aspects of human life and experiences are completely arbitrary. There is no right or wrong decision, thought or practice and whatever choice a human being makes is just as meaningless and empty as any other choice he or she could have made. Many brands of th Document 3::: The Mandarin paradox is an ethical parable used to illustrate the difficulty of fulfilling moral obligations when moral punishment is unlikely or impossible, leading to moral disengagement. It has been used to underscore the fragility of ethical standards when moral agents are separated by physical, cultural, or other distance, especially as facilitated by globalization. It was first posed by French writer Chateaubriand in "The Genius of Christianity" (1802): I ask my own heart, I put to myself this question: "If thou couldst by a mere wish kill a fellow-creature in China, and inherit his fortune in Europe, with the supernatural conviction that the fact would never be known, wouldst thou consent to form such a wish?" The paradox is famously used to foreshadow the character development of the arriviste Eugène de Rastignac in Balzac's novel Père Goriot. 
Rastignac asks Bianchon if he recalls the paradox, to which Bianchon first replies that he is "at [his] thirty-third mandarin," but then states that he would refuse to take an unknown man's life regardless of circumstance. Rastignac wrongly attributes the quote to Jean-Jacques Rousseau, which propagated to later writings. In fiction The Mandarin (novel) by José Maria de Eça de Queirós Button, Button by Richard Matheson (plus movie The Box (2009 film)) The Count of Monte Cristo by Alexandre Dumas See also Ring of Gyges References Bibliography Champsaur F., Le Mandarin, Paris, P. Ollendorff, 1895-1896 (réimprimé sous le nom L'Arriviste, Paris, Albin Michel, 1902) Schneider L. Tuer le Mandarin // Le Gaulois, 18 juillet 1926, no 17818 Paul Ronal. Tuer le mandarin // Revue de littérature comparée 10, no. 3 (1930): 520–23. E. Latham // Une citation de Rousseau // Mercure de France, 1er juin 1947, no 1006, p. 393. Barbey B. «Tuer le Mandarin» // Mercure de France, 1er septembre 1947, no 1009 Laurence W. Keates. Mysterious Miraculous Mandarin: Origins, Literary Paternity, Implications in Ethics // Revue de The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the redshift value of OH 471, making it one of the most distant objects observed? A. 2.50 B. 3.40 C. 4.00 D. 1.80 Answer:
B. 3.40
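For context on the preceding question: a redshift z relates emitted and observed wavelengths by the standard relation below. The numerical illustration for z = 3.40 is an added aid and is not taken from the source documents.

% Cosmological redshift: observed wavelengths are stretched by a factor (1 + z).
\[
1 + z = \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{emit}}},
\qquad z = 3.40 \;\Rightarrow\; \lambda_{\mathrm{obs}} = 4.40\,\lambda_{\mathrm{emit}}.
\]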
Relavent Documents: Document 0::: An osculating circle is a circle that best approximates the curvature of a curve at a specific point. It is tangent to the curve at that point and has the same curvature as the curve at that point. The osculating circle provides a way to understand the local behavior of a curve and is commonly used in differential geometry and calculus. More formally, in differential geometry of curves, the osculating circle of a sufficiently smooth plane curve at a given point p on the curve has been traditionally defined as the circle passing through p and a pair of additional points on the curve infinitesimally close to p. Its center lies on the inner normal line, and its curvature defines the curvature of the given curve at that point. This circle, which is the one among all tangent circles at the given point that approaches the curve most tightly, was named circulus osculans (Latin for "kissing circle") by Leibniz. The center and radius of the osculating circle at a given point are called center of curvature and radius of curvature of the curve at that point. A geometric construction was described by Isaac Newton in his Principia: Nontechnical description Imagine a car moving along a curved road on a vast flat plane. Suddenly, at one point along the road, the steering wheel locks in its present position. Thereafter, the car moves in a circle that "kisses" the road at the point of locking. The curvature of the circle is equal to that of the road at that point. That circle is the osculating circle of the road curve at that point. Mathematical description Let be a regular parametric plane curve, where is the arc length (the natural parameter). This determines the unit tangent vector , the unit normal vector , the signed curvature and the radius of curvature at each point for which is composed: Suppose that P is a point on γ where . The corresponding center of curvature is the point Q at distance R along N, in the same direction if k is positive and in the opposite Document 1::: In a low-IF receiver, the radio frequency (RF) signal is mixed down to a non-zero low or moderate intermediate frequency (IF). Typical frequency values are a few megahertz (instead of 33–40 MHz) for TV, and even lower frequencies, typically 120–130 kHz (instead of 10.7–10.8 MHz or 13.45 MHz) in the case of FM radio receivers or 455–470 kHz for AM radio (MW/LW/SW) receivers. Low-IF receiver topologies have many of the desirable properties of zero-IF architectures, but avoid the DC offset and 1/f noise problems. The use of a non-zero IF re-introduces the image issue. However, when there are relatively relaxed image and neighbouring channel rejection requirements they can be satisfied by carefully designed low-IF receivers. Image signal and unwanted blockers can be rejected by quadrature down-conversion (complex mixing) and subsequent filtering. This technique is now widely used in the tiny FM receivers incorporated into MP3 players and mobile phones and is becoming commonplace in both analog and digital TV receiver designs. Using advanced analog- and digital signal processing techniques, cheap, high quality receivers using no resonant circuits at all are now possible. Document 2::: High winds can blow railway trains off tracks and cause accidents. 
Dangers of high winds High winds can cause problems in a number of ways: blow trains off the tracks blow trains or wagons along the tracks and cause collisions cause cargo to blow off trains which can damage objects outside the railway or which other trains can collide with cause pantographs and overhead wiring to tangle cause trees and other objects to fall onto the railway. Preventative measures Risks from high winds can be reduced by: wind fences akin to snow sheds lower profile of carriages lowered centre of gravity of vehicles reduction in train speed or cancellation, at high winds a wider rail gauge improve overhead wiring with: regulated tension rather than fixed terminations shorter catenary spans solid conductors By country Australia 1928 – 47 wagons blown along line at Tocumwal 1931 – Kandos – wind blows level crossing gates closed in front of motor-cyclist 1943 – Hobart, Tasmania; Concern that wind will blow over doubledeck trams on gauge if top deck enclosed. 2010 – Marla, South Australia; Small tornado blows over train. Austria 1910 – Trieste (now in Italy) – train blown down embankment. China Lanxin High-Speed Railway#Wind shed risk February 28, 2007 – Wind blows 10 passenger rail cars off the track near Turpan, China. Denmark Great Belt Bridge rail accident. On 2 January 2019 a DSB express passenger train is hit by a semi-trailer from a passing cargo train on the western bridge of the Great Belt Fixed Link during Storm Alfrida, killing eight people and injuring 16. Germany Rügen narrow-gauge railway, 20 October 1936: derailment of a train, five injured India One reason for choosing broad gauge in India for greater stability in high winds. Ireland On the night of 30 January 1925, strong winds derailed carriages of a train crossing the Owencarrow Viaduct of the gauge Londonderry and Lough Swilly Railway. Japan Inaho Amarube Viaduct 1895 Document 3::: Duinviridae is a family of RNA viruses, which infect prokaryotes. Taxonomy Duinviridae contains the following 10 genera: Apeevirus Beshanovirus Cartwavirus Cubpivirus Derlivirus Dirlevirus Kahshuvirus Kohmavirus Samuneavirus Tehuhdavirus The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary purpose of the Artificially Expanded Genetic Information System (AEGIS) as described in the text? A. To create a new type of DNA for human use B. To understand the development of extraterrestrial life C. To replace natural DNA in all organisms D. To study the effects of synthetic nucleobases on human health Answer:
B. To understand the development of extraterrestrial life
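The "Mathematical description" paragraph in the osculating-circle excerpt earlier in this block lost its inline symbols and displayed formulas during extraction. A conventional reconstruction in LaTeX is given below; the symbols (gamma, T, N, k, R) are the usual ones and are assumed rather than recovered from the original.

% Osculating circle of an arc-length-parametrized plane curve gamma(s):
% unit tangent, unit normal, signed curvature, radius and center of curvature.
\[
T(s) = \gamma'(s), \qquad T'(s) = k(s)\,N(s), \qquad R(s) = \frac{1}{|k(s)|},
\]
\[
Q = P + \frac{1}{k(s)}\,N(s)
\qquad\text{(center of curvature at } P = \gamma(s),\ k(s) \neq 0\text{)}.
\]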
Relavent Documents: Document 0::: Kappa Ophiuchi, Latinized from κ Ophiuchi, is a star in the equatorial constellation Ophiuchus. It is a suspected variable star with an average apparent visual magnitude of 3.20, making it visible to the naked eye and one of the brighter members of this constellation. Based upon parallax measurements made during the Hipparcos mission, it is situated at a distance of around from Earth. The overall brightness of the star is diminished by 0.11 magnitudes due to extinction from intervening matter along the line of sight. The spectrum of this star matches a stellar classification of K2 III, with the luminosity class of 'III' indicating this is a giant star that has exhausted the hydrogen at its core and evolved away from the main sequence of stars like the Sun. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. It is 19% more massive than the Sun, but the outer envelope has expanded to around 11 times the Sun's radius. With its enlarged size, it is radiating 51 times the luminosity of the Sun from its outer atmosphere at an effective temperature of 4,449 K. This is cooler than the Sun's surface and gives Kappa Ophiuchi the orange-hued glow of a K-type star. Although designated as a variable star, observations with the Hipparcos satellite showed a variation of no more than 0.02 in magnitude. In designating this as a suspected variable star, it is possible that Kappa Ophiuchi was mistaken for Chi Ophiuchi, which is a variable star. Kappa Ophiuchi belongs to an evolutionary branch known as the red clump, making it a clump giant. The surface abundance of elements other than hydrogen and helium, what astronomers term the star's metallicity, is similar to the abundances of those elements in the Sun. Document 1::: Immersion chillers work by circulating a cooling fluid (usually tap water from a garden hose or faucet) through a copper/stainless steel coil that is placed directly in the hot wort. As the cooling fluid runs through the coil it absorbs and carries away heat until the wort has cooled to the desired temperature. The advantage of using a copper or stainless steel immersion chiller is the lower risk of contamination versus other methods when used in an amateur or homebrewing environment. The clean chiller is placed directly in the still boiling wort and thus sanitized before the cooling process begins. See also Brewing#Wort cooling coolship, alternate equipment for cooling References Document 2::: Salonga National Park (French: Parc National de la Salonga) is a national park in the Democratic Republic of the Congo located in the Congo River basin. It is Africa's largest tropical rainforest reserve covering about 36,000 km2 or . It extends into the provinces of Mai Ndombe, Equateur, Kasaï and Sankuru. In 1984, the national park was inscribed on the UNESCO World Heritage List for its protection of a large swath of relatively intact rainforest and its important habitat for many rare species. In 1999, the site has been listed as endangered due to poaching and housing construction. Following the improvement in its state of conservation, the site was removed from the endangered list in 2021. Geography The park is in an area of rainforest about halfway between Kinshasa, the capital, and Kisangani. There are no roads and most of the park is accessible only by river. Sections of the national park are almost completely inaccessible and have never been systematically explored. 
The southern region inhabited by the Iyaelima people is accessible via the Lokoro River, which flows through the center and northern parts of the park, and the Lula River in the south. The Salonga River meanders in a generally northwest direction through the Salonga National Park to its confluence with the Busira River. History The Salonga National Park was established as the Tshuapa National Park in 1956, and gained its present boundaries with a 1970 presidential decree by President Mobutu Sese Seko. It was registered as a UNESCO World Heritage Site in 1984. Due to the civil war in the eastern half of the country, it was added to the List of World Heritage in Danger in 1999. The park is co-managed by the Institut Congolais pour la Conservation de la Nature and the World Wide Fund for Nature since 2015. Extensive consultation is ongoing, with the two main populations living within the park; the Iyaelima, the last remaining residents of the park and the Kitawalistes, a religious sect who insta Document 3::: EXAPT (a portmanteau of "Extended Subset of APT") is a production-oriented programming language that allows users to generate NC programs with control information for machining tools and facilitates decision-making for production-related issues that may arise during various machining processes. EXAPT was first developed to address industrial requirements. Through the years, the company created additional software for the manufacturing industry. Today, EXAPT offers a suite of SAAS products and services for the manufacturing industry. The trade name, EXAPT, is most commonly associated with the CAD/CAM-System, production data, and tool management software of the German company EXAPT Systemtechnik GmbH based in Aachen, DE. General EXAPT is a modularly built programming system for all NC machining operations as Drilling Turning Milling Turn-Milling Nibbling Flame-, laser-, plasma- and water jet cutting Wire eroding Operations with industrial robots Due to the modular structure, the main product groups, EXAPTcam and EXAPTpdo, are gradually expandable and permit individual software for the manufacturing industry used individually and also in a compound with an existing IT environment. Functionality EXAPTcam meets the requirements for NC planning, especially for the cutting operations such as turning, drilling, and milling up to 5-axis simultaneous machining. Thereby new process technologies, tool, and machine concepts are constantly involved. In the NC programming data from different sources such as 3D CAD models, drawings or tables can flow in. The possibilities of NC programming reaches from language-oriented to feature-oriented NC programming. The integrated EXAPT knowledge database and intelligent and scalable automatisms support the user. The EXAPT NC planning also covers the generation of production information as clamping and tool plans, presetting data or time calculations. The realistic simulation possibilities of NC planning and NC control data provi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What characteristic of twigs is important for diagnosing their age and growth rate? A. Color of the bark B. Thickness of the pith C. Number of annular ring markings D. Length of the twig Answer:
C. Number of annular ring markings
Relavent Documents: Document 0::: The Cloaca Maxima ( , ) or, less often, Maxima Cloaca, is one of the world's earliest sewage systems. Its name is related to that of Cloacina, a Roman goddess. Built during either the Roman Kingdom or early Roman Republic, it was constructed in Ancient Rome in order to drain local marshes and remove waste from the city. It carried effluent to the River Tiber, which ran beside the city. The sewer started at the Forum Augustum and ended at the Ponte Rotto and Ponte Palatino. It began as an open air canal, but it developed into a much larger sewer over the course of time. Agrippa renovated and reconstructed much of the sewer. This would not be the only development in the sewers, by the first century AD all eleven Roman aqueducts were connected to the sewer. After the Roman Empire fell the sewer still was used. By the 19th century, it had become a tourist attraction. Some parts of the sewer are still used today. During its heyday, it was highly valued as a sacred symbol of Roman culture and Roman engineering. Construction and history According to tradition, it may have initially been constructed around 600 BC under the orders of the king of Rome, Tarquinius Priscus. He ordered Etruscan workers and the plebeians to construct the sewers. Before constructing the Cloaca Maxima, Priscus, and his son Tarquinius Superbus, worked to transform the land by the Roman forum from a swamp into a solid building ground, thus reclaiming the Velabrum. In order to achieve this, they filled it up with 10-20,000 cubic meters of soil, gravel, and debris. At the beginning of the sewer's life it consisted of open-air channels lined up with bricks centered around a main pipe. At this stage it might have had no roof. However, wooden holes spread throughout the sewer indicate that wooden bridges may have been built over it, which possibly functioned as a roof. Alternatively, the holes could have functioned as a support for the scaffolding needed to construct the sewer. The Cloaca Maxima may al Document 1::: Optimal virulence is a concept relating to the ecology of hosts and parasites. One definition of virulence is the host's parasite-induced loss of fitness. The parasite's fitness is determined by its success in transmitting offspring to other hosts. For about 100 years, the consensus was that virulence decreased and parasitic relationships evolved toward symbiosis. This was even called the law of declining virulence despite being a hypothesis, not even a theory. It has been challenged since the 1980s and has been disproved. A pathogen that is too restrained will lose out in competition to a more aggressive strain that diverts more host resources to its own reproduction. However, the host, being the parasite's resource and habitat in a way, suffers from this higher virulence. This might induce faster host death, and act against the parasite's fitness by reducing probability to encounter another host (killing the host too fast to allow for transmission). Thus, there is a natural force providing pressure on the parasite to "self-limit" virulence. The idea is, then, that there exists an equilibrium point of virulence, where parasite's fitness is highest. Any movement on the virulence axis, towards higher or lower virulence, will result in lower fitness for the parasite, and thus will be selected against. Mode of transmission Paul W. Ewald has explored the relationship between virulence and mode of transmission. 
He came to the conclusion that virulence tends to remain especially high in waterborne and vector-borne infections, such as cholera and dengue. Cholera is spread through sewage and dengue through mosquitos. In the case of respiratory infections, the pathogen depends on an ambulatory host to survive. It must spare the host long enough to find a new host. Water- or vector-borne transmission circumvents the need for a mobile host. Ewald is convinced that the crowding of field hospitals and trench warfare provided an easy route to transmission that evolved the Document 2::: Obstacle avoidance, in robotics, is a critical aspect of autonomous navigation and control systems. It is the capability of a robot or an autonomous system/machine to detect and circumvent obstacles in its path to reach a predefined destination. This technology plays a pivotal role in various fields, including industrial automation, self-driving cars, drones, and even space exploration. Obstacle avoidance enables robots to operate safely and efficiently in dynamic and complex environments, reducing the risk of collisions and damage. For a robot or autonomous system to successfully navigate through obstacles, it must be able to detect such obstacles. This is most commonly done through the use of sensors, which allow the robot to process its environment, make a decision on what it must do to avoid an obstacle, and carry out that decision with the use of its effectors, or tools that allow a robot to interact with its environment. Approaches There are several methods for robots or autonomous machines to carry out their decisions in real-time. Some of these methods include sensor-based approaches, path planning algorithms, and machine learning techniques. Sensor-based One of the most common approaches to obstacle avoidance is the use of various sensors, such as ultrasonic, LiDAR, radar, sonar, and cameras. These sensors allow an autonomous machine to do a simple 3 step process: sense, think, and act. They take in inputs of distances in objects and provide the robot with data about its surroundings enabling it to detect obstacles and calculate their distances. The robot can then adjust its trajectory to navigate around these obstacles while maintaining its intended path. All of this is done and carried out in real-time and can be practically and effectively used in most applications of obstacle avoidance While this method works well under most circumstances, there are such where more advanced techniques could be useful and appropriate for efficiently reaching an en Document 3::: The Jesus Incident (1979) is the second science fiction novel set in the Destination: Void universe by the American author Frank Herbert and poet Bill Ransom. It is a sequel to Destination: Void (1965), and has two sequels: The Lazarus Effect (1983) and The Ascension Factor (1988). Plot introduction The book takes place at an indeterminate time following the events in Destination: Void. At the end of Destination: Void the crew of the ship had succeeded in creating an artificial consciousness. The new conscious being, now known as 'Ship', gains a level of awareness that allows it to manipulate space and time. Ship instantly transports itself to a planet which it has decided the crew will colonize, christening it "Pandora". The first book ends with a demand from Ship for the crew to learn how to WorShip or how to establish a relationship with Ship, a godlike being. 
The action of the book is divided between two settings, the internal spaces of Ship which is orbiting Pandora and the settlements on the planet. While the original crew of Ship, as described in Destination: Void, were cloned human beings from the planet Earth, by the time of The Jesus Incident, the crew has become a mixed bag of peoples from various cultures that have been accepted as crew members by Ship when it visited their planet as well as people who have been conceived and born on the ship. Evidently Ship has shown up at a number of planets as the suns of those planets were going nova, the implication being that these planets were other, failed experiments by Ship to establish a relationship with human beings. Ship refers to these as replays of human history, suggesting Ship itself has manipulated human history time and time again. Ship's charge for humans to decide how to WorShip still remains unsatisfied. In the opening chapters, Ship reveals that Pandora will be a final test for the human race. Ship awakens the Chaplain/Psychiatrist Raja Flattery (part of the original Destination: Void crew) The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the expected completion year for the first stage of the Dasu Dam project? A. 2024 B. 2028 C. 2029 D. 2030 Answer:
C. 2029
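The obstacle-avoidance excerpt earlier in this block describes a sensor-driven sense-think-act cycle: read distances, decide, command the motors. A minimal Python sketch of that loop follows; the robot interface (read_distances, set_velocity) is a hypothetical placeholder, not an API named in the source.

import time

SAFE_DISTANCE = 0.5  # meters; readings below this are treated as obstacles


def think(distances):
    """Decide (forward_speed, turn_rate) from left/front/right distances."""
    left, front, right = distances
    if front > SAFE_DISTANCE:
        return 0.3, 0.0          # path clear: drive straight ahead
    if left > right:
        return 0.0, 0.8          # obstacle ahead: turn toward the more open side
    return 0.0, -0.8


def avoid_obstacles(robot, period=0.05):
    """Run the sense-think-act cycle at a fixed period until interrupted."""
    while True:
        left, front, right = robot.read_distances()   # sense
        speed, turn = think((left, front, right))     # think
        robot.set_velocity(speed, turn)                # act
        time.sleep(period)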
Relavent Documents: Document 0::: In histology, the HPS stain, or hematoxylin phloxine saffron stain, is a way of marking tissues. HPS is similar to H&E, the standard bearer in histology. However, it differentiates between the most common connective tissue (collagen) and muscle and cytoplasm by staining the former yellow and the latter two pink, unlike an H&E stain, which stains all three pink. HPS stained sections are more expensive than H&E stained sections, primarily due to the cost of saffron. See also Histopathology References External links Histopathology Laboratory - Kingston General Hospital. Document 1::: V553 Centauri is a variable star in the southern constellation of Centaurus, abbreviated V553 Cen. It ranges in brightness from an apparent visual magnitude of 8.22 down to 8.80 with a period of 2.06 days. At that magnitude, it is too dim to be visible to the naked eye. Based on parallax measurements, it is located at a distance of approximately 1,890 light years from the Sun. Observations The variability of this star was announced in 1936 by C. Hoffmeister. In 1957, he determined it to be a Delta Cepheid variable with a magnitude range of and a periodicity of . The observers M. W. Feast and G. H. Herbig noted a peculiar spectrum with strong absorption lines of the molecules CH and CN, while neutral iron lines are unusually weak. They found a stellar classification of G5p I–III. In 1972, T. Lloyd-Evans and associates found the star's prominent bands of C2, CH, and CN varied with the Cepheid phase, being strongest at minimum. They suggested a large overabundance of carbon in the star's atmosphere. Chemical analysis of the atmosphere in 1979 showed a metallicity close to solar, with an enhancement of carbon and nitrogen. It was proposed that V553 Cen is an evolved RR Lyrae variable and is now positioned above the horizontal branch on the HR diagram. V553 Cen is classified as a BL Herculis variable, being a low–mass type II Cepheid with a period between . As with other variables of this type, it displays a secondary bump on its light curve. It is a member of a small group of carbon Cepheids, and is one of the brightest stars of that type. V553 Cen does not appear to have a companion. From the luminosity and shape of the light curve, stellar models from 1981 suggest a mass equal to 49% of the Sun's with 9.9 times the radius of the Sun. Further analysis of the spectrum showed that oxygen is not enhanced, but sodium may be moderately enhanced. There is no evidence of s-process enhancement of elements. Instead, the abundance peculiarities are the result of nuclear reaction sequences followed by dredge-up. In particular, these are the product of triple-α, CN, ON, and perhaps some Ne–Na reactions. See also Carbon star RT Trianguli Australis Further reading Document 2::: Taylor–Maccoll flow refers to the steady flow behind a conical shock wave that is attached to a solid cone. The flow is named after G. I. Taylor and J. W. Maccoll, whom described the flow in 1933, guided by an earlier work of Theodore von Kármán. Mathematical description Consider a steady supersonic flow past a solid cone that has a semi-vertical angle . A conical shock wave can form in this situation, with the vertex of the shock wave lying at the vertex of the solid cone. If it were a two-dimensional problem, i.e., for a supersonic flow past a wedge, then the incoming stream would have deflected through an angle upon crossing the shock wave so that streamlines behind the shock wave would be parallel to the wedge sides. 
Such a simple turnover of streamlines is not possible for three-dimensional case. After passing through the shock wave, the streamlines are curved and only asymptotically they approach the generators of the cone. The curving of streamlines is accompanied by a gradual increase in density and decrease in velocity, in addition to those increments/decrements effected at the shock wave. The direction and magnitude of the velocity immediately behind the oblique shock wave is given by weak branch of the shock polar. This particularly suggests that for each value of incoming Mach number , there exists a maximum value of beyond which shock polar do not provide solution under in which case the conical shock wave will have detached from the solid surface (see Mach reflection). These detached cases are not considered here. The flow immediately behind the oblique conical shock wave is typically supersonic, although however when is close to , it can be subsonic. The supersonic flow behind the shock wave will become subsonic as it evolves downstream. Since all incident streamlines intersect the conical shock wave at the same angle, the intensity of the shock wave is constant. This particularly means that entropy jump across the shock wave is also constant t Document 3::: Processing is a free graphics library and integrated development environment (IDE) built for the electronic arts, new media art, and visual design communities with the purpose of teaching non-programmers the fundamentals of computer programming in a visual context. Processing uses the Java programming language, with additional simplifications such as additional classes and aliased mathematical functions and operations. It also provides a graphical user interface for simplifying the compilation and execution stage. The Processing language and IDE have been the precursor to other projects including Arduino and Wiring. History The project was initiated in 2001 by Casey Reas and Ben Fry, both formerly of the Aesthetics and Computation Group at the MIT Media Lab. In 2012, they started the Processing Foundation along with Daniel Shiffman, who joined as a third project lead. Johanna Hedva joined the Foundation in 2014 as Director of Advocacy. Originally, Processing had used the domain proce55ing.net, because the processing domain was taken; Reas and Fry eventually acquired the domain processing.org and moved the project to it in 2004. While the original name had a combination of letters and numbers, it was always officially referred to as processing, but the abbreviated term p5 is still occasionally used (e.g. in "p5.js") in reference to the old domain name. In 2012 the Processing Foundation was established and received 501(c)(3) nonprofit status, supporting the community around the tools and ideas that started with the Processing Project. The foundation encourages people around the world to meet annually in local events called Processing Community Day. Features Processing includes a sketchbook, a minimal alternative to an integrated development environment (IDE) for organizing projects. Every Processing sketch is actually a subclass of the PApplet Java class (formerly a subclass of Java's built-in Applet) which implements most of the Processing language's features The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the apparent visual magnitude range of V553 Centauri as mentioned in the text? A. 8.22 to 8.80 B. 8.00 to 8.50 C. 7.50 to 8.00 D. 8.10 to 8.60 Answer:
A. 8.22 to 8.80
Relavent Documents: Document 0::: Roadside conservation is a conservation strategy in Australia and other countries where Road verge flora and habitats are protected or improved. The general aim is to conserve or increase the amount of native flora species; especially where that work will lead to higher conservation value, for example providing food or habitat for rare or endangered native fauna. Issues Potential benefits of roadside conservation strategies can include: Maintenance of the biodiversity of the roadsides Sustaining available corridors for fauna movement and habitat Preserving remnants of native vegetation adjoining man-made environments Problems with the maintaining of roadsides include: Removal, cutting of native vegetation Conflicts with road safety, such as foliage growth which restricts visibility for road users Work-load due to the extent of roadsides (i.e. the great length) Western Australia Formal recognition of the importance of roadside reserves occurred in the 1960s when then-Premier of Western Australia, the Hon. David Brand, ensured all new roads in Western Australia would have road reserves at least 40 metres wider than that needed for transport purposes. Notes Document 1::: Calculus of voting refers to any mathematical model which predicts voting behaviour by an electorate, including such features as participation rate. A calculus of voting represents a hypothesized decision-making process. These models are used in political science in an attempt to capture the relative importance of various factors influencing an elector to vote (or not vote) in a particular way. Example One such model was proposed by Anthony Downs (1957) and is adapted by William H. Riker and Peter Ordeshook, in “A Theory of the Calculus of Voting” (Riker and Ordeshook 1968) V = pB − C + D where V = the proxy for the probability that the voter will turn out p = probability of vote “mattering” B = “utility” benefit of voting--differential benefit of one candidate winning over the other C = costs of voting (time/effort spent) D = citizen duty, goodwill feeling, psychological and civic benefit of voting (this term is not included in Downs's original model) A political science model based on rational choice used to explain why citizens do or do not vote. The alternative equation is V = pB + D > C Where for voting to occur the the vote will matter "times" the (B)enefit of one candidate winning over another combined with the feeling of civic (D)uty, must be greater than the (C)ost of voting References Downs, Anthony. 1957. An Economic Theory of Democracy. New York: Harper & Row. Riker, William and Peter Ordeshook. 1968. “A Theory of the Calculus of Voting.” American Political Science Review 62(1): 25–42. Document 2::: The state auditor of Massachusetts is an elected constitutional officer in the executive branch of the U.S. state of Massachusetts. Twenty-six individuals have occupied the office of state auditor since the office's creation in 1849. The incumbent is Diana DiZoglio, a Democrat. Election Term of office The state auditor is elected by the people on Election Day in November to four-year terms, and takes office on the third Wednesday of the January following a general election. There is no limit to the number of terms a state auditor may hold. Institutionally speaking, the state auditor is thus completely independent of both the governor and General Court for the purpose of performing their official duties. 
These constitutional protections notwithstanding, the state auditor may still be impeached for misconduct or maladministration by the House of Representatives and, if found guilty, removed from office by the Senate. Qualifications Any person seeking election to the office of state auditor must meet the following requirements: Be at least eighteen years of age; Be a registered voter in Massachusetts; Be a Massachusetts resident for at least five years when elected; and Receive 5,000 signatures from registered voters on nomination papers. Vacancies In the event of a vacancy in the office of state auditor, the General Court is charged, if in session, with electing from among the eligible citizens of the Commonwealth a successor to serve the balance of the prior auditor's term in office. If, however, the vacancy occurs while the General Court is not in session, then responsibility for appointing a successor falls to the governor. The appointment is not valid without the advice and consent of the Governor's Council. Powers and duties The state auditor conducts independent and objective performance audits of each department, office, commission, agency, authority, institution, court, county, and any other activity of the Commonwealth, including programs and contractor Document 3::: Fish-booking is the process of pre-ordering delivery of freshly caught, unfrozen fish, crustaceans and mollusks of ocean, sea or river origin directly from the fishermen, fisheries using a specialized service aggregator or directly. Fish-booking is adherent to zero waste concept aimed at the reduction and minimization of waste in the areas, where it is possible. Pre-ordering fish through fish-booking enables consumers to ‘book’ the exact amount of product they intend to consume directly from the fisherman, without wasting resources on the part of the catch that will turn into trash. These kinds of services are popular in the countries with their own access to sea / ocean and relatively short delivery distances. The services providing quick delivery of fresh fish over long distances requiring air travel are also being developed. In addition, the so-called “subscription” services for the delivery of fish with a certain regularity, also allowing to plan the harvest and ensure prudent use of natural resource are gaining a widespread use. Reduction of harvested fish losses According to The State of World Fisheries and Aquaculture 2020 published by the Food and Agriculture Organization of the United Nations, 35% of the global harvest, caught or grown for the needs of the consumers, are lost or wasted during transportation and processing, or as leftover stock at the wholesale warehouse and retail stores. This amounted to around 62 million tons of the total fisheries and aquaculture production (179 million tons) in 2018. Also, fish and fishery products are the first to be thrown out from the fridge by end consumers, if they get spoiled. It is an additional loss ranging from 5% (China) to 30% (United States) in different regions of the world at consumer level. Fish-booking helps eliminate these unnecessary losses through direct contracts with the fishermen or fisheries and pre-planned orders. The examples of companies operating based on fish-booking principle (delivery o The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary function of the Intel 82288 bus controller? A. To act as a memory controller B. 
To control the bus for Intel 80286 processors C. To serve as a graphics processor D. To replace the CPU in older systems Answer:
B. To control the bus for Intel 80286 processors
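The calculus-of-voting excerpt earlier in this block gives the Riker–Ordeshook form V = pB − C + D, with turnout predicted when V is positive (equivalently pB + D > C). A small numerical illustration in Python is sketched below; the parameter values are made up for illustration and do not come from the source.

def turnout_utility(p, B, C, D):
    """Riker-Ordeshook calculus of voting: V = p*B - C + D."""
    return p * B - C + D

# Illustrative values only: the pivotality probability p is tiny, so the
# duty/expressive term D typically decides whether V ends up positive.
V = turnout_utility(p=1e-6, B=1000.0, C=0.5, D=1.0)
print(f"V = {V:.4f} -> {'votes' if V > 0 else 'abstains'}")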
Relavent Documents: Document 0::: The NSLU2 (Network Storage Link for USB 2.0 Disk Drives) is a network-attached storage (NAS) device made by Linksys introduced in 2004 and discontinued in 2008. It makes USB flash memory and hard disks accessible over a network using the SMB protocol (also known as Windows file sharing or CIFS). It was superseded mainly by the NAS200 (enclosure type storage link) and in another sense by the WRT600N and WRT300N/350N which both combine a Wi-Fi router with a storage link. The device runs a modified version of Linux and by default, formats hard disks with the ext3 filesystem, but a firmware upgrade from Linksys adds the ability to use NTFS and FAT32 formatted drives with the device for better Windows compatibility. The device has a web interface from which the various advanced features can be configured, including user and group permissions and networking options. Hardware The device has two USB 2.0 ports for connecting hard disks and uses an ARM-compatible Intel XScale IXP420 CPU. In models manufactured prior to around April 2006, Linksys had underclocked the processor to 133 MHz, though a simple hardware modification to remove this restriction is possible. Later models (circa. May 2006) are clocked at the rated speed of 266 MHz. The device includes 32 MB of SDRAM, and 8 MB of flash memory. It also has a 100 Mbit/s Ethernet network connection. The NSLU2 is fanless, making it completely silent. User community Stock, the device runs a customised version of Linux. Linksys was required to release their source code as per the terms of the GNU General Public License. Due to the availability of source code, the NSLU2's use of well-documented commodity components and its relatively low price, there are several community projects centered around it, including hardware modifications, alternative firmware images, and alternative operating systems with varying degrees of reconfiguration. Hardware modifications Unofficial hardware modifications include: Doubling the clock freq Document 1::: The Porto Alegre Botanical Garden is a Foundation of Rio Grande do Sul located on the street Salvador França, in Porto Alegre, Brazil. History The project for a botanical garden in Porto Alegre dates back to the beginning of the 19th century, when Dom Joao VI, after creating the Rio de Janeiro Botanical Garden, sent seedlings to Porto Alegre to establish another similar park in the city. Unfortunately these seedlings did not come to the capital, remaining trapped in Rio Grande, where they were planted. The agriculturist Paul Schoenwald subsequently donated a plot of land to the state government to establish a green area, but the project was unsuccessful. A third attempt would be made in 1882, when councilman Francisco Pinto de Souza presented a proposal for scientific exploitation of the area then known as the Várzea de Petrópolis, providing a garden and a promenade. Considered utopian, the plan was terminated and lay dormant for decades, only returning to consideration in the mid-20th century. In 1953 the 2136 law authorized the selling of an area of 81.57 hectares, of which 50 hectares would be to create a park or botanical garden. A committee, which included prominent teacher and religious figure Teodoro Luís, was formed to develop the project, which began in 1957 with the first planting of selected species: a collection of palm trees, conifers and succulent. When opened to the public on September 10, 1958, it already featured nearly 600 species. 
Soon after, in 1962, was inaugurated the oven for cacti, in the 1970s and the botanical garden was integrated into Fundação Zoobotânica Foundation, along with the Park Zoo and the Museum of Natural Sciences. This season began the collection of trees, with emphasis on families of ecological importance (Myrtaceae, Rutaceae, Myrsinaceae, Bignoniaceae, Fabales, Zingiberales, among others), thematic groups (condiments and scented) and forest formations typical of the state, and is launched a program for expeditions to c Document 2::: The Agricultural Technology Research Program (ATRP) is part of the Aerospace, Transportation and Advanced Systems Laboratory of the Georgia Tech Research Institute. It was founded in 1973 to work with Georgia agribusiness, especially the poultry industry, to develop new technologies and adapt existing ones for specialized industrial needs. The program's goal is to improve productivity, reduce costs, and enhance safety and health through technological innovations. ATRP conducts state-sponsored and contract research for industry and government agencies. Researchers focus efforts on both immediate and long-term industrial needs, including advanced robotic systems for materials handling, machine-vision developments for grading and control, improved wastewater treatment technologies, and biosensors for rapid microbial detection. With guidance from the Georgia Poultry Federation, ATRP also conducts a variety of outreach activities to provide the industry with timely information and technical assistance. Researchers have complementary backgrounds in mechanical, electrical, computer, environmental, and safety engineering; physics; and microbiology. ATRP is one of the oldest and largest agricultural technology research and development programs in the nation, and is conducted in cooperation with the Georgia Poultry Federation with funding from the Georgia General Assembly. Research and development areas Robotics and automation systems Robotics/automation research studies focus heavily on integrated, "intelligent" automation systems. These systems offer major opportunities to further enhance productivity in the poultry and food processing industries. They incorporate advanced sensors, robotics, and computer simulation and control technologies in an integrated package and tackle a number of unique challenges in trying to address specific industrial needs. Advanced imaging and sensor concepts Advanced imaging and sensor concepts research studies focus on the design of syste Document 3::: A loader is a heavy equipment machine used in construction to move or load materials such as soil, rock, sand, demolition debris, etc. into or onto another type of machinery (such as a dump truck, conveyor belt, feed-hopper, or railroad car). There are many types of loader, which, depending on design and application, are variously called a bucket loader, end loader, front loader, front-end loader, payloader, high lift, scoop, shovel dozer, skid-steer, skip loader, tractor loader or wheel loader. Description A loader is a type of tractor, usually wheeled, sometimes on tracks, that has a front-mounted wide bucket connected to the end of two booms (arms) to scoop up loose material from the ground, such as dirt, sand or gravel, and move it from one place to another without pushing the material across the ground. A loader is commonly used to move a stockpiled material from ground level and deposit it into an awaiting dump truck or into an open trench excavation. 
The loader assembly may be a removable attachment or permanently mounted. Often the bucket can be replaced with other devices or tools—for example, many can mount forks to lift heavy pallets or shipping containers, and a hydraulically opening "clamshell" bucket allows a loader to act as a light dozer or scraper. The bucket can also be augmented with devices like a bale grappler for handling large bales of hay or straw. Large loaders, such as the Kawasaki 95ZV-2, John Deere 844K, ACR 700K Compact Wheel Loader, Caterpillar 950H, Volvo L120E, Case 921E, or Hitachi ZW310 usually have only a front bucket and are called front loaders, whereas small loader tractors are often also equipped with a small backhoe and are called backhoe loaders or loader backhoes or JCBs, after the company that first claimed to have invented them. Other companies like CASE in America and Whitlock in the UK had been manufacturing excavator loaders well before JCB. The largest loader in the world is LeTourneau L-2350. Currently these la The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What notable event occurred in 1883 related to cryptography? A. The invention of the Playfair cipher B. The publication of Auguste Kerckhoffs' La Cryptographie militaire C. The first use of the one-time pad D. The establishment of the first long-distance semaphore telegraph line Answer:
B. The publication of Auguste Kerckhoffs' La Cryptographie militaire
Relavent Documents: Document 0::: The IBM System/4 Pi is a family of avionics computers used, in various versions, on the F-15 Eagle fighter, E-3 Sentry AWACS, Harpoon Missile, NASA's Skylab, MOL, and the Space Shuttle, as well as other aircraft. Development began in 1965, deliveries in 1967. They were developed by the IBM Federal Systems Division and produced by the Electronics Systems Center in Owego, NY. It descends from the approach used in the System/360 mainframe family of computers, in which the members of the family were intended for use in many varied user applications. (This is expressed in the name: there are 4π steradians in a sphere, just as there are 360 degrees in a circle.) Previously, custom computers had been designed for each aerospace application, which was extremely costly. Early models In 1967, the System/4 Pi family consisted of these basic models: Model TC (Tactical Computer) - A briefcase-size computer for applications such as missile guidance, helicopters, satellites and submarines. Model CP (Customized Processor/Cost Performance) - An intermediate-range processor for applications such as aircraft navigation, weapons delivery, radar correlation and mobile battlefield systems. Model CP-2 (Cost Performance - Model 2) Model EP (Extended Performance) - A large-scale data processor for applications requiring real-time processing of large volumes of data, such as crewed spacecraft, airborne warning and control systems and command and control systems. Model EP used an instruction subset of IBM System/360 (Model 44) - user programs could be checked on System/360 The Skylab space station employed the model TC-1, which had a 16-bit word length and 16,384 words of memory with a custom input/output assembly. Skylab had two, redundant, TC-1 computers: a prime (energized) and a backup (non energized.) There would be an automatic switchover (taking on the order of one second) to the backup in the event of a critical failure of the prime. A total of twelve were delivered to NASA Document 1::: Lift-induced drag, induced drag, vortex drag, or sometimes drag due to lift, in aerodynamics, is an aerodynamic drag force that occurs whenever a moving object redirects the airflow coming at it. This drag force occurs in airplanes due to wings or a lifting body redirecting air to cause lift and also in cars with airfoil wings that redirect air to cause a downforce. It is symbolized as , and the lift-induced drag coefficient as . For a constant amount of lift, induced drag can be reduced by increasing airspeed. A counter-intuitive effect of this is that, up to the speed-for-minimum-drag, aircraft need less power to fly faster. Induced drag is also reduced when the wingspan is higher, or for wings with wingtip devices. Explanation The total aerodynamic force acting on a body is usually thought of as having two components, lift and drag. By definition, the component of force parallel to the oncoming flow is called drag; and the component perpendicular to the oncoming flow is called lift. At practical angles of attack the lift greatly exceeds the drag. Lift is produced by the changing direction of the flow around a wing. The change of direction results in a change of velocity (even if there is no speed change), which is an acceleration. To change the direction of the flow therefore requires that a force be applied to the fluid; the total aerodynamic force is simply the reaction force of the fluid acting on the wing. 
An aircraft in slow flight at a high angle of attack will generate an aerodynamic reaction force with a high drag component. By increasing the speed and reducing the angle of attack, the lift generated can be held constant while the drag component is reduced. At the optimum angle of attack, total drag is minimised. If speed is increased beyond this, total drag will increase again due to increased profile drag. Vortices When producing lift, air below the wing is at a higher pressure than the air pressure above the wing. On a wing of finite span, this p Document 2::: Confusion of the inverse, also called the conditional probability fallacy or the inverse fallacy, is a logical fallacy whereupon a conditional probability is equated with its inverse; that is, given two events A and B, the probability of A happening given that B has happened is assumed to be about the same as the probability of B given A, when there is actually no evidence for this assumption. More formally, P(A|B) is assumed to be approximately equal to P(B|A). Examples Example 1 In one study, physicians were asked to give the chances of malignancy with a 1% prior probability of occurring. A test can detect 80% of malignancies and has a 10% false positive rate. What is the probability of malignancy given a positive test result? Approximately 95 out of 100 physicians responded the probability of malignancy would be about 75%, apparently because the physicians believed that the chances of malignancy given a positive test result were approximately the same as the chances of a positive test result given malignancy. The correct probability of malignancy given a positive test result as stated above is 7.5%, derived via Bayes' theorem: Other examples of confusion include: Hard drug users tend to use marijuana; therefore, marijuana users tend to use hard drugs (the first probability is marijuana use given hard drug use, the second is hard drug use given marijuana use). Most accidents occur within 25 miles from home; therefore, you are safest when you are far from home. Terrorists tend to have an engineering background; so, engineers have a tendency towards terrorism. For other errors in conditional probability, see the Monty Hall problem and the base rate fallacy. Compare to illicit conversion. Example 2 In order to identify individuals having a serious disease in an early curable form, one may consider screening a large group of people. While the benefits are obvious, an argument against such screenings is the disturbance caused by false positive screening r Document 3::: A guard byte is a part of a computer program's memory that helps software developers find buffer overflows while developing the program. Principle When a program is compiled for debugging, all memory allocations are prefixed and postfixed by guard bytes. Special memory allocation routines may then perform additional tasks to determine unwanted read and write attempts outside the allocated memory. These extra bytes help to detect that the program is writing into (or even reading from) inappropriate memory areas, potentially causing buffer overflows. In case of accessing these bytes by the program's algorithm, the programmer is warned with information assisting them to locate the problem. Checking for the inappropriate access to the guard bytes may be done in two ways: by setting a memory breakpoint on a condition of write and/or read to those bytes, or by pre-initializing the guard bytes with specific values and checking the values upon deallocation. 
The first way is possible only with a debugger that handles such breakpoints, but significantly increases the chance of locating the problem. The second way does not require any debuggers or special environments and can be done even on other computers, but the programmer is alerted about the overflow only upon the deallocation, which is sometimes quite late. Because guard bytes require additional code to be executed and additional memory to be allocated, they are used only when the program is compiled for debugging. When compiled as a release, guard bytes are not used at all, neither the routines working with them. Example A programmer wants to allocate a buffer of 100 bytes of memory while debugging. The system memory allocating routine will allocate 108 bytes instead, adding 4 leading and 4 trailing guard bytes, and return a pointer shifted by the 4 leading guard bytes to the right, hiding them from the programmer. The programmer should then work with the received pointer without the knowledge of the presence of The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What does total correlation quantify in a set of random variables? A. The total number of variables in the set B. The dependencies or redundancies among the variables C. The individual entropies of each variable D. The maximum possible entropy of the variable set Answer:
B. The dependencies or redundancies among the variables
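A worked check of the confusion-of-the-inverse excerpt above, which states that the correct probability of malignancy given a positive test result is 7.5% rather than the roughly 75% most physicians answered. This is a minimal Python sketch of the Bayes' theorem calculation; the 1% prior, 80% sensitivity and 10% false-positive rate come directly from the excerpt, and the variable names are mine:

# Bayes' theorem for the malignancy example in the "confusion of the inverse" excerpt:
# P(malignant | positive) = P(positive | malignant) * P(malignant) / P(positive)
prior = 0.01           # P(malignant): 1% prior probability of malignancy
sensitivity = 0.80     # P(positive | malignant): the test detects 80% of malignancies
false_positive = 0.10  # P(positive | not malignant): 10% false positive rate

# Total probability of a positive result (law of total probability)
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior probability of malignancy given a positive result
posterior = sensitivity * prior / p_positive
print(f"P(malignant | positive) = {posterior:.3f}")  # 0.075, i.e. about 7.5%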
Relavent Documents: Document 0::: An adaptive algorithm is an algorithm that changes its behavior at the time it is run, based on information available and on a priori defined reward mechanism (or criterion). Such information could be the story of recently received data, information on the available computational resources, or other run-time acquired (or a priori known) information related to the environment in which it operates. Among the most used adaptive algorithms is the Widrow-Hoff’s least mean squares (LMS), which represents a class of stochastic gradient-descent algorithms used in adaptive filtering and machine learning. In adaptive filtering the LMS is used to mimic a desired filter by finding the filter coefficients that relate to producing the least mean square of the error signal (difference between the desired and the actual signal). For example, stable partition, using no additional memory is O(n lg n) but given O(n) memory, it can be O(n) in time. As implemented by the C++ Standard Library, stable_partition is adaptive and so it acquires as much memory as it can get (up to what it would need at most) and applies the algorithm using that available memory. Another example is adaptive sort, whose behavior changes upon the presortedness of its input. An example of an adaptive algorithm in radar systems is the constant false alarm rate (CFAR) detector. In machine learning and optimization, many algorithms are adaptive or have adaptive variants, which usually means that the algorithm parameters such as learning rate are automatically adjusted according to statistics about the optimisation thus far (e.g. the rate of convergence). Examples include adaptive simulated annealing, adaptive coordinate descent, adaptive quadrature, AdaBoost, Adagrad, Adadelta, RMSprop, and Adam. In data compression, adaptive coding algorithms such as Adaptive Huffman coding or Prediction by partial matching can take a stream of data as input, and adapt their compression technique based on the symbols that th Document 1::: Dense granules (also known as dense bodies or delta granules) are specialized secretory organelles. Dense granules are found only in platelets and are smaller than alpha granules. The origin of these dense granules is still unknown, however, it is thought that may come from the mechanism involving the endocytotic pathway. Dense granules are a sub group of lysosome-related organelles (LRO). There are about three to eight of these in a normal human platelet. In unicellular organisms They are found in animals and in unicellular organisms including Apicomplexa protozoans. They are also found in Entamoeba. Dense granules play a major role in Toxoplasma gondii. When the parasite invades it releases its dense granules which help to create the parasitophorous vacuole. Toxoplasma gondii T. gondii contains organelles called unique organelles including dense granules. Dense granules, along with other secretory vesicles such as a microneme and rhoptry secrete proteins involved in the gliding motility, invasion, and parasitophorous vacuole formation of Toxoplasma gondii. Dense granules specifically secrete their contents several minutes after parasite invasion and localization into the parasitophorous vacuole. Proteins released from these specialized organelles are critical to adapting to the intracellular environment of the invaded host cell and contribute to parasitophorous vacuolar structure and maintenance. Structure and Biogenesis Dense granules in T. 
gondii are spherical, electron dense bodies that resemble secretory vesicles in mammalian cells about 200 nm in diameter and most likely form from budding off the trans-golgi network. Dense granule protein aggregation and retention is vital to maintaining dense granule biogenesis. This process is thought to follow the sorting-by-retention model in higher eukaryotes due to the morphological similarities of T. gondii’s dense granule and higher eukaryotes’ dense core granules. The proposition includes the accumulation of se Document 2::: In chemical biology, bioorthogonal chemical reporter is a non-native chemical functionality that is introduced into the naturally occurring biomolecules of a living system, generally through metabolic or protein engineering. These functional groups are subsequently utilized for tagging and visualizing biomolecules. Jennifer Prescher and Carolyn R. Bertozzi, the developers of bioorthogonal chemistry, defined bioorthogonal chemical reporters as "non-native, non-perturbing chemical handles that can be modified in living systems through highly selective reactions with exogenously delivered probes." It has been used to enrich proteins and to conduct proteomic analysis. In the early development of the technique, chemical motifs have to fulfill criteria of biocompatibility and selective reactivity in order to qualify as bioorthogonal chemical reporters. Some combinations of proteinogenic amino acid side chains meet the criteria, as do ketone and aldehyde tags. Azides and alkynes are other examples of chemical reporters. A bioorthogonal chemical reporter must be incorporated into a biomolecule. This occurs via metabolism. The chemical reporter is linked to a substrate, which a cell can metabolize. Document 3::: Sperry Corporation was a major American equipment and electronics company whose existence spanned more than seven decades of the 20th century. Sperry ceased to exist in 1986 following a prolonged hostile takeover bid engineered by Burroughs Corporation, which merged the combined operation under the new name Unisys. Some of Sperry's former divisions became part of Honeywell, Lockheed Martin, Raytheon Technologies, and Northrop Grumman. The company is best known as the developer of the artificial horizon and a wide variety of other gyroscope-based aviation instruments like autopilots, bombsights, analog ballistics computers and gyro gunsights. In the post-WWII era the company branched out into electronics, both aviation-related, and later, computers. The company was founded by Elmer Ambrose Sperry. History Early history The company was incorporated on April 14 1910 by Elmer Ambrose Sperry as the Sperry Gyroscope Company, to manufacture navigation equipment—chiefly his own inventions: the marine gyrostabilizer and the gyrocompass—at 40 Flatbush Avenue Extension in Downtown Brooklyn. During World War I the company diversified into aircraft components including bomb sights and fire control systems. In their early decades, Sperry Gyroscope and related companies were concentrated on Long Island, New York, especially in Nassau County. Over the years, it diversified to other locations. In 1918, Lawrence Sperry split from his father to compete over aero-instruments with the Lawrence Sperry Aircraft Company, including the new automatic pilot. After the death of Lawrence on December 13, 1923, the two firms were brought together in 1924. Then in January 1929 it was acquired by North American Aviation, who reincorporated it in New York as the Sperry Gyroscope Company, Inc. 
The company once again became independent in 1933 when it was spun-off as a subsidiary of the newly formed Sperry Corporation. The new corporation was a holding company for a number of smaller entities su The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main purpose of a work system according to the text? A. To create software for internal use B. To produce products and services for customers C. To manage human resources effectively D. To analyze financial performance Answer:
B. To produce products and services for customers
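The adaptive-algorithm excerpt above names the Widrow-Hoff least mean squares (LMS) filter as the archetypal adaptive algorithm: filter coefficients are adjusted from the error between the desired and the actual signal. Below is a minimal Python sketch of that update rule; the filter length, step size and test signal are illustrative choices, not taken from the excerpt:

import numpy as np

def lms_filter(x, d, n_taps=4, mu=0.05):
    """Widrow-Hoff LMS: adapt FIR weights w so that w . x[k] tracks the desired d[k]."""
    w = np.zeros(n_taps)          # filter coefficients, adapted at every step
    y = np.zeros(len(x))          # filter output
    e = np.zeros(len(x))          # error signal (desired minus actual)
    for k in range(n_taps, len(x)):
        x_win = x[k - n_taps:k][::-1]   # the n_taps most recent input samples
        y[k] = w @ x_win                # current filter output
        e[k] = d[k] - y[k]              # error against the desired signal
        w += 2 * mu * e[k] * x_win      # stochastic gradient-descent update
    return w, y, e

# Illustrative use: the desired signal is half of the previous input sample,
# so the weights should converge toward [0.5, 0, 0, 0].
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = np.concatenate(([0.0], 0.5 * x[:-1]))
w, y, e = lms_filter(x, d)
print(np.round(w, 3))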
Relavent Documents: Document 0::: In design for additive manufacturing (DFAM), there are both broad themes (which apply to many additive manufacturing processes) and optimizations specific to a particular AM process. Described here is DFM analysis for stereolithography, in which design for manufacturability (DFM) considerations are applied in designing a part (or assembly) to be manufactured by the stereolithography (SLA) process. In SLA, parts are built from a photocurable liquid resin that cures when exposed to a laser beam that scans across the surface of the resin (photopolymerization). Resins containing acrylate, epoxy, and urethane are typically used. Complex parts and assemblies can be directly made in one go, to a greater extent than in earlier forms of manufacturing such as casting, forming, metal fabrication, and machining. Realization of such a seamless process requires the designer to take in considerations of manufacturability of the part (or assembly) by the process. In any product design process, DFM considerations are important to reduce iterations, time and material wastage. Challenges in stereolithography Material Excessive setup specific material cost and lack of support for 3rd party resins is a major challenge with SLA process:. The choice of material (a design process) is restricted by the supported resin. Hence, the mechanical properties are also fixed. When scaling up dimensions selectively to deal with expected stresses, post curing is done by further treatment with UV light and heat. Although advantageous to mechanical properties, the additional polymerization and cross linkage can result in shrinkage, warping and residual thermal stresses. Hence, the part shall be designed in its 'green' stage i.e. pre-treatment stage. Setup and process SLA process is an additive manufacturing process. Hence, design considerations such as orientation, process latitude, support structures etc. have to be considered. Orientation affects the support structures, manufacturing time, part q Document 1::: The Teletype Model 37 is an electromechanical teleprinter manufactured by the Teletype Corporation in 1968. Electromechanical user interfaces would be superseded as a year later in 1969 the Computer Terminal Corporation introduced the electronic terminal with a screen. Features The Model 37 came with many features including upper- and lowercase letters, reverse page feed for printing charts, red and black ink, and optional tape and punch reader. It handled speeds up to 150 Baud (15 characters/second). This made it 50% faster than its predecessor, the Model 33. The Model 37 terminal utilizes a serial input / output 10 unit code signal consisting of a start bit, seven information bits, an even parity bit and a stop bit. It was produced in ASR (Automatic Send and Receive)also known as the Model 37/300, KSR (Keyboard Send and Receive) also known as the Model 37/200 and RO (Receive Only) also known as the Model 37/100. The Model 37 handles all 128 ASCII code combinations. It uses a six-row removable typebox with provisions for 96 type pallet positions. When the Shift-Out feature is included, the six-row typebox is replaced with a seven-row typebox allowing 112 pallet positions, or it can be replaced with an eight-row typebox allowing 128 type pallet positions. Technical specifications The Model 37 RO and KSR are 36.25 inches high, 27.5 inches deep and 22.5 inches deep. The ASR is 36.25 inches high, 44.5 inches wide and 27.5 inches deep. 
The ASR weighs approximately 340 pounds and KSR and RO weighs approximately 185 pounds. The Model 37 interface meets the requirements of EIA RS-232-B and has a recommended maintenance interval of every six months or every 1500 hours. Power Requirements - 115VAC ± 10%, 60 Hz ± .45 Hz, RO is approximately 200 Watts, KSR is approximately 300 Watts, ASR is approximately 550 Watts. Fun Facts Most Model 37s were re-purchased from customers and sold to the Soviet Union with antiquated mainframe computer systems. Further reading Dolotta, Document 2::: Generalized blockmodeling of binary networks (also relational blockmodeling) is an approach of generalized blockmodeling, analysing the binary network(s). As most network analyses deal with binary networks, this approach is also considered as the fundamental approach of blockmodeling. This is especially noted, as the set of ideal blocks, when used for interpretation of blockmodels, have binary link patterns, which precludes them to be compared with valued empirical blocks. When analysing the binary networks, the criterion function is measuring block inconsistencies, while also reporting the possible errors. The ideal block in binary blockmodeling has only three types of conditions: "a certain cell must be (at least) 1, a certain cell must be 0 and the over each row (or column) must be at least 1". It is also used as a basis for developing the generalized blockmodeling of valued networks. References See also homogeneity blockmodeling binary relation binary matrix Document 3::: Concatenation theory, also called string theory, character-string theory, or theoretical syntax, studies character strings over finite alphabets of characters, signs, symbols, or marks. String theory is foundational for formal linguistics, computer science, logic, and metamathematics especially proof theory. A generative grammar can be seen as a recursive definition in string theory. The most basic operation on strings is concatenation; connect two strings to form a longer string whose length is the sum of the lengths of those two strings. ABCDE is the concatenation of AB with CDE, in symbols ABCDE = AB ^ CDE. Strings, and concatenation of strings can be treated as an algebraic system with some properties resembling those of the addition of integers; in modern mathematics, this system is called a free monoid. In 1956 Alonzo Church wrote: "Like any branch of mathematics, theoretical syntax may, and ultimately must, be studied by the axiomatic method". Church was evidently unaware that string theory already had two axiomatizations from the 1930s: one by Hans Hermes and one by Alfred Tarski. Coincidentally, the first English presentation of Tarski's 1933 axiomatic foundations of string theory appeared in 1956 – the same year that Church called for such axiomatizations. As Tarski himself noted using other terminology, serious difficulties arise if strings are construed as tokens rather than types in the sense of Peirce's type-token distinction. References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary purpose of a stress test in hardware testing? A. To measure the maximum performance of a system B. To determine the breaking points and safe usage limits C. To evaluate the aesthetic design of the hardware D. To compile a list of software applications used Answer:
B. To determine the breaking points and safe usage limits
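The concatenation-theory excerpt above describes strings under concatenation as a free monoid whose behaviour resembles addition of integers. A small Python sketch of the two monoid laws behind that claim, reusing the excerpt's AB ^ CDE = ABCDE example (the third string and the assertions are my own illustration):

# Strings under concatenation form a free monoid: the operation is associative
# and the empty string is the identity element.
def concat(u, v):
    """The concatenation operation written as '^' in the excerpt."""
    return u + v

a, b, c = "AB", "CDE", "FG"

# The excerpt's example: ABCDE = AB ^ CDE
assert concat(a, b) == "ABCDE"

# Associativity: (a ^ b) ^ c equals a ^ (b ^ c)
assert concat(concat(a, b), c) == concat(a, concat(b, c)) == "ABCDEFG"

# Identity: the empty string leaves any string unchanged
assert concat("", a) == concat(a, "") == a

# Length is additive, echoing the analogy with addition of integers
assert len(concat(a, b)) == len(a) + len(b)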
Relavent Documents: Document 0::: In astronomy, planetary transits and occultations occur when a planet passes in front of another object, as seen by an observer. The occulted object may be a distant star, but in rare cases it may be another planet, in which case the event is called a mutual planetary occultation or mutual planetary transit, depending on the relative apparent diameters of the objects. The word "transit" refers to cases where the nearer object appears smaller than the more distant object. Cases where the nearer object appears larger and completely hides the more distant object are known as occultations. Mutual planetary occultations and transits Mutual occultations or transits of planets are extremely rare. The most recent event occurred on 3 January 1818, and the next will occur on 22 November 2065. Both involve the same two planets: Venus and Jupiter. Historical observations An occultation of Mars by Venus on 13 October 1590 was observed by the German astronomer Michael Maestlin at Heidelberg. The 1737 event (see list below) was observed by John Bevis at Greenwich Observatory – it is the only detailed account of a mutual planetary occultation. A transit of Mars across Jupiter on 12 September 1170 was observed by the monk Gervase at Canterbury, and by Chinese astronomers. Future events The next time a mutual planetary transit or occultation will happen (as seen from Earth) will be on 22 November 2065 at about 12:43 UTC, when Venus near superior conjunction (with an angular diameter of 10.6") will transit in front of Jupiter (with an angular diameter of 30.9"); however, this will take place only 8° west of the Sun, and will therefore not be visible to the unaided/unprotected eye. Before transiting Jupiter, Venus will occult Jupiter's moon Ganymede at around 11:24 UTC as seen from some southernmost parts of Earth. Parallax will cause actual observed times to vary by a few minutes, depending on the precise location of the observer. List of mutual planetary occultations and transit Document 1::: Diffbot is a developer of machine learning and computer vision algorithms and public APIs for extracting data from web pages / web scraping to create a knowledge base. The company has gained interest from its application of computer vision technology to web pages, wherein it visually parses a web page for important elements and returns them in a structured format. In 2015 Diffbot announced it was working on its version of an automated "Knowledge Graph" by crawling the web and using its automatic web page extraction to build a large database of structured web data. In 2019 Diffbot released their Knowledge Graph which has since grown to include over two billion entities (corporations, people, articles, products, discussions, and more), and ten trillion "facts." The company's products allow software developers to analyze web home pages and article pages, and extract the "important information" while ignoring elements deemed not core to the primary content. In August 2012 the company released its Page Classifier API, which automatically categorizes web pages into specific "page types". As part of this, Diffbot analyzed 750,000 web pages shared on the social media service Twitter and revealed that photos, followed by articles and videos, are the predominant web media shared on the social network. In September 2020 the company released a Natural Language Processing API for automatically building Knowledge Graphs from text. 
The company raised $2 million in funding in May 2012 from investors including Andy Bechtolsheim and Sky Dayton. Diffbot's customers include Adobe, AOL, Cisco, DuckDuckGo, eBay, Instapaper, Microsoft, Onswipe and Springpad. See also GPT-3 References External links Knowledge Graph Document 2::: Random self-reducibility (RSR) is the rule that a good algorithm for the average case implies a good algorithm for the worst case. RSR is the ability to solve all instances of a problem by solving a large fraction of the instances. Definition If for a function f evaluating any instance x can be reduced in polynomial time to the evaluation of f on one or more random instances yi, then it is self-reducible (this is also known as a non-adaptive uniform self-reduction). In a random self-reduction, an arbitrary worst-case instance x in the domain of f is mapped to a random set of instances y1, ..., yk. This is done so that f(x) can be computed in polynomial time, given the coin-toss sequence from the mapping, x, and f(y1), ..., f(yk). Therefore, taking the average with respect to the induced distribution on yi, the average-case complexity of f is the same (within polynomial factors) as the worst-case randomized complexity of f. One special case of note is when each random instance yi is distributed uniformly over the entire set of elements in the domain of f that have a length of |x|. In this case f is as hard on average as it is in the worst case. This approach contains two key restrictions. First the generation of y1, ..., yk is performed non-adaptively. This means that y2 is picked before f(y1) is known. Second, it is not necessary that the points y1, ..., yk be uniformly distributed. Application in cryptographic protocols Problems that require some privacy in the data (typically cryptographic problems) can use randomization to ensure that privacy. In fact, the only provably secure cryptographic system (the one-time pad) has its security relying totally on the randomness of the key data supplied to the system. The field of cryptography utilizes the fact that certain number-theoretic functions are randomly self-reducible. This includes probabilistic encryption and cryptographically strong pseudorandom number generation. Also, instance-hiding schemes (whe Document 3::: A thoracostomy is a small incision of the chest wall, with maintenance of the opening for drainage. It is most commonly used for the treatment of a pneumothorax. This is performed by physicians, paramedics, and nurses usually via needle thoracostomy or an incision into the chest wall with the insertion of a thoracostomy tube (chest tube) or with a hemostat and the provider's finger (finger thoracostomy). Medical uses When air, blood, or other fluids accumulate in the pleural cavity it may be drained by thoracostomy. Whereas air in this space (pneumothorax) may be released by needle thoracostomy, other substances require drainage with a thoracostomy tube. Contra-indications There are no absolute contraindications to thoracostomy. There are relative contraindications (such as coagulopathies); however, in an emergency setting these are outweighed by the necessity to re-inflate a collapsed lung by draining fluid/air from the space around the lung. Technique The standard location for thoracostomy is the triangle of safety. This is an anatomical triangle. 
The borders of which are; the anterior border of the latissimus dorsi, the lateral border of the pectoralis major muscle, a line superior to the horizontal level of the nipple (or 5th intercostal space), with the apex being below, or at, the axilla. A primary skin incision is made superiorly to the rib to avoid the neurovascular supply that runs inferiorly to the rib. This should be around 4–5 cm long. The clinician will tunnel through the subcutaneous tissue and muscle using forceps to reach the pleural. Further blunt dissection is used to carefully penetrate the pleural cavity. A finger is then inserted into this hole, the finger is swept to feel for lung adhesions to the rib cage and to feel for an inflating lung. This cavity is where a hemothorax or pneumothorax would accumulate. A finger thoracostomy as described here can be the first step in inserting an intercostal chest drain. At this point, a chest tub The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary function of Ile aux Aigrettes as mentioned in the text? A. A commercial fishing area B. A nature reserve and scientific research station C. A historical monument D. A luxury resort Answer:
B. A nature reserve and scientific research station
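The random self-reducibility excerpt above describes mapping an arbitrary worst-case instance to random instances whose answers recover the original one, and notes that cryptography relies on number-theoretic functions with this property. The discrete logarithm is a standard textbook illustration of such a reduction (it is not worked out in the excerpt); the Python sketch below uses toy parameters and a brute-force stand-in for the assumed average-case solver:

import random

# Toy cyclic group: integers modulo p under multiplication, with generator g.
# p = 101 is prime and 2 is a primitive root modulo 101 (illustrative sizes only).
p, g = 101, 2

def oracle_dlog(y):
    """Stands in for a hypothetical average-case discrete-log solver (brute force here)."""
    acc = 1
    for exp in range(p - 1):
        if acc == y:
            return exp
        acc = (acc * g) % p
    raise ValueError("y is not in the group generated by g")

def dlog_via_random_self_reduction(y):
    """Reduce the worst-case instance y to one uniformly random instance y' = y * g^r."""
    r = random.randrange(p - 1)        # random blinding exponent
    y_rand = (y * pow(g, r, p)) % p    # y' is uniformly distributed over the group
    x_rand = oracle_dlog(y_rand)       # solve the random instance
    return (x_rand - r) % (p - 1)      # unblind: log(y) = log(y') - r

x_secret = 37
y = pow(g, x_secret, p)
assert dlog_via_random_self_reduction(y) == x_secret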
Relavent Documents: Document 0::: Lanchester's laws are mathematical formulas for calculating the relative strengths of military forces. The Lanchester equations are differential equations describing the time dependence of two armies' strengths A and B as a function of time, with the function depending only on A and B. In 1915 and 1916 during World War I, M. Osipov and Frederick Lanchester independently devised a series of differential equations to demonstrate the power relationships between opposing forces. Among these are what is known as Lanchester's linear law (for ancient combat) and Lanchester's square law (for modern combat with long-range weapons such as firearms). As of 2017 modified variations of the Lanchester equations continue to form the basis of analysis in many of the US Army’s combat simulations, and in 2016 a RAND Corporation report examined by these laws the probable outcome in the event of a Russian invasion into the Baltic nations of Estonia, Latvia, and Lithuania. Lanchester's linear law For ancient combat, between phalanxes of soldiers with spears for example, one soldier could only ever fight exactly one other soldier at a time. If each soldier kills, and is killed by, exactly one other, then the number of soldiers remaining at the end of the battle is simply the difference between the larger army and the smaller, assuming identical weapons. The linear law also applies to unaimed fire into an enemy-occupied area. The rate of attrition depends on the density of the available targets in the target area as well as the number of weapons shooting. If two forces, occupying the same land area and using the same weapons, shoot randomly into the same target area, they will both suffer the same rate and number of casualties, until the smaller force is eventually eliminated: the greater probability of any one shot hitting the larger force is balanced by the greater number of shots directed at the smaller force. Lanchester's square law Lanchester's square law is also known as the Document 1::: In physics and mathematics, the phase (symbol φ or ϕ) of a wave or other periodic function of some real variable (such as time) is an angle-like quantity representing the fraction of the cycle covered up to . It is expressed in such a scale that it varies by one full turn as the variable goes through each period (and goes through each complete cycle). It may be measured in any angular unit such as degrees or radians, thus increasing by 360° or as the variable completes a full period. This convention is especially appropriate for a sinusoidal function, since its value at any argument then can be expressed as , the sine of the phase, multiplied by some factor (the amplitude of the sinusoid). (The cosine may be used instead of sine, depending on where one considers each period to start.) Usually, whole turns are ignored when expressing the phase; so that is also a periodic function, with the same period as , that repeatedly scans the same range of angles as goes through each period. Then, is said to be "at the same phase" at two argument values and (that is, ) if the difference between them is a whole number of periods. The numeric value of the phase depends on the arbitrary choice of the start of each period, and on the interval of angles that each period is to be mapped to. The term "phase" is also used when comparing a periodic function with a shifted version of it. 
If the shift in is expressed as a fraction of the period, and then scaled to an angle spanning a whole turn, one gets the phase shift, phase offset, or phase difference of relative to . If is a "canonical" function for a class of signals, like is for all sinusoidal signals, then is called the initial phase of . Mathematical definition Let the signal be a periodic function of one real variable, and be its period (that is, the smallest positive real number such that for all ). Then the phase of at any argument is Here denotes the fractional part of a real number, di Document 2::: Thymosin beta-4 is a protein that in humans is encoded by the TMSB4X gene. Recommended INN (International Nonproprietary Name) for thymosin beta-4 is 'timbetasin', as published by the World Health Organization (WHO). The protein consists (in humans) of 43 amino acids (sequence: SDKPDMAEI EKFDKSKLKK TETQEKNPLP SKETIEQEKQ AGES) and has a molecular weight of 4921 g/mol. Thymosin-β4 is a major cellular constituent in many tissues. Its intracellular concentration may reach as high as 0.5 mM. Following Thymosin α1, β4 was the second of the biologically active peptides from Thymosin Fraction 5 to be completely sequenced and synthesized. Function This gene encodes an actin sequestering protein which plays a role in regulation of actin polymerization. The protein is also involved in cell proliferation, migration, and differentiation. This gene escapes X inactivation and has a homolog on chromosome Y (TMSB4Y). Biological activities of thymosin β4 Any concepts of the biological role of thymosin β4 must inevitably be coloured by the demonstration that total ablation of the thymosin β4 gene in the mouse allows apparently normal embryonic development of mice which are fertile as adults. Actin binding Thymosin β4 was initially perceived as a thymic hormone. However this changed when it was discovered that it forms a 1:1 complex with G (globular) actin, and is present at high concentration in a wide range of mammalian cell types. When appropriate, G-actin monomers polymerize to form F (filamentous) actin, which, together with other proteins that bind to actin, comprise cellular microfilaments. Formation by G-actin of the complex with β-thymosin (= "sequestration") opposes this. Due to its profusion in the cytosol and its ability to bind G-actin but not F-actin, thymosin β4 is regarded as the principal actin-sequestering protein in many cell types. Thymosin β4 functions like a buffer for monomeric actin as represented in the following reaction: F-actin ↔ G-actin + Thymo Document 3::: A counterexample is any exception to a generalization. In logic a counterexample disproves the generalization, and does so rigorously in the fields of mathematics and philosophy. For example, the fact that "student John Smith is not lazy" is a counterexample to the generalization "students are lazy", and both a counterexample to, and disproof of, the universal quantification "all students are lazy." In mathematics In mathematics, counterexamples are often used to prove the boundaries of possible theorems. By using counterexamples to show that certain conjectures are false, mathematical researchers can then avoid going down blind alleys and learn to modify conjectures to produce provable theorems. It is sometimes said that mathematical development consists primarily in finding (and proving) theorems and counterexamples. Rectangle example Suppose that a mathematician is studying geometry and shapes, and she wishes to prove certain theorems about them. 
She conjectures that "All rectangles are squares", and she is interested in knowing whether this statement is true or false. In this case, she can either attempt to prove the truth of the statement using deductive reasoning, or she can attempt to find a counterexample of the statement if she suspects it to be false. In the latter case, a counterexample would be a rectangle that is not a square, such as a rectangle with two sides of length 5 and two sides of length 7. However, despite having found rectangles that were not squares, all the rectangles she did find had four sides. She then makes the new conjecture "All rectangles have four sides". This is logically weaker than her original conjecture, since every square has four sides, but not every four-sided shape is a square. The above example explained — in a simplified way — how a mathematician might weaken her conjecture in the face of counterexamples, but counterexamples can also be used to demonstrate the necessity of certain assumptions and hypothesis. For exa The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What distinguishes allosteric modulators from orthosteric modulators in terms of their binding sites and mechanisms of action? A. Allosteric modulators bind to the active site and directly block enzyme activity, while orthosteric modulators bind to a regulatory site. B. Allosteric modulators bind to a site distinct from the active site, causing conformational changes, while orthosteric modulators bind directly to the active site and block substrate binding. C. Allosteric modulators enhance substrate binding at the active site, while orthosteric modulators decrease enzyme activity by binding to a different site. D. Allosteric modulators are only inhibitors, while orthosteric modulators can be either inhibitors or activators. Answer:
B. Allosteric modulators bind to a site distinct from the active site, causing conformational changes, while orthosteric modulators bind directly to the active site and block substrate binding.
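The Lanchester excerpt above introduces the square law as a pair of coupled differential equations in the two force strengths A and B. In its usual aimed-fire form (not written out in the excerpt) the system is dA/dt = -beta*B and dB/dt = -alpha*A, which conserves alpha*A^2 - beta*B^2. A minimal Python sketch of that system; the troop numbers and effectiveness coefficients are illustrative, not from the excerpt:

def lanchester_square_law(a0, b0, alpha, beta, dt=0.001):
    """Integrate dA/dt = -beta*B, dB/dt = -alpha*A until one force reaches zero.

    alpha is the per-unit effectiveness of force A, beta that of force B.
    Returns the surviving strengths (A, B).
    """
    a, b = float(a0), float(b0)
    while a > 0 and b > 0:
        a, b = a - beta * b * dt, b - alpha * a * dt   # simultaneous Euler step
    return max(a, 0.0), max(b, 0.0)

# Illustrative scenario: 1000 troops against 700 troops whose weapons are twice
# as effective per soldier. The square law favours the larger force here because
# 1.0 * 1000**2 = 1,000,000 exceeds 2.0 * 700**2 = 980,000.
a_left, b_left = lanchester_square_law(a0=1000, b0=700, alpha=1.0, beta=2.0)
print(f"A survivors ~ {a_left:.0f}, B survivors ~ {b_left:.0f}")

# The conserved quantity alpha*A^2 - beta*B^2 predicts the winner without integrating.
print("A wins" if 1.0 * 1000**2 > 2.0 * 700**2 else "B wins")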
Relavent Documents: Document 0::: Étienne-Laurent-Joseph-Hippolyte Boyer de Fonscolombe (22 July 1772, Aix-en-Provence – 13 February 1853, Aix) was a French entomologist who specialised in Coleoptera and Hymenoptera and pest insects. Biography Early life Étienne Joseph Hippolyte Boyer de Fonscolombe on 22 July 1772 in Aix-en-Provence, France. He was the son of Emmanuel Honoré Hippolyte de Boyer (1744, Aix, Saint-Sauveur-1810) an aristocrat who studied agronomy, writing on this subject in the Mémoires de l'académie d'Aix. He was educated at the Collège de Juilly. Career Upon finishing his education in 1789, "he had attended meetings of the Constituent Assembly in Versailles, with Mirabeau." He was later "Locked up as suspect" (1793–94). These were dangerous times-his father was also imprisoned in the Terror. On his release and marriage he lived with his parents and his mother-in-law at the castle of Montvert. With the death of his father in 1810, he rented a floor of the hotel of Aix and the couple lived with his mother, who found him "Entomologiste très remarquable avec l’intelligence, la bonté, la vertu et le savoir" (a remarkable entomologist with intelligence, kindness, virtue and knowledge). Hippolyte and his brother Marcellin de Fonscolombe never ceased occupying themselves with the natural sciences "like those of antiquity and on medals". From 1833, he entrusted the management of the Fonscolombe estates to his son-in-law, Adolphe de Saporta, and in 1848 he sold Montvert when it was left to his wife. He was then able to devote himself entirely to entomology. Following his father he published most of his work in Mémoires de l'académie d'Aix Death He died on 13 February 1853 in Aix-en-Provence. Legacy Much of his collection is in the Muséum national d'histoire naturelle in Paris and there are some Apoidea in the Hope Department of Entomology in Oxford. Works Monographia chalciditum galloprovinciae circa aquas degentum. Annales des Sciences Naturelles (1) (Zoologie) 26: 273–307. 1840 Adde Document 1::: The Blok 2BL was a rocket stage, a member of Blok L family, used as an upper stage on some versions of the Molniya-M carrier rocket. It was used as a fourth stage to launch the Oko missile early warning defence spacecraft. References Document 2::: Oliver Vaughan Snell Bulleid CBE (19 September 1882 – 25 April 1970) was a British railway and mechanical engineer best known as the Chief Mechanical Engineer (CME) of the Southern Railway between 1937 and the 1948 nationalisation, developing many well-known locomotives. Early life and Great Northern Railway He was born in Invercargill, New Zealand, to William Bulleid and his wife Marian Pugh, both British immigrants. On the death of his father in 1889, his mother returned to Llanfyllin, Wales, where the family home had been, with Bulleid. In 1901, after a technical education at Accrington Grammar School, he joined the Great Northern Railway (GNR) at Doncaster at the age of 18, as an apprentice under Henry Ivatt, the Chief Mechanical Engineer (CME). After a four-year apprenticeship, he became the assistant to the Locomotive Running Superintendent, and a year later, the Doncaster Works manager. In 1908, he left to work in Paris with the French division of Westinghouse Electric Corporation as a Test Engineer, and was soon promoted to Assistant Works Manager and Chief Draughtsman. Later that year, he married Marjorie Ivatt, Henry Ivatt's youngest daughter. 
A brief period working for the Board of Trade followed from 1910, arranging exhibitions in Brussels, Paris and Turin. He was able to travel widely in Europe, later including a trip with Nigel Gresley, William Stanier and Frederick Hawksworth, to Belgium, in 1934, to see a metre-gauge bogie locomotive. In December 1912, he rejoined the GNR as Personal Assistant to Nigel Gresley, the new CME. Gresley was only six years Bulleid's senior. The First World War intervened; Bulleid joined the British Army and was assigned to the rail transport arm, rising to the rank of Major. After the war, Bulleid returned to the GNR as the Manager of the Wagon and Carriage Works. London and North Eastern Railway The Grouping, in 1923, of Britain's financially troubled railways, saw the GNR subsumed into the new London and North Easte Document 3::: Luche reduction is the selective organic reduction of α,β-unsaturated ketones to allylic alcohols. The active reductant is described as "cerium borohydride", which is generated in situ from NaBH4 and CeCl3(H2O)7. The Luche reduction can be conducted chemoselectively toward ketone in the presence of aldehydes or towards α,β-unsaturated ketones in the presence of a non-conjugated ketone. An enone forms an allylic alcohol in a 1,2-addition, and the competing conjugate 1,4-addition is suppressed. The selectivity can be explained in terms of the HSAB theory: carbonyl groups require hard nucleophiles for 1,2-addition. The hardness of the borohydride is increased by replacing hydride groups with alkoxide groups, a reaction catalyzed by the cerium salt by increasing the electrophilicity of the carbonyl group. This is selective for ketones because they are more Lewis basic. In one application, a ketone is selectively reduced in the presence of an aldehyde. Actually, in the presence of methanol as solvent, the aldehyde forms a methoxy acetal that is inactive in the reducing conditions. References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What significant event does the legend associated with North Wind's Weir describe involving Storm Wind and his grandmother? A. Storm Wind helped his grandmother defeat North Wind by creating a flood. B. Storm Wind was warned by North Wind to stay away from the mountain. C. Storm Wind built a stone weir to block the river. D. Storm Wind and North Wind made peace after a long battle. Answer:
A. Storm Wind helped his grandmother defeat North Wind by creating a flood.
Relavent Documents: Document 0::: Cisgenesis is a product designation for a category of genetically engineered plants. A variety of classification schemes have been proposed that order genetically modified organisms based on the nature of introduced genotypical changes, rather than the process of genetic engineering. Cisgenesis (etymology: cis = same side; and genesis = origin) is one term for organisms that have been engineered using a process in which genes are artificially transferred between organisms that could otherwise be conventionally bred. Genes are only transferred between closely related organisms. Nucleic acid sequences must be isolated and introduced using the same technologies that are used to produce transgenic organisms, making cisgenesis similar in nature to transgenesis. The term was first introduced in 2000 by Henk J. Schouten and Henk Jochemsen, and in 2004 a PhD thesis by Jan Schaart of Wageningen University in 2004, discussing making strawberries less susceptible to Botrytis cinerea. In Europe, currently, this process is governed by the same laws as transgenesis. While researchers at Wageningen University in the Netherlands feel that this should be changed and regulated in the same way as conventionally bred plants, other scientists, writing in Nature Biotechnology, have disagreed. In 2012 the European Food Safety Authority (EFSA) issued a report with their risk assessment of cisgenic and intragenic plants. They compared the hazards associated with plants produced by cisgenesis and intragenesis with those obtained either by conventional plant breeding techniques or transgenesis. The EFSA concluded that "similar hazards can be associated with cisgenic and conventionally bred plants, while novel hazards can be associated with intragenic and transgenic plants." Cisgenesis has been applied to transfer of natural resistance genes to the devastating disease Phytophthora infestans in potato and scab (Venturia inaequalis) in apple. Cisgenesis and transgenesis use artificial gene t Document 1::: The Mobile Clinic is a medical emergency facility, created by Dr. Claudio Costa to rescue riders injured during motorcycle races. In 1976, on Costa's initiative and with funding from Gino Amisano, founder and owner of the AGV, the first vehicle specifically designed to provide rapid medical intervention to injured riders still on the track. The mobile clinic began on the race of the World Championship in Motorcycle Grand Prix of Austria, in Salzburg on 1 May 1977. During the race for the Class 350, Patrick Fernandez, Franco Uncini, Hans Stadelmann, Dieter Braun and Johnny Cecotto were involved in a terrible accident at the fast curve at Fahrerlager. The clinic intervened, but were attacked by police dogs. The doctors persevered and their actions saved Uncini's life. Stadelmann died on the spot and Braun ended his career because of a serious eye injury. The mobile clinic became an institution on the motorcycle racing circuit, helping thousands of riders. References External links Document 2::: Langya virus (LayV), scientific name Parahenipavirus langyaense, is a species of paramyxovirus first detected in the Chinese provinces of Shandong and Henan. It has been announced in 35 patients from 2018 to August 2022. All but 9 of the 35 cases in China were infected with LayV only, with the symptoms including fever, fatigue, and cough. No deaths due to LayV have been reported . Langya virus affects species including humans, dogs, goats, and its presumed original host, shrews. 
The 35 cases were not in contact with each other, and it is not known if the virus is capable of human-to-human transmission. Etymology The name of the virus in Simplified Chinese (, ) refers to Langya Commandery, a historical commandery in present-day Shandong, China. Classification Langya virus is classified in the family Paramyxoviridae. It is also closely related to the Nipah virus and the Hendra virus. Symptoms Of the 35 people infected with the virus, 26 were identified as not showing signs of another infection. They all experienced fever, with fatigue being the second most common symptom. Coughing, muscle pains, nausea, headaches and vomiting were also reported as symptoms of infection. More than half of the infectees had leukopenia, and more than a third had thrombocytopenia, with a smaller number being reported to have impaired liver or kidney function. Transmission The researchers who identified the virus found LayV antibodies in a few goats and dogs, and identified LayV viral RNA in 27% of the 262 shrews they sampled. They found no strong evidence of the virus spreading between people. One researcher commented in the NEJM that henipaviruses do not typically spread between people, and thus LayV would be unlikely to become a pandemic, stating: "The only henipavirus that has showed some sign of human-to-human transmission is the Nipah virus and that requires very close contact. I don't think this has much pandemic potential." Another researcher noted that LayV is most likely Document 3::: Quanta Magazine is an editorially independent online publication of the Simons Foundation covering developments in physics, mathematics, biology and computer science. History Quanta Magazine was initially launched as Simons Science News in October 2012, but it was renamed to its current title in July 2013. It was founded by the former New York Times journalist Thomas Lin, who was the magazine's editor-in-chief until 2024. The two deputy editors are John Rennie and Michael Moyer, formerly of Scientific American, and the art director is Samuel Velasco. In 2024, Samir Patel became the magazine's second editor in chief. Content The articles in the magazine are freely available to read online. Scientific American, Wired, The Atlantic, and The Washington Post, as well as international science publications like Spektrum der Wissenschaft, have reprinted articles from the magazine. In November 2018, MIT Press published two collections of articles from Quanta Magazine, Alice and Bob Meet the Wall of Fire and The Prime Number Conspiracy. The magazine also has three podcasts, two of which are hosted by Steven Strogatz. Janna Levin joined as cohost of The Joy of Why for the third season in 2024. Reception Undark Magazine described Quanta Magazine as "highly regarded for its masterful coverage of complex topics in science and math." The science news aggregator RealClearScience ranked Quanta Magazine first on its list of "The Top 10 Websites for Science in 2018." In 2020, the magazine received a National Magazine Award for General Excellence from the American Society of Magazine Editors for its "willingness to tackle some of the toughest and most difficult topics in science and math in a language that is accessible to the lay reader without condescension or oversimplification." In May 2022 the magazine's staff, notably Natalie Wolchover, were awarded the Pulitzer Prize for Explanatory Reporting. 
References External links The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary aim of the European environmental research and innovation policy? A. To promote military advancements in Europe B. To contribute to a transformative agenda for improving citizens' quality of life and the environment C. To increase the production of fossil fuels D. To reduce the number of researchers in the European Union Answer:
B. To contribute to a transformative agenda for improving citizens' quality of life and the environment
Relavent Documents: Document 0::: This is a list of Storage area network (SAN) management systems. A storage area network is a dedicated network that provides access to consolidated, block level data storage. Systems Brocade Network Advisor Cisco Fabric Manager Enterprise Fabric Connectivity (EFC) Manager EMC ControlCenter EMC VisualSRM EMC Invista Hitachi Data Systems HiCommand HP OpenView Storage Area Manager IBM SAN Volume Controller Symantec Veritas Command Central Storage KernSafe Cross-Platform iSCSI SAN References Document 1::: In a broad sense, the term graphic statics is used to describe the technique of solving particular practical problems of statics using graphical means. Actively used in the architecture of the 19th century, the methods of graphic statics were largely abandoned in the second half of the 20th century, primarily due to widespread use of frame structures of steel and reinforced concrete that facilitated analysis based on linear algebra. The beginning of the 21st century was marked by a "renaissance" of the technique driven by its addition to the computer-aided design tools thus enabling engineers to instantly visualize form and forces. History Markou and Ruan trace the origins of the graphic statics to da Vinci and Galileo who used the graphical means to calculate the sum of forces, Simon Stevin's parallelogram of forces and the 1725 introduction of the force polygon and funicular polygon by Pierre Varignon. Giovanni Poleni used the graphical calculations (and Robert Hooke's analogy between the hanging chain and standing structure) while studying the dome of the Saint Peter's Basilica in Rome (1748). Gabriel Lamé and Émile Clapeyron studied of the dome of the Saint Isaac's Cathedral with the help of the force and funicular polygons (1823). Finally, Carl Culmann had established the new discipline (and gave it a name) in his 1864 work Die Graphische Statik. Culmann was inspired by preceding work by Jean-Victor Poncelet on earth pressure and Lehrbuch der Statik by August Möbius. The next twenty years saw rapid development of methods that involved, among others, major physicists like James Clerk Maxwell and William Rankine. In 1872 Luigi Cremona introduced the Cremona diagram to calculate trusses, in 1873 Robert H. Bow established the "Bow's notation" that is still in use. It fell out of use, especially since construction methods, such as concrete post and beam, allowed for familiar numerical calculations. Access to powerful computation gave structural engineers new to Document 2::: A fuzzy associative matrix expresses fuzzy logic rules in tabular form. These rules usually take two variables as input, mapping cleanly to a two-dimensional matrix, although theoretically a matrix of any number of dimensions is possible. From the perspective of neuro-fuzzy systems, the mathematical matrix is called a "Fuzzy associative memory" because it stores the weights of the perceptron. Applications In the context of game AI programming, a fuzzy associative matrix helps to develop the rules for non-player characters. Suppose a professional is tasked with writing fuzzy logic rules for a video game monster. 
In the game being built, entities have two variables: hit points (HP) and firepower (FP): This translates to: IF MonsterHP IS VeryLowHP AND MonsterFP IS VeryWeakFP THEN Retreat IF MonsterHP IS LowHP AND MonsterFP IS VeryWeakFP THEN Retreat IF MonsterHP IS MediumHP AND MonsterFP is VeryWeakFP THEN Defend Multiple rules can fire at once, and often will, because the distinction between "very low" and "low" is fuzzy. If it is more "very low" than it is low, then the "very low" rule will generate a stronger response. The program will evaluate all the rules that fire and use an appropriate defuzzification method to generate its actual response. An implementation of this system might use either the matrix or the explicit IF/THEN form. The matrix makes it easy to visualize the system, but it also makes it impossible to add a third variable just for one rule, so it is less flexible. Identify a rule set There is no inherent pattern in the matrix. It appears as if the rules were just made up, and indeed they were. This is both a strength and a weakness of fuzzy logic in general. It is often impractical or impossible to find an exact set of rules or formulae for dealing with a specific situation. For a sufficiently complex game, a mathematician would not be able to study the system and figure out a mathematically accurate set of rules. However, this weakness is Document 3::: LifeSigns: Surgical Unit, released in Europe as LifeSigns: Hospital Affairs, is an adventure game for the Nintendo DS set in a hospital. LifeSigns is the followup to Kenshūi Tendō Dokuta, a game released at the end of 2004; that game has not been released outside Japan, although the localized LifeSigns still makes reference to it. The game is commented as being "like Phoenix Wright crossed with Trauma Center". Characters Seimei Medical University Hospital (Age 25, Male) A second-year intern at the prestigious Seimei Medical University Hospital. He is studying Emergency medicine. He can be a little naive but he is very talented. He devoted his life to helping people after seeing his mother die of cancer. In the first game, he misdiagnosed a patient, which nearly ruined his career. His given name is derived from the English word "doctor", transliterated with kanji. (Age 22, Female) A nurse who has been working at the hospital 1 year before Tendo. She seems to be careless at times, but nonetheless, she is a talented and devoted nurse. She seems to have a crush on Tendo. She is one of the girls Tendo can date later in the game. (Age 36, Female) An extremely gifted surgeon, Asou is Tendo's supervisor and mentor. She recently got over her alcoholism after a disastrous break-up with Prof. Sawai (first game). She is the head of the 3rd Surgery Department, which specializes in heart diseases. Tendo seems to have a crush on her. She wears a bell choker around her neck. Kyousuke Sawai (Age 52, Male) A world authority in the field of immunology, and head of the 1st Surgical Department of Seimei Medical University Hospital. A cold and callous man who only seems to be interested in results. In the first game, it was revealed that he was Tendo's biological father and arranged his transfer to the Seimei. He is involved in the cancer treatment research and recently developed a miracle drug that might cure cancer called SPX (Sawai Power Plex). He also dated Suzu Asou for The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. 
What is a primary function of a Storage Area Network (SAN)? A. To provide access to consolidated, block level data storage B. To manage user access to cloud applications C. To connect multiple office locations via a VPN D. To enhance the performance of personal computers Answer:
A. To provide access to consolidated, block level data storage
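The fuzzy-associative-matrix excerpt above describes rules firing to different degrees ("very low" vs. "low" HP) and a defuzzification step that turns the fired rules into one response. A minimal sketch of that idea follows; the triangular membership functions, numeric ranges, and the crude strongest-rule defuzzification are illustrative assumptions, not details taken from the excerpt.

```python
# Minimal sketch of fuzzy rule firing and defuzzification, loosely following the
# monster-AI excerpt above. Membership functions, value ranges, and the simple
# strongest-rule defuzzification are invented for illustration only.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy sets for monster hit points (0..100) and firepower (0..100).
hp_sets = {"VeryLowHP": (0, 0, 25), "LowHP": (0, 25, 50), "MediumHP": (25, 50, 75)}
fp_sets = {"VeryWeakFP": (0, 0, 30), "WeakFP": (0, 30, 60)}

# Rules from a fuzzy associative matrix: (HP set, FP set) -> action.
rules = {
    ("VeryLowHP", "VeryWeakFP"): "Retreat",
    ("LowHP", "VeryWeakFP"): "Retreat",
    ("MediumHP", "VeryWeakFP"): "Defend",
}

def decide(hp, fp):
    strengths = {}
    for (hp_name, fp_name), action in rules.items():
        # A rule fires with strength = min of its antecedent memberships (fuzzy AND).
        s = min(tri(hp, *hp_sets[hp_name]), tri(fp, *fp_sets[fp_name]))
        # Several rules may recommend the same action; keep the strongest support.
        strengths[action] = max(strengths.get(action, 0.0), s)
    # Crude defuzzification: pick the action with the largest aggregate strength.
    return max(strengths, key=strengths.get) if any(strengths.values()) else None

print(decide(hp=18, fp=10))  # both "VeryLowHP" and "LowHP" rules fire; "Retreat" wins
```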
Relavent Documents: Document 0::: In convex analysis, Popoviciu's inequality is an inequality about convex functions. It is similar to Jensen's inequality and was found in 1965 by Tiberiu Popoviciu, a Romanian mathematician. Formulation Let f be a function from an interval to . If f is convex, then for any three points x, y, z in I, If a function f is continuous, then it is convex if and only if the above inequality holds for all x, y, z from . When f is strictly convex, the inequality is strict except for x = y = z. Generalizations It can be generalized to any finite number n of points instead of 3, taken on the right-hand side k at a time instead of 2 at a time: Let f be a continuous function from an interval to . Then f is convex if and only if, for any integers n and k where n ≥ 3 and , and any n points from I, Weighted inequality Popoviciu's inequality can also be generalized to a weighted inequality. Let f be a continuous function from an interval to . Let be three points from , and let be three nonnegative reals such that and . Then, Notes Document 1::: Non-homologous end-joining factor 1 (NHEJ1), also known as Cernunnos or XRCC4-like factor (XLF), is a protein that in humans is encoded by the NHEJ1 gene. XLF was originally discovered as the protein mutated in five patients with growth retardation, microcephaly, and immunodeficiency. The protein is required for the non-homologous end joining (NHEJ) pathway of DNA repair. Patients with XLF mutations also have immunodeficiency due to a defect in V(D)J recombination, which uses NHEJ to generate diversity in the antibody repertoire of the immune system. XLF interacts with DNA ligase IV and XRCC4 and is thought to be involved in the end-bridging or ligation steps of NHEJ. The yeast (Saccharomyces cerevisiae) homolog of XLF is Nej1. Phenotypes In contrast to the profound immunodeficiency phenotype of XLF deletion in humans, deletion of XLF alone has a mild phenotype in mice. However, combining a deletion of XLF with deletion of the ATM kinase causes a synthetic defect in NHEJ, suggesting partial redundancy in the function of these two proteins in mice. Structure XLF is structurally similar to XRCC4, existing as a constitutive dimer with an N-terminal globular head domain, an alpha-helical stalk, and an unstructured C-terminal region (CTR). Interactions XLF has been shown to interact with XRCC4, and with Ku protein, and it can also interact weakly with DNA. Co-crystal structures of XLF and XRCC4 suggest that the two proteins can form hetero-oligomers via head-to-head interaction of alternating XLF and XRCC4 subunits. These XRCC4-XLF filaments have been proposed to bridge DNA prior to end ligation during NHEJ. Formation of XRCC4-XLF oligomers can be disrupted by interaction of the C-terminal domain of XRCC4 with the BRCT domain of DNA ligase IV. Hematopoietic stem cell aging Deficiency of NHEJ1 in mice leads to premature aging of hematopoietic stem cells as indicated by several lines of evidence including evidence that long-term repopulation is defect Document 2::: Nesting algorithms are used to make the most efficient use of material or space. This could for instance be done by evaluating many different possible combinations via recursion. Linear (1-dimensional): The simplest of the algorithms illustrated here. For an existing set there is only one position where a new cut can be placed – at the end of the last cut. Validation of a combination involves a simple Stock - Yield - Kerf = Scrap calculation. 
Plate (2-dimensional): These algorithms are significantly more complex. For an existing set, there may be as many as eight positions where a new cut may be introduced next to each existing cut, and if the new cut is not perfectly square then different rotations may need to be checked. Validation of a potential combination involves checking for intersections between two-dimensional objects. Packing (3-dimensional): These algorithms are the most complex illustrated here due to the larger number of possible combinations. Validation of a potential combination involves checking for intersections between three-dimensional objects. References Document 3::: NGC 6738 is an astronomical feature that is catalogued as an NGC object. Although listed as an open cluster in some astronomical databases, it may be merely an asterism; a 2003 paper in the journal Astronomy and Astrophysics describes it as being an "apparent concentration of a few bright stars on patchy background absorption". References External links Simbad Image NGC 6738 NGC 6738 The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What notable AI systems did Tuomas Sandholm develop that defeated top human players in poker? A. AlphaGo and DeepMind B. Libratus and Pluribus C. Watson and Siri D. DeepBlue and ChatGPT Answer:
B. Libratus and Pluribus
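The nesting-algorithms excerpt above reduces validation of a linear (1-dimensional) combination to the calculation Stock − Yield − Kerf = Scrap. A minimal sketch of that check follows; the stock length, cut list, and kerf width are hypothetical example values, and charging one kerf per piece is a simplification.

```python
# Minimal sketch of the 1-D nesting check described above: for a candidate set of
# cuts on one stock length, scrap = stock - total yield - total kerf.
# Stock length, cut lengths, and kerf width are hypothetical example values.

def scrap_for(stock_length, cut_lengths, kerf=3.0):
    """Return leftover scrap, or None if the cuts do not fit on the stock."""
    yield_total = sum(cut_lengths)
    kerf_total = kerf * len(cut_lengths)   # assume one saw cut per piece (a simplification)
    scrap = stock_length - yield_total - kerf_total
    return scrap if scrap >= 0 else None   # negative scrap means the combination is invalid

print(scrap_for(6000.0, [1200.0, 1200.0, 2500.0]))  # -> 1091.0 (valid combination)
print(scrap_for(6000.0, [3000.0, 3000.0, 500.0]))   # -> None (does not fit)
```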
Relavent Documents: Document 0::: ESO 137-001, also known as the Jellyfish Galaxy, is a barred spiral galaxy located in the constellation Triangulum Australe and in the cluster Abell 3627. As the galaxy moves to the center of the galaxy cluster at 1900 km/s, it is stripped by hot gas, thus creating a 260,000 light-year long tail. This is called ram pressure stripping (RPS). The intergalactic gas in Abell 3627 is at 100 million Kelvin, which causes star formation in the tails. The galaxy has a low amount of Hl regions which combine to a total mass of 3.5x10^8 solar masses, only 10% of which is located in the main disk of ESO 137-001. History The galaxy was discovered by Ming Sun in 2005. Galaxy's fate The stripping of gas is thought to have a significant effect on the galaxy's development, removing cold gas from the galaxy, shutting down the formation of new stars in the galaxy, and changing the appearance of inner spiral arms and bulges because of the effects of star formation. Gallery See also Abell 3627 List of galaxies Jellyfish galaxy References External links NASA gallery: ESO 137-001 Cornell University: The Flying Spaghetti Monster: Impact of magnetic fields on ram pressure stripping in disk galaxies Document 1::: A digital cross-connect system (DCS or DXC) is a piece of circuit-switched network equipment, used in telecommunications networks, that allows lower-level TDM bit streams, such as DS0 bit streams, to be rearranged and interconnected among higher-level TDM signals, such as DS1 bit streams. DCS units are available that operate on both older T-carrier/E-carrier bit streams, as well as newer SONET/SDH bit streams. DCS devices can be used for "grooming" telecommunications traffic, switching traffic from one circuit to another in the event of a network failure, supporting automated provisioning, and other applications. Having a DCS in a circuit-switched network provides important flexibility that can otherwise only be obtained at higher cost using manual "DSX" cross-connect patch panels. DCS devices "switch" traffic, but they are not packet switches—they switch circuits, not packets, and the circuit arrangements they are used to manage tend to persist over very long time spans, typically months or longer, as compared to packet switches, which can route every packet differently, and operate on micro- or millisecond time spans. DCS units are also sometimes colloquially called "DACS" units, after a proprietary brand name of DCS units created and sold by AT&T's Western Electric division, now Alcatel-Lucent. Modern digital access and cross-connect systems are not limited to the T-carrier system, and may accommodate high data rates such as those of SONET. Transmuxing Transmuxing (transmux: transcode multiplexing) is a telecommunications signaling format change between two signaling methods, typically synchronous optical network signals, SONET, and various time-division multiplexing, TDM, signals. Transmuxing changes the “container” without changing the “contents.” Transmuxing provides the carrier the capability to embed a telecommunications signal from one logical TDM circuit to another within SONET without physically breaking down the TDM circuit into its components and Document 2::: An SSH client is a software program which uses the secure shell protocol to connect to a remote computer. This article compares a selection of notable clients. 
General Platform The operating systems or virtual machines the SSH clients are designed to run on without emulation include several possibilities: Partial indicates that while it works, the client lacks important functionality compared to versions for other OSs but may still be under development. The list is not exhaustive, but rather reflects the most common platforms today. Technical Features Authentication key algorithms This table lists standard authentication key algorithms implemented by SSH clients. Some SSH implementations include both server and client implementations and support custom non-standard authentication algorithms not listed in this table. Document 3::: In operations research and engineering, a criticality matrix is a representation (often graphical) of failure modes along with their probabilities and severities. Severity may be classified in four categories, with Level I as most severe or "catastrophic"; Level II for "critical"; Level III for "marginal"; and Level IV for "minor". Example For example, an aircraft might have the following matrix: References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the significance of Boletellus dicymbophilus in the context of scientific research? A. It is a well-known edible mushroom. B. It was first discovered in the United States. C. It is a newly described species of fungus. D. It belongs to the family Agaricaceae. Answer:
C. It is a newly described species of fungus.
Relavent Documents: Document 0::: Cantharellus is a genus of mushrooms, commonly known as chanterelles (), a name which can also refer to the type species, Cantharellus cibarius. They are mycorrhizal fungi, meaning they form symbiotic associations with plants. Chanterelles may resemble a number of other species, some of which are poisonous. The name comes from the Greek word kantharos ('tankard, cup'). Chanterelles are one of the most recognized and harvested groups of edible mushrooms. Description Mushrooms in the genus are generally shaped like cups or trumpets. The hue is mostly yellow, with the gills sometimes pinkish. Similar species The false chanterelle (Hygrophoropsis aurantiaca) has finer, more orange gills and a darker cap. It is sometimes regarded as poisonous. The very similar jack-o'-lantern mushroom (Omphalotus olearius) and its sister species (Omphalotus olivascens) are very poisonous, though not lethal. They have true gills (unlike chanterelles) which are thinner, have distinct crowns, and generally do not reach up to the edge. Additionally, the jack-o-lantern mushroom is bioluminescent and grows on wood – possibly buried – whereas Cantharellus species grow on the ground. Species in the genera Craterellus, Gomphus, and Polyozellus may also look like chanterelles. Taxonomy The genus Cantharellus is large and has a complex taxonomic history. Index Fungorum lists over 500 scientific names that have been applied to the genus, although the number of currently valid names is less than 100. In addition to synonymy, many species have been moved into other genera such as Afrocantharellus, Arrhenia, Craterellus, Gomphus, Hygrophoropsis, and Pseudocraterellus. Molecular phylogenetic analyses are providing new information about relationships between chanterelle populations. The genus has been divided into eight subgenera Afrocantharellus Eyssart. & Buyck, Cantharellus Adans. ex Fr., Cinnabarinus Buyck & V. Hofst., Magni T. Cao & H.S. Yuan, Parvocantharellus Eyssart. & Buyck, Pseudocan Document 1::: In fluid dynamics, the Taylor–Couette flow consists of a viscous fluid confined in the gap between two rotating cylinders. For low angular velocities, measured by the Reynolds number Re, the flow is steady and purely azimuthal. This basic state is known as circular Couette flow, after Maurice Marie Alfred Couette, who used this experimental device as a means to measure viscosity. Sir Geoffrey Ingram Taylor investigated the stability of Couette flow in a ground-breaking paper. Taylor's paper became a cornerstone in the development of hydrodynamic stability theory and demonstrated that the no-slip condition, which was in dispute by the scientific community at the time, was the correct boundary condition for viscous flows at a solid boundary. Taylor showed that when the angular velocity of the inner cylinder is increased above a certain threshold, Couette flow becomes unstable and a secondary steady state characterized by axisymmetric toroidal vortices, known as Taylor vortex flow, emerges. Subsequently, upon increasing the angular speed of the cylinder the system undergoes a progression of instabilities which lead to states with greater spatio-temporal complexity, with the next state being called wavy vortex flow. If the two cylinders rotate in opposite sense then spiral vortex flow arises. Beyond a certain Reynolds number there is the onset of turbulence. Circular Couette flow has wide applications ranging from desalination to magnetohydrodynamics and also in viscosimetric analysis. 
Different flow regimes have been categorized over the years including twisted Taylor vortices and wavy outflow boundaries. It has been a well researched and documented flow in fluid dynamics. Flow description A simple Taylor–Couette flow is a steady flow created between two rotating infinitely long coaxial cylinders. Since the cylinder lengths are infinitely long, the flow is essentially unidirectional in steady state. If the inner cylinder with radius is rotating at constant ang Document 2::: Cannabis Science, Inc. is a biotech company based in Irvine, California. The company was incorporated in 2009 and formerly traded under the ticker CBIS on the Over-The-Counter Bulletin Board until October 2019, when their SEC license was revoked. The company's stated goal was to obtain Food and Drug Administration (FDA) approval for cannabis-based medicines, with a focus on treating skin cancer (basal and squamous cell carcinomas), posttraumatic stress disorder and HIV. The FDA has not approved these treatments. References Document 3::: Virtual machining is the practice of using computers to simulate and model the use of machine tools for part manufacturing. Such activity replicates the behavior and errors of a real environment in virtual reality systems. This can provide useful ways to manufacture products without physical testing on the shop floor. As a result, time and cost of part production can be decreased. Applications Virtual machining provides various benefits: Simulated machining process in virtual environments reveals errors without wasting materials, damaging machine tools, or putting workers at risk. A computer simulation helps improve accuracy in the produced part. Virtual inspection systems such as surface finish, surface metrology, and waviness can be applied to the simulated parts in virtual environments to increase accuracy. Systems can augment process planning of machining operations with regards to the desired tolerances of part designing. Virtual machining system can be used in process planning of machining operations by considering the most suitable steps of machining operations with regard to the time and cost of part manufacturing. Optimization techniques can be applied to the simulated machining process to increase efficiency of parts production. Finite element method (FEM) can be applied to the simulated machining process in virtual environments to analyze stress and strain of the machine tool, workpiece and cutting tool. Accuracy of mathematical error modeling in prediction of machined surfaces can be analyzed by using the virtual machining systems. Machining operations of flexible materials can be analyzed in virtual environments to increase accuracy of part manufacturing. Vibrations of machine tools as well as possibility of chatter along cutting tool paths in machining operations can be analyzed by using simulated machining operations in virtual environments. Time and cost of accurate production can be decreased by applying rules of production process management to the simulated manufacturing process in the virtual environment. Feed rate scheduling systems based on virtual machining can also be presented to increase accuracy as well as efficiency of part manufacturing. Material removal rate in machining operations of complex surfaces can be simulated in virtual environments for analysis and optimization. Efficiency of part manufacturing can be improved by analyzing and optimizing production methods. 
Errors in actual machined parts can be simulated in virtual environments for analysis and compensation. Simulated machining centers in virtual environments can be connected by the network and Internet for remote analysis and modification. Elements and structures of machine tools such as spindle, rotation axis, moving axes, ball screw, numerical control unit, electric motors (step motor and servomotor), bed and et al. can be simulated in virtual environments so they can be analyzed and modified. As a result, optimized versions of machine tool elements can boost levels of technology in part manufacturing. Geometry of cutting tools can be analyzed and modified as a result of simulated cutting forces in virtual environments. Thus, machining time as well as surface roughness can be minimized and tool life can be maximized due to decreasing cutting forces by modified geometries of cutting tools. Also, the modified versions of cutting tool geometries with regards to minimizing cutting forces can decrease cost of cutting tools by presenting a wider range of acceptable materials for cutting tools such as high-speed steel, carbon tool steels, cemented carbide, ceramic, cermet and et al. The generated heat in engagement areas of cutting tool and workpiece can be simulated, analyzed, and decreased. Tool life can be maximized as a result of decreasing generated heat in engagement areas of cutting tool and workpiece. Machining strategies can be analyzed and modified in virtual environments in terms of collision detection processes. 3D vision of machining operations with errors of actual machined parts and tool deflection error in virtual environments can help designers as well as machining strategists to analyze and modify the process of part production. Virtual machining can augment the experience and training of novice machine tool operators in a virtual machining training system. To increase added value in processes of part production, energy consumption of machine tools can be simulated and analyzed in virtual environments by presenting an efficient energy use machine tool. Machining strategies of freeform surfaces can be analyzed and optimized in virtual environments to increase accuracy of part manufacturing. Future research works Some suggestions for the future studies in virtual machining systems are presented as: Machining operations of new alloy can be simulated in virtual environments for study. As a result, deformation, surface properties and residue stress of new alloy can be analyzed and modified. New material of cutting tool can be simulated and analyzed in virtual environments. Thus, tool deflection error of new cutting tools along machining paths can be studied without the need of actual machining operations. Deformation and deflections of large workpieces can be simulated and analyzed in virtual environments. Machining operations of expensive materials such as gold as well as superalloys can be simulated in virtual environments to predict real machining conditions without the need of shop floor testing. References External links Virtual Machining, Automation World AMGM Institute, Virtual Machining MACHpro: THE VIRTUAL MACHINING SYSTEM The Virtual Machine Shop The 5th International Conference on Virtual Machining Process Technology (VMPT 2016) Eureka Virtual Machining SIMNC Products Overview, Virtual Machining The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. 
What is one significant benefit of virtual machining according to the text? A. It guarantees the lowest cost for part manufacturing. B. It allows for testing of new alloys without real machining. C. It eliminates the need for skilled machine tool operators. D. It helps reveal errors in machining processes without wasting materials. Answer:
D. It helps reveal errors in machining processes without wasting materials.
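The Taylor–Couette passage above describes the steady, purely azimuthal base state known as circular Couette flow, but as excerpted it has lost the radius and angular-velocity symbols. For reference, a standard form of that base-state velocity profile is sketched below; the symbols R_1, R_2, Omega_1, Omega_2 (inner/outer radii and angular speeds) are assumed notation rather than labels taken from the excerpt.

```latex
% Base-state azimuthal velocity for circular Couette flow (standard textbook result;
% R_1, R_2, \Omega_1, \Omega_2 are assumed notation, not symbols from the excerpt).
\[
  u_\theta(r) = A\,r + \frac{B}{r}, \qquad
  A = \frac{\Omega_2 R_2^2 - \Omega_1 R_1^2}{R_2^2 - R_1^2}, \qquad
  B = \frac{(\Omega_1 - \Omega_2)\,R_1^2 R_2^2}{R_2^2 - R_1^2},
\]
% so that u_\theta(R_1) = \Omega_1 R_1 and u_\theta(R_2) = \Omega_2 R_2,
% i.e. the no-slip condition holds at both cylinder walls.
```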
Relavent Documents: Document 0::: USA-440, also known as GPS-III SV07, NAVSTAR 83, RRT-1 or Sally Ride, is a United States navigation satellite which forms part of the Global Positioning System. The satellite is named after Sally Ride. The RRT-1 name refers to the Rapid Response Trailblazer program in which the satellite was launched on an accelerated timeline. Satellite SV07 is the seventh GPS Block III satellite to launch. Launch USA-440 was launched by SpaceX on 16 December 2024 at 7:52pm Eastern, atop a Falcon 9 rocket. The launch took place from SLC-40 at Cape Canaveral Space Force Station. Document 1::: The Extended Evolutionary Synthesis (EES) consists of a set of theoretical concepts argued to be more comprehensive than the earlier modern synthesis of evolutionary biology that took place between 1918 and 1942. The extended evolutionary synthesis was called for in the 1950s by C. H. Waddington, argued for on the basis of punctuated equilibrium by Stephen Jay Gould and Niles Eldredge in the 1980s, and was reconceptualized in 2007 by Massimo Pigliucci and Gerd B. Müller. The extended evolutionary synthesis revisits the relative importance of different factors at play, examining several assumptions of the earlier synthesis, and augmenting it with additional causative factors. It includes multilevel selection, transgenerational epigenetic inheritance, niche construction, evolvability, and several concepts from evolutionary developmental biology. Not all biologists have agreed on the need for, or the scope of, an extended synthesis. Many have collaborated on another synthesis in evolutionary developmental biology, which concentrates on developmental molecular genetics and evolution to understand how natural selection operated on developmental processes and deep homologies between organisms at the level of highly conserved genes. The preceding "modern synthesis" The modern synthesis was the widely accepted early-20th-century synthesis reconciling Charles Darwin's theory of evolution by natural selection and Gregor Mendel's theory of genetics in a joint mathematical framework. It established evolution as biology's central paradigm. The 19th-century ideas of natural selection by Darwin and Mendelian genetics were united by researchers who included Ronald Fisher, J. B. S. Haldane and Sewall Wright, the three founders of population genetics, between 1918 and 1932. Julian Huxley introduced the phrase "modern synthesis" in his 1942 book, Evolution: The Modern Synthesis. Early history During the 1950s, English biologist C. H. Waddington called for an extended synthesis b Document 2::: In probability theory and statistics, the trapezoidal distribution is a continuous probability distribution whose probability density function graph resembles a trapezoid. Likewise, trapezoidal distributions also roughly resemble mesas or plateaus. Each trapezoidal distribution has a lower bound and an upper bound , where , beyond which no values or events on the distribution can occur (i.e. beyond which the probability is always zero). In addition, there are two sharp bending points (non-differentiable discontinuities) within the probability distribution, which we will call and , which occur between and , such that . The image to the right shows a perfectly linear trapezoidal distribution. However, not all trapezoidal distributions are so precisely shaped. 
In the standard case, where the middle part of the trapezoid is completely flat, and the side ramps are perfectly linear, all of the values between and will occur with equal frequency, and therefore all such points will be modes (local frequency maxima) of the distribution. On the other hand, though, if the middle part of the trapezoid is not completely flat, or if one or both of the side ramps are not perfectly linear, then the trapezoidal distribution in question is a generalized trapezoidal distribution, and more complicated and context-dependent rules may apply. The side ramps of a trapezoidal distribution are not required to be symmetric in the general case, just as the sides of trapezoids in geometry are not required to be symmetric. The non-central moments of the trapezoidal distribution are Special cases of the trapezoidal distribution include the uniform distribution (with and ) and the triangular distribution (with ). Trapezoidal probability distributions seem to not be discussed very often in the literature. The uniform, triangular, Irwin-Hall, Bates, Poisson, normal, bimodal, and multimodal distributions are all more frequently discussed in the literature. This may be because these other ( Document 3::: TMC-647055 is an experimental antiviral drug which was developed as a treatment for hepatitis C, and is in clinical trials as a combination treatment with ribavirin and simeprevir. It acts as a NS5b polymerase inhibitor. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary metal used in the production of jewelry wire mentioned in the text? A. Copper B. Silver C. Gold D. Brass Answer:
A. Copper
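The trapezoidal-distribution passage above lost its bound and bending-point symbols during extraction, and the expression for the non-central moments is missing entirely; that expression is left as-is here. The density of the standard (linear) trapezoidal distribution itself is well known, and is sketched below using the common labels a < b <= c < d (an assumed notation), with [a, d] the support and [b, c] the flat top.

```latex
% Density of the standard (linear) trapezoidal distribution. The labels a < b <= c < d
% are assumed notation; they are not recoverable from the excerpt above.
\[
  f(x) =
  \begin{cases}
    \dfrac{2}{d + c - b - a}\,\dfrac{x - a}{b - a}, & a \le x < b,\\[1ex]
    \dfrac{2}{d + c - b - a}, & b \le x < c,\\[1ex]
    \dfrac{2}{d + c - b - a}\,\dfrac{d - x}{d - c}, & c \le x \le d.
  \end{cases}
\]
% Consistent with the special cases named in the excerpt: with a = b and c = d this
% reduces to the uniform density on [a, d]; with b = c it reduces to the triangular density.
```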
Relavent Documents: Document 0::: Privatism is a generic term generally describing any belief that people have a right to the private ownership of certain things. According to different perspectives, it describes also the attitude of people to be concerned only about ideas or facts that affect them as individuals. There are many degrees of privatism, from the advocacy of limited private property over specific kinds of items (personal property) to the advocacy of unrestricted private property over everything; such as in anarcho-capitalism. Regarding public policy, it gives primacy to the private sector as the central agent for action, necessitates the social and economic benefits for private initiatives and competition, and "legitimizes the public consequences of private action" Sociology Privatism is based on the concept of individual sphere of interactions. According to this point of view, collective efforts can not be meaningful by themselves, but they can gain meaning only if considered as a sum of individual activities. Hence, every single action (economical, social, spiritual and so on) can be seen only as the result of an individual choice. For this reason, privatism is based on the concept of individual consumption. Indeed, the private consumption reflects the singular choice of the consumer that according to his own value and prerogatives decide how to consume its own income. Political theory The political ideals of privatism are directly consequent of the interpretation of society as just the sum of individuals that compose it. Indeed, privatism supporters believe that the economic role of the welfare state should be reduced, giving more freedom to consumers and private volunteering organizations to operate inside the economic environment. According to this view, the private allocation of resources would be more efficient and less authoritarian than the one provided by the state. From this point of view, the formation of common social and political views on various topics, is connect Document 1::: The was the first successful Japanese-designed and constructed airplane. It was designed by Captain Yoshitoshi Tokugawa and was first flown by him on October 13, 1911, at Tokorozawa in Saitama Prefecture. There is a replica displayed in the Tokorozawa Aviation Museum, located near the place where the aircraft's first flight took place. Specifications References External links Document 2::: Sup35p is the Saccharomyces cerevisiae (a yeast) eukaryotic translation release factor. More specifically, it is the yeast eukaryotic release factor 3 (eRF3), which forms the translation termination complex with eRF1 (Sup45p in yeast). This complex recognizes and catalyzes the release of the nascent polypeptide chain when the ribosome encounters a stop codon. While eRF1 recognizes stop codons, eRF3 facilitates the release of the polypeptide chain through GTP hydrolysis. Partial loss of function results in nonsense suppression, in which stop codons are ignored and proteins are abnormally synthesized with carboxyl terminal extensions. Complete loss of function is fatal. History Sup35p was shown to propagate in a prion form in 1994 by Reed Wickner. For this reason it is an intensely studied protein. When yeast cells harbor Sup35p in the prion state the resulting phenotype is known as [PSI+]. In [PSI+] cells Sup35p exists in an amyloid state that can be propagated and passed to daughter cells. 
This results in less soluble and functional protein and thus in an increased rate of nonsense suppression (translational read-through of stop codons). The overexpression of the gene has been shown to induce the [Psi+] conformation. Evolutionary capacitance Several journal articles have suggested that the ability to interconvert between [PSI+] and [psi-](prion-free) states provides an evolutionary advantage, but this remains an area of much debate. Susan Lindquist has shown that isogenic populations of yeast can express different phenotypes based on whether they had the prion form of Sup35p or the non-prion form. She did an experiment where seven strains of yeast with different genetic backgrounds were grown under many different stressful conditions, with matched [PSI+] and [psi-] strains. In some cases, the [PSI+] version grew faster, in others [psi-] grew faster. She proposed that [PSI+] may act as an evolutionary capacitor to facilitate adaptation by releasing cryptic gen Document 3::: Anethole (also known as anise camphor) is an organic compound that is widely used as a flavoring substance. It is a derivative of the aromatic compound allylbenzene and occurs widely in the essential oils of plants. It is in the class of phenylpropanoid organic compounds. It contributes a large component of the odor and flavor of anise and fennel (both in the botanical family Apiaceae), anise myrtle (Myrtaceae), liquorice (Fabaceae), magnolia blossoms, and star anise (Schisandraceae). Closely related to anethole is its isomer estragole, which is abundant in tarragon (Asteraceae) and basil (Lamiaceae), and has a flavor reminiscent of anise. It is a colorless, fragrant, mildly volatile liquid. Anethole is only slightly soluble in water but exhibits high solubility in ethanol. This trait causes certain anise-flavored liqueurs to become opaque when diluted with water; this is called the ouzo effect. Structure and production Anethole is an aromatic, unsaturated ether related to lignols. It exists as both cis–trans isomers (see also E–Z notation), involving the double bond outside the ring. The more abundant isomer, and the one preferred for use, is the trans or E isomer. Like related compounds, anethole is poorly soluble in water. Historically, this property was used to detect adulteration in samples. Most anethole is obtained from turpentine-like extracts from trees. Of only minor commercial significance, anethole can also be isolated from essential oils. Currently Banwari Chemicals Pvt Ltd situated in Bhiwadi, Rajasthan, India is the leading manufacturer of anethole. It is prepared commercially from 4-methoxypropiophenone, which is prepared from anisole. Uses Flavoring Anethole is distinctly sweet, measuring 13 times sweeter than sugar. It is perceived as being pleasant to the taste by many even at higher concentrations. It is used in alcoholic drinks ouzo, rakı, anisette and absinthe, among others. It is also used in seasoning and confectionery applications, s The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What was the first successful Japanese-designed and constructed airplane called, and who designed it? A. Yoshitoshi B. Tokorozawa C. Captain Yoshitoshi Tokugawa D. Aviation Museum Answer:
C. Captain Yoshitoshi Tokugawa
Relavent Documents: Document 0::: Gilbert Wakefield (1756–1801) was an English scholar and controversialist. He moved from being a cleric and academic, into tutoring at dissenting academies, and finally became a professional writer and publicist. In a celebrated state trial, he was imprisoned for a pamphlet critical of government policy of the French Revolutionary Wars; and died shortly after his release. Early life and background He was born 22 February 1756 in Nottingham, the third son of the Rev. George Wakefield, then rector of St Nicholas' Church, Nottingham but afterwards at Kingston-upon-Thames, and his wife Elizabeth. He was one of five brothers, who included George, a merchant in Manchester. His father was from Rolleston, Staffordshire, and came to Cambridge in 1739 as a sizar. He had support in his education from the Hardinge family, of Melbourne, Derbyshire, his patrons being Nicholas Hardinge and his physician brother. In his early career he was chaplain to Margaret Newton, in her own right 2nd Countess Coningsby. George Hardinge, son of Nicholas, after Gilbert's death pointed out that the living of Kingston passed to George Wakefield in 1769, under an Act of Parliament specifying presentations to chapels of the parish, only because he had used his personal influence with his uncle Charles Pratt, 1st Baron Camden the Lord Chancellor, and Jeremiah Dyson. Education and Fellowship Wakefield had some schooling in the Nottingham area, under Samuel Berdmore and then at Wilford under Isaac Pickthall. He then made good progress at Kingston Free School under Richard Wooddeson the elder (died 1774), father of Richard Wooddeson the jurist. Wakefield was sent to university young, because Wooddeson was retiring from teaching. An offer came of a place at Christ Church, Oxford, from the Rev. John Jeffreys (1718–1798); but his father turned it down. He went to Jesus College, Cambridge on a scholarship founded by Robert Marsden: the Master Lynford Caryl was from Nottinghamshire, and a friend of his f Document 1::: Demosaicing (or de-mosaicing, demosaicking), also known as color reconstruction, is a digital image processing algorithm used to reconstruct a full color image from the incomplete color samples output from an image sensor overlaid with a color filter array (CFA) such as a Bayer filter. It is also known as CFA interpolation or debayering. Most modern digital cameras acquire images using a single image sensor overlaid with a CFA, so demosaicing is part of the processing pipeline required to render these images into a viewable format. Many modern digital cameras can save images in a raw format allowing the user to demosaic them using software, rather than using the camera's built-in firmware. Goal The aim of a demosaicing algorithm is to reconstruct a full color image (i.e. a full set of color triples) from the spatially undersampled color channels output from the CFA. The algorithm should have the following traits: Avoidance of the introduction of false color artifacts, such as chromatic aliases, zippering (abrupt unnatural changes of intensity over a number of neighboring pixels) and purple fringing Maximum preservation of the image resolution Low computational complexity for fast processing or efficient in-camera hardware implementation Amenability to analysis for accurate noise reduction Background: color filter array A color filter array is a mosaic of color filters in front of the image sensor. 
Commercially, the most commonly used CFA configuration is the Bayer filter illustrated here. This has alternating red (R) and green (G) filters for odd rows and alternating green (G) and blue (B) filters for even rows. There are twice as many green filters as red or blue ones, catering to the human eye's higher sensitivity to green light. Since the color subsampling of a CFA by its nature results in aliasing, an optical anti-aliasing filter is typically placed in the optical path between the image sensor and the lens to reduce the false color artifacts (chromati Document 2::: In molecular genetics, a regulon is a group of genes that are regulated as a unit, generally controlled by the same regulatory gene that expresses a protein acting as a repressor or activator. This terminology is generally, although not exclusively, used in reference to prokaryotes, whose genomes are often organized into operons; the genes contained within a regulon are usually organized into more than one operon at disparate locations on the chromosome. Applied to eukaryotes, the term refers to any group of non-contiguous genes controlled by the same regulatory gene. A modulon is a set of regulons or operons that are collectively regulated in response to changes in overall conditions or stresses, but may be under the control of different or overlapping regulatory molecules. The term stimulon is sometimes used to refer to the set of genes whose expression responds to specific environmental stimuli. Examples Commonly studied regulons in bacteria are those involved in response to stress such as heat shock. The heat shock response in E. coli is regulated by the sigma factor σ32 (RpoH), whose regulon has been characterized as containing at least 89 open reading frames. Regulons involving virulence factors in pathogenic bacteria are of particular research interest; an often-studied example is the phosphate regulon in E. coli, which couples phosphate homeostasis to pathogenicity through a two-component system. Regulons can sometimes be pathogenicity islands. The Ada regulon in E. coli is a well-characterized example of a group of genes involved in the adaptive response form of DNA repair. Quorum sensing behavior in bacteria is a commonly cited example of a modulon or stimulon, though some sources describe this type of intercellular auto-induction as a separate form of regulation. Evolution Changes in the regulation of gene networks are a common mechanism for prokaryotic evolution. An example of the effects of different regulatory environments for homologous protei Document 3::: In mathematics, a Zariski geometry consists of an abstract structure introduced by Ehud Hrushovski and Boris Zilber, in order to give a characterisation of the Zariski topology on an algebraic curve, and all its powers. The Zariski topology on a product of algebraic varieties is very rarely the product topology, but richer in closed sets defined by equations that mix two sets of variables. The result described gives that a very definite meaning, applying to projective curves and compact Riemann surfaces in particular. Definition A Zariski geometry consists of a set X and a topological structure on each of the sets X, X2, X3, ... satisfying certain axioms. (N) Each of the Xn is a Noetherian topological space, of dimension at most n. Some standard terminology for Noetherian spaces will now be assumed. (A) In each Xn, the subsets defined by equality in an n-tuple are closed. 
The mappings Xm → Xn defined by projecting out certain coordinates and setting others as constants are all continuous. (B) For a projection p: Xm → Xn and an irreducible closed subset Y of Xm, p(Y) lies between its closure Z and Z \ where is a proper closed subset of Z. (This is quantifier elimination, at an abstract level.) (C) X is irreducible. (D) There is a uniform bound on the number of elements of a fiber in a projection of any closed set in Xm, other than the cases where the fiber is X. (E) A closed irreducible subset of Xm, of dimension r, when intersected with a diagonal subset in which s coordinates are set equal, has all components of dimension at least r − s + 1. The further condition required is called very ample (cf. very ample line bundle). It is assumed there is an irreducible closed subset P of some Xm, and an irreducible closed subset Q of P× X2, with the following properties: (I) Given pairs (x, y), (, ) in X2, for some t in P, the set of (t, u, v) in Q includes (t, x, y) but not (t, , ) (J) For t outside a proper closed subset of P, the set of (x, y) in X The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is another name for Coronavirus HKU15 as mentioned in the text? A. Porcine epidemic diarrhea virus B. Swine deltacoronavirus C. Porcine deltacoronavirus D. Bovine coronavirus Answer:
B. Swine deltacoronavirus
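The demosaicing excerpt above explains that a Bayer CFA records only one color sample per pixel, so the two missing channels at each site must be interpolated. The sketch below shows the simplest baseline, plain bilinear interpolation with the classic averaging kernels; it assumes an RGGB tile layout and is not one of the more sophisticated, artifact-aware algorithms the article alludes to.

```python
# Minimal bilinear demosaicing sketch for an RGGB Bayer mosaic. This is the simplest
# baseline method, shown only to illustrate CFA interpolation; real camera pipelines
# use more elaborate algorithms. The RGGB layout is an assumption.
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """raw: 2-D float array holding the mosaic (R at (0,0), G at (0,1)/(1,0), B at (1,1))."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Classic bilinear kernels: each missing sample becomes the average of its
    # available neighbours of the same color.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    conv = lambda img, k: convolve2d(img, k, mode="same", boundary="symm")
    r = conv(raw * r_mask, k_rb)
    g = conv(raw * g_mask, k_g)
    b = conv(raw * b_mask, k_rb)
    # Note: pixels on the image border are handled only approximately by this scheme.
    return np.stack([r, g, b], axis=-1)

rgb = demosaic_bilinear(np.random.rand(8, 8))  # tiny synthetic mosaic just to exercise the code
print(rgb.shape)  # (8, 8, 3)
```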
Relavent Documents: Document 0::: The Guajira–Barranquilla xeric scrub is a xeric shrubland ecoregion in Colombia, Venezuela, and the ABC Islands (Leeward Antilles), covering an estimated area of . Rainfall varies from , and the median temperature is . Location The ecoregion occupies the Guajira Peninsula, the valley of the Rancheria river and Guajira Department, covering parts of the northeastern coast of Venezuela to the ABC Islands (Leeward Antilles). The valleys lie in the rain shadow of the surrounding Serranía de Macuira, which reaches an elevation of over sea level. These mountains trap some of the trade winds, causing mist. An important tourist destination in the area is Cabo de la Vela, and Klein Curaçao. Ecology Flora The ecoregion is dominated by thorny trees and succulents. Common species include Acacia glomerosa, Bourreria cumanensis, Bulnesia arborea, Caesalpinia coriaria, Copaifera venezolana, Croton sp., Gyrocarpus americanus, Hyptis sp., Jacquinia pungens, Malpighia glabra, Myrospermum frutescens, Opuntia caribaea, Pereskia guamacho, Piptadenia flava, Prosopis juliflora, and Stenocereus griseus. Forests dominated by Lonchocarpus punctatus are often accompanied by Bunchosia odorata and Ayenia magna. Other forests exist in which Prosopis juliflora, Erythrina velutina and Clerodendron ternifolium are dominant. A variety of plant communities occur where two plant species are dominant. Examples include Astronium graveolens – Handroanthus billbergii, Haematoxylum brasiletto – Melochia tomentosa, Caesalpinia coriaria – Cordia curassavica, Bursera glabra – Castela erecta, Vitex cymosa – Libidibia coraria, Mimosa cabrera – Cordia curassavica, Bursera tomentosa – Bursera graveolens and Castela erecta – Parkinsonia praecox. Fauna The ecoregion is notable for being the habitat of a large community of Caribbean flamingo (Phoenicopterus ruber), besides a diversity of birds and bats. Conservation Most of the Serranía de Macuira lies within National Natural Park of Macuira. Nearby is Document 1::: In signal processing, nonlinear multidimensional signal processing (NMSP) covers all signal processing using nonlinear multidimensional signals and systems. Nonlinear multidimensional signal processing is a subset of signal processing (multidimensional signal processing). Nonlinear multi-dimensional systems can be used in a broad range such as imaging, teletraffic, communications, hydrology, geology, and economics. Nonlinear systems cannot be treated as linear systems, using Fourier transformation and wavelet analysis. Nonlinear systems will have chaotic behavior, limit cycle, steady state, bifurcation, multi-stability and so on. Nonlinear systems do not have a canonical representation, like impulse response for linear systems. But there are some efforts to characterize nonlinear systems, such as Volterra and Wiener series using polynomial integrals as the use of those methods naturally extend the signal into multi-dimensions. Another example is the Empirical mode decomposition method using Hilbert transform instead of Fourier Transform for nonlinear multi-dimensional systems. This method is an empirical method and can be directly applied to data sets. Multi-dimensional nonlinear filters (MDNF) are also an important part of NMSP, MDNF are mainly used to filter noise in real data. There are nonlinear-type hybrid filters used in color image processing, nonlinear edge-preserving filters use in magnetic resonance image restoration. 
Those filters use both temporal and spatial information and combine the maximum likelihood estimate with the spatial smoothing algorithm. Nonlinear analyser A linear frequency response function (FRF) can be extended to a nonlinear system by evaluation of higher order transfer functions and impulse response functions by Volterra series. Suppose we have a time series , which is decomposed into components of various order Each component is defined as , for , is the linear convolution. is the generalized impulse response of order . T Document 2::: The heat capacity rate is heat transfer terminology used in thermodynamics and different forms of engineering denoting the quantity of heat a flowing fluid of a certain mass flow rate is able to absorb or release per unit temperature change per unit time. It is typically denoted as C, listed from empirical data experimentally determined in various reference works, and is typically stated as a comparison between a hot and a cold fluid, Ch and Cc either graphically, or as a linearized equation. It is an important quantity in heat exchanger technology common to either heating or cooling systems and needs, and the solution of many real world problems such as the design of disparate items as different as a microprocessor and an internal combustion engine. Basis A hot fluid's heat capacity rate can be much greater than, equal to, or much less than the heat capacity rate of the same fluid when cold. In practice, it is most important in specifying heat-exchanger systems, wherein one fluid usually of dissimilar nature is used to cool another fluid such as the hot gases or steam cooled in a power plant by a heat sink from a water source—a case of dissimilar fluids, or for specifying the minimal cooling needs of heat transfer across boundaries, such as in air cooling. As the ability of a fluid to resist change in temperature itself changes as heat transfer occurs changing its net average instantaneous temperature, it is a quantity of interest in designs which have to compensate for the fact that it varies continuously in a dynamic system. While itself varying, such change must be taken into account when designing a system for overall behavior to stimuli or likely environmental conditions, and in particular the worst-case conditions encountered under the high stresses imposed near the limits of operability— for example, an air-cooled engine in a desert climate on a very hot day. If the hot fluid had a much larger heat capacity rate, then when hot and cold fluids went through Document 3::: Serang virus (SERV) is a single-stranded, negative-sense, enveloped, novel RNA orthohantavirus. Natural reservoir SERV was first isolated from the Asian house rat (R.Tanezumi) in Serang, Indonesia in 2000. Virology Phylogenetic analysis based on partial L, M and S segment nucleotide sequences show SERV is novel and distinct among the hantaviruses. It is most closely related to Thailand virus (THAIV) which is carried by the great bandicoot rat (Bandicota indica). Nucleotide sequence comparison suggests that SERV is the result of cross-species transmission from bandicoots to Asian rats. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the two main phases of gameplay in Unnatural Selection? A. Exploration and Combat B. Breeding and Battle C. Strategy and Defense D. Training and Competition Answer:
B. Breeding and Battle
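The heat-capacity-rate excerpt above defines C as the heat a flowing stream can absorb or release per unit temperature change per unit time, which for a single-phase stream amounts to mass flow rate times specific heat. A minimal sketch follows, comparing a hot and a cold stream as the excerpt suggests; the flow rates and specific heats are hypothetical example numbers, not data from the text.

```python
# Minimal sketch of heat capacity rate: C = m_dot * c_p (W/K), the heat a stream
# absorbs or releases per kelvin of temperature change per second. The flow rates
# and specific heats below are hypothetical example numbers.

def heat_capacity_rate(m_dot_kg_s, cp_J_per_kgK):
    return m_dot_kg_s * cp_J_per_kgK  # W/K

C_hot  = heat_capacity_rate(0.50, 1005.0)   # e.g. a hot air stream
C_cold = heat_capacity_rate(0.20, 4184.0)   # e.g. a cooling-water stream
C_min, C_max = sorted((C_hot, C_cold))

print(f"C_hot = {C_hot:.1f} W/K, C_cold = {C_cold:.1f} W/K")
# The stream with the smaller rate limits how much heat the exchanger can move:
# q_max = C_min * (T_hot_in - T_cold_in)
print(f"C_min = {C_min:.1f} W/K")
```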
Relavent Documents: Document 0::: Luidia, Inc. produces portable interactive whiteboard technology for classrooms and conference rooms. Its eBeam hardware and software products work with computers and digital projectors to use existing whiteboard or writing surface as interactive whiteboards. The company’s eBeam products allow text, images and video to be projected onto display surfaces, where an interactive stylus or marker can be used to add notes, access menus, manipulate images and create diagrams and drawings. Technology Luidia’s eBeam technology uses infrared and ultrasound receivers to track the location of a transmitter-equipped pen, called a stylus, or a standard dry-erase marker in a transmitter-equipped sleeve. Company history Luidia’s eBeam technology was originally developed and patented by engineers at Electronics for Imaging Inc. (Nasdaq: EFII), a Foster City, California developer of digital print server technology. Luidia was spun off from EFI in July 2003 with venture funding from Globespan Capital Partners and Silicom Ventures. In 2007, Luidia was selected by the Mexican government to install eBeam-enabled interactive boards in public seventh-grade classrooms in Mexico as part of the government’s Enciclomedia program. In 2007 and 2008, Luidia was accredited by Deloitte LLP in the accounting firm’s Silicon Valley “Technology Fast 50” program, which accredits fast-growing companies in the San Francisco Bay area. In January 2021 their main sites and web documentation started returning 404 errors; however, their shop was still up. In 2022, Ludia sent out notice that it would be shutting down in July of that year. References Document 1::: In Euclidean geometry, linear separability is a property of two sets of points. This is most easily visualized in two dimensions (the Euclidean plane) by thinking of one set of points as being colored blue and the other set of points as being colored red. These two sets are linearly separable if there exists at least one line in the plane with all of the blue points on one side of the line and all the red points on the other side. This idea immediately generalizes to higher-dimensional Euclidean spaces if the line is replaced by a hyperplane. The problem of determining if a pair of sets is linearly separable and finding a separating hyperplane if they are, arises in several areas. In statistics and machine learning, classifying certain types of data is a problem for which good algorithms exist that are based on this concept. Mathematical definition Let and be two sets of points in an n-dimensional Euclidean space. Then and are linearly separable if there exist n + 1 real numbers , such that every point satisfies and every point satisfies , where is the -th component of . Equivalently, two sets are linearly separable precisely when their respective convex hulls are disjoint (colloquially, do not overlap). In simple 2D, it can also be imagined that the set of points under a linear transformation collapses into a line, on which there exists a value, k, greater than which one set of points will fall into, and lesser than which the other set of points fall. Examples Three non-collinear points in two classes ('+' and '-') are always linearly separable in two dimensions. This is illustrated by the three examples in the following figure (the all '+' case is not shown, but is similar to the all '-' case): However, not all sets of four points, no three collinear, are linearly separable in two dimensions. 
The following example would need two straight lines and thus is not linearly separable: Notice that three points which are collinear and of the form "+ ⋅⋅⋅ Document 2::: In coding theory, a standard array (or Slepian array) is a by array that lists all elements of a particular vector space. Standard arrays are used to decode linear codes; i.e. to find the corresponding codeword for any received vector. Definition A standard array for an [n,k]-code is a by array where: The first row lists all codewords (with the 0 codeword on the extreme left) Each row is a coset with the coset leader in the first column The entry in the i-th row and j-th column is the sum of the i-th coset leader and the j-th codeword. For example, the [5,2]-code = {0, 01101, 10110, 11011} has a standard array as follows: The above is only one possibility for the standard array; had 00011 been chosen as the first coset leader of weight two, another standard array representing the code would have been constructed. The first row contains the 0 vector and the codewords of (0 itself being a codeword). Also, the leftmost column contains the vectors of minimum weight enumerating vectors of weight 1 first and then using vectors of weight 2. Also each possible vector in the vector space appears exactly once. Constructing a standard array Because each possible vector can appear only once in a standard array some care must be taken during construction. A standard array can be created as follows: List the codewords of , starting with 0, as the first row Choose any vector of minimum weight not already in the array. Write this as the first entry of the next row. This vector is denoted the 'coset leader'. Fill out the row by adding the coset leader to the codeword at the top of each column. The sum of the i-th coset leader and the j-th codeword becomes the entry in row i, column j. Repeat steps 2 and 3 until all rows/cosets are listed and each vector appears exactly once. Adding vectors is done mod q. For example, binary codes are added mod 2 (which equivalent to bit-wise XOR addition). For example, in , 11000 + 11011 = 00011. That selecting different c Document 3::: Stephen Thomas is a professor at the University of Greenwich Business School, working in the area of energy policy. Before moving to the University of Greenwich in 2001, Thomas worked for twenty-two years at the University of Sussex. Research work Stephen Thomas is professor at the University of Greenwich Business School, and has been a researcher in the area of energy policy for over twenty-five years. He specialises in the economics and policy of nuclear power (of which he is a critic), liberalisation and privatisation of the electricity and gas industries, and trade policy on network energy industries. Thomas serves on the editorial boards of several periodicals including Energy Policy, Utilities Policy, Energy & Environment, and International Journal of Regulation and Governance. Before moving to the University of Greenwich in 2001, Thomas worked for twenty-two years at the Science Policy Research Unit (SPRU) at the University of Sussex. Selected publications with Mycle Schneider and Antony Froggatt (2011). World Nuclear Industry Status Report 2010-2011: Nuclear Power in a Post-Fukushima World, Worldwatch Institute. with Mycle Schneider, Antony Froggatt, and Doug Koplow. The World Nuclear Industry Status Report 2009 Commissioned by German Federal Ministry of Environment, Nature Conservation and Reactor Safety, August 2009. 
International Perspectives on Energy Policy and the Role of Nuclear Power, Multi-Science Publishing, 2009. The grin of the Cheshire cat, Energy Policy, vol 34, 15, 2006, pp 1974–1983. The British Model in Britain: failing slowly, Energy Policy, vol 34, 5, 2006, pp 583–600. The UK Nuclear Decommissioning Authority, Energy & Environment, vol 16, no 6, 2005, pp 923–935. Evaluating the British Model of electricity deregulation, Annals of Public and Cooperative Economics, vol 75, 3, 2004, pp 367–398. See also Andy Stirling David Elliott Energy Fair Gordon Walker World Nuclear Industry Status Report References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main reason researchers proposed the existence of the Fifth Giant in the Solar System? A. To explain the formation of the Sun B. To resolve discrepancies in the Nice Model's predictions C. To account for the presence of Earth-like planets D. To identify the origin of asteroids Answer:
B. To resolve discrepancies in the Nice Model's predictions
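The linear-separability excerpt in the row above defines separation through a hyperplane with weights w_1..w_n and a threshold k. Purely as an editorial illustration (not part of the dataset row; the helper name linearly_separable and the use of numpy/scipy are assumptions), a minimal sketch of testing that definition numerically with a feasibility linear program:

import numpy as np
from scipy.optimize import linprog

def linearly_separable(blue, red):
    # For finite point sets, strict separability by a hyperplane w.x + b is
    # equivalent to feasibility of  w.x + b >= 1  for every "blue" point and
    # w.x + b <= -1  for every "red" point (any strict separator can be rescaled).
    blue, red = np.asarray(blue, dtype=float), np.asarray(red, dtype=float)
    n = blue.shape[1]                                        # dimension of the points
    # Decision variables: the n weights w followed by the offset b.
    # linprog expects constraints in the form A_ub @ z <= b_ub.
    A_blue = -np.hstack([blue, np.ones((len(blue), 1))])     # -(w.x + b) <= -1
    A_red = np.hstack([red, np.ones((len(red), 1))])         #   w.x + b  <= -1
    A_ub = np.vstack([A_blue, A_red])
    b_ub = -np.ones(len(A_ub))
    res = linprog(c=np.zeros(n + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1))           # pure feasibility problem
    return res.success

# Three non-collinear points in two classes are always separable in the plane;
# the four-point XOR pattern is the classic non-separable counterexample.
print(linearly_separable([(0, 0), (1, 0)], [(0, 1)]))          # True
print(linearly_separable([(0, 0), (1, 1)], [(0, 1), (1, 0)]))  # False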
Relavent Documents: Document 0::: Arthur Aaron Oliner (March 5, 1921 – September 9, 2013) was an American physicist and electrical engineer, who was professor emeritus at department of electrical and computer engineering at New York University-Polytechnic. Best known for his contributions to engineering electromagnetics and antenna theory, he is regarded as a pioneer of leaky wave theory and leaky wave antennas. Biography Arthur Aaron Oliner was born on March 5, 1921, in Shanghai, China. He received an undergraduate degree from Brooklyn College and Ph.D. from Cornell University in 1941 and 1946 respectively, with both being in physics. In 1946, he joined Microwave Research Institute at New York University's school of engineering, then known as the Polytechnic Institute of Brooklyn. In 1965, he went on to a sabbatical at École normale supérieure in Paris, France, under a Guggenheim Fellowship. Becoming a full professor in 1957, Oliner acted as the head of the institute's department of electrical engineering in between 1966 and 1974. In addition, he was the director of the Microwave Research Institute from 1967 until 1982. He retired from New York University in 1990. He died on September 9, 2013, in Lexington, Massachusetts. He was survived by two children, three grandchildren, and one great-grandchild; his wife Frieda, died in 2013. Oliner was a Fellow of AAAS and a Life Fellow of IEEE. In 1991, he was elected to the National Academy of Engineering for his "contributions to the theory of guided electromagnetic waves and antennas." He was a recipient of the IEEE Heinrich Hertz Medal (2000) and Distinguished Educator Award of the Microwave Theory and Techniques Society, of which he was a Honorary Life Member. During his career, Oliner was also employed as an engineering consultant for IBM, Boeing, Raytheon Technologies, Hughes Aircraft Company and Rockwell International. He was the founder of Merrimac Industries, and served on its board of directors from 1962 until its acquisition by Crane Aerospace Document 1::: The static induction thyristor (SIT, SITh) is a thyristor with a buried gate structure in which the gate electrodes are placed in n-base region. Since they are normally on-state, gate electrodes must be negatively or anode biased to hold off-state. It has low noise, low distortion, high audio frequency power capability. The turn-on and turn-off times are very short, typically 0.25 microseconds. History The first static induction thyristor was invented by Japanese engineer Jun-ichi Nishizawa in 1975. It was capable of conducting large currents with a low forward bias and had a small turn-off time. It had a self controlled gate turn-off thyristor that was commercially available through Tokyo Electric Co. (now Toyo Engineering Corporation) in 1988. The initial device consisted of a p+nn+ diode and a buried p+ grid. In 1999, an analytical model of the SITh was developed for the PSPICE circuit simulator. In 2010, a newer version of SITh was developed by Zhang Caizhen, Wang Yongshun, Liu Chunjuan and Wang Zaixing, the new feature of which was its high forward blocking voltage. See also Static induction transistor MOS composite static induction thyristor References External links Static induction thyristor Document 2::: The planar process is a manufacturing process used in the semiconductor industry to build individual components of a transistor, and in turn, connect those transistors together. 
It is the primary process by which silicon integrated circuit chips are built, and it is the most commonly used method of producing junctions during the manufacture of semiconductor devices. The process utilizes the surface passivation and thermal oxidation methods. The planar process was developed at Fairchild Semiconductor in 1959 and process proved to be one of the most important single advances in semiconductor technology. Overview The key concept is to view a circuit in its two-dimensional projection (a plane), thus allowing the use of photographic processing concepts such as film negatives to mask the projection of light exposed chemicals. This allows the use of a series of exposures on a substrate (silicon) to create silicon oxide (insulators) or doped regions (conductors). Together with the use of metallization, and the concepts of p–n junction isolation and surface passivation, it is possible to create circuits on a single silicon crystal slice (a wafer) from a monocrystalline silicon boule. The process involves the basic procedures of silicon dioxide (SiO2) oxidation, SiO2 etching and heat diffusion. The final steps involves oxidizing the entire wafer with an SiO2 layer, etching contact vias to the transistors, and depositing a covering metal layer over the oxide, thus connecting the transistors without manually wiring them together. History Development In 1955 at Bell Labs, Carl Frosch and Lincoln Derick accidentally grew a layer of silicon dioxide over a silicon wafer, for which they observed surface passivation properties. In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field effect transistors, the first transistors in which drain and source were adjacent at the surface, showing that silicon dioxide surface passivation protected and insulated Document 3::: The Living Museum of Bujumbura () is a zoo and museum in Burundi. The museum is located in Bujumbura, the country's largest city and former capital, and is one of the country's two public museums. It is dedicated to the wildlife and art of Burundi. The museum was founded in 1977 and occupies a park on the rue du 13 Octobre in downtown Bujumbura. In December 2016, the zoo's collection included six crocodiles, one monkey, one leopard, two chimpanzees, three guinea fowls, a tortoise, an antelope, and a number of snakes and fish. A number of Burundian craftsmen also have workshops on the museum's premises. Several different types of trees stand in the park, alongside a reconstruction of a traditional Burundian house (rugo). The number of visitors to the museum fell sharply in the aftermath of the 2015 Burundian unrest, following a wider decline in the number of tourists to the country. References External links Le Musée Vivant (Bujumbura) at Petit Futé The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What role does jasmonic acid (JA) play in plant responses to wounding events? A. It inhibits the production of defense proteins. B. It decreases leaf area to reduce evaporation. C. It induces the production of proteins functioning in plant defenses. D. It prevents the synthesis of callose in damaged sieve elements. Answer:
C. It induces the production of proteins functioning in plant defenses.
Relavent Documents: Document 0::: HD 23127 b is a jovian extrasolar planet orbiting the star HD 23127 at the distance of 2.29 AU, taking 3.32 years to orbit. The orbit is very eccentric, a so-called "eccentric Jupiter". At periastron, the distance is 1.28 AU, and at apastron, the distance is 3.30 AU. The mass is at least 1.37 times Jupiter. Only the minimum mass is known because the inclination is not known. References Document 1::: Glaze or glaze ice, also called glazed frost or verglas, is a smooth, transparent and homogeneous ice coating occurring when freezing rain or drizzle hits a surface. It is similar in appearance to clear ice, which forms from supercooled water droplets. It is a relatively common occurrence in temperate climates in the winter when precipitation forms in warm air aloft and falls into below-freezing temperature at the surface. Effects When the freezing rain or drizzle is light and not prolonged, the ice formed is thin. It usually causes only minor damage, relieving trees of their dead branches, etc. When large quantities accumulate, however, it is one of the most dangerous types of winter hazard. When the ice layer exceeds , tree limbs with branches heavily coated in ice can break off under the enormous weight and fall onto power lines. Windy conditions, when present, will exacerbate the damage. Power lines coated with ice become extremely heavy, causing support poles, insulators, and lines to break. The ice that forms on roadways makes vehicle travel dangerous. Unlike snow, wet ice provides almost no traction, and vehicles will slide even on gentle slopes. Because it conforms to the shape of the ground or object (such as a tree branch or car) it forms on, it is often difficult to notice until it is too late to react. Glaze from freezing rain on a large scale causes effects on plants that can be severe, as they cannot support the weight of the ice. Trees may snap as they are dormant and fragile during winter weather. Pine trees are also victims of ice storms as their needles will catch the ice, but not be able to support the weight. Orchardists spray water onto budding fruit to simulate glaze as the ice insulates the buds from even lower temperatures. This saves the crop from severe frost damage. Glaze from freezing rain is also an extreme hazard to aircraft, as it causes very rapid structural icing. Most helicopters and small airplanes lack the necessary deicin Document 2::: Scigress, stylised SCiGRESS, is a software suite designed for molecular modeling, computational and experimental chemistry, drug design, and materials science. It is a successor to the Computer Aided Chemistry (CAChe) software and has been used to perform experiments on hazardous or novel biomolecules and proteins in silico. Functions and use cases Molecule editing. Theory levels: DFT, semi-empirical, molecular mechanics and dynamics. Determination of low energy conformations and thermodynamic properties. Calculation and 3D visualization of electronic properties, such as partial charges, orbitals, electron densities, and electrostatic surfaces. Analysis of transition states and intrinsic reaction coordinates. Infrared, UV, and NMR spectroscopy. Study of phase transitions, expansion, crystal defects, compressibility, tensile strength, adsorption, absorption, and thermal conductivity. Protein handling and protein-ligand docking. See also References Document 3::: A vacuum-tube computer, now termed a first-generation computer, is a computer that uses vacuum tubes for logic circuitry. 
While the history of mechanical aids to computation goes back centuries, if not millennia, the history of vacuum tube computers is confined to the middle of the 20th century. Lee De Forest invented the triode in 1906. The first example of using vacuum tubes for computation, the Atanasoff–Berry computer, was demonstrated in 1939. Vacuum-tube computers were initially one-of-a-kind designs, but commercial models were introduced in the 1950s and sold in volumes ranging from single digits to thousands of units. By the early 1960s vacuum tube computers were obsolete, superseded by second-generation transistorized computers. Much of what we now consider part of digital computing evolved during the vacuum tube era. Initially, vacuum tube computers performed the same operations as earlier mechanical computers, only at much higher speeds. Gears and mechanical relays operate in milliseconds, whereas vacuum tubes can switch in microseconds. The first departure from what was possible prior to vacuum tubes was the incorporation of large memories that could store thousands of bits of data and randomly access them at high speeds. That, in turn, allowed the storage of machine instructions in the same memory as data—the stored program concept, a breakthrough which today is a hallmark of digital computers. Other innovations included the use of magnetic tape to store large volumes of data in compact form (UNIVAC I) and the introduction of random access secondary storage (IBM RAMAC 305), the direct ancestor of all the hard disk drives we use today. Even computer graphics began during the vacuum tube era with the IBM 740 CRT Data Recorder and the Whirlwind light pen. Programming languages originated in the vacuum tube era, including some still used today such as Fortran & Lisp (IBM 704), Algol (Z22) and COBOL. Operating systems, such as the GM-NAA I/O, also were b The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What are the two main applications of cost–benefit analysis (CBA) as described in the text? A. To determine if an investment is sound and to estimate the overall environmental impact B. To determine if an investment is sound and to provide a basis for comparing investments C. To provide a basis for comparing investments and to estimate future economic growth D. To evaluate business decisions and to assess worker satisfaction Answer:
B. To determine if an investment is sound and to provide a basis for comparing investments
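A small worked check (editorial addition, not part of the dataset row) of the orbital figures quoted for HD 23127 b in the prompt above: the periastron and apastron distances r_peri = a(1 - e) and r_apo = a(1 + e) imply the semi-major axis and eccentricity directly.

r_peri, r_apo = 1.28, 3.30                  # AU, as quoted in the excerpt

a = (r_apo + r_peri) / 2                    # semi-major axis implied by the two extremes
e = (r_apo - r_peri) / (r_apo + r_peri)     # eccentricity implied by the same figures

print(round(a, 2))   # 2.29 AU, matching the quoted orbital distance
print(round(e, 2))   # 0.44, consistent with the "eccentric Jupiter" description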
Relavent Documents: Document 0::: Lump sum turnkey (LSTK) is a combination of the business-contract concepts of lump sum and turnkey. Lump sum is a noun which means a complete payment consisting of a single sum of money while turnkey is an adjective of a product or service which means product or service will be ready to use upon delivery. In the construction industry, LSTK combines two concepts. The LS (lump sum) part refers to the payment of a fixed sum for the delivery under e.g. an EPC contract. The financial risk lies with the contractor. TK (turn key) specifies that the scope of work includes start-up of the facility to a level of operational status. Ultimately the scope of work will define just exactly what is needed. Progressive LSTK Very large projects may be split into phases where a fixed price (lump sum) is agreed at the start of each phase. This reduces the overall project risk taken on by the contractor at the start of the project, and increases flexibility on the part of the project owner to adapt the project to changing circumstances. References Document 1::: Glovadalen (developmental code name UCB-0022) is a dopamine D1 receptor positive allosteric modulator which is under development for the treatment of Parkinson's disease. It has been found to potentiate the capacity of dopamine to activate the D1 receptor by 10-fold in vitro with no actions on other dopamine receptors. As of May 2024, glovadalen is in phase 2 clinical trials for this indication. The drug is under development by UCB Biopharma. It is described as an orally active, centrally penetrant small molecule. Document 2::: In mathematics, a continuum structure function (CSF) is defined by Laurence Baxter as a nondecreasing mapping from the unit hypercube to the unit interval. It is used by Baxter to help in the Mathematical modelling of the level of performance of a system in terms of the performance levels of its components. References Further reading Document 3::: MicroStation is a CAD software platform for two- and three-dimensional design and drafting, developed and sold by Bentley Systems and used in the architectural and engineering industries. It generates 2D/3D vector graphics objects and elements and includes building information modeling (BIM) features. The current version is MicroStation CONNECT Edition. History MicroStation was initially developed by 3 Individual developers and sold and supported by Intergraph in the 1980s. The latest versions of the software are released solely for Microsoft Windows operating systems, but historically MicroStation was available for Macintosh platforms and a number of Unix-like operating systems. From its inception MicroStation was designed as an IGDS (Interactive Graphics Design System) file editor for the PC. Its initial development was a result of the developers experience developing PseudoStation released in 1984, a program designed to replace the use of proprietary Intergraph graphic workstations to edit DGN files by substituting the much less expensive Tektronix compatible graphics terminals. PseudoStation as well as Intergraph's IGDS program ran on a modified version of Digital Equipment Corporation's VAX super-mini computer. In 1985, MicroStation 1.0 was released as a DGN file read-only and plot program designed to run exclusively on the IBM PC-AT personal computer. In 1987, MicroStation 2.0 was released, and was the first version of MicroStation to read and write DGN files. 
Almost two years later, MicroStation 3.0 was released, which took advantage of the increasing processing power of the PC, particularly with respect to dynamics. Intergraph MicroStation 4.0 was released in late 1990 and added many features: reference file clipping and masking, a DWG translator, fence modes, the ability to name levels, as well as GUI enhancements. The 1992 release of version 4 introduced the ability to write applications using the MicroStation Development Language (MDL). In 1993, Mic The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary therapeutic target of glovadalen in the treatment of Parkinson's disease? A. Dopamine D1 receptor B. Dopamine D2 receptor C. Serotonin receptor D. Norepinephrine receptor Answer:
A. Dopamine D1 receptor
Relavent Documents: Document 0::: Filariasis is a filarial infection caused by parasitic nematodes (roundworms) spread by different vectors. They are included in the list of neglected tropical diseases. The most common type is lymphatic filariasis caused by three species of Filaria that are spread by mosquitoes. Other types of filariasis are onchocerciasis also known as river blindness caused by Onchocerca volvulus; Loa loa filariasis (Loiasis) caused by Loa loa; Mansonelliasis caused by three species of Mansonella, and Dirofilariasis caused by two types of Dirofilaria. Epidemiology In the year 2000, 199 million infection cases of lymphatic filariasis were predicted with 3.1 million cases in America and around 107 million in South East Asia, making up to 52% of the global cases coming from Bangladesh, India, Indonesia, and Myanmar combined. While the African nations that comprised around 21% of the cases showed a decrease in the trend over a period of 19 years from 2000 to 2018, studies still proved the global burden of infection to be concentrated in southeast Asia. Cause Eight known filarial worms have humans as a definitive host. These are divided into three groups according to the part of the body they affect: Lymphatic filariasis is caused by the worms Wuchereria bancrofti, Brugia malayi, and Brugia timori. These worms occupy the lymphatic system, including the lymph nodes; in chronic cases, these worms can lead to the syndrome of elephantiasis. Loiasis a subcutaneous filariasis is caused by Loa loa (the eye worm). Mansonella streptocerca, and Onchocerca volvulus. These worms occupy the layer just under the skin. O. volvulus causes river blindness. Serous cavity filariasis is caused by the worms Mansonella perstans and Mansonella ozzardi, which occupy the serous cavity of the abdomen. Dirofilaria immitis, the dog heartworm, rarely infects humans. These worms are transmitted by infected mosquitoes of the genera Aedes, Culex, Anopheles and Mansonia. Recent evidence suggests that climat Document 1::: The Legendre pseudospectral method for optimal control problems is based on Legendre polynomials. It is part of the larger theory of pseudospectral optimal control, a term coined by Ross. A basic version of the Legendre pseudospectral was originally proposed by Elnagar and his coworkers in 1995. Since then, Ross, Fahroo and their coworkers have extended, generalized and applied the method for a large range of problems. An application that has received wide publicity is the use of their method for generating real time trajectories for the International Space Station. Fundamentals There are three basic types of Legendre pseudospectral methods: One based on Gauss-Lobatto points First proposed by Elnagar et al and subsequently extended by Fahroo and Ross to incorporate the covector mapping theorem. Forms the basis for solving general nonlinear finite-horizon optimal control problems. Incorporated in several software products DIDO, OTIS, PSOPT One based on Gauss-Radau points First proposed by Fahroo and Ross and subsequently extended (by Fahroo and Ross) to incorporate a covector mapping theorem. Forms the basis for solving general nonlinear infinite-horizon optimal control problems. Forms the basis for solving general nonlinear finite-horizon problems with one free endpoint. 
One based on Gauss points First proposed by Reddien Forms the basis for solving finite-horizon problems with free endpoints Incorporated in several software products GPOPS, PROPT Software The first software to implement the Legendre pseudospectral method was DIDO in 2001. Subsequently, the method was incorporated in the NASA code OTIS. Years later, many other software products emerged at an increasing pace, such as PSOPT, PROPT and GPOPS. Flight implementations The Legendre pseudospectral method (based on Gauss-Lobatto points) has been implemented in flight by NASA on several spacecraft through the use of the software, DIDO. The first flight implementation was on November 5, 200 Document 2::: In immunology, epitope mapping is the process of experimentally identifying the binding site, or epitope, of an antibody on its target antigen (usually, on a protein). Identification and characterization of antibody binding sites aid in the discovery and development of new therapeutics, vaccines, and diagnostics. Epitope characterization can also help elucidate the binding mechanism of an antibody and can strengthen intellectual property (patent) protection. Experimental epitope mapping data can be incorporated into robust algorithms to facilitate in silico prediction of B-cell epitopes based on sequence and/or structural data. Epitopes are generally divided into two classes: linear and conformational/discontinuous. Linear epitopes are formed by a continuous sequence of amino acids in a protein. Conformational epitopes epitopes are formed by amino acids that are nearby in the folded 3D structure but distant in the protein sequence. Note that conformational epitopes can include some linear segments. B-cell epitope mapping studies suggest that most interactions between antigens and antibodies, particularly autoantibodies and protective antibodies (e.g., in vaccines), rely on binding to discontinuous epitopes. Importance for antibody characterization By providing information on mechanism of action, epitope mapping is a critical component in therapeutic monoclonal antibody (mAb) development. Epitope mapping can reveal how a mAb exerts its functional effects - for instance, by blocking the binding of a ligand or by trapping a protein in a non-functional state. Many therapeutic mAbs target conformational epitopes that are only present when the protein is in its native (properly folded) state, which can make epitope mapping challenging. Epitope mapping has been crucial to the development of vaccines against prevalent or deadly viral pathogens, such as chikungunya, dengue, Ebola, and Zika viruses, by determining the antigenic elements (epitopes) that confer long-lasting Document 3::: Algaenan is the resistant biopolymer in the cell walls of unrelated groups of green algae, and facilitates their preservation in the fossil record. References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the classification of Vendomyces according to the text? A. Chytridiomycetes B. Ascomycetes C. Basidiomycetes D. Zygomycetes Answer:
A. Chytridiomycetes
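The Legendre pseudospectral excerpt above distinguishes the Gauss-Lobatto, Gauss-Radau and Gauss node families but does not show how the nodes are obtained. As an illustrative sketch only (assuming numpy; the function name is hypothetical and this is not drawn from the cited methods or software), the Gauss-Lobatto collocation points can be computed as the endpoints plus the roots of the derivative of the degree-N Legendre polynomial:

import numpy as np
from numpy.polynomial import legendre

def gauss_lobatto_nodes(N):
    # The N+1 Legendre-Gauss-Lobatto nodes on [-1, 1] are the endpoints
    # together with the N-1 roots of P_N'(x), the derivative of the
    # degree-N Legendre polynomial.
    PN = legendre.Legendre.basis(N)          # degree-N Legendre polynomial P_N
    interior = PN.deriv().roots().real       # roots of P_N' (real by theory)
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

print(gauss_lobatto_nodes(4))   # approximately [-1, -0.6547, 0, 0.6547, 1]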
Relavent Documents: Document 0::: Tetrasodium pyrophosphate, also called sodium pyrophosphate, tetrasodium phosphate or TSPP, is an inorganic compound with the formula Na4P2O7. As a salt, it is a white, water-soluble solid. It is composed of pyrophosphate anion and sodium ions. Toxicity is approximately twice that of table salt when ingested orally. Also known is the decahydrate Na4P2O710(H2O). Use Tetrasodium pyrophosphate is used as a buffering agent, an emulsifier, a dispersing agent, and a thickening agent, and is often used as a food additive. Common foods containing tetrasodium pyrophosphate include chicken nuggets, marshmallows, pudding, crab meat, imitation crab, canned tuna, and soy-based meat alternatives and cat foods and cat treats where it is used as a palatability enhancer. In toothpaste and dental floss, tetrasodium pyrophosphate acts as a tartar control agent, serving to remove calcium and magnesium from saliva and thus preventing them from being deposited on teeth. Tetrasodium pyrophosphate is used in commercial dental rinses before brushing to aid in plaque reduction. Tetrasodium pyrophosphate is sometimes used in household detergents to prevent similar deposition on clothing, but due to its phosphate content it causes eutrophication of water, promoting algae growth. Production Tetrasodium pyrophosphate is produced by the reaction of furnace-grade phosphoric acid with sodium carbonate to form disodium phosphate, which is then heated to 450 °C to form tetrasodium pyrophosphate: 2 Na2HPO4 → Na4P2O7 + H2O References Document 1::: The concept of the stochastic discount factor (SDF) is used in financial economics and mathematical finance. The name derives from the price of an asset being computable by "discounting" the future cash flow by the stochastic factor , and then taking the expectation. This definition is of fundamental importance in asset pricing. If there are n assets with initial prices at the beginning of a period and payoffs at the end of the period (all xs are random (stochastic) variables), then SDF is any random variable satisfying The stochastic discount factor is sometimes referred to as the pricing kernel as, if the expectation is written as an integral, then can be interpreted as the kernel function in an integral transform. Other names sometimes used for the SDF are the "marginal rate of substitution" (the ratio of utility of states, when utility is separable and additive, though discounted by the risk-neutral rate), a "change of measure", "state-price deflator" or a "state-price density". Properties The existence of an SDF is equivalent to the law of one price; similarly, the existence of a strictly positive SDF is equivalent to the absence of arbitrage opportunities (see Fundamental theorem of asset pricing). This being the case, then if is positive, by using to denote the return, we can rewrite the definition as and this implies Also, if there is a portfolio made up of the assets, then the SDF satisfies By a simple standard identity on covariances, we have Suppose there is a risk-free asset. Then implies . Substituting this into the last expression and rearranging gives the following formula for the risk premium of any asset or portfolio with return : This shows that risk premiums are determined by covariances with any SDF. See also Hansen–Jagannathan bound Document 2::: The Desagüe was the hydraulic engineering project to drain Mexico's central lake system in order to protect the capital from persistent and destructive flooding. 
Begun in the sixteenth century and completed in the late nineteenth century, it has been deemed "the greatest engineering project of colonial Spanish America." Historian Charles Gibson goes further and considers it "one of the largest engineering enterprises of pre-industrial society anywhere in the world." There had been periodic flooding of the prehispanic Aztec capital of Tenochtitlan, the site which became the Spanish capital of Mexico City. Flooding continued to be a threat to the viceregal capital, so at the start of the seventeenth century, the crown ordered a solution to the problem that entailed the employment of massive numbers of indigenous laborers who were compelled to work on the drainage project. The crown also devoted significant funding.  A tunnel and later a surface drainage system diverted flood waters outside the closed basin of Mexico.  Not until the late nineteenth century under Porfirio Díaz (1876-1911) was the project completed by British entrepreneur and engineer, Weetman Pearson, using machinery imported from Great Britain and other technology at a cost of 16 million pesos, a vast sum at the time.  The ecological impact was long lasting, with desiccation permanently changing the ecology of the Basin of Mexico. Early history In the period before the Spanish conquest, the Aztec capital of Tenochtitlan had been subject to flooding during prolonged rains.  There was no natural drainage of the lake system outside the closed basin. In the late 1440s, the ruler of Tenochtitlan, Moctezuma I, and the ruler of the allied kingdom of Texcoco, Nezahualcoyotl, ordered a dike (albarradón) to be constructed, which was expanded under the rule of Ahuitzotl. The dike was in place when the Spanish conquered Tenochtitlan in 1521, but major flooding in 1555-56 prompted the construction of a second dik Document 3::: In mathematics, the exponential function can be characterized in many ways. This article presents some common characterizations, discusses why each makes sense, and proves that they are all equivalent. The exponential function occurs naturally in many branches of mathematics. Walter Rudin called it "the most important function in mathematics". It is therefore useful to have multiple ways to define (or characterize) it. Each of the characterizations below may be more or less useful depending on context. The "product limit" characterization of the exponential function was discovered by Leonhard Euler. Characterizations The six most common definitions of the exponential function for real values are as follows. Product limit. Define by the limit: Power series. Define as the value of the infinite series (Here denotes the factorial of . One proof that is irrational uses a special case of this formula.) Inverse of logarithm integral. Define to be the unique number such that That is, is the inverse of the natural logarithm function , which is defined by this integral. Differential equation. Define to be the unique solution to the differential equation with initial value: where denotes the derivative of . Functional equation. The exponential function is the unique function with the multiplicative property for all and . The condition can be replaced with together with any of the following regularity conditions: For the uniqueness, one must impose some regularity condition, since other functions satisfying can be constructed using a basis for the real numbers over the rationals, as described by Hewitt and Stromberg. Elementary definition by powers. 
Define the exponential function with base to be the continuous function whose value on integers is given by repeated multiplication or division of , and whose value on rational numbers is given by . Then define to be the exponential function whose base is the unique positive real number satisfyi The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary purpose of the Environmental Design Research Association (EDRA)? A. To promote architectural design in commercial spaces B. To advance and disseminate environmental design research C. To provide funding for design projects worldwide D. To organize international competitions for architects Answer:
B. To advance and disseminate environmental design research
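A quick numerical illustration (editorial addition, standard library only) of two of the equivalent characterizations of the exponential function listed in the excerpt above: Euler's product limit (1 + x/n)^n for large n and the partial sums of the power series sum x^k / k! both approach e^x.

import math

x = 1.5
n = 1_000_000

product_limit = (1 + x / n) ** n                                 # Euler's product limit, finite n
power_series = sum(x**k / math.factorial(k) for k in range(30))  # truncated power series

print(product_limit)   # ~4.48168..., off by O(1/n)
print(power_series)    # ~4.4816890703..., accurate to machine precision
print(math.exp(x))     # 4.4816890703...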
Relavent Documents: Document 0::: In telecommunications, the term multiplex baseband has the following meanings: In frequency-division multiplexing, the frequency band occupied by the aggregate of the signals in the line interconnecting the multiplexing and radio or line equipment. In frequency division multiplexed carrier systems, at the input to any stage of frequency translation, the frequency band occupied. For example, the output of a group multiplexer consists of a band of frequencies from 60 kHz to 108 kHz. This is the group-level baseband that results from combining 12 voice-frequency input channels, having a bandwidth of 4 kHz each, including guard bands. In turn, 5 groups are multiplexed into a super group having a baseband of 312 kHz to 552 kHz. This baseband, however, does not represent a group-level baseband. Ten super groups are in turn multiplexed into one master group, the output of which is a baseband that may be used to modulate a microwave-frequency carrier. References Document 1::: The Phycological Society of India was founded and registered as Society on 1962. Prof. M. O. P. Iyengar was the first President of the society. Prof. Vidyavati, former Vice-Chancellor, Kakatiya University, Telangana, India is the President now. The Phycological Society of India promotes interest and studies in various branches of phycology. Publications The Society also publishes a half yearly research journal called Phykos and the first volume was published in April 1962. References External links http://www.phykosindia.com http://www.psaalgae.org Document 2::: Mond gas is a cheap coal gas that was used for industrial heating purposes. Coal gases are made by decomposing coal through heating it to a high temperature. Coal gases were the primary source of gas fuel during the 1940s and 1950s until the adoption of natural gas. They were used for lighting, heating, and cooking, typically being supplied to households through pipe distribution systems. The gas was named after its discoverer, Ludwig Mond. Discovery In 1889, Ludwig Mond discovered that the combustion of coal with air and steam produced ammonia along with an extra gas, which was named the Mond gas. He discovered this while looking for a process to form ammonium sulfate, which was useful in agriculture. The process involved reacting low-quality coal with superheated steam, which produced the Mond gas. The gas was then passed through dilute sulfuric acid spray, which ultimately removed the ammonia, forming ammonium sulfate. Mond modified the gasification process by restricting the air supply and filling the air with steam, providing a low working temperature. This temperature was below ammonia's point of dissociation, maximizing the amount of ammonia that could be produced from the nitrogen, a product from superheating coal. Gas production The Mond gas process was designed to convert cheap coal into flammable gas, which was made up of mainly hydrogen, while recovering ammonium sulfate. The gas produced was rich in hydrogen and poor in carbon monoxide. Although it could be used for some industrial purposes and power generation, the gas was limited for heating or lighting. In 1897, the first Mond gas plant began at the Brunner Mond & Company in Northwich, Cheshire. Mond plants which recovered ammonia needed to be large in order to be profitable, using at least 182 tons of coal per week. 
Reaction Predominant reaction in Mond Gas Process: C + 2H2O = CO2+ 2H2 The Mond gas was composed of roughly: 12% CO (Carbon monoxide) 28% H2 (Hydrogen) 2.2% CH4 (Methane) 1 Document 3::: Pentamidine is an antimicrobial medication used to treat African trypanosomiasis, leishmaniasis, Balamuthia infections, babesiosis, and to prevent and treat pneumocystis pneumonia (PCP) in people with poor immune function. In African trypanosomiasis it is used for early disease before central nervous system involvement, as a second line option to suramin. It is an option for both visceral leishmaniasis and cutaneous leishmaniasis. Pentamidine can be given by injection into a vein or muscle or by inhalation. Common side effects of the injectable form include low blood sugar, pain at the site of injection, nausea, vomiting, low blood pressure, and kidney problems. Common side effects of the inhaled form include wheezing, cough, and nausea. It is unclear if doses should be changed in those with kidney or liver problems. Pentamidine is not recommended in early pregnancy but may be used in later pregnancy. Its safety during breastfeeding is unclear. Pentamidine is in the aromatic diamidine family of medications. While the way the medication works is not entirely clear, it is believed to involve decreasing the production of DNA, RNA, and protein. Pentamidine came into medical use in 1937. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In regions of the world where trypanosomiasis is common pentamidine is provided for free by the World Health Organization (WHO). Medical uses Treatment of PCP caused by Pneumocystis jirovecii Prevention of PCP in adults with HIV who have one or both of the following: History of PCP CD4+ count ≤ 200mm³ Treatment of leishmaniasis Treatment of African trypanosomiasis caused by Trypanosoma brucei gambiense Balamuthia infections Pentamidine is classified as an orphan drug by the U.S. Food and Drug Administration Other uses Use as an antitumor drug has also been proposed. Pentamidine is also identified as a potential small molecule antagonist that disrupts this interacti The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary purpose of the Loewy decomposition in the context of differential equations? A. To simplify the process of solving reducible linear ordinary differential equations B. To provide a numerical approximation for differential equations C. To transform partial differential equations into ordinary differential equations D. To find the Galois group of a differential equation Answer:
A. To simplify the process of solving reducible linear ordinary differential equations
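An editorial arithmetic check of the frequency-division-multiplexing figures quoted in the excerpt above (the only assumption is the excerpt's own statement that each voice channel occupies 4 kHz including guard bands): 12 channels per group and 5 groups per supergroup reproduce the widths of the quoted 60-108 kHz and 312-552 kHz bands.

channel_bw = 4                     # kHz per voice channel, including guard bands
group_bw = 12 * channel_bw         # 48 kHz of spectrum per group
supergroup_bw = 5 * group_bw       # 240 kHz of spectrum per supergroup

print(group_bw, 108 - 60)          # 48 48   -> matches the 60-108 kHz group band
print(supergroup_bw, 552 - 312)    # 240 240 -> matches the 312-552 kHz supergroup band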
Relavent Documents: Document 0::: Geomagnetic latitude, or magnetic latitude (MLAT), is a parameter analogous to geographic latitude, except that, instead of being defined relative to the geographic poles, it is defined by the axis of the geomagnetic dipole, which can be accurately extracted from the International Geomagnetic Reference Field (IGRF). Further, Magnetic Local Time (MLT) is the geomagnetic dipole equivalent to geographic longitude. See also Earth's magnetic field Geomagnetic equator Ionosphere L-shell Magnetosphere World Magnetic Model (WMM) References External links Tips on Viewing the Aurora (SWPC) Magnetic Field Calculator (NCEI) Ionospheric Electrodynamics Using Magnetic Apex Coordinates (Journal of Geomagnetism and Geoelectricity) Document 1::: A Wuest type herringbone gear, invented by Swiss engineer Caspar Wüst-Kunz in the early 20th century, is a type of herringbone gear wherein "the teeth on opposite sides of the center line are staggered by an amount equal to one half the circular pitch". This staggering of the two rows of teeth causes the gear to wear more evenly, at the slight cost of strength. Document 2::: Nutritional yeast (also known as nooch) is a deactivated (i.e. dead) yeast, often a strain of Saccharomyces cerevisiae, that is sold commercially as a food product. It is sold in the form of yellow flakes, granules, or powder, and may be found in the bulk aisle of natural food stores. It is used in vegan and vegetarian cooking as an ingredient in recipes or as a condiment. It is a source of some B-complex vitamins and contains trace amounts of several other vitamins and minerals. It may be fortified with vitamin B12. Nutritional yeast has a strong flavor described as nutty or cheesy for use as a cheese substitute. It may be used in preparation of mashed potatoes or tofu. Nutritional yeast is a whole-cell inactive yeast that contains both soluble and insoluble parts, which is different from yeast extract. Yeast extract is made by centrifuging inactive nutritional yeast and concentrating the water-soluble yeast cell proteins which are rich in glutamic acid, nucleotides, and peptides, the flavor compounds responsible for umami taste. Commercial production Nutritional yeast is produced by culturing yeast in a nutrient medium for several days. The primary ingredient in the growth medium is glucose, often from either sugarcane or beet molasses. When the yeast is ready, it is killed with heat and then harvested, washed, dried and packaged. The species of yeast used is often a strain of Saccharomyces cerevisiae. The strains are cultured and selected for desirable characteristics and often exhibit a different phenotype from strains of S. cerevisiae used in baking and brewing. Nutrition In a reference amount of , one manufactured, fortified brand is 33% carbohydrates, 53% protein, and 3% fat, providing 60 calories (table). Levels of B vitamins in the reference amount are multiples of the Daily Value (table). Nutritional yeast contains low amounts of dietary minerals (source in table), unless fortified. There may be confusion about the source of vitamin B12 in nutrit Document 3::: Thin-film thickness monitors, deposition rate controllers, and so on, are a family of instruments used in high and ultra-high vacuum systems. They can measure the thickness of a thin film, not only after it has been made, but while it is still being deposited, and some can control either the final thickness of the film, the rate at which it is deposited, or both. 
Not surprisingly, the devices which control some aspect of the process tend to be called controllers, and those that simply monitor the process tend to be called monitors. Most such instruments use a quartz crystal microbalance as the sensor. Optical measurements are sometimes used; this may be especially appropriate if the film being deposited is part of a thin film optical device. A thickness monitor measures how much material is deposited on its sensor. Most deposition processes are at least somewhat directional. The sensor and the sample generally cannot be in the same direction from the deposition source (if they were, the one closer to the source would shadow the other), and may not even be at the same distance from it. Therefore, the rate at which the material is deposited on the sensor may not equal the rate at which it is deposited on the sample. The ratio of the two rates is sometimes called the "tooling factor". For careful work, the tooling factor should be checked by measuring the amount of material deposited on some samples after the fact and comparing it to what the thickness monitor measured. Fizeau interferometers are often used to do this. Many other techniques might be used, depending on the thickness and characteristics of the thin film, including surface profilers, ellipsometry, dual polarisation interferometry and scanning electron microscopy of cross-sections of the sample. Many thickness monitors and controllers allow tooling factors to be entered into the device before deposition begins. The correct tooling factor can be calculated as follows: where Fi is the initial The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main function of the Helix fast-response system (HFRS) as described in the text? A. To drill new oil wells B. To respond to subsea well incidents C. To transport oil to tankers D. To monitor underwater ecosystems Answer:
B. To respond to subsea well incidents
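The thin-film excerpt above breaks off just before its tooling-factor formula. Purely as an editorial illustration (the correction below is the one commonly used with quartz-crystal deposition monitors; it is an assumption here, not recovered from the truncated source, and the function name is hypothetical), the factor entered before a test run is rescaled by the ratio of the thickness actually measured on the sample to the thickness the monitor indicated:

def corrected_tooling_factor(f_initial, t_actual, t_indicated):
    # f_initial   -- tooling factor (%) entered before the calibration deposition
    # t_actual    -- film thickness later measured on the sample (e.g. by interferometry)
    # t_indicated -- thickness the monitor reported for the same deposition
    return f_initial * (t_actual / t_indicated)

# Example: with 100 % assumed, the monitor reported 120 nm but the sample received 96 nm,
# so future runs should use a tooling factor of 80 %.
print(corrected_tooling_factor(100.0, 96.0, 120.0))   # 80.0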
Relavent Documents: Document 0::: Matthew John Fuchter is a British chemist who is a professor of chemistry at the University of Oxford. His research focuses on the development and application of novel functional molecular systems to a broad range of areas; from materials to medicine. He has been awarded both the Harrison-Meldola Memorial Prize (2014) and the Corday–Morgan Prizes (2021) of the Royal Society of Chemistry. In 2020 he was a finalist for the Blavatnik Awards for Young Scientists. Early life and education Fuchter earned a master's degree (MSci) in chemistry at the University of Bristol, where he was awarded the Richard Dixon prize. It was during his undergraduate degree that he first became interested in organic synthesis. As a graduate student he moved to Imperial College London, where he worked with Anthony Barrett on the synthesis and applications of porphyrazines, including as therapeutic agents. During his doctoral studies Barrett and Fuchter collaborated with Brian M. Hoffman at Northwestern University. Research and career After completing his PhD, Fuchter moved to Australia, for postdoctoral research at CSIRO and the University of Melbourne, where he worked with Andrew Bruce Holmes. In 2007 Fuchter returned to the United Kingdom, where he began his independent academic career at the School of Pharmacy, University of London (now UCL School of Pharmacy). Less than one year later he was appointed a Lecturer at Imperial College London, where he was promoted to Reader (Associate Professor) in 2015 and professor in 2019. Fuchter develops photoswitchable molecules, chiral materials and new pharmaceuticals. Fuchter is interested in how considerations of chirality can be applied to the development of novel approaches in chiral optoelectronic materials and devices. In particular, he focusses on the introduction of chiral-optical (so-called chiroptical) properties into optoelectronic materials. Amongst these materials, Fuchter has extensively evaluated the use of chiral small molecule additives (helicenes) to induce chiroptical properties into light emitting polymers for the realisation of chiral (circularly polarised, CP) OLEDs. He has also investigated the application of such materials in circularly polarised photodetectors, which are devices that are capable of detecting circularly polarised light. As well as using chiral functional materials for light emission and detection, Fuchter has investigated the charge transport properties of enantiopure and racemic chiral functional materials. Fuchter has also developed novel molecular photoswitches – molecules that can be cleanly and reversibly interconverted between two states using light – with a focus on heteroaromatic versions of azobenzene. The arylazopyrazole switches developed by Fuchter out perform the ubiquitous azobenzene switches, demonstrating complete photoswitching in both directions and thermal half-lives of the Z isomer of up to 46 years. Fuchter continues to apply these switches to a range of photoaddressable applications from photopharmacology to energy storage. Alongside his work on functional material discovery, Fuchter works in medicinal chemistry and develops small molecule ligands that can either inhibit or stimulate the activity of disease relevant proteins. While he has worked on many drug targets, he has specialised in proteins involved in the transcriptional and epigenetic processes of disease. 
A particular interest has been the development of inhibitors for the histone-lysine methyltransferase enzymes in the Plasmodium parasite that causes human malaria. In 2018 one of the cancer drugs developed by Fuchter, together with Anthony Barrett, Simak Ali and Charles Coombes entered a phase 1 clinical trial, and as of 2020, it is in phase 2. The drug, which was designed using computational chemistry, inhibits the cyclin-dependent kinase 7 (CDK7), a transcriptional regulatory protein that also regulates the cell cycle. Certain cancers rely on CDK7, so inhibition of this enzyme has potential to have a significant impact on cancer pathogenesis. In 2024 Fuchter joined the University of Oxford as a Professor of Chemistry and the Sydney Bailey Fellow in Chemistry at St Peter’s College Oxford. Academic service Fuchter serves on the editorial board of MedChemComm. He is an elected council member of the Royal Society of Chemistry organic division. Fuchter is co-director of the Imperial College London Centre for Drug Discovery Science. Awards and honours 2014 Royal Society of ChemistryHarrison-Meldola Memorial Prize 2014 Elected a Fellow of the Royal Society of Chemistry (FRSC) 2015 Thieme Medical Publishers Chemistry Journal Awardee 2017 Imperial College London President's Award for Excellence in Research 2017 Imperial College London President’s Medal for Excellence in Innovation and Entrepreneurship 2018 Tetrahedron Young Investigator Award 2018 Engineering and Physical Sciences Research Council (EPSRC) Established career fellowship 2020 Blavatnik Awards for Young Scientists 2021 Royal Society of Chemistry Corday–Morgan Prize 2022 Royal Society of Chemistry Stephanie L. Kwolek Award 2023 Royal Society of Chemistry Biological and Medicinal Chemistry Sector Malcolm Campbell Memorial Prize 2023 Elected Fellow of the European Academy of Sciences and Arts Selected publications Document 1::: A planetary phase is a certain portion of a planet's area that reflects sunlight as viewed from a given vantage point, as well as the period of time during which it occurs. The phase is determined by the phase angle, which is the angle between the planet, the Sun and the Earth. Inferior planets The two inferior planets, Mercury and Venus, which have orbits that are smaller than the Earth's, exhibit the full range of phases as does the Moon, when seen through a telescope. Their phases are "full" when they are at superior conjunction, on the far side of the Sun as seen from the Earth. It is possible to see them at these times, since their orbits are not exactly in the plane of Earth's orbit, so they usually appear to pass slightly above or below the Sun in the sky. Seeing them from the Earth's surface is difficult, because of sunlight scattered in Earth's atmosphere, but observers in space can see them easily if direct sunlight is blocked from reaching the observer's eyes. The planets' phases are "new" when they are at inferior conjunction, passing more or less between the Sun and the Earth. Sometimes they appear to cross the solar disk, which is called a transit of the planet. At intermediate points on their orbits, these planets exhibit the full range of crescent and gibbous phases. Superior planets The superior planets, orbiting outside the Earth's orbit, do not exhibit a full range of phases since their maximum phase angles are smaller than 90°. Mars often appears significantly gibbous, it has a maximum phase angle of 45°. 
Jupiter has a maximum phase angle of 11.1° and Saturn of 6°, so their phases are almost always full. See also Earth phase Lunar phase Phases of Venus References Further reading One Schaaf, Fred. The 50 Best Sights in Astronomy and How to See Them: Observing Eclipses, Bright Comets, Meteor Showers, and Other Celestial Wonders. Hoboken, New Jersey: John Wiley, 2007. Print. Two Ganguly, J. Thermodynamics in Earth and Planetary Sciences. B Document 2::: Hashcash is a proof-of-work system used to limit email spam and denial-of-service attacks. Hashcash was proposed in 1997 by Adam Back and described more formally in Back's 2002 paper "Hashcash – A Denial of Service Counter-Measure". In Hashcash the client has to concatenate a random number with a string several times and hash this new string. It then has to do so over and over until a hash beginning with a certain number of zeros is found. Background The idea "...to require a user to compute a moderately hard, but not intractable function..." was proposed by [Cynthia Dwork] and [Moni Naor] in their 1992 paper "Pricing via Processing or Combatting Junk Mail". How it works Hashcash is a cryptographic hash-based proof-of-work algorithm that requires a selectable amount of work to compute, but the proof can be verified efficiently. For email uses, a textual encoding of a hashcash stamp is added to the header of an email to prove the sender has expended a modest amount of CPU time calculating the stamp prior to sending the email. In other words, as the sender has taken a certain amount of time to generate the stamp and send the email, it is unlikely that they are a spammer. The receiver can, at negligible computational cost, verify that the stamp is valid. However, the only known way to find a header with the necessary properties is brute force, trying random values until the answer is found; though testing an individual string is easy, satisfactory answers are rare enough that it will require a substantial number of tries to find the answer. The hypothesis is that spammers, whose business model relies on their ability to send large numbers of emails with very little cost per message, will cease to be profitable if there is even a small cost for each spam they send. Receivers can verify whether a sender made such an investment and use the results to help filter email. Technical details The header line looks something like this: X-Hashcash: 1:20:1303030600:adam@cyph Document 3::: The School of Physics is an academic unit located within the College of Sciences at the Georgia Institute of Technology (Georgia Tech), Georgia, United States. It conducts research and teaching activities related to physics at the undergraduate and graduate levels. The School of Physics offers bachelor's degrees in Physics or Applied Physics. A core of technical courses gives a strong background in mathematics and the physical principles of mechanics, electricity and magnetism, thermodynamics, and quantum theory. The School of Physics also offers programs of study leading to certificates in Applied Optics; Atomic, Molecular, and Chemical Physics; and in Computer Bases Instrumentation. History The Physics Department was one of the eight original departments created, when Georgia Tech opened in 1888. The first chair of the department was Isaac S. Hopkins, who also became Georgia Tech's first president. 
At the outset, Georgia Tech closely modeled itself after the Worcester "Free School" in Worcester, Massachusetts (now the Worcester Polytechnic Institute) and the Massachusetts Institute of Technology in Cambridge. The curricula of such schools emphasized primarily an amalgamation of undergraduate physics education with engineering. In the 1920s and 1930s, the physics department, under the directorship of J. B. Edwards, was closely tied to applied research connected with public and private companies. During the latter decades of the twentieth century, the groups specialized in applied, interdisciplinary, and pure research. The applied and interdisciplinary centers include the Center for Nonlinear Science (CNS), which consists of thirteen core members and ten associate members. In addition, the center hosts visiting faculty. There is also the Center for Relativistic Astrophysics, which consists of ten core members working on problems including gravitational waves. The Howey-Physics Building, home to physics and calculus, was named after Joseph H. Howey. The buildin The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary focus of Matthew Fuchter's research as mentioned in the text? A. Development of renewable energy sources B. Novel functional molecular systems in materials and medicine C. Study of historical chemical processes D. Agricultural chemistry advancements Answer:
B. Novel functional molecular systems in materials and medicine
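The Hashcash excerpt above describes the sender searching for a stamp whose hash begins with a run of zero bits while the receiver verifies it with a single hash. A minimal sketch of that asymmetry (an editorial illustration: the real X-Hashcash stamp has a richer textual format, and the choice of SHA-1 and the resource:counter layout here are assumptions):

import hashlib
from itertools import count

def mint(resource, bits=20):
    # Brute-force a counter until sha1(resource:counter) starts with `bits` zero bits;
    # on average this takes about 2**bits hash evaluations.
    for counter in count():
        stamp = f"{resource}:{counter}"
        digest = int.from_bytes(hashlib.sha1(stamp.encode()).digest(), "big")
        if digest >> (160 - bits) == 0:      # top `bits` of the 160-bit digest are zero
            return stamp

def verify(stamp, bits=20):
    # The receiver's check costs a single hash, regardless of how hard minting was.
    digest = int.from_bytes(hashlib.sha1(stamp.encode()).digest(), "big")
    return digest >> (160 - bits) == 0

stamp = mint("adam@example.com", bits=16)    # 16 bits keeps the demo fast
print(stamp, verify(stamp, bits=16))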
Relavent Documents: Document 0::: The Miniature X-ray Solar Spectrometer (MinXSS) CubeSat was the first launched National Aeronautics and Space Administration Science Mission Directorate CubeSat with a science mission. It was designed, built, and operated primarily by students at the University of Colorado Boulder with professional mentorship and involvement from professors, scientists, and engineers in the Aerospace Engineering Sciences department and the Laboratory for Atmospheric and Space Physics, as well as Southwest Research Institute, NASA Goddard Space Flight Center, and the National Center for Atmospheric Research's High Altitude Observatory. The mission principal investigator is Dr. Thomas N. Woods and co-investigators are Dr. Amir Caspi, Dr. Phil Chamberlin, Dr. Andrew Jones, Rick Kohnert, Professor Xinlin Li, Professor Scott Palo, and Dr. Stanley Solomon. The student lead (project manager, systems engineer) was Dr. James Paul Mason, who has since become a Co-I for the second flight model of MinXSS. MinXSS launched on 2015 December 6 to the International Space Station as part of the Orbital ATK Cygnus CRS OA-4 cargo resupply mission. The launch vehicle was a United Launch Alliance Atlas V rocket in the 401 configuration. CubeSat ridesharing was organized as part of NASA ELaNa-IX. Deployment from the International Space Station was achieved with a NanoRacks CubeSat Deployer on 2016 May 16. Spacecraft beacons were picked up soon after by amateur radio operators around the world. Commissioning of the spacecraft was completed on 2016 June 14 and observations of solar flares captured nearly continuously since then. The altitude rapidly decayed in the last week of the mission as atmospheric drag increased exponentially with altitude. The last contact from MinXSS came on 2017-05-06 at 02:37:26 UTC from a HAM operator in Australia. At that time, some temperatures on the spacecraft were already in excess of 100 °C. (One temperature of >300 °C indicated that the solar panel had disconnected, sugge Document 1::: Archive for Mathematical Logic is a peer-reviewed mathematics journal published by Springer Science+Business Media. It was established in 1950 and publishes articles on mathematical logic. Abstracting and indexing The journal is abstracted and indexed in: Mathematical Reviews Zentralblatt MATH Scopus SCImago According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.287. References External links Document 2::: X-ray absorption spectroscopy (XAS) is a set of advanced techniques used for probing the local environment of matter at atomic level and its electronic structure. The experiments require access to synchrotron radiation facilities for their intense and tunable X-ray beams. Samples can be in the gas phase, solutions, or solids. Background XAS data are obtained by tuning the photon energy, using a crystalline monochromator, to a range where core electrons can be excited (0.1-100 keV). The edges are, in part, named by which core electron is excited: the principal quantum numbers n = 1, 2, and 3, correspond to the K-, L-, and M-edges, respectively. For instance, excitation of a 1s electron occurs at the K-edge, while excitation of a 2s or 2p electron occurs at an L-edge (Figure 1). 
There are three main regions found on a spectrum generated by XAS data, which are then thought of as separate spectroscopic techniques (Figure 2): The absorption threshold determined by the transition to the lowest unoccupied states: The X-ray absorption near-edge structure (XANES), introduced in 1980 and later in 1983 and also called NEXAFS (near-edge X-ray absorption fine structure), which are dominated by core transitions to quasi bound states (multiple scattering resonances) for photoelectrons with kinetic energy in the range from 10 to 150 eV above the chemical potential, called "shape resonances" in molecular spectra since they are due to final states of short life-time degenerate with the continuum with the Fano line-shape. In this range, multi-electron excitations and many-body final states in strongly correlated systems are relevant; In the high kinetic energy range of the photoelectron, the scattering cross-section with neighbor atoms is weak, and the absorption spectra are dominated by EXAFS (extended X-ray absorption fine structure), where the scattering of the ejected photoelectron of neighboring atoms can be approximated by single scattering events. In 1985, it was shown Document 3::: Analyst is a biweekly peer-reviewed scientific journal covering all aspects of analytical chemistry, bioanalysis, and detection science. It is published by the Royal Society of Chemistry and the editor-in-chief is Melanie Bailey (University of Surrey). The journal was established in 1877 by the Society for Analytical Chemistry. Abstracting and indexing The journal is abstracted and indexed in MEDLINE and Analytical Abstracts. According to the Journal Citation Reports, the journal has a 2022 impact factor of 4.2. Analytical Communications In 1999, the Royal Society of Chemistry closed the journal Analytical Communications because it felt that the material submitted to that journal would be best included in a new communications section of Analyst. Predecessor journals of Analytical Communications were Proceedings of the Society for Analytical Chemistry, 1964–1974; Proceedings of the Analytical Division of the Chemical Society, 1975–1979; Analytical Proceedings, 1980–1993; Analytical Proceedings including Analytical Communications, 1994–1995. References External links The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What year was the journal Analyst established? A. 1867 B. 1877 C. 1887 D. 1897 Answer:
B. 1877
Relavent Documents: Document 0::: The SpaceOrb 360 is a 6DOF computer input device that is designed to be operated with two hands. Each of the 6 axes have 10-bit precision each when measuring the amount of force or torque applied. It has two right-index-finger buttons and four right-thumb buttons. It interfaces with a computer through an RS-232 serial port using a custom binary protocol. Drivers for the device exist for Mac OS, Microsoft Windows and Linux. Logitech had similar 6DOF devices during the same time period called the Cyberman and Cyberman II. The device was released in 1996, the same year as popular 3D games such as Descent II and Quake. It was originally called the Spaceball Avenger II, a sequel to SpaceTec's Spaceball Avenger. The SpaceOrb was especially suited for the gameplay of Descent because of the complete freedom-of-motion afforded by its rendering engine. There was strong support for the device in both Quake and Quake II, but the WASD-type keyboard-and-mouse controls eventually became more popular. As of the Half-Life engine (based on the original Quake source), there was specific support for the SpaceOrb's capabilities. Developers later started to drop variable movement speed support, which reduced the 10bit translation force measurement to 1bit per direction. It was originally manufactured and sold by the SpaceTec IMC company (first bought by Labtec, which itself was later bought by Logitech). The device is no longer sold nor supported by Logitech. It has been supplanted by more modern devices sold under Logitech's 3Dconnexion brand, which are all one-handed 3DMice that afford the other hand the freedom to interact with the keyboard/mouse. In 2009, a SpaceOrb fan with the username "vputz" has designed Arduino add ons (OrbDuino, OrbShield, Orbotron) to make SpaceOrbs available over USB, making it compatible with modern operating systems by emulating joystick, mouse, and/or keyboard. ASCII Sphere 360 ASCII Entertainment (later Agetec) bought the SpaceOrb 360 design and tech Document 1::: The Pellizzari reaction was discovered in 1911 by Guido Pellizzari, and is the organic reaction of an amide and a hydrazide to form a 1,2,4-triazole. The product is similar to that of the Einhorn-Brunner reaction, but the mechanism itself is not regioselective. Mechanism The mechanism begins by the nitrogen in the hydrazide attacking the carbonyl carbon on the amide to form compound 3. The negatively charged oxygen then abstracts two hydrogens from neighboring nitrogens in order for a molecule of water to be released to form compound 5. The nitrogen then performs an intramolecular attack on the carbonyl group to form the five-membered ring of compound 6. After another proton migration from the nitrogens to the oxygen, another water molecule is released to form the 1,2,4-triazole 8. Uses The synthesis of the 1,2,4-triazole has a wide range of biological functions. 1,2,4-triazoles have antibacterial, antifungal, antidepressant and hypoglycemic properties. 3-benzylsulfanyl derivates of the triazole also show slight to moderate antimycobacterial activity, but are considered moderately toxic. Problems and variations The Pellizzari reaction is limited in the number of substituents that can be on the ring, so other methods have been developed to incorporate three elements of diversity. Liquid-phase synthesis of 3-alkylamino-4,5-disubstituted-1,2,4-triazoles by PEG support has given moderate yields with excellent purity. 
In practice, the Pellizzari reaction requires high temperatures, long reaction times, and has an overall low yield. However, adding microwave irradiation shortens the reaction time and increases its yield. Related reactions Einhorn-Brunner reaction References Document 2::: Norelgestromin, or norelgestromine, sold under the brand names Evra and Ortho Evra among others, is a progestin medication which is used as a method of birth control for women. The medication is available in combination with an estrogen and is not available alone. It is used as a patch that is applied to the skin. Side effects of the combination of an estrogen and norelgestromin include menstrual irregularities, headaches, nausea, abdominal pain, breast tenderness, mood changes, and others. Norelgestromin is a progestin, or a synthetic progestogen, and hence is an agonist of the progesterone receptor, the biological target of progestogens like progesterone. It has very weak androgenic activity and no other important hormonal activity. Norelgestromin was introduced for medical use in 2002. It is sometimes referred to as a "third-generation" progestin. Norelgestromin is marketed widely throughout the world. It is available as a generic medication. Medical uses Norelgestromin is used in combination with ethinyl estradiol in contraceptive patches. These patches mediate their contraceptive effects by suppressing gonadotropin levels as well as by causing changes in the cervical mucus and endometrium that diminish the likelihood of pregnancy. Available forms Norelgestromin is available only as a transdermal contraceptive patch in combination with ethinyl estradiol. The Ortho Evra patch is a 20 cm, once-weekly adhesive that contains 6.0 mg norelgestromin and 0.6 mg ethinyl estradiol and delivers 200 μg/day norelgestromin and 35 μg/day ethinyl estradiol. Contraindications Side effects Norelgestromin has mostly been studied in combination with an estrogen, so the side effects of norelgestromin specifically or on its own have not been well-defined. Side effects associated with the combination of ethinylestradiol and norelgestromin as a transdermal patch in premenopausal women, with greater than or equal to 2.5% incidence over 6 to 13 menstrual cycles, include breast sym Document 3::: In probability theory and directional statistics, a wrapped normal distribution is a wrapped probability distribution that results from the "wrapping" of the normal distribution around the unit circle. It finds application in the theory of Brownian motion and is a solution to the heat equation for periodic boundary conditions. It is closely approximated by the von Mises distribution, which, due to its mathematical simplicity and tractability, is the most commonly used distribution in directional statistics. Definition The probability density function of the wrapped normal distribution is where μ and σ are the mean and standard deviation of the unwrapped distribution, respectively. Expressing the above density function in terms of the characteristic function of the normal distribution yields: where is the Jacobi theta function, given by and The wrapped normal distribution may also be expressed in terms of the Jacobi triple product: where and Moments In terms of the circular variable the circular moments of the wrapped normal distribution are the characteristic function of the normal distribution evaluated at integer arguments: where is some interval of length . 
The first moment is then the average value of z, also known as the mean resultant, or mean resultant vector: The mean angle is and the length of the mean resultant is The circular standard deviation, which is a useful measure of dispersion for the wrapped normal distribution and its close relative, the von Mises distribution is given by: Estimation of parameters A series of N measurements zn = e iθn drawn from a wrapped normal distribution may be used to estimate certain parameters of the distribution. The average of the series is defined as and its expectation value will be just the first moment: In other words, is an unbiased estimator of the first moment. If we assume that the mean μ lies in the interval [−π, π), then Arg  will be a (biased) estimator of the mean μ. Viewing the zn as a set of vectors in the complex plane, the 2 statistic is the square of the length of the averaged vector: and its expected value is: In other words, the statistic will be an unbiased estimator of e−σ2, and ln(1/Re2) will be a (biased) estimator of σ2 Entropy The information entropy of the wrapped normal distribution is defined as: where is any interval of length . Defining and , the Jacobi triple product representation for the wrapped normal is: where is the Euler function. The logarithm of the density of the wrapped normal distribution may be written: Using the series expansion for the logarithm: the logarithmic sums may be written as: so that the logarithm of density of the wrapped normal distribution may be written as: which is essentially a Fourier series in . Using the characteristic function representation for the wrapped normal distribution in the left side of the integral: the entropy may be written: which may be integrated to yield: External links Circular Values Math and Statistics with C++11, A C++11 infrastructure for circular values (angles, time-of-day, etc.) mathematics and statistics The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary application of the wrapped normal distribution as mentioned in the text? A. Estimation of parameters B. Theory of Brownian motion C. Information entropy calculation D. Circular standard deviation measurement Answer:
B. Theory of Brownian motion
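The displayed formulas in the wrapped normal distribution excerpt above did not survive extraction. As a reconstruction in one standard notation (the excerpt's own symbols may have differed), the density obtained by wrapping N(μ, σ²) around the unit circle, its circular moments, the mean resultant length, and the circular standard deviation are:

```latex
f_{WN}(\theta;\mu,\sigma)
  = \frac{1}{\sigma\sqrt{2\pi}} \sum_{k=-\infty}^{\infty}
    \exp\!\left[\frac{-(\theta-\mu+2\pi k)^{2}}{2\sigma^{2}}\right],
\qquad
\langle z^{n}\rangle = \mathbb{E}\!\left[e^{in\theta}\right]
  = e^{in\mu - n^{2}\sigma^{2}/2},
\qquad
\bar{R} = \bigl|\langle z\rangle\bigr| = e^{-\sigma^{2}/2},
\qquad
s = \sqrt{\ln\!\bigl(1/\bar{R}^{2}\bigr)} = \sigma.
```

The estimator remarks in the excerpt follow from the first of these moments: averaging z_n = e^{iθ_n} over a sample estimates e^{iμ−σ²/2}, so ln(1/Re²) serves as a (biased) estimator of σ².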
Relavent Documents: Document 0::: Clitoral vibrators are vibrators designed to externally stimulate the clitoris for sexual pleasure and orgasm. They are sex toys created for massaging the clitoris, and are not penetrating sex toys, although the shape of some vibrators allows for penetration and the stimulation of inner erogenous zones for extra sexual pleasure. Use Regardless of the design, the main function of the clitoral vibrator is to vibrate at varying speeds and intensities. Vibrators are normally driven by batteries and some of them can be used underwater. Discretion is often a useful feature for a sex toy, so clitoral vibrators that look like ordinary objects at first glance are also available. Clitoral vibrators have been designed to resemble lipsticks, mobile telephones, sponges, and many other everyday items. Types Most vibrators can be used for clitoral stimulation, but there are a few distinct types of vibrator available: Manual clitoral vibrators come in a wide variety of designs. Some wand vibrators (such as the Hitachi Magic Wand and the Doxy) are powered by a long cable to a wall socket, making them somewhat less convenient, and unsafe in a wet environment. However, they are generally powerful, offering more intense stimulation and better durability. There are also battery operated vibrators, as well as small ones that can be worn on a finger. Pressure Wave Vibrators The concept of pressure wave vibrators represents a recent innovation in the field of clitoral stimulation devices. Rather than the conventional method of direct contact, these vibrators utilise air waves to stimulate the clitoris, thereby offering a gentler yet more intense form of stimulation that is distinct from other vibrators. The technology was developed by Michael Lenke and his wife in Bavaria, following years of research and development. In 2014, they established the brand Womanizer. The technology has since been adopted by numerous manufacturers for use in a wide variety of sex toys. Hands-free clito Document 1::: NEFERT (Neck Flexion Rotation Test) is a medical examination procedure developed in 1999 by German neurootologist Claus-Frenz Claussen. Use The procedure serves for investigating intracorporal movement differences between head and body, especially at the atlanto-axial joint and the lower cervical spine column. The method can help diagnosing sprains of the neck, stiff necks, and whiplash. According to the National Center for Biotechnology Information within the National Institutes of Health, the procedure is "commonly used in clinical practice to evaluate patients with cervicogenic headache." Method The test consists of six movements, which can also be distinguished into four phases. The movements are performed by the patient in a standing position. (Phase I) The patient turns his head several times maximally within a time period of 20 seconds. (Phase II) The patient bows his head forward maximally. In a bowed position, the patient turns his head maximally from the left to the right and from the right to the left within a time period of 20 seconds. (Phase III) The patient bows his head backwards maximally. In a position bowed backwards, the patient turns his head for several times maximally from the left to the right and from the right to the left within a time period of 20 seconds. (Phase IV) After a total time period of 60 seconds, the patient returns into a straight position. 
If the test results are affected by unconscious shoulder movements of the patient, a second test course is performed, during which the examining person holds the patient's shoulders with their hands. The test results are recorded and graphically evaluated by a computer, for example with the help of Cranio-corpography. Literature Claus-Frenz Claussen, Burkard Franz: Contemporary and Practical Neurootology, Neurootologisches Forschungsinstitut der 4-G-Forschung e. V., Bad Kissingen 2006, References Document 2::: A reconfigurable manufacturing system (RMS) is a system invented in 1998 that is designed at the outset for rapid change in its structure, as well as its hardware and software components, in order to quickly adjust its production capacity and functionality within a part family in response to sudden market changes or intrinsic system change. A reconfigurable machine can have its features and parts machined. History The RMS, as well as one of its components—the reconfigurable machine tool (RMT)—were invented in 1998 in the Engineering Research Center for Reconfigurable Manufacturing Systems (ERC/RMS) at the University of Michigan College of Engineering. The term reconfigurability in manufacturing was likely coined by Kusiak and Lee. From 1996 to 2007, Yoram Koren received an NSF grant of $32.5 million to develop the RMS science base and its software and hardware tools. RMS technology is based on an approach that consists of key elements, the compilation of which is called the RMS science base. System operations The system is composed of stages: 10, 20, 30, etc. Each stage consists of identical machines, such as CNC milling machines. The system produces one product. The manufactured product moves on the horizontal conveyor. Then Gantry-10 grips the product and brings it to one of the CNC-10 machines. When CNC-10 finishes the processing, Gantry-10 moves it back to the conveyor. The conveyor moves the product to Gantry-20, which grips the product and loads it on the RMT-20, and so on. Inspection machines are placed at several stages and at the end of the manufacturing system. The product may move during its production in many production paths. In practice, there are small variations in the precision of identical machines, which create accumulated errors in the manufactured product; each path has its own "stream-of-variations" (a term coined by Y. Koren). Characteristics Ideal reconfigurable manufacturing systems, according to Professor Yoram Koren in 1995, possess six chara Document 3::: Binimetinib, sold under the brand name Mektovi, is an anti-cancer medication used to treat various cancers. Binimetinib is a selective inhibitor of MEK, a central kinase in the tumor-promoting MAPK pathway. Inappropriate activation of the pathway has been shown to occur in many cancers. In June 2018 it was approved by the FDA in combination with encorafenib for the treatment of patients with unresectable or metastatic BRAF V600E or V600K mutation-positive melanoma. In October 2023, it was approved by the FDA for treatment of NSCLC with a BRAF V600E mutation in combination with encorafenib. It was developed by Array Biopharma. Mechanism of action Binimetinib is an orally available inhibitor of mitogen-activated protein kinase kinase (MEK), or more specifically, a MAP2K inhibitor. MEK is part of the RAS pathway, which is involved in cell proliferation and survival. MEK is upregulated in many forms of cancer.
Binimetinib, uncompetitive with ATP, binds to and inhibits the activity of MEK1/2 kinase, which has been shown to regulate several key cellular activities including proliferation, survival, and angiogenesis. MEK1/2 are dual-specificity threonine/tyrosine kinases that play key roles in the activation of the RAS/RAF/MEK/ERK pathway and are often upregulated in a variety of tumor cell types. Inhibition of MEK1/2 prevents the activation of MEK1/2 dependent effector proteins and transcription factors, which may result in the inhibition of growth factor-mediated cell signaling. As demonstrated in preclinical studies, this may eventually lead to an inhibition of tumor cell proliferation and an inhibition in production of various inflammatory cytokines including interleukin-1, -6 and tumor necrosis factor. Development In 2015, it was in phase III clinical trials for ovarian cancer, BRAF mutant melanoma, and NRAS Q61 mutant melanoma. In December 2015, the company announced that the mutant-NRAS melanoma trial was successful. In the trial, those receiving binimetinib h The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main purpose of the 8b/10b encoding scheme in telecommunications? A. To ensure data is transmitted at higher speeds B. To achieve DC balance and bounded disparity in data transmission C. To compress the size of the data being transmitted D. To eliminate the need for error detection Answer:
B. To achieve DC balance and bounded disparity in data transmission
Relavent Documents: Document 0::: In computer graphics, view synthesis, or novel view synthesis, is a task which consists of generating images of a specific subject or scene from a specific point of view, when the only available information is pictures taken from different points of view. This task was only recently (late 2010s – early 2020s) tackled with significant success, mostly as a result of advances in machine learning. Notable successful methods are Neural Radiance Fields and 3D Gaussian Splatting. Applications of view synthesis are numerous, one of them being Free viewpoint television. Document 1::: The chameleon is a hypothetical scalar particle that couples to matter more weakly than gravity, postulated as a dark energy candidate. Due to a non-linear self-interaction, it has a variable effective mass which is an increasing function of the ambient energy density—as a result, the range of the force mediated by the particle is predicted to be very small in regions of high density (for example on Earth, where it is less than 1 mm) but much larger in low-density intergalactic regions: out in the cosmos chameleon models permit a range of up to several thousand parsecs. As a result of this variable mass, the hypothetical fifth force mediated by the chameleon is able to evade current constraints on equivalence principle violation derived from terrestrial experiments even if it couples to matter with a strength equal or greater than that of gravity. Although this property would allow the chameleon to drive the currently observed acceleration of the universe's expansion, it also makes it very difficult to test for experimentally. In 2021, physicists suggested that an excess reported at the dark matter detector experiment XENON1T rather than being a dark matter candidate could be a dark energy candidate: particularly, chameleon particles yet in July 2022 a new analysis by XENONnT discarded the excess. Hypothetical properties Chameleon particles were proposed in 2003 by Khoury and Weltman. In most theories, chameleons have a mass that scales as some power of the local energy density: , where Chameleons also couple to photons, allowing photons and chameleons to oscillate between each other in the presence of an external magnetic field. Chameleons can be confined in hollow containers because their mass increases rapidly as they penetrate the container wall, causing them to reflect. One strategy to search experimentally for chameleons is to direct photons into a cavity, confining the chameleons produced, and then to switch off the light source. Chameleons would be ind Document 2::: Thread Level Speculation (TLS), also known as Speculative Multi-threading, or Speculative Parallelization, is a technique to speculatively execute a section of computer code that is anticipated to be executed later in parallel with the normal execution on a separate independent thread. Such a speculative thread may need to make assumptions about the values of input variables. If these prove to be invalid, then the portions of the speculative thread that rely on these input variables will need to be discarded and squashed. If the assumptions are correct the program can complete in a shorter time provided the thread was able to be scheduled efficiently. Description TLS extracts threads from serial code and executes them speculatively in parallel with a safe thread. The speculative thread will need to be discarded or re-run if its presumptions on the input state prove to be invalid. 
It is a dynamic (runtime) parallelization technique that can uncover parallelism that static (compile-time) parallelization techniques may fail to exploit because at compile time thread independence cannot be guaranteed. For the technique to achieve the goal of reducing overall execute time, there must be available CPU resource that can be efficiently executed in parallel with the main safe thread. TLS assumes optimistically that a given portion of code (generally loops) can be safely executed in parallel. To do so, it divides the iteration space into chunks that are executed in parallel by different threads. A hardware or software monitor ensures that sequential semantics are kept (in other words, that the execution progresses as if the loop were executing sequentially). If a dependence violation appears, the speculative framework may choose to stop the entire parallel execution and restart it; to stop and restart the offending threads and all their successors, in order to be fed with correct data; or to stop exclusively the offending thread and its successors that have consumed Document 3::: Cloem is a company based in Cannes, France, which applies natural language processing (NLP) technologies to assist patent applicants in creating variants of patent claims, called "cloems". According to the company, these "computer-generated claims can be published to keep potential competitors from attempting to file adjacent patent claims." Technology According to Cloem, dictionaries, ontologies and proprietary claim-drafting algorithms are used to draft alternative claims based on a client's original set of claims. In particular, the original set of claims is subject to various permutations and linguistic manipulations "by considering alternative definitions for terms as well as “synonyms, hyponyms, hyperonyms, meronyms, holonyms, and antonyms.”" Possible uses Cloem can optionally publish one or more created texts, as electronic publications or as paper-printed publications. These can potentially serve – through a defensive publication – as prior art to prevent another party for obtaining a patent on the subject-matter at stake. In other words, after an initial patent filing, an "improvement" patent (adjacent invention) can be applied for by another party, such as a competitor. By publishing variants of a patent claim, the risk of adverse patenting may potentially be decreased (improvement inventions may no longer be patentable). Cloems may also be potentially patentable. One of the issues of patentability, however, is that only a natural person can be a listed as an inventor on a patent. Since cloems are produced by a computer based on a person's input, it is not clear if the computer or the person is the inventor. The inventorship of Cloem texts is an open question. References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What does the term "railworthiness" refer to in the context of railway vehicles? A. The aesthetic appeal of railway vehicles B. The ability of railway vehicles to meet safety and operating standards C. The efficiency of fuel consumption in locomotives D. The speed capabilities of passenger trains Answer:
B. The ability of railway vehicles to meet safety and operating standards
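The thread-level speculation passage above describes loop chunks executed speculatively in parallel while a monitor checks that sequential semantics hold, squashing and re-running any chunk whose assumptions are violated. The sketch below models that behaviour in plain Python for a loop with a loop-carried dependence; it is a software illustration only, not the hardware mechanism the passage describes, and all names in it are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor


def tls_prefix_sum(data, chunks=4):
    """Software model of thread-level speculation for the loop
    `for i in 1..n-1: data[i] = data[i] + data[i-1]`, which carries a
    true dependence between iterations."""
    n = len(data)
    bounds = [(k * n // chunks, (k + 1) * n // chunks) for k in range(chunks)]

    def run_chunk(lo, hi):
        # Execute iterations lo..hi-1 speculatively against a private buffer,
        # logging every value read from shared state.
        local, read_log = {}, {}

        def load(i):
            if i in local:
                return local[i]
            read_log[i] = data[i]
            return data[i]

        for i in range(max(lo, 1), hi):
            local[i] = load(i) + load(i - 1)
        return lo, hi, read_log, local

    # Speculative phase: all chunks run in parallel against the original data.
    with ThreadPoolExecutor(max_workers=chunks) as pool:
        speculative = list(pool.map(lambda b: run_chunk(*b), bounds))

    # Commit phase, in program order: the "monitor" squashes and re-executes
    # any chunk whose logged reads no longer match the committed state.
    for lo, hi, read_log, local in speculative:
        if any(data[i] != v for i, v in read_log.items()):
            for i in range(max(lo, 1), hi):      # squash: redo sequentially
                data[i] = data[i] + data[i - 1]
        else:
            for i, v in local.items():           # commit speculative writes
                data[i] = v
    return data


assert tls_prefix_sum(list(range(8))) == [0, 1, 3, 6, 10, 15, 21, 28]
```

Here the later chunks are squashed because their logged reads are invalidated by earlier commits, so the final result still matches the sequential loop.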
Relavent Documents: Document 0::: HD 221420 (HR 8935; Gliese 4340) is a likely binary star system in the southern circumpolar constellation Octans. It has an apparent magnitude of 5.81, allowing it to be faintly seen with the naked eye. The object is relatively close at a distance of 102 light years but is receding with a heliocentric radial velocity of . HD 221420 has a stellar classification of G2 IV-V, indicating a solar analogue with a luminosity class intermediate between a subgiant and a main sequence star. The object is also extremely chromospherically inactive. It has a comparable mass to the Sun and a diameter of . It shines with a luminosity of from its photosphere at an effective temperature of , giving a yellow glow. HD 221420 is younger than the Sun at 3.65 billion years. Despite this, the star is already beginning to evolve off the main sequence. Like most planetary hosts, HD 221420 has a metallicity over twice of that of the Sun and spins modestly with a projected rotational velocity . There is a mid-M-dwarf star with a similar proper motion and parallax to HD 221420, which is likely gravitationally bound to it. The two stars are separated by 698 arcseconds, corresponding to a distance of . Planetary system In a 2019 doppler spectroscopy survey, an exoplanet was discovered orbiting the star. The planet was originally thought to be a super Jupiter, having a minimum mass of . However, later observations using Hipparcos and Gaia astrometry found it to be a brown dwarf with a high-inclination orbit, revealing a true mass of . References Document 1::: In numerical analysis, Gauss–Legendre quadrature is a form of Gaussian quadrature for approximating the definite integral of a function. For integrating over the interval , the rule takes the form: where n is the number of sample points used, wi are quadrature weights, and xi are the roots of the nth Legendre polynomial. This choice of quadrature weights wi and quadrature nodes xi is the unique choice that allows the quadrature rule to integrate degree polynomials exactly. Many algorithms have been developed for computing Gauss–Legendre quadrature rules. The Golub–Welsch algorithm presented in 1969 reduces the computation of the nodes and weights to an eigenvalue problem which is solved by the QR algorithm. This algorithm was popular, but significantly more efficient algorithms exist. Algorithms based on the Newton–Raphson method are able to compute quadrature rules for significantly larger problem sizes. In 2014, Ignace Bogaert presented explicit asymptotic formulas for the Gauss–Legendre quadrature weights and nodes, which are accurate to within double-precision machine epsilon for any choice of n ≥ 21. This allows for computation of nodes and weights for values of n exceeding one billion. History Carl Friedrich Gauss was the first to derive the Gauss–Legendre quadrature rule, doing so by a calculation with continued fractions in 1814. He calculated the nodes and weights to 16 digits up to order n=7 by hand. Carl Gustav Jacob Jacobi discovered the connection between the quadrature rule and the orthogonal family of Legendre polynomials. As there is no closed-form formula for the quadrature weights and nodes, for many decades people were only able to hand-compute them for small n, and computing the quadrature had to be done by referencing tables containing the weight and node values. By 1942 these values were only known for up to n=16. 
Definition For integrating f over with Gauss–Legendre quadrature, the associated orthogonal polynomials are Legendre p Document 2::: XLA (Accelerated Linear Algebra) is an open-source compiler for machine learning developed by the OpenXLA project. XLA is designed to improve the performance of machine learning models by optimizing the computation graphs at a lower level, making it particularly useful for large-scale computations and high-performance machine learning models. Key features of XLA include: Compilation of Computation Graphs: Compiles computation graphs into efficient machine code. Optimization Techniques: Applies operation fusion, memory optimization, and other techniques. Hardware Support: Optimizes models for various hardware, including CPUs, GPUs, and NPUs. Improved Model Execution Time: Aims to reduce machine learning models' execution time for both training and inference. Seamless Integration: Can be used with existing machine learning code with minimal changes. XLA represents a significant step in optimizing machine learning models, providing developers with tools to enhance computational efficiency and performance. Supported target devices x86-64 ARM64 NVIDIA GPU AMD GPU Intel GPU Apple GPU Google TPU AWS Trainium, Inferentia Cerebras Graphcore IPU Document 3::: Thulium is a chemical element; it has symbol Tm and atomic number 69. It is the thirteenth element in the lanthanide series of metals. It is the second-least abundant lanthanide in the Earth's crust, after radioactively unstable promethium. It is an easily workable metal with a bright silvery-gray luster. It is fairly soft and slowly tarnishes in air. Despite its high price and rarity, thulium is used as a dopant in solid-state lasers, and as the radiation source in some portable X-ray devices. It has no significant biological role and is not particularly toxic. In 1879, the Swedish chemist Per Teodor Cleve separated two previously unknown components, which he called holmia and thulia, from the rare-earth mineral erbia; these were the oxides of holmium and thulium, respectively. His example of thulium oxide contained impurities of ytterbium oxide. A relatively pure sample of thulium oxide was first obtained in 1911. The metal itself was first obtained in 1936 by Wilhelm Klemm and Heinrich Bommer. Like the other lanthanides, its most common oxidation state is +3, seen in its oxide, halides and other compounds. In aqueous solution, like compounds of other late lanthanides, soluble thulium compounds form coordination complexes with nine water molecules. Properties Physical properties Pure thulium metal has a bright, silvery luster, which tarnishes on exposure to air. The metal can be cut with a knife, as it has a Mohs hardness of 2 to 3; it is malleable and ductile. Thulium is ferromagnetic below 32K, antiferromagnetic between 32 and 56K, and paramagnetic above 56K. Thulium has two major allotropes: the tetragonal α-Tm and the more stable hexagonal β-Tm. Chemical properties Thulium tarnishes slowly in air and burns readily at 150°C to form thulium(III) oxide: Thulium is quite electropositive and reacts slowly with cold water and quite quickly with hot water to form thulium hydroxide: Thulium reacts with all the halogens. Reactions are slow at room temperature, The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the apparent magnitude of the star HD 221420, which allows it to be faintly seen with the naked eye? A. 4.50 B. 5.81 C. 6.30 D. 
7.25 Answer:
B. 5.81
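As a practical illustration of the Gauss–Legendre rule described above, i.e. the approximation of an integral over [-1, 1] by a weighted sum of f evaluated at the roots of the nth Legendre polynomial, the sketch below uses NumPy's built-in leggauss to generate the nodes and weights rather than the Golub–Welsch or Bogaert methods mentioned in the excerpt; the helper name and test functions are chosen only for the example.

```python
import numpy as np


def gauss_legendre(f, a, b, n):
    # Nodes x_i (roots of the n-th Legendre polynomial) and weights w_i on [-1, 1].
    x, w = np.polynomial.legendre.leggauss(n)
    # Affine change of variables from [-1, 1] to [a, b].
    t = 0.5 * (b - a) * x + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(w * f(t))


# An n-point rule integrates polynomials of degree up to 2n - 1 exactly,
# so 5 points handle t**8 (exact value 2/9) and approximate exp very well.
print(gauss_legendre(lambda t: t**8, -1.0, 1.0, 5))   # ~0.2222...
print(gauss_legendre(np.exp, 0.0, 1.0, 5))            # ~1.7182818 (e - 1)
```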
Relavent Documents: Document 0::: Maharashtra Advanced Research and Vigilance for Enhanced Law Enforcement (MARVEL) is an artificial intelligence (AI) system implemented by the Maharashtra Police. It is noted for being the first state-level police AI system in India. Approved in 2024, MARVEL aims to integrate AI technologies into law enforcement to enhance crime-solving capabilities and improve predictive policing. History The Maharashtra state cabinet approved the creation of MARVEL in March 2024, just before the announcement of the Model Code of Conduct for the Lok Sabha elections. The project was allocated an initial budget of ₹23 crore. Leadership MARVEL is overseen by officials from the Maharashtra Police and the Indian Institute of Management (IIM) Nagpur. The Superintendent of Police, Nagpur (Rural), Harssh Poddar, serves as the ex-officio Chief Executive Officer (CEO) of MARVEL. Dr. Bhimaraya Metri, the Director of IIM Nagpur, acts as an ex-officio director of the company. Structure and Partnerships MARVEL is established as a Special Purpose Vehicle (SPV) and operates through a partnership between: Government of Maharashtra Indian Institute of Management Nagpur Pinaka Technologies Private Limited The company is registered under the Companies Act 2013 and has its office located at the Indian Institute of Management in Nagpur. Funding The Government of Maharashtra has committed to providing 100% share capital to MARVEL for the first five years, amounting to ₹4.2 crore annually. Related Initiatives In addition to MARVEL, the Maharashtra government has approved other technology-driven law enforcement initiatives: A ₹76 crore semi-automated processing project for the speedy disposal of cybercrime cases Establishment of a ₹42 crore Computer Forensic Science Centre of Excellence Document 1::: In probability theory, the central limit theorem (CLT) states that, in many situations, when independent and identically distributed random variables are added, their properly normalized sum tends toward a normal distribution. This article gives two illustrations of this theorem. Both involve the sum of independent and identically-distributed random variables and show how the probability distribution of the sum approaches the normal distribution as the number of terms in the sum increases. The first illustration involves a continuous probability distribution, for which the random variables have a probability density function. The second illustration, for which most of the computation can be done by hand, involves a discrete probability distribution, which is characterized by a probability mass function. Illustration of the continuous case The density of the sum of two independent real-valued random variables equals the convolution of the density functions of the original variables. Thus, the density of the sum of m+n terms of a sequence of independent identically distributed variables equals the convolution of the densities of the sums of m terms and of n term. In particular, the density of the sum of n+1 terms equals the convolution of the density of the sum of n terms with the original density (the "sum" of 1 term). A probability density function is shown in the first figure below. Then the densities of the sums of two, three, and four independent identically distributed variables, each having the original density, are shown in the following figures. If the original density is a piecewise polynomial, as it is in the example, then so are the sum densities, of increasingly higher degree. 
Although the original density is far from normal, the density of the sum of just a few variables with that density is much smoother and has some of the qualitative features of the normal density. The convolutions were computed via the discrete Fourier transform. A list of value Document 2::: In mathematics, non-commutative conditional expectation is a generalization of the notion of conditional expectation in classical probability. The space of essentially bounded measurable functions on a -finite measure space is the canonical example of a commutative von Neumann algebra. For this reason, the theory of von Neumann algebras is sometimes referred to as noncommutative measure theory. The intimate connections of probability theory with measure theory suggest that one may be able to extend the classical ideas in probability to a noncommutative setting by studying those ideas on general von Neumann algebras. For von Neumann algebras with a faithful normal tracial state, for example finite von Neumann algebras, the notion of conditional expectation is especially useful. Formal definition Let be von Neumann algebras ( and may be general C*-algebras as well), a positive, linear mapping of onto is said to be a conditional expectation (of onto ) when and if and . Applications Sakai's theorem Let be a C*-subalgebra of the C*-algebra an idempotent linear mapping of onto such that acting on the universal representation of . Then extends uniquely to an ultraweakly continuous idempotent linear mapping of , the weak-operator closure of , onto , the weak-operator closure of . In the above setting, a result first proved by Tomiyama may be formulated in the following manner. Theorem. Let be as described above. Then is a conditional expectation from onto and is a conditional expectation from onto . With the aid of Tomiyama's theorem an elegant proof of Sakai's result on the characterization of those C*-algebras that are *-isomorphic to von Neumann algebras may be given. Notes References Kadison, R. V., Non-commutative Conditional Expectations and their Applications, Contemporary Mathematics, Vol. 365 (2004), pp. 143–179. Document 3::: Reed mats are handmade mats of plaited reed or other plant material. East Asia In Japan, a traditional reed mat is the tatami (畳). Tatami are covered with a weft-faced weave of (common rush), on a warp of hemp or weaker cotton. There are four warps per weft shed, two at each end (or sometimes two per shed, one at each end, to cut costs). The (core) is traditionally made from sewn-together rice straw, but contemporary tatami sometimes have compressed wood chip boards or extruded polystyrene foam in their cores, instead or as well. The long sides are usually with brocade or plain cloth, although some tatami have no edging. Southeast Asia In the Philippines, woven reed mats are called banig. They are used as sleeping mats or floor mats, and were also historically used as sails. They come in many different weaving styles and typically have colorful geometric patterns unique to the ethnic group that created them. They are made from buri palm leaves, pandan leaves, rattan, or various kinds of native reeds known by local names like tikog, sesed (Fimbristykis miliacea), rono, or bamban. In Thailand and Cambodia, the mats are produced by plaiting reeds, strips of palm leaf, or some other easily available local plant. The supple mats made by this process of weaving without a loom are widely used in Thai homes. 
These mats are also now being made into shopping bags, place mats, and decorative wall hangings. One popular kind of Thai mat is made from a kind of reed known as Kachud, which grows in the southern marshes. After the reeds are harvested, they are steeped in mud, which toughens them and prevents them from becoming brittle. They are then dried in the sun for a time and pounded flat, after which they are ready to be dyed and woven into mats of various sizes and patterns. Other mats are produced in different parts of Thailand, most notably in the eastern province of Chanthaburi. Durable as well as attractive, they are plaited entirely by hand with an intricacy th The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is a key feature of von Neumann algebras with a faithful normal tracial state? A. They are always commutative. B. They do not support conditional expectations. C. They are particularly useful for defining conditional expectations. D. They can only be defined in classical probability. Answer:
C. They are particularly useful for defining conditional expectations.
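The central limit theorem illustration above builds the density of a sum of i.i.d. variables by repeated convolution of the original density, with the convolutions computed via the discrete Fourier transform. A minimal grid-based sketch of that construction, with an assumed U(0, 1) starting density (the excerpt's piecewise-polynomial example is not reproduced here), is:

```python
import numpy as np

dx = 0.001
x = np.arange(0.0, 1.0, dx)
f = np.ones_like(x)                     # grid density of U(0, 1)


def fft_convolve(a, b, step):
    """Linear convolution of two grid densities via the FFT, zero-padded so
    the circular convolution equals the linear one, then Riemann-normalised."""
    n = len(a) + len(b) - 1
    out = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)
    return out * step


g = f
for _ in range(3):                      # density of the sum of four U(0, 1)
    g = fft_convolve(g, f, dx)

grid = np.arange(len(g)) * dx           # the sum is supported on [0, 4]
mean = np.sum(grid * g) * dx
var = np.sum((grid - mean) ** 2 * g) * dx
print(mean, var)                        # roughly 2.0 and 4/12, as expected
```

After only three convolutions the resulting density is already smooth and bell-shaped, which is the qualitative point the excerpt makes.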
Relavent Documents: Document 0::: Amanita silvicola, also known as the woodland amanita or Kauffman's forest amanita, is a species of Amanita found in coniferous woods of the Pacific Northwest and California. A. silvicola is a small to medium-sized white mushroom, distinguishable from most other white Amanita species by its short stalk. Its cap ranges from 5–12 cm and is pure white, convex to flat, often with an incurved margin. The cap is initially rounded, covered in a "wooly" outer veil that later leaves soft patchy remnants across its surface as it flattens. The stem is patched with volva remains, and is slightly larger at its base. Gills are white, close and crowded, and free, just reaching the stem, or narrowly adnate. The flesh of A. silvicola does not change colour when bruised or cut, but its cap may discolour with age. The edibility of A. silvicola is uncertain, but, due to its close resemblance to two poisonous mushrooms in the Amanita genus, A. pantherina and A. smithiana, experimentation with this mushroom is strongly advised against. Description The cap of A. silvicola is 5 to 12 cm wide, dry and pure white in color. In advanced age and with decay, the cap may discolour, developing, as observed by Kauffman, "bright rose-colored spots and streaks". Younger fruiting bodies (mushrooms) are covered by a fluffy continuous universal veil, which breaks up irregularly across its slightly sticky surface into soft powdery patches instead of firm warts. The flesh of the cap thins considerably at its margin, which remains incurved into maturity. The gills are white and crowded together and have a free to narrowly adnate attachment, though they sometimes reach towards the stipe in a decurrent tooth. The gills are medium broad, 6–7 mm, with cottony edges, and in maturity they project below the margin of the cap. A. silvicola spores are 8.0–10.0 μm by 4.2–6.0 μm; they are smooth, amyloid, ellipsoid and colourless, leaving a white spore print. The stem is 50 to 120 mm long, 12 to 25 mm thick and stout, tapering slightly as it reaches the cap. It sometimes has a slight ring on its cap. A. silvicola rarely roots; it has a basal marginate bulb (distinctly separate from the stem) at its base, about 3–4 cm thick with wooly veil remnants on its margin. The flesh of A. silvicola is white and does not change color when cut. Habitat and distribution Amanita silvicola is found in the Pacific Northwest of North America, California, and more rarely in the Sierra Nevada mountains. The IUCN Red List has assessed it as Least Concern (LC), as the population is stable and "locally common" in the Pacific Northwest and California. A. silvicola is a terrestrial species; it can be found as a solitary mushroom or in small groups in coniferous woods, especially under Western Hemlock. It has a preference for areas of high rainfall. Taxonomy and Etymology The species was first described and named by Kauffman in 1925, who had collected the type specimen in Mt. Hood, Oregon on September 30, 1922. The species epithet silvicola is derived from silva, Latin for "wood" or "forest", and -cola, Latin suffix for "dweller of" or "inhabiting", referring to its habitat. External links Document 1::: A virtually safe dose (VSD) may be determined for those carcinogens not assumed to have a threshold. Virtually safe doses are calculated by regulatory agencies to represent the level of exposure to such carcinogenic agents at which an excess of cancers greater than that level accepted by society is not expected.
Document 2::: In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically checking all possible candidates for whether or not each candidate satisfies the problem's statement. A brute-force algorithm that finds the divisors of a natural number n would enumerate all integers from 1 to n, and check whether each of them divides n without remainder. A brute-force approach for the eight queens puzzle would examine all possible arrangements of 8 pieces on the 64-square chessboard and for each arrangement, check whether each (queen) piece can attack any other. While a brute-force search is simple to implement and will always find a solution if it exists, implementation costs are proportional to the number of candidate solutionswhich in many practical problems tends to grow very quickly as the size of the problem increases (§Combinatorial explosion). Therefore, brute-force search is typically used when the problem size is limited, or when there are problem-specific heuristics that can be used to reduce the set of candidate solutions to a manageable size. The method is also used when the simplicity of implementation is more important than processing speed. This is the case, for example, in critical applications where any errors in the algorithm would have very serious consequences or when using a computer to prove a mathematical theorem. Brute-force search is also useful as a baseline method when benchmarking other algorithms or metaheuristics. Indeed, brute-force search can be viewed as the simplest metaheuristic. Brute force search should not be confused with backtracking, where large sets of solutions can be discarded without being explicitly enumerated (as in the textbook computer solution to the eight queens problem above). The brute-force method for finding an item in a tablenamely, check all entries of the latter, sequentiallyis called linear se Document 3::: A snake-arm robot is a slender hyper-redundant manipulator. The high number of degrees of freedom allows the arm to “snake” along a path or around an obstacle – hence the name “snake-arm”. Definition Snake-arm robots are also described as continuum robots and elephant's trunk robots although these descriptions are restrictive in their definitions and cannot be applied to all snake-arm robots. A continuum robot is a continuously curving manipulator, much like the arm of an octopus. An elephant's trunk robot is a good descriptor of a continuum robot. This has generally been associated with whole arm manipulation – where the entire arm is used to grasp and manipulate objects, in the same way that an elephant would pick up a ball. This is an emerging field and as such there is no agreement on the best term for this class of robot. Snake-arm robots are often used in association with another device. The function of the other device is to introduce the snake-arm into the confined space. Examples of possible introduction axes include mounting a snake-arm on a remote controlled vehicle or an industrial robot or designing a bespoke a linear actuator. In this case the shape of the arm is coordinated with the linear movement of the introduction axis enabling the arm to follow a path into confined spaces. Other features which are usually (but not always) associated with snake-arm robots: Continuous diameter along the length of the arm Self-supporting Either tendon-driven or pneumatically controlled in most cases. 
A snake-arm robot is not to be confused with a snakebot which mimics the biomorphic motion of a snake in order to slither along the ground. Applications The ability to reach into confined spaces lends itself to many applications involving access problems. The list below is not intended to be an exhaustive list of possibilities but merely an indication of where these robots are being used or developed for use. Industry Nuclear Decommissioning Repair The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary habitat where Amanita silvicola is typically found? A. Desert regions B. Coniferous woods C. Grasslands D. Urban areas Answer:
B. Coniferous woods
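The brute-force search excerpt above gives its divisor-finding example only in words; a direct generate-and-test rendering of it is below. The function name and the sample value 36 are chosen purely for illustration.

```python
def divisors(n: int) -> list[int]:
    # Generate and test: try every candidate from 1 to n and keep those that
    # divide n without remainder, exactly as the excerpt describes.
    return [d for d in range(1, n + 1) if n % d == 0]


print(divisors(36))   # [1, 2, 3, 4, 6, 9, 12, 18, 36]
```

As the excerpt notes, this scales poorly, which is why brute force is reserved for small problem sizes or used as a baseline against smarter algorithms.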
Relavent Documents: Document 0::: Marina Barrage is a dam in southern Singapore built at the confluence of five rivers, across the Marina Channel between Marina East and Marina South. First conceptualised in 1987 by then prime minister Lee Kuan Yew to help achieve greater self-sufficiency for the country's water needs, the barrage began construction on 22 March 2005, and was officially opened on 31 October 2008 as Singapore's fifteenth reservoir, the Marina Reservoir. It provides water storage, flood control and recreation. It won a Superior Achievement Award from the American Academy of Environmental Engineers in 2009. It also turned the previously salt water Marina Bay into fresh water for the first time in its history. Purpose The S$3 billion project, with $226 million for the structure itself, turned Marina Bay and Kallang Basin into a new downtown freshwater Marina Reservoir. It provides water supply, flood control and a new lifestyle attraction. After its opening, the Marina Barrage quickly became a tourist attraction not long after. By keeping out seawater, the barrage formed Singapore's 15th reservoir and first reservoir in the city. Marina Reservoir, together with the future Punggol and Serangoon reservoirs, increased Singapore's water catchment areas by one-sixth of Singapore's total land area. Marina Barrage also acts as a tidal barrier to keep seawater out, helping to alleviate flooding in high-risk low-lying areas of the downtown districts such as Chinatown, Jalan Besar and Geylang. When it rains heavily during low-tide, the barrage's crest gates will be lowered to release excess water from the coastal reservoir into the sea. If heavy rain falls during high-tide, the crest gates remain closed and giant drainage pumps are activated to pump excess water out to sea. As the water in the Marina Basin is unaffected by the tides, the water level will be kept constant, making it ideal for all kinds of recreational activities such as boating, windsurfing, kayaking and dragonboating. Impa Document 1::: In meteorology, an air mass is a volume of air defined by its temperature and humidity. Air masses cover many hundreds or thousands of square miles, and adapt to the characteristics of the surface below them. They are classified according to latitude and their continental or maritime source regions. Colder air masses are termed polar or arctic, while warmer air masses are deemed tropical. Continental and superior air masses are dry, while maritime and monsoon air masses are moist. Weather fronts separate air masses with different density (temperature or moisture) characteristics. Once an air mass moves away from its source region, underlying vegetation and water bodies can quickly modify its character. Classification schemes tackle an air mass's characteristics, as well as modification. Classification and notation The Bergeron classification is the most widely accepted form of air mass classification, though others have produced more refined versions of this scheme over different regions of the globe. Air mass classification involves three letters. The first letter describes its moisture properties – "c" represents continental air masses (dry), and "m" represents maritime air masses (moist). Its source region follows: "T" stands for Tropical, "P" stands for Polar, "A" stands for Arctic or Antarctic, "M" stands for monsoon, "E" stands for Equatorial, and "S" stands for adiabatically drying and warming air formed by significant downward motion in the atmosphere. 
For instance, an air mass originating over the desert southwest of the United States in summer may be designated "cT". An air mass originating over northern Siberia in winter may be indicated as "cA". The stability of an air mass may be shown using a third letter, either "k" (air mass colder than the surface below it) or "w" (air mass warmer than the surface below it). An example of this might be a polar air mass blowing over the Gulf Stream, denoted as "cPk". Occasionally, one may also encounter the Document 2::: In statistics, several scatterplot smoothing methods are available to fit a function through the points of a scatterplot to best represent the relationship between the variables. Scatterplots may be smoothed by fitting a line to the data points in a diagram. This line attempts to display the non-random component of the association between the variables in a 2D scatter plot. Smoothing attempts to separate the non-random behaviour in the data from the random fluctuations, removing or reducing these fluctuations, and allows prediction of the response based value of the explanatory variable. Smoothing is normally accomplished by using any one of the techniques mentioned below. A straight line (simple linear regression) A quadratic or a polynomial curve Local regression Smoothing splines The smoothing curve is chosen so as to provide the best fit in some sense, often defined as the fit that results in the minimum sum of the squared errors (a least squares criterion). See also Additive model Generalized additive model Smoothing References Document 3::: Grafana is a multi-platform open source analytics and interactive visualization web application. It can produce charts, graphs, and alerts for the web when connected to supported data sources. There is also a licensed Grafana Enterprise version with additional capabilities, which is sold as a self-hosted installation or through an account on the Grafana Labs cloud service. It is expandable through a plug-in system. Complex monitoring dashboards can be built by end users, with the aid of interactive query builders. The product is divided into a front end and back end, written in TypeScript and Go, respectively. As a visualization tool, Grafana can be used as a component in monitoring stacks, often in combination with time series databases such as InfluxDB, Prometheus and Graphite; monitoring platforms such as Sensu, Icinga, Checkmk, Zabbix, Netdata, and PRTG; SIEMs such as Elasticsearch, OpenSearch, and Splunk; and other data sources. The Grafana user interface was originally based on version 3 of Kibana. History Grafana was first released in 2014 by Torkel Ödegaard as an offshoot of a project at Orbitz. It targeted time series databases such as InfluxDB, OpenTSDB, and Prometheus, but evolved to support relational databases such as MySQL/MariaDB, PostgreSQL and Microsoft SQL Server. In 2019, Grafana Labs secured $24 million in Series A funding. In the 2020 Series B funding round it obtained $50 million. In the 2021 Labs Series C funding round, Grafana secured $220 million. Grafana Labs acquired Kausal in 2018, k6 and Amixr in 2021, and Asserts.ai in 2023. Adoption Grafana is used in Wikimedia's infrastructure. In 2017, Grafana had over 1000 paying customers, including Bloomberg, JP Morgan Chase, and eBay. Licensing Previously, Grafana was licensed with an Apache License 2.0 license and used a CLA based on the Harmony Contributor Agreement. Since 2021, Grafana has been licensed under an AGPLv3 license. 
Contributors to Grafana need to sign a Contributor Licen The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the term used in Title 18, Section 1030 of the United States Code to refer to computers that are protected under the Computer Fraud and Abuse Act? A. Federal interest computers B. Protected computers C. Government computers D. Financial institution computers Answer:
B. Protected computers
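Referring back to the scatterplot smoothing passage above, here is a minimal Python sketch of the simplest option it lists, a straight line fitted by least squares, alongside a crude moving-average smoother. The data are invented purely for illustration and the routine is not tied to any particular method or library named in the text.

import numpy as np

# Invented example data: a noisy linear relationship between x and y.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 2.0, size=x.size)

# Simple linear regression: slope and intercept chosen to minimise the sum
# of squared errors, the least-squares criterion described in the passage.
slope, intercept = np.polyfit(x, y, deg=1)
fitted = slope * x + intercept

# A crude alternative smoother: a centred moving average that damps the
# random fluctuations while keeping the non-random trend.
window = 5
smooth = np.convolve(y, np.ones(window) / window, mode="same")

print(f"least-squares fit: y ~ {slope:.2f} x + {intercept:.2f}")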
Relavent Documents: Document 0::: In finance, the Treynor reward-to-volatility model (sometimes called the reward-to-volatility ratio or Treynor measure), named after American economist Jack L. Treynor, is a measurement of the returns earned in excess of that which could have been earned on an investment that has no risk that can be diversified (e.g., Treasury bills or a completely diversified portfolio), per unit of market risk assumed. The Treynor ratio relates excess return over the risk-free rate to the additional risk taken; however, systematic risk is used instead of total risk. The higher the Treynor ratio, the better the performance of the portfolio under analysis. Formula where: Treynor ratio, portfolio i'''s return, risk free rate [[Beta coefficient|portfolio i's beta]] Example Taking the equation detailed above, let us assume that the expected portfolio return is 20%, the risk free rate is 5%, and the beta of the portfolio is 1.5. Substituting these values, we get the following Limitations Like the Sharpe ratio, the Treynor ratio (T'') does not quantify the value added, if any, of active portfolio management. It is a ranking criterion only. A ranking of portfolios based on the Treynor Ratio is only useful if the portfolios under consideration are sub-portfolios of a broader, fully diversified portfolio. If this is not the case, portfolios with identical systematic risk, but different total risk, will be rated the same. But the portfolio with a higher total risk is less diversified and therefore has a higher unsystematic risk which is not priced in the market. An alternative method of ranking portfolio management is Jensen's alpha, which quantifies the added return as the excess return above the security market line in the capital asset pricing model. As these two methods both determine rankings based on systematic risk alone, they will rank portfolios identically. See also Bias ratio (finance) Hansen-Jagannathan bound Jensen's alpha Modern portfolio theory Modigliani r Document 1::: The Center for the Blue Economy (CBE) is a research center managed by the Middlebury Institute of International Studies (MIIS) in Monterey, California. The CBE research focuses on the Blue Economy. The CBE was founded in 2011. It received the initial fund of $1 million from Robin and Deborah Hicks, the parents of the Middlebury College students, in their capacities as trustees of the Loker Foundation. Professor Jason Scorse, who is also the Head/Chair of International Environmental Policy (IEP) program at MIIS, is the Director for the Center for the Blue Economy. The CBE was created to address the issues related to "Blue Economy" in the ocean and coastal areas. Research focus The research at the CBE mainly focuses on determining the factors that ensure sustainability and economics of oceans and coastal regions. The research at the center provides open-access data to different stakeholders, including businesses, governments, nonprofits that could help them to make decisions for managing ocean and coastal resources. The research also focuses on climate change adaptation in coastal areas, governing the environmental issues and providing possible solutions considering the ocean and coastal issues. The center collaborates with various local and national organizations and they worked on a wide range of topics related to ocean and coastal areas. The center also offers specialization course of Ocean and Coastal Resource Management(OCRM) for the IEP program. 
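Returning to the Treynor ratio passage above, the formula itself appears to have been lost in extraction; assuming the standard textbook form, the ratio is the portfolio's excess return over the risk-free rate divided by the portfolio's beta. The short Python sketch below applies that definition to the example figures quoted in the passage (20% expected return, 5% risk-free rate, beta of 1.5).

def treynor_ratio(portfolio_return: float, risk_free_rate: float, beta: float) -> float:
    """Excess return over the risk-free rate per unit of systematic risk (beta)."""
    return (portfolio_return - risk_free_rate) / beta

# Figures from the example in the passage: (0.20 - 0.05) / 1.5 = 0.10.
print(treynor_ratio(0.20, 0.05, 1.5))  # -> approximately 0.10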
CBE Advisory Council The CBE Advisory Council has experts from different backgrounds and experiences including marine science, policy and business. The team aims to make a change and shape the future of blue economy. The organogram of the CBE Advisory Council is shown in Figure. Grants and funding The CBE 2018 received grants from three major sources:71% federal government, 28% state and local agencies, and 1% other sources. Speaker series The CBE hosts the Speakers Series which are events where professionals fr Document 2::: Recurrence period density entropy (RPDE) is a method, in the fields of dynamical systems, stochastic processes, and time series analysis, for determining the periodicity, or repetitiveness of a signal. Overview Recurrence period density entropy is useful for characterising the extent to which a time series repeats the same sequence, and is therefore similar to linear autocorrelation and time delayed mutual information, except that it measures repetitiveness in the phase space of the system, and is thus a more reliable measure based upon the dynamics of the underlying system that generated the signal. It has the advantage that it does not require the assumptions of linearity, Gaussianity or dynamical determinism. It has been successfully used to detect abnormalities in biomedical contexts such as speech signal. The RPDE value is a scalar in the range zero to one. For purely periodic signals, , whereas for purely i.i.d., uniform white noise, . Method description The RPDE method first requires the embedding of a time series in phase space, which, according to stochastic extensions to Taken's embedding theorems, can be carried out by forming time-delayed vectors: for each value xn in the time series, where M is the embedding dimension, and τ is the embedding delay. These parameters are obtained by systematic search for the optimal set (due to lack of practical embedding parameter techniques for stochastic systems) (Stark et al. 2003). Next, around each point in the phase space, an -neighbourhood (an m-dimensional ball with this radius) is formed, and every time the time series returns to this ball, after having left it, the time difference T between successive returns is recorded in a histogram. This histogram is normalised to sum to unity, to form an estimate of the recurrence period density function P(T). The normalised entropy of this density: is the RPDE value, where is the largest recurrence value (typically on the order of 1000 samples). Note that RPDE i Document 3::: Ralph Philip Boas Jr. (August 8, 1912 – July 25, 1992) was a mathematician, teacher, and journal editor. He wrote over 200 papers, mainly in the fields of real and complex analysis. Biography He was born in Walla Walla, Washington, the son of an English professor at Whitman College, but moved frequently as a child; his younger sister, Marie Boas Hall, later to become a historian of science, was born in Springfield, Massachusetts, where his father had become a high school teacher. He was home-schooled until the age of eight, began his formal schooling in the sixth grade, and graduated from high school while still only 15. After a gap year auditing classes at Mount Holyoke College (where his father had become a professor) he entered Harvard, intending to major in chemistry and go into medicine, but ended up studying mathematics instead. His first mathematics publication was written as an undergraduate, after he discovered an incorrect proof in another paper. He got his A.B. 
degree in 1933, received a Sheldon Fellowship for a year of travel, and returned to Harvard for his doctoral studies in 1934. He earned his doctorate there in 1937, under the supervision of David Widder. After postdoctoral studies at Princeton University with Salomon Bochner, and then the University of Cambridge in England, he began a two-year instructorship at Duke University, where he met his future wife, Mary Layne, also a mathematics instructor at Duke. They were married in 1941, and when the United States entered World War II later that year, Boas moved to the Navy Pre-flight School in Chapel Hill, North Carolina. In 1942, he interviewed for a position in the Manhattan Project, at the Los Alamos National Laboratory, but ended up returning to Harvard to teach in a Navy instruction program there, while his wife taught at Tufts University. Beginning when he was an instructor at Duke University, Boas had become a prolific reviewer for Mathematical Reviews, and at the end of the war he took a The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What significant achievement did Pars Rocketry Group accomplish in June 2016 during the Intercollegiate Rocket Engineering Competition? A. They won first place among 44 teams. B. They won the 6th position among 44 teams. C. They developed a new type of rocket engine. D. They launched the most powerful rocket engine in Turkey. Answer:
B. They won the 6th position among 44 teams.
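The recurrence period density entropy passage above describes a multi-step procedure: time-delay embedding, collection of recurrence periods into a histogram, and a normalised entropy of the resulting density. The following Python sketch follows those steps under simplifying assumptions; it records only the first return per reference point, and the embedding parameters and ball radius are illustrative rather than optimised by the systematic search the passage recommends.

import numpy as np

def rpde(x, m=4, tau=1, eps=0.2, t_max=1000):
    """Rough sketch of recurrence period density entropy for a 1-D signal x."""
    x = np.asarray(x, dtype=float)
    n_vec = len(x) - (m - 1) * tau
    # Time-delay embedding: each row is one m-dimensional phase-space point.
    emb = np.column_stack([x[i:i + n_vec] for i in range(0, m * tau, tau)])

    # Record the time until the trajectory first re-enters the eps-ball
    # around each point after having left it (first return only, for brevity).
    periods = []
    for i in range(n_vec):
        inside = np.linalg.norm(emb - emb[i], axis=1) < eps
        left = False
        for j in range(i + 1, n_vec):
            if not inside[j]:
                left = True
            elif left:
                periods.append(j - i)
                break

    if not periods:
        raise ValueError("no recurrences found; try a larger eps")

    # Normalise the histogram of recurrence periods to a density P(T) and
    # compute its entropy, scaled so the result lies in [0, 1].
    hist = np.bincount(periods, minlength=t_max + 1)[1:t_max + 1]
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)) / np.log(t_max))

# A periodic signal gives a small value; uniform white noise gives a larger one.
t = np.arange(2000)
print(rpde(np.sin(2 * np.pi * t / 40)))                     # near 0
print(rpde(np.random.default_rng(1).uniform(size=2000)))    # noticeably larger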
Relavent Documents: Document 0::: . Sexual communication is a conversation between partners about sex, which is necessary to obtain sexual consent, to learn about likes and dislikes, and to obtain sexual satisfaction. Sexual communication is a transitional stage from the romantic period of a relationship to a closer intimate and sexual relationship between partners. Sexual communication in different countries is based on the partners' chosen religion and marriage customs, so it can start at different stages of the partners' relationship. Sexual communication is not primary in the relationship of partners, and in harmonious relationships it occurs after the spiritual perception of the partner. Meaning for sex life Sexual communication is necessary for partners to share their sexual experiences with each other based on trust and respect, in a safe manner. In order to initiate sexual relations, consent to sex is essential, meaning that consent is free, informed, revocable, enthusiastic and specific. When "NO" means - NO. Sexual relations that began without consent to sex, as well as consent to sex that was obtained through a false promise to marry, are considered rape. Sexual communication includes being aware of the STD (sexually transmitted disease) status of partners, discussing the purpose of the relationship, getting tested for STDs together, and safe sex and contraceptives. Positive sexual communication is associated with sexual satisfaction in relationships and well-being. Sexual communication helps partners to better understand each other to build intimate relationships, to understand the differences in perceptions and feelings of partners in sexual activity. Talking about sex opens up self-awareness for self-reflection for conscious sex with a partner. Forms of sexual communication Sexual communication occurs in various forms: Physical communication is physical actions that speak louder than words: a look, a gently placed hand can speak better than verbally. Sexual cues are v Document 1::: The fungi imperfecti or imperfect fungi are fungi which do not fit into the commonly established taxonomic classifications of fungi that are based on biological species concepts or morphological characteristics of sexual structures because their sexual form of reproduction has never been observed. They are known as imperfect fungi because only their asexual and vegetative phases are known. They have asexual form of reproduction, meaning that these fungi produce their spores asexually, in the process called sporogenesis. There are about 25,000 species that have been classified in the phylum Deuteromycota and many are Basidiomycota or Ascomycota anamorphs. Fungi producing the antibiotic penicillin and those that cause athlete's foot and yeast infections are algal fungi. In addition, there are a number of edible imperfect fungi, including the ones that provide the distinctive characteristics of Roquefort and Camembert cheese. Other, more informal names besides phylum Deuteromycota (or class "Deuteromycetes") and fungi imperfecti are anamorphic fungi, or mitosporic fungi, but these are terms without taxonomic rank. Examples are Alternaria, Colletotrichum, Trichoderma etc. The class Phycomycetes ("algal fungi") has also been used. Problems in taxonomic classification Although Fungi imperfecti/Deuteromycota is no longer formally accepted as a taxon, many of the fungi it included have yet to find a place in modern fungal classification. 
This is because most fungi are classified based on characteristics of the fruiting bodies and spores produced during sexual reproduction, and members of the Deuteromycota have been observed to reproduce only asexually or produce no spores. Mycologists formerly used a unique dual system of nomenclature in classifying fungi, which was permitted by Article 59 of the International Code of Botanical Nomenclature (the rules governing the naming of plants and fungi). However, the system of dual nomenclature for fungi was abolished in the 2011 Document 2::: Ankyrin 1, also known as ANK-1, and erythrocyte ankyrin, is a protein that in humans is encoded by the ANK1 gene. Tissue distribution The protein encoded by this gene, Ankyrin 1, is the prototype of the ankyrin family, was first discovered in erythrocytes, but since has also been found in brain and muscles. Genetics Complex patterns of alternative splicing in the regulatory domain, giving rise to different isoforms of ankyrin 1 have been described, however, the precise functions of the various isoforms are not known. Alternative polyadenylation accounting for the different sized erythrocytic ankyrin 1 mRNAs, has also been reported. Truncated muscle-specific isoforms of ankyrin 1 resulting from usage of an alternate promoter have also been identified. Disease linkage Mutations in erythrocytic ankyrin 1 have been associated in approximately half of all patients with hereditary spherocytosis. ANK1 shows altered methylation and expression in Alzheimer's disease. A gene expression study of postmortem brains has suggested ANK1 interacts with interferon-γ signalling. Function The ANK1 protein belongs to the ankyrin family that are believed to link the integral membrane proteins to the underlying spectrin-actin cytoskeleton and play key roles in activities such as cell motility, activation, proliferation, contact, and maintenance of specialized membrane domains. Multiple isoforms of ankyrin with different affinities for various target proteins are expressed in a tissue-specific, developmentally regulated manner. Most ankyrins are typically composed of three structural domains: an amino-terminal domain containing multiple ankyrin repeats; a central region with a highly conserved spectrin-binding domain; and a carboxy-terminal regulatory domain, which is the least conserved and subject to variation. The small ANK1 (sAnk1) protein splice variants makes contacts with obscurin, a giant protein surrounding the contractile apparatus in striated muscle. Interactions A Document 3::: Spinnbarkeit (), also known as fibrosity, is a biomedical rheology term which refers to the stringy or stretchy property found to varying degrees in mucus, saliva, albumen and similar viscoelastic fluids. The term is used especially with reference to cervical mucus at the time just prior to or during ovulation. Under the influence of estrogens, cervical mucus becomes abundant, clear, and stretchable, and somewhat like egg white. The stretchability of the mucus is described by its spinnbarkeit, from the German word for the ability to be spun. Only such mucus appears to be able to be penetrated by sperm. After ovulation, the character of cervical mucus changes, and under the influence of progesterone it becomes thick, scant, and tacky. Sperm typically cannot penetrate it. Saliva does not always exhibit spinnbarkeit, but it can under certain circumstances. The thickness and spinnbarkeit of nasal mucus are factors in whether or not the nose seems to be blocked. 
Mucociliary transport depends on the interaction of fibrous mucus with beating cilia. See also Mucorrhea References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What does matricity specifically refer to in the context of protein interactions? A. The interaction of a matrix with its environment B. The affinity of proteins for each other C. The clustering of proteins for multivalent ligands D. The solid state of a matrix without interaction Answer:
A. The interaction of a matrix with its environment
Relavent Documents: Document 0::: In mathematics, more specifically in functional analysis, a positive linear operator from an preordered vector space into a preordered vector space is a linear operator on into such that for all positive elements of that is it holds that In other words, a positive linear operator maps the positive cone of the domain into the positive cone of the codomain. Every positive linear functional is a type of positive linear operator. The significance of positive linear operators lies in results such as Riesz–Markov–Kakutani representation theorem. Definition A linear function on a preordered vector space is called positive if it satisfies either of the following equivalent conditions: implies if then The set of all positive linear forms on a vector space with positive cone called the dual cone and denoted by is a cone equal to the polar of The preorder induced by the dual cone on the space of linear functionals on is called the . The order dual of an ordered vector space is the set, denoted by defined by Canonical ordering Let and be preordered vector spaces and let be the space of all linear maps from into The set of all positive linear operators in is a cone in that defines a preorder on . If is a vector subspace of and if is a proper cone then this proper cone defines a on making into a partially ordered vector space. If and are ordered topological vector spaces and if is a family of bounded subsets of whose union covers then the positive cone in , which is the space of all continuous linear maps from into is closed in when is endowed with the -topology. For to be a proper cone in it is sufficient that the positive cone of be total in (that is, the span of the positive cone of be dense in ). If is a locally convex space of dimension greater than 0 then this condition is also necessary. Thus, if the positive cone of is total in and if is a locally convex space, then the canonical ordering of defin Document 1::: In oceanography, the sverdrup (symbol: Sv) is a non-SI metric unit of volumetric flow rate, with equal to . It is equivalent to the SI derived unit cubic hectometer per second (symbol: hm3/s or hm3⋅s−1): 1 Sv is equal to 1 hm3/s. It is used almost exclusively in oceanography to measure the volumetric rate of transport of ocean currents. It is named after Harald Sverdrup. One sverdrup is about five times what is carried at the estuary by the world's largest river, the Amazon. In the context of ocean currents, a volume of one million cubic meters may be imagined as a "slice" of ocean with dimensions × × (width × length × thickness) or a cube that is 100 m × 100 m × 100 m. At this scale, these units can be more easily compared in terms of width of the current (several km), depth (hundreds of meters), and current speed (as meters per second). Thus, a hypothetical current wide, 500 m (0.5 km) deep, and moving at 2 m/s would be transporting of water. The sverdrup is distinct from the SI sievert unit or the non-SI svedberg unit. All three use the same symbol, but they are not related. History The sverdrup is named in honor of the Norwegian oceanographer, meteorologist and polar explorer Harald Ulrik Sverdrup (1888–1957), who wrote the 1942 volume The Oceans, Their Physics, Chemistry, and General Biology together with Martin W. Johnson and Richard H. Fleming. 
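Because the sverdrup passage above defines 1 Sv as one cubic hectometre per second, i.e. one million cubic metres per second, the transport of an idealised rectangular current can be sketched as below. The width, depth and speed used here are illustrative stand-ins, since the specific figures in the passage were lost when the text was extracted.

# Volumetric transport of an idealised rectangular current, in sverdrups.
# 1 Sv = 1 hm^3/s = (100 m)^3 per second = 1e6 m^3/s.
CUBIC_METRES_PER_SV = 1_000_000.0

def transport_sv(width_m: float, depth_m: float, speed_m_s: float) -> float:
    """Cross-sectional area times speed, converted to sverdrups."""
    return width_m * depth_m * speed_m_s / CUBIC_METRES_PER_SV

# Illustrative values: a current 100 km wide, 500 m deep, moving at 2 m/s
# carries 100_000 * 500 * 2 = 1e8 m^3/s, i.e. 100 Sv.
print(transport_sv(100_000, 500, 2))  # -> 100.0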
In the 1950s and early 1960s both Soviet and North American scientists contemplated the damming of the Bering Strait, thus enabling temperate Atlantic water to heat up the cold Arctic Sea and, the theory went, making Siberia and northern Canada more habitable. As part of the North American team, Canadian oceanographer Maxwell Dunbar found it "very cumbersome" to repeatedly reference millions of cubic meters per second. He casually suggested that as a new unit of water flow, "the inflow through Bering Strait is one sverdrup". At the Arctic Basin Symposium in October 1962, the unit came into general usage. Document 2::: Tellurocracy (from and ) is a concept proposed by Aleksandr Dugin to describe a type of civilization or state system that is defined by the development of land territories and consistent penetration into inland territories. Tellurocratic states possess a set state-territory in which the state-forming ethnic majority lives, around this territory further land expansion occurs. Tellurocracy is conceived of as an antonym to thalassocracy. Most states display an amalgam of tellurocratic and thalassocratic features. In political geography, geopolitics and geo-economics, the term is used to explain the power of a country through its control over land. For example, prior to their merger, the Sultanate of Muscat was thalassocratic, but the Imamate of Oman was landlocked and purely tellurocratic. It could be suggested that most or all landlocked states are tellurocracies. Defining tellurocracy Tellurocracies are generally not purely tellurocratic. In particular, most large tellurocracies have coastlines and not just inland territories, unlike thalassocracies, which historically would generally only have coastlines, and not inland territories. This makes it difficult to define what exactly a tellurocracy is. For example, the Mongols attempted to conquer Japan on multiple occasions. As well, the Russian Empire conquered Russian America (now Alaska) after it reached a point where it could no longer expand eastward by land. Likewise, the United States acquired Alaska and incorporated many islands and the Panama Canal Zone after it could no longer expand westward. It is also worth noting that the largely tellurocratic, continental Australia, founded as a group of thalassocratic colonies, now holds its own island territories outside of its mainland, such as Christmas Island. Historical tellurocracies Many empires of antiquity are noted for being more tellurocratic than their rivals, such as the early Roman Republic in opposition to its rival Carthaginian Empire, which later Document 3::: The measurement of economic worth over time is the problem of relating past prices, costs, values and proportions of social production to current ones. For a number of reasons, relating any past indicator to a current indicator of worth is theoretically and practically difficult for economists, historians, and political economists. This has led to some questioning of the idea of time series of worth having any meaning. However, the popular demand for measurements of social worth over time have caused the production of a number of series. The need to measure worth over time People often seek a comparison between the price of an item in the past, and the price of an item today. Over short periods of time, like months, inflation may measure the role an object and its cost played in an economy: the price of fuel may rise or fall over a month. 
The price of money itself changes over time, as does the availability of goods and services as they move into or out of production. What people choose to consume changes over time. Finally, concepts such as cash money economies may not exist in past periods, nor ideas like wage labour or capital investment. Comparing what someone paid for a good, how much they had to work for that money, what the money was worth, how scarce a particular good was, what role it played in someone's standard of living, what its proportion was as part of social income, and what proportion it was as part of possible social production is a difficult task. This task is made more difficult by conflicting theoretical concepts of worth. Theoretical problems One chief problem is the competition between different fundamental conceptions of the division of social product into measurable or theorisable concepts. Marxist and political economic value, neoclassical marginalist, and other ideas regarding proportion of social product not measured in money terms have arisen. Practical problems Official measures by governments have a limited depth of the time series, mainly originating in the 20th century. Even within these series, changes in parameters such as consumption bundles, or measures of GDP fundamentally affect the worth of a series. Historical series computed from statistical data sets, or estimated from archival records have a number of other problems, including changing consumption bundles, consumption bundles not representing standard measures, and changes to the structure of social worth itself such as the move to wage labour and market economies. Different series and their use A different time series should be used depending on what kind of economic object is being compared over time: Consumer Price Indexes and Wage-Price series Used to compare the price of a basket of standard consumer goods for an "average" individual (often defined as non-agricultural workers, based on survey data), or to assess the ability of individuals to acquire these baskets. For example, used to answer the question, "Has the money price of goods purchased by a typical household risen over time?" or used to make adjustments in international comparisons of standards of living. Share of GDP Used to measure income distribution in society or social power of individuals, and the equivalent power of capital. For example, can be used to ask "Has the share of annual production that has gone to workers in the form of income decreased or increased over time?" or "Over the long run do the shares of labor's and capital's income constant over time or do they exhibit trends?" GDP per capita Used to compare wage or income relativities over time, for example, used to answer the question, "If in 1870 a grocer earned $40 a week in profit, what would that profit be worth today in terms of social status and economic impact?" Wage price series A wage price series is a set of data claiming to indicate the real wages and real prices of goods over a span of time, and their relationship to each other through the price of money at the time they were compared. Wage price series are currently collected and computed by major governments. Wage price series are also computed historically by economic historians and non-government organisations. Both contemporary and historical wage price series are inherently controversial, as they speak to the standard of living of working-class people. 
Contemporary wage price series Contemporary wage price series are the result of the normal operation of government economic statistics units, and are often produced as part of the National income accounts. Historical wage price series Computing a historical wage price series requires discovering government or non government data, determining if the measures of wage or price are appropriate, and then manipulating the data. Some of these series have been criticised for failing to deal with a number of significant data and theoretical problems. Historical wage price series of the United Kingdom Due to the survival of literary records of economic life from the 13th century in the South of England, extensive attempts have been made to produce long run wage price series regarding Southern England, England in General, or the United Kingdom in the British Isles. Officer's production of a series from 1264 is reliant on a number of assumptions which he readily admits produce questions about his series' representation of reality. Officer is reliant on subseries compiled using different criteria, and these series are reliant upon primary sources that describe different earnings and expenses bundles. The assumption of universal wage labour and a retail goods market, the assumption of money rents, the inability to compute non-market earnings such as obligatory benefits received from masters or the right to squat, all impact on the quality and representative nature of Officer's sources and series. References Officer, Lawrence H. What Were the U.K. Earnings Rate and Retail Price Index Then?: a data study. (PDF) Measuringworth.com (unpublished). Further reading Ashton, TS. "The Standard of Life of the Workers in England, 1790-1830." Journal of Economic History 9 (Supplement) 1949: 19–38. Boot, HM. "Real Incomes of the British Middle Class, 1760-1850: the Experience of Clerks at the East India Company." Economic History Review 52 (68) 1999: 638–668. Bowley, Arthur L. Prices and Wages in the United Kingdom, 1914-1920. Oxford: Clarendon Press, 1921. Bowley, Arthur L. Wages and Income in the United Kingdom since 1860. Cambridge: Cambridge University Press, 1937. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What challenges do economists face when trying to measure economic worth over time according to the text? A. The lack of consumer demand for economic measurements B. The difficulty of relating past and present indicators C. The availability of an abundance of historical data D. The simplicity of measuring changes in market conditions Answer:
B. The difficulty of relating past and present indicators
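As a concrete illustration of the index problems discussed in the passage above, the following Python sketch deflates a nominal wage series by a price index to express it in base-year prices. All numbers are invented for demonstration and do not come from any of the historical series the passage cites.

# Deflating nominal wages by a price index, as in the CPI / wage-price
# discussion above. The figures are made up; they are not historical data.
nominal_wages = {1950: 8.0, 1980: 95.0, 2010: 640.0}    # currency units per week
price_index   = {1950: 10.0, 1980: 110.0, 2010: 700.0}  # arbitrary base

def real_wage(year: int, base_year: int = 2010) -> float:
    """Nominal wage re-expressed in base-year prices."""
    return nominal_wages[year] * price_index[base_year] / price_index[year]

for y in sorted(nominal_wages):
    print(y, round(real_wage(y), 1))
# Note that the resulting comparison of "worth" depends entirely on the chosen
# index and base year, one of the theoretical problems the passage describes.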
Relavent Documents: Document 0::: Biodegradation is a peer-reviewed scientific journal covering biotransformation, mineralization, detoxification, recycling, amelioration or treatment of chemicals or waste materials by naturally occurring microbial strains, microbial associations or recombinant organisms. According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.909. The editor-in-chief of the journal is Claudia K. Gunsch (Duke University). Document 1::: A star war was a decisive conflict between rival polities of the Maya civilization during the first millennium AD. The term comes from a specific type of glyph used in the Maya script, which depicts a star showering the earth with liquid droplets, or a star over a shell. It represents a verb but its phonemic value and specific meaning have not yet been deciphered. The name "star war" was coined by the epigrapher Linda Schele to refer to the glyph, and by extension to the type of conflict that it indicates. Examples Maya inscriptions assign episodes of Maya warfare to four distinct categories, each represented by its own glyph. Those accorded the greatest significance by the Maya were described with the "star war" glyph, representing a major war resulting in the defeat of one polity by another. This represents the installation of a new dynastic line of rulers, complete dominion of one polity over another, or a successful war of independence by a formerly dominated polity. Losing a star war could be disastrous for the defeated party. The first recorded star war in 562, between Caracol and Tikal, resulted in a 120-year hiatus for the latter city. It saw a decline in Tikal's population, a cessation of monument erection, and the destruction of certain monuments in the Great Plaza. When Calakmul defeated Naranjo in a star war on December 25, 631, it resulted in Naranjo's ruler being tortured to death and possibly eaten. Another star war in February 744 resulted in Tikal sacking Caracol and capturing a personal god effigy of its ruler. An inscription from a monument found at Tortuguero (dating from 669) describes the aftermath of a star war: "the blood was pooled, the skulls were piled". Astronomical connections Mayanists have noted that the dates of recorded star wars often coincide with astronomical events involving the planet Venus, either when it was first visible in the morning or night sky or during its absence at inferior conjunction. Venus was known to Mesoame Document 2::: The Paraná Delta () is the delta of the Paraná River in Argentina and it consists of several islands known as the Islas del Paraná. The Paraná flows north–south and becomes an alluvial basin (a flood plain) between the Argentine provinces of Entre Ríos, Santa Fe and Buenos Aires then emptying into the . It covers about and starts to form between the cities of Santa Fe and Rosario, where the river splits into several arms, creating a network of islands and wetlands. Most of it is in the jurisdiction of Entre Ríos Province, and parts in the north of Buenos Aires Province. The Paraná Delta is conventionally divided into three parts: the Upper Delta, from the Diamante – Puerto Gaboto line to Villa Constitución; the Middle Delta, from Villa Constitución to the Ibicuy Islands; the Lower Delta, from the Ibicuy Islands to the mouth of the river. The total length of the delta is about , and its width varies between . 
It carries 160 million tonnes of suspended sediment (about half of it coming from the Bermejo River through the Paraguay River) and advances from (depending on the source) per year over the Río de la Plata. It is the world's only river delta that is in contact not with the sea but with another river. The Lower Delta was the site of the first modern settlements in the Paraná-Plata basin and is today densely populated, being the agricultural and industrial core of Argentina and host to several major ports. The main course of the Paraná lies on the west of the delta, and is navigable downstream from Puerto General San Martín by ships up to Panamax kind. Rivers of the delta Among the many arms of the river are the Paraná Pavón, the Paraná Ibicuy, the Paraná de las Palmas, the Paraná Guazú and the smaller Paraná Miní and Paraná Bravo. The Paraná Pavón is the first major branch. It has a meandering course that starts on the eastern side, opposite Villa Constitución. Between the main Paraná and the Paraná Pavón lie the Lechiguanas Islands. The Paraná Pavón Document 3::: Concept creep is the process by which harm-related topics experience semantic expansion to include topics which would not have originally been envisaged to be included under that label. It was first described in a Psychological Inquiry article by Nick Haslam in 2016, who identified its effects on the concepts of abuse, bullying, trauma, mental disorder, addiction, and prejudice. Others have identified its effects on terms like "gaslight" and "emotional labour". The phenomenon can be related to the concept of hyperbole. It has been criticised for making people more sensitive to harms and for blurring people's thinking and understanding of such terms, by categorising too many things together which should not be, and by losing the clarity and specificity of a term. Although the initial research on concept creep has focused on concepts central to the political left's ideology, psychologists have also found evidence that people identifying with the political right have more expansive interpretations of concepts central to their own ideology (ex. sexual deviance, personal responsibility and terrorism). The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main criticism of concept creep as mentioned in the text? A. It enhances clarity and specificity of terms. B. It makes people less sensitive to harms. C. It blurs understanding by categorizing too many unrelated topics. D. It is only relevant to the political left. Answer:
C. It blurs understanding by categorizing too many unrelated topics.
Relavent Documents: Document 0::: Veterinary medicine is the branch of medicine that deals with the prevention, management, diagnosis, and treatment of disease, disorder, and injury in non-human animals. The scope of veterinary medicine is wide, covering all animal species, both domesticated and wild, with a wide range of conditions that can affect different species. Veterinary medicine is widely practiced, both with and without professional supervision. Professional care is most often led by a veterinary physician (also known as a veterinarian, veterinary surgeon, or "vet"), but also by paraveterinary workers, such as veterinary nurses, veterinary technicians, and veterinary assistants. This can be augmented by other paraprofessionals with specific specialties, such as animal physiotherapy or dentistry, and species-relevant roles such as farriers. Veterinary science helps human health through the monitoring and control of zoonotic disease (infectious disease transmitted from nonhuman animals to humans), food safety, and through human applications via medical research. They also help to maintain food supply through livestock health monitoring and treatment, and mental health by keeping pets healthy and long-living. Veterinary scientists often collaborate with epidemiologists and other health or natural scientists, depending on type of work. Ethically, veterinarians are usually obliged to look after animal welfare. Veterinarians diagnose, treat, and help keep animals safe and healthy. History Premodern era Archeological evidence, in the form of a cow skull upon which trepanation had been performed, shows that people were performing veterinary procedures in the Neolithic (3400–3000 BCE). The Egyptian Papyrus of Kahun (Twelfth Dynasty of Egypt) is the first extant record of veterinary medicine. The Shalihotra Samhita, dating from the time of Ashoka, is an early Indian veterinary treatise. The edicts of Asoka read: "Everywhere King Piyadasi (Asoka) made two kinds of medicine (चिकित्सा) available, Document 1::: Space weather is a branch of space physics and aeronomy, or heliophysics, concerned with the varying conditions within the Solar System and its heliosphere. This includes the effects of the solar wind, especially on the Earth's magnetosphere, ionosphere, thermosphere, and exosphere. Though physically distinct, space weather is analogous to the terrestrial weather of Earth's atmosphere (troposphere and stratosphere). The term "space weather" was first used in the 1950s and popularized in the 1990s. Later, it prompted research into "space climate", the large-scale and long-term patterns of space weather. History For many centuries, the effects of space weather were noticed, but not understood. Displays of auroral light have long been observed at high latitudes. Beginnings In 1724, George Graham reported that the needle of a magnetic compass was regularly deflected from magnetic north over the course of each day. This effect was eventually attributed to overhead electric currents flowing in the ionosphere and magnetosphere by Balfour Stewart in 1882, and confirmed by Arthur Schuster in 1889 from analysis of magnetic observatory data. In 1852, astronomer and British Major General Edward Sabine showed that the probability of the occurrence of geomagnetic storms on Earth was correlated with the number of sunspots, demonstrating a novel solar-terrestrial interaction. The solar storm of 1859 caused brilliant auroral displays and disrupted global telegraph operations. 
Richard Carrington correctly connected the storm with a solar flare that he had observed the day before near a large sunspot group, demonstrating that specific solar events could affect the Earth. Kristian Birkeland explained the physics of aurorae by creating artificial ones in his laboratory, and predicted the solar wind. The introduction of radio revealed that solar weather could cause extreme static or noise. Radar jamming during a large solar event in 1942 led to the discovery of solar radio bursts, radio waves over a broad frequency range created by a solar flare. The 20th century In the 20th century, the interest in space weather expanded as military and commercial systems came to depend on systems affected by space weather. Communications satellites are a vital part of global commerce. Weather satellite systems provide information about terrestrial weather. The signals from satellites of a global positioning system (GPS) are used in a wide variety of applications. Space weather phenomena can interfere with or damage these satellites or interfere with the radio signals with which they operate. Space weather phenomena can cause damaging surges in long-distance transmission lines and expose passengers and crew of aircraft travel to radiation, especially on polar routes. The International Geophysical Year increased research into space weather. Ground-based data obtained during IGY demonstrated that the aurorae occurred in an auroral oval, a permanent region of luminescence 15 to 25° in latitude from the magnetic poles and 5 to 20° wide. In 1958, the Explorer I satellite discovered the Van Allen belts, regions of radiation particles trapped by the Earth's magnetic field. In January 1959, the Soviet satellite Luna 1 first directly observed the solar wind and measured its strength. A smaller International Heliophysical Year (IHY) occurred in 2007–2008. In 1969, INJUN-5 (or Explorer 40) made the first direct observation of the electric field impressed on the Earth's high-latitude ionosphere by the solar wind. In the early 1970s, Triad data demonstrated that permanent electric currents flowed between the auroral oval and the magnetosphere. The term "space weather" came into usage in the late 1950s as the space age began and satellites began to measure the space environment. The term regained popularity in the 1990s along with the belief that space's impact on human systems demanded a more coordinated research and application framework. Programs US National Space Weather Program The purpose of the US National Space Weather Program is to focus research on the needs of the affected commercial and military communities, to connect the research and user communities, to create coordination between operational data centers, and to better define user community needs. NOAA operates the National Weather Service's Space Weather Prediction Center. The concept was turned into an action plan in 2000, an implementation plan in 2002, an assessment in 2006 and a revised strategic plan in 2010. A revised action plan was scheduled to be released in 2011 followed by a revised implementation plan in 2012. ICAO Space Weather Advisory International Civil Aviation Organization (ICAO) implemented a Space Weather Advisory program in late 2019. Under this program, ICAO designated four global space weather service providers: The United States, which is done by the National Oceanic and Atmospheric Administration (NOAA) Space Weather Prediction Center. 
The Australia, Canada, France, and Japan (ACFJ) consortium, comprising space weather agencies from Australia, Canada, France, and Japan. The Pan-European Consortium for Aviation Space Weather User Services (PECASUS), comprising space weather agencies from Finland (lead), Belgium, the United Kingdom, Poland, Germany, Netherlands, Italy, Austria, and Cyprus. The China-Russian Federation Consortium (CRC) comprising space weather agencies from China and the Russian Federation. Phenomena Within the Solar System, space weather is influenced by the solar wind and the interplanetary magnetic field carried by the solar wind plasma. A variety of physical phenomena is associated with space weather, including geomagnetic storms and substorms, energization of the Van Allen radiation belts, ionospheric disturbances and scintillation of satellite-to-ground radio signals and long-range radar signals, aurorae, and geomagnetically induced currents at Earth's surface. Coronal mass ejections are also important drivers of space weather, as they can compress the magnetosphere and trigger geomagnetic storms. Solar energetic particles (SEP) accelerated by coronal mass ejections or solar flares can trigger solar particle events, a critical driver of human impact space weather, as they can damage electronics onboard spacecraft (e.g. Galaxy 15 failure), and threaten the lives of astronauts, as well as increase radiation hazards to high-altitude, high-latitude aviation. Effects Spacecraft electronics Some spacecraft failures can be directly attributed to space weather; many more are thought to have a space weather component. For example, 46 of the 70 failures reported in 2003 occurred during the October 2003 geomagnetic storm. The two most common adverse space weather effects on spacecraft are radiation damage and spacecraft charging. Radiation (high-energy particles) passes through the skin of the spacecraft and into the electronic components. In most cases, the radiation causes an erroneous signal or changes one bit in memory of a spacecraft's electronics (single event upsets). In a few cases, the radiation destroys a section of the electronics (single-event latchup). Spacecraft charging is the accumulation of an electrostatic charge on a nonconducting material on the spacecraft's surface by low-energy particles. If enough charge is built up, a discharge (spark) occurs. This can cause an erroneous signal to be detected and acted on by the spacecraft computer. A recent study indicated that spacecraft charging is the predominant space weather effect on spacecraft in geosynchronous orbit. Spacecraft orbit changes The orbits of spacecraft in low Earth orbit (LEO) decay to lower and lower altitudes due to the resistance from the friction between the spacecraft's surface (i.e. , drag) and the outer layer of the Earth's atmosphere (or the thermosphere and exosphere). Eventually, a LEO spacecraft falls out of orbit and towards the Earth's surface. Many spacecraft launched in the past few decades have the ability to fire a small rocket to manage their orbits. The rocket can increase altitude to extend lifetime, to direct the re-entry towards a particular (marine) site, or route the satellite to avoid collision with other spacecraft. Such maneuvers require precise information about the orbit. A geomagnetic storm can cause an orbit change over a few days that otherwise would occur over a year or more. 
The geomagnetic storm adds heat to the thermosphere, causing the thermosphere to expand and rise, increasing the drag on spacecraft. The 2009 satellite collision between the Iridium 33 and Cosmos 2251 demonstrated the importance of having precise knowledge of all objects in orbit. Iridium 33 had the capability to maneuver out of the path of Cosmos 2251 and could have evaded the crash, if a credible collision prediction had been available. Humans in space The exposure of a human body to ionizing radiation has the same harmful effects whether the source of the radiation is a medical X-ray machine, a nuclear power plant, or radiation in space. The degree of the harmful effect depends on the length of exposure and the radiation's energy density. The ever-present radiation belts extend down to the altitude of crewed spacecraft such as the International Space Station (ISS) and the Space Shuttle, but the amount of exposure is within the acceptable lifetime exposure limit under normal conditions. During a major space weather event that includes an SEP burst, the flux can increase by orders of magnitude. Areas within ISS provide shielding that can keep the total dose within safe limits. For the Space Shuttle, such an event would have required immediate mission termination. Ground systems Spacecraft signals The ionosphere bends radio waves in the same manner that water in a pool bends visible light. When the medium through which such waves travel is disturbed, the light image or radio information is distorted and can become unrecognizable. The degree of distortion (scintillation) of a radio wave by the ionosphere depends on the signal frequency. Radio signals in the VHF band (30 to 300 MHz) can be distorted beyond recognition by a disturbed ionosphere. Radio signals in the UHF band (300 MHz to 3 GHz) transit a disturbed ionosphere, but a receiver may not be able to keep locked to the carrier frequency. GPS uses signals at 1575.42 MHz (L1) and 1227.6 MHz (L2) that can be distorted by a disturbed ionosphere. Space weather events that corrupt GPS signals can significantly impact society. For example, the Wide Area Augmentation System operated by the US Federal Aviation Administration (FAA) is used as a navigation tool for North American commercial aviation. It is disabled by every major space weather event. Outages can range from minutes to days. Major space weather events can push the disturbed polar ionosphere 10° to 30° of latitude toward the equator and can cause large ionospheric gradients (changes in density over distance of hundreds of km) at mid and low latitude. Both of these factors can distort GPS signals. Long-distance radio signals Radio waves in the HF band (3 to 30 MHz) (also known as the shortwave band) are reflected by the ionosphere. Since the ground also reflects HF waves, a signal can be transmitted around the curvature of the Earth beyond the line of sight. During the 20th century, HF communications was the only method for a ship or aircraft far from land or a base station to communicate. The advent of systems such as Iridium brought other methods of communications, but HF remains critical for vessels that do not carry the newer equipment and as a critical backup system for others. Space weather events can create irregularities in the ionosphere that scatter HF signals instead of reflecting them, preventing HF communications. At auroral and polar latitudes, small space weather events that occur frequently disrupt HF communications. 
At mid-latitudes, HF communications are disrupted by solar radio bursts, by X-rays from solar flares (which enhance and disturb the ionospheric D-layer) and by TEC enhancements and irregularities during major geomagnetic storms. Transpolar airline routes are particularly sensitive to space weather, in part because Federal Aviation Regulations require reliable communication over the entire flight. Diverting such a flight is estimated to cost about $100,000. Humans in commercial aviation The magnetosphere guides cosmic ray and solar energetic particles to polar latitudes, while high-energy charged particles enter the mesosphere, stratosphere, and troposphere. These energetic particles at the top of the atmosphere shatter atmospheric atoms and molecules, creating harmful lower-energy particles that penetrate deep into the atmosphere and create measurable radiation. All aircraft flying above 8 km (26,200 feet) altitude are exposed to these particles. The dose exposure is greater in polar regions than at midlatitude and equatorial regions. Many commercial aircraft fly over the polar region. When a space weather event causes radiation exposure to exceed the safe level set by aviation authorities, the aircraft's flight path is diverted. Measurements of the radiation environment at commercial aircraft altitudes above 8 km (26,000 ft) have historically been done by instruments that record the data on board where the data are then processed later on the ground. However, a system of real-time radiation measurements on-board aircraft has been developed through the NASA Automated Radiation Measurements for Aerospace Safety (ARMAS) program. ARMAS has flown hundreds of flights since 2013, mostly on research aircraft, and sent the data to the ground through Iridium satellite links. The eventual goal of these types of measurements is to data assimilate them into physics-based global radiation models, e.g., NASA's Nowcast of Atmospheric Ionizing Radiation System (NAIRAS), so as to provide the weather of the radiation environment rather than the climatology. Ground-induced electric fields Magnetic storm activity can induce geoelectric fields in the Earth's conducting lithosphere. Corresponding voltage differentials can find their way into electric power grids through ground connections, driving uncontrolled electric currents that interfere with grid operation, damage transformers, trip protective relays, and sometimes cause blackouts. This complicated chain of causes and effects was demonstrated during the magnetic storm of March 1989, which caused the complete collapse of the Hydro-Québec electric-power grid in Canada, temporarily leaving nine million people without electricity. The possible occurrence of an even more intense storm led to operational standards intended to mitigate induction-hazard risks, while reinsurance companies commissioned revised risk assessments. Geophysical exploration Air- and ship-borne magnetic surveys can be affected by rapid magnetic field variations during geomagnetic storms. Such storms cause data-interpretation problems because the space weather-related magnetic field changes are similar in magnitude to those of the subsurface crustal magnetic field in the survey area. Accurate geomagnetic storm warnings, including an assessment of storm magnitude and duration, allows for an economic use of survey equipment. 
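The frequency dependence of ionospheric distortion of GPS signals described above is often summarised by the standard first-order approximation in which the group delay is proportional to the total electron content divided by the square of the carrier frequency (delay in metres, roughly 40.3 times TEC over f squared, in SI units). The sketch below uses that textbook relation, which is background knowledge rather than something stated in the passage, together with the L1 and L2 frequencies the passage quotes; the TEC value is illustrative.

# First-order ionospheric group delay for the two GPS frequencies quoted in
# the passage. The 40.3 m^3/s^2 constant and the 1/f^2 dependence are the
# standard textbook approximation, not taken from the passage itself.
F_L1 = 1575.42e6   # Hz
F_L2 = 1227.60e6   # Hz
K = 40.3           # m^3 s^-2

def group_delay_m(tec_el_per_m2: float, freq_hz: float) -> float:
    """Extra signal path length, in metres, caused by the ionosphere."""
    return K * tec_el_per_m2 / freq_hz ** 2

# Illustrative TEC of 50 TEC units (1 TECU = 1e16 electrons/m^2), a level a
# disturbed mid-latitude ionosphere could plausibly reach.
tec = 50e16
d1, d2 = group_delay_m(tec, F_L1), group_delay_m(tec, F_L2)
print(f"L1 delay: {d1:.1f} m, L2 delay: {d2:.1f} m")

# Because the delay scales as 1/f^2, differencing the two frequencies lets a
# receiver solve for TEC, the basis of the GPS TEC monitoring described later.
tec_recovered = (d2 - d1) * F_L1**2 * F_L2**2 / (K * (F_L1**2 - F_L2**2))
print(f"recovered TEC: {tec_recovered:.2e} electrons/m^2")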
Geophysics and hydrocarbon production For economic and other reasons, oil and gas production often involves horizontal drilling of well paths many kilometers from a single wellhead. Accuracy requirements are strict, due to target size – reservoirs may only be a few tens to hundreds of meters across – and safety, because of the proximity of other boreholes. The most accurate gyroscopic method is expensive, since it can stop drilling for hours. An alternative is to use a magnetic survey, which enables measurement while drilling (MWD). Near real-time magnetic data can be used to correct drilling direction. Magnetic data and space weather forecasts can help to clarify unknown sources of drilling error. Terrestrial weather The amount of energy entering the troposphere and stratosphere from space weather phenomena is trivial compared to the solar insolation in the visible and infrared portions of the solar electromagnetic spectrum. Although some linkage between the 11-year sunspot cycle and the Earth's climate has been claimed., this has never been verified. For example, the Maunder minimum, a 70-year period almost devoid of sunspots, has often been suggested to be correlated to a cooler climate, but these correlations have disappeared after deeper studies. The suggested link from changes in cosmic-ray flux causing changes in the amount of cloud formation did not survive scientific tests. Another suggestion, that variations in the extreme ultraviolet (EUV) flux subtly influence existing drivers of the climate and tip the balance between El Niño/La Niña events collapsed when new research showed this was not possible. As such, a linkage between space weather and the climate has not been demonstrated. In addition, a link has been suggested between high energy charged particles (such as SEPs and cosmic rays) and cloud formation. This is because charged particles interact with the atmosphere to produce volatiles which then condense, creating cloud seeds. This is a topic of ongoing research at CERN, where experiments test the effect of high-energy charged particles on atmosphere. If proven, this may suggest a link between space weather (in the form of solar particle events) and cloud formation. Most recently, a statistical connection has been reported between the occurrence of heavy floods and the arrivals of high-speed solar wind streams (HSSs). The enhanced auroral energy deposition during HSSs is suggested as a mechanism for the generation of downward propagating atmospheric gravity waves (AGWs). As AGWs reach lower atmosphere, they may excite the conditional instability in the troposphere, thus leading to excessive rainfall. Observation Observation of space weather is done both for scientific research and applications. Scientific observation has evolved with the state of knowledge, while application-related observation expanded with the ability to exploit such data. Ground-based Space weather is monitored at ground level by observing changes in the Earth's magnetic field over periods of seconds to days, by observing the surface of the Sun, and by observing radio noise created in the Sun's atmosphere. The Sunspot Number (SSN) is the number of sunspots on the Sun's photosphere in visible light on the side of the Sun visible to an Earth observer. The number and total area of sunspots are related to the brightness of the Sun in the EUV and X-ray portions of the solar spectrum and to solar activity such as solar flares and coronal mass ejections. 
The 10.7 cm radio flux (F10.7) is a measurement of RF emissions from the Sun and is roughly correlated with the solar EUV flux. Since this RF emission is easily obtained from the ground and EUV flux is not, this value has been measured and disseminated continuously since 1947. The world standard measurements are made by the Dominion Radio Astrophysical Observatory at Penticton, BC, Canada and reported once a day at local noon in solar flux units (10−22W·m−2·Hz−1). F10.7 is archived by the National Geophysical Data Center. Fundamental space weather monitoring data are provided by ground-based magnetometers and magnetic observatories. Magnetic storms were first discovered by ground-based measurement of occasional magnetic disturbance. Ground magnetometer data provide real-time situational awareness for postevent analysis. Magnetic observatories have been in continuous operations for decades to centuries, providing data to inform studies of long-term changes in space climatology. Disturbance storm time index (Dst index) is an estimate of the magnetic field change at the Earth's magnetic equator due to a ring of electric current at and just earthward of the geosynchronous orbit. The index is based on data from four ground-based magnetic observatories between 21° and 33° magnetic latitude during a one-hour period. Stations closer to the magnetic equator are not used due to ionospheric effects. The Dst index is compiled and archived by the World Data Center for Geomagnetism, Kyoto. Kp/ap index: 'a' is an index created from the geomagnetic disturbance at one midlatitude (40° to 50° latitude) geomagnetic observatory during a 3-hour period. 'K' is the quasilogarithmic counterpart of the 'a' index. Kp and ap are the average of K and an over 13 geomagnetic observatories to represent planetary-wide geomagnetic disturbances. The Kp/ap index indicates both geomagnetic storms and substorms (auroral disturbance). Kp/ap data are available from 1932 onward. AE index is compiled from geomagnetic disturbances at 12 geomagnetic observatories in and near the auroral zones and is recorded at 1-minute intervals. The public AE index is available with a lag of two to three days that limits its utility for space weather applications. The AE index indicates the intensity of geomagnetic substorms except during a major geomagnetic storm when the auroral zones expand equatorward from the observatories. Radio noise bursts are reported by the Radio Solar Telescope Network to the U.S. Air Force and to NOAA. The radio bursts are associated with solar flare plasma that interacts with the ambient solar atmosphere. The Sun's photosphere is observed continuously for activity that can be the precursors to solar flares and CMEs. The Global Oscillation Network Group (GONG) project monitors both the surface and the interior of the Sun by using helioseismology, the study of sound waves propagating through the Sun and observed as ripples on the solar surface. GONG can detect sunspot groups on the far side of the Sun. This ability has recently been verified by visual observations from the STEREO spacecraft. Neutron monitors on the ground indirectly monitor cosmic rays from the Sun and galactic sources. When cosmic rays interact with the atmosphere, atomic interactions occur that cause a shower of lower-energy particles to descend into the atmosphere and to ground level. The presence of cosmic rays in the near-Earth space environment can be detected by monitoring high-energy neutrons at ground level. 
Small fluxes of cosmic rays are present continuously. Large fluxes are produced by the Sun during events related to energetic solar flares. Total Electron Content (TEC) is a measure of the ionosphere over a given location. TEC is the number of electrons in a column one meter square from the base of the ionosphere (around 90 km altitude) to the top of the ionosphere (around 1000 km altitude). Many TEC measurements are made by monitoring the two frequencies transmitted by GPS spacecraft. Presently, GPS TEC is monitored and distributed in real time from more than 360 stations maintained by agencies in many countries. Geoeffectiveness is a measure of how strongly space weather magnetic fields, such as coronal mass ejections, couple with the Earth's magnetic field. This is determined by the direction of the magnetic field held within the plasma that originates from the Sun. New techniques measuring Faraday rotation in radio waves are in development to measure field direction. Satellite-based A host of research spacecraft have explored space weather. The Orbiting Geophysical Observatory series were among the first spacecraft with the mission of analyzing the space environment. Recent spacecraft include the NASA-ESA Solar-Terrestrial Relations Observatory (STEREO) pair of spacecraft launched in 2006 into solar orbit and the Van Allen Probes, launched in 2012 into a highly elliptical Earth orbit. The two STEREO spacecraft drift away from the Earth by about 22° per year, one leading and the other trailing the Earth in its orbit. Together they compile information about the solar surface and atmosphere in three dimensions. The Van Allen probes record detailed information about the radiation belts, geomagnetic storms, and the relationship between the two. Some spacecraft with other primary missions have carried auxiliary instruments for solar observation. Among the earliest such spacecraft were the Applications Technology Satellite (ATS) series at GEO that were precursors to the modern Geostationary Operational Environmental Satellite (GOES) weather satellite and many communication satellites. The ATS spacecraft carried environmental particle sensors as auxiliary payloads and had their navigational magnetic field sensor used for sensing the environment. Many of the early instruments were research spacecraft that were re-purposed for space weather applications. One of the first of these was the IMP-8 (Interplanetary Monitoring Platform). It orbited the Earth at 35 Earth radii and observed the solar wind for two-thirds of its 12-day orbits from 1973 to 2006. Since the solar wind carries disturbances that affect the magnetosphere and ionosphere, IMP-8 demonstrated the utility of continuous solar wind monitoring. IMP-8 was followed by ISEE-3, which was placed near the Sun-Earth Lagrangian point, 235 Earth radii above the surface (about 1.5 million km, or 924,000 miles) and continuously monitored the solar wind from 1978 to 1982. The next spacecraft to monitor the solar wind at the point was WIND from 1994 to 1998. After April 1998, the WIND spacecraft orbit was changed to circle the Earth and occasionally pass the point. The NASA Advanced Composition Explorer has monitored the solar wind at the point from 1997 to present. In addition to monitoring the solar wind, monitoring the Sun is important to space weather. Because the solar EUV cannot be monitored from the ground, the joint NASA-ESA Solar and Heliospheric Observatory (SOHO) spacecraft was launched and has provided solar EUV images beginning in 1995. 
SOHO is a main source of near-real-time solar data for both research and space weather prediction and inspired the STEREO mission. The Yohkoh spacecraft at LEO observed the Sun from 1991 to 2001 in the X-ray portion of the solar spectrum and was useful for both research and space weather prediction. Data from Yohkoh inspired the Solar X-ray Imager on GOES. Spacecraft with instruments whose primary purpose is to provide data for space weather predictions and applications include the Geostationary Operational Environmental Satellite (GOES) series of spacecraft, the POES series, the DMSP series, and the Meteosat series. The GOES spacecraft have carried an X-ray sensor (XRS) which measures the flux from the whole solar disk in two bands – 0.05 to 0.4 nm and 0.1 to 0.8 nm – since 1974, an X-ray imager (SXI) since 2004, a magnetometer which measures the distortions of the Earth's magnetic field due to space weather, a whole-disk EUV sensor since 2004, and particle sensors (EPS/HEPAD) which measure ions and electrons in the energy range of 50 keV to 500 MeV. Starting sometime after 2015, the GOES-R generation of GOES spacecraft will replace the SXI with a solar EUV imager (SUVI) similar to the one on SOHO and STEREO, and the particle sensor will be augmented with a component to extend the energy range down to 30 eV. The Deep Space Climate Observatory (DSCOVR) satellite is a NOAA Earth observation and space weather satellite that launched in February 2015. Among its features is advance warning of coronal mass ejections. Models Space weather models are simulations of the space weather environment. Models use sets of mathematical equations to describe physical processes. These models take a limited data set and attempt to describe all or part of the space weather environment, or to predict how the environment evolves over time. Early models were heuristic; i.e., they did not directly employ physics. These models require fewer resources than their more sophisticated descendants. Later models use physics to account for as many phenomena as possible. No model can yet reliably predict the environment from the surface of the Sun to the bottom of the Earth's ionosphere. Space weather models differ from meteorological models in that the amount of input is vastly smaller. A significant portion of space weather model research and development in the past two decades has been done as part of the Geospace Environmental Model (GEM) program of the National Science Foundation. The two major modeling centers are the Center for Space Environment Modeling (CSEM) and the Center for Integrated Space Weather Modeling (CISM). The Community Coordinated Modeling Center (CCMC) at the NASA Goddard Space Flight Center is a facility for coordinating the development and testing of research models and for improving and preparing models for use in space weather prediction and application. Modeling techniques include (a) magnetohydrodynamics, in which the environment is treated as a fluid, (b) particle in cell, in which non-fluid interactions are handled within a cell and then cells are connected to describe the environment, (c) first principles, in which physical processes are in balance (or equilibrium) with one another, (d) semi-static modeling, in which a statistical or empirical relationship is described, or a combination of multiple methods. Commercial space weather development During the first decade of the 21st century, a commercial sector emerged that engaged in space weather, serving agency, academia, commercial and consumer sectors.
Space weather providers are typically smaller companies, or small divisions within a larger company, that provide space weather data, models, derivative products and service distribution. The commercial sector includes scientific and engineering researchers as well as users. Activities are primarily directed toward the impacts of space weather upon technology. These include, for example: Atmospheric drag on LEO satellites caused by energy inputs into the thermosphere from solar UV, FUV, Lyman-alpha, EUV, XUV, X-ray, and gamma ray photons as well as by charged particle precipitation and Joule heating at high latitudes; Surface and internal charging from increased energetic particle fluxes, leading to effects such as discharges, single event upsets and latch-up, on LEO to GEO satellites; Disrupted GPS signals caused by ionospheric scintillation leading to increased uncertainty in navigation systems such as aviation's Wide Area Augmentation System (WAAS); Lost HF, UHF and L-band radio communications due to ionospheric scintillation, solar flares and geomagnetic storms; Increased radiation to human tissue and avionics from galactic cosmic rays and SEPs, especially during large solar flares, and possibly bremsstrahlung gamma-rays produced by precipitating radiation belt energetic electrons at altitudes above 8 km; Increased inaccuracy in surveying and oil/gas exploration that uses the Earth's main magnetic field when it is disturbed by geomagnetic storms; Loss of power transmission from GIC surges in the electrical power grid and transformer shutdowns during large geomagnetic storms. Many of these disturbances result in societal impacts that account for a significant part of the national GDP. The concept of incentivizing commercial space weather was first suggested by the idea of a Space Weather Economic Innovation Zone discussed by the American Commercial Space Weather Association (ACSWA) in 2015. The establishment of this economic innovation zone would encourage expanded economic activity developing applications to manage the risks of space weather and would encourage broader research activities related to space weather by universities. It could encourage U.S. business investment in space weather services and products. It promoted the support of U.S. business innovation in space weather services and products by requiring U.S. government purchases of U.S. built commercial hardware, software, and associated products and services where no suitable government capability pre-exists. It also promoted sales of U.S. built commercial hardware, software, and associated products and services to international partners. It further proposed that U.S. built commercial hardware, services, and products be designated as “Space Weather Economic Innovation Zone” activities. Finally, it recommended that U.S. built commercial hardware, services, and products be tracked as Space Weather Economic Innovation Zone contributions within agency reports. In 2015 the U.S. Congress bill HR1561 provided groundwork where social and environmental impacts from a Space Weather Economic Innovation Zone could be far-reaching. In 2016, the Space Weather Research and Forecasting Act (S. 2817) was introduced to build on that legacy. Later, in 2017-2018 the HR3086 Bill took these concepts, included the breadth of material from parallel agency studies as part of the OSTP-sponsored Space Weather Action Program (SWAP), and with bicameral and bipartisan support the 116th Congress (2019) is considering passage of the Space Weather Coordination Act (S141, 115th Congress).
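The GOES XRS 0.1 to 0.8 nm band described earlier in this section also underlies the familiar A/B/C/M/X flare classes, in which each letter spans one decade of peak flux and the numeric suffix is the flux divided by the class threshold. A minimal illustrative sketch of that binning (peak flux assumed to be given in W·m⁻²):

def flare_class(peak_flux):
    # Decade thresholds: X >= 1e-4, M >= 1e-5, C >= 1e-6, B >= 1e-7 W/m^2,
    # with A covering anything weaker (scaled to 1e-8).
    for letter, threshold in (("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7)):
        if peak_flux >= threshold:
            return f"{letter}{peak_flux / threshold:.1f}"
    return f"A{peak_flux / 1e-8:.1f}"

print(flare_class(5.4e-5))  # prints "M5.4"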
American Commercial Space Weather Association On April 29, 2010, the commercial space weather community created the American Commercial Space Weather Association (ACSWA) an industry association. ACSWA promotes space weather risk mitigation for national infrastructure, economic strength and national security. It seeks to: provide quality space weather data and services to help mitigate risks to technology; provide advisory services to government agencies; provide guidance on the best task division between commercial providers and government agencies; represent the interests of commercial providers; represent commercial capabilities in the national and international arena; develop best-practices. A summary of the broad technical capabilities in space weather that are available from the association can be found on their web site http://www.acswa.us. Notable events On December 21, 1806, Alexander von Humboldt observed that his compass had become erratic during a bright auroral event. The Solar storm of 1859 (Carrington Event) caused widespread disruption of telegraph service. The Aurora of November 17, 1882 disrupted telegraph service. The May 1921 geomagnetic storm, one of the largest geomagnetic storms disrupted telegraph service and damaged electrical equipment worldwide. The Solar storm of August 1972, a large SEP event occurred. If astronauts had been in space at the time, the dose could have been life-threatening. The March 1989 geomagnetic storm included multiple space weather effects: SEP, CME, Forbush decrease, ground level enhancement, geomagnetic storm, etc.. The 2000 Bastille Day event coincided with exceptionally bright aurora. April 21, 2002, the Nozomi Mars Probe was hit by a large SEP event that caused large-scale failure. The mission, which was already about 3 years behind schedule, was abandoned in December 2003. The 2003 Halloween solar storms, a series of coronal mass ejections and solar flares in late October and early November 2003 with associated impacts. Citations General bibliography Daglis, Ioannis A.: Effects of Space Weather on Technology Infrastructure. Springer, Dordrecht 2005, . Lilensten, Jean, and Jean Bornarel, Space Weather, Environment and Societies, Springer, . Moldwin, Mark: An Introduction to Space Weather. Cambridge Univ. Press, Cambridge 2008, . Schwenn, Rainer, Space Weather, Living Reviews in Solar Physics 3, (2006), 2, online article. External links Real-time space weather forecast Utah State Univ SWC Real-time GAIM Ionosphere – (real-time model of ionosphere) Space Weather and Radio Propagation. Live and historical data and images with a perspective on how it affects radio propagation Latest Data from STEREO, HINODE and SDO (Large bandwidth) Other links Space Weather FX – Video podcast series on Space Weather from MIT Haystack Observatory ESA's Space Weather Site Space Weather European Network – (ESA) Q-Up Now (Q-up) Space Weather For Today and Tomorrow (SWFTT) Space Weather Today – Space Weather from Russian Institute for Applied Geophysics Document 2::: Fuchsia is an open-source capability-based operating system developed by Google. In contrast to Google's Linux-based operating systems such as ChromeOS and Android, Fuchsia is based on a custom kernel named Zircon. It publicly debuted as a self-hosted git repository in August 2016 without any official corporate announcement. After years of development, its official product launch was in 2021 on the first-generation Google Nest Hub, replacing its original Linux-based Cast OS. 
Etymology Fuchsia is named for the color fuchsia, which is a combination of pink and purple. The name is a reference to two operating systems projects within Apple which influenced team members of the Fuchsia project: Taligent (codenamed "Pink") and iOS (codenamed "Purple"). The color-based naming scheme derives from the colors of index cards which Apple employees used to organize their ideas. The name of the color fuchsia is derived from the Fuchsia plant genus, which is derived from the name of botanist Leonhart Fuchs. History In August 2016, media outlets reported on a mysterious source code repository published on GitHub, revealing that Google was developing a new operating system named Fuchsia. No official announcement was made, but inspection of the code suggested its capability to run on various devices, including "dash infotainment" systems for cars, embedded devices like traffic lights, digital watches, smartphones, tablets, and PCs. Its architecture differs entirely from the Linux-based Android and ChromeOS due in part to its unique Zircon kernel, formerly named Magenta. In May 2017, Ars Technica wrote about Fuchsia's new user interface, an upgrade from its command-line interface at its first reveal in August. A developer wrote that Fuchsia "isn't a toy thing, it's not a 20% Project, it's not a dumping ground of a dead thing that we don't care about anymore". Though users could test Fuchsia, nothing "works", because "it's all a bunch of placeholder interfaces that don't do anything Document 3::: In the field of financial economics, Holding value is an indicator of a theoretical value of an asset that someone has in their portfolio. It is a value which sums the impacts of all the dividends that would be given to the holder in the future, to help them estimate a price to buy or sell assets. Expression The following formula gives the holding value (HV) for a period beginning at i through the period n. where div = dividend r = interest rate (of the money if it is kept at the bank; e.g., 0.02 or 2%) i = the period at the beginning of the estimation n = the last period considered in the window of future dividends. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary concern of space weather as described in the text? A. The effects of solar wind on Earth's atmosphere B. The correlation between sunspots and Earth climate C. The impact of space weather on spacecraft and human activities D. The historical observations of auroras at high latitudes Answer:
C. The impact of space weather on spacecraft and human activities
Relavent Documents: Document 0::: Software analytics is the analytics specific to the domain of software systems taking into account source code, static and dynamic characteristics (e.g., software metrics) as well as related processes of their development and evolution. It aims at describing, monitoring, predicting, and improving the efficiency and effectiveness of software engineering throughout the software lifecycle, in particular during software development and software maintenance. The data collection is typically done by mining software repositories, but can also be achieved by collecting user actions or production data. Definitions "Software analytics aims to obtain insightful and actionable information from software artifacts that help practitioners accomplish tasks related to software development, systems, and users." --- centers on analytics applied to artifacts a software system is composed of. "Software analytics is analytics on software data for managers and software engineers with the aim of empowering software development individuals and teams to gain and share insight form their data to make better decisions." --- strengthens the core objectives for methods and techniques of software analytics, focusing on both software artifacts and activities of involved developers and teams. "Software analytics (SA) represents a branch of big data analytics. SA is concerned with the analysis of all software artifacts, not only source code. [...] These tiers vary from the higher level of the management board and setting the enterprise vision and portfolio management, going through project management planning and implementation by software developers." --- reflects the broad scope including various stakeholders. Aims Software analytics aims at supporting decisions and generating insights, i.e., findings, conclusions, and evaluations about software systems and their implementation, composition, behavior, quality, evolution as well as about the activities of various stakeholders of these proce Document 1::: A canary trap is a method for exposing an information leak by giving different versions of a sensitive document to each of several suspects and seeing which version gets leaked. It could be one false statement, to see whether sensitive information gets out to other people as well. Special attention is paid to the quality of the prose of the unique language, in the hopes that the suspect will repeat it verbatim in the leak, thereby identifying the version of the document. The term was coined by Tom Clancy in his novel Patriot Games, although Clancy did not invent the technique. The actual method (usually referred to as a barium meal test in espionage circles) has been used by intelligence agencies for many years. The fictional character Jack Ryan describes the technique he devised for identifying the sources of leaked classified documents: Each summary paragraph has six different versions, and the mixture of those paragraphs is unique to each numbered copy of the paper. There are over a thousand possible permutations, but only ninety-six numbered copies of the actual document. The reason the summary paragraphs are so lurid is to entice a reporter to quote them verbatim in the public media. If he quotes something from two or three of those paragraphs, we know which copy he saw and, therefore, who leaked it. A refinement of this technique uses a thesaurus program to shuffle through synonyms, thus making every copy of the document unique. 
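To make the synonym-shuffling refinement concrete, the sketch below is a hedged illustration only (the template, word lists and recipient names are all invented): it assigns each recipient a unique combination of synonym choices and then checks which copy a leaked excerpt matches.

import itertools

TEMPLATE = "The {0} report {1} significant {2} in the programme."
SYNONYMS = [("classified", "restricted"), ("describes", "details"), ("delays", "setbacks")]

def make_copies(recipients):
    # Give every recipient a distinct permutation of synonym choices.
    copies = {}
    for recipient, choices in zip(recipients, itertools.product(*SYNONYMS)):
        copies[recipient] = TEMPLATE.format(*choices)
    return copies

def identify_leaker(copies, leaked_text):
    # Return the recipients whose unique wording appears verbatim in the leak.
    return [name for name, text in copies.items() if text in leaked_text]

copies = make_copies(["Alice", "Bob", "Carol"])
print(identify_leaker(copies, "... " + copies["Bob"] + " ..."))  # prints ['Bob']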
Barium meal test According to the book Spycatcher by Peter Wright (published in 1987), the technique is standard practice that has been used by MI5 (and other intelligence agencies) for many years, under the name "barium meal test", named for the medical procedure. A barium meal test is more sophisticated than a canary trap because it is flexible and may take many different forms. However, the basic premise is to reveal a supposed secret to a suspected enemy (but nobody else) then monitor whether there is evidence of the fake infor Document 2::: Dysprosium phosphide is an inorganic compound of dysprosium and phosphorus with the chemical formula DyP. Synthesis The compound can be obtained by the reaction of phosphorus and dysprosium at high temperature. 4 Dy + P4 → 4 DyP Physical properties DyP has a NaCl structure (a=5.653 Å), where dysprosium is +3 valence. Its band gap is 1.15 eV, and the Hall mobility (μH) is 8.5 cm3/V·s. DyP forms crystals of a cubic system, space group Fm3m. Uses The compound is a semiconductor used in high power, high frequency applications and in laser diodes. References Document 3::: Malic acid is an organic compound with the molecular formula . It is a dicarboxylic acid that is made by all living organisms, contributes to the sour taste of fruits, and is used as a food additive. Malic acid has two stereoisomeric forms (L- and D-enantiomers), though only the L-isomer exists naturally. The salts and esters of malic acid are known as malates. The malate anion is a metabolic intermediate in the citric acid cycle. Etymology The word 'malic' is derived from Latin , meaning 'apple'. The related Latin word , meaning 'apple tree', is used as the name of the genus Malus, which includes all apples and crabapples; and is the origin of other taxonomic classifications such as Maloideae, Malinae, and Maleae. Biochemistry L-Malic acid is the naturally occurring form, whereas a mixture of L- and D-malic acid is produced synthetically. Malate plays an important role in biochemistry. In the C4 carbon fixation process, malate is a source of CO2 in the Calvin cycle. In the citric acid cycle, (S)-malate is an intermediate, formed by the addition of an -OH group on the si face of fumarate. It can also be formed from pyruvate via anaplerotic reactions. Malate is also synthesized by the carboxylation of phosphoenolpyruvate in the guard cells of plant leaves. Malate, as a double anion, often accompanies potassium cations during the uptake of solutes into the guard cells in order to maintain electrical balance in the cell. The accumulation of these solutes within the guard cell decreases the solute potential, allowing water to enter the cell and promote aperture of the stomata. In food Malic acid was first isolated from apple juice by Carl Wilhelm Scheele in 1785. Antoine Lavoisier in 1787 proposed the name acide malique, which is derived from the Latin word for apple, mālum—as is its genus name Malus. In German it is named Äpfelsäure (or Apfelsäure) after plural or singular of a sour thing from the apple fruit, but the salt(s) are called Malat(e). Malic acid is the The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the chemical formula of dysprosium phosphide? A. DyP B. Dy3P C. Dy2P3 D. DyP2 Answer:
A. DyP
Relavent Documents: Document 0::: HD 174179 is a single star in the northern constellation of Lyra. It has a white hue and is dimly visible to the naked eye with an apparent visual magnitude of 6.06. The star is located at a distance of approximately 1,280 light years from the Sun based on parallax, but is drifting closer with a radial velocity of −15 km/s. The star is an estimated 33 million years old with a low projected rotational velocity of 5 km/s. It has 6.6 times the mass of the Sun and is radiating 2,036 times the Sun's luminosity from its photosphere at an effective temperature of 17,900 K. HD 174179 is a Be star, showing Balmer emission lines in its spectrum at times. It has a stellar classification of B3IVp, with 'p' indicating spectral features of a shell star. A 1976 study found no emission features, but the star was reported to show emission lines again in later studies. References Document 1::: HD 38282 (R144, BAT99-118, Brey 89) is a massive spectroscopic binary star in the Tarantula Nebula (Large Magellanic Cloud), consisting of two hydrogen-rich Wolf-Rayet stars. R144 is located near the R136 cluster at the center of NGC 2070 and may have been ejected from it after an encounter with another massive binary. It shares a common X-ray cavity with the R146 (HD 269926) and R147 (HD 38344) Wolf-Rayet star systems. Both components of R144 are detected in the spectrum and both are WNh stars, very hot stars with strong emission lines due to their strong stellar winds. The orbit has not been determined, but is likely to be between two and six months long, possibly more if it is eccentric. The primary, slightly hotter, star is observed to be the less massive of the two. Each star is amongst the most luminous known, but the exact parameters of each has not been determined. Their combined luminosity is around to . The masses have not yet been calculated accurately from the orbital parameters, but the stars have been modelled to initially have been around and . Depending on their exact age, this has now decreased to between and for the primary and and for the secondary. See also List of most massive stars Document 2::: Ring is a dynamically typed, general-purpose programming language. It can be embedded in C/C++ projects, extended using C/C++ code or used as a standalone language. The supported programming paradigms are imperative, procedural, object-oriented, functional, meta, declarative using nested structures, and natural programming. The language is portable (Windows, Linux, macOS, Android, WebAssembly, etc.) and can be used to create console, GUI, web, game and mobile applications. History In 2009, Mahmoud Samir Fayed created a minor domain-specific language called Supernova that focuses on User interface (UI) creation and uses some ideas related to Natural Language Programming, then he realized the need for a new language that is general-purpose and can increase the productivity of natural language creation. Ring aims to offer a language focused on helping the developer with building natural interfaces and declarative DSLs. Goals The general goals behind Ring: Applications programming language. Productivity and developing high quality solutions that can scale. Small and flexible language that can be embedded in C/C++ projects. Simple language that can be used in education and introducing Compiler/VM concepts. General-Purpose language that can be used for creating domain-specific libraries, frameworks and tools. 
Practical language designed for creating the next version of the Programming Without Coding Technology software. Examples Hello World program The same program can be written using different styles. Here is an example of the standard "Hello, World!" program using four different styles. The first style: see "Hello, World!" The second style: put "Hello, World!" The third style: print("Hello, World!") Another style: similar to xBase languages like Clipper and Visual FoxPro ? "Hello, World!" Change the keywords and operators Ring supports changing the language keywords and operators. This could be done many times in the same source file, and is us Document 3::: d5SICS is an artificial nucleoside containing 6-methylisoquinoline-1-thione-2-yl group instead of a base. It pairs up with dNaM in a hydrophobic interaction. It was not able to be removed by the error-correcting machinery of the E. coli into which it was inserted. The pairing of d5SICS–dNaM is mediated by packing and hydrophobic forces instead of hydrogen bonding, which occurs in natural base pairs. Therefore, in free DNA, rings of d5SICS and dNaM are placed in parallel planes instead of the same plane. The d5SICS-dNaM pairing replaced an older dNaM-dTPT3 pairing. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the premise of the television series The 100? A. A group of grounders tries to reclaim Earth from the Ark survivors. B. Juvenile delinquents are sent back to Earth to determine if it is habitable after a nuclear apocalypse. C. A family struggles to survive in a post-apocalyptic world dominated by AI. D. A group of space travelers seeks a new planet after Earth is destroyed. Answer:
B. Juvenile delinquents are sent back to Earth to determine if it is habitable after a nuclear apocalypse.
Relavent Documents: Document 0::: The T-3000 is a fictional character and the main antagonist of Terminator Genisys, the fifth installment in the Terminator series, portrayed by Jason Clarke. In the film, the T-3000 is an alternate timeline counterpart of Skynet's (portrayed by Matt Smith) nemesis John Connor (also portrayed by Clarke), created after Skynet infects a variant of Connor with nanotechnology and fractures the timeline. The T-3000 also serves as a foil personality to "Guardian" (a reprogrammed T-800 portrayed by Arnold Schwarzenegger), a protagonist who is somewhat similar to the T-3000 but also opposite in many ways, of their relationship dynamics with Sarah Connor (portrayed by Emilia Clarke) and Kyle Reese (portrayed by Jai Courtney). The T-3000's sole mission is to protect and ensure the ultimate survival of Skynet, which seeks to eliminate the human race with its global machine network. The T-3000 describes itself as neither machine nor human; rather, it is a hybrid nanotechnological cyborg. Producer David Ellison explains that the title Terminator Genisys "[is] in reference to genesis, which is in reference to the singularity and the man-machine hybrid that John Connor ends up being." The T-3000 returns in the 2017 video game Terminator Genisys: Future War, a direct sequel to Genisys produced following the cancellation of the film's planned sequels in favor of another alternate timeline-set sequel, Terminator: Dark Fate (2019). Background In a desperate effort to ensure its survival, the rogue artificial intelligence Skynet creates an avatar for itself in the form of a T-5000 (Matt Smith). This Terminator travels through many timelines searching for a way to defeat the Human Resistance and ultimately infiltrates it under the guise of a fighter named Alex. "Alex" is present as a soldier when John Connor and Kyle Reese (Jai Courtney) discover Skynet's time machine at the end of the war with the machines. As Kyle is being sent back in time to protect John's mother Sarah Connor (Emi Document 1::: Qatar Sustainability Assessment System (QSAS) is a green building certification system developed for the State of Qatar. The primary objective of Qatar Sustainability Assessment System [QSAS] is to create a sustainable built environment that minimizes ecological impact while addressing the specific regional needs and environment of Qatar. Overview QSAS was developed by the Gulf Organisation for Research and Development (GORD) in collaboration with the T.C. Chan Center for Building Simulation and Energy Studies at the University of Pennsylvania . Since its deployment in 2009, over 128 buildings in Qatar have been certified through QSAS. In December 2010, QSAS was adopted into the curriculum of the environmental design faculty at King Fahd University and Qatar University. In March, 2011 the State of Qatar integrated QSAS into the Qatar Construction Specifications [QCS] making the implementation of certain criteria mandatory for buildings developed in Qatar. The development of the rating system took advantage of a comprehensive review of combined best practices employed by a mix of established international and regional rating systems. This review has been performed while taking into consideration the needs that are specific to Qatar’s local environment, culture, and policies. This has led to adaptations and additions to sustainability criteria. Measurements for the rating system are designed to be performance-based and quantifiable. 
The result is a performance-based sustainable building rating system customized to the unique conditions and requirements of the State of Qatar. Rating System QSAS consists of a series of sustainable categories and criteria, each with a direct impact on environmental stress mitigation. Each category measures a different aspect of the project’s environmental impact. The categories define these broad impacts and address ways in which a project can mitigate the negative environmental effects. These categories are then broken down into spec Document 2::: Eucalyptus rigens, commonly known as saltlake mallee, is a species of sprawling mallee that is endemic to the southwest of Western Australia. It has smooth bark, lance-shaped adult leaves, flower buds in groups of three on a flattened peduncle and sessile, ribbed fruit. Description Eucalyptus rigens is a sprawling, sometimes almost prostrate mallee that typically grows to a height of . It has smooth grey over white bark that peels in strips. Young plants and coppice regrowth have leaves that are up to long and wide. Adult leaves are the same shade of greyish to light green on both sides, lance-shaped, up to long and wide, firm, stiff and often erect. The flower buds are arranged in leaf axils in groups of three on a strongly flattened peduncle up to long. Mature buds are oval, ribbed, up to long and wide with a ribbed, conical operculum. Flowering occurs from July to September and the flowers are creamy white. The fruit is a woody, sessile, cup-shaped or conical capsule long and wide with the valves near rim level. Taxonomy and naming Eucalyptus rigens was first formally described by the botanists Ian Brooker and Stephen Hopper in 1989 in the journal Nuytsia from material they collected in the Truslove Nature Reserve near Grass Patch. The specific epithet (rigens) is a Latin word meaning "stiff" or "rigid", referring to the leaves. Distribution and habitat Saltlake mallee grows in sandy soils around the edges of salt lakes, usually in mallee shrubland, to the north and north-east of Esperance. Conservation status This mallee is classified as "not threatened" by the Western Australian Government Department of Parks and Wildlife. See also List of Eucalyptus species References Document 3::: The Antwerp Water Works () or AWW produces water for the city of Antwerp (Belgium) and its surroundings. The AWW has a yearly production of and a revenue of 100 million euro. History Between 1832 and 1892, Antwerp was struck every ten to fifteen years by a major cholera epidemic which each time claimed a few thousand lives and lasted for about two years. In 1866 the cholera epidemic infected about 5000 people and about 3000 people died. Between 1861 and 1867 several propositions were done for a water supply for Antwerp. In 1873, under mayor Leopold De Wael, it was decided that a concession should be granted to secure the water supply of the city. On 25 June 1873, a concession of 50 years was granted to the English engineers, Joseph Quick from London, together with John Dick, to organize the water supply of Antwerp. Due to a lack of funds and a dispute between the partners this venture stranded. In 1879, the English engineering company Easton & Anderson took over the yards and the concession. Within two years they succeeded in finishing the work. An exploitation society was established: the Antwerp Waterworks Company Limited, a society according to English law which would be in charge of the exploitation from 1881 up to 1930. 
The water was won from the Nete river at the bridge of Walem. It was purified according to an original method: an iron filter. In the period 1881 up to 1908 the system was repaired repeatedly, until eventually a new method of filtration was chosen which was a combination of fast with slow sand filtration. This method of filtration is still being used today for the treatment of a large part of the raw material, now water from the Albert Canal. In 1930, the concession came to an end, as no agreement could be reached with the English owners concerning a new construction in which the municipalities surrounding Antwerp would be included. The city of Antwerp took over the company and founded a mixed intermunicipal company (private and public participation) in which the English Waterworks kept a minority participation. The remaining shares were in the hands of the city of Antwerp and the surrounding municipalities of Berchem, Boechout, Borgerhout, Deurne, Edegem, Ekeren, Hoboken, Hove, Mortsel, Kontich and Wilrijk. The English withdrew from the company in 1965. In the same year a new production site in Oelegem was established and a new office building in Antwerp. During the dry summer of 1976 it became clear that the reserve capacity needed to be expanded and in 1982 the reservoir of Broechem was inaugurated. The second concession ended after 53 years, so in 1983 a new concession to the AWW was granted. In 2003 Brabo Industrial Water Solutions (BIWS) started, a consortium with Ondeo Industrial Solutions, to provide water tailored for the industry. In 2004 the RI-ANT project started (together with Aquafin), which takes over the management and the maintenance of the sewerage network of Antwerp. See also EU water policy Public water supply Water purification References Sources AWW AWW History (Dutch) The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What major health crisis affected Antwerp every ten to fifteen years between 1832 and 1892, resulting in thousands of deaths? A. Typhoid fever B. Cholera C. Influenza D. Tuberculosis Answer:
B. Cholera
Relavent Documents: Document 0::: Dr. Amanda Bradford is a marine mammal biologist who is currently researching cetacean population dynamics for the National Marine Fisheries Service of the National Oceanic and Atmospheric Administration. Bradford is currently a Research Ecologist with the Pacific Islands Fisheries Science Center's Cetacean Research Program. Her research primarily focuses on assessing populations of cetaceans, including evaluating population size, health, and impacts of human-caused threats, such as fisheries interactions. Bradford is a cofounder and organizer of the Women in Marine Mammal Science (WIMMS) Initiative. Education Undergraduate education Bradford received her Bachelor of Science in Marine Biology from Texas A&M University in Galveston, Texas in 1998. She worked in the lab of Bernd Würsig. While Bradford was an undergraduate, she was a volunteer at the Texas Marine Mammal Stranding Network from 1994 to 1998. Bradford, monitored live stranded delphinids and performed basic husbandry and life-support for bottlenose dolphins and false killer whales. Bradford also participated in marine mammal necropsies. During her senior year, Bradford began analyzing photo-identification data from the western North Pacific population of gray whales. Shortly after graduation, Bradford traveled to northeastern Sakhalin Island in the Russian Far East to join a collaborative Russia-U.S. field study of these whales on their primary feeding ground. Once Bradford returned from the field, she spent a year as a research assistant for this project based at the Southwest Fisheries Science Center in La Jolla, California. Graduate education Bradford attended the University of Washington, School of Aquatic and Fishery Sciences (SAFS) in Seattle, Washington, receiving her Masters of Science in 2003 and then Doctorate of Philosophy (PhD) in 2011. Bradford studied under the late Glenn VanBlaricom for both degrees. During her time at SAFS, Bradford spent 10 summers in the Russian Far East studying the endangered western population of gray whales. Bradford's graduate research focused on estimating survival, abundance, anthropogenic impacts, and body condition of these whales. Her results showed that calf survival in the population was notably low, the population numbered only around 100 whales in the early 2000s, whales were vulnerable to fishing gear entanglement and vessel collisions, and that body condition varied by season and year. Lactating females where found to have the poorest body condition and did not always appear to recover by the end of a feeding season. Bradford also studied the age at sexual maturity and the birth-interval of the western gray whales, both important parameters for understanding the dynamics of this endangered population. Bradford spent a lot of time as a graduate student working on photo-identification of the western gray whale population and published a paper on how to identify calves based on their barnacle scars and pigmentation patterns. Academic awards and honors Bradford received the National Marine Fisheries Service - Sea Grant Joint Fellowship Program in Population and Ecosystem Dynamics and Marine Resource Economics. This fellowship is designed to support and train highly qualified PhD students to pursue careers in these fields. Career and research Graduate research and early career The majority of Bradford's work while completing her PhD focused on the western gray whale population. 
While the population is currently listed as endangered on the Red List of the International Union for Conservation of Nature (IUCN) and considered to be increasing, when Bradford was researching them they were listed as critically endangered. Much of what is known about the western gray whales is a result of the work of Bradford and her international colleagues. Western Gray Whale Advisory Panel - International Union for Conservation of Nature Bradford was responsible for synthesizing data and assisting with population analyses for the Western Gray Whale Advisory Panel between 2007 and 2011. Bradford also participated in two ship-based western gray whale satellite tagging surveys off Sakhalin Island, Russia. Western Gray Whale Project, Russia-U.S. Collaboration Bradford participated and eventually lead western gray whale boat-based photo-identification and genetic-monitoring surveys between 1998 and 2010, which included her putting in over 1,500 hours of small boat work. Further, Bradford collected gray whale behavioral data and theodolite-tracked movement data. In addition to the gray whale work, Bradford collected information on spotted seals in the early years of the collaboration. Pacific Islands Fisheries Science Center Shortly before graduating with her PhD, Bradford took a position at the Pacific Islands Fisheries Science Center, a part of NOAA Fisheries. Bradford is in the Cetacean Research Program of the Protected Species Division, where she studies population dynamics and demography, line-transect abundance estimation, mark-recapture parameter estimation, and health and injury assessment. Bradford's work has been relevant to estimating thee bycatch of false killer whales in the Hawaii-based deep-set longline fishery. False killer whales are known for depredating catch and bait in this fishery and due to this behavior, they are one of the most often accidentally caught marine mammals. Bradford was involved in a study of false killer whale behavior and interactions with the fisheries in an effort to try and reduce the bycatch of this species and achieve conservation goals. Bradford has also been working on a population study of Megaptera novaeangeliae, the humpback whale, and coauthored a paper in 2020 on a newfound breeding ground for the endangered western North Pacific humpback whale population off the Marina Archipelago. In order to promote the recovery of this population, it is vital to know the full extent of their breeding grounds to be able to assess and eliminate threats. Bradford regularly participates in ship-based and small boat surveys for cetaceans in the Pacific Islands region. She also plays a leading role in efforts to incorporate unmanned aircraft systems, automated photo-identification using machine learning, and open data science practices into the data collection and analysis workflows of the Cetacean Research program. She regularly gives presentations, contributes to web stories, and otherwise communicates to stakeholders and members of the public. Outreach and service Women in Marine Mammal Science Bradford is a cofounder and organizer of Women in Marine Mammal Science (WIMMS), an initiative aimed at amplifying women and helping them advance their careers in the field of marine mammal science. The initiative was formed following a workshop in 2017 at the Society for Marine Mammalogy Biennial Conference on the Biology of Marine Mammals. 
The workshop focused on identifying barriers that women face in the marine mammal science field and provided strategies to overcome these barriers. As a part of WIMMS, Bradford conducted a survey and analyzed results on gender-specific experiences in marine mammal science. In 2020, Bradford signed a petition to the Society of Marine Mammalogy asking for them to help eliminate unpaid research positions within the field as the prevalence of these positions decreases the accessibility of the field and limits the diversity and inclusion. Society for Marine Mammalogy Bradford served as the Student-Member-at-Large for the Society for Marine Mammalogy's Board of Governors from 2006 to 2008. Bradford served as the student representative, facilitated student participation in the Society, and promoted the growth of the student chapters. Select publications Bradford A. et al. (2021). Line-transect abundance estimates of cetaceans in U.S. waters around the Hawaiian Islands in 2002, 2010 and 2017. U.S. Department of Commerce, NOAA Tech. Memo. NMFS-PIFSC-115.52pp. Bradford A. et al. (2020). Abundance estimates of false killers whales in Hawaiian waters and the broader central Pacific. U.S. Department of Commerce, NOAA Tech. Memo. NMFS-PIFSC-104.78pp Hill M. and Bradford A. et al.(2020). Found: a missing breeding ground for endangered western North Pacific humpback whales in the Mariana Archipelago. Endangered Species Research. 91–103. 10.3354/esr01010. Bradford A. et al. (2018). Abundance estimates for management of endangered false killer whales in the main Hawaiian Islands. Endangered Species Research 36:297-313. Weller D. and Bradford A. et al. (2018). Prevalence of Killer Whale Tooth Rake Marks on Gray Whales off Sakhalin Island, Russia. Aquatic Mammals. 44. 643–652. 10.1578/AM.44.6.2018.643. Bradford A. Forney K, Oleson E, Barlow J. (2017). Abundance estimates of cetaceans from a line-transect survey within the U.S. Hawaiian Islands Exclusive Economic Zone. Fishery Bulletin 115:129-142. Bradford A. Forney K, Oleson E, Barlow J. (2014). Accounting for subgroup structure in line-transect abundance estimates of false killer whales (Pseudorca crassidens) in Hawaiian waters. PLoS ONE 9:e90464. Bradford A. et al. (2012). Leaner leviathans: Body condition variation in a critically endangered whale population. Journal of Mammalogy. 93. 251–266. 10.1644/11-MAMM-A-091.1. Bradford A, Weller D, Burdin A, Brownell R. (2011). Using barnacle and pigmentation characteristics to identify gray whale calves on their feeding grounds. Marine Mammal Science - MAR MAMMAL SCI. 27. 10.1111/j.1748-7692.2010.00413.x. Bradford A. et al. (2009). Anthropogenic scarring of western gray whales (Eschrichtius robustus). Marine Mammal Science 25:161-175. Document 1::: The knower paradox is a paradox belonging to the family of the paradoxes of self-reference (like the liar paradox). Informally, it consists in considering a sentence saying of itself that it is not known, and apparently deriving the contradiction that such sentence is both not known and known. History A version of the paradox occurs already in chapter 9 of Thomas Bradwardine’s Insolubilia. In the wake of the modern discussion of the paradoxes of self-reference, the paradox has been rediscovered (and dubbed with its current name) by the US logicians and philosophers David Kaplan and Richard Montague, and is now considered an important paradox in the area. 
The paradox bears connections with other epistemic paradoxes such as the hangman paradox and the paradox of knowability. Formulation The notion of knowledge seems to be governed by the principle that knowledge is factive: (KF): If the sentence ' P ' is known, then P (where we use single quotes to refer to the linguistic expression inside the quotes and where 'is known' is short for 'is known by someone at some time'). It also seems to be governed by the principle that proof yields knowledge: (PK): If the sentence ' P ' has been proved, then ' P ' is known Consider however the sentence: (K): (K) is not known Assume for reductio ad absurdum that (K) is known. Then, by (KF), (K) is not known, and so, by reductio ad absurdum, we can conclude that (K) is not known. Now, this conclusion, which is the sentence (K) itself, depends on no undischarged assumptions, and so has just been proved. Therefore, by (PK), we can further conclude that (K) is known. Putting the two conclusions together, we have the contradiction that (K) is both not known and known. Solutions Since, given the diagonal lemma, every sufficiently strong theory will have to accept something like (K), absurdity can only be avoided either by rejecting one of the two principles of knowledge (KF) and (PK) or by rejecting classical logic (which valida Document 2::: The word substrate comes from the Latin sub - stratum meaning 'the level below' and refers to any material existing or extracted from beneath the topsoil, including sand, chalk and clay. The term is also used for materials used in building foundations or else incorporated into plaster, brick, ceramic and concrete components, which are sometimes called 'filler' products. See also Firestop Sealant Caulking Paint References Document 3::: The Middle East Youth Initiative is a program at the Wolfensohn Center for Development, housed in the Global Economy and Development program at the Brookings Institution. It was launched in July 2006 as a joint effort between the Wolfensohn Center and the Dubai School of Government. The Initiative performs vigorous research on issues pertaining to regional youth (ages 15–24) on the topics of Youth Exclusion, education, employment, marriage, housing, and credit, and on the ways in which all of these elements are linked during young people’s experience of waithood. In addition to research and policy recommendation, the Initiative serves as a hub for networking between policymakers, regional actors in development, government officials, representatives from the private sector, and youth. Current fellows with the Initiative include Djavad Salehi-Isfahani. References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is Dr. Amanda Bradford's primary research focus as a marine mammal biologist? A. Studying the effects of climate change on marine ecosystems B. Evaluating population dynamics and health of cetaceans C. Investigating deep-sea fish species D. Analyzing coral reef restoration techniques Answer:
B. Evaluating population dynamics and health of cetaceans
Relavent Documents: Document 0::: The death clock calculator is a conceptual idea of a predictive algorithm that uses personal socioeconomic, demographic, or health data (such as gender, age, or BMI) to estimate a person's lifespan and provide an estimated time of death. Recent research In December 2023, Nature Computational Science published a paper introducing the life2vec algorithm, developed as part of a scientific research project. Life2vec is a transformer-based model, similar to those used in natural language processing (e.g., ChatGPT or Llama), trained to analyze life trajectories. The project leverages rich registry data from Denmark, covering six million individuals, with event data related to health, demographics, and labor, recorded at a day-to-day resolution. While life2vec aims to provide insights into early mortality risks and life trends, it does not predict specific death dates, and it is not publicly available as of 2024. Some media outlets and websites misrepresented the intent of life2vec by calling it a death clock calculator, leading to confusion and speculation about the capabilities of the algorithm. This misinterpretation has also led to fraudulent calculators pretending to use AI-based predictions, often promoted by scammers to deceive users. Document 1::: This article provides a list of sites in the United Kingdom which are recognised for their importance to biodiversity conservation. The list is divided geographically by region and county. Inclusion criteria Sites are included in this list if they are given any of the following designations: Sites of importance in a global context Biosphere Reserves (BR) World Heritage Sites (WHS) (where biological interest forms part of the reason for designation) all Ramsar Sites Sites of importance in a European context all Special Protection Areas (SPA) all Special Area of Conservation (SAC) all Important Bird Areas (IBA) Sites of importance in a national context all sites which were included in the Nature Conservation Review (NCR site) all national nature reserves (NNR) Sites of Special Scientific Interest (SSSI), where biological interest forms part of the justification for notification (SSSIs which are designated purely for their geological interest are not included unless they meet other criteria) England Southwest Cornwall Devon Dorset Somerset Avon Wiltshire Gloucestershire Southeast Bedfordshire Berkshire Buckinghamshire Essex Greater London Hampshire Hertfordshire Kent Oxfordshire Surrey Sussex Rye Harbour Nature Reserve Midlands Derbyshire Herefordshire Leicestershire Northamptonshire Shropshire Staffordshire Nottinghamshire Warwickshire Worcestershire East Anglia Northwest Cheshire Northeast Lincolnshire Yorkshire County Durham Wales Anglesey Scotland Northeast Scotland Shetland Unst Orkney Outer Hebrides Lewis and Harris North Uist, South Uist and Benbecula Other islands See also Conservation in the United Kingdom National Nature Reserves in the United Kingdom Sites of Special Scientific Interest References Document 2::: Habeas data is a writ and constitutional remedy available in certain nations. The literal translation from Latin of habeas data is "[we command] you have the data," or "you [the data subject] have the data." The remedy varies from country to country, but in general, it is designed to protect, by means of an individual complaint presented to a constitutional court, the data, image, privacy, honour, information self-determination and freedom of information of a person. 
Habeas data can be sought by any citizen against any manual or automated data register to find out what information is held about his or her person. That person can request the rectification, update or the destruction of the personal data held. The legal nature of the individual complaint of habeas data is that of voluntary jurisdiction, which means that the person whose privacy is being compromised can be the only one to present it. The courts do not have any power to initiate the process by themselves. History Habeas data is an individual complaint filed before a constitutional court and related to the privacy of personal data. The first such complaint is the habeas corpus (which is roughly translated as "[we command] you have the body"). Other individual complaints include the writ of mandamus (USA), amparo (Spain, Mexico and Argentina), and respondeat superior (Taiwan). The habeas data writ itself has a very short history, but its origins can be traced to certain European legal mechanisms that protected individual privacy. In particular, certain German constitutional rights can be identified as the direct progenitors of the habeas data right. In particular, the right to information self-determination was created by the German constitutional tribunal by interpretation of the existing rights of human dignity and personality. This is a right to know what type of data are stored in manual and automatic databases about an individual, and it implies that there must be transparency on the gathering and Document 3::: A kernel debugger is a debugger present in some operating system kernels to ease debugging and kernel development by the kernel developers. A kernel debugger might be a stub implementing low-level operations, with a full-blown debugger such as GNU Debugger (gdb), running on another machine, sending commands to the stub over a serial line or a network connection, or it might provide a command line that can be used directly on the machine being debugged. Operating systems and operating system kernels that contain a kernel debugger: The Windows NT family includes a kernel debugger named KD, which can act as a local debugger with limited capabilities (reading and writing kernel memory, and setting breakpoints) and can attach to a remote machine over a serial line, IEEE 1394 connection, USB 2.0 or USB 3.0 connection. The WinDbg GUI debugger can also be used to debug kernels on local and remote machines. BeOS and Haiku include a kernel debugger usable with either an on-screen console or over a serial line. It features various commands to inspect memory, threads, and other kernel structures. In Haiku, the debugger is called "Kernel Debugging Land" (KDL). DragonFly BSD Linux kernel; No kernel debugger was included in the mainline Linux tree prior to version 2.6.26-rc1 because Linus Torvalds didn't want a kernel debugger in the kernel. KDB (local) KGDB (remote) MDB (local/remote) NetBSD has DDB for local and KGDB for remote. macOS has ddb for local and kdp for remote. OpenBSD includes ddb which has a syntax is similar to GNU Debugger. References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main function of a kernel debugger in operating systems? A. To provide a graphical user interface for applications B. To ease debugging and kernel development by developers C. To manage system resources and processes D. To enhance the performance of the operating system Answer:
B. To ease debugging and kernel development by developers
Relavent Documents: Document 0::: Runway end identifier lights (REIL) (ICAO identifies these as Runway Threshold Identification Lights) are installed at many airports to provide rapid and positive identification of the approach end of a particular runway. The system consists of a pair of synchronized flashing lights located laterally on each side of the runway threshold. REILs may be either omnidirectional or unidirectional facing the approach area. They are effective for: Identification of a runway surrounded by a preponderance of other lighting Identification of a runway which lacks contrast with surrounding terrain Identification of a runway during reduced visibility The International Civil Aviation Organization (ICAO) recommends that: Runway threshold identification lights should be installed: at the threshold of a non-precision approach runway when additional threshold conspicuity is necessary or where it is not practicable to provide other approach lighting aids; and where a runway threshold is permanently displaced from the runway extremity or temporarily displaced from the normal position and additional threshold conspicuity is necessary. Runway threshold identification lights shall be located symmetrically about the runway centre line, in line with the threshold and approximately 10 meters outside each line of runway edge lights. Runway threshold identification lights should be flashing white lights with a flash frequency between 60 and 120 per minute. The lights shall be visible only in the direction of approach to the runway. References External links FAA Aeronautical Information Manual Document 1::: In microbiology, the term isolation refers to the separation of a strain from a natural, mixed population of living microbes, as present in the environment, for example in water or soil, or from living beings with skin flora, oral flora or gut flora, in order to identify the microbe(s) of interest. Historically, the laboratory techniques of isolation first developed in the field of bacteriology and parasitology (during the 19th century), before those in virology during the 20th century. History The laboratory techniques of isolating microbes first developed during the 19th century in the field of bacteriology and parasitology using light microscopy. 1860 marked the successful introduction of liquid medium by Louis Pasteur. The liquid culture pasteur developed allowed for the visulization of promoting or inhibiting growth of specific bacteria. This same technique is utilized today through various mediums like Mannitol salt agar, a solid medium. Solid cultures were developed in 1881 when Robert Koch solidified the liquid media through the addition of agar Proper isolation techniques of virology did not exist prior to the 20th century. The methods of microbial isolation have drastically changed over the past 50 years, from a labor perspective with increasing mechanization, and in regard to the technologies involved, and with it speed and accuracy. General techniques In order to isolate a microbe from a natural, mixed population of living microbes, as present in the environment, for example in water or soil flora, or from living beings with skin flora, oral flora or gut flora, one has to separate it from the mix. Traditionally microbes have been cultured in order to identify the microbe(s) of interest based on its growth characteristics. 
Depending on the expected density and viability of microbes present in a liquid sample, physical methods to increase the gradient as for example serial dilution or centrifugation may be chosen. In order to isolate organisms in mat Document 2::: Live blood analysis (LBA), live cell analysis, Hemaview or nutritional blood analysis is the use of high-resolution dark field microscopy to observe live blood cells. Live blood analysis is promoted by some alternative medicine practitioners, who assert that it can diagnose a range of diseases. There is no scientific evidence that live blood analysis is reliable or effective, and it has been described as a fraudulent means of convincing people that they are ill and should purchase dietary supplements. Live blood analysis is not accepted in laboratory practice and its validity as a laboratory test has not been established. There is no scientific evidence for the validity of live blood analysis, it has been described as a pseudoscientific, bogus and fraudulent medical test, and its practice has been dismissed by the medical profession as quackery. The field of live blood microscopy is unregulated, there is no training requirement for practitioners and no recognised qualification, no recognised medical validity to the results, and proponents have made false claims about both medical blood pathology testing and their own services, which some have refused to amend when instructed by the Advertising Standards Authority. It has its origins in the now-discarded theories of pleomorphism promoted by Günther Enderlein, notably in his 1925 book Bakterien-Cyklogenie. In January 2014 prominent live blood proponent and teacher Robert O. Young was arrested and charged for practising medicine without a license, and in March 2014 Errol Denton, a former student of his, a UK live blood practitioner, was convicted on nine counts in a rare prosecution under the Cancer Act 1939, followed in May 2014 by another former student, Stephen Ferguson. Overview Proponents claim that live blood analysis provides information "about the state of the immune system, possible vitamin deficiencies, amount of toxicity, pH and mineral imbalance, areas of concern and weaknesses, fungus and yeast." Some Document 3::: The word chemistry derives from the word alchemy, which is found in various forms in European languages. The word 'alchemy' itself derives from the Arabic word al-kīmiyāʾ (), wherein al- is the definite article 'the'. The ultimate origin of the word is uncertain, but the Arabic term kīmiyāʾ () is likely derived from either the Ancient Greek word khēmeia () or the similar khēmia (). The Greek term khēmeia, meaning "cast together" may refer to the art of alloying metals, from root words χύμα (khúma, "fluid"), from χέω (khéō, "I pour"). Alternatively, khēmia may be derived from the ancient Egyptian name of Egypt, khem or khm, khame, or khmi, meaning "blackness", likely in reference to the rich dark soil of the Nile river valley. Overview There are two main views on the derivation of the Greek word. According to one, the word comes from the greek χημεία (chimeía), pouring, infusion, used in connexion with the study of the juices of plants, and thence extended to chemical manipulations in general; this derivation accounts for the old-fashioned spellings "chymist" and "chymistry". 
The other view traces it to khem or khame, hieroglyph khmi, which denotes black earth as opposed to barren sand, and occurs in Plutarch as χημία (chimía); on this derivation alchemy is explained as meaning the "Egyptian art". The first occurrence of the word is said to be in a treatise of Julius Firmicus, an astrological writer of the 4th century, but the prefix al there must be the addition of a later Arabic copyist. In English, Piers Plowman (1362) contains the phrase "experimentis of alconomye", with variants "alkenemye" and " alknamye". The prefix al began to be dropped about the middle of the 16th century (further details of which are given below). Egyptian origin According to the Egyptologist Wallis Budge, the Arabic word al-kīmiyaʾ actually means "the Egyptian [science]", borrowing from the Coptic word for "Egypt", kēme (or its equivalent in the Mediaeval Bohairic dialect of Coptic, khēme). This Coptic word derives from Demotic kmỉ, itself from ancient Egyptian kmt. The ancient Egyptian word referred to both the country and the colour "black" (Egypt was the "Black Land", by contrast with the "Red Land", the surrounding desert); so this etymology could also explain the nickname "Egyptian black arts". However, according to Mahn, this theory may be an example of folk etymology. Assuming an Egyptian origin, chemistry is defined as follows: Chemistry, from the ancient Egyptian word "khēmia" meaning transmutation of earth, is the science of matter at the atomic to molecular scale, dealing primarily with collections of atoms, such as molecules, crystals, and metals. Thus, according to Budge and others, chemistry derives from an Egyptian word khemein or khēmia, "preparation of black powder", ultimately derived from the name khem, Egypt. A decree of Diocletian, written about 300 AD in Greek, speaks against "the ancient writings of the Egyptians, which treat of the khēmia transmutation of gold and silver". Greek origin Arabic al-kīmiyaʾ or al-khīmiyaʾ ( or ), according to some, is thought to derive from the Koine Greek word khymeia () meaning "the art of alloying metals, alchemy"; in the manuscripts, this word is also written khēmeia () or kheimeia (), which is the probable basis of the Arabic form. According to Mahn, the Greek word χυμεία khumeia originally meant "cast together", "casting together", "weld", "alloy", etc. (cf. Gk. kheein () "to pour"; khuma (), "that which is poured out, an ingot"). Assuming a Greek origin, chemistry is defined as follows: Chemistry, from the Greek word (khēmeia) meaning "cast together" or "pour together", is the science of matter at the atomic to molecular scale, dealing primarily with collections of atoms, such as molecules, crystals, and metals. From alchemy to chemistry Later medieval Latin had alchimia / alchymia "alchemy", alchimicus "alchemical", and alchimista "alchemist". The mineralogist and humanist Georg Agricola (died 1555) was the first to drop the Arabic definite article al-. In his Latin works from 1530 on he exclusively wrote chymia and chymista in describing activity that we today would characterize as chemical or alchemical. As a humanist, Agricola was intent on purifying words and returning them to their classical roots. He had no intent to make a semantic distinction between chymia and alchymia. During the later sixteenth century Agricola's new coinage slowly propagated. 
It seems to have been adopted in most of the vernacular European languages following Conrad Gessner's adoption of it in his extremely popular pseudonymous work, Thesaurus Euonymi Philiatri De remediis secretis: Liber physicus, medicus, et partim etiam chymicus (Zurich 1552). Gessner's work was frequently re-published in the second half of the 16th century in Latin and was also published in a number of vernacular European languages, with the word spelled without the al-. In the 16th and 17th centuries in Europe the forms alchimia and chimia (and chymia) were synonymous and interchangeable. The semantic distinction between a rational and practical science of chimia and an occult alchimia arose only in the early eighteenth century. In 16th, 17th and early 18th century English the spellings — both with and without the "al" — were usually with an i or y as in chimic / chymic / alchimic / alchymic. During the later 18th century the spelling was re-fashioned to use a letter e, as in chemic in English. In English after the spelling shifted from chimical to chemical, there was corresponding shift from alchimical to alchemical, which occurred in the early 19th century. In French, Italian, Spanish and Russian today it continues to be spelled with an i as in for example Italian chimica. See also History of chemistry History of science History of thermodynamics List of Arabic loanwords in English List of chemical element name etymologies References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary meaning of the Greek word "khēmeia" as it relates to the origin of chemistry? A. Preparation of black powder B. The art of alloying metals C. Transmutation of gold and silver D. Study of plant juices Answer:
B. The art of alloying metals
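The microbial-isolation passage earlier in this block mentions serial dilution as a physical method for reducing microbial density before plating. As a minimal illustration of the arithmetic involved, the following Python sketch estimates the concentration of viable cells (CFU/mL) in the original sample from a colony count on a plated dilution; the counts, volumes, and dilution factors are hypothetical values chosen for the example, not data from the text.

```python
def cfu_per_ml(colony_count, dilution_factor, plated_volume_ml):
    """Estimate viable cells (CFU/mL) in the undiluted sample.

    colony_count     -- colonies counted on the plate
    dilution_factor  -- total dilution of the plated tube, e.g. 1e-6 for a 10^-6 dilution
    plated_volume_ml -- volume spread on the plate, in mL
    """
    return colony_count / (dilution_factor * plated_volume_ml)

# Example: 42 colonies on a plate spread with 0.1 mL of the 10^-6 dilution
# (six successive 1:10 transfers) implies about 4.2e8 CFU/mL in the original sample.
if __name__ == "__main__":
    estimate = cfu_per_ml(colony_count=42, dilution_factor=1e-6, plated_volume_ml=0.1)
    print(f"Estimated concentration: {estimate:.2e} CFU/mL")
```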
Relavent Documents: Document 0::: In organic chemistry, umpolung () or polarity inversion is the chemical modification of a functional group with the aim of the reversal of polarity of that group. This modification allows secondary reactions of this functional group that would otherwise not be possible. The concept was introduced by D. Seebach (hence the German word for reversed polarity) and E.J. Corey. Polarity analysis during retrosynthetic analysis tells a chemist when umpolung tactics are required to synthesize a target molecule. Introduction The vast majority of important organic molecules contain heteroatoms, which polarize carbon skeletons by virtue of their electronegativity. Therefore, in standard organic reactions, the majority of new bonds are formed between atoms of opposite polarity. This can be considered to be the "normal" mode of reactivity. One consequence of this natural polarization of molecules is that 1,3- and 1,5- heteroatom substituted carbon skeletons are extremely easy to synthesize (Aldol reaction, Claisen condensation, Michael reaction, Claisen rearrangement, Diels-Alder reaction), whereas 1,2-, 1,4-, and 1,6- heteroatom substitution patterns are more difficult to access via "normal" reactivity. It is therefore important to understand and develop methods to induce umpolung in organic reactions. Examples The simplest method of obtaining 1,2-, 1,4-, and 1,6- heteroatom substitution patterns is to start with them. Biochemical and industrial processes can provide inexpensive sources of chemicals that have normally inaccessible substitution patterns. For example, amino acids, oxalic acid, succinic acid, adipic acid, tartaric acid, and glucose are abundant and provide nonroutine substitution patterns. Cyanide-type umpolung The canonical umpolung reagent is the cyanide ion. The cyanide ion is unusual in that a carbon triply bonded to a nitrogen would be expected to have a (+) polarity due to the higher electronegativity of the nitrogen atom. Yet, the negative charge of the Document 1::: Augmented reality-based testing (ARBT) is a test method that combines augmented reality and software testing to enhance testing by inserting an additional dimension into the testers field of view. For example, a tester wearing a head-mounted display (HMD) or Augmented reality contact lenses that places images of both the physical world and registered virtual graphical objects over the user's view of the world can detect virtual labels on areas of a system to clarify test operating instructions for a tester who is performing tests on a complex system. In 2009 as a spin-off to augmented reality for maintenance and repair (ARMAR), Alexander Andelkovic coined the idea 'augmented reality-based testing', introducing the idea of using augmented reality together with software testing. Overview The test environment of technology is becoming more complex, this puts higher demand on test engineers to have higher knowledge, testing skills and work effective. A powerful unexplored dimension that can be utilized is the virtual environment, a lot of information and data that today is available but unpractical to use due to overhead in time needed to gather and present can with ARBT be used instantly. Application ARBT can be of help in following test environments: Support Assembling and disassembling a test object can be learned out and practice scenarios can be run through to learn how to fix fault scenarios that may occur. 
Guidance Minimizing risk of misunderstanding complex test procedures can be done by virtually describing test steps in front of the tester on the actual test object. Educational Background information about test scenario with earlier bugs found pointed out on the test object and reminders to avoid repeating previous mistakes made during testing of selected test area. Training Junior testers can learn complex test scenarios with less supervision. Test steps will be pointed out and information about pass criteria need to be confirmed the junior tester c Document 2::: The Christensen failure criterion is a material failure theory for isotropic materials that attempts to span the range from ductile to brittle materials. It has a two-property form calibrated by the uniaxial tensile and compressive strengths T and C . The theory was developed by Stanford professor Richard. M. Christensen and first published in 1997. Description The Christensen failure criterion is composed of two separate subcriteria representing competitive failure mechanisms. When expressed in principal stress components, it is given by : Polynomial invariants failure criterion For Coordinated Fracture Criterion For The geometric form of () is that of a paraboloid in principal stress space. The fracture criterion () (applicable only over the partial range 0 ≤ T/C ≤ 1/2 ) cuts slices off the paraboloid, leaving three flattened elliptical surfaces on it. The fracture cutoff is vanishingly small at T/C=1/2 but it grows progressively larger as T/C diminishes. The organizing principle underlying the theory is that all isotropic materials admit a distinct classification system based upon their T/C ratio. The comprehensive failure criterion () and () reduces to the Mises criterion at the ductile limit, T/C = 1. At the brittle limit, T/C = 0, it reduces to a form that cannot sustain any tensile components of stress. Many cases of verification have been examined over the complete range of materials from extremely ductile to extremely brittle types. Also, examples of applications have been given. Related criteria distinguishing ductile from brittle failure behaviors have been derived and interpreted. Applications have been given by Ha to the failure of the isotropic, polymeric matrix phase in fiber composite materials. Document 3::: IvanAnywhere is a simple, remote-controlled telepresence robot created by Sybase iAnywhere programmers to enable their co-worker, Ivan Bowman, to efficiently remote work. The robot enables Bowman to be virtually present at conferences and presentations, and to discuss product development with other developers face-to-face. IvanAnywhere is powered by SAP's mobile database product, SQL Anywhere. IvanAnywhere evolution Ivan Bowman has been a software developer at Sybase/iAnywhere/SAP since 1993, and now is an Engineering Director at SAP Canada. In 2002 his wife received a job in Halifax approximately from his place of work in Waterloo, Ontario, Canada, North America. His employers allowed him to remote work initially via email, instant messenger, and phone. Using speakerphone during meetings was less than ideal because Ivan could not see his co-workers' visual communication clues, or what they wrote on the white board. The first solution was a stationary webcam with a speaker, which was kept in the corner of the office. The problem with this method was that the webcam was just that – stationary. Ivan could not see people if they were not standing near the webcam. 
More frustrating, perhaps, was that Ivan could hear distant conversations through the webcam's microphone, but was unable to contribute to the conversation if the impromptu meeting did not take place in his visual range. Proof of concept In November 2006, iAnywhere programmer Ian McHardy and Director of Engineering Glenn Paulley (Ivan’s immediate manager) conceived the idea of IvanAnywhere after Glenn saw a television commercial for a remote controlled toy blimp. In January 2007, after considering different possible designs and getting through a number of deadlines related to iAnywhere releases, Ian started working on a proof-of-concept: a tablet computer and webcam mounted on a radio-controlled toy truck. In February 2007, even though the truck was challenging to drive and the webcam was only a few inch The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary function of the Database of Macromolecular Motions? A. To provide a platform for protein sequencing B. To categorize macromolecular motions and visualize protein conformational changes C. To develop new bioinformatics software D. To store genetic information Answer:
B. To categorize macromolecular motions and visualize protein conformational changes
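The Christensen failure criterion passage in the block above refers to a polynomial-invariants criterion and a fracture cutoff, but the equations themselves did not survive in the extracted text. The sketch below evaluates the commonly published two-property form in principal stresses; the formulas are supplied here as an assumption from standard references, not quoted from the passage, so treat the whole block as illustrative only.

```python
def christensen_fails(s1, s2, s3, T, C):
    """Rough check of the Christensen criterion for principal stresses s1..s3.

    T, C -- uniaxial tensile and compressive strengths (both positive).
    Assumed forms (standard references, NOT taken from the passage above):
      polynomial invariants: (1/T - 1/C)*(s1 + s2 + s3)
          + (1/(T*C)) * 0.5*((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) >= 1
      fracture cutoff (applies only when T/C <= 1/2): any principal stress >= T
    Returns True if either sub-criterion predicts failure.
    """
    poly = (1.0 / T - 1.0 / C) * (s1 + s2 + s3) + \
           (0.5 / (T * C)) * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2)
    poly_fails = poly >= 1.0

    fracture_fails = False
    if T / C <= 0.5:  # cutoff is only active in the brittle range
        fracture_fails = max(s1, s2, s3) >= T

    return poly_fails or fracture_fails

# At the ductile limit T == C the polynomial term reduces to the von Mises criterion.
print(christensen_fails(100.0, 0.0, 0.0, T=250.0, C=250.0))  # False: well below yield
```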
Relavent Documents: Document 0::: Angiotensin is a peptide hormone that causes vasoconstriction and an increase in blood pressure. It is part of the renin–angiotensin system, which regulates blood pressure. Angiotensin also stimulates the release of aldosterone from the adrenal cortex to promote sodium retention by the kidneys. An oligopeptide, angiotensin is a hormone and a dipsogen. It is derived from the precursor molecule angiotensinogen, a serum globulin produced in the liver. Angiotensin was isolated in the late 1930s (first named 'angiotonin' or 'hypertensin', later renamed 'angiotensin' as a consensus by the 2 groups that independently discovered it) and subsequently characterized and synthesized by groups at the Cleveland Clinic and Ciba laboratories. Precursor and types Angiotensinogen Angiotensinogen is an α-2-globulin synthesized in the liver and is a precursor for angiotensin, but has also been indicated as having many other roles not related to angiotensin peptides. It is a member of the serpin family of proteins, leading to another name: Serpin A8, although it is not known to inhibit other enzymes like most serpins. In addition, a generalized crystal structure can be estimated by examining other proteins of the serpin family, but angiotensinogen has an elongated N-terminus compared to other serpin family proteins. Obtaining actual crystals for X-ray diffractometric analysis is difficult in part due to the variability of glycosylation that angiotensinogen exhibits. The non-glycosylated and fully glycosylated states of angiotensinogen also vary in molecular weight, the former weighing 53 kDa and the latter weighing 75 kDa, with a plethora of partially glycosylated states weighing in between these two values. Angiotensinogen is also known as renin substrate. It is cleaved at the N-terminus by renin to result in angiotensin I, which will later be modified to become angiotensin II. This peptide is 485 amino acids long, and 10 N-terminus amino acids are cleaved when renin acts on it. Document 1::: Reinnervation is the restoration, either by spontaneous cellular regeneration or by surgical grafting, of nerve supply to a body part from which it has been lost or damaged. See also Denervation Neuroregeneration Targeted reinnervation References Document 2::: Predicate transformer semantics were introduced by Edsger Dijkstra in his seminal paper "Guarded commands, nondeterminacy and formal derivation of programs". They define the semantics of an imperative programming paradigm by assigning to each statement in this language a corresponding predicate transformer: a total function between two predicates on the state space of the statement. In this sense, predicate transformer semantics are a kind of denotational semantics. Actually, in guarded commands, Dijkstra uses only one kind of predicate transformer: the well-known weakest preconditions (see below). Moreover, predicate transformer semantics are a reformulation of Floyd–Hoare logic. Whereas Hoare logic is presented as a deductive system, predicate transformer semantics (either by weakest-preconditions or by strongest-postconditions see below) are complete strategies to build valid deductions of Hoare logic. In other words, they provide an effective algorithm to reduce the problem of verifying a Hoare triple to the problem of proving a first-order formula. 
Technically, predicate transformer semantics perform a kind of symbolic execution of statements into predicates: execution runs backward in the case of weakest-preconditions, or runs forward in the case of strongest-postconditions. Weakest preconditions Definition For a statement S and a postcondition R, a weakest precondition is a predicate Q such that for any precondition P, {P} S {R} holds if and only if P ⇒ Q. In other words, it is the "loosest" or least restrictive requirement needed to guarantee that R holds after S. Uniqueness follows easily from the definition: If both Q and Q' are weakest preconditions, then by the definition {Q'} S {R} holds and so Q' ⇒ Q, and {Q} S {R} holds and so Q ⇒ Q', and thus Q and Q' are equivalent. We often use wp(S, R) to denote the weakest precondition for statement S with respect to a postcondition R. Conventions We use T to denote the predicate that is everywhere true and F to denote the one that is everywhere false. We shouldn't at least conceptually confuse ourselv Document 3::: Incremental operating margin is the increase or decrease of income from continuing operations before stock-based compensation, interest expense and income-tax expense between two periods, divided by the increase or decrease in revenue between the same two periods. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is Elsinoë randii classified as in terms of its biological family? A. Bacterium B. Fungus C. Virus D. Alga Answer:
B. Fungus
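The predicate-transformer passage above describes weakest preconditions as running statement execution backwards over predicates. As a small illustration (a toy symbolic version, not Dijkstra's original formulation), the sketch below computes wp for assignment by substitution and for sequential composition using sympy; the statement representation as (variable, expression) pairs is invented for this example.

```python
import sympy as sp

x, y = sp.symbols("x y")

def wp_assign(var, expr, post):
    """wp(var := expr, R) = R with every occurrence of var replaced by expr."""
    return post.subs(var, expr)

def wp_seq(statements, post):
    """wp(S1; S2; ...; Sn, R) computed right-to-left: wp(S1, wp(S2, ... wp(Sn, R)))."""
    pre = post
    for var, expr in reversed(statements):
        pre = wp_assign(var, expr, pre)
    return pre

# wp(x := x + 1, x > 0)  ==  x + 1 > 0  ==  x > -1
print(sp.simplify(wp_assign(x, x + 1, x > 0)))

# wp(y := 2*x; x := y + 3, x > 10)  ==  2*x + 3 > 10
program = [(y, 2 * x), (x, y + 3)]
print(sp.simplify(wp_seq(program, x > 10)))
```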
Relavent Documents: Document 0::: This list of botanical gardens in Japan is intended to include all significant botanical gardens and arboretums in Japan. Akatsuka Botanical Garden (Itabashi, Tokyo) Aloha Garden Tateyama (Tateyama, Chiba) Amami Islands Botanical Garden (Amami, Kagoshima) Aoshima Subtropical Botanical Garden (Miyazaki, Miyazaki) Aritaki Arboretum (Koshigaya, Saitama) Atagawa Tropical & Alligator Garden (Kamo, Shizuoka) Botanic Garden, Faculty of Science, Kanazawa University (Kanazawa, Ishikawa) Botanical Garden of Tohoku University (Sendai, Miyagi) Botanic Gardens of Toyama (Toyama, Toyama) Botanical Gardens Faculty of Science Osaka City University (Katano, Osaka) Enoshima Tropical Plants Garden (Fujisawa, Kanagawa) Experimental Station for Landscape Plants (Chiba, Chiba) Fuji Bamboo Garden (Nagaizumi, Shizuoka) Fukuoka Municipal Zoo and Botanical Garden (Fukuoka, Fukuoka) Futagami Manyo Botanical Gardens (Takaoka, Toyama) Hakone Botanical Garden of Wetlands (Hakone, Kanagawa) Handayama Botanical Garden (Okayama, Okayama) Hattori Ryokuchi Arboretum (Toyonaka, Osaka) Higashiyama Zoo and Botanical Gardens (Nagoya, Aichi) Himeji City Tegarayama Botanical Garden (Himeji, Hyōgo) Himi Seaside Botanical Garden (Himi, Toyama) Hiroshima Botanical Garden (Hiroshima, Hiroshima) Hirugano Botanical Garden (Gujō, Gifu) Hokkaido University Botanical Gardens (Sapporo, Hokkaidō) Ibaraki Botanical Garden (Naka, Ibaraki) Ibusuki Experimental Botanical Garden (Ibusuki, Kagoshima) Ishikawa Forest Experiment Station (Hakusan, Ishikawa) Itabashi Botanical Garden (Itabashi, Tokyo) Jindai Botanical Garden (Chōfu, Tokyo) Kagoshima Botanical Garden (Kagoshima, Kagoshima) Kanagawa Prefectural Ofuna Botanical Garden (Kamakura, Kanagawa) Kawaguchi Green Center (Kawaguchi, Saitama) Kiseki No Hoshi Greenhouse (Awaji, Hyōgo) Kitayama Botanical Garden (Nishinomiya, Hyōgo) Kobe Municipal Arboretum (Kōbe, Hyōgo) Koishikawa Botanical Gardens (Bunkyō, Tokyo) Kosobe Conservato Document 1::: Phylomedicine is an emerging discipline at the intersection of medicine, genomics, and evolution. It focuses on the use of evolutionary knowledge to predict functional consequences of mutations found in personal genomes and populations. History Modern technologies have made genome sequencing accessible, and biomedical scientists have profiled genomic variation in apparently healthy individuals and individuals diagnosed with a variety of diseases. This work has led to the discovery of thousands of disease-associated genes and genetic variants, elucidating a more robust picture of the amount and types of variations found within and between humans. Proteins are encoded in genomic DNA by exons, and these comprise only ~1% of the human genomic sequence (aka the exome). The exome of an individual carries about 6,000–10,000 amino-acid-altering nSNVs, and many of these variants are already known to be associated with more than 1000 diseases. Although only a small fraction of these personal variants are likely to impact health, the sheer volume of known genomic and exomic variants is too large to apply traditional laboratory or experimental techniques to explore their functional consequences. Translating a personal genome into useful phenotypic information (e.g. relating to predisposition to disease, differential drug response, or other health concerns), is therefore a grand challenge in the field of genomic medicine. 
Fortunately, results from the natural experiment of molecular evolution are recorded in the genomes of humans and other living species. All genomic variation is subjected to the process of natural selection which generally reduces mutations with negative effects on phenotype over time. With the availability of a large number of genomes from the tree of life, evolutionary conservation of individual genomic positions and the sets of mutations permitted among species informs the functional and health consequences of these mutations. Consequently, phylomedicin Document 2::: Kepler-1513 is a main-sequence star about away in the constellation Lyra. It has a late-G or early-K spectral type, and it hosts at least one, and likely two, exoplanets. Planetary system Kepler-1513b (KOI-3678.01) was confirmed in 2016 as part of a study statistically validating hundreds of Kepler planets. In November 2022, an exomoon candidate was reported around Kepler-1513b based on transit-timing variations (TTVs). Unlike previous giant exomoon candidates in the Kepler-1625 and Kepler-1708 systems, this exomoon would have been terrestrial-mass, ranging from 0.76 Lunar masses to 0.34 Earth masses depending on the planet's mass and the moon's orbital period. In October 2023, a follow-up study by the same team of astronomers using additional observations found that the observed TTVs cannot be explained by an exomoon, but can be explained by a second, outer planet, Kepler-1513c, with a mass comparable to Saturn. See also Kepler-90g References Document 3::: This is a list of dams and reservoirs in Romania. References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the main purpose of gelding a male horse? A. To increase its breeding potential B. To make it calmer and easier to control C. To enhance its physical strength D. To improve its reproductive health Answer:
B. To make it calmer and easier to control
Relavent Documents: Document 0::: In botanical nomenclature, author citation is the way of citing the person or group of people who validly published a botanical name, i.e. who first published the name while fulfilling the formal requirements as specified by the International Code of Nomenclature for algae, fungi, and plants (ICN). In cases where a species is no longer in its original generic placement (i.e. a new combination of genus and specific epithet), both the authority for the original genus placement and that for the new combination are given (the former in parentheses). In botany, it is customary (though not obligatory) to abbreviate author names according to a recognised list of standard abbreviations. There are differences between the botanical code and the normal practice in zoology. In zoology, the publication year is given following the author names and the authorship of a new combination is normally omitted. A small number of more specialized practices also vary between the recommendations of the botanical and zoological codes. Introduction In biological works, particularly those dealing with taxonomy and nomenclature but also in ecological surveys, it has long been the custom that full citations to the place where a scientific name was published are omitted, but a short-hand is used to cite the author of the name, at least the first time this is mentioned. The author name is frequently not sufficient information, but can help to resolve some difficulties. Problems include: The name of a taxon being referred to is ambiguous, as in the case of homonyms such as Darlingtonia Torr., a genus of carnivorous plants, vs. Darlingtonia DC., a genus of leguminous plants. The publication of the name may be in a little-known journal or book. The author name may sometimes help to resolve this. The name may not have been validly published, but the supposed author name may be helpful to locate the publication or manuscript in which it was listed. Rules and recommendations for author citations Document 1::: In aerospace engineering, concerning aircraft, rocket and spacecraft design, overall propulsion system efficiency is the efficiency with which the energy contained in a vehicle's fuel is converted into kinetic energy of the vehicle, to accelerate it, or to replace losses due to aerodynamic drag or gravity. Mathematically, it is represented as , where is the cycle efficiency and is the propulsive efficiency. The cycle efficiency is expressed as the percentage of the heat energy in the fuel that is converted to mechanical energy in the engine, and the propulsive efficiency is expressed as the proportion of the mechanical energy actually used to propel the aircraft. The propulsive efficiency is always less than one, because conservation of momentum requires that the exhaust have some of the kinetic energy, and the propulsive mechanism (whether propeller, jet exhaust, or ducted fan) is never perfectly efficient. It is greatly dependent on exhaust expulsion velocity and airspeed. Cycle efficiency Most aerospace vehicles are propelled by heat engines of some kind, usually an internal combustion engine. The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input. From the laws of thermodynamics: where is the work extracted from the engine. (It is negative because work is done by the engine.) is the heat energy taken from the high-temperature system (heat source). (It is negative because heat is extracted from the source, hence is positive.) 
is the heat energy delivered to the low-temperature system (heat sink). (It is positive because heat is added to the sink.) In other words, a heat engine absorbs heat from some heat source, converting part of it to useful work, and delivering the rest to a heat sink at lower temperature. In an engine, efficiency is defined as the ratio of useful work done to energy expended. The theoretical maximum efficiency of a heat engine, the Carnot efficiency, depends only on its operating temperatures. Mathematically, this is because in reversible processes, the cold reservoir would gain the same amount of entropy as that lost by the hot reservoir (i.e., ), for no change in entropy. Thus: where is the absolute temperature of the hot source and that of the cold sink, usually measured in kelvins. Note that is positive while is negative; in any reversible work-extracting process, entropy is overall not increased, but rather is moved from a hot (high-entropy) system to a cold (low-entropy one), decreasing the entropy of the heat source and increasing that of the heat sink. Propulsive efficiency Propulsive efficiency is defined as the ratio of propulsive power (i.e. thrust times velocity of the vehicle) to work done on the fluid. In generic terms, the propulsive power can be calculated as follows: where represents thrust and , the flight speed. The thrust can be computed from intake and exhaust massflows ( and ) and velocities ( and ): The work done by the engine to the flow, on the other hand, is the change in kinetic energy per time. This does not take into account the efficiency of the engine used to generate the power, nor of the propeller, fan or other mechanism used to accelerate air. It merely refers to the work done to the flow, by any means, and can be expressed as the difference between exhausted kinetic energy flux and incoming kinetic energy flux: The propulsive efficiency can therefore be computed as: Depending on the type of propulsion used, this equation can be simplified in different ways, demonstrating some of the peculiarities of different engine types. The general equation already shows, however, that propulsive efficiency improves when using large massflows and small velocities compared to small mass-flows and large velocities, since the squared terms in the denominator grow faster than the non-squared terms. The losses modelled by propulsive efficiency are explained by the fact that any mode of aero propulsion leaves behind a jet moving into the opposite direction of the vehicle. The kinetic energy flux in this jet is for the case that . Jet engines The propulsive efficiency formula for air-breathing engines is given below. It can be derived by setting in the general equation, and assuming that . This cancels out the mass-flow and leads to: where is the exhaust expulsion velocity and is both the airspeed at the inlet and the flight velocity. For pure jet engines, particularly with afterburner, a small amount of accuracy can be gained by not assuming the intake and exhaust massflow to be equal, since the exhaust gas also contains the added mass of the fuel injected. For turbofan engines, the exhaust massflow may be marginally smaller than the intake massflow because the engine supplies "bleed air" from the compressor to the aircraft. In most circumstances, this is not taken into account, as it makes no significant difference to the computed propulsive efficiency. 
By computing the exhaust velocity from the equation for thrust (while still assuming ), we can also obtain the propulsive efficiency as a function of specific thrust (): A corollary of this is that, particularly in air breathing engines, it is more energy efficient to accelerate a large amount of air by a small amount, than it is to accelerate a small amount of air by a large amount, even though the thrust is the same. This is why turbofan engines are more efficient than simple jet engines at subsonic speeds. Rocket engines A rocket engine's is usually high due to the high combustion temperatures and pressures, and the long converging-diverging nozzle used. It varies slightly with altitude due to changing atmospheric pressure, but can be up to 70%. Most of the remainder is lost as heat in the exhaust. Rocket engines have a slightly different propulsive efficiency () than air-breathing jet engines, as the lack of intake air changes the form of the equation. This also allows rockets to exceed their exhaust's velocity. Similarly to jet engines, matching the exhaust speed and the vehicle speed gives optimum efficiency, in theory. However, in practice, this results in a very low specific impulse, causing much greater losses due to the need for exponentially larger masses of propellant. Unlike ducted engines, rockets give thrust even when the two speeds are equal. In 1903, Konstantin Tsiolkovsky discussed the average propulsive efficiency of a rocket, which he called the utilization (utilizatsiya), the "portion of the total work of the explosive material transferred to the rocket" as opposed to the exhaust gas. Propeller engines The calculation is somewhat different for reciprocating and turboprop engines which rely on a propeller for propulsion since their output is typically expressed in terms of power rather than thrust. The equation for heat added per unit time, Q, can be adopted as follows: where H = calorific value of the fuel in BTU/lb, h = fuel consumption rate in lb/hr and J = mechanical equivalent of heat = 778.24 ft.lb/BTU, where is engine output in horsepower, converted to foot-pounds/second by multiplication by 550. Given that specific fuel consumption is Cp = h/Pe and H = 20 052 BTU/lb for gasoline, the equation is simplified to: expressed as a percentage. Assuming a typical propeller efficiency of 86% (for the optimal airspeed and air density conditions for the given propeller design), maximum overall propulsion efficiency is estimated as: See also References Notes Document 2::: NGC 465 is an open cluster in the Magellanic Clouds. Being part of the Tucana constellation, it was discovered by Scottish astronomer James Dunlop in 1826. Document 3::: NGC 3341 is a peculiar galaxy located in the constellation of Sextans. It is located 415 million light years away from Earth and has a diameter of 170,000 light years. It was discovered by Albert Marth on March 22, 1865, who described the object as "very faint and small". The galaxy is classified a minor galaxy merger system, with two known companions revealed as offset active galactic nuclei (AGN). Characteristics NGC 3341 is classified as a giant disk galaxy located at redshift 0.027. It has a magnitude of MB = -20.3 with a mass of ≈ 1 x 1011 MΘ. The galaxy has two smaller companions of low mass located north from the galaxy with an estimated distance of 5.1 and 8.4 kiloparsecs respectively. 
Further observations by astronomers classified the two offset nuclei of NGC 3341 as dwarf ellipticals or bulge remnants of spiral galaxies whose disk structures were tidally stripped as they coalesced into the larger primary galaxy. According to observations made by Foord and colleagues, the primary nucleus of NGC 3341 has a 0.5-8 keV flux with a luminosity of 3.63+0.07-0.05, consistent with a rest-frame luminosity of 8.54+0.41-0.33 × 10^41 erg s−1. The secondary nucleus, on the other hand, has an observed 0.5-8 keV flux of 2.7+0.6-0.8 × 10^-15 erg s−1 cm−2. The primary nucleus thus has an X-ray luminosity of LX > 1 × 10^41 erg s−1, while the secondary does not. Stranger still, the two nuclei have different classifications: one is classified as a Seyfert type II while the other is a LINER with weak emission lines, and the primary nucleus in NGC 3341 shows an emission-line spectrum. Based on the optical spectra of the two nuclei, it was suggested that NGC 3341 might be a dual or even triple AGN system. However, because the secondary nucleus does not meet the X-ray luminosity threshold, the merger system of NGC 3341 is considered to contain a single AGN. The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the primary factor that affects propulsive efficiency in aerospace vehicles according to the text? A. Exhaust expulsion velocity B. Weight of the vehicle C. Altitude of operation D. Fuel type Answer:
A. Exhaust expulsion velocity
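The propulsion document above states verbally that propulsive efficiency for an air-breathing engine depends on the ratio of exhaust velocity to flight speed, although the displayed formulas were lost in extraction. The sketch below uses the standard simplified result for equal intake and exhaust mass flow, stated here as an assumption rather than quoted from the text, together with the Carnot bound mentioned for cycle efficiency; all numbers are hypothetical.

```python
def propulsive_efficiency(flight_speed, exhaust_speed):
    """Simplified air-breathing result: eta_p = 2 / (1 + v_e / v0).

    Assumes intake and exhaust mass flows are equal, as the passage does.
    """
    return 2.0 / (1.0 + exhaust_speed / flight_speed)

def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on cycle efficiency from the operating temperatures (kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# Illustrative numbers: a turbofan cruising at 250 m/s with a 320 m/s mean jet
# velocity wastes less kinetic energy than a pure jet exhausting at 600 m/s,
# hence the higher propulsive efficiency.
for v_exhaust in (320.0, 600.0):
    eta_p = propulsive_efficiency(flight_speed=250.0, exhaust_speed=v_exhaust)
    print(f"v_e = {v_exhaust:5.0f} m/s  ->  eta_p = {eta_p:.2f}")

# Overall efficiency is the product of cycle and propulsive efficiency.
print("eta_overall <=", carnot_efficiency(1600.0, 288.0) * propulsive_efficiency(250.0, 320.0))
```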
Relavent Documents: Document 0::: Gastropexy is a surgical operation in which the stomach is sutured to the abdominal wall or the diaphragm. Gastropexies in which the stomach is sutured to the diaphragm are sometimes performed as a treatment of GERD to prevent the stomach from moving up into the chest. Document 1::: Relieving tackle is tackle employing one or more lines attached to a vessel's steering mechanism, to assist or substitute for the whipstaff or ship's wheel in steering the craft. This enabled the helmsman to maintain control in heavy weather, when the rudder is under more stress and requires greater effort to handle, and also to steer the vessel were the helm damaged or destroyed. In vessels with whipstaffs (long vertical poles extending above deck, acting as a lever to move the tiller below deck), relieving lines were attached to the tiller or directly to the whipstaff. When wheels were introduced, their greater mechanical advantage lessened the need for such assistance, but relieving tackle could still be used on the tiller, located on a deck underneath the wheel. Relieving tackle was also rigged on vessels going into battle, to assist in steering in case the helm was damaged or shot away. When a storm threatened, or battle impended, the tackle would be affixed to the tiller, and hands assigned to man them. Additional tackle was available to attach directly to the rudder as surety against loss of the tiller. The term can also refer to lines or cables attached to a vessel that has been careened (laid over to one side for maintenance). The lines passed under the hull and were secured to the opposite side, to keep the vessel from overturning further, and to aid in righting the ship when the work was finished. References Document 2::: Gamma Piscis Austrini, Latinized from γ Piscis Austrini, is three-star system in the southern constellation of Piscis Austrinus. It is visible to the naked eye with a combined apparent visual magnitude of +4.448. Based upon an annual parallax shift of as seen from the Earth, the system is located about 216 light years from the Sun. The A and B components, as of 2010, are separated by 4 arc seconds in the sky along a position angle of 255°. The "A" component is itself a binary, made up of two stars orbiting each other with an orbital period of 15 years and a separation of nine astronomical units, with a combined apparent magnitude of 4.59. The component Aa has 2.65 times more mass than the Sun and 2.9 times its radius, being a chemically peculiar star with a spectral type . The Ab component is smaller, at 0.94 times the Sun's mass and 0.84 times its radius. The fainter magnitude 8.20 companion, component B, is an F-type main sequence star with a class of F5 V. It has 20% more mass than the Sun and a radius 15% larger. Gamma Piscis Austrini is moving through the Galaxy at a speed of 24.1 km/s relative to the Sun. Its projected Galactic orbit carries it between and from the center of the Galaxy. It came closest to the Sun 1.8 million years ago at a distance of . The current age of the system is 350 million years. It will become a triple white dwarf system within 14 billion years. Naming In Chinese, (), meaning Decayed Mortar, refers to an asterism consisting of refers to an asterism consisting of γ Piscis Austrini, γ Gruis, λ Gruis and 19 Piscis Austrini. Consequently, the Chinese name for γ Piscis Austrini itself is (, .) References External links Document 3::: Jantar Mantar in New Delhi is an observatory, designed to be used with the naked eye. 
It is one of five Jantar Mantar in India. "Jantar Mantar" means "instruments for measuring the harmony of the heavens". It consists of 13 architectural astronomy instruments. The site is one of five built by Maharaja Jai Singh II of Jaipur, from 1723 onwards, revising the calendar and astronomical tables. Jai Singh, born in 1688 into a royal Rajput family that ruled the regional kingdom, was born into an era of education that maintained a keen interest in astronomy. There is a plaque fixed on one of the structures in the Jantar Mantar observatory in New Delhi that was placed there in 1910 mistakenly dating the construction of the complex to the year 1710. Later research, though, suggests 1724 as the actual year of construction. Its height is . The primary purpose of the observatory was to compile astronomical tables, and to predict the times and movements of the Sun, Moon and planets. Some of these purposes nowadays would be classified as astronomy. Completed in 1724, the Delhi Jantar Mantar had decayed considerably by 1857 uprising. The Ram Yantra, the Samrat Yantra, the Jai Prakash Yantra and the Misra Yantra are the distinct instruments of Jantar Mantar. The most famous of these structures, the Jaipur, had also deteriorated by the end of the nineteenth century until in 1901 when Maharaja Ram Singh set out to restore the instrument. History Jantar Mantar is located in New Delhi and built by Maharaja Jai Singh II of Jaipur in the year 1724. The maharaja built five observatories during his rule in the 18th century. Among these five, the one in Delhi was the first to be built. The other four observatories are located in Ujjain, Mathura, Varanasi, and Jaipur. The objective behind the construction of these observatories was to assemble astronomical data and to accurately predict the movement of the planets, Moon, Sun, etc. in the Solar System. It was a one of a kind in its tim The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What is the significance of the development of the servomechanism in the context of numerical control (NC) technology? A. It allowed for the automation of template tracing. B. It enabled powerful and controlled movement with accurate measurement. C. It was the first machine tool developed for numerical control. D. It simplified the process of creating punched tape. Answer:
B. It enabled powerful and controlled movement with accurate measurement.
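The question above credits the servomechanism with combining powered motion and accurate measurement, which is closed-loop feedback control. As a purely illustrative sketch of that idea (none of the retrieved documents describe this algorithm, and the gain, time step, and axis model are made up for the example), here is a minimal discrete proportional position loop in Python.

```python
# Minimal closed-loop position servo: command a target position, measure the
# actual position each step, and drive the axis with a velocity proportional
# to the remaining error. Gains and plant dynamics are hypothetical.
def simulate_servo(target=10.0, kp=2.0, dt=0.01, steps=600):
    position = 0.0
    for _ in range(steps):
        error = target - position          # feedback: measured error
        velocity_command = kp * error      # proportional control law
        position += velocity_command * dt  # idealised axis response
    return position

print(f"Final position: {simulate_servo():.3f} (target 10.0)")
```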
Relavent Documents: Document 0::: A sweep generator is a piece of electronic test equipment similar to, and sometimes included on, a function generator which creates an electrical waveform with a linearly varying frequency and a constant amplitude. Sweep generators are commonly used to test the frequency response of electronic filter circuits. These circuits are mostly transistor circuits with inductors and capacitors to create linear characteristics. Sweeps are a popular method in the field of audio measurement to describe the change in a measured output value over a progressing input parameter. The most commonly-used progressive input parameter is frequency varied over the standard audio bandwidth of 20 Hz to 20 kHz. Glide Sweep A glide sweep (or chirp) is a continuous signal in which the frequency increases or decreases logarithmically with time. This provides the complete range of testing frequencies between the start and stop frequency. An advantage over the stepped sweep is that the signal duration can be reduced by the user without any loss of frequency resolution in the results. This allows for rapid testing. Although the theory behind the glide sweep has been known for several decades, its use in audio measuring devices has only evolved over the past several years. The reason for this lies with the high computing power required. Stepped Sweep In a stepped sweep, one variable input parameter (frequency or amplitude) is incremented or decremented in discrete steps. After each change, the analyzer waits until a stable reading is detected before switching to the next step. The scaling of the steps is linear or logarithmic. Since the settling time of different test objects cannot be predicted, the duration of a stepped sweep cannot be determined exactly in advance. For the determination of amplitude or frequency response, the stepped sweep has been largely replaced by the glide sweep. The main application for the stepped sweep is to measure the linearity of systems. Here, the frequency of t Document 1::: Metvuw Weather and Climate service is run by James McGregor. The site was formerly associated with Victoria University of Wellington (hence the VUW acronym), but is now run independently. Much of the weather content and forecast material is available directly from the website, free. A range of different weather information is available, as different pages, under the following headings. Metvuw home Featuring updates advice about the service, and 'photo of the day' as well as links to previous photos of the day. Many images of quirky and unusual weather-related phenomena are to be found, especially clouds of the Mammatus variety. Satellite imagery 9 panels of monochrome satellite derived cloud pictures are shown, in 3 hour intervals, ending in current time slot. Images can be enlarged. Data provided by MetService, Image enhancement by Metvuw, Himawari satellite data courtesy of Japanese Meteorological Agency. Weather radar 3 panels each showing 9 locations throughout New Zealand, at 3 hour intervals, ending in current time slot. Colour coding is derived from 'reflectivity' (DBz). Images can be enlarged. Data provided by MetService, Image enhancement by Metvuw. Radiosondes (weather balloons) measuring upper air temperatures and winds are routinely launched from five stations around New Zealand. At Whenuapai, Paraparaumu and Invercargill they are launched at 1100 and 2300 NZT. At Raoul Island they are launched daily at 1100 NZT. The upper air data are displayed on tephigrams. 
Forecast charts A large range between 1 and 10 day forecasts are available, delivering 4 and 40 images, respectively, spaced at +6 hour intervals, beginning with the forecast time. These weather forecast charts are generated by software written and maintained by James McGregor. The data used is obtained from the United States National Weather Service. These charts are updated approximately every 6 hours and provide forecasts up to 240h ahead of the time they were issued. Current NZ weath Document 2::: Worm bagging (also referred to as facultative vivipary or endotokia matricida) is a form of vivipary observed in nematodes, namely Caenorhabditis elegans. The process is characterized by eggs hatching within the parent and the larvae proceeding to consume and emerge from the parent. History While the phenomenon was mentioned as a result of fluorodeoxyuridine treatment as early as 1979 and egg-laying mutants were identified in 1984, the natural circumstances and mechanisms resulting in this behavior were not fully explored until 2003. From this point, modest explorations of the mechanisms underlying this behavior have been observed. Proximate causes Bagging will occur in vulvaless or egg-laying mutants of C. elegans but can also be induced in wild-type strains. Identified stressors that can induce bagging are starvation, high salt concentration, and antagonistic bacteria. It has been observed in larval development, that the WRT-5 protein is secreted into the pharyngeal lumen and the pharyngeal expression changes in a cycle that is connected to the molting cycle. Deletion mutations in wrt-5 cause embryonic lethality, which are temperature sensitive and more severe at 15 degrees C than at 25 degrees C. Additionally, animals that hatch exhibit variable abnormal morphology, for example, bagging worms, blistering, molting defects, or Roller phenotypes. Internal hatching is initiated by genes and is not restricted to the widely used laboratory strain N2. Internal hatching is rare when worms are maintained under standard laboratory conditions. However, axenic condition which is a transfer from solid to liquid medium along with adverse environmental conditions, such as starvation, exposure to harsh compounds, and bacteria can increase the frequency of worm bags. In a study C. elegans were starved and in stressful conditions such as a high salt environment. As a result there was a connection drawn between the pathway leading to the dauer stage and the pathway leading to Document 3::: The parameter space is the space of all possible parameter values that define a particular mathematical model. It is also sometimes called weight space, and is often a subset of finite-dimensional Euclidean space. In statistics, parameter spaces are particularly useful for describing parametric families of probability distributions. They also form the background for parameter estimation. In the case of extremum estimators for parametric models, a certain objective function is maximized or minimized over the parameter space. Theorems of existence and consistency of such estimators require some assumptions about the topology of the parameter space. For instance, compactness of the parameter space, together with continuity of the objective function, suffices for the existence of an extremum estimator. Sometimes, parameters are analyzed to view how they affect their statistical model. In that context, they can be viewed as inputs of a function, in which case the technical term for the parameter space is domain of a function. 
The ranges of values of the parameters may form the axes of a plot, and particular outcomes of the model may be plotted against these axes to illustrate how different regions of the parameter space produce different types of behavior in the model. Examples A simple model of health deterioration after developing lung cancer could include the two parameters gender and smoker/non-smoker, in which case the parameter space is the following set of four possibilities: . The logistic map has one parameter, r, which can take any positive value. The parameter space is therefore positive real numbers. For some values of r, this function ends up cycling around a few values or becomes fixed on one value. These long-term values can be plotted against r in a bifurcation diagram to show the different behaviours of the function for different values of r. In a sine wave model the parameters are amplitude A > 0, angular frequency ω > 0, and phase φ ∈ S1. Thus the parameter space is In complex dynamics, the parameter space is the complex plane C = { z = x + y i : x, y ∈ R }, where i2 = −1. The famous Mandelbrot set is a subset of this parameter space, consisting of the points in the complex plane which give a bounded set of numbers when a particular iterated function is repeatedly applied from that starting point. The remaining points, which are not in the set, give an unbounded set of numbers (they tend to infinity) when this function is repeatedly applied from that starting point. In machine learning, hyperparameters are used to describe models. In deep learning, the parameters of a deep network are called weights. Due to the layered structure of deep networks, their weight space has a complex structure and geometry. For example, in multilayer perceptrons, the same function is preserved when permuting the nodes of a hidden layer, amounting to permuting weight matrices of the network. This property is known as equivariance to permutation of deep weight spaces. The study seeks hyperparameter optimization. History Parameter space contributed to the liberation of geometry from the confines of three-dimensional space. For instance, the parameter space of spheres in three dimensions, has four dimensions—three for the sphere center and another for the radius. According to Dirk Struik, it was the book Neue Geometrie des Raumes (1849) by Julius Plücker that showed ...geometry need not solely be based on points as basic elements. Lines, planes, circles, spheres can all be used as the elements (Raumelemente) on which a geometry can be based. This fertile conception threw new light on both synthetic and algebraic geometry and created new forms of duality. The number of dimensions of a particular form of geometry could now be any positive number, depending on the number of parameters necessary to define the "element". The requirement for higher dimensions is illustrated by Plücker's line geometry. Struik writes [Plücker's] geometry of lines in three-space could be considered as a four-dimensional geometry, or, as Klein has stressed, as the geometry of a four-dimensional quadric in a five-dimensional space. Thus the Klein quadric describes the parameters of lines in space. See also Sample space Configuration space Data analysis Dimensionality reduction Model selection Parametric equation Parametric surface Phase space References The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses. What does the parameter space define in mathematical models? A. 
The space of all possible parameter values B. The physical dimensions of an object C. The limits of statistical distribution D. The randomness of data points Answer:
A. The space of all possible parameter values
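The sweep-generator document at the top of this block describes a glide sweep (chirp) whose frequency rises logarithmically from a start to a stop frequency. A small numpy sketch of such a signal is given below; the sample rate, frequency range, and duration are arbitrary example values, and the phase formula is the standard exponential-sweep expression rather than anything quoted from the text.

```python
import numpy as np

def log_chirp(f_start, f_stop, duration_s, sample_rate_hz):
    """Constant-amplitude sweep whose instantaneous frequency rises
    exponentially (logarithmically spaced in time) from f_start to f_stop."""
    t = np.arange(0.0, duration_s, 1.0 / sample_rate_hz)
    k = (f_stop / f_start) ** (1.0 / duration_s)        # growth rate per second
    # Phase is the integral of the instantaneous frequency f_start * k**t.
    phase = 2.0 * np.pi * f_start * (k ** t - 1.0) / np.log(k)
    return t, np.sin(phase)

# Example: a 2-second glide sweep over the audio band at 48 kHz.
t, signal = log_chirp(f_start=20.0, f_stop=20_000.0, duration_s=2.0, sample_rate_hz=48_000)
print(signal.shape, signal.min(), signal.max())
```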