Dataset Viewer

title (stringlengths 2-124) | source (stringlengths 39-44) | id (int64 8.14k-35.2M) | text (stringlengths 1-3.27k)
---|---|---|---
Introduction to electromagnetism | https://en.wikipedia.org/wiki?curid=58686423 | 10,471,744 |
##e that light is a form of electromagnetic radiation and to posit that other electromagnetic radiation could exist with different wavelengths. the existence of electromagnetic radiation was proved by heinrich hertz in a series of experiments ranging from 1886 to 1889 in which he discovered the existence of radio waves. the full electromagnetic spectrum ( in order of increasing frequency ) consists of radio waves, microwaves, infrared radiation, visible light, ultraviolet light, x - rays and gamma rays. a further unification of electromagnetism came with einstein ' s special theory of relativity. according to special relativity, observers moving at different speeds relative to one another occupy different observational frames of reference. if one observer is in motion relative to another observer then they experience length contraction where unmoving objects appear closer together to the observer in motion than to the observer at rest. therefore, if an electron is moving at the same speed as the current in a neutral wire, then they experience the flowing electrons in the wire as standing still relative to it and the positive charges as contracted together. in the lab frame, the electron is moving and so feels a magnetic force from the current in the wire but because the wire is neutral it feels no electric force. but in the electron ' s rest frame, the positive charges seem closer together compared to the flowing electrons and so the wire seems positively charged. therefore, in the electron ' s rest frame it feels no magnetic force ( because it is not moving in its own frame ) but it does feel an electric force due to the positively charged wire. this result from relativity proves that magnetic fields are just electric fields in a different reference frame ( and vice versa ) and so the two are different manifestations of the same underlying electromagnetic field. a conductor is a material that allows electrons to flow easily. the most effective conductors are usually metals because they can be described fairly accurately by the free electron model in which electrons delocalize from the atomic nuclei, leaving positive ions surrounded by a cloud of free electrons. examples of good conductors include copper, aluminum, and silver. wires in electronics are often made of copper. in some materials, the electrons are bound to the atomic nuclei and so are not free to move around but the energy required to set them free is low. in these materials, called semiconductors, the conductivity is low at low temperatures but as the temperature is increased the electrons gain more thermal energy and the conductivity increases. silicon is an example of a semiconductors that can be used to create solar panels which become more conductive the more energy they receive from photons
|
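The frame-dependence argument in the excerpt above is usually summarized with the Lorentz transformation of the field components; this is a standard result quoted here for orientation, not text recovered from the dataset row. For a boost with velocity **v** (with γ = 1/√(1 − v²/c²)):

$$
\mathbf{E}'_{\parallel}=\mathbf{E}_{\parallel},\qquad
\mathbf{B}'_{\parallel}=\mathbf{B}_{\parallel},\qquad
\mathbf{E}'_{\perp}=\gamma\left(\mathbf{E}+\mathbf{v}\times\mathbf{B}\right)_{\perp},\qquad
\mathbf{B}'_{\perp}=\gamma\left(\mathbf{B}-\frac{\mathbf{v}}{c^{2}}\times\mathbf{E}\right)_{\perp}.
$$

A field that is purely magnetic in the lab frame therefore acquires an electric component in the electron's rest frame, which is exactly the force balance the passage describes for the current-carrying wire.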
In-place matrix transposition | https://en.wikipedia.org/wiki?curid=11174336 | 12,465,107 |
m " ) or " a " ( " m ", " n " ) jumps discontiguously by " n " in memory ( depending on whether the array is in column - major or row - major format, respectively ). that is, the algorithm does not exploit locality of reference. one solution to improve the cache utilization is to " block " the algorithm to operate on several numbers at once, in blocks given by the cache - line size ; unfortunately, this means that the algorithm depends on the size of the cache line ( it is " cache - aware " ), and on a modern computer with multiple levels of cache it requires multiple levels of machine - dependent blocking. instead, it has been suggested ( frigo " et al. ", 1999 ) that better performance can be obtained by a recursive algorithm : divide the matrix into four submatrices of roughly equal size, transposing the two submatrices along the diagonal recursively and transposing and swapping the two submatrices above and below the diagonal. ( when " n " is sufficiently small, the simple algorithm above is used as a base case, as naively recurring all the way down to " n " = 1 would have excessive function - call overhead. ) this is a cache - oblivious algorithm, in the sense that it can exploit the cache line without the cache - line size being an explicit parameter. for non - square matrices, the algorithms are more complicated. many of the algorithms prior to 1980 could be described as " follow - the - cycles " algorithms. that is, they loop over the cycles, moving the data from one location to the next in the cycle. in pseudocode form : the differences between the algorithms lie mainly in how they locate the cycles, how they find the starting addresses in each cycle, and how they ensure that each cycle is moved exactly once. typically, as discussed above, the cycles are moved in pairs, since " s " and " mn " −1− " s " are in cycles of the same length ( possibly the same cycle ). sometimes, a small scratch array, typically of length " m " + " n " ( e. g. brenner, 1973 ; cate & twigg, 1977 ) is used to keep track of a subset of locations in the array that have been visited, to accelerate the algorithm. in order to determine whether a given cycle has been moved already, the simplest scheme would be to use " o " ( " mn " ) auxiliary storage
|
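A minimal Python sketch of the follow-the-cycles scheme described above, using the simplest O(mn) auxiliary "visited" bookkeeping mentioned at the end of the excerpt rather than the small scratch arrays of Brenner (1973) or Cate & Twigg (1977); the function name and layout conventions are illustrative assumptions.

```python
def transpose_in_place(a, m, n):
    """Follow-the-cycles transpose of an m-by-n row-major matrix stored in the
    flat list `a`; afterwards `a` holds the n-by-m transpose in row-major order."""
    mn = m * n
    visited = [False] * mn          # O(mn) bookkeeping: has this slot been moved?
    for start in range(mn):
        if visited[start]:
            continue
        i, val = start, a[start]
        while True:                 # move values along one permutation cycle
            visited[i] = True
            j = (i % n) * m + i // n    # element (r, c) at r*n + c moves to c*m + r
            a[j], val = val, a[j]
            i = j
            if i == start:
                break
    return a

# transpose_in_place([1, 2, 3, 4, 5, 6], 2, 3) -> [1, 4, 2, 5, 3, 6]
```

The recursive cache-oblivious approach mentioned earlier in the excerpt is a different strategy; the sketch above corresponds to the cycle-following pseudocode whose bookkeeping variants the text goes on to compare.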
Molecular glue | https://en.wikipedia.org/wiki?curid=67037360 | 14,326,614 |
a molecular glue is a small molecule that stabilizes the interaction between two proteins that do not normally interact. the most commonly employed molecular glue induces a novel interaction between a substrate receptor of an e3 ubiquitin ligase and a target protein leading to proteolysis of the target. examples of molecular glues that induce degradation of protein targets include the immunomodulatory imide drug ( also known as imid ; e. g., thalidomide, lenalidomide, pomalidomide ), which generate a novel interaction between a substrate ( e. g., ikzf1 / 3, also known as ikaros / aiolos ) and cereblon, a substrate receptor ( also known as dcaf ) for cullin - ring ubiquitin ligase 4 ( crl4 ). they work in a similar manner to protac molecules, bringing about targeted protein degradation. distinct from protac molecules, molecular glues insert into a naturally occurring ppi interface, with contacts optimized for both the substrate and ligase within the same small molecule entity. the phrase " molecular glue " was coined in 1992 by stuart schreiber in reference to the immunophilins.
|
Prospect theory | https://en.wikipedia.org/wiki?curid=197284 | 2,937,893 |
000 customers, they had four options for the deductibles on their policy ; $ 100, $ 250, $ 500, $ 1000. from this it was found that a $ 500 deductible resulted in a $ 715 annual premium and $ 1000 deductible being $ 615. the customers that chose the $ 500 deductible were paying an additional $ 100 per year even though the chance that a claim will be made is extremely low, and the deductible be paid. under the expected utility framework, this can only be realised through high levels of risk aversion. households place a greater weight on the probability that a claim will be made when choosing a policy, thus it is suggested that the reference point of the household significantly influences the decisions when it comes to premiums and deductibles. this is consistent with the theory that people assign excessive weight to scenarios with low probabilities and insufficient weight to events with high probability. the original version of prospect theory gave rise to violations of first - order stochastic dominance. that is, prospect a might be preferred to prospect b even if the probability of receiving a value x or greater is at least as high under prospect b as it is under prospect a for all values of x, and is greater for some value of x. later theoretical improvements overcame this problem, but at the cost of introducing intransitivity in preferences. a revised version, called cumulative prospect theory overcame this problem by using a probability weighting function derived from rank - dependent expected utility theory. cumulative prospect theory can also be used for infinitely many or even continuous outcomes ( for example, if the outcome can be any real number ). an alternative solution to overcome these problems within the framework of ( classical ) prospect theory has been suggested as well. the reference point in the prospect theory inverse s - shaped graph also could lead to limitations due to it possibly being discontinuous at that point and having a geometric violation. this would lead to limitations in regards to accounting for the zero - outcome effect, the absence of behavioral conditionality in risky decisions as well as limitations in deriving the curve. a transitionary concave - convex universal system was proposed to eliminate this limitation. critics from the field of psychology argued that even if prospect theory arose as a descriptive model, it offers no psychological explanations for the processes stated in it. furthermore, factors that are equally important to decision making processes have not been included in the model, such as emotion. a relatively simple ad hoc decision strategy
|
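The "excessive weight on low probabilities" invoked above is usually made concrete with a probability weighting function. Below is a small sketch using the one-parameter Tversky-Kahneman (1992) form; taking γ ≈ 0.61 (their published estimate for gains) is an assumption of this illustration, and other functional forms exist.

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function:
    small probabilities are overweighted, large ones underweighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# tk_weight(0.01) is roughly 0.055: a 1% chance of a claim is "felt" like a
# 5% chance, the kind of distortion used above to explain deductible choices.
```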
Tensor operator | https://en.wikipedia.org/wiki?curid=39654522 | 10,439,991 |
. examples of vector operators are the momentum, the position, the orbital angular momentum, formula _ 11, and the spin angular momentum, formula _ 12. ( fine print : angular momentum is a vector as far as rotations are concerned, but unlike position or momentum it does not change sign under space inversion, and when one wishes to provide this information, it is said to be a pseudovector. ) scalar, vector and tensor operators can also be formed by products of operators. for example, the scalar product formula _ 13 of the two vector operators, formula _ 11 and formula _ 12, is a scalar operator, which figures prominently in discussions of the spin – orbit interaction. similarly, the quadrupole moment tensor of our example molecule has the nine components here, the indices formula _ 17 and formula _ 18 can independently take on the values 1, 2, and 3 ( or formula _ 7, formula _ 5, and formula _ 3 ) corresponding to the three cartesian axes, the index formula _ 22 runs over all particles ( electrons and nuclei ) in the molecule, formula _ 25 is the formula _ 17th component of the position of this particle. each term in the sum is a tensor operator. in particular, the nine products formula _ 27 together form a second rank tensor, formed by taking the direct product of the vector operator formula _ 28 with itself. and let formula _ 31 be a rotation matrix. according to the rodrigues ' rotation formula, the rotation operator then amounts to the orthonormal basis set for total angular momentum is formula _ 37, where " j " is the total angular momentum quantum number and " m " is the magnetic angular momentum quantum number, which takes values − " j ", − " j " + 1,..., " j " − 1, " j ". a general state for the case of orbital angular momentum, the eigenstates formula _ 46 of the orbital angular momentum operator l and solutions of laplace ' s equation on a 3d sphere are spherical harmonics : where " p " is an associated legendre polynomial, is the orbital angular momentum quantum number, and " m " is the orbital magnetic quantum number which takes the values −, − + 1,... − 1, the formalism of spherical harmonics have wide applications in applied mathematics, and are closely related to the formalism of spherical tensors, as shown below. spherical harmonics are functions of the polar and azimuthal angles, " [UNK] " and " θ
|
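For reference, the algebraic definitions the excerpt leans on (its formula placeholders are unreadable) can be written with the total angular momentum operators Jᵢ in the standard convention; this is textbook material rather than a reconstruction of the original formulas. A scalar operator S and the components Vᵢ of a vector operator satisfy

$$
[S, J_i] = 0, \qquad [V_i, J_j] = i\hbar\,\varepsilon_{ijk}\,V_k ,
$$

and a rank-k spherical tensor operator $T^{(k)}_q$ obeys $[J_z, T^{(k)}_q] = \hbar q\,T^{(k)}_q$ and $[J_\pm, T^{(k)}_q] = \hbar\sqrt{k(k+1)-q(q\pm 1)}\;T^{(k)}_{q\pm 1}$, which is what makes the spherical-harmonic machinery at the end of the excerpt applicable.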
Cleavage (crystal) | https://en.wikipedia.org/wiki?curid=1890880 | 4,407,808 |
mineral, while parting is only found in samples with structural defects. examples of parting include the octahedral parting of magnetite, the rhombohedral and basal parting in corundum, and the basal parting in pyroxenes. cleavage is a physical property traditionally used in mineral identification, both in hand specimen and microscopic examination of rock and mineral studies. as an example, the angles between the prismatic cleavage planes for the pyroxenes ( 88 – 92° ) and the amphiboles ( 56 – 124° ) are diagnostic. crystal cleavage is of technical importance in the electronics industry and in the cutting of gemstones. synthetic single crystals of semiconductor materials are generally sold as thin wafers which are much easier to cleave. simply pressing a silicon wafer against a soft surface and scratching its edge with a diamond scribe is usually enough to cause cleavage ; however, when dicing a wafer to form chips, a procedure of scoring and breaking is often followed for greater control. elemental semiconductors ( si, ge, and diamond ) are diamond cubic, a space group for which octahedral cleavage is observed. this means that some orientations of wafer allow near - perfect rectangles to be cleaved. most other commercial semiconductors ( gaas, insb, etc. ) can be made in the related zinc blende structure, with similar cleavage planes.
|
Genome architecture mapping | https://en.wikipedia.org/wiki?curid=57153310 | 23,245,108 |
data. finally, graph analysis can be applied to the segregation table to locate communities. communities can be defined several ways, such as by cliques, but in this article, community analysis will be focused on centrality. centrality - based communities can be thought of as analogous to celebrities and their fan bases on a social media network. the fans may not interact with each other very much, but they do interact with the celebrity, who is the “ center. ” there are several different types of centrality, including but not limited to degree centrality, eigenvector centrality, and betweenness centrality, which may all result in different communities being defined. something of note is that in our social network analogy above, an eigenvector centrality may not be accurate because one person who follows many celebrities may not have any influence over them. in that case, the graph may be seen as directed. in gam analysis, it is generally assumed that the graph is undirected, so that if eigenvector centrality were to be used it would be accurate. both clique and centrality calculations are computationally complex. similar to the clustering mentioned above, they do not scale well to large problems. slice ( statistical inference of co - segregation ) plays a key role in gam data analysis. it was developed in the laboratory of mario nicodemi to provide a math model to identify the most specific interactions among loci from gam cosegregation data. it estimates the proportion of specific interaction for each pair loci at a given time. it is a kind of likelihood method. the first step of slice is to provide a function of the expected proportion of gam nuclear profiles. then find the best probability result to explain the experimental data. the slice model is based on a hypothesis that the probability of non - interacting loci falls into the same nuclear profile is predictable. the probability is dependent on the distance of these loci. the slice model considers a pair of loci as two types : one is interacting, the other is non - interacting. as per the hypothesis, the proportions of nuclear profiles state can be predicted by mathematical analysis. by deriving a function of the interaction probability, these gam data can also be used to find prominent interactions and explore the sensitivity of gam. slice considers a pair of loci can be interaction or non - interaction across the cell population. the first step of this calculation is to describe a single locus. a pair of loci, a and b, can have
|
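A small, illustrative sketch of the centrality step described above, using NetworkX on a toy co-segregation graph; the node names, edge weights, and the focus on eigenvector and degree centrality are invented for illustration and are not taken from the GAM literature.

```python
import networkx as nx

# toy undirected graph: nodes are genomic windows, edge weights are
# co-segregation scores (all values invented for illustration)
G = nx.Graph()
G.add_weighted_edges_from([
    ("locusA", "locusB", 0.8),
    ("locusA", "locusC", 0.6),
    ("locusA", "locusD", 0.5),
    ("locusB", "locusC", 0.2),
])

eig = nx.eigenvector_centrality(G, weight="weight")  # "celebrity"-style hubs
deg = nx.degree_centrality(G)
hub = max(eig, key=eig.get)      # expected to be "locusA" in this toy graph
```

As the passage notes, treating the graph as undirected is what makes eigenvector centrality a reasonable choice here; on genome-wide graphs these calculations become the scaling bottleneck the text mentions.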
Fermi Gamma-ray Space Telescope | https://en.wikipedia.org/wiki?curid=399678 | 7,067,449 |
the egret instrument on nasa ' s compton gamma ray observatory satellite in the 1990s. several countries produced the components of the lat, who then sent the components for assembly at slac national accelerator laboratory. slac also hosts the lat instrument science operations center, which supports the operation of the lat during the fermi mission for the lat scientific collaboration and for nasa. education and public outreach are important components of the fermi project. the main fermi education and public outreach website at http : / / glast. sonoma. edu offers gateways to resources for students, educators, scientists, and the public. nasa ' s education and public outreach ( e / po ) group operates the fermi education and outreach resources at sonoma state university. the 2011 bruno rossi prize was awarded to bill atwood, peter michelson and the fermi lat team " for enabling, through the development of the large area telescope, new insights into neutron stars, supernova remnants, cosmic rays, binary systems, active galactic nuclei and gamma - ray bursts. " in 2013, the prize was awarded to roger w. romani of leland stanford junior university and alice harding of goddard space flight center for their work in developing the theoretical framework underpinning the many exciting pulsar results from fermi gamma - ray space telescope. the 2014 prize went to tracy slatyer, douglas finkeiner and meng su " for their discovery, in gamma rays, of the large unanticipated galactic structure called the fermi bubbles. " the 2018 prize was awarded to colleen wilson - hodge and the fermi gbm team for the detection of, the first unambiguous and completely independent discovery of an electromagnetic counterpart to a gravitational wave signal ( gw170817 ) that " confirmed that short gamma - ray bursts are produced by binary neutron star mergers and enabled a global multi - wavelength follow - up campaign. "
|
Discovery and development of phosphodiesterase 5 inhibitors | https://en.wikipedia.org/wiki?curid=41069188 | 17,047,703 |
. the amino acids residues, gln817, phe820, val782 and tyr612, are lined in the q pocket, they are highly conserved in all pdes. the amide moiety of the pyrazolopyrimidinone group forms a bidentate hydrogen bond with the ɣ - amide group of gln817. 3d structure of sildenafil is demonstrated in figure 4. pde5 inhibitors are generally well tolerated, with side effects including transient headaches, flushing, dyspepsia, congestion and dizziness. there have also been reports of temporary vision disturbances with sildenafil and, to a lesser extent, vardenafil, and back and muscle pain with tadalafil. these side effects may be attributed to the unintended effects of pde5 inhibitors against other pde isozymes, such as pde1, pde6 and pde11. it is theorised that improved selectivity of pde5 inhibitors may lead to fewer side effects. for example, vardenafil and tadalafil have demonstrated reduced adverse effects probably due to improved selectivity for pde5. however, no highly selective pde5 inhibitors are currently in development. patients who take nitrates, alpha blockers or sgc stimulators within 24 hours of pde5 inhibitor administration ( or 48 hours for tadalafil ) may experience symptomatic hypotension, so concurrent use is contraindicated. pde5 inhibitors are also contraindicated in patients with hereditary eye conditions such as retinitis pigmentosa due to the small increased risk of nonarteritic ischaemic optic neuropathy in patients taking the medication. hearing impairment is one risk factor for those who are using pde5 inhibitors and it has been reported for all available drugs on the market. this problem may be due to high level effect cgmp on cochlear hair cells. it has been reported that pde5 inhibitors ( sildenafil & vardenafil ) cause transient visual disturbances likely due to pde6 inhibition. several reports are about approaches to improve pde5 inhibitors, where as chemical groups have been switched out to increase potency and selectivity, which should potentially lead to drugs with fewer side effects. sildenafil, the first pde5 inhibitor, was discovered through rational drug design programme. the compound was potent and selective over pde5
|
Boolean satisfiability problem | https://en.wikipedia.org/wiki?curid=4715 | 2,258,074 |
¬ " x " ∨ " y " can be rewritten as " x " ∧... ∧ " x " → " y ", that is, if " x "..., " x " are all true, then " y " needs to be true as well. a generalization of the class of horn formulae is that of renameable - horn formulae, which is the set of formulae that can be placed in horn form by replacing some variables with their respective negation. for example, ( " x " ∨ ¬ " x " ) ∧ ( ¬ " x " ∨ " x " ∨ " x " ) ∧ ¬ " x " is not a horn formula, but can be renamed to the horn formula ( " x " ∨ ¬ " x " ) ∧ ( ¬ " x " ∨ " x " ∨ ¬ " y " ) ∧ ¬ " x " by introducing " y " as negation of " x ". checking the existence of such a replacement can be done in linear time ; therefore, the satisfiability of such formulae is in p as it can be solved by first performing this replacement and then checking the satisfiability of the resulting horn formula. another special case is the class of problems where each clause contains xor ( i. e. exclusive or ) rather than ( plain ) or operators. this is in p, since an xor - sat formula can also be viewed as a system of linear equations mod 2, and can be solved in cubic time by gaussian elimination ; see the box for an example. this recast is based on the kinship between boolean algebras and boolean rings, and the fact that arithmetic modulo two forms a finite field. since " a " xor " b " xor " c " evaluates to true if and only if exactly 1 or 3 members of { " a ", " b ", " c " } are true, each solution of the 1 - in - 3 - sat problem for a given cnf formula is also a solution of the xor - 3 - sat problem, and in turn each solution of xor - 3 - sat is a solution of 3 - sat, cf. picture. as a consequence, for each cnf formula, it is possible to solve the xor - 3 - sat problem defined by the formula, and based on the result infer either that the 3 - sat problem is solvable or that the 1 - in - 3 - sat problem is unsolvable
|
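A minimal sketch of the XOR-SAT observation above: each clause is a linear equation over GF(2), so satisfiability reduces to Gaussian elimination (the cubic-time route the text mentions). The clause encoding, bitmask representation, and function name are illustrative assumptions.

```python
def xor_sat(clauses, n):
    """Each clause is (variables, parity): the XOR of the listed variables must
    equal parity.  Returns a satisfying assignment (list of 0/1) or None."""
    pivot_of = {}                       # pivot column -> reduced row (bitmask)
    for vars_, parity in clauses:
        r = 0
        for v in vars_:                 # bit i (i < n) = variable i, bit n = RHS
            r ^= 1 << v
        r |= parity << n
        for col in range(n):            # eliminate against existing pivots
            if not (r >> col) & 1:
                continue
            if col in pivot_of:
                r ^= pivot_of[col]
            else:
                pivot_of[col] = r
                r = 0
                break
        if r == 1 << n:                 # reduced to "0 = 1": unsatisfiable
            return None
    x = [0] * n                         # free variables default to 0
    for col in sorted(pivot_of, reverse=True):
        r = pivot_of[col]
        val = (r >> n) & 1
        for j in range(col + 1, n):
            if (r >> j) & 1:
                val ^= x[j]
        x[col] = val
    return x

# xor_sat([([0, 1], 1), ([1, 2], 0), ([0, 2], 1)], 3) -> [1, 0, 0]
```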
Thomas Hunt Morgan | https://en.wikipedia.org/wiki?curid=31522 | 5,468,409 |
s interest. among other projects that year, morgan completed an experimental study of ctenophore embryology. in naples and through loeb, he became familiar with the " entwicklungsmechanik " ( roughly, " developmental mechanics " ) school of experimental biology. it was a reaction to the vitalistic " naturphilosophie ", which was extremely influential in 19th - century morphology. morgan changed his work from traditional, largely descriptive morphology to experimental embryology that sought physical and chemical explanations for organismal development. at the time, there was considerable scientific debate over the question of how an embryo developed. following wilhelm roux ' s mosaic theory of development, some believed that hereditary material was divided among embryonic cells, which were predestined to form particular parts of a mature organism. driesch and others thought that development was due to epigenetic factors, where interactions between the protoplasm and the nucleus of the egg and the environment could affect development. morgan was in the latter camp ; his work with driesch and jofi demonstrated that blastomeres isolated from sea urchin and ctenophore eggs could develop into complete larvae, contrary to the predictions ( and experimental evidence ) of roux ' s supporters. a related debate involved the role of epigenetic and environmental factors in development ; on this front morgan showed that sea urchin eggs could be induced to divide without fertilization by adding magnesium chloride. loeb continued this work and became well - known for creating fatherless frogs using the method. when morgan returned to bryn mawr in 1895, he was promoted to full professor. morgan ' s main lines of experimental work involved regeneration and larval development ; in each case, his goal was to distinguish internal and external causes to shed light on the roux - driesch debate. he wrote his first book, " the development of the frog ' s egg " ( 1897 ), with the help of jofi. he began a series of studies on different organisms ' ability to regenerate. he looked at grafting and regeneration in tadpoles, fish, and earthworms ; in 1901 he published his research as " regeneration ". beginning in 1900, morgan started working on the problem of sex determination, which he had previously dismissed when nettie stevens discovered the impact of the y chromosome on sex. he also continued to study the evolutionary problems that had been the focus of his earliest work. morgan worked at columbia university for 24 years, from 1904 until 1928 when
|
Cnoidal wave | https://en.wikipedia.org/wiki?curid=22463969 | 13,216,679 |
η " are either all three real, or otherwise one is real and the remaining two are a pair of complex conjugates. in the latter case, with only one real - valued root, there is only one elevation " η " at which " f " ( " η " ) is zero. and consequently also only one elevation at which the surface slope " η ’ " is zero. however, we are looking for wave like solutions, with two elevations — the wave crest and trough ( physics ) — where the surface slope is zero. the conclusion is that all three roots of " f " ( " η " ) have to be real valued. now, from equation ( ) it can be seen that only real values for the slope exist if " f " ( " η " ) is positive. this corresponds with " η " ≤ " η " ≤ " η ", which therefore is the range between which the surface elevation oscillates, see also the graph of " f " ( " η " ). this condition is satisfied with the following representation of the elevation " η " ( " ξ " ) : in agreement with the periodic character of the sought wave solutions and with " ψ " ( " ξ " ) the phase of the trigonometric functions sin and cos. from this form, the following descriptions of various terms in equations ( ) and ( ) can be obtained : using these in equations ( ) and ( ), the following ordinary differential equation relating " ψ " and " ξ " is obtained, after some manipulations : with the right hand side still positive, since " η " − " η " ≥ " η " − " η ". without loss of generality, we can assume that " ψ " ( " ξ " ) is a monotone function, since " f " ( " η " ) has no zeros in the interval " η " < " η " < " η ". so the above ordinary differential equation can also be solved in terms of " ξ " ( " ψ " ) being a function of " ψ " : with " f " ( " ψ " | " m " ) the incomplete elliptic integral of the first kind. the jacobi elliptic functions cn and sn are inverses of " f " ( " ψ " | " m " ) given by first, since " η " is the crest elevation and " η " is the trough elevation, it is convenient to introduce the wave height, defined as " h " = " η " − " η ". consequently, we find
|
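For orientation, the subscripts on the three real roots have been lost in the excerpt (they should read η₁ ≥ η₂ ≥ η₃, with η₁ the crest and η₂ the trough elevation). The closed form these manipulations are usually quoted as leading to is, as a reminder under those assumptions rather than a reconstruction of the missing equations,

$$
\eta(\xi) = \eta_{2} + H\,\operatorname{cn}^{2}\!\left(\left.\tfrac{\xi}{\Delta}\,\right|\,m\right),
\qquad H = \eta_{1}-\eta_{2},\qquad m = \frac{\eta_{1}-\eta_{2}}{\eta_{1}-\eta_{3}},
$$

with cn the Jacobi elliptic cosine, m the elliptic parameter, and Δ a width parameter set by the roots; this matches the wave-height definition H = η₁ − η₂ given at the end of the excerpt.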
Bi-specific T-cell engager | https://en.wikipedia.org/wiki?curid=18879982 | 10,007,370 |
is re - engineering some of the currently used conventional antibodies like trastuzumab ( targeting her2 / neu ), cetuximab and panitumumab ( both targeting the egf receptor ), using the bite approach.
|
Thermodynamic databases for pure substances | https://en.wikipedia.org/wiki?curid=7760322 | 14,369,598 |
database is its change in value during the formation of a compound from the standard - state elements, or for any standard chemical reaction ( δ " g " ° or δ " g " ° ). compilers of thermochemical databases may contain some additional thermodynamic functions. for example, the absolute enthalpy of a substance " h " ( " t " ) is defined in terms of its formation enthalpy and its heat content as follows : for an element, " h " ( " t " ) and [ " h " - " h " ] are identical at all temperatures because δ " h " ° is zero, and of course at 298. 15 k, " h " ( " t " ) = 0. for a compound : similarly, the absolute gibbs energy " g " ( " t " ) is defined by the absolute enthalpy and entropy of a substance : some tables may also contain the gibbs energy function ( " h " ° – " g " ° ) / " t " which is defined in terms of the entropy and heat content. the gibbs energy function has the same units as entropy, but unlike entropy, exhibits no discontinuity at normal phase transition temperatures. the log of the equilibrium constant " k " is often listed, which is calculated from the defining thermodynamic equation. a thermodynamic database consists of sets of critically evaluated values for the major thermodynamic functions. originally, data was presented as printed tables at 1 atm and at certain temperatures, usually 100° intervals and at phase transition temperatures. some compilations included polynomial equations that could be used to reproduce the tabular values. more recently, computerized databases are used which consist of the equation parameters and subroutines to calculate specific values at any temperature and prepare tables for printing. computerized databases often include subroutines for calculating reaction properties and displaying the data as charts. thermodynamic data comes from many types of experiments, such as calorimetry, phase equilibria, spectroscopy, composition measurements of chemical equilibrium mixtures, and emf measurements of reversible reactions. a proper database takes all available information about the elements and compounds in the database, and assures that the presented results are " internally consistent ". internal consistency requires that all values of the thermodynamic functions are correctly calculated by application of the appropriate thermodynamic equations. for example, values of the gibbs energy obtained from high - temperature equilibrium emf methods must be
|
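Written out explicitly (standard conventions, with 298.15 K as the reference temperature), the quantities the passage describes are

$$
H(T) = \Delta_{f}H^{\circ}(298.15\,\mathrm{K}) + \bigl[H^{\circ}(T)-H^{\circ}(298.15\,\mathrm{K})\bigr],
\qquad
G(T) = H(T) - T\,S(T),
$$

$$
\mathrm{gef}(T) = \frac{H^{\circ}_{298}-G^{\circ}_{T}}{T},
\qquad
\log_{10} K = -\frac{\Delta_{r}G^{\circ}}{RT\ln 10},
$$

so an element has H(T) = H°(T) − H°(298.15 K) because its formation enthalpy is zero, exactly as stated above; the symbol gef for the Gibbs energy function is a notational choice of this note, not of the source.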
Egyptian geometry | https://en.wikipedia.org/wiki?curid=29130644 | 11,772,744 |
square. this problem ' s result is used in problem 50. that this octagonal figure, whose area is easily calculated, so accurately approximates the area of the circle is just plain good luck. obtaining a better approximation to the area using finer divisions of a square and a similar argument is not simple. problem 50 of the rmp finds the area of a round field of diameter 9 khet. this is solved by using the approximation that circular field of diameter 9 has the same area as a square of side 8. problem 52 finds the area of a trapezium with ( apparently ) equally slanting sides. the lengths of the parallel sides and the distance between them being the given numbers. several problems compute the volume of cylindrical granaries ( 41, 42, and 43 of the rmp ), while problem 60 rmp seems to concern a pillar or a cone instead of a pyramid. it is rather small and steep, with a seked ( slope ) of four palms ( per cubit ). a problem appearing in section iv. 3 of the lahun mathematical papyri computes the volume of a granary with a circular base. a similar problem and procedure can be found in the rhind papyrus ( problem 43 ). several problems in the moscow mathematical papyrus ( problem 14 ) and in the rhind mathematical papyrus ( numbers 44, 45, 46 ) compute the volume of a rectangular granary. problem 14 of the moscow mathematical papyrus computes the volume of a truncated pyramid, also known as a frustum. problem 56 of the rmp indicates an understanding of the idea of geometric similarity. this problem discusses the ratio run / rise, also known as the seked. such a formula would be needed for building pyramids. in the next problem ( problem 57 ), the height of a pyramid is calculated from the base length and the " seked " ( egyptian for slope ), while problem 58 gives the length of the base and the height and uses these measurements to compute the seked. in problem 59 part 1 computes the seked, while the second part may be a computation to check the answer : " if you construct a pyramid with base side 12 [ cubits ] and with a seked of 5 palms 1 finger ; what is its altitude? "
|
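The "square of side 8 for a circle of diameter 9" rule of problem 50 fixes an implicit value of π; a quick check of the arithmetic:

$$
\pi\left(\tfrac{9}{2}\right)^{2} \approx 8^{2}
\;\Longrightarrow\;
\pi \approx \frac{4 \cdot 64}{81} = \frac{256}{81} \approx 3.1605,
$$

about 0.6 % above the modern value, which is why the octagon-based argument in the earlier problem is described above as surprisingly accurate.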
Data center management | https://en.wikipedia.org/wiki?curid=58899779 | 8,822,822 |
may have to increase. data center asset management ( also referred to as " inventory management " ) is the set of business practices that join financial, contractual and inventory functions to support life cycle management and strategic decision making for the it environment. assets include all elements of software and hardware that are found in the business environment. it asset management generally uses automation to manage the discovery of assets so inventory can be compared to license entitlements. full business management of it assets requires a repository of multiple types of information about the asset, as well as integration with other systems such as supply chain, help desk, procurement and hr systems and itsm. hardware asset management entails the management of the physical components of computers and computer networks, from acquisition through disposal. common business practices include request and approval process, procurement management, life cycle management, redeployment and disposal management. a key component is capturing the financial information about the hardware life cycle which aids the organization in making business decisions based on meaningful and measurable financial objectives. software asset management is a similar process, focusing on software assets, including licenses. standards for this aspect of data center management are part of iso / iec 19770. data center - infrastructure management ( dcim ) is the integration of information technology ( it ) and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center ' s critical systems. achieved through the implementation of specialized software, hardware and sensors, dcim enables common, real - time monitoring and management platform for all interdependent systems across it and facility infrastructures. dcim products can help data center managers identify and eliminate sources of risk and improve availability of critical it systems. they can also be used to identify interdependencies between facility and it infrastructures to alert the facility manager to gaps in system redundancy, and provide dynamic, holistic benchmarks on power consumption and efficiency to measure the effectiveness of " green it " initiatives. important data center metrics include those regarding energy efficiency and use of servers, storage, and staff. in too many cases, disk capacity is vastly underused and servers run at 20 % use or less. more effective automation tools can also improve the number of servers or virtual machines that a single admin can handle. dcim providers are increasingly linking with computational fluid dynamics providers to predict complex airflow patterns in the data center. the cfd component is necessary to quantify the impact of planned future changes on cooling resilience, capacity and efficiency. information technology operations, or it operations (
|
Green fluorescent protein | https://en.wikipedia.org/wiki?curid=143533 | 3,115,675 |
s size and molecular mass, and can impair the protein ' s natural function or change its location or trajectory of transport within the cell. in the 1960s and 1970s, gfp, along with the separate luminescent protein aequorin ( an enzyme that catalyzes the breakdown of luciferin, releasing light ), was first purified from the jellyfish " aequorea victoria " and its properties studied by osamu shimomura. in " a. victoria ", gfp fluorescence occurs when aequorin interacts with ca ions, inducing a blue glow. some of this luminescent energy is transferred to the gfp, shifting the overall color towards green. however, its utility as a tool for molecular biologists did not begin to be realized until 1992 when douglas prasher reported the cloning and nucleotide sequence of wtgfp in " gene ". the funding for this project had run out, so prasher sent cdna samples to several labs. the lab of martin chalfie expressed the coding sequence of wtgfp, with the first few amino acids deleted, in heterologous cells of " e. coli " and " c. elegans ", publishing the results in " science " in 1994. frederick tsuji ' s lab independently reported the expression of the recombinant protein one month later. remarkably, the gfp molecule folded and was fluorescent at room temperature, without the need for exogenous cofactors specific to the jellyfish. although this near - wtgfp was fluorescent, it had several drawbacks, including dual peaked excitation spectra, ph sensitivity, chloride sensitivity, poor fluorescence quantum yield, poor photostability and poor folding at 37 °c. the first reported crystal structure of a gfp was that of the s65t mutant by the remington group in " science " in 1996. one month later, the phillips group independently reported the wild - type gfp structure in " nature biotechnology ". these crystal structures provided vital background on chromophore formation and neighboring residue interactions. researchers have modified these residues by directed and random mutagenesis to produce the wide variety of gfp derivatives in use today. further research into gfp has shown that it is resistant to detergents, proteases, guanidinium chloride ( gdmcl ) treatments, and drastic temperature changes. due to the potential for widespread usage and the evolving needs of researchers, many different mutants of gfp have been
|
Chemical transport reaction | https://en.wikipedia.org/wiki?curid=12013342 | 16,803,067 |
##o is used in halogen lamps. the tungsten is evaporated from the tungsten filament and converted with traces of oxygen and iodine into the woi, at the high temperatures near the filament the compound decomposes back to tungsten, oxygen and iodine.
|
Birthday problem | https://en.wikipedia.org/wiki?curid=73242 | 599,979 |
? often, people ' s intuition is that the answer is above. most people ' s intuition is that it is in the thousands or tens of thousands, while others feel it should at least be in the hundreds. the correct answer is 23. the reason is that the correct comparison is to the number of partitions of the weights into left and right. there are different partitions for weights, and the left sum minus the right sum can be thought of as a new random quantity for each partition. the distribution of the sum of weights is approximately gaussian, with a peak at and width, so that when is approximately equal to the transition occurs. 2 is about 4 million, while the width of the distribution is only 5 million. arthur c. clarke ' s novel " a fall of moondust ", published in 1961, contains a section where the main characters, trapped underground for an indefinite amount of time, are celebrating a birthday and find themselves discussing the validity of the birthday problem. as stated by a physicist passenger : " if you have a group of more than twenty - four people, the odds are better than even that two of them have the same birthday. " eventually, out of 22 present, it is revealed that two characters share the same birthday, may 23.
|
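A minimal check of the classical birthday figure discussed above, assuming uniform, independent birthdays and ignoring 29 February:

```python
from math import prod

def p_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday."""
    return 1 - prod((days - i) / days for i in range(n))

# p_shared_birthday(23) is about 0.507, so 23 people already give
# better-than-even odds; Clarke's "more than twenty-four" is slightly conservative.
```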
Numerical integration | https://en.wikipedia.org/wiki?curid=170089 | 3,029,346 |
error, if the derivatives of " f " are available. this integration method can be combined with interval arithmetic to produce computer proofs and " verified " calculations. several methods exist for approximate integration over unbounded intervals. the standard technique involves specially derived quadrature rules, such as gauss - hermite quadrature for integrals on the whole real line and gauss - laguerre quadrature for integrals on the positive reals. monte carlo methods can also be used, or a change of variables to a finite interval ; e. g., for the whole line one could use the quadrature rules discussed so far are all designed to compute one - dimensional integrals. to compute integrals in multiple dimensions, one approach is to phrase the multiple integral as repeated one - dimensional integrals by applying fubini ' s theorem ( the tensor product rule ). this approach requires the function evaluations to grow exponentially as the number of dimensions increases. three methods are known to overcome this so - called " curse of dimensionality ". a great many additional techniques for forming multidimensional cubature integration rules for a variety of weighting functions are given in the monograph by stroud. monte carlo methods and quasi - monte carlo methods are easy to apply to multi - dimensional integrals. they may yield greater accuracy for the same number of function evaluations than repeated integrations using one - dimensional methods. a large class of useful monte carlo methods are the so - called markov chain monte carlo algorithms, which include the metropolis – hastings algorithm and gibbs sampling. sparse grids were originally developed by smolyak for the quadrature of high - dimensional functions. the method is always based on a one - dimensional quadrature rule, but performs a more sophisticated combination of univariate results. however, whereas the tensor product rule guarantees that the weights of all of the cubature points will be positive if the weights of the quadrature points were positive, smolyak ' s rule does not guarantee that the weights will all be positive. bayesian quadrature is a statistical approach to the numerical problem of computing integrals and falls under the field of probabilistic numerics. it can provide a full handling of the uncertainty over the solution of the integral expressed as a gaussian process posterior variance. can be reduced to an initial value problem for an ordinary differential equation by applying the first part of the fundamental theorem of calculus. by differentiating both sides of the above with respect to the
|
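The change-of-variables route mentioned above (its formula is cut off in the excerpt) can be illustrated with one common substitution, x = t/(1 − t²), which maps (−1, 1) onto the whole real line; this particular choice is an example, not necessarily the one the original article used:

$$
\int_{-\infty}^{+\infty} f(x)\,dx = \int_{-1}^{+1} f\!\left(\frac{t}{1-t^{2}}\right)\frac{1+t^{2}}{\left(1-t^{2}\right)^{2}}\,dt ,
$$

after which any finite-interval quadrature rule applies, provided the transformed integrand remains well behaved as t → ±1.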
Ring current | https://en.wikipedia.org/wiki?curid=146644 | 16,290,973 |
a ring current is an electric current carried by charged particles trapped in a planet ' s magnetosphere. it is caused by the longitudinal drift of energetic ( 10 – 200 kev ) particles. earth ' s ring current is responsible for shielding the lower latitudes of the earth from magnetospheric electric fields. it therefore has a large effect on the electrodynamics of geomagnetic storms. the ring current system consists of a band, at a distance of 3 to 8 " r ", which lies in the equatorial plane and circulates clockwise around the earth ( when viewed from the north ). the particles of this region produce a magnetic field in opposition to the earth ' s magnetic field and so an earthly observer would observe a decrease in the magnetic field in this area. the negative deflection of the earth ' s magnetic field due to the ring current is measured by the dst index. the ring current energy is mainly carried around by the ions, most of which are protons. however, one also sees alpha particles in the ring current, a type of ion that is plentiful in the solar wind. in addition, a certain percentage are o oxygen ions, similar to those in the ionosphere of earth, though much more energetic. this mixture of ions suggests that ring current particles probably come from more than one source. during a geomagnetic storm, the number of particles in the ring current will increase. as a result, there is a decrease in the effects of geomagnetic field.
|
Dirac large numbers hypothesis | https://en.wikipedia.org/wiki?curid=1331039 | 9,244,847 |
intelligent beings since they parametrize fusion of hydrogen in stars and hence carbon - based life would not arise otherwise. various authors have introduced new sets of numbers into the original " coincidence " considered by dirac and his contemporaries, thus broadening or even departing from dirac ' s own conclusions. jordan ( 1947 ) noted that the mass ratio for a typical star ( specifically, a star of the chandrasekhar mass, itself a constant of nature, approx. 1. 44 solar masses ) and an electron approximates to 10, an interesting variation on the 10 and 10 that are typically associated with dirac and eddington respectively. ( the physics defining the chandrasekhar mass produces a ratio that is the −3 / 2 power of the gravitational fine - structure constant, 10. ) several authors have recently identified and pondered the significance of yet another large number, approximately 120 orders of magnitude. this is for example the ratio of the theoretical and observational estimates of the energy density of the vacuum, which nottale ( 1993 ) and matthews ( 1997 ) associated in an lnh context with a scaling law for the cosmological constant. carl friedrich von weizsacker identified 10 with the ratio of the universe ' s volume to the volume of a typical nucleon bounded by its compton wavelength, and he identified this ratio with the sum of elementary events or bits of information in the universe. valev ( 2019 ) found equation connecting cosmological parameters ( for example density of the universe ) and planck units ( for example planck density ). this ratio of densities, and other ratios ( using four fundamental constants : speed of light in vacuum c, newtonian constant of gravity g, reduced planck constant [UNK], and hubble constant h ) computes to an exact number,. this provides evidence of the dirac large numbers hypothesis by connecting the macro - world and the micro - world.
|
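For context, the canonical "large number" behind the coincidences discussed above is the ratio of the electrostatic to the gravitational force between a proton and an electron; the figures below are a standard textbook estimate, not values recovered from the excerpt (whose exponents have been stripped, e.g. "approximates to 10"):

$$
\frac{e^{2}/(4\pi\varepsilon_{0})}{G\,m_{p}\,m_{e}} \approx \frac{2.3\times 10^{-28}\ \mathrm{N\,m^{2}}}{1.0\times 10^{-67}\ \mathrm{N\,m^{2}}} \approx 2\times 10^{39},
$$

which is the order of magnitude Dirac and Eddington compared with the age of the universe in atomic units, and against which the roughly 10^120-scale vacuum-energy ratio mentioned above is a later variation.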
Chromosomal crossover | https://en.wikipedia.org/wiki?curid=64045 | 3,556,151 |
. an msh4 hypomorphic ( partially functional ) mutant of " s. cerevisiae " showed a 30 % genome wide reduction in crossover numbers, and a large number of meioses with non exchange chromosomes. nevertheless, this mutant gave rise to spore viability patterns suggesting that segregation of non - exchange chromosomes occurred efficiently. thus in " s. cerevisiae " proper segregation apparently does not entirely depend on crossovers between homologous pairs. the grasshopper " melanoplus femur - rubrum " was exposed to an acute dose of x - rays during each individual stage of meiosis, and chiasma frequency was measured. irradiation during the leptotene - zygotene stages of meiosis ( that is, prior to the pachytene period in which crossover recombination occurs ) was found to increase subsequent chiasma frequency. similarly, in the grasshopper " chorthippus brunneus ", exposure to x - irradiation during the zygotene - early pachytene stages caused a significant increase in mean cell chiasma frequency. chiasma frequency was scored at the later diplotene - diakinesis stages of meiosis. these results suggest that x - rays induce dna damages that are repaired by a crossover pathway leading to chiasma formation. double strand breaks ( dsbs ) are repaired by two pathways to generate crossovers in eukaryotes. the majority of them are repaired by mutl homologs mlh1 and mlh3, which defines the class i crossovers. the remaining are the result of the class ii pathway, which is regulated by mus81 endonuclease. there are interconnections between these two pathways — class i crossovers can compensate for the loss of class ii pathway. in mus81 knockout mice, class i crossovers are elevated, while total crossover counts at chiasmata are normal. however, the mechanisms underlining this crosstalk are not well understood. a recent study suggests that a scaffold protein called slx4 may participate in this regulation. specifically, slx4 knockout mice largely phenocopies the mus81 knockout — once again, an elevated class i crossovers while normal chiasmata count. in most eukaryotes, a cell carries two versions of each gene, each referred to as an allele. each parent passes on one allele to each offspring. an individual game
|
MAPK networks | https://en.wikipedia.org/wiki?curid=54942000 | 30,030,076 |
pitzschke 3 ). avrpto interacts with bak1 and interrupts the binding of fls2. " pseudomonas syringae " have hopai1, which is a phosphothreonin lyase, and dephosphorylates the threonine residue at the upstream mapkks. hopai1 interacts with mpk3 and mpk6 allowing for flg22 activation to occur. in certain soil - borne pathogens that carry flagellin variants cannot be detected by fls2, but there is still a triggered innate immune response. the immune response has been shown to be from the ef - tu protein. flg22, elf18, fls2 and efr have receptors that are in the same subfamily of lrr - rlks, lrrxii. this means that elf18 and flg22 induce extracellular alkalization, rapid activation of mapks, and gene responses that are similar to each other. although there appears to be an important relationship between mapks with ef - tu - triggered defense, the evidence remains to be unclear. the reason for this unclear relationship is because of " agrobacterium tumefaciens, " which infects into segments of plant dna. efr1 mutants do not recognize ef - tu, but there is no research on mapk activities and " efr1. " initiation of defense signaling can be a positive effect to the plant pathogens because activating mpk3 in response to flg22 causes phosphorylation and translocation of vire2 interacting protein 1 ( vip1 ). vip1 serves as a shuttle for the pathogenic t - dna, but the induction of defense genes can occur as well. this allows for the spreading and cessation of the pathogen in the plant, but the pathogen can overcome the problem by attacking vip1 for proteasome degradation by virf, which is a virulence factor that encodes an f - box protein.
|
Clausius–Clapeyron relation | https://en.wikipedia.org/wiki?curid=1780425 | 3,081,199 |
the clausius – clapeyron relation, named after rudolf clausius and benoit paul emile clapeyron, specifies the temperature dependence of pressure, most importantly vapor pressure, at a discontinuous phase transition between two phases of matter of a single constituent. its relevance to meteorology and climatology is the increase of the water - holding capacity of the atmosphere by about 7 % for every 1 °c ( 1. 8 °f ) rise in temperature. on a pressure – temperature ( " p " – " t " ) diagram, the line separating the two phases is known as the coexistence curve. the clapeyron relation gives the slope of the tangents to this curve. mathematically, where formula _ 2 is the slope of the tangent to the coexistence curve at any point, formula _ 3 is the specific latent heat, formula _ 4 is the temperature, formula _ 5 is the specific volume change of the phase transition, and formula _ 6 is the specific entropy change of the phase transition. the clausius – clapeyron equation expresses this in a more convenient form just in terms of the latent heat, for moderate temperatures and pressures. using the state postulate, take the specific entropy formula _ 8 for a homogeneous substance to be a function of specific volume formula _ 9 and temperature formula _ 4. the clausius – clapeyron relation characterizes behavior of a closed system during a phase change at constant temperature and pressure. therefore, where formula _ 14 is the pressure. since pressure and temperature are constant, the derivative of pressure with respect to temperature does not change. therefore, the partial derivative of specific entropy may be changed into a total derivative and the total derivative of pressure with respect to temperature may be factored out when integrating from an initial phase formula _ 16 to a final phase formula _ 17, to obtain where formula _ 19 and formula _ 20 are respectively the change in specific entropy and specific volume. given that a phase change is an internally reversible process, and that our system is closed, the first law of thermodynamics holds where formula _ 22 is the internal energy of the system. given constant pressure and temperature ( during a phase change ) and the definition of specific enthalpy formula _ 23, we obtain this result ( also known as the clapeyron equation ) equates the slope formula _ 2 of the coexistence curve formula _ 33 to the function formula _ 34 of the specific latent heat formula _ 3, the temperature formula _ 4, and
|
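With the formula placeholders spelled out, the two central relations described above are the Clapeyron equation and its Clausius–Clapeyron approximation, the latter obtained by treating the vapour as an ideal gas and neglecting the condensed-phase volume:

$$
\frac{dP}{dT} = \frac{\Delta s}{\Delta v} = \frac{L}{T\,\Delta v},
\qquad
\frac{dP}{dT} \approx \frac{L\,P}{R_{\mathrm{specific}}\,T^{2}}
\;\Longrightarrow\;
P \approx P_{0}\exp\!\left[-\frac{L}{R_{\mathrm{specific}}}\left(\frac{1}{T}-\frac{1}{T_{0}}\right)\right],
$$

where L is the specific latent heat and the integrated form assumes L roughly constant; the "about 7 % per °C" rise in atmospheric water-holding capacity quoted above is L/(R_v T²) evaluated near typical surface temperatures.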
Xenon isotope geochemistry | https://en.wikipedia.org/wiki?curid=70939631 | 29,425,436 |
time this quantity was p, the interval between t and t is given by the law of radioactive decay as here formula _ 3 is the decay constant of the radioisotope, which is the probability of decay per nucleus per unit time. the decay constant is related to the half life t, by t = ln ( 2 ) / formula _ 3 the i - xe system was first applied in 1975 to estimate the age of the earth. for all xe isotopes, the initial isotope composition of iodine in the earth is given by where formula _ 6 is the isotopic ratios of iodine at the time that earth primarily formed, formula _ 7 is the isotopic ratio of iodine at the end of stellar nucleosynthesis, and formula _ 8 is the time interval between the end of stellar nucleosynthesis and the formation of the earth. the estimated iodine - 127 concentration in the bulk silicate earth ( bse ) ( = crust + mantle average ) ranges from 7 to 10 parts per billion ( ppb ) by mass. if the bse represents earth ' s chemical composition, the total i in the bse ranges from 2. 26×10 to 3. 23×10 moles. the meteorite bjurbole is 4. 56 billion years old with an initial i / i ratio of 1. 1×10, so an equation can be derived as where formula _ 10 is the interval between the formation of the earth and the formation of meteorite bjurbole. given the half life of i of 15. 7 myr, and assuming that all the initial i has decayed to xe, the following equation can be derived : xe in the modern atmosphere is 3. 63×10 grams. the iodine content for bse lies between 10 and 12 ppb by mass. consequently, formula _ 12 should be 108 myr, i. e., the xe - closure age is 108 myr younger than the age of meteorite bjurbole. the estimated xe closure time was ~ 4. 45 billion years ago when the growing earth started to retain xe in its atmosphere, which is coincident with ages derived from other geochronology dating methods. there are some disputes about using i - xe dating to estimate the xe closure time. first, in the early solar system, planetesimals collided and grew into larger bodies that accreted to form the earth. but there could be a 10 to 10
|
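The bookkeeping behind the I-Xe chronometer sketched above is the decay law for the ratio of extinct ¹²⁹I to stable ¹²⁷I (the exponents are stripped in the excerpt):

$$
\left(\frac{^{129}\mathrm{I}}{^{127}\mathrm{I}}\right)_{t} = \left(\frac{^{129}\mathrm{I}}{^{127}\mathrm{I}}\right)_{0} e^{-\lambda t},
\qquad \lambda = \frac{\ln 2}{t_{1/2}},\quad t_{1/2} \approx 15.7\ \mathrm{Myr},
$$

so the interval between two reservoirs that closed with ratios R₁ > R₂ is Δt = (1/λ) ln(R₁/R₂); this is how the roughly 108 Myr offset from the Bjurböle meteorite, and from it the ~4.45 Ga xenon closure age, is obtained in the passage.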
Orientation (vector space) | https://en.wikipedia.org/wiki?curid=1391942 | 9,847,951 |
as an instance of stokes ' theorem. a closed interval is a one - dimensional manifold with boundary, and its boundary is the set. in order to get the correct statement of the fundamental theorem of calculus, the point should be oriented positively, while the point should be oriented negatively. the one - dimensional case deals with a line which may be traversed in one of two directions. there are two orientations to a line just as there are two orientations to a circle. in the case of a line segment ( a connected subset of a line ), the two possible orientations result in directed line segments. an orientable surface sometimes has the selected orientation indicated by the orientation of a line perpendicular to the surface. for any " n " - dimensional real vector space " v " we can form the " k " th - exterior power of " v ", denoted λ " v ". this is a real vector space of dimension formula _ 9. the vector space λ " v " ( called the " top exterior power " ) therefore has dimension 1. that is, λ " v " is just a real line. there is no " a priori " choice of which direction on this line is positive. an orientation is just such a choice. any nonzero linear form " ω " on λ " v " determines an orientation of " v " by declaring that " x " is in the positive direction when " ω " ( " x " ) > 0. to connect with the basis point of view we say that the positively - oriented bases are those on which " ω " evaluates to a positive number ( since " ω " is an " n " - form we can evaluate it on an ordered set of " n " vectors, giving an element of r ). the form " ω " is called an orientation form. if { " e " } is a privileged basis for " v " and { " e " } is the dual basis, then the orientation form giving the standard orientation is. the connection of this with the determinant point of view is : the determinant of an endomorphism formula _ 10 can be interpreted as the induced action on the top exterior power. let " b " be the set of all ordered bases for " v ". then the general linear group gl ( " v " ) acts freely and transitively on " b ". ( in fancy language, " b " is a gl ( " v " ) - torsor ). this means that as a manifold, " b " is
|
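Concretely (the displayed formula is missing from the excerpt), if {e₁, …, eₙ} is the privileged basis and {e¹, …, eⁿ} its dual basis, the orientation form giving the standard orientation is commonly taken to be

$$
\omega = e^{1}\wedge e^{2}\wedge\cdots\wedge e^{n},
\qquad
\omega(v_{1},\dots,v_{n}) = \det\bigl[v_{j}^{\,i}\bigr],
$$

so an ordered basis is positively oriented exactly when the determinant of its coordinate matrix with respect to {eᵢ} is positive, which is the bridge to the determinant point of view that the excerpt goes on to describe.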
K-way merge algorithm | https://en.wikipedia.org/wiki?curid=48597951 | 8,155,686 |
so the winner of a game is the smaller one of both elements. for k - way merging, it is more efficient to only store the loser of each game ( see image ). the data structure is therefore called a loser tree. when building the tree or replacing an element with the next one from its list, we still promote the winner of the game to the top. the tree is filled like in a sports match but the nodes only store the loser. usually, an additional node above the root is added that represents the overall winner. every leaf stores a pointer to one of the input arrays. every inner node stores a value and an index. the index of an inner node indicates which input array the value comes from. the value contains a copy of the first element of the corresponding input array. the algorithm iteratively appends the minimum element to the result and then removes the element from the corresponding input list. it updates the nodes on the path from the updated leaf to the root ( " replacement selection " ). the removed element is the overall winner. therefore, it has won each game on the path from the input array to the root. when selecting a new element from the input array, the element needs to compete against the previous losers on the path to the root. when using a loser tree, the partner for replaying the games is already stored in the nodes. the loser of each replayed game is written to the node and the winner is iteratively promoted to the top. when the root is reached, the new overall winner was found and can be used in the next round of merging. the images of the tournament tree and the loser tree in this section use the same data and can be compared to understand the way a loser tree works. a tournament tree can be represented as a balanced binary tree by adding sentinels to the input lists ( i. e. adding a member to the end of each list with a value of infinity ) and by adding null lists ( comprising only a sentinel ) until the number of lists is a power of two. the balanced tree can be stored in a single array. the parent element can be reached by dividing the current index by two. when one of the leaves is updated, all games from the leaf to the root are replayed. in the following pseudocode, an object oriented tree is used instead of an array because it is easier to understand. additionally, the number of lists to merge is assumed to be a power of two. in the beginning, the tree is first created in time
|
Fermi Gamma-ray Space Telescope
|
https://en.wikipedia.org/wiki?curid=399678
| 7,067,449 |
sky scanning mode of observing ". fermi switched to " sky survey mode " on 26 june 2008 so as to begin sweeping its field of view over the entire sky every three hours ( every two orbits ). on 30 april 2013, nasa revealed that the telescope had narrowly avoided a collision a year earlier with a defunct cold war - era soviet spy satellite, kosmos 1805, in april 2012. orbital predictions several days earlier indicated that the two satellites were expected to occupy the same point in space within 30 milliseconds of each other. on 3 april, telescope operators decided to stow the satellite ' s high - gain parabolic antenna, rotate the solar panels out of the way and to fire fermi ' s rocket thrusters for one second to move it out of the way. even though the thrusters had been idle since the telescope had been placed in orbit nearly five years earlier, they worked correctly and potential disaster was thus avoided. in june 2015, the fermi lat collaboration released " pass 8 lat data ". iterations of the analysis framework used by lat are called " passes " and at launch fermi lat data was analyzed using pass 6. significant improvements to pass 6 were included in pass 7 which debuted in august 2011. every detection by the fermi lat since its launch, was reexamined with the latest tools to learn how the lat detector responded to both each event and to the background. this improved understanding led to two major improvements : gamma - rays that had been missed by previous analysis were detected and the direction they arrived from was determined with greater accuracy. the impact of the latter is to sharpen fermi lat ' s vision as illustrated in the figure on the right. pass 8 also delivers better energy measurements and a significantly increased effective area. the entire mission dataset was reprocessed. these improvements have the greatest impact on both the low and high ends of the range of energy fermi lat can detect - in effect expanding the energy range within which lat can make useful observations. the improvement in the performance of fermi lat due to pass 8 is so dramatic that this software update is sometimes called the cheapest satellite upgrade in history. among numerous advances, it allowed for a better search for galactic spectral lines from dark matter interactions, analysis of extended supernova remnants, and to search for extended sources in the galactic plane. for almost all event classes, version p8r2 had a residual background that was not fully isotropic. this anisotropy
|
Transcriptome instability
|
https://en.wikipedia.org/wiki?curid=56255646
| 32,443,337 |
transcriptome instability is a genome - wide, pre - mrna splicing - related characteristic of certain cancers. in general, pre - mrna splicing is dysregulated in a high proportion of cancerous cells. for certain types of cancer, like in colorectal and prostate, the number of splicing errors per cancer has been shown to vary greatly between individual cancers, a phenomenon referred to as transcriptome instability. transcriptome instability correlates significantly with reduced expression level of splicing factor genes. mutation of " dnmt3a " contributes to development of hematologic malignancies, and " dnmt3a " - mutated cell lines exhibit transcriptome instability as compared to their isogenic wildtype counterparts.
|
Richard Feynman
|
https://en.wikipedia.org/wiki?curid=25523
| 261,353 |
##nman ' s that told how to implement renormalization. feynman was prompted to publish his ideas in the " physical review " in a series of papers over three years. his 1948 papers on " a relativistic cut - off for classical electrodynamics " attempted to explain what he had been unable to get across at pocono. his 1949 paper on " the theory of positrons " addressed the schrodinger equation and dirac equation, and introduced what is now called the feynman propagator. finally, in papers on the " mathematical formulation of the quantum theory of electromagnetic interaction " in 1950 and " an operator calculus having applications in quantum electrodynamics " in 1951, he developed the mathematical basis of his ideas, derived familiar formulae and advanced new ones. while papers by others initially cited schwinger, papers citing feynman and employing feynman diagrams appeared in 1950, and soon became prevalent. students learned and used the powerful new tool that feynman had created. computer programs were later written to evaluate feynman diagrams, enabling physicists to use quantum field theory to make high - precision predictions. marc kac adapted feynman ' s technique of summing over possible histories of a particle to the study of parabolic partial differential equations, yielding what is now known as the feynman – kac formula, the use of which extends beyond physics to many applications of stochastic processes. to schwinger, however, the feynman diagram was " pedagogy, not physics ". by 1949, feynman was becoming restless at cornell. he never settled into a particular house or apartment, living in guest houses or student residences, or with married friends " until these arrangements became sexually volatile ". he liked to date undergraduates, hire prostitutes, and sleep with the wives of friends. he was not fond of ithaca ' s cold winter weather, and pined for a warmer climate. above all, at cornell, he was always in the shadow of hans bethe. despite all of this, feynman looked back favorably on the telluride house, where he resided for a large period of his cornell career. in an interview, he described the house as " a group of boys that have been specially selected because of their scholarship, because of their cleverness or whatever it is, to be given free board and lodging and so on, because of their brains ". he enjoyed the house ' s convenience and said that "
|
SCN5A
|
https://en.wikipedia.org/wiki?curid=7011336
| 17,859,567 |
activation or inactivation ( resulting in an increased window - current ). scn5a mutations are believed to be found in a disproportionate number of people who have irritable bowel syndrome, particularly the constipation - predominant variant ( ibs - c ). the resulting defect leads to disruption in bowel function, by affecting the nav1. 5 channel, in smooth muscle of the colon and pacemaker cells. researchers managed to treat a case of ibs - c with mexiletine to restore nav1. 5 channels, reversing constipation and abdominal pain. genetic variations in scn5a, i. e. single nucleotide polymorphisms ( snps ) have been described in both coding and non - coding regions of the gene. these variations are typically present at relatively high frequencies within the general population. genome wide association studies ( gwas ) have used this type of common genetic variation to identify genetic loci associated with variability in phenotypic traits. in the cardiovascular field this powerful technique has been used to detect loci involved in variation in electrocardiographic parameters ( i. e. pr -, qrs - and qtc - interval duration ) in the general population. the rationale behind this technique is that common genetic variation present in the general population can influence cardiac conduction in non - diseased individuals. these studies consistently identified the scn5a - scn10a genomic region on chromosome 3 to be associated with variation in qtc - interval, qrs duration and pr - interval. these results indicate that genetic variation at the scn5a locus is not only involved in disease genetics but also plays a role in the variation in cardiac function between individuals in the general population. the cardiac sodium channel na1. 5 has long been a common target in the pharmacologic treatment of arrhythmic events. classically, sodium channel blockers that block the peak sodium current are classified as class i anti - arrhythmic agents and further subdivided in class ia, ib and ic, depending on their ability to change the length of the cardiac action potential. use of such sodium channel blockers is among others indicated in patients with ventricular reentrant tachyarrhythmia in the setting of cardiac ischemia and in patients with atrial fibrillation in absence of structural heart disease.
|
Root microbiome
|
https://en.wikipedia.org/wiki?curid=42251979
| 13,307,962 |
the root microbiome ( also called rhizosphere microbiome ) is the dynamic community of microorganisms associated with plant roots. because they are rich in a variety of carbon compounds, plant roots provide unique environments for a diverse assemblage of soil microorganisms, including bacteria, fungi and archaea. the microbial communities inside the root and in the rhizosphere are distinct from each other, and from the microbial communities of bulk soil, although there is some overlap in species composition. different microorganisms, both beneficial and harmful affect development and physiology of plants. beneficial microorganisms include bacteria that fix nitrogen, promote plant growth, mycorrhizal fungi, mycoparasitic fungi, protozoa and certain biocontrol microorganisms. pathogenic microorganisms also span certain bacteria, pathogenic fungi and certain nematodes that can colonize the rhizosphere. pathogens are able to compete with protective microbes and break through innate plant defense mechanisms. apart from microbes that cause plant diseases, certain bacteria that are pathogenic and can be carried over to humans, such as " salmonella ", enterohaemorhagic " escherichia coli ", " burkholedria ( ceno ) cepacia ", " pseudomonas aeruginosa ", and " stenotrophomonas maltophilia " can also be detected in root associated microbiome and in plant tissues. root microbiota affect plant host fitness and productivity in a variety of ways. members of the root microbiome benefit from plant sugars or other carbon rich molecules. individual members of the root microbiome may behave differently in association with different plant hosts, or may change the nature of their interaction ( along the mutualist - parasite continuum ) within a single host as environmental conditions or host health change. despite the potential importance of the root microbiome for plants and ecosystems, our understanding of how root microbial communities are assembled is in its infancy. this is in part because until recent advances in sequencing technologies, root microbes were difficult to study due to high species diversity, the large number of cryptic species, and the fact that most species have yet to be retrieved in culture. evidence suggests both biotic ( such as host identity and plant neighbor ) and abiotic ( such as soil structure and nutrient availability ) factors affect community composition. root associated microbes include fungi, bacteria
|
Skew-Hamiltonian matrix
|
https://en.wikipedia.org/wiki?curid=11380117
| 28,430,226 |
in linear algebra, skew - hamiltonian matrices are special matrices which correspond to skew - symmetric bilinear forms on a symplectic vector space. let " v " be a vector space, equipped with a symplectic form formula _ 1. such a space must be even - dimensional. a linear map formula _ 2 is called a skew - hamiltonian operator with respect to formula _ 1 if the form formula _ 4 is skew - symmetric. choose a basis formula _ 5 in " v ", such that formula _ 1 is written as formula _ 7. then a linear operator is skew - hamiltonian with respect to formula _ 1 if and only if its matrix " a " satisfies formula _ 9, where " j " is the skew - symmetric block matrix whose upper - right block is the identity, whose lower - left block is the negative identity, and whose diagonal blocks are zero. the square of a hamiltonian matrix is skew - hamiltonian. the converse is also true : every skew - hamiltonian matrix can be obtained as the square of a hamiltonian matrix.
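The two statements above can be checked numerically. The numpy sketch below is ours, not the article's notation: it uses the standard block matrix J, builds a Hamiltonian matrix as H = J S with S symmetric (one common construction, assumed here), and verifies that its square is skew-Hamiltonian in the sense that J times it is skew-symmetric.

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

# Standard skew-symmetric block matrix J used to express the symplectic form.
I = np.eye(n)
J = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])

# A Hamiltonian matrix H can be built as J @ S with S symmetric: then J @ H is symmetric.
S = rng.standard_normal((2 * n, 2 * n))
S = S + S.T
H = J @ S
assert np.allclose((J @ H).T, J @ H)            # H is Hamiltonian

# Claim from the text: H @ H is skew-Hamiltonian, i.e. J @ (H @ H) is skew-symmetric.
W = H @ H
assert np.allclose((J @ W).T, -(J @ W))
print("square of a Hamiltonian matrix is skew-Hamiltonian (numerically verified)")
```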
|
Subatomic particle
|
https://en.wikipedia.org/wiki?curid=212490
| 1,283,827 |
##rks ) are called hadrons. due to a property known as color confinement, quarks are never found singly but always occur in hadrons containing multiple quarks. the hadrons are divided by number of quarks ( including antiquarks ) into the baryons containing an odd number of quarks ( almost always 3 ), of which the proton and neutron ( the two nucleons ) are by far the best known ; and the mesons containing an even number of quarks ( almost always 2, one quark and one antiquark ), of which the pions and kaons are the best known. except for the proton and neutron, all other hadrons are unstable and decay into other particles in microseconds or less. a proton is made of two up quarks and one down quark, while the neutron is made of two down quarks and one up quark. these commonly bind together into an atomic nucleus, e. g. a helium - 4 nucleus is composed of two protons and two neutrons. most hadrons do not live long enough to bind into nucleus - like composites ; those that do ( other than the proton and neutron ) form exotic nuclei. any subatomic particle, like any particle in the three - dimensional space that obeys the laws of quantum mechanics, can be either a boson ( with integer spin ) or a fermion ( with odd half - integer spin ). in the standard model, all the elementary fermions have spin 1 / 2, and are divided into the quarks which carry color charge and therefore feel the strong interaction, and the leptons which do not. the elementary bosons comprise the gauge bosons ( photon, w and z, gluons ) with spin 1, while the higgs boson is the only elementary particle with spin zero. the hypothetical graviton is required theoretically to have spin 2, but is not part of the standard model. some extensions such as supersymmetry predict additional elementary particles with spin 3 / 2, but none have been discovered as of 2021. due to the laws for spin of composite particles, the baryons ( 3 quarks ) have spin either 1 / 2 or 3 / 2, and are therefore fermions ; the mesons ( 2 quarks ) have integer spin of either 0 or 1, and are therefore bosons. in special relativity, the energy of a particle at rest equals its mass times the
|
Sensor-based sorting
|
https://en.wikipedia.org/wiki?curid=34076303
| 21,311,020 |
sensor - based sorting, is an umbrella term for all applications in which particles are detected using a sensor technique and rejected by an amplified mechanical, hydraulic or pneumatic process. the technique is generally applied in mining, recycling and food processing and used in the particle size range between. since sensor - based sorting is a single particle separation technology, the throughput is proportional to the average particle size and weight fed onto the machine. the main subprocesses of sensor - based sorting are material conditioning, material presentation, detection, data processing and separation. there are two types of sensor - based sorters : the chute type and the belt type. for both types the first step in acceleration is spreading out the particles by a vibrating feeder followed by either a fast belt or a chute. on the belt type the sensor usually detects the particles horizontally while they pass it on the belt. for the chute type the material detection is usually done vertically while the material passes the sensor in a free fall. the data processing is done in real time by a computer. the computer transfers the result of the data processing to an ultra fast ejection unit which, depending on the sorting decision, ejects a particle or lets it pass. sensor - based ore sorting is the terminology used in the mining industry. it is a coarse physical coarse particle separation technology usually applied in the size range for. aim is either to create a lumpy product in ferrous metals, coal or industrial minerals applications or to reject waste before it enters production bottlenecks and more expensive comminution and concentration steps in the process. in the majority of all mining processes, particles of sub - economic grade enter the traditional comminution, classification and concentration steps. if the amount of sub - economic material in the above - mentioned fraction is roughly 25 % or more, there is good potential that sensor - based ore sorting is a technically and financially viable option. high added value can be achieved with relatively low capital expenditure, especially when increasing the productivity through downstream processing of higher grade feed and through increased overall recovery when rejecting deleterious waste. sensor - based sorting is a coarse particle separation technology applied in mining for the dry separation of bulk materials. the functional principle does not limit the technology to any kind of segment or mineral application but makes the technical viability mainly depend on the liberation characteristics at the size range, which is usually sorted. if physical liberation is present there is a good potential that one of the sensors available on industrial scale sorting machines can differentiate between valuable and non - valuable
|
Copper(II) borate
|
https://en.wikipedia.org/wiki?curid=72370259
| 34,514,149 |
copper ( ii ) borate is an inorganic compound with the formula cu ( bo2 )2. it has previously been studied due to its photocatalytic properties. copper ( ii ) borate can be prepared by heating a stoichiometric mixture of copper ( ii ) oxide and diboron trioxide to 900°c.
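Assuming the metaborate formula Cu(BO2)2, the stoichiometric preparation described above corresponds to the balanced equation sketched below (illustrative only, not quoted from the article).

```latex
\mathrm{CuO \;+\; B_2O_3 \;\xrightarrow{\;900\,^\circ\mathrm{C}\;}\; Cu(BO_2)_2}
```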
|
Moisture stress
|
https://en.wikipedia.org/wiki?curid=3727062
| 16,202,246 |
moisture stress is a form of abiotic stress that occurs when the moisture of plant tissues is reduced to suboptimal levels. water stress occurs in response to atmospheric and soil water availability when the transpiration rate exceeds the rate of water uptake by the roots and cells lose turgor pressure. moisture stress is described by two main metrics, water potential and water content. moisture stress has an effect on stomatal opening, mainly causing a closure in stomata as to reduce the amount of carbon dioxide assimilation. closing of the stomata also slows the rate of transpiration, which limits water loss and helps to prevent the wilting effects of moisture stress. this closing can be trigged by the roots sensing dry soil and in response producing the hormone aba which when transported up the xylem into the leaves will reduce stomatal conductance and wall extensibility of growing cells. this lowers the rates of transpiration, photosynthesis and leaf expansion. aba also increases the loosening of growing root cell walls and in turn increases root growth in an effort to find water in the soil. phenotypic response of plants to long - term water stress was measured in corn and showed that plants respond to water stress with both an increase in root growth both laterally and vertically. in all droughted conditions the corn showed decrease in plant height and yield due to the decrease in water availability. genes induced during water - stress conditions are thought to function not only in protecting cells from water deficit by the production of important metabolic proteins but also in the regulation of genes for signal transduction in the water - stress response. there are four pathways that have been described that show the plants genetic response to moisture stress ; two are aba dependent while two are aba independent. they all affect gene expression that increases the plants water stress tolerance. the effects of moisture stress on photosynthesis can depend as much on the velocity and degree of photosynthetic recovery, as it depends on the degree and velocity of photosynthesis decline during water depletion. plants that are subjected to mild stress can recover in 1 – 2 days however, plants subjected to severe water stress will only recover 40 - 60 % of its maximum photosynthetic rates the day after re watering and may never reach maximum photosynthetic rates. the recovery from moisture stress starts with an increase in water content in leaves reopening the stomata then the synthesis of photosynthetic proteins.
|
Hall effect
|
https://en.wikipedia.org/wiki?curid=14307
| 1,443,435 |
can detect stray magnetic fields easily, including that of earth, so they work well as electronic compasses : but this also means that such stray fields can hinder accurate measurements of small magnetic fields. to solve this problem, hall sensors are often integrated with magnetic shielding of some kind. for example, a hall sensor integrated into a ferrite ring ( as shown ) can reduce the detection of stray fields by a factor of 100 or better ( as the external magnetic fields cancel across the ring, giving no residual magnetic flux ). this configuration also provides an improvement in signal - to - noise ratio and drift effects of over 20 times that of a bare hall device. the range of a given feedthrough sensor may be extended upward and downward by appropriate wiring. to extend the range to lower currents, multiple turns of the current - carrying wire may be made through the opening, each turn adding to the sensor output the same quantity ; when the sensor is installed onto a printed circuit board, the turns can be carried out by a staple on the board. to extend the range to higher currents, a current divider may be used. the divider splits the current across two wires of differing widths and the thinner wire, carrying a smaller proportion of the total current, passes through the sensor. a variation on the ring sensor uses a split sensor which is clamped onto the line enabling the device to be used in temporary test equipment. if used in a permanent installation, a split sensor allows the electric current to be tested without dismantling the existing circuit. the output is proportional to both the applied magnetic field and the applied sensor voltage. if the magnetic field is applied by a solenoid, the sensor output is proportional to the product of the current through the solenoid and the sensor voltage. as most applications requiring computation are now performed by small digital computers, the remaining useful application is in power sensing, which combines current sensing with voltage sensing in a single hall effect device. by sensing the current provided to a load and using the device ' s applied voltage as a sensor voltage it is possible to determine the power dissipated by a device. hall effect devices used in motion sensing and motion limit switches can offer enhanced reliability in extreme environments. as there are no moving parts involved within the sensor or magnet, typical life expectancy is improved compared to traditional electromechanical switches. additionally, the sensor and magnet may be encapsulated in an appropriate protective material. this application is used in brushless dc motors. hall effect sensors, affixed to mechanical gauges
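The range-extension trick described above (extra turns to reach smaller currents, a current divider to reach larger ones) is simple arithmetic. The sketch below assumes an idealized, perfectly linear sensor; the function name and sensitivity figure are ours, purely for illustration.

```python
# Illustrative arithmetic only: V_out = sensitivity * (effective current in the aperture).

def feedthrough_output(current_a, sensitivity_v_per_a, turns=1, divider_fraction=1.0):
    """Output of an idealized feedthrough Hall sensor.

    turns            -- times the conductor passes through the aperture; each turn
                        adds the same contribution, extending the range downward.
    divider_fraction -- fraction of the total current routed through the sensed wire
                        by a current divider, extending the range upward.
    """
    effective_current = current_a * divider_fraction * turns
    return sensitivity_v_per_a * effective_current


# 10 mA measured with 20 turns looks like 0.2 A to the sensor ...
print(feedthrough_output(0.010, sensitivity_v_per_a=1.0, turns=20))                # 0.2
# ... while 500 A behind a 1:100 divider looks like 5 A.
print(feedthrough_output(500.0, sensitivity_v_per_a=1.0, divider_fraction=0.01))   # 5.0
```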
|
Reversible addition−fragmentation chain-transfer polymerization
|
https://en.wikipedia.org/wiki?curid=2918563
| 9,094,555 |
##thio chain transfer agents. it was first reported by rizzardo " et al. " in 1998. raft is one of the most versatile methods of controlled radical polymerization because it is tolerant of a very wide range of functionality in the monomer and solvent, including aqueous solutions. raft polymerization has also been effectively carried out over a wide temperature range. a temperature is chosen such that ( a ) chain growth occurs at an appropriate rate, ( b ) the chemical initiator ( radical source ) delivers radicals at an appropriate rate and ( c ) the central raft equilibrium ( see later ) favors the active rather than dormant state to an acceptable extent. raft polymerization can be performed by adding a chosen quantity of an appropriate raft agent to a conventional free radical polymerization. usually the same monomers, initiators, solvents and temperatures can be used. radical initiators such as azobisisobutyronitrile ( aibn ) and 4, 4 ' - azobis ( 4 - cyanovaleric acid ) ( acva ), also called 4, 4 ' - azobis ( 4 - cyanopentanoic acid ), are widely used as the initiator in raft. figure 3 provides a visual description of raft polymerizations of poly ( methyl methacrylate ) and polyacrylic acid using aibn as the initiator and two raft agents. raft polymerization is known for its compatibility with a wide range of monomers compared to other controlled radical polymerizations. these monomers include ( meth ) acrylates, ( meth ) acrylamides, acrylonitrile, styrene and derivatives, butadiene, vinyl acetate and n - vinylpyrrolidone. the process is also suitable for use under a wide range of reaction parameters such as temperature or the level of impurities, as compared to nmp or atrp. the z and r group of a raft agent must be chosen according to a number of considerations. the z group primarily affects the stability of the s = c bond and the stability of the adduct radical ( polymer - s - c • ( z ) - s - polymer, see section on mechanism ). these in turn affect the position of and rates of the elementary reactions in the pre - and main - equilibrium. the r group must be able to stabilize a radical such that the right hand side of the pre - equilibrium is favored, but unstable enough that it can reinitiate growth
|
White dwarf
|
https://en.wikipedia.org/wiki?curid=33501
| 945,039 |
young ( estimated to have formed from its agb progenitor about 500 million years ago ) white dwarf g29 - 38, which may have been created by tidal disruption of a comet passing close to the white dwarf. some estimations based on the metal content of the atmospheres of the white dwarfs consider that at least 15 % of them may be orbited by planets and / or asteroids, or at least their debris. another suggested idea is that white dwarfs could be orbited by the stripped cores of rocky planets, that would have survived the red giant phase of their star but losing their outer layers and, given those planetary remnants would likely be made of metals, to attempt to detect them looking for the signatures of their interaction with the white dwarf ' s magnetic field. other suggested ideas of how white dwarfs are polluted with dust involve the scattering of asteroids by planets or via planet - planet scattering. liberation of exomoons from their host planet could cause white dwarf pollution with dust. either the liberation could cause asteroids to be scattered towards the white dwarf or the exomoon could be scattered into the roche - radius of the white dwarf. the mechanism behind the pollution of white dwarfs in binaries was also explored as these systems are more likely to lack a major planet, but this idea cannot explain the presence of dust around single white dwarfs. while old white dwarfs show evidence of dust accretion, white dwarfs older than ~ 1 billion years or > 7000 k with dusty infrared excess were not detected until the discovery of lspm j0207 + 3331 in 2018, which has a cooling age of ~ 3 billion years. the white dwarf shows two dusty components that are being explained with two rings with different temperatures. the metal - rich white dwarf wd 1145 + 017 is the first white dwarf observed with a disintegrating minor planet which transits the star. the disintegration of the planetesimal generates a debris cloud which passes in front of the star every 4. 5 hours, causing a 5 - minute - long fade in the star ' s optical brightness. the depth of the transit is highly variable. the giant planet wd j0914 + 1914b is being evaporated by the strong ultraviolet radiation of the hot white dwarf. part of the evaporated material is being accreted in a gaseous disk around the white dwarf. the weak hydrogen line as well as other lines in the spectrum of the white dwarf revealed the presence of the giant planet. the white dwarf wd 01
|
Q-system (genetics)
|
https://en.wikipedia.org/wiki?curid=51758505
| 22,188,415 |
q - system is a genetic tool that allows to express transgenes in a living organism. originally the q - system was developed for use in the vinegar fly " drosophila melanogaster ", and was rapidly adapted for use in cultured mammalian cells, zebrafish, worms and mosquitoes. the q - system utilizes genes from the " qa " cluster of the bread fungus " neurospora crassa ", and consists of four components : the transcriptional activator ( qf / qf2 / qf2 ), the enhancer quas, the repressor qs, and the chemical de - repressor quinic acid. similarly to gal4 / uas and lexa / lexaop, the q - system is a binary expression system that allows to express reporters or effectors ( e. g. fluorescent proteins, ion channels, toxins and other genes ) in a defined subpopulation of cells with the purpose of visualising these cells or altering their function. in addition, gal4 / uas, lexa / lexaop and the q - system function independently of each other and can be used simultaneously to achieve a desired pattern of reporter expression, or to express several reporters in different subsets of cells. the q - system is based on two out of the seven genes of the " qa " gene cluster of the bread fungus " neurospora crassa ". the genes of the " qa " cluster are responsible for the catabolism of quinic acid, which is used by the fungus as a carbon source in conditions of low glucose. the cluster contains a transcriptional activator " qa - 1f ", a transcriptional repressor " qa - 1s ", and five structural genes. the " qa - 1f " binds to a specific dna sequence, found upstream of the " qa " genes. the presence of quinic acid disrupts interaction between " qa - 1f " and " qa - 1s ", thus disinhibiting the transcriptional activity of " qa - 1f ". genes " qa - 1f ", " qa - 1s " and the dna binding sequence of " qa - 1f " form the basis of the q - system. the genes were renamed to simplify their use as follows : transcriptional activator " qa - 1f " as qf, repressor " qa -
|
Velocity-addition formula
|
https://en.wikipedia.org/wiki?curid=1437696
| 5,430,736 |
3 - dimensional subspace of the lie algebra formula _ 57 of the lorentz group spanned by the boost generators formula _ 58. this space, call it " rapidity space ", is isomorphic to as a vector space, and is mapped to the open unit ball, formula _ 59, " velocity space ", via the above relation. the addition law on collinear form coincides with the law of addition of hyperbolic tangents the line element in velocity space formula _ 62 follows from the expression for " relativistic relative velocity " in any frame, where the speed of light is set to unity so that formula _ 64 and formula _ 65 agree. it this expression, formula _ 66 and formula _ 67 are velocities of two objects in any one given frame. the quantity formula _ 68 is the speed of one or the other object " relative " to the other object as seen " in the given frame ". the expression is lorentz invariant, i. e. independent of which frame is the given frame, but the quantity it calculates is " not ". for instance, if the given frame is the rest frame of object one, then formula _ 69. with and the usual spherical angle coordinates for formula _ 53 taken in the - direction. now introduce through in scattering experiments the primary objective is to measure the invariant scattering cross section. this enters the formula for scattering of two particle types into a final state formula _ 77 assumed to have two or more particles, the objective is to find a correct expression for " relativistic relative speed " formula _ 91 and an invariant expression for the incident flux. non - relativistically, one has for relative speed formula _ 92. if the system in which velocities are measured is the rest frame of particle type formula _ 93, it is required that formula _ 94 setting the speed of light formula _ 95, the expression for formula _ 91 follows immediately from the formula for the norm ( second formula ) in the " general configuration " as the formula reduces in the classical limit to formula _ 98 as it should, and gives the correct result in the rest frames of the particles. the relative velocity is " incorrectly given " in most, perhaps " all " books on particle physics and quantum field theory. this is mostly harmless, since if either one particle type is stationary or the relative motion is collinear, then the right result is obtained from the incorrect formulas. the formula is invariant, but not manifestly so. it can be rewritten in terms of four
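For reference, one common textbook way to write the invariant relativistic relative speed discussed above, in our own notation (v1 and v2 are the two velocities measured in the given frame, c the speed of light); this is a sketch of the standard expression, not a quotation of the article's formulas.

```latex
v_{\mathrm{rel}}
\;=\;
\frac{\sqrt{\,(\mathbf v_1-\mathbf v_2)^2 \;-\; \dfrac{(\mathbf v_1\times\mathbf v_2)^2}{c^2}\,}}
     {1-\dfrac{\mathbf v_1\cdot\mathbf v_2}{c^2}}
% For collinear motion the cross product vanishes and this reduces to the usual
% (v_1 - v_2)/(1 - v_1 v_2/c^2); in the rest frame of object one (v_1 = 0) it gives
% |v_2|, and in the classical limit c -> infinity it reduces to |v_1 - v_2|.
```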
|
Machine learning
|
https://en.wikipedia.org/wiki?curid=233488
| 225,410 |
scaling exist to use svm in a probabilistic classification setting. in addition to performing linear classification, svms can efficiently perform a non - linear classification using what is called the kernel trick, implicitly mapping their inputs into high - dimensional feature spaces. regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. the latter is often extended by regularization methods to mitigate overfitting and bias, as in ridge regression. when dealing with non - linear problems, go - to models include polynomial regression ( for example, used for trendline fitting in microsoft excel ), logistic regression ( often used in statistical classification ) or even kernel regression, which introduces non - linearity by taking advantage of the kernel trick to implicitly map input variables to higher - dimensional space. a bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph ( dag ). for example, a bayesian network could represent the probabilistic relationships between diseases and symptoms. given symptoms, the network can be used to compute the probabilities of the presence of various diseases. efficient algorithms exist that perform inference and learning. bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic bayesian networks. generalizations of bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams. a gaussian process is a stochastic process in which every finite collection of the random variables in the process has a multivariate normal distribution, and it relies on a pre - defined covariance function, or kernel, that models how pairs of points relate to each other depending on their locations. given a set of observed points, or input – output examples, the distribution of the ( unobserved ) output of a new point as a function of its input data can be directly computed by looking at the observed points and the covariances between those points and the new, unobserved point. gaussian processes are popular surrogate models in bayesian optimization used to do hyperparameter optimization. a genetic algorithm ( ga ) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and
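The Gaussian-process statement above (the posterior at a new point is computed directly from the observed points and their covariances) can be illustrated with a minimal numpy sketch; the RBF kernel, noise level and toy data are our own choices, not from the article.

```python
import numpy as np

def rbf(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two 1-D arrays of inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and variance of a GP at x_test, given noisy observations."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))   # train covariances
    K_s = rbf(x_train, x_test)                                  # train/test covariances
    K_ss = rbf(x_test, x_test)                                  # test covariances
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)

# Toy data: four noisy-free samples of sin(x), queried at a new point x* = 0.5.
x = np.array([-2.0, -1.0, 0.0, 1.5])
y = np.sin(x)
mu, var = gp_posterior(x, y, np.array([0.5]))
print(mu, var)   # posterior mean and variance at the new, unobserved point
```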
|
Cerebral hemisphere
|
https://en.wikipedia.org/wiki?curid=406608
| 4,014,425 |
them. the intraventricular foramina ( also called the foramina of monro ) allows communication with the lateral ventricles. broad generalizations are often made in popular psychology about certain functions ( e. g. logic, creativity ) being lateralized, that is, located in the right or left side of the brain. these claims are often inaccurate, as most brain functions are actually distributed across both hemispheres. most scientific evidence for asymmetry relates to low - level perceptual functions rather than the higher - level functions popularly discussed ( e. g. subconscious processing of grammar, not " logical thinking " in general ). in addition to this lateralization of some functions, the low - level representations also tend to represent the contralateral side of the body. the best example of an established lateralization is that of broca ' s and wernicke ' s areas ( language ) where both are often found exclusively on the left hemisphere. these areas frequently correspond to handedness however, meaning the localization of these areas is regularly found on the hemisphere opposite to the dominant hand. function lateralization, such as semantics, intonation, accentuation, and prosody, has since been called into question and largely been found to have a neuronal basis in both hemispheres. perceptual information is processed in both hemispheres, but is laterally partitioned : information from each side of the body is sent to the opposite hemisphere ( visual information is partitioned somewhat differently, but still lateralized ). similarly, motor control signals sent out to the body also come from the hemisphere on the opposite side. thus, hand preference ( which hand someone prefers to use ) is also related to hemisphere lateralization. in some aspects, the hemispheres are asymmetrical ; the right side is slightly bigger. there are higher levels of the neurotransmitter norepinephrine on the right and higher levels of dopamine on the left. there is more white matter ( longer axons ) on the right and more grey matter ( cell bodies ) on the left. linear reasoning functions of language such as grammar and word production are often lateralized to the left hemisphere of the brain. in contrast, holistic reasoning functions of language such as intonation and emphasis are often lateralized to the right hemisphere of the brain. other integrative functions such as intuitive or heuristic arithmetic, binaural sound localization, etc. seem to be more bilaterally controlled. as a treatment for ep
|
Bernard Bolzano
|
https://en.wikipedia.org/wiki?curid=302185
| 7,145,866 |
##lzano ' s theorem ). today he is mostly remembered for the bolzano – weierstrass theorem, which karl weierstrass developed independently and published years after bolzano ' s first proof and which was initially called the weierstrass theorem until bolzano ' s earlier work was rediscovered. bolzano ' s posthumously published work " paradoxien des unendlichen ( the paradoxes of the infinite ) " ( 1851 ) was greatly admired by many of the eminent logicians who came after him, including charles sanders peirce, georg cantor, and richard dedekind. bolzano ' s main claim to fame, however, is his 1837 " wissenschaftslehre " ( " theory of science " ), a work in four volumes that covered not only philosophy of science in the modern sense but also logic, epistemology and scientific pedagogy. the logical theory that bolzano developed in this work has come to be acknowledged as ground - breaking. other works are a four - volume " lehrbuch der religionswissenschaft " ( " textbook of the science of religion " ) and the metaphysical work " athanasia ", a defense of the immortality of the soul. bolzano also did valuable work in mathematics, which remained virtually unknown until otto stolz rediscovered many of his lost journal articles and republished them in 1881. in his 1837 " wissenschaftslehre " bolzano attempted to provide logical foundations for all sciences, building on abstractions like part - relation, abstract objects, attributes, sentence - shapes, ideas and propositions in themselves, sums and sets, collections, substances, adherences, subjective ideas, judgments, and sentence - occurrences. these attempts were an extension of his earlier thoughts in the philosophy of mathematics, for example his 1810 " beitrage " where he emphasized the distinction between the objective relationship between logical consequences and our subjective recognition of these connections. for bolzano, it was not enough that we merely have " confirmation " of natural or mathematical truths, but rather it was the proper role of the sciences ( both pure and applied ) to seek out " justification " in terms of the fundamental truths that may or may not appear to be obvious to our intuitions. bolzano begins his work by explaining what he means by " theory of science ", and the relation between our knowledge, truths and sciences. human knowledge, he states, is made of all truths ( or true propositions ) that men know or have known. however,
|
Suffix array
|
https://en.wikipedia.org/wiki?curid=1303494
| 7,655,986 |
only character comparisons are allowed. a well - known recursive algorithm for integer alphabets is the " dc3 / skew " algorithm of. it runs in linear time and has successfully been used as the basis for parallel and external memory suffix array construction algorithms. recent work by proposes an algorithm for updating the suffix array of a text that has been edited instead of rebuilding a new suffix array from scratch. even if the theoretical worst - case time complexity is formula _ 28, it appears to perform well in practice : experimental results from the authors showed that their implementation of dynamic suffix arrays is generally more efficient than rebuilding when considering the insertion of a reasonable number of letters in the original text. in practical open source work, a commonly used routine for suffix array construction was qsufsort, based on the 1999 larsson - sadakane algorithm. this routine has been superseded by yuta mori ' s divsufsort, " the fastest known suffix sorting algorithm in main memory " as of 2017. it too can be modified to compute an lcp array. it uses induced copying combined with itoh - tanaka. in 2021 a faster implementation of the algorithm was presented by ilya grebnov which on average showed a 65 % performance improvement over the divsufsort implementation on the silesia corpus. the concept of a suffix array can be extended to more than one string. this is called a generalized suffix array ( or gsa ), a suffix array that contains all suffixes for a set of strings ( for example, formula _ 42 ) and is lexicographically sorted with all suffixes of each string. the suffix array of a string can be used as an index to quickly locate every occurrence of a substring pattern formula _ 43 within the string formula _ 6. finding every occurrence of the pattern is equivalent to finding every suffix that begins with the substring. thanks to the lexicographical ordering, these suffixes will be grouped together in the suffix array and can be found efficiently with two binary searches. the first search locates the starting position of the interval, and the second one determines the end position : finding the substring pattern formula _ 43 of length formula _ 46 in the string formula _ 6 of length formula _ 24 takes formula _ 49 time, given that a single suffix comparison needs to compare formula _ 46 characters. describe how this bound can be improved to formula _ 51 time using lcp information. the idea is that a pattern comparison does not need to re - compare certain characters, when it is already known
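A minimal Python sketch of the lookup described above: a deliberately naive suffix-array construction followed by the two binary searches that bracket the interval of suffixes starting with the pattern. The names and the toy text are ours; production code would use a linear-time construction such as the ones mentioned in the excerpt.

```python
def suffix_array(s):
    """Naive construction (sorting suffixes directly); fine for illustration only."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def find_all(s, sa, p):
    """All occurrences of pattern p in s via two binary searches over the suffix array."""
    n, m = len(sa), len(p)
    # First binary search: leftmost suffix whose first m characters are >= p.
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + m] < p:
            lo = mid + 1
        else:
            hi = mid
    start = lo
    # Second binary search: leftmost suffix whose first m characters are > p.
    lo, hi = start, n
    while lo < hi:
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + m] <= p:
            lo = mid + 1
        else:
            hi = mid
    # Suffixes beginning with p form one contiguous interval [start, lo).
    return sorted(sa[start:lo])

text = "banana"
sa = suffix_array(text)
print(sa)                          # [5, 3, 1, 0, 4, 2]
print(find_all(text, sa, "ana"))   # [1, 3]
```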
|
Sexual dimorphism
|
https://en.wikipedia.org/wiki?curid=197179
| 614,162 |
serve to ' activate ' certain behaviors when appropriate, such as territoriality during breeding season. organizational hormones occur only during a critical period early in development, either just before or just after hatching in most birds, and determine patterns of behavior for the rest of the bird ' s life. such behavioral differences can cause disproportionate sensitivities to anthropogenic pressures. females of the whinchat in switzerland breed in intensely managed grasslands. earlier harvesting of the grasses during the breeding season lead to more female deaths. populations of many birds are often male - skewed and when sexual differences in behavior increase this ratio, populations decline at a more rapid rate. also not all male dimorphic traits are due to hormones like testosterone, instead they are a naturally occurring part of development, for example plumage. in addition, the strong hormonal influence on phenotypic differences suggest that the genetic mechanism and genetic basis of these sexually dimorphic traits may involve transcription factors or cofactors rather than regulatory sequences. sexual dimorphism may also influence differences in parental investment during times of food scarcity. for example, in the blue - footed booby, the female chicks grow faster than the males, resulting in booby parents producing the smaller sex, the males, during times of food shortage. this then results in the maximization of parental lifetime reproductive success. in black - tailed godwits " limosa limosa limosa " females are also the larger sex, and the growth rates of female chicks are more susceptible to limited environmental conditions. sexual dimorphism may also only appear during mating season, some species of birds only show dimorphic traits in seasonal variation. the males of these species will molt into a less bright or less exaggerated color during the off breeding season. this occurs because the species is more focused on survival than reproduction, causing a shift into a less ornate state. consequently, sexual dimorphism has important ramifications for conservation. however, sexual dimorphism is not only found in birds and is thus important to the conservation of many animals. such differences in form and behavior can lead to sexual segregation, defined as sex differences in space and resource use. most sexual segregation research has been done on ungulates, but such research extends to bats, kangaroos, and birds. sex - specific conservation plans have even been suggested for species with pronounced sexual segregation. the term sesquimorphism ( the latin numeral prefix " sesqui " - means one - and
|
Histone
|
https://en.wikipedia.org/wiki?curid=14029
| 2,980,909 |
alter multiple lysines to have a significant effect on chromatin structure. the modification includes h3k27ac. addition of a negatively charged phosphate group can lead to major changes in protein structure, leading to the well - characterised role of phosphorylation in controlling protein function. it is not clear what structural implications histone phosphorylation has, but histone phosphorylation has clear functions as a post - translational modification, and binding domains such as brct have been characterised. analysis of histone modifications in embryonic stem cells ( and other stem cells ) revealed many gene promoters carrying both h3k4me3 and h3k27me3, in other words these promoters display both activating and repressing marks simultaneously. this peculiar combination of modifications marks genes that are poised for transcription ; they are not required in stem cells, but are rapidly required after differentiation into some lineages. once the cell starts to differentiate, these bivalent promoters are resolved to either active or repressive states depending on the chosen lineage. marking sites of dna damage is an important function for histone modifications. it also protects dna from getting destroyed by ultraviolet radiation of sun. h3k36me3 has the ability to recruit the msh2 - msh6 ( hmutsα ) complex of the dna mismatch repair pathway. consistently, regions of the human genome with high levels of h3k36me3 accumulate less somatic mutations due to mismatch repair activity. epigenetic modifications of histone tails in specific regions of the brain are of central importance in addictions. once particular epigenetic alterations occur, they appear to be long lasting " molecular scars " that may account for the persistence of addictions. cigarette smokers ( about 15 % of the us population ) are usually addicted to nicotine. after 7 days of nicotine treatment of mice, acetylation of both histone h3 and histone h4 was increased at the fosb promoter in the nucleus accumbens of the brain, causing 61 % increase in fosb expression. this would also increase expression of the splice variant delta fosb. in the nucleus accumbens of the brain, delta fosb functions as a " sustained molecular switch " and " master control protein " in the development of an addiction. about 7 % of the us population is addicted to alcohol. in rats exposed to alcohol for up to 5 days, there was an increase in histone 3 lysine
|
Discrete tomography
|
https://en.wikipedia.org/wiki?curid=6778984
| 19,490,962 |
discrete tomography focuses on the problem of reconstruction of binary images ( or finite subsets of the integer lattice ) from a small number of their projections. in general, tomography deals with the problem of determining shape and dimensional information of an object from a set of projections. from the mathematical point of view, the object corresponds to a function and the problem posed is to reconstruct this function from its integrals or sums over subsets of its domain. in general, the tomographic inversion problem may be continuous or discrete. in continuous tomography both the domain and the range of the function are continuous and line integrals are used. in discrete tomography the domain of the function may be either discrete or continuous, and the range of the function is a finite set of real, usually nonnegative numbers. in continuous tomography when a large number of projections is available, accurate reconstructions can be made by many different algorithms. it is typical for discrete tomography that only a few projections ( line sums ) are used. in this case, conventional techniques all fail. a special case of discrete tomography deals with the problem of the reconstruction of a binary image from a small number of projections. the name " discrete tomography " is due to larry shepp, who organized the first meeting devoted to this topic ( dimacs mini - symposium on discrete tomography, september 19, 1994, rutgers university ). discrete tomography has strong connections with other mathematical fields, such as number theory, discrete mathematics, computational complexity theory and combinatorics. in fact, a number of discrete tomography problems were first discussed as combinatorial problems. in 1957, h. j. ryser found a necessary and sufficient condition for a pair of vectors being the two orthogonal projections of a discrete set. in the proof of his theorem, ryser also described a reconstruction algorithm, the very first reconstruction algorithm for a general discrete set from two orthogonal projections. in the same year, david gale found the same consistency conditions, but in connection with the network flow problem. another result of ryser ' s is the definition of the switching operation by which discrete sets having the same projections can be transformed into each other. the problem of reconstructing a binary image from a small number of projections generally leads to a large number of solutions. it is desirable to limit the class of possible solutions to only those that are typical of the class of the images which contains the image being reconstructed by using a priori information, such as convexity or connectedness. a form of discrete tom
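The two-projection reconstruction problem mentioned above (Ryser's setting) can be sketched with a small greedy routine in Python. The function name and example projections are ours; note that real reconstructions are generally non-unique, as the switching operation discussed in the excerpt implies.

```python
def reconstruct_from_projections(row_sums, col_sums):
    """Greedy, Ryser-style reconstruction of a binary image from its two orthogonal
    projections: each row places its ones into the columns that still need the most.
    Returns a 0/1 matrix with the requested projections, or None if it cannot realize
    them (in particular, if the projections are inconsistent)."""
    m, n = len(row_sums), len(col_sums)
    if sum(row_sums) != sum(col_sums):
        return None
    remaining = list(col_sums)
    image = [[0] * n for _ in range(m)]
    for i, r in enumerate(row_sums):
        # columns sorted by how many ones they still need, largest first
        cols = sorted(range(n), key=lambda j: remaining[j], reverse=True)[:r]
        for j in cols:
            image[i][j] = 1
            remaining[j] -= 1
    if any(remaining):          # some column demand was not met exactly
        return None
    return image

img = reconstruct_from_projections([2, 3, 1], [2, 2, 2])
for row in img:
    print(row)
# The result has row sums [2, 3, 1] and column sums [2, 2, 2], as requested.
```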
|
Water quality
|
https://en.wikipedia.org/wiki?curid=306773
| 3,981,790 |
identify the sources and fates of contaminants. environmental lawyers and policymakers work to define legislation with the intention that water is maintained at an appropriate quality for its identified use. another general perception of water quality is that of a simple property that tells whether water is polluted or not. in fact, water quality is a complex subject, in part because water is a complex medium intrinsically tied to the ecology, geology, and anthropogenic activities of a region. industrial and commercial activities ( e. g. manufacturing, mining, construction, transport ) are a major cause of water pollution as are runoff from agricultural areas, urban runoff and discharge of treated and untreated sewage. water quality guidelines for south africa are grouped according to potential user types ( e. g. domestic, industrial ) in the 1996 water quality guidelines. drinking water quality is subject to the south african national standard ( sans ) 241 drinking water specification. in england and wales acceptable levels for drinking water supply are listed in the " water supply ( water quality ) regulations 2000. " in the united states, water quality standards are defined by state agencies for various water bodies, guided by the desired uses for the water body ( e. g., fish habitat, drinking water supply, recreational use ). the clean water act ( cwa ) requires each governing jurisdiction ( states, territories, and covered tribal entities ) to submit a set of biennial reports on the quality of water in their area. these reports are known as the 303 ( d ) and 305 ( b ) reports, named for their respective cwa provisions, and are submitted to, and approved by, epa. these reports are completed by the governing jurisdiction, typically a. epa recommends that each state submit a single " integrated report " comprising its list of impaired waters and the status of all water bodies in the state. the " national water quality inventory report to congress " is a general report on water quality, providing overall information about the number of miles of streams and rivers and their aggregate condition. the cwa requires states to adopt standards for each of the possible designated uses that they assign to their waters. should evidence suggest or document that a stream, river or lake has failed to meet the water quality criteria for one or more of its designated uses, it is placed on a list of impaired waters. once a state has placed a water body on this list, it must develop a management plan establishing total maximum daily loads ( tmdls ) for the pollutant ( s ) impairing the use of the water. these
|
Retraction (topology)
|
https://en.wikipedia.org/wiki?curid=2120001
| 8,300,597 |
retract need not be a deformation retract. for instance, having a single point as a deformation retract of a space " x " would imply that " x " is path connected ( and in fact that " x " is contractible ). " note : " an equivalent definition of deformation retraction is the following. a continuous map formula _ 5 is a deformation retraction if it is a retraction and its composition with the inclusion is homotopic to the identity map on " x ". in this formulation, a deformation retraction carries with it a homotopy between the identity map on " x " and itself. for all " t " in [ 0, 1 ] and " a " in " a ", then " f " is called a strong deformation retraction. in other words, a strong deformation retraction leaves points in " a " fixed throughout the homotopy. ( some authors, such as hatcher, take this as the definition of deformation retraction. ) as an example, the " n " - sphere " formula _ 11 " is a strong deformation retract of formula _ 12 as strong deformation retraction one can choose the map a map " f " : " a " → " x " of topological spaces is a ( hurewicz ) cofibration if it has the homotopy extension property for maps to any space. this is one of the central concepts of homotopy theory. a cofibration " f " is always injective, in fact a homeomorphism to its image. if " x " is hausdorff ( or a compactly generated weak hausdorff space ), then the image of a cofibration " f " is closed in " x ". among all closed inclusions, cofibrations can be characterized as follows. the inclusion of a closed subspace " a " in a space " x " is a cofibration if and only if " a " is a neighborhood deformation retract of " x ", meaning that there is a continuous map formula _ 14 with formula _ 15 and a homotopy formula _ 16 such that formula _ 17 for all formula _ 18 formula _ 19 for all formula _ 20 and formula _ 21 and formula _ 22 if formula _ 23. the boundary of the " n " - dimensional ball, that is, the ( " n " −1 ) - sphere, is not a retract of the ball
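For concreteness, one standard choice of the strong deformation retraction referred to above (of punctured Euclidean space onto the n-sphere) can be written, in our own notation, as follows; this is a sketch, not a quotation of the article's formula.

```latex
% A strong deformation retraction of R^{n+1} \setminus \{0\} onto S^{n}
% (x is a nonzero point, t ranges over [0,1]):
F(x, t) \;=\; (1 - t)\,x \;+\; t\,\frac{x}{\lVert x \rVert},
\qquad
F(x,0) = x,\quad F(x,1) = \frac{x}{\lVert x\rVert} \in S^{n},
\quad F(a,t) = a \ \text{ for all } a \in S^{n},\ t \in [0,1].
```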
|
Distance geometry
|
https://en.wikipedia.org/wiki?curid=654387
| 13,877,271 |
distance geometry is the branch of mathematics concerned with characterizing and studying sets of points based " only " on given values of the distances between pairs of points. more abstractly, it is the study of semimetric spaces and the isometric transformations between them. in this view, it can be considered as a subject within general topology. historically, the first result in distance geometry is heron ' s formula in 1st century ad. the modern theory began in 19th century with work by arthur cayley, followed by more extensive developments in the 20th century by karl menger and others. distance geometry problems arise whenever one needs to infer the shape of a configuration of points ( relative positions ) from the distances between them, such as in biology, sensor network, surveying, navigation, cartography, and physics. consider three ground radio stations a, b, c, whose locations are known. a radio receiver is at an unknown location. the times it takes for a radio signal to travel from the stations to the receiver, formula _ 1, are unknown, but the time differences, formula _ 2 and formula _ 3, are known. from them, one knows the distance differences formula _ 4 and formula _ 5, from which the position of the receiver can be found. in data analysis, one is often given a list of data represented as vectors formula _ 6, and one needs to find out whether they lie within a low - dimensional affine subspace. a low - dimensional representation of data has many advantages, such as saving storage space, computation time, and giving better insight into data. given a list of points on formula _ 7, formula _ 8, we can arbitrarily specify the distances between pairs of points by a list of formula _ 9, formula _ 10. this defines a semimetric space : a metric space without triangle inequality. explicitly, we define a semimetric space as a nonempty set formula _ 11 equipped with a semimetric formula _ 12 such that, for all formula _ 13, any metric space is " a fortiori " a semimetric space. in particular, formula _ 17, the formula _ 18 - dimensional euclidean space, is the canonical metric space in distance geometry. the triangle inequality is omitted in the definition, because we do not want to enforce more constraints on the distances formula _ 19 than the mere requirement that they be positive. in practice, semimetric spaces naturally arise from inaccurate measurements. for example, given three points formula _ 20 on a line, with formula _ 21, an inaccurate measurement could give
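The "do these points lie in a low-dimensional affine subspace" question raised above has a classical computational answer (Schoenberg's criterion, i.e. classical multidimensional scaling on the distance matrix). The numpy sketch below is ours, with made-up example data, and is meant only to illustrate the idea.

```python
import numpy as np

def embedding_dimension(D, tol=1e-8):
    """Smallest affine dimension in which the full pairwise distance matrix D can be
    realized (classical multidimensional scaling).  Returns None if D is a semimetric
    that admits no Euclidean realization at all."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    G = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of the centered points
    eig = np.linalg.eigvalsh(G)                  # ascending eigenvalues
    scale = max(1.0, eig[-1])
    if eig[0] < -tol * scale:                    # negative eigenvalue: not Euclidean
        return None
    return int(np.sum(eig > tol * scale))        # rank of G = affine dimension

# Four points that are really the corners of a unit square in the plane:
P = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
print(embedding_dimension(D))        # 2

# A semimetric violating the triangle inequality has no Euclidean realization:
D_bad = np.array([[0, 1, 3], [1, 0, 1], [3, 1, 0]], float)
print(embedding_dimension(D_bad))    # None
```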
|
Algorithms for Recovery and Isolation Exploiting Semantics
|
https://en.wikipedia.org/wiki?curid=245955
| 11,705,513 |
in computer science, algorithms for recovery and isolation exploiting semantics, or aries is a recovery algorithm designed to work with a no - force, steal database approach ; it is used by ibm db2, microsoft sql server and many other database systems. ibm fellow dr. c. mohan is the primary inventor of the aries family of algorithms. the aries algorithm relies on logging of all database operations with ascending sequence numbers. usually the resulting logfile is stored on so - called " stable storage ", that is a storage medium that is assumed to survive crashes and hardware failures. to gather the necessary information for the logs, two data structures have to be maintained : the dirty page table ( dpt ) and the transaction table ( tt ). the dirty page table keeps record of all the pages that have been modified, and not yet written to disk, and the first sequence number that caused that page to become dirty. the transaction table contains all currently running transactions and the sequence number of the last log entry they created. we create log records of the form ( sequence number, transaction id, page id, redo, undo, previous sequence number ). the redo and undo fields keep information about the changes this log record saves and how to undo them. the previous sequence number is a reference to the previous log record that was created for this transaction. in the case of an aborted transaction, it ' s possible to traverse the log file in reverse order using the previous sequence numbers, undoing all actions taken within the specific transaction. every transaction implicitly begins with the first " update " type of entry for the given transaction id, and is committed with " end of log " ( eol ) entry for the transaction. during a recovery, or while undoing the actions of an aborted transaction, a special kind of log record is written, the compensation log record ( clr ), to record that the action has already been undone. clrs are of the form ( sequence number, transaction id, page id, redo, previous sequence number, next undo sequence number ). the redo field contains application of undo field of reverted action, and the undo field is omitted because clr is never reverted. the recovery works in three phases. the first phase, analysis, computes all the necessary information from the logfile. the redo phase restores the database to the exact state at the crash, including all the changes of uncommitted transactions that were running at that point in time. the undo phase then undoes
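A toy sketch of the data structures and the three recovery phases described above (Python). This is an illustrative skeleton under simplifying assumptions, not IBM's implementation; the record fields mirror the tuple form given in the text, and apply_redo, apply_undo and append_clr are hypothetical callbacks supplied by the caller.

```python
from dataclasses import dataclass
from typing import Optional, Dict, List

@dataclass
class LogRecord:
    """One entry of the write-ahead log, as described above."""
    lsn: int                      # ascending log sequence number
    tx_id: int
    page_id: Optional[int]
    redo: Optional[str]           # how to redo the change
    undo: Optional[str]           # how to undo it (None for a CLR)
    prev_lsn: Optional[int]       # previous record of the same transaction
    kind: str = "update"          # "update", "clr", or "end"

def analysis(log: List[LogRecord]):
    """Phase 1: rebuild the dirty page table and transaction table from the log."""
    dirty_pages: Dict[int, int] = {}   # page_id -> first LSN that dirtied it
    transactions: Dict[int, int] = {}  # tx_id -> last LSN of that transaction
    for rec in log:
        if rec.kind == "end":
            transactions.pop(rec.tx_id, None)   # committed, not a loser
            continue
        transactions[rec.tx_id] = rec.lsn
        if rec.page_id is not None:
            dirty_pages.setdefault(rec.page_id, rec.lsn)
    return dirty_pages, transactions

def redo(log, dirty_pages, apply_redo):
    """Phase 2: repeat history from the smallest dirtying LSN, re-applying
    every logged change whose effect may be missing from disk."""
    if not dirty_pages:
        return
    start = min(dirty_pages.values())
    for rec in log:
        if rec.lsn >= start and rec.page_id in dirty_pages and rec.redo:
            apply_redo(rec)

def undo(log, transactions, apply_undo, append_clr):
    """Phase 3: roll back the loser transactions found by analysis, writing a
    compensation log record (CLR) for every change that is undone."""
    by_lsn = {rec.lsn: rec for rec in log}
    for tx_id, last in transactions.items():
        lsn = last
        while lsn is not None:                  # walk prev_lsn chain backwards
            rec = by_lsn[lsn]
            if rec.kind == "update" and rec.undo:
                apply_undo(rec)
                append_clr(rec)                 # CLR's next-undo LSN = rec.prev_lsn
            lsn = rec.prev_lsn
```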
|
Multi-armed bandit
|
https://en.wikipedia.org/wiki?curid=2854828
| 2,433,913 |
horizon reward under sufficient assumptions of finite state - action spaces and irreducibility of the transition law. a main feature of these policies is that the choice of actions, at each state and time period, is based on indices that are inflations of the right - hand side of the estimated average reward optimality equations. these inflations have recently been called the optimistic approach in the work of tewari and bartlett, ortner filippi, cappe, and garivier, and honda and takemura. for bernoulli multi - armed bandits, pilarski et al. studied computation methods of deriving fully optimal solutions ( not just asymptotically ) using dynamic programming in the paper " optimal policy for bernoulli bandits : computation and algorithm gauge. " via indexing schemes, lookup tables, and other techniques, this work provided practically applicable optimal solutions for bernoulli bandits provided that time horizons and numbers of arms did not become excessively large. pilarski et al. later extended this work in " delayed reward bernoulli bandits : optimal policy and predictive meta - algorithm pardi " to create a method of determining the optimal policy for bernoulli bandits when rewards may not be immediately revealed following a decision and may be delayed. this method relies upon calculating expected values of reward outcomes which have not yet been revealed and updating posterior probabilities when rewards are revealed. when optimal solutions to multi - arm bandit tasks are used to derive the value of animals ' choices, the activity of neurons in the amygdala and ventral striatum encodes the values derived from these policies, and can be used to decode when the animals make exploratory versus exploitative choices. moreover, optimal policies better predict animals ' choice behavior than alternative strategies ( described below ). this suggests that the optimal solutions to multi - arm bandit problems are biologically plausible, despite being computationally demanding. many strategies exist which provide an approximate solution to the bandit problem, and can be put into the four broad categories detailed below. semi - uniform strategies were the earliest ( and simplest ) strategies discovered to approximately solve the bandit problem. all those strategies have in common a greedy behavior where the " best " lever ( based on previous observations ) is always pulled except when a ( uniformly ) random action is taken. probability matching strategies reflect the idea that the number of pulls for a given lever should " match " its actual probability of being the optimal lever. probability matching strategies are also known as thompson sampling or bayesian bandits, and
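The two approximate strategy families named above, semi-uniform (epsilon-greedy) and probability matching (Thompson sampling), can be sketched for Bernoulli bandits as follows (Python). This is not the exact optimal dynamic-programming policy of Pilarski et al., just an illustrative sketch, and the function names are choices made here, not from the source.

```python
import random

def epsilon_greedy(counts, successes, epsilon=0.1):
    """Semi-uniform strategy: pull the empirically best lever, except with
    probability epsilon pull a uniformly random one."""
    if random.random() < epsilon:
        return random.randrange(len(counts))
    rates = [s / c if c > 0 else 0.0 for s, c in zip(successes, counts)]
    return max(range(len(counts)), key=rates.__getitem__)

def thompson_sampling(counts, successes):
    """Probability matching for Bernoulli bandits: sample a success
    probability from each arm's Beta posterior and pull the arm whose
    sample is largest."""
    samples = [random.betavariate(1 + s, 1 + c - s)
               for s, c in zip(successes, counts)]
    return max(range(len(counts)), key=samples.__getitem__)

# toy simulation with three arms whose true success probabilities are hidden
true_p = [0.2, 0.5, 0.7]
counts, successes = [0] * 3, [0] * 3
for _ in range(1000):
    arm = thompson_sampling(counts, successes)
    reward = 1 if random.random() < true_p[arm] else 0
    counts[arm] += 1
    successes[arm] += reward
print(counts)   # most pulls should concentrate on the best arm
```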
|
Lommel polynomial
|
https://en.wikipedia.org/wiki?curid=17777195
| 25,075,482 |
a lommel polynomial " r " ( " z " ), introduced by eugen von lommel in 1871, is a polynomial in 1 / " z " giving the recurrence relation
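The recurrence the sentence trails off into was stripped during extraction. In the usual normalization (a sketch from standard references, not quoted from this excerpt, and the normalization convention may differ from the source's) the Lommel polynomials satisfy

$$ R_{m+1,\nu}(z) = \frac{2(m+\nu)}{z}\,R_{m,\nu}(z) - R_{m-1,\nu}(z), \qquad R_{0,\nu}(z)=1, \quad R_{1,\nu}(z)=\frac{2\nu}{z}, $$

arising from iterating the Bessel recurrence, which gives $J_{\nu+m}(z) = J_{\nu}(z)\,R_{m,\nu}(z) - J_{\nu-1}(z)\,R_{m-1,\nu+1}(z)$.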
|
Crystal structure
|
https://en.wikipedia.org/wiki?curid=58690
| 1,645,587 |
fraction of the material, with profound effects on such properties as diffusion and plasticity. in the limit of small crystallites, as the volume fraction of grain boundaries approaches 100 %, the material ceases to have any crystalline character, and thus becomes an amorphous solid. the difficulty of predicting stable crystal structures based on the knowledge of only the chemical composition has long been a stumbling block on the way to fully computational materials design. now, with more powerful algorithms and high - performance computing, structures of medium complexity can be predicted using such approaches as evolutionary algorithms, random sampling, or metadynamics. the crystal structures of simple ionic solids ( e. g., nacl or table salt ) have long been rationalized in terms of pauling ' s rules, first set out in 1929 by linus pauling, referred to by many since as the " father of the chemical bond ". pauling also considered the nature of the interatomic forces in metals, and concluded that about half of the five d - orbitals in the transition metals are involved in bonding, with the remaining nonbonding d - orbitals being responsible for the magnetic properties. he, therefore, was able to correlate the number of d - orbitals in bond formation with the bond length as well as many of the physical properties of the substance. he subsequently introduced the metallic orbital, an extra orbital necessary to permit uninhibited resonance of valence bonds among various electronic structures. in the resonating valence bond theory, the factors that determine the choice of one from among alternative crystal structures of a metal or intermetallic compound revolve around the energy of resonance of bonds among interatomic positions. it is clear that some modes of resonance would make larger contributions ( be more mechanically stable than others ), and that in particular a simple ratio of number of bonds to number of positions would be exceptional. the resulting principle is that a special stability is associated with the simplest ratios or " bond numbers " :,,,,, etc. the choice of structure and the value of the axial ratio ( which determines the relative bond lengths ) are thus a result of the effort of an atom to use its valency in the formation of stable bonds with simple fractional bond numbers. after postulating a direct correlation between electron concentration and crystal structure in beta - phase alloys, hume - rothery analyzed the trends in melting points, compressibilities and bond lengths as a function of group number in the periodic table in order to establish a
|
Gene therapy
|
https://en.wikipedia.org/wiki?curid=12891
| 1,834,587 |
##topsia ( color blindness ) in dogs by targeting cone photoreceptors. cone function and day vision were restored for at least 33 months in two young specimens. the therapy was less efficient for older dogs. in september it was announced that an 18 - year - old male patient in france with beta thalassemia major had been successfully treated. beta thalassemia major is an inherited blood disease in which beta haemoglobin is missing and patients are dependent on regular lifelong blood transfusions. the technique used a lentiviral vector to transduce the human β - globin gene into purified blood and marrow cells obtained from the patient in june 2007. the patient ' s haemoglobin levels were stable at 9 to 10 g / dl. about a third of the hemoglobin contained the form introduced by the viral vector and blood transfusions were not needed. further clinical trials were planned. bone marrow transplants are the only cure for thalassemia, but 75 % of patients do not find a matching donor. cancer immunogene therapy using modified antigene, antisense / triple helix approach was introduced in south america in 2010 / 11 in la sabana university, bogota ( ethical committee 14 december 2010, no p - 004 - 10 ). considering the ethical aspect of gene diagnostic and gene therapy targeting igf - i, the igf - i expressing tumors i. e. lung and epidermis cancers were treated ( trojan et al. 2016 ). in 2007 and 2008, a man ( timothy ray brown ) was cured of hiv by repeated hematopoietic stem cell transplantation ( see also allogeneic stem cell transplantation, allogeneic bone marrow transplantation, allotransplantation ) with double - delta - 32 mutation which disables the ccr5 receptor. this cure was accepted by the medical community in 2011. it required complete ablation of existing bone marrow, which is very debilitating. in august two of three subjects of a pilot study were confirmed to have been cured from chronic lymphocytic leukemia ( cll ). the therapy used genetically modified t cells to attack cells that expressed the cd19 protein to fight the disease. in 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor - free. human hgf plasmid dna therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease as well
|
Stomatal conductance
|
https://en.wikipedia.org/wiki?curid=32086016
| 11,724,347 |
stomatal conductance, usually measured in mmol m^-2 s^-1 by a porometer, estimates the rate of gas exchange ( i. e., carbon dioxide uptake ) and transpiration ( i. e., water loss as water vapor ) through the leaf stomata as determined by the degree of stomatal aperture ( and therefore the physical resistances to the movement of gases between the air and the interior of the leaf ). the stomatal conductance, or its inverse, stomatal resistance, is under the direct biological control of the leaf through its guard cells, which surround the stomatal pore. the turgor pressure and osmotic potential of guard cells are directly related to the stomatal conductance. stomatal conductance is a function of stomatal density, stomatal aperture, and stomatal size. stomatal conductance is integral to leaf level calculations of transpiration. multiple studies have shown a direct correlation between the use of herbicides and changes in physiological and biochemical growth processes in plants, particularly non - target plants, resulting in a reduction in stomatal conductance and turgor pressure in leaves. " for mechanism, see : stomatal opening and closing. " stomatal conductance is a function of the density, size and degree of opening of the stomata ; with more open stomata allowing greater conductance, and consequently indicating that photosynthesis and transpiration rates are potentially higher. therefore, stomatal opening and closing has a direct relationship to stomatal conductance. light - dependent stomatal opening occurs in many species and under many different conditions. light is a major stimulus involved in stomatal conductance, and has two key elements that are involved in the process : 1 ) the stomatal response to blue light, and 2 ) photosynthesis in the chloroplast of the guard cell. in c3 and c4 plants, the stomata open when there is an increase in light, and they close when there is a decrease in light. in cam plants, however, the stomata open when there is a decrease in light. " for more details about cam plant stomatal conductance, see : cam plants " stomatal opening occurs as a response to blue light. blue light activates the blue light receptor on the guard cell membrane which induces the pumping of protons out of the guard cell. this efflux of protons creates an electrochemical
|
High-energy nuclear physics
|
https://en.wikipedia.org/wiki?curid=1171044
| 15,776,684 |
high - energy nuclear physics studies the behavior of nuclear matter in energy regimes typical of high - energy physics. the primary focus of this field is the study of heavy - ion collisions, as compared to lighter atoms in other particle accelerators. at sufficient collision energies, these types of collisions are theorized to produce the quark – gluon plasma. in peripheral nuclear collisions at high energies one expects to obtain information on the electromagnetic production of leptons and mesons that are not accessible in electron – positron colliders due to their much smaller luminosities. previous high - energy nuclear accelerator experiments have studied heavy - ion collisions using projectile energies of 1 gev / nucleon at jinr and lbnl - bevalac up to 158 gev / nucleon at cern - sps. experiments of this type, called " fixed - target " experiments, primarily accelerate a " bunch " of ions ( typically around 10 to 10 ions per bunch ) to speeds approaching the speed of light ( 0. 999 " c " ) and smash them into a target of similar heavy ions. while all collision systems are interesting, great focus was applied in the late 1990s to symmetric collision systems of gold beams on gold targets at brookhaven national laboratory ' s alternating gradient synchrotron ( ags ) and uranium beams on uranium targets at cern ' s super proton synchrotron. high - energy nuclear physics experiments are continued at the brookhaven national laboratory ' s relativistic heavy ion collider ( rhic ) and at the cern large hadron collider. at rhic the programme began with four experiments — phenix, star, phobos, and brahms — all dedicated to study collisions of highly relativistic nuclei. unlike fixed - target experiments, collider experiments steer two accelerated beams of ions toward each other at ( in the case of rhic ) six interaction regions. at rhic, ions can be accelerated ( depending on the ion size ) from 100 gev / nucleon to 250 gev / nucleon. since each colliding ion possesses this energy moving in opposite directions, the maximal energy of the collisions can achieve a center - of - mass collision energy of 200 gev / nucleon for gold and 500 gev / nucleon for protons. the alice ( a large ion collider experiment ) detector at the lhc at cern is specialized in studying pb – pb nuclei collisions at a center - of - mass
|
Function composition (computer science)
|
https://en.wikipedia.org/wiki?curid=1911084
| 9,839,494 |
of composition is central. whole programs or systems can be treated as functions, which can be readily composed if their inputs and outputs are well - defined. pipelines allowing easy composition of filters were so successful that they became a design pattern of operating systems. imperative procedures with side effects violate referential transparency and therefore are not cleanly composable. however if one considers the " state of the world " before and after running the code as its input and output, one gets a clean function. composition of such functions corresponds to running the procedures one after the other. the monad formalism uses this idea to incorporate side effects and input / output ( i / o ) into functional languages.
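A minimal sketch of function composition as a pipeline (Python; compose and tokenize are illustrative names chosen here, not taken from the source):

```python
from functools import reduce

def compose(*funcs):
    """Compose functions right-to-left: compose(f, g, h)(x) == f(g(h(x)))."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(funcs), x)

# a small "pipeline" of steps with well-defined inputs and outputs
strip = str.strip
lower = str.lower
words = str.split

tokenize = compose(words, lower, strip)   # strip, then lowercase, then split
print(tokenize("  Hello World  "))        # ['hello', 'world']
```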
|
Sinusitis
|
https://en.wikipedia.org/wiki?curid=28598
| 1,115,242 |
infection than rvb ), coronaviruses, and influenza viruses, others caused by adenoviruses, human parainfluenza viruses, human respiratory syncytial virus, enteroviruses other than rhinoviruses, and metapneumovirus. if the infection is of bacterial origin, the most common three causative agents are " streptococcus pneumoniae ( 38 % ) ", " haemophilus influenzae ( 36 % ) ", and " moraxella catarrhalis ( 16 % ) ". until recently, " h. influenzae " was the most common bacterial agent to cause sinus infections. however, introduction of the " h. influenzae " type b ( hib ) vaccine has dramatically decreased these infections and now non - typable " h. influenzae " ( nthi ) is predominantly seen in clinics. other sinusitis - causing bacterial pathogens include " s. aureus " and other streptococci species, anaerobic bacteria and, less commonly, gram - negative bacteria. viral sinusitis typically lasts for 7 to 10 days. acute episodes of sinusitis can also result from fungal invasion. these infections are typically seen in people with diabetes or other immune deficiencies ( such as aids or transplant on immunosuppressive antirejection medications ) and can be life - threatening. in type i diabetics, ketoacidosis can be associated with sinusitis due to mucormycosis. by definition, chronic sinusitis lasts longer than 12 weeks and can be caused by many different diseases that share chronic inflammation of the sinuses as a common symptom. it is subdivided into cases with and without polyps. when polyps are present, the condition is called chronic hyperplastic sinusitis ; however, the causes are poorly understood. it may develop with anatomic derangements, including deviation of the nasal septum and the presence of concha bullosa ( pneumatization of the middle concha ) that inhibit the outflow of mucus, or with allergic rhinitis, asthma, cystic fibrosis, and dental infections. chronic rhinosinusitis represents a multifactorial inflammatory disorder, rather than simply a persistent bacterial infection. the medical management of chronic rhinosinusitis is now focused upon controlling the inflammation that predisposes people to obstruction, reducing the incidence of infections. surgery may be needed if medications are not working
|
Mobile mapping
|
https://en.wikipedia.org/wiki?curid=31775293
| 16,170,546 |
mobile mapping is the process of collecting geospatial data from a mobile vehicle, typically fitted with a range of gnss, photographic, radar, laser, lidar or any number of remote sensing systems. such systems are composed of an integrated array of time synchronised navigation sensors and imaging sensors mounted on a mobile platform. the primary output from such systems include gis data, digital maps, and georeferenced images and video. the development of direct reading georeferencing technologies opened the way for mobile mapping systems. gps and inertial navigation systems, have allowed rapid and accurate determination of position and attitude of remote sensing equipment, effectively leading to direct mapping of features of interest without the need for complex post - processing of observed data. traditional techniques of geo - referencing aerial photography, ground profiling radar, or lidar are prohibitively expensive, particularly in inaccessible areas, or where the type of data collected makes interpretation of individual features difficult. image direct georeferencing, simplifies the mapping control for large scale mapping tasks. mobile mapping systems allow rapid collection of data to allow accurate assessment of conditions on the ground. internet, and mobile device users, are increasingly utilising geo - spatial information, either in the form of mapping, or geo - referenced imaging. google, microsoft, and yahoo have adapted both aerial photographs and satellite images to develop online mapping systems. " street view " type images are also an increasing market. the same system can be utilised to carry out efficient road condition surveys, and facilities management. laser scanning technologies, applied in the mobile mapping sense, allow full 3d data collection of slope, bankings, etc. mobile lidar with a digital imaging system is being used to gather data which after post - processing generates strip plan, horizontal and vertical profile, all other asset within and beyond row including abutting land use and deficient geometry. this also calls for riding quality of pavement, existing traffic characteristics and capacity of the corridor, speed - flow - density analysis, road safety review of the corridor, junction, and median opening, facilities for commercial vehicles. thus all data being used to form a performance matrix help identifying the gaps in corridor efficiency for prioritization of interventions to improve corridor efficiency. mobile mapping combined with indoor mapping are being used in creation of digital twins. these digital twins can be a single building or an entire city or country. several mobile mapping companies, known as " maker of digital twins " are embarking on capturing the digital twins market amid the growing trend among organizations
|
AIX Toolbox for Linux Applications
|
https://en.wikipedia.org/wiki?curid=50501391
| 25,724,083 |
the aix toolbox for linux applications is a collection of gnu tools for ibm aix. these tools are available for installation using red hat ' s rpm format. each of these packages includes its own licensing information and while ibm has made the code available to aix users, the code is provided as is and has not been thoroughly tested. the toolbox is meant to provide a core set of some of the most common development tools and libraries along with the more popular gnu packages.
|
Handicap principle
|
https://en.wikipedia.org/wiki?curid=333925
| 6,403,881 |
while being chased, telling their predator that they will be difficult to capture.
|
Whispering gallery
|
https://en.wikipedia.org/wiki?curid=1037163
| 7,256,946 |
a whispering gallery is usually a circular, hemispherical, elliptical or ellipsoidal enclosure, often beneath a dome or a vault, in which whispers can be heard clearly in other parts of the gallery. such galleries can also be set up using two parabolic dishes. sometimes the phenomenon is detected in caves. a whispering gallery is most simply constructed in the form of a circular wall, and allows whispered communication from any part of the internal side of the circumference to any other part. the sound is carried by waves, known as whispering - gallery waves, that travel around the circumference clinging to the walls, an effect that was discovered in the whispering gallery of st paul ' s cathedral in london. the extent to which the sound travels at st paul ' s can also be judged by clapping in the gallery, which produces four echoes. other historical examples are the gol gumbaz mausoleum in bijapur and the echo wall of the temple of heaven in beijing. a hemispherical enclosure will also guide whispering gallery waves. the waves carry the words so that others will be able to hear them from the opposite side of the gallery. the gallery may also be in the form of an ellipse or ellipsoid, with an accessible point at each focus. in this case, when a visitor stands at one focus and whispers, the line of sound emanating from this focus reflects directly to the focus at the other end of the gallery, where the whispers may be heard. in a similar way, two large concave parabolic dishes, serving as acoustic mirrors, may be erected facing each other in a room or outdoors to serve as a whispering gallery, a common feature of science museums. egg - shaped galleries, such as the golghar granary at bankipore, and irregularly shaped smooth - walled galleries in the form of caves, such as the ear of dionysius in syracuse, also exist. the term " whispering gallery " has been borrowed in the physical sciences to describe other forms of whispering - gallery waves such as light or matter waves.
|
UML state machine
|
https://en.wikipedia.org/wiki?curid=23959612
| 5,576,527 |
and guards ). the exact syntax of action and guard expressions isn ' t defined in the uml specification, so many people use either structured english or, more formally, expressions in an implementation language such as c, c + +, or java. in practice, this means that uml statechart notation depends heavily on the specific programming language. nevertheless, most of the statecharts semantics are heavily biased toward graphical notation. for example, state diagrams poorly represent the sequence of processing, be it order of evaluation of guards or order of dispatching events to orthogonal regions. the uml specification sidesteps these problems by putting the burden on the designer not to rely on any particular sequencing. however, it is the case that when uml state machines are actually implemented, there is inevitably full control over order of execution, giving rise to criticism that the uml semantics may be unnecessarily restrictive. similarly, statechart diagrams require a lot of plumbing gear ( pseudostates, like joins, forks, junctions, choicepoints, etc. ) to represent the flow of control graphically. in other words, these elements of the graphical notation do not add much value in representing flow of control as compared to plain structured code. the uml notation and semantics are really geared toward computerized uml tools. a uml state machine, as represented in a tool, is not just the state diagram, but rather a mixture of graphical and textual representation that precisely captures both the state topology and the actions. the users of the tool can get several complementary views of the same state machine, both visual and textual, whereas the generated code is just one of the many available views.
|
DNA profiling
|
https://en.wikipedia.org/wiki?curid=44290
| 2,237,745 |
guilty to both murders. although 99. 9 % of human dna sequences are the same in every person, enough of the dna is different that it is possible to distinguish one individual from another, unless they are monozygotic ( identical ) twins. dna profiling uses repetitive sequences that are highly variable, called variable number tandem repeats ( vntrs ), in particular short tandem repeats ( strs ), also known as microsatellites, and minisatellites. vntr loci are similar between closely related individuals, but are so variable that unrelated individuals are unlikely to have the same vntrs. when a sample such as blood or saliva is obtained, the dna is only a small part of what is present in the sample. before the dna can be analyzed, it must be extracted from the cells and purified. there are many ways this can be accomplished, but all methods follow the same basic procedure. the cell and nuclear membranes need to be broken up to allow the dna to be free in solution. once the dna is free, it can be separated from all other cellular components. after the dna has been separated in solution, the remaining cellular debris can then be removed from the solution and discarded, leaving only dna. the most common methods of dna extraction include organic extraction ( also called phenol chloroform extraction ), chelex extraction, and solid phase extraction. differential extraction is a modified version of extraction in which dna from two different types of cells can be separated from each other before being purified from the solution. each method of extraction works well in the laboratory, but analysts typically select their preferred method based on factors such as the cost, the time involved, the quantity of dna yielded, and the quality of dna yielded. rflp stands for restriction fragment length polymorphism and, in terms of dna analysis, describes a dna testing method which utilizes restriction enzymes to " cut " the dna at short and specific sequences throughout the sample. to start off processing in the laboratory, the sample has to first go through an extraction protocol, which may vary depending on the sample type and / or laboratory sops ( standard operating procedures ). once the dna has been " extracted " from the cells within the sample and separated away from extraneous cellular materials and any nucleases that would degrade the dna, the sample can then be introduced to the desired restriction enzymes to be cut up into discernable fragments. following the enzyme digestion, a southern blot is performed. southern blots are
|
Ideal solution
|
https://en.wikipedia.org/wiki?curid=731401
| 7,776,193 |
the vapor pressure of component formula _ 3 above the solution, formula _ 4 is its mole fraction and formula _ 5 is the vapor pressure of the pure substance formula _ 3 at the same temperature. this definition depends on vapor pressure, which is a directly measurable property, at least for volatile components. the thermodynamic properties may then be obtained from the chemical potential μ ( which is the partial molar gibbs energy " g " ) of each component. if the vapor is an ideal gas, the reference pressure formula _ 8 may be taken as formula _ 9 = 1 bar, or as the pressure of the mix, whichever is simpler. this equation for the chemical potential can be used as an alternate definition for an ideal solution. however, the vapor above the solution may not actually behave as a mixture of ideal gases. some authors therefore define an ideal solution as one for which each component obeys the fugacity analogue of raoult ' s law formula _ 12. here formula _ 13 is the fugacity of component formula _ 3 in solution and formula _ 15 is the fugacity of formula _ 3 as a pure substance. since the fugacity is defined by the equation this definition leads to ideal values of the chemical potential and other thermodynamic properties even when the component vapors above the solution are not ideal gases. an equivalent statement uses thermodynamic activity instead of fugacity. since all this, done as a pure substance, is valid in an ideal mix just adding the subscript formula _ 3 to all the intensive variables and changing formula _ 22 to formula _ 26, with optional overbar, standing for partial molar volume : which means that the partial molar volumes in an ideal mix are independent of composition. consequently, the total volume is the sum of the volumes of the components in their pure forms : proceeding in a similar way but taking the derivative with respect to formula _ 19 we get a similar result for molar enthalpies : which in turn means that formula _ 34 and that the enthalpy of the mix is equal to the sum of its component enthalpies. solvent – solute interactions are the same as solute – solute and solvent – solvent interactions, on average. consequently, the enthalpy of mixing ( solution ) is zero and the change in gibbs free energy on mixing is determined solely by the entropy of mixing. hence the molar gibbs free energy of mixing is where m denotes molar, i. e., change in gibbs free energy per
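In conventional notation (standing in for the stripped formula placeholders; a sketch, not a quotation from the source), the defining relation and its consequences read

$$ p_i = x_i\,p_i^{*}, \qquad \mu_i = \mu_i^{*} + RT\ln x_i, \qquad \Delta G_{\mathrm{mix},m} = RT\sum_i x_i \ln x_i, $$

so the enthalpy of mixing vanishes and the Gibbs energy of mixing is purely entropic, as the text states.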
|
System of polynomial equations
|
https://en.wikipedia.org/wiki?curid=27420015
| 9,696,424 |
generator to the equations of the system. thus solving a polynomial system over a number field is reduced to solving another system over the rational numbers. for example, if a system contains formula _ 7, a system over the rational numbers is obtained by adding the equation and replacing formula _ 7 by in the other equations. in the case of a finite field, the same transformation allows always supposing that the field has a prime order. the usual way of representing the solutions is through zero - dimensional regular chains. such a chain consists of a sequence of polynomials,,..., such that, for every such that the solutions of this system are obtained by solving the first univariate equation, substituting the solutions in the other equations, then solving the second equation which is now univariate, and so on. the definition of regular chains implies that the univariate equation obtained from has degree and thus that the system has solutions, provided that there is no multiple root in this resolution process ( fundamental theorem of algebra ). every zero - dimensional system of polynomial equations is equivalent ( i. e. has the same solutions ) to a finite number of regular chains. several regular chains may be needed, as it is the case for the following system which has three solutions. there are several algorithms for computing a triangular decomposition of an arbitrary polynomial system ( not necessarily zero - dimensional ) into regular chains ( or regular semi - algebraic systems ). there is also an algorithm which is specific to the zero - dimensional case and is competitive, in this case, with the direct algorithms. it consists in computing first the grobner basis for the graded reverse lexicographic order ( grevlex ), then deducing the lexicographical grobner basis by fglm algorithm and finally applying the lextriangular algorithm. this representation of the solutions are fully convenient for coefficients in a finite field. however, for rational coefficients, two aspects have to be taken care of : the first issue has been solved by dahan and schost : among the sets of regular chains that represent a given set of solutions, there is a set for which the coefficients are explicitly bounded in terms of the size of the input system, with a nearly optimal bound. this set, called " equiprojectable decomposition ", depends only on the choice of the coordinates. this allows the use of modular methods for computing efficiently the equiprojectable decomposition. the second issue is generally solved by outputting regular chains of a special form, sometimes called "
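The lexicographic route to a triangular description can be illustrated with a small zero-dimensional system (Python, assuming sympy). Note this computes the lex Gröbner basis directly rather than through the grevlex, FGLM and lextriangular pipeline described above; the example system is chosen here for illustration.

```python
from sympy import symbols, groebner, solve

x, y = symbols('x y')

# a zero-dimensional system: intersection of a circle and a parabola
system = [x**2 + y**2 - 1, y - x**2]

# a lexicographic Groebner basis is triangular: the last polynomial is
# univariate in y, and solutions follow by back-substitution, much like
# the regular chains described in the text
g = groebner(system, x, y, order='lex')
print(list(g))        # a triangular basis such as [x**2 - y, y**2 + y - 1]

for sol in solve(system, [x, y], dict=True):
    print(sol)
```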
|
Birefringence
|
https://en.wikipedia.org/wiki?curid=174412
| 2,407,496 |
. then we shall find the possible wave vectors. by combining maxwell ' s equations for and, we can eliminate to obtain : we can apply the vector identity to the left hand side of, and use the spatial dependence in which each differentiation in ( for instance ) results in multiplication by to find : the right hand side of can be expressed in terms of through application of the permittivity tensor and noting that differentiation in time results in multiplication by, then becomes : finding the allowed values of for a given is easiest done by using cartesian coordinates with the, and axes chosen in the directions of the symmetry axes of the crystal ( or simply choosing in the direction of the optic axis of a uniaxial crystal ), resulting in a diagonal matrix for the permittivity tensor : where the diagonal values are squares of the refractive indices for polarizations along the three principal axes, and. with in this form, and substituting in the speed of light using, the component of the vector equation becomes where,, are the components of ( at any given position in space and time ) and,, are the components of. rearranging, we can write ( and similarly for the and components of ) this is a set of linear equations in,,, so it can have a nontrivial solution ( that is, one other than ) as long as the following determinant is zero : evaluating the determinant of, and rearranging the terms according to the powers of formula _ 2, the constant terms cancel. after eliminating the common factor formula _ 2 from the remaining terms, we obtain in the case of a uniaxial material, choosing the optic axis to be in the direction so that and, this expression can be factored into setting either of the factors in to zero will define an ellipsoidal surface in the space of wavevectors that are allowed for a given. the first factor being zero defines a sphere ; this is the solution for so - called ordinary rays, in which the effective refractive index is exactly regardless of the direction of. the second defines a spheroid symmetric about the axis. this solution corresponds to the so - called extraordinary rays in which the effective refractive index is in between and, depending on the direction of. therefore, for any arbitrary direction of propagation ( other than in the direction of the optic axis ), two distinct wavevectors are allowed corresponding to the polarizations of the ordinary and extraordinary rays. for a biaxial material a similar but more complicated condition on the two
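For the uniaxial case described above (optic axis along z, with n_x = n_y = n_o and n_z = n_e), the factored determinant condition takes the standard form below (a sketch in conventional notation, since the source's formulas were stripped):

$$ \left(\frac{k_x^{2}+k_y^{2}+k_z^{2}}{n_o^{2}}-\frac{\omega^{2}}{c^{2}}\right)\left(\frac{k_x^{2}+k_y^{2}}{n_e^{2}}+\frac{k_z^{2}}{n_o^{2}}-\frac{\omega^{2}}{c^{2}}\right)=0, $$

the first factor giving the ordinary sphere and the second the extraordinary spheroid mentioned in the text.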
|
Force-directed graph drawing
|
https://en.wikipedia.org/wiki?curid=710331
| 6,344,119 |
force - directed graph drawing algorithms are a class of algorithms for drawing graphs in an aesthetically - pleasing way. their purpose is to position the nodes of a graph in two - dimensional or three - dimensional space so that all the edges are of more or less equal length and there are as few crossing edges as possible, by assigning forces among the set of edges and the set of nodes, based on their relative positions, and then using these forces either to simulate the motion of the edges and nodes or to minimize their energy. while graph drawing can be a difficult problem, force - directed algorithms, being physical simulations, usually require no special knowledge about graph theory such as planarity. force - directed graph drawing algorithms assign forces among the set of edges and the set of nodes of a graph drawing. typically, spring - like attractive forces based on hooke ' s law are used to attract pairs of endpoints of the graph ' s edges towards each other, while simultaneously repulsive forces like those of electrically charged particles based on coulomb ' s law are used to separate all pairs of nodes. in equilibrium states for this system of forces, the edges tend to have uniform length ( because of the spring forces ), and nodes that are not connected by an edge tend to be drawn further apart ( because of the electrical repulsion ). edge attraction and vertex repulsion forces may be defined using functions that are not based on the physical behavior of springs and particles ; for instance, some force - directed systems use springs whose attractive force is logarithmic rather than linear. an alternative model considers a spring - like force for every pair of nodes formula _ 1 where the ideal length formula _ 2 of each spring is proportional to the graph - theoretic distance between nodes " i " and " j ", without using a separate repulsive force. minimizing the difference ( usually the squared difference ) between euclidean and ideal distances between nodes is then equivalent to a metric multidimensional scaling problem. a force - directed graph can involve forces other than mechanical springs and electrical repulsion. a force analogous to gravity may be used to pull vertices towards a fixed point of the drawing space ; this may be used to pull together different connected components of a disconnected graph, which would otherwise tend to fly apart from each other because of the repulsive forces, and to draw nodes with greater centrality to more central positions in the drawing ; it may also affect the vertex spacing within a single component. analogues of magnetic fields may be used for directed graphs. rep
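A minimal Fruchterman–Reingold-style sketch of the spring/charge model just described (Python; the constants and function names are illustrative choices made here, not from the source):

```python
import math, random

def force_directed_layout(nodes, edges, iterations=200, width=1.0):
    """Toy spring-electrical layout: Hooke-style attraction along edges,
    Coulomb-style repulsion between every pair of nodes."""
    pos = {v: [random.random(), random.random()] for v in nodes}
    k = width / math.sqrt(len(nodes))              # ideal edge length
    for step in range(iterations):
        disp = {v: [0.0, 0.0] for v in nodes}
        for v in nodes:                            # repulsion between all pairs
            for u in nodes:
                if u == v:
                    continue
                dx = pos[v][0] - pos[u][0]
                dy = pos[v][1] - pos[u][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d                      # Coulomb-like term
                disp[v][0] += dx / d * f
                disp[v][1] += dy / d * f
        for u, v in edges:                         # attraction along edges
            dx = pos[v][0] - pos[u][0]
            dy = pos[v][1] - pos[u][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k                          # spring-like term
            disp[v][0] -= dx / d * f; disp[v][1] -= dy / d * f
            disp[u][0] += dx / d * f; disp[u][1] += dy / d * f
        t = width * (1.0 - step / iterations) * 0.1   # cooling temperature
        for v in nodes:
            d = math.hypot(*disp[v]) or 1e-9
            pos[v][0] += disp[v][0] / d * min(d, t)
            pos[v][1] += disp[v][1] / d * min(d, t)
    return pos

print(force_directed_layout(["a", "b", "c", "d"],
                            [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))
```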
|
Abelian and Tauberian theorems
|
https://en.wikipedia.org/wiki?curid=411990
| 13,580,800 |
terms up to " c " average to at most " ε " / 2, while each term in the tail is bounded by ε / 2 so that the average is also necessarily bounded. the name derives from abel ' s theorem on power series. in that case " l " is the " radial limit " ( thought of within the complex unit disk ), where we let " r " tend to the limit 1 from below along the real axis in the power series with term and set " z " = " r " · " e ". that theorem has its main interest in the case that the power series has radius of convergence exactly 1 : if the radius of convergence is greater than one, the convergence of the power series is uniform for " r " in [ 0, 1 ] so that the sum is automatically continuous and it follows directly that the limit as " r " tends up to 1 is simply the sum of the " a ". when the radius is 1 the power series will have some singularity on | " z " | = 1 ; the assertion is that, nonetheless, if the sum of the " a " exists, it is equal to the limit over " r ". this therefore fits exactly into the abstract picture. partial converses to abelian theorems are called tauberian theorems. the original result of alfred tauber stated that if we assume also that " a " = o ( 1 / " n " ) ( see little o notation ) and the radial limit exists, then the series obtained by setting " z " = 1 is actually convergent. this was strengthened by john edensor littlewood : we need only assume o ( 1 / " n " ). a sweeping generalization is the hardy – littlewood tauberian theorem. in the abstract setting, therefore, an " abelian " theorem states that the domain of " l " contains the convergent sequences, and its values there are equal to those of the " lim " functional. a " tauberian " theorem states, under some growth condition, that the domain of " l " is exactly the convergent sequences and no more. if one thinks of " l " as some generalised type of " weighted average ", taken to the limit, a tauberian theorem allows one to discard the weighting, under the correct hypotheses. there are many applications of this kind of result in number theory, in particular in handling dirichlet series. the development of the field of tauberian theorems received a fresh turn with norbert wiener ' s very general results, namely wiener ' s tauberian theorem
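A compact restatement of the two classical results discussed above, in standard notation (the symbols $a_n$ and $s$ are introduced here for orientation; they are not quoted from the excerpt):

$$ \sum_{n\ge 0} a_n = s \;\Longrightarrow\; \lim_{r\to 1^{-}}\sum_{n\ge 0} a_n r^{n} = s \quad \text{(Abel)}, $$

$$ \lim_{r\to 1^{-}}\sum_{n\ge 0} a_n r^{n} = s \ \text{ and } \ a_n = o(1/n) \;\Longrightarrow\; \sum_{n\ge 0} a_n = s \quad \text{(Tauber, 1897)}, $$

with Littlewood's strengthening replacing $o(1/n)$ by $O(1/n)$.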
|
Joule effect
|
https://en.wikipedia.org/wiki?curid=5143685
| 10,168,078 |
joule effect and joule ' s law are any of several different physical effects discovered or characterized by english physicist james prescott joule. these physical effects are not the same, but all are frequently or occasionally referred to in the literature as the " joule effect " or " joule law ". these physical effects include : between 1840 and 1843, joule carefully studied the heat produced by an electric current. from this study, he developed joule ' s laws of heating, the first of which is commonly referred to as the " joule effect ". joule ' s first law expresses the relationship between heat generated in a conductor and current flow, resistance, and time. the magnetostriction effect describes a property of ferromagnetic materials which causes them to change their shape when subjected to a magnetic field. joule first reported observing the change in the length of ferromagnetic rods in 1842. in 1845, joule studied the free expansion of a gas into a larger volume. this became known as joule expansion. the cooling of a gas by allowing it to expand freely is occasionally referred to as the joule effect. if an elastic band is first stretched and then subjected to heating, it will shrink rather than expand. this effect was first observed by john gough in 1802, and was investigated further by joule in the 1850s, when it became known as the gough – joule effect.
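Joule's first law, referred to above, is conventionally written as follows (standard form, not quoted from this excerpt):

$$ Q = I^{2} R\, t, $$

where $Q$ is the heat generated in the conductor, $I$ the current, $R$ the resistance and $t$ the time for which the current flows.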
|
Ataxin 1
|
https://en.wikipedia.org/wiki?curid=6129774
| 19,645,921 |
mutations in ataxin - 1 cause spinocerebellar ataxia type 1, an inherited neurodegenerative disease characterized by a progressive loss of cerebellar neurons, particularly purkinje neurons. in humans, " atxn1 " is located on the short arm of chromosome 6. the gene contains 9 exons, two of which are protein - coding. there is a cag repeat in the coding sequence which is longer in humans than other species ( 6 - 38 uninterrupted cag repeats in healthy humans versus 2 in the mouse gene ). this repeat is prone to errors in dna replication and can vary widely in length between individuals. the function of ataxin - 1 is not completely understood. it appears to be involved in regulating gene expression based on its location in the nucleus of the cell, its association with promoter regions of several genes, and its interactions with transcriptional regulators and parts of the rna splicing machinery. " atxn1 " is the gene mutated in spinocerebellar ataxia type 1 ( sca1 ), a dominantly - inherited, fatal genetic disease in which neurons in the cerebellum and brain stem degenerate over the course of years or decades. sca1 is a trinucleotide repeat disorder caused by expansion of the cag repeat in " atxn1 " ; this leads to an expanded polyglutamine tract in the protein. this elongation is variable in length, with as few as 6 and as many as 81 repeats reported in humans. repeats of 39 or more uninterrupted cag triplets cause disease, and longer repeat tracts are correlated with earlier age of onset and faster progression. how polyglutamine expansion in ataxin - 1 causes neuronal dysfunction and degeneration is still unclear. disease likely occurs through the combination of several processes. mutant ataxin - 1 protein spontaneously misfolds and forms aggregates in cells, much like other disease - associated proteins such as tau, aβ, and huntingtin. this led to the hypothesis that the aggregates are toxic to neurons, but it has been shown in mice that aggregation is not required for pathogenesis. other neuronal proteins can modulate the formation of ataxin - 1 aggregates and this in turn may affect aggregate - induced toxicity. soluble ataxin - 1 interacts with many other proteins. polyglutamine expansion in ataxin - 1 can affect these interactions, sometimes causing loss of function ( where the protein fails
|
Link aggregation
|
https://en.wikipedia.org/wiki?curid=1952952
| 2,975,270 |
the extreme, one link is fully loaded while the others are completely idle and aggregate bandwidth is limited to this single member ' s maximum bandwidth. for this reason, an even load balancing and full utilization of all trunked links is almost never reached in real - life implementations. nics trunked together can also provide network links beyond the throughput of any one single nic. for example, this allows a central file server to establish an aggregate 2 - gigabit connection using two 1 - gigabit nics teamed together. note the data signaling rate will still be 1 gbit / s, which can be misleading depending on methodologies used to test throughput after link aggregation is employed. microsoft windows server 2012 supports link aggregation natively. previous windows server versions relied on manufacturer support of the feature within their device driver software. intel, for example, released advanced networking services ( ans ) to bond intel fast ethernet and gigabit cards. nvidia supports teaming with their nvidia network access manager / firewall tool. hp has a teaming tool for hp - branded nics which supports several modes of link aggregation including 802. 3ad with lacp. in addition, there is a basic layer - 3 aggregation that allows servers with multiple ip interfaces on the same network to perform load balancing, and for home users with more than one internet connection, to increase connection speed by sharing the load on all interfaces. broadcom offers advanced functions via broadcom advanced control suite ( bacs ), via which the teaming functionality of basp ( broadcom advanced server program ) is available, offering 802. 3ad static lags, lacp, and " smart teaming " which doesn ' t require any configuration on the switches to work. it is possible to configure teaming with bacs with a mix of nics from different vendors as long as at least one of them is from broadcom and the other nics have the required capabilities to support teaming. linux, freebsd, netbsd, openbsd, macos, opensolaris and commercial unix distributions such as aix implement ethernet bonding at a higher level and, as long as the nic is supported by the kernel, can deal with nics from different manufacturers or using different drivers. citrix xenserver and vmware esx have native support for link - aggregation. xenserver offers both static lags as well as lacp. vsphere 5. 1 ( esxi ) supports both static lags and lacp natively with their virtual
|
Linear energy transfer
|
https://en.wikipedia.org/wiki?curid=4579933
| 7,922,486 |
of secondary radiation and the non - linear path of delta rays, but simplifies analytic evaluation. where formula _ 2 is the energy loss of the charged particle due to electronic collisions while traversing a distance formula _ 3, " excluding " all secondary electrons with kinetic energies larger than δ. if δ tends toward infinity, then there are no electrons with larger energy, and the linear energy transfer becomes the unrestricted linear energy transfer which is identical to the linear electronic " stopping power ". here, the use of the term " infinity " is not to be taken literally ; it simply means that no energy transfers, however large, are excluded. during his investigations of radioactivity, ernest rutherford coined the terms alpha rays, beta rays and gamma rays for the three types of emissions that occur during radioactive decay. linear energy transfer is best defined for monoenergetic ions, i. e. protons, alpha particles, and the heavier nuclei called hze ions found in cosmic rays or produced by particle accelerators. these particles cause frequent direct ionizations within a narrow diameter around a relatively straight track, thus approximating continuous deceleration. as they slow down, the changing particle cross section modifies their let, generally increasing it to a bragg peak just before achieving thermal equilibrium with the absorber, i. e., before the end of range. at equilibrium, the incident particle essentially comes to rest or is absorbed, at which point let is undefined. since the let varies over the particle track, an average value is often used to represent the spread. averages weighted by track length or weighted by absorbed dose are present in the literature, with the latter being more common in dosimetry. these averages are not widely separated for heavy particles with high let, but the difference becomes more important in the other type of radiations discussed below. often overlooked for alpha particles is the recoil - nucleus of the alpha emitter, which has significant ionization energy of roughly 5 % of the alpha particle, but because of its high electric charge and large mass, has an ultra - short range of only a few angstroms. this can skew results significantly if one is examining the relative biological effectiveness of the alpha particle in the cytoplasm, while ignoring the recoil nucleus contribution, which alpha - parent being one of numerous heavy metals, is typically adhered to chromatic material such as chromosomes. electrons produced in nuclear decay are called beta particles. because of their low mass relative to atoms, they are strongly scattered by nuclei ( coulomb or rutherford scattering
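In conventional notation (standing in for the stripped formula placeholders), the restricted and unrestricted linear energy transfer read

$$ L_{\Delta} = \frac{dE_{\Delta}}{dx}, \qquad L_{\infty} = \frac{dE}{dx}, $$

where $dE_{\Delta}$ is the energy lost to electronic collisions along $dx$ excluding secondary electrons with kinetic energy above the cutoff $\Delta$; letting $\Delta \to \infty$ recovers the unrestricted LET, identical to the linear electronic stopping power.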
|
Cone (topology)
|
https://en.wikipedia.org/wiki?curid=782162
| 13,344,504 |
then formula _ 31 is defined by where we take the basepoint of the reduced cone to be the equivalence class of formula _ 35. with this definition, the natural inclusion formula _ 36 becomes a based map. this construction also gives a functor, from the category of pointed spaces to itself.
|
Test-driven development
|
https://en.wikipedia.org/wiki?curid=357881
| 1,416,767 |
tactic is to fix it early. also, if a poor architecture, a poor design, or a poor testing strategy leads to a late change that makes dozens of existing tests fail, then it is important that they are individually fixed. merely deleting, disabling or rashly altering them can lead to undetectable holes in the test coverage. test - driven development has been adopted outside of software development, in both product and service teams, as test - driven work. for testing to be successful, it needs to be practiced at the micro and macro levels. every method in a class, every input data value, log message, and error code, amongst other data points, need to be tested. similar to tdd, non - software teams develop quality control ( qc ) checks ( usually manual tests rather than automated tests ) for each aspect of the work prior to commencing. these qc checks are then used to inform the design and validate the associated outcomes. the six steps of the tdd sequence are applied with minor semantic changes : test - driven development is related to, but different from acceptance test – driven development ( atdd ). tdd is primarily a developer ' s tool to help create well - written unit of code ( function, class, or module ) that correctly performs a set of operations. atdd is a communication tool between the customer, developer, and tester to ensure that the requirements are well - defined. tdd requires test automation. atdd does not, although automation helps with regression testing. tests used in tdd can often be derived from atdd tests, since the code units implement some portion of a requirement. atdd tests should be readable by the customer. tdd tests do not need to be. it includes the practice of writing tests first, but focuses on tests which describe behavior, rather than tests which test a unit of implementation. tools such as jbehave, cucumber, mspec and specflow provide syntaxes which allow product owners, developers and test engineers to define together the behaviors which can then be translated into automated tests. test suite code clearly has to be able to access the code it is testing. on the other hand, normal design criteria such as information hiding, encapsulation and the separation of concerns should not be compromised. therefore, unit test code for tdd is usually written within the same project or module as the code being tested. in object oriented design this still does not provide access to private data and methods. therefore, extra work may be necessary
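A minimal illustration of the red-green-refactor rhythm that the text contrasts with ATDD (Python; slugify is a hypothetical example function invented here, not something taken from the source):

```python
import unittest

# step 1 (red): write the tests first, before slugify exists or works
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_clean_input_is_unchanged(self):
        self.assertEqual(slugify("hello"), "hello")

# step 2 (green): write the minimal code that makes the tests pass
def slugify(text: str) -> str:
    return "-".join(text.lower().split())

# step 3 (refactor): clean up while keeping the suite green
if __name__ == "__main__":
    unittest.main()
```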
|
Flat band potential
|
https://en.wikipedia.org/wiki?curid=66494982
| 14,700,694 |
in semiconductor physics, the flat band potential of a semiconductor defines the potential at which there is no depletion layer at the junction between a semiconductor and an electrolyte or p - n - junction. this is a consequence of the condition that the redox fermi level of the electrolyte must be equal to the fermi level of the semiconductor, therefore preventing any band bending of the conduction and valence band. an application of the flat band potential can be found in determining the width of the space charge region in a semiconductor - electrolyte junction. furthermore, it is used in the mott - schottky equation to determine the capacitance of the semiconductor - electrolyte junction and plays a role in the photocurrent of a photoelectrochemical cell. the value of the flat band potential depends on many factors, such as the material, ph and crystal structure of the material. in semiconductors, valence electrons are located in energy bands. according to band theory, the electrons are either located in the valence band ( lower energy ) or the conduction band ( higher energy ), which are separated by an energy gap. in general, electrons will occupy different energy levels following the fermi - dirac distribution ; for energy levels higher than the fermi energy ef, the occupation will be minimal. electrons in lower levels can be excited into the higher levels through thermal or photoelectric excitations, leaving a positively - charged hole in the band they left. due to conservation of net charge, the concentration of electrons ( n ) and of holes ( p ) in a ( pure ) semiconductor must always be equal. semiconductors can be doped to increase these concentrations : n - doping increases the concentration of electrons while p - doping increases the concentration of holes. this also affects the fermi energy of the electrons : n - doped means a higher fermi energy, while p - doped means a lower energy. at the interface between an n - doped and p - doped region in a semiconductor, band bending will occur. due to the different charge distributions in the regions, an electric field will be induced, creating a so - called depletion region at the interface. similar interfaces also appear at junctions between ( doped ) semiconductors and other materials, such as metals / electrolytes. a way to counteract this band bending is by applying a potential to the system. this potential would have to be the flat band potential and is defined to
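The Mott-Schottky relation mentioned above is commonly written as follows (standard form for an n-type electrode; the symbols are defined here for orientation and are not quoted from the excerpt):

$$ \frac{1}{C_{sc}^{2}} = \frac{2}{e\,\varepsilon\,\varepsilon_{0}\,N_{D}}\left(E - E_{fb} - \frac{k_{B}T}{e}\right), $$

so plotting $1/C_{sc}^{2}$ against the applied potential $E$ yields the flat band potential $E_{fb}$ from the intercept.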
|
Total derivative
|
https://en.wikipedia.org/wiki?curid=1070326
| 3,471,953 |
in mathematics, the total derivative of a function at a point is the best linear approximation near this point of the function with respect to its arguments. unlike partial derivatives, the total derivative approximates the function with respect to all of its arguments, not just a single one. in many situations, this is the same as considering all partial derivatives simultaneously. the term " total derivative " is primarily used when is a function of several variables, because when is a function of a single variable, the total derivative is the same as the ordinary derivative of the function. let formula _ 1 be an open subset. then a function formula _ 2 is said to be ( totally ) differentiable at a point formula _ 3 if there exists a linear transformation formula _ 4 such that the linear map formula _ 6 is called the ( total ) derivative or ( total ) differential of formula _ 7 at formula _ 8. other notations for the total derivative include formula _ 9 and formula _ 10. a function is ( totally ) differentiable if its total derivative exists at every point in its domain. conceptually, the definition of the total derivative expresses the idea that formula _ 6 is the best linear approximation to formula _ 7 at the point formula _ 8. this can be made precise by quantifying the error in the linear approximation determined by formula _ 6. to do so, write where formula _ 16 equals the error in the approximation. to say that the derivative of formula _ 7 at formula _ 8 is formula _ 6 is equivalent to the statement where formula _ 21 is little - o notation and indicates that formula _ 16 is much smaller than formula _ 23 as formula _ 24. the total derivative formula _ 6 is the " unique " linear transformation for which the error term is this small, and this is the sense in which it is the best linear approximation to formula _ 7. the function formula _ 7 is differentiable if and only if each of its components formula _ 28 is differentiable, so when studying total derivatives, it is often possible to work one coordinate at a time in the codomain. however, the same is not true of the coordinates in the domain. it is true that if formula _ 7 is differentiable at formula _ 8, then each partial derivative formula _ 31 exists at formula _ 8. the converse is false : it can happen that all of the partial derivatives of formula _ 7 at formula _ 8 exist, but formula _ 7 is not differentiable at formula _ 8. this means that the function is very " rough " at formula _ 8
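In conventional notation (a sketch standing in for the stripped formula placeholders), differentiability at a point $a$ means

$$ f(a+h) = f(a) + Df_a(h) + o(\lVert h \rVert), $$

and when $f$ is differentiable the linear map $Df_a$ is represented, in coordinates, by the Jacobian matrix of partial derivatives

$$ Df_a = \begin{pmatrix} \dfrac{\partial f_1}{\partial x_1}(a) & \cdots & \dfrac{\partial f_1}{\partial x_n}(a) \\ \vdots & & \vdots \\ \dfrac{\partial f_m}{\partial x_1}(a) & \cdots & \dfrac{\partial f_m}{\partial x_n}(a) \end{pmatrix}. $$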
|
Initialization vector
|
https://en.wikipedia.org/wiki?curid=105971
| 4,794,158 |
called the block size. for example, a single invocation of the aes algorithm transforms a 128 - bit plaintext block into a ciphertext block of 128 bits in size. the key, which is given as one input to the cipher, defines the mapping between plaintext and ciphertext. if data of arbitrary length is to be encrypted, a simple strategy is to split the data into blocks each matching the cipher ' s block size, and encrypt each block separately using the same key. this method is not secure, as equal plaintext blocks get transformed into equal ciphertext blocks, and a third party observing the encrypted data may easily determine its content even without knowing the encryption key. to hide patterns in encrypted data while avoiding the re - issuing of a new key after each block cipher invocation, a method is needed to randomize the input data. in 1980, the nist published a national standard document designated federal information processing standard ( fips ) pub 81, which specified four so - called block cipher modes of operation, each describing a different solution for encrypting a set of input blocks. the first mode implements the simple strategy described above, and was specified as the electronic codebook ( ecb ) mode. in contrast, each of the other modes describes a process where ciphertext from one block encryption step gets intermixed with the data from the next encryption step. to initiate this process, an additional input value must be mixed with the first block ; this value is referred to as an " initialization vector ". for example, the cipher - block chaining ( cbc ) mode requires an unpredictable value, of size equal to the cipher ' s block size, as additional input. this unpredictable value is added ( by bitwise xor ) to the first plaintext block before that block is encrypted. in turn, the ciphertext produced in the first encryption step is added to the second plaintext block, and so on. the ultimate goal for encryption schemes is to provide semantic security : by this property, it is practically impossible for an attacker to draw any knowledge from observed ciphertext. it can be shown that each of the three additional modes specified by the nist is semantically secure under so - called chosen - plaintext attacks. properties of an iv depend on the cryptographic scheme used. a basic requirement is " uniqueness ", which means that no iv may be reused under the same key. for block ciphers, repeated iv values devolve the encryption scheme into electronic codebook mode : equal
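To make the chaining concrete, here is a minimal Python sketch of CBC-style encryption. The block transform is a deliberately insecure stand-in (a keyed hash truncated to the block size), used only so the example runs without external libraries; a real implementation would use AES or another vetted block cipher. The point it illustrates is that the IV, and then each ciphertext block, is XORed into the next plaintext block, so equal plaintext blocks no longer yield equal ciphertext blocks.

```python
import hashlib
import os

BLOCK_SIZE = 16  # bytes, matching the AES block size

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    """Stand-in for a real block cipher (NOT secure and not invertible):
    a keyed hash truncated to the block size, just to keep the sketch runnable."""
    return hashlib.sha256(key + block).digest()[:BLOCK_SIZE]

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_style_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    """CBC-style chaining: each plaintext block is XORed with the previous
    ciphertext block (the IV for the first block) before being encrypted."""
    assert len(plaintext) % BLOCK_SIZE == 0, "sketch assumes pre-padded input"
    previous = iv
    ciphertext = b""
    for i in range(0, len(plaintext), BLOCK_SIZE):
        mixed = xor_bytes(plaintext[i:i + BLOCK_SIZE], previous)
        previous = toy_block_encrypt(mixed, key)
        ciphertext += previous
    return ciphertext

key = os.urandom(16)
iv = os.urandom(BLOCK_SIZE)                     # fresh, unpredictable IV per message
message = b"equal blocks....equal blocks...."   # two identical 16-byte blocks
ct = cbc_style_encrypt(message, key, iv)
# Unlike ECB, the two ciphertext blocks differ even though the plaintext blocks are equal.
print(ct[:BLOCK_SIZE] != ct[BLOCK_SIZE:])       # True
```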
|
Ontology learning
|
https://en.wikipedia.org/wiki?curid=11053817
| 13,550,215 |
##yclic graphs ) is an ontology generation plugin for protege 4. 1 and oboedit 2. 1. it allows for term generation, sibling generation, definition generation, and relationship induction. integrated into protege 4. 1 and obo - edit 2. 1, dog4dag allows ontology extension for all common ontology formats ( e. g., owl and obo ). limited largely to ebi and bio portal lookup service extensions.
|
De Broglie–Bohm theory
|
https://en.wikipedia.org/wiki?curid=54717
| 4,274,691 |
and the particles ' evolutions are governed by the guiding equation. collapse only occurs in a phenomenological way for systems that seem to follow their own schrodinger ' s equation. as this is an effective description of the system, it is a matter of choice what one defines the experimental system to include, and this choice will affect when " collapse " occurs. in the standard quantum formalism, measuring observables is generally thought of as measuring operators on the hilbert space. for example, measuring position is considered to be a measurement of the position operator. this relationship between physical measurements and hilbert space operators is, for standard quantum mechanics, an additional axiom of the theory. the de broglie – bohm theory, by contrast, requires no such measurement axioms ( and measurement as such is not a dynamically distinct or special sub - category of physical processes in the theory ). in particular, the usual operators - as - observables formalism is, for de broglie – bohm theory, a theorem. a major point of the analysis is that many of the measurements of the observables do not correspond to properties of the particles ; they are ( as in the case of spin discussed above ) measurements of the wavefunction. in the history of de broglie – bohm theory, the proponents have often had to deal with claims that this theory is impossible. such arguments are generally based on inappropriate analysis of operators as observables. if one believes that spin measurements are indeed measuring the spin of a particle that existed prior to the measurement, then one does reach contradictions. de broglie – bohm theory deals with this by noting that spin is not a feature of the particle, but rather that of the wavefunction. as such, it only has a definite outcome once the experimental apparatus is chosen. once that is taken into account, the impossibility theorems become irrelevant. there have also been claims that experiments reject the bohm trajectories in favor of the standard qm lines. but as shown in other work, such experiments only disprove a misinterpretation of the de broglie – bohm theory, not the theory itself. there are also objections to this theory based on what it says about particular situations usually involving eigenstates of an operator. for example, the ground state of hydrogen is a real wavefunction. according to the guiding equation, this means that the electron is at rest when in
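The remark about the hydrogen ground state can be checked numerically. The sketch below assumes the standard one-dimensional form of the guiding equation, v(x) = (hbar/m) * Im(psi'(x)/psi(x)) (an explicit form supplied here, not quoted from the excerpt), and shows that a purely real wavefunction gives zero velocity while adding a phase gradient gives motion.

```python
import numpy as np

HBAR = 1.0  # work in natural units
MASS = 1.0

def guidance_velocity(psi, x):
    """Bohmian guidance velocity v(x) = (hbar/m) * Im(psi'(x) / psi(x)),
    evaluated with a finite-difference derivative on a grid."""
    dpsi_dx = np.gradient(psi, x)
    return (HBAR / MASS) * np.imag(dpsi_dx / psi)

x = np.linspace(-5.0, 5.0, 2001)
real_state = np.exp(-x**2)                           # real wavefunction, like the H ground state
moving_state = np.exp(-x**2) * np.exp(1j * 2.0 * x)  # same envelope with a phase gradient k = 2

print(np.max(np.abs(guidance_velocity(real_state, x))))    # 0.0: the particle is at rest
print(np.max(np.abs(guidance_velocity(moving_state, x))))  # ~2.0: velocity ~ hbar * k / m
```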
|
Biorefinery
|
https://en.wikipedia.org/wiki?curid=1637397
| 9,559,507 |
. the biorefinery utilizes source separated organics from the metro edmonton region, open pen feedlot manure, and food processing waste. chemrec ' s technology for black liquor gasification and production of second - generation biofuels such as biomethanol or biodme is integrated with a host pulp mill and utilizes a major sulfate or sulfite process waste product as feedstock. novamont has converted old petrochemical factories into biorefineries, producing protein, plastics, animal feed, lubricants, herbicides and elastomers from cardoon. c16 biosciences produces synthetic palm oil from carbon - containing waste ( i. e. food waste, glycerol ) by means of yeast. macrocascade aims to refine seaweed into food and fodder, and then products for healthcare, cosmetics, and fine chemicals industries. the side streams will be used for the production of fertilizer and biogas. other seaweed biorefinery projects include macroalgaebiorefinery ( mab4 ), searefinery and seafarm. fumi ingredients produces foaming agents, heat - set gels and emulsifiers from micro - algae with the help of micro - organisms such as brewer ' s yeast and baker ' s yeast. the biocon platform is researching the processing of wood into various products. more precisely, their researchers are looking at transforming lignin and cellulose into various products. lignin for example can be transformed into phenolic components which can be used to make glue, plastics and agricultural products ( e. g. crop protection ). cellulose can be transformed into clothes and packaging. in south africa, numbitrax llc bought a blume biorefinery system for producing bioethanol as well as additional high - return offtake products from local and readily available resources such as the prickly pear cactus. circular organics ( part of kempen insect valley ) grows black soldier fly larvae on waste from the agricultural and food industry ( i. e. fruit and vegetable surplus, remaining waste from fruit juice and jam production ). these larvae are used to produce protein, grease, and chitin. the grease is usable in the pharmaceutical industry ( cosmetics, surfactants for shower gel ), replacing other vegetable oils such as palm oil, or it can be used in fodder. biteback insect makes insect cooking oil, insect butter, fatty alcohols, insect fra
|
Robustness (evolution)
|
https://en.wikipedia.org/wiki?curid=31066305
| 17,318,068 |
in evolutionary biology, robustness of a biological system ( also called biological or genetic robustness ) is the persistence of a certain characteristic or trait in a system under perturbations or conditions of uncertainty. robustness in development is known as canalization. according to the kind of perturbation involved, robustness can be classified as mutational, environmental, recombinational, or behavioral robustness, etc. robustness is achieved through the combination of many genetic and molecular mechanisms and can evolve by either direct or indirect selection. several model systems have been developed to experimentally study robustness and its evolutionary consequences. mutational robustness ( also called mutation tolerance ) describes the extent to which an organism ' s phenotype remains constant in spite of mutation. robustness can be empirically measured for several genomes and individual genes by inducing mutations and measuring what proportion of mutants retain the same phenotype, function or fitness. more generally, robustness corresponds to the neutral band in the distribution of fitness effects of mutation ( i. e. the frequencies of different fitnesses of mutants ). proteins so far investigated have shown a tolerance to mutations of roughly 66 % ( i. e. two thirds of mutations are neutral ). conversely, measured mutational robustnesses of organisms vary widely. for example, > 95 % of point mutations in " c. elegans " have no detectable effect and even 90 % of single gene knockouts in " e. coli " are non - lethal. viruses, however, only tolerate 20 - 40 % of mutations and hence are much more sensitive to mutation. biological processes at the molecular scale are inherently stochastic. they emerge from a combination of stochastic events that happen given the physico - chemical properties of molecules. for instance, gene expression is intrinsically noisy. this means that two cells in exactly identical regulatory states will exhibit different mrna contents. the log - normal distribution of mrna content at the cell population level follows directly from the application of the central limit theorem to the multi - step nature of gene expression regulation. in varying environments, perfect adaptation to one condition may come at the expense of adaptation to another. consequently, the total selection pressure on an organism is the average selection across all environments weighted by the percentage of time spent in each environment. a variable environment can therefore select for environmental robustness, where organisms can function across a wide range of conditions with little change in phenotype or fitness. some organisms show adaptations to tolerate large changes in temperature, water availability, salinity or
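The operational definition above, robustness as the fraction of mutations falling in the neutral band of the distribution of fitness effects, can be written in a few lines of Python. The simulated distribution and the 1% neutrality threshold below are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def mutational_robustness(fitness_effects, neutral_band=0.01):
    """Fraction of mutations whose fitness effect lies inside a small
    'neutral band' around zero, i.e. an estimate of mutational robustness."""
    effects = np.asarray(fitness_effects)
    return float(np.mean(np.abs(effects) <= neutral_band))

# Hypothetical distribution of fitness effects: a large near-neutral class
# plus a tail of deleterious mutations (illustrative numbers only).
dfe = np.concatenate([
    rng.normal(loc=0.0, scale=0.003, size=700),      # effectively neutral
    -np.abs(rng.exponential(scale=0.2, size=300)),   # deleterious
])
print(f"estimated robustness: {mutational_robustness(dfe):.2f}")  # roughly 0.7, cf. ~66% for proteins
```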
|
Metatranscriptomics
|
https://en.wikipedia.org/wiki?curid=46204126
| 13,609,112 |
metatranscriptomics is the science that studies gene expression of microbes within natural environments, i. e., the metatranscriptome. it also makes it possible to obtain whole gene expression profiles of complex microbial communities. while metagenomics focuses on studying the genomic content and on identifying which microbes are present within a community, metatranscriptomics can be used to study the diversity of the active genes within such a community, to quantify their expression levels and to monitor how these levels change in different conditions ( e. g., physiological vs. pathological conditions in an organism ). the advantage of metatranscriptomics is that it can provide information about differences in the active functions of microbial communities that appear to be the same in terms of microbial composition. the microbiome has been defined as a microbial community occupying a well - defined habitat. microbiomes are ubiquitous and extremely relevant for the maintenance of the characteristics of the environment in which they reside, and an imbalance in these communities can negatively affect the activity of that setting. to study these communities, and to then determine their impact and correlation with their niche, different omics approaches have been used. while metagenomics yields a taxonomic profile of the sample, metatranscriptomics provides a functional profile by analysing which genes are expressed by the community. it is possible to infer what genes are expressed under specific conditions, and this can be done using functional annotations of expressed genes. since metatranscriptomics focuses on what genes are expressed, it reveals the active functional profile of the entire microbial community. the overview of the gene expression in a given sample is obtained by capturing the total mrna of the microbiome and by performing whole metatranscriptome shotgun sequencing. although microarrays can be exploited to determine the gene expression profiles of some model organisms, next - generation sequencing and third - generation sequencing are the preferred techniques in metatranscriptomics. the protocol that is used to perform a metatranscriptome analysis may vary depending on the type of sample that needs to be analysed. indeed, many different protocols have been developed for studying the metatranscriptome of microbial samples. generally, the steps include sample harvesting, rna extraction ( different extraction methods for different kinds of samples have been reported in the literature ), mrna enrichment, cdna synthesis and preparation of metatranscriptomic libraries, sequencing and data processing and analysis. the first
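As a small illustration of the data processing step mentioned at the end, the sketch below length-normalizes raw per-gene read counts to transcripts per million (TPM) and compares two conditions. The gene lengths and counts are hypothetical, and TPM is only one of several normalizations used in practice.

```python
import numpy as np

def counts_to_tpm(read_counts, gene_lengths_kb):
    """Convert raw read counts per gene to transcripts per million (TPM):
    divide by gene length, then rescale so each sample sums to one million."""
    counts = np.asarray(read_counts, dtype=float)
    lengths = np.asarray(gene_lengths_kb, dtype=float)
    per_kb = counts / lengths
    return per_kb / per_kb.sum() * 1e6

# Hypothetical counts for three genes under two conditions (not real data)
lengths_kb = [1.5, 0.8, 2.2]
condition_a = counts_to_tpm([300, 150, 900], lengths_kb)
condition_b = counts_to_tpm([300, 600, 900], lengths_kb)
log2_fold_change = np.log2(condition_b / condition_a)
print(np.round(log2_fold_change, 2))  # the second gene stands out as up-regulated
```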
|
Loop quantum gravity
|
https://en.wikipedia.org/wiki?curid=152664
| 2,856,601 |
at the vertex. this then naturally gives rise to the two - complex ( a combinatorial set of faces that join along edges, which in turn join on vertices ) underlying the spin foam description ; we evolve forward an initial spin network sweeping out a surface, and the action of the hamiltonian constraint operator is to produce a new planar surface starting at the vertex. we are able to use the action of the hamiltonian constraint on the vertex of a spin network state to associate an amplitude to each " interaction " ( in analogy to feynman diagrams ). see figure below. this opens up a way of trying to directly link canonical lqg to a path integral description. now just as spin networks describe quantum space, each configuration contributing to these path integrals, or sums over histories, describes ' quantum space - time '. because of their resemblance to soap foams and the way they are labeled, john baez gave these ' quantum space - times ' the name ' spin foams '. there are, however, severe difficulties with this particular approach. for example, the hamiltonian operator is not self - adjoint ; in fact, it is not even a normal operator ( i. e. the operator does not commute with its adjoint ), and so the spectral theorem cannot be used to define the exponential in general. the most serious problem is that the formula _ 177 ' s are not mutually commuting ; it can then be shown that the formal quantity formula _ 178 cannot even define a ( generalized ) projector. the master constraint ( see below ) does not suffer from these problems and as such offers a way of connecting the canonical theory to the path integral formulation. it turns out there are alternative routes to formulating the path integral ; however, their connection to the hamiltonian formalism is less clear. one way is to start with the bf theory. this is a simpler theory than general relativity : it has no local degrees of freedom and as such depends only on topological aspects of the fields. bf theory is what is known as a topological field theory. surprisingly, it turns out that general relativity can be obtained from bf theory by imposing a constraint. bf theory involves a field formula _ 179, and if one chooses the field formula _ 136 to be the ( anti - symmetric ) product of two tetrads ( tetrads are like triads but in four spacetime dimensions ), one recovers general relativity. the condition that the formula _ 136 field be given by the product of two tetrads is called the simplicity constraint. the spin foam
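For reference, the BF action and the simplicity constraint discussed above are commonly written as follows. This is the standard Plebanski-type form, supplied here as an assumption about notation rather than a quotation of the article's own formulas.

```latex
\[
  S_{BF}[B,\omega] \;=\; \int_{\mathcal{M}} B_{IJ} \wedge F^{IJ}[\omega],
\]
\[
  \text{simplicity constraint:}\qquad
  B^{IJ} \;=\; \pm\, e^{I} \wedge e^{J}
  \quad\text{or}\quad
  B^{IJ} \;=\; \pm\,\tfrac{1}{2}\,\epsilon^{IJ}{}_{KL}\, e^{K} \wedge e^{L}.
\]
```

Substituting the dual (epsilon) sector of the constraint back into the action reproduces the Palatini form of general relativity, which is the sense in which the constrained BF theory describes gravity.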
|