Wikipedia:Arnljot Høyland#0
Arnljot Høyland (19 February 1924 – 21 December 2002) was a Norwegian mathematical statistician. == Biography == Høyland was born in Bærum. He studied at the University of Oslo and later at the University of California, Berkeley in the USA. While a student he worked for the intelligence department at the Norwegian High Command as a military officer, holding the rank of Major. He lectured at the University of Oslo from 1959 to 1965, and then at the Norwegian Institute of Technology, eventually as a Professor of mathematical statistics. He published the two-volume textbook Sannsynlighetsregning og statistisk metodelære (Probability Theory and Statistical Methods) in 1972 and 1973. In 1944 Høyland composed the melody for Alf Prøysen's song "Julekveldsvise". He was decorated Knight, First Class of the Order of St. Olav in 1995. == References ==
Wikipedia:Arnold Dresden#0
Arnold Dresden (1882–1954) was a Dutch-American mathematician, known for his work in the calculus of variations and collegiate mathematics education. He was a president of the Mathematical Association of America and a member of the American Philosophical Society. == Background == Dresden was born in Amsterdam on November 23, 1882, into a wealthy banking family. After matriculating for three years at the University of Amsterdam he used tuition money in 1903 to book passage on a ship to New York City. He then traveled to Chicago to help a friend, arriving there on his 21st birthday. Two years later, after saving money from working at various jobs, he enrolled in the graduate program at the University of Chicago, where he earned his Ph.D. in 1909 under the direction of Oskar Bolza with the thesis The Second Derivatives of the Extremal Integral. == Research and teaching == Dresden taught at the University of Wisconsin 1909–1927. During this time he wrote several papers on the calculus of variations and systems of linear differential equations. He directed one doctoral dissertation. He was elected a Fellow of the American Association for the Advancement of Science in 1911. He was recruited to Swarthmore College by President Frank Aydelotte to initiate an honors program in mathematics that ended up being a model for other colleges and universities throughout the U.S. Dresden remained at the elite Quaker college until retiring in 1952; he was adored by many of his students. He was a Guggenheim Fellow for the academic years 1930–1931 and 1934–1935. In 1935–1936 he was on sabbatical at the Institute for Advanced Study, where he wrote An Invitation to Mathematics. He died on April 10, 1954, in Swarthmore, Pennsylvania, at age 71. While at Wisconsin, Arnold Dresden was active in, and served as secretary of, the Chicago Section of the American Mathematical Society. A charter member of the Mathematical Association of America, he was elected President for 1933–1934. He also served as Vice-President during 1931 and as a member of the Board of Governors for 1935–1940 and 1943–1945. His retiring presidential address, "A program for mathematics", encapsulated his deep concern about the place of mathematics in general culture and about the mathematical community's laissez-faire attitude toward the role it should play. A recurring theme was his belief that abstract concepts can be grasped by young people, which he preached in his 1936 book, An Invitation to Mathematics. He was also known as an ally to women in the field. He also wrote three textbooks and translated van der Waerden's classic Science Awakening from Dutch into English. == Articles == Dresden, Arnold (1908). "The second derivatives of the extremal-integral". Trans. Amer. Math. Soc. 9 (4): 467–486. doi:10.1090/s0002-9947-1908-1500822-8. MR 1500822. Dresden, Arnold (1916). "On the second derivatives of an extremal-integral with an application to a problem with variable end points". Trans. Amer. Math. Soc. 17 (4): 425–436. doi:10.1090/s0002-9947-1916-1501051-9. MR 1501051. Dresden, Arnold (1924). "Brouwer's contributions to the foundations of mathematics". Bull. Amer. Math. Soc. 30 (1–2): 31–40. doi:10.1090/s0002-9904-1924-03844-0. MR 1560837. Dresden, Arnold (1926). "Some recent work in the calculus of variations". Bull. Amer. Math. Soc. 32 (5): 475–521. doi:10.1090/s0002-9904-1926-04248-8. MR 1561253. Dresden, Arnold (1928). "Some philosophical aspects of mathematics". Bull. Amer. Math. Soc. 34 (4): 438–452. doi:10.1090/s0002-9904-1928-04560-3. 
MR 1561587. Dresden, Arnold (1933). "On the generalized Vandermonde determinant and symmetric functions". Bull. Amer. Math. Soc. 39 (6): 443–449. doi:10.1090/s0002-9904-1933-05664-1. MR 1562644. Dresden, Arnold (1942). "On the iteration of linear homogeneous transformations". Bull. Amer. Math. Soc. 48 (8): 577–579. doi:10.1090/s0002-9904-1942-07736-6. MR 0006984. == Books == Dresden, Arnold (1921). Plane trigonometry. John Wiley. Dresden, Arnold (1964) [1930]. Solid Analytical Geometry and Determinants. NY and London (1930): John Wiley and Chapman & Hall; (reprint) Dover. Dresden, Arnold (1936). An Invitation to Mathematics. H. Holt. Dresden, Arnold (1940). Introduction to the Calculus. H. Holt. Waerden, B. L., van der; English trans. Arnold Dresden (1954). Science Awakening. Noordhoff. == References == == External links == Rank and File American Mathematicians (pdf) by David Zitarelli Records of editors, presidents, and secretaries from MAA headquarters, Arnold Dresden, 1932-1950 at the Archives of American Mathematics from Texas Archival Resources Online
Wikipedia:Arnold's spectral sequence#0
In mathematics, Arnold's spectral sequence (also spelled Arnol'd) is a spectral sequence used in singularity theory and normal form theory as an efficient computational tool for reducing a function to canonical form near critical points. It was introduced by Vladimir Arnold in 1975. == Definition == == References ==
Wikipedia:Arnon Avron#0
Arnon Avron (Hebrew: ארנון אברון; born 1952) is an Israeli mathematician and Professor at the School of Computer Science at Tel Aviv University. His research focuses on applications of mathematical logic to computer science and artificial intelligence. == Biography == Born in Tel Aviv in 1952, Arnon Avron studied mathematics at Tel Aviv University and the Hebrew University of Jerusalem, receiving a Ph.D. magna cum laude from Tel Aviv University in 1985. Between 1986 and 1988, he was a visitor at the University of Edinburgh's Laboratory for Foundations of Computer Science, where he began his association with computer science. In 1988 he became a senior faculty member of the Department of Computer Science (later School of Computer Science) of Tel Aviv University, chairing the School in 1996–1998, and becoming a Full Professor in 1999. == Research == Avron's research interests include proof theory, automated reasoning, non-classical logics, foundations of mathematics. For example, using analytic geometry he proved the Mohr–Mascheroni theorem. In applying mathematical logic in computer science to artificial intelligence, Avron contributed to the theory of automated reasoning with his introduction of hypersequents, a generalization of the sequent calculus. Avron also introduced the use of bilattices to paraconsistent logic, and made contributions to predicative set theory and geometry. == Selected works == === Books === Avron, Arnon (2001). Introduction to Discrete Mathematics (in Hebrew). Tel Aviv: Tel Aviv University Press. Avron, Arnon (1998). Gödel's Theorems and the Problem of the Foundations of Mathematics. Broadcast University Series (in Hebrew). Israel: Ministry of Defense Publications. === Articles === Avron, Arnon (1996). "The method of hypersequents in the proof theory of propositional non-classical logics" (PDF). In Hodges, Wilfrid; Hyland, Martin; Steinhorn, Charles; Truss, John (eds.). Logic: From Foundations to Applications. New York: Clarendon Press. pp. 1–32. ISBN 978-0-19-853862-2. Avron, Arnon; Honsell, Furio; Mason, Ian A.; Pollack, Robert (1992). "Using typed lambda calculus to implement formal systems on a machine". Journal of Automated Reasoning. 9 (3): 309–354. doi:10.1007/BF00245294. S2CID 2528793. Avron, Arnon (1991). "Natural 3-valued logics—characterization and proof theory". The Journal of Symbolic Logic. 56 (1): 276–294. CiteSeerX 10.1.1.638.9332. doi:10.2307/2274919. JSTOR 2274919. S2CID 15084999. Avron, Arnon (1991). "Hypersequents, logical consequence and intermediate logics for concurrency". Annals of Mathematics and Artificial Intelligence. 4 (3–4): 225–248. doi:10.1007/BF01531058. S2CID 9610134. Avron, Arnon (1988). "The semantics and proof theory of linear logic". Theoretical Computer Science. 57 (2–3): 161–184. CiteSeerX 10.1.1.29.9. doi:10.1016/0304-3975(88)90037-0. == References ==
Wikipedia:Aron Simis#0
Aron Simis is a mathematician born in Recife, Brazil, in 1942. He is a full professor at the Universidade Federal de Pernambuco, Brazil, and a Class A research scholarship recipient from the Brazilian Research Council. He earned his PhD from Queen's University, Canada. He has previously held a full professorship at IMPA (Instituto de Matemática Pura e Aplicada) in Rio de Janeiro, Brazil. He was president of the Brazilian Mathematical Society and, on several occasions, a member of international commissions of the IMU (International Mathematical Union) and TWAS (Academy of Sciences for the Developing World). He has been director of three workshops in his field at the ICTP (Abdus Salam International Centre for Theoretical Physics). In Brazil he is a recipient of the National Medal for Scientific Merit in the Grã-Cruz (Grand Cross) class and a member of the Brazilian Research Group in Commutative Algebra and Algebraic Geometry (1997–2007). At large he is a John Simon Guggenheim Fellow and has been awarded other fellowships from the Max Planck Institute, the Japan Society for the Promotion of Science, and the Istituto Nazionale di Alta Matematica. He is a member both of the Brazilian Academy of Sciences and the Academy of Sciences for the Developing World (Trieste, Italy). His main research interests in mathematics include: main structures in commutative algebra; projective varieties in algebraic geometry; aspects of algebraic combinatorics; special graded algebras; foundations of Rees algebras; Cremona and birational maps; algebraic vector fields; differential methods. Simis is of Romanian origin; his parents immigrated to Brazil from Romania in the 1920s. == References ==
Wikipedia:Arran Fernandez#0
Arran Fernandez (born June 1995) is a British mathematician who, in June 2013, became Senior Wrangler at Cambridge University, aged 18 years and 0 months. He is thought to be the youngest Senior Wrangler ever. == Biography == Prior to university, Fernandez was educated at home, predominantly by his father, Neil Fernandez. In 2001 he broke the age record for gaining a General Certificate of Secondary Education (GCSE), the English academic qualification usually taken at age 16, sitting the Mathematics examinations aged five. In 2003 he became the youngest person ever to gain an A* grade at GCSE, also for Mathematics. In October 2010, when Fernandez began studying the Cambridge Mathematical Tripos aged 15 years and 3 months, he was the youngest Cambridge University undergraduate since William Pitt the Younger in 1773. Fernandez believes it was his exceptional environment, rather than exceptional nature, that enabled him to achieve his academic successes. "Everything I achieved is because of my education and the opportunities I had. And the big part of my story is that I never went to school. My parents never believed in the official education system." In a 2020 interview with Raidió Teilifís Éireann he stated his opinion that a large number of people could achieve at the same level if they had the same opportunities as he did, and that those opportunities "would have to start at a very young age", such as at two years old. Starting in 2000 (aged five) Fernandez had several sequences published in the On-Line Encyclopedia of Integer Sequences (OEIS), the number theory database established by Neil Sloane. Since 2017, he has had more than 20 mathematical research articles published in peer-reviewed international journals. Television work featuring Fernandez has included an appearance as a "Person of the Week" on Frank Elstner's talk show on German TV in 2001, and an appearance on Terry Wogan's and Gaby Roslin's The Terry and Gaby Show on British TV in 2003, when he beat mathematics popularizer Johnny Ball in a live mental arithmetic contest, successfully extracting the fifth roots of several large integers. In September 2018, having completed master's and doctoral degrees at the University of Cambridge, Fernandez joined the faculty of the Eastern Mediterranean University in Northern Cyprus as an assistant professor of mathematics, where in 2022 he became an associate professor. His main research areas are in fractional calculus and analytic number theory. == References ==
Wikipedia:Arthur Preston Mellish#0
Arthur Preston Mellish (10 June 1905 – 7 February 1930) was a Canadian mathematician, known for his generalization of Barbier's theorem. Arthur Mellish received an M.A. in mathematics from the University of British Columbia in 1928, with the thesis An illustrative example of the ellipsoid pendulum. He died at age 24 and had no mathematical publications during his lifetime. After his death, his colleagues at Brown University examined his notes on mathematics. Jacob Tamarkin prepared a paper based upon the notes and published it in the Annals of Mathematics in 1931. In the statement of the following theorem, an oval means a closed convex curve. Mellish's Theorem: The statements (i) a curve is of constant width; (ii) a curve is of constant diameter; (iii) all the normals of a curve (an oval) are double; and (iv) the sum of the radii of curvature at opposite points of a curve (an oval) is constant, are equivalent, in the sense that whenever one of the statements (i)–(iv) is true, all the other statements also hold. (v) All curves of the same (constant) width a have the same length L, given by L = πa (Barbier's theorem). == References ==
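A quick numerical illustration of statement (v); this sketch, and the choice of the Reuleaux triangle as the standard nontrivial curve of constant width, are not taken from Mellish's notes or Tamarkin's paper. It approximates the boundary of a Reuleaux triangle of width a by a fine polygon and compares its perimeter with πa.

```python
import math

def reuleaux_boundary(width, samples_per_arc=100_000):
    """Sample points along the boundary of a Reuleaux triangle of the given width."""
    a = width
    verts = [(0.0, 0.0), (a, 0.0), (a / 2.0, a * math.sqrt(3.0) / 2.0)]
    pts = []
    for i in range(3):
        cx, cy = verts[i]            # each boundary arc is centred at one vertex...
        px, py = verts[(i + 1) % 3]  # ...starts at a second vertex...
        qx, qy = verts[(i + 2) % 3]  # ...and ends at the third; its radius is the width
        t0 = math.atan2(py - cy, px - cx)
        t1 = math.atan2(qy - cy, qx - cx)
        d = (t1 - t0 + math.pi) % (2 * math.pi) - math.pi  # signed sweep, here +pi/3
        pts += [(cx + a * math.cos(t0 + d * k / samples_per_arc),
                 cy + a * math.sin(t0 + d * k / samples_per_arc))
                for k in range(samples_per_arc)]
    return pts

def closed_polyline_length(pts):
    """Perimeter of the closed polyline through the sampled points."""
    return sum(math.dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))

a = 1.7  # an arbitrary width
print(closed_polyline_length(reuleaux_boundary(a)), math.pi * a)  # both ~ 5.3407
```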
Wikipedia:Arthur Stanley Mackenzie#0
Arthur Stanley Mackenzie (September 20, 1865 – October 2, 1938) was a Canadian physicist and university president. He was born in Pictou, Nova Scotia and educated at Dalhousie University, Halifax, and Johns Hopkins University. He was instructor in mathematics at Dalhousie from 1887 to 1889. At Bryn Mawr College, Pennsylvania, he was a lecturer and associate in physics (1891–92), associate professor (1894–97), and professor (1897–1905). Mackenzie then returned to Dalhousie to become a Munro professor of physics (1905–10). In 1911, he became president of the university, succeeding John Forrest. Mackenzie was made a Fellow of the Royal Society of Canada in 1908 and was elected a member of the Nova Scotia Institute of Science, of the American Physical Society, and of the American Philosophical Society. His scientific papers were published in the Physical Review, Journal of the Franklin Institute, and Proceedings of the American Philosophical Society. He also translated and edited a collection of memoirs on The Laws of Gravitation (1900). == References == == External links == This article incorporates text from a publication now in the public domain: Gilman, D. C.; Peck, H. T.; Colby, F. M., eds. (1905). New International Encyclopedia (1st ed.). New York: Dodd, Mead.
Wikipedia:Arthur's conjectures#0
In mathematics, the Arthur conjectures refer to a set of conjectures proposed by James Arthur in 1989. These conjectures pertain to the properties of automorphic representations of reductive groups over adele rings and the unitary representations of reductive groups over local fields. Arthur’s work, which was motivated by the Arthur–Selberg trace formula, suggests a framework for understanding complex relationships in these areas. Arthur's conjectures have implications for other mathematical theories, notably implying the generalized Ramanujan conjectures for cusp forms on general linear groups. The Ramanujan conjectures, in turn, are central to the study of automorphic forms, as they predict specific behaviors of certain classes of mathematical functions known as cusp forms. To better understand the Arthur conjectures, familiarity with automorphic forms and reductive groups is useful, as is knowledge of the trace formula developed by Arthur and Atle Selberg. These mathematical tools allow for analysis of representations of groups in number theory, geometry, and physics. == References ==
Wikipedia:Artin's constant#0
In number theory, Artin's conjecture on primitive roots states that a given integer a that is neither a square number nor −1 is a primitive root modulo infinitely many primes p. The conjecture also ascribes an asymptotic density to these primes. This conjectural density equals Artin's constant or a rational multiple thereof. The conjecture was made by Emil Artin to Helmut Hasse on September 27, 1927, according to the latter's diary. The conjecture is still unresolved as of 2025. In fact, there is no single value of a for which Artin's conjecture is proved. == Formulation == Let a be an integer that is not a square number and not −1. Write a = a₀b² with a₀ square-free. Denote by S(a) the set of prime numbers p such that a is a primitive root modulo p. Then the conjecture states S(a) has a positive asymptotic density inside the set of primes. In particular, S(a) is infinite. Under the conditions that a is not a perfect power and a₀ is not congruent to 1 modulo 4 (sequence A085397 in the OEIS), this density is independent of a and equals Artin's constant, which can be expressed as an infinite product {\displaystyle C_{\mathrm {Artin} }=\prod _{p\ \mathrm {prime} }\left(1-{\frac {1}{p(p-1)}}\right)=0.3739558136\ldots } (sequence A005596 in the OEIS). The positive integers satisfying these conditions are: 2, 3, 6, 7, 10, 11, 12, 14, 15, 18, 19, 22, 23, 24, 26, 28, 30, 31, 34, 35, 38, 39, 40, 42, 43, 44, 46, 47, 48, 50, 51, 54, 55, 56, 58, 59, 60, 62, 63, … (sequence A085397 in the OEIS) The negative integers satisfying these conditions are: 2, 4, 5, 6, 9, 10, 13, 14, 16, 17, 18, 20, 21, 22, 24, 25, 26, 29, 30, 33, 34, 36, 37, 38, 40, 41, 42, 45, 46, 49, 50, 52, 53, 54, 56, 57, 58, 61, 62, … (sequence A120629 in the OEIS) Similar conjectural product formulas exist for the density when a does not satisfy the above conditions. In these cases, the conjectural density is always a rational multiple of C_Artin. If a is a square number or a = −1, then the density is 0; more generally, if a is a perfect pth power for prime p, then the density needs to be multiplied by {\displaystyle {\frac {p(p-2)}{p^{2}-p-1}}}; if there is more than one such prime p, then the density needs to be multiplied by {\displaystyle {\frac {p(p-2)}{p^{2}-p-1}}} for all such primes p. Similarly, if a₀ is congruent to 1 mod 4, then the density needs to be multiplied by {\displaystyle {\frac {p(p-1)}{p^{2}-p-1}}} for all prime factors p of a₀. == Examples == For example, take a = 2. The conjecture is that the set of primes p for which 2 is a primitive root has the density C_Artin. The set of such primes is (sequence A001122 in the OEIS) S(2) = {3, 5, 11, 13, 19, 29, 37, 53, 59, 61, 67, 83, 101, 107, 131, 139, 149, 163, 173, 179, 181, 197, 211, 227, 269, 293, 317, 347, 349, 373, 379, 389, 419, 421, 443, 461, 467, 491, ...}. It has 38 elements smaller than 500 and there are 95 primes smaller than 500. The ratio (which conjecturally tends to C_Artin) is 38/95 = 2/5 = 0.4. For a = 8 = 2³, which is a power of 2, the conjectured density is {\displaystyle {\frac {3}{5}}C}, and for a = 5, which is congruent to 1 mod 4, the density is {\displaystyle {\frac {20}{19}}C}. == Partial results == In 1967, Christopher Hooley published a conditional proof for the conjecture, assuming certain cases of the generalized Riemann hypothesis. 
Without the generalized Riemann hypothesis, there is no single value of a for which Artin's conjecture is proved. However, D. R. Heath-Brown proved in 1986 (Corollary 1) that at least one of 2, 3, or 5 is a primitive root modulo infinitely many primes p. He also proved (Corollary 2) that there are at most two primes for which Artin's conjecture fails. == Some variations of Artin's problem == === Elliptic curve === For an elliptic curve {\displaystyle E} given by {\displaystyle y^{2}=x^{3}+ax+b}, Lang and Trotter gave a conjecture for rational points on {\displaystyle E(\mathbb {Q} )} analogous to Artin's primitive root conjecture. Specifically, they said there exists a constant {\displaystyle C_{E}} for a given point of infinite order {\displaystyle P} in the set of rational points {\displaystyle E(\mathbb {Q} )} such that the number {\displaystyle N(P)} of primes {\displaystyle p\leq x} for which the reduction of the point {\displaystyle P{\pmod {p}}}, denoted by {\displaystyle {\bar {P}}}, generates the whole group of points {\displaystyle {\bar {E}}(\mathbb {F_{p}} )} of the reduced curve over {\displaystyle \mathbb {F_{p}} }, is given by {\displaystyle N(P)\sim C_{E}\left({\frac {x}{\log x}}\right)}. Here we exclude the primes which divide the denominators of the coordinates of {\displaystyle P}. Gupta and Murty proved the Lang and Trotter conjecture for {\displaystyle E/\mathbb {Q} } with complex multiplication under the Generalized Riemann Hypothesis, for primes splitting in the relevant imaginary quadratic field. === Even order === Krishnamurty posed the question of how often the period of the decimal expansion {\displaystyle 1/p} of a prime {\displaystyle p} is even. The claim is that the period of the expansion of {\displaystyle 1/p} in base {\displaystyle g} is even if and only if {\displaystyle g^{\left({\frac {p-1}{2^{j}}}\right)}\not \equiv 1{\bmod {p}}}, where {\displaystyle j\geq 1} is the unique integer such that {\displaystyle p\equiv 1+2^{j}{\bmod {2^{j+1}}}}. The result was proven by Hasse in 1966. == See also == Stephens' constant, a number that plays the same role in a generalization of Artin's conjecture as Artin's constant plays here Brown–Zassenhaus conjecture Full reptend prime Cyclic number (group theory) == References ==
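The conjectural densities above are easy to probe numerically. The following Python sketch is an illustration, not part of the article; the bound of 100,000 is an arbitrary choice. It reproduces the 38-out-of-95 count for a = 2 below 500, evaluates a partial product for Artin's constant, and compares observed frequencies for a = 2, 8 and 5 with the conjectured densities C_Artin, (3/5)·C_Artin and (20/19)·C_Artin.

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [p for p, flag in enumerate(sieve) if flag]

def prime_factors(n):
    """Set of distinct prime factors of n (trial division)."""
    out, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            out.add(d)
            n //= d
        d += 1
    if n > 1:
        out.add(n)
    return out

def is_primitive_root(a, p):
    # a generates (Z/pZ)* iff a^((p-1)/q) != 1 (mod p) for every prime q dividing p-1
    if a % p == 0:
        return False
    return all(pow(a, (p - 1) // q, p) != 1 for q in prime_factors(p - 1))

C_ARTIN = 0.3739558136

primes = primes_up_to(100_000)
partial_product = math.prod(1 - 1 / (p * (p - 1)) for p in primes)
print(partial_product)                          # ~0.374, approaching Artin's constant

small = [p for p in primes if p < 500]
hits = sum(is_primitive_root(2, p) for p in small)
print(hits, len(small), hits / len(small))      # 38 95 0.4, as in the example above

for a, predicted in [(2, C_ARTIN), (8, 3 / 5 * C_ARTIN), (5, 20 / 19 * C_ARTIN)]:
    observed = sum(is_primitive_root(a, p) for p in primes) / len(primes)
    print(a, observed, predicted)                # observed fraction vs conjectural density
```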
Wikipedia:Artin–Mazur zeta function#0
In mathematics, the Artin–Mazur zeta function, named after Michael Artin and Barry Mazur, is a function that is used for studying the iterated functions that occur in dynamical systems and fractals. It is defined from a given function {\displaystyle f} as the formal power series {\displaystyle \zeta _{f}(z)=\exp \left(\sum _{n=1}^{\infty }{\bigl |}\operatorname {Fix} (f^{n}){\bigr |}{\frac {z^{n}}{n}}\right),} where {\displaystyle \operatorname {Fix} (f^{n})} is the set of fixed points of the {\displaystyle n}th iterate of the function {\displaystyle f}, and {\displaystyle |\operatorname {Fix} (f^{n})|} is the number of fixed points (i.e. the cardinality of that set). Note that the zeta function is defined only if the set of fixed points is finite for each {\displaystyle n}. This definition is formal in that the series does not always have a positive radius of convergence. The Artin–Mazur zeta function is invariant under topological conjugation. The Milnor–Thurston theorem states that the Artin–Mazur zeta function of an interval map {\displaystyle f} is the inverse of the kneading determinant of {\displaystyle f}. == Analogues == The Artin–Mazur zeta function is formally similar to the local zeta function, when a diffeomorphism on a compact manifold replaces the Frobenius mapping for an algebraic variety over a finite field. The Ihara zeta function of a graph can be interpreted as an example of the Artin–Mazur zeta function. == See also == Lefschetz number Lefschetz zeta-function == References == Artin, Michael; Mazur, Barry (1965), "On periodic points", Annals of Mathematics, Second Series, 81 (1): 82–99, doi:10.2307/1970384, ISSN 0003-486X, JSTOR 1970384, MR 0176482 Ruelle, David (2002), "Dynamical zeta functions and transfer operators" (PDF), Notices of the American Mathematical Society, 49 (8): 887–895, MR 1920859 Kotani, Motoko; Sunada, Toshikazu (2000), "Zeta functions of finite graphs", J. Math. Sci. Univ. Tokyo, 7: 7–25, CiteSeerX 10.1.1.531.9769 Terras, Audrey (2010), Zeta Functions of Graphs: A Stroll through the Garden, Cambridge Studies in Advanced Mathematics, vol. 128, Cambridge University Press, ISBN 978-0-521-11367-0, Zbl 1206.05003
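As a concrete illustration (the circle-doubling map is a standard example and is not discussed in this article): for f(x) = 2x mod 1, the nth iterate has 2^n − 1 fixed points, so the series above exponentiates to (1 − z)/(1 − 2z). A short check of this closed form, assuming sympy is available:

```python
import sympy as sp

z = sp.symbols('z')
N = 8  # truncation order for the formal power series

# For the circle-doubling map f(x) = 2x mod 1, |Fix(f^n)| = 2**n - 1.
log_zeta = sum((2**n - 1) * z**n / n for n in range(1, N))
zeta = sp.series(sp.exp(log_zeta), z, 0, N).removeO()

closed_form = sp.series((1 - z) / (1 - 2 * z), z, 0, N).removeO()
print(sp.expand(zeta - closed_form))  # 0: the truncated series agree term by term
```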
Wikipedia:Arto Salomaa#0
Arto Kustaa Salomaa (6 June 1934 – 26 January 2025) was a Finnish mathematician and computer scientist. His research career, which spanned over 40 years, was focused on formal languages and automata theory. == Early life and education == Salomaa was born in Turku, Finland on 6 June 1934. He earned a Bachelor's degree from the University of Turku in 1954 and a PhD from the same university in 1960. Salomaa's father was a professor of philosophy at the University of Turku. Salomaa was introduced to the theory of automata and formal languages during seminars at Berkeley given by John Myhill in 1957. == Career == In 1965 Salomaa became a professor of mathematics at the University of Turku, a position he retired from in 1999. He also spent two years in the late 1960s at the University of Western Ontario in London, Ontario, Canada, and two years in the 1970s at Aarhus University in Aarhus, Denmark. Salomaa was president of the European Association for Theoretical Computer Science from 1979 until 1985. == Publications == Salomaa authored or co-authored 46 textbooks, including Theory of Automata (1969), Formal Languages (1973), The Mathematical Theory of L-Systems (1980, with Grzegorz Rozenberg), Jewels of Formal Language Theory (1981) Public-Key Cryptography (1990) and DNA Computing (1998, with Grzegorz Rozenberg and Gheorghe Paun). With Rozenberg, Salomaa edited the Handbook of Formal Languages (1997), a 3-volume, 2000-page reference on formal language theory. These books have often become standard references in their respective areas. For example, Formal Languages was reported in 1991 to be among the 100 most cited texts in mathematics. Salomaa published over 400 articles in scientific journals during his professional career. He also authored non-scientific articles such as "What computer scientists should know about sauna". From his retirement until 2014, Salomaa published over 100 scientific articles. == Personal life and death == Salomaa married in 1959. He had two children, Kirsti and Kai, the latter of whom is a professor of Computer Science at Queen's University at Kingston and also works in the field of formal languages and automata theory. Salomaa died on 26 January 2025, at the age of 90. The Research Council of Finland reported his death two days later in a press release, on 28 January. == Awards and recognition == Salomaa was awarded the title of Academician by the Academy of Finland, one of twelve living Finnish individuals awarded the title. He also received the EATCS Award in 2004. Salomaa received seven honorary degrees. On 13 June 2013, Salomaa was awarded a Doctor Honoris Causa from the University of Western Ontario. == References == == External links == Arto Salomaa at the Mathematics Genealogy Project Arto Salomaa home page Arto Salomaa at the Academia Europaea Arto Salomaa publications indexed by Microsoft Academic
Wikipedia:Arturo Reghini#0
Arturo Reghini (12 November 1878 – 1 July 1946) was an Italian mathematician, philosopher and esotericist. == Biography == Arturo Reghini was born in Florence on 12 November 1878. In 1898, he became a member of the Theosophical Society, for which he founded a section in Rome. In 1903, he published in Palermo the first books of the editorial series named Biblioteca Teosofica (Theosophical Library), later renamed Biblioteca Filosofica (Philosophical Library). In the same year, he was initiated into the Memphis rite, a Masonic spiritual path derived from the ancient Egyptians and, in Italy, practised only in Palermo. In 1907, he was admitted to the regular Scottish Rite Masonic Lodge "Lucifero" in Florence, affiliated to the Grand Orient of Italy. Subsequently, Reghini adhered for a short period to the Martinism of Gérard Encausse and began to denounce the errors of the lawyer and Grand Master Sacchi in his administration of Italian Freemasonry, also refuting his publications. In 1907, Amedeo Rocco Armentano introduced Reghini to Pythagoreanism. In 1912, Reghini joined the directorate of Italian Freemasonry (in Italian, the Supremo Consiglio Universale of the Rito Filosofico Italiano), from which he resigned in 1940 with a strongly negative judgement of the national brotherhood. In 1921, he was initiated into the 33rd and highest degree of the Scottish Rite. He was then elected an effective member of the Supremo Consiglio d'Italia, of which he became Grand Commander and General Secretary. In 1925, Reghini signed the internal decree No. 245 concerning its dissolution; on May 19 of that year, the Italian Parliament had approved the law reforming freedom of association, which banned Masonic lodges from the country. Reghini edited the journals Atanór (1924) and Ignis (1925), devoted to initiatic studies and covering topics such as Pythagoreanism, yoga, Hebrew Cabalism and the Freemasonry of Alessandro Cagliostro. A circle of esotericists formed around these journals and adopted the name Gruppo di Ur. The group's members included Julius Evola and the anthroposophists Giovanni Colazza and Giovanni Antonio Colonna di Cesarò. From 1927 to 1928 the group published the monthly journal UR. Reghini fell out with Evola and the Ur group in 1928; a major reason was Reghini's support for Freemasonry, which was not in line with the direction the journal had taken. Reghini left the editorial board and UR was discontinued. It was briefly replaced in 1929 by a journal named Krur, without Reghini's involvement. Reghini was opposed to Christianity, which he associated with modernity and egalitarianism, and sought to establish a form of modern Paganism he called "magia colta" ("cultured magic"), which he drew from Hermeticism and Platonism. A critic of democracy and an advocate for the ancient Roman aristocracy, Reghini welcomed the rise of Italian Fascism, which he associated with the ancient world. He wrote in Atanór in 1924 that he had anticipated the emergence of such a regime in Italy 15 years prior. From the second half of the 1920s, he wrote critically about clerical fascism and the increasing fascist hostility towards non-Catholic religious views. He adopted an ironic writing style associated with the anti-clericalism of the era before World War I and the Risorgimento. Reghini died in Budrio on 1 July 1946. == Legacy == Reghini was an important influence on Evola during the years 1924 to 1930. 
He introduced Evola to the major texts on alchemy, which became the basis for Evola's book The Hermetic Tradition (1931). It was also through Reghini that Evola came in contact with René Guénon, whose Traditionalism would have a profound impact on his thinking. Reghini's journals and the works of the Ur group have influenced the development of Italic-Roman neopaganism and Roman polytheistic reconstructionism. == See also == Occult Imperium UR Group Giustiniano Lebano Julius Evola Giuliano Kremmerz Roman way to the gods == Bibliography == Le parole sacre e di passo dei primi tre gradi ed il massimo mistero massonico, Atanor, Rome, 1922. Per la restituzione della geometria pitagorica (1935); new edition Il Basilisco, Genoa, 1988, which also includes I numeri sacri nella tradizione pitagorica; new title Numeri sacri e geometria pitagorica. Il fascio littorio, ovvero il simbolismo duodecimale e il fascio etrusco (1935); new edition Il Basilisco, Genoa, 1980. Dei Numeri pitagorici (Libri sette) (1940) – Prologo – Associazione culturale Ignis, 2004. Dei Numeri Pitagorici (Libri sette) – Parte Prima – Volume Primo – Dell'equazione indeterminata di secondo grado con due incognite – Archè/pizeta, 2006. Dei Numeri Pitagorici (Libri sette) – Parte Prima – Volume Secondo – Delle soluzioni primitive dell'equazione di tipo Pell x2 − Dy2 = B e del loro numero – Archè/pizeta, 2012. Dizionario Filologico, ("Associazione culturale Ignis"), 2008. Cagliostro, ("Associazione culturale Ignis"), 2007. Considerazioni sul Rituale dell'apprendista libero muratore, Phoenix, Genoa, 1978. Paganesimo, Pitagorismo, Massoneria, Mantinea, Furnari (Messina), 1986. Per la restituzione della Massoneria Pitagorica Italiana, introduction by Vinicio Serino, Raffaelli Editore, Rimini, 2005, ISBN 88-89642-01-7 La Tradizione Pitagorica Massonica, Fratelli Melita Editori, Genoa, 1988, ISBN 88-403-9155-X Trascendenza di Spazio e Tempo, "Mondo Occulto", Napoli, 1926, reprint Libreria Ed. ASEQ 2010. Selected translations with introductions and annotations: De occulta philosophia by Heinrich Cornelius Agrippa (Alberto Fidi, Milan, 1926; two volumes); reprinted by Edizioni Mediterranee and I Dioscuri, Genoa, 1988. Le Roi du Monde by René Guénon (Alberto Fidi editore, Milan, 1927). == References == == Further reading == Giudice, Christian (14 October 2016). Occultism and Traditionalism: Arturo Reghini and the Antimodern Reaction in Early Twentieth-Century Italy (PhD). University of Gothenburg. Retrieved 22 October 2019.
Wikipedia:Aryabhatiya#0
Aryabhata ( ISO: Āryabhaṭa) or Aryabhata I (476–550 CE) was the first of the major mathematician-astronomers from the classical age of Indian mathematics and Indian astronomy. His works include the Āryabhaṭīya (which mentions that in 3600 Kali Yuga, 499 CE, he was 23 years old) and the Arya-siddhanta. For his explicit mention of the relativity of motion, he also qualifies as a major early physicist. == Biography == === Name === While there is a tendency to misspell his name as "Aryabhatta" by analogy with other names having the "bhatta" suffix, his name is properly spelled Aryabhata: every astronomical text spells his name thus, including Brahmagupta's references to him "in more than a hundred places by name". Furthermore, in most instances "Aryabhatta" would not fit the metre either. === Time and place of birth === Aryabhata mentions in the Aryabhatiya that he was 23 years old 3,600 years into the Kali Yuga, but this is not to mean that the text was composed at that time. This mentioned year corresponds to 499 CE, and implies that he was born in 476. Aryabhata called himself a native of Kusumapura or Pataliputra (present day Patna, Bihar). ==== Other hypothesis ==== Bhāskara I describes Aryabhata as āśmakīya, "one belonging to the Aśmaka country." During the Buddha's time, a branch of the Aśmaka people settled in the region between the Narmada and Godavari rivers in central India. It has been claimed that the aśmaka (Sanskrit for "stone") where Aryabhata originated may be the present day Kodungallur which was the historical capital city of Thiruvanchikkulam of ancient Kerala. This is based on the belief that Koṭuṅṅallūr was earlier known as Koṭum-Kal-l-ūr ("city of hard stones"); however, old records show that the city was actually Koṭum-kol-ūr ("city of strict governance"). Similarly, the fact that several commentaries on the Aryabhatiya have come from Kerala has been used to suggest that it was Aryabhata's main place of life and activity; however, many commentaries have come from outside Kerala, and the Aryasiddhanta was completely unknown in Kerala. K. Chandra Hari has argued for the Kerala hypothesis on the basis of astronomical evidence. Aryabhata mentions "Lanka" on several occasions in the Aryabhatiya, but his "Lanka" is an abstraction, standing for a point on the equator at the same longitude as his Ujjayini. === Education === It is fairly certain that, at some point, he went to Kusumapura for advanced studies and lived there for some time. Both Hindu and Buddhist tradition, as well as Bhāskara I (CE 629), identify Kusumapura as Pāṭaliputra, modern Patna. A verse mentions that Aryabhata was the head of an institution (kulapa) at Kusumapura, and, because the university of Nalanda was in Pataliputra at the time, it is speculated that Aryabhata might have been the head of the Nalanda university as well. Aryabhata is also reputed to have set up an observatory at the Sun temple in Taregana, Bihar. == Works == Aryabhata is the author of several treatises on mathematics and astronomy, though Aryabhatiya is the only one which survives. Much of the research included subjects in astronomy, mathematics, physics, biology, medicine, and other fields. Aryabhatiya, a compendium of mathematics and astronomy, was referred to in the Indian mathematical literature and has survived to modern times. The mathematical part of the Aryabhatiya covers arithmetic, algebra, plane trigonometry, and spherical trigonometry. 
It also contains continued fractions, quadratic equations, sums-of-power series, and a table of sines. The Arya-siddhanta, a lost work on astronomical computations, is known through the writings of Aryabhata's contemporary, Varahamihira, and later mathematicians and commentators, including Brahmagupta and Bhaskara I. This work appears to be based on the older Surya Siddhanta and uses the midnight-day reckoning, as opposed to sunrise in Aryabhatiya. It also contained a description of several astronomical instruments: the gnomon (shanku-yantra), a shadow instrument (chhAyA-yantra), possibly angle-measuring devices, semicircular and circular (dhanur-yantra / chakra-yantra), a cylindrical stick yasti-yantra, an umbrella-shaped device called the chhatra-yantra, and water clocks of at least two types, bow-shaped and cylindrical. A third text, which may have survived in the Arabic translation, is Al ntf or Al-nanf. It claims that it is a translation by Aryabhata, but the Sanskrit name of this work is not known. Probably dating from the 9th century, it is mentioned by the Persian scholar and chronicler of India, Abū Rayhān al-Bīrūnī. === Aryabhatiya === Direct details of Aryabhata's work are known only from the Aryabhatiya. The name "Aryabhatiya" is due to later commentators. Aryabhata himself may not have given it a name. His disciple Bhaskara I calls it Ashmakatantra (or the treatise from the Ashmaka). It is also occasionally referred to as Arya-shatas-aShTa (literally, Aryabhata's 108), because there are 108 verses in the text. It is written in the very terse style typical of sutra literature, in which each line is an aid to memory for a complex system. Thus, the explication of meaning is due to commentators. The text consists of the 108 verses and 13 introductory verses, and is divided into four pādas or chapters: Gitikapada: (13 verses): large units of time—kalpa, manvantra, and yuga—which present a cosmology different from earlier texts such as Lagadha's Vedanga Jyotisha (c. 1st century BCE). There is also a table of sines (jya), given in a single verse. The duration of the planetary revolutions during a mahayuga is given as 4.32 million years. Ganitapada (33 verses): covering mensuration (kṣetra vyāvahāra), arithmetic and geometric progressions, gnomon / shadows (shanku-chhAyA), simple, quadratic, simultaneous, and indeterminate equations (kuṭṭaka). Kalakriyapada (25 verses): different units of time and a method for determining the positions of planets for a given day, calculations concerning the intercalary month (adhikamAsa), kShaya-tithis, and a seven-day week with names for the days of week. Golapada (50 verses): Geometric/trigonometric aspects of the celestial sphere, features of the ecliptic, celestial equator, node, shape of the earth, cause of day and night, rising of zodiacal signs on horizon, etc. In addition, some versions cite a few colophons added at the end, extolling the virtues of the work, etc. The Aryabhatiya presented a number of innovations in mathematics and astronomy in verse form, which were influential for many centuries. The extreme brevity of the text was elaborated in commentaries by his disciple Bhaskara I (Bhashya, c. 600 CE) and by Nilakantha Somayaji in his Aryabhatiya Bhasya (1465 CE). Aryabhatiya is also well-known for his description of relativity of motion. 
He expressed this relativity thus: "Just as a man in a boat moving forward sees the stationary objects (on the shore) as moving backward, just so are the stationary stars seen by the people on earth as moving exactly towards the west." == Mathematics == === Place value system and zero === The place-value system, first seen in the 3rd-century Bakhshali Manuscript, was clearly in place in his work. While he did not use a symbol for zero, the French mathematician Georges Ifrah argues that knowledge of zero was implicit in Aryabhata's place-value system as a place holder for the powers of ten with null coefficients. However, Aryabhata did not use the Brahmi numerals. Continuing the Sanskritic tradition from Vedic times, he used letters of the alphabet to denote numbers, expressing quantities such as the table of sines in a mnemonic form. === Approximation of π === Aryabhata worked on the approximation for pi (π), and may have come to the conclusion that π is irrational. In the second part of the Aryabhatiyam (gaṇitapāda 10), he writes: caturadhikaṃ śatamaṣṭaguṇaṃ dvāṣaṣṭistathā sahasrāṇām ayutadvayaviṣkambhasyāsanno vṛttapariṇāhaḥ. "Add four to 100, multiply by eight, and then add 62,000. By this rule the circumference of a circle with a diameter of 20,000 can be approached." This implies that for a circle whose diameter is 20,000, the circumference will be 62,832, i.e. π = 62832/20000 = 3.1416, which is accurate to two parts in one million. It is speculated that Aryabhata used the word āsanna (approaching) to mean that not only is this an approximation but that the value is incommensurable (or irrational). If this is correct, it is quite a sophisticated insight, because the irrationality of pi (π) was proved in Europe only in 1761 by Lambert. After Aryabhatiya was translated into Arabic (c. 820 CE), this approximation was mentioned in Al-Khwarizmi's book on algebra. === Trigonometry === In Ganitapada 6, Aryabhata gives the area of a triangle as tribhujasya phalaśarīraṃ samadalakoṭī bhujārdhasaṃvargaḥ that translates to: "for a triangle, the result of a perpendicular with the half-side is the area." Aryabhata discussed the concept of sine in his work by the name of ardha-jya, which literally means "half-chord". For simplicity, people started calling it jya. When Arabic writers translated his works from Sanskrit into Arabic, they referred to it as jiba. However, in Arabic writings, vowels are omitted, and it was abbreviated as jb. Later writers substituted it with jaib, meaning "pocket" or "fold (in a garment)". (In Arabic, jiba is a meaningless word.) Later in the 12th century, when Gherardo of Cremona translated these writings from Arabic into Latin, he replaced the Arabic jaib with its Latin counterpart, sinus, which means "cove" or "bay"; thence comes the English word sine. === Indeterminate equations === A problem of great interest to Indian mathematicians since ancient times has been to find integer solutions to Diophantine equations that have the form ax + by = c. (This problem was also studied in ancient Chinese mathematics, and its solution is usually referred to as the Chinese remainder theorem.) This is an example from Bhāskara's commentary on Aryabhatiya: Find the number which gives 5 as the remainder when divided by 8, 4 as the remainder when divided by 9, and 1 as the remainder when divided by 7. That is, find N = 8x+5 = 9y+4 = 7z+1. It turns out that the smallest value for N is 85. 
In general, Diophantine equations such as this can be notoriously difficult. They were discussed extensively in the ancient Vedic text Sulba Sutras, whose more ancient parts might date to 800 BCE. Aryabhata's method of solving such problems, elaborated by Bhaskara in 621 CE, is called the kuṭṭaka (कुट्टक) method. Kuṭṭaka means "pulverizing" or "breaking into small pieces", and the method involves a recursive algorithm for writing the original factors in smaller numbers. This algorithm became the standard method for solving first-order Diophantine equations in Indian mathematics, and initially the whole subject of algebra was called kuṭṭaka-gaṇita or simply kuṭṭaka. === Algebra === In Aryabhatiya, Aryabhata provided elegant results for the summation of series of squares and cubes: {\displaystyle 1^{2}+2^{2}+\cdots +n^{2}={n(n+1)(2n+1) \over 6}} and {\displaystyle 1^{3}+2^{3}+\cdots +n^{3}=(1+2+\cdots +n)^{2}} (see squared triangular number). == Astronomy == Aryabhata's system of astronomy was called the audAyaka system, in which days are reckoned from uday, dawn at lanka or "equator". Some of his later writings on astronomy, which apparently proposed a second model (or ardha-rAtrikA, midnight), are lost but can be partly reconstructed from the discussion in Brahmagupta's Khandakhadyaka. In some texts, he seems to ascribe the apparent motions of the heavens to the Earth's rotation. He may have believed that the planets' orbits are elliptical rather than circular. === Motions of the Solar System === Aryabhata correctly insisted that the Earth rotates about its axis daily, and that the apparent movement of the stars is a relative motion caused by the rotation of the Earth, contrary to the then-prevailing view that the sky rotated. This is indicated in the first chapter of the Aryabhatiya, where he gives the number of rotations of the Earth in a yuga, and made more explicit in his gola chapter: In the same way that someone in a boat going forward sees an unmoving [object] going backward, so [someone] on the equator sees the unmoving stars going uniformly westward. The cause of rising and setting [is that] the sphere of the stars together with the planets [apparently?] turns due west at the equator, constantly pushed by the cosmic wind. Aryabhata described a geocentric model of the Solar System, in which the Sun and Moon are each carried by epicycles. They in turn revolve around the Earth. In this model, which is also found in the Paitāmahasiddhānta (c. 425 CE), the motions of the planets are each governed by two epicycles, a smaller manda (slow) and a larger śīghra (fast). The order of the planets in terms of distance from Earth is taken as: the Moon, Mercury, Venus, the Sun, Mars, Jupiter, Saturn, and the asterisms. The positions and periods of the planets were calculated relative to uniformly moving points. In the case of Mercury and Venus, they move around the Earth at the same mean speed as the Sun. In the case of Mars, Jupiter, and Saturn, they move around the Earth at specific speeds, representing each planet's motion through the zodiac. Most historians of astronomy consider that this two-epicycle model reflects elements of pre-Ptolemaic Greek astronomy. Another element in Aryabhata's model, the śīghrocca, the basic planetary period in relation to the Sun, is seen by some historians as a sign of an underlying heliocentric model. 
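A short Python check of the arithmetic quoted in the mathematics sections above: the circumference rule of gaṇitapāda 10, the smallest N in Bhāskara's remainder example, and the two summation formulas.

```python
# Ganitapada 10: "add four to 100, multiply by eight, then add 62,000" (diameter 20,000)
circumference = (4 + 100) * 8 + 62000
print(circumference, circumference / 20000)        # 62832  3.1416

# Bhaskara's example: N leaves remainders 5, 4, 1 on division by 8, 9, 7
N = next(n for n in range(1, 8 * 9 * 7 + 1)
         if n % 8 == 5 and n % 9 == 4 and n % 7 == 1)
print(N)                                           # 85, the smallest such number

# The Aryabhatiya's sums of squares and cubes, verified for n = 1..50
for n in range(1, 51):
    assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    assert sum(k ** 3 for k in range(1, n + 1)) == sum(range(1, n + 1)) ** 2
```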
=== Eclipses === Solar and lunar eclipses were scientifically explained by Aryabhata. He states that the Moon and planets shine by reflected sunlight. Instead of the prevailing cosmogony in which eclipses were caused by Rahu and Ketu (identified as the pseudo-planetary lunar nodes), he explains eclipses in terms of shadows cast by and falling on Earth. Thus, the lunar eclipse occurs when the Moon enters into the Earth's shadow (verse gola.37). He discusses at length the size and extent of the Earth's shadow (verses gola.38–48) and then provides the computation and the size of the eclipsed part during an eclipse. Later Indian astronomers improved on the calculations, but Aryabhata's methods provided the core. His computational paradigm was so accurate that 18th-century scientist Guillaume Le Gentil, during a visit to Pondicherry, India, found the Indian computations of the duration of the lunar eclipse of 30 August 1765 to be short by 41 seconds, whereas his charts (by Tobias Mayer, 1752) were long by 68 seconds. === Sidereal periods === Considered in modern English units of time, Aryabhata calculated the sidereal rotation (the rotation of the earth referencing the fixed stars) as 23 hours, 56 minutes, and 4.1 seconds; the modern value is 23:56:4.091. Similarly, his value for the length of the sidereal year at 365 days, 6 hours, 12 minutes, and 30 seconds (365.25868 days) contains an error of 3 minutes and 20 seconds over the length of a year (365.25636 days). === Heliocentrism === As mentioned, Aryabhata advocated an astronomical model in which the Earth turns on its own axis. His model also gave corrections (the śīgra anomaly) for the speeds of the planets in the sky in terms of the mean speed of the Sun. Thus, it has been suggested that Aryabhata's calculations were based on an underlying heliocentric model, in which the planets orbit the Sun, though this has been rebutted. It has also been suggested that aspects of Aryabhata's system may have been derived from an earlier, likely pre-Ptolemaic Greek, heliocentric model of which Indian astronomers were unaware, though the evidence is scant. The general consensus is that a synodic anomaly (depending on the position of the Sun) does not imply a physically heliocentric orbit (such corrections being also present in late Babylonian astronomical texts), and that Aryabhata's system was not explicitly heliocentric. == Legacy == Aryabhata's work was of great influence in the Indian astronomical tradition and influenced several neighbouring cultures through translations. The Arabic translation during the Islamic Golden Age (c. 820 CE) was particularly influential. Some of his results are cited by Al-Khwarizmi, and in the 10th century Al-Biruni stated that Aryabhata's followers believed that the Earth rotated on its axis. His definitions of sine (jya), cosine (kojya), versine (utkrama-jya), and inverse sine (otkram jya) influenced the birth of trigonometry. He was also the first to specify sine and versine (1 − cos x) tables, in 3.75° intervals from 0° to 90°, to an accuracy of 4 decimal places. In fact, the modern terms "sine" and "cosine" are mistranscriptions of the words jya and kojya as introduced by Aryabhata. As mentioned, they were translated as jiba and kojiba in Arabic and then misunderstood by Gerard of Cremona while translating an Arabic geometry text to Latin. He assumed that jiba was the Arabic word jaib, which means "fold in a garment", L. sinus (c. 1150). Aryabhata's astronomical calculation methods were also very influential. 
Along with the trigonometric tables, they came to be widely used in the Islamic world and used to compute many Arabic astronomical tables (zijes). In particular, the astronomical tables in the work of the Arabic Spain scientist Al-Zarqali (11th century) were translated into Latin as the Tables of Toledo (12th century) and remained the most accurate ephemeris used in Europe for centuries. Calendric calculations devised by Aryabhata and his followers have been in continuous use in India for the practical purposes of fixing the Panchangam (the Hindu calendar). In the Islamic world, they formed the basis of the Jalali calendar introduced in 1073 CE by a group of astronomers including Omar Khayyam, versions of which (modified in 1925) are the national calendars in use in Iran and Afghanistan today. The dates of the Jalali calendar are based on actual solar transit, as in Aryabhata and earlier Siddhanta calendars. This type of calendar requires an ephemeris for calculating dates. Although dates were difficult to compute, seasonal errors were less in the Jalali calendar than in the Gregorian calendar. Aryabhatta Knowledge University (AKU), Patna has been established by Government of Bihar for the development and management of educational infrastructure related to technical, medical, management and allied professional education in his honour. The university is governed by Bihar State University Act 2008. India's first satellite Aryabhata and the lunar crater Aryabhata are both named in his honour, the Aryabhata satellite also featured on the reverse of the Indian 2-rupee note. An Institute for conducting research in astronomy, astrophysics and atmospheric sciences is the Aryabhatta Research Institute of Observational Sciences (ARIES) near Nainital, India. The inter-school Aryabhata Maths Competition is also named after him, as is Bacillus aryabhata, a species of bacteria discovered in the stratosphere by ISRO scientists in 2009. == See also == Āryabhaṭa numeration Āryabhaṭa's sine table Indian mathematics List of Indian mathematicians == References == === Works cited === Cooke, Roger (1997). The History of Mathematics: A Brief Course. Wiley-Interscience. ISBN 0-471-18082-3. Clark, Walter Eugene (1930). The Āryabhaṭīya of Āryabhaṭa: An Ancient Indian Work on Mathematics and Astronomy. University of Chicago Press; reprint: Kessinger Publishing (2006). ISBN 978-1-4254-8599-3. {{cite book}}: ISBN / Date incompatibility (help) Kak, Subhash C. (2000). 'Birth and Early Development of Indian Astronomy'. In Selin, Helaine, ed. (2000). Astronomy Across Cultures: The History of Non-Western Astronomy. Boston: Kluwer. ISBN 0-7923-6363-9.{{cite encyclopedia}}: CS1 maint: publisher location (link) Shukla, Kripa Shankar. Aryabhata: Indian Mathematician and Astronomer. New Delhi: Indian National Science Academy, 1976. Thurston, H. (1994). Early Astronomy. Springer-Verlag, New York. ISBN 0-387-94107-X. == External links == 1930 English translation of The Aryabhatiya in various formats at the Internet Archive. O'Connor, John J.; Robertson, Edmund F., "Aryabhata", MacTutor History of Mathematics Archive, University of St Andrews Achar, Narahari (2007). "Āryabhaṭa I". In Thomas Hockey; et al. (eds.). The Biographical Encyclopedia of Astronomers. New York: Springer. p. 63. ISBN 978-0-387-31022-0. (PDF version) "Aryabhata and Diophantus' son", Hindustan Times Storytelling Science column, November 2004 Surya Siddhanta translations
Wikipedia:Asano contraction#0
In complex analysis, a discipline in mathematics, and in statistical physics, the Asano contraction or Asano–Ruelle contraction is a transformation on a separately affine multivariate polynomial. It was first presented in 1970 by Taro Asano to prove the Lee–Yang theorem in the Heisenberg spin model case. This also yielded a simple proof of the Lee–Yang theorem in the Ising model. David Ruelle proved a general theorem relating the location of the roots of a contracted polynomial to that of the original. Asano contractions have also been used to study polynomials in graph theory. == Definition == Let {\displaystyle \Phi (z_{1},z_{2},\ldots ,z_{n})} be a polynomial which, when viewed as a function of any one of these variables, is an affine function. Such functions are called separately affine. For example, {\displaystyle a+bz_{1}+cz_{2}+dz_{1}z_{2}} is the general form of a separately affine function in two variables. Any separately affine function can be written in terms of any two of its variables as {\displaystyle \Phi (z_{i},z_{j})=a+bz_{i}+cz_{j}+dz_{i}z_{j}}. The Asano contraction {\displaystyle (z_{i},z_{j})\mapsto z} sends {\displaystyle \Phi } to {\displaystyle {\tilde {\Phi }}=a+dz}. == Location of zeroes == Asano contractions are often used in the context of theorems about the location of roots. Asano originally used them because they preserve the property of having no roots when all the variables have magnitude greater than 1. Ruelle provided a more general relationship which allowed the contractions to be used in more applications. He showed that if there are closed sets {\displaystyle M_{1},M_{2},\ldots ,M_{n}} not containing 0 such that {\displaystyle \Phi } cannot vanish unless {\displaystyle z_{i}\in M_{i}} for some index {\displaystyle i}, then {\displaystyle {\tilde {\Phi }}=((z_{j},z_{k})\mapsto z)(\Phi )} can only vanish if {\displaystyle z_{i}\in M_{i}} for some index {\displaystyle i\neq k,j} or {\displaystyle z\in -M_{j}M_{k}}, where {\displaystyle -M_{j}M_{k}=\{-ab;a\in M_{j},b\in M_{k}\}}. Ruelle and others have used this theorem to relate the zeroes of the partition function to zeroes of the partition function of its subsystems. == Use == Asano contractions can be used in statistical physics to gain information about a system from its subsystems. For example, suppose we have a system with a finite set {\displaystyle \Lambda } of particles with magnetic spin either 1 or −1. For each site, we have a complex variable {\displaystyle z_{x}}. Then we can define a separately affine polynomial {\displaystyle P(z_{\Lambda })=\sum _{X\subseteq \Lambda }c_{X}z^{X}}, where {\displaystyle z^{X}=\prod _{x\in X}z_{x}}, {\displaystyle c_{X}=e^{-\beta U(X)}} and {\displaystyle U(X)} is the energy of the state where only the sites in {\displaystyle X} have positive spin. If all the variables are the same, this is the partition function. Now if {\displaystyle \Lambda =\Lambda _{1}\cup \Lambda _{2}}, then {\displaystyle P(z_{\Lambda })} is obtained from {\displaystyle P(z_{\Lambda _{1}})P(z_{\Lambda _{2}})} by contracting the variables attached to identical sites. 
This is because the Asano contraction essentially eliminates all terms of the product in which a shared site is assigned different spins by P ( z Λ 1 ) {\displaystyle P(z_{\Lambda _{1}})} and P ( z Λ 2 ) {\displaystyle P(z_{\Lambda _{2}})} . Ruelle has also used Asano contractions to find information about the location of roots of a generalization of matching polynomials which he calls graph-counting polynomials. He assigns a variable to each edge. For each vertex, he computes a symmetric polynomial in the variables corresponding to the edges incident on that vertex. The symmetric polynomial contains the terms of degree equal to the allowed degrees for that vertex. He then multiplies these symmetric polynomials together and uses Asano contractions to keep only those terms in which an edge is either present at both of its endpoints or absent from both. By using the Grace–Walsh–Szegő theorem and intersecting all the sets that can be obtained, Ruelle gives sets containing the roots of several types of these symmetric polynomials. Since the graph-counting polynomial was obtained from these by Asano contractions, most of the remaining work is computing products of these sets. == References ==
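The contraction itself is easy to experiment with symbolically. The following is an illustrative sketch, not part of the original article: it assumes Python with the SymPy library, and the helper name asano_contract is arbitrary. It applies the contraction (z_i, z_j) ↦ z to a separately affine polynomial written as a + b z_i + c z_j + d z_i z_j and returns a + d z, as in the definition above.

```python
# Illustrative sketch of the Asano contraction (z_i, z_j) -> z using SymPy
# (assumed dependency).  For a polynomial that is affine in z_i and in z_j,
#   Phi = a + b*z_i + c*z_j + d*z_i*z_j,
# the contraction keeps the constant term a and the bilinear coefficient d,
# returning a + d*z.
import sympy as sp

def asano_contract(phi, zi, zj, z):
    """Apply the Asano contraction (zi, zj) -> z to a separately affine polynomial."""
    poly = sp.Poly(sp.expand(phi), zi, zj)
    a = poly.coeff_monomial(1)         # part of phi containing neither zi nor zj
    d = poly.coeff_monomial(zi * zj)   # coefficient of the zi*zj term
    return sp.expand(a + d * z)

if __name__ == "__main__":
    a, b, c, d, z1, z2, z = sp.symbols("a b c d z1 z2 z")
    phi = a + b*z1 + c*z2 + d*z1*z2
    print(asano_contract(phi, z1, z2, z))   # prints a + d*z
```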
Wikipedia:Asher Kravitz#0
Asher Kravitz (Hebrew: אשר קרביץ; born 1969) is an Israeli author and lecturer on physics and mathematics at the Academic College of Engineering in Jerusalem and the Open University. He is also an animal rights activist and wildlife photographer. == Biography == Kravitz was born in Jerusalem and raised in a traditional Jewish home. He studied electronics at Kiryat Noar, a vocational yeshiva high school, and at the Djanogly High School in Jerusalem. His military service in the Israeli Army began in the Commando Brigade of the Armored Corps. Toward the end of his service, he served as an instructor of Krav Maga. Kravitz completed his bachelor's degree in Physics at the Hebrew University and his master's degree at the Technion. While studying at the Technion, he joined the Israeli Police Force and served as an investigator in the National Unit for the Investigation of Serious Crimes. After leaving the police force, he taught for two years at the High School of Arts and Sciences. Since the year 2000, Kravitz has been teaching courses in mathematics and physics at the Academic College of Engineering in Jerusalem and at the Open University of Israel. He also lectured on literature at the Hebrew University School for Overseas Students. == Photography and documentation of wildlife == Since 1997, Kravitz has worked on documenting wildlife through photography, both in Israel and in Africa. During the 2000s, a number of his articles on animal rights and well-being were published. Kravitz documented his many excursions to Africa with extensive photography of its wildlife and also participated in an Israeli mission to set up a haven for orphaned gorillas in Cameroon. == Books == His first two books, Magic Square (G'vanim, 2002) and Boomerang (Keter, 2003), are whodunits with plots built around complex criminal cases. His third book, I'm Mustafa Rabinowitz (Kibbutz M'uhad, 2005), is a story about a soldier fighting in an anti-terrorist unit in the Israeli army and the moral dilemmas that he faces. His fourth book, The Jewish Dog (Yediot Books, 2007), is the post-mortem autobiography of Koresh, a dog born into the household of a German Jewish family during the pre-Holocaust period in Germany, and his lifelong travails. This last novel was awarded a "Diamond Citation" by the Book Publishers Association of Israel. == References == == External links == Asher Kravitz at the website of The Institute for Translation of Hebrew Literature Asher Kravitz, The Lexicon of Modern Hebrew Literature The first chapter of I'm Mustafa Rabinowitz, website of Yediot Books
Wikipedia:Askar Dzhumadildayev#0
Askar Dzhumadildayev (Kazakh: Асқар Серқұлұлы Жұмаділдаев, Asqar Serqūlūly Jūmadıldaev; born 25 February 1956) is a Kazakh mathematician, doctor of physics and mathematics, professor, and a Full Member of the Kazakhstan National Academy of Science. He was also a member of the Supreme Council of the Kazakh SSR and of the Republic of Kazakhstan. == Biography == === Early life === Askar Serkululy Dzhumadildayev was born on 25 April 1956 in Shieli, Kyzylorda Region, Kazakhstan. He was a member of the jury of the 51st International Mathematical Olympiad (IMO). == Scientific degrees == 1977 – M.A. in mathematics (Moscow State University) 1981 – Ph.D. in mathematics (Steklov Institute of Mathematics) 1988 – second doctorate in mathematics (Steklov Institute of Mathematics) 1990 – Professor of Kazakh State University 1995 – Corresponding Member of the National Kazakh Academy of Sciences 2004 – Full Member of the National Kazakh Academy of Sciences == Professional experience == 1980–90 Junior, senior, and leading researcher at the Institute of Mathematics, Kazakh SSR Academy of Sciences. 1990– Head of the algebra laboratory. He has an online course, "Matrices and Determinants" (Kazakh: «Матрицалар және анықтауыштар»), at openU.kz. === Visiting positions === 1988 – Hamburg University (2 months) 1995–1996 – Munich University (18 months) 1997, 1998, 1999 – Bielefeld University (4 months) 1997 – Newton Institute, Cambridge, UK (Lent term, 4 months) 1998, 2001, 2002, 2003 – International Centre for Theoretical Physics (Trieste, 9 months) 1998, 1999 – Mittag-Leffler Institute of Mathematics, Sweden (9 months) 1999 – Kyoto University, Japan (1 month) 2000–2001, 2002, 2003 – Stockholm University, Sweden (6 months) 2000 – Oxford University, UK (1 month) 2001 – Fields Institute, Toronto (1 month) 2001, 2002, 2003, 2005 – Institut des Hautes Études Scientifiques, France (5 months) 2002 – Erwin Schrödinger International Institute for Mathematical Physics, Vienna (1 month) 2005 – Max Planck Institute for Mathematics, Bonn (3 months) == Awards and grants == 1983 – Prize of the Republic Council for Science and Technology 1993–1995 – Grants of the American Mathematical Society, the International Science Foundation (Soros Foundation), and INTAS (International Association for the Promotion of Cooperation with Scientists from the former USSR) 1995–1996 – Alexander von Humboldt Fellowship 1999–2004 – Grant of the Royal Swedish Academy of Sciences 1999 – Grant of the JSPS (Japan Society for the Promotion of Science) 2000–2003 – Grant of INTAS 2000–2004 – Kazakh State Fellowship for distinguished scholars 2007, 2016 – Grant of the Kazakh Ministry of Education "Best professor of Higher School" 2011–2012 – Kazakh State Fellowship for distinguished scholars 2011 – State Prize of the Republic of Kazakhstan in science and technology 2012 – International Khwarizmi Award (Islamic Republic of Iran) == Recognition == In 2016, Askar Dzhumadildayev was chosen as one of the nominees in the "Science" category of the national project «El Tulgasy» (Name of the Motherland). The idea of the project was to select the most significant citizens of Kazakhstan whose names are now associated with the achievements of the country. More than 350,000 people voted in this project, and Dzhumadildayev was voted into second place in his category. == Selected publications == Dzhumadildaev A.S., Yeliussizov D., Walks, partitions, and normal ordering // Electronic J. Combin., 22(4)(2015), #P4.10, 23 pages. Dzhumadildaev A.S., Yeliussizov D., Path decompositions of digraphs and their applications to Weyl algebra // Advances in Applied Mathematics. – 2015. – V. 67. – P. 36–54. Dzhumadildaev A. 
S., Ismailov N. A., S-n- and GL(n)-module structures on free Novikov algebras // Journal of Algebra. – 2014. – V. 416. – P. 287–313. Dzhumadildaev A.S., 2p-Commutator on differential operators of order p // Letters in Mathematical Physics. – 2014. – V. 104, No.7. – P. 849–869. Dzhumadildaev A.S., Omirov B.A., Rozikov U.A. On a class of evolution algebras of "chicken" population // International Journal of Mathematics. – 2014. – V. 25, No.8. – P. 849–869. Dzhumadildaev A.S., Yeliussizov D., Stirling permutations on multisets // European Journal of Combinatorics. – 2014. – V. 36. – P. 377–392. Dzhumadildaev A.S., The Dynkin theorem for multi linear Lie elements // Journal of Lie Theory. – 2013. – V. 23, No.3. – P. 795–801. Dzhumadildaev A.S., D. Yeliussizov, Power sums of binomial coefficients // J. Integer Seq.V. 16–2013, art. 13.1.4 Dzhumadildaev A.S., Zusmanovich P., The alternative operad is not Koszul // Experimental Mathematics. – 2011. – V. 20, No.2. – P. 138–144. Dzhumadildaev A. S. Worpitzky identity for multipermutations // Mathematical Notes – 2011. – V. 90, No.3. – P. 448–450. Dzhumadildaev A.S., Lie expression for multi-parameter Klyachko idempotent // Journal of Algebraic Combinatorics. – 2011. – V. 33, No.4. – P. 531–542. Dzhumadildaev A.S., Codimension growth and non-Koszulity of Novikov operad // Communications in Algebra. – 2011. – V. 39, No.8. – P. 2943–2952. Dzhumadildaev A.S., Jordan elements and left-center of a free Leibniz algebra // Electronic Research Announcements in Mathematical Sciences. – 2011. – V. 18, – P. 31–49. Dzhumadildaev, N. Ismailov, K. Tulenbaev, Free bicommutative algebras // Serdica Math, V. 37-2011- pp. 25–44. Dzhumadildaev A.S., Zusmanovich P., Commutative 2-cocycles on Lie algebras // Journal of Algebra. – 2010. – V. 324, No.4. – P. 732–748. Dzhumadildaev A.S., On the Hesse-Muir formula for the determinant of the matrix A (n-1) B (2) // Mathematical Notes. – 2010. – V. 87, No.3. – P. 428–429. Dzhumadildaev A.S., MacMahon's theorem for a set of permutuations with given descent indices and right-maximal records // Electronic Journal of Combinatorics. – 2010. – V. 17, No.1. – R34. Dzhumadildaev A.S., Anti-commutative algebras with skew-symmetric identities // Journal of Algebra and its Applications. – 2009. – V. 8, No.2. – P. 157–180. Dzhumadildaev A.S., 10-commutators, 13-commutators and odd derivations // Journal of Nonlinear Mathematical Physics. – 2008. – V. 15, No.1. – P. 87–103. Dzhumadildaev A.S., q-Leibniz algebras // Serdica Math. J., V.34 - 2008, 415–440. Dzhumadildaev A.S., Algebras with skew-symmetric identity of degree 3 // J.Math. Sci, V.161-2009- No.1, p. 11-30 Dzhumadildaev A.S., K.M. Tulenbaev, Exceptional 0-Alia Algebras // J. Math. Sci., V.161-2009- No.1, p. 37-40. Dzhumadildaev A.S., The n-Lie property of the Jacobian as a condition for complete integrability // Siberian Mathematical Journal. – 2006. – V. 47, No.4. – P. 643–652. Dzhumadildaev A.S., Tulenbaev K.M., Engel theorem for Novikov algebras // Communications in Algebra. – 2006. – V. 34, No.3. – P. 883–888. Dzhumadildaev A.S., n-Lie structures that are generated by Wronskians // Siberian Mathematical Journal. – 2005. – V. 46, No.4. – P. 601–612. Dzhumadildaev A.S., Zinbiel algebras under q-commutator // Fundamental and Applied Math. V.11-2005- No.3, 57–78. Dzhumadildaev A.S., Tulenbaev K.M., Nilpotency of Zinbiel algebras // Journal of Dynamical and Control Systems. – 2005. – V. 11, No.2. – P. 195–213. 
Dzhumadildaev A.S., Hadamard invertible matrices, n-scalar products, and determinants // Mathematical Notes. – 2005. – V. 77, No.3. – P. 440–443. Dzhumadildaev A.S., Special identity for Novikov-Jordan algebras // Communications in Algebra. – 2005. – V. 33, No.5. – P. 1279–1287. n-Lie Structures That Are Generated by Wronskians //Sibirskii Matematicheskii Zhurnal, V.46-2005, No. 4, pp. 759–773, 2005 =engl. transl. Siberian Mathematical Journal, {\bf 46}(2005), No.4, pp. 601 – 612= Preprint available math.RA/0202043 Dzhumadildaev A.S., Representations of vector product n-Lie algebras // Communications In Algebra. – 2004. – V. 32, No.9. – P. 3315–3326. Dzhumadildaev A.S. N-commutators // Commentarii Mathematici Helvetici. – 2004. – V. 79, No.3. – P. 516–553. Dzhumadildaev A.S., K.M. Tulenbaev Filiform Leibniz dual algebras // International Conference Humboldt-Kolleg II, October 24–16, 2004, p. 62-63. Dzhumadildaev A.S., Novikov-Jordan algebras // Communications In Algebra. – 2002. – V. 30, No. 11. – P. 5207–5240. Dzhumadildaev A.S. Identities and derivations for Jacobian algebras//"Quantization, Poisson brackets and beyond", Contemp. Math. v.315, 245–278, 2002. Preprint available math.RA/0202040 Dzhumadildaev A.S., C. Lofwall Trees, free right-symmetric algebras, free Novikov algebras and identities // Homology, Homotopy and Applications, V. 4–2002, No.2(1), 165–190. Dzhumadildaev A.S., Jacobson formula for right-symmetric algebras in characteristic p // Communications In Algebra. – 2001. – V. 29, No.9. – P. 3759–3771. Dzhumadildaev A.S., Abdykassymova S.A., Leibniz algebras in characteristic p // Comptes Rendus de l'Académie des Sciences Série I-Mathématique. – 2001. – V. 332, No. 12. – P. 1047–1052. Dzhumadildaev A.S., Davydov A.A., Factor-complex for Leibniz cohomology // Communications In Algebra. – 2001. – V. 29, No. 9. – P. 4197–4210. Dzhumadildaev A.S., Minimal identities for right-symmetric algebras // Journal of Algebra. – 2000. – V. 225, No.1. – P. 201–230. Dzhumadildaev A.S., A.I. Kostrikin, Modular Lie algebras: new trends // Algebra (Proc. Kurosh Conf. may, 1998), Walter de Gruyter, p. 181-203, 2000. Dzhumadildaev A.S., Cohomologies of colour Leibniz algebras: pre-simplicial approach // Lie Theory and its Applications III, (Clausthal, 11–14 July 1999), World Sci., 124–136, 2000. Dzhumadildaev A.S., Cohomologies and deformations of right-symmetric Algebras // J.Math. Sci, V. 93–1999, No. 6, 1836–1876. Preprint available math.DG/9807065. Dzhumadildaev A.S., Symmetric (co)homologies of Lie algebras // Comptes Rendus de l'Académie des Sciences - Series I - Mathematique. – 1997. – V. 324, No. 5. – P. 497–502. Dzhumadildaev A.S. Cosmologies and deformations of semiprime sum of Lie algebras // Doklady Akademii Nauk. – 1997. – V. 355, No. 5. – P. 586–588. Dzhumadildaev A.S., Virasoro Type Lie algebras and deformations // Zeitschrift für Physik C-Particles and Fields.¬ – 1996. – V. 72, No. 3. – P. 509–517. Dzhumadildaev A.S., Odd central extensions of Lie superalgebras // Functional Analysis and its Applications. – 1995. – V.29, No.3. – P.202–204. Dzhumadildaev A.S. Differentiations and central extensions of Lie algebra of formal pseudo-differential operators // Algebra i Analis, {\bf 6}(1994), No.1, p. 140-158=engl.transl. St.Petersbourg Math.J. {\bf 6}(1995), No.1, p. 121-136. Dzhumadildaev A.S., Central extensions of infinite-dimensional Lie-algebras // Functional Analysis and its Applications. – 1992. – V. 26, No.4. – P.247–253. 
Dzhumadildaev A.S., Cohomology of truncated coinduced representations of Lie-algebras of positive characteristic // Mathematics of the USSR-Sbornik. – 1990. – V. 66, No.2. – P.461–473. Dzhumadildaev A.S. Integral and mod p-cohomologies of the lie-algebra W1 // Functional Analysis and its Applications. – 1988. – V. 22, No.3. – P. 226–228. Dzhumadildaev A.S. On a Levi theorem for lie-algebras of characteristic-p // Russian Mathematical Surveys. – 1986. – V. 41, No.5. – P. 139–140. Dzhumadildaev A.S. 2-cohomologies of nilpotent subalgebra of Zassenhaus algebra // Izvestiya vysshikh uchebnykh zavedenii Matematika. – 1986. – No.2. – P. 59– 61. Dzhumadildaev A.S., Central extensions of Zassenhaus algebra and their irreducible representations // Math.USSR Sb., V. 54–1986, p.;457-474. Dzhumadildaev A.S., Generalized casimir elements // Mathematics of the USSR-Izvestiya. – 1986. – V. 49, No.5. – P. 391–400. Dzhumadildaev A.S. Central extensions of the zassenhaus algebra and their irreducible representations // Mathematics of the USSR-Sbornik. – 1985. – V.126, No.3. – P. 457–474. Dzhumadildaev A.S., Simple Lie-algebras with a subalgebra of codimension one // Russian Mathematical Surveys. – 1985. –V. 40, No.1. – P. 215– 216. Dzhumadildaev A.S., On the cohomology of modular Lie-algebras // Mathematics of the USSR-Sbornik. – 1982. – V. 119, No. 1. – P. 127–143. == References ==
Wikipedia:Askold Khovanskii#0
Askold Georgievich Khovanskii (Russian: Аскольд Георгиевич Хованский; born 3 June 1947, Moscow) is a Russian and Canadian mathematician, currently a professor of mathematics at the University of Toronto, Canada. His areas of research are algebraic geometry, commutative algebra, singularity theory, differential geometry and differential equations. His research includes the development of the theory of toric varieties and Newton polyhedra in algebraic geometry. He is also the inventor of the theory of fewnomials, and the Bernstein–Khovanskii–Kushnirenko theorem is named after him. He obtained his Ph.D. from the Steklov Mathematical Institute in Moscow under the supervision of Vladimir Arnold. In his Ph.D. thesis, he developed a topological version of Galois theory. He also studies the theory of Newton–Okounkov bodies, or Okounkov bodies for short. Among his graduate students are Olga Gel'fond, Feodor Borodich, H. Petrov-Tan'kin, Kiumars Kaveh, Farzali Izadi, Ivan Soprunov, Jenya Soprunova, Vladlen Timorin, Valentina Kirichenko, Sergey Chulkov, V. Kisunko, Mikhail Mazin, O. Ivrii, K. Matveev, Yuri Burda, and J. Yang. In 2014, he received the Jeffery–Williams Prize of the Canadian Mathematical Society for outstanding contributions to mathematical research in Canada. == References == == External links == Askold Khovanskii at the Mathematics Genealogy Project Homepage of Askold Khovanskii at the University of Toronto Moscow Mathematical Journal volume in honor of Askold Khovanskii (Mosc. Math. J., 7:2 (2007), 169–171) Askoldfest
Wikipedia:Askold Vinogradov#0
Askold Ivanovich Vinogradov (Russian: Аско́льд Ива́нович Виногра́дов; 1929 – 31 December 2005) was a Russian mathematician who worked in analytic number theory. The Bombieri–Vinogradov theorem is partially named after him. == References == == External links == Publications of A.I. Vinogradov
Wikipedia:Aslak Tveito#0
Aslak Tveito (born 17 February 1961) is a Norwegian scientist in the field of numerical analysis and scientific computing. Tveito was the Managing Director of the Simula Research Laboratory, a Norwegian research center owned by the Norwegian Government, and is Professor of Scientific Computing at the University of Oslo. == Education and career == Tveito obtained an MSc degree in Numerical Analysis from the University of Oslo, Department of Informatics in 1985. He obtained PhD from the same department in 1988, focusing on numerical solution of partial differential equations. In 1991 Tveito joined the Applied Mathematics department at SINTEF as a research scientist and from 1993 to 1997 held the position of Chief Scientist. He was appointed Professor of Numerical Analysis at the University of Oslo, Department of Informatics in 1994. In 1997 Tveito co-founded Numerical Objects, a company that commercialized the Diffpack software, and served on its board until 2001. Tveito joined the Simula Research Laboratory upon its establishment in 2001 and has been its Managing Director since 2002. The scientific computing activities at Simula have been awarded the top grade, Excellent, in all international evaluations in their lifetime (2001–2015) During this time he has retained his Professorship at the University of Oslo, and also served as Chairman of the Board of Simula Innovation (2006–2008), Kalkulo (2006–2008), Simula School of Research and Innovation (2007–2010), Simula UiB (2016–2017), Simula Metropolitan Center for Digital Engineering (2018–2019) and the Norwegian Defence Research Establishment (2021-). Tveito is a member of the Norwegian Academy of Technological Sciences. == Research == Tveito's research has included the numerical solution of linear systems arising from the discretization of partial differential equations; the numerical and mathematical analysis of hyperbolic conservation laws; mathematical models of two-phase flow; upscaling; nonlinear water waves; the numerical solution of the Black-Scholes equations; parallel computing for partial differential equations; and numerical software tools. In his early career he focused on numerical analysis, before shifting to research on software and computing tools, parallel computing, and the application of computational models in science. Since 2005 he has worked almost exclusively on mathematical and computational issues related to understanding the electrophysiology of the heart. He is involved in the Centre for Integrative Neuroplasticity (CINPLA) at the University of Oslo. == Authorship == Tveito has co-authored three research monographs [1, 2, 3] and two textbooks in scientific computing [4, 5]. He has co-edited seven books, and has published more than 100 papers in international journals, collections, and proceedings. A complete publication list can be found at Google Scholar. Tveito is on the editorial board of Encyclopedia of Applied and Computational Mathematics. == Selected bibliography == Sundnes, G. T. Lines, X. Cai, B. F. Nielsen, K.-A. Mardal, and A. Tveito. Computing the Electrical Activity in the Heart. Springer-Verlag Berlin Heidelberg, 2006. ISBN 978-3-540-33437-8 A. Tveito and G. Lines. Computing characterizations of drugs for ion channels and receptors using Markov models. Springer-Verlag, lecture notes in Computational Science and Engineering, vol 111, 2016. ISBN 978-3-319-30029-0 A. Tveito, K.A. Mardal, and M. Rognes. Modeling Excitable Tissue. Simula SpringerBriefs on Computing, 2021. ISBN 978-3-030-61156-9 A. 
Tveito and R. Winther. Introduction to partial differential equations. A computational approach. Springer-Verlag Berlin-Heidelberg, 2nd ed. 2009. (also available in German). A. Tveito, H. P. Langtangen, B. F. Nielsen, and X. Cai, Elements of Scientific Computing. Springer-Verlag Berlin-Heidelberg, 2010. ISBN 978-3-642-11299-7 == References ==
Wikipedia:Assaf Naor#0
Assaf Naor (Hebrew: אסף נאור; born May 7, 1975) is an Israeli American and Czech mathematician, computer scientist, and a professor of mathematics at Princeton University. == Academic career == Naor earned a baccalaureate from Hebrew University of Jerusalem in 1996 and a doctorate from the same university in 2002, under the supervision of Joram Lindenstrauss. He worked at Microsoft Research from 2002 until 2007, with an affiliated faculty position at the University of Washington, and joined the NYU faculty in 2006. == Research == Naor's research concerns metric spaces, their properties, and related algorithms, including improved upper bounds on the Grothendieck inequality, applications of this inequality, and research on metrical task systems. == Awards and honors == Naor won the Bergmann award of the United States – Israel Binational Science Foundation in 2007, and the Pazy award of the BSF in 2011. In 2012 he was one of four faculty winners of the Leonard Blavatnik Award of the New York Academy of Sciences, given to young scientists and engineers in New York, New Jersey, and Connecticut. He won the Salem Prize in 2008 for "contributions to the structural theory of metric spaces and its applications to computer science", and in the same year was given a European Mathematical Society Prize (one of ten awarded to outstanding younger mathematicians). He won the Bôcher Memorial Prize in 2011 "for introducing new invariants of metric spaces and for applying his new understanding of the distortion between various metric structures to theoretical computer science". In 2012 he became a fellow of the American Mathematical Society. He received the Nemmers Prize in Mathematics in 2018 and in 2019 the Ostrowski Prize. He gave an invited talk at the International Congress of Mathematicians in 2010, on the topic of "Functional Analysis and Applications". == References ==
Wikipedia:Association of Mathematics Teachers of India#0
The Association of Mathematics Teachers of India or AMTI is an academically oriented body of professionals and students interested in the fields of mathematics and mathematics education. The AMTI's main base is Tamil Nadu, but it has recently been spreading its network in other parts of India, particularly in South India. == Examinations and Olympiads == === National Mathematics Talent Contest === AMTI conducts a National Mathematics Talent Contest or NMTC at Primary(Gauss Contest) (Standards 4 to 6), Sub-junior (Kaprekar Contest) (Standards 7 and 8), Junior (Bhaskara Contest) (Standards 9 and 10), Inter(Ramanujan Contest) (Standards 11 and 12) and Senior (Aryabhata Contest) (B.Sc.) levels. For students at the Junior and Inter levels from Tamil Nadu, the NMTC also plays the role of Regional Mathematical Olympiad. Although the question papers are different for Junior and Inter levels, students from both levels may be chosen to appear at INMO based on their performance. The NMTC is usually held around the last week of October. A preliminary examination is conducted earlier (in September) for all levels except B.Sc. students. Students (Junior and Inter) qualifying the preliminary examination are invited for an Orientation Camp one week before the NMTC, where Olympiad problems and theories are taught. This is also useful for those students qualifying further for INMO. === Grand Achievement Test === This test is for students studying in 12th standard under the Tamil Nadu State Board. It is intended to give a perfectly simulated atmosphere of the board's examination. == Training Activities == === Ten-week training session === In 2005, AMTI started a ten-week training programme for students for Olympiad-related problems. The training batches were split into: Primary level: Standards 4 to 6 Sub-junior level: Standards 7 and 8 Junior level: Standards 9 and 10 Inter level: Standards 11 and 12 Around 85 students attended the ten-week training session. AMTI conducted the programme again in 2006, and received a much better response. == Workshops and conferences == The AMTI has been organizing conferences in different parts of the country to meet and deliberate issues of mathematics education, particularly at the school level. == Notable office bearers == P. K. Srinivasan, a famous teacher of mathematics, was the first Editor of the magazine Junior Mathematician (1990 to 1994) and the Academic Secretary of AMTI from 1981 to 1994. == External links == AMTI official page
Wikipedia:Association of Teachers of Mathematics#0
The Association of Teachers of Mathematics (ATM) was established by Caleb Gattegno in 1950 to encourage the development of mathematics education to be more closely related to the needs of the learner. ATM is a membership organisation representing a community of students, nursery, infant, primary, secondary and tertiary teachers, numeracy consultants, overseas teachers, academics and anybody interested in mathematics education. == Aims == The stated aims of the Association of Teachers of Mathematics are to support the teaching and learning of mathematics by: encouraging increased understanding and enjoyment of mathematics encouraging increased understanding of how people learn mathematics encouraging the sharing and evaluation of teaching and learning strategies and practices promoting the exploration of new ideas and possibilities initiating and contributing to discussion of and developments in mathematics education at all levels == Guiding principles == ATM lists as its guiding principles: The ability to operate mathematically is an aspect of human functioning which is as universal as language itself. Attention needs constantly to be drawn to this fact. Any possibility of intimidating with mathematical expertise is to be avoided. The power to learn rests with the learner. Teaching has a subordinate role. The teacher has a duty to seek out ways to engage the power of the learner. It is important to examine critically approaches to teaching and to explore new possibilities, whether deriving from research, from technological developments or from the imaginative and insightful ideas of others. Teaching and learning are cooperative activities. Encouraging a questioning approach and giving due attention to the ideas of others are attitudes to be encouraged. Influence is best sought by building networks of contacts in professional circles. == Structure == There are about 3500 members, mainly teachers in primary and secondary schools. It is a registered charity and all profits from subscriptions and trading are re-invested. Its head office is located in central Derby. === Branches === Working within the aims and guiding principles of the Association of Teachers of Mathematics, ATM Branches provide the opportunity for professionals to share ideas and experiences in their own areas. == Publications == ATM publishes Mathematics Teaching, a non-refereed journal with articles of interest to those involved in mathematics education. The journal is sent to all registered members. There are some free 'open access' journals available to all on the ATM website. ATM also publishes a range of resources suitable for teachers at all levels of teaching. == See also == Association for Science Education Science, Technology, Engineering and Mathematics Network Science Learning Centres - based at the University of York == References == == External links == Web site Easter Professional Development Conference Mathematics Teaching journal === News items === Learning of maths plateaus in December 2007 Difficult maths in May 2007 Times tables in September 2004
Wikipedia:Associative property#0
In mathematics, the associative property is a property of some binary operations that rearranging the parentheses in an expression will not change the result. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs. Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed. That is (after rewriting the expression with parentheses and in infix notation if necessary), rearranging the parentheses in such an expression will not change its value. Consider the following equations: ( 2 + 3 ) + 4 = 2 + ( 3 + 4 ) = 9 2 × ( 3 × 4 ) = ( 2 × 3 ) × 4 = 24. {\displaystyle {\begin{aligned}(2+3)+4&=2+(3+4)=9\,\\2\times (3\times 4)&=(2\times 3)\times 4=24.\end{aligned}}} Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that "addition and multiplication of real numbers are associative operations". Associativity is not the same as commutativity, which addresses whether the order of two operands affects the result. For example, the order does not matter in the multiplication of real numbers, that is, a × b = b × a, so we say that the multiplication of real numbers is a commutative operation. However, operations such as function composition and matrix multiplication are associative, but not (generally) commutative. Associative operations are abundant in mathematics; in fact, many algebraic structures (such as semigroups and categories) explicitly require their binary operations to be associative. However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation, and the vector cross product. In contrast to the theoretical properties of real numbers, the addition of floating point numbers in computer science is not associative, and the choice of how to associate an expression can have a significant effect on rounding error. == Definition == Formally, a binary operation ∗ {\displaystyle \ast } on a set S is called associative if it satisfies the associative law: ( x ∗ y ) ∗ z = x ∗ ( y ∗ z ) {\displaystyle (x\ast y)\ast z=x\ast (y\ast z)} , for all x , y , z {\displaystyle x,y,z} in S. Here, ∗ is used to replace the symbol of the operation, which may be any symbol, and even the absence of symbol (juxtaposition) as for multiplication. ( x y ) z = x ( y z ) {\displaystyle (xy)z=x(yz)} , for all x , y , z {\displaystyle x,y,z} in S. The associative law can also be expressed in functional notation thus: ( f ∘ ( g ∘ h ) ) ( x ) = ( ( f ∘ g ) ∘ h ) ( x ) {\displaystyle (f\circ (g\circ h))(x)=((f\circ g)\circ h)(x)} == Generalized associative law == If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. This is called the generalized associative law. The number of possible bracketings is just the Catalan number, C n {\displaystyle C_{n}} , for n operations on n+1 values. 
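The count of bracketings can be checked by brute force. The following sketch is only an added illustration, not part of the article; the function name bracketings is arbitrary. It enumerates every full parenthesization of a sequence of operands and compares the count with the Catalan number.

```python
# Enumerate all full parenthesizations of n+1 operands (n binary operations)
# and compare their number with the Catalan number C_n.
from math import comb

def bracketings(items):
    """Return all fully parenthesized products of the given operands, as strings."""
    if len(items) == 1:
        return [items[0]]
    results = []
    for k in range(1, len(items)):            # split point between left and right factor
        for left in bracketings(items[:k]):
            for right in bracketings(items[k:]):
                results.append(f"({left}{right})")
    return results

def catalan(n):
    return comb(2 * n, n) // (n + 1)

if __name__ == "__main__":
    exprs = bracketings(list("abcd"))         # 3 operations on 4 operands
    print(exprs)                              # five distinct bracketings
    print(len(exprs), catalan(3))             # both equal 5
```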
For instance, a product of 3 operations on 4 elements may be written (ignoring permutations of the arguments) in C 3 = 5 {\displaystyle C_{3}=5} possible ways: ( ( a b ) c ) d {\displaystyle ((ab)c)d} ( a ( b c ) ) d {\displaystyle (a(bc))d} a ( ( b c ) d ) {\displaystyle a((bc)d)} a ( b ( c d ) ) {\displaystyle a(b(cd))} ( a b ) ( c d ) {\displaystyle (ab)(cd)} If the product operation is associative, the generalized associative law says that all these expressions will yield the same result. So unless the expression with omitted parentheses already has a different meaning (see below), the parentheses can be considered unnecessary and "the" product can be written unambiguously as a b c d {\displaystyle abcd} . As the number of elements increases, the number of possible ways to insert parentheses grows quickly, but they remain unnecessary for disambiguation. An example where this does not work is the logical biconditional ↔. It is associative; thus, A ↔ (B ↔ C) is equivalent to (A ↔ B) ↔ C, but A ↔ B ↔ C most commonly means (A ↔ B) and (B ↔ C), which is not equivalent. == Examples == Some examples of associative operations include the following. == Propositional logic == === Rule of replacement === In standard truth-functional propositional logic, association, or associativity, refers to two valid rules of replacement. The rules allow one to move parentheses in logical expressions in logical proofs. The rules (using logical connectives notation) are: ( P ∨ ( Q ∨ R ) ) ⇔ ( ( P ∨ Q ) ∨ R ) {\displaystyle (P\lor (Q\lor R))\Leftrightarrow ((P\lor Q)\lor R)} and ( P ∧ ( Q ∧ R ) ) ⇔ ( ( P ∧ Q ) ∧ R ) , {\displaystyle (P\land (Q\land R))\Leftrightarrow ((P\land Q)\land R),} where " ⇔ {\displaystyle \Leftrightarrow } " is a metalogical symbol representing "can be replaced in a proof with". === Truth functional connectives === Associativity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that associativity is a property of particular connectives. The following (and their converses, since ↔ is commutative) are truth-functional tautologies. Associativity of disjunction ( ( P ∨ Q ) ∨ R ) ↔ ( P ∨ ( Q ∨ R ) ) {\displaystyle ((P\lor Q)\lor R)\leftrightarrow (P\lor (Q\lor R))} Associativity of conjunction ( ( P ∧ Q ) ∧ R ) ↔ ( P ∧ ( Q ∧ R ) ) {\displaystyle ((P\land Q)\land R)\leftrightarrow (P\land (Q\land R))} Associativity of equivalence ( ( P ↔ Q ) ↔ R ) ↔ ( P ↔ ( Q ↔ R ) ) {\displaystyle ((P\leftrightarrow Q)\leftrightarrow R)\leftrightarrow (P\leftrightarrow (Q\leftrightarrow R))} Joint denial is an example of a truth functional connective that is not associative. == Non-associative operation == A binary operation ∗ {\displaystyle *} on a set S that does not satisfy the associative law is called non-associative. Symbolically, ( x ∗ y ) ∗ z ≠ x ∗ ( y ∗ z ) for some x , y , z ∈ S . {\displaystyle (x*y)*z\neq x*(y*z)\qquad {\mbox{for some }}x,y,z\in S.} For such an operation the order of evaluation does matter. 
For example: Subtraction ( 5 − 3 ) − 2 ≠ 5 − ( 3 − 2 ) {\displaystyle (5-3)-2\,\neq \,5-(3-2)} Division ( 4 / 2 ) / 2 ≠ 4 / ( 2 / 2 ) {\displaystyle (4/2)/2\,\neq \,4/(2/2)} Exponentiation 2 ( 1 2 ) ≠ ( 2 1 ) 2 {\displaystyle 2^{(1^{2})}\,\neq \,(2^{1})^{2}} Vector cross product i × ( i × j ) = i × k = − j ( i × i ) × j = 0 × j = 0 {\displaystyle {\begin{aligned}\mathbf {i} \times (\mathbf {i} \times \mathbf {j} )&=\mathbf {i} \times \mathbf {k} =-\mathbf {j} \\(\mathbf {i} \times \mathbf {i} )\times \mathbf {j} &=\mathbf {0} \times \mathbf {j} =\mathbf {0} \end{aligned}}} Also although addition is associative for finite sums, it is not associative inside infinite sums (series). For example, ( 1 + − 1 ) + ( 1 + − 1 ) + ( 1 + − 1 ) + ( 1 + − 1 ) + ( 1 + − 1 ) + ( 1 + − 1 ) + ⋯ = 0 {\displaystyle (1+-1)+(1+-1)+(1+-1)+(1+-1)+(1+-1)+(1+-1)+\dots =0} whereas 1 + ( − 1 + 1 ) + ( − 1 + 1 ) + ( − 1 + 1 ) + ( − 1 + 1 ) + ( − 1 + 1 ) + ( − 1 + 1 ) + ⋯ = 1. {\displaystyle 1+(-1+1)+(-1+1)+(-1+1)+(-1+1)+(-1+1)+(-1+1)+\dots =1.} Some non-associative operations are fundamental in mathematics. They appear often as the multiplication in structures called non-associative algebras, which have also an addition and a scalar multiplication. Examples are the octonions and Lie algebras. In Lie algebras, the multiplication satisfies Jacobi identity instead of the associative law; this allows abstracting the algebraic nature of infinitesimal transformations. Other examples are quasigroup, quasifield, non-associative ring, and commutative non-associative magmas. === Nonassociativity of floating point calculation === In mathematics, addition and multiplication of real numbers are associative. By contrast, in computer science, addition and multiplication of floating point numbers are not associative, as different rounding errors may be introduced when dissimilar-sized values are joined in a different order. To illustrate this, consider a floating point representation with a 4-bit significand: Even though most computers compute with 24 or 53 bits of significand, this is still an important source of rounding error, and approaches such as the Kahan summation algorithm are ways to minimise the errors. It can be especially problematic in parallel computing. === Notation for non-associative operations === In general, parentheses must be used to indicate the order of evaluation if a non-associative operation appears more than once in an expression (unless the notation specifies the order in another way, like 2 3 / 4 {\displaystyle {\dfrac {2}{3/4}}} ). However, mathematicians agree on a particular order of evaluation for several common non-associative operations. This is simply a notational convention to avoid parentheses. A left-associative operation is a non-associative operation that is conventionally evaluated from left to right, i.e., a ∗ b ∗ c = ( a ∗ b ) ∗ c a ∗ b ∗ c ∗ d = ( ( a ∗ b ) ∗ c ) ∗ d a ∗ b ∗ c ∗ d ∗ e = ( ( ( a ∗ b ) ∗ c ) ∗ d ) ∗ e etc. } for all a , b , c , d , e ∈ S {\displaystyle \left.{\begin{array}{l}a*b*c=(a*b)*c\\a*b*c*d=((a*b)*c)*d\\a*b*c*d*e=(((a*b)*c)*d)*e\quad \\{\mbox{etc.}}\end{array}}\right\}{\mbox{for all }}a,b,c,d,e\in S} while a right-associative operation is conventionally evaluated from right to left: x ∗ y ∗ z = x ∗ ( y ∗ z ) w ∗ x ∗ y ∗ z = w ∗ ( x ∗ ( y ∗ z ) ) v ∗ w ∗ x ∗ y ∗ z = v ∗ ( w ∗ ( x ∗ ( y ∗ z ) ) ) etc. 
} for all z , y , x , w , v ∈ S {\displaystyle \left.{\begin{array}{l}x*y*z=x*(y*z)\\w*x*y*z=w*(x*(y*z))\quad \\v*w*x*y*z=v*(w*(x*(y*z)))\quad \\{\mbox{etc.}}\end{array}}\right\}{\mbox{for all }}z,y,x,w,v\in S} Both left-associative and right-associative operations occur. Left-associative operations include the following: Subtraction and division of real numbers x − y − z = ( x − y ) − z {\displaystyle x-y-z=(x-y)-z} x / y / z = ( x / y ) / z {\displaystyle x/y/z=(x/y)/z} Function application ( f x y ) = ( ( f x ) y ) {\displaystyle (f\,x\,y)=((f\,x)\,y)} This notation can be motivated by the currying isomorphism, which enables partial application. Right-associative operations include the following: Exponentiation of real numbers in superscript notation x y z = x ( y z ) {\displaystyle x^{y^{z}}=x^{(y^{z})}} Exponentiation is commonly used with brackets or right-associatively because a repeated left-associative exponentiation operation is of little use. Repeated powers would mostly be rewritten with multiplication: ( x y ) z = x ( y z ) {\displaystyle (x^{y})^{z}=x^{(yz)}} Formatted correctly, the superscript inherently behaves as a set of parentheses; e.g. in the expression 2 x + 3 {\displaystyle 2^{x+3}} the addition is performed before the exponentiation despite there being no explicit parentheses 2 ( x + 3 ) {\displaystyle 2^{(x+3)}} wrapped around it. Thus given an expression such as x y z {\displaystyle x^{y^{z}}} , the full exponent y z {\displaystyle y^{z}} of the base x {\displaystyle x} is evaluated first. However, in some contexts, especially in handwriting, the difference between x y z = ( x y ) z {\displaystyle {x^{y}}^{z}=(x^{y})^{z}} , x y z = x ( y z ) {\displaystyle x^{yz}=x^{(yz)}} and x y z = x ( y z ) {\displaystyle x^{y^{z}}=x^{(y^{z})}} can be hard to see. In such a case, right-associativity is usually implied. Function definition Z → Z → Z = Z → ( Z → Z ) {\displaystyle \mathbb {Z} \rightarrow \mathbb {Z} \rightarrow \mathbb {Z} =\mathbb {Z} \rightarrow (\mathbb {Z} \rightarrow \mathbb {Z} )} x ↦ y ↦ x − y = x ↦ ( y ↦ x − y ) {\displaystyle x\mapsto y\mapsto x-y=x\mapsto (y\mapsto x-y)} Using right-associative notation for these operations can be motivated by the Curry–Howard correspondence and by the currying isomorphism. Non-associative operations for which no conventional evaluation order is defined include the following. Exponentiation of real numbers in infix notation ( x ∧ y ) ∧ z ≠ x ∧ ( y ∧ z ) {\displaystyle (x^{\wedge }y)^{\wedge }z\neq x^{\wedge }(y^{\wedge }z)} Knuth's up-arrow operators a ↑↑ ( b ↑↑ c ) ≠ ( a ↑↑ b ) ↑↑ c {\displaystyle a\uparrow \uparrow (b\uparrow \uparrow c)\neq (a\uparrow \uparrow b)\uparrow \uparrow c} a ↑↑↑ ( b ↑↑↑ c ) ≠ ( a ↑↑↑ b ) ↑↑↑ c {\displaystyle a\uparrow \uparrow \uparrow (b\uparrow \uparrow \uparrow c)\neq (a\uparrow \uparrow \uparrow b)\uparrow \uparrow \uparrow c} Taking the cross product of three vectors a → × ( b → × c → ) ≠ ( a → × b → ) × c → for some a → , b → , c → ∈ R 3 {\displaystyle {\vec {a}}\times ({\vec {b}}\times {\vec {c}})\neq ({\vec {a}}\times {\vec {b}})\times {\vec {c}}\qquad {\mbox{ for some }}{\vec {a}},{\vec {b}},{\vec {c}}\in \mathbb {R} ^{3}} Taking the pairwise average of real numbers ( x + y ) / 2 + z 2 ≠ x + ( y + z ) / 2 2 for all x , y , z ∈ R with x ≠ z . 
{\displaystyle {(x+y)/2+z \over 2}\neq {x+(y+z)/2 \over 2}\qquad {\mbox{for all }}x,y,z\in \mathbb {R} {\mbox{ with }}x\neq z.} Taking the relative complement of sets ( A ∖ B ) ∖ C ≠ A ∖ ( B ∖ C ) {\displaystyle (A\backslash B)\backslash C\neq A\backslash (B\backslash C)} .(Compare material nonimplication in logic.) == History == William Rowan Hamilton seems to have coined the term "associative property" around 1844, a time when he was contemplating the non-associative algebra of the octonions he had learned about from John T. Graves. == See also == Light's associativity test Telescoping series, the use of addition associativity for cancelling terms in an infinite series A semigroup is a set with an associative binary operation. Commutativity and distributivity are two other frequently discussed properties of binary operations. Power associativity, alternativity, flexibility and N-ary associativity are weak forms of associativity. Moufang identities also provide a weak form of associativity. == References ==
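As a concrete illustration of the floating-point remark in the section above (an added sketch, not part of the article; the helper name kahan_sum is arbitrary), the following shows how the grouping of a sum changes the rounded result in IEEE double precision, and how compensated summation reduces the effect.

```python
# Floating-point addition is not associative: the grouping changes the rounding.
import math

x, y, z = 1e16, -1e16, 1.0
print((x + y) + z)           # 1.0 -- the large terms cancel first
print(x + (y + z))           # 0.0 -- 1.0 is absorbed by -1e16 before the cancellation
print(math.fsum([x, y, z]))  # 1.0 -- exact summation of the three terms

# Kahan-style compensated summation, one way to reduce this grouping sensitivity.
def kahan_sum(values):
    total, c = 0.0, 0.0
    for v in values:
        yv = v - c
        t = total + yv
        c = (t - total) - yv
        total = t
    return total

print(kahan_sum([x, y, z]))  # 1.0
```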
Wikipedia:Associator#0
In abstract algebra, the term associator is used in different ways as a measure of the non-associativity of an algebraic structure. Associators are commonly studied as triple systems. == Ring theory == For a non-associative ring or algebra R, the associator is the multilinear map [ ⋅ , ⋅ , ⋅ ] : R × R × R → R {\displaystyle [\cdot ,\cdot ,\cdot ]:R\times R\times R\to R} given by [ x , y , z ] = ( x y ) z − x ( y z ) . {\displaystyle [x,y,z]=(xy)z-x(yz).} Just as the commutator [ x , y ] = x y − y x {\displaystyle [x,y]=xy-yx} measures the degree of non-commutativity, the associator measures the degree of non-associativity of R. For an associative ring or algebra the associator is identically zero. The associator in any ring obeys the identity w [ x , y , z ] + [ w , x , y ] z = [ w x , y , z ] − [ w , x y , z ] + [ w , x , y z ] . {\displaystyle w[x,y,z]+[w,x,y]z=[wx,y,z]-[w,xy,z]+[w,x,yz].} The associator is alternating precisely when R is an alternative ring. The associator is symmetric in its two rightmost arguments when R is a pre-Lie algebra. The nucleus is the set of elements that associate with all others: that is, the n in R such that [ n , R , R ] = [ R , n , R ] = [ R , R , n ] = { 0 } . {\displaystyle [n,R,R]=[R,n,R]=[R,R,n]=\{0\}\ .} The nucleus is an associative subring of R. == Quasigroup theory == A quasigroup Q is a set with a binary operation ⋅ : Q × Q → Q {\displaystyle \cdot :Q\times Q\to Q} such that for each a, b in Q, the equations a ⋅ x = b {\displaystyle a\cdot x=b} and y ⋅ a = b {\displaystyle y\cdot a=b} have unique solutions x, y in Q. In a quasigroup Q, the associator is the map ( ⋅ , ⋅ , ⋅ ) : Q × Q × Q → Q {\displaystyle (\cdot ,\cdot ,\cdot ):Q\times Q\times Q\to Q} defined by the equation ( a ⋅ b ) ⋅ c = ( a ⋅ ( b ⋅ c ) ) ⋅ ( a , b , c ) {\displaystyle (a\cdot b)\cdot c=(a\cdot (b\cdot c))\cdot (a,b,c)} for all a, b, c in Q. As with its ring theory analog, the quasigroup associator is a measure of nonassociativity of Q. == Higher-dimensional algebra == In higher-dimensional algebra, where there may be non-identity morphisms between algebraic expressions, an associator is an isomorphism a x , y , z : ( x y ) z ↦ x ( y z ) . {\displaystyle a_{x,y,z}:(xy)z\mapsto x(yz).} == Category theory == In category theory, the associator expresses the associative properties of the internal product functor in monoidal categories. == See also == Commutator Non-associative algebra Quasi-bialgebra – discusses the Drinfeld associator == References == Bremner, M.; Hentzel, I. (March 2002). "Identities for the Associator in Alternative Algebras". Journal of Symbolic Computation. 33 (3): 255–273. CiteSeerX 10.1.1.85.1905. doi:10.1006/jsco.2001.0510. Schafer, Richard D. (1995) [1966]. An Introduction to Nonassociative Algebras. Dover. ISBN 0-486-68813-5.
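As an added illustration of the ring-theoretic associator (not part of the article; the function name associator is arbitrary and NumPy is an assumed dependency), the sketch below evaluates [x, y, z] = (xy)z − x(yz) for two bilinear products: matrix multiplication, which is associative, and the vector cross product, which is not.

```python
# Associator [x, y, z] = (x*y)*z - x*(y*z) for a given bilinear product.
import numpy as np

def associator(product, x, y, z):
    """Return (x*y)*z - x*(y*z), the measure of non-associativity at (x, y, z)."""
    return product(product(x, y), z) - product(x, product(y, z))

if __name__ == "__main__":
    # Matrix multiplication is associative, so its associator vanishes
    # (up to floating-point rounding).
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
    print(np.allclose(associator(np.matmul, A, B, C), 0.0))   # True

    # The cross product is not associative:
    # (i x i) x j = 0, while i x (i x j) = i x k = -j, so the associator is j.
    i = np.array([1.0, 0.0, 0.0])
    j = np.array([0.0, 1.0, 0.0])
    print(associator(np.cross, i, i, j))                      # [0. 1. 0.]
```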
Wikipedia:Assouad dimension#0
In mathematics — specifically, in fractal geometry — the Assouad dimension is a definition of fractal dimension for subsets of a metric space. It was introduced by Patrice Assouad in his 1977 PhD thesis and later published in 1979, although the same notion had been studied in 1928 by Georges Bouligand. As well as being used to study fractals, the Assouad dimension has also been used to study quasiconformal mappings and embeddability problems. == Definition == Let ( X , d ) {\displaystyle (X,d)} be a metric space, and let E be a non-empty subset of X. For r > 0, let N r ( E ) {\displaystyle N_{r}(E)} denote the least number of metric open balls of radius less than or equal to r with which it is possible to cover the set E. The Assouad dimension of E is defined to be the infimal α ≥ 0 {\displaystyle \alpha \geq 0} for which there exist positive constants C and ρ {\displaystyle \rho } so that, whenever 0 < r < R ≤ ρ , {\displaystyle 0<r<R\leq \rho ,} the following bound holds: sup x ∈ E N r ( B R ( x ) ∩ E ) ≤ C ( R r ) α . {\displaystyle \sup _{x\in E}N_{r}(B_{R}(x)\cap E)\leq C\left({\frac {R}{r}}\right)^{\alpha }.} Equivalently, the Assouad dimension of X, denoted d A ( X ) {\displaystyle d_{A}(X)} , is the infimum of all s {\displaystyle s} such that ( X , d ) {\displaystyle (X,d)} is ( M , s ) {\displaystyle (M,s)} -homogeneous for some M ≥ 1 {\displaystyle M\geq 1} , meaning that every ball of radius R can be covered by at most M ( R / r ) s {\displaystyle M(R/r)^{s}} balls of radius r whenever 0 < r < R. The intuition underlying this definition is that, for a set E with "ordinary" integer dimension n, the number of small balls of radius r needed to cover the intersection of a larger ball of radius R with E will scale like ( R / r ) n {\displaystyle (R/r)^{n}} . == Relationships to other notions of dimension == The Assouad dimension of a metric space is always greater than or equal to its Assouad–Nagata dimension. The Assouad dimension of a metric space is always greater than or equal to its upper box dimension, which in turn is greater than or equal to the Hausdorff dimension. The Lebesgue covering dimension of a metrizable space X is the minimal Assouad dimension of any metric on X. In particular, for every metrizable space there is a metric for which the Assouad dimension is equal to the Lebesgue covering dimension. == References == == Further reading == Fraser, Jonathan M. (2020). Assouad Dimension and Fractal Geometry. Cambridge University Press. doi:10.1017/9781108778459. ISBN 9781108478656. S2CID 218571013.
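The covering bound in the definition can be illustrated numerically on the middle-thirds Cantor set, whose Assouad dimension is log 2 / log 3. The following sketch is an added illustration, not part of the article; the helper names and the choice of scales are arbitrary, and the count of level-k construction intervals meeting a ball is used as a stand-in for the covering number N_r(B_R(x) ∩ E), which it bounds up to a constant factor.

```python
# Rough numerical illustration of the covering bound behind the Assouad dimension,
# using the middle-thirds Cantor set E, for which d_A(E) = log 2 / log 3.
from itertools import product
from math import log

def cantor_left_endpoints(level):
    """Left endpoints of the 2**level intervals of the level-`level` Cantor construction."""
    return [sum(d * 3.0 ** -(i + 1) for i, d in enumerate(digits))
            for digits in product((0, 2), repeat=level)]

def interval_count_in_ball(level, centre, R):
    """Number of level-`level` construction intervals meeting the ball B_R(centre).

    Each such interval has length 3**-level, hence fits inside a single ball of
    radius r = 3**-level, so this count bounds N_r(B_R(centre) ∩ E) up to a constant.
    """
    r = 3.0 ** -level
    return sum(1 for p in cantor_left_endpoints(level)
               if p + r >= centre - R and p <= centre + R)

if __name__ == "__main__":
    k, m = 10, 3                               # scales r = 3**-k and R = 3**-m
    R, r = 3.0 ** -m, 3.0 ** -k
    N = interval_count_in_ball(k, centre=0.0, R=R)
    alpha = log(2) / log(3)
    print(N, (R / r) ** alpha)                 # both are of the order 2**(k-m) = 128
```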
Wikipedia:Assyr Abdulle#0
Assyr Abdulle (19 January 1971 – 1 September 2021) was a Swiss mathematician. He specialized in numerical mathematics. == Biography == Abdulle earned a doctorate in mathematics under Gerhard Wanner and Ernst Hairer at the University of Geneva with the thesis Méthodes de Chebyshev basées sur des polynômes orthogonaux. He also earned a degree in violin and music from the Conservatoire de Musique de Genève in 1993. From 2001 to 2002, he was a postdoctoral researcher at Princeton University, and he worked at the computational laboratory at ETH Zurich from 2002 to 2003. In 2003, he became an assistant professor at the University of Basel, and in 2007 an associate professor at the University of Edinburgh. He then became a full professor at the École Polytechnique Fédérale de Lausanne. At the school, he started the master's degree in computational science. In 2016, he became Director of the Institut Mathicse and was founding Director of the Institut de Mathématiques in 2017. Abdulle was passionate about modeling and numerical simulation in biology, chemistry, geology, and medicine. He notably contributed to the development of heterogeneous multi-scale methods. He developed methods for solving multiscale and ergodic stochastic problems. He also invented the orthogonal Runge–Kutta–Chebyshev (ROCK) methods for solving stiff differential equations, which were later generalized to multiscale stochastic systems. In 2005, Abdulle won the New Talent Award at the International Conference on Scientific Computation and Differential Equations. He received an advanced research fellowship from the Engineering and Physical Sciences Research Council in 2007. In 2009, he won the James H. Wilkinson Prize in Numerical Analysis and Scientific Computing, awarded by the Society for Industrial and Applied Mathematics, for his contributions to applied mathematics. He won the Germund Dahlquist Prize in 2013. Assyr Abdulle died on 1 September 2021 at the age of 50. == References ==
Wikipedia:Asthana Kolahalam#0
Asthana Kolahalam is the title of two different Tamil books, both dealing with elementary mathematics but with totally different contents. One of them was published by the Government Oriental Manuscripts Library (GOML), Madras (now Chennai) in 1951 with 167 pages, and the other was published by the Saraswathi Mahal Library (SML), Thanjavur, Tamil Nadu in 2004 with 306 pages. Both books are based on old palm-leaf manuscripts with the same title composed in the form of verses, and both contain explanations and illustrative examples. The title "Asthana Kolahalam" may be literally translated as "assembly room (royal audience place) uproar", and the term "uproar" suggests making the audience happy, jovial and cheerful. == GOML's "Asthana Kolahalam" == GOML's "Asthana Kolahalam" publication is based on a single palm-leaf manuscript submitted to GOML in 1921 by one Sankaravenkataramayyangar of Periyakulam. The work contains 57 stanzas of various meters including an invocation stanza. These stanzas specify rules for carrying out elementary arithmetical operations. One interesting feature of this publication is that the bulk of it is devoted to discussing topics in mathematics not contained in the various stanzas of the original manuscript. After completing the explanation of stanza 46 on page 52, the editor embarks on a grand detour of various topics in mathematics and returns to stanza 47 only on page 140. The context for this digression is apparently the value of the mathematical constant pi. The topics covered in this detour include the history of the computations of the value of pi (incidentally, he also mentions Sangamagrama Madhava's approximate value of pi, namely the value 2827433388233/(9 × 10^11)), the various trigonometric functions, the formula for the area of a circle, formulas for the surface area and volume of cones, cylinders, etc., formulas for the circumference and area of ellipses, the Pythagorean theorem and several related geometrical problems. == SML's "Asthana Kolahalam" == SML's "Asthana Kolahalam" is a work compiled from three different palm-leaf manuscripts kept in SML, all having the same title. There are altogether 92 verses in the work. As per one of the verses in the text, the author of the work is Naviliperumal, son of Nagan. The date of composition of the manuscripts has not been determined. This publication carries detailed explanations of the various verses by K. Sathyabhama. == See also == Kanakkusaram Kaṇita Tīpikai Kaṇakkatikāram == References ==
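The approximate value of pi attributed to Sangamagrama Madhava and quoted above is easy to verify numerically; the following short sketch is an added illustration, not part of either publication.

```python
# Check the approximation of pi quoted above: 2827433388233 / (9 * 10**11).
import math

madhava = 2827433388233 / (9 * 10**11)
print(f"{madhava:.15f}")                  # 3.141592653592222
print(f"{math.pi:.15f}")                  # 3.141592653589793
print(f"{abs(madhava - math.pi):.1e}")    # ~2.4e-12: agrees with pi to 11 decimal places when rounded
```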
Wikipedia:Asuman Aksoy#0
Asuman Güven Aksoy is a Turkish-American mathematician whose research concerns topics in functional analysis, metric geometry, and operator theory including Banach spaces, measures of non-compactness, fixed points, Birnbaum–Orlicz spaces, real trees, injective metric spaces, and tight spans. She works at Claremont McKenna College, where she is Crown Professor of Mathematics and George R. Roberts Fellow. == Education == Aksoy studied mathematics and physics at Ankara University, graduating with a bachelor's degree in 1976. She earned a master's degree in mathematics at Middle East Technical University in 1978, with a thesis Subspaces of Nuclear Fréchet Spaces supervised by Tosun Terzioğlu. She moved to the United States in 1978 for additional graduate study at the University of Michigan, and eventually became a US citizen. She completed her doctorate at the University of Michigan in 1984. Her dissertation, Approximation Schemes, Related s {\displaystyle s} -Numbers, and Applications, was supervised by Melapalayam S. Ramanujan. == Career == After completing her doctorate, Aksoy joined the faculty of Oakland University in 1984, and was tenured there in 1987. She moved to Claremont McKenna in 1990, and chaired the mathematics department there from 1997 to 2000 and again from 2007 to 2009. She was given the Crown Professorship and Roberts Fellowship in 2009. == Books == With Mohamed Amine Khamsi, Aksoy is the author of two books: Nonstandard Methods in Fixed Point Theory (Universitext, Springer, 1990) A Problem Book in Real Analysis (Springer, 2009) == Recognition == In 2006 the Southern California–Nevada Section of the Mathematical Association of America gave Aksoy their annual Award for Distinguished College or University Teaching of Mathematics. == References == == External links == Official website Asuman Aksoy publications indexed by Google Scholar
Wikipedia:Asymmetric norm#0
In mathematics, an asymmetric norm on a vector space is a generalization of the concept of a norm. == Definition == An asymmetric norm on a real vector space X {\displaystyle X} is a function p : X → [ 0 , + ∞ ) {\displaystyle p:X\to [0,+\infty )} that has the following properties: Subadditivity, or the triangle inequality: p ( x + y ) ≤ p ( x ) + p ( y ) for all x , y ∈ X . {\displaystyle p(x+y)\leq p(x)+p(y){\text{ for all }}x,y\in X.} Nonnegative homogeneity: p ( r x ) = r p ( x ) for all x ∈ X {\displaystyle p(rx)=rp(x){\text{ for all }}x\in X} and every non-negative real number r ≥ 0. {\displaystyle r\geq 0.} Positive definiteness: p ( x ) > 0 unless x = 0 {\displaystyle p(x)>0{\text{ unless }}x=0} Asymmetric norms differ from norms in that they need not satisfy the equality p ( − x ) = p ( x ) . {\displaystyle p(-x)=p(x).} If the condition of positive definiteness is omitted, then p {\displaystyle p} is an asymmetric seminorm. A weaker condition than positive definiteness is non-degeneracy: that for x ≠ 0 , {\displaystyle x\neq 0,} at least one of the two numbers p ( x ) {\displaystyle p(x)} and p ( − x ) {\displaystyle p(-x)} is not zero. == Examples == On the real line R , {\displaystyle \mathbb {R} ,} the function p {\displaystyle p} given by p ( x ) = { | x | , x ≤ 0 ; 2 | x | , x ≥ 0 ; {\displaystyle p(x)={\begin{cases}|x|,&x\leq 0;\\2|x|,&x\geq 0;\end{cases}}} is an asymmetric norm but not a norm. In a real vector space X , {\displaystyle X,} the Minkowski functional p B {\displaystyle p_{B}} of a convex subset B ⊆ X {\displaystyle B\subseteq X} that contains the origin is defined by the formula p B ( x ) = inf { r ≥ 0 : x ∈ r B } {\displaystyle p_{B}(x)=\inf \left\{r\geq 0:x\in rB\right\}\,} for x ∈ X {\displaystyle x\in X} . This functional is an asymmetric seminorm if B {\displaystyle B} is an absorbing set, which means that ⋃ r ≥ 0 r B = X , {\displaystyle \bigcup _{r\geq 0}rB=X,} and ensures that p ( x ) {\displaystyle p(x)} is finite for each x ∈ X . {\displaystyle x\in X.} == Correspondence between asymmetric seminorms and convex subsets of the dual space == If B ∗ ⊆ R n {\displaystyle B^{*}\subseteq \mathbb {R} ^{n}} is a compact convex set that contains the origin, then an asymmetric seminorm p {\displaystyle p} can be defined on R n {\displaystyle \mathbb {R} ^{n}} by the formula p ( x ) = max φ ∈ B ∗ ⟨ φ , x ⟩ . {\displaystyle p(x)=\max _{\varphi \in B^{*}}\langle \varphi ,x\rangle .} For instance, if B ∗ ⊆ R 2 {\displaystyle B^{*}\subseteq \mathbb {R} ^{2}} is the square with vertices ( ± 1 , ± 1 ) , {\displaystyle (\pm 1,\pm 1),} then p {\displaystyle p} is the taxicab norm x = ( x 0 , x 1 ) ↦ | x 0 | + | x 1 | . {\displaystyle x=\left(x_{0},x_{1}\right)\mapsto \left|x_{0}\right|+\left|x_{1}\right|.} Different convex sets yield different seminorms, and every asymmetric seminorm on R n {\displaystyle \mathbb {R} ^{n}} can be obtained from some compact convex set, called its dual unit ball. Therefore, asymmetric seminorms are in one-to-one correspondence with compact convex sets that contain the origin. The seminorm p {\displaystyle p} is positive definite if and only if B ∗ {\displaystyle B^{*}} contains the origin in its topological interior, degenerate if and only if B ∗ {\displaystyle B^{*}} is contained in a linear subspace of dimension less than n , {\displaystyle n,} and symmetric if and only if B ∗ = − B ∗ . 
{\displaystyle B^{*}=-B^{*}.} More generally, if X {\displaystyle X} is a finite-dimensional real vector space and B ∗ ⊆ X ∗ {\displaystyle B^{*}\subseteq X^{*}} is a compact convex subset of the dual space X ∗ {\displaystyle X^{*}} that contains the origin, then p ( x ) = max φ ∈ B ∗ φ ( x ) {\displaystyle p(x)=\max _{\varphi \in B^{*}}\varphi (x)} is an asymmetric seminorm on X . {\displaystyle X.} == See also == Finsler manifold – Generalization of Riemannian manifolds Minkowski functional – Function made from a set == References == Cobzaş, S. (2006). "Compact operators on spaces with asymmetric norm". Stud. Univ. Babeş-Bolyai Math. 51 (4): 69–87. arXiv:math/0608031. Bibcode:2006math......8031C. ISSN 0252-1938. MR 2314639. S. Cobzas, Functional Analysis in Asymmetric Normed Spaces, Frontiers in Mathematics, Basel: Birkhäuser, 2013; ISBN 978-3-0348-0477-6.
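As a concrete illustration of the constructions above, the following Python sketch (illustrative only; the helper names are ad hoc, and the dual set is assumed to be a polytope so that the maximum over it is attained at a vertex) evaluates the asymmetric norm from the first example and rebuilds a seminorm from the vertices of a dual set, recovering the taxicab norm for the square with vertices (±1, ±1):

```python
from itertools import product

def p_line(x: float) -> float:
    """The asymmetric norm on R from the example: |x| for x <= 0, 2|x| for x >= 0."""
    return 2 * abs(x) if x >= 0 else abs(x)

print(p_line(3.0), p_line(-3.0))     # 6.0 and 3.0, so p(-x) != p(x): not a norm

def seminorm_from_dual_vertices(vertices):
    """p(x) = max over phi in B* of <phi, x>, for a polytopal B* given by its vertices."""
    def p(x):
        return max(sum(f * xi for f, xi in zip(phi, x)) for phi in vertices)
    return p

square = list(product([-1.0, 1.0], repeat=2))        # vertices (+-1, +-1)
p_taxi = seminorm_from_dual_vertices(square)
print(p_taxi((3.0, -4.0)))                           # 7.0 = |3| + |-4|, the taxicab norm

triangle = [(2.0, 0.0), (-1.0, 1.0), (-1.0, -1.0)]   # a non-symmetric B* containing the origin
q = seminorm_from_dual_vertices(triangle)
print(q((1.0, 0.0)), q((-1.0, 0.0)))                 # 2.0 and 1.0: an asymmetric seminorm
```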
Wikipedia:Asymptote#0
In analytic geometry, an asymptote () of a curve is a line such that the distance between the curve and the line approaches zero as one or both of the x or y coordinates tends to infinity. In projective geometry and related contexts, an asymptote of a curve is a line which is tangent to the curve at a point at infinity. The word asymptote is derived from the Greek ἀσύμπτωτος (asumptōtos) which means "not falling together", from ἀ priv. + σύν "together" + πτωτ-ός "fallen". The term was introduced by Apollonius of Perga in his work on conic sections, but in contrast to its modern meaning, he used it to mean any line that does not intersect the given curve. There are three kinds of asymptotes: horizontal, vertical and oblique. For curves given by the graph of a function y = ƒ(x), horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. Vertical asymptotes are vertical lines near which the function grows without bound. An oblique asymptote has a slope that is non-zero but finite, such that the graph of the function approaches it as x tends to +∞ or −∞. More generally, one curve is a curvilinear asymptote of another (as opposed to a linear asymptote) if the distance between the two curves tends to zero as they tend to infinity, although the term asymptote by itself is usually reserved for linear asymptotes. Asymptotes convey information about the behavior of curves in the large, and determining the asymptotes of a function is an important step in sketching its graph. The study of asymptotes of functions, construed in a broad sense, forms a part of the subject of asymptotic analysis. == Introduction == The idea that a curve may come arbitrarily close to a line without actually becoming the same may seem to counter everyday experience. The representations of a line and a curve as marks on a piece of paper or as pixels on a computer screen have a positive width. So if they were to be extended far enough they would seem to merge, at least as far as the eye could discern. But these are physical representations of the corresponding mathematical entities; the line and the curve are idealized concepts whose width is 0 (see Line). Therefore, the understanding of the idea of an asymptote requires an effort of reason rather than experience. Consider the graph of the function f ( x ) = 1 x {\displaystyle f(x)={\frac {1}{x}}} shown in this section. The coordinates of the points on the curve are of the form ( x , 1 x ) {\displaystyle \left(x,{\frac {1}{x}}\right)} where x is a number other than 0. For example, the graph contains the points (1, 1), (2, 0.5), (5, 0.2), (10, 0.1), ... As the values of x {\displaystyle x} become larger and larger, say 100, 1,000, 10,000 ..., putting them far to the right of the illustration, the corresponding values of y {\displaystyle y} , .01, .001, .0001, ..., become infinitesimal relative to the scale shown. But no matter how large x {\displaystyle x} becomes, its reciprocal 1 x {\displaystyle {\frac {1}{x}}} is never 0, so the curve never actually touches the x-axis. Similarly, as the values of x {\displaystyle x} become smaller and smaller, say .01, .001, .0001, ..., making them infinitesimal relative to the scale shown, the corresponding values of y {\displaystyle y} , 100, 1,000, 10,000 ..., become larger and larger. So the curve extends further and further upward as it comes closer and closer to the y-axis. Thus, both the x and y-axis are asymptotes of the curve. 
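The behaviour described above is easy to check numerically; the following minimal Python sketch (illustrative only) prints 1/x for increasingly large x and confirms that the values shrink toward zero without ever reaching it:

```python
# Reciprocals of growing x: they become tiny but are never exactly zero.
for x in [1, 2, 5, 10, 100, 1_000, 10_000, 1_000_000]:
    y = 1 / x
    print(f"x = {x:>9}   1/x = {y:<12.6g}   reached zero? {y == 0}")
```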
These ideas are part of the basis of the concept of a limit in mathematics, and this connection is explained more fully below. == Asymptotes of functions == The asymptotes most commonly encountered in the study of calculus are of curves of the form y = ƒ(x). These can be computed using limits and classified into horizontal, vertical and oblique asymptotes depending on their orientation. Horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. As the name indicates they are parallel to the x-axis. Vertical asymptotes are vertical lines (perpendicular to the x-axis) near which the function grows without bound. Oblique asymptotes are diagonal lines such that the difference between the curve and the line approaches 0 as x tends to +∞ or −∞. === Vertical asymptotes === The line x = a is a vertical asymptote of the graph of the function y = ƒ(x) if at least one of the following statements is true: lim x → a − f ( x ) = ± ∞ , {\displaystyle \lim _{x\to a^{-}}f(x)=\pm \infty ,} lim x → a + f ( x ) = ± ∞ , {\displaystyle \lim _{x\to a^{+}}f(x)=\pm \infty ,} where lim x → a − {\displaystyle \lim _{x\to a^{-}}} is the limit as x approaches the value a from the left (from lesser values), and lim x → a + {\displaystyle \lim _{x\to a^{+}}} is the limit as x approaches a from the right. For example, if ƒ(x) = x/(x–1), the numerator approaches 1 and the denominator approaches 0 as x approaches 1. So lim x → 1 + x x − 1 = + ∞ {\displaystyle \lim _{x\to 1^{+}}{\frac {x}{x-1}}=+\infty } lim x → 1 − x x − 1 = − ∞ {\displaystyle \lim _{x\to 1^{-}}{\frac {x}{x-1}}=-\infty } and the curve has a vertical asymptote x = 1. The function ƒ(x) may or may not be defined at a, and its precise value at the point x = a does not affect the asymptote. For example, the function f ( x ) = { 1 x if x > 0 , 5 if x ≤ 0. {\displaystyle f(x)={\begin{cases}{\frac {1}{x}}&{\text{if }}x>0,\\5&{\text{if }}x\leq 0.\end{cases}}} has a limit of +∞ as x → 0+, so ƒ(x) has the vertical asymptote x = 0, even though ƒ(0) = 5. The graph of this function does intersect the vertical asymptote once, at (0, 5). It is impossible for the graph of a function to intersect a vertical asymptote (or a vertical line in general) in more than one point. Moreover, if a function is continuous at each point where it is defined, it is impossible for its graph to intersect any vertical asymptote. A common example of a vertical asymptote is the case of a rational function at a point x such that the denominator is zero and the numerator is non-zero. If a function has a vertical asymptote, then it isn't necessarily true that the derivative of the function has a vertical asymptote at the same place. An example is f ( x ) = 1 x + sin ⁡ ( 1 x ) {\displaystyle f(x)={\tfrac {1}{x}}+\sin({\tfrac {1}{x}})\quad } at x = 0 {\displaystyle \quad x=0} . This function has a vertical asymptote at x = 0 , {\displaystyle x=0,} because lim x → 0 + f ( x ) = lim x → 0 + ( 1 x + sin ⁡ ( 1 x ) ) = + ∞ , {\displaystyle \lim _{x\to 0^{+}}f(x)=\lim _{x\to 0^{+}}\left({\tfrac {1}{x}}+\sin \left({\tfrac {1}{x}}\right)\right)=+\infty ,} and lim x → 0 − f ( x ) = lim x → 0 − ( 1 x + sin ⁡ ( 1 x ) ) = − ∞ {\displaystyle \lim _{x\to 0^{-}}f(x)=\lim _{x\to 0^{-}}\left({\tfrac {1}{x}}+\sin \left({\tfrac {1}{x}}\right)\right)=-\infty } . The derivative of f {\displaystyle f} is the function f ′ ( x ) = − ( cos ⁡ ( 1 x ) + 1 ) x 2 {\displaystyle f'(x)={\frac {-(\cos({\tfrac {1}{x}})+1)}{x^{2}}}} . 
For the sequence of points x n = ( − 1 ) n ( 2 n + 1 ) π , {\displaystyle x_{n}={\frac {(-1)^{n}}{(2n+1)\pi }},\quad } for n = 0 , 1 , 2 , … {\displaystyle \quad n=0,1,2,\ldots } that approaches x = 0 {\displaystyle x=0} both from the left and from the right, the values f ′ ( x n ) {\displaystyle f'(x_{n})} are constantly 0 {\displaystyle 0} . Therefore, neither one-sided limit of f ′ {\displaystyle f'} at 0 {\displaystyle 0} can be + ∞ {\displaystyle +\infty } or − ∞ {\displaystyle -\infty } . Hence f ′ ( x ) {\displaystyle f'(x)} doesn't have a vertical asymptote at x = 0 {\displaystyle x=0} . === Horizontal asymptotes === Horizontal asymptotes are horizontal lines that the graph of the function approaches as x → ±∞. The horizontal line y = c is a horizontal asymptote of the function y = ƒ(x) if lim x → − ∞ f ( x ) = c {\displaystyle \lim _{x\rightarrow -\infty }f(x)=c} or lim x → + ∞ f ( x ) = c {\displaystyle \lim _{x\rightarrow +\infty }f(x)=c} . In the first case, ƒ(x) has y = c as asymptote when x tends to −∞, and in the second ƒ(x) has y = c as an asymptote as x tends to +∞. For example, the arctangent function satisfies lim x → − ∞ arctan ⁡ ( x ) = − π 2 {\displaystyle \lim _{x\rightarrow -\infty }\arctan(x)=-{\frac {\pi }{2}}} and lim x → + ∞ arctan ⁡ ( x ) = π 2 . {\displaystyle \lim _{x\rightarrow +\infty }\arctan(x)={\frac {\pi }{2}}.} So the line y = –π/2 is a horizontal asymptote for the arctangent when x tends to –∞, and y = π/2 is a horizontal asymptote for the arctangent when x tends to +∞. Functions may lack horizontal asymptotes on either or both sides, or may have one horizontal asymptote that is the same in both directions. For example, the function ƒ(x) = 1/(x2+1) has a horizontal asymptote at y = 0 when x tends both to −∞ and +∞ because, respectively, lim x → − ∞ 1 x 2 + 1 = lim x → + ∞ 1 x 2 + 1 = 0. {\displaystyle \lim _{x\to -\infty }{\frac {1}{x^{2}+1}}=\lim _{x\to +\infty }{\frac {1}{x^{2}+1}}=0.} Other common functions that have one or two horizontal asymptotes include x ↦ 1/x (whose graph is a hyperbola), the Gaussian function x ↦ exp ⁡ ( − x 2 ) , {\displaystyle x\mapsto \exp(-x^{2}),} the error function, and the logistic function. === Oblique asymptotes === When a linear asymptote is not parallel to the x- or y-axis, it is called an oblique asymptote or slant asymptote. A function ƒ(x) is asymptotic to the straight line y = mx + n (m ≠ 0) if lim x → + ∞ [ f ( x ) − ( m x + n ) ] = 0 or lim x → − ∞ [ f ( x ) − ( m x + n ) ] = 0. {\displaystyle \lim _{x\to +\infty }\left[f(x)-(mx+n)\right]=0\,{\mbox{ or }}\lim _{x\to -\infty }\left[f(x)-(mx+n)\right]=0.} In the first case the line y = mx + n is an oblique asymptote of ƒ(x) when x tends to +∞, and in the second case the line y = mx + n is an oblique asymptote of ƒ(x) when x tends to −∞. An example is ƒ(x) = x + 1/x, which has the oblique asymptote y = x (that is m = 1, n = 0) as seen in the limits lim x → ± ∞ [ f ( x ) − x ] {\displaystyle \lim _{x\to \pm \infty }\left[f(x)-x\right]} = lim x → ± ∞ [ ( x + 1 x ) − x ] {\displaystyle =\lim _{x\to \pm \infty }\left[\left(x+{\frac {1}{x}}\right)-x\right]} = lim x → ± ∞ 1 x = 0. {\displaystyle =\lim _{x\to \pm \infty }{\frac {1}{x}}=0.} == Elementary methods for identifying asymptotes == The asymptotes of many elementary functions can be found without the explicit use of limits (although the derivations of such methods typically use limits). 
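The limits in the examples above can be verified with a computer algebra system. The following sketch assumes the SymPy library (an assumption of this example, not a method prescribed by the article) and checks the vertical asymptote of x/(x − 1) at x = 1, the horizontal asymptotes of the arctangent, and the oblique asymptote y = x of x + 1/x:

```python
import sympy as sp

x = sp.symbols('x')

# Vertical asymptote of x/(x - 1) at x = 1: both one-sided limits are infinite.
f = x / (x - 1)
print(sp.limit(f, x, 1, dir='+'), sp.limit(f, x, 1, dir='-'))          # oo, -oo

# Horizontal asymptotes of arctan: y = -pi/2 and y = pi/2.
print(sp.limit(sp.atan(x), x, -sp.oo), sp.limit(sp.atan(x), x, sp.oo))  # -pi/2, pi/2

# Oblique asymptote y = x of x + 1/x: the difference tends to 0 in both directions.
g = x + 1 / x
print(sp.limit(g - x, x, sp.oo), sp.limit(g - x, x, -sp.oo))            # 0, 0
```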
=== General computation of oblique asymptotes for functions === The oblique asymptote, for the function f(x), will be given by the equation y = mx + n. The value for m is computed first and is given by m = def lim x → a f ( x ) / x {\displaystyle m\;{\stackrel {\text{def}}{=}}\,\lim _{x\rightarrow a}f(x)/x} where a is either − ∞ {\displaystyle -\infty } or + ∞ {\displaystyle +\infty } depending on the case being studied. It is good practice to treat the two cases separately. If this limit doesn't exist then there is no oblique asymptote in that direction. Having found m, the value for n can then be computed by n = def lim x → a ( f ( x ) − m x ) {\displaystyle n\;{\stackrel {\text{def}}{=}}\,\lim _{x\rightarrow a}(f(x)-mx)} where a should be the same value used before. If this limit fails to exist then there is no oblique asymptote in that direction, even should the limit defining m exist. Otherwise y = mx + n is the oblique asymptote of ƒ(x) as x tends to a. For example, the function ƒ(x) = (2x2 + 3x + 1)/x has m = lim x → + ∞ f ( x ) / x = lim x → + ∞ 2 x 2 + 3 x + 1 x 2 = 2 {\displaystyle m=\lim _{x\rightarrow +\infty }f(x)/x=\lim _{x\rightarrow +\infty }{\frac {2x^{2}+3x+1}{x^{2}}}=2} and then n = lim x → + ∞ ( f ( x ) − m x ) = lim x → + ∞ ( 2 x 2 + 3 x + 1 x − 2 x ) = 3 {\displaystyle n=\lim _{x\rightarrow +\infty }(f(x)-mx)=\lim _{x\rightarrow +\infty }\left({\frac {2x^{2}+3x+1}{x}}-2x\right)=3} so that y = 2x + 3 is the asymptote of ƒ(x) when x tends to +∞. The function ƒ(x) = ln x has m = lim x → + ∞ f ( x ) / x = lim x → + ∞ ln ⁡ x x = 0 {\displaystyle m=\lim _{x\rightarrow +\infty }f(x)/x=\lim _{x\rightarrow +\infty }{\frac {\ln x}{x}}=0} and then n = lim x → + ∞ ( f ( x ) − m x ) = lim x → + ∞ ln ⁡ x {\displaystyle n=\lim _{x\rightarrow +\infty }(f(x)-mx)=\lim _{x\rightarrow +\infty }\ln x} , which does not exist. So y = ln x does not have an asymptote when x tends to +∞. === Asymptotes for rational functions === A rational function has at most one horizontal asymptote or oblique (slant) asymptote, and possibly many vertical asymptotes. The degree of the numerator and degree of the denominator determine whether or not there are any horizontal or oblique asymptotes. Writing deg(numerator) for the degree of the numerator and deg(denominator) for the degree of the denominator, the cases are as follows: if deg(numerator) < deg(denominator), the horizontal asymptote is y = 0; if deg(numerator) = deg(denominator), the horizontal asymptote is y = c, where c is the ratio of the leading coefficients; if deg(numerator) = deg(denominator) + 1, there is an oblique asymptote; and if deg(numerator) > deg(denominator) + 1, there is neither a horizontal nor an oblique asymptote (though a curvilinear asymptote may exist). The vertical asymptotes occur only when the denominator is zero (If both the numerator and denominator are zero, the multiplicities of the zero are compared). For example, the following function has vertical asymptotes at x = 0 and x = 1, but not at x = 2. f ( x ) = x 2 − 5 x + 6 x 3 − 3 x 2 + 2 x = ( x − 2 ) ( x − 3 ) x ( x − 1 ) ( x − 2 ) {\displaystyle f(x)={\frac {x^{2}-5x+6}{x^{3}-3x^{2}+2x}}={\frac {(x-2)(x-3)}{x(x-1)(x-2)}}} ==== Oblique asymptotes of rational functions ==== When the numerator of a rational function has degree exactly one greater than the denominator, the function has an oblique (slant) asymptote. The asymptote is the quotient obtained by polynomial long division of the numerator by the denominator. This occurs because the division yields a linear quotient plus a remainder term that approaches 0 as x increases. For example, consider the function f ( x ) = x 2 + x + 1 x + 1 = x + 1 x + 1 {\displaystyle f(x)={\frac {x^{2}+x+1}{x+1}}=x+{\frac {1}{x+1}}} As the value of x increases, f approaches the asymptote y = x. This is because the other term, 1/(x+1), approaches 0. 
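The two-limit recipe above translates directly into a small computer algebra routine. The following sketch assumes SymPy; the helper oblique_asymptote is an illustrative name, not a library function:

```python
import sympy as sp

x = sp.symbols('x')

def oblique_asymptote(f, direction=sp.oo):
    """Return m*x + n if lim f/x = m and lim (f - m*x) = n are both finite, else None."""
    m = sp.limit(f / x, x, direction)
    if not m.is_finite:
        return None                      # no linear asymptote in this direction
    n = sp.limit(f - m * x, x, direction)
    if not n.is_finite:
        return None                      # m exists but n does not (e.g. ln x)
    return m * x + n

print(oblique_asymptote((2*x**2 + 3*x + 1) / x))     # 2*x + 3, as computed above
print(oblique_asymptote(sp.log(x)))                  # None: m = 0 but n diverges
print(oblique_asymptote((x**2 + x + 1) / (x + 1)))   # x, matching the division above
```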
If the degree of the numerator is more than 1 larger than the degree of the denominator, and the denominator does not divide the numerator, there will be a nonzero remainder that goes to zero as x increases, but the quotient will not be linear, and the function does not have an oblique asymptote. === Transformations of known functions === If a known function has an asymptote (such as y=0 for f(x)=ex), then the translations of it also have an asymptote. If x=a is a vertical asymptote of f(x), then x=a+h is a vertical asymptote of f(x-h) If y=c is a horizontal asymptote of f(x), then y=c+k is a horizontal asymptote of f(x)+k If a known function has an asymptote, then a scaling of the function also has an asymptote. If y=ax+b is an asymptote of f(x), then y=cax+cb is an asymptote of cf(x) For example, f(x)=ex-1+2 has horizontal asymptote y=0+2=2, and no vertical or oblique asymptotes. == General definition == Let A : (a,b) → R2 be a parametric plane curve, in coordinates A(t) = (x(t),y(t)). Suppose that the curve tends to infinity, that is: lim t → b ( x 2 ( t ) + y 2 ( t ) ) = ∞ . {\displaystyle \lim _{t\rightarrow b}(x^{2}(t)+y^{2}(t))=\infty .} A line ℓ is an asymptote of A if the distance from the point A(t) to ℓ tends to zero as t → b. From the definition, only open curves that have some infinite branch can have an asymptote. No closed curve can have an asymptote. For example, the upper right branch of the curve y = 1/x can be defined parametrically as x = t, y = 1/t (where t > 0). First, x → ∞ as t → ∞ and the distance from the curve to the x-axis is 1/t which approaches 0 as t → ∞. Therefore, the x-axis is an asymptote of the curve. Also, y → ∞ as t → 0 from the right, and the distance between the curve and the y-axis is t which approaches 0 as t → 0. So the y-axis is also an asymptote. A similar argument shows that the lower left branch of the curve also has the same two lines as asymptotes. Although the definition here uses a parameterization of the curve, the notion of asymptote does not depend on the parameterization. In fact, if the equation of the line is a x + b y + c = 0 {\displaystyle ax+by+c=0} then the distance from the point A(t) = (x(t),y(t)) to the line is given by | a x ( t ) + b y ( t ) + c | a 2 + b 2 {\displaystyle {\frac {|ax(t)+by(t)+c|}{\sqrt {a^{2}+b^{2}}}}} If γ(t) is a change of parameterization, then the distance becomes | a x ( γ ( t ) ) + b y ( γ ( t ) ) + c | a 2 + b 2 {\displaystyle {\frac {|ax(\gamma (t))+by(\gamma (t))+c|}{\sqrt {a^{2}+b^{2}}}}} which tends to zero simultaneously with the previous expression. An important case is when the curve is the graph of a real function (a function of one real variable and returning real values). The graph of the function y = ƒ(x) is the set of points of the plane with coordinates (x,ƒ(x)). For this, a parameterization is t ↦ ( t , f ( t ) ) . {\displaystyle t\mapsto (t,f(t)).} This parameterization is to be considered over the open intervals (a,b), where a can be −∞ and b can be +∞. An asymptote can be either vertical or non-vertical (oblique or horizontal). In the first case its equation is x = c, for some real number c. The non-vertical case has equation y = mx + n, where m and n {\displaystyle n} are real numbers. All three types of asymptotes can be present at the same time in specific examples. Unlike asymptotes for curves that are graphs of functions, a general curve may have more than two non-vertical asymptotes, and may cross its vertical asymptotes more than once. 
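The distance computation in this definition is straightforward to carry out numerically. The following plain Python sketch (illustrative only) evaluates the distance from the point A(t) = (t, 1/t) to the x-axis and shows it shrinking as t grows, as in the example above:

```python
from math import hypot

def distance_to_line(point, a, b, c):
    """Distance from (x, y) to the line a*x + b*y + c = 0."""
    x, y = point
    return abs(a * x + b * y + c) / hypot(a, b)

# Upper-right branch of y = 1/x, parametrized as A(t) = (t, 1/t) with t > 0;
# the line is the x-axis, i.e. a = 0, b = 1, c = 0.
for t in [1.0, 10.0, 100.0, 1e4, 1e6]:
    print(t, distance_to_line((t, 1.0 / t), a=0.0, b=1.0, c=0.0))   # decays like 1/t
```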
== Curvilinear asymptotes == Let A : (a,b) → R2 be a parametric plane curve, in coordinates A(t) = (x(t),y(t)), and B be another (unparameterized) curve. Suppose, as before, that the curve A tends to infinity. The curve B is a curvilinear asymptote of A if the shortest distance from the point A(t) to a point on B tends to zero as t → b. Sometimes B is simply referred to as an asymptote of A, when there is no risk of confusion with linear asymptotes. For example, the function y = x 3 + 2 x 2 + 3 x + 4 x {\displaystyle y={\frac {x^{3}+2x^{2}+3x+4}{x}}} has a curvilinear asymptote y = x2 + 2x + 3, which is known as a parabolic asymptote because it is a parabola rather than a straight line. == Asymptotes and curve sketching == Asymptotes are used in procedures of curve sketching. An asymptote serves as a guide line to show the behavior of the curve towards infinity. In order to get better approximations of the curve, curvilinear asymptotes have also been used although the term asymptotic curve seems to be preferred. == Algebraic curves == The asymptotes of an algebraic curve in the affine plane are the lines that are tangent to the projectivized curve through a point at infinity. For example, one may identify the asymptotes to the unit hyperbola in this manner. Asymptotes are often considered only for real curves, although they also make sense when defined in this way for curves over an arbitrary field. A plane curve of degree n intersects its asymptote at most at n−2 other points, by Bézout's theorem, as the intersection at infinity is of multiplicity at least two. For a conic, there are a pair of lines that do not intersect the conic at any complex point: these are the two asymptotes of the conic. A plane algebraic curve is defined by an equation of the form P(x,y) = 0 where P is a polynomial of degree n P ( x , y ) = P n ( x , y ) + P n − 1 ( x , y ) + ⋯ + P 1 ( x , y ) + P 0 {\displaystyle P(x,y)=P_{n}(x,y)+P_{n-1}(x,y)+\cdots +P_{1}(x,y)+P_{0}} where Pk is homogeneous of degree k. Vanishing of the linear factors of the highest degree term Pn defines the asymptotes of the curve: setting Q = Pn, if Pn(x, y) = (ax − by) Qn−1(x, y), then the line Q x ′ ( b , a ) x + Q y ′ ( b , a ) y + P n − 1 ( b , a ) = 0 {\displaystyle Q'_{x}(b,a)x+Q'_{y}(b,a)y+P_{n-1}(b,a)=0} is an asymptote if Q x ′ ( b , a ) {\displaystyle Q'_{x}(b,a)} and Q y ′ ( b , a ) {\displaystyle Q'_{y}(b,a)} are not both zero. If Q x ′ ( b , a ) = Q y ′ ( b , a ) = 0 {\displaystyle Q'_{x}(b,a)=Q'_{y}(b,a)=0} and P n − 1 ( b , a ) ≠ 0 {\displaystyle P_{n-1}(b,a)\neq 0} , there is no asymptote, but the curve has a branch that looks like a branch of parabola. Such a branch is called a parabolic branch, even when it does not have any parabola that is a curvilinear asymptote. If Q x ′ ( b , a ) = Q y ′ ( b , a ) = P n − 1 ( b , a ) = 0 , {\displaystyle Q'_{x}(b,a)=Q'_{y}(b,a)=P_{n-1}(b,a)=0,} the curve has a singular point at infinity which may have several asymptotes or parabolic branches. Over the complex numbers, Pn splits into linear factors, each of which defines an asymptote (or several for multiple factors). Over the reals, Pn splits in factors that are linear or quadratic factors. Only the linear factors correspond to infinite (real) branches of the curve, but if a linear factor has multiplicity greater than one, the curve may have several asymptotes or parabolic branches. 
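For explicit examples, the recipe just described can be automated. The following sketch assumes SymPy and is only an illustration of the stated formula: it extracts the top-degree homogeneous part Pn of the defining polynomial, factors it, and for each linear factor ax − by forms the line Q′x(b, a)x + Q′y(b, a)y + Pn−1(b, a) = 0:

```python
import sympy as sp

x, y = sp.symbols('x y')

def asymptotes(curve):
    """Asymptotes of the algebraic curve curve(x, y) = 0, via the linear factors of P_n."""
    P = sp.Poly(curve, x, y)
    n = P.total_degree()
    parts = {k: sp.Add(*[c * x**i * y**j for (i, j), c in P.terms() if i + j == k])
             for k in range(n + 1)}
    Pn, Pn1 = parts[n], parts[n - 1]
    lines = []
    for factor, _ in sp.factor_list(Pn)[1]:
        if sp.Poly(factor, x, y).total_degree() != 1:
            continue                                   # skip irreducible quadratic factors
        a, b = factor.coeff(x), -factor.coeff(y)       # factor = a*x - b*y
        line = (sp.diff(Pn, x).subs({x: b, y: a}) * x
                + sp.diff(Pn, y).subs({x: b, y: a}) * y
                + Pn1.subs({x: b, y: a}))
        if line.has(x) or line.has(y):                 # otherwise: parabolic branch, no asymptote
            lines.append(sp.Eq(line, 0))
    return lines

print(asymptotes(x**2 - y**2 - 1))   # the lines y = x and y = -x (up to scaling)
print(asymptotes(x*y - 1))           # the two coordinate axes
```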
It may also occur that such a multiple linear factor corresponds to two complex conjugate branches, and does not correspond to any infinite branch of the real curve. For example, the curve x4 + y2 - 1 = 0 has no real points outside the square | x | ≤ 1 , | y | ≤ 1 {\displaystyle |x|\leq 1,|y|\leq 1} , but its highest order term gives the linear factor x with multiplicity 4, leading to the unique asymptote x=0. == Asymptotic cone == The hyperbola x 2 a 2 − y 2 b 2 = 1 {\displaystyle {\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1} has the two asymptotes y = ± b a x . {\displaystyle y=\pm {\frac {b}{a}}x.} The equation for the union of these two lines is x 2 a 2 − y 2 b 2 = 0. {\displaystyle {\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=0.} Similarly, the hyperboloid x 2 a 2 − y 2 b 2 − z 2 c 2 = 1 {\displaystyle {\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}-{\frac {z^{2}}{c^{2}}}=1} is said to have the asymptotic cone x 2 a 2 − y 2 b 2 − z 2 c 2 = 0. {\displaystyle {\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}-{\frac {z^{2}}{c^{2}}}=0.} The distance between the hyperboloid and cone approaches 0 as the distance from the origin approaches infinity. More generally, consider a surface that has an implicit equation P d ( x , y , z ) + P d − 2 ( x , y , z ) + ⋯ + P 0 = 0 , {\displaystyle P_{d}(x,y,z)+P_{d-2}(x,y,z)+\cdots +P_{0}=0,} where the P i {\displaystyle P_{i}} are homogeneous polynomials of degree i {\displaystyle i} and P d − 1 = 0 {\displaystyle P_{d-1}=0} . Then the equation P d ( x , y , z ) = 0 {\displaystyle P_{d}(x,y,z)=0} defines a cone which is centered at the origin. It is called an asymptotic cone, because the distance from a point of the surface to the cone tends to zero as the point on the surface tends to infinity. == See also == Big O notation == References == General references Kuptsov, L.P. (2001) [1994], "Asymptote", Encyclopedia of Mathematics, EMS Press Specific references == External links == Asymptote at PlanetMath. Hyperboloid and Asymptotic Cone, string surface model, 1872 Archived 2012-02-15 at the Wayback Machine from the Science Museum
Wikipedia:Asymptotic analysis#0
In mathematical analysis, asymptotic analysis, also known as asymptotics, is a method of describing limiting behavior. As an illustration, suppose that we are interested in the properties of a function f (n) as n becomes very large. If f(n) = n2 + 3n, then as n becomes very large, the term 3n becomes insignificant compared to n2. The function f(n) is said to be "asymptotically equivalent to n2, as n → ∞". This is often written symbolically as f (n) ~ n2, which is read as "f(n) is asymptotic to n2". An example of an important asymptotic result is the prime number theorem. Let π(x) denote the prime-counting function (which is not directly related to the constant pi), i.e. π(x) is the number of prime numbers that are less than or equal to x. Then the theorem states that π ( x ) ∼ x ln ⁡ x . {\displaystyle \pi (x)\sim {\frac {x}{\ln x}}.} Asymptotic analysis is commonly used in computer science as part of the analysis of algorithms and is often expressed there in terms of big O notation. == Definition == Formally, given functions f (x) and g(x), we define a binary relation f ( x ) ∼ g ( x ) ( as x → ∞ ) {\displaystyle f(x)\sim g(x)\quad ({\text{as }}x\to \infty )} if and only if (de Bruijn 1981, §1.4) lim x → ∞ f ( x ) g ( x ) = 1. {\displaystyle \lim _{x\to \infty }{\frac {f(x)}{g(x)}}=1.} The symbol ~ is the tilde. The relation is an equivalence relation on the set of functions of x; the functions f and g are said to be asymptotically equivalent. The domain of f and g can be any set for which the limit is defined: e.g. real numbers, complex numbers, positive integers. The same notation is also used for other ways of passing to a limit: e.g. x → 0, x ↓ 0, |x| → 0. The way of passing to the limit is often not stated explicitly, if it is clear from the context. Although the above definition is common in the literature, it is problematic if g(x) is zero infinitely often as x goes to the limiting value. For that reason, some authors use an alternative definition. The alternative definition, in little-o notation, is that f ~ g if and only if f ( x ) = g ( x ) ( 1 + o ( 1 ) ) . {\displaystyle f(x)=g(x)(1+o(1)).} This definition is equivalent to the prior definition if g(x) is not zero in some neighbourhood of the limiting value. == Properties == If f ∼ g {\displaystyle f\sim g} and a ∼ b {\displaystyle a\sim b} , then, under some mild conditions, the following hold: f r ∼ g r {\displaystyle f^{r}\sim g^{r}} , for every real r log ⁡ ( f ) ∼ log ⁡ ( g ) {\displaystyle \log(f)\sim \log(g)} if lim g ≠ 1 {\displaystyle \lim g\neq 1} f × a ∼ g × b {\displaystyle f\times a\sim g\times b} f / a ∼ g / b {\displaystyle f/a\sim g/b} Such properties allow asymptotically equivalent functions to be freely exchanged in many algebraic expressions. Also, if we further have g ∼ h {\displaystyle g\sim h} , then, since asymptotic equivalence is a transitive relation, we also have f ∼ h {\displaystyle f\sim h} . == Examples of asymptotic formulas == Factorial n ! ∼ 2 π n ( n e ) n {\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}} —this is Stirling's approximation Partition function For a positive integer n, the partition function, p(n), gives the number of ways of writing the integer n as a sum of positive integers, where the order of addends is not considered. p ( n ) ∼ 1 4 n 3 e π 2 n 3 {\displaystyle p(n)\sim {\frac {1}{4n{\sqrt {3}}}}e^{\pi {\sqrt {\frac {2n}{3}}}}} Airy function The Airy function, Ai(x), is a solution of the differential equation y″ − xy = 0; it has many applications in physics. 
Ai ⁡ ( x ) ∼ e − 2 3 x 3 2 2 π x 1 / 4 {\displaystyle \operatorname {Ai} (x)\sim {\frac {e^{-{\frac {2}{3}}x^{\frac {3}{2}}}}{2{\sqrt {\pi }}x^{1/4}}}} Hankel functions H α ( 1 ) ( z ) ∼ 2 π z e i ( z − 2 π α − π 4 ) H α ( 2 ) ( z ) ∼ 2 π z e − i ( z − 2 π α − π 4 ) {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{i\left(z-{\frac {2\pi \alpha -\pi }{4}}\right)}\\H_{\alpha }^{(2)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{-i\left(z-{\frac {2\pi \alpha -\pi }{4}}\right)}\end{aligned}}} == Asymptotic expansion == An asymptotic expansion of a function f(x) is in practice an expression of that function in terms of a series, the partial sums of which do not necessarily converge, but such that taking any initial partial sum provides an asymptotic formula for f. The idea is that successive terms provide an increasingly accurate description of the order of growth of f. In symbols, it means we have f ∼ g 1 , {\displaystyle f\sim g_{1},} but also f − g 1 ∼ g 2 {\displaystyle f-g_{1}\sim g_{2}} and f − g 1 − ⋯ − g k − 1 ∼ g k {\displaystyle f-g_{1}-\cdots -g_{k-1}\sim g_{k}} for each fixed k. In view of the definition of the ∼ {\displaystyle \sim } symbol, the last equation means f − ( g 1 + ⋯ + g k ) = o ( g k ) {\displaystyle f-(g_{1}+\cdots +g_{k})=o(g_{k})} in the little o notation, i.e., f − ( g 1 + ⋯ + g k ) {\displaystyle f-(g_{1}+\cdots +g_{k})} is much smaller than g k . {\displaystyle g_{k}.} The relation f − g 1 − ⋯ − g k − 1 ∼ g k {\displaystyle f-g_{1}-\cdots -g_{k-1}\sim g_{k}} takes its full meaning if g k + 1 = o ( g k ) {\displaystyle g_{k+1}=o(g_{k})} for all k, which means the g k {\displaystyle g_{k}} form an asymptotic scale. In that case, some authors may abusively write f ∼ g 1 + ⋯ + g k {\displaystyle f\sim g_{1}+\cdots +g_{k}} to denote the statement f − ( g 1 + ⋯ + g k ) = o ( g k ) . {\displaystyle f-(g_{1}+\cdots +g_{k})=o(g_{k}).} One should however be careful that this is not a standard use of the ∼ {\displaystyle \sim } symbol, and that it does not correspond to the definition given in § Definition. In the present situation, this relation g k = o ( g k − 1 ) {\displaystyle g_{k}=o(g_{k-1})} actually follows from combining steps k and k−1; by subtracting f − g 1 − ⋯ − g k − 2 = g k − 1 + o ( g k − 1 ) {\displaystyle f-g_{1}-\cdots -g_{k-2}=g_{k-1}+o(g_{k-1})} from f − g 1 − ⋯ − g k − 2 − g k − 1 = g k + o ( g k ) , {\displaystyle f-g_{1}-\cdots -g_{k-2}-g_{k-1}=g_{k}+o(g_{k}),} one gets g k + o ( g k ) = o ( g k − 1 ) , {\displaystyle g_{k}+o(g_{k})=o(g_{k-1}),} i.e. g k = o ( g k − 1 ) . {\displaystyle g_{k}=o(g_{k-1}).} In case the asymptotic expansion does not converge, for any particular value of the argument there will be a particular partial sum which provides the best approximation and adding additional terms will decrease the accuracy. This optimal partial sum will usually have more terms as the argument approaches the limit value. === Examples of asymptotic expansions === Gamma function e x x x 2 π x Γ ( x + 1 ) ∼ 1 + 1 12 x + 1 288 x 2 − 139 51840 x 3 − ⋯ ( x → ∞ ) {\displaystyle {\frac {e^{x}}{x^{x}{\sqrt {2\pi x}}}}\Gamma (x+1)\sim 1+{\frac {1}{12x}}+{\frac {1}{288x^{2}}}-{\frac {139}{51840x^{3}}}-\cdots \ (x\to \infty )} Exponential integral x e x E 1 ( x ) ∼ ∑ n = 0 ∞ ( − 1 ) n n ! x n ( x → ∞ ) {\displaystyle xe^{x}E_{1}(x)\sim \sum _{n=0}^{\infty }{\frac {(-1)^{n}n!}{x^{n}}}\ (x\to \infty )} Error function π x e x 2 erfc ⁡ ( x ) ∼ 1 + ∑ n = 1 ∞ ( − 1 ) n ( 2 n − 1 ) ! ! n ! 
( 2 x 2 ) n ( x → ∞ ) {\displaystyle {\sqrt {\pi }}xe^{x^{2}}\operatorname {erfc} (x)\sim 1+\sum _{n=1}^{\infty }(-1)^{n}{\frac {(2n-1)!!}{n!(2x^{2})^{n}}}\ (x\to \infty )} where m!! is the double factorial. === Worked example === Asymptotic expansions often occur when an ordinary series is used in a formal expression that forces the taking of values outside of its domain of convergence. For example, we might start with the ordinary series 1 1 − w = ∑ n = 0 ∞ w n {\displaystyle {\frac {1}{1-w}}=\sum _{n=0}^{\infty }w^{n}} The expression on the left is valid on the entire complex plane w ≠ 1 {\displaystyle w\neq 1} , while the right hand side converges only for | w | < 1 {\displaystyle |w|<1} . Multiplying by e − w / t {\displaystyle e^{-w/t}} and integrating both sides yields ∫ 0 ∞ e − w t 1 − w d w = ∑ n = 0 ∞ t n + 1 ∫ 0 ∞ e − u u n d u {\displaystyle \int _{0}^{\infty }{\frac {e^{-{\frac {w}{t}}}}{1-w}}\,dw=\sum _{n=0}^{\infty }t^{n+1}\int _{0}^{\infty }e^{-u}u^{n}\,du} The integral on the left hand side can be expressed in terms of the exponential integral. The integral on the right hand side, after the substitution u = w / t {\displaystyle u=w/t} , may be recognized as the gamma function. Evaluating both, one obtains the asymptotic expansion e − 1 t Ei ⁡ ( 1 t ) = ∑ n = 0 ∞ n ! t n + 1 {\displaystyle e^{-{\frac {1}{t}}}\operatorname {Ei} \left({\frac {1}{t}}\right)=\sum _{n=0}^{\infty }n!\;t^{n+1}} Here, the right hand side is clearly not convergent for any non-zero value of t. However, by keeping t small, and truncating the series on the right to a finite number of terms, one may obtain a fairly good approximation to the value of Ei ⁡ ( 1 / t ) {\displaystyle \operatorname {Ei} (1/t)} . Substituting x = − 1 / t {\displaystyle x=-1/t} and noting that Ei ⁡ ( x ) = − E 1 ( − x ) {\displaystyle \operatorname {Ei} (x)=-E_{1}(-x)} results in the asymptotic expansion given earlier in this article. == Asymptotic distribution == In mathematical statistics, an asymptotic distribution is a hypothetical distribution that is in a sense the "limiting" distribution of a sequence of distributions. A distribution is an ordered set of random variables Zi for i = 1, …, n, for some positive integer n. An asymptotic distribution allows i to range without bound, that is, n is infinite. A special case of an asymptotic distribution is when the late entries go to zero—that is, the Zi go to 0 as i goes to infinity. Some instances of "asymptotic distribution" refer only to this special case. This is based on the notion of an asymptotic function which cleanly approaches a constant value (the asymptote) as the independent variable goes to infinity; "clean" in this sense meaning that for any desired closeness epsilon there is some value of the independent variable after which the function never differs from the constant by more than epsilon. An asymptote is a straight line that a curve approaches but never meets or crosses. Informally, one may speak of the curve meeting the asymptote "at infinity" although this is not a precise definition. In the equation y = 1 x , {\displaystyle y={\frac {1}{x}},} y becomes arbitrarily small in magnitude as x increases. == Applications == Asymptotic analysis is used in several mathematical sciences. In statistics, asymptotic theory provides limiting approximations of the probability distribution of sample statistics, such as the likelihood ratio statistic and the expected value of the deviance. 
Asymptotic theory does not provide a method of evaluating the finite-sample distributions of sample statistics, however. Non-asymptotic bounds are provided by methods of approximation theory. Examples of applications are the following. In applied mathematics, asymptotic analysis is used to build numerical methods to approximate equation solutions. In mathematical statistics and probability theory, asymptotics are used in analysis of long-run or large-sample behaviour of random variables and estimators. In computer science in the analysis of algorithms, considering the performance of algorithms. The behavior of physical systems, an example being statistical mechanics. In accident analysis, when identifying the causation of crashes through count modeling with a large number of crash counts in a given time and space. Asymptotic analysis is a key tool for exploring the ordinary and partial differential equations which arise in the mathematical modelling of real-world phenomena. An illustrative example is the derivation of the boundary layer equations from the full Navier-Stokes equations governing fluid flow. In many cases, the asymptotic expansion is in powers of a small parameter, ε: in the boundary layer case, this is the nondimensional ratio of the boundary layer thickness to a typical length scale of the problem. Indeed, applications of asymptotic analysis in mathematical modelling often center around a nondimensional parameter which has been shown, or assumed, to be small through a consideration of the scales of the problem at hand. Asymptotic expansions typically arise in the approximation of certain integrals (Laplace's method, saddle-point method, method of steepest descent) or in the approximation of probability distributions (Edgeworth series). The Feynman graphs in quantum field theory are another example of asymptotic expansions which often do not converge. === Asymptotic versus Numerical Analysis === De Bruijn illustrates the use of asymptotics in the following dialog between Dr. N.A., a Numerical Analyst, and Dr. A.A., an Asymptotic Analyst: N.A.: I want to evaluate my function f ( x ) {\displaystyle f(x)} for large values of x {\displaystyle x} , with a relative error of at most 1%. A.A.: f ( x ) = x − 1 + O ( x − 2 ) ( x → ∞ ) {\displaystyle f(x)=x^{-1}+\mathrm {O} (x^{-2})\qquad (x\to \infty )} . N.A.: I am sorry, I don't understand. A.A.: | f ( x ) − x − 1 | < 8 x − 2 ( x > 10 4 ) . {\displaystyle |f(x)-x^{-1}|<8x^{-2}\qquad (x>10^{4}).} N.A.: But my value of x {\displaystyle x} is only 100. A.A.: Why did you not say so? My evaluations give | f ( x ) − x − 1 | < 57000 x − 2 ( x > 100 ) . {\displaystyle |f(x)-x^{-1}|<57000x^{-2}\qquad (x>100).} N.A.: This is no news to me. I know already that 0 < f ( 100 ) < 1 {\displaystyle 0<f(100)<1} . A.A.: I can gain a little on some of my estimates. Now I find that | f ( x ) − x − 1 | < 20 x − 2 ( x > 100 ) . {\displaystyle |f(x)-x^{-1}|<20x^{-2}\qquad (x>100).} N.A.: I asked for 1%, not for 20%. A.A.: It is almost the best thing I possibly can get. Why don't you take larger values of x {\displaystyle x} ? N.A.: !!! I think it's better to ask my electronic computing machine. Machine: f(100) = 0.01137 42259 34008 67153 A.A.: Haven't I told you so? My estimate of 20% was not far off from the 14% of the real error. N.A.: !!! . . . ! Some days later, Miss N.A. wants to know the value of f(1000), but her machine would take a month of computation to give the answer. She returns to her Asymptotic Colleague, and gets a fully satisfactory reply. 
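The trade-off in this dialogue can be made quantitative with the exponential integral expansion listed earlier, x e^x E1(x) ~ Σ (−1)^n n!/x^n. The following sketch assumes SciPy (scipy.special.exp1 evaluates E1) and exhibits the typical behaviour of a divergent asymptotic series: the error of the partial sums first decreases and then grows again, with the best truncation near n ≈ x:

```python
import math
from scipy.special import exp1      # evaluates E_1(x)

x = 5.0
exact = x * math.exp(x) * exp1(x)   # the quantity expanded as sum (-1)^n n!/x^n

partial = 0.0
for n in range(0, 13):
    partial += (-1) ** n * math.factorial(n) / x ** n
    print(f"N = {n:2d}   partial sum = {partial:10.6f}   error = {abs(partial - exact):.2e}")
# The error is smallest for N near x (here around N = 4 or 5) and then grows again,
# since the terms n!/x^n eventually increase without bound.
```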
== See also == == Notes == == References == Balser, W. (1994), From Divergent Power Series To Analytic Functions, Springer-Verlag, ISBN 9783540485940 de Bruijn, N. G. (1981), Asymptotic Methods in Analysis, Dover Publications, ISBN 9780486642215 Estrada, R.; Kanwal, R. P. (2002), A Distributional Approach to Asymptotics, Birkhäuser, ISBN 9780817681302 Miller, P. D. (2006), Applied Asymptotic Analysis, American Mathematical Society, ISBN 9780821840788 Murray, J. D. (1984), Asymptotic Analysis, Springer, ISBN 9781461211228 Paris, R. B.; Kaminsky, D. (2001), Asymptotics and Mellin-Barnes Integrals, Cambridge University Press == External links == Asymptotic Analysis —home page of the journal, which is published by IOS Press A paper on time series analysis using asymptotic distribution
Wikipedia:Asymptotic expansion#0
In mathematics, an asymptotic expansion, asymptotic series or Poincaré expansion (after Henri Poincaré) is a formal series of functions which has the property that truncating the series after a finite number of terms provides an approximation to a given function as the argument of the function tends towards a particular, often infinite, point. Investigations by Dingle (1973) revealed that the divergent part of an asymptotic expansion is latently meaningful, i.e. contains information about the exact value of the expanded function. The theory of asymptotic series was created by Poincaré (and independently by Stieltjes) in 1886. The most common type of asymptotic expansion is a power series in either positive or negative powers. Methods of generating such expansions include the Euler–Maclaurin summation formula and integral transforms such as the Laplace and Mellin transforms. Repeated integration by parts will often lead to an asymptotic expansion. Since a convergent Taylor series fits the definition of asymptotic expansion as well, the phrase "asymptotic series" usually implies a non-convergent series. Despite non-convergence, the asymptotic expansion is useful when truncated to a finite number of terms. The approximation may provide benefits by being more mathematically tractable than the function being expanded, or by an increase in the speed of computation of the expanded function. Typically, the best approximation is given when the series is truncated at the smallest term. This way of optimally truncating an asymptotic expansion is known as superasymptotics. The error is then typically of the form ~ exp(−c/ε) where ε is the expansion parameter. The error is thus beyond all orders in the expansion parameter. It is possible to improve on the superasymptotic error, e.g. by employing resummation methods such as Borel resummation to the divergent tail. Such methods are often referred to as hyperasymptotic approximations. See asymptotic analysis and big O notation for the notation used in this article. == Formal definition == First we define an asymptotic scale, and then give the formal definition of an asymptotic expansion. If φ n {\displaystyle \ \varphi _{n}\ } is a sequence of continuous functions on some domain, and if L {\displaystyle \ L\ } is a limit point of the domain, then the sequence constitutes an asymptotic scale if for every n, φ n + 1 ( x ) = o ( φ n ( x ) ) ( x → L ) . {\displaystyle \varphi _{n+1}(x)=o(\varphi _{n}(x))\quad (x\to L)\ .} ( L {\displaystyle \ L\ } may be taken to be infinity.) In other words, a sequence of functions is an asymptotic scale if each function in the sequence grows strictly slower (in the limit x → L {\displaystyle \ x\to L\ } ) than the preceding function. If f {\displaystyle \ f\ } is a continuous function on the domain of the asymptotic scale, then f has an asymptotic expansion of order N {\displaystyle \ N\ } with respect to the scale as a formal series ∑ n = 0 N a n φ n ( x ) {\displaystyle \sum _{n=0}^{N}a_{n}\varphi _{n}(x)} if f ( x ) − ∑ n = 0 N − 1 a n φ n ( x ) = O ( φ N ( x ) ) ( x → L ) {\displaystyle f(x)-\sum _{n=0}^{N-1}a_{n}\varphi _{n}(x)=O(\varphi _{N}(x))\quad (x\to L)} or the weaker condition f ( x ) − ∑ n = 0 N − 1 a n φ n ( x ) = o ( φ N − 1 ( x ) ) ( x → L ) {\displaystyle f(x)-\sum _{n=0}^{N-1}a_{n}\varphi _{n}(x)=o(\varphi _{N-1}(x))\quad (x\to L)\ } is satisfied. Here, o {\displaystyle o} is the little o notation. 
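As a simple numerical illustration of this definition (the function and scale below are chosen for the example and are not taken from a reference), take f(x) = sqrt(x^2 + 1) with the asymptotic scale x, 1/x, 1/x^3, 1/x^5, for which f(x) ~ x + 1/(2x) − 1/(8x^3) + 1/(16x^5) as x → ∞. The condition above says that the remainder after N terms is O(φN(x)); the sketch below prints the ratio of the remainder to φN(x), which stays bounded (and in fact tends to the next coefficient aN):

```python
import math

phi = [lambda x: x, lambda x: 1 / x, lambda x: 1 / x**3, lambda x: 1 / x**5]   # asymptotic scale
a = [1.0, 0.5, -0.125, 0.0625]                                                 # expansion coefficients

def partial_sum(x, N):
    return sum(a[n] * phi[n](x) for n in range(N))

for xv in [5.0, 10.0, 50.0]:
    f = math.sqrt(xv**2 + 1)
    for N in (1, 2, 3):
        ratio = (f - partial_sum(xv, N)) / phi[N](xv)
        print(f"x = {xv:4.0f}  N = {N}  (f - S_N)/phi_N = {ratio:+.6f}")   # bounded, approaches a_N
```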
If one or the other holds for all N {\displaystyle \ N\ } , then we write f ( x ) ∼ ∑ n = 0 ∞ a n φ n ( x ) ( x → L ) . {\displaystyle f(x)\sim \sum _{n=0}^{\infty }a_{n}\varphi _{n}(x)\quad (x\to L)\ .} In contrast to a convergent series for f {\displaystyle \ f\ } , wherein the series converges for any fixed x {\displaystyle \ x\ } in the limit N → ∞ {\displaystyle N\to \infty } , one can think of the asymptotic series as converging for fixed N {\displaystyle \ N\ } in the limit x → L {\displaystyle \ x\to L\ } (with L {\displaystyle \ L\ } possibly infinite). == Examples == Gamma function (Stirling's approximation) e x x x 2 π x Γ ( x + 1 ) ∼ 1 + 1 12 x + 1 288 x 2 − 139 51840 x 3 − ⋯ ( x → ∞ ) {\displaystyle {\frac {e^{x}}{x^{x}{\sqrt {2\pi x}}}}\Gamma (x+1)\sim 1+{\frac {1}{12x}}+{\frac {1}{288x^{2}}}-{\frac {139}{51840x^{3}}}-\cdots \ (x\to \infty )} Exponential integral x e x E 1 ( x ) ∼ ∑ n = 0 ∞ ( − 1 ) n n ! x n ( x → ∞ ) {\displaystyle xe^{x}E_{1}(x)\sim \sum _{n=0}^{\infty }{\frac {(-1)^{n}n!}{x^{n}}}\ (x\to \infty )} Logarithmic integral li ⁡ ( x ) ∼ x ln ⁡ x ∑ k = 0 ∞ k ! ( ln ⁡ x ) k {\displaystyle \operatorname {li} (x)\sim {\frac {x}{\ln x}}\sum _{k=0}^{\infty }{\frac {k!}{(\ln x)^{k}}}} Riemann zeta function ζ ( s ) ∼ ∑ n = 1 N n − s + N 1 − s s − 1 − N − s 2 + N − s ∑ m = 1 ∞ B 2 m s 2 m − 1 ¯ ( 2 m ) ! N 2 m − 1 {\displaystyle \zeta (s)\sim \sum _{n=1}^{N}n^{-s}+{\frac {N^{1-s}}{s-1}}-{\frac {N^{-s}}{2}}+N^{-s}\sum _{m=1}^{\infty }{\frac {B_{2m}s^{\overline {2m-1}}}{(2m)!N^{2m-1}}}} where B 2 m {\displaystyle B_{2m}} are Bernoulli numbers and s 2 m − 1 ¯ {\displaystyle s^{\overline {2m-1}}} is a rising factorial. This expansion is valid for all complex s and is often used to compute the zeta function by using a large enough value of N, for instance N > | s | {\displaystyle N>|s|} . Error function π x e x 2 e r f c ( x ) ∼ 1 + ∑ n = 1 ∞ ( − 1 ) n ( 2 n − 1 ) ! ! ( 2 x 2 ) n ( x → ∞ ) {\displaystyle {\sqrt {\pi }}xe^{x^{2}}{\rm {erfc}}(x)\sim 1+\sum _{n=1}^{\infty }(-1)^{n}{\frac {(2n-1)!!}{(2x^{2})^{n}}}\ (x\to \infty )} where (2n − 1)!! is the double factorial. == Worked example == Asymptotic expansions often occur when an ordinary series is used in a formal expression that forces the taking of values outside of its domain of convergence. Thus, for example, one may start with the ordinary series 1 1 − w = ∑ n = 0 ∞ w n . {\displaystyle {\frac {1}{1-w}}=\sum _{n=0}^{\infty }w^{n}.} The expression on the left is valid on the entire complex plane w ≠ 1 {\displaystyle w\neq 1} , while the right hand side converges only for | w | < 1 {\displaystyle |w|<1} . Multiplying by e − w / t {\displaystyle e^{-w/t}} and integrating both sides yields ∫ 0 ∞ e − w t 1 − w d w = ∑ n = 0 ∞ t n + 1 ∫ 0 ∞ e − u u n d u , {\displaystyle \int _{0}^{\infty }{\frac {e^{-{\frac {w}{t}}}}{1-w}}\,dw=\sum _{n=0}^{\infty }t^{n+1}\int _{0}^{\infty }e^{-u}u^{n}\,du,} after the substitution u = w / t {\displaystyle u=w/t} on the right hand side. The integral on the left hand side, understood as a Cauchy principal value, can be expressed in terms of the exponential integral. The integral on the right hand side may be recognized as the gamma function. Evaluating both, one obtains the asymptotic expansion e − 1 t Ei ⁡ ( 1 t ) = ∑ n = 0 ∞ n ! t n + 1 . {\displaystyle e^{-{\frac {1}{t}}}\operatorname {Ei} \left({\frac {1}{t}}\right)=\sum _{n=0}^{\infty }n!t^{n+1}.} Here, the right hand side is clearly not convergent for any non-zero value of t. 
However, by truncating the series on the right to a finite number of terms, one may obtain a fairly good approximation to the value of Ei ⁡ ( 1 t ) {\displaystyle \operatorname {Ei} \left({\tfrac {1}{t}}\right)} for sufficiently small t. Substituting x = − 1 t {\displaystyle x=-{\tfrac {1}{t}}} and noting that Ei ⁡ ( x ) = − E 1 ( − x ) {\displaystyle \operatorname {Ei} (x)=-E_{1}(-x)} results in the asymptotic expansion given earlier in this article. === Integration by parts === Using integration by parts, we can obtain an explicit formula Ei ⁡ ( z ) = e z z ( ∑ k = 0 n k ! z k + e n ( z ) ) , e n ( z ) ≡ ( n + 1 ) ! z e − z ∫ − ∞ z e t t n + 2 d t {\displaystyle \operatorname {Ei} (z)={\frac {e^{z}}{z}}\left(\sum _{k=0}^{n}{\frac {k!}{z^{k}}}+e_{n}(z)\right),\quad e_{n}(z)\equiv (n+1)!\ ze^{-z}\int _{-\infty }^{z}{\frac {e^{t}}{t^{n+2}}}\,dt} For any fixed z {\displaystyle z} , the absolute value of the error term | e n ( z ) | {\displaystyle |e_{n}(z)|} decreases, then increases. The minimum occurs at n ∼ | z | {\displaystyle n\sim |z|} , at which point | e n ( z ) | ≤ 2 π | z | e − | z | {\displaystyle \vert e_{n}(z)\vert \leq {\sqrt {\frac {2\pi }{\vert z\vert }}}e^{-\vert z\vert }} . This bound is said to be "asymptotics beyond all orders". == Properties == === Uniqueness for a given asymptotic scale === For a given asymptotic scale { φ n ( x ) } {\displaystyle \{\varphi _{n}(x)\}} the asymptotic expansion of function f ( x ) {\displaystyle f(x)} is unique. That is the coefficients { a n } {\displaystyle \{a_{n}\}} are uniquely determined in the following way: a 0 = lim x → L f ( x ) φ 0 ( x ) a 1 = lim x → L f ( x ) − a 0 φ 0 ( x ) φ 1 ( x ) ⋮ a N = lim x → L f ( x ) − ∑ n = 0 N − 1 a n φ n ( x ) φ N ( x ) {\displaystyle {\begin{aligned}a_{0}&=\lim _{x\to L}{\frac {f(x)}{\varphi _{0}(x)}}\\a_{1}&=\lim _{x\to L}{\frac {f(x)-a_{0}\varphi _{0}(x)}{\varphi _{1}(x)}}\\&\;\;\vdots \\a_{N}&=\lim _{x\to L}{\frac {f(x)-\sum _{n=0}^{N-1}a_{n}\varphi _{n}(x)}{\varphi _{N}(x)}}\end{aligned}}} where L {\displaystyle L} is the limit point of this asymptotic expansion (may be ± ∞ {\displaystyle \pm \infty } ). === Non-uniqueness for a given function === A given function f ( x ) {\displaystyle f(x)} may have many asymptotic expansions (each with a different asymptotic scale). === Subdominance === An asymptotic expansion may be an asymptotic expansion to more than one function. == See also == === Related fields === Asymptotic analysis Singular perturbation === Asymptotic methods === Watson's lemma Mellin transform Laplace's method Stationary phase approximation Method of dominant balance Method of steepest descent == Notes == == References == Ablowitz, M. J., & Fokas, A. S. (2003). Complex variables: introduction and applications. Cambridge University Press. Bender, C. M., & Orszag, S. A. (2013). Advanced mathematical methods for scientists and engineers I: Asymptotic methods and perturbation theory. Springer Science & Business Media. Bleistein, N., Handelsman, R. (1975), Asymptotic Expansions of Integrals, Dover Publications. Carrier, G. F., Krook, M., & Pearson, C. E. (2005). Functions of a complex variable: Theory and technique. Society for Industrial and Applied Mathematics. Copson, E. T. (1965), Asymptotic Expansions, Cambridge University Press. Dingle, R. B. (1973), Asymptotic Expansions: Their Derivation and Interpretation, Academic Press. Erdélyi, A. (1955), Asymptotic Expansions, Dover Publications. Fruchard, A., Schäfke, R. (2013), Composite Asymptotic Expansions, Springer. Hardy, G. H. 
(1949), Divergent Series, Oxford University Press. Olver, F. (1997), Asymptotics and Special Functions, AK Peters/CRC Press. Paris, R. B., Kaminsky, D. (2001), Asymptotics and Mellin-Barnes Integrals, Cambridge University Press. Remy, P. (2024), Asymptotic Expansions and Summability: Application to Partial Differential Equations, Springer, LNM 2351. Whittaker, E. T., Watson, G. N. (1963), A Course of Modern Analysis, fourth edition, Cambridge University Press. == External links == "Asymptotic expansion", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Wolfram Mathworld: Asymptotic Series
Wikipedia:Asymptotic homogenization#0
In mathematics and physics, homogenization is a method of studying partial differential equations with rapidly oscillating coefficients, such as ∇ ⋅ ( A ( x → ϵ ) ∇ u ϵ ) = f {\displaystyle \nabla \cdot \left(A\left({\frac {\vec {x}}{\epsilon }}\right)\nabla u_{\epsilon }\right)=f} where ϵ {\displaystyle \epsilon } is a very small parameter and A ( y → ) {\displaystyle A\left({\vec {y}}\right)} is a 1-periodic coefficient: A ( y → + e → i ) = A ( y → ) {\displaystyle A\left({\vec {y}}+{\vec {e}}_{i}\right)=A\left({\vec {y}}\right)} , i = 1 , … , n {\displaystyle i=1,\dots ,n} . It turns out that the study of these equations is also of great importance in physics and engineering, since equations of this type govern the physics of inhomogeneous or heterogeneous materials. Of course, all matter is inhomogeneous at some scale, but frequently it is convenient to treat it as homogeneous. A good example is the continuum concept which is used in continuum mechanics. Under this assumption, materials such as fluids, solids, etc. can be treated as homogeneous materials, and associated with these materials are material properties such as shear modulus, elastic moduli, etc. Frequently, inhomogeneous materials (such as composite materials) possess microstructure and therefore they are subjected to loads or forcings which vary on a length scale which is far bigger than the characteristic length scale of the microstructure. In this situation, one can often replace the equation above with an equation of the form ∇ ⋅ ( A ∗ ∇ u ) = f {\displaystyle \nabla \cdot \left(A^{*}\nabla u\right)=f} where A ∗ {\displaystyle A^{*}} is a constant tensor coefficient and is known as the effective property associated with the material in question. It can be explicitly computed as A i j ∗ = ∫ ( 0 , 1 ) n A ( y → ) ( ∇ w j ( y → ) + e → j ) ⋅ e → i d y 1 … d y n , i , j = 1 , … , n {\displaystyle A_{ij}^{*}=\int _{(0,1)^{n}}A({\vec {y}})\left(\nabla w_{j}({\vec {y}})+{\vec {e}}_{j}\right)\cdot {\vec {e}}_{i}\,dy_{1}\dots dy_{n},\qquad i,j=1,\dots ,n} from 1-periodic functions w j {\displaystyle w_{j}} satisfying: ∇ y ⋅ ( A ( y → ) ∇ w j ) = − ∇ y ⋅ ( A ( y → ) e → j ) . {\displaystyle \nabla _{y}\cdot \left(A({\vec {y}})\nabla w_{j}\right)=-\nabla _{y}\cdot \left(A({\vec {y}}){\vec {e}}_{j}\right).} This process of replacing an equation with a highly oscillatory coefficient with one with a homogeneous (uniform) coefficient is known as homogenization. This subject is inextricably linked with the subject of micromechanics for this very reason. In homogenization one equation is replaced by another if u ϵ ≈ u {\displaystyle u_{\epsilon }\approx u} for small enough ϵ {\displaystyle \epsilon } , provided u ϵ → u {\displaystyle u_{\epsilon }\to u} in some appropriate norm as ϵ → 0 {\displaystyle \epsilon \to 0} . As a result of the above, homogenization can therefore be viewed as an extension of the continuum concept to materials which possess microstructure. The analogue of the differential element in the continuum concept (which contains enough atomic or molecular structure to be representative of that material) is known as the "Representative Volume Element" in homogenization and micromechanics. This element contains enough statistical information about the inhomogeneous medium in order to be representative of the material. Therefore, averaging over this element gives an effective property such as A ∗ {\displaystyle A^{*}} above. 
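In one space dimension the cell problem can be solved in closed form: A(y)(w′(y) + 1) must be constant, and periodicity then forces the effective coefficient A* to be the harmonic mean of A over the unit cell, which in general differs from the naive arithmetic average. The following sketch assumes NumPy and uses an arbitrary 1-periodic coefficient chosen only for illustration:

```python
import numpy as np

def A(y):
    return 2.0 + np.sin(2.0 * np.pi * y)     # 1-periodic, uniformly positive coefficient

y = (np.arange(200_000) + 0.5) / 200_000     # midpoints of a uniform grid on the unit cell

arithmetic_mean = A(y).mean()                # naive average, about 2.0
effective_A = 1.0 / (1.0 / A(y)).mean()      # harmonic mean = A* in one dimension

print(f"arithmetic mean of A : {arithmetic_mean:.6f}")
print(f"homogenized A*       : {effective_A:.6f}")   # about 1.732051, i.e. sqrt(3)
```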
Classical results of homogenization theory were obtained for media with periodic microstructure modeled by partial differential equations with periodic coefficients. These results were later generalized to spatially homogeneous random media modeled by differential equations with random coefficients whose statistical properties are the same at every point in space. In practice, many applications require a more general way of modeling that is neither periodic nor statistically homogeneous. To this end, the methods of homogenization theory have been extended to partial differential equations whose coefficients are neither periodic nor statistically homogeneous (so-called arbitrarily rough coefficients). == The method of asymptotic homogenization == Mathematical homogenization theory dates back to the French, Russian and Italian schools. The method of asymptotic homogenization proceeds by introducing the fast variable y → = x → / ϵ {\displaystyle {\vec {y}}={\vec {x}}/\epsilon } and posing a formal expansion in ϵ {\displaystyle \epsilon } : u ϵ ( x → ) = u ( x → , y → ) = u 0 ( x → , y → ) + ϵ u 1 ( x → , y → ) + ϵ 2 u 2 ( x → , y → ) + O ( ϵ 3 ) {\displaystyle u_{\epsilon }({\vec {x}})=u({\vec {x}},{\vec {y}})=u_{0}({\vec {x}},{\vec {y}})+\epsilon u_{1}({\vec {x}},{\vec {y}})+\epsilon ^{2}u_{2}({\vec {x}},{\vec {y}})+O(\epsilon ^{3})\,} which generates a hierarchy of problems. The homogenized equation is obtained and the effective coefficients are determined by solving the so-called "cell problems" for the function u 1 ( x → , x → / ϵ ) {\displaystyle u_{1}({\vec {x}},{\vec {x}}/\epsilon )} . == See also == Asymptotic analysis Γ-convergence Mosco convergence Effective medium approximations == Notes == == References == Kozlov, S.M.; Oleinik, O.A.; Zhikov, V.V. (1994), Homogenization of differential operators and integral functionals, Berlin-Heidelberg-New York City: Springer-Verlag, ISBN 3-540-54809-2, Zbl 0838.35001 Oleinik, O.A.; Shamaev, A.S.; Yosifian, G.A. (1991), Mathematical problems in elasticity and homogenization, Studies in Mathematics and its Applications, vol. 26, Amsterdam - London - New York City - Tokyo: North-Holland, ISBN 0-444-88441-6, Zbl 0768.73003 Hornung, Ulrich (Ed.). (1997), Homogenization and Porous Media, Interdisciplinary Applied Mathematics, vol. 6, Springer-Verlag, doi:10.1007/978-1-4612-1920-0, ISBN 978-1-4612-7339-4 Bakhvalov, N. S.; Panasenko, G. P. (1984), Averaging of Processes in Periodic Media (English translation: Kluwer, 1989), Moscow: Nauka, Zbl 0607.73009 Braides, A.; Defranceschi, A. (1998), Homogenization of Multiple Integrals, Oxford Lecture Series in Mathematics and Its Applications, Oxford: Clarendon Press, ISBN 978-0-198-50246-3
Wikipedia:Asymptotic safety in quantum gravity#0
Asymptotic safety (sometimes also referred to as nonperturbative renormalizability) is a concept in quantum field theory which aims at finding a consistent and predictive quantum theory of the gravitational field. Its key ingredient is a nontrivial fixed point of the theory's renormalization group flow which controls the behavior of the coupling constants in the ultraviolet (UV) regime and renders physical quantities safe from divergences. Although originally proposed by Steven Weinberg to find a theory of quantum gravity, the idea of a nontrivial fixed point providing a possible UV completion can be applied also to other field theories, in particular to perturbatively nonrenormalizable ones. In this respect, it is similar to quantum triviality. The essence of asymptotic safety is the observation that nontrivial renormalization group fixed points can be used to generalize the procedure of perturbative renormalization. In an asymptotically safe theory the couplings do not need to be small or tend to zero in the high energy limit but rather tend to finite values: they approach a nontrivial UV fixed point. The running of the coupling constants, i.e. their scale dependence described by the renormalization group (RG), is thus special in its UV limit in the sense that all their dimensionless combinations remain finite. This suffices to avoid unphysical divergences, e.g. in scattering amplitudes. The requirement of a UV fixed point restricts the form of the bare action and the values of the bare coupling constants, which become predictions of the asymptotic safety program rather than inputs. As for gravity, the standard procedure of perturbative renormalization fails since Newton's constant, the relevant expansion parameter, has negative mass dimension rendering general relativity perturbatively nonrenormalizable. This has driven the search for nonperturbative frameworks describing quantum gravity, including asymptotic safety which – in contrast to other approaches – is characterized by its use of quantum field theory methods, without depending on perturbative techniques, however. At the present time, there is accumulating evidence for a fixed point suitable for asymptotic safety, while a rigorous proof of its existence is still lacking. == Motivation == Gravity, at the classical level, is described by Einstein's field equations of general relativity, R μ ν − 1 2 g μ ν R + g μ ν Λ = 8 π G c 4 T μ ν {\displaystyle \textstyle R_{\mu \nu }-{1 \over 2}g_{\mu \nu }\,R+g_{\mu \nu }\Lambda ={8\pi G \over c^{4}}\,T_{\mu \nu }} . These equations combine the spacetime geometry encoded in the metric g μ ν {\displaystyle g_{\mu \nu }} with the matter content comprised in the energy–momentum tensor T μ ν {\displaystyle T_{\mu \nu }} . The quantum nature of matter has been tested experimentally, for instance quantum electrodynamics is by now one of the most accurately confirmed theories in physics. For this reason quantization of gravity seems plausible, too. Unfortunately the quantization cannot be performed in the standard way (perturbative renormalization): Already a simple power-counting consideration signals the perturbative nonrenormalizability since the mass dimension of Newton's constant is − 2 {\displaystyle -2} . The problem occurs as follows. According to the traditional point of view renormalization is implemented via the introduction of counterterms that should cancel divergent expressions appearing in loop integrals. 
Applying this method to gravity, however, the counterterms required to eliminate all divergences proliferate to an infinite number. As this inevitably leads to an infinite number of free parameters to be measured in experiments, the program is unlikely to have predictive power beyond its use as a low energy effective theory. It turns out that the first divergences in the quantization of general relativity which cannot be absorbed in counterterms consistently (i.e. without the necessity of introducing new parameters) appear already at one-loop level in the presence of matter fields. At two-loop level the problematic divergences arise even in pure gravity. In order to overcome this conceptual difficulty the development of nonperturbative techniques was required, providing various candidate theories of quantum gravity. For a long time the prevailing view was that the very concept of quantum field theory – even though remarkably successful in the case of the other fundamental interactions – is doomed to failure for gravity. By way of contrast, the idea of asymptotic safety retains quantum fields as the theoretical arena and instead abandons only the traditional program of perturbative renormalization. == History == After having realized the perturbative nonrenormalizability of gravity, physicists tried to employ alternative techniques to cure the divergence problem, for instance resummation or extended theories with suitable matter fields and symmetries, all of which come with their own drawbacks. In 1976, Steven Weinberg proposed a generalized version of the condition of renormalizability, based on a nontrivial fixed point of the underlying renormalization group (RG) flow for gravity. This was called asymptotic safety. The idea of a UV completion by means of a nontrivial fixed point of the renormalization group had been proposed earlier by Kenneth G. Wilson and Giorgio Parisi in scalar field theory (see also Quantum triviality). The applicability to perturbatively nonrenormalizable theories was first demonstrated explicitly for the non-linear sigma model and for a variant of the Gross–Neveu model. As for gravity, the first studies concerning this new concept were performed in d = 2 + ϵ {\displaystyle d=2+\epsilon } spacetime dimensions in the late seventies. In exactly two dimensions there is a theory of pure gravity that is renormalizable according to the old point of view. (In order to render the Einstein–Hilbert action 1 16 π G ∫ d 2 x g R {\displaystyle \textstyle {1 \over 16\pi G}\int \mathrm {d} ^{2}x{\sqrt {g}}\,R} dimensionless, Newton's constant G {\displaystyle G} must have mass dimension zero.) For small but finite ϵ {\displaystyle \epsilon } perturbation theory is still applicable, and one can expand the beta-function ( β {\displaystyle \beta } -function) describing the renormalization group running of Newton's constant as a power series in ϵ {\displaystyle \epsilon } . Indeed, in this spirit it was possible to prove that it displays a nontrivial fixed point. However, it was not clear how to do a continuation from d = 2 + ϵ {\displaystyle d=2+\epsilon } to d = 4 {\displaystyle d=4} dimensions as the calculations relied on the smallness of the expansion parameter ϵ {\displaystyle \epsilon } . The computational methods for a nonperturbative treatment were not at hand at that time. For this reason the idea of asymptotic safety in quantum gravity was put aside for some years.
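The structure of the 2 + ε argument can be illustrated with a toy calculation. The sketch below is not from the article: it assumes a schematic one-loop form β(g) = εg − bg² for the dimensionless Newton coupling, with a placeholder coefficient b > 0 (the actual coefficient depends on the scheme and the matter content). It locates the nontrivial zero g* = ε/b and checks that it is UV-attractive.

```python
# Schematic 2 + eps fixed point (a sketch under the stated assumption, not the
# published beta function). The coupling runs with t = ln k via dg/dt = beta(g).
eps = 0.1
b = 1.0
g_star = eps / b                      # nontrivial zero of the beta function

def beta(g):
    return eps * g - b * g * g

g, dt = 0.02, 0.01                    # start below the fixed point, flow towards the UV
for _ in range(200000):               # t grows to 2000, i.e. k increases enormously
    g += dt * beta(g)

print("fixed point g* =", g_star)
print("g after the UV flow =", g)                # approaches g* instead of diverging
print("beta'(g*) =", eps - 2.0 * b * g_star)     # = -eps < 0, so g* is UV-attractive
```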
Only in the early 1990s were aspects of 2 + ϵ {\displaystyle 2+\epsilon } dimensional gravity revisited in various works, still without continuing the dimension to four. As for calculations beyond perturbation theory, the situation improved with the advent of new functional renormalization group methods, in particular the so-called effective average action (a scale dependent version of the effective action). Introduced in 1993 by Christof Wetterich and Tim R. Morris for scalar theories, and by Martin Reuter and Christof Wetterich for general gauge theories (on flat Euclidean space), it is similar to a Wilsonian action (coarse grained free energy) and although it is argued to differ at a deeper level, it is in fact related by a Legendre transform. The cutoff scale dependence of this functional is governed by a functional flow equation which, in contrast to earlier attempts, can easily be applied in the presence of local gauge symmetries as well. In 1996, Martin Reuter constructed a similar effective average action and the associated flow equation for the gravitational field. It complies with the requirement of background independence, one of the fundamental tenets of quantum gravity. This work can be considered an essential breakthrough in asymptotic safety related studies on quantum gravity as it provides the possibility of nonperturbative computations for arbitrary spacetime dimensions. It was shown that at least for the Einstein–Hilbert truncation, the simplest ansatz for the effective average action, a nontrivial fixed point is indeed present. These results mark the starting point for many calculations that followed. Since it was not clear in the pioneering work by Martin Reuter to what extent the findings depended on the truncation ansatz considered, the next obvious step consisted in enlarging the truncation. This process was initiated by Roberto Percacci and collaborators, starting with the inclusion of matter fields. Up to the present, many different works by a continuously growing community – including, e.g., f ( R ) {\displaystyle f(R)} - and Weyl tensor squared truncations – have confirmed independently that the asymptotic safety scenario is actually possible: The existence of a nontrivial fixed point was shown within each truncation studied so far. Although still lacking a final proof, there is mounting evidence that the asymptotic safety program can ultimately lead to a consistent and predictive quantum theory of gravity within the general framework of quantum field theory.
Couplings can always be made dimensionless by multiplication with a suitable power of the RG scale.) === Renormalization group flow === The renormalization group (RG) describes the change of a physical system due to smoothing or averaging out microscopic details when going to a lower resolution. This brings into play a notion of scale dependence for the action functionals of interest. Infinitesimal RG transformations map actions to nearby ones, thus giving rise to a vector field on theory space. The scale dependence of an action is encoded in a "running" of the coupling constants parametrizing this action, { g α } ≡ { g α ( k ) } {\displaystyle \{g_{\alpha }\}\equiv \{g_{\alpha }(k)\}} , with the RG scale k {\displaystyle k} . This gives rise to a trajectory in theory space (RG trajectory), describing the evolution of an action functional with respect to the scale. Which of all possible trajectories is realized in Nature has to be determined by measurements. === Taking the UV limit === The construction of a quantum field theory amounts to finding an RG trajectory which is infinitely extended in the sense that the action functional described by { g α ( k ) } {\displaystyle \{g_{\alpha }(k)\}} is well-behaved for all values of the momentum scale parameter k {\displaystyle k} , including the infrared limit k → 0 {\displaystyle k\rightarrow 0} and the ultraviolet (UV) limit k → ∞ {\displaystyle k\rightarrow \infty } . Asymptotic safety is a way of dealing with the latter limit. Its fundamental requirement is the existence of a fixed point of the RG flow. By definition this is a point { g α ∗ } {\displaystyle \{g_{\alpha }^{*}\}} in the theory space where the running of all couplings stops, or, in other words, a zero of all beta-functions: β γ ( { g α ∗ } ) = 0 {\displaystyle \beta _{\gamma }(\{g_{\alpha }^{*}\})=0} for all γ {\displaystyle \gamma } . In addition that fixed point must have at least one UV-attractive direction. This ensures that there are one or more RG trajectories which run into the fixed point for increasing scale. The set of all points in the theory space that are "pulled" into the UV fixed point by going to larger scales is referred to as UV critical surface. Thus the UV critical surface consists of all those trajectories which are safe from UV divergences in the sense that all couplings approach finite fixed point values as k → ∞ {\displaystyle k\rightarrow \infty } . The key hypothesis underlying asymptotic safety is that only trajectories running entirely within the UV critical surface of an appropriate fixed point can be infinitely extended and thus define a fundamental quantum field theory. It is obvious that such trajectories are well-behaved in the UV limit as the existence of a fixed point allows them to "stay at a point" for an infinitely long RG "time". With regard to the fixed point, UV-attractive directions are called relevant, UV-repulsive ones irrelevant, since the corresponding scaling fields increase and decrease, respectively, when the scale is lowered. Therefore, the dimensionality of the UV critical surface equals the number of relevant couplings. An asymptotically safe theory is thus the more predictive the smaller is the dimensionality of the corresponding UV critical surface. For instance, if the UV critical surface has the finite dimension n {\displaystyle n} it is sufficient to perform only n {\displaystyle n} measurements in order to uniquely identify Nature's RG trajectory. 
Once the n {\displaystyle n} relevant couplings are measured, the requirement of asymptotic safety fixes all other couplings since the latter have to be adjusted in such a way that the RG trajectory lies within the UV critical surface. In this spirit the theory is highly predictive as infinitely many parameters are fixed by a finite number of measurements. In contrast to other approaches, a bare action which should be promoted to a quantum theory is not needed as an input here. It is the theory space and the RG flow equations that determine possible UV fixed points. Since such a fixed point, in turn, corresponds to a bare action, one can consider the bare action a prediction in the asymptotic safety program. This may be thought of as a systematic search strategy among theories that are already "quantum", which identifies the "islands" of physically acceptable theories in the "sea" of unacceptable ones plagued by short distance singularities. === Gaussian and non-Gaussian fixed points === A fixed point is called Gaussian if it corresponds to a free theory. Its critical exponents agree with the canonical mass dimensions of the corresponding operators, which usually amounts to the trivial fixed point values g α ∗ = 0 {\displaystyle g_{\alpha }^{*}=0} for all essential couplings g α {\displaystyle g_{\alpha }} . Thus standard perturbation theory is applicable only in the vicinity of a Gaussian fixed point. In this regard asymptotic safety at the Gaussian fixed point is equivalent to perturbative renormalizability plus asymptotic freedom. Due to the arguments presented in the introductory sections, however, this possibility is ruled out for gravity. In contrast, a nontrivial fixed point, that is, a fixed point whose critical exponents differ from the canonical ones, is referred to as non-Gaussian. Usually this requires g α ∗ ≠ 0 {\displaystyle g_{\alpha }^{*}\neq 0} for at least one essential g α {\displaystyle g_{\alpha }} . It is such a non-Gaussian fixed point that provides a possible scenario for quantum gravity. Thus far, studies on this subject have mainly focused on establishing its existence. === Quantum Einstein gravity (QEG) === Quantum Einstein gravity (QEG) is the generic name for any quantum field theory of gravity that (regardless of its bare action) takes the spacetime metric as the dynamical field variable and whose symmetry is given by diffeomorphism invariance. This fixes the theory space and an RG flow of the effective average action defined over it, but it does not single out a priori any specific action functional. However, the flow equation determines a vector field on that theory space which can be investigated. If it displays a non-Gaussian fixed point by means of which the UV limit can be taken in the "asymptotically safe" way, this point acquires the status of the bare action. === Quantum quadratic gravity (QQG) === A specific realisation of QEG is quantum quadratic gravity (QQG). This is a quantum extension of general relativity obtained by adding all local quadratic-in-curvature terms to the Einstein-Hilbert Lagrangian. QQG, besides being renormalizable, has also been shown to feature a UV fixed point (even in the presence of realistic matter sectors). It can, therefore, be regarded as a concrete realisation of asymptotic safety.
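The general recipe of the preceding subsections – locate a zero of the beta functions and classify its directions via the linearized flow – can be illustrated with an invented two-coupling system. The beta functions below are purely hypothetical toy expressions, not the gravitational ones; the sketch only demonstrates how a non-Gaussian fixed point and its critical exponents would be extracted numerically from a truncated flow k∂_k g_i = β_i(g).

```python
# Toy illustration (hypothetical beta functions, chosen only to show the procedure)
# of finding a non-Gaussian fixed point and its critical exponents.
import numpy as np
from scipy.optimize import fsolve

def beta(c):
    g, lam = c
    return np.array([2.0 * g - g**2,          # hypothetical beta_g
                     -2.0 * lam + g])         # hypothetical beta_lambda

# locate a fixed point away from the Gaussian one at (0, 0)
g_star = fsolve(beta, x0=[1.5, 0.5])
print("non-Gaussian fixed point:", g_star)    # -> [2.0, 1.0]

# stability matrix M_ij = d beta_i / d g_j at the fixed point (finite differences)
h = 1e-6
M = np.zeros((2, 2))
for j in range(2):
    dc = np.zeros(2); dc[j] = h
    M[:, j] = (beta(g_star + dc) - beta(g_star - dc)) / (2.0 * h)

# with t = ln k, eigenvalues with negative real part are UV-attractive (relevant)
# directions; the critical exponents are conventionally theta = -eigenvalues
eigs = np.linalg.eigvals(M)
print("stability matrix eigenvalues:", eigs)  # -> both -2: UV-attractive
print("critical exponents theta:", -eigs)
```

In this toy system both directions are relevant, so the "UV critical surface" would be two-dimensional and two measurements would fix the trajectory; in realistic truncations the same linearization is performed on the computed gravitational beta functions.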
== Implementation via the effective average action == === Exact functional renormalization group equation === The primary tool for investigating the gravitational RG flow with respect to the energy scale k {\displaystyle k} at the nonperturbative level is the effective average action Γ k {\displaystyle \Gamma _{k}} for gravity. It is the scale dependent version of the effective action where in the underlying functional integral field modes with covariant momenta below k {\displaystyle k} are suppressed while only the remaining are integrated out. For a given theory space, let Φ {\displaystyle \Phi } and Φ ¯ {\displaystyle {\bar {\Phi }}} denote the set of dynamical and background fields, respectively. Then Γ k {\displaystyle \Gamma _{k}} satisfies the following Wetterich–Morris-type functional RG equation (FRGE): k ∂ k Γ k [ Φ , Φ ¯ ] = 1 2 STr [ ( Γ k ( 2 ) [ Φ , Φ ¯ ] + R k [ Φ ¯ ] ) − 1 k ∂ k R k [ Φ ¯ ] ] . {\displaystyle k\partial _{k}\Gamma _{k}{\big [}\Phi ,{\bar {\Phi }}{\big ]}={\frac {1}{2}}\,{\mbox{STr}}{\Big [}{\big (}\Gamma _{k}^{(2)}{\big [}\Phi ,{\bar {\Phi }}{\big ]}+{\mathcal {R}}_{k}[{\bar {\Phi }}]{\big )}^{-1}k\partial _{k}{\mathcal {R}}_{k}[{\bar {\Phi }}]{\Big ]}.} Here Γ k ( 2 ) {\displaystyle \Gamma _{k}^{(2)}} is the second functional derivative of Γ k {\displaystyle \Gamma _{k}} with respect to the quantum fields Φ {\displaystyle \Phi } at fixed Φ ¯ {\displaystyle {\bar {\Phi }}} . The mode suppression operator R k [ Φ ¯ ] {\displaystyle {\mathcal {R}}_{k}[{\bar {\Phi }}]} provides a k {\displaystyle k} -dependent mass-term for fluctuations with covariant momenta p 2 ≪ k 2 {\displaystyle p^{2}\ll k^{2}} and vanishes for p 2 ≫ k 2 {\displaystyle p^{2}\gg k^{2}} . Its appearance in the numerator and denominator renders the supertrace ( STr ) {\displaystyle ({\mbox{STr}})} both infrared and UV finite, peaking at momenta p 2 ≈ k 2 {\displaystyle p^{2}\approx k^{2}} . The FRGE is an exact equation without any perturbative approximations. Given an initial condition it determines Γ k {\displaystyle \Gamma _{k}} for all scales uniquely. The solutions Γ k {\displaystyle \Gamma _{k}} of the FRGE interpolate between the bare (microscopic) action at k → ∞ {\displaystyle k\rightarrow \infty } and the effective action Γ [ Φ ] = Γ k = 0 [ Φ , Φ ¯ = Φ ] {\displaystyle \Gamma [\Phi ]=\Gamma _{k=0}{\big [}\Phi ,{\bar {\Phi }}=\Phi {\big ]}} at k → 0 {\displaystyle k\rightarrow 0} . They can be visualized as trajectories in the underlying theory space. Note that the FRGE itself is independent of the bare action. In the case of an asymptotically safe theory, the bare action is determined by the fixed point functional Γ ∗ = Γ k → ∞ {\displaystyle \Gamma _{*}=\Gamma _{k\rightarrow \infty }} . === Truncations of the theory space === Let us assume there is a set of basis functionals { P α [ ⋅ ] } {\displaystyle \{P_{\alpha }[\,\cdot \,]\}} spanning the theory space under consideration so that any action functional, i.e. any point of this theory space, can be written as a linear combination of the P α {\displaystyle P_{\alpha }} 's. Then solutions Γ k {\displaystyle \Gamma _{k}} of the FRGE have expansions of the form Γ k [ Φ , Φ ¯ ] = ∑ α = 1 ∞ g α ( k ) P α [ Φ , Φ ¯ ] . 
{\displaystyle \Gamma _{k}[\Phi ,{\bar {\Phi }}]=\sum \limits _{\alpha =1}^{\infty }g_{\alpha }(k)P_{\alpha }[\Phi ,{\bar {\Phi }}].} Inserting this expansion into the FRGE and expanding the trace on its right-hand side in order to extract the beta-functions, one obtains the exact RG equation in component form: k ∂ k g α ( k ) = β α ( g 1 , g 2 , ⋯ ) {\displaystyle k\partial _{k}g_{\alpha }(k)=\beta _{\alpha }(g_{1},g_{2},\cdots )} . Together with the corresponding initial conditions these equations fix the evolution of the running couplings g α ( k ) {\displaystyle g_{\alpha }(k)} , and thus determine Γ k {\displaystyle \Gamma _{k}} completely. As one can see, the FRGE gives rise to a system of infinitely many coupled differential equations since there are infinitely many couplings, and the β {\displaystyle \beta } -functions can depend on all of them. This makes it very hard to solve the system in general. A possible way out is to restrict the analysis on a finite-dimensional subspace as an approximation of the full theory space. In other words, such a truncation of the theory space sets all but a finite number of couplings to zero, considering only the reduced basis { P α [ ⋅ ] } {\displaystyle \{P_{\alpha }[\,\cdot \,]\}} with α = 1 , ⋯ , N {\displaystyle \alpha =1,\cdots ,N} . This amounts to the ansatz Γ k [ Φ , Φ ¯ ] = ∑ α = 1 N g α ( k ) P α [ Φ , Φ ¯ ] , {\displaystyle \Gamma _{k}[\Phi ,{\bar {\Phi }}]=\sum \limits _{\alpha =1}^{N}g_{\alpha }(k)P_{\alpha }[\Phi ,{\bar {\Phi }}],} leading to a system of finitely many coupled differential equations, k ∂ k g α ( k ) = β α ( g 1 , ⋯ , g N ) {\displaystyle k\partial _{k}g_{\alpha }(k)=\beta _{\alpha }(g_{1},\cdots ,g_{N})} , which can now be solved employing analytical or numerical techniques. Clearly a truncation should be chosen such that it incorporates as many features of the exact flow as possible. Although it is an approximation, the truncated flow still exhibits the nonperturbative character of the FRGE, and the β {\displaystyle \beta } -functions can contain contributions from all powers of the couplings. == Evidence from truncated flow equations == === Einstein–Hilbert truncation === As described in the previous section, the FRGE lends itself to a systematic construction of nonperturbative approximations to the gravitational beta-functions by projecting the exact RG flow onto subspaces spanned by a suitable ansatz for Γ k {\displaystyle \Gamma _{k}} . In its simplest form, such an ansatz is given by the Einstein–Hilbert action where Newton's constant G k {\displaystyle G_{k}} and the cosmological constant Λ k {\displaystyle \Lambda _{k}} depend on the RG scale k {\displaystyle k} . Let g μ ν {\displaystyle g_{\mu \nu }} and g ¯ μ ν {\displaystyle {\bar {g}}_{\mu \nu }} denote the dynamical and the background metric, respectively. Then Γ k {\displaystyle \Gamma _{k}} reads, for arbitrary spacetime dimension d {\displaystyle d} , Γ k [ g , g ¯ , ξ , ξ ¯ ] = 1 16 π G k ∫ d d x g ( − R ( g ) + 2 Λ k ) + Γ k gf [ g , g ¯ ] + Γ k gh [ g , g ¯ , ξ , ξ ¯ ] . {\displaystyle \Gamma _{k}[g,{\bar {g}},\xi ,{\bar {\xi }}]={\frac {1}{16\pi G_{k}}}\int {\text{d}}^{d}x\,{\sqrt {g}}\,{\big (}-R(g)+2\Lambda _{k}{\big )}+\Gamma _{k}^{\text{gf}}[g,{\bar {g}}]+\Gamma _{k}^{\text{gh}}[g,{\bar {g}},\xi ,{\bar {\xi }}].} Here R ( g ) {\displaystyle R(g)} is the scalar curvature constructed from the metric g μ ν {\displaystyle g_{\mu \nu }} . 
Furthermore, Γ k gf {\displaystyle \Gamma _{k}^{\text{gf}}} denotes the gauge fixing action, and Γ k gh {\displaystyle \Gamma _{k}^{\text{gh}}} the ghost action with the ghost fields ξ {\displaystyle \xi } and ξ ¯ {\displaystyle {\bar {\xi }}} . The corresponding β {\displaystyle \beta } -functions, describing the evolution of the dimensionless Newton constant g k = k d − 2 G k {\displaystyle g_{k}=k^{d-2}G_{k}} and the dimensionless cosmological constant λ k = k − 2 Λ k {\displaystyle \lambda _{k}=k^{-2}\Lambda _{k}} , were first derived for any value of the spacetime dimensionality, including the cases of d {\displaystyle d} below and above 4 {\displaystyle 4} dimensions. In particular, in d = 4 {\displaystyle d=4} dimensions they determine the RG flow in the plane spanned by g {\displaystyle g} and λ {\displaystyle \lambda } . The most important result is the existence of a non-Gaussian fixed point suitable for asymptotic safety. It is UV-attractive both in the g {\displaystyle g} - and in the λ {\displaystyle \lambda } -direction. This fixed point is related to the one found in d = 2 + ϵ {\displaystyle d=2+\epsilon } dimensions by perturbative methods in the sense that it is recovered in the nonperturbative approach presented here by inserting d = 2 + ϵ {\displaystyle d=2+\epsilon } into the β {\displaystyle \beta } -functions and expanding in powers of ϵ {\displaystyle \epsilon } . Since the β {\displaystyle \beta } -functions were shown to exist and explicitly computed for any real, i.e., not necessarily integer, value of d {\displaystyle d} , no analytic continuation is involved here. The fixed point in d = 4 {\displaystyle d=4} dimensions, too, is a direct result of the nonperturbative flow equations, and, in contrast to the earlier attempts, no extrapolation in ϵ {\displaystyle \epsilon } is required. === Extended truncations === Subsequently, the existence of the fixed point found within the Einstein–Hilbert truncation has been confirmed in subspaces of successively increasing complexity. The next step in this development was the inclusion of an R 2 {\displaystyle R^{2}} -term in the truncation ansatz. This has been extended further by taking into account polynomials of the scalar curvature R {\displaystyle R} (so-called f ( R ) {\displaystyle f(R)} -truncations), and the square of the Weyl curvature tensor. Also, f(R) theories have been investigated in the Local Potential Approximation, finding nonperturbative fixed points in support of the Asymptotic Safety scenario and leading to the so-called Benedetti–Caravelli (BC) fixed point. In the BC formulation, the differential equation for the Ricci scalar R is overconstrained, but some of these constraints can be removed via the resolution of movable singularities. Moreover, the impact of various kinds of matter fields has been investigated. Computations based on a field reparametrization invariant effective average action also seem to recover the crucial fixed point. In combination these results constitute strong evidence that gravity in four dimensions is a nonperturbatively renormalizable quantum field theory, indeed with a UV critical surface of reduced dimensionality, coordinatized by only a few relevant couplings. == Microscopic structure of spacetime == Results of asymptotic safety related investigations indicate that the effective spacetimes of QEG have fractal-like properties on microscopic scales.
It is possible to determine, for instance, their spectral dimension and argue that they undergo a dimensional reduction from 4 dimensions at macroscopic distances to 2 dimensions microscopically. In this context it might be possible to draw the connection to other approaches to quantum gravity, e.g. to causal dynamical triangulations, and compare the results. == Physics applications == Phenomenological consequences of the asymptotic safety scenario have been investigated in many areas of gravitational physics. As an example, asymptotic safety in combination with the Standard Model allows a statement about the mass of the Higgs boson and the value of the fine-structure constant. Furthermore, it provides possible explanations for particular phenomena in cosmology and astrophysics, concerning black holes or inflation, for instance. These different studies take advantage of the possibility that the requirement of asymptotic safety can give rise to new predictions and conclusions for the models considered, often without depending on additional, possibly unobserved, assumptions. == Criticism == Some researchers argued that the current implementations of the asymptotic safety program for gravity have unphysical features, such as the running of the Newton constant. Others argued that the very concept of asymptotic safety is a misnomer, as it suggests a novel feature compared to the Wilsonian RG paradigm, while there is none (at least in the quantum field theory context, where this term is also used). == See also == == References == == Further reading == Niedermaier, Max; Reuter, Martin (2006). "The Asymptotic Safety Scenario in Quantum Gravity". Living Rev. Relativ. 9 (1): 5. Bibcode:2006LRR.....9....5N. doi:10.12942/lrr-2006-5. PMC 5256001. PMID 28179875. Percacci, Roberto (2009). "Asymptotic Safety". In Oriti, D. (ed.). Approaches to Quantum Gravity: Towards a New Understanding of Space, Time and Matter. Cambridge University Press. arXiv:0709.3851. Bibcode:2007arXiv0709.3851P. Berges, Jürgen; Tetradis, Nikolaos; Wetterich, Christof (2002). "Non-perturbative renormalization flow in quantum field theory and statistical physics". Physics Reports. 363 (4–6): 223–386. arXiv:hep-ph/0005122. Bibcode:2002PhR...363..223B. doi:10.1016/S0370-1573(01)00098-9. S2CID 119033356. Reuter, Martin; Saueressig, Frank (2012). "Quantum Einstein Gravity". New J. Phys. 14 (5): 055022. arXiv:1202.2274. Bibcode:2012NJPh...14e5022R. doi:10.1088/1367-2630/14/5/055022. S2CID 119205964. Bonanno, Alfio; Saueressig, Frank (2017). "Asymptotically safe cosmology – a status report". Comptes Rendus Physique. 18 (3–4): 254. arXiv:1702.04137. Bibcode:2017CRPhy..18..254B. doi:10.1016/j.crhy.2017.02.002. S2CID 119045691. Litim, Daniel (2011). "Renormalisation group and the Planck scale". Philosophical Transactions of the Royal Society A. 69 (1946): 2759–2778. arXiv:1102.4624. Bibcode:2011RSPTA.369.2759L. doi:10.1098/rsta.2011.0103. PMID 21646277. S2CID 8888965. Nagy, Sandor (2012). "Lectures on renormalization and asymptotic safety". Annals of Physics. 350: 310–346. arXiv:1211.4151. Bibcode:2014AnPhy.350..310N. doi:10.1016/j.aop.2014.07.027. S2CID 119183995. == External links == The Asymptotic Safety FAQs – A collection of questions and answers about asymptotic safety and a comprehensive list of references. Asymptotic Safety in quantum gravity – A Scholarpedia article about the same topic with some more details on the gravitational effective average action. The Quantum Theory of Fields: Effective or Fundamental? 
– A talk by Steven Weinberg at CERN on July 7, 2009. Asymptotic Safety - 30 Years Later – All talks of the workshop held at the Perimeter Institute on November 5 – 8, 2009. Four radical routes to a theory of everything – An article by Amanda Gefter on quantum gravity, published 2008 in New Scientist (Physics & Math). "Weinberg "Living with infinities" - Källén Lecture 2009". YouTube. Andrea Idini. January 14, 2022. (From 1:11:28 to 1:18:10 in the video, Weinberg gives a brief discussion of asymptotic safety. Also see Weinberg's answer to Cecilia Jarlskog's question at the end of the lecture. The 2009 Källén lecture was recorded on February 13, 2009.)
Wikipedia:Asymptotology#0
Asymptotology has been defined as “the art of dealing with applied mathematical systems in limiting cases” as well as “the science about the synthesis of simplicity and exactness by means of localization". == Principles == The field of asymptotics is normally first encountered in school geometry with the introduction of the asymptote, a line to which a curve tends at infinity. The word Ασύμπτωτος (asymptotos) in Greek means non-coincident and puts strong emphasis on the point that approximation does not turn into coincidence. It is a salient feature of asymptotics, but this property alone does not entirely cover the idea of asymptotics and, etymologically, the term seems to be quite insufficient. == Perturbation theory, small and large parameters == In physics and other fields of science, one frequently comes across problems of an asymptotic nature, such as damping, orbiting, stabilization of a perturbed motion, etc. Their solutions lend themselves to asymptotic analysis (perturbation theory), which is widely used in modern applied mathematics, mechanics and physics. But asymptotic methods put a claim on being more than a part of classical mathematics. K. Friedrichs said: “Asymptotic description is not only a convenient tool in the mathematical analysis of nature, it has some more fundamental significance”. M. Kruskal introduced the special term asymptotology, defined above, and called for a formalization of the accumulated experience to convert the art of asymptotology to a science. A general term is capable of possessing significant heuristic value. In his essay "The Future of Mathematics", H. Poincaré wrote the following. The invention of a new word will often be sufficient to bring out the relation, and the word will be creative.... It is hardly possible to believe what economy of thought, as Mach used to say, can be effected by a well-chosen term.... Mathematics is the art of giving the same name to different things.... When language has been well chosen, one is astonished to find that all demonstrations made for a known object apply immediately to many new objects: nothing requires to be changed, not even the terms, since the names have become the same.... The bare fact, then, has sometimes no great interest ... it only acquires a value when some more careful thinker perceives the connection it brings out, and symbolizes it by a term. In addition, “the success of ‘cybernetics’, ‘attractors’ and ‘catastrophe theory’ illustrates the fruitfulness of word creation as scientific research”. Almost every physical theory, formulated in the most general manner, is rather difficult from a mathematical point of view. Therefore, both at the genesis of the theory and its further development, the simplest limiting cases, which allow analytical solutions, are of particular importance. In those limits, the number of equations usually decreases, their order reduces, nonlinear equations can be replaced by linear ones, the initial system becomes averaged in a certain sense, and so on. All these idealizations, different as they may seem, increase the degree of symmetry of the mathematical model of the phenomenon under consideration. == Asymptotic approach == In essence, the asymptotic approach to a complex problem consists in treating the insufficiently symmetrical governing system as close to a certain symmetrical one as possible. 
In attempting to obtain a better approximation of the exact solution to the given problem, it is crucial that the determination of corrective solutions, which depart from the limit case, be much simpler than directly investigating the governing system. At first sight, the possibilities of such an approach seem restricted to varying the parameters determining the system only within a narrow range. However, experience in the investigation of different physical problems shows that if the system's parameters have changed sufficiently and the system has deviated far from the symmetrical limit case, another limit system, often with less obvious symmetries, can be found, to which an asymptotic analysis is also applicable. This allows one to describe the system's behavior on the basis of a small number of limit cases over the whole range of parameter variations. Such an approach corresponds to the maximum level of intuition, promotes further insights, and eventually leads to the formulation of new physical concepts. It is also important that asymptotic analysis helps to establish the connection between different physical theories. The aim of the asymptotic approach is to simplify the object. This simplification is attained by shrinking the neighborhood of the singularity under consideration. It is typical that the accuracy of asymptotic expansions grows with localization. Exactness and simplicity are commonly regarded as mutually exclusive notions. When tending to simplicity, we sacrifice exactness, and trying to achieve exactness, we expect no simplicity. Under localization, however, the antipodes converge; the contradiction is resolved in a synthesis called asymptotics. In other words, simplicity and exactness are coupled by an “uncertainty principle” relation while the domain size serves as a small parameter – a measure of uncertainty. == Asymptotic uncertainty principle == Let us illustrate the “asymptotic uncertainty principle”. Take the expansion of the function f ( x ) {\displaystyle f(x)} in an asymptotic sequence ϕ n ( x ) {\displaystyle {\phi _{n}(x)}} : f ( x ) = ∑ n = 0 ∞ a n ϕ n ( x ) {\displaystyle f(x)=\sum _{n=0}^{\infty }a_{n}\phi _{n}(x)} , x {\displaystyle x} → 0 {\displaystyle 0} . A partial sum of the series is designated by S N ( x ) {\displaystyle S_{N}(x)} , and the exactness of approximation at a given N {\displaystyle N} is estimated by Δ N ( x ) = | f ( x ) − S N ( x ) | {\displaystyle \Delta _{N}(x)=|f(x)-S_{N}(x)|} . Simplicity is characterized here by the number N {\displaystyle N} and the locality by the length of interval x {\displaystyle x} . Based on known properties of the asymptotic expansion, we consider the pairwise interrelation of the values x {\displaystyle x} , N {\displaystyle N} , and Δ {\displaystyle \Delta } . At a fixed x {\displaystyle x} the partial sums initially approach the function, i.e., the exactness increases at the cost of simplicity. If we fix N {\displaystyle N} , the exactness and the interval size begin to compete. The smaller the interval, the more easily a given value of Δ {\displaystyle \Delta } is reached. We illustrate these regularities using a simple example. Consider the exponential integral function: Ei ⁡ ( y ) = ∫ − ∞ y e ζ ζ − 1 d ζ , y < 0 {\displaystyle \operatorname {Ei} (y)=\int _{-\infty }^{y}e^{\zeta }\zeta ^{-1}d{\zeta },y<0} . Integrating by parts, we obtain the following asymptotic expansion Ei ⁡ ( y ) ∼ e y ∑ n = 1 ∞ ( n − 1 ) !
y − n , y {\displaystyle \operatorname {Ei} (y)\sim e^{y}\sum _{n=1}^{\infty }(n-1)!y^{-n},\;y} → − ∞ {\displaystyle -\infty } . Put f ( x ) = − e − y Ei ⁡ ( y ) {\displaystyle f(x)=-e^{-y}\operatorname {Ei} (y)} , y = − x − 1 {\displaystyle y=-x^{-1}} . Calculating the partial sums of this series and the values Δ N ( x ) {\displaystyle \Delta _{N}(x)} and f ( x ) {\displaystyle f(x)} for different x {\displaystyle x} yields:

x     f(x)    Δ1     Δ2     Δ3     Δ4     Δ5      Δ6      Δ7
1/3   0.262   0.071  0.040  0.034  0.040  0.060   0.106   0.223
1/5   0.171   0.029  0.011  0.006  0.004  0.0035  0.0040  0.0043
1/7   0.127   0.016  0.005  0.002  0.001  0.0006  0.0005  0.0004

Thus, at a given x {\displaystyle x} , the exactness first increases with the growth of N {\displaystyle N} and then decreases (so one has an asymptotic expansion). For a given N {\displaystyle N} , one may observe an improvement of exactness with diminishing x {\displaystyle x} . Finally, is it worth using asymptotic analysis if computers and numerical methods have reached such an advanced state? As D. G. Crighton has mentioned, Design of computational or experimental schemes without the guidance of asymptotic information is wasteful at best, dangerous at worst, because of the possible failure to identify crucial (stiff) features of the process and their localization in coordinate and parameter space. Moreover, all experience suggests that asymptotic solutions are useful numerically far beyond their nominal range of validity, and can often be used directly, at least at a preliminary product design stage, for example, saving the need for accurate computation until the final design stage where many variables have been restricted to narrow ranges. == Notes == == References == Andrianov I.V., Manevitch L.I. Asymptotology: Ideas, Methods, and Applications. Kluwer Academic Publishers, 2002. Dewar R.L. "Asymptotology – a cautionary tale", ANZIAM Journal, 2002, 44, 33–40. doi:10.1017/S1446181100007884 Friedrichs K.O. "Asymptotic phenomena in mathematical physics", Bulletin of the American Mathematical Society, 1955, 61, 485–504. Segel L.A. "The importance of asymptotic analysis in Applied Mathematics", American Mathematical Monthly, 1966, 73, 7–14. White R.B. Asymptotic Analysis of Differential Equations, Revised Edition, London: Imperial College Press, 2010.
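The numbers in the table above can be checked directly. The following sketch is not part of the article: it evaluates f(x) = e^{1/x} E1(1/x), which equals −e^{−y}Ei(y) with y = −1/x, together with the partial sums S_N(x) = Σ_{n=1}^{N} (−1)^{n+1} (n−1)! x^n, using scipy.special.exp1 for the exponential integral E1.

```python
# Numerical check of the table: Delta_N(x) = |f(x) - S_N(x)| for N = 1..7.
from math import exp, factorial
from scipy.special import exp1          # E1, exponential integral of the first kind

def f(x):
    # f(x) = -e^{-y} Ei(y) with y = -1/x, rewritten via Ei(-z) = -E1(z) for z > 0
    return exp(1.0 / x) * exp1(1.0 / x)

def partial_sum(x, N):
    return sum((-1) ** (n + 1) * factorial(n - 1) * x**n for n in range(1, N + 1))

for x in (1 / 3, 1 / 5, 1 / 7):
    deltas = [abs(f(x) - partial_sum(x, N)) for N in range(1, 8)]
    print(f"x = {x:.4f}  f(x) = {f(x):.3f}  " + "  ".join(f"{d:.4f}" for d in deltas))
```

For x = 1/3 the error is smallest at N = 3 and then grows again, reproducing the divergence of the asymptotic series; for smaller x the optimal truncation order moves to larger N.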
Wikipedia:Athanase Dupré#0
Louis Victoire Athanase Dupré (28 December 1808 – 10 August 1869) was a French mathematician and physicist noted for his 1860s publications on the mechanical theory of heat (thermodynamics), work that was said to have inspired the publications of the engineer François Massieu and his Massieu functions, which in turn inspired the work of the American engineer Willard Gibbs and his fundamental equations. == See also == Young–Dupré equation == References == Athanase Dupre Biography at the MacTutor History of Mathematics archive
Wikipedia:Atiyah–Singer index theorem#0
In differential geometry, the Atiyah–Singer index theorem, proved by Michael Atiyah and Isadore Singer (1963), states that for an elliptic differential operator on a compact manifold, the analytical index (related to the dimension of the space of solutions) is equal to the topological index (defined in terms of some topological data). It includes many other theorems, such as the Chern–Gauss–Bonnet theorem and Riemann–Roch theorem, as special cases, and has applications to theoretical physics. == History == The index problem for elliptic differential operators was posed by Israel Gel'fand. He noticed the homotopy invariance of the index, and asked for a formula for it by means of topological invariants. Some of the motivating examples included the Riemann–Roch theorem and its generalization the Hirzebruch–Riemann–Roch theorem, and the Hirzebruch signature theorem. Friedrich Hirzebruch and Armand Borel had proved the integrality of the Â genus of a spin manifold, and Atiyah suggested that this integrality could be explained if it were the index of the Dirac operator (which was rediscovered by Atiyah and Singer in 1961). The Atiyah–Singer theorem was announced in 1963. The proof sketched in this announcement was never published by them, though it appears in Palais's book. It appears also in the "Séminaire Cartan-Schwartz 1963/64" that was held in Paris simultaneously with the seminar led by Richard Palais at Princeton University. The last talk in Paris was by Atiyah on manifolds with boundary. Their first published proof replaced the cobordism theory of the first proof with K-theory, and they used this to give proofs of various generalizations in another sequence of papers. 1965: Sergey P. Novikov published his results on the topological invariance of the rational Pontryagin classes on smooth manifolds. Robion Kirby and Laurent C. Siebenmann's results, combined with René Thom's paper, proved the existence of rational Pontryagin classes on topological manifolds. The rational Pontryagin classes are essential ingredients of the index theorem on smooth and topological manifolds. 1969: Michael Atiyah defines abstract elliptic operators on arbitrary metric spaces. Abstract elliptic operators became protagonists in Kasparov's theory and Connes's noncommutative differential geometry. 1971: Isadore Singer proposes a comprehensive program for future extensions of index theory. 1972: Gennadi G. Kasparov publishes his work on the realization of K-homology by abstract elliptic operators. 1973: Atiyah, Raoul Bott, and Vijay Patodi gave a new proof of the index theorem using the heat equation, described in a paper by Melrose. 1977: Dennis Sullivan establishes his theorem on the existence and uniqueness of Lipschitz and quasiconformal structures on topological manifolds of dimension different from 4. 1983: Ezra Getzler, motivated by ideas of Edward Witten and Luis Alvarez-Gaume, gave a short proof of the local index theorem for operators that are locally Dirac operators; this covers many of the useful cases. 1983: Nicolae Teleman proves that the analytical indices of signature operators with values in vector bundles are topological invariants. 1984: Teleman establishes the index theorem on topological manifolds. 1986: Alain Connes publishes his fundamental paper on noncommutative geometry. 1989: Simon K. Donaldson and Sullivan study Yang–Mills theory on quasiconformal manifolds of dimension 4. They introduce the signature operator S defined on differential forms of degree two.
1990: Connes and Henri Moscovici prove the local index formula in the context of non-commutative geometry. 1994: Connes, Sullivan, and Teleman prove the index theorem for signature operators on quasiconformal manifolds. == Notation == X is a compact smooth manifold (without boundary). E and F are smooth vector bundles over X. D is an elliptic differential operator from E to F. So in local coordinates it acts as a differential operator, taking smooth sections of E to smooth sections of F. == Symbol of a differential operator == If D is a differential operator on a Euclidean space of order n in k variables x 1 , … , x k {\displaystyle x_{1},\dots ,x_{k}} , then its symbol is the function of 2k variables x 1 , … , x k , y 1 , … , y k {\displaystyle x_{1},\dots ,x_{k},y_{1},\dots ,y_{k}} , given by dropping all terms of order less than n and replacing ∂ / ∂ x i {\displaystyle \partial /\partial x_{i}} by y i {\displaystyle y_{i}} . So the symbol is homogeneous in the variables y, of degree n. The symbol is well defined even though ∂ / ∂ x i {\displaystyle \partial /\partial x_{i}} does not commute with x i {\displaystyle x_{i}} because we keep only the highest order terms and differential operators commute "up to lower-order terms". The operator is called elliptic if the symbol is nonzero whenever at least one y is nonzero. Example: The Laplace operator in k variables has symbol y 1 2 + ⋯ + y k 2 {\displaystyle y_{1}^{2}+\cdots +y_{k}^{2}} , and so is elliptic as this is nonzero whenever any of the y i {\displaystyle y_{i}} 's are nonzero. The wave operator has symbol − y 1 2 + ⋯ + y k 2 {\displaystyle -y_{1}^{2}+\cdots +y_{k}^{2}} , which is not elliptic if k ≥ 2 {\displaystyle k\geq 2} , as the symbol vanishes for some non-zero values of the ys. The symbol of a differential operator of order n on a smooth manifold X is defined in much the same way using local coordinate charts, and is a function on the cotangent bundle of X, homogeneous of degree n on each cotangent space. (In general, differential operators transform in a rather complicated way under coordinate transforms (see jet bundle); however, the highest order terms transform like tensors so we get well defined homogeneous functions on the cotangent spaces that are independent of the choice of local charts.) More generally, the symbol of a differential operator between two vector bundles E and F is a section of the pullback of the bundle Hom(E, F) to the cotangent bundle of X. The differential operator is called elliptic if the element of Hom(Ex, Fx) is invertible for all non-zero cotangent vectors at any point x of X. A key property of elliptic operators is that they are almost invertible; this is closely related to the fact that their symbols are almost invertible. More precisely, an elliptic operator D on a compact manifold has a (non-unique) parametrix (or pseudoinverse) D′ such that DD′ − 1 and D′D − 1 are both compact operators (here 1 denotes the identity operator). An important consequence is that the kernel of D is finite-dimensional: it is contained in the eigenspace of the compact operator D′D − 1 for the eigenvalue −1, and eigenspaces of compact operators for nonzero eigenvalues are finite-dimensional. (The pseudoinverse of an elliptic differential operator is almost never a differential operator. However, it is an elliptic pseudodifferential operator.) == Analytical index == As the elliptic differential operator D has a pseudoinverse, it is a Fredholm operator.
Any Fredholm operator has an index, defined as the difference between the (finite) dimension of the kernel of D (solutions of Df = 0), and the (finite) dimension of the cokernel of D (the constraints on the right-hand-side of an inhomogeneous equation like Df = g, or equivalently the kernel of the adjoint operator). In other words, Index(D) = dim Ker(D) − dim Coker(D) = dim Ker(D) − dim Ker(D*). This is sometimes called the analytical index of D. Example: Suppose that the manifold is the circle (thought of as R/Z), and D is the operator d/dx − λ for some complex constant λ. (This is the simplest example of an elliptic operator.) Then the kernel is the space of multiples of exp(λx) if λ is an integral multiple of 2πi and is 0 otherwise, and the kernel of the adjoint is a similar space with λ replaced by its complex conjugate. So D has index 0. This example shows that the kernel and cokernel of elliptic operators can jump discontinuously as the elliptic operator varies, so there is no nice formula for their dimensions in terms of continuous topological data. However the jumps in the dimensions of the kernel and cokernel are the same, so the index, given by the difference of their dimensions, does indeed vary continuously, and can be given in terms of topological data by the index theorem. == Topological index == The topological index of an elliptic differential operator D {\displaystyle D} between smooth vector bundles E {\displaystyle E} and F {\displaystyle F} on an n {\displaystyle n} -dimensional compact manifold X {\displaystyle X} is given by ( − 1 ) n ch ⁡ ( D ) Td ⁡ ( X ) [ X ] = ( − 1 ) n ∫ X ch ⁡ ( D ) Td ⁡ ( X ) {\displaystyle (-1)^{n}\operatorname {ch} (D)\operatorname {Td} (X)[X]=(-1)^{n}\int _{X}\operatorname {ch} (D)\operatorname {Td} (X)} in other words the value of the top dimensional component of the mixed cohomology class ch ⁡ ( D ) Td ⁡ ( X ) {\displaystyle \operatorname {ch} (D)\operatorname {Td} (X)} on the fundamental homology class of the manifold X {\displaystyle X} up to a difference of sign. Here, Td ⁡ ( X ) {\displaystyle \operatorname {Td} (X)} is the Todd class of the complexified tangent bundle of X {\displaystyle X} . ch ⁡ ( D ) {\displaystyle \operatorname {ch} (D)} is equal to φ − 1 ( ch ⁡ ( d ( p ∗ E , p ∗ F , σ ( D ) ) ) ) {\displaystyle \varphi ^{-1}(\operatorname {ch} (d(p^{*}E,p^{*}F,\sigma (D))))} , where φ : H k ( X ; Q ) → H n + k ( B ( X ) / S ( X ) ; Q ) {\displaystyle \varphi :H^{k}(X;\mathbb {Q} )\to H^{n+k}(B(X)/S(X);\mathbb {Q} )} is the Thom isomorphism for the sphere bundle p : B ( X ) / S ( X ) → X {\displaystyle p:B(X)/S(X)\to X} ch : K ( X ) ⊗ Q → H ∗ ( X ; Q ) {\displaystyle \operatorname {ch} :K(X)\otimes \mathbb {Q} \to H^{*}(X;\mathbb {Q} )} is the Chern character d ( p ∗ E , p ∗ F , σ ( D ) ) {\displaystyle d(p^{*}E,p^{*}F,\sigma (D))} is the "difference element" in K ( B ( X ) / S ( X ) ) {\displaystyle K(B(X)/S(X))} associated to two vector bundles p ∗ E {\displaystyle p^{*}E} and p ∗ F {\displaystyle p^{*}F} on B ( X ) {\displaystyle B(X)} and an isomorphism σ ( D ) {\displaystyle \sigma (D)} between them on the subspace S ( X ) {\displaystyle S(X)} . σ ( D ) {\displaystyle \sigma (D)} is the symbol of D {\displaystyle D} In some situations, it is possible to simplify the above formula for computational purposes. 
In particular, if X {\displaystyle X} is a 2 m {\displaystyle 2m} -dimensional orientable (compact) manifold with non-zero Euler class e ( T X ) {\displaystyle e(TX)} , then applying the Thom isomorphism and dividing by the Euler class, the topological index may be expressed as ( − 1 ) m ∫ X ch ⁡ ( E ) − ch ⁡ ( F ) e ( T X ) Td ⁡ ( X ) {\displaystyle (-1)^{m}\int _{X}{\frac {\operatorname {ch} (E)-\operatorname {ch} (F)}{e(TX)}}\operatorname {Td} (X)} where division makes sense by pulling e ( T X ) − 1 {\displaystyle e(TX)^{-1}} back from the cohomology ring of the classifying space B S O {\displaystyle BSO} . One can also define the topological index using only K-theory (and this alternative definition is compatible in a certain sense with the Chern-character construction above). If X is a compact submanifold of a manifold Y then there is a pushforward (or "shriek") map from K(TX) to K(TY). The topological index of an element of K(TX) is defined to be the image of this operation with Y some Euclidean space, for which K(TY) can be naturally identified with the integers Z (as a consequence of Bott-periodicity). This map is independent of the embedding of X in Euclidean space. Now a differential operator as above naturally defines an element of K(TX), and the image in Z under this map "is" the topological index. As usual, D is an elliptic differential operator between vector bundles E and F over a compact manifold X. The index problem is the following: compute the (analytical) index of D using only the symbol s and topological data derived from the manifold and the vector bundle. The Atiyah–Singer index theorem solves this problem, and states: The analytical index of D is equal to its topological index. In spite of its formidable definition, the topological index is usually straightforward to evaluate explicitly. So this makes it possible to evaluate the analytical index. (The cokernel and kernel of an elliptic operator are in general extremely hard to evaluate individually; the index theorem shows that we can usually at least evaluate their difference.) Many important invariants of a manifold (such as the signature) can be given as the index of suitable differential operators, so the index theorem allows us to evaluate these invariants in terms of topological data. Although the analytical index is usually hard to evaluate directly, it is at least obviously an integer. The topological index is by definition a rational number, but it is usually not at all obvious from the definition that it is also integral. So the Atiyah–Singer index theorem implies some deep integrality properties, as it implies that the topological index is integral. The index of an elliptic differential operator obviously vanishes if the operator is self adjoint. It also vanishes if the manifold X has odd dimension, though there are pseudodifferential elliptic operators whose index does not vanish in odd dimensions. === Relation to Grothendieck–Riemann–Roch === The Grothendieck–Riemann–Roch theorem was one of the main motivations behind the index theorem because the index theorem is the counterpart of this theorem in the setting of real manifolds. 
Now, if there's a map f : X → Y {\displaystyle f:X\to Y} of compact stably almost complex manifolds, then there is a commutative diagram K ( X ) → Td ( X ) ⋅ ch H ( X ; Q ) f ∗ ↓ ↓ f ∗ K ( Y ) → Td ( Y ) ⋅ ch H ( Y ; Q ) {\displaystyle {\begin{array}{ccc}&&&\\&K(X)&{\xrightarrow[{}]{{\text{Td}}(X)\cdot {\text{ch}}}}&H(X;\mathbb {Q} )&\\&f_{*}{\Bigg \downarrow }&&{\Bigg \downarrow }f_{*}\\&K(Y)&{\xrightarrow[{{\text{Td}}(Y)\cdot {\text{ch}}}]{}}&H(Y;\mathbb {Q} )&\\&&&\\\end{array}}} if Y = ∗ {\displaystyle Y=*} is a point, then we recover the statement above. Here K ( X ) {\displaystyle K(X)} is the Grothendieck group of complex vector bundles. This commutative diagram is formally very similar to the GRR theorem because the cohomology groups on the right are replaced by the Chow ring of a smooth variety, and the Grothendieck group on the left is given by the Grothendieck group of algebraic vector bundles. == Extensions of the Atiyah–Singer index theorem == === Teleman index theorem === Due to (Teleman 1983), (Teleman 1984): For any abstract elliptic operator (Atiyah 1970) on a closed, oriented, topological manifold, the analytical index equals the topological index. The proof of this result goes through specific considerations, including the extension of Hodge theory on combinatorial and Lipschitz manifolds (Teleman 1980), (Teleman 1983), the extension of Atiyah–Singer's signature operator to Lipschitz manifolds (Teleman 1983), Kasparov's K-homology (Kasparov 1972) and topological cobordism (Kirby & Siebenmann 1977). This result shows that the index theorem is not merely a differentiability statement, but rather a topological statement. === Connes–Donaldson–Sullivan–Teleman index theorem === Due to (Donaldson & Sullivan 1989), (Connes, Sullivan & Teleman 1994): For any quasiconformal manifold there exists a local construction of the Hirzebruch–Thom characteristic classes. This theory is based on a signature operator S, defined on middle degree differential forms on even-dimensional quasiconformal manifolds (compare (Donaldson & Sullivan 1989)). Using topological cobordism and K-homology one may provide a full statement of an index theorem on quasiconformal manifolds (see page 678 of (Connes, Sullivan & Teleman 1994)). The work (Connes, Sullivan & Teleman 1994) "provides local constructions for characteristic classes based on higher dimensional relatives of the measurable Riemann mapping in dimension two and the Yang–Mills theory in dimension four." These results constitute significant advances along the lines of Singer's program Prospects in Mathematics (Singer 1971). At the same time, they provide, also, an effective construction of the rational Pontrjagin classes on topological manifolds. The paper (Teleman 1985) provides a link between Thom's original construction of the rational Pontrjagin classes (Thom 1956) and index theory. It is important to mention that the index formula is a topological statement. The obstruction theories due to Milnor, Kervaire, Kirby, Siebenmann, Sullivan, Donaldson show that only a minority of topological manifolds possess differentiable structures and these are not necessarily unique. Sullivan's result on Lipschitz and quasiconformal structures (Sullivan 1979) shows that any topological manifold in dimension different from 4 possesses such a structure which is unique (up to isotopy close to identity). The quasiconformal structures (Connes, Sullivan & Teleman 1994) and more generally the Lp-structures, p > n(n+1)/2, introduced by M. 
Hilsum (Hilsum 1999), are the weakest analytical structures on topological manifolds of dimension n for which the index theorem is known to hold. === Other extensions === The Atiyah–Singer theorem applies to elliptic pseudodifferential operators in much the same way as for elliptic differential operators. In fact, for technical reasons most of the early proofs worked with pseudodifferential rather than differential operators: their extra flexibility made some steps of the proofs easier. Instead of working with an elliptic operator between two vector bundles, it is sometimes more convenient to work with an elliptic complex 0 → E 0 → E 1 → E 2 → ⋯ → E m → 0 {\displaystyle 0\rightarrow E_{0}\rightarrow E_{1}\rightarrow E_{2}\rightarrow \dotsm \rightarrow E_{m}\rightarrow 0} of vector bundles. The difference is that the symbols now form an exact sequence (off the zero section). In the case when there are just two non-zero bundles in the complex this implies that the symbol is an isomorphism off the zero section, so an elliptic complex with 2 terms is essentially the same as an elliptic operator between two vector bundles. Conversely the index theorem for an elliptic complex can easily be reduced to the case of an elliptic operator: the two vector bundles are given by the sums of the even or odd terms of the complex, and the elliptic operator is the sum of the operators of the elliptic complex and their adjoints, restricted to the sum of the even bundles. If the manifold is allowed to have boundary, then some restrictions must be put on the domain of the elliptic operator in order to ensure a finite index. These conditions can be local (like demanding that the sections in the domain vanish at the boundary) or more complicated global conditions (like requiring that the sections in the domain solve some differential equation). The local case was worked out by Atiyah and Bott, but they showed that many interesting operators (e.g., the signature operator) do not admit local boundary conditions. To handle these operators, Atiyah, Patodi and Singer introduced global boundary conditions equivalent to attaching a cylinder to the manifold along the boundary and then restricting the domain to those sections that are square integrable along the cylinder. This point of view is adopted in the proof of Melrose (1993) of the Atiyah–Patodi–Singer index theorem. Instead of just one elliptic operator, one can consider a family of elliptic operators parameterized by some space Y. In this case the index is an element of the K-theory of Y, rather than an integer. If the operators in the family are real, then the index lies in the real K-theory of Y. This gives a little extra information, as the map from the real K-theory of Y to the complex K-theory is not always injective. If there is a group action of a group G on the compact manifold X, commuting with the elliptic operator, then one replaces ordinary K-theory with equivariant K-theory. Moreover, one gets generalizations of the Lefschetz fixed-point theorem, with terms coming from fixed-point submanifolds of the group G. See also: equivariant index theorem. Atiyah (1976) showed how to extend the index theorem to some non-compact manifolds, acted on by a discrete group with compact quotient. The kernel of the elliptic operator is in general infinite dimensional in this case, but it is possible to get a finite index using the dimension of a module over a von Neumann algebra; this index is in general real rather than integer valued. 
This version is called the L2 index theorem, and was used by Atiyah & Schmid (1977) to rederive properties of the discrete series representations of semisimple Lie groups. The Callias index theorem is an index theorem for a Dirac operator on a noncompact odd-dimensional space. The Atiyah–Singer index is only defined on compact spaces, and vanishes when their dimension is odd. In 1978 Constantine Callias, at the suggestion of his Ph.D. advisor Roman Jackiw, used the axial anomaly to derive this index theorem on spaces equipped with a Hermitian matrix called the Higgs field. The index of the Dirac operator is a topological invariant which measures the winding of the Higgs field on a sphere at infinity. If U is the unit matrix in the direction of the Higgs field, then the index is proportional to the integral of U(dU)n−1 over the (n−1)-sphere at infinity. If n is even, it is always zero. The topological interpretation of this invariant and its relation to the Hörmander index proposed by Boris Fedosov, as generalized by Lars Hörmander, was published by Raoul Bott and Robert Thomas Seeley. == Examples == === Chern-Gauss-Bonnet theorem === Suppose that M {\displaystyle M} is a compact oriented manifold of dimension n = 2 r {\displaystyle n=2r} . If we take Λ even {\displaystyle \Lambda ^{\text{even}}} to be the sum of the even exterior powers of the cotangent bundle, and Λ odd {\displaystyle \Lambda ^{\text{odd}}} to be the sum of the odd powers, define D = d + d ∗ {\displaystyle D=d+d^{*}} , considered as a map from Λ even {\displaystyle \Lambda ^{\text{even}}} to Λ odd {\displaystyle \Lambda ^{\text{odd}}} . Then the analytical index of D {\displaystyle D} is the Euler characteristic χ ( M ) {\displaystyle \chi (M)} of the Hodge cohomology of M {\displaystyle M} , and the topological index is the integral of the Euler class over the manifold. The index formula for this operator yields the Chern–Gauss–Bonnet theorem. The concrete computation goes as follows: according to one variation of the splitting principle, if E {\displaystyle E} is a real vector bundle of dimension n = 2 r {\displaystyle n=2r} , in order to prove assertions involving characteristic classes, we may suppose that there are complex line bundles l 1 , … , l r {\displaystyle l_{1},\,\ldots ,\,l_{r}} such that E ⊗ C = l 1 ⊕ l 1 ¯ ⊕ ⋯ l r ⊕ l r ¯ {\displaystyle E\otimes \mathbb {C} =l_{1}\oplus {\overline {l_{1}}}\oplus \dotsm l_{r}\oplus {\overline {l_{r}}}} . Therefore, we can consider the Chern roots x i ( E ⊗ C ) = c 1 ( l i ) {\displaystyle x_{i}(E\otimes \mathbb {C} )=c_{1}(l_{i})} , x r + i ( E ⊗ C ) = c 1 ( l i ¯ ) = − x i ( E ⊗ C ) {\displaystyle x_{r+i}(E\otimes \mathbb {C} )=c_{1}{\mathord {\left({\overline {l_{i}}}\right)}}=-x_{i}(E\otimes \mathbb {C} )} , i = 1 , … , r {\displaystyle i=1,\,\ldots ,\,r} . Using Chern roots as above and the standard properties of the Euler class, we have that e ( T M ) = ∏ i r x i ( T M ⊗ C ) {\textstyle e(TM)=\prod _{i}^{r}x_{i}(TM\otimes \mathbb {C} )} . 
As for the Chern character and the Todd class, ch ⁡ ( Λ even − Λ odd ) = 1 − ch ⁡ ( T ∗ M ⊗ C ) + ch ⁡ ( Λ 2 T ∗ M ⊗ C ) − … + ( − 1 ) n ch ⁡ ( Λ n T ∗ M ⊗ C ) = 1 − ∑ i n e − x i ( T M ⊗ C ) + ∑ i < j e − x i e − x j ( T M ⊗ C ) + … + ( − 1 ) n e − x 1 ⋯ e − x n ( T M ⊗ C ) = ∏ i n ( 1 − e − x i ) ( T M ⊗ C ) Td ⁡ ( T M ⊗ C ) = ∏ i n x i 1 − e − x i ( T M ⊗ C ) {\displaystyle {\begin{aligned}\operatorname {ch} {\mathord {\left(\Lambda ^{\text{even}}-\Lambda ^{\text{odd}}\right)}}&=1-\operatorname {ch} (T^{*}M\otimes \mathbb {C} )+\operatorname {ch} {\mathord {\left(\Lambda ^{2}T^{*}M\otimes \mathbb {C} \right)}}-\ldots +(-1)^{n}\operatorname {ch} {\mathord {\left(\Lambda ^{n}T^{*}M\otimes \mathbb {C} \right)}}\\&=1-\sum _{i}^{n}e^{-x_{i}}(TM\otimes \mathbb {C} )+\sum _{i<j}e^{-x_{i}}e^{-x_{j}}(TM\otimes \mathbb {C} )+\ldots +(-1)^{n}e^{-x_{1}}\dotsm e^{-x_{n}}(TM\otimes \mathbb {C} )\\&=\prod _{i}^{n}\left(1-e^{-x_{i}}\right)(TM\otimes \mathbb {C} )\\[3pt]\operatorname {Td} (TM\otimes \mathbb {C} )&=\prod _{i}^{n}{\frac {x_{i}}{1-e^{-x_{i}}}}(TM\otimes \mathbb {C} )\end{aligned}}} Applying the index theorem, χ ( M ) = ( − 1 ) r ∫ M ∏ i n ( 1 − e − x i ) ∏ i r x i ∏ i n x i 1 − e − x i ( T M ⊗ C ) = ( − 1 ) r ∫ M ( − 1 ) r ∏ i r x i ( T M ⊗ C ) = ∫ M e ( T M ) {\displaystyle \chi (M)=(-1)^{r}\int _{M}{\frac {\prod _{i}^{n}\left(1-e^{-x_{i}}\right)}{\prod _{i}^{r}x_{i}}}\prod _{i}^{n}{\frac {x_{i}}{1-e^{-x_{i}}}}(TM\otimes \mathbb {C} )=(-1)^{r}\int _{M}(-1)^{r}\prod _{i}^{r}x_{i}(TM\otimes \mathbb {C} )=\int _{M}e(TM)} which is the "topological" version of the Chern-Gauss-Bonnet theorem (the geometric one being obtained by applying the Chern-Weil homomorphism). === Hirzebruch–Riemann–Roch theorem === Take X to be a complex manifold of (complex) dimension n with a holomorphic vector bundle V. We let the vector bundles E and F be the sums of the bundles of differential forms with coefficients in V of type (0, i) with i even or odd, and we let the differential operator D be the sum ∂ ¯ + ∂ ¯ ∗ {\displaystyle {\overline {\partial }}+{\overline {\partial }}^{*}} restricted to E. This derivation of the Hirzebruch–Riemann–Roch theorem is more natural if we use the index theorem for elliptic complexes rather than elliptic operators. We can take the complex to be 0 → V → V ⊗ Λ 0 , 1 T ∗ ( X ) → V ⊗ Λ 0 , 2 T ∗ ( X ) → ⋯ {\displaystyle 0\rightarrow V\rightarrow V\otimes \Lambda ^{0,1}T^{*}(X)\rightarrow V\otimes \Lambda ^{0,2}T^{*}(X)\rightarrow \dotsm } with the differential given by ∂ ¯ {\displaystyle {\overline {\partial }}} . Then the i'th cohomology group is just the coherent cohomology group Hi(X, V), so the analytical index of this complex is the holomorphic Euler characteristic of V: index ⁡ ( D ) = ∑ p ( − 1 ) p dim ⁡ H p ( X , V ) = χ ( X , V ) {\displaystyle \operatorname {index} (D)=\sum _{p}(-1)^{p}\dim H^{p}(X,V)=\chi (X,V)} Since we are dealing with complex bundles, the computation of the topological index is simpler. 
Using Chern roots and doing similar computations as in the previous example, the Euler class is given by e ( T X ) = ∏ i n x i ( T X ) {\textstyle e(TX)=\prod _{i}^{n}x_{i}(TX)} and ch ⁡ ( ∑ j n ( − 1 ) j V ⊗ Λ j T ∗ X ¯ ) = ch ⁡ ( V ) ∏ j n ( 1 − e x j ) ( T X ) Td ⁡ ( T X ⊗ C ) = Td ⁡ ( T X ) Td ⁡ ( T X ¯ ) = ∏ i n x i 1 − e − x i ∏ j n − x j 1 − e x j ( T X ) {\displaystyle {\begin{aligned}\operatorname {ch} \left(\sum _{j}^{n}(-1)^{j}V\otimes \Lambda ^{j}{\overline {T^{*}X}}\right)&=\operatorname {ch} (V)\prod _{j}^{n}\left(1-e^{x_{j}}\right)(TX)\\\operatorname {Td} (TX\otimes \mathbb {C} )=\operatorname {Td} (TX)\operatorname {Td} \left({\overline {TX}}\right)&=\prod _{i}^{n}{\frac {x_{i}}{1-e^{-x_{i}}}}\prod _{j}^{n}{\frac {-x_{j}}{1-e^{x_{j}}}}(TX)\end{aligned}}} Applying the index theorem, we obtain the Hirzebruch-Riemann-Roch theorem: χ ( X , V ) = ∫ X ch ⁡ ( V ) Td ⁡ ( T X ) {\displaystyle \chi (X,V)=\int _{X}\operatorname {ch} (V)\operatorname {Td} (TX)} In fact we get a generalization of it to all complex manifolds: Hirzebruch's proof only worked for projective complex manifolds X. === Hirzebruch signature theorem === The Hirzebruch signature theorem states that the signature of a compact oriented manifold X of dimension 4k is given by the L genus of the manifold. This follows from the Atiyah–Singer index theorem applied to the following signature operator. The bundles E and F are given by the +1 and −1 eigenspaces of the operator on the bundle of differential forms of X, that acts on k-forms as i k ( k − 1 ) {\displaystyle i^{k(k-1)}} times the Hodge star operator. The operator D is the Hodge Laplacian D ≡ Δ := ( d + d ∗ ) 2 {\displaystyle D\equiv \Delta \mathrel {:=} \left(\mathbf {d} +\mathbf {d^{*}} \right)^{2}} restricted to E, where d is the Cartan exterior derivative and d* is its adjoint. The analytic index of D is the signature of the manifold X, and its topological index is the L genus of X, so these are equal. ===  genus and Rochlin's theorem === The  genus is a rational number defined for any manifold, but is in general not an integer. Borel and Hirzebruch showed that it is integral for spin manifolds, and an even integer if in addition the dimension is 4 mod 8. This can be deduced from the index theorem, which implies that the  genus for spin manifolds is the index of a Dirac operator. The extra factor of 2 in dimensions 4 mod 8 comes from the fact that in this case the kernel and cokernel of the Dirac operator have a quaternionic structure, so as complex vector spaces they have even dimensions, so the index is even. In dimension 4 this result implies Rochlin's theorem that the signature of a 4-dimensional spin manifold is divisible by 16: this follows because in dimension 4 the  genus is minus one eighth of the signature. == Proof techniques == === Pseudodifferential operators === Pseudodifferential operators can be explained easily in the case of constant coefficient operators on Euclidean space. In this case, constant coefficient differential operators are just the Fourier transforms of multiplication by polynomials, and constant coefficient pseudodifferential operators are just the Fourier transforms of multiplication by more general functions. Many proofs of the index theorem use pseudodifferential operators rather than differential operators. The reason for this is that for many purposes there are not enough differential operators. 
For example, a pseudoinverse of an elliptic differential operator of positive order is not a differential operator, but is a pseudodifferential operator. Also, there is a direct correspondence between data representing elements of K(B(X), S(X)) (clutching functions) and symbols of elliptic pseudodifferential operators. Pseudodifferential operators have an order, which can be any real number or even −∞, and have symbols (which are no longer polynomials on the cotangent space), and elliptic differential operators are those whose symbols are invertible for sufficiently large cotangent vectors. Most versions of the index theorem can be extended from elliptic differential operators to elliptic pseudodifferential operators. === Cobordism === The initial proof was based on that of the Hirzebruch–Riemann–Roch theorem (1954), and involved cobordism theory and pseudodifferential operators. The idea of this first proof is roughly as follows. Consider the ring generated by pairs (X, V) where V is a smooth vector bundle on the compact smooth oriented manifold X, with relations that the sum and product of the ring on these generators are given by disjoint union and product of manifolds (with the obvious operations on the vector bundles), and any boundary of a manifold with vector bundle is 0. This is similar to the cobordism ring of oriented manifolds, except that the manifolds also have a vector bundle. The topological and analytical indices are both reinterpreted as functions from this ring to the integers. Then one checks that these two functions are in fact both ring homomorphisms. In order to prove they are the same, it is then only necessary to check they are the same on a set of generators of this ring. Thom's cobordism theory gives a set of generators; for example, complex vector spaces with the trivial bundle together with certain bundles over even dimensional spheres. So the index theorem can be proved by checking it on these particularly simple cases. === K-theory === Atiyah and Singer's first published proof used K-theory rather than cobordism. If i is any inclusion of compact manifolds from X to Y, they defined a 'pushforward' operation i! on elliptic operators of X to elliptic operators of Y that preserves the index. By taking Y to be some sphere that X embeds in, this reduces the index theorem to the case of spheres. If Y is a sphere and X is some point embedded in Y, then any elliptic operator on Y is the image under i! of some elliptic operator on the point. This reduces the index theorem to the case of a point, where it is trivial. === Heat equation === Atiyah, Bott, and Patodi (1973) gave a new proof of the index theorem using the heat equation, see e.g. Berline, Getzler & Vergne (1992). The proof is also published in (Melrose 1993) and (Gilkey 1994). If D is a differential operator with adjoint D*, then D*D and DD* are self adjoint operators whose non-zero eigenvalues have the same multiplicities. However their zero eigenspaces may have different multiplicities, as these multiplicities are the dimensions of the kernels of D and D*. 
Therefore, the index of D is given by index ⁡ ( D ) = dim ⁡ Ker ⁡ ( D ) − dim ⁡ Ker ⁡ ( D ∗ ) = dim ⁡ Ker ⁡ ( D ∗ D ) − dim ⁡ Ker ⁡ ( D D ∗ ) = Tr ⁡ ( e − t D ∗ D ) − Tr ⁡ ( e − t D D ∗ ) {\displaystyle \operatorname {index} (D)=\dim \operatorname {Ker} (D)-\dim \operatorname {Ker} (D^{*})=\dim \operatorname {Ker} (D^{*}D)-\dim \operatorname {Ker} (DD^{*})=\operatorname {Tr} \left(e^{-tD^{*}D}\right)-\operatorname {Tr} \left(e^{-tDD^{*}}\right)} for any positive t. Since the non-zero eigenvalues of D*D and DD* occur with the same multiplicities, their contributions to the two traces cancel, so the difference of traces is independent of t and equals the index. The right-hand side is given by the trace of the difference of the kernels of two heat operators. These have an asymptotic expansion for small positive t, which can be used to evaluate the limit as t tends to 0, giving a proof of the Atiyah–Singer index theorem. The asymptotic expansions for small t appear very complicated, but invariant theory shows that there are huge cancellations between the terms, which makes it possible to find the leading terms explicitly. These cancellations were later explained using supersymmetry. == See also == (-1)F – Term in quantum field theory Witten index – Modified partition function == Citations == == References == The papers by Atiyah are reprinted in volumes 3 and 4 of his collected works, (Atiyah 1988a, 1988b) == External links == === Links on the theory === Mazzeo, Rafe. "The Atiyah–Singer Index Theorem: What it is and why you should care" (PDF). Archived from the original on June 24, 2006. Retrieved January 3, 2006. PDF presentation. Voitsekhovskii, M.I.; Shubin, M.A. (2001) [1994], "Index formulas", Encyclopedia of Mathematics, EMS Press Wassermann, Antony. "Lecture notes on the Atiyah–Singer Index Theorem". Archived from the original on March 29, 2017. === Links of interviews === Raussen, Martin; Skau, Christian (2005), "Interview with Michael Atiyah and Isadore Singer" (PDF), Notices of AMS, pp. 223–231 R. R. Seeley and others (1999) Recollections from the early days of index theory and pseudo-differential operators – A partial transcript of informal post-dinner conversation during a symposium held in Roskilde, Denmark, in September 1998.
Wikipedia:Atkinson–Mingarelli theorem#0
In applied mathematics, the Atkinson–Mingarelli theorem, named after Frederick Valentine Atkinson and A. B. Mingarelli, concerns eigenvalues of certain Sturm–Liouville differential operators. In the simplest of formulations let p, q, w be real-valued piecewise continuous functions defined on a closed bounded real interval, I = [a, b]. The function w(x), which is sometimes denoted by r(x), is called the "weight" or "density" function. Consider the Sturm–Liouville differential equation where y is a function of the independent variable x. In this case, y is called a solution if it is continuously differentiable on (a,b) and (p y′)(x) is piecewise continuously differentiable and y satisfies the equation (1) at all except a finite number of points in (a,b). The unknown function y is typically required to satisfy some boundary conditions at a and b. The boundary conditions under consideration here are usually called separated boundary conditions and they are of the form: where the { α i , β i } {\displaystyle \{\alpha _{i},\beta _{i}\}} , i = 1, 2 are real numbers. We define == The theorem == Assume that p(x) has a finite number of sign changes and that the positive (resp. negative) part of the function p(x)/w(x) defined by ( w / p ) + ( x ) = max { w ( x ) / p ( x ) , 0 } {\displaystyle (w/p)_{+}(x)=\max\{w(x)/p(x),0\}} , (resp. ( w / p ) − ( x ) = max { − w ( x ) / p ( x ) , 0 } ) {\displaystyle (w/p)_{-}(x)=\max\{-w(x)/p(x),0\})} are not identically zero functions over I. Then the eigenvalue problem (1), (2)–(3) has an infinite number of real positive eigenvalues λ i + {\displaystyle {\lambda _{i}}^{+}} , 0 < λ 1 + < λ 2 + < λ 3 + < ⋯ < λ n + < ⋯ → ∞ ; {\displaystyle 0<{\lambda _{1}}^{+}<{\lambda _{2}}^{+}<{\lambda _{3}}^{+}<\cdots <{\lambda _{n}}^{+}<\cdots \to \infty ;} and an infinite number of negative eigenvalues λ i − {\displaystyle {\lambda _{i}}^{-}} , 0 > λ 1 − > λ 2 − > λ 3 − > ⋯ > λ n − > ⋯ → − ∞ ; {\displaystyle 0>{\lambda _{1}}^{-}>{\lambda _{2}}^{-}>{\lambda _{3}}^{-}>\cdots >{\lambda _{n}}^{-}>\cdots \to -\infty ;} whose spectral asymptotics are given by their solution [2] of Jörgens' Conjecture [3]: λ n + ∼ n 2 π 2 ( ∫ a b ( w / p ) + ( x ) d x ) 2 , n → ∞ , {\displaystyle {\lambda _{n}}^{+}\sim {\frac {n^{2}\pi ^{2}}{\left(\int _{a}^{b}{\sqrt {(w/p)_{+}(x)}}\,dx\right)^{2}}},\quad n\to \infty ,} and λ n − ∼ − n 2 π 2 ( ∫ a b ( w / p ) − ( x ) d x ) 2 , n → ∞ . {\displaystyle {\lambda _{n}}^{-}\sim {\frac {-n^{2}\pi ^{2}}{\left(\int _{a}^{b}{\sqrt {(w/p)_{-}(x)}}\,dx\right)^{2}}},\quad n\to \infty .} For more information on the general theory behind (1) see the article on Sturm–Liouville theory. The stated theorem is actually valid more generally for coefficient functions 1 / p , q , w {\displaystyle 1/p,\,q,\,w} that are Lebesgue integrable over I. == References == F. V. Atkinson, A. B. Mingarelli, Multiparameter Eigenvalue Problems – Sturm–Liouville Theory, CRC Press, Taylor and Francis, 2010. ISBN 978-1-4398-1622-6 F. V. Atkinson, A. B. Mingarelli, Asymptotics of the number of zeros and of the eigenvalues of general weighted Sturm–Liouville problems, J. für die Reine und Ang. Math. (Crelle), 375/376 (1987), 380–393. See also free download of the original paper. K. Jörgens, Spectral theory of second-order ordinary differential operators, Lectures delivered at Aarhus Universitet, 1962/63.
Wikipedia:Atle Selberg#0
Atle Selberg (14 June 1917 – 6 August 2007) was a Norwegian mathematician known for his work in analytic number theory and the theory of automorphic forms, and in particular for bringing them into relation with spectral theory. He was awarded the Fields Medal in 1950 and an honorary Abel Prize in 2002. == Early years == Selberg was born in Langesund, Norway, the son of teacher Anna Kristina Selberg and mathematician Ole Michael Ludvigsen Selberg. Two of his three brothers, Sigmund and Henrik, were also mathematicians. His other brother, Arne, was a professor of engineering. While he was still at school he was influenced by the work of Srinivasa Ramanujan and he found an exact analytical formula for the partition function as suggested by the works of Ramanujan; however, this result was first published by Hans Rademacher. He studied at the University of Oslo and completed his doctorate in 1943. == World War II == During World War II, Selberg worked in isolation due to the German occupation of Norway. After the war, his accomplishments became known, including a proof that a positive proportion of the zeros of the Riemann zeta function lie on the line ℜ ( s ) = 1 2 {\displaystyle \Re (s)={\tfrac {1}{2}}} . During the war, he fought against the German invasion of Norway, and was imprisoned several times. == Post-war in Norway == After the war, he turned to sieve theory, a previously neglected topic which Selberg's work brought into prominence. In a 1947 paper he introduced the Selberg sieve, a method well adapted in particular to providing auxiliary upper bounds, and which contributed to Chen's theorem, among other important results. In 1948 Selberg submitted two papers in Annals of Mathematics in which he proved by elementary means the theorems for primes in arithmetic progression and the density of primes. This challenged the widely held view of his time that certain theorems are only obtainable with the advanced methods of complex analysis. Both results were based on his work on the asymptotic formula ϑ ( x ) log ⁡ ( x ) + ∑ p ≤ x log ⁡ ( p ) ϑ ( x p ) = 2 x log ⁡ ( x ) + O ( x ) {\displaystyle \vartheta \left(x\right)\log \left(x\right)+\sum \limits _{p\leq x}{\log \left(p\right)}\vartheta \left({\frac {x}{p}}\right)=2x\log \left(x\right)+O\left(x\right)} where ϑ ( x ) = ∑ p ≤ x log ⁡ ( p ) {\displaystyle \vartheta \left(x\right)=\sum \limits _{p\leq x}{\log \left(p\right)}} for primes p {\displaystyle p} . He established this result by elementary means in March 1948, and by July of that year, Selberg and Paul Erdős each obtained elementary proofs of the prime number theorem, both using the asymptotic formula above as a starting point. Circumstances leading up to the proofs, as well as publication disagreements, led to a bitter dispute between the two mathematicians. For his fundamental accomplishments during the 1940s, Selberg received the 1950 Fields Medal. == Institute for Advanced Study == Selberg moved to the United States and worked as an associate professor at Syracuse University and later settled at the Institute for Advanced Study in Princeton, New Jersey in the 1950s, where he remained until his death. During the 1950s he worked on introducing spectral theory into number theory, culminating in his development of the Selberg trace formula, the most famous and influential of his results. 
In its simplest form, this establishes a duality between the lengths of closed geodesics on a compact Riemann surface and the eigenvalues of the Laplacian, which is analogous to the duality between the prime numbers and the zeros of the zeta function. He generally worked alone. His only coauthor was Sarvadaman Chowla. Selberg was awarded the 1986 Wolf Prize in Mathematics. He was also awarded an honorary Abel Prize in 2002, its founding year, before the awarding of the regular prizes began. Selberg received many distinctions for his work, in addition to the Fields Medal, the Wolf Prize and the Gunnerus Medal. He was elected to the Norwegian Academy of Science and Letters, the Royal Danish Academy of Sciences and Letters and the American Academy of Arts and Sciences. In 1972, he was awarded an honorary degree, doctor philos. honoris causa, at the Norwegian Institute of Technology, later part of Norwegian University of Science and Technology. His first wife, Hedvig, died in 1995. With her, Selberg had two children: Ingrid Selberg (married to playwright Mustapha Matura) and Lars Selberg. In 2003 Atle Selberg married Betty Frances ("Mickey") Compton (born in 1929). He died at home in Princeton, New Jersey on 6 August 2007 of heart failure. Upon his death he was survived by his widow, daughter, son, and four grandchildren. == Selected publications == Selberg's collected works were published in two volumes. The first volume contains 41 articles, and the second volume contains three additional articles, in addition to Selberg's lectures on sieves. Selberg, Atle (1989). Collected Papers. Volume I. Berlin, Heidelberg: Springer-Verlag. ISBN 3-540-18389-2. MR 1117906. Zbl 0675.10001. Selberg, Atle (28 July 2014). 2014 pbk edition. Springer. ISBN 9783642410215. Description at M.I.T. Press Bookstore Selberg, Atle (1991). Collected Papers. Volume II. Berlin, Heidelberg: Springer-Verlag. ISBN 3-540-50626-8. MR 1295844. Zbl 0729.11001. Description at M.I.T. Press Bookstore == References == == Further reading == Albers, Donald J. and Alexanderson, Gerald L. (2011), Fascinating Mathematical People: interviews and memoirs, "Atle Selberg", pp 254–73, Princeton University Press, ISBN 978-0-691-14829-8. Baas, Nils A.; Skau, Christian F. (2008). "The lord of the numbers, Atle Selberg. On his life and mathematics". Bull. Amer. Math. Soc. 45 (4): 617–649. doi:10.1090/S0273-0979-08-01223-8. Interview with Selberg Hejhal, Dennis (June–July 2009). "Remembering Atle Selberg, 1917–2007" (PDF). Notices of the American Mathematical Society. 56 (6): 692–710. Selberg (1996). "Reflections Around the Ramanujan Centenary" (PDF). Resonance. 1 (12): 81–91. doi:10.1007/BF02838915. S2CID 120285506. Archived (PDF) from the original on 9 October 2022. == External links == Atle Selberg at the Mathematics Genealogy Project O'Connor, John J.; Robertson, Edmund F., "Atle Selberg", MacTutor History of Mathematics Archive, University of St Andrews Atle Selberg archive webpage Obituary at Institute for Advanced Study Obituary in The Times Atle Selbergs private archive exists at NTNU University Library
Wikipedia:Atsuko Miyaji#0
Atsuko Miyaji (Japanese: 宮地充子, born 1965) is a Japanese cryptographer and number theorist known for her research on elliptic-curve cryptography and software obfuscation. She is a professor in the Division of Electrical, Electronic and Information Engineering, at Osaka University. == Education and career == Miyaji was born in Osaka Prefecture and became interested in mathematics as an elementary school student after learning of the Epimenides paradox. She studied mathematics as an undergraduate at Osaka University, and chose to go into industry instead of continuing as a graduate student, working from 1990 to 1998 for Matsushita Electric Industrial. During this time she returned to graduate school, and earned a doctorate from Osaka University in 1997. She became an associate professor at the Japan Advanced Institute of Science and Technology in 1998, and returned to Osaka University as a professor in 2015. She has also held short-term teaching or visiting positions at Osaka Prefecture University, the University of Tsukuba, the University of California, Davis, and Kyoto University. == Book == Miyaji is the author of a 2012 Japanese language book on cryptography, "代数学から学ぶ暗号理論:整数論の基礎から楕円曲線暗号の実装まで". == References == == External links == Atsuko Miyaji publications indexed by Google Scholar ResearchMap profile
Wikipedia:Attic numerals#0
The Attic numerals are a symbolic number notation used by the ancient Greeks. They were also known as Herodianic numerals because they were first described in a 2nd-century manuscript by Herodian; or as acrophonic numerals (from acrophony) because the basic symbols derive from the first letters of the (ancient) Greek words that the symbols represented. The Attic numerals were a decimal (base 10) system, like the older Egyptian and the later Etruscan, Roman, and Hindu-Arabic systems. Namely, the number to be represented was broken down into simple multiples (1 to 9) of powers of ten — units, tens, hundred, thousands, etc.. Then these parts were written down in sequence, in order of decreasing value. As in the basic Roman system, each part was written down using a combination of two symbols, representing one and five times that power of ten. Attic numerals were adopted possibly starting in the 7th century BCE and although presently called Attic, they or variations thereof were universally used by the Greeks. No other numeral system is known to have been used on Attic inscriptions before the Common Era. Their replacement by the classic Greek numerals started in other parts of the Greek World around the 3rd century BCE. They are believed to have served as model for the Etruscan number system, although the two were nearly contemporary and the symbols are not obviously related. == The system == === Symbols === The Attic numerals used the following main symbols, with the given values: The symbols representing 50, 500, 5000, and 50000 were composites of an old form of the capital letter pi (with a short right leg) and a tiny version of the applicable power of ten. For example, 𐅆 was five times one thousand. ==== Special symbols ==== The fractions "one half" and "one quarter" were written "𐅁" and "𐅀", respectively. The symbols were slightly modified when used to encode amounts in talents (with a small capital tau, "Τ") or in staters (with a small capital sigma, "Σ"). Specific numeral symbols were used to represent one drachma ("𐅂") and ten minas "𐅗". ==== The symbol for 100 ==== The use of "Η" (capital eta) for 100 reflects the early date of this numbering system. In the Greek language of the time, the word for a hundred would be pronounced [hɛkaton] (with a "rough aspirated" sound /h/) and written "ΗΕΚΑΤΟΝ", because "Η" represented the sound /h/ in the Attic alphabet. In later, "classical" Greek, with the adoption of the Ionic alphabet throughout the majority of Greece, the letter eta had come to represent the long e sound while the rough aspiration was no longer marked. It was not until Aristophanes of Byzantium introduced the various accent markings during the Hellenistic period that the spiritus asper began to represent /h/, resulting in the spelling ἑκατόν. === Simple multiples of powers of ten === Multiples 1 to 9 of each power of ten were written by combining the two corresponding "1" and "5" digits, namely: Unlike the more familiar Roman numeral system, the Attic system used only the so-called "additive" notation. Thus, the numbers 4 and 9 were written ΙΙΙΙ and ΠΙΙΙΙ, not ΙΠ and ΙΔ. === General numbers === In general, the number to be represented was broken down into simple multiples (1 to 9) of powers of ten — units, tens, hundred, thousands, etc.. Then these parts would be written down in sequence, from largest to smallest value. 
For example: 49 = 40 + 9 = ΔΔΔΔ + ΠΙΙΙΙ = ΔΔΔΔΠΙΙΙΙ 2001 = 2000 + 1 = ΧΧ + I = ΧΧΙ 1982 = 1000 + 900 + 80 + 2 = Χ + 𐅅ΗΗΗΗ + 𐅄ΔΔΔ + ΙΙ = Χ𐅅ΗΗΗΗ𐅄ΔΔΔΙΙ 62708 = 60000 + 2000 + 700 + 8 = 𐅇Μ + ΧΧ + 𐅅ΗΗ + ΠΙΙΙ = 𐅇ΜΧΧ𐅅ΗΗΠΙΙΙ. == Unicode == == See also == Etruscan numerals – Words, phrases and symbols for numbers of the Etruscan language Greek mathematics – Mathematics of Ancient Greece Greek numerals – System of writing numbers using Greek letters History of ancient numeral systems – Symbols representing numbers List of numeral system topics List of numeral systems == Notes and references ==
Wikipedia:Attila Aşkar#0
Attila Aşkar (born September 4, 1943) is a Turkish civil engineer, scientist and former president of the Koç University in Rumelifeneri, Istanbul, Turkey during 2001 and 2009. == Life == Attila Aşkar was born on September 4, 1943 in Afyonkarahisar, Turkey. He is the son of Kemal and Nüzhet Aşkar, and was married to Elsie Vance, the daughter of former Secretary of State Cyrus R. Vance on August 30, 1998. === Education === Aşkar graduated from St. Joseph High School in Istanbul, Turkey in 1961. He received his B.Sc. in Civil Engineering from the Technical University of Istanbul in 1966, and his Ph.D. in applied and computational mathematics founded by A. Cemal Eringen under the supervision of Ahmet Çakmak at Princeton University in the United States in 1969. === Academic life === He was the head in the department of Mathematics at Boğaziçi University, Istanbul, Turkey. After losing the Boğaziçi University rectorship elections in Boğaziçi University, he moved to Koç University's İstinye Campus in Istanbul as a professor of Mathematics and the Dean of the College of Arts and Sciences. Professor Aşkar then was appointed as the president and rector of the Koç University. ==== Visiting positions ==== He held many visiting research scientist and professor positions at prestigious universities like Brown University, Princeton University, Paris University VI, the Max-Planck Institute in Göttingen, Germany and the Royal Institute of Technology in Stockholm, Sweden. ==== Research areas ==== Aşkar's recent research interests included scattering of classical and quantum waves, wavelet analysis and molecular dynamics. He is the author of over eighty research journal articles and two books. ==== Writings ==== Lattice Dynamical Foundations of Continuum Theories: Elasticity, Piezoelectricity... (Series in Theoretical and Applied Mechanics, Vol 2) ==== Memberships ==== Dr. Aşkar is also on the board of directors at the Center for Excellence in Education, a non-profit organization located in McLean, Virginia. ==== Awards ==== He received recognitions, which include the Junior Scientist and Science awards of the National Research Council (TÜBİTAK), the Information Age Award of the Ministry of Culture, and entry to the Turkish Academy of Sciences. == Representative scientific journal publications == A. Aşkar, A. Çakmak, and H. Rabitz, Nodal structure and global behavior of scattering wave functions, J. Chem. Phys., 72, 5287, DOI: 10.1063/1.439739, (1980) M. Duff, H. Rabitz, A. Aşkar, A. Çakmak, and M. Ablowitz, A Comparison Between Finite Element Methods and Spectral Methods as Applied to Bound State Problems, J. Chem. Phys., 72, 1543, DOI: 10.1063/1.439381, (1980) A. Aşkar, A. S. Çakmak, and H. Rabitz, Finite Element Methods for Reactive Scattering, Chem. Phys., 33, 367, DOI: 10.1016/0301-0104(78)87134-1, (1978) H. Rabitz, A. Aşkar, and A. S. Çakmak, The Use of Global Wavefunctions in Scattering Theory, Chem. Phys., 29, 61, DOI: 10.1016/0301-0104(78)85061-7, (1978) == References == == External links == Home page
Wikipedia:Aubrey E. Landry#0
Aubrey Edward Landry (1880–1972) was a Canadian-American mathematician. He was the dissertation director of many of the earliest women to earn doctorates in mathematics in the United States, including the first African American woman to do so, Euphemia Haynes. == Early life and education == He was born in Westmorland, New Brunswick, to Elizabeth R. "Eliza" McSweeney Landry and Tilman T. Landry, and was the oldest of nine children. He received an AB degree (bachelor's) from Harvard University in 1900 and a PhD from Johns Hopkins University in 1907 with the dissertation: "A Geometrical Application of Binary Syzygies" under Frank Morley. == Career and mentorship of women == Landry's dissertation director was Frank Morley, himself also a frequent advisor to women doctoral candidates (see inset quote below). Landry spent his career at Catholic University of America, where he began as a teaching fellow following his graduation from Harvard. He joined the permanent faculty in 1902 after receiving his doctorate at Johns Hopkins. He served as mathematics department chairman for 45 years and directed 28 dissertations until his retirement in 1952, out of which 18 went to women. Lenore Blum wrote, Of 229 pre-1940 [women] Ph.D.s in mathematics, more than a third were advised by eight mathematicians: Charlotte Angas Scott and Anna Pell Wheeler (at Bryn Mawr), and six men—Frank Morley (at Johns Hopkins) and A. B. Coble (at Johns Hopkins and Illinois), Aubrey Landry (at Catholic University), Virgil Snyder (at Cornell), and Gilbert Ames Bliss and L. E. Dickson (both at Chicago, where together they advised 30 women Ph.D.s). It is not hard to surmise that each of these men felt secure in his position in mathematics... all but one were at one time president of the American Mathematical Society! All but two of these women were Roman Catholic sisters, a historical phenomenon nationwide because Catholic men's universities were sometimes open by special arrangement to nuns. == Notable women mentored == This list is incomplete, as Landry directed the dissertations of at least 18 women. Some of these come from the Mathematics Genealogy Project, and others from Pioneering Women in American Mathematics. Mary Nicholas Arnoldy, Ph.D. 1937, Dissertation: "The Reality of the Double Tangents of the Rational Symmetric Quartic Curve." Leonarda Burke, Ph.D. 1931, Dissertation: "On a case of the triangles in-and-circumscribed to a rational quartic curve with a line of symmetry." Mary Charlotte Fowler, Ph.D. 1937, Dissertation: "The discriminant of the sextic of double point parameters of the plane rational quartic curve." Catherine Francis Galvin, Ph.D. 1938, Dissertation: "Two Geometrical Representations of the Symmetric Correspondence C(N,N) with Their Interrelations." Mary de Lellis Gough, Ph.D. 1931, first known Irish woman to earn a doctorate in mathematics. Dissertation: "On the Condition for the Existence of Triangles In-and-Circumscribed to Certain Types of Rational Quartic Curve and Having a Common Side." Euphemia Haynes, Ph.D. 1943, Dissertation: "Determination of Sets of Independent Conditions Characterizing Certain Special Cases of Symmetric Correspondences." Mary Laetitia Hill, Ph.D. 1935, Dissertation: "The Number and Reality of Quadrilaterals In-and-Circumscribed to a Rational Unicuspidal Quartic with Real Tangents from the Cusp." Mary Gervase Kelley, Ph.D. 1917, Dissertation: "On the Cardioids Fulfilling Certain Assigned Conditions." Marie Cecilia Mangold, Ph.D. 
1929, Dissertation: "The Loci Described by the Vertices of Singly Infinite Systems of Triangles Circumscribed about a Fixed Conic." Charles Mary Morrison, Ph.D. 1931, Dissertation: "The Triangles In-and-Circumscribed to the Biflecnodal Rational Quartic." M. Henrietta Reilly, Ph.D. 1936, Dissertation: "Self-Symmetric Quadrilaterals In-and-Circumscribed to the Plane Rational Quartic Curve with a Line of Symmetry." M. Helen Sullivan, Ph.D. 1934, Dissertation: "The Number and Reality of the Non-Self-Symmetric Quadrilaterals In-and-Circumscribed to the Rational Unicuspidal Quartic with a Line of Symmetry." Mary Domitilla Thuener, Ph.D. 1932, Dissertation: "On the Number and Reality of the Self-Symmetric Quadrilaterals In-and-Circumscribed to the Triangular-Symmetric Rational Quartic." Mary Felice Vaudreuil, Ph.D. 1931, Dissertation: "Two Correspondences Determined by the Tangents to a Rational Cuspidal Quartic with a Line of Symmetry." == Selected publications == A geometrical application of binary syzygies by AE Landry, American Mathematical Society, Jan 10, 1909. == References == == External links == Aubrey E. Landry at the Mathematics Genealogy Project
Wikipedia:Augmentation (algebra)#0
In algebra, an augmentation of an associative algebra A over a commutative ring k is a k-algebra homomorphism A → k {\displaystyle A\to k} , typically denoted by ε. An algebra together with an augmentation is called an augmented algebra. The kernel of the augmentation is a two-sided ideal called the augmentation ideal of A. For example, if A = k [ G ] {\displaystyle A=k[G]} is the group algebra of a finite group G, then A → k , ∑ a i x i ↦ ∑ a i {\displaystyle A\to k,\,\sum a_{i}x_{i}\mapsto \sum a_{i}} is an augmentation. If A is a graded algebra which is connected, i.e. A 0 = k {\displaystyle A_{0}=k} , then the homomorphism A → k {\displaystyle A\to k} which maps an element to its homogeneous component of degree 0 is an augmentation. For example, k [ x ] → k , ∑ a i x i ↦ a 0 {\displaystyle k[x]\to k,\sum a_{i}x^{i}\mapsto a_{0}} is an augmentation on the polynomial ring k [ x ] {\displaystyle k[x]} . == References == Loday, Jean-Louis; Vallette, Bruno (2012). Algebraic operads. Grundlehren der Mathematischen Wissenschaften. Vol. 346. Berlin: Springer-Verlag. p. 2. ISBN 978-3-642-30361-6. Zbl 1260.18001.
Wikipedia:August Kasvand#0
August Kasvand (December 30, 1890 – March 7, 1980) was an Estonian mathematician and educator. == Early life and education == Kasvand was born in Erastvere in the Governorate of Livonia, Russian Empire, the son of Gustav Kasvand (1865–1944) and Ann Kasvand (née Luts, 1868–1938). He attended the village school in Kärgula, Võru elementary school, and Võru city school, where he graduated in 1909. In 1910, he passed the professional exam for a primary school teacher and in 1913 the professional exam for a home school teacher in mathematics and geography. From 1919 to 1920, he took in courses for secondary school assistant teachers at the University of Tartu. In 1933, he graduated from the Faculty of Mathematics and Natural Sciences at the University of Tartu. == Career == Kasvand worked as a teacher in Piilsi and Nüpli (1910–1914), and at Nuustaku High School, Võru County Public Education Society High School for Boys, Tartu City Primary School No. 16, Tartu City High School for Girls, and Tartu High School No. 1. From 1944, he worked at Tartu Teacher Training College. After the reorganization of the college into the Tartu Teachers' Institute, he was appointed head of the mathematics-physics and natural science department in 1947. In 1950, Kasvand became the head of the Department of Physics and Mathematics. From 1957 to 1959, he was the teaching practice supervisor of the same school (then called Tartu Pedagogical School). He retired from the school in 1957. From 1959 to 1962, Kasvand taught elementary mathematics and mathematics teaching methodology at the University of Tartu. Kasvand was the author or coauthor of many mathematics textbooks. == Awards == 1947: Honored Teacher of the Estonian SSR == References ==
Wikipedia:Auguste Dick#0
Auguste Franziska Dick (née Kraus, 1910–1993) was an Austrian mathematician, historian of mathematics, and handwriting expert, known for her research on the history of mathematics under the Nazis, and for her biography of Emmy Noether. Dick earned a doctorate from the University of Vienna, and a teaching credential in mathematics and physics, in 1934. At Vienna, she was one of the students working with Olga Taussky-Todd in the seminar of Hans Hahn. She worked as a schoolteacher, and began producing scholarly publications after her retirement. Her book on Noether, Emmy Noether, 1882–1935 (Birkhäuser 1970) has been translated into both Japanese and English (Heidi I. Blocher, trans., Birkhäuser, 1981). She also assisted in editing the works of Erwin Schrödinger. == References ==
Wikipedia:Augustin Sesmat#0
Augustin Sesmat (April 7, 1885, Dieulouard – December 12, 1957) was a French mathematician and logician. He was professor of history and criticism of science at the Institut Catholique de Paris in the 1930s. He was probably the first person to discover the logical hexagon, thus solving a problem posed by Aristotle. == Works == Le système absolu classique et les mouvements réels, 1936. Logique. I. Les définitions, les jugements. Published with the support of the CNRS, Paris, 1950, 359 pp. Logique. II. Les raisonnements, la logistique. Hermann & Cie, Paris, 1951, pp. 361–776. Dialectique, Hamelin et la philosophie chrétienne, Bloud & Gay, Paris, 1955, 38 pp. == References ==
Wikipedia:Automatic differentiation#0
In mathematics and computer algebra, automatic differentiation (auto-differentiation, autodiff, or AD), also called algorithmic differentiation, computational differentiation, and differentiation arithmetic, is a set of techniques to evaluate the partial derivative of a function specified by a computer program. Automatic differentiation automates the simultaneous computation of the numerical values of arbitrarily complex functions and their derivatives, with no need for a symbolic representation of the derivative; only the function rule, or an algorithm implementing it, is required. Auto-differentiation is thus neither numeric nor symbolic, nor is it a combination of both. It is also preferable to ordinary numerical methods: in contrast to the more traditional numerical methods based on finite differences, auto-differentiation is exact in theory, and in comparison to symbolic algorithms it is computationally inexpensive. Automatic differentiation exploits the fact that every computer calculation, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, partial derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program. == Difference from other differentiation methods == Automatic differentiation is distinct from symbolic differentiation and numerical differentiation. Symbolic differentiation faces the difficulty of converting a computer program into a single mathematical expression and can lead to inefficient code. Numerical differentiation (the method of finite differences) can introduce round-off errors in the discretization process and cancellation. Both of these classical methods have problems with calculating higher derivatives, where complexity and errors increase. Finally, both of these classical methods are slow at computing partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems. == Applications == Because of its efficiency and accuracy in computing first and higher order derivatives, auto-differentiation has diverse applications in scientific computing and mathematics, and numerous computational implementations exist, among them INTLAB, Sollya, and InCLosure. In practice, there are two types (modes) of algorithmic differentiation: a forward type and a reverse type. The two types are complementary, and both have a wide variety of applications in, e.g., non-linear optimization, sensitivity analysis, robotics, machine learning, computer graphics, and computer vision. Automatic differentiation is particularly important in the field of machine learning. For example, it allows one to implement backpropagation in a neural network without manually computing the derivatives. == Forward and reverse accumulation == === Chain rule of partial derivatives of composite functions === Fundamental to automatic differentiation is the decomposition of differentials provided by the chain rule of partial derivatives of composite functions.
For the simple composition y = f ( g ( h ( x ) ) ) = f ( g ( h ( w 0 ) ) ) = f ( g ( w 1 ) ) = f ( w 2 ) = w 3 w 0 = x w 1 = h ( w 0 ) w 2 = g ( w 1 ) w 3 = f ( w 2 ) = y {\displaystyle {\begin{aligned}y&=f(g(h(x)))=f(g(h(w_{0})))=f(g(w_{1}))=f(w_{2})=w_{3}\\w_{0}&=x\\w_{1}&=h(w_{0})\\w_{2}&=g(w_{1})\\w_{3}&=f(w_{2})=y\end{aligned}}} the chain rule gives ∂ y ∂ x = ∂ y ∂ w 2 ∂ w 2 ∂ w 1 ∂ w 1 ∂ x = ∂ f ( w 2 ) ∂ w 2 ∂ g ( w 1 ) ∂ w 1 ∂ h ( w 0 ) ∂ x {\displaystyle {\frac {\partial y}{\partial x}}={\frac {\partial y}{\partial w_{2}}}{\frac {\partial w_{2}}{\partial w_{1}}}{\frac {\partial w_{1}}{\partial x}}={\frac {\partial f(w_{2})}{\partial w_{2}}}{\frac {\partial g(w_{1})}{\partial w_{1}}}{\frac {\partial h(w_{0})}{\partial x}}} === Two types of automatic differentiation === Usually, two distinct modes of automatic differentiation are presented. forward accumulation (also called bottom-up, forward mode, or tangent mode) reverse accumulation (also called top-down, reverse mode, or adjoint mode) Forward accumulation specifies that one traverses the chain rule from inside to outside (that is, first compute ∂ w 1 / ∂ x {\displaystyle \partial w_{1}/\partial x} and then ∂ w 2 / ∂ w 1 {\displaystyle \partial w_{2}/\partial w_{1}} and lastly ∂ y / ∂ w 2 {\displaystyle \partial y/\partial w_{2}} ), while reverse accumulation traverses from outside to inside (first compute ∂ y / ∂ w 2 {\displaystyle \partial y/\partial w_{2}} and then ∂ w 2 / ∂ w 1 {\displaystyle \partial w_{2}/\partial w_{1}} and lastly ∂ w 1 / ∂ x {\displaystyle \partial w_{1}/\partial x} ). More succinctly, Forward accumulation computes the recursive relation: ∂ w i ∂ x = ∂ w i ∂ w i − 1 ∂ w i − 1 ∂ x {\displaystyle {\frac {\partial w_{i}}{\partial x}}={\frac {\partial w_{i}}{\partial w_{i-1}}}{\frac {\partial w_{i-1}}{\partial x}}} with w 3 = y {\displaystyle w_{3}=y} , and, Reverse accumulation computes the recursive relation: ∂ y ∂ w i = ∂ y ∂ w i + 1 ∂ w i + 1 ∂ w i {\displaystyle {\frac {\partial y}{\partial w_{i}}}={\frac {\partial y}{\partial w_{i+1}}}{\frac {\partial w_{i+1}}{\partial w_{i}}}} with w 0 = x {\displaystyle w_{0}=x} . The value of the partial derivative, called the seed, is propagated forward or backward and is initially ∂ x ∂ x = 1 {\displaystyle {\frac {\partial x}{\partial x}}=1} or ∂ y ∂ y = 1 {\displaystyle {\frac {\partial y}{\partial y}}=1} . Forward accumulation evaluates the function and calculates the derivative with respect to one independent variable in one pass. For each independent variable x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} a separate pass is therefore necessary in which the derivative with respect to that independent variable is set to one ( ∂ x 1 ∂ x 1 = 1 {\displaystyle {\frac {\partial x_{1}}{\partial x_{1}}}=1} ) and of all others to zero ( ∂ x 2 ∂ x 1 = ⋯ = ∂ x n ∂ x 1 = 0 {\displaystyle {\frac {\partial x_{2}}{\partial x_{1}}}=\dots ={\frac {\partial x_{n}}{\partial x_{1}}}=0} ). In contrast, reverse accumulation requires the evaluated partial functions for the partial derivatives. Reverse accumulation therefore evaluates the function first and calculates the derivatives with respect to all independent variables in an additional pass. Which of these two types should be used depends on the sweep count. The computational complexity of one sweep is proportional to the complexity of the original code. 
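As a concrete illustration of the two traversal orders, take h(x) = x², g(u) = sin u and f(v) = exp(v) in the simple composition above (functions chosen purely for this example), so that w1 = x², w2 = sin(x²) and y = e^{sin(x²)}. Forward accumulation works from the input outward,

\[
\frac{\partial w_{1}}{\partial x} = 2x, \qquad
\frac{\partial w_{2}}{\partial x} = \cos(w_{1})\,\frac{\partial w_{1}}{\partial x} = 2x\cos(x^{2}), \qquad
\frac{\partial y}{\partial x} = e^{w_{2}}\,\frac{\partial w_{2}}{\partial x} = 2x\cos(x^{2})\,e^{\sin(x^{2})},
\]

while reverse accumulation works from the output inward,

\[
\frac{\partial y}{\partial w_{2}} = e^{w_{2}}, \qquad
\frac{\partial y}{\partial w_{1}} = \frac{\partial y}{\partial w_{2}}\,\cos(w_{1}), \qquad
\frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_{1}}\,2x = 2x\cos(x^{2})\,e^{\sin(x^{2})}.
\]

Both orders produce the same derivative; with several inputs or outputs they differ in how many sweeps of this kind are required.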
Forward accumulation is more efficient than reverse accumulation for functions f : Rn → Rm with n ≪ m as only n sweeps are necessary, compared to m sweeps for reverse accumulation. Reverse accumulation is more efficient than forward accumulation for functions f : Rn → Rm with n ≫ m as only m sweeps are necessary, compared to n sweeps for forward accumulation. Backpropagation of errors in multilayer perceptrons, a technique used in machine learning, is a special case of reverse accumulation. Forward accumulation was introduced by R.E. Wengert in 1964. According to Andreas Griewank, reverse accumulation has been suggested since the late 1960s, but the inventor is unknown. Seppo Linnainmaa published reverse accumulation in 1976. === Forward accumulation === In forward accumulation AD, one first fixes the independent variable with respect to which differentiation is performed and computes the derivative of each sub-expression recursively. In a pen-and-paper calculation, this involves repeatedly substituting the derivative of the inner functions in the chain rule: ∂ y ∂ x = ∂ y ∂ w n − 1 ∂ w n − 1 ∂ x = ∂ y ∂ w n − 1 ( ∂ w n − 1 ∂ w n − 2 ∂ w n − 2 ∂ x ) = ∂ y ∂ w n − 1 ( ∂ w n − 1 ∂ w n − 2 ( ∂ w n − 2 ∂ w n − 3 ∂ w n − 3 ∂ x ) ) = ⋯ {\displaystyle {\begin{aligned}{\frac {\partial y}{\partial x}}&={\frac {\partial y}{\partial w_{n-1}}}{\frac {\partial w_{n-1}}{\partial x}}\\[6pt]&={\frac {\partial y}{\partial w_{n-1}}}\left({\frac {\partial w_{n-1}}{\partial w_{n-2}}}{\frac {\partial w_{n-2}}{\partial x}}\right)\\[6pt]&={\frac {\partial y}{\partial w_{n-1}}}\left({\frac {\partial w_{n-1}}{\partial w_{n-2}}}\left({\frac {\partial w_{n-2}}{\partial w_{n-3}}}{\frac {\partial w_{n-3}}{\partial x}}\right)\right)\\[6pt]&=\cdots \end{aligned}}} This can be generalized to multiple variables as a matrix product of Jacobians. Compared to reverse accumulation, forward accumulation is natural and easy to implement as the flow of derivative information coincides with the order of evaluation. Each variable w i {\displaystyle w_{i}} is augmented with its derivative w ˙ i {\displaystyle {\dot {w}}_{i}} (stored as a numerical value, not a symbolic expression), w ˙ i = ∂ w i ∂ x {\displaystyle {\dot {w}}_{i}={\frac {\partial w_{i}}{\partial x}}} as denoted by the dot. The derivatives are then computed in sync with the evaluation steps and combined with other derivatives via the chain rule. Using the chain rule, if w i {\displaystyle w_{i}} has predecessors in the computational graph: w ˙ i = ∑ j ∈ { predecessors of i } ∂ w i ∂ w j w ˙ j {\displaystyle {\dot {w}}_{i}=\sum _{j\in \{{\text{predecessors of i}}\}}{\frac {\partial w_{i}}{\partial w_{j}}}{\dot {w}}_{j}} As an example, consider the function: y = f ( x 1 , x 2 ) = x 1 x 2 + sin ⁡ x 1 = w 1 w 2 + sin ⁡ w 1 = w 3 + w 4 = w 5 {\displaystyle {\begin{aligned}y&=f(x_{1},x_{2})\\&=x_{1}x_{2}+\sin x_{1}\\&=w_{1}w_{2}+\sin w_{1}\\&=w_{3}+w_{4}\\&=w_{5}\end{aligned}}} For clarity, the individual sub-expressions have been labeled with the variables w i {\displaystyle w_{i}} . The choice of the independent variable to which differentiation is performed affects the seed values ẇ1 and ẇ2. 
Given interest in the derivative of this function with respect to x1, the seed values should be set to: w ˙ 1 = ∂ w 1 ∂ x 1 = ∂ x 1 ∂ x 1 = 1 w ˙ 2 = ∂ w 2 ∂ x 1 = ∂ x 2 ∂ x 1 = 0 {\displaystyle {\begin{aligned}{\dot {w}}_{1}={\frac {\partial w_{1}}{\partial x_{1}}}={\frac {\partial x_{1}}{\partial x_{1}}}=1\\{\dot {w}}_{2}={\frac {\partial w_{2}}{\partial x_{1}}}={\frac {\partial x_{2}}{\partial x_{1}}}=0\end{aligned}}} With the seed values set, the values propagate using the chain rule as shown. Figure 2 shows a pictorial depiction of this process as a computational graph. To compute the gradient of this example function, which requires not only ∂ y ∂ x 1 {\displaystyle {\tfrac {\partial y}{\partial x_{1}}}} but also ∂ y ∂ x 2 {\displaystyle {\tfrac {\partial y}{\partial x_{2}}}} , an additional sweep is performed over the computational graph using the seed values w ˙ 1 = 0 ; w ˙ 2 = 1 {\displaystyle {\dot {w}}_{1}=0;{\dot {w}}_{2}=1} . ==== Implementation ==== ===== Pseudocode ===== Forward accumulation calculates the function and the derivative (but only for one independent variable each) in one pass. The associated method call expects the expression Z to be derived with regard to a variable V. The method returns a pair of the evaluated function and its derivative. The method traverses the expression tree recursively until a variable is reached. If the derivative with respect to this variable is requested, its derivative is 1, 0 otherwise. Then the partial function as well as the partial derivative are evaluated. ===== C++ ===== === Reverse accumulation === In reverse accumulation AD, the dependent variable to be differentiated is fixed and the derivative is computed with respect to each sub-expression recursively. In a pen-and-paper calculation, the derivative of the outer functions is repeatedly substituted in the chain rule: ∂ y ∂ x = ∂ y ∂ w 1 ∂ w 1 ∂ x = ( ∂ y ∂ w 2 ∂ w 2 ∂ w 1 ) ∂ w 1 ∂ x = ( ( ∂ y ∂ w 3 ∂ w 3 ∂ w 2 ) ∂ w 2 ∂ w 1 ) ∂ w 1 ∂ x = ⋯ {\displaystyle {\begin{aligned}{\frac {\partial y}{\partial x}}&={\frac {\partial y}{\partial w_{1}}}{\frac {\partial w_{1}}{\partial x}}\\&=\left({\frac {\partial y}{\partial w_{2}}}{\frac {\partial w_{2}}{\partial w_{1}}}\right){\frac {\partial w_{1}}{\partial x}}\\&=\left(\left({\frac {\partial y}{\partial w_{3}}}{\frac {\partial w_{3}}{\partial w_{2}}}\right){\frac {\partial w_{2}}{\partial w_{1}}}\right){\frac {\partial w_{1}}{\partial x}}\\&=\cdots \end{aligned}}} In reverse accumulation, the quantity of interest is the adjoint, denoted with a bar w ¯ i {\displaystyle {\bar {w}}_{i}} ; it is a derivative of a chosen dependent variable with respect to a subexpression w i {\displaystyle w_{i}} : w ¯ i = ∂ y ∂ w i {\displaystyle {\bar {w}}_{i}={\frac {\partial y}{\partial w_{i}}}} Using the chain rule, if w i {\displaystyle w_{i}} has successors in the computational graph: w ¯ i = ∑ j ∈ { successors of i } w ¯ j ∂ w j ∂ w i {\displaystyle {\bar {w}}_{i}=\sum _{j\in \{{\text{successors of i}}\}}{\bar {w}}_{j}{\frac {\partial w_{j}}{\partial w_{i}}}} Reverse accumulation traverses the chain rule from outside to inside, or in the case of the computational graph in Figure 3, from top to bottom. The example function is scalar-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed to calculate the (two-component) gradient. 
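The recursive evaluation described above for forward accumulation, in which each node of the expression tree returns the pair of its value and its derivative with respect to the chosen variable, can be sketched in C++ as follows. This is a minimal illustrative sketch rather than a canonical listing: the node types Variable, Plus, Multiply and Sin and the method name evaluateAndDerive are assumptions made for this example.

#include <cmath>
#include <iostream>
#include <utility>

// Each node returns {value, derivative with respect to the seed variable};
// exactly one independent variable is handled per pass.
struct Expression {
    virtual ~Expression() = default;
    virtual std::pair<double, double> evaluateAndDerive(const Expression* variable) const = 0;
};

struct Variable : Expression {
    double value;
    explicit Variable(double v) : value(v) {}
    std::pair<double, double> evaluateAndDerive(const Expression* variable) const override {
        // Seed: derivative 1 for the chosen variable, 0 for every other leaf.
        return {value, this == variable ? 1.0 : 0.0};
    }
};

struct Plus : Expression {
    const Expression *a, *b;
    Plus(const Expression* a, const Expression* b) : a(a), b(b) {}
    std::pair<double, double> evaluateAndDerive(const Expression* variable) const override {
        auto [va, da] = a->evaluateAndDerive(variable);
        auto [vb, db] = b->evaluateAndDerive(variable);
        return {va + vb, da + db};                 // sum rule
    }
};

struct Multiply : Expression {
    const Expression *a, *b;
    Multiply(const Expression* a, const Expression* b) : a(a), b(b) {}
    std::pair<double, double> evaluateAndDerive(const Expression* variable) const override {
        auto [va, da] = a->evaluateAndDerive(variable);
        auto [vb, db] = b->evaluateAndDerive(variable);
        return {va * vb, va * db + vb * da};       // product rule
    }
};

struct Sin : Expression {
    const Expression* a;
    explicit Sin(const Expression* a) : a(a) {}
    std::pair<double, double> evaluateAndDerive(const Expression* variable) const override {
        auto [va, da] = a->evaluateAndDerive(variable);
        return {std::sin(va), std::cos(va) * da};  // chain rule
    }
};

int main() {
    // y = x1*x2 + sin(x1), differentiated with respect to x1 at (x1, x2) = (2, 3).
    Variable x1(2.0), x2(3.0);
    Multiply product(&x1, &x2);
    Sin sine(&x1);
    Plus y(&product, &sine);
    auto [value, derivative] = y.evaluateAndDerive(&x1);  // derivative = x2 + cos(x1)
    std::cout << value << " " << derivative << "\n";
}

A reverse sweep, by contrast, starts from the output and delivers the derivatives with respect to all inputs in a single pass.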
This is only half the work when compared to forward accumulation, but reverse accumulation requires the storage of the intermediate variables wi as well as the instructions that produced them in a data structure known as a "tape" or a Wengert list (however, Wengert published forward accumulation, not reverse accumulation), which may consume significant memory if the computational graph is large. This can be mitigated to some extent by storing only a subset of the intermediate variables and then reconstructing the necessary work variables by repeating the evaluations, a technique known as rematerialization. Checkpointing is also used to save intermediary states. The operations to compute the derivative using reverse accumulation are shown in the table below (note the reversed order): The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint; a unary function y = f(x) in the primal causes x̄ = ȳ f′(x) in the adjoint; etc. ==== Implementation ==== ===== Pseudo code ===== Reverse accumulation requires two passes: In the forward pass, the function is evaluated first and the partial results are cached. In the reverse pass, the partial derivatives are calculated and the previously derived value is backpropagated. The corresponding method call expects the expression Z to be derived and seeded with the derived value of the parent expression. For the top expression, Z differentiated with respect to Z, this is 1. The method traverses the expression tree recursively until a variable is reached and adds the current seed value to the derivative expression. ===== C++ ===== === Beyond forward and reverse accumulation === Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of f : Rn → Rm with a minimum number of arithmetic operations is known as the optimal Jacobian accumulation (OJA) problem, which is NP-complete. Central to this proof is the idea that algebraic dependencies may exist between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent. == Automatic differentiation using dual numbers == Forward mode automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number to represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra. The augmented algebra is the algebra of dual numbers. Replace every number x {\displaystyle \,x} with the number x + x ′ ε {\displaystyle x+x'\varepsilon } , where x ′ {\displaystyle x'} is a real number, but ε {\displaystyle \varepsilon } is an abstract number with the property ε 2 = 0 {\displaystyle \varepsilon ^{2}=0} (an infinitesimal; see Smooth infinitesimal analysis). 
Using only this, regular arithmetic gives ( x + x ′ ε ) + ( y + y ′ ε ) = x + y + ( x ′ + y ′ ) ε ( x + x ′ ε ) − ( y + y ′ ε ) = x − y + ( x ′ − y ′ ) ε ( x + x ′ ε ) ⋅ ( y + y ′ ε ) = x y + x y ′ ε + y x ′ ε + x ′ y ′ ε 2 = x y + ( x y ′ + y x ′ ) ε ( x + x ′ ε ) / ( y + y ′ ε ) = ( x / y + x ′ ε / y ) / ( 1 + y ′ ε / y ) = ( x / y + x ′ ε / y ) ⋅ ( 1 − y ′ ε / y ) = x / y + ( x ′ / y − x y ′ / y 2 ) ε {\displaystyle {\begin{aligned}(x+x'\varepsilon )+(y+y'\varepsilon )&=x+y+(x'+y')\varepsilon \\(x+x'\varepsilon )-(y+y'\varepsilon )&=x-y+(x'-y')\varepsilon \\(x+x'\varepsilon )\cdot (y+y'\varepsilon )&=xy+xy'\varepsilon +yx'\varepsilon +x'y'\varepsilon ^{2}=xy+(xy'+yx')\varepsilon \\(x+x'\varepsilon )/(y+y'\varepsilon )&=(x/y+x'\varepsilon /y)/(1+y'\varepsilon /y)=(x/y+x'\varepsilon /y)\cdot (1-y'\varepsilon /y)=x/y+(x'/y-xy'/y^{2})\varepsilon \end{aligned}}} using ( 1 + y ′ ε / y ) ⋅ ( 1 − y ′ ε / y ) = 1 {\displaystyle (1+y'\varepsilon /y)\cdot (1-y'\varepsilon /y)=1} . Now, polynomials can be calculated in this augmented arithmetic. If P ( x ) = p 0 + p 1 x + p 2 x 2 + ⋯ + p n x n {\displaystyle P(x)=p_{0}+p_{1}x+p_{2}x^{2}+\cdots +p_{n}x^{n}} , then P ( x + x ′ ε ) = p 0 + p 1 ( x + x ′ ε ) + ⋯ + p n ( x + x ′ ε ) n = p 0 + p 1 x + ⋯ + p n x n + p 1 x ′ ε + 2 p 2 x x ′ ε + ⋯ + n p n x n − 1 x ′ ε = P ( x ) + P ( 1 ) ( x ) x ′ ε {\displaystyle {\begin{aligned}P(x+x'\varepsilon )&=p_{0}+p_{1}(x+x'\varepsilon )+\cdots +p_{n}(x+x'\varepsilon )^{n}\\&=p_{0}+p_{1}x+\cdots +p_{n}x^{n}+p_{1}x'\varepsilon +2p_{2}xx'\varepsilon +\cdots +np_{n}x^{n-1}x'\varepsilon \\&=P(x)+P^{(1)}(x)x'\varepsilon \end{aligned}}} where P ( 1 ) {\displaystyle P^{(1)}} denotes the derivative of P {\displaystyle P} with respect to its first argument, and x ′ {\displaystyle x'} , called a seed, can be chosen arbitrarily. The new arithmetic consists of ordered pairs, elements written ⟨ x , x ′ ⟩ {\displaystyle \langle x,x'\rangle } , with ordinary arithmetics on the first component, and first order differentiation arithmetic on the second component, as described above. 
Extending the above results on polynomials to analytic functions gives a list of the basic arithmetic and some standard functions for the new arithmetic: ⟨ u , u ′ ⟩ + ⟨ v , v ′ ⟩ = ⟨ u + v , u ′ + v ′ ⟩ ⟨ u , u ′ ⟩ − ⟨ v , v ′ ⟩ = ⟨ u − v , u ′ − v ′ ⟩ ⟨ u , u ′ ⟩ ∗ ⟨ v , v ′ ⟩ = ⟨ u v , u ′ v + u v ′ ⟩ ⟨ u , u ′ ⟩ / ⟨ v , v ′ ⟩ = ⟨ u v , u ′ v − u v ′ v 2 ⟩ ( v ≠ 0 ) sin ⁡ ⟨ u , u ′ ⟩ = ⟨ sin ⁡ ( u ) , u ′ cos ⁡ ( u ) ⟩ cos ⁡ ⟨ u , u ′ ⟩ = ⟨ cos ⁡ ( u ) , − u ′ sin ⁡ ( u ) ⟩ exp ⁡ ⟨ u , u ′ ⟩ = ⟨ exp ⁡ u , u ′ exp ⁡ u ⟩ log ⁡ ⟨ u , u ′ ⟩ = ⟨ log ⁡ ( u ) , u ′ / u ⟩ ( u > 0 ) ⟨ u , u ′ ⟩ k = ⟨ u k , u ′ k u k − 1 ⟩ ( u ≠ 0 ) | ⟨ u , u ′ ⟩ | = ⟨ | u | , u ′ sign ⁡ u ⟩ ( u ≠ 0 ) {\displaystyle {\begin{aligned}\left\langle u,u'\right\rangle +\left\langle v,v'\right\rangle &=\left\langle u+v,u'+v'\right\rangle \\\left\langle u,u'\right\rangle -\left\langle v,v'\right\rangle &=\left\langle u-v,u'-v'\right\rangle \\\left\langle u,u'\right\rangle *\left\langle v,v'\right\rangle &=\left\langle uv,u'v+uv'\right\rangle \\\left\langle u,u'\right\rangle /\left\langle v,v'\right\rangle &=\left\langle {\frac {u}{v}},{\frac {u'v-uv'}{v^{2}}}\right\rangle \quad (v\neq 0)\\\sin \left\langle u,u'\right\rangle &=\left\langle \sin(u),u'\cos(u)\right\rangle \\\cos \left\langle u,u'\right\rangle &=\left\langle \cos(u),-u'\sin(u)\right\rangle \\\exp \left\langle u,u'\right\rangle &=\left\langle \exp u,u'\exp u\right\rangle \\\log \left\langle u,u'\right\rangle &=\left\langle \log(u),u'/u\right\rangle \quad (u>0)\\\left\langle u,u'\right\rangle ^{k}&=\left\langle u^{k},u'ku^{k-1}\right\rangle \quad (u\neq 0)\\\left|\left\langle u,u'\right\rangle \right|&=\left\langle \left|u\right|,u'\operatorname {sign} u\right\rangle \quad (u\neq 0)\end{aligned}}} and in general for the primitive function g {\displaystyle g} , g ( ⟨ u , u ′ ⟩ , ⟨ v , v ′ ⟩ ) = ⟨ g ( u , v ) , g u ( u , v ) u ′ + g v ( u , v ) v ′ ⟩ {\displaystyle g(\langle u,u'\rangle ,\langle v,v'\rangle )=\langle g(u,v),g_{u}(u,v)u'+g_{v}(u,v)v'\rangle } where g u {\displaystyle g_{u}} and g v {\displaystyle g_{v}} are the derivatives of g {\displaystyle g} with respect to its first and second arguments, respectively. When a binary basic arithmetic operation is applied to mixed arguments—the pair ⟨ u , u ′ ⟩ {\displaystyle \langle u,u'\rangle } and the real number c {\displaystyle c} —the real number is first lifted to ⟨ c , 0 ⟩ {\displaystyle \langle c,0\rangle } . The derivative of a function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } at the point x 0 {\displaystyle x_{0}} is now found by calculating f ( ⟨ x 0 , 1 ⟩ ) {\displaystyle f(\langle x_{0},1\rangle )} using the above arithmetic, which gives ⟨ f ( x 0 ) , f ′ ( x 0 ) ⟩ {\displaystyle \langle f(x_{0}),f'(x_{0})\rangle } as the result. === Implementation === An example implementation based on the dual number approach follows. 
==== Pseudo code ==== Dual plus(Dual A, Dual B) { return { realPartOf(A) + realPartOf(B), infinitesimalPartOf(A) + infinitesimalPartOf(B) }; } Dual minus(Dual A, Dual B) { return { realPartOf(A) - realPartOf(B), infinitesimalPartOf(A) - infinitesimalPartOf(B) }; } Dual multiply(Dual A, Dual B) { return { realPartOf(A) * realPartOf(B), realPartOf(B) * infinitesimalPartOf(A) + realPartOf(A) * infinitesimalPartOf(B) }; } X = {x, 0}; Y = {y, 0}; Epsilon = {0, 1}; xPartial = infinitesimalPartOf(f(X + Epsilon, Y)); yPartial = infinitesimalPartOf(f(X, Y + Epsilon)); ==== C++ ==== === Vector arguments and functions === Multivariate functions can be handled with the same efficiency and mechanisms as univariate functions by adopting a directional derivative operator. That is, if it is sufficient to compute y ′ = ∇ f ( x ) ⋅ x ′ {\displaystyle y'=\nabla f(x)\cdot x'} , the directional derivative y ′ ∈ R m {\displaystyle y'\in \mathbb {R} ^{m}} of f : R n → R m {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} at x ∈ R n {\displaystyle x\in \mathbb {R} ^{n}} in the direction x ′ ∈ R n {\displaystyle x'\in \mathbb {R} ^{n}} may be calculated as ( ⟨ y 1 , y 1 ′ ⟩ , … , ⟨ y m , y m ′ ⟩ ) = f ( ⟨ x 1 , x 1 ′ ⟩ , … , ⟨ x n , x n ′ ⟩ ) {\displaystyle (\langle y_{1},y'_{1}\rangle ,\ldots ,\langle y_{m},y'_{m}\rangle )=f(\langle x_{1},x'_{1}\rangle ,\ldots ,\langle x_{n},x'_{n}\rangle )} using the same arithmetic as above. If all the elements of ∇ f {\displaystyle \nabla f} are desired, then n {\displaystyle n} function evaluations are required. Note that in many optimization applications, the directional derivative is indeed sufficient. === High order and many variables === The above arithmetic can be generalized to calculate second order and higher derivatives of multivariate functions. However, the arithmetic rules quickly grow complicated: complexity is quadratic in the highest derivative degree. Instead, truncated Taylor polynomial algebra can be used. The resulting arithmetic, defined on generalized dual numbers, allows efficient computation using functions as if they were a data type. Once the Taylor polynomial of a function is known, the derivatives are easily extracted. == Implementation == Forward-mode AD is implemented by a nonstandard interpretation of the program in which real numbers are replaced by dual numbers, constants are lifted to dual numbers with a zero epsilon coefficient, and the numeric primitives are lifted to operate on dual numbers. This nonstandard interpretation is generally implemented using one of two strategies: source code transformation or operator overloading. === Source code transformation (SCT) === The source code for a function is replaced by an automatically generated source code that includes statements for calculating the derivatives interleaved with the original instructions. Source code transformation can be implemented for all programming languages, and it is also easier for the compiler to do compile time optimizations. However, the implementation of the AD tool itself is more difficult and the build system is more complex. === Operator overloading (OO) === Operator overloading is a possibility for source code written in a language supporting it. Objects for real numbers and elementary mathematical operations must be overloaded to cater for the augmented arithmetic depicted above. 
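To make the operator-overloading approach concrete, a minimal sketch along the following lines could be used; the Dual type, its overloaded operators and the sample function f are illustrative assumptions, not the interface of any particular AD library.

#include <cmath>
#include <iostream>

// A Dual carries the value and the derivative part; the overloaded operators
// implement the augmented (dual-number) arithmetic described above.
struct Dual { double v, d; };

Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }
Dual sin(Dual a) { return {std::sin(a.v), a.d * std::cos(a.v)}; }

// The user-level function is written once, generically, and never edited.
template <typename T>
T f(T x1, T x2) { return x1 * x2 + sin(x1); }

int main() {
    // Seeding x1 with derivative 1 and x2 with 0 selects d/dx1.
    Dual y = f(Dual{2.0, 1.0}, Dual{3.0, 0.0});
    std::cout << "f = " << y.v << ", df/dx1 = " << y.d << '\n';   // df/dx1 = x2 + cos(x1)
}

Seeding x1 with 1 and x2 with 0 reproduces the forward sweep for the partial derivative with respect to x1 described earlier; a second call with the seeds exchanged gives the partial with respect to x2.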
This requires no change in the form or sequence of operations in the original source code for the function to be differentiated, but often requires changes in basic data types for numbers and vectors to support overloading and often also involves the insertion of special flagging operations. Due to the inherent operator overloading overhead on each loop, this approach usually exhibits weaker speed performance. === Operator overloading and source code transformation === Overloaded operators can be used to extract the valuation graph, followed by automatic generation of the AD-version of the primal function at run-time. Unlike the classic OO AAD, such an AD-function does not change from one iteration to the next. Hence there is no OO or tape interpretation run-time overhead per Xi sample. With the AD-function being generated at runtime, it can be optimised to take into account the current state of the program and precompute certain values. In addition, it can be generated in a way that consistently utilizes native CPU vectorization to process user data in chunks of 4 (AVX2) or 8 (AVX512) doubles, for a speed-up of roughly 4x to 8x. Taking multithreading into account as well, such an approach can lead to a final acceleration of the order of 8 × #Cores compared to traditional AAD tools. A reference implementation is available on GitHub. == See also == Differentiable programming == Notes == == References == == Further reading == Rall, Louis B. (1981). Automatic Differentiation: Techniques and Applications. Lecture Notes in Computer Science. Vol. 120. Springer. ISBN 978-3-540-10861-0. Griewank, Andreas; Walther, Andrea (2008). Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. Other Titles in Applied Mathematics. Vol. 105 (2nd ed.). SIAM. doi:10.1137/1.9780898717761. ISBN 978-0-89871-659-7. Neidinger, Richard (2010). "Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming" (PDF). SIAM Review. 52 (3): 545–563. CiteSeerX 10.1.1.362.6580. doi:10.1137/080743627. S2CID 17134969. Retrieved 2013-03-15. Naumann, Uwe (2012). The Art of Differentiating Computer Programs. Software-Environments-tools. SIAM. ISBN 978-1-611972-06-1. Henrard, Marc (2017). Algorithmic Differentiation in Finance Explained. Financial Engineering Explained. Palgrave Macmillan. ISBN 978-3-319-53978-2. == External links == www.autodiff.org, An "entry site to everything you want to know about automatic differentiation" Automatic Differentiation of Parallel OpenMP Programs Automatic Differentiation, C++ Templates and Photogrammetry Automatic Differentiation, Operator Overloading Approach Compute analytic derivatives of any Fortran77, Fortran95, or C program through a web-based interface Automatic Differentiation of Fortran programs Description and example code for forward Automatic Differentiation in Scala Archived 2016-08-03 at the Wayback Machine finmath-lib stochastic automatic differentiation, Automatic differentiation for random variables (Java implementation of the stochastic automatic differentiation). 
Adjoint Algorithmic Differentiation: Calibration and Implicit Function Theorem C++ Template-based automatic differentiation article and implementation Tangent Source-to-Source Debuggable Derivatives Exact First- and Second-Order Greeks by Algorithmic Differentiation Adjoint Algorithmic Differentiation of a GPU Accelerated Application Adjoint Methods in Computational Finance Software Tool Support for Algorithmic Differentiation More than a Thousand Fold Speed Up for xVA Pricing Calculations with Intel Xeon Scalable Processors Sparse truncated Taylor series implementation with VBIC95 example for higher order derivatives
Wikipedia:Automorphic number#0
In mathematics, an automorphic number (sometimes referred to as a circular number) is a natural number in a given number base b {\displaystyle b} whose square "ends" in the same digits as the number itself. == Definition and properties == Given a number base b {\displaystyle b} , a natural number n {\displaystyle n} with k {\displaystyle k} digits is an automorphic number if n {\displaystyle n} is a fixed point of the polynomial function f ( x ) = x 2 {\displaystyle f(x)=x^{2}} over Z / b k Z {\displaystyle \mathbb {Z} /b^{k}\mathbb {Z} } , the ring of integers modulo b k {\displaystyle b^{k}} . As the inverse limit of Z / b k Z {\displaystyle \mathbb {Z} /b^{k}\mathbb {Z} } is Z b {\displaystyle \mathbb {Z} _{b}} , the ring of b {\displaystyle b} -adic integers, automorphic numbers are used to find the numerical representations of the fixed points of f ( x ) = x 2 {\displaystyle f(x)=x^{2}} over Z b {\displaystyle \mathbb {Z} _{b}} . For example, with b = 10 {\displaystyle b=10} , there are four 10-adic fixed points of f ( x ) = x 2 {\displaystyle f(x)=x^{2}} , the last 10 digits of which are: … 0000000000 {\displaystyle \ldots 0000000000} … 0000000001 {\displaystyle \ldots 0000000001} … 8212890625 {\displaystyle \ldots 8212890625} (sequence A018247 in the OEIS) … 1787109376 {\displaystyle \ldots 1787109376} (sequence A018248 in the OEIS) Thus, the automorphic numbers in base 10 are 0, 1, 5, 6, 25, 76, 376, 625, 9376, 90625, 109376, 890625, 2890625, 7109376, 12890625, 87109376, 212890625, 787109376, 1787109376, 8212890625, 18212890625, 81787109376, 918212890625, 9918212890625, 40081787109376, 59918212890625, ... (sequence A003226 in the OEIS). A fixed point of f ( x ) {\displaystyle f(x)} is a zero of the function g ( x ) = f ( x ) − x {\displaystyle g(x)=f(x)-x} . In the ring of integers modulo b {\displaystyle b} , there are 2 ω ( b ) {\displaystyle 2^{\omega (b)}} zeroes to g ( x ) = x 2 − x {\displaystyle g(x)=x^{2}-x} , where the prime omega function ω ( b ) {\displaystyle \omega (b)} is the number of distinct prime factors in b {\displaystyle b} . An element x {\displaystyle x} in Z / b Z {\displaystyle \mathbb {Z} /b\mathbb {Z} } is a zero of g ( x ) = x 2 − x {\displaystyle g(x)=x^{2}-x} if and only if x ≡ 0 mod p v p ( b ) {\displaystyle x\equiv 0{\bmod {p}}^{v_{p}(b)}} or x ≡ 1 mod p v p ( b ) {\displaystyle x\equiv 1{\bmod {p}}^{v_{p}(b)}} for all p | b {\displaystyle p|b} . Since there are two possible values in { 0 , 1 } {\displaystyle \lbrace 0,1\rbrace } , and there are ω ( b ) {\displaystyle \omega (b)} such p | b {\displaystyle p|b} , there are 2 ω ( b ) {\displaystyle 2^{\omega (b)}} zeroes of g ( x ) = x 2 − x {\displaystyle g(x)=x^{2}-x} , and thus there are 2 ω ( b ) {\displaystyle 2^{\omega (b)}} fixed points of f ( x ) = x 2 {\displaystyle f(x)=x^{2}} . According to Hensel's lemma, if there are k {\displaystyle k} zeroes or fixed points of a polynomial function modulo b {\displaystyle b} , then there are k {\displaystyle k} corresponding zeroes or fixed points of the same function modulo any power of b {\displaystyle b} , and this remains true in the inverse limit. Thus, in any given base b {\displaystyle b} there are 2 ω ( b ) {\displaystyle 2^{\omega (b)}} b {\displaystyle b} -adic fixed points of f ( x ) = x 2 {\displaystyle f(x)=x^{2}} . As 0 is always a zero-divisor, 0 and 1 are always fixed points of f ( x ) = x 2 {\displaystyle f(x)=x^{2}} , and 0 and 1 are automorphic numbers in every base. These solutions are called trivial automorphic numbers. 
If b {\displaystyle b} is a prime power, then the ring of b {\displaystyle b} -adic numbers has no zero-divisors other than 0, so the only fixed points of f ( x ) = x 2 {\displaystyle f(x)=x^{2}} are 0 and 1. As a result, nontrivial automorphic numbers, those other than 0 and 1, only exist when the base b {\displaystyle b} has at least two distinct prime factors. === Automorphic numbers in base b === All b {\displaystyle b} -adic numbers are represented in base b {\displaystyle b} , using A−Z to represent digit values 10 to 35. == Extensions == Automorphic numbers can be extended to any such polynomial function of degree n {\displaystyle n} f ( x ) = ∑ i = 0 n a i x i {\textstyle f(x)=\sum _{i=0}^{n}a_{i}x^{i}} with b-adic coefficients a i {\displaystyle a_{i}} . These generalised automorphic numbers form a tree. === a-automorphic numbers === An a {\displaystyle a} -automorphic number occurs when the polynomial function is f ( x ) = a x 2 {\displaystyle f(x)=ax^{2}} For example, with b = 10 {\displaystyle b=10} and a = 2 {\displaystyle a=2} , as there are two fixed points for f ( x ) = 2 x 2 {\displaystyle f(x)=2x^{2}} in Z / 10 Z {\displaystyle \mathbb {Z} /10\mathbb {Z} } ( x = 0 {\displaystyle x=0} and x = 8 {\displaystyle x=8} ), according to Hensel's lemma there are two 10-adic fixed points for f ( x ) = 2 x 2 {\displaystyle f(x)=2x^{2}} , … 0000000000 {\displaystyle \ldots 0000000000} … 0893554688 {\displaystyle \ldots 0893554688} so the 2-automorphic numbers in base 10 are 0, 8, 88, 688, 4688... === Trimorphic numbers === A trimorphic number or spherical number occurs when the polynomial function is f ( x ) = x 3 {\displaystyle f(x)=x^{3}} . All automorphic numbers are trimorphic. The terms circular and spherical were formerly used for the slightly different case of a number whose powers all have the same last digit as the number itself. For base b = 10 {\displaystyle b=10} , the trimorphic numbers are: 0, 1, 4, 5, 6, 9, 24, 25, 49, 51, 75, 76, 99, 125, 249, 251, 375, 376, 499, 501, 624, 625, 749, 751, 875, 999, 1249, 3751, 4375, 4999, 5001, 5625, 6249, 8751, 9375, 9376, 9999, ... (sequence A033819 in the OEIS) For base b = 12 {\displaystyle b=12} , the trimorphic numbers are: 0, 1, 3, 4, 5, 7, 8, 9, B, 15, 47, 53, 54, 5B, 61, 68, 69, 75, A7, B3, BB, 115, 253, 368, 369, 4A7, 5BB, 601, 715, 853, 854, 969, AA7, BBB, 14A7, 2369, 3853, 3854, 4715, 5BBB, 6001, 74A7, 8368, 8369, 9853, A715, BBBB, ... == Programming example == == See also == Arithmetic dynamics Kaprekar number p-adic number p-adic analysis Zero-divisor == References == examples of 1-automorphic numbers at PlanetMath. == External links == Weisstein, Eric W. "Automorphic number". MathWorld. Weisstein, Eric W. "Trimorphic Number". MathWorld.
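For the empty Programming example section above, a minimal brute-force sketch in C++ (the language choice and the search bound of 10^7 are assumptions made here) tests whether each square ends in the digits of the number itself:

#include <cstdint>
#include <iostream>

int main() {
    const std::uint64_t limit = 10000000;        // search all n below 10^7
    for (std::uint64_t n = 0; n < limit; ++n) {
        std::uint64_t m = 1;                     // m = 10^(number of digits of n)
        for (std::uint64_t t = n; t > 0; t /= 10) m *= 10;
        if (n == 0) m = 10;                      // treat 0 as a one-digit number
        if ((n * n) % m == n)                    // the square "ends" in n itself
            std::cout << n << '\n';
    }
}

Run as is, this prints 0, 1, 5, 6, 25, 76, 376, 625, 9376, 90625, 109376, 890625, 2890625 and 7109376, matching the base-10 sequence listed above.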
Wikipedia:Automorphism#0
In mathematics, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object. == Definition == In an algebraic structure such as a group, a ring, or vector space, an automorphism is simply a bijective homomorphism of an object into itself. (The definition of a homomorphism depends on the type of algebraic structure; see, for example, group homomorphism, ring homomorphism, and linear operator.) More generally, for an object in some category, an automorphism is a morphism of the object to itself that has an inverse morphism; that is, a morphism f : X → X {\displaystyle f:X\to X} is an automorphism if there is a morphism g : X → X {\displaystyle g:X\to X} such that g ∘ f = f ∘ g = id X , {\displaystyle g\circ f=f\circ g=\operatorname {id} _{X},} where id X {\displaystyle \operatorname {id} _{X}} is the identity morphism of X. For algebraic structures, the two definitions are equivalent; in this case, the identity morphism is simply the identity function, and is often called the trivial automorphism. == Automorphism group == The automorphisms of an object X form a group under composition of morphisms, which is called the automorphism group of X. This results straightforwardly from the definition of a category. The automorphism group of an object X in a category C is often denoted AutC(X), or simply Aut(X) if the category is clear from context. == Examples == In set theory, an arbitrary permutation of the elements of a set X is an automorphism. The automorphism group of X is also called the symmetric group on X. In elementary arithmetic, the set of integers, ⁠ Z {\displaystyle \mathbb {Z} } ⁠, considered as a group under addition, has a unique nontrivial automorphism: negation. Considered as a ring, however, it has only the trivial automorphism. Generally speaking, negation is an automorphism of any abelian group, but not of a ring or field. A group automorphism is a group isomorphism from a group to itself. Informally, it is a permutation of the group elements such that the structure remains unchanged. For every group G there is a natural group homomorphism G → Aut(G) whose image is the group Inn(G) of inner automorphisms and whose kernel is the center of G. Thus, if G has trivial center it can be embedded into its own automorphism group. In linear algebra, an endomorphism of a vector space V is a linear operator V → V. An automorphism is an invertible linear operator on V. When the vector space is finite-dimensional, the automorphism group of V is the same as the general linear group, GL(V). (The algebraic structure of all endomorphisms of V is itself an algebra over the same base field as V, whose invertible elements precisely consist of GL(V).) A field automorphism is a bijective ring homomorphism from a field to itself. The field Q {\displaystyle \mathbb {Q} } of the rational numbers has no other automorphism than the identity, since an automorphism must fix the additive identity 0 and the multiplicative identity 1; the sum of a finite number of 1 must be fixed, as well as the additive inverses of these sums (that is, the automorphism fixes all integers); finally, since every rational number is the quotient of two integers, all rational numbers must be fixed by any automorphism. 
The field R {\displaystyle \mathbb {R} } of the real numbers has no automorphisms other than the identity. Indeed, the rational numbers must be fixed by every automorphism, per above; an automorphism must preserve inequalities since x < y {\displaystyle x<y} is equivalent to ∃ z ∣ y − x = z 2 , {\displaystyle \exists z\mid y-x=z^{2},} and the latter property is preserved by every automorphism; finally every real number must be fixed since it is the least upper bound of a sequence of rational numbers. The field C {\displaystyle \mathbb {C} } of the complex numbers has a unique nontrivial automorphism that fixes the real numbers. It is the complex conjugation, which maps i {\displaystyle i} to − i . {\displaystyle -i.} The axiom of choice implies the existence of uncountably many automorphisms that do not fix the real numbers. The study of automorphisms of algebraic field extensions is the starting point and the main object of Galois theory. The automorphism group of the quaternions ( H {\displaystyle \mathbb {H} } ) as a ring consists of the inner automorphisms, by the Skolem–Noether theorem: maps of the form a ↦ bab−1. This group is isomorphic to SO(3), the group of rotations in 3-dimensional space. The automorphism group of the octonions ( O {\displaystyle \mathbb {O} } ) is the exceptional Lie group G2. In graph theory an automorphism of a graph is a permutation of the nodes that preserves edges and non-edges. In particular, if two nodes are joined by an edge, so are their images under the permutation. In geometry, an automorphism may be called a motion of the space. Specialized terminology is also used: In metric geometry an automorphism is a self-isometry. The automorphism group is also called the isometry group. In the category of Riemann surfaces, an automorphism is a biholomorphic map (also called a conformal map), from a surface to itself. For example, the automorphisms of the Riemann sphere are Möbius transformations. An automorphism of a differentiable manifold M is a diffeomorphism from M to itself. The automorphism group is sometimes denoted Diff(M). In topology, morphisms between topological spaces are called continuous maps, and an automorphism of a topological space is a homeomorphism of the space to itself, or self-homeomorphism (see homeomorphism group). In this example it is not sufficient for a morphism to be bijective to be an isomorphism. == History == One of the earliest group automorphisms (automorphism of a group, not simply a group of automorphisms of points) was given by the Irish mathematician William Rowan Hamilton in 1856, in his icosian calculus, where he discovered an order two automorphism, writing: so that μ {\displaystyle \mu } is a new fifth root of unity, connected with the former fifth root λ {\displaystyle \lambda } by relations of perfect reciprocity. == Inner and outer automorphisms == In some categories, notably groups, rings, and Lie algebras, it is possible to separate automorphisms into two types, called "inner" and "outer" automorphisms. In the case of groups, the inner automorphisms are the conjugations by the elements of the group itself. For each element a of a group G, conjugation by a is the operation φa : G → G given by φa(g) = aga−1 (or a−1ga; usage varies). One can easily check that conjugation by a is a group automorphism. The inner automorphisms form a normal subgroup of Aut(G), denoted by Inn(G). The other automorphisms are called outer automorphisms. 
The quotient group Aut(G) / Inn(G) is usually denoted by Out(G); the non-trivial elements are the cosets that contain the outer automorphisms. The same definition holds in any unital ring or algebra where a is any invertible element. For Lie algebras the definition is slightly different. == See also == Antiautomorphism Automorphism (in Sudoku puzzles) Characteristic subgroup Endomorphism ring Frobenius automorphism Morphism Order automorphism (in order theory). Relation-preserving automorphism Fractional Fourier transform == References == == External links == Automorphism at Encyclopaedia of Mathematics Weisstein, Eric W. "Automorphism". MathWorld.
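As a small computational illustration of the statement above that conjugation by a group element is an automorphism, the following sketch brute-forces the homomorphism property for conjugation in the symmetric group S3; the permutation encoding and the chosen element a are assumptions made for this example.

#include <array>
#include <iostream>
#include <vector>

using Perm = std::array<int, 3>;                 // p[i] is the image of i

Perm mul(const Perm& p, const Perm& q) {         // (p*q)(i) = p(q(i))
    return {p[q[0]], p[q[1]], p[q[2]]};
}
Perm inv(const Perm& p) {
    Perm r{};
    for (int i = 0; i < 3; ++i) r[p[i]] = i;
    return r;
}

int main() {
    const std::vector<Perm> S3 = {{0,1,2},{0,2,1},{1,0,2},{1,2,0},{2,0,1},{2,1,0}};
    const Perm a = {1, 2, 0};                    // conjugating element
    auto phi = [&](const Perm& g) { return mul(mul(a, g), inv(a)); };   // phi_a(g) = a g a^-1
    bool homomorphism = true;
    for (const Perm& g : S3)
        for (const Perm& h : S3)
            if (phi(mul(g, h)) != mul(phi(g), phi(h))) homomorphism = false;
    std::cout << (homomorphism ? "conjugation by a is a homomorphism" : "check failed") << '\n';
}

Bijectivity is immediate, since conjugation by a is undone by conjugation by a−1, so the check confirms that φa is an automorphism of S3.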
Wikipedia:Autonomous convergence theorem#0
In mathematics, an autonomous convergence theorem is one of a family of related theorems which specify conditions guaranteeing global asymptotic stability of a continuous autonomous dynamical system. == History == The Markus–Yamabe conjecture was formulated as an attempt to give conditions for global stability of continuous dynamical systems in two dimensions. However, the Markus–Yamabe conjecture does not hold for dimensions higher than two, a problem which autonomous convergence theorems attempt to address. The first autonomous convergence theorem was constructed by Russell Smith. This theorem was later refined by Michael Li and James Muldowney. == An example autonomous convergence theorem == A comparatively simple autonomous convergence theorem is as follows: Let x {\displaystyle x} be a vector in some space X ⊆ R n {\displaystyle X\subseteq \mathbb {R} ^{n}} , evolving according to an autonomous differential equation x ˙ = f ( x ) {\displaystyle {\dot {x}}=f(x)} . Suppose that X {\displaystyle X} is convex and forward invariant under f {\displaystyle f} , and that there exists a fixed point x ^ ∈ X {\displaystyle {\hat {x}}\in X} such that f ( x ^ ) = 0 {\displaystyle f({\hat {x}})=0} . If there exists a logarithmic norm μ {\displaystyle \mu } such that the Jacobian J ( x ) = D x f {\displaystyle J(x)=D_{x}f} satisfies μ ( J ( x ) ) < 0 {\displaystyle \mu (J(x))<0} for all values of x {\displaystyle x} , then x ^ {\displaystyle {\hat {x}}} is the only fixed point, and it is globally asymptotically stable. This autonomous convergence theorem is very closely related to the Banach fixed-point theorem. == How autonomous convergence works == Note: this is an intuitive description of how autonomous convergence theorems guarantee stability, not a strictly mathematical description. The key point in the example theorem given above is the existence of a negative logarithmic norm, which is derived from a vector norm. The vector norm effectively measures the distance between points in the vector space on which the differential equation is defined, and the negative logarithmic norm means that distances between points, as measured by the corresponding vector norm, are decreasing with time under the action of f {\displaystyle f} . So long as the trajectories of all points in the phase space are bounded, all trajectories must therefore eventually converge to the same point. The autonomous convergence theorems by Russell Smith, Michael Li and James Muldowney work in a similar manner, but they rely on showing that the area of two-dimensional shapes in phase space decrease with time. This means that no periodic orbits can exist, as all closed loops must shrink to a point. If the system is bounded, then according to Pugh's closing lemma there can be no chaotic behaviour either, so all trajectories must eventually reach an equilibrium. Michael Li has also developed an extended autonomous convergence theorem which is applicable to dynamical systems containing an invariant manifold. == Notes ==
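A numerical check of the hypothesis in the example theorem might look as follows; the two-dimensional system x1' = −3x1 + sin(x2), x2' = cos(x1) − 2x2, the sampling grid, and the use of the Euclidean logarithmic norm μ2 are all assumptions chosen here for illustration.

#include <algorithm>
#include <cmath>
#include <iostream>

// mu_2(J) is the largest eigenvalue of the symmetric part (J + J^T)/2,
// available in closed form for a 2x2 matrix.
double mu2(double x1, double x2) {
    double j11 = -3.0,          j12 = std::cos(x2);   // Jacobian of f at (x1, x2)
    double j21 = -std::sin(x1), j22 = -2.0;
    double a = j11, d = j22, b = 0.5 * (j12 + j21);   // symmetric part entries
    return 0.5 * ((a + d) + std::sqrt((a - d) * (a - d) + 4.0 * b * b));
}

int main() {
    double worst = -1e30;
    for (double x1 = -10.0; x1 <= 10.0; x1 += 0.1)
        for (double x2 = -10.0; x2 <= 10.0; x2 += 0.1)
            worst = std::max(worst, mu2(x1, x2));
    std::cout << "max mu_2(J) on the grid = " << worst << '\n';   // about -1.38
}

A grid check of this kind is only suggestive; to apply the theorem one needs a bound valid for all x, which for this particular system follows from μ2(J) ≤ (−5 + √5)/2 < 0, since the off-diagonal entry of the symmetric part is at most 1 in absolute value.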
Wikipedia:Avadhesh Narayan Singh#0
Avadhesh Narayan Singh (Benares, 1901 – July 10, 1954) was an Indian mathematician and historian of mathematics. Singh received a master's degree from Banaras Hindu University in his hometown (Varanasi was then called Banaras or Benares) in 1924, where he was a student of Ganesh Prasad. He received his DSc in mathematics from the University of Calcutta in 1929 for his dissertation titled "Derivation and Non-Differentiable functions". After securing a DSc, Singh went to Lucknow University, where he became a Reader in 1940 and a professor in 1943. There he opened a Hindu Mathematics section and revived the nearly defunct Banaras Mathematical Society under the name of Bharata Ganita Parisad. In the 1930s he wrote a history of Indian mathematics with Bibhutibhushan Datta, which became a standard work. As a mathematician, he dealt with non-differentiable functions (an example of an everywhere non-differentiable function is the Weierstrass function). == Publications == Singh published about a dozen papers related to the history of Indian mathematics, and three dozen papers related to the non-differentiability of functions. He also published the following two books: "The Theory and Construction of Non-Differentiable Functions", Lucknow University Studies No. I, 1935. [1] (accessed on 2 August 2023). Bibhuti Bhushan Datta and Avadhesh Narayan Singh (1935). A History of Hindu Mathematics: A Source Book (Part I Numerical Notation and Arithmetic) (First ed.). Lahore: Motilal Banarsidass. Retrieved 2 August 2023. Bibhuti Bhushan Datta and Avadhesh Narayan Singh (1938). A History of Hindu Mathematics: A Source Book (Part II Algebra) (First ed.). Lahore: Motilal Banarsi Das. Retrieved 2 August 2023. Volume 3 of the "History of Hindu Mathematics" was edited by Kripa Shankar Shukla and published in several papers in the Indian Journal of History of Science (Vol. 5, 1980 to Vol. 28, 1993). These edited papers are available in the Studies in Indian Mathematics and Astronomy (Selected Articles of Kripa Shankar Shukla). == References ==
Wikipedia:Awi Federgruen#0
Awi Federgruen (born 1953, in Geneva) is a Dutch/American mathematician and operations researcher and Charles E. Exley Professor of Management at the Columbia Business School and affiliate professor at the university's Fu Foundation School of Engineering and Applied Science. == Biography == Federgruen received his BA from the University of Amsterdam in 1972, where he also received his MS in 1975 and his PhD in Operations Research in 1978 with a thesis entitled "Markovian Control Problems, Functional Equations and Algorithms" under supervision of Gijsbert de Leve and Henk Tijms. Federgruen started his academic career as a Research Fellow at the Centrum Wiskunde & Informatica, Amsterdam, in the early 1970s, and was a faculty member of the University of Rochester, Graduate School of Management. In 1979 he was appointed Professor at the Columbia University. In 1992 he was named the first Charles E. Exley Jr. Professor of Management, and holds the Chair of the Decision, Risk and Operations (DRO) Division. From 1997 to 2002 he was Vice Dean of the University. He serves as a principal consultant for the Israel Air Force, in the area of logistics and procurement policies. Federgruen has supervised many PhD students; recent graduates include Yusheng Zheng (at Wharton Business School), Ziv Katalan (at Wharton Business School), Yossi Aviv (Olin Business School), Fernando Bernstein (Fuqua School of Business), Joern Meissner (Kuehne Logistics University), Gad Allon (Wharton Business School), Nan Yang (Miami Herbert Business School), Margaret Pierson (Tuck School of Business), Lijian Lu (HKUST Business School) and Zhe Liu (Imperial College Business School), see PhD in Decision, Risk, and Operations Placement. Federgruen was awarded the 2004 Distinguished Fellowship Award by the Manufacturing, Service and Operations Management Society for Outstanding Research and Scholarship in Operations Management; and also the Distinguished Fellow, Manufacturing and Service Operations Management Society. He was elected to the 2009 class of Fellows of the Institute for Operations Research and the Management Sciences. == Work == Federgruen is known for his work in the development and implementation of planning models for supply chain management and logistical systems. His work on scenario planning is widely cited, and the field has gained prominence as computers now allow the processing of large masses of complex data. His work on supply chain models has wide applications in, for example, flu vaccine and the risks of relying too heavily on a single vaccine supplier. He is also an expert on applied probability models and dynamic programming. In the wake of Hurricane Katrina, Federgruen was quoted on the subject of applying predictive models to minimize risk in disaster situations. Together with Ran Kivetz, Federgruen analyzed available data regarding the issue of famine in Gaza concluding that "sufficient amounts of food are being supplied into Gaza”. == Publications == Books, a selection: 1978. Markovian Control Problems, Functional Equations and Algorithms. Doctorate thesis University of Amsterdam. Articles, a selection: Federgruen, Awi; Heching, Aliza (1999). "Combined pricing and inventory control under uncertainty" (PDF). Operations Research. 47 (3): 454–475. doi:10.1287/opre.47.3.454. Archived from the original (PDF) on 2014-05-14. Chen, Fangruo; Federgruen, Awi; Zheng, Yu-Sheng (2001). "Coordination mechanisms for a distribution system with one supplier and multiple retailers". Management Science. 47 (5): 693–708. 
doi:10.1287/mnsc.47.5.693.10484. Bernstein, Fernando; Federgruen, Awi (2005). "Decentralized supply chains with competing retailers under demand uncertainty". Management Science. 51 (1): 18–29. CiteSeerX 10.1.1.198.1317. doi:10.1287/mnsc.1040.0218. == References == == External links == Awi Federgruen at Columbia A. Federgroen at the University of Amsterdam Album Academicum website
Wikipedia:Axel Sophus Guldberg#0
Axel Sophus Guldberg (2 November 1838 – 28 February 1913) was a Norwegian mathematician. == Biography == Born in Christiania (now called Oslo), Guldberg was the second oldest out of 11 siblings. He and his siblings were initially homeschooled, but he and his older brother, Cato Maximilian Guldberg, later began going to school in Fredrikstad, where they lived together with relatives. He completed his examen artium in 1856, cand.real. in 1863 and dr.philos. in 1867. In 1863, he was an adjunct professor in Drammen. From 1864 to 1865, he studied mathematics in Germany and France, while simultaneously on his honeymoon. In 1865, Guldberg became a rector in Stavanger. The same year, he began teaching mathematics at the Norwegian Military Academy until 1899. He was an important figure in the insurance industry. He also served in the Norwegian law commission. In 1866, he had a son, Alf Victor Guldberg, with his wife, Fredrikke Borchsenius. == References ==
Wikipedia:Ax–Grothendieck theorem#0
In mathematics, the Ax–Grothendieck theorem is a result about injectivity and surjectivity of polynomials that was proved independently by James Ax and Alexander Grothendieck. The theorem is often given as this special case: If P {\displaystyle P} is an injective polynomial function from an n {\displaystyle n} -dimensional complex vector space to itself then P {\displaystyle P} is bijective. That is, if P {\displaystyle P} always maps distinct arguments to distinct values, then the values of P {\displaystyle P} cover all of C n {\displaystyle \mathbb {C} ^{n}} . The full theorem generalizes to any algebraic variety over an algebraically closed field. == Proof via finite fields == Grothendieck's proof of the theorem is based on proving the analogous theorem for finite fields and their algebraic closures. That is, for any field F {\displaystyle F} that is itself finite or that is the closure of a finite field, if a polynomial P {\displaystyle P} from F n {\displaystyle F^{n}} to itself is injective then it is bijective. If F {\displaystyle F} is a finite field, then F n {\displaystyle F^{n}} is finite. In this case the theorem is true for trivial reasons having nothing to do with the representation of the function as a polynomial: any injection of a finite set to itself is a bijection. When F {\displaystyle F} is the algebraic closure of a finite field, the result follows from Hilbert's Nullstellensatz. The Ax–Grothendieck theorem for complex numbers can therefore be proven by showing that a counterexample over C {\displaystyle \mathbb {C} } would translate into a counterexample in some algebraic extension of a finite field. This method of proof is noteworthy in that it is an example of the idea that finitistic algebraic relations in fields of characteristic 0 translate into algebraic relations over finite fields with large characteristic. Thus, one can use the arithmetic of finite fields to prove a statement about C {\displaystyle \mathbb {C} } even though there is no homomorphism from any finite field to C {\displaystyle \mathbb {C} } . The proof thus uses model-theoretic principles such as the compactness theorem to prove an elementary statement about polynomials. The proof for the general case uses a similar method. == Other proofs == There are other proofs of the theorem. Armand Borel gave a proof using topology. The case of n = 1 {\displaystyle n=1} and field C {\displaystyle \mathbb {C} } follows since C {\displaystyle \mathbb {C} } is algebraically closed and can also be thought of as a special case of the result that for any analytic function f {\displaystyle f} on C {\displaystyle \mathbb {C} } , injectivity of f {\displaystyle f} implies surjectivity of f {\displaystyle f} . This is a corollary of Picard's theorem. == Related results == Another example of reducing theorems about morphisms of finite type to finite fields can be found in EGA IV: There, it is proved that a radicial S {\displaystyle S} -endomorphism of a scheme X {\displaystyle X} of finite type over S {\displaystyle S} is bijective (10.4.11), and that if X / S {\displaystyle X/S} is of finite presentation, and the endomorphism is a monomorphism, then it is an automorphism (17.9.6). Therefore, a scheme of finite presentation over a base S {\displaystyle S} is a cohopfian object in the category of S {\displaystyle S} -schemes. 
The Ax–Grothendieck theorem may also be used to prove the Garden of Eden theorem, a result that like the Ax–Grothendieck theorem relates injectivity with surjectivity but in cellular automata rather than in algebraic fields. Although direct proofs of this theorem are known, the proof via the Ax–Grothendieck theorem extends more broadly, to automata acting on amenable groups. Some partial converses to the Ax–Grothendieck theorem: A generically surjective polynomial map of n {\displaystyle n} -dimensional affine space over a finitely generated extension of Z {\displaystyle \mathbb {Z} } or Z / p Z [ t ] {\displaystyle \mathbb {Z} /p\mathbb {Z} [t]} is bijective with a polynomial inverse rational over the same ring (and therefore bijective on affine space of the algebraic closure). A generically surjective rational map of n {\displaystyle n} -dimensional affine space over a Hilbertian field is generically bijective with a rational inverse defined over the same field. ("Hilbertian field" being defined here as a field for which Hilbert's Irreducibility Theorem holds, such as the rational numbers and function fields.) == References == == External links == O’Connor, Michael (2008). "Ax's Theorem: An Application of Logic to Ordinary Mathematics".
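To illustrate the finite-field step of the proof sketched above (an injective self-map of a finite set is automatically onto), the following brute-force check over F5 could be used; the particular map P(x, y) = (x + y^2, y) is an assumption chosen for this example, not one taken from the literature.

#include <iostream>
#include <set>
#include <utility>

int main() {
    const int p = 5;                                  // work over the finite field F_5
    std::set<std::pair<int, int>> image;
    bool injective = true;
    for (int x = 0; x < p; ++x)
        for (int y = 0; y < p; ++y) {
            std::pair<int, int> value{(x + y * y) % p, y};        // P(x, y) = (x + y^2, y)
            if (!image.insert(value).second) injective = false;   // value already seen
        }
    bool surjective = ((int)image.size() == p * p);
    std::cout << "injective: " << injective << ", surjective: " << surjective << '\n';
}

Both flags print 1: the 25 inputs produce 25 distinct values, so by pigeonhole the image must be all of F5 × F5.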
Wikipedia:Ayşe Şahin#0
Ayşe Arzu Şahin is a Turkish-American mathematician who works in dynamical systems. She was appointed the Dean of the College of Science and Mathematics at Wright State University in June 2020, and is a co-author of two textbooks on calculus and dynamical systems. == Education and career == Şahin graduated from Mount Holyoke College in 1988. She completed her Ph.D. in 1994 at the University of Maryland, College Park. Her dissertation, Tiling Representations of R 2 {\displaystyle \mathbb {R} ^{2}} Actions and α {\displaystyle \alpha } -Equivalence in Two Dimensions, was supervised by Daniel Rudolph. She joined the mathematics faculty at North Dakota State University, where she worked from 1994 until 2001, when she moved to DePaul University. At DePaul, she became a full professor in 2010, and co-directed a master's program in Middle School Mathematics. She moved again to Wright State as Chair of the Department of Mathematics and Statistics in 2015. In addition to her administrative role as Dean, Şahin conducts research in ergodic theory and symbolic dynamics, areas of dynamical systems concerned with the long-term behavior of systems. She is also involved in initiatives that support student success in STEM fields and advocates for broader access to mathematics education for underrepresented groups. == Books == In 2017, with Kathleen Madden and Aimee Johnson, Şahin published the textbook Discovering Discrete Dynamical Systems through the Mathematical Association of America. She is also a co-author of Calculus: Single and Multivariable (7th ed., Wiley, 2016), a text whose many other co-authors include Deborah Hughes Hallett, William G. McCallum, and Andrew M. Gleason. == References ==
Wikipedia:Azriel Lévy#0
Azriel Lévy (Hebrew: עזריאל לוי; born c. 1934) is an Israeli mathematician, logician, and a professor emeritus at the Hebrew University of Jerusalem. == Biography == Lévy obtained his Ph.D. at the Hebrew University of Jerusalem in 1958, under the supervision of Abraham Fraenkel and Abraham Robinson. Later, using Cohen's method of forcing, he proved several results on the consistency of various statements contradicting the axiom of choice. For example, with J. D. Halpern he proved that the Boolean prime ideal theorem does not imply the axiom of choice. He discovered the models L[x] used in inner model theory. He also introduced the notions of Lévy hierarchy of the formulas of set theory, Levy collapse and the Feferman–Levy model. His students include Dov Gabbay, Moti Gitik, and Menachem Magidor. == Selected works == Lévy, Azriel (1958). "The independence of various definitions of finiteness" (PDF). Fundamenta Mathematicae. 46: 1–13. doi:10.4064/fm-46-1-1-13. A. Lévy: A hierarchy of formulas in set theory, Memoirs of the American Mathematical Society, 57, 1965. J. D. Halpern, A. Lévy: The Boolean prime ideal theorem does not imply the axiom of choice, Axiomatic Set Theory, Symposia Pure Math., 1971, 83–134. A. Lévy: Basic Set Theory, Springer-Verlag, Berlin, 1979, 391 pages; reprinted by Dover Publications, 2003. == Notes == == References == Kanamori, Akihiro (2006). "Levy and set theory". Annals of Pure and Applied Logic. 140 (1–3): 233–252. doi:10.1016/j.apal.2005.09.009. Zbl 1089.03004. == External links == Azriel Lévy at the Mathematics Genealogy Project
Wikipedia:A∞-operad#0
In mathematics, an operad is a structure that consists of abstract operations, each one having a fixed finite number of inputs (arguments) and one output, as well as a specification of how to compose these operations. Given an operad O {\displaystyle O} , one defines an algebra over O {\displaystyle O} to be a set together with concrete operations on this set which behave just like the abstract operations of O {\displaystyle O} . For instance, there is a Lie operad L {\displaystyle L} such that the algebras over L {\displaystyle L} are precisely the Lie algebras; in a sense L {\displaystyle L} abstractly encodes the operations that are common to all Lie algebras. An operad is to its algebras as a group is to its group representations. == History == Operads originate in algebraic topology; they were introduced to characterize iterated loop spaces by J. Michael Boardman and Rainer M. Vogt in 1968 and by J. Peter May in 1972. Martin Markl, Steve Shnider, and Jim Stasheff write in their book on operads: "The name operad and the formal definition appear first in the early 1970's in J. Peter May's "The Geometry of Iterated Loop Spaces", but a year or more earlier, Boardman and Vogt described the same concept under the name categories of operators in standard form, inspired by PROPs and PACTs of Adams and Mac Lane. In fact, there is an abundance of prehistory. Weibel [Wei] points out that the concept first arose a century ago in A.N. Whitehead's "A Treatise on Universal Algebra", published in 1898." The word "operad" was created by May as a portmanteau of "operations" and "monad" (and also because his mother was an opera singer). Interest in operads was considerably renewed in the early 90s when, based on early insights of Maxim Kontsevich, Victor Ginzburg and Mikhail Kapranov discovered that some duality phenomena in rational homotopy theory could be explained using Koszul duality of operads. Operads have since found many applications, such as in deformation quantization of Poisson manifolds, the Deligne conjecture, or graph homology in the work of Maxim Kontsevich and Thomas Willwacher. == Intuition == Suppose X {\displaystyle X} is a set and for n ∈ N {\displaystyle n\in \mathbb {N} } we define P ( n ) := { f : X n → X } {\displaystyle P(n):=\{f\colon X^{n}\to X\}} , the set of all functions from the cartesian product of n {\displaystyle n} copies of X {\displaystyle X} to X {\displaystyle X} . We can compose these functions: given f ∈ P ( n ) {\displaystyle f\in P(n)} , f 1 ∈ P ( k 1 ) , … , f n ∈ P ( k n ) {\displaystyle f_{1}\in P(k_{1}),\ldots ,f_{n}\in P(k_{n})} , the function f ∘ ( f 1 , … , f n ) ∈ P ( k 1 + ⋯ + k n ) {\displaystyle f\circ (f_{1},\ldots ,f_{n})\in P(k_{1}+\cdots +k_{n})} is defined as follows: given k 1 + ⋯ + k n {\displaystyle k_{1}+\cdots +k_{n}} arguments from X {\displaystyle X} , we divide them into n {\displaystyle n} blocks, the first one having k 1 {\displaystyle k_{1}} arguments, the second one k 2 {\displaystyle k_{2}} arguments, etc., and then apply f 1 {\displaystyle f_{1}} to the first block, f 2 {\displaystyle f_{2}} to the second block, etc. We then apply f {\displaystyle f} to the list of n {\displaystyle n} values obtained from X {\displaystyle X} in such a way. We can also permute arguments, i.e. 
we have a right action ∗ {\displaystyle *} of the symmetric group S n {\displaystyle S_{n}} on P ( n ) {\displaystyle P(n)} , defined by ( f ∗ s ) ( x 1 , … , x n ) = f ( x s − 1 ( 1 ) , … , x s − 1 ( n ) ) {\displaystyle (f*s)(x_{1},\ldots ,x_{n})=f(x_{s^{-1}(1)},\ldots ,x_{s^{-1}(n)})} for f ∈ P ( n ) {\displaystyle f\in P(n)} , s ∈ S n {\displaystyle s\in S_{n}} and x 1 , … , x n ∈ X {\displaystyle x_{1},\ldots ,x_{n}\in X} . The definition of a symmetric operad given below captures the essential properties of these two operations ∘ {\displaystyle \circ } and ∗ {\displaystyle *} . == Definition == === Non-symmetric operad === A non-symmetric operad (sometimes called an operad without permutations, or a non- Σ {\displaystyle \Sigma } or plain operad) consists of the following: a sequence ( P ( n ) ) n ∈ N {\displaystyle (P(n))_{n\in \mathbb {N} }} of sets, whose elements are called n {\displaystyle n} -ary operations, an element 1 {\displaystyle 1} in P ( 1 ) {\displaystyle P(1)} called the identity, for all positive integers n {\displaystyle n} , k 1 , … , k n {\textstyle k_{1},\ldots ,k_{n}} , a composition function ∘ : P ( n ) × P ( k 1 ) × ⋯ × P ( k n ) → P ( k 1 + ⋯ + k n ) ( θ , θ 1 , … , θ n ) ↦ θ ∘ ( θ 1 , … , θ n ) , {\displaystyle {\begin{aligned}\circ :P(n)\times P(k_{1})\times \cdots \times P(k_{n})&\to P(k_{1}+\cdots +k_{n})\\(\theta ,\theta _{1},\ldots ,\theta _{n})&\mapsto \theta \circ (\theta _{1},\ldots ,\theta _{n}),\end{aligned}}} satisfying the following coherence axioms: identity: θ ∘ ( 1 , … , 1 ) = θ = 1 ∘ θ {\displaystyle \theta \circ (1,\ldots ,1)=\theta =1\circ \theta } associativity: θ ∘ ( θ 1 ∘ ( θ 1 , 1 , … , θ 1 , k 1 ) , … , θ n ∘ ( θ n , 1 , … , θ n , k n ) ) = ( θ ∘ ( θ 1 , … , θ n ) ) ∘ ( θ 1 , 1 , … , θ 1 , k 1 , … , θ n , 1 , … , θ n , k n ) {\displaystyle {\begin{aligned}&\theta \circ {\Big (}\theta _{1}\circ (\theta _{1,1},\ldots ,\theta _{1,k_{1}}),\ldots ,\theta _{n}\circ (\theta _{n,1},\ldots ,\theta _{n,k_{n}}){\Big )}\\={}&{\Big (}\theta \circ (\theta _{1},\ldots ,\theta _{n}){\Big )}\circ (\theta _{1,1},\ldots ,\theta _{1,k_{1}},\ldots ,\theta _{n,1},\ldots ,\theta _{n,k_{n}})\end{aligned}}} === Symmetric operad === A symmetric operad (often just called operad) is a non-symmetric operad P {\displaystyle P} as above, together with a right action of the symmetric group S n {\displaystyle S_{n}} on P ( n ) {\displaystyle P(n)} for n ∈ N {\displaystyle n\in \mathbb {N} } , denoted by ∗ {\displaystyle *} and satisfying equivariance: given a permutation t ∈ S n {\displaystyle t\in S_{n}} , ( θ ∗ t ) ∘ ( θ 1 , … , θ n ) = ( θ ∘ ( θ t − 1 ( 1 ) , … , θ t − 1 ( n ) ) ) ∗ t ′ {\displaystyle (\theta *t)\circ (\theta _{1},\ldots ,\theta _{n})=(\theta \circ (\theta _{t^{-1}(1)},\ldots ,\theta _{t^{-1}(n)}))*t'} (where t ′ {\displaystyle t'} on the right hand side refers to the element of S k 1 + ⋯ + k n {\displaystyle S_{k_{1}+\dots +k_{n}}} that acts on the set { 1 , 2 , … , k 1 + ⋯ + k n } {\displaystyle \{1,2,\dots ,k_{1}+\dots +k_{n}\}} by breaking it into n {\displaystyle n} blocks, the first of size k 1 {\displaystyle k_{1}} , the second of size k 2 {\displaystyle k_{2}} , through the n {\displaystyle n} th block of size k n {\displaystyle k_{n}} , and then permutes these n {\displaystyle n} blocks by t {\displaystyle t} , keeping each block intact) and given n {\displaystyle n} permutations s i ∈ S k i {\displaystyle s_{i}\in S_{k_{i}}} , θ ∘ ( θ 1 ∗ s 1 , … , θ n ∗ s n ) = ( θ ∘ ( θ 1 , … , θ n ) ) ∗ ( s 1 , … , s n ) {\displaystyle \theta \circ 
(\theta _{1}*s_{1},\ldots ,\theta _{n}*s_{n})=(\theta \circ (\theta _{1},\ldots ,\theta _{n}))*(s_{1},\ldots ,s_{n})} (where ( s 1 , … , s n ) {\displaystyle (s_{1},\ldots ,s_{n})} denotes the element of S k 1 + ⋯ + k n {\displaystyle S_{k_{1}+\dots +k_{n}}} that permutes the first of these blocks by s 1 {\displaystyle s_{1}} , the second by s 2 {\displaystyle s_{2}} , etc., and keeps their overall order intact). The permutation actions in this definition are vital to most applications, including the original application to loop spaces. === Morphisms === A morphism of operads f : P → Q {\displaystyle f:P\to Q} consists of a sequence ( f n : P ( n ) → Q ( n ) ) n ∈ N {\displaystyle (f_{n}:P(n)\to Q(n))_{n\in \mathbb {N} }} that: preserves the identity: f ( 1 ) = 1 {\displaystyle f(1)=1} preserves composition: for every n-ary operation θ {\displaystyle \theta } and operations θ 1 , … , θ n {\displaystyle \theta _{1},\ldots ,\theta _{n}} , f ( θ ∘ ( θ 1 , … , θ n ) ) = f ( θ ) ∘ ( f ( θ 1 ) , … , f ( θ n ) ) {\displaystyle f(\theta \circ (\theta _{1},\ldots ,\theta _{n}))=f(\theta )\circ (f(\theta _{1}),\ldots ,f(\theta _{n}))} preserves the permutation actions: f ( x ∗ s ) = f ( x ) ∗ s {\displaystyle f(x*s)=f(x)*s} . Operads therefore form a category denoted by O p e r {\displaystyle {\mathsf {Oper}}} . === In other categories === So far operads have only been considered in the category of sets. More generally, it is possible to define operads in any symmetric monoidal category C . In that case, each P ( n ) {\displaystyle P(n)} is an object of C, the composition ∘ {\displaystyle \circ } is a morphism P ( n ) ⊗ P ( k 1 ) ⊗ ⋯ ⊗ P ( k n ) → P ( k 1 + ⋯ + k n ) {\displaystyle P(n)\otimes P(k_{1})\otimes \cdots \otimes P(k_{n})\to P(k_{1}+\cdots +k_{n})} in C (where ⊗ {\displaystyle \otimes } denotes the tensor product of the monoidal category), and the actions of the symmetric group elements are given by isomorphisms in C. A common example is the category of topological spaces and continuous maps, with the monoidal product given by the cartesian product. In this case, an operad is given by a sequence of spaces (instead of sets) { P ( n ) } n ≥ 0 {\displaystyle \{P(n)\}_{n\geq 0}} . The structure maps of the operad (the composition and the actions of the symmetric groups) are then assumed to be continuous. The result is called a topological operad. Similarly, in the definition of a morphism of operads, it would be necessary to assume that the maps involved are continuous. Other common settings to define operads include, for example, modules over a commutative ring, chain complexes, groupoids (or even the category of categories itself), coalgebras, etc. === Algebraist definition === Given a commutative ring R we consider the category R - M o d {\displaystyle R{\text{-}}{\mathsf {Mod}}} of modules over R. An operad over R can be defined as a monoid object ( T , γ , η ) {\displaystyle (T,\gamma ,\eta )} in the monoidal category of endofunctors on R - M o d {\displaystyle R{\text{-}}{\mathsf {Mod}}} (it is a monad) satisfying some finiteness condition. For example, a monoid object in the category of "polynomial endofunctors" on R - M o d {\displaystyle R{\text{-}}{\mathsf {Mod}}} is an operad. Similarly, a symmetric operad can be defined as a monoid object in the category of S {\displaystyle \mathbb {S} } -objects, where S {\displaystyle \mathbb {S} } means a symmetric group. A monoid object in the category of combinatorial species is an operad in finite sets. 
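Returning to the endomorphism operad of the Intuition section, the composition law can be made concrete with a short sketch; representing an n-ary operation as a struct over the set X = double is an assumption made here purely for illustration.

#include <functional>
#include <iostream>
#include <vector>

// An element of P(n): an n-ary operation on doubles together with its arity.
struct Op {
    int arity;
    std::function<double(const std::vector<double>&)> f;
};

// theta o (theta_1, ..., theta_n): split the k_1 + ... + k_n arguments into
// consecutive blocks, apply each theta_i to its block, then apply theta.
Op compose(const Op& theta, const std::vector<Op>& parts) {
    int total = 0;
    for (const Op& p : parts) total += p.arity;
    return Op{total, [theta, parts](const std::vector<double>& xs) {
        std::vector<double> intermediate;
        int pos = 0;
        for (const Op& p : parts) {
            std::vector<double> block(xs.begin() + pos, xs.begin() + pos + p.arity);
            intermediate.push_back(p.f(block));
            pos += p.arity;
        }
        return theta.f(intermediate);
    }};
}

int main() {
    Op add{2, [](const std::vector<double>& v) { return v[0] + v[1]; }};
    Op id {1, [](const std::vector<double>& v) { return v[0]; }};
    Op left = compose(add, {add, id});            // the 3-ary operation (a, b, c) -> (a + b) + c
    std::cout << left.f({1.0, 2.0, 3.0}) << '\n'; // prints 6
}

Composing add with (add, id) is exactly the construction θ ∘ (θ, 1) used in the discussion of the associativity axiom below.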
An operad in the above sense is sometimes thought of as a generalized ring. For example, Nikolai Durov defines his generalized rings as monoid objects in the monoidal category of endofunctors on Set {\displaystyle {\textbf {Set}}} that commute with filtered colimits. This is a generalization of a ring since each ordinary ring R defines a monad Σ R : Set → Set {\displaystyle \Sigma _{R}:{\textbf {Set}}\to {\textbf {Set}}} that sends a set X to the underlying set of the free R-module R ( X ) {\displaystyle R^{(X)}} generated by X. == Understanding the axioms == === Associativity axiom === "Associativity" means that composition of operations is associative (the function ∘ {\displaystyle \circ } is associative), analogous to the axiom in category theory that f ∘ ( g ∘ h ) = ( f ∘ g ) ∘ h {\displaystyle f\circ (g\circ h)=(f\circ g)\circ h} ; it does not mean that the operations themselves are associative as operations. Compare with the associative operad, below. Associativity in operad theory means that expressions can be written involving operations without ambiguity from the omitted compositions, just as associativity for operations allows products to be written without ambiguity from the omitted parentheses. For instance, if θ {\displaystyle \theta } is a binary operation, which is written as θ ( a , b ) {\displaystyle \theta (a,b)} or ( a b ) {\displaystyle (ab)} . So that θ {\displaystyle \theta } may or may not be associative. Then what is commonly written ( ( a b ) c ) {\displaystyle ((ab)c)} is unambiguously written operadically as θ ∘ ( θ , 1 ) {\displaystyle \theta \circ (\theta ,1)} . This sends ( a , b , c ) {\displaystyle (a,b,c)} to ( a b , c ) {\displaystyle (ab,c)} (apply θ {\displaystyle \theta } on the first two, and the identity on the third), and then the θ {\displaystyle \theta } on the left "multiplies" a b {\displaystyle ab} by c {\displaystyle c} . This is clearer when depicted as a tree: which yields a 3-ary operation: However, the expression ( ( ( a b ) c ) d ) {\displaystyle (((ab)c)d)} is a priori ambiguous: it could mean θ ∘ ( ( θ , 1 ) ∘ ( ( θ , 1 ) , 1 ) ) {\displaystyle \theta \circ ((\theta ,1)\circ ((\theta ,1),1))} , if the inner compositions are performed first, or it could mean ( θ ∘ ( θ , 1 ) ) ∘ ( ( θ , 1 ) , 1 ) {\displaystyle (\theta \circ (\theta ,1))\circ ((\theta ,1),1)} , if the outer compositions are performed first (operations are read from right to left). Writing x = θ , y = ( θ , 1 ) , z = ( ( θ , 1 ) , 1 ) {\displaystyle x=\theta ,y=(\theta ,1),z=((\theta ,1),1)} , this is x ∘ ( y ∘ z ) {\displaystyle x\circ (y\circ z)} versus ( x ∘ y ) ∘ z {\displaystyle (x\circ y)\circ z} . That is, the tree is missing "vertical parentheses": If the top two rows of operations are composed first (puts an upward parenthesis at the ( a b ) c d {\displaystyle (ab)c\ \ d} line; does the inner composition first), the following results: which then evaluates unambiguously to yield a 4-ary operation. 
As an annotated expression: θ ( a b ) c ⋅ d ∘ ( ( θ a b ⋅ c , 1 d ) ∘ ( ( θ a ⋅ b , 1 c ) , 1 d ) ) {\displaystyle \theta _{(ab)c\cdot d}\circ ((\theta _{ab\cdot c},1_{d})\circ ((\theta _{a\cdot b},1_{c}),1_{d}))} If the bottom two rows of operations are composed first (puts a downward parenthesis at the a b c d {\displaystyle ab\quad c\ \ d} line; does the outer composition first), following results: which then evaluates unambiguously to yield a 4-ary operation: The operad axiom of associativity is that these yield the same result, and thus that the expression ( ( ( a b ) c ) d ) {\displaystyle (((ab)c)d)} is unambiguous. === Identity axiom === The identity axiom (for a binary operation) can be visualized in a tree as: meaning that the three operations obtained are equal: pre- or post- composing with the identity makes no difference. As for categories, 1 ∘ 1 = 1 {\displaystyle 1\circ 1=1} is a corollary of the identity axiom. == Examples == === Endomorphism operad in sets and operad algebras === The most basic operads are the ones given in the section on "Intuition", above. For any set X {\displaystyle X} , we obtain the endomorphism operad E n d X {\displaystyle {\mathcal {End}}_{X}} consisting of all functions X n → X {\displaystyle X^{n}\to X} . These operads are important because they serve to define operad algebras. If O {\displaystyle {\mathcal {O}}} is an operad, an operad algebra over O {\displaystyle {\mathcal {O}}} is given by a set X {\displaystyle X} and an operad morphism O → E n d X {\displaystyle {\mathcal {O}}\to {\mathcal {End}}_{X}} . Intuitively, such a morphism turns each "abstract" operation of O ( n ) {\displaystyle {\mathcal {O}}(n)} into a "concrete" n {\displaystyle n} -ary operation on the set X {\displaystyle X} . An operad algebra over O {\displaystyle {\mathcal {O}}} thus consists of a set X {\displaystyle X} together with concrete operations on X {\displaystyle X} that follow the rules abstractely specified by the operad O {\displaystyle {\mathcal {O}}} . === Endomorphism operad in vector spaces and operad algebras === If k is a field, we can consider the category of finite-dimensional vector spaces over k; this becomes a monoidal category using the ordinary tensor product over k. We can then define endomorphism operads in this category, as follows. Let V be a finite-dimensional vector space The endomorphism operad E n d V = { E n d V ( n ) } {\displaystyle {\mathcal {End}}_{V}=\{{\mathcal {End}}_{V}(n)\}} of V consists of E n d V ( n ) {\displaystyle {\mathcal {End}}_{V}(n)} = the space of linear maps V ⊗ n → V {\displaystyle V^{\otimes n}\to V} , (composition) given f ∈ E n d V ( n ) {\displaystyle f\in {\mathcal {End}}_{V}(n)} , g 1 ∈ E n d V ( k 1 ) {\displaystyle g_{1}\in {\mathcal {End}}_{V}(k_{1})} , ..., g n ∈ E n d V ( k n ) {\displaystyle g_{n}\in {\mathcal {End}}_{V}(k_{n})} , their composition is given by the map V ⊗ k 1 ⊗ ⋯ ⊗ V ⊗ k n ⟶ g 1 ⊗ ⋯ ⊗ g n V ⊗ n → f V {\displaystyle V^{\otimes k_{1}}\otimes \cdots \otimes V^{\otimes k_{n}}\ {\overset {g_{1}\otimes \cdots \otimes g_{n}}{\longrightarrow }}\ V^{\otimes n}\ {\overset {f}{\to }}\ V} , (identity) The identity element in E n d V ( 1 ) {\displaystyle {\mathcal {End}}_{V}(1)} is the identity map id V {\displaystyle \operatorname {id} _{V}} , (symmetric group action) S n {\displaystyle S_{n}} operates on E n d V ( n ) {\displaystyle {\mathcal {End}}_{V}(n)} by permuting the components of the tensors in V ⊗ n {\displaystyle V^{\otimes n}} . 
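The composition just described can be written out with matrices: if an n-ary multilinear operation is stored as a d × d^n matrix acting on Kronecker (tensor) products of coordinate vectors, then f ∘ (g1, …, gn) is f times the Kronecker product g1 ⊗ ⋯ ⊗ gn, matching the displayed formula. A short numerical sketch (illustrative only, using NumPy):

import numpy as np
from functools import reduce

d = 2                                      # dim V in this toy example

def compose(f, gs):
    # f o (g_1, ..., g_n) = f . (g_1 (x) ... (x) g_n); an n-ary operation
    # V^{(x)n} -> V is stored as a matrix of shape (d, d**n)
    return f @ reduce(np.kron, gs)

identity = np.eye(d)                       # the identity element of End_V(1)

rng = np.random.default_rng(0)
f = rng.standard_normal((d, d**2))         # a binary operation
g1 = rng.standard_normal((d, d))           # a unary operation
g2 = rng.standard_normal((d, d**3))        # a ternary operation
h = compose(f, [g1, g2])                   # a 4-ary operation, shape (d, d**4)

# identity axiom: composing with the identity changes nothing
assert np.allclose(compose(identity, [f]), f)
assert np.allclose(compose(f, [identity, identity]), f)

# evaluating h on an elementary tensor agrees with "apply the g_i, then f"
vs = [rng.standard_normal(d) for _ in range(4)]
lhs = h @ reduce(np.kron, vs)
rhs = f @ np.kron(g1 @ vs[0], g2 @ reduce(np.kron, vs[1:]))
assert np.allclose(lhs, rhs)

Storing operations as matrices on tensor powers is only one possible encoding, chosen here because the Kronecker product makes the composition formula literal.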
If O {\displaystyle {\mathcal {O}}} is an operad, a k-linear operad algebra over O {\displaystyle {\mathcal {O}}} is given by a finite-dimensional vector space V over k and an operad morphism O → E n d V {\displaystyle {\mathcal {O}}\to {\mathcal {End}}_{V}} ; this amounts to specifying concrete multilinear operations on V that behave like the operations of O {\displaystyle {\mathcal {O}}} . (Notice the analogy between operads&operad algebras and rings&modules: a module over a ring R is given by an abelian group M together with a ring homomorphism R → End ⁡ ( M ) {\displaystyle R\to \operatorname {End} (M)} .) Depending on applications, variations of the above are possible: for example, in algebraic topology, instead of vector spaces and tensor products between them, one uses (reasonable) topological spaces and cartesian products between them. === "Little something" operads === The little 2-disks operad is a topological operad where P ( n ) {\displaystyle P(n)} consists of ordered lists of n disjoint disks inside the unit disk of R 2 {\displaystyle \mathbb {R} ^{2}} centered at the origin. The symmetric group acts on such configurations by permuting the list of little disks. The operadic composition for little disks is illustrated in the accompanying figure to the right, where an element θ ∈ P ( 3 ) {\displaystyle \theta \in P(3)} is composed with an element ( θ 1 , θ 2 , θ 3 ) ∈ P ( 2 ) × P ( 3 ) × P ( 4 ) {\displaystyle (\theta _{1},\theta _{2},\theta _{3})\in P(2)\times P(3)\times P(4)} to yield the element θ ∘ ( θ 1 , θ 2 , θ 3 ) ∈ P ( 9 ) {\displaystyle \theta \circ (\theta _{1},\theta _{2},\theta _{3})\in P(9)} obtained by shrinking the configuration of θ i {\displaystyle \theta _{i}} and inserting it into the i-th disk of θ {\displaystyle \theta } , for i = 1 , 2 , 3 {\displaystyle i=1,2,3} . Analogously, one can define the little n-disks operad by considering configurations of disjoint n-balls inside the unit ball of R n {\displaystyle \mathbb {R} ^{n}} . Originally the little n-cubes operad or the little intervals operad (initially called little n-cubes PROPs) was defined by Michael Boardman and Rainer Vogt in a similar way, in terms of configurations of disjoint axis-aligned n-dimensional hypercubes (n-dimensional intervals) inside the unit hypercube. Later it was generalized by May to the little convex bodies operad, and "little disks" is a case of "folklore" derived from the "little convex bodies". === Rooted trees === In graph theory, rooted trees form a natural operad. Here, P ( n ) {\displaystyle P(n)} is the set of all rooted trees with n leaves, where the leaves are numbered from 1 to n. The group S n {\displaystyle S_{n}} operates on this set by permuting the leaf labels. Operadic composition T ∘ ( S 1 , … , S n ) {\displaystyle T\circ (S_{1},\ldots ,S_{n})} is given by replacing the i-th leaf of T {\displaystyle T} by the root of the i-th tree S i {\displaystyle S_{i}} , for i = 1 , … , n {\displaystyle i=1,\ldots ,n} , thus attaching the n trees to T {\displaystyle T} and forming a larger tree, whose root is taken to be the same as the root of T {\displaystyle T} and whose leaves are numbered in order. === Swiss-cheese operad === The Swiss-cheese operad is a two-colored topological operad defined in terms of configurations of disjoint n-dimensional disks inside a unit n-semidisk and n-dimensional semidisks, centered at the base of the unit semidisk and sitting inside of it. 
The operadic composition comes from gluing configurations of "little" disks inside the unit disk into the "little" disks in another unit semidisk and configurations of "little" disks and semidisks inside the unit semidisk into the other unit semidisk. The Swiss-cheese operad was defined by Alexander A. Voronov. It was used by Maxim Kontsevich to formulate a Swiss-cheese version of Deligne's conjecture on Hochschild cohomology. Kontsevich's conjecture was proven partly by Po Hu, Igor Kriz, and Alexander A. Voronov and then fully by Justin Thomas. === Associative operad === Another class of examples of operads are those capturing the structures of algebraic structures, such as associative algebras, commutative algebras and Lie algebras. Each of these can be exhibited as a finitely presented operad, in each of these three generated by binary operations. For example, the associative operad is a symmetric operad generated by a binary operation ψ {\displaystyle \psi } , subject only to the condition that ψ ∘ ( ψ , 1 ) = ψ ∘ ( 1 , ψ ) . {\displaystyle \psi \circ (\psi ,1)=\psi \circ (1,\psi ).} This condition corresponds to associativity of the binary operation ψ {\displaystyle \psi } ; writing ψ ( a , b ) {\displaystyle \psi (a,b)} multiplicatively, the above condition is ( a b ) c = a ( b c ) {\displaystyle (ab)c=a(bc)} . This associativity of the operation should not be confused with associativity of composition which holds in any operad; see the axiom of associativity, above. In the associative operad, each P ( n ) {\displaystyle P(n)} is given by the symmetric group S n {\displaystyle S_{n}} , on which S n {\displaystyle S_{n}} acts by right multiplication. The composite σ ∘ ( τ 1 , … , τ n ) {\displaystyle \sigma \circ (\tau _{1},\dots ,\tau _{n})} permutes its inputs in blocks according to σ {\displaystyle \sigma } , and within blocks according to the appropriate τ i {\displaystyle \tau _{i}} . The algebras over the associative operad are precisely the semigroups: sets together with a single binary associative operation. The k-linear algebras over the associative operad are precisely the associative k-algebras. === Terminal symmetric operad === The terminal symmetric operad is the operad which has a single n-ary operation for each n, with each S n {\displaystyle S_{n}} acting trivially. The algebras over this operad are the commutative semigroups; the k-linear algebras are the commutative associative k-algebras. === Operads from the braid groups === Similarly, there is a non- Σ {\displaystyle \Sigma } operad for which each P ( n ) {\displaystyle P(n)} is given by the Artin braid group B n {\displaystyle B_{n}} . Moreover, this non- Σ {\displaystyle \Sigma } operad has the structure of a braided operad, which generalizes the notion of an operad from symmetric to braid groups. === Linear algebra === In linear algebra, real vector spaces can be considered to be algebras over the operad R ∞ {\displaystyle \mathbb {R} ^{\infty }} of all linear combinations . 
This operad is defined by R ∞ ( n ) = R n {\displaystyle \mathbb {R} ^{\infty }(n)=\mathbb {R} ^{n}} for n ∈ N {\displaystyle n\in \mathbb {N} } , with the obvious action of S n {\displaystyle S_{n}} permuting components, and composition x → ∘ ( y 1 → , … , y n → ) {\displaystyle {\vec {x}}\circ ({\vec {y_{1}}},\ldots ,{\vec {y_{n}}})} given by the concatenation of the vectors x ( 1 ) y 1 → , … , x ( n ) y n → {\displaystyle x^{(1)}{\vec {y_{1}}},\ldots ,x^{(n)}{\vec {y_{n}}}} , where x → = ( x ( 1 ) , … , x ( n ) ) ∈ R n {\displaystyle {\vec {x}}=(x^{(1)},\ldots ,x^{(n)})\in \mathbb {R} ^{n}} . The vector x → = ( 2 , 3 , − 5 , 0 , … ) {\displaystyle {\vec {x}}=(2,3,-5,0,\dots )} for instance represents the operation of forming a linear combination with coefficients 2,3,-5,0,... This point of view formalizes the notion that linear combinations are the most general sort of operation on a vector space – saying that a vector space is an algebra over the operad of linear combinations is precisely the statement that all possible algebraic operations in a vector space are linear combinations. The basic operations of vector addition and scalar multiplication are a generating set for the operad of all linear combinations, while the linear combinations operad canonically encodes all possible operations on a vector space. Similarly, affine combinations, conical combinations, and convex combinations can be considered to correspond to the sub-operads where the terms of the vector x → {\displaystyle {\vec {x}}} sum to 1, the terms are all non-negative, or both, respectively. Graphically, these are the infinite affine hyperplane, the infinite hyper-octant, and the infinite simplex. This formalizes what is meant by R n {\displaystyle \mathbb {R} ^{n}} or the standard simplex being model spaces, and such observations as that every bounded convex polytope is the image of a simplex. Here suboperads correspond to more restricted operations and thus more general theories. === Commutative-ring operad and Lie operad === The commutative-ring operad is an operad whose algebras are the commutative rings. It is defined by P ( n ) = Z [ x 1 , … , x n ] {\displaystyle P(n)=\mathbb {Z} [x_{1},\ldots ,x_{n}]} , with the obvious action of S n {\displaystyle S_{n}} and operadic composition given by substituting polynomials (with renumbered variables) for variables. A similar operad can be defined whose algebras are the associative, commutative algebras over some fixed base field. The Koszul-dual of this operad is the Lie operad (whose algebras are the Lie algebras), and vice versa. == Free operads == Typical algebraic constructions (e.g., free algebra construction) can be extended to operads. Let S e t S n {\displaystyle \mathbf {Set} ^{S_{n}}} denote the category whose objects are sets on which the group S n {\displaystyle S_{n}} acts. Then there is a forgetful functor O p e r → ∏ n ∈ N S e t S n {\displaystyle {\mathsf {Oper}}\to \prod _{n\in \mathbb {N} }\mathbf {Set} ^{S_{n}}} , which simply forgets the operadic composition. It is possible to construct a left adjoint Γ : ∏ n ∈ N S e t S n → O p e r {\displaystyle \Gamma :\prod _{n\in \mathbb {N} }\mathbf {Set} ^{S_{n}}\to {\mathsf {Oper}}} to this forgetful functor (this is the usual definition of free functor). Given a collection of operations E, Γ ( E ) {\displaystyle \Gamma (E)} is the free operad on E. Like a group or a ring, the free construction allows one to express an operad in terms of generators and relations.
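As an illustration, in the non-symmetric setting (ignoring the symmetric-group actions) the free operad on a single binary operation with no relations can be described explicitly: its n-ary operations are the planar binary trees with n leaves, composition is grafting of trees, and the number of such trees is a Catalan number. A small Python sketch, with all names purely illustrative:

from functools import lru_cache

@lru_cache(maxsize=None)
def trees(n):
    """All planar binary trees with n leaves; a leaf is written '*' and an
    internal vertex is a pair of subtrees."""
    if n == 1:
        return ("*",)
    return tuple((left, right)
                 for k in range(1, n)
                 for left in trees(k)
                 for right in trees(n - k))

print([len(trees(n)) for n in range(1, 7)])   # [1, 1, 2, 5, 14, 42]: Catalan numbers

def leaves(t):
    return 1 if t == "*" else leaves(t[0]) + leaves(t[1])

def graft(T, S):
    """Operadic composition in the free operad: substitute the trees
    S_1, ..., S_n for the n leaves of T, read from left to right."""
    S = list(S)
    def go(t):
        return S.pop(0) if t == "*" else (go(t[0]), go(t[1]))
    return go(T)

T = (("*", "*"), "*")                          # the operation (xy)z
G = graft(T, [("*", "*"), "*", "*"])           # substitute a binary generator for the first leaf
assert leaves(G) == 4
print(G)                                       # ((('*', '*'), '*'), '*'), i.e. ((wx)y)z

Quotienting such trees by relations (for example, associativity of the generator) is what the free presentations discussed next make precise.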
By a free representation of an operad O {\displaystyle {\mathcal {O}}} , we mean writing O {\displaystyle {\mathcal {O}}} as a quotient of a free operad F = Γ ( E ) {\displaystyle {\mathcal {F}}=\Gamma (E)} where E describes generators of O {\displaystyle {\mathcal {O}}} and the kernel of the epimorphism F → O {\displaystyle {\mathcal {F}}\to {\mathcal {O}}} describes the relations. A (symmetric) operad O = { O ( n ) } {\displaystyle {\mathcal {O}}=\{{\mathcal {O}}(n)\}} is called quadratic if it has a free presentation such that E = O ( 2 ) {\displaystyle E={\mathcal {O}}(2)} is the generator and the relation is contained in Γ ( E ) ( 3 ) {\displaystyle \Gamma (E)(3)} . == Clones == Clones are the special case of operads that are also closed under identifying arguments together ("reusing" some data). Clones can be equivalently defined as operads that are also a minion (or clonoid). == Operads in homotopy theory == In Stasheff (2004), Stasheff writes: Operads are particularly important and useful in categories with a good notion of "homotopy", where they play a key role in organizing hierarchies of higher homotopies. == See also == PRO (category theory) Algebra over an operad Higher-order operad E∞-operad Pseudoalgebra Multicategory == Notes == === Citations === == References == Tom Leinster (2004). Higher Operads, Higher Categories. Cambridge University Press. arXiv:math/0305049. Bibcode:2004hohc.book.....L. ISBN 978-0-521-53215-0. Martin Markl, Steve Shnider, Jim Stasheff (2002). Operads in Algebra, Topology and Physics. American Mathematical Society. ISBN 978-0-8218-4362-8.{{cite book}}: CS1 maint: multiple names: authors list (link) Markl, Martin (June 2006). "Operads and PROPs". arXiv:math/0601129. Stasheff, Jim (June–July 2004). "What Is...an Operad?" (PDF). Notices of the American Mathematical Society. 51 (6): 630–631. Retrieved 17 January 2008. Loday, Jean-Louis; Vallette, Bruno (2012), Algebraic Operads (PDF), Grundlehren der Mathematischen Wissenschaften, vol. 346, Berlin, New York: Springer-Verlag, ISBN 978-3-642-30361-6 Zinbiel, Guillaume W. (2012), "Encyclopedia of types of algebras 2010", in Bai, Chengming; Guo, Li; Loday, Jean-Louis (eds.), Operads and universal algebra, Nankai Series in Pure, Applied Mathematics and Theoretical Physics, vol. 9, pp. 217–298, arXiv:1101.0267, Bibcode:2011arXiv1101.0267Z, ISBN 9789814365116 Fresse, Benoit (17 May 2017), Homotopy of Operads and Grothendieck-Teichmüller Groups, Mathematical Surveys and Monographs, American Mathematical Society, ISBN 978-1-4704-3480-9, MR 3643404, Zbl 1373.55014 Miguel A. Mendéz (2015). Set Operads in Combinatorics and Computer Science. SpringerBriefs in Mathematics. ISBN 978-3-319-11712-6. Samuele Giraudo (2018). Nonsymmetric Operads in Combinatorics. Springer International Publishing. ISBN 978-3-030-02073-6. == External links == operad at the nLab https://golem.ph.utexas.edu/category/2011/05/an_operadic_introduction_to_en.html
Wikipedia:Pythagorean theorem#0
In mathematics, the Pythagorean theorem or Pythagoras' theorem is a fundamental relation in Euclidean geometry between the three sides of a right triangle. It states that the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares on the other two sides. The theorem can be written as an equation relating the lengths of the sides a, b and the hypotenuse c, sometimes called the Pythagorean equation: a 2 + b 2 = c 2 . {\displaystyle a^{2}+b^{2}=c^{2}.} The theorem is named for the Greek philosopher Pythagoras, born around 570 BC. The theorem has been proved numerous times by many different methods – possibly the most for any mathematical theorem. The proofs are diverse, including both geometric proofs and algebraic proofs, with some dating back thousands of years. When Euclidean space is represented by a Cartesian coordinate system in analytic geometry, Euclidean distance satisfies the Pythagorean relation: the squared distance between two points equals the sum of squares of the difference in each coordinate between the points. The theorem can be generalized in various ways: to higher-dimensional spaces, to spaces that are not Euclidean, to objects that are not right triangles, and to objects that are not triangles at all but n-dimensional solids. == Proofs using constructed squares == === Rearrangement proofs === In one rearrangement proof, two squares are used whose sides have a measure of a + b {\displaystyle a+b} and which contain four right triangles whose sides are a, b and c, with the hypotenuse being c. In the square on the right side, the triangles are placed such that the corners of the square correspond to the corners of the right angle in the triangles, forming a square in the center whose sides are length c. Each outer square has an area of ( a + b ) 2 {\displaystyle (a+b)^{2}} ; the square on the right can also be computed as 2 a b + c 2 {\displaystyle 2ab+c^{2}} , with 2 a b {\displaystyle 2ab} representing the total area of the four triangles. Within the big square on the left side, the four triangles are moved to form two similar rectangles with sides of length a and b. These rectangles in their new position delineate two new squares: one of side length a in the bottom-left corner, and another of side length b in the top-right corner. In this new position, the left-hand square still has an area of ( a + b ) 2 {\displaystyle (a+b)^{2}} , which can also be computed as 2 a b + a 2 + b 2 {\displaystyle 2ab+a^{2}+b^{2}} . Since both squares have the area ( a + b ) 2 {\displaystyle (a+b)^{2}} , it follows that the two other expressions for these areas must also be equal, so that 2 a b + c 2 {\displaystyle 2ab+c^{2}} = 2 a b + a 2 + b 2 {\displaystyle 2ab+a^{2}+b^{2}} . With the area of the four triangles removed from both sides of the equation, what remains is a 2 + b 2 = c 2 . {\displaystyle a^{2}+b^{2}=c^{2}.} In another proof, the rectangles in the second box can also be placed such that both have one corner corresponding to consecutive corners of the square. In this way they also form two boxes, this time in consecutive corners, with areas a 2 {\displaystyle a^{2}} and b 2 {\displaystyle b^{2}} , which again leads to a second square with area 2 a b + a 2 + b 2 {\displaystyle 2ab+a^{2}+b^{2}} .
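The bookkeeping in the rearrangement argument can be checked symbolically; the following sketch (using SymPy, purely illustrative) equates the two computations of the area of the (a + b) × (a + b) square and cancels the four triangles:

import sympy as sp

a, b, c = sp.symbols("a b c", positive=True)

# the same (a + b) x (a + b) square, computed from the two arrangements
right_arrangement = 4 * (a * b / 2) + c**2          # four triangles around the tilted square on c
left_arrangement = 4 * (a * b / 2) + a**2 + b**2    # four triangles plus the two corner squares

assert sp.expand((a + b) ** 2 - left_arrangement) == 0
# equating the two arrangements and cancelling the triangles leaves the Pythagorean equation
assert sp.expand(right_arrangement - left_arrangement) == c**2 - a**2 - b**2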
English mathematician Sir Thomas Heath gives this proof in his commentary on Proposition I.47 in Euclid's Elements, and mentions the proposals of German mathematicians Carl Anton Bretschneider and Hermann Hankel that Pythagoras may have known this proof. Heath himself favors a different proposal for a Pythagorean proof, but acknowledges from the outset of his discussion "that the Greek literature which we possess belonging to the first five centuries after Pythagoras contains no statement specifying this or any other particular great geometric discovery to him." Recent scholarship has cast increasing doubt on any sort of role for Pythagoras as a creator of mathematics, although debate about this continues. === Algebraic proofs === The theorem can be proved algebraically using four copies of the same triangle arranged symmetrically around a square with side c, as shown in the lower part of the diagram. This results in a larger square, with side a + b and area (a + b)2. The four triangles and the square side c must have the same area as the larger square, ( b + a ) 2 = c 2 + 4 a b 2 = c 2 + 2 a b , {\displaystyle (b+a)^{2}=c^{2}+4{\frac {ab}{2}}=c^{2}+2ab,} giving c 2 = ( b + a ) 2 − 2 a b = b 2 + 2 a b + a 2 − 2 a b = a 2 + b 2 . {\displaystyle c^{2}=(b+a)^{2}-2ab=b^{2}+2ab+a^{2}-2ab=a^{2}+b^{2}.} A similar proof uses four copies of a right triangle with sides a, b and c, arranged inside a square with side c as in the top half of the diagram. The triangles are similar with area 1 2 a b {\displaystyle {\tfrac {1}{2}}ab} , while the small square has side b − a and area (b − a)2. The area of the large square is therefore ( b − a ) 2 + 4 a b 2 = ( b − a ) 2 + 2 a b = b 2 − 2 a b + a 2 + 2 a b = a 2 + b 2 . {\displaystyle (b-a)^{2}+4{\frac {ab}{2}}=(b-a)^{2}+2ab=b^{2}-2ab+a^{2}+2ab=a^{2}+b^{2}.} But this is a square with side c and area c2, so c 2 = a 2 + b 2 . {\displaystyle c^{2}=a^{2}+b^{2}.} == Other proofs of the theorem == This theorem may have more known proofs than any other (the law of quadratic reciprocity being another contender for that distinction); the book The Pythagorean Proposition contains 370 proofs. === Proof using similar triangles === This proof is based on the proportionality of the sides of three similar triangles, that is, upon the fact that the ratio of any two corresponding sides of similar triangles is the same regardless of the size of the triangles. Let ABC represent a right triangle, with the right angle located at C, as shown on the figure. Draw the altitude from point C, and call H its intersection with the side AB. Point H divides the length of the hypotenuse c into parts d and e. The new triangle, ACH, is similar to triangle ABC, because they both have a right angle (by definition of the altitude), and they share the angle at A, meaning that the third angle will be the same in both triangles as well, marked as θ in the figure. By a similar reasoning, the triangle CBH is also similar to ABC. The proof of similarity of the triangles requires the triangle postulate: The sum of the angles in a triangle is two right angles, and is equivalent to the parallel postulate. Similarity of the triangles leads to the equality of ratios of corresponding sides: B C A B = B H B C and A C A B = A H A C . {\displaystyle {\frac {BC}{AB}}={\frac {BH}{BC}}{\text{ and }}{\frac {AC}{AB}}={\frac {AH}{AC}}.} The first result equates the cosines of the angles θ, whereas the second result equates their sines. These ratios can be written as B C 2 = A B × B H and A C 2 = A B × A H . 
{\displaystyle BC^{2}=AB\times BH{\text{ and }}AC^{2}=AB\times AH.} Summing these two equalities results in B C 2 + A C 2 = A B × B H + A B × A H = A B ( A H + B H ) = A B 2 , {\displaystyle BC^{2}+AC^{2}=AB\times BH+AB\times AH=AB(AH+BH)=AB^{2},} which, after simplification, demonstrates the Pythagorean theorem: B C 2 + A C 2 = A B 2 . {\displaystyle BC^{2}+AC^{2}=AB^{2}.} The role of this proof in history is the subject of much speculation. The underlying question is why Euclid did not use this proof, but invented another. One conjecture is that the proof by similar triangles involved a theory of proportions, a topic not discussed until later in the Elements, and that the theory of proportions needed further development at that time. === Einstein's proof by dissection without rearrangement === Albert Einstein gave a proof by dissection in which the pieces do not need to be moved. Instead of using a square on the hypotenuse and two squares on the legs, one can use any other shape that includes the hypotenuse, and two similar shapes that each include one of two legs instead of the hypotenuse (see Similar figures on the three sides). In Einstein's proof, the shape that includes the hypotenuse is the right triangle itself. The dissection consists of dropping a perpendicular from the vertex of the right angle of the triangle to the hypotenuse, thus splitting the whole triangle into two parts. Those two parts have the same shape as the original right triangle, and have the legs of the original triangle as their hypotenuses, and the sum of their areas is that of the original triangle. Because the ratio of the area of a right triangle to the square of its hypotenuse is the same for similar triangles, the relationship between the areas of the three triangles holds for the squares of the sides of the large triangle as well. === Euclid's proof === In outline, here is how the proof in Euclid's Elements proceeds. The large square is divided into a left and right rectangle. A triangle is constructed that has half the area of the left rectangle. Then another triangle is constructed that has half the area of the square on the left-most side. These two triangles are shown to be congruent, proving this square has the same area as the left rectangle. This argument is followed by a similar version for the right rectangle and the remaining square. Putting the two rectangles together to reform the square on the hypotenuse, its area is the same as the sum of the area of the other two squares. The details follow. Let A, B, C be the vertices of a right triangle, with a right angle at A. Drop a perpendicular from A to the side opposite the hypotenuse in the square on the hypotenuse. That line divides the square on the hypotenuse into two rectangles, each having the same area as one of the two squares on the legs. For the formal proof, we require four elementary lemmata: If two triangles have two sides of the one equal to two sides of the other, each to each, and the angles included by those sides equal, then the triangles are congruent (side-angle-side). The area of a triangle is half the area of any parallelogram on the same base and having the same altitude. The area of a rectangle is equal to the product of two adjacent sides. The area of a square is equal to the product of two of its sides (follows from 3). Next, each top square is related to a triangle congruent with another triangle related in turn to one of two rectangles making up the lower square. 
The proof is as follows: Let ACB be a right-angled triangle with right angle CAB. On each of the sides BC, AB, and CA, squares are drawn, CBDE, BAGF, and ACIH, in that order. The construction of squares requires the immediately preceding theorems in Euclid, and depends upon the parallel postulate. From A, draw a line parallel to BD and CE. It will perpendicularly intersect BC and DE at K and L, respectively. Join CF and AD, to form the triangles BCF and BDA. Angles CAB and BAG are both right angles; therefore C, A, and G are collinear. Angles CBD and FBA are both right angles; therefore angle ABD equals angle FBC, since both are the sum of a right angle and angle ABC. Since AB is equal to FB, BD is equal to BC and angle ABD equals angle FBC, triangle ABD must be congruent to triangle FBC. Since A-K-L is a straight line, parallel to BD, then rectangle BDLK has twice the area of triangle ABD because they share the base BD and have the same altitude BK, i.e., a line normal to their common base, connecting the parallel lines BD and AL. (lemma 2) Since C is collinear with A and G, and this line is parallel to FB, then square BAGF must be twice in area to triangle FBC. Therefore, rectangle BDLK must have the same area as square BAGF = AB2. By applying steps 3 to 10 to the other side of the figure, it can be similarly shown that rectangle CKLE must have the same area as square ACIH = AC2. Adding these two results, AB2 + AC2 = BD × BK + KL × KC Since BD = KL, BD × BK + KL × KC = BD(BK + KC) = BD × BC Therefore, AB2 + AC2 = BC2, since CBDE is a square. This proof, which appears in Euclid's Elements as that of Proposition 47 in Book 1, demonstrates that the area of the square on the hypotenuse is the sum of the areas of the other two squares. This is quite distinct from the proof by similarity of triangles, which is conjectured to be the proof that Pythagoras used. === Proofs by dissection and rearrangement === Another by rearrangement is given by the middle animation. A large square is formed with area c2, from four identical right triangles with sides a, b and c, fitted around a small central square. Then two rectangles are formed with sides a and b by moving the triangles. Combining the smaller square with these rectangles produces two squares of areas a2 and b2, which must have the same area as the initial large square. The third, rightmost image also gives a proof. The upper two squares are divided as shown by the blue and green shading, into pieces that when rearranged can be made to fit in the lower square on the hypotenuse – or conversely the large square can be divided as shown into pieces that fill the other two. This way of cutting one figure into pieces and rearranging them to get another figure is called dissection. This shows the area of the large square equals that of the two smaller ones. === Proof by area-preserving shearing === As shown in the accompanying animation, area-preserving shear mappings and translations can transform the squares on the sides adjacent to the right-angle onto the square on the hypotenuse, together covering it exactly. Each shear leaves the base and height unchanged, thus leaving the area unchanged too. The translations also leave the area unchanged, as they do not alter the shapes at all. Each square is first sheared into a parallelogram, and then into a rectangle which can be translated onto one section of the square on the hypotenuse. === Other algebraic proofs === A related proof by U.S. President James A. 
Garfield was published before he was elected president; while he was a U.S. Representative. Instead of a square it uses a trapezoid, which can be constructed from the square in the second of the above proofs by bisecting along a diagonal of the inner square, to give the trapezoid as shown in the diagram. The area of the trapezoid can be calculated to be half the area of the square, that is 1 2 ( b + a ) 2 . {\displaystyle {\frac {1}{2}}(b+a)^{2}.} The inner square is similarly halved, and there are only two triangles so the proof proceeds as above except for a factor of 1 2 {\displaystyle {\frac {1}{2}}} , which is removed by multiplying by two to give the result. === Proof using differentials === One can arrive at the Pythagorean theorem by studying how changes in a side produce a change in the hypotenuse and employing calculus. The triangle ABC is a right triangle, as shown in the upper part of the diagram, with BC the hypotenuse. At the same time the triangle lengths are measured as shown, with the hypotenuse of length y, the side AC of length x and the side AB of length a, as seen in the lower diagram part. If x is increased by a small amount dx by extending the side AC slightly to D, then y also increases by dy. These form two sides of a triangle, CDE, which (with E chosen so CE is perpendicular to the hypotenuse) is a right triangle approximately similar to ABC. Therefore, the ratios of their sides must be the same, that is: d y d x = x y . {\displaystyle {\frac {dy}{dx}}={\frac {x}{y}}.} This can be rewritten as y d y = x d x {\displaystyle y\,dy=x\,dx} , which is a differential equation that can be solved by direct integration: ∫ y d y = ∫ x d x , {\displaystyle \int y\,dy=\int x\,dx\,,} giving y 2 = x 2 + C . {\displaystyle y^{2}=x^{2}+C.} The constant can be deduced from x = 0, y = a to give the equation y 2 = x 2 + a 2 . {\displaystyle y^{2}=x^{2}+a^{2}.} This is more of an intuitive proof than a formal one: it can be made more rigorous if proper limits are used in place of dx and dy. == Converse == The converse of the theorem is also true: Given a triangle with sides of length a, b, and c, if a2 + b2 = c2, then the angle between sides a and b is a right angle. For any three positive real numbers a, b, and c such that a2 + b2 = c2, there exists a triangle with sides a, b and c as a consequence of the converse of the triangle inequality. This converse appears in Euclid's Elements (Book I, Proposition 48): "If in a triangle the square on one of the sides equals the sum of the squares on the remaining two sides of the triangle, then the angle contained by the remaining two sides of the triangle is right." It can be proved using the law of cosines or as follows: Let ABC be a triangle with side lengths a, b, and c, with a2 + b2 = c2. Construct a second triangle with sides of length a and b containing a right angle. By the Pythagorean theorem, it follows that the hypotenuse of this triangle has length c = √a2 + b2, the same as the hypotenuse of the first triangle. Since both triangles' sides are the same lengths a, b and c, the triangles are congruent and must have the same angles. Therefore, the angle between the side of lengths a and b in the original triangle is a right angle. The above proof of the converse makes use of the Pythagorean theorem itself. The converse can also be proved without assuming the Pythagorean theorem. A corollary of the Pythagorean theorem's converse is a simple means of determining whether a triangle is right, obtuse, or acute, as follows. 
Let c be chosen to be the longest of the three sides and a + b > c (otherwise there is no triangle according to the triangle inequality). The following statements apply: If a2 + b2 = c2, then the triangle is right. If a2 + b2 > c2, then the triangle is acute. If a2 + b2 < c2, then the triangle is obtuse. Edsger W. Dijkstra has stated this proposition about acute, right, and obtuse triangles in this language: sgn(α + β − γ) = sgn(a2 + b2 − c2), where α is the angle opposite to side a, β is the angle opposite to side b, γ is the angle opposite to side c, and sgn is the sign function. == Consequences and uses of the theorem == === Pythagorean triples === A Pythagorean triple has three positive integers a, b, and c, such that a2 + b2 = c2. In other words, a Pythagorean triple represents the lengths of the sides of a right triangle where all three sides have integer lengths. Such a triple is commonly written (a, b, c). Some well-known examples are (3, 4, 5) and (5, 12, 13). A primitive Pythagorean triple is one in which a, b and c are coprime (the greatest common divisor of a, b and c is 1). The following is a list of primitive Pythagorean triples with values less than 100: (3, 4, 5), (5, 12, 13), (7, 24, 25), (8, 15, 17), (9, 40, 41), (11, 60, 61), (12, 35, 37), (13, 84, 85), (16, 63, 65), (20, 21, 29), (28, 45, 53), (33, 56, 65), (36, 77, 85), (39, 80, 89), (48, 55, 73), (65, 72, 97) There are many formulas for generating Pythagorean triples. Of these, Euclid's formula is the most well-known: given arbitrary positive integers m and n, the formula states that the integers a = m 2 − n 2 , b = 2 m n , c = m 2 + n 2 {\displaystyle a=m^{2}-n^{2},\quad \,b=2mn,\quad \,c=m^{2}+n^{2}} forms a Pythagorean triple. === Inverse Pythagorean theorem === Given a right triangle with sides a , b , c {\displaystyle a,b,c} and altitude d {\displaystyle d} (a line from the right angle and perpendicular to the hypotenuse c {\displaystyle c} ). The Pythagorean theorem has, a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} while the inverse Pythagorean theorem relates the two legs a , b {\displaystyle a,b} to the altitude d {\displaystyle d} , 1 a 2 + 1 b 2 = 1 d 2 {\displaystyle {\frac {1}{a^{2}}}+{\frac {1}{b^{2}}}={\frac {1}{d^{2}}}} The equation can be transformed to, 1 ( x z ) 2 + 1 ( y z ) 2 = 1 ( x y ) 2 {\displaystyle {\frac {1}{(xz)^{2}}}+{\frac {1}{(yz)^{2}}}={\frac {1}{(xy)^{2}}}} where x 2 + y 2 = z 2 {\displaystyle x^{2}+y^{2}=z^{2}} for any non-zero real x , y , z {\displaystyle x,y,z} . If the a , b , d {\displaystyle a,b,d} are to be integers, the smallest solution a > b > d {\displaystyle a>b>d} is then 1 20 2 + 1 15 2 = 1 12 2 {\displaystyle {\frac {1}{20^{2}}}+{\frac {1}{15^{2}}}={\frac {1}{12^{2}}}} using the smallest Pythagorean triple 3 , 4 , 5 {\displaystyle 3,4,5} . The reciprocal Pythagorean theorem is a special case of the optic equation 1 p + 1 q = 1 r {\displaystyle {\frac {1}{p}}+{\frac {1}{q}}={\frac {1}{r}}} where the denominators are squares and also for a heptagonal triangle whose sides p , q , r {\displaystyle p,q,r} are square numbers. === Incommensurable lengths === One of the consequences of the Pythagorean theorem is that line segments whose lengths are incommensurable (so the ratio of which is not a rational number) can be constructed using a straightedge and compass. Pythagoras' theorem enables construction of incommensurable lengths because the hypotenuse of a triangle is related to the sides by the square root operation. 
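For example, starting from a unit segment and repeatedly forming a right triangle whose legs are the previous hypotenuse and another unit segment produces hypotenuses of length √2, √3, √4, and so on; a minimal numerical sketch (not part of the geometric construction described next):

import math

# start with a unit segment; each step forms a right triangle whose legs are
# the previous hypotenuse and a new unit segment
hyp = 1.0
for n in range(2, 8):
    hyp = math.hypot(hyp, 1.0)
    assert math.isclose(hyp, math.sqrt(n))
    print(n, hyp)          # 2 1.414..., 3 1.732..., ..., 7 2.645...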
The figure on the right shows how to construct line segments whose lengths are in the ratio of the square root of any positive integer. Each triangle has a side (labeled "1") that is the chosen unit for measurement. In each right triangle, Pythagoras' theorem establishes the length of the hypotenuse in terms of this unit. If a hypotenuse is related to the unit by the square root of a positive integer that is not a perfect square, it is a realization of a length incommensurable with the unit, such as √2, √3, √5 . For more detail, see Quadratic irrational. Incommensurable lengths conflicted with the Pythagorean school's concept of numbers as only whole numbers. The Pythagorean school dealt with proportions by comparison of integer multiples of a common subunit. According to one legend, Hippasus of Metapontum (ca. 470 B.C.) was drowned at sea for making known the existence of the irrational or incommensurable. A careful discussion of Hippasus's contributions is found in Fritz. === Complex numbers === For any complex number z = x + i y , {\displaystyle z=x+iy,} the absolute value or modulus is given by r = | z | = x 2 + y 2 . {\displaystyle r=|z|={\sqrt {x^{2}+y^{2}}}.} So the three quantities, r, x and y are related by the Pythagorean equation, r 2 = x 2 + y 2 . {\displaystyle r^{2}=x^{2}+y^{2}.} Note that r is defined to be a positive number or zero but x and y can be negative as well as positive. Geometrically r is the distance of the z from zero or the origin O in the complex plane. This can be generalised to find the distance between two points, z1 and z2 say. The required distance is given by | z 1 − z 2 | = ( x 1 − x 2 ) 2 + ( y 1 − y 2 ) 2 , {\displaystyle |z_{1}-z_{2}|={\sqrt {(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}},} so again they are related by a version of the Pythagorean equation, | z 1 − z 2 | 2 = ( x 1 − x 2 ) 2 + ( y 1 − y 2 ) 2 . {\displaystyle |z_{1}-z_{2}|^{2}=(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}.} === Euclidean distance === The distance formula in Cartesian coordinates is derived from the Pythagorean theorem. If (x1, y1) and (x2, y2) are points in the plane, then the distance between them, also called the Euclidean distance, is given by ( x 1 − x 2 ) 2 + ( y 1 − y 2 ) 2 . {\displaystyle {\sqrt {(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}}.} More generally, in Euclidean n-space, the Euclidean distance between two points, A = ( a 1 , a 2 , … , a n ) {\displaystyle A\,=\,(a_{1},a_{2},\dots ,a_{n})} and B = ( b 1 , b 2 , … , b n ) {\displaystyle B\,=\,(b_{1},b_{2},\dots ,b_{n})} , is defined, by generalization of the Pythagorean theorem, as: ( a 1 − b 1 ) 2 + ( a 2 − b 2 ) 2 + ⋯ + ( a n − b n ) 2 = ∑ i = 1 n ( a i − b i ) 2 . {\displaystyle {\sqrt {(a_{1}-b_{1})^{2}+(a_{2}-b_{2})^{2}+\cdots +(a_{n}-b_{n})^{2}}}={\sqrt {\sum _{i=1}^{n}(a_{i}-b_{i})^{2}}}.} If instead of Euclidean distance, the square of this value (the squared Euclidean distance, or SED) is used, the resulting equation avoids square roots and is simply a sum of the SED of the coordinates: ( a 1 − b 1 ) 2 + ( a 2 − b 2 ) 2 + ⋯ + ( a n − b n ) 2 = ∑ i = 1 n ( a i − b i ) 2 . {\displaystyle (a_{1}-b_{1})^{2}+(a_{2}-b_{2})^{2}+\cdots +(a_{n}-b_{n})^{2}=\sum _{i=1}^{n}(a_{i}-b_{i})^{2}.} The squared form is a smooth, convex function of both points, and is widely used in optimization theory and statistics, forming the basis of least squares. 
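Both formulas are easy to exercise numerically; the short sketch below (plain Python, illustrative only) checks that the modulus of a complex number and the planar distance formula agree with the Pythagorean computation:

import math

z1, z2 = complex(3, 4), complex(0, 1)

# |z| = sqrt(x^2 + y^2): the modulus satisfies the Pythagorean relation
assert math.isclose(abs(z1), math.hypot(z1.real, z1.imag)) and math.isclose(abs(z1), 5.0)

def euclidean_distance(p, q):
    """Distance between two points with any number of coordinates."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

# the distance |z1 - z2| in the complex plane is the planar Euclidean distance
assert math.isclose(abs(z1 - z2), euclidean_distance((3, 4), (0, 1)))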
=== Euclidean distance in other coordinate systems === If Cartesian coordinates are not used, for example, if polar coordinates are used in two dimensions or, in more general terms, if curvilinear coordinates are used, the formulas expressing the Euclidean distance are more complicated than the Pythagorean theorem, but can be derived from it. A typical example where the straight-line distance between two points is converted to curvilinear coordinates can be found in the applications of Legendre polynomials in physics. The formulas can be discovered by using Pythagoras' theorem with the equations relating the curvilinear coordinates to Cartesian coordinates. For example, the polar coordinates (r, θ) can be introduced as: x = r cos ⁡ θ , y = r sin ⁡ θ . {\displaystyle x=r\cos \theta ,\ y=r\sin \theta .} Then two points with locations (r1, θ1) and (r2, θ2) are separated by a distance s: s 2 = ( x 1 − x 2 ) 2 + ( y 1 − y 2 ) 2 = ( r 1 cos ⁡ θ 1 − r 2 cos ⁡ θ 2 ) 2 + ( r 1 sin ⁡ θ 1 − r 2 sin ⁡ θ 2 ) 2 . {\displaystyle s^{2}=(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}=(r_{1}\cos \theta _{1}-r_{2}\cos \theta _{2})^{2}+(r_{1}\sin \theta _{1}-r_{2}\sin \theta _{2})^{2}.} Performing the squares and combining terms, the Pythagorean formula for distance in Cartesian coordinates produces the separation in polar coordinates as: s 2 = r 1 2 + r 2 2 − 2 r 1 r 2 ( cos ⁡ θ 1 cos ⁡ θ 2 + sin ⁡ θ 1 sin ⁡ θ 2 ) = r 1 2 + r 2 2 − 2 r 1 r 2 cos ⁡ ( θ 1 − θ 2 ) = r 1 2 + r 2 2 − 2 r 1 r 2 cos ⁡ Δ θ , {\displaystyle {\begin{aligned}s^{2}&=r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}\left(\cos \theta _{1}\cos \theta _{2}+\sin \theta _{1}\sin \theta _{2}\right)\\&=r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}\cos \left(\theta _{1}-\theta _{2}\right)\\&=r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}\cos \Delta \theta ,\end{aligned}}} using the trigonometric product-to-sum formulas. This formula is the law of cosines, sometimes called the generalized Pythagorean theorem. From this result, for the case where the radii to the two locations are at right angles, the enclosed angle Δθ = π/2, and the form corresponding to Pythagoras' theorem is regained: s 2 = r 1 2 + r 2 2 . {\displaystyle s^{2}=r_{1}^{2}+r_{2}^{2}.} The Pythagorean theorem, valid for right triangles, therefore is a special case of the more general law of cosines, valid for arbitrary triangles. === Pythagorean trigonometric identity === In a right triangle with sides a, b and hypotenuse c, trigonometry determines the sine and cosine of the angle θ between side a and the hypotenuse as: sin ⁡ θ = b c , cos ⁡ θ = a c . {\displaystyle \sin \theta ={\frac {b}{c}},\quad \cos \theta ={\frac {a}{c}}.} From that it follows: cos 2 θ + sin 2 θ = a 2 + b 2 c 2 = 1 , {\displaystyle {\cos }^{2}\theta +{\sin }^{2}\theta ={\frac {a^{2}+b^{2}}{c^{2}}}=1,} where the last step applies Pythagoras' theorem. This relation between sine and cosine is sometimes called the fundamental Pythagorean trigonometric identity. In similar triangles, the ratios of the sides are the same regardless of the size of the triangles, and depend upon the angles. Consequently, in the figure, the triangle with hypotenuse of unit size has opposite side of size sin θ and adjacent side of size cos θ in units of the hypotenuse. === Relation to the cross product === The Pythagorean theorem relates the cross product and dot product in a similar way: ‖ a × b ‖ 2 + ( a ⋅ b ) 2 = ‖ a ‖ 2 ‖ b ‖ 2 . 
{\displaystyle \|\mathbf {a} \times \mathbf {b} \|^{2}+(\mathbf {a} \cdot \mathbf {b} )^{2}=\|\mathbf {a} \|^{2}\|\mathbf {b} \|^{2}.} This can be seen from the definitions of the cross product and dot product, as a × b = a b n sin ⁡ θ a ⋅ b = a b cos ⁡ θ , {\displaystyle {\begin{aligned}\mathbf {a} \times \mathbf {b} &=ab\mathbf {n} \sin {\theta }\\\mathbf {a} \cdot \mathbf {b} &=ab\cos {\theta },\end{aligned}}} with n a unit vector normal to both a and b. The relationship follows from these definitions and the Pythagorean trigonometric identity. This can also be used to define the cross product. By rearranging the following equation is obtained ‖ a × b ‖ 2 = ‖ a ‖ 2 ‖ b ‖ 2 − ( a ⋅ b ) 2 . {\displaystyle \|\mathbf {a} \times \mathbf {b} \|^{2}=\|\mathbf {a} \|^{2}\|\mathbf {b} \|^{2}-(\mathbf {a} \cdot \mathbf {b} )^{2}.} This can be considered as a condition on the cross product and so part of its definition, for example in seven dimensions. === As an axiom === If the first four of the Euclidean geometry axioms are assumed to be true then the Pythagorean theorem is equivalent to the fifth. That is, Euclid's fifth postulate implies the Pythagorean theorem and vice-versa. == Generalizations == === Similar figures on the three sides === The Pythagorean theorem generalizes beyond the areas of squares on the three sides to any similar figures. This was known by Hippocrates of Chios in the 5th century BC, and was included by Euclid in his Elements: If one erects similar figures (see Euclidean geometry) with corresponding sides on the sides of a right triangle, then the sum of the areas of the ones on the two smaller sides equals the area of the one on the larger side. This extension assumes that the sides of the original triangle are the corresponding sides of the three congruent figures (so the common ratios of sides between the similar figures are a:b:c). While Euclid's proof only applied to convex polygons, the theorem also applies to concave polygons and even to similar figures that have curved boundaries (but still with part of a figure's boundary being the side of the original triangle). The basic idea behind this generalization is that the area of a plane figure is proportional to the square of any linear dimension, and in particular is proportional to the square of the length of any side. Thus, if similar figures with areas A, B and C are erected on sides with corresponding lengths a, b and c then: A a 2 = B b 2 = C c 2 , {\displaystyle {\frac {A}{a^{2}}}={\frac {B}{b^{2}}}={\frac {C}{c^{2}}}\,,} ⇒ A + B = a 2 c 2 C + b 2 c 2 C . {\displaystyle \Rightarrow A+B={\frac {a^{2}}{c^{2}}}C+{\frac {b^{2}}{c^{2}}}C\,.} But, by the Pythagorean theorem, a2 + b2 = c2, so A + B = C. Conversely, if we can prove that A + B = C for three similar figures without using the Pythagorean theorem, then we can work backwards to construct a proof of the theorem. For example, the starting center triangle can be replicated and used as a triangle C on its hypotenuse, and two similar right triangles (A and B ) constructed on the other two sides, formed by dividing the central triangle by its altitude. The sum of the areas of the two smaller triangles therefore is that of the third, thus A + B = C and reversing the above logic leads to the Pythagorean theorem a2 + b2 = c2. 
(See also Einstein's proof by dissection without rearrangement) === Law of cosines === The Pythagorean theorem is a special case of the more general theorem relating the lengths of sides in any triangle, the law of cosines, which states that a 2 + b 2 − 2 a b cos ⁡ θ = c 2 {\displaystyle a^{2}+b^{2}-2ab\cos {\theta }=c^{2}} where θ {\displaystyle \theta } is the angle between sides a {\displaystyle a} and b {\displaystyle b} . When θ {\displaystyle \theta } is π 2 {\displaystyle {\frac {\pi }{2}}} radians or 90°, then cos ⁡ θ = 0 {\displaystyle \cos {\theta }=0} , and the formula reduces to the usual Pythagorean theorem. === Arbitrary triangle === At any selected angle of a general triangle of sides a, b, c, inscribe an isosceles triangle such that the equal angles at its base θ are the same as the selected angle. Suppose the selected angle θ is opposite the side labeled c. Inscribing the isosceles triangle forms triangle CAD with angle θ opposite side b and with side r along c. A second triangle is formed with angle θ opposite side a and a side with length s along c, as shown in the figure. Thābit ibn Qurra stated that the sides of the three triangles were related as: a 2 + b 2 = c ( r + s ) . {\displaystyle a^{2}+b^{2}=c(r+s)\ .} As the angle θ approaches π/2, the base of the isosceles triangle narrows, and lengths r and s overlap less and less. When θ = π/2, ADB becomes a right triangle, r + s = c, and the original Pythagorean theorem is regained. One proof observes that triangle ABC has the same angles as triangle CAD, but in opposite order. (The two triangles share the angle at vertex A, both contain the angle θ, and so also have the same third angle by the triangle postulate.) Consequently, ABC is similar to the reflection of CAD, the triangle DAC in the lower panel. Taking the ratio of sides opposite and adjacent to θ, c b = b r . {\displaystyle {\frac {c}{b}}={\frac {b}{r}}\ .} Likewise, for the reflection of the other triangle, c a = a s . {\displaystyle {\frac {c}{a}}={\frac {a}{s}}\ .} Clearing fractions and adding these two relations: c s + c r = a 2 + b 2 , {\displaystyle cs+cr=a^{2}+b^{2}\ ,} the required result. The theorem remains valid if the angle θ {\displaystyle \theta } is obtuse so the lengths r and s are non-overlapping. === General triangles using parallelograms === Pappus's area theorem is a further generalization, that applies to triangles that are not right triangles, using parallelograms on the three sides in place of squares (squares are a special case, of course). The upper figure shows that for a scalene triangle, the area of the parallelogram on the longest side is the sum of the areas of the parallelograms on the other two sides, provided the parallelogram on the long side is constructed as indicated (the dimensions labeled with arrows are the same, and determine the sides of the bottom parallelogram). This replacement of squares with parallelograms bears a clear resemblance to the original Pythagoras' theorem, and was considered a generalization by Pappus of Alexandria in 4 AD The lower figure shows the elements of the proof. Focus on the left side of the figure. The left green parallelogram has the same area as the left, blue portion of the bottom parallelogram because both have the same base b and height h. However, the left green parallelogram also has the same area as the left green parallelogram of the upper figure, because they have the same base (the upper left side of the triangle) and the same height normal to that side of the triangle. 
Repeating the argument for the right side of the figure, the bottom parallelogram has the same area as the sum of the two green parallelograms. === Solid geometry === In terms of solid geometry, Pythagoras' theorem can be applied to three dimensions as follows. Consider the cuboid shown in the figure. The length of face diagonal AC is found from Pythagoras' theorem as: A C ¯ 2 = A B ¯ 2 + B C ¯ 2 , {\displaystyle {\overline {AC}}^{\,2}={\overline {AB}}^{\,2}+{\overline {BC}}^{\,2}\,,} where these three sides form a right triangle. Using diagonal AC and the horizontal edge CD, the length of body diagonal AD then is found by a second application of Pythagoras' theorem as: A D ¯ 2 = A C ¯ 2 + C D ¯ 2 , {\displaystyle {\overline {AD}}^{\,2}={\overline {AC}}^{\,2}+{\overline {CD}}^{\,2}\,,} or, doing it all in one step: A D ¯ 2 = A B ¯ 2 + B C ¯ 2 + C D ¯ 2 . {\displaystyle {\overline {AD}}^{\,2}={\overline {AB}}^{\,2}+{\overline {BC}}^{\,2}+{\overline {CD}}^{\,2}\,.} This result is the three-dimensional expression for the magnitude of a vector v (the diagonal AD) in terms of its orthogonal components {vk} (the three mutually perpendicular sides): ‖ v ‖ 2 = ∑ k = 1 3 ‖ v k ‖ 2 . {\displaystyle \|\mathbf {v} \|^{2}=\sum _{k=1}^{3}\|\mathbf {v} _{k}\|^{2}.} This one-step formulation may be viewed as a generalization of Pythagoras' theorem to higher dimensions. However, this result is really just the repeated application of the original Pythagoras' theorem to a succession of right triangles in a sequence of orthogonal planes. A substantial generalization of the Pythagorean theorem to three dimensions is de Gua's theorem, named for Jean Paul de Gua de Malves: If a tetrahedron has a right angle corner (like a corner of a cube), then the square of the area of the face opposite the right angle corner is the sum of the squares of the areas of the other three faces. This result can be generalized as in the "n-dimensional Pythagorean theorem": Let x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} be orthogonal vectors in Rn. Consider the n-dimensional simplex S with vertices 0 , x 1 , … , x n {\displaystyle 0,x_{1},\ldots ,x_{n}} . (Think of the (n − 1)-dimensional simplex with vertices x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} not including the origin as the "hypotenuse" of S and the remaining (n − 1)-dimensional faces of S as its "legs".) Then the square of the volume of the hypotenuse of S is the sum of the squares of the volumes of the n legs. This statement is illustrated in three dimensions by the tetrahedron in the figure. The "hypotenuse" is the base of the tetrahedron at the back of the figure, and the "legs" are the three sides emanating from the vertex in the foreground. As the depth of the base from the vertex increases, the area of the "legs" increases, while that of the base is fixed. The theorem suggests that when this depth is at the value creating a right vertex, the generalization of Pythagoras' theorem applies. In a different wording: Given an n-rectangular n-dimensional simplex, the square of the (n − 1)-content of the facet opposing the right vertex will equal the sum of the squares of the (n − 1)-contents of the remaining facets. === Inner product spaces === The Pythagorean theorem can be generalized to inner product spaces, which are generalizations of the familiar 2-dimensional and 3-dimensional Euclidean spaces. For example, a function may be considered as a vector with infinitely many components in an inner product space, as in functional analysis. 
In an inner product space, the concept of perpendicularity is replaced by the concept of orthogonality: two vectors v and w are orthogonal if their inner product ⟨ v , w ⟩ {\displaystyle \langle \mathbf {v} ,\mathbf {w} \rangle } is zero. The inner product is a generalization of the dot product of vectors. The dot product is called the standard inner product or the Euclidean inner product. However, other inner products are possible. The concept of length is replaced by the concept of the norm ‖v‖ of a vector v, defined as: ‖ v ‖ ≡ ⟨ v , v ⟩ . {\displaystyle \lVert \mathbf {v} \rVert \equiv {\sqrt {\langle \mathbf {v} ,\mathbf {v} \rangle }}\,.} In an inner-product space, the Pythagorean theorem states that for any two orthogonal vectors v and w we have ‖ v + w ‖ 2 = ‖ v ‖ 2 + ‖ w ‖ 2 . {\displaystyle \left\|\mathbf {v} +\mathbf {w} \right\|^{2}=\left\|\mathbf {v} \right\|^{2}+\left\|\mathbf {w} \right\|^{2}.} Here the vectors v and w are akin to the sides of a right triangle with hypotenuse given by the vector sum v + w. This form of the Pythagorean theorem is a consequence of the properties of the inner product: ‖ v + w ‖ 2 = ⟨ v + w , v + w ⟩ = ⟨ v , v ⟩ + ⟨ w , w ⟩ + ⟨ v , w ⟩ + ⟨ w , v ⟩ = ‖ v ‖ 2 + ‖ w ‖ 2 , {\displaystyle {\begin{aligned}\left\|\mathbf {v} +\mathbf {w} \right\|^{2}&=\langle \mathbf {v+w} ,\ \mathbf {v+w} \rangle \\[3mu]&=\langle \mathbf {v} ,\ \mathbf {v} \rangle +\langle \mathbf {w} ,\ \mathbf {w} \rangle +\langle \mathbf {v,\ w} \rangle +\langle \mathbf {w,\ v} \rangle \\[3mu]&=\left\|\mathbf {v} \right\|^{2}+\left\|\mathbf {w} \right\|^{2},\end{aligned}}} where ⟨ v , w ⟩ = ⟨ w , v ⟩ = 0 {\displaystyle \langle \mathbf {v,\ w} \rangle =\langle \mathbf {w,\ v} \rangle =0} because of orthogonality. A further generalization of the Pythagorean theorem in an inner product space to non-orthogonal vectors is the parallelogram law: 2 ‖ v ‖ 2 + 2 ‖ w ‖ 2 = ‖ v + w ‖ 2 + ‖ v − w ‖ 2 , {\displaystyle 2\|\mathbf {v} \|^{2}+2\|\mathbf {w} \|^{2}=\|\mathbf {v+w} \|^{2}+\|\mathbf {v-w} \|^{2}\ ,} which says that twice the sum of the squares of the lengths of the sides of a parallelogram is the sum of the squares of the lengths of the diagonals. Any norm that satisfies this equality is ipso facto a norm corresponding to an inner product. The Pythagorean identity can be extended to sums of more than two orthogonal vectors. If v1, v2, ..., vn are pairwise-orthogonal vectors in an inner-product space, then application of the Pythagorean theorem to successive pairs of these vectors (as described for 3-dimensions in the section on solid geometry) results in the equation ‖ ∑ k = 1 n v k ‖ 2 = ∑ k = 1 n ‖ v k ‖ 2 {\displaystyle {\biggl \|}\sum _{k=1}^{n}\mathbf {v} _{k}{\biggr \|}^{2}=\sum _{k=1}^{n}\|\mathbf {v} _{k}\|^{2}} === Sets of m-dimensional objects in n-dimensional space === Another generalization of the Pythagorean theorem applies to Lebesgue-measurable sets of objects in any number of dimensions. Specifically, the square of the measure of an m-dimensional set of objects in one or more parallel m-dimensional flats in n-dimensional Euclidean space is equal to the sum of the squares of the measures of the orthogonal projections of the object(s) onto all m-dimensional coordinate subspaces. In mathematical terms: μ m s 2 = ∑ i = 1 x μ 2 m p i {\displaystyle \mu _{ms}^{2}=\sum _{i=1}^{x}\mathbf {\mu ^{2}} _{mp_{i}}} where: μ m {\displaystyle \mu _{m}} is a measure in m-dimensions (a length in one dimension, an area in two dimensions, a volume in three dimensions, etc.). 
s {\displaystyle s} is a set of one or more non-overlapping m-dimensional objects in one or more parallel m-dimensional flats in n-dimensional Euclidean space. μ m s {\displaystyle \mu _{ms}} is the total measure (sum) of the set of m-dimensional objects. p {\displaystyle p} represents an m-dimensional projection of the original set onto an orthogonal coordinate subspace. μ m p i {\displaystyle \mu _{mp_{i}}} is the measure of the m-dimensional set projection onto m-dimensional coordinate subspace i {\displaystyle i} . Because object projections can overlap on a coordinate subspace, the measure of each object projection in the set must be calculated individually, then measures of all projections added together to provide the total measure for the set of projections on the given coordinate subspace. x {\displaystyle x} is the number of orthogonal, m-dimensional coordinate subspaces in n-dimensional space (Rn) onto which the m-dimensional objects are projected (m ≤ n): x = ( n m ) = n ! m ! ( n − m ) ! {\displaystyle x={\binom {n}{m}}={\frac {n!}{m!(n-m)!}}} === Non-Euclidean geometry === The Pythagorean theorem is derived from the axioms of Euclidean geometry, and in fact, were the Pythagorean theorem to fail for some right triangle, then the plane in which this triangle is contained cannot be Euclidean. More precisely, the Pythagorean theorem implies, and is implied by, Euclid's Parallel (Fifth) Postulate. Thus, right triangles in a non-Euclidean geometry do not satisfy the Pythagorean theorem. For example, in spherical geometry, all three sides of the right triangle (say a, b, and c) bounding an octant of the unit sphere have length equal to π/2, and all its angles are right angles, which violates the Pythagorean theorem because a 2 + b 2 = 2 c 2 > c 2 {\displaystyle a^{2}+b^{2}=2c^{2}>c^{2}} . Here two cases of non-Euclidean geometry are considered—spherical geometry and hyperbolic plane geometry; in each case, as in the Euclidean case for non-right triangles, the result replacing the Pythagorean theorem follows from the appropriate law of cosines. However, the Pythagorean theorem remains true in hyperbolic geometry and elliptic geometry if the condition that the triangle be right is replaced with the condition that two of the angles sum to the third, say A+B = C. The sides are then related as follows: the sum of the areas of the circles with diameters a and b equals the area of the circle with diameter c. ==== Spherical geometry ==== For any right triangle on a sphere of radius R (for example, if γ in the figure is a right angle), with sides a, b, c, the relation between the sides takes the form: cos ⁡ c R = cos ⁡ a R cos ⁡ b R . {\displaystyle \cos {\frac {c}{R}}=\cos {\frac {a}{R}}\,\cos {\frac {b}{R}}.} This equation can be derived as a special case of the spherical law of cosines that applies to all spherical triangles: cos ⁡ c R = cos ⁡ a R cos ⁡ b R + sin ⁡ a R sin ⁡ b R cos ⁡ γ . {\displaystyle \cos {\frac {c}{R}}=\cos {\frac {a}{R}}\,\cos {\frac {b}{R}}+\sin {\frac {a}{R}}\,\sin {\frac {b}{R}}\,\cos {\gamma }.} For infinitesimal triangles on the sphere (or equivalently, for finite spherical triangles on a sphere of infinite radius), the spherical relation between the sides of a right triangle reduces to the Euclidean form of the Pythagorean theorem. To see how, assume we have a spherical triangle of fixed side lengths a, b, and c on a sphere with expanding radius R. 
As R approaches infinity the quantities a/R, b/R, and c/R tend to zero and the spherical Pythagorean identity reduces to 1 = 1 , {\displaystyle 1=1,} so we must look at its asymptotic expansion. The Maclaurin series for the cosine function can be written as cos ⁡ x = 1 − 1 2 x 2 + O ( x 4 ) {\textstyle \cos x=1-{\tfrac {1}{2}}x^{2}+O{\left(x^{4}\right)}} with the remainder term in big O notation. Letting x = c / R {\displaystyle x=c/R} be a side of the triangle, and treating the expression as an asymptotic expansion in terms of R for a fixed c, cos ⁡ c R = 1 − c 2 2 R 2 + O ( R − 4 ) {\displaystyle {\begin{aligned}\cos {\frac {c}{R}}=1-{\frac {c^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}\end{aligned}}} and likewise for a and b. Substituting the asymptotic expansion for each of the cosines into the spherical relation for a right triangle yields 1 − c 2 2 R 2 + O ( R − 4 ) = ( 1 − a 2 2 R 2 + O ( R − 4 ) ) ( 1 − b 2 2 R 2 + O ( R − 4 ) ) = 1 − a 2 2 R 2 − b 2 2 R 2 + O ( R − 4 ) . {\displaystyle {\begin{aligned}1-{\frac {c^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}&=\left(1-{\frac {a^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}\right)\left(1-{\frac {b^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}\right)\\&=1-{\frac {a^{2}}{2R^{2}}}-{\frac {b^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}.\end{aligned}}} Subtracting 1 and then negating each side, c 2 2 R 2 = a 2 2 R 2 + b 2 2 R 2 + O ( R − 4 ) . {\displaystyle {\frac {c^{2}}{2R^{2}}}={\frac {a^{2}}{2R^{2}}}+{\frac {b^{2}}{2R^{2}}}+O{\left(R^{-4}\right)}.} Multiplying through by 2R2, the asymptotic expansion for c in terms of fixed a, b and variable R is c 2 = a 2 + b 2 + O ( R − 2 ) . {\displaystyle c^{2}=a^{2}+b^{2}+O{\left(R^{-2}\right)}.} The Euclidean Pythagorean relationship c 2 = a 2 + b 2 {\textstyle c^{2}=a^{2}+b^{2}} is recovered in the limit, as the remainder vanishes when the radius R approaches infinity. For practical computation in spherical trigonometry with small right triangles, cosines can be replaced with sines using the double-angle identity cos ⁡ 2 θ = 1 − 2 sin 2 ⁡ θ {\displaystyle \cos {2\theta }=1-2\sin ^{2}{\theta }} to avoid loss of significance. Then the spherical Pythagorean theorem can alternately be written as sin 2 ⁡ c 2 R = sin 2 ⁡ a 2 R + sin 2 ⁡ b 2 R − 2 sin 2 ⁡ a 2 R sin 2 ⁡ b 2 R . {\displaystyle \sin ^{2}{\frac {c}{2R}}=\sin ^{2}{\frac {a}{2R}}+\sin ^{2}{\frac {b}{2R}}-2\sin ^{2}{\frac {a}{2R}}\,\sin ^{2}{\frac {b}{2R}}.} ==== Hyperbolic geometry ==== In a hyperbolic space with uniform Gaussian curvature −1/R2, for a right triangle with legs a, b, and hypotenuse c, the relation between the sides takes the form: cosh ⁡ c R = cosh ⁡ a R cosh ⁡ b R {\displaystyle \cosh {\frac {c}{R}}=\cosh {\frac {a}{R}}\,\cosh {\frac {b}{R}}} where cosh is the hyperbolic cosine. This formula is a special form of the hyperbolic law of cosines that applies to all hyperbolic triangles: cosh ⁡ c R = cosh ⁡ a R cosh ⁡ b R − sinh ⁡ a R sinh ⁡ b R cos ⁡ γ , {\displaystyle \cosh {\frac {c}{R}}=\cosh {\frac {a}{R}}\ \cosh {\frac {b}{R}}-\sinh {\frac {a}{R}}\ \sinh {\frac {b}{R}}\ \cos \gamma \ ,} with γ the angle at the vertex opposite the side c. By using the Maclaurin series for the hyperbolic cosine, cosh x ≈ 1 + x2/2, it can be shown that as a hyperbolic triangle becomes very small (that is, as a, b, and c all approach zero), the hyperbolic relation for a right triangle approaches the form of Pythagoras' theorem. 
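Both limiting statements can be checked numerically by solving the spherical and hyperbolic relations for c at increasing values of R (a sketch only; the legs a = 3 and b = 4 and the chosen radii are arbitrary):

```python
import numpy as np

a, b = 3.0, 4.0
print("Euclidean:", np.hypot(a, b))  # 5.0

for R in (10.0, 100.0, 1000.0):
    # Spherical right triangle: cos(c/R) = cos(a/R) cos(b/R)
    c_spherical = R * np.arccos(np.cos(a / R) * np.cos(b / R))
    # Hyperbolic right triangle: cosh(c/R) = cosh(a/R) cosh(b/R)
    c_hyperbolic = R * np.arccosh(np.cosh(a / R) * np.cosh(b / R))
    print(R, c_spherical, c_hyperbolic)  # both tend to 5.0 as R grows
```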
For small right triangles (a, b << R), the hyperbolic cosines can be eliminated to avoid loss of significance, giving sinh 2 ⁡ c 2 R = sinh 2 ⁡ a 2 R + sinh 2 ⁡ b 2 R + 2 sinh 2 ⁡ a 2 R sinh 2 ⁡ b 2 R . {\displaystyle \sinh ^{2}{\frac {c}{2R}}=\sinh ^{2}{\frac {a}{2R}}+\sinh ^{2}{\frac {b}{2R}}+2\sinh ^{2}{\frac {a}{2R}}\sinh ^{2}{\frac {b}{2R}}\,.} ==== Very small triangles ==== For any uniform curvature K (positive, zero, or negative), in very small right triangles (|K|a2, |K|b2 << 1) with hypotenuse c, it can be shown that c 2 = a 2 + b 2 − K 3 a 2 b 2 − K 2 45 a 2 b 2 ( a 2 + b 2 ) − 2 K 3 945 a 2 b 2 ( a 2 − b 2 ) 2 + O ( K 4 c 10 ) . {\displaystyle c^{2}=a^{2}+b^{2}-{\frac {K}{3}}a^{2}b^{2}-{\frac {K^{2}}{45}}a^{2}b^{2}(a^{2}+b^{2})-{\frac {2K^{3}}{945}}a^{2}b^{2}(a^{2}-b^{2})^{2}+O(K^{4}c^{10})\,.} === Differential geometry === The Pythagorean theorem applies to infinitesimal triangles seen in differential geometry. In three dimensional space, the distance between two infinitesimally separated points satisfies d s 2 = d x 2 + d y 2 + d z 2 , {\displaystyle ds^{2}=dx^{2}+dy^{2}+dz^{2},} with ds the element of distance and (dx, dy, dz) the components of the vector separating the two points. Such a space is called a Euclidean space. However, in Riemannian geometry, a generalization of this expression useful for general coordinates (not just Cartesian) and general spaces (not just Euclidean) takes the form: d s 2 = ∑ i , j n g i j d x i d x j {\displaystyle ds^{2}=\sum _{i,j}^{n}g_{ij}\,dx_{i}\,dx_{j}} which is called the metric tensor. (Sometimes, by abuse of language, the same term is applied to the set of coefficients gij.) It may be a function of position, and often describes curved space. A simple example is Euclidean (flat) space expressed in curvilinear coordinates. For example, in polar coordinates: d s 2 = d r 2 + r 2 d θ 2 . {\displaystyle ds^{2}=dr^{2}+r^{2}d\theta ^{2}\ .} == History == There is debate whether the Pythagorean theorem was discovered once, or many times in many places, and the date of first discovery is uncertain, as is the date of the first proof. Historians of Mesopotamian mathematics have concluded that the Pythagorean rule was in widespread use during the Old Babylonian period (20th to 16th centuries BC), over a thousand years before Pythagoras was born. The history of the theorem can be divided into four parts: knowledge of Pythagorean triples, knowledge of the relationship among the sides of a right triangle, knowledge of the relationships among adjacent angles, and proofs of the theorem within some deductive system. Written c. 1800 BC, the Egyptian Middle Kingdom Berlin Papyrus 6619 includes a problem whose solution is the Pythagorean triple 6:8:10, but the problem does not mention a triangle. The Mesopotamian tablet Plimpton 322, written near Larsa also c. 1800 BC, contains many entries closely related to Pythagorean triples. In India, the Baudhayana Shulba Sutra, the dates of which are given variously as between the 8th and 5th century BC, contains a list of Pythagorean triples and a statement of the Pythagorean theorem, both in the special case of the isosceles right triangle and in the general case, as does the Apastamba Shulba Sutra (c. 600 BC). Byzantine Neoplatonic philosopher and mathematician Proclus, writing in the fifth century AD, states two arithmetic rules, "one of them attributed to Plato, the other to Pythagoras", for generating special Pythagorean triples. The rule attributed to Pythagoras (c. 570 – c. 
495 BC) starts from an odd number and produces a triple with leg and hypotenuse differing by one unit; the rule attributed to Plato (428/427 or 424/423 – 348/347 BC) starts from an even number and produces a triple with leg and hypotenuse differing by two units. According to Thomas L. Heath (1861–1940), no specific attribution of the theorem to Pythagoras exists in the surviving Greek literature from the five centuries after Pythagoras lived. However, when authors such as Plutarch and Cicero attributed the theorem to Pythagoras, they did so in a way which suggests that the attribution was widely known and undoubted. Classicist Kurt von Fritz wrote, "Whether this formula is rightly attributed to Pythagoras personally ... one can safely assume that it belongs to the very oldest period of Pythagorean mathematics." Around 300 BC, in Euclid's Elements, the oldest extant axiomatic proof of the theorem is presented. With contents known much earlier, but in surviving texts dating from roughly the 1st century BC, the Chinese text Zhoubi Suanjing (周髀算经), (The Arithmetical Classic of the Gnomon and the Circular Paths of Heaven) gives a reasoning for the Pythagorean theorem for the (3, 4, 5) triangle — in China it is called the "Gougu theorem" (勾股定理). During the Han Dynasty (202 BC to 220 AD), Pythagorean triples appear in The Nine Chapters on the Mathematical Art, together with a mention of right triangles. Some believe the theorem arose first in China in the 11th century BC, where it is alternatively known as the "Shang Gao theorem" (商高定理), named after the Duke of Zhou's astronomer and mathematician, whose reasoning composed most of what was in the Zhoubi Suanjing. == See also == == Notes and references == === Notes === === References === === Works cited === == External links == Euclid (1997) [c. 300 BC]. David E. Joyce (ed.). Elements. Retrieved 2006-08-30. In HTML with Java-based interactive figures. "Pythagorean theorem". Encyclopedia of Mathematics. EMS Press. 2001 [1994]. History topic: Pythagoras's theorem in Babylonian mathematics Interactive links: Interactive proof in Java of the Pythagorean theorem Another interactive proof in Java of the Pythagorean theorem Pythagorean theorem with interactive animation Animated, non-algebraic, and user-paced Pythagorean theorem Pythagorean theorem water demo on YouTube Pythagorean theorem (more than 70 proofs from cut-the-knot) Weisstein, Eric W. "Pythagorean theorem". MathWorld.
Wikipedia:Babai's problem#0
Babai's problem is a problem in algebraic graph theory first proposed in 1979 by László Babai. == Babai's problem == Let G {\displaystyle G} be a finite group, let Irr ⁡ ( G ) {\displaystyle \operatorname {Irr} (G)} be the set of all irreducible characters of G {\displaystyle G} , let Γ = Cay ⁡ ( G , S ) {\displaystyle \Gamma =\operatorname {Cay} (G,S)} be the Cayley graph (or directed Cayley graph) corresponding to a generating subset S {\displaystyle S} of G ∖ { 1 } {\displaystyle G\setminus \{1\}} , and let ν {\displaystyle \nu } be a positive integer. Is the set M ν S = { ∑ s ∈ S χ ( s ) | χ ∈ Irr ⁡ ( G ) , χ ( 1 ) = ν } {\displaystyle M_{\nu }^{S}=\left\{\sum _{s\in S}\chi (s)\;|\;\chi \in \operatorname {Irr} (G),\;\chi (1)=\nu \right\}} an invariant of the graph Γ {\displaystyle \Gamma } ? In other words, does Cay ⁡ ( G , S ) ≅ Cay ⁡ ( G , S ′ ) {\displaystyle \operatorname {Cay} (G,S)\cong \operatorname {Cay} (G,S')} imply that M ν S = M ν S ′ {\displaystyle M_{\nu }^{S}=M_{\nu }^{S'}} ? == BI-group == A finite group G {\displaystyle G} is called a BI-group (Babai Invariant group) if Cay ⁡ ( G , S ) ≅ Cay ⁡ ( G , T ) {\displaystyle \operatorname {Cay} (G,S)\cong \operatorname {Cay} (G,T)} for some inverse closed subsets S {\displaystyle S} and T {\displaystyle T} of G ∖ { 1 } {\displaystyle G\setminus \{1\}} implies that M ν S = M ν T {\displaystyle M_{\nu }^{S}=M_{\nu }^{T}} for all positive integers ν {\displaystyle \nu } . == Open problem == Which finite groups are BI-groups? == See also == List of unsolved problems in mathematics List of problems solved since 1995 == References ==
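For abelian groups every irreducible character has degree ν = 1, so the set M_1^S can be tabulated directly. The sketch below (an illustration only, not part of Babai's formulation) does this for the cyclic group Z_n, whose irreducible characters are χ_j(s) = e^(2πijs/n); the two chosen connection sets of Z_8 yield isomorphic Cayley graphs (both are 8-cycles) and turn out to have the same set of character sums:

```python
import cmath

def character_sums(n, S):
    # Irreducible characters of the cyclic group Z_n are chi_j(s) = exp(2*pi*i*j*s/n),
    # all of degree 1, so M_1^S is simply the set of their sums over S.
    sums = set()
    for j in range(n):
        total = sum(cmath.exp(2 * cmath.pi * 1j * j * s / n) for s in S)
        sums.add(round(total.real, 9) + round(total.imag, 9) * 1j)
    return sorted(sums, key=lambda z: (z.real, z.imag))

# Two inverse-closed subsets of Z_8 \ {0}; Cay(Z_8, {1,7}) and Cay(Z_8, {3,5}) are both 8-cycles.
print(character_sums(8, {1, 7}))
print(character_sums(8, {3, 5}))  # the same set of sums
```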
Wikipedia:Babuška–Lax–Milgram theorem#0
In mathematics, the Babuška–Lax–Milgram theorem, sometimes called the generalized Lax–Milgram theorem, is a generalization of the famous Lax–Milgram theorem of Peter Lax and Arthur Milgram, which gives conditions under which a bilinear form can be "inverted" to show the existence and uniqueness of a weak solution to a given boundary value problem. The generalization was proved by Jindřich Nečas in 1962. == Background == In the modern, functional-analytic approach to the study of partial differential equations, one does not attempt to solve a given partial differential equation directly, but instead makes use of the structure of the vector space of possible solutions, e.g. a Sobolev space W k,p. Abstractly, consider two real normed spaces U and V with their continuous dual spaces U∗ and V∗ respectively. In many applications, U is the space of possible solutions; given some partial differential operator Λ : U → V∗ and a specified element f ∈ V∗, the objective is to find a u ∈ U such that Λ u = f . {\displaystyle \Lambda u=f.} However, in the weak formulation, this equation is only required to hold when "tested" against all other possible elements of V. This "testing" is accomplished by means of a bilinear function B : U × V → R which encodes the differential operator Λ; a weak solution to the problem is to find a u ∈ U such that B ( u , v ) = ⟨ f , v ⟩ for all v ∈ V . {\displaystyle B(u,v)=\langle f,v\rangle {\mbox{ for all }}v\in V.} The achievement of Lax and Milgram in their 1954 result was to specify sufficient conditions for this weak formulation to have a unique solution that depends continuously upon the specified datum f ∈ V∗: it suffices that U = V is a Hilbert space, that B is continuous, and that B is strongly coercive, i.e. | B ( u , u ) | ≥ c ‖ u ‖ 2 {\displaystyle |B(u,u)|\geq c\|u\|^{2}} for some constant c > 0 and all u ∈ U. For example, in the solution of the Poisson equation on a bounded, open domain Ω ⊂ Rn, { − Δ u ( x ) = f ( x ) , x ∈ Ω ; u ( x ) = 0 , x ∈ ∂ Ω ; {\displaystyle {\begin{cases}-\Delta u(x)=f(x),&x\in \Omega ;\\u(x)=0,&x\in \partial \Omega ;\end{cases}}} the space U could be taken to be the Sobolev space H01(Ω) with dual H−1(Ω); the former is a subspace of the Lp space V = L2(Ω); the bilinear form B associated to −Δ is the L2(Ω) inner product of the derivatives: B ( u , v ) = ∫ Ω ∇ u ( x ) ⋅ ∇ v ( x ) d x . {\displaystyle B(u,v)=\int _{\Omega }\nabla u(x)\cdot \nabla v(x)\,\mathrm {d} x.} Hence, the weak formulation of the Poisson equation, given f ∈ L2(Ω), is to find uf such that ∫ Ω ∇ u f ( x ) ⋅ ∇ v ( x ) d x = ∫ Ω f ( x ) v ( x ) d x for all v ∈ H 0 1 ( Ω ) . {\displaystyle \int _{\Omega }\nabla u_{f}(x)\cdot \nabla v(x)\,\mathrm {d} x=\int _{\Omega }f(x)v(x)\,\mathrm {d} x{\mbox{ for all }}v\in H_{0}^{1}(\Omega ).} == Statement of the theorem == In 1962 Jindřich Nečas provided the following generalization of Lax and Milgram's earlier result, which begins by dispensing with the requirement that U and V be the same space. Let U and V be two real Hilbert spaces and let B : U × V → R be a continuous bilinear functional. Suppose also that B is weakly coercive: for some constant c > 0 and all u ∈ U, sup ‖ v ‖ = 1 | B ( u , v ) | ≥ c ‖ u ‖ {\displaystyle \sup _{\|v\|=1}|B(u,v)|\geq c\|u\|} and, for all 0 ≠ v ∈ V, sup ‖ u ‖ = 1 | B ( u , v ) | > 0 {\displaystyle \sup _{\|u\|=1}|B(u,v)|>0} Then, for all f ∈ V∗, there exists a unique solution u = uf ∈ U to the weak problem B ( u f , v ) = ⟨ f , v ⟩ for all v ∈ V . 
{\displaystyle B(u_{f},v)=\langle f,v\rangle {\mbox{ for all }}v\in V.} Moreover, the solution depends continuously on the given data: ‖ u f ‖ ≤ 1 c ‖ f ‖ . {\displaystyle \|u_{f}\|\leq {\frac {1}{c}}\|f\|.} Necas' proof extends directly to the situation where U {\displaystyle U} is a Banach space and V {\displaystyle V} a reflexive Banach space. == See also == Lions–Lax–Milgram theorem == References == Babuška, Ivo (1970–1971). "Error-bounds for finite element method". Numerische Mathematik. 16 (4): 322–333. doi:10.1007/BF02165003. hdl:10338.dmlcz/103498. ISSN 0029-599X. MR 0288971. S2CID 122191183. Zbl 0214.42001. Lax, Peter D.; Milgram, Arthur N. (1954), "Parabolic equations", Contributions to the theory of partial differential equations, Annals of Mathematics Studies, vol. 33, Princeton, N. J.: Princeton University Press, pp. 167–190, MR 0067317, Zbl 0058.08703 – via De Gruyter Nečas, Jindřich, Sur une méthode pour résoudre les équations aux dérivées partielles du type elliptique, voisine de la variationnelle, Annali della Scuola Normale Superiore di Pisa - Scienze Fisiche e Matematiche, Serie 3, Volume 16 (1962) no. 4, pp. 305-326. == External links == Roşca, Ioan (2001) [1994], "Babuška–Lax–Milgram theorem", Encyclopedia of Mathematics, EMS Press
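As an illustration of the weak formulation discussed above (not of the theorem itself), the following sketch discretizes the one-dimensional analogue −u″ = f on (0, 1) with u(0) = u(1) = 0 using a Galerkin method with piecewise-linear hat functions, so that B(u, v) = ∫ u′v′ dx becomes the familiar tridiagonal stiffness matrix; the mesh size and the choice f = 1 are arbitrary:

```python
import numpy as np

def galerkin_poisson_1d(f, n=100):
    """Approximate the weak solution of -u'' = f on (0, 1) with u(0) = u(1) = 0,
    using piecewise-linear hat functions on a uniform mesh of n interior nodes."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)                      # interior nodes
    # Stiffness matrix A_ij = B(phi_j, phi_i) = integral of phi_j' phi_i' dx.
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    b = f(x) * h                                      # load vector <f, phi_i>, nodal quadrature
    return x, np.linalg.solve(A, b)

x, u = galerkin_poisson_1d(lambda x: np.ones_like(x))
# For f = 1 the exact solution is u(x) = x(1 - x)/2.
print(np.max(np.abs(u - x * (1 - x) / 2)))            # error near machine precision here
```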
Wikipedia:Babylonian cuneiform numerals#0
Babylonian cuneiform numerals, also used in Assyria and Chaldea, were written in cuneiform, using a wedge-tipped reed stylus to print a mark on a soft clay tablet which would be exposed in the sun to harden to create a permanent record. The Babylonians, who were famous for their astronomical observations, as well as their calculations (aided by their invention of the abacus), used a sexagesimal (base-60) positional numeral system inherited from either the Sumerian or the Akkadian civilizations. Neither of the predecessors was a positional system (having a convention for which 'end' of the numeral represented the units). == Origin == This system first appeared around 2000 BC; its structure reflects the decimal lexical numerals of Semitic languages rather than Sumerian lexical numbers. However, the use of a special Sumerian sign for 60 (beside two Semitic signs for the same number) attests to a relation with the Sumerian system. == Symbols == The Babylonian system is credited as being the first known positional numeral system, in which the value of a particular digit depends both on the digit itself and its position within the number. This was an extremely important development because non-place-value systems require unique symbols to represent each power of a base (ten, one hundred, one thousand, and so forth), which can make calculations more difficult. Only two symbols (𒁹 to count units and 𒌋 to count tens) were used to notate the 59 non-zero digits. These symbols and their values were combined to form a digit in a sign-value notation quite similar to that of Roman numerals; for example, the combination 𒌋𒌋𒁹𒁹𒁹 represented the digit for 23 (see table of digits above). These digits were used to represent larger numbers in the base 60 (sexagesimal) positional system. For example, 𒁹𒁹 𒌋𒌋𒁹𒁹𒁹 𒁹𒁹𒁹 would represent 2×602+23×60+3 = 8583. A space was left to indicate a place without value, similar to the modern-day zero. Babylonians later devised a sign to represent this empty place. They lacked a symbol to serve the function of radix point, so the place of the units had to be inferred from context: 𒌋𒌋𒁹𒁹𒁹 could have represented 23, 23×60 (𒌋𒌋𒁹𒁹𒁹␣), 23×60×60 (𒌋𒌋𒁹𒁹𒁹␣␣), or 23/60, etc. Their system clearly used internal decimal to represent digits, but it was not really a mixed-radix system of bases 10 and 6, since the ten sub-base was used merely to facilitate the representation of the large set of digits needed, while the place-values in a digit string were consistently 60-based and the arithmetic needed to work with these digit strings was correspondingly sexagesimal. The legacy of sexagesimal still survives to this day, in the form of degrees (360° in a circle or 60° in an angle of an equilateral triangle), arcminutes, and arcseconds in trigonometry and the measurement of time, although both of these systems are actually mixed radix. A common theory is that 60, a superior highly composite number (the previous and next in the series being 12 and 120), was chosen due to its prime factorization: 2×2×3×5, which makes it divisible by 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60. Integers and fractions were represented identically—a radix point was not written but rather made clear by context. === Zero === The Babylonians did not technically have a digit for, nor a concept of, the number zero. Although they understood the idea of nothingness, it was not seen as a number—merely the lack of a number. 
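The place-value reading described above is straightforward to make explicit. In the sketch below (illustrative only), each sexagesimal digit is written as an ordinary integer from 0 to 59, with 0 standing in for an empty place:

```python
def from_sexagesimal(digits):
    """Interpret a list of base-60 digits (most significant first) as an integer.
    A 0 entry plays the role of the empty place left by Babylonian scribes."""
    value = 0
    for d in digits:
        value = 60 * value + d
    return value

print(from_sexagesimal([2, 23, 3]))  # 2*60**2 + 23*60 + 3 = 8583, as in the example above
print(from_sexagesimal([1, 0, 23]))  # an empty middle place: 1*60**2 + 0*60 + 23 = 3623
```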
Later Babylonian texts used a placeholder sign to represent zero, but only in the medial positions, and not on the right-hand side of the number, as is done in numbers like 100. == See also == Akkadian language § Numerals Babylon Babylonia Babylonian mathematics Cuneiform (Unicode block) History of zero Numeral system Sumerian language § Numerals == References == === Bibliography === Menninger, Karl W. (1969). Number Words and Number Symbols: A Cultural History of Numbers. MIT Press. ISBN 0-262-13040-8. McLeish, John (1991). Number: From Ancient Civilisations to the Computer. HarperCollins. ISBN 0-00-654484-3. == External links == Babylonian numerals Archived 2017-05-20 at the Wayback Machine Cuneiform numbers Archived 2020-06-27 at the Wayback Machine Babylonian Mathematics High resolution photographs, descriptions, and analysis of the root(2) tablet (YBC 7289) from the Yale Babylonian Collection Photograph, illustration, and description of the root(2) tablet from the Yale Babylonian Collection Archived 2012-08-13 at the Wayback Machine Babylonian Numerals by Michael Schreiber, Wolfram Demonstrations Project. Weisstein, Eric W. "Sexagesimal". MathWorld. CESCNC – a handy and easy-to-use numeral converter
Wikipedia:Babylonian mathematics#0
Babylonian mathematics (also known as Assyro-Babylonian mathematics) is the mathematics developed or practiced by the people of Mesopotamia, as attested by sources mainly surviving from the Old Babylonian period (1830–1531 BC) to the Seleucid from the last three or four centuries BC. With respect to content, there is scarcely any difference between the two groups of texts. Babylonian mathematics remained constant, in character and content, for over a millennium. In contrast to the scarcity of sources in Egyptian mathematics, knowledge of Babylonian mathematics is derived from hundreds of clay tablets unearthed since the 1850s. Written in cuneiform, tablets were inscribed while the clay was moist, and baked hard in an oven or by the heat of the sun. The majority of recovered clay tablets date from 1800 to 1600 BC, and cover topics that include fractions, algebra, quadratic and cubic equations and the Pythagorean theorem. The Babylonian tablet YBC 7289 gives an approximation of 2 {\displaystyle {\sqrt {2}}} accurate to three significant sexagesimal digits (about six significant decimal digits). == Origins of Babylonian mathematics == Babylonian mathematics is a range of numeric and more advanced mathematical practices in the ancient Near East, written in cuneiform script. Study has historically focused on the First Babylonian dynasty old Babylonian period in the early second millennium BC due to the wealth of data available. There has been debate over the earliest appearance of Babylonian mathematics, with historians suggesting a range of dates between the 5th and 3rd millennia BC. Babylonian mathematics was primarily written on clay tablets in cuneiform script in the Akkadian or Sumerian languages. "Babylonian mathematics" is perhaps an unhelpful term since the earliest suggested origins date to the use of accounting devices, such as bullae and tokens, in the 5th millennium BC. == Babylonian numerals == The Babylonian system of mathematics was a sexagesimal (base 60) numeral system. From this we derive the modern-day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 degrees in a circle. The Babylonians were able to make great advances in mathematics for two reasons. Firstly, the number 60 is a superior highly composite number, having factors of 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60 (including those that are themselves composite), facilitating calculations with fractions. Additionally, unlike the Egyptians and Romans, the Babylonians had a true place-value system, where digits written in the left column represented larger values (much as, in our base ten system, 734 = 7×100 + 3×10 + 4×1). == Old Babylonian mathematics (2000–1600 BC) == === Arithmetic === The Babylonians used pre-calculated tables to assist with arithmetic, including multiplication tables, tables of reciprocals, and tables of squares (or, by using the same table in the opposite way, tables of square roots). Their multiplication tables were not the 60 × 60 {\displaystyle 60\times 60} tables that one might expect by analogy to decimal multiplication tables. Instead, they kept only tables for multiplication by certain "principal numbers" (the regular numbers and 7). To calculate other products, they would split one of the numbers to be multiplied into a sum of principal numbers. Although many Babylonian tablets record exercises in multi-digit multiplication, these typically jump directly from the numbers being multiplied to their product, without showing intermediate values. 
Based on this, and on certain patterns of mistakes in some of these tablets, Jens Høyrup has suggested that long multiplication was performed in such a way that each step of the calculation erased the record of previous steps, as would happen using an abacus or counting board and would not happen with written long multiplication. A rare exception, "the only one of its kind known", is the Late Babylonian/Seleucid tablet BM 34601, which has been reconstructed as computing the square of a 13-digit sexagesimal number (the number 5 ⋅ 3 25 {\displaystyle 5\cdot 3^{25}} ) using a "slanting column of partial products" resembling modern long multiplication. The Babylonians did not have an algorithm for long division. Instead they based their method on the fact that: a b = a × 1 b {\displaystyle {\frac {a}{b}}=a\times {\frac {1}{b}}} together with a table of reciprocals. Numbers whose only prime factors are 2, 3 or 5 (known as 5-smooth or regular numbers) have finite reciprocals in sexagesimal notation, and tables with extensive lists of these reciprocals have been found. Reciprocals such as 1/7, 1/11, 1/13, etc. do not have finite representations in sexagesimal notation. To compute 1/13 or to divide a number by 13 the Babylonians would use an approximation such as: 1 13 = 7 91 = 7 × 1 91 ≈ 7 × 1 90 = 7 × 40 3600 = 280 3600 = 4 60 + 40 3600 . {\displaystyle {\frac {1}{13}}={\frac {7}{91}}=7\times {\frac {1}{91}}\approx 7\times {\frac {1}{90}}=7\times {\frac {40}{3600}}={\frac {280}{3600}}={\frac {4}{60}}+{\frac {40}{3600}}.} The Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) gives an approximation of the square root of 2 in four sexagesimal figures, 𒐕 𒌋𒌋𒐼 𒐐𒐕 𒌋 = 1;24,51,10, which is accurate to about six decimal digits, and is the closest possible three-place sexagesimal representation of √2: 1 + 24 60 + 51 60 2 + 10 60 3 = 305470 216000 = 1.41421 296 ¯ . {\displaystyle 1+{\frac {24}{60}}+{\frac {51}{60^{2}}}+{\frac {10}{60^{3}}}={\frac {305470}{216000}}=1.41421{\overline {296}}.} === Algebra === As well as arithmetical calculations, Babylonian mathematicians also developed algebraic methods of solving equations. Once again, these were based on pre-calculated tables. To solve a quadratic equation, the Babylonians essentially used the standard quadratic formula. They considered quadratic equations of the form: x 2 + b x = c {\displaystyle \ x^{2}+bx=c} where b and c were not necessarily integers, but c was always positive. They knew that a solution to this form of equation is: x = − b 2 + ( b 2 ) 2 + c {\displaystyle x=-{\frac {b}{2}}+{\sqrt {\left({\frac {b}{2}}\right)^{2}+c}}} and they found square roots efficiently using division and averaging. Problems of this type included finding the dimensions of a rectangle given its area and the amount by which the length exceeds the width. Tables of values of n3 + n2 were used to solve certain cubic equations. For example, consider the equation: a x 3 + b x 2 = c . {\displaystyle \ ax^{3}+bx^{2}=c.} Multiplying the equation by a2 and dividing by b3 gives: ( a x b ) 3 + ( a x b ) 2 = c a 2 b 3 . {\displaystyle \left({\frac {ax}{b}}\right)^{3}+\left({\frac {ax}{b}}\right)^{2}={\frac {ca^{2}}{b^{3}}}.} Substituting y = ax/b gives: y 3 + y 2 = c a 2 b 3 {\displaystyle y^{3}+y^{2}={\frac {ca^{2}}{b^{3}}}} which could now be solved by looking up the n3 + n2 table to find the value closest to the right-hand side. The Babylonians accomplished this without algebraic notation, showing a remarkable depth of understanding. 
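A table-lookup solution of this kind is easy to imitate: tabulate n³ + n², pick the entry closest to ca²/b³, and recover x = by/a. The sketch below is only an illustration (the Babylonians worked in sexagesimal and interpolated in their tables; the example equation is arbitrary):

```python
def solve_cubic_by_table(a, b, c, table_size=60):
    """Solve a*x**3 + b*x**2 = c via the substitution y = a*x/b, which reduces the
    equation to y**3 + y**2 = c*a**2/b**3, answered by lookup in an n**3 + n**2 table."""
    target = c * a**2 / b**3
    table = {n: n**3 + n**2 for n in range(1, table_size + 1)}
    y = min(table, key=lambda n: abs(table[n] - target))  # closest tabulated value
    return b * y / a

# 2x^3 + 3x^2 = 540 has the root x = 6, since 2*216 + 3*36 = 540.
print(solve_cubic_by_table(2, 3, 540))  # 6.0
```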
However, they did not have a method for solving the general cubic equation. === Growth === Babylonians modeled exponential growth, constrained growth (via a form of sigmoid functions), and doubling time, the latter in the context of interest on loans. Clay tablets from c. 2000 BC include the exercise "Given an interest rate of 1/60 per month (no compounding), compute the doubling time." This yields an annual interest rate of 12/60 = 20%, and hence a doubling time of 100% growth/20% growth per year = 5 years. === Plimpton 322 === The Plimpton 322 tablet contains a list of "Pythagorean triples", i.e., integers ( a , b , c ) {\displaystyle (a,b,c)} such that a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} . The triples are too many and too large to have been obtained by brute force. Much has been written on the subject, including some speculation (perhaps anachronistic) as to whether the tablet could have served as an early trigonometrical table. Care must be exercised to see the tablet in terms of methods familiar or accessible to scribes at the time. [...] the question "how was the tablet calculated?" does not have to have the same answer as the question "what problems does the tablet set?" The first can be answered most satisfactorily by reciprocal pairs, as first suggested half a century ago, and the second by some sort of right-triangle problems. === Geometry === Babylonians knew the common rules for measuring volumes and areas. They measured the circumference of a circle as three times the diameter and the area as one-twelfth the square of the circumference, which would be correct if π is estimated as 3. They were aware that this was an approximation, and one Old Babylonian mathematical tablet excavated near Susa in 1936 (dated to between the 19th and 17th centuries BC) gives a better approximation of π as 25/8 = 3.125, about 0.5 percent below the exact value. The volume of a cylinder was taken as the product of the base and the height, however, the volume of the frustum of a cone or a square pyramid was incorrectly taken as the product of the height and half the sum of the bases. The Pythagorean rule was also known to the Babylonians. The "Babylonian mile" was a measure of distance equal to about 11.3 km (or about seven modern miles). This measurement for distances eventually was converted to a "time-mile" used for measuring the travel of the Sun, therefore, representing time. The Babylonian astronomers kept detailed records of the rising and setting of stars, the motion of the planets, and the solar and lunar eclipses, all of which required familiarity with angular distances measured on the celestial sphere. They also used a form of Fourier analysis to compute an ephemeris (table of astronomical positions), which was discovered in the 1950s by Otto Neugebauer. To make calculations of the movements of celestial bodies, the Babylonians used basic arithmetic and a coordinate system based on the ecliptic, the part of the heavens that the sun and planets travel through. Tablets kept in the British Museum provide evidence that the Babylonians even went so far as to have a concept of objects in an abstract mathematical space. The tablets date from between 350 and 50 BC, revealing that the Babylonians understood and used geometry even earlier than previously thought. The Babylonians used a method for estimating the area under a curve by drawing a trapezoid underneath, a technique previously believed to have originated in 14th century Europe. 
This method of estimation allowed them to, for example, find the distance Jupiter had traveled in a certain amount of time. == See also == Babylonia Babylonian astronomy History of mathematics Islamic mathematics for mathematics in Islamic Iraq/Mesopotamia == Notes == == References == Berriman, A. E. (1956). The Babylonian quadratic equation. Boyer, C. B. (1989). Merzbach, Uta C. (ed.). A History of Mathematics (2nd rev. ed.). New York: Wiley. ISBN 0-471-09763-2. (1991 pbk ed. ISBN 0-471-54397-7). Høyrup, Jens. "Pythagorean 'Rule' and 'Theorem' – Mirror of the Relation Between Babylonian and Greek Mathematics". In Renger, Johannes (ed.). Babylon: Focus mesopotamischer Geschichte, Wiege früher Gelehrsamkeit, Mythos in der Moderne. 2. Internationales Colloquium der Deutschen Orient-Gesellschaft 24.–26. März 1998 in Berlin (PDF). Berlin: Deutsche Orient-Gesellschaft / Saarbrücken: SDV Saarbrücker Druckerei und Verlag. pp. 393–407. Joseph, G. G. (2000). The Crest of the Peacock. Princeton University Press. ISBN 0-691-00659-8. Joyce, David E. (1995). "Plimpton 322". Neugebauer, Otto (1969). The Exact Sciences in Antiquity (2nd ed.). Dover Publications. ISBN 978-0-486-22332-2. Muroi, Kazuo (2022). "Sexagesimal Calculations in Ancient Sumer". arXiv:2207.12102 [math.HO]. O'Connor, J. J.; Robertson, E. F. (December 2000). "An overview of Babylonian mathematics". MacTutor History of Mathematics. Robson, Eleanor (2001). "Neither Sherlock Holmes nor Babylon: a reassessment of Plimpton 322". Historia Math. 28 (3): 167–206. doi:10.1006/hmat.2001.2317. MR 1849797. Robson, E. (2002). "Words and pictures: New light on Plimpton 322". American Mathematical Monthly. 109 (2). Washington: 105–120. doi:10.1080/00029890.2002.11919845. JSTOR 2695324. S2CID 33907668. Robson, E. (2008). Mathematics in Ancient Iraq: A Social History. Princeton University Press. Toomer, G. J. (1981). Hipparchus and Babylonian Astronomy.
Wikipedia:Backus–Gilbert method#0
In mathematics, the Backus–Gilbert method, also known as the optimally localized average (OLA) method is named for its discoverers, geophysicists George E. Backus and James Freeman Gilbert. It is a regularization method for obtaining meaningful solutions to ill-posed inverse problems. Where other regularization methods, such as the frequently used Tikhonov regularization method, seek to impose smoothness constraints on the solution, Backus–Gilbert instead seeks to impose stability constraints, so that the solution would vary as little as possible if the input data were resampled multiple times. In practice, and to the extent that is justified by the data, smoothness results from this. Given a data array X, the basic Backus-Gilbert inverse is: H θ = C − 1 G θ G θ T C − 1 G θ {\displaystyle \mathbf {H} _{\theta }={\frac {\mathbf {C} ^{-1}\mathbf {G} _{\theta }}{\mathbf {G} _{\theta }^{T}\mathbf {C} ^{-1}\mathbf {G} _{\theta }}}} where C is the covariance matrix of the data, and Gθ is an a priori constraint representing the source θ for which a solution is sought. Regularization is implemented by "whitening" the covariance matrix: C ′ = C + λ I {\displaystyle \mathbf {C} '=\mathbf {C} +\lambda \mathbf {I} } with C′ replacing C in the equation for Hθ. Then, H θ T X {\displaystyle \mathbf {H} _{\theta }^{T}\mathbf {X} } is an estimate of the activity of the source θ. == References == Backus, G.E., and Gilbert, F. 1968, "The Resolving power of Gross Earth Data", Geophysical Journal of the Royal Astronomical Society, vol. 16, pp. 169–205. Backus, G.E., and Gilbert, F. 1970, "Uniqueness in the Inversion of inaccurate Gross Earth Data", Philosophical Transactions of the Royal Society of London A, vol. 266, pp. 123–192. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 19.6. Backus–Gilbert Method". Numerical Recipes (3rd ed.). Cambridge University Press. ISBN 978-0-521-88068-8. Archived from the original on 2011-08-11. Retrieved 2011-08-17.
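The formulas above translate directly into a few lines of linear algebra. The sketch below is illustrative only: the covariance matrix, constraint vector and regularization parameter are made-up values, not data from any particular application:

```python
import numpy as np

def backus_gilbert(X, C, g_theta, lam=0.0):
    """Return the Backus-Gilbert estimate H_theta^T X for data X with covariance C
    and a priori constraint vector g_theta, using the whitened covariance C + lam*I."""
    Cw = C + lam * np.eye(C.shape[0])
    w = np.linalg.solve(Cw, g_theta)   # C'^{-1} g_theta
    H = w / (g_theta @ w)              # normalized by g_theta^T C'^{-1} g_theta
    return H @ X

C = np.diag([1.0, 2.0, 0.5])           # assumed data covariance
g = np.array([1.0, 1.0, 0.0])          # assumed constraint for the source theta
X = np.array([0.3, -0.1, 0.7])         # a data sample
print(backus_gilbert(X, C, g, lam=0.1))
```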
Wikipedia:Bakhshali manuscript#0
The Bakhshali manuscript is an ancient Indian mathematical text written on birch bark that was found in 1881 in the village of Bakhshali, Mardan (near Peshawar in present-day Pakistan, historical Gandhara). It is perhaps "the oldest extant manuscript in Indian mathematics". In a 2017 study, a carbon date of AD 224–383 was proposed for some portions, while a carbon date as late as AD 885–993 was proposed for other portions. The open manner and timing of the publication of these test dates was criticised by a group of Indian mathematical historians (Plofker et al. 2017 and Houben 2018 §3). Until September 2024 the manuscript was thought to contain the earliest known Indian use of a zero symbol; however, in October 2024 the University of Oxford, having revised its findings from a second run of carbon-dating tests in 2018, announced that the manuscript dates from approximately AD 799–1102 (roughly the 9th to 11th century). It is written in a form of literary Sanskrit influenced by contemporary dialects. == Discovery == The manuscript was unearthed in a field in 1881 by a peasant in the village of Bakhshali, which is near Mardan, in present-day Khyber Pakhtunkhwa, Pakistan. The first research on the manuscript was done by A. F. R. Hoernlé. After his death, it was examined by G. R. Kaye, who edited the work and published it as a book in 1927. The extant manuscript is incomplete. It consists of 70 leaves of birch bark, whose intended order is not known. It is kept at the Bodleian Library at the University of Oxford (MS. Sansk. d. 14), though folios are periodically loaned to museums. == Contents == The manuscript is a compendium of rules and illustrative examples. Each example is stated as a problem, the solution is described, and it is verified that the problem has been solved. The sample problems are in verse and the commentary is in prose associated with calculations. The problems involve arithmetic, algebra and geometry, including mensuration. The topics covered include fractions, square roots, arithmetic and geometric progressions, solutions of simple equations, simultaneous linear equations, quadratic equations and indeterminate equations of the second degree. === Composition === The manuscript is written in an earlier form of Sharada script, a script which is known for having been in use mainly from the 8th to the 12th century in the northwestern part of South Asia, such as Kashmir and neighbouring regions. The language of the manuscript, though intended to be Sanskrit, was significantly influenced in its phonetics and morphology by a local dialect or dialects, and some of the resultant linguistic peculiarities of the text are shared with Buddhist Hybrid Sanskrit. The overlying dialects, though sharing affinities with Apabhraṃśa and with Old Kashmiri, have not been identified precisely. It is probable that most of the rules and examples had been originally composed in Sanskrit, while one of the sections was written entirely in a dialect. It is possible that the manuscript might be a compilation of fragments from different works composed in a number of language varieties. Hayashi admits that some of the irregularities are due to errors by scribes or may be orthographical. A colophon to one of the sections states that it was written by a brahmin identified as "the son of Chajaka", a "king of calculators," for the use of Vasiṣṭha's son Hasika. The brahmin might have been the author of the commentary as well as the scribe of the manuscript. 
Near the colophon appears a broken word rtikāvati, which has been interpreted as the place Mārtikāvata mentioned by Varāhamihira as being in northwestern India (along with Takṣaśilā, Gandhāra etc.), the supposed place where the manuscript might have been written. === Mathematics === The manuscript is a compilation of mathematical rules and examples (in verse), and prose commentaries on these verses. Typically, a rule is given, with one or more examples, where each example is followed by a "statement" (nyāsa / sthāpanā) of the example's numerical information in tabular form, then a computation that works out the example by following the rule step-by-step while quoting it, and finally a verification to confirm that the solution satisfies the problem. This is a style similar to that of Bhāskara I's commentary on the gaṇita (mathematics) chapter of the Āryabhaṭīya, including the emphasis on verification that became obsolete in later works. The rules are algorithms and techniques for a variety of problems, such as systems of linear equations, quadratic equations, arithmetic progressions and arithmetico-geometric series, computing square roots approximately, dealing with negative numbers (profit and loss), measurement such as of the fineness of gold, etc. Equality of Two Uniformly Accelerated Growths Let, S 1 = a + ( a + d ) + ( a + 2 d ) + … to n terms, {\displaystyle S_{1}=a+(a+d)+(a+2d)+\ldots {\text{ to }}n{\text{ terms,}}} S 2 = b + ( b + e ) + ( b + 2 e ) + … to n terms, {\displaystyle S_{2}=b+(b+e)+(b+2e)+\ldots {\text{ to }}n{\text{ terms,}}} If these two are equal, we must have ( n − 1 ) d + 2 a = ( n − 1 ) e + 2 b {\displaystyle (n-1)d+2a=(n-1)e+2b} n = 2 ( b − a ) / ( d − e ) + 1 {\displaystyle n=2(b-a)/(d-e)+1} This formula is contained in Bakshali Manuscript, folio 4v, rule 17 (Kaye III, p. 176) as follows: Ādyor viśeṣa dviguṇam cayasaṃdhiḥ-vibhājitam Rūpādhikaṃ tathā kālaṃ gati sāmyam tadā bhavet. "Twice the difference of the initial terms divided by the difference of the common differences is increased by one. That will be time (represented by n {\displaystyle n} , cf. kāla iha padasyopalakṣaṇam) when the distances moved (by the two travellers) will be same." Dvayāditricayaś caiva dvicayatryādikottaraḥ Dvayo ca bhavate paṃthā kena kālena sāsyatāṃ kriyate? The accompanying example reads: "The initial speed (of a traveller) is 2 and subsequent daily increment is 3. That of another, these are 3 initially and 2 as increment. Find in what time will their distances covered attain equality." The working is lost, but the answer, by the formula in the previous example, n = 2 ( 3 − 2 ) / ( 3 − 2 ) + 1 = 3 days. {\displaystyle n=2(3-2)/(3-2)+1=3{\text{ days.}}} === Numerals and zero === The Bakhshali manuscript uses numerals with a place-value system, using a dot as a place holder for zero. The dot symbol came to be called the shunya-bindu (literally, the dot of the empty place). References to the concept are found in Subandhu's Vasavadatta, which has been dated between 385 and 465 by the scholar Maan Singh even though the dates are disputed by other scholars Prior to the 2017 carbon dating, a 9th-century inscription of zero on the wall of a temple in Gwalior, Madhya Pradesh, was once thought to be the oldest Indian use of a zero symbol. == Date == In 2017, samples from 3 folios of the corpus were radiocarbon dated to three different centuries and empires, from AD 224–383 (Indo-Scythian), 680–779 (Turk Shahis), and 885–993 (Saffarid dynasty). 
If the dates are valid, it is not known how folios from different centuries came to be collected and buried. However, on 14 October 2024 the University of Oxford, having revised its findings from a second run of carbon-dating tests in 2018, announced that the Bakhshali manuscript dates from approximately AD 799–1102 (roughly the 9th to 11th century). The publication of the radiocarbon dates, initially via non-academic media, led Kim Plofker, Agathe Keller, Takao Hayashi, Clemency Montelle and Dominik Wujastyk to publicly object to the library making the dates globally available, usurping academic precedence: We express regret that the Bodleian Library kept their carbon-dating findings embargoed for many months, and then chose a newspaper press-release and YouTube as media for a first communication of these technical and historical matters. The Library thus bypassed standard academic channels that would have permitted serious collegial discussion and peer review prior to public announcements. While the excitement inspired by intriguing discoveries benefits our field and scholarly research in general, the confusion generated by broadcasting over-eager and carelessly inferred conclusions, with their inevitable aftermath of caveats and disputes, does not. Referring to the detailed reconsideration of the evidence by Plofker et al., the Sanskrit scholar Jan Houben remarked: "If the finding that samples of the same manuscript would be centuries apart is not based on mistakes ... there are still some factors that have evidently been overlooked by the Bodleian research team: the well-known divergence in exposure to cosmic radiation at different altitudes and the possible variation in background radiation due to the presence of certain minerals in exposed, mountainous rock have nowhere been taken into account." Prior to the proposed radiocarbon dates of the 2017 study, most scholars agreed that the physical manuscript was a copy of a more ancient text, whose date had to be estimated partly on the basis of its content. Hoernlé thought that the manuscript was from the 9th century, but the original was from the 3rd or 4th century. Indian scholars assigned it an earlier date. Datta assigned it to the "early centuries of the Christian era". Channabasappa dated it to AD 200–400, on the grounds that it uses mathematical terminology different from that of Aryabhata. Hayashi noted some similarities between the manuscript and Bhaskara I's work (AD 629), and said that it was "not much later than Bhaskara I". To settle the date of the Bakhshali manuscript, language use and especially palaeography are other major parameters to be taken into account. In this context Houben observed: "it is difficult to derive a linear chronological difference from the observed linguistic variation," and therefore it is necessary to "take quite seriously the judgement of palaeographists such as Richard Salomon who observed that, what he teleologically called “Proto-Śāradā,” “first emerged around the middle of the seventh century” (Salomon 1998: 40). This excludes the earlier dates attributed to manuscript folios on which a fully developed form of Śāradā appears." == See also == Birch bark manuscript Bakhshali approximation Indian mathematics Zero (number) == Notes == == References == == Bibliography == Hayashi, Takao (1995). The Bakhshālī manuscript: an ancient Indian mathematical treatise. Groningen Oriental studies. Groningen: Egbert Forsten. ISBN 978-90-6980-087-5. 
Hoernle, Augustus (1887), On the Bakshali manuscript, Vienna: Alfred Hölder (Editor of the Court and of the University) Kaye, George Rusby (2004) [1927]. The Bakhshālī manuscripts: a study in medieval mathematics. New Delhi: Aditya Prakashan. ISBN 978-81-7742-058-6. Plofker, Kim; Agathe Keller; Takao Hayashi; Clemency Montelle; and Dominik Wujastyk. "The Bakhshālī Manuscript: A Response to the Bodleian Library’s Radiocarbon Dating" History of Science in South Asia, 5.1: 134–150. doi:10.18732/H2XT07 == Further reading == Sarasvati, Svami Satya Prakash; Jyotishmati, Usha (1979), The Bakhshali Manuscript: An Ancient Treatise of Indian Arithmetic (PDF), Allahabad: Dr. Ratna Kumari Svadhyaya Sansthan, archived from the original (PDF) on 20 June 2014, retrieved 19 January 2016 with complete text in Devanagari, 110 pages M N Channabasappa (1976). "On the square root formula in the Bakhshali manuscript" (PDF). Indian J. History Sci. 11 (2): 112–124. David H. Bailey, Jonathan Borwein (2011). "A Quartically Convergent Square Root Algorithm: An Exercise in Forensic Paleo-Mathematics" (PDF). == External links == The Bakhshali manuscript, MacTutor History of Mathematics archive Ch. 6 – The Bakhshali manuscript (Ian G. Pearce, Indian Mathematics: Redressing the balance) Hoernle: On the Bakhshali Manuscript, 1887, archive.org "A Big Zero: Research uncovers the date of the Bakhshali Manuscript", YouTube video, University of Oxford Plofker, Kim, Agathe Keller, Takao Hayashi, Clemency Montelle, and Dominik Wujastyk. 2017. "The Bakhshālī Manuscript: A Response to the Bodleian Library’s Radiocarbon Dating”. History of Science in South Asia 5 (1). 134–50. https://doi.org/10.18732/H2XT07. Challenges the claims made in the YouTube video "A Big Zero."
Wikipedia:Balachandra Rao#0
Nandalike Balachandra Rao (12 March 1953 – 14 May 2025) was an Indian journalist and writer, the son of Nandalike Subba Rao and Girijamma. == Biography == Rao earned his B.A. at Government College, Mangalore, and a diploma in public relations and journalism at Mysore University. A former banker, he was conferred the Kavi Muddana Award for his efforts to immortalise the works of the Kannada poet laureate Muddana. Muddana (Kannada: ಮುದ್ದಣ; 24 January 1870 – 15 February 1901) was a Kannada poet, writer and Yakshagana poet from Nandalike. Rao's book ‘Kumara Vijaya’ was released at Tulu Parba. He died on 14 May 2025 at the age of 72. == References == == See also == Muddana Nandalike
Wikipedia:Balanced set#0
In linear algebra and related areas of mathematics a balanced set, circled set or disk in a vector space (over a field K {\displaystyle \mathbb {K} } with an absolute value function | ⋅ | {\displaystyle |\cdot |} ) is a set S {\displaystyle S} such that a S ⊆ S {\displaystyle aS\subseteq S} for all scalars a {\displaystyle a} satisfying | a | ≤ 1. {\displaystyle |a|\leq 1.} The balanced hull or balanced envelope of a set S {\displaystyle S} is the smallest balanced set containing S . {\displaystyle S.} The balanced core of a set S {\displaystyle S} is the largest balanced set contained in S . {\displaystyle S.} Balanced sets are ubiquitous in functional analysis because every neighborhood of the origin in every topological vector space (TVS) contains a balanced neighborhood of the origin and every convex neighborhood of the origin contains a balanced convex neighborhood of the origin (even if the TVS is not locally convex). This neighborhood can also be chosen to be an open set or, alternatively, a closed set. == Definition == Let X {\displaystyle X} be a vector space over the field K {\displaystyle \mathbb {K} } of real or complex numbers. Notation If S {\displaystyle S} is a set, a {\displaystyle a} is a scalar, and B ⊆ K {\displaystyle B\subseteq \mathbb {K} } then let a S = { a s : s ∈ S } {\displaystyle aS=\{as:s\in S\}} and B S = { b s : b ∈ B , s ∈ S } {\displaystyle BS=\{bs:b\in B,s\in S\}} and for any 0 ≤ r ≤ ∞ , {\displaystyle 0\leq r\leq \infty ,} let B r = { a ∈ K : | a | < r } and B ≤ r = { a ∈ K : | a | ≤ r } . {\displaystyle B_{r}=\{a\in \mathbb {K} :|a|<r\}\qquad {\text{ and }}\qquad B_{\leq r}=\{a\in \mathbb {K} :|a|\leq r\}.} denote, respectively, the open ball and the closed ball of radius r {\displaystyle r} in the scalar field K {\displaystyle \mathbb {K} } centered at 0 {\displaystyle 0} where B 0 = ∅ , B ≤ 0 = { 0 } , {\displaystyle B_{0}=\varnothing ,B_{\leq 0}=\{0\},} and B ∞ = B ≤ ∞ = K . {\displaystyle B_{\infty }=B_{\leq \infty }=\mathbb {K} .} Every balanced subset of the field K {\displaystyle \mathbb {K} } is of the form B ≤ r {\displaystyle B_{\leq r}} or B r {\displaystyle B_{r}} for some 0 ≤ r ≤ ∞ . {\displaystyle 0\leq r\leq \infty .} Balanced set A subset S {\displaystyle S} of X {\displaystyle X} is called a balanced set or balanced if it satisfies any of the following equivalent conditions: Definition: a s ∈ S {\displaystyle as\in S} for all s ∈ S {\displaystyle s\in S} and all scalars a {\displaystyle a} satisfying | a | ≤ 1. {\displaystyle |a|\leq 1.} a S ⊆ S {\displaystyle aS\subseteq S} for all scalars a {\displaystyle a} satisfying | a | ≤ 1. {\displaystyle |a|\leq 1.} B ≤ 1 S ⊆ S {\displaystyle B_{\leq 1}S\subseteq S} (where B ≤ 1 := { a ∈ K : | a | ≤ 1 } {\displaystyle B_{\leq 1}:=\{a\in \mathbb {K} :|a|\leq 1\}} ). S = B ≤ 1 S . {\displaystyle S=B_{\leq 1}S.} For every s ∈ S , {\displaystyle s\in S,} S ∩ K s = B ≤ 1 ( S ∩ K s ) . {\displaystyle S\cap \mathbb {K} s=B_{\leq 1}(S\cap \mathbb {K} s).} K s = span ⁡ { s } {\displaystyle \mathbb {K} s=\operatorname {span} \{s\}} is a 0 {\displaystyle 0} (if s = 0 {\displaystyle s=0} ) or 1 {\displaystyle 1} (if s ≠ 0 {\displaystyle s\neq 0} ) dimensional vector subspace of X . {\displaystyle X.} If R := S ∩ K s {\displaystyle R:=S\cap \mathbb {K} s} then the above equality becomes R = B ≤ 1 R , {\displaystyle R=B_{\leq 1}R,} which is exactly the previous condition for a set to be balanced. 
Thus, S {\displaystyle S} is balanced if and only if for every s ∈ S , {\displaystyle s\in S,} S ∩ K s {\displaystyle S\cap \mathbb {K} s} is a balanced set (according to any of the previous defining conditions). For every 1-dimensional vector subspace Y {\displaystyle Y} of span ⁡ S , {\displaystyle \operatorname {span} S,} S ∩ Y {\displaystyle S\cap Y} is a balanced set (according to any defining condition other than this one). For every s ∈ S , {\displaystyle s\in S,} there exists some 0 ≤ r ≤ ∞ {\displaystyle 0\leq r\leq \infty } such that S ∩ K s = B r s {\displaystyle S\cap \mathbb {K} s=B_{r}s} or S ∩ K s = B ≤ r s . {\displaystyle S\cap \mathbb {K} s=B_{\leq r}s.} S {\displaystyle S} is a balanced subset of span ⁡ S {\displaystyle \operatorname {span} S} (according to any defining condition of "balanced" other than this one). Thus S {\displaystyle S} is a balanced subset of X {\displaystyle X} if and only if it is balanced subset of every (equivalently, of some) vector space over the field K {\displaystyle \mathbb {K} } that contains S . {\displaystyle S.} So assuming that the field K {\displaystyle \mathbb {K} } is clear from context, this justifies writing " S {\displaystyle S} is balanced" without mentioning any vector space. If S {\displaystyle S} is a convex set then this list may be extended to include: a S ⊆ S {\displaystyle aS\subseteq S} for all scalars a {\displaystyle a} satisfying | a | = 1. {\displaystyle |a|=1.} If K = R {\displaystyle \mathbb {K} =\mathbb {R} } then this list may be extended to include: S {\displaystyle S} is symmetric (meaning − S = S {\displaystyle -S=S} ) and [ 0 , 1 ) S ⊆ S . {\displaystyle [0,1)S\subseteq S.} === Balanced hull === bal ⁡ S = ⋃ | a | ≤ 1 a S = B ≤ 1 S {\displaystyle \operatorname {bal} S~=~\bigcup _{|a|\leq 1}aS=B_{\leq 1}S} The balanced hull of a subset S {\displaystyle S} of X , {\displaystyle X,} denoted by bal ⁡ S , {\displaystyle \operatorname {bal} S,} is defined in any of the following equivalent ways: Definition: bal ⁡ S {\displaystyle \operatorname {bal} S} is the smallest (with respect to ⊆ {\displaystyle \,\subseteq \,} ) balanced subset of X {\displaystyle X} containing S . {\displaystyle S.} bal ⁡ S {\displaystyle \operatorname {bal} S} is the intersection of all balanced sets containing S . {\displaystyle S.} bal ⁡ S = ⋃ | a | ≤ 1 ( a S ) . {\displaystyle \operatorname {bal} S=\bigcup _{|a|\leq 1}(aS).} bal ⁡ S = B ≤ 1 S . {\displaystyle \operatorname {bal} S=B_{\leq 1}S.} === Balanced core === balcore ⁡ S = { ⋂ | a | ≥ 1 a S if 0 ∈ S ∅ if 0 ∉ S {\displaystyle \operatorname {balcore} S~=~{\begin{cases}\displaystyle \bigcap _{|a|\geq 1}aS&{\text{ if }}0\in S\\\varnothing &{\text{ if }}0\not \in S\\\end{cases}}} The balanced core of a subset S {\displaystyle S} of X , {\displaystyle X,} denoted by balcore ⁡ S , {\displaystyle \operatorname {balcore} S,} is defined in any of the following equivalent ways: Definition: balcore ⁡ S {\displaystyle \operatorname {balcore} S} is the largest (with respect to ⊆ {\displaystyle \,\subseteq \,} ) balanced subset of S . {\displaystyle S.} balcore ⁡ S {\displaystyle \operatorname {balcore} S} is the union of all balanced subsets of S . {\displaystyle S.} balcore ⁡ S = ∅ {\displaystyle \operatorname {balcore} S=\varnothing } if 0 ∉ S {\displaystyle 0\not \in S} while balcore ⁡ S = ⋂ | a | ≥ 1 ( a S ) {\displaystyle \operatorname {balcore} S=\bigcap _{|a|\geq 1}(aS)} if 0 ∈ S . {\displaystyle 0\in S.} == Examples == The empty set is a balanced set. 
As is any vector subspace of any (real or complex) vector space. In particular, { 0 } {\displaystyle \{0\}} is always a balanced set. Any non-empty set that does not contain the origin is not balanced and furthermore, the balanced core of such a set will equal the empty set. Normed and topological vector spaces The open and closed balls centered at the origin in a normed vector space are balanced sets. If p {\displaystyle p} is a seminorm (or norm) on a vector space X {\displaystyle X} then for any constant c > 0 , {\displaystyle c>0,} the set { x ∈ X : p ( x ) ≤ c } {\displaystyle \{x\in X:p(x)\leq c\}} is balanced. If S ⊆ X {\displaystyle S\subseteq X} is any subset and B 1 := { a ∈ K : | a | < 1 } {\displaystyle B_{1}:=\{a\in \mathbb {K} :|a|<1\}} then B 1 S {\displaystyle B_{1}S} is a balanced set. In particular, if U ⊆ X {\displaystyle U\subseteq X} is any balanced neighborhood of the origin in a topological vector space X {\displaystyle X} then Int X ⁡ U ⊆ B 1 U = ⋃ 0 < | a | < 1 a U ⊆ U . {\displaystyle \operatorname {Int} _{X}U~\subseteq ~B_{1}U~=~\bigcup _{0<|a|<1}aU~\subseteq ~U.} Balanced sets in R {\displaystyle \mathbb {R} } and C {\displaystyle \mathbb {C} } Let K {\displaystyle \mathbb {K} } be the field of real numbers R {\displaystyle \mathbb {R} } or complex numbers C , {\displaystyle \mathbb {C} ,} let | ⋅ | {\displaystyle |\cdot |} denote the absolute value on K , {\displaystyle \mathbb {K} ,} and let X := K {\displaystyle X:=\mathbb {K} } denote the vector space over K . {\displaystyle \mathbb {K} .} So for example, if K := C {\displaystyle \mathbb {K} :=\mathbb {C} } is the field of complex numbers then X = K = C {\displaystyle X=\mathbb {K} =\mathbb {C} } is a 1-dimensional complex vector space whereas if K := R {\displaystyle \mathbb {K} :=\mathbb {R} } then X = K = R {\displaystyle X=\mathbb {K} =\mathbb {R} } is a 1-dimensional real vector space. The balanced subsets of X = K {\displaystyle X=\mathbb {K} } are exactly the following: ∅ {\displaystyle \varnothing } X {\displaystyle X} { 0 } {\displaystyle \{0\}} { x ∈ X : | x | < r } {\displaystyle \{x\in X:|x|<r\}} for some real r > 0 {\displaystyle r>0} { x ∈ X : | x | ≤ r } {\displaystyle \{x\in X:|x|\leq r\}} for some real r > 0. {\displaystyle r>0.} Consequently, both the balanced core and the balanced hull of every set of scalars are equal to one of the sets listed above. In C {\displaystyle \mathbb {C} } , the balanced sets are C {\displaystyle \mathbb {C} } itself, the empty set and the open and closed discs centered at zero. Contrariwise, in the two dimensional Euclidean space there are many more balanced sets: any line segment with midpoint at the origin will do. As a result, C {\displaystyle \mathbb {C} } and R 2 {\displaystyle \mathbb {R} ^{2}} are entirely different as far as scalar multiplication is concerned. Balanced sets in R 2 {\displaystyle \mathbb {R} ^{2}} Throughout, let X = R 2 {\displaystyle X=\mathbb {R} ^{2}} (so X {\displaystyle X} is a vector space over R {\displaystyle \mathbb {R} } ) and let B ≤ 1 {\displaystyle B_{\leq 1}} be the closed unit ball in X {\displaystyle X} centered at the origin. If x 0 ∈ X = R 2 {\displaystyle x_{0}\in X=\mathbb {R} ^{2}} is non-zero, and L := R x 0 , {\displaystyle L:=\mathbb {R} x_{0},} then the set R := B ≤ 1 ∪ L {\displaystyle R:=B_{\leq 1}\cup L} is a closed, symmetric, and balanced neighborhood of the origin in X .
{\displaystyle X.} More generally, if C {\displaystyle C} is any closed subset of X {\displaystyle X} such that ( 0 , 1 ) C ⊆ C , {\displaystyle (0,1)C\subseteq C,} then S := B ≤ 1 ∪ C ∪ ( − C ) {\displaystyle S:=B_{\leq 1}\cup C\cup (-C)} is a closed, symmetric, and balanced neighborhood of the origin in X . {\displaystyle X.} This example can be generalized to R n {\displaystyle \mathbb {R} ^{n}} for any integer n ≥ 1. {\displaystyle n\geq 1.} Let B ⊆ R 2 {\displaystyle B\subseteq \mathbb {R} ^{2}} be the union of the line segment between the points ( − 1 , 0 ) {\displaystyle (-1,0)} and ( 1 , 0 ) {\displaystyle (1,0)} and the line segment between ( 0 , − 1 ) {\displaystyle (0,-1)} and ( 0 , 1 ) . {\displaystyle (0,1).} Then B {\displaystyle B} is balanced but not convex. Nor is B {\displaystyle B} absorbing (despite the fact that span ⁡ B = R 2 {\displaystyle \operatorname {span} B=\mathbb {R} ^{2}} is the entire vector space). For every 0 ≤ t ≤ π , {\displaystyle 0\leq t\leq \pi ,} let r t {\displaystyle r_{t}} be any positive real number and let B t {\displaystyle B^{t}} be the (open or closed) line segment in X := R 2 {\displaystyle X:=\mathbb {R} ^{2}} between the points ( cos ⁡ t , sin ⁡ t ) {\displaystyle (\cos t,\sin t)} and − ( cos ⁡ t , sin ⁡ t ) . {\displaystyle -(\cos t,\sin t).} Then the set B = ⋃ 0 ≤ t < π r t B t {\displaystyle B=\bigcup _{0\leq t<\pi }r_{t}B^{t}} is a balanced and absorbing set but it is not necessarily convex. The balanced hull of a closed set need not be closed. Take for instance the graph of x y = 1 {\displaystyle xy=1} in X = R 2 . {\displaystyle X=\mathbb {R} ^{2}.} The next example shows that the balanced hull of a convex set may fail to be convex (however, the convex hull of a balanced set is always balanced). For an example, let the convex subset be S := [ − 1 , 1 ] × { 1 } , {\displaystyle S:=[-1,1]\times \{1\},} which is a horizontal closed line segment lying above the x − {\displaystyle x-} axis in X := R 2 . {\displaystyle X:=\mathbb {R} ^{2}.} The balanced hull bal ⁡ S {\displaystyle \operatorname {bal} S} is a non-convex subset that is "hour glass shaped" and equal to the union of two closed and filled isosceles triangles T 1 {\displaystyle T_{1}} and T 2 , {\displaystyle T_{2},} where T 2 = − T 1 {\displaystyle T_{2}=-T_{1}} and T 1 {\displaystyle T_{1}} is the filled triangle whose vertices are the origin together with the endpoints of S {\displaystyle S} (said differently, T 1 {\displaystyle T_{1}} is the convex hull of S ∪ { ( 0 , 0 ) } {\displaystyle S\cup \{(0,0)\}} while T 2 {\displaystyle T_{2}} is the convex hull of ( − S ) ∪ { ( 0 , 0 ) } {\displaystyle (-S)\cup \{(0,0)\}} ). === Sufficient conditions === A set T {\displaystyle T} is balanced if and only if it is equal to its balanced hull bal ⁡ T {\displaystyle \operatorname {bal} T} or to its balanced core balcore ⁡ T , {\displaystyle \operatorname {balcore} T,} in which case all three of these sets are equal: T = bal ⁡ T = balcore ⁡ T . {\displaystyle T=\operatorname {bal} T=\operatorname {balcore} T.} The Cartesian product of a family of balanced sets is balanced in the product space of the corresponding vector spaces (over the same field K {\displaystyle \mathbb {K} } ). The balanced hull of a compact (respectively, totally bounded, bounded) set has the same property. The convex hull of a balanced set is convex and balanced (that is, it is absolutely convex). However, the balanced hull of a convex set may fail to be convex (a counter-example is given above).
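The hour-glass counter-example can be checked mechanically. The following is a minimal numerical sketch (not drawn from the cited sources): it fixes the concrete set S = [−1, 1] × {1} in the real plane and uses the fact that a point (u, v) lies in bal S exactly when (u, v) = a(x, 1) for some |a| ≤ 1 and x ∈ [−1, 1], which forces a = v and |u| ≤ |v|.

```python
# Membership test for bal(S) with S = [-1, 1] x {1}, derived by solving
# (u, v) = a*(x, 1) with |a| <= 1 and x in [-1, 1]: a = v, |v| <= 1, |u| <= |v|.
def in_balanced_hull(u, v):
    """Return True if (u, v) lies in bal([-1, 1] x {1})."""
    return abs(v) <= 1 and abs(u) <= abs(v)

p = (1.0, 1.0)                                  # endpoint of S
q = (1.0, -1.0)                                 # equals (-1) * (-1, 1), also in bal S
mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)    # midpoint (1, 0)

assert in_balanced_hull(*p) and in_balanced_hull(*q)
assert not in_balanced_hull(*mid)               # bal S is not convex
```

Plotting the accepted points reproduces the two filled triangles T 1 {\displaystyle T_{1}} and T 2 {\displaystyle T_{2}} described above.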
Arbitrary unions of balanced sets are balanced, and the same is true of arbitrary intersections of balanced sets. Scalar multiples and (finite) Minkowski sums of balanced sets are again balanced. Images and preimages of balanced sets under linear maps are again balanced. Explicitly, if L : X → Y {\displaystyle L:X\to Y} is a linear map and B ⊆ X {\displaystyle B\subseteq X} and C ⊆ Y {\displaystyle C\subseteq Y} are balanced sets, then L ( B ) {\displaystyle L(B)} and L − 1 ( C ) {\displaystyle L^{-1}(C)} are balanced sets. === Balanced neighborhoods === In any topological vector space, the closure of a balanced set is balanced. The union of the origin { 0 } {\displaystyle \{0\}} and the topological interior of a balanced set is balanced. Therefore, the topological interior of a balanced neighborhood of the origin is balanced. However, { ( z , w ) ∈ C 2 : | z | ≤ | w | } {\displaystyle \left\{(z,w)\in \mathbb {C} ^{2}:|z|\leq |w|\right\}} is a balanced subset of X = C 2 {\displaystyle X=\mathbb {C} ^{2}} that contains the origin ( 0 , 0 ) ∈ X {\displaystyle (0,0)\in X} but whose (nonempty) topological interior does not contain the origin and is therefore not a balanced set. Similarly for real vector spaces, if T {\displaystyle T} denotes the convex hull of ( 0 , 0 ) {\displaystyle (0,0)} and ( ± 1 , 1 ) {\displaystyle (\pm 1,1)} (a filled triangle whose vertices are these three points) then B := T ∪ ( − T ) {\displaystyle B:=T\cup (-T)} is an (hour glass shaped) balanced subset of X := R 2 {\displaystyle X:=\mathbb {R} ^{2}} whose non-empty topological interior does not contain the origin and so is not a balanced set (and although the set { ( 0 , 0 ) } ∪ Int X ⁡ B {\displaystyle \{(0,0)\}\cup \operatorname {Int} _{X}B} formed by adding the origin is balanced, it is neither an open set nor a neighborhood of the origin). Every neighborhood (respectively, convex neighborhood) of the origin in a topological vector space X {\displaystyle X} contains a balanced (respectively, convex and balanced) open neighborhood of the origin. In fact, the following construction produces such balanced sets. Given W ⊆ X , {\displaystyle W\subseteq X,} the symmetric set ⋂ | u | = 1 u W ⊆ W {\displaystyle \bigcap _{|u|=1}uW\subseteq W} will be convex (respectively, closed, balanced, bounded, a neighborhood of the origin, an absorbing subset of X {\displaystyle X} ) whenever this is true of W . {\displaystyle W.} It will be a balanced set if W {\displaystyle W} is star shaped at the origin, which is true, for instance, when W {\displaystyle W} is convex and contains 0. {\displaystyle 0.} In particular, if W {\displaystyle W} is a convex neighborhood of the origin then ⋂ | u | = 1 u W {\displaystyle \bigcap _{|u|=1}uW} will be a balanced convex neighborhood of the origin and so its topological interior will be a balanced convex open neighborhood of the origin. Suppose that W {\displaystyle W} is a convex and absorbing subset of X . {\displaystyle X.} Then D := ⋂ | u | = 1 u W {\displaystyle D:=\bigcap _{|u|=1}uW} will be a convex, balanced, and absorbing subset of X , {\displaystyle X,} which guarantees that the Minkowski functional p D : X → R {\displaystyle p_{D}:X\to \mathbb {R} } of D {\displaystyle D} will be a seminorm on X , {\displaystyle X,} thereby making ( X , p D ) {\displaystyle \left(X,p_{D}\right)} into a seminormed space that carries its canonical pseudometrizable topology.
The set of scalar multiples r D {\displaystyle rD} as r {\displaystyle r} ranges over { 1 2 , 1 3 , 1 4 , … } {\displaystyle \left\{{\tfrac {1}{2}},{\tfrac {1}{3}},{\tfrac {1}{4}},\ldots \right\}} (or over any other set of non-zero scalars having 0 {\displaystyle 0} as a limit point) forms a neighborhood basis of absorbing disks at the origin for this locally convex topology. If X {\displaystyle X} is a topological vector space and if this convex absorbing subset W {\displaystyle W} is also a bounded subset of X , {\displaystyle X,} then the same will be true of the absorbing disk D := ⋂ | u | = 1 u W ; {\displaystyle D:={\textstyle \bigcap \limits _{|u|=1}}uW;} if in addition D {\displaystyle D} does not contain any non-trivial vector subspace then p D {\displaystyle p_{D}} will be a norm and ( X , p D ) {\displaystyle \left(X,p_{D}\right)} will form what is known as an auxiliary normed space. If this normed space is a Banach space then D {\displaystyle D} is called a Banach disk. == Properties == Properties of balanced sets A balanced set is not empty if and only if it contains the origin. By definition, a set is absolutely convex if and only if it is convex and balanced. Every balanced set is star-shaped (at 0) and a symmetric set. If B {\displaystyle B} is a balanced subset of X {\displaystyle X} then: for any scalars c {\displaystyle c} and d , {\displaystyle d,} if | c | ≤ | d | {\displaystyle |c|\leq |d|} then c B ⊆ d B {\displaystyle cB\subseteq dB} and c B = | c | B . {\displaystyle cB=|c|B.} Thus if c {\displaystyle c} and d {\displaystyle d} are any scalars then ( c B ) ∩ ( d B ) = min { | c | , | d | } B . {\displaystyle (cB)\cap (dB)=\min _{}\{|c|,|d|\}B.} B {\displaystyle B} is absorbing in X {\displaystyle X} if and only if for all x ∈ X , {\displaystyle x\in X,} there exists r > 0 {\displaystyle r>0} such that x ∈ r B . {\displaystyle x\in rB.} for any 1-dimensional vector subspace Y {\displaystyle Y} of X , {\displaystyle X,} the set B ∩ Y {\displaystyle B\cap Y} is convex and balanced. If B {\displaystyle B} is not empty and if Y {\displaystyle Y} is a 1-dimensional vector subspace of span ⁡ B {\displaystyle \operatorname {span} B} then B ∩ Y {\displaystyle B\cap Y} is either { 0 } {\displaystyle \{0\}} or else it is absorbing in Y . {\displaystyle Y.} for any x ∈ X , {\displaystyle x\in X,} if B ∩ span ⁡ x {\displaystyle B\cap \operatorname {span} x} contains more than one point then it is a convex and balanced neighborhood of 0 {\displaystyle 0} in the 1-dimensional vector space span ⁡ x {\displaystyle \operatorname {span} x} when this space is endowed with the Hausdorff Euclidean topology; and the set B ∩ R x {\displaystyle B\cap \mathbb {R} x} is a convex balanced subset of the real vector space R x {\displaystyle \mathbb {R} x} that contains the origin. Properties of balanced hulls and balanced cores For any collection S {\displaystyle {\mathcal {S}}} of subsets of X , {\displaystyle X,} bal ⁡ ( ⋃ S ∈ S S ) = ⋃ S ∈ S bal ⁡ S and balcore ⁡ ( ⋂ S ∈ S S ) = ⋂ S ∈ S balcore ⁡ S . {\displaystyle \operatorname {bal} \left(\bigcup _{S\in {\mathcal {S}}}S\right)=\bigcup _{S\in {\mathcal {S}}}\operatorname {bal} S\quad {\text{ and }}\quad \operatorname {balcore} \left(\bigcap _{S\in {\mathcal {S}}}S\right)=\bigcap _{S\in {\mathcal {S}}}\operatorname {balcore} S.} In any topological vector space, the balanced hull of any open neighborhood of the origin is again open. 
If X {\displaystyle X} is a Hausdorff topological vector space and if K {\displaystyle K} is a compact subset of X {\displaystyle X} then the balanced hull of K {\displaystyle K} is compact. If a set is closed (respectively, convex, absorbing, a neighborhood of the origin) then the same is true of its balanced core. For any subset S ⊆ X {\displaystyle S\subseteq X} and any scalar c , {\displaystyle c,} bal ⁡ ( c S ) = c bal ⁡ S = | c | bal ⁡ S . {\displaystyle \operatorname {bal} (c\,S)=c\operatorname {bal} S=|c|\operatorname {bal} S.} For any scalar c ≠ 0 , {\displaystyle c\neq 0,} balcore ⁡ ( c S ) = c balcore ⁡ S = | c | balcore ⁡ S . {\displaystyle \operatorname {balcore} (c\,S)=c\operatorname {balcore} S=|c|\operatorname {balcore} S.} This equality holds for c = 0 {\displaystyle c=0} if and only if S ⊆ { 0 } . {\displaystyle S\subseteq \{0\}.} Thus if 0 ∈ S {\displaystyle 0\in S} or S = ∅ {\displaystyle S=\varnothing } then balcore ⁡ ( c S ) = c balcore ⁡ S = | c | balcore ⁡ S {\displaystyle \operatorname {balcore} (c\,S)=c\operatorname {balcore} S=|c|\operatorname {balcore} S} for every scalar c . {\displaystyle c.} == Related notions == A function p : X → [ 0 , ∞ ) {\displaystyle p:X\to [0,\infty )} on a real or complex vector space is said to be a balanced function if it satisfies any of the following equivalent conditions: p ( a x ) ≤ p ( x ) {\displaystyle p(ax)\leq p(x)} whenever a {\displaystyle a} is a scalar satisfying | a | ≤ 1 {\displaystyle |a|\leq 1} and x ∈ X . {\displaystyle x\in X.} p ( a x ) ≤ p ( b x ) {\displaystyle p(ax)\leq p(bx)} whenever a {\displaystyle a} and b {\displaystyle b} are scalars satisfying | a | ≤ | b | {\displaystyle |a|\leq |b|} and x ∈ X . {\displaystyle x\in X.} { x ∈ X : p ( x ) ≤ t } {\displaystyle \{x\in X:p(x)\leq t\}} is a balanced set for every non-negative real t ≥ 0. {\displaystyle t\geq 0.} If p {\displaystyle p} is a balanced function then p ( a x ) = p ( | a | x ) {\displaystyle p(ax)=p(|a|x)} for every scalar a {\displaystyle a} and vector x ∈ X ; {\displaystyle x\in X;} so in particular, p ( u x ) = p ( x ) {\displaystyle p(ux)=p(x)} for every unit length scalar u {\displaystyle u} (satisfying | u | = 1 {\displaystyle |u|=1} ) and every x ∈ X . {\displaystyle x\in X.} Using u := − 1 {\displaystyle u:=-1} shows that every balanced function is a symmetric function. A real-valued function p : X → R {\displaystyle p:X\to \mathbb {R} } is a seminorm if and only if it is a balanced sublinear function. == See also == Absolutely convex set – Convex and balanced set Absorbing set – Set that can be "inflated" to reach any point Bounded set (topological vector space) – Generalization of boundedness Convex set – In geometry, set whose intersection with every line is a single line segment Star domain – Property of point sets in Euclidean spaces Symmetric set – Property of group subsets (mathematics) Topological vector space – Vector space with a notion of nearness == References == Proofs === Sources === Bourbaki, Nicolas (1987) [1981]. Topological Vector Spaces: Chapters 1–5. Éléments de mathématique. Translated by Eggleston, H.G.; Madan, S. Berlin New York: Springer-Verlag. ISBN 3-540-13627-4. OCLC 17499190. Conway, John (1990). A course in functional analysis. Graduate Texts in Mathematics. Vol. 96 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-97245-9. OCLC 21195908. Dunford, Nelson; Schwartz, Jacob T. (1988). Linear Operators. Pure and applied mathematics. Vol. 1. New York: Wiley-Interscience. ISBN 978-0-471-60848-6. OCLC 18412261. 
Edwards, Robert E. (1995). Functional Analysis: Theory and Applications. New York: Dover Publications. ISBN 978-0-486-68143-6. OCLC 30593138. Jarchow, Hans (1981). Locally convex spaces. Stuttgart: B.G. Teubner. ISBN 978-3-519-02224-4. OCLC 8210342. Köthe, Gottfried (1983) [1969]. Topological Vector Spaces I. Grundlehren der mathematischen Wissenschaften. Vol. 159. Translated by Garling, D.J.H. New York: Springer Science & Business Media. ISBN 978-3-642-64988-2. MR 0248498. OCLC 840293704. Köthe, Gottfried (1979). Topological Vector Spaces II. Grundlehren der mathematischen Wissenschaften. Vol. 237. New York: Springer Science & Business Media. ISBN 978-0-387-90400-9. OCLC 180577972. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Robertson, Alex P.; Robertson, Wendy J. (1980). Topological Vector Spaces. Cambridge Tracts in Mathematics. Vol. 53. Cambridge England: Cambridge University Press. ISBN 978-0-521-29882-7. OCLC 589250. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Schechter, Eric (October 24, 1996). Handbook of Analysis and Its Foundations. Academic Press. ISBN 978-0-08-053299-8. Swartz, Charles (1992). An introduction to Functional Analysis. New York: M. Dekker. ISBN 978-0-8247-8643-4. OCLC 24909067. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
Wikipedia:Banks–Zaks fixed point#0
In quantum chromodynamics (and also N = 1 super quantum chromodynamics) with massless flavors, if the number of flavors, Nf, is sufficiently small (i.e. small enough to guarantee asymptotic freedom, depending on the number of colors), the theory can flow to an interacting conformal fixed point of the renormalization group. If the value of the coupling at that point is less than one (i.e. one can perform perturbation theory in weak coupling), then the fixed point is called a Banks–Zaks fixed point. The existence of the fixed point was first reported in 1974 by Belavin and Migdal and by Caswell, and later used by Banks and Zaks in their analysis of the phase structure of vector-like gauge theories with massless fermions. The name Caswell–Banks–Zaks fixed point is also used. More specifically, suppose that we find that the beta function of a theory up to two loops has the form β ( g ) = − b 0 g 3 + b 1 g 5 + O ( g 7 ) {\displaystyle \beta (g)=-b_{0}g^{3}+b_{1}g^{5}+{\mathcal {O}}(g^{7})\,} where b 0 {\displaystyle b_{0}} and b 1 {\displaystyle b_{1}} are positive constants. Then there exists a value g = g ∗ {\displaystyle g=g_{\ast }} such that β ( g ∗ ) = 0 {\displaystyle \beta (g_{\ast })=0} : g ∗ 2 = b 0 b 1 . {\displaystyle g_{\ast }^{2}={\frac {b_{0}}{b_{1}}}.} If we can arrange b 0 {\displaystyle b_{0}} to be smaller than b 1 {\displaystyle b_{1}} , then we have g ∗ 2 < 1 {\displaystyle g_{\ast }^{2}<1} . It follows that when the theory flows to the IR it is a conformal, weakly coupled theory with coupling g ∗ {\displaystyle g_{\ast }} . For the case of a non-Abelian gauge theory with gauge group S U ( N c ) {\displaystyle SU(N_{c})} and Dirac fermions in the fundamental representation of the gauge group for the flavored particles we have b 0 = 1 16 π 2 1 3 ( 11 N c − 2 N f ) and b 1 = − 1 ( 16 π 2 ) 2 ( 34 3 N c 2 − 1 2 N f ( 2 N c 2 − 1 N c + 20 3 N c ) ) {\displaystyle b_{0}={\frac {1}{16\pi ^{2}}}{\frac {1}{3}}(11N_{c}-2N_{f})\;\;\;\;{\text{ and }}\;\;\;\;b_{1}=-{\frac {1}{(16\pi ^{2})^{2}}}\left({\frac {34}{3}}N_{c}^{2}-{\frac {1}{2}}N_{f}\left(2{\frac {N_{c}^{2}-1}{N_{c}}}+{\frac {20}{3}}N_{c}\right)\right)} where N c {\displaystyle N_{c}} is the number of colors and N f {\displaystyle N_{f}} the number of flavors. Then N f {\displaystyle N_{f}} should lie just below 11 2 N c {\displaystyle {\tfrac {11}{2}}N_{c}} in order for the Banks–Zaks fixed point to appear. Note that this fixed point only occurs if, in addition to the previous requirement on N f {\displaystyle N_{f}} (which guarantees asymptotic freedom), 11 2 N c > N f > 34 N c 3 ( 13 N c 2 − 3 ) {\displaystyle {\frac {11}{2}}N_{c}>N_{f}>{\frac {34N_{c}^{3}}{(13N_{c}^{2}-3)}}} where the lower bound comes from requiring b 1 > 0 {\displaystyle b_{1}>0} . This way b 1 {\displaystyle b_{1}} remains positive while − b 0 {\displaystyle -b_{0}} is still negative (see first equation in article) and one can solve β ( g ) = 0 {\displaystyle \beta (g)=0} with real solutions for g {\displaystyle g} . The coefficient b 1 {\displaystyle b_{1}} was first correctly computed by Caswell, while the earlier paper by Belavin and Migdal has a wrong answer. == See also == Beta function == References == T. J. Hollowood, "Renormalization Group and Fixed Points in Quantum Field Theory", Springer, 2013, ISBN 978-3-642-36311-5.
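As a concrete illustration of the formulas above, the following sketch evaluates the two-loop coefficients and the fixed-point coupling g*² = b₀/b₁ for one choice of colors and flavors inside the window; the specific values Nc = 3, Nf = 16 are only an illustrative assumption, not values taken from the references.

```python
import math

def two_loop_coefficients(Nc, Nf):
    """b0 and b1 for SU(Nc) with Nf fundamental Dirac flavors, in the normalization quoted above."""
    b0 = (11 * Nc - 2 * Nf) / (3 * 16 * math.pi ** 2)
    b1 = -(34 / 3 * Nc ** 2
           - Nf / 2 * (2 * (Nc ** 2 - 1) / Nc + 20 * Nc / 3)) / (16 * math.pi ** 2) ** 2
    return b0, b1

def banks_zaks_window(Nc):
    """Range of Nf for which b0 > 0 (asymptotic freedom) and b1 > 0."""
    return 34 * Nc ** 3 / (13 * Nc ** 2 - 3), 11 * Nc / 2

Nc, Nf = 3, 16                        # illustrative choice near the top of the window
lo, hi = banks_zaks_window(Nc)        # roughly 8.05 < Nf < 16.5 for Nc = 3
b0, b1 = two_loop_coefficients(Nc, Nf)
g_star_squared = b0 / b1              # zero of the truncated two-loop beta function
print(f"window: {lo:.2f} < Nf < {hi:.2f},  g*^2 = {g_star_squared:.3f}")
```

For this choice g*² comes out well below one, consistent with the weak-coupling criterion stated above.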
Wikipedia:Bannihatti Parameshwarappa Dakshayani#0
Bannihatti (BP) Parameshwarappa Dakshayani is the former group director of the Flight Dynamics and Space Navigation groups of the Indian Space Research Organisation Satellite Centre. == Early life and education == Dakshayani was born and raised in Bhadravati, Karnataka. She was encouraged to study engineering by her father, but he thought a bachelor's degree would be sufficient. She studied mathematics at the University of Mysore and earned a master's degree in 1981. After graduating she worked at Sir Vishveshwaraiah Institute of Science & Technology teaching maths. In 1998 she completed a master's degree in aerospace engineering at the Indian Institute of Science. == Career == Dakshayani was appointed to the Indian Space Research Organisation Satellite Centre in 1984. She was assigned to computer programming and orbital dynamics. She had never seen a computer when she applied, so she had to teach herself during the evenings. The software she designed for trajectory generation was used in several space missions. She was made director for the Flight Dynamics and Space Navigation groups. Around 27% of the staff involved with the development and design of satellites were women. She was responsible for the low Earth orbit, geosynchronous orbit and interplanetary missions of the Indian Space Research Organisation Satellite Centre. She was project manager for the Space Capsule Recovery Experiment and deputy project manager for the Mars Orbiter Mission. After completing an analysis of orbit stability, she identified that a highly eccentric orbit would provide better position accuracy. She won an Indian Space Research Organisation Satellite Centre merit award for her work on the Mars Orbiter Mission. The mission entered the desired Martian orbit in September 2014. In 2018 she appeared on the BBC World Service show My Indian Life with Kalki Koechlin. == References ==
Wikipedia:Bar complex#0
In mathematics, the bar complex, also called the bar resolution, bar construction, standard resolution, or standard complex, is a way of constructing resolutions in homological algebra. It was first introduced for the special case of algebras over a commutative ring by Samuel Eilenberg and Saunders Mac Lane (1953) and Henri Cartan and Eilenberg (1956, IX.6) and has since been generalized in many ways. The name "bar complex" comes from the fact that Eilenberg & Mac Lane (1953) used a vertical bar | as a shortened form of the tensor product ⊗ {\displaystyle \otimes } in their notation for the complex. == Definition == Let R {\displaystyle R} be an algebra over a field k {\displaystyle k} , let M 1 {\displaystyle M_{1}} be a right R {\displaystyle R} -module, and let M 2 {\displaystyle M_{2}} be a left R {\displaystyle R} -module. Then, one can form the bar complex Bar R ⁡ ( M 1 , M 2 ) {\displaystyle \operatorname {Bar} _{R}(M_{1},M_{2})} given by ⋯ → M 1 ⊗ k R ⊗ k R ⊗ k M 2 → M 1 ⊗ k R ⊗ k M 2 → M 1 ⊗ k M 2 → 0 , {\displaystyle \cdots \rightarrow M_{1}\otimes _{k}R\otimes _{k}R\otimes _{k}M_{2}\rightarrow M_{1}\otimes _{k}R\otimes _{k}M_{2}\rightarrow M_{1}\otimes _{k}M_{2}\rightarrow 0\,,} with the differential d ( m 1 ⊗ r 1 ⊗ ⋯ ⊗ r n ⊗ m 2 ) = m 1 r 1 ⊗ ⋯ ⊗ r n ⊗ m 2 + ∑ i = 1 n − 1 ( − 1 ) i m 1 ⊗ r 1 ⊗ ⋯ ⊗ r i r i + 1 ⊗ ⋯ ⊗ r n ⊗ m 2 + ( − 1 ) n m 1 ⊗ r 1 ⊗ ⋯ ⊗ r n m 2 {\displaystyle {\begin{aligned}d(m_{1}\otimes r_{1}\otimes \cdots \otimes r_{n}\otimes m_{2})&=m_{1}r_{1}\otimes \cdots \otimes r_{n}\otimes m_{2}\\&+\sum _{i=1}^{n-1}(-1)^{i}m_{1}\otimes r_{1}\otimes \cdots \otimes r_{i}r_{i+1}\otimes \cdots \otimes r_{n}\otimes m_{2}+(-1)^{n}m_{1}\otimes r_{1}\otimes \cdots \otimes r_{n}m_{2}\end{aligned}}} == Resolutions == The bar complex is useful because it provides a canonical way of producing (free) resolutions of modules over a ring. However, often these resolutions are very large, and can be prohibitively difficult to use for performing actual computations. === Free Resolution of a Module === Let M {\displaystyle M} be a left R {\displaystyle R} -module, with R {\displaystyle R} a unital k {\displaystyle k} -algebra. Then, the bar complex Bar R ⁡ ( R , M ) {\displaystyle \operatorname {Bar} _{R}(R,M)} gives a resolution of M {\displaystyle M} by free left R {\displaystyle R} -modules. Explicitly, the complex is ⋯ → R ⊗ k R ⊗ k R ⊗ k M → R ⊗ k R ⊗ k M → R ⊗ k M → 0 , {\displaystyle \cdots \rightarrow R\otimes _{k}R\otimes _{k}R\otimes _{k}M\rightarrow R\otimes _{k}R\otimes _{k}M\rightarrow R\otimes _{k}M\rightarrow 0\,,} This complex is composed of free left R {\displaystyle R} -modules, since each subsequent term is obtained by taking the free left R {\displaystyle R} -module on the underlying vector space of the previous term. To see that this gives a resolution of M {\displaystyle M} , consider the modified complex ⋯ → R ⊗ k R ⊗ k R ⊗ k M → R ⊗ k R ⊗ k M → R ⊗ k M → M → 0 , {\displaystyle \cdots \rightarrow R\otimes _{k}R\otimes _{k}R\otimes _{k}M\rightarrow R\otimes _{k}R\otimes _{k}M\rightarrow R\otimes _{k}M\rightarrow M\rightarrow 0\,,} Then, the above bar complex being a resolution of M {\displaystyle M} is equivalent to this extended complex having trivial homology. One can show this by constructing an explicit homotopy h n : R ⊗ k n ⊗ k M → R ⊗ k ( n + 1 ) ⊗ k M {\displaystyle h_{n}:R^{\otimes _{k}n}\otimes _{k}M\to R^{\otimes _{k}(n+1)}\otimes _{k}M} between the identity and 0. 
This homotopy is given by h n ( r 1 ⊗ ⋯ ⊗ r n ⊗ m ) = ∑ i = 1 n − 1 ( − 1 ) i + 1 r 1 ⊗ ⋯ ⊗ r i − 1 ⊗ 1 ⊗ r i ⊗ ⋯ ⊗ r n ⊗ m {\displaystyle {\begin{aligned}h_{n}(r_{1}\otimes \cdots \otimes r_{n}\otimes m)&=\sum _{i=1}^{n-1}(-1)^{i+1}r_{1}\otimes \cdots \otimes r_{i-1}\otimes 1\otimes r_{i}\otimes \cdots \otimes r_{n}\otimes m\end{aligned}}} One can similarly construct a resolution of a right R {\displaystyle R} -module N {\displaystyle N} by free right modules with the complex Bar R ⁡ ( N , R ) {\displaystyle \operatorname {Bar} _{R}(N,R)} . Notice that, in the case one wants to resolve R {\displaystyle R} as a module over itself, the above two complexes are the same, and actually give a resolution of R {\displaystyle R} by R {\displaystyle R} - R {\displaystyle R} -bimodules. This provides one with a slightly smaller resolution of R {\displaystyle R} by free R {\displaystyle R} - R {\displaystyle R} -bimodules than the naive option Bar R e ⁡ ( R e , M ) {\displaystyle \operatorname {Bar} _{R^{e}}(R^{e},M)} . Here we are using the equivalence between R {\displaystyle R} - R {\displaystyle R} -bimodules and R e {\displaystyle R^{e}} -modules, where R e = R ⊗ R op {\displaystyle R^{e}=R\otimes R^{\operatorname {op} }} , see bimodules for more details. == The Normalized Bar Complex == The normalized (or reduced) standard complex replaces A ⊗ A ⊗ ⋯ ⊗ A ⊗ A {\displaystyle A\otimes A\otimes \cdots \otimes A\otimes A} with A ⊗ ( A / K ) ⊗ ⋯ ⊗ ( A / K ) ⊗ A {\displaystyle A\otimes (A/K)\otimes \cdots \otimes (A/K)\otimes A} . == Monads == == See also == Koszul complex == References == Cartan, Henri; Eilenberg, Samuel (1956), Homological algebra, Princeton Mathematical Series, vol. 19, Princeton University Press, ISBN 978-0-691-04991-5, MR 0077480 {{citation}}: ISBN / Date incompatibility (help) Eilenberg, Samuel; Mac Lane, Saunders (1953), "On the groups of H ( Π , n ) {\displaystyle H(\Pi ,n)} . I", Annals of Mathematics, Second Series, 58: 55–106, doi:10.2307/1969820, ISSN 0003-486X, JSTOR 1969820, MR 0056295 Ginzburg, Victor (2005). "Lectures on Noncommutative Geometry". arXiv:math.AG/0506603. Weibel, Charles (1994), An Introduction to Homological Algebra, Cambridge Studies in Advanced Mathematics, vol. 38, Cambridge: Cambridge University Press, ISBN 0-521-43500-5
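As an illustration of the differential defined above (this sketch is not drawn from the references), one can implement it for a small concrete algebra and verify that applying it twice gives zero. The example assumes k = Q, takes R to be the group algebra Q[Z/2] with basis {e, g}, and sets M₁ = M₂ = R, so that a product of basis elements is again a basis element and chains can be stored as dictionaries of coefficients.

```python
from fractions import Fraction
from collections import defaultdict

# Basis multiplication in the group algebra Q[Z/2]: e is the identity and g*g = e.
def mult(a, b):
    return 'e' if a == b else 'g'

def bar_differential(chain):
    """Bar differential on chains in M1 (x) R^n (x) M2 with M1 = M2 = R = Q[Z/2].

    A chain is a dict sending basis tensors, written as tuples
    (m1, r1, ..., rn, m2) of labels 'e'/'g', to rational coefficients."""
    out = defaultdict(Fraction)
    for tensor, coeff in chain.items():
        n = len(tensor) - 2                      # number of middle R-factors
        if n == 0:                               # bottom term M1 (x) M2 maps to 0
            continue
        # first term: multiply m1 into r1 (sign +1)
        out[(mult(tensor[0], tensor[1]),) + tensor[2:]] += coeff
        # middle terms: multiply r_i into r_{i+1}, sign (-1)^i
        for i in range(1, n):
            t = tensor[:i] + (mult(tensor[i], tensor[i + 1]),) + tensor[i + 2:]
            out[t] += coeff * (-1) ** i
        # last term: multiply r_n into m2, sign (-1)^n
        out[tensor[:n] + (mult(tensor[n], tensor[n + 1]),)] += coeff * (-1) ** n
    return {t: c for t, c in out.items() if c != 0}

# The differential squares to zero, as required of a chain complex.
chain = {('e', 'g', 'g', 'e'): Fraction(1), ('g', 'g', 'e', 'g'): Fraction(-3)}
assert bar_differential(bar_differential(chain)) == {}
```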
Wikipedia:Barbara Csima#0
Barbara Flora Csima is a Canadian mathematician specializing in computability theory and mathematical logic. She is a professor of pure mathematics and associate chair for graduate studies at the University of Waterloo, and the 2024 president of the Canadian Mathematical Society. == Education and career == Csima studied mathematics and actuarial science as an undergraduate at the University of Toronto, graduating with honours in 1998. She went to the University of Chicago in the US for graduate study in mathematics, earned a master's degree there in 1999, and completed her Ph.D. in 2003. Her dissertation, Applications of Computability Theory to Prime Models and Differential Geometry, was supervised by Robert I. Soare. After a postdoctoral stint as H. C. Wang Assistant Professor of Mathematics at Cornell University from 2003 to 2005, she obtained a regular-rank faculty position as assistant professor of pure mathematics at the University of Waterloo in 2005. She was promoted to associate professor in 2010 and full professor in 2015. Csima was elected as president of the Canadian Mathematical Society for a term beginning in 2024. == Recognition == Csima was elected as a Fellow of the Canadian Mathematical Society in 2023. == References == == External links == Home page
Wikipedia:Barbara Keyfitz#0
Barbara Lee Keyfitz is a Canadian-American mathematician, the Dr. Charles Saltzer Professor of Mathematics at Ohio State University. In her research, she studies nonlinear partial differential equations and associated conservation laws. == Professional career == Keyfitz did her undergraduate studies at the University of Toronto, and earned a Ph.D. in 1970 from New York University, under the supervision of Peter Lax. Before taking her present position at Ohio State, she taught at Columbia University, Princeton University, Arizona State University, and the University of Houston; at Houston, she was the John and Rebecca Moores Professor of Mathematics. She was also the director of the Fields Institute from 2004 to 2008. She was president of the Association for Women in Mathematics from 2005 to 2006, and in 2011 she became president of the International Council for Industrial and Applied Mathematics. She was Vice-President of the American Mathematical Society from 2011 - 2014. == Awards and honors == Keyfitz is the 2005 winner of the Krieger–Nelson Prize of the Canadian Mathematical Society, the 2011 Noether Lecturer of the Association for Women in Mathematics, the 2012 winner of the SIAM Prize for Distinguished Service to the Profession, and the 2012 AWM-SIAM Sonia Kovalevsky Lecturer. She was interviewed by Patricia Clark Kenschaft in her book Change is Possible:Stories of Women and Minorities in Mathematics. In 2012 she became a fellow of the American Mathematical Society. She is also a fellow of the American Association for the Advancement of Science, the Society for Industrial and Applied Mathematics and the Fields Institute. In 2017, she was selected as a fellow of the Association for Women in Mathematics in the inaugural class. == Publications == === Books edited === B. L. Keyfitz and H. C. Kranzer, eds., Nonstrictly Hyperbolic Conservation Laws, Contemporary Mathematics, 60, American Mathematical Society, Providence, 1987. B. L. Keyfitz and M. Shearer, eds., Nonlinear Evolution Equations that Change Type, IMA Series Volume 27, Springer Verlag, 1990. === Book chapter === B. L. Keyfitz, 'Hold that Light! Modeling of Traffic Flow by Differential Equations', in Six Themes on Variations, (R. Hardt and R. Forman, eds), American Mathematical Society, 2005. === Selected publications in refereed journals === B. L. Keyfitz, 'Solutions with shocks: an example of an L1 contractive semi-group', Comm. Pure Appl. Math. XXIV, (1971), 125-132. B. L. Keyfitz, R. E. Melnik and B. Grossman, 'An analysis of the leading-edge singularity in transonic small-disturbance theory', Quarterly Journal of Mechanics and Applied Mathematics, XXXI, (1978), 137-155. B. L. Keyfitz and H. C. Kranzer, 'Existence and uniqueness of entropy solutions to the Riemann problem for hyperbolic systems of two nonlinear conservation laws', Journal of Differential Equations, 27, (1978), 444-476. B. L. Keyfitz and H. C. Kranzer, 'The Riemann problem for a class of hyperbolic conservation laws exhibiting a parabolic degeneracy', Journal of Differential Equations, 47, (1983), 35-65. B. L. Keyfitz, 'Classification of one state variable bifurcation problems up to codimension seven', Dynamics and Stability of Systems, 1, (1986), 1-41. B. L. Keyfitz and G. G. Warnecke, `The existence of viscous profiles for transonic shocks', Communications in Partial Differential Equations, 16, (1991) 1197-1221. B. L. 
Keyfitz, 'A geometric theory of conservation laws which change type', Zeitschrift fur Angewandte Mathematik und Mechanik, 75, (1995), 571-581. B. L. Keyfitz and N. Keyfitz, 'The McKendrick Partial Differential Equation and its Uses in Epidemiology and Population Study', Mathematical and Computer Modelling, 26, (1997), 1-9. B. L. Keyfitz, 'Self-Similar Solutions of Two-Dimensional Conservation Laws', Journal of Hyperbolic Differential Equations, 1 (2004), 445-492. B. L. Keyfitz, 'The Fichera Function and Nonlinear Equations', Rendiconti Accademia delle Scienze detta dei XL, Memorie di Matematica e Applicazioni, XXX (2006), 83-94. B. L. Keyfitz, 'Singular Shocks: Retrospective and Prospective', Confluentes Mathematici, 3 (2011), 445-470. J. Holmes, B. L. Keyfitz and F. Tiglay, 'Nonuniform dependence on initial data for compressible gas dynamics: The Cauchy problem on R2', SIAM Journal of Mathematical Analysis, 50 (2018), 1237-1254. == Personal == Keyfitz was born in Ottawa, and is the daughter of Canadian demographer Nathan Keyfitz. She is married to Marty Golubitsky and has two children. == References ==
Wikipedia:Barbara Rokowska#0
Barbara Rokowska (1926-2012) was a Polish mathematician known for her work on Steiner systems and certain problems posed by Paul Erdős. She was a professor at Wrocław University of Science and Technology. Rokowska received an undergraduate degree in Polish from the University of Wrocław in 1951. She later began a second degree program at the University of Wrocław in mathematics. While pursuing her studies, she worked as a technical editor for mathematics journals including Colloquium Mathematicum. That journal received a submission from Erdős involving an estimate of a certain integral depending on k parameters. Though interesting, the submission was hastily written and incomplete. Rokowska's master's thesis filled in the details of Erdős' work. One of her first papers, written in collaboration with Andrzej Schinzel, treated a number theory problem also posed by Erdős. Rokowska received her PhD, on Steiner systems, in 1966. Her doctoral advisor was Czesław Ryll-Nardzewski. She had 5 PhD students of her own. == References ==
Wikipedia:Bareiss algorithm#0
In mathematics, the Bareiss algorithm, named after Erwin Bareiss, is an algorithm to calculate the determinant or the echelon form of a matrix with integer entries using only integer arithmetic; any divisions that are performed are guaranteed to be exact (there is no remainder). The method can also be used to compute the determinant of matrices with (approximated) real entries, avoiding the introduction of any round-off errors beyond those already present in the input. == Overview == The definition of the determinant uses only multiplication, addition and subtraction, so the determinant is an integer whenever all matrix entries are integers. However, computing the determinant directly from the definition or from the Leibniz formula is impractical, as it requires O(n!) operations. Gaussian elimination has O(n^3) complexity, but introduces division, which results in round-off errors when implemented using floating point numbers. Round-off errors can be avoided by keeping all the numbers as exact integer fractions instead of floating point, but then the size of each entry grows exponentially with the number of rows. Bareiss addressed the question of performing an integer-preserving elimination while keeping the magnitudes of the intermediate coefficients reasonably small. He suggested two algorithms: Division-free algorithm — performs matrix reduction to triangular form without any division operation. Fraction-free algorithm — uses division to keep the intermediate entries smaller, but due to Sylvester's identity the transformation is still integer-preserving (the division has zero remainder). For completeness, Bareiss also suggests fraction-producing, multiplication-free elimination methods. == The algorithm == The program structure of this algorithm is a simple triple loop, as in standard Gaussian elimination. However, in this case the matrix is modified so that each entry Mk,k contains the leading principal minor [M]k,k. The correctness of the algorithm is easily shown by induction on k. If the assumption about principal minors turns out to be false, e.g. if Mk−1,k−1 = 0 and some Mi,k−1 ≠ 0 (i = k,...,n), then we can exchange the k−1-th row with the i-th row and change the sign of the final answer. == Analysis == During execution of the Bareiss algorithm, every integer that is computed is the determinant of a submatrix of the input matrix, which allows the size of these integers to be bounded using the Hadamard inequality. Otherwise, the Bareiss algorithm may be viewed as a variant of Gaussian elimination and needs roughly the same number of arithmetic operations. It follows that, for an n × n matrix of maximum (absolute) value 2^L for each entry, the Bareiss algorithm runs in O(n^3) elementary operations with an O(n^(n/2) 2^(nL)) bound on the absolute value of intermediate values needed. Its computational complexity is thus O(n^5 L^2 (log(n)^2 + L^2)) when using elementary arithmetic, or O(n^4 L (log(n) + L) log(log(n) + L)) by using fast multiplication. == References ==
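The triple-loop structure described above can be written down directly. The following is a sketch of the fraction-free variant in Python (not taken from Bareiss's paper; the row-swap handling and the example matrix are illustrative): every intermediate entry stays an integer because each division by the previous pivot is exact.

```python
def bareiss_determinant(M):
    """Determinant of an integer matrix by fraction-free (Bareiss) elimination.

    M is a list of lists of integers; the function works on a copy.  After
    step k, entry A[k][k] holds the leading principal minor of order k + 1,
    and every division below is exact by Sylvester's identity."""
    A = [row[:] for row in M]
    n = len(A)
    sign = 1
    prev = 1                                   # pivot of the previous step
    for k in range(n - 1):
        if A[k][k] == 0:                       # swap in a usable pivot row
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0                       # the whole column is zero
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]

# Example: a 3 x 3 integer matrix; no fractions appear at any point.
print(bareiss_determinant([[2, 3, 1],
                           [4, 7, 5],
                           [6, 8, 9]]))        # prints 18
```

The division-free variant mentioned above follows the same loop structure but omits the exact division by the previous pivot, at the cost of faster entry growth.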
Wikipedia:Barlow's formula#0
Barlow's formula (called "Kesselformel" in German) relates the internal pressure that a pipe can withstand to its dimensions and the strength of its material. This approximate formula is named after Peter Barlow, an English mathematician. P = 2 σ θ s / D {\displaystyle P={\frac {2\sigma _{\theta }s}{D}}} , where P {\displaystyle P} : internal pressure, σ θ {\displaystyle \sigma _{\theta }} : allowable stress, s {\displaystyle s} : wall thickness, D {\displaystyle D} : outside diameter. This formula (DIN 2413) figures prominently in the design of autoclaves and other pressure vessels. == Other formulations == The design of a complex pressure containment system involves much more than the application of Barlow's formula. For example, in 100 countries the ASME BPVC code stipulates the requirements for the design and testing of pressure vessels. The formula is also common in the pipeline industry to verify that pipe used for gathering, transmission, and distribution lines can safely withstand operating pressures. The design factor is multiplied by the resulting pressure, which gives the maximum allowable operating pressure (MAOP) for the pipeline. In the United States, this design factor depends on the class location, which is defined in DOT Part 192; there are four class locations, each corresponding to its own design factor. == External links == Barlow's Formula Calculator Barlow's Equation and Calculator Barlow's Formula Solver Barlow's Formula Calculator for Copper Tubes == References ==
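As a worked example of the formula (the pipe dimensions and stress below are illustrative assumptions, not values from the cited standards), the internal pressure follows directly from the allowable stress, wall thickness and outside diameter; a design factor for the relevant class location would then be applied to the result.

```python
def barlow_pressure(sigma_allow, wall_thickness, outside_diameter):
    """Internal pressure from Barlow's formula, P = 2 * sigma * s / D.

    Use consistent units: with stress in MPa and lengths in mm, P is in MPa."""
    return 2.0 * sigma_allow * wall_thickness / outside_diameter

# Hypothetical pipe: 219.1 mm outside diameter, 8.2 mm wall, 241 MPa allowable stress.
P = barlow_pressure(sigma_allow=241.0, wall_thickness=8.2, outside_diameter=219.1)
print(f"P = {P:.1f} MPa")            # about 18.0 MPa before any design factor
```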
Wikipedia:Bartel Leendert van der Waerden#0
Bartel Leendert van der Waerden (Dutch: [ˈbɑrtə(l) ˈleːndərt fɑn dər ˈʋaːrdə(n)]; 2 February 1903 – 12 January 1996) was a Dutch mathematician and historian of mathematics. == Biography == === Education and early career === Van der Waerden learned advanced mathematics at the University of Amsterdam and the University of Göttingen, from 1919 until 1926. He was much influenced by Emmy Noether at Göttingen, Germany. Amsterdam awarded him a Ph.D. for a thesis on algebraic geometry, supervised by Hendrick de Vries. Göttingen awarded him the habilitation in 1928. In that year, at the age of 25, he accepted a professorship at the University of Groningen. In his 27th year, Van der Waerden published his Moderne Algebra, an influential two-volume treatise on abstract algebra, still cited, and perhaps the first treatise to treat the subject as a comprehensive whole. This work systematized an ample body of research by Emmy Noether, David Hilbert, Richard Dedekind, and Emil Artin. In the following year, 1931, he was appointed professor at the University of Leipzig. In July 1929 he married the sister of mathematician Franz Rellich, Camilla Juliana Anna, and they had three children. === Nazi Germany === After the Nazis seized power, and through World War II, Van der Waerden remained at Leipzig, and passed up opportunities to leave Nazi Germany for Princeton and Utrecht. However, he was critical of the Nazis and refused to give up his Dutch nationality, both of which led to difficulties for him. === Postwar career === Following the war, Van der Waerden was repatriated to the Netherlands rather than returning to Leipzig (then under Soviet control), but struggled to find a position in the Dutch academic system, in part because his time in Germany made his politics suspect and in part due to Brouwer's opposition to Hilbert's school of mathematics. After a year visiting Johns Hopkins University and two years as a part-time professor, in 1950, Van der Waerden filled the chair in mathematics at the University of Amsterdam. In 1951, he moved to the University of Zurich, where he spent the rest of his career, supervising more than 40 Ph.D. students. In 1949, Van der Waerden became member of the Royal Netherlands Academy of Arts and Sciences, in 1951 this was changed to a foreign membership. In 1973 he received the Pour le Mérite. == Contributions == Van der Waerden is mainly remembered for his work on abstract algebra. He also wrote on algebraic geometry, topology, number theory, geometry, combinatorics, analysis, probability and statistics, and quantum mechanics (he and Heisenberg had been colleagues at Leipzig). In later years, he turned to the history of mathematics and science. His historical writings include Ontwakende wetenschap (1950), which was translated into English as Science Awakening (1954), Sources of Quantum Mechanics (1967), Geometry and Algebra in Ancient Civilizations (1983), and A History of Algebra (1985). Van der Waerden has over 1000 academic descendants, most of them through three of his students, David van Dantzig (Ph.D. Groningen 1931), Herbert Seifert (Ph.D. Leipzig 1932), and Hans Richter (Ph.D. Leipzig 1936, co-advised by Paul Koebe). == See also == == Notes == == References == Alexander Soifer (2009), The Mathematical Coloring Book, Springer-Verlag ISBN 978-0-387-74640-1. Soifer devotes four chapters and over 100 pages to biographical material about van der Waerden, some of which he had also published earlier in the journal Geombinatorics. 
Alexander Soifer (2015) The Scholar and the State: In Search of Van der Waerden, Springer books ISBN 978-3-0348-0711-1 == Further reading == Schlote, K.-H., 2005, "Moderne Algebra" in Grattan-Guinness, I., ed., Landmark Writings in Western Mathematics. Elsevier: 901–16. O'Connor, John J.; Robertson, Edmund F., "Bartel Leendert van der Waerden", MacTutor History of Mathematics Archive, University of St Andrews Dold-Samplonius, Yvonne (March 1997). "Interview with Bartel Leendert van Der Waerden (conducted in 1993)" (PDF). Notices of the American Mathematical Society. 44 (3): 313–320. Freudenthal, H., 1962, "Review: B. L. van der Waerden, Science Awakening" in Bull. Amer. Math. Soc., 68 (6):543–45. == External links == Bartel Leendert van der Waerden at the Mathematics Genealogy Project
Wikipedia:Bartolomeo Sovero#0
Bartolomeo Sovero (1576 – 23 July 1629) was a Swiss mathematician. == Biography == Sovero was born in Corbières in 1576. In 1594 he entered the Jesuit order and studied logic, mathematics and theology at the Jesuit College of Brera. In 1604 he left the Society of Jesus. In 1624 Sovero replaced Giovanni Camillo Glorioso in the chair of mathematics at the University of Padua. In his main work, Curvi ac recti proportio, Sovero proves to be a precursor of the geometry of indivisibles and of the method called "proportional parallel movement", which he elucidates with an algebraic formulation. The work of Sovero gave rise to two important polemics: the first with Glorioso, the first successor of Galileo in Padua; the second between Guldin and Cavalieri on the subject of the latter's originality. == Works == Sovero, Bartolomeo (1630). Curvi ac recti proportio. Patavii: Varisco Varisco. == References == == Bibliography == Busulini, Bruno (1957–1958). "Le figure analoghe di Bartolomeo Sovero". Atti e Memorie dell'Accademia Patavina di Scienze, Lettere ed Arti. LXX (2): 35–88. Nenci, Elio (2019). "SOVERO, Bartolomeo". Dizionario Biografico degli Italiani, Volume 93: Sisto V–Stammati (in Italian). Rome: Istituto dell'Enciclopedia Italiana. ISBN 978-8-81200032-6.
Wikipedia:Baruch Barzel#0
Baruch Barzel (Hebrew: ברוך ברזל; March 19, 1976) is an Israeli physicist and applied mathematician at Bar-Ilan University, a member of the Gonda Multidisciplinary Brain Research Center and of the Bar-Ilan Data Science Institute. His main research areas are statistical physics, complex systems, nonlinear dynamics and network science. In 2013 he introduced the concept of universality in the dynamics of complex networks, showing that complex systems from different domains condense into discrete forms, or universality classes, of dynamic behavior. In the following years, Barzel and colleagues developed a theoretical framework to predict the observed behavior of complex networked systems: their patterns of information flow; the timescales of their signal propagation; their resilience against failures and disruptions and their recoverability. During the COVID-19 Pandemic Barzel's lab published the alternating quarantine strategy to mitigate the spread of SARS-CoV-2 alongside continuous socioeconomic activity. The strategy was implemented by several agencies in Israel and around the world. == Academic career == Barzel completed his Ph.D. in physics at the Hebrew University of Jerusalem as a Hoffman Fellow. He then pursued his postdoctoral training at the Center for Complex Network Research at Northeastern University and at the Channing Division of Network Medicine, Harvard Medical School. Barzel is a recipient of the Racah prize (2007) and the Krill prize of the Wolf Foundation (2019). Barzel is also an active public lecturer on science and on Judaism, and presents a weekly corner on Jewish thought on Israeli Public Broadcasting Corporation . Dr. Barzel's research focuses on the dynamic behavior of complex networks, uncovering universal principles that govern the dynamics of diverse systems, such as disease spreading, gene regulatory networks, protein interactions or population dynamics. == Selected publications == Barzel, B.; Biham, O. (2011). "Binomial Moment Equations for Stochastic Reaction Systems". Physical Review Letters. 106 (15): 150602. arXiv:1011.0012. Bibcode:2011PhRvL.106o0602B. doi:10.1103/PhysRevLett.106.150602. PMID 21568538. S2CID 293255. Barzel, B.; Barabási, A.-L. (2013). "Universality in Network Dynamics". Nature Physics. 9 (10): 673–681. Bibcode:2013NatPh...9..673B. doi:10.1038/nphys2741. PMC 3852675. PMID 24319492. Barzel, B.; Barabási, A.-L. (2013). "Network link prediction by global silencing of indirect correlations". Nature Biotechnology. 31 (8): 720–725. doi:10.1038/nbt.2601. PMC 3740009. PMID 23851447. S2CID 5470514. Barzel, Y.-Y. Liu; Barabási, A.-L. (2015). "Constructing minimal models for complex system dynamics". Nature Communications. 6: 7186. Bibcode:2015NatCo...6.7186B. doi:10.1038/ncomms8186. PMID 25990707. Yan, G.; Tsekenis, G.; Liu, Y.-Y.; Slotine, J.J.; Barabási, A.-L. (2015). "Spectrum of controlling and observing complex networks". Nature Physics. 11 (9): 779–786. arXiv:1503.01160. Bibcode:2015NatPh..11..779Y. doi:10.1038/nphys3422. S2CID 14168596. J. Gao, B. Barzel and A.-L. Barabási, "Universal resilience patterns in complex networks", Nature 530, 307 (2016) U. Harush and B. Barzel, "Dynamic patterns of information flow in complex networks", Nature Communications 8, 2181 (2017) C. Hens, U. Harush, S. Haber, R. Cohen and B. Barzel, "Spatiotemporal signal propagation in complex networks", Nature Physics (2019) D. Meidan, N. Schulmann, R. Cohen, S. Haber, E. Yaniv, R. Sarid and B. Barzel, Alternating quarantine for sustainable epidemic mitigation. 
Nature Communications 12, 220 (2021). H. Sanhedrai, J. Gao, A. Bashan, M. Schwartz, S. Havlin and B. Barzel, Reviving a failed network through microscopic interventions. Nature Physics 18, 338 (2022). C. Meena, C. Hens, S. Acharyya, S. Haber, S. Boccaletti and B. Barzel, Emergent stability in complex network dynamics. Nature Physics (2023). == Public lectures and media coverage == Universal resilience patterns in complex networks in Ynet (Hebrew) Bar-Ilan Nitzotzot meeting 2015 (Hebrew) "Connecting the world in six steps" Interview on Channel 20, 2019 (Hebrew) Network Earth, 2019 Universality in network dynamics in 2Physics Predicting the tipping point of complex systems in The Munich eye A new framework to predict spatiotemporal signal propagation in complex networks in Phys.org Profile article on the Complex Network Dynamics lab in Makor Rishon (Hebrew) Krav Mada radio lecture series in Galei Zahal (Hebrew) Israeli experts propose radical post-corona exit strategy in Israel21 An alternative quarantine strategy in El Economista A well-calculated proposal: mathematical proposal to fight COVID-19 and get out of the economic blockade in Aula Magna More here. == References == == External links == Professional Website Research Publications
Wikipedia:Baruch Berliner#0
Baruch Berliner (Hebrew: ברוך ברלינר; born in 1942) is an Israeli composer, mathematician and poet. He is the author of musical works, songs, books, and articles. == Biography == Baruch Berliner was born in Tel Aviv. He completed his doctoral studies in mathematics at the University of Zurich in Switzerland, where he also worked as an actuary at the Swiss reinsurance company Swiss Re, one of the largest reinsurance companies in the world, until 1990, when he returned with his family to Israel. From 1990 until his retirement in 2007, he served as a senior researcher at the Faculty of Management at Tel Aviv University and as the chairman of the Erhard Scientific Insurance Institute. As part of his work and research, he was invited to lecture at many universities and actuarial conferences. He wrote two books and over 100 articles on actuarial science, finance and economics. For years he has been writing poetry in Hebrew, German and English. Besides the work "The Creation of the World", his other works include "Abraham", a symphonic poem for philharmonic orchestra, narrator and male choir, as well as South American dances set to a waltz rhythm. In 2016 a memorial concert was held in Kyiv to mark the 75th anniversary of the Babi Yar massacre. At that concert, conducted by Alex Ansky, the Ukraine Symphony Orchestra played Berliner's works "Abraham" and "Cain and Abel", after which his works were performed at the Huberman Festival in Poland and in Bulgaria, the United States, Portugal, Russia, France, Serbia, Austria, Estonia, Kazakhstan, Moldova, Armenia and Romania. His work "Yakov's Ladder" also received its premiere. Twenty concerts that were supposed to take place during 2020, including concerts in Leipzig and Hamburg in Germany, were canceled due to the coronavirus pandemic. In addition, a book of his humorous poems in German, Umgestülpter Humor, was published in 2020. In 2021, concerts of his works dealing with the Torah were presented for the first time in Muslim countries such as Turkey, Kyrgyzstan and Kazakhstan. In 2022, a new composition by Berliner for the Jewish prayer "El Malei Rachamim" was released as part of the soundtrack of the film "The Address on the Wall", released that year. With Alex Ansky recalling the visit and the 2016 concert in Kyiv, the film describes how quickly and dramatically Jewish life changed with the entry of the Nazis into Kyiv and the massacre in which about 100,000 of Kyiv's Jews were murdered. Out of the chaos emerge several characters drawn together by the war, including the German soldier Hans, who was recruited into the Nazi army against his will and whose innocent and gentle feelings stand in contrast to the Nazi cruelty. The film was shown at about 20 festivals around the world, including in Germany, Italy, Greece, Hungary, Cyprus, the United States and more. In 2023, Berliner's works were played in the demilitarized zone between North Korea and South Korea. In 2024, his symphonic poem Jacob's Dream was performed at Carnegie Hall in New York. == For further reading == Gideon Dukov, "This is my feeling at these concerts, that I am a messenger who conveys the Torah to the world", Makor Rishon.
October 3, 2021 == External links == Interview with Alex Ansky about the concert of Berliner's works, on the website (nrg) Criticism of a concert of Berliner's works, on the website (the stage) Jenny Elazari, the mathematician who fell in love with music: "I was born again" Yedioth Hasharon == References ==
Wikipedia:Barycentric coordinate system#0
In geometry, a barycentric coordinate system is a coordinate system in which the location of a point is specified by reference to a simplex (a triangle for points in a plane, a tetrahedron for points in three-dimensional space, etc.). The barycentric coordinates of a point can be interpreted as masses placed at the vertices of the simplex, such that the point is the center of mass (or barycenter) of these masses. These masses can be zero or negative; they are all positive if and only if the point is inside the simplex. Every point has barycentric coordinates, and their sum is never zero. Two tuples of barycentric coordinates specify the same point if and only if they are proportional; that is to say, if one tuple can be obtained by multiplying the elements of the other tuple by the same non-zero number. Therefore, barycentric coordinates are either considered to be defined up to multiplication by a nonzero constant, or normalized for summing to unity. Barycentric coordinates were introduced by August Möbius in 1827. They are special homogeneous coordinates. Barycentric coordinates are strongly related with Cartesian coordinates and, more generally, to affine coordinates (see Affine space § Relationship between barycentric and affine coordinates). Barycentric coordinates are particularly useful in triangle geometry for studying properties that do not depend on the angles of the triangle, such as Ceva's theorem, Routh's theorem, and Menelaus's theorem. In computer-aided design, they are useful for defining some kinds of Bézier surfaces. == Definition == Let A 0 , … , A n {\displaystyle A_{0},\ldots ,A_{n}} be n + 1 points in a Euclidean space, a flat or an affine space A {\displaystyle \mathbf {A} } of dimension n that are affinely independent; this means that there is no affine subspace of dimension n − 1 that contains all the points, or, equivalently that the points define a simplex. Given any point P ∈ A , {\displaystyle P\in \mathbf {A} ,} there are scalars a 0 , … , a n {\displaystyle a_{0},\ldots ,a_{n}} that are not all zero, such that ( a 0 + ⋯ + a n ) O P → = a 0 O A 0 → + ⋯ + a n O A n → , {\displaystyle (a_{0}+\cdots +a_{n}){\overset {}{\overrightarrow {OP}}}=a_{0}{\overset {}{\overrightarrow {OA_{0}}}}+\cdots +a_{n}{\overset {}{\overrightarrow {OA_{n}}}},} for any point O. (As usual, the notation A B → {\displaystyle {\overset {}{\overrightarrow {AB}}}} represents the translation vector or free vector that maps the point A to the point B.) The elements of a (n + 1) tuple ( a 0 : … : a n ) {\displaystyle (a_{0}:\dotsc :a_{n})} that satisfies this equation are called barycentric coordinates of P with respect to A 0 , … , A n . {\displaystyle A_{0},\ldots ,A_{n}.} The use of colons in the notation of the tuple means that barycentric coordinates are a sort of homogeneous coordinates, that is, the point is not changed if all coordinates are multiplied by the same nonzero constant. Moreover, the barycentric coordinates are also not changed if the auxiliary point O, the origin, is changed. The barycentric coordinates of a point are unique up to a scaling. That is, two tuples ( a 0 : … : a n ) {\displaystyle (a_{0}:\dotsc :a_{n})} and ( b 0 : … : b n ) {\displaystyle (b_{0}:\dotsc :b_{n})} are barycentric coordinates of the same point if and only if there is a nonzero scalar λ {\displaystyle \lambda } such that b i = λ a i {\displaystyle b_{i}=\lambda a_{i}} for every i. In some contexts, it is useful to constrain the barycentric coordinates of a point so that they are unique. 
This is usually achieved by imposing the condition ∑ a i = 1 , {\displaystyle \sum a_{i}=1,} or equivalently by dividing every a i {\displaystyle a_{i}} by the sum of all a i . {\displaystyle a_{i}.} These specific barycentric coordinates are called normalized or absolute barycentric coordinates. Sometimes, they are also called affine coordinates, although this term commonly refers to a slightly different concept. Sometimes, it is the normalized barycentric coordinates that are called barycentric coordinates. In this case, the above-defined coordinates are called homogeneous barycentric coordinates. With the above notation, the homogeneous barycentric coordinates of Ai are all zero, except the one of index i. When working over the real numbers (the above definition is also used for affine spaces over an arbitrary field), the points all of whose normalized barycentric coordinates are nonnegative form the convex hull of { A 0 , … , A n } , {\displaystyle \{A_{0},\ldots ,A_{n}\},} which is the simplex that has these points as its vertices. With the above notation, a tuple ( a 0 , … , a n ) {\displaystyle (a_{0},\ldots ,a_{n})} such that ∑ i = 0 n a i = 0 {\displaystyle \sum _{i=0}^{n}a_{i}=0} does not define any point, but the vector a 0 O A 0 → + ⋯ + a n O A n → {\displaystyle a_{0}{\overset {}{\overrightarrow {OA_{0}}}}+\cdots +a_{n}{\overset {}{\overrightarrow {OA_{n}}}}} is independent of the origin O. As the direction of this vector is not changed if all a i {\displaystyle a_{i}} are multiplied by the same scalar, the homogeneous tuple ( a 0 : … : a n ) {\displaystyle (a_{0}:\dotsc :a_{n})} defines a direction of lines, that is, a point at infinity. See below for more details. == Relationship with Cartesian or affine coordinates == Barycentric coordinates are strongly related to Cartesian coordinates and, more generally, affine coordinates. For a space of dimension n, these coordinate systems are defined relative to a point O, the origin, whose coordinates are zero, and n points A 1 , … , A n , {\displaystyle A_{1},\ldots ,A_{n},} whose coordinates are zero except the one of index i, which equals one. A point has coordinates ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} for such a coordinate system if and only if its normalized barycentric coordinates are ( 1 − x 1 − ⋯ − x n , x 1 , … , x n ) {\displaystyle (1-x_{1}-\cdots -x_{n},x_{1},\ldots ,x_{n})} relative to the points O , A 1 , … , A n . {\displaystyle O,A_{1},\ldots ,A_{n}.} The main advantage of barycentric coordinate systems is that they are symmetric with respect to the n + 1 defining points. They are therefore often useful for studying properties that are symmetric with respect to n + 1 points. On the other hand, distances and angles are difficult to express in general barycentric coordinate systems, and when they are involved, it is generally simpler to use a Cartesian coordinate system. == Relationship with projective coordinates == Homogeneous barycentric coordinates are also strongly related to some projective coordinates. However, this relationship is more subtle than in the case of affine coordinates, and, to be clearly understood, requires a coordinate-free definition of the projective completion of an affine space, and a definition of a projective frame. The projective completion of an affine space of dimension n is a projective space of the same dimension that contains the affine space as the complement of a hyperplane. The projective completion is unique up to an isomorphism.
The hyperplane is called the hyperplane at infinity, and its points are the points at infinity of the affine space. Given a projective space of dimension n, a projective frame is an ordered set of n + 2 points that are not contained in the same hyperplane. A projective frame defines a projective coordinate system such that the coordinates of the (n + 2)th point of the frame are all equal, and, otherwise, all coordinates of the ith point are zero, except the ith one. When constructing the projective completion from an affine coordinate system, one commonly defines it with respect to a projective frame consisting of the intersections with the hyperplane at infinity of the coordinate axes, the origin of the affine space, and the point that has all its affine coordinates equal to one. This implies that the points at infinity have their last coordinate equal to zero, and that the projective coordinates of a point of the affine space are obtained by completing its affine coordinates by one as (n + 1)th coordinate. When one has n + 1 points in an affine space that define a barycentric coordinate system, this is another projective frame of the projective completion that is convenient to choose. This frame consists of these points and their centroid, that is the point that has all its barycentric coordinates equal. In this case, the homogeneous barycentric coordinates of a point in the affine space are the same as the projective coordinates of this point. A point is at infinity if and only if the sum of its coordinates is zero. This point is in the direction of the vector defined at the end of § Definition. == Barycentric coordinates on triangles == In the context of a triangle, barycentric coordinates are also known as area coordinates or areal coordinates, because the coordinates of P with respect to triangle ABC are equivalent to the (signed) ratios of the areas of PBC, PCA and PAB to the area of the reference triangle ABC. Areal and trilinear coordinates are used for similar purposes in geometry. Barycentric or areal coordinates are extremely useful in engineering applications involving triangular subdomains. These make analytic integrals often easier to evaluate, and Gaussian quadrature tables are often presented in terms of area coordinates. Consider a triangle A B C {\displaystyle ABC} with vertices A = ( a 1 , a 2 ) {\displaystyle A=(a_{1},a_{2})} , B = ( b 1 , b 2 ) {\displaystyle B=(b_{1},b_{2})} , C = ( c 1 , c 2 ) {\displaystyle C=(c_{1},c_{2})} in the x,y-plane, R 2 {\displaystyle \mathbb {R} ^{2}} . One may regard points in R 2 {\displaystyle \mathbb {R} ^{2}} as vectors, so it makes sense to add or subtract them and multiply them by scalars. Each triangle A B C {\displaystyle ABC} has a signed area or sarea, which is plus or minus its area: sarea ⁡ ( A B C ) = ± area ⁡ ( A B C ) . {\displaystyle \operatorname {sarea} (ABC)=\pm \operatorname {area} (ABC).} The sign is plus if the path from A {\displaystyle A} to B {\displaystyle B} to C {\displaystyle C} then back to A {\displaystyle A} goes around the triangle in a counterclockwise direction. The sign is minus if the path goes around in a clockwise direction. Let P {\displaystyle P} be a point in the plane, and let ( λ 1 , λ 2 , λ 3 ) {\displaystyle (\lambda _{1},\lambda _{2},\lambda _{3})} be its normalized barycentric coordinates with respect to the triangle A B C {\displaystyle ABC} , so P = λ 1 A + λ 2 B + λ 3 C {\displaystyle P=\lambda _{1}A+\lambda _{2}B+\lambda _{3}C} and 1 = λ 1 + λ 2 + λ 3 . 
{\displaystyle 1=\lambda _{1}+\lambda _{2}+\lambda _{3}.} Normalized barycentric coordinates ( λ 1 , λ 2 , λ 3 ) {\displaystyle (\lambda _{1},\lambda _{2},\lambda _{3})} are also called areal coordinates because they represent ratios of signed areas of triangles: λ 1 = sarea ⁡ ( P B C ) / sarea ⁡ ( A B C ) λ 2 = sarea ⁡ ( A P C ) / sarea ⁡ ( A B C ) λ 3 = sarea ⁡ ( A B P ) / sarea ⁡ ( A B C ) . {\displaystyle {\begin{aligned}\lambda _{1}&=\operatorname {sarea} (PBC)/\operatorname {sarea} (ABC)\\\lambda _{2}&=\operatorname {sarea} (APC)/\operatorname {sarea} (ABC)\\\lambda _{3}&=\operatorname {sarea} (ABP)/\operatorname {sarea} (ABC).\end{aligned}}} One may prove these ratio formulas based on the facts that a triangle is half of a parallelogram, and the area of a parallelogram is easy to compute using a determinant. Specifically, let D = − A + B + C . {\displaystyle D=-A+B+C.} A B C D {\displaystyle ABCD} is a parallelogram because its pairs of opposite sides, represented by the pairs of displacement vectors D − C = B − A {\displaystyle D-C=B-A} , and D − B = C − A {\displaystyle D-B=C-A} , are parallel and congruent. Triangle A B C {\displaystyle ABC} is half of the parallelogram A B D C {\displaystyle ABDC} , so twice its signed area is equal to the signed area of the parallelogram, which is given by the 2 × 2 {\displaystyle 2\times 2} determinant det ( B − A , C − A ) {\displaystyle \det(B-A,C-A)} whose columns are the displacement vectors B − A {\displaystyle B-A} and C − A {\displaystyle C-A} : sarea ⁡ ( A B C D ) = det ( b 1 − a 1 c 1 − a 1 b 2 − a 2 c 2 − a 2 ) {\displaystyle \operatorname {sarea} (ABCD)=\det {\begin{pmatrix}b_{1}-a_{1}&c_{1}-a_{1}\\b_{2}-a_{2}&c_{2}-a_{2}\end{pmatrix}}} Expanding the determinant, using its alternating and multilinear properties, one obtains det ( B − A , C − A ) = det ( B , C ) − det ( A , C ) − det ( B , A ) + det ( A , A ) = det ( A , B ) + det ( B , C ) + det ( C , A ) {\displaystyle {\begin{aligned}\det(B-A,C-A)&=\det(B,C)-\det(A,C)-\det(B,A)+\det(A,A)\\&=\det(A,B)+\det(B,C)+\det(C,A)\end{aligned}}} so 2 sarea ⁡ ( A B C ) = det ( A , B ) + det ( B , C ) + det ( C , A ) . {\displaystyle 2\operatorname {sarea} (ABC)=\det(A,B)+\det(B,C)+\det(C,A).} Similarly, 2 sarea ⁡ ( P B C ) = det ( P , B ) + det ( B , C ) + det ( C , P ) {\displaystyle 2\operatorname {sarea} (PBC)=\det(P,B)+\det(B,C)+\det(C,P)} , To obtain the ratio of these signed areas, express P {\displaystyle P} in the second formula in terms of its barycentric coordinates: 2 sarea ⁡ ( P B C ) = det ( λ 1 A + λ 2 B + λ 3 C , B ) + det ( B , C ) + det ( C , λ 1 A + λ 2 B + λ 3 C ) = λ 1 det ( A , B ) + λ 3 det ( C , B ) + det ( B , C ) + λ 1 det ( C , A ) + λ 2 det ( C , B ) = λ 1 det ( A , B ) + λ 1 det ( C , A ) + ( 1 − λ 2 − λ 3 ) det ( B , C ) . {\displaystyle {\begin{aligned}2\operatorname {sarea} (PBC)&=\det(\lambda _{1}A+\lambda _{2}B+\lambda _{3}C,B)+\det(B,C)+\det(C,\lambda _{1}A+\lambda _{2}B+\lambda _{3}C)\\&=\lambda _{1}\det(A,B)+\lambda _{3}\det(C,B)+\det(B,C)+\lambda _{1}\det(C,A)+\lambda _{2}\det(C,B)\\&=\lambda _{1}\det(A,B)+\lambda _{1}\det(C,A)+(1-\lambda _{2}-\lambda _{3})\det(B,C)\end{aligned}}.} The barycentric coordinates are normalized so 1 = λ 1 + λ 2 + λ 3 {\displaystyle 1=\lambda _{1}+\lambda _{2}+\lambda _{3}} , hence λ 1 = ( 1 − λ 2 − λ 3 ) {\displaystyle \lambda _{1}=(1-\lambda _{2}-\lambda _{3})} . Plug that into the previous line to obtain 2 sarea ⁡ ( P B C ) = λ 1 ( det ( A , B ) + det ( B , C ) + det ( C , A ) ) = ( λ 1 ) ( 2 sarea ⁡ ( A B C ) ) . 
{\displaystyle {\begin{aligned}2\operatorname {sarea} (PBC)&=\lambda _{1}(\det(A,B)+\det(B,C)+\det(C,A))\\&=(\lambda _{1})(2\operatorname {sarea} (ABC)).\end{aligned}}} Therefore λ 1 = sarea ⁡ ( P B C ) / sarea ⁡ ( A B C ) {\displaystyle \lambda _{1}=\operatorname {sarea} (PBC)/\operatorname {sarea} (ABC)} . Similar calculations prove the other two formulas λ 2 = sarea ⁡ ( A P C ) / sarea ⁡ ( A B C ) {\displaystyle \lambda _{2}=\operatorname {sarea} (APC)/\operatorname {sarea} (ABC)} λ 3 = sarea ⁡ ( A B P ) / sarea ⁡ ( A B C ) {\displaystyle \lambda _{3}=\operatorname {sarea} (ABP)/\operatorname {sarea} (ABC)} . Trilinear coordinates ( γ 1 , γ 2 , γ 3 ) {\displaystyle (\gamma _{1},\gamma _{2},\gamma _{3})} of P {\displaystyle P} are signed distances from P {\displaystyle P} to the lines BC, AC, and AB, respectively. The sign of γ 1 {\displaystyle \gamma _{1}} is positive if P {\displaystyle P} and A {\displaystyle A} lie on the same side of BC, negative otherwise. The signs of γ 2 {\displaystyle \gamma _{2}} and γ 3 {\displaystyle \gamma _{3}} are assigned similarly. Let a = length ⁡ ( B C ) {\displaystyle a=\operatorname {length} (BC)} , b = length ⁡ ( C A ) {\displaystyle b=\operatorname {length} (CA)} , c = length ⁡ ( A B ) {\displaystyle c=\operatorname {length} (AB)} . Then γ 1 a = ± 2 sarea ⁡ ( P B C ) γ 2 b = ± 2 sarea ⁡ ( A P C ) γ 3 c = ± 2 sarea ⁡ ( A B P ) {\displaystyle {\begin{aligned}\gamma _{1}a&=\pm 2\operatorname {sarea} (PBC)\\\gamma _{2}b&=\pm 2\operatorname {sarea} (APC)\\\gamma _{3}c&=\pm 2\operatorname {sarea} (ABP)\end{aligned}}} where, as above, sarea stands for signed area. All three signs are plus if triangle ABC is positively oriented, minus otherwise. The relations between trilinear and barycentric coordinates are obtained by substituting these formulas into the above formulas that express barycentric coordinates as ratios of areas. Switching back and forth between the barycentric coordinates and other coordinate systems makes some problems much easier to solve. === Conversion between barycentric and Cartesian coordinates === ==== Edge approach ==== Given a point r {\displaystyle \mathbf {r} } in a triangle's plane one can obtain the barycentric coordinates λ 1 {\displaystyle \lambda _{1}} , λ 2 {\displaystyle \lambda _{2}} and λ 3 {\displaystyle \lambda _{3}} from the Cartesian coordinates ( x , y ) {\displaystyle (x,y)} or vice versa. We can write the Cartesian coordinates of the point r {\displaystyle \mathbf {r} } in terms of the Cartesian components of the triangle vertices r 1 {\displaystyle \mathbf {r} _{1}} , r 2 {\displaystyle \mathbf {r} _{2}} , r 3 {\displaystyle \mathbf {r} _{3}} where r i = ( x i , y i ) {\displaystyle \mathbf {r} _{i}=(x_{i},y_{i})} and in terms of the barycentric coordinates of r {\displaystyle \mathbf {r} } as x = λ 1 x 1 + λ 2 x 2 + λ 3 x 3 y = λ 1 y 1 + λ 2 y 2 + λ 3 y 3 {\displaystyle {\begin{aligned}x&=\lambda _{1}x_{1}+\lambda _{2}x_{2}+\lambda _{3}x_{3}\\[2pt]y&=\lambda _{1}y_{1}+\lambda _{2}y_{2}+\lambda _{3}y_{3}\end{aligned}}} That is, the Cartesian coordinates of any point are a weighted average of the Cartesian coordinates of the triangle's vertices, with the weights being the point's barycentric coordinates summing to unity. 
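The forward conversion just described is simply a weighted average of the vertex coordinates. The following is a minimal Python sketch of it, assuming NumPy is available; the function name and the example triangle are illustrative only and not part of the article.

```python
import numpy as np

def barycentric_to_cartesian(vertices, lambdas):
    """Map normalized barycentric coordinates to Cartesian coordinates.

    vertices -- (3, 2) array-like with the Cartesian coordinates of A, B, C.
    lambdas  -- (3,) array-like of barycentric coordinates summing to 1.
    """
    vertices = np.asarray(vertices, dtype=float)
    lambdas = np.asarray(lambdas, dtype=float)
    # x = l1*x1 + l2*x2 + l3*x3 and y = l1*y1 + l2*y2 + l3*y3
    return lambdas @ vertices

# Example: the normalized coordinates (1/3, 1/3, 1/3) give the centroid.
triangle = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
print(barycentric_to_cartesian(triangle, (1/3, 1/3, 1/3)))  # [1.333... 1.]
```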
To find the reverse transformation, from Cartesian coordinates to barycentric coordinates, we first substitute λ 3 = 1 − λ 1 − λ 2 {\displaystyle \lambda _{3}=1-\lambda _{1}-\lambda _{2}} into the above to obtain x = λ 1 x 1 + λ 2 x 2 + ( 1 − λ 1 − λ 2 ) x 3 y = λ 1 y 1 + λ 2 y 2 + ( 1 − λ 1 − λ 2 ) y 3 {\displaystyle {\begin{aligned}x&=\lambda _{1}x_{1}+\lambda _{2}x_{2}+(1-\lambda _{1}-\lambda _{2})x_{3}\\[2pt]y&=\lambda _{1}y_{1}+\lambda _{2}y_{2}+(1-\lambda _{1}-\lambda _{2})y_{3}\end{aligned}}} Rearranging, this is λ 1 ( x 1 − x 3 ) + λ 2 ( x 2 − x 3 ) + x 3 − x = 0 λ 1 ( y 1 − y 3 ) + λ 2 ( y 2 − y 3 ) + y 3 − y = 0 {\displaystyle {\begin{aligned}\lambda _{1}(x_{1}-x_{3})+\lambda _{2}(x_{2}-x_{3})+x_{3}-x&=0\\[2pt]\lambda _{1}(y_{1}-y_{3})+\lambda _{2}(y_{2}-\,y_{3})+y_{3}-\,y&=0\end{aligned}}} This linear transformation may be written more succinctly as T ⋅ λ = r − r 3 {\displaystyle \mathbf {T} \cdot \lambda =\mathbf {r} -\mathbf {r} _{3}} where λ {\displaystyle \lambda } is the vector of the first two barycentric coordinates, r {\displaystyle \mathbf {r} } is the vector of Cartesian coordinates, and T {\displaystyle \mathbf {T} } is a matrix given by T = ( x 1 − x 3 x 2 − x 3 y 1 − y 3 y 2 − y 3 ) {\displaystyle \mathbf {T} =\left({\begin{matrix}x_{1}-x_{3}&x_{2}-x_{3}\\y_{1}-y_{3}&y_{2}-y_{3}\end{matrix}}\right)} Now the matrix T {\displaystyle \mathbf {T} } is invertible, since r 1 − r 3 {\displaystyle \mathbf {r} _{1}-\mathbf {r} _{3}} and r 2 − r 3 {\displaystyle \mathbf {r} _{2}-\mathbf {r} _{3}} are linearly independent (if this were not the case, then r 1 {\displaystyle \mathbf {r} _{1}} , r 2 {\displaystyle \mathbf {r} _{2}} , and r 3 {\displaystyle \mathbf {r} _{3}} would be collinear and would not form a triangle). Thus, we can rearrange the above equation to get ( λ 1 λ 2 ) = T − 1 ( r − r 3 ) {\displaystyle \left({\begin{matrix}\lambda _{1}\\\lambda _{2}\end{matrix}}\right)=\mathbf {T} ^{-1}(\mathbf {r} -\mathbf {r} _{3})} Finding the barycentric coordinates has thus been reduced to finding the 2×2 inverse matrix of T {\displaystyle \mathbf {T} } , an easy problem. 
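Continuing the sketch above (same assumptions: Python with NumPy, illustrative names), the reverse conversion can be coded by building the matrix T described in the text and solving the 2×2 linear system rather than forming the inverse explicitly.

```python
import numpy as np

def cartesian_to_barycentric(vertices, point):
    """Return (l1, l2, l3) of `point` with respect to the triangle `vertices`.

    vertices -- (3, 2) array-like with rows r1, r2, r3.
    point    -- (2,) array-like, the Cartesian coordinates of r.
    """
    r1, r2, r3 = (np.asarray(v, dtype=float) for v in vertices)
    # The columns of T are r1 - r3 and r2 - r3, as in the text.
    T = np.column_stack((r1 - r3, r2 - r3))
    l1, l2 = np.linalg.solve(T, np.asarray(point, dtype=float) - r3)
    return l1, l2, 1.0 - l1 - l2

triangle = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
print(cartesian_to_barycentric(triangle, (1.0, 1.0)))
# All three values are positive, so (1, 1) lies strictly inside the triangle
# (see the section on determining location with respect to a triangle below).
```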
Explicitly, the formulae for the barycentric coordinates of point r {\displaystyle \mathbf {r} } in terms of its Cartesian coordinates (x, y) and in terms of the Cartesian coordinates of the triangle's vertices are: λ 1 = ( y 2 − y 3 ) ( x − x 3 ) + ( x 3 − x 2 ) ( y − y 3 ) det ( T ) = ( y 2 − y 3 ) ( x − x 3 ) + ( x 3 − x 2 ) ( y − y 3 ) ( y 2 − y 3 ) ( x 1 − x 3 ) + ( x 3 − x 2 ) ( y 1 − y 3 ) = ( r − r 3 ) × ( r 2 − r 3 ) ( r 1 − r 3 ) × ( r 2 − r 3 ) λ 2 = ( y 3 − y 1 ) ( x − x 3 ) + ( x 1 − x 3 ) ( y − y 3 ) det ( T ) = ( y 3 − y 1 ) ( x − x 3 ) + ( x 1 − x 3 ) ( y − y 3 ) ( y 2 − y 3 ) ( x 1 − x 3 ) + ( x 3 − x 2 ) ( y 1 − y 3 ) = ( r − r 3 ) × ( r 3 − r 1 ) ( r 1 − r 3 ) × ( r 2 − r 3 ) λ 3 = 1 − λ 1 − λ 2 = 1 − ( r − r 3 ) × ( r 2 − r 1 ) ( r 1 − r 3 ) × ( r 2 − r 3 ) = ( r − r 1 ) × ( r 1 − r 2 ) ( r 1 − r 3 ) × ( r 2 − r 3 ) {\displaystyle {\begin{aligned}\lambda _{1}=&\ {\frac {(y_{2}-y_{3})(x-x_{3})+(x_{3}-x_{2})(y-y_{3})}{\det(\mathbf {T} )}}\\[4pt]&={\frac {(y_{2}-y_{3})(x-x_{3})+(x_{3}-x_{2})(y-y_{3})}{(y_{2}-y_{3})(x_{1}-x_{3})+(x_{3}-x_{2})(y_{1}-y_{3})}}\\[4pt]&={\frac {(\mathbf {r} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf {r_{3}} )}{(\mathbf {r_{1}} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf {r_{3}} )}}\\[12pt]\lambda _{2}=&\ {\frac {(y_{3}-y_{1})(x-x_{3})+(x_{1}-x_{3})(y-y_{3})}{\det(\mathbf {T} )}}\\[4pt]&={\frac {(y_{3}-y_{1})(x-x_{3})+(x_{1}-x_{3})(y-y_{3})}{(y_{2}-y_{3})(x_{1}-x_{3})+(x_{3}-x_{2})(y_{1}-y_{3})}}\\[4pt]&={\frac {(\mathbf {r} -\mathbf {r_{3}} )\times (\mathbf {r_{3}} -\mathbf {r_{1}} )}{(\mathbf {r_{1}} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf {r_{3}} )}}\\[12pt]\lambda _{3}=&\ 1-\lambda _{1}-\lambda _{2}\\[4pt]&=1-{\frac {(\mathbf {r} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf {r_{1}} )}{(\mathbf {r_{1}} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf {r_{3}} )}}\\[4pt]&={\frac {(\mathbf {r} -\mathbf {r_{1}} )\times (\mathbf {r_{1}} -\mathbf {r_{2}} )}{(\mathbf {r_{1}} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf {r_{3}} )}}\end{aligned}}} When understanding the last line of equation, note the identity ( r 1 − r 3 ) × ( r 2 − r 3 ) = ( r 3 − r 1 ) × ( r 1 − r 2 ) {\displaystyle (\mathbf {r_{1}} -\mathbf {r_{3}} )\times (\mathbf {r_{2}} -\mathbf {r_{3}} )=(\mathbf {r_{3}} -\mathbf {r_{1}} )\times (\mathbf {r_{1}} -\mathbf {r_{2}} )} . ==== Vertex approach ==== Another way to solve the conversion from Cartesian to barycentric coordinates is to write the relation in the matrix form R λ = r {\displaystyle \mathbf {R} {\boldsymbol {\lambda }}=\mathbf {r} } with R = ( r 1 | r 2 | r 3 ) {\displaystyle \mathbf {R} =\left(\,\mathbf {r} _{1}\,|\,\mathbf {r} _{2}\,|\,\mathbf {r} _{3}\right)} and λ = ( λ 1 , λ 2 , λ 3 ) ⊤ , {\displaystyle {\boldsymbol {\lambda }}=\left(\lambda _{1},\lambda _{2},\lambda _{3}\right)^{\top },} i.e. ( x 1 x 2 x 3 y 1 y 2 y 3 ) ( λ 1 λ 2 λ 3 ) = ( x y ) {\displaystyle {\begin{pmatrix}x_{1}&x_{2}&x_{3}\\y_{1}&y_{2}&y_{3}\end{pmatrix}}{\begin{pmatrix}\lambda _{1}\\\lambda _{2}\\\lambda _{3}\end{pmatrix}}={\begin{pmatrix}x\\y\end{pmatrix}}} To get the unique normalized solution we need to add the condition λ 1 + λ 2 + λ 3 = 1 {\displaystyle \lambda _{1}+\lambda _{2}+\lambda _{3}=1} . 
The barycentric coordinates are thus the solution of the linear system ( 1 1 1 x 1 x 2 x 3 y 1 y 2 y 3 ) ( λ 1 λ 2 λ 3 ) = ( 1 x y ) {\displaystyle \left({\begin{matrix}1&1&1\\x_{1}&x_{2}&x_{3}\\y_{1}&y_{2}&y_{3}\end{matrix}}\right){\begin{pmatrix}\lambda _{1}\\\lambda _{2}\\\lambda _{3}\end{pmatrix}}=\left({\begin{matrix}1\\x\\y\end{matrix}}\right)} which is ( λ 1 λ 2 λ 3 ) = 1 2 A ( x 2 y 3 − x 3 y 2 y 2 − y 3 x 3 − x 2 x 3 y 1 − x 1 y 3 y 3 − y 1 x 1 − x 3 x 1 y 2 − x 2 y 1 y 1 − y 2 x 2 − x 1 ) ( 1 x y ) {\displaystyle {\begin{pmatrix}\lambda _{1}\\\lambda _{2}\\\lambda _{3}\end{pmatrix}}={\frac {1}{2A}}{\begin{pmatrix}x_{2}y_{3}-x_{3}y_{2}&y_{2}-y_{3}&x_{3}-x_{2}\\x_{3}y_{1}-x_{1}y_{3}&y_{3}-y_{1}&x_{1}-x_{3}\\x_{1}y_{2}-x_{2}y_{1}&y_{1}-y_{2}&x_{2}-x_{1}\end{pmatrix}}{\begin{pmatrix}1\\x\\y\end{pmatrix}}} where 2 A = det ( 1 | R ) = x 1 ( y 2 − y 3 ) + x 2 ( y 3 − y 1 ) + x 3 ( y 1 − y 2 ) {\displaystyle 2A=\det(1|R)=x_{1}(y_{2}-y_{3})+x_{2}(y_{3}-y_{1})+x_{3}(y_{1}-y_{2})} is twice the signed area of the triangle. The area interpretation of the barycentric coordinates can be recovered by applying Cramer's rule to this linear system. === Conversion between barycentric and trilinear coordinates === A point with trilinear coordinates x : y : z has barycentric coordinates ax : by : cz where a, b, c are the side lengths of the triangle. Conversely, a point with barycentrics λ 1 : λ 2 : λ 3 {\displaystyle \lambda _{1}:\lambda _{2}:\lambda _{3}} has trilinears λ 1 / a : λ 2 / b : λ 3 / c . {\displaystyle \lambda _{1}/a:\lambda _{2}/b:\lambda _{3}/c.} === Equations in barycentric coordinates === The three sides a, b, c respectively have equations λ 1 = 0 , λ 2 = 0 , λ 3 = 0. {\displaystyle \lambda _{1}=0,\quad \lambda _{2}=0,\quad \lambda _{3}=0.} The equation of a triangle's Euler line is | λ 1 λ 2 λ 3 1 1 1 tan ⁡ A tan ⁡ B tan ⁡ C | = 0. {\displaystyle {\begin{vmatrix}\lambda _{1}&\lambda _{2}&\lambda _{3}\\1&1&1\\\tan A&\tan B&\tan C\end{vmatrix}}=0.} Using the previously given conversion between barycentric and trilinear coordinates, the various other equations given in Trilinear coordinates#Formulas can be rewritten in terms of barycentric coordinates. === Distance between points === The displacement vector of two normalized points P = ( p 1 , p 2 , p 3 ) {\displaystyle P=(p_{1},p_{2},p_{3})} and Q = ( q 1 , q 2 , q 3 ) {\displaystyle Q=(q_{1},q_{2},q_{3})} is P Q → = ( p 1 − q 1 , p 2 − q 2 , p 3 − q 3 ) . {\displaystyle {\overset {}{\overrightarrow {PQ}}}=(p_{1}-q_{1},p_{2}-q_{2},p_{3}-q_{3}).} The distance d between P and Q, or the length of the displacement vector P Q → = ( x , y , z ) , {\displaystyle {\overset {}{\overrightarrow {PQ}}}=(x,y,z),} is d 2 = | P Q | 2 = − a 2 y z − b 2 z x − c 2 x y = 1 2 [ x 2 ( b 2 + c 2 − a 2 ) + y 2 ( c 2 + a 2 − b 2 ) + z 2 ( a 2 + b 2 − c 2 ) ] . {\displaystyle {\begin{aligned}d^{2}&=|PQ|^{2}\\[2pt]&=-a^{2}yz-b^{2}zx-c^{2}xy\\[4pt]&={\frac {1}{2}}\left[x^{2}(b^{2}+c^{2}-a^{2})+y^{2}(c^{2}+a^{2}-b^{2})+z^{2}(a^{2}+b^{2}-c^{2})\right].\end{aligned}}} where a, b, c are the sidelengths of the triangle. The equivalence of the last two expressions follows from x + y + z = 0 , {\displaystyle x+y+z=0,} which holds because x + y + z = ( p 1 − q 1 ) + ( p 2 − q 2 ) + ( p 3 − q 3 ) = ( p 1 + p 2 + p 3 ) − ( q 1 + q 2 + q 3 ) = 1 − 1 = 0. 
{\displaystyle {\begin{aligned}x+y+z&=(p_{1}-q_{1})+(p_{2}-q_{2})+(p_{3}-q_{3})\\[2pt]&=(p_{1}+p_{2}+p_{3})-(q_{1}+q_{2}+q_{3})\\[2pt]&=1-1=0.\end{aligned}}} The barycentric coordinates of a point can be calculated based on distances di to the three triangle vertices by solving the equation ( − c 2 c 2 b 2 − a 2 − b 2 c 2 − a 2 b 2 1 1 1 ) λ = ( d A 2 − d B 2 d A 2 − d C 2 1 ) . {\displaystyle \left({\begin{matrix}-c^{2}&c^{2}&b^{2}-a^{2}\\-b^{2}&c^{2}-a^{2}&b^{2}\\1&1&1\end{matrix}}\right){\boldsymbol {\lambda }}=\left({\begin{matrix}d_{A}^{2}-d_{B}^{2}\\d_{A}^{2}-d_{C}^{2}\\1\end{matrix}}\right).} === Applications === ==== Determining location with respect to a triangle ==== Although barycentric coordinates are most commonly used to handle points inside a triangle, they can also be used to describe a point outside the triangle. If the point is not inside the triangle, then we can still use the formulas above to compute the barycentric coordinates. However, since the point is outside the triangle, at least one of the coordinates will violate our original assumption that λ 1...3 ≥ 0 {\displaystyle \lambda _{1...3}\geq 0} . In fact, given any point in Cartesian coordinates, we can use this fact to determine where this point is with respect to a triangle. If a point lies in the interior of the triangle, all of the barycentric coordinates lie in the open interval ( 0 , 1 ) . {\displaystyle (0,1).} If a point lies on an edge of the triangle but not at a vertex, one of the area coordinates λ 1...3 {\displaystyle \lambda _{1...3}} (the one associated with the opposite vertex) is zero, while the other two lie in the open interval ( 0 , 1 ) . {\displaystyle (0,1).} If the point lies on a vertex, the coordinate associated with that vertex equals 1 and the others equal zero. Finally, if the point lies outside the triangle, at least one coordinate is negative. Summarizing: the point r {\displaystyle \mathbf {r} } lies inside the triangle if and only if 0 < λ i < 1 ∀ i in 1 , 2 , 3 {\displaystyle 0<\lambda _{i}<1\;\forall \;i{\text{ in }}{1,2,3}} ; r {\displaystyle \mathbf {r} } lies on an edge or corner of the triangle if 0 ≤ λ i ≤ 1 ∀ i in 1 , 2 , 3 {\displaystyle 0\leq \lambda _{i}\leq 1\;\forall \;i{\text{ in }}{1,2,3}} and λ i = 0 , for some i in 1 , 2 , 3 {\displaystyle \lambda _{i}=0\;{\text{, for some i in }}{1,2,3}} ; otherwise, r {\displaystyle \mathbf {r} } lies outside the triangle. In particular, if a point lies on the far side of a line, the barycentric coordinate associated with the vertex of the triangle that is not on the line will have a negative value. ==== Interpolation on a triangular unstructured grid ==== If f ( r 1 ) , f ( r 2 ) , f ( r 3 ) {\displaystyle f(\mathbf {r} _{1}),f(\mathbf {r} _{2}),f(\mathbf {r} _{3})} are known quantities, but the values of f inside the triangle defined by r 1 , r 2 , r 3 {\displaystyle \mathbf {r} _{1},\mathbf {r} _{2},\mathbf {r} _{3}} are unknown, they can be approximated using linear interpolation. Barycentric coordinates provide a convenient way to compute this interpolation.
If r {\displaystyle \mathbf {r} } is a point inside the triangle with barycentric coordinates λ 1 {\displaystyle \lambda _{1}} , λ 2 {\displaystyle \lambda _{2}} , λ 3 {\displaystyle \lambda _{3}} , then f ( r ) ≈ λ 1 f ( r 1 ) + λ 2 f ( r 2 ) + λ 3 f ( r 3 ) {\displaystyle f(\mathbf {r} )\approx \lambda _{1}f(\mathbf {r} _{1})+\lambda _{2}f(\mathbf {r} _{2})+\lambda _{3}f(\mathbf {r} _{3})} In general, given any unstructured grid or polygon mesh, this kind of technique can be used to approximate the value of f at all points, as long as the function's value is known at all vertices of the mesh. In this case, we have many triangles, each corresponding to a different part of the space. To interpolate a function f at a point r {\displaystyle \mathbf {r} } , first a triangle must be found that contains r {\displaystyle \mathbf {r} } . To do so, r {\displaystyle \mathbf {r} } is transformed into the barycentric coordinates of each triangle. If some triangle is found such that the coordinates satisfy 0 ≤ λ i ≤ 1 ∀ i in 1 , 2 , 3 {\displaystyle 0\leq \lambda _{i}\leq 1\;\forall \;i{\text{ in }}1,2,3} , then the point lies in that triangle or on its edge (explained in the previous section). Then the value of f ( r ) {\displaystyle f(\mathbf {r} )} can be interpolated as described above. These methods have many applications, such as the finite element method (FEM). ==== Integration over a triangle or tetrahedron ==== The integral of a function over the domain of the triangle can be annoying to compute in a cartesian coordinate system. One generally has to split the triangle up into two halves, and great messiness follows. Instead, it is often easier to make a change of variables to any two barycentric coordinates, e.g. λ 1 , λ 2 {\displaystyle \lambda _{1},\lambda _{2}} . Under this change of variables, ∫ T f ( r ) d r = 2 A ∫ 0 1 ∫ 0 1 − λ 2 f ( λ 1 r 1 + λ 2 r 2 + ( 1 − λ 1 − λ 2 ) r 3 ) d λ 1 d λ 2 {\displaystyle \int _{T}f(\mathbf {r} )\ d\mathbf {r} =2A\int _{0}^{1}\int _{0}^{1-\lambda _{2}}f(\lambda _{1}\mathbf {r} _{1}+\lambda _{2}\mathbf {r} _{2}+(1-\lambda _{1}-\lambda _{2})\mathbf {r} _{3})\ d\lambda _{1}\ d\lambda _{2}} where A is the area of the triangle. This result follows from the fact that a rectangle in barycentric coordinates corresponds to a quadrilateral in cartesian coordinates, and the ratio of the areas of the corresponding shapes in the corresponding coordinate systems is given by 2 A {\displaystyle 2A} . Similarly, for integration over a tetrahedron, instead of breaking up the integral into two or three separate pieces, one could switch to 3D tetrahedral coordinates under the change of variables ∫ ∫ T f ( r ) d r = 6 V ∫ 0 1 ∫ 0 1 − λ 3 ∫ 0 1 − λ 2 − λ 3 f ( λ 1 r 1 + λ 2 r 2 + λ 3 r 3 + ( 1 − λ 1 − λ 2 − λ 3 ) r 4 ) d λ 1 d λ 2 d λ 3 {\displaystyle \int \int _{T}f(\mathbf {r} )\ d\mathbf {r} =6V\int _{0}^{1}\int _{0}^{1-\lambda _{3}}\int _{0}^{1-\lambda _{2}-\lambda _{3}}f(\lambda _{1}\mathbf {r} _{1}+\lambda _{2}\mathbf {r} _{2}+\lambda _{3}\mathbf {r} _{3}+(1-\lambda _{1}-\lambda _{2}-\lambda _{3})\mathbf {r} _{4})\ d\lambda _{1}\ d\lambda _{2}\ d\lambda _{3}} where V is the volume of the tetrahedron. === Examples of special points === In the homogeneous barycentric coordinate system defined with respect to a triangle A B C {\displaystyle ABC} , the following statements about special points of A B C {\displaystyle ABC} hold. 
The three vertices A, B, and C have coordinates A = 1 : 0 : 0 B = 0 : 1 : 0 C = 0 : 0 : 1 {\displaystyle {\begin{array}{rccccc}A=&1&:&0&:&0\\B=&0&:&1&:&0\\C=&0&:&0&:&1\end{array}}} The centroid has coordinates 1 : 1 : 1. {\displaystyle 1:1:1.} If a, b, c are the edge lengths B C {\displaystyle BC} , C A {\displaystyle CA} , A B {\displaystyle AB} respectively, α {\displaystyle \alpha } , β {\displaystyle \beta } , γ {\displaystyle \gamma } are the angle measures ∠ C A B {\displaystyle \angle CAB} , ∠ A B C {\displaystyle \angle ABC} , and ∠ B C A {\displaystyle \angle BCA} respectively, and s is the semiperimeter of A B C {\displaystyle ABC} , then the following statements about special points of A B C {\displaystyle ABC} hold in addition. The circumcenter has coordinates sin ⁡ 2 α : sin ⁡ 2 β : sin ⁡ 2 γ = 1 − cot ⁡ β cot ⁡ γ : 1 − cot ⁡ γ cot ⁡ α : 1 − cot ⁡ α cot ⁡ β = a 2 ( − a 2 + b 2 + c 2 ) : b 2 ( a 2 − b 2 + c 2 ) : c 2 ( a 2 + b 2 − c 2 ) {\displaystyle {\begin{array}{rccccc}&\sin 2\alpha &:&\sin 2\beta &:&\sin 2\gamma \\[2pt]=&1-\cot \beta \cot \gamma &:&1-\cot \gamma \cot \alpha &:&1-\cot \alpha \cot \beta \\[2pt]=&a^{2}(-a^{2}+b^{2}+c^{2})&:&b^{2}(a^{2}-b^{2}+c^{2})&:&c^{2}(a^{2}+b^{2}-c^{2})\end{array}}} The orthocenter has coordinates tan ⁡ α : tan ⁡ β : tan ⁡ γ = a cos ⁡ β cos ⁡ γ : b cos ⁡ γ cos ⁡ α : c cos ⁡ α cos ⁡ β = ( a 2 + b 2 − c 2 ) ( a 2 − b 2 + c 2 ) : ( − a 2 + b 2 + c 2 ) ( a 2 + b 2 − c 2 ) : ( a 2 − b 2 + c 2 ) ( − a 2 + b 2 + c 2 ) {\displaystyle {\begin{array}{rccccc}&\tan \alpha &:&\tan \beta &:&\tan \gamma \\[2pt]=&a\cos \beta \cos \gamma &:&b\cos \gamma \cos \alpha &:&c\cos \alpha \cos \beta \\[2pt]=&(a^{2}+b^{2}-c^{2})(a^{2}-b^{2}+c^{2})&:&(-a^{2}+b^{2}+c^{2})(a^{2}+b^{2}-c^{2})&:&(a^{2}-b^{2}+c^{2})(-a^{2}+b^{2}+c^{2})\end{array}}} The incenter has coordinates a : b : c = sin ⁡ α : sin ⁡ β : sin ⁡ γ . {\displaystyle a:b:c=\sin \alpha :\sin \beta :\sin \gamma .} The excenters have coordinates J A = − a : b : c J B = a : − b : c J C = a : b : − c {\displaystyle {\begin{array}{rrcrcr}J_{A}=&-a&:&b&:&c\\J_{B}=&a&:&-b&:&c\\J_{C}=&a&:&b&:&-c\end{array}}} The nine-point center has coordinates a cos ⁡ ( β − γ ) : b cos ⁡ ( γ − α ) : c cos ⁡ ( α − β ) = 1 + cot ⁡ β cot ⁡ γ : 1 + cot ⁡ γ cot ⁡ α : 1 + cot ⁡ α cot ⁡ β = a 2 ( b 2 + c 2 ) − ( b 2 − c 2 ) 2 : b 2 ( c 2 + a 2 ) − ( c 2 − a 2 ) 2 : c 2 ( a 2 + b 2 ) − ( a 2 − b 2 ) 2 {\displaystyle {\begin{array}{rccccc}&a\cos(\beta -\gamma )&:&b\cos(\gamma -\alpha )&:&c\cos(\alpha -\beta )\\[4pt]=&1+\cot \beta \cot \gamma &:&1+\cot \gamma \cot \alpha &:&1+\cot \alpha \cot \beta \\[4pt]=&a^{2}(b^{2}+c^{2})-(b^{2}-c^{2})^{2}&:&b^{2}(c^{2}+a^{2})-(c^{2}-a^{2})^{2}&:&c^{2}(a^{2}+b^{2})-(a^{2}-b^{2})^{2}\end{array}}} The Gergonne point has coordinates ( s − b ) ( s − c ) : ( s − c ) ( s − a ) : ( s − a ) ( s − b ) {\displaystyle (s-b)(s-c):(s-c)(s-a):(s-a)(s-b)} . The Nagel point has coordinates s − a : s − b : s − c {\displaystyle s-a:s-b:s-c} . The symmedian point has coordinates a 2 : b 2 : c 2 {\displaystyle a^{2}:b^{2}:c^{2}} . == Barycentric coordinates on tetrahedra == Barycentric coordinates may be easily extended to three dimensions. The 3D simplex is a tetrahedron, a polyhedron having four triangular faces and four vertices. Once again, the four barycentric coordinates are defined so that the first vertex r 1 {\displaystyle \mathbf {r} _{1}} maps to barycentric coordinates λ = ( 1 , 0 , 0 , 0 ) {\displaystyle \lambda =(1,0,0,0)} , r 2 → ( 0 , 1 , 0 , 0 ) {\displaystyle \mathbf {r} _{2}\to (0,1,0,0)} , etc. 
This is again a linear transformation, and we may extend the above procedure for triangles to find the barycentric coordinates of a point r {\displaystyle \mathbf {r} } with respect to a tetrahedron: ( λ 1 λ 2 λ 3 ) = T − 1 ( r − r 4 ) {\displaystyle \left({\begin{matrix}\lambda _{1}\\\lambda _{2}\\\lambda _{3}\end{matrix}}\right)=\mathbf {T} ^{-1}(\mathbf {r} -\mathbf {r} _{4})} where T {\displaystyle \mathbf {T} } is now a 3×3 matrix: T = ( x 1 − x 4 x 2 − x 4 x 3 − x 4 y 1 − y 4 y 2 − y 4 y 3 − y 4 z 1 − z 4 z 2 − z 4 z 3 − z 4 ) {\displaystyle \mathbf {T} =\left({\begin{matrix}x_{1}-x_{4}&x_{2}-x_{4}&x_{3}-x_{4}\\y_{1}-y_{4}&y_{2}-y_{4}&y_{3}-y_{4}\\z_{1}-z_{4}&z_{2}-z_{4}&z_{3}-z_{4}\end{matrix}}\right)} and λ 4 = 1 − λ 1 − λ 2 − λ 3 {\displaystyle \lambda _{4}=1-\lambda _{1}-\lambda _{2}-\lambda _{3}} with the corresponding Cartesian coordinates: x = λ 1 x 1 + λ 2 x 2 + λ 3 x 3 + ( 1 − λ 1 − λ 2 − λ 3 ) x 4 y = λ 1 y 1 + λ 2 y 2 + λ 3 y 3 + ( 1 − λ 1 − λ 2 − λ 3 ) y 4 z = λ 1 z 1 + λ 2 z 2 + λ 3 z 3 + ( 1 − λ 1 − λ 2 − λ 3 ) z 4 {\displaystyle {\begin{aligned}x&=\lambda _{1}x_{1}+\lambda _{2}x_{2}+\lambda _{3}x_{3}+(1-\lambda _{1}-\lambda _{2}-\lambda _{3})x_{4}\\y&=\lambda _{1}y_{1}+\,\lambda _{2}y_{2}+\lambda _{3}y_{3}+(1-\lambda _{1}-\lambda _{2}-\lambda _{3})y_{4}\\z&=\lambda _{1}z_{1}+\,\lambda _{2}z_{2}+\lambda _{3}z_{3}+(1-\lambda _{1}-\lambda _{2}-\lambda _{3})z_{4}\end{aligned}}} Once again, the problem of finding the barycentric coordinates has been reduced to inverting a 3×3 matrix. 3D barycentric coordinates may be used to decide if a point lies inside a tetrahedral volume, and to interpolate a function within a tetrahedral mesh, in an analogous manner to the 2D procedure. Tetrahedral meshes are often used in finite element analysis because the use of barycentric coordinates can greatly simplify 3D interpolation. == Generalized barycentric coordinates == Barycentric coordinates ( λ 1 , λ 2 , . . . , λ k ) {\displaystyle (\lambda _{1},\lambda _{2},...,\lambda _{k})} of a point p ∈ R n {\displaystyle p\in \mathbb {R} ^{n}} that are defined with respect to a finite set of k points x 1 , x 2 , . . . , x k ∈ R n {\displaystyle x_{1},x_{2},...,x_{k}\in \mathbb {R} ^{n}} instead of a simplex are called generalized barycentric coordinates. For these, the equation ( λ 1 + λ 2 + ⋯ + λ k ) p = λ 1 x 1 + λ 2 x 2 + ⋯ + λ k x k {\displaystyle (\lambda _{1}+\lambda _{2}+\cdots +\lambda _{k})p=\lambda _{1}x_{1}+\lambda _{2}x_{2}+\cdots +\lambda _{k}x_{k}} is still required to hold. Usually one uses normalized coordinates, λ 1 + λ 2 + ⋯ + λ k = 1 {\displaystyle \lambda _{1}+\lambda _{2}+\cdots +\lambda _{k}=1} . As for the case of a simplex, the points with nonnegative normalized generalized coordinates ( 0 ≤ λ i ≤ 1 {\displaystyle 0\leq \lambda _{i}\leq 1} ) form the convex hull of x1, ..., xn. If there are more points than in a full simplex ( k > n + 1 {\displaystyle k>n+1} ) the generalized barycentric coordinates of a point are not unique, as the defining linear system (here for n=2) ( 1 1 1 . . . x 1 x 2 x 3 . . . y 1 y 2 y 3 . . . ) ( λ 1 λ 2 λ 3 ⋮ ) = ( 1 x y ) {\displaystyle \left({\begin{matrix}1&1&1&...\\x_{1}&x_{2}&x_{3}&...\\y_{1}&y_{2}&y_{3}&...\end{matrix}}\right){\begin{pmatrix}\lambda _{1}\\\lambda _{2}\\\lambda _{3}\\\vdots \end{pmatrix}}=\left({\begin{matrix}1\\x\\y\end{matrix}}\right)} is underdetermined. The simplest example is a quadrilateral in the plane. Various kinds of additional restrictions can be used to define unique barycentric coordinates. 
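To illustrate the non-uniqueness just described, the following Python/NumPy sketch picks one particular solution of the underdetermined system, the minimum-norm solution returned by a least-squares solver. This is only an illustrative choice for the example, not one of the standard constructions of generalized barycentric coordinates, and the function name and sample quadrilateral are assumptions.

```python
import numpy as np

def generalized_barycentric(points, p):
    """One choice of generalized barycentric coordinates of p w.r.t. `points`.

    points -- (k, 2) array-like of the defining points x_1, ..., x_k.
    p      -- (2,) array-like, the query point.

    For k > 3 the linear system below is underdetermined; lstsq returns its
    minimum-norm solution, which is just one of infinitely many valid tuples.
    """
    points = np.asarray(points, dtype=float)
    k = len(points)
    A = np.vstack((np.ones(k), points.T))               # rows: 1s, x's, y's
    b = np.concatenate(([1.0], np.asarray(p, dtype=float)))
    lambdas, *_ = np.linalg.lstsq(A, b, rcond=None)
    return lambdas

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
lam = generalized_barycentric(square, (0.25, 0.5))
print(lam, lam.sum())   # the coordinates sum to 1 and reproduce the point
```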
=== Abstraction === More abstractly, generalized barycentric coordinates express a convex polytope with n vertices, regardless of dimension, as the image of the standard ( n − 1 ) {\displaystyle (n-1)} -simplex, which has n vertices – the map is onto: Δ n − 1 ↠ P . {\displaystyle \Delta ^{n-1}\twoheadrightarrow P.} The map is one-to-one if and only if the polytope is a simplex, in which case the map is an isomorphism; this corresponds to a point not having unique generalized barycentric coordinates except when P is a simplex. Dual to generalized barycentric coordinates are slack variables, which measure by how much margin a point satisfies the linear constraints, and gives an embedding P ↪ ( R ≥ 0 ) f {\displaystyle P\hookrightarrow (\mathbf {R} _{\geq 0})^{f}} into the f-orthant, where f is the number of faces (dual to the vertices). This map is one-to-one (slack variables are uniquely determined) but not onto (not all combinations can be realized). This use of the standard ( n − 1 ) {\displaystyle (n-1)} -simplex and f-orthant as standard objects that map to a polytope or that a polytope maps into should be contrasted with the use of the standard vector space K n {\displaystyle K^{n}} as the standard object for vector spaces, and the standard affine hyperplane { ( x 0 , … , x n ) ∣ ∑ x i = 1 } ⊂ K n + 1 {\displaystyle \{(x_{0},\ldots ,x_{n})\mid \sum x_{i}=1\}\subset K^{n+1}} as the standard object for affine spaces, where in each case choosing a linear basis or affine basis provides an isomorphism, allowing all vector spaces and affine spaces to be thought of in terms of these standard spaces, rather than an onto or one-to-one map (not every polytope is a simplex). Further, the n-orthant is the standard object that maps to cones. === Applications === Generalized barycentric coordinates have applications in computer graphics and more specifically in geometric modelling. Often, a three-dimensional model can be approximated by a polyhedron such that the generalized barycentric coordinates with respect to that polyhedron have a geometric meaning. In this way, the processing of the model can be simplified by using these meaningful coordinates. Barycentric coordinates are also used in geophysics. == See also == Ternary plot Convex combination Water pouring puzzle Homogeneous coordinates == References == Scott, J. A. Some examples of the use of areal coordinates in triangle geometry, Mathematical Gazette 83, November 1999, 472–477. Schindler, Max; Chen, Evan (July 13, 2012). Barycentric Coordinates in Olympiad Geometry (PDF). Retrieved 14 January 2016. Clark Kimberling's Encyclopedia of Triangles Encyclopedia of Triangle Centers. Archived from the original on 2012-04-19. Retrieved 2012-06-02. Bradley, Christopher J. (2007). The Algebra of Geometry: Cartesian, Areal and Projective Co-ordinates. Bath: Highperception. ISBN 978-1-906338-00-8. Coxeter, H.S.M. (1969). Introduction to geometry (2nd ed.). John Wiley and Sons. pp. 216–221. ISBN 978-0-471-50458-0. Zbl 0181.48101. Barycentric Calculus In Euclidean And Hyperbolic Geometry: A Comparative Introduction, Abraham Ungar, World Scientific, 2010 Hyperbolic Barycentric Coordinates, Abraham A. Ungar, The Australian Journal of Mathematical Analysis and Applications, Vol.6, No.1, Article 18, pp. 1–35, 2009 Weisstein, Eric W. "Areal Coordinates". MathWorld. Weisstein, Eric W. "Barycentric Coordinates". MathWorld. Barycentric coordinates computation in homogeneous coordinates, Vaclav Skala, Computers and Graphics, Vol.32, No.1, pp. 
120–127, 2008 == External links == Law of the lever The uses of homogeneous barycentric coordinates in plane euclidean geometry Barycentric Coordinates – a collection of scientific papers about (generalized) barycentric coordinates Barycentric coordinates: A Curious Application (solving the "three glasses" problem) at cut-the-knot Accurate point in triangle test Barycentric Coordinates in Olympiad Geometry by Evan Chen and Max Schindler Barycenter command and TriangleCurve command at Geogebra.
Wikipedia:Barzilai-Borwein method#0
The Barzilai-Borwein method is an iterative gradient descent method for unconstrained optimization using either of two step sizes derived from the linear trend of the most recent two iterates. This method, and modifications, are globally convergent under mild conditions, and perform competitively with conjugate gradient methods for many problems. Not depending on the objective itself, it can also solve some systems of linear and non-linear equations. == Method == To minimize a convex function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} } with gradient vector g {\displaystyle g} at point x {\displaystyle x} , let there be two prior iterates, g k − 1 ( x k − 1 ) {\displaystyle g_{k-1}(x_{k-1})} and g k ( x k ) {\displaystyle g_{k}(x_{k})} , in which x k = x k − 1 − α k − 1 g k − 1 {\displaystyle x_{k}=x_{k-1}-\alpha _{k-1}g_{k-1}} where α k − 1 {\displaystyle \alpha _{k-1}} is the previous iteration's step size (not necessarily a Barzilai-Borwein step size), and for brevity, let Δ x = x k − x k − 1 {\displaystyle \Delta x=x_{k}-x_{k-1}} and Δ g = g k − g k − 1 {\displaystyle \Delta g=g_{k}-g_{k-1}} . A Barzilai-Borwein (BB) iteration is x k + 1 = x k − α k g k {\displaystyle x_{k+1}=x_{k}-\alpha _{k}g_{k}} where the step size α k {\displaystyle \alpha _{k}} is either [long BB step] α k L O N G = Δ x ⋅ Δ x Δ x ⋅ Δ g {\displaystyle \alpha _{k}^{LONG}={\frac {\Delta x\cdot \Delta x}{\Delta x\cdot \Delta g}}} , or [short BB step] α k S H O R T = Δ x ⋅ Δ g Δ g ⋅ Δ g {\displaystyle \alpha _{k}^{SHORT}={\frac {\Delta x\cdot \Delta g}{\Delta g\cdot \Delta g}}} . Barzilai-Borwein also applies to systems of equations g ( x ) = 0 {\displaystyle g(x)=0} for g : R n → R n {\displaystyle g:\mathbb {R} ^{n}\rightarrow \mathbb {R} ^{n}} in which the Jacobian of g {\displaystyle g} is positive-definite in the symmetric part, that is, Δ x ⋅ Δ g {\displaystyle \Delta x\cdot \Delta g} is necessarily positive. == Derivation == Despite its simplicity and optimality properties, Cauchy's classical steepest-descent method for unconstrained optimization often performs poorly. This has motivated many to propose alternate search directions, such as the conjugate gradient method. Jonathan Barzilai and Jonathan Borwein instead proposed new step sizes for the gradient by approximating the quasi-Newton method, creating a scalar approximation of the Hessian estimated from the finite differences between two evaluation points of the gradient, these being the most recent two iterates. In a quasi-Newton iteration, x k + 1 = x k − B − 1 g ( x k ) {\displaystyle x_{k+1}=x_{k}-B^{-1}g(x_{k})} where B {\displaystyle B} is some approximation of the Jacobian matrix of g {\displaystyle g} (i.e. Hessian of the objective function) which satisfies the secant equation B k Δ x k = Δ g k {\displaystyle B_{k}\Delta x_{k}=\Delta g_{k}} . Barzilai and Borwein simplify B {\displaystyle B} with a scalar 1 / α {\displaystyle 1/\alpha } , which usually cannot exactly satisfy the secant equation, but approximate it as 1 α Δ x ≈ Δ g {\displaystyle {\frac {1}{\alpha }}\Delta x\approx \Delta g} . Approximations by two least-squares criteria are: [1] Minimize ‖ Δ x / α − Δ g ‖ 2 {\displaystyle \|\Delta x/\alpha -\Delta g\|^{2}} with respect to α {\displaystyle \alpha } , yielding the long BB step, or [2] Minimize ‖ Δ x − α Δ g ‖ 2 {\displaystyle \|\Delta x-\alpha \Delta g\|^{2}} with respect to α {\displaystyle \alpha } , yielding the short BB step. 
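The iteration above translates directly into code. The following Python/NumPy sketch is a minimal, unsafeguarded implementation (no non-monotone line search); the function name, the initial plain gradient step, and the stopping test are assumptions made for the example, not part of the method's original statement.

```python
import numpy as np

def bb_gradient_descent(grad, x0, alpha0=1e-3, long_step=True,
                        tol=1e-8, max_iter=500):
    """Minimize a smooth function given its gradient using Barzilai-Borwein steps.

    grad      -- callable returning the gradient g(x).
    x0        -- starting point.
    alpha0    -- step size for the first (ordinary gradient) iteration.
    long_step -- use the long BB step if True, otherwise the short BB step.
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev            # first step: plain gradient step
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        dx, dg = x - x_prev, g - g_prev
        if long_step:
            alpha = (dx @ dx) / (dx @ dg)   # long BB step
        else:
            alpha = (dx @ dg) / (dg @ dg)   # short BB step
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Example: strictly convex quadratic f(x) = 1/2 x^T A x - b^T x, gradient A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = bb_gradient_descent(lambda x: A @ x - b, x0=np.zeros(2))
print(x_star, np.linalg.solve(A, b))        # the two should agree closely
```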
== Properties == In one dimension, both BB step sizes are equal and coincide with the classical secant method. The long BB step size is the same as a linearized Cauchy step, i.e. the first estimate using a secant method for the line search (also, for linear problems). The short BB step size is the same as a linearized minimum-residual step. BB applies the step sizes to the forward direction vector for the next iterate, rather than to the prior direction vector as in another line-search step. Barzilai and Borwein proved their method converges R-superlinearly for quadratic minimization in two dimensions. Raydan demonstrated convergence in general for quadratic problems. Convergence is usually non-monotone; that is, neither the objective function nor the residual or gradient magnitude necessarily decreases with each iteration along a successful convergence toward the solution. If f {\displaystyle f} is a quadratic function with Hessian A {\displaystyle A} , 1 / α L O N G {\displaystyle 1/\alpha ^{LONG}} is the Rayleigh quotient of A {\displaystyle A} by vector Δ x {\displaystyle \Delta x} , and 1 / α S H O R T {\displaystyle 1/\alpha ^{SHORT}} is the Rayleigh quotient of A {\displaystyle A} by vector A Δ x {\displaystyle {\sqrt {A}}\Delta x} (here taking A {\displaystyle {\sqrt {A}}} as a solution to ( A ) T A = A {\displaystyle ({\sqrt {A}})^{T}{\sqrt {A}}=A} , more at Definite matrix). Fletcher compared its computational performance to conjugate gradient (CG) methods, finding that CG tends to be faster for linear problems, but that BB is often faster than applicable CG-based methods for non-linear problems. BB has low storage requirements, making it suitable for large systems with millions of elements in x {\displaystyle x} . α S H O R T α L O N G = cos ⁡ ( angle between Δ x and Δ g ) 2 {\displaystyle {\frac {\alpha ^{SHORT}}{\alpha ^{LONG}}}=\cos({\text{angle between }}\Delta x{\text{ and }}\Delta g)^{2}} . == Modifications and related methods == Following Raydan's demonstration, BB is often applied with the non-monotone safeguarding strategy of Grippo, Lampariello, and Lucidi. This tolerates some rise of the objective, but excessive rise initiates a backtracking line search using smaller step sizes, to assure global convergence. Fletcher finds that allowing wider limits for non-monotonicity tends to result in more efficient convergence. Others have identified a step size that is the geometric mean of the long and short BB step sizes, which exhibits similar properties. == References == == External links == Jonathan Barzilai
Wikipedia:Basic theorems in algebraic K-theory#0
Algebraic K-theory is a subject area in mathematics with connections to geometry, topology, ring theory, and number theory. Geometric, algebraic, and arithmetic objects are assigned objects called K-groups. These are groups in the sense of abstract algebra. They contain detailed information about the original object but are notoriously difficult to compute; for example, an important outstanding problem is to compute the K-groups of the integers. K-theory was discovered in the late 1950s by Alexander Grothendieck in his study of intersection theory on algebraic varieties. In the modern language, Grothendieck defined only K0, the zeroth K-group, but even this single group has plenty of applications, such as the Grothendieck–Riemann–Roch theorem. Intersection theory is still a motivating force in the development of (higher) algebraic K-theory through its links with motivic cohomology and specifically Chow groups. The subject also includes classical number-theoretic topics like quadratic reciprocity and embeddings of number fields into the real numbers and complex numbers, as well as more modern concerns like the construction of higher regulators and special values of L-functions. The lower K-groups were discovered first, in the sense that adequate descriptions of these groups in terms of other algebraic structures were found. For example, if F is a field, then K0(F) is isomorphic to the integers Z and is closely related to the notion of vector space dimension. For a commutative ring R, the group K0(R) is related to the Picard group of R, and when R is the ring of integers in a number field, this generalizes the classical construction of the class group. The group K1(R) is closely related to the group of units R×, and if R is a field, it is exactly the group of units. For a number field F, the group K2(F) is related to class field theory, the Hilbert symbol, and the solvability of quadratic equations over completions. In contrast, finding the correct definition of the higher K-groups of rings was a difficult achievement of Daniel Quillen, and many of the basic facts about the higher K-groups of algebraic varieties were not known until the work of Robert Thomason. == History == The history of K-theory was detailed by Charles Weibel. === The Grothendieck group K0 === In the 19th century, Bernhard Riemann and his student Gustav Roch proved what is now known as the Riemann–Roch theorem. If X is a Riemann surface, then the sets of meromorphic functions and meromorphic differential forms on X form vector spaces. A line bundle on X determines subspaces of these vector spaces, and if X is projective, then these subspaces are finite dimensional. The Riemann–Roch theorem states that the difference in dimensions between these subspaces is equal to the degree of the line bundle (a measure of twistedness) plus one minus the genus of X. In the mid-20th century, the Riemann–Roch theorem was generalized by Friedrich Hirzebruch to all algebraic varieties. In Hirzebruch's formulation, the Hirzebruch–Riemann–Roch theorem, the theorem became a statement about Euler characteristics: The Euler characteristic of a vector bundle on an algebraic variety (which is the alternating sum of the dimensions of its cohomology groups) equals the Euler characteristic of the trivial bundle plus a correction factor coming from characteristic classes of the vector bundle. 
This is a generalization because on a projective Riemann surface, the Euler characteristic of a line bundle equals the difference in dimensions mentioned previously, the Euler characteristic of the trivial bundle is one minus the genus, and the only nontrivial characteristic class is the degree. The subject of K-theory takes its name from a 1957 construction of Alexander Grothendieck which appeared in the Grothendieck–Riemann–Roch theorem, his generalization of Hirzebruch's theorem. Let X be a smooth algebraic variety. To each vector bundle on X, Grothendieck associates an invariant, its class. The set of all classes on X was called K(X) from the German Klasse. By definition, K(X) is a quotient of the free abelian group on isomorphism classes of vector bundles on X, and so it is an abelian group. If the basis element corresponding to a vector bundle V is denoted [V], then for each short exact sequence of vector bundles: 0 → V ′ → V → V ″ → 0 , {\displaystyle 0\to V'\to V\to V''\to 0,} Grothendieck imposed the relation [V] = [V′] + [V″]. These generators and relations define K(X), and they imply that it is the universal way to assign invariants to vector bundles in a way compatible with exact sequences. Grothendieck took the perspective that the Riemann–Roch theorem is a statement about morphisms of varieties, not the varieties themselves. He proved that there is a homomorphism from K(X) to the Chow groups of X coming from the Chern character and Todd class of X. Additionally, he proved that a proper morphism f : X → Y to a smooth variety Y determines a homomorphism f* : K(X) → K(Y) called the pushforward. This gives two ways of determining an element in the Chow group of Y from a vector bundle on X: Starting from X, one can first compute the pushforward in K-theory and then apply the Chern character and Todd class of Y, or one can first apply the Chern character and Todd class of X and then compute the pushforward for Chow groups. The Grothendieck–Riemann–Roch theorem says that these are equal. When Y is a point, a vector bundle is a vector space, the class of a vector space is its dimension, and the Grothendieck–Riemann–Roch theorem specializes to Hirzebruch's theorem. The group K(X) is now known as K0(X). Upon replacing vector bundles by projective modules, K0 also became defined for non-commutative rings, where it had applications to group representations. Atiyah and Hirzebruch quickly transported Grothendieck's construction to topology and used it to define topological K-theory. Topological K-theory was one of the first examples of an extraordinary cohomology theory: It associates to each topological space X (satisfying some mild technical constraints) a sequence of groups Kn(X) which satisfy all the Eilenberg–Steenrod axioms except the normalization axiom. The setting of algebraic varieties, however, is much more rigid, and the flexible constructions used in topology were not available. While the group K0 seemed to satisfy the necessary properties to be the beginning of a cohomology theory of algebraic varieties and of non-commutative rings, there was no clear definition of the higher Kn(X). Even as such definitions were developed, technical issues surrounding restriction and gluing usually forced Kn to be defined only for rings, not for varieties. === K0, K1, and K2 === A group closely related to K1 for group rings was earlier introduced by J.H.C. Whitehead. Henri Poincaré had attempted to define the Betti numbers of a manifold in terms of a triangulation. 
His methods, however, had a serious gap: Poincaré could not prove that two triangulations of a manifold always yielded the same Betti numbers. It was clearly true that Betti numbers were unchanged by subdividing the triangulation, and therefore it was clear that any two triangulations that shared a common subdivision had the same Betti numbers. What was not known was that any two triangulations admitted a common subdivision. This hypothesis became a conjecture known as the Hauptvermutung (roughly "main conjecture"). The fact that triangulations were stable under subdivision led J.H.C. Whitehead to introduce the notion of simple homotopy type. A simple homotopy equivalence is defined in terms of adding simplices or cells to a simplicial complex or cell complex in such a way that each additional simplex or cell deformation retracts into a subdivision of the old space. Part of the motivation for this definition is that a subdivision of a triangulation is simple homotopy equivalent to the original triangulation, and therefore two triangulations that share a common subdivision must be simple homotopy equivalent. Whitehead proved that simple homotopy equivalence is a finer invariant than homotopy equivalence by introducing an invariant called the torsion. The torsion of a homotopy equivalence takes values in a group now called the Whitehead group and denoted Wh(π), where π is the fundamental group of the target complex. Whitehead found examples of non-trivial torsion and thereby proved that some homotopy equivalences were not simple. The Whitehead group was later discovered to be a quotient of K1(Zπ), where Zπ is the integral group ring of π. Later John Milnor used Reidemeister torsion, an invariant related to Whitehead torsion, to disprove the Hauptvermutung. The first adequate definition of K1 of a ring was made by Hyman Bass and Stephen Schanuel. In topological K-theory, K1 is defined using vector bundles on a suspension of the space. All such vector bundles come from the clutching construction, where two trivial vector bundles on two halves of a space are glued along a common strip of the space. This gluing data is expressed using the general linear group, but elements of that group coming from elementary matrices (matrices corresponding to elementary row or column operations) define equivalent gluings. Motivated by this, the Bass–Schanuel definition of K1 of a ring R is GL(R) / E(R), where GL(R) is the infinite general linear group (the union of all GLn(R)) and E(R) is the subgroup of elementary matrices. They also provided a definition of K0 of a homomorphism of rings and proved that K0 and K1 could be fit together into an exact sequence similar to the relative homology exact sequence. Work in K-theory from this period culminated in Bass' book Algebraic K-theory. In addition to providing a coherent exposition of the results then known, Bass improved many of the statements of the theorems. Of particular note is that Bass, building on his earlier work with Murthy, provided the first proof of what is now known as the fundamental theorem of algebraic K-theory. This is a four-term exact sequence relating K0 of a ring R to K1 of R, the polynomial ring R[t], and the localization R[t, t−1]. Bass recognized that this theorem provided a description of K0 entirely in terms of K1. By applying this description recursively, he produced negative K-groups K−n(R). 
In independent work, Max Karoubi gave another definition of negative K-groups for certain categories and proved that his definitions yielded the same groups as those of Bass. The next major development in the subject came with the definition of K2. Steinberg studied the universal central extensions of a Chevalley group over a field and gave an explicit presentation of this group in terms of generators and relations. In the case of the group En(k) of elementary matrices, the universal central extension is now written Stn(k) and called the Steinberg group. In the spring of 1967, John Milnor defined K2(R) to be the kernel of the homomorphism St(R) → E(R). The group K2 further extended some of the exact sequences known for K1 and K0, and it had striking applications to number theory. Hideya Matsumoto's 1968 thesis showed that for a field F, K2(F) was isomorphic to: F × ⊗ Z F × / ⟨ x ⊗ ( 1 − x ) : x ∈ F ∖ { 0 , 1 } ⟩ . {\displaystyle F^{\times }\otimes _{\mathbf {Z} }F^{\times }/\langle x\otimes (1-x)\colon x\in F\setminus \{0,1\}\rangle .} This relation is also satisfied by the Hilbert symbol, which expresses the solvability of quadratic equations over local fields. In particular, John Tate was able to prove that K2(Q) is essentially structured around the law of quadratic reciprocity. === Higher K-groups === In the late 1960s and early 1970s, several definitions of higher K-theory were proposed. Swan and Gersten both produced definitions of Kn for all n, and Gersten proved that his and Swan's theories were equivalent, but the two theories were not known to satisfy all the expected properties. Nobile and Villamayor also proposed a definition of higher K-groups. Karoubi and Villamayor defined well-behaved K-groups for all n, but their equivalent of K1 was sometimes a proper quotient of the Bass–Schanuel K1. Their K-groups are now called KVn and are related to homotopy-invariant modifications of K-theory. Inspired in part by Matsumoto's theorem, Milnor made a definition of the higher K-groups of a field. He referred to his definition as "purely ad hoc", and it neither appeared to generalize to all rings nor did it appear to be the correct definition of the higher K-theory of fields. Much later, it was discovered by Nesterenko and Suslin and by Totaro that Milnor K-theory is actually a direct summand of the true K-theory of the field. Specifically, K-groups have a filtration called the weight filtration, and the Milnor K-theory of a field is the highest weight-graded piece of the K-theory. Additionally, Thomason discovered that there is no analog of Milnor K-theory for a general variety. The first definition of higher K-theory to be widely accepted was Daniel Quillen's. As part of Quillen's work on the Adams conjecture in topology, he had constructed maps from the classifying spaces BGL(Fq) to the homotopy fiber of ψq − 1, where ψq is the qth Adams operation acting on the classifying space BU. This map is acyclic, and after modifying BGL(Fq) slightly to produce a new space BGL(Fq)+, the map became a homotopy equivalence. This modification was called the plus construction. The Adams operations had been known to be related to Chern classes and to K-theory since the work of Grothendieck, and so Quillen was led to define the K-theory of R as the homotopy groups of BGL(R)+. Not only did this recover K1 and K2, but the relation of K-theory to the Adams operations allowed Quillen to compute the K-groups of finite fields.
The classifying space BGL is connected, so Quillen's definition failed to give the correct value for K0. Additionally, it did not give any negative K-groups. Since K0 had a known and accepted definition it was possible to sidestep this difficulty, but it remained technically awkward. Conceptually, the problem was that the definition sprung from GL, which was classically the source of K1. Because GL knows only about gluing vector bundles, not about the vector bundles themselves, it was impossible for it to describe K0. Inspired by conversations with Quillen, Segal soon introduced another approach to constructing algebraic K-theory under the name of Γ-objects. Segal's approach is a homotopy analog of Grothendieck's construction of K0. Where Grothendieck worked with isomorphism classes of bundles, Segal worked with the bundles themselves and used isomorphisms of the bundles as part of his data. This results in a spectrum whose homotopy groups are the higher K-groups (including K0). However, Segal's approach was only able to impose relations for split exact sequences, not general exact sequences. In the category of projective modules over a ring, every short exact sequence splits, and so Γ-objects could be used to define the K-theory of a ring. However, there are non-split short exact sequences in the category of vector bundles on a variety and in the category of all modules over a ring, so Segal's approach did not apply to all cases of interest. In the spring of 1972, Quillen found another approach to the construction of higher K-theory which was to prove enormously successful. This new definition began with an exact category, a category satisfying certain formal properties similar to, but slightly weaker than, the properties satisfied by a category of modules or vector bundles. From this he constructed an auxiliary category using a new device called his "Q-construction." Like Segal's Γ-objects, the Q-construction has its roots in Grothendieck's definition of K0. Unlike Grothendieck's definition, however, the Q-construction builds a category, not an abelian group, and unlike Segal's Γ-objects, the Q-construction works directly with short exact sequences. If C is an abelian category, then QC is a category with the same objects as C but whose morphisms are defined in terms of short exact sequences in C. The K-groups of the exact category are the homotopy groups of ΩBQC, the loop space of the geometric realization (taking the loop space corrects the indexing). Quillen additionally proved his "+ = Q theorem" that his two definitions of K-theory agreed with each other. This yielded the correct K0 and led to simpler proofs, but still did not yield any negative K-groups. All abelian categories are exact categories, but not all exact categories are abelian. Because Quillen was able to work in this more general situation, he was able to use exact categories as tools in his proofs. This technique allowed him to prove many of the basic theorems of algebraic K-theory. Additionally, it was possible to prove that the earlier definitions of Swan and Gersten were equivalent to Quillen's under certain conditions. K-theory now appeared to be a homology theory for rings and a cohomology theory for varieties. However, many of its basic theorems carried the hypothesis that the ring or variety in question was regular. One of the basic expected relations was a long exact sequence (called the "localization sequence") relating the K-theory of a variety X and an open subset U. 
Quillen was unable to prove the existence of the localization sequence in full generality. He was, however, able to prove its existence for a related theory called G-theory (or sometimes K′-theory). G-theory had been defined early in the development of the subject by Grothendieck. Grothendieck defined G0(X) for a variety X to be the free abelian group on isomorphism classes of coherent sheaves on X, modulo relations coming from exact sequences of coherent sheaves. In the categorical framework adopted by later authors, the K-theory of a variety is the K-theory of its category of vector bundles, while its G-theory is the K-theory of its category of coherent sheaves. Not only could Quillen prove the existence of a localization exact sequence for G-theory, he could prove that for a regular ring or variety, K-theory equaled G-theory, and therefore K-theory of regular varieties had a localization exact sequence. Since this sequence was fundamental to many of the facts in the subject, regularity hypotheses pervaded early work on higher K-theory. === Applications of algebraic K-theory in topology === The earliest application of algebraic K-theory to topology was Whitehead's construction of Whitehead torsion. A closely related construction was found by C. T. C. Wall in 1963. Wall found that a space X dominated by a finite complex has a generalized Euler characteristic taking values in a quotient of K0(Zπ), where π is the fundamental group of the space. This invariant is called Wall's finiteness obstruction because X is homotopy equivalent to a finite complex if and only if the invariant vanishes. Laurent Siebenmann in his thesis found an invariant similar to Wall's that gives an obstruction to an open manifold being the interior of a compact manifold with boundary. If two manifolds with boundary M and N have isomorphic interiors (in TOP, PL, or DIFF as appropriate), then the isomorphism between them defines an h-cobordism between M and N. Whitehead torsion was eventually reinterpreted in a more directly K-theoretic way. This reinterpretation happened through the study of h-cobordisms. Two n-dimensional manifolds M and N are h-cobordant if there exists an (n + 1)-dimensional manifold with boundary W whose boundary is the disjoint union of M and N and for which the inclusions of M and N into W are homotopy equivalences (in the categories TOP, PL, or DIFF). Stephen Smale's h-cobordism theorem asserted that if n ≥ 5, W is compact, and M, N, and W are simply connected, then W is isomorphic to the cylinder M × [0, 1] (in TOP, PL, or DIFF as appropriate). This theorem proved the Poincaré conjecture for n ≥ 5. If M and N are not assumed to be simply connected, then an h-cobordism need not be a cylinder. The s-cobordism theorem, due independently to Mazur, Stallings, and Barden, explains the general situation: An h-cobordism is a cylinder if and only if the Whitehead torsion of the inclusion M ⊂ W vanishes. This generalizes the h-cobordism theorem because the simple connectedness hypotheses imply that the relevant Whitehead group is trivial. In fact the s-cobordism theorem implies that there is a bijective correspondence between isomorphism classes of h-cobordisms and elements of the Whitehead group. An obvious question associated with the existence of h-cobordisms is their uniqueness. The natural notion of equivalence is isotopy. Jean Cerf proved that for simply connected smooth manifolds M of dimension at least 5, isotopy of h-cobordisms is the same as a weaker notion called pseudo-isotopy. 
Hatcher and Wagoner studied the components of the space of pseudo-isotopies and related it to a quotient of K2(Zπ). The proper context for the s-cobordism theorem is the classifying space of h-cobordisms. If M is a CAT manifold, then HCAT(M) is a space that classifies bundles of h-cobordisms on M. The s-cobordism theorem can be reinterpreted as the statement that the set of connected components of this space is the Whitehead group of π1(M). This space contains strictly more information than the Whitehead group; for example, the connected component of the trivial cobordism describes the possible cylinders on M and in particular is the obstruction to the uniqueness of a homotopy between a manifold and M × [0, 1]. Consideration of these questions led Waldhausen to introduce his algebraic K-theory of spaces. The algebraic K-theory of M is a space A(M) which is defined so that it plays essentially the same role for higher K-groups as K1(Zπ1(M)) does for M. In particular, Waldhausen showed that there is a map from A(M) to a space Wh(M) which generalizes the map K1(Zπ1(M)) → Wh(π1(M)) and whose homotopy fiber is a homology theory. In order to fully develop A-theory, Waldhausen made significant technical advances in the foundations of K-theory. Waldhausen introduced Waldhausen categories, and for a Waldhausen category C he introduced a simplicial category S⋅C (the S is for Segal) defined in terms of chains of cofibrations in C. This freed the foundations of K-theory from the need to invoke analogs of exact sequences. === Algebraic topology and algebraic geometry in algebraic K-theory === Quillen suggested to his student Kenneth Brown that it might be possible to create a theory of sheaves of spectra of which K-theory would provide an example. The sheaf of K-theory spectra would, to each open subset of a variety, associate the K-theory of that open subset. Brown developed such a theory for his thesis. Simultaneously, Gersten had the same idea. At a Seattle conference in autumn of 1972, they together discovered a spectral sequence converging from the sheaf cohomology of K n {\displaystyle {\mathcal {K}}_{n}} , the sheaf of Kn-groups on X, to the K-group of the total space. This is now called the Brown–Gersten spectral sequence. Spencer Bloch, influenced by Gersten's work on sheaves of K-groups, proved that on a regular surface, the cohomology group H 2 ( X , K 2 ) {\displaystyle H^{2}(X,{\mathcal {K}}_{2})} is isomorphic to the Chow group CH2(X) of codimension 2 cycles on X. Inspired by this, Gersten conjectured that for a regular local ring R with fraction field F, Kn(R) injects into Kn(F) for all n. Soon Quillen proved that this is true when R contains a field, and using this he proved that H p ( X , K p ) ≅ CH p ⁡ ( X ) {\displaystyle H^{p}(X,{\mathcal {K}}_{p})\cong \operatorname {CH} ^{p}(X)} for all p. This is known as Bloch's formula. While progress has been made on Gersten's conjecture since then, the general case remains open. Lichtenbaum conjectured that special values of the zeta function of a number field could be expressed in terms of the K-groups of the ring of integers of the field. These special values were known to be related to the étale cohomology of the ring of integers. Quillen therefore generalized Lichtenbaum's conjecture, predicting the existence of a spectral sequence like the Atiyah–Hirzebruch spectral sequence in topological K-theory. 
Quillen's proposed spectral sequence would start from the étale cohomology of a ring R and, in high enough degrees and after completing at a prime l invertible in R, abut to the l-adic completion of the K-theory of R. In the case studied by Lichtenbaum, the spectral sequence would degenerate, yielding Lichtenbaum's conjecture. The necessity of localizing at a prime l suggested to Browder that there should be a variant of K-theory with finite coefficients. He introduced K-theory groups Kn(R; Z/lZ) which were Z/lZ-vector spaces, and he found an analog of the Bott element in topological K-theory. Soulé used this theory to construct "étale Chern classes", an analog of topological Chern classes which took elements of algebraic K-theory to classes in étale cohomology. Unlike algebraic K-theory, étale cohomology is highly computable, so étale Chern classes provided an effective tool for detecting the existence of elements in K-theory. William G. Dwyer and Eric Friedlander then invented an analog of K-theory for the étale topology called étale K-theory. For varieties defined over the complex numbers, étale K-theory is isomorphic to topological K-theory. Moreover, étale K-theory admitted a spectral sequence similar to the one conjectured by Quillen. Thomason proved around 1980 that after inverting the Bott element, algebraic K-theory with finite coefficients became isomorphic to étale K-theory. Throughout the 1970s and early 1980s, K-theory on singular varieties still lacked adequate foundations. While it was believed that Quillen's K-theory gave the correct groups, it was not known that these groups had all of the envisaged properties. For this, algebraic K-theory had to be reformulated. This was done by Thomason in a lengthy monograph which he co-credited to his deceased friend Thomas Trobaugh, who he said gave him a key idea in a dream. Thomason combined Waldhausen's construction of K-theory with the foundations of intersection theory described in volume six of Grothendieck's Séminaire de Géométrie Algébrique du Bois Marie. There, K0 was described in terms of complexes of sheaves on algebraic varieties. Thomason discovered that if one worked in the derived category of sheaves, there was a simple description of when a complex of sheaves could be extended from an open subset of a variety to the whole variety. By applying Waldhausen's construction of K-theory to derived categories, Thomason was able to prove that algebraic K-theory had all the expected properties of a cohomology theory. In 1976, R. Keith Dennis discovered an entirely novel technique for computing K-theory based on Hochschild homology. This was based around the existence of the Dennis trace map, a homomorphism from K-theory to Hochschild homology. While the Dennis trace map seemed to be successful for calculations of K-theory with finite coefficients, it was less successful for rational calculations. Goodwillie, motivated by his "calculus of functors", conjectured the existence of a theory intermediate to K-theory and Hochschild homology. He called this theory topological Hochschild homology because its ground ring should be the sphere spectrum (considered as a ring whose operations are defined only up to homotopy). In the mid-1980s, Bokstedt gave a definition of topological Hochschild homology that satisfied nearly all of Goodwillie's conjectural properties, and this made possible further computations of K-groups. Bokstedt's version of the Dennis trace map was a transformation of spectra K → THH.
This transformation factored through the fixed points of a circle action on THH, which suggested a relationship with cyclic homology. In the course of proving an algebraic K-theory analog of the Novikov conjecture, Bokstedt, Hsiang, and Madsen introduced topological cyclic homology, which bore the same relationship to topological Hochschild homology as cyclic homology did to Hochschild homology. The Dennis trace map to topological Hochschild homology factors through topological cyclic homology, providing an even more detailed tool for calculations. In 1996, Dundas, Goodwillie, and McCarthy proved that topological cyclic homology has in a precise sense the same local structure as algebraic K-theory, so that if a calculation in K-theory or topological cyclic homology is possible, then many other "nearby" calculations follow. == Lower K-groups == The lower K-groups were discovered first, and given various ad hoc descriptions, which remain useful. Throughout, let A be a ring. === K0 === The functor K0 takes a ring A to the Grothendieck group of the set of isomorphism classes of its finitely generated projective modules, regarded as a monoid under direct sum. Any ring homomorphism A → B gives a map K0(A) → K0(B) by mapping (the class of) a projective A-module M to M ⊗A B, making K0 a covariant functor. If the ring A is commutative, we can define a subgroup of K0(A) as the set K ~ 0 ( A ) = ⋂ p prime ideal of A K e r dim p , {\displaystyle {\tilde {K}}_{0}\left(A\right)=\bigcap \limits _{{\mathfrak {p}}{\text{ prime ideal of }}A}\mathrm {Ker} \dim _{\mathfrak {p}},} where : dim p : K 0 ( A ) → Z {\displaystyle \dim _{\mathfrak {p}}:K_{0}\left(A\right)\to \mathbf {Z} } is the map sending every (class of a) finitely generated projective A-module M to the rank of the free A p {\displaystyle A_{\mathfrak {p}}} -module M p {\displaystyle M_{\mathfrak {p}}} (this module is indeed free, as any finitely generated projective module over a local ring is free). This subgroup K ~ 0 ( A ) {\displaystyle {\tilde {K}}_{0}\left(A\right)} is known as the reduced zeroth K-theory of A. If B is a ring without an identity element, we can extend the definition of K0 as follows. Let A = B⊕Z be the extension of B to a ring with unity obtained by adjoining an identity element (0,1). There is a short exact sequence B → A → Z and we define K0(B) to be the kernel of the corresponding map K0(A) → K0(Z) = Z. ==== Examples ==== (Projective) modules over a field k are vector spaces and K0(k) is isomorphic to Z, by dimension. Finitely generated projective modules over a local ring A are free and so in this case once again K0(A) is isomorphic to Z, by rank. For A a Dedekind domain, K0(A) = Pic(A) ⊕ Z, where Pic(A) is the Picard group of A, An algebro-geometric variant of this construction is applied to the category of algebraic varieties; it associates with a given algebraic variety X the Grothendieck's K-group of the category of locally free sheaves (or coherent sheaves) on X. Given a compact topological space X, the topological K-theory Ktop(X) of (real) vector bundles over X coincides with K0 of the ring of continuous real-valued functions on X. ==== Relative K0 ==== Let I be an ideal of A and define the "double" to be a subring of the Cartesian product A×A: D ( A , I ) = { ( x , y ) ∈ A × A : x − y ∈ I } . {\displaystyle D(A,I)=\{(x,y)\in A\times A:x-y\in I\}\ .} The relative K-group is defined in terms of the "double" K 0 ( A , I ) = ker ⁡ ( K 0 ( D ( A , I ) ) → K 0 ( A ) ) . 
{\displaystyle K_{0}(A,I)=\ker \left({K_{0}(D(A,I))\rightarrow K_{0}(A)}\right)\ .} where the map is induced by projection along the first factor. The relative K0(A,I) is isomorphic to K0(I), regarding I as a ring without identity. The independence from A is an analogue of the Excision theorem in homology. ==== K0 as a ring ==== If A is a commutative ring, then the tensor product of projective modules is again projective, and so tensor product induces a multiplication turning K0 into a commutative ring with the class [A] as identity. The exterior product similarly induces a λ-ring structure. The Picard group embeds as a subgroup of the group of units K0(A)∗. === K1 === Hyman Bass provided this definition, which generalizes the group of units of a ring: K1(A) is the abelianization of the infinite general linear group: K 1 ( A ) = GL ⁡ ( A ) ab = GL ⁡ ( A ) / [ GL ⁡ ( A ) , GL ⁡ ( A ) ] {\displaystyle K_{1}(A)=\operatorname {GL} (A)^{\mbox{ab}}=\operatorname {GL} (A)/[\operatorname {GL} (A),\operatorname {GL} (A)]} Here GL ⁡ ( A ) = lim → ⁡ GL ⁡ ( n , A ) {\displaystyle \operatorname {GL} (A)=\varinjlim \operatorname {GL} (n,A)} is the direct limit of the GL ⁡ ( n ) {\displaystyle \operatorname {GL} (n)} , which embeds in GL ⁡ ( n + 1 ) {\displaystyle \operatorname {GL} (n+1)} as the upper left block matrix, and [ GL ⁡ ( A ) , GL ⁡ ( A ) ] {\displaystyle [\operatorname {GL} (A),\operatorname {GL} (A)]} is its commutator subgroup. Define an elementary matrix to be one which is the sum of an identity matrix and a single off-diagonal element (this is a subset of the elementary matrices used in linear algebra). Then Whitehead's lemma states that the group E ⁡ ( A ) {\displaystyle \operatorname {E} (A)} generated by elementary matrices equals the commutator subgroup [ GL ⁡ ( A ) , GL ⁡ ( A ) ] {\displaystyle [\operatorname {GL} (A),\operatorname {GL} (A)]} . Indeed, the group GL ⁡ ( A ) / E ⁡ ( A ) {\displaystyle \operatorname {GL} (A)/\operatorname {E} (A)} was first defined and studied by Whitehead, and is called the Whitehead group of the ring A {\displaystyle A} . ==== Relative K1 ==== The relative K-group is defined in terms of the "double" K 1 ( A , I ) = ker ⁡ ( K 1 ( D ( A , I ) ) → K 1 ( A ) ) . {\displaystyle K_{1}(A,I)=\ker \left({K_{1}(D(A,I))\rightarrow K_{1}(A)}\right)\ .} There is a natural exact sequence K 1 ( A , I ) → K 1 ( A ) → K 1 ( A / I ) → K 0 ( A , I ) → K 0 ( A ) → K 0 ( A / I ) . {\displaystyle K_{1}(A,I)\rightarrow K_{1}(A)\rightarrow K_{1}(A/I)\rightarrow K_{0}(A,I)\rightarrow K_{0}(A)\rightarrow K_{0}(A/I)\ .} ==== Commutative rings and fields ==== For a commutative ring A {\displaystyle A} , one can define a determinant det : GL ⁡ ( A ) → A × {\displaystyle \det :\operatorname {GL} (A)\to A^{\times }} to the group of units of A {\displaystyle A} , which vanishes on E ⁡ ( A ) {\displaystyle \operatorname {E} (A)} and thus descends to a map det : K 1 ( A ) → A × {\displaystyle \det :K_{1}(A)\to A^{\times }} . As E ⁡ ( A ) ◃ SL ⁡ ( A ) {\displaystyle \operatorname {E} (A)\triangleleft \operatorname {SL} (A)} , one can also define the special Whitehead group S K 1 ( A ) = SL ⁡ ( A ) / E ⁡ ( A ) {\displaystyle SK_{1}(A)=\operatorname {SL} (A)/\operatorname {E} (A)} . 
This map splits via the map A × → GL ⁡ ( 1 , A ) → K 1 ( A ) {\displaystyle A^{\times }\to \operatorname {GL} (1,A)\to K_{1}(A)} (unit in the upper left corner), and hence is onto, and has the special Whitehead group as kernel, yielding the split short exact sequence: 1 → S K 1 ( A ) → K 1 ( A ) → A ∗ → 1 , {\displaystyle 1\to SK_{1}(A)\to K_{1}(A)\to A^{*}\to 1,} which is a quotient of the usual split short exact sequence defining the special linear group, namely 1 → SL ⁡ ( A ) → GL ⁡ ( A ) → A ∗ → 1. {\displaystyle 1\to \operatorname {SL} (A)\to \operatorname {GL} (A)\to A^{*}\to 1.} The determinant is split by including the group of units A × = GL ⁡ ( 1 , A ) {\displaystyle A^{\times }=\operatorname {GL} (1,A)} into the general linear group GL ⁡ ( A ) {\displaystyle \operatorname {GL} (A)} , so K 1 ( A ) {\displaystyle K_{1}(A)} splits as the direct sum of the group of units and the special Whitehead group: K 1 ( A ) ≅ A × ⊕ S K 1 ( A ) {\displaystyle K_{1}(A)\cong A^{\times }\oplus SK_{1}(A)} . When A {\displaystyle A} is a Euclidean domain (e.g. a field, or the integers) S K 1 ( A ) {\displaystyle SK_{1}(A)} vanishes, and the determinant map is an isomorphism from K 1 ( A ) {\displaystyle K_{1}(A)} to A × {\displaystyle A^{\times }} . This is false in general for PIDs, thus providing one of the rare mathematical features of Euclidean domains that do not generalize to all PIDs. An explicit PID such that S K 1 {\displaystyle SK_{1}} is nonzero was given by Ischebeck in 1980 and by Grayson in 1981. If A {\displaystyle A} is a Dedekind domain whose quotient field is an algebraic number field (a finite extension of the rationals) then Milnor (1971, corollary 16.3) shows that S K 1 ( A ) {\displaystyle SK_{1}(A)} vanishes. The vanishing of S K 1 ( A ) {\displaystyle SK_{1}(A)} can be interpreted as saying that K 1 {\displaystyle K_{1}} is generated by the image of GL 1 {\displaystyle \operatorname {GL} _{1}} in GL. When this fails, one can ask whether K 1 {\displaystyle K_{1}} is generated by the image of GL 2 {\displaystyle \operatorname {GL} _{2}} . For a Dedekind domain, this is the case: indeed, K 1 {\displaystyle K_{1}} is generated by the images of GL 1 {\displaystyle \operatorname {GL} _{1}} and SL 2 {\displaystyle \operatorname {SL} _{2}} in GL {\displaystyle \operatorname {GL} } . The subgroup of S K 1 {\displaystyle SK_{1}} generated by SL 2 {\displaystyle \operatorname {SL} _{2}} may be studied by Mennicke symbols. For Dedekind domains with all quotients by maximal ideals finite, S K 1 {\displaystyle SK_{1}} is a torsion group. For a non-commutative ring, the determinant cannot in general be defined, but the map GL ⁡ ( A ) → K 1 ( A ) {\displaystyle \operatorname {GL} (A)\to K_{1}(A)} is a generalisation of the determinant. ==== Central simple algebras ==== In the case of a central simple algebra A {\displaystyle A} over a field F {\displaystyle F} , the reduced norm provides a generalisation of the determinant giving a map K 1 ( A ) → F × {\displaystyle K_{1}(A)\to F^{\times }} and S K 1 ( A ) {\displaystyle SK_{1}(A)} may be defined as the kernel. Wang's theorem states that if A {\displaystyle A} has prime degree then S K 1 ( A ) {\displaystyle SK_{1}(A)} is trivial, and this may be extended to square-free degree. Wang also showed that S K 1 ( A ) {\displaystyle SK_{1}(A)} is trivial for any central simple algebra over a number field, but Platonov has given examples of algebras of degree prime squared for which S K 1 ( A ) {\displaystyle SK_{1}(A)} is non-trivial. 
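To make the role of elementary matrices concrete, the following short sketch (an illustrative addition in Python, not drawn from the sources cited here) reduces an invertible matrix over the field Q to diag(det, 1, ..., 1) using only operations of the form "add a multiple of one row to another", which generate E(A). This is the computational content of the statement above that SK1 vanishes for a Euclidean domain, so that the determinant alone detects the class in K1.

```python
from fractions import Fraction

def add_multiple_of_row(M, target, source, factor):
    """M[target] += factor * M[source]: left multiplication by an elementary matrix,
    so the class of M in K_1 = GL/E is unchanged."""
    for col in range(len(M)):
        M[target][col] += factor * M[source][col]

def det(M):
    """Exact determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def reduce_mod_elementary(M):
    """Reduce an invertible matrix over Q to diag(d, 1, ..., 1), d = det(M),
    using only row additions (no swaps, no scalings)."""
    n = len(M)
    M = [[Fraction(x) for x in row] for row in M]
    d = det(M)
    # Factor out diag(d, 1, ..., 1): the remaining matrix lies in SL_n(Q),
    # and the loop below pushes it into E(Q), i.e. reduces it to the identity.
    M[0] = [x / d for x in M[0]]
    for i in range(n - 1):
        if M[i][i] == 0:                          # make the pivot nonzero
            j = next(r for r in range(i + 1, n) if M[r][i] != 0)
            add_multiple_of_row(M, i, j, 1)
        if M[i + 1][i] == 0:                      # ensure a nonzero entry below it
            add_multiple_of_row(M, i + 1, i, 1)
        # make the pivot exactly 1 without using a scaling operation
        add_multiple_of_row(M, i, i + 1, (1 - M[i][i]) / M[i + 1][i])
        for r in range(n):                        # clear the rest of column i
            if r != i and M[r][i] != 0:
                add_multiple_of_row(M, r, i, -M[r][i])
    for r in range(n - 1):                        # clear the last column
        add_multiple_of_row(M, r, n - 1, -M[r][n - 1])
    return d, M

d, R = reduce_mod_elementary([[0, 2, 1], [3, 1, 4], [1, 0, 2]])
print(d)                                          # -5: the class in K_1(Q) = Q^x
print(R == [[1 if i == j else 0 for j in range(3)] for i in range(3)])   # True
```

Since the arithmetic is exact, the final matrix is exactly the identity: the SL3 part of the input has been absorbed into E(Q), and only the determinant survives as the K1 class.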
=== K2 === John Milnor found the right definition of K2: it is the center of the Steinberg group St(A) of A. It can also be defined as the kernel of the map φ : St ⁡ ( A ) → G L ( A ) , {\displaystyle \varphi \colon \operatorname {St} (A)\to \mathrm {GL} (A),} or as the Schur multiplier of the group of elementary matrices. For a field, K2 is determined by Steinberg symbols: this leads to Matsumoto's theorem. One can compute that K2 is zero for any finite field. The computation of K2(Q) is complicated: Tate proved K 2 ( Q ) = ( Z / 4 ) ∗ × ∏ p odd prime ( Z / p ) ∗ {\displaystyle K_{2}(\mathbf {Q} )=(\mathbf {Z} /4)^{*}\times \prod _{p{\text{ odd prime}}}(\mathbf {Z} /p)^{*}\ } and remarked that the proof followed Gauss's first proof of the Law of Quadratic Reciprocity. For non-Archimedean local fields, the group K2(F) is the direct sum of a finite cyclic group of order m, say, and a divisible group K2(F)m. We have K2(Z) = Z/2, and in general K2 is finite for the ring of integers of a number field. We further have K2(Z/n) = Z/2 if n is divisible by 4, and otherwise zero. ==== Matsumoto's theorem ==== Matsumoto's theorem states that for a field k, the second K-group is given by K 2 ( k ) = k × ⊗ Z k × / ⟨ a ⊗ ( 1 − a ) ∣ a ≠ 0 , 1 ⟩ . {\displaystyle K_{2}(k)=k^{\times }\otimes _{\mathbf {Z} }k^{\times }/\langle a\otimes (1-a)\mid a\not =0,1\rangle .} Matsumoto's original theorem is even more general: For any root system, it gives a presentation for the unstable K-theory. This presentation is different from the one given here only for symplectic root systems. For non-symplectic root systems, the unstable second K-group with respect to the root system is exactly the stable K-group for GL(A). Unstable second K-groups (in this context) are defined by taking the kernel of the universal central extension of the Chevalley group of universal type for a given root system. This construction yields the kernel of the Steinberg extension for the root systems An (n > 1) and, in the limit, stable second K-groups. ==== Long exact sequences ==== If A is a Dedekind domain with field of fractions F then there is a long exact sequence K 2 F → ⊕ p K 1 A / p → K 1 A → K 1 F → ⊕ p K 0 A / p → K 0 A → K 0 F → 0 {\displaystyle K_{2}F\rightarrow \oplus _{\mathbf {p} }K_{1}A/{\mathbf {p} }\rightarrow K_{1}A\rightarrow K_{1}F\rightarrow \oplus _{\mathbf {p} }K_{0}A/{\mathbf {p} }\rightarrow K_{0}A\rightarrow K_{0}F\rightarrow 0\ } where p runs over all prime ideals of A. There is also an extension of the exact sequence for relative K1 and K0: K 2 ( A ) → K 2 ( A / I ) → K 1 ( A , I ) → K 1 ( A ) ⋯ . {\displaystyle K_{2}(A)\rightarrow K_{2}(A/I)\rightarrow K_{1}(A,I)\rightarrow K_{1}(A)\cdots \ .} ==== Pairing ==== There is a pairing on K1 with values in K2. Given commuting matrices X and Y over A, take elements x and y in the Steinberg group with X,Y as images. The commutator x y x − 1 y − 1 {\displaystyle xyx^{-1}y^{-1}} is an element of K2. The map is not always surjective. == Milnor K-theory == The above expression for K2 of a field k led Milnor to the following definition of "higher" K-groups by K ∗ M ( k ) := T ∗ ( k × ) / ( a ⊗ ( 1 − a ) ) , {\displaystyle K_{*}^{M}(k):=T^{*}(k^{\times })/(a\otimes (1-a)),} thus as graded parts of a quotient of the tensor algebra of the multiplicative group k× by the two-sided ideal, generated by the { a ⊗ ( 1 − a ) : a ≠ 0 , 1 } . {\displaystyle \left\{a\otimes (1-a):\ a\neq 0,1\right\}.} For n = 0,1,2 these coincide with those below, but for n ≧ 3 they differ in general. 
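Because Matsumoto's presentation above is completely explicit, it can be checked by brute force for small prime fields. The following sketch (an illustrative addition using only the Python standard library) writes F_p^× additively via a discrete logarithm, so that F_p^× ⊗_Z F_p^× becomes Z/(p − 1), and then divides out the subgroup generated by the Steinberg elements a ⊗ (1 − a); the quotient comes out trivial, in line with the vanishing of K2 for finite fields noted above.

```python
from math import gcd

def k2_of_prime_field(p):
    """K_2(F_p) via Matsumoto's presentation, computed by brute force."""
    order = p - 1
    # a generator of the cyclic group F_p^x
    g = next(x for x in range(2, p)
             if len({pow(x, k, p) for k in range(order)}) == order)
    dlog = {pow(g, k, p): k for k in range(order)}        # discrete logarithm
    # F_p^x (x) F_p^x = Z/(p-1) via g^i (x) g^j |-> i*j; impose a (x) (1-a) = 0
    relations = [(dlog[a] * dlog[(1 - a) % p]) % order for a in range(2, p)]
    d = order
    for rel in relations:
        d = gcd(d, rel)
    return d      # the quotient is cyclic of this order; 1 means K_2(F_p) = 0

for p in (3, 5, 7, 11, 13):
    print(p, k2_of_prime_field(p))    # prints 1 each time, i.e. K_2 vanishes
```

In higher degrees, however, the Milnor groups and the Quillen K-groups of a finite field diverge, as the next example shows.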
For example, we have KMn(Fq) = 0 for n ≧ 2 but KnFq is nonzero for odd n (see below). The tensor product on the tensor algebra induces a product K m × K n → K m + n {\displaystyle K_{m}\times K_{n}\rightarrow K_{m+n}} making K ∗ M ( F ) {\displaystyle K_{*}^{M}(F)} a graded ring which is graded-commutative. The images of elements a 1 ⊗ ⋯ ⊗ a n {\displaystyle a_{1}\otimes \cdots \otimes a_{n}} in K n M ( k ) {\displaystyle K_{n}^{M}(k)} are termed symbols, denoted { a 1 , … , a n } {\displaystyle \{a_{1},\ldots ,a_{n}\}} . For integer m invertible in k there is a map ∂ : k ∗ → H 1 ( k , μ m ) {\displaystyle \partial :k^{*}\rightarrow H^{1}(k,\mu _{m})} where μ m {\displaystyle \mu _{m}} denotes the group of m-th roots of unity in some separable extension of k. This extends to ∂ n : k ∗ × ⋯ × k ∗ → H n ( k , μ m ⊗ n ) {\displaystyle \partial ^{n}:k^{*}\times \cdots \times k^{*}\rightarrow H^{n}\left({k,\mu _{m}^{\otimes n}}\right)\ } satisfying the defining relations of the Milnor K-group. Hence ∂ n {\displaystyle \partial ^{n}} may be regarded as a map on K n M ( k ) {\displaystyle K_{n}^{M}(k)} , called the Galois symbol map. The relation between étale (or Galois) cohomology of the field and Milnor K-theory modulo 2 is the Milnor conjecture, proven by Vladimir Voevodsky. The analogous statement for odd primes is the Bloch-Kato conjecture, proved by Voevodsky, Rost, and others. == Higher K-theory == The accepted definitions of higher K-groups were given by Quillen (1973), after a few years during which several incompatible definitions were suggested. The object of the program was to find definitions of K(R) and K(R,I) in terms of classifying spaces so that R ⇒ K(R) and (R,I) ⇒ K(R,I) are functors into a homotopy category of spaces and the long exact sequence for relative K-groups arises as the long exact homotopy sequence of a fibration K(R,I) → K(R) → K(R/I). Quillen gave two constructions, the "plus-construction" and the "Q-construction", the latter subsequently modified in different ways. The two constructions yield the same K-groups. === The +-construction === One possible definition of higher algebraic K-theory of rings was given by Quillen K n ( R ) = π n ( B GL ⁡ ( R ) + ) , {\displaystyle K_{n}(R)=\pi _{n}(B\operatorname {GL} (R)^{+}),} Here πn is a homotopy group, GL(R) is the direct limit of the general linear groups over R for the size of the matrix tending to infinity, B is the classifying space construction of homotopy theory, and the + is Quillen's plus construction. He originally found this idea while studying the group cohomology of G L n ( F q ) {\displaystyle GL_{n}(\mathbb {F} _{q})} and noted some of his calculations were related to K 1 ( F q ) {\displaystyle K_{1}(\mathbb {F} _{q})} . This definition only holds for n > 0 so one often defines the higher algebraic K-theory via K n ( R ) = π n ( B GL ⁡ ( R ) + × K 0 ( R ) ) {\displaystyle K_{n}(R)=\pi _{n}(B\operatorname {GL} (R)^{+}\times K_{0}(R))} Since BGL(R)+ is path connected and K0(R) discrete, this definition doesn't differ in higher degrees and also holds for n = 0. === The Q-construction === The Q-construction gives the same results as the +-construction, but it applies in more general situations. Moreover, the definition is more direct in the sense that the K-groups, defined via the Q-construction are functorial by definition. This fact is not automatic in the plus-construction. 
Suppose P {\displaystyle P} is an exact category; associated to P {\displaystyle P} a new category Q P {\displaystyle QP} is defined, objects of which are those of P {\displaystyle P} and morphisms from M′ to M″ are isomorphism classes of diagrams M ′ ⟵ N ⟶ M ″ , {\displaystyle M'\longleftarrow N\longrightarrow M'',} where the first arrow is an admissible epimorphism and the second arrow is an admissible monomorphism. Note the morphisms in Q P {\displaystyle QP} are analogous to the definitions of morphisms in the category of motives, where morphisms are given as correspondences Z ⊂ X × Y {\displaystyle Z\subset X\times Y} such that X ← Z → Y {\displaystyle X\leftarrow Z\rightarrow Y} is a diagram where the arrow on the left is a covering map (hence surjective) and the arrow on the right is injective. This category can then be turned into a topological space using the classifying space construction B Q P {\displaystyle BQP} , which is defined to be the geometric realisation of the nerve of Q P {\displaystyle QP} . Then, the i-th K-group of the exact category P {\displaystyle P} is then defined as K i ( P ) = π i + 1 ( B Q P , 0 ) {\displaystyle K_{i}(P)=\pi _{i+1}(\mathrm {BQ} P,0)} with a fixed zero-object 0 {\displaystyle 0} . Note the classifying space of a groupoid B G {\displaystyle B{\mathcal {G}}} moves the homotopy groups up one degree, hence the shift in degrees for K i {\displaystyle K_{i}} being π i + 1 {\displaystyle \pi _{i+1}} of a space. This definition coincides with the above definition of K0(P). If P is the category of finitely generated projective R-modules, this definition agrees with the above BGL+ definition of Kn(R) for all n. More generally, for a scheme X, the higher K-groups of X are defined to be the K-groups of (the exact category of) locally free coherent sheaves on X. The following variant of this is also used: instead of finitely generated projective (= locally free) modules, take finitely generated modules. The resulting K-groups are usually written Gn(R). When R is a noetherian regular ring, then G- and K-theory coincide. Indeed, the global dimension of regular rings is finite, i.e. any finitely generated module has a finite projective resolution P* → M, and a simple argument shows that the canonical map K0(R) → G0(R) is an isomorphism, with [M]=Σ ± [Pn]. This isomorphism extends to the higher K-groups, too. === The S-construction === A third construction of K-theory groups is the S-construction, due to Waldhausen. It applies to categories with cofibrations (also called Waldhausen categories). This is a more general concept than exact categories. == Examples == While the Quillen algebraic K-theory has provided deep insight into various aspects of algebraic geometry and topology, the K-groups have proved particularly difficult to compute except in a few isolated but interesting cases. (See also: K-groups of a field.) === Algebraic K-groups of finite fields === The first and one of the most important calculations of the higher algebraic K-groups of a ring were made by Quillen himself for the case of finite fields: If Fq is the finite field with q elements, then: K0(Fq) = Z, K2i(Fq) = 0 for i ≥1, K2i–1(Fq) = Z/(q i − 1)Z for i ≥ 1. Rick Jardine (1993) reproved Quillen's computation using different methods. === Algebraic K-groups of rings of integers === Quillen proved that if A is the ring of algebraic integers in an algebraic number field F (a finite extension of the rationals), then the algebraic K-groups of A are finitely generated. 
Armand Borel used this to calculate Ki(A) and Ki(F) modulo torsion. For example, for the integers Z, Borel proved that (modulo torsion) Ki (Z)/tors.=0 for positive i unless i=4k+1 with k positive K4k+1 (Z)/tors.= Z for positive k. The torsion subgroups of K2i+1(Z), and the orders of the finite groups K4k+2(Z) have recently been determined, but whether the latter groups are cyclic, and whether the groups K4k(Z) vanish depends upon Vandiver's conjecture about the class groups of cyclotomic integers. See Quillen–Lichtenbaum conjecture for more details. == Applications and open questions == Algebraic K-groups are used in conjectures on special values of L-functions and the formulation of a non-commutative main conjecture of Iwasawa theory and in construction of higher regulators. Parshin's conjecture concerns the higher algebraic K-groups for smooth varieties over finite fields, and states that in this case the groups vanish up to torsion. Another fundamental conjecture due to Hyman Bass (Bass' conjecture) says that all of the groups Gn(A) are finitely generated when A is a finitely generated Z-algebra. (The groups Gn(A) are the K-groups of the category of finitely generated A-modules) == See also == Additive K-theory Bloch's formula Fundamental theorem of algebraic K-theory Basic theorems in algebraic K-theory K-theory K-theory of a category K-group of a field K-theory spectrum Redshift conjecture Topological K-theory Rigidity (K-theory) == Notes == == References == Bass, Hyman (1968), Algebraic K-theory, Mathematics Lecture Note Series, New York-Amsterdam: W.A. Benjamin, Inc., Zbl 0174.30302 Friedlander, Eric; Grayson, Daniel, eds. (2005), Handbook of K-Theory, Berlin, New York: Springer-Verlag, doi:10.1007/3-540-27855-9, ISBN 978-3-540-30436-4, MR 2182598 Friedlander, Eric M.; Weibel, Charles W. (1999), An overview of algebraic K-theory, World Sci. Publ., River Edge, NJ, pp. 1–119, MR 1715873 Gille, Philippe; Szamuely, Tamás (2006), Central simple algebras and Galois cohomology, Cambridge Studies in Advanced Mathematics, vol. 101, Cambridge: Cambridge University Press, ISBN 978-0-521-86103-8, Zbl 1137.12001 Gras, Georges (2003), Class field theory. From theory to practice, Springer Monographs in Mathematics, Berlin: Springer-Verlag, ISBN 978-3-540-44133-5, Zbl 1019.11032 Jardine, John Frederick (1993), "The K-theory of finite fields, revisited", K-Theory, 7 (6): 579–595, doi:10.1007/BF00961219, MR 1268594 Lam, Tsit-Yuen (2005), Introduction to Quadratic Forms over Fields, Graduate Studies in Mathematics, vol. 67, American Mathematical Society, ISBN 978-0-8218-1095-8, MR 2104929, Zbl 1068.11023 Lemmermeyer, Franz (2000), Reciprocity laws. From Euler to Eisenstein, Springer Monographs in Mathematics, Berlin: Springer-Verlag, doi:10.1007/978-3-662-12893-0, ISBN 978-3-540-66957-9, MR 1761696, Zbl 0949.11002 Milnor, John Willard (1970), "Algebraic K-theory and quadratic forms", Inventiones Mathematicae, 9 (4): 318–344, Bibcode:1970InMat...9..318M, doi:10.1007/BF01425486, ISSN 0020-9910, MR 0260844 Milnor, John Willard (1971), Introduction to algebraic K-theory, Annals of Mathematics Studies, vol. 72, Princeton, NJ: Princeton University Press, MR 0349811, Zbl 0237.18005 (lower K-groups) Quillen, Daniel (1973), "Higher algebraic K-theory. I", Algebraic K-theory, I: Higher K-theories (Proc. Conf., Battelle Memorial Inst., Seattle, Wash., 1972), Lecture Notes in Math, vol. 341, Berlin, New York: Springer-Verlag, pp. 
85–147, doi:10.1007/BFb0067053, ISBN 978-3-540-06434-3, MR 0338129 Quillen, Daniel (1975), "Higher algebraic K-theory", Proceedings of the International Congress of Mathematicians (Vancouver, B. C., 1974), Vol. 1, Montreal, Quebec: Canad. Math. Congress, pp. 171–176, MR 0422392 (Quillen's Q-construction) Quillen, Daniel (1974), "Higher K-theory for categories with exact sequences", New developments in topology (Proc. Sympos. Algebraic Topology, Oxford, 1972), London Math. Soc. Lecture Note Ser., vol. 11, Cambridge University Press, pp. 95–103, MR 0335604 (relation of Q-construction to plus-construction) Rosenberg, Jonathan (1994), Algebraic K-theory and its applications, Graduate Texts in Mathematics, vol. 147, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4612-4314-4, ISBN 978-0-387-94248-3, MR 1282290, Zbl 0801.19001. Errata Seiler, Wolfgang (1988), "λ-Rings and Adams Operations in Algebraic K-Theory", in Rapoport, M.; Schneider, P.; Schappacher, N. (eds.), Beilinson's Conjectures on Special Values of L-Functions, Boston, MA: Academic Press, ISBN 978-0-12-581120-0 Silvester, John R. (1981), Introduction to algebraic K-theory, Chapman and Hall Mathematics Series, London, New York: Chapman and Hall, ISBN 978-0-412-22700-4, Zbl 0468.18006 Weibel, Charles (2005), "Algebraic K-theory of rings of integers in local and global fields" (PDF), Handbook of K-theory, Berlin, New York: Springer-Verlag, pp. 139–190, doi:10.1007/3-540-27855-9_5, ISBN 978-3-540-23019-9, MR 2181823 (survey article) Weibel, Charles (1999), "The development of algebraic 𝐾-theory before 1980", The development of algebraic K-theory before 1980, Contemporary Mathematics, vol. 243, Providence, RI: American Mathematical Society, pp. 211–238, doi:10.1090/conm/243/03695, ISBN 978-0-8218-1087-3, MR 1732049 == Further reading == Lluis-Puebla, Emilio; Loday, Jean-Louis; Gillet, Henri; Soulé, Christophe; Snaith, Victor (1992), Higher algebraic K-theory: an overview, Lecture Notes in Mathematics, vol. 1491, Berlin, Heidelberg: Springer-Verlag, ISBN 978-3-540-55007-5, Zbl 0746.19001 Magurn, Bruce A. (2009), An algebraic introduction to K-theory, Encyclopedia of Mathematics and its Applications, vol. 87 (corrected paperback ed.), Cambridge University Press, ISBN 978-0-521-10658-0 Srinivas, V. (2008), Algebraic K-theory, Modern Birkhäuser Classics (Paperback reprint of the 1996 2nd ed.), Boston, MA: Birkhäuser, ISBN 978-0-8176-4736-0, Zbl 1125.19300 Weibel, C., The K-book: An introduction to algebraic K-theory === Pedagogical references === Higher Algebraic K-Theory: an overview Rosenberg, Jonathan (1994), Algebraic K-theory and its applications, Graduate Texts in Mathematics, vol. 147, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4612-4314-4, ISBN 978-0-387-94248-3, MR 1282290, Zbl 0801.19001. Errata Weibel, Charles (2013), The K-book: an introduction to Algebraic K-theory, Graduate Studies in Mathematics, vol. 145, AMS === Historical references === Atiyah, Michael F.; Hirzebruch, Friedrich (1961), Vector bundles and homogeneous spaces, Proc. Sympos. Pure Math., vol. 3, American Mathematical Society, pp. 7–38 Barden, Dennis (1964), On the Structure and Classification of Differential Manifolds (Thesis), Cambridge University Bass, Hyman; Murthy, M.P. (1967), "Grothendieck groups and Picard groups of abelian group rings", Annals of Mathematics, 86 (1): 16–73, doi:10.2307/1970360, JSTOR 1970360 Bass, Hyman; Schanuel, S. 
(1962), "The homotopy theory of projective modules", Bulletin of the American Mathematical Society, 68 (4): 425–428, doi:10.1090/s0002-9904-1962-10826-x Bass, Hyman (1968), Algebraic K-theory, Benjamin Bloch, Spencer (1974), "K2 of algebraic cycles", Annals of Mathematics, 99 (2): 349–379, doi:10.2307/1970902, JSTOR 1970902 Bokstedt, M., Topological Hochschild homology. Preprint, Bielefeld, 1986. Bokstedt, M., Hsiang, W. C., Madsen, I., The cyclotomic trace and algebraic K-theory of spaces. Invent. Math., 111(3) (1993), 465–539. Borel, Armand; Serre, Jean-Pierre (1958), "Le theoreme de Riemann–Roch", Bulletin de la Société Mathématique de France, 86: 97–136, doi:10.24033/bsmf.1500 Browder, William (1978), Algebraic K-theory with coefficients Z/p, Lecture Notes in Mathematics, vol. 657, Springer–Verlag, pp. 40–84 Brown, K., Gersten, S., Algebraic K-theory as generalized sheaf cohomology, Algebraic K-theory I, Lecture Notes in Math., vol. 341, Springer-Verlag, 1973, pp. 266–292. Cerf, Jean (1970), "La stratification naturelle des espaces de fonctions differentiables reelles et le theoreme de la pseudo-isotopie", Publications Mathématiques de l'IHÉS, 39: 5–173, doi:10.1007/BF02684687 Dennis, R. K., Higher algebraic K-theory and Hochschild homology, unpublished preprint (1976). Gersten, S (1971), "On the functor K2", J. Algebra, 17 (2): 212–237, doi:10.1016/0021-8693(71)90030-5 Grothendieck, Alexander, Classes de fasiceaux et theoreme de Riemann–Roch, mimeographed notes, Princeton 1957. Hatcher, Allen; Wagoner, John (1973), "Pseudo-isotopies of compact manifolds", Astérisque, 6, MR 0353337 Karoubi, Max (1968), "Foncteurs derives et K-theorie. Categories filtres", Comptes Rendus de l'Académie des Sciences, Série A-B, 267: A328 – A331 Karoubi, Max; Villamayor, O. (1971), "K-theorie algebrique et K-theorie topologique", Math. Scand., 28: 265–307, doi:10.7146/math.scand.a-11024 Matsumoto, Hideya (1969), "Sur les sous-groupes aritmetiques des groupes semi-simples deployes", Annales Scientifiques de l'École Normale Supérieure, 2: 1–62, doi:10.24033/asens.1174 Mazur, Barry (1963), "Differential topology from the point of view of simple homotopy theory" (PDF), Publications Mathématiques de l'IHÉS, 15: 5–93 Milnor, J (1970), "Algebraic K-theory and Quadratic Forms", Invent. Math., 9 (4): 318–344, Bibcode:1970InMat...9..318M, doi:10.1007/bf01425486 Milnor, J., Introduction to Algebraic K-theory, Princeton Univ. Press, 1971. Nobile, A., Villamayor, O., Sur la K-theorie algebrique, Annales Scientifiques de l'École Normale Supérieure, 4e serie, 1, no. 3, 1968, 581–616. Quillen, Daniel, Cohomology of groups, Proc. ICM Nice 1970, vol. 2, Gauthier-Villars, Paris, 1971, 47–52. Quillen, Daniel, Higher algebraic K-theory I, Algebraic K-theory I, Lecture Notes in Math., vol. 341, Springer Verlag, 1973, 85–147. Quillen, Daniel, Higher algebraic K-theory, Proc. Intern. Congress Math., Vancouver, 1974, vol. I, Canad. Math. Soc., 1975, pp. 171–176. Segal, Graeme (1974), "Categories and cohomology theories", Topology, 13 (3): 293–312, doi:10.1016/0040-9383(74)90022-6 Siebenmann, Larry, The Obstruction to Finding a Boundary for an Open Manifold of Dimension Greater than Five, Thesis, Princeton University (1965). Smale, S (1962), "On the structure of manifolds", Amer. J. Math., 84 (3): 387–399, doi:10.2307/2372978, JSTOR 2372978 Steinberg, R., Generateurs, relations et revetements de groupes algebriques, ́Colloq. Theorie des Groupes Algebriques, Gauthier-Villars, Paris, 1962, pp. 113–127. 
(French) Swan, Richard, Nonabelian homological algebra and K-theory, Proc. Sympos. Pure Math., vol. XVII, 1970, pp. 88–123. Thomason, R. W., Algebraic K-theory and étale cohomology, Ann. Scient. Ec. Norm. Sup. 18, 4e serie (1985), 437–552; erratum 22 (1989), 675–677. Thomason, R. W., Le principe de sciendage et l'inexistence d'une K-theorie de Milnor globale, Topology 31, no. 3, 1992, 571–588. Thomason, Robert W.; Trobaugh, Thomas (1990), "Higher Algebraic K-Theory of Schemes and of Derived Categories", The Grothendieck Festschrift Volume III, Progr. Math., vol. 88, Boston, MA: Birkhäuser Boston, pp. 247–435, doi:10.1007/978-0-8176-4576-2_10, ISBN 978-0-8176-3487-2, MR 1106918 Waldhausen, F., Algebraic K-theory of topological spaces. I, in Algebraic and geometric topology (Proc. Sympos. Pure Math., Stanford Univ., Stanford, Calif., 1976), Part 1, pp. 35–60, Proc. Sympos. Pure Math., XXXII, Amer. Math. Soc., Providence, R.I., 1978. Waldhausen, F., Algebraic K-theory of spaces, in Algebraic and geometric topology (New Brunswick, N.J., 1983), Lecture Notes in Mathematics, vol. 1126 (1985), 318–419. Wall, C. T. C. (1965), "Finiteness conditions for CW-complexes", Annals of Mathematics, 81 (1): 56–69, doi:10.2307/1970382, JSTOR 1970382 Whitehead, J.H.C. (1941), "On incidence matrices, nuclei and homotopy types", Annals of Mathematics, 42 (5): 1197–1239, doi:10.2307/1970465, JSTOR 1970465 Whitehead, J.H.C. (1950), "Simple homotopy types", Amer. J. Math., 72 (1): 1–57, doi:10.2307/2372133, JSTOR 2372133 Whitehead, J.H.C. (1939), "Simplicial spaces, nuclei and m-groups", Proc. London Math. Soc., 45: 243–327, doi:10.1112/plms/s2-45.1.243 == External links == The K-Theory Foundation
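As a closing illustration of the finite-field computation quoted in the Examples section above, the following tiny helper (a hypothetical convenience function, not part of any cited source) simply tabulates Quillen's answer K0(Fq) = Z, K2i(Fq) = 0 and K2i−1(Fq) = Z/(q^i − 1)Z.

```python
def k_group_of_finite_field(n, q):
    """Quillen's answer for K_n(F_q), returned as a human-readable string."""
    if n == 0:
        return "Z"
    if n % 2 == 0:
        return "0"                # K_{2i}(F_q) = 0 for i >= 1
    i = (n + 1) // 2
    return f"Z/{q**i - 1}"        # K_{2i-1}(F_q) = Z/(q^i - 1)

for n in range(8):
    print(n, k_group_of_finite_field(n, 5))
# 0 Z, 1 Z/4, 2 0, 3 Z/24, 4 0, 5 Z/124, 6 0, 7 Z/624
```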
Wikipedia:Basis function#0
In mathematics, a basis function is an element of a particular basis for a function space. Every function in the function space can be represented as a linear combination of basis functions, just as every vector in a vector space can be represented as a linear combination of basis vectors. In numerical analysis and approximation theory, basis functions are also called blending functions, because of their use in interpolation: In this application, a mixture of the basis functions provides an interpolating function (with the "blend" depending on the evaluation of the basis functions at the data points). == Examples == === Monomial basis for Cω === The monomial basis for the vector space of analytic functions is given by { x n ∣ n ∈ N } . {\displaystyle \{x^{n}\mid n\in \mathbb {N} \}.} This basis is used in Taylor series, amongst others. === Monomial basis for polynomials === The monomial basis also forms a basis for the vector space of polynomials. After all, every polynomial can be written as a 0 + a 1 x 1 + a 2 x 2 + ⋯ + a n x n {\displaystyle a_{0}+a_{1}x^{1}+a_{2}x^{2}+\cdots +a_{n}x^{n}} for some n ∈ N {\displaystyle n\in \mathbb {N} } , which is a linear combination of monomials. === Fourier basis for L2[0,1] === Sines and cosines form an (orthonormal) Schauder basis for square-integrable functions on a bounded domain. As a particular example, the collection { 2 sin ⁡ ( 2 π n x ) ∣ n ∈ N } ∪ { 2 cos ⁡ ( 2 π n x ) ∣ n ∈ N } ∪ { 1 } {\displaystyle \{{\sqrt {2}}\sin(2\pi nx)\mid n\in \mathbb {N} \}\cup \{{\sqrt {2}}\cos(2\pi nx)\mid n\in \mathbb {N} \}\cup \{1\}} forms a basis for L2[0,1]. == See also == == References == Itô, Kiyosi (1993). Encyclopedic Dictionary of Mathematics (2nd ed.). MIT Press. p. 1141. ISBN 0-262-59020-4.
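As a concrete illustration of "blending" with the Fourier basis listed above, the following sketch (an illustrative addition, assuming NumPy) expands f(x) = x on [0, 1] in the orthonormal basis {1, √2 sin(2πnx), √2 cos(2πnx)} truncated at frequency N, and shows the L2 error of the blend shrinking as more basis functions are included.

```python
import numpy as np

def fourier_basis(N):
    """The orthonormal functions listed above: 1, sqrt(2)*sin(2*pi*n*x), sqrt(2)*cos(2*pi*n*x)."""
    funcs = [lambda x: np.ones_like(x)]
    for n in range(1, N + 1):
        funcs.append(lambda x, n=n: np.sqrt(2.0) * np.sin(2 * np.pi * n * x))
        funcs.append(lambda x, n=n: np.sqrt(2.0) * np.cos(2 * np.pi * n * x))
    return funcs

def blend(f, N, M=100000):
    """Truncated expansion of f in the basis, i.e. the 'blend' of basis functions."""
    x = (np.arange(M) + 0.5) / M                 # midpoint rule on [0, 1]
    approx = np.zeros_like(x)
    for phi in fourier_basis(N):
        c = np.mean(f(x) * phi(x))               # coefficient <f, phi> in L^2[0, 1]
        approx += c * phi(x)
    return x, approx

f = lambda x: x
for N in (1, 5, 25):
    x, fN = blend(f, N)
    err = float(np.sqrt(np.mean((f(x) - fN) ** 2)))
    print(N, round(err, 4))                      # the L^2 error decreases as N grows
```

For a non-orthonormal basis such as the monomials, the coefficients would instead come from solving a linear system rather than from simple inner products.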
Wikipedia:Baudhayana sutras#0
The Baudhāyana sūtras (Sanskrit: बौधायन सूत्रस् ) are a group of Vedic Sanskrit texts which cover dharma, daily ritual, and mathematics, and are among the oldest Dharma-related texts of Hinduism that have survived into the modern age from the 1st-millennium BCE. They belong to the Taittiriya branch of the Krishna Yajurveda school and are among the earliest texts of the genre. The Baudhayana sūtras consist of six texts: the Śrautasûtra, probably in 19 Praśnas (questions), the Karmāntasûtra in 20 Adhyāyas (chapters), the Dwaidhasûtra in 4 Praśnas, the Grihyasutra in 4 Praśnas, the Dharmasûtra in 4 Praśnas and the Śulbasûtra in 3 Adhyāyas. The Baudhāyana Śulbasûtra is noted for containing several early mathematical results, including an approximation of the square root of 2 and the statement of the Pythagorean theorem. == Baudhāyana Shrautasūtra == Baudhayana's Śrauta sūtras related to performing Vedic sacrifices have followers in some Smārta brāhmaṇas (Iyers) and some Iyengars of Tamil Nadu, Yajurvedis or Namboothiris of Kerala, Gurukkal Brahmins (Aadi Saivas) and Kongu Vellalars. The followers of this sūtra follow a different method and do 24 Tila-tarpaṇa, as Lord Krishna had done tarpaṇa on the day before amāvāsyā; they call themselves Baudhāyana Amavasya. == Baudhāyana Dharmasūtra == The Dharmasūtra of Baudhāyana like that of Apastamba also forms a part of the larger Kalpasutra. Likewise, it is composed of praśnas which literally means 'questions' or books. The structure of this Dharmasūtra is not very clear because it came down in an incomplete manner. Moreover, the text has undergone alterations in the form of additions and explanations over a period of time. The praśnas consist of the Srautasutra and other ritual treatises, the Sulvasutra which deals with vedic geometry, and the Grhyasutra which deals with domestic rituals. There are no commentaries on this Dharmasūtra with the exception of Govindasvāmin's Vivaraṇa. The date of the commentary is uncertain but according to Olivelle it is not very ancient. Also the commentary is inferior in comparison to that of Haradatta on Āpastamba and Gautama. This Dharmasūtra is divided into four books. Olivelle states that Book One and the first sixteen chapters of Book Two are the 'Proto-Baudhayana' even though this section has undergone alteration. Scholars like Bühler and Kane agree that the last two books of the Dharmasūtra are later additions. Chapters 17 and 18 in Book Two lay emphasis on various types of ascetics and ascetic practices. The first book is primarily devoted to the student and deals with topics related to studentship. It also refers to social classes, the role of the king, marriage, and suspension of Vedic recitation. Book two refers to penances, inheritance, women, householder, orders of life, ancestral offerings. Book three refers to holy householders, forest hermits and penances. Book four primarily refers to the yogic practices and penances along with offenses regarding marriage. == Baudhāyana Śulvasūtra == === Pythagorean theorem === The Baudhāyana Śulvasūtra states the rule referred to today in most of the world as the Pythagorean Theorem. The rule was known to a number of ancient civilizations, including also the Greek and the Chinese, and was recorded in Mesopotamia as far back as 1800 BCE. For the most part, the Śulvasūtras do not contain proofs of the rules which they describe.
The rule stated in the Baudhāyana Śulvasūtra is: दीर्घचतुरस्रस्याक्ष्णया रज्जुः पार्श्वमानी तिर्यग् मानी च यत् पृथग् भूते कुरूतस्तदुभयं करोति ॥ dīrghachatursrasyākṣaṇayā rajjuḥ pārśvamānī, tiryagmānī, cha yatpṛthagbhūte kurutastadubhayāṅ karoti. The diagonal of an oblong produces by itself both the areas which the two sides of the oblong produce separately. The diagonal and sides referred to are those of a rectangle (oblong), and the areas are those of the squares having these line segments as their sides. Since the diagonal of a rectangle is the hypotenuse of the right triangle formed by two adjacent sides, the statement is seen to be equivalent to the Pythagorean theorem. Baudhāyana also provides a statement using a rope measure of the reduced form of the Pythagorean theorem for an isosceles right triangle: The cord which is stretched across a square produces an area double the size of the original square. === Circling the square === Another problem tackled by Baudhāyana is that of finding a circle whose area is the same as that of a square (the reverse of squaring the circle). His sūtra i.58 gives this construction: Draw half its diagonal about the centre towards the East–West line; then describe a circle together with a third part of that which lies outside the square. Explanation: Draw the half-diagonal of the square, which is larger than the half-side by x = a 2 2 − a 2 {\displaystyle x={a \over 2}{\sqrt {2}}-{a \over 2}} . Then draw a circle with radius a 2 + x 3 {\displaystyle {a \over 2}+{x \over 3}} , or a 2 + a 6 ( 2 − 1 ) {\displaystyle {a \over 2}+{a \over 6}({\sqrt {2}}-1)} , which equals a 6 ( 2 + 2 ) {\displaystyle {a \over 6}(2+{\sqrt {2}})} . Now ( 2 + 2 ) 2 ≈ 11.66 ≈ 36.6 π {\displaystyle (2+{\sqrt {2}})^{2}\approx 11.66\approx {36.6 \over \pi }} , so the area π r 2 ≈ π × a 2 6 2 × 36.6 π ≈ a 2 {\displaystyle {\pi }r^{2}\approx \pi \times {a^{2} \over 6^{2}}\times {36.6 \over \pi }\approx a^{2}} . === Square root of 2 === Baudhāyana i.61-2 (elaborated in Āpastamba Sulbasūtra i.6) gives the length of the diagonal of a square in terms of its sides, which is equivalent to a formula for the square root of 2: samasya dvikaraṇī. pramāṇaṃ tṛtīyena vardhayet tac caturthenātmacatustriṃśonena saviśeṣaḥ The diagonal [lit. "doubler"] of a square. The measure is to be increased by a third and by a fourth decreased by the 34th. That is its diagonal approximately. That is, 2 ≈ 1 + 1 3 + 1 3 ⋅ 4 − 1 3 ⋅ 4 ⋅ 34 = 577 408 ≈ 1.414216 , {\displaystyle {\sqrt {2}}\approx 1+{\frac {1}{3}}+{\frac {1}{3\cdot 4}}-{\frac {1}{3\cdot 4\cdot 34}}={\frac {577}{408}}\approx 1.414216,} which is correct to five decimals. Other theorems include: diagonals of rectangle bisect each other, diagonals of rhombus bisect at right angles, area of a square formed by joining the middle points of a square is half of original, the midpoints of a rectangle joined forms a rhombus whose area is half the rectangle, etc. Note the emphasis on rectangles and squares; this arises from the need to specify yajña bhūmikās—i.e. the altar on which rituals were conducted, including fire offerings (yajña). == See also == Indian mathematics List of Indian mathematicians == Notes == == References == "The Śulvasútra of Baudháyana, with the commentary by Dvárakánáthayajvan", translated by George Thibaut, was published in a series of issues of The Pandit. 
A Monthly Journal, of the Benares College, devoted to Sanskrit Literature: (1875) 9 (108): 292–298 (1875–1876) 10 (109): 17–22, (110): 44–50, (111): 72–74, (114): 139–146, (115): 166–170, (116): 186–194, (117): 209–218 (new series) (1876–1877) 1 (5): 316–322, (9): 556–578, (10): 626–642, (11): 692–706, (12): 761–770 George Gheverghese Joseph. The Crest of the Peacock: Non-European Roots of Mathematics, 2nd Edition. Penguin Books, 2000. ISBN 0-14-027778-1. Vincent J. Katz. A History of Mathematics: An Introduction, 2nd Edition. Addison-Wesley, 1998. ISBN 0-321-01618-1 S. Balachandra Rao, Indian Mathematics and Astronomy: Some Landmarks. Jnana Deep Publications, Bangalore, 1998. ISBN 81-900962-0-6 O'Connor, John J.; Robertson, Edmund F., "Baudhayana sutras", MacTutor History of Mathematics Archive, University of St Andrews St Andrews University, 2000. Ian G. Pearce. Sulba Sutras at the MacTutor archive. St Andrews University, 2002. B.B. Dutta."The Science of the Shulba".
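Both approximations in the Śulvasūtra section above are easy to check numerically; the following short sketch (an illustrative addition, Python standard library only) verifies that Baudhāyana's value 577/408 agrees with √2 to five decimal places and that the circle constructed in sūtra i.58 has area within about 2% of the original square.

```python
from fractions import Fraction
import math

# Baudhayana's value for the diagonal of the unit square:
# 1 + 1/3 + 1/(3*4) - 1/(3*4*34) = 577/408
approx = Fraction(1) + Fraction(1, 3) + Fraction(1, 3 * 4) - Fraction(1, 3 * 4 * 34)
print(approx, float(approx), math.sqrt(2))          # 577/408  1.4142156...  1.4142135...
print(abs(float(approx) - math.sqrt(2)) < 5e-6)     # True: the values agree to five decimals

# Circling the square (sutra i.58): radius a/2 + (sqrt(2) - 1)*a/6 = a*(2 + sqrt(2))/6
a = 1.0
r = a * (2 + math.sqrt(2)) / 6
print(math.pi * r ** 2)                             # about 1.017, i.e. within ~2% of a*a
```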
Wikipedia:Bauer–Fike theorem#0
In mathematics, the Bauer–Fike theorem is a standard result in the perturbation theory of the eigenvalue of a complex-valued diagonalizable matrix. In its substance, it states an absolute upper bound for the deviation of one perturbed matrix eigenvalue from a properly chosen eigenvalue of the exact matrix. Informally speaking, what it says is that the sensitivity of the eigenvalues is estimated by the condition number of the matrix of eigenvectors. The theorem was proved by Friedrich L. Bauer and C. T. Fike in 1960. == The setup == In what follows we assume that: A ∈ Cn,n is a diagonalizable matrix; V ∈ Cn,n is the non-singular eigenvector matrix such that A = VΛV −1, where Λ is a diagonal matrix. If X ∈ Cn,n is invertible, its condition number in p-norm is denoted by κp(X) and defined by: κ p ( X ) = ‖ X ‖ p ‖ X − 1 ‖ p . {\displaystyle \kappa _{p}(X)=\|X\|_{p}\left\|X^{-1}\right\|_{p}.} == The Bauer–Fike Theorem == Bauer–Fike Theorem. Let μ be an eigenvalue of A + δA. Then there exists λ ∈ Λ(A) such that: | λ − μ | ≤ κ p ( V ) ‖ δ A ‖ p {\displaystyle |\lambda -\mu |\leq \kappa _{p}(V)\|\delta A\|_{p}} Proof. We can suppose μ ∉ Λ(A), otherwise take λ = μ and the result is trivially true since κp(V) ≥ 1. Since μ is an eigenvalue of A + δA, we have det(A + δA − μI) = 0 and so 0 = det ( A + δ A − μ I ) = det ( V − 1 ) det ( A + δ A − μ I ) det ( V ) = det ( V − 1 ( A + δ A − μ I ) V ) = det ( V − 1 A V + V − 1 δ A V − V − 1 μ I V ) = det ( Λ + V − 1 δ A V − μ I ) = det ( Λ − μ I ) det ( ( Λ − μ I ) − 1 V − 1 δ A V + I ) {\displaystyle {\begin{aligned}0&=\det(A+\delta A-\mu I)\\&=\det(V^{-1})\det(A+\delta A-\mu I)\det(V)\\&=\det \left(V^{-1}(A+\delta A-\mu I)V\right)\\&=\det \left(V^{-1}AV+V^{-1}\delta AV-V^{-1}\mu IV\right)\\&=\det \left(\Lambda +V^{-1}\delta AV-\mu I\right)\\&=\det(\Lambda -\mu I)\det \left((\Lambda -\mu I)^{-1}V^{-1}\delta AV+I\right)\\\end{aligned}}} However our assumption, μ ∉ Λ(A), implies that: det(Λ − μI) ≠ 0 and therefore we can write: det ( ( Λ − μ I ) − 1 V − 1 δ A V + I ) = 0. {\displaystyle \det \left((\Lambda -\mu I)^{-1}V^{-1}\delta AV+I\right)=0.} This reveals −1 to be an eigenvalue of ( Λ − μ I ) − 1 V − 1 δ A V . {\displaystyle (\Lambda -\mu I)^{-1}V^{-1}\delta AV.} Since all p-norms are consistent matrix norms we have |λ| ≤ ||A||p where λ is an eigenvalue of A. In this instance this gives us: 1 = | − 1 | ≤ ‖ ( Λ − μ I ) − 1 V − 1 δ A V ‖ p ≤ ‖ ( Λ − μ I ) − 1 ‖ p ‖ V − 1 ‖ p ‖ V ‖ p ‖ δ A ‖ p = ‖ ( Λ − μ I ) − 1 ‖ p κ p ( V ) ‖ δ A ‖ p {\displaystyle 1=|-1|\leq \left\|(\Lambda -\mu I)^{-1}V^{-1}\delta AV\right\|_{p}\leq \left\|(\Lambda -\mu I)^{-1}\right\|_{p}\left\|V^{-1}\right\|_{p}\|V\|_{p}\|\delta A\|_{p}=\left\|(\Lambda -\mu I)^{-1}\right\|_{p}\ \kappa _{p}(V)\|\delta A\|_{p}} But (Λ − μI)−1 is a diagonal matrix, the p-norm of which is easily computed: ‖ ( Λ − μ I ) − 1 ‖ p = max ‖ x ‖ p ≠ 0 ‖ ( Λ − μ I ) − 1 x ‖ p ‖ x ‖ p = max λ ∈ Λ ( A ) 1 | λ − μ | = 1 min λ ∈ Λ ( A ) | λ − μ | {\displaystyle \left\|\left(\Lambda -\mu I\right)^{-1}\right\|_{p}\ =\max _{\|{\boldsymbol {x}}\|_{p}\neq 0}{\frac {\left\|\left(\Lambda -\mu I\right)^{-1}{\boldsymbol {x}}\right\|_{p}}{\|{\boldsymbol {x}}\|_{p}}}=\max _{\lambda \in \Lambda (A)}{\frac {1}{|\lambda -\mu |}}\ ={\frac {1}{\min _{\lambda \in \Lambda (A)}|\lambda -\mu |}}} whence: min λ ∈ Λ ( A ) | λ − μ | ≤ κ p ( V ) ‖ δ A ‖ p . 
{\displaystyle \min _{\lambda \in \Lambda (A)}|\lambda -\mu |\leq \ \kappa _{p}(V)\|\delta A\|_{p}.} == An Alternate Formulation == The theorem can also be reformulated to better suit numerical methods. In fact, dealing with real eigensystem problems, one often has an exact matrix A, but knows only an approximate eigenvalue-eigenvector couple, (λa, va ) and needs to bound the error. The following version is useful in that situation. Bauer–Fike Theorem (Alternate Formulation). Let (λa, va ) be an approximate eigenvalue-eigenvector couple, and r = Ava − λava. Then there exists λ ∈ Λ(A) such that: | λ − λ a | ≤ κ p ( V ) ‖ r ‖ p ‖ v a ‖ p {\displaystyle \left|\lambda -\lambda ^{a}\right|\leq \kappa _{p}(V){\frac {\|{\boldsymbol {r}}\|_{p}}{\left\|{\boldsymbol {v}}^{a}\right\|_{p}}}} Proof. We can suppose λa ∉ Λ(A), otherwise take λ = λa and the result is trivially true since κp(V) ≥ 1. Then (A − λaI)−1 exists, and we can write: v a = ( A − λ a I ) − 1 r = V ( D − λ a I ) − 1 V − 1 r {\displaystyle {\boldsymbol {v}}^{a}=\left(A-\lambda ^{a}I\right)^{-1}{\boldsymbol {r}}=V\left(D-\lambda ^{a}I\right)^{-1}V^{-1}{\boldsymbol {r}}} since A is diagonalizable (here D = Λ denotes the diagonal matrix of eigenvalues); taking the p-norm of both sides, we obtain: ‖ v a ‖ p = ‖ V ( D − λ a I ) − 1 V − 1 r ‖ p ≤ ‖ V ‖ p ‖ ( D − λ a I ) − 1 ‖ p ‖ V − 1 ‖ p ‖ r ‖ p = κ p ( V ) ‖ ( D − λ a I ) − 1 ‖ p ‖ r ‖ p . {\displaystyle \left\|{\boldsymbol {v}}^{a}\right\|_{p}=\left\|V\left(D-\lambda ^{a}I\right)^{-1}V^{-1}{\boldsymbol {r}}\right\|_{p}\leq \|V\|_{p}\left\|\left(D-\lambda ^{a}I\right)^{-1}\right\|_{p}\left\|V^{-1}\right\|_{p}\|{\boldsymbol {r}}\|_{p}=\kappa _{p}(V)\left\|\left(D-\lambda ^{a}I\right)^{-1}\right\|_{p}\|{\boldsymbol {r}}\|_{p}.} However ( D − λ a I ) − 1 {\displaystyle \left(D-\lambda ^{a}I\right)^{-1}} is a diagonal matrix and its p-norm is easily computed: ‖ ( D − λ a I ) − 1 ‖ p = max ‖ x ‖ p ≠ 0 ‖ ( D − λ a I ) − 1 x ‖ p ‖ x ‖ p = max λ ∈ σ ( A ) 1 | λ − λ a | = 1 min λ ∈ σ ( A ) | λ − λ a | {\displaystyle \left\|\left(D-\lambda ^{a}I\right)^{-1}\right\|_{p}=\max _{\|{\boldsymbol {x}}\|_{p}\neq 0}{\frac {\left\|\left(D-\lambda ^{a}I\right)^{-1}{\boldsymbol {x}}\right\|_{p}}{\|{\boldsymbol {x}}\|_{p}}}=\max _{\lambda \in \sigma (A)}{\frac {1}{\left|\lambda -\lambda ^{a}\right|}}={\frac {1}{\min _{\lambda \in \sigma (A)}\left|\lambda -\lambda ^{a}\right|}}} whence: min λ ∈ Λ ( A ) | λ − λ a | ≤ κ p ( V ) ‖ r ‖ p ‖ v a ‖ p . {\displaystyle \min _{\lambda \in \Lambda (A)}\left|\lambda -\lambda ^{a}\right|\leq \kappa _{p}(V){\frac {\|{\boldsymbol {r}}\|_{p}}{\left\|{\boldsymbol {v}}^{a}\right\|_{p}}}.} == A Relative Bound == Both formulations of the Bauer–Fike theorem yield an absolute bound. The following corollary is useful whenever a relative bound is needed: Corollary. Suppose A is invertible and that μ is an eigenvalue of A + δA. Then there exists λ ∈ Λ(A) such that: | λ − μ | | λ | ≤ κ p ( V ) ‖ A − 1 δ A ‖ p {\displaystyle {\frac {|\lambda -\mu |}{|\lambda |}}\leq \kappa _{p}(V)\left\|A^{-1}\delta A\right\|_{p}} Note. ||A−1δA|| can be formally viewed as the relative variation of A, just as ⁠|λ − μ|/|λ|⁠ is the relative variation of λ. Proof. Since μ is an eigenvalue of A + δA and det(A) ≠ 0, by multiplying by −A−1 from the left we have: − A − 1 ( A + δ A ) v = − μ A − 1 v .
{\displaystyle -A^{-1}(A+\delta A){\boldsymbol {v}}=-\mu A^{-1}{\boldsymbol {v}}.} If we set: A a = μ A − 1 , ( δ A ) a = − A − 1 δ A {\displaystyle A^{a}=\mu A^{-1},\qquad (\delta A)^{a}=-A^{-1}\delta A} then we have: ( A a + ( δ A ) a − I ) v = 0 {\displaystyle \left(A^{a}+(\delta A)^{a}-I\right){\boldsymbol {v}}={\boldsymbol {0}}} which means that 1 is an eigenvalue of Aa + (δA)a, with v as an eigenvector. Now, the eigenvalues of Aa are ⁠μ/λi⁠, while it has the same eigenvector matrix as A. Applying the Bauer–Fike theorem to Aa + (δA)a with eigenvalue 1, gives us: min λ ∈ Λ ( A ) | μ λ − 1 | = min λ ∈ Λ ( A ) | λ − μ | | λ | ≤ κ p ( V ) ‖ A − 1 δ A ‖ p {\displaystyle \min _{\lambda \in \Lambda (A)}\left|{\frac {\mu }{\lambda }}-1\right|=\min _{\lambda \in \Lambda (A)}{\frac {|\lambda -\mu |}{|\lambda |}}\leq \kappa _{p}(V)\left\|A^{-1}\delta A\right\|_{p}} == The Case of Normal Matrices == If A is normal, V is a unitary matrix, therefore: ‖ V ‖ 2 = ‖ V − 1 ‖ 2 = 1 , {\displaystyle \|V\|_{2}=\left\|V^{-1}\right\|_{2}=1,} so that κ2(V) = 1. The Bauer–Fike theorem then becomes: ∃ λ ∈ Λ ( A ) : | λ − μ | ≤ ‖ δ A ‖ 2 {\displaystyle \exists \lambda \in \Lambda (A):\quad |\lambda -\mu |\leq \|\delta A\|_{2}} Or in alternate formulation: ∃ λ ∈ Λ ( A ) : | λ − λ a | ≤ ‖ r ‖ 2 ‖ v a ‖ 2 {\displaystyle \exists \lambda \in \Lambda (A):\quad \left|\lambda -\lambda ^{a}\right|\leq {\frac {\|{\boldsymbol {r}}\|_{2}}{\left\|{\boldsymbol {v}}^{a}\right\|_{2}}}} which obviously remains true if A is a Hermitian matrix. In this case, however, a much stronger result holds, known as the Weyl's theorem on eigenvalues. In the hermitian case one can also restate the Bauer–Fike theorem in the form that the map A ↦ Λ(A) that maps a matrix to its spectrum is a non-expansive function with respect to the Hausdorff distance on the set of compact subsets of C. == References == Bauer, F. L.; Fike, C. T. (1960). "Norms and Exclusion Theorems". Numer. Math. 2 (1): 137–141. doi:10.1007/BF01386217. S2CID 121278235. Eisenstat, S. C.; Ipsen, I. C. F. (1998). "Three absolute perturbation bounds for matrix eigenvalues imply relative bounds". SIAM Journal on Matrix Analysis and Applications. 20 (1): 149–158. CiteSeerX 10.1.1.45.3999. doi:10.1137/S0895479897323282.
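The bound is easy to check numerically. The following short sketch (not drawn from the references above) assumes Python with NumPy and uses an arbitrary random matrix and perturbation; it verifies, in the 2-norm, that every perturbed eigenvalue lies within the Bauer–Fike bound of some exact eigenvalue.

# Numerical check of the Bauer-Fike bound in the 2-norm.
# Assumes NumPy; the matrix A and the perturbation dA are arbitrary test data.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))          # generic real matrix; diagonalizable with probability 1
dA = 1e-3 * rng.standard_normal((n, n))  # small perturbation delta A

lam, V = np.linalg.eig(A)                # A = V diag(lam) V^{-1}
kappa2 = np.linalg.cond(V, 2)            # condition number kappa_2(V)
bound = kappa2 * np.linalg.norm(dA, 2)   # right-hand side of the theorem for p = 2

mu = np.linalg.eigvals(A + dA)           # eigenvalues of the perturbed matrix
# distance from each perturbed eigenvalue to the nearest exact eigenvalue
dist = np.array([np.min(np.abs(lam - m)) for m in mu])
assert np.all(dist <= bound)
print("max deviation:", dist.max(), " Bauer-Fike bound:", bound)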
Wikipedia:Baxter permutation#0
In combinatorial mathematics, a Baxter permutation is a permutation σ ∈ S n {\displaystyle \sigma \in S_{n}} which satisfies the following generalized pattern avoidance property: There are no indices i < j < k {\displaystyle i<j<k} such that σ ( j + 1 ) < σ ( i ) < σ ( k ) < σ ( j ) {\displaystyle \sigma (j+1)<\sigma (i)<\sigma (k)<\sigma (j)} or σ ( j ) < σ ( k ) < σ ( i ) < σ ( j + 1 ) {\displaystyle \sigma (j)<\sigma (k)<\sigma (i)<\sigma (j+1)} . Equivalently, using the notation for vincular patterns, a Baxter permutation is one that avoids the two dashed patterns 2 − 41 − 3 {\displaystyle 2-41-3} and 3 − 14 − 2 {\displaystyle 3-14-2} . For example, the permutation σ = 2413 {\displaystyle \sigma =2413} in S 4 {\displaystyle S_{4}} (written in one-line notation) is not a Baxter permutation because, taking i = 1 {\displaystyle i=1} , j = 2 {\displaystyle j=2} and k = 4 {\displaystyle k=4} , this permutation violates the first condition. These permutations were introduced by Glen E. Baxter in the context of mathematical analysis. == Enumeration == For n = 1 , 2 , 3 , … {\displaystyle n=1,2,3,\ldots } , the number a n {\displaystyle a_{n}} of Baxter permutations of length n {\displaystyle n} is 1, 2, 6, 22, 92, 422, 2074, 10754, 58202, 326240, 1882960, 11140560, 67329992, 414499438, 2593341586, 16458756586,... This is sequence OEIS: A001181 in the OEIS. In general, a n {\displaystyle a_{n}} has the following formula: a n = ∑ k = 1 n ( n + 1 k − 1 ) ( n + 1 k ) ( n + 1 k + 1 ) ( n + 1 1 ) ( n + 1 2 ) . {\displaystyle a_{n}\,=\,\sum _{k=1}^{n}{\frac {{\binom {n+1}{k-1}}{\binom {n+1}{k}}{\binom {n+1}{k+1}}}{{\binom {n+1}{1}}{\binom {n+1}{2}}}}.} In fact, this formula is graded by the number of descents in the permutations, i.e., there are ( n + 1 k − 1 ) ( n + 1 k ) ( n + 1 k + 1 ) ( n + 1 1 ) ( n + 1 2 ) {\displaystyle {\frac {{\binom {n+1}{k-1}}{\binom {n+1}{k}}{\binom {n+1}{k+1}}}{{\binom {n+1}{1}}{\binom {n+1}{2}}}}} Baxter permutations in S n {\displaystyle S_{n}} with k − 1 {\displaystyle k-1} descents. == Other properties == The number of alternating Baxter permutations of length 2 n {\displaystyle 2n} is ( C n ) 2 {\displaystyle (C_{n})^{2}} , the square of a Catalan number, and of length 2 n + 1 {\displaystyle 2n+1} is C n C n + 1 {\displaystyle C_{n}C_{n+1}} . The number of doubly alternating Baxter permutations of length 2 n {\displaystyle 2n} and 2 n + 1 {\displaystyle 2n+1} (i.e., those for which both σ {\displaystyle \sigma } and its inverse σ − 1 {\displaystyle \sigma ^{-1}} are alternating) is the Catalan number C n {\displaystyle C_{n}} . Baxter permutations are related to Hopf algebras, planar graphs, and tilings. == Motivation: commuting functions == Baxter introduced Baxter permutations while studying the fixed points of commuting continuous functions. 
In particular, if f {\displaystyle f} and g {\displaystyle g} are continuous functions from the interval [ 0 , 1 ] {\displaystyle [0,1]} to itself such that f ( g ( x ) ) = g ( f ( x ) ) {\displaystyle f(g(x))=g(f(x))} for all x {\displaystyle x} , and f ( g ( x ) ) = x {\displaystyle f(g(x))=x} for finitely many x {\displaystyle x} in [ 0 , 1 ] {\displaystyle [0,1]} , then: the number of these fixed points is odd; if the fixed points are x 1 < x 2 < … < x 2 k + 1 {\displaystyle x_{1}<x_{2}<\ldots <x_{2k+1}} then f {\displaystyle f} and g {\displaystyle g} act as mutually-inverse permutations on { x 1 , x 3 , … , x 2 k + 1 } {\displaystyle \{x_{1},x_{3},\ldots ,x_{2k+1}\}} and { x 2 , x 4 , … , x 2 k } {\displaystyle \{x_{2},x_{4},\ldots ,x_{2k}\}} ; the permutation induced by f {\displaystyle f} on { x 1 , x 3 , … , x 2 k + 1 } {\displaystyle \{x_{1},x_{3},\ldots ,x_{2k+1}\}} uniquely determines the permutation induced by f {\displaystyle f} on { x 2 , x 4 , … , x 2 k } {\displaystyle \{x_{2},x_{4},\ldots ,x_{2k}\}} ; under the natural relabeling x 1 → 1 {\displaystyle x_{1}\to 1} , x 3 → 2 {\displaystyle x_{3}\to 2} , etc., the permutation induced on { 1 , 2 , … , k + 1 } {\displaystyle \{1,2,\ldots ,k+1\}} is a Baxter permutation. == See also == Enumerations of specific permutation classes == References == == External links == OEIS sequence A001181 (Number of Baxter permutations of length n)
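The definition and the enumeration formula above can be checked by brute force for small n. The following sketch (plain Python, not from the cited literature, and only practical for small n) tests the two forbidden vincular patterns directly and compares the count with the closed formula.

# Brute-force check of the Baxter condition and of the enumeration formula.
from itertools import permutations
from math import comb

def is_baxter(p):
    """p gives sigma(1), ..., sigma(n) in one-line notation (stored 0-based)."""
    n = len(p)
    for j in range(n - 1):               # positions j and j+1 are adjacent
        for i in range(j):               # i < j
            for k in range(j + 1, n):    # k > j
                # forbidden pattern 2-41-3: sigma(j+1) < sigma(i) < sigma(k) < sigma(j)
                if p[j + 1] < p[i] < p[k] < p[j]:
                    return False
                # forbidden pattern 3-14-2: sigma(j) < sigma(k) < sigma(i) < sigma(j+1)
                if p[j] < p[k] < p[i] < p[j + 1]:
                    return False
    return True

def baxter_formula(n):
    num = sum(comb(n + 1, k - 1) * comb(n + 1, k) * comb(n + 1, k + 1)
              for k in range(1, n + 1))
    return num // (comb(n + 1, 1) * comb(n + 1, 2))

for n in range(1, 8):
    count = sum(is_baxter(p) for p in permutations(range(1, n + 1)))
    print(n, count, baxter_formula(n))   # the two counts agree: 1, 2, 6, 22, 92, 422, 2074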
Wikipedia:Beam and Warming scheme#0
In numerical mathematics, the Beam and Warming scheme, or Beam–Warming implicit scheme, introduced in 1978 by Richard M. Beam and R. F. Warming, is a second-order accurate implicit scheme, mainly used for solving non-linear hyperbolic equations. It is not widely used nowadays. == Introduction == This scheme is a spatially factored, non-iterative, ADI scheme that uses implicit Euler to perform the time integration. The algorithm is written in delta form and is linearized through a Taylor-series expansion, so the unknowns are the increments of the conserved variables. An efficient factored algorithm is obtained by evaluating the spatial cross derivatives explicitly, which allows a direct derivation of the scheme and an efficient solution with this computational algorithm. The algorithm is economical in that, although it is a three-time-level scheme, it requires only two time levels of data storage. The scheme is unconditionally stable; it is centered and needs an artificial dissipation operator to guarantee numerical stability. The delta form of the resulting equation has the advantageous property that its steady solution (if one exists) is independent of the size of the time step. == The method == Consider the inviscid Burgers' equation in one dimension ∂ u ∂ t = − u ∂ u ∂ x with x ∈ R {\displaystyle {\frac {\partial u}{\partial t}}=-u{\frac {\partial u}{\partial x}}\quad {\text{with }}x\in R} Burgers' equation in conservation form: ∂ u ∂ t = − ∂ E ∂ x {\displaystyle {\frac {\partial u}{\partial t}}=-{\frac {\partial E}{\partial x}}} where: E = u 2 2 {\displaystyle E={\frac {u^{2}}{2}}} === Taylor series expansion === The expansion of u i n + 1 {\displaystyle u_{i}^{n+1}} is: u i n + 1 = u i n + 1 2 [ ∂ u ∂ t | i n + ∂ u ∂ t | i n + 1 ] Δ t + O ( Δ t 3 ) {\displaystyle u_{i}^{n+1}=u_{i}^{n}+{\frac {1}{2}}\left[\left.{\frac {\partial u}{\partial t}}\right|_{i}^{n}+\left.{\frac {\partial u}{\partial t}}\right|_{i}^{n+1}\right]\,\Delta t+O(\Delta t^{3})} This is also known as the trapezoidal formula. ∴ u i n + 1 − u i n Δ t = − 1 2 ( ∂ E ∂ x | i n + 1 + ∂ E ∂ x | i n + ∂ ∂ x [ A ( u i n + 1 − u i n ) ] ) {\displaystyle \therefore {\frac {u_{i}^{n+1}-u_{i}^{n}}{\Delta t}}=-{\frac {1}{2}}\left(\left.{\frac {\partial E}{\partial x}}\right|_{i}^{n+1}+\left.{\frac {\partial E}{\partial x}}\right|_{i}^{n}+{\frac {\partial }{\partial x}}\left[A(u_{i}^{n+1}-u_{i}^{n})\right]\right)} ∵ ∂ u ∂ t = − ∂ E ∂ x {\displaystyle \because {\frac {\partial u}{\partial t}}=-{\frac {\partial E}{\partial x}}} Note that for this equation, A = ∂ E ∂ u = u {\displaystyle A={\frac {\partial E}{\partial u}}=u} === Tri-diagonal system === The resulting tri-diagonal system is: − Δ t 4 Δ x ( A i − 1 n u i − 1 n + 1 ) + u i n + 1 + Δ t 4 Δ x ( A i + 1 n u i + 1 n + 1 ) = u i n − 1 2 Δ t Δ x ( E i + 1 n − E i − 1 n ) + Δ t 4 Δ x ( A i + 1 n u i + 1 n − A i − 1 n u i − 1 n ) {\displaystyle {\begin{aligned}&-{\frac {\Delta t}{4\,\Delta x}}\left(A_{i-1}^{n}u_{i-1}^{n+1}\right)+u_{i}^{n+1}+{\frac {\Delta t}{4\,\Delta x}}\left(A_{i+1}^{n}u_{i+1}^{n+1}\right)\\[6pt]={}&u_{i}^{n}-{\frac {1}{2}}{\frac {\Delta t}{\Delta x}}\left(E_{i+1}^{n}-E_{i-1}^{n}\right)+{\frac {\Delta t}{4\,\Delta x}}\left(A_{i+1}^{n}u_{i+1}^{n}-A_{i-1}^{n}u_{i-1}^{n}\right)\end{aligned}}} This system of linear equations can be solved using the modified tridiagonal matrix algorithm, also known as the Thomas algorithm. == Dissipation term == In the presence of shock waves, a dissipation term is required for nonlinear hyperbolic equations such as this one.
This is done to keep the solution under control and to maintain convergence of the solution. D = − ε e ( u i + 2 n − 4 u i + 1 n + 6 u i n − 4 u i − 1 n + u i − 2 n ) {\displaystyle D=-\varepsilon _{e}(u_{i+2}^{n}-4u_{i+1}^{n}+6u_{i}^{n}-4u_{i-1}^{n}+u_{i-2}^{n})} This term is added explicitly at level n {\displaystyle n} to the right-hand side. It is used whenever high-frequency oscillations are observed and must be suppressed for the computation to succeed. == Smoothing term == If only the stable solution is required, a second-order smoothing term is added to the right-hand side of the equation on the implicit layer. The other term in the same equation can be of second order because it has no influence on the stable solution if ∇ n ( U ) = 0 {\displaystyle \nabla ^{n}(U)=0} . The addition of the smoothing term increases the number of required steps by three. == Properties == This scheme is produced by combining the trapezoidal formula, linearization, factoring, Padé spatial differencing, the homogeneous property of the flux vectors (where applicable), and hybrid spatial differencing, and it is most suitable for nonlinear systems in conservation-law form. The ADI algorithm retains the order of accuracy and the steady-state property while reducing the bandwidth of the system of equations. The scheme is L 2 {\displaystyle L^{2}} -stable under the CFL condition: | a | Δ t ≤ 2 Δ x {\displaystyle |a|\,\Delta t\leq 2\,\Delta x} The truncation error is of order O ( ( Δ t ) 2 + ( Δ x ) 2 ) {\displaystyle O((\Delta t)^{2}+(\Delta x)^{2})} The result is smooth with considerable overshoot (that does not grow much with time). == References ==
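A minimal sketch of the scheme for the inviscid Burgers' equation is given below (Python with NumPy assumed). It is an illustration rather than a reference implementation: the grid, time step, boundary treatment with fixed end values, and the value of εe are arbitrary choices. It assembles the tridiagonal system above, adds the explicit fourth-difference dissipation term D, and solves with the Thomas algorithm.

# Minimal Beam-Warming sketch for the inviscid Burgers equation u_t + (u^2/2)_x = 0.
# Assumes NumPy, a uniform grid, fixed (Dirichlet) boundary values and an arbitrary eps_e.
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = RHS."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def beam_warming_step(u, dt, dx, eps_e=0.05):
    """One implicit step of the scheme above; the boundary values are held fixed."""
    E = 0.5 * u**2          # flux E = u^2/2
    A = u                   # flux Jacobian A = dE/du = u
    r = dt / (4.0 * dx)
    n = len(u)
    # interior unknowns u_1 .. u_{n-2}; tridiagonal coefficients from the text
    lo = -r * A[:-2]        # multiplies u_{i-1}^{n+1}
    di = np.ones(n - 2)     # multiplies u_i^{n+1}
    up = r * A[2:]          # multiplies u_{i+1}^{n+1}
    rhs = (u[1:-1]
           - (dt / (2.0 * dx)) * (E[2:] - E[:-2])
           + r * (A[2:] * u[2:] - A[:-2] * u[:-2]))
    # explicit fourth-difference dissipation D, added where the stencil fits
    D = np.zeros(n)
    D[2:-2] = -eps_e * (u[4:] - 4 * u[3:-1] + 6 * u[2:-2] - 4 * u[1:-3] + u[:-4])
    rhs += D[1:-1]
    # fold the known boundary values into the first and last equations
    rhs[0] -= lo[0] * u[0]
    rhs[-1] -= up[-1] * u[-1]
    lo[0], up[-1] = 0.0, 0.0
    unew = u.copy()
    unew[1:-1] = thomas(lo, di, up, rhs)
    return unew

# Example: a right-moving front steepening from a smooth profile (illustrative only).
nx, dx, dt = 101, 0.01, 0.002
x = np.linspace(0.0, 1.0, nx)
u = 1.0 - 0.5 * np.tanh((x - 0.3) / 0.05)
for _ in range(100):
    u = beam_warming_step(u, dt, dx)
print("min/max of u after 100 steps:", u.min(), u.max())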
Wikipedia:Beatrice Meini#0
Beatrice Meini (born 1968) is an Italian computational mathematician and numerical analyst specializing in numerical linear algebra and its applications to Markov chains, matrix equations, and queueing theory. She is Professor of Numerical Analysis in the Department of Mathematics at the University of Pisa. == Education and career == Meini was born on 5 December 1968 in Pontedera, in the province of Pisa. She earned a laurea in mathematics from the University of Pisa in 1993, and completed her Ph.D. there in 1998. Her dissertation, Fast Algorithms For The Numerical Solution of Structured Markov Chains, was supervised by Dario Bini. After postdoctoral research with the Italian National Research Council (CNR) and at the University of Pisa, she became an associate professor of numerical analysis at the University of Pisa in 2005, and a full professor in 2016. == Books == Meini is the coauthor of books including: Numerical Methods for Structured Markov Chains (Oxford University Press, 2005, with Dario Bini and Guy Latouche) Numerical Solution of Algebraic Riccati Equations (Society for Industrial and Applied Mathematics, 2011, with Dario Bini and Bruno Iannazzo) == References == == External links == Home page
Wikipedia:Beatrice Pelloni#0
Beatrice Pelloni is an Italian mathematician specialising in applied mathematical analysis and partial differential equations. She is a professor of mathematics at Heriot-Watt University in Edinburgh, the editor-in-chief of the Proceedings of the Royal Society of Edinburgh, Section A: Mathematics, and the chair of the SIAM Activity Group on Nonlinear Waves and Coherent Structures. == Education and career == Pelloni was born on 28 June 1962 in Rome. After earning a laurea from Sapienza University of Rome in 1985, she entered graduate study at Yale University, but had to take several periods of time off from the program to raise three children. She completed her Ph.D. at Yale in 1996. Her dissertation, Spectral Methods for the Numerical Solution of Nonlinear Dispersive Wave Equations, was supervised by Peter Jones. While still a graduate student, Pelloni also worked as a researcher for the Institute of Applied Computational Mathematics of the Foundation for Research & Technology – Hellas (IACM-FORTH). After completing her doctorate she was a research associate at Imperial College London and then joined the University of Reading as a lecturer in 2001. At Reading she became a professor in 2012. She moved to Heriot-Watt University in 2016. == Recognition == Pelloni was the Olga Taussky-Todd Prize Lecturer at the 2011 International Congress on Industrial and Applied Mathematics, speaking on "Boundary value problems and integrability", and the 2019 Mary Cartwright Lecturer of the London Mathematical Society, speaking on "Nonlinear transforms in the study of fluid dynamics". She was elected Fellow of the IMA in 2012, and Fellow of the Royal Society of Edinburgh in 2020. == References == == External links == Home page Beatrice Pelloni publications indexed by Google Scholar
Wikipedia:Beatrice Rivière#0
Beatrice Marie Rivière is a computational and applied mathematician. She is the Noah Harding Chair and Professor in the department of computational and applied mathematics at Rice University. Her research involves developing efficient numerical methods for modeling fluids flowing through porous media. == Education and career == Rivière earned a diploma in engineering from École Centrale Paris in 1995, and a master's degree in 1996 from the Pennsylvania State University. She moved to the University of Texas at Austin for her doctoral studies, completing her Ph.D. there in 2000. Her dissertation, Discontinuous Galerkin Methods for Solving the Miscible Displacement Problem in Porous Media, was supervised by Mary Wheeler. Before joining the Rice University faculty in 2008, she worked as an associate professor of mathematics at the University of Pittsburgh. She was department chair from 2015 to 2018. In 2018 she was elected chair of the Activity Group on Geosciences (SIAG/GS) of the Society for Industrial and Applied Mathematics (SIAM). == Book == Rivière is the author of the book Discontinuous Galerkin methods for solving elliptic and parabolic equations: theory and implementation (SIAM, 2008). == Recognition == Rivière was named a SIAM Fellow in the 2021 class of fellows, "for contributions in numerical analysis, scientific computing, and modeling of porous media". In 2021 she was elected to SIAM's board of trustees for a term running January 1, 2022 – December 31, 2024. In 2022 she became a fellow of the Association for Women in Mathematics, "For her important contributions to numerical analysis, scientific computing and modeling of porous media; for her exemplary mentorship and supervision of women in applied and computational mathematics; and for her distinguished record of service and outreach." == References == == External links == Beatrice Rivière publications indexed by Google Scholar
Wikipedia:Beer's theorem#0
Wijsman convergence is a variation of Hausdorff convergence suitable for work with unbounded sets. Intuitively, Wijsman convergence is to convergence in the Hausdorff metric as pointwise convergence is to uniform convergence. == History == The convergence was defined by Robert Wijsman. The same definition was used earlier by Zdeněk Frolík. Yet earlier, Hausdorff in his book Grundzüge der Mengenlehre defined so called closed limits; for proper metric spaces it is the same as Wijsman convergence. == Definition == Let (X, d) be a metric space and let Cl(X) denote the collection of all d-closed subsets of X. For a point x ∈ X and a set A ∈ Cl(X), set d ( x , A ) = inf a ∈ A d ( x , a ) . {\displaystyle d(x,A)=\inf _{a\in A}d(x,a).} A sequence (or net) of sets Ai ∈ Cl(X) is said to be Wijsman convergent to A ∈ Cl(X) if, for each x ∈ X, d ( x , A i ) → d ( x , A ) . {\displaystyle d(x,A_{i})\to d(x,A).} Wijsman convergence induces a topology on Cl(X), known as the Wijsman topology. == Properties == The Wijsman topology depends very strongly on the metric d. Even if two metrics are uniformly equivalent, they may generate different Wijsman topologies. Beer's theorem: if (X, d) is a complete, separable metric space, then Cl(X) with the Wijsman topology is a Polish space, i.e. it is separable and metrizable with a complete metric. Cl(X) with the Wijsman topology is always a Tychonoff space. Moreover, one has the Levi-Lechicki theorem: (X, d) is separable if and only if Cl(X) is either metrizable, first-countable or second-countable. If the pointwise convergence of Wijsman convergence is replaced by uniform convergence (uniformly in x), then one obtains Hausdorff convergence, where the Hausdorff metric is given by d H ( A , B ) = sup x ∈ X | d ( x , A ) − d ( x , B ) | . {\displaystyle d_{\mathrm {H} }(A,B)=\sup _{x\in X}{\big |}d(x,A)-d(x,B){\big |}.} The Hausdorff and Wijsman topologies on Cl(X) coincide if and only if (X, d) is a totally bounded space. == See also == Hausdorff distance Kuratowski convergence Vietoris topology Hemicontinuity == References == Notes Bibliography Beer, Gerald (1993). Topologies on closed and closed convex sets. Mathematics and its Applications 268. Dordrecht: Kluwer Academic Publishers Group. pp. xii+340. ISBN 0-7923-2531-1. MR1269778 Beer, Gerald (1994). "Wijsman convergence: a survey". Set-Valued Anal. 2 (1–2): 77–94. doi:10.1007/BF01027094. MR1285822 == External links == Som Naimpally (2001) [1994], "Wijsman convergence", Encyclopedia of Mathematics, EMS Press
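A standard example illustrating the difference from Hausdorff convergence is the sequence An = {0, n} in the real line, which Wijsman-converges to {0} while the Hausdorff distance diverges. The following small sketch (plain Python, not from the cited literature; the sample points are arbitrary) computes the pointwise distances numerically.

# Illustration: A_n = {0, n} Wijsman-converges to A = {0}, since d(x, A_n) -> d(x, A)
# for every fixed x, while d_H(A_n, A) = sup_x |d(x, A_n) - d(x, A)| = n diverges.
def dist_to_set(x, A):
    """d(x, A) = inf over a in A of |x - a|, for a finite set A of reals."""
    return min(abs(x - a) for a in A)

A_limit = [0.0]
sample_points = [-2.0, 0.5, 3.0, 7.5]   # arbitrary fixed points x

for n in [10, 100, 1000, 10000]:
    A_n = [0.0, float(n)]
    worst_pointwise = max(abs(dist_to_set(x, A_n) - dist_to_set(x, A_limit))
                          for x in sample_points)
    hausdorff = float(n)                # the supremum defining d_H is attained at x = n
    print(n, worst_pointwise, hausdorff)
# The pointwise discrepancies shrink to 0 at the fixed sample points, while d_H blows up.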
Wikipedia:Begoña Fernández (mathematician)#0
María Asunción Begoña Fernández Fernández (published as Begoña Fernández) is a Mexican mathematician specializing in probability theory, stochastic processes, and mathematical finance. She is a professor of mathematics at the National Autonomous University of Mexico (UNAM). == Education == Fernández studied mathematics at UNAM, graduating in 1979. She earned a master's degree in statistics and operations research in 1986, and completed her doctorate at CINVESTAV in 1990. == Recognition == Fernández is a member of the Mexican Academy of Sciences. == References == == External links == Begoña Fernández publications indexed by Google Scholar
Wikipedia:Begoña Vitoriano#0
Begoña Vitoriano Villanueva (born 1967) is a Spanish applied mathematician and operations researcher whose work concerns the logistics of humanitarian aid and disaster relief. She is an associate professor in the Department of Statistics and Operational Research at the Complutense University of Madrid, and the president of the Spanish Statistics and Operations Research Society. == Education and career == Vitoriano was born in 1967. She studied mathematics and operations research at the Complutense University of Madrid. Despite difficulties caused by the death of her father in the first year of her studies, the need to support herself through private tutoring, and the birth of two children during her studies, she earned a bachelor's degree there in 1990 and completed her Ph.D. in 1994. She was an assistant professor in the Department of Statistics and Operational Research at the Complutense University of Madrid from 1990 to 1997. In 1995 she traveled to El Salvador as part of an international collaboration to set up a master's program there, and witnessed the devastation and poverty caused in part by the recently ended Salvadoran Civil War. From 1997 to 2006 she worked as an assistant and then associate professor in the Department of Industrial Organisation and Institute for Technological Research at Comillas Pontifical University in Madrid, a private Jesuit school whose status conflicted with her belief in public education but whose emphasis on social justice fit well with her research agenda. It was during this time that she changed her research focus from the management of electrical grids to disaster relief. She returned to Complutense University as an untenured associate professor in 2006, and was granted tenure in 2009. In 2021, she was elected president of the Spanish Statistics and Operations Research Society for a three-year term, beginning in 2022. == Selected publications == Vitoriano, Begoña; Ortuño, M. Teresa; Tirado, Gregorio; Montero, Javier (2011), "A multi-criteria optimization model for humanitarian aid distribution", Journal of Global Optimization, 51 (2): 189–208, doi:10.1007/s10898-010-9603-z, MR 2831951, S2CID 27827794 Vitoriano, Begoña; Montero, Javier; Ruan, Da, eds. (2013), Decision Aid Models for Disaster Management and Emergencies, Atlantis Computational Intelligence Systems, vol. 7, Atlantis Press, doi:10.2991/978-94-91216-74-9, ISBN 978-94-91216-73-2, S2CID 11137079 Liberatore, F.; Ortuño, M. T.; Tirado, G.; Vitoriano, B.; Scaparra, M. P. (2014), "A hierarchical compromise model for the joint optimization of recovery operations and distribution of emergency goods in Humanitarian Logistics", Computers & Operations Research, 42: 3–13, doi:10.1016/j.cor.2012.03.019, MR 3116287 Ferrer, José M.; Martín-Campo, F. Javier; Ortuño, M. Teresa; Pedraza-Martínez, Alfonso J.; Tirado, Gregorio; Vitoriano, Begoña (September 2018), "Multi-criteria optimization for last mile distribution of disaster relief aid: Test cases and applications", European Journal of Operational Research, 269 (2): 501–515, doi:10.1016/j.ejor.2018.02.043, S2CID 19235699 == References == == External links == Home page Begoña Vitoriano publications indexed by Google Scholar
Wikipedia:Bell-shaped function#0
A bell-shaped function or simply 'bell curve' is a mathematical function having a characteristic "bell"-shaped curve. These functions are typically continuous or smooth, asymptotically approach zero for large negative/positive x, and have a single, unimodal maximum at small x. Hence, the integral of a bell-shaped function is typically a sigmoid function. Bell shaped functions are also commonly symmetric. Many common probability distribution functions are bell curves. Some bell shaped functions, such as the Gaussian function and the probability distribution of the Cauchy distribution, can be used to construct sequences of functions with decreasing variance that approach the Dirac delta distribution. Indeed, the Dirac delta can roughly be thought of as a bell curve with variance tending to zero. Some examples include: Gaussian function, the probability density function of the normal distribution. This is the archetypal bell shaped function and is frequently encountered in nature as a consequence of the central limit theorem. f ( x ) = a e − ( x − b ) 2 / ( 2 c 2 ) {\displaystyle f(x)=ae^{-(x-b)^{2}/(2c^{2})}} Fuzzy Logic generalized membership bell-shaped function f ( x ) = 1 1 + | x − c a | 2 b {\displaystyle f(x)={\frac {1}{1+\left|{\frac {x-c}{a}}\right|^{2b}}}} Hyperbolic secant. This is also the derivative of the Gudermannian function. f ( x ) = sech ⁡ ( x ) = 2 e x + e − x {\displaystyle f(x)=\operatorname {sech} (x)={\frac {2}{e^{x}+e^{-x}}}} Witch of Agnesi, the probability density function of the Cauchy distribution. This is also a scaled version of the derivative of the arctangent function. f ( x ) = 8 a 3 x 2 + 4 a 2 {\displaystyle f(x)={\frac {8a^{3}}{x^{2}+4a^{2}}}} Bump function φ b ( x ) = { exp ⁡ b 2 x 2 − b 2 | x | < b , 0 | x | ≥ b . {\displaystyle \varphi _{b}(x)={\begin{cases}\exp {\frac {b^{2}}{x^{2}-b^{2}}}&|x|<b,\\0&|x|\geq b.\end{cases}}} Raised cosines type like the raised cosine distribution or the raised-cosine filter f ( x ; μ , s ) = { 1 2 s [ 1 + cos ⁡ ( x − μ s π ) ] for μ − s ≤ x ≤ μ + s , 0 otherwise. {\displaystyle f(x;\mu ,s)={\begin{cases}{\frac {1}{2s}}\left[1+\cos \left({\frac {x-\mu }{s}}\pi \right)\right]&{\text{for }}\mu -s\leq x\leq \mu +s,\\[3pt]0&{\text{otherwise.}}\end{cases}}} Most of the window functions like the Kaiser window The derivative of the logistic function. This is a scaled version of the derivative of the hyperbolic tangent function. f ( x ) = e x ( 1 + e x ) 2 {\displaystyle f(x)={\frac {e^{x}}{\left(1+e^{x}\right)^{2}}}} Some algebraic functions. For example f ( x ) = 1 ( 1 + x 2 ) 3 / 2 {\displaystyle f(x)={\frac {1}{(1+x^{2})^{3/2}}}} == Gallery == == References ==
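As a quick illustration (a sketch in Python with NumPy, not from the article; the parameter values are arbitrary), one can verify numerically that several of the functions listed above are symmetric about their centre, have a single central maximum, and decay in the tails.

# Numerical check of the "bell" properties for a few of the functions listed above.
# Assumes NumPy; parameter values are arbitrary illustrative choices.
import numpy as np

x = np.linspace(-6, 6, 1201)   # symmetric grid with 0 at the centre

bells = {
    "Gaussian (a=1, b=0, c=1)":         np.exp(-x**2 / 2),
    "generalized bell (a=1, b=2, c=0)": 1.0 / (1.0 + np.abs(x) ** 4),
    "hyperbolic secant":                2.0 / (np.exp(x) + np.exp(-x)),
    "logistic derivative":              np.exp(x) / (1.0 + np.exp(x)) ** 2,
}

for name, y in bells.items():
    symmetric = np.allclose(y, y[::-1], atol=1e-12)   # f(-x) = f(x)
    peak_at_zero = np.argmax(y) == len(x) // 2        # single maximum at the centre
    decays = y[0] < 1e-2 * y.max() and y[-1] < 1e-2 * y.max()
    print(f"{name:35s} symmetric={symmetric} peak_at_zero={peak_at_zero} decays={decays}")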
Wikipedia:Bender–Knuth involution#0
In algebraic combinatorics, a Bender–Knuth involution is an involution on the set of semistandard tableaux, introduced by Bender & Knuth (1972, pp. 46–47) in their study of plane partitions. == Definition == The Bender–Knuth involutions σ k {\displaystyle \sigma _{k}} are defined for integers k {\displaystyle k} , and act on the set of semistandard skew Young tableaux of some fixed shape μ / ν {\displaystyle \mu /\nu } , where μ {\displaystyle \mu } and ν {\displaystyle \nu } are partitions. It acts by changing some of the elements k {\displaystyle k} of the tableau to k + 1 {\displaystyle k+1} , and some of the entries k + 1 {\displaystyle k+1} to k {\displaystyle k} , in such a way that the numbers of elements with values k {\displaystyle k} or k + 1 {\displaystyle k+1} are exchanged. Call an entry of the tableau free if it is k {\displaystyle k} or k + 1 {\displaystyle k+1} and there is no other element with value k {\displaystyle k} or k + 1 {\displaystyle k+1} in the same column. For any i {\displaystyle i} , the free entries of row i {\displaystyle i} are all in consecutive columns, and consist of a i {\displaystyle a_{i}} copies of k {\displaystyle k} followed by b i {\displaystyle b_{i}} copies of k + 1 {\displaystyle k+1} , for some a i {\displaystyle a_{i}} and b i {\displaystyle b_{i}} . The Bender–Knuth involution σ k {\displaystyle \sigma _{k}} replaces them by b i {\displaystyle b_{i}} copies of k {\displaystyle k} followed by a i {\displaystyle a_{i}} copies of k + 1 {\displaystyle k+1} . == Applications == Bender–Knuth involutions can be used to show that the number of semistandard skew tableaux of given shape and weight is unchanged under permutations of the weight. In turn this implies that the Schur function of a partition is a symmetric function. Bender–Knuth involutions were used by Stembridge (2002) to give a short proof of the Littlewood–Richardson rule. == References == Bender, Edward A.; Knuth, Donald E. (1972), "Enumeration of plane partitions", Journal of Combinatorial Theory, Series A, 13 (1): 40–54, doi:10.1016/0097-3165(72)90007-6, ISSN 1096-0899, MR 0299574 Stembridge, John R. (2002), "A concise proof of the Littlewood–Richardson rule" (PDF), Electronic Journal of Combinatorics, 9 (1): Note 5, 4 pp. (electronic), doi:10.37236/1666, ISSN 1077-8926, MR 1912814
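The involution is straightforward to implement from the definition. The sketch below (plain Python, written directly from the description above rather than from a reference implementation, and restricted to straight shapes) applies σk to a tableau given as a list of rows, and illustrates that it exchanges the numbers of entries k and k+1 and squares to the identity.

# Sketch of the Bender-Knuth involution sigma_k on a semistandard Young tableau
# of straight shape, represented as a list of weakly increasing rows.
from collections import Counter

def bender_knuth(T, k):
    T = [row[:] for row in T]                  # work on a copy
    n_rows = len(T)
    for i, row in enumerate(T):
        free_cols = []
        for j, v in enumerate(row):
            if v == k:
                # a k is locked exactly when the cell just below it holds k+1
                below = T[i + 1][j] if i + 1 < n_rows and j < len(T[i + 1]) else None
                if below != k + 1:
                    free_cols.append(j)
            elif v == k + 1:
                # a k+1 is locked exactly when the cell just above it holds k
                above = T[i - 1][j] if i > 0 and j < len(T[i - 1]) else None
                if above != k:
                    free_cols.append(j)
        # the free entries of a row form a consecutive run: a copies of k then b copies
        # of k+1; replace them by b copies of k followed by a copies of k+1
        b = sum(1 for j in free_cols if row[j] == k + 1)
        for idx, j in enumerate(free_cols):
            row[j] = k if idx < b else k + 1
    return T

def weight(T):
    return Counter(v for row in T for v in row)

# Example (an arbitrary semistandard tableau, not taken from the article):
T = [[1, 1, 2, 2, 2],
     [2, 3],
     [3]]
S = bender_knuth(T, 2)
print(S)                          # [[1, 1, 3, 3, 3], [2, 2], [3]]
print(weight(T), weight(S))       # the numbers of 2s and 3s are exchanged: 4,2 -> 2,4
print(bender_knuth(S, 2) == T)    # sigma_k is an involution: True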
Wikipedia:Bendixson's inequality#0
In mathematics, Bendixson's inequality is a quantitative result in the field of matrices derived by Ivar Bendixson in 1902. The inequality puts limits on the imaginary and real parts of characteristic roots (eigenvalues) of real matrices. A special case of this inequality leads to the result that characteristic roots of a real symmetric matrix are always real. The inequality relating to the imaginary parts of characteristic roots of real matrices (Theorem I in Bendixson's paper) is stated as: Let A = ( a i j ) {\displaystyle A=\left(a_{ij}\right)} be a real n × n {\displaystyle n\times n} matrix and α = max 1 ≤ i , j ≤ n 1 2 | a i j − a j i | {\displaystyle \alpha =\max _{1\leq i,j\leq n}{\frac {1}{2}}\left|a_{ij}-a_{ji}\right|} . If λ {\displaystyle \lambda } is any characteristic root of A {\displaystyle A} , then | Im ⁡ ( λ ) | ≤ α n ( n − 1 ) 2 . {\displaystyle \left|\operatorname {Im} (\lambda )\right|\leq \alpha {\sqrt {\frac {n(n-1)}{2}}}.\,{}} If A {\displaystyle A} is symmetric then α = 0 {\displaystyle \alpha =0} and consequently the inequality implies that λ {\displaystyle \lambda } must be real. The inequality relating to the real parts of characteristic roots of real matrices (Theorem II in Bendixson's paper) is stated as: Let m {\displaystyle m} and M {\displaystyle M} be the smallest and largest characteristic roots of A + A H 2 {\displaystyle {\tfrac {A+A^{H}}{2}}} , then m ≤ Re ⁡ ( λ ) ≤ M {\displaystyle m\leq \operatorname {Re} (\lambda )\leq M} . == See also == Gershgorin circle theorem == References ==
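Both bounds can be checked numerically. The following sketch (Python with NumPy assumed; the matrix is arbitrary random test data) verifies Theorem I and Theorem II for a random real matrix.

# Numerical check of Bendixson's bounds on a random real matrix.
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))

alpha = 0.5 * np.max(np.abs(A - A.T))          # alpha = max_{i,j} |a_ij - a_ji| / 2
im_bound = alpha * np.sqrt(n * (n - 1) / 2)    # Theorem I bound on |Im(lambda)|

S = 0.5 * (A + A.T)                            # (A + A^H)/2 for a real matrix A
m, M = np.linalg.eigvalsh(S)[[0, -1]]          # smallest and largest eigenvalues of S

lam = np.linalg.eigvals(A)
assert np.all(np.abs(lam.imag) <= im_bound + 1e-12)
assert np.all((lam.real >= m - 1e-12) & (lam.real <= M + 1e-12))
print("|Im| max:", np.abs(lam.imag).max(), "<=", im_bound)
print("Re range:", lam.real.min(), lam.real.max(), "within [", m, ",", M, "]")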
Wikipedia:Benjamin Martin (chess player)#0
Benjamin Martin (born 1969) is a New Zealand chess player and mathematician. He was awarded the title International Master (IM) by FIDE in 1996. == Chess career == Martin has represented New Zealand in four Chess Olympiads, in Novi Sad 1990, Manila 1992, Yerevan 1996, and Istanbul 2000. His best result was in 1996 when he scored 8/14. Martin jointly won the New Zealand Chess Championship with Ortvin Sarapu in 1989/90. == Mathematics == Martin was an associate professor in the department of mathematics at the University of Auckland 2011–2014. His research interests include algebraic groups and quantum field theory. He is now a professor in the department of mathematics at the University of Aberdeen, holding a personal chair. == References == == External links == Benjamin Martin rating card at FIDE Benjamin Martin games at 365Chess.com Benjamin Martin player profile and games at Chessgames.com
Wikipedia:Benjamin Muckenhoupt#0
Benjamin Muckenhoupt (December 22, 1933, Boston – April 13, 2020, Whippany, New Jersey) was an American mathematician, specializing in analysis. He is known for the introduction of Muckenhoupt weights. == Biography == After graduating in 1950 from Newton High School (renamed in 1974 Newton North High School), Benjamin Muckenhoupt matriculated at Harvard University, where he graduated in 1954 with an A.B. At Harvard, his outstanding score on the 1954 William Lowell Putnam Competition made him a Putnam Fellow. At the University of Chicago, he graduated in 1955 with an M.Sc. and in 1958 with a Ph.D. His Ph.D. thesis On certain singular integrals was supervised by Antoni Zygmund. In the department of mathematics at Rutgers University, he was an associate professor from 1963 to 1970 and a full professor from 1970 to 1991, when he retired as professor emeritus. For many years, he suffered from progressive supranuclear palsy. The main focus of Muckenhoupt's mathematical research was harmonic analysis and weighted norm inequalities. At the Institute for Advanced Study, he held visiting positions for the academic years 1968–1970 and 1975–1976. At the State University of New York at Albany he was a visiting professor for the academic year 1970–1971. His doctoral students include Eileen Poiani. Upon his death he was survived by his widow, a daughter, a son, and three grandchildren. == Selected publications == Albert, A. A.; Muckenhoupt, Benjamin (1957). "On matrices of trace zero". Michigan Mathematical Journal. 4. doi:10.1307/mmj/1028990168. MR 0083961. Muckenhoupt, B.; Stein, E. M. (1965). "Classical expansions and their relation to conjugate harmonic functions". Transactions of the American Mathematical Society. 118: 17. doi:10.1090/S0002-9947-1965-0199636-9. MR 0199636. —— (1969). "Hermite conjugate expansions". Transactions of the American Mathematical Society. 139: 243–260. doi:10.1090/S0002-9947-1969-0249918-0. MR 0249918. —— (1969). "Poisson integrals for Hermite and Laguerre expansions". Transactions of the American Mathematical Society. 139: 231–242. doi:10.1090/S0002-9947-1969-0249917-9. MR 0249917. —— (1970). "Conjugate functions for Laguerre expansions". Transactions of the American Mathematical Society. 147 (2): 403–418. doi:10.1090/S0002-9947-1970-0252945-9. —— (1970). "Mean convergence of Hermite and Laguerre series. I". Transactions of the American Mathematical Society. 147 (2): 419–431. doi:10.1090/S0002-9947-1970-99933-9. —— (1970). "Mean convergence of Hermite and Laguerre series. II". Transactions of the American Mathematical Society. 147 (2): 433–460. doi:10.1090/S0002-9947-1970-0256051-9. ——; Wheeden, Richard L. (1971). "Weighted norm inequalities for singular and fractional integrals". Transactions of the American Mathematical Society. 161: 249–258. doi:10.1090/S0002-9947-1971-0285938-7. Hunt, Richard; ——; Wheeden, Richard (1973). "Weighted norm inequalities for the conjugate function and Hilbert transform". Transactions of the American Mathematical Society. 176: 227. doi:10.1090/S0002-9947-1973-0312139-8. ——; Wheeden, Richard (1974). "Weighted norm inequalities for fractional integrals" (PDF). Transactions of the American Mathematical Society. 192: 261–274. doi:10.1090/S0002-9947-1974-0340523-6. Andersen, Kenneth F.; —— (1982). "Weighted weak type Hardy inequalities with applications to Hilbert transforms and maximal functions" (PDF). Studia Math. 72 (1): 9–26. doi:10.4064/sm-72-1-9-26. Ariño, Miguel A.; —— (1990).
"Maximal functions on classical Lorentz spaces and Hardy's inequality with weights for nonincreasing functions". Transactions of the American Mathematical Society. 320 (2): 727–735. doi:10.1090/S0002-9947-1990-0989570-0. == References ==
Wikipedia:Benjamin Weiss#0
Benjamin Weiss (Hebrew: בנימין ווייס; born 1941) is an American-Israeli mathematician known for his contributions to ergodic theory, topological dynamics, probability theory, game theory, and descriptive set theory. == Biography == Benjamin ("Benjy") Weiss was born in New York City. In 1962 he received a B.A. from Yeshiva University and an M.A. from its Graduate School of Science. In 1965, he received his Ph.D. from Princeton under the supervision of William Feller. == Academic career == Between 1965 and 1967, Weiss worked at IBM Research. In 1967, he joined the faculty of the Hebrew University of Jerusalem, and from 1990 he held the Miriam and Julius Vinik Chair in Mathematics (Emeritus since 2009). Weiss held visiting positions at Stanford, MSRI, and the IBM Research Center. Weiss has published over 180 papers in ergodic theory, topological dynamics, orbit equivalence, probability, information theory, game theory, and descriptive set theory, with notable contributions including the introduction of Markov partitions (with Roy Adler), the development of the ergodic theory of amenable groups (with Don Ornstein), mean dimension (with Elon Lindenstrauss), and the introduction of sofic subshifts and sofic groups. The road coloring conjecture was also posed by Weiss with Roy Adler. One of Weiss's students is Elon Lindenstrauss, a 2010 recipient of the Fields Medal. == Awards and recognition == Weiss gave an invited address at the 1974 International Congress of Mathematicians, was twice the main speaker at a Conference Board of the Mathematical Sciences conference (1979 and 1995), and gave the M. B. Porter Distinguished Lecture Series at Rice University (1998). In 2000 Weiss was elected as a Foreign Honorary Member of the American Academy of Arts and Sciences. In 2006 he was awarded the Rothschild Prize in Mathematics. In 2012 Weiss was elected a Fellow of the American Mathematical Society. == See also == Daniel Rudolph – contemporary of and academic collaborator with Weiss == References ==