Dataset Viewer (auto-converted to Parquet)
Columns: text (string, lengths 1 to 3.05k characters), source (string, 4 distinct values)
There are two settings under which you can get $O(1)$ worst-case times. If your setup is static, then FKS hashing will get you worst-case $O(1)$ guarantees. But as you indicated, your setting isn't static. If you use cuckoo hashing, then queries and deletes are $O(1)$ worst-case, but insertion is only $O(1)$ expected. Cuckoo hashing works quite well if you have an upper bound on the total number of inserts, and set the table size to be roughly 25% larger. There's more information here.
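For concreteness, here is a minimal cuckoo-hashing sketch (my own illustrative code, not from the linked material): lookups and deletes probe at most two slots, so they are worst-case $O(1)$, while inserts may displace existing keys and occasionally trigger a rebuild, which is why they are only expected $O(1)$.

```python
# Minimal cuckoo hash table sketch for integer keys (illustrative, not production code).
import random

class CuckooHash:
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.t1 = [None] * capacity
        self.t2 = [None] * capacity
        self._reseed()

    def _reseed(self):
        self.a1 = random.randrange(1, 2**31)
        self.a2 = random.randrange(1, 2**31)

    def _h1(self, key):
        return (key * self.a1) % self.capacity

    def _h2(self, key):
        return (key * self.a2) % self.capacity

    def contains(self, key):          # worst-case O(1): at most two probes
        return self.t1[self._h1(key)] == key or self.t2[self._h2(key)] == key

    def delete(self, key):            # worst-case O(1) as well
        i, j = self._h1(key), self._h2(key)
        if self.t1[i] == key:
            self.t1[i] = None
        elif self.t2[j] == key:
            self.t2[j] = None

    def insert(self, key):            # expected O(1); may trigger a rehash
        if self.contains(key):
            return
        for _ in range(32):           # bounded displacement chain
            i = self._h1(key)
            key, self.t1[i] = self.t1[i], key   # place key, kick out occupant
            if key is None:
                return
            j = self._h2(key)
            key, self.t2[j] = self.t2[j], key   # kicked key goes to the other table
            if key is None:
                return
        self._rehash(key)             # likely cycle: rebuild with new hash functions

    def _rehash(self, pending):
        old = [k for k in self.t1 + self.t2 if k is not None] + [pending]
        self.capacity *= 2
        self.t1 = [None] * self.capacity
        self.t2 = [None] * self.capacity
        self._reseed()
        for k in old:
            self.insert(k)
```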
https://api.stackexchange.com
Yes: BWA-MEM was published as a preprint. BWA-MEM's seed extension differs from the standard seed extension in two aspects. Firstly, suppose at a certain extension step we come to reference position x with the best extension score achieved at query position y... Secondly, while extending a seed, BWA-MEM tries to keep track of the best extension score reaching the end of the query sequence. There is a description of the scoring algorithm directly in the source code of BWA-MEM (lines 22-44), but maybe the only solution is really to go through the source code.
https://api.stackexchange.com
I would ignore answers that say the surface area is ill-defined. In any realistic situation you have a lower limit for how fine a resolution is meaningful. This is like a pedant who says that hydrogen has an ill-defined volume because the electron wavefunction has no hard cutoff: technically true, but practically not meaningful. My recommendation is an optical profilometer, which can measure the surface area quite well (for length scales above 400 nm). This method uses a coherent laser beam and interferometry to map the topography of the material's surface. Once you have the topography you can integrate it to get the surface area. Advantages of this method include: non-contact, non-destructive, variable surface-area resolution to suit your needs, very fast (seconds to minutes), and it doesn't require any consumables besides electricity. Disadvantages include: you have to flip over your rock to get all sides and stitch them together to get the total topography, the instruments are too expensive for casual hobbyists (many thousands of dollars), and there is no atomic resolution (but scanning tunneling microscopy is better for that). [Images in the original answer: the instrument optics and an example topographic map; source: psu.edu]
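To make the "integrate the topography" step concrete, here is a minimal sketch assuming the profilometer gives you a height map z on a regular grid with spacings dx and dy; the array names and numbers are illustrative, not tied to any particular instrument:

```python
# Surface area from a height map: A = integral of sqrt(1 + (dz/dx)^2 + (dz/dy)^2) dx dy
import numpy as np

def surface_area(z, dx, dy):
    # np.gradient returns derivatives along axis 0 (y) first, then axis 1 (x)
    dzdy, dzdx = np.gradient(z, dy, dx)
    integrand = np.sqrt(1.0 + dzdx**2 + dzdy**2)
    return integrand.sum() * dx * dy    # simple Riemann sum over the scanned grid

# Sanity check: a perfectly flat 100 x 100 patch with 0.01 mm spacing has area ~1.0 mm^2;
# any measured roughness can only increase this number.
z_flat = np.zeros((100, 100))
print(surface_area(z_flat, dx=0.01, dy=0.01))
```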
https://api.stackexchange.com
Short answer: intermittent locomotion can increase the detection of prey by predators (e.g. rats), while it may lead to reduced attack rates in prey animals (e.g. rats and chipmunks). It may also increase physical endurance.

Background: rather than moving continuously through the environment, many animals interrupt their locomotion with frequent brief pauses. Pauses increase the time required to travel a given distance and add the costs of acceleration and deceleration to the energetic cost of locomotion. From an adaptation perspective, pausing should provide benefits that outweigh these costs (Adam & Kramer, 1998). One potential benefit of pausing is increased detection of prey by predators: slower movement speeds likely improve prey detection by providing more time to scan a given visual field. A second plausible benefit is a reduced attack rate by predators. Many predators are more likely to attack moving prey, perhaps because such prey is more easily detected or recognized; indeed, motionlessness ('freezing') is a widespread response by prey that detect a predator. A third benefit may be increased endurance. For animals moving faster than their aerobically sustainable speeds, the maximum distance run can be increased by taking pauses, which allow the clearance of lactate from the muscles through aerobic mechanisms.

PS: if by 'snappy' you mean not only that small animals move intermittently, but also that they move fast, then Remi.b's answer nicely covers why small critters are quick. Basically, it comes down to Newton's second law: acceleration is inversely proportional to mass ($a = F/m$), but muscle power does not scale the same way. Hence bigger animals have more mass and need to build up a lot more force to accelerate at the same rate, and that build-up of force takes time (ever witnessed the vertical lift-off of a space shuttle?). Small critters therefore accelerate more quickly, which lets them move 'snappily'.

Reference: Adam & Kramer, Anim Behav (1998); 55: 109-117.
https://api.stackexchange.com
I'm going to break up my answer into three parts: profiling, speeding up the Python code via C, and speeding up Python via Python. It is my view that Python has some of the best tools for looking at your code's performance and then drilling down to the actual bottlenecks. Speeding up code without profiling is about like trying to kill a deer with an Uzi. If you are really only interested in mat-vec products, I would recommend scipy.sparse.

Python tools for profiling
profile and cProfile modules: these modules will give you your standard run-time analysis and function call stack. It is pretty nice to save their statistics, and using the pstats module you can look at the data in a number of ways.
kernprof: this tool puts together many routines for doing things like line-by-line code timing.
memory_profiler: this tool produces a line-by-line memory footprint of your code.
IPython timers: the timeit function is quite nice for seeing the differences between functions in a quick, interactive way.

Speeding up Python via C
Cython: Cython is the quickest way to take a few functions in Python and get faster code. You can decorate the function with the Cython variant of Python and it generates C code. It is very maintainable and can also link to other hand-written code in C/C++/Fortran quite easily. It is by far the preferred tool today.
ctypes: ctypes will allow you to write your functions in C and then wrap them quickly with its simple decoration of the code. It handles all the pain of casting from PyObjects and managing the GIL to call the C function. Other approaches exist for writing your code in C, but they are all somewhat more geared towards taking a C/C++ library and wrapping it in Python.

Python-only approaches
If you want to stay mostly inside Python, my advice is to figure out what data you are using and pick correct data types for implementing your algorithms. It has been my experience that you will usually get much farther by optimizing your data structures than by any low-level C hack. For example:
numpy: a contiguous array, very fast for strided operations on arrays.
numexpr: a numpy array expression optimizer. It allows for multithreading numpy array expressions and also gets rid of the numerous temporaries numpy makes because of restrictions of the Python interpreter.
https://api.stackexchange.com
blist: a B-tree implementation of a list, very fast for inserting, indexing, and moving the internal nodes of a list.
pandas: data frames (or tables), very fast analytics on arrays.
PyTables: fast structured hierarchical tables (like HDF5), especially good for out-of-core calculations and queries on large data.
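As a small illustration of the scipy.sparse recommendation together with the profiling tools above, here is a hedged sketch; the tridiagonal test matrix, sizes, and iteration counts are arbitrary choices of mine:

```python
# Profile repeated sparse mat-vec products with cProfile.
import cProfile
import numpy as np
import scipy.sparse as sp

n = 200_000
# Simple tridiagonal operator stored in CSR format (a good format for mat-vec).
A = sp.diags([np.ones(n - 1), 2 * np.ones(n), np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csr")

def power_iteration(k=200):
    y = np.random.rand(n)
    for _ in range(k):
        y = A @ y                     # sparse mat-vec: O(nnz) per product, not O(n^2)
        y /= np.linalg.norm(y)        # keep the vector normalised
    return y

# Call-level profile; for line-by-line timing use kernprof/line_profiler,
# and for memory use memory_profiler, as described above.
cProfile.run("power_iteration()", sort="cumulative")
```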
https://api.stackexchange.com
Suppose that I have a conv layer which outputs an $(N, F, H, W)$ shaped tensor where $N$ is the batch size, $F$ is the number of convolutional filters, and $H, W$ are the spatial dimensions. Suppose this input is fed into a conv layer with $F_1$ 1x1 filters, zero padding and stride 1. Then the output of this 1x1 conv layer will have shape $(N, F_1, H, W)$. So 1x1 conv filters can be used to change the dimensionality in the filter space: if $F_1 > F$ we are increasing dimensionality, and if $F_1 < F$ we are decreasing dimensionality, in the filter dimension. Indeed, in the Google Inception article "Going Deeper with Convolutions", they state (bold is mine, not by the original authors): "One big problem with the above modules, at least in this naive form, is that even a modest number of 5x5 convolutions can be prohibitively expensive on top of a convolutional layer with a large number of filters. This leads to the second idea of the proposed architecture: judiciously applying dimension reductions and projections wherever the computational requirements would increase too much otherwise. This is based on the success of embeddings: even low dimensional embeddings might contain a lot of information about a relatively large image patch... 1x1 convolutions are used to compute reductions before the expensive 3x3 and 5x5 convolutions. Besides being used as reductions, they also include the use of rectified linear activation which makes them dual-purpose." So in the Inception architecture, we use the 1x1 convolutional filters to reduce dimensionality in the filter dimension. As I explained above, these 1x1 conv layers can be used in general to change the filter-space dimensionality (either increase or decrease), and in the Inception architecture we see how effective these 1x1 filters can be for dimensionality reduction, explicitly in the filter dimension space, not the spatial dimension space. Perhaps there are other interpretations of 1x1 conv filters, but I prefer this explanation, especially in the context of the Google Inception architecture.
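A minimal NumPy sketch of the point above: a 1x1 convolution with stride 1 is just a learned linear map applied independently at every spatial location, i.e. a matrix multiply over the filter dimension. The shapes below are illustrative choices of mine:

```python
import numpy as np

N, F, H, W = 8, 64, 32, 32     # input tensor shape (N, F, H, W)
F1 = 16                        # number of 1x1 filters -> output shape (N, F1, H, W)

x = np.random.randn(N, F, H, W)
weights = np.random.randn(F1, F)   # one weight per (output filter, input filter) pair

# Contract over the input-filter axis at every (n, h, w) position.
y = np.einsum("oc,nchw->nohw", weights, x)
print(y.shape)                 # (8, 16, 32, 32): filter dimension reduced from 64 to 16
```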
https://api.stackexchange.com
Apparently you're not the first person to notice this: in 1895, a German nose specialist called Richard Kayser found that we have erectile tissue in our noses (yes, it is very similar to the tissue found in a penis). This tissue swells in one nostril and shrinks in the other, creating an open airway via only one nostril. What's more, he found that this is indeed a 'nasal cycle', changing every 2.5 hours or so. Of course, the other nostril isn't completely blocked, just mostly; if you try, you can feel a very light push of air out of the blocked nostril. This is controlled by the autonomic nervous system. You can change which nostril is closed and which is open by lying on one side to open the opposite one. Interestingly, some researchers think that this is the reason we often switch the sides we lie on during sleep rather regularly, as it is more comfortable to sleep with the blocked nostril downwards. As to why we don't breathe through both nostrils simultaneously, I couldn't find anything that explains it. Sources: "About 85% of people only breathe out of one nostril at a time"; nasal cycle.
https://api.stackexchange.com
I doubt we know the precise number, or even anywhere near it. But there are several well-supported theorised colonisations which might interest you and help to build up a picture of just how common it was for life to transition to land. We can also use known facts about when different evolutionary lineages diverged, along with knowledge about the earlier colonisations of land, to work some events out for ourselves. I've done it here for broad taxonomic clades at different scales; if interested, you could do the same thing again for lower sub-clades. As you rightly point out, there must have been at least one colonisation event for each lineage present on land which diverged from other land-present lineages before the colonisation of land. Using the evidence and reasoning I give below, at the very least the following 9 independent colonisations occurred: bacteria, cyanobacteria, archaea, protists, fungi, algae, plants, nematodes, arthropods, vertebrates.

Bacterial and archaean colonisation: the first evidence of life on land seems to originate from 2.6 (Watanabe et al., 2000) to 3.1 (Battistuzzi et al., 2004) billion years ago. Since molecular evidence points to bacteria and archaea diverging between 3.2-3.8 billion years ago (Feng et al., 1997, a classic paper), and since both bacteria and archaea are found on land (e.g. Taketani & Tsai, 2010), they must have colonised land independently. I would suggest there would have been many different bacterial colonisations, too. One at least is certain: cyanobacteria must have colonised independently from some other forms, since they evolved after the first bacterial colonisation (Tomitani et al., 2006) and are now found on land, e.g. in lichens.

Protistan, fungal, algal, plant and animal colonisation: protists are a polyphyletic group of simple eukaryotes, and since fungal divergence from them (Wang et al., 1999, another classic) predates fungal emergence from the ocean (Taylor & Osborn, 1996), they must have emerged separately. Then, since plants and fungi diverged whilst fungi were still in the ocean (Wang et al., 1999), plants must have colonised separately. Actually, it has been explicitly discovered in various ways (e.g. molecular
https://api.stackexchange.com
clock methods, Heckman et al., 2001) that plants must have left the ocean separately from fungi, but probably relied upon them to be able to do it (Brundrett, 2002; see note at bottom about this paper). Next, simple animals... arthropods colonised the land independently (Pisani et al., 2004), and since nematodes diverged before arthropods (Wang et al., 1999), they too must have independently found land. Then, lumbering along at the end, came the tetrapods (Long & Gordon, 2004). Note about the Brundrett paper: it has over 300 references! That guy must have been hoping for some sort of prize.

References
Battistuzzi FU, Feijao A, Hedges SB. 2004. A genomic timescale of prokaryote evolution: insights into the origin of methanogenesis, phototrophy, and the colonization of land. BMC Evol Biol 4:44.
Brundrett MC. 2002. Coevolution of roots and mycorrhizas of land plants. New Phytologist 154:275-304.
Feng D-F, Cho G, Doolittle RF. 1997. Determining divergence times with a protein clock: update and reevaluation. Proceedings of the National Academy of Sciences 94:13028-13033.
Heckman DS, Geiser DM, Eidell BR, Stauffer RL, Kardos NL, Hedges SB. 2001. Molecular evidence for the early colonization of land by fungi and plants. Science 293:1129-1133.
Long JA, Gordon MS. 2004. The greatest step in vertebrate history: a paleobiological review of the fish-tetrapod transition. Physiological and Biochemical Zoology 77:700-719.
Pisani D, Poling LL, Lyons-Weiler M, Hedges SB. 2004. The colonization of land by animals: molecular phylogeny and divergence times among arthropods. BMC Biol 2:1.
Taketani RG, Tsai SM. 2010. The influence of different land uses on the structure of archaeal communities in Amazonian anthrosols based on 16S rRNA and amoA genes. Microb Ecol 59:734-743.
Taylor TN, Osborn JM. 1996. The importance of fungi in shaping the paleoecosystem.
https://api.stackexchange.com
Review of Palaeobotany and Palynology 90:249-262.
Wang DY, Kumar S, Hedges SB. 1999. Divergence time estimates for the early history of animal phyla and the origin of plants, animals and fungi. Proc Biol Sci 266:163-171.
Watanabe Y, Martini JEJ, Ohmoto H. 2000. Geochemical evidence for terrestrial ecosystems 2.6 billion years ago. Nature 408:574-578.
https://api.stackexchange.com
While you do spend some body energy to keep the book lifted, it's important to differentiate it from physical effort. They are connected but are not the same. Physical effort depends not only on how much energy is spent, but also on how the energy is spent. Holding a book in a stretched arm requires a lot of physical effort, but it doesn't take that much energy. In the ideal case, if you managed to hold your arm perfectly steady, and your muscle cells managed to stay contracted without requiring energy input, there wouldn't be any energy spent at all because there wouldn't be any distance moved. In real scenarios, however, you do spend (chemical) energy stored within your body; but where is it spent? It is spent on a cellular level. Muscles are made of filaments which can slide relative to one another; these filaments are connected by molecules called myosin, which use up energy to move along the filaments but detach at time intervals to let them slide. When you keep your arm in position, the myosins hold the filaments in position, but when one of them detaches, other myosins have to make up for the slight relaxation locally. Chemical energy stored within your body is released by the cell as both work and heat.* In both the ideal and the real scenario we are talking about the physical definition of energy. In your consideration you ignore the movement of muscle cells, so you're considering the ideal case. A careful analysis of the real case leads to the conclusion that work is done and heat is released, even though the arm itself isn't moving.

*Ultimately, the work done by the cells is actually done on other cells, and eventually dissipates into heat due to friction and non-elasticity. So all the energy you spend is invested in keeping the muscle tension and is eventually dissipated as heat.
https://api.stackexchange.com
Tetrahedral complexes
Let's consider, for example, a tetrahedral $\ce{Ni(II)}$ complex ($\mathrm{d^8}$), like $\ce{[NiCl4]^2-}$. According to hybridisation theory, the central nickel ion has $\mathrm{sp^3}$ hybridisation, the four $\mathrm{sp^3}$-type orbitals are filled by electrons from the chloride ligands, and the $\mathrm{3d}$ orbitals are not involved in bonding. Already there are several problems with this interpretation. The most obvious is that the $\mathrm{3d}$ orbitals are very much involved in (covalent) bonding: a cursory glance at an MO diagram will show that this is the case. If they were not involved in bonding at all, they should remain degenerate, which is obviously untrue; and even if you bring in crystal field theory (CFT) to say that there is an ionic interaction, it is still not sufficient. If accuracy is desired, the complex can only really be described by a full MO diagram. One might ask why we should believe the MO diagram over the hybridisation picture. The answer is that there is a wealth of experimental evidence, especially electronic spectroscopy ($\mathrm{d-d^*}$ transitions being the most obvious example) and magnetic properties, that is in accordance with the MO picture and not the hybridisation one. It is simply impossible to explain many of these phenomena using this $\mathrm{sp^3}$ model. Lastly, hybridisation alone cannot explain whether a complex should be tetrahedral ($\ce{[NiCl4]^2-}$) or square planar ($\ce{[Ni(CN)4]^2-}$, or $\ce{[PtCl4]^2-}$). Generally the effect of the ligand, for example, is explained using the spectrochemical series. However, hybridisation cannot account for the position of ligands in the spectrochemical series! To do so you would need to bring in MO theory.

Octahedral complexes
Moving on to $\ce{Ni(II)}$ octahedral complexes, like $\ce{[Ni(H2O)6]^2+}$, the typical explanation is that there
https://api.stackexchange.com
is $\mathrm{sp^3d^2}$ hybridisation. But all the $\mathrm{3d}$ orbitals are already populated, so where do the two $\mathrm{d}$ orbitals come from? The $\mathrm{4d}$ set, I suppose. The points raised above for the tetrahedral case still apply here. However, here we have something even more criminal: the involvement of $\mathrm{4d}$ orbitals in bonding. This is simply not plausible, as these orbitals are energetically inaccessible. On top of that, it is unrealistic to expect that electrons will be donated into the $\mathrm{4d}$ orbitals when there are vacant holes in the $\mathrm{3d}$ orbitals. For octahedral complexes where there is the possibility of high- and low-spin forms (e.g. $\mathrm{d^5}$ $\ce{Fe^3+}$ complexes), hybridisation theory becomes even more misleading: it implies that there is a fundamental difference in the orbitals involved in metal-ligand bonding for the high- and low-spin complexes. However, this is simply not true (again, an MO diagram will illustrate this point), and the notion of $\mathrm{4d}$ orbitals being involved in bonding is no more realistic than it was in the last case, which is to say, utterly unrealistic. In this situation, one also has the added issue that hybridisation theory provides no way of predicting whether a complex is high- or low-spin, as this again depends on the spectrochemical series.

Summary
Hybridisation theory, when applied to transition metals, is both incorrect and inadequate. It is incorrect in the sense that it uses completely implausible ideas ($\mathrm{3d}$ metals using $\mathrm{4d}$ orbitals in bonding) as a basis for describing the metal complexes. That alone should cast doubt on the entire idea of using hybridisation for the $\mathrm{3d}$ transition metals. However, it is also inadequate in that it does not explain the rich chemistry of the transition metals and their complexes, be it their geometries, spectra, reactivities, or magnetic properties. This prevents it from being useful even as a predictive model.

What about other chemical species? You mentioned that hybridisation
https://api.stackexchange.com
works well for "other compounds". That is really not always the case, though. For simple compounds like water, etc., there are already issues associated with the standard VSEPR/hybridisation theory. Superficially, the $\mathrm{sp^3}$ hybridisation of oxygen is consistent with the observed bent structure, but that's just about all that can be explained. The photoelectron spectrum of water shows very clearly that the two lone pairs on oxygen are inequivalent, and the MO diagram of water backs this up. Apart from that, hybridisation has absolutely no way of explaining the structures of boranes; Wade's rules do a much better job with the delocalised bonding. And these are just period 2 elements: when you go into the chemistry of the heavier elements, hybridisation generally becomes less and less useful a concept. For example, hypervalency is a huge problem: $\ce{SF6}$ is claimed to be $\mathrm{sp^3d^2}$ hybridised, but in fact $\mathrm{d}$-orbital involvement in bonding is negligible. On the other hand, non-hypervalent compounds, such as $\ce{H2S}$, are probably best described as unhybridised; what happened to the theory that worked so well for $\ce{H2O}$? It just isn't applicable here, for reasons beyond the scope of this post. There is probably one scenario in which it is really useful, and that is when describing organic compounds. The reason is that tetravalent carbon tends to conform to the simple categories of $\mathrm{sp}^n$ ($n \in \{1, 2, 3\}$); we don't have the same teething issues with $\mathrm{d}$-orbitals that have been discussed above. But there are caveats. For example, it is important to recognise that it is not atoms that are hybridised, but rather orbitals: each carbon in cyclopropane uses $\mathrm{sp^5}$ orbitals for the $\ce{C-C}$ bonds and $\mathrm{sp^2}$ orbitals for the $\ce{C-H}$ bonds. The bottom line is that every model that we use in chemistry has a range of validity,
https://api.stackexchange.com
and we should be careful not to use a model in a context where it is not valid. Hybridisation theory is not valid in the context of transition metal complexes, and should not be used as a means of explaining their structure, bonding, and properties.
https://api.stackexchange.com
the " roiling boil " is a mechanism for moving heat from the bottom of the pot to the top. you see it on the stovetop because most of the heat generally enters the liquid from a superheated surface below the pot. but in a convection oven, whether the heat enters from above, from below, or from both equally depends on how much material you are cooking and the thermal conductivity of its container. i had an argument about this fifteen years ago which i settled with a great kitchen experiment. i put equal amounts of water in a black cast - iron skillet and a glass baking dish with similar horizontal areas, and put them in the same oven. ( glass is a pretty good thermal insulator ; the relative thermal conductivities and heat capacities of aluminum, stainless steel, and cast iron surprise me whenever i look them up. ) after some time, the water in the iron skillet was boiling like gangbusters, but the water in the glass was totally still. a slight tilt of the glass dish, so that the water touched a dry surface, was met with a vigorous sizzle : the water was keeping the glass temperature below the boiling point where there was contact, but couldn't do the same for the iron. when i pulled the two pans out of the oven, the glass pan was missing about half as much water as the iron skillet. i interpreted this to mean that boiling had taken place from the top surface only of the glass pan, but from both the top and bottom surfaces of the iron skillet. note that it is totally possible to get a bubbling boil from an insulating glass dish in a hot oven ; the bubbles are how you know when the lasagna is ready. ( a commenter reminds me that i used the " broiler " element at the top of the oven rather than the " baking " element at the bottom of the oven, to increase the degree to which the heat came " from above. " that's probably why i chose black cast iron, was to capture more of the radiant heat. )
https://api.stackexchange.com
The answers are no and no. Being dimensionless or having the same dimension is a necessary condition for quantities to be "compatible", but it is not a sufficient one. What one is trying to avoid is called a category error. There is an analogous situation in computer programming: one wishes to avoid putting values of some data type into places reserved for a different data type. But while having the same dimension is certainly required for values to belong to the same "data type", there is no reason why they cannot be demarcated by many other categories in addition to that. The newton-metre is a unit of both torque and energy, and joules per kelvin measure both entropy and heat capacity, but adding them is typically problematic. The same goes for adding the proverbial apples and oranges measured in "dimensionless units" of counting numbers. Actually, the last example shows that the demarcation of categories depends on context: if one only cares about apples and oranges as objects, it might be OK to add them. Dimension is so prominent in physics because it is rarely meaningful to mix quantities of different dimensions, and there is a nice calculus (dimensional analysis) for keeping track of it. But it also makes sense to introduce additional categories to demarcate values of quantities like torque and energy, even if there may not be as nice a calculus for them. As your own examples show, it also makes sense to treat radians differently depending on context: take their category ("dimension") versus steradians or counting numbers into account when deciding about addition, but disregard it when it comes to substitution into transcendental functions. Hertz is typically used to measure wave frequency, but because cycles and radians are officially dimensionless it shares a dimension with the unit of angular velocity, radians per second; radians also make the only difference between amperes for electric current and ampere-turns for magnetomotive force. Similarly, dimensionless steradians are the only difference between lumens and candelas, even though luminous intensity and flux are often distinguished. So in those contexts it might also make sense to treat radians and steradians as "dimensional". In fact, radians and steradians were in a class of their own as "supplementary units" of SI until 1995. That year the International Bureau of Weights and Measures (BIPM) decided that the "ambiguous status of the supplementary units compromises the internal coherence of the SI", and reclassified them as "dimensionless derived units,
https://api.stackexchange.com
the names and symbols of which may, but need not, be used in expressions for other SI derived units, as is convenient", thus eliminating the class of supplementary units. The desire to maintain a general rule that arguments of transcendental functions must be dimensionless might have played a role, but this shows that dimensional status is to a degree decided by convention rather than by fact. In the same vein, the ampere was introduced as a new base unit into the MKS system only in 1901, and incorporated into SI even later. As the name suggests, MKS originally made do with just metres, kilograms, and seconds as base units; this required fractional powers of metres and kilograms in the derived units of electric current, however. As @dmckee pointed out, energy and torque can be distinguished as scalars and pseudo-scalars, meaning that under orientation-reversing transformations like reflections the former keep their value while the latter switch sign. This brings up another categorization of quantities that plays a big role in physics: by transformation rules under coordinate changes. Among vectors there are "true" vectors (like velocity), covectors (like momentum), and pseudo-vectors (like angular momentum); in fact all tensor quantities are categorized by representations of the orthogonal (in relativity, Lorentz) group. This also comes with a nice calculus describing how tensor types combine under various operations (dot product, tensor product, wedge product, contractions, etc.). One reason for rewriting Maxwell's electrodynamics in terms of differential forms is to keep track of them. This becomes important when, say, the background metric is not Euclidean, because the identification of vectors and covectors depends on it. Different tensor types tend to have different dimensions anyway, but there are exceptions, and the categorizations are clearly independent. But even tensor type may not be enough. Before Joule's measurements of the mechanical equivalent of heat in the 1840s, the quantity of heat (measured in calories) and mechanical energy (measured in derived units) had two different dimensions. But even today one may wish to keep them in separate categories when studying a system where mechanical and thermal energy are approximately separately conserved; the same applies to Einstein's mass-energy. This means that categorical boundaries are not set in stone: they may be erected or taken down both for practical expediency and due to a physical discovery. Many historical peculiarities in the choice and development of units and unit systems are described in Klein's book The Science of Measurement.
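To make the programming analogy above concrete, here is a toy sketch; the class names and the decision to wrap each quantity in its own type are my own invention, purely for illustration:

```python
# Torque and energy both carry units of N*m, but distinct types let a program
# reject the category error of adding them.
from dataclasses import dataclass

@dataclass(frozen=True)
class Energy:
    joules: float
    def __add__(self, other: "Energy") -> "Energy":
        if not isinstance(other, Energy):
            raise TypeError("can only add Energy to Energy")
        return Energy(self.joules + other.joules)

@dataclass(frozen=True)
class Torque:
    newton_metres: float
    def __add__(self, other: "Torque") -> "Torque":
        if not isinstance(other, Torque):
            raise TypeError("can only add Torque to Torque")
        return Torque(self.newton_metres + other.newton_metres)

print(Energy(3.0) + Energy(4.0))   # fine: Energy(joules=7.0)
# Energy(3.0) + Torque(4.0)        # raises TypeError, even though both are "N*m"
```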
https://api.stackexchange.com
To the best of my knowledge, most physicists don't believe that antimatter is actually matter moving backwards in time. It's not even entirely clear what it would really mean to move backwards in time, from the popular viewpoint. If I'm remembering correctly, this idea all comes from a story that probably originated with Richard Feynman. At the time, one of the big puzzles of physics was why all instances of a particular elementary particle (all electrons, for example) are apparently identical. Feynman had a very hand-wavy idea that all electrons could in fact be the same electron, just bouncing back and forth between the beginning of time and the end. As far as I know, that idea never developed into anything mathematically grounded, but it did inspire Feynman and others to calculate what the properties of an electron moving backwards in time would be, in a certain precise sense that emerges from quantum field theory. What they came up with was a particle that matched the known properties of the positron. Just to give you a rough idea of what it means for a particle to "move backwards in time" in the technical sense: in quantum field theory, particles carry with them amounts of various conserved quantities as they move. These quantities may include energy, momentum, electric charge, "flavor", and others. As the particles move, these conserved quantities produce "currents", which have a direction based on the motion and sign of the conserved quantity. If you apply the time-reversal operator (which is a purely mathematical concept, not something that actually reverses time), you reverse the direction of the current flow, which is equivalent to reversing the sign of the conserved quantity, thus (roughly speaking) turning the particle into its antiparticle. For example, consider electric current: it arises from the movement of electric charge, and the direction of the current is a product of the direction of motion of the charge and the sign of the charge, $$\vec{I} = q\vec{v}.$$ Positive charge moving left ($+q \times -v$) is equivalent to negative charge moving right ($-q \times +v$). If you have a current of electrons moving to the right, and you apply the time-reversal operator, it converts the rightward velocity to leftward velocity ($-q \times -v$). But you would get the exact same result by instead converting the electrons into positrons and letting them
https://api.stackexchange.com
continue to move to the right ($+q \times +v$); either way, you wind up with the net positive charge flow moving to the right. By the way, optional reading if you're interested: there is a very basic (though hard to prove) theorem in quantum field theory, the TCP theorem, which says that if you apply the three operations of time reversal, charge conjugation (switching particles and antiparticles), and parity inversion (mirroring space), the result should be exactly equivalent to what you started with. We know from experimental data that, under certain exotic circumstances, the combination of charge conjugation and parity inversion does not leave all physical processes unchanged, which means that the same must be true of time reversal: physics is not time-reversal invariant. Of course, since we can't actually reverse time, we can't test in exactly what manner this is true.
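A throwaway numeric check of the sign argument above (values are arbitrary unit charges and speeds of my choosing): flipping the velocity and flipping the charge give the same current, which is the sense in which a backwards-moving electron "looks like" a forwards-moving positron.

```python
q_electron, v_right = -1.0, +1.0

current_time_reversed  = q_electron * (-v_right)       # electron moving left
current_charge_flipped = (-q_electron) * v_right       # positron moving right
print(current_time_reversed == current_charge_flipped)  # True: both are +1.0
```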
https://api.stackexchange.com
What type of solder is safest for home (hobbyist) use? This advice is liable to be met with doubt and even derision by some. By all means do your own checks, but please at least think about what I write here: I have cited a number of references below which give guidelines for soldering. These are as applicable to lead-free solders as to lead-based solders. If you decide after reading the following not to trust lead-based solders, despite my advice, then the guidelines will still prove useful. It is widely known that the improper handling of metallic lead can cause health problems. However, it is widely understood, currently and historically, that the use of tin-lead solder in normal soldering applications has essentially no negative health impact. Handling of lead-based solder, as opposed to the actual soldering, needs to be done sensibly, but this is easily achieved with basic common-sense procedures. While some electrical workers do have mildly increased epidemiological incidences of some diseases, these appear to be related to electric-field exposure, and even then the correlations are so small as to be generally statistically insignificant. Lead metal has a very low vapor pressure, and when it is handled at room temperature essentially none is inhaled. At soldering temperatures, vapor levels are still essentially zero. Tin-lead solder is essentially safe if used anything like sensibly. While some people express doubts about its use in any manner, these are not generally well founded in formal medical evidence or experience. While it is possible to poison yourself with tin-lead solder, taking even very modest and sensible precautions renders the practice safe for the user and for others in their household. While you would not want to allow children to suck it, anything like reasonable precautions will result in its use not being an issue. A significant proportion of lead which is "ingested" (taken orally or eaten) will be absorbed by the body. But you will acquire essentially no ingested lead from soldering if you don't eat it, don't suck solder, and wash your hands after soldering. Smoking while soldering is liable to be even unwiser than usual. It is widely accepted that inhaled lead from soldering is not at a dangerous level. The majority of inhaled lead is absorbed by the body, but the vapor pressure of lead at soldering temperatures is so low that there is essentially no lead vapor in the air while soldering. Sticking a soldering iron up your nose (hot or cold) is
https://api.stackexchange.com
liable to damage your health, but not due to the effects of lead. The vapor pressure of lead at 330 °C (very hot for solder), i.e. about 600 kelvin, is around 10⁻⁸ mm of mercury: lead ("Pb") crosses the x-axis at 600 K on the lower graph here. These are interesting and useful graphs of the vapor pressure versus temperature for many elements. (By comparison, zinc has about 1,000,000 times as high a vapor pressure at the same temperature, and cadmium, which should definitely be avoided, 10,000,000 times as high.) Atmospheric pressure is ~760 mm of Hg, so the lead vapor pressure at a very hot iron temperature is about 1 part in 10¹¹, or one part per 100 billion. The major problems with lead are caused either by its release into the environment, where it can be converted to more soluble forms and introduced into the food chain, or by its use in forms which are already soluble or which are liable to be ingested. So lead paint on toys or nursery furniture, lead paint on houses which gets turned into sanding dust or paint flakes, lead as an additive in petrol which gets disseminated in gaseous and soluble forms, or lead which ends up in landfills are all forms which cause real problems and which have led to bans on lead in many situations. Lead in solder is bad for the environment because of where it is liable to end up when it is disposed of. This general prohibition has led to a large degree of misunderstanding about its use "at the front end". If you insist on regularly vaporising lead in close proximity to your person, e.g. by firing a handgun frequently, then you should take precautions regarding vapor inhalation. Otherwise, common sense is very likely to be good enough. Washing your hands after soldering is a wise precaution, but more likely to be useful for removal of trace solid lead particles. Use of a fume extractor and filter is wise, but I'd be far more worried about the resin or flux smoke than about lead vapor. Sean Breheney notes: "There is a significant danger associated with inhaling the fumes of certain fluxes (including rosin) and therefore fume extraction or excellent ventilation is, in my opinion, essential for anyone doing soldering more often than, say, 1 hour per week. I generally have trained myself to inhale when the fumes are not being generated and exhale slowly while actually soldering, but that is only adequate for very
https://api.stackexchange.com
small jobs and I try to remember to use a fume extractor for larger ones." (Added July 2021.) Note that there are many documents on the web which state that lead solder is hazardous; few or none try to explain why this is said to be the case. A soldering precautions sheet notes: potential exposure routes from soldering include ingestion of lead due to surface contamination. The digestive system is the primary means by which lead can be absorbed into the human body. Skin contact with lead is, in and of itself, harmless, but getting lead dust on your hands can result in it being ingested if you don't wash your hands before eating, smoking, etc. An often overlooked danger is the habit of chewing fingernails: the spaces under the fingernails are great collectors of dirt and dust, and almost everything that is handled or touched may be found under the fingernails. Ingesting even a small amount of lead is dangerous because it is a cumulative poison which is not excreted by normal bodily function. Lead soldering safety guidelines give standard advice; their comments on lead fumes are rubbish. FWIW, the vapor pressure of lead is given by $$\log_{10} p(\mathrm{mm}) = -\frac{10372}{T} - \log_{10} T + 11.35,$$ quoted from "The Vapor Pressures of Metals; a New Experimental Method"; see also Wikipedia on vapor pressure. For more on soldering in general see Better Soldering.

Lead spatter and inhalation & ingestion
It's been suggested that the statement "the majority of inhaled lead is absorbed by the body, but the vapor pressure of lead at soldering temperatures is so low that there is essentially no lead vapor in the air while soldering" is not relevant, on the grounds that vapor pressure isn't important if the lead is being atomized into droplets that you can then inhale: look around the soldering iron and there's lead dust everywhere. In response: "inhalation" there referred to lead rendered gaseous, usually by chemical combination. For example, the use of tetraethyl lead (TEL) in petrol resulted in gaseous lead compounds, not directly from the TEL itself; from the Wikipedia tetraethyllead page: "the Pb and PbO would quickly over-accumulate and destroy an engine. For this reason, the lead scavengers 1,2-dibromoethane and 1,2-dichloroethane
https://api.stackexchange.com
are used in conjunction with TEL; these agents form volatile lead(II) bromide and lead(II) chloride, respectively, which are flushed from the engine and into the air." In engines this process occurs at far higher temperatures than exist in soldering, and there is no intentional process which produces volatile lead compounds. (The exceedingly unfortunate may discover a flux which contains substances like the above lead-scavenging halides, but by the very nature of flux this seems vanishingly unlikely in the real world.) Lead in metallic droplets at soldering temperatures does not come close to being vaporised at anything like significant partial pressures (see comments and references above), and if any enters the body it counts as "ingested", not inhaled. Basic precautions against ingestion are widely recommended, as mentioned above: washing of hands, not smoking while soldering and not licking lead have been noted as sensible. For lead "spatter" to qualify for direct ingestion it would need to ballistically enter the mouth or nose while soldering. It's conceivable that some may do this, but if any does, the quantity is very small. It's generally recognised, both historically and currently, that the actual soldering process is not what's hazardous. A significant number of webpages do state that lead from solder is vaporized by soldering and that dangerous quantities of lead can be inhaled. On every such page I have looked at there are no references to anything like reputable sources, and in almost every such case there are no references at all. The general RoHS prohibitions and the undoubted dangers that lead poses in appropriate circumstances have led to a cachet of urban legend and spurious comments without any traceable foundations. And again, it was suggested that: "anyone who's sneezed in a dusty room knows that it doesn't have to enter the nose or mouth 'ballistically'. Any time solder splatters or flux pops, it creates tiny droplets of lead that solidify to dust. Small enough particles of dust can be airborne, and small exposures over years accumulate in the body. 'Lead dust can form when lead-based paint is dry scraped, dry sanded, or heated. Lead chips and dust can get on surfaces and objects that people touch. Settled lead dust can re-enter the air when people vacuum, sweep or walk through it.'" In response: a quality reference, or a few, that indicated that airborne dust can be produced
https://api.stackexchange.com
in significant quantity by soldering would go a long way towards establishing the assertions; finding negative evidence is, as ever, harder. There is no question about the dangers from lead-based paints, whether from airborne dust from sanding, children sucking lead-painted objects, or surface dust produced: all these are extremely well documented. Lead in a metallic alloy for soldering is an entirely different animal. I have many decades of personal soldering experience and a reasonable awareness of industry experience. Dusty rooms we all know about, but that has no bearing on whether solder does or doesn't produce lead dust. Soldering can produce small lead particles, but these appear to be metallic alloyed lead. "Lead" dust from paint is liable to contain lead oxide or occasionally other lead-based substances. Such dust may indeed be subject to aerial transmission if finely enough divided, but this provides no information about how metallic lead performs in dust production. I am unaware of discernible "lead dust" occurring from 'popping flux', and I'm unaware of any mechanism that would allow mechanically small lead droplets to achieve a low enough density to float in air in the normal sense. Brownian motion could loft metallic lead particles of a small enough size, but I've not seen any evidence (or found any references) suggesting that small enough particles are formed in measurable quantities. Interestingly, this answer had 2 downvotes; now it has one. Somebody changed their mind: thanks. Somebody didn't; maybe they'd like to tell me why? The aim is to be balanced and objective and as factual as possible. If it falls short, please advise.

Added 2020: sucking solder? "I remember biting solder when I was a kid and for about 2 years I wouldn't wash my hands after soldering. Will the effects show up in the future?" I can only give you a layman's opinion; I'm not qualified to give medical advice. I'd guess it's probably OK, but I don't know. I suspect that the effects are limited due to the insolubility of lead, but lead poisoning from finely divided lead, such as in paint, is a significant poisoning path. You can be tested for
https://api.stackexchange.com
lead in the blood very easily (it requires one drop of blood) and it's probably worth doing. Internet diagnosis is, as I'm sure you know, a very poor substitute for proper medical advice. That said, here is Mayo Clinic's page on lead poisoning symptoms & causes, and here is their page on diagnosis and treatment. Mayo Clinic is one of the better sources for medical advice but, even then, it certainly does not replace proper medical advice.
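Going back to the vapor-pressure equation quoted earlier, here is a quick numeric check, assuming the final constant is +11.35 (the sign that reproduces the "essentially zero" vapor pressure claimed for a very hot iron tip); this is my own evaluation, not part of the referenced paper:

```python
import math

def lead_vapor_pressure_mmHg(T_kelvin):
    # log10 p(mm) = -10372/T - log10(T) + 11.35, from the quoted equation
    return 10 ** (-10372 / T_kelvin - math.log10(T_kelvin) + 11.35)

p = lead_vapor_pressure_mmHg(600)   # ~330 degC iron tip
print(p)                            # on the order of 1e-9 to 1e-8 mmHg: vanishingly small
print(p / 760)                      # a few parts per trillion of an atmosphere
```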
https://api.stackexchange.com
A quick search on Web of Science yields "Polyphasic wake/sleep episodes in the fire ant, Solenopsis invicta" (Cassill et al., 2009; @Mike Taylor found an accessible copy here) as one of the first hits. The main points from the abstract: yes, ants sleep. Indicators of deep sleep: ants are non-responsive to contact by other ants and their antennae are folded; rapid antennal movement (RAM sleep). Queens have about 92 sleep episodes per day, each 6 minutes long, and queens synchronize their wake/sleep cycles. Workers have about 253 sleep episodes per day, each 1.1 minutes long. "Activity episodes were unaffected by light/dark periods." If you study the paper you might find more information in its introduction or in the references regarding why ants sleep, although there doesn't seem to be scientific consensus. The abstract only says that the shorter total sleeping time of the workers is likely related to them being disposable.
https://api.stackexchange.com
Okay, this is not so much an answer as it is a summary of my own progress on this topic after giving it some thought. I don't think it's a settled debate in the community yet, so I don't feel so much ashamed about it :) A few things worth noting: the bond energy found by the authors for this fourth bond is $\pu{13.2 kcal/mol}$, i.e. about $\pu{55 kJ/mol}$. This is very weak for a covalent bond. You can compare it to other values here, or to the energies of the first three bonds in triple-bonded carbon, which are respectively $348$, $266$, and $\pu{225 kJ/mol}$. This fourth bond is actually even weaker than the strongest of hydrogen bonds ($\ce{F\bond{...}H-F}$, at $\pu{160 kJ/mol}$). Another point of view on this article could thus be: "valence bond theory necessarily predicts a quadruple bond, and it has now been precisely calculated and found to be quite weak." The findings of this article are consistent with earlier calculations using other quantum chemistry methods (e.g. the DFT calculations in ref. 48 of the Nature Chemistry paper), which found a bond order between 3 and 4 for molecular dicarbon. However, the existence of this quadruple bond is somewhat at odds with the cohesive energy of gas-phase dicarbon, which according to Wikipedia is $\pu{6.32 eV}$, i.e. $\pu{609 kJ/mol}$. This latter value is much more in line with typical double bonds, reported at an average of $\pu{614 kJ/mol}$. This is still a bit of a mystery to me…
https://api.stackexchange.com
It has to be so common a question that the answer is actually given in various places on DuPont's own website (DuPont are the makers of Teflon): "If nothing sticks to Teflon®, then how does Teflon® stick to a pan?" Nonstick coatings are applied in layers, just like paint. The first layer is the primer, and it's the special chemistry in the primer that makes it adhere to the metal surface of a pan. And from this other webpage of theirs: the primer (or primers, if you include the "mid coat" in the picture above) adheres very strongly to the roughened surface, often obtained by sandblasting; it's chemisorption, and the primer's chemical nature is chosen so as to obtain strong bonding to the metal surface. Then the PTFE chain extremities create bonds with the primer, and thus it stays put.
https://api.stackexchange.com
Edit: this is now in SymPy.

$ isympy
In [1]: A = MatrixSymbol('A', n, n)
In [2]: B = MatrixSymbol('B', n, n)
In [3]: context = Q.symmetric(A) & Q.positive_definite(A) & Q.orthogonal(B)
In [4]: ask(Q.symmetric(B*A*B.T) & Q.positive_definite(B*A*B.T), context)
Out[4]: True

Older answer that shows other work: so after looking into this for a while, this is what I've found. The current answer to my specific question is "no, there is no current system that can answer this question." There are, however, a few things that seem to come close. First, Matt Knepley and Lagerbaer both pointed to work by Diego Fabregat and Paolo Bientinesi. This work shows both the potential importance and the feasibility of this problem; it's a good read. Unfortunately I'm not certain exactly how their system works or what it is capable of (if anyone knows of other public material on this topic, do let me know). Second, there is a tensor algebra library written for Mathematica called xAct which handles symmetries and such symbolically. It does some things very well but is not tailored to the special case of linear algebra. Third, these rules are written down formally in a couple of libraries for Coq, an automated theorem-proving assistant (Google search for Coq linear/matrix algebra to find a few). This is a powerful system which unfortunately seems to require human interaction. After talking with some theorem-prover people, they suggest looking into logic programming (i.e. Prolog, which Lagerbaer also suggested) for this sort of thing. To my knowledge this hasn't yet been done; I may play with it in the future. Update: I've implemented this using the Maude system. My code is hosted on GitHub.
https://api.stackexchange.com
First, a note: while oxygen has fewer allotropes than sulfur, it sure has more than two! These include $\ce{O}$, $\ce{O2}$, $\ce{O3}$, $\ce{O4}$, $\ce{O8}$, metallic $\ce{O}$ and four other solid phases. Many of these actually have a corresponding sulfur variant. However, you are right in the sense that sulfur has a greater tendency to catenate… let's try to see why! Here are the values of the single and double bond enthalpies:
$$\begin{array}{cc} \hline \text{bond} & \text{dissociation energy / } \mathrm{kJ~mol^{-1}} \\ \hline \ce{O-O} & 142 \\ \ce{S-S} & 268 \\ \ce{O=O} & 499 \\ \ce{S=S} & 352 \\ \hline \end{array}$$
This means that $\ce{O=O}$ is stronger than $\ce{S=S}$, while $\ce{O-O}$ is weaker than $\ce{S-S}$. So, in sulfur, single bonds are favoured and catenation is easier than in oxygen compounds. It seems that the reason for the weaker $\ce{S=S}$ double bond has its roots in the size of the atom: it's harder for the two atoms to come to a small enough distance, so the overlap of the $\mathrm{3p}$ orbitals is small and the $\pi$ bond is weak. This is attested by looking down the periodic table: $\ce{Se=Se}$ has an even weaker bond enthalpy of $\pu{272 kJ/mol}$. There is more in-depth discussion of the relative bond strengths in this question. While not particularly stable, it's actually also possible for oxygen to form discrete molecules with the general formula $\ce{H-O_n-H}$; water and hydrogen peroxide are the first two members of this class, but $n$ goes up to at least $5$. These "hydrogen polyoxides" are described further in this
https://api.stackexchange.com
question.
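As a back-of-the-envelope check of the enthalpy argument above, here is a small sketch; the per-atom framing (one single bond per atom in a chain or ring, versus half a double bond per atom in a diatomic molecule) is my own way of restating the comparison, not taken from the answer:

```python
single = {"O": 142, "S": 268}   # kJ/mol, from the table above
double = {"O": 499, "S": 352}

for el in ("O", "S"):
    per_atom_chain    = single[el]        # e.g. an S8 ring: 8 single bonds / 8 atoms
    per_atom_diatomic = double[el] / 2    # e.g. O2: one double bond shared by 2 atoms
    better = "catenation" if per_atom_chain > per_atom_diatomic else "diatomic"
    print(f"{el}: chain {per_atom_chain} vs diatomic {per_atom_diatomic:.0f} kJ/mol per atom -> {better}")
# Output: oxygen favours the diatomic, sulfur favours catenation, matching the argument above.
```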
https://api.stackexchange.com
Pick your poison. I recommend using Homebrew. I have tried all of these methods except for "Fink" and "other methods". Originally, I preferred MacPorts when I wrote this answer. In the two years since, Homebrew has grown a lot as a project and has proved more maintainable than MacPorts, which can require a lot of PATH hacking.

Installing a version that matches system compilers
If you want the version of gfortran to match the versions of gcc, g++, etc. installed on your machine, download the appropriate version of gfortran from here. The R developers and SciPy developers recommend this method. Advantages: matches versions of compilers installed with Xcode or with Kenneth Reitz's installer; unlikely to interfere with OS upgrades; coexists nicely with MacPorts (and probably Fink and Homebrew) because it installs to /usr/bin; doesn't clobber existing compilers; no need to edit the PATH. Disadvantages: the compiler stack will be really old (GCC 4.2.1 is the latest Apple compiler; it was released in 2007); installs to /usr/bin.

Installing a precompiled, up-to-date binary from HPC Mac OS X
HPC Mac OS X has binaries for the latest release of GCC (at the time of this writing, 4.8.0 (experimental)), as well as g77 binaries and an f2c-based compiler. The PETSc developers recommend this method in their FAQ. Advantages: with the right command, installs in /usr/local; up to date; doesn't clobber existing system compilers, or the approach above; won't interfere with OS upgrades. Disadvantages: need to edit the PATH; no easy way to switch between versions (you could modify the PATH, delete the compiler install, or kludge around it); will clobber other methods of installing compilers in /usr/local, because the compiler binaries are simply named 'gcc', 'g++', etc. (without a version number, and without any symlinks).

Use MacPorts
MacPorts has a number of versions of compilers available for use. Advantages: installs in /opt/local; port select can be used to switch among compiler versions (including system compilers); won't
https://api.stackexchange.com
interfere with OS upgrades. Disadvantages: installing ports tends to require an entire "software ecosystem"; the compilers don't include debugging symbols, which can pose a problem when using a debugger or installing PETSc (Sean Farley proposes some workarounds); also requires changing the PATH; could interfere with Homebrew and Fink installs (see this post on Super User).

Use Homebrew
Homebrew can also be used to install a Fortran compiler. Advantages: easy-to-use package manager; installs the same Fortran compiler as in "installing a version that matches system compilers"; only installs what you need (in contrast to MacPorts); could install a newer GCC (4.7.0) stack using the alternate repository homebrew-dupes. Disadvantages: inherits all the disadvantages from "installing a version that matches system compilers"; may need to follow the Homebrew paradigm when installing other (non-Homebrew) software to /usr/local to avoid messing anything up; could interfere with MacPorts and Fink installs (see this post on Super User); need to change the PATH; installs could depend on system libraries, meaning that dependencies for Homebrew packages could break on an OS upgrade (see this article). I wouldn't expect there to be system library dependencies when installing gfortran, but there could be such dependencies when installing other Homebrew packages.

Use Fink
In theory, you can use Fink to install gfortran. I haven't used it, and I don't know anyone who has (and was willing to say something positive).

Other methods
Other binaries and links are listed on the GFortran wiki. Some of the links are already listed above. The remaining installation methods may or may not conflict with those described above; use at your own risk.
https://api.stackexchange.com
i apologize in advance for the length of this post : it is with some trepidation that i let it out in public at all, because it takes some time and attention to read through and undoubtedly has typographic errors and expository lapses. but here it is for those who are interested in the fascinating topic, offered in the hope that it will encourage you to identify one or more of the many parts of the clt for further elaboration in responses of your own. most attempts at " explaining " the clt are illustrations or just restatements that assert it is true. a really penetrating, correct explanation would have to explain an awful lot of things. before looking at this further, let's be clear about what the clt says. as you all know, there are versions that vary in their generality. the common context is a sequence of random variables, which are certain kinds of functions on a common probability space. for intuitive explanations that hold up rigorously i find it helpful to think of a probability space as a box with distinguishable objects. it doesn't matter what those objects are but i will call them " tickets. " we make one " observation " of a box by thoroughly mixing up the tickets and drawing one out ; that ticket constitutes the observation. after recording it for later analysis we return the ticket to the box so that its contents remain unchanged. a " random variable " basically is a number written on each ticket. in 1733, abraham de moivre considered the case of a single box where the numbers on the tickets are only zeros and ones ( " bernoulli trials " ), with some of each number present. he imagined making $ n $ physically independent observations, yielding a sequence of values $ x _ 1, x _ 2, \ ldots, x _ n $, all of which are zero or one. the sum of those values, $ y _ n = x _ 1 + x _ 2 + \ ldots + x _ n $, is random because the terms in the sum are. therefore, if we could repeat this procedure many times, various sums ( whole numbers ranging from $ 0 $ through $ n $ ) would appear with various frequencies - - proportions of the total. ( see the histograms below. ) now one would expect - - and it's true - - that for very large values of $ n $, all the frequencies would be quite small. if we were to be so bold ( or foolish ) as to attempt to "
https://api.stackexchange.com
take a limit " or " let $ n $ go to $ \ infty $ ", we would conclude correctly that all frequencies reduce to $ 0 $. but if we simply draw a histogram of the frequencies, without paying any attention to how its axes are labeled, we see that the histograms for large $ n $ all begin to look the same : in some sense, these histograms approach a limit even though the frequencies themselves all go to zero. these histograms depict the results of repeating the procedure of obtaining $ y _ n $ many times. $ n $ is the " number of trials " in the titles. the insight here is to draw the histogram first and label its axes later. with large $ n $ the histogram covers a large range of values centered around $ n / 2 $ ( on the horizontal axis ) and a vanishingly small interval of values ( on the vertical axis ), because the individual frequencies grow quite small. fitting this curve into the plotting region has therefore required both a shifting and rescaling of the histogram. the mathematical description of this is that for each $ n $ we can choose some central value $ m _ n $ ( not necessarily unique! ) to position the histogram and some scale value $ s _ n $ ( not necessarily unique! ) to make it fit within the axes. this can be done mathematically by changing $ y _ n $ to $ z _ n = ( y _ n - m _ n ) / s _ n $. remember that a histogram represents frequencies by areas between it and the horizontal axis. the eventual stability of these histograms for large values of $ n $ should therefore be stated in terms of area. so, pick any interval of values you like, say from $ a $ to $ b \ gt a $ and, as $ n $ increases, track the area of the part of the histogram of $ z _ n $ that horizontally spans the interval $ ( a, b ] $. the clt asserts several things : no matter what $ a $ and $ b $ are, if we choose the sequences $ m _ n $ and $ s _ n $ appropriately ( in a way that does not depend on $ a $ or $ b $ at all ), this area indeed approaches a limit as $ n $ gets large. the sequences $ m _ n $ and $ s _ n $ can be chosen in a way that depends only on
https://api.stackexchange.com
$ n $, the average of values in the box, and some measure of spread of those values - - but on nothing else - - so that regardless of what is in the box, the limit is always the same. ( this universality property is amazing. ) specifically, that limiting area is the area under the curve $ y = \ exp ( - z ^ 2 / 2 ) / \ sqrt { 2 \ pi } $ between $ a $ and $ b $ : this is the formula of that universal limiting histogram. the first generalization of the clt adds, when the box can contain numbers in addition to zeros and ones, exactly the same conclusions hold ( provided that the proportions of extremely large or small numbers in the box are not " too great, " a criterion that has a precise and simple quantitative statement ). the next generalization, and perhaps the most amazing one, replaces this single box of tickets with an ordered indefinitely long array of boxes with tickets. each box can have different numbers on its tickets in different proportions. the observation $ x _ 1 $ is made by drawing a ticket from the first box, $ x _ 2 $ comes from the second box, and so on. exactly the same conclusions hold provided the contents of the boxes are " not too different " ( there are several precise, but different, quantitative characterizations of what " not too different " has to mean ; they allow an astonishing amount of latitude ). these five assertions, at a minimum, need explaining. there's more. several intriguing aspects of the setup are implicit in all the statements. for example, what is special about the sum? why don't we have central limit theorems for other mathematical combinations of numbers such as their product or their maximum? ( it turns out we do, but they are not quite so general nor do they always have such a clean, simple conclusion unless they can be reduced to the clt. ) the sequences of $ m _ n $ and $ s _ n $ are not unique but they're almost unique in the sense that eventually they have to approximate the expectation of the sum of $ n $ tickets and the standard deviation of the sum, respectively ( which, in the first two statements of the clt, equals $ \ sqrt { n } $ times the standard deviation of the box ). the standard deviation is one measure of the spread of values, but it is by no means the only one nor is it the most " natural, " either historically or for
https://api.stackexchange.com
many applications. ( many people would choose something like a median absolute deviation from the median, for instance. ) why does the sd appear in such an essential way? consider the formula for the limiting histogram : who would have expected it to take such a form? it says the logarithm of the probability density is a quadratic function. why? is there some intuitive or clear, compelling explanation for this? i confess i am unable to reach the ultimate goal of supplying answers that are simple enough to meet srikant's challenging criteria for intuitiveness and simplicity, but i have sketched this background in the hope that others might be inspired to fill in some of the many gaps. i think a good demonstration will ultimately have to rely on an elementary analysis of how values between $ \ alpha _ n = a s _ n + m _ n $ and $ \ beta _ n = b s _ n + m _ n $ can arise in forming the sum $ x _ 1 + x _ 2 + \ ldots + x _ n $. going back to the single - box version of the clt, the case of a symmetric distribution is simpler to handle : its median equals its mean, so there's a 50 % chance that $ x _ i $ will be less than the box's mean and a 50 % chance that $ x _ i $ will be greater than its mean. moreover, when $ n $ is sufficiently large, the positive deviations from the mean ought to compensate for the negative deviations in the mean. ( this requires some careful justification, not just hand waving. ) thus we ought primarily to be concerned about counting the numbers of positive and negative deviations and only have a secondary concern about their sizes. ( of all the things i have written here, this might be the most useful at providing some intuition about why the clt works. indeed, the technical assumptions needed to make the generalizations of the clt true essentially are various ways of ruling out the possibility that rare huge deviations will upset the balance enough to prevent the limiting histogram from arising. ) this shows, to some degree anyway, why the first generalization of the clt does not really uncover anything that was not in de moivre's original bernoulli trial version. at this point it looks like there is nothing for it but to do a little math : we need to count the number of distinct ways in which the number of positive deviations from the mean can differ from the number of negative deviations
https://api.stackexchange.com
by any predetermined value $ k $, where evidently $ k $ is one of $ - n, - n + 2, \ ldots, n - 2, n $. but because vanishingly small errors will disappear in the limit, we don't have to count precisely ; we only need to approximate the counts. to this end it suffices to know that $ $ \ text { the number of ways to obtain } k \ text { positive and } n - k \ text { negative values out of } n $ $ $ $ \ text { equals } \ frac { n - k + 1 } { k } $ $ $ $ \ text { times the number of ways to get } k - 1 \ text { positive and } n - k + 1 \ text { negative values. } $ $ ( that's a perfectly elementary result so i won't bother to write down the justification. ) now we approximate wholesale. the maximum frequency occurs when $ k $ is as close to $ n / 2 $ as possible ( also elementary ). let's write $ m = n / 2 $. then, relative to the maximum frequency, the frequency of $ m + j + 1 $ positive deviations ( $ j \ ge 0 $ ) is estimated by the product $ $ \ frac { m + 1 } { m + 1 } \ frac { m } { m + 2 } \ cdots \ frac { m - j + 1 } { m + j + 1 } $ $ $ $ = \ frac { 1 - 1 / ( m + 1 ) } { 1 + 1 / ( m + 1 ) } \ frac { 1 - 2 / ( m + 1 ) } { 1 + 2 / ( m + 1 ) } \ cdots \ frac { 1 - j / ( m + 1 ) } { 1 + j / ( m + 1 ) }. $ $ 135 years before de moivre was writing, john napier invented logarithms to simplify multiplication, so let's take advantage of this. using the approximation $ $ \ log \ left ( \ frac { 1 - x } { 1 + x } \ right ) = - 2x - \ frac { 2x ^ 3 } { 3 } + o ( x ^ 5 ), $ $ we find that the log of the relative frequency is approximately $ $ - \ frac { 2 } { m + 1 } \ left ( 1 + 2 +
https://api.stackexchange.com
\ cdots + j \ right ) - \ frac { 2 } { 3 ( m + 1 ) ^ 3 } \ left ( 1 ^ 3 + 2 ^ 3 + \ cdots + j ^ 3 \ right ) = - \ frac { j ^ 2 } { m } + o \ left ( \ frac { j ^ 4 } { m ^ 3 } \ right ). $ $ because the error in approximating this sum by $ - j ^ 2 / m $ is on the order of $ j ^ 4 / m ^ 3 $, the approximation ought to work well provided $ j ^ 4 $ is small relative to $ m ^ 3 $. that covers a greater range of values of $ j $ than is needed. ( it suffices for the approximation to work for $ j $ only on the order of $ \ sqrt { m } $ which asymptotically is much smaller than $ m ^ { 3 / 4 } $. ) consequently, writing $ $ z = \ sqrt { 2 } \, \ frac { j } { \ sqrt { m } } = \ frac { j / n } { 1 / \ sqrt { 4n } } $ $ for the standardized deviation, the relative frequency of deviations of size given by $ z $ must be proportional to $ \ exp ( - z ^ 2 / 2 ) $ for large $ m. $ thus appears the gaussian law of # 3 above. obviously much more analysis of this sort should be presented to justify the other assertions in the clt, but i'm running out of time, space, and energy and i've probably lost 90 % of the people who started reading this anyway. this simple approximation, though, suggests how de moivre might originally have suspected that there is a universal limiting distribution, that its logarithm is a quadratic function, and that the proper scale factor $ s _ n $ must be proportional to $ \ sqrt { n } $ ( as shown by the denominator of the preceding formula ). it is difficult to imagine how this important quantitative relationship could be explained without invoking some kind of mathematical information and reasoning ; anything less would leave the precise shape of the limiting curve a complete mystery.
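if you want to see this standardization at work numerically, here is a small python simulation ; numpy's binomial sampler stands in for repeatedly summing $ n $ bernoulli tickets, and the empirical area is compared with the area under $ \ exp ( - z ^ 2 / 2 ) / \ sqrt { 2 \ pi } $ on an interval :

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1_000, 200_000

y = rng.binomial(n, 0.5, size=trials)   # many independent realizations of the sum Y_n
m_n = n / 2                             # centering value: the expectation of Y_n
s_n = math.sqrt(n) / 2                  # scaling value: the standard deviation of Y_n
z = (y - m_n) / s_n                     # standardized sums Z_n

a, b = -1.0, 1.0
empirical = np.mean((z > a) & (z <= b))
# area under exp(-z^2/2)/sqrt(2*pi) between a and b, via the error function
gaussian = 0.5 * (math.erf(b / math.sqrt(2)) - math.erf(a / math.sqrt(2)))

print(f"simulated area in ({a}, {b}]: {empirical:.4f}")
print(f"limiting Gaussian area:      {gaussian:.4f}")
```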
https://api.stackexchange.com
no need to use taylor series, this can be derived in a similar way to the formula for geometric series. let's find a general formula for the following sum : $ $ s _ { m } = \ sum _ { n = 1 } ^ { m } nr ^ { n }. $ $ notice that \ begin { align * } s _ { m } - rs _ { m } & = - mr ^ { m + 1 } + \ sum _ { n = 1 } ^ { m } r ^ { n } \ \ & = - mr ^ { m + 1 } + \ frac { r - r ^ { m + 1 } } { 1 - r } \ \ & = \ frac { mr ^ { m + 2 } - ( m + 1 ) r ^ { m + 1 } + r } { 1 - r }. \ end { align * } hence $ $ s _ m = \ frac { mr ^ { m + 2 } - ( m + 1 ) r ^ { m + 1 } + r } { ( 1 - r ) ^ 2 }. $ $ this equality holds for any $ r $, but in your case we have $ r = \ frac { 1 } { 3 } $ and a factor of $ \ frac { 2 } { 3 } $ in front of the sum. that is \ begin { align * } \ sum _ { n = 1 } ^ { \ infty } \ frac { 2n } { 3 ^ { n + 1 } } & = \ frac { 2 } { 3 } \ lim _ { m \ rightarrow \ infty } \ frac { m \ left ( \ frac { 1 } { 3 } \ right ) ^ { m + 2 } - ( m + 1 ) \ left ( \ frac { 1 } { 3 } \ right ) ^ { m + 1 } + \ left ( \ frac { 1 } { 3 } \ right ) } { \ left ( 1 - \ left ( \ frac { 1 } { 3 } \ right ) \ right ) ^ { 2 } } \ \ & = \ frac { 2 } { 3 } \ frac { \ left ( \ frac { 1 } { 3 } \ right ) } { \ left ( \ frac { 2 } { 3 } \ right ) ^ { 2 } } \ \ & = \ frac { 1 } { 2 }. \ end
https://api.stackexchange.com
{ align * } added note : we can define $ $ s _ m ^ k ( r ) = \ sum _ { n = 1 } ^ m n ^ k r ^ n. $ $ then the sum considered above is $ s _ m ^ 1 ( r ) $, and the geometric series is $ s _ m ^ 0 ( r ) $. we can evaluate $ s _ m ^ 2 ( r ) $ by using a similar trick, and considering $ s _ m ^ 2 ( r ) - rs _ m ^ 2 ( r ) $. this will then equal a combination of $ s _ m ^ 1 ( r ) $ and $ s _ m ^ 0 ( r ) $ for which we already have formulas. this means that given a $ k $, we could work out a formula for $ s _ m ^ k ( r ) $, but can we find $ s _ m ^ k ( r ) $ in general for any $ k $? it turns out we can, and the formula is similar to the formula for $ \ sum _ { n = 1 } ^ m n ^ k $, and involves the bernoulli numbers. in particular, the denominator is $ ( 1 - r ) ^ { k + 1 } $.
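as a quick sanity check of the closed form ( which, note, requires $ r \ neq 1 $ ), here is a short python snippet using exact rational arithmetic ; the function name is just for illustration :

```python
from fractions import Fraction

def s1(m, r):
    """Closed form for sum_{n=1}^m n*r^n derived above (valid for r != 1)."""
    return (m * r**(m + 2) - (m + 1) * r**(m + 1) + r) / (1 - r)**2

r = Fraction(1, 3)
m = 50
direct = sum(n * r**n for n in range(1, m + 1))   # exact partial sum
assert s1(m, r) == direct                         # the closed form matches exactly

# the series in question: (2/3) * sum n*(1/3)^n  ->  1/2 as m grows
print(float(Fraction(2, 3) * s1(200, r)))         # ~0.5
```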
https://api.stackexchange.com
a byte of data is eight bits. there may be more bits per byte of data that are used at the os or even the hardware level for error checking ( parity bit, or even a more advanced error detection scheme ), but the data is eight bits and any parity bit is usually invisible to the software. a byte has been standardized to mean'eight bits of data '. the text isn't wrong in saying there may be more bits dedicated to storing a byte of data than the eight bits of data themselves, but those aren't typically considered part of the byte per se ; the text itself points to this fact. you can see this in the following section of the tutorial : doubleword : a 4 - byte ( 32 bit ) data item. 4 * 8 = 32 ; it might actually take up 36 bits on the system, but for your purposes it's only 32 bits.
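a tiny python illustration of the same point : sizes reported at the software level count only the data bits, no matter what parity or ecc bits the hardware may add underneath :

```python
import struct

# a packed 32-bit unsigned integer ("doubleword") occupies 4 data bytes = 32 bits,
# regardless of any extra parity/ECC bits the hardware may store alongside it
size_bytes = struct.calcsize("<I")
print(size_bytes)        # 4
print(size_bytes * 8)    # 32
```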
https://api.stackexchange.com
i myself was always confused why $ \ ce { h3o ^ + } $ is so well - known and yet almost nobody talks of $ \ ce { h4o ^ 2 + } $. i mean, $ \ ce { h3o ^ + } $ still has a lone pair, right? why can't another proton just latch onto that? adding to the confusion, $ \ ce { h4o ^ 2 + } $ is very similar to $ \ ce { nh4 + } $, which again is extremely well - known. even further, the methanium cation $ \ ce { ch5 + } $ exists ( admittedly not something you'll find on a shelf ), and that doesn't even have an available lone pair! it is very useful to rephrase the question " why is $ \ ce { h4o ^ 2 + } $ so rare? " into " why won't $ \ ce { h3o ^ + } $ accept another proton? ". now we can think of this in terms of an acid - base reaction : $ $ \ ce { h3o ^ + + h + - > h4o ^ 2 + } $ $ yes, that's right. in this reaction $ \ ce { h3o ^ + } $ is the base, and $ \ ce { h ^ + } $ is the acid. because solvents can strongly influence the acidity of basicity of dissolved compounds, and because inclusion of solvent makes calculations tremendously more complicated, we will restrict ourselves to the gas phase ( hence $ \ ce { ( g ) } $ next to all the formulas ). this means we will be talking about proton affinities. before we get to business, though, let's start with something more familiar : $ $ \ ce { h2o ( g ) + h + ( g ) - > h3o ^ + ( g ) } $ $ because this is in the gas phase, we can visualise the process very simply. we start with a lone water molecule in a perfect vacuum. then, from a very large distance away, a lone proton begins its approach. we can calculate the potential energy of the whole system as a function of the distance between the oxygen atom and the distant proton. we get a graph that looks something like this : for convenience, we can set the potential energy of the system at 0 when the distance is infinite. at very large distances, the lone proton only very slightly
https://api.stackexchange.com
tugs the electrons of the $ \ ce { h2o } $ molecule, but they attract and the system is slightly stabilised. the attraction gets stronger as the lone proton approaches. however, there is also a repulsive interaction, between the lone proton and the nuclei of the other atoms in the $ \ ce { h2o } $ molecule. at large distances, the attraction is stronger than the repulsion, but this flips around if the distance is too short. the happy medium is where the extra proton is close enough to dive into the molecule's electron cloud, but not close enough to experience severe repulsions with the other nuclei. in short, a lone proton from infinity is attracted to a water molecule, and the potential energy decreases up to a critical value, the bond length. the amount of energy lost is the proton affinity : in this scenario, a mole of water molecules reacting with a mole of protons would release approximately $ \ mathrm { 697 \ kj \ mol ^ { - 1 } } $ ( values from this table ). this reaction is highly exothermic alright, now for the next step : $ $ \ ce { h3o ^ + ( g ) + h + ( g ) - > h4o ^ 2 + ( g ) } $ $ this should be similar, right? actually, no. there is a very important difference between this reaction and the previous one ; the reagents now both have a net positive charge. this means there is now a strong additional repulsive force between the two. in fact, the graph above changes completely. starting from zero potential at infinity, instead of a slow decrease in potential energy, the lone proton has to climb uphill, fighting a net electrostatic repulsion. however, even more interestingly, if the proton does manage to get close enough, the electron cloud can abruptly envelop the additional proton and create a net attraction. the resulting graph now looks more like this : very interestingly, the bottom of the " pocket " on the left of the graph ( the potential well ) can have a higher potential energy than if the lone proton was infinitely far away. this means the reaction is endothermic, but with enough effort, an extra proton can be pushed into the molecule, and it gets trapped in the pocket. indeed, according to olah et al., j. am. chem. soc. 1986, 108 ( 5 ), pp 1032 - 1035, the formation of $
https://api.stackexchange.com
\ ce { h4o ^ 2 + } $ in the gas phase was calculated to be endothermic by $ \ mathrm { 248 \ kj \ mol ^ { - 1 } } $ ( that is, the proton affinity of $ \ ce { h3o ^ + } $ is $ \ mathrm { - 248 \ kj \ mol ^ { - 1 } } $ ), but once formed, it has a barrier towards decomposition ( the activation energy towards release of a proton ) of $ \ mathrm { 184 \ kj \ mol ^ { - 1 } } $ ( the potential well has a maximum depth of $ \ mathrm { 184 \ kj \ mol ^ { - 1 } } $ ). due to the fact that $ \ ce { h4o ^ 2 + } $ was calculated to form a potential well, it can in principle exist. however, since it is the product of a highly endothermic reaction, unsurprisingly it is very hard to find. the reality in solution phase is more complicated, but its existence has been physically verified ( if indirectly ). but why stop here? what about $ \ ce { h5o ^ 3 + } $? $ $ \ ce { h4o ^ 2 + ( g ) + h + ( g ) - > h5o ^ 3 + ( g ) } $ $ i've run a rough calculation myself using computational chemistry software, and here it seems we really do reach a wall. it appears that $ \ ce { h5o ^ 3 + } $ is an unbound system, which is to say that its potential energy curve has no pocket like the ones above. $ \ ce { h5o ^ 3 + } $ could only ever be made transiently, and it would immediately spit out at least one proton. the reason here really is the massive amount of electrical repulsion, combined with the fact that the electron cloud can't reach out to the distance necessary to accommodate another atom. you can make your own potential energy graphs here. note how depending on the combination of parameters, the potential well can lie at negative potential energies ( an exothermic reaction ) or positive potential energies ( an endothermic reaction ). alternatively, the pocket may not exist at all - these are the unbound systems. edit : i've done some calculations of proton affinities / stabilities on several other simple molecules, for comparison. i do not
https://api.stackexchange.com
claim the results to be quantitatively correct. $ $ \ begin { array } { lllll } \ text { species } & \ ce { ch4 } & \ ce { ch5 + } & \ ce { ch6 ^ 2 + } & \ ce { ch7 ^ 3 + } & \ ce { ch8 ^ 4 + } \ \ \ text { stable in gas phase? } & \ text { yes } & \ text { yes } & \ text { yes } & \ text { yes } & \ text { no } \ \ \ text { approximate proton affinity } \ ( \ mathrm { kj \ mol ^ { - 1 } } ) & 556 & - 246 & - 1020 & n / a & n / a \ \ \ end { array } $ $ notes : even without a lone pair, methane ( $ \ ce { ch4 } $ ) protonates very exothermically in the gas phase. this is a testament to the enormous reactivity of a bare proton, and the huge difference it makes to not have push a proton into an already positively - charged ion. for most of the seemingly hypercoordinate species in these tables ( more than four bonds ), the excess hydrogen atoms " pair up " such that it can be viewed as a $ \ ce { h2 } $ molecule binding sideways to the central atom. see the methanium link at the start. $ $ \ begin { array } { lllll } \ text { species } & \ ce { nh3 } & \ ce { nh4 + } & \ ce { nh5 ^ 2 + } & \ ce { nh6 ^ 3 + } \ \ \ text { stable in gas phase? } & \ text { yes } & \ text { yes } & \ text { yes } & \ text { no } \ \ \ text { approximate proton affinity } \ ( \ mathrm { kj \ mol ^ { - 1 } } ) & 896 & - 410 & n / a & n / a \ \ \ end { array } $ $ notes : even though the first protonation is easier relative to $ \ ce { ch4 } $, the second one is harder. this is likely because increasing the electronegativity of the central atom makes the electron cloud " stiffer ", and less accommodating to all those extra protons. the $ \ ce { nh5 ^ { 2 + } } $ ion, unlike other
https://api.stackexchange.com
ions listed here with more than four hydrogens, appears to be a true hypercoordinate species. del bene et al. indicate a five - coordinate square pyramidal structure with delocalized nitrogen - hydrogen bonds. $ $ \ begin { array } { lllll } \ text { species } & \ ce { h2o } & \ ce { h3o + } & \ ce { h4o ^ 2 + } & \ ce { h5o ^ 3 + } \ \ \ text { stable in gas phase? } & \ text { yes } & \ text { yes } & \ text { yes } & \ text { no } \ \ \ text { approximate proton affinity } \ ( \ mathrm { kj \ mol ^ { - 1 } } ) & 722 & - 236 & n / a & n / a \ \ \ end { array } $ $ notes : the first series which does not accommodate proton hypercoordination. $ \ ce { h3o + } $ is easier to protonate than $ \ ce { nh4 + } $, even though oxygen is more electronegative. this is because the $ \ ce { h4o ^ 2 + } $ nicely accommodates all protons, while one of the protons in $ \ ce { nh5 ^ 2 + } $ has to fight for its space. $ $ \ begin { array } { lllll } \ text { species } & \ ce { hf } & \ ce { h2f + } & \ ce { h3f ^ 2 + } & \ ce { h4f ^ 3 + } \ \ \ text { stable in gas phase? } & \ text { yes } & \ text { yes } & \ text { yes } & \ text { no } \ \ \ text { approximate proton affinity } \ ( \ mathrm { kj \ mol ^ { - 1 } } ) & 501 & - 459 & n / a & n / a \ \ \ end { array } $ $ notes : even though $ \ ce { h3f ^ 2 + } $ still formally has a lone pair, its electron cloud is now so stiff that it cannot reach out to another proton even at normal bonding distance. $ $ \ begin { array } { lllll } \ text { species } & \ ce { ne } & \ ce { neh + } & \ ce { neh2 ^ 2 + } \ \
https://api.stackexchange.com
\ text { stable in gas phase? } & \ text { yes } & \ text { yes } & \ text { no } \ \ \ text { approximate proton affinity } \ ( \ mathrm { kj \ mol ^ { - 1 } } ) & 204 & n / a & n / a \ \ \ end { array } $ $ notes : $ \ ce { ne } $ is a notoriously unreactive noble gas, but it too will react exothermically with a bare proton in the gas phase. depending on the definition of electronegativity used, it is possible to determine an electronegativity for $ \ ce { ne } $, which turns out to be even higher than $ \ ce { f } $. accordingly, its electron cloud is even stiffer. $ $ \ begin { array } { lllll } \ text { species } & \ ce { h2s } & \ ce { h3s + } & \ ce { h4s ^ 2 + } & \ ce { h5s ^ 3 + } & \ ce { h6s ^ 4 + } \ \ \ text { stable in gas phase? } & \ text { yes } & \ text { yes } & \ text { yes } & \ text { yes } & \ text { no } \ \ \ text { approximate proton affinity } \ ( \ mathrm { kj \ mol ^ { - 1 } } ) & 752 & - 121 & - 1080 & n / a & n / a \ \ \ end { array } $ $ notes : the lower electronegativity and larger size of $ \ ce { s } $ means its electrons can reach out further and accommodate protons at a larger distance, while reducing repulsions between the nuclei. thus, in the gas phase, $ \ ce { h2s } $ is a stronger base than $ \ ce { h2o } $. the situation is inverted in aqueous solution due to uniquely strong intermolecular interactions ( hydrogen bonding ) which are much more important for $ \ ce { h2o } $. $ \ ce { h3s + } $ also has an endothermic proton affinity, but it is lower than for $ \ ce { h3o + } $, and therefore $ \ ce { h4s ^ 2 + } $ is easier to make. accordingly, $ \ ce { h4s ^ 2 + } $
https://api.stackexchange.com
has been detected in milder ( though still superacidic! ) conditions than $ \ ce { h4o ^ 2 + } $. the larger size and lower electronegativity of $ \ ce { s } $ once again are shown to be important ; the hypercoordinate $ \ ce { h5s ^ 3 + } $ appears to exist, while the oxygen analogue doesn't.
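to make the picture of a potential well sitting above zero concrete, here is a deliberately crude toy model in python. the functional form ( a gaussian pocket plus, for the charged case, a coulomb term ) and all of its parameters are invented purely for illustration — this is not a quantum - chemical calculation — but it reproduces the qualitative shapes described above :

```python
import numpy as np

def toy_potential(d, charge_product, well_depth=700.0, d0=1.0, width=0.3):
    """Toy V(d): Coulomb repulsion between the net charges plus a Gaussian 'pocket'.
    Distances in angstrom-like units, energies in kJ/mol-like units (all invented)."""
    coulomb = 1389.0 * charge_product / d      # ~ e^2/(4*pi*eps0) in kJ/mol * angstrom
    pocket = -well_depth * np.exp(-((d - d0) / width) ** 2)
    return coulomb + pocket

d = np.linspace(0.6, 10.0, 2000)
v_neutral = toy_potential(d, charge_product=0)   # H2O + H+  : no net repulsion
v_charged = toy_potential(d, charge_product=1)   # H3O+ + H+ : both species positive

near = d < 1.5                                   # region of the "pocket"
print(f"pocket bottom, neutral case: {v_neutral[near].min():8.1f}  (below 0: exothermic)")
print(f"pocket bottom, charged case: {v_charged[near].min():+8.1f}  (above 0: endothermic)")
print(f"barrier top,   charged case: {v_charged[(d > 1.2) & (d < 3.0)].max():+8.1f}")
# the charged case still has a metastable pocket, but it sits above the separated limit (0)
```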
https://api.stackexchange.com
bcftools has sample / individual filtering as an option for most of the commands. you can subset individuals by using the -s or -S option : -s, --samples [^]LIST : comma - separated list of samples to include, or exclude if prefixed with " ^ ". note that in general tags such as info / ac, info / an, etc are not updated to correspond to the subset samples. bcftools view is the exception where some tags will be updated ( unless the -I, --no-update option is used ; see the bcftools view documentation ). to use updated tags for the subset in another command, one can pipe the output of view into that command. -S, --samples-file FILE : file of sample names to include, or exclude if prefixed with " ^ ", one sample per line. see also the note above for the -s, --samples option. the command bcftools call accepts an optional second column indicating ploidy ( 0, 1 or 2 ) or sex ( as defined by --ploidy, for example " F " or " M " ), and can also parse ped files. if the second column is not present, the sex " F " is assumed. with bcftools call -C trio, a ped file is expected. file format examples : sample1 1 sample2 2 sample3 2 or sample1 M sample2 F sample3 F or a .ped file ( shown here is a minimal working example ; the first column is ignored and the last indicates sex : 1 = male, 2 = female ) : ignored daughtera fathera mothera 2 ignored sonb fatherb motherb 1 example usage : bcftools view -s sample1,sample2 file.vcf > filtered.vcf bcftools view -S sample_file.txt file.vcf > filtered.vcf see the bcftools manpage for more information.
https://api.stackexchange.com
one major problem with using uracil as a base is that cytosine can be deaminated, which converts it into uracil. this is not a rare reaction ; it happens around 100 times per cell, per day. this is no major problem when using thymine, as the cell can easily recognize that the uracil doesn't belong there and can repair it by substituting it by a cytosine again. there is an enzyme, uracil dna glycosylase, that does exactly that ; it excises uracil bases from double - stranded dna. it can safely do that as uracil is not supposed to be present in the dna and has to be the result of a base modification. now, if we would use uracil in dna it would not be so easy to decide how to repair that error. it would prevent the usage of this important repair pathway. the inability to repair such damage doesn't matter for rna as the mrna is comparatively short - lived and any potential errors don't lead to any lasting damage. it matters a lot for dna as the errors are continued through every replication. now, this explains why there is an advantage to using thymine in dna, it doesn't explain why rna uses uracil. i'd guess it just evolved that way and there was no significant drawback that could be selected against, but there might be a better reason ( more difficult biosynthesis of thymine, maybe? ). you'll find a bit more information on that in " molecular biology of the cell " from bruce alberts et al. in the chapter about dna repair ( from page 267 on in the 4th edition ).
https://api.stackexchange.com
mathematical explanation when examining the linear combination of atomic orbitals ( lcao ) for the $ \ ce { h2 + } $ molecular ion, we get two different energy levels, $ e _ + $ and $ e _ - $ depending on the coefficients of the atomic orbitals. the energies of the two different mo's are : $ $ \ begin { align } e _ + & = e _ \ text { 1s } + \ frac { j _ 0 } { r } - \ frac { j'+ k'} { 1 + s } \ \ e _ - & = e _ \ text { 1s } + \ frac { j _ 0 } { r } - \ frac { j'- k'} { 1 - s } \ end { align } $ $ note that $ j _ 0 = \ frac { e ^ 2 } { 4 \ pi \ varepsilon _ 0 } $, $ r $ is the internuclear distance, $ s = \ int \ chi _ \ text { a } ^ * \ chi _ \ text { b } \, \ text { d } v $ the overlap integral, $ j'$ is a coulombic contribution to the energy and $ k'$ is a contribution to the resonance integral, and it does not have a classical analogue. $ j'$ and $ k'$ are both positive and $ j'\ gt k'$. you'll note that $ j'- k'> 0 $. this is why the energy levels of $ e _ + $ and $ e _ - $ are not symmetrical with respect to the energy level of $ e _ \ text { 1s } $. intuitive explanation the intuitive explanation goes along the following line : imagine two hydrogen nuclei that slowly get closer to each other, and at some point start mixing their orbitals. now, one very important interaction is the coulomb force between those two nuclei, which gets larger the closer the nuclei come together. as a consequence of this, the energies of the molecular orbitals get shifted upwards, which is what creates the asymmetric image that we have for these energy levels. basically, you have two positively charged nuclei getting closer to each other. now you have two options : stick some electrons between them. don't stick some electrons between them. if you follow through with option 1, you'll diminish the coulomb forces between the two nuclei somewhat in favor of electron -
https://api.stackexchange.com
nucleus attraction. if you go with option 2 ( remember that the $ \ sigma ^ * _ \ text { 1s } $ mo has a node between the two nuclei ), the nuclei feel each other's repulsive forces more strongly. further information i highly recommend the following book, from which most of the information above stems : peter atkins and ronald friedman, molecular quantum mechanics ; $ 5 ^ \ text { th } $ ed., oxford university press : oxford, united kingdom, 2011 ( isbn - 13 : 978 - 0199541423 ).
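for a rough numerical feel of that asymmetry, the following python snippet just plugs numbers into the two energy expressions above. the values chosen for $ j _ 0 / r $, $ s $, $ j'$ and $ k'$ are made - up placeholders ( only the conditions $ j'> k'> 0 $ and $ 0 < s < 1 $ matter here ), not results of a real calculation :

```python
# Illustrative, assumed numbers (roughly hartree-sized); not from an actual computation.
J0_over_R = 0.50      # nuclear repulsion term J0/R (assumed)
S  = 0.46             # overlap integral (assumed)
Jp = 0.49             # J' (assumed), with J' > K' > 0
Kp = 0.33             # K' (assumed)

E_plus_shift  = J0_over_R - (Jp + Kp) / (1 + S)   # E+ - E_1s  (bonding MO)
E_minus_shift = J0_over_R - (Jp - Kp) / (1 - S)   # E- - E_1s  (antibonding MO)

print(f"bonding MO     lies {E_plus_shift:+.3f} relative to E_1s")    # ~ -0.06
print(f"antibonding MO lies {E_minus_shift:+.3f} relative to E_1s")   # ~ +0.20
# with these numbers the antibonding level is raised more than the bonding level is lowered
```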
https://api.stackexchange.com
oscilloscopes usually require significant power and are physically big. having a chassis that size, which would include exposed ground on the bnc connectors and the probe ground clips, floating would be dangerous. if you have to look at waveforms in wall - powered equipment, it is generally much better to put the isolation transformer on that equipment instead of on the scope. once the scope is connected, it provides a ground reference to that part of the circuit so other parts could then be at high ground - referenced voltages, which could be dangerous. however, you'll likely be more careful not to touch parts of the unit under test than the scope. scopes can also have other paths to ground that are easy to forget. for example, the scope on my bench usually has a permanent rs - 232 connection to my computer. it would be easy to float the scope but forget about such things. the scope would actually not be floating. at best a fuse would pop when it is first connected to a wall powered unit under test in the wrong place. manufacturers could isolate the scope easily enough, but that probably opens them to liability problems. in general, bench equipment is not isolated but hand - held equipment is. if you really need to make isolated measurements often, you can get battery operated handheld scopes.
https://api.stackexchange.com
for computing three - dimensional delaunay triangulations ( tetrahedralizations, really ), tetgen is a commonly used library. for your convenience, here's a little benchmark on how long it takes to compute the tetrahedralization of a number of random points from the unit cube. for 100, 000 points it takes 4. 5 seconds on an old pentium m. ( this was done with mathematica's tetgen interface. i don't know how much overhead it introduces. ) regarding your other question : if you already have the voronoi tessellation, then getting the delaunay triangulation is a relatively simple transformation.
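if you would like to repeat such a benchmark without tetgen, scipy's qhull - based delaunay class also handles three dimensions ; a minimal sketch ( timings will of course differ from the figures above ) :

```python
import time
import numpy as np
from scipy.spatial import Delaunay

# 100,000 random points in the unit cube
points = np.random.default_rng(0).random((100_000, 3))

t0 = time.perf_counter()
tri = Delaunay(points)                 # 3-D input -> tetrahedra
elapsed = time.perf_counter() - t0
print(f"{len(tri.simplices)} tetrahedra computed in {elapsed:.2f} s")
```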
https://api.stackexchange.com
because " pixel " isn't a unit of measurement : it's an object. so, just like a wall that's 30 bricks wide by 10 bricks tall contains 300 bricks ( not bricks - squared ), an image that's 30 pixels wide by 10 pixels tall contains 300 pixels ( not pixels - squared ).
https://api.stackexchange.com
great question! note that from a prescriptive standpoint, the terms pipeline and workflow don't have any strict or precise definitions. but it's still useful to take a descriptive standpoint and discuss how the terms are commonly used in the bioinformatics community. but before talking about pipelines and workflows, it's helpful to talk about programs and scripts. a program or script typically implements a single data analysis task ( or set of related tasks ). some examples include the following. fastqc, a program that checks ngs reads for common quality issues trimmomatic, a program for cleaning ngs reads salmon, a program for estimating transcript abundance from ngs reads a custom r script that uses deseq2 to perform differential expression analysis a pipeline or a workflow refers to a particular kind of program or script that is intended primarily to combine other independent programs or scripts. for example, i might want to write an rna - seq workflow that executes trimmomatic, fastqc, salmon, and the r script using a single command. this is particularly useful if i have to run the same command many times, or if the commands take a long time to run. it's very inconvenient when you have to babysit your computer and wait for step 3 to finish so that you can launch step 4! so when does a program become a pipeline? honestly, there are no strict rules. in some cases it's clear : the 10 - line python script i wrote to split fasta files is definitely not a pipeline, but the 200 - line python script i wrote that does nothing but invoke 6 other bioinformatics programs definitely is a pipeline. there are a lot of tools that fall in the middle : they may require running multiple steps in a certain order, or implement their own processing but also delegate processing to other tools. usually nobody worries too much about whether it's " correct " to call a particular tool a pipeline. finally, a workflow engine is the software used to actually execute your pipeline / workflow. as mentioned above, general - purpose scripting languages like bash, python, or perl can be used to implement workflows. but there are other languages that are designed specifically for managing workflows. perhaps the earliest and most popular of these is gnu make, which was originally intended to help engineers coordinate software compilation but can be used for just about any workflow. more recently there has been a proliferation of tools intended
https://api.stackexchange.com
to replace gnu make for numerous languages in a variety of contexts. the most popular in bioinformatics seems to be snakemake, which provides a nice balance of simplicity ( through shell commands ), flexibility ( through configuration ), and power - user support ( through python scripting ). build scripts written for these tools ( i. e., a makefile or snakefile ) are often called pipelines or workflows, and the workflow engine is the software that executes the workflow. the workflow engines you listed above ( such as argo ) can certainly be used to coordinate bioinformatics workflows. honestly though, these are aimed more at the broader tech industry : they involve not just workflow execution but also hardware and infrastructure coordination, and would require a level of engineering expertise / support not commonly available in a bioinformatics setting. this could change, however, as bioinformatics becomes more of a " big data " endeavor. as a final note, i'll mention a few more relevant technologies that i wasn't able to fit above. docker : managing a consistent software environment across multiple ( potentially dozens or hundreds ) of computers ; singularity is docker's less popular step - sister common workflow language ( cwl ) : a generic language for declaring how each step of a workflow is executed, what inputs it needs, what outputs it creates, and approximately what resources ( ram, storage, cpu threads, etc. ) are required to run it ; designed to write workflows that can be run on a variety of workflow engines dockstore : a registry of bioinformatics workflows ( heavy emphasis on genomics ) that includes a docker container and a cwl specification for each workflow toil : a production - grade workflow engine used primarily for bioinformatics workflows
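to make the plain - scripting approach above concrete, here is a deliberately minimal sketch of such a pipeline in python. the tool invocations, options and file names are placeholders for illustration rather than tested commands :

```python
import subprocess

def run(cmd):
    """Run one pipeline step, echoing the command and stopping on failure."""
    print(f"[pipeline] {' '.join(cmd)}")
    subprocess.run(cmd, check=True)   # raise if any step exits non-zero

def rnaseq_pipeline(sample):
    # hypothetical step commands -- adjust to your actual tools and options
    run(["fastqc", f"{sample}.fastq.gz"])
    run(["trimmomatic", "SE", f"{sample}.fastq.gz",
         f"{sample}.trimmed.fastq.gz", "MINLEN:36"])
    run(["salmon", "quant", "-i", "txome_index", "-l", "A",
         "-r", f"{sample}.trimmed.fastq.gz", "-o", f"quant_{sample}"])
    run(["Rscript", "deseq2_analysis.R", f"quant_{sample}"])

if __name__ == "__main__":
    rnaseq_pipeline("sample01")
```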
https://api.stackexchange.com
let's try this wittgenstein's ladder style. first let's consider this : simulate this circuit – schematic created using circuitlab we can calculate the current through r1 with ohm's law : $ $ { 1 \ : \ mathrm v \ over 100 \ : \ omega } = 10 \ : \ mathrm { ma } $ $ we also know that the voltage across r1 is 1v. if we use ground as our reference, then how does 1v at the top of the resistor become 0v at the bottom of the resistor? if we could stick a probe somewhere in the middle of r1, we should measure a voltage somewhere between 1v and 0v, right? a resistor with a probe we can move around on it... sounds like a potentiometer, right? simulate this circuit by adjusting the knob on the potentiometer, we can measure any voltage between 0v and 1v. now what if instead of a pot, we use two discrete resistors? simulate this circuit this is essentially the same thing, except we can't move the wiper on the potentiometer : it's stuck at a position 3 / 4ths of the way up. if we get 1v at the top, and 0v at the bottom, then 3 / 4ths of the way up we should expect to see 3 / 4ths of the voltage, or 0. 75v. what we have made is a resistive voltage divider. its behavior is formally described by the equation : $ $ v _ \ text { out } = { r _ 2 \ over r _ 1 + r _ 2 } \ cdot v _ \ text { in } $ $ now, what if we had a resistor with a resistance that changed with frequency? we could do some neat stuff. that's what capacitors are. at a low frequency ( the lowest frequency being dc ), a capacitor looks like a large resistor ( infinite at dc ). at higher frequencies, the capacitor looks like a smaller resistor. at infinite frequency, a capacitor has no resistance at all : it looks like a wire. so : simulate this circuit for high frequencies ( top right ), the capacitor looks like a small resistor. r3 is very much smaller than r2, so we will measure a very small voltage here. we could say that the input has been attenuated a lot. for low
https://api.stackexchange.com
frequencies ( lower right ), the capacitor looks like a large resistor. r5 is very much bigger than r4, so here we will measure a very large voltage, almost all of the input voltage, that is, the input voltage has been attenuated very little. so high frequencies are attenuated, and low frequencies are not. sounds like a low - pass filter. and if we exchange the places of the capacitor and the resistor, the effect is reversed, and we have a high - pass filter. however, capacitors aren't really resistors. what they are though, are impedances. the impedance of a capacitor is : $ $ z _ \ text { capacitor } = - j { 1 \ over 2 \ pi f c } $ $ where : \ $ c \ $ is the capacitance, in farads \ $ f \ $ is the frequency, in hertz \ $ j \ $ is the imaginary unit, \ $ \ sqrt { - 1 } \ $ notice that, because \ $ f \ $ is in the denominator, the impedance decreases as frequency increases. impedances are complex numbers, because they contain \ $ j \ $. if you know how arithmetic operations work on complex numbers, then you can still use the voltage divider equation, except we will use \ $ z \ $ instead of \ $ r \ $ to suggest we are using impedances instead of simple resistances : $ $ v _ \ text { out } = v _ { in } { z _ 2 \ over z _ 1 + z _ 2 } $ $ and from this, you can calculate the behavior of any rc circuit, and a good deal more.
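here is the impedance form of the divider equation in action for the rc low - pass above, written as a small python loop ; the component values are arbitrary examples ( r = 1 kohm and c = 100 nf put the corner near 1. 6 khz ) :

```python
import math

R = 1_000.0     # ohms (arbitrary example value)
C = 100e-9      # farads -> corner frequency 1/(2*pi*R*C) ~ 1.6 kHz

for f in [10, 100, 1_000, 10_000, 100_000]:     # hertz
    Zc = -1j / (2 * math.pi * f * C)            # capacitor impedance Z2
    H = Zc / (R + Zc)                           # divider ratio Vout/Vin (complex)
    print(f"{f:>7} Hz: |Vout/Vin| = {abs(H):.3f}")
# low frequencies pass almost unchanged, high frequencies are strongly attenuated
```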
https://api.stackexchange.com
you can use the python builtin ctypes module as described on fortran90. org. it is pretty straightforward and doesn't require any external dependencies. also, the ndpointer arg type helper is very handy.
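for concreteness, here is roughly what that pattern looks like. the shared library name and the exported routine below are hypothetical — adapt them to whatever your fortran build actually produces with a c - compatible ( bind ( c ) ) interface — but the ctypes / ndpointer usage is the part being illustrated :

```python
import ctypes
import numpy as np
from numpy.ctypeslib import ndpointer

# hypothetical shared library exposing a routine with the C prototype:
#     void vec_scale(double *x, int n, double a);
lib = ctypes.CDLL("./libexample.so")

lib.vec_scale.restype = None
lib.vec_scale.argtypes = [
    ndpointer(dtype=np.float64, flags="C_CONTIGUOUS"),  # the ndpointer helper
    ctypes.c_int,
    ctypes.c_double,
]

x = np.arange(5, dtype=np.float64)
lib.vec_scale(x, x.size, 2.0)   # the routine is assumed to scale x in place
print(x)
```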
https://api.stackexchange.com
yes, c + + is still useful in embedded systems. as everyone else has said, it still depends on the system itself, like an 8 - bit uc would probably be a no - no in my book even though there is a compiler out there and some people do it ( shudder ). there's still an advantage to using c + + even when you scale it down to something like " c + " even in an 8 - bit micro world. what do i mean by " c + "? i mean don't use new / delete, avoid exceptions, avoid virtual classes with inheritance, possibly avoid inheritance altogether, be very careful with templates, use inline functions instead of macros, and use const variables instead of # defines. i've been working both in c and c + + in embedded systems for well over a decade now, and some of my youthful enthusiasm for c + + has definitely worn off due to some real world problems that shake one's naivete. i have seen the worst of c + + in an embedded system, which i would like to refer to as " cs programmers gone wild in an ee world. " in fact, that is something i'm working on with my client to improve this one codebase they have among others. the danger of c + + is that it's a very, very powerful tool much like a two - edged sword that can cut both your arm and leg off if you are not educated and disciplined properly in its language and in general programming itself. c is more like a single - edged sword, but still just as sharp. with c + + it's too easy to get very high levels of abstraction and create obfuscated interfaces that become meaningless in the long - term, and that's partly due to c + +'s flexibility in solving the same problem with many different language features ( templates, oop, procedural, rtti, oop + templates, overloading, inlining ). i finished two 4 - hour seminars on embedded software in c + + by the c + + guru, scott meyers. he pointed out some things about templates that i never considered before and how much more they can help in creating safety - critical code. the gist of it is, you can't have dead code in software that has to meet stringent safety - critical code requirements. templates can help you accomplish this, since the compiler only creates the code it needs when instantiating templates. however, one must become more thoroughly
https://api.stackexchange.com
educated in their use to design correctly for this feature, which is harder to accomplish in c because linkers don't always optimize away dead code. he also demonstrated a feature of templates that could only be accomplished in c + + and would have kept the mars climate orbiter from crashing had nasa implemented a similar system to protect units of measurement in the calculations. scott meyers is a very big proponent of templates and judicious use of inlining, and i must say i'm still skeptical about being gung ho about templates. i tend to shy away from them, even though he says they should only be applied where they become the best tool. he also makes the point that c + + gives you the tools to make really good interfaces that are easy to use right and hard to use wrong. again, that's the hard part. one must come to a level of mastery in c + + before you can know how to apply these features in the most efficient way to be the best design solution. the same goes for oop. in the embedded world, you must familiarize yourself with what kind of code the compiler is going to spit out to know if you can handle the run - time costs of run - time polymorphism. you need to be willing to make measurements as well to prove your design is going to meet your deadline requirements. is that new interruptmanager class going to make my interrupt latency too long? there are other forms of polymorphism that may fit your problem better, such as link - time polymorphism, which c can do as well, but c + + can do through the pimpl design pattern ( opaque pointer ). i say all that to say that c + + has its place in the embedded world. you can hate it all you want, but it's not going away. it can be written in a very efficient manner, but it's harder to learn how to do it correctly than with c. it can sometimes work better than c at solving a problem and sometimes express a better interface, but again, you've got to educate yourself and not be afraid to learn how.
https://api.stackexchange.com
the assumption that the layers are all cylindrical is a good first approximation. the assumption that the layers form a logarithmic spiral is not a good assumption at all, because it supposes that the thickness of the paper at any point is proportional to its distance from the center. this seems to me to be quite absurd. an alternative assumption is that the layers form an archimedean spiral. this is slightly more realistic, since it says the paper has a uniform thickness from beginning to end. but this assumption is not a much more realistic than the assumption that all layers are cylindrical ; in fact, in some ways it is less realistic. here's how a sheet of thickness $ h $ actually wraps around a cylinder. first, we glue one side of the sheet ( near the end of the sheet ) to the surface of the cylinder. then we start rotating the cylinder. as the cylinder rotates, it pulls the outstretched sheet around itself. near the end of the first full rotation of the cylinder, the wrapping looks like this : notice that the sheet lies directly on the surface of the cylinder, that is, this part of the wrapped sheet is cylindrical. at some angle of rotation, the glued end of the sheet hits the part of the sheet that is being wrapped. the point where the sheet is tangent to the cylinder at that time is the last point of contact with the cylinder ; the sheet goes straight from that point to the point of contact with the glued end, and then proceeds to wrap in a cylindrical shape around the first layer of the wrapped sheet, like this : as we continue rotating the cylinder, it takes up more and more layers of the sheet, each layer consisting of a cylindrical section going most of the way around the roll, followed by a flat section that joins this layer to the next layer. we end up with something like this : notice that i cut the sheet just at the point where it was about to enter another straight section. i claim ( without proof ) that this produces a local maximum in the ratio of the length of the wrapped sheet of paper to the greatest thickness of paper around the inner cylinder. the next local maximum ( i claim ) will occur at the corresponding point of the next wrap of the sheet. the question now is what the thickness of each layer is. the inner surface of the cylindrical portion of each layer of the wrapped sheet has less area than the outer surface, but the portion of the original ( unwrapped ) sheet that was wound onto the roll to make this layer had equal area on
https://api.stackexchange.com
both sides. so either the inner surface was somehow compressed, or the outer surface was stretched, or both. i think the most realistic assumption is that both compression and stretching occurred. in reality, i would guess that the inner surface is compressed more than the outer surface is stretched, but i do not know what the most likely ratio of compression to stretching would be. it is simpler to assume that the two effects are equal. the length of the sheet used to make any part of one layer of the roll is therefore equal to the length of the surface midway between the inner and outer surfaces of that layer. for example, to wrap the first layer halfway around the central cylinder of radius $ r $, we use a length $ \ pi \ left ( r + \ frac h2 \ right ) $ of the sheet of paper. the reason this particularly simplifies our calculations is that the length of paper used in any part of the roll is simply the area of the cross - section of that part of the roll divided by the thickness of the paper. the entire roll has inner radius $ r $ and outer radius $ r = r + nh $, where $ n $ is the maximum number of layers at any point around the central cylinder. ( in the figure, $ n = 5 $. ) the blue lines are sides of a right triangle whose vertices are the center of the inner cylinder and the points where the first layer last touches the inner cylinder and first touches its own end. this triangle has hypotenuse $ r + h $ and one leg is $ r $, so the other leg ( which is the length of the straight portion of the sheet ) is $ $ \ sqrt { ( r + h ) ^ 2 - r ^ 2 } = \ sqrt { ( 2r + h ) h }. $ $ each straight portion of each layer is connected to the next layer of paper by wrapping around either the point of contact with the glued end of the sheet ( the first time ) or around the shape made by wrapping the previous layer around this part of the layer below ; this forms a segment of a cylinder between the red lines with center at the point of contact with the glued end. the angle between the red lines is the same as the angle of the blue triangle at the center of the cylinder, namely $ $ \ alpha = \ arccos \ frac { r } { r + h }. $ $ now let's add up all parts of the roll. we have an almost - complete hollow
https://api.stackexchange.com
cylinder with inner radius $ r $ and outer radius $ r $, missing only a segment of angle $ \ alpha $. the cross - sectional area of this is $ $ a _ 1 = \ left ( \ pi - \ frac { \ alpha } { 2 } \ right ) ( r ^ 2 - r ^ 2 ). $ $ we have a rectangular prism whose cross - sectional area is the product of two of its sides, $ $ a _ 2 = ( r - r - h ) \ sqrt { ( 2r + h ) h }. $ $ finally, we have a segment of a cylinder of radius $ r - r - h $ ( between the red lines ) whose cross - sectional area is $ $ a _ 3 = \ frac { \ alpha } { 2 } ( r - r - h ) ^ 2. $ $ adding this up and dividing by $ h $, the total length of the sheet comes to \ begin { align } l & = \ frac1h ( a _ 1 + a _ 2 + a _ 3 ) \ \ & = \ frac1h \ left ( \ pi - \ frac { \ alpha } { 2 } \ right ) ( r ^ 2 - r ^ 2 ) + \ frac1h ( r - r - h ) \ sqrt { ( 2r + h ) h } + \ frac { \ alpha } { 2h } ( r - r - h ) ^ 2. \ end { align } for $ n $ layers on a roll, using the formula $ r = r + nh $, we have $ r - r = nh $, $ r + r = 2r + nh $, $ r ^ 2 - r ^ 2 = ( r + r ) ( r - r ) = ( 2r + nh ) nh $, and $ r - r - h = ( n - 1 ) h $. the length then is \ begin { align } l & = \ left ( \ pi - \ frac { \ alpha } { 2 } \ right ) ( 2r + nh ) n + ( n - 1 ) \ sqrt { ( 2r + h ) h } + \ frac { \ alpha h } { 2 } ( n - 1 ) ^ 2 \ \ & = 2n \ pi r + n ^ 2 \ pi h + ( n - 1 ) \ sqrt { ( 2r + h ) h } - \ left ( n ( r + h
https://api.stackexchange.com
) - \ frac h2 \ right ) \ arccos \ frac { r } { r + h } \ \ & = n ( r + r ) \ pi + ( n - 1 ) \ sqrt { ( 2r + h ) h } - \ left ( n ( r + h ) - \ frac h2 \ right ) \ arccos \ frac { r } { r + h }. \ end { align } one notable difference between this estimate and some others ( including the original ) is that i assume there can be at most $ ( r - r ) / h $ layers of paper over any part of the central cylinder, not $ 1 + ( r - r ) / h $ layers. the total length is the number of layers times $ 2 \ pi $ times the average radius, $ ( r + r ) / 2 $, adjusted by the amount that is missing in the section of the roll that is only $ n - 1 $ sheets thick. things are not too much worse if we assume a different but uniform ratio of inner - compression to outer - stretching, provided that we keep the same paper thickness regardless of curvature ; we just have to make an adjustment to the inner and outer radii of any cylindrical segment of the roll, which i think i'll leave as " an exercise for the reader. " but this involves a change in volume of the sheet of paper. if we also keep the volume constant, we find that the sheet gets thicker or thinner depending on the ratio of stretch to compression and the curvature of the sheet. with constant volume, the length of paper in the main part of the roll ( everywhere we get the full number of layers ) is the same as in the estimate above, but the total length of the parts of the sheet that connect one layer to the next might change slightly. update : per request, here are the results of applying the formula above to the input values given as an example in the question : $ h = 0. 1 $, $ r = 75 $, and $ r = 25 $ ( inferred from $ r - r = b = 50 $ ), all measured in millimeters. since $ n = ( r - r ) / h $, we have $ n = 500 $. for a first approximation of the total length of paper, let's consider just the first term of the formula. this gives us $ $ l _ 1 = n ( r + r ) \ pi = 500 \ cdot 100 \ pi \ approx 157079. 63267949, $ $
https://api.stackexchange.com
or about $ 157 $ meters, the same as in the example in the question. the remaining two terms yield \ begin { align } l - l _ 1 & = ( n - 1 ) \ sqrt { ( 2r + h ) h } - \ left ( n ( r + h ) - \ frac h2 \ right ) \ arccos \ frac { r } { r + h } \ \ & = 499 \ sqrt { 50. 1 \ cdot 0. 1 } - ( 500 ( 25. 1 ) - 0. 05 ) \ arccos \ frac { 25 } { 25. 1 } \ \ & \ approx - 3. 72246774. \ end { align } this is a very small correction, less than $ 2. 4 \ times 10 ^ { - 5 } l _ 1 $. in reality ( as opposed to my idealized model of constant - thickness constant - volume toilet paper ), this " correction " is surely insignificant compared to the uncertainties of estimating the average thickness of the paper in each layer of a roll ( not to mention any non - uniformity in how it is rolled by the manufacturing machinery ). we can also compare $ \ lvert l - l _ 1 \ rvert $ to the amount of paper that would be missing if the paper in the " flat " segment of the roll were instead $ n - 1 $ layers following the curve of the rest of the paper. the angle $ \ alpha $ is about $ 0. 089294 $ radians ( about $ 5. 1162 $ degrees ), so if the missing layer were the innermost layer, its length would be $ 25. 05 \ alpha \ approx 2. 24 $, and if it were the outermost layer it would be $ 74. 95 \ alpha \ approx 6. 69 $ ( in millimeters ). just for amusement, i also tried expanding $ l - l _ 1 $ as a power series around $ h = 0 $ ( with a little help from wolfram alpha ). ( to make $ l - l _ 1 $ a function of one variable $ h $ with constants $ r $ and $ r $, make the substitution $ n = ( r - r ) / h $. ) this turns out to be a series of powers of $ \ sqrt h $ whose leading term is
https://api.stackexchange.com
$ $ - \ frac { ( r + 2r ) \ sqrt2 } { 3 \ sqrt r } \ sqrt h. $ $ plugging in the values from the example, this evaluates to approximately $ - 3. 7267799625 $. if you really wanted the length of the idealized toilet roll to the nearest millimeter, but could tolerate an error of a few $ \ mu \ mathrm m $ ( for typical dimensions of a toilet roll ), a suitable approximation would be $ $ l \ approx \ frac { \ pi ( r ^ 2 - r ^ 2 ) } { h } - \ frac { ( r + 2r ) \ sqrt2 } { 3 \ sqrt r } \ sqrt h. $ $
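for convenience, here is the final formula evaluated directly in python with the example numbers ( all lengths in millimeters ) ; separate variable names are used for the inner and outer radii to keep the two apart :

```python
import math

h = 0.1             # paper thickness
r_inner = 25.0      # radius of the central cylinder
r_outer = 75.0      # outer radius of the full roll
N = round((r_outer - r_inner) / h)          # number of layers: 500

alpha = math.acos(r_inner / (r_inner + h))  # angle of the "missing" segment

L1 = N * (r_inner + r_outer) * math.pi      # main term
correction = (N - 1) * math.sqrt((2 * r_inner + h) * h) \
             - (N * (r_inner + h) - h / 2) * alpha
L = L1 + correction

# the simpler approximation quoted at the end
approx = math.pi * (r_outer**2 - r_inner**2) / h \
         - (r_outer + 2 * r_inner) * math.sqrt(2) / (3 * math.sqrt(r_inner)) * math.sqrt(h)

print(f"main term  L1 = {L1:.2f} mm")         # ~157079.63
print(f"correction    = {correction:.5f} mm") # ~-3.72247
print(f"total      L  = {L:.2f} mm (~{L/1000:.1f} m)")
print(f"approximation = {approx:.2f} mm")
```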
https://api.stackexchange.com
Worst-case hardness of NP-complete problems is not sufficient for cryptography. Even if NP-complete problems are hard in the worst case ($P \ne NP$), they could still be efficiently solvable in the average case. Cryptography assumes the existence of average-case intractable problems in NP. Also, proving the existence of hard-on-average problems in NP using the $P \ne NP$ assumption is a major open problem. An excellent read is the classic by Russell Impagliazzo, A Personal View of Average-Case Complexity, 1995. An excellent survey is Average-Case Complexity by Bogdanov and Trevisan, Foundations and Trends in Theoretical Computer Science, Vol. 2, No. 1 (2006), 1–106.
https://api.stackexchange.com
Let's start with what they have in common: all three formats store sequence data and sequence metadata. Furthermore, all three formats are text-based. However, beyond that the three formats are different and serve different purposes. Let's start with the simplest format: FASTA. FASTA stores a variable number of sequence records, and for each record it stores the sequence itself and a sequence ID. Each record starts with a header line whose first character is >, followed by the sequence ID. The next lines of a record contain the actual sequence. The Wikipedia article gives several examples for peptide sequences, but since FASTQ and SAM are used exclusively (?) for nucleotide sequences, here's a nucleotide example: >Mus_musculus_tRNA-Ala-AGC-1-1 (chr13.trna34-AlaAGC) GGGGGTGTAGCTCAGTGGTAGAGCGCGTGCTTAGCATGCACGAGGCCCTGGGTTCGATCC CCAGCACCTCCA >Mus_musculus_tRNA-Ala-AGC-10-1 (chr13.trna457-AlaAGC) GGGGGATTAGCTCAAATGGTAGAGCGCTCGCTTAGCATGCAAGAGGTAGTGGGATCGATG CCCACATCCTCCA The ID can be in any arbitrary format, although several conventions exist. In the context of nucleotide sequences, FASTA is mostly used to store reference data; that is, data extracted from a curated database; the above is adapted from GtRNAdb (a database of tRNA sequences). FASTQ: FASTQ was conceived to solve a specific problem arising during sequencing: due to how different sequencing technologies work, the confidence in each base call (that is, the estimated probability of having correctly identified a given nucleotide) varies. This is expressed in the Phred quality score. FASTA had no standardised way of encoding this. By contrast, a FASTQ record contains a sequence of quality scores for each nucleotide. A FASTQ record has the following format: (1) a line starting with @, containing the sequence ID; (2) one or more lines that contain the sequence; (3) a new line starting with the character +, and being either empty or repeating the sequence ID; (4) one or more lines that contain the quality scores. Here's an example of a FASTQ file with two records: @071112_SL
https://api.stackexchange.com
XA-EAS1_s_7:5:1:817:345 GGGTGATGGCCGCTGCCGATGGCGTCAAATCCCACC + IIIIIIIIIIIIIIIIIIIIIIIIIIIIII9IG9IC @071112_SLXA-EAS1_s_7:5:1:801:338 GTTCAGGGATACGACGTTTGTATTTTAAGAATCTGA + IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII6IBI FASTQ files are mostly used to store short-read data from high-throughput sequencing experiments. The sequence and quality scores are usually put into a single line each, and indeed many tools assume that each record in a FASTQ file is exactly four lines long, even though this isn't guaranteed. As with FASTA, the format of the sequence ID isn't standardised, but different producers of FASTQ use fixed notations that follow strict conventions. SAM: SAM files are so complex that a complete description [PDF] takes 15 pages. So here's the short version. The original purpose of SAM files is to store mapping information for sequences from high-throughput sequencing. As a consequence, a SAM record needs to store more than just the sequence and its quality; it also needs to store information about where and how a sequence maps into the reference. Unlike the previous formats, SAM is tab-based, and each record, consisting of either 11 or 12 fields, fills exactly one line. Here's an example (tabs replaced by fixed-width spacing): r001 99 chr1 7 30 17M = 37 39 TTAGATAAAGGATACTG IIIIIIIIIIIIIIIII r002 0 chrX 9 30 3S6M1P1I4M * 0 0 AAAAGATAAGGATA IIIIIIIIII6IBI NM:i:1 For a description of the individual fields, refer to the documentation. The relevant bit is this: SAM can express exactly the same information as FASTQ, plus, as mentioned, the mapping information. However, SAM is also used to store read data without mapping information. In addition to sequence records, SAM files can also contain a header, which stores information about the reference that the sequences were mapped to, and the tool used to create the SAM file. Header information precedes the sequence records and consists of lines starting with @. SAM itself is almost never used as a storage format;
https://api.stackexchange.com
instead, files are stored in BAM format, which is a compact, gzipped, binary representation of SAM. It stores the same information, just more efficiently, and, in conjunction with a search index, it allows fast retrieval of individual records from the middle of the file (= fast random access). BAM files are also much more compact than compressed FASTQ or FASTA files. The above implies a hierarchy in what the formats can store: FASTA ⊂ FASTQ ⊂ SAM. In a typical high-throughput analysis workflow, you will encounter all three file types: FASTA to store the reference genome/transcriptome that the sequence fragments will be mapped to; FASTQ to store the sequence fragments before mapping; SAM/BAM to store the sequence fragments after mapping.
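As a small illustration of how lightweight the FASTQ layout is, here is a parser sketch (my own, not part of the answer above) that assumes the common four-lines-per-record convention; real files that wrap sequences over multiple lines need a more careful reader, e.g. Biopython's SeqIO. The file name is hypothetical.

from typing import Iterator, Tuple

def read_fastq(path: str) -> Iterator[Tuple[str, str, str]]:
    # Yield (read_id, sequence, quality) triples, assuming 4 lines per record.
    with open(path) as handle:
        while True:
            header = handle.readline().rstrip()
            if not header:                 # end of file
                return
            seq = handle.readline().rstrip()
            handle.readline()              # the '+' separator line
            qual = handle.readline().rstrip()
            yield header[1:], seq, qual    # strip the leading '@'

# Example: print the Phred+33 quality of the first base of each read
for read_id, seq, qual in read_fastq("reads.fastq"):
    print(read_id, seq[:10], ord(qual[0]) - 33)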
https://api.stackexchange.com
as far as i'm aware, the most accurate methods for static calculations are full configuration interaction with a fully relativistic four - component dirac hamiltonian and a " complete enough " basis set. i'm not an expert in this particular area, but from what i know of the method, solving it using a variational method ( rather than a monte - carlo based method ) scales shockingly badly, since i think the number of slater determinants you have to include in your matrix scales something like $ o ( ^ { n _ { orbs } } c _ { n _ e } ) $. ( there's an article on the computational cost here. ) the related monte - carlo methods and methods based off them using " walkers " and networks of determinants can give results more quickly, but as implied above, aren't variational. and are still hideously costly. approximations currently in practical use just for energies for more than two atoms include : born oppenheimer, as you say : this is almost never a problem unless your system involves hydrogen atoms tunneling, or unless you're very near a state crossing / avoided crossing. ( see, for example, conical intersections. ) conceptually, there are non - adiabatic methods for the wavefunction / density, including cpmd, and there's also path - integral md which can account for nuclear tunneling effects. nonrelativistic calculations, and two - component approximations to the dirac equation : you can get an exact two - component formulation of the dirac equation, but more practically the zeroth - order regular approximation ( see lenthe et al, jchemphys, 1993 ) or the douglas - kroll - hess hamiltonian ( see reiher, computmolsci, 2012 ) are commonly used, and often ( probably usually ) neglecting spin - orbit coupling. basis sets and lcao : basis sets aren't perfect, but you can always make them more complete. dft functionals, which tend to attempt to provide a good enough attempt at the exchange and correlation without the computational cost of the more advanced methods below. ( and which come in a few different levels of approximation. lda is the entry - level one, gga, metagga and including exact exchange go further than that, and including the rpa is still a pretty expensive and new - ish technique as far as i'm aware. there are also functionals which use differing techniques as a function of
https://api.stackexchange.com
separation, and some which use vorticity which i think have application in magnetic or aromaticity studies. ) ( b3lyp, the functional some people love and some people love to hate, is a gga including a percentage of exact exchange. ) configuration interaction truncations : cis, cisd, cisdt, cisd ( t ), casscf, rasscf, etc. these are all approximations to ci which assume the most important excited determinants are the least excited ones. multi - reference configuration interaction ( truncations ) : ditto, but with a few different starting reference states. coupled - cluster method : i don't pretend to properly understand how this works, but it obtains similar results to configuration interaction truncations with the benefit of size - consistency ( i. e. $ e ( h _ 2 ) \ times 2 = e ( ( h _ 2 ) _ 2 $ ( at large separation ) ). for dynamics, many of the approximations refer to things like the limited size of a tractable system, and practical timestep choice - - it's pretty standard stuff in the numerical time simulation field. there's also temperature maintenance ( see nose - hoover or langevin thermostats ). this is mostly a set of statistical mechanics problems, though, as i understand it. anyway, if you're physics - minded, you can get a pretty good feel for what's neglected by looking at the formulations and papers about these methods : most commonly used methods will have at least one or two papers that aren't the original specification explaining their formulation and what it includes. or you can just talk to people who use them. ( people who study periodic systems with dft are always muttering about what different functionals do and don't include and account for. ) very few of the methods have specific surprising omissions or failure modes. the most difficult problem appears to be proper treatment of electron correlation, and anything above the hartree - fock method, which doesn't account for it at all, is an attempt to include it. as i understand it, getting to the accuracy of full relativistic ci with complete basis sets is never going to be cheap without dramatically reinventing ( or throwing away ) the algorithms we currently use. ( and for people saying that dft is the solution to everything, i'm waiting for your pure density orbital - free formulations. ) there's also the issue that the
https://api.stackexchange.com
more accurate you make your simulation by including more contributions and more complex formulations, the harder it is to actually do anything with. for example, spin orbit coupling is sometimes avoided solely because it makes everything more complicated to analyse ( but sometimes also because it has negligable effect ), and the canonical hartree - fock or kohn - sham orbitals can be pretty useful for understanding qualitative features of a system without layering on the additional output of more advanced methods. ( i hope some of this makes sense, it's probably a bit spotty. and i've probably missed someone's favourite approximation or niggle. )
https://api.stackexchange.com
There are a few industry approaches to this. The first is molded cables. The cables themselves have strain reliefs molded to fit a given entry point, either by custom molding or with off-the-shelf reliefs that are chemically welded/bonded to the cable. Not just glued, but welded together. The second is entry points designed to hold the cable. The cable is bent in a Z or U shape around posts to hold it in place. The strength of the cable is used to prevent it from being pulled out. Similar, but less often seen now in the days of cheap molding or DIY kits, is this: the cable is screwed into a holder which is prevented from moving in or out by the case and screw posts. Both of those options are a bit out of an individual's reach. The third is through the use of cord grips or cable glands, also known as grommets, especially if a watertight fit is needed. They are screwed on, the cable is passed through, then the grip part is tightened. These prevent the cable from moving in or out, as well as sealing the hole. Most can accommodate cables at least 80% of the size of the opening. Any smaller and they basically won't do the job. Other options include cable fasteners or holders. These go around the cable and are screwed or bolted down (or use plastic press fits). These can be screwed into a PCB, for example. Cable grommets are a fairly hacky way of doing it, as they are not designed to hold onto the cable. Instead they are designed to prevent the cable from being cut or damaged on a sharp or thin edge. But they can do in a pinch. As can tying a knot, though that mainly prevents pull-outs and might not be ideal for digital signals. Pushing a cable in doesn't happen too often, so you might not worry about that. Similar to the second method is using two or three holes in a PCB to push a cable through (up, down, up), then pulling it tight. This moves the point of pressure away from the solder point and onto the cable and jacket. The other industry method is avoiding all this in the first place, by using panel-mounted connectors (or board-mounted connectors like Dell does for power plugs, yuck).
https://api.stackexchange.com
i would suggest use ralee — rnalignment editor in emacs. it can get for you the consensus secondary structure, you can move left / right sequences and their secondary structures ( you can't do it in jalview! ), and more. it's an emacs mode, so could be a bit hard to start off, but just try, you don't have to use all emacs features to edit your alignments! the ralee ( rna alignment editor in emacs ) tool provides a simple environment for rna multiple sequence alignment editing, including structure - specific colour schemes, utilizing helper applications for structure prediction and many more conventional editing functions. sam griffiths - jones bioinformatics ( 2005 ) 21 ( 2 ) : 257 - 259. fig. you can move left / right sequences and their secondary structures ( you can't do it in jalview! )
https://api.stackexchange.com
People are skeptical because: no proof has come from an expert without having been rescinded shortly thereafter; so much effort has been put into finding a proof, with no success, that it's assumed one will either be substantially complicated or invent new mathematics for the proof; the "proofs" that arise frequently fail to address hurdles which are known to exist. For example, many claim that 3-SAT is not in P, while providing an argument that also applies to 2-SAT. To be clear, the skepticism is of the proofs, not of the result itself.
https://api.stackexchange.com
I recommend you check Scott Aaronson's blog post on a proof of the incompleteness theorem via Turing machines and Rosser's theorem. His proof of the incompleteness theorem is extremely simple and easy to follow.
https://api.stackexchange.com
i realize this question has been answered, but i don't think the extant answer really engages the question beyond pointing to a link generally related to the question's subject matter. in particular, the link describes one technique for programmatic network configuration, but that is not a " [ a ] standard and accepted method " for network configuration. by following a small set of clear rules, one can programmatically set a competent network architecture ( i. e., the number and type of neuronal layers and the number of neurons comprising each layer ). following this schema will give you a competent architecture but probably not an optimal one. but once this network is initialized, you can iteratively tune the configuration during training using a number of ancillary algorithms ; one family of these works by pruning nodes based on ( small ) values of the weight vector after a certain number of training epochs - - in other words, eliminating unnecessary / redundant nodes ( more on this below ). so every nn has three types of layers : input, hidden, and output. creating the nn architecture, therefore, means coming up with values for the number of layers of each type and the number of nodes in each of these layers. the input layer simple - - every nn has exactly one of them - - no exceptions that i'm aware of. with respect to the number of neurons comprising this layer, this parameter is completely and uniquely determined once you know the shape of your training data. specifically, the number of neurons comprising that layer is equal to the number of features ( columns ) in your data. some nn configurations add one additional node for a bias term. the output layer like the input layer, every nn has exactly one output layer. determining its size ( number of neurons ) is simple ; it is completely determined by the chosen model configuration. is your nn going to run in machine mode or regression mode ( the ml convention of using a term that is also used in statistics but assigning a different meaning to it is very confusing )? machine mode : returns a class label ( e. g., " premium account " / " basic account " ). regression mode returns a value ( e. g., price ). if the nn is a regressor, then the output layer has a single node. if the nn is a classifier, then it also has a single node unless softmax is used in which case the output layer has one node per class label in your model.
https://api.stackexchange.com
the hidden layers so those few rules set the number of layers and size ( neurons / layer ) for both the input and output layers. that leaves the hidden layers. how many hidden layers? well, if your data is linearly separable ( which you often know by the time you begin coding a nn ), then you don't need any hidden layers at all. of course, you don't need an nn to resolve your data either, but it will still do the job. beyond that, as you probably know, there's a mountain of commentary on the question of hidden layer configuration in nns ( see the insanely thorough and insightful nn faq for an excellent summary of that commentary ). one issue within this subject on which there is a consensus is the performance difference from adding additional hidden layers : the situations in which performance improves with a second ( or third, etc. ) hidden layer are very few. one hidden layer is sufficient for the large majority of problems. so what about the size of the hidden layer ( s ) - - how many neurons? there are some empirically derived rules of thumb ; of these, the most commonly relied on is'the optimal size of the hidden layer is usually between the size of the input and size of the output layers '. jeff heaton, the author of introduction to neural networks in java, offers a few more. in sum, for most problems, one could probably get decent performance ( even without a second optimization step ) by setting the hidden layer configuration using just two rules : ( i ) the number of hidden layers equals one ; and ( ii ) the number of neurons in that layer is the mean of the neurons in the input and output layers. optimization of the network configuration pruning describes a set of techniques to trim network size ( by nodes, not layers ) to improve computational performance and sometimes resolution performance. the gist of these techniques is removing nodes from the network during training by identifying those nodes which, if removed from the network, would not noticeably affect network performance ( i. e., resolution of the data ). ( even without using a formal pruning technique, you can get a rough idea of which nodes are not important by looking at your weight matrix after training ; look at weights very close to zero - - it's the nodes on either end of those weights that are often removed during pruning. ) obviously, if you use a pruning algorithm during training, then begin with a network configuration that is more likely
https://api.stackexchange.com
to have excess ( i. e.,'prunable') nodes - - in other words, when deciding on network architecture, err on the side of more neurons, if you add a pruning step. put another way, by applying a pruning algorithm to your network during training, you can approach optimal network configuration ; whether you can do that in a single " up - front " ( such as a genetic - algorithm - based algorithm ), i don't know, though i do know that for now, this two - step optimization is more common.
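The rules of thumb above are easy to turn into a starting configuration. The sketch below is mine, not the answer author's; it simply encodes the two rules (one hidden layer, sized as the mean of the input and output layers) and is meant only as a starting point before any pruning or tuning.

def initial_layer_sizes(n_features: int, n_classes: int, task: str = "classification"):
    # Return [input, hidden, output] layer sizes from the rules of thumb above.
    n_in = n_features                                 # one input neuron per feature (column)
    if task == "regression":
        n_out = 1                                     # regression: a single output node
    else:
        n_out = 1 if n_classes == 2 else n_classes    # softmax: one node per class label
    n_hidden = round((n_in + n_out) / 2)              # mean of input and output layer sizes
    return [n_in, n_hidden, n_out]

print(initial_layer_sizes(n_features=30, n_classes=2))   # e.g. [30, 16, 1]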
https://api.stackexchange.com
i think i can attempt to clear this up. usb - 100ma usb by default will deliver 100ma of current ( it is 500mw power because we know it is 5v, right? ) to a device. this is the most you can pull from a usb hub that does not have its own power supply, as they never offer more than 4 ports and keep a greedy 100ma for themselves. some computers that are cheaply built will use an bus - powered hub ( all of your usb connections share the same 500ma source and the electronics acting as a hub use that source also ) internally to increase the number of usb ports and to save a small amount of money. this can be frustrating, but you can always be guaranteed 100ma. usb - 500ma when a device is connected it goes through enumeration. this is not a trivial process and can be seen in detail on jan axelson's site. as you can see this is a long process, but a chip from a company like ftdi will handle the hard part for you. they discuss enumeration in one of their app notes. near the end of enumeration you setup device parameters. very specifically the configuration descriptors. if you look on this website they will show you all of the different pieces that can be set. it shows that you can get right up to 500ma of power requested. this is what you can expect from a computer. you can get ftdi chips to handle this for you, which is nice, as you only have to treat the chip as a serial line. usb - 1. 8a this is where things get interesting. you can purchase a charger that does outlet to usb at the store. this is a usb charging port. your computer does not supply these, and your device must be able to recognize it. first, to get the best information about usb, you sometimes have to bite the bullet and go to the people whom write the spec. i found great information about the usb charging spec here. the link on the page that is useful is the link for battery charging. this link seems to be tied to revision number, so i have linked both in case the revision is updated people can still access the information. now, what does this mean. if you open up the batt _ charging pdf and jump to chapter three they go into charging ports. specifically 3. 2. 1 explains how this is gone about. now they keep it very technical, but the key point is simple. a usb charging port
https://api.stackexchange.com
places a termination resistance between D+ and D−. I would like to copy out the chapter that explains it, but it is a secured PDF and I cannot copy it out without retyping it. Summing it up: you may pull 100 mA from a computer port. You may pull 500 mA after enumeration and setting the correct configuration. Computers vary their enforcement, as many others have said, but most I have had experience with will try to stop you. If you violate this, you may also damage a poorly designed computer (davr is spot on there; this is poor practice). You may pull up to 1.8 A from a charging port, but this is a rare case where the port tells you something. You have to check for this, and when it is verified you may do it. This is the same as buying a wall adapter, but you get to use a USB cable and USB port. Why use the charging spec? So that when my phone dies, my charger charges it quickly, but if I do not have my charger I may pull power from a computer, while using the same hardware port to communicate files and information with my computer. Please let me know if there is anything I can add.
https://api.stackexchange.com
Blood is not a good source of water. 1 liter of blood contains about 800 ml of water, 170 grams of protein and 2 grams of sodium (calculated from the composition of lamb blood). When metabolized, 170 grams of protein yields the amount of urea that requires 1,360 ml of water to be excreted in urine (calculated from here); 2 grams of sodium requires about 140 ml of water to be excreted (from here). This means that drinking 1 liter of blood, which contains 800 ml of water, will result in 1,500 ml of water loss through the kidneys, which will leave you with 700 ml of negative water balance. Fish blood can contain less protein; for example, trout (check Table 1) contains about 120 g of protein (plasma protein + hemoglobin) per liter of blood. Using the same calculation as above (1 g protein results in the excretion of 8 ml of urine), drinking 1 liter of trout blood, which contains about 880 ml of water, will result in 960 ml of urine, so in 80 ml of negative water balance. Turtle blood can contain about 80 g of protein (plasma protein + hemoglobin) and 3.4 g of sodium per liter. Drinking 1 liter of turtle blood, which contains about 920 ml of water, will result in 80 × 8 ml = 640 ml loss of urine due to protein, and ~240 ml due to sodium, which is 880 ml of urine in total. This leaves you with 40 ml of positive water balance. (To get 2 liters of water per day you would need to drink 50 liters of turtle blood, which isn't realistic.) In various stories (The Atlantic, The Diplomat, The Telegraph), according to which people have survived by drinking turtle blood, they have also drunk rainwater, so we can't conclude it was turtle blood that helped them. I'm not aware of any story that would provide convincing evidence that the blood of turtle or any other animal is hydrating.
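The arithmetic above reduces to a one-line estimate: net water ≈ water content − 8 ml per gram of protein − 70 ml per gram of sodium. Here is a quick sketch (mine, using only the figures quoted above) that reproduces the three cases.

def water_balance_ml(water_ml, protein_g, sodium_g=0.0):
    # Net water gain (+) or loss (-) per litre of blood, assuming ~8 ml of urine
    # per gram of protein and ~70 ml per gram of sodium, as in the estimates above.
    urine_ml = 8 * protein_g + 70 * sodium_g
    return water_ml - urine_ml

print(water_balance_ml(800, 170, 2.0))   # lamb blood:   about -700 ml
print(water_balance_ml(880, 120))        # trout blood:  about  -80 ml
print(water_balance_ml(920, 80, 3.4))    # turtle blood: about  +40 ml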
https://api.stackexchange.com
What you've implemented is a single-pole lowpass filter, sometimes called a leaky integrator. Your signal has the difference equation: $$y[n] = 0.8\,y[n-1] + 0.2\,x[n]$$ where $x[n]$ is the input (the unsmoothed bin value) and $y[n]$ is the smoothed bin value. This is a common way of implementing a simple, low-complexity lowpass filter. I've written about them several times before in previous answers; see [1] [2] [3].
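For reference, a direct implementation of that difference equation might look like the sketch below (mine, not from the answer); the smoothing factor 0.2 corresponds to the 0.8 feedback coefficient above, and a zero initial state is assumed.

def leaky_integrator(x, alpha=0.2):
    # Single-pole lowpass: y[n] = (1 - alpha) * y[n-1] + alpha * x[n]
    y = []
    prev = 0.0                  # assumed initial state
    for sample in x:
        prev = (1 - alpha) * prev + alpha * sample
        y.append(prev)
    return y

print(leaky_integrator([1.0, 1.0, 1.0, 1.0]))   # step response approaches 1: ~0.2, 0.36, 0.488, 0.59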
https://api.stackexchange.com
the impulse response and frequency response are two attributes that are useful for characterizing linear time - invariant ( lti ) systems. they provide two different ways of calculating what an lti system's output will be for a given input signal. a continuous - time lti system is usually illustrated like this : in general, the system $ h $ maps its input signal $ x ( t ) $ to a corresponding output signal $ y ( t ) $. there are many types of lti systems that can have apply very different transformations to the signals that pass through them. but, they all share two key characteristics : the system is linear, so it obeys the principle of superposition. stated simply, if you linearly combine two signals and input them to the system, the output is the same linear combination of what the outputs would have been had the signals been passed through individually. that is, if $ x _ 1 ( t ) $ maps to an output of $ y _ 1 ( t ) $ and $ x _ 2 ( t ) $ maps to an output of $ y _ 2 ( t ) $, then for all values of $ a _ 1 $ and $ a _ 2 $, $ $ h \ { a _ 1 x _ 1 ( t ) + a _ 2 x _ 2 ( t ) \ } = a _ 1 y _ 1 ( t ) + a _ 2 y _ 2 ( t ) $ $ the system is time - invariant, so its characteristics do not change with time. if you add a delay to the input signal, then you simply add the same delay to the output. for an input signal $ x ( t ) $ that maps to an output signal $ y ( t ) $, then for all values of $ \ tau $, $ $ h \ { x ( t - \ tau ) \ } = y ( t - \ tau ) $ $ discrete - time lti systems have the same properties ; the notation is different because of the discrete - versus - continuous difference, but they are a lot alike. these characteristics allow the operation of the system to be straightforwardly characterized using its impulse and frequency responses. they provide two perspectives on the system that can be used in different contexts. impulse response : the impulse that is referred to in the term impulse response is generally a short - duration time - domain signal. for continuous - time systems, this is the dirac delta function $ \ delta ( t ) $, while for discrete - time systems, the kronecker delta function $ \
https://api.stackexchange.com
delta [ n ] $ is typically used. a system's impulse response ( often annotated as $ h ( t ) $ for continuous - time systems or $ h [ n ] $ for discrete - time systems ) is defined as the output signal that results when an impulse is applied to the system input. why is this useful? it allows us to predict what the system's output will look like in the time domain. remember the linearity and time - invariance properties mentioned above? if we can decompose the system's input signal into a sum of a bunch of components, then the output is equal to the sum of the system outputs for each of those components. what if we could decompose our input signal into a sum of scaled and time - shifted impulses? then, the output would be equal to the sum of copies of the impulse response, scaled and time - shifted in the same way. for discrete - time systems, this is possible, because you can write any signal $ x [ n ] $ as a sum of scaled and time - shifted kronecker delta functions : $ $ x [ n ] = \ sum _ { k = 0 } ^ { \ infty } x [ k ] \ delta [ n - k ] $ $ each term in the sum is an impulse scaled by the value of $ x [ n ] $ at that time instant. what would we get if we passed $ x [ n ] $ through an lti system to yield $ y [ n ] $? simple : each scaled and time - delayed impulse that we put in yields a scaled and time - delayed copy of the impulse response at the output. that is : $ $ y [ n ] = \ sum _ { k = 0 } ^ { \ infty } x [ k ] h [ n - k ] $ $ where $ h [ n ] $ is the system's impulse response. the above equation is the convolution theorem for discrete - time lti systems. that is, for any signal $ x [ n ] $ that is input to an lti system, the system's output $ y [ n ] $ is equal to the discrete convolution of the input signal and the system's impulse response. for continuous - time systems, the above straightforward decomposition isn't possible in a strict mathematical sense ( the dirac delta has zero width and infinite height ), but at an engineering level, it's an approximate, intuitive way of looking
https://api.stackexchange.com
at the problem. a similar convolution theorem holds for these systems : $ $ y ( t ) = \ int _ { - \ infty } ^ { \ infty } x ( \ tau ) h ( t - \ tau ) d \ tau $ $ where, again, $ h ( t ) $ is the system's impulse response. there are a number of ways of deriving this relationship ( i think you could make a similar argument as above by claiming that dirac delta functions at all time shifts make up an orthogonal basis for the $ l ^ 2 $ hilbert space, noting that you can use the delta function's sifting property to project any function in $ l ^ 2 $ onto that basis, therefore allowing you to express system outputs in terms of the outputs associated with the basis ( i. e. time - shifted impulse responses ), but i'm not a licensed mathematician, so i'll leave that aside ). one method that relies only upon the aforementioned lti system properties is shown here. in summary : for both discrete - and continuous - time systems, the impulse response is useful because it allows us to calculate the output of these systems for any input signal ; the output is simply the input signal convolved with the impulse response function. frequency response : an lti system's frequency response provides a similar function : it allows you to calculate the effect that a system will have on an input signal, except those effects are illustrated in the frequency domain. recall the definition of the fourier transform : $ $ x ( f ) = \ int _ { - \ infty } ^ { \ infty } x ( t ) e ^ { - j 2 \ pi ft } dt $ $ more importantly for the sake of this illustration, look at its inverse : $ $ x ( t ) = \ int _ { - \ infty } ^ { \ infty } x ( f ) e ^ { j 2 \ pi ft } df $ $ in essence, this relation tells us that any time - domain signal $ x ( t ) $ can be broken up into a linear combination of many complex exponential functions at varying frequencies ( there is an analogous relationship for discrete - time signals called the discrete - time fourier transform ; i only treat the continuous - time case below for simplicity ). for a time - domain signal $ x ( t ) $, the fourier transform yields a corresponding function $ x ( f ) $ that specifies, for each frequency $ f $,
https://api.stackexchange.com
the scaling factor to apply to the complex exponential at frequency $ f $ in the aforementioned linear combination. these scaling factors are, in general, complex numbers. one way of looking at complex numbers is in amplitude / phase format, that is : $ $ x ( f ) = a ( f ) e ^ { j \ phi ( f ) } $ $ looking at it this way, then, $ x ( t ) $ can be written as a linear combination of many complex exponential functions, each scaled in amplitude by the function $ a ( f ) $ and shifted in phase by the function $ \ phi ( f ) $. this lines up well with the lti system properties that we discussed previously ; if we can decompose our input signal $ x ( t ) $ into a linear combination of a bunch of complex exponential functions, then we can write the output of the system as the same linear combination of the system response to those complex exponential functions. here's where it gets better : exponential functions are the eigenfunctions of linear time - invariant systems. the idea is, similar to eigenvectors in linear algebra, if you put an exponential function into an lti system, you get the same exponential function out, scaled by a ( generally complex ) value. this has the effect of changing the amplitude and phase of the exponential function that you put in. this is immensely useful when combined with the fourier - transform - based decomposition discussed above. as we said before, we can write any signal $ x ( t ) $ as a linear combination of many complex exponential functions at varying frequencies. if we pass $ x ( t ) $ into an lti system, then ( because those exponentials are eigenfunctions of the system ), the output contains complex exponentials at the same frequencies, only scaled in amplitude and shifted in phase. these effects on the exponentials'amplitudes and phases, as a function of frequency, is the system's frequency response. that is, for an input signal with fourier transform $ x ( f ) $ passed into system $ h $ to yield an output with a fourier transform $ y ( f ) $, $ $ y ( f ) = h ( f ) x ( f ) = a ( f ) e ^ { j \ phi ( f ) } x ( f ) $ $ in summary : so, if we know a system's frequency response $ h ( f ) $ and the fourier transform of the signal that we put into it $ x ( f )
https://api.stackexchange.com
$, then it is straightforward to calculate the fourier transform of the system's output ; it is merely the product of the frequency response and the input signal's transform. for each complex exponential frequency that is present in the spectrum $ x ( f ) $, the system has the effect of scaling that exponential in amplitude by $ a ( f ) $ and shifting the exponential in phase by $ \ phi ( f ) $ radians. bringing them together : an lti system's impulse response and frequency response are intimately related. the frequency response is simply the fourier transform of the system's impulse response ( to see why this relation holds, see the answers to this other question ). so, for a continuous - time system : $ $ h ( f ) = \ int _ { - \ infty } ^ { \ infty } h ( t ) e ^ { - j 2 \ pi ft } dt $ $ so, given either a system's impulse response or its frequency response, you can calculate the other. either one is sufficient to fully characterize the behavior of the system ; the impulse response is useful when operating in the time domain and the frequency response is useful when analyzing behavior in the frequency domain.
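A small numerical sketch (mine, not part of the answer) can make the "bringing them together" point concrete for a discrete-time system: filter a signal by convolving it with the impulse response, then check that the FFT of the impulse response is exactly the frequency response that multiplies the input spectrum. The example system is a simple 4-tap moving average chosen for illustration.

import numpy as np

h = np.ones(4) / 4                                   # impulse response of a 4-tap moving average

x = np.random.default_rng(0).standard_normal(256)    # some input signal
y = np.convolve(x, h)                                # time domain: output = x * h (convolution)

n = len(y)                                           # pad FFTs so circular == linear convolution
H = np.fft.rfft(h, n)                                # frequency response = FFT of impulse response
X = np.fft.rfft(x, n)
Y = np.fft.rfft(y)

print(np.allclose(Y, H * X))                         # True: convolution in time == multiplication in frequency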
https://api.stackexchange.com
Hint: there is a $\color{darkorange}{\text{unique}}$ denominator $\color{#0a0}{2^k}$ having maximal power of $2$, so scaling by $\color{#c00}{2^{k-1}}$ we deduce a contradiction $\frac{1}{2} = \frac{c}{d}$ with odd $d$ (vs. $d = 2c$), e.g. $$\begin{align} \color{#0a0}{m} &= 1 + \frac{1}{2} + \frac{1}{3} + \color{#0a0}{\frac{1}{4}} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7}\\ \rightarrow\ \ \color{#c00}{2}\,m &= 2 + 1 + \frac{2}{3} + \color{#0a0}{\frac{1}{2}} + \frac{2}{5} + \frac{1}{3} + \frac{2}{7}\\ \rightarrow\ \ -\color{#0a0}{\frac{1}{2}} &= 2 + 1 + \frac{2}{3} - \color{#c00}{2}\,m + \frac{2}{5} + \frac{1}{3} + \frac{2}{7} \end{align}$$ All the denominators
https://api.stackexchange.com
in the prior fractions are odd, so they sum to a fraction with odd denominator $d \mid 3\cdot 5\cdot 7$. Note: said $\color{darkorange}{\text{uniqueness}}$ has an easy proof: if $j\,2^k$ is in the interval $[1,n]$ then so too is $\color{#0a0}{2^k} \le j\,2^k$. But if $j \ge 2$ then the interval contains $2^{k+1} = 2\cdot 2^k \le j\,2^k$, contra maximality of $k$. The argument is more naturally expressed using valuation theory, but I purposely avoided that because Anton requested an "elementary" solution. The above proof can easily be made comprehensible to a high-school student. Generally we can similarly prove that a sum of fractions is nonintegral if the highest power of a prime $p$ in any denominator occurs in $\color{darkorange}{\text{exactly one}}$ denominator; e.g. see the remark here where I explain how it occurs in a trickier multiplicative form (from a contest problem). In valuation theory, this is a special case of a basic result on the valuation of a sum (sometimes called the "dominance lemma" or similar). Another common application occurs when the sum of fractions arises from the evaluation of a polynomial, e.g. see here and its comment.
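If you want to sanity-check the claim numerically, exact rational arithmetic makes it easy. The sketch below is mine, not part of the hint; it confirms that $H_n = 1 + \frac12 + \dots + \frac1n$ has an even reduced denominator (hence is never an integer) for the first few hundred values of $n$.

from fractions import Fraction

def harmonic(n):
    # Exact value of H_n = 1 + 1/2 + ... + 1/n as a reduced fraction
    return sum(Fraction(1, k) for k in range(1, n + 1))

# For n >= 2 the reduced denominator is even, so H_n cannot be an integer.
assert all(harmonic(n).denominator % 2 == 0 for n in range(2, 300))
print(harmonic(7))   # 363/140 -- the m used in the worked example above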
https://api.stackexchange.com
there is a couple of points to consider here, which i outline below. the goal here should be to find a workflow that is minimally intrusive on top of already using git. as of yet, there is no ideal workflow that covers all use cases, but what i outline below is the closest i could come to it. reproducibility is not just keeping all your data you have got your raw data that you start your project with. all other data in your project directory should never just " be there ", but have some record of where it comes from. data processing scripts are great for this, because they already document how you went from your raw to your analytical data, and then the files needed for your analyses. and those scripts can be versioned, with an appropriate single entry point of processing ( e. g. a makefile that describes how to run your scripts ). this way, the state of all your project files is defined by the raw data, and the version of your processing scripts ( and versions of external software, but that's a whole different kind of problem ). what data / code should and should not be versioned just as you would not version generated code files, you should not want to version 10k intermediary data files that you produced when performing your analyses. the data that should be versioned is your raw data ( at the start of your pipeline ), not automatically generated files. you might want to take snapshots of your project directory, but not keep every version of every file ever produced. this already cuts down your problem by a fair margin. approach 1 : actual versioning of data for your raw or analytical data, git lfs ( and alternatively git annex, that you already mention ) is designed to solve exactly this problem : add tracking information of files in your git tree, but do not store the content of those files in the repository ( because otherwise it would add the size of a non - diffable file with every change you make ). for your intermediate files, you do the same as you would do with intermediate code files : add them to your. gitignore and do not version them. this begs a couple of considerations : git lfs is a paid service from github ( the free tier is limited to 1 gb of storage / bandwidth per month, which is very little ), and it is more expensive than other comparable cloud storage solutions. you could consider paying for the storage at github
https://api.stackexchange.com
or running your own LFS server (there is a reference implementation, but I assume this would still be a substantial effort). git annex is free, but it replaces files by links and hence changes time stamps, which is a problem for e.g. GNU make based workflows (a major drawback for me). Also, fetching of files needs to be done manually or via a commit hook. Approach 2: versioning code only, syncing data. If your analytical data stays the same for most of your analyses, the actual need to version it (as opposed to backing it up and documenting data provenance, which is essential) may be limited. The key to getting this working is to put all data files in your .gitignore and ignore all your code files in rsync, with a script in your project root (extensions and directories are an example only): #!/bin/bash cd $(dirname $0) rsync -auvr \ --exclude "*.R" \ --include "*.RData" \ --exclude "dir with huge files that you don't need locally" \ yourhost:/your/project/path/* . The advantage here is that you don't need to remember the rsync command you are running. The script itself goes into version control. This is especially useful if you do your heavy processing on a computing cluster but want to make plots from your result files on your local machine. I argue that you generally don't need bidirectional sync.
https://api.stackexchange.com
The batch size defines the number of samples that will be propagated through the network. For instance, let's say you have 1050 training samples and you want to set up a batch_size equal to 100. The algorithm takes the first 100 samples (from 1st to 100th) from the training dataset and trains the network. Next, it takes the second 100 samples (from 101st to 200th) and trains the network again. We can keep doing this procedure until we have propagated all samples through the network. A problem might happen with the last set of samples. In our example, we've used 1050, which is not divisible by 100 without remainder. The simplest solution is just to take the final 50 samples and train the network. Advantages of using a batch size < number of all samples: It requires less memory. Since you train the network using fewer samples, the overall training procedure requires less memory. That's especially important if you are not able to fit the whole dataset in your machine's memory. Typically networks train faster with mini-batches. That's because we update the weights after each propagation. In our example we've propagated 11 batches (10 of them had 100 samples and 1 had 50 samples) and after each of them we've updated our network's parameters. If we used all samples during propagation we would make only 1 update of the network's parameters. Disadvantages of using a batch size < number of all samples: The smaller the batch, the less accurate the estimate of the gradient will be. In the figure below, you can see that the direction of the mini-batch gradient (green) fluctuates much more than the direction of the full-batch gradient (blue). Stochastic is just a mini-batch with batch_size equal to 1. In that case, the gradient changes its direction even more often than a mini-batch gradient.
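In code, "propagating batch_size samples at a time" is just slicing the training set. The sketch below is my own, framework-free illustration of the iteration pattern, including the final short batch of 50 in the 1050-sample example.

def iterate_minibatches(n_samples, batch_size):
    # Yield (start, end) index pairs covering the dataset; the last batch may be short.
    for start in range(0, n_samples, batch_size):
        yield start, min(start + batch_size, n_samples)

batches = list(iterate_minibatches(1050, 100))
print(len(batches))   # 11 batches -> 11 parameter updates per epoch
print(batches[-1])    # (1000, 1050): the final, smaller batch of 50 samples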
https://api.stackexchange.com
This one by Ramanujan gives me the goosebumps: $$\frac{2\sqrt{2}}{9801}\sum_{k=0}^\infty\frac{(4k)!\,(1103+26390k)}{(k!)^4\,396^{4k}} = \frac1{\pi}.$$ P.S. Just to make this more intriguing, define the fundamental unit $U_{29} = \frac{5+\sqrt{29}}{2}$ and fundamental solutions to Pell equations, $$\big(U_{29}\big)^3 = 70+13\sqrt{29}, \quad\text{thus}\;\; \color{blue}{70}^2 - 29\cdot\color{blue}{13}^2 = -1$$ $$\big(U_{29}\big)^6 = 9801+1820\sqrt{29}, \quad\text{thus}\;\; \color{blue}{9801}^2 - 29\cdot 1820^2 = 1$$ $$2^6\left(\big(U_{29}\big)^6+\big(U_{29}\big)^{-6}\right)^2 = \color{blue}{396^4}$$ Then we can see those integers all over the formula as, $$\frac{2\sqrt2}{\color{blue}{9801}}\sum_{k=0}^\infty\frac{(4k)!}{k!^4}\,\frac{29\cdot\color{blue}{70\cdot 13}\,k+1103}{\color{blue}{(396^4)}^k} = \frac{1}{\pi}$$ Nice, eh?
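The series converges astonishingly fast, roughly eight correct digits per term. A quick sketch (mine, not part of the answer) with exact integer factorials and high-precision decimals shows this:

from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 50   # work with 50 significant digits

def ramanujan_pi(terms):
    # 1/pi = (2*sqrt(2)/9801) * sum_k (4k)! (1103 + 26390k) / ((k!)^4 * 396^(4k))
    s = sum(Decimal(factorial(4 * k) * (1103 + 26390 * k)) /
            Decimal(factorial(k) ** 4 * 396 ** (4 * k))
            for k in range(terms))
    return 1 / (Decimal(2) * Decimal(2).sqrt() / 9801 * s)

print(ramanujan_pi(1))   # 3.14159273... already correct to about 7 significant digits
print(ramanujan_pi(3))   # correct to roughly 24 digits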
https://api.stackexchange.com

MNLP_M3_rag_documents

This is a sample set of documents for use in Retrieval-Augmented Generation (RAG) evaluation.
