https://ocw.mit.edu/courses/7-01sc-fundamentals-of-biology-fall-2011/7.01sc-fall-2011.zip
PROFESSOR: Hi. In this clip, we're going to go through problem two of the transcription and translation unit. We're also going to review all of the information you would need to fill out problem one, the table. If you haven't tried these problems yet on your own, please pause the video now and try them, and then come back and watch this explanation. OK. So here we have, as in problem two, the double stranded DNA. Now, this is a piece of the eukaryotic genome, and so this DNA is in the nucleus of the cell. Now what we're going to do is we're going to go through transcription and translation with this gene. So the first thing we need to do is we need to figure out where we want to start transcription and in what direction we want to go and what strand we want to use as our template. So in the problem, we're given our promoter region and we're given that our transcription starts right here at this arrow and moves in this direction. So we know where we're starting and what direction we're moving in. All transcription will proceed reading a template in the three prime to five prime direction and will polymerize the RNA made in the five prime to three prime direction. So what we need to do now is figure out what strand is going in the three prime to five prime direction along the direction that we're going to be transcribing. So that means that this strand, which is going from three prime over here down to five prime at the end, this is going to be our template strand. So our template strand is going to be at the bottom. OK. So when our RNA polymerase starts, it's going to be reading this template strand and adding nucleotides, nucleotide triphosphates, or NTPs, one by one together to make the RNA. Now it's going to add these together by adding a phosphodiester bond in between all of the monomers. So we're going to start our transcription. And basically, what I need to do is just use the complementary base pair to my template strand. So I'm going to write it out here. So I've done a lot of transcription here. Now where am I going to stop my transcription? I'm not going to stop just randomly in the middle of nowhere. What my RNA polymerase is going to do is it's going to stop when it gets to the end of this terminator sequence. So I'm going to keep transcribing until I get to the end of my terminator here. OK. So this is my RNA strand. And as I said before, we're going to polymerize this mRNA in the five prime to three prime direction. OK, we've got our mRNA. Since this is a eukaryotic cell and the mRNA right now is in the nucleus, what we need to do is transport it out of the nucleus and into the cytoplasm where it will be translated by a ribosome. Now what the ribosome is going to do is it's going to take this mRNA, which is a message that encodes the sequence of amino acids that we need to put together to make the protein, and we're going to take this message and we're going to find a place to start translation. So what the ribosome is going to do is it's going to start at the five prime end of the message and read towards the three prime end of the message. It's going to read until it gets to a start codon. The start codon is AUG. So this is where we're going to start, and AUG encodes methionine. We're then going to read in three-base codons down from the five prime end of our message to the three prime end of our message. And so we're going to keep going in this translation until we reach what's called a stop codon. Now there are three stop codons.
Stop codons are UAA, UGA, and UAG. So when my ribosome finds any of these three codons in frame, it's going to stop. So I get all the way down here before I hit a stop codon on this transcript. So what my ribosome is going to be doing as it's reading these codons is it's going to insert amino acid by amino acid, joining them together with a peptide bond. So we've got methionine bonded to histidine, tyrosine, leucine, and so on and so forth. OK, so this is the protein chain that's encoded by this gene in the DNA. And in proteins, we always synthesize from the N terminus to the C terminus. OK, so we've gone through both transcription and translation. Now what we want to do is we want to take into account what would happen if we had a mutation in our sequence. So in this case, in this problem, we're going to insert a base pair right here. So we're going to insert an extra T here and an extra A here. Now, if this is happening in the DNA, when our DNA is transcribed to make our RNA, what's going to happen to the RNA? Well, we're going to also add the corresponding base into our RNA, so we're going to have an extra U here. So in addition to adding one nucleotide to our RNA, this is also going to throw off our frame that we're reading when our ribosome is reading our RNA. So instead of the frame that we had previously, we're now going to have a new frame. So it's going to start at the same start codon and then read this codon just as normal, but then this U will make UUA, and then we'll read in codons of three from there. So again, I need to translate until I hit a stop codon. And so the stop codon is going to be different in this case, because this stop codon is no longer in frame. I need to find an in frame stop codon. And the first stop codon that I hit is going to be this UAG right here. And so our new protein sequence is now going to be N terminus, and then the beginning is going to be very similar, and then our amino acids are going to be completely different because we're reading this in a different frame. OK, so we can see how this one mutation, addition of one base pair into our DNA, has caused us to create a completely different peptide. It's probably not going to perform the same function. OK. So we've gone over transcription and translation. You should be able at this point to answer all of question two and you should also be able to fill in all of table one. Thank you for watching.
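To make the reading-frame logic above concrete, here is a small Python sketch. The mini codon table and the short made-up mRNA are illustrative only (the actual problem-set sequence is not reproduced in this clip); the point is just how a single inserted base shifts every downstream codon and moves the in-frame stop codon.

```python
# A minimal sketch of how a ribosome-style reader scans an mRNA 5'->3',
# starts at AUG, reads 3-base codons, and stops at an in-frame stop codon.
# The codon table below is only a small subset of the real genetic code.

CODON_TABLE = {
    "AUG": "Met", "CAU": "His", "UAU": "Tyr", "CUG": "Leu",
    "UUA": "Leu", "UCU": "Ser", "GUA": "Val", "AGC": "Ser",
    "UAA": "STOP", "UGA": "STOP", "UAG": "STOP",
}

def translate(mrna):
    """Return the peptide (N- to C-terminus) encoded from the first AUG."""
    start = mrna.find("AUG")
    peptide = []
    for i in range(start, len(mrna) - 2, 3):          # step codon by codon
        aa = CODON_TABLE.get(mrna[i:i+3], "Xaa")      # Xaa = codon not in this mini-table
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

normal = "GGAUGCAUUAUCUGUAAGCUAGC"    # hypothetical transcript: AUG CAU UAU CUG UAA ...
mutant = "GGAUGCAUUUAUCUGUAAGCUAGC"   # same transcript with one extra U inserted after the His codon

print(translate(normal))   # ['Met', 'His', 'Tyr', 'Leu'] -> stops at the in-frame UAA
print(translate(mutant))   # ['Met', 'His', 'Leu', 'Ser', 'Val', 'Ser'] -> frame shifts, stops at a later UAG
```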
https://ocw.mit.edu/courses/8-04-quantum-physics-i-spring-2016/8.04-spring-2016.zip
PROFESSOR: There's one more property of this thing that is important, and it's something called the correspondence principle, which is another classical intuition. It addresses the question of what happens to the amplitude of the wave function, and it says that the wave function should be larger in the regions where the particle spends more time. So in this problem, you have the particle going here. It's bouncing and it's going slowly here, it's going very fast here. So it spends more time here, spends a lot of time here, spends a lot of time here. So it should be bigger in these regions and smaller in the regions where it spends little time. So this was called the correspondence principle, which is a big name for a somewhat vague idea. But nevertheless, it's an interesting thing and it's true as well. So let me explain this a little more and get the key point about this. So we say, if you have a potential, you have x and x plus dx, so this is dx, the probability to be found in dx is equal to psi squared dx, and it's proportional to the time spent there. So we'll say that it's-- we'll write it in the following way. It's proportional to the fraction of time spent in dx. And that, we'll call little t over the period T of the oscillation the classical particle is doing. That's the fraction of time it spends there. Up to factors of 2, maybe, because it goes back and forth over the whole period-- it doesn't matter, it's approximate anyway. It's a classical intuition expressed as the correspondence principle. So this is equal to dx over v, the velocity at that position, times the period T. So psi squared dx is proportional to dx over v T. And the velocity is p over m, so this brings in the mass, the period, and the momentum. So here we go. Here's the interesting thing. We found that the magnitude squared of the wave function should be proportional to 1 over p of x-- that is, to the de Broglie wavelength lambda of x. So then the key result is that the magnitude of the wave function goes like the square root of the position-dependent de Broglie wavelength. So if here the de Broglie wavelength is becoming bigger because the momentum is becoming smaller, the logic here says that yes indeed, in here, the particle is spending more time here, so actually, I should be drawing it a little bigger. So when I try to sketch a wave function in a potential, this is my best guess of how it would be. And you will be doing a lot of numerical experimentation with Mathematica and get that kind of insight. The position-dependent de Broglie wavelength, as you have seen, is a function of the local kinetic energy. And that's what it gives for you. OK so that is one key insight into the plot of the wave function. Without solving anything, you can estimate how the wavelength goes, and roughly how the amplitude goes. What else do you know? There's the node theorem that we mentioned, again, in the case of the square well. The ground state, the bound state-- the ground state bound state is a state without a node. The first excited state has one node, the next excited state has two nodes, the next, three nodes, and the number of nodes increases. With that information, it already becomes kind of plausible that you can sketch a general wave function.
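As a rough numerical companion to this argument, here is a sketch assuming a simple harmonic potential of my own choosing (not one taken from the lecture): in the classically allowed region the local de Broglie wavelength lambda(x) = h/p(x) and the amplitude envelope proportional to 1 over the square root of p(x) both grow where the classical particle moves slowly.

```python
# Toy illustration of the two sketching rules: wavelength ~ h/p(x),
# amplitude envelope ~ 1/sqrt(p(x)), with p(x) = sqrt(2m(E - V(x))).
import numpy as np

hbar = 1.0                       # natural units for the sketch
m, E = 1.0, 5.0
V = lambda x: 0.5 * x**2         # assumed harmonic potential (illustrative)

x = np.linspace(-2.5, 2.5, 11)   # points inside the classically allowed region
p = np.sqrt(2 * m * (E - V(x)))  # local classical momentum
lam = 2 * np.pi * hbar / p       # local de Broglie wavelength
envelope = 1 / np.sqrt(p)        # |psi| ~ 1/sqrt(p), since |psi|^2 ~ time spent ~ 1/v

for xi, li, ai in zip(x, lam, envelope):
    print(f"x = {xi:+.2f}   lambda = {li:.2f}   amplitude ~ {ai:.2f}")
# Near the turning points p is small, so both the wavelength and the amplitude
# envelope get larger, matching the correspondence-principle sketch above.
```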
https://ocw.mit.edu/courses/8-01sc-classical-mechanics-fall-2016/8.01sc-fall-2016.zip
So let me summarize what the steps that we've taken are to do this differential analysis. So when you're trying to analyze a continuous mass distribution, the first step is to pick some arbitrary mass element, a small but finite size mass element somewhere in the middle of the mass distribution. You don't want to pick one of the endpoints, because the endpoints are special. You want to pick an arbitrary point somewhere in the middle and then pick a small mass element at that point, so a small but finite size. Analyze the forces acting on that mass element. So write down Newton's second law, the equation of motion, for that mass element. That will give you what the forces are on that element. Then go to the limit of an infinitesimally small element. That will give you a differential equation. You can then separate the differential equation and integrate both sides to solve the differential equation. And then finally, you can apply a boundary condition, something you know about one or the other of the endpoints. And that will allow you to solve for the function of interest at any point along your distribution.
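As a concrete illustration of these steps, here is one worked example; the specific system, a uniform rope hanging at rest from a ceiling, is my own choice and not the one treated in the video.

```latex
% Hypothetical worked example of the procedure above: tension T(y) in a uniform rope
% of length L and mass per unit length \lambda, hanging at rest, with y measured up
% from the free lower end.
\begin{aligned}
&\text{1. Pick a small element of length } \Delta y \text{ at height } y,\quad \Delta m = \lambda\,\Delta y.\\
&\text{2. Newton's second law on the element (at rest, } a = 0\text{):}\quad
  T(y + \Delta y) - T(y) - \Delta m\, g = 0.\\
&\text{3. Take the limit } \Delta y \to 0:\quad \frac{dT}{dy} = \lambda g.\\
&\text{4. Separate and integrate:}\quad \int dT = \int \lambda g \, dy
  \;\Rightarrow\; T(y) = \lambda g\, y + C.\\
&\text{5. Apply the boundary condition } T(0) = 0 \text{ at the free lower end}
  \;\Rightarrow\; T(y) = \lambda g\, y .
\end{aligned}
```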
https://ocw.mit.edu/courses/5-111sc-principles-of-chemical-science-fall-2014/5.111sc-fall-2014.zip
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. CATHERINE DRENNAN: Let's try the countdown again. You can just give it to them for now. We'll figure it out later. I like to ask people to explain, now that they know what the right answer is, if someone will explain why that is the right answer. And I know it's a big class, and people sometimes get nervous about talking. So I bribe people. So today, the person who answers why that is correct will get an MIT chemistry T-shirt. OK. AUDIENCE: All right, let's see. If you're using 5 moles of N2, you need 15 moles of hydrogen gas. Since there's not enough hydrogen gas-- there's only 10 moles-- that means the hydrogen gas should be the limiting reactant since you would need roughly 3.33 moles of N2 for it. CATHERINE DRENNAN: Great. And here is an MIT chemistry T-shirt. [APPLAUSE] CATHERINE DRENNAN: OK, so you'll notice that the prizes are good in the beginning, get worse throughout the semester, and get good again at the end. So keep that in mind. OK, so let's try to get started. It's been a little bit of a crazy start. If people are still having clicker problems, we have a couple more clicker questions that you can try out. And we'll get your clickers working by the end of class today. So the topics-- today, we're going to be talking about the discovery of the electron and of the nucleus. And I said that there's going to be limited amounts-- or every time I have a lecture that has some history, I'm going to counter that with some modern chemistry. So next week, we're going to have two examples of modern uses. But today, we're going to do a little history. And I like history, especially when it can lead to a cool demo. So you might have noticed something demo-like came in while we were doing the clickers. So that's always good. And I also like talking about history when I feel like it's a great example of a challenge that chemists face, and really, most scientists face, that they used to face and still currently face, which is that chemists study small particles. They study things that are really tiny. How do you study something that's really small? How do you demonstrate that something that is invisible to the eye actually exists? So this is a common challenge. And so today, I'm going to tell you about how the electron and the nucleus were discovered at a very low-tech time in our scientific history, how they were able to figure this out. And let me just set the stage of what people were thinking around this time, and how these discoveries really changed everything. So in the late 1890s, chemists were patting each other on the back. And physicists too were thinking, boy, we have it all figured out. We have a real, complete understanding of our universe. We have atomic theory of matter. We have Newtonian mechanics. This is really great. And in fact, someone said these words, which are really dangerous words, that our future discoveries must be looked for in the sixth decimal place. So honestly, when I started studying chemistry, I thought everything about chemistry was probably already known. And it was just fine tuning things. I was absolutely wrong at that time. 
And this statement was absolutely wrong, and it really came right before the major discovery of the electron, where they realized they hadn't really understood anything about the atomic theory of matter. So more experiments are always dangerous because they can change everything. And that's why I really like science. All right, so the experiment I'm going to tell you today, JJ Thomson's discovery of the electron. Or one of the experiments I'll tell you about today, and this is really a pretty simple experiment. So JJ Thomson was interested in this thing called cathode rays. He had hydrogen gas, and then he took an evacuated glass cylinder. And he put hydrogen gas in it. And then he applied a current to that, and he could see these rays coming off. And he thought that was pretty cool. And so he was wondering about these rays. Are they made up of negatively charged particles, positively charged particles, maybe neutral? What are these things? So he decided to do this experiment. And he wondered, if he took two plates and had charges associated with them, whether he would see the cathode rays deflected or not. Well, first, he didn't apply any current, so it was just neutral, just to see if putting these plates in would affect anything in any way. And so when he did that, when there was zero voltage difference between these two plates, he could see the cathode ray hit this phosphor screen. And there was no deflection. All right, then he said, OK, now, I'm going to charge things up, and see if I can see a deflection. So he did that, and he saw the following. There was a deflection. And so now, the voltage difference between the plates was greater than 0. And now, he saw the cathode ray was being deflected a distance of delta x over here. And it was being deflected toward the positive plate. So that said to him that the cathode ray contained negatively charged particles. You'll notice that the word "negatively" is the only word on the screen in blue. And if you look at your notes, you'll see that there's a blank spot. Just pointing out there might be a correlation there. OK, so he knew some things also from classical work that was done at the time. And I didn't reset my boards. So he knew something about what it meant if there was just deflection. So we had our deflection of the negatively charged particles. And he knew that that was going to be proportional to the charge of the negatively charged particle. And it was going to be directly proportional to that, and inversely proportional to the mass of the negatively charged particle. So it was a pretty big deflection, so he wasn't sure. Maybe there was a very big charge, or maybe there was a very small mass, or maybe both. So then he wanted to see what happened if he really got things going, and he applied even more of a voltage difference. So he did that. And when he did that, he saw this. He saw another deflection, but this time, it was much smaller. And it was toward the negatively charged plate. So he realized that, in addition to the negatively charged particle, there was also a positively charged particle. So for that particle, then, the deflection of the positively charged particle should also be proportional to the charge on the positively charged particle, and inversely proportional to the mass of the positively charged particle. But there was a big difference in this deflection. Toward the positive plate, there was a big deflection. And toward the negative plate, it was pretty small.
So he knew that this deflection was much bigger than that one. And then he thought about-- is that going to be due to charge or mass? But he said, the charges should be the same because it's neutral normally. So those charges must equal each other, the absolute values at least, must equal each other. So then we can think about the comparison of these deflections. So if we can take the absolute value of the deflection of the negatively charged particle over the absolute value of the deflection of the positively charged particle, on the top here, we're going to have the charge of the negatively charged particle over the mass of the negatively charged particle. And the absolute value of that term over the charge of the positively charged particle, over the mass of the positively charged particle. But now, if you say that the charges are equal to each other, at least the absolute values of them, we can get rid of that term. And just see that the mass, then, of the positively charged particle over the mass of the negatively charged particle remains. So if this is going to be big, and we know it is, it's a big difference. The negatively charged particle deflected a lot more than the positively charged particle. That means that the difference in masses also has to be big, and that that negatively charged particle must be a lot smaller in mass than the positively charged particle. So if we go up a little bit here-- oops. So the mass of the negatively charged particle must be a lot smaller than the mass of the positively charged particle. And actually, it's about 2,000 times smaller. So he was able to figure all this out by just doing this pretty simple experiment. So he had now a small, negatively charged particle, and also a positively charged particle. And later, the negatively charged particle got a name. The negatively charged particle got the name of the electron. And its mass was determined in another interesting experiment I won't tell you about. And it was determined to be really small, about 9 times 10 to the minus 31 kilograms. So through this experiment, he was able to figure out, that in these cathode rays, you had something that was tiny, this electron. And that means this idea that atoms were the smallest thing out there was incorrect. That's what everyone believed. They were patting themselves on the back. They had figured it all out. But there was something smaller than the atom. There was this electron, this negatively charged particle that was really tiny. So it is pretty cool. It's a pretty low-tech experiment that figured out something that really changed the way that we thought about science. So what about the nucleus? So we had the electron, and we also had this idea there was something positively charged going on there. And of course, in that experiment, that was H plus. But what about the nucleus? So Rutherford is credited with the discovery of the nucleus. So this was a little later. And he had been studying radioactive material. And his good friend, Marie Curie, from France would often send him interesting samples for him to study. I'm not sure quite how they got from France to England. Some of these were really cancer causing things. I don't know how many people touched them without safety precautions on the way. But anyway, it's interesting to note that Rutherford actually did not die of cancer, despite this research. He was literally the victim of his own success. So after his great discovery of the nucleus, which I'll tell you about, he was made into a knight.
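Going back to the deflection argument for a moment: below is a quick numerical check, a sketch only, of the mass-ratio reasoning, using today's accepted values for the electron and proton masses, which Thomson of course did not have.

```python
# If the deflection scales as |charge|/mass and the two charges have equal
# magnitude, the ratio of deflections is just the inverse ratio of the masses.
m_electron = 9.109e-31   # kg
m_proton   = 1.673e-27   # kg  (the positive particle in the hydrogen tube, H+)

mass_ratio = m_proton / m_electron
print(f"m_positive / m_negative = {mass_ratio:.0f}")   # ~1836, i.e. roughly 2,000x
# So for equal |q|, the negative particle's deflection (proportional to q/m) is
# nearly 2,000 times larger than the positive particle's, matching the observation
# that the cathode-ray beam was deflected far more than the positive beam.
```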
And at one point, he became sick. And he needed a doctor. Well, if you're a knight in England, you can't just have any old doctor treat you. You need to have a doctor that is also a knight. And so while he was waiting for a doctor of the appropriate ranking to come and treat him, he died. So he literally died of his own success. He was a victim of his own success. But it wasn't just his success, as I'll tell you about. He had some help. He had a really good graduate student, a really good undergraduate student helping him out. So he was studying these alpha particles that were being emitted from this radioactive material. And we know now that these are helium 2+ ions. But that was not known at the time. They just knew something was coming out of this radioactive material, and they wanted to find out what it was, and characterize the properties of this. So he had a post-doc named Hans Geiger. And this is the Geiger of the Geiger counter. And he also had an undergraduate student, E. Marsden. And so together, they did the following experiment. And I wasn't obviously there, but I'm imagining that by "they did the following experiment," it meant the undergraduate and the graduate student, or maybe even just the undergraduate. OK, so here is the experiment. They had the radioactive material. Alpha particles were coming off, and they had built a detector that would count how many alpha particles were coming off. And so they did the experiment, and they counted. And they found there were a lot of particles, 132,000 alpha particles per minute, in fact. So then they said, OK, let's see what happens if we put a piece of foil in the path of the alpha particles. And we're going to have really, really thin gold foil. So this is like smaller than a human hair. This is really, really thin foil. And they shot alpha particles at it. And they counted. And they got approximately-- I don't know how many significant figures-- but 132,000 alpha particles. Seemed to be, in terms of the significant figures, the same. So it was just going through. These alpha particles were just going through this thin, gold foil. So they had this vision then of the gold atoms being all empty space, and the alpha particles were just going through, no problem. But then they did one more experiment. And by then, I think this was the undergraduate. So they built this detector. And they had built the detector so it could move. So sometimes when you design something to do something, you actually want to use it for that. So we have the alpha particles coming this way and the detector over here, where it had been sitting and collecting counts. The undergraduate was told, well, put the detector over there, and see what happens. So the undergraduate moved the detector over here. And he said, you're going to see if the alpha particles hit the gold foil and backscatter. So we'll have the detector over here. So the undergraduate did this. They didn't think it was going to do anything. They needed something for the undergraduate to do. So they had him do that. And then they counted. And sure enough, click, click. It wasn't a lot, but there seemed to be some backscatter, about 20 counts, 20 alpha particles per minute. That was not expected. They were expecting 0. So they were detecting backscattering. The alpha particles were bouncing off that thin, gold foil, and coming back at the moved detector.
So they could calculate this probability of backscattering, the count rate of the backscattering over the normal count rate of the particles. And so they had 20 backscattering events, or 20 counts over the 132,000, about 2 times 10 to the minus 4, or 0.02%. This is small, very small. But it was not 0. And I don't know how many times they did this experiment, but I can imagine there were lots of times they did the experiment that no one would really believe this result. And Rutherford himself said, "it was about as credible as if you had fired a 15-inch shell at a piece of tissue paper, and it came back and hit you." So that's how he felt about it. He was like, I don't understand how this is working. This is so thin. It's like tissue paper, but yet these alpha particles are bouncing off something. So what did this all mean, once they had repeated the experiment many times? So their interpretation, then, was that these gold atoms were, in fact, mostly empty. It seemed like all the alpha particles were just going through. Most of them were just passing through and not hitting anything. But there was something in there that could be hit. There was some concentrated mass in this volume that, when the alpha particle hit that directly, it backscattered. And they later called this the nucleus. So they came up then with this new model, the Rutherford model, where you had mostly empty space. But you had concentrated mass inside that an alpha particle might hit and then backscatter. And Rutherford assumed that the electrons would be in that empty space, and that this positive mass was going to be positively charged. Because he knew the overall atom was going to be neutral. So just a little nomenclature. We can think about the charge of the electrons in the atom as being equal to minus Z times e, where Z is our atomic number, and e is the absolute value of the electron's charge. And if this term is negative, then the charge on the nucleus is going to be positive. So we have plus Z times e, because overall, the atom is going to be neutral. Then Rutherford went on to actually use this backscattering to measure the diameter of this positively charged, dense part of the atom, of the nucleus. And he was able to measure that diameter as a very small number, 10 to the minus 14 meters. So he did this with this backscattering experiment. So you might say, how can you get a diameter from this backscattering? And so that's what we're going to try right now ourselves. We're going to do an experiment, and Professor Sylvia Ceyer originally came up with the experiment to do in class. And so she built the first version of this gold foil right here. And originally, she took something apart from her own research program. Since then, it's been replicated so that she doesn't have to shut down her research lab every year when we do this experiment in class. So here, imagine this as a piece of gold foil. And it's mostly empty space, but there are some small, concentrated nuclei, gold nuclei, these Styrofoam balls. And if we have, over here, some alpha particles-- and we happen to have 502 alpha particles. If the alpha particle hits the concentrated part, it should backscatter. Otherwise, it should go through. So you are now going to be radioactive material. I don't know. Is that the first time you've been called radioactive material? I'm not sure. But we're going to come around. Everyone can have one or two of these. So let me just tell you. You need to watch your ping pong ball.
Once you get it, you can move to the center. Watch your ping pong ball. If it hits the edge of this, it's not a backscatter. Watch if it goes through or if it backscatters. And you will click in whether you had a backscatter event or not. And from that information, we will calculate the diameter of the nucleus. Do you want to put up the clicker? I think everyone's good. Do we have any more? OK, everyone, if you want to get up and get a better vantage point, do so. And let the experiment begin. I'm moving out of the way. OK, so go ahead, and say whether you had 1, 2, or 0 backscatter events. And if your clicker isn't working, we'll ask you to raise your hand and tell us, especially if it was a backscatter event. Has everyone had a chance to click in? Has everyone clicked in? You can't tell. All right, we're going to countdown. Go ahead and click in, and then we're going to do the calculation. OK, actually, we need to calculate the actual number of them, not the percent. All right, so we had some backscatter events. So let's see if we can use this information to actually calculate the diameter of the gold nuclei. All right, so we are going to talk about the probability of backscattering. So we have the probability is equal to the number of ping pong balls backscattered over the total number. And this will be related to the radius of those gold nuclei by the following. So we have the probability is going to be equal to the area of the nuclei, the total area over the area of the whole atom. So basically, the piece of foil-- and then that's going to be further equal to the number of nuclei times the area per nucleus, again, over the area of all the atoms, or the piece of foil. OK, so we know some of this information. So I'm going to move this up. You have an actual number. 36 total? Oh, that's interesting. OK, that's a lot of backscattering. OK, so we can plug in some of these numbers now. So we have the probability is going to be equal. Someone counted. I didn't count, but someone counted that there were 120 nuclei. And the area is going to be pi r squared. And someone measured the entire frame, or the size of the piece of foil, as 13,900 centimeters squared, or 1.39 meters squared. OK, so we'll assume that they counted the nuclei to three significant figures. So it's exactly 120. And we'll assume that they measured the box with three significant figures as well. So now we can solve this for r, the radius, or for the diameter. So if we now solve for the radius, we'll bring the radius over. And we'll have the radius equal to the square root of the probability times the square root of the area over the number of nuclei times pi. And if we take these numbers-- and I did the math; I have a calculator, if someone wants to check-- that factor comes out to 6.072 centimeters. And then the diameter is going to just be equal to twice that. So we have the square root of the probability times 12.14 centimeters. And now, we need to calculate the probability. So the probability is going to be the number of backscatter, which was 36. 36 over 502, someone have a calculator? Someone check my math. Check math for me. Check math, anybody. I don't have another prize. Thank you. Was it 26 or 36? I can't read up there. AUDIENCE: You said it's 12 single or double? CATHERINE DRENNAN: Double. AUDIENCE: [CHATTER] CATHERINE DRENNAN: Excellent, checking math for me, here. AUDIENCE: I can't do math in my head. AUDIENCE: Yeah, it's roughly 500 [INAUDIBLE]. CATHERINE DRENNAN: OK, 26, and so what does this come out to be? AUDIENCE: 0.050. CATHERINE DRENNAN: 0.050, And now we need to plug that in.
So we have d equals the square root of 0.050 times 12.14. And what does that come out to be? 2.71-- and the actual was 2.5. Not bad. So using methods very similar to this, Rutherford was able to figure out what the diameter of the nucleus was. And this was a really important achievement of the time. OK, so in the last few minutes, maybe I'll move this down. In the last few minutes, I want to talk about the fallout of all of these great experiments and all of these great results. So we now know there is an electron and a nucleus. So there are subatomic particles. What does that mean in terms of what people thought they understood about atomic theory? So we had this question then. OK, so we have a nucleus, positively charged, and an electron, negatively charged. And there's a distance between them. And we wanted to know, why do they stay apart? Why does the electron not crash into the nucleus? So from classical description, we have Coulomb's Force Law, which tells us about the force when you have two charged particles, Q1 and Q2, so the charge on the particles. And you divide by 4 times pi times this permittivity constant times the distance, r, squared. So if you apply a force then, and you have charged particles, if those particles have the same sign, then acceleration should push them apart. The force should be positive and repulsive. So two things with the same charge don't want to be near each other. It's going to be repulsive. If, like in this case, you have two things that have opposite signs, acceleration should pull them together. And here the force should be negative and attractive. So that's the situation we're in here, positively charged nucleus, negatively charged electron. So let's consider then a hydrogen atom. That's one electron, one proton. Let's think about what happens when you have an infinite distance between them. So if they're infinitely far apart, what is going to be the force? You can just yell out the answer. AUDIENCE: 0. CATHERINE DRENNAN: 0, right. They don't feel each other. They don't know anything about each other. They're infinitely far apart. There's no force. But that's not going to be the situation in the atom. Atoms are small. So they're going to be somewhat near each other. Now, we can think about what happens if they're right on top of each other and r is 0. And here, we can try out the clickers one more time. OK, so we'll do 10 seconds. Oh, the colors changed. So most people had it's infinitely attractive. So infinitely attractive, like most chemists, except Avogadro. He's very strange looking. OK, so if these things are going to be close to each other, then they should be attracted to each other and collapse into each other. So why then are the electron and the nucleus that are infinitely attracted to each other-- why do they stay apart? So Coulomb's Law is not helping us understand this. But it's really just talking about the force with respect to a distance. It's not telling us anything about what happens when r changes with time. So we'll find in chemistry sometimes that things are spontaneous in one direction, but they're also very slow. So it's not that the thing doesn't happen; it's just kinetically very slow. So let's consider time now. Maybe that will help us understand why this is not working. It doesn't, but let's look at that. So what do we know about time? What do we know about acceleration and force in time? We need a classical equation of motion that can explain how the electron and the nucleus could move under force.
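Before the classical-mechanics discussion continues below, here is a short sketch of the nucleus-diameter estimate just worked through, using the numbers quoted in class (26 backscatter events out of 502 ping pong balls, 120 Styrofoam nuclei, a 1.39 square meter frame).

```python
# Sketch of the in-class estimate: the backscattering probability equals the
# fraction of the "foil" area covered by nuclei,
#     P = N_back / N_total = (n_nuclei * pi * r^2) / A_foil,
# so the nucleus radius is r = sqrt(P * A_foil / (n_nuclei * pi)), and d = 2r.
import math

n_back   = 26        # backscatter events clicked in
n_total  = 502       # ping pong "alpha particles" thrown
n_nuclei = 120       # Styrofoam "gold nuclei" counted on the frame
area     = 13_900.0  # cm^2  (1.39 m^2 frame)

P = n_back / n_total                           # ~0.05
r = math.sqrt(P * area / (n_nuclei * math.pi))
print(f"P = {P:.3f},  d = 2r = {2*r:.2f} cm")  # ~2.76 cm (in class, with P rounded to 0.050, it came out to 2.71 cm; actual balls ~2.5 cm)
```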
So we have our good friend. We have Newton's Second Law. We have f equals Ma, force equals mass times acceleration. So let's think about what this tells us about the electron and the nucleus. So we can express the force in terms of how the velocity changes with time. We can also do that in terms of distance. So now, let's think about what's happening. We know the force. We can calculate the force from Coulomb's Force Law, the force between the nucleus and the electron. And then we can think about two different distances, and with that force, how fast the particles should move toward each other. So for the initial distance, we can take 0.5 angstroms, or 0.5 times 10 to the minus 10 meters, so that's about the radius of a hydrogen atom. So we take that distance. And then we want to think about how fast that would then go to 0. And it's fast, approximately 10 to the minus 10 seconds. Or the electron should plummet into the nucleus in about 0.1 nanoseconds. It doesn't do that though. So we have these beautiful classical laws. I'm a big fan of f equals Ma. I like all these things. But it's not working to describe what's happening here. So we discovered the electron, discovered the nucleus, but now we have a new problem. We don't understand why the electron isn't plummeting into the nucleus. So what's the problem here? So is the problem Coulomb's Force or Newton's Second Law? And it turns out, as most of you are probably aware, it's that classical mechanics doesn't work when you consider things on this size scale. So we need a new way to describe what's going on here. Classical mechanics isn't working. And so we need quantum mechanics. And so that is allowing us to understand the behavior that we're actually observing. We're not observing this plummeting, so there must be a better way to do this. And when you're on this really small scale, you need a different way to describe the behavior. And so next week, we're going to be moving in, and thinking about quantum mechanics. And if anyone's still having clicker questions or needs a clicker, we'll be down here to help you out. And otherwise, I will see you on Monday.
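For scale, here is a back-of-the-envelope sketch, a rough check only, of the Coulomb force and the resulting acceleration on the electron at the 0.5 angstrom separation used above.

```python
# Coulomb force and acceleration on the electron at roughly the radius of a
# hydrogen atom, treating electron and proton as point charges.
import math

e    = 1.602e-19    # C, elementary charge
eps0 = 8.854e-12    # C^2 / (N m^2), permittivity of free space
m_e  = 9.109e-31    # kg, electron mass
r    = 0.5e-10      # m, about the radius of a hydrogen atom

F = e**2 / (4 * math.pi * eps0 * r**2)   # magnitude; attractive since the charges are opposite
a = F / m_e                              # Newton's second law, f = ma
print(f"F ~ {F:.1e} N,  a ~ {a:.1e} m/s^2")
# F is ~1e-7 N and a is ~1e23 m/s^2 -- an enormous acceleration for such a tiny
# mass, which is why a purely classical electron would be expected to crash into
# the nucleus almost immediately (the lecture quotes roughly 0.1 ns), yet it doesn't.
```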
https://ocw.mit.edu/courses/8-01sc-classical-mechanics-fall-2016/8.01sc-fall-2016.zip
We are now considering the motion of a rigid body. And we'd like to talk about kinetic energy of rotation and properties of that rigid body. Let's consider a rigid body, and let's say that our rigid body is rotating about an axis passing through a point S. If we look overhead at our rigid body, then what we're going to do is introduce a coordinate system. So this is our overhead view. And suppose that we have a small element of our rigid body here, which I'm going to write as delta m j. And that's a distance r j from the center. Our rigid body has rotated through an angle theta. And what we'd like to do is describe a coordinate system, r hat, theta hat, and k hat pointing like that. Now, with this rigid body that's rotating, we describe the angular velocity as the rate at which the angle is changing, d theta dt, in the k hat direction. And so we can draw the vector omega, and this establishes our coordinate system for our rigid body and a small element of mass, delta m j. Now, what we'd like to consider is what does this mass element do? Imagine that it's over there. Because every point in the rigid body has the same angular velocity, this object is going in a circle with angular velocity omega. Recall that omega is perpendicular to the plane of motion. But the object itself has a velocity, which I'm going to write v j. And that velocity vector, v j, is in the tangential direction. And it's related to the z component of our angular velocity by the following relationship-- it's how far we are from the center point, S, times the z component of the angular velocity. And it's pointing in the tangential direction. So remember that omega z is equal to d theta dt. And this describes our coordinate system for the rigid body. Now what we'd like to discuss is the kinetic energy. And the way we're going to consider kinetic energy is, we're going to sum up the rotational kinetic energy of every single mass element. So we begin by writing k j rotational. And we know that that is just 1/2 times the mass element times the velocity of that element squared. Now we can use our relationship for the tangential velocity of the element in terms of the angular velocity. And we have 1/2 delta m j r j squared times omega z squared. Now keep in mind that omega z is the same for every single mass element, but the distances of the mass elements, r j, are all different. So the total rotational kinetic energy is the sum over j from 1 to n of, let's put the 1/2 outside, times delta m j r j squared. Now again, recall that every element has the same omega z. So I can write parentheses omega z squared. And that's our rotational kinetic energy. So what we want to do now is look at the limit as our delta m j becomes very small, because we have a continuous body. And we'll write a definition, which is going to be the moment of inertia about the axis passing through this point, S. It is equal to the limit as delta m j goes to 0 or n goes to infinity of this sum-- delta m j r j squared, j goes from 1 to n. Now because this is a limit for the continuous body, we'll define it as the integral over the body of a small mass element dm times a distance r squared. Now here, what is the meaning of the r? For our continuous body here, if we call this dm and we define the distance from S to dm as r-- and now I'm going to just put a little notation in here. It's the distance from S, the axis we're calculating about, to where the mass element dm is. So I'll write r sub S, dm. This is what we call the moment of inertia of a continuous body.
Now again, what's very important to realize-- it's a moment about a particular axis. So this is about an axis passing perpendicular to the plane of rotation and through S, point S. So it's an axis that's passing perpendicular to the plane of rotation passing through the point S. And this is what we call the moment of inertia of a body. Now, we'll see that the moment of inertia can be expressed in terms of other physical quantities. As the course develops, you'll see two or three more fundamental relations for moment of inertia. But what we'd like to do now is summarize our results-- that the kinetic energy of rotation is 1/2 times I sub S-- the subscript S is just to indicate rotation about this axis passing through S-- times omega z squared. Now keep in mind, because omega z is a component, it can be positive, 0, or negative. But the square is always a positive definite quantity. And that's our kinetic energy of rotation. Let's contrast that with our translational kinetic energy. And we remember there that was 1/2 times the total mass of the object times v cm squared, where we're looking at the motion of the object's center of mass. And this is the total mass of the object.
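Here is a small numerical sketch of the definition I sub S equals the integral of r squared dm; the particular body, a uniform rod rotating about a perpendicular axis through one end, is my own example and not one treated in this video.

```python
# Chop a uniform rod of mass M and length L into N small elements delta m_j,
# sum delta m_j * r_j^2 about an axis through one end, and compare with the
# exact result M L^2 / 3 for that axis.
M, L = 2.0, 1.5          # kg, m  (arbitrary values for the sketch)
N = 100_000              # number of mass elements

dm = M / N                                    # each element's mass
I_numeric = sum(dm * ((j + 0.5) * L / N)**2   # r_j = distance of element j's midpoint from the axis
                for j in range(N))
I_exact = M * L**2 / 3

print(f"numeric = {I_numeric:.6f} kg m^2,  exact = {I_exact:.6f} kg m^2")
# The sum converges to the integral as the elements become infinitesimally small,
# and the rotational kinetic energy is then K = (1/2) * I_S * omega_z**2.
```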
https://ocw.mit.edu/courses/3-091sc-introduction-to-solid-state-chemistry-fall-2010/3.091sc-fall-2010.zip
The following content is provided under a creative commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Hi, I'm Jocelyn, and today we're going to go over Exam 2 from Fall 2009, and Problem 4, Parts (a) and (b). As always, we're going to start by reading the question. Boron exists in the gas state as the dimer B2. Explain how the fact that B(2) is paramagnetic, two unpaired electrons, implies that, in this molecule, the pi 2p orbitals must lie at a lower energy than do the sigma 2p. OK, so what is this question actually asking you to do? Well, it's asking you why the energy of the pi 2p is less than the energy of the sigma 2p, given that boron 2 is paramagnetic. So the first thing we need to know is what paramagnetic means, and he actually gives that to you in the problem, so it means that you have two unpaired electrons. So this is unpaired electrons. We are asked here then to give a reason why the fact that there are unpaired electrons implies a certain ordering of the sigma and pi 2p orbitals. So the first thing we want to do, probably, is recognize that these are molecular orbitals, not atomic orbitals. Therefore, we probably want to start by drawing the molecular orbital diagram for boron. Now, how do we draw a molecular orbital diagram? Well, the first thing to do is to start with the atomic orbitals. So boron has five electrons, as we can find out from our periodic table, and they occupy the 1s, the 2s, and one electron in the 2p. Again, if this seems unfamiliar to you, you might want to go back to earlier in the course, where we went over electron configuration. However, when we're looking at molecular orbitals, we actually only care about the valence electrons, because those are what are involved in bonding. The core electrons are much, much lower in energy, so they aren't really involved in the bonding. So we're actually only going to look at the electrons in the second energy level. So we have three valence electrons per boron, and they are in the 2s and 2p, which we remember, there are three p orbitals in each energy level. So now let's draw those in. We have 2s and 2p. Remember to always have a scale here, and we're in an energy space, and this is not to scale, because we're just looking at relative energies, we don't care about the actual difference in energy. And because we have two borons-- so this is boron one-- I'm going to draw the same thing over here. Make sure they're at the same energy level because they are the exact same atom, and therefore, the same atomic orbitals. So this is, again, boron, and we have three electrons for each. We don't have to fill them in, but I'm going to do so, just to keep track. And again, we're filling according to the orbital filling rules gone over earlier in the class. So now that we have our atomic orbitals, we are going to combine them to make molecular orbitals. And we want to remember that s orbitals combine to make a sigma bonding and a sigma anti-bonding. Remember that whenever you combine atomic orbitals, you create-- if you're combining two atomic orbitals, you create one of a higher energy and one of a lower energy, because the net energy stays the same. Oh sorry, that's crooked.
All right, now we'll move on to the 2p, and because the problem told us that we need to explain why the pi bonding orbitals are lower in energy than the sigma bonding orbitals, we're going to start with that configuration. The actual order depends on the molecule, and so without any extra information, we wouldn't really know if the sigma was above or below the pi. So as the question, kind of, hints at: p orbitals make two pi bonds and one sigma bond. This is due to the orientation of the pi bonds, or the p orbitals in space. Then, as always, we need to put in the anti-bonding just to make sure we keep track of all of our orbitals. OK, so we combined one, two, three, four, five, six, seven, eight atomic orbitals, and we made one, two, three, four, five, six, seven, eight molecular orbitals. We want to make sure we always have the same number of orbitals before and after. So now we can fill in the electrons into the molecular orbitals: that's these in the middle here. And we use the same rules as we do when we're filling atomic orbitals. So we start at the lowest energy, pairing spins, if we only have one orbital in the energy level; but as here, we have degenerate orbitals, so we can actually have unpaired spins. So we have one, two, three, four, five, six electrons, and we have three from each boron atom, and so, we have the same number of electrons in our molecule. Now that we have our molecular orbital diagram, we can go back to the actual question. Some students stopped here and were like, I made the molecular orbital diagram. I'm done. But we need to remember that the question is asking us why the configuration we drew makes sense because of the unpaired electrons. So if we go back to our molecular orbital diagram, we see that because the pi 2p have two degenerate orbitals, we can have unpaired electrons. If we think about the sigma being below, we would have paired up those electrons, and therefore, gotten 0 unpaired electrons, and it would not be paramagnetic. So let's just try that out. Move over here, and we're going to draw just the molecular orbitals, since we already did the full process of making them from the atomic orbitals. So this is the sigma 2s, sigma star 2s. And now we're just looking at what if the sigma 2p was lower in energy than the pi 2p. And again, we would fill up our orbitals, and we see that because this lowest 2p-derived molecular orbital, the sigma 2p, is non-degenerate, we pair up our electrons, and so if this were the case, boron would not be paramagnetic. However, because it is paramagnetic, we know that the pi 2p is lower in energy than the sigma 2p. So that's what this question was asking, and if you said all those things, great job, and you answered the question correctly. Now we're going to move on to Part (b), and it's very related to Part (a), so we're going to use the same diagrams here. So it asks, is the gas molecule B(2) 2 minus more or less stable than the gas molecule B(2)? Explain. So here he's asking about the boron dimer that has a negative 2 charge-- that is, it has two extra electrons. So we're still talking about boron; and thus the atomic orbitals that we're combining are exactly the same. I'll write that down. So we still have the 2s and the 2p from each of the borons. Those are at the same energy levels, and they still make the same molecular orbitals, but we have two extra electrons. So when we go to fill up our molecular orbitals, we now have eight instead of six.
So not only is the boron 2 minus no longer paramagnetic, because we don't have unpaired electrons, but we also have a difference in the bond order. So if you don't remember what bonding order is, it is when-- I'll just put it here-- it's how we determine the strength of the interaction between the two atoms. And because he's asking is the boron 2 minus more or less stable, we want to figure out the strength of the interaction, and that will tell us which is more stable. So the bond order is the bonding electrons minus the anti-bonding divided by 2. Sorry that's a little messy, but I hope you get the idea. So for boron 2 minus, which we're talking about over here, the bonding order: we have two, four, six bonding electrons, electrons in bonding orbitals. And we have two in anti-bonding orbitals, divided by 2 and that gives us a bonding order of 2. So that's equivalent to saying there's a double bond between the boron 2 minus, the two borons in that molecule. So let's move back to our diagram for the neutral boron dimer, and we have two, three, four bonding, and two anti-bonding, and so that equals 1. So in the neutral dimer, we have a bonding order of 1, and in the charged 2 minus dimer, we have a bonding order of 2. From this, we can see that the 2 minus has more bonding interaction, and so, it will be more stable. So again, you can't just put that it's more stable, we need to have an explanation for why, and this is a way to explain that.
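Here is a small sketch of the electron-filling and bond-order bookkeeping used above, assuming the MO ordering from this problem (pi 2p below sigma 2p); the orbital list and filling routine are illustrative, not a general-purpose MO program.

```python
# Each tuple is (orbital name, degeneracy, bonding?); 2 electrons fit per orbital.
MO_LEVELS = [
    ("sigma_2s",   1, True),
    ("sigma*_2s",  1, False),
    ("pi_2p",      2, True),
    ("sigma_2p",   1, True),
    ("pi*_2p",     2, False),
    ("sigma*_2p",  1, False),
]

def fill(n_electrons):
    """Fill valence electrons bottom-up; count bonding, antibonding, unpaired."""
    bonding = antibonding = unpaired = 0
    for name, degeneracy, is_bonding in MO_LEVELS:
        take = min(n_electrons, 2 * degeneracy)
        n_electrons -= take
        # Hund's rule bookkeeping: within a degenerate level, electrons stay unpaired when possible
        unpaired += take if take <= degeneracy else 2 * degeneracy - take
        if is_bonding:
            bonding += take
        else:
            antibonding += take
    return bonding, antibonding, unpaired

for label, n in [("B2 (6 valence e-)", 6), ("B2 2- (8 valence e-)", 8)]:
    b, ab, up = fill(n)
    print(f"{label}: bond order = {(b - ab) / 2:g}, unpaired electrons = {up}")
# B2: bond order 1 with 2 unpaired electrons (paramagnetic);
# B2 2-: bond order 2 with 0 unpaired electrons, hence the stronger bonding interaction.
```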
https://ocw.mit.edu/courses/3-054-cellular-solids-structure-properties-and-applications-spring-2015/3.054-spring-2015.zip
LORNA GIBSON: My name's Lorna Gibson. I'm the professor for 3.054, it's a course on cellular solids. And I've been working on cellular solids since I was a graduate student, since I did my Ph.D. And cellular solids are materials that are made up of an interconnected network of struts or plates. And there's examples like engineering honeycombs and foams, and there's lots of examples in nature. Things like wood and cork and there's a type of porous bone. And there's lots of examples in medicine too. Tissue engineering scaffolds, for example. So my background is in civil engineering, and in civil engineering we study structures. And typically people think of large structures like bridges or buildings. But in fact when we analyze the cellular solids, we use the same kind of mechanics. It's just the scale is very much smaller. So we're looking at structures where the scale might be hundreds of microns or millimeters, things like that, but the same sort of mechanical principles apply to that. OK, so I grew up in Niagara Falls, in Ontario. And people always think of Niagara Falls as being the waterfall and all the tourist stuff, there's a casino there now. But in fact, there's loads and loads of big civil engineering works in Niagara Falls, mostly associated with the hydroelectric power station. So when they make hydroelectric power in Niagara Falls, the power station is actually about a mile downstream from the Falls. And what they do is they have a big hydraulic gate that goes into the river and it diverts water from the river above the Falls into a whole series of canals and tunnels and there's a big reservoir where they store water. And then the water from this reservoir goes into the penstocks, the tubes that go down to the turbines and then make the electricity. Niagara Falls is not a big town, but if you drive around Niagara Falls, you see these canals, you see the reservoir, you see the big power station. And so there's these really huge, impressive civil engineering works. And my father worked for an engineering company in Niagara Falls and they specialized in the design of hydroelectric power stations, and I think that's how I got interested in engineering. So I've been interested in bird watching for some time. Mostly just because birds are beautiful and there's all sorts of interesting behaviors you can see with birds. But since I started doing research on cellular solids and, in particular, teaching this course, I realize there's lots of examples of things about birds that have to do with cellular materials. So for instance, some people had once told me that woodpeckers avoid head injury and brain injury by having a special cellular material in between their brain and their skull. And that this acted kind of like a foam in a bicycle helmet. That it would absorb the energy of the impact. And I thought oh, well, I like bird watching and I study cellular materials, I should find out about this. So I started looking into it and people had looked at the anatomy of the woodpecker skull and brain. And, in fact, there is no special cellular material. But by that point, I was kind of hooked. And I actually did a project at one point looking at why it was that woodpeckers don't get brain injury. And it's largely a scaling law. It has to do with the fact that their brains are very small. Another aspect of birds that has to do with cellular solids is how birds make themselves very light. And here we have an owl skull. This owl, unfortunately, had an accident with a car. 
But somebody picked up its body and took it to Mass Audubon, and I got this from somebody at the Massachusetts Audubon Society. And if you look at the skull-- I don't know if you can do a close-up here-- if you look at the skull, you can see there's a dense layer of bone on the outside and there's another dense layer of bone on the inside, and there's a sort of foamy layer of bone in between. And that's called a sandwich structure. And this foamy type of bone is called trabecular bone. And that's one of the things that I study. And it turns out that particular structure gives you a very stiff, strong, lightweight structure. So you can see an example of how cellular materials are used in engineering, but here sort of manifested in the owl's skull, making the skull very light.
https://ocw.mit.edu/courses/7-016-introductory-biology-fall-2018/7.016-fall-2018.zip
PROFESSOR: OK. Let's get going here. So this week I'll be talking about bacteria and viruses. And these are really significant topics, because I think we often don't think about the magnitude of the problems and what kind of crises we're approaching with respect to the therapeutic treatment of infectious disease. So what I want to try and get home to you this week is the variety of different microorganisms that threaten our health, and just talk to you about the sorts of issues that are really prominent in the news concerning resistance to therapeutic agents. But in order to do that, we've got to meet some bacteria, meet some viruses, and understand some of their lifestyles, their mechanisms, so that we can understand what kinds of agents are used and developed to try to mitigate these diseases, because it's only through a molecular mechanistic understanding of the life cycles of viruses and bacteria that we can understand how many of these therapeutic agents work and what may be happening in resistance development. Now I find this particular slide a little daunting, but I want to point out to you that it concerns the world's deadliest animals. So we worry a lot about tigers, and sharks, and things like that, nasty poisonous snakes, bites from dogs with rabies, and so on. I'm going to leave this black bar here, sort of unmentioned. I don't know what year this is, but if we talk about daunting, that's pretty serious. And then the biggest killer on this screen is the mosquito. But it isn't actually the mosquito, it's the protozoal microorganisms that the mosquito carries from one person to another that really make that such a serious consideration. But what's not here are all the bacteria and viruses that actually are far more serious. And the numbers on the next slide will show you just quite how shocking these numbers are. If you're interested in infectious disease as a field-- and for anyone going towards an MD or MD/PhD, infectious disease really is a critical area-- it's something we have to get to grips with. There are not enough vaccines in the world. There is not enough treatment with very microbe-specific anti-infective agents. So I encourage you to look at the CDC. There's a few other places where there's loads of information collated, such as NIAID, which is the NIH's National Institute of Allergy and Infectious Diseases, and the World Health Organization. So there's lots of places where you can find stuff out. So what we're going to be talking about in the next three classes are our smallest enemies, things like bacteria, fungal infections from things like yeast or Aspergillus, which would cause candidiasis and aspergillosis. Protozoal disease we won't mention, but those are the types of diseases that are carried by things like ticks, mosquitoes, tsetse flies. We think of those as the infectious agent, but it's really what those organisms carry and spread that's important there. And we also won't talk about prion diseases, which are the diseases that don't involve an infectious microorganism, but are believed to be spread from protein to protein through the nucleation of new prions from existing prions. What we'll focus on in the first class is bacteria and in the other two on viruses, with an eye to looking at antibiotics and antiviral agents, how they work, where they go wrong. And this is where the numbers get fairly shocking. So for example, bacterial infections of the lower respiratory tract, that's deep in the lungs, cause 4 million deaths a year.
Think back to the numbers you just saw on that first slide. These are things like strep pneumoniae, Klebsiella pneumoniae. They're called pneumonias because they're infectious diseases of the lung, but the organisms that cause them are Streptococcus, and Klebsiella, and Staphylococcus aureus specifically. But there are others that cause lung infections and lower respiratory disease. These are particularly troublesome in areas where the atmosphere is bad. In big cities where there's a lot of insult from emissions and such that make the lungs weaker, then these sorts of organisms can really take a hold more readily, so they are more serious. There are many, many microorganisms that cause pneumonias. And sometimes it's a real problem to track down the precise microorganism, which makes the issue of treatment really difficult, really challenging. So I'm going to talk in a minute about absolute identification of infectious agents, so we can do better jobs of specifically targeting the causative agents. Diarrheal disease-- 2 million deaths. These are organisms like Campylobacter jejuni and Salmonella enterica. We tend to have these crises because romaine lettuce is contaminated with an infectious organism. There are very few deaths in the developed world. We get onto that very quickly, say stop eating romaine lettuce until we figure out what's going on here-- very, very few. But once again, in the developing world, these can run rampant. And they can grab small children and older people who are already compromised, whose immune systems are not quite so strong, and people generally die of dehydration, because these diseases really hit the GI tract. They cause leakiness in the GI tract and really, really serious diarrheal disease. So those are the bad boys there. But once again, there are many others. Tuberculosis is yet another really serious infectious disease caused by Mycobacterium tuberculosis, that's the main one of the mycobacteria that is a threat. It used to be called consumption in the old days, because people almost looked consumed by the disease. They would just get thinner and thinner. Literally it was a wasting disease. People would be sent up into the mountains of Switzerland to try to recover from consumption, where the air is clearer and cleaner, and maybe hope that they can recuperate. But TB-- look at these numbers. In 2015 there were almost 10 million new cases. There are about 1.2 million deaths from TB. A serious situation with TB is that it's often found co-infecting with the HIV virus, where you just can't fight the TB. So eventually, if you're infected with the HIV virus, it's the TB that gets you, due to the weakening caused by the HIV infection. So these numbers are shocking in light of the numbers I showed you on the previous slide, right. Look at these numbers if you go to snakes and things like that. They're meaningless numbers compared to infectious diseases. So now, and I'm going to talk to you about the origins of this, many, many infectious agents that we thought we had conquered-- we thought we could take care of them. You just take this course of antibiotics and you're off, you're set. But now, because of the rapid mutation rates in bacteria and viruses, certain pathogens have completely worked out mechanisms to escape therapeutic agents. And I'm going to talk to you about those mechanisms towards the end of this class.
So basically you can dose a person one day with a normal dose of an antibiotic agent, and then 10 months later that normal dose or 10 times or 100 times that dose stops working. Why is that? It's due to resistance acquisition due to rapid cell division and mistakes made in replication and transcription that may, one in a million times, confer an advantage on the microorganism. All of a sudden the drugs don't work anymore. The WHO and various community notice boards call this set of infectious agents the ESKAPE pathogens. The name helps us remember which ones these are, because these are pathogens that escape treatment, because they've developed resistance to multiple drug cocktails. So commonly, when someone has a particular disease they don't take one drug, they take two or three to hit lots of pathways at once in the hope that resistance won't develop fast. But the ESKAPE pathogens have collectively acquired resistance to several antibiotics, meaning there's no good treatment. So the letters of ESKAPE stand for Enterococcus faecium, Staph aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and some of the Enterobacter species. Some of these infectious agents are what result from-- I always say this wrong-- nosocomial infections. Does anyone know what those are? These are infections that people get in hospitals. So Tom Brady had a knee operation. He got an infection in his knee that came from the surgery, right. These are hospital acquired infections, because sometimes you can't clear an area enough, and there are infectious agents around. So Acinetobacter baumannii was dubbed the Iraqi Bug for many, many years, because the vets coming back from Iraq were going to military hospitals, and these were abundant with cases of Acinetobacter baumannii. So that moved on to the ESKAPE pathogen list. So these are things to watch out for. It's the reason that nosocomial infections-- I hope I'm saying it right, otherwise you're going to go off and Google it and realize I said it wrong. It's the reason why old school physicians wore bow ties and not ties. Can you imagine why? So if you're wearing a tie, which I seldom wear to be honest, and you're working over a patient, the tie can be the thing that carries the infection, because it gets closer to infected areas. This is old school stuff. And so originally the physicians wore bow ties in order to distinguish themselves as important people, but not to wear ties that might carry infectious agents. That's sort of a scary thing. So with all this said, let me just lead you in to talk about bacteria, antibiotics, and resistance development. So we often name bacteria somewhat by their shape. The round ones are cocci. The rod-shaped ones, whatever they are-- come on, one of you. What do we call the rod-shaped ones? Bacilli. I had a blank moment. So the rod-shaped ones are bacilli. And then there are some others that have a different morphology, like Campylobacter jejuni, that have kind of a corkscrew shape. And that's thought to be important in their motility, digging through the mucous layers over the epithelial layers. So here I show you several shapes of bacteria. And I'm just going to, once again, reinforce what diseases some are associated with and some other diseases that you might be surprised by. So yes, we know about salmonella and the E. coli and food poisoning. But Helicobacter pylori, which is one of these flagellated bacteria, can infect the stomach. It's often the cause of ulcers.
So it's a causative agent of stomach ulcers, but that has in turn led to a considerable risk factor in stomach cancer. So what we thought was just an infection causes a constellation of other problems, including cancers. And more and more microbial agents are now associated with cancers, in particular the viruses. Neisseria, these come along with the sexually transmitted diseases such as gonorrhea. Neisseria meningitidis is the one that causes meningitis. It is a very, very often fatal infection of the meninges. Staph aureus causes lots of infections around the body, just gruesome things like cellulitis, wound infections, toxic shock. Streptococcal bacteria, I've already mentioned-- the pneumonias, and then Campylobacter. And now another complicated factor of infection-- so I talked to you about stomach ulcers and stomach cancer. Another thing that seems to be coming along with infections is autoimmunity. So in the last section of the class you heard about immunity and you also heard about tolerance, that we don't react to things that are ourselves, otherwise we'd be in deep trouble. Autoimmunity can suddenly pop up from certain bacterial infections, because bacteria tend to cloak themselves with unusual sugar polymers and other kinds of structures that the body doesn't really know what to do with. And in some cases they kind of mimic things that are in the human body. So they are mimetics of normal structures in the human body. And the body just doesn't notice them at all. And then there are incidences where certain bacterial infections later on cause autoimmune disease. So a bacterium may come along. It may have something that looks kind of like something human, but not quite. The human body responds, develops antibodies, and then those antibodies cross-react with aspects of our physiology. So Campylobacter jejuni is often a contaminant in poultry. It's a severe GI infection. But later on people get diseases such as Guillain-Barre, which is a neuropathy where the ends of your limbs become numb and non-functional. So there was a famous football player, the one they called "The Refrigerator," who had a serious case of Guillain-Barre resulting from very much an infectious disease, which converted into autoimmunity. So let's now look at antibiotic targets. And to look at antibiotic targets, I think the first clear place to look is at the bacterial cell wall. Now when we first started talking about prokaryotes, things that include bacteria, we talked about the fact that these single celled organisms have to have a robust cell wall to prevent osmotic shock. They have to have some kind of thing to keep them from taking up too much water and basically exploding because of osmosis. Water floods in to balance the salt concentrations. So they have a complex cell wall, which is made of a macromolecule called peptidoglycan. And it's usually one word, but I want to just underline peptidoglycan because it's a fascinating polymer that's made up of peptides and linear carbohydrate polymers. So if you look at this typical bacterium, this is just a cartoon of the peptidoglycan. So it's a cross-linked polymer, where in one direction it has repeating carbohydrate units. I'm not drawing those complex hexose structures there. I'm just drawing it in cartoon form. And those are carbohydrates known as NAG and NAM. NAG is N-acetylglucosamine. It's a hexose sugar. NAM is N-acetylmuramic acid. It's another modified sugar.
And on one of those sugars, there is a reactive site that allows you to basically cross-link these polymers into a meshwork. So it's a feat of engineering to build this amazing polymer. It starts being built on the inside, in the cytoplasm. And then the components get flipped onto the other side of the cytoplasmic membrane of the bacteria. Then they get polymerized in place to make this complex meshwork of a polymer that creates the rigidity of the bacterial cell wall. It's generically known as a peptidoglycan. Different bacteria have different peptidoglycans. There are several modifications that might be specific to particular bacterial serotypes. But this is the generic structure, where you have a polymer that's built of sugars. You can recognize the sugar structure there going in one direction and the peptide component that cross-links across in order to make this mesh. And bacterial walls have different amounts of this, but it'll build up to a really strong, rigid meshwork that's permeable to things, small molecules and water. There are holes, and so on. But it creates a mechanical rigidity so that osmotic shock doesn't occur on the bacteria. Any questions about that? Does that make sense? So, in a sense, it's their exoskeleton, if you want to think about it like that. So the properties are rigid. Without it, the bacteria would suffer osmotic shock. And it's plenty permeable, with roughly 2-nanometer pores, in order for nutrients and water to go into the structure. [VIDEO PLAYBACK] - --have E. coli growing here. And it's living. You can see it start to grow. Here we add penicillin. We're going to see these bacteria-- PROFESSOR: These are bacteria, rod-shaped bacteria. - There wasn't any microphone on this, so-- PROFESSOR: And I'm going to ask you to just keep watching this kind of carefully. - There goes another one, boop, boop. [LAUGHTER] - Poking holes in the cell wall, boom, bacteria is dead. PROFESSOR: Look at some of the bacteria disappearing. All right, I guess [INTERPOSING VOICES] OK, then we're going to-- [INTERPOSING VOICES] So we're going to leave it. [END PLAYBACK] Let's go back one. OK, now what was that? OK, so I've told you bacteria would suffer osmotic shock without peptidoglycan. Those are bacteria that you see popping, as the person who was talking said, because the peptidoglycan cannot be made. There is an antibiotic that's added. It is penicillin that's added to the bacteria. And it stops-- as bacteria grow, they have to make a bunch more peptidoglycan, because if you're doubling, you've got to make twice as much peptido-- you've got to double the amount of peptidoglycan. If you have something that inhibits the peptidoglycan being made, you have a bacterium that's trying to stretch out what it has, and it's not resistant to osmotic shock. And what you saw was the bacteria basically undergoing cell death via osmotic shock, pretty graphic, pretty visual. So penicillin was one of the first antibiotics that was described for the treatment of bacterial infections. And we'll go to the timeline of that in a moment. So when we talk about bacteria, the original classification of bacteria is into three different subtypes, gram-negative, gram-positive, and mycobacterial. This is actually the first way that people would take a look at your cell-- at the bacterial cells and diagnose roughly what kind of bacteria they were. Did they fall-- which of these broad families did they fall into? Because it would help in defining how you would treat the infectious disease.
So I want to show you the difference between the cell wall of these various types of bacteria. And the truth is, if you have an infectious disease, your wish is, if you had to pick one of the three, that you have a gram-positive disease. And I'll explain why that is in a moment, because it's all to do with how drugs can get into the bacteria to inhibit vital functions in order that they die and they don't take over your system. So let's look first at gram-positive bacteria. They're shown here. This is a section of a bacterium. Gram-positive have a single cell wall. And they also have a thick layer of peptidoglycan. So they gain rigidity by basically having an extracellular thick layer of peptidoglycan coating them. There is a schematic of it here. So here would be the cytoplasmic membrane. And here would be the peptidoglycan, shown in orange and pale, buff-colored circles. So that would be where their peptidoglycan is. And then there are some other glycoconjugates that actually stick out beyond that. But there is only one cytoplasmic membrane. That's the standard phospholipid bilayer. And the peptidoglycan is quite thick, relatively, 20 to 80 nanometers across. So that's how wide it is. And you can, if you've got a-- if you've stained a bacterium under a microscope, you would see that, the thickness of that wall, but the absence of a double wall. The gram-negative bacteria have a double wall. The inner membrane is pretty standard. It's just typical phospholipids. It looks like the inner cytoplasmic membrane of the gram-positive bacteria. And then it has an outer wall. So the inner membrane is typical. And then the outer wall has one leaflet that looks kind of normal. And then it has a second leaflet that's sort of decorated, honestly, like a Christmas tree. There is all kinds of things sticking out there that interact with hosts that they infect, and so on. And the space between the two walls is called the periplasmic space, because it's between. It's not the cytoplasm. It's what's called the periplasm. Now, what's interesting about these, the gram-negative bacteria, is they have quite a bit less peptidoglycan, only about 7 to 8 nanometers. So that's pretty interesting. But they sort of gain robustness from that second wall structure that's coating on the outside. Now, the challenge with gram-negative bacteria relative to gram-positive bacteria is that any drugs you develop-- if they're targeted at intracellular sites-- have to get through two walls, not just one wall. So they are harder to treat. And they also have a lot of characteristics that make them more prone to resistance development. So I want to point out to you, on this electron micrograph, you can actually see the double wall, the dark band, a space, and then another dark band, whereas here you see a thin single wall, but you see a lot of junk on the outside. Is everyone seeing the differences just to look at them? OK, so what's this gram thing about? What does this stand for? It simply stands for a chemical dye that stains peptidoglycan. And it was invented or discovered by Professor Gram. That was his name. So when someone says you got a gram-positive infection, gram-negative infection, it's how those cells look when they've been treated with this stain. Gram positives show up very positive to the stain because there is a lot of peptidoglycan on the outside that absorbs the dye and shows a strong color.
The gram-negatives don't show very well with a Gram stain, because the peptidoglycan is tucked in the periplasm, not on the outside of the cell. So if someone does a quick check on a bacterial streak or an infection that you have, they might treat it with the Gram stain and say gram-positive or gram-negative just based on that simple color analysis. And so in one case, the peptidoglycan is abundant and accessible. In the other case, it's very, very much thinner and less accessible to the dyes. Now, this probably looks like stone-age stuff to you, because how much can you learn by these simple colorimetric stains? We're certainly moving in very, very different directions. But let me just finish off with the third type of bacteria, the mycobacteria, which include Mycobacterium tuberculosis. And they have a different kind of wall, again. And they're pretty unusual. And they are really, really hard to treat, because it's almost impossible to get therapeutic agents into mycobacteria. I used to work on a team with Novartis in Singapore. And they said, doing anything with mycobacteria was like trying to do biochemistry on a wax candle, literally. You just can't work with it, because they have a thick additional wall that's kind of different again. Did you have a question? No. Sorry, I thought I saw your hand up. So what they have is a typical cell wall, then some peptidoglycan, but then they have this thick mycobacterial layer, which comprises what are known as mycolic acids, which basically add this thick layer of greasy hydrophobic material on the outside of the mycobacteria that's pretty impenetrable. The cell wall is quite different. It doesn't have an outer coat. It's like gram-positives in that respect. But it doesn't stain very strongly. So it gives a weak Gram stain. So sometimes if you've got something that gives a sort of so-so response to the Gram stain, you might say, oh, it looks like a mycobacterium because of what's happening. Now, mycobacterial TB is a huge threat, because its treatment, its current treatment-- and it's the same treatment that's been around for, like, 30 years or something-- is a treatment with four different antibacterial agents that hit a bunch of different sites in the lifecycle of the bacteria. It includes these compounds shown here which are isoniazid, rifampicin, ethambutol, and pyrazinamide. And it's a six-month treatment with those medications, so a handful, four different medications, for six months. So what they were realizing in the developing world is that there was terrible compliance. The drugs are cheap, but there was no compliance. People just were not taking the pills, because they're like, I'm tired of taking these pills every day for six months. So what was developed was what's known as the DOTS program. Has anyone ever heard of this? Is anyone interested in infectious disease? It was a social system set up in order to make sure people took these drugs every day for six months in order to comply. So social workers would go to the villages in remote areas and watch people take the medications. So it's directly observed treatment to make sure they followed through, because if they had regular TB, not very resistant TB, you could overcome it, provided that you took these medications. But still, it's a hugely debilitating thing to have to deal with these treatments. Now, there are two resistant forms of TB. One is called MDR-TB. And the other one is called XDR-TB. You'll occasionally hear of these on TV programs.
MDR is resistant to three of the four medications. And XDR, which stands for extensively drug-resistant, is resistant to every single one of those medications. New medications, different mechanisms of action are sorely needed. All right, this is just what things look like with the Gram stains. So here you see gram-positive Bacillus anthracis. That's the deep purple rods. You know that's a gram-positive because it's a deep purple stain. The other cells in this picture are white cells. So you can really pick out the gram-positive. This is the structure of the chemical dye that stains peptidoglycan through absorbing into the peptidoglycan. It's a very sort of physical interaction of the dye with the polymer. And over on this slide, it's a mixture of gram-positive and gram-negative. And you can pick out the gram-positive and differentiate them from the gram-negative, which just stain sort of weakly pink. And then mycobacteria, which are formally gram-positive, don't stain very well because of that thick mycolic acid hydrophobic wall. So what would you do nowadays? Would you pull out a stain and drop it on bacteria and get some vague response? What's open to you now in the 21st century? You have a tiny sample of a bacterium. Grow it up. What would you do? You could tell exactly what it is. AUDIENCE: PCR. PROFESSOR: Yeah, you'd PCR up the genomic DNA and then go match it, because, in addition to the human genome, there are thousands of pathogenic bacterial genome sequences that are completely annotated, known. The [INAUDIBLE] has a massive compilation of these sequences. And you just go and you find out what the bacterium is based on the sequence. So now rapid sequencing efforts-- maybe there are just a few key places in a genome that you would go towards and just do a really fast array and figure out what's there and what bacterium it is, which gives you a much better clue as to how to treat it than the vague, ambiguous stains. So even though stains keep going, there are now other ways. Unfortunately, not everyone has the instrumentation to do rapid sequencing. So nowadays, there is a lot, lot, lot of interest in faster dipstick sorts of tests that can distinguish between different bacterial strains by, for example, interrogating that coat of glycoconjugates that's on the outside of the bacteria, dipstick paper tests that can give you an idea of what organism and what serotype so you can move forward and do a much more rational treatment of those organisms. OK, let's see what's-- yes. All right, so where did the antibiotics first come from? Any questions so far? OK, so where did the first antibiotics come from? From a couple of accidental discoveries. Who has heard of the Fleming experiment? Who knows about that discovery of penicillin? Yeah, so there was an original observation that predated that which sort of suggests that Pasteur was a pretty smart guy, because he contributed in a lot of different areas. He discovered that some bacteria tend to release substances that kill other bacteria. That was in the 1870s. Then later on, there was another sort of spread of antibiotic agents. And it came with the discovery that things like arsenic derivatives actually showed some value in treating the organism that causes syphilis. So talk about the treatment being-- the cure being worse than the infection. People were being treated, seriously, with these arsenic derivatives in the hope of wiping out the infectious agent that caused syphilis.
But you know, sometimes it was a mixed bag. But where things started to get a lot more interesting was that in 1928, there was this sort of famous historic story of Fleming discovering that some bacteria seemed to be inhibited by a particular agent that came from a fungus. And this was the origin of penicillin. So he would have a Petri dish where he was growing bacteria. And he noticed that in some of his samples, there was inhibition of bacterial growth due to an exogenous agent that had somehow contaminated the plates. So in that story, that was the substance that was named as penicillin. The mold-- mold is the fungus-- actually inhibited the growth of staphylococcus bacteria. And it was called penicillin. And then a lot more time went by. But in the 1940s, the active ingredient was discovered. So the 1940s is sort of slap bang about, I would say, a couple of years into the Second World War. And they were able to mobilize the production of this agent. Towards the later end of the war, people had penicillin available to them. And it's basically pretty well believed that, if it wasn't for the antibiotic agents that emerged-- you know, the war ended in 1945. If it wasn't for those agents that emerged, there would have been way, way more deaths from the war. As it was, there were way too many. So penicillin was the first antibiotic that was discovered with a discrete mechanism of action. And it was discovered at a very, very important time. So that was all great news. Penicillin was produced widely. Some of you may be allergic to penicillin. There are other options nowadays. But it's the cheapest and most viable of the first-line antibiotics. Here we go. And this thing, this pointer has a mind of its own. It sort of changed its mind. But the problem was the bacterial species started to survive treatment due to development of resistance. And all of a sudden, something that worked really well wasn't working anymore. So let's try and think about peptidoglycan, what penicillin looks like and what it does, and how penicillin resistance emerges. Those are the three things I'm going to cover here. OK, so what does penicillin do? Penicillin stops the formation of this big macromolecular peptidoglycan polymer by stopping the last cross-link, stopping the chemistry that happens to join the peptide chains to make a cross-linked polymer. And anyone who is in the mechanical engineering area will know that polymers that are just strands are much weaker than polymers that are cross-linked structures which have tensile strength in both directions. So the uncross-linked peptidoglycan was weak. And what penicillin specifically did was inhibit forming that cross-link. What does penicillin look like? Here it is. It's a cool structure. It's what's known as a natural product-- a five-membered ring fused to a four-membered ring, an interesting structure. And what it would do is it would interact with the enzyme that cross-linked the peptidoglycan and basically stop it dead in its tracks. What did the bacteria do? The key part of this structure is this four-membered ring with an amide bond in it. The bacteria evolved an enzyme to chop it open, basically making it completely inactive. So beta-lactamase was evolved in the bacterial populations. It was probably derived from some other enzyme that did some useful function, but not targeted to the penicillins. But the bacteria started to survive because they made a ton of an enzyme called beta-lactamase. And then it completely stopped working.
So the chemists came up with other options, because they said, well, you know, if that doesn't work, we've got other antibiotics in our arsenal. And there is a compound that was used for years as a last-resort antibiotic known as vancomycin. It was very, very important for very serious infections, and really reserved for that use. And they thought that vancomycin might be a drug that just couldn't be defeated. This big molecule here is vancomycin. This little piece of peptide is actually the peptide that's in that cross-link. And vancomycin basically, like a glove, sat on that piece of peptide and stopped it being cross-linked. And what did the bacteria do? They evolved a set of enzymes to completely change that little piece of peptide into something that bound more poorly, giving you resistance to vancomycin as well. So when there is one drug involved, it's pretty easy to get resistance quite quickly. You just mutate one enzyme and you get a resistant strain. And the enzyme that can beat the antibiotic will win. If you've got a compound that takes five different enzymes or an antibiotic that has a very complex mechanism of action, you might say, well, this is never going to be defeated. It took five additional enzymes to evolve to make the peptidoglycan a different structure. And it's not that within every bacterium, you mutate five different enzymes and get them all working as a team. What was happening in these infections is that a plasmid with the set of enzymes was being passed around amongst bacteria. So a new bacterium could acquire resistance to this compound without evolving a whole bunch of new enzymes, but rather by lateral transfer of plasmids encoding the genes that it took to make the vancomycin inactive. All right, so let me just tell you a few of the targets. And then there is one movie I want to show you that's kind of cool. So currently, when we inhibit bacteria with antibiotics, there are a number of essential processes that are targeted with common antibiotics. So this would be a typical bacterium. One target of action is DNA synthesis and DNA replication. And the enzyme that is targeted is one we've talked about, topoisomerase. And that is inhibited by the fluoroquinolones such as ciprofloxacin, which actually target specifically the bacterial topoisomerase. So that's one way, inhibit DNA replication, bacteria can't divide. Another set of antibiotics are those that inhibit protein synthesis. So in particular, you know the tunnel that comes out of the ribosome where the growing polypeptide chain emerges after reading the messenger RNA and translating the messenger into protein? There are antibiotics to basically stick in that tunnel and stop protein synthesis. And those are things like the aminoglycosides. And they block exit from the ribosome. But you could imagine that mutating. There are the ones that inhibit cell wall biosynthesis that I've already talked to you about, the penicillins, the vancomycin. And then there are others that inhibit folate synthesis. And then there are a lot of synthetic drugs, but also a lot of natural product drugs. So both nature and chemistry have teamed up to inhibit all of these essential steps. OK, so how do you test for antibiotic resistance? You use plates where you're growing particular strains of bacteria on a plate. This would be a colony. And it's growing outwards. Where there is a colony but there is no growth around it, it means there is something in that plate that is inhibiting bacterial growth.
So these are very clear types of ways that people check to see if bacteria have become resistant to drugs. You would look for that zone of inhibition. Does it disappear with some of the resistant strains, for example? And these get pretty sophisticated now, where you can test a bunch of antibiotics in one go, where each of these colored dots represents an area where there is treatment with one antibiotic or another. So what's the problem? The problem is this graph, that as soon as an antibiotic is introduced, just a few years go by. And there is resistance to that antibiotic. So resistance basically is the gradual acquisition of machinery to somehow inactivate the antibiotic treatment. So if you take a look, here on the top is where the drug is introduced. And on the bottom is when resistance was developed. So let's go to something we're familiar with. Here is penicillin, introduced about 1940 to the general population. By about '47, there was resistance to penicillin. And you can see, this is really just a really serious series of events. So what I want to show you was resistance in action. And that'll be the last thing I talk about today, because I just want to give you a feel for what resistance looks like. So this was an experiment that was done at Harvard on just a visualization of resistance development. I think what's so fascinating is you could then go back to the plate and pluck the first pioneers who crossed that line and find out what that was. What was that mutation that let the population expand, and so on? So you could really map out the entire evolution of very, very strong resistance. So in the next class, I'll talk to you about resistance mechanisms. And then we'll talk about viruses and resistance to antivirals.
https://ocw.mit.edu/courses/8-01sc-classical-mechanics-fall-2016/8.01sc-fall-2016.zip
Let's compare kinetic energies in a two particle one-dimensional collision in different reference frames. So we could have one reference frame in which particle 1 is coming in and particle 2 is moving like that. And in particular-- so we can call this the ground frame. And now let's consider the center of mass frame. And in the center of mass frame, let's remind you that when we have two different reference frames, the velocity of the object in the ground frame-- we'll call this the ground frame g, and we'll actually write the velocity this way, just unprimed, V1-- is equal to the velocity in the center of mass frame-- so this is the primed velocity in the cm frame-- plus the relative velocity between the frames. And that relative velocity is the velocity of the center of mass. So this was our rule for describing how velocities change in different reference frames. And so we can draw the picture in the center of mass frame, V1 initial prime and V2 initial prime. Now let's compare kinetic energies in these different frames. So we know that the kinetic energy in the center of mass frame is just 1/2 m1 V1 initial squared prime-- put the prime there-- plus 1/2 m2 V2 initial prime squared, kinetic energy in the center of mass frame. How do we calculate the kinetic energy in the lab frame? Well, that's a little bit more complicated. And we'll need a little algebra to start that. So let's write the kinetic energy in the ground frame. We know it is 1/2 m1 V1 initial squared plus 1/2 m2 V2 initial squared. Now what I have to do is use the law, the velocity relationship. And this is going to take a little bit of algebra. We have 1/2 m1-- I'll write V1 initial prime plus Vcm. And remember that any quantity squared is the dot product, V dot V. So I'm going to dot this with itself, V1 initial prime plus Vcm. And I have the second term, which looks identical to this first term. I'll write it all the way down here. 1/2 m2 V2 initial prime plus Vcm, dot product with V2 initial prime plus Vcm. Now when you take a dot product, remember there's four terms here. There's V1 prime dot V1 prime, which is just V1 prime squared. There's V center of mass dot V center of mass. So that's V center of mass squared. And then there's the cross term. And because they're identical, there's a factor of 2. And it will be repeated below. So the kinetic energy in the ground frame is 1/2 m1 times-- we'll take V1 i prime dotted with itself. That's V1 i prime squared. We have the cross term, which has a factor of 2 that will cancel the 1/2. So the cross term is m1 V1 i prime dot Vcm-- the 1/2 canceled the factor of 2. Plus Vcm dotted with itself, so that's 1/2 m1 Vcm squared. Now I have exactly the same thing on the next one. So we'll write that down: 1/2 m2 V2 i prime squared, plus m2 V2 i prime dot Vcm, plus 1/2 m2 Vcm squared. Now let's look carefully at what we have. We have 1/2 m1 V1 i prime squared and 1/2 m2 V2 i prime squared. So we have 1/2 m1 V1 i prime squared plus 1/2 m2 V2 i prime squared. And you're already noticing that's the kinetic energy in the center of mass frame. We have the total mass term, 1/2 times m1 plus m2, times Vcm squared. I can just put a little check to show which terms I've done so far. And now here's the interesting one. We have m1 V1 i prime plus m2 V2 i prime, the whole thing dotted with Vcm. That represents this term and this term added together. But recall that the center of mass reference frame is defined by the condition that the total momentum in that frame is zero. So this term is zero.
And thus, we get that the kinetic energy in the ground frame is equal to the kinetic energy in the center of mass frame plus 1/2 m1 plus m2 times Vcm squared. And that's how kinetic energy is in different reference frames. And the next thing we'll look at is how that changes when we have a collision.
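Written compactly, the algebra just described looks like the following (a sketch in standard vector notation; primes denote center-of-mass-frame velocities, exactly as in the derivation above):

```latex
% Velocity transformation: \vec{v}_i = \vec{v}_i' + \vec{V}_{cm} for i = 1, 2
\begin{align*}
K_{\text{ground}}
  &= \tfrac{1}{2} m_1 (\vec{v}_1{}' + \vec{V}_{\text{cm}})\cdot(\vec{v}_1{}' + \vec{V}_{\text{cm}})
   + \tfrac{1}{2} m_2 (\vec{v}_2{}' + \vec{V}_{\text{cm}})\cdot(\vec{v}_2{}' + \vec{V}_{\text{cm}}) \\
  &= \underbrace{\tfrac{1}{2} m_1 v_1'^2 + \tfrac{1}{2} m_2 v_2'^2}_{K_{\text{cm}}}
   + \underbrace{\bigl(m_1 \vec{v}_1{}' + m_2 \vec{v}_2{}'\bigr)\cdot\vec{V}_{\text{cm}}}_{=\,0,\ \text{total momentum in the cm frame}}
   + \tfrac{1}{2}\bigl(m_1 + m_2\bigr)V_{\text{cm}}^2
\end{align*}
```

The middle term vanishes by the definition of the center-of-mass frame, leaving the result stated above: the ground-frame kinetic energy is the center-of-mass-frame kinetic energy plus the kinetic energy of the total mass moving at the center-of-mass velocity.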
https://ocw.mit.edu/courses/7-014-introductory-biology-spring-2005/7.014-spring-2005.zip
And, we're going to make a major shift. You're going to feel like this is a whole different class compared to what we were talking about last time, because we're jumping from the biogeochemical cycles, or looking at the biosphere as essentially a large biochemical machine, to studying individual populations of organisms, and the communities that they make up when they come together. So, before we were really talking about organisms as they function in the biosphere. Mentally, we're grinding them all up and thinking of them as a collective biochemistry basically. And now we are going to stop grinding them up, mentally, and think of them as individual organisms. So, the next series of lectures, we're going to talk about population ecology. If you remember the first lecture I gave, we talked about the hierarchy of organization within ecological systems, and then we are going to talk about competition between organisms within a population, and between organisms of different species, and we're going to talk about predation, and mutualism. These are all interactions between organisms that affect the fitness of organisms. And then we'll, at the end, talk about community structure. So this is sort of the outline for the rest of my lectures, not for this lecture. So, today we are going to talk about properties of populations. We're going to analyze how we measure growth rate, growth and death in populations, and this will include populations that have an age structure, and populations that don't. And this is all in preparation for the next lecture where we will talk about human population growth. So, in this field of population ecology, which is, as I told you in the first lecture, a whole field-- at some universities you could take three courses in population ecology, and you could get a Ph.D. in population ecology. I mean, this is a whole field that we're going to cover in two lectures. But what population ecologists worry about fundamentally-- well, they don't worry about it, this is what they study-- is what regulates the density of populations. Obviously, it's a function of how fast they're growing, the birth rate, and how fast they're dying, the death rate. But what are the factors that actually influence those rates? Is it competition with other organisms? Is it the entire structure of the community? Is it the availability of food? Is it the various abiotic properties of the environment: temperature, etc.? So, they analyze these and basically try to model the population growth as a function of these various parameters. The other questions they ask, is how are populations distributed in the environment? Are they clustered? Are they evenly distributed? This has specific meanings about their ecology. And, the other thing that people are really fascinated by, which is a really tough question, is why are some species' populations extremely abundant, while others are rare? And one of the discussions we always have in my lab, we work on an organism that's extremely abundant, this Prochlorococcus, which I told you briefly about, is the most abundant photosynthetic cell on the planet. So, my students tend to keep saying why is it so successful? And I keep saying, it's successful but there are thousands of other species who are also successful. Abundance does not equal success. Endurance equals success. If you're here in the next generation, you're successful. If you're not, if your species is disappearing, then you're not successful.
So, speaking of abundance, let's talk about how we measure abundance, population ecologists. And this is just one example. Obviously, for microorganisms, or some microorganisms it's really easy because they're tiny relative to their habitats. So for the prochlorococcus that we work on, there are 10^5 cells per milliliter. So, we can go take a milliliter of water and measure how many cells there. But for some organisms, larger ones, that are widely distributed, it's not that easy. So, one method is mark and recapture. That's used a lot for things like birds and butterflies. For a bird, the mark would be putting a band on the bird. For a butterfly, they often take a magic marker and put a mark on the wing. Well, that's largely what they do. You try to mark individuals in some way that would not influence their survivorship rate. So, if N equals the population size, that is, that's our unknown, what we're going to do is capture, say, for butterflies or moths, you use a butterfly net, or moths you can use a light to track them; for birds, you put up these big mist nets. They fly into them; they get tangled up a little bit but they don't get hurt. Then you band them, and that we let them go. That's the way you mark them. So, we're going to say n1 equals the total number of marked individuals released. So you capture them, you mark them, you release them. n2 is equal to, and then you go out sometime later and you recapture as many individuals as you can find, and this would be the total number [SIREN] that doesn't sound like a fire drill, does it? I assume we're good to go here. So, n2 is the total number of recaptured. And we're going to say m2 is equal to the numbers recaptured that are marked. OK, and then we assume that the fraction of the recaptured that are marked represent the fraction in the total population that was marked. So, we say m2 over n2 is equal to n1 over N. And the number that we're looking for, population size, is equal to n1, n2 divided by m2. So, of course, this assumes that there's no effect of the marking of the individuals. It assumes that there's no bias in the trapping for the marked or not marked individuals. There's all kinds of assumptions that underlie this. It's a start for assessing the population size. OK, so how do we measure population growth? We're going to first start with looking at populations that have age structure. Now, I hope you printed out the slides that were on the Web, because I'm depending on these overheads a lot for this lecture because we wouldn't get through any of it if I wrote all this stuff on the board. So, we're going to talk about populations that have an age structure. And the data I'm going to show you here is for human populations. But this applies to any population that has differential birth and death rates as a function of the age of the organism, OK? So, in these populations if birth rate and death rate are high, the population is dominated by young people. And, we'll look at this in a minute. And, if B and D are low, dominated by old people, or older I should say, since I now fit into the old category. OK, so here's a typical population age distribution for developed countries, where each slice here, these are females on the right, males on the left, and each slice is an age category: zero to 10 years, 10 to 20. And you can see that in these kinds of populations, you have a fairly even age distribution. Long periods of no net growth in a population lead to this. 
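To make that estimator concrete, here is a minimal sketch of the mark-recapture calculation, often called the Lincoln-Petersen estimate; the counts below are invented purely for illustration:

```python
# Mark-recapture estimate of population size N.
# Counts are hypothetical, chosen only to illustrate the formula.
n1 = 200   # number of individuals marked and released on the first visit
n2 = 150   # total number captured on the second visit
m2 = 30    # number of those recaptured individuals that carry a mark

# Assumption: the marked fraction of the recapture sample equals the
# marked fraction of the whole population, m2 / n2 = n1 / N.
N_hat = n1 * n2 / m2

print(f"Estimated population size: {N_hat:.0f}")   # 1000 for these numbers
```

The same caveats from the lecture apply: the estimate assumes marking does not affect survival and that marked and unmarked individuals are equally likely to be caught.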
In these developed countries, and we're going to examine why this is, there's basically an even replacement rate of children for adults. And one of the things we worry about when you see this kind of age distribution, although it's good in terms of population growth, is when you have few young people and a lot of older people, who's going to take care of them, which is what's behind the Social Security crisis. But we won't get into that. Since you're the young people and I'm the old people, I don't want to dwell on that. OK, so what demographers do for human populations is project what the population will look like in the future based on the reproductive rates of the present. And you can see for the US here, it's reasonably stable if you look at these three snapshots. We're going to go backwards starting with 1950, and show you what the population has been doing since 1950. And I'm just going to walk through this. You only have one in your handouts, but I'll show you how it's moving along. Moving along, you can think of this as generations moving through the population. And this is the date up here. So, this is 1950, 1955, you can see this red cohort. A cohort is a group of individuals that were born at roughly the same time. So, you can see that red cohort there. And we are going along, 1965. This lip here, that we can now see, is the postwar baby boom. That's what I'm a member of. You can see it in this bulge in the population. And now we're marching along. Here's my cohort, and I just put these lines on to keep you oriented. And here comes you guys. I think those are you guys, 1985. That's roughly right, because I never know when I've last updated these slides. So, and here you go. See, here's the big bulge of all of these baby boomers that you guys are going to have to take care of. And now, we can actually see an echo. This is what's called the baby boom echo. These are the kids of the baby boomers, which is you guys. But you can only see that as we march through it. So, here we are at 2020. But you get the impression that it's a fairly stable, now, even age distribution in the US and these developed countries. Oops, here we go a little bit more. Sorry. 2035, 2045, OK. Now, in less developed countries, the birth rate's high and the death rate's low. We see a much different age distribution. And here's Uganda, with a very high reproductive rate, showing the projections to 2050. And here, we can march through from 1970. You can see this huge expansion-- do you know what that noise is? OK. Does anybody have a hypothesis for what that noise is that we could test? Oh, OK, I guess we can't do anything about that. OK, so here's Uganda. And you can see the dramatic difference in a population where there are large birth rates and declining death rates. And we're going to get into analyzing that in the next lecture. I just want to show you this here so you have a feeling for what we are talking about in age structured populations. So, let's now look at how we are going to analyze these populations to try to quantify growth rates or replacement rates. And to do this, we set up life tables. And this is basically what insurance agencies do for human populations. But we do the same thing for populations of ecological interest. We use the same techniques. In this lecture, I'm going to use a unicorn as my example, because I can make up the numbers because they don't exist. But in a textbook there are examples for real organisms like lizards and things like that.
OK, so we need to define an age interval, X, and then this is the number of individuals in the original cohort. Again, a cohort is a group of individuals that are born within a defined age interval. I mean, I think of you guys as a cohort. DX is the number dying during that interval. All of this is on the Web. These slides are on the Web. So, you don't need to write it down, but you can. And, NX is the number of individuals surviving to age X. LX is the proportion of individuals surviving to age X. So, that's just equal to NX divided by N0. And, we're going to look at a table that shows this in a minute. And MX is something that's measured. It's the per capita births during age interval X to X plus one. And this is also called age-specific fecundity. And you can think of it as the number of female offspring produced per female in a particular age category. OK, is everybody comfortable with that? So, with these definitions, we're going to build a life table that will allow us to actually calculate some things of interest. And, what do we want to calculate? We want to calculate the survivorship probability, LX. We want to calculate the net replacement rate. No, it's not really a rate-- the net replacement of the population per generation, which we are calling R0. It's basically the number of offspring produced to replace each individual per generation. And then, for now, this is what we are going to look at. And to do that, we are going to generate what's called a cohort life table. And to do this, we follow a cohort of individuals throughout their lifetime. Or, we can also generate a static life table, because it's not that easy sometimes to have a group of organisms that are born at the same time and to follow them throughout their entire lifetime. So a static life table is taking a snapshot at one time of the population, and calculating the age structure. So, you take a snapshot, and we look at the age structure. And, we are going to do this in a second so it will make more sense. OK, so we've defined our terms. And now, we are going to start by calculating LX. So, this is a cohort life table for unicorns. We're going to start out with a hundred baby unicorns that we have in our imaginary unicorn pen. So, this is a cohort size of 100. And, we find that after a year there are 50 of them left. 50 of them die in the first year. So, the probability here, the proportion surviving, NX over N0, is 0.5, and then a year later, 0.4, then 0.3, and then by four years old, no unicorns left. They don't live very long. All right, so this is what's called the survivorship probability, and what we can do is look at that. Different types of organisms have different, what we call, survivorship curves. And this is discussed in your textbook. We'll just describe the extremes. These are just theoretical survivorship curves. But some organisms have a very high probability of survival as a function of age until they reach an old age. And then, they have a very low probability of survival. There are other organisms whose survivorship probability drops very fast, right after they're born. But if they make it through that interval, they're pretty good to go. And then there are some that have a steady probability of dying. So, where are humans, do you think, on this? Two? No, but that's OK. Let me ask you the other way; where are frogs, do you think? Yeah, OK, so you got that image. Tons of frogs' eggs: everybody eats them.
Or for that matter, the video I showed towards the end of the last class where there were all those eggs of, what was that? Remember all those eggs that everybody was eating? Herring, thank you. So, any organism that puts out just tons of fertilized eggs, knowing that most of them will be eaten but some of them will survive, falls here. And, humans actually fall here. Any organism that has a high investment in the care of offspring, they have few offspring but they invest a lot into the care of those offspring, would fall here. And then this, actually birds and things fall here. So, here are some real but idealized survivorship curves. These are humans. And males and females are different. I'm not sure whether we understand that completely yet. Does anybody know whether that's socially constructed? Now that there are more women experiencing equal stress in the workplace as men, that will probably even out. But, I think there are more women born, or girl babies. Anyway, there's some interesting biology behind this, but I don't know. I don't remember. And, here's grass, of course grasses spew out all these seeds everywhere, and very few of them survive, also these frogs, etc., and birds are commonly like this, where they're somewhere in between. Why do we care so much about survivorship curves? Who cares? Well, I mean they're inherently interesting to population ecologists, but there are also uses for them. For example, if you want to conserve a species, if you're worried about a species going extinct, you want to figure out whether it's better to conserve the young ones or the old ones. For example, turtle species, you would pick a certain age group where the probability of survival is high, and decide to target the conservation of that age group. So, let's continue with, we are building our life table here. So, we have the survivorship probability, but what we really want to get at is understanding whether or not the population that we are describing is replacing itself with each generation. So, maybe we should define, when R0 is equal to one, that means the population is exactly replacing itself. So, this is replacement, so the actual size of the population would be steady. If R0 is less than one, the number of individuals is declining. And R0 of greater than one, it's increasing. So, we want to know for our unicorns what that is. And to get to that, we have to know something about the birth rates. So, MX is the average offspring per female of age X. So, this is called the age-specific fecundity. And that's something that's a known property of the population. Whoops, oh, my, my, my, my, I'm missing a slide. Oh, there we go. They're out of order. OK, so we have MX. So, how do we calculate R0? Well, R0 is the sum of LX MX. It's the sum of the survivorship times the age-specific fecundity, and in this case, it sums up to three. So, what's happening to our unicorn population? It's growing. Yeah, we are getting three unicorns in each generation for every one that existed before. So, in our imaginary unicorn population, we're going to be knee deep in unicorns pretty fast. OK, so I forgot my watch, so I have to look at my computer. What if we can't follow a cohort? Oh, thank you. How do we create the same kind of analysis for a population that we can't follow through time, but can only look at as a snapshot? OK, this is where we go to the slide. If you don't have it in your handout, it doesn't matter. I just got it off the web this morning.
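To make the R0 arithmetic concrete, here is a minimal sketch of the cohort life table calculation. The survivor counts follow the unicorn example above; the fecundity values MX are made up purely so the sum comes out to three, since the lecture doesn't give them:

```python
# Cohort life table for the imaginary unicorns.
ages = [0, 1, 2, 3, 4]
n_x  = [100, 50, 40, 30, 0]        # survivors at the start of each age interval
m_x  = [0.0, 2.0, 2.0, 4.0, 0.0]   # hypothetical age-specific fecundities

n0  = n_x[0]
l_x = [n / n0 for n in n_x]        # survivorship: L_X = N_X / N_0

# Net replacement per generation: R0 = sum over ages of L_X * M_X
R0 = sum(l * m for l, m in zip(l_x, m_x))

print("l_x =", l_x)                # [1.0, 0.5, 0.4, 0.3, 0.0]
print("R0  =", R0)                 # 3.0 here: the population triples per generation
```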
I couldn't find a skeleton of a unicorn because, of course, that's totally imaginary, but I found a mastodon. So, just imagine that this is a unicorn, and I couldn't find a unicorn horn, so this is a sheep's. But, all these principles apply. I just discovered Images in Google, which is really exciting. So, you're going to get subjected to this for a while. So, OK, so what you can do, and this has actually been done with mountain sheep, is you go out and you find dead sheep, you find skeletons of sheep that have died of whatever causes. And you go out, and you sample until you have, say, 100 skeletons. And that's your cohort that you're looking at, at one point in time. And from their horns, you can actually tell how old they were when they died. You can count the number of rings, so that's what's here, annual horn rings. This is for a Dall mountain sheep. So, you can say, well, this one died when it was two. That one died when it was 10. That one died when it was whatever age. And then you can create the same kind of life table, a static life table, where you have a hundred skeletons. That is your cohort. You look at the number dying at age zero to one, the number that died when they were one year old, the number that died when they were two years old, etc. And so, from these data, these are the data that you collected, you can calculate this column, NX, because NX minus DX equals NX plus one. Does that make sense? I can never tell. I know if I wrote this on the board it might be easier, but it's so obvious, isn't it? We are just saying that this is the number that died at each age, and this is the number you started with, so that tells you how many survive to that age, that age, and that age. And then, once you have this column, you can calculate your proportion surviving, LX. LX equals NX divided by N0, OK? So, we are doing exactly the same thing as we did before. It's just that we're getting the NX column instead of getting it by following the cohort. We're getting it by calculating it based on how old dead organisms were when they died. And in my ecology class that I teach, some years we actually go out to the Mount Auburn Cemetery. And you can do this from human gravestones. You can go to the cemetery, and pick out a number of gravestones, and see the age at which humans died. You create yourself a cohort, and you can create a life table. And you can do that for different eras, and see how survivorship has changed. OK, now so that's the analysis for populations that have an age structure. Now we are going to go on to a simpler type of population, and that is a population with a stable age distribution. And to do this, you're going to help me, and we're going to use your calculus that you've all been studying. So, instead of the unicorn now, have your imaginary population be a population of microbes that divide in half. They multiply by dividing in half. So, each one of these is a microbe that's dividing in half. This is your mental image. This is what's called exponential growth. It's obvious how that happens. And we're going to model this population, we're going to first assume unlimited resources. OK, so we're going to say that the rate of population increase is equal to the average birth rate minus the average death rate times the number of cells.
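Here is a minimal sketch of that static-life-table bookkeeping (the death counts are invented for illustration): starting from the number found dead in each age class, you reconstruct the number surviving to each age and then the survivorship.

```python
# Static life table from a "found skeletons" sample; numbers are hypothetical.
# d_x[i] = number of individuals in the sample that died during age interval i.
d_x = [40, 25, 15, 10, 6, 4]      # sums to the cohort size, here 100 skeletons

n_x = [sum(d_x)]                  # N_0 = total cohort size
for d in d_x[:-1]:
    n_x.append(n_x[-1] - d)       # N_{X+1} = N_X - D_X

l_x = [n / n_x[0] for n in n_x]   # survivorship L_X = N_X / N_0

for age, (n, l) in enumerate(zip(n_x, l_x)):
    print(f"age {age}: N_X = {n}, L_X = {l:.2f}")
```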
OK, so we are going to now turn this into math, and that is to say that dN/dt, the increase in population, where N is the population number, is equal to the birth rate minus the death rate times N, which is the number of cells, OK? And then, we're going to let B minus D, the birth rate minus the death rate, be what we call r. And, this is what's called the intrinsic rate of increase of a population. OK, what are the units of r? One over time, exactly, time to the minus one. So, let's look at that more carefully. And also, it's a little misleading to say it's the rate of increase because r can be positive or negative, however it turns out. It can be positive or negative, but that's what it's called. So, we have dN/dt equals rN. Substituting r into this equation and dividing both sides by N, we get one over N times dN/dt equals r. OK, so r has the units time to the minus one. And so, let's ask a question. Given N0, that is, I give you the population density at some time which we're going to call T equals zero, and given a population growing according to this, which is exponential growth, what if we want to know the population, what N is at any time T? We want an equation that will give us, given N0, what the population density would be at some time, T. What do you have to do to this to get that? Yeah, so who wants to do that for me? Come on. You guys did this freshman year. It's the easiest thing there is, right? Every class I've had has had somebody who was willing to come up and do this. OK, so we'll just add a T there. So, N at some time T is equal to N0 e to the rT. And so, we could say, then, r equals natural log of NT minus natural log of N0 divided by T. And I like to write it that way because then we know what this looks like, right? Let's plot that. This is N and this is T. What does that look like? I know this is really rudimentary, but remember we're modeling population growth. So here, if we plot the log of N, and this is what we do with cultures of microorganisms. That's a flask. Those are a lot of microbes in there. And what we do is we sample it at various points in time, and if you take the log we get a nice straight line that we can draw a regression through. And what's the slope of that line equal to? r. Exactly. The growth rate, in units of time to the minus one. OK, what's the Y intercept? The natural log of N0. OK, now suppose we want to calculate the doubling time of the population, the time it takes to double. How would we do that? Let's first define it. It's the time, T, that it takes for NT to equal 2N0, right? If we start with N0, the population doubles, and NT is what we have at that time. So, we want to solve for that T, the time it takes for the population to double. Since natural log of NT over N0 equals rT, then the natural log of 2N0 over N0 equals rT, and T equals the natural log of two divided by r, which equals our doubling time. Does that make sense? I'll put this out there so you can see it better. What's the natural log of two? 0.69, thank you, always a handy thing to have in our repertoire. So, it's often easier to think about the time it takes for a population to double than the instantaneous growth rate.
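Here is a short Python sketch of the calculation just described: fit a line to log N versus time to estimate r, then take the doubling time as the natural log of two divided by r. The culture counts below are invented, generated from an assumed r of 0.5 per hour, just to show that the regression recovers the slope.

import numpy as np

# Invented culture counts, generated from N = N0 * exp(r * t) with r = 0.5 per hour.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # sampling times, in hours
N = 1000.0 * np.exp(0.5 * t)              # pretend these were measured cell counts

# On a plot of log N versus t, the slope is r and the intercept is log N0.
slope, intercept = np.polyfit(t, np.log(N), 1)
r = slope                                  # units: one over time
N0 = np.exp(intercept)

doubling_time = np.log(2) / r              # natural log of two is about 0.69
print(f"r = {r:.2f} per hour, N0 = {N0:.0f}, doubling time = {doubling_time:.2f} hours")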
https://ocw.mit.edu/courses/8-01sc-classical-mechanics-fall-2016/8.01sc-fall-2016.zip
So I would like to tell you about friction today. Friction is a very mysterious subject, because it has such simple rules at the macroscopic scale, and yet-- as I will show you and tell you a little bit about today-- very different rules at the nanoscopic scale. So I would like to tell you about friction at the nano scale. So at the macro scale, there are classic rules that were discovered very early on. For macroscopic objects, it was actually Leonardo da Vinci who found the first laws of so-called "dry" friction. So this was da Vinci as early as in the 15th century. And basically, he discovered that friction depends on the force, on the weight of the object on the surface, and that there's a certain coefficient that depends on the material. Da Vinci did not publish his notes. They were just hidden in his notebooks for a long time. And Amontons in the 17th century rediscovered the very same laws that da Vinci had found. And these are two very simple laws. The first law is that if you have an object sliding on a surface-- so there's a force that you are pulling on the object, and there's the friction force acting on it in the opposite direction-- then Amontons found that the friction force is proportional to the normal force. So the normal force is the weight of the object. So he found that the friction force as a function of normal force is just the simple interdependence. So if you make the object twice as heavy, the friction force will be twice as large. This makes, maybe, sense. It agrees with our intuition that maybe an object twice as heavy should have a force twice as large. The second law of friction is much more surprising-- namely, that if you have two objects of the same weight-- so you have the same normal force-- but very different areas of contact, then the friction force is the same. So if you have a friction force for this object-- let me call it friction force one-- and the friction force of this object-- friction force two-- then the friction forces are the same. One equals friction force two. So surprisingly, the friction force is independent of contact area. This seems very counter-intuitive, because if you think of friction as some kind of sticking, some kind of effect where two surfaces rub together, then we might naively assume that the larger the surface, the larger the friction force. So one maybe intuition of why this might not be the case, or one possible explanation, is that surfaces always have roughness. And so the actual contact area between two surfaces-- so if I really draw the surface of the object here-- then the actual contact area between two surfaces might not be the observed area of the macroscopic object. And so this may be one way to explain why the friction is independent of the area, because it might only be proportional to the actual contact area of the system. We have already described two of the laws of macroscopic friction. There's a third law that was actually discovered by Coulomb. And again, this is a little bit surprising result-- namely, that for this kind of friction, the dry friction, it is independent of the velocity of the object. So this is, for instance, something that is not true for air resistance. If you drive a car, the faster you go, the more friction there is, the more air resistance there is. But if you have two surfaces sliding on top of each other, then Coulomb found that the friction force is independent of velocity. Again, this is something quite counter-- not quite, but somewhat counter-intuitive. 
And as I will show you a little bit later, it actually is no longer true when you look at the nano scale. Coulomb was also the first one to distinguish static friction, which is the force with which an object resists motion for a while. You make the force larger and larger, so for a while, the object will stick. And then it will start sliding. So he distinguished static friction from dynamic friction. And because we have this law that says that the frictional force is proportional to the normal force, I will remind you of the first law-- the friction force being proportional to the normal force, like so-- we can define a slope. So we can write the friction force as some slope, mu, times the normal force. These are both forces, so they have the same units, which means that the so-called coefficient of friction-- which could be therefore static frictional or for dynamic friction, for kinetic friction-- this coefficient is just a number. And typically, this number is between 0.3 and 0.6. This makes a lot of sense. The friction force is not quite as strong as the normal force. So the friction force with which an object resists motion is not quite as large as the normal force with which it pushes down on a certain surface. However, surprisingly, there are materials where mu is actually larger than 1. It's possible, for instance, for rubber. Which means that an object can be harder to pull sideways than actually-- it can be harder to pull an object sideways than to actually lift it from the surface. So this is a surprising result. Now, it turns out that despite these very simple laws, our fundamental understanding of friction is rather limited. So for instance, nobody in the world can predict this coefficient mu of friction. Nobody in the world can, from microscopic or principle say-- being told what the two surfaces are, say plastic or wood-- what is the friction coefficient. Can you calculate it for me? We cannot predict it. We can only measure it, and then tabulate it-- make a table-- and say this kind of object on this kind of surface has this kind of coefficient for friction. So that's one, if you want, failure of physics. How is it possible that we can't describe, and quantitatively describe, something as simple as that? Also, in many instances, we would like to change the coefficient to friction. Typically, we would like to reduce friction, because friction is something where we dissipate energy. It is estimated that as much as 3% of the nation's GDP is "wasted" on friction. So friction destroys energy in the amount of billions and billions of dollars-- cars driving on streets, machines working, and so on. Sometimes friction is, of course, desirable. If you want to stop your car on the road, you want friction to actually work. You wouldn't like to do away with it completely. But in many instances, it's just a source of energy loss. So if we could even make a small change in the friction, this would be not only important for science, but it might also be very important for technology. So here, I've told you about friction of macroscopic objects. Now people-- chemists, physicists, other scientists, engineers-- are able to build smaller and smaller machines. And there's tremendous interest to build nanoscopic machines-- machines that are maybe only a few molecules or a small number of molecules or atoms wide and big. And there's been tremendous success. 
People have built nano pumps-- tiny pumps-- even a tiny car, consisting of a relatively small number of molecules that can move on the surface if you inject the electrons. So as I would like now to tell you about how bad friction is at the nano scale. So let's ask the following question-- how bad is friction at the nano scale? Your general intuition might be that friction gets worse at the nano scale, simply because the bulk-- the volume of an object-- is what determines how many atoms are in it, whereas friction is a surface effect. And because the volume increases more quickly with the size of the object then does the surface, you might think that probably, there's a bigger problem with friction at the nano scale than there is for macroscopic objects. To illustrate how bad this really is, let's consider a tire of a car. So this tire is rolling as your car is driving. And we can try to estimate what happens to the wear of this tire. So let's say that this tire maybe wears off. Wear is on the order of maybe, let's say, 10 millimeters every, maybe, 50,000 kilometers. So that means we take off a number of atomic layers here every 50,000 kilometers. Now, how many atomic layers is 10 millimeter? If we say that one atomic layer is on the order of, let's call it 1 Angstrom-- which is 10 to the minus 10 meters-- then 10 millimeters is 10 to the minus 2 meters. So that is equal-- since one layer is 10 to minus 10 meters-- that is equal to 10 to the 8 atomic layers. We can also ask ourself, how far has the tire been moving? So if we assume that the circumference of the tire-- so this is the radius of the tire. Let's guess that maybe the circumference of the tire here, which is given by 2 pi r, is on the order of 1 meter. Then this means that 50,000 kilometers-- or 5 times 10 to the 7 meters, which is 50,000 kilometers-- is basically 5 times 10 to the 7 revolutions. So in 50 million revolutions-- 5 times 10 to the 7 revolutions-- we lose 10 to the 8 atomic layers. So it comes out that we lose one to two atomic layers per revolution. If we had used maybe a size of an atom more closely as two Angstroms, we would come out at about one atomic layer. So this is a remarkable effect. Even when we drive our car, each time the tire rolls around, we leave one atomic layer on the street. For a macroscopic object, as we see, it doesn't matter. We can drive a very large macroscopic distance before we wear off a few millimeters of the tire. But if the tire of the car was itself only a few atomic layers big, then you see we would have a huge problem. So this is why people are interested in understanding, and maybe manipulating, friction at the nano scale. So what does friction at the nano scale really look like? And why are people interested in it after so many centuries of first studying friction? One reason is that now one can do experiments at the nano scale. And the new tool that has enabled that is what is called an atomic force microscope. What is an atomic force microscope? Well, it's essentially a very, very sharp tip-- atomically sharp tip. So you have a surface here, which consists of atoms. And these are individual atoms. And you're trying to study the friction. And now what you do is, you bring an object here shaped in the form of the tip, which also consists of atoms. And the object is so sharp that you have, ideally, just one atom near the surface-- one or more atoms near the surface. So this is what an atomic force microscope is. You can pull on this with some force. 
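As a quick check on the tire estimate above, here is the same order-of-magnitude arithmetic written out in Python; all of the inputs (10 millimeters of wear over 50,000 kilometers, a roughly 1 Angstrom atomic layer, a 1 meter circumference) are the rough guesses from that estimate, not measured values.

# Order-of-magnitude estimate: atomic layers of rubber lost per tire revolution.
wear = 10e-3                  # total tread wear over the tire's life: 10 mm, in meters
distance = 50_000e3           # distance driven: 50,000 km, in meters
layer = 1e-10                 # one atomic layer: about 1 Angstrom, in meters
circumference = 1.0           # tire circumference: about 1 m, so one revolution covers ~1 m

layers_lost = wear / layer                # about 10 to the 8 atomic layers
revolutions = distance / circumference    # about 5 times 10 to the 7 revolutions
print(layers_lost / revolutions, "atomic layers lost per revolution")   # about 2

With a 2 Angstrom layer the answer drops to about one layer per revolution, as stated above.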
And then you can try to measure the force of resistance-- the friction force that is due to the interaction between the atom near the surface and the atoms forming the surface. So we might call this the substrate. And this would be the moving object. And we can either study static friction by pulling the objects, but not with a force large enough to move, or maybe we can apply a larger force so the object starts moving at a certain velocity v, and we can measure friction. Now this, as simple as it may seem, is still too complicated for physicists. So we try to make the model even simpler. So how do physicists do this? Well, they say that the surface still consists of atoms, but now we're mostly interested what happens to this tip. We're going to say, well, somehow, what matters is that this atom is bound to the tip. So maybe we can model it in the following way. We have here the macroscopic part of the tip. And then we're going to say that the surface atom is really bound in some way to the tip. And the simplest way that we can imagine the surface atom to be bound to the tip is via some spring with some specific spring constant. So basically, the one atom that makes the contact with the surface is bound via spring to the rest of a tip that is moving-- to the rest of the moving object. So this is how we view the tip as physicists, to make a model. And then how do we view the surface to make a model? Well, we say this is really a periodic arrangement of the atoms. So maybe it makes sense to kind of assume that the associated potential is also periodic at the atomic scale. So this is the potential as a function of position. So basically, we model friction as spring plus periodic potential. This is a very simple model. And it was first introduced now about 90 years ago. So by Prandtl-- by a German physicist called Prandtl-- and independently by another physicist called Tomlinson. And this is called the Prandtl-Tomlinson model. And they discovered it, or introduced it, in 1928 and 1929. And it turns out that this very simple model of a spring and the periodic potential captures much of the essence of nano friction. So as physicists, now we have a spring and the periodic potential. How do we think about this? Well, one way is to think about the energy in the system. So we can plot the energy as a function of position. And now we have our periodic potential, like so. And how do we model the spring? Well, this is going to be the potential energy, v. How do we model the spring? Well, the spring has a linear force. The force is proportional to the displacement, which means the potential is quadratic in this placement. So this is the potential for spring. This is what the spring does, and this is what the substrate does. And what we need to do is, we need to add these two together. So what that means qualitatively is that the total potential of the system might look something like this. So as we now translate the spring across the surface with some velocity, then you can see that this addition between the fixed substrate potential-- this one is fixed-- and this spring potential is moving. You can see that this will lead to a time-varying potential for the object. So let's look now, in a simulation, what that time variation might look like. So what you see here is a combination of a spring and the periodic potential. And the periodic potential has been chosen a slightly different strength than I'm showing you here. 
It has been chosen weaker so that instead of many minima in this total potential, there are only two minima. And what you see happening in this system is that, as the particle is moving, the spring tries to pull it across a maximum of the periodic potential. At some point, the atom is released, and it releases energy that is taken up as heat by the substrate. And then, the object is pulled towards the next minimum. The minimum disappears slowly. The object is released again, et cetera. So basically, in this model, we can understand friction as the external force pulling the object over successive maxima. So that's nice. In this model we can understand why there is heat generated. There's always heat generated when the atom loses this extra energy, because it is stuck for a while in a minimum of the potential, which is not the absolute minimum of the potential. And then the moment it's released, it releases kinetic energy that is converted into heat. Now we can have a different situation. So this was for strong or moderately strong potential. So basically, this was a situation where there are just two minima in the potential, like so. We can make the potential a little bit weaker. And in that case, the curvature of this potential-- of the periodic potential-- the curvature in one direction might not be enough to overcome the opposite curvature of the spring. So in this case, we can end up with a potential just slightly distorted, but we have only one minimum. So in the next movie, I will show you what happens when we have just one minimum. In that case, you can see that, because there is no second minimum in the system, the object-- in this case, the tip-- follows quite smoothly the minimum of the potential. And no energy is released. No heat is released in the problem. This model-- this Prandtl-Tomlinson model-- is interesting. If in this model, we plot the friction force versus the corrugation of the potential corrugation-- so we'll call this the corrugation of the potential, u, which is proportional to the normal force for a macroscopic object-- then what we find is, yes, we find a linear dependence. So this is, if you want, the corrugation or the normal force. What we find is a linear dependence, but only above a certain critical value. So the friction force actually looks like this. It's 0 until the potential becomes strong enough. Basically, you can think of this curvature becoming larger than that curvature. And then, the friction force sets in. So this is at the nano scale. And our simple macroscopic friction law would have predicted something of the same slope at the macro scale-- something of the same slope with a [INAUDIBLE] offset. So we can kind of see that, at the nano scale, the friction is a little bit more complicated. There's a region where there's no friction whatsoever. But then it increases linearly with the normal force. So you can imagine that if I go to macroscopic normal loads, then the difference between these curves, at least fractionally, will be quite small. The difference between these curves, at least fractionally, will be quite small. And so we can see how the law of the nano scale, explained by this very simple Prandtl-Tomlinson model, approaches the law at the macro scale. So far, I have told you about something very simple-- namely when the contact is a single atom. And even making this very simple approximation, we can already understand why the friction force is approximately proportional to the normal force. 
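Here is a minimal numerical sketch of that Prandtl-Tomlinson picture in Python: the tip atom sees the sum of a periodic substrate potential and a quadratic spring potential, and counting the minima of the total potential separates the smooth-sliding case from the stick-slip case described above. The parameter values (spring constant, lattice period, corrugation depths) are arbitrary illustrative choices, not values from any measurement.

import numpy as np

def total_potential(x, support_pos, U0, a=1.0, k=1.0):
    # Prandtl-Tomlinson total potential: periodic substrate plus spring to the moving support.
    substrate = -U0 * np.cos(2 * np.pi * x / a)     # corrugation of depth U0
    spring = 0.5 * k * (x - support_pos) ** 2       # spring pulling the atom toward the support
    return substrate + spring

def count_minima(U0, support_pos=0.0):
    x = np.linspace(-3, 3, 20001)
    V = total_potential(x, support_pos, U0)
    # a local minimum is a grid point lower than both of its neighbors
    return int(np.sum((V[1:-1] < V[:-2]) & (V[1:-1] < V[2:])))

for U0 in (0.01, 0.2, 1.0):    # weak, moderate, strong corrugation (arbitrary units)
    print(f"U0 = {U0}: {count_minima(U0)} minima in the total potential")

With the weakest corrugation there is a single minimum that the atom can follow smoothly, so no heat is released; with the stronger corrugations several minima appear, and the atom sticks and then slips, which is the heat-releasing behavior seen in the simulation.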
Now in real life, probably there is more than one atom touching the surface. So let's consider a contact area where several atoms-- maybe a long chain, or maybe, at the nano scale, probably just a few atoms-- will be making up the contact. So instead of considering this situation with one atom on a spring, which is the single atom case, now let's consider a situation where several atoms make up the contact. So again, here we have the substrate with our periodic potential that is ultimately coming from the individual atoms. But now we will consider more than one atom making up the contact. So in this case, how do we model the system? Well, we think that these atoms are still connected by springs to the macroscopic object. But typically, these atoms will also have forces between them. So this is my physicist's model of what happens when more than one atom touches the surface. Now these are only masses and springs and some simple periodic potential, and yet the situation is quite complex, and in many ways, counter-intuitive. And the first thing that changes compared to the single atom is that now I have the distance between the atoms as a parameter. So if I label the period of the periodic potential as a, and the period of my object, that is, the distance between the atoms in the object, as d, then I can have different situations. So one very simple case is when d is equal to a-- when the two periods are the same-- or in general, when d is an integer multiple of a, d equals n times a, where n is an integer. So let's see what happens in this case. Very naively, all the atoms are doing the same thing relative to the surface. So you might expect them, maybe, to move in exactly the same way. They will all move together. And these springs between them will not stretch. So I can forget about the springs between the atoms in this simple case, which we'll call commensurate. Basically, when the period of the object matches the period of the substrate-- the commensurate case-- then the atoms are at equivalent positions throughout the substrate lattice, and the springs between them will not stretch. And it's as if they are not there. So in the commensurate case, what you can see here is, as the atoms are pulled across the periodic potential, because of this commensurability condition where d is equal to a or an integer multiple of a-- basically, the two periods are matched-- all the atoms are doing the same thing at the same time. So they get pulled, pulled, pulled. They first stick, and then they all slip at the same time. And you can imagine-- and you can kind of see visually in this case-- that the friction is the same as the single atom friction multiplied simply by n. So this is just the single atom friction that we have seen before multiplied by just the number of atoms that make up the contact area. Now there's a different case, which we might call incommensurate, where d is not equal to a, and d is not an integer multiple of a. So maybe d could be 1.5 a, or 2/3 of a, or some other number. And what is the most incommensurate case that we can imagine? Well, the most incommensurate case for d and a is that the ratio of d and a is an irrational number. So an irrational number would be something like square root of 2, or for our purposes, it turns out that the most irrational number in a mathematical sense is the square root of 5 plus 1, all divided by 2. This is the so-called Golden Ratio. The ancient Greeks believed that it had magical properties.
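A quick way to see the commensurate versus incommensurate difference numerically is to look at where each atom of a rigid chain sits within the substrate period; the short Python sketch below does that for d equal to 2a and for d over a equal to the golden ratio just mentioned. The chain length and spacings are arbitrary illustrative choices.

import numpy as np

a = 1.0                               # substrate period
golden = (np.sqrt(5) + 1) / 2         # the golden ratio, about 1.618

def positions_in_period(d, n_atoms=10):
    # fractional position of each atom of a rigid chain (spacing d) within the substrate period
    x = d * np.arange(n_atoms)
    return np.round((x % a) / a, 3)

print("commensurate, d = 2a:      ", positions_in_period(2 * a))
print("incommensurate, d = phi*a: ", positions_in_period(golden * a))

In the commensurate case every atom sits at the same point of the substrate potential, so the whole chain sticks and slips together; in the incommensurate case the atoms sample many different positions within the period, which is what lets them move one at a time in the caterpillar-like fashion described below.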
For instance, when they built the temples, the two sides of the temples-- the two sides of the rectangle-- were related by this very strange number, the so-called Golden Ratio. It is assumed to be aesthetically very pleasing. Now mathematically speaking, the Golden Ratio is very interesting, because it is the most irrational number, in some sense, that you can devise. Now we can choose such an incommensurate ratio of d over a, and see what happens in this case. So you can see here that, in the case of an incommensurate ratio, the behavior of the transport-- the behavior of the motion of the object-- changes dramatically. Instead of all the atoms sticking and slipping together, like we had in the commensurate case, now the atoms move one by one. Basically, the first atom moves over the barrier, the spring stretches, which facilitates the motion of the next atom over the barrier, the next atom, and so on. So these atom chains move like a caterpillar. They essentially move one by one. In technical language, we call these things kinks. You can see that they are periodic compressions. They are compressions in the chain and stretches in the chain. And so these atoms move like a caterpillar. And now an interesting fact is that in this case, friction is dramatically reduced. And depending on which regime you are, the friction can even disappear altogether at the nano scale. The reduction was so large that some people have even coined this with the word superlubricity. Interestingly enough, even though this is a very simple system, it's just, if you want, balls and springs, this superlubricity was only discovered in the late 1980s, even though we know all the physics for more than a century. So you can see just a tiny, tiny rearrangement of the atoms-- not to be any more commensurate with the lattice, but to be incommensurate in terms of an irrational number-- can change friction properties dramatically. In fact, there was a French scientist called Aubry that first pointed out that there's a very interesting transition that happens in this system of a periodic potential and atoms connected with springs when you choose, as the ratio between these two length scales, the Golden Ratio. And this is called the Aubry Transition. So far, we have considered only, if you want, the mechanics of the motion, which is equivalent to saying that we have assumed that the system is at temperature T equal 0. Basically, the atoms remains at the minimum. It doesn't wiggle around because of temperature, and we have derived the friction in this limit. Now what happens for finite temperature-- by finite, meaning a temperature that is larger than 0. Well, if you think back of our simple single-atom model of friction, where maybe an atom is stuck here as the potential is moving with some velocity v, then in the 0 temperature limit, this atom would only be released at the time when this minimum actually disappears. And then it would have a high energy and it would dissipate this energy to be cooled to the next minimum. However, if we have a finite temperature T-- some temperature scale T-- that means that the energy of the atom in the potential is not 0, the kinetic energy. But the atom has a smeared-out kinetic energy and potential energy that is like so. So what that means is that the atom can [INAUDIBLE] hop over the barrier and find the new minimum, without actually having to be pulled over this maximum. So what we expect is that temperature effect might reduce friction. 
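One rough way to put numbers on that expectation is an Arrhenius-style escape rate: an atom sitting in a well of depth delta E hops out at a rate of roughly f0 times exp(minus delta E over kB T), an attempt frequency times a Boltzmann factor. The short Python sketch below just evaluates that rate for a few temperatures; the barrier height and attempt frequency are invented illustrative numbers, not values from the lecture.

import numpy as np

kB = 1.380649e-23            # Boltzmann constant, J/K
f0 = 1e12                    # attempt frequency, ~1 THz; a typical atomic vibration frequency (assumed)
barrier = 0.2 * 1.602e-19    # barrier height: 0.2 eV in joules (illustrative guess)

for T in (10, 100, 300, 600):                    # temperatures in kelvin
    rate = f0 * np.exp(-barrier / (kB * T))      # expected hops per second out of the well
    print(f"T = {T:4d} K: hop rate ~ {rate:.3e} per second")

When this hop rate is large compared with how fast the support drags the atom from one well to the next, the atom reaches the lower minimum thermally instead of being pulled over the barrier, which is the reduction in friction described next.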
So people call this thermolubricity-- the effect that when you heat up the surface-- and in many instances, you would have to substantially heat up the surface-- then friction is reduced, because the atom can find the new minimum of the potential without actually having to be pulled over the barrier. So let's see in a small simulation what that might look like. What you see here is now that the atom has a chance of hopping between the two minima, back and forth. And it has a certain probability to be found at either one of the two minima, which is indicated by the size of the red circle, in this case. So you can see that when temperature is present, then the atom can follow the minimum locally-- the absolute global minimum-- simply because it can hop back and forth over the barrier without actually experiencing much friction. So in particular in the limit when the temperature is very, very high, or the velocity of the atom is very, very low, then the atom will hop many times back and forth between the two minima. And the distribution between the two minima would be simply given by the Maxwell-Boltzmann distribution, which means that the atom will be predominantly found in the global minimum. And it will just be following the potential along, and the friction in this system will be quite small. So in this case, if we do a measurement of friction versus velocity in a simulation-- and we do-- then this is what the result might look like, or what the result does look like. We see here different ranges of friction. We see a range where the friction does not depend on velocity at all. Friction is 0. Then there's a range where the friction increases with velocity, but only very weakly, logarithmically. Please notice that this is a logarithmic scale for the velocity. So the velocity here changes over five orders of magnitude. So friction increases with increasing velocity. Then there's a range where friction is actually independent of velocity, just like the simple macroscopic law would predict. And then it turns out when you move the atom very, very fast, then it basically doesn't have time to dissipate all the energy. And friction is reduced again, effectively, because the atom is hotter. So we can see that the simple macroscopic law of friction only applies in a region, which in this experiment was relatively narrow, of relatively low temperatures. So let's summarize what happens to the laws at the macro scale. So the first law at the macro scale was that the friction force was proportional to the normal force. And we found that at the nano scale, actually what happens is a displaced curve, where there is no friction in a certain region, and then the friction follows parallel to the macro results. So this would be the actual friction at the nano scale. And we see that it's a fairly good approximation to the macroscopic friction, but with a small offset. Our second law of friction was that friction is independent of surface area, of contact area-- basically, the idea that a tall object and a flat object of the same mass experience the same friction. And we see that at the nano scale, it's replaced by a much more complicated law that depends on the arrangement of the atoms, or on, if you want, commensurability. So we have non-equivalent arrangements of the atoms that can either lead to large or small friction. And somehow, the macroscopic law is some kind of average over these behaviors, if we allow for randomness in the positions of the surfaces.
And finally, our third law-- that at the macroscopic scale, friction is independent of velocity-- we found at the nano scale that this can be true, but it is generally true only in some finite temperature range-- namely, that if you move the object very, very slowly, then thermal excitations allow you to always find the global minimum, and friction disappears-- the effect called thermolubricity. So one can see that there are nice connections between the nano scale and the macro scale. But many open questions remain, in particular, concerning the point two, which is that the friction is independent of the contact area. What does the contact area really look like for two macroscopic objects? And how is it that this independence on the surface area-- or at least on the apparent contact area-- actually arises from the properties of friction at the nano scale?
https://ocw.mit.edu/courses/7-016-introductory-biology-fall-2018/7.016-fall-2018.zip
ADAM MARTIN: And so today and for the remainder of the week, the theme is going to be the cell division cycle. And so we're going to really talk about the cell division cycle in every lecture this week with the penultimate lecture talking about how dysregulation of the cell division cycle results in a pathological condition known as cancer. OK, so here is now a cell going through the cell division cycle. It's entered into mitosis right now. And these guys here are the chromosomes of the cell. And you're going to see them line up at the metaphase plate. And eventually they'll be segregated to the two poles of the cell. And then the cell will divide along its equator. OK, so I thought we could start today by just thinking about what has to happen in a cell during the cell division cycle. What has to happen during this process in order for the cell to replicate? Yes, Miles? AUDIENCE: For all the [INAUDIBLE] all those have to be duplicated so that each cell has a starting number. ADAM MARTIN: Mmm hmm. So Miles suggested the organelles have to be duplicated such that the daughter cells can inherit those organelles. And that's correct. What else has to happen? Anything else have to be duplicated? Stephen-- AUDIENCE: DNA has to be duplicated. ADAM MARTIN: The DNA, the nuclear DNA, the chromosomes, have to be duplicated. So the chromosomes have to be duplicated-- duplicated. What else has to happen every cell cycle? What would happen to the size of the cell just when it divides? Yeah, Udo? AUDIENCE: It would grow. ADAM MARTIN: So Udo is suggesting that the cell has to grow, right? Because if the cell didn't grow, then cell division would make smaller and smaller and smaller cells. And so another thing the cell has to do during the cell cycle at some point, it has to grow in size. OK, and what's the final point of the cell cycle? What happens? What's kind of the goal of the cell cycle? Yes, Stephen? AUDIENCE: Undergo mitosis. ADAM MARTIN: To undergo mitosis. And so the cell has to physically divide, right? The chromosomes have to be segregated, and the cell has to physically divide. OK, so you can think of the cell cycle as the goal of getting all these events to happen is to get one cell to become two cells. So you need chromosome segregation. And you want an equal segregation of genetic material into two daughter cells after the cell divides. OK, so today, we're going to unpack the mechanisms that allow a cell to do many of these things and how it's regulated. And one thing to think about is, what is going to determine whether or not a cell enters the cell division cycle and undergoes a division? What do you think are some things that cells would care about if it's trying to decide whether or not to divide? So one thing a cell is going to care about in a multicellular setting is whether it's getting appropriate communications from other cells that are telling it to divide. And remember, Professor Imperiali told you about signaling. And one example was receptor tyrosine kinase signaling. And this is just a diagram showing you the RAS map kinase pathway. And one of the effects of this signaling pathway is to promote cells to enter the cell division cycle in order to divide, OK? So this is in a multicellular organism, the signaling is important. For unicellular organisms, cells might care about whether or not there are nutrients present or whether or not the cell is of the right size, OK? So cells have to make the decision. 
I'm going to focus mainly on cell communication and how that might change the cell physiology. And I want to start by just giving you a little bit of an overview of the cell division cycle. So there are four distinct phases of the cell division cycle. And I split them into two classes. There are phases where things physically happen to the cell. And those are S phase and M phase. And so during S phase, S phase stands for DNA synthesis. And it's during this phase when the nuclear DNA is replicated, OK? And so for each of the action phases, if you will, there's some sort of machinery that's involved in changing the cell. In the case of S phase, that would be DNA polymerase and helicases that mediate the replication. So helicases, DNA polymerase. And you remember from earlier in the semester when we talked about DNA replication all of the proteins that are involved in replicating a chromosome. OK, the other phase where something really physical happens is M phase, which is mitosis. And during M phase, this is when the sister chromatids of each chromosome are separated to the daughter cells. OK, so this is when chromosome segregation happens. And again, during this phase, there has to be some sort of machine that gets activated at this phase of the cell cycle in order to, in this case, physically separate the chromosomes from each other. And that machine in M phase is the mitotic spindle, which you'll recall is a machine that consists of microtubules. So these are microtubules. And in each cell cycle, these events have to happen. But they have to happen in order, right? You need DNA synthesis before you segregate the chromosomes, right? So there has to be an order to this. So these other phases are called gap phases. And there are two of them, G1 and G2. And events happen during these gap phases to help to ensure that things happen in the right order. And so in G1, the cell has to decide whether or not to enter into the cell cycle. So some things to consider here, the one that I mentioned before is whether or not there are growth signals present. OK, so for a metazoan cell, it's important that the cell doesn't just divide without any regard to what's going on in the surroundings. There needs to be a communication between cells such that there's the proper balance of cell division in a tissue for specific cell types. OK, so this G1 phase, this is when the cell-- if the cell goes from G1 to S, this is when the cell commits to the cell cycle. So G1 to S, this is the time when the cell commits to going through the entire cell cycle. So if the cell passes this G1 to S transition, then the cell has committed to going through the entire cell cycle. OK, the other phase, this G2 phase, ensures that this type of quality control that happens, it has to ensure that the DNA is replicated before the cell moves on to mitosis. So you can think of G2 as a phase where there's a quality control mechanism and the cell cares about whether or not its DNA is replicated or not. OK, so now I want to tell you basically the answer as to how this system works in a eukaryotic cell. And this system requires a level of control. And it requires a control system. And what this control system does is to ensure that these different events that happen during a cell cycle occur in the right order. OK, so this control system is going to ensure proper order. And there are two main components to this control system. The first is called cyclin dependent kinase, or CDK. 
And so cyclin dependent kinase is a kinase, so it can post-translationally modify other proteins by adding a phosphate group to them. And so it's through that mechanism that cyclin dependent kinase can modify events and control when they happen in the cell cycle. OK, and the other key component of the system is a protein called cyclin. And what cyclin is is it's the regulatory subunit of the CDK. So this is the regulatory subunit of CDK. And so without the cyclin, the cyclin dependent kinase is inactive, OK? So the CDK needs the cyclin to have activity. So cyclins increase the activity or activate CDK. OK, but there are different flavors of cyclins. There are actually many different cyclins, at least four classes of cyclins. And these cyclins appear at different phases of the cell cycle and then go away. OK, so the cyclins oscillate that's why they're called cyclins, because they come on and off. And depending on which cyclin is present determines what CDK is going to phosphorylate, OK? So these cyclins also determine substrate specificity of the kinase. So which cyclin determines what protein the CDK phosphorylates. So I've outlined three classes of cyclins here. Here is a G1S cyclin in complex of CDK. And then here's an S cyclin complex with CDK in red. And so what do you think the S cyclin CDK is going to phosphorylate, what kind of protein? Anyone have a guess? Miles-- AUDIENCE: Helicase. ADAM MARTIN: Yes, you're actually exactly right. It's going to phosphorylate and activate things that are involved in DNA replication. And Miles is right. Helicase is one of the proteins that gets phosphorylated by S cyclin CDK. And then similarly, M cyclin CDK, which appears here in blue during mitosis, M cyclin CDK is going to phosphorylate proteins that are involved in forming the mitotic spindle so that it induces cell cycle events that happen specifically during mitosis. OK, so you all see it depends which cyclin is present that determines what cell cycle events are happening at a given time. And therefore, it's important that we understand how these cyclins appear at distinct cell cycle phases and whether or not that's the mechanism for the oscillation. Yes, miles-- AUDIENCE: I have a question about [INAUDIBLE] question about mitosis [INAUDIBLE]. So I know that microtubules make sure the chromosomes separate from the cell. How does a cell regulate having half of the organelles on each side [INAUDIBLE] split [INAUDIBLE].. ADAM MARTIN: Some organisms employee motors such that organelles are physically sort of put in daughter cells. But I think often it's just random, right? If you dissolve the organelle and it becomes kind of a bunch of different vesicles, then if you just split in half, there's a high probability that each daughter cell will get parts of the organelle, OK? So organelles can change their morphology during the division process in such a way that they're able to be inherited by both daughter cells. OK, that's an excellent question. So it's the cyclins that really determine what's happening. And I just wanted to point out here that one of the main transcriptional targets of RTK signaling is this G1 cyclin, OK? So it's these signaling pathways that lead to the increase in G1 cyclin that start the cell on this process of entering into the cell cycle, OK? So you get these cyclins getting synthesized. And the cyclins appear in a defined order. OK, so there are different cyclins, but they appear relative to each other in a stereotypical order. So they appear in order. 
And it's that order of the cyclin that defines which cell cycle events happen at what time in a cell. OK, now I want to tell you a little bit about how the machinery that's involved in the control of the cell cycle was discovered. And I'm going to start by telling you a little bit about budding yeast. And by showing you how this was discovered, it will give you a sense as to how this system controls the cell division cycle. You'll recall that budding yeast can exist as a haploid cell in addition to existing as a diploid cell. So there's a haploid/diploid life cycle. Also, one nice feature of budding yeast for this particular question is that you can infer the cell cycle morphology of the yeast just by looking at its morphology. So budding yeast divides by budding. And the size of the bud indicates what cell cycle phase the cell is in, OK? So you can infer cell cycle phase by morphology of the yeast cell. So for example, if we look up here, here's an unbudded yeast cell that's probably in G1. Here's a yeast cell with a little teeny bud on it. That's probably an S phase. Next to it over here is one that's a slightly bigger bud. And here's one with an even bigger bud. And that one might be in G2 phase. OK, so because this bud grows in size over the course of the cell cycle, you can just look at a yeast cell and infer what the cell cycle phase is. OK, so I'm going to tell you about a genetic screen that was done to look for mutants that were defective in the cell division cycle. And these are known as cell division cycle, or CDC, mutants. Now, what type of yeast cell, haploid of diploid, might you want to screen mutants with? What would be the advantages of either one or the other? Yeah, Natalie? AUDIENCE: Would you do haploid, because if it's a recessive mutation [INAUDIBLE] expressed? ADAM MARTIN: Yes, so what Natalie suggested is to start with the haploid mutants because there's only one copy of each gene such that if you hit it, now you no longer have a functional copy of that gene. If you started with a diploid cell, you'd have two copies of the gene. And you'd have to have two mutations both happening in the same gene, which would be a rare event. OK, so it's better to start with a haploid in this case. Now, what's the problem if you mutate a gene that's involved in the cell cycle? What's going to be the phenotype, the immediate visible phenotype? Is it going to be alive or dead? Carmen-- AUDIENCE: Dead. ADAM MARTIN: It's going to be dead, right? And it's hard to work with an organism that's dead. OK, so what was done is to look for a particular type of mutant which is known as a temperature sensitive mutant. And a temperature sensitive mutant is a mutant where the cell or organism is alive and well at one temperature but dead at another temperature, OK? And so the screen basically involved taking yeast. Here's yeast growing in a test tube. And this is now haploid yeast. And you can treat that yeast with a mutagen. It doesn't matter what, just something that will induce mutations at a high rate in these yeast cells. And then you can take these cells and plate them on media where individual cells will grow into colonies. OK, and if you grow it at 22 degrees C for yeast, this is the most moderate temperature you can choose. So this is what's known as the permissive temperature. OK, but you can also take this plate of used colonies and duplicate it and grow it at another temperature. 
And you might get something that looks like this, where you see this colony grew at 22 degrees, but at 37 degrees C it did not grow. And that would suggest that, then, this is a temperature sensitive mutant. And this temperature of 37 degrees is known as the restrictive temperature. OK, so that would identify a temperature sensitive mutant. Now, is every temperature sensitive mutant that you identify, is that going to be a cell division cycle mutant? Miles, you're shaking your head no. Why is that? Can you explain your logic? AUDIENCE: So there could be a couple different proteins [INAUDIBLE] there's too many mechanisms in the cell that could be dependent on temperature to narrow down to just [INAUDIBLE] mutant cell cycle. For example, if a phytoprotein mutant organism would mutate [INAUDIBLE] temperature sensitive, without that protein it would die also. ADAM MARTIN: Exactly. So Miles is suggesting that if you just mutated any old gene that was involved in viability for this yeast, and it unfolded at 37 degrees because you sort of made a mutation that made it unstable, then you would identify that as a temperature sensitive mutant. So what would be a good criterion, I guess, that we could use to select just the mutants that are affecting the cell division cycle? Might there be a way for us to do that? I guess I'm asking, can we narrow down the phenotype, right? Temperature sensitivity could come from affecting any process in yeast-- is there a way we can gear it towards the cell division cycle? Diana-- AUDIENCE: Maybe [INAUDIBLE] specific phase of the cell cycle, you could look at the morphology of it. And if all of them stop at the same phase, you might assume that you [INAUDIBLE].. ADAM MARTIN: All right, very good. So what Diana is suggesting is that we look for a phenotype. And she's guessed that, if this gene is involved in sort of mediating a change from one cell cycle phase to another, then if you mutate it, you'd have yeast that's all stuck in one phase, OK? And that's indeed the phenotype that was screened for. OK, and so if you just take a random population of yeast that's dividing, you'll see cells that are unbudded, small budded, slightly bigger budded. And if you count the number of cells, what you'll see is that most of your cells are unbudded. Some are small budded. And a larger percentage are large budded. And this just reflects the relative amount of time that yeast is in each of these phases of the cell cycle. So yeast spends most of its time in G1. Therefore, if you look at a random population of yeast, you'll see most of the cells will be in the unbudded state. OK, so this is for wild-type normal yeast. Now, what was identified is a cell division cycle mutant, CDC 28, which at the restrictive temperature causes a train wreck at a specific phase of the cell cycle. So all of the cells now are stuck in the unbudded state. And so this suggests that these cells, when they are shifted to the restrictive temperature, are still able to move through the cell cycle. But once they get to this phase, they get stuck. OK, so here there is a cell cycle arrest at G1. And they confirmed it was G1 by measuring the DNA content. And by measuring the DNA content, they were able to show that these cells did not duplicate their DNA. So they didn't even start to undergo S phase. They were stuck in G1, OK? OK, so that suggests that the CDC 28 gene is required for cells to go from G1 to S. OK, so the wild-type CDC 28 gene is required for this transition from G1 into the S phase.
And it turns out that this yeast CDC 28 gene is the one yeast cyclin dependent kinase, OK? So this was sort of the defining mutant for cyclin dependent kinase. And you'll recall earlier in the semester when we talked about molecular biology that we talked about work done by Paul Nurse who used functional complementation to clone the human cyclin dependent kinase by transforming DNA into a different yeast, fission yeast. But again, that rescued the cell cycle arrest. And that's how the human cyclin dependent kinase was discovered. And I just wanted to point out here that the work I'm telling you about was awarded the Nobel Prize in physiology and medicine in 2001. And it was awarded to Leland Hartwell, Tim Hunt, and Sir Paul Nurse. Leland Hartwell did the screen that I just told you about there and identified CDC 28. And we already talked about Paul Nurse earlier in the semester. Tim Hunt worked on clams and sea urchins and identified cyclin. So this was the work that identified the regulatory machinery of the cell cycle. And they sort of showed that this worked in a number of different organisms. And they showed that it was evolutionarily conserved from yeast all the way to humans. OK, so this is a conserved mechanism. OK, so there's a mechanism that actively governs the transition from G1 to S. And I'll point out this transition is known as start in yeast. And it's called the restriction point in mammalian cells. It's kind of the point of no return in the cell cycle. But the cell cycle doesn't just blindly charge through the rest of the way. And there are certain quality control mechanisms that are in place to ensure that things happen in the proper order and that the quality of events happening is good before the cell moves on to the next stage. And so I'm going to define a concept called a checkpoint. And the checkpoint is a type of quality control mechanism. And checkpoints operate in the cell cycle to ensure that one event doesn't occur till the preceding event happens correctly, OK? So this ensures proper order of events and ensures that events happen correctly before the next subsequent event has to occur. OK, so one example of this is, if you just consider S phase and M phase, DNA replication has to finish before the cell starts segregating chromosomes. Otherwise there's going to be catastrophic consequences, such as possibly creating a cancer cell, OK? So one example of a checkpoint is called the DNA damage checkpoint. And what the DNA damage checkpoint does is it looks to see if there's DNA damage or if the DNA is still replicating. And if either of these cases is present in a cell, then it sends a signal. And that signal, in order to influence the cell division cycle, has to interface with the cyclin CDK control machinery, OK? So this signal will then inhibit cyclin CDK. And cyclin CDK governs two major transitions in the cell cycle. So there are two major what I'll call transition points. There's the G1 to S, which I just outlined over there, which is called start. So there's G1 to S. But there's also G2 to M, OK? So these, basically, the transition out of the gap phases, those are the key transition points that can be regulated by the cell to either slow things down to halt the transition or just go right through, OK? So let me tell you about an experiment that defined the functionality of the DNA damage checkpoint. And I'm going to tell you about work done by Weinert and Leland Hartwell. And it was published in 1988. 
And they were interested in what the nature and the function of these checkpoints were. And so you can take budding yeast. And you can damage its DNA by irradiating the cells with X-rays. And if you irradiate the cell with X-rays in a wild-type normal yeast, so in the normal yeast that's not mutant, then this damages the DNA. And the cell stays in G2. OK, so it stays in G2. OK, and there's a delay. OK, so here what I'm drawing is a G2 delay. So the cell spends a longer time in G2 than it normally would if you didn't damage its DNA. All right, and then over time it will continue in the cell cycle and enter into the next cell cycle. And what's interesting about this is that these cells live. OK, so one interpretation from this result is you damage the cell's DNA. It delayed the cell cycle in G2. So it didn't rush right into chromosome segregation. And that allowed the cell time to repair its DNA. And that enabled the cell daughters to live, OK? So that's an interpretation. Now, part of the evidence for that interpretation is that Hartwell and Weinert discovered a mutant called RAD 9, so the RAD 9 mutant. And RAD 9 stands for radiation sensitive. This is a radiation sensitive mutant. And this particular radiation sensitive mutant disrupted the delay. So it disrupted the checkpoint here. So what happens in a RAD 9 mutant is, again, you irradiate cells with X-rays. The cell goes from S phase to G2. But this time, there's no delay, so from G2 the poor yeast charges unsuspectingly into mitosis with damaged DNA and divides. But in this case, there's a high level of death in the resulting progeny. So here you have death. OK, so RAD 9, then, is a gene that is involved in promoting the cell cycle delay such that the yeast cell has time to repair its DNA. And if you remove that delay-- so here there's no delay. It's this delay that defines the checkpoint process, right? The checkpoint is a process whereby if there's DNA damage, you delay the cell cycle such that the cell has time to repair it. If you don't have that, it has bad consequences for the cell and results in your cells undergoing premature mitosis before they've had a chance to repair the DNA. Let's see. I'm going to use this one here. All right, one thing you might be wondering is what causes these cyclin proteins to oscillate. And so the last point I want to make is I want to tell you about the mechanism that allows these protein oscillations. And it involves a mechanism that's going to be new for you, at least in the context of this class. It's a mechanism that's regulated proteolysis. OK, so there's a regulated degradation of these cyclin proteins that allows them to go up and then down, OK? So if we consider just one part of the cell cycle, well, if I plot the concentration of cyclin and we look at M cyclin from G2 to M phase-- so this here is a time axis-- the M cyclin goes up in M phase. And then it drops precipitously during mitosis, specifically at the metaphase/anaphase transition. OK, so this is for the mitotic cyclin. And I'm going to tell you that this precipitous decline in cyclin levels is due to regulated proteolysis. OK, and it involves a mechanism that you were briefly introduced to by Professor Imperiali, because it involves a small protein known as ubiquitin. What ubiquitin is, is a small 76 amino acid protein. So it's a 76 amino acid protein. But this protein can get attached to other proteins. So it's a post-translational modification.
OK, so ubiquitin, which I'll abbreviate UB, ubiquitin is attached to lysines on a target protein. And the attachment of ubiquitins to lysines of a protein has important consequences. And what Professor Imperiali told you about is when this happens in the case of protein misfolding. But this ubiquitination of proteins also occurs to proteins that are not denatured or misfolded. And it's a way of regulating protein levels in the cell. OK, so what happens is-- I'll show you a complicated diagram of what happens. But I'm really going to focus on this step right here. There's a series of steps that are needed to get ubiquitin to get attached to the target protein. I'm going to ignore pretty much all of that. But there is an E2 enzyme that becomes conjugated with ubiquitin. And then the ubiquitin is able to be transferred to a target protein. And rather than make this generic, I'll let you know the target protein is going to be cyclin. So we'll just say cyclin. And so there's an enzyme that transfers the ubiquitin from the E2 to the cyclin, OK? And it's polyubiquitinated, meaning there's a chain of ubiquitins added to the cyclin. And it's carried out by a particular type of enzyme known as an E3 ubiquitin ligase. And there are hundreds of these ubiquitin ligases in humans. And different E3 ubiquitin ligases confer different specificities. So they target different proteins, OK? So different E3's target different proteins. And so this is where the specificity comes from, OK? So when a protein in the cell, misfolded or not, is polyubiquitinated like this, this is a garbage tag on that protein, OK? So you can think of polyubiquitin as a garbage tag. And once it's polyubiquitinated, it's sent to the proteasome, which Professor Imperiali showed you. And this is the structure that degrades proteins. OK, so if the protein is targeted for degradation by putting this tag on it, it's going to be rapidly proteolyzed in the cytoplasm of the cell. All right, now I want to show you some experiments that provided the first evidence that this regulated proteolysis is what sort of causes the cyclin to oscillate, OK? And it's going to involve a new model organism. Is there anyone here that has ranidaphobia? OK, we all like frogs? OK, so this model organism is xenopus laevis, or the African clawed frog. And I want to thank my colleague at UMass Amherst, Tom [? Oreska, ?] who provided the slides of frogs for me. So what's great about these frogs, well, other than them being very cute, is that they lay a ton of eggs. And the eggs are huge, OK? So here is a mom, a mom frog. And then you see all these circular things all around the frog are the eggs. OK, so they're about 1 millimeter in size. They're huge. You can collect these eggs, put them in a test tube. And you can see all these eggs that you have in this test tube. And then you can spin the test tube. And by spinning the test tube in a centrifuge, you crush the eggs. And so that results in this middle layer here, which is cytoplasm, OK? And you can remove it with a syringe. And what you end up with is a concentrated cytoplasmic extract. So that's all cytoplasm. OK, so that's a lot of cytoplasm. OK, so for the xenopus system, this system allows you to get this highly concentrated-- because it's not diluted. It's the same concentration as cytoplasm almost-- cytoplasmic extract, which is known as xenopus egg extract. OK, and what's amazing about this egg extract is it can go through the cell cycle even though it's not in a cell. 
OK, so you can get this extract to essentially simulate the cell cycle. OK, so you have to mimic fertilization, because that's when cell divisions start to happen in the normal frog embryo. But once you do this-- you mimic the fertilization process by adding calcium-- then you'll see this extract go through the cell cycle. And you can see it by looking at the morphology of different structures in the extract. So this is a nucleus that has assembled in the extract around some DNA that was added. OK, DNA replication can happen in this nucleus. Other events that happen in the interphase of the cell cycle also happen in this extract. And if you wait, then it will go into mitosis. And you'll start to see mitotic spindles assembling in the extract. OK, so what's important about this, this is totally in vitro. OK, so this is an in vitro system. There are no cells. But you're able to see the extract go through the cell cycle. OK, and if you were able to look at cyclin, like M cyclin, you'd see that M cyclin levels go up and then down and up and down. They oscillate just like they would in a cell. And this is just a diagram showing you that, where mitotic cyclin, M cyclin, concentration is in blue and CDK activity for M cyclin CDK is in purple. So you see it goes up. And then the cell enters mitosis. Early mitosis is in blue. Late mitosis is in orange. So you see mitosis happens when M cyclin is high, just like it does in a cell. And then it degrades. And then it repeats. OK, so this is all outside of a cell. You're just looking in a test tube. And you can recreate the cell cycle. Now, because this is a biochemical system, it allowed these researchers-- in this case, the researchers who did this experiment were Andrew Murray and Marc Kirschner at Harvard-- to test the role of various components in this oscillation. The first experiment they did was to RNase treat the extract to get rid of all the mRNA. And if you degrade all of the mRNA in this extract, you no longer get the cycling. You no longer get the cell cycle. So this shows you mRNA is important or necessary. But you don't know which mRNA, right? One hypothesis might be that you need the mRNA for M cyclin in order to produce cyclin every cell cycle. And that was their hypothesis. So what they did to test that was to degrade all the mRNA, inactivate the RNase, and then add back the mRNA for one gene, that mitotic cyclin. And what they saw when they did that is they restored the cell cycle, suggesting that this one mRNA, M cyclin, is sufficient to restore the oscillation of the mitotic cycle. OK, now, the last experiment, which I think is the most important, shows you the mechanism by which the cyclin level goes down. Because instead of adding back the wild-type mitotic cyclin, they added back a cyclin mutant that was non-degradable by this E3 ubiquitin ligase mechanism. OK, so they add back a cyclin mutant, and this mutant has a deletion in the part of the protein called the destruction box-- and this is essentially the part of the protein that is recognized by the E3 ubiquitin ligase, OK? So the destruction box mutant basically blocks this, such that cyclin is no longer polyubiquitinated and it can't be targeted for proteolysis. OK, and in this case, what happens is cyclin levels increase. And then they stay high and there's no cycle. OK, so when you have this cyclin mutant with the destruction box deleted, such that it's not degraded, you get a cell cycle arrest.
And because this is M cyclin, the cell arrests in mitosis. You get a mitotic arrest, a mitotic arrest. OK, any questions about this mechanism of proteolytic degradation? You all see how this-- yes, Malik? AUDIENCE: So [INAUDIBLE] cell [INAUDIBLE] what is it physically doing? ADAM MARTIN: What is it physically doing? It's basically stuck with a mitotic spindle and it's not segregating the chromosomes. Yeah, so it hasn't gone through mitosis. It's stuck in a specific phase of mitosis. In this case, it's stuck in basically a metaphase-like state. One last point I want to make about this is that the mRNA for M cyclin is just constant. It's always present. So this is constant. You have constant mRNA. CDK is constant. It's the cyclin protein that's going up and down. And it's going up and down because of this regulated proteolysis. OK, great. On Wednesday, we'll talk about stem cells and we'll talk about guts.
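To make that last point concrete-- constant mRNA and constant synthesis, with the up-and-down coming entirely from regulated proteolysis-- here is a toy numerical sketch. This is not a model from the lecture; the rates, threshold, and time units are made up purely for illustration, with the degradation switch standing in for the E3 ubiquitin ligase mechanism being turned on and off.

```python
# Toy sketch: constant cyclin synthesis from a constant mRNA pool, plus
# degradation that switches on only when cyclin crosses a hypothetical
# "mitosis" threshold. All numbers are invented for illustration.
dt, t_end = 0.01, 30.0
synthesis_rate = 1.0        # arbitrary units of cyclin per unit time
degradation_rate = 8.0      # fast, proteasome-mediated decay when switched on
threshold = 5.0             # cyclin level that triggers degradation
cyclin, degrading = 0.0, False
trace = []
for step in range(int(t_end / dt)):
    if cyclin > threshold:
        degrading = True    # destruction machinery "switches on" at high cyclin
    if cyclin < 0.5:
        degrading = False   # degradation shuts off, and the next cycle begins
    dcyclin = synthesis_rate - (degradation_rate * cyclin if degrading else 0.0)
    cyclin += dcyclin * dt
    trace.append(cyclin)
# `trace` rises slowly and crashes repeatedly: sawtooth oscillations from
# constant synthesis plus regulated proteolysis, with constant mRNA throughout.
```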
https://ocw.mit.edu/courses/8-01sc-classical-mechanics-fall-2016/8.01sc-fall-2016.zip
Let's consider a ball that is dropped from a certain height, h i, above the ground, and this ball is falling. It hits the ground and it bounces up until it reaches some final height, h final. Now when the ball is colliding with the ground, there are collision forces. And in this problem what we'd like to do is figure out what the average force of the ground is on the ball. And that will be the normal force, the average normal force, on the ball during the collision. Now if we look at this ball dropping, it's going to lose a little bit of energy, because it's getting compressed during the collision. Let's look at an example of an actual ball dropping. As you can see in this high speed video, as the ball falls down, it collides with the ground. When it collides with the ground, it's compressed. And then as it rebounds upwards, the ball expands back to its original shape, but it doesn't quite get to the same height-- that's because when the ball is compressed, there's some deformation in the rubber structure of the ball, and it's not a completely elastic deformation. And so some of the energy is transformed into, first, molecular motions, which turn into thermal energy that's radiated into the environment. Let's look in particular at the details of the collision. If we look at it in slow motion, what we have here-- and I'll draw a picture-- is that as the ball is colliding with the ground, the ball compresses, then expands as it goes upwards. And so we can draw a free body diagram of the ball with a normal force and a gravitational force. Now let's choose our positive direction up. So now what we'd like to do is apply the momentum principle to analyze the average normal force. And our momentum principle, remember, is impulse: the force integrated over some time during the collision is equal to the change in momentum. So what we'd like to do is identify the states that are relevant. So we'll have a state before-- we'll call this the before state, and that's right before the ball hits the ground. And we have an after state, and in the after state, the ball has now finished colliding with the ground, and it's now moving up with some speed. Now again, we're going to choose the positive direction up. Here I'm representing things as speeds. We also need some times here, so let's say that t initial is zero and this is our final time; we'll just call the time before the collision t before, and this is t after. And then our integral is going to run from t before to t after. And we can now apply the momentum principle. Well, this is a vector equation, and we've chosen our unit vector up, so what we have here is the integral from t before to t after of N minus mg, integrated over dt, and that's equal to the y component of the momentum at t after, minus the y component of the momentum at t before. We don't have a vector equation here anymore-- we're just working with the y components. And so this is our expression of the momentum principle. Impulse causes momentum to change. Now we're going to replace the normal force by its average, and so this integral simply becomes N average minus mg, times the collision time, and that is equal to-- now in here we can put the mass of the ball, and we have the velocities. Now here's where we have to be a little bit careful, because we're looking at the y component. We chose the speed downwards-- that's in the negative y direction, so we have minus-- sorry, we're looking at the after state. We have plus V after, because this is going in the positive j direction.
And over here we have minus the y momentum before. The ball is moving downward, in the minus direction, so that's minus m times minus V before, and so we get mass times V after plus V before. So our first result is for the average normal force: let's divide through by delta t and bring the mg term over, so we have N average equals m times V after plus V before, divided by delta t, plus mg. So we see that if the collision time is very short, then this average force is a little bit bigger. For a long collision time, the average force is a little bit smaller. Now from kinematics, we have already worked out the problem that the speed for an object that rises to a height h final-- this is the speed afterwards-- is just the square root of 2g h final. And in a similar way, if an object falls from a height h i, the speed when it gets to the bottom is the square root of 2g h i. And so now we can conclude, with these substitutions, that the average force equals m times the quantity square root of 2g h final plus square root of 2g h initial, divided by the collision time, plus mg. And of course the collision time, we're saying, is t after minus t before. And so that's how we can use the momentum principle to get an average expression for the normal force.
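That final expression can be turned into numbers directly. Here is a minimal sketch, assuming made-up values for the mass, drop height, rebound height, and collision time, since none of these are specified in the problem.

```python
import math

# N_avg = m*(v_after + v_before)/dt_collision + m*g,
# with v_before = sqrt(2*g*h_initial) and v_after = sqrt(2*g*h_final).
g = 9.8              # m/s^2
m = 0.2              # kg, mass of the ball (assumed)
h_initial = 1.0      # m, drop height (assumed)
h_final = 0.8        # m, rebound height (assumed)
dt_collision = 5e-3  # s, collision duration (assumed)

v_before = math.sqrt(2 * g * h_initial)   # speed just before impact
v_after = math.sqrt(2 * g * h_final)      # speed just after impact
N_avg = m * (v_after + v_before) / dt_collision + m * g

print(f"average normal force ~ {N_avg:.1f} N")
# A shorter collision time dt_collision makes N_avg larger, exactly as noted above.
```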
https://ocw.mit.edu/courses/5-111-principles-of-chemical-science-fall-2008/5.111-fall-2008.zip
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: And this is a question based on where we left off on Wednesday -- we were talking about Coulomb's force law to describe the interaction between two particles, and good job, most of you got this correct. So, what we're looking at here is the force when we have two charged particles, one positive, one negative -- here, the nucleus and an electron. So, I know this is a simple example and I can see everyone pretty much got it right, and probably those that didn't actually made some sort of clicker error is my guess. But I wanted to use this to point out that in this class in general, any time you see an equation to explain a certain phenomenon, such as here looking at force, it's a good idea to check yourself by first plugging it into the actual equation, so you can plug in infinity and this equation here, and what you would see is, of course, the force, if you just solve the math problem goes to zero. But you can also look at it qualitatively, so, if you think about the force between the electron and the proton, you could just qualitatively think about what's happening. If they're close together there's a certain force -- they're attracted because they have opposite charges, but as that gets further and further away, that force is going to get smaller and smaller, and eventually the force is going to approach zero. So, it's a good kind of mental check as we go through this course to remember every time there's an equation, usually there's a very good reason for that equation, and you can go ahead and just use your qualitative knowledge, you don't have to just always stick with the math to check and justify your answers. So, we can get started with today's lecture notes. And, as I mentioned, we left off and as we started back here to describe the atom and how the atom holds together the nucleus and the electron using classical mechanics. And today we'll finish that discussion, and, of course, point out actually the failure of classical mechanics to appropriately describe what's going on in an atom. So, then we'll get to turn to a new kind of mechanics or quantum mechanics, which will in fact be able to describe what's happening on this very, very small size scale -- so on the atomic size scale on the order of nanometers or angstroms, very small particles. And the reason that quantum mechanics is going to work where classical mechanics fails is that classical mechanics did not take into account the fact that matter has both wave-like and particle-like properties, and light has both wave-like and particle-like properties. So, we'll take a little bit of a step back after we introduce quantum mechanics, and talk about light as a wave, and the characteristic of waves, and then light as a particle. And one example of this is in the photoelectric effect. So, we just talked about the force law to describe the interaction between a proton and an electron. You told me that when the distance went to infinity, the force went to zero. What happens instead when the distance goes to zero? What happens to the force? Yeah. So, the force actually goes to infinity, and specifically it goes to negative infinity. 
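As a quick numerical version of that mental check (a sketch, not part of the lecture), the snippet below just evaluates the magnitude of the Coulomb force between a proton and an electron at a few separations, showing the force falling toward zero as the distance grows and blowing up as the distance shrinks.

```python
# Coulomb force magnitude between charges +e and -e, F = k*e^2/r^2.
k = 8.99e9        # N*m^2/C^2, Coulomb constant
e = 1.602e-19     # C, elementary charge

def coulomb_force(r):
    """Attractive force magnitude (N) between +e and -e separated by r meters."""
    return k * e * e / r**2

for r in (1e-11, 1e-10, 1e-9, 1e-6, 1e-3):   # from sub-atomic scale outward
    print(f"r = {r:.0e} m -> F = {coulomb_force(r):.2e} N")
# The force shrinks toward zero as r grows and grows without bound as r -> 0,
# consistent with the limits just discussed.
```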
Infinity is the force when we're just thinking about it in our brains; negative infinity is what we get when we actually plug it into the equation here, and the reason is the convention that the negative sign is just telling us the direction-- that the force is pulling the charges together instead of pushing them apart. So, we can use Coulomb's force law to think about the force between these two particles -- and it does that, it tells us the force as a function of that distance. But what it does not tell us, which if we're trying to describe an atom we really want to know, is what happens to the distance as time passes? So, r as a function of time. But luckily for us, there's a classical equation of motion that will, in fact, describe how the electron and nucleus change position, or change their radius, as a function of time. So, that's -- does anyone know which classical law of motion that would be? Yup, so it's going to be Newton's second law, force equals mass times acceleration -- those of you that are quick page-turners have a little one-up on answering that. And that tells us force as a function of acceleration; we want to know it, though, as a function of radius, so we can just write the acceleration as the first derivative of velocity. So, force is equal to mass times dv/dt. But, of course, we want to go all the way to distance, so we take the second derivative of position and we have this equation for force here. And what we can do in order to bring the two equations together is to plug in the Coulomb force law right here. So, now we have our Coulomb force law all plugged in here, and we have this differential equation that we could solve if we wanted to figure out what r is at different times t. So, you all will have the opportunity to solve differential equations in your math courses here. We won't do it in this chemistry course. In later chemistry courses, you'll also get to solve differential equations. But instead, in this chemistry course, I will just tell you the solutions to differential equations. And what we can do is we can start with some initial value of r, and here I write r being ten angstroms. That's a good approximation when we're talking about atoms, because that's about the size of an atom. So, let's say we start off with the distance being ten angstroms. We can plug that into this differential equation and solve it, and what we find out is that r actually goes to zero at a time that's equal to 10 to the negative 10 seconds. So, let's think qualitatively for a second about what that means, or what the real meaning of that is. What that is telling us is that, according to Newtonian mechanics and Coulomb's force law, the electron should actually plummet into the nucleus in 0.1 nanoseconds. So, we have a little bit of a problem here. And the problem that we have is that what we're figuring out mathematically is not exactly matching up with what we're observing experimentally. And, in fact, it's often kind of difficult to experimentally test your mathematical predictions -- a lot of people spend many, many years testing one single mathematical prediction. But, I think all of us right now can probably test this prediction right here, and we're observing that, in fact, all of us and all the atoms we can see are not immediately collapsing in less than a nanosecond.
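For reference, the combination just described-- Newton's second law with the Coulomb force substituted in-- can be written schematically as below. This is a sketch using the standard SI form of Coulomb's law, which the lecture only describes in words.

```latex
m_{e}\,\frac{d^{2}r}{dt^{2}} \;=\; F(r) \;=\; -\,\frac{e^{2}}{4\pi\varepsilon_{0}\,r^{2}}
```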
So, just, if you can take what I'm saying for a moment right now that in fact this should collapse in this very small time frame, we have to see that there's a problem with one of these two things, either the Coulomb force law or Newtonian mechanics. So, what do you guys think is probably the issue here? So, it's Newtonian mechanics, and the reason for this is because Newtonian mechanics does not work on this very, very small size scale. As we said, Newtonian mechanics does work in most cases, it does work when we're discussing things that we can see, it does work even on things that are too small to measure. But once we got to the atomic size scale, what happens is we need to be taking into account the fact that matter has these wave-like properties, and we'll learn more about that later, but essentially classical mechanics does not take that into account at all. So, we need a new kind of mechanics, which is quantum mechanics, which will accurately explain the behavior of molecules on this small scale. So, as I mentioned, the real key to quantum mechanics is that it's treating matter not just like it's a particle, which is what we were just doing, but also like it's a wave, and it treats light that way, too. The second important point to quantum mechanics is that it actually considers the fact that light consists of these discrete packets or particle-like pieces of energy, which are called photons. And if you think about what's actually happening here, this second point that light consists of photons is actually the same thing as saying that light shows particle-like properties, but that's such an important point that I put it separately, and we'll cover that separately as we go along. So, we now have this new way of thinking about how a nucleus and an electron can hang together, and this is quantum mechanics, and we can use this to come up with a new way to describe our atom and the behavior of atoms. But the problem is before we do this, it makes sense to take a little bit of a step back and actually make sure we're all on the same page and understanding why quantum mechanics is so important and how it works, and specifically understanding what we mean when we say that light is both a particle and a wave, and that matter is both a particle and a wave. So, we'll move on to this discussion of light as a wave, and we really won't pick up into going back to applying quantum mechanics to the atom until Friday, but in the meantime, we'll really get to understand the wave particle duality of light and of matter. So, we'll start with thinking about some properties of waves that are going to be applicable to all waves that we're talking about, including light waves. The easiest kind of waves for us to picture are ocean waves or water waves, because we can, in fact, see them, but they have similar properties to all waves. And those properties include that you have this periodic variation of some property. So, when we're talking about water waves, the property we're discussing is just the water level. So, for example, we have this average level, and then it can go high where we have the peak, or it can go very low. We can also discuss sound waves, so again it's just the periodic variation of some property -- in this case we're talking about density, so we have high density areas and low density areas. So, regardless of the type of wave that we're talking about, there's some common definitions that we want to make sure that we're all able to use, and the first is amplitude. 
And when we're talking about the amplitude of the wave, we're talking about the deviation from that average level. So, if we define the average level as zero, you can have either a positive amplitude or a negative amplitude. So, sometimes people get confused when they're solving problems and call the amplitude this distance all the way from the max to the min, but it's only half of that because we're only going back to the average level. So, what we really want to talk about here is light waves, and light waves have the same properties as these other kind of waves in that they're the periodic variation of some property. So, when we're discussing light waves, what we're talking about is actually light or electromagnetic radiation, is what we'll be calling it throughout the course. And that's the periodic variation of an electric field. So, instead of having the periodic variation of water, or the periodic variation of air density, here we're talking about an electric field. We know what an electric field is, it's just a space through which a Coulomb force operates. And the important thing to think about when you're talking about the fact that it's a periodic variation, is if you put a charged particle somewhere into an electric field, it will, of course, go in a certain direction toward the charge it's attracted to. But you need to think about the difference, if you have a particle here on your wave, it will go in one direction. But remember, waves don't just have magnitude, they also do have direction. So, if instead you put your particle somewhere down here on the electric field, or on the wave, the electric field will now be in the other direction, so your particle will be pushed the other way. And from physics you know that, of course, if we have a propagating electric field, we also have a perpendicular magnetic field that's going back and forth. But in terms of worrying about using the concepts of a wave to solve chemistry problems in this course, we can actually put aside the fact, and only focus on the electric field part of things, because that's what's going to be interacting with our charged particles, such as our electrons. So, other properties of waves that you probably are all familiar with but I just want to review is the idea of a wavelength. If we're talking about the wavelength of a wave, we're just talking about the distance that there is between successive maxima, or of course, we can also be talking about the distance between successive minima. Basically, we can take any point on the wave, and it's the distance to that same point later on in the wave. So, that's what we call one wavelength. We also commonly discuss the frequency of a wave, and the frequency is just the number of cycles that that wave goes through per unit time. So, by a cycle we'd basically mean how many times we cycle through a complete wavelength. So, if something cycles through five wavelengths in a single second, we would just say that the frequency of that wave is five per second. We can also mathematically describe what's going on here other than just graphing it. So, if we want to look at the mathematical equation of a wave, we want to describe -- again as I mention, what we're describing is the electric field, we're not worrying about the magnetic field here, as a function of x and t that's equal to a cosine [ 2 pi x over wavelength, minus 2 pi nu t ]. And note this is the Greek letter nu. This is not a v. Where we have E, which is equal to the electric field, what is x? STUDENT: Position. 
PROFESSOR: Yup, the position of the wave. And what about t? Yeah, so we're talking about both position and time. So what we can do if we're talking about a wave is think of it both in terms of position time, but if we're trying to visualize this -- for example if we're actually to graph this out, the easiest thing to do is keep one of these two variables constant, either the x or the t, and then just consider the other variable. So, for example, if we're to hold the time constant, this makes it a lot simpler of an equation, because what we can end up doing is actually crossing out this whole term here. So what we're left with is just that the electric field as a function of distance is a times cosine of the argument there, which is now just 2 pi x over wavelength. So, what we want to be able to do, either when we're looking at the graph or looking at the equation up there, is to think about different properties of the wave. For example, to think about at what point do we have the wave where it's at its maximum amplitude? So, if we think about that, we need to have a point where we're making this argument of the cosine such that the cosine is going to all be equal to one, so all we're left with is that a term. So, we can do that basically any time that we have an integer variable that is either zero or an integer variable of the wavelength. So, for example, negative wavelength or positive wavelength are two times the wavelength, because that lets us cross out the term with the wavelength here, and we're left with some integer multiple of just pi. So, that's sort of the mathematically how we get to a, but we can also just look at the graph here, because every time we go one wavelength, we can see that we're back in a maximum. So, I mentioned we should be able to figure out where the maximum amplitude is. You should also just looking at an equation, immediately be able to figure out what that maximum amplitude is in terms of the height of it just by looking at that a-term, here we should also be able to know the intensity of any light wave, because intensity is just the amplitude squared. So, we should immediately be able to know how bright or how intense a light is just looking at the wave equation, or just by looking at a graph. We can also do a similar thing, and I'll keep my distance from the board, but we can instead be holding x constant, for example, putting x to be equal to zero, and then all we're doing is considering the electric field as a function of t. So, in this case we're crossing out the first term there, and we're left with amplitude times the cosine of 2 pi nu times t. And, of course, we can do the same thing again, we can think about when the amplitude is going to be at its maximum, and it's going to be any time cosine of this term now is equal to one. So that will be at, for example, negative 1 over nu, or 0, or 1 over nu. And again, we can just look at our graph to figure that out, that's exactly where we're at a maximum. So, 1 over nu is another term we use and we call it the period of a wave, and the period is just the inverse of the frequency. And if we think about frequency, that's number of cycles per unit time. So, for example, number of cycles per second, whereas the period is how much time it takes for one cycle to occur. And when we talk about units of frequency, in almost every case, you'll be talking about number of cycles per second. So, you can just write inverse second, the cycle part is assumed. But you'll also frequently see it called Hertz, so, Hz here. 
So, if you're talking about five cycles per second, you can write five per second, or you can write five Hertz. The one thing you want to keep in mind though is that Hertz does not actually mean inverse seconds, it means cycles per second. So, if you're talking about a car going so many meters per second, you can't say it's going meter Hertz, you have to say meters per second. So, this really just means for frequency, it's a frequency label. Alright. So, since we have these terms defined, we know the frequency and the wavelength, it turns out we can also think about the speed of the wave, and specifically of a light wave, and speed and is just equal to the distance that's traveled divided by the time the elapsed. And because we've defined these terms, we have ways to describe these things. So, we can describe the distance that's traveled, it's just a wavelength here. And we can think about how long it takes for a wave, because waves are, we know not just changing in position, but the whole wave is moving forward with time, we can think about how long it takes for wave to go one wavelength. So, one distance that's equal to lambda. So, how much time would that take, does anyone know? So, would it take, for example, the same amount of time as the frequency? The period, that's right. So, it's going to take one period to move that long. And another way we can say period is just 1 over nu or 1 over the frequency So, now we know both the distance traveled and the time the elapsed. So, we can just plug it in. Speed is equal to the distance traveled, which is lambda over the time elapsed, which is 1 over nu. so, we can re-write that as speed is equal to lambda times nu, and it turns out typically this is reported in meters per second or nanometers per second. So, now we have an equation where we know the relationship between speed and wavelength and frequency, and it turns out that we could take any wave, and as long as we know the frequency and the wavelength, we'll be able to figure out the speed. But, of course, there's something very special about electromagnetic waves, electromagnetic radiation and the speed. And it's not really surprising for me to tell you that electromagnetic radiation has a constant speed, and that speed is what we call the speed of light, and typically we abbreviate that as c, and that's from the Latin term celeritas, which means speed in Latin. That's one of four or five Latin words I remember from four years of high school Latin, but it comes in handy to remember speed of light. And some of you may have memorized what the speed of light is in high school -- it's about 3 times 10 to the 8 meters per second. This is another example of a constant that you will accidentally memorize in this course as you use it over and over again. But again, that we will supply for you on the exam just in case you forget it at that moment. And this is a very fast speed, of course, it's about 700 million miles per hour. So, one way to put that in perspective is to think about how long it takes for a light beam to get from earth to the moon. Does anyone have any guesses? Eight seconds, that sounds good. Anyone else? These are all really good guesses, so it actually takes 1.2 seconds for light to travel from the earth to the moon. So, we're talking pretty fast, so that's nice to appreciate in itself. But other than that point, we can also think about the fact that frequency and wavelength are related in a way that now since we know the speed of light, if we know one we can tell the other. 
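Here is a small numerical sketch of these relationships-- the wave written earlier, E(x, t) = a cos(2 pi x / lambda - 2 pi nu t), together with c = lambda nu. The amplitude, the wavelength, and the Earth-Moon distance are assumed illustrative values, not numbers given in the lecture.

```python
import numpy as np

c = 3.0e8                       # m/s, speed of light as quoted above
a = 1.0                         # amplitude, arbitrary units (assumed)
lam = 500e-9                    # m, an assumed visible wavelength
nu = c / lam                    # frequency from c = lambda * nu

def E(x, t):
    return a * np.cos(2 * np.pi * x / lam - 2 * np.pi * nu * t)

print(nu)                       # ~6.0e14 Hz
print(E(0.0, 0.0), E(lam, 0.0)) # maxima: the field equals +a at x = 0, lam, 2*lam, ...
print(E(lam / 2, 0.0))          # minimum: -a, half a wavelength away
print(E(0.0, 1.0 / nu))         # +a again one period (1/nu) later

earth_moon_distance = 3.84e8    # m, assumed average Earth-Moon distance
print(earth_moon_distance / c)  # ~1.3 s, close to the ~1.2 s quoted above
```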
So, you can go ahead and switch us to our clicker question here. So, we should be able to look at different types of waves and be able to figure out something about both their frequency and their wavelength, and know the relationship between the two. So, it's up on this screen here now, so we'll work on the other one. See if you can identify which of these statements is correct, based on what you know about the relationship between frequency and wavelength and also just looking at the waves. Alright. So, let's give ten more seconds on that. Alright. So, good job. So, most people could recognize that light wave a has the shorter wavelength. We can see that just by looking at the graph itself -- we can see, certainly, this is shorter from maximum to maximum. For this one we can't even see the next maximum, so it's much longer. And then, we also know that means that it has the higher frequency, because wavelength and frequency are inversely related. And also, we know the speed of light. So, if we think about it, if it's a shorter wavelength, we'll be able to get a lot more wavelengths in, in a given time, than we would for a longer wavelength. So, we can switch back to the notes and think about what this means, and what this means when we're talking about all the different kinds of light waves we have, and I've shown a bunch here, is that if we have the wavelength, we also know the frequency of these waves. So, for example, radio waves, which have very long wavelengths, have very low frequencies. Whereas when we go to waves that have very short wavelengths, such as x-rays or cosmic rays, they, in turn, have very high frequencies. So, it's important to get a little bit of a sense of what all these different kinds of light do. You're absolutely not responsible for memorizing what the wavelengths of the different types of light are, but you do want to know their general order. So, if someone tells you they're using UV light versus x-ray light, you know that the x-ray light is, in fact, at a higher frequency. So that's the important take-away message from this slide. If we think about these different types of light: microwave light, if it's absorbed by a molecule, is at a sufficient frequency and energy to get those molecules to rotate. That, of course, generates heat, so that's how your microwaves work. If we talk about infrared light, which is at a higher frequency here and a shorter wavelength, infrared light, when it's absorbed by molecules, is actually enough to cause those molecules to vibrate. If we move up to higher frequencies and visible light, and all the way into UV light, if you shine UV light at certain molecules, it's going to have enough energy to actually pop those electrons in the molecule up to a higher energy level, which will make more sense once we talk about energy levels in atoms, but that's what UV light can do. And actually, that's responsible for the fluorescence and phosphorescence that you see where typically UV light comes in. So, if you use a black light or something and you excite something up to a higher energy level, and then it relaxes back down to its lower energy state, it's going to emit a new wavelength of light, which is going to be visible to you. X-rays are at an even higher frequency, and those are sufficient to actually be absorbed by a molecule and pop an electron all the way out of that molecule.
You can see how that would be damaging to the integrity of that molecule, that's why x-rays are so damaging -- you don't want to have electrons disappearing for no good reason from your molecules that can cause the kind of mutations we don't want to be seeing in ourselves. And then also as we go higher, we have gamma rays and cosmic rays. Within the visible range of what we can see, you also want to know this relative order that's pretty easy -- most of us have memorized that in kindergarten, so that should be fine. Just remembering that violet is the end that actually has the shortest wavelength, which means that it also has, of course, the highest frequency. So, just an interesting fact about this set of light, which we're most familiar with, if we think about our vision, it turns out that our vision's actually logarithmic and it's centered around this green frequency. So, if instead of a red laser pointer here, I had a green one, you'd actually, to our eyes, it would seem like the green one was brighter, even if the intensity was the same, and that's just because our eyes are centered and logarithmic around this green frequency set. So, using the relationship between frequency and wavelength, we can actually understand a lot about what's going on, and pretty soon we'll also draw the relationship very soon to energy, so it will be even more informative then. But I just want to point out one of the many, many groups at MIT that works with different fluorescing types of molecules, and this is Professor Bawendi's laboratory at MIT, and he works with quantum dots. And quantum dots are these just very tiny, tiny crystals of semiconductor material. They're on the order of one to ten nanometers, and these can be shined on with UV light -- they have a lot of different interesting properties, but one I'll mention is that if you excite them with UV light, they will have some of the electrons move to a higher energy state, and when they drop back down, they actually emit light with a wavelength that corresponds with the size of the actual quantum dot. So, from what we know so far, we should be able to look at any of these quantum dots, which are depicted as a cartoon here, but here we have an actual picture of the quantum dots suspended in some sort of solution and shone on with UV light, and you can see that you can achieve this whole beautiful range of colors just by modulating the size of the different dots. And we should be able to know if we're looking at a red dot -- is a red dot, it's going to have a longer wavelength, so is this a higher or lower frequency? Yeah, and similarly, if someone tells us that their dot is blue-shifted, that should automatically in our heads tell us, oh it shifted to a higher frequency. And these dots are really interesting in that you can, I'm sure by looking at this picture, already imagine just a whole slew of different biological or sensing applications that you could think of. For example, if you were trying to study different protein interactions, you could think about labeling them with different colored dots, or there's also a bunch of different fluorescent techniques that you could apply using these dots, or you could think of in-vivo sensing, how useful these could be if you could think of a way to get them into your body without being too toxic, for example. These are all things that the Bawendi group is working on. 
What they are real experts in is synthesizing many different kinds of these dots, and they have a synthetic scheme that's used by research groups around the world. The Bawendi group also collaborates with people, both at different schools and at MIT. One example, on some of their biochemistry applications is with another Professor at MIT, Alice Ting and her lab. So really what I want to point out here is as we get more into describing quantum mechanics, these quantum dots are one really good example where a lot of the properties of quantum mechanics apply directly. So, if you're interested, I put the Bawendi lab research website onto your notes. And also, Professor Bawendi recently did an interview with "The Tech." Did anyone see that interview in the paper? So, three or four -- a few of you read the paper last week. So, you can either pick up an old issue or I put the link on the website, too. And that's not just about his research, it's also about some of his memories as a student and advice to all of you. So, it's interesting to read and get to know some of these Professors at MIT a little bit better. So, one property that was important we talked about with waves is the relationship between frequency and wavelength. Another very important property of waves that's true of all waves, is that you can have superposition or interference between two waves. So, if we're looking at waves and they're in-phase, and when I talk about in-phase, what I mean is that they're lined up, so that the maxima are in the same position and the minima are in the same position, what we can have a something called constructive interference. And all we mean by constructive interference is that literally those two waves add together, such as the maxima are now twice as high, and the minima are now twice as low. So, you can also imagine a situation where instead of being perfectly lined up, now we have the minima being lined up with the maxima here. So, if we switch over to a clicker question maybe on this screen -- okay, can it be done up there to switch? So, we're still settling in with the renovations here in this room. So, why don't you all go ahead and tell me what happens if you combine these two waves, which are now out of phase? So, let's -- okay, so, why don't you all think about would happen -- we'll start with the thought exercise. You can switch back to my lecture notes then if this isn't going. Alright. So, hopefully what everyone came up with is the straight line, is that what you answered? STUDENT: Yeah. PROFESSOR: OK, very good. And I didn't make you try to draw the added, the superimposed positive construction in your notes, but I think everyone can handle drawing a straight line. So, you can go ahead and draw what happens when we have destructive interference. And destructive interference, of course, is the extreme, but you can picture also a case where you have waves that are not quite lined up, but they're also not completely out of phase. So in that case, you're either going to have the wave get a little bigger, but not twice as big or a little bit smaller. So, I think the easiest way to think about interference is not actually with light, but sometimes it's easiest to think about with sound, especially when you're dealing with times where you have destructive interference. 
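A quick numerical sketch of superposition (not from the lecture): adding a wave to an identical in-phase copy doubles the amplitude, while adding a copy shifted by half a wavelength gives the flat line from the clicker answer.

```python
import numpy as np

x = np.linspace(0.0, 4.0, 9)                   # a few sample points, arbitrary units
wave1 = np.cos(2 * np.pi * x)
in_phase = np.cos(2 * np.pi * x)               # identical wave, peaks lined up
out_of_phase = np.cos(2 * np.pi * x + np.pi)   # shifted by half a wavelength

print(np.max(np.abs(wave1 + in_phase)))        # 2.0: constructive, peaks twice as high
print(np.max(np.abs(wave1 + out_of_phase)))    # ~0 (up to rounding): destructive, a flat line
```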
Has anyone here ever been in a concert hall where they feel like they're kind of in a dead spot, or you don't quite hear as well, and if you move down just two seats all of a sudden it's just blasting at you -- hopefully not in this room. But have people experienced that before? Yeah, I've definitely experienced it, too. And really, all you're experiencing there is destructive interference in a very bad way. They try to design halls such that that doesn't happen, and I show an example of a concert hall here -- this is Symphony Hall in Boston, and I can pretty much guarantee that if you do go to this Symphony Hall, you will not experience a bad seat or a dead seat. This is described as one of the top two or three acoustic concert halls in the whole world. So, it's very well designed such that they've minimized any of these destructive interference dead spots. So, it's nice -- on a student budget you can go and get the worst seat in the house and you can hear just as well as they can hear up front, even if you can't actually see what's going on. So, another example of destructive interference is with the Bose headphones. I've never actually tried these on, but you see people with them, and they're supposed to be noise cancellation headphones. All they do is take in the ambient noise that's around you, and there's actually a battery in the headphones that then produces waves that are going to destructively interfere with that ambient noise. And that's how it actually gets to be so quiet when you have on, supposedly, these quite expensive headphones. So, that was sound as a wave, but light as a wave is the same idea. And it was really established by the early 1900s that, in fact, light behaved as a wave. And the reason that it was so certain that light was a wave was because we could observe these things -- we could see, for example, that light diffracted, and we could see that light constructively or destructively could interfere with other light waves, and this was all confirmed and visualized. But also, around the time that Thomson was discovering the electron, there were some other observations going on, and the most disturbing to the understanding of the universe was the fact that there were some observations about light that didn't make sense with the idea that light is simply a wave. And the photoelectric effect is maybe the most clear example of this. So, the photoelectric effect is the effect that if you have some metal, and you can pick essentially any metal you want, and you shine light of a certain frequency onto that metal, you can actually pop off an electron, and you can go ahead and measure what the kinetic energy of that electron that comes off is, because we can measure the velocity, and we know that kinetic energy equals 1/2 m v squared, and thanks to Thomson we also know the mass of an electron. So, this is an interesting observation, and in itself not too disturbing yet, but the important thing to point out is that there's this threshold frequency that is characteristic of the metal, and each metal has a different threshold frequency, such that if you shine light on the metal where the frequency of the light is less than the threshold frequency, nothing will happen -- no electron will pop off of that metal. However, if you shine light with a frequency that's greater than the threshold frequency, you will be able to pop off an electron.
So, people were making this observation, but this wasn't making any sense at all because there was nothing in classical physics that described any sort of relationship between the frequency of light and the energy, much less the energy of an electron that would get popped off of a metal that would basically come off only when we're hitting this threshold frequency. So, what they could do was actually graph what was happening here, so we can also graph what was happening, and what they found was that if we were at any point below the threshold frequency and we were counting the numbers of electrons that were popping off of our metal, we weren't seeing anything at all. But if you go up the threshold frequency, suddenly you see that there's some number of electrons that comes off, and amazingly, the number of electrons actually had no relationship at all to the frequency of the light. And this didn't make a lot of sense to people at the time because they thought that the frequency should be related to the number of electrons that are coming off, because you have more frequency coming in, you'd expect more electrons that are coming off -- this wasn't what people were seeing. So, what they decided to do is just study absolutely everything they could about the photoelectric effect and hope, at some point, someone would piece something together that could explain what's going on or shed some light on this effect. So, one thing they did, because it was so easy to measure kinetic energy of electrons, is plot the frequency of the light against the kinetic energy of the electron that's coming off here. And in your notes and on these slides here, just for your reference, I'm just pointing out what's going to be predicted from classical physics. You're not responsible for that and we won't really discuss it, but it just gives you the contrast of the surprise that comes up when people make these observations. And the first observation was that the frequency of the light had a linear relationship to the kinetic energy of the electrons that are ejected here. This made no sense at all to people, and again they saw this effect where if you were below that threshold frequency, you saw nothing at all. So, that was frequency with kinetic energy. The next thing that they wanted to look at was the actual intensity of the light and see what the relationship of intensity to kinetic energy is. So, what we would expect is that there is a relationship between intensity in kinetic energy, because it was understood that however intense the light was, if you had a more intense light, it was a higher energy light beam. So that should mean that the energy that's transferred to the electron should be greater, but that's not what you saw at all, and what you saw is that if you kept the frequency constant, there was absolutely no change in the kinetic energy of the electrons, no matter how high up you had the intensity of the light go. You could keep increasing the intensity and nothing was going to happen. So, we could also plot the number of electrons that are ejected as a relationship to the intensity, so that was yet another experiment they could do. And this is what they had expected that there would be no relationship, but instead here they saw that there was a linear relationship not to the intensity and the kinetic energy of the electrons, but to the intensity and the number of electrons. 
So, none of these observations made sense to any scientists at the time, and really all of these observations were made and somewhat put aside for several years before someone that could kind of process everything that was going on at once came along, and that person was Einstein, conveniently enough -- if anyone could put it together, we would hope that he could, and he did. And what he did in a way that made sense when all of us look at it, is he plotted all of these different metals on the same graph and made some observations. So, for example, here we're showing rubidium and potassium and sodium plotted where we're plotting the frequency -- that's the frequency of that light that's coming into the metal versus the kinetic energy of the electron that's ejected from the surface of the metal. And what he found here, which is what you can see and we can all see pretty clearly, is the slope of all of these lines is the same regardless of what the type of metal is. So, he fit all these to the equation of the line, and what he noticed was the slope was specifically this number, 6.626 times 10 to the negative 34, joules times seconds. And he also found that the y intercept for each one of these metals was equal to basically this number here, which was the slope times the minimum frequency required of each specific metal, so that's of the threshold frequency. And he actually knew that this number had popped up before, and a lot of you are familiar with this number also, and this is Planck's constant. Planck had observed this number as a fitting constant years earlier when he looked at some phenomena, and you can read about in your book, such as black body radiation. And what he found was he needed this constant to fit his data to what was observed. And this is the same thing that Einstein was observing, that he needed this fitting constant, that this constant was just falling right out of, for example, this slope and also the y intercept. So he decided to go ahead and define exactly what it is, this line, in terms of these new constants, this constant he's calling h, which is Planck's constant. So, on the y axis we have kinetic energy, so we can plug that in. If we talk about what the x axis is, that's just the frequency of the light that's coming in. We know what m is, m is equal to h. And then we can plug in what b is, the y intercept, because that's just the negative of h times that threshold frequency. So we have this new equation here when we're considering this photoelectric effect, which is that the kinetic energy is equal to h nu minus h nu threshold of the metal. And what Einstein concluded and observed is that well, kinetic energy, of course, that's an energy term, and h times nu, well that has to be energy also, because energy has to be equal to energy -- there's no other way about it. And this worked out with units as well because we're talking about joules for kinetic energy, and when we're talking about h times nu, we're talking about joules times second times inverse seconds. So, the very important conclusion that Einstein made here is that energy is equal to h times nu, or that h times nu is an actual energy term. And this kind of went along with two observations. The first is that energy of a photon is proportional to its frequency. So this was never recognized before that if we know the frequency of a photon or a wave of light, we can know the energy of that light. 
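To see Einstein's analysis in miniature, here is a sketch that fits a straight line to synthetic photoelectric data. The work function and frequencies below are invented for illustration, and the data are generated from KE = h nu minus the work function, so of course the fitted slope comes back as Planck's constant -- the point is just what the slope and the intercept mean.

```python
import numpy as np

h_true = 6.626e-34            # J*s, Planck's constant
work_function = 3.0e-19       # J, a made-up threshold energy for a hypothetical metal
nu = np.linspace(1.0e15, 2.0e15, 6)      # frequencies, all above threshold
ke = h_true * nu - work_function         # ejected-electron kinetic energies

slope, intercept = np.polyfit(nu, ke, 1) # fit KE versus frequency to a straight line
print(slope)                  # ~6.63e-34 J*s: the slope is Planck's constant
print(-intercept / slope)     # ~4.5e14 Hz: the threshold frequency of this "metal"
```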
So, since we know that there's relationship also between frequency and wavelength, we can do the same thing -- if we know the wavelength, we can know the energy of the light. And I use the term photon here, and that's because he also concluded that light must be made up of these energy packets, and each packet has that h, that Planck's constant's worth of energy in it, so that's why you have to multiply Planck's constant times the frequency. Any frequency can't have an energy, you have to -- you don't have a continuum of frequencies that are of a certain energy, it's actually punctuated into these packets that are called photons. And, as you know, Einstein made many, many, many very important contributions to science and relativity, but he called this his one single most important contribution to science, the relationship between energy and frequency and the idea of photons. So this means we now have a new way of thinking about the photoelectric effect, and that is the idea that h times nu is actually an energy. So, it's the energy of an incident photon if we're talking about nu where we're talking about the energy of the photon going in, so we can abbreviate that as e sub i, energy of the incident photon. We can talk about also h times nu nought, which is that threshold frequency. So this is a term we're going to see a lot, especially in your problem sets, it's called the work function, and the work function is the same thing as the threshold frequency of a metal, except, of course, that it's multiplied by Planck's constant. So, it's the minimum energy that a certain metal requires in order to pop a photon out of it -- in order to eject an electron from the surface of that metal. So this is our new kind of schematic way that we can think about looking at the photoelectric effect, so if this is the total amount of energy that we put into the system, where here we have the energy of a free electron. We have this much energy going in, the metal itself requires this much energy, the work function, in order to eject an electron. So that much energy is going to be used up just ejecting it. And what we have left over is this amount of energy here, which is going to be the kinetic energy of the ejected electron. So, therefore, we can rewrite our equation in two ways. One is just talking about it in terms only of energy where our kinetic energy here is going to be equal to the total energy going in -- the energy initial minus this energy of the work function here. We can also talk about it in terms of if we want to solve, if we, for example, we want to find out what that initial energy was, we can just rearrange our equation, or we can look at this here where the initial energy is equal to kinetic energy plus the work function. So before we go we'll try to see if we can do a clicker question for you on this, and we can, very good. So, everyone take those clickers back out and tell me, if a beam of light with a certain energy, and we're going to say four electron volts strikes a gold surface, and here we're saying that the gold surface has a work function of 5.1 electron volts, what is the maximum kinetic energy of the electron that is ejected? So why don't you go ahead and take ten seconds on that. And if you don't know, that's okay, just type in an answer and give it your best shot. And let's see what we come up with here. Alright. So, it looks like some of you were tricked, but many of you were not, so no electrons will be ejected. 
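The clicker example can be checked directly. Below is a minimal sketch of KE = h nu minus the work function, with the below-threshold case handled; the 6.0 eV photon in the second call is an assumed value, included only to show the above-threshold case.

```python
h = 6.626e-34          # J*s, Planck's constant
eV = 1.602e-19         # J per electron volt

def photoelectron_ke_eV(photon_energy_eV, work_function_eV):
    """Kinetic energy (eV) of the ejected electron, or None below threshold."""
    ke = photon_energy_eV - work_function_eV
    return ke if ke > 0 else None      # below the work function: no electron comes off

# The clicker example above: a 4 eV photon striking gold (work function 5.1 eV).
print(photoelectron_ke_eV(4.0, 5.1))   # None -> no electrons are ejected
# An assumed 6.0 eV photon, above threshold, does eject one:
print(photoelectron_ke_eV(6.0, 5.1))   # ~0.9 eV of kinetic energy

# Equivalently in terms of frequency, KE = h*nu - h*nu_threshold:
nu = 6.0 * eV / h                      # frequency of a 6.0 eV photon
print(f"nu = {nu:.2e} Hz")             # ~1.45e15 Hz
```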
The reason for that is because this is the minimum amount of energy -- hold off a sec on the packing up, so in case someone doesn't understand -- this is the minimum amount of energy that's required from the energy going in in order to eject an electron. So if the incident energy is less than the energy that's required, absolutely nothing will happen. That's the same thing we were talking about with threshold frequency. All right, now you can pack up and we'll see you on Wednesday.
https://ocw.mit.edu/courses/8-421-atomic-and-optical-physics-i-spring-2014/8.421-spring-2014.zip
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: The last topic we discussed on Monday was the Landau-Zener transition, the situation where you sweep through resonance. You've all seen the Landau-Zener formula; you all know that a crossing turns into an avoided crossing. But I tried to at least provide you additional insight by emphasizing that the whole process is absolutely coherent-- that it's fully phase coherent throughout. And what happens is there is a coherent transfer of amplitude from the state 1 to the state 2. This is nothing else than Schrodinger's equation. But I want to point out that for the short times you sweep through, there is no T2 dependence here. In other words, when I discussed what is the effective time during which the transfer of population takes place, the time here is so short that the detuning doesn't matter. Actually, the criterion which leads to this effective time during which the transfer takes place is exactly the time window where the detuning is small enough that-- to say it loosely-- it doesn't make a difference whether you're in resonance or slightly away. The atom experiences the same drive field. And so based on this criterion, we discussed that we can understand the Landau-Zener probability in the perturbative limit as a coherent process where we transfer population, we transfer amplitude, with a Rabi frequency during this effective time, delta t. It's not the only way we can look at it, but it's one way which I think is insightful. Any questions about this or what we discussed on Monday? If not, I would like to take the topic one step further and discuss the density matrix formalism. So we have so far discussed purely Hamiltonian, unitary evolution-- namely, the Schrodinger equation. And of course, unitary evolution leaves a system which is in a pure state in a pure state. It's just that the quantum state evolves. However, that means we cannot describe processes like decoherence or losses away from the two levels we are focusing on. And so now we want to use the density operator, the density operator formalism, to have a description of the two-level system which goes beyond that. So let me just-- so the Schrodinger equation deals only with pure states; it cannot describe loss of particles, loss of photons, or decoherence. Well, there is one exception. If the decoherence process is merely a state-dependent loss of atoms to a third state, then you can still use the wave function formalism. So this is the exception. If you have two states-- this is the excited state-- and all that happens is that you have some loss to some other levels with certain rate coefficients, then one can still use a Hamiltonian description, but you have to replace the eigenvalues by complex numbers. In other words, you have to add an imaginary part to the energy levels. And that means the time evolution is exponentially damped. So that's as much as you can incorporate decoherence and losses into a wave function formalism. However, many other processes require the formalism of the density matrix. And the simplest process where the wave function formalism is absolutely inadequate is the process of spontaneous emission.
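To spell out the complex-eigenvalue trick just mentioned in standard notation (a conventional way of writing it, not copied from the lecture): replacing the energy of the decaying level according to

```latex
E \;\longrightarrow\; E - \frac{i\hbar\Gamma}{2}
\qquad\Longrightarrow\qquad
\psi(t) \;\propto\; e^{-iEt/\hbar}\,e^{-\Gamma t/2},
\qquad |\psi(t)|^{2} \;\propto\; e^{-\Gamma t}
```

so the amplitude decays at the rate Gamma over 2 and the population at the rate Gamma.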
When you have a loss in the excited state, you could still describe the excited state with a complex energy eigenvalue. But the fact that whatever is lost from the excited state is added to the ground state-- there is no wave function formalism which can describe that. So for those processes and for decoherence in general, we require the use of the density operator. So I know that most of you have seen the density operator in statistical mechanics or some advanced course in quantum mechanics. So therefore, I only spend about five minutes on it. So I want to kind of just remind you or give you a very short introduction. So for those of you who have never heard about it, I probably say enough that you understand the following discussion. And for those of you who know already everything about it, five minutes of recapitulation is hopefully not too boring. So my way of introducing the density operator is to first introduce it formally. Write down a few equations for a pure state. But then in a moment, add something to it. So if you have a time-dependent wave function, which we expand into eigenfunctions, then we can, in these spaces, define arbitrary operators by matrices. We want to describe our system by finding out measurable observables, expectation values of operators, which of course, depend on time. And this is, of course, nothing else than the expectation value taken with a time-dependent wave function. But now we can expand it into the bases m, n and we can then rewrite it as a matrix, which is a density matrix. Or, simply as the trace of the density matrix with the operator. And what I introduced here as the density matrix can be written as psi of t, psi of t. And the matrix elements are given by this combination of amplitudes when we expand the wave function psi into its basis. So this density matrix has diagonal and off-diagonal matrix elements. The diagonal matrix elements are called the populations, the populations in state n, and the off-diagonal matrix elements are called coherences. OK, so this is just rewriting the Schrodinger equation expectation values in a matrix formalism. Yes, please. AUDIENCE: Why are you starring the coefficients? PROFESSOR: Oh, there's one star too many. Thank you. AUDIENCE: That makes sense. PROFESSOR: But the reason why I wrote it is that we want to now add some probability to it. We do not know for sure that the system is in a pure state. We have probabilities P i that the system is in a quantum state psi i. So we add another index to it. And when we perform the expectation value-- there's also one star too many. When we perform the expectation value, we sort of do it for each quantum state with a probability P i. So we are actually-- and this is what I wanted to point out. This was the purpose of this short discussion, that we are now actually performing two averages. One can be regarded as the normal quantum mechanical average when you find the average value or the expectation value for a quantum state. So this is sort of the statistics or the averaging, which is inherent in quantum physics. But then in addition, there may simply be another probabilistic average because you have not prepared the system in a pure state, or the system may undergo some stochastic forces and wind up in different states. So there are two kinds of averages which are performed. And the advantage of the density matrix formalism is that both kinds of averages can be done simultaneously in a very compact formalism.
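As a concrete illustration of these two averages, here is a minimal numerical sketch (my own example, not from the lecture; the two states and the 50/50 probabilities are made up) showing the populations, the coherences, and an expectation value computed as a trace:

import numpy as np

# Two pure states of a two-level system, written in the {|1>, |2>} basis.
psi_a = np.array([1.0, 1.0]) / np.sqrt(2)   # an equal superposition
psi_b = np.array([1.0, 0.0])                # purely state |1>

# Ensemble: the system is in psi_a or psi_b with probability 1/2 each.
probs = [0.5, 0.5]
states = [psi_a, psi_b]

# rho = sum_i p_i |psi_i><psi_i|  (the probabilistic average is built in here)
rho = sum(p * np.outer(psi, psi.conj()) for p, psi in zip(probs, states))

print("populations (diagonal):", np.real(np.diag(rho)))
print("coherence rho_12:      ", rho[0, 1])

# Expectation value of any operator A is Tr(rho A); here A = sigma_x.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
print("<sigma_x> = Tr(rho sigma_x) =", np.real(np.trace(rho @ sigma_x)))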
So therefore, I put this probabilistic average in now into the definition of the density matrix. Or I can write the density matrix in this way. And with this extended definition of the density matrix, both kinds of averages are done when I determine the expectation value and operator by performing the trace with the density matrix. A lot of properties of the density matrix, I think are-- you're familiar with many of those. For instance, that Schrodinger's equation for the density matrix becomes this following equation. You can derive that by-- if you take the Schrodinger equation and you apply the Schrodinger equation to each state psi i. And then you do the averaging with a probability P i. You find that the Schrodinger equation for each state psi i turns into this equation for the density matrix. Let me just write down that the purpose now is we have two averages here. One is the quantum mechanic average and one is sort of an ensemble average with the probabilities P i. I want to discuss in a few moments the density matrix for two-level system. So I have to remind you of two properties, that the density matrix is normalized to unity. So there's probability of unity to find the system in one of the states. When we look at the square of the density matrix, a trace of rho square, this is simply the probability-- the sum of the probability squared. And this is smaller than 1. And the only way that it is one is for pure state. So pure state is characterized by the fact that there is only-- that we can find the basis where only one of the probabilities, P i is non-vanishing. And then, of course, almost trivially the trace rho is 1 and the trace rho square is 1. So, so far I've presented you the density matrix just as an elegant way of integrating the two averages into one formalism. And in essence, this is what it is. But you can now also use the density matrix if the whole system undergoes a time evolution, which is no longer unitary. No longer described by a Hamilton operator. Because you're interested in the time evolution or a small system which is part of a bigger system. The bigger system is always described by unitary time evolution, but a smaller system is usually not described by unitary time evolution. And that's when the density matrix becomes crucial. Of course, you can see this is just you describe the smaller system and you do some probabilistic average what the other part of the system does. And therefore, it's just another version of doing two averages. But this is sort of why we want to use the density matrix in general. So we want to use the density matrix for non-unitary time evolution. And the keyword here is that this is often the situation for open systems where we are interested in a small system, but it is open to a bigger system. Like, we're interested to describe an atom, but the atom can spontaneously emit photons into other parts of [INAUDIBLE] space. And we're not interested in those other parts of [INAUDIBLE] space. So an open system for this purpose is where we limit our description to a small part of a larger system. Again, an atom interacting with all the modes of the electromagnetic field, but we simply want to describe the atom. And then, we cannot use a wave function anymore. We have to use the density matrix. OK After these preliminaries, I want to now use the density matrix formalism for arbitrary two-level systems. So what is the most general Hamiltonian for the most general two-level system? 
Well, the most general Hamiltonian is the most general Hamiltonian we can construct with 2 by 2 matrices. And the basis set to expand the 2 by 2 matrices are the Pauli matrices. So if you expand the Hamiltonian into the unity matrix, sigma x, sigma y, and sigma z, we have four coefficients, four amplitudes, which are real for a Hermitian Hamiltonian-- omega 1, omega 2, omega 3. And here is something which I've called omega bar. By appropriately shifting what is the 0 point of energy, we can always get rid of this. So this is just a matter of definition. So therefore, the most general Hamiltonian for any two-level system can be written in this very compact way, that it is the scalar product of the vector omega-- omega 1, omega 2, omega 3-- with the vector sigma of the three Pauli matrices-- sigma x, sigma y, sigma z. OK, so this is a way to write down the most general Hamiltonian for a two-level system. Now, we describe two-level systems by a density matrix, by a statistical operator, which is also a 2 by 2 matrix. And the most general density matrix can also be expanded into its four components. Sort of the basis set of matrices is the unity matrix and the three Pauli matrices. So 1, 2, 3. Of course, this time we cannot throw away the unity matrix because otherwise the density matrix would have no trace and there would be no probability to find the particle. But we can, again, write it in a compact form that it is 1/2-- yes, I'm using the fact now that the trace of rho is r0. And this, by definition, or by conservation of probability, is 1. So therefore, r0 is not a free parameter. It's just the sum of all the probabilities to find the system in any state. And the non-trivial part is then the scalar product of this vector r-- rx, ry, rz-- with the vector of the three Pauli matrices. Well, so we have our most general Hamiltonian. We have our most general density matrix. And now we can insert this into the equation of motion for the density matrix. Which, as I said before, is just a reformulation of Schrodinger's equation. And if you insert the Hamiltonian and the density matrix into this equation, we find actually something which is very simple. It says that this vector r, which we call the Bloch vector-- the derivative of the Bloch vector is given by the cross product of the vector omega, the coefficients with which we parametrized the Hamiltonian, cross r. The derivation is straightforward. And you will be asked to do that on your homework assignment number 1. But it has a very powerful meaning. It tells us that an arbitrary two-level system with an arbitrary Hamiltonian can be regarded as a system where we have a vector r which undergoes precession. This is the time evolution of the system. So this is a powerful generalization from the result we discussed previously where we found that if you have an arbitrary quantum-mechanical spin, the time derivative can be written in that way. So previously, we found it for a pure state, but now we find that it's even valid for a general density matrix and its time evolution. So what I've derived for you is a famous theorem, which is traced back to Feynman, Vernon, and Hellwarth. It's sort of a famous paper. So this famous theorem-- and I've summarized it here for you-- says that the time evolution of the density matrix for the most general two-level system is isomorphic to pure precession.
And that means it's isomorphic to the behavior of a classical moment, a classical magnetic moment, in a suitable time-dependent magnetic field. So when you have a Hamiltonian which is characterized by-- the most general Hamiltonian is characterized by the three coefficients-- omega 1, omega 2, omega 3. But if you would create a classical system where omega 1, omega 2, and omega 3 are the time-dependent x, y, z components of a magnetic field, then the precession of a magnetic moment would be exactly the same as the time evolution of a quantum-mechanical density matrix. Any questions? So in other words, we've started out with rotating frames and rotation, and now we've gone as far as I will go. Namely, I've in a way told you that for an arbitrary quantum-mechanical two-level system, the time evolution is just precession. It's rotation. There is nothing more complicated possible. Well, unless we talk about decoherence. If we have such a Hamiltonian, we know, of course, that a pure state will stay pure forever. And you can immediately verify that if you look at the trace of rho square. If the trace of rho square is 1, we have a pure state. And now we have parametrized the density matrix with the Bloch vector components-- r1, r2, r3. So in those components, the trace of rho square can be written in this way. And of course, r0 square was constant. This was our normalization of 1. So the question is now what happens when we have an arbitrary time evolution, which we know now according to the Feynman, Vernon, Hellwarth theorem. The arbitrary time evolution of the Bloch vector can be written as omega cross r. So this equation tells us immediately that the length of the vector r is constant because r dot is always orthogonal to r. And therefore, the length of the vector r is not changing. So what we have derived says that with the most general Hamiltonian, the length of the vector r will be constant. And therefore, the trace of rho square will be constant. This is constant because r dot is perpendicular to r. So this will tell us that a pure state will just precess with the constant length of its Bloch vector forever. However, we know that in real life some coherences are lost, and now we have to introduce something else. So this does not describe loss of coherence. So now we are just one tiny step away from introducing the Bloch equations. We will fully feature the optical Bloch equations in 8.422. But since we have discussed two-level systems to quite some extent, I cannot resist showing you now, in three or four minutes, what the Bloch equations are. And then when you take the second part of the course, you will already be familiar with it. So let me just now tell you what has to be added to do this step from the previous formalism to the Bloch equations. And this is the one step you have to do. We have to include relaxation processes into the description. So my less than five-minute way to now derive the Bloch equations for you goes as follows. I first remind you that everything has to come to thermal equilibrium. In other words, if you have an atomic system, if you have a quantum computer, whatever system you have and you prepare it in a pure state, you know if you wait forever, the system will be described by a density matrix, which is the density matrix of thermal equilibrium, which has only diagonal matrix elements. The populations follow the Boltzmann factor. And everything is normalized by the partition function. So we know that this will happen after long, long times.
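Before relaxation is added, it may help to see this lossless precession numerically. Here is a small sketch (my own illustration, not lecture material; the field vector, time step, and initial state are arbitrary choices) integrating r-dot = omega cross r and checking that the length of the Bloch vector, and hence Tr(rho squared) = (1 + |r|^2)/2, stays constant:

import numpy as np

omega = np.array([0.3, 0.0, 1.0])   # plays the role of the "magnetic field" (rad per unit time)
r = np.array([0.0, 0.0, 1.0])       # Bloch vector of a pure state, |r| = 1

dt, steps = 1e-3, 20000
for _ in range(steps):
    # midpoint (RK2) step of r_dot = omega x r
    k1 = np.cross(omega, r)
    k2 = np.cross(omega, r + 0.5 * dt * k1)
    r = r + dt * k2

print("|r| after evolution:", np.linalg.norm(r))     # stays ~1
print("Tr(rho^2):", 0.5 * (1.0 + np.dot(r, r)))      # stays ~1: still a pure state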
So no matter with what density matrix we start out, if we start with a density matrix rho, a pure state, for instance, there will be inevitably some relaxation process which will restore rho to rho T, to the thermal equilibrium. Now, how this happens can be formulated in a microscopic way. And we will go through a beautiful derivation of a master equation and really provide some insight into what causes relaxation. But here for the purpose of this course, I want to say, well, there is relaxation. And I want to introduce now, in a phenomenological way, damping and damping times. So the phenomenological way to introduce damping goes as follows. Our equation of motion for the density matrix was that this is a unitary evolution described by-- the Schrodinger equation was that the density matrix evolves according to the commutator with the Hamiltonian. But now-- and I have to put big quotation marks around it because this is not a mathematically exact way of writing it. But now I want to introduce some term which will damp the density matrix to the thermal equilibrium density matrix with some equilibration time, Te. I mean, this is what you can always do if you know the system is damped. You have some coherent evolution, but eventually you add a damping term, and you formulate the damping term in such a way that asymptotically the system will be damped to the thermal equilibrium. In other words, the damping term will have no effect on the dynamics once you've reached equilibrium. So it does all the right things. Of course, we have to be a little bit careful because everything is either an operator or a matrix. And I was just adding the damping term as you would probably do it to a one-dimensional equation. So therefore, let me be a little bit more specific. In many cases, you will find that there are two distinctly different relaxation times. In other words, the system will usually have at least two physically distinct relaxation times. They are traditionally called T1 and T2. T1 is the damping time for population differences. So this is the damping time to shovel population from some inverted state or some other state into the equilibrium state. That usually involves the removal of energy out of the system. So it's an energy decay time. And if you would inspect our parameterization of the Bloch vector, population or population differences are described by the z-component, the third component of the Bloch vector. Well, we have other components of the Bloch vector which correspond to coherences. The off-diagonal matrix elements of the density matrix. And they're only nonzero if you have two states populated with a well-defined relative phase. When the system, the quantum mechanical system, loses its memory of the phase, the r1 and r2 components of the Bloch vector go to 0. So therefore, the time T2 is a time which describes the loss of coherences, the dephasing time. And in most situations, well, if you lose energy, you've also lost-- if you lose energy because you quench a quantum state, you've also lost the phase. So therefore in general, T2 is smaller than T1. Often by a lot. So with those remarks about the two damping times, I can now go back to the equation at the top, which was sort of written with quotation marks, and write it in a more accurate way as a matrix equation for the damping of the components of the density matrix expressed by the Bloch vector.
In other words, the equation of motion for the z-component of the Bloch vector, which is describing the population, has a coherent part, which is this generalized precession. And then, it has a damping part, which damps the populations to the equilibrium value with a damping time T1. And then we have the corresponding equations for the x and y, or the 1 and 2 components of the optical Bloch vector. We just replace the z index by x and y from the equation above, but then we divide by a different relaxation time, T2. So what we have found here, these are the famous Bloch equations, which were introduced by Bloch in 1946. Introduced first for magnetic resonance, but they're also valid in the optical domain. For magnetic resonance, you have a two-level system, spin up and spin down. In the optical domain, you have a ground and excited state. In the latter case, they're often referred to as the optical Bloch equations. Any questions about that? Yes, please. AUDIENCE: So what determines [INAUDIBLE]? PROFESSOR: Well, that's a long discussion. We spend a long time in 8.422 discussing various processes. But just to give you an example, if you have a gas of atoms and there is a slightly inhomogeneous magnetic field, that would mean that each atom, if you look at its precession motion, precesses at a slightly different rate. And the atoms will decohere. They will all eventually wind up with a different phase, so that if you look at the average coherence, it's equal to 0. So any form of inhomogeneity which is not quenching a quantum state, which is not creating any form of deactivation of the excited state, can actually decohere the phase. And these are contributions to T2. So often, contributions to T2 come from an inhomogeneous environment, but they are not changing the population of states. Whereas what contributes to T1 are often collisions. Collisions in which an atom in an excited state collides with a buffer gas atom and undergoes a transition from the excited to the ground state. So these are two distinctly different processes. One is really a collision and energy transfer. Each atom has to change its quantum state. Whereas decoherence can simply happen because there is a small perturbation of the energy levels due to external fields. And then, the system as an ensemble loses its phase. In the simplest way, you can assume inhomogeneous broadening. But you can also assume, if the whole ensemble is subject to fluctuating fields, then since you don't know how exactly the fields fluctuate, after a characteristic time you no longer have a phase coherent system. The phase at a later time is no longer deterministically related to the phase at which you prepared it. And that would mean the system has dephased. And this dephasing time is called the T2 time. Nancy. AUDIENCE: I think I have two things. First, you said that it's generally true that T2 is less than T1. Is it ever true that it's not the case? PROFESSOR: Oh. There is one exception. And that's the following. Let me put it this way, every process which contributes to T1 will also contribute to T2. But there are lots of processes which only contribute to T2. So therefore, in general, T2 is much faster because many more processes can contribute to it. However, now if you ask me, is it always true? Well, there is one glitch. And this is the following. T1 is the time to damp populations. And that's the damping of psi square. T2 is due to the damping of the phase. And this is actually more a damping time of the wave function itself.
And if you have a wave function psi which is damped with a damping time tau, psi squared is damped with twice the damping rate. So if the only process you have is, for instance, spontaneous emission, then you find out that the damping rate for population is gamma. This is the definition of the spontaneous emission rate. But the damping rate 1 over T2 is 1/2 gamma. But this is simply because of the way we have defined it, one involves the square of the wave function. The other one involves simply the wave function. So there is this factor of 2 which can make-- by just a factor of 2-- T1 faster than T2. But apart from this factor of 2, if T2 would be defined in a way which would incorporate the factor of 2, then T2 would always be faster than T1. AUDIENCE: Yeah, it makes sense [INAUDIBLE]. I can't imagine that if the system has a smaller T1, it still has any coherence left in it. PROFESSOR: So maybe to be absolutely correct, I should say this. T1 is much larger than-- is larger than or equal to T2 over 2. In general, we have even the situation that T1 is much, much larger than T2. But with this factor of 2, I've incorporated this subtlety of the definition. Other questions? Yes. AUDIENCE: Just a question about the real motivation for using the Bloch equations [INAUDIBLE]. I understand that [INAUDIBLE]. But you mentioned before that you can't describe spontaneous emission with a Hamiltonian formalism. PROFESSOR: Yes. AUDIENCE: But couldn't you use-- [INAUDIBLE]. Don't you still get spontaneous emission out of the coupling into the continuum? The emission into the different modes? You don't necessarily need [INAUDIBLE]. PROFESSOR: Yes, but let me kind of remind you of this. If you are interested in a quantum state and it decays to a level, but we're not really interested in what this level is and we're not keeping track of the population there, then we can describe the time evolution of the excited state with a Hamiltonian. Because of the imaginary part, the Hamiltonian is no longer Hermitian. And this is what Victor Weisskopf theory does. It looks at a system in the excited state and looks at the time evolution of the excited state. But if you want to include in this description what happens in the ground state, you are not having this situation. You have this situation. And what eventually will happen is you can look at a pure state which decays. And this is what is done in Victor Weisskopf theory. But if you want to know now what happens in the ground state, well, I'm speaking loosely, but that's what really happens. Every spontaneous emission adds something to the ground state, but in an incoherent way. So what is being built up in the ground state is not a wave function. It's just population which has to be described with a density matrix. Or in other words, if you have a coherent superposition between excited and ground state, you cannot just say spontaneous emission is now increasing the amplitude to be in the ground state. It really does something fundamentally different. It puts population into the ground state with-- I'm loosely speaking now, but with a random phase. And this can only be described probabilistically by using the density matrix. But what you are talking about, the Victor Weisskopf theory, is pretty much this part of the diagram. We prepare an excited state, and we study in all its glorious detail, with the many modes of the electromagnetic field, how the excited state decays. OK.
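To make the structure of these equations concrete, here is a minimal numerical sketch of the Bloch equations with the two phenomenological damping times (everything here-- the drive vector, T1, T2, and the equilibrium value-- is an arbitrary illustrative choice, not a calculation from the lecture): precession, plus a T1 term for the population component and a T2 term for the coherences.

import numpy as np

omega = np.array([1.0, 0.0, 0.5])   # effective drive/detuning vector in the rotating frame
T1, T2 = 5.0, 1.0                   # energy decay time and dephasing time (T2 < T1 here)
rz_eq = -1.0                        # equilibrium population difference (everything in the ground state)

def bloch_rhs(r):
    prec = np.cross(omega, r)                   # coherent part: generalized precession
    damp = np.array([-r[0] / T2,                # coherences damp with T2
                     -r[1] / T2,
                     -(r[2] - rz_eq) / T1])     # population damps toward rz_eq with T1
    return prec + damp

r = np.array([0.0, 0.0, 1.0])       # start fully inverted
dt = 1e-3
for _ in range(int(20.0 / dt)):     # midpoint (RK2) integration up to t = 20
    k1 = bloch_rhs(r)
    k2 = bloch_rhs(r + 0.5 * dt * k1)
    r = r + dt * k2

print("Bloch vector at t = 20:", r)
print("|r| =", np.linalg.norm(r), "(no longer conserved once damping is included)")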
Actually with that, we have finished one big chapter of our course, which is the general discussion of resonance, classical resonance, and our discussion of two-level systems. AUDIENCE: But [INAUDIBLE], wouldn't you have to do a sum over every single mode [INAUDIBLE]? Which would be the exact same thing you do when you do a partial trace over the environment. Isn't the end result sort of the same thing, that you have to do some [INAUDIBLE] infinite sum and integral over all the [INAUDIBLE]? PROFESSOR: You need a sum, but-- AUDIENCE: That's where the decoherence comes from? PROFESSOR: Yes. But if you're interested in only the decay of an excited state, it can decay into many, many modes, but all these different modes provide a contribution to the decay rate gamma. So at the end of the day, you have a Hamiltonian evolution with a damping rate gamma. And this damping rate gamma is the sum of the contributions of all the other modes. So in other words, the loss of population from the excited state, you just incorporate it by adding a damping term to the Schrodinger equation because you're not keeping track of the other modes where the population goes. You're not keeping track. You just say, the excited state is lost. You're not interested whether the atoms are now in the ground state or some other state. All you are describing is the loss rate from the excited state. And this is possible by simply doing-- by adding damping terms to the Schrodinger equation. In other words, what I'm saying is actually fairly simple. If you have a coherent state and you lose it, you just lose amplitude. What is left is coherent. When it's gone, it's gone. You have a smaller amplitude, smaller probability. And that's simple to describe. What is harder to describe is if you accumulate population in the ground state and the population arrives in incoherent pieces. How to treat that, this is more complicated. But simply the decay of a pure state, it's just-- you have e to the i omega t, which is a coherent evolution, and then you add an imaginary part and this is a damping term. So what I'm saying, it's sort of subtle but it's also very trivial. I don't know if this addresses your question. In the end, in general you need a density matrix. I just wanted to sort of emphasize that there is a little bit of decoherence where you can still get away with a wave function description. And actually, Victor Weisskopf theory is a wonderful example. OK, so we have discussed resonance, and we have discussed in particular two-level systems. And if I wanted, we could now continue with two-level systems and talk about the wonderful things you can do with two-level systems. Absorbing photons, emitting photons, and all that. But let's put that on hold for a few weeks. And I think what we should first do is realize, where do those levels come from? And we discuss where those levels come from in our discussion of atoms. So our big next chapter is now atoms, or atomic structure. And we build it up in several stages. Well, first things first. And the first things are the big chunks of energy which define the electronic structure. We discuss electronic structure for one electron and two electron atoms, hydrogen and helium. We don't go higher in the periodic table. But then we talk about other contributions to the energy of atoms, other contributions to the level structure of atoms. And this will start with fine structure, the Lamb shift. We bring in properties of the nucleus by discussing hyperfine structure.
And then as a next big chapter, we will learn how external fields, magnetic fields, electric fields, and electromagnetic fields will modify the level structure of atoms. So by going through all those different layers, we will arrive at a rather complete description. If you have an atom in the laboratory, what determines its energy level and the transitions between those energy levels? So this is our agenda for the next few lectures. Today, we start with single electron atom with a hydrogen atom. And I cannot resist to start with some quotes from Dan Kleppner, who I sometimes call Mr. Hydrogen. So there is some beautiful piece of writing in a reference frame in Physics Today, "The Yin and Yang of Hydrogen." I mean, those of you who know Dan Kleppner know that he's always said hydrogen is the only atom, other atom he wants to work with. Other atoms are too complicated. And he studied-- actually, hydrogen was-- he did a little bit on alkali atoms, of course, but hydrogen was really the central part of his scientific work. Whether he studied Rydberg states in hydrogen or Bose-Einstein condensation in hydrogen. And this column in Physics Today, he talks about the yin and yang. The simplicity of hydrogen. It's the simplest atom. But if you want to work with hydrogen, you need vacuum UV because the step from the 1s to the 2p transition is-- Lyman-alpha is vacuum UV at 121 nanometer. So it's simple, but challenging. And hydrogen is the most pristine atom. But for those of you who do Bose-Einstein condensation, it's the hardest atom to Bose condense. Because the physical properties of hydrogen, it's simple in its structure. But the properties of hydrogen, in particular the collision cross-section, which is important for evaporative cooling, is very, very unfavorable. So that's why he talks about the yin and the yang of hydrogen. Let me just show you the first sentence of this paper, of this reference frame. Oops. Just a technical problem to make this fit the screen. I think I select it. What's going on? Yep. So now it's smaller. I can move it over there. Well, why don't we read it together? It's a tribute to hydrogen, a tribute to famous people. Viki Weisskopf was on the faculty at MIT. I met him, but he was already retired at this point. But then, Kleppner interacted with him. And you see the first quote, "To understand hydrogen is to understand all of physics." Well, it simply says that if you understand some of this paradigmatic systems in physics, you understand all of physics. I would actually say, well, you really have to understand the harmonic oscillator, the two-level system, and hydrogen. And maybe a little bit about three-level systems. But if you understand, really, those simple systems-- they're not so simple. But if you understand those so-called simple system very well in all its glorious detail, then you have really understood, maybe not all of physics, but a hell of a lot of physics. And this quote goes on that, "To understand hydrogen is to understand all of physics." But then Viki Weisskopf said, "Well, I wish I had understood all of hydrogen." And this is sort of Dan's Kleppner's wise words. For me, hydrogen holds an almost mystical attraction. Probably because I'm among the small band of physicists who actually confront it, more or less, daily. So that's what we are starting out now to talk about hydrogen. I know that a discussion of the hydrogen atom, the solution of the Schrodinger equation for the hydrogen atom is in all quantum mechanics textbooks. I'm not doing it here. 
I rather want to give you a few insightful comments about the structure of hydrogen, some scaling of length scales and energy levels, because this is something we need later in the course. So in other words, I want to highlight a few things which are often not emphasized in the textbook. So let's talk about the hydrogen atom. So the energy levels of the hydrogen atom are described by the Rydberg formula. This actually follows already from the simple Bohr model. But of course, also from the Schrodinger equation. And it says that the energy levels-- let me write it in the following way. It depends on the electron mass, the electron charge, h bar squared. It has a reduced mass correction. And then, n is the principal quantum number. It scales as 1 over n squared. So this here is the reduced mass factor. This here is called the Rydberg constant R, sometimes with the index infinity because it is the Rydberg constant which describes the spectrum of a hydrogen atom where the nucleus has infinite mass. If you include the reduced mass correction for the mass of the proton, then this factor which determines the spectrum of hydrogen is called the Rydberg constant with an index H for hydrogen. You find the electronic eigenfunctions as the solution of Schrodinger's equation. And the eigenfunctions have a simple angular part, which are the spherical harmonics. We are not talking about that. But there is a radial part, the radial wave function. So if you solve it, if you find those wave functions, there are a number of noteworthy results. One is, in short form, the spectrum is the Rydberg constant divided by n squared. I want to talk to you about your intuition for the size of the hydrogen atom, or for the size of hydrogen-like atoms. So what I want to discuss is several important aspects about the radius, or the expectation value of the position of the electron. And it's important to distinguish between the expectation value for the radius and the inverse radius. The expectation value for the radius is, well, a little bit more complicated. It is n squared times a0, multiplied by the factor 1 plus 1/2 times (1 minus l times (l plus 1) over n squared). Whereas the result for the inverse radius is very simple. It is just 1 divided by n squared a0. What I've introduced here is the natural length scale for the hydrogen atom, which is the Bohr radius. And just to be general, mu is the reduced mass. So it's close to the electron mass. Well, the one thing I want to discuss with you-- we will need it later for the discussion of quantum defects, for field ionization, and other processes-- we have to know what the size of the wave function is. And so usually, if you wave your hands, you would say the expectation value of 1/r is 1 over the expectation value of r. But there are now some important differences. I first want to sort of ask you, why is the expectation value of 1/r, why does it have this very, very simple form? AUDIENCE: Virial theorem? PROFESSOR: The Virial theorem. Yes. We know that there is a fairly simple form for the energy eigenvalues. It's 1 over n squared. Well, the Coulomb energy is e squared over r. So if the only energy of the hydrogen atom were Coulomb energy, it's very clear that 1/r, which is proportional to the Coulomb energy, has to have the same simple form as the energy eigenvalue. Well, there is a second contribution to the energy in addition to Coulomb energy. This is kinetic energy. But due to the Virial theorem, the kinetic energy is actually proportional to the Coulomb energy. And therefore, the total energy is proportional to 1/r. And therefore, 1/r has to scale exactly as the energy.
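These two expectation values are easy to check symbolically. Here is a short sketch (my own check, not part of the lecture) using sympy's hydrogen radial wavefunctions, in units where a0 = 1 and with Z = 1 assumed; it confirms that the inverse radius depends only on n, while the radius itself picks up an l-dependence:

from sympy import symbols, integrate, oo, Rational, simplify
from sympy.physics.hydrogen import R_nl

r = symbols("r", positive=True)

for n, l in [(1, 0), (2, 0), (2, 1), (3, 1)]:
    R = R_nl(n, l, r, 1)                              # radial wavefunction, a0 = 1, Z = 1
    mean_r    = integrate(R**2 * r**3, (r, 0, oo))    # <r>   = integral of R^2 r^2 * r dr
    mean_rinv = integrate(R**2 * r,    (r, 0, oo))    # <1/r> = integral of R^2 r^2 / r dr
    # closed forms quoted above:
    pred_r    = n**2 * (1 + Rational(1, 2) * (1 - Rational(l * (l + 1), n**2)))
    pred_rinv = Rational(1, n**2)
    print(n, l, simplify(mean_r - pred_r), simplify(mean_rinv - pred_rinv))   # both differences are 0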
Since the energy until we introduce fine structure is independent of l, only depends on the principal quantum number n, we find there's only an n-dependence. But if you would ask, what is the expectation value for the radius? You find an l-dependence because you're talking about a very different quantity. So let me just summarize what we just discussed. We have the Virial theorem, which in general is of the following form. If you have potential energy which is proportional to radius to the n, then the expectation value for the kinetic energy is n/2 times the expectation value for the potential energy. The most famous example is n equals 2, the harmonic oscillator. You have an equal contribution to potential energy of the spring and kinetic energy. Well, here for the Coulomb problem, we discuss n equals minus 1. And therefore, the kinetic energy is minus 1/2 times the potential energy. So this factor of 2 appears now in a number of relations and that's as follows. If you take the Rydberg constant, the Rydberg constant in CGS units is-- well, that's the Coulomb energy at the Bohr radius. But the Rydberg constant is 1/2 of it. So the Rydberg constant is 1/2 of another quantity, which is called 1 Hartree. We'll talk, probably not , today, but on Monday about atomic units, about sort of fundamental system of units. And the fundamental way-- the fundamental energy of the hydrogen atom, the fundamental unit of energy is whatever energy you can construct using the electron mass, the electron charge, and h bar. And what you get is 1 Hartree. If you ever wondered why the Rydberg is 1/2 Hartree, what happens is in the ground state of hydrogen, you have 1 Hartree worth of Coulomb energy. But then because of the Virial theorem, you have minus 1/2 of it as kinetic energy. And therefore, the binding energy in the n equals 1 ground state, which is 1 Rydberg, is 1/2 of the Hartree. So this factor of 1/2 of the Virial theorem is responsible for this factor of 2 for those two energies. I usually prefer SI units for all calculations, but there's certain relations where we should use CGS units. Just as a side remark, if you want to go to SI units, you simply replace the electron charge e squared by e squared divided by 4 pi epsilon0. OK. So I've discussed the hydrogen atom. It's also insightful and you should actually remember that or be able to re-derive it for yourself. How do things depend on the nuclear charge z? Well, if you have a nuclear charge z, the Coulomb energy goes up by-- well, if you have a stronger attraction. If you would go to helium nucleus or even more highly-charged nucleus and put one electron in it. Because of the stronger Coulomb attraction, all the length scales are divided by z. So everything is smaller by a factor of z. So what does that now imply for the energy? Well, you have a Coulomb field which is z times stronger, but you probe it now at a z times smaller radius. So therefore, the energies scale with z squared. Let me formulate a question because we need that later on. So if you have a hydrogen-like atom and the electron is in a state with principal quantum number n. And let's assume there is no angular momentum. So what I'm writing down for you is the probability for the electron to be at the nucleus. This will be very important later on when ewe discuss hyperfine structure because hyperfine is responsible-- for hyperfine structure, what is responsible is the fact that the electron can overlap with the nucleus. 
So this factor will appear in our discussion of hyperfine structure. And what I want to ask you is, how does this quantity depend on the principal quantum number n and on z? And I want to give you four choices. Of course, for dimensional reasons, everything is 1 over the Bohr radius cubed because it's a density. But you cannot use dimensional analysis to guess, how do things scale with z and with n? So here are your four choices. Does it scale with z, z squared, z cubed? Does it scale with n squared, n cubed, n to the 6? If you don't know it, just make your best guess. OK, one part should be relatively straightforward. And this is the scaling with z. Let me just stop it. So the exact answer is that psi n 0 0 at the origin squared is 1 over pi a0 cubed times z cubed over n cubed. So the correct answer is this one. Let me first say-- OK, I gave you four choices and it's difficult to distinguish all of them. But the first one you should have gotten rather simply, and this is the z-scaling. Because the scaling with z is the following, that everything-- if you write down the Schrodinger equation, if you have z, you replace e squared by z e squared. And I actually just mentioned it five minutes ago, that all length scales-- the Bohr radius is h bar squared over electron mass times e squared-- actually scale with 1/z. So if all length scales go as 1/z, the density goes with z cubed. So therefore, one should have immediately narrowed down the choice. It should be A or C because they have the correct scaling with z. The scaling with n is more subtle and there was something surprising I learned about it. And this is what I want to present to you in the last three or four minutes. So for the z-scaling, just remember that the length scales as 1/z. Therefore, densities scale as z cubed. The interesting thing about the length scaling is-- and I just want to draw your attention to it because it can be confusing-- that in hydrogen we have not only one length scale, but two length scales. We have mentioned one of them already, which is the energetic length scale 1/r. 1/r is the Coulomb energy. Because of the Virial theorem, it's proportional to the total energy. And that's what you know, what you remember when you wake up in the middle of the night out of deep sleep, that the energy of hydrogen is 1 over n squared. So therefore, this energetic length scale is a0 times n squared. However, if you look at the wave function of hydrogen, you factor out-- when you solve the radial equation, you factor out an exponential. There's sort of a polynomial and then there is an exponential decay. And the characteristic length in the exponential decay of the wave function is n a0 over z. So therefore, when we talk about wave functions with principal quantum number n, there are two length scales. 1 over the expectation value of 1/r scales with n squared. But the characteristic length scale in the exponential part of the radial wave function scales with n and not with n squared. And it is this exponential part of the wave function which scales with n which is responsible for the probability to find the electron at the nucleus. Which, as I said before, the z-scaling is simple but the n-scaling is not n to the 6. It's n cubed. And this is really important. And this describes the scaling with n for everything which depends on the presence of the electron at the nucleus. One is the quantum defect and the other one is the hyperfine structure. Let me just give you one more scaling.
I've discussed now what happens for 0 angular momentum. For finite angular momentum states, psi is proportional to r to the l. So therefore, if you ask what is psi squared, it scales with r to the 2l. And at least for large n, the n-scaling is, again, 1 over n cubed. OK, that's what I wanted to present to you today. Any questions? OK, so we meet again on Wednesday next week.
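As a quick aside, the 1 over n cubed and z cubed scaling of the s-electron density at the nucleus can be checked with sympy's hydrogen wavefunctions (a sketch in units where a0 = 1; not from the lecture itself):

from sympy import symbols, pi, simplify
from sympy.physics.hydrogen import R_nl

r = symbols("r", positive=True)

for Z in (1, 2):
    for n in (1, 2, 3):
        R0 = R_nl(n, 0, r, Z).subs(r, 0)          # radial part at the origin (a0 = 1)
        density = R0**2 / (4 * pi)                # |psi_n00(0)|^2, since Y_00 = 1/sqrt(4 pi)
        print(Z, n, simplify(density - Z**3 / (pi * n**3)))   # each difference is 0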
https://ocw.mit.edu/courses/5-112-principles-of-chemical-science-fall-2005/5.112-fall-2005.zip
Let's pick up from where we were on Friday. We had discovered the nucleus. Now we were faced with the problem, as all the scientific community was in 1911, in trying to understand the structure of the atom. Where was the nucleus in the atom? Where was the electron? How were they bound? How did they hang together? And we talked about the fact that between the electron and the nucleus, the force of interaction is the Coulomb force. And we started talking about how, at that time, the only equation of motion that was going to allow us to figure out how the electron and nucleus moved under the influence of this Coulomb force was Newton's equations of motion, in particular the Second Law, F equals ma. And so, in order to apply that equation of motion, we needed a model for the atom. And the simplest and most obvious thing to do was to suggest the planetary model. After all, that is how the astronomical bodies moved around the sun. And so the model that is set up is one where this electron has a uniform circular motion around the nucleus with a well-defined radius, which we called r star. We said that given this, the acceleration was a constant. It was given by v squared over r, the linear velocity squared over r. We plugged that into F equals ma, put in the Coulomb force, and from that we were able to calculate the kinetic energy of that electron going around the nucleus. Well, the reason I want to calculate the kinetic energy from this model is because I want to ultimately calculate the total energy. And why I want to calculate the total energy is going to be obvious in just a few minutes. My goal is to get the total energy. Actually, I am using my notes from Friday because I didn't finish them. You may need to get them out. This will probably often be the case, that I won't quite finish the notes from the other lecture. I will start out the next lecture where I left off, so you should bring your previous day's notes to class if, in fact, you use them during class. I want the kinetic energy plus the potential energy because I want to add both of them up to get the total energy. I know the kinetic energy. Now, we need the potential energy. What is the potential energy? Well, the potential energy is the integral of the operating force over the appropriate limits. In this case, if our force of interaction is the Coulomb force, which I will just represent as F of r, I am going to integrate this from r star out, and this is going to be minus the integral of the force. Now, some of you may have seen this before. This is a general case, the potential energy of anything is minus the integral of the operating force over the appropriate coordinates. If you have seen it before, that is fine, you are happy. If you have not seen this before, you are panicked. Don't panic. I do not hold you responsible for this. You will see it in 8.01 later on this semester. When you see it later on, you can come back here and say, okay, now I know what is going on. But I just need it right now to make a point about the total energy of the system. And that is what is going to lead me to the conundrum. I need the potential energy. It is the integral of the force. Let me plug in, here, my force that is operating, e squared over 4 pi epsilon nought r squared. I do that integral and put in the appropriate limits.
It is minus e squared over 4 pi epsilon nought r star. Now I have kinetic energy plus potential energy. Let me add them up. The kinetic is one-half e squared 4 pi epsilon nought r star. Potential, minus e squared over 4 pi epsilon nought r star. The result is minus one-half e squared 4 pi epsilon nought r star. That is the total energy here of this particular system. Well, why I wanted this total energy is to show you that this total energy is negative. What that negative means to us is that the system is bound. The electron and the nucleus are stuck together. And I can show you that maybe a little more clearly if I draw an energy level diagram. Let me plot here the total energy. And I am plotting it as a function of r, the distance between the electron and the nucleus. Well, what you can see is that for very large r, the energy here is going to be zero. Way out here, for very large r, where we have the electron and the nucleus separated infinitely apart, the energy is zero. And, of course, as you bring them closer together, the energy goes down. And when you are exactly, and we calculated this, at r star here, well, then the total energy is minus one-half e squared over 4 pi epsilon nought times r star. If you brought the electron and the nucleus into this value here of r star, the energy would change like that. But the big point is this energy is negative, or it is lower than the electron and the nucleus separated. That means that the electron and nucleus are stuck together. You are going to have to put this much energy into the system in order to pull them apart. That is the big point here, is that this model so far looks like everything is hunky-dory. Everything is working. The electron stuck to the nucleus. It is not going anywhere. It looks terrific. But here comes the conundrum. The conundrum is that classical electromagnetism, which was pretty well understood by this time, 1911, 1912. Maxwell's equations, that was down pat. But what classical electromagnetism says is that when you have a charge, and this electron is a charge, that is accelerating, that charge has to be emitting radiation. It has to be giving off energy. After all, that is actually how an antenna works. In an antenna, what you are doing is taking charge and sloshing it, accelerating it. When it accelerates, it emits radiation. That is how you broadcast. That is true, it was known in 1911. Synchrotron radiation works the same way. When you have a synchrotron, the way you get synchrotron radiation is essentially by accelerating charge. That is a given and is actually, again, something you will talk about in much more detail in 8.02. But the point here is if this charge is being accelerated, and it is, then it must be giving off radiation. It must be giving off energy. Well, if it is giving off energy, we look at our energy expression here. That must mean that the energy in the system is going down because it is losing the energy. It is giving it off to radiation. If E is going down, it is getting more negative here. The only way for E to get more negative is for this r star right here to be changing. Is for r star to be getting smaller and smaller and smaller. Well, we could set up another set of equations using what we know from classical electromagnetism and from what we have already done here. What we would find is that this value here of r star would go to zero in t equal 10^-10 seconds if r was originally on the order of an angstrom to begin with. Here is the problem. 
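Before going on, here is a quick numeric sketch of the energies just derived (my own illustration: r star is taken to be 1 angstrom, roughly the known size of an atom, and the constants come from scipy):

from scipy.constants import e, epsilon_0, pi

r_star = 1e-10                                    # assumed orbit radius: 1 angstrom
coulomb = e**2 / (4 * pi * epsilon_0 * r_star)    # e^2 / (4 pi eps0 r*), in joules

kinetic   = 0.5 * coulomb        # from F = ma for uniform circular motion
potential = -coulomb             # from minus the integral of the Coulomb force
total     = kinetic + potential  # = -1/2 e^2 / (4 pi eps0 r*)

print("kinetic   = %+.2f eV" % (kinetic / e))
print("potential = %+.2f eV" % (potential / e))
print("total     = %+.2f eV  (negative: the electron is bound)" % (total / e))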
Classical equations of motion coupled with classical electromagnetism, they are making a prediction that my atom is not going to live more than 10^-10 seconds. Because in 10^-10 seconds, that electron is on top of the nucleus. We no longer have an atom that was already known to have a volume associated with a diameter that is about an angstrom. The classical way of thinking is making a prediction that is not consistent with the observations at that time. And even now, it is predicted that the atom essentially kind of annihilates, collapses in 10^-10 seconds. And that is the problem that the scientific community had in 1911. That is the problem we have right now. And they had it for 10, 12 years. Now you can say, what is wrong here? Well, it is possible, and they were thinking about this, too. It is possible that maybe this force is wrong, this Coulomb force. That is a possibility. Or, of course, maybe it is the equations of motion that are wrong. That is possible. Or, maybe it is classical electromagnetism that is wrong. Well, of course what it is going to turn out to be is the equations of motion, F equals ma. Bottom line is that you cannot use classical mechanics to explain the motion of this microscopic particle, the electron, in the constrained environment of an atom. That is the bottom line. We need different mechanics. We cannot use classical mechanics to describe how that electron hangs on that nucleus, how they are bound. And so that was the problem. This signaled that something was really amiss in the scientific community, in the world, at that time. That is our problem now, too. What is the next step? Well, historically the clues about why the electron did not actually collapse into the nucleus, like classical physics predicted, came from a completely different area of discussion. It came from the discussion of the wave-particle duality of light and matter. It was long believed that matter, with its particle-like behavior, was distinct from light, which was this transmission of energy through space. But, in the late 1800s and early 1900s, there were a few experiments that appeared on the horizon that began to suggest that maybe this boundary between matter with its particle-like behavior and radiation with its wave-like behavior was not as rigid as thought. And, in fact, what we are going to see is that radiation has both wave-like properties and particle-like properties. It depends on the particular experiment that you do which one of those behaviors you see. And, consequently, matter behaves both as a particle and a wave. Again, it depends on exactly what experiment you do, which one of those properties you observe. What we are going to do right now is put aside the discussion of the structure of the atom. We are going to put it aside until next Monday. We have to do that because we need some more information in order to take a big leap to get us out of this constraint of classical mechanics. And those clues, as I said, came from this discussion of the wave-particle duality of light and matter. And that is what we are going to be talking about for the next three lectures. Then we are going to come back and tie in those results to the structure of the atom. Of course, where that is going to lead us is a new equation of motion called quantum mechanics. That is where we are going. Let's start off by talking about radiation or light.
We are going to talk about its wave-like properties, then Wednesday we are going to talk about the particle-like properties of light, and Friday we are going to talk about the wave-like properties of matter. That is where we are going. Let's talk about waves here. You all know that waves are some periodic variation of a quantity. A water wave, for example, is a periodic variation of the level of water. At some points in space, the water level is high. At other points, the water level is low. Sound wave. Well, a sound wave is the periodic variation of the density of air. At some points in space, the air is very dense. At other points in space, the air is not dense. Well, light or radiation is a period variation of an electric field, as I depict here on this slide. Electric field versus position. There is a periodic variation of the electric field. Now, exactly what is an electric field? Some of you know this, some of you don't, but an electric field is literally the space through which the Coulomb force operates. For example, if we have a negatively charged plate and a positively charged plate here. The space through which the Coulomb force is operating here, and the Coulomb force is operating because we have two plates here that are oppositely charged. The electric field is the space through which the Coulomb force operates. If we put a positive charge in that space, you know what is going to happen. In this coordinated system, the positive charge is going to float up. Because the negatively charged plate is up above. If we reversed the potential difference and put a positively charged particle in this electric field, in this space, it is going to move down because now the negatively charged plate is lower. This electric field here has not only magnitude -- You can imagine here the magnitude is given by the difference in the potentials of these plates. The larger the difference, the larger the magnitude. But it also has direction. In one case, it is pointed this way. In the other case, it is pointed that way. And that is reflected here on this plot of the electric field here. What you see is that right here, the magnitude of the electric field is small. As you move along in x, that magnitude increases, goes to a maximum, then turns around and at some point literally is zero. And then the electric field changes direction and its magnitude increases in the opposite direction. Increases, increases, gets to a point, then turns around and becomes zero again. If you have a charge in a radiation field and you put it right here -- Well, it would be pulled in one direction. If you put it over here, it would be pulled in the other direction. We have a magnitude and we have a direction. Now, not only is light a periodic variation of the electric field in space, it is also a periodic variation of the electric field in time. That is, this is a picture of that field, that one instant in time. We will call it t equals 0. However, that electric field moves. It propagates. And the distance, or the time it takes for the electric field here to move over one wavelength, I have shown this as a star. The time it takes for this maximum to go from here to here, one wavelength, is defined as one period. And a period is given by one over nu, where nu is the frequency of the radiation. It is the number of cycles per second. In other words, if you were sitting here at x equals 0, you were tied at x equals 0, and you were just watching this electric field come by. 
You would see a maximum in that electric field, one maximum every second if the frequency is one Hertz. In other words, the frequency is the number of maxima you would see pass by you per second. Well, we have a unit to characterize frequency. I call it cycles per second. It is cycles per second, but the formal unit is Hertz. Hertz is inverse seconds. We leave out the number of cycles. The number of cycles is implied in the unit of Hertz. To give another example, here, suppose we had some radiation and the frequency of that radiation was one Hertz. Suppose we had an electron, an electron is charged, and we put it here at x equals 0 and we tie it at x equals 0. What is going to happen to this electron? Well, what is going to happen is that this electron is going to be pulled down and then it is going to be pushed back up once every second because the frequency, here, is one Hertz. It is charged, and we are tying it at x equals 0. It is going to go like this. It is going to oscillate once every second if this frequency is one Hertz. Here it goes. An electron is pulled down and then pushed back up once every second. Now, we, of course, can write an equation to describe this oscillation of the electric field in both space and time. x is the position variable, t, the time variable, and I have written it down here. I will explain this more in just a moment, but what I also want to point out is that an oscillating electric field always, always, always has perpendicular to it an oscillating magnetic field. That is well described by Maxwell's equations. Again, you are going to see that in 8.02. And the magnetic field here has essentially the same functional form and characteristics as this electric field. And, because it does, I am just going to talk about the electric field. Here is the expression for the magnetic field. I just call it H. But, again, it is a function of position and time. Here is an illustration, just the variation of the electric field. Light, radiation is actually a variation in space and time of both the electric and the magnetic field. That is why it is electromagnetic radiation. Now, let me show you on the 8.02 website. Let me get that rolling. There it is. Now we have to start it. All right. One of these is the electric field. The other one is the magnetic field. This is a simulation that 8.02 has made for you. You can go and look at it on the 8.02 website, but you can see it propagating here in time, and you can see the variation in space of this electromagnetic field. Let's look at this functional form just a little more carefully, just to make sure everybody is on the same page. I think many of you have seen this before. What we are going to do, because we have two variables, is we are going to hold one variable constant and plot it as a function of the other variable, just to explain the parameters that go into this functional form. At time t equals 0, if in this equation here I stick in t equals 0, I have a form that looks like this. It is just the cosine function in x. And you can see that the amplitude goes from positive A to minus A. And so what you see is that this A in front of the cosine, the physical meaning of it is just the maximum amplitude. If you were given a functional form with a number in front of a cosine, well, you could read off the amplitude immediately. The other parameter that characterizes this wave is the wavelength. It is the distance between two successive maxima or two successive minima.
And you can also see here that the field is going to be at its maximum amplitude whenever this x is an integral multiple of the wavelength, lambda, 2 lambda, 3 lambda, or minus lambda, minus 2 lambda or zero. If you were given a waveform and there was a number in front of the x, you can almost, by inspection, tell what the wavelength is. That number would be equal to 2 pi over lambda. Now what we are going to do is hold x constant and set it equal to zero, and then plot this functional form as a function of time. Again, we have the cosine function, oscillates from plus A to minus A. Now the time between two successive maxima or minima is what we spoke earlier of as the period. It is the time for one cycle. In other words, is it one over the frequency. And you get the maxima then whenever the time is an integral multiple of the period, whenever time is 1 over nu, 2 over nu, 3 over nu, or minus 1 over nu, minus 2 over nu, or zero. These are the characteristics of the functional form, amplitudes, wavelengths, frequencies. Now, I told you that the period was given by 1 over nu. Let's just do a quick proof that the period is actually 1 over nu, one over frequency. How are we going to do that? Well, what I said was the definition for a period was the time it takes the wave to move one wavelength. If this is the wave at t equals 0, this then coming up here should be the wave at one period later. And so, if we moved over exactly one cycle, what this means is that at one period later the functional form ought to look exactly like it did at t equals 0. If I take my general expression for the waveform and plug in t equals 1 over nu, I get this. What you can see at first glance is that it doesn't really look like this, or at least not just yet, but we are going to make it look like this and we are going to do so legally. What are we going do? This just repeats that equation. You can already see we have some cancellation here. These two nu's go away, so I just have cosine ((2 pi x) over lambda minus 2 pi). In order to simplify this, I am going to need a trigonometric identity, which you may or may not remember, cosine (alpha minus beta) is the cosine alpha times cosine beta plus the sine of the alpha sine beta. I am going to let 2(pi)x be alpha, and beta will be 2pi. I am going to plug that in. Here, we can see some nice simplification. This cosine 2pi, of course, is one. The sign of 2pi is zero, this term goes away, and what I have left is A cosine 2 pi x over lambda at t equals 1 over nu. And, indeed, that is the same functional field as the field at t equals period is equal to 1 over nu. Now, this wave also propagates in space. It moves. It goes from here to here. And another important characteristic of electromagnetic reaction is the speed with which it propagates. Let's just quickly calculate what that speed is. We have enough information to do that. Speed is always distance traveled divided by time elapsed. And we said that at t equals 0, this is what our waveform looked like. We also said that one period later, this is what our waveform looked like. We know at one period that the waveform moved over one wavelength. The speed is the distance traveled, which is a wavelength, divided by the time elapsed, which is 1 over nu, the period. Therefore, the speed is lambda times nu. That is the speed with which this wave propagates. And, of course, you already know that all electromagnetic radiation has a constant speed of about 3x10^8 meters per second, or we call it c. 
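A minimal numerical sketch of the two results just derived, assuming an arbitrary amplitude and a 500 nanometer wavelength (both are illustrative choices, not values from the lecture): the waveform A cos(2 pi x / lambda - 2 pi nu t) repeats after one period, 1 over nu, and it advances one wavelength per period, so lambda times nu is the propagation speed c.

```python
import numpy as np

# Minimal numerical sketch; the amplitude and wavelength are arbitrary choices.
A = 1.0                   # maximum amplitude
lam = 500e-9              # wavelength in meters (visible light, for example)
c = 3.0e8                 # speed of light in m/s
nu = c / lam              # frequency, from c = lambda * nu
T = 1.0 / nu              # period, 1 over nu

def E(x, t):
    # E(x, t) = A cos(2 pi x / lambda - 2 pi nu t)
    return A * np.cos(2 * np.pi * x / lam - 2 * np.pi * nu * t)

x = np.linspace(0.0, 3 * lam, 1000)

# One period later the waveform looks exactly as it did at t = 0.
print(np.allclose(E(x, 0.0), E(x, T)))        # True

# The wave moves one wavelength per period, so its speed is lambda / T = lambda * nu.
print(lam / T, "m/s")                          # about 3e8 m/s
```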
And what that is, is the product of the wavelength times the frequency. The electromagnetic spectrum, of course, is infinitely wide. And here is the electromagnetic spectrum. We won't do this in any kind of detail, but I just want you to note here that on the long wavelength end, we have what we call radio waves. And on the short wavelength, then, we have our gamma rays and cosmic rays. And, in the case of the gamma and the cosmic rays, because the wavelength is small, lambda is small, that means those waves have a high frequency. In the case of the radio waves, because those wavelengths are long, that means those waves have a lower frequency because the frequency times the wavelength is a constant. It is this c. It is 3x10^8 meters per second. And, of course, right in here, a very small region, narrow region of the electromagnetic spectrum, are the light waves that are sensitive to our eye. What you do need to know is that the red wavelengths are longer and the blue wavelengths are shorter. Again, the important thing is lambda times nu is always, for every kind of radiation, equal to a constant. And that constant is c. Now, the other thing that you just need to know is the relative ordering here in wavelengths. You do need to know that microwaves are longer wavelengths than gamma rays. All MIT students should know that. And one other thing I might say, because we are going to talk about this a little later in the course. See these microwaves? Well, molecules will absorb microwaves, take it in. That kind of radiation is going to set the molecule rotating. Molecules will absorb infrared radiation, and that kind of radiation is going to set the molecules vibrating. Molecules will absorb visible and ultraviolet radiation. What that is going to do is promote an electron to an excited state. Then sometimes those electrons, in the excited state, want to relax back down to the ground state. When they do so they give off radiation. That is the origin of fluorescence and sometimes phosphorescence. Then sometimes molecules will also fluoresce if they absorb X-rays, but with X-rays, if a molecule absorbs them, it also kicks out an electron. And we will be looking at that in a few days to identify the energy levels in atoms and molecules. This, I think, you are familiar with. So far I have just told you what electromagnetic radiation is, how we characterize it: speed, frequency, wavelength, maximum amplitude. But what I have not shown you, yet, is any evidence that indeed light has wave-like characteristics. And to do that we are going to do the experiment that essentially was done to demonstrate the wave-like behavior of light, and that is Young's two-slit experiment. This is the late 1800s. What was done was to take a source of monochromatic radiation. We are going to use 6 angstroms, 633 nanometers. It is a helium neon laser. And it is going to impinge on just a thin metal plate. It does not have to be metal. It can be anything. But what we did was poke two holes in it, made two slits. And naively you might think, if you looked at a screen out here, that this screen will light up in spots that are directly opposite those slights. Because, after all, light travels in straight lines. And so if the slits here are 0.005 meters apart, you might think that the two bright spots on the slit will be about 0.02 inches apart. Well, of course, that isn't the case. What you really see is an array of bright spots. 
And Christine has up there in the projection booth a helium neon laser that is shinning behind two slits. You've got really beautifully now, Christine. That is great. And what you see is that there is a whole array of spots. There aren't just two spots. There is a bunch of spots here, bright spot, dark spot, bright spot, dark spot. You've also got another pattern superimposed on that. It almost looks like you would see the single slit diffraction, too, on top of the double slit, but we won't get into that. But this is not just two spots. Let's see if we can try to understand how this pattern arises, what this pattern comes from. Well, waves have the property of superposition. Superposition means that if I take a wave and have it in space, but now I take a second wave and put it in the same place in space but make it such that the maxima of both waves are in the identical place in space, what I have is a situation where the two waves add that property of addition of waves -- When they are in the same place in space, that property is called superposition. That is the property of waves. And in this particular case, we are going to have what we call constructive interference. They are going to add up such that the amplitude here of the resulting wave is going to be twice the amplitude of each of the individual waves. This is constructive interference. On the other hand, I can have two waves in the same place in space, but they can be positioned so that the maximum of one wave is at the same point in space as the minimum of the other. And because we have these positive and negative amplitudes, well, then these are going to cancel when they add up and we are going to have the null result. We are going to have no intensity. That is called destructive interference. Well, in order to understand how this property of interference gives rise to these array of bright spots in the two slit experiment, let me actually use water waves as an example to try to understand why we get this array of spots, or this row of bright spots and dark spots. Here is the beach. Here is the water. This is the top view. Here is the water. Here is the sand. Here is where I wanted to be all weekend. And the waves are rolling in to the shore. There are the wave fronts. And then suppose I get ambitious and, for whatever perverse reason, I decide to build a barrier to prevent these waves from coming onto the beach. Except I poke two holes in the barrier, two little holes. Well, you know what is going to happen. When the wave approaches that barrier, well, through that little hole a little bit of the wave is going to sneak through. And because that little hole is really pretty little, what is going to happen is that the wave front is going to spread out isotropically. And so that wave front is going to look like a semicircle centered on that little hole. And, of course, this wave front is going to keep propagating. And it propagates out. And then soon enough, a wavelength later, another wave sneaks through and I have two semi-circles. And the distance between those two semi-circles is lambda. That is the wavelength. That is the wave crest. That is the maximum of the wave. Keep going. That propagates out. Keep going. That propagates out. Well, at the same time that the waves are sneaking out through that little hole, waves are sneaking out through this little hole. And I will color them green. That wave propagates out and keeps propagating. The other one sneaks through and keeps propagating. 
And now let me clean up the drawing a little bit. And I am going to call this slit one. The green waves are the waves that have come through slit one. The blue waves are the ones that have come through slit two. And the distance between any two successive maxima here, or any two semi-circles is, of course, lambda. And lambda is the same for slit one and slit two. Now, I want you to look at this spot that I just circled right here. Right here what do you see? Interference. Absolutely. You have two maxima at the same place in space. You are going to have constructive interference right there. What about this spot? Constructive interference. What about this spot? Right. Everywhere along that line you are going to have constructive interference. Now, let me just tell you one other thing. We have every constructive interference all along this line. Now look right at this point here. What you see is you have the superposition of the blue wave that has come from slit two, and this blue wave has traveled out from slit two a distance four lambda. One, two, three, four lambda. That is the radius. It has traveled out a distance four lambda. It is constructively interfering with a wave coming from slit one that has traveled out a distance three lambda. One, two, three. The difference in the distance traveled by those two waves that are constructively interfering is one lambda. Let's keep going in order to understand this diagram. Let's look at this spot. Right here, what do you have? Constructive interference. Right here you have constructive interference. If you kept going you would see, everywhere along this line, constructive interference. Now let's look at the difference in the distance traveled by the waves that are constructively interfering along that line. Well, you see the green wave here? The wave that is constructively interfering is one that has traveled out a distance two lambda. That is, r sub one is equal to two lambda. It is interfering with this wave front that has traveled out a distance four lambda. The difference in the distance traveled by those two waves is two lambda. 4 lambda minus 2 lambda equals 2 lambda. I think on your notes, it is actually this case that I have written it down. Here is another point of constructive interference. Here is another point of constructive interference. Everywhere along this line, we have constructive interference. And, if you analyze this, the difference in the distance traveled would be zero. What you would expect, if you were to image this, you'd expect right here very bright spot, very bright spot, very bright spot. This is going to be symmetric around the center, so there will be a bright spot out here, a bright spot out there. Let's look at this actually in real life in a water tank. There we go, up here on the side boards. Here are the waves coming this way onto some barrier, and here are the holes. Here is one hole. Here is the other hole. And then these bright semi-circles are the wave fronts. And what I want you to notice, and you have to kind of look out here, right there you see a whole bunch of very bright spots. Well, if this were light and we had a screen then right here we would see the screen light up. And then right here you see kind of nothing. That nothing is destructive interference. That would be a dark spot if, in fact, this were light and we were looking at a screen. Then here is another very bright spot. Here is another very bright spot. 
This is on a website from the University of Colorado, which, if you are not familiar with, is actually kind of a very neat website. It has some very elementary topics in it, but it also has some topics that even you would be interested in. And that is actually the name of the website. And so what is going on here, in the case of the light, is just what we have explained. We've got this line of constructive interference that is going to result on the screen as a very bright spot. And then another line with another bright spot and another line with a very bright spot. And this is symmetric around the zero. Right at this point we have constructive interference. In between we have destructive interference. Constructive, destructive, constructive. And that is the origin of the many different bright spots. And now there is a condition that has to obtain in order for there to be maximum constructive interference, and that is this condition. The difference in the distance traveled of the two waves that are interfering to give us that maximum constructive interference has to be an integral multiple of the wavelength. I will explain this a little bit more starting on Wednesday. Okay. See you then.
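Here is a rough sketch of where the row of bright and dark spots comes from, under assumed numbers (a 633 nanometer helium-neon wavelength, a 0.1 millimeter slit separation, and a 2 meter slit-to-screen distance; none of these are the lecture's exact values). Two equal-amplitude waves superposed on the screen give a time-averaged intensity that goes as cosine squared of pi times the path difference over lambda, which is maximal exactly where the difference in the distance traveled is an integer multiple of the wavelength, the condition stated above.

```python
import numpy as np

# Illustrative numbers, not the ones used in lecture.
lam = 633e-9      # helium-neon wavelength, m
d = 0.1e-3        # slit separation, m
L = 2.0           # distance from the slits to the screen, m

# Positions on the screen, measured from the centerline.
y = np.linspace(-0.05, 0.05, 100001)

# Distance traveled from each slit (placed at +d/2 and -d/2) to each screen point.
r1 = np.sqrt(L**2 + (y - d / 2.0) ** 2)
r2 = np.sqrt(L**2 + (y + d / 2.0) ** 2)
path_difference = r2 - r1

# Two equal-amplitude waves superposed: time-averaged intensity goes as
# cos^2(pi * path_difference / lambda), maximal when the path difference
# is an integer multiple of lambda.
intensity = np.cos(np.pi * path_difference / lam) ** 2

print(intensity[len(y) // 2])                          # 1.0 at the center, where the difference is zero
print("fringe spacing is roughly", lam * L / d, "m")   # about 1.3 cm for these numbers
```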
https://ocw.mit.edu/courses/8-01sc-classical-mechanics-fall-2016/8.01sc-fall-2016.zip
We're now going to introduce the concept of potential energy. Let's begin by considering a system where a conservative force is acting. So I'll consider a conservative force, which I'll call F sub c. So for such a force, the work integral is path independent. So the work integral for this force is the integral of F sub c dot ds, taken along the path going from point A to point B. And for a conservative force, this integral does not depend upon the path from A to B. It's independent of the path between A and B. So it depends only upon the endpoints. So this is a path independent integral. And since it depends only upon the endpoints, I can write it, since it's going to be an integral from point A to point B-- this integral must be equal to some function of the final point. So some function of r sub B minus some function of the initial point r sub A. And to just get this integral from A to B, in our usual way of evaluating a definite integral, it's going to be equal to some function of r B minus some function of r A, since the integral depends only upon the endpoints. Now, let's call this function-- I'm going to make a sort of funny choice here-- so let's call this function minus U as a function of the position vector r. And we'll see the reason for this funny choice of minus sign in just a moment. So now with this definition, my work integral, which again, is the integral of F sub c dot ds from point A to point B is now minus U of r B minus minus U of r A. So in other words, that's minus U of r B-- so minus minus gives me a plus U of r sub A. For shorthand, I can write that as minus U sub B plus U sub A. And since we start out at point A and go to point B, notice that I can also write this as the negative of the change in U. Since the final value of U is U at r B and the initial value is U at r A, so this minus U B plus U A is equal to minus delta U, the change in U as we go from the initial to the final position. And note that in addition to that, given that this is the work integral, I can summarize that by writing that-- so I'll say, note that delta U is equal to the negative of the work done going from point A to point B. Now, let's write the work kinetic energy theorem using this newly introduced U function. So the work kinetic energy theorem, which tells us that the work done, which we've seen is minus U sub B plus U sub A is equal to the change in kinetic energy delta K, which I can write as K sub B minus K sub A. Or I could also write that as 1/2 M V B squared minus 1/2 M V A squared. So this is just me stating that the work done on the system is equal to the change in kinetic energy. And I can write the work in terms of my function U that I've introduced here as minus U sub B plus U sub A. So I'm going to rearrange this equation now-- basically the one involving U's and the one involving kinetic energies-- so that I have all the terms involving point A on one side and all the terms involving point B on the other side. So rearranging, I get that at point A 1/2 M V A squared plus U sub A is equal to at point B 1/2 M V B squared plus U sub B. Now, notice however, that there is nothing special about how I chose the points A and B. They're completely arbitrary. So that means that this equation must be true for any points A and B. And what that means is that each side must be equal to the same constant for any point in the system. So in fact, we can write that K plus U for any point must be equal to some constant, which I'm going to call E sub mech. So K here is the kinetic energy.
U is my function that I introduced, and we're going to call it the potential energy. And E sub mech-- and remember, this E sub mech here is a constant. E sub mech is something that we call the total mechanical energy. Now, what we've done here is that we've shown that the total mechanical energy, which is the sum of the kinetic energy and the potential energy, is a constant under the action of a conservative force. In other words, if we look at this equation and look at how it changes with time, the change in the kinetic energy, plus the change in the potential energy is equal to the change in the total mechanical energy. And this is 0 for our conservative force. So in other words, the change in kinetic energy is balanced by the change in potential energy, such that the sum is 0 when the force acting is conservative. Now, we've now introduced the very important concept of the potential energy that is associated with the conservative force. And we see that the change in the potential energy, the way we defined it, the change in potential energy is equal to the negative of the work integral for our conservative force going from point A to point B. Now in fact, it's actually only the change in the potential energy that has physical significance. We'll be concerned with potential energy differences or changes. The actual value of the potential energy itself doesn't matter. We're free to choose any convenient reference point, or 0 point, for measuring the potential energy. It's equivalent to choosing a coordinate origin when we're talking about positions. Now, the potential energy change is related to the work done by conservative forces. But we know that in general, work can also be done by non-conservative forces. Although, that work by non-conservative forces will depend upon the path taken from point A to point B. So in general, the total work is given by the sum of the conservative work-- the work done by conservative forces, which we can relate to a potential energy change, and the non-conservative work done. And it's this total work that tells us what the change in the kinetic energy is. Now we'll soon see that in the presence of non-conservative forces, the total mechanical energy, which is K plus U, is not a constant.
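A minimal numerical check of the two statements above, using gravity near the Earth's surface as the conservative force; the mass, heights, and times are arbitrary illustrative numbers. It verifies that the work done equals minus the change in U, and that K plus U stays constant along a free-fall trajectory.

```python
import numpy as np

# Toy check with gravity near Earth's surface as the conservative force.
# The mass, heights, and times are arbitrary illustrative numbers.
m, g = 2.0, 9.8              # kg, m/s^2

def U(y):
    # Potential energy with the reference point chosen so that U(0) = 0.
    return m * g * y

# Work done by gravity going from y_A to y_B equals minus the change in U.
y_A, y_B = 5.0, 1.0
W = -m * g * (y_B - y_A)             # integral of F dy with F = -m g
print(W, -(U(y_B) - U(y_A)))         # both are +78.4 J

# K + U along a trajectory: release the mass from rest at y_A and let it fall.
t = np.linspace(0.0, 0.8, 9)
y = y_A - 0.5 * g * t**2
v = -g * t
E_mech = 0.5 * m * v**2 + U(y)
print(np.allclose(E_mech, E_mech[0]))   # True: the total mechanical energy is constant
```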
https://ocw.mit.edu/courses/5-111sc-principles-of-chemical-science-fall-2014/5.111sc-fall-2014.zip
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. CATHY DRENNAN: So now, we're going to move on to talk about spontaneous change. And so this is today's handout. So spontaneous reaction is a reaction that proceeds in the forward direction without any kind of outside intervention, like heat being added, for example. It just goes in that direction. So we can talk about the following reactions are spontaneous at constant pressure. And we'll see later that temperature can make a difference between whether something's spontaneous or not. But constant pressure, here's an example, iron plus oxygen. And what is this in layman's term an example of? AUDIENCE: Rust. CATHY DRENNAN: Rust, yes. And many of you are probably aware of this, rust is a spontaneous process. It's something that people try to do something about. You don't want your car to rust. If you're new to New England and you're from a part of the country that doesn't get so cold, you'll look at people's cars and you're like, wow, look at all that rust all over them. Yes, rust happens, especially in New England. And delta H here is negative. Is this endothermic or exothermic? AUDIENCE: Exothermic. CATHY DRENNAN: Exothermic, minus 824. Here is another spontaneous process. This molecule is ATP. And it will hydrolyze, which means react with water, forming ADP. So ATP is triphosphate. ADP is diphosphate. So one of the phosphates-- and here's the phosphate-- comes off. It hydrolyzes off. And this is a spontaneous process. And we also have a delta H0 of minus 24 kilojoules per mole. And remember, when we oxidize glucose in our body, we store that energy in ATP. And we want that ATP to be around. And then when we break ATP apart, it releases the energy. So this is a very important biological process. And you have a negative value, exothermic reaction. But there's a few other examples of spontaneous reactions. One of them is this one. And we've probably all experienced this. If you're from New England, you've seen snow melt or ice melt, solid to liquid. If you're from a hot part of the world, you probably had ice cubes in your nice, refreshing drink with maybe a little umbrella on the top. Anyway, everyone, I think, has seen ice melt. But here, delta H0 is of positive value. It's endothermic. Also, if you have ammonium nitrate, this will just come apart in a spontaneous reaction. Delta H here is plus 28 kilojoules per mole. So is delta H the key to spontaneity? It is not-- plays into it, but it is not the determining factor. So if delta H0 is not the key to spontaneity, what is? It is free energy, yes, particularly Gibbs free energy, or delta G. And I'm really happy they decided to add the Gibbs free energy, because another thing of energy would be a lot. So having this free energy having abbreviation of G, I think is a good thing-- so Gibbs free energy. So Gibbs free energy depends on delta H. But it also depends on another term, which is T delta S-- temperature and delta S, change in entropy. So delta G is the predictor of whether a reaction will go in the forward direction in a spontaneous fashion or not. So let's just think about the sign of delta G and what it means. So again, at constant temperature and pressure here, delta G less than 0, negative delta G, is that spontaneous or not? 
AUDIENCE: Spontaneous. CATHY DRENNAN: Spontaneous. Positive delta G is not spontaneous, non-spontaneous. And delta G equals 0. It's one of the other things that I am very fond of, which is equilibrium. So delta G indicates whether something is spontaneous or not. Negative value, spontaneous in the forward direction. Positive value, not spontaneous in the forward direction. And equilibrium, the thing we all try to reach in our lives. So let's look at an example and calculate what delta G is going to be. So we saw this equation already. We have a positive delta H0. And now, I'm telling you that delta S0 is also a positive value. So we can use this equation. And this is really one of the most important equations in chemistry. Figuring out this equation was really a crowning achievement. And you'll be using it a lot. Not just in this unit, but pretty much in every unit from now on, you will be using this equation. So room temperature, pretty much we're not doing-- occasionally, we'll do something not at room temperature, but we like room temperature. And we like it in Kelvin. So delta G0, so we plug in our delta H value. So it's going to equal delta H minus the temperature. And if the temperature isn't given in a problem, you can assume that it's 298. And now we need to plug in delta S. But I left a blank here to make a point, which is that delta S's are almost always given in joules per kelvin per mole. But everything else is given in kilojoules. So you want to make sure you convert your units, or you're going to come up with very funky answers at the end. So from joules to kilojoules, so now plus 0.109 kilojoules per kelvin per mole. And we can do this out now. So we have plus 28 minus 32.48. And why don't you tell me how many significant figures this answer has. 10 more seconds. So at least some people got it right. We've identified once again a weakness, so rules of adding and subtracting. So we have 28 here minus 32. There are no significant figures after the decimal point here. So we're just left with 4. So when we're doing multiplication or division, we consider the total number of significant figures. But with addition and subtraction, you gotta pay attention to where the decimal point is. And when we get into the next unit, there are logs. And those have special rules of significant figures. Yes, very exciting. So delta G0 is negative here, although delta H is positive. So this reaction is spontaneous. It's not hugely. It's a pretty small number, but still, it's spontaneous. So let's consider our friend over here that we've been talking about-- glucose being oxidized to CO2 and water. You practically should have the delta H memorized for that at this point. Now, I'm telling you what the delta S0 is. And it's positive 233 joules per kelvin per mole. And we can plug this into our equation to calculate a delta G0, again remembering to convert joules to kilojoules to do this. And so now, we see that it has a very negative delta G0 here, minus 2,885 kilojoules per mole at room temperature. So at room temperature, this reaction is spontaneous but slow. We saw that with the candies that had glucose in it. We opened them up, and no water or CO2 were obviously being liberated in this reaction, because it is slow. And now, a clicker question. I want you to tell me whether it would be spontaneous at different temperatures or not? 10 more seconds. Yep. So it is spontaneous at all temperatures. Not all reactions are, but this one is. 
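A small sketch of the same bookkeeping, with the joules-to-kilojoules conversion made explicit; the numbers are the ones quoted in lecture (delta H of plus 28 kilojoules per mole, delta S of plus 109 joules per kelvin per mole, T of 298 kelvin), and the helper function is just an illustrative wrapper around delta G equals delta H minus T delta S.

```python
def delta_G(delta_H_kJ, delta_S_J_per_K, T=298.0):
    # delta G = delta H - T * delta S, returned in kJ/mol.
    # delta S is supplied in J/(K mol) and converted to kJ here, since mixing
    # the two units is the most common mistake.
    return delta_H_kJ - T * (delta_S_J_per_K / 1000.0)

# The example from lecture: delta H = +28 kJ/mol, delta S = +109 J/(K mol).
print(delta_G(28.0, 109.0))   # about -4 kJ/mol, so the reaction is spontaneous
```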
So if we go back here, the reaction is spontaneous at all temperatures. And that's because, to be spontaneous, you want a delta G that's negative. If delta H is negative and delta S is positive, then you'll have a negative minus a negative. So it doesn't matter what temperature is. This will always yield a negative delta G0. So other reactions, that might not be the case. But if you have negative delta H and a positive delta S, it will be spontaneous. So negative delta H, again, exothermic, heat release. And a positive entropy is a favorable thing. Entropy is always increasing. So if this reaction has increased in entropy, it will be much more likely to be spontaneous. So let's talk about entropy. So entropy is a measure of disorder of a system. Delta S is the change in entropy. And delta S, again, is a state function. So one example of entropy in New England are these stone walls that do not look absolutely beautiful. There are often stones falling everywhere. And it doesn't matter if these stone walls that were probably built in 1600s or 1700s in New England fell totally apart and were rebuilt, now we just care about how the wall is compared to the way it started. So delta S, again, is a state function. It doesn't depend on path. And so if you get out and walk around and go like on the Minuteman Trail and see some of the historical sites where Paul Revere rode his horse along a lot of stone walls, there's a New England poet who writes about this, Robert Frost. And he said, "something there is that doesn't love a wall." And that something is entropy. AUDIENCE: [LAUGHTER] CATHY DRENNAN: Entropy does not love a wall. Entropy does not like order. Another example, those of you who are learning more about me as a person know that I am a fan of dogs. This is my dog Shep. Shep does not like going to the groomers, does not like it. And I think that this is because he's been at my office hours and he knows that increasing entropy is favorable, decreasing entropy is not. And he says, really, this violates the laws of thermodynamics, what you're doing to. Me and you should cease and desist. But anyway, he still get haircuts. So entropy, again, is this measure of disorder of a system. You have a positive delta S, which is going to be an increase in disorder. And a negative delta S is going to be a decrease in disorder. And disorder, you can be thinking about this as internal degrees of freedom in your molecule, thinking about this as vibrations. All sorts of different things can lead to increase or decrease in entropy. But we often think about changes in entropy depending on if the reaction is changing in phase. So gas molecules have greater disorder than liquid. And liquid has greater disorder than solids. And so a solid has all its molecules lined up. And liquid can move around a little bit more. But gas really can spread all out. So in terms of entropy and changes in entropy, we can think about the phase change that's happening and even predict if something's going to be an increase in entropy or not. So let's just look at one example. So without a calculation, predict the sign of delta S. And this is a clicker question. Let's just take 10 more seconds. And can our demo TAs come down? Yep, good. So you predicted positive, which is the correct answer. And so here, we're going from a liquid to a liquid and a gas. And so going to the gas, that will increase the disorder of the system. So delta S will be positive. So now, we're actually going to do a demo of this particular reaction. 
And so we have hydrogen peroxide, which can just be bought at a CVS or local drugstore. And it will go to liquid water and also oxygen gas. And so how do you see a gas? And you can see it by putting it in with soap bubbles. So as bubbles of oxygen form, the soap bubbles will bubble out. And so you can see it. And you can also add some kind of food color. And we have yeast as a catalyst to make it go a little bit faster. So let's see if we can actually see disorder increase. I don't want the mic. If you want to just say-- if you want to talk at the same time, here's a mic. AUDIENCE: I might do that. CATHY DRENNAN: You're not going to do that, OK. AUDIENCE: Yeah, we will. CATHY DRENNAN: Oh, you do. OK. AUDIENCE: Is this on? This on? Yes, it is. OK, great. So what we have going on here is we've got this container. It's filled with water. And what I did was I added about 4 teaspoons of yeast. The yeast, as Cathy said, is going to act as a catalyst. It's actually a biological species. It's a living species that's actually going to catalyze this reaction. What Erik is doing is Erik is pouring some hydrogen peroxide. He added some soap. So as you see in the reaction, the H2O2 is going to break down into water and gas-- the gas being oxygen. And what we don't want to happen is we don't want just the gas to escape, because then you guys can't see it. So what Erik is doing right now is he's adding some soap. The soap is actually going to catch, if you will, the escaping gas and turn it into a foam. And what we should be able to see is the foam kind of escape from this container. You ready? OK. So hopefully, this will work. We should put on our goggles. [LAUGHTER] Smells really bad. OK, ready? And-- get out of there, look at that. Hey! Wow, that worked a lot better than we thought it was going to work. CATHY DRENNAN: And so this is sometimes called the elephant toothpaste demo, because that is sort of, if you were an elephant, what you would probably be brushing your teeth with, I don't know. Yes. So this is-- [APPLAUSE] --entropy increasing. So let's just see if we can quickly talk a little more about entropy and then we'll end. So entropy of reactions can be calculated from absolute values. And again, we can use this equation here. So we have a delta S for a particular reaction, can be calculated from the delta S's of the product minus reactants. So again, we have products minus reactants. The absolute value, or an absolute delta S, S equals 0 for a perfect crystal at a temperature of 0 kelvin. You never really talk about S by itself. It's always really delta S. And S of 0, this is like the saddest thing for a crystallographer, because you know you're never going to have a perfect crystal, even if you go to 0 kelvin, I feel like at least experimentally. So S equals 0, to me that's kind of sad. So if we just put in for this reaction that we just did, we can put in our values here. And we can put in we're forming liquid water. And we're forming O2 gas. And we're using two molecules of hydrogen peroxide-- H2 O2 here. And so now, we can calculate what that delta S0 is. And it's a value of 125 joules per kelvin per mole. So again, products, water and gas, minus reactants, pay attention to the stoichiometry, and you can get your delta S value. And why is it positive? Again, we already talked about this. It's because it's going from liquid to a liquid plus a gas. 
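To make the products-minus-reactants bookkeeping concrete, here is a sketch using approximate tabulated standard molar entropies; the specific values are typical textbook numbers, not ones quoted in lecture. With the stoichiometry 2 H2O2(l) going to 2 H2O(l) plus O2(g), the sum comes out near the plus 125 joules per kelvin per mole mentioned above.

```python
# Approximate standard molar entropies in J/(K mol); these are typical
# textbook values, quoted only to illustrate the bookkeeping.
S0 = {"H2O2(l)": 109.6, "H2O(l)": 69.9, "O2(g)": 205.0}

# 2 H2O2(l) -> 2 H2O(l) + O2(g)
products = {"H2O(l)": 2, "O2(g)": 1}
reactants = {"H2O2(l)": 2}

delta_S = (sum(n * S0[s] for s, n in products.items())
           - sum(n * S0[s] for s, n in reactants.items()))
print(delta_S)   # roughly +125 J/(K mol): positive, because a gas is formed
```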
And then, if we plug these values in again to see if it's spontaneous, we can use this equation and plug in our values, making sure we change our units. And we can see that, in fact, this is a spontaneous reaction, because it's negative here. But you already knew that, because you watched it go spontaneously. So most of the time, you can't do the demo. So then you can use this awesome equation right here. So that's where we're stopping for today. And we'll see you all on Friday. So if you take out your Lecture 16 notes, the bottom of page 3, we had an example about the melting of ice at room temperature. So we did a little demo for you at the end of class last time and calculated that the reaction was spontaneous for hydrogen-- hydrogen peroxide is pretty reactive. And we watched the O2 bubble go. And we did that calculation. So we're thinking about, not just delta H, but we're thinking about delta S. And we're now thinking about delta G as well and how they all play together. So when we started last lecture, we had talked about the fact of some reactions that were spontaneous where delta H was negative, where it was exothermic, where heat was released. But then we also gave some examples where delta H was positive and said, but these are also spontaneous reactions. We all know that at room temperature ice will melt. We know that that's a spontaneous reaction. But the delta H for that reaction is actually a positive value. It's an endothermic reaction. So when we're thinking about these reactions and spontaneity, we have to be thinking about delta G, no just delta H. And delta G has to do with delta H and delta S. So sometimes, delta S is the driving force behind whether a reaction is going to be spontaneous. Whether the delta G will be negative or positive, delta S is making that determination. So we can calculate what a delta S for reaction is if we know the entropy values for the products. And it's the sum of the entropy values for the products minus the sum for the reactants. So when we're doing heats of formation, we also had products minus reactants. But we have one exception to this products minus reactant rule, and that's when we're using what? What thing are we going to do reactant minus products? AUDIENCE: Bond-- CATHY DRENNAN: Bond? AUDIENCE: Bond enthalpy. CATHY DRENNAN: Enthalpy, right. But here, we're products minus reactants. So we can plug those numbers in. Our product is our liquid water. Our reactant is our solid water, or our ice. And we can calculate what the delta S0 is for this reaction. We can put in our values. And we get a positive value, positive 28.59 joules per kelvin per mole. And delta S's tend to be in joules. Everything else is in kilojoules. So keep that in mind. And why do you think this reaction has a positive value? Why is delta S greater than 0? What would be your guess for that? What's happening? AUDIENCE: [INAUDIBLE]. CATHY DRENNAN: Yeah, so we're going from a solid to a liquid. So we're increasing the internal degrees of freedom. The molecules of water can move around more in a liquid than they can in a solid. So this is increasing the disorder of the system. You're increasing entropy here, because the water molecules can move around more. There's more freedom of motion. So delta S is positive. It's increasing. And then we can use that to calculate delta G0, Gibbs free energy. We can plug in our delta H value minus T, room temperature, times delta S, which we just calculated, making sure that we convert from joules to kilojoules. 
And then our units will be kilojoules per mole. And here, delta G0 is a negative value. So it is spontaneous. We all know it's spontaneous. We've observed this happening. So even though delta H0 is positive, it's an endothermic reaction. Ice melts at room temperature, because the delta G is negative. So let's talk a little more about delta G. So let's talk about free energy of formation and the last page of this handout. So free energy of formation, delta G sub f. And so this is analogous to delta H of formation-- so the change in enthalpy of formation. So again, when you have a little value here, this is standard Gibbs free energy of formation for the f here. And that's the formation of 1 mole of a molecule from its elements in most stable state and in their standard states. So we can have tables in your book of these values. So your book, in the back, if you haven't explored, the back of your book gives lots of tables of things, including information about delta G's and delta H's and bond enthalpies and all sorts of other things. Redox potentials, we haven't talked about yet, lots of tables. So you can look this up. Or if you have already, say, looked up your delta H of formation, you can use this handy equation-- delta G equals delta H minus T delta S. But if we plug in our delta H's of formation, we can get our delta G's of formation. So how you're going to calculate delta G of formation depends on what information you're given. So let's think a little bit about what it means for particular delta G's of formation-- if they're positive or if they're negative. So let's look at an example. And we saw this before. This is the formation of carbon dioxide from elements in its most stable state, which is graphite carbon and O2 gas. So these are the elements in their most stable state, forming CO2. Now, I'm telling you that the delta G0 is minus 394.36 kilojoules per mole. And we can think about what this information tells us, that this is a fairly large negative number. So if delta G of formation is less than 0, what's going to be true thermodynamically? And this is a clicker question. Let's just take 10 more seconds. Interesting. So let's think about why this is true. This might be a deciding clicker question. We'll see. So if it is negative value for delta G, a negative value for delta G means that it's spontaneous in its forward direction. So here, the formation from the elements in their most stable state, if this is spontaneous in the forward direction, it also means that it's non-spontaneous in the reverse direction. That means once CO2 forms, it's going to be stable compared to the elements from which it came, because it's non-spontaneous going in the reverse direction, or at least that's the way that I like to think about it. So relative to its elements, it's stable-- spontaneous forward, non-spontaneous in reverse. So this is kind of bad news for us, because there's too much CO2 in our environment right now. It's a greenhouse gas. And wouldn't it be awesome if we could just encourage it all to go back to its elements, form more oxygen, which we could breathe. How lovely? Make some nice graphite. Maybe compress it, make some diamonds. But no, it is quite stable compared to its elements. So CO2 is in our environment causing global warming. And it's going to be hard to solve that problem, not easy to solve that problem. So this is unfortunate news that thermodynamics gives us. So then we can look at the other. 
If you have a positive value for delta G of formation, then it's thermodynamically unstable compared to its elements. So it's spontaneous going in the reverse direction. So it's unstable. So thermodynamics tells us whether something is stable or unstable. And kinetics tells us about whether things will react quickly or not. So something can be kinetically inert-- it might take a long time to react. But thermodynamics tells us stable, unstable. So thermodynamics is great, but it doesn't tell us anything about the rates of reactions. So nothing about the rates, and that's kinetics. So really thermodynamics and kinetics are very important for explaining reactions. And we'll talk about more kinetics at the end. So to calculate a delta G for a reaction, it depends, again, what you're given. You can sum up the delta G of formation of your products minus your reactants. Or you might use this. You'll find yourself using this equation a lot. This, again, was a crowning achievement of thermodynamics, that delta G equals delta H minus T delta S. So, again, whatever information you're given, you can use that to find these values. So we're not done with this equation. We're going to switch handouts. But we're going to continue with that exact same equation.
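As a sketch of the products-minus-reactants route for delta G, here is a small helper (the function name and the list-of-pairs format are illustrative choices, not anything from the lecture) applied to the CO2 formation reaction quoted above, where the elements in their most stable forms contribute zero.

```python
def delta_G_reaction(products, reactants):
    # Sum of (coefficient * delta G of formation) over products, minus the
    # same sum over reactants, everything in kJ/mol.
    return sum(n * g for n, g in products) - sum(n * g for n, g in reactants)

# C(graphite) + O2(g) -> CO2(g): elements in their most stable form have
# delta G of formation equal to zero, and the lecture quotes -394.36 kJ/mol
# for CO2(g).
print(delta_G_reaction(products=[(1, -394.36)],
                       reactants=[(1, 0.0), (1, 0.0)]))   # -394.36 kJ/mol
```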
https://ocw.mit.edu/courses/8-04-quantum-physics-i-spring-2016/8.04-spring-2016.zip
PROFESSOR: So let's do an example where we can calculate from the beginning to the end everything. Now, you have to get accustomed to the idea that even though you can calculate everything, the formulas that you get sometimes are a little big. And you look at them and they may not tell you too much unless you plot them with a computer. So we push the calculations to some degree, and then at some point, we decide to plot things using a computer and get some insight on what's happening. So here's the example. We have a potential that runs from 0 up to a distance a. The wall is always there at x equals 0, and this number is minus v naught. So it's a well, a potential well. And we are considering energy eigenstates coming in here. And the question now is to really calculate the solution so that we can really calculate the phase shift. We know how the solutions should read, but unless you do a real calculation, you cannot get the phase shift. So that's what we want to do. So for that, we have to solve the Schrodinger equation. Psi of x is equal to what? Well, there's a discontinuity. So we probably have to write an answer in which we'll have a solution in one piece and a solution in the other piece. But then we say, oh, we wrote the solution in the outside piece already. It is known. It's always the same. It's universal. I don't have to think. I just write this. E to the i delta. I don't know what delta is, but that's the answer, E to the i delta sine of kx plus delta should be the solution for x greater than a. You know, if you were not using that answer, which has all the relevant information for the problem, time delays, everything, you would simply write some superposition of E to the i kx and E to the minus i kx with two coefficients. On the other hand, here, we will have, again, a wave. Now, it could be maybe an E to the i kx or E to the minus i kx. Neither one is very good because the wave function must vanish at x equals 0. And in fact, the k that represents the kinetic energy here, k is always related to E by the standard relation, k squared equal to 2mE over h bar squared, or E equal to h bar squared k squared over 2m, the famous formula. On the other hand, there is a different k here because you have different kinetic energy. There must be a k prime here, with k prime squared equal to 2m times E plus v naught-- that's the total kinetic energy-- over h bar squared. And yes, the solutions could be E to the i k prime x or E to the minus i k prime x, but since they must vanish at 0, it should be a sine function. So the only thing we can have here is a sine of k prime x for x less than a, times a coefficient. We didn't put an additional normalization on the outside solution. We don't want to put that, but then we must put a number here, so I'll put a coefficient A here. That's the answer, and that's k and k prime. Now we have boundary conditions at x equals a. So psi is continuous at x equals a. What does that give you? It gives you A sine of k prime a is equal to E to the i delta sine of ka plus delta. And psi prime is continuous at x equals a, which will give me A k prime cosine of k prime a equals k E to the i delta cosine of ka plus delta. What do we care for? Basically we care for delta. That's what we want to find out because delta tells us all about the physics of the scattering. It tells us about the scattering amplitude, sine squared delta. It tells us about the time delay, and let's calculate it. Well, one way to calculate it is to take a ratio of these two equations so that you get rid of the constant A.
So from that side of the equation, you get k cotangent of ka plus delta is equal to k prime cotangent of k prime a. Or cotangent of ka plus delta is k prime over k times cotangent of k prime a. We'll erase this. And now you can do two things. You can display some trigonometric wizardry, or you say, OK, delta is the arc cotangent of this quantity, minus ka. That is OK, but it's not ideal. It's better to do a little bit of trigonometric identities. And the identity that is relevant is the identity for cotangent of (a plus b), which is (cot a cot b minus 1) over (cot a plus cot b). So from here, you have that this expression is (cot ka cot delta minus 1) over (cot ka plus cot delta). And now, equating the left-hand side to this right-hand side, you can solve for cotangent of delta. So cotangent of delta can be solved for-- and here is the answer. Cot delta is equal to (tan ka plus (k prime over k) cot k prime a) over (1 minus (k prime over k) cot k prime a tan ka). Now, who would box such a complicated equation? Well, it can't be simplified any more. Sorry. That's the best we can do.
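A short numerical sketch of the boxed result, in units where h bar and 2m are set to 1 so that k squared is E and k prime squared is E plus v naught; the well depth and width are arbitrary choices, and sine squared delta is evaluated just to show how the scattering strength falls out of the phase shift.

```python
import numpy as np

# Units with hbar = 2m = 1, so k^2 = E and k'^2 = E + V0.
# The well depth and width are arbitrary illustrative choices.
V0 = 10.0
a = 1.0

def phase_shift(E):
    k = np.sqrt(E)
    kp = np.sqrt(E + V0)
    kappa = (kp / k) / np.tan(kp * a)                     # (k'/k) cot(k'a)
    cot_delta = (np.tan(k * a) + kappa) / (1.0 - kappa * np.tan(k * a))
    # arctan2(1, cot delta) returns delta modulo pi, which is all that
    # sin^2(delta) depends on.
    return np.arctan2(1.0, cot_delta)

for E in (0.5, 1.0, 2.0, 5.0):
    delta = phase_shift(E)
    print(E, delta, np.sin(delta) ** 2)   # sin^2(delta) sets the scattering strength
```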
https://ocw.mit.edu/courses/8-421-atomic-and-optical-physics-i-spring-2014/8.421-spring-2014.zip
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Well, the result which we obtained on Wednesday for spontaneous emission, for the Einstein A coefficient, can be regarded as an accomplishment, as a highlight of the course. We've worked hard to talk about atoms and electromagnetic fields. And ultimately, to deal with spontaneous emission, it was not enough to put a semi-classical light-atom interaction-- dipole Hamiltonian, Rabi oscillation and such-- into the picture. We really needed a quantized version of the electromagnetic field. And this is the result when an atom is excited and interacts with all of the empty modes of the vacuum. And we summed up the probability that a photon is emitted into any of those modes. And by doing, kind of, all of the averaging with the density of states, and for all the possible directions and polarizations, we obtained the famous result for the Einstein A coefficient, which is also the natural linewidth of the atomic excited state. Do you have any questions about the derivation or what we did last week? Then I think I will just continue and interpret the result. So we go to the result for the Einstein A coefficient. And well, the question is, how big is it? Well, it has a number of constants. And if it is-- let's discuss it now in atomic units. Well, if we assume the frequency or the energy is on the order of a Rydberg-- that's sort of the measure for an electronic excitation in the atom-- we assume the dipole matrix element is one. That means one Bohr radius. Since we have pretty much set everything to one and expressed everything in atomic units, it means that the speed of light is-- remember? The velocity of the electron in the atom, as we discussed, was alpha times smaller than the speed of light. But the velocity of the electron is one atomic unit. So therefore, the speed of light in atomic units is one over alpha. And that means that if you look at the formula, there is the speed of light to the power of 3 in the denominator. And that means that in atomic units the Einstein A coefficient is alpha to the 3, which is about 3 times 10 to the minus 7. So that means that the ratio of this spontaneous emission rate, which is also the inverse lifetime and, therefore, the natural linewidth of the excited state, relative to the transition frequency-- so the damping of the harmonic oscillator or the two-level system relative to the energy spacing of the oscillator-- is small. It's actually alpha cubed. So if you take this 3 times 10 to the minus 7 and multiply it with the atomic unit of frequency, which is 2 Rydbergs, we obtain something on the order of 10 to the 9. And it's a rate of 10 to the 9 per second. And that means that the lifetime of a typical atomic level is on the order of 1 nanosecond. Well, often it's 10 to 100 nanoseconds because many transition frequencies are smaller by quite a factor than the atomic unit of the transition frequency. Remember, the Rydberg frequency would be deep in the UV. But a lot of atoms have transitions in the visible. I highlighted already when I derived it that the spontaneous emission has this famous omega cube dependence. And this is actually important to understand why lower lying levels-- excited hyperfine levels-- do not radiate. So let me just, kind of, formalize it.
If I would now estimate what is the radiative lifetime for a transition which is not, as I just assumed, in the UV or in the visible-- let me estimate what is the radiative lifetime to emit a microwave photon at a few gigahertz? Well, the microwave frequency of a gigahertz, 10 to the 9, is five orders of magnitude smaller than the frequency, 10 to the 14, of an optical transition. So therefore, because of the omega cube scaling, the lifetime is 10 to the 15 times longer. And if you have, typically, one or ten nanoseconds for an electronic transition, that means that this spontaneous lifetime for a microwave transition is several months. If in addition we factor in that hyperfine transitions have an operator which is the Bohr magneton, a magnetic type of operator, not an electric dipole-- and, as we discussed when we talked about multipole transitions, the Bohr magneton is alpha times smaller than a typical electric dipole moment. So therefore, a magnetic dipole transition is alpha times weaker than an electric dipole transition. And that means now, if you multiply the months, which we obtained by the frequency scaling, again, by alpha squared for the weakness of the magnetic dipole, we find that atomic hyperfine levels have a lifetime which is on the order of 1,000 years. And this is why it's very safe to neglect those transitions in the laboratory and assume that all hyperfine states in the ground state manifold pretty much don't decay and are long lived. Questions? OK, so with that we have discussed spontaneous emission. Let's go through a few clicker questions to discuss the subject and verify your understanding. So the first question is, can an E2 transition, which is a quadrupole transition, can you drive it by a plane wave? Or does it need a laser beam which has an intensity gradient, such as a focused laser beam? Yes or no? OK. Well, the answer is yes. You can just use your laser beam. If a quadrupole transition would require a gradient, it would really require a gradient over the size of the atom. And that would be extremely hard to achieve. Fortunately, this is not the case because what happens is we actually assumed in the derivation that we had a plane wave, e to the ikr, and then did the Taylor expansion. And it was this part of the Taylor expansion of a plane wave which gave rise to the matrix element for the quadrupole transition. So a plane wave laser beam is sufficient to drive higher multipole transitions. Next question. Can spontaneous emission be described as a stimulated emission process by the zero point field? So by the zero point field-- we know the electromagnetic field is a harmonic oscillator. And a harmonic oscillator has a ground state. And in the ground state you have zero point motion. So there is an electric field, even when we have the vacuum state. And the question is, can spontaneous emission be described as simply being stimulated emission, but now due to the zero point fluctuations of the electromagnetic field? OK. The answer is it depends. It depends if you just want to make a qualitative hand waving argument. Then I would say you are correct. You can say that the electromagnetic field of the vacuum stimulates a transition. But when I said described, I meant if you can get it quantitatively correct. And there the answer is actually no because the energy of the electromagnetic field is n plus 1/2 h bar omega, whereas the emission rate is proportional to n plus 1. So you have half a photon's worth of extra energy.
But this spontaneous emission is sort of like the spontaneous emission is the rate, which would be stimulated by an extra energy of h bar omega. So in other words, you would get the answer wrong by a factor of two. I think decoding deeper in the electrodynamics description of spontaneous emission you would identify two terms for spontaneous emission. One is actually the stimulation by the vacuum field. But there is another term called radiation reaction. So there's, sort of, two terms. Trust me. If not, there are hundreds of pages in [INAUDIBLE], which is books written about it. And in the ground state, the two terms destructively interfere. Therefore, you have no spontaneous emission in the current state, which is reassuring. But then in the excited state the two terms constructively interfere. And therefore, you get spontaneous emission, which is twice as much as you would get if you just look at the stimulation by the vacuum field. So the answer is not quantitative but half of it, yes, can be regarded as stimulated emission by the vacuum fluctuations of the electromagnetic field. OK. We emphasized that spontaneous emission is proportional to omega cube. The question is now what is the dependence in one dimension? If everything the atom can only emit in one dimension, everything is one dimensional, put the atom into a waveguide. So your choices are omega cube, omega square, or omega-- well, if you press D, none of the above. But I can already tell you it's one of those three. So everything the same. But we are in one dimension. The world seen by the atom and by the electromagnetic waves is one dimensional. Yes, it's correct. As you remember, out of the omega cube dependence. Omega square came from the density of states. And what is omega square in three dimension becomes omega in two dimensions and constant density of state in one dimension. So therefore, in one dimension, we are only left with the omega dependence. OK, so there is one factor of omega, which does not come from the density of state. And the next question is where does the other power of omega come from? As we discussed, it's not the density of states. So we have three choices. One is it comes from the atomic matrix element, it comes from the dipole approximation, or it comes from the quantization of the electromagnetic field. OK, the majority got it right. It's a field quantization. Sort of remember when you write down the electric dipole Hamiltonian, in the quantized version, there is a perfecter, which is electric field of a single photon. So if you have a single photon, it gives rise to an electric field squared, which is proportionate to h bar omega. And this is, sort of, the normalization factor. Two more questions. We talked a lot about the rotating wave approximation. And we also talked about it for a spinning system driven by magnetic field. If you have a rotating magnetic field, we do not need the rotating wave approximation because if you drive a spin system with a rotating magnetic field, we have only the co-rotating term. The question I have now for you is whether the same is correct or not for an electronic transition. So therefore, the question is for electronic transitions do we always get the counter rotating term. And if you want to have a simple Hamiltonian, then we do the rotating wave approximation. So the question is is the rotating wave approximation necessary because we always get the counter rotating term for the electronic transition, then the answer is yes. 
Or are there examples where the system is exactly described by only one term, the co-rotating term? I will come back to that later in the class, but I thought it's a good question. OK, let me give you the answer. I actually coincide with everybody in the class here, because I would tend to say no, because there are situations where the counter-rotating term can be zero due to angular momentum selection rules. However, if you have an electronic transition and you have a sigma plus transition to one state, there is always the possibility of a sigma minus transition, so you usually get both. But if you apply an infinitely strong magnetic field, then the m equals minus 1 state can be moved out of the picture--you have only, let's say, the m equals plus 1 state--and then selection rules mean that the counter-rotating term is vanishingly small. But it's an artificial situation. So you can all claim credit for your answer. Finally, the last question is about the Lamb shift. We are now talking about electronic transitions, and the question is whether the Lamb shift is due to the counter-rotating term. In other words, if you have a situation where the counter-rotating term is zero--as we just discussed in the previous example, there may be such situations, somewhat artificial, but you could arrange for it--does that then imply that there is no Lamb shift? So, yes or no: is the Lamb shift caused by the counter-rotating term involved in electronic transitions? OK. OK, well, what else is the Lamb shift? It is the AC Stark effect of the counter-rotating term. So is it due to the counter-rotating term? Yes, of course. The Lamb shift is the AC Stark shift caused by the vacuum fluctuations. That's what it is. But we will come back to that, because I want to discuss later today some aspects of the fully quantized Hamiltonian. And we will, again, in the fully quantized picture, see the operators which are responsible for the co-rotating and for the counter-rotating term, and then I will point to the operator which causes the Lamb shift. But before I continue, any questions about the questions? Collin. AUDIENCE: When you derive the amplitude of the electric field due to a single photon-- PROFESSOR: Yep. AUDIENCE: I always get the factor of two wrong. So you wrote h bar omega is 2 epsilon 0 [INAUDIBLE] squared. Now there's a contribution that comes from the electric field and the magnetic field, so you have one factor of two, and then there's always that other factor of two. Are you getting that from using one half h bar omega because of the vacuum fluctuations? PROFESSOR: I'm not going back to the formula, because I run the risk that it was wrong. But all I want to say is, what I really mean is: use Jackson, put into a volume V an electromagnetic field with h bar omega of energy, and take the electric field squared of this photon. That's what I mean. And if you find a factor of two mistake in my E squared, I can still, you know, get out of it through the rear entrance by saying that there is also a difference whether E squared means the RMS value or the amplitude. I mean, there are these factors of two everywhere. But what I mean is really the electric field caused by one photon. And of course, the argument stands: I don't need any factors of two or any subtleties of the electromagnetic field energy. We know that the energy is n plus 1/2, but the emission is n plus 1, and this shows that the stimulation by the vacuum field cannot quantitatively account for spontaneous emission.
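To attach a number to "the electric field caused by one photon," here is a minimal sketch using the common convention E_1 = sqrt(h bar omega / 2 epsilon_0 V). The wavelength and mode volume below are illustrative assumptions, and, as in the exchange above, the remaining factors of 2 depend on whether you quote the RMS field or the amplitude.

```python
import numpy as np

# "Electric field of one photon" in a mode of volume V, using one common convention.
hbar = 1.054571817e-34      # J s
eps0 = 8.8541878128e-12     # F/m
c = 2.99792458e8            # m/s

wavelength = 780e-9                      # m, illustrative optical transition (assumption)
omega = 2 * np.pi * c / wavelength
V = (10e-6) ** 3                         # m^3, illustrative (10 micron)^3 mode volume (assumption)

E_single_photon = np.sqrt(hbar * omega / (2 * eps0 * V))
print(f"Field of one photon in V = (10 um)^3: {E_single_photon:.0f} V/m")
```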
AUDIENCE: So the quantity that you set it equal to is one half h bar omega--not the fluctuation, but the real-- PROFESSOR: OK, if you want to know: let's not compare apples with oranges. You want an electric field, and you can pick whether it's the RMS field or the maximum amplitude--you can pick what you want. But now we are comparing what the E squared is for the vacuum, for a single mode of the vacuum, and what the E squared is for a single photon. The two answers differ by a factor of 2: a single photon is twice as strong in E squared as the vacuum fluctuations in the same mode. That's what it means. Yes? AUDIENCE: I have a question about the spontaneous emission rate. Before the quantum mechanical derivation that we have, did people not know the formula, how to describe spontaneous emission [INAUDIBLE]? PROFESSOR: I think so. I have not gone deeply back into the history, but a lot of credit is given to Einstein. As I mentioned last week, Einstein actually had spontaneous emission in his derivation of the Einstein A and B coefficients in his famous paper, and he found that there must be spontaneous emission based on a thermodynamic argument: it is only spontaneous emission which brings the internal population of an atom into equilibrium. So I think it is correct to say that. AUDIENCE: Can you derive it from that steady-state condition of getting [INAUDIBLE]? PROFESSOR: That's what Einstein did. And the answer is, by comparison with the Planck law, you get an expression for the Einstein A and B coefficients. Now of course, you can go the other way around and ask what you would expect if you just used classical physics. Well, it depends. If you use the Bohr model, you would expect that the electron is radiating, and it was a mystery how you can have an atom in the ground state, with an electron circling around the nucleus, which is not radiating at all. In quantum mechanics, on the other hand, we are not assuming that the atom is circulating, that we have an accelerated charge or a time dependent charge distribution--we use the stationary wave function. So I'm not sure whether there is maybe an argument which would say there should be some spontaneous emission based on a purely classical argument. But this would not be the whole story, because a classical argument would then have to deal with the difficulty of why there is a difference between n equals 1, which does not radiate, and n equals 2, which radiates. So my understanding is that it is only the physics, either through the perspective of Einstein by just using equilibration, or through our microscopic derivation using field quantization, which allows us to understand the phenomenon of spontaneous emission. Other questions? OK, then before we talk about some really cute and nice aspects of the fully quantized Hamiltonian, I want to spend a few minutes talking about degeneracy factors. I've already given you my opinion: you should not think, in almost all situations, about levels which have a degeneracy. Just think about states. A state is a state, and it counts as one. And if you have a level which has triple degeneracy, well, it has three states. Just count the states and look at the states. However, there are formulas which involve degeneracy factors. And just to remind you, when we had the discussion of Einstein's A and B coefficients, the Einstein A coefficient was proportional to the B coefficient responsible for stimulated emission from the excited to the ground state.
But the Einstein B coefficient for absorption was related to the Einstein B coefficient for stimulated emission by involving these degeneracy factors. So degeneracies appear, and in some formulas it makes a lot of sense to use them. I've always said that for a fundamental understanding you should just assume all degeneracies are one--this is how you can avoid, sort of, some baggage in deriving equations--and I'm still standing by that statement. But I want to show you now a situation where it becomes useful to consider degeneracy factors. So let me give you an example. We can look at the situation where we have an excited P state and a ground state which is S, or I can look at the opposite situation where we have an S state which can radiate to a P state. Well, by symmetry, the different P states--m equals plus 1, minus 1, and m equals 0--are just connected by spatial rotations. Therefore the lifetimes of the three P states and their rates of spontaneous emission are the same. But if you now assume that you have absorption, you go from the S state to the P state, then you find that for the Einstein B coefficient there are now three possible pathways--three polarizations, not just one--and you will find that this coefficient is proportional to three times R. However, in the other situation it's reversed--but let me just finish here. So here the natural decay rate, and the rate of stimulated emission described by the B coefficient from the excited state to the ground state, are proportional to R, while the coefficient for absorption is proportional to three R. Whereas in the other situation, if you now have absorption, well, from each of those ground levels there is only one transition, one pathway, and therefore you will find that the coefficient for absorption is proportional to R, while gamma and the stimulated emission, which is now B S P, are proportional to three R, because there are three pathways. So depending on what the situation is, you have to be careful. And you would say: but if it's an S to P transition, it is connected by the same matrix element, and therefore shouldn't there be a line strength, independent of whether you go from S to P or from P to S, which just describes in a natural way what the coupling between the S and P states really is? And yes, indeed, there is in the literature a definition of line strength, where the line strength S is proportional to the sum of all the rates between the initial and the final states--you sum over all of them. So when you use this formula for the line strength, whether you have the situation on the left side or on the right side, you always do the sum over the three possible transitions. So the line strength is the same for both situations; it's just generic for an S to P transition. But if you use this definition, then you have the situation that spontaneous emission is always given by the line strength, but you have to divide by the multiplicity of the excited state. If you have an excited P state, the whole line strength is distributed over three states, and each state has only a spontaneous emission rate which is a third of what the line strength gives you. I don't want to beat it to death, because I hate degeneracy factors. But I just thought this example with the P to S and S to P transitions tells you why they necessarily have to appear in derivations like Einstein's A and B coefficients. I hope there are no further questions about degeneracies.
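A small bookkeeping sketch of the S-to-P example, with R standing for the rate of a single pathway (the labels are mine, not the lecture's). It just counts pathways and checks that the line strength is the same in both orderings, that the A coefficient per excited sublevel is the line strength divided by the excited-state multiplicity, and that g_ground times B_absorption equals g_excited times B_stimulated.

```python
# Pathway-counting sketch for the S <-> P example; R is the rate of one pathway.
R = 1.0

def bookkeeping(g_exc, g_gnd, paths_per_exc, paths_per_gnd):
    """Return (line strength, A per excited sublevel) and check the Einstein relation."""
    S_line = g_exc * paths_per_exc * R       # sum of rates over all sublevel pairs
    A_per_exc = S_line / g_exc               # spontaneous rate of each excited sublevel
    B_stim = paths_per_exc * R               # stimulated emission, per excited sublevel
    B_abs = paths_per_gnd * R                # absorption, per ground sublevel
    assert g_gnd * B_abs == g_exc * B_stim   # g_g * B_abs = g_e * B_stim
    return S_line, A_per_exc

# Excited P (3 sublevels) decaying to ground S: one pathway per P sublevel, three from S.
print(bookkeeping(g_exc=3, g_gnd=1, paths_per_exc=1, paths_per_gnd=3))   # -> (3.0, 1.0)
# Excited S decaying to ground P (3 sublevels): three pathways from S, one per P sublevel.
print(bookkeeping(g_exc=1, g_gnd=3, paths_per_exc=3, paths_per_gnd=1))   # -> (3.0, 3.0)
```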
But you know, making this comment also allows me to say: well, when I derived the Einstein A coefficient--what we did last class--I did not use any degeneracy factors. Well, this is correct. Our derivation assumed that there is only one final state; we did not include degeneracy factors. We also assumed that we had a dipole matrix element which was along the z-axis. And so, by those assumptions, I implicitly picked a geometry which can be represented by an excited P state in the m equals 0 state, with a pi transition, linear polarization, to the S state. And by doing that, I did not have to account for any degeneracies. But in general, if you derive microscopically an equation for spontaneous emission, you may have to take into account that your excited state has different transitions--sigma plus and sigma minus transitions--to different states, and you have to be careful how you do the sum over all possible final states. And this is where degeneracies would eventually matter. Questions? OK, so then let's go from counting, or accounting for, the number of states to something which is hopefully more exciting. We want to talk about the fully quantized Hamiltonian. What we are working towards now--and it may spill over into the Wednesday class--is the sort of paradigmatic example of cavity QED, where an atom in an excited state sits in an empty cavity, and now it can emit a photon into the one mode of the cavity, but this photon can be reabsorbed. This is the phenomenon of vacuum Rabi oscillations. I want to set up the Hamiltonian and then the equations to demonstrate to you the vacuum Rabi oscillations. And for me, the vacuum Rabi oscillations are the demonstration that spontaneous emission has no randomness, no spontaneity, so to speak, because you can observe a coherent oscillation, a coherent time evolution of the whole system, which is possible only due to spontaneous emission. So let's go there. Just to make the connection: a few lectures ago we had a semiclassical Hamiltonian. This was when I wanted to show you that the two level electronic system can be mapped onto a spin one half system driven by a magnetic field. This was when we only looked at the stimulated term, when we only did perturbation theory. In that situation, we had the electronic excitation, and then we had the drive field, which was assumed to be purely classical, like a rotating magnetic field which drives spin up spin down transitions magnetically. And we concluded that, yes, if you use a laser field, it does to a two level atom exactly what a magnetic field does to spin up spin down. But now we are one step further. We've quantized the electromagnetic field, and we have spontaneous emission. And this is something which, for the reasons I just mentioned, you will never find in spin up spin down, because it would take 1,000 years for spontaneous emission to happen. So now we want to go beyond this semiclassical picture, which is fully analogous to the precession and rotation of a spin in a magnetic field, and we want to add spontaneous emission. So what we had here is: the Rabi frequency was a matrix element--the dipole matrix element--times a classical electric field. And we want to replace that now by the electric field at the position of the atom, but we want to use the fully quantized version of the electric field.
And it also becomes useful to look at the sigma x operator, which actually has two matrix elements [INAUDIBLE], connecting ground-excited and excited-ground. One of them goes from the excited to the ground state--this is, sort of, lowering the energy, the sigma minus operator--and the other one is a raising operator: it raises the excitation of the atom, and we will refer to it as sigma plus. So the electric field is replaced by the operator obtained from the fully quantized picture. Here we have the prefactor, which is the electric field of a single photon--or half a photon, whatever; there are factors of 2 or square root of 2. We have the polarization. And now, if you take the previous result and look at it--well, we want to go to the Schrodinger picture, and I mentioned that in the Schrodinger picture the operators are time independent, so we cancel the e to the i omega t term. If you go to the result we had last week and simply get rid of the e to the i omega t term, you would find operators a and a dagger, but they would have factors of i in front of them. That's the equation we had when we derived it. Well, I prefer to use something which looks nicer: just a and a dagger. And you can obtain that by shifting the origin of time. So we're not looking at e to the i omega t at t equals 0; we wait a quarter period, and then e to the i omega t just gives us factors of i, which conveniently cancel the other factors of i. So what I'm doing is just for convenience. And let me write down that this is in the Schrodinger picture. OK. So we want to absorb all constants into one constant now, which is the single photon Rabi frequency. We have the dipole matrix element of the atom, there is a dot product with the polarization of the light, and then we have the electric field amplitude of a single photon, square root of h bar omega over 2 epsilon 0 V. This is what appears in the coupling, and we want to write it as h bar omega 1 over 2. And this omega 1 is the single photon Rabi frequency. And with that, we have now a Hamiltonian which is really a classic Hamiltonian, written down in the standard form. It has the excitation energy times the sigma z matrix. It has the single photon Rabi frequency. But then the operator for the electric field, after getting rid of the i's, is simply a plus a dagger. So this takes care of the photon field, and the operators which act on the atom are the raising and lowering operators, sigma plus and sigma minus. And finally, we have the Hamiltonian which describes the photon field, which is h bar omega times a dagger a, the photon number operator. Any questions? Yes? AUDIENCE: [INAUDIBLE]? PROFESSOR: I mean, we are looking at the interaction with an atom which is at rest at the origin. Therefore, e to the ikr is 1. We will only consider the spatial dependence e to the ikr when we allow the atom to move. As long as the atom is stationary, for convenience we put the atom at r equals 0. But in 8.422, when we talk about light forces and laser cooling, then it becomes essential to allow the atom to move, and this is actually where the recoil and the light forces come into play. But as long as we're not interested in light forces, only in the internal dynamics--ground and excited state--we can conveniently neglect all spatial dependencies. Other questions? So this is really a famous Hamiltonian.
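Written out, the Hamiltonian assembled in this paragraph reads as follows (in the Schrodinger picture, with the atomic excitation energy written here as h bar omega_0--that label is mine--and omega_1 the single photon Rabi frequency just defined):

```latex
% Fully quantized two-level Hamiltonian, before the rotating wave approximation
H \;=\; \frac{\hbar\omega_0}{2}\,\sigma_z
  \;+\; \frac{\hbar\omega_1}{2}\,\bigl(a + a^\dagger\bigr)\bigl(\sigma_+ + \sigma_-\bigr)
  \;+\; \hbar\omega\, a^\dagger a ,
\qquad
\frac{\hbar\omega_1}{2} \;=\; \langle e|\,\mathbf{d}\!\cdot\!\hat{\epsilon}\,|g\rangle
      \sqrt{\frac{\hbar\omega}{2\epsilon_0 V}} .
```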
And you also see how natural the definition of the single photon Rabi frequency is. So we have one half h bar omega times the diagonal sigma z matrix--this is the atomic excitation, the unperturbed Hamiltonian of the atom. This is the unperturbed Hamiltonian of the photon. And now the two are coupled, and the coupling is a product of an operator acting on the photon field--plus or minus one photon--and an operator acting on the atom--plus or minus one atomic excitation. So let me just remind you of the sigma plus and sigma minus operators. Sigma plus is the atomic raising operator, which takes the ground to the excited state, and sigma minus is the atomic lowering operator, which takes the atom from the excited to the ground state. So this is our Hamiltonian. And the Hilbert space on which this Hamiltonian acts is the product space of the atom with the states of the light--a direct product. In other words, for the basis states we use, for the atom, the states which have zero or one quantum of excitation--excited state or ground state--and for the photon we can just use the Fock states, where the occupation number is n. Questions about that? So--just look at it with some enjoyment for a few seconds. I mean, this is a Hamiltonian which has just a few terms, but what is behind it is, of course, the power of all the definitions. Each symbol has so much meaning. By having this formalism of operators and the quantized electromagnetic field, we can capture many, many aspects--we can, pretty much, fully describe how a two level system interacts with a quantized electromagnetic field with that set of equations. Of course, it's not that everything is so simple; it is that, by understanding the physics, we have skillfully made definitions which allow us to write everything down in this compact form. So often something is simple to write down, but since there is a lot of physics insight in it, we spend some time discussing it. And the first thing I want to point out and discuss is the interaction term. We have the product of sigma plus plus sigma minus with a plus a dagger. So what we have here is an interaction term which, in a very natural way, has four terms. Let me just write them down: sigma plus with a, sigma minus with a dagger, sigma plus with a dagger, and sigma minus with a. OK, so let's discuss those. Sigma plus with a is actually an absorption process: a reduces the photon number by one, and sigma plus increases the atomic excitation from the ground to the excited state. The other term looks, intuitively, like emission: the a dagger operator takes us from n to n plus 1, and sigma minus takes us from the excited state to the ground state. So these are the two terms we would call the intuitive terms, because they make sense. The other terms are somewhat more tricky. Sigma plus with a dagger means we create a photon and we create an atomic excitation. In other words, it's not that, as in the other terms, a quantum of excitation disappears from the field and appears in the atom, or vice versa. Sigma plus a dagger means we have an atomic excitation, which takes us from the ground to the excited state, plus we emit a photon at the same time. And sigma minus with a means that we go from the excited to the ground state--we have an atomic de-excitation--and I would say, well, if the atom is de-excited, it should emit a photon.
But instead, the photon disappears. So we have those processes. The last two are sometimes referred to in the theoretical literature as off shell. On shell means energy conservation; off shell means they cannot conserve energy. But nevertheless, these are terms which appear in the operator, and you should be used to having terms in the operator which cannot drive a resonant transition. When we looked at the DC Stark effect, or when we looked at the AC Stark effect for low frequency photons, those low frequency photons cannot excite an atom to the excited state. So they are not causing a transition, but they lead to energy shifts in second order perturbation theory. So therefore, those terms--in this language--cannot drive transitions. They can only drive transitions to virtual states, which means they can only appear in second order perturbation theory: you go up to a so-called virtual state, but you immediately go down again. And those terms give rise only to shifts, not to transitions, because you couldn't conserve energy in the transition, but you can do shifts in second order. And one example, which we discussed in the clicker question, is that those shifts are actually Lamb shifts. In other contexts, especially in the context of microwave fields, they are called Bloch-Siegert shifts. Let's just look at one specific state, and this is the simplest of all: we have the vacuum, no photons, and the atom is in the ground state. If you look at the four possibilities of the interaction term, there is only one non-vanishing term. The photon field is at the bottom of all possible states; the atom is at the bottom of all possible states. So when we act with the four terms on it, the only term which contributes is the one where both are raised, because all the others give zero. The only non-vanishing term is the one where we create a virtual atomic excitation and also a virtual excitation of the photon field. And we know that when we have an atom in the ground state in the vacuum, the only manifestation of the electromagnetic field is, of course, not spontaneous emission but the Lamb shift. So therefore, if you applied this operator to the bound state of an electron in an atom--the complicated 1s wave function of hydrogen--and summed this operator over all modes of the electromagnetic field, then you would have done a first principles QED calculation of the Lamb shift. I'm not doing it, but you should understand that this operator--sigma plus a dagger--is your operator for the Lamb shift. Questions? Yes? AUDIENCE: [INAUDIBLE]? PROFESSOR: Oh no, everything is. If you have a two level system, this Hamiltonian captures everything which appears in nature when a two level system interacts with the electromagnetic field. That's it. Radiation reaction is just something we can pull out of here. Stimulated emission we can pull out of here. The way vacuum fluctuations create a Lamb shift, or the way vacuum fluctuations affect an atom in the excited state--everything is included in here. The question is just, can we solve it? And the calculations can get involved. But this is the full QED Hamiltonian for a two level system. That's the full picture. I mean, that's why I sort of said before, be proud of it: you understand the full picture of how two level systems interact with electromagnetic radiation. The only complication is, yes, if you put more levels into it and such, things can get richer and richer.
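For reference, the four interaction terms discussed above, grouped into the intuitive pair and the off-shell pair; dropping the second pair is what will be called the rotating wave approximation below:

```latex
% The coupling of the fully quantized Hamiltonian, expanded into its four terms
\bigl(a + a^\dagger\bigr)\bigl(\sigma_+ + \sigma_-\bigr)
  \;=\; \underbrace{\sigma_+ a \,+\, \sigma_- a^\dagger}_{\text{intuitive: absorption / emission}}
  \;+\; \underbrace{\sigma_+ a^\dagger \,+\, \sigma_- a}_{\text{off shell: counter-rotating, shifts only}}
```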
And--yes, we have also made the dipole approximation, and you may wonder how critical that is. Well, we used the electric field, a and a dagger, but my gut feeling is that it doesn't really matter what we have: this is the most generic term which can create and annihilate photons, and we have the a and a dagger term. Actually, I don't know exactly what would happen if you don't make the dipole approximation. Well, if you have two levels which are coupled by a magnetic dipole, then you have the same situation; it is just that your prefactor, the single photon Rabi frequency, is now alpha times smaller because of the smaller dipole matrix element. So I think you can pick, pretty much, any levels you want. And this is why I actually discussed matrix elements at the beginning of the unit: for pretty much all of the discussion we're going to have, it doesn't really matter what kind of transition you have, as long as the transition creates or annihilates a photon. And all the physics of the multipolarity of the transition--magnetic dipole, electric quadrupole, or whatever--just defines what this single photon Rabi frequency is. You've put me on the spot, but the only thing which comes to my mind now is that if you formulate QED not in the dipole approximation but in the p minus A formulation, then we have an A squared term, and then we have the possibility that one transition can emit two photons. So that's not included here. AUDIENCE: So that's higher-- PROFESSOR: This would be something higher order. On the other hand, we can show with a canonical transformation that the p minus A formulation with the A squared term is equivalent to the dipole approximation. So the question whether you have a transition which emits two photons simultaneously, or two photons sequentially by going through an intermediate state, is not a fundamental distinction. You can have one description of your quantum system where two photons are emitted in one transition, and another description where only one photon can be emitted at a time, and then you have to land in an intermediate state. And you would say, well, either two photons at once or one photon at a time--these are two different kinds of physics. But we can show that the two pictures are connected by a canonical transformation. So you have two descriptions here. But anyway, I'm going a little bit beyond my knowledge; I'm just telling you bits and pieces I know. But this Hamiltonian is, I would say, generally exact--I just don't know how to prove it--and it really captures all of the QED aspects of the system we want to get into. So OK. In many situations we may decide that the off shell terms of the interaction just create level shifts--Lamb shifts, Bloch-Siegert shifts--and we may simply absorb those Lamb shifts into our atomic energy levels, omega e and omega g. So therefore, for the dynamics of the system, if you include all of those Lamb shifts in the atomic description, you do not need those off shell, counterintuitive terms. These are actually also the counter-rotating terms of the semiclassical treatment. We only keep the intuitive terms, and that is called, again, the rotating wave approximation. Just to remind you: we do not have rotating waves here--everything is operators--but the same kind of physics, co- and counter-rotating, appears here in that we have four terms. Two are the fully quantized version of the co-rotating terms.
And the other two--the off shell terms--are the quantized version of the counter-rotating terms. So therefore, if we neglect those two off shell terms, we have the fully quantized Hamiltonian in the rotating wave approximation. Let me just write it down, because it's also a beautiful line. We have the electronic system. We have the interaction Hamiltonian, which now has only the two terms: when we raise the atomic excitation, we lower the photon number, and vice versa. And we have the Hamiltonian for the photon field, a dagger a. And this is, apart from those Lamb shift terms, the full QED description of the system. If we only consider one mode--here, of course, in general the Hamiltonian has to be summed over all modes, and then you get spontaneous emission and everything we want--but if you have a situation where you only look at one single mode, then you have what is called the famous Jaynes-Cummings model. And a very important result of this Jaynes-Cummings model are the vacuum Rabi oscillations, which I want to discuss now. OK. So it's called the Jaynes-Cummings model; let me describe to you why it is a model. Well, it assumes a two level system, and we find a lot of candidates among the atoms we want. Sure, our atoms have hyperfine states, but we can always select a situation where, essentially, we only couple two states: we can prepare the initial state by optical pumping and then use circularly polarized light on a cycling transition. This is how we prepare, in the laboratory, a two level system. So that's one assumption of this model, that we have a two level system. The second assumption is that the atom only interacts with a single mode, and that requires a little bit of engineering, because it means we need a cavity. So let me just set up the system. Our laboratory is a big box of volume V, and this is where we may quantize the electromagnetic field to calculate spontaneous emission. And our atom here may actually decay with a rate gamma, which is given by the Einstein A coefficient. In order to describe this spontaneous emission, we quantize the electromagnetic field in the large volume V. But now we have a cavity with two mirrors, and those two mirrors define one mode of the electromagnetic field, which will be in resonance or near resonance with the atom. Well, there will be some losses out of the cavity, which eventually couple the electromagnetic mode inside the cavity to the other, outside modes in the bigger volume V, and this is described by a cavity damping constant kappa. What is also important is that when we use a cavity to single out one mode of the electromagnetic field, the cavity volume is V prime, and we often make it very small by putting the atoms into a cavity where the mirror spacing is extremely small. OK. We know--and I'm not writing it down again--what the Einstein A coefficient is. The Rabi frequency--the single photon Rabi frequency--which couples the atom to the one mode of the cavity has this important prefactor, which is the electric field of one photon in the cavity. And importantly, it involves the electric field of a photon in the cavity volume, which is V prime. So now you see what our experimental handle is: if we make this volume very small, then we can enter the strong coupling regime, where the single photon Rabi frequency for this one mode selected by the cavity becomes much larger than the spontaneous emission into all the many other modes.
So the interaction with one mode, due to the cavity and the smallness of the volume, is, sort of, outperforming all these many, many modes of the surroundings. And that means that an atom in an excited state is more likely to emit into the mode between the two cavity mirrors than into any other mode to the side. Secondly, of course, once the photon has been emitted into the cavity, the photon can still couple to the other modes through the cavity losses kappa. And now we assume that we have such high reflectivity mirrors that kappa is also smaller than the single photon Rabi frequency. This is called the strong coupling regime of cavity QED. So then we can, at least for a limited time, observe the interplay between a single mode of the cavity and a two level system. And this is the Jaynes-Cummings model. In that situation, the fully quantized Hamiltonian, the QED Hamiltonian, couples only pairs of states, which we label by the manifold n: we have an excited state with n photons, and it is coupled to the ground state with one more photon. Our Hamiltonian has two coupling terms--remember, the other two were dropped in the rotating wave approximation--and we can go from left to right with sigma minus a dagger, and we can go from right to left with the operator sigma plus and the annihilation operator a. So as long as we have a detuning delta which is relatively small, the rotating wave approximation is excellent. So let me just conclude by writing down the Hamiltonian for the situation I just discussed, and then we'll discuss the Hamiltonian on Wednesday. If this is energy, we have two levels: the excited state with n photons, and the ground state with n plus 1 photons. If the photons are on resonance, the two levels are degenerate, but if you have a detuning delta, the two levels are split by delta. And what we are doing right now, for writing down the Hamiltonian, is to shift the origin so the zero of energy is just halfway between those two states. That's natural--it just avoids offsets in our equations. So our Hamiltonian now has the splitting of plus minus delta over two on the diagonal. The coupling has the prefactor, which is the single photon Rabi frequency, and then the a and a dagger terms depend on n as square root of n plus 1. So what I wrote down now is the Hamiltonian in the rotating wave approximation which describes only one pair of states. But we have, of course, in our Hilbert space one pair of states for each label n, and each of those pairs is described by such a decoupled Hamiltonian. So that's what I wanted to present to you today, and I will show you on Wednesday how this Hamiltonian leads to Rabi oscillations induced not by an external field but by the vacuum. Any questions?
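A minimal numerical sketch of what this 2-by-2 manifold Hamiltonian does, anticipating Wednesday's discussion. Units are h bar = 1 and the numbers are illustrative; on resonance the excited-state population simply oscillates as cos^2(omega_1 t / 2)--the vacuum Rabi oscillation.

```python
import numpy as np
from scipy.linalg import expm

# Vacuum Rabi oscillations in the n = 0 manifold {|e, 0>, |g, 1>} of the
# Jaynes-Cummings Hamiltonian in the rotating wave approximation (hbar = 1).
omega_1 = 2 * np.pi * 1.0       # single photon Rabi frequency (illustrative)
delta = 0.0                     # atom-cavity detuning (resonant case)

# 2x2 block: +/- delta/2 on the diagonal, (omega_1/2) * sqrt(n+1) off diagonal, n = 0
H = 0.5 * np.array([[delta,   omega_1],
                    [omega_1, -delta]])

psi0 = np.array([1.0, 0.0])                     # start in |e, 0>: excited atom, empty cavity
times = np.linspace(0.0, 1.0, 5)
P_excited = [abs((expm(-1j * H * t) @ psi0)[0]) ** 2 for t in times]
print(np.round(P_excited, 3))
# -> [1.0, 0.5, 0.0, 0.5, 1.0]: the photon is emitted into the empty cavity and
#    coherently reabsorbed -- a fully deterministic time evolution.
```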
PROFESSOR: Good afternoon. So in the last week of this semester, we will be finishing up the chapter on coherence. What we want to continue to explore today is the presence of a dark state. If you have a three level system--and that's what we went through last week--we will always find one state which is dark, which means we fully illuminate the atom with laser light, but there is one state which is not coupled to the excited state, which does not scatter light, and therefore is dark. I showed you that, under very general conditions, if you have two laser beams exciting the two states, there will always be a coherent superposition of the two states which is the dark state. The dark state is the novel feature of three level systems, and I want to show you different perspectives on it. We started out by talking about just the existence of the dark state. Then we talked about dark state transfer. You can regard, for detuned light, the dark state as the lowest energy state of the system, and then an adiabatic theorem tells you that you can keep particles in the dark state even as you change the laser parameters and change which state is the dark state. This is the basis of coherent population transfer, the famous STIRAP method. There is another aspect of the dark state which gives us the possibility of lasing without inversion. As I reminded you, lasing has a threshold, which is inversion: you need more atoms in the excited state than in the ground state, because the atoms in the ground state absorb the laser light and you want a net gain. However, if you have a dark state, you have a situation where the atoms do not absorb the laser light, and therefore the conditions for net gain have changed, and indeed lasing without inversion becomes possible. I started to explain to you how lasing without inversion can come about. It relies on the fact that in a three level system, absorption destructively interferes but stimulated emission does not destructively interfere, and therefore you can have lasing without inversion in such three level systems. I showed you one possible realization, and this is hydrogen in a DC electric field. If we mix the 2S and 2P states with an electric field, we have this three level structure. And for a laser tuned right in the middle between the two states, the two amplitudes for excitation cancel exactly, and therefore you have a zero absorption feature. However, if you now put a little bit of population into, let's say, the upper state, this upper state, in the wings of its profile, still has gain for stimulated emission, and what we get is lasing without inversion. Now you say, OK, but that's the hydrogen atom--which atom really has two degenerate levels whose degeneracy you can split with a static electric field? Well, you're already an expert at this point in the course. If the 2S and 2P states are widely separated, well, you add a photon, and the photon, which is in resonance with the 2S to 2P transition, creates, in the dressed atom picture, a degeneracy. Maybe I should have shown the P state higher. The 2S state with one more photon and the 2P state with one photon less have the same energy, and then you create exactly this situation.
So therefore, the way you can realize that in atoms other than hydrogen is to use an AC electric field to mix S and P states. And I'll show you in five minutes, a little bit more in detail, what I mean by that. There is a trivial realization of lasing without inversion which I want to mention, and this would be if you have a three level system with an excited state and two levels, g and f, in the ground state. You may have an inversion for the e to g transition, and therefore you can get lasing, because the population in this state f is not coupled. This is pretty trivial. But the more subtle part, of course, is that we can realize the same thing in a driven system, using a control laser, by creating the same situation with a bright and a dark state: population in the dark state is hidden from the light and does not absorb light. Let me just indicate that in both those situations we have a state with no absorption--a dark state. Those examples may raise the question whether, whenever you have lasing without inversion, you can find a basis where you have inversion again between the two levels which are relevant, and the extra population is just hidden. This question is sometimes discussed in the literature, sometimes in a semi-controversial way, and I want to make two comments about it. One is that when you start dressing up your system with laser beams, you have strong control lasers--you have two lasers, omega 1 and omega 2, where one is often a strong control laser and the other one is the weak laser where we want to have lasing--so you actually have a time dependent system, driven by time dependent fields. And once you have a time dependent system, it's no longer clear what the eigenstates are, what the populations are, and what the coherences are. There is no longer a unique way to identify the eigenstates, because every state is, so to speak, time dependent almost by definition. On the other hand--and this is the second comment--the example I gave you with atomic hydrogen, where you have just a little bit of mixing with an electric field, is an example where you genuinely have less population in the excited state than in the ground state, and at least the equation tells you that even without inversion you have a net gain. So my own understanding of the situation is that in many cases you can reduce it to a simple picture where you have simply hidden population in a dark state, but in some other systems you may not find that without rather unnatural definitions. Questions about lasing without inversion? Nancy? STUDENT: Is lasing without inversion important in the lab, as opposed to lasing with inversion, or is it more about teaching us [INAUDIBLE]? PROFESSOR: Well, lasing without inversion has definitely been touted as a way to get lasing deeper in the UV, to get lasing for very blue transitions, because when you want to create inversion--this actually has been the problem in creating x-ray lasers in atomic systems--if you have larger and larger energy separation, spontaneous emission scales with omega cubed, and so it becomes harder and harder to fulfill the ordinary gain equation. STUDENT: So in those cases, even these last two methods [INAUDIBLE] because [INAUDIBLE]? PROFESSOR: Well, lasing without inversion alleviates the requirements for building a laser, and so people have discussed that, where it's really hard to create a laser in the conventional way--deep, deep in the blue, in the UV, or in the x-ray regime--lasing without inversion may help.
I'm not aware that any practical development has emerged from that, because there is a price to be paid, and that is usually in the form of coherence: you need a certain degree of coherence in your system to be able to do that. It's a powerful idea, but as far as I know there has been no killer application of it. The importance of dark states is definitely in slow light, manipulation for quantum computation, and concepts of storing light, and this is what we want to discuss next--actually not next, but after next. What I first want to discuss is another aspect of the dark state, and this goes by the name EIT, electromagnetically induced transparency. I can introduce this topic by a question to Radio Yerevan: is it possible to send a laser beam through a brick wall? And the answer of Radio Yerevan, if you know the joke, is always: in principle yes, but you need another very powerful laser. So in an incoherent way, of course, a very powerful laser can drill a hole into the wall, and then the next laser can go through the wall without absorption. But you can be smarter. If the very powerful laser, through coherence, puts all the atoms in the brick, in the beam path of the laser, into a coherent superposition state, then they become a dark state, and then your laser can go through the brick wall. So, can a laser beam penetrate an optically thick medium? The answer is yes, with the help of another laser. Original ideas along those lines were formulated by Steve Harris, who has really pioneered this field. He first considered special auto-ionizing excited states where there were two pathways of coupling into the continuum, but later work has shown that it can be realized in a lambda system. Let me talk about this conceptually simpler realization in a three level lambda system. Let's assume we have again our normal three level system, g, f, and an excited state which has a width gamma. And we want to send a probe laser through a dense medium, where it would be completely absorbed by resonant excitation to the excited state. But now we can have a strong coupling laser with a Rabi frequency omega c, and if we drive the system very strongly, we can create a situation where the coupling laser, if it's strong enough, does complete mixing between the excited state and the ground state. And that means, if you have two levels which are completely mixed, they are split by the energy, or the frequency, of the coupling. So in other words, what we obtain is two states, e plus f and e minus f. The way you should read this is the following. You can just assume for a second that the state f were degenerate with the excited state, and then I put in a very strong mixing between the two. This is exactly the example we had with hydrogen in an electric field: we get two states which both have width gamma over 2, they are strongly mixed, and the splitting is nothing else than the matrix element of the electric field. But if you don't have two degenerate states and we add a photon, then the photon--I've mentioned it again and again--creates, in the dressed atom picture, a degeneracy. You can just add a photon, draw a dashed line, and this is your virtual state, if you want. Or, if you look at Schrodinger's equation, you have something which oscillates at the frequency of the state f, but now you multiply it with an electric field which oscillates at the resonance frequency, and then you have something which oscillates at the sum of the two frequencies, and this is exactly what I indicated with the dashed line.
So that's how you create, so to speak, a degeneracy by using the frequency of the laser to overcome the energy splitting, and the result is that you have created exactly this excited state level structure. And if you now look from the ground state, with our photon tuned right between those two continua, then we have a dark resonance. In order to accomplish this--I'm not going into any calculations here--you need a sufficiently strong coupling laser, for instance a coupling laser whose Rabi frequency is larger than the spontaneous emission rate, or decoherence rate, gamma of the excited state. So if we now scan the probe laser and look at the transmission--let me assume for simplicity that there is no relaxation between the two ground states--then, if the coupling laser strength is 0, we have a broad feature, which is simply the single photon absorption of the probe light. If we have an infinitesimal coupling laser strength, we get a very, very sharp feature, and as the coupling laser becomes stronger and stronger, we get a window of transparency. The width, delta omega, of this central feature is given by the Rabi frequency of the coupling laser. So what are possible applications? One is that you can design non-linear materials. Usually, when you want to have frequency conversion processes, or optical switches where one laser beam affects another laser beam, you want a very strong non-linear response of your medium. You usually get that if you go near resonance, but near resonance you have strong absorption. Now, using the concept of EIT, you can have both: you can have the strong non-linear response of your medium near resonance, but you suppress the absorption--you get a window of transparency using EIT. So you can have near resonant materials without absorption. Again, I'm not aware that this has really taken off in a big way. Another application would be very sensitive spectroscopy. Assume you want to measure one isotope which has a tiny abundance, but you have to observe it against the background of a very strongly absorbing isotope. If you could switch off the absorption of the background, the absorption of the strong isotope, you could still see a small amount of a trace isotope in the presence of a strong absorption line. So: sensitive detection of trace elements. Questions about that? I've given you qualitative pictures for lasing without inversion and for EIT. I now want to give it a little bit more quantitative touch--not by going through the optical Bloch equations, which would be necessary to describe all features of it, but by giving you at least one picture in which you can derive and discuss things in a more quantitative way, and this is the eigenstate picture. I've also done here what I've said several times: when we have splittings between the levels, we can focus on what really matters, namely the detunings, by absorbing the laser frequencies into the definition of our levels, and that's what I've done here. Instead of using levels g, f, and e, I have the levels g, f, and e together with the photon number in laser field one and the photon number in laser field two. So if laser field one and laser field two were both resonant with their respective transitions, then all those three levels would be degenerate.
But now, in the three level system, they are not degenerate, because we have a relative detuning, small delta, from the Raman resonance, and we have a detuning capital Delta, which is, sort of, the common detuning of the Raman lasers from the excited state level. So I define the Rabi frequencies--the Rabi frequencies over 2 are the coupling coefficients--and the Rabi frequencies are proportional to the electric field; therefore, they scale with the square roots of the photon numbers n and m in the two laser beams. If I do that, I have a really very simple Hamiltonian. On the diagonal, we simply have the detuning of both laser beams from the excited state, and here we have the Raman detuning; and we have two couplings to the excited state, one from laser field one, g1, with n photons, and the other from laser field two, g2, with m photons. Any questions about that? We are just setting up the simple equations which we have done a few times. Let me focus first on the simple case where everything is on resonance. Then we have the structure which I've shown here. You can sort of say you have three levels which are degenerate without any Rabi frequency, because the detunings are all zero--the whole diagonal is zero. And then the off diagonal matrix elements, the Rabi frequencies, just spread the three levels apart. The general structure of this matrix is that in the middle you always have a state which is just a superposition of g and f--so it's a dark state; it has no contribution from the excited state--and the two outer states have equal contributions of the excited state. So the excited state has been distributed over the two outer states, and the width of those levels is therefore gamma over 2. And we know that when we had two levels and we were driving them resonantly with a Rabi frequency, we had splittings which were just given by the Rabi frequency; now the splitting between the outer levels and the dark state is set by the quadrature sum of the two Rabi frequencies. PROFESSOR: So that's a very general structure. I want to go back to the situation where one laser is a probe laser--let's assume this is our laser beam which wants to go through the brick wall--and the other laser has to prepare the system. Let me discuss the limit where the photon number in laser one becomes very small, the photon number n goes to unity, and this is much smaller than m. So we have the situation of a weak probe field. In this limit, the dark state has much, much more amplitude in state g, and the state g is almost decoupled--it is, in a trivial way, the dark state--while the coupling laser very strongly mixes the state f and the excited state. So in this limit we have a nice physical situation. I should point out that you can solve most of those situations for three level systems analytically; it's just that those expressions get long and are not very transparent. So what I'm trying here in the classroom is to pick certain examples--weak probe field, resonance--where we can easily understand the new features which happen in the system. So the situation we have now prepared is: we have our dark state, which is level g; we said we have only one photon in our probe beam; and we have lots and lots of photons in the coupling laser, which couples the other ground state, f, to the excited state, e. So we have the structure that we now have two states which have half of the width of the excited state, and they are both bright.
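A quick diagonalization of the resonant three-level matrix described above (h bar = 1, illustrative Rabi frequencies, my basis ordering g, e, f). It confirms the dark state in the middle with no excited-state amplitude, the two outer states split symmetrically, and--for omega 1 much smaller than omega 2--a dark state that is almost purely the state g, which is the weak-probe limit used next.

```python
import numpy as np

# Resonant three-level lambda system on resonance: diagonal zero, couplings Omega/2.
Omega_1, Omega_2 = 0.6, 2.0                      # illustrative Rabi frequencies

H = 0.5 * np.array([[0.0,     Omega_1, 0.0],     # basis: |g>, |e>, |f>
                    [Omega_1, 0.0,     Omega_2],
                    [0.0,     Omega_2, 0.0]])

vals, vecs = np.linalg.eigh(H)
print("eigenvalues:", np.round(vals, 3))
print("outer states at +/-:", np.round(np.sqrt(Omega_1**2 + Omega_2**2) / 2, 3))

dark = vecs[:, np.argmin(np.abs(vals))]          # the zero-energy eigenstate
print("dark state |g>,|e>,|f| amplitudes:", np.round(np.abs(dark), 3))
# -> no amplitude in |e>, and mostly |g> when Omega_1 << Omega_2 (weak probe limit)
```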
We can call one the bright state 1 and the other the bright state 2. The splitting is related to the Rabi frequency; let me just call it delta bar. This is now the level structure which we have, and what I want to emphasize in this picture is the phenomenon of interference. We've talked about interference of amplitudes, but now I want to take the system and show you how we get interference when we send one probe photon through the system. What we can now formulate is a scattering problem. We have one photon in our probe beam, in a special mode, and all the other modes are unoccupied. And when we ask, does the probe photon get absorbed--does the weak laser beam get stuck in the brick wall?--we are actually asking whether it is possible that we scatter the photon out of its mode so that it gets absorbed. Absorption is really a two photon process: it involves spontaneous emission, and the photon is emitted into another mode. So the scattering problem is a two photon process, and the matrix element needs an intermediate state--but now we have two of them. We start with one photon, we have the light-atom coupling, and we can go through bright state 1; from bright state 1 we have the light-atom coupling again, and eventually we go back to the ground state with the photon gone from the probe mode. And here we have a detuning--let's assume we are halfway detuned between the two bright states. Then we have a second amplitude, and it's indistinguishable--we have a Feynman double slit experiment--where everything is the same except that we scatter through bright state 2. When this matrix element vanishes, that is the condition for electromagnetically induced transparency. But we now want to understand what happens when we detune the probe laser. So we have set up the system with a strong control laser, we have completely mixed the ground state f with the excited state, and now we ask: can a weak probe laser go through the brick wall? How much of the probe laser is absorbed? So what we want to understand now--and this is the new feature I want to discuss--is what happens when we detune the probe laser by delta. Well, it's clear, and I just want to show you the structure of the formula. We had two detunings here, and if we detune the probe laser by delta, those denominators are no longer equal and opposite: in one case we add delta, in the other case we subtract delta. And therefore we no longer have the cancellation of the two amplitudes, and we have scattering of the photon. This is, sort of, the framework, and I want to show you now several examples of probe absorption spectra. It's a little bit of show and tell--I want to show you the result if you work that out; it's pretty interesting. So first I want to discuss the case where the coupling laser is really in resonance--we are near the one photon resonance. Let me assume first that the Rabi frequency of the coupling laser is much, much larger than gamma. Then, if you look at the probe transmission, we have the situation we know: when we are right here in the middle, we have a window of transparency, but if we detune by the Rabi frequency over two, we hit bright state 1, and if we detune in the opposite direction, we hit bright state 2. And the splitting in this situation, delta bar, is nothing else than the Rabi frequency omega 2.
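Schematically, the two indistinguishable scattering amplitudes add like this (constant numerators assumed, and the gamma-over-2 widths of the bright states left out--they would appear as imaginary parts in the denominators):

```latex
% Two paths through the bright states B1, B2 (split by \bar\delta), probe detuned by
% \delta from the midpoint between them:
A(\delta) \;\propto\; \frac{1}{\delta - \bar\delta/2} \;+\; \frac{1}{\delta + \bar\delta/2}
         \;=\; \frac{2\delta}{\delta^{2} - \bar\delta^{2}/4}
\qquad\Longrightarrow\qquad A(0) = 0 .
```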
So what we have is the detuning axis, and we already know that here in the middle is our special point, the one we have discussed at length: electromagnetically induced transparency. The two bright states are at plus and minus half the Rabi frequency of the coupling laser, and there we have very strong absorption. What you get is a sort of broad double feature. But this was the situation where we drive the system very strongly. I now want to discuss the case where the coupling laser is much weaker than gamma. Then the splitting between the two bright states, which was the Rabi frequency, is smaller than the width gamma, and the two bright states pretty much merge into one broad feature of width gamma. For that situation, we have a broad absorption feature on the order of gamma--or, if you have an opaque medium, you put the absorption coefficient into the exponent of an exponential function and you get a blackout which is even wider than gamma. But we still have our phenomenon of EIT; the width of this feature is now just much smaller than gamma. So in these two situations, where the lasers are close to resonance with the excited level, the strong absorption features due to the bright states--either one strong feature of width gamma, or the two features here; those broad, single photon absorption features--really overlap with our window of transparency. And what I find very insightful is to now discuss the situation where we separate the two, and I want to show you what happens: it gives a very interesting profile. So what we want to discuss now are the famous Fano profiles, and I want to discuss the two photon absorption features. We have discussed the case where the one photon detuning was zero; we now want to go to a large one photon detuning, capital Delta 2, and the new feature is that this will separate the window of electromagnetically induced transparency from the broad absorption features. Let me just draw a diagram of the states. As usual, we have our two states in a lambda transition, g and f, and we have this continuum of the excited state. But now--and this is what is often done in the experiment--you are not using Raman lasers which are in resonance with the excited state; you are using Raman lasers--here is the strong coupling laser--which are far detuned. So here we have a detuning for laser two, called capital Delta 2, and its Rabi frequency is omega 2. Here we have a weak probe laser, omega 1, with a detuning--let's call it capital Delta 1. In order to keep the situation simple, we use a weak probe laser: omega 1 is small. And our Raman detuning, delta, is the difference between the two single photon detunings, capital Delta 2 minus capital Delta 1. For these situations there are nice analytic expressions, and together with the class notes I will post a wonderfully clear paper by [? Lunis ?] and [INAUDIBLE] where the two authors discuss this situation. Let's just figure out what the features in the system are by going through different situations. One: if the Raman detuning is zero, we should always get the phenomenon of electromagnetically induced transparency. Two: if we don't have any coupling laser, then we simply have a two level system, and if we tune omega 1 into resonance, we get simple single photon absorption. So if we look at the system of three levels and ask what the relevant processes are, one limit is, of course, the trivial limit where we have single photon absorption. This is, of course, a trivial case.
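Here is a sketch that reproduces these probe absorption spectra from the standard weak-probe, steady-state result for a lambda system. The parametrization and the parameter values are mine, not the lecture's; the point is only that one expression produces the two bright peaks with a transparency window for a strong resonant coupling laser, the narrow dip inside a broad line for a weak one, and--by making Delta_c large--the detuned profiles discussed next.

```python
import numpy as np

# Weak-probe steady-state absorption of a three-level lambda system (my notation):
# ground |g>, excited |e> of width Gamma, second ground state |f> with small
# decoherence gamma_gf; probe detuning Delta_p, coupling detuning Delta_c,
# coupling Rabi frequency Omega_c.
def probe_absorption(Delta_p, Delta_c, Omega_c, Gamma=1.0, gamma_gf=1e-4):
    delta = Delta_p - Delta_c                    # Raman (two-photon) detuning
    denom = (Gamma / 2 - 1j * Delta_p) + (Omega_c**2 / 4) / (gamma_gf - 1j * delta)
    rho_eg_per_probe = 0.5j / denom              # coherence per unit probe Rabi frequency
    return rho_eg_per_probe.imag                 # absorption ~ Im(rho_eg)

Delta_p = np.linspace(-4, 4, 9)
# Resonant, strong coupling (Omega_c >> Gamma): peaks near +/- Omega_c/2, zero at center.
print(np.round(probe_absorption(Delta_p, Delta_c=0.0, Omega_c=3.0), 3))
# Resonant, weak coupling (Omega_c << Gamma): one broad line with a narrow dip at center.
print(np.round(probe_absorption(Delta_p, Delta_c=0.0, Omega_c=0.3), 3))
```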
Let's now go to the more interesting case that we have a coupling laser. What happens now? We have our two states. The excited state is coupled with the laser with Rabi frequency omega 2. Now we know that the laser omega 2, if it is not on resonance, will give us an AC Stark shift. This AC Stark shift, we've gone through that several times. It's the matrix element or Rabi frequency squared divided by the detuning. If we now bring in the probe laser, what are the features we expect? Well, there are two features. One is the trivial case we've just discussed above. If we tune the probe laser into resonance with the excited state, we have single photon absorption. We get a broad feature. It's almost like in the two level system. But in contrast to the case I just discussed above, the excited state level e has now an AC Stark shift so the resonance is shifted. That's now becoming a four photon process because we need two photons going up and down with the coupling laser to create the AC Stark shift. And now we have a photon from the probe beam and the photon is scattered, so it's a four photon process and it will give us a broad resonance which is now AC Stark shifted. But in addition, we have a resonance, which is the Raman resonance. When the Raman detuning is zero, then we absorb from the probe laser and we emit in a stimulated way with the coupling laser, and we have a stimulated two photon transition. Now, what is the width of this stimulated two photon transition? Well, we go from a stable ground state to-- I wanted to say another stable ground state, but this other stable ground state is now scattering photons. So because of the presence of a strong coupling laser, you have broadened this level f by photon scattering. You interrupt the coherent time evolution by scattering photons, and the photon scattering happens in perturbation theory by the amplitude to be in the excited state squared times gamma. The scattering rate, gamma scattering, is the Rabi frequency divided by the detuning (this is the amplitude to be in the excited state), squared, multiplied by gamma, and if I can trust my notes, there is a factor of two, which I don't want to discuss further. This is a quantitative argument. The analytic expressions are in the reference I've given to you. So the situation which we have right now can be in a very powerful way summarized as follows. We have our ground state, we have two continua we can couple to. We can couple to the excited state which has a width, gamma, through a single photon, but there is the AC Stark shift. Or we can couple through a two photon Raman transition to the state f, but the width is much, much smaller because it is only the scattering rate due to the off resonant coupling laser. Let me just write that down because I said a lot of things. So g couples now, when we detune the probe laser, either in resonance with this feature or in resonance with that feature, to a narrow and a wide excited state. One excited state, of course, is the state f, but the coupling laser puts some character of the excited states into the state f. And the most important thing now is the following. This is the theme I've emphasized again and again when we discussed three level systems. Those two states have a width, and the width means they spontaneously emit light, but they emit light into the same continuum. So if you start in the ground state, your probe laser has a photon and the photon gets scattered. You do not know through which channel it has been scattered.
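The two small quantities the argument rests on can be written out explicitly. The exact numerical prefactors depend on how the Rabi frequency is defined, and the lecture itself leaves a factor of two undiscussed, so treat these as order-of-magnitude expressions rather than exact results.

```latex
% AC Stark shift of the Raman resonance due to the far-detuned coupling laser:
\Delta E_{\mathrm{AC}} \;\sim\; \frac{\Omega_2^{2}}{4\,\Delta_2}.
% Amplitude admixture of the excited state into f:  \Omega_2 / (2\Delta_2).
% Off-resonant photon scattering rate, which broadens the narrow Raman feature:
\gamma_{\mathrm{sc}} \;\sim\; \left(\frac{\Omega_2}{2\,\Delta_2}\right)^{2} \gamma .
```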
So in general, and this is as far as I want to push it in this class, in a three level system, we have now the interference between those two continua. One is narrow and one is wide. Let me just write that down because this is important, but both excited states couple to the same continuum. And by continuum, I mean the vacuum of all empty states where photons can be scattered, and this is the condition for interference. And this is, of course, what gives rise to electromagnetically induced transparency. I'll take your questions, but let me first give you a drawing which may illustrate, or summarize, what I've just said. So what we try to understand is what happens when we detune the probe laser. Until now, we had always the EIT feature, the EIT window, was completely overlapping with the one photon resonance, but now, because the coupling laser has a detuning of delta 2, we have to see what that means for the probe laser. I have to trace back how the detunings are defined. The Raman detuning was big delta 2 minus big delta 1. If the detuning delta is chosen to be delta 2, we have the simple situation that we go from the ground state right to the excited state, and we have the feature which has a width of gamma. This is what you would call the single photon resonance. It is the single photon resonance. It is due to resonantly coupling into this continuum, and the only feature of the coupling laser is that there's an AC Stark shift to it. A single photon resonance, almost like in a two level system, but the only addition is the AC Stark effect. Now we have a second feature, which can be very sharp. This is when we do the two photon Raman process into the other ground state. Due to photon scattering, this resonance has a width of gamma scattering, and the position is not at zero, at the naked Raman resonance. It also has an AC Stark shift because the coupling laser does an AC Stark shift to both the excited and the ground state. The coupling laser, the AC Stark shift, pushes ground and excited state in opposite directions. So therefore, we find this feature at the AC Stark shift, the Rabi frequency of the coupling laser squared divided by the detuning delta 2. The name of this feature is the two photon Raman resonance plus the AC Stark shift, which actually means it's a four photon process. The question is now, where is the electromagnetically induced transparency? We have now specifically introduced the absorption features. We have identified a one photon absorption feature plus AC Stark shift, a two photon absorption feature plus AC Stark shift, but where is electromagnetically induced transparency? Here. Electromagnetically induced transparency is always at delta equals 0. You have to fulfill the Raman resonance, and this resonance is not affected by any AC Stark shift. It's always at delta equals 0. So therefore, what that means is you have two absorption features, and you would think these are two Lorentzians, but they interfere and they go to exactly 0 at delta equals 0, and this is our EIT feature. So there is an interference effect between a broad feature and a narrow feature, and this is found in many different parts of spectroscopy, but you also find it in nuclear physics whenever you have a narrow feature embedded into a continuum. What we have here is a narrow feature and a broader feature, but a narrow feature and something which is broader or a continuum is called a Fano resonance. You actually have the same situation when you look at scattering of atoms. Many of you are familiar with Feshbach resonances.
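Before the Feshbach analogy, here is a short numerical sketch that reproduces the picture just described: a broad one-photon feature, a narrow two-photon Raman feature, and an exact zero of absorption at the Raman resonance. The formula below comes from a weak-probe, effective non-Hermitian Hamiltonian treatment that is standard in the EIT literature; it is not derived in the lecture, the function and parameter names are mine, and overall conventions and factors of 2 vary between texts.

```python
import numpy as np

# Weak-probe photon scattering rate for a lambda system (a sketch, not the lecture's formula).
# Lecture notation: coupling laser Omega_2 with one-photon detuning Delta_2 (large),
# weak probe Omega_1, Raman detuning delta = Delta_2 - Delta_1.
def probe_scattering_rate(delta, Delta_2, Omega_1, Omega_2, gamma, gamma_f=0.0):
    Delta_1 = Delta_2 - delta                  # probe one-photon detuning
    two_photon = -delta + 1j * gamma_f / 2     # two-photon term, regularized by a small width of f
    c_e = 0.5 * Omega_1 * two_photon / (
        two_photon * (Delta_1 + 1j * gamma / 2) - Omega_2**2 / 4
    )                                          # excited-state amplitude, first order in the probe
    return gamma * np.abs(c_e) ** 2            # scattering rate, proportional to probe absorption

gamma = 1.0
delta = np.linspace(-5, 110, 20001)
rate = probe_scattering_rate(delta, Delta_2=100.0, Omega_1=0.01, Omega_2=10.0, gamma=gamma)

print(probe_scattering_rate(0.0, 100.0, 0.01, 10.0, gamma))  # exactly 0: the EIT point
# The scan shows a narrow peak near the AC-Stark-shifted Raman resonance
# (around -Omega_2**2 / (4*Delta_2) in this sign convention) and a broad peak
# of width ~gamma near the one-photon resonance, delta close to Delta_2.
```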
Well, a lot of people call it a Fano Feshbach resonance, and what you have in a Fano Feshbach resonance is that two atoms can scatter off each other. This is the continuum. This would be your broad feature. But then they can also scatter through a molecular state, and this is a narrow feature. And what we've identified now for electromagnetically induced transparency are the two features. One is the single photon absorption, one is the Raman resonance. But the concept is much more general. You have a narrow feature, you have a broader feature. It's responsible for scattering two atoms or it's responsible for scattering a photon, and once the photon or the atoms have been scattered, you have no way of telling which intermediate state was involved. And therefore, you get interference. And in this analogy, what EIT is for light, the zero crossing of a scattering length is for atoms: the atoms do not scatter off each other because the two different processes completely destructively interfere. So what I've shown here is two Lorentzians, two absorption features. Let me now re-plot it and plot the index of refraction minus 1. Here was EIT, zero detuning. We have a sharp feature here at the two photon resonance, we have a broad feature here at the single photon resonance, and I can now transform the Lorentzians into dispersive features. I use freehand, so this is a dispersive feature for the broad transition. The narrow transition has a much, much sharper dispersive feature, and the important part is now that at the detuning delta equals zero, where we have electromagnetically induced transparency, we have an index of refraction which is exactly one because it's a dark state, you have no absorption, you have no light scattering, you have no reaction to the light. And therefore, the index of refraction of the material is like the index of refraction of the vacuum. So you have n equals 1. It looks like a vacuum in terms of index of refraction. It looks like a vacuum because you have no absorption. But what you have is you have a large derivative of the index of refraction with the frequency detuning, and that affects, and that's what I want to tell you now, the group velocity of light. So anyway, this is maybe as far as I want to push it, and I was actually wondering if this is a little bit too complicated to present in class. But on the other hand, I think it sort of also wraps up the course. We have a three level system, and we find a lot of things we have studied separately before-- two photon Raman feature, single photon light scattering-- but now they act together and they interfere and have this additional feature of electromagnetically induced transparency. Colin? STUDENT: So the way you drew the level diagram, the f, I think, [INAUDIBLE] state [INAUDIBLE]? PROFESSOR: Yes. Actually, I emphasized that we have an AC Stark shift, and what I didn't say when I discussed it here is that the AC Stark shift pushes this level down and pushes the other level up. But since we are talking about a very broad resonance in the excited state, for all practical purposes, the AC Stark shift doesn't matter, whereas for the narrow Raman resonance, the AC Stark shift is important. STUDENT: Also, you showed on the plot that the shift of the excited state [INAUDIBLE]. PROFESSOR: Sorry. Thanks. So the AC Stark shift for that detuning would shift this level up and would shift this level down. What's the second question? STUDENT: You drew on the plot-- PROFESSOR: This one? STUDENT: That one.
That the shift of the excited state was much higher than the shift of the ground state. PROFESSOR: You mean those two shifts? STUDENT: Yeah. PROFESSOR: We have to now do the bookkeeping. We have assumed that the coupling laser has a large detuning, the coupling laser is very far away from resonance. And if you want to hit the excited state, we know we need a Raman resonance, which is delta 2, but the Raman resonance, capital delta 2, means that we are smack on the single photon resonance for the probe laser. So this feature is pretty much we take the ground state and go exactly to the excited state with a single photon. There is an AC Stark shift involved but it's not relevant here, whereas the other feature is the two photon Raman feature. And the one thing I wanted to point out in this context is that there is actually a small energy splitting between the two photon Raman feature by the AC Stark effect, whereas the EIT feature always happens at Raman resonance delta equals 0. Because, just to emphasize that, delta equals 0 is really you induce a coherence between g and f. It's a dark state, and when you have a dark state in that situation, you don't have an AC Stark effect. So the EIT feature is at delta equals 0, whereas the photon scattering features, they suffer, or they experience-- it may not be negative to suffer an AC Stark shift, but they experience an AC Stark shift. Further questions? Yes? STUDENT: Can we somehow relate this to Doppler free spectroscopy? PROFESSOR: Can we relate that to Doppler free spectroscopy? Actually, I don't think so because I would say for the whole discussion here, let's assume we have an atom which has infinite mass, which is not moving at all. We're really talking about internal coherences. However, and that's where it becomes related, if you look at very, very narrow features as a function of detuning then, of course, Doppler shifts play a role, and if you have very, very, very narrow features, you become sensitive to very, very, very small velocities, and therefore, you have an opportunity to cool. So if you can distinguish spectroscopically an atom which moves a tiny little bit and an atom which stands still, if you can, by a narrow line, distinguish the two, then you can actually laser cool this atom. The EIT feature can give you extremely high resolution. I'm not discussing it here. I've discussed the phenomenon of EIT, which is coherent population trapping. But there is an extension, which is VSCPT, Velocity Selective Coherent Population Trapping, and VSCPT was a powerful method to cool atoms below the recoil limit, but I've not connected anything of coherent population trapping with the Doppler shift. So for all this discussion, please assume the atom is not moving. Other questions? Then let me just say a few words about the fact that we have a large derivative of the index of refraction, and this, of course, is used for generating what is called slow light. The group velocity of light is the speed of light, but then it has a denominator which is the derivative of the index of refraction with respect to frequency. Towards the end of the last century, there were predictions that electromagnetically induced transparency would give you very sharp features which can be used for very slow light. And what eventually triggered major developments in the field was this landmark paper by Lene Hau where she used the Bose-Einstein condensate to eliminate all kinds of Doppler broadening. 
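As a formula note before continuing, the statement about the derivative of the index of refraction is the standard group-velocity expression; this is textbook material rather than something derived in the lecture.

```latex
v_g \;=\; \frac{c}{\,n(\omega) \;+\; \omega\,\dfrac{dn}{d\omega}\,}
% At the EIT point n = 1, but dn/d\omega is large and positive,
% so v_g can be many orders of magnitude smaller than c: slow light.
```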
There are other tricks how you can eliminate it, but this was the most powerful way to just take a Bose-Einstein condensate where atoms have no thermal velocity and, in this research, she was able to show that light propagated at the speed of a bicycle. So it was a dramatic reduction of the speed of light, and this showed the true potential of EIT. There have been other demonstrations before where light has been slowed by a factor of 100 or a few hundred, but eventually, combining that with a Doppler-less feature because the atoms don't move in a BEC created a dramatic effect. So we have now two ways how we can get a large derivative, dn d omega. I've discussed here the general case that we have a narrow feature, a broad feature, because we have the coupling light detuned from the excited state, but let me just point out that even if you have the coupling light on resonance, depends what you really want, but you can get an even stronger feature in the index of refraction versus frequency. This is now the situation where we have the strong absorption feature, but then we have the EIT window. So we have this superposition of a positive Lorentzian and a negative Lorentzian, and if I now run it through my Kramers-Kronig calculator and I take the dispersive shape, I can sort of do it for the broad feature in this way and for the narrow feature in this way, and now you have to add up the two. And what you realize is at this point, you have a huge dn d omega. What I'm plotting here is on the left side, the absorption of the Lorentzian, and you can regard this sharp notch as a second Lorentzian. So you have the positive Lorentzian, negative Lorentzian, and then you take the dispersive features and you add them up with the correct sign. So whether you're realizing now for quite a general situation where you have single photon detuning, which I discussed before, or whether you're on the single photon resonance, you can have extremely sharp features. So now you can take it to the next level. You have a light pulse which enters a medium, and now the light pulse slowly moves through the medium. But while the light pulse moves through the medium, you reduce the strength of the coupling laser. What happens? So you do now an adiabatic change of your system. You do an adiabatic change of the control field omega 2 while the probe pulse is in the medium. Well, that means that under idealized assumptions which we've discussed, this feature gets narrower and narrower and narrower. If omega 2 goes to 0, the strength of the control field, this feature becomes infinitesimally narrow. And therefore, this feature becomes infinitely sharp, and that means that the group velocity goes to zero. This is now in the popular press, it's called stopped light or frozen light because the light has come to a standstill. What really happens is the following. We have our coupling laser, omega 2, and we have our probe laser, omega 1. When we do what I just said is that omega 2 goes through zero, then the dark state originally, for very strong omega 2, remember the dark state was g? But now if we let omega 2 go to zero, the dark state will become f. And that means that in a way, every photon in the probe pulse has now pumped an atom from g through two photon Raman process into f. So therefore, what it means to stop light or to freeze light means simply that the photons of the laser have turned into an atomic excitation where the excitation is now the state, f. In other words, you have written the photon. 
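The statement that the dark state rotates from g to f as omega 2 is ramped down can be written compactly. The expression below is the standard (unnormalized) dark state of the lambda system; the lecture only states its two limits, and phase and normalization conventions vary.

```latex
% Dark state of the lambda system (unnormalized; conventions vary):
|D\rangle \;\propto\; \Omega_2\,|g\rangle \;-\; \Omega_1\,|f\rangle .
% Strong coupling laser, \Omega_2 \gg \Omega_1:  |D\rangle \approx |g\rangle.
% Coupling laser ramped to zero, \Omega_2 \to 0:  |D\rangle \to |f\rangle,
% so the probe excitation is converted into the g--f coherence, a spin wave.
```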
The photon has now put the atom into a different hyperfine state. So if this is done adiabatically, and I can't do it full justice in this course, but this means that the light is coherently converted into the atomic state. When I say "coherently," and not just "into population," I mean all the quantum phases, everything which was in the quantum nature of the light, has now been converted, has been written into the state f. Because g and f are hyperfine states, this means that you have coherently converted the photon or the electromagnetic wave in the probe beam into a spin wave or a magnon, as it is often called. Anyway, I just want to show you the analogy. The fact that you can put the quantum information of light into an atomic state and back and forth, we've discussed that when we had the situation of cavity QED. We prepared a superposition of ground and excited state and exactly the same quantum state which we had in the atom we later found in the cavity as a superposition of the zero photon and the one photon state. So from those general concepts, it should be clear to you that it is possible to coherently transform a quantum state from light to atoms and back to light. And here you see the different realization. We have a quantum state of the photon in the probe laser, and we can now describe the excitations in the system in a parametrized way. What it means is, for the strong coupling laser, the excitation in the system travels as a photon in field one, but when you reduce the coupling in the coupling laser, the excitation becomes less and less photon-like. It becomes more and more magnon-like, spin wave like. And the moment you reduce the power in the coupling laser to zero, what used to be an excitation in the electromagnetic field has now been turned adiabatically into a spin excitation. Coherence has been written into the hyperfine states of your atoms, g and f. All this is done coherently, and therefore reversibly. You can read out the information by simply time reversing the process. You ramp up again the coupling laser, and that adiabatically turns the spin excitation back into an excitation of the electromagnetic field. Any questions? Well, we have three minutes left, but I'm not getting started with superradiance. We have one more topic left, and this will be the topic on Wednesday, Dicke superradiance, and this is when we discuss the phenomenon of coherence where we have coherence not only in one atom between two or three levels. We will then discuss on Wednesday what happens if we have coherence between many atoms, and this is at the heart of superradiance. OK. See you on Wednesday.
https://ocw.mit.edu/courses/5-111sc-principles-of-chemical-science-fall-2014/5.111sc-fall-2014.zip
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. CATHERINE DRENNAN: That's today's handout. We have valence bond theory and hybridization. So some people ask, OK, now you're going to tell me everything you just learned. It's not really right and there's something else that's better. No. All of these theories are good theories. They all do a very good job predicting the properties of molecules, but they all have different strengths and weaknesses. And I think in terms of what they're useful for, molecular orbital theory is very good in terms of thinking about energy levels. It's very good about thinking about bond orders or predicting whether something's going to have an unpaired electron or not. Valence bond theory and hybridization are really good in terms of thinking about shapes of molecules. So not so much about energy levels, but shapes. So all of these theories are very, very useful because we want to think about how atoms come together to form these molecules and what are the properties of the molecules. So these theories brought together really give us a wonderful picture of this. And I really like valence bond theory and hybridization because I like shape. I determine shapes of molecules, complicated molecules, for a living so I'm a big fan. But I will say that when I taught this the same lecture last year, I announced to the class that I had had a dream where all these atomic orbitals were coming together and trying to make other kinds of orbitals. And I realized that, perhaps, that was a sign that on Friday I should start teaching thermodynamics which is what we're going to be doing. We're going to start on thermodynamics on Friday. And last night, I had another dream about orbitals. So I think this is some more orbitals and then we go to thermodynamics. And I remembered my dream because at that moment, my giant dog jumped on top of me as I was sleeping to wake me up and realize that thermodynamics needs to come pretty soon. OK. But one more theory, valence bond. This is not so bad. OK. Bonds result from the pairing of unpaired electrons from the valence shell of atomic orbitals. That's it. That's it. So we have one, we bring in another so we can make molecular hydrogen, H2, because they each have one unpaired electron and they come together to form a bond. I like theories that you can put on a magnet on your refrigerator. That's a good theory to me. So also as part of valence bond theory, we have some names of bonds. And we've been talking about sigma molecular orbitals and pi molecular orbitals. And now, we're going to talk about sigma bonds and pi bonds. So we had orbitals in MO theory, valence bond theory, we now have bonds. Sigma orbital is cylindrically symmetric about the bonding axis. Thank goodness they didn't define them differently. That would have been a nightmare. So we have sigma orbitals that are cylindrically symmetrical about the bond axis and sigma bonds are cylindrically symmetrical about the bond axis. So no nodal plane along the bond axis. Good. We should be able to remember that. So with pi bonds, we have electron density in two lobes with a single nodal plane along the bond axis. So again, with pi orbitals, we had more-- it wasn't cylindrically symmetric. So this we should be able to remember. 
A couple other things about sigma bonds and pi bonds, a single bond is a sigma bond. So when there's one bond, it's a sigma bond. So what's a double bond? A double bond is a sigma bond plus one pi bond. So if it's a double bond, it's got two types of bonds, sigma and pi. And what do you think a triple bond is? Sigma bond and two pi bonds. So you got a triple bond like nitrogen, you got two pi's. Hey, it's really a good life when you have a triple bond. All right. Single bonds are always going to be sigma. Double, sigma and pi. Triple, sigma and two pi bonds. OK. So now we're going to hybridize our orbitals. And we're going to talk about electron promotion, as well. So start with carbon, carbon based life. Carbon is really important and if you are an organic chemist, and by organic, it means studying things with carbon, you care a lot about hybridization. And the stuff I'm teaching you today, you'll see a lot if you go on to take Organic Chemistry, 5.12. So carbon, such as one in methane, so we have our methane molecule here. The carbon has four unpaired-- can form bonds with four electrons, but to do so we need to do something with our electrons. So carbon comes in, it has two electrons in its 2s and it has two electrons in its 2p's, p orbitals, but we want to form four bonds. And in valence bond theory, every bond you bring an electron from one atom, an electron from the other, and they pair and that forms a bond. So we don't have four unpaired electrons to make four bonds with this configuration of electrons, so we can talk about promotion of an electron from here up there. And if we do that, now we have our four single unpaired electrons ready to make four bonds. And carbon does like to make four bonds. It does it quite often. So that's electron promotion. To form those four bonds, a 2s electron is promoted to an empty 2p orbital. And then, we can hybridize our orbitals and that means that we want to give all our orbitals some s and some p character. So here are our hybrid orbitals and let me show you the nomenclature. So we're talking about n equals two. So we have a two. We have s character and we have p character and we're using three p orbitals to make our hybrid orbitals. So we are going to make a 2sp 3 hybrid orbital. And we're going to make four of them because we've used four atomic orbitals to make them. So if we are using four, we need to make four. So let's kind of take a look at what's going on here. And we'll say that these hybrid orbitals differ only in terms of their orientation in space. So they don't have different shapes, they're just oriented differently. So here we have our 2s, remember it's symmetric, and we have our three p orbitals, and they're all the same except that they're all oriented differently in space. And when we bring these together, we form four hybrid orbitals and they kind of look like turtles, but they're turtles oriented differently in space, but otherwise they're the same. So those are our sp 3 hybridized orbitals. So carbon has this sp 3 hybridized orbital and it has four unpaired electrons available to form bonds with four hydrogens. So let's bring our hydrogens in to form our bonds. And each hydrogen brings with it its one electron. So now we have two electrons in all four of our hybrid orbitals. And we can think about where the energy came from. I just moved that electron, I didn't think about it. I'm like, yeah, that just goes up here. So where did the energy come from to do that? And that is, it came from bonding.
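The single/double/triple bond bookkeeping stated at the top of this passage is easy to encode. A minimal Python sketch (the function name is mine, not from the lecture):

```python
# Composition of a bond of a given order in valence bond theory:
# a single bond is one sigma bond; each additional bond order adds one pi bond.
def bond_composition(order):
    if order not in (1, 2, 3):
        raise ValueError("bond order must be 1, 2, or 3")
    return ("sigma",) + ("pi",) * (order - 1)

print(bond_composition(1))  # ('sigma',)
print(bond_composition(2))  # ('sigma', 'pi')
print(bond_composition(3))  # ('sigma', 'pi', 'pi'), e.g. the N2 triple bond
```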
So this molecule now is more stable because it's bonded. Methane isn't quite a stable molecule. That's another problem in and of itself. So the bonding allows you to do that. You get back from this bonding. So let's look at those bonds then that are formed that make that electron promotion worthwhile. And so you're forming a bond between the carbon and the hydrogen, you're forming four of them, and you're forming single bonds, they're sigma bonds, and the bond is formed between the carbon's 2sp 3 orbital and the hydrogen's 1s orbital. Hydrogen can't hybridize, it's got one 1s orbital. That's all it's got, can't do anything else. And that gives you a bond then, a sigma bond, that you'll see this a lot and you'll write this a lot. This is how we're going to name that sigma bond. So we're going to say sigma. We're going to have a parentheses. Identify the element, it's carbon. N is 2. Type of orbital, sp 3 comma hydrogen, the name of the other element, and its orbital, which is 1s. So when it asks you to name the type of bond, this is the complete answer that we're looking for. And we'll have more practice on this. Now we can also think about the shape that this molecule would have. What is the angle here between this hydrogen and that hydrogen and frankly, between any of the hydrogen carbon hydrogens? Yup, 109.5. And the name of that geometry? Tetrahedral, right. So sp 3 gives you a tetrahedral based geometry here. All right. So now let's get more complicated. Let's bring two carbons in. So we have ethane, two carbons, six hydrogens. So this also has carbons that are sp 3, and this is what we saw before for methane, but now I'm going to rotate this around and that's one carbon, but we need another carbon, but first we can think about this one carbon. So one of the carbons of ethane would have this 109.5 angle. It has four unpaired electrons available in its four hybrid orbitals to form interactions, one with carbon and three of them with hydrogen. And then we need another one of these so we'll bring that in and it comes in with its set of hybrid orbitals and its set of electrons. And we form a bond between them. And the bond we're going to form is a single bond, a sigma bond. And now let's bring in our hydrogens. So we had six hydrogens, three for each carbon. And so there are now two types of bonds. We have the carbon-carbon bond and we also have the carbon-hydrogen bonds. And so the carbon-carbon bond, which is a sigma bond, is sigma (C-- it has carbon-- 2sp 3, the other carbon is the same, C2sp 3 and then the bracket. So that's that sigma bond. It's a single bond. And here is our ethane molecule. And then we have our carbon hydrogen bonds, they're also sigma. Please don't give me pi bonds to hydrogen. It only has that one electron tapping with two electrons. It doesn't want to do anything complicated. It doesn't have p orbitals, just that one s. So sigma C2sp 3, H1s. And now we have defined this molecule so we brought together two tetrahedral centers and formed this molecule with a single bond. So let's talk about nitrogen. Nitrogen, also again, very important. So here we have five valence electrons. What about electron promotion? Should I do it? No. Because I mean, you could put it up here, but it can't make any more bonds so it doesn't really matter. So it doesn't occur because it would not increase the number of unpaired electrons to form bonds, but we can hybridize.
So we can still hybridize our orbitals and we can get four hybrid orbitals, because we're going to use our 2s and all three of our 2p orbitals. So we'll get the same set of hybrid orbitals. But this time, one of them has two electrons in it. So it's not ready to bond, it's happy according to valence bond theory. And these are our lone pairs. But we can form three bonds with these guys so let's look at an example, NH3. So now we have our lone pair, it's in this orbital up here, and then we have three orbitals available for bonding, each with an unpaired electron ready for the three atoms of hydrogen to come in. So we bring in our three atoms of hydrogen, each came with an electron, and now you can tell me with a clicker about the angle and the geometry of this molecule. Let's just do 10 more seconds. All right. So this is back to VSEPR again. So we have an angle here. It's based on an SN 4 system, one lone pair, three bonded atoms. So it's based on our 109.5, but those lone pairs make for bad roommates and they're pressing all of these hydrogens together and so the angle is less than 109.5 and we name this structure based on the atoms we see, not the lone pairs, so this is trigonal pyramidal. And so here we have it here so we're naming it without thinking about the position of those lone pairs that are pressing down on the bond so it looks trigonal, like a triangle, but it's also a little pyramid. So VSEPR-- VSEPR and hybridization, they just go right together. It's awesome. OK. So we can also name the type of bond. So our nitrogen had 2sp 3 hybridization and our hydrogen just 1s, it's a sigma bond, it's a single bond. So we named that sigma N2sp 3, H1s. So nitrogen, now we're going to go back to carbon-- sorry, to oxygen-- and think about hybridization of oxygen. Oxygen. Should I do an electron promotion? No. It's not going to help me. It's not going to create any more unpaired electrons available to form bonds, but I can hybridize and I can get the same four hybrid orbitals, our four 2sp 3 orbitals, but now two of them have two electrons in them and two are available to form bonds. Oxygen loves to form bonds with hydrogen and form water, most of the planet is water. There's a lot of water and water is really important for life so it's great that oxygen and hydrogen get along so well. So the oxygen, again, has two lone pairs which are here. You bring in our hydrogens and they come with one electron. And again, now, it's still a steric number four system, so it's less than 109.5, and it's actually a lot less than the nitrogen because you have those two lone pairs that are just taking up so much room and squeezing together these hydrogen atoms over here creating this 104.5 angle. So here we have our water molecule with its two lone pairs and its two hydrogens, and what's the name of that geometry? Bent. And again, we have these polar bonds that create a dipole so it's a polar molecule, which is very important in life. And we can name that bond. It's a sigma bond. It's made up of oxygen, O2sp 3, H1s. All right. So that's sp 3 hybridization. Now let's talk about sp 2 hybridization. So sp 2 hybridization. So back to our atomic orbitals. And now, we're not going to hybridize all of our orbitals. We're just going to hybridize our 2s and two of our p orbitals. So we'll hybridize these guys and we will form three hybrid orbitals and we will still have one un-hybridized orbital. We will have 2py left alone. So let's see what this does. How is this hybridization useful? So let's talk about boron.
Boron has three unpaired electrons, but they are not all available right now to form bonds, according to valence bond theory. So here we do want to do an electron promotion to put one of them up here so that now all three are available to form bonds. And we can again, hybridize these three atomic orbitals and form three hybrid orbitals. So we have three 2sp 2 hybrid orbitals and then we still have our 2 py orbitals so don't forget to mark it. It seems lonely. It's over here, but it's going to be important later so don't feel bad for it, yet. All right. So boron-- let's think about these hybrid orbitals and how this gives us the structure that we know occurs when we have boron. So boron now has its three sp 2 orbitals and these are going to lie in a plane and they're going to be as far apart from each other as they can to minimize electron repulsion. And if you're in a plane, then you need-- far apart as you could be is 120 degrees. And this is what gives us our trigonal planar geometry. So we saw that boron formed these trigonal planar complexes before and again, they're trigonal planar because they're like a triangle and they're in a plane, trigonal planar. And we can now bring in our hydrogens. The hydrogens come with an electron so we have an electron for them and there we have our structure. We can also name that bond. So again we have single bonds. So sigma B, for boron, 2sp 2, H1s, and there are three of those. Carbon-- carbon can also do this. We talked about carbon being sp 3. Carbon can also be sp 2. Hybridized carbon is amazing that's why life is based on carbon. Carbon can do lots of things. So again, we're going hybridized two p orbitals, one s orbital to give three hybrid orbitals and we have our 2py over here in the corner, but don't feel bad for it, it's going to do something useful. So we now have three electrons in these hybrid orbitals and now we have one electron in our 2py un-hybridized orbital, as well. So let's see what carbon, with this kind of arrangement of orbitals, can do. And again, we're going to have trigonal planar geometry for our 2sp 2 hybrid orbitals. So we have carbon there and these are all in a plane, but now coming out of a plane toward us is this 2py orbital. So it's coming out 90 degrees away from the trigonal planar geometry. So an example of sp 2 hybridization is in this molecule, C2H4, and it has a double bond, which means if it's a double bond, it has what kind of bonds in it? Sigma and pi, right. So one sigma, one pi bond. So here now, and this is the trigonal planar geometry. It's supposed to be in a plane, but you can't really see it if it's really in a plane. But 90 degrees away from that plane is our 2py orbital. We brought in our two hydrogens so this carbon here, is carbon is there, two hydrogens are there. This would be 120 degrees. Now we're going to bring in another one. That's the one over here. It comes in with it's carbon. It comes in with it's two hydrogens forming these single bond, sigma bonds, between the carbon and those hydrogens. And now we're going to form a carbon-carbon bond. This carbon-carbon bond is a sigma bond and so it's C2sp 2, C2sp 2, but we're not done. We said this is a double bond, so that's our sigma bond, but we need our pi bond. And now py, our un-hybridized orbital, is extremely excited because it can form the pi bond. So we form a pi bond and that's formed by our C2py, C2py un-hybridized orbital. 
And we also have four CH bonds and those are single bonds, those are sigma bonds, and so they're formed by our C2sp 2 carbon and hydrogen 1s. And there are four of those. So that's an example of sp 2 hybridization. And one thing that's very important, and here you can see what that molecule looks like-- doesn't all fall apart-- so this is a double bond, these smaller kits don't let me make double bonds, so I have a sign double bond. And you can see the angles and the geometry of this molecule. And another property of something with the double bond like this is that it's not really free to rotate. So when you have these two kind of points of attachment, when we have these orbitals forming between your un-hybridized p orbitals, that does not allow for rotation around the double bond. So if you're an organic chemist wanting to make a molecule that's going to be rigid, if you put a lot of double bonds in it, it can't twist and turn very well. It's often very rigid which is useful. So we'll stop here and we'll finish up sp 2 hybridization on Friday. For the clicker question, the bond over there is also the same as the one on the board. The one on the board is written with atoms in it and it has squiggly lines to abbreviate so make sure that your answer is consistent with the picture on the board, as well. How we doing? OK. All right. Let's just take 10 more seconds. Remember this is a clicker competition so we want to get the right answer in for your recitation. AUDIENCE: [SIDE CONVERSATIONS] CATHERINE DRENNAN: All right. That's pretty good because that's the right answer. OK. So let's just take a look at this for a minute. First let me explain. Let's settle down, quiet down. Let me just explain the diagram, too, because you'll be seeing these diagrams. So when you just have a bond, a line, and there's no atom indicated, that means it's carbon. Organic chemists, I think, came up with this rule. Carbon, they just said if nothing's indicated, of course, it's carbon. Carbon is such an important element, we don't really need to say more about it than that. So you could interpret this diagram, you have a carbon double bonded to another carbon. And then up here, there's a carbon in that ring so I just put, in this diagram, carbon with squigglies, you'll see that sometimes. That means that there's more atoms there, but I'm too lazy to draw them. And on this side, there's a carbon, but there's more atoms there and I'm too lazy to draw it, another squiggly. And then we have the double bond so there's a carbon down here, as well. It wasn't indicated, just the line in the drawing. And you have to predict how many hydrogens. Hydrogens are often not indicated. This one is indicated. There are other hydrogens in this drawing that are not indicated. You need to figure out where they go and the material we're doing now is going to help you do that. And then I also drew something and another squiggly because I was too lazy to draw the rest. So these are different kinds of diagrams that you'll see that all kind of mean there's more than one way to kind of write the same structure. So this particular molecule was used to treat schizophrenia in the 1950s and key to the usefulness of the molecule was that double bond. As we talked about last time, double bonds restrict movement. You can't twist around the double bond. And so if you had exchanged them and had this group over there and the hydrogen over here, it wouldn't be an active molecule.
So this double bond fixes the orientation of those other atoms such that it was an active molecule and could be used as a pharmaceutical to treat schizophrenia. So in terms of the bonds then, we have a double bond which means we have one sigma and one pi bond. And so the sigma bond down here, we had to know what the hybridization was. And here, those carbons are bonded to three other atoms and so it would be sp 2 carbons and also with the double bond sp 2. And then we also have a pi bond and pi bonds are made up of non-hybridized orbitals, our py or our px. And so those are the ones that make up the pi bond. In all the other variations, some you had two sigmas, that's not right, we have a sigma and a pi so most people figure that out. They picked the ones that had those categories for the most part. And then you had to pay attention to whether it was sp 2 or sp 3. And then here, this one, the pi bond is not made up of hybridized orbitals, it's made up of the atomic orbital leftover. So a lot to look at that particular problem, but this is really good practice for the exam, which is coming up a week from Monday. There's going to be lots of hybridization and today, we're going to post extra problems for the exam so you have, really, a whole week to start getting ready for this exam and to keep up with the new material. And so extra problems and an old exam are also going to be posted later today. All right. So let's just finish with hybridization now and this is good because this is all stuff that's going to be on the exam. And also, the instructions for the exam are attached to today's handouts and, of course, remember no makeup exams and clicker competition. So I'm not really going to go through anything in the instructions. It's very similar to last time so you can take a look at that and see on the material, the material starts with the periodic table, periodic trends, and goes through the material that I'm going to finish with partway through today's lecture. So at the end of hybridization, that's the end of exam two material. So we're going to finish our lecture notes from last time so pull those out. And then we're going to move on to thermodynamics. So once we hit thermodynamics, that's exam three material, so we're almost done with exam two material, a week in advance to get ready for the exam. So that's great. Lots of time to review exam two material and let's see if we can have an A average on this exam. That would make me really, really really, happy. I would wear my periodic table leggings again if we could get an A average on the exam. I'm just saying, I'd be very excited. OK. So we better finish up that material so that you can get started, get ready for this. So we talking about valence bond theory and hybridization and forming these hybrid orbitals. And valence bond theory is this idea that if you have a single electron in an orbital, it's available to form a bond and bonding happens when two atoms bring together single electrons and those pair up to form a bond. So we talked about electron promotion before and let's just review what that meant. So if you have an empty orbital, you can promote one of your electrons to that empty orbital. And so now we have four valence electrons after electron promotion so that we have more possibility of forming bonds. Now if you don't have an empty orbital, you can't promote your electron. If you do have an empty orbital, you can promote it more single electrons available for bonding. 
If you don't have an empty orbital, there's nothing to do with that. So that's the trick to electron promotion. All right. So now we have one electron in each of the four valence atomic orbitals that we have for carbon, but we're going to only hybridize two of them. We saw already last time that we can hybridize all four orbitals and have sp 3. We can hybridize just three of those orbitals and have sp 2. Now we're going to see that we can hybridize just two of those orbitals and have sp. So carbon is really amazing. It can do all three of these kinds of hybridization. That's why carbon based life forms are able to exist and do so much. So we're going to now hybridize our 2s and our two pz. Z is just special and so it gets to hybridize with the 2s leaving two of the other orbitals just by themselves. So we're going to form two hybrid orbitals. Again, if we hybridize two atomic orbitals, we're going to form two hybrid orbitals. And if we hybridized 2s and 2pz, we're going to get hybrid 2sp orbitals. And we'll have our 2px and our 2py just the same as always. All right. So we can think about this in terms of shapes, as well. So we have, again, our spherically symmetric s orbitals and our p orbitals and we have the three of our p orbitals that are the same shape they just differ in orientation in space. And so we're just going hybridize our 2pz and our 2s and so we'll have our kind of funny looking I think of them as turtle shaped or hybrid orbitals and then we also have our 2px and 2py orbitals the same as always. So what are we going to do with our two sp orbitals and our one 2px and one 2py? Well, we can form a pretty cool molecule with it. So we're going to form something that has a carbon-carbon triple bond. So this is C2H2. So now in cyan is the sp orbital, the hybrid orbital, that is formed on this carbon here. And then we have a 2px orbital here in the plane of the screen. And we have a 2py orbital coming out toward us. And, of course, our 2pz orbital had been hybridized with the 2s. So here we have this structure. We're going to bring in our other carbon, and the other carbon has the same situation going on. And we can form a bond between the two carbons with our sp orbitals. And we can form, also, with the sp hybrid orbitals, bonds to hydrogen. So we have two hydrogens, one over here and one over there. So what is the angle between these hydrogens here? Yeah. So that's 180 and, again, we have an example here of the molecule we're going to build that's going to have a triple bond. So now let's name those types of bonds or as sometimes in problem set, it will say describe the symmetry of the bond. And what it means by that is the following. It means that, that's either say name the type of bond or describe the symmetry. There's multiple ways to ask the question and this is the answer to those questions. So the bond that's formed, the first one that's formed, between the two carbons is a sigma bond and it's formed between the sp orbital. So C2sp, C2sp. That's the first one. But this is a triple bond so we have two more bonds to form. And this is where our atomic px and py orbitals will come in. So we're going to form the next bond, which is what? A sigma or pi? Pi bond and that can be between our x, px, orbitals, so pi C2px, C2px. And now, we have the 2py orbitals, as well, and that allows us to form our triple bond. So we're also going to have a bond pi C2py, C2py. So again, with the triple bond, we're going to have one sigma and two pi bonds. 
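Collecting the bonds of C2H2 just named into one list (a sketch; the labels follow the lecture's bond-naming convention, and the variable name is mine):

```python
# Bond inventory of acetylene, H-C#C-H, with sp-hybridized carbons.
c2h2_bonds = [
    ("sigma", "C2sp", "C2sp"),   # C-C sigma bond from the two sp hybrid orbitals
    ("pi",    "C2px", "C2px"),   # first pi bond, from the un-hybridized px orbitals
    ("pi",    "C2py", "C2py"),   # second pi bond, from the un-hybridized py orbitals
    ("sigma", "C2sp", "H1s"),    # one C-H bond
    ("sigma", "C2sp", "H1s"),    # the other C-H bond
]
# Geometry at each carbon: linear, 180 degrees.
```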
The sigma is formed from the hybrid orbitals and the pi bonds are formed by the 2px and 2py orbitals. So carbon, really impressive. Carbon can form all three of these types of hybrid orbitals. It can form molecules with single bonds, double bonds, and triple bonds. So let's just have a little cheat sheet to think about that. So again, this is for carbon hydrocarbon molecules, like we've looked at so far, that have two carbons in them. So let's look at carbon in C2H6, so that's over here. What is going to be the hybridization when you have a carbon that has a bond to another carbon and a bond to three hydrogens here? What hybridization? sp 3. That's right. And it's going to have what kind of a bond-- single, double, or triple? It's going to have a single bond and it's going to have tetrahedral geometry around both of the carbons. So both of these carbons are going to have tetrahedral geometry, which is not a blank in your notes, but what's the angle? 109.5, right. Thank you. So the carbons in C2H4 are going to be sp 2 hybridized. And what kind of a bond are they going to have between the two carbons? That will be a double bond. And what is the geometry of that? Right. Trigonal planar. And here, you have to pretend this is a double bond, my model kit didn't come with double bond possibilities and I have to hold it very carefully, but if I hold it very carefully-- oh-- the bonds are still there. You'll see that the angles are 120 and so this is trigonal planar geometry at each carbon. We didn't tape that one. You can see there's scotch tape all over the others. It was not a happy molecule. OK. So now, C2H2, what kind of hybridization? sp, that is our friend the triple bond and we're going to have linear geometry and 180. So both carbons have linear geometry. That works. Triple bonds are always much more stable, they don't fall apart as much. OK. So that's a cheat sheet for carbon. Now if you're thinking about nitrogen or oxygen, those often have lone pairs on them. Carbon likes to form all bonds, it doesn't care double, triple, single, whatever, but it doesn't really have a lot of lone pairs on it. But oxygen, nitrogen have lone pairs and whenever you have lone pairs, you have to worry about what the geometry is because the geometry gets named based on the atoms that you do see, not the lone pairs. So this cheat sheet works for carbon without lone pairs. If you have lone pairs, you've got to go back to your VSEPR and think about what the names of the geometries are. OK. So rules, and I posted this on Stellar for the problem set that was due today, and so very simple for determining hybridization. And this is the kind of equation that will not be on an equation sheet for an exam, you just need to know that. So in determining hybridization of an atom in a complex molecule, you're going to be thinking about the number of bonded atoms plus the number of lone pairs is going to be equal to the number of hybrid orbitals. So now, clicker question, what is the hybridization of an atom that has exactly two hybrid orbitals? All right. 10 seconds. Yes. Right. sp. So we can take a look at that: two hybrid orbitals are formed by one 2s orbital and one 2p orbital, and if you have two things bonded and no lone pairs, that's what you would get. Three hybrid orbitals would be sp 2 and four would be sp 3. So again, you're going to just be thinking in these problems about how many atoms are bonded to that central atom and how many lone pairs do you have.
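The counting rule just stated (number of bonded atoms plus number of lone pairs equals number of hybrid orbitals) is mechanical enough to write down as a small function. This is my own sketch, not part of the course materials, and the names are mine.

```python
# Hybridization from the lecture's rule:
# (number of bonded atoms) + (number of lone pairs) = number of hybrid orbitals.
HYBRIDIZATION = {2: "sp", 3: "sp2", 4: "sp3"}
GEOMETRY_NO_LONE_PAIRS = {2: ("linear", 180.0),
                          3: ("trigonal planar", 120.0),
                          4: ("tetrahedral", 109.5)}

def hybridization(bonded_atoms, lone_pairs=0):
    return HYBRIDIZATION[bonded_atoms + lone_pairs]

# Carbon cheat sheet from the lecture (carbon usually has no lone pairs):
print(hybridization(4))     # sp3 -> tetrahedral, 109.5      (each C in C2H6)
print(hybridization(3))     # sp2 -> trigonal planar, 120    (each C in C2H4)
print(hybridization(2))     # sp  -> linear, 180             (each C in C2H2)

# With lone pairs the hybridization still follows the count, but the named
# geometry comes from VSEPR, based only on the atoms you see:
print(hybridization(3, 1))  # sp3  (N in NH3, trigonal pyramidal)
print(hybridization(2, 2))  # sp3  (O in H2O, bent)
```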
And that's going to then let you figure out what your hybridization is. And we have one exception which is that if an atom has a single bond and it's terminal on the edge of the molecule, then we're not going hybridize it. So we can now take a look at an example of this and this is going to be another-- yeah, just keep your clickers out. We've got a whole bunch of clicker questions coming at you kind of in a row here. And if we have this molecule, it has a central carbon and three terminal atoms. Now help me figure out what kinds of bonds this will form. So which one of these has the correct bond types for this molecule? All right. Make a decision. Let's just take 10 more seconds. Interesting. I think some time I re-poll, but I think that we'll just kind of go over this one and then we'll-- do you want to go ahead and show the answer and then-- this is-- AUDIENCE: [SIDE CONVERSATIONS] CATHERINE DRENNAN: And if it wasn't a clicker competition, I might have you discuss it more and re-poll, but it's a competition. So let's go over it. So this one isn't in your notes so if you want to write it at the bottom of the page, we'll go over what the answer is. Hopefully there's a typo in there, but we'll see when we go through. All right. So let's take a look at this molecule. Hydrogen is terminal and single bonded, but we've already talked about hydrogen so we kind of knew that. Oxygen is terminal, but it's double bonded so we need to hybridize it. Cl is terminal and single bonded so we don't hybridize this and we don't hybridize hydrogen, never hybridized hydrogen. OK. So let's look at the kind of bonds that are formed. So we have a sigma bond, single bond, this carbon is C2sp 2. It's bonded to three different things and it has no lone pairs so that makes it three that there's three things so we have three hybrid orbitals, which is sp 2. Our hydrogen is just 1s, it's always just 1s. That's all it is. So let's look at this bond now. So we have a single bond between our carbon that is 2sp 2 and then, we also have this oxygen. We do hybridize it because it has a double bond and it has two sets of lone pairs and it's bonded to one atom, so it has three hybrid orbitals, so it's sp 2 just like the carbon. And then we have a pi bond and the pi bond is made up of atomic orbitals, either 2px or 2py. Chlorine is single bonded so we're not going to hybridize it because it is single bonded and it's terminal. So it's a single bond, it's from this carbon that's C2sp 2, we already saw that, and then the chlorine is going to be terminal and so it's Cl3pz and so that's a non-hybridized orbital. So good practice for the clicker. I think that one could help, but we're going to have more practice now. I threw in a bunch of extra problems so that one was extra and now, let's do the one that is in the notes from last time, which is vitamin C. So I'll give you another minute if everyone has that one down. OK. So let's look at vitamin C. So vitamin C is needed to form collagen in your body. Without enough vitamin C in your diet, you could be in trouble. So it doesn't happen too much anymore because there's vitamin supplements and all sorts of things, but often, vitamin C deficiency is associated with sailors who went out to sea and didn't have a healthy diet and they became deficient in vitamin C and got scurvy. And so then they had to figure out they had to eat oranges or other things that were rich in vitamin C. 
In terms of who should be concerned about vitamin C deficiency, us, primates, we don't make vitamin C, so we have to get it in our diet and also, Guinea pigs. Most other animals make it. I don't really know why-- maybe this is why Guinea pigs are called Guinea pigs, they're good for scurvy experiments because they don't make vitamin C. All right. So let's look at this vitamin C molecule and think about what type of molecule it is and this is a quicker question. So we have to remember back more material that's going to be on exam two. Does that look like a polar or non-polar molecule and what's true about polar and non-polar molecules? All right 10 more seconds. Great. So it is polar and it, therefore, water soluble and so you know that because if there's atoms in there that have differences of electronegativity of greater than 0.4, carbon and oxygen of a difference in electronegativity of greater than 0.4, oxygen hydrogen also, electronegativity differences greater than 0.4. So we have a lot of polar bonds and they're not canceling each other out. It's not a symmetric molecule so therefore, it would be a polar molecule and water soluble. OK. Great. So you're good on your polar covalent bonds which is also going to be on exam two. All right. So let's go back to hybridization and have a little more practice on that. So don't put your clickers away. Why don't you tell me the hybridization of carbon a labeled up here and in your notes. All right. 10 more seconds. All right. So we know what clicker questions are going to determine the winners. OK. So carbon a was sp 3 hybridized. So if we look at it over here, it has bonded to four things so there's four which makes it sp 3. OK. So let's just do the rest and you can yell these out. Carbon labeled b, what kind of hybridization for carbon b? AUDIENCE: Sp 3. CATHERINE DRENNAN: Sp 3. Carbon c? AUDIENCE: Sp 3. CATHERINE DRENNAN: Sp 3. Again, you just want to count how many bonds you have going on or lone pairs, but carbon doesn't usually like to have lone pairs. What about carbon d? AUDIENCE: sp 2. CATHERINE DRENNAN: Sp 2. Right. It only has-- if we look at that one over here, I'm supposed to point to this one-- so carbon d over here, it has three atoms that it's bound to. Carbon e? sp 2 and carbon f? AUDIENCE: sp 2. CATHERINE DRENNAN: Sp 2. Right. So now that we did that, we can use this information when we think about the bonds that are formed between these carbons and the other atoms. So let's look at bonding now. So if we look at carbon b, two hydrogen, that's going to be a sigma bond, and you told me that carbon b was sp 3 so we write that. So describe the symmetry around the bond, name the bond, C2sp 3, H1s, we do not hybridize hydrogen. So now, for the bond between b and a, again, a sigma bond. We already looked at the fact that carbon b is 2sp 3, carbon a was the same. Now if we look at the difference between b and c, b was C2sp 3 and then c is also the same. Remember to write the twos, remember to write the hybridization, remember to write the element, remember to write sigma for the single bond. Grading these questions on the exam is not fun. You've got to remember to have all those things in there so if you get them all in there, it makes everyone very happy. OK. Now let's look at carbon b to the oxygen. It's also a single bond, so sigma. We know that carbon b is C2sp 3. The oxygen here is also going to be sp 3 because it has two bonded atoms and two sets of lone pairs. OK. One more clicker. All right. 10 more seconds. Great. Yup. 
So that is correct and if we take a look at that over here, we have carbon d, it has bonded to three things so it's sp 2 and the oxygen is bonded to two atoms and two lone pairs so it's sp 3. We can keep going and finish up between d and c now, we have-- oops, sorry, d and c up here, we have d which is 2sp 2 bonded to three things, c has bonded to four things, it's C2sp 3. And then finally, d to e, we have two bonds. We have a sigma bond so that's between our two-- these two carbons here are hybridized orbitals and again, it's 2sp 2, 2sp 2. And it's a double bond so we have one sigma and one pi bond and the pi bond is between non-hybridized orbitals, so it's C2py, C2py or you could have used x, I don't really care about that. All right. Good practice. I think you're getting the hang of this. Again, there will be more practice problems on hybridization posted today to get you ready for the exam and also to figure out these bonds. Once you get the hang of this, it's really pretty trivial and good points for an exam.
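As a compact summary of the counting rule used throughout this example, here is a hypothetical helper (not part of the course materials): the hybridization label follows from the number of bonded atoms plus lone pairs, with hydrogen and terminal single-bonded atoms left unhybridized.

```python
# Sketch of the counting rule from lecture (hypothetical helper, not part of
# the course materials): hybrid orbitals = bonded atoms + lone pairs.
def hybridization(bonded_atoms, lone_pairs, element="C", terminal=False, single_bonded=False):
    """Return a label like 'sp3', or None when the atom is not hybridized."""
    if element == "H":
        return None            # hydrogen is never hybridized; it only uses its 1s orbital
    if terminal and single_bonded:
        return None            # terminal, single-bonded atoms (like the Cl here) are not hybridized
    return {2: "sp", 3: "sp2", 4: "sp3"}.get(bonded_atoms + lone_pairs)

# Examples worked in lecture:
print(hybridization(4, 0))                               # carbon a: bonded to 4 things -> sp3
print(hybridization(3, 0))                               # carbon d: bonded to 3 things -> sp2
print(hybridization(1, 2, element="O"))                  # double-bonded terminal O: 3 orbitals -> sp2
print(hybridization(2, 2, element="O"))                  # single-bonded O with 2 lone pairs -> sp3
print(hybridization(1, 3, element="Cl", terminal=True, single_bonded=True))  # Cl -> None
```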
https://ocw.mit.edu/courses/8-01sc-classical-mechanics-fall-2016/8.01sc-fall-2016.zip
How do we solve problems involving massive pulleys using Newton's laws? As a simple example, let's look at this problem, consisting of a block of mass m1 hanging from a massive pulley that has a moment of inertia I and radius r. We'll find the acceleration of the block, a1, and the angular acceleration of the massive pulley, alpha. It is critical to remember that the first step in solving these problems is to define the positive x and y directions and the positive direction of rotation. In two-dimensional problems, you're free to define any direction to be positive. Here, we'll set our positive x and positive y like this. Once we pick x and y, the direction of positive rotation is now also defined by the right-hand rule. You'll see shortly that defining positive directions at the beginning will save you from a negative sign nightmare later on. Now let's break down Newton's laws for the different parts of the system. First, for the block, we will write down the linear version of Newton's second law. For the sum of forces, we have gravity pointing down and tension pointing up. According to the convention we just defined, T1 is positive and m1g is negative. Notice that since the block is accelerating, T1 minus m1g is not 0 but is equal to m1a1. Notice that here we've set the signs for tension and weight, but we don't yet know which way the block is accelerating. So we just start by writing m1a1. And if, in the end, we get that a1 is negative, we'll know that it's actually accelerating in the negative direction. Now let's look at the pulley. Newton's second law in its rotational form is tau is equal to I alpha. For a normal pulley, the string is always tangential to the side of the pulley, so r and F are perpendicular, and that means the torque is just T1 times r. What's the sign of this term? Well, according to the positive direction that we defined, this torque is positive. Again, we'll leave the sign of the I alpha term to be positive. And alpha will turn out to be either positive or negative. Finally, we need to connect these two equations. Because the block is connected to the pulley using an ideal, taut, inextensible rope, a1 is going to be related to alpha. We just need to figure out exactly how they're related. In other words, we need to write down the constraint condition. Let's say that the block hypothetically is moving upwards. In this case, what's happening to the pulley? It must be spinning clockwise to pull the rope up as the block goes up, so it has a clockwise angular velocity, which, according to our convention, is negative. In fact, if the pulley rotates a full turn in the clockwise negative direction, the angle changes by negative 2 pi, and 2 pi times r of the rope will be pulled up. So we can write that delta y is equal to negative r delta theta or, taking some derivatives, a1 is equal to negative r times alpha. After setting the sign conventions, writing Newton's laws for different parts of the system, and then writing down the constraint condition, we have three equations and three unknowns for which we can now solve.
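As a check on the algebra (not part of the video), here is a minimal sketch, assuming SymPy is available, that solves the three equations for T1, a1, and alpha.

```python
# Minimal sketch (assuming SymPy is available) solving the three equations from the video:
#   block:      T1 - m1*g = m1*a1
#   pulley:     T1*r      = I*alpha
#   constraint: a1        = -r*alpha
import sympy as sp

m1, g, r, I = sp.symbols('m1 g r I', positive=True)
T1, a1, alpha = sp.symbols('T1 a1 alpha')

eqs = [sp.Eq(T1 - m1*g, m1*a1),
       sp.Eq(T1*r, I*alpha),
       sp.Eq(a1, -r*alpha)]

sol = sp.solve(eqs, [T1, a1, alpha], dict=True)[0]
print(sp.simplify(sol[a1]))      # -g*m1*r**2/(I + m1*r**2): the block accelerates downward, as expected
print(sp.simplify(sol[alpha]))   #  g*m1*r/(I + m1*r**2): positive, i.e. counterclockwise by this convention
print(sp.simplify(sol[T1]))      #  I*g*m1/(I + m1*r**2): less than m1*g, since the block is falling
```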
https://ocw.mit.edu/courses/8-01sc-classical-mechanics-fall-2016/8.01sc-fall-2016.zip
Now we'd like to analyze in more depth our result that for a system of particles-- so let's indicate our system. We had particle 1. We have our jth particle. And we have a particle N. So here's our system of particles where the total force caused the momentum of the system of particles to change. Now, I'd like to examine that concept of the total force. Before we said that our total force on the jth particle-- we just wrote it like this. And I'm going to put a little t up here for the moment. Because when we examine what force we mean here-- and I also want to put a little boundary around our system. And let's now consider another particle internal to the system. And let's try to identify the types of forces on the jth particle. We can really have two types of forces here. Our first force can be an interaction between these two particles. So what I'll write is the force on the jth particle due to the interaction between the k and the jth particle. And I'm going to put a little sign up here. I'm going to write this internal. What do I mean by internal? This is a force strictly between the internal particles in the system. Now, of course, we know that there must be a force on the kth particle due to the interaction between the jth and the kth particle. And we can call that internal. So what we have here is that we can divide-- there can still be other forces acting on the jth particle. And we'll do a decomposition like this. We'll say that the total force on the jth particle can come from some external forces. There could be an object outside our system. If these were interacting gravitationally, there could be a planet outside here. And this could be a moon. And our system is just the moons. That would be an external gravitational force, plus the total internal forces. So I'm going to keep this same color. Internal on the jth particle. Now how do we write this total internal force? Well, we're interested in the force on the jth particle. But the internal forces can come from all of the other particles in the system. So what we're looking at here is for a sum over all of the possible interactions where the other particles, k, go from 1 to N. And we have to be very careful here that in this sum k cannot be equal to j. Now this sum-- again, because it's a little bit tricky to understand-- is the internal force on the jth particle. Here's the kth one. But this could be a sum. I'll just draw one here. This is the internal force on the jth particle due to particle number 1. And so we're adding as k goes from 1 to N all of these internal forces. But we're excluding the case when k equals j, because that would be a force of an object on itself. And this quantity here, we can write as the total internal force on the jth particle. So in summary, we see that the total force on the jth particle is equal to the total external forces. I didn't say total there. I'm assuming there could be many different types of internal forces plus the total internal force. A little bit later on, we can drop the T's for simplicity of notation. But this is our big idea, that a force on the jth particle, external plus internal. And now when we look at this sum, and we want to now apply our main idea, we have that the force, which we're writing as-- let's explore this. Our total force is the sum of the forces on the jth particle. And we've now done this decomposition. I'm going to drop total. So it's the sum of the external forces on the jth particle. j goes from 1 to N. And here, we have a sum of the internal forces. 
So we have our sum j. It goes from 1 to N of the internal forces on the jth particle. Now we want to apply Newton's second law. And the concept is very straightforward. But the mathematical expression can be a little bit messy. We know by Newton's second law that the sum of a pair of internal forces is zero-- third law, by Newton's third law. So what we're saying here is, as an example, for Newton's third law-- let's just focus on this particular pair-- that F internal kj plus F internal jk is zero. So this is the statement that internal forces cancel in pairs. And so when I look at this total internal force, which is the sum of all of these pairs of internal forces, I can see that the total internal force has to be zero. So internal force cancel in pairs. Now here, we can see it another way if we want to look at this notation. We took the sum. j goes from 1 to N of F internal j. Now we use our definition for F internal. This where things get a little bit messy. k goes from 1 to N. k not equal to j. j goes from 1 to N. F internal kj. This looks terribly messy. But what we're saying is this sum is just a sum of pairs. And every single pair in this adds to 0. So what we have for our statement now is that the total force is the sum of the external forces, plus the sum of the internal forces which we've now said that cancels in pairs. So let's rewrite that as the total force-- now instead of writing this sum, let's write it as the sum of the external forces. And the internal forces cancel in pairs. And so this is now our force on our system. It's only the external force. And now we can recast our Newton's second law for a system of particles with the following statement that the external force causes the momentum of the system to change. And this becomes our expression for Newton's second law when we apply it to a system of particles where the beauty of this idea is that no matter how complicated the interaction is inside the system, all of those interactive pairs sum to zero. And so only thing that matters is the external force in terms of changing the momentum of the system. And now we'll look at some applications of that.
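To see the pairwise cancellation concretely, here is a small numerical sketch (made-up masses and positions, not from the lecture) in which every internal pair obeys Newton's third law, so the internal forces sum to zero and only the external force can change the total momentum.

```python
# Numerical sketch (hypothetical gravitational-like pair forces) showing that
# internal forces, which obey Newton's third law F_kj = -F_jk, sum to zero.
import numpy as np

rng = np.random.default_rng(0)
N = 5
positions = rng.normal(size=(N, 2))
masses = rng.uniform(1.0, 2.0, size=N)

def internal_force(j, k):
    """Force on particle j due to particle k (an attractive 1/r^2 pair force)."""
    d = positions[k] - positions[j]
    r = np.linalg.norm(d)
    return masses[j] * masses[k] * d / r**3

total_internal = sum(internal_force(j, k)
                     for j in range(N) for k in range(N) if k != j)
print(np.allclose(total_internal, 0.0))   # True: the pairwise sum cancels
```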
https://ocw.mit.edu/courses/7-016-introductory-biology-fall-2018/7.016-fall-2018.zip
ADAM MARTIN: And so I just want to say a couple sentences about DNA sequencing, just to finish that up. And so you'll remember this slide from last lecture. And remember, the way this Sanger technique works is to set up four different reactions where each reaction has a different one of these dideoxynucleotides. OK, so there's four reactions, each with different dideoxy NTP. And I brought along a gel that I ran a while ago, which is basically-- it's from sequencing gel, and you can-- I'll pass this around so you can take a look at it. So the four different lanes for each sample are the different dideoxynucleotide reactions. And what I want you to notice as that's passing around and you're looking at it is that the different reactions with the different dideoxynucleotides give different patterns of DNA fragment lengths. So there are different patterns of fragment lengths. And the different patterns are based on the fact-- this is based on the sequence, the sequence of the template, OK? And so if we look at the example up here, what you'll see is that in this banding pattern for dideoxy TTP, you see that there's a really short fragment at the bottom there, and so that fragment indicates that there must be an A in the template sequence. The next fragment up would be this one in the dideoxy GTP lane, and that indicates that one nucleotide beyond this A is a C position, and so on and so forth, such that you can sort of order the fragments and see which reaction has a fragment and then read off a DNA sequence. OK, so conceptually, that's how you would read off the sequence of a given strand of DNA, OK? So you might be wondering, if now, we just read off sequence as a series of colors, why am I even introducing this technique? And the reason is because I think it's important for you as potentially future scientists to know that when you're faced with a problem, how you might discover something new. And I see the Sanger method of DNA sequencing as a really clever and elegant way in which Fred Sanger solved the problem of DNA sequencing, and while we don't necessarily do it that way today, it still illustrates a concept that's important, the concept of chain termination, and I think there is something to be learned from this older technique, even if it's not exactly how we sequence DNA today. So for today's lecture, we're going to continue on our quest to basically clone a gene that's responsible for a disease. And so we started this in the last lecture. And I guess one thing we would want to start with is a disease, so I'm going to introduce to you now a disease called aniridia. And in order to clone the gene for a disease, it has to be a heritable disease, in this case, because we're going to use linkage analysis to identify it. So aniridia is a disease that's an eye disease in humans. It's a rare eye disease. So I want to show you a bit of an example of this eye disease. The way this disease manifests itself is it's basically the affected individual has an eye that is lacking an iris. So I'm going to show you what this looks like. If you're squeamish or don't like weird eyes and you don't want to look, you can look away. But I will show you affected phenotype in 3, 2, 1, OK, everyone looking who wants to see weird eyes. OK, good. So that is a individual that has aniridia, and also this one. So you see there's no clear iris in these eyes. And this disease is associated with other abnormalities of the eye that severely impair vision. 
And this is an inherited disease, and this is a pedigree from a family or series of families where the disease is propagating. And so anyone have a suggestion as to what mode of inheritance this is? Anyone want to rule a mode out? Rachel, you have an idea? AUDIENCE: I was going to say X-linked dominant, but [INAUDIBLE] ADAM MARTIN: OK, so let's take X-linked dominant. So if it was X-linked dominant, then this male would have an X chromosome with the dominant allele of the disease and should only pass it to his females. So I don't think that it would necessarily be X-linked dominant. Anyone else have an idea? Yeah, Georgia? AUDIENCE: Autosomal. ADAM MARTIN: Autosomal dominant. I like autosomal dominant. So in this case, you see you have an individual with the disease and they marry into a family with no history of the disease. One thing I'll point out, for many of these diseases, they're extremely rare, so if you see sort of a family tree where there's no instance of the disease, if it's a rare disease, it's likely that these individuals are not carriers. And so in this case, if you assume that this person doesn't have any form of the-- isn't a carrier for the disease, then this cross here resulting in about half of the individuals affected with the disease, that would be a characteristic of an autosomal dominant disease. So everyone understand my logic? Yes, Carlos? AUDIENCE: What are-- why is two and that looks like three on the slide, why are they crossed out? ADAM MARTIN: I think they're deceased. Yes. OK, so let's say you have a pedigree. You have pedigrees, you're able to try to link this marker to-- or the disease phenotype with various molecular markers, which we discussed in last week's lectures, then you're on the way to performing a process which is known as positional gene cloning. And what positional gene cloning is is it's basically cloning a gene and a allele that's responsible for a disease based on its position in the genome, it's position in a particular chromosomal region. So it's basically cloning a gene based on its chromosomal position or its chromosome position. And the first step of positional gene cloning would be to establish maybe what chromosome it's on. And a straightforward way to do this, as we've basically been discussing almost from when I started lecturing, is to create some sort of linkage map or do linkage mapping to identify, in the case of humans, molecular markers that this disease allele is linked to. And remember, in last week's lecture, we talked about a number of different polymorphisms that are present in the human genome that we can use to establish linkage with a given phenotype. In this case, it's a human disease. And we talked about this example for a microsatellite marker. And in this case, we talked through this example of how this dominant allele, P, is linked to this microsatellite allele m double prime, because if you look at the pedigree here, all of the affected individuals here contain this m double prime sized fragment for this microsatellite. Another thing to notice here is you can see that this couple has been faithful to each other, because basically, each of the children have an allele from the father and an allele from the mother. So you can see that type of-- you can see that using this type of molecular marker as well. OK, so you establish linkage. So linkage mapping establishes the chromosome position of a given allele and the gene. 
And this chromosome position sort of gets maybe in the right country, but you still have a long way before you get to the specific street address. And so you have to then sort of narrow it in to identify a smaller region of the chromosome that could possibly contain this gene. And so what you would do is go from this linkage map, where you maybe identify the position of this gene within a couple map units, to this next resolution of map called a physical map, OK? So we go from the linkage position to the physical map of the chromosome. And the physical map, as the name implies, is when you have physical pieces of DNA that are present in this region of the chromosome. So the physical map means you have cloned, so recombinant pieces of DNA, cloned pieces of DNA which encompass a given chromosome region. So these are encompassing a chromosome region. OK, so how would you get a piece of DNA that sort of is in this region? How would you start? How would you start fishing for that DNA? So you've gone through the process of linkage, you've identified sort of a polymorphism that is linked to the disease allele. How would you go from there to getting a physical piece of DNA that is present in that region of the chromosome? So let's think back to-- Jeremy, did you have an idea? AUDIENCE: Start by using PCR to just amplify that chunk. [INAUDIBLE] ADAM MARTIN: And what primers, I guess, would you use for the PCR? AUDIENCE: Depending on which chunk you're trying to get, you'd use [INAUDIBLE] ADAM MARTIN: OK, so Jeremy is saying if you knew the sequence, and I guess if you're doing this microsatellite analysis, you had primers that recognize a sequence at a given genomic position, so you actually know something about the sequence because of this polymorphism, so you can use that knowledge to then look for this sequence. And you could even look for the microsatellite in a DNA library. OK, so you have cloned pieces of DNA, and you're going to start with-- I'm going to swap this. Your starting position could be one of these polymorphisms in the sequence around it, which you already know. So let's say you had this microsatellite marker. You could then-- what I'm drawing here is a piece of genomic DNA. So this is genomic DNA. I'm just drawing the insert. This would be recombinant DNA. It would be present in some vector or plasmid. But if you can identify the sequence that contains this microsatellite marker, then you would have the microsatellite, but also the surrounding DNA, OK? So that sort of anchors you at a given position. Now, you don't know if your gene is in that piece of DNA, but you know that it's linked, and so it should be around that piece of DNA somewhere. And so it's unlikely your gene is going to be on this small piece of DNA that's cloned. This is probably just a few kb, and you could still be very far away from this, but that serves as a starting point from which you can go from to get more and more pieces of DNA such that eventually, you have a bunch of pieces of DNA that are going to span the entire region. So the way you identify other pieces of DNA is you could start with a piece of DNA maybe at the end of this insert and look for other inserts that are not identical to this piece that also contain this piece here. So that might get you a piece that's overlapping, but extends farther than your initial piece. So now you've moved slightly farther away from your starting point, which is this starting polymorphism. 
Then you could choose maybe another DNA sequence here and look for a piece of DNA that, again, is extending a bit farther out. And so you can see how iteratively, you can get farther and farther away from this starting point that you know your gene is linked to. And this process of going sort piece by piece and clone by clone away from a starting position is known as a chromosome walk. And you can do this bidirectionally. So you could also start with a sequence of DNA here and look for a clone that goes the other way. And you can see on my slide up there, you can see that in this case, they've taken a one map unit region of the chromosome and they're illustrating physical pieces of DNA that are overlapping that encompass this entire region. So this could be much bigger than the amount of DNA that would fit in one of these clones in the bacteria, but by sort of identify overlapping clones, you get the entire region. And what this is called here, because these pieces of DNA are contiguous with each other, this is known as a contig. Yeah, Jeremy? AUDIENCE: So would how you get the-- once you find one of those pieces, how do you get the primer for the end of it to start? Do you actually sequence each of these pieces of DNA? ADAM MARTIN: You could sequence it, or you could use a technique that I'm going to talk about at the end of my lecture, which I'll come back to. So nowadays, you'd probably just sequence it and then maybe look for that in another clone. But even before we could sequence DNA in entire genomes, you could do that type of experiment by using a technique called hybridization, which I'll come back to. OK, so the question in this chromosome walk then becomes, how do you know when to stop? Because you could do this for a very long time, but it might not be useful. So you have to know when to stop, and you need to know when you arrive at the gene that you're interested in, which would be the gene that is responsible for the disease. So another way to phrase this question is, how do you know when you have an interesting gene on one of these fragments? So let's say this is an interesting gene here. How do you identify interesting genes? So now, let's talk about identifying interesting genes. Anyone have an idea for how they would-- what criteria they would use to define a gene as being interesting here? I mean, one could say that all genes are interesting. If it's a gene, it's interesting, right? How might we define whether or not there's a gene even there? It could be-- there could be a gene-- how would you define a gene? Can someone define for me a gene? Yeah, Miles? Is it Miles? No? Malik, OK. AUDIENCE: [INAUDIBLE] that would create a starting and stopping point. So like [INAUDIBLE] ADAM MARTIN: So you'd look for a piece of DNA that has a start and a stop codon? So you'd look for an open reading frame, basically. Yeah. You could look for an open reading frame. And so I totally agree with Malik there. And another criteria you could use is if it's encoding a protein, at some point, it also must have been transcribed as an mRNA. And there are some genes that are transcribed as RNA but don't make a protein, and they're often involved in coding or in regulation of gene expression. So I'm going to-- I'm going to say, is it transcribed? So is there some transcript that's made? And specifically, is it transcribed in the tissue that we're interested in? So if we're talking aniridia, we might be looking for genes that are being expressed or transcribed specifically in eyes. 
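A quick aside before returning to the question of which genes are interesting: the chromosome walk just described can be caricatured in a few lines of code. The sketch below uses toy sequences and a hypothetical helper, so it only illustrates the iteration, not a real assembly algorithm.

```python
# Toy sketch of a chromosome walk (hypothetical clones and helper names):
# starting from an anchor clone containing the linked marker, repeatedly find a
# different clone whose start overlaps the end of the current contig and extend.
def walk(anchor, clones, min_overlap=4):
    contig = anchor
    used = {anchor}
    extended = True
    while extended:
        extended = False
        for clone in clones:
            if clone in used:
                continue
            for k in range(len(clone), min_overlap - 1, -1):
                if contig.endswith(clone[:k]) and len(clone) > k:
                    contig += clone[k:]        # extend the contig past the old end
                    used.add(clone)
                    extended = True
                    break
            if extended:
                break
    return contig

anchor = "GATTACAGT"                             # clone containing the microsatellite marker
clones = ["ACAGTCCGA", "CCGATTTGC", "AAAAGGGG"]  # library inserts; the last one never overlaps
print(walk(anchor, clones))                      # GATTACAGTCCGATTTGC
```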
You're looking for something that might be expressed in the eye. If it's not expressed in the eye, that gene's going to be much less interesting to you because the phenotype of aniridia is clearly in the eye. OK, what might be some other criteria here? Well, one criteria might be, is there a conserved gene that has an interesting function that's maybe similar to the disease related phenotype? So is there a conserved gene with an interesting function? And to take this example of aniridia, let's say you're doing this chromosome walk, and you identify a gene, maybe you sequence part of this clone, you get a string of sequence, and you realize that the sequence that you get is related to a gene from a model organism, and maybe that gene is called eyeless. If you've identified a region of sequence in a human, in the human genome that's mapping to an eye disease gene, and you find out that in that region, there is a conserved gene called eyeless, might be a very interesting gene for you. So eyeless is a gene. So here's a normal fly. You see it has that bright red eye. The eyeless gene, when mutated, results in a fly that now just doesn't have a white eye, but has no eye altogether. So it turns out that the aniridia gene is the homolog of the eyeless gene in flies. That's not how it was identified initially, but nowadays, there's a lot of information in model organisms. And so if you're sort of trying to identify a gene, and you see that there's a gene in the neighborhood you're looking at with a function that's related to a gene like eyeless, which has a clear sort of analogy in terms of phenotypes, then that's going to increase your interest in that gene. So I'm going to come back to this point here, which is how do we determine whether a piece of DNA that's on one of these inserts that we're getting as we walk across the chromosome, how do we know whether it is transcribed or not? And to get at this, I'm going to introduce you to a concept which is important in and of itself, which is the idea of cDNA. So cDNA. And specifically, I'm going to show you how one would make a cDNA library, which is basically a library of different cDNAs. And so what cDNA is, as shown up there on my slide, a cDNA is complementary DNA. It's complementary DNA, meaning that is the complement of an mRNA transcript. This DNA is the complement of an RNA or mRNA transcript. One thing to watch out for is it's not complimentary DNA. So this is MIT. This is a no compliment zone, so I don't want to see any complimentary DNA. All right, so let's think about complementary DNA. So remember, we've talked about the central dogma and how DNA encodes for RNA, which encodes for protein. And so the information flows from DNA through RNA to protein. But there are some specialized cases in biology where this information flow is reversed. So there can be a reverse of information flow where information flows from RNA to DNA. OK, so that's pretty cool. Where does that happen? Well, there are viruses, such as retroviruses, one example of a retrovirus is HIV, and the virus life-- the virus genome is a single-stranded RNA molecule, and the life cycle of the virus is that inserts into the host-- the host genome, which is double-stranded DNA. For a retrovirus to do that, it needs to take its RNA genome and make double-stranded DNA in order for it to insert. So this is an example in biology, which is basically breaking the rules that we talked to you about earlier in the semester. 
Also, there are retrotransposons which do a similar process, going from an RNA molecule to double-stranded DNA. So this is a specialized case, and it's interesting, and we can take advantage of it to basically clone and identify mRNA transcripts. OK, so I'm going to tell you how to make complementary DNA, and I'll go through a series of steps. The first step is we want to make complementary DNA of mRNA, so we need a way to purify the mRNA. So anyone have any idea how to purify mRNA? First, we could maybe draw an RNA molecule here. What are some salient features of mature mRNA? Yeah, Carlos? AUDIENCE: It'll have the five-prime cap [INAUDIBLE] phosphate. ADAM MARTIN: Yeah, it'll have a five-prime cap. Anything else? Jeremy? AUDIENCE: Poly-A tail. ADAM MARTIN: It'll have a five-prime cap and a poly-A tail. I'm going to take advantage mostly of the poly-A tail here. So here, we have a poly-A tail. OK, how might we use that poly-A tail to purify mRNA? Natalie? AUDIENCE: Well, you can add a [INAUDIBLE] because you know they're [INAUDIBLE] ADAM MARTIN: Mm-hmm. What sequence would you use? AUDIENCE: [INAUDIBLE] ADAM MARTIN: Yes. So Natalie has suggested using poly T, which she said would stick to this poly A tail because of base pair hybridization, OK? So let's say we have a bead or some type of resin with dTs hanging off of it. So I'll draw a few of them, but you'd have maybe a lot of them sticking off, OK? So you have a bead with pieces of DNA, all of which are poly dT hanging off of it. And then these poly dTs, if you add cytoplasm from cells, the mRNA in that cytoplasm is going to stick to this poly dT bead, and it will stick with a higher affinity than other things that are non specifically sticking to the beads, and you can wash these beads with buffer and salt to get rid of everything that's non-specifically sticking to the bead, and then you're left with just a bead that's enriched with mRNA, which is what was specifically sticking to this, OK? So you could purify-- you're purifying the mRNA based on its affinity for a poly dT, OK? So then you're going to have enrichment of mRNA in your sample. And so then once you have your RNA, you're going to want to somehow go from RNA to DNA, OK? So the next step will involve somehow going from RNA to DNA. So let's draw our piece of RNA here. Here's our RNA. It has a poly A tail so it's mRNA. There is 5 prime. OK, so now we need to take advantage of a trick. We can still take advantage of dT because we can use this as a primer because polymerase usually requires some primer and a three prime hydroxyl in order to extend. Now, can we use DNA polymerase to extend this primer? Jeremy is shaking his head no. Why? AUDIENCE: Because DNA [INAUDIBLE] ADAM MARTIN: Exactly. So what Jeremy is saying is DNA polymerase is a DNA dependent DNA polymerase, OK? DNA polymerase can only use this if this is DNA here, OK? So we need a different type of enzyme, essentially, in order to make DNA from RNA, and luckily, molecular biologists-- actually one of whom was here at MIT-- discovered this type of enzyme, and it's called reverse transcriptase. Reverse transcriptase. This is an enzyme that's encoded by retroviruses in order to make double stranded DNA from RNA, and that allows the retrovirus to insert into the host genome, OK? And what reverse transcriptase is is it's an RNA dependent DNA polymerase, OK? So it takes RNA as its substrate, and then it synthesizes DNA on the opposite strand, OK? So this is an RNA dependent DNA polymerase. 
OK, so if you add reverse transcriptase to mRNAs that have these dT primers, then what you get is a new strand, which is DNA here. This is the strand of DNA. And then you have a strand of RNA opposite it, OK? So at this step, you have a DNA-RNA hybrid. So this is a DNA-RNA hybrid. Let's see. Reveal some more of this. This is the process which I'm basically outlining on the board. So then you want double stranded DNA, so you don't want this strand of RNA that's down here, so you have to get rid of it. So you would degrade the RNA, and this is done using another enzymatic activity, which is derived from reverse transcriptase, which is termed RNase H activity. So you can add an enzyme, RNase H, and RNase H takes these DNA-RNA hybrids and degrades the RNA part of them, OK? So this is going to degrade the RNA strand. And if you degrade the RNA strand, then you're left with a single strand of DNA. So you have a single strand of DNA here, and now what you need to do is to synthesize the second strand of DNA. So you need second strand synthesis. And so you need, again, a primer in order to prime the synthesis here. So there are a variety of ways to do this. You can add some type of hairpin, which is five prime here and three prime here, and then you can use either DNA polymerase or reverse transcriptase, which also can be a DNA dependent DNA polymerase, to synthesize this strand here, OK? So again, you add polymerase, and now you've gone and you've generated double stranded DNA, OK? So everyone see how we've gone from an mRNA transcript, and we've done the reverse of everything we just told you in the first half of the course because we've gone from RNA and we've made DNA, OK? But this will be really useful because now we have a stable piece of DNA that we can clone into a plasmid and we have a record of this transcript being present in our sample, and we can propagate that on and on, so we've cloned it, OK? All right, what's going to be special about this piece of DNA versus a piece of genomic DNA? Natalie? AUDIENCE: [INAUDIBLE] ADAM MARTIN: Yes, so Natalie is suggesting that it doesn't have introns, and that's totally right. So this is not like genomic DNA, and what Natalie said is because mRNA is processed, the introns are spliced out, such that the mature mRNA only has the exons, and so this piece of complementary DNA is going to have no introns. How else is it different? Yeah, Jeremy? AUDIENCE: It's not going to have promoters. ADAM MARTIN: It's not going to have a promoter. Yes, Carmen? AUDIENCE: It doesn't have [INAUDIBLE] ADAM MARTIN: You might see a poly A and T sequence in the cDNA. Yes, that's true. OK, so you might have poly A, poly T. I'm going to focus on the other part from-- there's going to be no promoter, enhancer, regulatory sequences. Basically, it's got no sequence that's not transcribed, right? The cDNA is only going to have the part of the gene that was physically transcribed by the RNA polymerase originally. OK, so no non-transcribed regions. No non-transcribed regions, and Carmen's absolutely right. You will also have possibly a poly A or poly T sequence. OK, so when you get these cDNAs, you might have-- you have more than one mRNA in a sample like a cytoplasmic extract, so you're going to prime-- you're going to make multiple cDNAs, and different cDNAs will reflect different transcripts that are present in your sample, OK?
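In silico, the two synthesis steps just described amount to taking reverse complements. Here is a minimal sketch, assuming a simple string representation of the transcript (a toy sequence, not from the lecture).

```python
# Minimal sketch of cDNA synthesis on a string representation of an mRNA:
# first-strand synthesis writes the DNA complement of the transcript (what
# reverse transcriptase does off an oligo-dT primer), and second-strand
# synthesis restores a double-stranded DNA copy of the transcribed sequence.
COMPLEMENT = {"A": "T", "T": "A", "U": "A", "G": "C", "C": "G"}

def reverse_complement_to_dna(strand):
    """DNA strand complementary to the input (RNA or DNA), written 5'->3'."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

mrna = "AUGGCCAUUGAAUAG" + "A" * 8                       # toy mature transcript with a poly-A tail
first_strand = reverse_complement_to_dna(mrna)            # first-strand cDNA
second_strand = reverse_complement_to_dna(first_strand)   # second-strand synthesis restores the sense sequence
print(first_strand)    # TTTTTTTTCTATTCAATGGCCAT
print(second_strand)   # ATGGCCATTGAATAGAAAAAAAA  (the transcript sequence, with T in place of U)
```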
So you could have one clone that's one gene, another clone that's a different gene, and another clone that's another gene, and you could have thousands of clones of these different cDNAs. What's going to be special about what types of genes you're going to get for, I guess, different tissues? Are they going to be the same or not? Yeah, Carlos? AUDIENCE: [INAUDIBLE] ADAM MARTIN: Exactly. You're not going to see-- if you've prepared a tissue and there is no gene being-- if one gene was not expressed or transcribed in that tissue, you will not get a cDNA for that particular gene in your library, OK? So the representation of genes-- the representation of genes in a cDNA library is totally dependent on what genes are being expressed, OK? So this representation is going to be proportional to the expression level, and the more genes-- the more a gene is expressed in a given tissue, the more copies of cDNA for that gene you would see in the library, OK? So there's really a proportionality between the number of clones in a library and the expression level of a gene, where in the most extreme case, if this gene is not expressed at all, you're not going to see it represented at all in the cDNA library, OK? And then a corollary to this statement is that if you make cDNA libraries from different cell types or different tissue types, the cDNA libraries are going to be different between those different types of sources of mRNA, OK? So in other words, different tissues give you different cDNA. OK, so there is the process. So I went through most of the slide. Yes, Miles? AUDIENCE: Is this a way you can determine what gene sequences are expressed in all cells? Because in certain mRNA strands across all tissue samples, those are basic cell functions and expressed in a [INAUDIBLE] organism? ADAM MARTIN: So you're asking, if you grind up like an entire organism and if you get a cDNA from that library, could you tell if it's expressed in all different cell types? Even if you have one cell type that expresses a gene, if you grind up the entire organism, then you're going to have some mRNA that represents that gene. So I don't think it would be as effective a measure to determine the ubiquity of expression of a given gene, but in just a minute, I'm going to give you a tool that would allow you to answer the exact question that you're asking, OK? Any other questions about the cDNA library? OK. So I just wanted to come back to this example I gave on the identification of the human CDK gene. So remember, we started with yeast that were mutant. They had temperature-sensitive mutations, and we transformed these mutants with a library, but I didn't really tell you what the library was. It was in fact a cDNA library from humans that was transformed into yeast, OK? And that's because yeast genes-- for the most part, they don't have a lot of introns, and so the yeast machinery is not able to splice out the human introns in human genes, OK? And so this was done with a human cDNA library, which then encoded-- one of which encoded the human CDK gene, and that allowed Paul Nurse to discover the piece of DNA that encoded for the human CDK, OK? So I just wanted to kind of retroactively go back and sort of tell you how that experiment was done.
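Looping back to the point about library representation: a toy simulation (made-up expression levels, not real data) shows the proportionality between a gene's expression level and the number of clones recovered in a cDNA library.

```python
# Toy sketch: clones in a cDNA library are sampled roughly in proportion to how
# highly each gene is expressed in the source tissue (made-up expression levels).
import random

random.seed(0)
expression = {"actin": 1000, "pax6": 50, "globin": 0}    # relative mRNA abundance in some tissue
pool = [gene for gene, level in expression.items() for _ in range(level)]

library = [random.choice(pool) for _ in range(2000)]     # pick 2,000 clones from the mRNA pool
for gene in expression:
    print(gene, library.count(gene))                     # globin: 0 -- unexpressed genes are absent
```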
OK, so now I'm going to get to my final point for this lecture, which is this final technique, which will allow us to determine whether or not a transcript is expressed in a single cell type or ubiquitously through an organism, and this involves a technique which is known as hybridization. And what hybridization is is that if you're starting with a piece of DNA, you don't need to know its sequence in order to determine whether there are sequences that are similar or identical to it. Hybridization is basically this: if you have some sequence and it's single stranded, such that you have a DNA backbone but you have bases that are able to pair with their complementary bases, you can use a piece of single stranded DNA like this and you can label it, such that if the labeled piece sticks to another piece that has identical or similar sequence, you'll be able to visualize it in some way, OK? So this is called-- you're looking for things that anneal or hybridize to a particular specific sequence. So you don't need to know the sequence a priori, OK? You just need to have this physical piece of DNA, and you can use this single stranded piece of DNA to then fish for similar sequences, OK? So we could take a piece of DNA here, maybe that's in a gene, and we could fish through a DNA library to try to identify a cDNA clone that has sequence identity to that piece of DNA, OK? And the way this is done is to take a cDNA library. So each of these colonies here would express or have a different clone of DNA. You can then take a nitrocellulose filter, put it on this plate, which would stick the bacteria in place to that filter, and you could then lyse the bacteria and denature the DNA, and then the DNA is stuck to the filter, but now it's single stranded. You can then add your probe, which is labeled, and look for the colonies that this probe sticks to, and that would then identify a particular cDNA, which would identify whether or not a piece of DNA is expressed in a given tissue type, OK? So everyone see how that would work? So in addition to doing this on a nitrocellulose filter, you can also do this in a tissue, and that's known as in situ hybridization. And in this case, in situ hybridization, you're searching for mRNA in a section of fixed tissue. OK, and I have an example from this paper here, which is the paper where this gene was cloned. This paper reported the cloning of the aniridia gene, and they identified a gene of interest, which is called Pax6 now, and they basically used a piece of DNA that they thought was interesting, and they did in situ hybridization in an organism, and in this case, you see an eye. This is an eye here, and the labeled Pax6 is in yellow, and you can see how this transcript is present throughout the entire eye, right? And the way you would see if it's tissue specific is you look in other tissues and you wouldn't see this yellow label. So that's how you would determine if it's expressed in a specific tissue or ubiquitously throughout an organism. OK, so this Pax6 gene. Oop. So I was going to ask, what do you think would happen if you hyperactivate Pax6 in humans, and this is one idea, but actually, I just made that up, or Stan Lee made that up, but actually, Stan Lee never in fact mentioned whether or not Cyclops is a Pax6 mutant, but we can do a different type of experiment, which might be more ethical, which is we know there's a fly gene that's homologous to Pax6. And what we can do in flies is we can ectopically express this eyeless gene in non-eye tissues and see what happens.
OK so, this is pretty wild. This is my Halloween image for the class. So this is a fly where eyeless has been expressed all over its body. OK, so here you see there's an eye-- its normal eye-- here. You can see there's now another eye growing in the front of its head. You can see here's an eye growing on this fly's back, and you can see the legs. There's eye tissue all over the legs of this fly, OK? So this Pax6 gene, which is conserved from flies to humans, is the master regulator of eye development, OK? And at least in flies, if you ectopically express this in other parts of the body, you get an eye. I should say these are not functional eyes. They don't hook up to the brain the same way the normal eye does. So it's not like this fly can see out of the back of its head. OK, that's it. I'm done, and good luck on your exam on Wednesday. We will see you here.
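As a footnote to the library-screening step described earlier in this lecture, here is a toy sketch of hybridization-based screening (made-up sequences and colony names, not real data): a single-stranded probe "sticks" to any clone that contains its complement.

```python
# Toy sketch of screening a cDNA library by hybridization (made-up sequences):
# a labeled single-stranded probe hybridizes to clones containing its complement.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[b] for b in reversed(seq))

library = {                                   # colony name -> (single-stranded) cDNA insert
    "colony_1": "GGCTTACGGATCCAAGT",
    "colony_2": "TTTGACCATGCGTAGGA",
    "colony_3": "ACGGATCCA" * 2,
}
probe = "TGGATCCGT"                           # labeled single-stranded probe

hits = [name for name, insert in library.items()
        if reverse_complement(probe) in insert]
print(hits)                                   # ['colony_1', 'colony_3'] -- clones the probe would light up
```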
https://ocw.mit.edu/courses/8-821-string-theory-and-holographic-duality-fall-2014/8.821-fall-2014.zip
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let us start. So let me first remind you what we did in last lecture. We proved-- we showed that the Weinberg-Witten theorem forbids the existence of massless spin-2 particles. Massless spin-2 particles are the hallmark of gravity, so that's why we look for them. In the same-- so, to emphasize, in the same spacetime, say, that a QFT lives in. So this theorem essentially says this: you will never have emergent gravity starting from a [INAUDIBLE] quantum field theory with a well-defined stress tensor. So as we already mentioned, the loophole of this theorem is that emergent gravity can actually live in a different spacetime. So as in a holographic duality. In fact, in holographic duality the gravity lives in one dimension higher, but we are not ready to go there yet. So in order to describe this thing we still need to do some preparations. So the preparations-- let me just outline the preparations we need to do. So the first thing we will do is black hole thermodynamics. So this will give a hint for something called the holographic principle, which is actually more general than the holographic duality discovered so far. And the second thing is, we will also quickly go over the large N gauge theory-- the properties of the large N gauge theories. So this gives a hint of something called a gauge-string duality. So the behavior of the large N gauge theory gives you a strong hint that actually there's a string theory description for ordinary gauge theories. OK, and then when you combine these two things together, then you get what we have currently, the holographic duality. And then we will also talk a little bit of string theory. A tiny bit. Not a lot, so don't be scared. So we will also talk a little bit about string theory. So in principle, actually, I can talk about holographic duality right now, but going over those aspects can help you to build some intuitions, and also to have a broader perspective than just presenting the duality directly. So before I start today's lecture, do you have any questions regarding our last lecture and regarding this-- yeah, general remarks here? Yes? AUDIENCE: So moving forward, are we going to define emergent gravity as this existence of a massless spin-2 particle? PROFESSOR: No, that we will not do. AUDIENCE: So how are we defining emergent gravity? PROFESSOR: You construct Newton's law? Yeah, you will see it. You will see it when we have it. Yeah, essentially you see-- essentially it's handed over to you. You don't have to go through that step. You don't have to go through that step. Yeah. Any other questions? OK, good, let's start with black hole thermodynamics. So let me start by doing a little bit of dimensional analysis, just to remind you of the important scales for gravity. So the first thing is what we call the Planck scale. So in nature we have the fundamental constants H-bar, the Newton constant GN, and c, the speed of light. And immediately after Planck himself introduced this H-bar, he realized you can actually combine the three of them to come up with a mass scale, which is just the square root of H-bar c divided by GN. And this, if you plug in the numerical values, is about 1.2 times 10 to the 19 GeV, divided by c squared.
You can also write it in terms of the gram, which is about 2.2 10 to the minus 5 gram. And then you can also have-- we can also construct the length scale, which is hGN divide c cubed, which is about 1.6 to 10 the minus 33 centimeter. And tp equals lp divided by c, OK? So this was discovered-- Planck introduced them in 1899, so most of you may know the story. And then, just immediately after he introduced H-bar-- so he introduced H-bar in 1899. In the same year he realized you can write down those numbers. And of course, those numbers meant nothing to him because at that time he didn't know special relativity, he didn't know quantum mechanics-- essentially, he didn't know anything. But he famously said that this unit-- so he claimed those should be basic units of physics, and he also said these are the units that would retain their significance for all times and all cultures. He even said they even will apply to aliens. But only after 50 years-- so after 1950s people get some sense. So many years after special relativity, many years after general relativity, and also quantum mechanics, et cetera, people started grasping the meaning of those scales. And so let me briefly review them. So you can-- OK, the feeling of the meaning of those scales by looking at the strength of gravity. So let me first start this example for electromagnetism, which you have a potential, which is e squared divided by r. So if you have two charged particle of charge e, then the potential between them is e squared divided by r. And then for particle of mass m then you also have Compton wavelengths. So if you have a particle with mass m, you also have Compton wavelengths. It's H-bar divided my mc. So you can get rough measure of the strength of electromagnetism-- the actual strength of the electromagnetism by conceiving the following [INAUDIBLE] dimension number, which I call lambda e, which is the potential evaluated at the minimal-- in quantum mechanics, essentially this is the minimal distance you can make sense of. Because once you go distance smaller than these, you can no longer-- and then the quantum uncertainty will create the energy uncertainty bigger than m, and you can no longer talk about single particle in a sensible way. So the essentially the minimal length scale you can talk about single particles. So this is essentially some energy scale. So you can compare these to the, say, the static mass of the particle. So this give you a measure of the strength of electromagnetism. Of course, if you plug this in you just get e squared divided by H-bar c. And of course, we know this is the fine structure constant, which is indeed the coupling. Say you would do-- indeed, it's a coupling in QED, OK? So you can do the same thing for gravity. You can do the same thing for gravity. So for gravity we have essentially g, and say if you take two particle of masses m then the potential between them is gm, m squared, divided by r. And then again, you can define effective strength for the gravity. I evaluate this potential at Compton wavelengths divided by the static energy of the particle. And then you can just plug this in. So this is GN m squared, divided by H-bar, divided by mc, then divided by mc squared. And then you find this is just equal to GN m squared, divided by H-bar c. OK, so now if you compare with this equation-- OK, so this is just given by m squared divided by mp squared. OK, so for gravity this effective strength is just given by the mass of the particle. 
So because for gravity the mass is like some kind of effective charge, and then divided by the Planck scale-- this Planck mass-- squared. OK, and you can also write it as lp squared divided by rc squared, or just as this Planck length divided by the [INAUDIBLE] wavelength of this particle. OK, so for most elementary particles-- so for typical elementary particles we know the m is always much smaller than mp. And then the lambda g would be typically much smaller than 1. So for example, say, if you consider the electron, the electron mass would be 5 times 10 to the minus 4 GeV divided by c squared. Of course this is much smaller than this Planck mass. And then you find that this ratio-- so you can work out this ratio. So let's compare it to the corresponding strength for electromagnetism. So this is about 10 to the minus 43 if you work this out. So this tells you the gravity's really weak. So for ordinary elementary particles-- so the gravity is really weak, and so we can forget all about gravity until you reach the Planck mass, or your Compton wavelength reaches the Planck length. And then the effects of quantum gravity will be important. So from this exercise we know that mp is the energy scale at which the effective gravity strength becomes of order one-- becomes of order one-- that is, quantum gravity effects become significant. And similarly-- so lp is the corresponding length scale associated with such energy. So this gives you a heuristic feeling-- gives you a heuristic indication of the meaning of those Planck scales. OK, so there's another important scale associated with gravity. So any questions on this? Good, there's another important scale associated with gravity. It's called the Schwarzschild radius-- Schwarzschild radius. So just from dimensional analysis, the Schwarzschild radius can be argued as follows. So consider that-- again, we can just even see it from Newtonian gravity. So I would say let's consider an object of mass m. Then we ask-- at what distance, maybe I should say-- at what distance from it does the classical gravity become strong. So for this purpose, let's consider, say, a probe mass-- say, m prime. So, consider the probe mass. So I define a scale, which I call rs, by requiring the potential energy between m and m prime at such a scale rs to become of order, say again, the static energy of this probe particle, or of this probe mass. So if you cancel things out then you find rs is of order GN m divided by c squared. OK, so this gives you-- gives you a scale. You can also ask-- so this is from Newtonian gravity. Of course, when your gravity becomes strong you should replace the Newtonian gravity by Einstein's general relativity. And then when you go to relativity-- when you go to relativity, general relativity, then you find there's a Schwarzschild radius. So there's a Schwarzschild radius which is given by 2 GN m divided by c squared, which corresponds to the size of a black hole. OK, corresponding to the size of a black hole. So this is a classical scale-- purely classical scale. Just the scale at which the classical gravity becomes strong. In particular, rs can be considered as the minimal length scale at which one can probe an object of mass m, OK? So classically, a black hole absorbs everything. So once you fall into a black hole you can never come back. And so the minimal distance you can approach an object of mass m is given by the Schwarzschild radius.
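As a quick numerical aside (a sketch with standard SI constants, not from the lecture), the scales quoted above can be checked directly: the Planck mass and length, the effective gravitational strength lambda_g = (m/m_Planck)^2 for an electron compared to the fine structure constant, and the Schwarzschild radius of a solar-mass object.

```python
# Short numerical sketch (SI units, standard constants) of the scales discussed:
# Planck mass and length, lambda_g = (m / m_Planck)^2 for an electron, and a
# solar-mass Schwarzschild radius.
import math

hbar = 1.054571817e-34      # J s
c    = 2.99792458e8         # m / s
G    = 6.67430e-11          # m^3 / (kg s^2)

m_planck = math.sqrt(hbar * c / G)            # ~2.18e-8 kg ~ 1.2e19 GeV/c^2 ~ 2.2e-5 g
l_planck = math.sqrt(hbar * G / c**3)         # ~1.6e-35 m = 1.6e-33 cm

m_electron = 9.1093837015e-31                 # kg
lambda_g   = (m_electron / m_planck)**2       # gravity is absurdly weak for an electron
alpha_em   = (1.602176634e-19)**2 / (4 * math.pi * 8.8541878128e-12 * hbar * c)  # ~1/137

m_sun = 1.989e30                              # kg
r_s   = 2 * G * m_sun / c**2                  # Schwarzschild radius, ~3 km

print(f"{m_planck:.3e} kg, {l_planck:.3e} m")
print(f"lambda_g(electron)/alpha_em = {lambda_g/alpha_em:.1e}")   # ~1e-43, as quoted in lecture
print(f"r_s(sun) = {r_s/1000:.1f} km")
```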
So when you go inside the Schwarzschild radius we just fall into the black hole, and you can never send information out. And the one interesting thing compare-- yeah, one interesting thing regarding this Schwarzschild radius. You said Schwarzschild radius increase with mass. OK, if you increase the mass the Schwarzschild radius increases. So in principal it can be very large when you can see the very large object-- when you can see the very massive object. So now-- so let us summarize. Let us summarize. So for object of mass m there are two important scales. Two important scales. So one is just the standard Compton wavelengths, and the other is the Schwarzschild radius. So one is quantum and the other is classical. The other is classical. So let's take the ratio between them. Let's take the ratio between them. So if you take the ratio between them-- so let's forget about these two, just to consider of order up to order 1. So c squared, divided by H-bar, divided by mc. And then this again give you gm m squared divided by H-bar c. And then this is again just m squared divided by mp squared. This again is m squared divided mp squared, and p is the Planck mass. So let's consider the different scenarios. So the first, let's consider-- just suppose the mass of the object is much, much greater than mp. OK, so in this case then the Schwarzschild radius are much larger than the Compton wavelengths. Much larger than the Compton wavelengths, And essentially, all of physics is controlled by the classical gravity, because you can no longer probe-- yeah, because a Compton wavelengths is much, much inside the Schwarzschild radius, which you cannot probe. So the physics is essentially classical gravity. And the quantum effect is not important, so you don't have to worry about the quantum effect. I will put quote here. Not quote, I will to put some-- yeah, some quote here. You will see what this means. And the second possibility is for mass much, much smaller than p. So in this case, then the Compton wavelengths will be much, much greater than the Schwarzschild radius. OK, so this is a quantum object, so the quantum size is much larger than the Schwarzschild radius. But we also know, precisely in this region, this lambda g is also very small. The effective strength of the gravity is also very, very small. So we also have found before that the lambda g is also very, very small. So in this case, as what we said here-- the gravity is very weak and not important. It's much, much weaker than other interactions, so you can essentially ignore them. Then we're only left with the single scale which mass is of all the mp. And then, as we said before, the quantum gravity is important. So let me just say, quantum gravity important. If this were the full story, then life would be very boring. Even though it would be very simple. Because the only scale you need to worry about quantum gravity is essentially natural zero. It's only one scale. And it will take us maybe hundred if not thousand years to reach that scale by whatever accelerator or other probes. So there's really no urgency to think about the quantum gravity. Because right now we are at this kind of scale. Right now we are this kind of scale. It's very, very far from this kind of scale. But the remarkable thing about black hole-- so this part of the physics is essentially controlled by Schwarzschild radius, because the Schwarzschild radius is the minimal classical radius you can achieve. Yeah, you can probe the system, and the quantum physics is relevant. 
But the remarkable thing is that it turns out that this statement is not correct. This statement is not correct. Actually, the quantum effect is important. So the remarkable thing is that the black hole can have quantum effects manifest at the macroscopic level-- say, at length scales of order the Schwarzschild radius, OK? So that's what makes the black hole so interesting, and also why the black hole is such a rich source of insight and information if you want to know about quantum gravity. And as we will see, actually, it also contains a rich source of information about ordinary many body systems, due to this duality. Any questions regarding this? Yes? AUDIENCE: So when you talk about the m much larger than mp, so the m is not only an elementary particle, it can be just an object? PROFESSOR: Yeah, it can be a bound state. So it can be-- here, we are always talking about a quantum object. But it can have a very large mass. AUDIENCE: That's still a quantum object? PROFESSOR: Yeah. What you will not see, in the traditional way, for such a large mass object, is its quantum uncertainty, because the quantum uncertainty is tiny. The Compton wavelength is very tiny, and so the fluctuations are very small. And so you have to probe very, very small scales to see its quantum-- from the traditional point of view, we have to probe very, very small scales to see its quantum fluctuations. And that scale is much, much smaller than the Schwarzschild radius. AUDIENCE: OK. PROFESSOR: Any other question? Good, so let me-- before we talk more specifically about black holes, let me just make one final remark. It's that in a sense, this Planck length-- this length scale defined by Planck-- can be considered as a minimal localization length. OK, for the following reason. So let's first imagine non gravitational physics-- so in non gravitational physics if you want to probe some short distance scales then it's easy if you are rich enough. Then you just accelerate the particles to very high energies. Say e plus, e minus, with p and minus p. Then that can probe the distance scale, then this can probe length scales, say of order H-bar divided by p. OK, so if you make the particle energy high enough you can, in principle, probe as short a scale as you want. Anything, as far as you can make this p as large as you want. So in principle, you can take l all the way to zero. So the scale goes all the way to 0 if you take p to go to infinity. But in gravity, this is not so. So with gravity the square of the distance [INAUDIBLE] so when your energy is much, much greater than Ep-- say, the Planck mass-- then, as we discussed over there-- so this is a center of mass energy, OK? Say if your center of mass energy becomes much, much larger than the Planck scale-- Planck mass-- then rs is controlled by the energy-- yeah, and [INAUDIBLE] p. Yeah, let me just forget about c. Let me just say-- OK? Then the Schwarzschild radius from now on-- so it grows with the energy, OK? So the Schwarzschild radius will take over as the minimal length scale. OK, so what's going to happen is that if you collide these two particles at very high energies, then at a certain point, even before these two particles meet together, they already form a black hole of the Schwarzschild size, OK? If this energy is high enough, then we will form a black hole, and then you can no longer probe inside the Schwarzschild radius of that black hole. So that defines a new scale which you can probe, OK?
So the funny thing about this Schwarzschild radius is that it's proportional to the energy, rather than inversely proportional to the energy like the standard Compton wavelength, OK? So after a certain point, when you go beyond the Planck mass, when you further increase the energy, then you're actually probing larger distance scales rather than smaller distance scales, due to this funny feature of gravity and of the black hole. So actually, this scale increases with p-- increases with your center-of-mass energy-- and higher energies probe longer length scales. OK, this essentially defines the Planck length as the minimal scale one can probe. So when your center-of-mass energy is smaller than the Planck mass, then your Compton wavelength of course is larger than lp. But when it is greater than mp, then as we discussed here, the Schwarzschild radius of the object will be greater than the Planck length-- it will take over from the Compton wavelength and will be greater than the Planck length. And this gives you, essentially, the minimal radius you can probe. Alternatively, we can reach the same conclusion just by writing down a couple of equations. So suppose you want to resolve a position to within delta x; then the uncertainty in the energy or momentum associated with that delta x is delta p, of order H-bar divided by delta x. But on the other hand, the distance you can probe must be greater than the Schwarzschild radius associated with delta p-- of order GN delta p. So if you combine these two equations together, delta x is greater than GN H-bar divided by delta x-- so now I have suppressed the c. And you can see from this that delta x must be greater than the square root of H-bar GN, which is lp. OK, which is lp here. This is the same argument as the previous one, just a little bit more formal. So the essence is that once your energy is big enough, you will create a black hole, and then your physics will completely change. AUDIENCE: [INAUDIBLE] black hole evaporates [INAUDIBLE] they do not contain that information. PROFESSOR: Right, yeah, we will go into that. When the black hole evaporates, we still don't probe the short-- it's still harder to probe the short distance scale. Yeah, we will talk about that later. So here this is just a heuristic argument to tell you that, because of this, the physics of gravity is actually very special. Any other questions? AUDIENCE: As a probe for quantum gravity, I was thinking-- what if there were some phenomenon in which gravity is weak, and it's a macroscopic scale, but there's some kind of coherence happening that will-- maybe on the scale of galaxies or something like that-- that will allow quantum effects to manifest. Like an analogy of what happens in a laser or something like that. PROFESSOR: Yeah, that is the black hole. The black hole is the way in which gravity can manifest its quantum effects at macroscopic scales. And we don't know any other way for gravity to manifest such quantum physics at large distance scales. OK, so let me conclude this general discussion by reminding you of the various regimes of gravity, or of quantum gravity. So in the discussion I'm going to do in the next few minutes, you should always think: when I take a limit, I always keep the energy scale I'm interested in fixed.
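To recap the two-line estimate given just above (delta x versus delta p) before the discussion turns to the different regimes-- written with c set to 1, and meant only as an order-of-magnitude argument, not a rigorous inequality:

```latex
% Resolving a region of size \delta x requires momentum transfer \delta p
\delta x \;\gtrsim\; \frac{\hbar}{\delta p}
% Gravity: the probe's energy has its own Schwarzschild radius, below which one cannot see
\delta x \;\gtrsim\; r_S(\delta p) \sim G_N\,\delta p
% Combining the two conditions gives a minimal resolvable length
\delta x \;\gtrsim\; \sqrt{\hbar\,G_N} \;=\; \ell_p
```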
So the classical gravity regime is the regime in which you take H-bar equals zero-- take Newton constant finite-- and the regime of a particle physics, we would normally be, sometimes, say, QFT in fixed space time. So this is a quantum field theory in the fixed space time, including curved-- including curved. So this is a regime in which H-bar is finite, while you take the Newton constant go to 0. And then there's, of course, the quantum gravity regime, which is the GN and the H-bar both finite. And then there's a very interesting regime, which is actually the regime most of us work with. So there's also something called the semiclassical regime for quantum gravity. So this is the regime you keep H-bar finite, and you expand this system in Newton constant, OK? So around-- so you expanded whatever quantity Newton constant around-- say G Newton equal to zero. So G Newton equal to 0 is the classical regime. It's the regime the gravity's not important, but including H-bar. And now you can take into account the quantum gravity fact, semiclassically, by expanding around the GN. OK, so this is normally what we call semiclassical regime of gravity. Yes? AUDIENCE: Do we know that G goes to 0 [INAUDIBLE] and it's not some other part? PROFESSOR: Yeah, this is a very good question. So this is indeed the question. Indeed, most of the particles regarding black hole is in this regime. So our current understanding of quantum physics or black hole is in the semiclassical regime, and you treat any matter field H-bar finite, but the gravity's weak. And so there are various indication that this limit is actually not smooth, but the only for very subtle questions. For simple questions, for typical questions, actually this is a limit. This limit is smooth, but there can be very subtle questions which this limit is not smooth. And one such question is this so-called the black hole information loss, and the subtle limit of taking this-- yeah, you [INAUDIBLE] taking this limit. Any other questions? And this is a regime, actually, we will work with most. OK, this is a regime we will work with most. So we will always-- in particular in nature. So right there I'm keeping the H-bar explicitly, but later I will also set the H-bar equal to 1. And so H-bar will equal to-- so we are always typically working with this regime. So H-bar equal to one. And then you will have-- yeah, then you take into account the fact of GN in the perturbations series. Good, so now let's move to the black hole. OK, with this-- yes? AUDIENCE: [INAUDIBLE] Also working in the other sermiclassical regime. I mean finite GN, but expand H-bar. PROFESSOR: Yeah, this is not so much. It's because it's easy-- in some sense it's easy. So in a sense, we are doing a little bit both. Yeah, so later-- right now I don't want to go into that. You will see the effective coupling constant and show the quantum gravity fact is in fact H-bar times GN. It's H-bar times GN. And so the quantum gravity fact will be important when the H-bar times-- when you do perturbation series H-bar times GN. So when I say you are doing same thing GN, essentially it's because I'm fixing H-bar. So you're actually doing perturbation series H-bar GN. So you have to do both. Any other questions? Good. Right, so now let's talk about the black hole. Let's talk about classical geometry. OK, so here I will assume you already have some background in GR-- in general relativity-- and for example, you have seen since Schwarzschild metric, et cetera. 
And if you have not seen a Schwarzschild metric, it's also OK. I think you should be able to follow what I'm going to say. So for simplicity, right now let me consider zero cosmological constant. OK, zero cosmological constant. OK, zero cosmological constant corresponding to-- we consider symptotically Minkowski space time. And so in the space time of zero cosmological constant entered in the space time metric due to an object of mass m. It can be written as-- so this is the famous Schwarzschild metric. And so of course here we are assuming this is a spherical symmetric, and a neutral, et cetera. So this object does not carry any charge. And this m and this f-- it's given by 1 minus 2m, mass divided by r, or it's equal to rs divided by r. OK, so this rs is the Schwarzschild radius. So now c is always equal to 1. So if this object-- so we consider this object is very close metric center. If this object have finite sides, then of course this metric only is varied outside this object. But if the sides of this object is smaller than the Schwarzschild radius, then this is a black hole. OK, and the black hold is distinguished by event horizon and r equal to rs. OK, and the r equal to rs. So at r equal to rs, you see that this f becomes 0. OK, so f equals become 0. So essentially at here gtt-- so the metric for the tt component is becomes 0, and the grr, the metric before the r component become infinite. And another thing is that when r becomes smaller than rs, the f switch sign. So f become active, and then in this case then the r become time coordinates and the t becomes spatial coordinates, when you go inside to the r equal to rs. OK, this is just a feature of this metric. So now let me say some simple fact about this metric. So most of them I expect you know-- I expect you know of them. But just to remind you some of those facts will be important later. OK, so these are mostly reminders. So first, this metric-- if you look at this metric itself, it's time reversal invariant. OK, because if you take t go to minus t, of course the invariant on the t goes to minus t. OK, so this-- actually, because of this, this cannot describe a real black hole. So the real life black hole arise from the gravitational collapse, and the gravitational collapse cannot be a time symmetric process, OK? So this cannot describe-- so this does not describe a real black hole, but it's a good approximation to the real life black hole after this object have stabilized. So after the gravitational collapse has finished. So this is a mathematical-- in some sense, it is a mathematical idealization of real life black hole. OK, so this is first remark. So the second remark is that despite this grr goes infinity, this metric component goes infinity. So the space time is non-singular at the horizon. OK, so you can check it by computing, say, curvature invariants of this metric. You find the number of the-- all the curvature invariant that are well-defined. So this horizon is just the coordinate-- you can show that this horizon is just a coordinate singularity. Which we will see-- actually, we will see it in maybe next lecture, or maybe at the end of today's lecture, that just this coordinate, r and the t, coordinate become singular. The coordinate itself become singular. So this t, we normally call it Schwarzschild time. So let me just introduce a name. So this t, we call it the Schwarzschild time. So Schwarzschild found this solution while fighting first World War, really in the battlefield. 
And a couple of months after he found this metric, he died from a disease. Right, so this is that. Another thing you can easily check yourself: the r equal to rs horizon is a null surface-- a null hypersurface. A null hypersurface just means this surface contains geodesics which are null. And the third remark is an extremely important one, which we will use many times in the future. We said the horizon is a surface of infinite redshift-- infinite redshift from the perspective of an observer at infinity, OK? So let me take a moment now to add this qualifying remark: from the perspective of an observer at infinity. Now let me illustrate this point a little more explicitly. So let's consider an observer-- let me call it O h-- at some radius rh which is close to the horizon, slightly outside the horizon. And let us consider another observer, which I call O infinity, at r going to infinity-- very far away from the black hole. So let's first look at this second observer. At r equal to infinity the metric-- this f just becomes one, because when r goes to infinity this f just becomes 1. And then this becomes the standard Minkowski space time written in spherical coordinates. And from here you can immediately see-- so this t is what we call Schwarzschild time-- that t is the proper time for this observer at infinity. Say, for O infinity. OK, so now let's look at the place r equal to rh. At r equal to rh, the metric is given by minus f of rh dt squared, plus the rest. And to define the proper time for the observer there, we can just write this directly as minus d tau squared. So that's the proper time observed by the observer at this hypersurface, OK? Let me call it tau h. So we conclude that the proper time for O h is given by f to the 1/2 of rh times dt. OK, so it relates to the proper time at infinity by this factor. So if I write it more explicitly-- so this is 1 minus rh, divided by rs, to the 1/2, times dt. So we see that as the location of this observer approaches the horizon, say if rh approaches rs, then this d tau h divided by dt will go to 0. So that means, compared to the time at infinity, time at r equal to rs becomes infinitely slow-- or, approximately, becomes infinitely slow. So that means any finite proper time interval for the observer O h, when viewed from infinity, becomes infinitely long, OK? Becomes a very long time scale. So you can also invert this relation. Say, suppose some event has proper energy eh-- say, for this observer at O h. Then, because of this relation between the times, from the perspective of the observer at infinity the energy is given by inverting the relation between the times, because energy and time are conjugate. So e infinity becomes eh times this f to the 1/2 of rh. So this just says again, in a slightly different way, that for any finite local proper energy eh for the observer at O h, this e infinity-- the same event viewed from the perspective of the observer at infinity-- goes to 0.
S rh goes to rs. So that e is infinity red shifted-- become infinite red shift. So any process with local proper energy viewed from infinity corresponding to very, very low energy process. So this actually will play a very important role. This feature will play a very important role when later we talk about holographic duality and it's implication, say, for the field series, et cetera. Yes? AUDIENCE: So this is just a pedantic comment, but I think you need a minus sign in front of your f, just to make sure proper time isn't imaginary. PROFESSOR: Sorry, which minus sign? AUDIENCE: In the d tau stage. PROFESSOR: You mean here? You mean here or here? AUDIENCE: Below that. PROFESSOR: Below that, yeah-- oh, sorry, sorry. Thank you, I wrote it wrong. It should be rs rh. Yeah, because rs is above. Sorry, yeah-- so rs is above, so varied rh, so rh is downstairs. Thank you, so rh is-- so I always consider rh as [INAUDIBLE] equal to rs, OK? Any other questions? OK, and so some other fact. And again, I will just list them. I will just them. If you're not familiar with them, it should be very easy for you to go through them, to re-derive them, with a little bit knowledge in gr. So number four is that it takes a free fall-- free fall means we just follow geodesics-- free fall of a traveler a finite proper time to reach the horizon-- say, from the infinity-- but infinite Schwarzschild time. So also you can easily check that it actually takes infinite Schwarzschild time for object to fall through the horizon. But from the free fall observer itself, it's actually just finite proper time. And so from the perspective of the observer at infinity, it looks like this object never fall into the black hole. It's just frozen at the horizon. So another remark is that once inside the horizon-- that means when r becomes smaller than rs-- the traveler can not send signals to outside, nor can he escape. So that's why this is called even horizon. So we will see this slightly later. In next lecture we will see this explicitly. So finally, there are two important geometric quantities associated with the horizon. Two important geometric-- OK? So the first one is the area of the spatial section. So suppose-- so let's consider we are at i equal to rs, and then here you just said r equal to rs, and then this is a two-dimensional sphere, OK? So this two-dimensional sphere corresponding to a spatial section of the horizon. So first, A is the area of a spatial section. So you can just say let's look at the area of this part with the r equal to the horizon radius. So this just give you a equal to ah ah, equal to 4 pi rs squared. And this rs is 2GN, so this become 16 pi GN squared. OK, so this is one of the key quantities of a black hole horizon. It's what we call the horizon area, OK? What we call this horizon area. And the B is called the surface gravity. B is called service gravity. So the surface gravity is defined by the acceleration of a stationary observer at the horizon as measured at infinity. OK, at infinity means at spatial infinity. So you can-- if you are not familiar with this concept of surface gravity, you can find it in standard textbooks. For example, the Wald say page 158, and also section-- OK. Just try to check it there. So, say, suppose you have a black hole. Say this is the Schwarzschild radius. Of course, things want to fall into the black hole. So if you want to remain at a fixed location outside the black hole, then you really have to accelerate. 
You have to fire some engine to keep yourself to stay there. And you can calculate what is acceleration you need to be able to stay here, OK? And once you are closer and closer to the horizon, that acceleration becomes bigger and bigger-- eventually, becomes infinity when you approach the horizon. But because of this red shift effect, when this acceleration is viewed from the units for observer at affinity, then you have infinity divided by infinity, then turns out to be finite. And this is called a surface gravity. It's normally called a kappa. Normally called a kappa. And this is one of basic quantities-- basic geometric quantities of the horizon. So I will not derive it here. I don't have time. And if you want to see this, Wald's book. So you can calculate that this is just given by 1/2 the derivative of this function f evaluated at the horizon location. OK, so f is equal to 0 at the horizon, but f prime is not. OK, so you can easily calculate. So this is 1 over 2rs from here, and is equal to 1 over 4GN. OK, so this is another very important quantity for the black hole. Actually, I think right now is a good place to stop. OK, so let's stop here for today, and the next time we will describe-- then from here we will discuss the causal structure of the black hole, and then you will see explicitly some of this statement, if you have not seen them before.
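For reference, the formulas written on the board in this part of the lecture, collected in one place (c = 1; the mass factors that get dropped in the spoken version are restored):

```latex
% Schwarzschild metric outside a spherical, neutral object of mass m
ds^2 = -f(r)\,dt^2 + \frac{dr^2}{f(r)} + r^2\,d\Omega^2,
\qquad f(r) = 1 - \frac{2 G_N m}{r} = 1 - \frac{r_s}{r}
% Redshift: proper time and proper energy of a static observer at r_h, seen from infinity
d\tau_h = f^{1/2}(r_h)\,dt, \qquad
E_\infty = f^{1/2}(r_h)\,E_h \;\longrightarrow\; 0 \quad \text{as } r_h \to r_s
% Horizon area and surface gravity
A_H = 4\pi r_s^2 = 16\pi\,G_N^2 m^2,
\qquad \kappa = \tfrac{1}{2} f'(r_s) = \frac{1}{2 r_s} = \frac{1}{4 G_N m}
```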
https://ocw.mit.edu/courses/7-016-introductory-biology-fall-2018/7.016-fall-2018.zip
PROFESSOR: And today I'm going to talk about DNA sequencing. And I want to start by just sort of illustrating an example of how knowing the DNA sequence can be helpful. So you remember in the last lecture, we talked about how one might identify a gene through functional complementation. And this process involved making a DNA library that had different fragments of DNA cloned into different plasmids and then involved finding the needle in the haystack where you find the gene that can rescue a defect in a mutant that you have. So if this line that I'm drawing here is genomic DNA, and it could be genomic DNA from, let's say, a prototroph for LEU2, the leucine gene. So this is from a prototroph. Then you could cut up the DNA with EcoRI. And if there is not a restriction site in this LEU2 gene, you get a fragment that contains the LEU2 gene. And then you could clone this into some type of plasmid that replicates in the organism that you're introducing it and propagating it in. And so that would allow you to then test whether or not this piece of DNA that you have compliments a LEU2 auxotroph, OK? Now one thing I want to point out is that because these EcoR1 sites, these sticky ends, would recognize this EcoR1 one end or this EcoR1 end, you can imagine that this gene-- if the gene reads this way to this way-- it could insert this way into the plasmid. Or it could insert in the opposite direction. So it could be inverted. So this would have some sort of origin of replication and some type of selectable marker. But if you have the same restriction site it can insert one way or the opposite way. That's just one thing I wanted to point out. Now let's say rather than leucine, you're interested in cycling dependent kinase, and you had a mutant end CDK and you had this sequence of your yeast CDK gene. Well, rather than having to dig through a whole library of pieces of DNA for the CDK gene, basically you're sort of fishing for that needle in the haystack. If you knew the sequence of the human genome, you'd be able to identify similar genes by sequence homology. And you could then take a more direct approach, where you take-- let's say you have a piece of human DNA now, double stranded DNA, and it has the CDK gene. You could take human DNA with this CDK gene. And you have unique sequence around the CDK gene, which would allow you to denature this DNA. And if you denature the DNA, you'd get two single strands of DNA. And you could then design primers that recognize unique sequences flanking the CDK gene. So you could imagine you'd have a primer here and a primer here. And then you could use PCR to amplify specifically CDK gene from, it could be the genome or from some library. And then you get this fragment here, which includes CDK. So knowing the sequence of the genome would allow you to more rapidly go from maybe a gene that you've identified as being important in one organism, and find the human equivalent that might be doing something similar in humans. So this step here is basically PCR. And let's say the CDK gene had restriction sites. Let's see, we'll say restriction site K and A here. Then if you have these restriction sites in your fragment of DNA, you can then digest or cut that piece of DNA with these restriction endonucleases. And then you'd get a fragment of CDK that has K and A sticky ends. We'll pretend that both of these have sticky ends. And now you have unique sticky ends between K and A. And you might have a vector that also has these two sites. 
And you could digest this vector with these two enzymes. And that would allow you to insert the specific gene in this plasmid. And if you have two unique sites, because K only recognizes K here and A only recognizes A, then it will ligate in. But you can do it with a specific orientation because you have two different restriction sites. So I hope you all see how it's with one restriction site versus two. All right. Now let's say you want to do something more complicated than this. Let's say rather than just identifying the gene that's involved in cell division, you want to engineer a new gene, in order to determine where this particular protein, CDK, localizes in the cell. So we have CDK, which could be from yeast or human, it doesn't matter. And you want to engineer a new protein, basically, that you can see. So remember Professor Imperiali introduced green fluorescent protein earlier in the year. And this green fluorescent protein is from a gene from jellyfish. So now we could, using what I've told you, reconstruct or engineer a gene that has DNA from three different organisms, in order to make a CDK variant that we are able to see in the cell. So remember, a green fluorescent protein is like a beacon, if it's attached to a protein. If you shine blue light on it, it emits green light. And so you can use a fluorescent microscope in order to see it. In this case, let's say there's also another restriction site here, R. And let's say you have a fragment of GFP that has two restriction sites, A and R. You could then cut this fragment and this fragment with these restriction enzymes A and R. And you could insert GFP at the C terminus of the CDK gene. So you could go and have a gene that has CDK GFP inserted inside a bacterial vector. Now which one of these junction sites do you think would be most sensitive in doing this type of experiment? So there are three junction sites. There's this one, this one, and this one. Which is the one you're probably going to put the most thought into when you're doing this experiment? Yes, Miles. AUDIENCE: The A? ADAM MARTIN: The A site. Miles is exactly right. This one is going to be important. And why did you choose that site? AUDIENCE: Of the three sites, two are half insert, half originals [INAUDIBLE].. But at A, both sides of it are inserts. So [INAUDIBLE] carefully. ADAM MARTIN: And if you're trying to make a fusion protein, what's going to be an important quality of this? Malik, DID you have a point? AUDIENCE: Well, they try to [INAUDIBLE] we'd have to make sure that the [INAUDIBLE].. ADAM MARTIN: Excellent job. So Malik just pointed out two really important things. To make this a fusion protein, you have two different open reading frames. These two open reading frames have to be in frame with each other. So this junction here has to be in frame where GFP is in frame with CDK, meaning that you're reading the same triplet codons for GFP, there in the same frame as CDK. Also, you want to make sure there's no stop codon here. Because if you had a stop codon here, you're just going to make a CDK protein. And then it's going to stop and then you won't have it fused to GFP. And you guys will work through more of these in the homework. So you'll be able to get a sense of it. So now for the remainder of this lecture and also for Monday's lecture, I want to go through a problem with you. Basically, if you have a given disease that's heritable, how might you go from knowing that disease is heritable to finding out what gene is responsible for that given disease? 
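Looking back at the CDK-GFP fusion point above: the two requirements at the junction-- same reading frame and no premature stop codon-- can be checked mechanically. Here is a small sketch in Python; the sequences and the helper name are invented for illustration, not taken from the lecture.

```python
# Sketch: check that a CDK-GFP fusion keeps GFP in frame with CDK and
# introduces no stop codon before GFP.  Sequences below are invented
# placeholders, not real CDK or GFP sequences.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def junction_ok(cdk_coding, linker, gfp_coding):
    """cdk_coding starts at the ATG; return True if the fused ORF reads
    through the linker into gfp_coding without a premature stop and with
    gfp_coding starting on a codon boundary (same frame as CDK)."""
    fused = cdk_coding + linker + gfp_coding
    # GFP must begin at a position divisible by 3
    if len(cdk_coding + linker) % 3 != 0:
        return False
    # no in-frame stop codon anywhere before the end of GFP
    for i in range(0, len(fused) - 2, 3):
        if fused[i:i + 3] in STOP_CODONS:
            return False
    return True

# toy example: two-codon "CDK", one-codon linker carrying the restriction scar
print(junction_ok("ATGAAA", "GGG", "GTGAGCAAG"))  # True: in frame, no stop
print(junction_ok("ATGAAA", "GG", "GTGAGCAAG"))   # False: GFP shifted out of frame
```

Any real construct would of course be checked against the actual CDK and GFP coding sequences and the chosen restriction sites.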
And this is going to involve thinking about different levels of resolution, in terms of maps. So the highest resolution map you can have for a genome is the sequence. You can have the full nucleotide sequence of a genome. And that's the highest possible resolution because you have single nucleotide resolution as to what every single base pair is. But that's like knowing like your apartment number and your street number and basically knowing everything. But starting out, you might want to know what continent it's on, or what country is it in. And so you first have to narrow down the possible locations for a given disease gene. And that will, at first, involve establishing what chromosome and what region of a chromosome a given disease allele is linked to. And that involves making essentially a linkage map, where you establish where a disease gene is located based on its linkage to known markers that are present in the genome. Now this is going to require that you remember back two weeks ago, to when we talked about linkage and recombination. And you'll recall that we were looking at the linkage between genes and flies and genes and yeast. One difference between that type of linkage mapping and human linkage mapping is we don't have really clear traits that are defined by single genes. You can't just take hair color and map the hair color gene to link it to a disease gene. Because hair color is determined by many, many different genes. So in fruit flies, you can take white eyes and see if it's connected with yellow body color because both of those are determined by single genes. So we need something other than just having phenotypic traits that we can track. We need what are known as molecular markers to be able to perform linkage mapping. And so what we need in these molecular markers-- well, if we just think about if we wanted to determine the linkage between the A and B genes. And if you did this cross, would you be able to determine linkage? Georgia, you made a motion that was correct. Tell me. Why did you shake your head no? AUDIENCE: They'd all be heterozygous. ADAM MARTIN: Yeah they'd all be heterozygous. Because this individual has the same allele on both chromosomes, you're not going to be able to differentiate one chromosome from the other. And so the point I want to make is that in order to see linkage, what you need is variation. So we need to have variation. And another term for genetic variation is polymorphism. So we need polymorphism, or genetic variation, between these molecular markers. We also need genetic variation in the disease. But we have that. We have individuals that are affected by a disease and individuals that are not affected by a disease. So we have variation in alleles there. But in order to map it with a molecular marker, to map linkage to a molecular marker, you also need variation here. So the problem with this cross is here you need to have heterozygote. There needs to be variation in this individual, where both of these alleles are heterozygous. So now I want to talk about some of these molecular markers that we can use, and how they vary between individuals and between chromosomes. Now this is going to be maybe the lowest resolution map. But I'm talking about this linkage map here. And you can see highlighted that the bottom here are various types of polymorphisms that we can use to link a given disease allele to a specific chromosome and a specific place on chromosome. So I'll start with the first one, which is a simple sequence repeat. 
It goes by many names. But I will stick with what's on the slide. So a simple sequence repeat is also known as a microsatellite. So you might see that term floating around, if you're reading about this. And what a simple sequence repeat is, as the name implies, it's a simple sequence. It could be a dinucleotide, like CA. And it's just a dinucleotide that's repeated over and over again. So on a chromosome, you might have a unique sequence, which I'll just draw as a line. , And then you could have a CA dinucleotide that's repeated some number of times, N. And then that's followed by another unique sequence. And that's what's present in it. So that would be one strand. And then in the opposite strand, you'd have a unique sequence, the complement of CA, which is GT, and then, again, unique sequence. And so there's variation in the number of repeats of the CA. And so there's polymorphism. So we can use this to establish linkage between this marker and a phenotype, like a disease phenotype. So how might you detect the number of repeats that are present here? Anyone have an idea of a tool that we've discussed that could be used here? So one hint that I gave you is that the sequence here is unique and the sequence here is unique. So is there a way we can leverage that unique sequence to determine whether there's a difference in the number of repeats? What's a technique we discussed that involves some component of the technique recognizing a unique sequence? Yeah, Natalie? AUDIENCE: CRISPR Cas9. ADAM MARTIN: Well, CRISPR Cas9 is a possibility. Jeremy, did you have an idea? AUDIENCE: PCR? ADAM MARTIN: PCR-- so it's true. You could get it to recognize that. But then you have to detect it, somehow. So what's more commonly used is PCR. Those are both good ideas. But using PCR, you could design a primer here and a primer here. And you could amplify this repeat sequence. And the number of repeats would determine the size of your PCR fragment. So if you did PCR, then you'd get a PCR fragment that has the primers on each end, but then has this certain size based on the number of repeats. So in that case, we need some sort of tool that enables us to determine the size of a particular DNA fragment. And so I'm going to just introduce to you one such tool, which is gel electrophoresis. And gel electrophoresis involves taking DNA that you've generated, by either PCR or by cutting up DNA with a restriction enzyme, and loading it in a gel that has agarose. Maybe it's composed of agarose. It could be composed of polyacrylamide. And then because DNA is negatively charged, the backbone, if you run a current through it, such as the positive electrode is at the bottom, then the DNA is going to snake through this gel. Now we'll do a quick demonstration, if you two could come up. I need one volunteer. Ori, find 10 of your friends and bring them down. All right. That's probably good. Yeah. All right, Hannah, why don't you-- you guys have to link up, OK? Stay over here. We'll start at this end. This is the negative electrode over here. The positive electrode is going to be down there. And Jackie is going to be our single nucleotide. You guys link like-- yeah. You don't have to do-si-do, or anything like that. All right. Now what I want you guys to do is I want you to slalom through these cones like it's all agarose gel. So that you're going towards the other side. And I'm going to turn on the current now. So go. All right, stop. All right. 
See how the shorter DNA fragment is able to more easily navigate through the cones and get farther. So it was somewhat rigged. I know. But I just needed some way to make sure you always remember that the shorter nucleotide, or the shorter fragment, is going to migrate faster. You guys can go back up. Thank you for your participation. Let's give them round of applause. [APPLAUSE] All right. So what you just saw is that the longer DNA fragments, they're going to be more inhibited by moving through the gel. And so they're going to move slower and thus, not move as far in the gel. Whereas, the small fragments are going to move much faster because they're able to maneuver their way through this gel much more quickly. So there's going to be an inverse proportionality between the size of the DNA chain and its rate of movement. You're always going to see the shorter DNA fragment moving faster. So what one of these gels actually looks like is shown here. So this is a DNA gel that's agarose. And DNA has been run in these different samples. And what you're seeing is this gel is subsequently stained with a dye, like ethidium bromide, which allows you to visualize the individual DNA fragments. And so a band on this gel indicates a whole bunch of DNA fragments that are all roughly the same length. So essentially, you can measure DNA length using this technique. What's over here at the end of the gel, this is probably some sort of DNA ladder, where you have DNA fragments of known length that you can use to calibrate the length of these bands over here. So this is how you measure DNA length. And we're going to use it over and over again, as we talk about DNA and sequencing. So now, let's think about how this is going to help us establish linkage between a particular marker in the genome and a genetic disease. So if we think about these microsatellite repeats, I told you they're polymorphic. They exhibit a lot of variation in size. And so here's an example showing you a female who has two intermediate sized microsatellites. And if you look at this-- if you did PCR and measured the size of these, you get two different bands because there are two different alleles of different length here. So you can see this individual has two intermediate length repeats. And this person has had children with an individual that has a short and a long microsatellite. And you can see that on the gel, here. Now this female is affected by some disease. And these two individuals have children. And you can see that a number of those children are affected by the disease. So what mode of inheritance does this look like? If you had your choice between autosomal recessive, autosomal dominant, sex linked dominant, and sex linked recessive, what mode of inheritance is this looking like? Oh, Carmen. AUDIENCE: Autosomal recessive. ADAM MARTIN: Autosomal recessive? Why do you go with recessive? Yeah, go ahead. AUDIENCE: Because there is a male that's affected. But not both of the parents are affected. So it seems like the father is heterozygous and the mother is homozygous recessive. ADAM MARTIN: That's possible. That's exactly the logic I want to see. Is there another possibility? Yeah, Jeremy. AUDIENCE: Autosomal dominant. ADAM MARTIN: It could also be autosomal dominant. So you're right. You're right. If this was not a rare disease, then that male could care be a carrier and could be passing it on to half the children. So that's good. 
You'd essentially need more information to differentiate between autosomal recessive and autosomal dominant. For the purposes of this, we're going to go with autosomal dominant. And what you see is that you want to look at the affected individuals and see if the disease phenotype is linked, or connected, with one of these microsatellite alleles. So if we look at-- we basically PCR DNA from all these individuals. And if you look at who is affected, each one of the individuals has this M double prime band. And none of the unaffected individuals has it. So obviously, it would be better to have more pedigrees and more data to really establish significance between this linkage. But this is just a simple example, showing what you could possibly see if you have one of these molecular markers linked to a particular disease allele. So that kind of establishes the principle. Now let's think about what are some other molecular markers that are possible? So another type of marker, and this is one that's the most common one, if I go here. So here, you see here's is a linkage map, here. And you see most of these bands are green. And the green markers, here, are what are known as Single Nucleotide Polymorphisms, or SNPs. So single nucleotide polymorphisms-- and this is abbreviated SNP. And what a single nucleotide polymorphism is, is it's a variation of a nucleotide at a single position in the genome. So it's just a one base pair difference at a position. So there's variation of single nucleotide at a given position, at a position in the genome. And because that's a pretty general definition, there are tons of these in the genome. Now one thing to think about is you could have a mutation in an individual that creates a SNP. So you could have a de novo formation of a SNP. But if you have a SNP and it gets incorporated to the gametes of an individual, then that variant is going to be passed on to the next generation. So this is something that could occur de novo. But it is also heritable. And if it's heritable, then you can follow it and use it to determine if a given variant is linked to a given phenotype, like a disease. So to identify a single nucleotide polymorphism, it's helpful to be able to sequence the DNA. And I'll talk about how we could do that in just a minute. But before I go on, I just want to point out a subclass of SNPs that can be visualized without sequencing. And these are called restriction fragment length polymorphisms. So restriction fragment-- so it's going to involve some type of restriction enzyme digest length polymorphism. It's a long word. But it's abbreviated RFLP. And what this is, is it's a variation of a single nucleotide. But this is a subclass of SNP. Because this is when the variation occurs in a restriction site for a restriction enzyme. So if you remember your good friend EcoR1, EcoR1 recognizes the nucleotide sequence GAATTC. And EcoR1 only cleaves DNA sequence that has GAATTC. So if there was a single nucleotide variation in the sequence, such that it's now GATTTC, or something like that, that destroys the EcoR1 site. And so EcoR1 will no longer be able to recognize this site in the genome and cut it. So you could imagine that if you had one individual in the genome having three EcoR1 sites, if you digest this region, you'd get two fragments. But if you destroyed the one in the middle, then if you digested this piece of DNA, then you'd only get one fragment. And that's something. 
Because it results in different sizes of fragments, that's something you can see just by doing DNA electrophoresis. And maybe you would use some method to detect this specific region, so that you're not looking at all the DNA in the genome, but you're establishing linkage to this specific area. You could use PCR. You can have PCR primers here and here. And you could then cut with EcoR1. In one case, you'd get two fragments. In this case, you'd get two fragments. In this case, if you amplified this region of the genome and cut with EcoR1, you'd only get one fragment. So you'd be able to differentiate between those possibilities. Yes, Malik. AUDIENCE: When you use PCR, are there [INAUDIBLE]?? ADAM MARTIN: What's that? AUDIENCE: Are there [INAUDIBLE]? ADAM MARTIN: Oh. You're saying what causes it to stop? That's a great question, Malik. Yeah. So initially, it's not going to stop. That's absolutely right. But because every step, each time you replicate, it's then primed with another primer. So you'd replicate something like this that's too long. But then the reverse primer would replicate like this. And it would stop. So if you go back to my slide from last lecture, look through that and see if it makes sense how it's ending. Because if you do this 30 times, you really will enrich for a fragment that stops and ends at the two primers, or begins and ends at the two primers, I should say. Good question. Thank you. All right. Now, let's talk about DNA sequencing. Because as I showed you, obviously, these SNPs, because there are so many of them, are probably the most useful of these markers to narrow in on where your disease gene is. And to detect a SNP, we need to be able to sequence DNA. So I'm going to start with an older method for DNA sequencing, which conceptually, is very similar to how we do DNA sequencing today. And so it will illustrate my point. And then at the end, I'll talk about more modern techniques to sequencing. So the technique I'm going to introduce to you is called Sanger sequencing. And that's because it was identified by an individual named Fred Sanger. And I'm going to just take a very simple DNA sequence, in order to illustrate how Sanger sequencing works. So let's take a sequence that's really simple. This is very, very simple, and then more sequence here. So let's say we want to determine the nucleotide that's at every position of this DNA fragment. So one way we could maybe conceptually think about doing this, is to try to let DNA polymerase tell us where given nucleotides are. And if we're going to use DNA polymerase, what are we going to need, in order to facilitate this process? Yes, Rachel. AUDIENCE: [INAUDIBLE]. ADAM MARTIN: You're going to need nucleotides, definitely. So we're going to need nucleotides. What else? To start, what are you going to need? Miles? AUDIENCE: Primer. ADAM MARTIN: You're going to need a primer, exactly. Good job. So you need a primer. So here's a primer. And now, we're going to try to get DNA polymerase to tell us whenever there is a given nucleotide in this DNA sequence. And so think with me. Let's say we were able to get DNA polymerase to stop whenever there was a certain nucleotide. So if we go through just a couple nucleotides, let's say, at first, we want DNA polymerase to stop whenever there's an A. So let's say there was a possibility it would stop at this A. If it's stopped at this A, you'd generate a fragment of this length. But if it read on through that A, there's another possibility that it would stop at this A. 
So we're kind of looking at when these are stopping. And the final possibility is it goes on and stops at this A. So if this DNA polymerase stopped only at As, you'd get fragments that are these three discrete lengths. Now let's consider another possibility. So pink here is stop at A. And in blue, I'm going to draw what would happen if it stopped at T. So they all start from the same place. If it stopped at T, it would just stop one nucleotide beyond this A in this simple sequence. So in blue here, this is stop at T. But if it's just a possibility, it stops. And some of the polymerases could go beyond this T and go to the next T and stop here. And again, this would be one nucleotide length longer than this pink one, here. And the final one would-- I'll just draw it down here-- would get out to this last T, here. So what you see is if we could get DNA polymerase to stop at these discrete positions, we'd get a different sized fragments, whether it was stopping at one nucleotide versus the other nucleotide. You all see how this is resulting in different fragment lengths. Yes, Andrew. AUDIENCE: How would you create a pattern [INAUDIBLE]?? ADAM MARTIN: There are companies now. You can basically take nucleotides and synthesize these primers chemically, not using DNA polymerase. AUDIENCE: I'm saying how would you know what primer to use, if you don't know the sequence? ADAM MARTIN: Oh, in this case, you'd have to start with some sequence that you know. So in most sequencing technologies, you kind of make a DNA library, where you know the sequence of the vector. And then you'd use the vector sequence as a primer to sequence into the unknown sequence. Great question. Good job. All right. So what we need now then is some sort of tool or ability to stop DNA polymerase when there's a certain nucleotide base. And to do that, we can use this type of molecule, here, which is known as a dideoxynucleotide. Remember, for DNA polymerase to elongate a chain, it requires that the last base have a three prime hydroxyl. And so what this dideoxynucleoside triphosphate is, is it's a nucleoside triphosphate that lacks a three prime hydroxyl. Here, I'll highlight that. So you see this guy? You see it bolt the highlight H? There's a hydrogen there on the three prime carbon, rather than the normal hydroxyl group. So if this base gets incorporated into a elongating chain, DNA polymerase is not going to be able to move on. So this method where you can add a certain dideoxynucleoside triphosphate to stop chain elongation is known as a chain termination method. So you're getting chain termination. And you're getting this chain termination with a specific dideoxynucleoside triphosphate. So these dideoxynucleotide triphosphates, if they get incorporated into the DNA, are going to halt the synthesis of that DNA strand. So if we take our example, here, this might be a reaction that has dideoxythymidine triphosphate. So if we had dideoxythymidine triphosphate in this sample and it's elongating, then when the polymerase reaches this point, there's a possibility that it will incorporate the dideoxynucleoside triphosphate. And if this is a dideoxynucleoside triphosphate, then there won't be a three prime hydroxyl. And DNA polymerase will just be like, oh, I can't go on! Because it's not going to have a three prime hydroxyl. So it's not going to be able to continue with the next nucleotide. So this is known as chain termination. So let me take you through an example, here. All right. So here's an example that you have a slide of. 
And again, there's a template strand, which is the top strand. And this method requires that you have a primer. And what's often done is you label the primer. So the first step is you have to denature your DNA. So you have to go from double-stranded DNA to single-stranded DNA. And then you mix that single-stranded DNA with, first, this labeled primer, such that the primer can anneal to the single-stranded DNA. You need DNA polymerase, as I've mentioned. And as, I believe, Rachel mentioned before, you need the building blocks of DNA. So you need the four deoxynucleoside triphosphates. So you always have the four deoxynucleoside triphosphates. But what's special here is you're going to spike several reactions with one of the dideoxynucleoside triphosphates. So you spike the reaction with a tiny amount of one of your dideoxynucleoside triphosphates. So let's say you have a reaction here. And this one here has dideoxyadenosine triphosphate. Then polymerase will elongate this strand until there's a thymidine on the template. And then there's a possibility that it will incorporate this dideoxy NTP. And if it does, then you get chain termination. And you get a fragment of this length. But the other possibility, because there is still the deoxy form of the NTP present, is that it incorporates a deoxyadenosine triphosphate there and keeps going, and then incorporates a dideoxy ATP later on, where you have another T. And so the polymerase will essentially randomly stop at these different thymidine residues, depending on whether or not a dideoxynucleoside triphosphate is incorporated. And that means for a given reaction, one in which you have dideoxy ATP, you get a certain pattern of bands that represent the lengths of fragments where you have, in this case, a thymidine base on the template. And then you do this for all four bases, where you have four reactions, each with a different base that's dideoxy. So when you're adding these, you're going to do four reactions: one with dideoxy ATP spiked in, one with dideoxy TTP, one with dideoxy CTP, and the last with dideoxy GTP. And because these nucleotides are present at different positions along the sequence, you're going to get a distinct banding pattern for each of these reactions. And using that banding pattern, you can then read off the sequence of DNA that's present on the template strand. So this is how sequencing was done for many, many years. These days, it's been made cheaper and faster. And now what's often used is next-generation sequencing. And one of the pain-in-the-ass things about sequencing before is you'd use a lot of radioactivity. Your primer would be radioactive, so that you could detect these bands. Right now, everything's done using fluorescence, which makes it much nicer, I think. And so in next-generation sequencing, your template DNA is attached to a solid substrate, such that it's immobilized on some type of substrate. And then you add each of the four nucleoside triphosphates. In this case, they're labeled with a dye, such that each one is a different color. But the dye also functions to prevent elongation, such that, again, it's this chain termination. When you incorporate one of these, the polymerase just can't run along the DNA. It incorporates one and then stops. So you get your first nucleotide incorporated-- it will incorporate one of these four. And it will be fluorescent at a certain wavelength, which you can see using a device or microscope.
And then what you then do is chemically modify this base, such that you remove the dye and allow it to extend one more base pair. And so you go one nucleotide at a time. And you read out the pattern of fluorescence that appears. And that gives you the sequence of DNA on this molecule that's stuck to your substrate. And you can do this in parallel. You can have tons, many different strands of DNA. And you can be reading out the sequence of each one of these strands in parallel. Great. Any questions about DNA sequencing? OK. Very good. I will see you on Monday. Have a great weekend.
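As an appendix to the sequencing discussion above: the chain-termination logic of the four Sanger reactions can be mimicked in a few lines of Python. The template below is invented and the primer length is arbitrary; the point is only that reading the fragment lengths from shortest to longest across the four reactions reconstructs the newly synthesized strand.

```python
# Toy Sanger-sequencing simulation: for each ddNTP reaction, list the lengths
# of the terminated fragments.  The template is written 3'->5' downstream of
# the primer and is invented for illustration.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def sanger_fragments(template_3to5, primer_len):
    """Return {ddNTP base: fragment lengths} for the four reactions."""
    new_strand = [COMPLEMENT[b] for b in template_3to5]  # what polymerase builds
    reactions = {}
    for base in "ATGC":
        # a fragment can terminate wherever the new strand incorporates this base
        reactions[base] = [primer_len + i + 1
                           for i, b in enumerate(new_strand) if b == base]
    return reactions

for base, lengths in sanger_fragments("TACGGTA", primer_len=5).items():
    print(f"ddNTP {base}: fragments of length {lengths}")
# Reading the bands from shortest to longest across the four lanes
# reconstructs the new strand 5'->3': A, T, G, C, C, A, T
```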
https://ocw.mit.edu/courses/8-01sc-classical-mechanics-fall-2016/8.01sc-fall-2016.zip
One of the big difficulties students have in a problem that involves many objects is to successfully identify third law interaction pairs. So today, I'd like to look at a problem which shows us how to think about that. So what we're going to look at is third law interaction pairs. Now, the problem I'd like to consider is the following. Suppose we have a block one and another block two sitting on block one on a surface and our surface has friction. And I'd like to push block one with a force F. And now I'd like to use Newton's second law to determine what is the maximum force I can push that block two will not slip. So I'm trying to find F max such that block two does not slip. Now, how do we even begin to think about this? Well, the first thing that we have to decide is when we apply Newton's second law, our first question will be, what is the system that we'll choose? And in this problem, there are three separate ways to think about this. Just to show you an example, so system a will be block one, system b will be block two, and system c will be block one and block two. And given these different systems, I can address different questions and we'll see that as we develop this. So let's start with block one and we'll start with breaking our problem down into block one and block two. And let's draw the free-body diagrams on those blocks, trying to identify action-reaction pairs. So let's begin with block one. First off, we know there's gravitational force and the Earth is the other pair there, which we're not drawing. There's friction between the surfaces. So there's a friction between the ground and block one. There is the normal force of the ground on block one. Block two is sitting on block one so there's a normal force of block two on block one. And finally, as you push block one, there's a friction force between block two and block one that's opposing the fact that block one is being pushed forward. So we have a friction force here between blocks two and one. Now, this friction down here is kinetic. But if the blocks are moving together, this friction here is static. So those are the free-body force diagrams on one. I'll choose unit vectors in a moment. Now, what about two? So let's draw two. Well, again, m2 g-- the Earth is the other element of the interaction pair. And now, block one is pushing block two up. So we have block one pushing block two up and notice our indices make it very easy to see that our first Newton's third law interaction pair is the normal force of contact between the two blocks. Now, what else? Here's the subtle thing is that this whole system will move to the right. What's the force that's making block two move to the right? Well, it's static friction. So static friction from block one and block two-- this is the static friction-- is causing block two to move to the right. And now we can see, again our third law interaction pair. So in this problem, we have two third law pairs, this one and I'll connect the line there, and those are the third law pairs. Now, we know by the third law that they're equal and opposite in magnitude. We can identify f. We can call this one N if we wanted just to save ourselves the problem of writing a lot of indices. Once we've done that, we're now ready to apply Newton's second law. We haven't yet figured out what the condition is that it will just slip. We'll get to that. But for the moment, we can now apply vector decomposition. So we need to choose some unit vectors. 
Because I'm pushing the system this way, it makes sense for me to choose my i-hat to the right. So here, I'm going to choose i-hat 1 and j-hat 1. Now, over here, I could choose the same unit vectors, even though I'm thinking about this as a completely separate problem with its own coordinate system. And I'll choose i-hat 2 and j-hat 2. But because these unit vectors are in the same direction, they're equal. And both blocks are moving in the positive i direction, so I expect both a1 and a2 to be positive. And now, on block one, I can write down F1 equals m1 a1. And because we have two different directions, I'll separate them out. I like to call this my "scorecard." And now I look at the forces. Oh, I missed the pushing force. But that's an interesting exercise. When I looked at this diagram, I saw I had two forces going this way and no force acting that way. I went back, checked my free-body diagram, and recognized that I forgot to put F in there-- always a good exercise to double-check your free-body diagrams before you apply Newton's laws. So now in the x-direction, we have the pushing force minus the static friction minus the kinetic friction-- we'll call this fk-- and that's equal to m1 a1. Now in the vertical direction, we have the normal force from the ground, N ground 1, minus block two pushing down on block one minus the gravitational force. And there is no acceleration in that direction. I double-check my free-body diagrams, I check my signs, and that looks right to me. Now, for block two, I'll apply the same analysis. F2 equals m2 a2. Separate out my two unit directions. Notice that even though these unit vectors are the same, I'm emphasizing that I'm talking about block two. I could have chosen different coordinate systems if I wanted. Now I look at my free-body diagram on block two. I see that the static friction is the only force in the positive i-hat direction. So I have f, the static friction, equals m2 a2. And in the vertical direction, I have that the normal force between the blocks minus gravity is 0.
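The transcript stops just after the component equations are written down. Collected in symbols, and carried one step further to the slipping condition (that last step is a standard completion of the problem, not read from the transcript), they are:

```latex
% Block 1 (pushed by F; kinetic friction f_k from the ground; normal force N and
% static friction f_s from block 2):
F - f_s - f_k = m_1 a_1, \qquad N_{g,1} - N - m_1 g = 0
% Block 2 (driven forward only by static friction from block 1):
f_s = m_2 a_2, \qquad N - m_2 g = 0
% Moving together: a_1 = a_2 = a.  Block 2 is about to slip when f_s = \mu_s N = \mu_s m_2 g,
% so a_{\max} = \mu_s g.  With f_k = \mu_k N_{g,1} = \mu_k (m_1 + m_2) g,
F_{\max} = (m_1 + m_2)\,(\mu_s + \mu_k)\,g
```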
https://ocw.mit.edu/courses/8-01sc-classical-mechanics-fall-2016/8.01sc-fall-2016.zip
Let's now consider two dimensional motion, and let's try to analyze how to describe the change in velocity. So again, let's choose a coordinate system. We have an origin, plus y, plus x. And let's draw the trajectory of our object. And now let's draw the object at two different times. So for instance, if I call this the location at time t1, and a little bit later here, this is the location of the object at time t2. We'll call our unit vectors i hat and j hat. We know that the direction of the velocity is tangent to this curve. So if we draw v at time t1-- and over here, notice the direction has changed-- v at time t2. And what we'd like to do now is describe, just as before, that our acceleration a of t is the derivative of the velocity as a function of time. What that means is the limit as delta t goes to 0 of delta v over delta t. Now, it's much harder to visualize the delta v in this drawing. And partly, the reason for that is these velocity vectors are located at two different points. And right now, the backs of these vectors are at different places in space. But remember that delta v is just v, in this case, at time t2 minus v at time t1. And our principle for subtracting two vectors at different locations in space is to draw the vectors where we put the tails at the same location. So here's the tail of this vector. We're just going to translate that vector in space. That is still v at time t1. These vectors are equal. They have the same length, and they have the same direction. And so delta v is just the vector that connects here to there. That's what we mean by delta v. And so you can see in this particular case that it's not obvious from looking at the orbit what the delta v is. So what we need to do is just trust our calculus. And so we write the velocity as dx dt i hat plus dy dt j hat, and we're now treating each direction independently. We call this vx i hat plus vy j hat. So that's our velocity vector. Then our acceleration is just the derivative of the velocity. We take each direction separately, so we have dv x dt i hat plus dv y dt j hat. Now, again, notice that the velocity component vx is already the first derivative of the component function for position. So what we really have here is the second derivative of the component function for position in the i hat direction and the second derivative of the component function in the j hat direction. And that is what we call the instantaneous acceleration. Now, again, this is sometimes awkward to draw, but you always must remember that this x component of the acceleration by definition is the second derivative of the component function for position, or the first derivative of the component function for the velocity. And likewise, the y component of the acceleration ay is the second derivative of the component function for position. And that's also equal, by definition, to the first derivative of that component of the velocity vector. And that's how we describe the acceleration. As before, we can talk about the magnitude of a vector. And the magnitude of a we'll just write as a. It's the components squared, added together, taken square root. And that's our magnitude. And so now we've described all of our kinematic quantities in two dimensions-- the position, the velocity as the derivative of the position, and the acceleration as the derivative of the velocity where each direction is treated independently.
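Because each direction is treated independently, the derivatives can be checked symbolically. Here is a small sketch; the particular trajectory x(t), y(t) is made up for illustration and is not the one drawn in the lecture.

```python
import sympy as sp

t = sp.symbols('t')

# A made-up two-dimensional trajectory, just to illustrate the definitions.
x = 3 * t**2          # x component of position
y = 2 * sp.sin(t)     # y component of position

# Velocity components: first derivatives of the component functions.
vx, vy = sp.diff(x, t), sp.diff(y, t)

# Acceleration components: derivatives of the velocity components
# (equivalently, second derivatives of the position components).
ax, ay = sp.diff(vx, t), sp.diff(vy, t)

# Magnitude of the acceleration: square the components, add, take the square root.
a_mag = sp.sqrt(ax**2 + ay**2)

print("v =", vx, ",", vy)
print("a =", ax, ",", ay)
print("|a| =", sp.simplify(a_mag))
```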
https://ocw.mit.edu/courses/7-01sc-fundamentals-of-biology-fall-2011/7.01sc-fall-2011.zip
PROFESSOR: Welcome to another help session on recombinant DNA. Today, we're going to be discussing transformation and protein expression. As you can imagine, there are often times when we will need a large amount of protein. But it can be difficult to get it from the original source. For example, you need insulin to treat diabetes, but it's not exactly practical to get a lot of insulin from humans. In order to get a lot of the desired protein, often other organisms will be used to express this protein. But it's a multi-step process. For example, let's say we want to express our human insulin in bacteria. Well, the human gene has both introns and exons, as you remember from lecture. Exons are what are actually spliced together in order to produce the final mature mRNA, which is later used to express the protein. Bacteria, on the other hand, don't have introns. They just have exons. So they are only capable of reading a gene that just has the exons and then producing the protein from that. In order to take a version of the gene that's from the eukaryotic cell and get it to be expressed in the prokaryotic cell, we first have to make something called cDNA. So let's begin. We're going to take our cell of interest. And the first step in creating the cDNA is we're going to isolate the mRNA of interest-- this is the mRNA for insulin, for example-- from the cell. And once we have our mRNA, we are going to add something called reverse transcriptase. This is a protein that's a DNA polymerase that's going to use the single-stranded RNA as its template. So it's going to take the RNA and it's going to put together this double-stranded DNA that we're going to refer to as cDNA. So this is the DNA for the gene, but unlike the original gene, it only has the exons. It doesn't have any of the introns. The next step, of course, is to get the cDNA into the bacteria. We do this using something called a vector. The vector is just a means of getting DNA into a cell. One common type of vector is a plasmid. The plasmid is a circular piece of DNA that the bacteria can then take up and read. The way we're going to get our cDNA into this plasmid is through the use of restriction enzymes. As you remember from our previous help session, restriction enzymes can cut up DNA and create these overhangs of nucleotides. So we're going to cut up the cDNA, and we're going to cut up the plasmid, and they're going to have overhangs that are complementary. This means that when we add the cDNA to the plasmids, they're going to hybridize, and then we can add DNA ligase. When we add DNA ligase, it will create phosphodiester bonds between the cDNA and the plasmid vector. And finally we'll get the cDNA inserted into our plasmid. The next step, of course, is getting the plasmid into the bacteria in order for the protein to be expressed. There are multiple different ways to do this. One common way is called heat shock. What happens is that the bacteria are heated up and then cooled rapidly, and this creates lots of little holes in the membrane, which allow the plasmid to get into the cell. Once you have the plasmid in the bacteria, your job is pretty much done. Now the bacteria will naturally express this protein, so you can grow up the bacteria in large quantities and get a lot of the protein of interest. Referring back briefly to the vector, there are several parts that are important for it to have. We're going to need to have the origin of replication, the promoter, and the selection marker.
The origin of replication, or ORI, is necessary if we want the bacteria to produce more copies of this plasmid. So once the plasmid gets into the bacteria, if we don't have an ORI, as the bacteria grow and reproduce, none of the daughter cells will have this plasmid. We'll have to continually transform them. However, if it has the ORI, as a bacterium grows and reproduces, it will also replicate this plasmid. Another very important thing to have is the promoter. The promoter is a section of DNA which signals for the RNA polymerase to bind. The RNA polymerase will bind to the promoter, and then will proceed down the DNA on the plasmid to read the actual gene and transcribe it into the mRNA, which then ultimately can be made into protein. Finally, you need a selection marker. Now, as we talked about earlier, when you heat shock the bacteria, the plasmid will get taken up. However, not all of the bacteria will take up the plasmid. In order to get rid of the unwanted bacteria, the bacteria that don't have the plasmid, we're going to use a selection marker. A very common selection marker for bacteria is antibiotic resistance. For example, if the plasmid provides ampicillin resistance, then this means that any bacteria that take up the plasmid will be resistant to ampicillin. The bacteria that don't will still be vulnerable to it. So you could plate all of your bacteria on a plate containing ampicillin, and the ones that have the plasmid will survive. The ones that don't have the plasmid will perish. So let's go once more to the original example and discuss what we're going to need for our vector. So again, we want to express human insulin in the bacterial system. There are six possibilities for what we might need on our vector: the bacterial ORI, the human ORI, the bacterial promoter, the human promoter, the bacterial selection marker, and the human selection marker. Pause for a minute to give yourself a chance to decide what you think the vector needs, and then we'll go over it together. OK. Does it need a bacterial ORI? Yes. If we want to grow up a large amount of insulin, we're going to put the plasmid in the bacteria, and the bacteria need to create more of this plasmid. Does it need a human ORI? No, we're not actually inserting the plasmid into a human cell. So the human cell is never going to need to create more of them. Just the bacterial cell. What about a bacterial promoter? Yes. Even though it's a human gene, the promoter has to be for the bacteria, because it's going to be the bacterial RNA polymerase that will bind to the promoter and ultimately make the mRNA from the cDNA. What about a human promoter? No, we don't need a human promoter because the human RNA polymerase won't be involved. Again, this is only going to be in the bacterial cells. It's not going to be in a human cell, so it doesn't need a human promoter. And finally, for the selection markers, once again, we only need the bacterial selection marker, not the human selection marker. We're not dealing with full human cells at this point. We're just dealing with a plasmid, so we only need to select for bacterial cells that have the plasmid of interest. This has been another help session on recombinant DNA. Thank you for your time.
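Going back to the reverse transcription step described above, here is a toy sketch of what the enzyme accomplishes at the sequence level. The mRNA fragment is made up purely for illustration; real cDNA synthesis is of course done enzymatically, not in software.

```python
# Toy sketch: from an mRNA sequence to the two strands of the cDNA.
# The sequence below is hypothetical, chosen only for illustration.

mrna = "AUGGCUUACCGA"   # made-up mRNA fragment, written 5' -> 3'

# Reverse transcriptase builds the complementary DNA strand (A-T, U-A, G-C, C-G).
rt_pairing = {"A": "T", "U": "A", "G": "C", "C": "G"}
first_strand = "".join(rt_pairing[base] for base in mrna)   # 3' -> 5' as written

# Second-strand synthesis restores a sequence matching the mRNA, with T in place of U.
second_strand = mrna.replace("U", "T")

print("mRNA          5'-" + mrna + "-3'")
print("cDNA strand 1 3'-" + first_strand + "-5'")
print("cDNA strand 2 5'-" + second_strand + "-3'")
```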
https://ocw.mit.edu/courses/8-01sc-classical-mechanics-fall-2016/8.01sc-fall-2016.zip
We've seen previously that if a rope is under tension and we approximate the rope as massless, then the tension is uniform everywhere along the rope, even if the rope is accelerated. However, if the rope has nonzero mass, then its tension will vary along its length. We can see that by looking at this example. Imagine we have a massive rope of length l suspended from a ceiling. Now, it's easy to see that, at the top of the rope, a little element of rope right at the top has to support the entire weight of the entire rope. So if I were to examine just a little piece at the top here, the weight of the entire rope would be acting downwards. And for the rope to remain stationary, there has to be a tension upward. I'll call this t-top, because this is at the top of the rope. And so we see immediately from that that at the top of the rope, the tension is just equal to mg, where m is the mass of the entire rope. Similarly, if we asked what the tension is at the bottom of the rope, an element right at the bottom of the rope isn't supporting any weight, because there is no weight below it. And so at the bottom, the tension is 0, because there is no weight pulling on the bottom of the rope. So the tension is going to vary from mg at the top down to 0 at the bottom. And if we wanted to work out at some distance from the ceiling-- let's call it x-- at some position x-- say here, what the tension is at that point-- one way of figuring that out would just be to imagine we cut the rope at this point and then ask what tension force would be necessary to support this bottom part of the rope. This length is l minus x, so that fraction of the rope is l minus x over l. And the mass of that fraction of the rope is that fraction times m. And so the tension at this point, t at x, is equal to the weight of the length of rope below that point. So that's this mass times g. And I can rewrite that as 1 minus x over l times mg. And this gives me mg if I put in x equals 0, and it gives me 0 if I put in x equals l. So that's one way of figuring this out. I'd like to use this same example, though, to introduce another, more elegant way of analyzing the tension as a function of position on this rope. The advantage-- so this is going to be a little bit more complicated, but it's a much more powerful method, and it's one that we can generally use for any continuous distribution of mass as opposed to a point mass. So let's consider the same example again of a hanging rope of length l and mass m. Here's the length l and the mass m. And the approach we're going to use is called differential analysis. It's a technique from calculus, and it's applicable to any continuous distribution of mass. What we're going to do is imagine our continuous distribution of mass is made up of a whole bunch of little pieces, little elements, examine f equals ma acting on a single element, and then generalize to the entire mass. So let's do that here. What we'll do is we'll examine a piece at some position x. So I'm measuring x from the top. And let's say I define a little element here, which is at position x and which has an extent delta x. So this thickness is delta x, and we'll call the mass of that little piece, that little element, delta m. Now, one thing I want to point out to begin with is that we want to pick an element that's somewhere in the middle of the distribution, not at one end or the other. The endpoints are special cases, so we want to pick a general case.
So choose some x somewhere in the middle of our distribution which has a finite extent. That extent is delta x. In this case, we'll assume that that piece delta x has a mass delta m. So let's blow that up here and analyze it. So here is my element. And let's analyze what forces are acting on it. So there's gravity acting downwards, which will be delta m times g. There is a tension acting upwards, exerted by the rope above it. I'll write that as t of x, because it's at the location x. There's also a tension exerted in the downward direction by the remainder of the rope below the mass element. And I'll call that t of x plus delta x. We expect the tension to vary along the rope. And because this element does have a finite extent, the tension at the bottom is going to be slightly different than the tension at the top-- so t of x upwards, t of x plus delta x downwards, and then the weight downwards. So let's now-- that's our free body diagram. Let's write down Newton's second law, f equals ma for that mass element. So in the positive direction, which is downward, we have t of x plus delta x plus delta m times g. And then in the upward or minus direction, we have minus t of x. So those are the combined forces. Now, this rope is suspended. It's not moving. And so mass times acceleration is just 0. I'm going to rearrange that. I can write that as t of x plus delta x minus t of x is equal to minus delta m times g. And by the way, let me remind you-- if this were a massless rope, then delta m would be 0. And so the right-hand side would be 0, and the tension would be uniform. We'd have the same tension above and below. But because the rope does have mass, and in particular, this element has a nonzero mass delta m, there is a difference in the tensions. OK. Now, our delta m can be represented in terms of what this length is. Note that that mass, delta m, is just a fraction of the total mass-- the fraction that's in that particular mass element. Well, the fraction of the total rope is just the length of this element, delta x, divided by the length of the entire rope, which is l. So that's the fraction, and I multiply that by the total mass. So that is my mass delta m in terms of the length. So now I can rewrite this equation as t of x plus delta x minus t of x is equal to minus delta x m over l times g. Now, I'm going to rearrange this by dividing both sides by delta x. I'll do that over here. So we have t of x plus delta x minus t of x divided by delta x is equal to minus mg over l. This tells us how the tension is varying over this small but finite-sized mass element, delta x. Now I'd like to examine what happens if I go to the limit of a small mass element-- the limit as delta x goes to 0, or in other words, the limit of an infinitesimally small mass element. So I'm going to take the limit of this equation as delta x approaches 0. Now, the left-hand side of this equation should look familiar. It's just an expression for the derivative of the tension t as a function of position. So I can write that as dt dx, and that's equal to minus mg over l. This is an example of a differential equation, or an equation that involves a derivative. This particular differential equation can be solved very simply by a technique called separation of variables, where I just take the differentials-- the dt and the dx-- and put them on different sides of the equation and then integrate both sides of the equation to find the solution.
So in this case, I'll multiply both sides of the equation by dx. So I get dt on the left-hand side is equal to minus mg over l dx. And now I want to integrate both sides. So I'll integrate this side. And remember, mg over l is a constant, so I can keep it outside the integral. And I'll integrate this side. I'm going to do a definite integral over the continuous distribution that I'm studying. So on the right-hand side, I'm going to start at x equals 0 and go to my position of interest, which is x. And to avoid confusion, I'm going to call the integration variable dx prime. This is a dummy variable. So x prime here represents all the values of position, ranging from my first endpoint 0 up to my other endpoint x. So x here represents a particular position, whereas dx prime is a placeholder for all the positions between the two endpoints in my infinite sum, which is an integral. So that's on the right-hand side. On the left-hand side, I'm integrating with respect to the tension t. My limits need to correspond to the limits on the right-hand side integral. So on the right-hand side, my lower limit is at x equals 0. So on the left-hand side, I want to have my lower limit be the tension at that position x equals 0. So that's t of 0. And then the upper limit of the integral is the tension at the upper limit of the position integral, so that's t of x. And again, to avoid confusion, I'm actually going to call the integration variable dt prime. This is a dummy variable. It's a placeholder for all the values of tension from the tension at x equals 0 to the tension at my position of interest x. And so this integral now tells me how the tension is varying in a continuous fashion along this continuous mass distribution. So now I can evaluate both integrals. On this side, I have the integral of dt prime, and so that's just going to give me t of x minus t of 0. And that's equal to minus mg over l times x, because the integral of dx prime from 0 to x is just x. So this tells me how the tension changes from the endpoint to some arbitrary position x. If I want to actually solve for t of x, I need to specify what the tension is at the endpoint. But we know what the tension is at the endpoint. We found that earlier. We found it from the simple argument that at the endpoint, the tension here is just equal to the weight of the entire length of rope below that point. So we know that t at x equals 0 is just equal to mg. And therefore, the tension at position x-- if I bring t of 0 over to this side-- is just mg minus mg x over l. And so I can just write that as mg times 1 minus x over l. So that tells me what the tension is as a function of position. Note again that if I put in x equals 0, I just get mg. If I put in x equals l, I get the tension is 0. And for any point in between, we see that the tension varies smoothly between mg and 0. This is exactly the same result we found earlier by just cutting the rope at x and asking how do we balance the weight of the remaining rope below that point. But the advantage of this technique, which is a little bit more complicated, is that it's a very powerful technique applicable to any continuous distribution of mass. So in this specific problem, if instead of the uniform density rope that we had here, imagine we had a clumpy rope where the mass of the rope wasn't distributed smoothly, but there were clumps, little parts of the rope that were heavier than others, and so the density varied with position along the rope.
In that case, we would represent that when we were writing what our mass element delta m is. Delta m, instead of just being delta x over l times m-- where m over l was the uniform density of the rope-- would have to involve some position-dependent density. So delta m would depend upon position. And then when I did this integral, instead of a constant out front of my integral of dx prime, I would have something that was a function of x, and I'd get a different value for the integral. But the technique would still work. So this differential analysis technique is applicable to any system where we have a continuous mass distribution. And we're going to actually use this technique over and over again in this course. We'll see it a number of times. This is just the first instance we're using it. We wanted to introduce you carefully to the approach.
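The separation-of-variables step, and the clumpy-rope generalization just mentioned, can both be checked symbolically. Here is a small sketch using sympy; the particular non-uniform density profile in the second part is a made-up example, chosen only so that it still integrates to the total mass m.

```python
import sympy as sp

x, xp, l, m, g = sp.symbols('x x_prime l m g', positive=True)
T = sp.Function('T')

# Uniform rope: dT/dx = -(m/l)*g, with T(0) = m*g from the simple cutting argument.
uniform = sp.dsolve(sp.Eq(T(x).diff(x), -m * g / l), T(x), ics={T(0): m * g})
print(sp.factor(uniform.rhs))        # m*g*(l - x)/l, i.e. m*g*(1 - x/l)

# "Clumpy" rope: a made-up position-dependent linear density lambda(x) = 2*m*x/l**2,
# which still integrates to the total mass m over the length of the rope.
lam = 2 * m * xp / l**2
assert sp.simplify(sp.integrate(lam, (xp, 0, l)) - m) == 0

# Same differential analysis: dT/dx = -lambda(x)*g with T(l) = 0, so the tension
# at x is the weight of the rope hanging below that point.
T_clumpy = g * sp.integrate(lam, (xp, x, l))
print(sp.simplify(T_clumpy))                      # equals m*g*(1 - x**2/l**2)
print(T_clumpy.subs(x, 0), T_clumpy.subs(x, l))   # endpoints: m*g and 0
```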
https://ocw.mit.edu/courses/7-013-introductory-biology-spring-2013/7.013-spring-2013.zip
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Last time I told you about oncogenes. Oncogenes-- we discussed the fact that these were gain of function mutations that occur in cancer cells-- that occur in normal cells as they develop into cancer cells, and then occur in cancer cells. And we discussed the fact that these are dominant mutations. And this was exemplified by the Weinberg experiment in which he transferred a dominant oncogene into a "normal cell' and caused that cell to become transformed despite the fact that it had normal copies of the same gene in its genome-- definition of a dominant mutation. We also talked about tumor suppressor genes. And tumor suppressor genes importantly, carry within them loss of function mutations in the context of cancer. Loss of function mutations, as such, these mutations are recessive mutations at the cellular level. It's necessary to inactivate both copies of a tumor suppressor gene in order to give rise to a cell that's lacking that function altogether. And that's a cell that is on its way to becoming a cancer cell. And these loss of function mutations can be various. You might find nonsense mutations in a tumor suppressor gene-- blocks the production of the protein. You might find a deletion. It takes out the entire gene, or a big portion of the gene. You might find a frame shift mutation-- again, that blocks the ability to make any protein, or much protein. And I also told you about chromosome loss events. The loss of the chromosome that carries the normal copy of the tumor suppressor gene as a frequent second event to lose the remaining wild type copy of the gene. OK, so this is just a little bit of review. Now, I framed the discussion about oncogenes and tumor suppressor genes from the point of view of cell division control and the production of more cells through the action of these mutations. And indeed, cancer is a product of inappropriate cell division. And the two genes that we discussed in some detail regulate this process. The RAS gene and the product oncogene stimulate cell division. The RB tumor suppressor gene inhibits cell division. When we consider the kinds of mutations that we have in these genes that regulate the cell division process-- in the case of the RAS gene, we find activating mutations. And in the case of the tumor suppressor gene RB, we find inactivating mutations. RAS is in oncogene. RB is a tumor suppressor gene. Activating mutations in the oncogene. Inactivating mutations in the tumor suppressor gene. But cancer isn't only about cell division. It's really about cell number-- inappropriate cell number. And there's another important process to remember in this context. Apoptosis-- program cell death, which I've referred to many times in many different contexts. Apoptosis, which results in dead cells. And failure to undergo apoptosis properly will likewise result in too many cells, which again can be cancer causing. We have genes that regulate this process. For example, the P53 gene, which we'll discuss in some more detail, positively regulates apoptosis. And another gene called BCL-2, which negatively regulates apoptosis. Is P53 an oncogene or a tumor suppressor gene? Oncogene? Tumor suppressor gene? Good. P53 is a very important tumor suppressor gene. 
It is inactivated in the context of cancer because you want to get rid of apoptosis as you are developing a cancer cell. Is BCL-2 an oncogene or a tumor suppressor gene? It's an oncogene. We find gain of function mutations in BCL-2, producing more of this inhibitor of apoptosis in the context of cancer to block apoptosis, leading to an increased number of developing cancer cells. OK, so oncogenes and tumor suppressor genes can regulate these two processes differently. As you can see, inhibitors in this case, stimulators in this case, in the context of tumor suppressor genes. All right, a little bit more about P53, which is probably the most important cancer gene of all. It's mutated in at least 50% of all human tumors. And P53 is known to function as a molecular policeman of sorts. It's sensitive to various perturbations within the cell. For example, DNA damage. DNA damage will feed into the P53 regulated pathway, as will a fascinating process, which is still incompletely understood, where the cell can recognize that it's dividing inappropriately. Abnormal proliferation-- cells dividing when they shouldn't. This gets detected, and this, too, can feed into the P53 pathway. And this leads to an increase in the levels of the P53 protein. P53 is a transcription factor. It regulates the expression of other genes. And the genes that it regulates fall into two broad categories, some of which cause the cell to undergo cell cycle arrest. When the P53 pathway is activated under certain circumstances, the cells are instructed to arrest, during which time whatever the damage is can be fixed before the cell continues to cycle. If there's DNA damage, the cell might arrest, fix the damage, and then continue on. In addition, P53 regulates apoptosis, as I mentioned a few moments ago, causing the cells to die. Causing the damaged cells to die-- and that's good because dead cells make no tumors. This is a sacrifice by the individual cell that has been damaged, basically saying, I'm in trouble, killing itself, and allowing the organism to survive. OK, and this is, we now know, a very important process in cancer prevention. And I'll tell you some examples of how we know that momentarily. OK, so I've introduced you to a couple of different oncogenes and a couple of different tumor suppressor genes. I want to focus a little bit more on the tumor suppressor genes in one respect. We touched on this last time, but very briefly. Tumor suppressor genes in sporadically occurring cancers. And sporadic means that the individual has no family history of that particular cancer type-- sort of a chance event. And tumor suppressor genes in sporadic cancers require two mutational events. And I described these as hits. And we typically do describe them as hits in the context of tumor suppressor gene inactivation. Sometimes it's mutations. Sometimes it's mutations coupled with chromosome loss as the second hit. But regardless, two mutations are necessary to get rid of both copies of the tumor suppressor gene. And I also briefly introduced you to the fact that, often-- not always, but for the most part-- familial cancer syndromes, where the individuals have a predisposition to developing a particular type of cancer, are caused by inherited mutations in tumor suppressor genes. So one of the hits, one of the mutations, is inherited from one parent. Meaning that every cell in that person's body carries one of the mutations already. Meaning that only one hit, one mutation, is required somatically.
That is in the individual's own cells in their body. And this is why these individuals are cancer prone, because they're one step away from lacking the tumor suppressor gene altogether, whereas in most people two mutational events are necessary, and it's rare-- not never, but rare-- to get those two mutations. And I showed you briefly this pedigree, which is a pedigree of familial retinoblastoma, where the individuals inherit a defective copy of the RB gene, and as such are predisposed to the development of this tumor of the eye. And in fact, they will develop the tumor typically in both the eyes, and typically multiple foci of tumor. So they inherit a defective copy of the gene from one parent and they go on to develop the disease at high frequency. When you look at this pedigree, remembering back to your lessons in disease genetics, this looks like an autosomal dominant disease. If you inherit the disease allele, you have a very high likelihood of developing the disease regardless of who your other parent was. It looks like an autosomal dominant disease with actually incomplete penetrance, as we can see here. And we'll talk more about this in a second. An autosomal dominate disease, but interestingly, we've been talking about the fact that the mutations are actually recessive. So this seems confusing. So who can explain it? How do we have a recessive mutation at the cellular level causing what appears to be an autosomal dominant disease? Who can answer that question? Yeah? AUDIENCE: Predisposition is the dominant. It only needs one mutation. PROFESSOR: Exactly. Exactly. You are predisposed. And there's a very high likelihood that in some or in fact, many of your cells, this second event will occur. It's almost guaranteed. And since it's almost guaranteed, if you inherit just one mutation, you will develop the disease, and therefore it appears to be at the organismal level-- autosomal dominant-- because your predisposition nearly always guarantees that you will develop the disease even though the mutations at the cellular level are recessive. And so with that in mind, as we consider a pedigree similar to the one I just showed you, where one parent is heterozygous for the mutation-- the loss of function mutation-- passes that onto the next generation and beyond, in this individual, as well as the other ones shown with the dark symbols where a tumor did develop, what is a necessary second event? The loss of the wild type of allele, either by a second mutation, or a chromosome loss event. And that happens at very high frequency. In the case of retinoblastoma, it typically happens in a dozen cells in the developing retina-- leading to an average of a dozen independent tumors in those kids. OK, but now let's think about this individual here who did inherit the defective allele because he actually passed it on to his three sons, but he himself did not develop retinoblastoma. How can we explain that? Why was he spared? Well, there are two general explanations. First, he was incredibly lucky. Although it's highly likely that some cell or cells in the developing retina will mutate the normal copy of the gene. In him, it just didn't happen. He got lucky. None of his cells mutated the normal copy of the gene. His eyes developed normally. And after that point, actually, the cells are much less sensitive to mutation, and therefore after about three or five years of age, you typically wouldn't develop the tumor. So he might be incredibly lucky. 
Or it might be that in him, because of some other inherited allele of some other gene, even if he were to lose the wild type alleled of RB gene it wouldn't matter, because he's got some other gene that's functioning maybe in the place of the RB gene, leading to him to be protected. And these two possibilities exist, and we have examples of how both can be important. OK, final question on this slide-- what would happen if an individual inherited a mutation from both parents and was therefore homozygous for a mutation in a tumor suppressor gene. What do you think would happen? The answer was they would be stillborn or they wouldn't make it out of embryogenesis. And that's usually the case. And we know that not so much from the study of people who are homozygous for these mutations, because it's actually quite rare to find people who are heterozygous who have children-- so the number of such examples is few-- but we know it from knock-out mice. All of these tumor suppressor genes exist within the mouse genome as well. And my group and others have made mice with mutations in these genes. And actually we know that, in fact, for many of the tumor suppressor genes that we care about, like the RB tumor suppressor gene, if one creates a homozygous mutant mouse, or the APC tumor suppressor gene, which is important in colon cancer prevention, or the BRCA1 tumor suppressor gene, which is important in breast and ovarian cancer prevention-- in all of these cases, the embryos don't survive. And they die at different points along the way of embryogenesis. And it's actually not because they develop lots of cancer as embryos. Although, you might have thought that was true. They die because these genes are actually important in normal development. They're not there just to protect against cancer. They're there because they play a role in regulating normal cell division processes, normal cell death processes, normal physiology, such that when they're missing, the embryo can't survive. There's actually one exception to this. Well, I shouldn't say it that way. There are a few exceptions to this-- but one notable exception to this. And that happens to be the P53 tumor suppressor gene that I've introduced you to. My group and others have made animals that are mutant for P53 either heterozygous for the mutation. What would you expect the phenotype of these mice to be? Are they going to be totally normal mice, do you think? They are, in fact, cancer prone. They look like those people with inherited predisposition to cancer. They carry one mutant copy of a tumor suppressor gene. And they're one mutational event away from lacking the tumor suppressor gene altogether. And that happens at some frequency in their cells. And the mice will go on to develop cancer and die early for that reason. We cross these mice together with an expectation that again, they would not survive embryogenesis. But in fact to our surprise, you can make a fully P53 null mouse. And what do you think the phenotype of that mouse is? It's very cancer prone. Because this is a mouse, that in all of its cells, this very important tumor suppressor gene is lacking. Normal mice will live about 2 and 1/2 to three years. These mice will live about 1 and 1/2 years and die from cancer as a consequence. These mice will live about four to six months and die from cancer. So P53 is actually not important in normal development. It is a true tumor suppressor gene. 
It probably evolved to protect cells against the kind of damage that is inflicted on cells in an individual's lifetime. And if that damage is sufficiently great, the cells are eliminated or arrest permanently so they will not develop into cancer cells. It's a true tumor suppressor gene in that case. OK, so I've told you about oncogenes and tumor suppressor genes. We are now entering an era-- over the last couple of years, really-- where many, many more cancer genes are being discovered through the application of genomic sequencing in the context of cancer. Cancer genomics is all the rage, including here at MIT, for example in the Broad Institute. And there are many papers appearing in the literature like these looking at the complexity of the genome of individual cancers, like this study out of the Broad on prostate cancer, and this study out of the Sanger Institute in England on small cell lung cancer. All the genes or the entire genomes of lots of different cancers are being sequenced and compared to the normal DNA of the same individual to catalog all of the mutations that are present within an individual tumor. And although there are differences depending on the cancer type, the average cancer genome actually has hundreds and sometimes thousands of mutations. Small cell lung cancer, for example, has tens of thousands of mutations compared to normal cells of the lung. Why? Because they arise following years of exposure to cigarette smoke, which carries mutagens that bathe the DNA in mutation-causing chemicals, leading to mutations. Now, not all of the mutations that you find in a cancer cell are relevant to the cancer phenotype. In fact, we think that amongst all those mutations that are found, there are only about 5 to 20 or so mutations in oncogenes or tumor suppressor genes. And we call these mutations driver mutations. Driver mutations, meaning they actually are participating in the development of the tumor, causing some aspect of the cancer phenotype. And the remainder, we call passenger mutations. Silent mutations-- mutations that actually don't do anything to the cancer cell, but they just happened to occur at the same time, or in the lifetime of the cancer cell when another important mutation took place. So that clone of cells that develops carries those mutations too, but they're not actually contributing to the cancer phenotype. So among the hundreds and thousands of mutations, some of them really matter, and some of them are just going along for the ride. It makes the analysis of the cancer genome much more complicated, actually, because it's hard to tell what's a passenger, and what's a driver. But we are increasingly recognizing what the important ones are and focusing our attention on them. OK, that's all I'm going to tell you about cancer genetics. And I want to now turn my attention for the last 25 minutes-- and I may run over a little bit, so I ask for your patience if I run over a little bit, because I do want to finish this-- to talk about cancer therapy. Before I make the switch to cancer therapy, I want to first introduce the concept of cancer prevention. A lot of us work on cancer genes and cancer genetics to understand how to treat cancer better. But in the future, hopefully we'll have many fewer cancers to treat, because we'll be able to prevent them. If people would stop smoking, we'd have 80% fewer lung cancers to treat. If you use sunblock and stay out of the sun, we'll have fewer skin cancers to treat and so forth.
Better diet and more exercise will prevent a lot of other types of cancers. So there are lifestyle things that can lower the number of cancers in the context of cancer prevention. There are also ways to prevent agents from producing cancer in your body. And the best example of that is Gardasil. Gardasil is an example of cancer prevention involving a particular virus-- human papillomavirus-- and specifically, human papillomaviruses, which are described as the high risk type. Human papillomaviruses or HPV of the high risk type can cause cervical cancer. They are the main reason that women develop cervical cancer. And they're responsible for other types of cancers as well, including in men. And what we know is that a normal cell of the cervix, when infected by the human papillomavirus, will at some frequency, and after a period of time, develop into cervical cancer. This suggests that the virus-- some genes of the virus-- are changing the cell's behavior in such a way that it develops into cancer. Yes? AUDIENCE: I don't know if you're talking about our class or something else, but it's very rude to talk. It makes so much noise during the lecture. PROFESSOR: Thank you. So something in the virus is causing the cells to divide abnormally into a tumor. And this has been studied at length. And we now know that there are two genes, which are responsible for causing cervical cells to become cancer cells-- two genes of the virus called E6 and E7. And it's been learned that these genes in fact encode proteins that inhibit cellular proteins that we're actually quite familiar with. And we now believe we understand why the virus causes cancer. E6 inhibits P53. And E7 inhibits RB. So the virus, for its own reasons of viral replication, takes out these two tumor suppressor genes. And as such, the cells are lacking these two critical tumor suppressor genes, and they're well on their way to becoming uncontrolled cancer cells. OK, so that's how HPV high risk types cause cancer. And what was developed in the context of Gardasil is an HPV vaccine so that individuals cannot be infected productively with this class of viruses. Specifically, it's a component vaccine made of recombinant proteins that are present in the high risk types of HPV. As a component vaccine, this vaccine does not produce a replicating virus. It's just pieces of the virus. So there's no risk of a viral infection here. But a potent antibody response can be elicited, including neutralizing antibodies that will prevent an individual from being infected by the real thing at a future time. OK, so that's an example of cancer prevention, eliminating an etiological agent that is responsible. There aren't many examples of virus-caused cancers in humans. So this is kind of a special case. But HBV-- hepatitis B virus-induced liver cancer-- will be another one before too long. OK, so now let's talk about cancer therapy. I'm going to talk to you in a few minutes about some new cancer therapies that are based on our improved knowledge of the genes in cancer cells. I'm going to tell you more about anti-HER-2 antibodies. I'm going to talk to you more about a small molecule inhibitor of this kinase. There are actually many therapies that are based on mutations that we know occur in cancer cells. There are processes that I've mentioned to you, like angiogenesis, the recruitment of new blood vessels. These too have led to new therapies for cancer that block that process and inhibit cancer development.
In the case of tumor suppressor genes, individuals are trying to develop gene therapy to put the genes back. If the gene is lost in a cancer cell, perhaps you can restore its function by gene therapy, and normalize the growth of the cancer cells. And although I won't talk about it, there's a lot of promise for immunitherapy for cancer. Cancer cells acquire a lot of mutations. As such, they produce a lot of antigens. In theory, your immune system should recognize those as foreign and eliminate the cancer. But in general, the cancer wins, the immune system fails. And we think that there's ways the cancer actually inhibits the immune system from functioning properly. But that's being figured out now, so it's possible that we'll be able to develop new cancer therapeutics based on the immune system. All right, but before I get into the cool new stuff, let me tell you just a little bit about cancer therapy more generally-- what we would consider to be conventional cancer therapy. The most effective form of cancer therapy is surgery. If you can get to the tumor early before it has spread, you cut it out, the person is generally cured. Another very effective form of cancer therapy is radiation therapy. And this is good because you can focus the radiation beam directly on the cancer cells and eliminate them by causing a lot of damage to those cells. And the third is chemotherapy. And chemotherapy is used when you think that the cancer has spread, so radiation can't work because the cancer cells are somewhere else-- and you need a drug that can diffuse throughout the body and hopefully kill the cancer cells. Radiation and many chemotherapies act in the same general way. Adriamycin, which you will have heard of, cisplatin, which you will have heard of-- these are well used cancer therapeutics-- and many more function by inducing in the cancer cell DNA damage. And the damage can be sufficiently severe that the cell will die. And these therapies can be effective. There's another class of cancer drugs for which Taxol is the best known, which are described as mitosis inhibitors. These drugs actually bind to microtubules, block the formation of the microtubular spindle, and that way prevent cells from dividing. And since cancer cells divide a lot and you want to inhibit their division, these drugs are used and can be effective. In fact, they're used because they were tested and shown to be effective first in cell-based studies, where one looks at the growth or survival of cells in a Petri dish, scoring the number of cells or the percentage of cells that are alive at any given time, or in any given dose of drug when the concentration of drug is increasing in this experiment. What we find is that for certain normal cells, they will survive to a certain concentration of drug and then start to die off. And for certain cancer cells, the concentration required to kill them is lower. And this difference is called the therapeutic window. In theory, this looks good, because it suggests that the cancer cells are more sensitive to the drug than are normal cells. And that's why some cancer therapeutics work for some cancers. But there are problems. Some normal cells in your body, unlike these normal cells, are very sensitive to the drug at low concentrations-- the DNA damaging agents, for example. This results in side effects. And I suspect you are all familiar with the side effects of cancer chemotherapy. Your hair falls out. You get nauseous. You get anemic. 
This is because cells in your hair follicles, or your intestines, or your blood system are dying at low concentrations of the drug. They're actually dying by apoptosis. They're actually dying by P53-dependent apoptosis. So that's why cancer drugs cause many bad side effects, because some cells in your body are very sensitive and will kill themselves in response to low concentrations of the drug. The second problem is that some cancer cells-- in fact not a small percentage-- are very resistant to the concentrations of the drug, even high concentrations of the drug. And one reason for that is that many cancer cells are lacking P53. I told you that P53 is mutated in about 50% or more of human cancers. I also told you that P53 was required for cells to respond to DNA damage and die. And these cells lack that protein, and therefore are very resistant to dying. So many therapeutics don't work, because this important machinery is lacking. So we have problems with standard therapies based on both kinds of issues. So the goal, then, is to find drugs that don't cause these kinds of side effects and can work even in a P53-deficient cell, which leads us to developing new types of therapy. OK, so I want to introduce you specifically to two. And they are probably the best known-- and actually highly effective-- examples of molecularly targeted anti-cancer agents. The first is in the context of breast cancer. And the gene in question here is a gene called HER-2. In a normal breast cell, there are two copies of this HER-2 gene, as there are in virtually all of your cells. And those produce amounts of RNA that produce amounts of protein that lead to the decoration of the surface of these cells with a certain number of receptor molecules called HER-2, which are growth factor receptors. They bind to specific growth factors. And when they are engaged with their growth factors, they send a signal into the cell. And the product of that signal is for the cell to proliferate. And this is necessary in normal development and at other times. So this is normal regulation, normal signaling in response to a growth factor at normal levels of the growth factor receptor. 30% of breast cancers carry a mutation that results in amplification of the HER-2 gene. So we don't have two copies anymore, we've got 10, or 20, or 50, or 100 copies-- amplification. This is a mechanism by which oncogenes get activated. Too many gene copies, too many proteins. This cell now has way too much of that growth factor receptor on its surface. And in the presence of the same concentration of the growth factor itself, we get a much stronger signal-- much higher levels of proliferation. This also affects the ability of the cells to survive. It keeps them alive at times when they shouldn't be. Too much proliferation, too much survival. Given this situation, a logical therapeutic would be something that blocks the function of this growth factor receptor. And what a company called Genentech discovered was that they could make an antibody. An anti-HER-2 antibody. And that led to a drug called Herceptin, which binds to the growth factor receptor and prevents it from functioning. And in women who have this alteration, and only in them, the drug is actually highly effective. In the metastatic setting, it will lead to multiple additional years of life. But it is actually not curative in that setting. Recently, individuals are being typed for this mutation at a much earlier stage in their disease course. And when women are given the drug then, it's leading to some cures.
So this is a targeted agent which is highly effective in the context of a specific mutation. And actually only then-- other breast cancer patients given the same drug have no benefit whatsoever. So this is what it looks like. This is actually the drug package. And this is what I just described to you-- normal cells, cancer cells, Herceptin antibody binding to and blocking the function of this abnormal number of growth factor receptors. OK, let me now turn to the second classic example. And this comes in the context of chronic myelogenous leukemia, which is a leukemia-- a blood cell tumor. It's a particular type of blood cell-- the myeloid lineage type of white blood cell. You can diagnose this disease by looking at blood smears and you can see that this is the normal blood smear with a single of these myeloid cells. And here is a cancer situation where we've got too many of these white blood cells circulating. This is a disease that's been studied for a very long time. And it's been discovered that in the vast majority of this type of cancer, there is a specific chromosomal event-- a specific mutation caused by a particular translocation. And that translocation was actually identified a long time ago in the city of Philadelphia. And as such, it's called the Philadelphia chromosome. It's the product of a specific translocation-- chromosome 9, which has a gene on it called ABL, which encodes a protein that is a kinase involved in phosphoralating other proteins. And chromosome 22, which has another gene called BCR-- chromosome break events occur here. Chromosome break events occur here. And a translocation results, which produces a new chromosome-- the Philadelphia chromosome, which has the BCR gene and ABL gene inappropriately fused to each other. This produces a fusion gene, which encodes a fusion protein. And that fusion protein is referred to as BCR ABL. And it was discovered that the BCR ABL form had increased kinase activity. And this resulted in increased proliferation within the cells that carried that translocation. And so the question was, could one develop an inhibitor? An inhibitor that blocked the kinase? And this resulted in the development of a drug called Gleevec. Gleevec, which is highly, highly effective. This is the idea, here's the BCR ABL fusion protein. It binds to ATP, which it needs to transfer the phosphate group onto a substrate protein in this signaling cascade. The idea is that one could make a small molecule drug that could fit into the ATP binding site very specifically, and block access of ATP, therefore shutting off the kinase. And if that were possible, then the cancer cells would be deprived of this signal, and may stop proliferating, or even die. That was the idea. In fact, they were successful in making a small molecule drug. And that may surprise you, because you might think there are a lot of kinases in this cell. They all bind to ATP. How could you ever find one that was specific to this kinase? But in fact, it was possible. You can make kinase inhibitors, because not all the ATP binding pockets look the same. And you can therefore get some specificity. And when this drug was used in patients, it showed remarkable activity. If we looked at white blood cell number, normal individuals would have a certain low level. And in case of CML, the level would be high. And actually, it goes higher still as the disease progresses through a phase called blast crisis, where additional alterations take place and the cells begin to divide even more abnormally. 
In this context, however, if you give the drug Gleevec, in the vast majority of patients, the numbers drop precipitously. And the drug is extremely well tolerated-- it has almost no side effects. The patients take the pill with their orange juice in the morning every day, keeping their cancer cells at bay, leading to what is called clinical remission. Clinical remission-- the disease has gone into remission. And it can stay in remission for a very long time. And it's sometimes curative. But sometimes the disease cells come back. And this is a phase we call relapse. And even though the drug is present throughout this disease course, the tumor cells are dividing again. Can anybody tell me why? Mutation. The cancer cells have acquired additional mutations, specifically within BCR ABL. If we imagine BCR ABL, it can bind to Gleevec, and be shut off. At some frequency, however, mutations can occur, which change the active site in such a way that Gleevec can no longer bind. And this is still an active kinase. So now the cells begin to divide again. So the question now is what can you do about it? And the answer is, you can make a new drug. And this has actually been done successfully. A new drug that can bind specifically to the mutant form of ABL kinase. And there's a drug called SPRYCEL, which is now also FDA approved for the treatment of drug-resistant forms of CML. So before you run away, this is what I've just told you, here's ABL kinase. This is where the drug binds. This is the structure of the drug. But at some frequency, mutations occur within that ATP binding site. And different mutations will do this, as shown down here. And those mutations will block the access of the drug. And the good news is that one could make new drugs that will overcome that form of resistance. So this is a good news, bad news, good news story. We'll stop there.
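As a brief footnote to the cell-based assays and the therapeutic window discussed earlier in this lecture, the idea can be summarized with a simple dose-response survival curve. The Hill-type model and the IC50 values below are illustrative assumptions, not measured data; this is only a sketch of the concept.

```python
# Illustrative dose-response curves for the "therapeutic window" idea.
# Hill-type survival model with made-up parameters (not measured data).

def survival(conc, ic50, hill=2.0):
    """Fraction of cells surviving at a given drug concentration."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

ic50_normal = 10.0   # assumed concentration (arbitrary units) killing half the normal cells
ic50_cancer = 2.0    # assumed concentration killing half the cancer cells

for conc in [0.5, 1, 2, 5, 10, 20]:
    print(f"dose {conc:5.1f}: normal {survival(conc, ic50_normal):.2f}, "
          f"cancer {survival(conc, ic50_cancer):.2f}")
# Doses where cancer-cell survival is low but normal-cell survival is still high
# lie inside the therapeutic window; cells that have lost P53 behave as if their
# curve were shifted to the right, which narrows or closes that window.
```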
https://ocw.mit.edu/courses/8-421-atomic-and-optical-physics-i-spring-2014/8.421-spring-2014.zip
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. So what we are talking about is actually matrix elements. If you want to do anything interesting in atomic physics, you have to couple states or induce transitions from one state to another state. Well, maybe that should be Hab. For many phenomena, which we will cover throughout the rest of this semester-- spontaneous emission, coherences, three-level systems, and superradiance-- all we need is a matrix element. And this matrix element will just run through all the equations and be responsible for a lot of interesting phenomena. And for most of the description of those phenomena, we don't have to know where this matrix element comes from. The only thing we have to know is that there is a non-vanishing matrix element which drives the process. And as you know, the matrix element, together with an external field, gives what is called the Rabi frequency. And a lot of physics just depends on the Rabi frequency. But what is behind it? The engine behind the Rabi frequency is a matrix element. So in the unit I started to teach the week before spring break, we talked about matrix elements. And for H, for the Hamiltonian, we used the coupling of an atom to the electromagnetic field. And then we calculated, what is the matrix element induced by the electric field? We made the dipole approximation. And that's your plain, vanilla, generic dipole operator which can connect two states. But we also considered what happens when we go beyond the dipole approximation, and we found extra ways of coupling two states. For instance, we can couple two states which have the same parity with a quadrupole transition, or we can couple them with a magnetic dipole transition. So these are other ways to get matrix elements. For most of the course, you don't have to understand what is behind the matrix element; you just know there is a number which drives the process. So what I want to finish today is to discuss selection rules, which tell us when those numbers-- when this matrix element which couples two states-- are zero and when they are non-vanishing. And what is helpful here is, well, as always in physics, to use symmetry. And if you have an operator-- let me give you examples immediately, but just think for a moment about the electric dipole. The electric dipole is the position operator R. And you want to know, can the position operator R induce a transition between two states? The way to analyze it is now in terms of symmetry. And the symmetry which is always fulfilled for isolated atoms is angular momentum. Angular momentum is a conserved quantum number; we have rotational symmetry. So therefore, we want to now understand matrix elements in the language of rotational symmetry. And therefore, we don't want to use the position operators x, y, or z. x, y, z do not have the rotational symmetry. We want to use linear superpositions of x, y, and z-- I'll give you an example in a moment-- in such a way that the operator becomes an element of a spherical tensor. And for a spherical tensor-- I gave you the definition in the last lecture-- an element of a spherical tensor, with rank l and component m, is defined by-- well, I connect it with something you know-- the fact that it transforms under rotation like the spherical harmonics, ylm.
So it is pretty much, for an operator, what the spherical harmonics, the Ylm, are for wave functions. I think one can do it more formally. And Professor Schwann knows much more about it. I think these are elements of the rotational symmetry group, but I don't want to go there. So what I mean by that is the following: if you take the position vector r, you can expand it into a basis, which is x, y, and z. But if you use the spherical basis, x plus/minus iy and z, then what appears are the spherical harmonics. So in that case, it's rather simple. The position vector actually has, in this representation, components which you can even see are the spherical harmonics. And therefore, they transform like the spherical harmonics. Or just to give you another example, if you have the operator which is responsible for the quadrupole transition-- well, you get the gist, it's a product of two coordinates. So therefore, it's a spherical tensor of rank two. And it so happens, but I'm not deriving that, that it is a superposition of two components with LM quantum numbers 2, plus 1 and 2, minus 1. So that's how we should think about it. So we want to expand the operator into operators which have rotational symmetry, and these are those, or these are those three. So instead of using the Cartesian components of the vector, we use its spherical components. And with that, we can take this expression from the last lecture and rewrite it, using the Wigner-Eckart theorem, in a way which allows us to immediately formulate selection rules. So [INAUDIBLE] and [INAUDIBLE] prime are the quantum numbers of the state other than angular momentum-- think of the principal quantum number of the hydrogen atom. And we want to couple from a total angular momentum J prime to total angular momentum J. Actually, we want to couple from J prime, M prime to a state JM. And what the Wigner-Eckart theorem tells us is that we can factor out the M dependence. The M dependence just comes from orientation in space. So M is just how you orient wave functions and vectors in space. And you can sort of write this matrix element as a projection. And this is nothing else than the familiar Clebsch-Gordan coefficient. And the Clebsch-Gordan coefficient is the one for coupling-- let me put it this way-- we start with the initial state J prime, M prime, we have the L and the little m of our operator, and that should result in a total angular momentum of J and M. So we retrieve again the formalism of the addition of two angular momenta. Sometimes you have two particles, you couple their angular momenta, and ask, what is the total angular momentum of the composite particle? But what we do here for this selection rule is, we have the initial state, and we couple it with the angular momentum of the operator. You can think of the operator as a field which can transfer angular momentum. And then, of course, the final state has to fulfill angular momentum conservation. But one source of angular momentum is now the operator, is the external field, is the photon, or the microwave drive, whatever you apply. And yes, this Wigner-Eckart theorem allows us to write the matrix element as a reduced matrix element, which really decides whether the transition is non-vanishing or not, times a factor which just reflects the orientation of the wave function and of the operator in space. So for the Clebsch-Gordan coefficient, we have a simple selection rule. And this is that, for the magnetic quantum number, the M of the final state has to be the M of the initial state plus the little m of the operator.
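For reference, here is the factorization just described, written out as a sketch with standard conventions (the label alpha stands for all quantum numbers other than angular momentum, and the ordering convention of the Clebsch-Gordan coefficient may differ from the one used in lecture):
\[
\langle \alpha, J, M \,|\, T^{(L)}_{m} \,|\, \alpha', J', M' \rangle
= \langle J', M'; L, m \,|\, J, M \rangle \;
  \langle \alpha, J \,\|\, T^{(L)} \,\|\, \alpha', J' \rangle ,
\]
which vanishes unless \( M = M' + m \).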
And for both the Clebsch-Gordan coefficient and the reduced matrix element, we have the triangle rule. Well, if you couple two angular momentum vectors to a final angular momentum, the three vectors have to form a triangle. And the triangle rule says-- let me write it down, and then you recognize it-- that the angular momentum contributed by the field has to fulfill the triangle rule so that J prime and J can be connected. Yes? AUDIENCE: What is the symbolic meaning of the double bars? PROFESSOR: It's just how, in many textbooks, the reduced matrix element is written. It's nothing else than a matrix element. But, you know, it's pretty much what you would look up in a quantum mechanics book. What happens is, these are not states. J and J prime are not states. They have an M dependence, and we've taken out the M dependence. So this is sort of a matrix element between states which have been stripped of their M dependence. So maybe-- I don't know if that's 100% correct-- but if you have the Ylm in certain states, you have an e to the i m phi part, and this has probably been factored out. So these are not really states, and the double line just means it's a reduced matrix element with the meaning I just mentioned. It's a standard way of factorizing matrix elements. And yes, the double bar means reduced matrix element. So in other words, when we talk about selection rules, we want to use the representation of spherical tensors, because the rank of the spherical tensor just tells us how much angular momentum is involved in the photon, is involved in this transition. So maybe just to give you a question: if I were to do a multipole expansion, and I have an octupole transition, what is now the angular momentum transferred by the photon? STUDENT: 3? PROFESSOR: What? STUDENT: 3? STUDENT: 3? PROFESSOR: 3, yeah. The dipole is L equals 1. The quadrupole is a spherical tensor of rank 2, L equals 2. So the octupole is L equals 3. Now, can a photon transfer three units of angular momentum? Can an atom get rid of three units of, let's say, orbital or spin angular momentum? We start in a state which is J prime equals 3. You emit one photon, and you go to a state which is J equals 0. Is that possible or not? We don't have [INAUDIBLE], but do you want to volunteer an answer? What's the angular momentum of the photon? STUDENT: [INAUDIBLE]? PROFESSOR: Well, be careful. The photon has an intrinsic angular momentum, which is like the spin of the photon. That's plus/minus 1. But just imagine that you have an atom, and the photon is not emitted at the origin; the photon is emitted a little bit further out. Then, with reference to the origin, the photon has orbital angular momentum. And that's what we're talking about. In the multipole expansion, we expand in powers of x, and y, and z of the spatial coordinate of the electron. And that actually means we're going away from the origin. And if you emit something which is away from the origin, you have orbital angular momentum. So, yes. An octupole transition is exactly what I said. It means a photon is emitted, and it changes the angular momentum of the atom left behind by three units. That's what we really mean by that, and that's what we mean by those higher multipoles. The question that you should maybe discuss after class is, what happens if you detect this photon? Is that now a supercharged photon, which has three units of angular momentum? Is there something strange in its polarization? Think about it, and if you don't find the answer, we can discuss it in the next class. OK.
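In formulas, the triangle rule just stated reads (a standard result, not a new derivation):
\[
|J - J'| \;\le\; L \;\le\; J + J' ,
\]
so a single photon emitted in an octupole (E3, \(L = 3\)) transition can indeed connect \(J' = 3\) to \(J = 0\).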
So this is the classification. Let's just focus on the simple examples. We have discussed electric dipole and magnetic dipole radiation. These are induced by vectors. Remember, E1 is the dipole vector. For M1, the matrix element was given by the angular momentum vector. So these are vectors. And that means the representation of the spherical tensors, or the quantum numbers of the spherical tensors, are the same as those of the Y1m. And so for dipole radiation, whether it's electric or magnetic, we now have the dipole selection rules, which pretty much say: you add one unit of angular momentum to state B-- can you reach state A with that? And these selection rules are that you can change the angular momentum between initial and final state by 0 or 1. This is the triangle rule. And delta m can be 0 or plus/minus 1, depending on polarization, which we want to discuss in a moment. So in angular momentum, electric and magnetic dipoles have the same selection rules, whereas when it comes to the question of parity, we've already discussed that: an electric dipole connects two states of opposite parity, whereas the magnetic dipole connects two states of the same parity. And of course, this comes about because L is an axial vector, and R is a polar vector, which have different symmetry when you invert the coordinate system. The one higher multipole transition which we discussed was the electric quadrupole, E2. And for the spherical tensor operators for the quadrupole transition, I gave you already the example of, let's say, xz, a product of two coordinates, because we went one order higher than the dipole. They transform as Y2m. And therefore, we have selection rules for quadrupole transitions, which tell us now that we can change the total angular momentum by up to 2. And also, delta m can change by up to two units. And again, just to emphasize, because people get confused all the time: when we talk about a quadrupole transition, we mean absolutely positively a transition where one photon is emitted. If you fully quantize the field, there is one creation operator of the photon. It's one photon which is created, and this photon carries away the angular momentum we've just specified. Questions about that? Let me conclude our discussion of matrix elements by talking about something which is experimentally very relevant. And this is how selection rules depend on the polarization of light. And I only want to discuss it for electric dipole transitions. So when we wrote down the coupling of the atom to electromagnetic radiation, we had the dipole operator, but we also had, of course, the mode of the electromagnetic field, which was characterized by a polarization epsilon. So until now, when I talked about selection rules, we discussed this part. But now we want to see how the polarization comes in. Well, the epsilon, for instance, for circular polarization-- we'll talk about linear polarization in a moment-- has this representation. So this is the unit vector of the polarization of the electric field when it's circularly polarized. And now remember, we take this vector r and expand it in the following way. So if you now multiply the operator r, or the matrix elements created by this vector operator, by the polarization, you see that one circular polarization projects out this component, the other circular polarization projects out that one. And later, we'll see that linear polarization projects out that one.
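A compact written summary of the selection rules just stated (standard results; the usual caveats that follow from the triangle rule are spelled out here):
E1 (electric dipole): \(\Delta J = 0, \pm 1\) (but not \(J = 0 \to J = 0\)), \(\Delta m = 0, \pm 1\), parity changes.
M1 (magnetic dipole): \(\Delta J = 0, \pm 1\) (not \(0 \to 0\)), \(\Delta m = 0, \pm 1\), parity unchanged.
E2 (electric quadrupole): \(\Delta J = 0, \pm 1, \pm 2\), subject to the triangle rule with \(L = 2\) (so \(J + J' \ge 2\)), \(\Delta m = 0, \pm 1, \pm 2\), parity unchanged.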
So when we said that we have matrix elements for dipole transitions, which can change the angular momentum, or the magnetic quantum number, by minus 1, plus 1, or 0, this is now related to the polarization of the light-- either of the photon which is emitted, or, when we use circularly polarized light, we can only drive this transition, that one, or that one, because the scalar product of the polarization vector and the matrix element projects out only one component of the spherical tensor. So if you look at the expansion above, we realize that left- and right-handed circular light now projects out the spherical tensor operators T1, plus/minus 1. And therefore, for circularly polarized light, we find the selection rule that delta m, the z component of the angular momentum, changes by plus/minus 1 when the circularly polarized light is sigma plus or sigma minus, right-handed or left-handed. OK. So these are the selection rules for circularly polarized light. Let me conclude by discussing the case of linear polarization. Well, if we ask for linear polarization along x or y, it's linear polarization, but we should regard it as a linear superposition of sigma plus and sigma minus. So in other words, if you have the quantization axis along z, and you use light which is polarized along x or y, the way the light talks to the atom, in terms of the spherical tensor operators, is as a superposition of sigma plus and sigma minus. So what we had so far is that the light's k, the propagation of the light, was along the z-axis. But now, we want to look at the other possibility, that z, or the quantization axis, is parallel to the polarization of the electric field-- and the quantization axis is usually defined by an external magnetic field. So we're talking about the situation where the electric field of the electromagnetic wave is parallel to the magnetic field. Then, with this polarization, we pick out the spherical tensor component which is z, which is r times Y1,0. And that means that this polarization of the light induces a transition for which delta m equals 0. And this is referred to as pi light. So maybe, if that got confusing for you, let me just help out with a drawing. We have our atom here, which is quantized by a magnetic field B. And if you shine light on it, we have the electric field perpendicular to the magnetic field. So this would be x and y. And the natural way to describe it is by using x plus/minus iy. And we have selection rules where delta m is plus/minus 1. But alternatively, we can also shine light along this direction. And for the electric field which is perpendicular to B, we retrieve the previous case: we have superpositions of sigma plus and sigma minus. But the new case now is that the electric field is parallel to B. And then, we drive transitions which have delta m equals 0. So these are sigma plus and sigma minus transitions. And this here is what is called a pi transition. Anyway, it's a little bit formal, but I just wanted to present it in this context. Questions? STUDENT: I have one slightly, maybe, basic question. When we talk about polarization in all these matrix elements-- so for example, photon [INAUDIBLE], right-- these are single photon [INAUDIBLE] elements. And so when we talk about shining a laser, it has a polarization. But we don't talk about polarization for single photons. Or do we?
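(For reference, the decomposition just described, written out; the sign and normalization conventions here may differ from the ones used on the blackboard:)
\[
\hat{\epsilon}_{\pm} = \mp\frac{\hat{x} \pm i\hat{y}}{\sqrt{2}}, \qquad
r_{\pm 1} = \mp\frac{x \pm i y}{\sqrt{2}}, \qquad r_0 = z,
\]
so \(\hat{\epsilon}_{\pm}\cdot\mathbf{r}\) picks out \(T^{(1)}_{\pm 1}\) and gives \(\sigma^{\pm}\) light with \(\Delta m = \pm 1\), while an electric field along \(\hat{z}\) picks out \(r_0 = z \propto r\,Y_{1,0}\) and gives \(\pi\) light with \(\Delta m = 0\).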
PROFESSOR: Actually, we talk about-- the question is, what is the polarization? Do we talk about polarization of single photons, or polarization of laser beams? Well, let me back up and say, we talk about the polarization of a mode of the electromagnetic field. We will always expand the electromagnetic field into modes. And the mode has a polarization. It may happen that at some point a photon is emitted into a superposition of modes. But in the most straightforward description, we always do a mode analysis. And often, we simplify the case by saying that the atom interacts only with one mode of the electromagnetic field. And maybe in the case of spontaneous emission, we then sum over all modes. But for each mode, there's a specific polarization. And it doesn't matter if this mode is filled with one photon, or with a laser beam, with a classical electromagnetic field, which corresponds to zillions of photons. STUDENT: [INAUDIBLE] does it always end up being electrical polarization in this case, then? Like because if it's many photons, then there's a lot of [INAUDIBLE] for each of them, or each of them individually-- I don't know. PROFESSOR: No, it depends. If you have an atom, and it has one unit of angular momentum, and it spontaneously emits a photon, if the photon is emitted along the quantization axis, it can only be sigma plus. If it's emitted in the other direction, it has to be sigma minus. Now if you go to strange angles, then at this angle you have overlap with different modes. And you may now find photons in a superposition of polarizations, because we have several modes which are connected with this direction of emission. I think if you write it down, it's pretty clear. It's just sort of projection operators. And for spontaneous emission, we sum over all modes. But for me-- we can always think about what a single photon does by saying, well, if I'm getting confused about a single photon, let me figure out what many, many, many identical photons would be. And that would mean, instead of a single photon in a certain mode, I release a whole beam in this mode. And then, suddenly, I can think classically: I know what the electric field is and such. And then you can go back to the question, what is the electric field of a single photon, and usually make the connection. So I think, at least for the discussion of matrix elements, transitions, angular momentum, I don't think you ever have to distinguish between what single photons do and what laser beams do. But there are important aspects of single photons, non-classical aspects, which we'll discuss in a short while. Other questions? OK. That's all I want to say about selection rules. So with that now, we can simply take the matrix element and run with it. So in this lecture and on [INAUDIBLE], I want to talk about basic aspects of atom-light interaction. And what I want to talk about today are the two important cases when an atom interacts with a monochromatic wave, or when it interacts with a broad spectrum. In one case, when I say the monochromatic case, you may just think of the best laser money can buy. Very, very sharp. Very, very monochromatic. When I talk about a broad spectrum, you may just think about black-body radiation, which is an ultra broad spectrum. And they're two very different cases. And some of it is just related to Nancy's question, that if you have a broad spectrum, we're always talking about many, many modes, and they will be incoherent, and there will be irreversible physics.
Whereas for monochromatic light, everything is a pure plane wave, and everything is coherent. So we want to sort of talk about that first. And then later this week, I think on Wednesday, we will talk about spontaneous emission. But right now, we focus on the simpler case, where we drive the system with electromagnetic radiation which is either narrow-band or broadband. But let's just start with a cartoon. We have an atom. And for that discussion, all we need is two levels. And all we need is that the two levels are connected by some matrix element. And the basic phenomenological situation is that we have one atom which sits in a vacuum. So we have a volume, V, of vacuum. And what is important now is that the walls of the imaginary boundary which defines our vacuum are at low temperature. And low temperature means that the atom will irreversibly decay into the ground state with a lifetime tau. And that means that, in some picture, the excited state has a width; it is broadened by the natural lifetime. And in our discussion, we assume-- and this is what I said with the cold walls of the vacuum-- that the energy difference is much, much larger than the relevant temperature. And this is very well fulfilled for our standard atomic systems. The typical excitation energy, even for atoms with loosely bound electrons such as the alkalis, is two electron volts, which corresponds to a temperature of about 20,000 Kelvin. And even at rather hot temperatures-- definitely hot by the standards of the Center for Ultracold Atoms-- kT at room temperature corresponds to only 25 milli-electron volts. So therefore, when we have an atom in isolation, this is what we find. We find an atom which will irreversibly decay to the ground state. And the fact that it irreversibly decays to the ground state is really an inequality between energies. If you were talking about a hyperfine transition or something, there may be a possibility that the excited state is thermally excited. But in the following discussion, when we drive the atom, and when we look at spontaneous decay, we always assume that the thermal energies are so small that we really have an atom sitting in a cold vacuum. Actually, it's your next homework assignment, where you will consider what are the effects of black-body radiation. And you will actually find out in your homework that they are non-negligible. So yes, there are corrections. But you will also find out that the corrections are rather small, or that it takes a long time before black-body radiation induces any observable transition. OK. So I'll just try to be a little bit formal here and give you sort of a sketch of an atom in a cold vacuum. The ground state is stable. The excited state irreversibly decays. And now, we want to bring life into this situation. Now we add light. And the light-- and this is now our discussion-- has a certain bandwidth. And we want to distinguish the cases of narrow-band and broadband radiation. So it's clear that for the bandwidth of the light, the only scale-- well, we have the scale of omega. But that's a huge scale. The only smaller scale, which is given by the atom, is the natural linewidth. And depending on which case we are in, we talk about narrow-band excitation or broadband radiation. And once the linewidth of the light is much narrower than gamma, we don't get any new physics by assuming perfectly monochromatic light. So once we are much narrower, we're really discussing the case where we can neglect the spectral broadening of the light source.
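To put numbers and symbols to the two conditions just stated (a quick check, using \(k_B \approx 8.6\times 10^{-5}\ \mathrm{eV/K}\)):
\[
\frac{\hbar\omega_0}{k_B} \approx \frac{2\ \mathrm{eV}}{8.6\times 10^{-5}\ \mathrm{eV/K}} \approx 2.3\times 10^{4}\ \mathrm{K} \gg 300\ \mathrm{K},
\qquad k_B \times 300\ \mathrm{K} \approx 26\ \mathrm{meV} \ll 2\ \mathrm{eV},
\]
and "narrow-band" means the light's bandwidth \(\Delta\omega_L \ll \Gamma\), the natural linewidth.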
Or in the other case, when we have broadband light, we can pretty much make the assumption that the light is infinitely broad, and what matters is only the spectral density of the light. So in a pictorial representation, if this is the frequency omega, we have the atom with the natural linewidth gamma. Narrow-band means we are much sharper than that. And broadband means a really wide distribution. So if we have broadband light, it doesn't really matter what the total power is. If the light is very broad, there can be infinite power in the wings, but the atoms don't care. What matters when we have broadband radiation is the quantity called the spectral density, and that's what we need in the following. Let me just give you the units: it is energy per volume and frequency interval. So we can talk about the spectral density w of omega. Or alternatively, when we have a propagating beam, we don't want to talk about energy, we want to talk about intensity. So it is intensity per unit frequency interval, which would mean I of omega is the energy density multiplied with the speed of light. And that becomes energy per area and time. So that's the flow of energy. But because we are talking about broad light, it has to be normalized by the frequency interval. In contrast, monochromatic radiation is sort of one monochromatic electric field. And we will specify it by the single frequency, omega, and the electric field amplitude, which, when multiplied with the matrix element, gives the Rabi frequency. Or we can characterize the light by the intensity I. But then it's an intensity which has the units of energy per area and time. It's not normalized to any frequency interval, because we have assumed that the frequency interval is 0. So if we now want a description of how these two forms of light interact with the atom, at this point-- and we come back to that later this week-- we have to make the assumption that we are looking at times which are much smaller than the time for spontaneous emission. So if we now, in a perturbative sense, expose the atom to monochromatic or broadband radiation, then as long as we have not included in the description the many, many modes for spontaneous emission, we are limiting ourselves to very short times. This is, you would say, a severe restriction, because atoms emit photons after a short time. But even without considering spontaneous emission, we already capture a lot of different physics. And we can nicely distinguish between features of monochromatic and features of broadband excitation. OK. So let's start out with the case of-- give me a second. OK. So if you look at the two cases, in the monochromatic case, we will discuss the idealized situation of an atom interacting only with a single mode. And what we will find out is that now, in the optical domain, we will actually find equations for the two-level system which are identical to what we discussed earlier when we discussed spin 1/2 in a magnetic field. So in that sense, a two-level system driven by a laser will behave identically to a spin driven by a magnetic field. It shouldn't come as a surprise, but I will show that to you. But I can go over that very quickly. The broadband case will actually follow from the single-mode case, because broadband means many, many modes. And then we do an averaging over many single modes by assuming random phases. But I also want to show it to you, because I picked my words carefully.
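(As a written reminder of the quantities just defined, with standard SI conventions, which may differ by factors from the blackboard:)
\[
w(\omega)\ \left[\frac{\text{energy}}{\text{volume}\cdot\text{frequency interval}}\right],
\qquad
I(\omega) = c\, w(\omega)\ \left[\frac{\text{energy}}{\text{area}\cdot\text{time}\cdot\text{frequency interval}}\right],
\]
for broadband light, versus a single frequency \(\omega\) and field amplitude \(E_0\) with intensity \(I = \tfrac{1}{2}\, c\, \epsilon_0 E_0^2\) for monochromatic light.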
You have many, many modes, but we assume that there is a random phase. When we talked about one photon emitted at an angle-- this maybe responds to a question from earlier-- this photon may be in a coherent superposition of modes. This is not what we mean by many modes in a broadband wave. Many modes in a broadband wave means that there is no correlation whatsoever between the modes, and all we will be able to talk about is an RMS value of the electric field. But anyway, the result is sort of predictable, and I wanted to tell you what I'm aiming for. But it's now really worthwhile to go through those exercises and look at what happens in perturbation theory for short times when we have monochromatic radiation, and when we have broadband radiation. So the first discussion will show Rabi flopping. I don't know how many times we have looked at Rabi oscillations. But these are now Rabi oscillations between two electronic states coupled by a laser beam. And I want to show you how this comes about. And when I said strong driving-- well, we have only a limited time window before spontaneous emission happens. We have to discuss the physics we want to discuss in this short time window. And if you want to excite an atom and see Rabi oscillations in a short time, you'd better have a strong laser beam. So this is why the monochromatic excitation that we discuss will pretty much automatically be in this strong coupling limit. OK. So what do we have? We have a ground state, and we have an excited state. We have a matrix element. We know now where it comes from. And we have a monochromatic time dependence. In perturbation theory, we build up a time-dependent amplitude in the excited state, because we couple the ground state with the off-diagonal matrix element to the excited state. And we have to integrate from the initial time to the final time. We have the time dependence of the electromagnetic field, and we also need the time dependence of the excited state. So when I integrate now over t prime, I take out the ground state amplitude, because we're doing perturbation theory, and we assume that for short times, to leading order, the ground state amplitude is one, as prepared initially. So this integral can be solved analytically. Some of you may remember that the minus 1 has something to do with the lower bound of the integral. And when we discussed the AC polarizability, we said this is a transient, and we neglected it for good reasons. But now, we're really interested in the time evolution of the system, so now we have to keep it. OK. We are interested in the probability in the excited state. So we take the above expression and square it. And we find the well-known result, with a sine squared, divided by omega minus omega eg squared. OK. So this is pretty much just straightforward, writing down an analytic expression. But now, let's discuss it. For very short times-- and this is an important limiting case-- the probability in the excited state is proportional to time squared. And this is important. We're not getting a rate which is proportional to time. We're obtaining something which is time squared. And the proportionality to t squared means it's a fully coherent process. So whenever somebody asks you, you switch on a strong coupling from the ground to the excited state, what is the probability in the excited state? It starts out quadratically. The linear dependence-- the famous golden rule, Fermi's golden rule or such-- only comes later. This is a very universal feature.
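To make the perturbative result concrete, here is a minimal written sketch (keeping only the near-resonant term, with a Rabi frequency \(\omega_R\) defined as matrix element times field over \(\hbar\), and detuning \(\delta = \omega - \omega_{eg}\); factor conventions may differ from the lecture's):
\[
c_e(t) \simeq \frac{\omega_R}{2}\,\frac{e^{-i\delta t} - 1}{\delta}
\quad\Longrightarrow\quad
P_e(t) = |c_e(t)|^2 = \omega_R^2\,\frac{\sin^2(\delta t/2)}{\delta^2},
\]
which for \(\delta t \ll 1\) reduces to \(P_e \approx (\omega_R t/2)^2\), the quadratic, fully coherent short-time behavior, and which stays much smaller than 1 (so that perturbation theory is self-consistent) only if \(\omega_R \ll |\delta|\).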
And even if you use broadband light, for a time window delta t which is shorter than the inverse bandwidth of the light-- we're talking about Fourier's theorem-- you don't have time to even figure out that your light is broad and not monochromatic. For very short times, the Fourier limit does not allow you to distinguish whether the light is broad or monochromatic. So what I just derived for you, an initial quadratic dependence, is the universal behavior of a quantum system at very short times. Because it simply says the amplitude in the excited state grows linearly in time, and the probability, quadratically. OK. So this is for very short times. But if you look at it now for longer times, we actually see oscillatory behavior, and these are Rabi oscillations. But there is one caveat. We have derived them only perturbatively, by assuming that the ground state always has a population close to 100%, which means we have assumed that the probability in the excited state is much smaller than 1. Otherwise, we would deplete the ground state. And if you inspect the solution, this is only fulfilled-- the solution is only self-consistent-- in the off-resonant case, where the Rabi oscillation only transfers a small fraction of the ground state population to the excited state. Of course, you all know that Rabi oscillations, this formula, are also valid on-resonance, and you can have full Rabi flopping. But I want to make a case here and distinguish carefully between monochromatic radiation and broadband radiation. For that, I need perturbation theory. And therefore, I'm telling you what perturbation theory gives us at short times, and in terms of Rabi oscillations. STUDENT: So you're saying we assume strong coupling with respect to the atomic linewidth, but weak coupling with respect to the resonance, for instance, in [INAUDIBLE]? PROFESSOR: It's simple, but subtle. Yes. So what we have is, we assume we switch on a monochromatic laser. Since we do not include spontaneous emission, which will actually damp out the Rabi oscillations-- we'll talk about that later-- we are limited here to short times, which are shorter than the spontaneous decay time. And now, I gave you one universal thing. At very, very short times, it's always quadratic. It's a coherent process. So that's one simple, limiting, exact case you should keep in your mind. But now the question is, if you let the time go longer, something will happen. And there are several options. One is, if times get longer, spontaneous emission happens, and we are invalid. The other possibility is, when time gets longer and we are on-resonance, we deplete the ground state, and perturbation theory doesn't deal with that. But if we are off-resonance, we can allow time to go over many Rabi periods and observe perturbative Rabi oscillations. So this is how we have formulated it. We do perturbation theory of the system without spontaneous emission. And eventually, we violate our assumptions, either because spontaneous emission kicks in, or because we deplete the ground state when we drive it too hard, or if we go too close to resonance. But the latter assumption, of course, that we can't drive it hard, as you know, is artificial. We can actually treat the monochromatic case not just in perturbation theory-- we can do it exactly. STUDENT: I want to go back again to-- PROFESSOR: And this is what I want to do now. But first, we can go back.
STUDENT: So when we are talking about off-resonance and on-resonance-- if we decrease the detuning, then we are getting close to resonance, so again this gets invalid. But if we increase the detuning, we could exceed the spontaneous emission rate. So then, we won't see any Rabi oscillations again, because at those time periods this oscillation would [INAUDIBLE] detuning. So to observe Rabi oscillations, we have to be at times more than the detuning, or more than [INAUDIBLE] detuning. PROFESSOR: Oh, yeah. Of course. STUDENT: So the detuning has to be less than [INAUDIBLE], but more [INAUDIBLE] that we are still [INAUDIBLE] resonant. PROFESSOR: No. The detuning has to be larger than the natural linewidth, because then the Rabi oscillations are fast, and we have Rabi oscillations which are faster than any damping due to spontaneous decay. That's the regime we are talking about. So in the limit of large detuning, you can detune very, very far, and you never leave the limits of our perturbative approach. STUDENT: Yes. OK. PROFESSOR: Anyway, I want to do perturbation theory of the broadband case. And the broadband case will be an incoherent sum over the single-mode case. So this is why I had to bore you with, what do we get out of perturbation theory for the monochromatic case? Of course, you know already that in a two-level system, we can do it exactly. And I just want to outline it, mainly to introduce some notation. So our Hamiltonian here, which couples the ground and the excited state, is given by the dipole matrix element times the electric field amplitude, and we call this the Rabi frequency. And then we have a sinusoidal or cosinusoidal time dependence. And all I want to do is to show you that a two-level system driven by an electromagnetic field is identical to spin 1/2, which we discussed earlier, and then we are done. There is one technical little trick we have to do, which is trivial, but I want to mention it. So if you want to compare directly with spin 1/2, we are now shifting the ground state down by half the excitation frequency. In other words, just to make the analogy with the spin: usually we say, for an electronic transition, we start at 0 and we go up. But now we shift things so that the zero of energy is in the middle between the ground and the excited state. And then it looks like the excited state is spin up, and the ground state is spin down. So with that, the diagonal part of our Hamiltonian is now h-bar omega eg over 2 times excited-excited minus ground-ground-- so all I've done is shift the origin. And the coupling, using our definition of the Rabi frequency, couples ground and excited state, and excited and ground state. These are the two off-diagonal matrix elements. And the time dependence is cosine omega t. So we are now very close to exploiting the correspondence with spin 1/2. Because after shifting the ground state energy, this is the z component of the spin operator, the Pauli matrix sigma z. And this here is the x component. So therefore, for driving an electronic transition with a laser beam, we actually have a spin Hamiltonian which has the standard form. So let me just write it down, because it's an important result. The Hamiltonian for driving a dipole transition with a linearly polarized laser beam corresponds, or is identical, to the Hamiltonian for a spin 1/2 in a static magnetic field along the z direction, which causes a splitting between spin up and spin down-- and the splitting is now omega eg-- plus a linearly polarized oscillating field along the x direction.
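Written out, the correspondence just stated looks like this (a sketch; some texts put an extra factor of 1/2 in front of the drive term, depending on how the Rabi frequency \(\omega_R\) is defined):
\[
H = \frac{\hbar\,\omega_{eg}}{2}\,\sigma_z \;+\; \hbar\,\omega_R \cos(\omega t)\,\sigma_x ,
\]
which is exactly a spin 1/2 in a static field along \(z\) plus a linearly polarized oscillating field along \(x\).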
And you probably remember that when we discussed the spin problem, what we actually liked most was that we had a rotating magnetic field, because it made everything simpler. And we are doing that now by formally writing the linearly polarized field as a superposition of left-handed and right-handed, or counter-rotating and co-rotating, fields. So let me just do that. So we have the z part. And now, instead of having just sigma x, cosine omega t, I add sigma y, sine omega t, and I subtract sigma y, sine omega t. So now we see that, compared to the spin problem we discussed, where we had a rotating magnetic field, there is something in addition: we have two components here which rotate. And these are the co- and counter-rotating magnetic fields in the spin problem. You remember, in the spin problem, we solved the problem exactly by going into a frame which rotated at the Larmor frequency, which now becomes omega eg. And the co-rotating term became stationary on resonance in this rotating frame, whereas the counter-rotating term rotates at a very high frequency in this frame, at twice the Larmor frequency. So if you fulfill the inequalities that the co-rotating term is close to resonance-- in other words, we are close to resonance, and we are not using such an enormous intensity of the laser beam that we power-broaden everything and both co- and counter-rotating terms become resonant-- if you fulfill those two conditions, then we can neglect the last term. And this is the rotating wave approximation. So in other words, in the spin problem, we can always assume we have a circularly polarized, rotating magnetic field, and we have an exact solution. I'll say a little bit more about it later. But in many situations, when you excite an atom with a laser beam, you get both terms. And usually, you proceed by neglecting one term, by making the rotating wave approximation. We will, in one or two lectures, discuss whether there are situations where the counter-rotating term is exactly 0 due to angular momentum selection rules, but that's a separate discussion. In many situations, it cannot be avoided, and it's always there. It's actually always there to the point that when I talked to some colleagues and said, I can create a situation, an atom, where the counter-rotating term is exactly 0, some colleagues reacted with disbelief, and then eventually felt that the situation I created using angular momentum conservation was somewhat artificial. But we'll get there. It's an interesting discussion. But anyway, just remember that for a magnetic drive, if you use a rotating magnetic field, you don't need a rotating wave approximation. Everything rotates at one frequency. But when you drive a two-level system with lasers, we usually have an extra term which needs to be neglected. OK. But if you do the rotating wave approximation, we now have exactly the situation we discussed for spin 1/2 in a rotating magnetic field. And then, the same equations give the same results. And our results for spin 1/2 carry over as expected: Rabi oscillations, without making any assumptions about perturbation theory. So this is an exact result for the initial conditions that we start in the ground state, and the initial population of the excited state is 0. And as usual, I have used here the generalized Rabi frequency, which is the quadrature sum of the Rabi frequency and the detuning. OK. A lot of this was to get ready for the broadband case.
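For completeness, the exact result just quoted, in the rotating wave approximation and with the atom starting in the ground state (standard Rabi flopping, with the same convention caveats as above):
\[
P_e(t) = \frac{\omega_R^2}{\Omega^2}\,\sin^2\!\left(\frac{\Omega t}{2}\right),
\qquad \Omega = \sqrt{\omega_R^2 + \delta^2}, \quad \delta = \omega - \omega_{eg},
\]
where \(\Omega\), the generalized Rabi frequency, is the quadrature sum of the Rabi frequency and the detuning.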
So that's-- yes, we have a little bit more than five minutes. So, so far, we have discussed the monochromatic case. What I really needed as a new result, because I carry it over to the broadband case, was the perturbative result. But I also wanted to show you that the perturbative result is one limiting case of the exact solution, which I just derived by analogy to spin 1/2. OK. So we just had the result that in perturbation theory, for sufficiently short times-- we discussed all that-- the excited state amplitude has the following dependence. So this is nothing else than-- I want to make sure you recognize it-- Rabi oscillations at the generalized Rabi frequency. The generalized Rabi frequency is simply the detuning, because it's a perturbative result. In perturbation theory, you don't get power broadening, because you assume that your drive field is perturbatively weak. So the Rabi oscillations are now Rabi oscillations where the generalized Rabi frequency is delta, the detuning. And this is just rewriting. Let me just scroll up. This is this result here. I wasn't commenting on it, but this is nothing else than the detuning. Look, I'm just reminding you what you get from perturbation theory. Power broadening is not part of perturbation theory. OK. So this is our perturbative result. And now, we want to integrate over that, because we have a broadband distribution of the light. So what we have to use now is the energy density, w of omega. The electric field is related to the energy: the energy density of the electromagnetic field is 1/2 epsilon naught times the electric field squared. Well, if we have many modes, we add the different modes in quadrature. And we still have the same relation between the electric field squared and the total energy. But the total energy is now an integral over d omega; we integrate over frequency over the spectral distribution of the light. So this is how we go from energy density to electric fields. But now, we want to evaluate this expression. And what appears in this expression is the Rabi frequency. Well, what we have to do now is go back from the Rabi frequency-- we assumed linearly polarized light in the x direction-- to the electric field. And that means now that we want to take this expression and sum it up over all modes, which means we write the Rabi frequency squared as an electric field squared, and the electric field squared is obtained as an integral over the spectral distribution of the light. So this means we will replace the Rabi frequency in this formula by an integral over the energy density of the radiation. We have the matrix element squared as a prefactor. I just tried to re-derive it, but I think the prefactor is 2 over epsilon naught. So, yes. With that, in perturbation theory, the probability to be in the excited state is-- let's just take all of the prefactors. Now, I change the integration variable from omega to the detuning-- we just integrate relative to the resonance. So our energy density is now evaluated at the resonance, omega naught, plus the detuning. And we have this Rabi oscillation term. OK. So this is nothing else than taking our perturbative Rabi oscillation formula, which is coherent physics, and integrating it over many modes. I'm one step away from the final result. If the energy density is flat, is broadband-- so for the extreme broadband case-- we can pull it out of the integral.
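A sketch of the integral just set up, using the prefactor quoted above (2 over epsilon naught times the matrix element squared; here \(d\) denotes the dipole matrix element along the polarization, and the exact numerical factors depend on field conventions):
\[
\omega_R^2 \;\longrightarrow\; \frac{2\,d^2}{\epsilon_0 \hbar^2}\, w(\omega)\, d\omega
\quad\Longrightarrow\quad
P_e(t) \simeq \frac{2\,d^2}{\epsilon_0 \hbar^2} \int d\delta\; w(\omega_0 + \delta)\,\frac{\sin^2(\delta t/2)}{\delta^2}.
\]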
And then, we are left only with this function, F of t. This function F of t is a standard result; you have seen it in many discussions of perturbation theory. If I plot this function versus delta, we have something which has wiggles, then there is a maximum, and then it has wiggles again. The width here is t to the minus 1, and the amplitude is t squared. And this is the excited state amplitude squared. So if we integrate that over delta, we get something which is linear in t-- something which goes as t squared and has a width 1/t. Yes, time is almost over. So the function F of t, which is under the integral, starts out at short times proportional to t squared, as we discussed. Maybe my drawing should reflect that. But then it becomes linear. So for long times, the function F of t becomes linear in t times a delta function in the detuning delta. This is what you have seen many times in the derivation of Fermi's golden rule. I'm running out of time now. I'll pick up the ball on Wednesday, and we'll discuss that result and put it into context. But the take-home message-- what I really wanted to show you is that we do have coherent Rabi oscillations, and by just performing the integral over this broad spectrum of the light, we lose the Rabi oscillations, and we find rate equations, Fermi's golden rule, an excitation probability proportional to t. And we have done the transition from coherent physics to irreversible physics. This is all hidden in this one formula, but I want to fully explain it when we start on Wednesday. Any last-second question about that? Cody? STUDENT: It looks like we're integrating right over the point where perturbation theory becomes inexact, because we're integrating over delta equals 0. And that's the most important part. PROFESSOR: We are integrating over it, but we are integrating over it with the [INAUDIBLE]. So therefore-- perturbation theory remains valid, actually. Perturbation theory remains valid as long as the excitation probability is less than one. So I have not put a scale on it, but we can go from a quadratic dependence to a linear dependence. As long as the probability of being in the excited state is smaller than 1, perturbation theory is exactly valid. So I think what confuses you here is, we can do resonant excitation-- the broadband case includes resonant excitation. But for sufficiently short times, we reach the rate equation before we run out of [INAUDIBLE] perturbation theory.
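The long-time limit just described can be written out as follows (a standard integral; the resulting rate uses the prefactor from the sketch above and inherits its convention caveats):
\[
F(t) = \int_{-\infty}^{\infty} d\delta\,\frac{\sin^2(\delta t/2)}{\delta^2} = \frac{\pi t}{2}
\quad\Longrightarrow\quad
P_e(t) \simeq \frac{\pi\, d^2}{\epsilon_0 \hbar^2}\, w(\omega_0)\, t ,
\]
i.e. an excitation probability linear in t, a rate -- which is the Fermi's golden rule behavior the professor refers to.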
https://ocw.mit.edu/courses/8-04-quantum-physics-i-spring-2016/8.04-spring-2016.zip
PROFESSOR: Suppose you have the transmission coefficient for a potential where z0 is equal to 13 pi over 4. That's a square well of a certain depth, and we represent it in this way. Remember, n must be greater than or equal to 2 z0 over pi. So this will be 13/2. And 13/2 means that we can start with n equals 7, 8, 9, and all those. Remember, this n counts which bound state of the infinite square well you're talking about. And the energies you get are labeled by these integers, and they must be positive energies. So positive energies mean that you need a sufficiently large n, and the smallest n that is sufficiently large is 7 in this case. So you can then determine from this formula what is the value of E n over V0. So for example, E7 over V0 turns out to be 0.15976. E8 over V0 turns out to be 0.514793. And E9 over V0 is 0.91716. So if you plot it, you have here E, the energy, over V0, and you want to plot the transmission probability. And it begins at 0-- that was the question a second ago. And then it may reach 1. And it will reach 1 at each one of those values. So here is 1, and it reaches 1 at 0.15, 0.51, and 0.92. So you get this, and here another one, and here another one-- a probability like that. So that's a typical graph for the transmission probability. It oscillates, and it reaches 1 at several points, forever and ever. And the amplitude of the oscillations becomes smaller, so overall it's really tending to 1. So these two people we're talking about, Ramsauer and Townsend-- they lived from the 1860s to the 1940s and '50s. And they did their famous experiment in 1921. So their experiment was elastic scattering of low energy electrons off of rare gas atoms. So Ramsauer and Townsend, in 1921, scattered elastically-- that means the particles didn't change their identities, they didn't create more particles; it was just electrons came in and electrons went out-- low energy electrons, off of rare gas atoms. So these are noble gases. Their shells are completely filled. And they're rather inert: very unreactive, high ionization energies, no low energy states you can scatter these atoms into. So basically very unreactive atoms. And you can imagine one as a very beautiful spherical cloud. We can draw some electrons, there are some protons, a nucleus here, and an electron cloud. So how does this look to an electron? Well, you know from electrostatics that if you have total charge 0 and it's totally spherically symmetric, there is no electric field outside. So the electron comes in, feels nothing. And as soon as it penetrates this, at any point here, the electric field points in. Or, well, it actually points out, but the electron will feel a force in, because the charge in the outside shell doesn't produce any field, but now the protons in the nucleus beat the effect of the electrons. So there's a force in, a force that pulls it in. So basically this is like a deep square well, or spherical well, representing the atom. The atom can be thought of as some sort of spherical well that attracts the electrons. So what these people did was throw these electrons at the atoms. And they considered that the electrons scattered a lot when they bounced back. On the other hand, if the electrons continued, if they passed by, they said nothing has happened. So for them, the reflection coefficient is a proxy, a good representation, for the scattering cross-section. And what they found experimentally was a reflection coefficient, R, that as a function of energy was very high.
And people thought at this moment, OK, these are like particles colliding with particles. Their energies shouldn't make much difference, you know. You either collide or you don't collide, and you bounce back or you don't bounce back. So they thought that this would be flat. But nevertheless, it actually went down enormously, and then it went up again. So they found this for electrons with about 1 eV-- that's very low energy electrons, but they were going pretty fast; a 1 eV electron is going at about 600 kilometers per second. So the reflection was going like this. And they had no explanation why it was so sensitive to energy, and why there would be a funny effect going on, that the reflection would suddenly go down and basically the particles would just get transmitted. But if you think of reflection here as a continuous line and transmission as a dotted line, the transmission, which together with the reflection must add up to 1, would be going up here and would have reached nearly 1 at this value of the energy. So the explanation eventually was this effect: that you should think of the atom as a well, and there is a resonant effect in the well, and for some energies the resonance is such that it allows the particles to just go through and not scatter. So this had to wait some time, because the experiment was done in 1921, and Schrodinger and everybody started doing good work in 1925, and the development of wave mechanics took a while. But eventually it was recognized that basically it's resonant transmission that is happening there. Well, if you want to get the numbers right, if you want to get that 1 eV value better, you have to do a spherical model-- instead of the finite square well, you have a spherical well-- and do it a little more precisely. But then the agreement is pretty reasonable.
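As a written check of the resonance energies quoted above (assuming the course's conventions, in which the well has width \(2a\), \(z_0^2 = 2 m V_0 a^2/\hbar^2\), and resonant transmission occurs when the well contains an integer number of half wavelengths):
\[
\frac{E_n}{V_0} = \left(\frac{n\pi}{2 z_0}\right)^2 - 1,
\quad z_0 = \frac{13\pi}{4}:\qquad
\frac{E_7}{V_0} = \frac{27}{169} \approx 0.160,\quad
\frac{E_8}{V_0} = \frac{87}{169} \approx 0.515,\quad
\frac{E_9}{V_0} = \frac{155}{169} \approx 0.917,
\]
matching the values 0.15976, 0.514793, and 0.91716 given in the lecture, and positive only for \(n \ge 2 z_0/\pi = 13/2\), i.e. \(n \ge 7\).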
https://ocw.mit.edu/courses/8-06-quantum-physics-iii-spring-2018/8.06-spring-2018.zip
PROFESSOR: What are we going to do? We're going to explore only the first Born approximation. And the first Born approximation corresponds to just taking this part. So this would be the first Born approximation. It corresponds to what we were doing here. What did we do here? Well, we were simplifying the second term, the integral term, by using what the Green's function looked like. We simplified this term. So all that we did here remains valid, except that there's one little difference. In the Born approximation, we replace the psi that appears inside the integral by the incident psi, which is the psi in here. So we will do that now to simplify this quantity in the so-called first Born approximation. When can we use it? So, very good. So when is the Born approximation a good approximation? Well, we are throwing away terms, in general. When we write an expansion of this form, we're saying, OK, we can set the wave function equal to the free part plus the interacting part. So we have the free part, and it gave us this quantity. And the interacting part gave us the second one. So the Born approximation is good when, roughly speaking, the free Hamiltonian dominates over the perturbation. So if a scattering center has a finite-energy bump, and you're sending things with very high energy, the Born approximation should be very good. It's a high-energy approximation, in which you are basically saying that inside the integral, you can replace psi by the incident plane wave, because that dominates. That's not the whole solution. The whole solution then becomes the incident plane wave plus the scattered wave. But the plane wave dominates over the scattering process. So it should be valid at high energy. It's a better and better approximation at high energy. So we have it here. And let's, therefore, clean it up. So if we call this equation "equation A," we say, back to A. The first Born approximation, back to A. The first Born approximation gives us psi of r equals e to the i k i dot r. Now we put in all the-- and these are all vectors, all the arrows. And we have here minus 1 over 4 pi, integral d cubed r prime, e to the minus i k n dot r prime, u of r prime, and e to the i k i dot r prime-- all this multiplied by e to the i k r over r. So let's put a few vector arrows here. That's it. OK. So it's basically the same thing as here, but now replacing the incident wave here. That's the so-called first Born approximation. But now this is really good. We can compare this with what we usually called f of theta and phi. The expression in brackets is the scattering amplitude f of theta and phi. So here we have an answer. f at wave number k, of theta and phi, is equal to this integral. Let's write it out. Minus 1 over 4 pi, integral d cubed r prime. And now we will combine the exponentials. Happily, the two exponentials both depend on r prime. So it's a difference of exponents, and we will call it e to the minus i capital-K vector dot r prime, u of r prime, where this capital-K vector is equal to-- I kept the signs-- k n, which entered with the minus sign in that exponent, minus k i. That is, k s, the scattered k, minus the incident k vector. Remember, we defined there, on that blackboard, the scattered momentum as k times the direction of observation, that unit vector. So in combining these two exponentials into a single one, we have this capital-K vector, which is a pretty important vector. And now, this is a nice formula. It kind of tells you a story-- that there are not many ways to generate things that are interesting.
Here it says that f k, the scattering amplitude, as a function of theta and phi, is nothing else than a Fourier transform of the potential, evaluated at what we would call the transfer momentum. So scattering amplitudes are doing Fourier transforms of the potential. Pretty nice. A pretty pictorial way of thinking about it. The Fourier transform is a function of K. And I think when people look at this formula, there's a little uneasiness, because the angles don't seem to show up on the right side. You have theta and phi on the left, but I don't see a theta, nor a phi, on the right. So I think that always has to be clarified. So for that, if you want to really use theta and phi, I think most people will assume that k incident is indeed in the z-direction. So here is k incident, and it has some length. k scattered has the same length. It's made with the same wave number k, without any index, but multiplied by the unit vector n-- as opposed to k incident, which is the same k multiplied by the unit vector z. So the scattered vector is the vector in the direction that you're looking at. So this is k s. It's over here. And therefore this vector is the one that has the theta and phi directions. That is that vector. And the vector capital K is k scattered minus k initial. So the vector K is the transfer vector-- the vector that takes you from k initial to k s, the vector that must be added to k initial to give you the scattered vector. So this vector, the capital vector K-- it's a little cluttered here. Let me put the z in here. That vector is over there, and that vector is a complicated vector, not so easy to express in terms of k i and k s, because it has a component down. But it has an angle phi as well. But one thing you can say about this vector is that its magnitude is easily calculable, because there is a triangle here that we drew that has k incident and k s. And here is capital K. So this has length k, this has length k, and this has length capital K-- a triangle with angle theta. So if you drop a perpendicular line, you see that capital K is twice this little piece, which is little k times sine of theta over 2. So that's one way the formula on the right-hand side has the information of theta. It also has the information of phi, because you also need phi to determine the vector K. So this is an approximation, but look how powerful approximations are, in general, in physics. This approximation is an approximation for the scattering amplitude. First, it has a very nice physical interpretation, in terms of a Fourier transform of the potential. Second, it gives you answers even in the case where the potential is not spherically symmetric. You remember, when potentials were spherically symmetric, the scattering amplitude didn't depend on phi, and we could use partial waves. And that's a nice way of solving things. But here, at the expense of not being exact, we have been able to calculate a scattering amplitude when the potential is not spherically symmetric. So you manage to go very far with approximations. You don't get the exact answers, but you can get results that are a lot more powerful. So this is an explanation of this formula. And we can use this formula. In fact, we'll do a little example. And these are the things that you still have to do a bit in the homework as well. Many of them are in Griffiths. And indeed, technically speaking, what you need to finish for tomorrow hinges a bit on what I'm saying. But the final formulas are written out explicitly in Griffiths, and in fact, half of the problems are solved there.
So it shouldn't be so difficult.
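A short summary of the geometry just described, in the same notation, taking the incident wave vector along z as the lecture assumes:

```latex
% Momentum transfer and the Fourier-transform reading of the Born amplitude:
\[
\vec k_i = k\,\hat z,\qquad \vec k_s = k\,\hat n,\qquad
\vec K=\vec k_s-\vec k_i,\qquad
|\vec K| = 2k\sin\frac{\theta}{2},
\qquad
f_k(\theta,\phi) = -\frac{1}{4\pi}\,\tilde u(\vec K),
\quad \tilde u(\vec K)\equiv\int d^3r'\,e^{-i\vec K\cdot\vec r'}\,u(\vec r').
\]
```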
https://ocw.mit.edu/courses/7-014-introductory-biology-spring-2005/7.014-spring-2005.zip
So today we're going to continue our focus on DNA which I'm personally enthusiastic about at least in terms of being such a fascinating molecule. And I told you the story last time of how we actually came to understand that DNA was the genetic material. And I still see comments that, oh, God, all this stuff is not relevant to the exam. We're trying to construct the exams in ways that test whether you got the concepts and not just whether you memorized every term that you ran into in the textbook. So I'm hoping that you will see some greater purpose in why I'm trying to talk about some of this. And also I'm sure some of you will forget the details of transformation, of DNA replication we're going to go into as we sort of burrow into it over the next lecture or so, but what I am hoping you may remember ten years from now, even those who don't go in biology, is how experiments are done, how real people do them. And that was partly what I was trying to tell you. And you guys are pretty good at figuring out the basic principle that someone had to somehow show that a DNA molecule in one organism could change some organism to have a new characteristic. And as I sort of told you with the work from Frederick Griffith. And then his initial stuff wasn't devoted to that at all. It was trying to solve a very pressing problem which is dealing with pneumonia in a pre-antibiotic era. And then the finding that he got, that this odd result that something in a heat-killed extract could be transferred to a live bacterium sort of set things up for Avery and his colleagues after a number of years of work to make a very powerful argument that DNA was the genetic material. But, as I said at the end of last lecture, that paper was published in the 1940s. And people didn't immediately say oh, wow, DNA is the genetic material. Often, and we'll see it again with genetics, there's sometimes sort of the body of science the average person thinks about. Science needs to reach to a certain state before an idea can take hold, even if there's evidence supporting it. Part of the problem was that chemists had isolated DNA. And the way they used to isolate DNA was really rough on it. Crack the cells open. And what happened, it would all get broken down into little pieces of DNA. And people had worked out the basic chemical structure that it was the deoxyribose and how the things were joined together, but nobody had ever seen anything more than just these little pieces of DNA. And there was a widely held conception that it was just an anonymous tetranucleotide of G, A, T and C. It wasn't clear why the cells made it, but it didn't look like anything that could encode information. Whereas, as I said, something like proteins, those seem to be very different. And so the world wasn't quite ready for it. Another thing, and this came from one of the comments here, was someone said they didn't know bacteria could take up DNA from the environment. And, in fact, most bacteria can. It happens that streptacoccucci and some other bacteria at certain phases in their lifetime develop this capacity to take up DNA from the outside. Given what I've told you about a membrane and how hard it is to get things across it, you could imagine it's not trivial to get a DNA molecule which is huge from one side to the other. So it doesn't normally happen. 
And what happens if you go into a lab and you're cloning something or other, and we'll talk about how to take a couple pieces of DNA and join them together in a test tube and then put them back into a bacterium. If we put it into E. coli that doesn't normally take up DNA you'll find that it's sort of basically black magic. You cook them up with some divalent cations at very high concentrations, you do temperature shifts and various things, or you give them a big jolt of electricity, and the next thing you know you get some DNA inside. And it's not a very efficient process, but all you need is one molecule to get in one bacterium and then you're in business. So that was another reason that this wasn't accepted right away. Because this was not a phenomenon that could easily be repeated with other bacteria. So it looked like it was something perhaps special to streptococcus. And what did really change people's understanding, or at least bring people to the understanding that DNA could possibly be the genetic material came about from the discovery of the actual structure of -- How the structure of DNA as a long molecule with complimentary strands and the double helix, the little pictures I showed you with the base pairs, which you know about, and how the two strands which now I'm going to start emphasizing run in opposite directions. We'll come back to that in a little bit, but the 5 prime to 3 prime direction is this way on one end and 5 to 3 on the other. And just remember back here that there's the 5 prime carbon and that's the 3 prime carbon. So this is the 5 to 3 prime direction of the strand. And then it twists up in 3-dimensional space to form this double helix. And you've seen that movie several times. So once that structure was discovered then people began to see how these could possibly encode information. It was clearly not just a tetranucleotide of G, A, T, Cs. But we didn't move immediately to that understanding. And today, again sort of trying to show you how biological experiments are done and how they're done by real people, I want to just go on and tell you the key things that happened next. So someone who was very struck by the results of Avery when they came out was Erwin Chargaff who was at Columbia. And, in fact, my colleague Boris Magasanik whose office is next to mine was a post-doc in Chargaff's lab. So I've got a neighbor of mine who worked with Chargaff. And Chargaff was very struck by this result from Avery and his colleagues that you could take DNA and put it in another organism. And here are a couple of quotes from his writing. One that I'd liked. I've sort of had a sense of this in my own research career, this kind of thing. ìI saw before me in dark contours the beginning of a grammar of biology.î He didn't really know quite how it worked but he sort of sensed that someone here where you could get down to the language that biology was written. So he started some experiments. And I started with the conviction that if different DNA species exhibited different biological activities there should exist chemically demonstrable differences between deoxyribonucleic acids. So he was able to start just doing some simple chemical experiments to try and look at DNAs from a whole variety of sources and see what he could learn. And this was not at the structural level. This was just at the chemical level. But one thing he learned was that the base content of DNA, that's the A, G, C, T part of it varied widely between organisms. 
So this was what Chargaff found in his lab, key findings. And that was important because if DNA was just a molecule of GATC, just a tetranucleotide that every organism made then you'd expect to find the same base composition in all organisms. He didn't, so that finding essentially buried the monotonous tetranucleotide hypothesis. Another thing he found was that DNA was the same in different tissues -- -- from the same organism -- -- but the proteins varied. And that's a characteristic you'd expect of something that was the genetic information of the cell, that all cells have to have sort of the major blueprint. Even though proteins looked like an attractive possibility for that because they had so much variation, this kind of finding wasn't consistent with it, and it supported the idea that DNA was the genetic material. Well, the other thing he could do was he could measure the A, T, G and C content of all these different DNAs. And he noticed some similarities then. And he extracted out of that a couple of generalizations. One was that if you looked at the ratio of the purines, those are the ones with the two rings, adenine and guanine, over the pyrimidines, those are the ones with the single ring, which were C and T, it was about one. Another thing he noticed was that the ratio of A to T was about one and the ratio of G to C was about one. Now, that was an important clue but it didn't lead to any immediate breakthrough, even though maybe now that you know the structure you can see, gee, if I had been there maybe I would have been smart enough to jump on that number. So instead the work that led to the structure of DNA now introduces a couple of other characters who you've heard of a lot, Jim Watson and Francis Crick. At the time that Avery made his discovery reporting DNA-mediated transformation, Jim Watson described himself later as a precocious college boy in Chicago who was consumed by ornithology. So he was into bird watching. That's what he was excited about at the time Avery did his experiment. And Francis Crick at that point was a physicist, and he was in the British Navy designing naval mines. So that's where those two players were at the time of Avery's results. So then both Francis Crick and Jim Watson ended up in Cambridge, England about 1950. I think Crick got there around 1949 and Jim Watson got there in 1951. Francis Crick was a grad student, 35 years old at the time. I'll show you pictures in a minute. 35 years old at the time and still working on his PhD. So he was a pretty elderly grad student, if you want to think of it that way. And Jim Watson was a young hot-shot. He had done his PhD working with Salvador Luria who was at Indiana University at the time. Salvador Luria was one of the Nobel Laureates at MIT. He founded the Cancer Center, which is still here right across from the main biology building. And Jim was a very, very bright and brash young guy, and he had done his PhD with Salva and then he went to Cambridge as well. And the reason they both went to Cambridge was they were attracted by the power of x-ray crystallography. Now, I said a little bit about that earlier, that if you take x-rays and you bounce them off a crystal and then measure the diffraction pattern you can work backwards by Fourier transforms and whatnot to figure out what the underlying crystal structure is. For the purposes of this course the mechanics of how that's done, we don't have to worry about that right now.
You just need to know that you can work backwards from the diffraction pattern to figure out what the underlying structure was. And I told you, when I introduced to proteins, that the first clues that there were these regions of secondary structure, alpha helices and beta sheets came because people saw characteristic reflections in these diffraction patterns of certain proteins. And I also told you the story of how Linus Pauling had gone to Oxford, had gotten sick and tired of reading detective novels, started to try and explain the refractions in a certain class of proteins and came up with a model for the alpha helix. And so that was the sort of thing that inspired Watson and Crick. They were both interested in when one could get the structure of DNA. Now, Cambridge also had a very good x-ray crystallogram group. And just in passing it's interesting as to why they didn't come up with the structure of the alpha helix. There were two things. One was just lack of basic knowledge. I told you that the peptide bond, if you remember I emphasized that you cannot rotate it because the electrons are distributed. Pauling was an outstanding chemist. He knew that fact. And the folks at Cambridge who were doing that didn't learn this until later, so their models were far less constrained because they could have rotation around that bond. And the other one was just an experimental thing that the size of the photographic plates they used in the Cambridge lab were too small in the sense that they missed a key reflection that Pauling knew about and they didn't know about. So this combination led to them being scooped by the other group. But nevertheless the group at Cambridge was absolutely outstanding and at one of the top places in the world to do. And I showed you a couple of pictures when I was showing you the transition state. Sort of what you get out of working backwards from these diffraction patterns is they can measure regions of electron density, and then you fit atoms or fit molecules to the patterns that you see. And if it's all working you can explain why there are bumps here. There's an oxygen here and so on. There's another one. This is an ATP that's bound actually in a pocket in a protein. But you can sort of see how beautifully the patterns of electron density deduced from the x-ray crystallography will match the chemical structures that we put on the board. So that was the idea, they were going to work out the structure of DNA. Now, the thing about Watson and Crick, who at this point looked like this, they didn't look inordinately distinguished. In fact, Jim probably looked like, you've probably seen people who look approximately like that around MIT. He would have fit in right here and no one would have noticed. They were not actually x-ray crystallographers. They were just trying to model other people's data. And the best DNA crystallography data was a young woman Roselyn Franklin who was working in London. A very somewhat uneasy alliance with Maurice Wilkins. And in trying to read the history it's a bit complicated because, at least some of what I've read, I think that when Roslyn Franklin arrived at the lab she was told this DNA structure problem was hers. And Maurice Wilkins in whose lab she was working was told that he was sort of working for her. So there was a bunch of confusion in this. But, in any case, Roslyn Franklin was collecting crystallographic data. 
And Watson and Crick located some distance away in Cambridge were trying to come up with models that could explain the structure of DNA. And they learned about Rosalind's data. And it was her data that they used to work out the basis, her crystallographic data that they used when they put together their structure. So if it hadn't been for her they wouldn't have been able to make their discovery. So part of the reason I'm dwelling on this is I think their discovery of the structure of DNA was arguably one of the great intellectual advances of our time. It just opened doors. The whole field of molecular biology became possible once people suddenly saw that DNA was complementary strands. You could almost immediately see how you could copy genetic information. It laid the groundwork for what later turned out to be, you know, recombinant DNA and everything else. So much of this pivots around this one discovery. And I think I wouldn't be doing justice to this finding, which you all have heard about for years and years, if I had let you walk away from here thinking this was two young geniuses who sat down in a room with some crystallographic data and emerged with a structure that sort of changed the course of the study of biology. And, as you can see, changes our society and everything else. There are a couple of accounts of this, there are numerous accounts. One that I found pretty interesting is called "The Eighth Day of Creation," if you ever want to read an interesting book on science. This was Horace Judson's effort to try and put together a history of this happening. And with all history he's ultimately -- You know, there are some judgment calls by the historian, but in this one he certainly tried to be pretty fair and even-handed and he tried to get at the heart of what was going on. Watson wrote a book called "The Double Helix". Jim Watson's a very colorful character, quite brash particularly when he was younger, and that's reflected in this book. It's an interesting read. Probably a more balanced point of view for sure in "The Eighth Day of Creation". And there are now a lot of other books. But what I did, just to try and do this in about a minute or two, was I took a couple of the key things that happened during their adventure of trying to work out the structure of DNA and just kind of ran some of their missteps together, because even though this was a marvelous discovery it didn't just happen. So they started out, they were inspired by Linus Pauling's discovery of the alpha helix. And I don't know if you can remember the story, but what Pauling decided to do when he was lying in bed and with a strip of paper trying to work out the structure that was giving these reflections in the crystal structure, he said I'm going to start by ignoring the side chains. So that was a brilliant move in the case of the alpha helix because he was then able to figure out that that hydrogen bond between the carbonyl and the amino group, you could see how if you got a helix going it would repeat in exactly the way that would give the reflections that were observed in the crystallography. So that was how Watson and Crick sort of did it. Linus Pauling had shown the way. So they decided they would ignore the side chains of DNA. So they started out by saying we won't consider the As, the Ts, the Gs and the Cs. Well, given what you know about the structure of DNA that was not a helpful move in trying to work out the structure of DNA.
Another thing, for example, that happened was that Jim Watson has no lack of self-confidence. And so it turned out when he went to hear scientific talks he didn't take notes. And so he went to hear a talk on x-ray crystallography given by Rosalind Franklin, but he didn't quite remember the numbers right. He got the facts a little jumbled, and he and Francis spent a while trying to design models to data that wasn't the right data. It was just not quite remembered right, so there was kind of an inefficiency there. And then Jim had a bias almost to the end that the phosphate backbones they knew would somehow be on the inside and the bases would be on the outside of the structure. So if that's your sort of starting place then it's sort of hard. So Watson, excuse me, Francis Crick was beginning to suspect that maybe the bases were important. So he hired a young mathematician. And he said, "Can you see if you could work out whether there would be any chemical attraction between any pairs of bases?" And the young mathematician came back and said that he thought G might go with C and A with T. And given what happened here you might have thought that a light bulb would have gone off, but it didn't. And, in fact, Chargaff visited them and the light bulb went off for nobody. And, in fact, Chargaff wasn't a terribly big fan of what Watson and Crick were trying to do. So the pieces are piling up but still not there. Then a big experimental advance came from Rosalind Franklin. And that was she discovered that the DNA that they had been diffracting was actually a mixture of two forms. So there were actually two structures in the mix that were contributing to the diffractions. She was able to separate out the two kinds of DNA, DNA-A and DNA-B she called it. And so now this gave a much clearer diffraction pattern, and that's the diffraction pattern that she saw. And Watson and Crick managed to get a look at this data. And it's a little complicated how that happened, but Crick realized almost right away that there were two strands running in opposite directions. So he now knew it was 5 to 3 in one direction and 5 to 3 in the other direction like that. So you might have thought they were home-free, but no. Jim Watson immediately built a model that paired like with like, A with A, T with T, G with G. They wrote it up and they were ready to submit the paper. And they gave a presentation to their colleagues at the lab in Cambridge. And they were shot down. And one of the key things was they learned the chemical fact that most of the textbooks were wrong at that time in the way that they depicted the structure of guanine. If you look in your textbook, excuse me, here. So if you were to look in a textbook today you'd see guanine like this, but there is another way you could draw this. So this you may remember when we were talking about phosphoenolpyruvate that this is an enol form and this is a keto form. And this is the way most of the textbooks were showing guanine at the time. So they were looking at the structure of guanine in textbooks. And if you were trying to work out schemes for putting bases together you can see what's going on up here would be very different. And if we have a hydrogen here versus if we have an oxygen, if you're trying to say make hydrogen bonds at that particular position, I think all of you understand hydrogen bonds well enough to see how that would throw you off. So once that insight came, once they learned that then the rest of the structure came pretty fast.
And there's a movie about this. One of the nice things in it was sort of trying to recreate the experience where I think it was Watson who was shuffling these base pairs around. And he suddenly realized that you could set up base pairs with A and T and with G and C, and when you looked at them you could see they were geometrically exactly the same shape. You could just take the shape of the G and C pair and lay it right down on the A and T pair. And then you could see how you could build either a G-C or an A-T pair into the repeating structure of this DNA and it would be compatible. So they built a model and they thought, we can just hit the lights for a second here maybe. I just want you to see what that first model looked like. It looks like something you could hack together in a chemistry lab. They had the bases cut out of metal. And you can see just, you know, here the sort of retort stands you'd use in chemistry and various clamps that you would use for clamping a flask or something if you're doing a chemical lab. That's the stuff that they were using to put the model together. And they published then a paper in Nature that told about this result. That's the entire paper reporting the structure of DNA. And maybe you can see there's a little hand-drawn double helix right there that captures the elements. That is the paper, and that was in the journal Nature. And it had in it, right near the end, one of the coyest sentences in the scientific literature. They didn't want to go into all the details that if you had an A paired with a T and a G paired with a C and you pulled them apart then you could replicate the molecule by redoing it. So all they said was, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a copying mechanism for DNA." So this is a picture of Jim Watson wearing short pants at Cold Spring Harbor in 1953 reporting this structure of DNA. Cold Spring Harbor is on Long Island. It's been one of the Meccas for molecular biology since the 1940s. They have a famous symposium once a year. The topic changes every year and rarely repeats. And it was at one of those symposia -- This was the year that they discovered the structure of DNA. And there was Watson. So two years ago they had another meeting, a special meeting just exactly this time of year. It was in February within a couple of days of right now. So I gave this lecture and I showed the students in the class that year, I said here's a picture of Jim Watson displaying the structure. They're having a meeting 50 years later in 2003. And I'm going down there. I'm asked to give a talk. And I'll come back and I'll tell you what it was like. So I gave my lecture. I dashed out to the airport. I hopped on the plane. I went down and I registered. They gave me, you know, the stuff to get into my room, a little envelope with the key card and things. And I went up to my room. And I took out the key card. And what did I find myself looking at? The same picture I had shown to the class just a couple of hours earlier. Here's another picture of Jim the way he looked at the time when he made this amazing discovery. That's Salvador Luria who I mentioned. I'll tell you about him in a subsequent lecture. I was at another meeting a few years earlier where some of the old-timers were razzing each other, and someone showed this picture. And then they got up and they gave it a title. And that was "Picture of a Man Picking His Own Pockets". So they would tease each other a lot.
And I'm hoping maybe you'll get a chance to hear a little bit more about that soon. This is what Jim Watson looks like now. I asked to get a picture taken just so you could see he's still around and is very active and still very controversial. This doesn't make much of a difference. Here's a picture of Watson and Crick a little bit later just sitting out on a porch in Cold Spring Harbor. It's sort of right on the edge of a bay down there in a very relaxed kind of atmosphere that still permeates molecule biology research to this day. Francis Crick just died last July at the age of 88, so we've just lost the link to one of the two people who did this amazing experiment. OK. So I want to then set things up for the details of DNA replication. So there was a basic principle that came across from this that you could see how this could work, that DNA was sort of like having a photograph and a negative. And so the information is actually in there twice. It's just in different forms. And when I tell you about DNA repair in another lecture you can maybe see already how useful that is because if you damage one strand you're not really out of luck because you've still got the information in the other strand. And you could probably, on the basis of that, device a repair strategy if you thought about it. But more importantly for DNA replication finally gave an insight to this thing that had been vexing people forever. If you had to have all this information for making a cell, and every time a cell divided and you saw how it can happen pretty quickly with something like a bacterium of yeast, how could you accurately copy all that DNA, excuse me, all that genetic information? How is it stored? How could it be done? And once you saw ah, it's just a matter of separating the strands, and if there's an A there put a T there, if there's a C you put a G and so on, was a huge breakthrough. But that then didn't tell people how DNA replicated or even if this is the mechanism. You can actually come up with all kinds of models for how you could replicate things based on this principle, including crisscrossing between strands and all sorts of things. The predominant model and perhaps the simplest one was called semi-conservative. And it thought of the problem in this kind of way, that if you had two strands of the original DNA molecule and then you pulled them apart that one of the strands here would become one of the strands of the daughter, and then the new one would be here and the same thing would happen on the other side. And then if you did it again this thing would happen again with a new strand. This time the skinny strand here would be like this, the skinny strand here would be like this, and then this one again. We'd have one that was nearly synthesized plus one of the originals. So this model was one of the simplest because it kept this strand intact throughout the whole process while some of the other models had them being patched back together, all based on the idea that A pairs with T and G pairs with C. But proving that this was the correct model was then another important advance. And that was done by Frank Stahl and Matt Meselson. Actually, I think I'll skip this for right now. Matt is a professor up at Harvard, just up at Harvard Square not very far from here, still very active. Frank Stahl is a professor out in Oregon. He's still active. So one of the differences about this course is a lot of the things I'm telling you about -- And this is pretty old stuff right now, right, molecule biology. 
The people who did these are still around and very active. Most of modern biology is a pretty young science, and many of the major characters are still running around and with us today. So, anyway, Matt and Frank were at Caltech. And they with a bunch of other students had an apartment. And they were sitting around trying to work out a way to figure out this model. And they came up with an idea, and that was to see if you could differentially label what we might call "old DNA" and the "new DNA" here. And since it's chemically the same stuff it's a bit of a trick. How do you tell old DNA from new DNA? So their idea was since nitrogen comes in two different isotopes, N14 which is the common one and N15 which is one mass unit heavier, that maybe you could start out with the DNA, for example, grown in N15. And then when you started replication switch to N14. And then you'd be able to tell, if you could separate these molecules on the basis of their density since the one with the N15 would be heavier than the one with the N14, then maybe you could work this out. And the story goes, this has been written, they were sitting arguing about this, or talking about this idea at the table. And it was a good idea but there was a problem. And that was how could you separate the two kinds of DNA based on their density? So they had a piece of fingernail and they were trying to see whether they could get it to float by dissolving more and more sugar. And they figured if they added more and more sugar the water would get denser and denser so they could float the fingernail. And they weren't able to do it. But like all chemists, and probably some places here at MIT, they had a periodic chart right in their living room. So they went and they looked. And then they looked at sodium. And they went down the periodic table and then they saw cesium. And thought maybe, you know, if you took a solution of cesium chloride and you put it in a centrifuge and you spun really hard then you'd get a gradient of varying concentrations, of slightly different concentrations of cesium chloride. And that they could tune that to a range that would discriminate between the heavy and the lighter forms of DNA. So the experiment they did is known as the Meselson-Stahl experiment. But, as I say, these are names that come from real people. And the idea was pretty simple. They grew the bacteria for many generations -- -- in N15 medium. This is the so-called heavy or H isotope -- -- of nitrogen. And then at time equals zero in their experiment, when they were ready to start the experiment, they switched to medium with N14, which we'll think of as the light or the L isotope. And then they isolated DNA -- -- after, let's say, increasing rounds of replication, which you could tell simply by measuring how much DNA was in your bacterial culture, when the bacteria had doubled their DNA. And this is the data they got which looks something like this. In fact, in this case the blackboard representation is pretty close. So this is cesium chloride. And it has been centrifuged very hard so that there's a gradient now that's light at the top and a little heavier at the bottom of the gradient. There's a little more cesium chloride per mil here than there is there in the tube. And I'll just give us three little sort of reference marks here. So what they found when they started was that all of the DNA was at that position down at the heavy end. And then this is after one generation. So the DNA has now doubled.
What they found was that all the DNA was now at this intermediate position. And after two generations or two DNA replications they now found that some of the DNA was here, some of the DNA was there. And if they went to three or more what they saw was they began to pile stuff up there. And I think most of you could probably make the connection between that data and that picture that I've got up there. This is the heavy-heavy DNA. This is the heavy-light. So this would be heavy-heavy, heavy-light, light-heavy. After one round it will all be here. After two we have heavy-light, but this one is light-light, light-light, light-heavy. And so now we've got light-light, the heavy-light, no heavy-heavy is ever going to show up again. And the longer you do this the more you'll get the light accumulating. A very simple experiment done by real people but enormously powerful because now it showed that this basic idea, you have the photograph and negative, you pull them apart and copy them was right. So at this point you begin to see why the data of Avery's that before people had trouble accepting, all of a sudden now you could really see that DNA was the genetic material. And this is what sort of ushered in this great burst of molecular biology. So in the next lecture what we're going to start doing is, this is all great, but once we start figuring out how to replicate it we're going to have to get down to enzymes and biochemical steps. And there are some formidable challenges to replicating DNA, and it's also awesome. I'll tell you at the beginning of next lecture how much DNA we have and just how accurate it is. It always blows me away. I'll see you then. Take care.
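To make the band-counting explicit, here is a minimal sketch, not from the lecture, that tracks strand isotopes through a few rounds of semiconservative replication; the function and variable names are illustrative.

```python
# Semiconservative replication, as in the Meselson-Stahl reasoning above:
# each duplex splits and every old strand gets a newly made (light, N14) partner.
from collections import Counter

def replicate(duplexes):
    """One round: each (s1, s2) duplex yields two daughters,
    each keeping one parental strand plus a new 'L' (N14) strand."""
    daughters = []
    for s1, s2 in duplexes:
        daughters.append((s1, "L"))
        daughters.append((s2, "L"))
    return daughters

population = [("H", "H")]          # start: all DNA grown in N15 ("heavy")
for generation in range(1, 4):
    population = replicate(population)
    bands = Counter("".join(sorted(d)) for d in population)
    total = len(population)
    fractions = {band: count / total for band, count in bands.items()}
    print(f"generation {generation}: {fractions}")

# This sketch prints:
# generation 1: {'HL': 1.0}
# generation 2: {'HL': 0.5, 'LL': 0.5}
# generation 3: {'HL': 0.25, 'LL': 0.75}
```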
https://ocw.mit.edu/courses/8-06-quantum-physics-iii-spring-2018/8.06-spring-2018.zip
PROFESSOR: So let's do a connection formula and use it to solve a problem. The derivation of such connection formulas, we'll face it the next time. And we'll go further applications of the methods. So, yes. AUDIENCE: So [INAUDIBLE] that problem like [INAUDIBLE] where the wave function vanished. Is that a problem of the [? our ?] perturbation or a classical problem? Because at that point, then it means the kinetics and the potential are equals to each other. So like the potential cannot be bigger than the energy, then that's a classical thing. That's not a perturbation problem. PROFESSOR: Right. This is not a problem of the perturbation theory. It's just our lack of knowledge, our ignorance of how the solution looks near there. So there's nice WKB formulas for this solution away from the turning points. Those are this. But the solutions near the turning points violate the semiclassical approximation. Therefore, you have to find the solution near the turning points by any method you have. That is not going to be the semiclassical method. And then you will find the continuation. Now, there's several ways people do this. The most down to earth method, which is I think the method we're going to use next time, is trying to solve the thing near there, the solution. People that are more sophisticated use complex variables, methods in which they think of the solution in the complex plane, the x plane becomes complex. And they're coming to the turning point and they go off the imaginary axis to avoid it and come on the other side. It's very elegant, very nice, harder to make very precise, and a little difficult to explain. I don't know if I'll try to do that. But it's a nice thing. It's sort of avoiding the turning point by going off the axis. It sounds crazy. So here is a connection formula. Here is x equals a. Here is v of x. And here is a solution for x less than a, little a, I'll write it like that, p of x cosine x to a kappa k of x vx prime minus pi over 4 minus B over square root of p of K of x sine x to a k of x prime vx prime minus pi over 4. So look, this is a general solution in allowed region. [INAUDIBLE] no, it doesn't look like what you wrote here. But sines and cosines are linear combinations of these things. And why do I put this silly minus pi over 4? Because that's what my solutions connect to solutions on the other side. Which is to say that the solution on the other side, people that work this connection formulas, discovered that takes this form, a to x kappa of x prime vx prime plus b over the square root of kappa of x e exponential a to x kappa of x prime vx prime. So here it is. Those numbers, a and b, are things you have to keep track now. They say if your solution for x greater [INAUDIBLE] has this decaying exponential and this growing exponential with b an a, your solution far to the left will look like this. It's a pretty strange statement. Sometimes you don't want b to be 0. Sometimes you do. Let's do an example with this stuff so that you can appreciate what goes on. So that's your first look at a connection condition. There's a little bit of subtleties on how to use them. We will discuss those subtleties next time, as well. But let's use it in a case where we can make sense of this without too much trouble. So here is the example. And somebody wants you to solve the following problem. You have some slowly varying potential that grows, grows indefinitely. v of x. And you wish to find the energy eigenstates. In particular, you wish to find the energies. 
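For reference, here is the connection formula quoted a moment ago, written out in one place. This is a reconstruction rather than a copy of the blackboard: the factor of 2 on the cosine term is the usual Airy-function normalization, and different treatments, possibly including the version written in this lecture, absorb it into the constants differently.

```latex
% Connection formula at a turning point x = a (allowed region x < a, forbidden x > a):
\[
\frac{2A}{\sqrt{k(x)}}\cos\!\Big(\int_x^a k(x')\,dx'-\tfrac{\pi}{4}\Big)
-\frac{B}{\sqrt{k(x)}}\sin\!\Big(\int_x^a k(x')\,dx'-\tfrac{\pi}{4}\Big)
\;\;\longleftrightarrow\;\;
\frac{A}{\sqrt{\kappa(x)}}\;e^{-\int_a^x\kappa(x')\,dx'}
+\frac{B}{\sqrt{\kappa(x)}}\;e^{+\int_a^x\kappa(x')\,dx'}
\]
```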
What are the possible energies of this potential? This potential with have a infinite wall here. So the potential is infinite here. And it grows like that up on this side. We're going to try to write the solution for this. So we'll do this with the WKB solution. And let me do it in an efficient way. So what do we have here? This is the analog of our point x equals a. Is that right? This is the point where the turning solution goes funny. So we will call it, for the same reason we did before, this point a will make matters clearer. We truly don't know what that point is because we truly don't know what are the allowed energies. But so far we can write E as a variable that we don't know. And a can be determined if you know E because you know the shape of the potential. OK. So let's think of the region, x much greater than a. We are here. We're in a forbidden region. And the potential is still slowly varying. Let's assume that slope is small. So a solution of this type would make sense. So we must have a WKB solution on this right hand side to the right of x equals a. And that's the most general solution we'll have. On the other hand, here there's two types of solutions, a growing exponential and a decaying exponential. And we must have only the decaying exponential. Because this potential never turns. If this potential would grow here and then turn down, you might have what is called the tunneling problem. And this has to be rethought. But this problem is still a little easier. We have just this potential growing forever. And on the right, we must have b equal to 0. For x greater than a solution, b is equal to 0. And this solution must have a different from 0. But if the solution has a different from 0, we now know the solution on this region, x significantly less than a. In this region we know the solution is given by that formula up there. So our solution for x much less than a, the solution must take the form whatever a is. 1, 2, 3. I kind of normalized these things yet. So the solution, I'll write it, psi of x, will be of the form 1 over square root of k of x cosine of x to a k of x prime vx prime minus pi over 4. All right. That's what WKB predicts because of the connection condition. On the forbidden region, you know which wave exists. And therefore, far to the left of the turning point, you'd also know the wave function. It's the term within a. So have we solved the problem? Well, I don't think we have. We still don't know the energy, so we must have not solved it. In fact, it doesn't look like we've solved it at all. Because at x equals 0, these wave functions should vanish because there's a hard wall. And I don't see any reason why it would vanish. So let's do a little work here expanding this and orienting it a little better. So I want to write this integral from 0, x to a. You're having an integral. You have 0, x, and a. Because we are in the x less than a region, this integral is the one that we have here. I'll write it as an integral from 0 to a minus an integral from 0 to x. So integral from x to a is equal to integral from 0 to a minus an integral from 0 to x. This will make things a little clearer. In fact, we could do things still a little easier. So what do we have here? We have this exponential, the wave function. Not an exponential, a trigonometric function. 1 over square root of k of x cosine of minus the integral from 0 to x of k of x dx prime. This integral gives rise to two integrals and I wrote the first. Then I come with the other [? sine ?] 
plus the integral from 0 to a of k of x prime dx prime minus pi over 4. And let's call this thing delta. So let's explore this wave function a little more. So things have become kind of nice. There's no x dependence here. That's very nice about this part of the formula. This is an angle. No x dependence. And the x dependence is just here from this term. And it's a nice x dependence because it's an integral from 0 to x. So it's kind of nice. The upper limit has the x. It's all kind of elegant. So the trigonometric function of this, cosine of a difference of things, is equal to cosine of the first term, 0 to x k of x prime dx prime, cosine delta, plus sine of the first term, 0 to x k of x prime dx prime, sine delta. You have this quantity and this delta. So I use this trigonometric sum. OK. But now you can see something nice. What did we say about the wave function? It had to vanish at the origin. And let's look at these two functions. Which one vanishes at the origin? You have cosine of the integral from 0 to x when x is equal to 0 at the origin is cosine of 0, which is 1. On the other hand, when x is small, x goes to 0. You get 0 for the integral and the sine vanishes. So this is the right term. So this wave function would be correct if this term would be absent. This is a term that you must make absent. Psi of x without that term would be a valid wave function. It is the wave function, therefore, the right wave function, if we demand that cosine delta be equal to 0. So now we're imposing a very non-trivial condition. This wave over there. This contribution to the argument, to the angle here, this delta must be such that cosine delta is 0. In which case, this term would disappear and we would have a good wave function. Psi of x, if this is true, is 1 over square root of k of x times sine delta, which is another number, times sine of 0 to x k of x prime dx prime. So we need cosine delta equal to 0. You know, this argument gives you this wave function in a nice way here written what it is. But we would have been able to find this even faster. If you just demand that Psi at x equals 0 is 0, you must have that this thing, the integral from 0 to a of this quantity minus pi over 4 must have 0 cosine. And that's the condition we did find. The advantage of our rearrangement is that when that happens, the whole wave function looks like that. And that's kind of nice. That gives you the picture of the wave function. A fairly accurate picture of the wave function in this region. Not very near the turning point, but you got that. So what is this cosine delta going to 0? Well then delta must be 2n plus 1 times pi over 2. So the places where the cosine is 0 is pi over 2. That's for n equals 0. 3 pi over 2. So for n equal 1. And it just goes on. It's the vertical axis on the unit circle. So this is for n equals 0, 1, 2, 3, goes on and on. And this is a very wonderful condition. This says that the integral from 0 to a of k of x prime dx prime minus pi over 4 is 2n plus 1 times pi over 2. So actually, if you multiply out the 2 here, you get an n pi, and then you have a pi over 2. And together with the pi over 4 that comes from that term it becomes n plus 3/4 times pi. It's a Bohr-Sommerfeld quantization condition. Look, I box that equation because that really gives you the answer. Now, why? Because you know what k is. p of x, which is equal to h-bar k of x, is the square root of 2m times E minus v of x. And if you know E, you know a. So you have now an integral. And you take, for example, n equals 0. So you want the integral to be equal to 3/4 pi.
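Collected in one line, in the lecture's notation, the boxed condition is the following.

```latex
% WKB quantization condition for a hard wall at x = 0 and a turning point at x = a(E):
\[
\int_0^{a}k(x)\,dx=\Big(n+\tfrac{3}{4}\Big)\pi,\qquad n=0,1,2,\dots,
\qquad \hbar k(x)=\sqrt{2m\big(E-V(x)\big)},\qquad V(a)=E .
\]
```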
But you will have to start changing the value of the energy with your computer if you cannot do the integral analytically and find, oops, for this value of the energy the integral of what I know-- this is a known function-- gives me this value. So this is a very practical way of finding the energy levels. It does give you the approximate energy levels. And it's remarkably accurate in many cases. It also has a little intuition: n equals 0 would be like the ground state, n equals 1 the first excited state. And indeed, that makes sense. Because this integral from 0 to a of k is the total phase of the wave function as you move from 0 to a. And that phase is a number, n times pi plus a little bit. So it has room for n zeros. The phase as you move from 0 to a in the wave function-- the sine function will have n zeros if this condition is satisfied, consistent with that solution. So that's it. We'll continue next time and derive the connection formulas of WKB exactly. All right.
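Here is a minimal numerical sketch of the procedure just described, assuming for illustration a linear potential V(x) = g x, which is not the lecture's specific example; the helper names and parameter values are made up for this sketch.

```python
# Numerically adjust E until the WKB phase integral equals (n + 3/4)*pi,
# as described above, for an illustrative potential V(x) = g*x with a hard wall at x = 0.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbar, m, g = 1.0, 1.0, 1.0           # convenient units, illustrative values

def V(x):
    return g * x                      # slowly varying potential rising to the right

def phase_integral(E):
    """Integral of k(x) = sqrt(2m(E - V(x)))/hbar from the wall to the turning point a(E)."""
    a = E / g                         # turning point: V(a) = E
    integrand = lambda x: np.sqrt(2 * m * (E - V(x))) / hbar
    value, _ = quad(integrand, 0.0, a)
    return value

def wkb_energy(n, E_lo=1e-6, E_hi=50.0):
    """Solve phase_integral(E) = (n + 3/4)*pi for E by root finding."""
    target = (n + 0.75) * np.pi
    return brentq(lambda E: phase_integral(E) - target, E_lo, E_hi)

for n in range(3):
    print(f"n = {n}:  approximate WKB energy = {wkb_energy(n):.4f}")
```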
https://ocw.mit.edu/courses/5-61-physical-chemistry-fall-2017/5.61-fall-2017.zip
The following content is provided under a Creative Commons license. Your support will help MIT Open Courseware continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT open courseware at ocw.mit.edu. ROBERT FIELD: Today's lecture is one where-- it's a lecture I've never given before. And it's very much related to the experiments we are doing right now in my research group. And so basically, we have a chirped pulse of microwave radiation, which is propagating through a sample. It causes all of the molecules in the sample to be prepared in some way. We call it polarized. And this polarization relaxes by what we call free induction decay. And they produce a signal which-- so we have a pulse of radiation that propagates through the sample. The two-level systems in the sample all get polarized, which we'll talk about today. And they radiate that polarization. And we collect it in a detector here. And so the two important things are this is a time independent experiment, and that we have a whole bunch of molecules. And they're interacting with the radiation in a way which is complicated. Because this is not-- each one of them has quantum states, but all of the particles in this sample are somehow interacting with the radiation field in a way which is uncorrelated. So we could say all of these particles are either bosons or fermions. But we're not going to symmeterize or anti-symmeterize. Each of these particles is independent. And we need a way of describing the quantum mechanics for an ensemble of independent particles. So it's a big step towards useful quantum mechanics. And I'm not going to be able to finish the lecture as I planned it. So you should know where I'm going. And I'm going to be introducing a lot of interesting concepts. The first 2/3 of lectures votes are typed. And you could have seen them. And the rest of them will be typed later today. This is based on material in Mike Fayer's book, which is referenced in your notes. This book is really accessible. It's not nearly as elegant as some of the other treatments of interaction of radiation with two-level systems. Now I talked about interaction of radiation with two-level systems in lecture number 19. And this is a completely different topic from that, because in that, we were interested in many transitions. Let me just say the radiation field that interact with the molecule is weak. It interacts with all the molecules, and the theory is for a weak pulse-- and the important point in lecture 19 was resonance. And so we made the dipole approximation. And each two-level system is separately resonant and is weakly interacted with, and does something to the radiation field. Now here, we're going to be talking about a two-level system, only two levels. And the radiation field is really strong, or is as strong as you want. And it does something to the two-level system which results in a signal. And because the radiation field is strong, it's not just a matter of taking two levels and mixing them. The mixing coefficients are not small. It's not linear response. The mixing is sinusoidal. The stronger the radiation field, the mixing changes, and all sorts of interesting things happen. So this is a much harder problem than what was discussed in lecture number 19. And in order to discuss it, I'm going to use some important tricks and refer to something called the density matrix. The first trick is we have this equation which can easily be derived. 
And most of this lecture, I'm going to be skipping derivations. Some of the derivations are going to be in the notes. So we have some operator. And we want to know the time dependence of the expectation value of that operator. And it's possible to show that the time derivative of the expectation value of the operator A is given by i over h-bar times the expectation value of the commutator of the Hamiltonian with A, plus the expectation value of the partial time derivative of the operator A. So this is a general and useful equation for the time dependence of anything. And it's derived simply by taking the-- applying the chain rule to this sort of thing. So we have three terms. And when you do that, you end up getting this equation. So this is just this ordinary equation. And anyway, so this is what happens. So we're going to have some notation here. We have a wave function. And this is a capital psi, so this is a wave function, a time dependent wave function that satisfies the time dependent Schrödinger equation. And we're going to replace that by just something called little t. And we can write this thing psi of x and t as the sum over n Cn psi n of x. And this, in bracket notation, becomes the sum over n of Cn times ket n. So we have a complete orthonormal set of functions. And this thing is normalized to one. And now I'm going to introduce this thing called the density matrix. This is a very useful quantum mechanical quantity which replaces the wave function. It repackages everything we know from the time dependent Schrödinger equation and the Schrödinger picture of a wave function. It's equivalent. It's just arranging it in a different way. And this different way is extremely powerful, because what it does is it gets rid of a lot of complexity. I mean, when you have the time dependent wave functions, you have this e to the minus i E t over h-bar always kicking around. And we get rid of that for most everything. And it also enables us to do really, really beautiful, simple calculations of the time dependence of expectation values. It's also a quantity where, if you have a whole bunch of different molecules in the system, each one of them has a density matrix. And those density matrices add. And you have the density matrix for an ensemble. And so if the populations of different levels are different, the weights for each of the levels, or each of the systems, is taken care of. But we don't worry about coherences between particles unless we create coherence between particles. So this is a really powerful thing. And, unlike the wave function, it's observable, because the diagonal elements of this matrix are populations. And the off-diagonal elements, which we call coherences, are also observable. And if you look at the Fourier transform of the emission from this system, it will consist of several frequencies. And those frequencies are the off-diagonal elements, with the amplitudes giving the relative weights of those frequencies. And so one can determine everything in the density matrix experimentally. Now it's really-- it's still indirect because you're making experimental measurements. But we think about this thing in a way we don't think about the wave function. It's really important. And this is the gateway to almost all of modern quantum mechanics and statistical mechanics-- quantum statistical mechanics. And so this is a really important concept. And we've protected you from it until now. And since this is the last lecture, both in this course and in my teaching of this course forever, I want to talk about this gateway phenomenon. So what is this?
Well, we denote it by this, this strange notation. I mean, you're used to this kind of notation where we have the overlap of a bra with a ket, or a bra with itself. But this is different. You know, this is a number and this is a matrix. And if we have a two-level system, then we can say that ket t is equal to C1 of t times ket 1 plus C2 of t times ket 2. So state 1, state 2, and we have time dependence. Now those could be-- there's lots of stuff that could be in here. And this is going to be a solution of the time dependent Schrödinger equation. So since it's an unfamiliar topic, I'm going to spend more time talking about the mechanics than how you use that to solve this problem. But let's just look at it. So for a two-level system, we have-- it's a matrix with a 1, 1; a 1, 2; a 2, 1; and a 2, 2 element. And so we want the rho 1, 1 matrix element there. And so it's going to be a bra 1 here, then we're going to have a ket 1 here. And then we have the C1 ket 1 plus C2 ket 2. And then we have C1 star bra 1 plus C2 star bra 2. And so have I got-- am I doing it right now? So the first thing we do is we look at this inside part. And we have C1, C1 star. And we have a 1 with a 1. And we have C2, C2 star. Now I'm getting in trouble, because I want this to come out to be only C1, C1 star. So what am I doing wrong? AUDIENCE: [INAUDIBLE] both have C2 halves the left hand and the right hand half are both [INAUDIBLE]. See, on the left side, you have 1 on 1 [INAUDIBLE]. ROBERT FIELD: So here-- And that's C2 C1 star, but that's 0. And so anyway, I'm not going to say more. But this combination is 1, this combination is 0, this combination is 0, this combination is 1. And we end up getting-- and then we end up just getting this. Rho 1, 2 is equal to C1 C2 star. Rho 2, 1 is equal to C2 C1 star. And rho 2, 2 is equal to C2, C2 star. So we have the elements of this matrix. And they are expressed in terms of these mixing coefficients for the states 1 and 2. Now, if we look at this, we can see that rho 1, 1 plus rho 2, 2 is equal to C1, C1 star plus C2, C2 star. And that's the normalization integral. That's 1. And we have rho 1, 2 is equal to rho 2, 1 star. So the density matrix is normalized to 1. And it's a Hermitian matrix. And we can use all sorts of tricks for Hermitian matrices. Now we're interested in the time dependence of rho. And so we're going to use this wonderful equation up here in order to get the time dependence of rho, because rho, like A, is a Hermitian operator. And so we could do that. And so the time dependence of rho is going to be equal to the time derivative of ket t times bra t, where we operate first here, plus ket t times the time derivative of bra t. And when we do this, what we end up getting-- well, so we have a time dependence of a wave function. So we use the time dependent Schrödinger equation. And we insert that. And using the time dependent Schrödinger equation we have things like-- so every time we take the wave function-- the derivative of a function, we get a Hamiltonian and so on. And so we can express this time dependence of the density matrix by just inserting the time dependent Schrödinger equation repeatedly. This is why I say this is repackaging the Schrödinger picture, repackaging the wave function and writing everything in terms of these matrices. So that's the first-- that's what happens here. Sorry. That's what happens here. And then we write this one, and we get plus 1 over minus i h-bar times ket t bra t times H, which is rho H. And we recognize that the first piece is just 1 over i h-bar times H rho.
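Collecting the results just worked out, together with the expectation-value equation quoted at the start, with the i over h-bar factor written explicitly:

```latex
% Two-level density matrix, its basic properties, and the expectation-value equation:
\[
\rho=|\,t\,\rangle\langle t\,|=
\begin{pmatrix}\rho_{11}&\rho_{12}\\ \rho_{21}&\rho_{22}\end{pmatrix}
=\begin{pmatrix}c_1c_1^{*}&c_1c_2^{*}\\ c_2c_1^{*}&c_2c_2^{*}\end{pmatrix},
\qquad \rho_{11}+\rho_{22}=1,\qquad \rho_{21}=\rho_{12}^{*},
\qquad
\frac{d\langle\hat A\rangle}{dt}
 =\frac{i}{\hbar}\big\langle[\hat H,\hat A]\big\rangle
  +\Big\langle\frac{\partial\hat A}{\partial t}\Big\rangle .
\]
```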
So the time dependence of the density matrix is given by this commutator. And the commutators are kind of neat because usually what happens is these two-level things have very different structures, and you get rid of something you don't want to deal with anymore. And so now we actually evaluate these things. And we do a lot of algebra. And we get these equations of motion for the elements of the density matrix. And so we find the time dependence of the diagonal element for state 1 is opposite that for state 2. In other words, population from state 1 is being transferred into state 2. And that is equal to minus i over h-bar times H 1, 2 rho 2, 1 minus H 2, 1 rho 1, 2. And we have rho 1, 2 time dependence is equal to rho 2, 1 time dependence star. And that comes out to be minus i over h-bar times, H 1, 1 minus H 2, 2, times rho 1, 2, plus rho 2, 2 minus rho 1, 1, times H 1, 2. This is very interesting, but now we have a couple of coupled differential equations and we can solve them. But we want to do a trick where we write the Hamiltonian as a sum of two terms. This is the time independent part, and this is the time dependent part. This is the part that gives us trouble. This is the part that takes us into territory that I haven't talked about in time independent perturbation theory. But it's still-- it's perturbation theory. This is supposed to be something that is different from and usually smaller than H 0. And so we do this. So H 0, operating on any basis state n, gives En times n. And so we could call these E zeroes, but we don't need to do that anymore. And now we do a lot of algebra. We discover that the time dependence of the density matrix is given by minus i over h-bar times the commutator of H 1 of t with the density matrix. So this is very much like what we did before, but now we have that the time dependence is entirely due to the time dependent part of the Hamiltonian. So everything associated with H 0 is gone from this equation of motion. So now let's just be specific. So here is a two-level system. This is state 1. This is state 2. This difference is delta E. And we're going to call that h-bar omega 0. So this is the frequency difference between levels 1 and 2. H 0 is equal to the diagonal matrix with minus h-bar omega 0 over 2 and plus h-bar omega 0 over 2 on the diagonal, and 0, 0 off the diagonal. We like that, right? It's diagonal. h1 is where all the trouble comes. And we're going to call its off-diagonal element h-bar omega 1 cosine omega t, where h-bar omega 1 is e x 1, 2 times E0. This E0 is not an energy. This is an electric field. So this is the strength of the perturbation. And e x 1, 2 is the dipole matrix element between levels 1 and 2. So we have a dipole moment times an electric field divided by h-bar. So this quantity here has units of angular frequency. And we call it omega 1, which is the Rabi frequency. It gets a special name because Rabi was special. And so we're going to be-- and this is-- expresses the strength of the interaction. So we have a molecular antenna mu 1, 2. And we have the external field. And they're interacting with each other. And so this is the strength of the badness, except it's goodness. Because we want to see transitions. So now we do a little bit of playing with notation because there's just a lot of stuff that's going on, and we have to understand it. So we're going to call the state-- we're going to separate the time dependent-- the time independent part of the wave functions from the time dependent. And so state 1-- this is the full time dependent wave function. And it's going to be e to the minus i omega 0 t over 2-- in other words, we should have had zeros here-- times 1 prime, right.
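The equations of motion just stated, written out; the sign convention on H0, with minus h-bar omega 0 over 2 for the lower level, is assumed here from the description of the blackboard, and mu 1, 2 denotes the transition dipole matrix element the lecture calls e x 1, 2.

```latex
% Liouville-von Neumann equation and its two-level matrix elements:
\[
i\hbar\,\dot\rho=[H,\rho],\qquad
\dot\rho_{11}=-\dot\rho_{22}
  =-\frac{i}{\hbar}\big(H_{12}\rho_{21}-H_{21}\rho_{12}\big),\qquad
\dot\rho_{12}=\dot\rho_{21}^{*}
  =-\frac{i}{\hbar}\Big[(H_{11}-H_{22})\rho_{12}+(\rho_{22}-\rho_{11})H_{12}\Big]
\]
\[
H=H^{(0)}+H^{(1)}(t),\qquad
H^{(0)}=\frac{\hbar\omega_0}{2}\begin{pmatrix}-1&0\\ 0&+1\end{pmatrix},\qquad
H^{(1)}(t)=\hbar\omega_1\cos(\omega t)\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\qquad
\omega_1=\frac{\mu_{12}E_0}{\hbar}.
\]
```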
So this is the time-independent part, and this is the time-dependent part. And 2 is e to the minus i omega 0 t over 2, times 2 prime. Notice these two guys have the same sign. This bothers me a lot. But it's true, because we have opposite signs here, and we have a bra and a ket. And they end up having the same signs. So that means that H 1 looks like this: 0, 0 on the diagonal, omega 1 cosine omega t, e to the minus i omega 0 t here. And here we have omega 1 cosine omega t, e to the plus i omega 0 t. So this is a 2 by 2 matrix. Diagonal elements are 0. Off-diagonal elements are this omega 1, the strength of the interaction, times the cosine at the frequency of the applied radiation, times the oscillating factor. So now we go back and we calculate the equation of motion, bringing in this H 1 term. And so we have minus i over h-bar times the commutator of H 1 with rho. And we get some complicated equations of motion. And I don't really want to write them out, because it takes a while, and they're in your notes. And I'm going to make the crucial approximation, the rotating wave approximation. Notice we have a cosine omega t. We can write that as e to the i omega t plus e to the minus i omega t, over 2. And so basically what we're doing is we're going to do a trick. We have the Hamiltonian, and we're going to go to a rotating coordinate system. And if we choose the rotation frequency of that coordinate system right, we can almost exactly cancel the omega 0 terms. And so we have two terms, one rotating like this, which is canceling or trying to cancel omega 0, and one rotating like this, which is adding to omega 0. And so what we end up getting is a slowly oscillating term, which we like, and a rapidly oscillating term, which we can throw away. That's the approximation. And this is commonly used. And I can write this in terms of transformations. And although we think about going to a rotating coordinate system, for each two-level system, we can rotate at a different frequency to cancel, or nearly cancel, the off-diagonal elements. So although the molecule doesn't rotate at different frequencies, our transformation attacks the coupling between states individually. And you can apply as many rotating wave transformations as you want. But we have a two-level system. So we only have one. And so we do this. And we skip a lot of steps, because it's complicated and because we don't have a lot of time. We now have the time dependence of the 1, 1 element. And it's expressed in terms of omega 1. I've skipped a lot of steps. But you can do those steps. The important thing is what we're going to see here. We have e to the i, omega 0 minus omega, t, times rho 1, 2. And we have a minus e to the minus i, omega 0 minus omega, t, times rho 2, 1. And we have rho 2, 2 dot is equal to minus rho 1, 1 dot. And we have rho 1, 2 dot-- this is the important guy-- is equal to i omega 1 over 2, e to the minus i, omega 0 minus omega, t, times rho 1, 1 minus rho 2, 2. So we have a whole bunch of coupled differential equations, but each of them has these factors here where you have omega 0 minus omega. I've thrown away the omega 0 plus omega terms. And now it really starts to look good, because we can make these-- so when we make omega equal to omega 0, well, this is just 1. Everything is simple. We're on resonance. And so what we do is we create another symbol, delta omega, which is omega 0 minus omega. So this omega is the applied oscillation frequency. And this omega 0 is the intrinsic level spacing in the molecule. And so we can now write the solution to this differential equation for each of the elements of the density matrix.
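Before writing down the closed-form rotating wave solutions, it can help to integrate the equation of motion directly. The sketch below works in units with h-bar equal to 1, does not make the rotating wave approximation, and uses invented values of omega 0, omega 1, and omega; it only shows that d rho / dt equals minus i times the commutator of H of t with rho is something you can just step forward numerically.

```python
import numpy as np

# Two-level system driven by an oscillating field, in units with hbar = 1.
# H0 = diag(-w0/2, +w0/2); H1(t) has off-diagonal elements w1*cos(w*t).
# The parameter values are invented for illustration only.
w0, w1, w = 10.0, 1.0, 10.0        # level spacing, Rabi frequency, drive frequency

def H(t):
    c = w1 * np.cos(w * t)
    return np.array([[-w0 / 2, c], [c, w0 / 2]], dtype=complex)

def drho(t, r):                    # the equation of motion: -i [H(t), rho]
    return -1j * (H(t) @ r - r @ H(t))

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start with everything in state 1
dt, steps = 1e-3, 20000                           # integrate out to t = 20 with RK4

for n in range(steps):
    t = n * dt
    k1 = drho(t, rho)
    k2 = drho(t + dt / 2, rho + dt / 2 * k1)
    k3 = drho(t + dt / 2, rho + dt / 2 * k2)
    k4 = drho(t + dt, rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print("rho11, rho22 at t = 20:", rho[0, 0].real, rho[1, 1].real)
print("trace preserved:", np.isclose(np.trace(rho).real, 1.0))
```

On resonance this shows population sloshing back and forth at roughly the Rabi frequency, with small fast wiggles on top, which are exactly the terms the rotating wave approximation throws away.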
And we're going to actually define another symbol. We're going to have the symbol omega sub e. This is not the vibrational frequency. This is just a symbol that is used a lot in the literature, and it comes out to be the square root of delta omega squared plus omega 1 squared. So in solving the density matrix equation, it turns out we care about this extra frequency. If delta omega is 0, well, then there's nothing surprising, and omega e is just omega 1. But this allows for there to be an effect of the detuning. So basically what you're doing is, when you go to the rotating coordinate system, you have an intrinsic frequency separation. And so in the rotating coordinate system, you have two levels that are different. And there's a Stark effect between them. And you diagonalize this Stark effect using second order perturbation theory, or by just diagonalizing the matrix. And so that gives rise to this extra term here, because you have the oscillation frequency and the Rabi frequency. And anyway, when you do the transformation, you get these terms. And so here is now the solution in the rotating wave approximation. Rho 1, 1 is equal to 1 minus omega 1 squared over omega e squared, sine squared omega e t over 2. We have rho 2, 2 is equal to just omega 1 squared over omega e squared, sine squared omega e t over 2. We have rho 1, 2, which is equal to something more complicated looking. Omega 1 over omega e squared, times, i omega e over 2 sine omega e t, minus delta omega sine squared omega e t over 2, times now e to the minus i delta omega t. It looks complicated. And we get a similar term for rho 2, 1. It's just equal to rho 1, 2 complex conjugate. And so now what we see is that these populations are oscillating at omega e, not omega 0. They're oscillating at a slightly shifted frequency. But they're oscillating sinusoidally. And we have an amplitude term, which is omega 1 over omega e, quantity squared. Omega e is a little bigger than omega 1. So this is less than 1. So it's just like the on-resonance situation, except that you get a slightly, slightly shifted oscillation frequency, and a slightly reduced prefactor. The coherence terms-- so these are populations, populations going back and forth between 1 and 2 at a slightly shifted frequency. And then we have this, which looks horrible. And now, for some more insights. If we make omega 1 much larger than delta omega-- in other words, the Rabi frequency much larger than the detuning-- it might as well not be detuned. We get back the simple picture. We get rho 1, 1 is equal to cosine squared omega 1 t over 2, et cetera. So we have what we call free precession. Each of the elements of the density matrix is telling you that the system is going back and forth sinusoidally, or co-sinusoidally, cosine squared. And what happens to level 1 is the opposite of what happens at level 2. And everything is simple and the system just oscillates. Suppose we apply radiation for delta t, a short time. And so what we're interested in-- here is t equals 0. This is time. And this is t equals 0. And before t equals 0, we do something. We apply the radiation. And we apply the radiation for a time which gives rise to a certain flipping. And so this is what we choose. We have delta t is equal to theta over omega 1, or theta is equal to delta t times omega 1, the Rabi frequency. And so if we choose a flip angle, which we call, say, a pi pulse, theta is equal to pi. And what ends up happening is that we transfer population entirely from level 1 to level 2.
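Here is a short sketch of those rotating wave solutions, just to put numbers on them: the same invented omega 1 as before, on resonance and detuned, plus the pulse length that gives a flip angle theta equal to pi. Units again have h-bar equal to 1.

```python
import numpy as np

w1 = 1.0                              # Rabi frequency (illustrative value)

def rho22(t, dw):
    """RWA population of level 2, starting from level 1, for detuning dw = w0 - w."""
    we = np.sqrt(dw**2 + w1**2)       # the shifted frequency omega_e
    return (w1**2 / we**2) * np.sin(we * t / 2) ** 2

dt_pi = np.pi / w1                    # pi pulse: flip angle theta = w1 * delta_t = pi
print("on resonance, rho22 after a pi pulse  :", rho22(dt_pi, dw=0.0))      # -> 1.0
print("detuned by dw = w1, rho22 at same time:", rho22(dt_pi, dw=w1))
print("amplitude cap when dw = w1            :", w1**2 / (w1**2 + w1**2))   # 1/2
```

On resonance the pi pulse moves all of the population into level 2, which is exactly the statement that comes next.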
When we do that, we get no off-diagonal elements of the density matrix. They are zero. So if at t equals 0, we have everything in level 1, and we have applied this pi pulse, we have no coherence. If we have a pi over 2 pulse, well then, we've equalized the two level populations, and we've created a maximum coherence. And this guy radiates. So now we have an oscillating dipole. And it's broadcasting radiation. And so all of the two-level systems, if you use a flip angle of pi over 2, you get a maximum polarization, and they're radiating to my detector, which is up there. And I'm happy. I detect their resonance frequency. And so the experiments work. So we're pretty much done. So I mean, what we are doing is we're creating a time-dependent dipole. And that dipole radiates something which we call-- if we have a sample like this, that sample-- all of the molecules in the sample are contributing to the radiation of this dipole. But they all have slightly different frequencies, because the field that polarized them wasn't uniform. In a perfect experiment it would be. And so they have different frequencies and they get out of phase. Or, by conservation of energy, as the two-level system radiates, it goes from the situation where you have equal populations to everybody in the lowest state, so there is a decay. So there are decays that cause the signal, which we call the free induction decay, to dephase or decay. But the important thing is, you observe the signal and it tells you what you want to know about the level system, the two-level system, or the n-level system. And it's a very powerful way of understanding the interaction of radiation with matter, because it focuses on near resonance. And near resonance for one two-level system is not near resonance for another. And so you're picking out one, and you get really good signals. And you can actually do-- by chirping the pulse-- you can have one two-level system, and a little bit later, another two-level system. They all get polarized. They all radiate at their own frequency. And you can detect the signal in the time domain and get everything you want in a simple experiment. This experiment has enabled us to do spectroscopy a million times faster than was possible before. A million is a big number. And so I think it's important. And I think that this sort of theory is germane not just for high resolution frequency domain experiments-- in fact, it's basically a time domain experiment. You're detecting something in the time domain, and Fourier transforming back to the frequency domain. So there are ultrafast experiments where you create polarizations, and they-- this is what modern experimental physical chemistry is. And the notes that I will produce will be far clearer than these lectures, this lecture. But it really is a gateway. And I hope that some of you will walk through that gateway. And it's been a pleasure for me lecturing to you for the last time in 5.61. I really enjoyed doing this. Thanks. [APPLAUSE] Thank them. [APPLAUSE] Well, I got to take the hydrogen atom. [LAUGHTER] Thank you.
https://ocw.mit.edu/courses/7-016-introductory-biology-fall-2018/7.016-fall-2018.zip
ADAM MARTIN: OK. So I'm going to start out today's lecture on the wrong foot by quoting somebody that you guys probably don't know and who was a New York Yankee. So Yogi Berra, the famous Yankee catcher once said, "You can observe a lot by watching," OK? And that is very appropriate for biology because a lot of things in biology have been discovered simply by watching for them in cells or watching for them to happen at the molecular level. And so our ability to visualize and see what's going on inside cells and at the molecular level is really critical for the process of biological discovery. So today I'm going to tell you about tools, both sort of older tools but also kind of the cutting edge, for how biologists are really observing what's going on in living cells and in life in general. OK. So let me start by just having you guys think a little bit. What do you require of me to see what I write on the board? Yeah, Rachel. AUDIENCE: Light. ADAM MARTIN: What's that? AUDIENCE: Light. ADAM MARTIN: You need light. And what does the light help you to do? What's that? AUDIENCE: [INAUDIBLE] ADAM MARTIN: You need it to see the board. And so let's say the light's on, OK? Is that, can you read this? No, what's the problem? What's that? Size, right? So Natalie suggested size. Right, so one thing that you need is some amount of magnification, right? But let's take another-- let's say I do magnify this. What if I magnify it? And I'm going to start writing my notes on the board, right? How is this? Helpful? Jeremy, what's wrong? AUDIENCE: Differentiate [INAUDIBLE].. ADAM MARTIN: Yeah. You have to be able to distinguish different objects, in this case, these letters, right? So in addition to just magnifying it, you also need the structures to be far enough apart such that you can distinguish them. So you need what is known as resolution. OK. This was resolution. But this is resolution where the letters are actually resolved, OK? So structures need to be far enough apart so that you can resolve them. Now let's come back to Rachel's point, right? Why is it that we need light to see what's on the board? Right? What if I draw without pressing? Right? Is that-- yeah, Orey. AUDIENCE: You need contrast. ADAM MARTIN: You need contrast. Exactly, right? The light sort of gives you contrast between the chalk and the black part of the board. So you also need contrast. And contrast is the ability to-- the structures need to be differentiated from the background, OK? So structures need to be different from background. What else do you need to read my writing? Right? What if I were just to-- everyone can read that? What's wrong? Carlos? AUDIENCE: Needs to be clear and legible-- ADAM MARTIN: What's that? Yeah. I need to have, like, good handwriting, right? So I like to think of this as this is an aspect of sample preservation, OK? So there's a sample preservation issue. I can't butcher the letters and the words. OK. So in the process of doing all these other things, right, magnifying your image, resolving things in your image, and generating contrast, you can't destroy your sample such that it's illegible, basically, OK? So in this case, structure must be preserved while doing one through three on this list. OK. So I'm going to start with resolution. We'll talk about, what are the limits to resolving things in biological specimens. And in biology, the one instrument we use a lot is a microscope, OK? And a microscope is basically a collection of lenses that allow you to do many of the things I just drew on the board. 
I'll point out a couple of broad types of microscopy. So the human eye up here can resolve down to about 100 to 200 microns, if you're looking at something at reading distance, right? But cells are like way smaller than that, right? So we need some sort of instrument that allows us to see things that are below the resolution limit of a human eye. And so one way is to use a light microscope, where you're using visible light to observe your sample. And many of the images that we're showing you are from visible light microscopes. To see smaller things, a type of microscope that's often used is an electron microscope. And electron microscopy allows us to observe structures all the way down to the sub-nanometer level of resolution, OK? Now one limitation to the electron microscope is, you have to kill the sample, OK? So that can lead to artifacts and problems. And we'll discuss a way at the end where light microscopy is being extended down to limits that approximate those of an electron microscope. OK. So what determines, then, the resolution of a microscope? And so I'm going to sort of define a measurement of resolution which I'll call the d-min, or minimum distance. And this will be the minimum distance between two points that can be resolved. OK. And what I showed you on that past slide is basically the limit on the right here, the d-min for these different types of microscopy techniques, OK? And what that means is, if your minimum distance is 200 nanometers, then if two points are greater than 200 nanometers apart from each other, you can distinguish them as two different objects. However, if they are closer than 200 nanometers together, you wouldn't be able to see that these are two different objects. They would be overlapping each other, OK? And typically, the d-min for a light microscope is around 200 nanometers, OK? And if I'm to tell you what determines this minimum distance, we have to think about a microscope. So here, I'm drawing a specimen here. I've just drawn my specimen. It's on a slide or a cover slip. Here's your specimen here. And you might have a light source to generate the contrast. And then there'd be some sort of objective lens underneath the slide and the specimen. So this would be an objective lens. Sorry about my sample preservation here. And so the light is going to be hitting the sample. And the objective lens will be collecting a cone of light that's going into the lens here, OK? And maybe I'll magnify this a bit so you can see it better. So I'm just going to magnify this region over here. So if this is my specimen, I'm going to draw the objective a little farther away this time. This is the objective. And the objective is able to capture a range of different angles of light that come from the specimen, OK? So it's collecting angles. And I'll just define here an angle theta, which is like the 1/2 angle of this whole cone of light, OK? So what determines the resolution limit in this type of system is, first of all, the wavelength of the light that's used. So if you're using white light, that might be from 400 to 800 nanometers. If you're exciting GFP, you're going to excite with a wavelength that's 488 nanometers. So it's usually around maybe between 450 and 550 nanometers for many different fluorescent proteins. OK. So lambda here is the wavelength of the excitation light you're using.
And this is all then divided by 2 times the NA, which is a property of the objective. And what NA is-- NA stands for numerical aperture. And what that is, is basically the range of angles that this objective can collect, OK? So it's n sine theta, where theta is this angle here. So you get the best performance if the objective can collect all of the light that comes from this side of the sample. OK, and then n refers to the refractive index of the media that this light is going through. OK. And so if you have an objective and you have your sample in here and there's a slide and a cover slip-- I'll extend this out-- you often have immersion oil. There'd be some sort of immersion media here. And I don't know if you've ever used a microscope that's meant to be used with immersion media and you don't add that immersion media. But your image quality, if you don't add that media, is like really bad, right? And that's because you're affecting the numerical aperture of what this lens can collect. And therefore, you degrade your image quality, OK? But basically, the more light, the more angles of light that you collect, the higher the numerical aperture. And therefore, the lower this d-min is going to be. And so the better you'll be able to resolve objects that are near each other in space. OK. So the take home message from all of this is that you notice that magnification is not a part of this. But the wavelength of the light is really critical, OK? So usually, this minimum distance ends up being the wavelength of light that you're using divided by 2. And this usually ends up being about 200 nanometers. So that's the diffraction limit of a light microscope, as you see up there. And this resolution is basically limited by the diffraction or behavior of light, OK? So light microscopy is limited by the diffraction of light. And it was thought for a long time that no matter what you do, you'd never be able to break this limit of about 200 nanometers. But at the end of the lecture, I'll tell you about some very smart people who figured out a way to actually break this limit. And we'll talk about how they were able to do that. Now I want to talk about a few other limitations of microscopy. And I'm starting by showing you this electron micrograph of the endoplasmic reticulum. And one important consideration you have to make is two-dimensional versus three-dimensional structure. So for electron microscopy, you basically cut the sample so you have a very thin slice of it. It's like slicing bread, except these slices are on the order of 30 to 60 nanometers in height. And then you pass an electron beam through the sample after it's stained in order to visualize your specimen. And one thing you have to keep in mind is that you're looking at a slice through this. And it doesn't give you three-dimensional information, OK? So if we were to think about the endoplasmic reticulum, you might have an endoplasmic reticulum. And if you take an optical slice through this, then you would see something like this, where you see each of the stacks individually. And so this might make you conclude that the way that the endoplasmic reticulum is structured is, it's kind of like a stack of pancakes, where for each of these, you have a lipid bilayer surrounding a lumen of the ER. So right, the lumen would be inside like this for each of these. And they're just stacked on top of each other. And this is the textbook model for endoplasmic reticulum structure.
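Before moving on to the three-dimensional point, here is a small sketch of that resolution formula, d-min equals lambda over 2 NA with NA equals n sine theta. The objective parameters are typical catalog-style numbers chosen for illustration, not measurements of any particular instrument.

```python
import math

def d_min(wavelength_nm, n, half_angle_deg):
    """Diffraction limit: lambda / (2 * n * sin(theta))."""
    na = n * math.sin(math.radians(half_angle_deg))
    return wavelength_nm / (2 * na), na

# 488 nm excitation (GFP), comparing a dry objective with an oil-immersion objective.
for label, n, theta in [("dry objective", 1.00, 72),
                        ("oil immersion", 1.52, 67)]:
    d, na = d_min(488, n, theta)
    print(f"{label}: NA = {na:.2f}, d_min about {d:.0f} nm")
```

With immersion oil you get close to the 200 nanometer figure quoted above; without it, the numerical aperture and therefore the resolution are noticeably worse. Back now to the endoplasmic reticulum and that textbook stack-of-pancakes picture.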
But it was actually, if you don't consider this in 3D, you might miss something. And what was missed was reported in 2013 in this paper, where rather than just taking a single slice, what they did is they made lots of slices. And they kept track of where they are. So they basically did a three dimensional reconstruction of the endoplasmic reticulum. And by imaging this other dimension, they came to a very different conclusion about how the endoplasmic reticulum is organized. And instead of being stacks of membranes on top of each other like pancakes, instead, it's a helicoid. So this is an ER from a professional secretory cell, like a salivary gland cell. And you can see in 3D, you get a very different picture of the organization of this organelle. It's actually wrapped around and spiraling membrane stacks, OK? So their model is that, basically the endoplasmic reticulum in some cell types basically has a parking garage like structure, OK? So in this case, in these cells, it seems like the ER Is basically a parking garage for ribosomes. OK. And you don't get that information unless you consider the three dimensional structure of the thing that you're looking at. So in addition to electron microscopy, there are techniques that involve light microscopy that involve optical sectioning. And so normally, if you're looking at fluorescence, if you're doing fluorescence microscopy, you'd be exciting the whole volume of your sample and exciting all of the fluorophores such that fluorescence from out of the focal plane would be getting into your image. And that would give you a much more hazy, unclear image. But there are techniques such as confocal fluorescence microscopy that allow you to exclude the out of focus light such that you're basically getting an optical section through your sample. And that can give you a much cleaner and better resolved image, OK? Now I want to talk a little bit about another dimension, which is time. And again, you're seeing images in textbooks. And usually, you're just seeing a single image. And whenever you see a single image, you have to think about how things might be changing in time in order to understand the system. So one example that I like here is shown here, where these are different proteins that are labeled in a yeast cell. And you see that these proteins form patches at the edge of the yeast cell. And some of these patches just contain the green protein, which is SLA1. And other patches contain just the red protein which is ABP1. And there's another class of patch which contains both, OK? So how might you interpret this fixed image over here? What might be one model you would conclude? Well, what was initially concluded from this type of experiment is you have three different types of patches that are distinct from each other in the cell because they have different molecular compositions, OK? And that was what was initially thought. But it was wrong because researchers had to really consider the aspect of time in this problem. And I'm going to show you a movie now over here where you're going to see this yeast cell. And now you have these different proteins tagged with different fluorescent proteins. And we can watch them in time as they progress through a stereotypic cycle. So what you're going to notice in this movie is that you see these green patches appear. And every single green patch at some point is joined by red. And then the green disappears and the red stays around. And then the thing disassembles, OK? 
So what was initially thought to be three different structures in the cell, eventually, it was found out that there was a dynamic process where this patch sort of matured over time and eventually disappeared into the cell. And what this process is, is actually endocytosis in yeast. And you're seeing different proteins getting recruited to endocytic vesicles as they bud from the plasma membrane of this yeast. OK. So that's just my caution in interpreting fixed images, because you have to think about how they might be changing in time. All right. So now we have to consider contrast. And in bright field microscopy, and bright field microscopy basically involves white light as your light source. And so you'd have a microscope that has a white light source. You might have your specimen here. Here's your specimen. Then you'd have some sort of detector at the end of your system. And there would be some objective lens in between, which I'm going to ignore for now. And so, for bright field microscopy, you're taking a sample and shining it with light. The light that doesn't go through your sample will go right through to the detector. And that's your background. But some of this light, the light that's going and hitting your sample, could be absorbed or it could be refracted. And it's the refraction or absorption of this light which generates the contrast for bright field microscopy. So in bright field, native structures in the cell absorb or refract light. And this is what generates the contrast. OK. So the images shown up here are bright field images of cells. And in each of these cases, there's no dye. There's no fluorescent protein. But you're able to see the outline of the cell. And you're able to see even individual organelles or structures within the cell that are interacting with the white light and generating contrast, OK? So that's one way to generate contrast is just hope that whatever is native in your cell generates the contrast. But there are also-- you can increase contrast in specimens by adding dyes. And if these dyes bind to specific structures like a membrane, then that will increase your contrast. So the electron microscopy images that I showed you-- so for electron microscopy, you generate contrast by adding a dye that is an electron-dense dye, which will bend the electron beam. And that's what allows you to get an image from an EM. So an EM contrast is from an electron-dense dye such as uranyl acetate or some other type of dye. Now, fluorescence microscopy, as Professor Imperiali showed you, involves taking a fluorescent molecule and attaching it to your protein of interest. So you're actually getting protein-specific contrast, which is very useful. OK? And the way a fluorescence microscope works is just shown up here where you might have a light source that has a range of wavelengths. And you can use a filter to select one. In the case of GFP, it would be blue light or 488 nanometer light. And that would then be shined onto your specimen. And then the light is absorbed by fluorophores in your specimen. Some energy is lost, such that the light that's emitted from GFP is a longer wavelength, in this case, green. And then you can filter again to make sure only the green light is what goes to the detector. So this is a very efficient way of generating contrast because you can use filters to select only the wavelength of light that is emitted from your fluorescent molecule. OK. Any questions about that and about my very short version of how fluorescence microscopy works? 
Yeah, Rachel? AUDIENCE: [INAUDIBLE] dichroic mirror? ADAM MARTIN: The dichroic mirror reflects certain wavelengths that are below a certain wavelength. And will pass wavelengths that are above a wavelength. So it will basically reflect the excitation light. But it will pass the emitted light, OK? And so, there are tons of these mirrors. Some are not dichroic, but they can reflect four different wavelengths and pass all other wavelengths. And so, this allows you to image multiple fluorophores at the same time, OK? The specifics aren't as important as the general concept of how this works. OK. Now I want to come back to the resolution limit. And I want to tell you about how we can beat it. So, beating. We all like winning. So beating the diffraction limit. And this is going to involve a type of microscopy that's really sort of been developed in the past decade, which is known as super resolution microscopy. So super resolution microscopy. And remember, I mentioned for you before that yes, electron microscopy can get you nanometer resolution. But you have to kill the cell. And also, it's hard to get protein specific contrast, right? So that kind of sucks because as biologists, usually we're interested in how things are functioning to stay, to live. So wouldn't it be great if we could somehow use light microscopy to get down into this nanometer range so that we can see how individual proteins are interacting with each other and organized at the nanometer level, OK? And so in the past decade, there's really been a revolution that's enabled us to do light microscopy with a resolution that gets down to the 10 or even single nanometer resolution. And there's a number of different super resolution techniques. I'm going to talk about just one of them. But both these techniques basically use the same concept, which is that they enable whoever's doing it to identify single molecules and define where those molecules are very precisely. And to turn fluorescent molecules on and off so that you can select individual molecules such that you can see them. So these are two different techniques. They're conceptually very similar. I'm going to focus on this one here. But it's pretty much similar to this one up here. And I just want to point out that one of our colleagues here at MIT, Ibrahim Cisse who's in the physics department, his lab builds these super resolution microscopes. And they're using super resolution microscopy to study the collective behaviors of proteins, in his case, during the function of gene expression. OK, so this is research that's actively being pursued at MIT. So let's just do a thought experiment again. OK. I'm drawing a single molecule or what you would see in an image if you were looking at a single molecule GFP. Great. Where is GFP here? Carmen? AUDIENCE: It's right there on the board. ADAM MARTIN: It's right there on the board, right? Is it here? What's that? AUDIENCE: I don't know. ADAM MARTIN: Who thinks GFP is right here? Who does not think GFP is right there? You have to be thinking one or the other. Yeah, Rachel. AUDIENCE: [INAUDIBLE] ADAM MARTIN: OK. So what Rachel says is that it's probably not right here. It's probably in the middle of this thing, right? And so if you're seeing a diffraction limited spot, you're going to get some sort of Gaussian of intensity, which I didn't draw well here. But it might be a little bit brighter in the center and drop off as you go towards the edge, right? 
So if I were to take an image intensity profile along the line here, you'd see something that kind of looked like a Gaussian, OK? And GFP, if there's a single molecule that you're imaging, should be right in the center of this Gaussian, OK? And so even though we're not seeing a point, we're seeing a spot whose width here is diffraction limited. So this width is 200 nanometers. But if we can estimate where the molecule is in this region with nanometer precision, we could get a very accurate view of where this fluorescent molecule is, OK? So it relies on certain assumptions. The first assumption is that we can see single fluorescent molecules. So that we visualize single fluorescent molecules. And that we can then estimate with some amount of precision the location of the molecule based on this diffraction limited sort of image that we get. So then we have to estimate the location based on the image. OK. And then our resolution is basically the error in fitting this curve, OK? So the error in the fitted position is equal to the standard deviation of this Gaussian divided by the square root of the number of photons that you collected to get that image. So the square root of the number of photons. OK. And I just told you in the beginning of the lecture that this standard deviation is limited by the diffraction of light. So the standard deviation is going to be around 200 nanometers, right? But if you collect a lot of photons, you can accurately figure out where the fluorescent molecule is here, if you know that it's a single molecule. OK? So the number of photons in a typical experiment is going to be around 10 to the fourth, OK? And so if you take 200 nanometers divided by the square root of 10 to the fourth, you're going to be down at the nanometer scale if you do the experiment right. OK? So you really need to see single fluorescent molecules, however, in order to do this, OK? And the real breakthrough came with the realization that you could combine this type of fitting to estimate the position of single molecules with a certain type of fluorescent protein where you can turn the protein on and off stochastically, OK? OK. So we need one more component, which is a photo-activatable fluorescent protein-- in this case, the first one was photo-activatable GFP, PA-GFP. And PA-GFP is a fluorescent protein like GFP. It's genetically encoded. But when it matures, it's not fluorescent. It's in a dark state, OK? So when it matures, it's dark. It has a dark state. And it starts out in this dark state. But you can turn it on. And you can turn it on with light. So that's where the photo activation is, because you're able to photo activate this fluorescent molecule. And you can photo activate with sort of UV light, or 405 nanometer light. And so that's not normally the excitation wavelength. But if you shine your sample with 405 nanometer light, it will convert some set of your molecules into the now fluorescent state, OK? So this then causes it to be fluorescent. And now it's going to be lighting up. OK. And I want to thank Professor Cisse, because he gave me the next slide, which I think nicely shows how this technique works. So the way you can get super resolution is, you can't be looking at all your fluorophores at once, because they're not far enough apart and they'll all bleed together so that you get a bad image, right? So this would be your conventional diffraction limited image where all the fluorophores-- there's about 20 fluorophores here.
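To put numbers on the localization idea, here is a sketch that simulates photons drawn from a diffraction limited Gaussian spot and estimates the emitter position from them. The spot width and photon count are the illustrative values used above, and the true emitter position is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_nm = 200.0            # width of the diffraction-limited spot (illustrative)
n_photons = 10**4           # photons collected from one single molecule
true_x = 123.4              # made-up "true" emitter position, in nm

# Each detected photon lands at a position drawn from the spot's Gaussian profile.
photons = rng.normal(true_x, sigma_nm, size=n_photons)

estimate = photons.mean()                      # for a Gaussian, the fit reduces to the mean
predicted = sigma_nm / np.sqrt(n_photons)      # precision ~ sigma / sqrt(N)

print(f"estimated position : {estimate:.1f} nm (true value {true_x} nm)")
print(f"predicted precision: {predicted:.1f} nm")
```

That is how a 200 nanometer wide spot can pin down a single molecule to a couple of nanometers, provided you really are looking at one molecule at a time. Back to the slide with the twenty or so fluorophores.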
And you can see, you can't see individual fluorophores and you can't see what that says, OK? But if you take a divide and conquer approach with this, if you have a photo-activatable GFP, you don't need to look at them all at once. You can just look at three to start, OK? So now if you only activate a small subset and you ensure that you're activating it at a frequency such that they're well resolved from each other, then you can distinguish that there are three single molecules here. You can fit where they are. And now you know where they are with nanometer precision. So you know where those are. And then you want to look at other molecules. And so you have to get rid of these. And so what you would do is to bleach them. And bleaching is to use light to basically damage the fluorophores and get it to no longer fluoresce, OK? So this process is going to involve an iterative photo activation followed by measuring and fitting the image so that you can basically determine where each single molecule is in your image. And then ending with bleaching to get rid of the fluorophores you just turned on so that everything is now dark again. And then you repeat this process iteratively to collect all of the single molecules that you can, OK? So in this case, we just got these three molecules. We would then want to bleach them so that we're now going to look at different fluorescent molecules. And we'll turn on a certain number of other fluorescent molecules. Here you see four. Here are two. They're a little close together, but you can see that there are two. Here are another two. They're close together, but you see two clear intensity peaks. And so you can fit those four. Now you have four more molecules to make up your image. You bleach them, excite or activate five more. Here are five fluorescent molecules. You can fit those. Determine their positions. And you just do this iteratively over and over again till you get as many molecules as you can. And at the end, you basically add all these images together to get the final super resolution image, OK? So this is an iterative process where the photo activation allows you to image single molecules such that you can see where they are with nanometer precision. And then you add them all together to get a super resolution image. Here is an example of this in practice. And this is the storm technique which doesn't involve a photo activatable fluorescent protein, but involves organic dyes blinking. And the concept is basically the same. And here you see a conventional image of an axon. And it's labeled with this beta-II spectrin. And you see beta-II spectrin is continuous. And it's staining in this axon. But if you look at the super resolution image, what you see is the beta spectrin actually has this repeated periodic pattern along the axon. And this is a cytoskeletal element that's basically present in rings up and down the axons of neurons. And you can kind of think of this as like the axon is a vacuum hose, where you have these rigid sort of rings that are aligned all along the axon. But because it's repeated and you have intervening areas without much cytoskeleton, you can kind of think of it as a way for the axon to be both rigid but also flexible and maneuverable. I just wanted to point out that several super resolution techniques were recognized in the 2014 Nobel Prize in chemistry. Eric Betzig on the left here developed the approach using the photo activatable GFP that I described to you. And two others, Stefan Hell and W.E. 
Moerner were awarded it for other types of super resolution technique. If you get a chance, you should go to the Nobel Prize website and listen to Eric Betzig's Nobel lecture. He has a very interesting story. And part of it involved how he managed to develop this technique. And he actually developed it in the living room of his friend. So this is actually one of the first super resolution microscopes. Here's the microscope and here you see-- I love this chair. But you see, you basically have this microscope in this guy's living room. So if you want to hear more about this story, listen to his Nobel lecture. He's a really funny guy and you get a sense of how science really works where you get this unemployed guy like building a microscope in his friend's living room and then wins a Nobel Prize. And just one reminder to end today, remember your news brief is due this Friday, November 30th. If you need help on selecting a topic, please see a member of our staff, including Professor Imperiali or myself. And so, good luck with that. Thank you. I'm all set.
https://ocw.mit.edu/courses/8-851-effective-field-theory-spring-2013/8.851-spring-2013.zip
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Question. We know how to do matching and running. We've seen an example of that. In this theory, there's this label v on the fields, and we want to figure out how that affects doing matching and running. So we started out talking about the wave function renormalization calculation. So we have these fields, which we can have a bare version of and a renormalized version of in the usual way. So this is renormalized, and this is bare. Some Z factor between them that I'll call Zh, for Z heavy. And so we have to calculate, at the lowest order, this diagram: send in a momentum p. This is q plus p. Use the Feynman rules. Use dimensional regularization. Use dimensional regularization with MS bar, so there's some extra factors. And use Feynman gauge, which is usually the simplest gauge choice. So each of these vertices here gets a v. There's a v mu from this vertex. A v mu from that vertex. And that's where this v squared comes from. And then there's one propagator here. It's very traditional to denote a heavy quark by two lines rather than one line, so you know which lines are heavy and which lines are light. So this double line is a heavy quark, and that gives this propagator here, this v dot (q plus p). And then there's a relativistic propagator for the gluon, and that's the q squared. I've taken into account all the i's when I put this minus sign, and this is just the fundamental Casimir from dotting two TAs together. OK. So when you have an integral like this-- so v squared is 1. That's one simplification. When you have an integral like that, where you have a linear momentum in one propagator and a quadratic momentum in the other propagator, you don't want to use the standard Feynman trick. I call this trick the Georgi trick. So it's very similar to the Feynman trick but slightly different. So you use an integral that goes from 0 to infinity, and you can convince yourself that this is true. And so then set a equal to the q squared and b equal to v dot (q plus p). And the reason that you'd want to use this trick rather than the usual one is that if you use the usual one, you'd get an x multiplying this and a 1 minus x multiplying that. But then you would have an x multiplying the q squared. So when you'd want to complete the square, you'd like nothing to multiply the q squared. You'd like the q squared just to be bare by itself with no Feynman parameter multiplying it. And this trick does that, because a has no Feynman parameter multiplying it, or Georgi parameter in this case. OK, so we combine denominators in the usual way, and the denominator would become this. And if I kept the i epsilon, it would be that, if we combine these two denominators here. So this factor would be that. And we can then complete the square, right. This is some momentum squared minus whatever is left over, where t is q plus lambda v, and then A is the rest of the stuff, which is this. OK, so then this guy-- now we just have our usual quadratic integral that we can use the standard rules to do. And then instead of an integral over x, we have this integral over lambda. There's some factors that I'm dropping. So there's that e to the Euler gamma times epsilon and the (4 pi) to the minus epsilon.
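As a quick symbolic check of that combining trick: one common way to write it is 1 over a b equals the integral from 0 to infinity of d lambda over (a plus lambda b) squared; the normalization of lambda in the lecture may differ by a factor of 2, but the idea is the same. Here a and b stand for the gluon and heavy quark denominators, taken positive so the integral converges.

```python
import sympy as sp

a, b, lam = sp.symbols("a b lam", positive=True)

# The combining trick: integrate over the Georgi parameter lambda.
combined = sp.integrate(1 / (a + lam * b) ** 2, (lam, 0, sp.oo))
print(sp.simplify(combined - 1 / (a * b)))   # prints 0, so the identity holds
```

The payoff, as said above, is that nothing multiplies the q squared when you complete the square.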
So write everything with d equals 4 minus 2 epsilon. Do that integral, which just gives some gamma functions. And if you think about the dimensions here, we end up with something that's raised to the power d minus 3. If you want to look at the dimensions, the d over 2 minus 2 piece is dimensionless, OK. The lambda had dimensions. If we go back here, that's obvious. q is dimensionful, v is dimensionless, so lambda has dimension 1. So the dimensions of the-- actually, the dimensions of the mu to the 2 epsilon are compensating the dimensions here, and then there's one power of dimension left. And that's why, if I take d equals 4, I'm getting one power of momentum upstairs, which is what we would expect for an inverse propagator for a heavy quark: one factor of v dot p. So expand this. And we get a divergence. So add a counter term for wave function renormalization. So if we did that in MS bar, then the Zh would be just this. And that would be the appropriate counter term to kill off the 1 over epsilon divergence. I'm going to carry out the calculation today, or this discussion of matching, in MS bar for everything. And I'll show you some of the slight complications that show up in that case. But we said that we could do matching-- basically, you could have two choices here. You could either use on-shell renormalization, or you could use MS bar. What's the difference? In MS bar, you just keep the divergence in the Z. In on-shell renormalization, you keep some extra terms here, all right. And either way that we choose to do things, we should actually end up with the same matching, and I want to show you why that's true. So in order to show you why that's true, I'll pick to use MS bar at this point, and we'll see what complication that leads to later on. OK, so that's one thing that needs to be renormalized. And if we look at this guy and we compare it to Z psi, this is not the same as Z psi in QCD. So this is something different, and the reason it's different is because we did a different loop integral that had this heavy quark propagator, not the light quark propagator. So we also have to renormalize local operators. And so let's think about something that would make a heavy to light transition. So for example, if you looked at b goes to u, electron, neutrino, then that would be a heavy to light transition. The b quark is heavy. The u quark is light. So to describe that, you would use some operator that has one heavy quark and one light quark. So let me call the light quark small q and the heavy quark big Q. And so you'd have some operator that looks like that. And we could write down a renormalized operator with renormalized fields and then group all the Z factors into a counter term. So let's think about doing the perturbation theory that way. So this is the renormalized operator. This is the counter term. So there's a wave function factor Zq for the light quark and Zh for the heavy quark, and then there's some Zo for the operator renormalization. OK, so to renormalize this operator at one loop, we insert it, and we do a one-loop diagram. I'm just going to tell you the answers, but let's draw the diagram. So here's the operator inserted. Here's your heavy quark. Here's your light quark. You have a diagram like that. And then you combine this calculation with the wave function renormalization factors Zh and Zq. Zq is the same as Z psi. We should've called this Zq for the light quark. Combine these things together, and then, because that graph has a divergence that those counter terms don't cancel, you need Zo.
And that calculation, which you can look at in your reading if you want to look at more details, just gives you about what you'd expect. It gives you a 1 over epsilon divergence times a factor of g squared. So the operator here has a renormalization that's just minus alpha s over pi. That's the anomalous dimension. So what is this anomalous dimension doing? If you were to consider this kind of process in full QCD, then you'd have here gamma mu times 1 minus gamma 5, and that's a partially conserved current. So there would be actually no anomalous dimension to this operator. The vertex graph that we just drew over there would cancel the wave function graphs exactly. There'd be no anomalous dimension. But here we have one. And that's because these guys are not equivalent anymore. You saw that the Z for the heavy quark changed, and the vertex graph also changes. And we're left with something. And this remainder has to do with renormalization group evolution below the mass of the heavy quark. Above the mass of the heavy quark, there's no renormalization group evolution of this current. Below the mass of the heavy quark, there is. So there's additional logs, and that's because MQ is now being treated as infinite. So things that, from the point of view of QCD, were logs of MQ have now become UV divergences, and that gives an extra anomalous dimension. One thing you can note about this anomalous dimension is that I didn't really specify for you what the gamma was. I told you for this process it would be gamma mu times 1 minus gamma 5. And the result here, actually, if you carry out this calculation with an arbitrary gamma, you find that it's independent of gamma. So you get the same universal anomalous dimension for any spin structure. And that's partly related to the things that we talked about last time with the spin symmetry of HQET, which is telling you that certain couplings are not sensitive to the spin. And effectively, in this diagram, you're getting a v slash here. Well, let me not try to go through it but leave it for looking at in your reading, but we'll talk a little bit more about this in a minute. I won't try to explain where that comes from from the diagram. So let's look at another case. The only real interesting thing that happened here was that we got-- well, the fact that the answer was non-zero was interesting, and the fact that it was independent of gamma. But the v didn't show up. And the reason that the v didn't show up in this calculation here is because there was only one v, and v squared was equal to 1. So v couldn't really show up, because we had to get a scalar answer, and since v squared is equal to 1, it's just not showing up. So something more interesting is to look at, instead of a heavy to light transition like that, a heavy to heavy transition. So we'll spend a little bit more time on this one. So let's have two heavy fields. And I'm going to take them in a current where they have different velocities, v and v prime. So let me imagine I went through this procedure of separating out the counter term and the renormalized operator, just like I did over there. So I have these two terms, two types of structures. Now I don't have a Zq, but I have two heavy quarks. So I have root Zh times root Zh, which is just Zh. So an example of this would be something like a B meson changing to, say, a D star meson, an electron, and a neutrino, so having a charm quark replace our up quark. OK, so now the charm quark and the b quark could both be thought of as heavy.
They have different masses, but we take both of those masses to infinity, so we can use HQET for both of them. So Mb and Mc are going to infinity. And there's no reason why, in this process, the b quark and the charm quark should have the same velocity, and so we'll give them different velocities, v and v prime. OK, so we can go through the same thing. We already calculated Zh, so we just have to calculate a graph like this, or two, where you have two heavy quarks. But the heavy quarks have different velocities. So what would that calculation look like? Again in Feynman gauge, and let me just take the external quarks to have zero momentum for simplicity, zero residual momentum. So this guy has k equals 0 and k equals 0. So the HQET Feynman rule for this guy has a v dot q, if q is the loop momentum, and this guy has a v prime dot q, OK. So the integral that we have to do is that integral. And you can do this again with one of these tricks where you use lambda parameters. And in one of the handouts on the web page, I've given you the appropriate trick, which uses two lambda parameters for this integral. This integral actually has both ultraviolet and infrared divergences. We're here in our discussion only interested in the ultraviolet ones, because we're worrying about the anomalous dimension right now. And again, I'm not going to drag you through the details of this calculation. It's essentially just: combine denominators with the two lambda parameters, complete the square, do the loop integral, do the parameter integrals, get an answer. Combine it together with the counter terms of the wave function renormalization, and see what type of counter term you get left over. And it's more interesting than the heavy to light case. So here's what it looks like, where w is v dot v prime, and this function, r of w, is the following. It's the log of, w plus the square root of w squared minus 1, divided by the square root of w squared minus 1. OK, so it's a non-trivial structure. So that counter term would lead to an anomalous dimension which depends on this r of w. So the reason that that can happen is because v squared is 1, v prime squared is 1, but v dot v prime need not be 1. So v squared was 1, v prime squared was 1, but v dot v prime is not fixed. Well, it's not fixed simply by-- that's a poor choice of words. This is a parameter that can vary. So it will be fixed by kinematics, but it can depend on the kinematics. So let me go through this and organize it as a bunch of notes, comments. So this is what I just said. The answer depends on v dot v prime. And the way that you should think about this is that you have a current in the effective theory. So this is the HQET current. And just like we label it by its Lorentz index mu, we should label it also by the v and v prime, because the fields involve v and v prime. So if we thought of this as some vector current or axial vector current labeled by mu, we would also label the current by v and v prime. So these are just indices on the current, and when you think about the Wilson coefficient or the anomalous dimension, it can depend on those indices. Now because the Wilson coefficient is a scalar, it really only depends on w. So if you think about Wilson coefficients, they depend on alpha s. They depend on the scale mu.
They depend on the hard scales in your problem, and the hard scales in the problem here for us are Mb times v and Mc times v prime, because those are the heavy scales of the charm quark and the b quark that we're removing. But since we know that this thing is a scalar, we can just square these, dot them into each other. So it's really a function of mu, of alpha s of mu, of Mb squared from squaring the Mb term, of M charm squared, and then of this w factor. Those are the scalar quantities it can depend on. So what should I take for the w? Well, we could work that out for an example like the one I was saying that we're doing, B to D star l nu. So I want to show you how that works. So for the B meson, for its momentum, we can take it to be M capital B meson times v mu. And by momentum conservation, that's going to be the D star momentum, which we can take to be M D star times v prime mu, plus some momentum transfer, which I call q. This is a different q than my loop momentum q. Sorry about that. So then we can just take this relation, and we can square it, and we can get v dot v prime. So if you look at q squared, we can solve this for v dot v prime, and it's just fixed. You see that v dot v prime is fixed in terms of the meson masses and the momentum transfer, and that's the momentum transfer to the leptons here. How much momentum do they carry away? So you could think of all those things as external. You fix how much momentum the leptons carry away. These are fixed numbers. You look them up in the PDG, and then you know what value of v dot v prime to plug in here. Now it's a function of q squared. q squared can vary in the process. So in that sense, it's a non-trivial function, not just a fixed number. But for any fixed kinematic configuration, any fixed q, it would just be a number. So if you look at this in practice, you find that this guy for this particular case goes from 1 to 1.5. So that's the kinematic range that's allowed. So it is fixed by external kinematics, kinematics that is external to the dynamics inside the loop. And in that way, the Wilson coefficient here is more non-trivial than the ones we saw earlier, which just depended on masses. Now it's depending on masses as well as this function of v dot v prime, OK? Again, one finds that this gamma comes out to be exactly the same, independent of the choice of the spin structure. So we could do this calculation with any spin structure we like, and heavy quark symmetry in this case is all it takes to show that this gamma is independent of the spin structure. So if you think about that from the loop graph, actually in this case, it's pretty easy to see, because remember that this vertex didn't have any spin structure. The propagator had no spin structure. So nothing in the calculation had spin structure. So the only thing that has spin structure is the gamma you stuck in there. So it's just a scalar times gamma, so it doesn't care about the gamma. So whatever the tree level gamma is, it just goes through. It's not touched by the heavy quark, by the HQET Lagrangian. So what is physically going on here, and what HQET is doing for you, is that there's logs like this, MQ over lambda QCD, in QCD. And by going over to HQET, this becomes a log of mu over lambda QCD, which is encoded in HQET operators like this current, as well as a log of mu over MQ, or MQ over mu, which is in the HQET coefficient functions, the HQET Wilson coefficients.
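To see the numbers behind that range of 1 to 1.5, and behind the r of w that shows up in the anomalous dimension, here is a small sketch. The meson masses are approximate PDG-style values, included only for illustration.

```python
import numpy as np

mB, mDstar = 5.2797, 2.0103          # GeV, approximate meson masses (illustrative)

def w_of_q2(q2):
    """w = v.v' from q = p_B - p_D*:  q^2 = mB^2 + mD*^2 - 2*mB*mD*w."""
    return (mB**2 + mDstar**2 - q2) / (2 * mB * mDstar)

def r(w):
    """The function appearing in the heavy-to-heavy anomalous dimension."""
    return np.log(w + np.sqrt(w**2 - 1)) / np.sqrt(w**2 - 1)

q2_max = (mB - mDstar) ** 2          # zero recoil: the leptons carry minimal momentum
print("w at maximum q squared:", w_of_q2(q2_max))     # -> 1.0
print("w at q squared = 0    :", w_of_q2(0.0))        # -> about 1.5
for w in [1.0001, 1.1, 1.3, 1.5]:
    print(f"r({w}) = {r(w):.4f}")                     # r tends to 1 as w tends to 1
```

With those numbers in hand, back to how the logs get reorganized.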
So, much the same as how we talked about it for integrating out a heavy particle, the logs get split up into pieces, some in the Wilson coefficients and some in the matrix elements of the operators. Here we're separating out MQ, the heavy quark mass, and it's both the charm and the bottom in the case we're talking about. And the anomalous dimension that we calculated sums up those logs, and summing up those logs involves a non-trivial function of this w. But we actually know exactly the non-trivial function, because we can calculate it. And it's just this guy here. OK, so the new wrinkle that can come up in HQET is that the anomalous dimension can become a more non-trivial thing. If you look at it at leading log order, the rest is pretty straightforward. So if you go through the leading log result, you would do the same type of thing that we did before. You would match at some scale. And at that scale, you could normalize things so that the Wilson coefficient at mu equals MQ is just 1 at tree level. And then the leading log result is a function of these various things, which in general is C of MQ times some evolution from MQ to mu, suppressing some of the dependencies. The leading log result is 1 for this. And then this guy at the lowest order is just a ratio of alphas again. And then there's the gamma. For the purpose of solving the RGE, the only thing that matters is that it's proportional to alpha s. So the gamma is a constant times alpha s for heavy to light, just a number times alpha s. So for that current, it was a constant. This solution is actually valid for both of them. And then it's a function of this w for the current where we have two heavy quarks. So the w dependence just goes along for the ride when you're solving the anomalous dimension equation. OK? So that's what re-summing the logs would look like. Essentially, each log comes with extra powers of this gamma, and they all get re-summed into this factor. So with comment number four, are there any questions about that? OK, pretty straightforward. So much of this, essentially all of the story except for this one wrinkle, is very similar to integrating out a massive particle. And the other part of the story that's similar is that the HQET matrix elements depend on mu as well. And so in our example that we talked about last time, we had a matrix element. So let me give it to you in the context of that example. So last time we were talking about something which was a decay constant, and that's one example of this heavy to light current. So we had our current, which had one heavy quark, one light quark, and then a heavy meson. And we figured out that this was giving some a times v mu, and now I'm telling you that you should think of the a as a function of mu. The matrix element here is a function of mu. OK, so that's just a slight modification to what we talked about last time. And again, for this matrix element, you'd want mu to be, say, a GeV, or some scale that's greater than lambda QCD. And so what you would do is you would evaluate matrix elements and define that parameter at some scale like a GeV, and do renormalization group evolution from the heavy quark mass down to a GeV. OK? So that's how the renormalization group evolution story would be. You don't want to run all the way down to lambda QCD, because the anomalous dimension has to remain perturbative. So you would decide what your cutoff is for where you think perturbation theory is still valid. Often people pick something like 1 GeV or 1 1/2 GeV for these types of problems.
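As a rough numerical sketch of what that leading log evolution looks like for the heavy to light current, here is the ratio-of-alphas-to-a-power factor with a one-loop running coupling. The input value of alpha s at mb, the choice nf equals 4, and the sign convention for the exponent (whether this factor or its inverse multiplies the Wilson coefficient) are assumptions made here for illustration; the point is just the size of the effect.

```python
import math

beta0 = 11 - 2 * 4 / 3                  # one-loop beta function coefficient, nf = 4
alpha_mb, mb = 0.22, 4.8                # assumed alpha_s(mb) and mb in GeV (illustrative)

def alpha_s(mu):
    """One-loop running coupling, run from the assumed value at mb."""
    return alpha_mb / (1 + alpha_mb * beta0 / (2 * math.pi) * math.log(mu / mb))

gamma0 = -4.0                           # gamma = -alpha_s/pi written as gamma0 * alpha_s/(4 pi)
mu = 1.5                                # run down to about 1.5 GeV
factor = (alpha_s(mu) / alpha_mb) ** (gamma0 / (2 * beta0))

print("alpha_s(1.5 GeV) is about", round(alpha_s(mu), 3))
print("leading-log factor [alpha_s(mu)/alpha_s(mb)]^(gamma0/2beta0):", round(factor, 3))
print("its inverse:", round(1 / factor, 3))
```

Either way the sign convention goes, running from mb down to about 1.5 GeV is roughly a 10 percent effect, which is why these leading logs are worth re-summing.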
And again, this is just separating out all the MQs, making sure that your matrix element here has no MQs, it does have an extra cutoff mu. OK? So that's the RGE story. Let's also talk a little bit about the matching story. So these are the perturbative corrections at the scale MQ, or alpha s at MQ. We had perturbative corrections at the w scale when we integrated out the w boson. Now we have another set of perturbative corrections at the heavy quark mass scale when we integrated out the heavy quark mass. It's different because we're now passing from something that looked like a full QCD theory with some external operators. We're now passing to this HQET theory for the heavy quark. So if you like, we previously had Mw. We knew how to do renormalization group evolution there, and we had a Hw theory here. Now we integrate out the heavy quark mass, and we pass to an HQET theory below that scale. So if you go back to the Hw theory, if we can call it that, and we want to match that onto HQET, then we do it with a calculation like this. And I'll just use this heavy, light example still. So here's a matrix element that you could consider for the matching. And let me write it with a bunch of schematic objects and then explain what they are. So we use our spinners. I'm taking zero momentum here, just for simplicity. These Rs are residue factors that come in. So so far when we calculated the wave function renormalization graphs, we just took the divergent piece. And if you do that when you do the matching computation, you have to use LSZ. And so the finite pieces come back in, and you call them residues. So these are finite residues that you have to take account of, finite in the UV sense. So UV finite residues you have to take into account if you're just using MS bar, and this here would be the vertex renormalization graph. OK, both diagrams look like this. In QCD, they're both the same type of structure. And then in HQET, it's a similar thing. We can write down a formula for the s matrix element, the same states, now with our effective theory current. And we know how to transition from effective theory and full theory states. We talked about that last time. And so there again would be some residue factors. And if the residue factors are not the same in the two theories, you have to account for that. And they won't be, because one of the heavy quark residue will be different than the heavy quark residue here. So this guy here is this finite piece of this graph. This guy here will be the same as above. And this guy here is the heavy quark vertex graph, which is independent of the spin structure. It's not independent of the spin structure up there. So we could carry out the calculations of those loop diagrams, and then we could subtract, and we could see what's left over. And whatever's left over is the Wilson coefficient. OK, so very similar to what we did before. Calculate, subtract. What you find when you do that is that there's actually two currents. If you consider a vector current where gamma is gamma mu, then the effective field theory, which is HQET, has two effective theory currents that are vector. So you have C1 and C2. The reason that there's two is because we have another vector to play with, which is v mu. So that can have q bar v mu replacing the gamma mu. So v mu wasn't an external thing in QCD. It was part of the dynamics. Here it's an external thing, so it's allowed to replace gamma mu as one of the structures. 
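As a sketch of that operator basis (writing the HQET field as Qv, as in the lecture, the light quark as q, and using the labels C1 and C2 from the discussion above), the matching of a vector current takes the form
\[
\bar q\,\gamma^\mu\, Q \;\longrightarrow\; C_1(\mu)\,\bar q\,\gamma^\mu Q_v \;+\; C_2(\mu)\,\bar q\, v^\mu\, Q_v\,,
\]
where the second structure is allowed only because v mu is an external vector in HQET, and at tree level C1 = 1 and C2 = 0.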
And if you go through the calculation, this is the result, just to show you what the result looks like. Remember, the heavy to light case is the case where you're not getting a non-trivial function on the right hand side. So you'd get a non-zero result of order alpha s for both of those coefficients, OK? So the reading also goes through this whole thing for the heavy to heavy case, which is more interesting. But it's not really more non-trivial than what we've talked about so far already with anomalous dimension. You get results that are functions of w. Wilson coefficient would have functions of w showing up here, OK. So I won't go through that. Now if you wanted to carry out this calculation, it looks like it's kind of involved, this graph, this graph, this graph, all these diagrams to consider. You'd like to make your life as easy as possible, and there's actually a very nice trick here for doing that I have to mention to you, because it's kind of magical. So what is the fastest way that I could get this result? So this is a nice trick to remember if you ever have to do a calculation like that, because it's not specific to HQET. So let's pick our infrared regulator to make the effective theory as simple as possible. We have some choice in how to pick the infrared regulator. The result for Wilson coefficients and anomalous dimensions will not depend on that choice. So let's use that freedom and make things as simple as we can. And the choice that does that here is to use dimensional regularization for the UV, as we've been discussing, but also for the infrared. So let's use dimensional regularization for both. If you do that, you can convince yourself that all heavy quark effective theory graphs with on-shell external momentum-- so I can take the external momentum on-shell. I don't need it to regulate divergences, because I'm going to use div reg to do that. So all the integrals are scaleless, and that means that they come out to be something that, if you think about it, is either zero or zero in a way where you have the ultraviolet divergence cancelling an infrared divergence, which is still zero. Now you have to think about the fact that there's both of these going on, because you still have to think about adding counter terms to HQET to cancel the UV divergences. But the answers are very simple, because you can throw away all the finite pieces. If you have 1 over epsilon minus 1 over epsilon, if you multiply by epsilon, that term's not there. So 1 over epsilon gets removed by counter terms, and there's no finite pieces left over. So just use MS bar, so you just strip off that exactly, and you're left with 1 over epsilon IR. Now if you-- so the effective theory diagrams are just simply all 1 over epsilon IR. Now the reason that this is making things simple is because you also know a fact, which is that the IR divergences in the full theory and the effective theory have to match up. So these 1 over epsilon IRs have to match up with your full theory calculation. So if you renormalize the QCD calculation, which you can't really get around doing, you have to do that calculation. So you do that calculation in pure dim reg, same IR regulator, which is a nice regulator to use for QCD. You do the UV renormalization using the standard counter terms, and what will you get? You will get something that looks like a number over epsilon IR, and then you'll get numbers times logs mu over MQ plus other things. This thing here just cancels with the-- if I subtract HQET, this is just canceling this. 
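To make the scaleless-integral statement concrete, the textbook identity at work here is
\[
\int \frac{d^d q}{(2\pi)^d}\,\frac{1}{(q^2)^2}
\;=\; \frac{i}{16\pi^2}\left(\frac{1}{\epsilon_{\rm UV}} - \frac{1}{\epsilon_{\rm IR}}\right) \;=\; 0\,,
\qquad d = 4 - 2\epsilon\,,
\]
so a bare HQET graph of this type vanishes, the MS bar counterterm removes the 1 over epsilon UV, and what survives in the renormalized effective theory is a pure 1 over epsilon IR that matches the infrared pole of the full theory.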
So this guy here cancels when we subtract HQET. And so the matching is then just this. So I don't actually even have to consider calculating the heavy. If I use all these facts that I know, if I trust them, then I don't even have to calculate the HQET graphs. I just say, let me imagine that I calculate them. They're all scaleless. They look like that. Let me imagine that I renormalized them all. The 1 over epsilon UVs are gone. I'm left with 1 over epsilon IRs. I do this calculation. I say, let me imagine that these cancel each other. And then I have the matching. So that's exploiting all the facts that we know about effective theories and full theories to get the matching as quickly as possible by just doing the full theory calculation with a particular regulator. It's not checking anything. It's not checking that the full theory and the effective theory have the IR divergences matching up, et cetera, et cetera. But if you know that that's true, if you trust that it's true, then this is the fastest way to get the matching. Seems like magic, right? OK, so sometimes you can exploit what you know about the effective theory to get things more quickly. So questions about that? AUDIENCE: What is [INAUDIBLE]? PROFESSOR: Yeah. So if you think about the loop integral, then the d here, right, and 4 minus 2 epsilon have an epsilon greater than 0, decreasing the powers of q in the numerator is making it more UV convergent. So to regulate the IR, you want to have epsilon on the other side. So this is what you need for regulating IR, and this is what you need for regulating UV. So it may seem contradictory that you could even do both of these things at the same time, because greater than zero and less than zero. But you could always think of splitting up this integral with a hard cutoff somewhere in between and then just using this above and this below that cutoff. And the cutoff dependence will cancel when you put the pieces back together. So it's actually valid to just do calculations. And for the most part, you can just close your eyes, and you'll get some gamma of epsilons and some gamma of minus epsilons, and those will be separately regulating the divergences. And if you ever worry about it, you can do what I said. You could put a cutoff in and check that you're not making mistakes, but for the most part, it just works automatically. Any other-- AUDIENCE: [INAUDIBLE] all of the epsilon UV [INAUDIBLE] epsilon IR. PROFESSOR: Yeah. AUDIENCE: [INAUDIBLE] minus epsilon UV [INAUDIBLE] zero [INAUDIBLE]. PROFESSOR: Yeah. AUDIENCE: --negative. PROFESSOR: Yeah. AUDIENCE: But formally, you can set the epsilon UV plus an epsilon IR. PROFESSOR: That's right, yeah. So formally, this is zero. And the reason that you have to worry about zero is because you have to add a counter term to cancel this. Your UV counter term, you have to still add it. And then it cancels this, and then you get something non-zero. So the bare graph is zero. The counter term's non-zero, and there renormalized graph is non-zero. This is a subtlety that's worth remembering if you ever want to do calculations this way. OK, so that's some of the complications and fascinating facts about HQET in the perturbative sector. Let's come back and talk about power corrections, which are-- I'll go under the title of-- well, maybe I should just call them power corrections. Better title. So we have an effective theory. We've so far talked about it at lowest order. We stopped at lowest order. 
We had this HQET Lagrangian, and we talked about using that Lagrangian to carry out some perturbative calculations. What if we went to higher order in the power [INAUDIBLE] expansion, which is 1 over MQ? OK, so power corrections here means higher order in 1 over MQ. So let me show you how you can construct those terms. So let me go back to a representation of the full QCD Lagrangian, which we had in terms of this B field and the Q field. And when we first talked about this, we just dropped all the terms of the B, but now I'm going to do something a little more sophisticated with them and really just integrate them out. So we had this, and this was really just us writing QCD in a fancy way that was convenient for this discussion. So this is really just QCD written in a fancy way. So if we want to take this Lagrangian at tree level, we can just integrate out Bv. This is a Lagrangian that has quadratic dependence on Bv. So you could think that the path integral in this formula here would be a quadratic path integral, and those we can always just solve. And what effectively integrating out Bv amounts to is solving for the equations of motion of Bv and plugging that back in. So the type of diagrams I was drawing before where I had this wiggly line and it was a Bv propagator, we can integrate out that by solving for the equation of motion. So we look for variation with respect to Bv bar and set it to 0. And that gives this equation, and then we solve this formally for Bv by just inverting this operator to get that equation. And then we can plug that equation back into this equation, which is still a QCD Lagrangian, actually, but-- and then we expand. And once we expand, we match onto the HQET Lagrangian order by order, tree level. So the first term is the term we've been discussing. The next term, we drop the v dot D here, because that's small. We just have 1 over 2MQ. There's two D transverses, and they'd be higher order terms as well, but we'll stop in that order. So this is L0; the first order term, this would be L1; and there'd be higher order terms. So what is this guy? It's useful to write that guy, actually, in terms of two different things, and you'll see why momentarily. So it's got two covariant derivatives, and both of them are dotted into gamma matrices. It involves this thing, D transverse, which, remember, is the full D minus a projection onto v. So it's something that's transverse to v. So what I want to do to simplify this guy here is I want to do the following. I'm going to use the fact-- I'm going to write it in terms of the field strength by using the fact that the commutator of two Ds gives me a G. And the commutator of two sigmas, I'll write-- sorry. The commutator of two gammas gives me something I can call sigma. So let me write this as the symmetric piece and then the anti-symmetric piece. And I do a symmetrization in both the fields and the Dirac structures. Doing the DT slash, DT slash anti-commutator. And then the other piece, since it's anti-symmetric, automatically forces the anti-symmetric combination. That guy is a G mu nu. So for this piece, we just get DT squared. And for this piece, once we track all the 2s, the i's give sigma dot G. And so the usual way of writing L1 is as follows. You say L1, using this formula, plugging it in, has two terms. And you'll see why when I write them down that we wanted to do this. OK, so that's what L1 is after plugging that in. Now the reason to do this is that if we ask about symmetry breaking, that's something that can happen from sub-leading terms.
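For reference, in one common convention (writing the field as Qv as in the lecture; normalizations differ between references) the sub-leading Lagrangian just described reads
\[
\mathcal{L}_1 \;=\; \frac{1}{2 m_Q}\,\bar Q_v\,(i D_\perp)^2\, Q_v
\;+\; \frac{g}{4 m_Q}\,\bar Q_v\,\sigma_{\mu\nu} G^{\mu\nu}\, Q_v\,,
\]
with the first term the kinetic-energy type correction and the second the chromomagnetic, magnetic-moment type term.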
Lowest order, we had a symmetry, heavy quark symmetry, and that is broken by these interactions. But it's actually broken in different ways by these two terms. This term here doesn't have any spin structure, so it doesn't break the spin symmetry. It does have flavor structure because of the MQ, so it breaks the flavor symmetry. So this is a kinetic energy type correction, and it breaks flavor symmetry because of the dependence on Q in the MQ. And this guy breaks both, because it has a spin structure now, and it's a magnetic moment type term. So it's got the sigma dot B field type interaction. This is what I mean by the magnetic moment type term. OK? So that's what the sub-leading power corrections look like, and if we wanted to use the effective theory to talk about power corrections, we could do that. We're constructing them here by knowing the full theory, just integrating out explicitly the fields, OK, which is a very nice thing if you can do it. Now you could do it the other way, which would be to think about just writing them from the bottom up. And there is one way in which that's more general than what we've talked about, and that's because what we talked about was tree level. And if you wanted to include loop corrections, how do we know that there's not some other operator here that we missed because it just vanished at tree level, for example? We've seen examples where that happens. There's an operator that only shows up at loop level. So we could think about it from the bottom up, even though this is a top-down effective theory, in order to make sure we're not missing anything. And if we wanted to do that, we should enumerate all the possible things, the symmetries and all that we can use to constrain the form of the operators. So let's enumerate. So there's the power counting, of course. That's pretty simple. Here all the powers of 1 over MQ are being made explicit, and they just tell us what dimension of operator to look for, just as in our integrating out heavy particles. So we just know the dimension of the fields that we have to put in the numerator from how many MQs we're talking about. There's gauge symmetry, of course. So use covariant derivatives. Very easy to take into account. There's discrete symmetries, charge conjugation, parity, and time reversal, which are symmetries of QCD if we drop the theta term. And we can impose them as well, and again, that's easy. I wouldn't be making a list if there wasn't at least one thing that was hard and non-trivial. But discrete symmetries are easy. The thing that actually is the hardest is Lorentz symmetry. Oh, you say, just dot Lorentz indices into Lorentz indices. But you have to ask the question whether we even have Lorentz invariance in this theory. And it turns out that part of the Lorentz group was actually broken by having this heavy quark and doing this type of expansion that we've been talking about. So if you think about the six generators of the Lorentz group, the boosts and the rotations, there's a part that I could call the transverse part, which is transverse to the velocity. So in the rest frame, that would be M12, M23, and M13. And those are the rotations. So this is-- v is like this. So no matter what v you pick, there's always three generators that are rotations. And then there's the boosts. And you should think of the boosts as taking v mu and then M dotted into v and then making the other guy transverse, so we denote it like that. So the new index is transverse, and in the rest frame, that's M01, M02, and M03.
And so introducing v mu actually breaks the boosts' symmetry. And if you like, you could think the reason it breaks the symmetry is because it gives a preferred frame, which is the rest frame of the heavy quark. If you have a preferred frame, then you've broken Lorentz invariance. So that's bad. And it turns out there's actually a hidden symmetry of this effective theory that partially restores this breaking. And it restores it in exactly the amount that it needs to restore it. That is, it restores it at low energies. And that's called reparameterization invariance, which I will write once, and then forever more, we talk about it as RPI, Reparameterization Invariance. And it's an additional symmetry that we have on v mu itself. So let's go back and think about how we introduced v mu in the first place. So we're saying that v mu breaks part of the symmetry, but how did we decide on what v mu was? How much freedom was there when we defined v mu at the beginning? If we're saying that it breaks, then we should know how much freedom there was, because a freedom to define different vs could restore symmetry, just realized in a different way. And that's what happens. So where did it come from? We had P of a heavy quark, and we split that into two pieces, MQv plus k. But this split into two pieces is arbitrary by some amount. We could move pieces back and forth between here and here, and we would still have the same theory. We have to be careful that we're moving back pieces that don't violate the power counting. And that's what I mean by somewhat arbitrary here, not completely arbitrary. There was a point to doing this, because we wanted to separate out the big piece and the small piece. But we could always move a small piece back over here, and that wouldn't change this decomposition. So the invariance that you have is the following. You can take v mu and send it to v mu plus some epsilon mu over MQ. And k mu goes to k mu minus epsilon mu. That moves the piece back and forth between them, and as long as I think that epsilon mu is some parameter that doesn't have a power counting in it, i.e. its order doesn't have any MQs in it-- it's just something of order lambda QCD, say-- then that makes the power counting still true. That was the point of this decomposition. And it allows us to move a small piece back and forth. The small piece is this epsilon. So that's a symmetry. That's called reparameterization invariance. So we have to make sure that when we construct our effective theory that it satisfies the symmetry if we want it to be a boost invariant-- if we want to restore boost invariance to the theory. So this parameter epsilon you can think of as-- you could consider it to be a finite reparameterization symmetry, but you don't really have to worry about finite transformations. You can just do the infinitesimal. So we'll think about epsilon mu as an infinitesimal. And it has that counting that I put over there. Now v squared was equal to 1, and that's also something we don't want to spoil. But that's easy. We just say that epsilon dot v is equal to 0. That maintains this condition. So that means that there's three different components of epsilon, non-trivial components to epsilon. And those three components of epsilon are exactly related to the three boosts here. OK, we have a three family-- three-parameter family of transformations, which are the three components of the epsilon, which, in the rest frame, would just be the one, the two, and three. What did we do-- what about the fields?
How does the field, the Qv, change under this type of transformation? Let me take the field at x equals 0 for now. So v slash minus 1 acting on Qv, that was equal to 0. And if I do the transformation, then this v slash changes. It becomes v slash plus epsilon slash over MQ. Let me imagine Qv changes. It goes to Qv plus delta Qv, and I have to do that on both sides. Then I can take this-- so this thing here is some order epsilon change. Then I can take this equation, and I can just solve. So the piece that's order epsilon to the 0, just satisfied. Solve for the piece that's order epsilon, and that gives me an equation for delta Qv. So rearranging this equation, I find that 1 minus v slash delta Qv is epsilon slash over MQ times Qv. And this equation has the solution that delta Qv is epsilon slash over 2MQ times Qv. Remember that epsilon is transverse to v, so if I push the v slash-- so if I plug that solution in here and I push a v slash through the epsilon, well, then I can push the v slash through the epsilon, let it hit the Qv. That gives a factor of 2, because it anti-commutes. That's this 2 here. And then I would get what I wrote on the right hand side. So if it's not obvious, check for yourself that that's a solution. So that's how you derive the change to the field under this reparameterization. And so when we talk about operators and the effective theory, we have to worry, how does the symmetry act on them? And it's a kind of a non-trivial symmetry. Was not apparent to us when we started, right. [CHUCKLES] OK. So the full reparameterization is v mu goes to v plus epsilon over MQ. And then if I take Qv of x, then that is what I said. There's this epsilon slash over 2MQ piece. This is the transformation, so it goes to itself plus this extra piece. And the fact that I take it at x adds one little slight wrinkle, and it just gives this extra phase factor. And that extra phase factor is exactly what encodes the change of k. So this, if you like, encodes that derivatives should go to derivatives minus epsilon, or in momentum space, that k should go to k minus epsilon. OK, so previously we had a rule for k, but now I've encoded that in this phase. OK? So that's the symmetry that we should look into. So what does it do? So what this does is it restores invariance under boosts, but only small boosts. The reason that I call them small boosts is because epsilon here had to be of order lambda QCD. It couldn't have been order MQ. That's what I mean by small. And from the point of view of this theory, this is all we care about. Because we want to remain within the region where the effective theory was valid, the whole setup of the effective theory involved dividing out a large piece and a small piece. If we allow back large pieces, then the game is over, and you wouldn't be formulating correctly the effective theory, because you'd spoil the power counting. OK, so this is this hidden symmetry we're calling reparameterization invariance. And it's not special to HQET. Any time you have fields that are labeled by something, you should think about whether there's a symmetry like this. OK, so that's the entire list of the symmetries that you should consider in order to think about doing a bottom-up approach to HQET. Simple ones, and then there's this one that's a little more complicated. So let's go back and now consider the 1 over MQ operators in general.
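Collecting the reparameterization transformation in one place (infinitesimal epsilon mu of order lambda QCD, with v dot epsilon = 0; the sign of the phase assumes the usual convention in which residual momenta enter the field as e^{-ik.x}):
\[
v^\mu \to v^\mu + \frac{\epsilon^\mu}{m_Q}\,,\qquad
k^\mu \to k^\mu - \epsilon^\mu\,,\qquad
Q_v(x) \to e^{\,i\,\epsilon\cdot x}\left(1 + \frac{\slashed{\epsilon}}{2 m_Q}\right) Q_v(x)\,,
\]
where the phase factor is what encodes the shift of the residual momentum.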
And it turns out that there's not any missing operators, that the two operators we have are actually the complete set that you can write down at this dimension using all the properties of the field. So we didn't miss anything from that point of view. So let me write them down again, and let me write them down in a way where I imagine that radiative corrections have come in as well. And I'll give them some Wilson coefficients, which are generically called Ck and Cf. That's the standard notation. So this is a Wilson coefficient. That's a Wilson coefficient. This is not 4/3; it's a Wilson coefficient. The name is the same as the Cf that is 4/3, but this is a little cf, not a big CF. So if we want to-- so it's gauge invariant. It has the right parity, et cetera, et cetera. We should worry about the reparameterization invariance. So let's do that. So at lowest order, the phase is what changes. And the leading order Lagrangian is invariant, because v dot epsilon is 0. So at order MQ to the 0, since v dot epsilon is 0, you don't get a leading order change. So our Lagrangian was invariant. This invariance, this reparameterization invariance mixes orders. It connects orders in the expansion. There was a term that was order 1, which is this piece, and there's a term that's order 1 over MQ. So the symmetry is actually making a connection between leading order and sub-leading order operators. So we could ask about this delta L 0, and there will be a piece at order 1 over MQ. So let's just write out all these things. Transforming everything. This is our leading order Lagrangian. After imposing the field change as well as v change, the v dot D becomes this, and the field becomes that. So there's three things here that are being changed. Expand this out. Use things like 1 plus v slash over 2, epsilon slash, 1 plus v slash over 2. Use that epsilon dot v is equal to 0. Simplify, do some Dirac algebra, and you can boil this down to something simple, which is that the entire change is just an epsilon dot D over MQ times Qv. And if you look at this, it has to cancel against something that's order 1 over MQ. And if you look at the terms that we had in L1, which are 1 over MQ, there was a kinetic piece. We called it kinetic energy piece. And if we do the change there, there is a contribution from the phase in this case, because we had transverse derivatives. So we get an epsilon dot D transverse. That's non-zero. And if you go through the leading order change to this guy, as well as the guy that's the magnetic guy, you find that the magnetic guy is 0 at this order. It's non-zero at higher orders, but at this order, it's zero. And the kinetic guy does have a transformation. It has exactly the same form as this guy here. And if epsilon's dotted into the D, then it's a D transverse. But this guy has a Wilson coefficient. This guy doesn't. So in order for these to cancel, you actually learn something non-trivial. The symmetry teaches you something non-trivial about the sub-leading Lagrangian. That Wilson coefficient has to be 1 to all orders in perturbation theory in order for the symmetry not to be violated. AUDIENCE: But you are enforcing the symmetry? PROFESSOR: Yeah. So the symmetry is boost invariance, and it seems like a reasonable symmetry to impose. Yeah. It'd be small boosts. So as long as your scheme and your regulator don't break the symmetry, which is always something that you have to worry about in general, then this guy is 1 to all orders. OK?
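Schematically, then, with those two coefficients made explicit (same caveat on normalization conventions as before),
\[
\mathcal{L}_1 \;=\; \frac{c_k(\mu)}{2 m_Q}\,\bar Q_v\,(i D_\perp)^2\, Q_v
\;+\; \frac{c_F(\mu)\, g}{4 m_Q}\,\bar Q_v\,\sigma_{\mu\nu} G^{\mu\nu}\, Q_v\,,
\qquad c_k(\mu) = 1 \ \text{to all orders by RPI},
\]
while c_F of mu is not protected and does run, as discussed next.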
So if you did something like dimensional regularization and you thought you should calculate this guy, you'd just find that it would be 1. And you'd wonder, well, why is that? It's the symmetry that tells you it's 1. OK, so you don't have to figure out that guy. The other guy, which we've called Cf, you do have to figure out, because it wasn't constrained. And at lowest order, the other coefficient is not constrained in this way. And so it does get an anomalous dimension. And so we could calculate it. It's a good homework problem. You may see it on a future homework set. And there is an anomalous dimension, and when you solve that anomalous dimension, you're again getting something that's the ratio of alphas to some power. In this case, it's a non Abelian power, so the adjoint Casimir. OK, so that guy does have an anomalous dimension. I think we'll stop there today. And we'll talk more about power corrections and the phenomenology of them, how we can make non-trivial predictions of from them next time.
https://ocw.mit.edu/courses/20-020-introduction-to-biological-engineering-design-spring-2009/20.020-spring-2009.zip
>> Sally: Hey Dude, how are you? >> Dude: I'm not doing too well. I'm trying to figure out which device to use to turn off the gas-o-matic module and stop Buddy from growing too big. But I don't know how to choose! >> Sally: Hmm, well choosing the right device for your system can be pretty hard. Why don't we work on this together? >> Dude: That would be great. I am just going around in circles here. >> Sally: Well I'd suggest that you use a digital device to turn off the gas-o-matic module when Buddy gets to a certain size. >> Dude: What's that? >> Sally: Well let's start with the basics: all digital devices have inputs and outputs that are always in one of two states: either ON or OFF. People who work with electrical devices sometimes call the two states HIGH and LOW because of the high or low voltages that the devices receive or produce. Other people think of the two states as true/false or as the numbers 1 and 0. It really doesn't matter. What's important for us is that if you hook up the output of a digital device to the gas-o-matic, when the device produces a high signal, Buddy will inflate. When the device produces a low signal, Buddy will stop inflating and he'll just stay the same size for a while. >> Dude: That's cool but why make just two states? Wouldn't it be cooler to make Buddy get bigger really fast at first and then slow down when he's close to full size? I really don't want to see him growing out of control like before. Couldn't we build an even more interesting system from devices with more states, like three or even ten? >> Sally: Those are all really interesting ideas but here's why two states makes the most sense. Let's imagine that you're measuring the output signal from your digital device over time. Depending on what the input to the digital device is, the output will switch between its two possible states ON and OFF. But of course, this drawing is actually a bit misleading. See all digital devices, whether they are genetic devices or electrical devices, never produce a signal that is perfectly ON or perfectly OFF like I've drawn here. There are always some minor fluctuations in the signal. These fluctuations aren't real changes in state of course. But sometimes, if the fluctuations are too big, then they might tell the gas-o-matic to switch on when you really want it off. >> Dude: Oh no. I definitely don't want that. It took forever to clean up the lab last time that happened. So the reason we limit the devices to two states is because they work better when there's noise? >> Sally: Exactly! >> Dude: And you think Buddy's growth is noisy? >> Sally: Definitely! He changes a little just when he breathes. And he can grow and shrink a lot depending on how much food he has and what's around him. And sometimes I think he changes size just to show off... >> Dude: So it sounds like digital devices are the way to go! It will tell the gas-o-matic device to turn on if Buddy's really growing and turn off if he's full size, but ignore those in between size fluctuations he likes to make...but how big are the fluctuations that the device will ignore? How does it know? >> Sally: Well the device only knows what we tell it, and we can split up ON and OFF the way we choose. >> Dude: OK then...I'll put the ON/OFF border here! >> Sally: Dude, weren't you listening! If the signal is somewhere near this boundary and noisy, then the device will switch from ON to OFF or OFF to ON when you don't want it to. Here's what we can do.
Let's call just this small part the "valid" signals and leave all this in between as "invalid" signals. That way the device won't confuse ON signal values with OFF signal values so easily. >> Dude: Aw Sally. Maybe I'm not getting this at all. It looks like you just put in two boundaries and now you've got twice as much trouble. What's to stop the noise from switching between valid and invalid signals here AND here? >> Sally: Dude, I actually think you've got the hang of this. Here's the trick that engineers use: make the valid output ranges smaller than the valid input ranges! This way the quality of the output signal is always better than the quality of the input signal, and even if there is noise on the input, the digital device will get rid of it by producing a better output signal. This trick even has a name: engineers call the difference between the valid input and output ranges the noise margin. >> Dude: Noise margins. I like it! Digital devices sound wicked useful. But can we really build them out of DNA? >> Sally: Some have already been built. I can show you how they work in some cells I've got growing in the lab. The DNA device you want may be there already and ready to go. Do you have some time to come to the lab now? >> Dude: Can I answer in digital?
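A toy numerical sketch of the thresholds and noise margins Sally is describing; the signal levels below are invented for illustration and are not from the dialogue.

    # Toy digital buffer with noise margins (illustrative values only).
    V_OL, V_OH = 0.1, 0.9   # valid OUTPUT levels the device produces (clean OFF / ON)
    V_IL, V_IH = 0.3, 0.7   # valid INPUT thresholds the device reads as OFF / ON

    def buffer(v_in):
        """Read a possibly noisy input and regenerate a clean output level."""
        if v_in <= V_IL:
            return V_OL      # interpreted as OFF, re-emitted at the clean low level
        if v_in >= V_IH:
            return V_OH      # interpreted as ON, re-emitted at the clean high level
        return None          # invalid region: behavior is not guaranteed

    # Noise margins: how much noise a valid output can pick up before the
    # next device might misread it.
    NM_LOW = V_IL - V_OL    # 0.2
    NM_HIGH = V_OH - V_IH   # 0.2

    print(buffer(0.9 - 0.15))  # a noisy ON is still read as ON and restored to 0.9

Because the output levels (0.1 and 0.9) sit outside the input thresholds (0.3 and 0.7), each stage cleans up the signal instead of passing the noise along, which is the noise-margin trick in the dialogue.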
https://ocw.mit.edu/courses/3-60-symmetry-structure-and-tensor-properties-of-materials-fall-2005/3.60-fall-2005.zip
PROFESSOR: All right, we've been slogging our way through derivation of the plane groups. And I think I'll do a few more, because we'll stumble across some major tricks in deriving a subfamily of them. But to not get lost in the forest because of all the trees, I have a set of notes. They are handwritten because my secretary would resign if she had to fit in all these figures and subscripts and strange symbols. So, they are as neat as I could make them. Sorry to say that, in running them through the Xerox machine in an attempt to get everything on one sheet, some of the last lines got clipped. So I'll run these through again and give you a copy that's minus those truncations. All right. What we've been doing so far, to have a brief reprise, was to take the symmetries, the 10 two-dimensional plane group symmetries. And they were one-, two-, three-, four-, or sixfold axes, a mirror plane, 2mm, 3m, 4mm, 6mm. So there are 10 of them. And these are the so-called crystallographic point groups. Crystallographic because we considered only those rotation axes that are compatible with a lattice. And they are point groups because they leave at least one point in space, invariant-- stays there rigidly fixed. And they are groups because the collection of operations that are present satisfies the postulates of the mathematical entity called a group. We've then are in the process of taking these 10 symmetries and adding them to the 5 two-dimensional lattices. The parallelogram net, the rectangular, the centered rectangular, and the square, and the hexagonal. And clearly, we can't put each of those point groups in every one of the lattices. For example, the lattice has to be square for either 4 or 4mm, so we would attempt to place only those two of the 10 point groups into a square net. We've gone through quite a few of them. I won't bother to draw the pattern of the symmetry elements. But we put no symmetry at all in the general parallelogram net. And that is plane group P1. Maybe I will draw the figures, after all. P2 was a twofold axis dropped into a net, which had to have no specialized shape, simply because a twofold [INAUDIBLE] axis requires nothing but a lattice row if one translation is combined with it. The threefold axis could fit into an equilateral net. And there's the threefold axis we added to a lattice point. And we have two additional threefold axes in the centers of those 2 equilateral triangles, which each make up half the cell. P4 was the square net. We put a fourfold axis at the corner of the cell. We've got another one in the middle. Two folds in the middle of the edges because of the 180-degree rotation that is built into the fourfold axis. And P6. We put that into a hexagonal net. We've got sixfold axes that we dropped in at the lattice point only, nothing else. There's a 120-degree rotation at the lattice point, and that gives us the threefold axes that are present in P3, as well. There's a 180-degree rotation contained in a sixfold axis, and that gives us the twofold axes in all of the locations of P2. OK, I haven't drawn in any representative patterns, but let me remind you again. The pattern that is characteristic of every one of the plane groups is just the pattern that the point group that you've placed at the lattice point would produce. And that pattern is, in turn, hung at every lattice point of the two-dimensional cell. 
So even though the huge number of symmetry elements that's present in the higher symmetry point groups is rather intimidating-- you say, wow, how would one draw a pattern for that-- the pattern is nothing more complicated than the pattern of the symmetry that you've placed at the lattice point. And all of these other symmetry elements arise to express relations between the motifs that you've placed at the initial representative lattice point. Deriving these, we used one theorem, which it's well to remind you of. And that is that if I have a rotation operation A alpha, follow that by a translation that's perpendicular to the rotation axis, what I get is a new rotation operation is B alpha. And it's located in a very specific location. And that is at a location that's a distance x equal half the magnitude of the translation times the cotangent of alpha over 2. So this combination term, if nothing else, reminds us that these combination theorems, as I call them, are not equations in symmetry elements. They are equations in individual operations. So, for example, if I combine the fourfold axis with the translation, the 180-degree operation that's present in a fourfold axis puts B pi here. The 90-degree rotation puts B pi over 2 here. And the 270-degree rotation-- which I can define just as well as A minus pi over 2-- puts the operation A minus 2 pi over 2 down here. So it's not an equation in symmetry elements. It's an equation in individual operations. Then some peculiarities started to arise, which we perhaps might not think of. If we put a mirror plane in a primitive rectangular net, that gave a group that we called Pm. If we combine that with a translation, we need a theorem that says what happens if you combine a mirror reflection operation with a translation that's perpendicular to it. We sketch that out, and once and for all could decide that it's going to be a new reflection operation that's located halfway along this perpendicular translation. So this reflection operation sigma, combined with this translation, gave us a new reflection operation, sigma prime, halfway along the cell. And, of course, we have this one hanging at a lattice point as well. And then came the interesting one. When we combined a mirror plane now with the centered rectangular net, we have all of the mirror planes that we have here, because this primitive rectangular lattice is a subgroup of the centered lattice. Then the interesting thing happened when we combined the reflection operation sigma with this translation that went to the center of the cell. And that gave us a transformation that was something we had not encountered before. We took an object, reflected it in this plane, and then translated it down to a centered lattice point over here-- to give us one that sat here-- and then asked, how did I get from the first one, a right-handed one perhaps, to a second one that's a left-handed one, and then a third left-handed one that sits down here? The answer is that we found there was no way we could specify getting from number 1 to number 3 in a single shot. We had to take two steps to do it. And there was nothing more simple than that. We had to first translate down by the part of the translation-- let me call it tau-- which is equal to that part of the translation t, which is parallel to the reflection operation. And then we had to reflect across, and that would get us from the first to the third. That was a new sort of operation. 
We'd indicate its locus by a dashed line to distinguish it from a mirror plane. And it has a translation part and a reflection part. Doesn't matter what order in which we do them. We get to the same location if we reflect first and then slide or slide first and then reflect. So this was a new operation that I'll represent by sigma tau-- looks sort of like a symbol for reflection, but the subscript reminds us you've got to translate by an amount tau that is parallel to the initial mirror plane. And this gave us a new theorem that a general translation that had a part t perpendicular and a component t parallel, when it followed a reflection operation, was equal to a net effect of reproducing the object by a glide plane, sigma tau prime. It had tau equal to the part of the translation that was parallel to the reflection part of the locus. And it was located always at one-half the perpendicular part of the translation. So using that theorem and completing the mirror planes that hang at the lattice points, we have a very interesting group that consists of a centered lattice, mirror plane hung at the corner lattice point-- this is also a lattice point, so we automatically get the mirror plane in the middle of the cell-- and then in between, we get this new 2-step operation, the glide plane. And the pattern that's representative of this plane group is, as advertised, just what a mirror plane does. And that is hung at every lattice point of the centered rectangular net. So the glide plane, this new operation that popped up, does what the new operations did in all of the other preceding groups. It tells you how do you get from the pair that you've hung at the lattice point to these that sit in the center of the cell. They're related by translation, but the relation of one to another of these motifs is by the glide plane. So that is an operation which has arisen in the group. And this, then, is a group that we would call Cm. C for a centered rectangular net, as opposed to the primitive net, which is the one we got immediately before. Again, I sound like a cracked record sometimes, but let me emphasize the simplicity of these patterns. Again, the pattern consists of what you would get when you hung a motif on one lattice point, and then that is repeated by m, which is the symmetry you've placed at a lattice point. And that's hung at every lattice point of the centered rectangular net. The glide planes just express relations between things that you already have when you've hung the motifs on the lattice point. Any questions after this brief reprieve? Comments? Yes, sir. AUDIENCE: You called those [INAUDIBLE] glides, that's a sigma tau? PROFESSOR: Yeah. The relation between this one and this one, I've called sigma. That's the reflection operation. The relation between this one and this one down here would be the operation that has a reflection part and a translation part, tau. Let me point out something that's worth observing when we start making some more complex additions. We said early on that we only have to consider translations that terminate within the unit cell. Because everything is translationally periodic-- not only the atoms in the motifs but the symmetry elements as well. But the observation that I want to make is that if we put a glide plane in. And let's do that for the direct addition of a glide plane to a lattice point. And having done that, we have the potential of possibly having derived a group that we would call Cg. OK. This diagonal translation-- this is T1, and this is T2. 
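Restated in compact form, the two combination theorems being used repeatedly in these derivations (notation follows the lecture's) are
\[
A_\alpha \ \text{followed by}\ T_\perp \;=\; B_\alpha \ \text{at}\ x = \tfrac{T}{2}\cot\tfrac{\alpha}{2}\,,
\qquad
\sigma \ \text{followed by}\ T = T_\parallel + T_\perp \;=\; \sigma'_{\tau = T_\parallel}\ \text{at}\ \tfrac{1}{2}\,T_\perp\,,
\]
and the second reduces to a pure mirror at one-half of the perpendicular translation when T parallel is zero, which is the case used above for Pm and Cm.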
We might ask, what is the reflect [INAUDIBLE] glide operation that sits at the origin lattice point, followed by T1 plus T2. Well, what does our theorem tell us? It says that we should get a new glide plane. It should be at one-half of the perpendicular part of the translation. And the perpendicular part of the translation is T2. So this is at one-half of T2, and it should have a glide component equal to the parallel part of the translation, and that is T1. T1 plus the original glide component, tau. What is this? A glide component of half of T1 plus the entire T1. If we ask, does that make sense? Yeah, we would glide down to here and then we would translate down to here. And that would give us sitting at half of a path of T1, a new object that sat here. Is that a glide operation? Yeah, it is. But it is one that is not really distinct from the glide plane that sits here. Because if we have a glide operation with tau equal to one-half of T1, and if this sits in a lattice, then there's going to be a glide plane has tau equal to 1 plus one-half of T1. And there will have to be another glide operation that consists of 2 plus one-half of T1. And the reason is that everything is periodic at an interval T1. So the moral that I'm trying to draw here is that one can add or subtract to identify the actual nature of a symmetry. One can add or subtract an integral number of translations. And that permits one to reduce any tau to a sigma tau prime, such that tau prime is always less than the translation that's parallel to the tau. In other words, lop off this translation of an entire T1, this translation of the entire T2, and you have identified the basic nature of the glide operation that sits here as something with a translation that's half of T1. The translations that move motifs out of the cell may be related by a glide operation that involves an integral number of T1s plus half of T1. But it doesn't change the basic nature of the simplest glide step that's in there. That's a very obscure explanation of probably something that didn't puzzle you in the first place, but it's worth saying when we make some of these additions. Let's finish this off. We have another translation in here, and that's the translation T1 plus T2 over 2. So, what would we get if we took the glide operation sigma one-half of T1 and followed that by a translation T1 plus T2 over 2. And that operation, again, would be equal to a glide plane sigma prime with the tau equal to the original glide component plus the part of the translation that is parallel to the glide plane. And it'd be located at one-half of the perpendicular part of the translation, which is T2. So it'd be at one-half of one-half of T2. So this says that the combination of the glide operation with the centered translation is a glide plane with a glide component T1. And it's located at one-quarter of T2. And that, indeed, is what you would do if you reflected and reproduced the object by a glide down to here and then translated by T2 over to here. OK. No, I'm sorry. We glided and then we added on the parallel part of the translation. So we would end up down here. And if one sits here, there has to be one repeated by T1 up here. And this is exactly the same thing as a reflection plane that's been introduced. So here is a case where we could subtract off the entire translation T1 and say this is identical to a mirror plane passing through the origin, a pure mirror plane sigma that is at one-quarter of T2. So, let me clean this up and show you what we have. 
Completing the operations, we would have a pair of objects related by glide, like this, that is hung at every lattice point. It's also hung at the centered lattice point. And what that is going to give us is a pair of objects that sit like this. And what has come in as a result of those combinations is a mirror plane interleaved between the glide planes. And this is exactly what we have in Cm, which was also interleaved mirror planes and glide planes with the origin shifted by one-quarter of T2. So this is not a new group. Proceeding logically, we'd take Cm and replace the mirror plane by a glide. When we do that, we have a consistent group. But it turns out to be exactly the same arrangement of symmetry elements and exactly the same pattern as Cm, but with a little nudge over along T2 by one-quarter of that translation. Yes. AUDIENCE: But in Cm, didn't we have mirror planes going through the lattice points? PROFESSOR: Yeah. And what I'm saying is here there are glide planes going through, but the lattice point is arbitrary. I can put it anywhere I like. And if I decide to put the lattice point here, that turns it into Cm. OK? So, let's put the two of them side by side. Cm was this mirror plane, mirror plane, mirror plane, glides, and the atoms, the motifs, did this. A lattice point here in the center. And I'll deliberately, to make my case, offset the cell by one-quarter of T1. Now I've got a glide plane here, same as this one. Mirror plane at a quarter of T1. Glide plane here at the centered lattice point. Mirror plane here, glide point T2 away. And here are the lattice points. But the pattern of objects looks like this. Armed in advance with what the thing has to look like. Looks like this. And this is exactly the same pattern of motifs. OK. So, coming out of this consideration, we have with the rectangular nets Pm, Cm, Pg. But Cg, if we try to construct it, was the same as Cm. And that exhausts the possibilities for a single symmetry plane in the rectangular nets. OK. Let me move on then to the next step. And I'm going to skip over a threefold axis. And I'm going to look at the square net combined with the other symmetry that would require a net with this dimensionality. And this would be a primitive square lattice plus 4 mm. We've already done almost all the work. So as we derive the symmetries that are subgroups of these higher symmetries, we've done P4. And that says that if we put a fourfold axis at the origin, that fourfold axis, we'll get another one in the center of the cell. And we'll get twofold axes in the midpoints of all the edges of the cell. And then 4 mm has one kind of mirror plane. Two Ms because there are two kinds of mirror planes. The mirror plane says, hey, if I'm in a lattice, I want to be at right angles to a rectangular or a centered rectangular net. Well, OK. This one is happy, because a square net is a special case of a rectangular net. The two translations are merely equal. So he is happy. Combine that with T2, and we'll get another mirror plane like this, and another mirror plane like this. Fourfold axis is going to rotate that mirror plane, so we'll have mirror planes running this way, and this way as well. And now we have a different kind of mirror plane that we tried to put in in this location. This mirror plane says, hey, I have to be parallel to the edge of a rectangular net or a centered rectangular net.
So if we look at the translations that are parallel to and perpendicular to this translation, lo and behold, this mirror plane is aligned along the edge of a centered rectangular net that has the additional specialization of being a centered square. But that mirror plane now is perfectly happy. And he says, OK, I'll hold my piece. I have the arrangements of translations relative to my orientation that makes me happy. So I can say now that there is a mirror plane running this way. There has to be another mirror plane 90 degrees away from those orientations. And this, then, is going to be the location of all the mirror planes in that net. And at no time did the chalk ever leave my fingers. Just doing what we said we're doing. Putting in 4 mm, making sure the requirements imposed on the lattice are those that that symmetry element demands. And here we go. Except that there is now one other combination that we have not considered. Here is a mirror plane. And now this mirror plane is diagonal to our translation T2. And here is a mirror plane. And that mirror plane is diagonal to our translation T1. So what is that going to require? Here is the mirror plane, that's the operation sigma. And here is our translation, T1, down to here. We have a theorem that says that a reflection followed by a translation that has a general orientation is going to give me a new reflection operation, sigma prime, that's located at one-half the perpendicular part of the translation. And it's going to have a glide component that is equal to one-half of the parallel part of the translation. So what has that told us? To get from this mirror plane down to here, we've gone this far in a sense that is perpendicular to the mirror plane, so this is T perpendicular. And we have a part of the translation that is parallel to the mirror plane. This is T parallel. So the plane that results is going to be a glide plane in here. It's located at one-half of the perpendicular part of the translation. And that is one-half of one-half of T1 plus T2. And it has a glide component equal to T parallel, which is equal to one-half of T1 plus T2. So we get a new glide plane in here. And that will require glide planes through similar arguments that go down the diagonals of the cell, like this. And let me convince you that that, in fact, is an operation that must arise if I place, at the origin of the cell, a set of objects that have 4 mm symmetry. We have one like this, one like this, one like this. Another pair hanging here. Another pair hanging here. And if we do a reflection operation, let's say this pair up to this pair, and then slide it down by the diagonal translation-- and we have the same set, again, hanging down here at the diagonally opposed lattice point. Reflect across, slide down to here. The way in which you get that is, believe it or not, to reflect across this glide plane. How do we get there? We reflect across. We translate down to here. And the way I do that is by reflecting across this diagonal glide. That probably has convinced no one. But map it out with your own pattern, and I think you'll agree. Let me observe, while you're considering that. When we derive groups based on 3m or 6mm, we're going to have other cases where a translation along the edge of the cell is inclined to a mirror plane. And the effect of combining a translation with a mirror plane that's inclined to that translation always has the effect of interposing a glide plane halfway in between the mirror planes that are related by translation. 
So, let me say that in general, because that will be an observation that'll let us identify glide planes quickly. So the general resolve is if we have a translation inclined to a mirror plane, which of necessity is repeated by translation, a quite general result is that a glide plane is interleaved always between the two. And they'll be parallel because they're related by translation. So we'll see some more cases where we can immediately state that without further thought. OK. There is something that we might consider doing. I'd like to put it off, though. Here we start with a mirror plane. Can we replace the mirror plane with a glide plane? The answer is yes, we can. But it's not at all clear if I take a fourfold axis and put a glide plane through it, what this plane has to be. So I'd like to leave this one for now and come back to that. OK. In the notes, I've tried to do all of these derivations thoroughly and logically. So if this is a bit fast or me waving my hands, saying a glide plane is here, and this, that, and the other thing, when you can't really see what I'm pointing at, I think if you refer to the notes that'll be clear. But yes, you had a question? AUDIENCE: So you said that this glide plane [INAUDIBLE] one-half T perpendicular. What did you write underneath that? I guess that maybe is where your T and one-half T perpendicular is where the glide plane is. PROFESSOR: OK. I was just demonstrating that, in fact, for this mirror plane with the diagonal translation, the glide plane would come in at one-half of the perpendicular part of the translation. And the part of the translation that-- we were combining this fourfold access with this translation. So the perpendicular part of the translation is half of the body diagonal. And if a glide plane comes in one-half of the perpendicular part of the translation, that put it parallel to the mirror plane and at one-quarter of the way of the translation. So this little scrawl here says it one-half of T perpendicular and one-half of one-half of the body diagonal. So it's one-quarter. All this equals one-quarter of T1 plus T2. OK. Now something unexpected happens. If we would move along, I skipped over 3m. We know what P3 looks like. And that is a hexagonal net, an equilateral net. And we have a threefold axis that we've added to the lattice point. And then we found additional threefold axes in the center of these triangles. And, once again, the pattern that has this symmetry is just a triangle of objects hung at every lattice point. And now we want to add a mirror plane to that threefold axis. So we have a primitive equilateral net plus the two-dimensional plane point group 3m. The question is, which way should we orient the mirror plane? Well, a mirror plane says, I want to be along the edge of either a centered rectangular net or a primitive rectangular net. There's nothing about this lattice that looks rectangular. Yeah, there is. I see a couple of people doing this, and they have the right idea. Here's a lattice point. If we go more than one unit cell in this diagram, here's a lattice point. Here's a lattice point. Here's a lattice point. Here's a lattice point. Go down to here and go over to here. And lo and behold, here is a centered rectangular net, hiding incognito in a hexagonal net. Well, that helps. I can put in the mirror plane. But let me point out-- this is getting very messy, so I'll redraw it on a smaller scale. This will be worth doing so that it's perfectly obvious. I'm going to draw a number of equilateral nets. 
The reason I'm drawing more than one cell, I would like to point out that here is one centered rectangular net. And it has its edge perpendicular to the translations in the hexagonal net. But there is also a rectangular net that goes down like this. That's a centered rectangular net that has its edge along one of edges of the unit cell. So I have two possibilities. One is to draw the rectangular net like this. The other one is to draw the rectangular net like this. And having confused you thoroughly, let me simply draw this larger cell again. And say that I could, if I wanted to, put the mirror planes in in these directions, along the edges of the cell. OK. And in this case, if you're convinced that there's a centered rectangular net sitting there. Let me clean this up. And here's one mirror plane. 60 degrees away is another mirror plane. And there'll be another mirror plane at this lattice point doing this. So now I've got 3m at every one of these threefold axes. And I got all those simply by repeating these mirror planes by translation. But in one case, the mirror planes are perpendicular to the cell edge. This being T1, let's say, and this being T2. In this case, the mirror planes are along the edges of the cell. This being T1, and this being T2. So, holy mackerel! One point group. Two different space groups, which differ in the way in which the symmetry is oriented relative to the lattice. And if you think back a little bit, this particular net, with this dimensional specialization, was happy with a sixfold access as well, and will be compatible with 6mm. So, really, these two different orientations for the mirror planes, even though we haven't got there yet, are the two different orientations which are both present simultaneously if we were to put 6mm into this hexagonal net. So, same point group. Same lattice. Two different plane groups that depend on the orientation of the symmetry relative to the lattice. So there isn't even a one-to-one correspondence between the point groups and the plane groups. The pattern for either of these is, once again, going to be just what the pattern of the point group does. If we start with a motif here and repeat that by the threefold axis and the mirror planes, we would have motifs like this. And here the translation points out in between the mirror planes. In this case, the mirror planes would repeat the objects in two locations, like this. So, indeed, both patterns have 3mm symmetry. And what's different is the way the translations come out in between those mirror planes. OK. Now our notation has come up against an impasse. We said that the notation for the plane group should be the symbol for the lattice type, which the initiated know has to be a hexagonal net. The point group that we've added, which is 3m, but now how do we distinguish the two different orientations of the mirror planes? And the way around that is to actually use three symbols. And the second symbol will be what is perpendicular to the cell edge. And the third symbol will be what's parallel to the cell edge. So this one would be called P3M1. And this one, just to distinguish it, would be called P31M. So if this weren't bad enough, now you've got almost the same symbol with a permutation of two of the terms in it. In this case, the mirror plane is along the cell edge along the translations. In this case, it's perpendicular. But we're not quite yet done, because we've got translations that are inclined to mirror planes. 
And now you can see how prudent I was in saying quite generally, without trying to figure out what is the perpendicular part and what is the parallel part, here's a translation and it's at an angle to a mirror plane. And therefore we get a glide that has to be halfway between these mirror planes. And there'll have to be a glide halfway between these mirror planes, and a glide that's halfway between these mirror planes. In this case, the mirror planes are this way. Again, a translation that's inclined to the mirror plane. We have to have a glide that is midway between these mirror planes. In this case, they're a little easier to identify. They sort of make a triangular box that surrounds the threefold axis. So now we see another difference in the plane groups and the reason why they're so very distinct. And that is the rearrangement of the glides relative to the threefold axes are quite different in these two cases. So these are two quite different symmetries. Notice that when we start identifying special locations in these two plane groups, here there is a location only of symmetry 3mm for a point group, 3m for a point group. Here there are two different locations, one that's symmetry 3m, the other one is just symmetry 3. So this is location of point group 3 and a location of point group 3m. So the sort of special positions that exist in these two plane groups will be very, very different. One more addition, and then I think we can leave with a light heart because we should be done. What about a primitive hexagonal net combined with 6mm? And if you think I'm going to try to draw that, you're crazy. But I can say quite simply and hopefully convince you that this looks like P31M plus P3M1 right on top of one another, because we've got mirror planes that are 30 degrees apart. So, as a schematic direction and an invitation to do this at home in your spare time, is to take what P6 looks like. And that's sixfold axes at the lattice points. Threefolds here. Twofolds here. And then, directly on top of this, place the mirrors and glides in this orientation, and the mirrors and glides in this orientation. So we will have mirror planes coming out like that, and glide planes in between all the parallel mirror planes. As I say, I'm heroic but not foolish. Yes. AUDIENCE: Can you just explain that derivation again? Your P3 [INAUDIBLE]. PROFESSOR: OK. The object of this additional bit of confusion is to put in the symbol the point group, which is 3m, and then tell you how the mirror plane in a point group is oriented relative to the edges of the cell. And we could call them anything we wanted to. We could call them P3M perpendicular, subscript perpendicular. Or P3M subscript parallel to indicate mirror plane parallel to the cell edge. And mirror plane perpendicular to the cell edge. Or we could call them some obscenity, which would be very descriptive, as well. But this is just a way of keeping track of what's parallel to the cell edge and what's perpendicular to the cell edge. So, this middle symbol is what's oriented perpendicular to the cell edge. And in this case, this is the mirror plane and that is perpendicular. All the mirror planes are perpendicular to the edges of the cell. And then parallel to the cell edge, there's no symmetry at all. So that's 1. 1 stands for no symmetry. In this case, the mirror plane is along the cell edge, so there's an M in the third position. And perpendicular to the cell edge, there's no symmetry at all-- no symmetry plane at all. 
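It is also easy to check numerically that the hexagonal net really does tolerate mirrors in both of these orientations, which is the whole reason 3m can be dropped in two inequivalent ways. Here is a minimal sketch (the basis vectors and coordinates are my own choice): reflect a patch of the equilateral net across a line along a cell edge, and across a line perpendicular to a cell edge, and confirm that both send lattice points to lattice points.

```python
import numpy as np
from itertools import product

# Sketch: an equilateral (hexagonal) net is taken into itself both by a
# mirror ALONG a cell edge and by a mirror PERPENDICULAR to it, which is
# why 3m can be added in two inequivalent orientations (P31M vs P3M1).

a1 = np.array([1.0, 0.0])
a2 = np.array([-0.5, np.sqrt(3) / 2])          # 120 degrees from a1

lattice = {tuple(np.round(m * a1 + n * a2, 6))
           for m, n in product(range(-4, 5), repeat=2)}

def reflect(p, direction):
    d = direction / np.linalg.norm(direction)
    return 2 * np.dot(p, d) * d - p

def maps_core_into_lattice(direction):
    # test a small core patch so the reflected points stay inside the
    # generated patch of lattice points
    core = [m * a1 + n * a2 for m, n in product(range(-2, 3), repeat=2)]
    return all(tuple(np.round(reflect(p, direction), 6)) in lattice for p in core)

print("mirror along a1 (parallel to a cell edge):",
      maps_core_into_lattice(a1))                       # True
print("mirror perpendicular to a1:               ",
      maps_core_into_lattice(np.array([0.0, 1.0])))     # True
```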
So it's purely arbitrary, but we need some mechanism for distinguishing which one we're talking about. That pretty much works as well as anything. And it's a minor perturbation of what we've done for the additional plane group symbols. All right. It's time for our midafternoon break. It's a nice place to quit, because you're probably feeling good. We've done this, and we can move on to something new. Actually, there's one more trick. And unless you were really clever, you wouldn't have thought of it. But there's another small family of plane groups that we can get through one particular device. I shouldn't have told you that, because you'll be less inclined to come back now. But we're almost through. And I think it'll be very clear what these additional possibilities are.
https://ocw.mit.edu/courses/8-06-quantum-physics-iii-spring-2018/8.06-spring-2018.zip
PROFESSOR: We spoke last time of the existence of symmetric states. And for that we were referring to states psi S that belong to the N-particle Hilbert space, V tensor N, where V is the vector space of states of a single particle. And we constructed some states. So within the construction, we postulated, there will be some states that [AUDIO OUT] that meant that any permutation of [AUDIO OUT] state gives back the symmetric state, for all alpha. So that meant the state was symmetric. Whatever you apply to it, any permutation, the state would be invariant. So these are special states, if they exist, in V tensor N. Then there would be anti-symmetric states. And we concluded that an anti-symmetric state-- with an A for anti-symmetric-- also lives in this tensor product. That state would react differently to the permutation operators. It would change at most by a sign: epsilon alpha times psi A. It would be an eigenstate of all those permutation operators, but with eigenvalue epsilon alpha, where epsilon alpha was equal to plus 1 if P alpha is an even permutation, or minus 1 if P alpha is an odd permutation. So whether this permutation is even or odd, we also discussed, depends on whether it's built with an even or odd number of transpositions. With transpositions being permutations in which one state is flipped for another state: within the N states, you pick two, and these two are flipped. That's a transposition. All permutations can be built through transpositions-- with transpositions-- and therefore you can tell from a permutation whether it's an even or an odd one, depending on whether it's built with an even number of transpositions or an odd number of transpositions. Now, the number of transpositions you need to build the permutation is not fixed. If we say it's even, that means it's even mod 2. So you might have a permutation that is built with 2 transpositions, and somebody else can write it with 4 transpositions or 6 transpositions. It just doesn't matter. So a few facts that we learned about these things are that all the P alphas are unitary operators. And we also learned that all transpositions are Hermitian operators. Now, transpositions, of course, are permutations. So they're Hermitian, and they're unitary as well. And finally, we learned that the number of even permutations is equal to the number of odd permutations in any permutation group of N objects. So these were some of the facts we learned last time already. We have now that symmetric states form a subspace. If you have a symmetric state, you can multiply it by a number, and it will still be symmetric. If you add two symmetric states, the sum will still be symmetric. So symmetric states form a subspace of the full vector space V tensor N. And anti-symmetric states also form a subspace of the vector space V tensor N. So let's write these facts. Symmetric states form the subspace called Sym N of V, inside V tensor N. Anti-symmetric states form the subspace Anti N of V, inside V tensor N. All right. So these are the states. But we have not learned how to build them, how to find them, and even more, what to do with them. So the main thing is, if these form subspaces, there should be a way to write the projector that takes you from the big space down to the subspace. So here is the claim that we have. Here are two operators. We'll call the first one S, a symmetrizer. And S will be built as 1 over N factorial times the sum over alpha of all the permutations P alpha. That's the definition. And then we'll have an anti-symmetrizer, A.
This is also 1 over N factorial, sum over alpha. Epsilon alpha P alpha. Where epsilon alpha, again, is that sine factor for each permutation. So here are two operators, and we're going to try to prove that these operators are orthogonal projectors that take you to the subspace's symmetric states and anti-symmetric states. So that is our first goal. Understanding that these operators do the job. And then we'll see what we can do with them. So there are several things that we have to understand about these operators. If they are projectors, they should square to themselves. So S times S should be equal to S. Remember, that's the main equation of a projector. A projector is P squared equals P. And the second thing, they should be Hermitian. So let's try each of them. So the first claim is that S and A are Hermitian. In particular, S dagger equal S. And A dagger equal A. So if you want to prove something like that, it's not completely obvious at first sight that those statements are true. Because we saw, for example, that transpositions are Hermitian operators. But the general permutation operator is not Hermitian. It's unitary. So it's not so obvious. You cannot just say each operator is Hermitian, and it just works out. It's a little more complicated than that. But it's not extremely more complicated than that. So let's think of the following statement. I claim that if you have the list of all the permutation operators. Put the list in front of you. All of them. And you apply Hermitian conjugation to that whole list, you get another list of operators. And it will be just the same list scrambled. But you will get the same list. It will contain all of them. And that is kind of obvious if you think about it a little more. You have here, for example, the list of all the permutations. And here, you apply Hermitian conjugation, HC or dagger. And you get another list. And this list is the same as this one although reordered. In a sense, it permutes the permutation operators, if you wish. And the reason is clear. If you have two operators here, and you apply-- they are different-- and you apply Hermitian conjugation, it should give two different operators here. Because if they were equal, you could apply again Hermitian operator conjugation. And you would say, oh, they're equal. But you assume they were different. So two different operators go to two different operators here. Moreover, any operator here-- you can call it O-- is really equal to O dagger. O dagger. Hermitian conjugation twice gives you back. So any operator here is the Hermitian conjugation. Hermitian conjugate of some operator there. So every operator here will end up at the end of an arrow coming here. Maybe those arrows, we could put them like they go in all directions. But the fact is that when you take Hermitian conjugation of all the list, you get all the list back. Therefore, that takes care of the symmetrizer. When I take the Hermitian conjugation of P alpha, I might get another P beta. But at the end of the day, the whole sum will become again the whole sum. So that proves the fact that this map is 1 to 1 The map Hermitian conjugation is 1 to 1. 1 and surjective meaning that every element is reached as well. Means that S dagger is equal to S. Because the list doesn't change. Here is a slightly more subtle point. This time, the list is going to change. But then maybe when this changes to another operator, maybe the epsilon is not the right epsilon. You would have to worry about that. But that's no worry either. 
That also works out, because of the following thing. If a permutation P alpha is even, it's built with an even number of transpositions. If you take the Hermitian conjugate of an even number of transpositions, you'll get an even number of transpositions in the other order. That's what Hermitian conjugation is. Therefore, Hermitian conjugation doesn't change the fact that the permutation is even or is odd. So P alpha has a sign factor, epsilon of P alpha. Let's write it more explicitly. And P alpha dagger has a sign factor, epsilon of P alpha dagger. These two are the same. If P alpha is even, P alpha dagger is even. If P alpha is odd, P alpha dagger is odd. Therefore, when you take the Hermitian conjugate of this thing, think of the Hermitian conjugate of this sum. You have that A dagger would be 1 over N factorial, sum over alpha, epsilon alpha, P alpha dagger, which is another permutation. And for this permutation, this sign factor is the correct sign factor, because the sign factor for P alpha dagger is the same as the sign factor for P alpha. Since we get the whole sum by the statement before, A is also Hermitian. So OK. Not completely obvious, but that's a fact. If you think more abstractly, this is something you could say for any group. If you have a group that has all kinds of elements-- many of you have written about groups in your essays-- then if you take every group element and you replace it by its inverse, you get the same list of group elements. It's just scrambled. And remember, since these permutations are unitary, taking the Hermitian conjugate is the same thing as taking an inverse. So when you take a group, and you take the inverse of every element, you get back the list of the elements of the group. And that's what's happening here.
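All of these statements are finite-dimensional and can be checked directly on a small example. Here is a sketch for N = 3 particles, each with a 3-dimensional state space; the basis ordering and the convention for how P alpha shuffles the tensor factors are my own choices, not anything from the lecture.

```python
import numpy as np
from itertools import permutations

# Small numerical check of the statements above: N = 3 particles, each with
# a d = 3 dimensional state space, so V tensor N has dimension 27.

d, N = 3, 3
dim = d ** N

def perm_operator(perm):
    """Matrix that shuffles the tensor factors of |i1 i2 i3> according to perm."""
    P = np.zeros((dim, dim))
    for idx in np.ndindex(*(d,) * N):
        new_idx = tuple(idx[perm[k]] for k in range(N))
        P[np.ravel_multi_index(new_idx, (d,) * N),
          np.ravel_multi_index(idx, (d,) * N)] = 1.0
    return P

def parity(perm):
    """+1 for an even permutation, -1 for an odd one."""
    sign, seen = 1, list(perm)
    for i in range(len(seen)):
        while seen[i] != i:
            j = seen[i]
            seen[i], seen[j] = seen[j], seen[i]
            sign = -sign
    return sign

perms = list(permutations(range(N)))
# Hermitian conjugation is the inverse here, and it preserves even/odd:
assert all(parity(p) == parity(tuple(np.argsort(p))) for p in perms)

ops = [(perm_operator(p), parity(p)) for p in perms]
S = sum(P for P, _ in ops) / len(ops)          # symmetrizer
A = sum(eps * P for P, eps in ops) / len(ops)  # anti-symmetrizer

assert np.allclose(S @ S, S) and np.allclose(A @ A, A)   # they are projectors
assert np.allclose(S, S.T) and np.allclose(A, A.T)       # and Hermitian (real here)
assert np.allclose(S @ A, 0)                             # onto orthogonal subspaces
print("dim Sym:", round(np.trace(S)), " dim Anti:", round(np.trace(A)))
```

The two traces come out to 10 and 1, which are indeed the dimensions of the symmetric and anti-symmetric subspaces for d = 3 and N = 3.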
https://ocw.mit.edu/courses/8-591j-systems-biology-fall-2014/8.591j-fall-2014.zip
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality, educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Today, what we want to do is finish off our discussion of Lotka-Volterra competition models. So starting with this idea of the two species interacting competitively but then moving on to try to think about the general properties of Lotka-Volterra systems. In particular, when you have more species, the kinds of dynamics that you can get. Also, we're going to talk about these non-transitive interactions, which are the rock-paper-scissors type interactions that may facilitate the maintenance of diversity in populations or ecosystems, in particular, in the presence of some sort of spatial structure. And so we'll talk both about this demonstration of rock-paper-scissors type interactions in the context of male mating strategies in lizards, this paper that was by Curt Lively. And then we'll talk about another paper in the microbial realm, where they showed that there are rock-paper-scissors type interactions in the context of colicin production, toxin production in the context of bacteria. Then, at the end, we'll talk about these population waves. There was a rather mathematical reading that I had originally proposed, but then we kind of switched things up a bit, so that you could, instead, read that rock-paper-scissors paper-- that is confusing-- that I think is maybe more fun. But I'll tell you kind of this basic idea of the population waves, what happens, if you have a combination of some growth process together with some effective diffusion, you can get these population waves that correspond to this process of range expansion, where a population expands into new territory. I just started by putting up what we had from the last class. So this is the two species Lotka-Volterra competition model. So what we found was that, for this to be competition, for the species to be kind of bad for each other, that that corresponds to the betas being positive. Now, what we found is there are four cases in terms of the outcome of this two species interaction. And we wanted to, at least, try to get some sense of why that was and what the trajectories might look like. If you look at and N1 and N2, we can draw these nullclines and then get a sense of where the trajectories are going to go. Now, the basic outcome of this two species Lotka-Volterra competition model are really exactly the same as the possible outcomes when we're thinking about frequency dependent selection, where we could get, in this case, that species 1 dominates or species 2 dominates, 1 or 2. But then we could also get coexistence or bi-stability. And indeed, there is, in general, a mapping from the Lotka-Volterra kind of approach here and the approach that was kind of in Martin Nonwak's book of thinking about frequency dependent selection in this population. And so then it's not a surprise that you get the same four outcomes between these two situations. And I think that this is also highlighting some very interesting and deep connections between evolution, which is changes in, say, allele frequency in a population of a single species, over time, and then some of these ecological kind of processes, where you're really thinking about these as different species. 
Now, of course, in the case of the evolutionary dynamics that we analyze Martin's book, though, those were the evolution of different clonal populations in asexually reproducing populations. Do you guys remember what I'm talking about? All right. So I just want to draw a couple of these sorts of diagrams. I'm not going to draw out all four of them, because it does take some time. But hopefully, we can reconstruct where we were. The case that we were analyzing before, the N1 dot equals 0. We're going to have it as a dashed line. And N2 is going to be-- well, maybe we'll try to use thick chalk. Oh, wow, I missed. These are thick lines. So we'll draw these nullclines over here. So the N1 dot-- and indeed, one of the comments is that it would be nice to draw where the nullclines are on the axes as well. And indeed, we can do that. We aim to please. So the dashed lines correspond to N1 dot being to 0. So here's N1, N2. So N1 equal to 0 corresponds to this thing. And then we have this other guy that's a line, here. Now what we found is that, if N2 is equal to 0, this intersects at K1, whereas the intersection over here is at K1 divided by beta 12. Now, we have our other nullclines that correspond to N2 dot equal to 0. Now one of those lines is indeed going to be along here. And the other line can fall-- there are four different possibilities for how we might draw it in relation to this N1 dot equal to 0 nullcline. And depending on the orientation of that, we'll end up getting these four different possible outcomes of 1 dominating, irrespective of the initial conditions, 2 dominating, irrespective of initial conditions, or coexistence or bi-stability. So bi-stability is the only-- so if you start with a finite number of each of these two species, then bi-stability is the only case where the outcome depends on the starting condition. So, of course, if you start out without one of the two species, then you won't get creation of those species, right? Because the only way to get creation of species 1 is to have some species 1 individual in this model. So we're going to get another line, here, corresponding to N2 dot equal to 0. And I think that the one that we were trying to analyze was with the solid line underneath. Is that consistent with people's notes? So we can draw some other line here. It doesn't have to the same slope. Now it's good to be clear about where these things are going to fall. So K2 is this point. And now K2 divided by beta 21 is over here. Now recall that beta 12 is telling us about how much species 2 is reducing the growth of species 1. Whereas beta 21 is how much species 1 is reducing the growth of species 2. Now, everything comes down to the relative ordering of these two quantities and these two quantities. And since there are two possibilities on each, that gives us the four possible outcomes. And broadly, the idea here is just that, if the species are weakly interfering with each other, then what should happen? Yes, then you should get coexistence. Coexistence is when the betas are small. Of course, this is a concrete model, so you have to define what you mean by small. And indeed, small here ends up being relative to the ratios of these carrying capacities. 
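To make the four cases concrete, here is a small integration sketch, writing the equations in the form the nullcline intercepts above imply, dN1/dt = r1 N1 (1 - (N1 + beta12 N2)/K1) and likewise for N2. The parameter values are illustrative, not from the lecture, and the carrying capacities are set to 1 so that a "small" or "large" beta just means smaller or larger than 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the two-species competition model in the form used here
# (nullcline intercepts at K1, K1/beta12 and K2, K2/beta21); the numbers
# below are illustrative, not from the lecture.

def lv_competition(t, N, r1, r2, K1, K2, b12, b21):
    N1, N2 = N
    dN1 = r1 * N1 * (1 - (N1 + b12 * N2) / K1)
    dN2 = r2 * N2 * (1 - (N2 + b21 * N1) / K2)
    return [dN1, dN2]

K1 = K2 = 1.0
cases = {
    "coexistence    (both betas weak)":     (0.5, 0.5),
    "species 1 wins (2 hurts 1 weakly)":    (0.5, 1.5),
    "species 2 wins (1 hurts 2 weakly)":    (1.5, 0.5),
    "bistability    (both betas strong)":   (1.5, 1.5),
}
for label, (b12, b21) in cases.items():
    sol = solve_ivp(lv_competition, (0, 200), [0.05, 0.06],
                    args=(1.0, 1.0, K1, K2, b12, b21), rtol=1e-8)
    print(f"{label}:  N1 -> {sol.y[0, -1]:.2f},  N2 -> {sol.y[1, -1]:.2f}")
```

In the bistable case the printed winner depends on the initial condition, which is exactly the point of that case.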
If the carrying capacities are just equal to, say, 1, then that's saying that the betas-- sorry, if the carrying capacities are the same, then the simple way to think about this is just whether the betas are larger or smaller than 1, whether each species interferes with the other species more than a member of that other species. So if carrying capacities are the same, that's what demarcates the different zones. So what we want to do is take this sort of diagram and try to figure out where will the trajectories be on this diagram? Now, it's always good to locate the fixed points. The fixed points of the system are? And somebody, words? How would we define fixed points in this system? AUDIENCE: [INAUDIBLE]. PROFESSOR: What's that? Both lines intersect, right? So when the dashed line intersects the solid lines, right? So we have one such fixed point here. We have another fixed point here and another fixed point here. Have we figured out the stability of those fixed points? No. What's the stability of this fixed point? It's unstable, right? Because we've already, previously assumed that these r's are greater than 0. We're assuming that the species would be able to survive on their own. And that's actually true for both species. So this thing is unstable kind of in both directions, right? And what are the eigenvectors associated with this fixed point? On the count of three, draw, use your arms like the hands of a clock to indicate the directions of the eigenvectors. All right, ready, three, two, one. All right, all right. There's no diagonals, right? Of course, you could also. I was waiting for somebody to be kind of obnoxious and point in the other direction. A surprisingly not obnoxious class we have. So this is just saying that, if you start out just a little bit of one species, you'll just stay with that species. It makes sense. Now, what do these lines tell us about the directions of the trajectories or the orientations of the trajectories? Why did I draw them? Yes? AUDIENCE: These trajectories are-- these are lines are nonlines. And one of the derivatives [INAUDIBLE]. PROFESSOR: One of the derivatives, all right. And in particular, let's look at this line here. Are the trajectories? Again, we're going to do our arms to indicate the orientation of the trajectories. In particular, there's a trajectory right here. What direction will that trajectory be pointing? There's only going to be one arm. Ready, three, two, one. All right, we got a lot of people not voting. All right, that means we need to turn our neighbors and discuss. If you didn't vote, it means, I think, not following what we're talking about. Turn to somebody. If your neighbor agrees with you about which direction you should be pointing your arm, then try to figure out whether we know which orientation the arrow should actually be in. [SIDE DISCUSSIONS] PROFESSOR: We'll figure this out. [SIDE DISCUSSIONS] PROFESSOR: Let's go ahead and reconvene, because it seems like some people are being quiet. But I'm not sure if that's because they think they know what's going on or they're just very much unhappy with the situation. Let me see a fresh voting. All right, ready, three, two, one. Definitely, it's going up or down, right? Because the definition of this dashed line is that N1 dot is equal to 0. We don't know what N2 dot is. We'll figure that out in a moment. But what we know is that the trajectories we should be something, lines here, either up or down. And we're going to find that they're down but here. 
Now, on this solid line, quickly, the orientation of trajectories. Ready, three, two, one. All right, perfect. So we know that N2 dot is 0 here. Now, an actual direction of the trajectory, at this point, right here. Ready, three, two, one. OK, good. Because in the absence of N2, we know that species N1 should just come to carrying capacity K1. Same thing over here, we should get arrows coming down. So indeed, what you can see is that the arrows are coming down here. That means they actually do have to come down immediately to the right of them. Whereas over here, the trajectory is point right here, so we can kind of figure out that-- so they're coming here. And from far away, they're coming here. And you can see that they have to come across here. And then they're going to come into this point. So this is going to be our stable fixed point. I'll color it in to indicate that. From here, they're going to curve around though. So if you start out right here, you kind of do this business. From here, we come in. Are these lines allowed to cross each other? No. Now, indeed, you could actually see here, what is the direction of the other eigenvector at this point? Using a hand, arm, ready, three, two, one. Right, it's kind of something in there. Because there's all these trajectories are coming here, and then they approach this fixed point from this point. Because here, the other eigenvector is still horizontal, right? Because we know, if we don't have N2, then we just have N1. But this other eigenvector is not purely straight up. The other one is along here, because the trajectories are coming in along that. Does that make sense? So in this case, we should just be clear. We are in a situation where K1 over beta 12 is greater than K2. Another way of writing that is that beta 12 is less than K1 over K2. So that means species 2 does not strongly harm species 1. Yet, we know that K1 is greater than K2 divided by beta 21. So that means that beta 21 is greater than K2 over K1. This is telling us that species 1 is strongly harming species 2. And that makes sense. In that case, species 1 wins. Does that outcome change if we change the r values, the division rates? A is yes. B is no. I'm going to give you 10 seconds. What I just said, if I change r's does that change the outcome? Ready, three, two, one. So we've got a majority of B. So the answer is no. This statement that, in this situation, species 1 dominates, that's independent of the r's. And what you see is the conditions, here, only depend on the what's in here. So the actual shape of those trajectories will depend upon the r's. So if it's the case that species 2 is just a faster grower than species 1, then you might end up with a situation where, if you start with a little bit of each, you might come way up here-- well, no. I guess you might come close to this fixed point. So you might think that it really looks like species 2 is about to win. But eventually, they'll curve over and come back. And indeed, in the Strogatz book, one of the chapters that I recommended, he has an example of sheeps and rabbits, the idea is that they are competing species, maybe eating similar foods. I don't know if that's true. But the rabbits can divide more rapidly. So here, the idea would be, well, if the sheep can really displace the rabbits, because it's just bigger and push them aside, then what can happen is that the rabbits first divide rapidly. 
It looks like they're going to take over, but, over time, eventually, the sheep population kind of grow up, and they start displacing rabbits. And you end up excluding the rabbits. This is this phenomenon of competitive exclusion. And depending upon the context-- OK, that's an e-- this is either more or less maybe formally phrased. But the idea is that, if there are two species that are too similar, and in particular, if they're somehow perfect competitors, they're really just trying to eat the same thing, then you should only end up with one of the two species surviving. And that's the kind of idea here. Although, I think, you can argue about the mapping I think. So this is one of the four outcomes. And of course, it takes 15 minutes to go through each of these examples. So we're not going to go through all of them. But you should be able to, for a given a combination of betas and K's, be able to figure out, using some combination of algebra, derivatives, fixed point stability analyses, and drawing of things, well, you should be able to do all of the above. Are there any questions about where we are here? AUDIENCE: I guess that none of this holds in the stochastic [INAUDIBLE]. PROFESSOR: Yeah, that's an interesting question. AUDIENCE: For example, in that case, if r2 is much bigger, then you're going get almost, very close to like K2, and then maybe just the last individual [INAUDIBLE] and then you just add that fixed point. PROFESSOR: Right. So it's certainly the case that, once you have stochastic extinction-- the thing is that, you would probably be most susceptible to stochastic extinction. In the case you were talking about, you would be most susceptible to stochastic extinction when you're around here, actually. These trajectories are still always moving up in N1 space. I think I know what you're saying. We're going to maybe zoom in onto this N1, N2. Because we have this unstable fixed point here. And the claim was that, if you start out over here, then the trajectory might look something like this. And you'd say, oh, well, you might go extinct here. AUDIENCE: The idea was just the statement that [INAUDIBLE] r1, r2 are very dynamical [INAUDIBLE]. PROFESSOR: It's true that, I guess, things change. There are a number of things you might want to say. First of all, this is a purely continuous and deterministic description of the setting. It allows for fractional individuals. There's no shocks or perturbations that you have to worry about. I guess the only thing I wanted to say, in regards to your question, is that I think the stochastic extinction will not be dominated. We're talking about stochastic extinction of species 1. It will not be dominated due to a stochastic extinction here but stochastic extinction at the beginning. I haven't drawn this very, very well, but, in this case, I think these trajectories are always are going up in numbers of the 1 species, which means that you're most likely to experience stochastic extinction at the beginning. Yeah? AUDIENCE: r1 and r2 should still really mess with stochastic extinction a lot, right? Because the larger r1 and r2 are the quicker we get out of the regime of stochastic extinction. PROFESSOR: That's all true. We have to be careful about many of these things. In particular, if you go and you do a stochastic simulation of this, so let's say you plug this thing into a Gillespie simulation, can you get stochastic extinction? A yes, or B no, ready, three, two, one. No. So the answer is no but why? AUDIENCE: It depends. 
If you take r combination [INAUDIBLE]. PROFESSOR: Exactly. So right now, as written, there's no death. Although, I guess you could say that this a death. There's a question of how you partition things. So in principle, this is the difference between the growth and the death rate. But the most straightforward way of doing such a simulation is that you put this whole thing in here as a rate for birth. Somebody is going to say, ah, that's not how I was going to do the simulation, right? Well, that's probably how you were going to do it. AUDIENCE: That's a terrible way. PROFESSOR: But if you did that, if there's only birth, then you can't get stochastic extinction, obviously. But in general-- and this is one of things we spend our time a lot time thinking about in this semester-- there are multiple ways of doing kind of a stochastic simulation from a deterministic equation. And this thing, you could be more explicit and say, oh, this thing is really a B2 minus a D2, so a birth rate minus a death rate for example. And form the standpoint of a differential equation, it doesn't make any difference. But if you do the Fokker-Planck approximation or you do a simulation or whatnot, then these lead to different things. In particular, the rate of, say, stochastic extinction here increases as B and D increase, because that leads to more of these fluctuations. So there are many things that are different once you include the stochastic dynamics. But it's always good to get a base sense of the dynamics from the standpoint of just deterministic differential equations before you think too much about the stochastic dynamics, because otherwise you get overwhelmed quickly. Any other questions about that? What I want to do is just spend a little bit of time to think about the more generalized case of more species. And in particular, we could convert this set of equations. We can normalize by each of their carrying capacities. And we can convert a set of equations to look something like this. So now we just have Xi dot is equal to-- there's some ri Xi, 1 minus. And what we can actually do is normalize everything so that it's just written like this. And normally, what we assume is that we've done things such that alpha i i is equal to 1 for all i. So this is just saying that this is the normalization such that each species inhibits itself in a way that it's just give a simple logistic growth. And it's going to be logistic growth with a carrying capacity equal to 1. And then once you've done that, then a species inhibits itself with alpha i i 1. And then everything comes down to what this alpha matrix is. And I would say that, as always, it's really very, very important that you can go back and forth between the non-dimensionalized versions of equations and the base version. This was something that, on exam number two, there were quite a lot of problems where we asked about, how does the parameter change when you change the strength of expression or this or that, right? So this is something that I think is very important. Because this comes up a lot. So in this case, the alpha matrix tells you kind of everything. And then there are a number of things that, well, the mathematicians have proven about these sorts of equations. And I just want to point you towards some of those things to think about. So first, I'm considering a case where, again, alpha i j is greater than 0, again, for all i and ji, or greater than or equal to 0. So some of them can be 0. But the interactions, when they exist, are competitive. 
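Going back and forth between the dimensional and the normalized forms is mechanical, and worth scripting once. Here is a sketch, with made-up numbers, of the substitution x_i = N_i / K_i, assuming the dimensional form dN_i/dt = r_i N_i (1 - sum_j beta_ij N_j / K_i) with beta_ii = 1; it turns beta_ij into alpha_ij = beta_ij K_j / K_i, and alpha_ii = 1 comes out automatically.

```python
import numpy as np

# Sketch of the normalization being described: go from (r_i, K_i, beta_ij)
# to the dimensionless form
#     x_i' = r_i x_i (1 - sum_j alpha_ij x_j),   alpha_ii = 1,
# with x_i = N_i / K_i.  The particular numbers are made up for illustration.

r    = np.array([1.0, 0.7, 1.3])
K    = np.array([2.0, 1.0, 0.5])
beta = np.array([[1.0, 0.6, 0.3],      # beta_ii = 1 by convention here
                 [0.9, 1.0, 0.4],
                 [0.2, 1.1, 1.0]])

# alpha_ij = beta_ij * K_j / K_i   (so alpha_ii = 1 automatically)
alpha = beta * K[np.newaxis, :] / K[:, np.newaxis]
print(alpha)

def lv_normalized(x):
    """x_i' for the nondimensionalized system; x_i = N_i / K_i."""
    return r * x * (1.0 - alpha @ x)

# sanity check: the two forms give the same dynamics
N = np.array([0.4, 0.3, 0.2])
dN_dimensional = r * N * (1.0 - (beta @ N) / K)
assert np.allclose(dN_dimensional / K, lv_normalized(N / K))
```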
So first, if you start out in the region where all of the species start, between 0 and 1, then you stay in that region. If this is true initially, then it will be true forever. That's good. Negative abundances, you maybe were not so worried about. But it's not as obvious that you can't get above one, because there's nothing saying that, in principle, you couldn't have started there, right? Or in principle, you couldn't have gotten there? And certainly, it's physical to think about starting outside of that region, because we often talk about carrying capacities as something that's like. If you think about, this is an Xi as a function of time. So in these situations, it's not crazy to think about something above the carrying capacity. But this mathematical statement about the Latka-Volterra framework is that, if you start out with everything below its carrying capacity, then everything will always stay there. And all the dynamics occur-- this is for i equal to 1 to N Do I want to use a big N or little n? Does it matter? So this is big N, different species. The dynamics occur on an N minus one manifold. If we have many mathematicians, they can explain what this technically means. But basically, what it's saying is that there's going to be some N minus 1 dimensional surface or volume or whatnot where all the dynamics are going end up being on. And what that means is that, in particular, a limit cycle requires two dimensions, requires 2D, which means that to get a limit cycle requires-- so then N has to be greater than or equal to 3 to get a limit cycle. Can you see what I'm saying? Whereas chaos requires 3D, and that means that N has to be greater than or equal to 4. And indeed, you can get limit cycles with N equal to 3. And you get chaos with N equal to 4. There's another theorem that says that any dynamics are possible for N equal to 5 or larger. And chaos seems as much as I would want to ask for. But there's, apparently, a 4D torus, something that is different. This is something to think about in your spare time. Well, you know, like a lot of things. And this is worth spending a moment talking about. First of all, so here's a two dimensional thing. The special thing, as you might remember, about continuous dynamics is that these trajectories are not allowed to cross each other, right? Which means that you just can't draw a chaotic trajectory, because you're going to have to cross yourself again. And that's also why a limit cycle requires two, because we originally tried to draw a limit cycle in one dimension, and it didn't work. Do you remember this? And the same thing with-- you can imagine, here's a nice limit cycle. And that could be stable. Oh, and incidentally, you were right about. I was getting confused about the Poincare-Bendixson theorem. Because I think there are these funny things. It wasn't. OK, well, you know, all these people that complain about what I say. Because if you have the trajectories coming in, then what I said was that, if you had an unstable fixed point coming out, then you could draw this region, then you could be guaranteed that there was a limit cycle in there. But you cannot, just based on what I had said, say that, if it's a stable fixed point, then you won't get a limit cycle. I mean in some other cases you can prove that it doesn't work. Particularly, like in the Latka-Volterra, there in one of the crossings, it's now going to be a stable fixed point. 
In that situation, you can prove, mathematically, that you can never get a limit cycle oscillation because of some divergence condition of some function and so forth. But in the example that I was telling you about, in the predator-prey, it was true in that particular case that, when that thing is stable, you don't get oscillations. And when it's unstable, you do. But both directions do not follow from the Poincare-Bendixson. This is a limit cycle. Now, in a chaotic situation, you have to be able to do something where you kind of come around and then, every now and then, you kind go over here. And then you go around and around. Something crazy happens. But you can see that this situation doesn't work if it's only 2D, right? Do you see why I'm saying that? AUDIENCE: Yeah. PROFESSOR: You don't? You don't agree. It's just, if it's 2D, then these lines cannot cross. So you need to a third dimension, so that they can just shoot above and below each other. AUDIENCE: And what's with the definition? Because you can come up with trajectories, like you can have trajectories that asymptotically approach the limit cycle but that never cross themselves, right? PROFESSOR: Yeah. AUDIENCE: Those are obviously not chaotic, but the y-- PROFESSOR: Right. So this is not actually class in non-linear dynamics, so we're not. So normally, you characterize this Lyapunov exponent, which tells you about how the phase space is kind of growing or shrinking. In the case that you were just talking about, that's a case where all the trajectories come together. Because in this case where this trajectory comes into the limit cycle, if you draw like a blob of phase space, it's going to come together over time. Whereas in a chaotic system, if you have a blob of phase space, it's going to diverge and fold and do all the craziness. Yes? AUDIENCE: So you said [INAUDIBLE] essentially you need n greater than or equal to 3. Is that just for this-- PROFESSOR: Yeah. Yes. So this is in this Latka-Volterra. Because, in general, you can get a limit cycle with two equations. And that's with the 2D. And in general, you get a chaos with three equations or three variables. But in the Latka-Volterra model, it requires three and four, respectively. That doesn't mean that every four species Latka-Volterra will have chaos. But it means that it's possible to get one. And I think one thing that's just rather striking is that this really is kind of the simplest, possible model you can ever write down describing how species might interact or variables might interact or whatnot. And so it's really kind of incredible to me that you get all these crazy dynamics. Yes? AUDIENCE: I was going to ask about the n minus 1 thing. This dynamic [INAUDIBLE] n minus 1 [INAUDIBLE]. PROFESSOR: Yeah. AUDIENCE: Over here it just looks to me like-- we have n equals 2 and the dynamics are occuring on a plane, on a two-dimensional plane. PROFESSOR: Yeah. The transience or whatever can occur. It requires the full N dimensions to describe. Because you can start-- to describe all of the trajectories, clearly requires all dimensions, right? Because anywhere you start, you have to have specify it by five dimensions. But the dynamics, as far as like-- and here, there aren't even any dynamics. I think that you go to a point. AUDIENCE: What do you mean by dynamics? [INAUDIBLE] PROFESSOR: Yeah, I agree that there is a-- so in the case of the limit cycles and so forth, this is really like, at steady state, it's doing something. 
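For any particular alpha matrix, the fixed-point part of this story can also be checked directly rather than by general theorems: the interior fixed point solves alpha x* = 1, and at that point the Jacobian is J_ij = -r_i x_i* alpha_ij, so its eigenvalues decide the local stability. A sketch with an illustrative three-species alpha (not from the lecture):

```python
import numpy as np

# Locate the interior fixed point of  x_i' = r_i x_i (1 - sum_j alpha_ij x_j)
# and check its linear stability from the Jacobian eigenvalues.
# The alpha matrix here is illustrative.

r = np.array([1.0, 1.0, 1.0])
alpha = np.array([[1.0, 0.4, 0.6],
                  [0.6, 1.0, 0.4],
                  [0.4, 0.6, 1.0]])

x_star = np.linalg.solve(alpha, np.ones(3))     # interior fixed point: alpha x* = 1
# Jacobian at the interior fixed point: J_ij = -r_i * x*_i * alpha_ij
J = -(r * x_star)[:, np.newaxis] * alpha
eigs = np.linalg.eigvals(J)
print("x* =", np.round(x_star, 3))
print("Jacobian eigenvalues:", np.round(eigs, 3))
print("locally stable:", bool(np.all(eigs.real < 0)))
```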
So here, steady state, it only goes to the fixed points. But if you have a three dimensional system, then you can have it steady state, the trajectories on like a plane. Yeah? AUDIENCE: Is it because there's like some concept of [INAUDIBLE] that can be [INAUDIBLE] for the system? PROFESSOR: I don't think that that-- I'm not aware of that being the case. But I'm hesitant to say that it's not true. I want to switch gears a little bit and think about three species interactions and, in particular the three species interactions when they're non-transitive. Because this is thought to be potentially a significant stabilizer for diversity in populations. So this is non-transitive interactions. And we often just say rock-paper-scissors. Is everybody from a cultural that plays rock-paper-scissors? Yes? AUDIENCE: Yeah. PROFESSOR: So this is a true, human universal. Although I think that what we call it does vary and so forth. You know how sometimes the linguists try to find this ur-language that our ancestors spoke 50,000 years ago or whatever? I think that you probably do something similar with rock-paper-scissors, because it seems to be a pretty common theme. But the idea here is that you have-- wait, which direction does it go? So paper beats rock, scissors beats paper, but rock beats the scissors. So you can imagine that this kind of dynamic can be captured in the Latka-Volterra type framework. So you just have to set up these betas or alphas so that this is true. And again, the way to think about this would be-- the simplest thing is to think about it as dominance. So if you have the rock species and the scissor species together, then the rock species will drive the scissor species extinct and so forth. Now, this is the kind of situation that, in principle, can lead to very complicated dynamics in multi-species ecosystems that, at least in many models, can stabilize the coexistence of multiple species via some of these complex, crazy dynamics that we were just talking about. I would say that, as a mechanism for the stabilization of diversity, I don't know how convincing that is in terms of being what explains why it is there's so much diversity outside, when you look outside the window. But, at least, it's in principle true. And one of the topics of these papers-- certainly the one you just read-- is that rock-paper-scissors, i.e. non-transitive interactions on their own, may not be sufficient, but, in the presence of spatial structure, maybe it does allow for long-term coexistence of these species or strategies and so forth. And again, it's not always clear in these situations whether you're thinking about ecology, where these are different species, or you're thinking about evolution, where these are different genotypes. And indeed, in the Kerr paper that you read, these are all E. coli. So it's all one species, it's just that they have different mutations. So that's really rock-paper-scissors in the context of evolution. Whereas in the male mating strategies paper by Curt Lively, that's more of an ecological context, but it still is evolution within the species. Because these mating strategies are heritable. I did tell you the base idea of this lizard mating strategy business? Or did I never say anything about that? Not really? OK. For some reason, I thought that I'd alluded to it. Well, let me explain it to you. I think it's kind of an incredible paper. So this is a paper by my Curt Lively. So it's Sinervo and Lively. It was Nature in '96. 
And it's called "The Rock-Paper-Scissors Game and the Evolution of Alternative Male Strategies." So what was known is that there are many examples of alternative mating strategies in males. In particular, it's rather common that there are, what you might call, territorial males and what they often call sneaker males. This is observed in fish and various land animals. And in many of the cases, these sneaker males really do look phenotypically like females. And this has been measured using various kinds of observational, experimental approaches. There's often what you call negative frequency dependent selection between these strategies. The sneakers can often, when rare, spread in population of the territorial and vice versa. But this was, at the least the first case I'm aware of where these ideas had been demonstrated, that there were really three strategies. And the three strategies implemented, one of these rock-paper-scissors interactions. I want to maybe make a little more space. If you guys are available after class, I encourage you to come up and look at the paper, because they actually have pictures of them, so you can identify the sneaker males. Because they actually look different based on the coloring of their throat. So these are lizards that live in the mountains up outside of the Bay Area in California, in Merced County. They're side-blotched lizards. I don't know anything about that. And what they showed was that there's these guys with orange throats that are kind of aggressive. And they defend a very large territory with a large number of females. And they fight off any males that come. Then there are the dark blue. Sorry, I should have put that over here. So there are other lizards here that are dark blue throats. And these guys are less aggressive with smaller territories. So you can guess, if it were just the orange-throated guys and the dark-- and these are genetically encoded strategies, in the sense that they do seem to be passed on, and it's determined by the genes that the male inherits-- less aggressive and small territory. Can you guess which one wins between these two, against each other? What's that? AUDIENCE: The two aggressive ones PROFESSOR: Yeah, but if you just have the two aggressive ones? AUDIENCE: Yeah. PROFESSOR: This guy's going to beat this one, right? That's just because this aggressive male has a larger territory, and they pass on more genes than the less aggressive ones. However, what they found is that there's a third mating strategy here, which are these sneaker males. And they indeed look like the females. They have these yellow stripes on their throats. They look like the female, and they have no territory. Sneaker males, so they don't defend any territory. Instead, they just sneak into the territory of the other males and try to mate with the females in that territory. And what they show is that, over the course of seven years, in the mountains of Merced, they see that the frequency of these things goes through an entire oscillation. So they see just over one period of this oscillation. And their argument is that, although it's true that the aggressive males can outcompete the less aggressive ones, the sneakers actually outcompete these aggressive ones, basically, because these aggressive males, it's just too large of a territory for them to effectively defend it. So the sneakers can actually outcompete the aggressive males. Yet the less aggressive ones can outcompete the sneakers, because they are trying to defend a smaller territory. 
So they basically measured the frequency of these strategies and also the number of females in the different territories, from 1990 to '96 or so, and saw that this thing kind of went around in some circle in this frequency space of alternative male strategies. Kind of an incredible paper. So the idea here is that this is a situation where it is non-transitive. So there's this rock-paper-scissors type dynamic. And it's also spatial, because these lizards are in some particular place, they have some territory and so forth. And in this other paper, by Benjamin Kerr, what he wanted to do was try to understand something about what the role of that spatial structure might be in maintaining the diversity. So in this case, the argument was that, if you go out and you look at these lizards, the reason that you see all three of these strategies is because of this rock-paper-scissors dynamic. If one of the strategies becomes more rare, it's going to have an advantage relative to the others. It's going to spread. And what Benjamin Kerr wanted to explore is whether that statement could be tested-- whether you can somehow distinguish whether the spatial component is important or not. Of course, it's hard to do that in the case of the lizards, but he was able to implement this in the case of some chemical warfare behavior in bacteria. So this is a paper by Kerr, and it's also a Nature paper, from 2002. This paper is called "Local Dispersal Promotes Biodiversity in a Real-Life Game of Rock-Paper-Scissors." So you guys read it. So what were the three strategies? AUDIENCE: [INAUDIBLE]. PROFESSOR: So C is the colicin producers. Incidentally, just this colicin production is kind of an incredible phenomenon already. So these are proteins that are produced that bind to other bacteria and can often make pores in the membrane and kill them. But the amazing thing about colicin production is that, in E. coli and other gram negative bacteria, the only way that these colicins are released is by cell lysis. So it's not just that the cell is engaging in some costly behavior in order to make this protein that will kill other cells. The only way that the toxin is released is by the cell actually bursting open. So it's clear then that this has to be supported by some kind of group level or kin-selection kind of argument. This can never be good for the individual, because the individual has had to spill its guts in order to harm other cells. So the only way that this can be supported is by inhibiting the growth of competitors and thereby allowing your kin mates, or other cells that also have this plasmid and, therefore, also the immunity protein, to grow better. So this is a very neat example of an altruistic, kind of warlike behavior. So this is one of the strategies. What were the other two? AUDIENCE: [INAUDIBLE]. PROFESSOR: Resistant. So there's R, which is resistant. And between C and R, who wins? AUDIENCE: R. PROFESSOR: R, that's right. There might be some costs associated with being resistant, but the cost is not as large as actually bursting open. What's the last one? AUDIENCE: [INAUDIBLE] sensitive. PROFESSOR: Sensitive, perfect. So this is just the normal bacteria. And the argument is that there's often a cost to being resistant, which means that sensitive bacteria will outcompete resistant. Yet, if it's just the sensitive and the colicinogenic strains, then this strain can beat this strain. So this is the idea of the rock-paper-scissors game in this system.
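Before getting to what actually happened in their experiments, it may help to see why cyclic dominance by itself is not a guarantee of coexistence. Here is a toy well-mixed version of a C, R, S triple in the normalized competitive Lotka-Volterra form, with illustrative May-Leonard-style numbers rather than anything fitted to the real strains: each type is hurt only weakly by the type it beats and strongly by the type that beats it.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy well-mixed rock-paper-scissors triple (think C, R, S) in the
# normalized competitive form x' = r x (1 - alpha x).  The alpha values are
# illustrative (May-Leonard-type cyclic dominance), not real strain data.
a, b = 0.8, 1.6                 # hit weakly (a < 1) by the type you beat,
alpha = np.array([[1, a, b],    # strongly (b > 1) by the type that beats you
                  [b, 1, a],
                  [a, b, 1]])
r = np.ones(3)

def rps(t, x):
    return r * x * (1 - alpha @ x)

sol = solve_ivp(rps, (0, 300), [0.33, 0.32, 0.35], dense_output=True,
                rtol=1e-9, atol=1e-12)
for t_end in (50, 100, 200, 300):
    window = sol.sol(np.linspace(t_end - 50, t_end, 2000))
    print(f"t in [{t_end-50:3d},{t_end:3d}]: lowest abundance seen = "
          f"{window.min():.2e}")
```

With a + b greater than 2 the interior fixed point is unstable, the oscillations grow, and the lowest abundance seen in successive time windows keeps shrinking; in a real, finite population those excursions would become extinctions. Choosing a + b less than 2 instead makes the interior point stable, so the same framework can also spiral in to coexistence.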
They do say that, in some situations, the fitnesses are such that it's like this. And it's good to take these sentences seriously, because, if they thought that every time you isolate a resistant bacterium it would satisfy this, they would have said, when you do this, this is what you see. Their phrasing tells you that actually, depending upon which strain you get here or there, you may or may not see this. So you have to be careful. Just because there's a nice paper written about this doesn't mean that, if you go out and you find particular strains that have these properties, it will always yield this particular outcome. So they argue that, just because these strains have a non-transitive interaction, it does not necessarily mean that they will be able to coexist, in a well mixed environment in particular. And in their simulations and their experiments, where they did experiments in a test tube, which strain died first? The sensitive strain. Does that make sense? Yeah, right? And the other thing to remember is that there's no reason these strains should be accurately described by a Lotka-Volterra type formulation. In particular, it could just be the case that, if you have enough producers and they make enough of the colicin, then the sensitive cells are just all dead. And that will, in general, be hard to capture in this sort of framework. So the idea is that, if you start with a mix of all three, then first you see that the sensitive cells die. And once the sensitive cells have died, then you're really just playing out an interaction between these two. In that case, you get the colicinogenic strain dying, and you're left with just the resistant strain. One thing I want to caution you about, though, is that, just because in this experiment they saw that coexistence of three rock-paper-scissors type strains was not possible in a well mixed environment, it does not mean that this will always be the case. This is a very well known paper in the field. And the thing is that it's easy to forget what a paper shows and what it doesn't show. So what this shows is that there is maybe a set of these three strains that have a rock-paper-scissors type interaction. And those particular three strains, interacting in this particular way, via colicin killing and so on, don't coexist in this particular well mixed environment, and maybe other well mixed environments as well. But this does not necessarily show that any rock-paper-scissors interaction in a well mixed environment will not support coexistence. And in particular, you might remember, in Martin Nowak's book, there are very reasonable equations that display rock-paper-scissors type interactions that can lead to coexistence. So you can kind of spiral in that space to a state of coexistence. So it's possible. But in this situation, it doesn't happen. Can somebody remind us how they implemented the spatially structured environment? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah. So they used agar plates. They did this thing where they took this plate, and they kind of used a hexagonal grid of some sort. Is it actually hexagonal? Yeah. I don't know how they decided on the original order, but they said, OK, here is a sensitive, sensitive, resistant, colicinogenic, resistant-- I don't know, whatever. So they filled it up. They put maybe 20 different patches there. And then, basically, each day, they just used one of these velvets, that we often use to do replica plating.
And then they just kind of took off some of the cells and put them on a fresh plate. And then they did this for a week or so. And then they saw that there was some sense that there was a spatial dynamic taking place, where the spatial structure was kind of reminiscent of this. The sensitive kind of moved into the resistant. The resistant kind of moved into the colicinogenic. The colicinogenic kind of moved into the sensitive. But they couldn't actually see all of those, because two of the strains, they said, grew to similar density. That was between the sensitive and the resistant, right? So visually, they can only distinguish the colicinogenic strain relative to the others. But what they found, though, is that, over a similar amount of time, they got coexistence of all three. So R, C, S kind of all-- the number here is just a function of time-- stuck around in this spatially structured environment.

So their argument is that, in some cases maybe, this non-transitive interaction by itself is not sufficient to maintain diversity in a population, whether it's genetic diversity or species diversity. It may be that it's very important to have this spatial component. If you're curious about these things, there's also a nice computational study done by Erwin Frey, who studied these rock-paper-scissors dynamics as a function of the mobility of the individual agents. And in that study, he found that there was sort of a critical level of mobility: below it you have a spatially structured regime with coexistence, and above it you end up losing the diversity, because it kind of goes into this well-mixed regime. So if you're curious about these things, you can look up Erwin's paper.

Are there any questions about what they did here, what you think it means? Yeah?

AUDIENCE: You said you couldn't use [INAUDIBLE]. How did they know if it was [INAUDIBLE]?

PROFESSOR: They can just scrape off everything and plate.

AUDIENCE: And then you can tell?

PROFESSOR: Right. Because, well, then you can ask, oh, were those guys sensitive to colicin or not?

AUDIENCE: Why not actually just inject them with colicin?

PROFESSOR: Yeah. Or plate them on something with colicin. This is a system that has been very influential, but it's really not yet, or has not been, a domesticated kind of model system, in the sense that there are not nice fluorescent proteins. This is not on a nice cloning plasmid. It was a natural plasmid. And this resistant strain, the way that they get it is, basically, they take some sensitive cells and they add colicin, let's say the supernatant from here, and they ask which cells grow, and then that's a resistant cell. And it's genetically resistant. But then each one is going to be different. They have done sequencing. It's a surface receptor that the colicin maybe uses to get in, or other things. But indeed, we actually did some experiments with some of these strains. And it's a little bit messy. And I think that there's a sense that the field maybe needs to just make some nice plasmids, with nice colors, so that we can really distinguish things. Because I think that for this system, despite 500 citations or so, almost none of those citations are experimental papers really exploring this thing. Because I think that it still is just a little bit messy. And I think that it's also a very pretty system to explore these ideas. Yeah?

AUDIENCE: [INAUDIBLE]?

PROFESSOR: So the question is, why can't you distinguish these? And they say it's because they're at similar densities. I don't even know if that's quite it.
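As a rough picture of the kind of dynamics being described, here is a simplified lattice simulation in the spirit of the Kerr experiment and of Frey's computational work. This is my own sketch, not the authors' model or code, and all of the death and toxicity rates below are invented, so treat the outcome as illustrative only: the idea is simply that with local dispersal the three types can keep chasing each other around the grid, while with global interactions diversity tends to be lost.

```python
# A simplified, illustrative lattice model of colicin rock-paper-scissors.
# States: 0 empty, 1 sensitive (S), 2 resistant (R), 3 colicinogenic (C).
# All rates are invented for illustration, not fitted to the real strains.
import numpy as np

rng = np.random.default_rng(0)
EMPTY, S, R, C = 0, 1, 2, 3
death = {S: 0.25, R: 0.30, C: 0.33}  # baseline death rates: S < R < C
tox = 0.65                           # extra death of S per local fraction of C

def local_cells(grid, i, j, local):
    # 3x3 neighborhood (with wraparound) for local dispersal,
    # or the whole grid for the "well-mixed" comparison.
    if local:
        rows = [(i - 1) % grid.shape[0], i, (i + 1) % grid.shape[0]]
        cols = [(j - 1) % grid.shape[1], j, (j + 1) % grid.shape[1]]
        return grid[np.ix_(rows, cols)].ravel()
    return grid.ravel()

def run(local, size=40, epochs=200):
    grid = rng.integers(0, 4, size=(size, size))
    for _ in range(epochs):
        for _ in range(size * size):
            i, j = rng.integers(size), rng.integers(size)
            cells = local_cells(grid, i, j, local)
            frac = np.bincount(cells, minlength=4) / cells.size
            if grid[i, j] == EMPTY:
                # colonize in proportion to local (or global) frequencies
                grid[i, j] = rng.choice(4, p=frac)
            else:
                p = death[int(grid[i, j])]
                if grid[i, j] == S:
                    p += tox * frac[C]   # killing by nearby producers
                if rng.random() < p:
                    grid[i, j] = EMPTY
    return np.bincount(grid.ravel(), minlength=4)[1:]  # counts of S, R, C

print("local dispersal, S/R/C counts:", run(local=True))
print("well mixed,      S/R/C counts:", run(local=False))
```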
Of course, well, if you added sensitive cells here and sensitive cells here and they grew up, it's likely you're not going to see a boundary. And just more generally, if you take similar strains-- and these are rather similar strains, this one just has some mutation relative to this one-- then you often won't see a boundary when the colonies grow up.

AUDIENCE: So they just didn't see boundaries. It's not like they didn't see--

PROFESSOR: So they saw there were cells there, but they couldn't necessarily say that the sensitive cells were spreading into the resistant cell region, because they couldn't see the boundary.

So in the last 15, 20 minutes here, I just want to say a few things about these population waves. For many, many purposes, there are strong arguments to be made that, in the context of evolution or ecology, spatial dynamics matter. And the way that we often think about the spatial dynamics is via some effective diffusive process. And that's convenient, because we know a lot about how to model it. But it's also maybe a reasonable description of the motion of animals and other living things over length scales that are large compared to the movements of the animals. So what we do is we take some equation. So this is going to be about population waves. We take our standard thing where we say, oh, here's dN/dt equals r N times (1 minus N over K). And then we just want to add some spatial dynamic. So what we're going to do is we're going to say, now it's a density, so we're going to use a little n, just for fun. And we might still use a K, just to keep it simple. But then we have to add some diffusive term. So there still is going to be a local carrying capacity. Now this is in terms of the density of the organism. And we're going to assume that there is some diffusive type motion of the organism.

Now this is, of course, glossing over the scales on which the animals are actually doing things over shorter time scales. So it could be that different organisms have very different modes of motility. In some cases they walk or swim; in some cases they just get picked up by a passing deer. So there is a wide range of ways in which organisms move around. But if you look at it over longer length and time scales, then maybe it doesn't matter, certainly if it's an unbiased kind of motion with the motility being well-behaved, i.e., as long as the probability distribution of these steps is not long-tailed. Incidentally, Oskar Hallatschek, over at Berkeley, has been doing a lot of fun work looking at epidemic spread in cases where this kernel, the kind of step size distribution, has long tails. It leads to qualitatively different behaviors. So if you're curious about such things, check out Oskar's work. But for normal kinds of step size distributions, central limit theorem type considerations just tell you that you can maybe just look at it like this over time and spatial scales that are bigger. Yeah?

AUDIENCE: But is there a condition on the steps? Because the central limit theorem holds as long as the variance-- I guess that's what you're saying?

PROFESSOR: That's why I'm saying long-tailed.

AUDIENCE: If the variance is infinite?

PROFESSOR: Exactly. Yeah, so that's what I was saying. And indeed, people argue, in the case of disease spread, with modern air travel and so forth, that the probability distribution of kind of step sizes for infected individuals, over the next week, is long-tailed, right?
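Written out, the equation being assembled here, with n(x, t) the population density, r the low-density growth rate, K the local carrying capacity, and D the effective diffusion constant, is the Fisher-KPP equation:

$$\frac{\partial n(x,t)}{\partial t} \;=\; r\,n\left(1 - \frac{n}{K}\right) \;+\; D\,\frac{\partial^2 n}{\partial x^2}.$$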
Because there's a fair chance that you're just going to stay around your local neighborhood, but there's a smaller chance you're going to go to the other side of town. But you might go on a business trip over in DC. At some small rate, though, you also fly to South Africa and go to a conference. So all of these things have reasonable probabilities, and so there are arguments that this is kind of a power law type distribution. And in those cases, you don't necessarily have finite variance, and so then you can't just sweep everything under the rug. But we first want to understand what happens here, and then we can-- well, we're not going to-- but then other people can think more deeply about what happens in fancier situations.

It is worth saying, though, that this approach looks very physics-y, in the sense that physicists like simple equations where we add diffusion and so forth. But this is not something a physicist came up with. These are classic ideas in evolution and ecology. The solution to this was originally done by Fisher, back in the 1930s. So a long time ago, and originally to try to understand not the spread of a population but the spread of a beneficial allele in a spatial population. And once again, this highlights the deep connections between evolution and ecology. You can have a genetic wave in space of a beneficial mutant spreading, or you can have a population wave of an invasive species or whatnot, and you end up getting very similar dynamics.

So the basic idea here is to look at the density as a function of position. If you start with one individual, what's going to happen? It's going to start dividing, right? So the density kind of comes up. And eventually it's going to saturate at this carrying capacity, K. And then you end up getting these spreading population waves that look like this. And the reason we're calling it a wave is because the shape of this front is the same over time. So it can really be described as some function of x minus vt. And we are going to, typically, assume that we're in a situation where we don't have to think about both the left and the right, because it's just too complicated. So we'll just imagine that it's at saturation here, and then we're looking at some front that's moving to the right.

Now, by dimensional analysis, we should be able to figure out what the velocity is going to be. Remember how much we liked dimensional analysis in this class? Yes. So what we're going to do is I'll give you some characters that you're going to be able to use in your quest. So we can use r, K, D. I'll give you a square root in case you find it useful. And you can raise something to the second power as well. So what you can do is you're going to set up your cards so that when I look at it, from left to right, it will describe the velocity of this wave. I'll give you 30 seconds. So this is dimensional analysis for this wave velocity.

All right, do you need more time? Yes? OK, that's fine. All right, let's go ahead and vote. Construct your answer, remember, as seen from me, from left to right. Ready, three, two, one. All right. We have trouble. A key skill is being able to imagine yourself in someone else's shoes. So if I'm viewing-- but that's OK. So we have, here, this one is kind of units of 1 over time. This one is, what, length squared over time. Whereas this one is a density. If we want something that is a length over time, then we're going to end up having to take r times D and take a square root.
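Summarizing the units bookkeeping behind the card exercise: with the traveling-wave ansatz n(x, t) = f(x - vt), and with r a rate and D a diffusion constant, combining the growth rate and the diffusion constant gives something with the units of a velocity:

$$n(x,t) = f(x - vt), \qquad [r] = \frac{1}{T}, \quad [D] = \frac{L^2}{T} \quad\Longrightarrow\quad \left[\sqrt{r\,D}\right] = \frac{L}{T}.$$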
So this should be D, A, C, which is a square root of-- of course, it depends on how you're entering it into your calculator, whether you have a scientific calculator or something else, maybe. And indeed, it ends up that there's a 2 here. So the velocity is 2 times the square root of r D. Yes?

AUDIENCE: [INAUDIBLE].

PROFESSOR: I see what you're saying. But it ends up not being true. These derivative signs don't have any units. So this still has units of a density divided by a length squared. So for unit purposes, you just look at this thing and at this. This squared does mean there's a length squared in the denominator, but this squared doesn't do anything. So D is still a length squared over time. So this is the famous Fisher velocity. There are some mathematical subtleties to all of this that we're not really going to get into.

There are a few features to highlight. If the organism grows faster, the wave is going to spread faster. Does that make sense? If it has a larger mobility, it also moves faster. That makes sense. Of course, the velocity is given by both of those things. So this wave coming out really is a population-level property. Because it's not just growth, and it's not just motion. It's the coupled division and diffusion that leads to this population wave spreading. Importantly, to first order, it doesn't depend on the carrying capacity, at least within the deterministic regime.

AUDIENCE: It's interesting that, if the reproductive rate goes to 0, suddenly the population stops spreading.

PROFESSOR: So first of all, you'd say, oh, it would be sort of surprising if it really kept on spreading in the absence of growth. But what you're pointing out is that, if you just, at one moment, turn off division, then there will still be diffusion. It'll still keep on going. But this is the velocity of a wave when it's a wave, when it's described by a function like this. So it's true that you could turn off division, and it'll still diffuse. But then the shape is changing as well.

I'm going to draw a few lines describing possible populations. Now, let's assume that they have the same motion, the same diffusion. I want to know, which one has the largest velocity? Is it A, B, C, D? It's the same diffusion; this is the per capita growth rate as a function of the density for the different organisms. Ready, three, two, one.

All right, so I'd say we have a fair number of B's and D's. It seems like it's B versus D. Now, this is tricky, because D we have not explicitly considered here. But it turns out that the answer is B. Certainly, between these three, these are all really logistic growth functions. And so from the standpoint of the velocity, it's just this r. And r is the division rate at zero cell density. The per capita growth rate at zero cell density is what determines the velocity in a Fisher wave. And indeed, that's true even if there's no decrease in the growth rate until you get right up to some carrying capacity. In all of these cases, the division rate or growth rate that's relevant for the velocity is where the curve hits this axis. And indeed, all of these waves are described as Fisher or pulled waves, because there's a sense in which the entire wave is determined by the front of the wave. So we drew this profile. I didn't do that very well. Here, this front is an exponential. And the exponential is actually what's pulling this wave. There ends up being a characteristic length scale here that is the square root of D over r. So this is the length scale of the exponential.
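To see the 2 times square root of rD result come out of the equation itself, here is a minimal numerical sketch, my own and not from the lecture or the paper, that integrates the Fisher-KPP equation above with explicit finite differences and compares the measured front speed against the prediction; the parameter and grid values are arbitrary choices for illustration.

```python
# A minimal sketch (illustrative parameters): integrate the Fisher-KPP
# equation with explicit finite differences and check that the front
# advances at roughly v = 2*sqrt(r*D).
import numpy as np

r, K, D = 1.0, 1.0, 1.0            # growth rate, carrying capacity, diffusion
L, nx = 200.0, 2000                # domain length and number of grid points
dx = L / nx
dt = 0.2 * dx**2 / D               # time step within the diffusive stability limit

x = np.linspace(0.0, L, nx)
n = np.where(x < 10.0, K, 0.0)     # saturated on the left, empty on the right

def step(n):
    lap = np.empty_like(n)
    lap[1:-1] = (n[2:] - 2 * n[1:-1] + n[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]            # crude no-flux boundaries
    return n + dt * (r * n * (1 - n / K) + D * lap)

def front_position(n):
    return x[np.argmax(n < K / 2)]  # first point where density drops below K/2

times, fronts = [], []
for i in range(int(60.0 / dt)):
    n = step(n)
    if i % 1000 == 0:
        times.append(i * dt)
        fronts.append(front_position(n))

# fit the late-time front position vs. time to get the speed
v_measured = np.polyfit(times[len(times)//2:], fronts[len(fronts)//2:], 1)[0]
print("measured front speed:        ", round(v_measured, 3))
print("Fisher prediction 2*sqrt(rD):", 2 * np.sqrt(r * D))
```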
And the velocity and the length scale are only functions of the division rate in the limit of low cell density, or low density of the organism. The shape of what goes on back here changes the bulk properties of the wave, indeed, but it doesn't change the velocity. And I just want to make one comparison of all this, because there's another, qualitatively different kind of wave, which is a so-called pushed wave. And that's what happens if you have an Allee effect, particularly a strong Allee effect. If this thing looks like this, which is certainly possible, this is an Allee effect. Now, if you just said, oh, the only thing that matters is the growth rate at low cell density, you would say, oh, this thing cannot possibly expand. Although it turns out that it still is possible. And in this situation, it would be called a pushed wave, where your profile somehow maybe looks kind of similar. But instead of it being the front of the wave that's pulling the wave, it's diffusion out of the bulk, because the bulk is the part that is actually happily growing. The front, here, in this case, is dying. Yet it still is possible to have a positive velocity. And so this is, then, a qualitatively different kind of population expansion. So cooperatively growing populations expand very differently from logistically growing populations. And one of the things that the reading in Physics Today talked about is these different rates of loss of heterozygosity and so forth in different populations. And as you might expect, the pulled waves have a smaller effective population size than the pushed waves, because, here, the relevant population is at the front, where it's at low density, whereas there, the relevant population is the bulk, which is at high density. With that, I think we should quit. But I will see you on Tuesday, and we'll talk about the neutral theory in ecology. Thanks.