Hi everyone, welcome to CS25 Transformers United V2. This was a course that was held at Stanford in the winter of 2023. This course is not about robots that can transform into cars, as this picture might suggest. Rather, it's about deep learning models that have taken the world by storm and have revolutionized the field of AI and beyond. Starting from natural language processing, transformers have been applied all over, from computer vision to reinforcement learning, biology, robotics, etc. We have an exciting set of videos lined up for you, with some truly fascinating speakers giving talks on how they're applying transformers to their research in different fields and areas. We hope you'll enjoy and learn from these videos. So without any further ado,
let's get started. This is a purely introductory lecture, and we'll go into the building blocks of transformers. So first, let's start with introducing the instructors. As for me, I'm currently on a temporary leave from the PhD program, and I'm leading AI at a robotics startup working on collaborative, general-purpose robots. I'm very passionate about robotics and building efficient learning algorithms. My research interests started in reinforcement learning and computer vision, and I have a bunch of publications in those areas. My undergrad was at Cornell. With that, I'll hand it over to Steven. So I'm Steven, currently a first-year CS PhD student; I did my master's at CMU. I'm mainly into NLP research, anything involving language and text. But more recently,
I've been getting more into computer vision as well, and outside of that there's some stuff I do for fun: a lot of music, mainly piano. Some self-promotion, but I post a lot on my Insta, YouTube, and TikTok, so check those out if you'd like. My friends and I are also starting a Stanford piano club, so if anybody's interested, feel free to email or DM me for details. Other than that, you know, martial arts, bodybuilding, a huge fan of K-dramas and anime, and an occasional gamer. Okay, cool. Yeah, so my name's Rylan. Instead of talking about myself, I just want to very briefly say that I'm super excited to teach this class. I took it the last time it was offered and had a great time, and I thought we brought in a really great group of speakers last time. I'm super excited for this offering. And yeah, I'm thankful you're all here and I'm looking forward to a really fun quarter together. Thank you. Yeah, so fun fact,
Rylan was the most outspoken student last year, and so if someone wants to become an instructor next year, you know what to do. Okay, cool. Let's see, okay. So what we hope you will learn in this class is, first of all, how do transformers work? How are they being applied, in NLP and other areas? Nowadays they are pretty much everywhere in machine learning. And what are some new and exciting directions of research in these topics? Cool. So this class is just introductory, so we'll be talking about the basics of transformers, introducing them, talking about the self-attention mechanism on which they're founded, and we'll do a deeper dive on models like BERT and GPT. So great, happy to get started. Okay,
so let me start with presenting the attention timeline. Attention all started with this one paper, "Attention Is All You Need" by Vaswani et al. in 2017. That was the beginning of transformers. Before that, we had the prehistoric era, where we had models like RNNs, LSTMs, and simpler attention mechanisms that didn't scale well at all. Starting in 2017, we saw this explosion of transformers into NLP, where people started using them for everything. I even heard this quote from Google, something like "our performance increased every time we fired a linguist." Then from 2018 to 2020, we saw the explosion of transformers into other fields, like vision and a bunch of other areas like biology. And 2021 was the start of the generative era, where a lot of generative modeling started: models like GPT-3, DALL-E, stable diffusion,
so a lot of things happening in generative modeling, and we started scaling up in AI. And now it's the present, and we have models like ChatGPT, Whisper, and a bunch of others, and we keep scaling up without slowing down. So that's great, and that's the future. Going more into this: once there were RNNs, so we had sequence-to-sequence models, LSTMs, GRUs. What worked here was that they were good at encoding history, but what did not work was that they didn't encode long sequences well and they were very bad at encoding context. So consider this example: trying to predict the last word in the text "I grew up in France... I speak fluent ___." Here you need to understand the context to predict "French," and an attention mechanism is very good at that, whereas if you're just using LSTMs,
it doesn't work at all. Another thing transformers are good at is context prediction based on content, like finding attention maps: if I have a word like "it," what does it refer to? We can put a probability distribution, an attention map, over the possible words, and this works much better than the mechanisms that existed before. Okay. So where we were in 2021: we were on the verge of takeoff. We were starting to realize the potential of transformers in different fields. We solved a lot of long-sequence problems, like protein folding with AlphaFold, and offline RL. We started to see few-shot and zero-shot generalization. We saw multimodal tasks and applications, like generating images from language; that was DALL-E. Yeah, and it feels like ages ago, but it was only like two years ago. And there's also a talk on transformers from that time that you can watch on video. Yeah,
cool. And this is where we were going from 2021 to 2022: we have gone from being on the verge of taking off to actually taking off. Now we are seeing unique applications in audio generation, art, music, storytelling. We are starting to see reasoning capabilities: common sense, logical reasoning, mathematical reasoning. We are also now able to get human alignment and interaction; we're able to use reinforcement learning from human feedback, which is how ChatGPT is trained to perform really well. We have a lot of mechanisms now for controlling toxicity, bias, and ethics. And also
a lot of developments in other areas, like diffusion models. Cool. So the future is like a spaceship and we are all excited about it, and there are a lot more applications that we could enable; it would be great to see transformers work there too. One big example is video understanding and generation. That is something everyone is interested in, and I'm hoping we'll see a lot of models in this area this year. Also finance, business. I'd be very excited to see GPT author a novel, but for that we need to solve very long sequence modeling, and most transformer models are still limited to something like 4,000 tokens. So we need to make them generalize much better to long sequences. We also want generalized agents that can do a lot of multi-task, multi-input predictions, like Gato, and I think we will see more of that too. And finally,
we also want domain-specific models. So you might want a GPT model that's good at, say, health; that could be like a doctor-GPT model. You might have a large GPT model that's trained only on law data. Currently we have GPT models that are trained on everything, but we might start to see more niche models that are good at particular tasks, and we could have a mixture of experts. You can think of it like how you'd normally consult an expert: you'll have expert AI models, and you can go to a different AI model for your different needs. There are still a lot of missing ingredients to make this all successful. First of all is external memory. We are already starting to see this with models like ChatGPT, where the interactions are short-lived; there's no long-term memory,
and they don't have the ability to remember or store conversations for the long term, and this is something we want to fix. Second is reducing the computational complexity: the attention mechanism is quadratic in the sequence length, which is slow, and we want to make it faster. Another thing we want to do is enhance the controllability of these models. A lot of these models can be stochastic, and we want to be able to control what sort of outputs we get. You might have experienced this with ChatGPT: if you just refresh, you get a different output each time, but you might want mechanisms that control what sort of things you get. And finally, we want to align our state-of-the-art language models with how the human brain works. We are seeing this research, but we still need more work on how to get them closer. Thank you. Great. Hi. Yes,
I'm excited to be here. I live very nearby, so I got the invite to come to class and I was like, okay, I'll just walk over. But then I spent like 10 hours on the slides, so it wasn't as simple. So yeah, I want to talk about transformers. I'm going to skip the first two items over there; we're not going to talk about those, we'll talk about that one, just to simplify the lecture given the time we've got. Okay, so I wanted to provide a little bit of context on why this transformer class even exists, so a little bit of historical context. I feel like Bilbo over there, like I'm telling you guys about history. Basically, I joined AI in roughly 2012, in full force, so maybe a decade ago. And back then, you wouldn't even say that you joined AI, by the way; that was like a dirty word. Now it's okay to talk about, but back then it was not even deep learning,
it was machine learning. That was the term you used if you were serious. But now, now AI is okay to use, I think. So basically, do you even realize how lucky you are, potentially entering this area in roughly 2023? Back then, in 2011 or so, when I was working specifically on computer vision, your pipelines looked like this. You wanted to classify some images; you would go to a paper, and I think this one is representative: you would have three pages in the paper describing all kinds of a zoo, a kitchen sink, of different feature descriptors, and you would go to a poster session at a computer vision conference and everyone would have their favorite feature descriptors that they're proposing,
and it's totally ridiculous. And you would take notes on which ones you should incorporate into your pipeline, because you would extract all of them and then you would put an SVM on top. That's what you would do. So there's two pages: make sure you get your sparse SIFT histograms, your SSIMs, your color histograms, textons, tiny images, and don't forget the geometry-specific histograms. All of them had basically complicated code by themselves, so you're collecting code from everywhere and running it, and it was a total nightmare. On top of that, it also didn't work. This, I think, represents the predictions from that time: you would just get predictions like this once in a while, and you'd just shrug your shoulders, like, that just happens once in a while. Today you would be looking for a bug. And worse than that, every single field,
every single chunk of AI, had their own completely separate vocabulary that they worked with. So if you go to NLP papers, those papers would be completely different. You're reading the NLP paper and you're like, what is this part-of-speech tagging, morphological analysis, syntactic parsing, coreference resolution? What is NP, VP, and so on? The vocabulary and everything was completely different, and you couldn't read papers across different areas. Now, that changed a little bit starting in 2012, when Krizhevsky and colleagues basically demonstrated that if you scale a large neural network on a large dataset, you can get very strong performance. Up till then, there was a lot of focus on algorithms, but this showed that neural nets actually scale very well. So you now need to worry about compute and data, and you scale it up,
it works pretty well. And then that recipe actually did copy-paste across many areas of AI. So we started to see neural networks pop up everywhere since 2012: we saw them in computer vision, NLP, speech, translation, RL, and so on. Everyone started to use the same kind of modeling toolkit, the same modeling framework. And now when you go to NLP and you start reading papers there, in machine translation for example (this is the sequence-to-sequence paper, which we'll come back to in a bit), you start to read those papers and you're like, okay, I can recognize these words: there's a neural network, there's some parameters, there's an optimizer, and it starts to read like things that you know of. So that decreased the barrier to entry tremendously across the different areas. And then I think the big deal is that when the transformer came out in 2017,
it's not even that just the toolkits and the neural networks were similar; it's that literally the architectures converged to like one architecture that you copy-paste across everything, seemingly. So this was kind of an unassuming machine translation paper at the time, proposing the transformer architecture, but what we found since then is that you can just basically copy-paste this architecture and use it everywhere, and what's changing is the details of the data, the chunking of the data, and how you feed it in. And you know,
that's a caricature, but it's kind of like a correct first-order statement. And so now papers are even more similar-looking, because everyone's just using the transformer. This convergence was remarkable to watch as it unfolded over the last decade, and it's crazy to me. What I find kind of interesting is that I think this is some kind of a hint that we're maybe converging to something that maybe the brain is doing, because the brain is very homogeneous and uniform across the entire sheet of your cortex. And okay, maybe some of the details are changing, but those feel like the hyperparameters of a transformer; your auditory cortex and your visual cortex and everything else look very similar. So maybe we're converging to some kind of a uniform, powerful learning algorithm here, something like that. I think it's kind of interesting and exciting. Okay, so I want to talk about where the transformer came from, briefly,
historically. So I want to start in 2003. I like this paper quite a bit. It was the first sort of popular application of neural networks to the problem of language modeling: predicting, in this case, the next word in a sequence, which allows you to build generative models over text. And in this case, they were using a multi-layer perceptron, so a very simple neural net. The neural net took three words and predicted the probability distribution for the fourth word in the sequence. So this was well and good at this point.
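A minimal sketch of that kind of fixed-window neural language model might look like this (a sketch, not the paper's exact model; the vocabulary size, embedding width, and hidden size here are made up for illustration):

```python
import torch
import torch.nn as nn

class NGramMLP(nn.Module):
    """2003-style language model: embed the previous 3 tokens, concatenate,
    pass through an MLP, and predict a distribution over the 4th token."""
    def __init__(self, vocab_size=10000, n_embd=64, context=3, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, n_embd)
        self.mlp = nn.Sequential(
            nn.Linear(context * n_embd, hidden),
            nn.Tanh(),
            nn.Linear(hidden, vocab_size),
        )

    def forward(self, idx):       # idx: (batch, 3) integer token ids
        x = self.emb(idx)         # (batch, 3, n_embd)
        x = x.flatten(1)          # (batch, 3 * n_embd)
        return self.mlp(x)        # logits over the next token
```

The fixed window of three words is exactly the limitation the next few papers try to get around.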
Now, over time, people started to apply this to machine translation, which brings us to the sequence-to-sequence paper from 2014. That was pretty influential. And the big problem there was: okay, we don't just want to take three words and predict the fourth; we want to go from an English sentence to a French sentence, and you can have an arbitrary number of words in English and an arbitrary number of words in French. So how do you get an architecture that can process this variably-sized input? And so here they used an LSTM. There are basically two chunks of it, which are covered by this slide: you have an encoder LSTM on the left, and it just consumes one word at a time and builds up a context of what it has read. Then that acts as a conditioning vector for the decoder RNN or LSTM, which basically goes chunk, chunk, chunk, producing the next word in the sequence,
translating English to French or something like that. Now, the big problem with this, which people identified very quickly and tried to resolve, is what's called the encoder bottleneck: this entire English sentence that we are trying to condition on is packed into a single vector that goes from the encoder to the decoder. That is just too much information to maintain in a single vector, and it didn't seem correct. And so people were looking around for ways to alleviate this encoder bottleneck, as it was called at the time. That brings us to this paper, "Neural Machine Translation by Jointly Learning to Align and Translate." And here, just quoting from the abstract:
"In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly." So this was a way to look back at the words coming from the encoder, and it was achieved using this soft search. So as you are decoding the words here, while you are decoding them,
you are allowed to look back at the words in the encoder via this soft attention mechanism proposed in this paper. And this paper, I think, is the first time that I saw basically attention: your context vector that comes from the encoder is a weighted sum of the hidden states of the words in the encoding, and the weights of this sum come from a softmax that is based on compatibilities between the current state as you're decoding and the hidden states generated by the encoder. So this is the first time that you really start to see what are basically the modern equations of attention, and as far as I know it's the first time the word "attention" is used to name this mechanism.
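Roughly, in the notation of that paper, the equations being described are:

$$
c_i = \sum_{j=1}^{T_x} \alpha_{ij}\, h_j, \qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T_x}\exp(e_{ik})}, \qquad
e_{ij} = a(s_{i-1},\, h_j)
$$

where the $h_j$ are the encoder hidden states, $s_{i-1}$ is the current decoder state, $a$ is a small learned scoring function, and $c_i$ is the context vector used to predict the $i$-th output word.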
So I actually tried to dig into the details of the history of attention. The first author, Dzmitry, I had an email correspondence with him; I basically sent him an email like, "Dzmitry, this is really interesting, transformers have taken over. Where did you come up with the soft attention mechanism that ends up being the heart of the transformer?" And to my surprise, he wrote me back this massive email, which was really fascinating. So this is an excerpt from that email. Basically, he talks about how he was looking for a way to avoid this bottleneck between the encoder and decoder. He had some ideas about cursors that traverse the sequences that didn't quite work out. And then: "one day I had this thought that it would be nice to enable the decoder RNN to learn to search where to put the cursor in the source sequence. This was sort of inspired by translation exercises that learning English in my middle school involved: your gaze shifts back and forth between the source and target sequence as you translate." So literally,
I thought this was kind of interesting: he's not a native English speaker, and here that gave him an edge in machine translation, which led to attention and then led to the transformer. That's really fascinating. He "expressed the soft search as a softmax and then weighted averaging of the BiRNN states," and basically, to his great excitement, this worked from the very first try. So a really interesting piece of history. And as it later turned out, the name "RNNSearch" was kind of lame, so the better name, "attention," came from Yoshua Bengio in one of the final passes as they went over the paper. So maybe "Attention Is All You Need" would instead have been called something like "RNNSearch Is All You Need." But we have Yoshua Bengio to thank for a bit of a better name, I would say. So apparently that's the history of the subject; I thought it was interesting. Okay, so that brings us to 2017,
which is "Attention Is All You Need." So this attention component, which in Dzmitry's paper was just one small segment (there's all this bidirectional RNN and RNN decoder machinery around it), and this "Attention Is All You Need" paper is saying: okay, you can actually delete everything; what's making this work very well is just the attention by itself. So delete everything, keep attention. And what's remarkable about this paper, actually, is that usually you see papers that are very incremental; they add like one thing and they show that it's better. But I feel like "Attention Is All You Need" was a mix of multiple things at the same time, combined in a very unique way, that also achieved a very good local minimum in the architecture space. And so to me, this is really a landmark paper that is quite remarkable, and I think it had quite a lot of work behind the scenes. So: delete all the RNNs,
just keep attention. Because attention operates over sets, and I'm going to go into this in a second, you now need to positionally encode your inputs. They interspersed attention with multi-layer perceptrons. They used layer norms, which came from a different paper. They introduced the concept of multiple heads of attention that are applied in parallel. And they gave us, I think, a fairly good set of hyperparameters that are used to this day: the expansion factor in the multi-layer perceptron goes up by 4x (we'll go into a bit more detail later), and this 4x has stuck around. And I believe there's been a number of papers that try to play with all kinds of little details of the transformer, and nothing really sticks, because this is actually quite good. The only thing, to my knowledge, that did stick was this reshuffling of the layer norms to go into the pre-norm version,
where here you see the layer norms are after the multi-headed attention or feed-forward; they just put them before instead. So it's just a reshuffling of the layer norms, but otherwise the GPTs and everything else you're seeing today are basically the 2017 architecture from five years ago. And even though everyone is working on it, it's proven remarkably resilient, which I think is very interesting. There are innovations that I think have been adopted in positional encodings; it's more common to use different rotary and relative positional encodings and so on. So there have been changes, but for the most part it's proven very resilient. So really quite an interesting paper. Now, I wanted to go into the attention mechanism. The way I interpret it is not similar to the ways I've seen it presented before, so let me try a different way of describing how I see it. Basically, to me,
attention is kind of like the communication phase of the transformer. The transformer interleaves two phases: the communication phase, which is the multi-headed attention, and the computation phase, which is this multi-layer perceptron, the feed-forward part. In the communication phase, it's really just a data-dependent message passing on directed graphs. And you can think of it as: okay, forget machine translation and everything; we just have a directed graph, and at each node you are storing a vector. Let me talk about the communication phase, how these vectors talk to each other in this directed graph; the compute phase later is just the multi-layer perceptron, which then basically acts on every node individually. But how do these nodes talk to each other in this directed graph? So I wrote some simple Python,
like, I wrote this in Python basically to create one round of communication using attention as the message-passing scheme. So here, a node has this private data vector; you can think of it as private information to this node. And then it can also emit a key, a query, and a value, and simply, that's done by a linear transformation from this node. So the query is: what are the things that I'm looking for? The key is: what are the things that I have? And the value is: what are the things that I will communicate? And so then, when you have your graph that's made up of nodes and some random edges, when you actually have these nodes communicating, what's happening is you loop over all the nodes individually in some random order, and you are at some node, and you get the query vector q, which is: I'm a node in some graph,
and this is what I'm looking for. And so that's just achieved via this linear transformation here. And then we look at all the nodes that point to this node, and they broadcast what are the things that they have, which is their keys. So they broadcast the keys, I have the query, and then those interact by dot product to get scores. So basically, simply by doing a dot product, you get some kind of an unnormalized weighting of the interestingness of all the information in the nodes that point to me, relative to the things I'm looking for. And then when you normalize that with a softmax, so it just sums to one, you basically end up with a probability distribution, and you do a weighted sum of the values to get your update. So I have a query, they have keys, dot products give interestingness, or like affinity, softmax normalizes it, and then a weighted sum of those values flows to me and updates me.
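A sketch of what that "simple Python" could look like (this is my own illustration of the idea he's describing, not his actual code; the `Node` class and the dimension `d` are made up):

```python
import numpy as np

d = 16  # feature dimension, chosen arbitrarily for illustration

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

class Node:
    def __init__(self):
        self.data = np.random.randn(d)      # private information stored at this node
        # linear maps that emit the query / key / value from the private data
        self.wq = np.random.randn(d, d)
        self.wk = np.random.randn(d, d)
        self.wv = np.random.randn(d, d)

    def query(self): return self.wq @ self.data   # what am I looking for?
    def key(self):   return self.wk @ self.data   # what do I have?
    def value(self): return self.wv @ self.data   # what will I communicate?

def communicate(node, inputs):
    """One round of attention-style message passing.
    `inputs` are the nodes that point at `node` in the directed graph."""
    q = node.query()
    keys = np.stack([n.key() for n in inputs])
    values = np.stack([n.value() for n in inputs])
    scores = softmax(keys @ q / np.sqrt(d))   # affinities, normalized to sum to one
    return scores @ values                    # weighted sum of values = the update
```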
And this is happening for each node individually, and then we update at the end. So this kind of message-passing scheme is at the heart of the transformer, and it happens in a more vectorized, batched way that is more confusing, and is also interspersed with layer norms and things like that to make the training behave better. But that's roughly what's happening in the attention mechanism, I think, at a high level. So, yeah, in the communication phase of the transformer, this message-passing scheme happens in every head in parallel and in every layer in series, with different weights each time, and that's it as far as the multi-headed attention goes. And so if you look at these encoder-decoder models, you can think of it in terms of the connectivity of the nodes in the graph. You can kind of think of it as:
okay, all these tokens that are in the encoder that we want to condition on, they are fully connected to each other; so when they communicate, they communicate fully when you calculate their features. But in the decoder, because we are trying to have a language model, we don't want to have communication from future tokens, because they would give away the answer at this step. So the tokens in the decoder are fully connected from all the encoder states, and then they are also fully connected from everything that is before them, and you end up with this triangular structure in the directed graph. But that's the message-passing scheme that this basically implements.
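As a tiny illustration of that triangular connectivity (block size of 5 chosen arbitrarily):

```python
import torch

T = 5
decoder_self = torch.tril(torch.ones(T, T))  # row t: token t attends to itself and earlier tokens
encoder_self = torch.ones(T, T)              # encoder tokens are fully connected to each other
print(decoder_self)
# tensor([[1., 0., 0., 0., 0.],
#         [1., 1., 0., 0., 0.],
#         [1., 1., 1., 0., 0.],
#         [1., 1., 1., 1., 0.],
#         [1., 1., 1., 1., 1.]])
```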
And then you have to be a little bit careful, because in the cross-attention here in the decoder, you consume the features from the top of the encoder. So think of it as: in the encoder, all the tokens are looking at each other many, many times, and they really figure out what's in there, and then the decoder, when it's looking, is only looking at the top nodes. So that's roughly the message-passing scheme. I was going to go into more of an implementation of a transformer; I don't know if there are any questions about this first. [Audience question about self-attention versus multi-headed attention.] Yeah, so self-attention and multi-headed attention: the multi-headed attention is just this attention scheme, but applied multiple times in parallel. Multiple heads just means independent applications of the same attention. So this message-passing scheme basically happens in parallel multiple times with different weights for the query, key, and value. So you can almost look at it as: in parallel,
I'm seeking different kinds of information from different nodes, and I'm collecting it all in the same node. It's all done in parallel. So heads are really just copy-paste in parallel, and layers are copy-paste in series. Maybe that makes sense. And self-attention: what it's referring to is where the keys, queries, and values are produced from. As I described it here, this is really self-attention, because every one of these nodes produces a key, a query, and a value from this individual node. When you have cross-attention, like the one cross-attention block here coming from the encoder, that just means the queries are still produced from this node, but the keys and the values are produced as a function of nodes that are coming from the encoder. So I have my queries, because I'm trying to decode, say,
the fifth word in the sequence, and I'm looking for certain things because I'm the fifth word; and then the keys and the values, in terms of the source of information that could answer my queries, can come from the previous nodes in the current decoding sequence, or from the top of the encoder: all the nodes that have already seen all of the encoding tokens many, many times can now broadcast what they contain in terms of information. So I guess to summarize: cross-attention and self-attention only differ in where the keys and the values come from. Either the keys and values are produced from this same node, or they are produced from some external source, like an encoder and the nodes over there. But algorithmically it's the same mathematical operations. Okay. [Audience question, roughly: in this message-passing-on-graphs paradigm, what exactly are the nodes?] So, yeah,
yeah, so each one of these nodes is a token. I guess I don't have a very good picture of it in the transformer, but this node here could represent, say, the third word in the output in the decoder, and in the beginning it is just the embedding of the word. And then, okay, I have to think through this analogy a little bit more; I came up with it this morning, actually I came up with it yesterday. [Follow-up question about what the nodes are when they are instantiated.] These nodes are basically the vectors; I'll go to the implementation and then maybe I'll make the connections back to the graph. So let me, with this intuition in mind at least, go to nanoGPT,
which is a concrete implementation of a transformer that is very minimal. So I worked on this over the last few days, and here it is reproducing GPT-2 on OpenWebText. So it's a pretty serious implementation that reproduces GPT-2, I would say, provided enough compute; this was one node of 8 GPUs for 38 hours or something like that. And it's very readable: it's about 300 lines, so everyone can take a look at it. Yeah, let me briefly step through it. So we're going to train a decoder-only transformer, and what that means is that it's a language model: it tries to model the next word in the sequence, or the next character in the sequence. The data that we train on is always some kind of text. So here's some fake Shakespeare; sorry, this is real Shakespeare, and we're going to produce fake Shakespeare. So this is the tiny Shakespeare dataset, which is one of my favorite toy datasets. You take all of Shakespeare,
concatenated, and it's a one-megabyte file, and then you can train language models on it and get infinite Shakespeare if you like, which I think is kind of cool. So we have a text. The first thing we need to do is convert it to a sequence of integers, because, you know, you can't plug text into a transformer; you need to somehow encode it. So the way that encoding is done, in the simplest case, is that every character gets an integer.
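A minimal sketch of that character-level encoding (illustrative, not copied from nanoGPT; the file path is made up and the exact integers depend on the vocabulary you build):

```python
text = open('input.txt').read()          # tiny Shakespeare saved locally (path is illustrative)
chars = sorted(set(text))                # the vocabulary: every distinct character
stoi = {ch: i for i, ch in enumerate(chars)}
itos = {i: ch for ch, i in stoi.items()}

encode = lambda s: [stoi[c] for c in s]          # string -> list of integers
decode = lambda ids: ''.join(itos[i] for i in ids)

data = encode(text)                      # one long 1-D sequence of integers
print(encode("hi there"))                # some list of integers; exact values depend on the vocab
```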
So then, instead of "hi there," we would have a sequence of integers: you can encode every single character as an integer and get a massive sequence of integers. You just concatenate it all into one large, long, one-dimensional sequence, and then you can train on it. Now, here we only have a single document. In some cases, if you have multiple independent documents, what people like to do is create special tokens, and they intersperse those documents with these special end-of-text tokens that they splice in between to create boundaries. But those boundaries actually don't have any modeling impact; it's just that the transformer is supposed to learn via backpropagation that the end-of-document token means it should wipe the memory. Okay, so then we produce batches. These batches of data just mean that we go to the one-dimensional sequence and we take out chunks of this sequence. So say the block size is eight:
then block size indicates the maximum length of context that your transformer will process. So if our block size is 8, that means we are going to have up to eight characters of context to predict the ninth character in the sequence. And the batch size indicates how many sequences we're going to process in parallel, and we want this to be as large as possible so we're fully taking advantage of the GPU and the parallelism of the cores. So in this example, we're doing 4-by-8 batches. Every row here is an independent example, sort of, and every row is a small chunk of the sequence that we're going to train on; and then we have both the inputs and the targets at every single point here. So to fully spell out what's contained in a single 4-by-8 batch to the transformer: when the input is 47 by itself, the target is 58. And when the input is the sequence 47, 58,
the target is 1. And when it's 47, 58, 1, the target is 51, and so on. So a single batch of examples that's 4 by 8 actually has a ton of individual examples that we are expecting the transformer to learn from in parallel. And you'll see that the rows of the batch are learned on completely independently, but the time dimension here, along the horizontal, is also trained on in parallel. So your real batch size is more like B times T; it's just that the context grows linearly for the predictions you make along the T direction in the model. So this is all the examples the model will learn from in this single batch.
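A sketch of how such a 4-by-8 batch could be pulled out of one long sequence (illustrative and in the spirit of nanoGPT's data loader rather than its verbatim code; the stand-in data tensor is made up):

```python
import torch

block_size = 8   # maximum context length
batch_size = 4   # how many chunks we process in parallel

# stand-in for the encoded Shakespeare: random ids from a 65-character vocab
data = torch.randint(0, 65, (1000,))

def get_batch():
    ix = torch.randint(len(data) - block_size, (batch_size,))         # random chunk offsets
    x = torch.stack([data[i:i + block_size] for i in ix])             # inputs:  (4, 8)
    y = torch.stack([data[i + 1:i + 1 + block_size] for i in ix])     # targets: inputs shifted by one
    return x, y

x, y = get_batch()
# every (b, t) position is its own training example: x[b, :t+1] is the context, y[b, t] is the target
```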
So now, this is the GPT class. And because this is a decoder-only model, we're not going to have an encoder; there's no English we're translating from, and we're not trying to condition on some other external information. We're just trying to produce a sequence of words that follow each other, or are likely to. So this is all PyTorch, and I'm going slightly faster because I'm assuming people have taken 231n or something along those lines. But here in the forward pass, we take these indices and we encode the identity of the indices via an embedding lookup table. So every single integer indexes into a lookup table of vectors in this nn.Embedding and pulls out the word vector for that token. Then, because the transformer by itself processes sets natively, we need to also positionally encode these vectors, so that we have both the information about the token identity and its place in the sequence, from one to block size. Now,
the information about what and where is combined additively: the token embeddings and the positional embeddings are just added, exactly as here. So this x here (and then there's an optional dropout), this x basically just contains the set of words and their positions, and that feeds into the blocks of the transformer. We're going to look into what's in a block in a second, but for now this is just a series of blocks in the transformer; and then at the end there's a layer norm, and then you're decoding the logits for the next word, or next integer, in the sequence using a linear projection of the output of this transformer. So lm_head here, short for language-model head, is just a linear function. So basically: positionally encode all the words, feed them into a sequence of blocks, and then apply a linear layer to get the probability distribution for the next character. And then, if we have the targets, which we produced in the data loader (you'll notice the targets are just the inputs offset by one in time), those targets feed into a cross-entropy loss. So this is just a typical negative-log-likelihood classification loss.
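Schematically, that forward pass looks something like this condensed sketch (in the spirit of nanoGPT's GPT module, not its verbatim code; `Block` is the transformer block sketched a bit further down where the blocks are discussed):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GPT(nn.Module):
    def __init__(self, vocab_size, block_size, n_layer, n_head, n_embd):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, n_embd)   # "what": token identity
        self.pos_emb = nn.Embedding(block_size, n_embd)   # "where": position in the sequence
        self.blocks = nn.ModuleList(
            [Block(n_embd, n_head, block_size) for _ in range(n_layer)])
        self.ln_f = nn.LayerNorm(n_embd)
        self.lm_head = nn.Linear(n_embd, vocab_size)      # decode logits for the next token

    def forward(self, idx, targets=None):
        B, T = idx.shape
        pos = torch.arange(T, device=idx.device)
        x = self.tok_emb(idx) + self.pos_emb(pos)         # what and where, combined additively
        for block in self.blocks:
            x = block(x)                                  # communicate, then compute
        logits = self.lm_head(self.ln_f(x))               # (B, T, vocab_size)
        loss = None
        if targets is not None:                           # targets = inputs offset by one in time
            loss = F.cross_entropy(logits.view(B * T, -1), targets.view(B * T))
        return logits, loss
```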
So now let's drill into what's in the blocks, these blocks that are applied sequentially. There's, again, as I mentioned, this communicate phase and this compute phase. In the communicate phase, all the nodes get to talk to each other, and these nodes are basically: if our block size is eight, then we are going to have eight nodes in this graph. The first node is pointed to only by itself; the second node is pointed to by the first node and itself; the third node is pointed to by the first two nodes and itself, etc. So there are eight nodes here. So you take x out of the residual pathway, you apply a layer norm,
and then the self-attention, so that these eight nodes communicate. But you have to keep in mind that the batch is four, so this is also applied across the batch: we have eight nodes communicating, but there's a batch of four of them, all individually communicating among those eight nodes. There's no crisscrossing across the batch dimension, of course; there's no communication across the batch in any way, luckily. And then, once they've exchanged the information, they are processed using the multi-layer perceptron, and that's the compute phase. And then also, here we are missing the cross-attention, because this is a decoder-only model. So all we have is this step here, the multi-headed attention, and that's this line, the communicate phase; and then we have the feed-forward, which is the MLP, and that's the compute phase.
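A sketch of such a block (again in the spirit of nanoGPT rather than its exact code; it uses the `CausalSelfAttention` module sketched below where the attention code is discussed):

```python
import torch.nn as nn

class Block(nn.Module):
    """One transformer block: communicate (attention), then compute (MLP),
    each wrapped in a residual connection with a pre-norm LayerNorm."""
    def __init__(self, n_embd, n_head, block_size):
        super().__init__()
        self.ln1 = nn.LayerNorm(n_embd)
        self.attn = CausalSelfAttention(n_embd, n_head, block_size)
        self.ln2 = nn.LayerNorm(n_embd)
        self.mlp = nn.Sequential(              # per-node computation
            nn.Linear(n_embd, 4 * n_embd),     # the 4x expansion factor mentioned earlier
            nn.GELU(),
            nn.Linear(4 * n_embd, n_embd),
        )

    def forward(self, x):
        x = x + self.attn(self.ln1(x))   # communicate: nodes exchange information
        x = x + self.mlp(self.ln2(x))    # compute: each node processed individually
        return x
```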
I'll take questions a bit later. The MLP here is fairly straightforward: it's just individual processing on each node, transforming the feature representation at that node. It applies a two-layer neural net with a GELU nonlinearity, which you can just think of as a ReLU or something like that; it's just a nonlinearity. So the MLP is straightforward; I don't think there's anything too crazy there. And then this is the causal self-attention part, the communication phase. This is kind of the meat of things and the most complicated part. It's only complicated because of the batching and the implementation detail of how you mask the connectivity in the graph, so that you
can't obtain any information from the future when you're predicting your token; otherwise it gives away the information. So if I'm the fifth token, if I'm at the fifth position, then I'm getting the fourth token coming in at the input, and I'm attending to the third, second, and first, and I'm trying to figure out what the next token is. Well, in this batch, in the next element over in the time dimension, the answer is at the input, so I can't get any information from there. That's why this is all tricky. But basically, in the forward pass we are calculating the queries, keys, and values based on x. So these are the keys, queries, and values. Here, when I'm computing the attention, I have the queries matrix-multiplying the keys; so this is the dot product, in parallel, for all the queries and all the keys, in all the heads. I forgot to mention that there's also the aspect of the heads,
which is also done all in parallel here. So we have the batch dimension, the time dimension, and the head dimension, and you end up with these multi-dimensional tensors, and it's all really confusing. So I invite you to step through it later and convince yourself that it's actually doing the right thing. But basically, you have the batch dimension, the head dimension, and the time dimension, and then you have features for each of them. And so this is evaluating, for all the batch elements, for all the head elements, and for all the time elements, the simple Python that I gave you earlier,
which is the query dot-producting the keys. Then here we do a masked fill, and what this is doing is basically clamping the attention between the nodes that are not supposed to communicate to negative infinity. We use negative infinity because we're about to softmax, and negative infinity will make the attention of those elements be zero. And so here we basically end up with the weights, the sort of affinities between these nodes; then optional dropout; and then here, att matrix-multiplied with v is basically the gathering of the information according to the affinities we calculated: it's just a weighted sum of the values at all those nodes. So this matrix multiply is doing that weighted sum. And then the transpose, contiguous, view is there because it's all batched into these multi-dimensional tensors, but it's really not doing anything; then optional dropout, and then a linear projection back into the residual pathway. So this is implementing the communication phase.
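A condensed sketch of that causal self-attention module (in the spirit of nanoGPT's version, not the verbatim code):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSelfAttention(nn.Module):
    """The communicate phase: multi-head self-attention with a causal mask."""
    def __init__(self, n_embd, n_head, block_size):
        super().__init__()
        self.n_head = n_head
        self.qkv = nn.Linear(n_embd, 3 * n_embd)   # emit queries, keys, values for every node
        self.proj = nn.Linear(n_embd, n_embd)      # project back into the residual pathway
        # triangular mask: token t may only attend to tokens <= t
        self.register_buffer("mask", torch.tril(torch.ones(block_size, block_size)))

    def forward(self, x):
        B, T, C = x.shape
        q, k, v = self.qkv(x).split(C, dim=2)
        # reshape to (B, n_head, T, head_dim) so all heads run in parallel
        q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        att = (q @ k.transpose(-2, -1)) / math.sqrt(k.size(-1))        # affinities
        att = att.masked_fill(self.mask[:T, :T] == 0, float('-inf'))   # no peeking at the future
        att = F.softmax(att, dim=-1)
        y = att @ v                                        # weighted sum of the values
        y = y.transpose(1, 2).contiguous().view(B, T, C)   # re-assemble the heads
        return self.proj(y)
```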
Then you can train this transformer, and then you can generate infinite Shakespeare. You simply do this by... because our block size is eight, we start with some token; say, in this case, you can use something like a newline character as the start token. And then you communicate only with yourself, because there's a single node, and you get the probability distribution for the first word, or the first character, in the sequence. Then you sample the character, bring that character back, re-encode it as an integer, and now you have the second thing. And so you get: okay, we're at the first position, and this is whatever integer it is; add the positional encodings, it goes into the sequence,
goes into the transformer, and again, this token now communicates with the first token and its own identity. And you just keep plugging it back in. And once you run out of the block size, which is eight, you start to crop, because you can never have more than block size, eight, in the way you've trained this transformer. So we have more and more context until eight, and then if you want to generate beyond that, you have to start cropping, because the transformer only works for eight elements in the time dimension. And so all of these transformers, in the naive setting, have a finite block size, or context length; in typical models this will be 1,024 tokens or 2,048 tokens, something like that.
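A sketch of that sampling loop (illustrative; it assumes a model like the `GPT` sketch above that returns `(logits, loss)`):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate(model, idx, max_new_tokens, block_size):
    """idx is a (B, T) tensor of token ids; keep sampling one token at a time."""
    for _ in range(max_new_tokens):
        idx_cond = idx[:, -block_size:]               # crop: the model never saw more context than this
        logits, _ = model(idx_cond)
        probs = F.softmax(logits[:, -1, :], dim=-1)   # distribution for the next token
        next_id = torch.multinomial(probs, num_samples=1)
        idx = torch.cat([idx, next_id], dim=1)        # append and feed it back in
    return idx
```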
But these tokens are usually BPE tokens or SentencePiece tokens or WordPiece tokens; there are many different encodings. So it's not that long, and that's why, as they mentioned, we really want to expand the context size, and it gets gnarly because the attention is quadratic in it. Now, if you want to implement an encoder instead of a decoder attention, then all you have to do with this masking code is delete that line. If you don't mask the attention, then all the nodes communicate with each other, everything is allowed, and information flows between all the nodes. So if you want an encoder here, just delete the line; all the encoder blocks will use attention where this line is deleted. That's it. So you're allowing whatever this encoder might store, say 10 tokens, 10 nodes,
and they are all allowed to communicate with each other, going up the transformer. And then if you want to implement cross-attention, so you have a full encoder-decoder transformer, not just a decoder-only transformer or GPT, then we need to also add cross-attention in the middle. So here in the block there's a self-attention piece, a cross-attention piece, and this MLP. And in the cross-attention, we need to take the features from the top of the encoder; we need to add one more line here, and this would be the cross-attention. I should have implemented it instead of just pointing, I think, but there would be a cross-attention line here, so we'd have three lines, because we need to add another block. And the queries will come from x,
but the keys and the values will come from the top of the encoder, and there will be information flowing from the encoder strictly into all the nodes inside x. And then that's it; it's a very simple modification of the decoder attention. So you'll hear people talk about how you can have a decoder-only model like GPT, you can have an encoder-only model like BERT, or you can have an encoder-decoder model like, say, T5, doing things like machine translation.
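A sketch of what such a cross-attention module could look like (this is not part of nanoGPT; it just illustrates where the queries versus the keys and values come from):

```python
import math
import torch.nn as nn
import torch.nn.functional as F

class CrossAttention(nn.Module):
    """Queries come from the decoder nodes x; keys and values come from the
    top of the encoder (enc_out). No causal mask is needed here."""
    def __init__(self, n_embd, n_head):
        super().__init__()
        self.n_head = n_head
        self.q = nn.Linear(n_embd, n_embd)
        self.kv = nn.Linear(n_embd, 2 * n_embd)
        self.proj = nn.Linear(n_embd, n_embd)

    def forward(self, x, enc_out):
        B, T, C = x.shape
        Te = enc_out.size(1)
        q = self.q(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        k, v = self.kv(enc_out).split(C, dim=2)
        k = k.view(B, Te, self.n_head, C // self.n_head).transpose(1, 2)
        v = v.view(B, Te, self.n_head, C // self.n_head).transpose(1, 2)
        att = F.softmax((q @ k.transpose(-2, -1)) / math.sqrt(k.size(-1)), dim=-1)
        y = (att @ v).transpose(1, 2).contiguous().view(B, T, C)
        return self.proj(y)   # information flows from the encoder into the decoder nodes
```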
And in BERT, you can't train it using this autoregressive language-modeling setup where you're just trying to predict the next element in the sequence; you train it with slightly different objectives. You're putting in the full sentence, the full sentence is allowed to communicate fully, and then you're trying to classify sentiment or something like that. So you're not trying to model the next token in the sequence; these models are trained with masking and other denoising techniques. Okay, so that's kind of like the transformer. [Audience question, roughly: could the transformer perform dynamic routing, where the connectivity can change for every instance? Right now we're enforcing these constraints just by masking,
which is sort of an external enforcement.] So I'm not sure if I fully follow, but there are different ways to look at this analogy. One way is that you can interpret this graph as really fixed; it's just that every time we do the communicate phase, we are using different weights. You can look at it that way. So if we have a block size of eight, in my example, we would have eight nodes; here we have two, four, six... okay, so we'd have eight nodes, you lay them out, and you only connect from left to right. But for a different follow-up, that might not be... yeah,
yeah, you have a graph, and then there's the connectivity. Why would the connectivity change? Usually the connections don't change as a function of the data that we want to apply the network to. I don't think I've seen a single example where the connectivity changes dynamically as a function of the data; usually the connectivity is fixed. If you have an encoder and you're training a BERT, you have however many tokens you want and they are fully connected. If you have a decoder-only model, you have this triangular thing. And if you have an encoder-decoder, then you have, awkwardly, sort of like two pools of nodes. Yeah, I'll go on. [Audience question, roughly: do you have a sense of what research path led to this? There were prior ideas like residual connections and different things,
like layer norm and so on.] Yeah, it's really hard to say. That's why I think this paper is so interesting: usually you'd see more of a path, and maybe they had that path internally and just didn't publish it; all you can see is prior things that didn't quite look like a transformer. I mean, you have ResNets, which have a lot of this: a ResNet would be kind of like this, but there's no self-attention component, though the MLP is kind of there in a ResNet. So a ResNet looks very much like this, except there's no attention, and you can use layer norms in ResNets as well, I believe; typically they would be batch norms. So it is kind of like a ResNet; it's kind of like they took a ResNet and put in a self-attention block in addition to the pre-existing MLP block, which is kind of like the convolutions. And the MLP is, strictly speaking,
kind of like a one-by-one convolution. But I think the idea is similar, in that the MLP is just, you know, a typical weights-nonlinearity-weights operation. But I will say, yeah, it's kind of interesting, because a lot of that lineage just isn't there: they give you this transformer, and then it turns out five years later it hasn't changed, even though everyone's trying to change it. So it's kind of interesting to me that it came as a package, which I think is really interesting historically. And I also talked to the paper's authors, and they were unaware of the impact the transformer would have at the time. So when you read this paper, actually, it's kind of unfortunate, because this is the paper that changed everything, but when people read it, it's like question marks, because it reads like a pretty random machine translation paper. Like, oh,
oh, we're doing machine translation; oh, here's a cool architecture; okay, great results. It doesn't sort of know what's going to happen, and so when people read it today, I think they're potentially kind of confused. I'll have some tweets at the end, but I think I would have renamed it, with the benefit of hindsight. [Audience question about the autoregressive modeling approach.] Well, I think that's a good question as well. Currently, I mean, I certainly don't love the autoregressive modeling approach; I think it's kind of weird to sample a token and then commit to it. So, you know, maybe there are some hybrids with diffusion, as an example, which I think would be really cool, or we'll find some other ways to edit the sequences later,
but still in an autoregressive framework. But I think diffusion is kind of an up-and-coming modeling approach that I personally find much more appealing. When I produce text, I don't go chunk, chunk, chunk and commit; I do a draft one, and then I do a better draft two, and that feels like a diffusion process. So that would be my hope. Okay, another question. Yeah. [Audience question, roughly: would you say the self-attention is sort of computing an edge weight using the dot product of the query and the key, and then the edge weight is
just multiplied by the values, and then you aggregate?] Yes, yes. [And do you think there's an analogy to graph neural networks?] I find graph neural networks kind of a confusing term, because, I mean, previously there was this notion of them, but maybe today everything is a graph neural network, because the transformer is a graph-neural-network processor: the native representation the transformer operates over is sets that are connected by edges in a directed way. So that's the native representation. Okay, I should go on, because I still have like 30 slides. [Question about the square-root-of-d scaling in the attention.] Oh yeah, the square root of d. Basically, if you're initializing with random weights sampled from a Gaussian, then as your dimension size grows, the variance of the dot products grows too, and your softmax would just saturate into a one-hot vector. So dividing by the square root of d is just a way to control the variance and always bring it into a good range for the softmax, so you get a nice, diffuse distribution. So it's almost like an initialization thing.
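A quick numerical way to see that (the dimensions here are made up):

```python
import numpy as np

for d in (4, 64, 1024):
    q = np.random.randn(10000, d)
    k = np.random.randn(10000, d)
    raw = (q * k).sum(axis=1)        # dot products of random Gaussian vectors
    scaled = raw / np.sqrt(d)        # what the attention actually feeds to the softmax
    print(d, raw.var().round(1), scaled.var().round(2))
# the raw variance grows roughly like d, so without the 1/sqrt(d) the softmax
# would saturate toward a one-hot distribution; the scaled variance stays near 1
```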
Okay. So transformers have been applied to all the other fields, and the way this was done was, in my opinion, kind of ridiculous, honestly, because I was a computer vision person, and you had convnets, and they kind of make sense. So what we're doing now with ViTs, as an example, is you take an image and you chop it up into little squares, and then those squares literally feed into a transformer, and that's it. Which is kind of ridiculous. And so the transformer, in the simplest case,
doesn't even really know where these patches might have come from; they are usually positionally encoded, but it has to sort of rediscover a lot of the structure, I think, in some ways, and it's kind of weird to approach it that way. But this simple baseline of chopping up big images into small squares and feeding them in as the individual nodes actually works fairly well, and then this is a transformer encoder, so all the patches are talking to each other throughout the entire transformer. And the number of nodes here would be something like nine.
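A sketch of that chopping-into-patches step (patch size and image size are made up; real ViT implementations often do this with a strided convolution, but a reshape shows the idea):

```python
import torch

def patchify(images, patch=16):
    """(B, C, H, W) images -> (B, num_patches, patch*patch*C) tokens for a transformer."""
    B, C, H, W = images.shape
    x = images.unfold(2, patch, patch).unfold(3, patch, patch)   # (B, C, H/p, W/p, p, p)
    x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)
    return x

tokens = patchify(torch.randn(4, 3, 48, 48))   # -> (4, 9, 768): nine patch-tokens per image
```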
Also in speech recognition: you just take your mel spectrogram, you chop it up into slices, and feed them into a transformer. There was a paper like this, but also Whisper; Whisper is a copy-paste transformer. If you saw Whisper from OpenAI: you just chop up the mel spectrogram, feed it into a transformer, and then pretend you're dealing with text, and it works very well. Decision Transformer in RL: you take your states, actions, and rewards that you experience in an environment, you just pretend it's a language and start to model sequences of that, and then you can use that for planning later; that works pretty well. Even things like AlphaFold; we were just talking about molecules and how you can plug them in. At the heart of AlphaFold, computationally, is also a transformer. One thing I wanted to say about transformers is that I find they're super flexible, and I really enjoy that. I'll give you an example from Tesla. You have a convnet that takes an image and makes predictions about the image, and then the big question is: how do you feed in extra information? And it's not always trivial. Like,
say I have additional information that I want the outputs to be informed by. Maybe I have other sensors, like radar; maybe I have some map information, or vehicle type, or some audio. The question is: how do you feed that information into a convnet? Where do you feed it in? Do you concatenate it? At what stage do you add it? With a transformer, it's much easier, because you just take whatever you want,
you chop it up into pieces, and you feed it in with the set of what you had before, and you let the self-attention figure out how everything should communicate. And that actually apparently works. So just chop up everything and throw it into the mix is kind of the approach. And it frees neural nets from the burden of Euclidean space, where previously you had to arrange your computation to conform to the Euclidean space of three dimensions in which you're laying out the compute; the compute actually kind of happens in 3D space, if you think about it. But in attention, everything is just sets, so it's a very flexible framework: you can just throw stuff into your conditioning set, and everything just self-attends over it. So it's quite beautiful. Okay. So now, what exactly makes transformers so effective? I think a good example of this comes from the GPT-3 paper,
which I encourage people to read: "Language Models are Few-Shot Learners." I would have probably renamed this a little bit; I would have said something like "Transformers are capable of in-context learning," or meta-learning; that's kind of what makes them really special. So basically, the setting they're working with is: I have some context, let's say a passage, and this is just one example of many: I have a passage and I'm asking questions about it. And then, as part of the context in the prompt, I'm giving the questions and the answers; so I'm giving one example of question-answer, another example of question-answer, another example of question-answer, and so on. OK, so what's really interesting is basically that, with more examples given in the context,
And what's really interesting is that with more examples given in the context, the accuracy improves. What that hints at is that the transformer is able to somehow learn in its activations, without doing any gradient descent in the typical fine-tuning fashion. If you fine-tune, you give examples with answers and you update the weights using gradient descent. But it looks like the transformer is internally doing something that resembles gradient descent, some kind of meta-learning, as it is reading the prompt. In the paper they distinguish this outer loop of stochastic gradient descent from an inner loop of in-context learning. So the inner loop is the transformer sort of reading the sequence,
and the outer loop is the training by gradient descent. So basically there's some training happening in the activations of the transformer as it consumes a sequence, and it may very much look like gradient descent. There are some recent papers that hint at this and study it. As an example, in one paper they propose something called the raw operator, argue that it is implemented by a transformer, and then show that you can implement things like ridge regression on top of it. So there are papers hinting that maybe there is something that looks like gradient-based learning inside the activations of the transformer. And I think this is not impossible to think through, because what is gradient-based learning? Forward pass, backward pass, and then an update. Well, that looks like a ResNet, right?
Because you're just adding to the weights: you start with an initial random set of weights, then forward pass, backward pass, update the weights; forward pass, backward pass, update the weights. That looks like a ResNet, and the transformer is a ResNet. This is much more hand-wavy, but basically there are papers trying to hint at why this could potentially be possible. And then I have a bunch of tweets that I just pasted here at the end. They were kind of meant for general consumption, so they're a bit more high level and a little bit hypey, but they talk about why this architecture is so interesting and why it potentially became so popular. I think it simultaneously optimizes three properties that are very desirable. Number one, the transformer is very expressive in the forward pass: it's able to implement very interesting functions, potentially functions that can even do meta-learning. Number two,
it is very optimizable, thanks to things like residual connections, layer norms, and so on. And number three, it's extremely efficient. This is not always appreciated, but if you look at the computational graph, the transformer is a shallow, wide network, which is perfect for taking advantage of the parallelism of GPUs. So I think the transformer was designed very deliberately to run efficiently on GPUs. There's earlier work like the Neural GPU that I really enjoy as well, which is really about how to design neural nets that are efficient on GPUs, thinking backwards from the constraints of the hardware, and I think that's a very interesting way to think about it.
Oh yeah, so here I'm saying I probably would have called the transformer a general-purpose, efficient, optimizable computer, instead of Attention Is All You Need. That's what, in hindsight, I would maybe have called that paper: it's proposing a model that is very general purpose, its forward pass is expressive, it's very efficient in terms of GPU usage, and it's easily optimizable by gradient descent and trains very nicely. Then I have some other hype tweets here; you can read them later, but I think this one is maybe interesting. If previous neural nets are special-purpose computers designed for a specific task, GPT is a general-purpose computer, reconfigurable at runtime to run natural-language programs. The programs are given as prompts, and then GPT runs the program by completing the document. I really like these analogies; you can read the rest later, but let me read out just this one:
it turns out that if you scale up the training set and use a powerful enough neural net like a transformer, the network becomes a kind of general-purpose computer over text. I think that's a nice way to look at it: instead of performing a single fixed sequence of operations, you can design the sequence in the prompt. And because the transformer is powerful and was trained on large enough, hard enough data sets, it kind of becomes this general-purpose text computer. [Audience question, paraphrased: couldn't RNNs in principle do the same thing, or is the transformer's advantage mostly about efficiency rather than anything architecture-specific?] Yeah, I think there's a bit of that. I would say RNNs, in principle,
yes, they can implement arbitrary programs. But I think that's kind of a useless statement to some extent, because while they may be expressive in the sense of raw power, in that they can implement these arbitrary functions, they're not optimizable, and they're certainly not efficient, because they are serial computing devices. If you look at it as a compute graph, an RNN is a very long, thin compute graph.
If you take all the individual neurons and their connectivity, stretch them out, and try to visualize them, an RNN would be a very long graph, and that's bad. It's bad for optimizability too; I don't know exactly why, but the rough intuition is that when you're backpropagating, you don't want to take too many steps. Transformers are a shallow, wide graph, so from supervision to inputs there is a very small number of hops, and it's along residual pathways, which let gradients flow very easily, with all these layer norms controlling the scales of those activations. So there are not too many hops, you go from supervision to input very quickly, and it flows through the graph. And it can all be done in parallel, so you don't have to do this serially: with encoder-decoder RNNs you have to go through the first word, then the second word, then the third word.
But in the transformer, every single word is processed completely in parallel. So I think all three of these properties are really important, and number three, efficiency, is less talked about but extremely important, because in deep learning scale matters: the size of the network that you can train is extremely important, so if it's efficient on the current hardware, you can scale it up to a thousand tokens or whatever. [Audience question, paraphrased: how would another sensor, like radar, come in?] So radar could work too, but I don't actually know the native representation of radar. You would just need to chop it up and enter it, and then you have to encode it somehow: the transformer needs to know that these tokens are coming from radar. So you have some kind of a special embedding token
so that these radar tokens are reflected in the representation, and it's learnable by gradient descent. Vehicle information would also come in with a special embedding token that can be learned. And it's all just a set. You can still positionally encode these sets if you want. Positional encoding means you can hardwire, for example, the coordinates using sines and cosines. You can hardwire that, but it's better if you don't hardwire the position: it's just a vector that is always hanging out at this location, whatever content is there just adds onto it, and this vector is trainable by backprop. That's how you do it.
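A small sketch of those two kinds of learned embeddings, assuming PyTorch: trainable position vectors that hang out at each slot, plus a per-modality embedding added to every token so the model can tell radar apart from camera. The sizes and the three-modality vocabulary are invented for illustration.

```python
import torch
import torch.nn as nn

d, n_tokens = 256, 9

# Learned position vectors: one trainable d-dim vector hanging out at each slot.
pos_emb = nn.Parameter(torch.zeros(n_tokens, d))

# Learned modality embedding: a special vector added to every token of a modality,
# so the transformer can tell radar tokens apart from camera tokens.
modality_emb = nn.Embedding(num_embeddings=3, embedding_dim=d)  # 0=camera, 1=radar, 2=map

camera_tokens = torch.randn(n_tokens, d)                 # content from some encoder
camera_ids = torch.zeros(n_tokens, dtype=torch.long)     # all tokens tagged as "camera"

# Whatever content is there just adds onto the trainable position and modality vectors.
x = camera_tokens + pos_emb + modality_emb(camera_ids)
```

Nothing here is hardwired; both embeddings are ordinary parameters updated by backprop, which is the contrast with fixed sinusoidal encodings.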
[Audience comment, paraphrased: learned positional encodings seem a bit odd; they work, but sometimes there is structure we would actually want to build in.] I mean, the positional encodings have very little inductive bias; they're just vectors always hanging out at their locations. And you're trying to help the network in some way, and I think the intuition is good, but if you have enough data, trying to mess with it is usually a bad thing. Trying to inject knowledge when there is enough knowledge in the data set itself is not usually productive. So it really depends on what scale you are at. If you have infinite data, then you actually want to encode less and less; that turns out to work better. And if you have very little data, then you actually do want to encode some biases. And maybe if you have a much smaller data set,
then maybe convolutions are a good idea, because you get this bias from the convolutional filters. So the transformer is extremely general, but there are ways to mess with the encodings to put in more structure. You could, for example, encode sines and cosines and fix them. Or you could go to the attention mechanism itself and say: if my image is chopped up into patches, this patch can only communicate with this neighborhood. You just do that in the attention matrix, masking out whatever you don't want to communicate. People really play with this, because full attention is inefficient, so they will intersperse, for example, layers that only communicate within little patches with layers that communicate globally, and they'll do all kinds of tricks like that. So you can slowly bring in more inductive bias if you want to; a sketch of the masking trick follows below.
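Here is a minimal sketch of that masking idea, restricting each token to a local 1-D neighborhood in the attention matrix; the window size and random scores are placeholders, and real systems use 2-D neighborhoods and alternate local layers with global ones.

```python
import torch

def local_attention_mask(n_tokens, window=1):
    """True where attention is blocked: token i may only attend to j with |i - j| <= window."""
    idx = torch.arange(n_tokens)
    return (idx[None, :] - idx[:, None]).abs() > window

mask = local_attention_mask(9, window=1)
scores = torch.randn(9, 9)                          # raw attention logits
scores = scores.masked_fill(mask, float("-inf"))    # mask out what shouldn't communicate
attn = scores.softmax(dim=-1)                       # each row now only mixes local neighbours
```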
But the inductive biases are factored out from the core transformer: they're factored out into the connectivity of the nodes and into the positional encodings. [Audience question, paraphrased: are there many papers on modifications of the transformer, and do you have a favorite?] There are probably about 200 papers on this now, if not more; they're kind of hard to keep track of. Honestly, my Safari browser on my computer has something like 200 open tabs. And I'm not even sure I want to pick a favorite, honestly; there was very interesting stuff this year. [Audience question, paraphrased: if you think of the transformer as a CPU, with its instructions given in the prompt and a context of, say, 4,000 tokens as the memory where it stores variables, do you expect that memory to keep growing, or will we work around it?]
I do think context lengths like that will keep getting better. The other approach that I actually like even more is to potentially keep the context length fixed but allow the network to somehow use a scratchpad. The way this works is that you teach the transformer, through examples in the prompt, that, hey, you actually have a scratchpad: you can't remember too much, your context length is finite, but you can write to a scratchpad. You do that by emitting a start-scratchpad token, then writing whatever you want, then emitting an end-scratchpad token, and then continuing with whatever you want. Later, when it's decoding, you have special logic so that when you detect the start-scratchpad token,
you save whatever it puts in there in some external store and allow it to attend over it. So basically, because the transformer is such a good meta-learner, you can teach it dynamically to use other gizmos and gadgets and let it expand its memory that way, if that makes sense. It's just like a human learning to use a notepad, right? You don't have to keep everything in your brain. Keeping things in your brain is kind of like the context length of the transformer, but maybe we can just give it a notebook that it can query.
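As a toy illustration of the decode-side bookkeeping this implies, here is a hedged sketch; the `<scratch>` markers and the `generate_next_token` callable are hypothetical stand-ins, not any real system's API.

```python
# Toy decode-loop bookkeeping for the scratchpad idea: when the model emits a
# START marker, everything up to END is written to external memory instead of
# staying in the finite context window.
START, END = "<scratch>", "</scratch>"

def decode_with_scratchpad(generate_next_token, prompt, max_steps=200):
    context, memory, in_scratch = list(prompt), [], False
    for _ in range(max_steps):
        tok = generate_next_token(context, memory)   # model may also attend over memory
        if tok == START:
            in_scratch = True
        elif tok == END:
            in_scratch = False
        elif in_scratch:
            memory.append(tok)                       # saved outside the context window
        else:
            context.append(tok)                      # normal decoding
    return context, memory

# e.g. with a dummy stand-in "model" that just echoes a fixed script:
script = iter(["Sure.", START, "note: user likes piano", END, "Done."])
ctx, mem = decode_with_scratchpad(lambda c, m: next(script), ["Hi"], max_steps=5)
print(mem)   # ['note: user likes piano']
```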
[Audience comment, paraphrased: it feels like ChatGPT already does something like this with its memory.] I don't know if I detected that. Did you feel like it was more than just a long prompt unfolding? I didn't try it extensively, but I did see a forgetting event, and I kind of felt like the block size had just been moved. Maybe I'm wrong; I don't actually know about the internals of ChatGPT. Yeah. Two online questions. One question is: what do you think about the S4 architecture? S4? I'm sorry, I don't know that one. The second one is a personal question: what are you going to work on next? So right now I'm working on things like nanoGPT. I'm basically moving slightly from computer vision, and the computer-vision-based products, to the language domain a little bit. So, nanoGPT: originally I had minGPT, which I rewrote into nanoGPT, and I'm working on that, trying to reproduce GPTs. And I think something like ChatGPT,
incrementally improved in a product fashion, would be extremely interesting. I think a lot of people feel that, and that's why it spread so wide. So I think there's something like a Google++ to build there that is really interesting. Okay, can we give our speaker a round of applause?
Okay, so yeah, I'm very excited to be here and share our recent research about neurosymbolic common sense reasoning. Part of the goal of this talk will be to address one of the frequently asked questions these days: that NLP, or common sense, or whatever, looks almost solved by ChatGPT, and I have an existential crisis. People do ask me this from time to time. Perhaps it's a case of hasty generalization, especially if we look at some of the examples. "The trophy doesn't fit in the brown suitcase because it's too big. What's too big?" This is a classical Winograd Schema Challenge problem, and here ChatGPT answers it correctly: the trophy is too big.
Impressive. But what if you change the question a little bit? Then it says the trophy itself is too small to fit into the suitcase. So it's not very reliable at the moment. The situation is a little bit like David and Goliath, in the sense that the bigger model appears to be better in many cases, although some of the more careful studies do reveal that smaller models can be better with better data, or better reinforcement learning with human feedback, and whatnot. So it's likely that there are still other ways to improve transformer performance by building smaller models in a more clever way. One way to draw insight is from the classic book The Art of War, which of course says nothing about deep neural networks or transformers, but the wisdom here is: know your enemy, choose your battles, and innovate your weapons,
which we can translate as: evaluate with realism and scrutiny, focus on different types of new tasks and leaderboards, and innovate your algorithms and data. In this talk, I'm going to showcase three such studies, and let's dive in with maieutic prompting. By the way, the recurring theme in this talk will be that smaller models can be better, and that knowledge is power. Let's start with the observation that language models are sometimes amazing. If you ask GPT-3 whether, if you travel west far enough from the West Coast, you will reach the East Coast or not, it says the world is round, which is correct, so you will reach the East Coast eventually, therefore the answer is true. This looks impressive, except when it's not. If you ask other questions, like whether butterflies fly with three wings or not, it says a butterfly has four wings,
therefore the statement is false. But if you read back what it just said as a true/false question, it negates what it just said. So it can be inconsistent with its own statements, and there are many other such inconsistency problems. It's not clear what language models do or do not know. It's almost like language models are some sort of lemons; well, they might be cherries if you only pick the cherries, but they do make strange mistakes. So the question is: how do we make better lemonade from GPT-3? One approach might be to get philosophical and use Socrates' maieutic method, which was originally developed for addressing humans' flawed reasoning, because it turns out even humans are not all that logically consistent, let alone GPT-3. The way it works is this:
we're going to build a maieutic inference tree, and let's use the previous example as a running example. What we do is ask the question, provide the answer as True, and then attach "because", so that we prompt GPT-3 to continue the sentence, which means it now has to explain why the answer is true. In this case the explanation is good, so this is E of T, the explanation for the answer being True. We ask the same question switching out True with False, and then see what GPT-3 might come up with. Here it's trying to go with False as the answer, but it just doesn't have a very good explanation; it just says you cannot reach the East Coast. We call this E of F, the explanation for the answer being False. Now,
let's see how robust or consistent GPT-3 is with respect to its own explanations. We read back E of T and let GPT-3 decide whether it agrees or disagrees, with a label of True or False. The last one here is the negated version of E of T, so we insert the negation "not". And in this case, it's good that it flips the answer when the statement is negated: this is a case where GPT-3 is logically integral with respect to E of T. For E of F though, which was basically a bogus explanation for the wrong answer, it's not able to flip its own labeling, which means GPT-3 is not logically integral there. And that's useful: GPT-3 does know something is strange about the explanation it gave previously. We can keep doing this recursively, making GPT-3 explain its own explanations of explanations.
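A rough sketch of the two prompting moves just described: force an answer and ask for a "because" continuation, then read an explanation back, with and without negation, and check whether the verdict flips. The prompt wording is approximate and `ask_lm` is a hypothetical stand-in for a GPT-3 call, not the paper's exact setup.

```python
# Sketch of one maieutic expansion step. ask_lm(prompt) stands in for a
# language-model completion call and is not a real API here.
Q = "If you travel west far enough from the west coast, you will reach the east coast."

def explain(question, label, ask_lm):
    # Prompt the model to justify a forced answer: "... The answer is True, because ..."
    return ask_lm(f"{question} The answer is {label}, because")

def is_logically_integral(explanation, ask_lm):
    # Read the explanation (and its negation) back as a True/False question and
    # check that the model's verdict flips when the statement is negated.
    agrees     = ask_lm(f"True or False: {explanation}") == "True"
    agrees_neg = ask_lm(f"True or False: it is not the case that {explanation}") == "True"
    return agrees and not agrees_neg

# E_T = explain(Q, "True", ask_lm);  E_F = explain(Q, "False", ask_lm)
# Keep a branch only if is_logically_integral(...) holds, then recurse on it.
```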
So we build this maieutic tree, or graph, for some time, and then keep only the branches that are logically integral, throwing out the non-integral parts for now. But even after chopping off the branches with logical inconsistencies, GPT-3 being GPT-3, the tree will still have some inconsistent explanations. So, to improve the logical consistency, we look at pairwise consistency among the nodes. Stepping back: we first compute a node-wise confidence, which we call a belief; it's defined by an equation that looks at different conditional probabilities and computes their ratio to see how confident the model is about any particular node. We then also look at edge-wise, or pairwise, consistency by using an off-the-shelf natural language inference model's output on whether a pair of statements is a contradiction or not, and from that we create pairwise weights. Now, once you have all of this,
then we can formulate a constrained optimization problem where the inference objective is to assign a label, either True or False, to each of the nodes such that it maximizes the weights satisfied over all of these nodes and edges. Sometimes the labeling will have to flip the original label that the model might have preferred, because that way you can enhance the graph-level consistency. You can solve this with any MAX-SAT solver; SAT means satisfiability, and this is a classical AI search problem. We used one particular solver, but you can use many others. Here, the final output is that the answer to the original question should be True, and it also gives you the per-node label assignments as well.
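Below is an illustrative, brute-force version of that graph-level inference over a tiny three-node graph; the real system formulates it as weighted MAX-SAT with the paper's specific belief and NLI-based consistency weights, which are simplified to made-up numbers here.

```python
from itertools import product

# Toy version of the graph-level inference: pick True/False labels for all nodes
# to maximize agreement with per-node beliefs and pairwise consistency edges.
# (The real system uses an off-the-shelf MAX-SAT solver; we brute-force a tiny graph.)
nodes = ["Q", "E_T", "E_F"]
belief = {"Q": 0.2, "E_T": 0.9, "E_F": 0.3}             # confidence that each node is True
consistent = {("Q", "E_T"): +1.0, ("Q", "E_F"): -1.0}   # +1: should agree, -1: should disagree

def score(assign):
    s = sum((belief[n] if assign[n] else 1 - belief[n]) for n in nodes)
    for (a, b), w in consistent.items():
        s += w if assign[a] == assign[b] else -w
    return s

best = max(product([True, False], repeat=len(nodes)),
           key=lambda labels: score(dict(zip(nodes, labels))))
print(dict(zip(nodes, best)))   # {'Q': True, 'E_T': True, 'E_F': False}
```

Note that Q ends up labeled True even though its own belief leaned False, because graph-level consistency outweighs it; that is the flipping behavior just described.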
So what does this mean in terms of empirical results? When tested on CommonsenseQA 2.0, canonical prompting, shown in green, on top of GPT-3, so basically few-shot prompting of GPT-3, gives you performance only a bit better than chance. This is a true/false QA data set, so chance level is 50, and GPT-3 is barely better than chance. Recently there have been ideas such as chain-of-thought and self-consistency that improve the vanilla prompting method considerably, so if you use such variations you get a performance gain. The purple bars are different variants of that, but together they all do worse than maieutic prompting, which in fact does better than a supervised model trained on T5. Usually a supervised T5 model is hard to beat with GPT-3 few-shot, but this is basically an inference-time algorithm, practically unsupervised, and it does well there. And similarly,
we see a large boost when tested on other common sense benchmarks such as CREAK or Com2Sense. What this tells us is that although the emergent capabilities of large transformers are phenomenal, they can be not very robust on some of these common sense challenges, in large part due to logical inconsistencies, which can be dramatically reduced when you do this sort of symbolic reasoning on top. So not only did Socrates' method help with flawed human reasoning, it can also dramatically enhance a flawed neural network's reasoning. Okay, so moving to the next topic: symbolic knowledge distillation. This is work that tries to convert general language models, built on transformers, into causal common sense models, also transformers. The reason we might want to worry about common sense models is that, despite human-level or even superhuman-level performance on a variety of leaderboards,
the state-of-the-art models are brittle when given adversarial or out-of-domain examples. Transformers can make seemingly strange mistakes; it's almost like solving only a data set without really solving the underlying task. This phenomenon is sometimes described as a systematic generalization problem. Why does this happen? Unlike humans, who truly learn how the world works conceptually, transformers learn surface patterns in language or images that are powerful for many downstream use cases, but that don't amount to a really robust understanding of concepts and how the world works. So, in order to bridge this gap, we can really think about this challenge of learning, or
acquiring, common sense capabilities for machines. The operational definition of common sense in this talk will be the basic level of practical knowledge and reasoning, concerning everyday situations and events, that is commonly shared among most people. The last part is really important: it is commonly shared among most people, but it's not shared by everybody in the universe, because additional context can always change what is commonsensical for a given culture or situation. For example, in general, you and I probably agree that it's okay to keep a closet door open, but it's not okay to keep the fridge open, because the food inside might go bad. These are general rules of thumb that we abide by. But, you know,
of course, if you go to your friend's house you might behave a little better and keep their closet doors closed. And as for the fridge door, if you're in a store and the fridge isn't really hooked up to the wall, then it doesn't matter whether the door is open or not, because there's no food inside. You can come up with many situations in which these basic rules of thumb have exceptions. That is the key challenge of common sense: it's not universal knowledge, but it is shared across a large population of people. Such common sense is essential for humans to live and interact with each other in a reasonable and safe way. And as AI becomes an increasingly important aspect of human lives, and with ChatGPT all the more so,
it's good if AI can understand human needs, actions, and values better. The premise of this talk is that language models are not equivalent to knowledge models; language models today do acquire a great deal of knowledge, but they're not equivalent. So we developed a symbolic common sense knowledge graph known as ATOMIC a few years ago, four years ago now, as well as neural common sense models built on top of it, trained by fine-tuning off-the-shelf language models using ATOMIC as the source of training data. Up until two years ago, ATOMIC was fully crowdsourced by humans, an assumption that in this talk I'm going to lift; but at first the norm was that it all had to be human-crowdsourced. So you can almost consider ATOMIC as human demonstrations, in the current version of, you know, ChatGPT