---
tags:
- summarization
- led
- summary
- longformer
- booksum
- long-document
- long-form
license:
- apache-2.0
- bsd-3-clause
datasets:
- kmfoda/booksum
metrics:
- rouge
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
    because it takes time to accumulate the strain energy for the rupture. The rates
    at which tectonic plates move and accumulate strain at their boundaries are approximately
    uniform. Therefore, in first approximation, one may expect that large ruptures
    of the same fault segment will occur at approximately constant time intervals.
    If subsequent main shocks have different amounts of slip across the fault, then
    the recurrence time may vary, and the basic idea of periodic mainshocks must be
    modified. For great plate boundary ruptures the length and slip often vary by
    a factor of 2. Along the southern segment of the San Andreas fault the recurrence
    interval is 145 years with variations of several decades. The smaller the standard
    deviation of the average recurrence interval, the more specific could be the long
    term prediction of a future mainshock.
  example_title: earthquakes
- text: " A typical feed-forward neural field algorithm. Spatiotemporal coordinates\
    \ are fed into a neural network that predicts values in the reconstructed domain.\
    \ Then, this domain is mapped to the sensor domain where sensor measurements are\
    \ available as supervision. Class and Section Problems Addressed Generalization\
    \ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid\
    \ Representations (Section 3) Computation & memory efficiency, representation\
    \ capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture\
    \ (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields\
    \ (Section 6) Edit ability, constraints, regularization. Table 2: The five classes\
    \ of techniques in the neural field toolbox each addresses problems that arise\
    \ in learning, inference, and control. (Section 3). We can supervise reconstruction\
    \ via differentiable forward maps that transform Or project our domain (e.g, 3D\
    \ reconstruction via 2D images; Section 4) With appropriate network architecture\
    \ choices, we can overcome neural network spectral biases (blurriness) and efficiently\
    \ compute derivatives and integrals (Section 5). Finally, we can manipulate neural\
    \ fields to add constraints and regularizations, and to achieve editable representations\
    \ (Section 6). Collectively, these classes constitute a 'toolbox' of techniques\
    \ to help solve problems with neural fields There are three components in a conditional\
    \ neural field: (1) An encoder or inference function \u20AC that outputs the conditioning\
    \ latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional\
    \ vector, and is often referred to aS a latent code Or feature code_ (2) A mapping\
    \ function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural\
    \ field itself $. The encoder \u20AC finds the most probable z given the observations\
    \ O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability\
    \ to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding\
    \ schemes with different optimality guarantees (Section 2.1.1), both global and\
    \ local conditioning (Section 2.1.2), and different mapping functions Y (Section\
    \ 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface\
    \ shape given a partial or noisy point cloud. We need a suitable prior over the\
    \ sur- face in its reconstruction domain to generalize to the partial observations.\
    \ A neural network expresses a prior via the function space of its architecture\
    \ and parameters 0, and generalization is influenced by the inductive bias of\
    \ this function space (Section 5)."
  example_title: scientific paper
- text: ' the big variety of data coming from diverse sources is one of the key properties
    of the big data phenomenon. It is, therefore, beneficial to understand how data
    is generated in various environments and scenarios, before looking at what should
    be done with this data and how to design the best possible architecture to accomplish
    this The evolution of IT architectures, described in Chapter 2, means that the
    data is no longer processed by a few big monolith systems, but rather by a group
    of services In parallel to the processing layer, the underlying data storage has
    also changed and became more distributed This, in turn, required a significant
    paradigm shift as the traditional approach to transactions (ACID) could no longer
    be supported. On top of this, cloud computing is becoming a major approach with
    the benefits of reducing costs and providing on-demand scalability but at the
    same time introducing concerns about privacy, data ownership, etc In the meantime
    the Internet continues its exponential growth: Every day both structured and unstructured
    data is published and available for processing: To achieve competitive advantage
    companies have to relate their corporate resources to external services, e.g.
    financial markets, weather forecasts, social media, etc While several of the sites
    provide some sort of API to access the data in a more orderly fashion; countless
    sources require advanced web mining and Natural Language Processing (NLP) processing
    techniques: Advances in science push researchers to construct new instruments
    for observing the universe O conducting experiments to understand even better
    the laws of physics and other domains. Every year humans have at their disposal
    new telescopes, space probes, particle accelerators, etc These instruments generate
    huge streams of data, which need to be stored and analyzed. The constant drive
    for efficiency in the industry motivates the introduction of new automation techniques
    and process optimization: This could not be done without analyzing the precise
    data that describe these processes. As more and more human tasks are automated,
    machines provide rich data sets, which can be analyzed in real-time to drive efficiency
    to new levels. Finally, it is now evident that the growth of the Internet of Things
    is becoming a major source of data. More and more of the devices are equipped
    with significant computational power and can generate a continuous data stream
    from their sensors. In the subsequent sections of this chapter, we will look at
    the domains described above to see what they generate in terms of data sets. We
    will compare the volumes but will also look at what is characteristic and important
    from their respective points of view. 3.1 The Internet is undoubtedly the largest
    database ever created by humans. While several well described; cleaned, and structured
    data sets have been made available through this medium, most of the resources
    are of an ambiguous, unstructured, incomplete or even erroneous nature. Still,
    several examples in the areas such as opinion mining, social media analysis, e-governance,
    etc, clearly show the potential lying in these resources. Those who can successfully
    mine and interpret the Internet data can gain unique insight and competitive advantage
    in their business An important area of data analytics on the edge of corporate
    IT and the Internet is Web Analytics.'
  example_title: data science textbook
- text: "Transformer-based models have shown to be very useful for many NLP tasks.\
    \ However, a major limitation of transformers-based models is its O(n^2)O(n 2)\
    \ time & memory complexity (where nn is sequence length). Hence, it's computationally\
    \ very expensive to apply transformer-based models on long sequences n > 512n>512.\
    \ Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention\
    \ try to remedy this problem by approximating the full attention matrix. You can\
    \ checkout \U0001F917's recent blog post in case you are unfamiliar with these\
    \ models.\nBigBird (introduced in paper) is one of such recent models to address\
    \ this issue. BigBird relies on block sparse attention instead of normal attention\
    \ (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a\
    \ much lower computational cost compared to BERT. It has achieved SOTA on various\
    \ tasks involving very long sequences such as long documents summarization, question-answering\
    \ with long contexts.\nBigBird RoBERTa-like model is now available in \U0001F917\
    Transformers. The goal of this post is to give the reader an in-depth understanding\
    \ of big bird implementation & ease one's life in using BigBird with \U0001F917\
    Transformers. But, before going into more depth, it is important to remember that\
    \ the BigBird's attention is an approximation of BERT's full attention and therefore\
    \ does not strive to be better than BERT's full attention, but rather to be more\
    \ efficient. It simply allows to apply transformer-based models to much longer\
    \ sequences since BERT's quadratic memory requirement quickly becomes unbearable.\
    \ Simply put, if we would have \u221E compute & \u221E time, BERT's attention\
    \ would be preferred over block sparse attention (which we are going to discuss\
    \ in this post).\nIf you wonder why we need more compute when working with longer\
    \ sequences, this blog post is just right for you!\nSome of the main questions\
    \ one might have when working with standard BERT-like attention include:\nDo all\
    \ tokens really have to attend to all other tokens? Why not compute attention\
    \ only over important tokens? How to decide what tokens are important? How to\
    \ attend to just a few tokens in a very efficient way? In this blog post, we will\
    \ try to answer those questions.\nWhat tokens should be attended to? We will give\
    \ a practical example of how attention works by considering the sentence 'BigBird\
    \ is now available in HuggingFace for extractive question answering'. In BERT-like\
    \ attention, every word would simply attend to all other tokens.\nLet's think\
    \ about a sensible choice of key tokens that a queried token actually only should\
    \ attend to by writing some pseudo-code. Will will assume that the token available\
    \ is queried and build a sensible list of key tokens to attend to.\n>>> # let's\
    \ consider following sentence as an example >>> example = ['BigBird', 'is', 'now',\
    \ 'available', 'in', 'HuggingFace', 'for', 'extractive', 'question', 'answering']\n\
    >>> # further let's assume, we're trying to understand the representation of 'available'\
    \ i.e. >>> query_token = 'available' >>> # We will initialize an empty `set` and\
    \ fill up the tokens of our interest as we proceed in this section. >>> key_tokens\
    \ = [] # => currently 'available' token doesn't have anything to attend Nearby\
    \ tokens should be important because, in a sentence (sequence of words), the current\
    \ word is highly dependent on neighboring past & future tokens. This intuition\
    \ is the idea behind the concept of sliding attention."
  example_title: bigbird blog intro
- text: 'The majority of available text summarization datasets include short-form
    source documents that lack long-range causal and temporal dependencies, and often
    contain strong layout and stylistic biases. While relevant, such datasets will
    offer limited challenges for future generations of text summarization systems.
    We address these issues by introducing BookSum, a collection of datasets for long-form
    narrative summarization. Our dataset covers source documents from the literature
    domain, such as novels, plays and stories, and includes highly abstractive, human
    written summaries on three levels of granularity of increasing difficulty: paragraph-,
    chapter-, and book-level. The domain and structure of our dataset poses a unique
    set of challenges for summarization systems, which include: processing very long
    documents, non-trivial causal and temporal dependencies, and rich discourse structures.
    To facilitate future work, we trained and evaluated multiple extractive and abstractive
    summarization models as baselines for our dataset.'
  example_title: BookSum Abstract
inference:
  parameters:
    max_length: 64
    min_length: 8
    no_repeat_ngram_size: 3
    early_stopping: true
    repetition_penalty: 3.5
    length_penalty: 0.3
    encoder_no_repeat_ngram_size: 3
    num_beams: 4
model-index:
- name: pszemraj/led-base-book-summary
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: kmfoda/booksum
      type: kmfoda/booksum
      config: kmfoda--booksum
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 33.4536
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 5.2232
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 16.2044
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 29.9765
      verified: true
    - name: loss
      type: loss
      value: 3.1985862255096436
      verified: true
    - name: gen_len
      type: gen_len
      value: 191.9783
      verified: true
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: samsum
      type: samsum
      config: samsum
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 32.0
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 10.0781
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 23.6331
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 28.7831
      verified: true
    - name: loss
      type: loss
      value: 2.903024673461914
      verified: true
    - name: gen_len
      type: gen_len
      value: 60.7411
      verified: true
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: cnn_dailymail
      type: cnn_dailymail
      config: 3.0.0
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 30.5046
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 13.2577
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 19.0306
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 28.3421
      verified: true
    - name: loss
      type: loss
      value: 3.9484164714813232
      verified: true
    - name: gen_len
      type: gen_len
      value: 231.0762
      verified: true
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: billsum
      type: billsum
      config: default
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 36.8502
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 15.9147
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 23.4762
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 30.9597
      verified: true
    - name: loss
      type: loss
      value: 3.878790855407715
      verified: true
    - name: gen_len
      type: gen_len
      value: 131.3622
      verified: true
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: big_patent
      type: big_patent
      config: y
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 33.7585
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 9.4101
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 18.8927
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 28.5051
      verified: true
    - name: loss
      type: loss
      value: 5.162865161895752
      verified: true
    - name: gen_len
      type: gen_len
      value: 222.6626
      verified: true
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: multi_news
      type: multi_news
      config: default
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 38.7332
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 11.0072
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 18.6018
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 34.5911
      verified: true
    - name: loss
      type: loss
      value: 3.5744354724884033
      verified: true
    - name: gen_len
      type: gen_len
      value: 192.0014
      verified: true
---
| # Longformer Encoder-Decoder (LED) for Narrative-Esque Long Text Summarization | |
| <a href="https://colab.research.google.com/gist/pszemraj/36950064ca76161d9d258e5cdbfa6833/led-base-demo-token-batching.ipynb"> | |
| <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> | |
| </a> | |
- **What:** This is the current result of the quest for a summarization model that condenses technical and other long-form information well, across general, academic, and narrative usage.
- **Use cases:** long narrative summarization (think stories, as the dataset intended), article/paper/textbook/other summarization, and technical-to-simple summarization.
- Models trained on this dataset also tend to _explain_ what they are summarizing, which IMO is awesome.
- Works well on large amounts of text and can handle up to 16384 tokens per batch (a rough batching sketch follows the note below).
- See the examples in the Colab demo linked above, or try the [demo on Spaces](https://huggingface.co/spaces/pszemraj/summarize-long-text).

> Note: the hosted inference API is set to generate a maximum of 64 tokens for runtime reasons, so summaries may be truncated (depending on the length of the input text). For best results, run the model in Python as shown below.
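For documents longer than the 16384-token window, the Colab demo linked above batches the input by tokens. Below is a minimal, self-contained sketch of that idea; the helper name `summarize_long_text` and the exact chunking strategy are illustrative, not the demo's code.

```python
import torch
from transformers import AutoTokenizer, pipeline

hf_name = 'pszemraj/led-base-book-summary'

tokenizer = AutoTokenizer.from_pretrained(hf_name)
summarizer = pipeline(
    "summarization",
    hf_name,
    device=0 if torch.cuda.is_available() else -1,
)

def summarize_long_text(text: str, max_tokens: int = 16000) -> str:
    """Summarize `text` chunk by chunk, keeping chunks under the 16384-token window."""
    token_ids = tokenizer.encode(text, add_special_tokens=False)
    # split the token ids into chunks and decode each chunk back to text
    chunks = [
        tokenizer.decode(token_ids[i : i + max_tokens], skip_special_tokens=True)
        for i in range(0, len(token_ids), max_tokens)
    ]
    summaries = summarizer(
        chunks,
        min_length=8,
        max_length=256,
        no_repeat_ngram_size=3,
        encoder_no_repeat_ngram_size=3,
        repetition_penalty=3.5,
        num_beams=4,
        early_stopping=True,
    )
    return "\n".join(s["summary_text"] for s in summaries)
```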
| ## About | |
- Trained on the BookSum dataset released by Salesforce (this is what adds the `bsd-3-clause` license).
- Trained for 16 epochs, compared to [`pszemraj/led-base-16384-finetuned-booksum`](https://huggingface.co/pszemraj/led-base-16384-finetuned-booksum):
  - hyperparameters were adjusted for a _very_ gentle fine-tuning run (very low learning rate, etc.).
  - all generation parameters on the API are kept the same for easy comparison between versions.
| ## Other Checkpoints on Booksum | |
| - See [led-large-book-summary](https://huggingface.co/pszemraj/led-large-book-summary) for LED-large trained on the same dataset. | |
| --- | |
| # Usage - Basics | |
- it is recommended to set `encoder_no_repeat_ngram_size=3` when calling the pipeline object, as it improves summary quality.
  - this parameter forces the model to use new vocabulary and create an abstractive summary; otherwise, it may simply compile the best _extractive_ summary from the input provided.
| - create the pipeline object: | |
| ```python | |
import torch
from transformers import pipeline

hf_name = 'pszemraj/led-base-book-summary'

# use the GPU (device 0) if one is available, otherwise fall back to CPU
summarizer = pipeline(
    "summarization",
    hf_name,
    device=0 if torch.cuda.is_available() else -1,
)
| ``` | |
| - put words into the pipeline object: | |
| ```python | |
wall_of_text = "your words here"

result = summarizer(
    wall_of_text,
    min_length=8,
    max_length=256,
    no_repeat_ngram_size=3,
    encoder_no_repeat_ngram_size=3,
    repetition_penalty=3.5,
    num_beams=4,
    do_sample=False,
    early_stopping=True,
)
# the summarization pipeline returns a list of dicts keyed by 'summary_text'
print(result[0]['summary_text'])
| ``` | |
| --- | |