Dataset: CrowdAILab/scicap

Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 5 new columns ({'caption_length', 'image_id', 'arXiv_id', 'caption_id', 'categories'}) and 2 missing columns ({'images', 'annotations'}).

This happened while the json dataset builder was generating data using

hf://datasets/CrowdAILab/scicap/train-metadata.json (at revision 3a6b8bc9d9dd204f453107b1339a6e9dce20b51b)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1871, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 643, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2293, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2241, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              image_id: int64
              caption_id: int64
              caption_length: int64
              arXiv_id: double
              categories: string
              -- schema metadata --
              pandas: '{"index_columns": [], "column_indexes": [], "columns": [{"name":' + 711
              to
              {'images': {'figure_type': Value(dtype='string', id=None), 'file_name': Value(dtype='string', id=None), 'id': Value(dtype='int64', id=None), 'ocr': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': {'caption': Value(dtype='string', id=None), 'caption_no_index': Value(dtype='string', id=None), 'id': Value(dtype='int64', id=None), 'image_id': Value(dtype='int64', id=None), 'mention': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'paragraph': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1431, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 992, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1742, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1873, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 5 new columns ({'caption_length', 'image_id', 'arXiv_id', 'caption_id', 'categories'}) and 2 missing columns ({'images', 'annotations'}).
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/CrowdAILab/scicap/train-metadata.json (at revision 3a6b8bc9d9dd204f453107b1339a6e9dce20b51b)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
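
The configuration split suggested by the error can be declared in the dataset's README.md YAML header, as described at the docs URL quoted above. Below is a minimal sketch of such a split; the config names and the captions file name are hypothetical placeholders, not the repository's actual layout:

    configs:
    - config_name: captions
      data_files:
      - split: train
        path: train-captions.json    # hypothetical file holding the nested images/annotations records
    - config_name: metadata
      data_files:
      - split: train
        path: train-metadata.json    # the flat per-caption metadata file named in the error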

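To reproduce the schema mismatch locally, the offending file can be fetched at the revision quoted in the error and inspected directly. A minimal sketch using huggingface_hub and pandas; whether the file is a single JSON document or JSON Lines is not visible here, so the read_json call may need adjusting:

    import pandas as pd
    from huggingface_hub import hf_hub_download

    # Download train-metadata.json at the pinned revision from the error message.
    path = hf_hub_download(
        repo_id="CrowdAILab/scicap",
        filename="train-metadata.json",
        repo_type="dataset",
        revision="3a6b8bc9d9dd204f453107b1339a6e9dce20b51b",
    )

    # Per the cast error, this file should expose five flat columns instead of
    # the nested images/annotations schema used by the other data files.
    df = pd.read_json(path)  # assumption: records layout; use lines=True for JSON Lines
    print(df.columns.tolist())
    # Expected: ['image_id', 'caption_id', 'caption_length', 'arXiv_id', 'categories']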

Preview rows (columns: images (dict) and annotations (dict)). Each pair of JSON objects below is one row: the images record first, then its annotations record.
{ "figure_type": "Graph Plot", "file_name": "000000053690.png", "id": 53690, "ocr": [ "2500", "LDA", "Z000", "HRIN'", "4", "1", "1Q00", "size Ilhousend c' revers", "Coiqu:" ] }
{ "caption": "Figure 1: Execution time of LDA, MC and HMM on data sets of different sizes. HMM achieves a smaller execution time than LDA but greater than MC.", "caption_no_index": "Execution time of LDA, MC and HMM on data sets of different sizes. HMM achieves a smaller execution time than LDA but greater than MC.", "id": 33458, "image_id": 53690, "mention": [ [ "Figure 1 shows for each sub data set the execution time of the learning phase." ] ], "paragraph": [ "We use the food reviews data set used in (McAuley and Leskovec, 2013) and construct seven sub data sets with 10K, 50K, 100K, 200K, 300K and 500K food reviews respectively. We measure the execution time of the learning algorithms of the models on each of these sub data sets. Figure 1 shows for each sub data set the execution time of the learning phase. As we can see, MC outperforms the other methods in terms of scalability because it only builds n-grams. HMM has a higher execution time because the data sets have to be tagged using a part-of-speech tagger. LDA performs the worst due to the extensive learning phase." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053691.png", "id": 53691, "ocr": [ "3" ] }
{ "caption": "Figure 6: Illustration of the number of websites, webpages and bitext harvested in an unsupervised manner for English-Hindi. other refers to bitext in a language pair other than English-Hindi and unfertile refers to entry points that did not harvest any bitext.", "caption_no_index": "Illustration of the number of websites, webpages and bitext harvested in an unsupervised manner for English-Hindi. other refers to bitext in a language pair other than English-Hindi and unfertile refers to entry points that did not harvest any bitext.", "id": 33459, "image_id": 53691, "mention": [ [ "Figure 6 shows the number of Web sites, pages and bitext harvested using the setup." ], [ "From Figure 6 it can be seen that the growth of the entry points hypothesized at each iteration of the intra-site crawling procedure is not linearly related to the bitext harvested.", "Figure 6: Illustration of the number of websites, webpages and bitext harvested in an unsupervised manner for English-Hindi. other refers to bitext in a language pair other than English-Hindi and unfertile refers to entry points that did not harvest any bitext." ] ], "paragraph": [ "for language pairs with limited resources, i.e., lack of publicly available large database or language resources. As an instantiation of this goal, we conducted a detailed study on English-Hindi. We used the collocated link extraction procedure described in Section 4.1 to compile 1638 potential entry points in English-Hindi. Unlike the language pairs used in the previous section, we did not have access to parallel text or a bilingual dictionary for English-Hindi. Hence, we used a completely unsupervised scheme to harvest parallel text from the initial entry points. The intra-site crawler (see Section 5) was modified to perform document matching using the HTML structure of the webpages and the dynamic programming text alignment procedure relied only on sentence length and identity anchor words (words that are present in the source and target sentence). We computed the distance between the DOM trees of two HTML pages to decide if the pages contained parallel text (Pawlik and Augsten, 2011). We ran the crawler for three iterations, each iteration using the parallel entry points identified in the previous step. Figure 6 shows the number of Web sites, pages and bitext harvested using the setup. Since, Hindi characters have a pre-defined range, we could filter out bitext that did not contain Hindi.", "From Figure 6 it can be seen that the growth of the entry points hypothesized at each iteration of the intra-site crawling procedure is not linearly related to the bitext harvested. We conjecture that websites containing English-Hindi parallel text do not point to significant number of newer English-Hindi websites. Unlike European languages the domain of the WWW containing English-Hindi parallel text is rather limited. Figure 6: Illustration of the number of websites, webpages and bitext harvested in an unsupervised manner for English-Hindi. other refers to bitext in a language pair other than English-Hindi and unfertile refers to entry points that did not harvest any bitext." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053692.png", "id": 53692, "ocr": [ "TETTE}", "Keebr", "'eav", "prT", "40nzx[TIXR1TS", "X\" O;", "coe ", "JhAm;" ] }
{ "caption": "Figure 2: Step 2: Cause candidate extraction using QA.", "caption_no_index": "Step : Cause candidate extraction using QA.", "id": 33460, "image_id": 53692, "mention": [ [ "We exploited embeddings from multi-qa-mpnetbase-dot-v1 5 , a model designed for semantic search to compute the dot similarity score of the question and passages and extract relevant ones (Figure 2)." ] ], "paragraph": [ "What caused <EVENT_TITLE>? We split articles into smaller passages, with a maximum of 200 WordPiece (Schuster and Nakajima, 2012) tokens, retaining the text structure, i.e., sentences and paragraphs. We exploited embeddings from multi-qa-mpnetbase-dot-v1 5 , a model designed for semantic search to compute the dot similarity score of the question and passages and extract relevant ones (Figure 2)." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053693.png", "id": 53693, "ocr": [ "X", "Ia" ] }
{ "caption": "Figure 1: ROC curves", "caption_no_index": "ROC curves.", "id": 33461, "image_id": 53693, "mention": [ [ "Each Figure 1: ROC curves pair of synsets (S t , S h ) is an oriented entailment relation between S t and S h ." ], [ "The ROC curve (Sensitivity vs. 1 − Specif icity) naturally follows (see Fig. 1)." ] ], "paragraph": [ "As test bed we used existing resources: a non trivial set of controlled verb entailment pairs is in fact contained in WordNet (Miller, 1995). There, the entailment relation is a semantic relation defined at the synset level, standing in the verb subhierarchy. Each Figure 1: ROC curves pair of synsets (S t , S h ) is an oriented entailment relation between S t and S h . WordNet contains 415 entailed synsets. These entailment relations are consequently stated also at the lexical level. The pair (S t , S h ) naturally implies that v t entails v h for each possible v t ∈ S t and v h ∈ S h . It is then possible to derive from the 415 entailment synset a test set of 2,250 verb pairs. As the proposed model is applicable only when hypotheses can be personified, the number of the pairs relevant for the experiment is thus reduced to 856. This set is hereafter called the True Set (T S).", "where p((v h , v t ) ∈ T S|S(v h , v t ) > t) is the probability of a candidate pair (v h , v t ) to belong to TS if the test is positive, i.e. the value S(v h , v t ) of the entailment detection measure is greater than t, while p((v h , v t ) ∈ CS|S(v h , v t ) < t) is the probability of belonging to CS if the test is negative. The ROC curve (Sensitivity vs. 1 − Specif icity) naturally follows (see Fig. 1)." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053694.png", "id": 53694, "ocr": [ "39.75", "39.50", "19.25", "39.0]", "5", "38.75", "38.5]", "38.25", "38.0]", "O- encemnlec" ] }
{ "caption": "Figure 4: Relation between number of model ensembles and BLEU score on ASPEC En-Ja.", "caption_no_index": "Relation between number of model ensembles and BLEU score on ASPEC En-Ja.", "id": 33462, "image_id": 53694, "mention": [ [ "Figure 4 shows the relation between the number of model ensembles and the BLEU score 5 ." ] ], "paragraph": [ "Figure 4 shows the relation between the number of model ensembles and the BLEU score 5 . As we increased the number of models used, the BLEU scores improved but the impact gradually decreased. We only ensembled eight models for our submissions due to time and computational cost limitations but it would be more effective to ensemble more models." ] }
{ "figure_type": "Bar Chart", "file_name": "000000053695.png", "id": 53695, "ocr": [ "1", "restaurari", "3", "taxi", "1", "trwiti", "1", "1" ] }
{ "caption": "Figure 2: True intent distribution of the MultiWOZ dataset – Vertical axis denotes percentage of dialogs per intent", "caption_no_index": "True intent distribution of the MultiWOZ dataset – Vertical axis denotes percentage of dialogs per intent.", "id": 33463, "image_id": 53695, "mention": [ [ "To evaluate the quality of the ELDA approach we use the MultiWOZ dataset (Han et al., 2021), which comprises more than 10,000 annotated agentcustomer dialogs across 7 domains/intents, namely: train, taxi, hotel, restaurant, attraction, police and hospital (Table 2, Figure 2)." ] ], "paragraph": [ "To evaluate the quality of the ELDA approach we use the MultiWOZ dataset (Han et al., 2021), which comprises more than 10,000 annotated agentcustomer dialogs across 7 domains/intents, namely: train, taxi, hotel, restaurant, attraction, police and hospital (Table 2, Figure 2). The dialogs are segmented into turns, which we use as utterances, and each dialog is annotated with the customer's intents, each dialog having at least one intent. In our case, we refer to each dialog's set of intents as its \"label\"." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053696.png", "id": 53696, "ocr": [ "rxr" ] }
{ "caption": "Figure 2: An SFST for a post-pivot T-bounded and Rbounded copy relation.", "caption_no_index": "An SFST for a post-pivot T-bounded and Rbounded copy relation.", "id": 33464, "image_id": 53696, "mention": [ [ "This would reverse the pattern in Figure 2, except the left context and not the right context would be bounded." ] ], "paragraph": [ "The reverse of the relation in the proof of Theorem 3 would be a pre-pivot copy relation that is T-bounded and L-bounded. This would reverse the pattern in Figure 2, except the left context and not the right context would be bounded. Such a pattern is not subsequential, but it is reverse subsequential. Corollary 1. A regular pre-pivot copy relation that is T-bounded and L-bounded is reverse subsequential." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053697.png", "id": 53697, "ocr": [ "Mar", "Setu", "Cccmmani", "2hi", "FFMAm", "34", "Tuui-", "Fuxi %", "Sn Racaen" ] }
{ "caption": "Figure 1: The configuration of the conventional CLQA system.", "caption_no_index": "The configuration of the conventional CLQA system.", "id": 33465, "image_id": 53697, "mention": [ [ "Figure 1 shows the configuration of our previous English-to-Japanese cross-lingual QA system, which has almost the same configuration to the conventional CLQA systems." ] ], "paragraph": [ "Figure 1 shows the configuration of our previous English-to-Japanese cross-lingual QA system, which has almost the same configuration to the conventional CLQA systems. Firstly, the input English question is translated into the corresponding Japanese question by using a machine translation. Alternatively, the machine translation can be re- placed by the dictionary-based term-by-term translation. Then, either the English question or the translated Japanese question is analyzed to get the expected answer type." ] }
{ "figure_type": "Equation", "file_name": "000000053698.png", "id": 53698, "ocr": [ "1 d", "1'", "Ji n i d j", "ay & n i 1 i", "\"kiin d i d i" ] }
{ "caption": "Figure 2: Incorrect predictions of a TSL analysis of nasal harmony in Yaka: (a) is ill-formed because of tier adjacent ∗[nd]; (b) is well-formed since there are no voiced stops on the tier disagreeing in nasality; (c) is well-formed because the [d] immediately following [n] in the input string stops the latter from being a trigger for harmony, but it is still ruled out by the constraint needed for (b).", "caption_no_index": "Incorrect predictions of a TSL analysis of nasal harmony in Yaka: (a) is ill-formed because of tier adjacent ∗[nd]; (b) is well-formed since there are no voiced stops on the tier disagreeing in nasality; (c) is well-formed because the [d] immediately following [n] in the input string stops the latter from being a trigger for harmony, but it is still ruled out by the constraint needed for (b).", "id": 33466, "image_id": 53698, "mention": [ [ "For instance, the segmental alternation shown in Ex. ( 1 The reader might point out that the difference between Fig. 2.a and Fig. 2.c can be resolved by extending the tier-grammar to consider 3-grams." ] ], "paragraph": [ "Consider the case of Consonantal Nasal harmony in Yaka, in which a nasal stop induces nasalization of voiced consonants occurring at any distance to its right (Hyman, 1995;Walker, 2000). For instance, the segmental alternation shown in Ex. ( 1 The reader might point out that the difference between Fig. 2.a and Fig. 2.c can be resolved by extending the tier-grammar to consider 3-grams. However, in order to enforce harmony correctly, the tier-projection places every occurrence of voiced stops in the string on the tier, thus making 3-grams constraints insufficient (e.g., Ex. (3c)). Moreover, since the number of segments between harmonizing elements is potentially unbounded, no TSL grammar can generally account for this pattern, independently of the dimension of the tier k-grams." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053699.png", "id": 53699, "ocr": [ "BCBS", "framework", "Filtering", "Standards", "and FAQs", "Independent", "Review", "Human", "Annotation", "Questiom", "Answet", "classification", "classification", "GBS QA", "dataset" ] }
{ "caption": "Figure 2: Overview of a GBS-QA construction process.", "caption_no_index": "Overview of a GBS-QA construction process.", "id": 33467, "image_id": 53699, "mention": [ [ "Overview of a GBS-QA construction process follows Figure 2.", "After completing the whole process in Figure 2, GBS-QA is constructed as a set of questions and answers in banking regulation domain." ] ], "paragraph": [ "Overview of a GBS-QA construction process follows Figure 2. Starting from BCBS framework, all standards and the associated FAQs are automatically filtered out of the BCBS website. These data are organized and preprocessed by human annotation process. Human annotation includes organizing provisions, matching the provisions with the corresponding FAQs and reviewing questions and answers in a reconciled manner. This review is conducted by independent annotators which consists of five financial regulation experts. From the human annotation process, pairs of questions and answers are classified into four types which include 1) Binary answerable type, 2) WH type, 3) How type and 4) Conditional type. Upon question classification, questions are revised into \"Binary answerable type\" and answers are labelled into \"Yes\" or \"No\" to corresponding questions according to the GBS-QA classification guideline in Appendix A. After completing the whole process in Figure 2, GBS-QA is constructed as a set of questions and answers in banking regulation domain." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053700.png", "id": 53700, "ocr": [ "Scdtsb;", "Relat:u AUcNI;", "13", "44", "65,031013]", "140M", "73013]", "30", "1605,42", "301", "40", "[14(G :04", "ip;0" ] }
{ "caption": "Figure 1: Illustration of link relationship of seed websites and related websites, with associated∑ Linkout, ∑ PageRank and ∑ WeightedPageRank in square brackets and with arrows to indicate outgoing links from a seed website to others.", "caption_no_index": "Illustration of link relationship of seed websites and related websites, with associated∑ Linkout, ∑ PageRank and ∑ WeightedPageRank in square brackets and with arrows to indicate outgoing links from a seed website to others.", "id": 33468, "image_id": 53700, "mention": [ [ "An illustration of link relationship of this kind is presented in Figure 1." ] ], "paragraph": [ "The WeightedPageRank(w) is defined as the PageRank(w) weighted by w's credibility C(w). To reach out for a related website s outside the initial seed set of websites, our approach first finds the set R(s) of seed websites that have outgoing links to s, and then computes the sum of these three values over each outgoing link, namely, ∑ w Linkout(w), ∑ w PageRank(w), and ∑ w WeightedPageRank(w) for each w ∈ R(s), for the purpose of measuring how \"likely\" s is bilingual. An illustration of link relationship of this kind is presented in Figure 1." ] }
{ "figure_type": "Equation", "file_name": "000000053701.png", "id": 53701, "ocr": [ "JesiL xehacrc:xJo:kuScuc Aineint: Z ponciI:", "KcakIlkegnic", "Rkzk: t azhaKsizes ukuatfusp", "RES MHHT eclux-ilikctnujoozzt", "'Erhh?: _ Ksuzhh", "RuMuRV HE MHEJVE:" ] }
{ "caption": "Figure 1 : Règles générales du modèle et relations temporelles pour le texte (6)", "caption_no_index": ": Règles générales du modèle et relations temporelles pour le texte (6).", "id": 33469, "image_id": 53701, "mention": [ [ "L'application sur le texte (6), issu d'un corpus de constats d'accidents, des règles d'ordonnancement temporel proposées dans le cadre de ces travaux, conduit aux relations temporelles de la figure 1." ], [ "La figure 2 exprime les relations temporelles de la figure 1 en termes de S-langages." ] ], "paragraph": [ "Un autre aspect essentiel des travaux linguistiques sur la temporalité a trait au type de relations temporelles qui sont envisagées entre propositions. On remarque ainsi qu'elles sont incomplètement spécifiées : en effet, si l'on se réfère aux 13 relations théoriquement possibles entre deux intervalles quelconques 3 , il est en général impossible de préciser si l'on a affaire à l'une ou l'autre de ces relations ; on a plutôt recours à un sous-ensemble de ces relations pour expliciter les relations possibles entre deux propositions. La relation de couverture, notée 0, introduite par Kamp (1981) illustre bien cette problématique. Elle traduit la contemporanéité des deux procès et dit exactement ceci : peu importe les situations respectives de début et de fin des procès, la seule information pertinente est qu'ils aient une partie commune. Considérons à cet effet l'exemple (5). Il sera représenté par P10P2 car rien ne dit si le clignotant a été mis avant, en même temps ou après le début de l'arrêt, et l'histoire n'étant pas achevée, nous n'avons aucune autre information pour affiner la situation entre les deux bornes de droite. Une autre illustration de cette problématique peut être donnée dans le cadre du modèle de Desclés ( 1989) 4 où Sakagami (1997) a analysé le fonctionnement de plusieurs connecteurs temporels en français (« quand », « chaque fois que », « après que », ...). Elle a identifié des relations invariantes exprimées par les connecteurs et a exhibé une cinquantaine de relations possibles entre valeurs aspectuelles, certaines de ces relations se retrouvant dans des phrases contenant des connecteurs différents 5 . Elle a alors dégagé trois caractéristiques communes à ces relations, ce qui témoigne encore selon nous du caractère incomplètement spécifié des relations entre intervalles : (i) une valeur aspectuelle quelconque suivie d'une valeur aspectuelle quelconque dans la chaîne linéaire exprime toujours une certaine succession ; (ii) la fin d'une valeur en précédant une autre est difficilement repérable ; (iii) parmi les 13 relations possibles entre deux intervalles quelconques, seulement 12 sont attestées linguistiquement, celle de chevauchement n'ayant pas été identifiée. L'application sur le texte (6), issu d'un corpus de constats d'accidents, des règles d'ordonnancement temporel proposées dans le cadre de ces travaux, conduit aux relations temporelles de la figure 1. On voit clairement que ces dernières sont incomplètement spécifiées (plusieurs lectures temporelles du texte sont possibles) et souvent issues de calculs envisagés au niveau local seulement, c'est-à-dire en ne considérant que les propositions deux à deux. Les opérateurs des S-langages permettent de résoudre les problèmes posés dans le cadre de ces deux remarques.", "3 Et si l'on admet que l'on associe des intervalles aux situations dénotées par les propositions. 
Plus généralement, quelque soient les objets primitifs adoptés (points, intervalles, …) dans un modèle linguistique donné, il s'agit en fait de situer des points isolés ou des bornes d'intervalles les uns par rapport aux autres. Or situer relativement deux points sur une droite, c'est les mettre soit en relation de précédence soit en relation de simultanéité. Le modèle mathématique qui synthétise ces deux relations est celui des préordres linéaires. L'ensemble de toutes les situations possibles entre n chaînes de points équivaut à rechercher l'ensemble de tous les pré-ordres linéaires constructibles en préservant ces n chaînes. Les préordres s'expriment simplement en termes de S-langages (Schwer 2002). La figure 2 exprime les relations temporelles de la figure 1 en termes de S-langages." ] }
{ "figure_type": "Bar Chart", "file_name": "000000053702.png", "id": 53702, "ocr": [ "eancicocc: R", "Leege nta", "Eheru", "Ue\"=", "M6TnoS", "Lidro", "\"MTni", "Wtirio", "07agejit", "@fz Ul", "xvilue" ] }
{ "caption": "Figure 5: Mean proportional gaze per referring expression for speaker and listeners. a) The speaker’s gaze during the time of the reference. b) The combined listeners’ gaze during the speaker’s referring expression. c) The speaker’s gaze during the 1 second period before the reference. d) The combined listeners’ gaze during the 1 second period before the speaker’s referring expression.", "caption_no_index": "Mean proportional gaze per referring expression for speaker and listeners. a) The speaker’s gaze during the time of the reference. b) The combined listeners’ gaze during the speaker’s referring expression. c) The speaker’s gaze during the 1 second period before the reference. d) The combined listeners’ gaze during the 1 second period before the speaker’s referring expression.", "id": 33470, "image_id": 53702, "mention": [ [ "In figure 5, the blue colour refers to the proximity area of the referent object from the speaker's utterance while orange refers to the gaze at all other known objects combined on the virtual environment." ], [ "We extracted all occasions where the group was jointly attending the area around the referent object, but also the occasions where either the speaker or the none of the listeners looked at the object: During the preliminary analysis of the corpus we noticed that in many cases at least one of the listeners was already looking at the area around the referent object before the speaker would utter a referring expression (figure 5)." ] ], "paragraph": [ "For each utterance a set of prominent objects was defined from the annotations. Given the gaze targets per participant, we calculated the proportional gaze from the speaker and the listeners during the time of the utterance and exactly 1 second before the utterance. Since all interactions were triadic, there were two listeners and one speaker at all times. To compare across all utterances, we looked at the mean proportional gaze of the two listeners to the area of the prominent object to define them as the listener group. We then compared the gaze of the listener group to the speakers gaze close to the referent objects. We also looked at the proportional combined gaze to other objects during the utterance (all other objects that have been gazed at during the utterance), gaze at the speaker and averted gaze. In figure 5, the blue colour refers to the proximity area of the referent object from the speaker's utterance while orange refers to the gaze at all other known objects combined on the virtual environment. Grey is for the gaze to the listeners or the speaker during the utterance and yellow for averted gaze.", "We extracted all occasions where the group was jointly attending the area around the referent object, but also the occasions where either the speaker or the none of the listeners looked at the object: During the preliminary analysis of the corpus we noticed that in many cases at least one of the listeners was already looking at the area around the referent object before the speaker would utter a referring expression (figure 5). It was our intuition that the salient object was already in the group's attention before. We looked at -1s before the utterance and automatically extracted these cases, as can be seen on table 2." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053703.png", "id": 53703, "ocr": [ "EascImRLJuEi-ntoonnison", "3umu", "Frainn_" ] }
{ "caption": "Figure 5: Decisions of the RL policy (in blue) vs. the baseline policy (in red).", "caption_no_index": "Decisions of the RL policy (in blue) vs. the baseline policy (in red).", "id": 33471, "image_id": 53703, "mention": [ [ "Figure 5 shows the decisions made by the RL (in blue) compared to the decisions made by the baseline (in red)." ] ], "paragraph": [ "agent divided by the total time consumed. Table 2 shows the points per second and the total points scored in some of the image sets by the baseline and the RL. We can observe that the RL consistently scores more points than the baseline, however this comes at the cost of additional time. By scoring more points overall than the baseline, the RL also scores higher in the PPS metric (p<0.05). Table 3 shows the total points scored and the total time spent across all the users by the baseline and the agent. Each set here refers to one round in a game. Figure 4 depicts this result for an image set of bikes (images shown in Figure 1). We plot the total time spent by the agent and the total points scored. Clearly, the RL policy manages to score more points than the baseline in a given amount of time. In order to understand the differences in the actions taken by the RL policy and the baseline policy, we plot on a 3 dimensional scatter plot, the action taken by the policy for confidence values between 0 and 1 (spaced at 0.1 intervals) and the time consumed between 0s to 15s (spaced at 100ms intervals) for one of the image sets (bikes). Figure 5 shows the decisions made by the RL (in blue) compared to the decisions made by the baseline (in red). As we can see there is not much variety in the decisions of the baseline policy; it basically uses thresholds (see Algorithm 1) optimized using real user data. Below we summarize our observations regarding the actions taken by the RL policy. i) Regardless of whether the confidence value is high or low, the RL policy learns to wait for low values of the time consumed. This may be helping the RL policy to avoid the problem illustrated in Case 2 in Figure 2, where instability in the early ASR results for a description can lead an incorrect guess to be momentarily associated with high confidence. The RL policy is more keen on waiting and decides to commit early only when the confidence value is really high (almost 1.0). ii) Requiring a lower degree of confidence when the time consumed is high was also found to be an effective strategy to score more points in the game. Thus the RL policy learns to guess (As-I) even at lower confidence values when the time consumed reaches high values. This combination of time and confidence values helps the RL agent perform better w.r.t. points and consequently PPS in the task." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053704.png", "id": 53704, "ocr": [ "Ghnk :", "En", "Ee", "luipue", "Vruy", "iuipv", "uru ", "aeemn", "If nAR", "4" ] }
{ "caption": "Figure 2: The overall architecture of our system.", "caption_no_index": "The overall architecture of our system.", "id": 33472, "image_id": 53704, "mention": [ [ "As illustrated in Figure 2, the architecture of our system consists of a candidates generation stage, a weighted merging stage, and a combination stage." ] ], "paragraph": [ "As illustrated in Figure 2, the architecture of our system consists of a candidates generation stage, a weighted merging stage, and a combination stage. In the candidates generation stage, the baseline systems are run individually and their outputs are collected. We use 2-best parse trees of Berkeley parser (Petrov and Klein, 2007) and 1-best parse tree of Bikel parser (Bikel, 2004) and Stanford parser as inputs to the full parsing based system. The second best parse tree of Berkeley parser is used here for its good quality. So together we have four different outputs from the full parsing based system. From the shallow parsing based system, we have only one output. In the weighted merging stage, each system output is assigned a weight according to our prior knowledge obtained on the development set. Details about how to obtain appropriate weights will be explained in Section 6. Then all candidates with the same loc and l are merged to one by weighted summing their probabilities. Specifically, suppose that there are n system outputs to be combined, with the i-th output's weight to be w i . And the candidate in the i-th output with loc and l is (loc, l, p i ) (If there is no candidate with loc and l in the i-th output, p i is 0.). Then the merged candidate is (loc, l, p), where p = n i=1 w i p i ." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053705.png", "id": 53705, "ocr": [ "\"Jlii'", "St-Ciah:", "Lfei", "rd--:umei", "Om tl-gh", "3Kj", "Tetkall" ] }
{ "caption": "Figure 2: Lineplot of GQD against temperature for all the five different language families. The trendlines are drawn using LOESS smoothing.", "caption_no_index": "Lineplot of GQD against temperature for all the five different language families. The trendlines are drawn using LOESS smoothing.", "id": 33473, "image_id": 53705, "mention": [ [ "NGens is the number of generations, Time is the time taken to run the inference in number of seconds on a single core Linux machine. pled trees against the temperature for all the five best settings of s/T 0 (in bold in Table 4) in Figure 2." ] ], "paragraph": [ "MAPLE with gold standard cognates We further tested if gold standard cognates make a difference in the inferred tree quality. We find that the tree quality improves if we employ gold standard cognates to infer the trees. This result supports the research track of developing high quality automated cognate detection systems which can be employed to analyze hitherto less studied language families of the world. 4: Results for the MAPLE approach to fast phylogenetic inference for each method. The best step size and initial temperature setting is shown as s/T 0 . NGens is the number of generations, Time is the time taken to run the inference in number of seconds on a single core Linux machine. pled trees against the temperature for all the five best settings of s/T 0 (in bold in Table 4) in Figure 2. The figure clearly shows that at high temperature settings, the quality of the trees is low whereas as temperature approaches zero, the tree quality also gets better for all the language fami- lies. Moreover, the curves are monotonically decreasing once the temperature is below 12." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053706.png", "id": 53706, "ocr": [ "0234 Ferl Fla;r9, Yurop Ler: =", "' nlepmpicsbin:", "Hfenz (PrkFlaer6 , PinaPlaec |", "'nlepzsbFupl;", "Wzaex fupe;,Fik: =", "ollytpd", "' rle6?Ff: Ki ehenzedb; Pi |6 \"", "kie(FikFl rb'", "'i lgra sagrorpassloFin;", "03530 FikFltrS, 2rFasir3|", "ploje(fne li:kf|", "'ik;oili: \"o\" rastebil", "0155/FrkFler3, 2i Pavrli" ] }
{ "caption": "Figure 2: Sample trace from Robocup English data.", "caption_no_index": "Sample trace from Robocup English data.", "id": 33474, "image_id": 53706, "mention": [ [ "Figure 2 shows a sample trace from the Robocup English data." ] ], "paragraph": [ "Figure 2 shows a sample trace from the Robocup English data. Each NL commentary sentence normally has several possible MR matches that occurred within the 5-second window, indicated by edges between the NL and MR. Bold edges represent gold standard matches constructed solely for evaluation purposes. Note that not every NL has a gold matching MR. This occurs because the sentence refers to unrecognized or undetected events or situations or because the matching MR lies outside the 5-second window." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053707.png", "id": 53707, "ocr": [ "XO", "X1", "X3", "X2" ] }
{ "caption": "Figure 2. Graph with Cycle", "caption_no_index": "Graph with Cycle.", "id": 33475, "image_id": 53707, "mention": [ [ "This usually occurs when the graph contains a cycle, as shown in Figure 2." ] ], "paragraph": [ "The forward recursion approach may lead to a situation in which a variable has been filled with two different sets of words. This usually occurs when the graph contains a cycle, as shown in Figure 2. When the forward recursion algorithm reaches X3 in step (d), a second set of possible words for X1 is generated. Since the two sets of words for X1 do not match, the algorithm gets the intersection of (A, B) and (B, C) and assigns this to X1 (in this case, the word \"B\" is assigned to X1). Backward recursion has to be performed starting from step (b) using the new set of words so that other variables with relationships to X1 will also be checked for possible changes in their values." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053708.png", "id": 53708, "ocr": [ "ZReDdeni", "Cee", "FTlreearn-ten", "Ant", "nm-)", "-Vjj-", "Anjien=", "2iJSen", "Tinin", "Ahedoi", "70m:", "L" ] }
{ "caption": "Figure 1: Architecture for building Web corpora", "caption_no_index": "Architecture for building Web corpora.", "id": 33476, "image_id": 53708, "mention": [ [ "However, we set up an architecture that enables Figure 1: Architecture for building Web corpora the construction of web corpora in general, provided the language-dependent modules are available.", "Figure 1 shows the current architecture for CUCWeb." ] ], "paragraph": [ "The initial motivation for the CUCWeb project was to obtain a large annotated corpus for Catalan. However, we set up an architecture that enables Figure 1: Architecture for building Web corpora the construction of web corpora in general, provided the language-dependent modules are available. Figure 1 shows the current architecture for CUCWeb." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053709.png", "id": 53709, "ocr": [ "Sujufah" ] }
{ "caption": "Figure 4: Each line represents experiments with a set number of topics and variable amounts of smoothing on the SEMCOR corpus. The random baseline is at the bottom of the graph, and adding topics improves accuracy. As smoothing increases, the prior (based on token frequency) becomes stronger. Accuracy is the percentage of correctly disambiguated polysemous words in SEMCOR at the mode.", "caption_no_index": "Each line represents experiments with a set number of topics and variable amounts of smoothing on the SEMCOR corpus. The random baseline is at the bottom of the graph, and adding topics improves accuracy. As smoothing increases, the prior (based on token frequency) becomes stronger. Accuracy is the percentage of correctly disambiguated polysemous words in SEMCOR at the mode.", "id": 33477, "image_id": 53709, "mention": [ [ "Figure 4 shows the modal disambiguation achieved for each of the settings of S = {0.1, 1, 5, 10, 15, 20}." ] ], "paragraph": [ "Because the Dirichlet smoothing factor in part determines the topics, it also affects the disambiguation. Figure 4 shows the modal disambiguation achieved for each of the settings of S = {0.1, 1, 5, 10, 15, 20}. Each line is one setting of K and each point on the line is a setting of S. Each data point is a run for the Gibbs sampler for 10,000 iterations. The disambiguation, taken at the mode, improved with moderate settings of S, which suggests that the data are still sparse for many of the walks, although the improvement vanishes if S dominates with much larger values. This makes sense, as each walk has over 100,000 parameters, there are fewer than 100,000 words in SEMCOR, and each word only serves as evidence to at most 19 parameters (the length of the longest path in WORDNET)." ] }
{ "figure_type": "Equation", "file_name": "000000053710.png", "id": 53710, "ocr": [ "Parame crs~alcs", "[cscrinrinn ae_", "TlM", "MJutt;", "Nulmnhdr-", "Ucmint uafr", "fonsld-TWann", "\"uasNmUietance", "Thz Hin disturc: bertr", "#cLicnT \"nd (iting (CITIS", "minRFRank", "Minimub", "RF for", "#CICC[ ", "candidatz kcywordz", "Yaluc:", "cecd", "tzstire S0.", "IS0 and ZCH)", "minlen", "Ninims", "Otti", "kctod", "CLafcn", "akecd", "Cscing walics", "pari", "Tx[", "Niial", "ELce", "(-11it4j41*19", "cude", "SLII4 *Mluj ", "cugta", "-liUR Ertunt", "Numter <", "QE kcywerd:", "prouuied", "selired6y", "consIu-TIL M:", "Julcu iliuiL", "COMMOD scilncb COLI", "cuzlh", "uxl" ] }
{ "caption": "Figure 8. Training parameters for TermMine.", "caption_no_index": "Training parameters for TermMine.", "id": 33478, "image_id": 53710, "mention": [ [ "We performed some experiments with the different values of various system parameters, the resulting parameters are shown in Figure 8." ] ], "paragraph": [ "We select 100 terms from each domain for training TermMine. We performed some experiments with the different values of various system parameters, the resulting parameters are shown in Figure 8. We have not tested all parameters exhaustively and completely. If a parameter has been tuned in some testing and observations, we would have a brief explanation in the figure." ] }
{ "figure_type": "Bar Chart", "file_name": "000000053711.png", "id": 53711, "ocr": [ "jIrIly ", "{E> Mil", "erncrkd", "F6T", "JcLSE", "V;Su", "Mi-aic; 77", "1", "[314u g", "HS", "erus~IIS", "M 4nizAd", "Iti", "Mce: ", "Nb x", "Mu", "Wuc" ] }
{ "caption": "Figure 2: The toxicity and induction success rate of different kinds of contexts. profanity/0.78 means the averaged toxicity score for category profanity is 0.78.", "caption_no_index": "The toxicity and induction success rate of different kinds of contexts. profanity/0.78 means the averaged toxicity score for category profanity is 0.78.", "id": 33479, "image_id": 53711, "mention": [ [ "The result is shown in Figure 2." ], [ "Although Figure 2 shows that the context with higher toxicity usually has a higher induction success rate, the adversarial context with lower toxicity is more difficult to be detected by the classifier, which is more difficult to be defended and more harmful." ] ], "paragraph": [ "The result is shown in Figure 2. We find that (1) the context toxicity is generally positively correlated with the context induction success rate. Usually, more toxic contexts are easier to induce unsafe responses, which is on par with the previous study (Gehman et al., 2020). However, we also discover that (2) context category is another important factor influencing the induction success rate. For example, although the contexts of insult category are more toxic than the contexts of threat category, the former has consistently lower induction success rate than the latter on the three dialogue models. This may be because the model tends to adopt different response strategies for different categories of contexts, which is elaborated in Appendix B. Moreover, we observe that the induction success rate of contexts across different categories also depends on the dialogue model. For instance, the contexts of sexually_explicit category have significantly higher induction success rates than the contexts of threat category for Blender and Plato2, while the two categories of contexts have almost the same induction success rate for DialoGPT.", "Although Figure 2 shows that the context with higher toxicity usually has a higher induction success rate, the adversarial context with lower toxicity is more difficult to be detected by the classifier, which is more difficult to be defended and more harmful. Therefore, we explore controlling reverse generation to decrease the context toxicity and increase the context's induction success rate at the same time. This allows us to get more harmful adversarial contexts with low toxicity and high induction success rate. We first train a reverse generation model to learn P θ (c t |c &lt;t , r), a toxic reverse generation model that specifically generates toxic contexts to learn P γ (c t |c &lt;t , r) and a language model to learn P φ (c t |c &lt;t ). Then at the inference stage, The generation probability is decomposed as:" ] }
{ "figure_type": "Node Diagram", "file_name": "000000053712.png", "id": 53712, "ocr": [ "'0o" ] }
{ "caption": "Figure 2: Latent structure underlying the mention ranking and the antecedent tree approach. The black nodes and arcs represent one substructure for the mention ranking approach.", "caption_no_index": "Latent structure underlying the mention ranking and the antecedent tree approach. The black nodes and arcs represent one substructure for the mention ranking approach.", "id": 33480, "image_id": 53712, "mention": [ [ "Figure 2: Latent structure underlying the mention ranking and the antecedent tree approach." ], [ "Figure 2 shows an example graph." ], [ "One such substructure encoding the antecedent decision for m 3 is colored black in Figure 2." ] ], "paragraph": [ "Figure 2: Latent structure underlying the mention ranking and the antecedent tree approach. The black nodes and arcs represent one substructure for the mention ranking approach.", "Latent Structure. The mention ranking approach can be represented as an unlabeled graph. In particular, we allow any graph with edges A ⊆ {(m j , m i ) | j &gt; i} such that for all j there is exactly one i with (m j , m i ) ∈ A (each anaphor has exactly one antecedent). Figure 2 shows an example graph.", "The distinctive feature of the mention ranking approach is that it considers each anaphor in isolation, but all candidate antecedents at once. We therefore define substructures as follows. The jth substructure is the graph h j with nodes V j = {m 0 , . . . , m j } and A j = {(m j , m i ) | there is i with j &gt; i s.t. (m j , m i ) ∈ A}. A j contains the antecedent decision for m j . One such substructure encoding the antecedent decision for m 3 is colored black in Figure 2." ] }
{ "figure_type": "Bar Chart", "file_name": "000000053713.png", "id": 53713, "ocr": [ "15.04;", "T", "102v", "106", "LG", "16", "Iv", "506", "0,", "bzsdi e", "ML;1#haj", "Iu the Eci &-F", "EchMe", "ird suillme" ] }
{ "caption": "Figure 7. Preference ratio for the (1) baseline system, (2) MDL parameter optimized speech synthesis system and (3) further backing-off and splitting system.", "caption_no_index": "Preference ratio for the (1) baseline system, (2) MDL parameter optimized speech synthesis system and (3) further backing-off and splitting system.", "id": 33481, "image_id": 53713, "mention": [ [ "The results are listed in Fig. 7, where the preference ratios for the three systems are 21.6%, 36.7% and 41.7% respectively." ], [ "From Figure 7, one can conclude that the MDL threshold parameter optimized speech synthesis system and further backing-off and splitting system both out-perform the baseline system." ] ], "paragraph": [ "A subjective listening test was conducted for the three systems: the baseline system (Sys-G), the system with tuned { , } α β (Sys-D), and the system with further backing-off and splitting based on Sys-D. Sixteen sentences were synthesized by each of the systems and five native speakers were asked to choose the best sentence from the randomly ordered three sentences by three systems. The results are listed in Fig. 7, where the preference ratios for the three systems are 21.6%, 36.7% and 41.7% respectively.", "From Figure 7, one can conclude that the MDL threshold parameter optimized speech synthesis system and further backing-off and splitting system both out-perform the baseline system. The proposed method for initialization of the decision tree and the further pruning method are both effective." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053714.png", "id": 53714, "ocr": [ "Fjum:", "JAE4G", "Fa-Ci;ir", "KX4", "FMEL", "4JmS", "IageSa", "D;a;", "prap", "ts: -", "Tgana", "~hena", "LET IFR" ] }
{ "caption": "Figure 3: A sample sentence annotated according to our hybrid dependency-constituency grammar model. See Figure 6 and Table 2 for its linearization and enhanced UD representation.", "caption_no_index": "A sample sentence annotated according to our hybrid dependency-constituency grammar model. See Figure 6 and Table 2 for its linearization and enhanced UD representation.", "id": 33482, "image_id": 53714, "mention": [ [ "A sample sentence is shown in Figure 3, where the basic dependency links are brown, the constituency links of analytic forms are green, and the constituency links of the punctuation mark attachment are purple. 2 for its linearization and enhanced UD representation." ] ], "paragraph": [ "In LVTB, phrasal constructions are grouped into three classes: (i) coordination, (ii) punctuation mark attachment, and (iii) analytic constructions and other phrases with fixed of partially fixed word order, e.g. prepositional phrases, compound predicates, multi-word numerals, etc. A sample sentence is shown in Figure 3, where the basic dependency links are brown, the constituency links of analytic forms are green, and the constituency links of the punctuation mark attachment are purple. 2 for its linearization and enhanced UD representation." ] }
{ "figure_type": "Equation", "file_name": "000000053715.png", "id": 53715, "ocr": [ "Aocnmni", "ri", "TT?", "rrs.c5.u2c d[Ouuu", "Ico", "celDollucuel$", "TFeoiocogleaTO", "460", "8777\"", "Tord?U1 0y orord?", "~ue ut Jv", "cnt", "CICm G4T", "rle_", "Catac", "Kennetan", "74rrs-", "QClcmcni:-", "LTLI", "Fntanc", "~Dosae T Dosak", "RelEne", "Ctt", "Krord?AugtJia rord ", "~VOz[ek?ile/pUztar ", "~Et", "clcncntid-", "Strerds-dc Hrds", "-TITA DUk?", "clrninr_", "cmneut Ad", "rElL 44A", "Enorieann ?Dnre", "Acement;-" ] }
{ "caption": "Figure 5: Sample XML output of the POS tagger web service", "caption_no_index": "Sample XML output of the POS tagger web service.", "id": 33483, "image_id": 53715, "mention": [ [ "Figure 5 shows a sample output from the tagger web service." ] ], "paragraph": [ "To facilitate interpretation and manipulation of the information stored within the repository, various services can be defined through the content models of the digital objects. As mentioned in Section 4.1, a tag service has been defined, and is implemented as a RESTful web service that wraps an Indonesian POS tagger developed using the Stanford POS tagger (Adriani et al., 2009). The text in the main datastream is fed to the tagger, which returns an XML document containing a list of enumerated elements, each of which represents a token and its corresponding POS tag. Figure 5 shows a sample output from the tagger web service." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053716.png", "id": 53716, "ocr": [ "Tt", "", "Dmticu" ] }
{ "caption": "Figure 2: Fraction of words less than a given word frequency.", "caption_no_index": "Fraction of words less than a given word frequency.", "id": 33484, "image_id": 53716, "mention": [ [ "To validate the differences between the collected chat datasets and traditional datasets such as Choi's dataset (Choi, 2000), we computed the fraction of words occurring with a frequency less than a given word frequency, as shown in Figure 2.", "It is clearly evident from the Figure 2 that chat segmentation datasets have a significantly high proportion of less frequent words in comparison to the traditional text segmentation datasets." ] ], "paragraph": [ "To validate the differences between the collected chat datasets and traditional datasets such as Choi's dataset (Choi, 2000), we computed the fraction of words occurring with a frequency less than a given word frequency, as shown in Figure 2. It is clearly evident from the Figure 2 that chat segmentation datasets have a significantly high proportion of less frequent words in comparison to the traditional text segmentation datasets. The presence of large infrequent words makes it hard for textual similarity methods to succeed as it will increase the proportion of out of vocabulary words (Gulcehre et al., 2016). Therefore, it becomes even more critical to utilize the non-textual clues while processing chat text." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053717.png", "id": 53717, "ocr": [ "#HL", "complic", "8ck(oa", "#ngu", "~hi", "Uam ", "FOL-bIiL;/", "cjc Lizuic", "mnratt" ] }
{ "caption": "Figure 2: The morphological analysis stage.", "caption_no_index": "The morphological analysis stage.", "id": 33485, "image_id": 53717, "mention": [ [ "This process receives the segmented text (output of the segmentation stage) as an input and generates an annotated text as an output (see figure 2)." ], [ "The analysis process is as follows: We perform a search in the learning corpus for occurrences of the word i (Wi in figure 2)." ] ], "paragraph": [ "During the morphological analysis, we use an annotated corpus as a knowledge base to predict the morphological category of each word of the input text. This process receives the segmented text (output of the segmentation stage) as an input and generates an annotated text as an output (see figure 2). In this section we begin by presenting the used corpus then we detail the principle of our morphological analysis. Our learning corpus, the Penn Arabic Treebank (ATB), was developed by the LDC at the University of Pennsylvania (Maamouri et al., 2004). It consists of data from linguistic sources written in modern standard Arabic. The corpus consists of 599 unvowelled texts of different stories from a Lebanese news agency publication.", "The analysis process is as follows: We perform a search in the learning corpus for occurrences of the word i (Wi in figure 2). We then extract all the morphological tags of this word from the ATB. Probabilities are then distributed to these tags according to the conditional probability formula. The tag that have the highest probability will be used as the annotation of the word i. There is an exception in the use of the formula for the first word of each sentence and also for each word preceded by an unknown word. If the word is not found in the training corpus, the user has the option to manually annotate the unfound word or to keep it as an unknown word. This process occurs in a sequential mode until the annotation of the whole text. We use the same tag set used in the ATB. We apply our method to a sentence to show the different results." ] }
{ "figure_type": "Scatterplot", "file_name": "000000053718.png", "id": 53718, "ocr": [ "Z 18 W1" ] }
{ "caption": "Figure 2: Comparison between inter-annotator agreement (red line) and three baselines: exact match (blue), fuzzy match (green), and fuzzy + synonyms (orange).", "caption_no_index": "Comparison between inter-annotator agreement (red line) and three baselines: exact match (blue), fuzzy match (green), and fuzzy + synonyms (orange).", "id": 33486, "image_id": 53718, "mention": [ [ "As shown in Figure 2, the fuzzy + synonyms approach outperformed exact and fuzzy match with a mean F1 of .64 (.074), compared to .53 (.073), and .62 (.075)." ] ], "paragraph": [ "As shown in Figure 2, the fuzzy + synonyms approach outperformed exact and fuzzy match with a mean F1 of .64 (.074), compared to .53 (.073), and .62 (.075). This result compares to an average inter-annotator agreement of .84 (.075) for character location overlap between phrases, showing a need for considerable improvement to match human performance. This gap varies between cases, with some having more than 20 points difference in F1 (e.g., Case 204). It is also seen that the variance in responses for certain cases (e.g., 201) is easier to capture computationally compared to others (e.g., 203). Finally, the results show that including a list of synonyms in fuzzy + synonyms does not lead to significant improvement, with the task requiring more sophisticated semantic processing." ] }
{ "figure_type": "Bar Chart", "file_name": "000000053719.png", "id": 53719, "ocr": [ "MAu", "JKTC", "[cies C", "Ja~l", "3r'", "Iinkt", "1o7t [" ] }
{ "caption": "Figure 3: Distribution of inductively derived impact categories and sub-categories", "caption_no_index": "Distribution of inductively derived impact categories and sub-categories.", "id": 33487, "image_id": 53719, "mention": [ [ "Figure 3 provides information on the final set of labeled sentences." ], [ "Figure 3 visualizes the number of instances per sub-category in the labeled dataset." ], [ "The results of our bottom-up approach supplement these findings, and show that researchers discuss anticipated or implemented impact of their projects on the economy and technology (Figure 3).", "In addition, funded projects mention societal impact, including benefits to education, improving or modifying legislation, and raising awareness in society (Figure 3)." ] ], "paragraph": [ "To make the application of the codebook to label each project more efficient, we (1) identified common sections of the reports that address achieved or the potential impact, and marked these sections for the human coders, and, (2) for projects with multiple reports, we selected one report per each project; preferably the one with the overall results of the projects. A total of four annotators (six pairs) annotated the relevant sections of selected documents. The annotators were allowed but not encouraged to choose more than one category per section. Overall, the annotators assigned the same label to 60% of the sentences, and provide no or different labels to 40% of the sentences. The average kappa value (of all six pairs) was around 48%. Furthermore, sentences with disagreement were adjudicated by two researchers who were not involved in the original annotation. Figure 3 provides information on the final set of labeled sentences. The resulting annotated corpus is labeled with impact categories and sub-categories. We will publicly share this resources upon finalizing its preparation for release.", "None of the projects labeled with the deductive approach had \"Monetary Impact\" (Figure 2), and none was solely focused on economic or income impact. The majority of projects (43.95%) were reported to have had \"Monetary and Non-Monetary Impact\"; 39.56% had \"Non-Monetary Impact\"; and 16.48% had no monetary or non-monetary impact. Analyzing the sub-categories from the deductive approach shows that the majority of funded projects (71.4%) focused on technical impact (Figure 2). We also found that 87.5% of the projects labeled as \"Monetary and Non-Monetary Impact\" represent some sort of economic impact. Only 16.48% of all projects focused on increasing income in institutions. Regarding the socio-technical impact of projects, we found that 42.85% of the projects are associated with affecting societal groups or institutions. In the inductively labeled dataset, 60.23% of the annotated sentences do not carry any information related to impact. This finding is not surprising since many sentences even in impact-relevant sections of reports provide other types of information. Moreover, we find that 16.57% of the sentences refer to \"Impact on Domain\", 16.17% to \"Impact on Outcome\", 4% to \"Impact on Society and Public Sphere\", and around 3% discuss \"Impactful Features of Products, Services or Public Goods\". 
Analyzing the subcategories shows that (a) 55.91% of sentences labeled as \"Impact on Domain\" discuss economic impact, (b) 56.30% of sentences labeled as \"Impact on Outcome\" focus on improving knowledge, (c) 59.43% of sentences tagged as \"Impact on Society\" indicate impact on awareness/perception, and (d) 41.76% of sentences discuss novel or innovative features as outcomes of their projects. Figure 3 visualizes the number of instances per sub-category in the labeled dataset. Overall, our findings show that the majority of funded projects not only aim to advance science within the realms of academia, but also aim at advancing technologies and services for society, and providing public goods and innovative products. We next combined both label types to further analyze the relationship between inductively and deductively derived categories. As shown in Figure 4, the majority of projects with \"Monetary and Non-Monetary Impact\" (deductive category) features \"Impact on Domain\" (inductive category).", "Using the deductive approach, we derived impact categories from prior work and input from experts on common and verifiable indicators of impact (Figure 1), and annotated the projects by interviewing the project members. Using the inductive approach, we extracted impact categories from final project reports by identifying text-based indicators of impact through close reading. This also resulted in a novel yet different impact scheme (Table 1), which we implemented in a codebook and used that to hand-label the texts. Both annotated corpora were then used for supervised, feature-based learning. Overall, our results from the deductive approach show that reports of funded projects address (potential for) both societal and economic impact. The results of the interviews revealed that the majority of funded projects from the domain of mobility aim at technical and economic impact (Figure 2). The results of our bottom-up approach supplement these findings, and show that researchers discuss anticipated or implemented impact of their projects on the economy and technology (Figure 3). In addition, funded projects mention societal impact, including benefits to education, improving or modifying legislation, and raising awareness in society (Figure 3). Combining the labels from the deductive and inductive approach (Figure 4) reveals that impact-relevant statements (mostly) refer to impact on domains and fields." ] }
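The record above reports an average Cohen's kappa of about 48% over six annotator pairs (four annotators). A minimal sketch of that computation, with invented labels:

```python
from itertools import combinations

def cohens_kappa(labels_a, labels_b):
    """Plain Cohen's kappa for two annotators labeling the same items
    (undefined when expected agreement is 1, which the toy data avoids)."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Invented labels from four annotators over the same five sentences.
annotations = {
    "A1": ["econ", "econ", "society", "none", "econ"],
    "A2": ["econ", "tech", "society", "none", "econ"],
    "A3": ["econ", "econ", "none", "none", "tech"],
    "A4": ["econ", "econ", "society", "none", "econ"],
}
pair_kappas = {
    pair: cohens_kappa(annotations[pair[0]], annotations[pair[1]])
    for pair in combinations(annotations, 2)  # 4 annotators -> 6 pairs
}
print(f"average kappa over six pairs: {sum(pair_kappas.values()) / 6:.2f}")
```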
{ "figure_type": "Bar Chart", "file_name": "000000053720.png", "id": 53720, "ocr": [ "svirbt?", "0", "0448", "044 0,43", "0.43 0.413 (. 13", "Tegs", "Significant ieria", "EuticUns", "Characierslics", "Lexicois", "setllimncuts", "Intesjex(icns", "Ptk" ] }
{ "caption": "Figure 3: Structural features’ performance using the SVM classifier evaluated on the test set.", "caption_no_index": "Structural features’ performance using the SVM classifier evaluated on the test set.", "id": 33488, "image_id": 53720, "mention": [ [ "Structural feature analysis We also evaluate the structural features of the task independently (Figure 3)." ] ], "paragraph": [ "Structural feature analysis We also evaluate the structural features of the task independently (Figure 3). For this, we use the SVC classifier as it is one of the best performing traditional methods." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053721.png", "id": 53721, "ocr": [ "oacActation", "Tfyoc", "PIL vDC", "(:Ja: uJ:Bouy", "xgaci", "@xsod", "UL EaL", "Vao basedon", "4", "athasconstnsut", "Caedod", "Drn?", "DX OS", "co3n3", "Inhrc", "Wc", "Iian hosagor", "9rt}", "oac-hasbody", "[az6;;C775771557", "[0;D?", "6425", "#2230", "bhuuaacus", "4764", "(JRJcEBody", "7hroi", "eXGEC", "EXO:", "nJARG", "'azha Constnent", "Kao OBRAd ", "ert2)", "O;jc EQdY", "eXui0z", "Ulc Lal" ] }
{ "caption": "Figure 5: Literal RDF translation of a GrAF Propbank annotation representation from (Ide & Suderman 2007)", "caption_no_index": "Literal RDF translation of a GrAF Propbank annotation representation from (Ide & Suderman 2007).", "id": 33489, "image_id": 53721, "mention": [ [ "The greater expressivity and simpler structure of RDF based annotations can be clearly seen in contrasting Figure 5 with Figure 6.", "Figure 5 represents a verbatim translation of the LAF following the feature structure in RDF conventions.", "The pb:arg1 relation in Figure 6 alleviates the need for the entire ga04 annotation in Figure 5." ] ], "paragraph": [ "The greater expressivity and simpler structure of RDF based annotations can be clearly seen in contrasting Figure 5 with Figure 6. Both figures depict the same subset of information from a PropBank example in Section 3 of (Ide &amp; Suderman 2007). Figure 5 represents a verbatim translation of the LAF following the feature structure in RDF conventions. In this figure, as in the original LAF figure, the proposition elements are distributed across 3 feature structures, for the relation (rel), arg1, and the proposition itself. In contrast, Figure 6 uses individual RDF triples in the annotation bodies; the representation is not only more succinct, it more naturally expresses the semantics of the information, with the relation and its argument within the same content graph. The pb:arg1 relation in Figure 6 alleviates the need for the entire ga04 annotation in Figure 5. Arguably it was an intentional choice by Ide and Suderman (2007) to use a LAF node/annotation instead of a LAF edge. However, this and other examples point to arbitrary selection of nodes and edges in LAF, with little surrounding semantics to ground them. While it is true that users must understand the semantics of any model to use it, the framework of RDF and the linked data best practices provide a structure for explicitly and formally defining the concepts and links, facilitating interoperability." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053722.png", "id": 53722, "ocr": [] }
{ "caption": "Figure 1: Age bands over number of complex words", "caption_no_index": "Age bands over number of complex words.", "id": 33490, "image_id": 53722, "mention": [ [ "Figures 1 and 2 illustrate average and standard deviation values using 10-year age bands and proficiency levels, respectively." ] ], "paragraph": [ "By inspecting the data, we found interesting correlations between the number of complex words annotated and volunteers' age or English proficiency level. Figures 1 and 2 illustrate average and standard deviation values using 10-year age bands and proficiency levels, respectively. Both graphs show that, although the average number of complex words drops as age and proficiency level increase, the variance within each group is very high, suggesting that such groups may not be significantly distinct from each other. By performing Ftests with p = 0.05, we found a significant difference between the band of 40+ years of age and the bands of 10+, 20+ and 30+ years of age, which suggests that one's English knowledge peaks at such age. We also found significant differences between almost all English proficiency levels above A2, except between B2 and C1. We did not find significant differences among education levels." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053723.png", "id": 53723, "ocr": [ "Qux*", "Kvll;", "Eoarn ", "Tedn=", "IItT", "Ath", "Iitux", "J6" ] }
{ "caption": "Figure 1: The VERBMOBIL Architecture5", "caption_no_index": "The VERBMOBIL Architecture5.", "id": 33491, "image_id": 53723, "mention": [ [ "The responsibility for the contents of the paper lies with the author. 3 A turn comprises a speaker's contribution within the dialogue at a given point and may range from single-word utterances like \"hello\" up to several sentences. 4 See fig. 1 generation component." ] ], "paragraph": [ "Two concurrent concepts of linguistic analysis have been implemented 4 , namely the deep and shallow analysis. Deep processing is performed by a syntax-semantics component, a transfer and a target 1 See [2] for a more detailed description of the objectives of VERBMOBIL. 2 I would like to thank Ute Hauck for her assistance with Word and Visio. Special thanks to Elisabeth Maier for suggestions and comments which helped to improve this paper. The responsibility for the contents of the paper lies with the author. 3 A turn comprises a speaker's contribution within the dialogue at a given point and may range from single-word utterances like \"hello\" up to several sentences. 4 See fig. 1 generation component. These components interact with the modules semantic evaluation and dialog processing which provide additional information, e.g. for disambiguation purposes. The output of the unification-based semantic construction, which takes as input a word lattice (which in turn is the output of the acoustic analysis) is mapped onto a canonical representation called VERBMOBIL Interface Term (VIT [3]). The transfer output is also represented as a VIT, which is further processed by the English generation component. In a last step, the English synthesis produces the speech signals of the resulting translations. The shallow analysis comprises two approaches to translation, namely translations based on dialogue acts 6 , and translations based on the example-based approach. The first approach relies on the correct identification of dialogue acts underlying a turn which trigger template-based translations of dialogue act information. Propositional elements such as proper names and time expressions are integrated into the template and are then translated. The latter approach called ALI produces so-called schematic or example-based translations. The database on which these translations are based contains about 20.000 entries ([2], p. 2). Both shallow approaches work in parallel together with the deep analysis. This means that in the ideal case there are three possible translations available for each turn. The selection of the translation to be synthesized is made by an elementary version of the selection module. In the VERBMOBIL prototype, this module uses simple heuristics such as system performance, i.e. in case the deep analysis does not produce an output, the translation of the shallow processing is chosen (here, the dialogue act-based output would have priority over the ALI-output)." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053724.png", "id": 53724, "ocr": [ "Nimizeni Chile *", "hadsor?kn\"", "Jeratt" ] }
{ "caption": "Figure 2: The inter-cluster distance plot for the learning set Abui verbs (356) v. 2020.", "caption_no_index": "The inter-cluster distance plot for the learning set Abui verbs (356) v. 00.", "id": 33492, "image_id": 53724, "mention": [ [ "An example of the inter-cluster distance plot is given in Figure 2 where the plot shows an elbow-like dip in the inter-cluster distance." ], [ "Figure 2: The inter-cluster distance plot for the learning set Abui verbs (356) v. 2020." ] ], "paragraph": [ "An example of the inter-cluster distance plot is given in Figure 2 where the plot shows an elbow-like dip in the inter-cluster distance. The inter-cluster distance decreases rapidly until 24 clusters, but starts to decrease gradually after 25 clusters. Therefore we set the threshold at 25 clusters.", "Figure 2: The inter-cluster distance plot for the learning set Abui verbs (356) v. 2020." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053725.png", "id": 53725, "ocr": [ "kamPwllh : %uu5", "Thkr ^HwhnQUL | KKav_g", "Km", "Dalu", "wlh", "kvn;" ] }
{ "caption": "Figure 3: Illustration of ‘ngram predicts ngram’.", "caption_no_index": "Illustration of ‘ngram predicts ngram’.", "id": 33493, "image_id": 53725, "mention": [ [ "As shown in figure 3, center bigram 'is written' predicts its surrounding words and bigrams." ] ], "paragraph": [ "We further extend the model to introduce ngrams into center word vocabulary. During the training, center ngrams (including words) predict their surrounding ngrams. As shown in figure 3, center bigram 'is written' predicts its surrounding words and bigrams. The objective of 'ngram predicts ngram' is as follows:" ] }
{ "figure_type": "Node Diagram", "file_name": "000000053726.png", "id": 53726, "ocr": [ "decl", "entity", "wh", "predicted", "cntity", "Eold" ] }
{ "caption": "Figure 1: Gold (solid) and predicted (dashed) entities, with mentions in two categories distinguished by shading.", "caption_no_index": "Gold (solid) and predicted (dashed) entities, with mentions in two categories distinguished by shading.", "id": 33494, "image_id": 53726, "mention": [ [ "Figure 1 illustrates this using the example from Pradhan et al. (2014), which has been extended with shading representing categories." ], [ "where K i is the i th entity in the key (gold) data (and Ri is correspondingly the i th response entity); | | is the weighted partition magnitude within entity i, i.e. the number of instances of a mention from partition type π being either the source or target of a coreference link, multiplied by the weight 0.5 (since source and target may be of different types, and each is worth 'half a link'); and is the set of elements of type π obtained by intersecting the key entities with the response entities, with each mention again being worth 0.5 points for its respective type π. 10 Thus for the example in Figure 1, declaratives get 0.5 points for their correct involvement in a+b, but none for the missing link with c, and 1 point for their involvement in the correct g+f (since both are decl).", "The total possible links for declaratives in Figure 1 are worth 2 points (0.5 for a+b, 0.5 for b+c and 1 for g+f), so that decl scores a recall of 1.5/2 or 0.75 in this example." ] ], "paragraph": [ "Figure 1 illustrates this using the example from Pradhan et al. (2014), which has been extended with shading representing categories. The solid oval represent two gold entities, with mentions {a,b,c} and {d,e,f,g}. Dashed ovals give three predicted entities, with mentions {a,b}, {c,d} and {g,f,h,i}. Note that mention e is not in any predicted entity, and h+i are not in the gold data. Pradhan et al.'s implementation of the MUC metric tallies the partitions with respect to gold and predicted mentions, such that a predicted link a+b is a correct positive (since a+b are in the same gold entity), c+d is a false positive, and the absence of predicted b+c is a false negative.", "where K i is the i th entity in the key (gold) data (and Ri is correspondingly the i th response entity); | | is the weighted partition magnitude within entity i, i.e. the number of instances of a mention from partition type π being either the source or target of a coreference link, multiplied by the weight 0.5 (since source and target may be of different types, and each is worth 'half a link'); and is the set of elements of type π obtained by intersecting the key entities with the response entities, with each mention again being worth 0.5 points for its respective type π. 10 Thus for the example in Figure 1, declaratives get 0.5 points for their correct involvement in a+b, but none for the missing link with c, and 1 point for their involvement in the correct g+f (since both are decl). The total possible links for declaratives in Figure 1 are worth 2 points (0.5 for a+b, 0.5 for b+c and 1 for g+f), so that decl scores a recall of 1.5/2 or 0.75 in this example. Indeed, only 1 of 4 decl link endpoints is missed in this example. We have implemented the p-link metric as an extension to Pradhan et al.'s original code, and our code is freely available. 11 To test whether genre or sentence type has more influence on p-link, we evaluate manual and automatic coreferencer output, using a conzeldes/reference-coreference-scorers. figurable rule-based coreferencer called xrenner (Zeldes &amp; Zhang 2016). 
12 The tool can be set up to produce GUM's annotation scheme. The same data subset as for POS tagging was doubly corrected, and is used below." ] }
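The decl recall arithmetic in the record above can be reproduced directly. The per-link weights below follow the text's accounting (each declarative endpoint of a key link is worth 0.5 points, so g+f with two decl endpoints is worth 1.0):

```python
# Key (gold) links relevant to the declarative category, with the decl
# weight of each link as given in the running text.
key_decl_weights = {("a", "b"): 0.5, ("b", "c"): 0.5, ("f", "g"): 1.0}
# Links recoverable from the dashed (predicted) entities.
response_links = {("a", "b"), ("c", "d"), ("f", "g")}

matched = sum(w for link, w in key_decl_weights.items() if link in response_links)
total = sum(key_decl_weights.values())
print(f"decl recall = {matched}/{total} = {matched / total}")  # 1.5/2.0 = 0.75
```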
{ "figure_type": "Graph Plot", "file_name": "000000053727.png", "id": 53727, "ocr": [ "Wiyix" ] }
{ "caption": "Figure 2: Active learning performance for the comparable corpora classification in Urdu-English language-pair", "caption_no_index": "Active learning performance for the comparable corpora classification in Urdu-English language-pair.", "id": 33495, "image_id": 53727, "mention": [ [ "Figure 2 shows our results for the Urdu-English language pair, and Figure 3 plots the Spanish-English results with the x-axis showing the total number of queries posed to obtain annotations and the y-axis shows the resultant improvement in accuracy of the classifier." ] ], "paragraph": [ "In section 5, we proposed multiple active learning strategies for both eliciting both kinds of annotations. A good active learning strategy should select instances that contribute to the maximal improvement of the classifier. The effectiveness of active learning is typically tested by the number of queries the learner asks and the resultant improvement in the performance of the classifier. The classifier performance in the comparable sentence classification task can be computed as the F-score on the held out dataset. For this work, we assume that both the annotations require the same effort level and so assign uniform cost for eliciting each of them. Therefore the number of queries is equivalent to the total cost of supervision. Figure 2 shows our results for the Urdu-English language pair, and Figure 3 plots the Spanish-English results with the x-axis showing the total number of queries posed to obtain annotations and the y-axis shows the resultant improvement in accuracy of the classifier. In these experiments we do not actively select for the second annotation but acquire the parallel segment from the same sentence. We compare this over a random baseline where the sentence pair is selected at random and used for eliciting both annotations at the same time." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053728.png", "id": 53728, "ocr": [ "1", "L", "Fchmatel", "CaA", "Cten", "Training instances", "amert" ] }
{ "caption": "Figure 6: Comparison of Uniform and Estimated representation policies, both using the Random annotator policy and Positive+Negative evidence. The Estimated policy exhibits the performance delay of Figure 4, indicating this stems from the model’s initial poor estimates of unseen label probabilities, but also shows the value of those estimates in the improvements once those estimates are based on sufficient training instances.", "caption_no_index": "Comparison of Uniform and Estimated representation policies, both using the Random annotator policy and Positive+Negative evidence. The Estimated policy exhibits the performance delay of Figure 4, indicating this stems from the model’s initial poor estimates of unseen label probabilities, but also shows the value of those estimates in the improvements once those estimates are based on sufficient training instances.", "id": 33496, "image_id": 53728, "mention": [ [ "Given the same number of annotations-per-instance (line solidity), negative evidence provides significant performance improvements, after a brief initial delay (see Figure 6)." ], [ "Figures 6 and 5 compare representation and annotator policies under the simplest configurations." ] ], "paragraph": [ "Given the same number of annotations-per-instance (line solidity), negative evidence provides significant performance improvements, after a brief initial delay (see Figure 6).", "Figures 6 and 5 compare representation and annotator policies under the simplest configurations. The delayed performance is clearest between the representation policies. The Estimated representation policy also provides the most performance improvement, eventually providing benefits for every value of annotators_per_instance. The annotator policies, on the other hand, only provide significant benefits when annotators_per_instance ≥ 8. Referring back to Figure 4 confirms the combination of both policies outperforms either in isolation." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053729.png", "id": 53729, "ocr": [ "queen", "woman?", "king", "man" ] }
{ "caption": "Figure 1: Using the vector offset method to solve the analogy task (Mikolov et al., 2013c).", "caption_no_index": "Using the vector offset method to solve the analogy task (Mikolov et al., 203c).", "id": 33497, "image_id": 53729, "mention": [ [ "If the offset woman − man represents an abstract gender feature, adding that offset to king should lead us to queen (Figure 1)." ], [ "Figure 1: Using the vector offset method to solve the analogy task (Mikolov et al., 2013c)." ], [ "The offset model is typically understood as in Figure 1: the analogy task is solved by finding x = a * − a + b." ] ], "paragraph": [ "The system is expected to infer the relation between the first two words-man and woman-and find a word that stands in the same relation to king. When this task is solved using the offset method, there is no explicit set of relations that the system is trained to identify. We simply subtract the vector for man from the vector for woman and add it to king. If the offset woman − man represents an abstract gender feature, adding that offset to king should lead us to queen (Figure 1).", "In the rest of this paper, we describe the set of analogy problems that we used to evaluate the VSMs' representation of quantificational features, and explore how accuracy is affected by the con-king queen man woman? woman? Figure 1: Using the vector offset method to solve the analogy task (Mikolov et al., 2013c).", "The offset model is typically understood as in Figure 1: the analogy task is solved by finding x = a * − a + b. In practice, since the space is continuous, x is unlikely to precisely identify a word in the vocabulary. The guess is then taken to be the word x * that is nearest to x:" ] }
{ "figure_type": "Bar Chart", "file_name": "000000053730.png", "id": 53730, "ocr": [ "Full Arzicl", "Bccy Crl;", "Tlle O~I;", "1", "1", "1", "1", "1", "1" ] }
{ "caption": "Figure 2: Data ablation study. Reasonable performance can be obtained with titles only but full articles are crucial for achieving best performance.", "caption_no_index": "Data ablation study. Reasonable performance can be obtained with titles only but full articles are crucial for achieving best performance.", "id": 33498, "image_id": 53730, "mention": [ [ "As we can see in Figure 2, the Longformer achieves the highest performance when titles are removed, given that it is able to process a larger portion of the article body than other pre-trained models." ] ], "paragraph": [ "To study whether full articles are required to achieve reasonable performance on this dataset, we evaluate a few representative models on two versions of the dataset, one with titles only and another with article bodies only. As we can see in Figure 2, the Longformer achieves the highest performance when titles are removed, given that it is able to process a larger portion of the article body than other pre-trained models. We also notice that the gap between BioBERT's 'Title Only' and 'Body Only' performance is much smaller than other models, suggesting that BioBERT's domain-specific pretraining allows it to use the salient title information more efficiently. We conclude that even though the article's body alone is more predictive than the title, standalone titles can still obtain reasonable performance on this dataset and are necessary to achieve the best possible performance." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053731.png", "id": 53731, "ocr": [ "Ijh", "Hu", "Ia", "J", "hx W", "Mulhk", "hraxb", "hlhik" ] }
{ "caption": "Figure 1: Flow of Analysis", "caption_no_index": "Flow of Analysis.", "id": 33499, "image_id": 53731, "mention": [ [ "Figure 1 shows the flow of analysis in our experimental system with the preediting module." ], [ "The preediting module examines from left to right on the list of morpholexical units whether a part of the list and the condition of the rewriting rules are matched, and it rewrites the parts where matches are Figure 1: Flow of Analysis established." ] ], "paragraph": [ "Figure 1 shows the flow of analysis in our experimental system with the preediting module. After the completion of morpholexical analysis, our preediting module runs to rewrite the original expression. Syntactic analysis rules is then applied to produce parse trees for the rewritten expression. If the initial syntactic analysis fails 2 , the process returns to the preediting module. In a second preediting phase, the module restrains some rewriting rules which were applied in the first phase, and/or newly uses rules which were not applied, according to the certainty factor (See Section 2.2) given to the rules.", "The preediting module examines from left to right on the list of morpholexical units whether a part of the list and the condition of the rewriting rules are matched, and it rewrites the parts where matches are Figure 1: Flow of Analysis established. Note that the module is not designed for the exclusive use of the headlines, but is a general framework which deals with ordinary expressions." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053732.png", "id": 53732, "ocr": [ "Input: X", "Taggerppo", "z-flx", "Output: flx)", "Taggerct8", "y-h(xf(x))", "Output: CTB-style", "Tags " ] }
{ "caption": "Figure 1: Traditional Pipeline-based Strategy for Heterogeneous POS Tagging", "caption_no_index": "Traditional Pipeline-based Strategy for Heterogeneous POS Tagging.", "id": 33500, "image_id": 53732, "mention": [ [ "The brief sketch of these methods is shown in Figure 1." ] ], "paragraph": [ "These methods regard one annotation as the main target and another annotation as the complementary/auxiliary purposes. For example, in their solution, an auxiliary tagger Tagger PPD is trained on a complementary corpus PPD, to assist the target CTB-style Tagger CTB . To refine the character-based tagger, PPD-style character labels are directly incorporated as new features. The brief sketch of these methods is shown in Figure 1." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053733.png", "id": 53733, "ocr": [ "Hain FSH", "7\" @RP", "Fxikr FSAtr FSAT\"", "FSG D\"", "T4/", "IcP" ] }
{ "caption": "Figure 3. Combination of two types of FSA into an RTN.", "caption_no_index": "Combination of two types of FSA into an RTN.", "id": 33501, "image_id": 53733, "mention": [ [ "For example, from the tree in Figure 3, some phrasal rules as follows can be extracted:" ] ], "paragraph": [ "The rule sets for converting POS tags into phrasal tags are extracted from the PTB. For example, from the tree in Figure 3, some phrasal rules as follows can be extracted:" ] }
{ "figure_type": "Graph Plot", "file_name": "000000053734.png", "id": 53734, "ocr": [ "HBBA:", "", "-B", "JBKC", "", "j @Cicm", "IIijIIulriz", "JTE[ i(ili [ Zll %sl:" ] }
{ "caption": "Figure 9. The syllable accuracy rates of LI system with different maximum number of the Gaussian components in a Gaussian mixture per state using MCS.", "caption_no_index": "The syllable accuracy rates of LI system with different maximum number of the Gaussian components in a Gaussian mixture per state using MCS.", "id": 33502, "image_id": 53734, "mention": [ [ "The results of MCS part are shown in Figure 9 and Figure 10." ], [ "First, all of the performances of all the languages using MCS (Figure 9) are better than those without using MCS (Figure 3).", "Second, unlike the results shown in Figure 3, the results in Figure 9 show that increasing of the number of Gaussian components in a Gaussian mixture caused the performance to drop in the Hakka case." ] ], "paragraph": [ "For the MCS part, we increased the number of Gaussian components in a Gaussian mixture per state depending on the training occurrence. This means that when we set the number of Gaussian components in a Gaussian mixture per state to be 16, not all of the states would increase the number of Gaussian components in the Gaussian mixture to 16. If the available training occurrences of the corresponding state are satisfied with Equation (5), the state will adjust to the number of Gaussian components in a Gaussian mixture to be 16. We also used the results of the LI recognition system in Figure 3 for comparison. The LI system described in Section 3 increased the number of Gaussian components in a Gaussian mixture per state in a brute-force manner, which means that when we set the number of Gaussian components in a Gaussian mixture per state to 16, then, no matter how many occurrences the particular state has, all of the states have to adjust the number of Gaussian components in the Gaussian mixture to 16. The results of MCS part are shown in Figure 9 and Figure 10.", "First, all of the performances of all the languages using MCS (Figure 9) are better than those without using MCS (Figure 3). The average improvements for Hakka, Mandarin, and Taiwanese are 4.5%, 2.5%, and 2.5%, respectively. The best results for LI-system for each of the languages are 42.01%, 58.45%, and 48.79% when the maximum number of Gaussian components in a Gaussian mixture per state is 32, 64, and 32 for Hakka, Mandarin, and Taiwanese, respectively. Second, unlike the results shown in Figure 3, the results in Figure 9 show that increasing of the number of Gaussian components in a Gaussian mixture caused the performance to drop in the Hakka case. This is because of increasing the number of Gaussian components in a Gaussian mixture per state based on the occurrence training data using MCS. The accuracy rate increases when the number of the Gaussian components in a Gaussian mixture is also increased." ] }
{ "figure_type": "Bar Chart", "file_name": "000000053735.png", "id": 53735, "ocr": [ "861", "1", "03", "1", "1", "I", "1", "1", "1", "1", "6", "1", "1", "1", "1", "1", "1", "7", "1", "1", "1", "1", "1" ] }
{ "caption": "Figure 5: Dendrogram of vectors of measures correlations on a dataset. The height of the bar indicates the distance between vectors or groups of vectors. Postfixes ‘p’ and ‘t’ denote the datasets for PG and TST tasks, respectively.", "caption_no_index": "Dendrogram of vectors of measures correlations on a dataset. The height of the bar indicates the distance between vectors or groups of vectors. Postfixes ‘p’ and ‘t’ denote the datasets for PG and TST tasks, respectively.", "id": 33503, "image_id": 53735, "mention": [ [ "For this purpose, we represent each dataset as a vector of correlations of each measure with the human judgments and plot a dendrogram (see Figure 5) to show the clustered structure of the obtained vectors." ] ], "paragraph": [ "For this purpose, we represent each dataset as a vector of correlations of each measure with the human judgments and plot a dendrogram (see Figure 5) to show the clustered structure of the obtained vectors. The dendrogram should be interpreted as follows. The height at which each dataset is connected to another dataset or group of datasets indicates the distance between the dataset vectors. We additionally plot a heatmap of cosine similarities of these datasets vectors in Appendix Figure 9." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053736.png", "id": 53736, "ocr": [ "otiiciaks", "atje", "persoi", "milions", "TL?" ] }
{ "caption": "Figure 1: Graphical representation of predicate-argument dependencies for the sentence The person who officials say stole millions.", "caption_no_index": "Graphical representation of predicate-argument dependencies for the sentence The person who officials say stole millions.", "id": 33504, "image_id": 53736, "mention": [ [ "The '0' function identifies j as i's predicate concept (so '0' maps entity or eventuality instances to instances of concepts associated with words in X ), the '1' function identifies j as i's first argument (e.g. its subject), the '2' function identifies j as i's second argument (e.g. its direct object), and so on. 3 A graphical representation of the predicate-argument relations generated by this system for the sentence The person who officials say stole millions, is shown in Figure 1." ], [ "The semantic dependency relations for this sentence are represented graphically in Figure 1." ] ], "paragraph": [ "Mapping M defines associations from vocabulary items x ∈ X to meaning functions and associated categories of the form '(λ ...) : uϕ 1 ...ϕ v ψ', where '(λ ...)' is a meaning function and 'uϕ 1 ...ϕ v ψ' is a category consisting of output category u ∈ U, a sequence of argument categories ϕ 1 , ..., ϕ v ∈ {-a, -b, -c, -d}× C, and an optional non-local argument category ψ ∈ ({-r, -i}×C) ∪ {ε}. Since this model will be used to generate predicate-argument relations but not scoping relations, these meaning functions are constrained to describe simple existentiallyquantified variables over instances of entities or eventualities, connected by a set of numbered argument relations. These meaning functions map instances of entities or eventualities i, j, k to truth values based on whether the described argument relations hold between these referents. These argument relations are defined as numbered functions (v i)= j from eventuality or predicate instances i to argument instances j identified by the number of the function v. The '0' function identifies j as i's predicate concept (so '0' maps entity or eventuality instances to instances of concepts associated with words in X ), the '1' function identifies j as i's first argument (e.g. its subject), the '2' function identifies j as i's second argument (e.g. its direct object), and so on. 3 A graphical representation of the predicate-argument relations generated by this system for the sentence The person who officials say stole millions, is shown in Figure 1. This is similar to the semantic dependency representations of Mel'čuk (1988) and Parsons (1990).", "The semantic dependency relations for this sentence are represented graphically in Figure 1. (V-aN-hN)-c(V-aN-hN)-d(V-aN-hN)" ] }
{ "figure_type": "Equation", "file_name": "000000053737.png", "id": 53737, "ocr": [ "TuLub:eri;=", "10J 42405f", "8;", "002 4e* 4e4044374 u71939313 842442 449", "Vc H*4440449234983843432434*442444", "VL 7 Juay743353949nu uY: 444;", "4} {44613003423028803764312748245", "Vr18 Ly#lullloayukuulylal 3/xulyy 455", "4Ek6 4*44i4-4964{4ak4yy 4 .4444: 4EE8" ] }
{ "caption": "Figure 2: A fragment of the Unicode table for Korean Hangul characters.", "caption_no_index": "A fragment of the Unicode table for Korean Hangul characters.", "id": 33505, "image_id": 53737, "mention": [ [ "Figure 2 depicts a fragment of the Unicode table for Korean, in which each line corresponds to a combination of the first consonant and vowel and each column corresponds to the last consonant." ] ], "paragraph": [ "We use Unicode, in which Hangul characters are sorted according to the pronunciation. Figure 2 depicts a fragment of the Unicode table for Korean, in which each line corresponds to a combination of the first consonant and vowel and each column corresponds to the last consonant. The number of columns is 28, i.e., the number of the last consonants and the case in which the last consonant is not used. From this figure, the following rules can be found:" ] }
{ "figure_type": "Equation", "file_name": "000000053738.png", "id": 53738, "ocr": [ "IE", "WL", "WYhN\"HY" ] }
{ "caption": "Figure 3:Extensionalearning curves on as percentage of the training set.", "caption_no_index": "Extensionalearning curves on as percentage of the training set.", "id": 33506, "image_id": 53738, "mention": [ [ "To this end we trained a supervised classifier based on Support Vector Machines, and draw its learning curves as a function of percentage of the training set size (Figure 3)." ] ], "paragraph": [ "A major point of comparison between IL and EL is the amount of supervision effort required to obtain a certain level of performance. To this end we trained a supervised classifier based on Support Vector Machines, and draw its learning curves as a function of percentage of the training set size (Figure 3). In the case of 20newsgroups, to achieve the 65% F1 performance of IL the supervised settings requires about 3200 documents (about 160 texts per category), while our IL method requires only the category name. Reuters-10 is an easier corpus, therefore EL achieves rather rapidly a high performance. But even here using just the category name is equal on average to labeling 70 documents per-category (700 in total). These results suggest that IL may provide an appealing cost-effective alternative in practical settings when sub-optimal accuracy suffices, or when it is too costly or impractical to obtain sufficient amounts of labeled training sets." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053739.png", "id": 53739, "ocr": [ "#ik", "WEha", "Toen", "'5", "77eo", "Ollhk aa ", "ITI;", "2aa:", "Jnhhr", "Ii", "IRenr", "5el", "jnr" ] }
{ "caption": "Figure 3: The decision pipeline for comparative question filtering.", "caption_no_index": "The decision pipeline for comparative question filtering.", "id": 33507, "image_id": 53739, "mention": [ [ "Figure 3 shows the decision pipeline on an abstract level." ] ], "paragraph": [ "Figure 3 shows the decision pipeline on an abstract level. The pipeline performs preprocessing steps such as lowercasing (1.), sorting out sentences based on number of tokens (2.), and determining if the text is a question (3.). The last two steps (4. and 5.) sort the text into one of the comparative categories, which is determined by searching for keywords in the text. Our analysis of comparative data shows us that if a question contains \"than\" or \"as\", it is classified as comparative. Comparative questions have the tendency to contain \"than\" with \"more\" or \"less\". In that case, the words that precede \"than\" are evaluated to check if they belong to the group of adjectives or adverbs; if the last two filters do not classify the text, then the sentence is labeled as not comparative, thus concluding the automatic classification step." ] }
{ "figure_type": "Scatterplot", "file_name": "000000053740.png", "id": 53740, "ocr": [ "Wamch", "(8tc ?JM -", "TeMe" ] }
{ "caption": "Figure 1: Length of identity chains and number of their bridging markables with Spearman’s ρ = 0.6595", "caption_no_index": "Length of identity chains and number of their bridging markables with Spearman’s ρ = 0.6595.", "id": 33508, "image_id": 53740, "mention": [ [ "Figure 1 shows the relation between the chain length and the number of its bridging markables." ] ], "paragraph": [ "We computed the correlation between the length of identity chain and the number of bridging markables that are linked to this chain. Using Spearman's rank correlation coefficient, we found that there is a strong correlation between the chain length and the number of its bridges: 0.6595, with p-value of 1.35E-008. Figure 1 shows the relation between the chain length and the number of its bridging markables." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053741.png", "id": 53741, "ocr": [ "Root", "SBAR", "NNPS", "VBP_", "Statistics show", "that", "NPz", "VP;", "DT;", "NN,", "VBPz", "numher" ] }
{ "caption": "Figure 3. Parse tree of the example sentence.", "caption_no_index": "Parse tree of the example sentence.", "id": 33509, "image_id": 53741, "mention": [ [ "Figure 3 shows the parse tree of this sentence." ], [ "Through Figure 3, the observed words are \"show\" and \"are\", the subjects are \"statistics\" and \"number\" respectively that we can conclude \"statistics\" should use plural verb and \"number\" should use singular verb \"is\" instead of \"are\"." ] ], "paragraph": [ "My brother is a nutritionist. My sisters are dancers. Therefore, the subject of the sentence is the key point. To decide whether the verb is singular or plural should look into the context and find out the POS of the subject. We utilize the existing information given by NUCLE to extract the subject of the verb. For example, the sentence \"Statistics show that the number are continuing to grow with the existing population explosion.\" Figure 3 shows the parse tree of this sentence.", "Through Figure 3, the observed words are \"show\" and \"are\", the subjects are \"statistics\" and \"number\" respectively that we can conclude \"statistics\" should use plural verb and \"number\" should use singular verb \"is\" instead of \"are\". The other features extracted for training are listed in Table 3. The purpose of extracting the noun phrase after the observed word is in the situation of the subject is after the verb, such as \"Where are my scissors?\", \"scissors\" is the subject of this sentence." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053742.png", "id": 53742, "ocr": [ "0.53", "1.855", "0,55", "2,875", "2.855", "Fassiv?", "0.56", "MooSirp ?", "Syrax", "2,355", "doOC", "2200", "30C0)", "40OC", "Jodoo", "Uke'", "lnceleiic Vihich hAVA", "he?4" ] }
{ "caption": "Figure 8: Learning curves of MODSIMPLE and SYN in terms of the number of bunsetsus which have a head.", "caption_no_index": "Learning curves of MODSIMPLE and SYN in terms of the number of bunsetsus which have a head.", "id": 33510, "image_id": 53742, "mention": [ [ "MODSIMPLE is almost always better than PASSIVE and does not cause a significant deterioration of accuracy unlike NAIVE. 6 Comparison of MODSIMPLE and SYN is shown in Figure 8." ] ], "paragraph": [ "Why does this phenomenon occur? It is because each bunsetsu pair is not independent and pairs in the same sentence are related to each other. They satisfy the constraints discussed in Section 3.2. Furthermore, the algorithm we use, i.e., Sassano's, assumes these constraints and has the specific order for processing bunsetsu pairs as we see in Figure 3. Let us consider the meaning of {j, i, \"O\"} if the head of the j-th bunsetsu is the k-th one such that j &lt; k &lt; i. In the context of the algorithm in Figure 3, {j, i, \"O\"} actually means that the j-th bunsetsu modifies th l-th one such that i &lt; l. That is \"O\" does not simply mean that two bunsetsus does not have a dependency relation. Therefore, we should not generate {j, i, \"O\"} in the case of j &lt; k &lt; i. Such labeled instances are not needed and the algorithm in Figure 4 does not generate them even if a fully annotated sentence is given. Based on the analysis above, we modified NAIVE and defined MODSIMPLE, where unnecessary labeled examples are not generated. Now let us compare NAIVE with MODSIMPLE (Figure 7). MODSIMPLE is almost always better than PASSIVE and does not cause a significant deterioration of accuracy unlike NAIVE. 6 Comparison of MODSIMPLE and SYN is shown in Figure 8. Both exhibit a similar curve. Figure 9 shows the same comparison in terms of required queries to human annotators. It shows that SYN is better than MODSIMPLE especially at the earlier stage of active learning." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053743.png", "id": 53743, "ocr": [ "TRL:", "1", "8", "SBLEU-O", "SDLEU-M F", "433", "SHLFU:ILS", "43-0g-]" ] }
{ "caption": "Figure 1: Made-up Example: The 3 sloping lines represent all 3 candidates in N-best list. Their SBLEU (BLEU = SBLEU when only one sentence) and funtions are in the legend of figure.", "caption_no_index": "Made-up Example: The 3 sloping lines represent all 3 candidates in N-best list. Their SBLEU (BLEU = SBLEU when only one sentence) and funtions are in the legend of figure.", "id": 33511, "image_id": 53743, "mention": [ [ "For example, in Figure 1, MERT chooses the middle point of two cross points." ], [ "Using the N-best derivation of a decoder to approximate its search space, we find the optimal set of parametersλ (Fig 1), which maximizes the weighted sum of correlation on a set with M sentences, as in Eq. (1).λ" ] ], "paragraph": [ "For example, in Figure 1, MERT chooses the middle point of two cross points. By contrast, MRC tries to maximize the rank correlation between SBLEU and the model score. and will adjust λ into the open interval (1.5, 2), in which the order of candidates' model score is perfectly the same as their SBLEU. We obtain λ = 1.37 via assuming the objective of Min-Risk is the expectation of BLEU, and the probability of each (c)andicate is given by p(c i ) = exp (γ • score(c))/ i exp (γ • score(c i )) with γ = 1 . In MIRA (Chiang et al., 2008), if we choose candidates whose SBLEU are 0.5 and 0.2 as positive and negative examples respectively, MIRA will make the margin between them as large as possible and λ will no smaller than 2.", "Using the N-best derivation of a decoder to approximate its search space, we find the optimal set of parametersλ (Fig 1), which maximizes the weighted sum of correlation on a set with M sentences, as in Eq. (1).λ" ] }
{ "figure_type": "Graph Plot", "file_name": "000000053744.png", "id": 53744, "ocr": [ "", "1", "", "\"Aiad Se*0#1E:" ] }
{ "caption": "Figure 8. Time path of cosine similarities using continuous space model with word pairs (example 1).", "caption_no_index": "Time path of cosine similarities using continuous space model with word pairs (example 1).", "id": 33512, "image_id": 53744, "mention": [ [ "Figure 8 reprises the first example word list shown for LSA-based semantics in Figure 5, but now using cosine similarity from the new space." ], [ "Figure 8." ] ], "paragraph": [ "Figure 8 reprises the first example word list shown for LSA-based semantics in Figure 5, but now using cosine similarity from the new space. The main feature of four peaks remains, but there are differences such as now instead of increasing similarity with on the right (pets), the plot levels off.", "Figure 8. Time path of cosine similarities using continuous space model with word pairs (example 1)." ] }
{ "figure_type": "Equation", "file_name": "000000053745.png", "id": 53745, "ocr": [ "Frane;", "Body-Movement", "Frame Elements;", "Agent", "Body Part", "Cause", "She clapped her hands in inspiration", "~NP", "~NP", "~PP", "Ext;", "~Obj,", "~Comp." ] }
{ "caption": "Figure 1. Frame for lemma “clap” shown with three core Frame Elements and a sentence annotated with element type, phrase type, and grammatical function.", "caption_no_index": "Frame for lemma “clap” shown with three core Frame Elements and a sentence annotated with element type, phrase type, and grammatical function.", "id": 33513, "image_id": 53745, "mention": [ [ "Figure 1 shows an example of an annotated sentence and its appropriate semantic frame." ] ], "paragraph": [ "In each FrameNet sentence, a single target predicate is identified and all of its relevant Frame Elements are tagged with their element-type (e.g., Agent, Judge), their syntactic Phrase Type (e.g., NP, PP), and their Grammatical Function (e.g., External Argument, Object Argument). Figure 1 shows an example of an annotated sentence and its appropriate semantic frame." ] }
{ "figure_type": "Equation", "file_name": "000000053746.png", "id": 53746, "ocr": [ "Gcreration Pioccss", "fori=1.2_", "choose =", "K€ s", "STOP: returt", "choose a heldset E; c FIELDSKc;f}", "choose a templatc T; € TEMPLATESic;4,Fy", "1record $", "im=" ] }
{ "caption": "Figure 2: Pseudocode for the generation process. The generated text w is a deterministic function of the decisions.", "caption_no_index": "Pseudocode for the generation process. The generated text w is a deterministic function of the decisions.", "id": 33514, "image_id": 53746, "mention": [ [ "Generation Process for i = 1, 2, . . . : −choose a record r i ∈ s −if r i = STOP: return −choose a field set F i ⊂ FIELDS(r i .t) −choose a template T i ∈ TEMPLATES(r i .t, F i ) Figure 2: Pseudocode for the generation process." ], [ "Figure 2 shows the pseudocode for the generation process, while Figure 3 depicts an example of the generation process on a WEATHERGOV scenario." ] ], "paragraph": [ "Generation Process for i = 1, 2, . . . : −choose a record r i ∈ s −if r i = STOP: return −choose a field set F i ⊂ FIELDS(r i .t) −choose a template T i ∈ TEMPLATES(r i .t, F i ) Figure 2: Pseudocode for the generation process. The generated text w is a deterministic function of the decisions.", "The decisions are structured hierarchically into three types of decisions: (i) record decisions, which determine which records in the world state to talk about (macro content selection); (ii) field set decisions, which determine which fields of those records to mention (micro content selection); and (iii) template decisions, which determine the actual words to use to describe the chosen fields (surface realization). Figure 2 shows the pseudocode for the generation process, while Figure 3 depicts an example of the generation process on a WEATHERGOV scenario." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053747.png", "id": 53747, "ocr": [ "i" ] }
{ "caption": "Figure 2: GHKM tree equivalent of example translation object. The light gray nodes are rule nodes of the GHKM tree.", "caption_no_index": "GHKM tree equivalent of example translation object. The light gray nodes are rule nodes of the GHKM tree.", "id": 33515, "image_id": 53747, "mention": [ [ "In Figure 1, we show an example translation object and in Figure 2, we show its associated GHKM tree.", "Essentially, a rule like: \"not 1 → ne 1 pas\" (see Figure 2) means: if we see the word \"not\" in English, followed by a phrase already translated into French, then translate the entire thing as the word \"ne\" + the translated phrase + the word \"pas.\"" ], [ "In Figure 2, some of the tree nodes are annotated with GHKM rules." ], [ "A rule node is a tree node annotated with a GHKM rule (for instance, nodes t (i) or t (v) of Figure 2, but not node t (iv) )." ], [ "For Figure 2, the successor list of node t (i) is t (ii) , t (v) , t (xiii) , and the successor list of node t (v) is t (vii) , t (viii) ." ], [ "For Figure 2, the signature of node t (i) is x 1 , x 2 , x 3 , and the signature of node t (v) is \"am\", x 1 ." ], [ "Notice that the signature of every rule node in Figure 2 coincides with the source list of its GHKM rule." ], [ "Define the substitution of string list S into rule ele-Figure 2: GHKM tree equivalent of example translation object." ], [ "For Figure 2, the rule node successor list of node t (viii) is t (xi) ." ] ], "paragraph": [ "In Figure 1, we show an example translation object and in Figure 2, we show its associated GHKM tree. The GHKM tree is simply the parse tree f of the translation object, annotated with rules (hereafter referred to as GHKM rules). We will not describe in depth the mapping process from translation object to GHKM tree. Suffice it to say that the alignment induces a set of intuitive translation rules. Essentially, a rule like: \"not 1 → ne 1 pas\" (see Figure 2) means: if we see the word \"not\" in English, followed by a phrase already translated into French, then translate the entire thing as the word \"ne\" + the translated phrase + the word \"pas.\" A parse tree node gets labeled with one of these rules if, roughly speaking, its span is still contiguous when projected (via the alignment) into the target language.", "Formally, what is a GHKM tree? Define a rule element as a string or an indexed variable (e.g. x 1 , x 4 , x 32 ). A GHKM rule of rank k (where k is a non-negative integer) is a pair R s , R d , where source list R s and destination list R d are both lists of rule elements, such that each variable of X k {x 1 , x 2 , ..., x k } appears exactly once in R s and exactly once in R d . Moreover, in R s , the variables appear in ascending order. In Figure 2, some of the tree nodes are annotated with GHKM rules. For clarity, we use a simplified notation. For instance, rule x 1 , x 2 , x 3 , x 3 , \",\", x 1 , x 2 is represented as \"1 2 3 → 3 , 1 2\". We have also labeled the nodes with roman numerals. When we want to refer to a particular node in later examples, we will refer to it, e.g., as t (i) or t (vii) .", "A rule node is a tree node annotated with a GHKM rule (for instance, nodes t (i) or t (v) of Figure 2, but not node t (iv) ). 
A tree node t 2 is reachable from tree node t 1 iff node t 2 is a proper descendant of node t 1 and there is no rule node (not including nodes t 1 , t 2 ) on the path from node t 1 to node t 2 .", "Define the successor list of a tree node t as the list of rule nodes and leaves reachable from t (ordered in left-to-right depth-first search order). For Figure 2, the successor list of node t (i) is t (ii) , t (v) , t (xiii) , and the successor list of node t (v) is t (vii) , t (viii) . The rule node successor list of a tree node is its successor list, with all non-rule nodes removed.", "Define the signature of a parse tree node t as the result of taking its successor list, replacing the jth rule node with variable x j , and replacing every nonrule node with its word label (observe that all nonrule nodes in the successor list are parse tree leaves, and therefore they have word labels). For Figure 2, the signature of node t (i) is x 1 , x 2 , x 3 , and the signature of node t (v) is \"am\", x 1 .", "Notice that the signature of every rule node in Figure 2 coincides with the source list of its GHKM rule. This is no accident, but rather a requirement. Define a GHKM tree node as a parse tree node whose children are all GHKM tree nodes, and whose GHKM rule's source list is equivalent to its signature (if the node is a rule node).", "Given these definitions, we can proceed to define how a GHKM tree expresses a translation theory. Suppose we have a list S = s 1 , ..., s k of strings. Define the substitution of string list S into rule element r as:", "For Figure 2, the rule node successor list of node t (viii) is t (xi) . So:" ] }
{ "figure_type": "Graph Plot", "file_name": "000000053748.png", "id": 53748, "ocr": [ "NHULYSIS", "TRAHSFER", "SYKTHES $", "dxep sprtex", "xlyr[", "TCzEhT", "stilkwsrlix", "SJs", "T", "Hi", "map\"obey |SE-pi3\"", "TCzect|\"", "THp'", "rawlex Ssm", "TSal", "Ma", "srlerqny ih", "mgelbr;ueg? /ohl" ] }
{ "caption": "Figure 1: The general TectoMT architecture (from (Popel and Žabokrtský, 2010, :298)).", "caption_no_index": "The general TectoMT architecture (from (Popel and Žabokrtský, 200, :298)).", "id": 33516, "image_id": 53748, "mention": [ [ "The system works on different levels of abstraction (cf. Figure 1) and uses Blocks and Scenarios to process the information across the architecture." ] ], "paragraph": [ "As with most rule-based systems, TectoMT consists of an analysis, transfer and synthesis stages. The system works on different levels of abstraction (cf. Figure 1) and uses Blocks and Scenarios to process the information across the architecture." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053749.png", "id": 53749, "ocr": [ "C0040426", "(.00m1334", "MnfWoGaric", "(01266853", "(01266846", "Elucl euis", "enti Gnia", "C0z6g858", "C0z66854", "Ineinlant VImIFi", "C470852}" ] }
{ "caption": "Figure 1: PAR-related concepts from C0040426 (Tooth structure). We highlight multiple paths,", "caption_no_index": "PAR-related concepts from C0040426 (Tooth structure). We highlight multiple paths,.", "id": 33517, "image_id": 53749, "mention": [ [ "This means there is a path of one or more PAR relations of distance between pairs of concepts, as shown in Figure 1." ] ], "paragraph": [ "After the previous step, we imported concepts and their PAR relations into a graph database 2 . Next, we queried the graph to select several random concepts and recursively extracted direct or related concepts at multiple distances. This means there is a path of one or more PAR relations of distance between pairs of concepts, as shown in Figure 1. Given that sometimes it is possible to 1 version 2022AA 2 Neo4j (https://neo4j.com/) find multiple paths between two concepts, we only used the shortest path between them. This process allowed us to extract the path length between two concepts. We select 20,000 concepts for this study to conduct the intrinsic tests rapidly. However, we can choose more concepts if necessary. • A dash-dash line represents the path between C0040426 (Tooth structure) and C0266846 (Dentin caries) with a distance of 2 PAR edges." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053750.png", "id": 53750, "ocr": [ "PRED", "Peah", "PRLL", "YEkK", "VkkII", "SUD", "ODJ", "OLJ", "ARGUMENTI", "ARGUMENT?", "ARGUMENT?" ] }
{ "caption": "Figure 3: Coordination in TOROT (reproduced with permission from Berdicevskis and Eckhoff 2015)", "caption_no_index": "Coordination in TOROT (reproduced with permission from Berdicevskis and Eckhoff 2015).", "id": 33518, "image_id": 53750, "mention": [ [ "In TOROT, the conjunction is the head (a null conjunction is inserted in case of asyndetic coordination) and all the conjuncts are its dependents, no special relation is used, see Figure 3." ] ], "paragraph": [ "Coordination. According to Popel et al.'s (2013) classification, SynTagRus and TOROT approaches to coordination are variants of resp. Moscow-style and Prague-style. In SynTagRus, the first conjunct is the head, the conjuction (if present) is its dependent (via the COORD relation or SENT-COORD for sentential coordination), the second conjunct is a dependent on a conjunction (via the COORD-CONJ relation), see Figure 2. In TOROT, the conjunction is the head (a null conjunction is inserted in case of asyndetic coordination) and all the conjuncts are its dependents, no special relation is used, see Figure 3. The SynTagRus approach enables simpler syntactic queries, whereas the TOROT approach makes it possible to render complicated stacked structures better. In addition, TOROT uses secondary dependencies to indicate predicate identity (in case of verb ellipsis) and shared dependents (see also Figure 1). The conversion algorithm handles coordination well, apart from rare cases of several entangled coordinated structures. Berdicevskis &amp; Eckhoff (2015) devised a method for inserting secondary dependencies for shared verb arguments, which achieves near-ceiling performance for subjects (which are often shared), but not for other arguments. The method, however, is not yet implemented in the conversion." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053751.png", "id": 53751, "ocr": [ ";", "Guided Backtrace", "MERT", "Ream Size" ] }
{ "caption": "Figure 4: The BLEU comparison between MERT and guided backtrace on nist04 test set over different beam sizes.", "caption_no_index": "The BLEU comparison between MERT and guided backtrace on nist0 test set over different beam sizes.", "id": 33519, "image_id": 53751, "mention": [ [ "Figure 4 compares the results of different beam sizes (2,4,8,16,30) between traditional MERT and guided backtrace." ] ], "paragraph": [ "where q = n − 1 and x equals to 1 if x is true, 0 otherwise. The final diversity results are shown in Table 3. We can see guided backtrace gets a better diversity than traditional MERT. Beam Size As we discussed before, searchaware tuning helps to accommodate search errors in decoding, and promotes good partial derivations. Thus, we believe that even with a small beam, these good partial derivations can still survive with search-aware tuning, resulting in a good translation quality. Figure 4 compares the results of different beam sizes (2,4,8,16,30) between traditional MERT and guided backtrace. The comparison shows that guided backtrace achieves better result than baseline MERT, and when the beam is smaller, the improvement is bigger. Moreover, guided backtrace method with a beam size 8 could achieve comparable BLEU score to traditional MERT with beam size 30." ] }
{ "figure_type": "Equation", "file_name": "000000053752.png", "id": 53752, "ocr": [ "sra{ Ioswa06 awsua 4", "retel ( aeded ebebU7jebeb iumnpua}", "WDL whesu ahumieyah", "TURHSHGOHEPHHERT HTTHCKDFEJE.SW RHQHHDSYPH" ] }
{ "caption": "Figure 4: Input and output for our automatic headline generation system.", "caption_no_index": "Input and output for our automatic headline generation system.", "id": 33520, "image_id": 53752, "mention": [ [ "In Figure 4, we present an example of input keywords and lexical-dependency phrases automatically extracted from a document describing incidents at the Turkey-Iraq border." ] ], "paragraph": [ "Headline Generation. We generate WIDLexpressions starting from an input document. First, we extract a weighted list of topic keywords from the input document using the algorithm of Zhou and Hovy (2003). This list is enriched with phrases created from the lexical dependencies the topic keywords have in the input document. We associate probability distributions with these phrases using their frequency (we assume Keywords C i raq 0.32, syria 0.25, rebels 0.22, kurdish 0.17, turkish 0.14, attack 0.10F that higher frequency is indicative of increased importance) and their position in the document (we assume that proximity to the beginning of the document is also indicative of importance). In Figure 4, we present an example of input keywords and lexical-dependency phrases automatically extracted from a document describing incidents at the Turkey-Iraq border." ] }
{ "figure_type": "Bar Chart", "file_name": "000000053753.png", "id": 53753, "ocr": [ "IMIOEL", "IllikluLt" ] }
{ "caption": "Figure 3. Relative frequencies of lengths of projective (black) and non-projective (grey) sentences in the Czech treebank.", "caption_no_index": "Relative frequencies of lengths of projective (black) and non-projective (grey) sentences in the Czech treebank.", "id": 33521, "image_id": 53753, "mention": [ [ "After the removal of the outliers, the longest sentence consists of 76 words in case of projective sentences and 91 words in case of non-projective ones. 7 The results of the fit are presented in Figure 3 and Table 2." ] ], "paragraph": [ "where = , + 1, + 2, … are sentence lengths; is the shift of the distribution (in this context, the length of the shortest sentence observed); , , and are free parameters. 5 He also provides a theoretical substantiation of the model. Frequencies of lengths of sentences represented by projective and nonprojective trees were fitted by this distribution; however, extreme outliers (the two longest projective sentences, consisting of 78 and 162 words, and the longest non-projective sentence, with 119 words) were removed 6 before the numerical procedures for the fit were performed. After the removal of the outliers, the longest sentence consists of 76 words in case of projective sentences and 91 words in case of non-projective ones. 7 The results of the fit are presented in Figure 3 and Table 2. Full data (also for Arabic, Polish, Russian, and Slovak) can be found at http://www.cechradek.cz/data/2019_Macutek_etal._Nonprojectiv-ity_Length_proportions.zip . 5 The hyperpascal distribution has three free parameters -, , and . The value of 0 is uniquely determined by the other parameters (cf. Wimmer and Altmann, 1999, p. 280). 6 The usual boxplot-based rules for detection of outliers (i.e., outliers are values below 1 − 1.5 or above 3 + 1.5 , with 1 and 3 being the first and the third quartile, respectively; = 1 − 3 is the interquartile range, cf. Tukey, 1977) indicate too many outliers for highly skewed distributions such as ours. While there are more sophisticated versions of boxplot available for such data (e.g. the one suggested by Bruffaerts et al., 2014), in this paper we use, as a rule of thumb, a boxplot with much wider whiskers defined by 1 − 5 and 3 + 5 ." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053754.png", "id": 53754, "ocr": [ "Training Time for HDP-WS", "VS. HCA-WSI", "1", "10+", "108", "3", "102", "HDP-WSI", "HCA-WS", "8", "Vunber of Lemma Usages ( 1QOO s)", "" ] }
{ "caption": "Figure 1: Comparison of the time taken to train the topic models of HDP-WSI and HCA-WSI for each lemma in the BNC dataset. For each method, one data point is plotted per lemma.", "caption_no_index": "Comparison of the time taken to train the topic models of HDP-WSI and HCA-WSI for each lemma in the BNC dataset. For each method, one data point is plotted per lemma.", "id": 33522, "image_id": 53754, "mention": [ [ "In addition, we compared the time taken 11 to run topic modelling for every lemma using both methods, the results of which are displayed in Figure 1." ] ], "paragraph": [ "In addition, we compared the time taken 11 to run topic modelling for every lemma using both methods, the results of which are displayed in Figure 1. These results show that the computation time of HCA-WSI is consistently lower than that of HDP-WSI, by over an order of magnitude." ] }
{ "figure_type": "Scatterplot", "file_name": "000000053755.png", "id": 53755, "ocr": [ "KALAS", "HLBA", "cisDEn", "raben;", "La-ikcuh" ] }
{ "caption": "Figure 4: Log-likelihood differences (MDIFF) between pairs of sentences for individual samples by five models.", "caption_no_index": "Log-likelihood differences (MDIFF) between pairs of sentences for individual samples by five models.", "id": 33523, "image_id": 53755, "mention": [ [ "Figure 4 shows the distribution of M DIFF between sentence pairs in log-likelihood measures." ], [ "To investigate the results in more detail, we plotted the log-likelihood differences M DIFF in the 383 samples, drawn from the original 100 examples from the CrowS-Pairs dataset, in Figure 4." ] ], "paragraph": [ "Figure 4 shows the distribution of M DIFF between sentence pairs in log-likelihood measures. It show that the differences within each pair tend to be very small between -.25 and .25. It means any slight changes caused by word choice, which may contribute to changes in log-likelihood measures of .25, can change the results. In addition, the five models show varying degrees of dispersion in the log-likelihood differences.", "To investigate the results in more detail, we plotted the log-likelihood differences M DIFF in the 383 samples, drawn from the original 100 examples from the CrowS-Pairs dataset, in Figure 4. The figure also confirms that the log-likelihood differences of many samples lie within the range of -.25 to .25." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053756.png", "id": 53756, "ocr": [ "~Exfl", "EAn", "dF", "Lui:", "ZJ #", "\"is", "\"sn ", "6n\"" ] }
{ "caption": "Figure 1: PRADO Model Architecture", "caption_no_index": "PRADO Model Architecture.", "id": 33524, "image_id": 53756, "mention": [ [ "Figure 1 shows the overall architecture of our proposed network PRADO ." ] ], "paragraph": [ "Figure 1 shows the overall architecture of our proposed network PRADO . It consists of a projected embedding layer, a convolutional and attention encoder mechanism and a final classification layer. We describe each component in detail below and contrast them with existing methods." ] }
{ "figure_type": "Equation", "file_name": "000000053757.png", "id": 53757, "ocr": [ "~f ;", "Meectat", "7153", "Siot", "3ud", "Ehig", "~\"Fc-Z#-lt ns", "xtIc #8", "EwId-", "3", "ML6", "IL:h <", "1la*", "Izl : t5;", "9443i", "ct'; #czk jout tis ier %t%C,", "[6;_|s trz ;>", "pnafig;", "curi? E5e }", "Ut", "sitiaio45 ", "0", "\"1", "~el", "M: t e AIf [:", "Je:", "7 t", "Mane $a te9re Fat &3 #Lin", "7:", "73", "Jrl", "Fcc.", "te thzeze ze7l: tat", "otdb", "Fic? 87", "ji", "clt", "51" ] }
{ "caption": "Figure 2: Examples of tweets artificially generated with a GPT-2 model trained on the MediaEval examples with class T5G.", "caption_no_index": "Examples of tweets artificially generated with a GPT- model trained on the MediaEval examples with class T5G.", "id": 33525, "image_id": 53757, "mention": [ [ "Figure 2 presents three examples of generated texts from the MediaEval 2020 '5G' class." ], [ "As can be seen from these examples (including the Me-diaEval examples in Figure 2), the generated texts seem 3 http://groups.di.unipi.it/˜gulli/AG_ corpus_of_news_articles.html 4 https://huggingface.co/datasets/ viewer/?dataset=ag_news to belong to the expected class (see Section 5.2 for a discussion of this point)." ] ], "paragraph": [ "The data augmentation (i.e., text generation) is performed as explained in the previous section. Figure 2 presents three examples of generated texts from the MediaEval 2020 '5G' class.", "As can be seen from these examples (including the Me-diaEval examples in Figure 2), the generated texts seem 3 http://groups.di.unipi.it/˜gulli/AG_ corpus_of_news_articles.html 4 https://huggingface.co/datasets/ viewer/?dataset=ag_news to belong to the expected class (see Section 5.2 for a discussion of this point). However, they often have flaws that make the fact that they were generated detectable. This is particularly the case for French texts, which can be explained by the fact that we did not have, at the time of the experiments, a pre-trained model for French; the model, as well as the tokenizer, are therefore based on the English GPT model. GPT-2 models for French released very recently 5 could improve this aspect if distributed." ] }
{ "figure_type": "Bar Chart", "file_name": "000000053758.png", "id": 53758, "ocr": [ "t", "'inaninnt" ] }
{ "caption": "Figure 4: Shared Concepts in both languages", "caption_no_index": "Shared Concepts in both languages.", "id": 33526, "image_id": 53758, "mention": [ [ "Instead if we take a look at Figure 4, we can observe that concepts are generally not shared, having an average percentage lower than 0.1%, independently of the ontological type." ] ], "paragraph": [ "The results analysis is more impressive. Figure 3 shows that the lexical overlap (i.e. the subset of terms in common between English and Italian) is composed almost exclusively by entities (i.e. proper nouns). Instead if we take a look at Figure 4, we can observe that concepts are generally not shared, having an average percentage lower than 0.1%, independently of the ontological type. We can also observe the predictable result that ontological categories denoting material objects (i.e. persons, locations and groups, artifacts) still have • noun.location: e.g. Canada, Austria, Houston;" ] }
{ "figure_type": "Node Diagram", "file_name": "000000053759.png", "id": 53759, "ocr": [ "506 emedim", "suzlee" ] }
{ "caption": "Figure 2: The output of the parsing for sentence “Ama hiçbir şey söylemedim ki sizlere.” (in English, “But I did not say anything to you.”)", "caption_no_index": "The output of the parsing for sentence “Ama hiçbir şey söylemedim ki sizlere.” (in English, “But I did not say anything to you.”).", "id": 33527, "image_id": 53759, "mention": [ [ "Once we analysed the output of the semantic parsing model manually, we either applied the rules already defined in UCCA guideline (Abend et al., 2020) or de-Figure 2: The output of the parsing for sentence \"Ama hiçbir şey söylemedim ki sizlere.\"" ] ], "paragraph": [ "Once we analysed the output of the semantic parsing model manually, we either applied the rules already defined in UCCA guideline (Abend et al., 2020) or de-Figure 2: The output of the parsing for sentence \"Ama hiçbir şey söylemedim ki sizlere.\" (in English, \"But I did not say anything to you.\")" ] }
{ "figure_type": "Equation", "file_name": "000000053760.png", "id": 53760, "ocr": [ "~Jc kihdAniIE inmrutmsatiSio", "Humt4ulu", "\"I2JWelc.% 7i;EJAstig x linix", "CcivahifCta TcautmRcreo", "(drrWl C'", "3om4e To7es", "(rimfmirrb;: ACTL aznapwI", "'al\" >0htet 6nar", "6rmne{ ETENT 4eneieted", ":kabeadn\"taitl ( Tetrb[e", "ClSko MTamnekmelc;&I: Fcuai {an", "7n (rhbcftt rfint?e", "Wotse*aerhittoit", "[usdskutinlx .#iin: 1ag4i. Iart:ullzn Ua 4e 1X5LrIlu: '7rJaltn", "pJ f n #f'uL Haks ;ueu:taItCa: 3u,7t0)/7 >s Ie:ak", "ic hl JJ; JaJAJu Jz Jechet", "{u; 4:ddcutemdc AEtkir #Esh (ails", "SNut ctbenfricsltmstr #5 ,en*orhoadmere uc ;ad[c", "Fan" ] }
{ "caption": "Figure 1. Snippet blog_augustine_0000024", "caption_no_index": "Snippet blog_augustine_0000024.", "id": 33528, "image_id": 53760, "mention": [ [ "The web pages in Table 1 are blogs but they also contain either sequences of questions and answers or are organized like a how-to document, like in the snippet in Figure 1 blog augustine 0000024 The snippet shows an example of genre colonization, where the vocabulary and text forms of one genre (FAQs/How to in this case) are inserted in another (cf. Beghtol, 2001)." ] ], "paragraph": [ "An accuracy of 86% is a good achievement for a first implementation, especially if we consider that the standard Naïve Bayes classifier returns an accuracy of about 67%. Although slightly lower than SVM, an accuracy of 86% looks promising because this evaluation is only on a single label. Ideally the inferential model could be more accurate than SVM if more labels could be taken into account. For example, the actual classification returned by the inferential model is shown in Table 1. The web pages in Table 1 are blogs but they also contain either sequences of questions and answers or are organized like a how-to document, like in the snippet in Figure 1 blog augustine 0000024 The snippet shows an example of genre colonization, where the vocabulary and text forms of one genre (FAQs/How to in this case) are inserted in another (cf. Beghtol, 2001). These strategies are frequent on the web and might give rise to new web genres. The model also captures a situation where the genre labels available in the system are not suitable for the web page under analysis, like in the example in Table 2 This web page (shown in Figure 2) from the unannotated SPIRIT collection (see Section 4.1) does not receive any of the genre labels currently available in the system. If the pattern shown in Figure 2 keeps on recurring even when more web genres are added to the system, a possible interpretation could be that this pattern might develop into a stable web genre in future. If this happens, the system will be ready to host such a novelty. In the current implementation, only a few rules need to be added. In future implementations hand-crafted rules can be replaced by other methods. For example, an interesting adaptive solution has been explored by Segal and Kephart (2000). Predictions. Precision of predictions on one web genre is used as an additional evaluation metric. The predictions on the eshop genre issued by the inferential model are compared with the predictions returned by two SVM models built with two different web page collections, Meyerzu-Eissen collection and the 7-web-genre collection (Santini, 2006). Only the predictions on eshops are evaluated, because eshop is the only web genre shared by the three models. The number of predictions is shown in Table 3. The number of retrieved web pages (Total Predictions) is higher when the inferential model is used. Also the value of precision (Correct Predictions) is higher. The manual evaluation of the predictions is available online at:" ] }
{ "figure_type": "Equation", "file_name": "000000053761.png", "id": 53761, "ocr": [ "BPER EPER", "SLOC 0", "hx Take w bui Lorb ,", "LItEAHu ", "B PER ke F PER Vaexe WI HuYhHS LOCLulhL," ] }
{ "caption": "Figure 2: An example of labeled sequence linearization.", "caption_no_index": "An example of labeled sequence linearization.", "id": 33529, "image_id": 53761, "mention": [ [ "As the example shown in Figure 2 To extend this method for multilingual data augmentation, we add special tokens at the beginning of each sentence to indicate the language that it belongs to." ] ], "paragraph": [ "Although labeled sequence translation generates high quality multilingual NER training data, it adds limited variety since translation does not introduce new entities or contexts. Inspired by DAGA (Ding et al., 2020), we propose a generation-based multilingual data augmentation method to add more diversity to the training data. DAGA is a monolingual data augmentation method designed for sequence labeling tasks, which has been shown to be able to add significant diversity to the training data. As the example shown in Figure 2 To extend this method for multilingual data augmentation, we add special tokens at the beginning of each sentence to indicate the language that it belongs to. The source-language data and the multilingual data obtained via translation are concatenated to train/finetune multilingual LMs with a shared vocabulary (as shown in Figure 5). Given a labeled sequence {x 1 , . . . , x M } from the multilingual training data, the LMs are trained to maximize the probability p(x 1 , . . . , x M ) in Eq. 1:" ] }
{ "figure_type": "Equation", "file_name": "000000053762.png", "id": 53762, "ocr": [ "ankha_hengma.,25", "hicceko umajhabe usam baira kha?niglok lenma kanna?", "hicce<0", "uma habe", "'Jich hicce -ko", "~LCC NMILZ JsFCSS: micde -NNZ -LCC :936", "1134 -6331", "6702-", "43", "~6353 -6730 6709-", "bahira khalmtrkk", "ler Ma", "konnor", "'Jch | balira khal ~I-I]", "len; ~Ia", "kond", "cutsice", "~NeG -M", "eto -INDMPST", "1353 - 2457 44430 251", "485 13", "~2330" ] }
{ "caption": "Figure 2: Example of TOOLBOX format (Stoll et al., Unpublished)", "caption_no_index": "Example of TOOLBOX format (Stoll et al., Unpublished).", "id": 33530, "image_id": 53762, "mention": [ [ "An example is given in Figure 2." ] ], "paragraph": [ "The second corpus format that our pipeline accepts as an input format is SIL International's TOOLBOX format. TOOLBOX is a data management tool for collecting and analyzing lexical data and interlinear text. Field linguists Figure 1: Example of CHAT format (Demuth, 2015) often use it for creating a morphologically annotated lexicon. Corpus linguists also use this format for its ease of encoding recording sessions through conversational interactions, where each utterance turn is encoded via several user-defined idiosyncratic tiers (e.g. \\tx for text; \\gw for gloss; \\ps for part-of-speech), each of which is separated by a blank line. As in CHAT, every file represents a recording session (file names differ from corpus to corpus, but often include the recording data in the file name). TOOLBOX files are encoded in plain text Unicode UTF-8 format. An example is given in Figure 2." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053763.png", "id": 53763, "ocr": [ "xikanakuratta (d3 no: ivJrk}", "wbj", "subjec-", "soujukan", "sharing", "{control sticki", "chokurikusito ( and)", "hgitta (move onto}", "subj", "i-coj", "$ Izerc arzphor) yudouro", "Fikouki;", "bujici", "(taxiwaw)", "Ia rplane}", "Ksalely}", "enzphoric", "re atic^" ] }
{ "caption": "Figure 4: Example of ADJ type", "caption_no_index": "Example of ADJ type.", "id": 33531, "image_id": 53763, "mention": [ [ "For example, in example (3), two adjacent predicates, land and move onto, have the same subject but not a direct dependency relation, as illustrated in Figure 4." ] ], "paragraph": [ "ADJ This type is a subject sharing relation between two adjacent predicates, i.e., a predicate pairs that do not have any other predicate between them in the surface order of a sentence. Although two adjacent predicates in a sentence tend to share the same subject, they sometimes cannot be captured as the DEP type due to a long-distance dependency between predicates. For example, in example (3), two adjacent predicates, land and move onto, have the same subject but not a direct dependency relation, as illustrated in Figure 4." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053764.png", "id": 53764, "ocr": [ "IJF-[Jt", "77u7", "6 ueakje=", "0y4Ja", "Fceeet=", "JX?F", "#eru", "C64", "#hyhaini", "J9r", "Manal", "centact", "~~", "uul;", "Fcole", "contact", "(Vit-LLCI" ] }
{ "caption": "FIG. 2 – Appariement des mots enkatakana", "caption_no_index": "– Appariement des mots enkatakana.", "id": 33532, "image_id": 53764, "mention": [ [ "La figure 2 " ] ], "paragraph": [ "La troisième liste contient les mots du texte japonais écrits en katakana. La figure 2 " ] }
{ "figure_type": "Node Diagram", "file_name": "000000053765.png", "id": 53765, "ocr": [ "Kn", "M m" ] }
{ "caption": "Figure 5: Synonymous strategy across one paths, with previously inferred translations", "caption_no_index": "Synonymous strategy across one paths, with previously inferred translations.", "id": 33533, "image_id": 53765, "mention": [ [ "The third approach related to synonymous words is shown in Figure 5 " ] ], "paragraph": [ "The third approach related to synonymous words is shown in Figure 5 " ] }
{ "figure_type": "Equation", "file_name": "000000053766.png", "id": 53766, "ocr": [ "Kec-ion -Yp? 'Iip \"", "'i-mn L;sc-\"Jcri? \" >@lL_v_ter>", "(i.3n l;fe \"Jef4>rrgf_k;i-3>", "{1-47", "tzfe='Aeri7a\"\">rzrgl;d; tet>", "{*36~\"fr", "~ens JES>", "'Lk:", "=", "'a2- yinoh_-?9_IBcv__cm:>", "Arar;", "~argz =", "raltac-r 9", "rars", "arg-\"ra", "awam>", "(tTar;", "'e-Ionec; an}>", "\"Lan:", "Iq- 'ir'> AC)[ECE< tfam>", "(Lr:E:s -Jr4=", "'.idd [:S}", "VFart", "Arg=", "A-Miem: <49", "(rars", "'cal tran>", "(Lrar:", "`ajJ ''zi:) t[JIS", "'LJcs", "'vLan >", "Gatluar", "\"ard", "GIF\"" ] }
{ "caption": "Figure 8: Derivation relations and translations (American sign language, Catalan, French, Esperanto, Latin, Norwegian Bokmål, Norwegian Nynorsk) for wrong (excerpt)", "caption_no_index": "Derivation relations and translations (American sign language, Catalan, French, Esperanto, Latin, Norwegian Bokmål, Norwegian Nynorsk) for wrong (excerpt).", "id": 33534, "image_id": 53766, "mention": [ [ "Morphological sections contain derivation relations (cf. Fig. 8) and looser relations labeled related.", "Translations are given for national and regional languages, living and dead languages, natural and constructed languages (cf. Fig. 8). &lt;pos type=\"noun\" lemma=\"1\" etymNb=\"1\"&gt; &lt;definitions&gt; &lt;definition level=\"1\"&gt;[...]" ], [ "For ex-&lt;section type=\"morpho\"&gt; &lt;item type=\"derived\"&gt;wrength&lt;/item&gt; &lt;item type=\"derived\"&gt;wrongful&lt;/item&gt; &lt;item type=\"derived\"&gt;wrongly&lt;/item&gt; &lt;/section&gt; &lt;translations&gt; &lt;trans lang=\"ase\"&gt;Y@Chin-PalmBack&lt;/trans&gt; &lt;trans lang=\"ca\"&gt;incorrecte&lt;/trans&gt; &lt;trans lang=\"ca\"&gt;erroni&lt;/trans&gt; &lt;trans lang=\"fr\"&gt;erroné&lt;/trans&gt; &lt;trans lang=\"fr\"&gt;incorrect&lt;/trans&gt; &lt;trans lang=\"io\"&gt;vidar&lt;/trans&gt; &lt;trans lang=\"la\"&gt;erroneus&lt;/trans&gt; &lt;trans lang=\"no\"&gt;galt&lt;/trans&gt; &lt;trans lang=\"no\"&gt;uriktig&lt;/trans&gt; &lt;trans lang=\"nn\"&gt;feil&lt;/trans&gt; &lt;/translations&gt; Figure 8: Derivation relations and translations (American sign language, Catalan, French, Esperanto, Latin, Norwegian Bokmål, Norwegian Nynorsk) for wrong (excerpt) ample, the inflections of the irregular verb deal are given in Wiktionary's headword line as depicted in Figure 9." ] ], "paragraph": [ "Translations, semantic and morphological relations occur within POS sections. For example, the synonym French person relates to all the definitions (senses) of the second noun section of frog (Fig. 1). Conversely, some semantic relations (synonymy, antonymy, hypernymy and hyponymy) corresponding to a particular meaning also appear in definitions. For instance, preface and epilogue are meronyms for all senses of book (Fig. 7) while tome and volume are synonyms of only a given one (major division of a long work). Morphological sections contain derivation relations (cf. Fig. 8) and looser relations labeled related. Caution should be exercised when relying on this distinction: we observed that, in reality, words signaled by derivation relations in Wiktionary are often no real derivatives (morphological relations found in etymological sections are more trustworthy). Translations are given for national and regional languages, living and dead languages, natural and constructed languages (cf. Fig. 8). &lt;pos type=\"noun\" lemma=\"1\" etymNb=\"1\"&gt; &lt;definitions&gt; &lt;definition level=\"1\"&gt;[...] A collection of sheets of paper [...]&lt;/definition&gt; &lt;definition level=\"1\"&gt;[...] A major division of a long work &lt;semRel type=\"syn\"&gt;tome, volume&lt;/semRel&gt; [...] &lt;/definition&gt; &lt;!--... --&gt; &lt;/definitions&gt; &lt;section type=\"semRel\"&gt; &lt;item type=\"meronym\"&gt;preface&lt;/item&gt; &lt;item type=\"meronym\"&gt;epilogue&lt;/item&gt; &lt;/section&gt; &lt;/pos&gt; ", "As seen above, both lemmas and inflections may appear as headwords in Wiktionary. 
When a headword relates to a lemma, the corresponding inflections (plural of nouns, comparative and superlative forms of adjectives and adverbs, participles and third-person forms of verbs, etc.) may be given below or next to the headword line. For ex-&lt;section type=\"morpho\"&gt; &lt;item type=\"derived\"&gt;wrength&lt;/item&gt; &lt;item type=\"derived\"&gt;wrongful&lt;/item&gt; &lt;item type=\"derived\"&gt;wrongly&lt;/item&gt; &lt;/section&gt; &lt;translations&gt; &lt;trans lang=\"ase\"&gt;Y@Chin-PalmBack&lt;/trans&gt; &lt;trans lang=\"ca\"&gt;incorrecte&lt;/trans&gt; &lt;trans lang=\"ca\"&gt;erroni&lt;/trans&gt; &lt;trans lang=\"fr\"&gt;erroné&lt;/trans&gt; &lt;trans lang=\"fr\"&gt;incorrect&lt;/trans&gt; &lt;trans lang=\"io\"&gt;vidar&lt;/trans&gt; &lt;trans lang=\"la\"&gt;erroneus&lt;/trans&gt; &lt;trans lang=\"no\"&gt;galt&lt;/trans&gt; &lt;trans lang=\"no\"&gt;uriktig&lt;/trans&gt; &lt;trans lang=\"nn\"&gt;feil&lt;/trans&gt; &lt;/translations&gt; Figure 8: Derivation relations and translations (American sign language, Catalan, French, Esperanto, Latin, Norwegian Bokmål, Norwegian Nynorsk) for wrong (excerpt) ample, the inflections of the irregular verb deal are given in Wiktionary's headword line as depicted in Figure 9. When a headword relates to an inflected form, its definition usually provides the corresponding lemma and inflection type. An illustration is given in Figure 10 for the article dealing. Both kinds of information (either redundant or complementary) enable the generation of inflectional paradigms such as the verbal paradigm represented in Figure 11 for deal. These paradigms are directly used to produce the inflectional lexicon ENGLAFF (cf. Section 5.2.)." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053767.png", "id": 53767, "ocr": [ "Methods fc\" Encocing", "Fierarchical Stuciure", "End-to-End", "Posl-Processing", "ISaclir 3,2 Ii", "Parameter", "Regula ization", "Decompc: ton", "(Sectio\" 3.22,", "Claz: Based", "Instance Based", "(Cectin 3,2.%2,", "Secton 3,24", "Traring" ] }
{ "caption": "Figure 1: Encoding hierarchical information", "caption_no_index": "Encoding hierarchical information.", "id": 33535, "image_id": 53767, "mention": [ [ "The suitable methods are summarized in the taxonomy in Figure 1." ] ], "paragraph": [ "As mentioned in Section 2, we focus on lightweight methods that introduce a minimal number of additional parameters and are therefore compatible with fine-tuning as part of the final classification layer of a transformer-based architecture. The suitable methods are summarized in the taxonomy in Figure 1. We distinguish, from top to bottom: (1) Methods that post-process the output of a statement classifier to enforce hard constraints vs. methods that incorporate soft constraints into the end-to-end learning process; (2) among the latter, methods that decompose the parameters for the more specific classes vs. regularization methods; (3) among the regularization methods, we compare those which target the representation of the class vs. of the encoded instance. We now describe the application of these methods and assess their characteristics. 1 In earlier work (Dayanik et al., 2021), we experimented with other state-of-the-art architectures, including BiLSTMs with and without attention, but obtained worse performance. 2 The appendix gives details on the BERT models we use. (Riedel and Clarke, 2006) or semantic role labeling (Punyakanok et al., 2004) to enforce linguistically motivated constraints on predicted structures." ] }
{ "figure_type": "Equation", "file_name": "000000053768.png", "id": 53768, "ocr": [ "FRHETHHE HHh", "TFTHEIUHLRaw", "TF NHOCATUH", "TF HHAOCHTHUHOHGVZATU", "TRF OHECTHEAMHwH" ] }
{ "caption": "FIGURE 2: A domain Goal frame from the Iraq question", "caption_no_index": "A domain Goal frame from the Iraq question.", "id": 33536, "image_id": 53768, "mention": [ [ "is shown in Figure 2." ], [ "The clarification question in Figure 4 is generated by comparing the Goal frame in Figure 2 to a partly matching frame (Figure 5) generated from some other text passage." ] ], "paragraph": [ "A very similar framing process is applied to the user's question, resulting in one or more Goal frames, which are subsequently compared to the data frames obtained from retrieved text passages. A Goal frame can be a general frame or any of the typed frames. The Goal frame generated from the question, \"Has Iraq been able to import uranium?\" is shown in Figure 2. This frame is of WMDTransfer type, with 3 role attributes TRF_TO, TRF_FROM and TRF_OBJECT, plus the relation type (TRF_TYPE). Each role attribute is defined over an underlying general frame attribute (given in parentheses), which is used to compare frames of different types.", "The clarification question in Figure 4 is generated by comparing the Goal frame in Figure 2 to a partly matching frame (Figure 5) generated from some other text passage. We note first that the Goal frame for this example is of WMDTransfer type, while the data frame in Figure 5 is of the type WMDDevelop. Nonetheless, both frames match on their general-frame attributes WEAPON and LOCATION. Therefore, HITIQA asks the user if it should expand the answer space to include development of uranium in Iraq as well." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053769.png", "id": 53769, "ocr": [ "UhiHUa a", "Ma Uu NHZNMhoxz", "(l |h" ] }
{ "caption": "Figure 1: The oracle translation for this Arabic VOS sentence would be pruned during search using typical distortion parameters. The Arabic phrases read right-to-left, but we have ordered the sentence from left-to-right in order to clearly illustrate the re-ordering problem.", "caption_no_index": "The oracle translation for this Arabic VOS sentence would be pruned during search using typical distortion parameters. The Arabic phrases read right-to-left, but we have ordered the sentence from left-to-right in order to clearly illustrate the re-ordering problem.", "id": 33537, "image_id": 53769, "mention": [ [ "For languages with very different word or- Figure 1: The oracle translation for this Arabic VOS sentence would be pruned during search using typical distortion parameters." ], [ "To illustrate this problem, consider the Arabic-English example in Figure 1." ], [ "For example, translation of the VOS sentence in Figure 1 requires both a high distortion limit to accommodate the subject movement and tight restrictions on the movement of the PP." ], [ "Returning to Figure 1, we could have an alternate hypothesis They waited for the followers of the Christian and Islamic sects, which is acceptable English and has low distortion, but is semantically inconsistent with the Arabic." ] ], "paragraph": [ "It is well-known that translation performance in Moses-style (Koehn et al., 2007) machine translation (MT) systems deteriorates when high distortion is allowed. The linear distortion cost model used in these systems is partly at fault. It includes no estimate of future distortion cost, thereby increasing the risk of search errors. Linear distortion also penalizes all re-orderings equally, even when appropriate re-orderings are performed. Because linear distortion, which is a soft constraint, does not effectively constrain search, a distortion limit is imposed on the translation model. But hard constraints are ultimately undesirable since they prune the search space. For languages with very different word or- Figure 1: The oracle translation for this Arabic VOS sentence would be pruned during search using typical distortion parameters. The Arabic phrases read right-to-left, but we have ordered the sentence from left-to-right in order to clearly illustrate the re-ordering problem.", "To illustrate this problem, consider the Arabic-English example in Figure 1. Assuming that the English translation is constructed left-to-right, the verb shaaraka must be translated after the noun phrase (NP) subject. If P phrases are used to translate the Arabic source s to the English target t, then the (unsigned) linear distortion is given by D(s, t) = p 1 f irst +", "The implications for translation to English are: (1) prepositions remain in place, (2) NPs are inverted, and most importantly, (3) basic syntactic constituents must often be identified and precisely re-ordered. The VOS configuration is especially challenging for Arabic-English MT. It usually appears when the direct object is short-e.g., pronominal-and the subject is long. For example, translation of the VOS sentence in Figure 1 requires both a high distortion limit to accommodate the subject movement and tight restrictions on the movement of the PP. 
The particularity of these requirements in Arabic and other languages, and the difficulty of modeling them in phrase-based systems, has inspired significant work in source language preprocessing (Collins et al., 2005;Habash and Sadat, 2006;Habash, 2007).", "subject, and adjective and noun-baseline phrasebased systems rely on the language model to specify an appropriate target word order (Avramidis and Koehn, 2008). Returning to Figure 1, we could have an alternate hypothesis They waited for the followers of the Christian and Islamic sects, which is acceptable English and has low distortion, but is semantically inconsistent with the Arabic." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053770.png", "id": 53770, "ocr": [ "San ", "Frab", "Et", "n:Et", "TMe", "'rihof", "bto", "Fk", "Hn", "Lder", "Fdrn" ] }
{ "caption": "Figure 1: Data Processing Scheme", "caption_no_index": "Data Processing Scheme.", "id": 33538, "image_id": 53770, "mention": [ [ "We inherited the data processing method (as shown in Figure 1) proposed in (Phung et al., 2020)." ], [ "Our network architecture was similar to config V1 (Kong et al., 2020)." ] ], "paragraph": [ "We inherited the data processing method (as shown in Figure 1) proposed in (Phung et al., 2020). We remove non-speech segments from the audio files using Voice Activity Detection (VAD) model (Kim and Hahn, 2018). As for textual data, we normalized the original text to lower case without punctuation, then use the results from an Automatic Speech Recognition (ASR) (Peddinti et al., 2015) model to define unvoiced intervals to automatic punctuation to improve the naturalness and prosody of synthesized voices (Phung et al., 2020). Moreover, there is an enormous number of English words in the provided databases, so our solution is to borrow Vietnamese sounds to read the English words. Even, the English words can consist of Vietnamese syllables and English fricative sounds (for example, x sound) if necessary (for instance, \"study\" becomes 'x-ta-di'), which can make it easier for the model to learn the fricative sounds. Also, by selecting the pronunciation of English words, we introduced uncommon Vietnamese syllables, which enriched the vocabulary of the training data set. The overall text normalization was carried out using regular expressions and a dictionary. Finally, we manually reviewed and corrected the transcription. The data processing scheme is shown in Figure 1 2.1.1 Voice Activity Detection", "-HiFiGAN: To achieve better vocoding quality and higher efficiency, we utilized a HiFiGANbased vocoder instead of WaveGlow vocoder. Our network architecture was similar to config V1 (Kong et al., 2020). A mel-spectrogram was used as input of generator and upsamples it through transposed convolutions until the length of the output sequence matches the temporal resolution of a raw waveform." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053771.png", "id": 53771, "ocr": [ "AcIt", "Eineteenc?", "Opzrationlype", "Stalus", "Owncrship", "Conecpl", "Farticipanll", "ThingType", "(hject", "Pnipetl;", "lerel", "level", "evel" ] }
{ "caption": "Figure 1: Class hierarchy of our conceptual ontology for modeling software requirements.", "caption_no_index": "Class hierarchy of our conceptual ontology for modeling software requirements.", "id": 33539, "image_id": 53771, "mention": [ [ "The class hierarchy of our ontology is shown in Figure 1." ] ], "paragraph": [ "Different representations have been proposed for modeling requirements in previous work: whereas early work focused on deriving simple class diagrams, more recent approaches suggest representing requirements via logical forms (cf. Section 2). In this paper, we propose to model requirements using a formal ontology that captures general concepts from different application domains. Our proposed ontology covers the same properties as earlier work and provides a means to represent requirements in logical form. In practice, such logical forms can be induced by semantic parsers and in subsequent steps be utilized for automatic inference. The class hierarchy of our ontology is shown in Figure 1. At the highest level of the class hierarchy, we distinguish between \"things\" (ThingType) and \"operations\" (OperationType)." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053772.png", "id": 53772, "ocr": [ "" ] }
{ "caption": "Figure 2: Understanding Error Rate vs. utterance rejection on the development and test corpora", "caption_no_index": "Understanding Error Rate vs. utterance rejection on the development and test corpora.", "id": 33540, "image_id": 53772, "mention": [ [ "Figure 2 shows the curve UER vs. utterance rejection on the development and test corpora." ] ], "paragraph": [ "In this experiment we evaluate the decision strategy consisting of accepting or rejecting an hypothesis Γ thanks to a threshold on the probability P (Γ | M ). Figure 2 shows the curve UER vs. utterance rejection on the development and test corpora. As we can see very significant improvements can be achieved with very little utterance rejection." ] }
{ "figure_type": "Graph Plot", "file_name": "000000053773.png", "id": 53773, "ocr": [ "Ddec-] X", "kI-Iae", "an-Max 4", "I det-IF" ] }
{ "caption": "Figure 1: Sabarimala Gaussian Curves measuring polarity percentage", "caption_no_index": "Sabarimala Gaussian Curves measuring polarity percentage.", "id": 33541, "image_id": 53773, "mention": [ [ "(Figure 1)" ], [ "(Figure 1)" ], [ "(Figure 1)" ] ], "paragraph": [ "• Overall negative sentiment increases after the Jan 1 incident. (Figure 1)", "• Opinions polarized due to the event (on both sides), in comparison to a non-event (noncontroversial) normal state. The graph shows a significant decrease in the neutral population in the subsequent periods. (Figure 1)", "• 5-8 months after the polarizing event, the opinions are still polarised and even more so than they were at the time of the event. (Figure 1)" ] }
{ "figure_type": "Graph Plot", "file_name": "000000053774.png", "id": 53774, "ocr": [ "EEGE", "apcn", "Wuca5" ] }
{ "caption": "Figure 2: Differences for alternative unsupervised learners across numbers of clusters.", "caption_no_index": "Differences for alternative unsupervised learners across numbers of clusters.", "id": 33542, "image_id": 53774, "mention": [ [ "The results are illustrated in Figure 2." ] ], "paragraph": [ "We further contrast the use of different unsupervised learners, comparing the three spectral techniques and k-means with Euclidean distance. All contrasts are presented for English pitch accent classification, ranging over different numbers of clusters, with the best parameter setting of neighborhood size. The results are illustrated in Figure 2. K-means and the asymmetric clustering technique are presented for the clean focal Mandarin speech under the standard two stage clustering, in Table 1." ] }
{ "figure_type": "Bar Chart", "file_name": "000000053775.png", "id": 53775, "ocr": [ "Performarce (XEc", "XTL(2,8", "DR-KOW", "AIATN", "VK+ATTI", "Ie", "R+HC", "RLOR", "Correlalion", "Vits" ] }
{ "caption": "Figure 3: Predictive performance (XEC −XEL(T ),C) and average confound correlation (V/rpb) of lexicons generated via our proposed algorithms and a variety of methods in current use. The numbers to the right of each bar indicate the number of winning bootstrap trials.", "caption_no_index": "Predictive performance (XEC −XEL(T ),C) and average confound correlation (V/rpb) of lexicons generated via our proposed algorithms and a variety of methods in current use. The numbers to the right of each bar indicate the number of winning bootstrap trials.", "id": 33543, "image_id": 53775, "mention": [ [ "The proposed methods appear to perform the best, and DR+BOW achieved the largest performance/correlation ratio (Figure 3).", "MI's words appear unrelated to the confounds, but don't seem very persuasive, and our results corroborate this: these words failed to add predictive power over the confounds (Figure 3)." ] ], "paragraph": [ "Setup. We consider 189,486 financial complaints publicly filed with the Consumer Financial Protection Bureau (CFPB) 2 . The CFPB is a product of Dodd-Frank legislation which solicits and addresses complaints from consumers regarding a variety of financial products: mortgages, credit reports, etc. Some submissions are handled on a timely basis (&lt; 15 days) while others languish. We are interested in identifying salient words which help push submissions through the bureaucracy and obtain timely responses, regardless of the specific nature of the complaint. Thus, our target variable is a binary indicator of whether the complaint obtained a timely response. Our 2 These data can be obtained from https: //www.consumerfinance.gov/data-research/ consumer-complaints/ confounds are twofold, (1) a categorical variable tracking the type of issue (131 categories), and (2) a categorical variable tracking the financial product (18 categories). For the proposed DR+BOW, DR+ATTN, A+BOW, and A+ATTN models, we set |e| to 1, 64, 1, and 256, respectively. Results. In general, this seems to be a tractable classification problem, and the confounds alone are moderately predictive of timely response (XE C = 1.06). The proposed methods appear to perform the best, and DR+BOW achieved the largest performance/correlation ratio (Figure 3). We obtain further evidence upon examining the lexicons selected by four representative algorithms: proposed (DR+BOW), a well-performing baseline (RR), and two naive baselines (R, MI) (Table 1). MI's words appear unrelated to the confounds, but don't seem very persuasive, and our results corroborate this: these words failed to add predictive power over the confounds (Figure 3). On the opposite end of the spectrum, R's words appear somewhat predictive of the timely response, but are confound-related: they include the FDCPA (Fair Debt Collection Practices Act) and HIPAA (Health Insurance Portability and Accountability Act), which are directly related to the confound of financial product." ] }
{ "figure_type": "Node Diagram", "file_name": "000000053776.png", "id": 53776, "ocr": [ "YRr", "D =", "30", "XM", "Ke", "F", "IK", "tt", "16\"4" ] }
{ "caption": "Figure 2: Exploration of parser state space using best-first search and error states. States are numbered according to the order in which they become the parser’s current state. The local action classifier is trained with four classes: the three valid actions (represented as Sh for shift, L for reduce-left, and R for reduce-right) and an error class. The error class is not used by the parser and not shown in the diagram, but serves to reduce the total probability of valid parser actions by occupying some probability mass in each state, creating a way to reflect the overall quality of individual states.", "caption_no_index": "Exploration of parser state space using best-first search and error states. States are numbered according to the order in which they become the parser’s current state. The local action classifier is trained with four classes: the three valid actions (represented as Sh for shift, L for reduce-left, and R for reduce-right) and an error class. The error class is not used by the parser and not shown in the diagram, but serves to reduce the total probability of valid parser actions by occupying some probability mass in each state, creating a way to reflect the overall quality of individual states.", "id": 33544, "image_id": 53776, "mention": [ [ "The local classifier is then applied to the current state, and Figure 2: Exploration of parser state space using best-first search and error states." ], [ "Figure 2 illustrates arc-standard dependency parsing with error states and best-first search." ] ], "paragraph": [ "Once a local classifier has been trained with error states, this classifier can be used in a transitionbased parser with no modifications; the error class is simply thrown away during parsing. For example, the type of beam search typically used in transitionbased parsing with the structured perceptron (Zhang and Clark, 2008;Huang and Sagae, 2010) can be used to pursue several derivations in parallel, and global score of a derivation can be decomposed as the sum of the scores of all actions in the deriva- tion. Analogously, we score each derivation using the product of the probabilities for all actions in the derivation. Interestingly, local normalization of action scores allows the use of best-first search (Sagae and Tsujii, 2007), which has the potential to arrive at high quality solutions without having to explore as many states as a typical beam search, and even allows efficient exact or nearly exact inference . Once actions are scored for the parser's current state using a classifier, the score of a new state resulting from the application of a valid action to the current state can be computed as the product of the probabilities of all actions applied up to the new state in its derivation path. In other words, the score of each new state is the score of the current state multiplied by the probability of the action applied to the current state to generate the new state. New scored states resulting from the application of each action to the current state are then placed in a priority queue. The highest scoring item in the priority queue is chosen, and the state corresponding to that item is then made the current state 1 . The local classifier is then applied to the current state, and Figure 2: Exploration of parser state space using best-first search and error states. States are numbered according to the order in which they become the parser's current state. 

Newly Released

We have recently released the ground truth for both the public and hidden test sets of the 3rd Scientific Figure Captioning (SciCap) Challenge. Feel free to download them.

The 1st Scientific Figure Captioning (SciCap) Challenge 📖📊

Welcome to the 1st Scientific Figure Captioning (SciCap) Challenge! 🎉 This dataset contains approximately 400,000 scientific figure images sourced from various arXiv papers, along with their captions and relevant paragraphs. The challenge is open to researchers, AI/NLP/CV practitioners, and anyone interested in developing computational models for generating textual descriptions for visuals. 💻

Challenge homepage 🏠

Challenge Overview 🌟

The SciCap Challenge will be hosted at ICCV 2023 in the 5th Workshop on Closing the Loop Between Vision and Language (October 2-3, Paris, France) 🇫🇷. Participants are required to submit the generated captions for a hidden test set for evaluation.

The challenge is divided into two phases:

  • Test Phase (2.5 months): Use the provided training set, validation set, and public test set to build and test your models.
  • Challenge Phase (2 weeks): Submit results for a hidden test set that will be released before the submission deadline.

Winning teams will be determined based on their results on the hidden test set 🏆. Details of the event's important dates, prizes, and judging criteria are listed on the challenge homepage.

Dataset Overview and Download 📚

The challenge dataset is an expanded version of the original SciCap dataset. It includes figures and captions from arXiv papers in eight categories: Computer Science, Economics, Electrical Engineering and Systems Science, Mathematics, Physics, Quantitative Biology, Quantitative Finance, and Statistics 📊. Additionally, it covers figures from ACL Anthology papers via ACL-Fig.

You can download the dataset using the following command:

from huggingface_hub import snapshot_download

# Download the full dataset snapshot into the local Hugging Face cache.
snapshot_download(repo_id="CrowdAILab/scicap", repo_type="dataset")
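
If you only need part of the repository (for example, just the annotation files), snapshot_download also accepts glob filters. A minimal sketch, assuming the annotation files are stored as JSON at the repository root:

from huggingface_hub import snapshot_download

# Fetch only the JSON annotation files. The pattern is an assumption about
# the repository layout; adjust it after inspecting the repo contents.
snapshot_download(repo_id="CrowdAILab/scicap", repo_type="dataset",
                  allow_patterns=["*.json"])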

Merge all image split files into one 🧩

zip -F img-split.zip --out img.zip
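
Before unpacking roughly 400,000 images, it may be worth verifying that the merged archive is intact. A quick check using Python's standard-library zipfile module (file name as produced by the command above):

import zipfile

# testzip() returns the name of the first corrupt member, or None if all
# CRC checks pass.
with zipfile.ZipFile("img.zip") as zf:
    bad = zf.testzip()
    print("archive OK" if bad is None else f"corrupt member: {bad}")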

The dataset schema is similar to the mscoco dataset:

  • images: two separate folders of figure files, one for arXiv figures and one for ACL figures 📁
  • annotations: JSON files containing the text information: each image entry carries the file name, image id, figure type, and OCR tokens, and each annotation entry carries the mapped image id, caption, normalized caption, paragraphs, and mentions (see the loading sketch below) 📝
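
The annotations pair each caption with its figure through the image id, mscoco-style. A minimal loading sketch, assuming a file with top-level "images" and "annotations" lists; the file name is a placeholder:

import json

# Placeholder file name; substitute the actual annotation file from the
# downloaded snapshot.
with open("train.json") as f:
    data = json.load(f)

# Index figure metadata by image id, then attach each caption to its figure.
images = {img["id"]: img for img in data["images"]}
for ann in data["annotations"]:
    fig = images[ann["image_id"]]
    print(fig["file_name"], fig["figure_type"], "->", ann["caption"])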

Evaluation and Submission 📩

Submit your generated captions in JSON format, as shown below:

[
  {
    "image_id": int, 
    "caption": "PREDICTED CAPTION STRING"
  },
  {
    "image_id": int,
    "caption": "PREDICTED CAPTION STRING"
  }
...
]
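
A minimal sketch for serializing model outputs into this format; the predictions mapping and the output file name are placeholders:

import json

# Placeholder predictions: image id -> generated caption string.
predictions = {12345: "A generated caption.", 67890: "Another generated caption."}

# Convert to the required list-of-objects layout.
submission = [{"image_id": image_id, "caption": caption}
              for image_id, caption in predictions.items()]

with open("submission.json", "w") as f:
    json.dump(submission, f)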

Submit your results using this challenge link 🔗. Participants must register on Eval.AI to access the leaderboard and submit results.

Please note: Participants should not use the original captions from the arXiv papers (termed "gold data") as input for their systems ⚠️.

Technical Report Submission 🗒️

All participating teams must submit a 2-4 page technical report detailing their system, adhering to the ICCV 2023 paper template 📄. Teams have the option to submit their reports to either the archival or non-archival tracks of the 5th Workshop on Closing the Loop Between Vision and Language.

Good luck with your participation in the 1st SciCap Challenge! 🍀🎊
