There is a rich literature in natural language processing (NLP) and information retrieval on question answering (QA) (Hirschman and Gaizauskas, 2001), but recently deep learning has sparked interest in a special kind of QA, commonly referred to as reading comprehension (RC) (Vanderwende, 2007). The aim in RC research is to build intelligent systems with the ability to read and understand natural language text and answer questions related to it (Burges, 2013). Such tests are appealing as they require joint understanding of the question and the related passage (i.e. context), and moreover, they can analyze many different types of skills in a rather objective way (Sugawara et al., 2017).

Despite the progress made in recent years, there is still a significant performance gap between humans and deep neural models in RC, and researchers are pushing forward our understanding of the limitations and capabilities of these approaches by introducing new datasets. Existing RC tasks mainly differ in two major respects: the question-answer format, e.g. cloze (fill-in-the-blank), span selection or multiple choice, and the text sources they use, such as news articles (Hermann et al., 2015; Trischler et al., 2017), fictional stories (Hill et al., 2016), Wikipedia articles (Kočiský et al., 2018; Hewlett et al., 2016; Rajpurkar et al., 2016) or other web sources (Joshi et al., 2017). A popular topic in computer vision closely related to RC is Visual Question Answering (VQA), in which the context of the comprehension task takes the form of an image; recent datasets have also been compiled for it, such as (Antol et al., 2015; Johnson et al., 2017; Goyal et al., 2017), to name a few.

More recently, research in QA has been extended to focus on the multimodal aspects of the problem, where different modalities are being explored. Tapaswi et al. (2016) introduced MovieQA, in which they concentrate on evaluating automatic story comprehension from both video and text. In COMICS, Iyyer et al. (2017) turned to comic books to test understanding of closure, the transitions in the narrative from one panel to the next. In AI2D (Kembhavi et al., 2016) and FigureQA (Kahou et al., 2018), the authors addressed comprehension of scientific diagrams and graphical plots. Last but not least, Kembhavi et al. (2017) proposed another comprehensive and challenging dataset named TQA, which comprises middle school science lessons made up of diagrams and text.

In this study, we focus on multimodal machine comprehension of cooking recipes with images and text. To this end, we introduce a new QA dataset called RecipeQA that consists of recipe instructions and related questions (see Fig. 1 for an example text cloze style question).

[Fig. 1: An illustrative text cloze style question (context, question and answer triplet). The context comprises the recipe description and images, and the question is generated using the question titles; each paragraph in the context, like each image, is taken from a different step. The answer in bold is the correct one.]

There are a handful of reasons why understanding and reasoning about recipes is interesting. Recipes are written with a specific goal in mind, that is, to teach others how to prepare a particular food. Hence, they contain immensely rich information about the real world. Recipes consist of instructions, and one needs to follow each instruction to successfully complete the recipe. As in the classical example from introductory programming classes, each recipe might be seen as a particular way of solving a task and in that regard can also be considered an algorithm. We believe that recipe comprehension is an elusive challenge and might be seen as an important milestone toward the long-standing goal of artificial intelligence and machine reasoning (Norvig, 1987; Bottou, 2014).

Among previous efforts towards multimodal machine comprehension (Tapaswi et al., 2016; Kembhavi et al., 2016; Iyyer et al., 2017; Kembhavi et al., 2017; Kahou et al., 2018), our study is closest to what Kembhavi et al. (2017) envisioned in TQA. Our task primarily differs in utilizing a substantially larger number of images: the average number of images per recipe in RecipeQA is 12, whereas TQA has only 3 images per question on average. Moreover, in our case, each image is aligned with the text of a particular step in the corresponding recipe. Another important difference is that TQA contains mostly diagrams or textbook images, whereas RecipeQA consists of natural images taken by users in unconstrained environments.

Some of the important characteristics of RecipeQA are as follows:

• Recipes contain arbitrary numbers of steps, and steps contain arbitrary numbers of images.
• There are different question styles, each requiring a specific comprehension skill.
• There is high lexical and syntactic divergence between contexts, questions and answers.
• Answers require understanding procedural language, in particular keeping track of entities and/or actions and their state changes.
• Answers may need information coming from multiple steps (i.e. multiple images and multiple paragraphs).
• Answers inherently involve multimodal understanding of image(s) and text.

To sum up, we believe RecipeQA is a challenging benchmark dataset that will serve as a test bed for evaluating multimodal comprehension systems. In this paper, we present several statistical analyses on RecipeQA and also obtain baseline performances for a number of multimodal comprehension tasks that we introduce for cooking recipes.
Twitter is one of the most popular micro-blogging platforms today. There are over 500 million tweets posted per day,¹ including descriptions of real-world events ranging from small everyday incidents (e.g. falling to the ground) to long and widely broadcast events (e.g. a World Cup match). Such tweets are good sources for detecting users' reactions toward real-world events. People behave unusually when they encounter exciting moments in an event, for example yelling out or dancing with each other after their favorite soccer team scores a goal. On Twitter, this behavior is often reflected in a large number of posts within a short time period. When Japan scored a goal against Cameroon in World Cup 2010, there was a maximum of 2,940 tweets per second (TPS), which marked the record TPS for goals at that time;² this is significantly larger than the average of 750 TPS.² In this paper, we call such bursty traffic "numerical spikes". Figure 1 shows the number of tweets per minute during the match of Cameroon vs. Japan, and Table 1 shows examples of tweets sampled from both numerical spikes and other parts of the stream.

[Table 1 sample tweets: "I look excited but actually I have been crying from being moved." / "Well, I have been excited but I believed that Japan will win so I am quite calm."]

Detecting emotional upsurge is important both for extracting emerging important real-world events and for finding the important moments within them. We call an upsurge that is caused by Twitter users' emotional spikes an "emotional upsurge". Emotional upsurges do overlap with numerical spikes, but they also include moments that are not numerical spikes. For example, Lanagan and Smeaton (2011) reported that emotional upsurge overlaps with numerical spikes and that these are useful for tagging key moments in sports matches. However, detecting numerical spikes on Twitter becomes difficult when a target event is not pre-defined or is rarely tweeted about by Twitter users, because the number of event-related tweets per unit time is not directly computable. In such cases, detecting an upsurge of emotions becomes crucial.

One characteristic of tweets is that the expressions used in them exhibit many linguistic phenomena. For example, Brody and Diakopoulos (2011) analyzed occurrences of character repetitions in words from a sentiment dictionary. In this paper, we assume that such variations of language expressions are caused by real-world events. Table 1 shows that character repetitions ('Goooal', 'Huraaay') occur in tweets during emotional upsurge rather than their canonical forms ('Goal', 'Hurray'). In contrast, character repetitions do not frequently occur in tweets outside emotional upsurge. However, to our knowledge, there has not been an attempt to capture emotional upsurge using the linguistic characteristics of tweets.

In this paper, we specifically investigate a method to detect emotional upsurge in real-world events using characteristic expressions in Japanese tweets. Our contribution is that a spiking tweet language model, which we constructed automatically from an existing tweet dataset, captures characteristic expressions well and is an effective approach for detecting emotional upsurge.
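The character-repetition cue discussed above is straightforward to operationalize. Below is a minimal, illustrative Python sketch, not the authors' spiking tweet language model: it flags tweets containing elongated words via a regular expression and reports their rate over a window of tweets. The three-repetition threshold and the function names are assumptions made only for illustration.

import re

# Assumption: a token with the same character repeated 3+ times in a row
# (e.g. "Goooal", "Huraaay") is treated as an "elongated" form.
ELONGATION = re.compile(r"(\w)\1{2,}")

def has_elongation(tweet: str) -> bool:
    """Return True if the tweet contains at least one elongated word."""
    return bool(ELONGATION.search(tweet))

def elongation_rate(tweets):
    """Fraction of tweets in a window that contain elongated words.
    A rise in this rate is a crude proxy for emotional upsurge."""
    if not tweets:
        return 0.0
    return sum(has_elongation(t) for t in tweets) / len(tweets)

if __name__ == "__main__":
    window = ["Goooal!!!", "Japan scored", "Huraaay", "I am quite calm"]
    print(elongation_rate(window))  # 0.5

In a full system, this rate would be tracked per time window and compared against a model of "calm" traffic; the sketch only shows the surface cue itself.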
Iterative bootstrapping algorithms have been proposed to extract semantic lexicons for NLP tasks with limited linguistic resources. Bootstrapping was initially proposed by Riloff and Jones (1999), and has since been successfully applied to extracting general semantic lexicons (Riloff and Jones, 1999; Thelen and Riloff, 2002), biomedical entities (Yu and Agichtein, 2003), facts (Paşca et al., 2006), and coreference data (Yang and Su, 2007). Bootstrapping approaches are attractive because they are domain and language independent, require minimal linguistic pre-processing and can be applied to raw text, and are efficient enough for tera-scale extraction (Paşca et al., 2006).

Bootstrapping is minimally supervised, as it is initialised with a small number of seed instances of the information to extract. For semantic lexicons, these seeds are terms from the category of interest. The seeds identify contextual patterns that express a particular semantic category, which in turn recognise new terms (Riloff and Jones, 1999). Unfortunately, semantic drift often occurs when ambiguous or erroneous terms and/or patterns are introduced into and then dominate the iterative process (Curran et al., 2007).

Bootstrapping algorithms are typically compared using only a single set of hand-picked seeds. We first show that different seeds cause these algorithms to generate diverse lexicons which vary greatly in precision. This makes evaluation unreliable: seeds which perform well on one algorithm can perform surprisingly poorly on another. In fact, random gold-standard seeds often outperform seeds carefully chosen by domain experts.

Our second contribution exploits this diversity we have identified. We present an unsupervised bagging algorithm which samples from the extracted lexicon rather than relying on existing gazetteers or hand-selected seeds. Each sample is then fed back as seeds to the bootstrapper and the results are combined using voting. This improves both the precision of the lexicon and the robustness of the algorithms to the choice of initial seeds.

Unfortunately, semantic drift still dominates in later iterations, since erroneous extracted terms and/or patterns eventually shift the category's direction. Our third contribution focuses on detecting and censoring the terms introduced by semantic drift. We integrate a distributional similarity filter directly into WMEB (McIntosh and Curran, 2008). This filter judges whether a new term is more similar to the earlier or the most recently extracted terms, a sign of potential semantic drift.

We demonstrate these methods for extracting biomedical semantic lexicons using two bootstrapping algorithms. Our unsupervised bagging approach outperforms carefully hand-picked seeds by ~10% in later iterations. Our distributional similarity filter gives a similar performance improvement. This allows us to produce large lexicons accurately and efficiently for domain-specific language processing.
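To make the seed-driven loop concrete, here is a small, self-contained Python sketch of iterative bootstrapping over a tokenized corpus. It is an illustration only: scoring patterns and terms by raw counts is a deliberate simplification, and the mutual-exclusion machinery of WMEB and the distributional similarity filter introduced in this paper are omitted.

from collections import Counter

def contexts(tokens, term):
    """Yield (left word, right word) contexts of a term within one sentence."""
    for i, tok in enumerate(tokens):
        if tok == term and 0 < i < len(tokens) - 1:
            yield (tokens[i - 1], tokens[i + 1])

def bootstrap(seeds, corpus_tokens, iterations=5, k=3):
    """corpus_tokens: list of tokenized sentences (lists of strings).
    Alternates between selecting contextual patterns that co-occur with the
    current lexicon and selecting new terms extracted by those patterns."""
    lexicon = set(seeds)
    patterns = set()
    for _ in range(iterations):
        # 1. Score candidate patterns by how often they surround lexicon terms.
        pattern_scores = Counter()
        for sent in corpus_tokens:
            for term in lexicon:
                pattern_scores.update(contexts(sent, term))
        patterns |= {p for p, _ in pattern_scores.most_common(k)}
        # 2. Score candidate terms by how often known patterns extract them.
        term_scores = Counter()
        for sent in corpus_tokens:
            for i in range(1, len(sent) - 1):
                if (sent[i - 1], sent[i + 1]) in patterns and sent[i] not in lexicon:
                    term_scores[sent[i]] += 1
        lexicon |= {t for t, _ in term_scores.most_common(k)}
    return lexicon

With noisy count-based scoring like this, ambiguous patterns quickly pull in erroneous terms, which is exactly the semantic drift that the bagging and filtering contributions above are designed to counteract.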
Tree Adjoining Grammars (TAG) are an extension of CFG introduced by Joshi that use trees instead of productions as the primary representing structure. Several parsing algorithms have been proposed for this formalism, most of them based on tabular techniques, ranging from simple bottom-up algorithms (Vijay-Shanker and Joshi, 1985) to sophisticated extensions of Earley's algorithm (Schabes and Joshi, 1988; Schabes, 1994; Nederhof, 1997). However, it is difficult to inter-relate different parsing algorithms. In this paper we study several tabular algorithms for TAG parsing, showing their common characteristics and how one algorithm can be derived from another in turn, creating a continuum from simple pure bottom-up to complex predictive algorithms.

Formally, a TAG is a 5-tuple G = (V_N, V_T, S, I, A), where V_N is a finite set of non-terminal symbols, V_T a finite set of terminal symbols, S the axiom of the grammar, I a finite set of initial trees and A a finite set of auxiliary trees. I ∪ A is the set of elementary trees. Internal nodes are labeled by non-terminals and leaf nodes by terminals or ε, except for just one leaf per auxiliary tree (the foot), which is labeled by the same non-terminal used as the label of its root node. The path in an elementary tree from the root node to the foot node is called the spine of the tree.

New trees are derived by adjoining: let α be a tree containing a node N^α labeled by A and let β be an auxiliary tree whose root and foot nodes are also labeled by A. Then, the adjoining of β at the adjunction node N^α is obtained by excising the subtree of α with root N^α, attaching β to N^α and attaching the excised subtree to the foot of β. We use β ∈ adj(N^α) to denote that a tree β may be adjoined at node N^α of the elementary tree α.

In order to describe the parsing algorithms for TAG, we must be able to represent the partial recognition of elementary trees. Parsing algorithms for context-free grammars usually denote partial recognition of productions by dotted productions. We can extend this approach to the case of TAG by considering each elementary tree γ as formed by a set of context-free productions P(γ): a node N^γ and its children N_1^γ ... N_g^γ are represented by a production N^γ → N_1^γ ... N_g^γ. Thus, the position of the dot in the tree is indicated by the position of the dot in a production in P(γ). The elements of the productions are the nodes of the tree, except for elements belonging to V_T ∪ {ε} in the right-hand side of a production. Those elements may not have children and are not candidates to be adjunction nodes, so we identify such nodes labeled by a terminal with that terminal.

To simplify the description of parsing algorithms we consider an additional production ⊤ → R^α for each initial tree α and the two additional productions ⊤ → R^β and F^β → ⊥ for each auxiliary tree β, where R^β and F^β correspond to the root node and the foot node of β, respectively. After disabling ⊤ and ⊥ as adjunction nodes, the generative capability of the grammar remains intact.

The relation ⇒ of derivation on P(γ) is defined by: δ ⇒ ν if there are δ', δ'', M^γ, υ such that δ = δ' M^γ δ'', ν = δ' υ δ'' and M^γ → υ ∈ P(γ). The reflexive and transitive closure of ⇒ is denoted ⇒*. In an abuse of notation, we also use ⇒* to represent derivations involving an adjunction.
So, δ ⇒* ν if there are δ', δ'', M^γ, υ such that δ = δ' M^γ δ'', R^β ⇒* υ_1 F^β υ_3, β ∈ adj(M^γ), M^γ → υ_2 and ν = δ' υ_1 υ_2 υ_3 δ''.

Given two pairs (p, q) and (i, j) of integers, (p, q) ≤ (i, j) is satisfied if i ≤ p and q ≤ j. Given two integers p and q, we define p ∪ q as p if q is undefined and as q if p is undefined, it being undefined in any other case.

We will describe parsing algorithms using Parsing Schemata, a framework for the high-level description of parsing algorithms (Sikkel, 1997). An interesting application of this framework is the analysis of the relations between different parsing algorithms by studying the formal relations between their underlying parsing schemata. Originally, this framework was created for context-free grammars but we have extended it to deal with tree adjoining grammars.

A parsing system for a grammar G and string a_1 ... a_n is a triple (I, H, D), with I a set of items which represent intermediate parse results, H an initial set of items called hypotheses that encodes the sentence to be parsed, and D a set of deduction steps that allow new items to be derived from already known items. Deduction steps are of the form η_1, ..., η_k ⊢ ξ (cond), meaning that if all antecedents η_i of a deduction step are present and the conditions cond are satisfied, then the consequent ξ should be generated by the parser. A set F ⊆ I of final items represents the recognition of a sentence. A parsing schema is a parsing system parameterized by a grammar and a sentence.

Parsing schemata are closely related to grammatical deduction systems (Shieber et al., 1995), where items are called formula schemata, deduction steps are inference rules, hypotheses are axioms and final items are goal formulas.

A parsing schema can be generalized from another one using the following transformations (Sikkel, 1997):

• Item refinement, breaking single items into multiple items.
• Step refinement, decomposing a single deduction step into a sequence of steps.
• Extension of a schema by considering a larger class of grammars.

In order to decrease the number of items and deduction steps in a parsing schema, we can apply the following kinds of filtering:

• Static filtering, in which redundant parts are simply discarded.
• Dynamic filtering, using context information to determine the validity of items.
• Step contraction, in which a sequence of deduction steps is replaced by a single one.

The set of items in a parsing system P_Alg corresponding to the parsing schema Alg describing a given parsing algorithm Alg is denoted I_Alg, the set of hypotheses H_Alg, the set of final items F_Alg and the set of deduction steps D_Alg.

A CYK-like Algorithm

We have chosen the CYK-like algorithm for TAG described in (Vijay-Shanker and Joshi, 1985) as our starting point. Due to the intrinsic limitations of this pure bottom-up algorithm, the grammars it can deal with are restricted to those with nodes having at most two children. The tabular interpretation of this algorithm works with items of the form [N^γ, i, j | p, q | adj] such that N^γ ⇒* a_{i+1} ... a_p F^γ a_{q+1} ... a_j if and only if (p, q) ≠ (-, -), and N^γ ⇒* a_{i+1} ... a_j if and only if (p, q) = (-, -), where N^γ is a node of an elementary tree with a label belonging to V_N. The two indices i and j with respect to the input string indicate the portion of the input string that has been derived from N^γ. If γ ∈ A, p and q are two indices with respect to the input string that indicate the part of the input string recognized by the foot node of γ.
In any other case p = q = -, representing that they are undefined. The element adj indicates whether an adjunction has taken place on node N^γ.

The introduction of the element adj, taking its value from the set {true, false}, corrects the items previously proposed for this kind of algorithm in (Vijay-Shanker and Joshi, 1985) in order to avoid several adjunctions on a node. A value of true indicates that an adjunction has taken place on the node N^γ and therefore further adjunctions on the same node are forbidden. A value of false indicates that no adjunction was performed on that node. In this case, during future processing this item can play the role of the item recognizing the excised part of an elementary tree to be attached to the foot node of an auxiliary tree. As a consequence, only one adjunction can take place on an elementary node, as is prescribed by the tree adjoining grammar formalism (Schabes and Shieber, 1994). As an additional advantage, the algorithm does not need to require the restriction that every auxiliary tree must have at least one terminal symbol in its frontier (Vijay-Shanker and Joshi, 1985).

The parsing system P_CYK corresponding to the CYK-like algorithm for a tree adjoining grammar G and an input string a_1 ... a_n is defined as follows:

I_CYK = { [N^γ, i, j | p, q | adj] } such that N^γ ∈ P(γ), label(N^γ) ∈ V_N, γ ∈ I ∪ A, 0 ≤ i ≤ j, (p, q) ≤ (i, j), adj ∈ {true, false}

H_CYK = { [a, i-1, i] | a = a_i, 1 ≤ i ≤ n }

D^Scan_CYK: [a, i-1, i] ⊢ [N^γ, i-1, i | -, - | false] if N^γ → a

D^ε_CYK: ⊢ [N^γ, i, i | -, - | false] if N^γ → ε

D^Foot_CYK: ⊢ [F^β, i, j | i, j | false]

D^LeftDom_CYK: [M^γ, i, k | p, q | adj], [P^γ, k, j | -, - | adj'] ⊢ [N^γ, i, j | p, q | false] such that N^γ → M^γ P^γ ∈ P(γ), M^γ ∈ spine(γ)

D^Adj_CYK: [R^β, i', j' | i, j | adj], [N^γ, i, j | p, q | false] ⊢ [N^γ, i', j' | p, q | true] such that β ∈ A, β ∈ adj(N^γ)

D_CYK = D^Scan_CYK ∪ D^ε_CYK ∪ D^Foot_CYK ∪ D^LeftDom_CYK ∪ D^RightDom_CYK ∪ D^NoDom_CYK ∪ D^Unary_CYK ∪ D^Adj_CYK

(the remaining steps D^RightDom_CYK, D^NoDom_CYK and D^Unary_CYK, covering the cases where the spine passes through the right child, through neither child, or where the production is unary, are analogous to D^LeftDom_CYK)

F_CYK = { [R^α, 0, n | -, - | adj] | α ∈ I }

The hypotheses defined for this parsing system are the standard ones and therefore they will be omitted in the next parsing systems described in this paper.

The key steps in the parsing system P_CYK are D^Foot_CYK and D^Adj_CYK, which are in charge of the recognition of adjunctions. The other steps are in charge of the bottom-up traversal of elementary trees and, in the case of auxiliary trees, the propagation of the information corresponding to the part of the input string recognized by the foot node.

The set of deductive steps D^Foot_CYK makes it possible to start the bottom-up traversal of each auxiliary tree, as it predicts all possible parts of the input string that can be recognized by the foot nodes. Several parses can exist for an auxiliary tree which differ only in the part of the input string predicted for the foot node. Not all of them need take part in a derivation, only those with a predicted foot compatible with an adjunction. The compatibility between the adjunction node and the foot node of the adjoined tree is checked by the derivation step D^Adj_CYK: when the root of an auxiliary tree β has been reached, it checks for the existence of a subtree of an elementary tree rooted by a node N^γ which satisfies the following conditions:

1. β can be adjoined at N^γ.
2. N^γ derives the same part of the input string derived from the foot node of β.

If the conditions are satisfied, further adjunctions on N^γ are forbidden and the parsing process continues with a bottom-up traversal of the rest of the elementary tree γ containing N^γ.
Word relatedness between two words refers to the degree to which one word has to do with another word, whereas word similarity is a special case, or subset, of word relatedness. A word relatedness method has many applications in NLP and related areas, such as information retrieval (Xu and Croft, 2000), image retrieval (Coelho et al., 2004), paraphrase recognition, malapropism detection and correction (Budanitsky and Hirst, 2006), word sense disambiguation (Schutze, 1998), automatic creation of thesauri (Lin, 1998a; Li, 2002), predicting user click behavior (Kaur and Hornof, 2005), building language models and natural spoken dialogue systems (Fosler-Lussier and Kuo, 2001), and automatic indexing, text annotation and summarization (Lin and Hovy, 2003). Most approaches to determining text similarity use word similarity (Li et al., 2006). There are other areas where word similarity plays an important role. Gauch et al. (1999) and Gauch and Wang (1997) applied word similarity in query expansion to provide conceptual retrieval, which ultimately increases the relevance of retrieved documents. Many approaches to spoken language understanding and spoken language systems require a grammar for parsing the input utterance to acquire its semantics. Meng and Siu (2002) used word similarity for semi-automatic grammar induction from unannotated corpora where the grammar contains both semantic and syntactic structures. An example in other areas is database schema matching.

Existing work on determining word relatedness is broadly categorized into three major groups: corpus-based (e.g., Cilibrasi and Vitanyi, 2007; Islam and Inkpen, 2006; Weeds et al., 2004; Landauer et al., 1998), knowledge-based (e.g., Radinsky et al., 2011; Gabrilovich and Markovitch, 2007; Jarmasz and Szpakowicz, 2003; Hirst and St-Onge, 1998; Resnik, 1995), and hybrid methods (e.g., Li et al., 2003; Lin, 1998b; Jiang and Conrath, 1997). Corpus-based measures can be either supervised (e.g., Bollegala et al., 2011) or unsupervised (e.g., Iosif and Potamianos, 2010; Islam and Inkpen, 2006). In this paper, we will focus only on unsupervised corpus-based measures.

Many unsupervised corpus-based measures of word relatedness, implemented on different corpora as resources (e.g., Islam and Inkpen, 2006; Weeds et al., 2004; Landauer et al., 1998; Landauer and Dumais, 1997), can be found in the literature. These measures generally use co-occurrence statistics (mostly word n-grams and their frequencies) of target words generated from a corpus to form probability estimates. As the co-occurrence statistics are corpus-specific, most of the existing corpus-based measures of word relatedness implemented on different corpora are not fairly comparable to each other even on the same task. In practice, most corpora do not have readily available co-occurrence statistics usable by these measures. Again, it is very expensive to precompute co-occurrence statistics for all possible word tuples from the corpus, as the word relatedness measures do not know the target words in advance. Thus, one of the main drawbacks of many corpus-based measures is that they are not feasible to use on-line. There are other corpus-based measures that use web page counts of target words from a search engine as co-occurrence statistics (e.g., Iosif and Potamianos, 2010; Cilibrasi and Vitanyi, 2007; Turney, 2001). The performance of these measures is not static, as the contents and the number of web pages are constantly changing.
As a result, it is hard to fairly compare any new measure to these measures. Thus, the research question arises: how can we compare a new word relatedness measure that is based on co-occurrence statistics of a corpus or a web search engine with the existing measures? We find that the use of a common corpus with co-occurrence statistics, e.g. the Google n-grams (Brants and Franz, 2006), as the resource could be a good answer to this question. We experimentally evaluated six unsupervised corpus-based measures of word relatedness using the Google n-gram corpus on different tasks. The Google n-gram dataset¹ is a publicly available corpus with co-occurrence statistics for a large volume of web text. This will allow any new corpus-based word relatedness measure to use the common corpus and be compared with different existing measures on the same tasks. It will also facilitate the on-line use of a measure based on the Google n-gram corpus. Another motivation is to find an indirect mapping of co-occurrence statistics between the Google n-gram corpus and a web search engine. This is also to show that the Google n-gram corpus could be a good resource for many of the existing and future word relatedness measures. One of the previous works of this nature is (Budanitsky and Hirst, 2006), where they evaluate five knowledge-based measures of word relatedness using WordNet as their central resource.

The reasons for using corpus-based measures are threefold. First, creating, maintaining and updating lexical databases or resources such as WordNet (Fellbaum, 1998) or Roget's Thesaurus (Roget, 1852) requires significant expertise and effort (Radinsky et al., 2011). Second, the coverage of words in lexical resources is not sufficient for many NLP tasks. Third, such lexical resources are language specific, whereas Google n-gram corpora are available in English and in 10 European languages (Brants and Franz, 2009).

The rest of this paper is organized as follows: six corpus-based measures of word relatedness are briefly described in Section 2. Evaluation methods are discussed in Section 3. Sections 4 and 5 present the experimental results from two evaluation approaches to compare several measures. We address our contributions and future related work in the Conclusion.
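To illustrate the kind of co-occurrence statistic that such corpus-based measures build on, the sketch below computes pointwise mutual information from unigram and bigram counts of the sort an n-gram corpus provides. The toy counts, the totals, and the PMI formulation are placeholders for illustration only; this is not one of the six measures evaluated in the paper.

import math

def pmi(w1, w2, unigrams, bigrams, total_unigrams, total_bigrams):
    """Pointwise mutual information of an ordered word pair from n-gram counts.
    Returns None if either word or the pair was never observed."""
    if (w1, w2) not in bigrams or w1 not in unigrams or w2 not in unigrams:
        return None
    p_pair = bigrams[(w1, w2)] / total_bigrams
    p_w1 = unigrams[w1] / total_unigrams
    p_w2 = unigrams[w2] / total_unigrams
    return math.log(p_pair / (p_w1 * p_w2))

# Toy counts standing in for a large n-gram collection (illustrative only).
unigrams = {"car": 12000, "automobile": 800, "banana": 900}
bigrams = {("car", "automobile"): 150, ("car", "banana"): 2}
print(pmi("car", "automobile", unigrams, bigrams, 1_000_000, 5_000_000))
print(pmi("car", "banana", unigrams, bigrams, 1_000_000, 5_000_000))

The point of fixing a common resource such as the Google n-grams is that the counts feeding a statistic like this stay the same for every measure being compared, unlike search-engine page counts.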
The similarity between connectionist models of computation and neuron computation suggests that a study of syntactic parsing in a connectionist computational architecture could lead to significant insights into ways natural language can be parsed efficiently. Unfortunately, previous investigations into connectionist parsing (Cottrell, 1989; Fanty, 1985; Selman and Hirst, 1987) have not been very successful. They cannot parse arbitrarily long sentences and have inadequate grammar representations. However, the difficulties with connectionist parsing can be overcome by adopting a different connectionist model of computation, namely that proposed by Shastri and Ajjanagadde (1990). This connectionist computational architecture differs from others in that it directly manifests the symbolic interpretation of the information it stores and manipulates. It also shares the massive parallelism, evidential reasoning ability, and neurological plausibility of other connectionist architectures. Since virtually all characterizations of natural language syntax have relied heavily on symbolic representations, this architecture is ideally suited for the investigation of syntactic parsing.

The computational architecture proposed by Shastri and Ajjanagadde (1990) provides a rather general purpose computing framework, but it does have significant limitations. A computing module can represent entities, store predications over those entities, and use pattern-action rules to manipulate this stored information. This form of representation is very expressive, and pattern-action rules are a general purpose way to do computation. However, this architecture has two limitations which pose difficult problems for parsing natural language. First, only a conjunction of predications can be stored; the architecture cannot represent arbitrary disjunction. This limitation implies that the parser's representation of syntactic structure must be able to leave unspecified the information which the input has not yet determined, rather than having a disjunction of more completely specified possibilities for completing the sentence. Second, the memory capacity of any module is bounded. The number of entities which can be stored is bounded by a small constant, and the number of predications per predicate is also bounded. These bounds pose problems for parsing because the syntactic structures which need to be recovered can be arbitrarily large. This problem can be solved by allowing the parser to output the syntactic structure incrementally, thus allowing the parser to forget the information which it has already output and which it no longer needs to complete the parse. This technique requires that the representation of syntactic structure be able to leave unspecified the information which has already been determined but which is no longer needed for the completion of the parse. Thus the limitations of the architecture mean that the parser's representation of syntactic structure must be able to leave unspecified both the information which the input has not yet determined and the information which is no longer needed.

In order to comply with these requirements, the parser uses Structure Unification Grammar (Henderson, 1990) as its grammatical framework. SUG is a formalization of accumulating information about the phrase structure of a sentence until a complete description of the sentence's phrase structure tree is constructed.
Its extensive use of partial descriptions makes it ideally suited for dealing with the limitations of the architecture.

This paper focuses on the parser's representation of phrase structure information and on the way the parser accumulates this information during a parse. Brief descriptions of the grammar formalism and the implementation in the connectionist architecture are also given. Except where otherwise noted, a simulation of the implementation has been written, and its grammar supports a small set of examples. A more extensive grammar is under development. SUG is clearly an adequate grammatical framework, due to its ability to straightforwardly simulate Feature Structure Based Tree Adjoining Grammar (Vijay-Shanker, 1987), as well as other formalisms (Henderson, 1990). Initial investigations suggest that the constraints imposed by the parser do not interfere with this linguistic adequacy, and more extensive empirical verification of this claim is in progress. The remainder of this paper will first give an overview of Structure Unification Grammar, then present the parser design, and finally a sketch of its implementation.

Structure Unification Grammar is a formalization of accumulating information about the phrase structure of a sentence until this structure is completely described. This information is specified in partial descriptions of phrase structure trees. An SUG grammar is simply a set of these descriptions. The descriptions cannot use disjunction or negation, but their partiality makes them both flexible enough and powerful enough to state what is known and only what is known where it is known. There is also a simple abstraction operation for SUG descriptions which allows unneeded information to be forgotten, as will be discussed in the section on the parser design. In an SUG derivation, descriptions are combined by equating nodes. This way of combining descriptions is extremely flexible, thus allowing the parser to take full advantage of the flexibility of SUG descriptions, and also providing for efficient parsing strategies. The final description produced by a derivation must completely describe some phrase structure tree. This tree is the result of the derivation. The design of SUG incorporates ideas from Tree Adjoining Grammar, Description Theory (Marcus et al., 1983), Combinatory Categorial Grammar, Lexical Functional Grammar, and Head-driven Phrase Structure Grammar.

An SUG grammar is a set of partial descriptions of phrase structure trees. Each SUG grammar entry simply specifies an allowable grouping of information, thus expressing the information interdependencies. The language which SUG provides for specifying these descriptions allows partiality both in the information about individual nodes, and (crucially) in the information about the structural relations between nodes. As in many formalisms, nodes are described with feature structures. The use of feature structures allows unknown characteristics of a node to be left unspecified. Nodes are divided into nonterminals, which are arbitrary feature structures, and terminals, which are atomic instances of strings. Unlike most formalisms, SUG allows the specification of the structural relations to be equally partial. For example, if a description specifies children for a node, this does not preclude that node from acquiring other children, such as modifiers. This partiality also allows grammar entries to underspecify ordering constraints between nodes, thus allowing for variations in word order.
This partiality in structural information is imperative to allow incremental parsing without disjunction (Marcus et al., 1983). In addition to the immediate dominance relation for specifying parent-child relationships and linear precedence for specifying ordering constraints, SUG allows chains of immediate dominance relationships to be partially specified using the dominance relation. A dominance constraint between two nodes specifies that there must be a chain of zero or more immediate dominance constraints between the two nodes, but it does not say anything about the chain. This relation is necessary to express long distance dependencies in a single grammar entry. Some examples of SUG phrase structure descriptions are given in figure 1, and will be discussed below.

A complete description of a phrase structure tree is constructed from the partial descriptions in an SUG grammar by conjoining a set of grammar entries and specifying how these descriptions share nodes. More formally, an SUG derivation starts with descriptions from the grammar, and in each step conjoins a set of one or more descriptions and adds zero or more statements of equality between nonterminal nodes. The description which results from a derivation step must be satisfiable, so the feature structures of any two equated nodes must unify and the resulting structural constraints must be consistent with some phrase structure tree. The final description produced by a derivation must be a complete description of some phrase structure tree. This tree is the result of the derivation. The sentences generated by a derivation are all those terminal strings which are consistent with the ordering constraints on the resulting tree. Figure 2 shows an example derivation with one step in which all grammar entries are combined and all equations are done. This definition of derivations provides a very flexible framework for investigating various parsing strategies. Any ordering of combining grammar entries and doing equations is a valid derivation. The only constraints on derivations come from the meanings of the description primitives and from the need to have a unique resulting tree. This flexibility is crucial to allow the parser to compensate for the connectionist architecture's limitations and to parse efficiently.

Because the resulting description of an SUG derivation must be both a consistent description and a complete description of some tree, an SUG grammar entry can state both what is true about the phrase structure tree and what needs to be true. For a description to be complete it must specify a single immediate dominance tree, and all terminals mentioned in the description must have some (possibly empty) string specified for them. Otherwise there would be no way to determine the exact tree structure or the word for each terminal in the resulting tree. A grammar entry can express grammatical requirements by not satisfying these completion requirements locally. For example, in figure 1 the structure for "ate" has a subject node with category NP and with a terminal as the value of its head feature. Because this terminal does not have its word specified, this NP must equate with another NP node which does have a word for the value of its head feature. The unification of the two NPs' feature structures will cause the equation of the two head terminals. In this way the structure for "ate" expresses the fact that it obligatorily subcategorizes for a subject NP.
The structure for "ate" also expresses its subcategorization for an object NP, but this object is not obligatory since it does not have an underspecified terminal head. Like the subject of "ate", the root of the structure for "white" in figure 1 has an underspecified terminal head. This expresses the fact that "white" obligatorily modifies N's. The need to construct a single immediate dominance tree is used in the structure for "who" to express the need for the subcategorized S to have an NP gap. Because the dominated NP node does not have an immediate parent, it must equate with some node which has an immediate parent. The site of this equation is the gap associated with "who".The parser presented in this paper accumulates phrase structure information in the same way as does Structure Unification Grammar. It calculates SUG derivation steps using a small set of operations, and incrementally outputs the derivation as it parses. The parser is implemented in the connectionist architecture proposed by Shastri and Ajjanagadde (1990) as a special purpose module for syntactic constituent structure parsing. An SUG description is stored in the module's memory by representing nonterminal nodes as entities and all other needed information as predications over these nodes. If the parser starts to run out of memory space, then it can remove some nodes from the memory, thus forgetting all information about those nodes. The parser operations are implemented in pattern-action rules. As each word is input to the parser, one of these rules combines one of the word's grammar entries with the current description. When the parse is finished the parser checks to make sure it has produced a complete description of some phrase structure tree.The grammars which are supported by the parser are a subset of those for Structure Unification Grammar. These grammars are for the most part lexicalized. Each lexicalized grammar entry is a rooted tree fragment with exactly one phonetically realized terminal, which is the word of the entry. Such grammar entries specify what information is known about the phrase structure of the sentence given the presence of the word, and can be used (Henderson, 1990) to simulate Lexicalized Tree Adjoining Grammar (Schabes, 1990) . Nonlexical grammar entries are rooted tree fragments with no words. They can be used to express constructions like reduced relative clauses, for which no lexical information is necessary. The xis u., l with y I Figure 2 : A derivation for the sentence 'TVho did Barbie see a picture of yesterday".. current mechanism the parser uses to find possible long distance dependencies requires some information about possible extractions to be specified in grammar entries, despite the fact that this information currently only has meaning at the level of the parser.The primary limitations on the parser's ability to parse the sentences derivable with a grammax are due to the architecture's lack of disjunction and limited memory capacity. Technically, constraints on long distance dependencies are enforced by the parser's limited ability to calculate dominance relationships, but the definition of an SUG derivation could be changed to manifest these constraints. This new definition would be necessary to maintain the traditional split between competence and performance phenomena. The remaining constraints imposed at the level of the parser are traditionally treated as performance constraints. 
For example, the parser's bounded memory prevents it from being able to parse arbitrarily center-embedded sentences or from allowing arbitrarily many phrases on the right frontier of a sentence to be modified. These are well established performance constraints on natural language (Chomsky, 1959, and many others). The lack of a disjunction operator limits the parser's ability to represent local ambiguities. This results in some locally ambiguous grammatical sentences being unparsable. The existence of such sentences for the human parser, called garden path sentences, is also well documented (Bever, 1970, among others). The representations currently used for handling local ambiguities appear to be adequate for building the constituent structure of any non-garden-path sentences. The full verification of this claim awaits a study of how effectively probabilistic constraints can be used to resolve ambiguities. The work presented in this paper does not directly address the question of how ambiguities between possible predicate-argument structures are resolved. Also, the current parser is not intended to be a model of performance phenomena, although since the parser is intended to be computationally adequate, all limitations imposed by the parser must fall within the set of performance constraints on natural language.

The parser follows SUG derivations, incrementally combining a grammar entry for each word with the description built from the previous words of the sentence. As in SUG, the intermediate descriptions can specify multiple rooted tree fragments, but the parser represents such a set as a list in order to represent the ordering between terminals in the fragments. The parser begins with a description containing only an S node which needs a head. This description expresses the parser's expectation for a sentence. As each word is read, a grammar entry for that word is chosen and combined with the current description using one of four combination operations. Nonlexical grammar entries can be combined with the current description at any time using the same operations. There is also an internal operation which equates two nodes already in the current description without using a grammar entry. The parser outputs each operation it does as it does it, thus providing incremental output to other language modules. After each operation the parser's representation of the current description is updated so that it fully reflects the new information added by the operation. The five operations used by the parser are shown in figure 3.

The first combination operation, called attaching, adds the grammar entry to the current description and equates the root of the grammar entry with some node already in the current description. The second, called dominance instantiating, equates a node without a parent in the current description with a node in the grammar entry, and equates the host of the unparented node with the root of the grammar entry. The host function is used in the parser's mechanism for enforcing dominance constraints, and represents the fact that the unparented node is potentially dominated by its current host. In the case of long distance dependencies, a node's host is changed to nodes further and further down in the tree in a manner similar to slash passing in Generalized Phrase Structure Grammar, but the resulting domain of possible extractions is more similar to that of Tree Adjoining Grammar.
The equationless combining operation simply adds a grammar entry to the end of the tree fragment list. This operation is sometimes necessary in order to delay attachment decisions long enough to make the right choice. The leftward attaching operation equates the root of the tree fragment at the end of the list with some node in the grammar entry, as long as this root is not the initializing matrix S.¹ [Footnote 1: As of this writing the implementation of the tree fragment list and these latter two combination operations has been designed, but not coded in the simulation of the parser's implementation.] The one parser operation which does not involve a grammar entry is called internal equating. When the parser's representation of the current description is updated so that it fully reflects newly added information, some potential equations are calculated for nodes which do not yet have immediate parents. The internal equating operation executes one of these potential equations. There are two cases when this can occur: equating fillers with gaps, and equating a root of a tree fragment with a node in the next earlier tree fragment on the list. The latter is how tree fragments are removed from the list.

The bound on the number of entities which can be stored in the parser's memory requires that the parser be able to forget entities. The implementation of the parser only represents nonterminal nodes as entities. The number of nonterminals in the memory is kept low simply by forgetting nodes when the memory starts getting full, thereby also forgetting the predications over the nodes. This forgetting operation abstracts away from the existence of the forgotten node in the phrase structure. Once a node is forgotten it can no longer be equated with, so nodes which must be equated with in order for the total description to be complete cannot be forgotten. Forgetting nodes may eliminate some otherwise possible parses, but it will never allow parses which violate the forgotten constraints. Any forgetting strategy can be used as long as the only eliminated parses are for readings which people do not get. Several such strategies have been proposed in the literature.

As a simple example parse, consider the parse of "Barbie dresses fashionably" sketched in figure 4. The parser begins with an S which needs a head, and receives the word "Barbie". The underlined grammar entry is chosen because it can attach to the S in the current description using the attaching operation. The next word input is "dresses", and its verb grammar entry is chosen and combined with the current description using the dominance instantiating operation. In the resulting description the subject NP is no longer on the right frontier, so it will not be involved in any future equations and thus can be forgotten. Remember that the output of the parser is incremental, so forgetting the subject will not interfere with semantic interpretation. The next word input is "fashionably", which is a VP modifier. The parser could simply attach "fashionably", but for the purposes of exposition assume the parser is not sure where to attach this modifier, so it simply adds this grammar entry to the end of the tree fragment list using equationless combining. The updating rules of the parser then calculate that the VP root of this tree fragment could equate with the VP for "dresses", and it records this fact. The internal equating operation can then apply to do this equation, thereby choosing this attachment site for "fashionably".
This technique can be used to delay resolving any attachment ambiguity. At this point the end of the sentence has been reached and the current description is complete, so a successful parse is signaled.

Another example which illustrates the parser's ability to use underspecification to delay disambiguation decisions is given in figure 5. The feature decomposition ±A, ±V is used for the major categories (N, V, A, and P) in order to allow the object of "know" to be underspecified as to whether it is of category N ([-A,-V]) or V ([-A,+V]).

[Figure 5: Delaying the resolution of the ambiguity between "Barbie knows a man." and "Barbie knows a man left."]

When "a man" is input the parser is not sure if it is the object of "know" or the subject of this object, so the structure for "a man" is simply added to the parser state using equationless combining. This underspecification can be maintained for as long as necessary, provided there are resources available to maintain it. If no verb is subsequently input then the NP can be equated with the -A node using internal equation, thus making "a man" the object of "know". If, as shown, a verb is input then leftward attaching can be used to attach "a man" as the subject of the verb, and then the verb's S node can be equated with the -A node to make it the object of "know". Since this parser is only concerned with constituent structure and not with predicate-argument structure, the fact that the -A node plays two different semantic roles in the two cases is not a problem.

The above parser is implemented using the connectionist computational architecture proposed by Shastri and Ajjanagadde (1990). This architecture solves the variable binding problem² by using units which pulse periodically, and representing different entities in different phases. [Footnote 2: The variable binding problem is keeping track of what predications are for what variables when more than one variable is being used.] Units which are storing predications about the same entity pulse synchronously, and units which are storing predications about different entities pulse in different phases. The number of distinct entities which can be stored in a module's memory at one time is determined by the width of a pulse spike and the time between periodic firings (the period). Neurologically plausible estimates of these values put the maximum number of entities in the general vicinity of 7±2. The architecture does computation with sets of units which implement pattern-action rules. When such a set of units finds its pattern in the predications in the memory, it modifies the memory contents in accordance with its action.

This connectionist computational architecture is used to implement a special purpose module for syntactic constituent structure parsing. A diagram of the parser's architecture is shown in figure 6. This parsing module uses its memory to store information about the phrase structure description being built. Nonterminals are the entities in the memory, and predications over nonterminals are used to represent all the information the parser needs about the current description. Pattern-action rules are used to make changes to this information. Most of these rules implement the grammar. For each grammar entry there is a rule for each way of using that grammar entry in a combination operation. The patterns for these rules look for nodes in the current description where their grammar entry can be combined in their way.
The actions for these rules add information to the memory so as to represent the changes to the current description which result from their combination. If the grammar entry is lexical then its rules are only activated when its word is the next word in the sentence. A general purpose connectionist arbitrator is used to choose between multiple rule pattern matches, as with other disambiguation decisions.³ [Footnote 3: Because a rule's pattern matches must be communicated to the rule's action through an arbitrator, the existence and quality of a match must be specified in a single node's phase. For rules which involve more than one node, information about one of the nodes must be represented in the phase of the other node for the purposes of testing patterns. This is the purpose of the signal generation box in figure 6. For all such rules, the identity of one of the nodes can be determined uniquely given the other node and the parser state. For example, in the dominance instantiating operation, given the unparented node, the host of that node can be found because host is a function. This constraint on parser operations seems to have significant linguistic import, but more investigation of this possibility is necessary.] This arbitrator weighs the preferences for the possible choices and makes a decision. This mechanism for doing disambiguation allows higher level components of the language system to influence disambiguation by adding to the preferences of the arbitrator.⁴ [Footnote 4: In the current simulation of the parser implementation the arbitrators are controlled by the user.] It also allows probabilistic constraints such as lexical preferences and structural biases to be used, although these aspects of the parser design have not yet been adequately investigated. Because the parser's grammar is implemented in rules which all compute in parallel, the speed of the parser is independent of the size of the grammar. The internal equating operation is implemented with a rule that looks for pairs of nodes which have been specified as possible equations, and equates them, provided that that equation is chosen by the arbitrator. Equation is done by translating all predications for one node to the phase of the other node, then forgetting the first node. The forgetting operation is implemented with links which suppress all predications stored for the node to be forgotten. The only other rules update the parser state to fully reflect any new information added by a grammar rule. These rules act whenever they apply, and include the calculation of equatability and host relationships.

This paper has given an overview of a connectionist syntactic constituent structure parser which uses Structure Unification Grammar as its grammatical framework. The connectionist computational architecture which is used stores and dynamically manipulates symbolic representations, thus making it ideally suited for syntactic parsing. However, the architecture's inability to represent arbitrary disjunction and its bounded memory capacity pose problems for parsing. These difficulties can be overcome by using Structure Unification Grammar as the grammatical framework, due to SUG's extensive use of partial descriptions.

This investigation has indeed led to insights into efficient natural language parsing. This parser's speed is independent of the size of its grammar. It only uses a bounded amount of memory. Its output is incremental, monotonic, and does not include disjunction. Its disambiguation mechanism provides a parallel interface for the influence of higher level language modules. Assuming neurologically plausible timing characteristics for the computing units of the connectionist architecture, the parser's speed is roughly compatible with the speed of human speech. In the future the ability of this architecture to do evidential reasoning should allow the use of statistical information in the parser, thus making use of both grammatical and statistical approaches to language in a single framework.
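The bounded-memory behaviour summarized above (a small fixed number of entities, with nodes forgotten once they can no longer take part in equations) can be sketched as follows. This is a toy Python illustration under our own assumptions, not the connectionist implementation, and the forgetting policy shown is deliberately simplistic.

class BoundedParserMemory:
    """Toy parser memory bounded to roughly 7 +/- 2 entities; nodes that can
    no longer take part in future equations may be forgotten to free space."""

    def __init__(self, capacity=7):
        self.capacity = capacity
        self.predications = {}   # node id -> set of predications
        self.needed = set()      # nodes that must remain available for equation

    def add_node(self, node, predications, needed=False):
        if len(self.predications) >= self.capacity:
            self._forget_one()
        self.predications[node] = set(predications)
        if needed:
            self.needed.add(node)

    def _forget_one(self):
        # Forget any node not required for future equations; a real strategy
        # would choose among such nodes according to a performance model.
        for node in list(self.predications):
            if node not in self.needed:
                del self.predications[node]
                return
        raise MemoryError("all stored nodes are still needed")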
0
The development of software packages and code libraries that implement algorithms and perform tasks in scientific areas is of great advantage for both researchers and educators. The availability of these tools saves the researchers a lot of the time and the effort needed to implement the new approaches they propose and conduct experiments to verify their hypotheses. Educators also find these tools useful in class demonstrations and for setting up practical programming assignments and projects for their students.A large number of systems have been developed over the years to solve problems and perform tasks in Natural Language Processing, Information Retrieval, or Network Analysis. Many of these systems perform specific tasks such as parsing, Graph Partitioning, co-reference resolution, web crawling etc. Some other systems are frameworks for performing generic tasks in one area of focus such as NLTK (Bird and Loper, 2004) and GATE (Cunningham et al., 2002) for Natural Language Processing; Pajek (Batagelj and Mrvar, 2003) and GUESS (Adar, 2006) for Network Analysis and Visualization; and Lemur 1 for Language Modeling and Information Retrieval. This paper presents Clairlib, an open-source toolkit that contains a suit of modules for generic tasks in Natural Language Processing (NLP), Information Retrieval (IR), and Network Analysis (NA). While many systems have been developed to address tasks or subtasks in one of these areas as we have just mentioned, Clairlib provides one integrated environment that addresses tasks in the three areas. This makes it useful for a wide range of applications within and across the three domains.Clairlib is designed to meet the needs of researchers and educators with varying purposes and backgrounds. For this purpose, Clairlib provides three different interfaces to its functionality: a graphical interface, a command-line interface, and an application programming interface (API).Clairlib is developed and maintained by the Computational Linguistics and Information Retrieval (CLAIR) group at the University of Michigan. The first version of Clairlib was released in the year 2007. It has been heavily developed since then until it witnessed a qualitative leap by adding the Graphical Interface and many new features to the latest version that we are presenting here.Clairlib core modules are written in Perl. The GUI was written in Java. The Perl back-end and the Java front-end are efficiently tied together through a communication module. Clairlib is compatible with all the common platforms and operating systems. The only requirements are a Perl interpreter and Java Runtime Environment (JRE).Clairlib has been used in several research projects to implement systems and conduct experiments. It also has been used in several academic courses.The rest of this paper is organized as follows. In Section 2, we describe the structure of Clairlib. In Section 3, we present its functionality. Section 4 presents some usage examples. We conclude in Section 5.
0
It was probably Chuck who coined the term "armchair linguist" (Svartvik, 1991) . Chuck Fillmore's deep commitment to the study of language -in particular lexical semantics -on the basis of corpus data served as a model that kept many of us honest in our investigation of language. Today, we are lucky to be able to work from our office chairs while collecting data from a broad speaker group by means of crowdsourcing. And Chuck's FrameNet taught us the importance of considering word meanings in their contexts. Our paper presents work that tries to take this legacy to heart.
0
The increasing availability of large-scale corpus resources has had a lasting impact on the field of linguistics. In the field of corpus linguistics, large quantities of data have made it possible to precisely model complex multifactorial processes of linguistic change (e.g. Perek and Hilpert, 2017; Gries et al., 2018) . Modern methods in natural language processing also increasingly make use of word embeddings, which encode rich information about the use of a word learned from large datasets (Collobert et al., 2011 ; see Kutuzov et al., 2018 for diachronic word embeddings).From a diachronic perspective, the Greek language corpus is an ideal candidate for a largescale corpus-linguistic approach: it is not only one of the longest preserved languages (with a large body of text already in the 8th century BC, and continuing up until the present day), but it also is extremely well-documented: the Thesaurus Linguae Graecae library of Ancient Greek literary texts, for example, contains more than 110 million words (Pantelia, 2021) . To make such an approach possible, this paper will describe GLAUx ("the Greek Language Automated"), a project aiming to collect a large corpus (spanning sixteen centuries) of Ancient Greek texts from various sources and to automatically annotate this corpus for rich linguistic information.The construction of such a long-term historical corpus is obviously not a trivial task. The goal of this paper is therefore to describe the central problems encountered during this endeavor and the approaches currently adopted to tackle these problems. This will be discussed in section 3, after giving an overview of the data and annotation layers in section 2. Finally, section 4 will give an outlook of future work for this (long-term) project.
0
The lexical sample task is a WSD evaluation task that provides training and test data in which a small, pre-selected set of target words is chosen and marked up. In the training data the senses of the target words are given; in the test data they are not and must be predicted by task participants. HIT-IR-WSD treats the lexical sample task as a classification problem and focuses on extracting effective features from the instances. We did not use any additional training data beyond the official data provided by the task organizers. Section 2 gives the architecture of the system. Since the task provides the correct word sense for each training instance, a supervised learning approach is used; we choose a Support Vector Machine (SVM) as the classifier. SVM is introduced in Section 3. Knowledge sources are presented in Section 4. The last section discusses the experimental results and presents the main conclusions of the work performed.
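To make the supervised setup just described concrete, the following is a minimal sketch of a lexical-sample WSD classifier of this general kind; it is not the actual HIT-IR-WSD system, and the toy instances and bag-of-context-words features are assumptions made purely for illustration.

# Minimal sketch of a supervised lexical-sample WSD classifier with
# bag-of-context-words features; illustrative only, not HIT-IR-WSD itself.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Hypothetical training instances for one target word ("bank"):
# each instance is the surrounding context, labelled with a sense id.
train_contexts = ["he sat on the river bank fishing",
                  "she deposited money at the bank branch"]
train_senses = ["bank%river", "bank%finance"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(train_contexts, train_senses)

print(clf.predict(["the bank raised its interest rates"]))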
0
In recent years, simultaneous translation has attracted increasing interest in both the research and industry communities. It aims at real-time translation that demands high translation quality and an as-short-as-possible delay between speech and translation output. A typical simultaneous translation system consists of an automatic speech recognition (ASR) system that transcribes the source speech into streaming source text, and a machine translation (MT) system that performs the translation from the source into the target text. However, there is a gap between the output of ASR and the input of MT. The MT system takes sentences as input, while the streaming ASR output has no segmentation boundaries. Therefore, exploring a policy to split ASR output into appropriate segments becomes a vital issue for simultaneous translation. If translation starts before adequate source content is delivered, the translation quality degrades. However, waiting for too much source text increases latency. The policies of recent work generally fall into two classes:
• Fixed Policies are hard policies that follow a pre-defined schedule independent of the context. They segment the source text based on a fixed length (Ma et al., 2019; Dalvi et al., 2018). For example, the wait-k method (Ma et al., 2019) first reads k source words, and then generates one target word immediately after each subsequent word is received. Policies of this type are simple and easy to implement. However, they do not consider contextual information and usually result in a drop in translation accuracy.
• Adaptive Policies learn to do segmentation according to dynamic contextual information. They either use a specific model to chunk the streaming source text (Oda et al., 2014; Cho and Esipova, 2016; Gu et al., 2017; Zheng et al., 2019a, 2020) or jointly learn segmentation and translation in an end-to-end framework (Arivazhagan et al., 2019; Zheng et al., 2019b). The adaptive methods are more flexible than the fixed ones and achieve state-of-the-art performance.
In this paper, we propose a novel adaptive segmentation policy for simultaneous translation. Our method is motivated by two widely used strategies in simultaneous interpretation:
• Meaningful Unit (MU) Chunking. While listening to speakers, interpreters usually preemptively group the streaming words into units with clear and definite meaning, referred to as meaningful units, that can be directly translated without waiting for more words.
• Interpreters are often obliged to keep close to the source speech and render the translation of MUs in order, i.e., perform translation monotonically while making the translation grammatically tolerable.
See Table 1 for illustration.
Table 1: A comparison of Chinese-English text translation and simultaneous interpretation.
Source with MU: 上午 10 点 || 我 去了 趟 || 公园
Text translation: I went to the park at 10 a.m.
Simultaneous interpretation: At 10 a.m. || I went to || the park.
A text translator translates the full sentence after reading all the source words and produces a translation with a long-distance reordering by moving the initial part (as underlined) of the source sentence to the end of the target side. But when doing simultaneous interpreting, an interpreter starts to translate as soon as he or she judges that the current received streaming text constitutes an MU ("||") and translates it monotonically.
Unlike a text translator, a simultaneous interpreter dynamically segments the source text into 3 MUs and translates them monotonically. In our approach, we model the policy as an MU segmentation model, which dynamically splits the streaming text into meaningful units. Once a meaningful unit is detected,1 it is fed to the MT model to generate the translation. The MU segmentation is implemented by a classification model under the pre-training & fine-tuning framework (Devlin et al., 2018; Sun et al., 2019). As there are no standard training corpora to train the MU segmentation classifier, we propose a novel translation-prefix based method to generate training data. Basically, the method detects whether the translation of a sequence of words is a prefix of the full sentence's translation. If so, the sequence is considered an MU. This makes the segmentation model consistent with the translation model. We further propose a refined method to extract fine-grained MUs to reduce latency. Experimental results on NIST Chinese-English and WMT 2015 German-English datasets show that our method outperforms the previous state-of-the-art methods in balancing translation accuracy and latency. The contributions of this paper can be summarized as follows:
• Inspired by human interpreters, we propose a novel adaptive segmentation policy that splits the ASR output into meaningful units for simultaneous translation. The meaningful units enable the MT model to produce high-quality translations with low latency.
• We propose a novel prefix-attention method to extract fine-grained MUs by training a neural machine translation (NMT) model that generates monotonic translations.
• Our method is simple yet effective. It can be easily integrated into a practical simultaneous translation system.
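The translation-prefix criterion for generating MU training labels can be sketched in a few lines. The snippet below is an illustrative approximation of that idea, not the authors' exact procedure; the translate function is a placeholder for an external MT system, and exact string-prefix matching is an assumption.

# Sketch of translation-prefix based MU labelling (illustrative; assumes an
# external `translate(tokens) -> str` MT system, which is a placeholder here).
def label_meaningful_units(source_tokens, translate):
    full_translation = translate(source_tokens)
    boundaries = []
    for i in range(1, len(source_tokens)):
        prefix_translation = translate(source_tokens[:i])
        # If translating the prefix already yields a prefix of the full
        # translation, position i is treated as a meaningful-unit boundary.
        if full_translation.startswith(prefix_translation):
            boundaries.append(i)
    return boundaries

The boundary positions produced this way could then serve as positive labels when fine-tuning a pre-trained classifier over the streaming source text.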
0
The evolution of social media texts such as blogs, micro-blogs (e.g., Twitter), and chats (e.g., WhatsApp and Facebook messages) has created many new opportunities for information access and language technologies. However, it has also posed many new challenges, making it one of the current prime research areas in Natural Language Processing (NLP). Current language technologies primarily focus on English (Young, 2020), yet social media platforms demand methods that can also process other languages, as they are inherently multilingual environments.2 Besides, multilingual communities around the world regularly express their thoughts in social media employing and alternating different languages in the same utterance. This mixing of languages, also known as code-mixing or code-switching,3 is a norm in multilingual societies and is one of the many NLP challenges that social media has facilitated. In addition to the writing aspects in social media, such as flexible grammar, permissive spelling, arbitrary punctuation, slang, and informal abbreviations (Baldwin et al., 2015; Eisenstein, 2013), code-mixing has introduced a diverse set of linguistic challenges. For instance, multilingual speakers tend to code-mix using a single alphabet regardless of whether the languages involved belong to different writing systems (i.e., language scripts). This behavior is known as transliteration, and code-mixers rely on the phonetic patterns of their writing (i.e., the actual sound) to convey their thoughts in the foreign language (i.e., the language adapted to a new script) (Sitaram et al., 2019). Another common pattern in code-mixing is the alternation of languages at the word level. This behavior often happens by inflecting words from one language with the rules of another language (Solorio and Liu, 2008). For instance, in the second example below, the word pushes is the result of conjugating the English verb push according to Spanish grammar rules for the present tense in third person (in this case, the inflection -es). The Hinglish example shows that phonetic Latin script typing is a popular practice in India, instead of using Devanagari script to write Hindi words. We capture both transliteration and word-level code-mixing inflections in the Hinglish and Spanglish corpora of this competition, respectively.
Hinglish: Aye/HI aur/HI enjoy/EN kare/HI (Eng. trans.: come and enjoy)
Spanglish: No/SP me/SP pushes/EN please/EN (Eng. trans.: Don't push me, please)
Considering the previous challenges, code-mixing demands new research methods where the focus goes beyond simply combining monolingual resources to address this linguistic phenomenon. Code-mixing poses difficulties in a variety of language pairs and on multiple tasks along the NLP stack, such as word-level language identification, part-of-speech tagging, dependency parsing, machine translation, and semantic processing (Sitaram et al., 2019). Conventional NLP systems heavily rely on monolingual resources to address code-mixed text, limiting their ability to properly handle issues such as phonetic typing and word-level code-mixing. Naturally, code-mixing is more common in geographical regions with a high percentage of bi- or multilingual speakers, such as in Texas and California in the US, Hong Kong and Macao in China, many European and African countries, and the countries in South-East Asia. Multilingualism and code-mixing are also widespread in India, which has more than 400 languages (Eberhard et al., 2020), with about 30 languages having more than 1 million speakers.
Language diversity and dialect changes trigger Indians to frequently change and mix languages, particularly in speech and social media contexts. As of 2020, Hindi and Spanish have over 630 million and over 530 million speakers (Eberhard et al., 2020) , respectively, ranking them in 3rd and 4th place based on the number of speakers worldwide, which speaks of the relevancy of using these languages in our code-mixing competition.This paper provides an overview of the SemEval-2020 Task 9 competition on sentiment analysis of codemixed social media text (SentiMix). Specifically, we provide code-mixed text annotated with word-level language identification and sentence-level sentiment labels (negative, neutral, and positive). We release our Hinglish (Hindi-English) and Spanglish (Spanish-English) corpora, which are comprised of 20K and 19K tweets, respectively. We describe general statistics of the corpora as well as the baseline for the competition.We received 61 final submissions for Hinglish and 28 for Spanglish, adding to a total number of 89 submissions. We received 33 system description papers. We provide an overview of the participants' results and describe their methods at a high level. Notably, the majority of these methods employed BERT-like and ensemble models to reach competitive results, with the best performers reaching 75.0% and 80.6% F1 scores for Hinglish and Spanglish on held-out test data, respectively. We hope that this shared task will continue to catch the NLP community's attention on the linguistic code-mixing phenomenon.
0
In this paper, we introduce the Action ENRICH4ALL (E-goverNment [RI]CHatbot for ALL), which develops a multilingual chatbot service to be deployed in public administration in Luxembourg, Denmark, and Romania. ENRICH4ALL is funded by the Connecting Europe Facility and runs from June 2021 to May 2023. The partners are the Luxembourg Institute of Science and Technology, BEIA Consulting Romania, the Romanian Academy Institute for AI, and SupWiz, Denmark. We discuss the benefits and challenges of e-government chatbots and the integration of eTranslation with the chatbot platform.
0
Sequence to sequence model decoding remains something of a paradox. The most widely adopted training method for these models is maximum likelihood estimation (MLE), which aims at maximising the probability of the ground truth outputs provided in the training datasets. Consequently, decoding from MLE-trained models is done by trying to find the output to which the model assigns maximum likelihood. Unfortunately, as models usually predict tokens one by one, exact search is not feasible in the general case and practitioners resort to heuristic mechanisms instead.The most popular of these heuristics is beam search (Reddy, 1977) , which maintains several hypotheses in parallel and is guaranteed to find a more likely output than the more basic greedy decoding. This approach has some obvious flaws: for one, it is completely agnostic to the actual metrics (or scores) practitioners actually want to optimise.Even more crucially, in most cases beam search fails at the one thing it is supposed to do: finding the optimal output sequence (w.r.t the model), as shown by Stahlberg and Byrne (2019) . Also alarming are the findings of Welleck et al. (2020) , proving that traditional search mechanisms can yield infinite-length outputs, to which the model assigns zero probability. Finally, the use of likelihood as a training objective has a spectacular sideeffect: it causes trained models to have an inordinate fondness for empty outputs. By using exact search on the output likelihood in machine translation, Stahlberg and Byrne (2019) show that in more than half of cases the highest scoring output according to the model is the empty sentence! All told, we rely on models placing a surprising emphasis on empty outputs, and on a decoding mechanism which usually fails to find optimal outputs; and both ignore the relevant metrics. One can then justifiably wonder why we observe impressive MT results. Stahlberg and Byrne (2019) provide an apparently paradoxical explanation: it is precisely because the decoding mechanisms are imperfect that models produce outputs of high quality. Meister et al. (2020a) elaborate on this assumption; they show that beam search optimises for a slightly modified likelihood objective, promoting uniform distribution probability inside sentences.This state of affairs seems highly unsatisfactory. While a whole body of work has been devoted to alleviating these issues, most approaches have been concerned with training (Bengio et al., 2015; Ranzato et al., 2016; Shen et al., 2016; Bahdanau et al., 2017; Edunov et al., 2018; Leblond et al., 2018) , or making the search mechanism differentiable (Collobert et al., 2019) . These have resulted in performance increase, but they still rely on likelihood as an objective for decoding. Further, Choshen et al. (2019) shows that performance improvements using RL are limited and poorly understood.In this paper, we focus instead on contrasting the performance of beam search to alternative decoding algorithms aimed at optimising various metrics of interest directly, via a value function (or the metric itself when available). Notably, we experiment with variants of the powerful Monte Carlo Tree Search (MCTS) (Coulom, 2006; Kocsis and Szepesvári, 2006) mechanism, which has a proven track record in other sequential applications (Browne et al., 2012; Silver et al., 2017) . 
We investigate whether, by optimising the metric of interest at test time, one can obtain improved performance compared to likelihood-based approaches, and whether performance scales with the amount of computation, as opposed to beam search, which has been shown to degrade with large beam sizes (Cohen and Beck, 2019). We concentrate on machine translation (MT), an emblematic, well-studied sequence to sequence task with readily available data and benchmarks.
Contributions. (i) We recall that there are two different types of metrics: reference-based scores, which rely on ground truth translations, in contrast to reference-less ones. We design a new score, Multilingual BERTScore, as an imperfect but illustrative example of the latter. (ii) We introduce several new decoding algorithms, detailing their implementation and how best to use them for MT. We provide a blueprint for how to use MCTS profitably in NLP (with pseudocode for a batched NumPy-based (Harris et al., 2020) implementation), which opens the door for many exciting applications. (iii) We run extensive experiments to study the performance of decoding mechanisms for different metrics. We show that beam search is the best option only for reference-based metrics. For those, value-based alternatives falter as the value problem is too hard, since it ultimately relies on reconstructing hidden information. For reference-less scores, beam search is outperformed by its competitors, including MCTS.
Outline. We go over the related work in Section 2. In Section 3, we contrast several types of metrics and introduce illustrative examples. We review beam search and introduce alternative algorithms in Section 4. We explain how we train the required value function for value-based methods in Section 5. In Section 6, we go over experimental details and results. Finally, we discuss our results, their limitations and possible next steps in Section 7.
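Since beam search is the baseline against which the alternative decoders are compared, a compact reference implementation helps fix ideas. The sketch below is a generic, unbatched illustration, not the paper's implementation; it assumes a step function returning per-token log-probabilities and an end-of-sequence id.

# Generic beam search over a step function log_probs(prefix) -> {token: logp}.
# Illustrative only; `log_probs`, `eos` and `max_len` are assumed to be provided.
def beam_search(log_probs, eos, beam_size=4, max_len=50):
    beams = [([], 0.0)]  # (token prefix, cumulative log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for token, lp in log_probs(prefix).items():
                candidates.append((prefix + [token], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in candidates[:beam_size]:
            # Hypotheses ending in EOS are set aside; the rest stay in the beam.
            (finished if prefix[-1] == eos else beams).append((prefix, score))
        if not beams:
            break
    return max(finished + beams, key=lambda c: c[1])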
0
Reference problems are a central issue in natural language processing. For instance, we need to understand the antecedents of pronouns in translating one language into another. Consider: Someone killed Jim. The police have no suspect, but they think that he or she needed money and knew that he was a wealthy man. He walked in the house with a big suitcase and put the money in it.We will not be able to translate the sentences into, say, Japanese when we are uncertain of what the pronouns like he, she, they, and it in this text are referring to.A pronoun refers to a linguistic object in the preceding or succeeding sentences. In the sequence,Clinton visited Japan. He gave a talk at a university. the second sentence is connected to the first one by He (personal pronoun) referring to Clinton (its antecedent). The same relation holds in:Clinton visited Japan. The president gave a talk at a university.Here, the noun president refers to Clinton.In this paper we try to devise a method for determining antecedent of the noun phrase containing determiner "kono (this)" or "sono (that, its)" in Japanese.
0
Being able to converse like humans in a closed domain is a precondition before an intelligent opendomain chatbot, which further requires transiting among various domains, can be designed Su et al., 2020) . Nonetheless, even if constrained in a specific domain, current chatbots are still far from satisfactory. Unlike task-oriented systems that can be relatively well-resolved with handcrafted templates, human conversations feature a complex mixture of QA, chitchat, recommendation, etc. without pre-specified goals or conversational patterns (Dodge et al., 2016; Akasaki and Kaji, 2017; . Selecting proper domain knowledge to support response generation at all the different situations is challenging (Milward and Beveridge, 2003; Shen et al., 2019) . In this work, we direct our focus to the movie domain and present a large-scale, crowdsourced Chinese dataset with fine-grained annotations in hope of boosting the study towards a human-like closed-domain chatbot.A variety of dialogue datasets with grounded domain knowledge have already been proposed. However, they are collected either through (1) online forum crawling (Dodge et al., 2016; Ghazvininejad et al., 2018; Liu et al., 2018; Zhou et al., 2018a; , which are noisy, multi-party, mostly contain only single-exchange QA, or (2) crowdsourced (Zhu et al., 2017; Zhou et al., 2018b; Moon et al., 2019; , which are small-scale and often created in an overconstrained setting like teacher-student (Moghe et al., 2018) . Even for datasets crowd-sourced in unconstrained scenarios, suggestive domain knowledge is provided for humans before an utterance is provided. This would inevitably prompt humans to utilize these knowledge deliberately, yielding unnatural conversations simply connecting the knowledge (Dinan et al., 2019; . We show examples from other datasets in Appendix Table 10 . In comparison, our dataset has the following advantages: 1. Natural: Crowdworkers chat in a free environment without further constraint or prompt in order to mimic the human daily conversations to the largest extent.2. Large-scale: It covers 270k human dialogues with over 3M utterances, which is at least one order of magnitude larger than all the other crowd-sourced datasets.3. Annotated: Utterances are labeled with entity information and dialogue acts classified into 15 fine-grained aspects, based on which linked into different types of knowledge.Different from previous crowd-sourced works, our annotation process is conducted posteriori so that it will not interfere with human conversations, e.g., prompt them to overuse suggested knowledge.Built upon our dataset, we propose a simple unified language model approach to push the limits of movie-domain chatbots. The model is first pretrained on 2.2B words collected from various general-domain conversational resources, then finetuned on the movie dataset with additional knowledge and dialogue acts incorporated. We pool all components like intent prediction and knowledge retrieval into a sequence prediction task and solve them with a unified language model architecture. It avoids designing complex systems for individual components separately and all subtasks can be easily trained simultaneously (Hosseini-Asl et al., 2020; Peng et al., 2020) . We show our simple unified approach outperforms strong baselines for each separate subtask. Knowledge retrieval, dialogue acts prediction and general-domain pretrain benefit from each other and altogether bring improvement to the generation quality. 
In the online interactive test, our best model succeeds at chatting with humans for 11.4 turns without being detected as a machine, outperforming even the commercial chatbots Mitsuku 2 and Microsoft XiaoIce, 3 which further rely on complex rules. By analyzing the limitations of our model, we find that it especially has difficulty dealing with in-depth discussions over long turns. Future research can consider employing a larger knowledge base or explicit state tracking. In summary, our main contributions are (1) presenting a high-quality, large-scale Chinese conversational corpus with fine-grained annotations in the movie domain to benefit future study, (2) showing that a simple unified neural model trained on the high-quality dataset can approach human performance and even outperform commercial systems relying on complex rules, and (3) studying the shortcomings of current techniques, providing suggestive directions for future research.
0
Since the introduction of the Semantic Textual Similarity (STS) task at SemEval 2012 and the Semantic Relatedness (SR) task at SemEval 2014, a large number of participating systems have been developed to resolve the tasks. 1, 2 The systems must quantifiably identify the degree of similarity or relatedness, respectively, for a pair of short pieces of text, like sentences, where similarity or relatedness is a broad concept whose value is normally obtained by averaging the opinion of several annotators. A semantic similarity/relatedness score is usually a real number on a semantic scale, [0, 5] in STS and [1, 5] in SR, in the direction from no relevance to semantic equivalence. Some examples from the MSRpar dataset of STS 2012 with associated similarity scores (by human judgment) are given below:
• The bird is bathing in the sink. vs. Birdie is washing itself in the water basin. (score = 5.0)
• John went horse back riding at dawn with a whole group of friends. vs. Sunrise at dawn is a magnificent view to take in if you wake up early enough for it. (score = 0)
From our reading of the literature (Marelli et al., 2014b; Agirre et al., 2012; Agirre et al., 2013; Agirrea et al., 2014), most STS/SR systems rely on pairwise similarity, such as lexical similarity using taxonomies (WordNet (Fellbaum, 1998)) or distributional semantic models (LDA (Blei et al., 2003), LSA (Landauer et al., 1998), ESA (Gabrilovich and Markovitch, 2007), etc.), and word/n-gram overlap as main features to train a support vector machines (Joachims, 1998) regression model (supervised), or use a word-alignment metric (unsupervised) aligning the two given texts to compute their semantic similarity. Intuitively, syntactic structure plays an important role in how human beings understand the meaning of a given text. Thus, it may also help to identify the semantic equivalence/relatedness between two given texts. However, in the STS/SR tasks, very few systems provide evidence of the contribution of syntactic structure to their overall performance. Some systems report partially on this issue; for example, iKernels (Severyn et al., 2013) carried out an analysis on the STS 2012 datasets, but not on the STS 2013 datasets. They found that syntactic structure contributes 0.0271 and 0.0281 points more to the overall performance, from 0.8187 to 0.8458 and 0.8468, for adopting constituency and dependency trees, respectively. In this paper, we analyze the impact of syntactic structure on the STS 2014 and SICK datasets of the STS/SR tasks. We consider three systems which are reported to perform efficiently and effectively on processing syntactic trees, using three proposed approaches: Syntactic Tree Kernel (Moschitti, 2006), Syntactic Generalization (Galitsky, 2013) and Distributed Tree Kernel (Zanzotto and Dell'Arciprete, 2012). The remainder of the paper is as follows: Section 2 introduces three approaches to exploit syntactic structure in STS/SR tasks, Section 3 describes the experimental settings, Section 4 discusses the evaluations, and Section 5 presents the conclusions and future work.
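To make the notion of a pairwise-similarity baseline concrete, here is a toy unsupervised sketch that scores a sentence pair by token overlap and rescales it to the [0, 5] STS range; it is purely illustrative and is not one of the systems analyzed in the paper.

# Toy unsupervised STS baseline: Jaccard overlap of lowercased tokens,
# rescaled to the [0, 5] similarity range. Illustrative only.
def sts_score(sent_a, sent_b):
    tokens_a = set(sent_a.lower().split())
    tokens_b = set(sent_b.lower().split())
    if not tokens_a or not tokens_b:
        return 0.0
    jaccard = len(tokens_a & tokens_b) / len(tokens_a | tokens_b)
    return 5.0 * jaccard

print(sts_score("The bird is bathing in the sink.",
                "Birdie is washing itself in the water basin."))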
0
Coreference resolution refers to the task of identifying noun phrases that refer to the same extralinguistic entity in a text. Using coreference information has been shown to be beneficial in a number of other tasks, including information extraction (McCarthy and Lehnert, 1995), question answering (Morton, 2000) and summarization (Steinberger et al., 2007). Developing a full coreference system, however, is a considerable engineering effort, which is why a large body of research concerned with feature engineering or learning methods (e.g. Culotta et al. 2007; Denis and Baldridge 2007) uses a simpler but non-realistic setting with pre-identified mentions, and the use of coreference information in summarization or question answering techniques is not as widespread as it could be. We believe that the availability of a modular toolkit for coreference will significantly lower the entrance barrier for researchers interested in coreference resolution, as well as provide a component that can be easily integrated into other NLP applications. A number of systems that perform coreference resolution are publicly available, such as GUITAR (Steinberger et al., 2007), which handles the full coreference task, and JAVARAP (Qiu et al., 2004), which only resolves pronouns. However, the literature on coreference resolution, if providing a baseline, usually uses the algorithm and feature set of Soon et al. (2001) for this purpose. Using the built-in maximum entropy learner with feature combination, BART reaches 65.8% F-measure on MUC6 and 62.9% F-measure on MUC7 using Soon et al.'s features, outperforming JAVARAP on pronoun resolution, as well as the Soon et al. reimplementation of Uryupina (2006). Using a specialized tagger for ACE mentions and an extended feature set including syntactic features (e.g. using tree kernels to represent the syntactic relation between anaphor and antecedent, cf. Yang et al. 2006), as well as features based on knowledge extracted from Wikipedia (cf. Ponzetto and Smith, in preparation), BART reaches state-of-the-art results on ACE-2. Table 1 compares our results, obtained using this extended feature set, with results from Ng (2007). Pronoun resolution using the extended feature set gives 73.4% recall, coming near specialized pronoun resolution systems such as (Denis and Baldridge, 2007).
2 System Architecture
The BART toolkit has been developed as a tool to explore the integration of knowledge-rich features into a coreference system at the Johns Hopkins Summer Workshop 2007. It is based on code and ideas from the system of Ponzetto and Strube (2006), but also includes some ideas from GUITAR (Steinberger et al., 2007) and other coreference systems (Versley, 2006; Yang et al., 2006). 1 The goal of bringing together state-of-the-art approaches to different aspects of coreference resolution, including specialized preprocessing and syntax-based features, has led to a design that is very modular. This design provides effective separation of concerns across several tasks/roles, including engineering new features that exploit different sources of knowledge, designing improved or specialized preprocessing methods, and improving the way that coreference resolution is mapped to a machine learning problem.
Preprocessing To store results of preprocessing components, BART uses the standoff format of the MMAX2 annotation tool (Müller and Strube, 2006) with MiniDiscourse, a library that efficiently implements a subset of MMAX2's functions.
Using a generic format for standoff annotation allows the coreference resolver to be used as part of a larger system, but also allows qualitative error analysis using integrated MMAX2 functionality (annotation diff, visual display). Preprocessing consists in marking up noun chunks and named entities, as well as additional information such as part-of-speech tags, and merging this information into markables that are the starting point for the mentions used by the coreference resolution proper. The first pipeline is a chunking pipeline, which uses a classical combination of tagger and chunker: the Stanford POS tagger (Toutanova et al., 2003), the YamCha chunker (Kudoh and Matsumoto, 2000) and the Stanford Named Entity Recognizer (Finkel et al., 2005). The desire to use richer syntactic representations then led to the development of a parsing pipeline, which uses Charniak and Johnson's reranking parser (Charniak and Johnson, 2005) to assign POS tags and uses base NPs as chunk equivalents, while also providing syntactic trees that can be used by feature extractors. BART also supports using the Berkeley parser (Petrov et al., 2006), yielding an easy-to-use Java-only solution. To provide a better starting point for mention detection on the ACE corpora, the Carafe pipeline uses an ACE mention tagger provided by MITRE (Wellner and Vilain, 2006). A specialized merger then discards any base NP that was not detected to be an ACE mention. To perform coreference resolution proper, the mention-building module uses the markables created by the pipeline to create mention objects, which provide an interface more appropriate for coreference resolution than the MiniDiscourse markables. These objects are grouped into equivalence classes by the resolution process and a coreference layer is written into the document, which can be used for detailed error analysis.
Feature Extraction BART's default resolver goes through all mentions and looks for possible antecedents in previous mentions as described by Soon et al. (2001). Each pair of anaphor and candidate is represented as a PairInstance object, which is enriched with classification features by feature extractors, and then handed over to a machine learning-based classifier that decides, given the features, whether anaphor and candidate are coreferent or not. Feature extractors are realized as separate classes, allowing for their independent development.
Learning BART provides a generic abstraction layer that maps application-internal representations to a suitable format for several machine learning toolkits: one module exposes the functionality of the WEKA machine learning toolkit (Witten and Frank, 2005), while others interface to specialized state-of-the-art learners. SVMLight (Joachims, 1999), in the SVMLight/TK (Moschitti, 2006) variant, allows the use of tree-valued features. SVM classification uses a Java Native Interface-based wrapper replacing SVMLight/TK's svm_classify program to improve classification speed. Also included is a maximum entropy classifier that is based upon Robert Dodier's translation of Liu and Nocedal's (1989) L-BFGS optimization code, with a function for programmatic feature combination. 2
Training/Testing The training and testing phases slightly differ from each other.
In the training phase, the pairs that are to be used as training examples have to be selected in a process of sample selection, whereas in the testing phase, it has to be decided which pairs are given to the decision function and how to group mentions into equivalence relations given the classifier decisions. This functionality is factored out into the encoder/decoder component, which is separate from feature extraction and machine learning itself. It is possible to completely change the basic behavior of the coreference system by providing new encoders/decoders, and still rely on the surrounding infrastructure for feature extraction and machine learning.
1 An open source version of BART is available from http://www.sfs.uni-tuebingen.de/~versley/BART/.
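Because the default resolver follows Soon et al. (2001), the standard sample-selection scheme mentioned above can be sketched compactly: for each anaphor, the closest coreferent antecedent yields a positive training pair and every intervening mention yields a negative one. The data structures below are hypothetical stand-ins for BART's actual classes; this is an illustration of the scheme, not BART's code.

# Soon et al. (2001)-style training-pair selection (illustrative sketch).
def make_training_pairs(mentions):
    """mentions: list of (mention_id, entity_id) in document order."""
    pairs = []  # (antecedent_id, anaphor_id, label)
    for j, (anaphor, entity) in enumerate(mentions):
        antecedent_idx = None
        for i in range(j - 1, -1, -1):
            if mentions[i][1] == entity:
                antecedent_idx = i  # closest preceding coreferent mention
                break
        if antecedent_idx is None:
            continue  # first mention of its entity: no training pairs
        pairs.append((mentions[antecedent_idx][0], anaphor, 1))
        for i in range(antecedent_idx + 1, j):
            pairs.append((mentions[i][0], anaphor, 0))
    return pairs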
0
Machine reading comprehension (MRC) or question answering (QA) has been a long-standing goal in Natural Language Processing. There is a surge of interest in this area due to new end-to-end modeling techniques and the release of several large-scale, open-domain datasets. In earlier datasets (Hermann et al., 2015; Hill et al., 2016; Yang et al., 2015; Rajpurkar et al., 2016), the questions did not arise from actual end users. Instead, they were constructed in cloze style or created by crowdworkers given a short passage from well-edited sources such as Wikipedia and CNN/Daily Mail. As a consequence, the questions are usually well-formed and about simple facts, and the answers are guaranteed to exist as short spans in the given candidate passages. In MS-MARCO (Nguyen et al., 2016), the questions were sampled from actual search queries, which may have typos and may not be phrased as questions. 1 Multiple short passages, which might have the answer to the query, were extracted from webpages by a separate information retrieval system. He et al. (2017) made the DuReader dataset a more realistic reflection of the real-world problem by including not only questions with relatively short and factual answers, but also questions about complex descriptions, procedures, opinions, etc., which may have multiple, much longer answers, or no answer at all. Furthermore, full-body text from webpages listed in top search results is directly provided as context. These documents tend to be much noisier than Wikipedia and CNN. They are much longer (5 times longer than those in MS-MARCO on average) and contain many paragraphs that are irrelevant to the query. New problems arise as we now consider the task of machine reading comprehension in this much more challenging real-world setting. First, multiple valid answers to a single question are not only possible but quite common. Figure 1 shows some examples of questions with multiple answers from the DuReader dataset. There could be multiple ways to perform the same task (Question 1), multiple opinions about the same subject (Question 2), or multiple explanations for the same observation (Question 3).
Figure 1: Examples of questions with multiple answers from the DuReader dataset (English translations of the original Chinese).
Question 1: word character spacing in Word. Answers: 1. Select the text you want to indent, and right-click on it to select Font, open the Font dialog box, and then select the Character Spacing tab, select the Kerning for fonts check box, and enter a number, at last click OK. 2. Select the text you want to set character spacing, and right-click on it to select Font, and then switch to Character Spacing. 3. Click Format from the upper menu bar, select Font, select Character Spacing, and then change the point size.
Question 2: Is LONGZHU worth watching? Answers: 1. It is, the pace is good and it's adorable and sweet. 2. It's not, personally I don't like that kind of TV series. 3. It's ok, it's fine to follow if you don't care about the historical accuracy.
Question 3: Got "User Busy" message after one ring. Answers: 1. Your number is in that person's blacklist. 2. The person you called hung up on you.
However, few works have been done with multiple answers in machine reading comprehension. To address this problem, we propose a multi-answer multi-task scheme which incorporates multiple reference answers in the objective function (but still predicts a single answer at decoding time). We propose three different kinds of multi-answer loss functions and compare their performance through experiments. Another problem is the multiple occurrences of the same answer.
As rich context is provided for a single question, the same answer could occur more than once in different passages, or even at different places in the same passage. In this case, using only one gold span for the answer could be problematic, as the model is forced to choose one span over others that contain the same content. To solve this problem, we propose to apply Minimum Risk Training (MRT), which uses the expected metric as the loss and gives reward to all spans that are similar to the gold answer. In this paper, we present a multi-answer, multi-task objective function to train an end-to-end MRC/QA system. We experiment with various alternatives on the DuReader dataset and show that our model outperforms other competing systems and increases the state-of-the-art ROUGE-L score by about 7 points.
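One simple instantiation of a multi-answer span objective is to marginalize the span probability over all annotated answers rather than a single gold span. The sketch below (assuming PyTorch) shows such a generic loss; it is illustrative and is not necessarily one of the three variants the paper compares.

import torch

# Generic multi-answer span loss: marginalize span probability over all gold
# (start, end) pairs instead of a single span. Illustrative sketch only.
def multi_answer_span_loss(start_logits, end_logits, gold_spans):
    """start_logits, end_logits: tensors of shape [seq_len]; gold_spans: list of (start, end)."""
    start_log_probs = torch.log_softmax(start_logits, dim=-1)
    end_log_probs = torch.log_softmax(end_logits, dim=-1)
    span_log_probs = torch.stack(
        [start_log_probs[s] + end_log_probs[e] for s, e in gold_spans])
    # Negative log of the summed probability mass over all reference spans.
    return -torch.logsumexp(span_log_probs, dim=0)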
0
Question Answering (QA) is a challenging task that draws upon many aspects of NLP. Unlike search or information retrieval, answers infrequently contain lexical overlap with the question (e.g. What should we eat for breakfast? -Zoe's Diner has good pancakes), and require QA models to draw upon more complex methods to bridge this "lexical chasm" (Berger et al., 2000) . These methods range from robust shallow models based on lexical semantics, to deeper, explainably-correct, but much more brittle inference methods based on first order logic. Berger et al. (2000) proposed that this "lexical chasm" might be partially bridged by repurposing statistical machine translation (SMT) models for QA. Instead of translating text from one language to another, these monolingual alignment models learn to translate from question to answer 1 , learning common associations from question terms such as eat or breakfast to answer terms like kitchen, pancakes, or cereal.While monolingual alignment models have enjoyed a good deal of recent success in QA (see related work), they have expensive training data requirements, requiring a large set of aligned indomain question-answer pairs for training. For lowresource languages or specialized domains like science or biology, often the only option is to enlist a domain expert to generate gold QA pairs -a process that is both expensive and time consuming. All of this means that only in rare cases are we accorded the luxury of having enough high-quality QA pairs to properly train an alignment model, and so these models are often underutilized or left struggling for resources.Making use of recent advancements in discourse parsing (Feng and Hirst, 2012), here we address this issue, and investigate whether alignment models for QA can be trained from artificial question-answer pairs generated from discourse structures imposed on free text. We evaluate our methods on two corpora, generating alignment models for an opendomain community QA task using Gigaword 2 , and for a biology-domain QA task using a biology textbook.The contributions of this work are: 1. We demonstrate that by exploiting the discourse structure of free text, monolingual alignment models can be trained to surpass the performance of models built from expensive indomain question-answer pairs. 2. We compare two methods of discourse parsing: a simple sequential model, and a deep model based on Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) . We show that the RST-based method captures within and across-sentence alignments and performs better than the sequential model, but the sequential model is an acceptable approximation when a discourse parser is not available. 3. We evaluate the proposed methods on two corpora, including a low-resource domain where training data is expensive (biology). 4. We experimentally demonstrate that monolingual alignment models trained using our method considerably outperform state-of-theart neural network language models in low resource domains.
0
Natural language processing (NLP) is enamored of contextual word representations, and for good reason! Contextual word-embedders, e.g. BERT (Devlin et al., 2019) and ELMo (Peters et al., 2018), have bolstered NLP model performance on myriad tasks, such as syntactic parsing (Kitaev et al., 2019), coreference resolution (Joshi et al., 2019), morphological tagging (Kondratyuk, 2019) and text generation (Zellers et al., 2019). Given the large empirical gains observed when they are employed, it is all but certain that word representations derived from neural networks encode some continuous analogue of linguistic structures. 1 Exactly what these representations encode about linguistic structure, however, remains little understood. Researchers have studied this question by attributing function to specific network cells with visualization methods (Karpathy et al., 2015; Li et al., 2016) and by probing (Alain and Bengio, 2017), which seeks to extract structure from the representations. Recent work has probed various representations for correlates of morphological (Giulianelli et al., 2018), syntactic (Hupkes et al., 2018; Zhang and Bowman, 2018; Hewitt and Manning, 2019; Lin et al., 2019), and semantic (Kim et al., 2019) structure. Most current probing efforts focus on what we term extrinsic probing, where the goal is to determine whether the posited linguistic structure is predictable from the learned representation. Generally, extrinsic probing works argue for the presence of linguistic structure by showing that it is extractable from the representations using a machine learning model. In contrast, we focus on intrinsic probing, whose goals are a proper superset of the goals of extrinsic probing. In intrinsic probing, one seeks to determine not only whether a signature of linguistic structure can be found, but also how it is encoded in the representations. In short, we aim to discover which particular "neurons" (a.k.a. dimensions) in the representations correlate with a given linguistic structure. Intrinsic probing also has ancillary benefits that extrinsic probing lacks; it can facilitate manual analyses of representations and potentially yield a nuanced view of the information encoded by them. The technical portion of our paper focuses on developing a novel framework for intrinsic probing: we scan sets of dimensions, or neurons, in a word vector representation for those that correlate with target linguistic properties. We show that when intrinsically probing high-dimensional representations, the present probing paradigm is insufficient (§2). Current probes are too slow to be used under our framework, which invariably leads to low-resolution scans that can only look at one or a few neurons at a time. Instead, we introduce decomposable probes, which can be trained once on the whole representation and henceforth be used to scan any selection of neurons. To that end, we describe one such probe that leverages the multivariate Gaussian distribution's inherent decomposability, and evaluate its performance on a large-scale, multi-lingual, morphosyntactic probing task (§3). We experiment on 36 languages 2 from the Universal Dependencies treebanks (Nivre et al., 2017). We find that all the morphosyntactic features we considered are encoded by a relatively small selection of neurons.
In some cases, very few neurons are needed; for instance, for multilingual BERT English representations, we see in Fig. 1 that, with two neurons, we can largely separate past and present tense. In this, our work is closest to Lakretz et al. (2019), except that we extend the investigation beyond individual neurons, a move which is only made tractable by decomposable probing. We also provide analyses on morphological features beyond number and tense. Across all languages, 35 out of 768 neurons on average suffice to reach a reasonable amount of encoded information, and adding more yields diminishing returns (see Fig. 2). Interestingly, in our head-to-head comparison of BERT and fastText, we find that fastText almost always encodes information about morphosyntactic properties using fewer dimensions.
1 Code and data are available at https://github.com/rycolab/intrinsic-probing.
2 See App. F for a list.
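The decomposability being exploited here can be illustrated with a tiny class-conditional Gaussian probe: fit full-dimensional Gaussians once per morphosyntactic class, then evaluate any subset of neurons simply by slicing the mean vectors and covariance matrices. The sketch below uses NumPy and SciPy and is a generic illustration of that idea, not the authors' exact estimator; the regularization constant and evaluation loop are assumptions.

import numpy as np
from scipy.stats import multivariate_normal

# Fit one Gaussian per class on all dimensions; score any neuron subset by
# marginalizing, i.e. slicing the mean and covariance. Illustrative only.
def fit_probe(X, y):
    params = {}
    for c in sorted(set(y)):
        Xc = X[np.array(y) == c]
        cov = np.cov(Xc, rowvar=False) + 1e-4 * np.eye(X.shape[1])
        params[c] = (Xc.mean(axis=0), cov)
    return params

def accuracy_on_neurons(params, X, y, neurons):
    idx = np.array(neurons)
    preds = []
    for x in X:
        scores = {c: multivariate_normal.logpdf(x[idx], mu[idx], cov[np.ix_(idx, idx)])
                  for c, (mu, cov) in params.items()}
        preds.append(max(scores, key=scores.get))
    return float(np.mean([p == t for p, t in zip(preds, y)]))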
0
Technical terms and named entities (NEs) constitute the bulk of Out Of Vocabulary (OOV) words. Named entities are usually not found in bilingual dictionaries and are very generative in nature. Proper identification, classification and translation of named entities are very important in many Natural Language Processing (NLP) applications. Translation of NEs involves both translation and transliteration. Transliteration is the method of translating into another language by expressing the original foreign word using characters of the target language while preserving its pronunciation in the source language. Thus, the central problem in transliteration is predicting the pronunciation of the original word. Transliteration between two languages that use the same alphabet is trivial: the word is left as it is. However, for languages that use different alphabets, the names must be transliterated, or rendered in the target language alphabet. Transliteration of NEs is necessary in many applications, such as machine translation, corpus alignment, cross-language information retrieval, information extraction and automatic lexicon acquisition. In the literature, a number of transliteration algorithms are available involving English (Li et al., 2004; Vigra and Khudanpur, 2003; Goto et al., 2003), European languages (Marino et al., 2005) and some of the Asian languages, namely Chinese (Li et al., 2004; Vigra and Khudanpur, 2003), Japanese (Goto et al., 2003; Knight and Graehl, 1998), Korean (Jung et al., 2000) and Arabic (Al-Onaizan and Knight, 2002a; Al-Onaizan and Knight, 2002c). Recently, some works have been initiated involving Indian languages (Ekbal et al., 2006; Ekbal et al., 2007; Surana and Singh, 2008).
0
Language identification (LID) is an important NLP task that usually acts as an enabling technology in a pipeline involving another downstream task such as machine translation (Salloum et al., 2014) or sentiment analysis (Abdul-Mageed, 2017a,b). Although several works have focused on detecting languages in global settings (see Jauhiainen et al. (2018) for a survey), there has not been extensive research on teasing apart similar languages or language varieties. This is the case for Arabic, the term used to collectively refer to a large number of varieties with a vast population of native speakers (~300 million). For this reason, we focus on detecting fine-grained Arabic dialects as part of our contribution to the MADAR shared task 2, Twitter user dialect identification (Bouamor et al., 2019). Previous works on Arabic (e.g., Callison-Burch (2011, 2014); Elfardy and Diab (2013); Cotterell and Callison-Burch (2014)) have primarily targeted cross-country regional varieties such as Egyptian, Gulf, and Levantine, in addition to Modern Standard Arabic (MSA). These works exploited social data from blogs (Diab et al., 2010; Elfardy and Diab, 2012; Al-Sabbagh and Girju, 2012; Sadat et al., 2014), the general Web (Al-Sabbagh and Girju, 2012), online news sites' comments sections (Zaidan and Callison-Burch, 2011), and Twitter (Abdul-Mageed and Abdul-Mageed et al., 2014; Mubarak and Darwish, 2014; Qwaider et al., 2018). Other works have used translated data (e.g., Bouamor et al. (2018)) or speech transcripts (e.g., Malmasi and Zampieri (2016)). More recently, works reporting larger-scale datasets at the country level were undertaken, including data spanning 10 to 17 different countries (Zaghouani and Charfi, 2018). To solve Arabic dialect identification, many researchers have developed models based on computational linguistics and machine learning (Elfardy and Diab, 2013; Salloum et al., 2014; Cotterell and Callison-Burch, 2014), and deep learning. In this paper, we focus on using state-of-the-art deep learning architectures to identify the Arabic dialects of Twitter users at the country level. We use the MADAR Twitter corpus (Bouamor et al., 2019), comprising 21 country-level dialect labels. Namely, we employ a unidirectional Gated Recurrent Unit (GRU) as our baseline and pre-trained Multilingual Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) to identify dialect classes for individual tweets (which we then port to the user level). We also apply semi-supervised learning to augment our training data, with the goal of improving model performance. Our system ranks first in the shared task. The rest of the paper is organized as follows: data are described in Section 2, Section 3 introduces our methods, followed by experiments in Section 4. We conclude in Section 5.
0
Over recent years there has been much interest in the field of distributional semantics, drawing on the distributional hypothesis: words that occur in similar contexts tend to have similar meanings (Harris, 1954). There is a large body of work on the use of different similarity measures (Lee, 1999; Weeds and Weir, 2003; Curran, 2004) and many researchers have built thesauri (i.e. lists of "nearest neighbours") automatically and applied them in a variety of applications, generally with a good deal of success. In early research there was much interest in how these automatically generated thesauri compare with human-constructed gold standards such as WordNet and Roget (Lin, 1998; Kilgarriff and Yallop, 2000). More recently, the focus has tended to shift to building thesauri to alleviate the sparse-data problem. Distributional thesauri have been used in a wide variety of areas including sentiment classification (Bollegala et al., 2011), WSD (Miller et al., 2012; Khapra et al., 2010), textual entailment (Berant et al., 2010), predicting semantic compositionality (Bergsma et al., 2010), acquisition of semantic lexicons (McIntosh, 2010), conversation entailment (Zhang and Chai, 2010), lexical substitution (Szarvas et al., 2013), taxonomy induction (Fountain and Lapata, 2012), and parser lexicalisation (Rei and Briscoe, 2013). A primary focus of distributional semantics has been on identifying words which are similar to each other. However, semantic similarity encompasses a variety of different lexico-semantic and topical relations. Even if we just consider nouns, an automatically generated thesaurus will tend to return a mix of synonyms, antonyms, hyponyms, hypernyms, co-hyponyms, meronyms and other topically related words. A central problem here is that whilst most measures of distributional similarity are symmetric, some of the important semantic relations are not. The hyponymy relation (and converse hypernymy), which forms the ISA backbone of taxonomies and ontologies such as WordNet (Fellbaum, 1989) and determines lexical entailment (Geffet and Dagan, 2005), is asymmetric. On the other hand, the co-hyponymy relation, which relates two words unrelated by hyponymy but sharing a (close) hypernym, is symmetric, as are synonymy and antonymy. Table 1 shows the distributionally nearest neighbours of the words cat, animal and dog. In the list for cat we can see 2 hypernyms and 13 co-hyponyms. 1
Table 1: Top 15 neighbours of cat, animal and dog generated using Lin's similarity measure (Lin, 1998) considering all words and dependency features occurring 100 or more times in Wikipedia.
cat: dog 0.32, animal 0.29, rabbit 0.27, bird 0.26, bear 0.26, monkey 0.26, mouse 0.25, pig 0.25, snake 0.24, horse 0.24, rat 0.24, elephant 0.23, tiger 0.23, deer 0.23, creature 0.23
animal: bird 0.36, fish 0.34, creature 0.33, dog 0.31, horse 0.30, insect 0.30, species 0.29, cat 0.29, human 0.28, mammal 0.28, cattle 0.27, snake 0.27, pig 0.26, rabbit 0.26, elephant 0.25
dog: cat 0.32, animal 0.31, horse 0.29, bird 0.26, rabbit 0.26, pig 0.25, bear 0.26, man 0.25, fish 0.24, boy 0.24, creature 0.24, monkey 0.24, snake 0.24, mouse 0.24, rat 0.23
Distributional similarity is being deployed (e.g., Dinu and Thater (2012)) in situations where it can be useful to be able to distinguish between these different relationships. Consider the following two sentences:
(1) The cat ran across the road.
(2) The animal ran across the road.
The ability to determine whether entailment holds between the sentences, and in which direction, depends on the ability to identify hyponymy. Given a similarity score of 0.29 between cat and animal, how do we know which is the hyponym and which is the hypernym? In applying distributional semantics to the problem of textual entailment, there is a need to generalise lexical entailment to phrases and sentences. Thus, the ability to distinguish different semantic relations is crucial if approaches to the composition of distributional representations of meaning that are currently receiving considerable interest (Widdows, 2008; Mitchell and Lapata, 2008; Baroni and Zamparelli, 2010; Grefenstette et al., 2011; Socher et al., 2012; Weeds et al., 2014) are to be applied to the textual entailment problem. We formulate the challenge as follows: consider a set of pairs of similar words A, B where one of three relationships holds between A and B: A lexically entails B, B lexically entails A, or A and B are related by co-hyponymy. Given such a set, how can we determine which relationship holds? In Section 2, we discuss existing attempts to address this problem through the use of various directional measures of distributional similarity. This paper considers the effectiveness of various supervised approaches, and makes the following contributions. First, we show that an SVM can distinguish the entailment and co-hyponymy relations, achieving a significant reduction in error rate in comparison to existing state-of-the-art methods based on the notion of distributional generality. Second, by comparing two different data sets, one built from BLESS (Baroni and Lenci, 2011) and the other from WordNet (Fellbaum, 1989), we derive important insights into the requirements of a valid evaluation of supervised approaches, and provide a data set for further research in this area. Third, we show that when learning how to determine an ontological relationship between a pair of similar words by means of the words' distributional vectors, quite different vector operations are useful for identifying different ontological relationships. In particular, using the difference between the vectors of a pair of words is appropriate for the entailment task, whereas adding the vectors works well for the co-hyponym task.
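As a concrete illustration of the third contribution, the following sketch trains linear SVMs on pair features built from word vectors: the vector difference for the (asymmetric) entailment decision and the vector sum for the (symmetric) co-hyponymy decision. The vectors and labels are synthetic placeholders; the feature construction is the point of the example.

```python
# Sketch: pair classification from word-vector differences vs. sums (toy data).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
dim, n_pairs = 50, 200

vec_a = rng.normal(size=(n_pairs, dim))        # stand-ins for distributional vectors of word A
vec_b = rng.normal(size=(n_pairs, dim))        # stand-ins for distributional vectors of word B
labels = rng.integers(0, 2, size=n_pairs)      # e.g., 1 = "A entails B", 0 = other relation

diff_features = vec_a - vec_b                  # suited to the directional entailment task
sum_features = vec_a + vec_b                   # suited to the symmetric co-hyponymy task

clf_entail = LinearSVC().fit(diff_features, labels)
clf_cohypo = LinearSVC().fit(sum_features, labels)
print(clf_entail.score(diff_features, labels), clf_cohypo.score(sum_features, labels))
```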
Various tasks dealing with natural language data have to cope with the numerous different senses possessed by every lexical item: machine translation, information retrieval, information extraction, and so on. This very old issue is far from being solved, and the evaluation of methods addressing it is far from obvious (Resnik and Yarowsky, 2000). The problem has been tackled in a number of ways (a good introduction is Ide and Véronis (1998), or Manning and Schütze (1999), chap. 7): by looking at contexts of use (with supervised learning or unsupervised sense clustering) or by using lexical resources such as dictionaries or thesauri. The first kind of approach relies on data that are hard to collect (supervised) or very sensitive to the type of corpus (unsupervised). The second kind of approach tries to exploit the lexical knowledge represented in dictionaries or thesauri, with varying results from its inception up to now (Lesk, 1986). In all cases, a distance between words or word senses is used as a way to find the right sense in a given context. Dictionary-based approaches usually rely on a comparison of the set of words used in sense definitions with the words in the context to disambiguate.

This paper presents an algorithm which uses a dictionary as a network of lexical items (cf. Sections 2 and 3) to compute a semantic similarity measure between words and word senses. It takes into account the whole topology of the dictionary instead of just the entries of the target words, which arguably gives the results a certain robustness with respect to the dictionary. We have begun testing this approach on word sense disambiguation applied to the definitions of the dictionary itself (Section 5), but the method is expected to be more general, although this has not been tested yet. Preliminary results are quite encouraging, considering that the method does not require any prior annotated data while operating on an unconstrained vocabulary.
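The sketch below illustrates the general idea of treating a dictionary as a lexical network whose global topology is used to compute similarity. The toy dictionary, and the choice of personalized PageRank as the walk-based measure, are illustrative assumptions rather than the exact algorithm developed in this paper.

```python
# Sketch: a dictionary as a graph (entries -> words in their definitions),
# with a random-walk-based similarity over the whole graph.
import networkx as nx

toy_dictionary = {
    "cat": ["small", "domesticated", "feline", "animal"],
    "dog": ["domesticated", "canine", "animal"],
    "feline": ["cat", "family", "animal"],
    "canine": ["dog", "family", "animal"],
    "animal": ["living", "organism"],
}

g = nx.DiGraph([(entry, w) for entry, words in toy_dictionary.items() for w in words])

def similarity(word_a, word_b, alpha=0.85):
    # Personalized PageRank mass starting at word_a, read off at word_b:
    # one possible walk-based measure that uses the whole dictionary topology.
    scores = nx.pagerank(g, alpha=alpha, personalization={word_a: 1.0})
    return scores.get(word_b, 0.0)

print(similarity("cat", "dog"), similarity("cat", "organism"))
```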
Currently an important portion of research in natural language processing is devoted to the goal of reducing or eliminating the need for large labeled datasets. Recent examples include language model fine-tuning (Devlin et al., 2019), transfer learning (Zoph et al., 2016) and few-shot learning (Brown et al., 2020). Another common approach is weakly supervised learning. The idea is to make use of human intuitions or already acquired human knowledge to create weak labels. Examples of such sources are keyword lists, regular expressions, heuristics or independently existing curated data sources, e.g. a movie database if the task is concerned with TV shows. While the resulting labels are noisy, they provide a quick and easy way to create large labeled datasets. In the following, we use the term labeling functions, introduced in the data programming literature, to describe functions which create weak labels based on the notions above.

Throughout the weak supervision literature, generative modeling ideas are found (Takamatsu et al., 2012; Alfonseca et al., 2012). Probably the most popular example of a system using generative modeling in weak supervision is the data programming paradigm of Snorkel. It uses correlations within labeling functions to learn a graph capturing dependencies between labeling functions and true labels. However, such an approach does not directly model biases of weak supervision reflected in the feature space. In order to directly model the relevant aspects in the feature space of a weakly supervised dataset, we investigate the use of density estimation using normalizing flows. More specifically, in this work, we model probability distributions over the input space induced by labeling functions, and combine those distributions for better weakly supervised prediction.

We propose and examine four novel models for weakly supervised learning based on normalizing flows (WeaNF-*): firstly, we introduce a standard model, WeaNF-S, where each labeling function is represented by a multivariate normal distribution, and its iterative variant WeaNF-I. Furthermore, WeaNF-N additionally learns the negative space, i.e. a density for the space where the labeling function does not match, and a mixed model, WeaNF-M, represents correlations of sets of labeling functions with the normalizing flow. As a consequence, the classification task is a two-step procedure: the first step estimates the densities, and the second step aggregates them to model label prediction. Multiple alternatives are discussed and analyzed. We benchmark our approach on several commonly used weak supervision datasets. The results highlight that our proposed generative approach is competitive with standard weak supervision methods. Additionally, the results show that smart aggregation schemes prove beneficial. In summary, our contributions are i) the development of multiple models based on normalizing flows for weak supervision, combined with density aggregation schemes, ii) a quantitative and qualitative analysis highlighting opportunities and problems, and iii) an implementation of the method. To the best of our knowledge, we are the first to use normalizing flows to generatively model labeling functions.
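As a simplified illustration of the density-based view behind WeaNF-S, the sketch below fits one density per labeling function over the feature space (a multivariate Gaussian stands in for the normalizing flow) and aggregates the densities at prediction time. The data, the Gaussian stand-in and the max aggregation are assumptions made for the example.

```python
# Sketch: one density per labeling function, aggregated into a label prediction.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
dim = 10

# Toy feature vectors matched by two labeling functions (one per label).
lf_matches = {
    "lf_positive": rng.normal(loc=+1.0, size=(100, dim)),
    "lf_negative": rng.normal(loc=-1.0, size=(100, dim)),
}
lf_to_label = {"lf_positive": 1, "lf_negative": 0}

# Fit a density to the instances each labeling function fires on.
densities = {name: multivariate_normal(mean=x.mean(axis=0), cov=np.cov(x, rowvar=False))
             for name, x in lf_matches.items()}

def predict(x):
    # Aggregate densities per label (here: max over that label's labeling functions).
    label_scores = {}
    for name, dist in densities.items():
        label = lf_to_label[name]
        label_scores[label] = max(label_scores.get(label, 0.0), dist.pdf(x))
    return max(label_scores, key=label_scores.get)

print(predict(rng.normal(loc=+1.0, size=dim)))   # expected: 1
```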
Many authors have pointed out how the utterances we produce every day contain a large number of figurative expressions, which pose problems both for psychological models of language comprehension and for linguistic approaches in natural language processing (Gibbs, 1994; Martin, 1992). Over the last ten years, these two disciplines have made significant progress, particularly with respect to the processing of metaphor. More and more empirical data have come to support researchers' hypotheses about how we understand a metaphor (Cacciari and Glucksberg, 1994; Kintsch, 2000). At the same time, methods for automatically detecting and interpreting metaphors have been proposed and implemented (Fass, 1991; Ferrari, 1996; Ferrari et al., 2000; Martin, 1992). While these disciplines are far from ignoring each other, their very different aims have so far prevented any real convergence. Recently, however, Kintsch (2000) opened the way to such a convergence by proposing a "computational model" of the interpretation of metaphorical utterances. After describing the algorithm he proposes, we present a preliminary study that confronts this algorithm with a variety of metaphors of literary origin and tests the possibility of applying it to their detection.

A computational model of metaphor interpretation based on Latent Semantic Analysis. Kintsch (2000) proposes a computational model that builds on the interactive view of metaphor interpretation. According to this view, both the vehicle and the topic contribute to the meaning of the metaphor: the vehicle offers properties among which the topic selects those that are acceptable and adapts them according to its own semantic properties. For example, in the metaphor borrowed by Kintsch from Glucksberg, "My lawyer is a shark", lawyer (the topic) selects the features of shark (the vehicle) that can be attributed to it, for instance bloodthirsty or vicious. An important characteristic of this view is that it can be extended to any predication, the topic playing the role of the argument which the vehicle-predicate enriches with some of its properties (Glucksberg and McGlone, 1999; Kintsch, 2001). This is consistent with the thesis, currently favoured in psycholinguistics, that a metaphor is understood by the same mental processes as those applied to utterances whose literal meaning is relevant (Gibbs, 1994; Glucksberg et al., 1997; see Martin (1992) for another use of this same thesis).

To implement this view, it is necessary to be able to identify the semantic features that will contribute to the meaning of the metaphor and to propose an algorithm capable of carrying out the selection. The first component is provided by Latent Semantic Analysis (LSA). Originating in work on automatic document indexing, this technique builds a semantic space of very high dimensionality from the statistical analysis of co-occurrences in a text corpus. The meaning of each word is represented by a vector. To measure the semantic similarity between two words, the cosine between the vectors representing them is computed. The more semantically similar two words are, the more the two vectors representing them point in the same direction, and thus the closer their cosine is to 1.
A cosine of 0 indicates an absence of similarity, since the corresponding vectors are orthogonal. The algorithm used to determine the meaning of a predication aims to select, among the "features" of the predicate, those that are close to the argument. This is done by searching, among the n nearest neighbours of the predicate, for the k nearest neighbours of the argument. To guarantee that the selected terms are sufficiently related to both elements of the predication, a minimal proximity threshold is imposed on the selected terms. The meaning of the predication is then determined by taking the centroid of the predicate, the argument and the k terms that have just been selected, that is, by summing the corresponding vectors. The adequacy of the "meaning" assigned by this procedure to a predication, and hence also to a metaphor, is assessed by determining the proximity between this new vector and landmark terms considered close to the meaning of the metaphor (e.g., vicious, bloodthirsty for the metaphor my lawyer is a shark).

According to this model, the only factor that changes when analysing not a literal utterance but a metaphor is the parameter n, that is, the number of neighbours of the predicate among which the nearest neighbours of the argument are sought. For a literal utterance, the 20 nearest neighbours suffice, whereas for a metaphorical utterance one must go up to 200 or even more.
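A compact sketch of the predication algorithm just described is given below: among the n nearest neighbours of the predicate, the k closest to the argument (above a minimal threshold) are selected, and the meaning of the predication is the centroid (vector sum) of predicate, argument and selected terms. The random vectors stand in for an LSA space; function and variable names are illustrative.

```python
# Sketch: Kintsch-style predication over a toy vector space.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def predication(space, predicate, argument, n=20, k=5, threshold=0.0):
    # n nearest neighbours of the predicate in the semantic space
    neighbours = sorted((w for w in space if w not in (predicate, argument)),
                        key=lambda w: cosine(space[w], space[predicate]),
                        reverse=True)[:n]
    # among them, the k closest to the argument, above the threshold
    selected = sorted(neighbours,
                      key=lambda w: cosine(space[w], space[argument]),
                      reverse=True)[:k]
    selected = [w for w in selected if cosine(space[w], space[argument]) >= threshold]
    # meaning of the predication: centroid (sum) of predicate, argument and selected terms
    vectors = [space[predicate], space[argument]] + [space[w] for w in selected]
    return np.sum(vectors, axis=0), selected

rng = np.random.default_rng(0)
space = {w: rng.normal(size=300) for w in
         ["lawyer", "shark", "vicious", "bloodthirsty", "fish", "ocean", "court"]}
meaning, picked = predication(space, predicate="shark", argument="lawyer", n=5, k=2)
print(picked, cosine(meaning, space["vicious"]))
```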
Question-Answer (QA) ranking, or the task of accurately ranking the best answers to an input question, has been a long-standing research pursuit with practical applications in a variety of domains. Popular examples of such applications are customer support chat-bots, community question answering portals, and digital assistants like Siri or Alexa (Yih and Ma, 2016). Early work on QA ranking relied heavily on linguistic knowledge (such as parse trees), feature engineering or external resources (Wang and Manning, 2010; Wang et al., 2007; Yih et al., 2013). Yih et al. (2013) constructed semantic features from WordNet and paired semantically related words based on these features and relations. Wang and Manning (2010) and Wang et al. (2007) used syntactic matching between question and answer parse trees for answer selection. Other proposals used minimal edit sequences between dependency parse trees as a matching score between question and answer (Heilman and Smith, 2010; Severyn and Moschitti, 2013; Yao et al., 2013).

The majority of recent developments in QA ranking algorithms are based on deep learning techniques, and fall into two different classes of models: representation-based or interaction-based. In representation-based models, both question and answer are mapped to the same representation space via network layers with shared weights, and a final relevance or matching score is computed from these representations (Bowman et al., 2015; Tan et al., 2015; Huang et al., 2013; Tan et al., 2016; Wang et al., 2016). In interaction-based models, the network attempts to capture multiple levels of interaction (or similarity) between question and answer (Hu et al., 2014; Pang et al., 2016; Yu et al., 2018); the final relevance/matching score can then be computed from the partial similarities derived from the multiple interactions. Recent results have indicated that representation-based models, when used with attention layers to focus on relevant parts of the question and answer, tend to outperform interaction-based models (Tan et al., 2016; Wang et al., 2016). The recently proposed Multihop Attention Network (MAN) model (Tran and Niederee, 2018) currently achieves state-of-the-art performance on ranking tasks by using sequential attention (Brarda et al., 2017) over multiple attention layers. This model is discussed in detail in Section 2.2.

Adversarial training and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have been successfully applied in Computer Vision (Karras et al., 2017; Kelkar et al., 2018) and Natural Language Processing (Lin et al., 2017), but have been only sparsely studied for Information Retrieval tasks. As described in prior work, adversarial training in Information Retrieval can be approached by having a generator model sample difficult adversarial examples, which are passed to a discriminator model that learns to rank on increasingly difficult adversarial examples. This adversarial training process can in principle lead to increased robustness and accuracy of the final ranking model. We show that in general most models do benefit from adversarial training, with a clear increase in ranking metrics. However, we also observed that not all types of models benefit from straightforward adversarial training. For instance, the Multihop Attention Network often displayed worse results with adversarial training.
In such cases, we observed that the model was excessively adapting to the current adversarial training batch and often forgetting previous batches, thus reducing its performance on test data. To help address this issue, we propose a novel committee representation for adversarial modeling in QA ranking that can be applied to any underlying ranking algorithm. Not only does it address the "overfitting" that may occur during adversarial training, but it also provides an improvement over all baseline QA ranking models we tested. In particular, we introduce AdvCom-MAN (Adversarial Committee Multihop Attention Network), a model for QA ranking that displays, to the best of our knowledge, state-of-the-art results on four different datasets for QA ranking.
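The sketch below illustrates the adversarial sampling loop described above in its simplest form: a generator scores the wrong candidates for a question and samples the hardest ones as training negatives for the ranking discriminator. The scoring function is a placeholder; in practice both roles are neural rankers.

```python
# Sketch: sampling hard negatives from a generator's scores (toy placeholder scorer).
import numpy as np

rng = np.random.default_rng(0)

def generator_scores(question, candidates):
    # Placeholder relevance scores; a real generator would be a trained ranker.
    return rng.random(len(candidates))

def sample_hard_negatives(question, negative_candidates, n_samples=2, temperature=1.0):
    scores = generator_scores(question, negative_candidates)
    probs = np.exp(scores / temperature)
    probs /= probs.sum()
    idx = rng.choice(len(negative_candidates), size=n_samples, replace=False, p=probs)
    return [negative_candidates[i] for i in idx]

negatives = ["answer about weather", "unrelated sports trivia", "plausible but wrong fact"]
print(sample_hard_negatives("What is the capital of France?", negatives))
```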
Word embeddings are an essential component in systems for many natural language processing tasks such as part-of-speech tagging (Al-Rfou' et al., 2013), dependency parsing (Chen and Manning, 2014) and named entity recognition (Pennington et al., 2014). Cross-lingual word representations provide a shared space for word embeddings of two languages, and make it possible to transfer information between languages (Ruder et al., 2019). A common approach to learning cross-lingual embeddings is to learn a matrix that maps the embeddings of one language to another using supervised (e.g., Mikolov et al., 2013b), semi-supervised (Artetxe et al., 2017), or unsupervised (e.g., Lample et al., 2018) methods. These methods rely on the assumption that the geometric arrangement of embeddings in different languages is the same. However, it has been shown that this assumption does not always hold, and that methods which instead jointly train embeddings for two languages produce embeddings that are more isomorphic and achieve stronger results for bilingual lexicon induction (BLI, Ormazabal et al., 2019), a well-known intrinsic evaluation for cross-lingual word representations (Ruder et al., 2019; Anastasopoulos and Neubig, 2020). The approach of Ormazabal et al. uses a parallel corpus as a cross-lingual signal. Parallel corpora are, however, unavailable for many language pairs, particularly low-resource languages. Duong et al. (2016) introduce a joint training approach that extends CBOW (Mikolov et al., 2013a) to learn cross-lingual word embeddings from modest-size monolingual corpora, using a bilingual dictionary as the cross-lingual signal. Bilingual dictionaries are available for many language pairs, e.g., Panlex (Baldwin et al., 2010) provides translations for roughly 5700 languages. These training resource requirements suggest this method could be well-suited to lower-resource languages. However, this word-level approach is unable to form representations for out-of-vocabulary (OOV) words, which could be particularly common in the case of low-resource, and morphologically-rich, languages.

Hakimi Parizi and Cook (2020b) propose an extension of Duong et al. (2016) that incorporates subword information during training and can therefore generate representations for OOVs in the shared cross-lingual space. This method also does not require parallel corpora for training, and could therefore be particularly well-suited to lower-resource, and morphologically-rich, languages. However, Hakimi Parizi and Cook only evaluate on synthetic low-resource languages. We refer to the methods of Duong et al. and Hakimi Parizi and Cook as DUONG2016 and HAKIMI2020, respectively. Most prior work on BLI focuses on in-vocabulary (IV) words and well-resourced languages (e.g., Artetxe et al., 2017; Ormazabal et al., 2019; Zhang et al., 2020), although there has been some work on OOVs (Hakimi Parizi and Cook, 2020a) and low-resource languages (Anastasopoulos and Neubig, 2020). In this paper, we evaluate HAKIMI2020 on BLI for twelve lower-resource languages, and also consider an evaluation focused on OOVs. Our results indicate that HAKIMI2020 gives improvements over DUONG2016 and several strong baselines, particularly for OOVs.

Following Bojanowski et al. (2017), HAKIMI2020 modifies the DUONG2016 objective by including sub-word information during the joint training process as follows:

O = Σ_{i ∈ D_s ∪ D_t} [ α log S(w_i, h_i) + (1 − α) log S(w̄_i, h_i) + Σ_{j=1}^{p} E_{w_j ∼ P_n(w)} log(−S(w_j, h_i)) ]    (2)

S(w, h) = (1 / |G_w|) Σ_{g ∈ G_w} z_g^T h    (3)

where w̄_i is the dictionary translation of w_i, G_w is the set of sub-words appearing in w, and z_g is the sub-word embedding for g. h is calculated by averaging the representations of each word appearing in the context, where each word is itself represented by the average of its sub-word embeddings. HAKIMI2020 uses character n-grams as sub-words. Specifically, each word is augmented with special beginning- and end-of-word markers, and then represented as a bag of character n-grams, using n-grams of length 3-6 characters. The entire word itself (with beginning- and end-of-word markers) is also included among the sub-words.
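A minimal sketch of the sub-word scoring in Equation (3) follows: a word is scored against a context vector h by averaging the embeddings of its character n-grams (lengths 3-6, plus the whole word with boundary markers). The embeddings here are random toy vectors; in the actual model they are learned jointly from the two monolingual corpora and the bilingual dictionary.

```python
# Sketch: character n-gram decomposition and the sub-word scoring of Equation (3).
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    w = f"<{word}>"                       # beginning/end-of-word markers
    grams = {w[i:i + n] for n in range(n_min, n_max + 1)
             for i in range(len(w) - n + 1)}
    grams.add(w)                          # the whole word is also a sub-word
    return sorted(grams)

class SubwordScorer:
    def __init__(self, dim=50, seed=0):
        self.dim = dim
        self.rng = np.random.default_rng(seed)
        self.z = {}                       # sub-word embedding table z_g

    def embed(self, gram):
        if gram not in self.z:
            self.z[gram] = self.rng.normal(scale=0.1, size=self.dim)
        return self.z[gram]

    def word_vector(self, word):
        return np.mean([self.embed(g) for g in char_ngrams(word)], axis=0)

    def score(self, word, context_words):
        # h: average of the context words' (sub-word-averaged) vectors
        h = np.mean([self.word_vector(c) for c in context_words], axis=0)
        return float(self.word_vector(word) @ h)   # Equation (3)

scorer = SubwordScorer()
print(scorer.score("embedding", ["word", "vector", "space"]))
```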
Neural machine translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014), which directly leverages a single neural network to transform the source sentence into the target sentence, has drawn more and more attention in both academia and industry (Shen et al., 2015; Johnson et al., 2016; Gehring et al., 2017; Vaswani et al., 2017). This end-to-end NMT typically consists of two sub-networks: the encoder network reads and encodes the source sentence into a context vector representation, and the decoder network generates the target sentence word by word based on the context vector. To dynamically generate a context vector for the target word being generated, an attention mechanism, which enables the model to focus on the relevant words in the source-side sentence, is usually deployed. Under the encoder-decoder framework, many variants of the model structure, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have been proposed (Gehring et al., 2017). Recently, Vaswani et al. (2017) proposed the Transformer, the first sequence transduction model based entirely on attention, achieving state-of-the-art performance on the English-German and English-French translation tasks.

Despite its success, the Transformer, similar to traditional NMT models, is still optimized to maximize the likelihood of the ground-truth word (MLE) at each time step. Such an objective poses a hidden danger to NMT models: the model may generate the best candidate word for the current time step yet a bad component of the whole sentence in the long run. Minimum risk training (MRT) (Shen et al., 2015) has been proposed to alleviate this limitation by adopting a sequence-level objective, i.e., the sentence-level BLEU, for traditional NMT models. While this yields some improvement, the objective still does not guarantee that the translation results are natural and sufficient. Since the BLEU score is computed as the geometric mean of the modified n-gram precisions (Papineni et al., 2002), almost all of the existing objectives essentially train NMT models to generate sentences with n-gram precisions as high as possible (MLE can be viewed as generating sentences with high 1-gram precisions). While n-gram precisions largely tell good sentences apart from bad ones, it is widely acknowledged that higher n-gram precisions do not guarantee better sentences (Callison-Burch and Osborne, 2006; Chatterjee et al., 2007). Additionally, the manually defined objective, i.e., the n-gram precision, is unable to cover all crucial aspects of the data distribution, and NMT models may be trained to generate suboptimal sentences (Luc et al., 2016).

In this paper, to address the limitation mentioned above, we borrow the idea of generative adversarial training from computer vision (Goodfellow et al., 2014; Denton et al., 2015) to directly train the NMT model to generate sentences which are hard to discriminate from human translations. The motivation is that while we cannot manually define the data distribution of golden sentences comprehensively, we are able to utilize a discriminative network to learn automatically what golden sentences look like.
Following this motivation, we build a conditional sequence generative adversarial net in which we jointly train two adversarial sub-models: a generator generates the target-language sentence based on the input source-language sentence, and a discriminator, conditioned on the source-language sentence, predicts the probability of the target-language sentence being a human-generated one. During the training process, the generator aims to fool the discriminator into believing that its output is a human-generated sentence, and the discriminator makes efforts not to be fooled by improving its ability to distinguish the machine-generated sentence from the human-generated one. This kind of adversarial training achieves a win-win situation when the generator and discriminator reach a Nash equilibrium (Zhao et al., 2016; Arora et al., 2017; Guimaraes et al., 2017). Besides generating the desired distribution, we also want to directly guide the generator with a static and specific objective, such as generating sentences with high BLEU scores. To this end, the smoothed sentence-level BLEU (Nakov et al., 2012) is utilized as a reinforced objective for the generator. During training, we employ both the dynamic discriminator and the static BLEU objective to evaluate the generated sentences and feed the evaluations back to guide the learning of the generator. In summary, we mainly make the following contributions:

• To the best of our knowledge, this work is among the first endeavors to introduce generative adversarial training into NMT. We directly train the NMT model to generate sentences which are hard to discriminate from human translations. The proposed model can be applied to any end-to-end NMT system.

• We conduct extensive experiments on English-German and Chinese-English translation tasks, and we test two different NMT models, the traditional RNNSearch and the state-of-the-art Transformer. Experimental results show that the proposed approach consistently achieves great success.

• Last but not least, we propose the smoothed sentence-level BLEU as the static and specific objective for the generator, which biases the generation towards achieving high BLEU points; a small sketch of this objective is given below. We show that the proposed approach is a weighted combination of the naive GAN and MRT.
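The sketch computes a smoothed sentence-level BLEU score per sentence and mixes it with the discriminator probability, mirroring the weighted-combination view stated above. The particular smoothing method and the linear mixing rule are assumptions made for the example.

```python
# Sketch: smoothed sentence-level BLEU as a per-sentence reward, mixed with a
# discriminator probability (toy example).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def sentence_reward(reference_tokens, hypothesis_tokens):
    smooth = SmoothingFunction().method1
    return sentence_bleu([reference_tokens], hypothesis_tokens,
                         smoothing_function=smooth)

def combined_reward(disc_prob, bleu, lam=0.5):
    # Weighted mix of the dynamic (discriminator) and static (BLEU) signals.
    return lam * disc_prob + (1 - lam) * bleu

ref = "the cat sat on the mat".split()
hyp = "the cat is on the mat".split()
print(combined_reward(disc_prob=0.7, bleu=sentence_reward(ref, hyp)))
```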
There is today a clear consensus not only that annotated corpora are indispensable for natural language processing (NLP) tools, both for training and for evaluation, but also that the annotation must be consistent to be useful (see, for example, Reidsma and Carletta (2008)). Obtaining high-quality manual annotation requires the use of an annotation guide that is sufficiently complete and coherent (Nédellec et al., 2006). Developing such a guide is, however, as Sampson (2000) and Scott et al. (2012) point out, far from trivial. Moreover, once an annotation campaign is finished, it is rare for the annotation guide and the annotated corpus to be completely consistent with each other, which is problematic for the systems or the linguists using the corpus (see for example Candito and Seddah (2012) concerning the French Treebank).

One solution to both difficulties is to develop the guide and annotate the corpus in short prototyping cycles. This methodology is called Agile Annotation (Voormann and Gut, 2008), by analogy with Agile Development (see Figure 1). To our knowledge, it has been applied in only one real annotation setting (Alex et al., 2010).

Figure 1: Phases of traditional annotation (left) and cycles of agile annotation (right). Reproduced from Figure 2 of (Voormann and Gut, 2008).

Independently of the notion of agile annotation, we had previously used graph rewriting to search for recurrent errors in the Sequoia corpus (Candito and Seddah, 2012). This direct application of rewriting to error detection identified about a hundred annotation errors and led to the publication of a new version (3.3) of the corpus in July 2012. We present here the experiments we have carried out more recently on the correction of syntactic annotations, for which we turned the instructions of an existing annotation guide into rewrite rules applied to the annotated corpus. These experiments showed the value of such a formalisation, and we therefore propose its integration into the manual annotation process, which would lead to an assisted form of agile annotation.
The increased interest in collocation extraction comes from the fact that collocations can be used for many NLP applications such as machine translation, machine aids for translation, dictionary construction, and second language learning, to name a few. Recently, large-scale textual corpora have made it possible to work with real data, either for grammar inference or for enriching the lexicon. These corpus-based approaches have also been used for the extraction of collocations.

In this paper we are concerned with nested collocations: collocations that are substrings of other, longer ones. Regarding this type of collocation, the approaches so far can be divided into two groups: those that do not treat substrings of collocations as a particular problem (Church and Hanks, 1990; Kim and Cho, 1993; Nagao and Mori, 1994), and those that do (Kita et al., 1994; Smadja, 1993; Ikehara et al., 1995; Kjellmer, 1994). However, even the latter deal with only part of the problem: they try not to extract the unwanted substrings of collocations, and in doing so they leave a large number of nested collocations unextracted.

In Section 2 collocations are briefly discussed and the problem is defined. In Section 3 our approach to the problem, the algorithm and an example are given. In Section 4 the experiments are discussed and the method is compared with that proposed by Kita et al. (1994). In Section 5 there are comments on related work, and finally Section 6 contains the conclusions and future work.

Collocations are pervasive in language: "letters" are "delivered", "tea" is "strong" and not "powerful", we "run programs", and so on. Linguists have long been interested in collocations and the definitions are numerous and varied. Some researchers include multi-element compounds as examples of collocations; some admit only collocations consisting of pairs of words, while others admit collocations consisting of a maximum of five or six words; some emphasize syntagmatic aspects, others semantic aspects. The common points regarding collocations appear to be, as Smadja (1993) suggests: they are arbitrary (it is not clear why "to fall through" means "to fail"), they are domain-dependent ("interest rate", "stock market"), and they are recurrent and cohesive lexical clusters: the presence of one of the collocates strongly suggests the rest of the collocation ("United" could imply "States" or "Kingdom"). Smadja classifies collocations into predicative relations, rigid noun phrases and phrasal templates.

It is not the goal of this paper to provide yet another definition of collocation. We adopt as a working definition the one by Sinclair (1991): collocation is the occurrence of two or more words within a short space of each other in a text. Let us recall that collocations are domain-dependent. Sublanguages have remarkably high incidences of collocation (Ananiadou and McNaught, 1995). Frawley (1988) neatly sums up the nature of sublanguage, showing the key contribution of collocation: sublanguage is strongly lexically based; sublanguage texts focus on content; lexical selection is syntactified in sublanguages; collocation plays a major role in sublanguage; and sublanguages demonstrate elaborate lexical cohesion. The particular structures found in sublanguage texts reflect very closely the structuring of a sublanguage's associated conceptual domain.
It is the particular syntactified combinations of words that reveal this structure. Since we work with sublanguages, we can use "small" corpora, as opposed to when working with a general language corpus. In the Brown Corpus, for example, which consists of one million words, there are only 2 occurrences of "reading material", 2 of "cups of coffee", 5 of "for good" and 7 of "as always" (Kjellmer, 1994).

We extract uninterrupted and interrupted collocations. The interrupted ones are phrasal templates only, not predicative relations. We focus on the problem of extracting the collocations we call nested collocations: collocations that are at the same time substrings of other, longer collocations. To make this clear, consider the following strings: "New York Stock Exchange", "York Stock", "New York" and "Stock Exchange". Assume that the first string, being a collocation, is extracted by some method able to extract collocations of length two or more. Are the other three extracted as well? "New York" and "Stock Exchange" should be extracted, while "York Stock" should not. Though the examples here are from domain-specific lexical collocations, grammatical ones can be nested as well: "put down as", "put down for", "put down to" and "put down".

Smadja (1993), Kita et al. (1994) and Ikehara et al. (1995) mention substrings of collocations. Smadja's Xtract produces only the biggest possible n-grams. Ikehara et al. exclude the substrings of the retrieved collocations. A more precise approach to the problem is provided by Kita et al. (1994). They extract a substring of a collocation if it appears a significant number of times by itself. The following example illustrates the problem and their approach: consider the strings a = "in spite" and b = "in spite of", with n(a) and n(b) their numbers of occurrences in the corpus respectively. It will always be n(a) > n(b), so whenever b is identified as a collocation, a is too. However, a should not be extracted as a collocation. So, they modify the measure of frequency of occurrence to become

K(a) = (|a| − 1)(n(a) − n(b))    (1)

where a is a word sequence, |a| is the length of a, n(a) is the number of occurrences of a in the corpus, b is every word sequence that contains a, and n(b) is the number of occurrences of b. As a result, they do not extract the substrings of longer collocations unless these appear a significant number of times by themselves in the corpus.

The problem is not solved. Table 2 gives the n-grams containing "Wall Street" extracted with the Cost-Criteria measure. The corpus consists of 40,000 words of market reports. Only those n-grams of frequency 3 or more are considered. It can be seen that "Wall Street" is not extracted as a collocation, though it has a frequency of occurrence of 38. We call the extracted strings candidate collocations rather than collocations, since what we accept as collocations depends on the application; it is the human judge that will give the final decision. This is the reason we consider the method semi-automatic. Let us consider the string "New York Stock Exchange". Within this string, which has already been extracted as a candidate collocation, there are two substrings that should be extracted, and one that should not. The issue is how to distinguish when a substring of a candidate collocation is itself a candidate collocation, and when it is not. Kita et al. assume that the substring is a candidate collocation if it appears by itself (with a relatively high frequency).
To this we add that the substring appears in more than one candidate collocation, even if it does not appear by itself. "Wall Street", for example, appears 30 times in 6 longer candidate collocations, and 8 times by itself. If we considered only the number of times it appears by itself, it would get a low value as a candidate collocation. We have to consider the number of times it appears within longer candidate collocations. A second factor is the number of these longer collocations: the greater this number is, the better the string is distributed, and the greater its value as a candidate collocation. We make the above conditions more specific and give the measure for a string being a candidate collocation. The measure is called C-value and the factors involved are the string's frequency of occurrence in the corpus, its frequency of occurrence in longer candidate collocations, the number of these longer candidate collocations, and its length. Regarding its length, we consider longer collocations to be "more important" than shorter ones appearing with the same frequency. More specifically, if |a| is the length of the string a (we use the same notation as Kita et al. (1994)), its C-value is proportional to |a| − 1. The 1 is subtracted since the shortest collocations are of length 2, and we want them to be "of importance" 2 − 1 = 1. More specifically:

1. If a has the same frequency as a longer candidate collocation that contains it, it is assigned C-value(a) = 0, i.e. it is not a collocation. It is straightforward that in this case a appears in only one longer candidate collocation.

2. If n(a) is the number of times a appears, and a is not a substring of an already extracted candidate collocation, then a is assigned

C-value(a) = (|a| − 1) n(a)    (2)

3. If a appears as a substring in one or more candidate collocations (not with the same frequency), then it is assigned

C-value(a) = (|a| − 1) (n(a) − t(a)/c(a))    (3)

where t(a) is the total frequency of a in longer candidate collocations and c(a) is the number of those candidate collocations. This is the most complicated case. The importance of the number of longer candidate collocations a string appears in is reflected in the denominator of the fraction in Equation 3: the bigger the number of strings a substring appears in, the smaller the fraction. In the initial stage, n(a) is set to the frequency of a, and t(a) and c(a) are set to 0.

Let us calculate the C-value for the string "Wall Street". Table 2 shows all the strings that appear more than twice and that contain "Wall Street". 1. For each substring contained in the 7-gram, the number 19 (the frequency of the 7-gram) is kept as its (so far) frequency of occurrence in longer strings. For each of them, the fact that it has already been found in a longer string is kept as well. Therefore, t("Wall Street") = 19 and c("Wall Street") = 1. 2. We continue with the two 6-grams. Both of them, "Reporter of The Wall Street Journal" and "Staff Reporter of The Wall Street", get C-value = 0 since they appear with the same frequency as the 7-gram that contains them. Therefore, they do not form candidate collocations and they do not change the t("Wall Street") and c("Wall Street") values. 3. For the 5-grams, there is one appearing with a frequency bigger than that of the 7-gram it is contained in, "of The Wall Street Journal". This gets its C-value from Equation 3.
Its substrings increase their frequency of occurrence (as substrings) by 20 − 19 = 1 (20 is the frequency of the 5-gram and 19 the frequency with which it appeared in longer candidate collocations), and their number of occurrences as a substring by 1. Therefore, t("Wall Street") = 19 + 1 = 20 and c("Wall Street") = 1 + 1 = 2. The other 5-gram is not a candidate collocation (it gets C-value = 0). 4. For the 4-grams, "The Wall Street Journal" occurs in two longer n-grams and therefore gets its C-value from Equation 3. From this string, t("Wall Street") = 20 + 2 = 22 and c("Wall Street") = 2 + 1 = 3. "of The Wall Street" is not accepted as a candidate collocation since it appears with the same frequency as "of The Wall Street Journal". 5. "Wall Street analysts" appears for the first time, so it gets its C-value from Equation 2. "Wall Street Journal" and "The Wall Street", appearing in longer extracted n-grams, get their values from Equation 3. They make t("Wall Street") = 22 + 3 + 4 + 1 = 30 and c("Wall Street") = 3 + 1 + 1 + 1 = 6. 6. Finally, we evaluate the C-value for "Wall Street" from Equation 3. We find C-value("Wall Street") = 33.

The corpus used for the experiments is quite small (40,000 words) and consists of material from the Wall Street Journal newswire. For these experiments we used n-grams of maximum length 10; longer n-grams appear only once (because of the size of the corpus). The maximum length of the n-grams to be extracted is variable and depends on the size of the corpus and the application. From the extracted n-grams, those with a frequency of 3 or more were kept (other approaches discard n-grams of such low frequencies (Smadja, 1993)). These n-grams were forwarded to the implementation of our algorithm as well as to our implementation of the algorithm by Kita et al. (1994). The Cost-Criteria algorithm needs a second threshold (besides the one on the frequency of the n-grams): for every n-gram a, K(a) is evaluated, and only those n-grams with this value greater than the preset threshold take part in the rest of the algorithm. We set this threshold to 3, again for the same reason as above (the gain we would get in precision if we had set a higher threshold would be lost in recall).

Table 3 shows the candidate collocations with the highest values extracted with C-value. A lot of the extracted candidate collocations may seem unimportant; this is because the algorithm extracts the word sequences that are frequent. Which of these candidate collocations we should keep depends on the application. Brill's part-of-speech tagger (Brill, 1992) was used to remove the n-grams that had an article as their last word.

Table 3: Extracted candidate collocations with C-value, in descending order.

Among the extracted n-grams we can see the domain-specific candidate collocations, such as "Staff Reporter of the Wall Street", "National Bank", etc., and those that appear within other collocations rather than by themselves, such as "Wall Street Journal", "Wall Street", etc. There are, however, problems: 1. We did not calculate the precision or recall of the C-value algorithm. These calculations depend on the definition of collocation and are domain-dependent (Kjellmer (1994) mentions 19 categories of collocation). Related work has also addressed the extraction of compound words.
The authors of that work extend the measure to three words in a different way than that defined by Fano (1961), and no mention is given of how their formulas would be extended to word sequences of length more than three. They do not consider nested collocations. Smadja (1993) extracts uninterrupted as well as interrupted collocations (predicative relations, rigid noun phrases and phrasal templates). The system performs very well under two conditions: the corpus must be large, and the collocations we are interested in extracting must have high frequencies. Nagao and Mori (1994) extract collocations using the following rule: longer collocations and frequent collocations are more important. An improvement on this algorithm is that of Ikehara et al. (1995). They proposed an algorithm for the extraction of uninterrupted as well as interrupted collocations from Japanese corpora. The extraction involves the following conditions: longer collocations have priority, more frequent collocations have priority, and substrings are extracted only if found in other places by themselves. Finally, the Dictionary of English Collocations (Kjellmer, 1994) includes n-grams appearing even only once. For each of them, its exclusive frequency (the number of occurrences of the n-gram by itself), its inclusive frequency (the number of times it appeared in total) and its relative frequency (the ratio of its actual frequency to its expected frequency) are given.
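A minimal sketch of the C-value computation for nested candidate collocations, following Equations (2) and (3) and the bookkeeping of t(a) and c(a) illustrated in the walkthrough, is given below. The n-gram counts are invented toy numbers, and the helper names are not the authors' own.

```python
# Sketch: C-value for nested candidate collocations (toy counts).
from collections import defaultdict

def substrings(ngram):
    """All contiguous proper sub-n-grams of length >= 2."""
    n = len(ngram)
    return [ngram[i:i + m] for m in range(2, n) for i in range(n - m + 1)]

def c_values(freq):
    """freq: dict mapping an n-gram (tuple of words) to its corpus frequency.
    Returns {ngram: C-value}, processing n-grams from longest to shortest."""
    t = defaultdict(int)      # t(a): frequency of a inside longer candidates
    c = defaultdict(int)      # c(a): number of longer candidates containing a
    scores = {}
    for a in sorted(freq, key=len, reverse=True):
        n_a = freq[a]
        if c[a] == 0:                              # not nested so far: Equation (2)
            score = (len(a) - 1) * n_a
        elif c[a] == 1 and t[a] == n_a:            # same frequency as its single parent
            score = 0.0
        else:                                      # nested: Equation (3)
            score = (len(a) - 1) * (n_a - t[a] / c[a])
        scores[a] = score
        if score > 0:                              # only candidates update their substrings
            for b in substrings(a):
                t[b] += n_a - t[a]
                c[b] += 1
    return scores

# Toy illustration with invented counts (not the paper's Table 2):
freq = {("new", "york", "stock", "exchange"): 5,
        ("new", "york"): 12,
        ("stock", "exchange"): 9,
        ("york", "stock"): 5}
for ng, s in sorted(c_values(freq).items(), key=lambda x: -x[1]):
    print(" ".join(ng), round(s, 2))   # "york stock" correctly gets C-value 0
```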
Response selection remains at the core of conversation modeling, with the objective of selecting an appropriate response utterance from a set of candidate utterances for a given conversation history consisting of previous utterances (the context). Decades of research on this task include traditional methods (Kitano, 1991; Ritter et al., 2011) and recent deep-learning-based methods (Ji et al., 2014; Chaudhuri et al., 2018; Xu et al., 2018; Chen et al., 2017; Song et al., 2018; Wen et al., 2016). Underlying these methods, a fundamental need is to capture the semantics of the context and use it for selecting the appropriate response. While the context provides essential clues as to what could be a follow-up response, research (Kumar et al., 2018) has further shown that additional information available in the form of dialogue acts can also be helpful for response selection. Such information, when used along with the context, improves the performance of the response selection task. However, the above method assumes that dialogue acts are available at the time of response selection, which is rarely the case (dialogue acts are usually not available for new conversations in a live setting), making such approaches impractical for practitioners. In this paper, we propose a novel model that bridges this gap between theory and practice. In other words, our proposed model leverages dialogue acts for response selection while remaining practical.

In the literature, researchers (Kumar et al., 2018; Xu et al., 2018; Zhao et al., 2017) have proposed deep learning models that use actual dialogue acts in conversation modeling. While actual dialogue acts help in response selection, a natural question is: can we build a system that eliminates the dependency on actual dialogue acts at the time of response selection, and instead predicts them as an integral part of the model? A second, and more important, question is: is such a system going to be helpful in response selection, given that the dialogue act predictions will contain some error, i.e., the underlying prediction model will not be 100% accurate? And, if the answer to the second question is positive, what is the gap, in terms of performance, between a system that uses predicted dialogue acts and a system that uses actual dialogue acts? In this paper, we answer all of the above questions: our proposed model is a multi-task model that has dialogue act prediction as an integral part of it, i.e., it does not need the actual dialogue acts to select an appropriate response; rather, it predicts the dialogue acts and uses them for response selection.
Furthermore, our model is by design robust to errors in dialogue act prediction; our novel way of combining the dialogue acts of context and response is able to compensate for errors in dialogue act predictions, and performs on par with the model that uses actual dialogue acts. The main contributions of this paper are as follows:

• We model the task of response selection as a multi-task learning problem, with the objective of performing two tasks in a single end-to-end model: first, learn to predict the dialogue acts of utterances (context and response), and second, use the previous utterances (context) and the predicted dialogue acts of both the context and the response to select a response from a given set of candidate responses.

• While modeling response selection conditioned on the dialogue acts of the context helps (Zhao et al., 2017), an important contribution is the additional utility of the dialogue act of the response. Our simple yet novel way of combining the dialogue act representations of the context and response with the utterance representations of the context and response promotes cross similarities, and thereby brings ensemble characteristics into the model (a small sketch of this combination is given after this list). The ensemble model outperforms all other non-ensemble models, and is robust to errors made by any underlying component of the ensemble.

• We evaluate the proposed model on two dialogue datasets, DailyDialog (Li et al., 2017) and the Switchboard Dialogue Act Corpus (SwDA; Jurafsky, 1997), and show that having dialogue act prediction as an integral part of the model improves the performance of response selection consistently across both datasets. An important observation is the significant performance boost obtained from the proposed Crossway (ensemble) model: it not only improves the MRR for the response selection task but also improves the accuracy of the dialogue act prediction task.
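The sketch below illustrates one way such a crossway combination can be scored: cross similarities between the utterance and dialogue-act representations of the context and of a candidate response are averaged into a single ranking score. The exact combination used in the model may differ; the averaging here is an illustrative choice over toy vectors.

```python
# Sketch: combining utterance and dialogue-act representations via cross similarities.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def crossway_score(ctx_utt, ctx_da, resp_utt, resp_da):
    sims = [
        cosine(ctx_utt, resp_utt),   # utterance vs. utterance
        cosine(ctx_da, resp_da),     # dialogue act vs. dialogue act
        cosine(ctx_utt, resp_da),    # cross: context utterance vs. response DA
        cosine(ctx_da, resp_utt),    # cross: context DA vs. response utterance
    ]
    return float(np.mean(sims))      # ensemble of the four similarity "views"

rng = np.random.default_rng(0)
ctx_u, ctx_d = rng.normal(size=128), rng.normal(size=128)
candidates = [(rng.normal(size=128), rng.normal(size=128)) for _ in range(5)]
best = max(range(5), key=lambda i: crossway_score(ctx_u, ctx_d, *candidates[i]))
print("selected candidate:", best)
```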
Traditional recommendation systems factorize users' historical data (i.e., ratings on movies) to extract common preference patterns (Koren et al., 2009; He et al., 2017b). However, besides making it difficult to accommodate new users because of the cold-start problem, relying on aggregated history makes these systems static, and prevents users from making specific requests or exploring a temporary interest. For example, a user who usually likes horror movies, but is in the mood for a fantasy movie, has no way to indicate their preference to the system, and would likely get a recommendation that is not useful. Further, they cannot iterate upon initial recommendations with clarifications or modified requests, all of which are best specified in natural language.

Recommending through dialogue interactions (Reschke et al., 2013; Wärnestål, 2005) offers a promising solution to these problems, and recent work by Li et al. (2018) explores this approach in detail. However, the dataset introduced in that work does not capture higher-level strategic behaviors that can impact the quality of the recommendation made (for example, it may be better to elicit user preferences first, before making a recommendation). This makes it difficult for models trained on this data to learn optimal recommendation strategies. Additionally, the recommendations are not grounded in real observed movie preferences, which may make trained models less consistent with actual users. This paper aims to provide goal-driven recommendation dialogues grounded in real-world data. We collect a corpus of goal-driven dialogues grounded in real user movie preferences through a carefully designed gamified setup (see Figure 1) and show that models trained with that corpus can learn a successful recommendation dialogue strategy. The training is conducted in two stages: first, a supervised phase that trains the model to mimic human behavior on the task; second, a bot-play phase that improves the goal-directed strategy of the model. The contribution of this work is thus twofold.

Figure 1: Recommendation as a dialogue game. We collect 81,260 recommendation utterances between pairs of human players (experts and seekers) with a collaborative goal: the expert must recommend the correct (blue) movie, avoiding incorrect (red) ones, and the seeker must accept it. A chatbot is then trained to play the expert in the game.

(1) We provide the first (to the best of our knowledge) large-scale goal-driven recommendation dialogue dataset with specific goals and reward signals, grounded in a real-world knowledge base. (2) We propose a two-stage recommendation strategy learning framework and empirically validate that it leads to better recommendation conversation strategies.

1 https://github.com/facebookresearch/ParlAI
With the rapid development of deep neural networks and parallel computing, distributed representation of knowledge has attracted much research interest. Models for learning distributed representations of knowledge have been proposed at different granularity levels, including the word sense level (Huang et al., 2012; Neelakantan et al., 2014; Tian et al., 2014; Guo et al., 2014), word level (Rummelhart, 1986; Bengio et al., 2003; Collobert and Weston, 2008; Mnih and Hinton, 2009; Mikolov et al., 2010; Mikolov et al., 2013), phrase level (Socher et al., 2010; Cho et al., 2014), sentence level (Mikolov et al., 2010; Socher et al., 2013; Kalchbrenner et al., 2014; Kim, 2014; Le and Mikolov, 2014), discourse level (Ji and Eisenstein, 2014) and document level (Le and Mikolov, 2014).

In distributed representations of word senses, each word sense is usually represented by a dense, real-valued vector in a low-dimensional space which captures contextual semantic information. Most existing approaches adopt a cluster-based paradigm, which produces different sense vectors for each polysemous or homonymous word by clustering the contexts of a target word. However, this paradigm usually has two limitations: (1) The performance of these approaches is sensitive to the clustering algorithm, which requires setting the number of senses for each word. For example, Neelakantan et al. (2014) proposed two clustering-based models: the Multi-Sense Skip-Gram (MSSG) model and the Non-Parametric Multi-Sense Skip-Gram (NP-MSSG) model. MSSG assumes each word has the same number k of possible senses (e.g., k = 3). However, the number of senses in WordNet (Miller, 1995) varies from 1 (e.g., "ben") to 75 (e.g., "break"). As such, fixing the number of senses for all words results in poor representations. NP-MSSG can learn the number of senses for each word directly from data, but it requires tuning a hyper-parameter λ which controls the creation of cluster centroids during training, and a different λ needs to be tuned for different datasets. (2) The initial values of the sense representations are critical for most statistical clustering-based approaches. However, previous approaches usually adopted random initialization (Neelakantan et al., 2014) or the mean of the candidate words in a gloss. As a result, they may not produce optimal clustering results for word senses.

Focusing on the two problems above, this paper proposes to learn distributed representations of word senses through WordNet gloss composition and context clustering. The basic idea is that a word sense is represented as a synonym set (synset) in WordNet. In this way, instead of assigning a fixed sense number to each word as in previous methods, different words are assigned different numbers of senses based on their corresponding entries in WordNet. Moreover, we notice that each synset has a textual definition (called a gloss). Naturally, we use a convolutional neural network (CNN) to learn distributed representations of these glosses (a.k.a. sense vectors) through sentence composition. Then, we modify MSSG for context clustering by initializing the sense vectors with the representations learned by our CNN-based sentence composition model. We expect that word sense vectors initialized in this way will potentially lead to better representations of word senses generated from context clustering. The obtained word sense representations are evaluated on two tasks.
One is the word similarity task; the other is the analogical reasoning task provided by WordRep. The results show that our approach attains comparable performance on learning distributed representations of word senses. Specifically, our learned representations outperform publicly available embeddings on the globalSim and localSim metrics in the word similarity task, and on 6 of the 13 subtasks in the analogical reasoning task.
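A small sketch of the initialization idea follows: the contexts of a word are clustered into senses, but the cluster centroids are seeded with one gloss vector per synset instead of random points. The gloss and context vectors below are toy stand-ins; in the proposed approach the gloss vectors come from the CNN-based sentence composition model.

```python
# Sketch: gloss-vector-initialized sense clustering of a word's contexts.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
dim = 20

# Toy gloss vectors for the two senses of a hypothetical word (e.g., "bank").
gloss_vectors = np.stack([rng.normal(loc=+1.0, size=dim),    # financial-institution sense
                          rng.normal(loc=-1.0, size=dim)])   # river-bank sense

# Toy context vectors sampled around the two senses.
contexts = np.vstack([rng.normal(loc=+1.0, size=(30, dim)),
                      rng.normal(loc=-1.0, size=(30, dim))])

# n_init=1 because the initial centroids are fixed by the gloss vectors.
km = KMeans(n_clusters=len(gloss_vectors), init=gloss_vectors, n_init=1)
labels = km.fit_predict(contexts)
print("cluster sizes:", np.bincount(labels))
```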
Inducing factual information during response generation has garnered a lot of attention in dialogue systems research. While language models (Zheng et al., 2020) have been shown to generate responses akin to the dialogue history, they seldom contain factual information, leading to a bland conversation with the agent. Knowledge-grounded dialogue systems focus on leveraging external knowledge to generate coherent responses. Knowledge Graphs (KGs) are a rich source of factual information and can be combined with an utterance generator for a natural and informative conversational flow. Prior work has shown that utilising KGs in dialogue systems improves the appropriateness and informativeness of the conversation. Augmenting utterances in a dialogue with KG information guides the conversational agent to include relevant entities and facts in the response. For example, Figure 1 shows an example conversation where a user is interacting with a dialogue agent about movies. The agent has access to a KG that aids in suggesting relevant facts during the dialogue flow. When responding to utterance 3, the agent can utilise information from the KG and produce relevant facts about "Christopher Nolan". This information would be more engaging than responding with information about "Batman" or "Batman Begins".

Figure 1: An example conversation wherein the agent utilises relevant information from the KG while generating responses. The agent generates facts about "Christopher Nolan" in utterance 4 while utilising the semantic information in the dialogue history and the KG.

While KGs have been used extensively to include relevant facts in a dialogue, the explainability of such systems is limited. Naturally, this has fostered research on developing models for explainable conversation reasoning. Moon et al. (2019) addressed this problem by inducing KG paths for conversation explainability. They introduced a dialogue-KG path aligned corpus wherein utterances are augmented with a KG path to denote fact transitions in the dialogue. The KG paths emanate from entities or facts mentioned in the dialogue history and terminate at the entity to be mentioned in the response text. Such paths form a sequence of entities and relations and aid the dialogue agent in introducing appropriate knowledge to the dialogue. In addition to this, they proposed an attention-based recurrent decoder over the KG to generate entity paths. Jung et al. (2020) designed a novel dialogue-context-infused graph neural network to propagate attention scores over the knowledge graph entities for KG path generation. While such approaches have their inherent strengths, their limitations are manifold.

Given a dialogue context, it is desirable to generate paths that result in a natural dialogue flow. It is therefore essential to capture the semantic information in the dialogue context as well as in the KG elements. Transformer-based models (Devlin et al., 2019; Lan et al., 2020; Liu et al., 2019a) have enabled the capture of contextual relationships between different words in a sentence. Textual representations from such models have been successfully adapted for the dialogue-conditioned KG reasoning task (Jung et al., 2020). However, prior works use the embedding of the [CLS] token to encode the dialogue history and the KG elements. Reimers and Gurevych (2019) demonstrated that such sentence embeddings are sub-optimal and lead to degraded performance in downstream application tasks.
Sentence-transformers (Reimers and Gurevych, 2019) are strong tools for capturing the semantic information of a sentence in a fixed-size vector. As KG elements can be long phrases, KG-CRUSE uses the Sentence-BERT (SBERT) model to encode both the dialogue history and the KG elements in order to capture their semantic information. As a result of the long-tailed distribution of node neighbors in a KG, it can become difficult to generate relevant paths over the KG for explainable conversation. Given the dialogue history, it is desirable to traverse paths that are semantically relevant. KG-CRUSE utilises the rich sequential information in the dialogue history and the path history to sample the top-k semantically similar neighbors for extending its walk over the KG. We show that KG-CRUSE improves upon the current state-of-the-art on multiple metrics, demonstrating its effectiveness for explainable conversation reasoning. To summarise, our contributions are as follows:
• We propose KG-CRUSE, an LSTM-based decoder leveraging Sentence-Transformer (SBERT) embeddings to reason over KG paths for explainable conversation.
• We show the efficacy of our model by improving the current state-of-the-art performance on multiple metrics on the OpenDialKG (Moon et al., 2019) dataset. Additionally, we conduct extensive empirical analysis to emphasise the effectiveness of KG-CRUSE for the reasoning task.
• We release our system and baseline systems as an open-source toolkit to allow reproducibility and future comparison on this task.
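To make the neighbor-selection step concrete, the following minimal sketch embeds a dialogue history and candidate KG edges with Sentence-BERT and keeps the top-k semantically closest edges for the next step of the walk. It assumes the sentence-transformers package; the model name, the toy edges, and the value of k are illustrative assumptions, not the paper's actual configuration.

# Minimal sketch of semantic neighbor selection for a KG walk (assumes the
# sentence-transformers package; entities, relations and k are hypothetical).
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-MiniLM-L6-v2")

# Toy KG: current entity and its outgoing (relation, neighbor) edges.
current_entity = "Christopher Nolan"
edges = [
    ("directed", "Inception"),
    ("directed", "The Dark Knight"),
    ("born_in", "London"),
    ("sibling_of", "Jonathan Nolan"),
]

dialogue_history = ("User: I loved Batman Begins. Who directed it? "
                    "Agent: It was directed by Christopher Nolan.")

# Encode the dialogue history and each candidate edge as sentences.
query_emb = sbert.encode(dialogue_history, convert_to_tensor=True)
edge_texts = [f"{current_entity} {rel} {dst}" for rel, dst in edges]
edge_embs = sbert.encode(edge_texts, convert_to_tensor=True)

# Keep the top-k edges that are semantically closest to the dialogue context.
k = 2
scores = util.cos_sim(query_emb, edge_embs)[0]
top_k = scores.topk(k)
candidates = [edges[i] for i in top_k.indices.tolist()]
print(candidates)  # e.g. the two "directed" edges for a movie-related context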
0
Lately, deep learning conversational systems have seen increasing interest from industry and academia alike (Chen et al., 2017). These systems find usage in various contexts, ranging from personal speech assistants like Google Assistant, through the "chatbots" on instant messaging platforms like Facebook Messenger, to conversational services like LUIS. Many of these applications serve the objective of completing a specific function like purchasing a product or booking services (e.g., hotels, flights). Nonetheless, these applications can still profit from open-domain dialogue skills like chit-chatting, which would provide a more human-like interaction with users. Presently, scientists and engineers working on computer-based conversational systems need human-based evaluation to assess the quality and usability of their work (Dinan et al., 2019; Yoshino et al., 2019). These evaluations are costly in terms of resources. Thus, the field of dialogue systems could take advantage of an automated method for assessing conversations. Seminal works in text summarization and machine translation have already proposed their field-specific metrics for automated assessment: for the former ROUGE (Lin, 2004), and, for the latter, BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005). Dialogue system research (Ritter et al., 2011; Yoshino et al., 2019) constantly uses these metrics. However, Liu et al. (2016) show that these metrics, based on word overlap between prediction and references, are not reliable for evaluating the usefulness of dialogue systems. Hence, the field should use more sophisticated methods that consider the previous utterances of a conversation and its semantic meaning. When human annotators evaluate a dialogue, they do not use an explicit reference or necessarily seek word overlap between context and response (or the lack of it). Their assessment is based on experience with the language and the implicit knowledge they have about it. The core principle of statistical language models (LMs) is to capture and reproduce these properties. LMs have proven invaluable in state-of-the-art approaches to natural language processing and natural language understanding (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019). Thus, the main aim of this work is to investigate their usability as a means of evaluating dialogues, since they do not need a reference or supervision. We demonstrate that there is a significant positive correlation between the predictions of language models and human evaluation scores. Furthermore, we provide insights into the inner workings and behavior of language models in the dialogue context.
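The core idea, scoring a candidate response by how likely a pretrained LM finds it given the dialogue context, can be sketched as follows. This assumes the HuggingFace transformers package and GPT-2, and is an illustration of the general idea rather than the authors' exact setup.

# Minimal sketch of scoring a dialogue response with a pretrained LM,
# assuming the HuggingFace transformers package and GPT-2; not the exact
# evaluation protocol of the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def response_score(context: str, response: str) -> float:
    """Average log-likelihood of the response tokens given the context."""
    ctx_ids = tokenizer.encode(context)
    resp_ids = tokenizer.encode(" " + response)
    input_ids = torch.tensor([ctx_ids + resp_ids])
    with torch.no_grad():
        logits = model(input_ids).logits
    # Log-probability of every token given everything before it.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = input_ids[0, 1:]
    token_lp = log_probs[range(len(targets)), targets]
    # Keep only the positions that correspond to the response.
    return token_lp[len(ctx_ids) - 1:].mean().item()

ctx = "A: How was the concert last night? B:"
print(response_score(ctx, "It was amazing, the band played for three hours."))
print(response_score(ctx, "Potato quantum sideways."))  # expected to score lower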
0
Hand-built NLP grammars frequently have a depth of linguistic representation and constraints not present in current treebanks, giving them potential importance for tasks requiring deeper processing. On the other hand, these manually built grammars need to solve the disambiguation problem to be practically usable.This paper presents work on the problem of probabilistic parse selection from among a set of alternatives licensed by a hand-built grammar in the context of the newly developed Redwoods HPSG treebank (Oepen et al., 2002) . HPSG (Head-driven Phrase Structure Grammar) is a modern constraintbased lexicalist (unification) grammar, described in Pollard and Sag (1994) .The Redwoods treebank makes available syntactic and semantic analyses of much greater depth than, for example, the Penn Treebank. Therefore there are a large number of features available that could be used by stochastic models for disambiguation. Other researchers have worked on extracting features useful for disambiguation from unification grammar analyses and have built log linear models a.k.a. Stochastic Unification Based Grammars (Johnson et al., 1999; Riezler et al., 2000) . Here we also use log linear models to estimate conditional probabilities of sentence analyses. Since feature selection is almost prohibitive for these models, because of high computational costs, we use PCFG models to select features for log linear models. Even though this method may be expected to be suboptimal, it proves to be useful. We select features for PCFGs using decision trees and use the same features in a conditional log linear model. We compare the performance of the two models using equivalent features.Our PCFG models are comparable to branching process models for parsing the Penn Treebank, in which the next state of the model depends on a history of features. In most recent parsing work the history consists of a small number of manually selected features (Charniak, 1997; Collins, 1997) . Other researchers have proposed automatically selecting the conditioning information for various states of the model, thus potentially increasing greatly the space of possible features and selectively choosing the best predictors for each situation. Decision trees have been applied for feature selection for statistical parsing models by Magerman (1995) and Haruno et al. (1998) . Another example of automatic feature selection for parsing is in the context of a deterministic parsing model that chooses parse actions based on automatically induced decision structures over a very rich feature set (Hermjakob and Mooney, 1997) .Our experiments in feature selection using decision trees suggest that single decision trees may not be able to make optimal use of a large number of relevant features. This may be due to the greedy search procedures or to the fact that trees combine information from different features only through partitioning of the space. For example they have difficulty in weighing evidence from different features without fully partitioning the space.A common approach to overcoming some of the problems with decision trees -such as reducing their variance or increasing their representational power -has been building ensembles of decision trees by, for example, bagging (Breiman, 1996) or boosting (Freund and Schapire, 1996) . Haruno et al. (1998) have experimented with boosting decision trees, reporting significant gains. 
Our approach is to build separate decision trees using different (although not disjoint) subsets of the feature space and then to combine their estimates by using the average of their predictions. A similar method based on random feature subspaces has been proposed by Ho (1998) , who found that the random feature subspace method outperformed bagging and boosting for datasets with a large number of relevant features where there is redundancy in the features. Other examples of ensemble combination based on different feature subspaces include Zheng (1998) who learns combinations of Naive Bayes classifiers and Zenobi and Cunningham (2001) who create ensembles of kNN classifiers. We begin by describing the information our HPSG corpus makes available and the subset we have attempted to use in our models. Next we describe our ensembles of decision trees for learning parameterizations of branching process models. Finally, we report parse disambiguation results for these models and corresponding conditional log linear models.
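The following minimal scikit-learn sketch illustrates the ensemble idea described above: decision trees are trained on different (overlapping) feature subsets and their class-probability estimates are averaged. The random data stands in for features derived from HPSG analyses, and the subset size and depth are illustrative assumptions.

# Minimal sketch of averaging decision trees built on different feature
# subsets; random data stands in for HPSG-derived features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.rand(500, 40)                      # 500 parses, 40 candidate features
y = (X[:, 0] + X[:, 7] + X[:, 21] > 1.5).astype(int)

n_trees, subset_size = 10, 15
trees, subsets = [], []
for _ in range(n_trees):
    cols = rng.choice(X.shape[1], size=subset_size, replace=False)
    tree = DecisionTreeClassifier(max_depth=6, random_state=0)
    tree.fit(X[:, cols], y)
    trees.append(tree)
    subsets.append(cols)

def ensemble_proba(x_new):
    # Average the class-probability estimates of the individual trees.
    probs = [t.predict_proba(x_new[:, cols]) for t, cols in zip(trees, subsets)]
    return np.mean(probs, axis=0)

print(ensemble_proba(X[:5]))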
0
In recent years, soft-attention based neural machine translation models (Bahdanau et al., 2015; Gehring et al., 2017; Hassan et al., 2018) have achieved state-of-the-art results on different machine translation tasks. The soft-attention mechanism computes the context (encoder-decoder attention) vector for each target token by weighting and combining all the tokens of the source sequence, which makes these models ineffective for long sequence translation (Lawson et al., 2017). Moreover, weighting and combining all the tokens of the source sequence may not be required: a few relevant tokens are sufficient for each target token. Different attention mechanisms have been proposed to improve the quality of the context vector. For example, Luong et al. (2015) and Yang et al. (2018) proposed a local-attention mechanism to selectively focus on a small window of source tokens to compute the context vector. Even though local attention has improved the translation quality, it is not flexible enough to focus on relevant tokens when they fall outside the specified window size. To overcome the shortcomings of the above approaches, we propose a hard-attention mechanism for a deep NMT model (Vaswani et al., 2017). The proposed model selects only a few relevant tokens across the entire source sequence for each target token to effectively handle long sequence translation. Due to the discrete nature of the hard-attention mechanism, we design a Reinforcement Learning (RL) algorithm with a reward shaping strategy (Ng et al., 1999) to train it. The proposed hard-attention based NMT model consistently outperforms the soft-attention based NMT model (Vaswani et al., 2017), and the gap grows as the sequence length increases.
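Because the hard selection is discrete, gradients cannot flow through it directly, which is why an RL-style estimator is needed. The toy PyTorch sketch below illustrates the policy-gradient (REINFORCE) idea behind training such a discrete choice: a single source position is sampled per target token and the sampling policy is rewarded by the resulting prediction quality. It is heavily simplified and is not the paper's architecture or its reward shaping schedule.

# Toy sketch of REINFORCE for a discrete (hard) attention choice; all
# dimensions, the scoring function and the baseline are illustrative.
import torch
import torch.nn as nn

d = 32
src = torch.randn(10, d)          # 10 encoded source tokens (stand-in encoder output)
query = torch.randn(d)            # decoder state for the current target token
scorer = nn.Linear(d, 1)          # scores each source token given the query
out_proj = nn.Linear(d, 100)      # predicts the target token (vocab of 100)
target = torch.tensor(17)

# Sample a single source position from the attention policy.
scores = scorer(src * query).squeeze(-1)
dist = torch.distributions.Categorical(logits=scores)
idx = dist.sample()

# Use only the selected source token to predict the target token.
logits = out_proj(src[idx])
nll = nn.functional.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))

# REINFORCE: reward is the negative loss; a constant baseline reduces variance.
reward = -nll.detach()
baseline = 0.0
policy_loss = -(reward - baseline) * dist.log_prob(idx)

(nll + policy_loss).backward()    # gradients for both the predictor and the policy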
0
Owing to notable advances in deep learning and representation learning, important progress has been achieved on text classification, reading comprehension, and other NLP tasks. Recently, pretrained language representations with self-supervised objectives (Peters et al., 2018; Devlin et al., 2018; Radford et al., 2018) have further pushed forward the state-of-the-art on many English tasks. While these sorts of deep models can be trained on different languages, they typically require substantial amounts of labeled data for the specific target domain. Unfortunately, the cost of acquiring new custom-built resources for each combination of language and domain is very high, as it typically requires human annotation. Available resources for domain-specific tasks are often imbalanced between different languages. The scarcity of non-English annotated corpora may preclude our ability to train language-specific machine learning models. In contrast, English-language annotations are often readily available to train deep models. Although translation can be an option, human translation is very costly, and for many language pairs any available domain-specific parallel corpora are too small to train high-quality machine translation systems. Cross-lingual systems rely on training data from one language to train a model that can be applied to other languages (de Melo and Siersdorfer, 2007), alleviating the training bottleneck issues for low-resource languages. This is facilitated by recent advances in learning joint multilingual representations (Lample and Conneau, 2019; Artetxe and Schwenk, 2018; Devlin et al., 2018). In our work, we propose a self-learning framework to incorporate the predictions of the multilingual BERT model (Devlin et al., 2018) on non-English data into an English training procedure. The initial multilingual BERT model was simultaneously pretrained on 104 languages, and has been shown to perform well for cross-lingual transfer of natural language tasks (Wu and Dredze, 2019). Our model begins by learning just from available English samples, but then makes predictions on unlabeled non-English samples, and the samples with high-confidence prediction scores are repurposed to serve as labeled examples for the next iteration of fine-tuning, until the model converges. Based on this multilingual self-learning technique, we demonstrate the superiority of our framework on Multilingual Document Classification (MLDoc) (Schwenk and Li, 2018) in comparison with several strong baselines. Our study then proceeds to show that our method is better on Chinese sentiment classification than other cross-lingual methods that also consider unlabeled non-English data. This shows that our method is more effective at cross-lingual transfer for domain-specific tasks, using a mix of labeled and unlabeled data via a multilingual BERT sentence model.
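The general self-learning loop can be sketched as follows: train on labeled English data, pseudo-label the high-confidence non-English samples, add them to the training set, and repeat. In this sketch a simple scikit-learn classifier stands in for fine-tuning multilingual BERT, and the 0.9 confidence threshold and the data are assumed placeholders.

# Minimal sketch of the self-learning (pseudo-labeling) loop; a linear
# classifier on random features stands in for multilingual BERT fine-tuning.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X_en, y_en = rng.rand(200, 50), rng.randint(0, 2, 200)   # labeled English data
X_xx = rng.rand(300, 50)                                  # unlabeled non-English data

X_train, y_train = X_en.copy(), y_en.copy()
remaining = X_xx.copy()

for it in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    if len(remaining) == 0:
        break
    proba = clf.predict_proba(remaining)
    conf, pred = proba.max(axis=1), proba.argmax(axis=1)
    keep = conf >= 0.9                     # high-confidence pseudo-labels only
    if not keep.any():
        break                              # converged: nothing new to add
    X_train = np.vstack([X_train, remaining[keep]])
    y_train = np.concatenate([y_train, pred[keep]])
    remaining = remaining[~keep]
    print(f"iteration {it}: added {keep.sum()} pseudo-labeled samples")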
0
In this paper, we study the effect of non-normalized text on natural language processing (NLP). Non-normalized text includes non-canonical word forms, noisy word forms, and word forms with "small" perturbations, such as informal spellings, typos, and scrambled words. Compared to normalized text, the variability of non-normalized text is much greater and aggravates the problem of data sparsity. Non-normalized text dominates in many real-world applications. Similar to humans, NLP should ideally perform reliably and robustly also under suboptimal or even adversarial conditions, without a significant degradation in performance. Web-based content and social media are a rich source of noisy and informal text. Noise can also be introduced in a downstream NLP application where errors are propagated from one module to the next, for example in speech translation, where the machine translation (MT) module needs to be robust against errors introduced by the automatic speech recognition (ASR) module. Moreover, NLP should not be vulnerable to adversarial input examples. While all these examples do not pose a real challenge to an experienced human reader, even "small" perturbations from the canonical form can make a state-of-the-art NLP system fail. To illustrate the typical behavior of state-of-the-art NLP on normalized and non-normalized text, we discuss an example in the context of neural MT (NMT). Different research groups have shown that NMT can generate natural and fluent translations (Bentivogli et al., 2016), achieving human-like performance in certain settings (Wu et al., 2016). The state-of-the-art NMT engine Google Translate, for example, perfectly translates the English sentence

I used my card to purchase a meal on the menu and the total on my receipt was $ 8.95 but when I went on line to check my transaction it shows $ 10.74 .

as

Ich benutzte meine Karte , um eine Mahlzeit auf der Speisekarte zu kaufen und die Gesamtsumme auf meiner Quittung war $ 8,95 , aber als ich online ging , um meine Transaktion zu überprüfen , zeigt es $ 10,74 .

Adding some noise to the source sentence by swapping a few neighboring characters, e.g.,

I used my card ot purchase a meal no the mneu and the total no my receipt was $ 8.95 but whne I went on line to check ym transaction it show $ 1.074 .

confuses the same NMT engine considerably:

Ich benutzte meine Karte ot Kauf eine Mahlzeit nicht die Mneu und die insgesamt nicht meine Quittung war $ 8,95 aber whne ging ich auf Linie zu überprüfen ym Transaktion es $ 1.074 .

By contrast, an experienced human reader can still understand and correctly translate the noisy sentence and compensate for some information loss (including real-word errors such as "no" vs. "on", but rather not "10.74" vs. "1.074"), with little additional effort and often without even noticing the "small" perturbations. One might argue that a good translation should in fact translate corrupted language into corrupted language. Here, we rather adopt the position that the objective is to preserve the intended content and meaning of a sentence regardless of noise. It should be noted that neural networks with sufficient capacity, in particular recurrent neural networks, are universal function approximators (Schäfer and Zimmermann, 2006). Hence, the performance degradation on non-normalized text is not so much a question of whether the model can capture the variability but rather of how to train a robust model.
In particular, it can be expected that training on noisy data will make NLP more robust, as has been successfully demonstrated for other application domains including vision (Cui et al., 2015) and speech recognition (Doulaty et al., 2016). In this paper, we empirically evaluate the robustness of different models (convolutional neural networks, recurrent neural networks, non-neural models), different basic units (characters, byte pair encoding units), and different NLP tasks (morphological tagging, NMT). Due to easy availability, and to have more control over the experimental setup with respect to error type and error density, we use synthetic data generated from existing clean corpora by perturbing the word forms. The perturbations include character flips and swaps of neighboring characters to imitate typos, and word scrambling. The contributions of this paper are the following. Our experiments confirm that (i) noisy input substantially degrades the output of models trained on clean data. The experiments show that (ii) training on noisy data can help models achieve performance on noisy data similar to that of models trained on clean data tested on clean data, that (iii) models trained on noisy data can achieve good results on noisy data almost without performance loss on clean data, that (iv) error type mismatches between training and test data can have a greater impact than error density mismatches, that (v) character-based approaches are almost always better than byte pair encoding (BPE) approaches with noisy data, that (vi) the choice of neural model (recurrent, convolutional) is not as significant, and that (vii) for morphological tagging, under the same data conditions, the neural models outperform a conditional random field (CRF) based model. The remainder of the paper is organized as follows. Section 2 discusses related work. Section 3 describes the noise types and Section 4 briefly summarizes the modeling approaches used in this paper. Experimental results are shown and discussed in Section 5. The paper is concluded in Section 6.
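The kinds of synthetic perturbations described above (character flips, swaps of neighboring characters, word scrambling) can be generated with a few lines of Python. The sketch below is only illustrative; the perturbation probability and the choice of operations are assumptions, not the error densities used in the experiments.

# Illustrative generators for the noise types discussed above.
import random
import string

random.seed(0)

def flip_char(word):
    if len(word) < 1:
        return word
    i = random.randrange(len(word))
    return word[:i] + random.choice(string.ascii_lowercase) + word[i + 1:]

def swap_neighbors(word):
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def scramble_inner(word):
    # Shuffle everything except the first and last character.
    if len(word) < 4:
        return word
    inner = list(word[1:-1])
    random.shuffle(inner)
    return word[0] + "".join(inner) + word[-1]

def perturb(sentence, p=0.3):
    ops = [flip_char, swap_neighbors, scramble_inner]
    return " ".join(random.choice(ops)(w) if random.random() < p else w
                    for w in sentence.split())

print(perturb("I used my card to purchase a meal on the menu"))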
0
Summarization of meetings faces many challenges not found in texts, e.g., high word error rates, absence of punctuation, and sometimes lack of grammaticality and coherent ordering. On the other hand, meetings present a rich source of structural and pragmatic information that makes summarization of multi-party speech quite unique. In particular, our analyses of patterns in the verbal exchange between participants found that adjacency pairs (AP), a concept drawn from the conversational analysis literature (Schegloff and Sacks, 1973), have particular relevance to summarization. APs are pairs of utterances such as QUESTION-ANSWER or OFFER-ACCEPT, in which the second utterance is said to be conditionally relevant on the first. We show that there is a strong correlation between the two elements of an AP in summarization, and that one is unlikely to be included if the other element is not present in the summary. Most current statistical sequence models in natural language processing (NLP), such as hidden Markov models (HMMs) (Rabiner, 1989), are linear chains that only encode local dependencies between utterances to be labeled. In multi-party speech, the two elements of an AP are generally arbitrarily distant, and such models can only poorly account for dependencies underlying APs in summarization. We use instead skip-chain sequence models (Sutton and McCallum, 2004), which allow us to explicitly model dependencies between distant utterances, and which turn out to be particularly effective in the summarization task. In this paper, we compare two types of network structures (linear-chain and skip-chain) and two types of network semantics (Bayesian Networks (BNs) and Conditional Random Fields (CRFs)). We discuss the problem of estimating the class posterior probability of each utterance in a sequence in order to extract the N most probable ones, and show that the cost assigned by a CRF to each utterance needs to be locally normalized in order to outperform BNs. After analyzing the predictive power of a large set of durational, acoustical, lexical, structural, and information retrieval features, we perform feature selection to have a competitive set of predictors to test the different models. Empirical evaluations using two standard summarization metrics, the Pyramid method (Nenkova and Passonneau, 2004b) and ROUGE (Lin, 2004), show that the best performing system is a CRF incorporating both order-2 Markov dependencies and skip-chain dependencies, which achieves 91.3% of human performance in Pyramid score and outperforms our best-performing non-sequential model by 3.9%.
0
In the Fourth SIGHAN Bakeoff, besides the evaluation tasks for word segmentation and NER, another important evaluation task was introduced: POS tagging for Chinese. In this bakeoff, the models we built for the tasks are similar to those in the work of Ng and Low (2004). The models are based on a maximum entropy framework (Ratnaparkhi, 1996; Xue and Shen, 2003) and are trained on the corpora provided for the bakeoff tasks. To fully understand the models, we implemented them entirely ourselves, using Visual Studio .NET 2003 and C++ as the implementation language. Improved Iterative Scaling (IIS) (Pietra et al., 1997) is used as the parameter estimation algorithm for the models. We participated in all the closed-track tests for word segmentation, and the CITYU closed-track tests for POS tagging and NER.
0
Recent work in unsupervised and self-supervised pre-training has revolutionised the field of natural language understanding (NLU), resulting in high performance ceilings across multiple tasks (Devlin et al., 2019; Dong et al., 2019). The recent success of language model pre-training with masked language modelling (MLM) such as BERT (Devlin et al., 2019) further paved the way for more complex approaches that combine language pre-training with images (Tan and Bansal, 2019; Su et al., 2020), video (Sun et al., 2019), and speech (Chuang et al., 2020). Most of these approaches follow a task-specific fine-tuning step after the model is pre-trained. However, there has been little work on exploiting pre-trained MLMs for natural language generation (NLG) tasks. Previous work argues that the MLM objective is ill-suited for generation tasks such as machine translation (Rothe et al., 2020). Recent work in this direction has predominantly investigated the use of pre-trained models either to initialise Transformer-based encoder-decoder models (Imamura and Sumita, 2019; Clinchant et al., 2019; Yang et al., 2020; Rothe et al., 2020) or to distill knowledge for sequence generation tasks (Chen et al., 2020). In this work, we present BERTGEN, which extends BERT in a generative setting ( § 2.1). This results in a single generator, without a separation between the encoder and the decoder, capable of consuming multiple input modalities and generating in multiple languages. The latter features are achieved by transferring knowledge from state-of-the-art pretrained models, namely VL-BERT (Su et al., 2020) and multilingual BERT (M-BERT) (Devlin et al., 2019). We train BERTGEN on various tasks, including image captioning, machine translation and multimodal machine translation, and on datasets in four different languages ( § 2.2). Based on a number of experiments, our findings ( § 3) show that BERTGEN (i) is surprisingly versatile, as it is capable of describing images and performing translation in unimodal and multimodal settings, across all languages, (ii) generalises well across zero-shot image captioning, multimodal machine translation, and out-of-domain news translation tasks, and finally (iii) is parameter efficient when compared to state-of-the-art models for each of the tasks combined together.
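One simple way to see how a masked LM can be repurposed as a generator is to repeatedly append a [MASK] token and let the model fill it in. The sketch below illustrates only this general idea; it is not BERTGEN's actual architecture, training objective, or decoding procedure, and greedy chaining of MLM predictions typically yields rough text. It assumes the HuggingFace transformers package.

# Illustration of left-to-right generation with an MLM by iterative masking;
# not BERTGEN's actual method.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

text = "the cat sat on the"
ids = tokenizer.encode(text, add_special_tokens=True)   # [CLS] ... [SEP]

for _ in range(5):
    # Insert a [MASK] just before [SEP] and predict it.
    masked = ids[:-1] + [tokenizer.mask_token_id] + ids[-1:]
    with torch.no_grad():
        logits = model(torch.tensor([masked])).logits
    mask_pos = len(masked) - 2
    next_id = logits[0, mask_pos].argmax().item()
    ids = ids[:-1] + [next_id] + ids[-1:]

print(tokenizer.decode(ids, skip_special_tokens=True))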
0
In recent years, contextual embeddings (Peters et al., 2018; Devlin et al., 2018) have made immense progress in semantic understanding-based tasks. After being trained using large amounts of data, for example via a self-supervised task like masked language-modeling, such models learn crucial elements of language, such as syntax and semantics (Jawahar et al., 2019; Goldberg, 2019; Wiedemann et al., 2019), from just raw text. The best performing contextual embeddings are trained with Transformer-based methods (TBMs) (Vaswani et al., 2017; Devlin et al., 2018). These embeddings have been shown to frequently achieve state-of-the-art results in downstream tasks like question answering and sentiment analysis (van Aken et al., 2019; Sun et al., 2019). Contextual embeddings are also often used to capture the similarity between pairs of documents; for example, on the Semantic Textual Similarity (STS) task (Cer et al., 2017) included in the GLUE benchmark (Wang et al., 2018), TBMs have shown competitive performance, substantially outperforming embedding baselines like Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). However, their performance on similarity tasks beyond abstract, semantic ones (Mickus et al., 2019), for example on granular news article matching, is less understood. In this work, we study the performance of TBMs in textual similarity tasks with the following research question: Are transformer-based methods as performant for granular tasks as they are for abstract ones? Here, granular and abstract reflect varying amounts of coarseness in the concept of similarity. For example, consider the news domain: a granular notion of similarity might be whether a pair of articles both report the exact same news event. Conversely, an abstract notion might be when the articles share the same topical category, like sports or finance. Figure 1 illustrates this with an example for clarity.

Figure 1: An example pair of articles from the News Dedup dataset: both report the same news event, and are thus similar on a granular level; the colored text indicates fine-grained details associated with this determination. Both articles are also of the "sports" topic, and are thus similar on an abstract level.

Firstly, we define separate tasks to explore these two notions of similarity on two datasets from different domains, News Articles and Bug Reports. Our analysis on both datasets reveals that contextual embeddings do not perform well on granular tasks, and are outperformed by simple baselines like TF-IDF. Secondly, we demonstrate that TBM contextual embeddings do in fact contain important semantic information, and a simple interpolation strategy between the two methods can help boost the relative individual performance of TBMs (TF-IDF) by up to 36% (6%) on the granular task.
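The interpolation idea can be sketched with two similarity signals: a lexical one from TF-IDF and a semantic one from contextual embeddings. In this sketch, mean pooling over BERT hidden states is just one simple way to get a document vector, and the interpolation weight alpha is an assumed hyperparameter rather than the value tuned in the paper.

# Minimal sketch of interpolating TF-IDF similarity with contextual-embedding
# similarity; model, pooling and alpha are illustrative assumptions.
import torch
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import AutoTokenizer, AutoModel

doc_a = "Team X beat Team Y 3-1 in Tuesday night's cup semifinal."
doc_b = "Tuesday's cup semifinal ended with Team X defeating Team Y 3-1."

# Granular, lexical signal: TF-IDF cosine similarity.
tfidf = TfidfVectorizer().fit_transform([doc_a, doc_b])
sim_tfidf = cosine_similarity(tfidf[0], tfidf[1])[0, 0]

# Abstract, semantic signal: mean-pooled contextual embeddings.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    with torch.no_grad():
        out = enc(**tok(text, return_tensors="pt")).last_hidden_state
    return out.mean(dim=1).squeeze(0).numpy()

ea, eb = embed(doc_a), embed(doc_b)
sim_tbm = float(np.dot(ea, eb) / (np.linalg.norm(ea) * np.linalg.norm(eb)))

# Interpolate the two signals.
alpha = 0.5
score = alpha * sim_tbm + (1 - alpha) * sim_tfidf
print(round(sim_tfidf, 3), round(sim_tbm, 3), round(score, 3))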
0
One of the major challenges of entity linking is resolving contextually polysemous mentions. For example, Germany may refer to a nation, to that nation's government, or even to a soccer team. Past approaches to such cases have often focused on collective entity linking: nearby mentions in a document might be expected to link to topically-similar entities, which can give us clues about the identity of the mention currently being resolved (Ratinov et al., 2011; Hoffart et al., 2011; He et al., 2013; Cheng and Roth, 2013; Durrett and Klein, 2014). But an even simpler approach is to use context information from just the words in the source document itself to make sure the entity is being resolved sensibly in context. In past work, these approaches have typically relied on heuristics such as tf-idf (Ratinov et al., 2011), but such heuristics are hard to calibrate and they capture structure in a coarser way than learning-based methods. In this work, we model semantic similarity between a mention's source document context and its potential entity targets using convolutional neural networks (CNNs). CNNs have been shown to be effective for sentence classification tasks (Kalchbrenner et al., 2014; Kim, 2014; Iyyer et al., 2015) and for capturing similarity in models for entity linking (Sun et al., 2015) and other related tasks (Dong et al., 2015; Shen et al., 2014), so we expect them to be effective at isolating the relevant topic semantics for entity linking. We show that convolutions over multiple granularities of the input document are useful for providing different notions of semantic context. Finally, we show how to integrate these networks with a preexisting entity linking system (Durrett and Klein, 2014). Through a combination of these two distinct methods into a single system that leverages their complementary strengths, we achieve state-of-the-art performance across several datasets.
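The core building block, scoring a mention's context against a candidate entity description with a convolutional encoder and cosine similarity, can be sketched as follows. The vocabulary, dimensions and random inputs are placeholders, not the paper's actual configuration, and the multi-granularity convolutions and integration with the existing linker are omitted.

# Toy sketch of CNN-based context/entity similarity for entity linking.
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    def __init__(self, vocab=1000, emb=50, filters=64, width=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, filters, kernel_size=width, padding=1)

    def forward(self, ids):                      # ids: (batch, seq_len)
        x = self.emb(ids).transpose(1, 2)        # (batch, emb, seq_len)
        h = torch.relu(self.conv(x))             # (batch, filters, seq_len)
        return h.max(dim=2).values               # max-pool over positions

enc = CNNEncoder()
context_ids = torch.randint(0, 1000, (1, 30))    # words around the mention
entity_ids = torch.randint(0, 1000, (1, 80))     # candidate entity description

score = torch.cosine_similarity(enc(context_ids), enc(entity_ids))
print(score)  # one similarity feature that can be fed to the linking model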
0
The blogosphere, which is a subset of the web comprised of personal electronic journals (weblogs), currently encompasses 27.2 million pages and doubles in size every 5.5 months (Technorati, 2006). The information contained in the blogosphere has been proven valuable for applications such as marketing intelligence, trend discovery, and opinion tracking (Hurst, 2005). Unfortunately, in the last year the blogosphere has been heavily polluted with spam weblogs (called splogs), which are weblogs used for different purposes, including promoting affiliated websites (Wikipedia, 2006). Splogs can skew the results of applications meant to quantitatively analyze the blogosphere. Sophisticated content-based methods or methods based on link analysis (Gyöngyi et al., 2004), while providing effective splog filtering, require extra web crawling and can be slow. While a combination of approaches is necessary to provide adequate splog filtering, similar to (Kan & Thi, 2005), we propose, as a preliminary step in the overall splog filtering, a fast, lightweight and accurate method based merely on the analysis of the URL of the weblog, without considering its content. For quantitative and qualitative analysis of the content of the blogosphere, it is acceptable to eliminate a small fraction of good data from analysis as long as the remainder of the data is splog-free. This elimination should be kept to a minimum to preserve the counts needed for reliable analysis. When using an ensemble of methods for comprehensive splog filtering, it is acceptable for pre-filtering approaches to lower recall in order to improve precision, allowing more expensive techniques to be applied on a smaller set of weblogs. The proposed method reaches 93.3% precision in classifying a weblog as spam or good if 49.1% of the data are left aside (labeled as unknown). If all data need to be classified, our method achieves 78% accuracy, which is comparable to the average accuracy of humans (76%) on the same classification task. Sploggers, in creating splogs, aim to increase the traffic to specific websites. To do so, they frequently communicate a concept (e.g., a service or a product) through a short, sometimes non-grammatical phrase embedded in the URL of the weblog (e.g., http://adult-video-mpegs.blogspot.com). We want to build a statistical classifier which leverages the language used in these descriptive URLs in order to classify weblogs as spam or good. We built an initial language model-based classifier on the tokens of the URLs after tokenizing on punctuation (., -, , /, ?, =, etc.). We ran the system and got an accuracy of 72.2%, which is close to the accuracy of humans, 76% (the baseline is 50% as the training data is balanced). When we did error analysis on the misclassified examples, we observed that many of the mistakes were on URLs that contain words glued together as one token (e.g., dailyfreeipod). Had the words in these tokens been segmented, the initial system would have classified the URL correctly. We thus turned our attention to additional segmenting of the URLs beyond just punctuation, and to using this intra-token segmentation in the classification. Training a segmenter on standard available text collections (e.g., PTB or BNC) did not seem the way to proceed because the lexical items used and the sequence in which they appear differ from the usage in the URLs.
Given that we are interested in unsupervised, lightweight approaches for URL segmentation, one possibility is to use the URLs themselves after segmenting on punctuation and to try to learn the segmentation (the majority of URLs are naturally segmented using punctuation, as we shall see later). We trained a segmenter on the tokens in the URLs; unfortunately, this method did not provide sufficient improvement over the system which uses tokenization on punctuation. We hypothesized that the content of the splog pages corresponding to the splog URLs could be used as a corpus to learn the segmentation. We crawled 20K weblogs corresponding to the 20K URLs labeled as spam and good in the training set, converted them to text, tokenized them, and used the token sequences as training data for the segmenter. This led to a statistically significant improvement of 5.8% in the accuracy of the splog filter.
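A segmenter of this kind can be as simple as a unigram language model combined with dynamic programming over all possible splits of a glued token. In the sketch below, a tiny hand-made count table stands in for the word counts collected from the crawled weblog text, and the penalty for unseen strings is an assumed smoothing choice.

# Minimal sketch of unigram-LM segmentation of a glued URL token.
import math
from functools import lru_cache

counts = {"daily": 50, "free": 120, "ipod": 30, "i": 200, "pod": 10}
total = sum(counts.values())

def logprob(word):
    # Unseen "words" get a penalty that grows with their length.
    return math.log(counts.get(word, 0.01 ** len(word)) / total)

@lru_cache(maxsize=None)
def segment(s):
    if not s:
        return 0.0, ()
    best = (float("-inf"), ())
    for i in range(1, len(s) + 1):
        head, rest = s[:i], s[i:]
        rest_score, rest_words = segment(rest)
        cand = (logprob(head) + rest_score, (head,) + rest_words)
        best = max(best, cand)
    return best

print(segment("dailyfreeipod")[1])   # expected: ('daily', 'free', 'ipod')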
0
Word embeddings capture distributional similarities and thus inherit demographic stereotypes (Bolukbasi et al., 2016). Such embedding biases tend to track statistical regularities such as the percentage of people with a given occupation (Nikhil Garg and Zou, 2018) but sometimes deviate from them (Bhatia, 2017). Recent work has shown that gender bias also exists in contextualized embeddings (May et al., 2019). Here, we provide a quantitative analysis of bias in traditional and contextual word embeddings and introduce a method of mitigating bias (i.e., debiasing) using the debiasing conceptor, a clean mathematical representation of subspaces that can be operated on and composed by logic-based manipulations (Jaeger, 2014). Specifically, conceptor negation is a soft damping of the principal components of the target subspace (e.g., the subset of words being debiased) (Liu et al., 2019b) (see Figure 1).

Figure 1: (a) The original space; (b) after applying the debiasing conceptor. BERT word representations of the union of the set of contextualized word representations of relatives, executive, wedding, salary, projected onto the first two principal components of the WEAT gender first names, which capture the primary component of gender. Note how the debiasing conceptor collapses relatives and wedding, and executive and salary, once the bias is removed.

Key to our method is how it treats word-association lists (sometimes called target lists), which define the bias subspace. These lists include pre-chosen words associated with a target demographic group (often referred to as a "protected class"). For example, he / she or Mary / John have been used for gender (Bolukbasi et al., 2016). More generally, conceptors can combine multiple subspaces defined by word lists. Unlike most current methods, conceptor debiasing uses a soft, rather than a hard, projection. We test the debiasing conceptor on a range of traditional and contextualized word embeddings and examine whether it removes stereotypical demographic biases. All tests have been performed on English word embeddings. This paper contributes the following:
• Introduces debiasing conceptors along with a formal definition and mathematical relation to the Word Embedding Association Test.
• Demonstrates the effectiveness of the debiasing conceptor on both traditional and contextualized word embeddings.
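The conceptor computation itself is small. In the sketch below, a conceptor C is computed from the vectors that define the bias subspace (e.g., embeddings of gendered first names), its negation I - C is formed, and the negation is applied to all word vectors to softly damp the bias directions. The aperture value alpha and the random stand-in vectors are illustrative assumptions.

# Minimal numpy sketch of conceptor negation for debiasing.
import numpy as np

rng = np.random.RandomState(0)
d = 300
bias_words = rng.randn(40, d)        # embeddings of the word-association list
all_words = rng.randn(5000, d)       # embeddings to be debiased

alpha = 2.0
R = bias_words.T @ bias_words / bias_words.shape[0]         # correlation matrix
C = R @ np.linalg.inv(R + alpha ** -2 * np.eye(d))          # conceptor
not_C = np.eye(d) - C                                       # conceptor negation

debiased = all_words @ not_C.T       # softly damp the bias directions
print(debiased.shape)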
0
Social media texts attract a lot of attention in the fields of information extraction and text mining. Although texts of this type contain a lot of information, such as one's reputation or emotions, they often contain non-standard tokens (lexical variants) that are considered out-of-Vocabulary (OOV) terms. We define an OOV as a word that does not exist in the dictionary. Texts in micro-blogging services such as Twitter are particularly apt to contain words written in a non-standard style, e.g., by lengthening them ("goooood" for "good") or abbreviating them ("thinkin' " for "thinking"). This is also seen in the Japanese language, which has standard word forms and variants of them that are often used in social media texts. To take one word as an example, the standard form is (oishii, "It is delicious") and its variants include (oishiiiii), (oishii), and (oishii), where the underlined characters are the differences from the standard form. Such non-standard tokens often degrade the accuracy of existing language processing systems, which are trained using a clean corpus.Almost all text normalization tasks for languages other than Japanese (e.g., English), aim to replace the non-standard tokens that are explicitly segmented using the context-appropriate standard words (Han et al. (2012) , Han and Baldwin (2011) , Hassan and Menezes (2013) , Li and Liu (2012) , , , Pennell and Liu (2011) , Cook and Stevenson (2009) , Aw et al. (2006) ). On the other hand, the problem is more complicated in Japanese morphological analysis because Japanese words are not segmented by explicit delimiters. In traditional Japanese morphological analysis, word segmentation and part-of-speech (POS) tagging are simultaneously estimated. Therefore, we have to simultaneously analyze normalization, word segmentation, and POS tagging to estimate the normalized form using the context information. For example, the input (pan-keiki oishiiii, "This pancake tastes good") written in the standard form is (pan-keiki oishii). The result obtained with the conventional Japanese morphological analyzer MeCab (Kudo (2005) ) for this input is (pancake, noun)/ (unk)/ (unk)/ (unk)/, where slashes indicate the word segmentations and "unk" means an unknown word. As this result shows, Japanese morphological analyzers often fail to correctly estimate the word segmentation if there are unknown words, so the pipeline method (e.g., first estimating the word segmentations and then estimating the normalization forms) is unsuitable.Moreover, Japanese has several writing scripts, the main ones being Kanji, Hiragana, and Katakana. Each word has its own formal written script (e.g., (kyoukasyo, "textbook") as formally written in Kanji), but in noisy text, there are many words that are intentionally written in a different script (e.g., (kyoukasyo, "textbook") is the Hiragana form of ). These tokens written in different script also degrade the performance of existing systems because dictionaries basically include only the standard script. Unlike the character-level variation we described above, this type of variation occurs on a word level one. Therefore, there are both character-level and word-level non-standard tokens in Japanese informal written text. Several normalization approaches have been applied to Japanese text. Sasano et al. (2013) and Oka et al. 
(2011) introduced simple character level derivational rules for Japanese morphological analysis that are used to normalize specific patterns of non-standard tokens, such as for word lengthening and lower-case substitution. Although these approaches handle Japanese noisy text fairly effectively, they can handle only limited kinds of non-standard tokens.We propose a novel method of normalization in this study that can handle both character-and wordlevel lexical variations in one model. Since it automatically extracts character-level transformation patterns in character-level normalization, it can handle many types of character-level transformations. It uses two steps (character-and word-level) to generate normalization candidates, and then formulates a cost function of the word sequences as a discriminative model. The contributions this research makes can be summarized by citing three points. First, the proposed system can analyze a wider variety of non-standard token patterns than the conventional system by using our two-step normalization candidate generation algorithms. Second, it can largely improve the accuracy of Japanese morphological analysis for non-standard written text by simultaneously performing the normalization and morphological analyses. Third, it can automatically extract character alignments and in so doing reduces the cost of manually creating many types of transformation patterns. The rest of this paper is organized as follows. Section 2 describes the background to our research, including Japanese traditional morphological analysis, related work, and data collection methods. Section 3 introduces the proposed approach, which includes lattice generation and formulation, as a discriminative model. Section 4 discusses experiments we performed and our analyses of the experimental results. Section 5 concludes the paper with a brief summary and a mention of future work.
0
Many words have multiple meanings, and the process of identifying the correct meaning, or sense of a word in context, is known as word sense disambiguation (WSD). Among the various approaches to WSD, corpus-based supervised machine learning methods have been the most successful to date. With this approach, one would need to obtain a corpus in which each ambiguous word has been manually annotated with the correct sense, to serve as training data.However, supervised WSD systems faced an important issue of domain dependence when using such a corpus-based approach. To investigate this, Escudero et al. (2000) conducted experiments using the DSO corpus, which contains sentences drawn from two different corpora, namely Brown Corpus (BC) and Wall Street Journal (WSJ). They found that training a WSD system on one part (BC or WSJ) of the DSO corpus and applying it to the other part can result in an accuracy drop of 12% to 19%. One reason for this is the difference in sense priors (i.e., the proportions of the different senses of a word) between BC and WSJ. For instance, the noun interest has these 6 senses in the DSO corpus: sense 1, 2, 3, 4, 5, and 8. In the BC part of the DSO corpus, these senses occur with the proportions: 34%, 9%, 16%, 14%, 12%, and 15%. However, in the WSJ part of the DSO corpus, the proportions are different: 13%, 4%, 3%, 56%, 22%, and 2%. When the authors assumed they knew the sense priors of each word in BC and WSJ, and adjusted these two datasets such that the proportions of the different senses of each word were the same between BC and WSJ, accuracy improved by 9%. In another work, Agirre and Martinez (2004) trained a WSD system on data which was automatically gathered from the Internet. The authors reported a 14% improvement in accuracy if they have an accurate estimate of the sense priors in the evaluation data and sampled their training data according to these sense priors. The work of these researchers showed that when the domain of the training data differs from the domain of the data on which the system is applied, there will be a decrease in WSD accuracy.To build WSD systems that are portable across different domains, estimation of the sense priors (i.e., determining the proportions of the different senses of a word) occurring in a text corpus drawn from a domain is important. McCarthy et al. (2004) provided a partial solution by describing a method to predict the predominant sense, or the most frequent sense, of a word in a corpus. Using the noun interest as an example, their method will try to predict that sense 1 is the predominant sense in the BC part of the DSO corpus, while sense 4 is the predominant sense in the WSJ part of the corpus.In our recent work (Chan and Ng, 2005b) , we directly addressed the problem by applying machine learning methods to automatically estimate the sense priors in the target domain. For instance, given the noun interest and the WSJ part of the DSO corpus, we attempt to estimate the proportion of each sense of interest occurring in WSJ and showed that these estimates help to improve WSD accuracy. In our work, we used naive Bayes as the training algorithm to provide posterior probabilities, or class membership estimates, for the instances in the target domain. These probabilities were then used by the machine learning methods to estimate the sense priors of each word in the target domain.However, it is known that the posterior probabilities assigned by naive Bayes are not reliable, or not well calibrated (Domingos and Pazzani, 1996) . 
These probabilities are typically too extreme, often being very near 0 or 1. Since these probabilities are used in estimating the sense priors, it is important that they are well calibrated.In this paper, we explore the estimation of sense priors by first calibrating the probabilities from naive Bayes. We also propose using probabilities from another algorithm (logistic regression, which already gives well calibrated probabilities) to estimate the sense priors. We show that by using well calibrated probabilities, we can estimate the sense priors more effectively. Using these estimates improves WSD accuracy and we achieve results that are significantly better than using our earlier approach described in (Chan and Ng, 2005b) .In the following section, we describe the algorithm to estimate the sense priors. Then, we describe the notion of being well calibrated and discuss why using well calibrated probabilities helps in estimating the sense priors. Next, we describe an algorithm to calibrate the probability estimates from naive Bayes. Then, we discuss the corpora and the set of words we use for our experiments before presenting our experimental results. Next, we propose using the well calibrated probabilities of logistic regression to estimate the sense priors, and perform significance tests to compare our various results before concluding.
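To make the prior-estimation step concrete, the following numpy sketch shows one standard EM-style procedure for re-estimating class (sense) priors on a target domain from classifier posteriors computed under the training-domain priors. It illustrates the general idea of prior adjustment and assumes the posteriors are already well calibrated; it is not necessarily the exact algorithm or data used in the paper.

# Sketch of EM-style re-estimation of sense priors from calibrated posteriors.
import numpy as np

rng = np.random.RandomState(0)
n_senses = 3
train_priors = np.array([0.5, 0.3, 0.2])

# Posterior estimates p(sense | instance) for target-domain instances; random
# here as a stand-in for calibrated classifier outputs.
post = rng.dirichlet(train_priors * 10, size=1000)

priors = train_priors.copy()
for _ in range(50):
    # E-step: re-weight posteriors by the ratio of new to training priors.
    adj = post * (priors / train_priors)
    adj /= adj.sum(axis=1, keepdims=True)
    # M-step: new priors are the average adjusted posteriors.
    new_priors = adj.mean(axis=0)
    if np.abs(new_priors - priors).max() < 1e-6:
        break
    priors = new_priors

print(priors)   # estimated sense priors in the target domain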
0
Microblogging websites have been a huge source of data containing different kinds of information. Since the users on these microblogging websites tend to write informal real-time messages, they also tend to mix languages, either because they are just being spontaneous and want to ease communication, or because they are multilingual or non-native speakers who mix their native language with the language they are trying to use (Patwa et al., 2020). This type of writing is called Code-Mixing or Code-Switching, and it can be defined as the phenomenon of mixing the vocabulary and syntax of multiple languages in the same sentence (Lal et al., 2019). Sentiment Analysis (SA) is the task of detecting, extracting, and classifying sentiment and opinions (Montoyo et al., 2012). SA can help in structuring data about public opinions on products, brands, or any topic that people can express opinions about, to be used in a very wide set of practical applications, varying from political use, e.g. monitoring public events (Tumasjan et al., 2010), to commercial use, e.g. making decisions in the stock market (Jansen et al., 2009). The task of monolingual sentiment analysis has been a well-studied topic in the literature over the past few decades. However, little attention has been directed to SA based on code-mixed data. In SemEval-2020 Task 9: Sentiment Analysis for Code-Mixed Social Media Text (Patwa et al., 2020), the organizers provide a dataset of code-mixed tweets with word-level language labels, which we explore in Section 2.1, and with the following sentiment labels: positive, negative, neutral. Given a code-mixed text, the task is to classify the overall sentiment of the input text into one of the three sentiment labels mentioned above. The official evaluation metric for this task is average F1-score. We report experiments made only on the Spanish-English (Spanglish) data, whereas SemEval-2020 Task 9 contains Hindi-English (Hinglish) data as well. The challenges of this shared task can be summarized as follows: a) the relatively small dataset provided makes it hard to train complex models; b) the distribution of target classes is imbalanced in the training data; and c) the characteristics of social media text pose difficulties such as out-of-vocabulary words and ungrammatical sentences due to the spontaneous and informal writing, with lots of extra embedded information in a single sentence (e.g. hashtags, emojis, or repeated characters in a word); some of this embedded information can be utilized to build a better predictive model, while other parts may hurt the model's predictions. We conducted several experiments to tackle this problem. We used Linear SVM, Logistic Regression, and Multinomial Naive Bayes models with TF-IDF feature vectors as input. We also used XLM-RoBERTa (Conneau et al., 2020), a transformer-based multilingual masked language model trained on 100 languages, fine-tuned on our downstream SA task, which achieves our highest score, outperforming the official baseline. The rest of the paper is structured as follows: Section 2 introduces some background about the task, an overview of the dataset, and related work. We describe the applied preprocessing steps and the experiments in Section 3. We report the results in Section 4. Section 5 summarizes our work.
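A baseline of the kind mentioned above (TF-IDF features fed to a Linear SVM) can be sketched in a few lines of scikit-learn. The code-mixed texts and labels below are toy examples, not the SemEval data, and the n-gram range is an assumed setting.

# Minimal TF-IDF + Linear SVM sentiment baseline on toy code-mixed text.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = [
    "esta pelicula is amazing me encanto",
    "worst servicio ever no lo recomiendo",
    "la comida estuvo ok nothing special",
    "super happy con mi compra gracias",
]
labels = ["positive", "negative", "neutral", "positive"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("svm", LinearSVC()),
])
clf.fit(texts, labels)
print(clf.predict(["el producto is terrible nunca mas"]))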
0
In this paper, we present our preliminary work on automatic scoring of a summarization task that is designed to measure the reading comprehension skills of students from grades 6 through 9. We first introduce our underlying reading comprehension assessment framework (Sabatini and O'Reilly, In Press; Sabatini et al., In Press) that motivates the task of writing summaries as a key component of such assessments in §2. We then describe the summarization task in more detail in §3. In §4, we describe our approach to automatically scoring summaries written by students for this task and compare the results we obtain using our system to those obtained by human scoring. Finally, we conclude in §6 with a brief discussion and possible future work.
0
In multi-modal translation, the task is to translate from a source sentence and the image that it describes into a target sentence in another language. As both automatic image captioning systems and crowd captioning efforts tend to mainly yield descriptions in English, multi-modal translation can be useful for generating descriptions of images for languages other than English. In the MeMAD project, multi-modal translation is of interest for creating textual versions or descriptions of audio-visual content. Conversion to text enables both indexing for multi-lingual image and video search, and increased access to the audio-visual materials for visually impaired users. We adapt the Transformer (Vaswani et al., 2017) architecture to use global image features extracted from Detectron, a pre-trained object detection and localization neural network. We use two additional training corpora: MS-COCO (Lin et al., 2014) and OpenSubtitles2018 (Tiedemann, 2009). MS-COCO is multi-modal, but not multi-lingual. We extended it to a synthetic multi-modal and multilingual training set. OpenSubtitles is multilingual, but does not include associated images, and was used as text-only training data. This places our entry in the unconstrained category of the WMT shared task. Details on the architecture used in this work can be found in Section 4.1. Further details on the synthetic data are presented in Section 2. Data sets are summarized in Table 1.
0
Vector space models of semantics frequently employ some form of dimensionality reduction for improvements in representations or computational overhead. Many of the dimensionality reduction algorithms assume that the unreduced word space is linear. However, word similarities have been shown to exhibit many non-metric properties: asymmetry, e.g., North Korea is more similar to Red China than Red China is to North Korea, and non-transitivity, e.g., Cuba is similar to the former USSR, Jamaica is similar to Cuba, but Jamaica is not similar to the USSR (Tversky, 1977). We hypothesize that a non-linear word space model might more accurately preserve these non-metric relationships. To test our hypothesis, we capture the non-linear structure with dimensionality reduction by using Locality Preserving Projection (LPP) (He and Niyogi, 2003), an efficient, linear approximation of Eigenmaps (Belkin and Niyogi, 2002). With this reduction, the word space vectors are assumed to lie on a non-linear manifold that LPP learns in order to project the vectors into a Euclidean space. We measure the effects of using LPP on two basic word space models: the Vector Space Model and a word co-occurrence model. We begin with a brief overview of these word spaces and common dimensionality reduction techniques. We then formally introduce LPP. Following this, we use two experiments to demonstrate LPP's capacity to accurately reduce the dimensionality of word spaces.
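For readers unfamiliar with LPP, the following compact sketch shows the usual computation: build a k-nearest-neighbour adjacency graph over the vectors, form the graph Laplacian, and solve the generalized eigenproblem X^T L X a = lambda X^T D X a, keeping the eigenvectors with the smallest eigenvalues as the projection. Random vectors stand in for a real word space, and the neighbourhood size, target dimensionality and ridge term are illustrative choices.

# Compact numpy/scipy sketch of Locality Preserving Projection.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

rng = np.random.RandomState(0)
X = rng.rand(200, 100)                       # 200 "words", 100 dimensions

# Symmetric kNN adjacency graph with binary weights.
W = kneighbors_graph(X, n_neighbors=10, mode="connectivity").toarray()
W = np.maximum(W, W.T)

D = np.diag(W.sum(axis=1))                   # degree matrix
L = D - W                                    # graph Laplacian

A = X.T @ L @ X
B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # small ridge for numerical stability

vals, vecs = eigh(A, B)                      # generalized eigenproblem (ascending)
projection = vecs[:, :10]                    # keep the 10 smallest eigenvalues
X_reduced = X @ projection
print(X_reduced.shape)                       # (200, 10)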
0
In the area of media analysis, one of the key tasks is collecting detailed information about opinions and attitudes toward specific topics from various sources, both offline (traditional newspapers, archives) and online (news sites, blogs, forums). Specifically, media analysis concerns the following system task: given a topic and a list of documents (discussing the topic), find all instances of attitudes toward the topic (e.g., positive/negative sentiments, or, if the topic is an organization or person, support/criticism of this entity). For every such instance, one should identify the source of the sentiment, the polarity and, possibly, subtopics that this attitude relates to (e.g., specific targets of criticism or support). Subsequently, a (human) media analyst must be able to aggregate the extracted information by source, polarity or subtopics, allowing him to build support/criticism networks etc. (Altheide, 1996). Recent advances in language technology, especially in sentiment analysis, promise to (partially) automate this task. Sentiment analysis is often considered in the context of the following two tasks:
• sentiment extraction: given a set of textual documents, identify phrases, clauses, sentences or entire documents that express attitudes, and determine the polarity of these attitudes (Kim and Hovy, 2004); and
• sentiment retrieval: given a topic (and possibly, a list of documents relevant to the topic), identify documents that express attitudes toward this topic (Ounis et al., 2007).
How can technology developed for sentiment analysis be applied to media analysis? In order to use a sentiment extraction system for a media analysis problem, a system would have to be able to determine which of the extracted sentiments are actually relevant, i.e., it would not only have to identify specific targets of all extracted sentiments, but also decide which of the targets are relevant for the topic at hand. This is a difficult task, as the relation between a topic (e.g., a movie) and specific targets of sentiments (e.g., acting or special effects in the movie) is not always straightforward, in the face of ubiquitous complex linguistic phenomena such as referential expressions (". . . this beautifully shot documentary") or bridging anaphora ("the director did an excellent job"). In sentiment retrieval, on the other hand, the topic is initially present in the task definition, but it is left to the user to identify sources and targets of sentiments, as systems typically return a list of documents ranked by relevance and opinionatedness. To use a traditional sentiment retrieval system in media analysis, one would still have to manually go through the ranked lists of documents returned by the system. To be able to support media analysis, we need to combine the specificity of (phrase- or word-level) sentiment analysis with the topicality provided by sentiment retrieval. Moreover, we should be able to identify sources and specific targets of opinions. Another important issue in the media analysis context is evidence for a system's decision. If the output of a system is to be used to inform actions, the system should present evidence, e.g., highlighting words or phrases that indicate a specific attitude.
Most modern approaches to sentiment analysis, however, use various flavors of classification, where decisions (typically) come with confidence scores, but without explicit support. In order to move towards the requirements of media analysis, in this paper we focus on two of the problems identified above: (1) pinpointing evidence for a system's decisions about the presence of sentiment in text, and (2) identifying specific targets of sentiment. We address these problems by introducing a special type of lexical resource: a topic-specific subjectivity lexicon that indicates specific relevant targets for which sentiments may be expressed; for a given topic, such a lexicon consists of pairs (syntactic clue, target). We present a method for automatically generating a topic-specific lexicon for a given topic and a query-biased set of documents. We evaluate the quality of the lexicon both manually and in the setting of an opinionated blog post retrieval task. We demonstrate that such a lexicon is highly focused, allowing one to effectively pinpoint evidence for sentiment, while being competitive with traditional subjectivity lexicons consisting of (a large number of) clue words. Unlike other methods for topic-specific sentiment analysis, we do not expand a seed lexicon. Instead, we make an existing lexicon more focused, so that it can be used to actually pinpoint subjectivity in documents relevant to a given topic.
0
Recent studies have exposed the vulnerability of ML models to adversarial attacks, small input perturbations which lead to misclassification by the model. Adversarial example generation in NLP (Zhang et al., 2019) is more challenging than in commonly studied computer vision tasks (Szegedy et al., 2014; Kurakin et al., 2017; Papernot et al., 2017) because of (i) the discrete nature of the input space and (ii) the need to ensure semantic coherence with the original text. A major bottleneck in applying gradient based (Goodfellow et al., 2015) or generator model (Zhao et al., 2018) based approaches to generate adversarial examples in NLP is the backward propagation of the perturbations from the continuous embedding space to the discrete token space. We use BERT-MLM to predict masked tokens in the text for generating adversarial examples. The MASK token replaces a word (BAE-R attack) or is inserted to the left/right of the word (BAE-I).Initial works for attacking text models relied on introducing errors at the character level (Ebrahimi et al., 2018; Gao et al., 2018) or adding and deleting words (Li et al., 2016; Liang et al., 2017; Feng et al., 2018) for creating adversarial examples. These techniques often result in unnatural looking adversarial examples which lack grammatical correctness, thereby being easily identifiable by humans.Rule-based synonym replacement strategies (Alzantot et al., 2018; Ren et al., 2019) have recently lead to more natural looking adversarial examples. Jin et al. (2019) combine both these works by proposing TextFooler, a strong black-box attack baseline for text classification models. However, the adversarial examples generated by TextFooler solely account for the token level similarity via word embeddings, and not the overall sentence semantics. This can lead to out-of-context and unnaturally complex replacements (see Table 3 ), which are easily human-identifiable. Consider a simple example: "The restaurant service was poor". Token level synonym replacement of 'poor' may lead to an inappropriate choice such as 'broke', while a context-aware choice such as 'terrible' leads to better retention of semantics and grammaticality.Therefore, a token replacement strategy contingent on retaining sentence semantics using a pow-erful language model (Devlin et al., 2018; Radford et al., 2019) can alleviate the errors made by existing techniques for homonyms (tokens having multiple meanings). In this paper, we present BAE (BERT-based Adversarial Examples), a novel technique using the BERT masked language model (MLM) for word replacements to better fit the overall context of the English language. In addition to replacing words, we also propose inserting new tokens in the sentence to improve the attack strength of BAE. These perturbations in the input sentence are achieved by masking a part of the input and using a LM to fill in the mask (See Figure 1) .Our BAE attack beats the previous baselines by a large margin on empirical evaluation over multiple datasets and models. We show that, surprisingly, just a few replace/insert operations can reduce the accuracy of even a powerful BERT classifier by over 80% on some datasets. Moreover, our human evaluation reveals the improved grammaticality of the adversarial examples generated by BAE over the baseline TextFooler, which can be attributed to the BERT-MLM. To the best of our knowledge, we are the first to use a LM for generating adversarial examples. 
We summarize our contributions as:• We propose BAE, an adversarial example generation technique using the BERT-MLM. • We introduce 4 BAE attack modes by replacing and inserting tokens, all of which are almost always stronger than previous baselines on 7 text classification datasets. • Through human evaluation, we show that BAE yields adversarial examples with improved grammaticality and semantic coherence.
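As a rough illustration of the replace-mode perturbation described above (not the exact BAE implementation), the sketch below uses the Hugging Face transformers fill-mask pipeline with a BERT masked language model to propose context-aware replacement candidates for a chosen token. The model name, example sentence and candidate filtering are assumptions for illustration; a real attack would additionally check semantic similarity and whether the victim classifier's prediction flips.

```python
# Sketch: context-aware replacement candidates via a BERT MLM (BAE-R style).
# Assumes the `transformers` library is installed; model choice is illustrative.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

sentence = "The restaurant service was poor"
tokens = sentence.split()
target_index = tokens.index("poor")

# Replace the target word with the [MASK] token and query the MLM.
masked = tokens.copy()
masked[target_index] = fill_mask.tokenizer.mask_token
candidates = fill_mask(" ".join(masked), top_k=10)

# Keep predictions that differ from the original word.
for cand in candidates:
    word = cand["token_str"].strip()
    if word.lower() != "poor":
        print(word, round(cand["score"], 4))
```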
0
Online social media platforms are dealing with an unprecedented scale of offensive (e.g., hateful, threatening, profane, racist, and xenophobic) language (Twitter; Facebook; Reddit) . Given the scale of the problem, online social media platforms now increasingly rely on machine learning based systems to proactively and automatically detect offensive language (Rosen, 2020; Gadde and Derella, 2020; Kastrenakes, 2019; Hutchinson, 2020) . The research community is actively working to improve the quality of offensive language classification (Zampieri et al., 2020 (Zampieri et al., , 2019b Liu et al., 2019; Nikolov and Radivchev, 2019; Mahata et al., 2019; Arango et al., 2020; Agrawal and Awekar, 2018; Fortuna and Nunes, 2018) . A variety of offensive language classifiers ranging from traditional shallow models (SVM, Random Forest), deep learning models (CNN, LSTM, GRU), to transformerbased models (BERT, GPT-2) have been proposed in prior literature (Liu et al., 2019; Nikolov and Radivchev, 2019; Mahata et al., 2019) . Amongst these approaches, BERT-based transformer models have achieved state-of-the-art performance while ensembles of deep learning models also generally perform well (Zampieri et al., 2019b (Zampieri et al., , 2020 .It remains unclear whether the state-of-the-art offensive language classifiers are robust to adversarial attacks. While adversarial attacks are of broad interest in the ML/NLP community (Hsieh et al., 2019; Behjati et al., 2019) , they are of particular interest for offensive language classification because malicious users can make subtle perturbations such that the offensive text is still intelligible to humans but evades detection by machine learning classifiers. Prior work on the robustness of text classification is limited to analyzing the impact on classifiers of primitive adversarial changes such as deliberate misspellings , adding extraneous spaces (Gröndahl et al., 2018) , or changing words with their synonyms (Jin et al., 2020; Ren et al., 2019; Li et al., 2020) . However, the primitive attacks can be easily defended againsta spell checker can fix misspellings and a word segmenter can correctly identify word boundaries even with extra spaces (Rojas-Galeano, 2017; . Additionally, a normal synonym substitution will not theoretically hold for offensive language as less offensive language will be substituted and thus meaning will be lost. Crucially, we do not know how effective these text classifiers are against crafty adversarial attacks employing more advanced strategies for text modifications.To address this gap, we analyze the robustness of offensive language classifiers against an adversary who uses a novel word embedding to identify word replacements and a surrogate offense classifier in a black-box setting to guide modifications. This embedding is purpose-built to evade offensive language classifiers by leveraging an evasion collection that comprises of evasive offensive text gathered from online social media. Using this embedding, the adversary modifies the offensive text while also being able to preserve text readability and semantics. We present a comprehensive evaluation of the state-of-the-art BERT and CNN/LSTM based offensive language classifiers, as well as an offensive lexicon and Google's Perspective API, on two datasets.We summarize our key contributions below.• We systematically study the ability of an adversary who uses a novel, crafty strategy to attack and bypass offensive language classifiers. 
The adversary first builds a new embedding from a special evasion collection, then uses it alongside a surrogate offensive language classifier deployed in black-box mode to launch the attack.• We explore variations of our adversarial strategy. These include greedy versus attention based selection of text words to replace. These also include two different versions of embeddings for word substitutions.• We evaluate robustness of state-of-the-art offensive language classifiers, as well as a real-world offensive language classification system on two datasets from Twitter and Reddit. Our results show that 50% of our attacks cause an accuracy drop of ≥ 24% and 69% of attacks cause drops ≥ 20% against classifiers across datasets.Ethics Statement: We acknowledge that our research demonstrating attacks against offensive language classifiers could be used by bad agents. Our goal is to highlight the vulnerability within offensive language classifiers. We hope our work will inspire further research to improve their robustness against the presented and similar attacks.
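The toy sketch below illustrates the general shape of a greedy, black-box word-replacement attack guided by a surrogate classifier. It is illustrative only: the neighbour lists and the scoring function are invented stand-ins for the paper's purpose-built evasion embedding and surrogate offensive language classifier.

```python
# Generic greedy word-replacement sketch (illustrative only; the paper's attack
# uses a purpose-built evasion embedding and a surrogate offense classifier).
neighbours = {          # assumed nearest neighbours in some embedding space
    "stupid": ["st00pid", "stoopid"],
    "idiot": ["id1ot", "idi0t"],
}

def surrogate_offense_score(text):
    """Stand-in for a black-box surrogate classifier: returns a pseudo
    probability that `text` is offensive (here: fraction of flagged words)."""
    flagged = {"stupid", "idiot"}
    words = text.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

def greedy_attack(text):
    words = text.split()
    for i, w in enumerate(words):
        for sub in neighbours.get(w.lower(), []):
            trial = words.copy()
            trial[i] = sub
            if surrogate_offense_score(" ".join(trial)) < surrogate_offense_score(" ".join(words)):
                words = trial          # keep the substitution that lowers the score
                break
    return " ".join(words)

print(greedy_attack("you are a stupid idiot"))
```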
0
Task #2 of SemEval 2012 focuses on measuring the degree of relational similarity between reference word pairs (training) and test pairs for a given class (Jurgens et al., 2012). The training data set consists of 10 classes and the test data set consists of 69 classes. These datasets, as well as the particularities of the task, are described in detail in the overview paper (Jurgens et al., 2012). In this paper we report the approach submitted to the competition, which is based on a vector space model representation for each pair (Salton et al., 1975). With respect to the type of features used, Celli (2010) argues that contextual information is useful, alongside lexical and semantic information, for the task of extracting semantic relationships. Additionally, Chen et al. (2010) and Negri and Kouylekov (2010) propose WordNet-based features for the same purpose. In the experiments carried out in this paper, we use a set of lexical, semantic, WordNet-based and contextual features to construct the vectors. In particular, we have tested a subset of the 20 contextual features proposed by Celli (2010) and some of those proposed by Chen et al. (2010) and Negri and Kouylekov (2010). The cosine similarity measure is used to determine the degree of relational similarity among the vectors (Frakes and Baeza-Yates, 1992). The rest of this paper is structured as follows. Section 2 describes the system employed. Section 3 shows the obtained results. Finally, Section 4 presents the conclusions.
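The small sketch below shows the cosine-similarity step in isolation: a reference pair vector is compared with test pair vectors and the test pairs are ranked by similarity. The feature values are invented for illustration; in the described system they come from the lexical, semantic, WordNet-based and contextual features.

```python
# Sketch: cosine similarity between feature vectors of word pairs.
# Feature values are invented; the pair names are purely illustrative.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

reference_pair = np.array([0.9, 0.1, 0.4, 0.7])   # e.g. "car:engine"
test_pairs = {
    "house:roof": np.array([0.8, 0.2, 0.5, 0.6]),
    "dog:bark":   np.array([0.1, 0.9, 0.2, 0.3]),
}

# Rank test pairs by relational similarity to the reference pair.
for pair, vec in sorted(test_pairs.items(),
                        key=lambda kv: cosine(reference_pair, kv[1]),
                        reverse=True):
    print(pair, round(cosine(reference_pair, vec), 3))
```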
0
Identification of cognates and recurrent sound correspondences is a component of two principal tasks of historical linguistics: demonstrating the relatedness of languages, and reconstructing the histories of language families. Manually compiling the list of cognates is an error-prone and time-consuming task. Several methods for constructing comparative dictionaries have been proposed and applied to specific language families: Algonquian (Hewson, 1974) , Yuman (Johnson, 1985) , Tamang (Lowe and Mazaudon, 1994) , and Malayo-Javanic (Oakes, 2000) . Most of those methods crucially depend on previously determined regular sound correspondences; each of them was both developed and tested on a single language family. Kondrak (2002) proposes a number of algorithms for automatically detecting and quantifying three characteristics of cognates: recurrent sound correspondences, phonetic similarity, and semantic affin-ity. The algorithms were tested on two well-studied language families: Indo-European and Algonquian. In this paper, we apply them instead to a set of languages whose mutual relationship is still being investigated. This is consistent with the original research goal of providing tools for the analysis of relatively unfamiliar languages represented by word lists. We show that by combining expert linguistic knowledge with computational analysis, it is possible to quickly identify a large number of cognate sets within a relatively little-studied language family.The experiments reported in this paper were performed in the context of the Upper Necaxa Totonac Project (Beck, 2005) , of which one of the authors is the principal investigator. Upper Necaxa is a seriously endangered language spoken by around 3,400 indigenous people in Puebla State, Mexico. The primary goal of the project is to document the language through the compilation of an extensive dictionary and other resources, which may aid revitalization efforts. One aim of the project is the investigation of the relationship between Upper Necaxa Totonac and the other languages of the Totonac-Tepehua language family, whose family tree is not yet wellunderstood.The paper is organized as follows. In Section 2, we provide background on the Totonac-Tepehua family. Section 3 describes our data sets. In Section 4, we outline our algorithms. In Section 5, we report on a pilot study involving only two languages. In Section 6, we present the details of our system that generates a comparative dictionary involving five languages. Section 7 discusses the practical significance of our project.
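As a very rough stand-in for the phonetic-similarity component (not Kondrak's actual algorithms), the sketch below flags cognate candidates from two word lists by normalized edit distance between their forms. The word forms and the threshold are invented for illustration.

```python
# Simple sketch: flag cognate candidates by normalized edit distance between
# word forms. A rough stand-in only; the forms below are invented.
def edit_distance(a, b):
    dp = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
          for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,
                           dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return dp[len(a)][len(b)]

def similarity(a, b):
    return 1 - edit_distance(a, b) / max(len(a), len(b))

word_list_a = ["makgalh", "tsapul"]     # language A (invented forms)
word_list_b = ["makgalha", "luwan"]     # language B (invented forms)

for wa in word_list_a:
    for wb in word_list_b:
        if similarity(wa, wb) > 0.6:    # assumed threshold
            print("cognate candidate:", wa, "~", wb, round(similarity(wa, wb), 2))
```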
0
The aim of the shared task is to research and develop automatic systems that can help mental health professionals with the process of triaging posts with ideations of depression and/or self-harm. We structured our participation in the CLPsych 2016 shared task in order to focus on different facets of modelling online forum discussions: (i) vector space representations (TF-IDF vs. embeddings); (ii) different text granularities (e.g., sentences vs posts); and (iii) fineversus coarse-grained (FG and CG respectively) labels indicating concern.(i) For our exploration of vector space representations, we explored the traditional TF-IDF feature representation that has been widely applied to NLP. We also investigated the use of post embeddings, which have recently attracted much attention as feature vectors for representing text (Zhou et al., 2015; Salehi et al., 2015) . Here, as in other related work (Guo et al., 2014) , the post embeddings are learned from the unlabelled data as features for supervised classifiers. (ii) Our exploration of text granularity focuses on classifiers for sentences as well as posts. For the sentence-level classifiers, a post is split into sentences as the basic unit of annotation using a sentence segmenter. (iii) To explore the granularity of labels indicating concern, we note that the data includes a set of 12 FG labels representing factors that assist in deciding on whether a post is concerning or not. These are in addition to 4 CG labels.We trained 6 single classifiers based on different combinations of vector space features, text granularities and label sets. We also explored ensemble classifiers (based on these 6 single classifiers), as this is a way of combining the strengths of the single classifiers. We used one of two ensemble methods: majority voting and probability scores over labels. We submitted five different systems as submissions to the shared task. Two of them were based on single classifiers, whereas the remaining three systems used ensemble-based classifiers. We achieved an F1-score of 0.42 using an ensemble classification approach that predicts FG labels of concern. This was the best score obtained by any submitted system in the 2016 shared task.The paper is organised as follows: Section 2 briefly discusses the data of the shared task. Section 3 presents the details of the systems we sub-mitted. Section 4 then shows experimental results. Finally, we summarise our findings in Section 5.
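The toy sketch below illustrates one of the ingredients described above: several TF-IDF post classifiers combined by majority voting. The texts, labels and classifier choices are invented stand-ins and do not reproduce the submitted systems, the shared-task data, or the post-embedding features.

```python
# Toy sketch: TF-IDF post classifiers combined by majority voting.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

posts = ["I feel completely hopeless lately",
         "Looking forward to the weekend hike",
         "I can't cope with this any more",
         "Great news about the exam results"]
labels = ["concerning", "green", "concerning", "green"]   # invented labels

classifiers = [
    make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)),
    make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC()),
    make_pipeline(TfidfVectorizer(), MultinomialNB()),
]
for clf in classifiers:
    clf.fit(posts, labels)

def majority_vote(text):
    votes = [clf.predict([text])[0] for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

print(majority_vote("everything feels pointless"))
```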
0
Transformer-based neural language models, such as BERT (Devlin et al., 2018) , have achieved state-of-the-art performance for a variety of natural language processing (NLP) tasks. Since most are pre-trained on large general domain corpora, many efforts have been made to continue pretaining general-domain language models on clinical/biomedical corpora to derive domain-specific language models Alsentzer et al., 2019; Beltagy et al., 2019 ).Yet, as Gu et al. (2020a) pointed out, in specialized domains such as biomedicine, continued pretraining from generic language models is inferior to domain-specific pretraining from scratch. Continued pre-training from a generic model would break down many of the domain specific terms into sub-words through the Byte-Pair Encoding (BPE) (Gage, 1994) or variants like WordPiece tokenization because these specific terms are not in the vocabulary of the generic pretrained model. A clinical domain-specific pretraining from scratch would derive an in-domain vocabulary as many of the biomedical terms, such as diseases, signs/symptoms, medications, anatomical sites, procedures, would be represented in their original form. Such an improved word-level representation is expected to bring substantial performance gains in clinical domain tasks because the model would learn the characteristics of the term along with its surrounding context as one unit.In our preliminary work on a clinical relation extraction task, we observed a performance gain with the PubMedBERT model (Gu et al., 2020a) which outperformed BioBERT , ClinicalBERT (Alsentzer et al., 2019) , and even some larger general domain models like RoBERTa and BART-large . The performance gain was primarily attributed to PubMedBERT's in-domain vocabulary as we observed that PubMedBERT kept 30% more in-domain words in its vocabulary than BERT. When we swapped PubMedBERT tokenization with BERT or RoBERTa tokenization, the performance of PubMedBERT degraded.Thus, PubMedBERT appears to provide a vocabulary that is helpful to the clinical domain. However, the language of biomedical literature is different from the language of the clinical documents found in electronic medical records (EMRs). In general, a clinical document is written by physicians who have very limited time to express the numerous details of a patient-physician encounter. Many nonstandard expressions, abbreviations, assumptions and domain knowledge are used in clinical notes which makes the text hard to understand outside of the clinical community and presents challenges for automated systems. Pretraining a language model specific to the clinical domain requires large amounts of unlabeled clinical text on par with what the generic models are trained on. Unfortunately, such data are not available to the community. The only available such corpus is MIMIC III used to train ClinicalBERT (Alsentzer et al., 2019) and BlueBERT (Peng et al., 2019) , but it is magnitudes smaller and represents one specialty in medicine -intensive care.Pretraining is agnostic to downstream tasks: it learns representations for all words using a selfsupervised data-rich task. Yet, not all words are important for downstream fine-tuning tasks. Numerous pretrained words are not even used in the fine-tuning step, while important words crucial for the downstream task are not well represented due to insufficient amounts of labeled data. 
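The vocabulary effect described above can be made concrete with a small tokenization comparison: a general-domain tokenizer breaks a biomedical term into several sub-word pieces, while an in-domain tokenizer keeps more terms whole. The checkpoint names below are the public Hugging Face identifiers as we understand them and may change; the example term is invented.

```python
# Sketch: comparing how a general-domain and an in-domain tokenizer split a
# biomedical phrase. Checkpoint names are assumed public identifiers.
from transformers import AutoTokenizer

general = AutoTokenizer.from_pretrained("bert-base-uncased")
in_domain = AutoTokenizer.from_pretrained(
    "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext")

term = "acetaminophen overdose with hepatotoxicity"
print("BERT:      ", general.tokenize(term))
print("PubMedBERT:", in_domain.tokenize(term))
# A term kept whole in the in-domain vocabulary is represented as one unit,
# while the generic vocabulary breaks it into several sub-word pieces.
```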
Many clinical NLP tasks are centered around entities: clinical named entity recognition aims to detect clinical entities (Wu et al., 2017; Elhadad et al., 2015) , clinical negation extraction decides if a certain clinical entity is negated (Chapman et al., 2001; Harkema et al., 2009; Mehrabi et al., 2015) , clinical relation discovery extracts relations among clinical entities (Lv et al., 2016; Leeuwenberg and Moens, 2017) , etc. Though various masking strategies have been employed during pretraining -masking contiguous spans of text (SpanBERT, Joshi et al., 2020; BART, Lewis et al., 2019) , varying masking ratios (Raffel et al., 2019) , building additional neural models to predict which words to mask (Gu et al., 2020b) , incorporating knowledge graphs (Zhang et al., 2019) , masking entities for a named entity recognition task (Ziyadi et al., 2020 ) -none of the masking techniques so far have investigated and focused on clinical entities.Besides transformer-based models, there are other efforts (Beam et al., 2019; to characterize the biomedical/clinical entities at the word embedding level. There are also other statistical methods applied to the downstream tasks. We do not include these efforts in our discussion because the focus of our paper is the investigation of a novel entity-based masking strategy in a transformer-based setting.In this paper, we propose a methodology to produce a model focused on clinical entities: continued pretraining of a model with a broad representation of biomedical terminology (the PubMedBERT model) on a clinical corpus, along with a novel entity-centric masking strategy to infuse domain knowledge in the learning process 1 . We show that such a model achieves superior results on clinical extraction tasks by comparing our entity-centric masking strategy with classic random masking on three clinical NLP tasks: cross-domain negation detetction (Wu et al., 2014) , document time relation (DocTimeRel) classification (Lin et al., 2020b) , and temporal relation extraction (Wright-Bettner et al., 2020) .The contributions of this paper are: (1) a continued pretraining methodology for clinical domain specific neural language models, (2) a novel entitycentric masking strategy to infuse domain specific knowledge, (3) evaluation of the proposed strategies on three clinical tasks: cross-domain negation detection, DocTimeRel classification, and temporal relation extraction, and (4) evaluation of our models on the PubMedQA dataset to measure the models' performance on a non-entitycentric task in the biomedical domain.
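The sketch below illustrates the general idea of an entity-centric masking strategy: when choosing positions to mask, prefer tokens that fall inside clinical entity spans and fall back to other tokens otherwise. The sampling scheme, ratios and example spans are assumptions made for illustration and do not reproduce the paper's exact procedure.

```python
# Sketch of an entity-centric masking strategy: prefer masking tokens inside
# entity spans, fall back to other tokens otherwise (illustrative only).
import random

def choose_mask_positions(tokens, entity_spans, mask_ratio=0.15, entity_prob=0.8):
    """entity_spans: list of (start, end) token index ranges for entities."""
    entity_positions = [i for s, e in entity_spans for i in range(s, e)]
    other_positions = [i for i in range(len(tokens)) if i not in set(entity_positions)]
    n_mask = max(1, int(len(tokens) * mask_ratio))
    chosen = []
    for _ in range(n_mask):
        pool = entity_positions if (entity_positions and random.random() < entity_prob) \
               else other_positions
        if not pool:
            break
        pos = random.choice(pool)
        pool.remove(pos)
        chosen.append(pos)
    return sorted(chosen)

tokens = "the patient denies chest pain or shortness of breath".split()
entity_spans = [(3, 5), (6, 9)]          # "chest pain", "shortness of breath"
positions = choose_mask_positions(tokens, entity_spans)
masked = [("[MASK]" if i in positions else t) for i, t in enumerate(tokens)]
print(" ".join(masked))
```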
0
In recent years, optical character recognition of printed text has reached high accuracy rates for modern fonts. However, historical documents still pose a challenge for character recognition and OCR of those documents still does not yield satisfying results. This is a problem for all researchers who would like to use those documents as a part of their research.The main reasons why historical documents still pose a challenge for OCR are: fonts differ in different materials, lack of orthographic standard (same words spelled differently), material quality (some documents can have deformations) and a lexicon of known historical spelling variants is not available (although if they were, they might not give any OCR advantage for morphologically rich languages as noted by Silfverberg and Rueter (2015) , but they can be useful in the post-processing phase).The leading software frameworks for OCR are commercial ABBYY FineReader 1 and two open source frameworks: Ocropy 2 (previously known as OCRopus) and Tesseract 3 . Springmann et al. (2014) experiment with these three and compare their performance on five pages of historical printings of Latin texts. The mean character accuracy they achieve is 81.66% for Ocropy, 80.57% for ABBYY FineReader, and 78.77% for Tesseract.However, Finnish historical documents are mainly written in Gothic (Fraktur) font, which is harder to recognize. The National Library of Finland has scanned, segmented and performed OCR on their historical newspaper corpus with ABBYY FineReader. On a test set that is representative of the bulk of the Finnish material, AB-BYY FineReader's recognition accuracy is only 90.16%.In this work we test how Ocropy performs optical character recognition on historical Finnish documents. We achieve a character accuracy of 93.50% with Ocropy when training with Finnish historical data. Additionally, we also wanted to find out whether any further improvement in the OCR quality could be achieved by performing OCR post-correction with an unstructured classifier and a lexicon on the Ocropy output.Our experiments show that already with a relatively small training set (around 10,000 lines) we can get over 93% accuracy with Ocropy and with additional post-correction, the accuracy goes beyond 94%. With two training sets combined (around 60,000 lines), we get accuracy even over 95%.In Springmann et al. (2014) , they apply different OCR methods to historical printings of Latin text and get the highest accuracies when using Ocropy. Some work on Fraktur fonts has been reported in Breuel et al. (2013) where models were trained on artificial training data and got high accuracies when tested on scanned books with Fraktur text.In Shafait (2009) , alongside with the overview of different OCR methods, they present the architecture of Ocropy and explain different steps of a typical OCR process.Approaches to OCR post-processing are numerous and commonly rely on an error model for generating correction candidates. A language model may be incorporated to model output-level character dependencies. A lexicon can be used to determine which suggestions are valid words of the language -historical OCR may pose a challenge here if lexical resources are scarce. The postprocessing method used in our work is described by Silfverberg et al. (2016) and can be described as an unstructured classifier. 
While the method is relatively simple from both theoretical and computational points of view, as it lacks the language model and segmentation model found in many recently proposed approaches (see e.g. Eger et al. (2016), Llobet et al. (2010)), the classifier nevertheless captures the regularities of character-level errors occurring in OCR output and demonstrably improves the quality of the processed text. A more detailed comparison with other OCR post-processing methods can be found in Silfverberg et al. (2016). We divided the DIGI data set into three parts: 9,345 images and lines were allocated to training, 1,038 served as development data and the remaining 2,046 lines were reserved for testing. The motivation for splitting the data this way is practical: we initially had separate sets of 10,383 and 2,046 lines, so we decided to take 10% of the larger set as development data and 90% as training data, and to use the smaller set for testing. The NATLIB data, on the other hand, was split completely at random into three parts: 43,704 lines were used for training, 100 as the development set and 5,308 as the test set. In this case we used a very small development set because our previous experience with the DIGI data showed that recognition of a large number of lines can be quite slow, and since we had to run recognition for all saved models to find the best one, we decided to save time by reducing the size of the development set.
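The character accuracy figures quoted above can be computed from the character-level edit distance between the OCR output and the ground-truth line. The sketch below shows one common definition of this metric, assumed here for illustration; the example strings are invented.

```python
# Sketch: character accuracy of OCR output, computed from the character-level
# edit distance to the ground truth (one common definition, assumed here).
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def char_accuracy(ocr_line, gold_line):
    return 1 - edit_distance(ocr_line, gold_line) / max(len(gold_line), 1)

gold = "Suomen historialliset sanomalehdet"
ocr  = "Suomen histurialliset sanomalehdct"
print(round(char_accuracy(ocr, gold), 4))   # ~0.94 for two wrong characters
```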
0
Paraphrase identification is an area of research with a long history. Approaches to the task can be divided into supervised methods, such as (Madnani et al., 2012) , currently the most commonly used, and unsupervised techniques (Socher et al., 2011) .While many approaches of both types use carefully selected features to determine similarity, such as string edit distance (Dolan et al., 2004) ) or longest common subsequence (Fernando and Stevenson, 2008) , several recent supervised approaches apply Neural Networks to the task (Filice et al., 2015; He et al., 2015) , often linking it to the related issue of semantic similarity (Tai et al., 2015; Yin and Schütze, 2015) .Traditionally, paraphrase detection has been formulated as a binary problem. Corpora employed in this work contain pairs of sentences labeled as paraphrase or non-paraphrase. The most representative of these corpora, such as the Microsoft Paraphrase Corpus (Dolan et al., 2004) , conform to this paradigm.This approach is different from the one adopted in semantic similarity datasets, where a pair of words or sentences is labeled on a gradient classification system. In some cases, semantic similarity tasks overlap with paraphrase detection, as in Xu et al. (2015) and in Agirre et al. (2016) . Xu et al. (2015) is one of the first works that tries to connect paraphrase identification with semantic similarity. They define a task where the system generates both a binary judgment and a gradient score for sentences pairs.We present a new dataset for paraphrase identification which is built on two main ideas: (i) Paraphrase recognition is a gradient classification task. (ii) Paraphrase recognition is an ordering problem, where sets of sentences are ranked by similarity with respect to a reference sentence.While the first assumption is shared by some of the work we have cited here, our corpus is, to the best of our knowledge, the first one constructed on the basis of the second claim.We believe that annotating sets of sentences for similarity with respect to a reference sentence can help with both the learning and the testing processes in paraphrase identification.We use this corpus to test a neural network architecture formed by a combination of Convolutional Neural Networks (CNNs) and Long Short Term Memory Recurrent Neural Networks (LSTM RNNs) . We test this model on two classification problems: (i) binary paraphrase classification, and (ii) paraphrase ranking. We show that our system can achieve a significant correlation to human paraphrase judgments on the ranking task as a by-product of supervised binary learning.
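The minimal PyTorch sketch below shows one plausible shape of a CNN-plus-LSTM sentence encoder used for binary paraphrase classification: a convolution over word embeddings followed by an LSTM, with the two sentence vectors concatenated for a sigmoid output. All sizes and the pairing scheme are invented and do not reproduce the authors' exact architecture.

```python
# Minimal PyTorch sketch of a CNN + LSTM encoder for binary paraphrase
# classification; hyperparameters are invented for illustration.
import torch
import torch.nn as nn

class CnnLstmEncoder(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, conv_dim=64, hid_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, conv_dim, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_dim, hid_dim, batch_first=True)

    def forward(self, token_ids):                     # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)       # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq_len, conv_dim)
        _, (h, _) = self.lstm(x)                      # h: (1, batch, hid_dim)
        return h.squeeze(0)                           # sentence vector

class ParaphraseClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = CnnLstmEncoder()
        self.out = nn.Linear(2 * 128, 1)

    def forward(self, sent_a, sent_b):
        va, vb = self.encoder(sent_a), self.encoder(sent_b)
        return torch.sigmoid(self.out(torch.cat([va, vb], dim=-1)))

model = ParaphraseClassifier()
a = torch.randint(0, 5000, (2, 12))   # two toy sentence pairs of length 12
b = torch.randint(0, 5000, (2, 12))
print(model(a, b).shape)              # torch.Size([2, 1])
```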
0
A great deal of linguistic knowledge is encoded implicitly in bilingual resources such as parallel texts and bilingual dictionaries. Dyvik (1998 Dyvik ( , 2005 has provided a knowledge discovery method based on the semantic relationship between words in a source language and words in a target language, as manifested in parallel texts. His method is called Semantic mirroring and the approach utilizes the way that different languages encode lexical meaning by mirroring source words and target words back and forth, in order to establish semantic relations like synonymy and hyponymy. Work in this area is strongly related to work within Word Sense Disambiguation (WSD) and the observation that translations are a good source for detecting such distinctions (Resnik & Yarowsky 1999 , Ide 2000 , Diab & Resnik 2002 . A word that has multiple meanings in one language is likely to have different translations in other languages. This means that translations serve as sense indicators for a particular source word, and make it possible to divide a given word into different senses.In this paper we propose a new graph-based approach to the analysis of semantic mirrors. The objective is to find a viable way to discover synonyms and group them into different senses. The method has been applied to a bilingual dictionary of English and Swedish adjectives.
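The mirroring operation itself can be sketched with two tiny bilingual dictionaries: translate a source word into the target language (its t-image) and mirror those translations back (the inverse t-image) to collect candidate synonyms. The dictionaries below are invented for illustration and are far smaller than the English–Swedish adjective dictionary used in the paper.

```python
# Toy sketch of the semantic-mirroring idea; the tiny dictionaries are invented.
en_to_sv = {
    "big":   ["stor", "kraftig"],
    "large": ["stor", "omfattande"],
    "heavy": ["tung", "kraftig"],
}
sv_to_en = {
    "stor": ["big", "large"],
    "kraftig": ["big", "heavy", "powerful"],
    "omfattande": ["large", "extensive"],
    "tung": ["heavy"],
}

def inverse_t_image(word):
    """Mirror a source word through its translations and back."""
    mirrored = set()
    for trans in en_to_sv.get(word, []):          # t-image of the word
        mirrored.update(sv_to_en.get(trans, []))  # inverse t-image
    return mirrored

print("candidate synonym set for 'big':", inverse_t_image("big"))
# Words sharing translations (here 'large', 'heavy') become synonym candidates,
# which can then be grouped into senses, e.g. with a graph-based analysis.
```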
0
One of the important reasons for communication is the desire on the part of the members to express their emotions (Millar 1951) . Language is the effective tool to carry out this task and speech is the most efficient mode of language communication between humans. In the recent years, emotion recognition in human speech is more important because of human computer interaction as in automatic dialogue systems or robotic interactions (Cowie et al., 2001) . In interactive applications, detection of emotions such as frustration, boredom or annoyance in the speaker's voice helps to adapt the system response, making the system more effective. Speech carries a lot of information over and above the text content in the language. Speaker's voice expresses the physical and emotional state, sex, age, intelligence and personality (Kramel, 1963) . Emotion is intimately connected with cognition and many physiological indices change during emotion arousal (Lindsay and Norman, 1972) . The task of speech emotion recognition is challenging as it is not clear which speech features are effective in distinguishing a large range and shades of emotions over a range of human voices and context. How a certain emotion is expressed generally depends on the speaker, his or her culture and environment (Ayadi et al., 2011) . Therefore, integration of acoustic and linguistic information has been tried out. (Lee and Pieraccini, 2002, Schuller et al. 2004) . Spoken dialogue and written language are very different due to many paralinguistic aspects such as the emotiphons, defined and discussed in this paper.In this study, we examine specific lexical expressions in Indian languages conveying emotion, referred to as emotiphons. This is the first attempt of its kind to list and study these lexical expressions. We consider two Indian languages, namely Marathi from Indo-Aryan family and Kannada from Dravidian family, whose people are culturally very connected. This data across languages and their acoustic correlates would throw light on the flow of information from the prosodic level to the highest cognitive level of speech processing, in general, and emotional speech processing in particular.The following section describes the role of emotiphons in emotion recognition. Section 3 lists emotiphons in Marathi and Kannada. Section 4 mentions the observations along with discussion. Section 5 states conclusions.Speech and emotion Cowie and Cornelius (2003) have described issues related to speech and emotion in great details, covering the basic concepts and relevant techniques to study conceptual approaches. It is well recognized that emotion analysis in human communication is multi-faceted and varied. It is also intertwined with the culture of the language users.
0
Multi-document summarization (MDS) is the summarization of a collection of related documents (Mani (1999) ). Its application includes the summarization of a news story from different sources where document sources are related by the theme or topic of the story. Another application is the tracking of news stories from the single source over different time frame. In this case, documents are related by topic over time.Multi-document summarization is also an extension of single document summarization. One of the most robust and domain-independent summarization approaches is extraction-based or shallow summarization (Mani (1999) ). In extraction-based summarization, salient sentences are automatically extracted to form a summary directly (Kupiec et. al, (1995) , Myaeng & Jang (1999) , Jing et. al, (2000) , Nomoto & Matsumoto (2001 , Zha (2002) , Osborne (2002) ), or followed by a synthesis stage to generate a more natural summary (McKeown & Radev (1999) , Hovy & Lin (1999) ). Summarization therefore involves some theme or topic identification and then extraction of salient segments in a document.Story segmentation, document and sentence and classification can often be accomplished by unsupervised, clustering methods, with little or no requirement of human labeled data (Deerwester (1991) , White & Cardie (2002) , Jing et. al (2000) ).Unsupervised methods or hybrids of supervised and unsupervised methods for extractive summarization have been found to yield promising results that are either comparable or superior to supervised methods (Nomoto & Matsumoto (2001 ). In these works, vector space models are used and document or sentence vectors are clustered together according to some similarity measure (Deerwester (1991) , Dagan et al. (1997) ).The disadvantage of clustering methods lies in their ad hoc nature. Since sentence vectors are considered to be independent sample points, the sentence order information is lost. Various heuristics and revision strategies have been applied to the general sentence selection schema to take into consideration text cohesion (White & Cardie (2002) , Mani and Bloedorn (1999) , Aone et. al (1999) , Zha (2002) , Barzilay et al., (2001) ). We would like to preserve the natural linear cohesion of sentences in a text as a baseline prior to the application of any revision strategies.To compensate for the ad hoc nature of vector space models, probabilistic approaches have regained some interests in information retrieval in recent years (Knight & Marcu (2000) , Berger & Lafferty (1999 ), Miller et al., (1999 ). These recent probabilistic methods in information retrieval are largely inspired by the success of probabilistic models in machine translation in the early 90s (Brown et. al), and regard information retrieval as a noisy channel problem. Hidden Markov Models proposed by Miller et al. (1999) , and have shown to outperform tf, idf in TREC information retrieval tasks. The advantage of probabilistic models is that they provide a more rigorous and robust framework to model query-document relations than ad hoc information retrieval. Nevertheless, such probabilistic IR models still require annotated training data.In this paper, we propose an iterative unsupervised training method for multi-document extractive summarization, combining vectors space model with a probabilistic model. 
We iteratively classify news articles, then paragraphs within articles, and finally sentences within paragraphs into common story themes, by using modified K-means (MKM) clustering and segmental K-means (SKM) decoding. We obtain an initial clustering of article classes by MKM, which determines the inherent number of theme classes of all news articles. Next, we use SKM to classify paragraphs and then sentences. SKM iterates between a k-means clustering step, and a Viterbi decoding step, to obtain a final classification of sentences into theme classes. Our MKM-SKM paradigm combines vector space clustering model with a probabilistic framework, preserving some of the natural sentence cohesion, without the requirement of annotated data. Our method also avoids any arbitrary or ad hoc setting of parameters.In section 2, we introduce the modified K-means algorithm as a better alternative than conventional K-means for document clustering. In section 3 we present the stochastic framework of theme classification and sentence extraction. We describe the training algorithm in section 4, where details of the model parameters and Viterbi scoring are presented. Our sentence selection algorithm is described in Section 5. Section 6 describes our evaluation experiments. We discuss the results and conclude in section 7.
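As a much-simplified illustration of the clustering step only, the sketch below groups TF-IDF sentence vectors into theme classes with plain k-means. It does not reproduce the modified K-means initialization, the segmental K-means iterations, or the Viterbi decoding described above; the sentences are invented.

```python
# Much-simplified sketch: TF-IDF sentence vectors grouped into theme classes
# with k-means (the full MKM-SKM machinery is not reproduced here).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

sentences = [
    "The hurricane made landfall on Tuesday.",
    "Thousands were evacuated before the storm hit.",
    "The company reported record profits this quarter.",
    "Shares rose sharply after the earnings announcement.",
]

vectors = TfidfVectorizer().fit_transform(sentences)
themes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, sent in zip(themes, sentences):
    print(f"theme {label}: {sent}")
```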
0
In recent years, an intense research focus on machine translation (MT) has raised the quality of MT systems to the degree that they are now viable for a variety of real-world applications. Because of this, the research community has turned its attention to a major drawback of such systems: they are still quite slow. Recent years have seen a flurry of innovative techniques designed to tackle this problem. These include cube pruning (Chiang, 2007) , cube growing (Huang and Chiang, 2007) , early pruning (Moore and Quirk, 2007) , closing spans (Roark and Hollingshead, 2008; Roark and Hollingshead, 2009) , coarse-to-fine methods (Petrov et al., 2008) , pervasive laziness (Pust and Knight, 2009) , and many more.This massive interest in speed is bringing rapid progress to the field, but it comes with a certain amount of baggage. Each technique brings its own terminology (from the cubes of (Chiang, 2007) to the lazy lists of (Pust and Knight, 2009) ) into the mix. Often, it is not entirely clear why they work. Many apply only to specialized MT situations. Without a deeper understanding of these methods, it is difficult for the practitioner to combine them and adapt them to new use cases.In this paper, we attempt to bring some clarity to the situation by taking a closer look at one of these existing methods. Specifically, we cast the popular technique of cube pruning (Chiang, 2007) in the well-understood terms of heuristic search (Pearl, 1984) . We show that cube pruning is essentially equivalent to A* search on a specific search space with specific heuristics. This simple observation affords a deeper insight into how and why cube pruning works. We show how this insight enables us to easily develop faster and exact variants of cube pruning for tree-to-string transducer-based MT (Galley et al., 2004; Galley et al., 2006; DeNero et al., 2009) .
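To make the heuristic-search framing concrete, the sketch below implements generic A* best-first search with a priority queue ordered by cost-so-far plus an admissible heuristic. It is not an implementation of cube pruning or of an MT decoder; the toy graph and heuristic values are invented.

```python
# Generic A* sketch: priority queue ordered by cost-so-far + heuristic.
import heapq

def a_star(start, goal, neighbours, heuristic):
    frontier = [(heuristic(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step_cost in neighbours(node):
            heapq.heappush(frontier,
                           (cost + step_cost + heuristic(nxt),
                            cost + step_cost, nxt, path + [nxt]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
h = {"A": 2, "B": 1, "C": 1, "D": 0}        # assumed admissible heuristic
print(a_star("A", "D", lambda n: graph[n], lambda n: h[n]))
# -> (3, ['A', 'B', 'C', 'D'])
```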
0
Over the past couple of years, we have witnessed tremendous advances by language models in solving various Natural Language Processing (NLP) tasks. Most of the time, these models were trained on large datasets, each model pushing the limits of the others. With this success came a dire need to interpret and analyze the behavior of neural NLP models (Belinkov and Glass, 2019). Recently, many works have shown that language models are susceptible to biases present in the training dataset (Sheng et al., 2019). With respect to gender biases, recent work explores the existence of internal biases in language models (Sap et al., 2017; Lu et al., 2020; Vig et al., 2020). Previous work uses prefix template-based prompts to elicit biased behavior from language models. Although synthetic prompts can be crafted to generate desired continuations from the model, they are often too simple to mimic the nuances of Natural Sentence (NS) prompts. Conversely, NS prompts are often more complex in structure but are not crafted to trigger a desired set of continuations from the model. In this paper, we ask: can synthetic datasets accurately reflect the level of bias in language models? Moreover, can we design an evaluation dataset based on natural sentence prompts? We focus on studying the biases between occupation and gender for GPT-2 models, and find that bias evaluation is extremely sensitive to different design choices when curating template prompts. We summarize our contributions as follows (a small probing sketch follows this list): • We collected a real-world natural sentence prompt dataset that can be used to trigger a biased association between professions and gender. • We find that bias evaluations are very sensitive to the design choices of template prompts. Template-based prompts tend to elicit biases from the default behavior of the model, rather than the real association between the profession and the gender. We posit that natural sentence prompts (our dataset) alleviate some of the issues present in template-based prompts (synthetic datasets).
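The sketch below shows one simple way a template-style prompt can be used to probe GPT-2, by comparing the model's next-token probabilities for gendered pronouns. The prompt wording and this single-pronoun probability measure are illustrative simplifications, not the evaluation protocol of the paper.

```python
# Sketch: comparing next-token probabilities of " he" vs " she" after a
# template-style prompt under GPT-2 (illustrative simplification).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The nurse said that"           # assumed template prompt
ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]    # distribution over the next token
probs = torch.softmax(logits, dim=-1)

for pronoun in [" he", " she"]:
    tok = tokenizer.encode(pronoun)[0]   # single token with leading space
    print(pronoun.strip(), float(probs[tok]))
```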
0
We participated in the WMT shared news translation task and focused on two language pairs in both directions: English–Khmer and English–Pashto. We used fairseq as our development tool and the Transformer (Vaswani et al., 2017) as the main architecture. The primary ranking metric for submitted systems is BLEU (Papineni et al., 2002), therefore we use BLEU as the evaluation metric for our translation systems. For Khmer, we use polyglot (https://github.com/aboSamoor/polyglot) as the tokenizer before evaluation. For data preprocessing, the basic pipeline includes punctuation normalization for all languages. Further steps depend on the characteristics of each language: tokenization, truecasing and byte pair encoding (BPE) (Sennrich et al., 2015b) are applied for English, while sentencepiece (Kudo and Richardson, 2018) is applied for Khmer and Pashto. In addition, handcrafted rules, a language model and a RoBERTa model are used to clean the parallel data, monolingual data and synthetic data. Regarding model training techniques, back-translation (Sennrich et al., 2015a) and forward-translation are applied to verify whether they improve translation performance, especially under low-resource conditions. Training a model under low-resource conditions is harder because it suffers from data sparsity and out-of-vocabulary problems. Knowledge distillation (Kim and Rush, 2016) is normally a good way to generate synthetic data, but in this task we estimate that it could only yield 100 thousand to 1 million parallel sentences given the size of the provided data. Therefore, we use forward-translation on monolingual data to generate more synthetic data; here forward-translation refers to translating source sentences into the target language and then cleaning the resulting synthetic data. This paper is arranged as follows. We first describe the task and the data, then introduce our data filtering, including handcrafted rules, the language model and the RoBERTa model. After that, we describe the techniques used for the low-resource condition and report the experiments in detail for all directions, including data preprocessing, model architecture, back-translation and forward-translation. Finally, we analyze the experimental results and draw conclusions.
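The short sketch below illustrates the subword segmentation step with sentencepiece: train a tiny model on a toy corpus and segment a sentence. The file names, corpus and vocabulary size are toy assumptions for illustration and are unrelated to the task data or the configuration used in the submitted systems.

```python
# Sketch: training a tiny sentencepiece model and segmenting a sentence,
# analogous to the subword preprocessing step described above.
import sentencepiece as spm

with open("toy_corpus.txt", "w", encoding="utf-8") as f:
    f.write("this is a small toy corpus for subword segmentation\n" * 50)

spm.SentencePieceTrainer.train(
    input="toy_corpus.txt", model_prefix="toy_bpe",
    vocab_size=40, model_type="bpe")     # toy vocab size for the toy corpus

sp = spm.SentencePieceProcessor()
sp.load("toy_bpe.model")
print(sp.encode_as_pieces("a small toy corpus"))
```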
0
In 2013, World Economic Forum (WEF) listed massive digital misinformation as one of the top global risks likely to occur in 10 years 1 . Unfortunately, we witnessed many unpleasant incidents due to misinformation spread over Internet such as massive stock price changes 2 , gunfights 3 , and others. Since the start of COVID-19 pandemic, we have also observed many incidents showing the value of true information and how misinformation about health issues can be deadly (e.g., misusing disinfectants to prevent coronavirus after Donald Trump suggested injecting disinfectants as treatment 4 ).In order to prevent the negative outcomes of misinformation, many fact-checking websites emerged all over the world in the last decade (Cherubini and Graves, 2016) . The fact-checking websites manually investigate veracity of claims and share their findings with their readers. While they play an important role in the combat against misinformation, their precious journalistic effort is not enough to reduce spread of misinformation and its negative outcomes. While making a claim is so easy, investigating its veracity is highly time consuming, taking around one day (Hassan et al., 2017) . Furthermore, Vosoughi et al. (2018) report that misinformation spread eight times faster than true information. Hence, we need effective solutions to help human fact-checkers and to reduce the negative impact of misinformation.As outline, the first task of a fact-checking system is to detect whether a statement contains a check-worthy claim or not. Considering the massive amount of messages shared on social media platforms, check-worthy claim detection models help human fact-checkers to filter out unimportant claims and use their valuable time to detect veracity of the most important claims. A number of researchers worked on this problem (e.g., (Lespagnol et al., 2019; Hassan et al., 2017; Jaradat et al., 2018) ) and shared tasks for check-worthy claim detection have been organized Atanasova et al., 2019; Barrón-Cedeno et al., 2020) .While researchers showed great interest in factchecking, the available resources are still limited and the vast majority of the studies focused on English. Regarding the task of detecting check-worthy claims, the only available labeled datasets are for English and Arabic Atanasova et al., 2019) . However, as WEF notes in its aforementioned report, misinformation is a global problem affecting all countries. Misinformation can also spread internationally. For instance, during 2019 European elections, same or similar stories have been shared in different languages across European countries (Fletcher et al., 2018) . Hence, in order to have an effective combat against spread of misinformation, we need research studies for a wide range of languages.In this work, we focus on Turkish and introduce TrClaim-19, which is the very first labeled Turkish tweets with the rationales of annotators for checkworthy claim detection task. Turkish is a particularly important language for fact-checking studies because Fletcher et al. (2018) report that 49% of Internet users in Turkey coincide with at least one fake news in a week, which is higher than all other countries investigated in their study. Furthermore, being a member of Altaic language family, Turkish language has different linguistic features than other languages studied for fact-checking, such as being an agglutinative language and having flexible word order structure in sentences. 
In addition to developing a useful resource for the research community, we also seek answers to the following research questions. • RQ-1: What is the level of agreement among non-expert fact-checkers on the check-worthiness of claims? • RQ-2: Do non-experts have different opinions about the check-worthiness of claims than experts? • RQ-3: What are the main rationales for labeling claims as check-worthy? In particular, we first crawled Turkish tweets for 344 days in 2019, tracking important events that happened in Turkey such as the local elections, the earthquake near Istanbul, and the military operation in Syria. Eventually, we gathered around 225 million Turkish tweets. Subsequently, we crawled 765 claims fact-checked by two Turkish fact-checking websites. Next, for each claim, we retrieved three tweets from our tweet crawl using the Lucene search engine library (https://lucene.apache.org/core/). Each retrieved tweet was labeled by three separate annotators. For each tweet, we asked annotators whether it is relevant to the respective claim and whether it contains a check-worthy claim. Inspired by McDonnell et al. (2016), we also asked for their rationale for the tweets labeled as check-worthy. Table 1 summarizes the general features of TrClaim-19: 225M tweets crawled, 2,287 tweets annotated, 875 check-worthy claims, and 26 rationale categories. In total, we collected labels for 2,287 tweets, and 875 of them are labeled as check-worthy when labels are aggregated by majority voting. We observed that agreement among non-experts on the check-worthiness of tweets is low (Fleiss' kappa = 0.23). In 36% of cases, non-experts disagreed with experts on the check-worthiness of claims. Assessors provided rationales in 26 different categories. The rationales we collected suggest that the topics and possible negative impacts of claims are the main factors in making a claim check-worthy. The contributions of our work are as follows (a sketch of the agreement computation follows this list). • We develop and share TrClaim-19, the very first labeled data resource for Turkish check-worthy claim detection. • TrClaim-19 is also the first data resource with annotator rationales for the check-worthy claim detection task, enabling a better understanding of the research problem so that effective solutions can be developed. • We investigate the subjectivity of check-worthiness of claims. In particular, we explore how much non-expert and expert fact-checkers agree on the check-worthiness of claims. • We provide performance results for four models on TrClaim-19 as reference baselines for future studies.
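An agreement score like the Fleiss' kappa reported above can be computed from the per-tweet labels of the three annotators; the sketch below shows one way to do this with statsmodels. The label matrix is invented for illustration and is not the TrClaim-19 data.

```python
# Sketch: computing Fleiss' kappa for three annotators' check-worthiness labels
# (0 = not check-worthy, 1 = check-worthy). The label matrix is invented.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = tweets, columns = annotators
labels = np.array([
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
])

counts, _ = aggregate_raters(labels)   # per-item counts for each category
print(round(fleiss_kappa(counts), 3))
```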
0
As the volume of information present on the internet steeply increases, automatic fact-checking has become a promising approach to identify and stop the spread of misinformation. Active research in this area has been supported in part by a handful of carefully curated datasets (Thorne et al., 2018a; Hanselowski et al., 2019; Wadden et al., 2020) .While these datasets have been playing a crucial role in the development of the latest fact-checkers, they do not faithfully represent the reality in certain aspects. First, these datasets come with short evidence snippets for the claims; most existing factcheckers rely on such sentence-level evidence annotations to build the natural language inference (NLI) component of their systems-e.g. given an evidence sentence and a claim sentence, determine if the claim is supported (Thorne et al., 2018b) . Second, most of the datasets consist of synthetic claims written by annotators based on snippets of * Work done while at the University of Richmond evidence. Both aspects render it difficult to readily apply the findings in real applications. WikiFactCheck-English (Sathe et al., 2020 ) was constructed to address the aforementioned concerns. The dataset consists of 124k entries each consisting of a claim 1 , context, and evidence document extracted from English Wikipedia articles. (See Table 1 for an example.) We believe that the real claims and lengthy evidence documents without sentence-level annotations will lead to factcheckers that can better handle claims in the wild.In this paper, we tackle the NLI subtask-given a document and a (sentence) claim, determine whether the document supports or refutes the claim-only using document-level annotations. We improve on existing systems trained and tested on the WikiFactCheck-English dataset during both steps of the 2-step pipeline: evidence retrieval and support verification. We find that fine-tuned BERT with multiple instance learning (MIL)to use multiple candidate evidence sentencesresults in about 13% increase in accuracy over the baseline. However, incorporating Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) to identify candidate evidence sentences during evidence retrieval does not lead to a noticeable improvement.
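The sketch below illustrates the evidence-retrieval step with Sentence-BERT: embed the claim and the document sentences and rank sentences by cosine similarity as candidate evidence. The model name, example document and top-k value are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch: retrieving candidate evidence sentences for a claim with SBERT
# embeddings and cosine similarity (model name and top-k are assumptions).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

claim = "The bridge was completed in 1937."
document_sentences = [
    "Construction of the bridge began in 1933.",
    "The bridge opened to traffic in 1937 after four years of work.",
    "It remains one of the most photographed bridges in the world.",
]

claim_emb = model.encode(claim, convert_to_tensor=True)
sent_embs = model.encode(document_sentences, convert_to_tensor=True)
scores = util.cos_sim(claim_emb, sent_embs)[0]

top_k = scores.argsort(descending=True)[:2]   # candidate evidence sentences
for idx in top_k:
    print(float(scores[idx]), document_sentences[int(idx)])
```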
0
Named Entity Recognition (NER) is an important tool in almost all NLP application areas such as information retrieval, machine translation, question-answering systems, automatic summarization, etc. Proper identification and classification of NEs is crucial and poses a big challenge to NLP researchers. The level of ambiguity in NER makes it difficult to attain human performance. NER has drawn increasing attention since the NE tasks (Chinchor 95; Chinchor 98) of the Message Understanding Conferences (MUCs) [MUC6; MUC7]. The problem of correct identification of NEs is specifically addressed and benchmarked by the developers of information extraction systems, such as the GATE system (Cunningham, 2001). NER also finds application in question-answering systems (Maldovan et al., 2002) and machine translation (Babych and Hartley, 2003). The current trend in NER is to use machine-learning approaches, which are attractive in that they are trainable and adaptable, and the maintenance of a machine-learning system is much cheaper than that of a rule-based one. Representative machine-learning approaches used in NER are Hidden Markov Models (HMM) (BBN's IdentiFinder in (Bikel, 1999)), Maximum Entropy (New York University's MEME in (Borthwick, 1999)), Decision Trees (New York University's system in (Sekine, 1998)) and Conditional Random Fields (CRFs) (Lafferty et al., 2001). A Support Vector Machine (SVM) based NER system was proposed by Yamada et al. (2002) for Japanese. Their system is an extension of Kudo's chunking system (Kudo and Matsumoto, 2001) that gave the best performance at the CoNLL-2000 shared task. Other SVM-based NER systems can be found in (Takeuchi and Collier, 2002) and (Asahara and Matsumoto, 2003). Named entity identification in Indian languages in general, and in Bengali in particular, is difficult and challenging. In English, NEs always appear with a capitalized letter, but there is no concept of capitalization in Bengali. There has been very little work on NER in Indian languages. For Bengali, work on NER can be found in (Ekbal and Bandyopadhyay, 2007a; Ekbal and Bandyopadhyay, 2007b) using a pattern-directed shallow parsing approach and in (Ekbal et al., 2007c) using an HMM. Beyond Bengali, a CRF-based Hindi NER system can be found in (Li and McCallum, 2004). The rest of the paper is organized as follows. The Support Vector Machine framework is described briefly in Section 2. Section 3 deals with named entity recognition in Bengali, describing the named entity tagset and the features used for NER in detail. Experimental results are presented in Section 4. Finally, Section 5 concludes the paper.
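The toy sketch below shows the general shape of a word-level SVM NER classifier over hand-crafted features. The features, tag labels and tiny training data are invented for illustration and do not reflect the Bengali tagset, feature set, or data used in the paper.

```python
# Toy sketch: word-level NER with an SVM over hand-crafted features.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def word_features(sentence, i):
    w = sentence[i]
    return {
        "word": w.lower(),
        "suffix3": w[-3:],
        "prev": sentence[i - 1].lower() if i > 0 else "<s>",
        "next": sentence[i + 1].lower() if i < len(sentence) - 1 else "</s>",
    }

train = [
    (["Anil", "lives", "in", "Kolkata"], ["PER", "O", "O", "LOC"]),
    (["Sourav", "visited", "Delhi", "yesterday"], ["PER", "O", "LOC", "O"]),
]
X = [word_features(s, i) for s, tags in train for i in range(len(s))]
y = [t for _, tags in train for t in tags]

model = make_pipeline(DictVectorizer(), LinearSVC())
model.fit(X, y)

test = ["Rita", "works", "in", "Mumbai"]
print(list(zip(test, model.predict([word_features(test, i) for i in range(len(test))]))))
```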
0
Research on human perception has shown that world knowledge supports the processing of sensory information (Mitterer et al., 2009; Ishizu, 2013) . For instance, humans have been found to use their knowledge about typical colours of an object when perceiving an instance of that object, in order to compensate for, e.g., perceptually challenging illumination conditions and achieve colour constancy (Mitterer and de Ruiter, 2008; Witzel and Gegenfurtner, 2018) . Thus, the visual perception of object colours can be thought of as leveraging top-down knowledge for bottom-up processing of sensory input, in accordance with traditional approaches in psychology (e.g. Colman, 2009) . The integration of visual information and world knowledge in perception, however, is far from obvious, with views ranging from processing through bidirectionally connected bottom-up and top-down components to the assumption that visual and conceptual representations themselves are inseparably intertwined (Kubat et al., 2009) . A lot of recent work in Language & Vision (L&V) has looked at grounding language in realistic sensory information, e.g. images of complex, real-world scenes and objects (Bernardi et al., 2016; Kafle and Kanan, 2017) . In L&V, however, the use of top-down knowledge has mostly been discussed in the context of zero-shot or few-shot learning scenarios where few or no visual instances of a particular object category are available (Frome et al., 2013; Xian et al., 2018) . 1 We present a simple experiment on language grounding that highlights the great potential of top-down processing even for very common words with a lot of visual instances: we learn to ground colour terms in visual representations of real-world objects and show that model predictions improve strongly when incorporating prior knowledge and assumptions about the object itself. We investigate visual grounding of colour terms by combining bottom-up and top-down modeling components based on early and late fusion strategies, reflecting different interpretations about the integration of visual and conceptual information in human perception. We find that these strategies lead to differ-ent predictions, especially for atypical colours of objects that do have a strong tendency towards a certain colour. 2
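The schematic sketch below contrasts the two fusion strategies in the simplest possible terms: late fusion combines a bottom-up colour prediction with a top-down prior over the object's typical colours, while early fusion would instead feed the prior into the classifier's input. The visual features, priors and weights are toy stand-ins, not the models or data used in the paper.

```python
# Schematic sketch contrasting early and late fusion for colour-term
# prediction. All features, priors and weights are toy stand-ins.
import numpy as np

colours = ["red", "yellow", "green"]

def bottom_up(visual_feat):
    """Toy visual classifier: softmax over a linear score of pixel features."""
    w = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])   # assumed weights
    scores = w @ visual_feat
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Top-down knowledge: typical colour distribution for the object "banana".
object_prior = np.array([0.05, 0.9, 0.05])

visual_feat = np.array([0.6, 0.5])        # ambiguous lighting: reddish-yellow

# Late fusion: combine the bottom-up prediction with the prior multiplicatively.
p_visual = bottom_up(visual_feat)
p_late = p_visual * object_prior
p_late /= p_late.sum()

print("bottom-up only:", dict(zip(colours, p_visual.round(2))))
print("late fusion:   ", dict(zip(colours, p_late.round(2))))
# An early-fusion variant would instead concatenate the prior with the visual
# features and train a single classifier on the joint input.
```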
0
As the world becomes more connected and digitized, cyber attacks are increasingly common and pose serious issues for society. In 2017, a ransomware called WannaCry, which has the capability to lock down data files using strong encryption, spread around the world targeting public utilities and large corporations (Mohurle and Patil, 2017). Another example is the botnet known as Mirai, which used infected Internet of Things (IoT) devices to disable Internet access for millions of users on the US West Coast (US-CERT, 2016) through large-scale Distributed Denial of Service (DDoS) attacks. The impact of these attacks ranges from simple ransomware on personal laptops (Andronio et al., 2015) to taking over the control of moving cars (Checkoway et al., 2011).

Given the importance of cybersecurity in today's context, there is increasing potential for substantial contributions to cybersecurity using natural language processing (NLP) techniques, even though this has not been significantly addressed. We introduced this task as a shared task on SemEval for the first time with the intention of motivating NLP researchers to work on this critical research area. Even though there exists a large repository of malware-related texts online, the sheer volume and diversity of these texts make it difficult for NLP researchers to quickly move into this research field. Another challenge is that most of the data is unannotated. Lim et al. (2017) introduced a dataset of annotated malware reports to facilitate future NLP work in cybersecurity. Building on that, we improved Lim's malware dataset to create, to the best of our knowledge, the world's largest publicly available dataset of annotated malware reports. The aim of our annotation is to mark the words and phrases in malware reports that describe the behaviour and capabilities of the malware and to assign them to predefined categories.

Most machine learning efforts in malware detection have been based on system calls. Rieck et al. (2011) and Alazab et al. (2010) proposed models using machine learning techniques for detecting and classifying malware through system calls. Previously, our group proposed models to predict a malware's signatures based on the text describing the malware (Lim et al., 2017). We defined the same SubTasks mentioned in this paper and used the proposed models as the standard baselines for the shared task. This shared task is hosted on CodaLab.

The remainder of this paper is organized as follows: the information regarding the annotated dataset and its statistics, together with the SubTasks, is described in Section 2. Information about the evaluation measures and the baselines is given in Section 3. The different approaches used by the participants are described in Section 4. The evaluation scores of the participating systems and their rankings are presented and discussed in Section 5. Finally, the paper concludes with an overall assessment of the task.
0
Previous work has produced multiple legal assistant systems with various functions, such as finding relevant cases given a query and providing applicable law articles for a given case (Liu and Liao, 2005), which have substantially improved working efficiency. As legal assistant systems, charge prediction systems aim to determine appropriate charges, such as homicide or assault, for varied criminal cases by analyzing the textual fact descriptions of the cases (Luo et al., 2017), but they do not provide interpretations for the charge determination.

A court view is the written explanation from judges that interprets the charge decision for a criminal case and is also the core part of a legal document; it consists of rationales and a charge, where the charge is supported by the rationales, as shown in Fig. 1. In this work, we propose to study the problem of COURT VIEW GENeration from fact descriptions in cases, and we formulate it as a text-to-text natural language generation (NLG) problem (Gatt and Krahmer, 2017). The input is the fact description of a case and the output is the corresponding court view. We focus only on generating rationales, because charges can be decided by judges or by charge prediction systems that also analyze the fact descriptions (Luo et al., 2017; Lin et al., 2012). COURT-VIEW-GEN has beneficial functions in that it: (1) improves the interpretability of charge prediction systems by generating the rationales in court views that support the predicted charges, since the justification for a charge decision is as important as deciding the charge itself (Hendricks et al., 2016; Lei et al., 2016); and (2) benefits automatic legal document generation in legal assistant systems by automatically generating court views from fact descriptions, relieving much human labor especially for simple cases that occur in large numbers, where fact descriptions can be obtained from legal professionals or from techniques such as information extraction (Cowie and Lehnert, 1996).

COURT-VIEW-GEN is not a trivial task. High-quality rationales in court views should contain the important fact details, such as the degree of injury for the charge of intentional injury, as they are an important basis for charge determination. Fact details are like a summary of the fact description, similar to the task of DOCument SUMmarization (Yao et al., 2017). However, rationales are not a simple summary with only fact details; to support charges, they should be charge-discriminative, containing deduced information that does not appear in the fact descriptions. The fact descriptions for the charge of negligent homicide usually only describe someone being killed, without a direct statement about the defendant's negligence.

Table 1: An example of a fact description and court view from a legal document for a case.
FACT DESCRIPTION: ... 经审理查明, 2009年7月10日23时许, 被告人陈某伙同八至九名男青年在徐闻县新寮镇建寮路口附近路上拦截住搭载着李某的摩托车, 然后, 被告人陈某等人持钢管、刀对李某进行殴打。经法医鉴定, 李某伤情为轻伤。... (... After hearing, our court identified that at 23:00 on July 10, 2009, the defendant Chen together with eight or nine other young men stopped Lee, who was riding a motorcycle on a street near the road in Xinliao town, Xuwen County; after that, the defendant Chen and the others beat Lee with a steel pipe and a knife. According to forensic identification, Lee suffered a minor wound. ...)
COURT VIEW: ... 被告人陈某无视国家法律, 伙同他人, 持器械故意伤害他人身体致一人轻伤 [rationales], 其行为已构成故意伤害罪 [charge]。 (Our court holds that the defendant Chen ignored the state law and, together with others, caused another person a minor wound with equipment [rationales]; his acts constituted the crime of intentional assault [charge].)

Two difficulties make this a challenging task. Firstly, it is hard to maintain the discrimination of the generated court views when the input fact descriptions are non-discriminative among charges with subtle differences. For example, the charges of intentional homicide and negligent homicide are similar, and the corresponding fact descriptions are expressed in a similar way: both describe the defendant killing someone but do not directly point out whether the defendant acted in intention or in neglect, making it hard to generate charge-discriminative court views. Secondly, high-quality court views should contain the fact details in the fact descriptions, such as the degree of injury for the intentional injury charge, because fact details are the important basis for charge determination.

Traditional natural language generation (NLG) methods need much human labor to design rules and templates. To overcome the difficulties of COURT-VIEW-GEN mentioned above and the shortcomings of traditional NLG methods, in this work we propose a novel label-conditioned sequence-to-sequence model with attention for COURT-VIEW-GEN, aiming to directly map fact descriptions to court views; the architecture of our model is shown in Figure 1. Fact descriptions are encoded into context vectors by an encoder, and a decoder then generates court views from these vectors. To generate more class-discriminative court views from non-discriminative fact descriptions among charges with subtle differences, we encode charges as labels for the corresponding fact descriptions and decode the court views conditioned on the charge labels by additionally encoding the charge information (a minimal illustrative sketch is given after the contribution list below). The intuition is that charge labels provide extra information to distinguish the non-discriminative fact descriptions and make the decoder learn to select words related to the charges during decoding. To maintain the fact details from the fact descriptions, such as the degree of injury for the charge of intentional injury, we further apply the widely used attention mechanism to the Seq2Seq model. With attention, the context vector at every decoding step contains the most important information from the fact description for the decoder. Experimental results show that our model has strong performance on COURT-VIEW-GEN and that exploiting charge labels significantly improves the class-discrimination of the generated court views, especially for charges with subtle differences.

Our contributions in this paper can be summarized as follows:
• We propose the task of court view generation, which is meaningful but has not been well studied before.
• We introduce a novel label-conditioned sequence-to-sequence model with attention for COURT-VIEW-GEN.
• Experimental results demonstrate the effectiveness of our model and show that exploiting charge labels significantly improves the class-discrimination of the generated court views.
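The following PyTorch sketch illustrates the label-conditioning idea referenced above: a charge-label embedding is fed to the decoder at every step alongside the previous word embedding, and dot-product attention over the encoder states supplies the context vector used for word prediction. The GRU choice, dimensions, and all names are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sketch of a charge-label-conditioned Seq2Seq model with attention.
import torch
import torch.nn as nn

class ChargeConditionedSeq2Seq(nn.Module):
    def __init__(self, vocab_size, n_charges, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.charge_embed = nn.Embedding(n_charges, d_model)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        # Decoder input: previous word embedding concatenated with charge embedding.
        self.decoder = nn.GRU(2 * d_model, d_model, batch_first=True)
        self.out = nn.Linear(2 * d_model, vocab_size)   # [state; context] -> vocab

    def forward(self, facts, charges, court_view_in):
        # facts: (B, S) fact-description tokens; charges: (B,) charge labels;
        # court_view_in: (B, T) gold court-view tokens shifted right (teacher forcing).
        enc_out, enc_h = self.encoder(self.embed(facts))           # (B, S, D)
        charge = self.charge_embed(charges).unsqueeze(1)           # (B, 1, D)
        dec_in = torch.cat(
            [self.embed(court_view_in),
             charge.expand(-1, court_view_in.size(1), -1)], dim=-1)
        dec_out, _ = self.decoder(dec_in, enc_h)                   # (B, T, D)
        # Dot-product attention over encoder states.
        scores = torch.bmm(dec_out, enc_out.transpose(1, 2))       # (B, T, S)
        context = torch.bmm(torch.softmax(scores, dim=-1), enc_out)
        return self.out(torch.cat([dec_out, context], dim=-1))     # (B, T, V)

model = ChargeConditionedSeq2Seq(vocab_size=5000, n_charges=50)
facts = torch.randint(0, 5000, (2, 30))
charges = torch.randint(0, 50, (2,))
court_view_in = torch.randint(0, 5000, (2, 12))
print(model(facts, charges, court_view_in).shape)  # torch.Size([2, 12, 5000])
```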
0
Nobody reads all available user-generated comments about products they might buy. Summarizing reviews in a short paragraph would save valuable time and provide better insights into the main opinions of previous buyers. In addition to the traditional difficulties of summarization, the specific setting of opinion summarization faces the entanglement of multiple facets in reviews: polarity (including contradictory opinions), aspects, and tone (descriptive, evaluative).

Obtaining large parallel corpora for opinion summarization is costly, which makes unsupervised methods attractive. Very recently, a neural method for unsupervised multi-document abstractive summarization was proposed by Chu and Liu (2019, MeanSum), based on an auto-encoder which is given the average encoding of all documents at inference time. Major limitations identified by the authors of this work are factual inaccuracies and the inability to deal with contradictory statements. We argue that this can be attributed to feeding the decoder the summation of sentence representations in the embedding space, which is not equivalent to the average meaning representation of all the input sentences.

In this paper, we present work in progress that investigates better ways of aggregating sentence representations so as to preserve semantics. While gold summaries may be expensive to acquire, we leverage more attainable training signals such as a small amount of sentiment and aspect annotations. We adopt a strategy based on a language model, used both for encoding reviews and for generating summaries, and on aspect-aware sentence clustering. This clustering ensures coverage of all relevant aspects and allows the system to independently generate a sentence for each aspect mentioned in the reviews. Our system proceeds by projecting reviews into a vector space, clustering them according to their main aspect, and generating one sentence for each cluster that has been discovered. Our experiments, performed on the Oposum dataset (Angelidis and Lapata, 2018), demonstrate the importance of the clustering step and assess the effect of leveraging aspect information to improve clustering.
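As a hedged illustration of the pipeline just described, the following Python sketch projects reviews into a vector space, clusters them by latent aspect, and emits one sentence per cluster. TF-IDF stands in for the language-model encoder and an extractive centroid sentence stands in for the learned generator, so every component and value here is a simplifying assumption rather than the proposed system.

```python
# Minimal sketch: embed reviews, cluster by aspect, output one sentence per cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

reviews = [
    "Battery life is excellent and lasts all day.",
    "The battery dies after a couple of hours.",
    "Screen is bright and sharp.",
    "Display colours look washed out.",
    "Customer support answered quickly.",
    "Support never replied to my email.",
]

# 1) Project reviews into a vector space (placeholder for the LM encoder).
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(reviews)

# 2) Cluster reviews by latent aspect; the number of clusters is a hyperparameter.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# 3) One output sentence per discovered aspect cluster (extractive stand-in
#    for generation: the sentence closest to the cluster centroid).
summary = []
for c in range(kmeans.n_clusters):
    idx = np.where(kmeans.labels_ == c)[0]
    sims = cosine_similarity(X[idx], kmeans.cluster_centers_[c].reshape(1, -1))
    summary.append(reviews[idx[int(np.argmax(sims))]])

print(" ".join(summary))
```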
0
Language models (LMs) are one of the most fundamental technologies in NLP, with applications spanning text generation (Bahdanau et al., 2015; Rush et al., 2015), representation learning (Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019), and few-shot learning (Radford et al., 2019; Brown et al., 2020). Modern neural language models (NLMs) based on recurrent (Mikolov et al., 2010; Sundermeyer et al., 2012) or self-attentional (Vaswani et al., 2017; Al-Rfou et al., 2019) neural networks are mostly parametric: the predictions are solely dependent on the model parameters given the input data.

In contrast, recent non-parametric LMs (Guu et al., 2018; Khandelwal et al., 2019; He et al.) model text distributions by referencing both the parameters of the underlying model and examples from an external datastore. Non-parametric LMs are appealing since they allow for effective language modeling, particularly for rarer patterns, through explicit memorization via a datastore, which mitigates the burden on model parameters to learn to encode all information from a large dataset. One effective and representative example is the k-nearest neighbors LM (kNN-LM, Khandelwal et al. (2019)). The kNN-LM computes the probability of the next token by interpolating a parametric LM with a distribution calculated from the k nearest context-token pairs in the datastore, as demonstrated in Figure 2. This model is particularly notable for its large improvements in performance: it outperforms the previous best parametric LMs by a large margin on standard language modeling benchmarks, in domain adaptation settings, and on other conditional generation tasks such as machine translation (Khandelwal et al., 2020). However, one downside of the kNN-LM is that the datastore stores high-dimensional dense vectors.
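As a concrete reference for the interpolation described above, the standard kNN-LM formulation from Khandelwal et al. (2019) can be sketched as follows; the notation (query representation f(x), retrieved neighbor set N, distance d, and interpolation weight lambda) follows common usage and is assumed here rather than quoted from this paper.

```latex
% Sketch of the kNN-LM next-token distribution (notation assumed, not quoted):
% f(x): context representation used to query the datastore
% \mathcal{N} = {(k_i, v_i)}: the k retrieved (context key, next token) pairs
% d(.,.): distance in the datastore's vector space; \lambda: interpolation weight
\begin{align}
  p_{\mathrm{kNN}}(y \mid x) &\propto
    \sum_{(k_i, v_i) \in \mathcal{N}} \mathbb{1}[y = v_i]\,
    \exp\bigl(-d(k_i, f(x))\bigr) \\
  p(y \mid x) &= \lambda\, p_{\mathrm{kNN}}(y \mid x)
    + (1 - \lambda)\, p_{\mathrm{LM}}(y \mid x)
\end{align}
```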
H D E 4 J j g m M F s o v k M p w Q z M R c Z w U z J x Q P B T M Z F T j D T c C E I F n x R C Z b t c 8 K l W x D M d F s 8 E s x E W 8 w J Z o o t F g Q z u R Z P B F u t d i J Q z 6 H 0 Q 5 S 8 R b d g R N d 6 L o N R X u v J D E Z + r W c z G A 2 2 n s 5 g h N h 6 P o N R Y + s J D U a S r W c 0 G F 2 2 n t L I P X t O g 1 F o 6 0 k N R q a t Z z U Y r T Z P a 8 v F L h d z 7 t m T G I x 0 W 8 9 i M P p t P Y 3 B i L j 1 P A a j 5 N Y T G Y y c W 8 9 k M J p u P Z X B C L v 1 X A a j 7 t a T G Y z E W 8 9 m M D p v P Z 3 B i L 3 1 f A a j + O d P a N w L e T i q H U q 8 Q / t j h 7 Z N v E v w L o P 3 C N 5 j 8 D 7 B + w z u E N x h 8 A H B B w w + J P i Q w U c E H z H 4 m O B j B r 8 n + D 2 D T w g + Y X C X 4 C 6 D T w k + Z f A Z w W c M P i f 4 n M E 9 g n s M v i D 4 g s G X B F 8 y + I r g K w b 3 C e 4 z + J r g a w b f E H z D 4 F u C b x l 8 R / A d g z 8 Q / I H B H w n + + P z x 6 o o O j O q Y R n e Y f r X 0 G L f L u T 2 X 2 + P c v s vt c 6 7 j c h 3 O H b j c A e c O X e 6 Q c 0 c u d 8 S 5 Y 5 c 7 5 t x 7 l 3 v P u R O X O + F c 1 + W 6 n D t 1 u V P O n b n c G e f O X e 6 c c z 2 X 6 3 H u w u U u O H f p c p e c u 3 K 5 K 8 7 1 X a 7 P u W u X u + b c j c v d c O 7 W 5 W 4 5 d + d y d 5 z 7 4 H I f O P f R 5 a z s b 7 i F K J 5 A f 4 7 A z 6 5 v 6 r p F m k B p P 8 9 a L J 4 Z a B B T 0 q g 9 s c J d P 6 y e jV a E f p p q 4 S q a B Q 4 N Q v Z E m x N E y J R o S 4 I I W Z G i 6 i A Z E G 0 / E C H b o U 0 H I m Q 2 t N V A h C x G U X W S 9 f C r Q c h O a D O B C J k I b S E Q i d j 0 G I Q M g 7 Y L i C R s W g 2 S s k k y C F k C b Q g Q I S O g b Q A i l P 5 1 8 k d E s H U w C K X 6 o l o t t l a F Q S i t 6 6 S O C C V z n c o R o R S u E z g i l L h 1 2 k a k z a S 6 7 r T w o 2 y q 1 l v / r Y V Z D C v N V A / i D U i f w O i B R U V F f j w c q x r m g o g 0 h k D h + i / B W q l K p R b A B h H B 3 w S J M I h V V f 2 X Y K v n + l u C a i B l y f t f K r H a E o p 1 R C U U 6 p g N q l Q C t S U U 6 I R K K M 6 A S i j M K Z W w u 6 y v K M i v V E I x 3 r O 5 K Z U I 6 5 G X S o C 2 h J P J Z h H F l 7 I p K Z X o b A l F 9 0 A l F F z O Z q p U Q q s n q F Q i s y W c a D b N K L C C S i i u R y q h s O Z U Q l E t q I S C e l p W 3 z B j + p 0 b X K d e 1 B m l X J 1 w E a F E q 9 M s I p R e d X J F h J K q T q m I U C r V i R Q R S q A 6 f S J C a V M n T U Q o W e p U i Q i l S J 0 g E a H E q N M i I p Q O d T J E h J K g T o G I U O r T i Q 8 R S n g 6 3 S F C a U 4 n O U Q o u e n U h g i l N J 3 Q E K F E p t M Y I p S + d P J C h J K W T l m I U K r S i Q o R S l A 6 P S F C a U k n J U Q o G e l U h A i l I J 2 A E P n I V p D S x Z B n i 7 h X Z 4 s e y x Z x 1 2 5 9 x X S r 7 V 8 P r t r D i r s y + 1 i r q A + J U O 9 d 7 M M o 8 n N A U U 1 3 1 A m E d z Q e U E x C 9 Q Q V k l E 6 D p M A G / N n k U L E p L 6 O l 6 V Q D 3 + v Q D 7 X w D C N x j / V z H C + L J t f b k r s n / m m X K f T q j 3 9 8 L o a m j S 2 M x F M / X L X Y q R / u W c x 2 g F y 3 2 K 0 B 2 T H Y r Q L 5 I H F a B / I Q 4 v R T p B H F q O 9 I I 8 t R r t B v r c Y 7 Q d 5 Y j H a E b J r M d o T 8 t R i t C v k m c V o X 8 h z i 9 H O k D 2 L 0 d 6 Q F x a j 3 S E v L U b 7 Q 1 5 Z j H a I 7 F u M 9 o i 8 t h j t E n l j M d o n 8 t Z i t F P k n c V o r 8 g P F q P d I j 9 a z B g 1 F P J h 7 m d T w w b 2 4 + / I + R Q S 7 D K Y d B H s M Z i k E e w z m N Q R d B h M A g k O G E w a C Q 4 Z T D I J j h h M S g m O G U x i C d 4 z m P Q S n D C Y J B N 0 G U y q C U 4 Z T M I J z h h M 2 g n O G U z y C X o M J g U F F w w m E Q W X D C Y d B V c M J i k 
F f Q a T m o J r B p O g g h s G k 6 a C W w a T r I I 7 B p O y g g 8 M J n E F H x l s P w j g 0 V Z Z N V E / X B k y c Y l d Q k l b Y o 9 Q k p b Y J 1 Q r 6 6 W 3 r 7 / g m A n w f E + A 9 P D W E Y y 9 z q Y 3 h J G v c D k N h f e Y z q I x Q l g C T + i v Q 9 B L z n J P v S i X R t i Q e r s M 5 h l 6 S / 0 d r / 2 m v U N 3 J N G K A 0 J J s + K Q U J K s O C K U F C u O C S X B i v e E k l 7 F C a E k V 9 E l l N Q q T g k l s Y o z Q k m r 4 p x Q k q r o E U p K F R e E k l D F J a G k U 3 F F K M l U 9 A k l l Y p r Q k m k 4 o Z Q 0 q i 4 J Z Q k K u 4 I J Y W K D 4 S S Q M V H Q u v n M w m 6 Q d A f L H z z Z K a y h k C + o O t + J F C m c Y d K K O B d K q F w 9 6 i E g t 2 n E o q p Q y U U 0 Q G V U D y H V E L R H F E J x X J M J R T J e y q h O E 6 o h K L o U g n F c E o l F M E Z l X D x z 6 m E i 9 6 j E i 7 2 B Z V w k S + p h I t 7 R S V c 1 D 6 V c D G v q Y S L e E M l X L x b K u G i 3 V E J F + s D l X C R P r L 7 V f 6 r 8 l 5 q y Y A v m T Q + D A 8 a t a v 1 G 7 G 4 t Q 2 6 6 T 2 G c p r O p I c m y H v E R J d B 7 t o k I J / k e K T q 9 r L W g A 5 c s Y e g T R Q 0 X B R o G w U N H w X a S E H D S Y G 2 U t D w U q D N F D T c F G g 7 B Q 0 / B d p Q Q c N R g b Z U 0 P B U o E 0 V N F w V a F s F D V 8 F 2 l h B w 1 m B t l b Q 8 F a g z R U 0 3 B V o e w U N f w X a Y E H D Y Y G 2 W N D w W K B N F j R c F m i b B Q 2 f B d p o Q c N p g b Z a 0 P B a o M 0 W N N w W a L s F D b 8 F 2 n B B w 3 G B t l z Q 8 F y g T R c 0 X B d o 2 w X M d + H n B 0 x E M p + B N 0 v G k E c L 9 Y b T 2 J e + F 0 A C O e Y g V Q 4 F K n 0 4 U w m p + Q a m r 1 4 S V C 9 q 5 n G p C z o d q l Y h z s I 8 x E T o 1 K / f 3 x 0 u d B L U 7 4 y o m 2 D W b L R t X y e Z + h I / v 7 u 3 c C J 7 P L K 3 b O t M n I 4 h + t Z A d E A 9 E l N a u U 8 V 1 P t W U C b D a A x V 5 E A X 6 t 7 X N f C Y k O l o 6 g v 1 8 r o / k 6 n + X A W 5 0 8 P G S + S Z i a n 7 W F V Z 7 c A Y n D h T b I n L k c B D x 8 a Z I m p h p J + s u c G R n 0 X + C J b 1 6 z f d C l h 6 L 7 3 q 2 p 3 e x k v H S + 5 f X K 4 r 2 B s + 3 S Z 7 u e S 5 v X F q x h m b 4 w Y Z 5 U v 7 N M 4 l c g i W 9 f O 1 J j W S N E Z V C i c h 5 M 2 m R T q R s T + n S A s 0 4 z B Z p P q N J / P o b b W V L J q p 0 T + p 5 w M u e 9 J d 8 t e d T r o r C 3 j j 5 9 Q D V W i 2 L / G P n + P a 5 y m L v F p Z g L 2 0 I F o V l E B v 0 2 i S + 7 F 6 T D V 9 T H O 0 r c J f C O 9 F 9 8 e 3 L 9 S r P v o / f s w S 8 8 q q y H D 9 h X 7 V 7 M U A o o j F 2 M e k L 7 1 d T I C 4 5 R P 1 a 4 H 7 H W L 1 y p v y x q Z R F q 3 e N 0 1 n g c 6 Z 2 i q H E j Z 1 8 y L 1 x i m o 5 h 7 D + z C D c e h v N V 6 y T v M 4 U k / 6 l 2 X 3 x z f L F j J N Q H H b b Z x 8 1 P X e t n G Z Y r I W R m u h + + M g T C Z y 0 d w 6 m Z + r R 8 Z 4 b P h q s 1 w B n r X C D 8 A L E y 9 J K 5 s v Y b 7 l 7 U 1 T o a Y n V Q Z w N P X 2 8 R Nx A r 8 X 3 j B N 7 7 f W n Y c 8 5 5 k 6 n d P 8 P 1 D j e a A 7 g H 8 H m + r q W 4 H q n D S B e N X e p F Y r h u n f z 0 T 0 U V B 9 9 T p g B H L g D 3 G f R e n j M A f / f t 1 g q T o E M 7 n A Y 3 1 g L w x T w 4 W f 4 / i n e P i v r 3 / 5 b m O 7 + T + V V i 9 u 3 m 5 t v 9 n a v v h h 4 0 + 7 1 f 9 i + t X a v 6 3 9 + 9 q r t e 2 1 P 6 7 9 a e 1 o r b d 2 v T Z a y 9 f + s v b X t b + 9 2 3 r X f / f p 3 W c T + v O f V X X + d c 3 5 e T f 5 P x 0 R 0 B o = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " b 8 n P u CK B + x x W C 4 f 3 E d T H Z p A U f R o = " > A A A 1 r H i c l V v b c h v L d a X t J H a Y O D 5 2 H v M y F U p l O U W x R J 2 j c i p P 5 g W 8 i C A J 
k u B F E n R U A 2 B j M O L c O N 0 Y A p w g z / 4 E v 9 p / l b / J 7 u 7 p 2 b s H Q 5 0 K q 0 R O r 7 W 7 p y + r e y 8 M R s M s C o V 8 8 + Z / f / b z X / z d 3 / / D L 3 / 1 j + v / 9 M + / / p f f f P f b 3 9 2 I d J a P 4 H q U R m l + N / Q F R G E C 1 z K U E d x l O f j x M I L b 4 f 2 e 4 m 8 L y E W Y J n 2 5 y O B z 7 A d J O A l H v k T o x + x L e T + Q M J f l 2 d l y + e W 7 j T d b b / S P t 3 q x X V 1 s r F U / v S + / / e H P g 3 E 6 m s W Q y F H k C / F p + 0 0 m P 5 d + L s N R B M v 1 w U x A 5 o / u / Q A + 4 W X i x y A + l 7 r b S + 8 l I m N v k u b 4 L 5 G e R n m N 0 o + F W M R D j I x 9 O R V N T o F t 3 K e Z n P z n 5 z J M s p m E Z G R u N J l F n k w 9 N Q f e O M x h J K M F X v i j P M S + e q O p n / s j i T O 1 v v 5 S / X h n n V v v d K d / 5 O 1 3 D o 7 P j v v H 5 2 d X n q b W 2 z q y i X / V M M T m M F 5 i G 9 6 p n 9 9 7 A u + D 8 y y 8 d O K N / M x c q x H n M I E 8 D 5 N A d W o c F q G w Y Z M w m O W A A 0 r g c Z T G s Z + M y w G C E U z k s i w H E H u v u n j 9 h + V y J W a E 6 w C 5 j d r T p b a 4 P A y m d W O X q t A W J d P M x v T T r C 1 i m E q Z x j Z o V 5 d W 4 q p x + z b M f y 5 i a C O G z 0 W M b M T o u Y i x j R i r C F y G I x x d p E b o + R 7 G q 0 W H C W 6 W s Y d z E 7 t t 4 L U C l 5 + 2 P 2 M r w 4 m 3 s a 0 a a Q 5 7 v i w H s Z 8 H K D A / L w + O 7 5 p 9 w W s n B K X U D O m f 7 5 / r + + j d p 6 V f 5 o C 9 V 8 R / m R u 7 b X Z 0 k 3 K a Z u W g 0 2 Q 7 D 8 h 2 v p S D v H g a i D D 2 H v C 6 y K b h 8 p W C / h t / z V e m r J M 9 N W p l q p a c g v S / X a + u N q + q j V + 1 R j 4 8 4 V y 1 d s W N y 1 T c M z d v R M 6 f V i L n K v J p N X I 1 c C V m r I P G 7 a S 6 0 6 u 2 p h + e m q N q R q i J R b i B z j U 6 b 6 A L j S 4 a a K z R u K k C j S b N 2 J l U 4 p h h l + a b n h 3 x S l D m B K m + N 0 L G p i L e x h 9 G P s 1 d M 0 x V Z U F t L S G 2 0 h h i v H e 4 O Q / 0 W W c O Q z y p Y d O L 0 k f I X 4 8 w o 2 2 t D 3 C n 6 t M K J h v b p T k X / 2 e A p V J v j 7 b q e A q E 0 o + 2 v A M 8 Y 4 X E P K S O V K E O Q u R N i w e 2 x Y N m i 5 q W j 6 m 9 5 8 b b 6 q 7 C s 0 E e D q 8 q v L U 1 H m b + m K p s f L / x w 0 q 1 z b q O v f q e N / W D H s 6 V S R b f n A 5 M K K b z V W Z x 5 q O l A T s h p v a V r X 3 V U v v S 1 t J 5 8 j G t k 9 d W P T H m 7 k L P T J 3 a n p m a Z o P T H K D Z J G t v 4 / v V F m n W W N v f r 7 b t J x 7 g I q j K L V M G D 2 b M N u T 5 Q T v t z L I M c k + 1 Y 5 r p V M 1 0 2 p r Z 8 X L / k e a 9 0 d j r 1 6 / 9 I g 3 H 3 k y o j B 9 O v C w V I k S P Z p r O I h 8 z U t X + 8 7 1 T J i X D B N U y R s W Y 6 l X M / 3 u Q V U N 7 d U N 7 P 9 k Q j j k J Q F s b E y t M G x q u e 4 R S s b R t 6 v X r Z 2 W C v f O j I E V T N o 1 b x o m c 6 V 0 d 9 M 2 B s q Z W R r p j m 9 p p a c o K 3 t 4 P B 1 G 3 9 e 3 D o O 9 U 2 v n J S i u T i o Z B V i N n 6 l O o 6 a 6 6 + t a i m P p N 9 f b q + j 2 3 v h 1 p f Q P s t b p + t s O V 4 C C M l F g j d Y F 2 B Q P U V d X e J E r T X N P 6 y v D 6 s g p A a h i X K y Z H 5 r g R K p 8 z 8 q N y v x l Q + F E 4 5 g F f z H U e l 4 Z a r j Q J Q r Z X 0 M y y H h F k Q l n H T I R R m m j b h 1 O L T a S x V / h 5 i E k M r L 4 x f 5 X G u C V p H m O r L w Y I v V j a 6 c w b t E / M 0 G W G x I x c Z k T M 2 G X G x I D L A D E T l 5 k Q E 7 h M Q M z U Z a b E h C 4 T E v P V Z b 4 S c + 8 y 9 8 R E L h M t t Y z z 2 A s F 7 l j 8 9 D p e q M P O r O C m 9 3 U m p D d O k 9 9 L T 3 1 + R D k u 1 M n j L I w X V 2 0 n b t s J 3 T V 1 m Z S Y z G U y Y h 5 c 5 o G Y 3 G V y Y o T L C G K k y 
0 h i Z i 4 z I 6 Z w m Y K Y R 5 d 5 J G b u M n N i F i 6 z I O b J Z Z 6 W x q D Z D Y C Z O a 2 P 9 6 L a J K X Z S s M J 2 z Z 1 v 7 X L Y x G V 6 6 t 5 x n F 4 S D D b G 8 W I Y L Y x i j H B b F c U Q D D b E s W E Y L Y f i o B g t h m K K c F s J x Q z g t k 2 K L 4 S z P Z A c U 8 w 2 w B F R H D E 4 J j g m M F s o v k M p w Q z M R c Z w U z J x Q P B T M Z F T j D T c C E I F n x R C Z b t c 8 K l W x D M d F s 8 E s x E W 8 w J Z o o t F g Q z u R Z P B F u t d i J Q z 6 H 0 Q 5 S 8 R b d g R N d 6 L o N R X u v J D E Z + r W c z G A 2 2 n s 5 g h N h 6 P o N R Y + s J D U a S r W c 0 G F 2 2 n t L I P X t O g 1 F o 6 0 k N R q a t Z z U Y r T Z P a 8 v F L h d z 7 t m T G I x 0 W 8 9 i M P p t P Y 3 B i L j 1 P A a j 5 N Y T G Y y c W 8 9 k M J p u P Z X B C L v 1 X A a j 7 t a T G Y z E W 8 9 m M D p v P Z 3 B i L 3 1 f A a j + O d P a N w L e T i q H U q 8 Q / t j h 7 Z N v E v w L o P 3 C N 5 j 8 D 7 B + w z u E N x h 8 A H B B w w + J P i Q w U c E H z H 4 m O B j B r 8 n + D 2 D T w g + Y X C X 4 C 6 D T w k + Z f A Z w W c M P i f 4 n M E 9 g n s M v i D 4 g s G X B F 8 y + I r g K w b 3 C e 4 z + J r g a w b f E H z D 4 F u C b x l 8 R / A d g z 8 Q / I H B H w n + + P z x 6 o o O j O q Y R n e Y f r X 0 G L f L u T 2 X 2 + P c v s vt c 6 7 j c h 3 O H b j c A e c O X e 6 Q c 0 c u d 8 S 5 Y 5 c 7 5 t x 7 l 3 v P u R O X O + F c 1 + W 6 n D t 1 u V P O n b n c G e f O X e 6 c c z 2 X 6 3 H u w u U u O H f p c p e c u 3 K 5 K 8 7 1 X a 7 P u W u X u + b c j c v d c O 7 W 5 W 4 5 d + d y d 5 z 7 4 H I f O P f R 5 a z s b 7 i F K J 5 A f 4 7 A z 6 5 v 6 r p F m k B p P 8 9 a L J 4 Z a B B T 0 q g 9 s c J d P 6 y e jV a E f p p q 4 S q a B Q 4 N Q v Z E m x N E y J R o S 4 I I W Z G i 6 i A Z E G 0 / E C H b o U 0 H I m Q 2 t N V A h C x G U X W S 9 f C r Q c h O a D O B C J k I b S E Q i d j 0 G I Q M g 7 Y L i C R s W g 2 S s k k y C F k C b Q g Q I S O g b Q A i l P 5 1 8 k d E s H U w C K X 6 o l o t t l a F Q S i t 6 6 S O C C V z n c o R o R S u E z g i l L h 1 2 k a k z a S 6 7 r T w o 2 y q 1 l v / r Y V Z D C v N V A / i D U i f w O i B R U V F f j w c q x r m g o g 0 h k D h + i / B W q l K p R b A B h H B 3 w S J M I h V V f 2 X Y K v n + l u C a i B l y f t f K r H a E o p 1 R C U U 6 p g N q l Q C t S U U 6 I R K K M 6 A S i j M K Z W w u 6 y v K M i v V E I x 3 r O 5 K Z U I 6 5 G X S o C 2 h J P J Z h H F l 7 I p K Z X o b A l F 9 0 A l F F z O Z q p U Q q s n q F Q i s y W c a D b N K L C C S i i u R y q h s O Z U Q l E t q I S C e l p W 3 z B j + p 0 b X K d e 1 B m l X J 1 w E a F E q 9 M s I p R e d X J F h J K q T q m I U C r V i R Q R S q A 6 f S J C a V M n T U Q o W e p U i Q i l S J 0 g E a H E q N M i I p Q O d T J E h J K g T o G I U O r T i Q 8 R S n g 6 3 S F C a U 4 n O U Q o u e n U h g i l N J 3 Q E K F E p t M Y I p S + d P J C h J K W T l m I U K r S i Q o R S l A 6 P S F C a U k n J U Q o G e l U h A i l I J 2 A E P n I V p D S x Z B n i 7 h X Z 4 s e y x Z x 1 2 5 9 x X S r 7 V 8 P r t r D i r s y + 1 i r q A + J U O 9 d 7 M M o 8 n N A U U 1 3 1 A m E d z Q e U E x C 9 Q Q V k l E 6 D p M A G / N n k U L E p L 6 O l 6 V Q D 3 + v Q D 7 X w D C N x j / V z H C + L J t f b k r s n / m m X K f T q j 3 9 8 L o a m j S 2 M x F M / X L X Y q R / u W c x 2 g F y 3 2 K 0 B 2 T H Y r Q L 5 I H F a B / I Q 4 v R T p B H F q O 9 I I 8 t R r t B v r c Y 7 Q d 5 Y j H a E b J r M d o T 8 t R i t C v k m c V o X 8 h z i 9 H O k D 2 L 0 d 6 Q F x a j 3 S E v L U b 7 Q 1 5 Z j H a I 7 F u M 9 o i 8 t h j t E n l j M d o n 8 t Z i t F P k n c V o r 8 g 
P F q P d I j 9 a z B g 1 F P J h 7 m d T w w b 2 4 + / I + R Q S 7 D K Y d B H s M Z i k E e w z m N Q R d B h M A g k O G E w a C Q 4 Z T D I J j h h M S g m O G U x i C d 4 z m P Q S n D C Y J B N 0 G U y q C U 4 Z T M I J z h h M 2 g n O G U z y C X o M J g U F F w w m E Q W X D C Y d B V c M J i k F f Q a T m o J r B p O g g h s G k 6 a C W w a T r I I 7 B p O y g g 8 M J n E F H x l s P w j g 0 V Z Z N V E / X B k y c Y l d Q k l b Y o 9 Q k p b Y J 1 Q r 6 6 W 3 r 7 / g m A n w f E + A 9 P D W E Y y 9 z q Y 3 h J G v c D k N h f e Y z q I x Q l g C T + i v Q 9 B L z n J P v S i X R t i Q e r s M 5 h l 6 S / 0 d r / 2 m v U N 3 J N G K A 0 J J s + K Q U J K s O C K U F C u O C S X B i v e E k l 7 F C a E k V 9 E l l N Q q T g k l s Y o z Q k m r 4 p x Q k q r o E U p K F R e E k l D F J a G k U 3 F F K M l U 9 A k l l Y p r Q k m k 4 o Z Q 0 q i 4 J Z Q k K u 4 I J Y W K D 4 S S Q M V H Q u v n M w m 6 Q d A f L H z z Z K a y h k C + o O t + J F C m c Y d K K O B d K q F w 9 6 i E g t 2 n E o q p Q y U U 0 Q G V U D y H V E L R H F E J x X J M J R T J e y q h O E 6 o h K L o U g n F c E o l F M E Z l X D x z 6 m E i 9 6 j E i 7 2 B Z V w k S + p h I t 7 R S V c 1 D 6 V c D G v q Y S L e E M l X L x b K u G i 3 V E J F + s D l X C R P r L 7 V f 6 r 8 l 5 q y Y A v m T Q + D A 8 a t a v 1 G 7 G 4 t Q 2 6 6 T 2 G c p r O p I c m y H v E R J d B 7 t o k I J / k e K T q 9 r L W g A 5 c s Y e g T R Q 0 X B R o G w U N H w X a S E H D S Y G 2 U t D w U q D N F D T c F G g 7 B Q 0 / B d p Q Q c N R g b Z U 0 P B U o E 0 V N F w V a F s F D V 8 F 2 l h B w 1 m B t l b Q 8 F a g z R U 0 3 B V o e w U N f w X a Y E H D Y Y G 2 W N D w W K B N F j R c F m i b B Q 2 f B d p o Q c N p g b Z a 0 P B a o M 0 W N N w W a L s F D b 8 F 2 n B B w 3 G B t l z Q 8 F y g T R c 0 X B d o 2 w X M d + H n B 0 x E M p + B N 0 v G k E c L 9 Y b T 2 J e + F 0 A C O e Y g V Q 4 F K n 0 4 U w m p + Q a m r 1 4 S V C 9 q 5 n G p C z o d q l Y h z s I 8 x E T o 1 K / f 3 x 0 u d B L U 7 4 y o m 2 D W b L R t X y e Z + h I / v 7 u 3 c C J 7 P L K 3 b O t M n I 4 h + t Z A d E A 9 E l N a u U 8 V 1 P t W U C b D a A x V 5 E A X 6 t 7 X N f C Y k O l o 6 g v 1 8 r o / k 6 n + X A W 5 0 8 P G S + S Z i a n 7 W F V Z 7 c A Y n D h T b I n L k c B D x 8 a Z I m p h p J + s u c G R n 0 X + C J b 1 6 z f d C l h 6 L 7 3 q 2 p 3 e x k v H S + 5 f X K 4 r 2 B s + 3 S Z 7 u e S 5 v X F q x h m b 4 w Y Z 5 U v 7 N M 4 l c g i W 9 f O 1 J j W S N E Z V C i c h 5 M 2 m R T q R s T + n S A s 0 4 z B Z p P q N J / P o b b W V L J q p 0 T + p 5 w M u e 9 J d 8 t e d T r o r C 3 j j 5 9 Q D V W i 2 L / G P n + P a 5 y m L v F p Z g L 2 0 I F o V l E B v 0 2 i S + 7 F 6 T D V 9 T H O 0 r c J f C O 9 F 9 8 e 3 L 9 S r P v o / f s w S 8 8 q q y H D 9 h X 7 V 7 M U A o o j F 2 M e k L 7 1 d T I C 4 5 R P 1 a 4 H 7 H W L 1 y p v y x q Z R F q 3 e N 0 1 n g c 6 Z 2 i q H E j Z 1 8 y L 1 x i m o 5 h 7 D + z C D c e h v N V 6 y T v M 4 U k / 6 l 2 X 3 x z f L F j J N Q H H b b Z x 8 1 P X e t n G Z Y r I W R m u h + + M g T C Z y 0 d w 6 m Z + r R 8 Z 4 b P h q s 1 w B n r X C D 8 A L E y 9 J K 5 s v Y b 7 l 7 U 1 T o a Y n V Q Z w N P X 2 8 R N x A r 8 X 3 j B N 7 7 f W n Y c 8 5 5 k 6 n d P 8 P 1 D j e a A 7 g H 8 H m + r q W 4 H q n D S B e N X e p F Y r h u n f z 0 T 0 U V B 9 9 T p g B H L g D 3 G f R e n j M A f / f t 1 g q T o E M 7 n A Y 3 1 g L w x T w 4 W f 4 / i n e P i v r 3 / 5 b m O 7 + T + V V i 9 u 3 m 5 t v 9 n a v vh h 4 0 + 7 1 f 9 i + t X a v 6 3 9 + 9 q r t e 2 1 P 6 7 9 a e 1 o r b d 2 v T Z a y 9 f + s v b X t b + 9 2 
3 r X f / f p 3 W c T + v O f V X X + d c 3 5 e T f 5 P x 0 R 0 B o = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " b 8 n P u CK B + x x W C 4 f 3 E d T H Z p A U f R o = " > A A A 1 r H i c l V v b c h v L d a X t J H a Y O D 5 2 H v M y F U p l O U W x R J 2 j c i p P 5 g W 8 i C A J k u B F E n R U A 2 B j M O L c O N 0 Y A p w g z / 4 E v 9 p / l b / J 7 u 7 p 2 b s H Q 5 0 K q 0 R O r 7 W 7 p y + r e y 8 M R s M s C o V 8 8 + Z / f / b z X / z d 3 / / D L 3 / 1 j + v / 9 M + / / p f f f P f b 3 9 2 I d J a P 4 H q U R m l + N / Q F R G E C 1 z K U E d x l O f j x M I L b 4 f 2 e 4 m 8 L y E W Y J n 2 5 y O B z 7 A d J O A l H v k T o x + x L e T + Q M J f l 2 d l y + e W 7 j T d b b / S P t 3 q x X V 1 s r F U / v S + / / e H P g 3 E 6 m s W Q y F H k C / F p + 0 0 m P 5 d + L s N R B M v 1 w U x A 5 o / u / Q A + 4 W X i x y A + l 7 r b S + 8 l I m N v k u b 4 L 5 G e R n m N 0 o + F W M R D j I x 9 O R V N T o F t 3 K e Z n P z n 5 z J M s p m E Z G R u N J l F n k w 9 N Q f e O M x h J K M F X v i j P M S + e q O p n / s j i T O 1 v v 5 S / X h n n V v v d K d / 5 O 1 3 D o 7 P j v v H 5 2 d X n q b W 2 z q y i X / V M M T m M F 5 i G 9 6 p n 9 9 7 A u + D 8 y y 8 d O K N / M x c q x H n M I E 8 D 5 N A d W o c F q G w Y Z M w m O W A A 0 r g c Z T G s Z + M y w G C E U z k s i w H E H u v u n j 9 h + V y J W a E 6 w C 5 j d r T p b a 4 P A y m d W O X q t A W J d P M x v T T r C 1 i m E q Z x j Z o V 5 d W 4 q p x + z b M f y 5 i a C O G z 0 W M b M T o u Y i x j R i r C F y G I x x d p E b o + R 7 G q 0 W H C W 6 W s Y d z E 7 t t 4 L U C l 5 + 2 P 2 M r w 4 m 3 s a 0 a a Q 5 7 v i w H s Z 8 H K D A / L w + O 7 5 p 9 w W s n B K X U D O m f 7 5 / r + + j d p 6 V f 5 o C 9 V 8 R / m R u 7 b X Z 0 k 3 K a Z u W g 0 2 Q 7 D 8 h 2 v p S D v H g a i D D 2 H v C 6 y K b h 8 p W C / h t / z V e m r J M 9 N W p l q p a c g v S / X a + u N q + q j V + 1 R j 4 8 4 V y 1 d s W N y 1 T c M z d v R M 6 f V i L n K v J p N X I 1 c C V m r I P G 7 a S 6 0 6 u 2 p h + e m q N q R q i J R b i B z j U 6 b 6 A L j S 4 a a K z R u K k C j S b N 2 J l U 4 p h h l + a b n h 3 x S l D m B K m + N 0 L G p i L e x h 9 G P s 1 d M 0 x V Z U F t L S G 2 0 h h i v H e 4 O Q / 0 W W c O Q z y p Y d O L 0 k f I X 4 8 w o 2 2 t D 3 C n 6 t M K J h v b p T k X / 2 e A p V J v j 7 b q e A q E 0 o + 2 v A M 8 Y 4 X E P K S O V K E O Q u R N i w e 2 x Y N m i 5 q W j 6 m 9 5 8 b b 6 q 7 C s 0 E e D q 8 q v L U 1 H m b + m K p s f L / x w 0 q 1 z b q O v f q e N / W D H s 6 V S R b f n A 5 M K K b z V W Z x 5 q O l A T s h p v a V r X 3 V U v v S 1 t J 5 8 j G t k 9 d W P T H m 7 k L P T J 3 a n p m a Z o P T H K D Z J G t v 4 / v V F m n W W N v f r 7 b t J x 7 g I q j K L V M G D 2 b M N u T 5 Q T v t z L I M c k + 1 Y 5 r p V M 1 0 2 p r Z 8 X L / k e a 9 0 d j r 1 6 / 9 I g 3 H 3 k y o j B 9 O v C w V I k S P Z p r O I h 8 z U t X + 8 7 1 T J i X D B N U y R s W Y 6 l X M / 3 u Q V U N 7 d U N 7 P 9 k Q j j k J Q F s b E y t M G x q u e 4 R S s b R t 6 v X r Z 2 W C v f O j I E V T N o 1 b x o m c 6 V 0 d 9 M 2 B s q Z W R r p j m 9 p p a c o K 3 t 4 P B 1 G 3 9 e 3 D o O 9 U 2 v n J S i u T i o Z B V i N n 6 l O o 6 a 6 6 + t a i m P p N 9 f b q + j 2 3 v h 1 p f Q P s t b p + t s O V 4 C C M l F g j d Y F 2 B Q P U V d X e J E r T X N P 6 y v D 6 s g p A a h i X K y Z H 5 r g R K p 8 z 8 q N y v x l Q + F E 4 5 g F f z H U e l 4 Z a r j Q J Q r Z X 0 M y y H h F k Q l n H T I R R m m j b h 1 O L T a S x V / h 5 i E k M r L 4 x f 5 X G u C V p H m O r L w Y I v V j a 6 c w b t E / M 0 G W G x I 
x c Z k T M 2 G X G x I D L A D E T l 5 k Q E 7 h M Q M z U Z a b E h C 4 T E v P V Z b 4 S c + 8 y 9 8 R E L h M t t Y z z 2 A s F 7 l j 8 9 D p e q M P O r O C m 9 3 U m p D d O k 9 9 L T 3 1 + R D k u 1 M n j L I w X V 2 0 n b t s J 3 T V 1 m Z S Y z G U y Y h 5 c 5 o G Y 3 G V y Y o T L C G K k y 0 h i Z i 4 z I 6 Z w m Y K Y R 5 d 5 J G b u M n N i F i 6 z I O b J Z Z 6 W x q D Z D Y C Z O a 2 P 9 6 L a J K X Z S s M J 2 z Z 1 v 7 X L Y x G V 6 6 t 5 x n F 4 S D D b G 8 W I Y L Y x i j H B b F c U Q D D b E s W E Y L Y f i o B g t h m K K c F s J x Q z g t k 2 K L 4 S z P Z A c U 8 w 2 w B F R H D E 4 J j g m M F s o v k M p w Q z M R c Z w U z J x Q P B T M Z F T j D T c C E I F n x R C Z b t c 8 K l W x D M d F s 8 E s x E W 8 w J Z o o t F g Q z u R Z P B F u t d i J Q z 6 H 0 Q 5 S 8 R b d g R N d 6 L o N R X u v J D E Z + r W c z G A 2 2 n s 5 g h N h 6 P o N R Y + s J D U a S r W c 0 G F 2 2 n t L I P X t O g 1 F o 6 0 k N R q a t Z z U Y r T Z P a 8 v F L h d z 7 t m T G I x 0 W 8 9 i M P p t P Y 3 B i L j 1 P A a j 5 N Y T G Y y c W 8 9 k M J p u P Z X B C L v 1 X A a j 7 t a T G Y z E W 8 9 m M D p v P Z 3 B i L 3 1 f A a j + O d P a N w L e T i q H U q 8 Q / t j h 7 Z N v E v w L o P 3 C N 5 j 8 D 7 B + w z u E N x h 8 A H B B w w + J P i Q w U c E H z H 4 m O B j B r 8 n + D 2 D T w g + Y X C X 4 C 6 D T w k + Z f A Z w W c M P i f 4 n M E 9 g n s M v i D 4 g s G X B F 8 y + I r g K w b 3 C e 4 z + J r g a w b f E H z D 4 F u C b x l 8 R / A d g z 8 Q / I H B H w n + + P z x 6 o o O j O q Y R n e Y f r X 0 G L f L u T 2 X 2 + P c v s v t c 6 7 j c h 3 O H b j c A e c O X e 6 Q c 0 c u d 8 S 5 Y 5 c 7 5 t x 7 l 3 v P u R O X O + F c 1 + W 6 n D t 1 u V P O n b n c G e f O X e 6 c c z 2 X 6 3 H u w u U u O H f p c p e c u 3 K 5 K 8 7 1 X a 7 P u W u X u + b c j c v d c O 7 W 5 W 4 5 d + d y d 5 z 7 4 H I f O P f R 5 a z s b 7 i F K J 5 A f 4 7 A z 6 5 v 6 r p F m k B p P 8 9 a L J 4 Z a B B T 0 q g 9 s c J d P 6 y e j V a E f p p q 4 S q a B Q 4 N Q v Z E m x N E y J R o S 4 I I W Z G i 6 i A Z E G 0 / E C H b o U 0 H I m Q 2 t N V A h C x G U X W S 9 f C r Q c h O a D O B C J k I b S E Q i d j 0 G I Q M g 7 Y L i C R s W g 2 S s k k y C F k C b Q g Q I S O g b Q A i l P 5 1 8 k d E s H U w C K X 6 o l o t t l a F Q S i t 6 6 S O C C V z n c o R o R S u E z g i l L h 1 2 k a k z a S 6 7 r T w o 2 y q 1 l v / r Y V Z D C v N V A / i D U i f w O i B R U V F f j w c q x r m g o g 0 h k D h + i / B W q l K p R b A B h H B 3 w S J M I h V V f 2 X Y K v n + l u C a i B l y f t f K r H a E o p 1 R C U U 6 p g N q l Q C t S U U 6 I R K K M 6 A S i j M K Z W w u 6 y v K M i v V E I x 3 r O 5 K Z U I 6 5 G X S o C 2 h J P J Z h H F l 7 I p K Z X o b A l F 9 0 A l F F z O Z q p U Q q s n q F Q i s y W c a D b N K L C C S i i u R y q h s O Z U Q l E t q I S C e l p W 3 z B j + p 0 b X K d e 1 B m l X J 1 w E a F E q 9 M s I p R e d X J F h J K q T q m I U C r V i R Q R S q A 6 f S J C a V M n T U Q o W e p U i Q i l S J 0 g E a H E q N M i I p Q O d T J E h J K g T o G I U O r T i Q 8 R S n g 6 3 S F C a U 4 n O U Q o u e n U h g i l N J 3 Q E K F E p t M Y I p S + d P J C h J K W T l m I U K r S i Q o R S l A 6 P S F C a U k n J U Q o G e l U h A i l I J 2 A E P n I V p D S x Z B n i 7 h X Z 4 s e y x Z x 1 2 5 9 x X S r 7 V 8 P r t r D i r s y + 1 i r q A + J U O 9 d 7 M M o 8 n N A U U 1 3 1 A m E d z Q e U E x C 9 Q Q V k l E 6 D p M A G / N n k U L E p L 6 O l 6 V Q D 3 + v Q D 7 X w D C N x j / V z H C + L J t f b k r s n / m m X K f T q j 3 9 8 L o a m j S 2 M x F M / X L X Y q R 
/ u W c x 2 g F y 3 2 K 0 B 2 T H Y r Q L 5 I H F a B / I Q 4 v R T p B H F q O 9 I I 8 t R r t B v r c Y 7 Q d 5 Y j H a E b J r M d o T 8 t R i t C v k m c V o X 8 h z i 9 H O k D 2 L 0 d 6 Q F x a j 3 S E v L U b 7 Q 1 5 Z j H a I 7 F u M 9 o i 8 t h j t E n l j M d o n 8 t Z i t F P k n c V o r 8 g P F q P d I j 9 a z B g 1 F P J h 7 m d T w w b 2 4 + / I + R Q S 7 D K Y d B H s M Z i k E e w z m N Q R d B h M A g k O G E w a C Q 4 Z T D I J j h h M S g m O G U x i C d 4 z m P Q S n D C Y J B N 0 G U y q C U 4 Z T M I J z h h M 2 g n O G U z y C X o M J g U F F w w m E Q W X D C Y d B V c M J i k F f Q a T m o J r B p O g g h s G k 6 a C W w a T r I I 7 B p O y g g 8 M J n E F H x l s P w j g 0 V Z Z N V E / X B k y c Y l d Q k l b Y o 9 Q k p b Y J 1 Q r 6 6 W 3 r 7 / g m A n w f E + A 9 P D W E Y y 9 z q Y 3 h J G v c D k N h f e Y z q I x Q l g C T + i v Q 9 B L z n J P v S i X R t i Q e r s M 5 h l 6 S / 0 d r / 2 m v U N 3 J N G K A 0 J J s + K Q U J K s O C K U F C u O C S X B i v e E k l 7 F C a E k V 9 E l l N Q q T g k l s Y o z Q k m r 4 p x Q k q r o E U p K F R e E k l D F J a G k U 3 F F K M l U 9 A k l l Y p r Q k m k 4 o Z Q 0 q i 4 J Z Q k K u 4 I J Y W K D 4 S S Q M V H Q u v n M w m 6 Q d A f L H z z Z K a y h k C + o O t + J F C m c Y d K K O B d K q F w 9 6 i E g t 2 n E o q p Q y U U 0 Q G V U D y H V E L R H F E J x X J M J R T J e y q h O E 6 o h K L o U g n F c E o l F M E Z l X D x z 6 m E i 9 6 j E i 7 2 B Z V w k S + p h I t 7 R S V c 1 D 6 V c D G v q Y S L e E M l X L x b K u G i 3 V E J F + s D l X C R P r L 7 V f 6 r 8 l 5 q y Y A v m T Q + D A 8 a t a v 1 G 7 G 4 t Q 2 6 6 T 2 G c p r O p I c m y H v E R J d B 7 t o k I J / k e K T q 9 r L W g A 5 c s Y e g T R Q 0 X B R o G w U N H w X a S E H D S Y G 2 U t D w U q D N F D T c F G g 7 B Q 0 / B d p Q Q c N R g b Z U 0 P B U o E 0 V N F w V a F s F D V 8 F 2 l h B w 1 m B t l b Q 8 F a g z R U 0 3 B V o e w U N f w X a Y E H D Y Y G 2 W N D w W K B N F j R c F m i b B Q 2 f B d p o Q c N p g b Z a 0 P B a o M 0 W N N w W a L s F D b 8 F 2 n B B w 3 G B t l z Q 8 F y g T R c 0 X B d o 2 w X M d + H n B 0 x E M p + B N 0 v G k E c L 9 Y b T 2 J e + F 0 A C O e Y g V Q 4 F K n 0 4 U w m p + Q a m r 1 4 S V C 9 q 5 n G p C z o d q l Y h z s I 8 x E T o 1 K / f 3 x 0 u d B L U 7 4 y o m 2 D W b L R t X y e Z + h I / v 7 u 3 c C J 7 P L K 3 b O t M n I 4 h + t Z A d E A 9 E l N a u U 8 V 1 P t W U C b D a A x V 5 E A X 6 t 7 X N f C Y k O l o 6 g v 1 8 r o / k 6 n + X A W 5 0 8 P G S + S Z i a n 7 W F V Z 7 c A Y n D h T b I n L k c B D x 8 a Z I m p h p J + s u c G R n 0 X + C J b 1 6 z f d C l h 6 L 7 3 q 2 p 3 e x k v H S + 5 f X K 4 r 2 B s + 3 S Z 7 u e S 5 v X F q x h m b 4 w Y Z 5 U v 7 N M 4 l c g i W 9 f O 1 J j W S N E Z V C i c h 5 M 2 m R T q R s T + n S A s 0 4 z B Z p P q N J / P o b b W V L J q p 0 T + p 5 w M u e 9 J d 8 t e d T r o r C 3 j j 5 9 Q D V W i 2 L / G P n + P a 5 y m L v F p Z g L 2 0 I F o V l E B v 0 2 i S + 7 F 6 T D V 9 T H O 0 r c J f C O 9 F 9 8 e 3 L 9 S r P v o / f s w S 8 8 q q y H D 9 h X 7 V 7 M U A o o j F 2 M e k L 7 1 d T I C 4 5 R P 1 a 4 H 7 H W L 1 y p v y x q Z R F q 3 e N 0 1 n g c 6 Z 2 i q H E j Z 1 8 y L 1 x i m o 5 h 7 D + z C D c e h v N V 6 y T v M 4 U k / 6 l 2 X 3 x z f L F j J N Q H H b b Z x 8 1 P X e t n G Z Y r I W R m u h + + M g T C Z y 0 d w 6 m Z + r R 8 Z 4 b P h q s 1 w B n r X C D 8 A L E y 9 J K 5 s v Y b 7 l 7 U 1 T o a Y n V Q Z w N P X 2 8 R N x A r 8 X 3 j B N 7 7 f W n Y c 8 5 5 k 6 n d P 8 P 1 D j e a A 7 g H 8 H m + r q W 4 H q n D S B 
e N X e p F Y r h u n f z 0 T 0 U V B 9 9 T p g B H L g D 3 G f R e n j M A f / f t 1 g q T o E M 7 n A Y 3 1 g L w x T w 4 W f 4 / i n e P i v r 3 / 5 b m O 7 + T + V V i 9 u 3 m 5 t v 9 n a v vh h 4 0 + 7 1 f 9 i + t X a v 6 3 9 + 9 q r t e 2 1 P 6 7 9 a e 1 o r b d 2 v T Z a y 9 f + s v b X t b + 9 2 3 r X f / f p 3 W c T + v O f V X X + d c 3 5 e T f 5 P x 0 R 0 B o = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " b 8 n P u CK B + x x W C 4 f 3 E d T H Z p A U f R o = " > A A A 1 r H i c l V v b c h v L d a X t J H a Y O D 5 2 H v M y F U p l O U W x R J 2 j c i p P 5 g W 8 i C A J k u B F E n R U A 2 B j M O L c O N 0 Y A p w g z / 4 E v 9 p / l b / J 7 u 7 p 2 b s H Q 5 0 K q 0 R O r 7 W 7 p y + r e y 8 M R s M s C o V 8 8 + Z / f / b z X / z d 3 / / D L 3 / 1 j + v / 9 M + / / p f f f P f b 3 9 2 I d J a P 4 H q U R m l + N / Q F R G E C 1 z K U E d x l O f j x M I L b 4 f 2 e 4 m 8 L y E W Y J n 2 5 y O B z 7 A d J O A l H v k T o x + x L e T + Q M J f l 2 d l y + e W 7 j T d b b / S P t 3 q x X V 1 s r F U / v S + / / e H P g 3 E 6 m s W Q y F H k C / F p + 0 0 m P 5 d + L s N R B M v 1 w U x A 5 o / u / Q A + 4 W X i x y A + l 7 r b S + 8 l I m N v k u b 4 L 5 G e R n m N 0 o + F W M R D j I x 9 O R V N T o F t 3 K e Z n P z n 5 z J M s p m E Z G R u N J l F n k w 9 N Q f e O M x h J K M F X v i j P M S + e q O p n / s j i T O 1 v v 5 S / X h n n V v v d K d / 5 O 1 3 D o 7 P j v v H 5 2 d X n q b W 2 z q y i X / V M M T m M F 5 i G 9 6 p n 9 9 7 A u + D 8 y y 8 d O K N / M x c q x H n M I E 8 D 5 N A d W o c F q G w Y Z M w m O W A A 0 r g c Z T G s Z + M y w G C E U z k s i w H E H u v u n j 9 h + V y J W a E 6 w C 5 j d r T p b a 4 P A y m d W O X q t A W J d P M x v T T r C 1 i m E q Z x j Z o V 5 d W 4 q p x + z b M f y 5 i a C O G z 0 W M b M T o u Y i x j R i r C F y G I x x d p E b o + R 7 G q 0 W H C W 6 W s Y d z E 7 t t 4 L U C l 5 + 2 P 2 M r w 4 m 3 s a 0 a a Q 5 7 v i w H s Z 8 H K D A / L w + O 7 5 p 9 w W s n B K X U D O m f 7 5 / r + + j d p 6 V f 5 o C 9 V 8 R / m R u 7 b X Z 0 k 3 K a Z u W g 0 2 Q 7 D 8 h 2 v p S D v H g a i D D 2 H v C 6 y K b h 8 p W C / h t / z V e m r J M 9 N W p l q p a c g v S / X a + u N q + q j V + 1 R j 4 8 4 V y 1 d s W N y 1 T c M z d v R M 6 f V i L n K v J p N X I 1 c C V m r I P G 7 a S 6 0 6 u 2 p h + e m q N q R q i J R b i B z j U 6 b 6 A L j S 4 a a K z R u K k C j S b N 2 J l U 4 p h h l + a b n h 3 x S l D m B K m + N 0 L G p i L e x h 9 G P s 1 d M 0 x V Z U F t L S G 2 0 h h i v H e 4 O Q / 0 W W c O Q z y p Y d O L 0 k f I X 4 8 w o 2 2 t D 3 C n 6 t M K J h v b p T k X / 2 e A p V J v j 7 b q e A q E 0 o + 2 v A M 8 Y 4 X E P K S O V K E O Q u R N i w e 2 x Y N m i 5 q W j 6 m 9 5 8 b b 6 q 7 C s 0 E e D q 8 q v L U 1 H m b + m K p s f L / x w 0 q 1 z b q O v f q e N / W D H s 6 V S R b f n A 5 M K K b z V W Z x 5 q O l A T s h p v a V r X 3 V U v v S 1 t J 5 8 j G t k 9 d W P T H m 7 k L P T J 3 a n p m a Z o P T H K D Z J G t v 4 / v V F m n W W N v f r 7 b t J x 7 g I q j K L V M G D 2 b M N u T 5 Q T v t z L I M c k + 1 Y 5 r p V M 1 0 2 p r Z 8 X L / k e a 9 0 d j r 1 6 / 9 I g 3 H 3 k y o j B 9 O v C w V I k S P Z p r O I h 8 z U t X + 8 7 1 T J i X D B N U y R s W Y 6 l X M / 3 u Q V U N 7 d U N 7 P 9 k Q j j k J Q F s b E y t M G x q u e 4 R S s b R t 6 v X r Z 2 W C v f O j I E V T N o 1 b x o m c 6 V 0 d 9 M 2 B s q Z W R r p j m 9 p p a c o K 3 t 4 P B 1 G 3 9 e 3 D o O 9 U 2 v n J S i u T i o Z B V i N n 6 l O o 6 a 6 6 + t a i m P p N 9 f b q + j 2 3 v h 1 p f Q P s t b p + t s O V 4 C C M l 
F g j d Y F 2 B Q P U V d X e J E r T X N P 6 y v D 6 s g p A a h i X K y Z H 5 r g R K p 8 z 8 q N y v x l Q + F E 4 5 g F f z H U e l 4 Z a r j Q J Q r Z X 0 M y y H h F k Q l n H T I R R m m j b h 1 O L T a S x V / h 5 i E k M r L 4 x f 5 X G u C V p H m O r L w Y I v V j a 6 c w b t E / M 0 G W G x I x c Z k T M 2 G X G x I D L A D E T l 5 k Q E 7 h M Q M z U Z a b E h C 4 T E v P V Z b 4 S c + 8 y 9 8 R E L h M t t Y z z 2 A s F 7 l j 8 9 D p e q M P O r O C m 9 3 U m p D d O k 9 9 L T 3 1 + R D k u 1 M n j L I w X V 2 0 n b t s J 3 T V 1 m Z S Y z G U y Y h 5 c 5 o G Y 3 G V y Y o T L C G K k y 0 h i Z i 4 z I 6 Z w m Y K Y R 5 d 5 J G b u M n N i F i 6 z I O b J Z Z 6 W x q D Z D Y C Z O a 2 P 9 6 L a J K X Z S s M J 2 z Z 1 v 7 X L Y x G V 6 6 t 5 x n F 4 S D D b G 8 W I Y L Y x i j H B b F c U Q D D b E s W E Y L Y f i o B g t h m K K c F s J x Q z g t k 2 K L 4 S z P Z A c U 8 w 2 w B F R H D E 4 J j g m M F s o v k M p w Q z M R c Z w U z J x Q P B T M Z F T j D T c C E I F n x R C Z b t c 8 K l W x D M d F s 8 E s x E W 8 w J Z o o t F g Q z u R Z P B F u t d i J Q z 6 H 0 Q 5 S 8 R b d g R N d 6 L o N R X u v J D E Z + r W c z G A 2 2 n s 5 g h N h 6 P o N R Y + s J D U a S r W c 0 G F 2 2 n t L I P X t O g 1 F o 6 0 k N R q a t Z z U Y r T Z P a 8 v F L h d z 7 t m T G I x 0 W 8 9 i M P p t P Y 3 B i L j 1 P A a j 5 N Y T G Y y c W 8 9 k M J p u P Z X B C L v 1 X A a j 7 t a T G Y z E W 8 9 m M D p v P Z 3 B i L 3 1 f A a j + O d P a N w L e T i q H U q 8 Q / t j h 7 Z N v E v w L o P 3 C N 5 j 8 D 7 B + w z u E N x h 8 A H B B w w + J P i Q w U c E H z H 4 m O B j B r 8 n + D 2 D T w g + Y X C X 4 C 6 D T w k + Z f A Z w W c M P i f 4 n M E 9 g n s M v i D 4 g s G X B F 8 y + I r g K w b 3 C e 4 z + J r g a w b f E H z D 4 F u C b x l 8 R / A d g z 8 Q / I H B H w n + + P z x 6 o o O j O q Y R n e Y f r X 0 G L f L u T 2 X 2 + P c v s v t c 6 7 j c h 3 O H b j c A e c O X e 6 Q c 0 c u d 8 S 5 Y 5 c 7 5 t x 7 l 3 v P u R O X O + F c 1 + W 6 n D t 1 u V P O n b n c G e f O X e 6 c c z 2 X 6 3 H u w u U u O H f p c p e c u 3 K 5 K 8 7 1 X a 7 P u W u X u + b c j c v d c O 7 W 5 W 4 5 d + d y d 5 z 7 4 H I f O P f R 5 a z s b 7 i F K J 5 A f 4 7 A z 6 5 v 6 r p F m k B p P 8 9 a L J 4 Z a B B T 0 q g 9 s c J d P 6 y e j V a E f p p q 4 S q a B Q 4 N Q v Z E m x N E y J R o S 4 I I W Z G i 6 i A Z E G 0 / E C H b o U 0 H I m Q 2 t N V A h C x G U X W S 9 f C r Q c h O a D O B C J k I b S E Q i d j 0 G I Q M g 7 Y L i C R s W g 2 S s k k y C F k C b Q g Q I S O g b Q A i l P 5 1 8 k d E s H U w C K X 6 o l o t t l a F Q S i t 6 6 S O C C V z n c o R o R S u E z g i l L h 1 2 k a k z a S 6 7 r T w o 2 y q 1 l v / r Y V Z D C v N V A / i D U i f w O i B R U V F f j w c q x r m g o g 0 h k D h + i / B W q l K p R b A B h H B 3 w S J M I h V V f 2 X Y K v n + l u C a i B l y f t f K r H a E o p 1 R C U U 6 p g N q l Q C t S U U 6 I R K K M 6 A S i j M K Z W w u 6 y v K M i v V E I x 3 r O 5 K Z U I 6 5 G X S o C 2 h J P J Z h H F l 7 I p K Z X o b A l F 9 0 A l F F z O Z q p U Q q s n q F Q i s y W c a D b N K L C C S i i u R y q h s O Z U Q l E t q I S C e l p W 3 z B j + p 0 b X K d e 1 B m l X J 1 w E a F E q 9 M s I p R e d X J F h J K q T q m I U C r V i R Q R S q A 6 f S J C a V M n T U Q o W e p U i Q i l S J 0 g E a H E q N M i I p Q O d T J E h J K g T o G I U O r T i Q 8 R S n g 6 3 S F C a U 4 n O U Q o u e n U h g i l N J 3 Q E K F E p t M Y I p S + d P J C h J K W T l m I U K r S i Q o R S l A 6 P S F C a U k n J U Q o G e l U h A i l I J 2 A E P n I V p D S x Z B n i 7 
h X Z 4 s e y x Z x 1 2 5 9 x X S r 7 V 8 P r t r D i r s y + 1 i r q A + J U O 9 d 7 M M o 8 n N A U U 1 3 1 A m E d z Q e U E x C 9 Q Q V k l E 6 D p M A G / N n k U L E p L 6 O l 6 V Q D 3 + v Q D 7 X w D C N x j / V z H C + L J t f b k r s n / m m X K f T q j 3 9 8 L o a m j S 2 M x F M / X L X Y q R / u W c x 2 g F y 3 2 K 0 B 2 T H Y r Q L 5 I H F a B / I Q 4 v R T p B H F q O 9 I I 8 t R r t B v r c Y 7 Q d 5 Y j H a E b J r M d o T 8 t R i t C v k m c V o X 8 h z i 9 H O k D 2 L 0 d 6 Q F x a j 3 S E v L U b 7 Q 1 5 Z j H a I 7 F u M 9 o i 8 t h j t E n l j M d o n 8 t Z i t F P k n c V o r 8 g P F q P d I j 9 a z B g 1 F P J h 7 m d T w w b 2 4 + / I + R Q S 7 D K Y d B H s M Z i k E e w z m N Q R d B h M A g k O G E w a C Q 4 Z T D I J j h h M S g m O G U x i C d 4 z m P Q S n D C Y J B N 0 G U y q C U 4 Z T M I J z h h M 2 g n O G U z y C X o M J g U F F w w m E Q W X D C Y d B V c M J i k F f Q a T m o J r B p O g g h s G k 6 a C W w a T r I I 7 B p O y g g 8 M J n E F H x l s P w j g 0 V Z Z N V E / X B k y c Y l d Q k l b Y o 9 Q k p b Y J 1 Q r 6 6 W 3 r 7 / g m A n w f E + A 9 P D W E Y y 9 z q Y 3 h J G v c D k N h f e Y z q I x Q l g C T + i v Q 9 B L z n J P v S i X R t i Q e r s M 5 h l 6 S / 0 d r / 2 m v U N 3 J N G K A 0 J J s + K Q U J K s O C K U F C u O C S X B i v e E k l 7 F C a E k V 9 E l l N Q q T g k l s Y o z Q k m r 4 p x Q k q r o E U p K F R e E k l D F J a G k U 3 F F K M l U 9 A k l l Y p r Q k m k 4 o Z Q 0 q i 4 J Z Q k K u 4 I J Y W K D 4 S S Q M V H Q u v n M w m 6 Q d A f L H z z Z K a y h k C + o O t + J F C m c Y d K K O B d K q F w 9 6 i E g t 2 n E o q p Q y U U 0 Q G V U D y H V E L R H F E J x X J M J R T J e y q h O E 6 o h K L o U g n F c E o l F M E Z l X D x z 6 m E i 9 6 j E i 7 2 B Z V w k S + p h I t 7 R S V c 1 D 6 V c D G v q Y S L e E M l X L x b K u G i 3 V E J F + s D l X C R P r L 7 V f 6 r 8 l 5 q y Y A v m T Q + D A 8 a t a v 1 G 7 G 4 t Q 2 6 6 T 2 G c p r O p I c m y H v E R J d B 7 t o k I J / k e K T q 9 r L W g A 5 c s Y e g T R Q 0 X B R o G w U N H w X a S E H D S Y G 2 U t D w U q D N F D T c F G g 7 B Q 0 / B d p Q Q c N R g b Z U 0 P B U o E 0 V N F w V a F s F D V 8 F 2 l h B w 1 m B t l b Q 8 F a g z R U 0 3 B V o e w U N f w X a Y E H D Y Y G 2 W N D w W K B N F j R c F m i b B Q 2 f B d p o Q c N p g b Z a 0 P B a o M 0 W N N w W a L s F D b 8 F 2 n B B w 3 G B t l z Q 8 F y g T R c 0 X B d o 2 w X M d + H n B 0 x E M p + B N 0 v G k E c L 9 Y b T 2 J e + F 0 A C O e Y g V Q 4 F K n 0 4 U w m p + Q a m r 1 4 S V C 9 q 5 n G p C z o d q l Y h z s I 8 x E T o 1 K / f 3 x 0 u d B L U 7 4 y o m 2 D W b L R t X y e Z + h I / v 7 u 3 c C J 7 P L K 3 b O t M n I 4 h + t Z A d E A 9 E l N a u U 8 V 1 P t W U C b D a A x V 5 E A X 6 t 7 X N f C Y k O l o 6 g v 1 8 r o / k 6 n + X A W 5 0 8 P G S + S Z i a n 7 W F V Z 7 c A Y n D h T b I n L k c B D x 8 a Z I m p h p J + s u c G R n 0 X + C J b 1 6 z f d C l h 6 L 7 3 q 2 p 3 e x k v H S + 5 f X K 4 r 2 B s + 3 S Z 7 u e S 5 v X F q x h m b 4 w Y Z 5 U v 7 N M 4 l c g i W 9 f O 1 J j W S N E Z V C i c h 5 M 2 m R T q R s T + n S A s 0 4 z B Z p P q N J / P o b b W V L J q p 0 T + p 5 w M u e 9 J d 8 t e d T r o r C 3 j j 5 9 Q D V W i 2 L / G P n + P a 5 y m L v F p Z g L 2 0 I F o V l E B v 0 2 i S + 7 F 6 T D V 9 T H O 0 r c J f C O 9 F 9 8 e 3 L 9 S r P v o / f s w S 8 8 q q y H D 9 h X 7 V 7 M U A o o j F 2 M e k L 7 1 d T I C 4 5 R P 1 a 4 H 7 H W L 1 y p v y x q Z R F q 3 e N 0 1 n g c 6 Z 2 i q H E j Z 1 8 y L 1 x i m o 5 h 7 D + z C D c e h v N V 6 y T v M 4 U k / 
6 l 2 X 3 x z f L F j J N Q H H b b Z x 8 1 P X e t n G Z Y r I W R m u h + + M g T C Z y 0 d w 6 m Z + r R 8 Z 4 b P h q s 1 w B n r X C D 8 A L E y 9 J K 5 s v Y b 7 l 7 U 1 T o a Y n V Q Z w N P X 2 8 R Nx A r 8 X 3 j B N 7 7 f W n Y c 8 5 5 k 6 n d P 8 P 1 D j e a A 7 g H 8 H m + r q W 4 H q n D S B e N X e p F Y r h u n f z 0 T 0 U V B 9 9 T p g B H L g D 3 G f R e n j M A f / f t 1 g q T o E M 7 n A Y 3 1 g L w x T w 4 W f 4 / i n e P i v r 3 / 5 b m O 7 + T + V V i 9 u 3 m 5 t v 9 n a v v h h 4 0 + 7 1 f 9 i + t X a v 6 3 9 + 9 q r t e 2 1 P 6 7 9 a e 1 o r b d 2 v T Z a y 9 f + s v b X t b + 9 2 3 r X f / f p 3 W c T + v O f V X X + d c 3 5 e T f 5 P x 0 R 0 B o = < / l a t e x i t > p = p kNN + (1 )p NLM < l a t e x i t s h a 1 _ b a s e 6 4 = " 0 I Z A 8 q I s 4 y t Y 0 X N A E o H 9 F 4 p C y y U = " > A A A 1 0 X i c lV t b b + N K c v Z u b h s n m 5 x N H v N C x D P Y 2 V 2 P M Z 5 z B g k C B F h f 5 M t Y v k q + z B z N G V B S i e K Y N 7 N b t G R G i 8 U + L Z A f k F + T 1 + Q 3 5 N 9 s d T e b V U 3 R c x A f H J v 9 f d X F v n z V V a I 4 w y w K h X z z 5 v 9 + 8 t M / + / O / + M u / + t l f r / / N 3 / 7 8 7 / 7 + m 1 / 8 w 4 1 I Z / k I r k d p l O Z 3 Q 1 9 A F C Z w L U M Z w V 2 W g x 8 P I 7 g d 3 u 8 p / r a A X I R p 0 p e L D D 7 F f p C E k 3 D k S 4 Q + f 7 O d e f / u D f C / 7 H N 5 P 5 A w l + X Z 2 X L p / c Z 7 t e 2 9 1 s y v k K q Y 7 u l y + f m b j T d b b / S P t 3 q x X V 1 s r F U / F 5 9 / 8 d 0 f B + N 0 N I s h k a P I F + L 7 7 T e Z / F T 6 u Q x H E S z X B z M B m T + 6 9 w P 4 H i 8 T P w b x q d R z W 3 o v E R l 7 k z T H / x P p a Z T 3 K P 1 Y i E U 8 R M v Y l 1 P R 5 B T Y x n 0 / k 5 N / / V S G S T a T k I z M j S a z y J O p p x b K G 4 c 5 j G S 0 w A t / l I c 4 V m 8 0 9 X N / J H E 5 1 9 d f q h / vr H P r n e 7 0 j 7 z 9 z s H x 2 X H / + P y s 5 2 l qv W 0 g m / h X T U N s D u M l + v B O / f z e E 3 g f 3 A z h p R N v 5 G f m W s 0 4 h w n k e Z g E a l D j s A i F N Z u E w S w H n F A C j 6 M 0 j v 1 k X A 4 Q j G A i l 2 U 5 g N h 7 1 c X r X y 2 X K z Y j 3 A f I r d W e b r X Z 5 W E w r Z 1 d q U a b l U w z a 9 N P s z a L Y S p l G l u j X d 1 a s a v m 7 V s z / z m L o b U Y P m c x s h a j 5 y z G 1 m K s L H A b j n B 2 k Z q h 5 3 t o r z Y d J h h R Y w / X J n Z 9 4 L U C l 9 9 v f 0 I v w 4 m 3 s a 2 c N K c 9 X 5 a D 2 M 8 D F J i f l w f H d 8 2 x 4 L V j g l J q m v T P 9 8 / 1 f X T 4 a e m X O e D o F f F v 5 s a u z 4 5 2 K a d p V g 4 6 T b b z g G w H g z k v n g Y i j L 0 H v C 6 y a b h 8 p a D / w F / z l S X r Z E + N X u o 4 K O Q U p P / 1 f n W 3 e d V t / K r V 8 u E J 1 6 p 1 K K 5 d p u y e u X n D c v 6 0 Y j l X l k + r l q u G K z Z j b T R u J 9 W d X r W 5 f n h q z q p p o R Y W 4 Q Y 6 1 + i 8 g S 4 0 u m i g s U b j p g o 0 m j R t Z 1 K J Y 4 Z D m m 9 6 d s Y r R p l j p M b e M B m b j n g b f x j 5 t H Z N M 9 W V G b V 5 Q m z F G W J 8 d B i c B / q s M 4 c h n t S w 6 U X p I + S v R 5 j 2 t t Y H G K n 6 t I L J x n Z p z s X f D b B V 6 v B o 6 4 6 n Q C j 9 a M s 7 w D N W S M x D 6 k g V 6 i B E 3 n g 8 s B 4 P m h 4 1 L R 9 T e 8 + N t 9 V d h W e N P J x e 1 X h r e z z M / D F 1 2 f h 2 4 7 u V b p t 1 H 3 v 1 L X f 1 n Z 5 O z y S L r y 4 H J h Q z + C q z O O v R 4 s A u i O n d s 7 1 7 L b 2 v b C + d J x / T O n l t 1 Q t j 7 i 7 0 y t S p 7 Z m l a T q c 5 g B N l 8 z f x r e r H m n V m O 9 v V 3 3 7 i Q e 4 C a p z y 5 L B g 5 m z N X l + 0 o 6 f W Z Z B 7 i k / x k 2 n c t N p c 7 P j 5 f 4 j r X v D 2 e v X r / 0 i D c f e T K i M H 0 6 8 L B U i x E 
L O u M 4 i H z N S 5 f / 5 0 a k i J c M E 1 T J H x Z j u l c 3 / e 5 K V o 7 3 a 0 d 6 P O s I 5 J w H o 0 s b Y C u N D w / W I U C q W t q 5 e v 3 5 W J j g 6 P w p S L M q m c c s 8 k T O j q 4 2 + O l H m a m W m O 9 b V T o s r K 3 h 7 P 5 x E 7 e v r h 0 H f 6 b T z o 5 1 W F h U L B l n N n K l P o W a 4 6 u p r m 2 L 6 N 9 V 7 U f e / c P v b m d Y 3 w F G r 6 2 c H X A k O w k i J N V I X W K 6 g g b q q / E 2 i N M 0 1 r a 8 M r y 8 r A 6 S G c b l S 5 M g c A 6 G q c 0 Z + V O 4 3 D Q o / C s f c 4 L O 5 z u P S U M s V l y B k e w f N L O s Z Q S Z U 6 Z i J M E o T X f b h 0 q K L N P Y K P w 8 x i Y H V N + a v 0 h R u S Z r H 6 P X F A K E X S 7 u c e Y P 2 i R m 6 z J C Y k c u M i B m 7 z J g Y c B k g Z u I y E 2 I C l w m I m b r M l J j Q Z U J i v r j M F 2 L u X e a e m M h l o q W W c R 5 7 o c C I x Y + 4 4 4 U 6 7 M w O b n p f Z k J 6 4 z T 5 p f T U 5 0 e U 4 0 K d P M 7 G e H H l O 3 F 9 J 3 T X 1 G V S Y j K X y Y h 5 c J k H Y n K X y Y k R L i O I k S 4 j i Z m 5 z I y Y w m U K Y h 5 d 5 p G Y u c v M i V m 4 z I K Y J 5 d 5 W p o C z Q Y A Z u a 0 P t 6 L K k h K E 0 r D C Q u b e t y 6 y m M W V d V X 8 4 z j 8 J B g F h v F i G A W G M W Y Y B Y V B R D M Q q K Y E M z i o Q g I Z s F Q T A l m k V D M C G Z h U H w h m M V A c U 8 w C 4 A i I j h i c E x w z G C 2 0 H y F U 4 K Z m I u M Y K b k 4 o F g J u M i J 5 h p u B A E C 7 6 p B M v 2 N e H S L Q h m u i 0 e C W a i L e Y E M 8 U W C 4 K Z X I s n g q 1 W O x G o 5 1 D 6 I U r e o l s w o m s 9 l 8 E o r / V k B i O / 1 r M Z j A Z b T 2 c w Q m w 9 n 8 G o s f W E B i P J 1 j M a j C 5 b T 2 n k n j 2 n w S i 0 9 a Q G I 9 P W s x q M V p u n t e V i l 4 s 5 9 + x J D E a 6 r W c x G P 2 2 n s Z g R N x 6 H o N R c u u J D E b O r W c y G E 2 3 n s p g h N 1 6 L o N R d + v J D E b i r W c z G J 2 3 n s 5 g x N 5 6 P o N R / P M n N M Z C H o 7 q C i X e o f j Y o b C J d w n e Z f A e w X s M 3 i d 4 n 8 E d g j s M P i D 4 g M G H B B 8 y + I j g I w Y f E 3 z M 4 P c E v 2 f w C c E n D O 4 S 3 G X w K c G n D D 4 j + I z B 5 w S f M / i C 4 A s G X x J 8 y e A r g q 8 Y 3 C O 4 x + A + w X 0 G X x N 8 z e A b g m 8 Y f E v w L Y P vC L 5 j 8 A e C P z D 4 I 8 E f n z 9 e X d G B U R 3 T 6 A 7 T r 5 Y e 4 3 Y 5 t + d y e 5 z b d 7 l 9 z n V c r s O 5 A 5 c 7 4 N y h y x 1 y 7 s j l j j h 3 7 H L H n H v v c u 8 5 d + J y J 5 z r u l y X c 6 c u d 8 q 5 M 5 c 7 4 9 y 5 y 5 1 z 7 s L l L j h 3 6 X K X n L t y u S v O 9 V y u x 7 m + y / U 5 d + 1 y 1 5 y 7 c b k b z t 2 6 3 C 3 n 7 l z u j n M f X O 4 D 5 z 6 6 n J X 9 D S 8 h i i f Q n y P w s+ u b u m + R J l D a z 7 M W i 2 c G G s S U N O q a W O F u P a y e j V a E f p p q 4 c q a G Q 4 N Q u W J L k 4 Q o a J E l y S I U C l S V A O k A k S X H 4 h Q 2 a G L D k S o 2 N C l B i J U Y h T V I N k I v x i E y g l d T C B C R Y Q u I R C J 2 P I Y h A o G X S 4 g k r B l N U j K F s k g V B L o g g A R K g R 0 G Y A I p X + d / B E R b B 8 M Q q m + q H a L 7 V V h E E r r O q k j Q s l c p 3 J E K I X r B I 4 I J W 6 d t h F p K 1 L d 6 r T w o 2 y q 9 l v / r Y V Z D C v N V A / i D U i f w O i B R U V F f j w c q x 7 m g o g 0 h k D h + i / B W q l K p R Z A h 4 j g b 4 J E G M S q q / 5 L s N V z / S 1 B N Z G y 5 O M v l V h t C 8 U 6 o h Y K d c w m V S q B 2 h Y K d E I t F G d A L R T m l F o 4 X D Z W F O Q X a q E Y 7 9 n a l E q E 9 c x L J U D b w s V k q 4 j i S 9 m S l E p 0 t o W i e 6 A W C i 5 n K 1 U q o d U L V C q R 2 R Y u N F t m F F h B L R T X I 7 V Q W H N q o a g W 1 E J B P S 2 r b 5 g x / c 4 N r l M v 6 o x S 
r k 6 4 i F C i 1 W k W E U q v O r k i Q k l V p 1 R E K J X q R I o I J V C d P h G h t K m T J i K U L H W q R I R S p E 6 Q i F B i 1 G k R E U q H O h k i Q k l Q p 0 B E K P X p x I c I J T y d 7 h C h N K e T H C K U 3 H R q Q 4 R S m k 5 o i F A i 0 2 k M E U p f O n k h Q k l L p y x E K F X p R I U I J S i d n h C h t K S T E i K U j H Q q Q o R S k E 5 A i H x k O 0 j p Y s i z R X x R Z 4 s L l i 3 i r g 1 9 x X S r 8 K 8 n V 8 W w 4 n o m j r W K + p A I 9 d 7 F P o w i P w c U 1 X R H n U B 4 R 1 M D i k m o n q B C M k r H Y R K g M 3 8 W K U R M 6 u t 4 W Q r 1 8 L c H 8 j k H w z Q a / 5 i b 4 X x Z N r / c l D g + 8 0 2 5 T q e V P / 3 w u p q a N G V n I p j 6 5 a 7 F S P 9 y z 2 I U A X L f Y h Q D s m M x i g J 5 Y D G K A 3 l o M Y o E e W Q x i g V 5 b D G K B v n e Y h Q P 8 s R i F B G y a z G K C X l q M Y o K e W Y x i g t 5 b j G K D H l h M Y o N e W k x i g 5 5 Z T G K D 9 m z G E W I 7 F u M Y k R e W 4 y i R N 5 Y j O J E 3 l q M I k X e W Y x i R X 6 w G E W L / G g x U 6 i h k A 9 z P 5 s a N r A f f 0 f O p 5 B g l 8 G k i 2 C P w S S N Y J / B p I 6 g w 2 A S S H D A Y N J I c M h g k k l w x G B S S n D M Y B J L 8 J 7 B p J f g h M E k m a D L Y F J N c M p g E k 5 w x m D S T n D O Y J J P c M F g U l B w y W A S U X D F Y N J R 0 G M w S S n o M 5 j U F F w z m A Q V 3 D C Y N B X c M p h k F d w x m J Q V f G A w i S v 4 y G D 7 Q Q C P t q p U E / X D l S E T l 9 g l l L Q l 9 g g l a Y l 9 Q r W y X n r 7 + g u O m Q D P 9 w R I D 2 8 d w d j r b H p D G P k K l 9 N Q e I / p L B o j h C 3 w h P 4 6 B G v J W e 6 p F + X S C B 2 p t 8 t g n m F t q b / j t d + 0 d + i O J F p x Q C h p V h w S S p I V R 4 S S Y s U x o S R Y 8 Z 5 Q 0 q s 4 I Z T k K r q E k l r F K a E k V n F G K G l V n B N K U h U X h J J S x S W h J F R x R S j p V P Q I J Z m K P q G k U n F N K I l U 3 B B K G h W 3 h J J E x R 2 h p F D x g V A S q P h I a P 1 8 J s F q E P Q H C 9 8 8 m a l K Q 6 C 6 o O t + J F B F 4 w 6 1 U M C 7 1 E L h 7 l E L B b t P L R R T h 1 o o o g N q o X g O q Y W i O a I W i u W Y W i i S 9 9 R C c Z x Q C 0 X R p R a K 4 Z R a K I I z a u H m n 1 M L N / 2 C W r j Z l 9 T C T b 6 i F m 5 u j 1 q 4 q X 1 q 4 W Z e U w s 3 8 Y Z a u H m 3 1 M J N u 6 M W b t Y H a u E m f W T 3 q + q v q v Z S W w Z 8 y 6 S p w / C g U V G t 3 4 j F 0 D b o p v c Y y m k 6 k x 4 W Q d 4 j J r o M c r d M A q q T n B q p u r 2 s N a A N V 8 p D 0 E U U N K o o 0 G U U N O o o 0 I U U N C o p 0 K U U N G o p 0 M U U N K o p 0 O U U N O o p 0 A U V N C o q 0 C U V N G o q 0 E U V N K o q 0 G U V N O o q 0 I U V N C o r 0 K U V N G o r 0 M U V N K o r 0 O U V N O o r 0 A U W N C o s 0 C U W N G o s 0 E U W N K o s 0 G U W N O o s 0 I U W N C o t 0 K U W N G o t 0 M U W N K o t 0 O U W N O o t 0 A U X N C o u 0 C U X N G o u 0 E U X N K o u 0 G U X s L o L P z 9 g I p L 5 D L x Z M o Y 8 W q g 3 n M a + 9 L 0 A E s g x B 6 l 2 K F D p w 5 l K S M 0 3 M H 3 1 k q B 6 U T O P S 9 3 Q 6 V B 5 h T g L 8 x A T o d O / f n 9 3 u N B J U L 8 z o m 6 C W b P h 2 7 5 O M v U l f n 5 3 b + F Y X n D L i 2 X b Y O J 0 D N H X J q I N 6 p m Y 1 s p 9 K q O L r x l l M o z G U F k O d K M e f d 0 D j w m Z j q a + U C + v + z O Z 6 s 9 V k D s j b L x E n h m b e o x V l 9 U B j M G x M 8 0 W u x w J P H S s n W m i F k b 6 y Z p r H P l Z 5 I 9 g W b 9 + 0 6 2 A p f f S q 6 7 d 5 W 2 8 d L z k 9 Y v L d Q V 7 w 6 f b Z K + W P L c 3 T s 0 4 Y 2 v c I K N 8 a Z / G u U Q O w b J + v t a k R p L m q F r h J I S 8 6 V q k E x n 7 c 7 K 0 Q N M O k 0 W q 3 3 g y j 9 5 W v W T R T 
M 3 + S T 0 f c N m T 7 p K / 7 n T S X d n A G z + n E a h G 0 7 / E P 3 6 O e 5 + n z L K 3 s g F 7 a U G 0 a i i B 3 q b R J P d j 9 Z h q + p j m W L Y K f y G 8 F 9 0 f 3 r 5 Q r / r o f / g x S 8 w r q y L D / R f 6 V b M X A 4 g i Z m M f k 7 7 0 d j E B Y s g n 6 t c C 4 x 1 i 9 c q b q o 2 N U 2 a t 3 j d N Z 4 H O m b p U D i V s a v c i 9 c Y p K H e P 4 X 2 Y w T j 0 t x o v W a d 5 H K k n / c u y + 8 O b Z Q u Z J q C 4 7 T Z O P u p + b 9 u 4 T D F Z C 6 O 1 0 P 1 h E C Y T u W i G T u b n 6 p E x H h u + C p Y e 4 F k r / A C 8 M P G S t C r z J c y 3 v L 1 p K t T y p K o A H E 2 9 f f x E n M A v h T d M 0 / u t d e c h z 3 m m T u c 0 / z V q P A / 0 A P D v Y F N d f c 1 Q n Z P G E K / a X W q 1 o p n + / Y x F H w X V V 6 8 D R i A H / h D j L E o f h z n 4 9 + s G S 9 U h m M k F H u s D e 2 G Y G i 7 8 H O c / x c N / f f 3 z N x v b z X + p t H p x 8 3 Z r + 8 3 W 9 u V 3 G 7 / d r f 4 V 0 8 / W / m n t n 9 d e r W 2 v / c va b 9 e O 1 i 7 W r t d G a / + 1 9 t 9 r / 7 P 2 v + 9 6 7 x b v f v / u D 8 b 0 p z + p + v z j m v P z 7 j / / B E y l 2 6 c = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " 0 I Z A 8 q I s 4 y t Y 0 X N A E o H 9 F 4 p C y y U = " > A A A 1 0 X i c lV t b b + N K c v Z u b h s n m 5 x N H v N C x D P Y 2 V 2 P M Z 5 z B g k C B F h f 5 M t Y v k q + z B z N G V B S i e K Y N 7 N b t G R G i 8 U + L Z A f k F + T 1 + Q 3 5 N 9 s d T e b V U 3 R c x A f H J v 9 f d X F v n z V V a I 4 w y w K h X z z 5 v 9 + 8 t M / + / O / + M u / + t l f r / / N 3 / 7 8 7 / 7 + m 1 / 8 w 4 1 I Z / k I r k d p l O Z 3 Q 1 9 A F C Z w L U M Z w V 2 W g x 8 P I 7 g d 3 u 8 p / r a A X I R p 0 p e L D D 7 F f p C E k 3 D k S 4 Q + f 7 O d e f / u D f C / 7 H N 5 P 5 A w l + X Z 2 X L p / c Z 7 t e 2 9 1 s y v k K q Y 7 u l y + f m b j T d b b / S P t 3 q x X V 1 s r F U / F 5 9 / 8 d 0 f B + N 0 N I s h k a P I F + L 7 7 T e Z / F T 6 u Q x H E S z X B z M B m T + 6 9 w P 4 H i 8 T P w b x q d R z W 3 o v E R l 7 k z T H / x P p a Z T 3 K P 1 Y i E U 8 R M v Y l 1 P R 5 B T Y x n 0 / k 5 N / / V S G S T a T k I z M j S a z y J O p p x b K G 4 c 5 j G S 0 w A t / l I c 4 V m 8 0 9 X N / J H E 5 1 9 d f q h / vr H P r n e 7 0 j 7 z 9 z s H x 2 X H / + P y s 5 2 l qv W 0 g m / h X T U N s D u M l + v B O / f z e E 3 g f 3 A z h p R N v 5 G f m W s 0 4 h w n k e Z g E a l D j s A i F N Z u E w S w H n F A C j 6 M 0 j v 1 k X A 4 Q j G A i l 2 U 5 g N h 7 1 c X r X y 2 X K z Y j 3 A f I r d W e b r X Z 5 W E w r Z 1 d q U a b l U w z a 9 N P s z a L Y S p l G l u j X d 1 a s a v m 7 V s z / z m L o b U Y P m c x s h a j 5 y z G 1 m K s L H A b j n B 2 k Z q h 5 3 t o r z Y d J h h R Y w / X J n Z 9 4 L U C l 9 9 v f 0 I v w 4 m 3 s a 2 c N K c 9 X 5 a D 2 M 8 D F J i f l w f H d 8 2 x 4 L V j g l J q m v T P 9 8 / 1 f X T 4 a e m X O e D o F f F v 5 s a u z 4 5 2 K a d p V g 4 6 T b b z g G w H g z k v n g Y i j L 0 H v C 6 y a b h 8 p a D / w F / z l S X r Z E + N X u o 4 K O Q U p P / 1 f n W 3 e d V t / K r V 8 u E J 1 6 p 1 K K 5 d p u y e u X n D c v 6 0 Y j l X l k + r l q u G K z Z j b T R u J 9 W d X r W 5 f n h q z q p p o R Y W 4 Q Y 6 1 + i 8 g S 4 0 u m i g s U b j p g o 0 m j R t Z 1 K J Y 4 Z D m m 9 6 d s Y r R p l j p M b e M B m b j n g b f x j 5 t H Z N M 9 W V G b V 5 Q m z F G W J 8 d B i c B / q s M 4 c h n t S w 6 U X p I + S v R 5 j 2 t t Y H G K n 6 t I L J x n Z p z s X f D b B V 6 v B o 6 4 6 n Q C j 9 a M s 7 w D N W S M x D 6 k g V 6 i B E 3 n g 8 s B 4 P m h 4 1 L R 9 T e 8 + N t 9 V d h W e N P J x e 1 X h r e z z M / D F 
N c B C h u 1 q 0 t t c U U U j u v G L l W h L U p m u Y 3 p Z X l b x C C T M k t s 0 I 4 u L c V V 4 / Z t m P 9 c x M B G D J 6 L C G x E 8 F z E 0 E Y M V Q Q u w y G O L l Y j 9 H w P 4 9 W i w w h 3 y d D D u U n c N v B a g Y s f t j 5 h K 4 O R t 7 a l G m k O e 7 q Y 9 x O / C F F g f j H f P 7 p r 9 g W v n R C U U j O k d 7 Z 3 p u / T l z C V W v r z A r D 3 i v g 3 c 2 O 3 z Y 5 u U o 6 z f N 7 v N N n O A 7 K d z / N + U T 7 1 R Z R 4 D 3 h d 5 u N o 8 U p B / 4 6 / p k t T 1 s m f G r V y V U u O Q f p f r 1 d X m 1 b V h q 9 a I x + e c K 5 a u + L G 5 S r u m Z s 3 I q d P S 5 F T F f m 0 H L k c u B Q z 1 E H D d l L d 6 V V b 0 w 9 P z V E 1 I 9 T E I t x A p x q d N t C Z R m c N N N F o 0 l S B R t N m 7 E Q q c U y w S 9 N 1 z 4 5 4 K S h 3 g l T f G y F D U x F v 4 w 9 i n + a u G a a q s q C 2 l h B b a g w x 3 j v c n P v 6 r D O H I Z 7 U s O 7 F 2 S M U r w N M Z R u r f d y p + r S C 0 d r W 3 J y L / 9 H H 0 l x v j 7 b q e A p E 0 o 8 3 v H 0 8 Y 4 X E P K S O V K E O Q u R N i / u 2 xf 1 m i 5 q W j 5 m 9 5 9 q b 6 q 7 C s 0 E e D q 8 q v L E 1 H i b + k K q s v V 3 7 b q n a e l 3 H X r 3 l T X 2 n h 3 N l k s V X p w M T i u l 8 l V m c + W h p w E 6 I q X 1 l a 1 + 1 1 L 6 0 t X S e f M z q 5 L V R T 4 y 5 u 9 A z U 6 e 2 Z 6 a m 2 e C 4 A G g 2 y d p b e 7 v c I s 0 a a / v t c t t+ 6 g E u g q r c M m X w Y M Z s Q 5 4 f t N P O J M + h 8 F Q 7 p p l O 1 U y n r Z l t r / A f a d 4 b j b 1 + / d o v s 2 j o T Y T K + N H I y z M h I j R n p u k 8 9 j E j V e 0 / 3 z t l U n J M U C 1 j V I y p X s X 8 v w d Z N b R b N 7 T 7 k w 3 h m N M Q t L U x s c K 0 o e G 6 R y g V S 9 u m X r 9 + V i b Y O z 8 O M z R l 4 6 R l n M i Z 3 t V B X x 0 o a 2 p p p N u 2q e 2 W p q z g 7 f 1 w E H V b X z 8 M e k 6 l 7 Z + s t D S p a B h k N X K m P o W a 7 q q r r y 2 K q d 9 U 7 3 l d / 9 y t b 0 d a 3 w B 7 r a 6 f 7 X A l O I hi J d Z Y X a B d w Q B 1 V b U 3 i r O s 0 L S + M r y + r A K Q G i T z J Z M j C 9 w I l c 8 J / H i + 1 w w o / T g a 8 o D P 5 r p I 5 o Z a L D U J Q r Z X 0 M y i H h H k Q l n H X E R x l m r b h 1 O L T W S J V / p F h E k M r L 4 x f 8 2 N c U u z I s F W X / Q R e r G w 0 1 k 0 a J + Y g c s M i A l c J i B m 6 D J D Y s B l g J i R y 4 y I C V 0 m J G b s M m N i I p e J i P n i M l + I u X e Z e 2 J i l 4 k X W s Z F 4 k U C d y x + b B 3 O 1 G F n V n D d + z I R 0 h t m 6 b f S U 5 8 f U Y 4 z d f I 4 C + M l V d u p 2 3 Z K d 8 1 c J i M m d 5 m c m A e X e S C m c J m C G O E y g h j p M p K Y i c t M i C l d p i T m 0 W U e i Z m 6 z J S Y m c v M i H l y m a e F M W h 2 A 2 B m z u r j v a w 2 y d x s p c G I b Z u 6 3 9 r l s Y j K 9 d U 8 4 z g 8 I J j t j T I g m G 2 M c k g w 2 x U l E M y 2 R D k i m O 2 H M i S Y b Y Z y T D D b C e W E Y L Y N y i 8 E s z 1 Q 3 h P M N k A Z E x w z O C E 4 Y T C b a D 7 D G c F M z G V O M F N y + U A w k 3 F Z E M w 0 X A q C B V 9 U g m X 7 n H D p l g Q z 3 Z a P B D P R l l O C m W L L G c F M r u U T w Va r n R j U c y j 9 E K V o 0 S 0 Y 0 b W e y 2 C U 1 3 o y g 5 F f 6 9 k M R o O t p z M Y I b a e z 2 D U 2 H p C g 5 F k 6 x k N R p e t p z R y z 5 7 T Y B T a e l K D k W n r W Q 1 G q 8 3 T 2 n K J y y W c e / Y k Bi P d 1 r M Y j H 5 b T 2 M w I m 4 9 j 8 E o u f V E B i P n 1 j M Z j K Z b T 2 U w w m 4 9 l 8 G o u / V k B i P x 1 r M Z j M 5 b T 2 c w Y m 8 9 n 8 E o / v k T G v d C E Q W 1 Q 0 m 2 a X 9 s 0 7 Z J d g j e Y f A u w b s M 3 i N 4 j 8 E d g j s M 3 i d 4 n 8 E H B B 8 w + J D g Q w Y f E X z E 4 P c E v 2 f w M c H H D O 4 S 3 G X w C c E n D D 4 l + 
J T B Z w S f M f i c 4 H M G X x B 8 w e B L g i 8 Z f E X w F Y N 7 B P c Y f E 3 w N Y N v C L 5 h 8 C 3 B t w y + I / i O w R 8 I / s D g j w R / f P 5 4 d U U H R n V M o 9 t M v 1 p 6 j N v h 3 K 7 L 7 X J u z + X 2 O N dx u Q 7 n 9 l 1 u n 3 M H L n f A u U O X O + T c k c s d c e 6 9 y 7 3 n 3 L H L H X O u 6 3 J d z p 2 4 3 A n n T l 3 u l H N n L n f G u X O X O + f c h c t d c O 7 S 5 S 4 5 d + V y V 5 z r u V y P c 9 c u d 8 2 5 G 5 e 7 4 d y t y 9 1 y 7 s 7 l 7 j j 3 w e U + c O 6 j y 1 n Z 3 3 A L U T 6 B / h y B n 1 0 3 6 7 p l l s L c f p 6 1 W D I x U D + h p F F 7 Y o W 7 f l g 9 G 6 0 I / T TV w l U 0 C x w Y h O y J N i e I k C n R l g Q R s i J l 1 U E y I N p + I E K 2 Q 5 s O R M h s a K u B C F m M s u o k 6 + E X g 5 C d 0 G Y C E T I R 2 k I g E r P p M Q g Z B m 0 X E E n Z t B o k Y 5 N k E L I E 2 h A g Q k Z A 2 w B E K P 3 r 5 I + I Y O t g E E r 1 Z b V a b K 1 K g 1 B a 1 0 k d E U r m O p U j Q i l c J 3 B E K H H r t I 1 I m 0 l 1 3 W n p x / l Y r b f + W w u z H F S a q R 7 E G 5 A + g d E D i 4 o y X + U i Y y 6 I y B I I F a 7 / E q y V q l R q A W w Q E f x N k I j C R F X V f w m 2 e q 6 / J a g G M p / z / s + V W G 0 J x R p Q C Y U 6 Z I O a K 4 H a E g p 0 R C U U Z 0 g l F O a Y S t h d 1 l c U 5 B c q o R j v 2 d z M l Q j r k c + V A G 0 J J 5 P N I o o v Y 1 M y V 6 K z J R T d A 5 V Q c A W b q b k S W j 1 B c y U y W 8 K J Z t O M A i u p h O J 6 p B I K a 0 o l F N W M S i i o p 0 X 1 D T O m 3 6 n B d e p F n V H K 1 Q k X E U q 0 O s 0 i Q u l V J 1 d E K K n q l I o I p V K d S B G h B K r T J y K U N n X S R I S S p U 6 V i F C K 1 A k S E U q M O i 0 i Q u l Q J 0 N E K A n q F I g I p T 6 d + B C h h K f T H S K U 5 n S S Q 4 S S m 0 5 t i F B K 0 w k N E U p k O o 0 h Q u l L J y 9 E K G n p l I U I p S q d q B C h B K X T E y K U l n R S Q o S S k U 5 F i F A K 0 g k I k Y 9 s B S l d D H i 2 S M 7 r b H H O s k X S t V t f M d 1 q + 9 e D q / a w 4 q 7 M P t Y q 6 k E q 1 H s X e x D E f g E o q v G 2 O o H w j s Y D i l G k n q B C G m T D K A 2 x M X 8 S K 0 S M 6 u t k M R f q 4 e 8 V y O c a G G T x 8 K e a G U w X 8 + a X m x L 7 Z 7 4 p 1 + m 0 a k 8 / v K 6 G J o 3 t T A V T v 9 y x G O l f 7 l q M d o D c s x j t A d m x G O 0 C u W 8 x 2 g f y w G K 0 E + S h x W g vy C O L 0 W 6 Q 7 y 1 G + 0 E e W 4 x 2 h O x a j P a E P L E Y 7 Q p 5 a j H a F / L M Y r Q z 5 L n F a G / I C 4 v R 7 p C X F q P 9 I a 8 s R j t E 9 i x G e 0 R e W 4 x 2 i b y x G O 0 T e W s x 2 i n y z m K 0V + Q H i 9 F u k R 8 t Z o w a C v m g 8 P O x Y U P 7 8 T d w P o W E O w w m X Y S 7 D C Z p h H s M J n W E H Q a T Q M J 9 B p N G w g M G k 0 z C Q w a T U s I j B p N Y w v c M J r 2 E x w w m y Y R d B p N q w h M G k 3 D C U w a T d s I z B p N 8 w n M G k 4 L C C w a T i M J L B p O O w i s G k 5 T C H o N J T e E 1 g 0 l Q 4 Q 2 D S V P h L Y N J V u E d g 0 l Z 4 Q c G k 7 j C j w y 2 H w T w a K u s m q g f r g y Y u M Q O o a Q t s U s o S U v s E a q V 9 d L b 0 1 9 w T A R 4 v i d A e nj r G I Z e Z 9 0 b Q O A r X I 4 j 4 T 1 m k 3 i I E J b A E / r r E P S S k 8 J T L 8 p l M T a k 3 i 6 D a Y 7 e U n / H a 7 9 p 7 9 A d S bR i n 1 D S r D g g l C Q r D g k l x Y o j Q k m w 4 j 2 h p F d x T C j J V X Q J J b W K E 0 J J r O K U U N K q O C O U p C r O C S W l i g t C S a j i k l D S q b g i l G Q q e o S S S s U 1 o S R S c U M o a V T c E k o S F X e E k k L F B 0 J J o O I j o f X z m R T d I O g P F r 5 5 M l N Z Q y B f 0 H U / E i j T u E 0 l F P A O l V C 4 u 1 R C w e 5 R C c X U o R K K a J 9 K K J 4 D K q F o D q m E Y 
j m i E o r k P Z V Q H M d U Q l F 0 q Y R i O K E S i u C U S r j 4 Z 1 T C R T + n E i 7 2 B Z V w k S + p h I t 7 R S V c 1 B 6 V c D G v q Y S L e E M l X L x b K u G i 3 V E J F + s D l X C R P r L 7 V f 6 r 8 l 5 q y Y A v m T Q + D A 8 a t a v 1 G 7 G 4 t Q 2 6 7 j 1 G c p x N p I c m y H v E R J d D 4 d o k I J / k e K T q 9 r L W g A 5 c s o e g T R Q 0 X B R o G w U N H w X a S E H D S Y G 2 U t D w U q D N F D T c F G g 7 B Q 0 / B d p Q Q c N R g b Z U 0 P B U o E 0 V N F w V a F s F D V 8 F 2 l h B w 1 m B t l b Q 8 F a g z R U 0 3 B V o e w U N f w X a Y E H D Y Y G 2 W N D w W K B N F j R c F m i b B Q 2 f B d p o Q c N p g b Z a 0 P B a o M 0 W N N w W a L s F D b 8 F 2 n B B w 3 G B t l z Q 8 F y g T R c 0 X B d o 2 w X M d + H n B 0 x E s p i A N 0 m H U M Q z 9 Y b T 0 J e + F 0 I K B e Y g V Y 4 E K n 0 w U Q m p + Q a m r 1 4 S V C 9 q F s l c F 3 Q 6 V K 1 C k k d F h I n Q q V + / v z u Y 6 S S o 3 x l R N 8 G s 2 W j b v k 4 y 9 i V + f n d v 4 U Se 8 8 j z R V t n k m w I 8 d c G o g P q k Z j S 0 n 2 q o P O v B e U y i o d Q R f Z 1 o e 5 9 X Q O P C Z k F Y 1 + o l 9 f 9 i c z 0 5 y o o n B 4 2 X i L P T U z d x 6 r K c g e G 4 M S Z Y k t c g Q Q e O j b O F F E L g X 6 y 5 g b H f h 7 7 A S z q 1 2 + 6 F b D w X n r V t T u 9 j Z e O F 9 y / u F x X s D d 8 u k 3 2 c s F z e + P U T HI 2 x w 0 y L h b 2 a Z x L F B A u 6 u d r T S q Q N E Z V i k Y R F M 2 m R T a S i T + l S A s 0 4 z B Z Z P q N J / P o b b m V P J 6 o 0 T + p 5 w M u e 9 x d 8 N e d j r t L C 3 j j F 9 Q D V W i 2 L / G P X + D a F x m L v F p a g N 2 s J F o V l E B v s 3 h U + I l 6 T D V + z A q 0 r c K f C e 9 F 9 8 c 3 L 9 S r P v o / f k x S 8 8 q q y H H 9 h X 7 V 7 E U f 4 p j F 2 M e k L 7 0 d T I C 4 5 V P 1 a 4 b 7 H R L 1 y p v y x q Z R F q 3 e N 8 0 m o c 6 Z 2 i p H E t Z 1 8 y L z h h m o 5 h 6 j + y i H Y e R v N F 6 y z o o k V k / 6 F / P u j 5 u L F j J L Q X F b b Z x 8 1 P X e t H G 5 Y v I W R m u h + 2 M / S k d y 1 t w 6 u V + o R 8 Z 4 b P h q s 1 w B n r X C D 8 G L U i / N K p s vY b r h 7 Y 4 z o a Y n U w Y w G H t 7 + I k 4 h W + F N 8 i y + 4 1 V 5 y H P W a 5 O 5 6 z 4 E 2 q 8 C H U H 8 G 9 / X V 1 9 L V C d k y Y Q r 9 q b 1 G r F M P 3 7 m Y g e C q q n X g e M Q f b 9 A e 6 z O H s c F O D f r x o s U 4 d g L m d 4 r P f t h W F q u P Q L H P 8 Y D / / V 1 c / f r N n / o O Q 9 f 3 H z Z m N r c 2 P r 4 r u 1 v + 1 U / 4 v p 1 y t / X P m X l V c r W y t / X v n b y u H K + c r 1 S r B y v / J f K / + 9 8 j / v v n 1 3 8 q 7 3 7 s a E / v x n V Z 0 / r D g / 7 z 7 / H 4 f o z O E = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " M a 5 d m B C L e H Q V E k + 1 g 0 x r 7 6 f 4 m 8 4 = " > A A A 1 p X i c lV t b d 9 v K d V a S N k n V J j 1 J H v O C V d n r u F m y l m Q f n y R 9 i i 7 U x a L u 1 M U + 9 P E C w U 0 Q F m 7 C D C F S L P v c 1 7 6 2 v 6 z / p n t m M N h 7 Q M h n V W t Z w n z f n s F c vp n 9 E Y Q H e R w J u b n 5 v z / 7 + S / + 7 u 9 / + a t f / 8 P q P / 7 T b 3 7 7 z 9 / 8 7v c 3 I p s U A V w H W Z w V d w N f Q B y l c C 0 j G c N d X o C f D G K 4 H d z v K v 6 2 h E J E W d q T s x w + J X 6 Y R q M o 8 C V C t / 0 Y Q 4 f + 5 2 / W N j f + u v n m 7 d t N b 3 N j c / P P b z a / Vx f f b 7 7 9 y / f e F i L q Z 2 2 l + j n / / L v v / r M / z I J J A q k M Y l + I H 7 Y 2 c / l p 7 h c y C m J Y r P Y n A n I / u P d D + A E v U z 8 B 8 W m u + 7 v w X i I y 9 E Z Z g f 9 S 6 W m U 1 5 j 7 i R C z Z I C R i S / H o s k p s I 3 7 Y S J H f / k 0 j 9 J 8 I i E N z I 1 G k 9 i T m a c G 7 w 2 j A g I Z z / D C D 
4 o I + + o F Y 7 / w A 4 l T t L r 6 U v 1 4 p 5 1 b 7 2 S 7 d + j t d f a P T o 9 6 R 2 e n V 5 6 m V t s 6 s o 5/ 1 T D E + i B Z Y B v e i V / c e w L v g x M s v G z k B X 5 u r t W I C x h B U U R p q D o 1 j M p I 2 L B R F E 4 K w A G l 8 B h k S e K n w 3 k f w R h G c j G f 9 y H x X n X x + l 8 X i 6 W Y A N c B C h u 1 q 0 t t c U U U j u v G L l W h L U p m u Y 3 p Z X l b x C C T M k t s 0 I 4 u L c V V 4 / Z t m P 9 c x M B G D J 6 L C G x E 8 F z E 0 E Y M V Q Q u w y G O L l Y j 9 H w P 4 9 W i w w h 3 y d D D u U n c N v B a g Y s f t j 5 h K 4 O R t 7 a l G m k O e 7 q Y 9 x O / C F F g f j H f P 7 p r 9 g W v n R C U U j O k d 7 Z 3 p u / T l z C V W v r z A r D 3 i v g 3 c 2 O 3 z Y 5 u U o 6 z f N 7 v N N n O A 7 K d z / N + U T 7 1 R Z R 4 D 3 h d 5 u N o 8 U p B / 4 6 / p k t T 1 s m f G r V y V U u O Q f p f r 1 d X m 1 b V h q 9 a I x + e c K 5 a u + L G 5 S r u m Z s 3 I q d P S 5 F T F f m 0 H L k c u B Q z 1 E H D d l L d 6 V V b 0 w 9 P z V E 1 I 9 T E I t x A p x q d N t C Z R m c N N N F o 0 l S B R t N m 7 E Q q c U y w S 9 N 1 z 4 5 4 K S h 3 g l T f G y F D U x F v 4 w 9 i n + a u G a a q s q C 2 l h B b a g w x 3 j v c n P v 6 r D O H I Z 7 U s O 7 F 2 S M U r w N M Z R u r f d y p + r S C 0 d r W 3 J y L / 9 H H 0 l x v j 7 b q e A p E 0 o 8 3 v H 0 8 Y 4 X E P K S O V K E O Q u R N i / u 2 xf 1 m i 5 q W j 5 m 9 5 9 q b 6 q 7 C s 0 E e D q 8 q v L E 1 H i b + k K q s v V 3 7 b q n a e l 3 H X r 3 l T X 2 n h 3 N l k s V X p w M T i u l 8 l V m c + W h p w E 6 I q X 1 l a 1 + 1 1 L 6 0 t X S e f M z q 5 L V R T 4 y 5 u 9 A z U 6 e 2 Z 6 a m 2 e C 4 A G g 2 y d p b e 7 v c I s 0 a a / v t c t t+ 6 g E u g q r c M m X w Y M Z s Q 5 4 f t N P O J M + h 8 F Q 7 p p l O 1 U y n r Z l t r / A f a d 4 b j b 1 + / d o v s 2 j o T Y T K + N H I y z M h I j R n p u k 8 9 j E j V e 0 / 3 z t l U n J M U C 1 j V I y p X s X 8 v w d Z N b R b N 7 T 7 k w 3 h m N M Q t L U x s c K 0 o e G 6 R y g V S 9 u m X r 9 + V i b Y O z 8 O M z R l 4 6 R l n M i Z 3 t V B X x 0 o a 2 p p p N u 2q e 2 W p q z g 7 f 1 w E H V b X z 8 M e k 6 l 7 Z + s t D S p a B h k N X K m P o W a 7 q q r r y 2 K q d 9 U 7 3 l d / 9 y t b 0 d a 3 w B 7 r a 6 f 7 X A l O I hi J d Z Y X a B d w Q B 1 V b U 3 i r O s 0 L S + M r y + r A K Q G i T z J Z M j C 9 w I l c 8 J / H i + 1 w w o / T g a 8 o D P 5 r p I 5 o Z a L D U J Q r Z X 0 M y i H h H k Q l n H X E R x l m r b h 1 O L T W S J V / p F h E k M r L 4 x f 8 2 N c U u z I s F W X / Q R e r G w 0 1 k 0 a J + Y g c s M i A l c J i B m 6 D J D Y s B l g J i R y 4 y I C V 0 m J G b s M m N i I p e J i P n i M l + I u X e Z e 2 J i l 4 k X W s Z F 4 k U C d y x + b B 3 O 1 G F n V n D d + z I R 0 h t m 6 b f S U 5 8 f U Y 4 z d f I 4 C + M l V d u p 2 3 Z K d 8 1 c J i M m d 5 m c m A e X e S C m c J m C G O E y g h j p M p K Y i c t M i C l d p i T m 0 W U e i Z m 6 z J S Y m c v M i H l y m a e F M W h 2 A 2 B m z u r j v a w 2 y d x s p c G I b Z u 6 3 9 r l s Y j K 9 d U 8 4 z g 8 I J j t j T I g m G 2 M c k g w 2 x U l E M y 2 R D k i m O 2 H M i S Y b Y Z y T D D b C e W E Y L Y N y i 8 E s z 1 Q 3 h P M N k A Z E x w z O C E 4 Y T C b a D 7 D G c F M z G V O M F N y + U A w k 3 F Z E M w 0 X A q C B V 9 U g m X 7 n H D p l g Q z 3 Z a P B D P R l l O C m W L L G c F M r u U T w Va r n R j U c y j 9 E K V o 0 S 0 Y 0 b W e y 2 C U 1 3 o y g 5 F f 6 9 k M R o O t p z M Y I b a e z 2 D U 2 H p C g 5 F k 6 x k N R p e t p z R y z 5 7 T Y B T a e l K D k W n r W Q 1 G q 8 3 T 2 n K J y y W c e / Y k Bi P d 1 r M Y j H 5 b T 2 M w I m 4 9 j 8 
E o u f V E B i P n 1 j M Z j K Z b T 2 U w w m 4 9 l 8 G o u / V k B i P x 1 r M Z j M 5 b T 2 c w Y m 8 9 n 8 E o / v k T G v d C E Q W 1 Q 0 m 2 a X 9 s 0 7 Z J d g j e Y f A u w b s M 3 i N 4 j 8 E d g j s M 3 i d 4 n 8 E H B B 8 w + J D g Q w Y f E X z E 4 P c E v 2 f w M c H H D O 4 S 3 G X w C c E n D D 4 l + J T B Z w S f M f i c 4 H M G X x B 8 w e B L g i 8 Z f E X w F Y N 7 B P c Y f E 3 w N Y N v C L 5 h 8 C 3 B t w y + I / i O w R 8 I / s D g j w R / f P 5 4 d U U H R n V M o 9 t M v 1 p 6 j N v h 3 K 7 L 7 X J u z + X 2 O N dx u Q 7 n 9 l 1 u n 3 M H L n f A u U O X O + T c k c s d c e 6 9 y 7 3 n 3 L H L H X O u 6 3 J d z p 2 4 3 A n n T l 3 u l H N n L n f G u X O X O + f c h c t d c O 7 S 5 S 4 5 d + V y V 5 z r u V y P c 9 c u d 8 2 5 G 5 e 7 4 d y t y 9 1 y 7 s 7 l 7 j j 3 w e U + c O 6 j y 1 n Z 3 3 A L U T 6 B / h y B n 1 0 3 6 7 p l l s L c f p 6 1 W D I x U D + h p F F 7 Y o W 7 f l g 9 G 6 0 I / T TV w l U 0 C x w Y h O y J N i e I k C n R l g Q R s i J l 1 U E y I N p + I E K 2 Q 5 s O R M h s a K u B C F m M s u o k 6 + E X g 5 C d 0 G Y C E T I R 2 k I g E r P p M Q g Z B m 0 X E E n Z t B o k Y 5 N k E L I E 2 h A g Q k Z A 2 w B E K P 3 r 5 I + I Y O t g E E r 1 Z b V a b K 1 K g 1 B a 1 0 k d E U r m O p U j Q i l c J 3 B E K H H r t I 1 I m 0 l 1 3 W n p x / l Y r b f + W w u z H F S a q R 7 E G 5 A + g d E D i 4 o y X + U i Y y 6 I y B I I F a 7 / E q y V q l R q A W w Q E f x N k I j C R F X V f w m 2 e q 6 / J a g G M p / z / s + V W G 0 J x R p Q C Y U 6 Z I O a K 4 H a E g p 0 R C U U Z 0 g l F O a Y S t h d 1 l c U 5 B c q o R j v 2 d z M l Q j r k c + V A G 0 J J 5 P N I o o v Y 1 M y V 6 K z J R T d A 5 V Q c A W b q b k S W j 1 B c y U y W 8 K J Z t O M A i u p h O J 6 p B I K a 0 o l F N W M S i i o p 0 X 1 D T O m 3 6 n B d e p F n V H K 1 Q k X E U q 0 O s 0 i Q u l V J 1 d E K K n q l I o I p V K d S B G h B K r T J y K U N n X S R I S S p U 6 V i F C K 1 A k S E U q M O i 0 i Q u l Q J 0 N E K A n q F I g I p T 6 d + B C h h K f T H S K U 5 n S S Q 4 S S m 0 5 t i F B K 0 w k N E U p k O o 0 h Q u l L J y 9 E K G n p l I U I p S q d q B C h B K X T E y K U l n R S Q o S S k U 5 F i F A K 0 g k I k Y 9 s B S l d D H i 2 S M 7 r b H H O s k X S t V t f M d 1 q + 9 e D q / a w 4 q 7 M P t Y q 6 k E q 1 H s X e x D E f g E o q v G 2 O o H w j s Y D i l G k n q B C G m T D K A 2 x M X 8 S K 0 S M 6 u t k M R f q 4 e 8 V y O c a G G T x 8 K e a G U w X 8 + a X m x L 7 Z 7 4 p 1 + m 0 a k 8 / v K 6 G J o 3 t T A V T v 9 y x G O l f 7 l q M d o D c s x j t A d m x G O 0 C u W 8 x 2 g f y w G K 0 E + S h x W g vy C O L 0 W 6 Q 7 y 1 G + 0 E e W 4 x 2 h O x a j P a E P L E Y 7 Q p 5 a j H a F / L M Y r Q z 5 L n F a G / I C 4 v R 7 p C X F q P 9 I a 8 s R j t E 9 i x G e 0 R e W 4 x 2 i b y x G O 0 T e W s x 2 i n y z m K 0V + Q H i 9 F u k R 8 t Z o w a C v m g 8 P O x Y U P 7 8 T d w P o W E O w w m X Y S 7 D C Z p h H s M J n W E H Q a T Q M J 9 B p N G w g M G k 0 z C Q w a T U s I j B p N Y w v c M J r 2 E x w w m y Y R d B p N q w h M G k 3 D C U w a T d s I z B p N 8 w n M G k 4 L C C w a T i M J L B p O O w i s G k 5 T C H o N J T e E 1 g 0 l Q 4 Q 2 D S V P h L Y N J V u E d g 0 l Z 4 Q c G k 7 j C j w y 2 H w T w a K u s m q g f r g y Y u M Q O o a Q t s U s o S U v s E a q V 9 d L b 0 1 9 w T A R 4 v i d A e nj r G I Z e Z 9 0 b Q O A r X I 4 j 4 T 1 m k 3 i I E J b A E / r r E P S S k 8 J T L 8 p l M T a k 3 i 6 D a Y 7 e U n / H a 7 9 p 7 9 A d S bR i n 1 D S r D g g l C Q r D g k l x Y o j Q k m w 4 j 2 h p F d x T C j J V X Q J J b W K E 0 
J J r O K U U N K q O C O U p C r O C S W l i g t C S a j i k l D S q b g i l G Q q e o S S S s U 1 o S R S c U M o a V T c E k o S F X e E k k L F B 0 J J o O I j o f X z m R T d I O g P F r 5 5 M l N Z Q y B f 0 H U / E i j T u E 0 l F P A O l V C 4 u 1 R C w e 5 R C c X U o R K K a J 9 K K J 4 D K q F o D q m E Y j m i E o r k P Z V Q H M d U Q l F 0 q Y R i O K E S i u C U S r j 4 Z 1 T C R T + n E i 7 2 B Z V w k S + p h I t 7 R S V c 1 B 6 V c D G v q Y S L e E M l X L x b K u G i 3 V E J F + s D l X C R P r L 7 V f 6 r 8 l 5 q y Y A v m T Q + D A 8 a t a v 1 G 7 G 4 t Q 2 6 7 j 1 G c p x N p I c m y H v E R J d D 4 d o k I J / k e K T q 9 r L W g A 5 c s o e g T R Q 0 X B R o G w U N H w X a S E H D S Y G 2 U t D w U q D N F D T c F G g 7 B Q 0 / B d p Q Q c N R g b Z U 0 P B U o E 0 V N F w V a F s F D V 8 F 2 l h B w 1 m B t l b Q 8 F a g z R U 0 3 B V o e w U N f w X a Y E H D Y Y G 2 W N D w W K B N F j R c F m i b B Q 2 f B d p o Q c N p g b Z a 0 P B a o M 0 W N N w W a L s F D b 8 F 2 n B B w 3 G B t l z Q 8 F y g T R c 0 X B d o 2 w X M d + H n B 0 x E s p i A N 0 m H U M Q z 9 Y b T 0 J e + F 0 I K B e Y g V Y 4 E K n 0 w U Q m p + Q a m r 1 4 S V C 9 q F s l c F 3 Q 6 V K 1 C k k d F h I n Q q V + / v z u Y 6 S S o 3 x l R N 8 G s 2 W j b v k 4 y 9 i V + f n d v 4 U Se 8 8 j z R V t n k m w I 8 d c G o g P q k Z j S 0 n 2 q o P O v B e U y i o d Q R f Z 1 o e 5 9 X Q O P C Z k F Y 1 + o l 9 f 9 i c z 0 5 y o o n B 4 2 X i L P T U z d x 6 r K c g e G 4 M S Z Y k t c g Q Q e O j b O F F E L g X 6 y 5 g b H f h 7 7 A S z q 1 2 + 6 F b D w X n r V t T u 9 j Z e O F 9 y / u F x X s D d 8 u k 3 2 c s F z e + P U T HI 2 x w 0 y L h b 2 a Z x L F B A u 6 u d r T S q Q N E Z V i k Y R F M 2 m R T a S i T + l S A s 0 4 z B Z Z P q N J / P o b b m V P J 6 o 0 T + p 5 w M u e 9 x d 8 N e d j r t L C 3 j j F 9 Q D V W i 2 L / G P X + D a F x m L v F p a g N 2 s J F o V l E B v s 3 h U + I l 6 T D V + z A q 0 r c K f C e 9 F 9 8 c 3 L 9 S r P v o / f k x S 8 8 q q y H H 9 h X 7 V 7 E U f 4 p j F 2 M e k L 7 0 d T I C 4 5 V P 1 a 4 b 7 H R L 1 y p v y x q Z R F q 3 e N 8 0 m o c 6 Z 2 i p H E t Z 1 8 y L z h h m o 5 h 6 j + y i H Y e R v N F 6 y z o o k V k / 6 F / P u j 5 u L F j J L Q X F b b Z x 8 1 P X e t H G 5 Y v I W R m u h + 2 M / S k d y 1 t w 6 u V + o R 8 Z 4 b P h q s 1 w B n r X C D 8 G L U i / N K p s vY b r h 7 Y 4 z o a Y n U w Y w G H t 7 + I k 4 h W + F N 8 i y + 4 1 V 5 y H P W a 5 O 5 6 z 4 E 2 q 8 C H U H 8 G 9 / X V 1 9 L V C d k y Y Q r 9 q b 1 G r F M P 3 7 m Y g e C q q n X g e M Q f b 9 A e 6 z O H s c F O D f r x o s U 4 d g L m d 4 r P f t h W F q u P Q L H P 8 Y D / / V 1 c / f r N n / o O Q 9 f 3 H z Z m N r c 2 P r 4 r u 1 v + 1 U / 4 v p 1 y t / X P m X l V c r W y t / X v n b y u H K + c r 1 S r B y v / J f K / + 9 8 j / v v n 1 3 8 q 7 3 7 s a E / v x n V Z 0 / r D g / 7 z 7 / H 4 f o z O E = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " M a 5 d m B C L e H Q V E k + 1 g 0 x r 7 6 f 4 m 8 4 = " > A A A 1 p X i c lV t b d 9 v K d V a S N k n V J j 1 J H v O C V d n r u F m y l m Q f n y R 9 i i 7 U x a L u 1 M U + 9 P E C w U 0 Q F m 7 C D C F S L P v c 1 7 6 2 v 6 z / p n t m M N h 7 Q M h n V W t Z w n z f n s F c vp n 9 E Y Q H e R w J u b n 5 v z / 7 + S / + 7 u 9 / + a t f / 8 P q P / 7 T b 3 7 7 z 9 / 8 7v c 3 I p s U A V w H W Z w V d w N f Q B y l c C 0 j G c N d X o C f D G K 4 H d z v K v 6 2 h E J E W d q T s x w + J X 6 Y R q M o 8 C V C t / 0 Y Q 4 f + 5 2 / W N j f + u v n m 7 d t N b 3 N j c / P P b z a / Vx f f b 7 7 9 y / 
f e F i L q Z 2 2 l + j n / / L v v / r M / z I J J A q k M Y l + I H 7 Y 2 c / l p 7 h c y C m J Y r P Y n A n I / u P d D + A E v U z 8 B 8 W m u + 7 v w X i I y 9 E Z Z g f 9 S 6 W m U 1 5 j 7 i R C z Z I C R i S / H o s k p s I 3 7 Y S J H f / k 0 j 9 J 8 I i E N z I 1 G k 9 i T m a c G 7 w 2 j A g I Z z / D C D 4 o I + + o F Y 7 / w A 4 l T t L r 6 U v 1 4 p 5 1 b 7 2 S 7 d + j t d f a P T o 9 6 R 2 e n V 5 6 m V t s 6 s o 5 /1 T D E + i B Z Y B v e i V / c e w L v g x M s v G z k B X 5 u r t W I C x h B U U R p q D o 1 j M p I 2 L B R F E 4 K w A G l 8 B h k S e K n w 3 k f w R h G c j G f 9 y H x X n X x + l 8 X i 6 W Y A N c B C h u 1 q 0 t t c U U U j u v G L l W h L U p m u Y 3 p Z X l b x C C T M k t s 0 I 4 u L c V V 4 / Z t m P 9 c x M B G D J 6 L C G x E 8 F z E 0 E Y M V Q Q u w y G O L l Y j 9 H w P 4 9 W i w w h 3 y d D D u U n c N v B a g Y s f t j 5 h K 4 O R t 7 a l G m k O e 7 q Y 9 x O / C F F g f j H f P 7 p r 9 g W v n R C U U j O k d 7 Z 3 p u / T l z C V W v r z A r D 3 i v g 3 c 2 O 3 z Y 5 u U o 6 z f N 7 v N N n O A 7 K d z / N + U T 7 1 R Z R 4 D 3 h d 5 u N o 8 U p B / 4 6 / p k t T 1 s m f G r V y V U u O Q f p f r 1 d X m 1 b V h q 9 a I x + e c K 5 a u + L G 5 S r u m Z s 3 I q d P S 5 F T F f m 0 H L k c u B Q z 1 E H D d l L d 6 V V b 0 w 9 P z V E 1 I 9 T E I t x A p x q d N t C Z R m c N N N F o 0 l S B R t N m 7 E Qq c U y w S 9 N 1 z 4 5 4 K S h 3 g l T f G y F D U x F v 4 w 9 i n + a u G a a q s q C 2 l h B b a g w x 3 j v c n P v 6 r D O H I Z 7 U s O 7 F 2 S M U r w N M Z R u r f d y p + r S C 0 d r W 3 J y L / 9 H H 0 l x v j 7 b q e A p E 0 o 8 3 v H 0 8 Y 4 X E P K S O V K E O Q u R N i / u 2 x f 1 m i 5 q W j 5 m 9 5 9 q b 6 q 7 C s 0 E e D q 8 q v L E 1 H i b + k K q s v V 3 7 b q n a e l 3 H X r 3 l T X 2 n h 3 N l k s V X p w M T i u l 8 l V m c + W h p w E 6 I q X 1 l a 1 + 1 1 L 6 0 t X S e f M z q 5 L V R T 4 y 5 u 9 A z U 6 e 2 Z 6 a m 2 e C 4 A G g 2 y d p b e 7 v c I s 0 a a / v t c t t + 6 g E u g q r c M m X w Y M Z s Q 5 4 f t N P O J M + h 8 F Q 7 p p l O 1 U y n r Z l t r / A f a d 4 b j b 1 + / d o v s 2 j o T Y T K + N H I y z M h I j R n p u k 8 9 j E j V e 0 / 3 z t l U n J M U C 1 jV I y p X s X 8 v w d Z N b R b N 7 T 7 k w 3 h m N M Q t L Ux s c K 0 o e G 6 R y g V S 9 u m X r 9 + V i b Y O z 8 O M z R l 4 6 R l n M i Z 3 t V B X x 0 o a 2 p p p N u 2 q e 2 W p q z g 7 f 1 w E H V b X z 8 M e k 6 l 7 Z + s t D S p a B h k N X K m P o W a 7 q q r r y 2 K q d 9 U 7 3 l d / 9 y t b 0 d a 3 w B 7 r a 6 f 7 X A l O I h i J d Z Y X a B d w Q B 1 V b U 3 i r O s 0 L S + M r y + r A K Q G i T z J Z M j C 9 w I l c 8 J / H i + 1 w w o / T g a 8 o D P 5 r p I 5 o Z a L D U J Q r Z X 0 M y i H h H k Q l n H X E R x l m r b h 1 O L T W S J V / p F h E k M r L 4 x f 8 2 N c U u z I s F W X / Q RM i S Y b Y Z y T D D b C e W E Y L Y N y i 8 E s z 1 Q 3 h P M N k A Z E x w z O C E 4 Y T C b a D 7 D G c F M z G V O M F N y + U A w k 3 F Z E M w 0 X A q C B V 9 U g m X 7 n H D p l g Q z 3 Z a P B D P R l l O C m W L L G c F M r u U T w Va r n R j U c y j 9 E K V o 0 S 0 Y 0 b W e y 2 C U 1 3 o y g 5 F f 6 9 k M R o O t p z M Y I b a e z 2 D U 2 H p C g 5 F k 6 x k N R p e t p z R y z 5 7 T Y B T a e l K D k W n r W Q 1 G q 8 3 T 2 n K J y y W c e / Y k B i P d 1 r M Y j H 5 b T 2 M w I m 4 9 j 8 E o u f V E B i P n 1 j M Z j K Z b T 2 U w w m 4 9 l 8 G o u /V k B i P x 1 r M Z j M 5 b T 2 c w Y m 8 9 n 8 E o / v k T G v d C E Q W 1 Q 0 m 2 a X 9 s 0 7 Z J d g j e Y f A u w b s M 3 i N 4 j 8 E d g j s M 3 i d 4 n 8 E H B B 8 w + J D g Q w Y 
f E X z E 4 P c E v 2 f w M c H H D O 4 S 3 G X w C c E n D D 4 l + J T B Z w S f M f i c 4 H M G X x B 8 w e B L g i 8 Z f E X w F Y N 7 B P c Y f E 3 w N Y N v C L 5 h 8 C 3 B t w y + I / i O w R 8 I / s D g j w R / f P 5 4 d U U H R n V M o 9 t M v 1 p 6 j N v h 3 K 7 L 7 X J u z + X 2 O N dx u Q 7 n 9 l 1 u n 3 M H L n f A u U O X O + T c k c s d c e 6 9 y 7 3 n 3 L H L H X O u 6 3 J d z p 2 4 3 A n n T l 3 u l H N n L n f G u X O X O + f c h c t d c O 7 S 5 S 4 5 d + V y V 5 z r u V y P c 9 c u d 8 2 5 G 5 e 7 4 d y t y 9 1 y 7 s 7 l 7 j j 3 w e U + c O 6 j y 1 n Z 3 3 A L U T 6 B / h y B n 1 0 3 6 7 p l l s L c f p 6 1 W D I x U D + h p F F 7 Y o W 7 f l g 9 G 6 0 I / T T V w l U 0 C x w Y h O y J N i e I k C n R l g Q R s i J l 1 U E y I N p + I E K 2 Q 5 s O R M h s a K u B C F m M s u o k 6 + E X g 5 C d 0 G Y C E T I R 2 k I g E r P p M Q g Z B m 0 X E E n Z t B o k Y 5 N k E L I E 2 h A g Q k Z A 2 w B E K P 3 r 5 I + I Y O t g E E r 1 Z b V a b K 1 K g 1 B a 1 0 k d E U r m O p U j Q i l c J 3 B E K H H r t I 1 I m 0 l 1 3 W n p x / l Y r b f + W w u z H F S a q R 7 E G 5 A + g d E D i 4 o y X + U i Y y 6 I y B I I F a 7 / E q yV q l R q A W w Q E f x N k I j C R F X V f w m 2 e q 6 / J a g G M p / z / s + V W G 0 J x R p Q C Y U 6 Z I O a K 4 H a E g p 0 R C U U Z 0 g l F O a Y S t h d 1 l c U 5 B c q o R j v 2 d z M l Q j r k c + V A G 0 J J 5 P N I o o v Y 1 M y V 6 K z J R T d A 5 V Q c A W b q b k S W j 1 B c y U y W 8 K J Z t O M A i u p h O J 6 p B I K a 0 o l F N W M S i i o p 0 X 1 D T O m 3 6 n B d e p F n V H K 1 Q k X E U q 0 O s 0 i Q u l V J 1 d E K K n q l I o I p V K d S B G h B K r T J y K U N n X S R I S S p U 6 V i F C K 1 A k S E U q M O i 0 i Q u l Q J 0 N E K A n q F I g I p T 6 d + B C h h K f T H S K U 5 n S S Q 4 S S m 0 5 t i F B K 0 w k N E U p k O o 0 h Q u l L J y 9 E K G n p l I U I p S q d q B C h B K X T E y K U l n R S Q o S S k U 5 F i F A K 0 g k I k Y 9 s B S l d D H i 2 S M 7 r b H H O s k X S t V t f M d 1 q + 9 e D q / a w 4 q 7 M P t Y q 6 k E q 1 H s X e x D E f g E o q v G 2 O o H w j s Y D i l G k n q B C G m T D K A 2 x M X 8 S K 0 S M 6 u t k M R f q 4 e 8 V y O c a G G T x 8 K e a G U w X 8 + a X m x L 7 Z 7 4 p 1 + m 0 a k 8 / v K 6 G J o 3 t T A V T v 9 y x G O l f 7 l q M d o D c s x j t A d m x G O 0 C u W 8x 2 g f y w G K 0 E + S h x W g v y C O L 0 W 6 Q 7 y 1 G + 0 E e W 4 x 2 h O x a j P a E P L E Y 7 Q p 5 a j H a F / L M Y r Q z 5 L n F a G / I C 4 v R 7 p C X F q P 9 I a 8 s R j t E 9 i x G e 0 R e W 4 x 2 i b y x G O 0 T e W s x 2 i n y z m K 0 V + Q H i 9 F u k R 8 t Z o w a C v m g 8 P O x Y U P 78 T d w P o W E O w w m X Y S 7 D C Z p h H s M J n W E H Q a T Q M J 9 B p N G w g M G k 0 z C Q w a T U s I j B p N Y w v c M J r 2 E x w w m y Y R d B p N q w h M G k 3 D C U w a T d s I z B p N 8 w n M G k 4 L C C w a T i M J L B p O O w i s G k 5 T C H o N J T e E 1 g 0 l Q 4 Q 2 D S V P h L Y N J V u E d g 0 l Z 4 Q c G k 7 j C j w y 2 H w T w a K u s m q g f r g y Y u M Q O o a Q t s U s o S U v s E a q V 9 d L b 0 1 9 w T A R 4 v i dA e n j r G I Z e Z 9 0 b Q O A r X I 4 j 4 T 1 m k 3 i I E J b A E / r r E P S S k 8 J T L 8 p l M T a k 3 i 6 D a Y 7 e U n / H a 7 9 p 7 9 A d S bR i n 1 D S r D g g l C Q r D g k l x Y o j Q k m w 4 j 2 h p F d x T C j J V X Q J J b W K E 0 J J r O K U U N K q O C O U p C r O C S W l i g t C S a j i k l D S q b g i l G Q q e o S S S s U 1 o S R S c U M o a V T c E k o S F X e E k k L F B 0 J J o O I j o f X z m R T d I O g P F r 5 5 M l N Z Q y B f 0 H U / E i j T u E 0 l F P A O l V C 
4 u 1 R C w e 5 R C c X U o R K K a J 9 K K J 4 D K q F o D q m E Y j m i E o r k P Z V Q H M d U Q l F 0 q Y R i O K E S i u C U S r j 4 Z 1 T C R T + n E i 7 2 B Z V w k S + p h I t 7 R S V c 1 B 6 V c D G v q Y S L e E M l X L x b K u G i 3 V E J F + s D l X C R P r L 7 V f 6 r 8 l 5 q y Y A v m T Q + D A 8 a t a v 1 G 7 G 4 t Q 2 6 7 j 1 G c p x N p I c m y H v E R J d D 4 d o k I J / k e K T q 9 r L W g A 5 c s o e g T R Q 0 X B R o G w U N H w X a S E H D S Y G 2 U t D w U q D N F D T c F G g 7 B Q 0 / B d p Q Q c N R g b Z U 0 P B U o E 0 V N F w V a F s F D V 8 F 2 l h B w 1 m B t l b Q 8 F a g z R U 0 3 B V o e w U N f w X a Y E H D Y Y G 2 W N D w W K B N F j R c F m i b B Q 2 f B d p o Q c N p g b Z a 0 P B a o M 0 W N N w W a L s F D b 8 F 2 n B B w 3 G B t l z Q 8 F y g T R c 0 X B d o 2 w X M d + H n B 0 x E s p i A N 0 m H U M Q z 9 Y b T 0 J e + F 0 I K B e Y g V Y 4 E K n 0 w U Q m p + Q a m r 1 4 S V C 9 q F s l c F 3 Q 6 V K 1 C k k d F h I n Q q V + / v z u Y 6 S S o 3 x l R N 8 G s 2 W j b v k 4 y 9 i V + f n d v 4U S e 8 8 j z R V t n k m w I 8 d c G o g P q k Z j S 0 n 2 q o P O v B e U y i o d Q R f Z 1 o e 5 9 X Q O P C Z k F Y 1 + o l 9 f 9 i c z 0 5 y o o n B 4 2 X i L P T U z d x 6 r K c g e G 4 M S Z Y k t c g Q Q e O j b O F F E L g X 6 y 5 g b H f h 7 7 A S z q 1 2 + 6 F b D w X n r V t T u 9 j Z e O F 9 y / u F x X s D d 8 u k 3 2 c s F z e + P U T H I 2 x w 0 y L h b 2 a Z x L F B A u 6 u d r T S q Q N E Z V i k Y R F M 2 m R T a S i T + l S A s 0 4 z B Z Z P q N J / P o b b m V P J 6 o 0 T + p 5 w M u e 9 x d 8 N e d j r t L C 3 j j F 9 Q D V W i 2 L / G P X + D a F x m L v F p a g N 2 s J F o V l E B v s 3 h U + I l 6 T D V + z A q 0 r c K f C e 9 F 9 8 c 3 L 9 S r P v o / f k x S 8 8 q q y H H 9 h X 7 V 7 E U f 4 p j F 2 M e k L 7 0 d T I C 4 5 V P 1 a 4 b 7 H R L 1 y p v y x q Z R F q 3 e N 8 0 m o c 6 Z 2 i p H E t Z 1 8 y L z h h m o 5 h 6 j + y i H Y e R v N F 6 y z o o k V k / 6 F / P u j 5 u L F j J L Q X F b b Z x 8 1 P X e t H G 5 Y v I W R m u h + 2 M / S k d y 1 t w 6 u V + o R 8 Z 4 b P h q s 1 w B n r X C D 8 G L U i / N K p s v Y b r h 7 Y 4 z o a Y n U w Y w G H t 7 + I k 4 h W + F N 8 i y + 4 1 V 5 y H P W a 5 O 5 6 z 4 E 2 q 8 C H U H 8 G 9 / X V 1 9 L V C d k y Y Q r 9 q b 1 G r F M P 3 7 m Y g e C q q n X g e M Q f b 9 A e 6 z O H s c F O D f r x o s U 4 d g L m d 4 r P f t h W F q u P Q L H P 8 Y D / / V 1 c / f r N n / o O Q 9 f 3 H z Z m N r c 2 P r 4 r u 1 v + 1 U / 4 v p 1 y t / X P m X l V c r W y t / X v n b y u H K + c r 1 S r B y v / J f K / + 9 8 j / v v n 1 3 8 q 7 3 7 s a E / v x n V Z 0 / r D g / 7 z 7 / H 4 f o z O E = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " M a 5 d m B C L e H Q V E k + 1 g 0 x r 7 6 f 4 m 8 4 = " > A A A 1 p X i c lV t b d 9 v K d V a S N k n V J j 1 J H v O C V d n r u F m y l m Q f n y R 9 i i 7 Ux a L u 1 M U + 9 P E C w U 0 Q F m 7 C D C F S L P v c 1 7 6 2 v 6 z / p n t m M N h 7 Q M h n V W t Z w n z f n s F c v p n 9 E Y Q H e R w J u b n 5 v z / 7 + S / + 7 u 9 / + a t f / 8 P q P / 7 T b 3 7 7 z 9 / 8 7 v c 3 I p s U AV w H W Z w V d w N f Q B y l c C 0 j G c N d X o C f D G K 4 H d z v K v 6 2 h E J E W d q T s x w + J X 6 Y R q M o 8 C V C t / 0 Y Q 4 f + 5 2 / W N j f + u v n m 7 d t N b 3 N j c / P P b z a / Vx f f b 7 7 9 y / f e F i L q Z 2 2 l + j n / / L v v / r M / z I J J A q k M Y l + I H 7 Y 2 c / l p 7 h c y C m J Y r P Y n A n I / u P d D + A E v U z 8 B 8 W m u + 7 v w X i I y 9 E Z Z g f 9 S 6 W m U 1 5 j 7 i R C z Z I C R i S / H o s k p s I 3 7 Y S J H f / 
k 0 j 9 J 8 I i E N z I 1 G k 9 i T m a c G 7 w 2 j A g I Z z / D C D 4 o I + + o F Y 7 / w A 4 l T t L r 6 U v 1 4 p 5 1 b 7 2 S 7 d + j t d f a P T o 9 6 R 2 e n V 5 6 m V t s 6 s o 5 /1 T D E + i B Z Y B v e i V / c e w L v g x M s v G z k B X 5 u r t W I C x h B U U R p q D o 1 j M p I 2 L B R F E 4 K w A G l 8 B h k S e K n w 3 k f w R h G c j G f 9 y H x X n X x + l 8 X i 6 W Y A N c B C h u 1 q 0 t t c U U U j u v G L l W h L U p m u Y 3 p Z X l b x C C T M k t s 0 I 4 u L c V V 4 / Z t m P 9 c x M B G D J 6 L C G x E 8 F z E 0 E Y M V Q Q u w y G O L l Y j 9 H w P 4 9 W i w w h 3 y d D D u U n c N v B a g Y s f t j 5 h K 4 O R t 7 a l G m k O e 7 q Y 9 x O / C F F g f j H f P 7 p r 9 g W v n R C U U j O k d 7 Z 3 p u / T l z C V W v r z A r D 3 i v g 3 c 2 O 3 z Y 5 u U o 6 z f N 7 v N N n O A 7 K d z / N + U T 7 1 R Z R 4 D 3 h d 5 u N o 8 U p B / 4 6 / p k t T 1 s m f G r V y V U u O Q f p f r 1 d X m 1 b V h q 9 a I x + e c K 5 a u + L G 5 S r u m Z s 3 I q d P S 5 F T F f m 0 H L k c u B Q z 1 E H D d l L d 6 V V b 0 w 9 P z V E 1 I 9 T E I t x A p x q d N t C Z R m c N N N F o 0 l S B R t N m 7 E Qq c U y w S 9 N 1 z 4 5 4 K S h 3 g l T f G y F D U x F v 4 w 9 i n + a u G a a q s q C 2 l h B b a g w x 3 j v c n P v 6 r D O H I Z 7 U s O 7 F 2 S M U r w N M Z R u r f d y p + r S C 0 d r W 3 J y L / 9 H H 0 l x v j 7 b q e A p E 0 o 8 3 v H 0 8 Y 4 X E P K S O V K E O Q u R N i / u 2 x f 1 m i 5 q W j 5 m 9 5 9 q b 6 q 7 C s 0 E e D q 8 q v L E 1 H i b + k K q s v V 3 7 b q n a e l 3 H X r 3 l T X 2 n h 3 N l k s V X p w M T i u l 8 l V m c + W h p w E 6 I q X 1 l a 1 + 1 1 L 6 0 t X S e f M z q 5 L V R T 4 y 5 u 9 A z U 6 e 2 Z 6 a m 2 e C 4 A G g 2 y d p b e 7 v c I s 0 a a / v t c t t + 6 g E u g q r c M m X w Y M Z s Q 5 4 f t N P O J M + h 8 F Q 7 p p l O 1 U y n r Z l t r / A f a d 4 b j b 1 + / d o v s 2 j o T Y T K + N H I y z M h I j R n p u k 8 9 j E j V e 0 / 3 z t l U n J M U C 1 jV I y p X s X 8 v w d Z N b R b N 7 T 7 k w 3 h m N M Q t L Ux s c K 0 o e G 6 R y g V S 9 u m X r 9 + V i b Y O z 8 O M z R l 4 6 R l n M i Z 3 t V B X x 0 o a 2 p p p N u 2 q e 2 W p q z g 7 f 1 w E H V b X z 8 M e k 6 l 7 Z + s t D S p a B h k N X K m P o W a 7 q q r r y 2 K q d 9 U 7 3 l d / 9 y t b 0 d a 3 w B 7 r a 6 f 7 X A l O I h i J d Z Y X a B d w Q B 1 V b U 3 i r O s 0 L S + M r y + r A K Q G i T z J Z M j C 9 w I l c 8 J / H i + 1 w w o / T g a 8 o D P 5 r p I 5 o Z a L D U J Q r Z X 0 M y i H h H k Q l n H X E R x l m r b h 1 O L T W S J V / p F h E k M r L 4 x f 8 2 N c U u z I s F W X / Q RM i S Y b Y Z y T D D b C e W E Y L Y N y i 8 E s z 1 Q 3 h P M N k A Z E x w z O C E 4 Y T C b a D 7 D G c F M z G V O M F N y + U A w k 3 F Z E M w 0 X A q C B V 9 U g m X 7 n H D p l g Q z 3 Z a P B D P R l l O C m W L L G c F M r u U T w Va r n R j U c y j 9 E K V o 0 S 0 Y 0 b W e y 2 C U 1 3 o y g 5 F f 6 9 k M R o O t p z M Y I b a e z 2 D U 2 H p C g 5 F k 6 x k N R p e t p z R y z 5 7 T Y B T a e l K D k W n r W Q 1 G q 8 3 T 2 n K J y y W c e / Y k B i P d 1 r M Y j H 5 b T 2 M w I m 4 9 j 8 E o u f V E B i P n 1 j M Z j K Z b T 2 U w w m 4 9 l 8 G o u /V k B i P x 1 r M Z j M 5 b T 2 c w Y m 8 9 n 8 E o / v k T G v d C E Q W 1 Q 0 m 2 a X 9 s 0 7 Z J d g j e Y f A u w b s M 3 i N 4 j 8 E d g j s M 3 i d 4 n 8 E H B B 8 w + J D g Q w Y f E X z E 4 P c E v 2 f w M c H H D O 4 S 3 G X w C c E n D D 4 l + J T B Z w S f M f i c 4 H M G X x B 8 w e B L g i 8 Z f E X w F Y N 7 B P c Y f E 3 w N Y N v C L 5 h 8 C 3 B t w y + I / i O w R 8 I / s D g j w R / f P 5 4 d U U H R n V M o 9 t 
M v 1 p 6 j N v h 3 K 7 L 7 X J u z + X 2 O N dx u Q 7 n 9 l 1 u n 3 M H L n f A u U O X O + T c k c s d c e 6 9 y 7 3 n 3 L H L H X O u 6 3 J d z p 2 4 3 A n n T l 3 u l H N n L n f G u X O X O + f c h c t d c O 7 S 5 S 4 5 d + V y V 5 z r u V y P c 9 c u d 8 2 5 G 5 e 7 4 d y t y 9 1 y 7 s 7 l 7 j j 3 w e U + c O 6 j y 1 n Z 3 3 A L U T 6 B / h y B n 1 0 3 6 7 p l l s L c f p 6 1 W D I x U D + h p F F 7 Y o W 7 f l g 9 G 6 0 I / T T V w l U 0 C x w Y h O y J N i e I k C n R l g Q R s i J l 1 U E y I N p + I E K 2 Q 5 s O R M h s a K u B C F m M s u o k 6 + E X g 5 C d 0 G Y C E T I R 2 k I g E r P p M Q g Z B m 0 X E E n Z t B o k Y 5 N k E L I E 2 h A g Q k Z A 2 w B E K P 3 r 5 I + I Y O t g E E r 1 Z b V a b K 1 K g 1 B a 1 0 k d E U r m O p U j Q i l c J 3 B E K H H r t I 1 I m 0 l 1 3 W n p x / l Y r b f + W w u z H F S a q R 7 E G 5 A + g d E D i 4 o y X + U i Y y 6 I y B I I F a 7 / E q yV q l R q A W w Q E f x N k I j C R F X V f w m 2 e q 6 / J a g G M p / z / s + V W G 0 J x R p Q C Y U 6 Z I O a K 4 H a E g p 0 R C U U Z 0 g l F O a Y S t h d 1 l c U 5 B c q o R j v 2 d z M l Q j r k c + V A G 0 J J 5 P N I o o v Y 1 M y V 6 K z J R T d A 5 V Q c A W b q b k S W j 1 B c y U y W 8 K J Z t O M A i u p h O J 6 p B I K a 0 o l F N W M S i i o p 0 X 1 D T O m 3 6 n B d e p F n V H K 1 Q k X E U q 0 O s 0 i Q u l V J 1 d E K K n q l I o I p V K d S B G h B K r T J y K U N n X S R I S S p U 6 V i F C K 1 A k S E U q M O i 0 i Q u l Q J 0 N E K A n q F I g I p T 6 d + B C h h K f T H S K U 5 n S S Q 4 S S m 0 5 t i F B K 0 w k N E U p k O o 0 h Q u l L J y 9 E K G n p l I U I p S q d q B C h B K X T E y K U l n R S Q o S S k U 5 F i F A K 0 g k I k Y 9 s B S l d D H i 2 S M 7 r b H H O s k X S t V t f M d 1 q + 9 e D q / a w 4 q 7 M P t Y q 6 k E q 1 H s X e x D E f g E o q v G 2 O o H w j s Y D i l G k n q B C G m T D K A 2 x M X 8 S K 0 S M 6 u t k M R f q 4 e 8 V y O c a G G T x 8 K e a G U w X 8 + a X m x L 7 Z 7 4 p 1 + m 0 a k 8 / v K 6 G J o 3 t T A V T v 9 y x G O l f 7 l q M d o D c s x j t A d m x G O 0 C u W 8x 2 g f y w G K 0 E + S h x W g v y C O L 0 W 6 Q 7 y 1 G + 0 E e W 4 x 2 h O x a j P a E P L E Y 7 Q p 5 a j H a F / L M Y r Q z 5 L n F a G / I C 4 v R 7 p C X F q P 9 I a 8 s R j t E 9 i x G e 0 R e W 4 x 2 i b y x G O 0 T e W s x 2 i n y z m K 0 V + Q H i 9 F u k R 8 t Z o w a C v m g 8 P O x Y U P 78 T d w P o W E O w w m X Y S 7 D C Z p h H s M J n W E H Q a T Q M J 9 B p N G w g M G k 0 z C Q w a T U s I j B p N Y w v c M J r 2 E x w w m y Y R d B p N q w h M G k 3 D C U w a T d s I z B p N 8 w n M G k 4 L C C w a T i M J L B p O O w i s G k 5 T C H o N J T e E 1 g 0 l Q 4 Q 2 D S V P h L Y N J V u E d g 0 l Z 4 Q c G k 7 j C j w y 2 H w T w a K u s m q g f r g y Y u M Q O o a Q t s U s o S U v s E a q V 9 d L b 0 1 9 w T A R 4 v i dA e n j r G I Z e Z 9 0 b Q O A r X I 4 j 4 T 1 m k 3 i I E J b A E / r r E P S S k 8 J T L 8 p l M T a k 3 i 6 D a Y 7 e U n / H a 7 9 p 7 9 A d S bR i n 1 D S r D g g l C Q r D g k l x Y o j Q k m w 4 j 2 h p F d x T C j J V X Q J J b W K E 0 J J r O K U U N K q O C O U p C r O C S W l i g t C S a j i k l D S q b g i l G Q q e o S S S s U 1 o S R S c U M o a V T c E k o S F X e E k k L F B 0 J J o O I j o f X z m R T d I O g P F r 5 5 M l N Z Q y B f 0 H U / E i j T u E 0 l F P A O l V C 4 u 1 R C w e 5 R C c X U o R K K a J 9 K K J 4 D K q F o D q m E Y j m i E o r k P Z V Q H M d U Q l F 0 q Y R i O K E S i u C U S r j 4 Z 1 T C R T + n E i 7 2 B Z V w k S + p h I t 7 R S V c 1 B 6 V c D G v q Y S L e E M l X L x b K u G i 3 V E 
J F + s D l X C R P r L 7 V f 6 r 8 l 5 q y Y A v m T Q + D A 8 a t a v 1 G 7 G 4 t Q 2 6 7 j 1 G c p x N p I c m y H v E R J d D 4 d o k I J / k e K T q 9 r L W g A 5 c s o e g T R Q 0 X B R o G w U N H w X a S E H D S Y G 2 U t D w U q D N F D T c F G g 7 B Q 0 / B d p Q Q c N R g b Z U 0 P B U o E 0 V N F w V a F s F D V 8 F 2 l h B w 1 m B t l b Q 8 F a g z R U 0 3 B V o e w U N f w X a Y E H D Y Y G 2 W N D w W K B N F j R c F m i b B Q 2 f B d p o Q c N p g b Z a 0 P B a o M 0 W N N w W a L s F D b 8 F 2 n B B w 3 G B t l z Q 8 F y g T R c 0 X B d o 2 w X M d + H n B 0 x E s p i A N 0 m H U M Q z 9 Y b T 0 J e + F 0 I K B e Y g V Y 4 E K n 0 w U Q m p + Q a m r 1 4 S V C 9 q F s l c F 3 Q 6 V K 1 C k k d F h I n Q q V + / v z u Y 6 S S o 3 x l R N 8 G s 2 W j b v k 4 y 9 i V + f n d v 4U S e 8 8 j z R V t n k m w I 8 d c G o g P q k Z j S 0 n 2 q o P O v B e U y i o d Q R f Z 1 o e 5 9 X Q O P C Z k F Y 1 + o l 9 f 9 i c z 0 5 y o o n B 4 2 X i L P T U z d x 6 r K c g e G 4 M S Z Y k t c g Q Q e O j b O F F E L g X 6 y 5 g b H f h 7 7 A S z q 1 2 + 6 F b D w X n r V t T u 9 j Z e O F 9 y / u F x X s D d 8 u k 3 2 c s F z e + P U T H I 2 x w 0 y L h b 2 a Z x L F B A u 6 u d r T S q Q N E Z V i k Y R F M 2 m R T a S i T + l S A s 0 4 z B Z Z P q N J / P o b b m V P J 6 o 0 T + p 5 w M u e 9 x d 8 N e d j r t L C 3 j j F 9 Q D V W i 2 L / G P X + D a F x m L v F p a g N 2 s J F o V l E B v s 3 h U + I l 6 T D V + z A q 0 r c K f C e 9 F 9 8 c 3 L 9 S r P v o / f k x S 8 8 q q y H H 9 h X 7 V 7 E U f 4 p j F 2 M e k L 7 0 d T I C 4 5 V P 1 a 4 b 7 H R L 1 y p v y x q Z R F q 3 e N 8 0 m o c 6 Z 2 i p H E t Z 1 8 y L z h h m o 5 h 6 j + y i H Y e R v N F 6 y z o o k V k / 6 F / P u j 5 u L F j J L Q X F b b Z x 8 1 P X e t H G 5 Y v I W R m u h + 2 M / S k d y 1 t w 6 u V + o R 8 Z 4 b P h q s 1 w B n r X C D 8 G L U i / N K p s v Y b r h 7 Y 4 z o a Y n U w Y w G H t 7 + I k 4 h W + F N 8 i y + 4 1 V 5 y H P W a 5 O 5 6 z 4 E 2 q 8 C H U H 8 G 9 / X V 1 9 L V C d k y Y Q r 9 q b 1 G r F M P 3 7 m Y g e C q q n X g e M Q f b 9 A e 6 z O H s c F O D f r x o s U 4 d g L m d 4 r P f t h W F q u P Q L H P 8 Y D / / V 1 c / f r N n / o O Q 9 f 3 H z Z m N r c 2 P r 4 r u 1 v + 1 U / 4 v p 1 y t / X P m X l V c r W y t / X v n b y u H K + c r 1 S r B y v / J f K / + 9 8 j / v v n 1 3 8 q 7 3 7 s a E / v x n V Z 0 / r D g / 7 z 7 / H 4 f o z O E = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " M a 5 d m B C L e H Q V E k + 1 g 0 x r 7 6 f 4 m 8 4 = " > A A A 1 p X i c lV t b d 9 v K d V a S N k n V J j 1 J H v O C V d n r u F m y l m Q f n y R 9 i i 7 U x a L u 1 M U + 9 P E C w U 0 Q F m 7 C D C F S L P v c 1 7 6 2 v 6 z / p n t m M N h 7 Q M h n V W t Z w n z f n s F c vp n 9 E Y Q H e R w J u b n 5 v z / 7 + S / + 7 u 9 / + a t f / 8 P q P / 7 T b 3 7 7 z 9 / 8 7 v c 3 I p s U AV w H W Z w V d w N f Q B y l c C 0 j G c N d X o C f D G K 4 H d z v K v 6 2 h E J E W d q T s x w + J X 6 Y R q M o 8 C V C t / 0 Y Q 4 f + 5 2 / W N j f + u v n m 7 d t N b 3 N j c / P P b z a / Vx f f b 7 7 9 y / f e F i L q Z 2 2 l + j n / / L v v / r M / z I J J A q k M Y l + I H 7 Y 2 c / l p 7 h c y C m J Y r P Y n A n I / u P d D + A E v U z 8 B 8 W m u + 7 v w X i I y 9 E Z Z g f 9 S 6 W m U 1 5 j 7 i R C z Z I C R i S / H o s k p s I 3 7 Y S J H f / k 0 j 9 J 8 I i E N z I 1 G k 9 i T m a c G 7 w 2 j A g I Z z / D C D 4 o I + + o F Y 7 / w A 4 l T t L r 6 U v 1 4 p 5 1 b 7 2 S 7 d + j t d f a P T o 9 6 R 2 e n V 5 6 m V t s 6 s o 5/ 1 T D E + i B Z Y B v e i V / c e w L v g x M s v G z k B X 
5 u r t W I C x h B U U R p q D o 1 j M p I 2 L B R F E 4 K w A G l 8 B h k S e K n w 3 k f w R h G c j G f 9 y H x X n X x + l 8 X i 6 W Y A N c B C h u 1 q 0 t t c U U U j u v G L l W h L U p m u Y 3 p Z X l b x C C T M k t s 0 I 4 u L c V V 4 / Z t m P 9 c x M B G D J 6 L C G x E 8 F z E 0 E Y M V Q Q u w y G O L l Y j 9 H w P 4 9 W i w w h 3 y d D D u U n c N v B a g Y s f t j 5 h K 4 O R t 7 a l G m k O e 7 q Y 9 x O / C F F g f j H f P 7 p r 9 g W v n R C U U j O k d 7 Z 3 p u / T l z C V W v r z A r D 3 i v g 3 c 2 O 3 z Y 5 u U o 6 z f N 7 v N N n O A 7 K d z / N + U T 7 1 R Z R 4 D 3 h d 5 u N o 8 U p B / 4 6 / p k t T 1 s m f G r V y V U u O Q f p f r 1 d X m 1 b V h q 9 a I x + e c K 5 a u + L G 5 S r u m Z s 3 I q d P S 5 F T F f m 0 H L k c u B Q z 1 E H D d l L d 6 V V b 0 w 9 P z V E 1 I 9 T E I t x A p x q d N t C Z R m c N N N F o 0 l S B R t N m 7 E Q q c U y w S 9 N 1 z 4 5 4 K S h 3 g l T f G y F D U x F v 4 w 9 i n + a u G a a q s q C 2 l h B b a g w x 3 j v c n P v 6 r D O H I Z 7 U s O 7 F 2 S M U r w N M Z R u r f d y p + r S C 0 d r W 3 J y L / 9 H H 0 l x v j 7 b q e A p E 0 o 8 3 v H 0 8 Y 4 X E P K S O V K E O Q u R N i / u 2 xf 1 m i 5 q W j 5 m 9 5 9 q b 6 q 7 C s 0 E e D q 8 q v L E 1 H i b + k K q s v V 3 7 b q n a e l 3 H X r 3 l T X 2 n h 3 N l k s V X p w M T i u l 8 l V m c + W h p w E 6 I q X 1 l a 1 + 1 1 L 6 0 t X S e f M z q 5 L V R T 4 y 5 u 9 A z U 6 e 2 Z 6 a m 2 e C 4 A G g 2 y d p b e 7 v c I s 0 a a / v t c t t + 6 g E u g q r c M m X w Y M Z s Q 5 4 f t N P O J M + h 8 F Q 7 p p l O 1 U y n r Z l t r / A f a d 4 b j b 1+ / d o v s 2 j o T Y T K + N H I y z M h I j R n p u k 8 9 j E j V e 0 / 3 z t l U n J M U C 1 j V I y p X s X 8 v w d Z N b R b N 7 T 7 k w 3 h m N M Q t L U x s c K 0 o e G 6 R y g V S 9 u m X r 9 + V i b Y O z 8 O M z R l 4 6 R l n M i Z 3 t V B X x 0 o a 2 p p p N u 2q e 2 W p q z g 7 f 1 w E H V b X z 8 M e k 6 l 7 Z + s t D S p a B h k N X K m P o W a 7 q q r r y 2 K q d 9 U 7 3 l d / 9 y t b 0 d a 3 w B 7 r a 6 f 7 X A l O I hi J d Z Y X a B d w Q B 1 V b U 3 i r O s 0 L S + M r y + r A K Q G i T z J Z M j C 9 w I l c 8 J / H i + 1 w w o / T g a 8 o D P 5 r p I 5 o Z a L D U J Q r Z X 0 M y i H h H k Q l n H X E R x l m r b h 1 O L T W S J V / p F h E k M r L 4 x f 8 2 N c U u z I s F W X / Q R e r G w 0 1 k 0 a J + Y g c s M i A l c J i B m 6 D J D Y s B l g J i R y 4 y I C V 0 m J G b s M m N i I p e J i P n i M l + I u X e Z e 2 J i l 4 k X W s Z F 4 k U C d y x + b B 3 O 1 G F n V n D d + z I R 0 h t m 6 b f S U 5 8 f U Y 4 z d f I 4 C + M l V d u p 2 3 Z K d 8 1 c J i M m d 5 m c m A e X e S C m c J m C G O E y g h j p M p K Y i c t M i C l d p i T m 0 W U e i Z m 6 z J S Y m c v M i H l y m a e F M W h 2 A 2 B m z u r j v a w 2 y d x s p c G I b Z u 6 3 9 r l s Y j K 9 d U 8 4 z g 8 I J j t j T I g m G 2 M c k g w 2 x U l E M y 2 R D k i m O 2 H M i S Y b Y Z y T D D b C e W E Y L Y N y i 8 E s z 1 Q 3 h P M N k A Z E x w z O C E 4 Y T C b a D 7 D G c F M z G V O M F N y + U A w k 3 F Z E M w 0 X A q C B V 9 U g m X 7 n H D p l g Q z 3 Z a P B D P R l l O C m W L L G c F M r u U T w Va r n R j U c y j 9 E K V o 0 S 0 Y 0 b W e y 2 C U 1 3 o y g 5 F f 6 9 k M R o O t p z M Y I b a e z 2 D U 2 H p C g 5 F k 6 x k N R p e t p z R y z 5 7 T Y B T a e l K D k W n r W Q 1 G q 8 3 T 2 n K J y y W c e / Y k Bi P d 1 r M Y j H 5 b T 2 M w I m 4 9 j 8 E o u f V E B i P n 1 j M Z j K Z b T 2 U w w m 4 9 l 8 G o u / V k B i P x 1 r M Z j M 5 b T 2 c w Y m 8 9 n 8 E o / v k T G v d C E Q W 1 Q 0 m 2 a X 9 s 0 7 Z J d g j e Y f 
A u w b s M 3 i N 4 j 8 E d g j s M 3 i d 4 n 8 E H B B 8 w + J D g Q w Y f E X z E 4 P c E v 2 f w M c H H D O 4 S 3 G X w C c E n D D 4 l + J T B Z w S f M f i c 4 H M G X x B 8 w e B L g i 8 Z f E X w F Y N 7 B P c Y f E 3 w N Y N v C L 5 h 8 C 3 B t w y + I / i O w R 8 I / s D g j w R / f P 5 4 d U U H R n V M o 9 t M v 1 p 6 j N v h 3 K 7 L 7 X J u z + X 2 O N dx u Q 7 n 9 l 1 u n 3 M H L n f A u U O X O + T c k c s d c e 6 9 y 7 3 n 3 L H L H X O u 6 3 J d z p 2 4 3 A n n T l 3 u l H N n L n f G u X O X O + f c h c t d c O 7 S 5 S 4 5 d + V y V 5 z r u V y P c 9 c u d 8 2 5 G 5 e 7 4 d y t y 9 1 y 7 s 7 l 7 j j 3 w e U + c O 6 j y 1 n Z 3 3 A L U T 6 B / h y B n 1 0 3 6 7 p l l s L c f p 6 1 W D I x U D + h p F F 7 Y o W 7 f l g 9 G 6 0 I / T TV w l U 0 C x w Y h O y J N i e I k C n R l g Q R s i J l 1 U E y I N p + I E K 2 Q 5 s O R M h s a K u B C F m M s u o k 6 + E X g 5 C d 0 G Y C E T I R 2 k I g E r P p M Q g Z B m 0 X E E n Z t B o k Y 5 N k E L I E 2 h A g Q k Z A 2 w B E K P 3 r 5 I + I Y O t g E E r 1 Z b V a b K 1 K g 1 B a 1 0 k d E U r m O p U j Q i l c J 3 B E K H H r t I 1 I m 0 l 1 3 W n p x / l Y r b f + W w u z H F S a q R 7 E G 5 A + g d E D i 4 o y X + U i Y y 6 I y B I I F a 7 / E q y V q l R q A W w Q E f x N k I j C R F X V f w m 2 e q 6 / J a g G M p / z / s + V W G 0 J x R p Q C Y U 6 Z I O a K 4 H a E g p 0 R C U U Z 0 g l F O a Y S t h d 1 l c U 5 B c q o R j v 2 d z M l Q j r k c + V A G 0 J J 5 P N I o o v Y 1 M y V 6 K z J R T d A 5 V Q c A W b q b k S W j 1 B c y U y W 8 K J Z t O M A i u p h O J 6 p B I K a 0 o l F N W M S i i o p 0 X 1 D T O m 3 6 n B d e p F n V H K 1 Q k X E U q 0 O s 0 i Q u l V J 1 d E K K n q l I o I p V K d S B G h B K r T J y K U N n X S R I S S p U 6 V i F C K 1 A k S E U q M O i 0 i Q u l Q J 0 N E K A n q F I g I p T 6 d + B C h h K f T H S K U 5 n S S Q 4 S S m 0 5 t i F B K 0 w k N E U p k O o 0 h Q u l L J y 9 E K G n p l I U I p S q d q B C h B K X T E y K U l n R S Q o S S k U 5 F i F A K 0 g k I k Y 9 s B S l d D H i 2 S M 7 r b H H O s k X S t V t f M d 1 q + 9 e D q / a w 4 q 7 M P t Y q 6 k E q 1 H s X e x D E f g E o q v G 2 O o H w j s Y D i l G k n q B C G m T D K A 2 x M X 8 S K 0 S M 6 u t k M R f q 4 e 8 V y O c a G G T x 8 K e a G U w X 8 + a X m x L 7 Z 7 4 p 1 + m 0 a k 8 / v K 6 G J o 3 t T A V T v 9 y x G O l f 7 l q M d o D c s x j t A d m x G O 0 C u W 8 x 2 g f y w G K 0 E + S h x W g vy C O L 0 W 6 Q 7 y 1 G + 0 E e W 4 x 2 h O x a j P a E P L E Y 7 Q p 5 a j H a F / L M Y r Q z 5 L n F a G / I C 4 v R 7 p C X F q P 9 I a 8 s R j t E 9 i x G e 0 R e W 4 x 2 i b y x G O 0 T e W s x 2 i n y z m K 0 V + Q H i 9 F u k R 8 t Z o w a C v m g 8 P O x Y U P 78 T d w P o W E O w w m X Y S 7 D C Z p h H s M J n W E H Q a T Q M J 9 B p N G w g M G k 0 z C Q w a T U s I j B p N Y w v c M J r 2 E x w w m y Y R d B p N q w h M G k 3 D C U w a T d s I z B p N 8 w n M G k 4 L C C w a T i M J L B p O O w i s G k 5 T C H o N J T e E 1 g 0 l Q 4 Q 2 D S V P h L Y N J V u E d g 0 l Z 4 Q c G k 7 j C j w y 2 H w T w a K u s m q g f r g y Y u M Q O o a Q t s U s o S U v s E a q V 9 d L b 0 1 9 w T A R 4 v i d Ae n j r G I Z e Z 9 0 b Q O A r X I 4 j 4 T 1 m k 3 i I E J b A E / r r E P S S k 8 J T L 8 p l M T a k 3 i 6 D a Y 7 e U n / H a 7 9 p 7 9 A d S bR i n 1 D S r D g g l C Q r D g k l x Y o j Q k m w 4 j 2 h p F d x T C j J V X Q J J b W K E 0 J J r O K U U N K q O C O U p C r O C S W l i g t C S a j i k l D S q b g i l G Q q e o S S S s U 1 o S R S c U M o a V T c E k o S F X e E k k L F B 0 J J o O I j o f X z m R 
T d I O g P F r 5 5 M l N Z Q y B f 0 H U / E i j T u E 0 l F P A O l V C 4 u 1 R C w e 5 R C c X U o R K K a J 9 K K J 4 D K q F o D q m E Y j m i E o r k P Z V Q H M d U Q l F 0 q Y R i O K E S i u C U S r j 4 Z 1 T C R T + n E i 7 2 B Z V w k S + p h I t 7 R S V c 1 B 6 V c D G v q Y S L e E M l X L x b K u G i 3 V E J F + s D l X C R P r L 7 V f 6 r 8 l 5 q y Y A v m T Q + D A 8 a t a v 1 G 7 G 4 t Q 2 6 7 j 1 G c p x N p I c m y H v E R J d D 4 d o k I J / k e K T q 9 r L W g A 5 c s o e g T R Q 0 X B R o G w U N H w X a S E H D S Y G 2 U t D w U q D N F D T c F G g 7 B Q 0 / B d p Q Q c N R g b Z U 0 P B U o E 0 V N F w V a F s F D V 8 F 2 l h B w 1 m B t l b Q 8 F a g z R U 0 3 B V o e w U N f w X a Y E H D Y Y G 2 W N D w W K B N F j R c F m i b B Q 2 f B d p o Q c N p g b Z a 0 P B a o M 0 W N N w W a L s F D b 8 F 2 n B B w 3 G B t l z Q 8 F y g T R c 0 X B d o 2 w X M d + H n B 0 x E s p i A N 0 m H U M Q z 9 Y b T 0 J e + F 0 I K B e Y g V Y 4 E K n 0 w U Q m p + Q a m r 1 4 S V C 9 q F s l c F 3 Q 6 V K 1 C k k d F h I n Q q V + / v z u Y 6 S S o 3 x l R N 8 G s 2 W j b v k 4 y 9 i V + f n d v 4 U Se 8 8 j z R V t n k m w I 8 d c G o g P q k Z j S 0 n 2 q o P O v B e U y i o d Q R f Z 1 o e 5 9 X Q O P C Z k F Y 1 + o l 9 f 9 i c z 0 5 y o o n B 4 2 X i L P T U z d x 6 r K c g e G 4 M S Z Y k t c g Q Q e O j b O F F E L g X 6 y 5 g b H f h 7 7 A S z q 1 2 + 6 F b D w X n r V t T u 9 j Z e O F 9 y / u F x X s D d 8 u k 3 2 c s F z e + P U T HI 2 x w 0 y L h b 2 a Z x L F B A u 6 u d r T S q Q N E Z V i k Y R F M 2 m R T a S i T + l S A s 0 4 z B Z Z P q N J / P o b b m V P J 6 o 0 T + p 5 w M u e 9 x d 8 N e d j r t L C 3 j j F 9 Q D V W i 2 L / G P X + D a F x m L v F p a g N 2 s J F o V l E B vs 3 h U + I l 6 T D V + z A q 0 r c K f C e 9 F 9 8 c 3 L 9 S r P v o / f k x S 8 8 q q y H H 9 h X 7 V 7 E U f 4 p j F 2 M e k L 7 0 d T I C 4 5 V P 1 a 4 b 7 H R L 1 y p v y x q Z R F q 3 e N 8 0 m o c 6 Z 2 i p H E t Z 1 8 y L z h h m o 5 h 6 j + y i H Y e R v N F 6 y z o o k V k / 6 F / P u j 5 u L F j J L Q X F b b Z x 8 1 P X e t H G 5 Y v I W R m u h + 2 M / S k d y 1 t w 6 u V + o R 8 Z 4 b P h q s 1 w B n r X C D 8 G L U i / N K p s v Y b r h 7 Y 4 z o a Y n U w Y w G H t 7 + I k 4 h W + F N 8 i y + 4 1 V 5 y H P W a 5 O 5 6 z 4 E 2 q 8 C H U H 8 G 9 / X V 1 9 L V C d k y Y Q r 9 q b 1 G r F M P 3 7 m Y g e C q q n X g e M Q f b 9 A e 6 z O H s c F O D f r x o s U 4 d g L m d 4 r P f t h W F q u P Q L H P 8 Y D / / V 1 c / f r N n / o O Q 9 f 3 H z Z m N r c 2 P r 4 r u 1 v + 1 U / 4 v p 1 y t / X P m X l V c r W y t / X v n b y u H K + c r 1 S r B y v / J f K / + 9 8 j / v v n 1 3 8 q 7 3 7 s a E / v x n V Z 0 / r D g / 7 z 7 / H 4 f o z O E = < / l a t e x i t > query Figure 2 : Illustration of kNN-LM . The datastore consists of paired context representations and the corresponding next tokens. The index represents a method that performs approximate k-nearest neighbors search over the datastore. Red and bolded text represent three dimensions that we explore in this paper to improve the efficiency. In adaptive retrieval, we use a predictive model to decide whether or not to query the datastore.for each token in the training data; this can easily scale to hundreds of millions or even billions of records. As a result, the extra retrieval step from such datastores greatly decreases model efficiency at test time. For example, a 100M-entry datastore can lead to an over 10x slow-down compared to parametric models ( §3.3) as shown in Figure 1 . 
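To make the retrieval step concrete, the following is a minimal sketch of a kNN-LM-style lookup over a toy datastore. It uses an exact NumPy scan in place of a real approximate-nearest-neighbor index; the datastore contents, the interpolation weight, and the distance-to-probability conversion are illustrative assumptions, not the exact configuration used in the paper.

```python
import numpy as np

# Toy datastore: one (context vector, next-token id) record per training token.
# In a real system these come from the LM's hidden states over the training set.
rng = np.random.default_rng(0)
keys = rng.normal(size=(1000, 16)).astype(np.float32)   # context representations
values = rng.integers(0, 50, size=1000)                  # next-token ids (vocab of 50)

def knn_lm_prob(query, p_lm, k=8, lam=0.25, vocab_size=50):
    """Interpolate the parametric LM distribution with a kNN distribution.

    p(y|x) = lam * p_kNN(y|x) + (1 - lam) * p_LM(y|x),
    where p_kNN puts mass on the values of the k nearest keys,
    weighted by a softmax over negative squared L2 distance.
    """
    d = ((keys - query) ** 2).sum(axis=1)          # squared L2 distance to every key
    nn = np.argpartition(d, k)[:k]                 # indices of the k nearest records
    w = np.exp(-d[nn] - np.max(-d[nn]))            # numerically stable softmax weights
    w /= w.sum()
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, values[nn], w)                # aggregate neighbor weight per token id
    return lam * p_knn + (1.0 - lam) * p_lm

# Example: combine with a (here uniform) parametric LM distribution.
query = rng.normal(size=16).astype(np.float32)
p_lm = np.full(50, 1.0 / 50)
p = knn_lm_prob(query, p_lm)
print(p.argmax(), p.sum())  # most likely next token id, total probability ~1.0
```

With hundreds of millions of records, the exact scan above is in practice replaced by an approximate index (e.g. a FAISS-style quantized index), and that per-token lookup is precisely the cost the rest of this paper targets.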
This issue poses a serious hurdle for the practical deployment of non-parametric LMs, despite their effectiveness.

In this paper, we attempt to address this issue of test-time inefficiency and make non-parametric LMs more applicable in real-world settings. We take kNN-LM as an example, first analyze its evaluation overhead, and raise three questions that we aim to answer in this paper: (1) Do we really need to perform retrieval when predicting every single token? (2) Can we identify and prune redundant records from the datastore? (3) Is it possible to further compress the datastore by reducing the vector dimensionality without losing performance? We propose and explore potential solutions for each question to improve efficiency. Specifically, we (1) show that a lightweight network can be learned to automatically prune unnecessary retrieval operations (adaptive retrieval, §4.1), (2) explore several different methods for datastore pruning based on clustering, importance-guided filtering, or greedy merging (§4.2), and (3) empirically demonstrate that simple dimension reduction techniques improve both performance and speed (§4.3). Figure 1 illustrates the overall performance of these methods. Our experiments on the WikiText-103 language modeling benchmark (Merity et al., 2017) and a training-free domain-adaptation setting demonstrate speed improvements of up to 6x with perplexity comparable to the kNN-LM. On a higher level, we expect the empirical results and analysis in this paper to help researchers better understand the speed-performance tradeoff in non-parametric NLMs, and to provide a springboard for future research on more efficient non-parametric LMs.
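As a rough illustration of the adaptive retrieval idea, the sketch below shows how a lightweight gating network over the LM's hidden state could decide, per token, whether the datastore is queried at all. The feature choice, network size, threshold, and training signal mentioned in the comments are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class RetrievalGate(nn.Module):
    """Tiny MLP that predicts whether querying the datastore is worthwhile."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Probability that retrieval will help for this token.
        return torch.sigmoid(self.net(h)).squeeze(-1)

def predict_next_token_dist(h, p_lm, gate, datastore_lookup, lam=0.25, threshold=0.5):
    """Skip the expensive kNN lookup when the gate says it is unlikely to help.

    h: hidden state for the current position, shape (hidden_dim,)
    p_lm: parametric LM distribution over the vocabulary
    datastore_lookup: callable h -> p_knn (the costly retrieval step)
    """
    if gate(h).item() < threshold:
        return p_lm                       # cheap path: parametric LM only
    p_knn = datastore_lookup(h)           # expensive path: query the datastore
    return lam * p_knn + (1.0 - lam) * p_lm

# Usage sketch: the gate would be trained offline, e.g. on a binary target
# indicating whether kNN interpolation actually lowered the loss for a token.
gate = RetrievalGate(hidden_dim=16)
h = torch.randn(16)
p_lm = torch.full((50,), 1.0 / 50)
p = predict_next_token_dist(h, p_lm, gate, datastore_lookup=lambda x: p_lm)
print(p.sum())  # still a valid distribution (~1.0)
```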
0
Lexical ambiguity, an inherent phenomenon in natural languages, refers to words or phrases that can have multiple meanings. In written text, lexical ambiguity can be roughly characterized into two categories: polysemy and homonymy. A polysemous word has multiple senses that express different but related meanings (e.g. 'head' as an anatomical body part, or as a person in charge), whereas homonyms are different words that happen to have the same spelling (e.g. 'bass' as an instrument vs. a fish) (Löbner, 2013) . Homographs are words that have the same spelling but may have different pronunciation and meaning.A diacritic is a mark that is added above, below, or within letters to indicate pronunciation, vowels, or other functions. For languages that use diacritical marks, such as Arabic or Hebrew, the orthography is typically under-specified for such marks, i.e. the diacritics are omitted. This phenomenon exacerbates the lexical ambiguity problem since it increases the rate of homographs. For example, without considering context, the undiacritized Arabic word ktb may refer to any of the following diacritized variants: 1 katab "wrote", kutub "books", or kutib "was written". As an illustrative analogy in English, dropping vowels in a word such as pan yields the underspecified token pn which can be mapped to pin, pan, pun, pen. It should be noted that even after fully specifying words with their relevant diacritics, homonyms such as "bass" are still ambiguous; likewise in Arabic, the fully-specified word bayot can either mean "verse" or "house".In this paper, we devise strategies to automatically identify and disambiguate a subset of homographs that result from omitting diacritics. While context is often sufficient for determining the meaning of ambiguous words, explicitly restoring missing diacritics should provide valuable additional information for homograph disambiguation. This process, diacritization, would render the resulting text comparable to that of languages whose words are orthographically fully specified such as English.Past studies have focused on developing models for automatic diacritic restoration that can be used as a pre-processing step for various applications such as text-to-speech (Ungurean et al., 2008) and reading comprehension (Hermena et al., 2015) . In theory, restoring all diacritics should also help improve the performance of NLP applications such as machine translation. However, in practice, full diacritic restoration results in increased sparsity and out-of-vocabulary words, which leads to degradation in performance (Diab et al., 2007; Alqahtani et al., 2016) . The main objective of this work is to find a sweet spot between zero and full diacritization in order to reduce lexical ambiguity without increasing sparsity. We propose selective diacritization, a process of restoring diacritics to a subset of the words in a sentence sufficient to disambiguate homographs without significantly increasing sparsity. Selective diacritization can be viewed as a relaxed variant of word sense disambiguation since only homographs that arise from missing diacritics are disambiguated. 2 Intrinsically evaluating the quality of a devised selective diacritization scheme against a gold set is challenging since it is difficult to obtain a dataset that exhibits consistent selective diacritization with reliable inter-annotator agreement (Zaghouani et al., 2016b; Bouamor et al., 2015) , thereby necessitating an empirical automatic investigation. 
Hence, in this work, we evaluate the proposed selective diacritization schemes extrinsically on various semantic and syntactic downstream NLP applications: Semantic Textual Similarity (STS), Neural Machine Translation (NMT), and Part-of-Speech (POS) tagging. We compare our selective strategies against two baselines: full diacritization and zero diacritics, applied to all the words in the text. We use Modern Standard Arabic (MSA) as a case study. Our approach is summarized as follows: we start with full diacritic restoration of a large corpus, then apply different unsupervised methods to identify the words that are ambiguous when undiacritized. This results in a dictionary where each word is assigned an ambiguity label (ambiguous vs. unambiguous). Selectively-diacritized datasets can then be constructed by restoring the full diacritics only to the words that are identified as ambiguous. The contribution of this paper is threefold: (1) we introduce automatic selective diacritization as a viable step in lexical disambiguation and provide an encouraging baseline for future developments towards optimal diacritization (Section 2 describes existing work towards optimal diacritization and how it differs from our approach); (2) we propose several unsupervised data-driven methods for the automatic identification of ambiguous words; (3) we evaluate and analyze the impact of partial sense disambiguation (i.e. selective diacritic restoration of identified homographs) in downstream applications for MSA.
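As a purely illustrative sketch of such a pipeline (not the paper's specific unsupervised methods, whose details are not given here), one can build an ambiguity dictionary by grouping diacritized forms under their undiacritized spelling and flagging a word as ambiguous when more than one variant is attested, then restore diacritics only on flagged words; the helper names and the Latin-transliterated diacritic set below are assumptions.

```python
# Illustrative sketch (not the paper's exact method): build an ambiguity
# dictionary from a fully diacritized corpus and selectively diacritize.
from collections import defaultdict

def strip_diacritics(word: str) -> str:
    """Hypothetical helper: drop diacritic marks, keeping base letters.
    Here short vowels stand in for diacritics in Latin-transliterated text."""
    DIACRITICS = set("aiuo~'`")
    return "".join(c for c in word if c not in DIACRITICS)

def build_ambiguity_dict(diacritized_corpus, min_freq=2):
    variants = defaultdict(lambda: defaultdict(int))
    for sent in diacritized_corpus:
        for w in sent:
            variants[strip_diacritics(w)][w] += 1
    # A word is ambiguous if its undiacritized form maps to >1 attested variant.
    return {base: len([v for v, c in forms.items() if c >= min_freq]) > 1
            for base, forms in variants.items()}

def selectively_diacritize(diacritized_sentence, ambiguous):
    # Keep diacritics only on words whose undiacritized form is ambiguous.
    return [w if ambiguous.get(strip_diacritics(w), False)
            else strip_diacritics(w)
            for w in diacritized_sentence]

corpus = [["katab", "kutub"], ["kutub", "qalam"]]
amb = build_ambiguity_dict(corpus, min_freq=1)
print(selectively_diacritize(["katab", "qalam"], amb))  # ['katab', 'qlm']
```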
0
Geolinguistics studies the variation of linguistic phenomena across space, which it can discretize at a more or less fine level of granularity. Since the beginning of the 20th century, when dialectological atlases enjoyed considerable interest, the linguistic landscape of France and Europe has changed substantially. It is this present-day space that we have attempted to capture through a large-scale experiment on pronunciation variants in French as spoken in Europe. An information-gathering effort was carried out, yielding a large quantity of data that is difficult to represent visually. Building on the principle of paper linguistic atlases and taking advantage of computing tools, we developed a website that allows the results to be mapped dynamically. The resulting site, cartopho, is presented in what follows. The experiment that was set up focused on the quality of the mid vowels (e.g. épée~épais, jeûne~jeune, beauté~botté) and a few emblematic words whose final consonant, in particular, may or may not be pronounced depending on the region (e.g. vingt, moins). A list of 70 words was established and read aloud by a phonetician, each time with two possible pronunciations. Participants were asked to indicate which of the two was closest to their most common pronunciation. Additional information (notably on the subjects' origin and geographic anchoring) was also requested. In total, 2506 subjects, across all French departments, the French-speaking Swiss cantons and the French-speaking Belgian provinces, took part in this experiment. The next section introduces the motivation for our work, some previous studies and the context. Section 3 presents the questionnaire used to map the pronunciation variants (for French spoken mainly in Europe), the subjects' task and the participants. Section 4 describes the visualization interface, its development and its features. The main results are reported in Section 5. Finally, Section 6 concludes and opens up new perspectives.
0
Biomedical named entity recognition (NER) is a computational technique used to identify and classify strings of text (mentions) that designate important concepts in biomedicine. Over the last fourteen years there has been considerable interest in this problem with a variety of generic and entity-specific algorithms applied to extract the names of genes, gene products, cells, chemical compounds and diseases (Fukuda et al., 1998; Rindflesch et al., 1999; Collier et al., 2000; Kazama et al., 2002; Zhou et al., 2003; Settles, 2004; Kim et al., 2004; Leaman and Gonzalez, 2008) . As the first stage in the integrated semantic linking of knowledge between literature and structured databases it is critically important to maximise the effectiveness of this step.Despite significant progress in NER there is still no one size fits all solution. Barriers arise because of ambiguity in the text and coding schema. Ambiguity in the text comes in various forms according to the semantic type of the entity but can be caused by a lack of standard nomenclatures, extensive and growing nomenclatures for proteins/genes across multiple organisms or the widespread use of abbreviations and descriptive names. For example, (Krauthammer and Nenadic, 2004) illustrate uncontrolled naming in genes with bridge of sevenless (boss) (FlyBase ID FBgn0000206) and Hunter and Bretonnel Cohen (2006) discuss term class ambiguity (e.g. is group a chemical entity or an assemblage of organisms?). Such challenges have led to a variety of proposed solutions involving a wide range of resources. Among these, linguistically annotated corpora such as GENIA (Tateisi et al., 2000; Kim et al., 2003) have proven to be central to the NER solution. However due to the size of the vocabularies involved, annotated corpora by themselves do not provide a complete solution. Researchers have therefore also looked at the rich availability of formally structured biomedical knowledge (ontologies) such as the Unified Medical Language System (UMLS) (Bodenreider et al., 2002) and the Gene Ontology (Gene Ontology Consortium, 2000) . Nevertheless corpora remain a key part of the solution as they provide the contextual evidence that link mentions to terms through the author's intentions. Creating such resources though is time consuming and expensive, especially when annotating new semantic types and relations.In this paper we focus on the analysis and identification of a new class of entity: phenotypes. Two thoughts motivate this: (1) The database curation community has expressed a wish for full text entity indexing and the inclusion of phenotypes (Dowell et al., 2009; Hirschman et al., 2012) , and (2) Biomedicine is rapidly moving towards full-scale integration of data, opening up the possibility to understand complex heritable diseases caused by genes. Association studies involving phenotypes are considered important to making progress (Lage et al., 2007; Wu et al., 2008) . The ultimate goal of the work we present here is to allow relations mined from sentences such as the one we annotated below to feed into novel hypothesis generation procedures. From Ex 1. the reader can easily infer a relation between IgG1 disorder and three genes/gene products marked as GGP. [anti-double-stranded DNA (anti-dsDNA) antibodies] GG P above 30 U/ml. (Source PMCID: PMC1003566).Whilst other authors have tried similar approaches for other entity types, none have tried both machine learning and external resource lookup for a class as rich and semantically complex as phenotypes. 
The key contributions of this paper are: (1) to provide an operational semantics for identifying phenotype candidates in text, (2) to introduce a set of guidelines and an annotated corpus based on a selection of 19 clinically significant auto-immune diseases from Online Mendelian Inheritance in Man (OMIM) (Hamosh et al., 2005), one of the most widely used gene-disease databases, and (3) to mitigate linguistic variation whilst still meeting the conceptual expectations of biologists by proposing a new named entity solution that uses statistical inference and external manually crafted resources. This method is tested on the new corpus and on one extant corpus (Khordad et al., 2011) that has been used in previously reported experiments. Freimer and Sabatti (2003) describe phenotypes as referring to 'any morphologic, biochemical, physiological or behavioral characteristic of an organism. ... All phenotypic characteristics represent the expression of particular genotypes combined with the effects of specific environmental influences.' Despite recent data integration efforts for phenotypes such as (Robinson and Mundlos, 2010), phenotypic descriptions still tend to be author/study specific, and biological results may go undiscovered if the terms used lie outside an author's immediate research area (Bard and Rhee, 2004). Moreover, unlike genes or anatomic structures, phenotypes and their traits are complex concepts and do not constitute a homogeneous class of objects (i.e. a natural kind).
0
Social media platforms such as Twitter are regarded as potentially valuable tools for monitoring public health, including identifying ADEs to aid pharmacovigilance efforts. They do, however, pose a challenge due to the relative scarcity of relevant tweets, in addition to a more fluid use of language, creating the further challenge of identifying and classifying specific instances of health-related issues. In this year's task, as in previous SMM4H runs (Klein et al., 2020), a distinction is made between classification, extraction, and normalization. This is atypical of NER systems: many other NER datasets present these steps together, and they are consequently solved with a joint approach. Gattepaille (2020) showed that simply tuning a base BERT (Devlin et al., 2019) model could achieve strong results, even beating ensemble methods that rely on transformers pretrained on more academic texts, such as SciBERT (Beltagy et al., 2019), BioBERT (Lee et al., 2020) or ensembles of them, while approaching the performance of BERT models specifically pretrained on noisy health-related comments.
0
In human-human tutoring, it is an effective strategy to ask students to explain instructional material in their own words. Self-explanation (Chi et al., 1994) and contentful talk focused on the domain are correlated with better learning outcomes (Litman et al., 2009; Chi et al., 1994) . There has therefore been much interest in developing automated tutorial dialogue systems that ask students open-ended explanation questions (Graesser et al., 1999; Aleven et al., 2001; Jordan et al., 2006; VanLehn et al., 2007; Nielsen et al., 2009; Dzikovska et al., 2010a) . In order to do this well, it is not enough to simply ask the initiating question, because students need the experience of engaging in meaningful dialogue about the instructional content. Thus, systems must respond appropriately to student explanations, and must provide detailed, flexible and appropriate feedback (Aleven et al., 2002; .In simple domains, we can adopt a knowledge engineering approach and build a domain model and a diagnoser, together with a natural language parser to produce detailed semantic representations of student input (Glass, 2000; Aleven et al., 2002; Pon-Barry et al., 2004; Callaway et al., 2006; Dzikovska et al., 2010a) . The advantage of this approach is that it allows for flexible adaptation of feedback to a variety of factors such as student performance. For example, it is easy for the system to know if the student made the same error before, and adjust its feedback to reflect it. Moreover, this approach allows for easy addition of new exercises : as long as an exercise relies on the concepts covered by the domain model, the system can apply standard instructional strategies to each new question automatically. However, this approach is significantly limited by the requirement that the domain be small enough to allow comprehensive knowledge engineering, and it is very labor-intensive even for small domains.Alternatively, we can adopt a data-driven approach, asking human tutors to anticipate in advance a range of possible correct and incorrect answers, and associating each answer with an appropriate remediation (Graesser et al., 1999; Jordan et al., 2004; VanLehn et al., 2007) . The advantage of this approach is that it allows more complex and interesting domains and provides a good framework for eliciting the necessary information from the human experts. A weakness of this approach, which also arises in content-scoring applications such as ETS's c-rater (Leacock and Chodorow, 2003) , is that human experts find it extremely difficult to predict with any certainty what the full range of student responses will be. This leads to a lack of adaptivity and generality -if the system designers have failed to predict the full range of possibilities, students will often receive the default feedback. It is frustrating and confusing for students to repeatedly receive the same feedback, regardless of their past performance or dialogue context (Jordan, 2004) .Our goal is to address the weaknesses of the datadriven approach by creating a framework for supporting more flexible and systematic feedback. Our approach identifies general classes of error, such as omissions, incorrect statements and off-topic statements, then aims to develop general remediation strategies for each error type. This has the potential to free system designers from the need to pre-author separate remediations for each individual question. 
A precondition for the success of this approach is that the system be able to identify error types based on the student response and the model answers.A contribution of this paper is to provide a new dataset that will enable researchers to develop classifiers specifically for this purpose. The hope is that with an appropriate dataset the data-driven approach will be flexible and responsive enough to maintain student engagement. We provide a corpus that is labeled for a set of five student response types, develop a precise definition of the corresponding supervised classification task, and report results for a variety of simple baseline classifiers. This will provide a basis for the development, comparison and evaluation of alternative approaches to the error classification task. We believe that the natural language capabilities needed for this task will be directly applicable to a far wider range of tasks in educational assessment, information extraction and computational semantics. This dataset is publicly available and will be used in a community-wide shared task.
0
Contrastive learning learns to encode data into an embedding space such that related data points have closer representations and unrelated ones are further apart. Recent works in NLP adopt deep neural nets as encoders and use unsupervised contrastive learning for sentence representation (Giorgi et al., 2020), text retrieval, and language model pre-training tasks. Supervised contrastive learning (Khosla et al., 2020) has also been shown effective in training dense retrievers (Karpukhin et al., 2020; Qu et al., 2020). These works typically use a batch-wise contrastive loss, sharing target texts as in-batch negatives. With such a technique, previous works have empirically shown that larger batches help learn better representations. However, computing the loss and updating model parameters with respect to a big batch requires encoding all batch data and storing all activations, so batch size is limited by the total available GPU memory. This limits the application of and research on contrastive learning methods under memory-limited setups, e.g. in academia. For example, pre-training a BERT passage encoder may call for a batch size of 4096, while a high-end commercial GPU such as the RTX 2080 Ti can only fit a batch of 8. The gradient accumulation technique, which splits a large batch into chunks and sums gradients across several backward passes, cannot emulate a large batch, as each smaller chunk has fewer in-batch negatives. In this paper, we present a simple technique that keeps peak memory usage for contrastive learning almost constant regardless of the batch size. For deep contrastive learning, the memory bottleneck is the deep-neural-network-based encoder. We observe that we can separate the backpropagation process of the contrastive loss into two parts, from loss to representations and from representations to model parameters, with the latter being independent across batch examples given the former, as detailed in subsection 3.2. We then show in subsection 3.3 that by separately pre-computing the representations' gradients and storing them in a cache, we can break the update of the encoder into multiple sub-updates that fit into GPU memory. This pre-computation of gradients allows our method to produce exactly the same gradient update as training with the large batch. Experiments show that with about a 20% increase in runtime, our technique enables a single consumer-grade GPU to reproduce state-of-the-art large-batch-trained models that used to require multiple professional GPUs. (Our code is available at github.com/luyug/GradCache.)
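The two-part separation described above can be sketched in a few lines of PyTorch. The following is a simplified illustration of the idea under stated assumptions (a dot-product in-batch-negative loss, a toy encoder, a fixed chunk size), not the released GradCache code: representations are first computed without building a graph, the large-batch loss supplies gradients with respect to those representations, and these cached gradients are then pushed through the encoder chunk by chunk.

```python
# Minimal sketch of the gradient-cache idea: the contrastive loss over a large
# batch is computed on detached representations, their gradients are cached,
# and the encoder is back-propagated chunk by chunk so peak memory stays low.
import torch
import torch.nn as nn
import torch.nn.functional as F

def contrastive_loss(q, k):
    # In-batch negatives: the i-th query should match the i-th key.
    logits = q @ k.t()
    labels = torch.arange(q.size(0))
    return F.cross_entropy(logits, labels)

def grad_cache_step(encoder, queries, keys, chunk=4):
    # Step 1: representation forward pass without a graph (cheap in memory).
    with torch.no_grad():
        q_all = torch.cat([encoder(c) for c in queries.split(chunk)])
        k_all = torch.cat([encoder(c) for c in keys.split(chunk)])
    q_all.requires_grad_(True)
    k_all.requires_grad_(True)

    # Step 2: loss over the full batch; cache gradients w.r.t. representations.
    loss = contrastive_loss(q_all, k_all)
    loss.backward()
    q_grads, k_grads = q_all.grad.split(chunk), k_all.grad.split(chunk)

    # Step 3: re-encode each chunk with a graph and push the cached gradients
    # through the encoder, accumulating parameter gradients chunk by chunk.
    for inputs, grads in [(queries, q_grads), (keys, k_grads)]:
        for chunk_in, chunk_grad in zip(inputs.split(chunk), grads):
            reps = encoder(chunk_in)
            reps.backward(gradient=chunk_grad)
    return loss.item()

enc = nn.Sequential(nn.Linear(8, 8), nn.Tanh(), nn.Linear(8, 8))
opt = torch.optim.SGD(enc.parameters(), lr=0.1)
opt.zero_grad()
print(grad_cache_step(enc, torch.randn(16, 8), torch.randn(16, 8)))
opt.step()
```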
0
SemEval-2017 Task 3 (http://alt.qcri.org/semeval2017/task3) on Community Question Answering (cQA) focuses on answering new questions by retrieving related answered questions in community forums (Nakov et al., 2017). This task extends the previous SemEval-2015 and SemEval-2016 cQA tasks. This year, five subtasks were proposed: English Question-Comment Similarity (subtask A), English Question-Question Similarity (subtask B), English Question-External Comment Similarity (subtask C), Arabic Answer Re-rank (subtask D) and English Multi-Domain Duplicate Question Detection (subtask E). Subtask B (Question Similarity) aims to re-rank a set of similar questions retrieved by a search engine with respect to the original question, with the idea that the answers to the similar questions should also be answers to the new question. For a given question, a set of ten similar questions is provided for re-ranking.
0
Deep neural networks have achieved state-of-the-art performance on many natural language processing (NLP) tasks (Otter et al., 2020; Ruder et al., 2019). When applying such models in real-world applications, understanding their behavior can be challenging: the ever-increasing complexity of such models makes it difficult to understand and debug their predictions. A human can explain why an example belongs to a specific concept class by constructing a counterfactual of an example that is minimally altered but belongs to a different class. Contrasting the original example with its counterfactual highlights the critical aspects signifying the concept class. We study a similar approach to understand deep NLP models' classification criteria. Given a classifier and an input text, our goal is to generate a counterfactual by making a set of minimal modifications to the text that change the label assigned by the classifier. Additionally, our goal is to understand the model's behavior when processing naturally occurring inputs, hence we wish to generate grammatically correct and semantically plausible counterfactuals. Automatic generation of text counterfactuals has been studied in different settings. Qin et al. (2019) considered counterfactual story rewriting, which aims to minimally rewrite an original story to be compatible with a counterfactual event. Wu et al. (2021) used a fine-tuned GPT-2 model to generate general-purpose counterfactuals that are not tied to a particular classification model. Yang et al. (2020) aim to generate plausible-sounding counterfactuals that flip a classification model's decision for financial texts. Relatedly, textual adversaries also aim to change the model prediction (with modifications resembling natural text). The difference is that adversaries further aim to escape human detection (not changing a human's classification), whereas counterfactuals do not have such a requirement. Another line of related work is style transfer (Sudhakar et al., 2019; Wang et al., 2019; Hu et al., 2017), which aims to modify a given text according to a target style. It differs from adversary or counterfactual generation in that it seeks to fully change all style-related phrases, as opposed to minimally perturbing a text to change a classifier's decision. White-box approaches have been widely used to generate adversaries or counterfactuals for vision tasks, where the continuous inputs can be optimized to alter model predictions (Goodfellow et al., 2014; Carlini and Wagner, 2017; Neal et al., 2018). Such optimization-based approaches are difficult to apply to language due to the discrete nature of text. We circumvent this difficulty by directly optimizing in the latent space of the input towards the desired classification. We then exploit the language generation capability of pre-trained language models, available for most state-of-the-art NLP models such as BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019), to generate semantically plausible substitutions from the optimized latent representations. We further introduce Shapley values to estimate the combinatoric effect of multiple simultaneous changes, which are then used to guide a beam search to generate the final counterfactual. Leveraging pre-trained language models to generate alternative texts has been a popular black-box approach in the recent literature on text adversaries (Li et al., 2020b; Garg and Ramakrishnan, 2020; Li et al., 2020a).
Our work presents a first attempt to combine the strength of white-box optimization and the power of pre-trained language models. While Shapley values have been widely studied for the problem of feature importance (Lundberg and Lee, 2017; Sundararajan and Najmi, 2020) and data valuation (Jia et al., 2020) , this is the first effort demonstrating their usefulness for text generation.We compare our method to several white-box and black-box baselines on two different text classification tasks. Automatic and human evaluation results show that our method significantly improves the success rate of counterfactual generation, while reducing the fraction of input tokens modified and enhancing the semantic plausibility of generated counterfactuals. We also show through ablation studies that both counterfactual optimization of the latent representations and Shapley value estimates contribute to our method's strong performance.
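To illustrate how Shapley values can score the combinatoric effect of several simultaneous edits, here is a small Monte Carlo permutation-sampling estimator; it is a generic sketch, not the paper's estimator, and the toy value function standing in for the classifier's target-class probability is an assumption.

```python
# Illustrative sketch of estimating Shapley values for a set of candidate token
# substitutions via Monte Carlo permutation sampling. The value function and
# the toy edits below are placeholders, not the paper's actual implementation.
import random

def shapley_values(candidate_edits, value_fn, n_samples=200, seed=0):
    """candidate_edits: list of hashable edit ids.
    value_fn: maps a subset (tuple of edits) to the target-class probability."""
    rng = random.Random(seed)
    contrib = {e: 0.0 for e in candidate_edits}
    for _ in range(n_samples):
        order = candidate_edits[:]
        rng.shuffle(order)
        applied, prev = [], value_fn(())
        for e in order:
            applied.append(e)
            cur = value_fn(tuple(sorted(applied)))
            contrib[e] += cur - prev          # marginal gain of adding edit e
            prev = cur
    return {e: v / n_samples for e, v in contrib.items()}

# Toy value function: the edit "good->bad" flips the label strongly, others weakly.
def toy_value(subset):
    return 0.9 if "good->bad" in subset else 0.1 + 0.05 * len(subset)

scores = shapley_values(["good->bad", "movie->film", "the->a"], toy_value)
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

Edits with the highest estimated Shapley values would then be preferred when expanding hypotheses during beam search.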
0
Vision-language tasks, such as image captioning (Vinyals et al., 2015), visual question answering (Antol et al., 2015), and visual commonsense reasoning (Zellers et al., 2018), serve as rich test-beds for evaluating the reasoning capabilities of visually informed systems. These tasks require joint understanding of visual contents, language semantics, and cross-modal alignments. In particular, beyond simply detecting what objects are present, models have to understand comprehensively the semantic information in an image, such as objects, attributes, relationships, actions, and intentions, and how all of these are referred to in natural language. Inspired by the success of BERT (Devlin et al., 2019) on a variety of NLP tasks, there has been a surge of interest in building pretrained models for vision-language tasks, such as ViLBERT (Lu et al., 2019), VL-BERT (Su et al., 2020), and UNITER (Chen et al., 2020). Despite the impressive performance on several vision-language tasks, these models suffer from fundamental difficulties in learning effective visually grounded representations, as they rely solely on cross-attention mechanisms to capture the alignment between image and text features, and learn from indirect signals without any explicit supervision. [Figure 1: A visual question-answering example illustrating the effectiveness of using the scene graph as the bridge for cross-modal alignment.] Recently, Oscar introduced object tags detected in images as anchor points to ease the learning of semantic alignments between image regions and word sequences. However, individual object tags in isolation ignore the rich visual information, such as attributes and relationships between objects. Without such information as contextual cues, the core challenge of ambiguity in visual grounding remains difficult to solve. As Figure 1 shows, in order to answer the question correctly, the model needs to reason about object relationships. Without the relation "on" between "cup" and "table", the model mistakenly thinks the "cup" is on the "tray". This work tackles the above challenges by introducing visual scene graphs as the bridge to align vision-language semantics. Extracted from the image using modern scene graph generators, a visual scene graph effectively depicts salient objects and their relationships. This visually grounded intermediate abstraction permits more effective vision-language cross-attention for disambiguation and finer-grained alignment. Specifically, we propose Samformer (Semantic Aligned Multi-modal transFORMER), which learns the alignment between the modalities of text, image, and graphical structure. For each of the object-relation labels in the scene graph, the model can easily find the referring text segments in natural language, and then learn to align them to the image regions already associated with the scene graph. On the basis of the visually grounded graph, we apply a contrastive loss and a masked language model loss that explicitly encourage image-text alignment.
Furthermore, we propose a per-triplet (object, relation, subject) contrastive loss to align object and relation representations across the two modalities respectively.We adopt a set of datasets, including Microsoft COCO Captions dataset (Lin et al., 2014) , Visual Genome (Krishna et al., 2016) , VQA (Antol et al., 2015) , GQA (Hudson and Manning, 2019), Flicker 30k (Young et al., 2014) , SBU (Ordonez et al., 2011) , and Conceptual Caption (Sharma et al., 2018) to pre-train our model and fine-tune it on visual compositional question answering (GQA) (Hudson and Manning, 2019). Our preliminary analyses show improved performance and demonstrate the potential of the proposed approach on broader visual-language applications.
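One simple way to realize a per-triplet alignment objective of the kind described here is an InfoNCE-style loss in which the textual embedding of each (subject, relation, object) triplet is pulled towards its visually grounded counterpart and pushed away from the other triplets in the batch; the sketch below is only an illustration of that general idea, with assumed tensor shapes, not Samformer's exact loss.

```python
# Hedged sketch of a per-triplet contrastive (InfoNCE-style) alignment loss:
# the textual embedding of each (subject, relation, object) triplet should be
# closest to its visually grounded counterpart among all triplets in the batch.
import torch
import torch.nn.functional as F

def triplet_alignment_loss(text_triplet_emb, visual_triplet_emb, temperature=0.07):
    t = F.normalize(text_triplet_emb, dim=-1)       # (n_triplets, dim)
    v = F.normalize(visual_triplet_emb, dim=-1)     # (n_triplets, dim)
    logits = t @ v.t() / temperature                # similarity of every pair
    targets = torch.arange(t.size(0))               # i-th text matches i-th visual
    # Symmetric loss: text-to-visual and visual-to-text.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = triplet_alignment_loss(torch.randn(8, 32), torch.randn(8, 32))
print(loss.item())
```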
0
Named entity recognition (NER) is a basic step in many natural language processing (NLP) tasks, such as information extraction and machine translation, and NER precision directly affects these subsequent NLP tasks. Mongolian has a wide range of users in China, Mongolia and Russia. However, it is a low-resource language, Mongolian NLP resources are very rare, and research on Mongolian NLP is at an initial stage. Previous work used Conditional Random Fields (CRF) to predict Mongolian NER labels and carried out only preliminary work on the task. For the NER task, we need to label sentences with named entity tags; for example, Figure 1 gives a sentence with an organization name (ORG) and a person name (PER). In Mongolian corpora, the data sparseness problem is more serious because of the lexical features of Mongolian and of homomorphic characters with different pronunciations: words may look the same in the corpus while their pronunciation differs. In addition, spelling mistakes that are inconsistent with the coding rules occur in the Mongolian corpus, owing to the keyboard operators' dialects. For example, the words "oyun" and "uyun" (in Latin-transliterated Mongolian) both correspond to the English word "wisdom"; they are the same word rather than traditional synonyms. To some extent, this phenomenon also leads to data sparsity. We computed word-frequency statistics on a web corpus and found a large number of low-frequency words, some of which are misspellings of high-frequency words. Because such variants share the same contexts, a neural language model (LM) can learn context-sensitive representations for them, which is effective for supervised NER sequence labeling. In general, a neural network model can be learned by training on a large-scale corpus. Unfortunately, Mongolian text resources are still scarce, especially large, high-quality public labeled datasets. We therefore explored a semi-supervised approach that does not require additional labeled data and is an effective way to learn from large amounts of unlabeled data. Pre-trained word embeddings (Mikolov et al., 2013; Pennington et al., 2014) learned from unlabeled data have become a fundamental component of neural network architectures for NER. Peters (2017) proposed an approach for English NER that pre-trains a language model on unlabeled data to obtain context embeddings as additional information; the LM was incorporated into the NER model by concatenation. Because LM embeddings are used to compute the probability of future words in a neural LM, they can capture information such as the semantic and syntactic roles of words in context. Our research found that using LM embeddings through simple concatenation is suboptimal, because it implies that word contexts in the labeled dataset and in the much larger unlabeled dataset are equivalent. In this case, the more similar the text styles, the more effective the LM embeddings will be. However, we cannot obtain an unlabeled dataset that is highly correlated with the labeled dataset. The two datasets used in this paper are news corpora from Mongolian news websites in recent years, but their editing styles are inconsistent across programs, due to differences between website styles and news time spans. Consequently, the pre-trained LM cannot fully represent the context-embedding information of the labeled dataset. In this paper, we investigate language model extensions based on an attention mechanism.
In the NER model architecture, the LM concatenation layer is replaced by an attention layer. Using an attention mechanism, the model can dynamically balance how much information is used from each of the two inputs. We name this architecture the LM-ATT model. Our experiments on Mongolian show that this architecture provides a substantial improvement over the previous model.
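The paper's exact LM-ATT architecture is not spelled out in this excerpt, so the following PyTorch sketch only illustrates the general idea of replacing concatenation with an attention-style gate that decides, per token, how much of the pre-trained LM embedding to mix into the task encoder state; module names and dimensions are assumptions.

```python
# Hedged sketch of fusing pre-trained LM embeddings with task encoder states
# through an attention-style gate instead of plain concatenation.
import torch
import torch.nn as nn

class LMAttentionFusion(nn.Module):
    def __init__(self, task_dim: int, lm_dim: int):
        super().__init__()
        self.proj = nn.Linear(lm_dim, task_dim)
        self.score = nn.Linear(2 * task_dim, 1)

    def forward(self, task_h, lm_h):
        """task_h: (batch, seq, task_dim)  lm_h: (batch, seq, lm_dim)"""
        lm_h = self.proj(lm_h)
        # Per-token gate deciding how much LM information to mix in.
        gate = torch.sigmoid(self.score(torch.cat([task_h, lm_h], dim=-1)))
        return gate * lm_h + (1.0 - gate) * task_h   # dynamically balanced

fusion = LMAttentionFusion(task_dim=64, lm_dim=128)
fused = fusion(torch.randn(2, 10, 64), torch.randn(2, 10, 128))
print(fused.shape)  # torch.Size([2, 10, 64])
```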
0
Arabic is a morphologically complex language. The morphological analysis of a word consists of determining the values of a large number of (orthogonal) features, such as basic part-of-speech (i.e., noun, verb, and so on), voice, gender, number, information about the clitics, and so on. (In this paper, we only discuss inflectional morphology; thus, the fact that the stem is composed of a root, a pattern, and an infix vocalism is not relevant except as it affects broken plurals and verb aspect.) For Arabic, this gives us about 333,000 theoretically possible completely specified morphological analyses, i.e., morphological tags, of which about 2,200 are actually used in the first 280,000 words of the Penn Arabic Treebank (ATB). In contrast, English morphological tagsets usually have about 50 tags, which cover all morphological variation. As a consequence, morphological disambiguation of a word in context, i.e., choosing a complete morphological tag, cannot be done successfully using methods developed for English because of data sparseness. Hajič (2000) demonstrates convincingly that morphological disambiguation can be aided by a morphological analyzer, which, given a word without any context, gives us the set of all possible morphological tags. The only work on Arabic tagging that uses a corpus for training and evaluation (that we are aware of), (Diab et al., 2004), does not use a morphological analyzer. In this paper, we show that the use of a morphological analyzer outperforms other tagging methods for Arabic; to our knowledge, we present the best-performing wide-coverage tokenizer on naturally occurring input and the best-performing morphological tagger for Arabic. (We would like to thank Mona Diab for helpful discussions. The work reported in this paper was supported by NSF Award 0329163. The authors are listed in alphabetical order.)
0
In radiology, the health status of a patient is described using a multitude of formats. During the examination process, a radiologist creates machine-readable descriptions such as radiology images, dictated reports about the image findings, and written texts. Although most of the radiology data are related via the anatomical entities shown or described, there is no link between them, since the information pieces are stored in distributed systems. This absence of links between the items hinders the radiologist's workflow. Especially when reading reports, radiologists want to reference back from the described finding (in the text) to the correlating body location (in the images). Without automatically created links, this resolution is obviously time-consuming when dealing with images taken with modalities that deliver a mass of stacked images. Today, radiologists add alignment information to the text that names the image containing the described findings. But still, the resolution of these textual links requires manual intervention to find the correct image and detect the described finding in the image. To simplify this workflow, we introduce a mechanism that automatically aligns pathological anatomical entities in radiology text and images based on semantic annotations. Figure 1 shows our concept of linking anatomical concepts from image and text: both the images and the texts are annotated with the anatomical concepts that they describe. By combining annotations with the same RadLex ID (RID), the link from one format to the other can be established. As a result, the radiologist can easily navigate from the pathological Leber [liver] (RID58) described in the text to the correlating position in the images. For the integration, the necessary semantic annotations of the images have been made available as a result of a previous project (Seifert et al., 2009; Seifert, 2010). In order to align these RadLex-based annotations with anatomical entities described in radiology reports, our text analysis system has to annotate the texts with RadLex-based annotations, too. Our established mechanism operates in two steps: first, we identify the relevant sentences that describe pathological findings and, second, we extract the anatomical annotations only from these sentences. We include a preceding sentence classification step because, according to the radiologists we worked with, extracting all anatomical entities from the text to link them with the image annotations is inappropriate. A large portion of the findings is included in the reports in order to exclude differential diagnoses. These are normal or absent findings that do not describe pathologies. Radiologists are instead interested in automated alignment of images of anatomical entities described with pathological findings. [Figure 1: Aligning the anatomical concept liver from radiology text to image using RadLex-based annotations; the figure pairs a RadLex fragment (RID1 RadLex term, RID3 anatomical entity, ..., liver) with a German report sentence: "In der Leber 2,7 x 2,6 cm große, hypodense Läsion im Segment VII (VA 3,9 x 3,4 cm)."] The sentence classification is conducted based on a lexicon and probabilistic semantic grammar rules (P-CFG). For parsing, we apply the standard probabilistic CKY algorithm (Kasami, 1965). During parsing, the most likely parse tree for the given sentence is determined.
The topmost constituent in the resulting parse tree can be used to determine the pathology classification of the report sentences.The chosen approach requires a full coverage lexicon including pathology classification of the entities. An initial linguistic resource based on the German RadLex taxonomy is provided. However, the German RadLex is lacking in terminology and pathology classification. The contribution of this paper is the description of a process to extend the German RadLex-based lexicon with vocabulary and pathology classification information in order to link heterogeneous medical data sources.
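For reference, a compact probabilistic CKY parser over a toy grammar in Chomsky normal form is sketched below; the grammar, lexicon entries and the PATHO_FINDING symbol are invented for illustration and are not the project's actual German RadLex grammar. It only shows how the most likely analysis, and hence the topmost constituent used for pathology classification, is obtained.

```python
# Compact probabilistic CKY sketch over a toy grammar in Chomsky normal form.
import math
from collections import defaultdict

lexical = {  # P(word | tag), toy values
    ("NOUN", "läsion"): 1.0, ("PREP", "in"): 1.0, ("NOUN", "leber"): 1.0,
}
binary = {   # P(A -> B C), toy values
    ("PATHO_FINDING", "PP", "NOUN"): 0.9,
    ("PP", "PREP", "NOUN"): 1.0,
}

def cky(words):
    n = len(words)
    chart = defaultdict(dict)  # chart[(i, j)][symbol] = best log-probability
    for i, w in enumerate(words):
        for (tag, word), p in lexical.items():
            if word == w:
                chart[(i, i + 1)][tag] = math.log(p)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (a, b, c), p in binary.items():
                    if b in chart[(i, k)] and c in chart[(k, j)]:
                        score = math.log(p) + chart[(i, k)][b] + chart[(k, j)][c]
                        if score > chart[(i, j)].get(a, float("-inf")):
                            chart[(i, j)][a] = score
    return chart[(0, n)]  # symbols spanning the whole sentence

print(cky(["in", "leber", "läsion"]))  # {'PATHO_FINDING': ...}
```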
0
Cross-Document (CD) Event Coreference resolution is the task of identifying clusters of text mentions, across multiple texts, that refer to the same event. Successful identification of such coreferring mentions is beneficial for a broad range of applications at the multi-text level, which are gaining increasing interest and need to match and integrate information across documents, such as multidocument summarization (Falke et al., 2017; Liao et al., 2018) , multi-hop question answering (Dhingra et al., 2018; Wang et al., 2019) and Knowledge Base Population (KBP) (Lin et al., 2020) .Unfortunately, rather few datasets of reasonable scale exist for CD event coreference. Notable datasets include ECB+ (Cybulska and Vossen, 2014) , MEANTIME (Minard et al., 2016) and the Gun Violence Corpus (GVC) (Vossen et al., 2018) (described in Section 2), where recent work has been evaluated solely on ECB+. When addressed in a direct manner, manual CD coreference annotation is very hard due to its worst-case quadratic complexity, where each mention may need to be compared to all other mentions in all documents. Indeed, ECB+ contains less than 7000 event mentions in total (train, dev, and test sets). Further, effective corpora for CD event coreference are available mostly for English, limiting research opportunities for other languages. Partly as a result of this data scarcity, rather little effort was invested in this field in recent years, compared to dramatic recent progress in modeling within-document coreference.Furthermore, most existing cross-document coreference datasets are restricted in their scope by two inter-related characteristics. First, these datasets annotate sets of documents, where the documents in each set all describe the same topic, mostly a news event (consider the Malaysia Airlines crash as an example). While such topicfocused document sets guarantee a high density of coreferring event mentions, facilitating annotation, in practical settings the same event might be mentioned across an entire corpus, being referred to in documents of varied topics. Second, we interestingly observed that event mentions may be (softly) classified into two different types. One type, which we term a descriptive mention, pertains to a mention involved in presenting the event or describing new information about it. For example, news about the Malaysian Airline crash will include mostly descriptive mentions of the event and its sub-events, such as shot-down, crashed and investigated. Naturally, news documents about a topic, as in prior event coreference datasets, include mostly descriptive event mentions. The other type, which we term a referential mention, pertains to mentions of the event in sentences that do not focus on presenting new information about the event but rather mention it as a point of reference. For example, mentions referring to the airplane crash, such as the Malaysian plane crash, Flight MH17 or disaster may appear in documents about the war in Donbass or about flight safety. Since referential event mentions are split across an entire corpus, they are less trivial to identify for coreference annotation, and are mostly missing in current newsbased datasets. 
As we demonstrate later, these two mention types exhibit different lexical distributions and seem to require corresponding training data to be properly modeled.In this paper, we present the Wikipedia Event Coreference methodology (WEC), an efficient method for automatically gathering a large-scale dataset for the cross-document event coreference task. Our methodology effectively complements current datasets in the above-mentioned respects: data annotation is boosted by leveraging available information in Wikipedia, practically applicable for any Wikipedia language; mentions are gathered across the entire Wikipedia corpus, yielding a dataset that is not partitioned by topics; and finally, our dataset consists mostly of referential event mentions.In its essence, our methodology leverages the coreference relation that often holds between anchor texts of hyperlinks pointing to the same Wikipedia article (see Figure 1 ), similar to the basic idea introduced in the Wikilinks dataset (Singh et al., 2012) . Focusing on CD event coreference, we identify and target only Wikipedia articles denoting events. Anchor texts pointing to the same event article, along with some surrounding context, become candidate mentions for a corresponding event coreference cluster, undergoing extensive filtering. We apply our method to the English Wikipedia and extract WEC-Eng, our English version of a WEC dataset. The automaticallyextracted data that we collected provides a training set of a very large scale compared to prior work, while our development and test sets underwent relatively fast manual validation.Due to the large scale of the WEC-Eng training data, current state-of-the-art CD coreference models cannot be easily trained and evaluated on it, for scalability reasons. We therefore developed a new, more scalable, baseline model for the task, while adapting components of recent competitive within-document coreference models (Lee et al., 2017; Kantor and Globerson, 2019; . In addition to setting baseline results for WEC-Eng, we assess our model's competitiveness by presenting a new state-of-the-art on the commonly used ECB+ dataset. Finally, we propose that our automatic extraction and manual validation methods may be applied to generate additional annotated datasets, particularly for other languages. Overall, we suggest that future cross-document coreference models should be evaluated also on the WEC-Eng dataset, and address its complementary characteristics, while the WEC methodology may be efficiently applied to create additional datasets. To that end, our dataset and code 12 are released for open access.
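The core extraction step of the WEC methodology can be sketched very simply under stated assumptions: hyperlink anchor texts that point to the same event article become candidate mentions of one coreference cluster. Real Wikipedia parsing, event-article detection and the extensive filtering and validation described in the paper are omitted; the data structure and the toy event detector below are illustrative only.

```python
# Hedged sketch of grouping hyperlink anchors by their target event article.
from collections import defaultdict
from typing import NamedTuple

class Hyperlink(NamedTuple):
    target_article: str   # Wikipedia article the link points to
    anchor_text: str      # surface mention in the source article
    context: str          # surrounding sentence/passage

def build_candidate_clusters(links, is_event_article):
    clusters = defaultdict(list)
    for link in links:
        if is_event_article(link.target_article):      # keep event targets only
            clusters[link.target_article].append((link.anchor_text, link.context))
    return dict(clusters)

links = [
    Hyperlink("Malaysia Airlines Flight 17", "Flight MH17",
              "Safety regulations changed after Flight MH17."),
    Hyperlink("Malaysia Airlines Flight 17", "the Malaysian plane crash",
              "The war in Donbass escalated after the Malaysian plane crash."),
    Hyperlink("Boeing 777", "Boeing 777", "The aircraft was a Boeing 777."),
]
clusters = build_candidate_clusters(
    links, is_event_article=lambda a: "Flight 17" in a)  # toy event detector
print(clusters)
```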
0
Though WordNet has already been used as a starting resource for developing WordNets in many languages, the construction of a WordNet for a given language can vary according to the availability of language resources. Some were developed from scratch, and some were developed by combining various existing lexical resources. This paper presents an online collaborative tool designed in particular to facilitate the construction of the Asian WordNet, which is automatically generated from existing resources containing only English equivalents and lexical synonyms. In addition, to support the work of syntactic dependency tree annotation, we develop an editing suite that integrates utilities for word segmentation, POS tagging and dependency trees. The tool is organized in four steps, namely sentence selection, word segmentation, POS tagging, and syntactic dependency tree annotation. The rest of this paper is organized as follows: Section 2 describes the collaborative interface for revising the results of synset translation; Section 3 describes the tool for annotating a Thai syntactic dependency tree corpus; and Section 4 concludes our work.
0
Assigning sense tags to the words in a text can be viewed as a classification problem. A probabilistic classifier assigns to each word the tag that has the highest estimated probability of having occurred in the given context. Designing a probabilistic classifier for word-sense disambiguation includes two main sub-tasks: specifying an appropriate model and estimating the parameters of that model. The former involves selecting informative contextual features (such as collocations) and describing the joint distribution of the values of these features and the sense tags of the word to be classified. The parameters of a model are the characteristics of the entire population that are considered in the model. Practical applications require the use of estimates of the parameters. Such estimates are based on functions of a data sample (i.e., statistics) rather than the complete population. To make the estimation of parameters feasible, a model with a simplified form is created by limiting the number of contextual features considered and by expressing the joint distribution of features and sense tags in terms of only the most important systematic interactions among variables. To date, much of the work in statistical NLP has focused on parameter estimation ([11], [13], [12], [4]). Of the research directed toward identifying the optimum form of model, most has been concerned with the selection of individually informative features ([2], [5]), with relatively little attention directed toward the identification of an optimum approximation to the joint distribution of the values of the contextual features and object classes. Most previous efforts to formulate a probabilistic classifier for word-sense disambiguation did not attempt to systematically identify the interdependencies among contextual features that can be used to classify the meaning of an ambiguous word. Many researchers have performed disambiguation on the basis of only a single feature ([6], [15], [2]), while others who do consider multiple contextual features assume that all contextual features are either conditionally independent given the sense of the word (Is], [14]) or fully independent ([10], [16]). In earlier work, we describe a method for identifying an appropriate model for use in disambiguating a word given a set of contextual features. We chose a particular set of contextual features and, using this method, identified a model incorporating these features for use in disambiguating the noun interest. These features, which are assigned automatically, are of three types: morphological, collocation-specific, and class-based, with part-of-speech (POS) categories serving as the word classes (see [3] for how the features were chosen). The results of using the model to disambiguate the noun interest were encouraging. We suspect that the model provides a description of the distribution of sense tags and contextual features that is applicable to a wide range of content words. This paper provides suggestive evidence supporting this, by testing its applicability to the disambiguation of several words. Specifically, for each word to be disambiguated, we created a model according to a schema, where that schema is a generalization of the model created for interest.
We evaluate the performance of probabilistic word-sense classifiers that utilize maximum likelihood estimates for the parameters of models created for the following lexical items: the noun senses of bill and concern, the verb senses of close and help, and the adjective senses of common. We also identify upper and lower bounds for the performance of any probabilistic classifier utilizing the same set of contextual features, and compare, for each word, (1) the performance of a classifier using a model created according to the schema for that word with (2) the performance of a classifier that uses a model selected, per the procedure to be described in section 2, as the best model for that word given the same set of contextual features. Section 2 of this paper describes the method used for selecting the form of a probabilistic model given sense tags and a set of contextual features. In section 3, the model schema is presented and, in section 4, the experiments using models created according to the schema are described. Section 5 discusses the results of the experiments and section 6 discusses future work.
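As a point of reference for the general formulation above (assign the tag with the highest estimated probability given the contextual features), the sketch below implements the simplest such classifier, the conditional-independence model mentioned earlier, with maximum likelihood estimates plus a little smoothing for the toy data; the models evaluated in this paper instead encode selected interdependencies among features, so this is only an illustrative baseline.

```python
# Illustrative baseline only: a Naive-Bayes-style sense classifier with
# maximum likelihood estimates, i.e. the conditional-independence model
# mentioned above, not the interdependency-aware models studied in the paper.
from collections import Counter, defaultdict

def train(tagged_instances):
    """tagged_instances: list of (sense, {feature_name: value}) pairs."""
    sense_counts = Counter()
    feat_counts = defaultdict(Counter)
    for sense, feats in tagged_instances:
        sense_counts[sense] += 1
        for name, value in feats.items():
            feat_counts[(sense, name)][value] += 1
    return sense_counts, feat_counts

def classify(feats, sense_counts, feat_counts, alpha=0.1):
    total = sum(sense_counts.values())
    best_sense, best_score = None, float("-inf")
    for sense, count in sense_counts.items():
        score = count / total                       # P(sense)
        for name, value in feats.items():
            c = feat_counts[(sense, name)]
            # smoothed estimate of P(feature value | sense)
            score *= (c[value] + alpha) / (sum(c.values()) + alpha * (len(c) + 1))
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

data = [("financial", {"left": "bank"}), ("attention", {"left": "great"})]
sc, fc = train(data)
print(classify({"left": "bank"}, sc, fc))  # 'financial'
```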
0
The use of machine translation (MT) as part of the localization workflow has mushroomed in recent years, with post-edited MT becoming an increasingly cost-effective solution for specific domains and language pairs. DePalma and Hegde (2010) stated that 42% of language service providers (LSPs) surveyed said that they offered post-edited MT to customers. At present, post-editing tends to be carried out via tools built for editing human-generated translations, such as translation memory (TM) or Translation Environment Tools (TEnT). These environments are fairly well suited to the task for which they were intended. However, it is our opinion that integration with machine translation and support for the post-editing task are not necessarily well catered for in current translation editing interfaces. This lack of support may lead to cognitive friction during the post-editing task and to reluctance among translators to accept post-editing jobs. This paper describes the results of a survey of professional translators and post-editors, in which they chose features and functions that they would like to see in translation and post-editing user interfaces (UIs). The survey is intended as a first step towards creating specifications for UIs that better support the post-editing task. Our starting point is that translators do not require a separate editor for post-editing, but rather that features could be integrated into existing commercial tools in order to better support the task and, ultimately, integration with MT systems. Research on post-editing has tended to focus largely on rates of productivity. Recent papers have measured translation throughput, cognitive effort, or quality (as perceived when compared with human translation), or have attempted to estimate MT quality via comparison of performance with automatic evaluation metrics (AEMs) (e.g. de Almeida and O'Brien, 2010; Farzindar, 2010; Koponen et al., 2012). This research has involved the use of commercial TM tools such as SDL Trados, proprietary tools such as Crosslang, or purpose-built research tools with simple UIs such as Caitra (Koehn, 2009) or PET (Aziz et al., 2012). There has, however, been little focus on the UI itself, or on the functionality required for the job of post-editing. Vieira and Specia (2011) rated several text-editing tools used for post-editing, using various criteria, including one of "interface intuitiveness". They acknowledge that this criterion was "highly subjective" as "its judgment was based solely on the experience of a single translator attempting to use the toolkits for the first time" (ibid.). All of the commercial TM tools rated "put some effort into assigning intuitive meaning to the interface of the system" by utilizing color codes (ibid.), providing the source and target segments, including concordance search, and including dictionary and other display functions. The tools that they rated highest, however, "show clear evidence of collecting feedback from translators" (ibid.). Their wish list for a post-editing interface includes more sophisticated alignment, accurate confidence scores for MT proposals, and change tracking (included in subsequent versions of SDL Trados Studio), and they conclude that "a number of features deemed desirable for the work of a translator were not satisfactorily found in any of the tools analyzed" (ibid.). Lagoudaki investigated text-editing UIs as part of her TM survey in 2006.
She found that, during development, TM users were usually "invited to provide feedback on an almost finished product with limited possibilities for changes" (2006). One translator in Moorkens (2012) said that developers had not understood her feedback as they had not worked as translators and "they don't know the problems you encounter or the things you would like to see". Lagoudaki also held the opinion that industry research is mostly motivated by "technical improvement of the TM system and not how the TM system can best meet the needs of its users" (2008). This runs counter to user-centered design recommendations, whereby a designer defines user profiles, usability requirements, and models before designing the UI (Redmond-Pyle and Moore, 1995). Lagoudaki also wrote that "systems usability and end-users' demands seem to have been of only subordinate interest" in TM system development (2008). Based on her research, the message from the users of TMs is occasionally conflicting. However, she concludes that one overall message is clear: TM users want simplicity. This does not necessarily mean fewer features; rather, they want a streamlined process with compatibility between languages and scripts. They want ease of access, meaning "affordability of the system, not only in terms of purchase cost, but also in terms of upgrade, support and training costs" (2008). To better understand what features post-editors might require, we designed a survey in which the questions focused on five areas in particular. (1) Participants were asked for some biographical details, such as years of professional experience, and about their attitude to technology. (2) They were asked about their current working methods, (3) what they would like to see in their ideal UI, (4) how they would like to see TM matches and MT output presented, and (5) about intelligent functionality that might help combine TM and MT matches. This survey is the first stage in a study that will be followed by interviews and observation, with the aim of creating specifications for a UI dedicated to the task of post-editing. Some interim results from the survey are contained in the following sections.
0
The Mycenaean script constitutes one of the writing systems used in the Aegean in the 2nd millennium B.C. Mycenaean Linear B was a syllabic script deciphered on 1-6-1952 by the English architect Michael Ventris, who proved that it expresses an archaic form of the Greek language. The Mycenaean Linear B texts, though they are brief and terse, revealing little about structure, provide valuable information about unknown aspects of Mycenaean life. Despite the huge importance of the archaeological data, it is the written sources that highlight aspects of the political and social organization of the cultures under study, supporting the archaeological evidence with "historically" substantiated elements. For example, the decipherment of the Mycenaean Linear B script added seven centuries to the history of the Hellenic language: until then, the oldest texts for the history of the Greek language were considered to be the Homeric epics. The Greek language is among the oldest languages and therefore a valuable tool for linguistic observation (Ruiperez & Melena, 1996). However, the importance of the Mycenaean documents is remarkable not only for linguistics and philology, but also for other sciences such as religious studies, ethnology, history and law, since the Mycenaean texts provide information on the political and administrative organization, the social structure, the economic activity, the religion and the military aspects of the Mycenaean civilization. Computational methods are expected to contribute substantially to decipherment tasks. Computational techniques (such as smoothed n-grams, Hidden Markov Models, Bayesian classifiers, Conditional Random Fields, etc.) might be applied to the problem of deciphering ancient scripts (Knight & Yamada, 1999; Knight et al., 2006; Ravi & Knight, 2008; Ravi & Knight, 2011b; Snyder, Barzilay, & Knight, 2010; Corlett & Penn, 2010; Nuhn, Mauser, & Ney, 2012; Nuhn & Ney, 2013) and also to the in-depth linguistic analysis (e.g., sentence detection, tokenization, lemmatization, part-of-speech tagging, etc.) of the already deciphered ones. The goal is not to replace the experts and their insight, but to contribute to their efforts by offering computational intelligence perspectives. By presenting a dataset of Mycenaean sequences, we aim to contribute to the restoration of the words of damaged Mycenaean inscriptions. In the future we aspire to complement the decipherment efforts for the Minoan script. To this end we employ probabilistic methods for structure prediction in sequences, such as Conditional Random Fields (CRF), which are applicable in many areas including natural language processing, computer vision and bioinformatics (Sutton & McCallum, 2012). In the next section we present the Mycenaean Linear B datasets used so far and the importance of our contribution. In section 3 we analyse the methods used to construct the dataset. In section 4 we present the resulting dataset. Section 5 describes an initial experiment on predicting missing symbols and discusses the results. Finally, section 6 concludes this work.
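To show what a CRF-based experiment on predicting missing signs might look like in practice, here is a small sketch using the sklearn-crfsuite library: each position in a transliterated sign sequence is labeled with the sign itself, using its surviving neighbours as features, so the trained model can propose candidates for a damaged position. The toy sequences, the feature template and the library choice are assumptions made for illustration, not the dataset or setup of this paper.

```python
# Hedged sketch: a linear-chain CRF that labels each position in a
# transliterated Linear B sign sequence, usable to propose candidates for
# damaged positions. Data and features below are placeholders.
import sklearn_crfsuite

def sign_features(seq, i):
    return {"bias": 1.0,
            "prev": seq[i - 1] if i > 0 else "<s>",
            "next": seq[i + 1] if i < len(seq) - 1 else "</s>"}

def to_features(sequences):
    # The observation at each position is its context; the label is the sign itself.
    X = [[sign_features(s, i) for i in range(len(s))] for s in sequences]
    y = [list(s) for s in sequences]
    return X, y

train_seqs = [["ko", "no", "so"], ["a", "mi", "ni", "so"], ["ko", "no", "so"]]
X, y = to_features(train_seqs)
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)

# Restore a damaged middle sign in "ko-?-so" from its surviving neighbours.
damaged = ["ko", "?", "so"]
X_test = [[sign_features(damaged, i) for i in range(len(damaged))]]
print(crf.predict(X_test)[0])  # the middle prediction is expected to be 'no'
```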
0
Word Segmentation (WS) is an essential process for several Natural Language Processing (NLP) tasks such as Part-of-Speech (PoS) tagging and Machine Translation (MT). The accuracy of WS significantly affects the accuracy of these NLP tasks, as shown in experimental results from Nguyen et al. and Chang et al. While WS is considered relatively simple in English, it is still an open problem in languages without explicitly defined word delimiters, such as Thai, Chinese, and Japanese. However, unlike Chinese and Japanese, Thai WS has not received much research attention. There are only six notable publications (Chormai et al., 2019; Nararatwong et al., 2018; Noyunsan et al.; Thanadechteemapat and Fung; Tongtep and Theeramunkong) on Thai WS in the past ten years. On the other hand, there are at least eight papers from well-established conferences on Chinese and Japanese WS (Li et al., 2019; Aguirre and Aguiar, 2019; Ma et al., 2018; Gong et al., 2017; Chen et al., 2017; Zhou et al., 2017; Cai et al., 2017) within only the last two years. This investigation focuses on the segmentation of Thai words since it is a challenging problem with considerable room for improvement, especially in the area of domain adaptation. Like many NLP tasks, Thai WS is domain-dependent. For instance, Chormai et al. (2019) recorded an accuracy drop from 91% to 81% when their model trained on a generic-domain corpus (Kosawat et al., 2009) was tested on a social media one (bact' et al., 2019). Results from our analysis (Section 3) also conform to these findings. One way to solve the domain dependency problem is through Transfer Learning (TL), which is a common technique in domain adaptation (Schuster et al.; Chang et al.). However, TL may not be applicable when working with a commercial API or a model that does not support weight adjustments (Chormai et al., 2019; Chuang, 2019; Ikeda, 2018). We call this type of model a black box. In this paper, we propose a stacked-ensemble learning solution to overcome the black-box limitation. Instead of making changes to the existing model directly, we build a separate model to improve the accuracy of the predictions made by the black box. Our solution comprises two parts: Domain-Generic (DG) and Domain-Specific (DS). The pretrained black box handles the Domain-Generic part, and a new model is constructed to handle the Domain-Specific part. All samples go through Domain-Generic, which makes initial predictions. We rank all predictions according to uncertainty and send the top-k uncertain predictions to Domain-Specific for further consideration. We combine the predictions from Domain-Specific with the remaining ones from Domain-Generic to form the final predictive results. We conducted extensive experimental studies to assess our solution's performance against a baseline model and transfer learning solutions. We also applied our Stacked-Ensemble Filter-and-Refine (SEFR) technique to Chinese and Japanese. Experimental results showed that our proposed solution achieved an accuracy level comparable to that of transfer learning solutions in Thai. For Chinese and Japanese, we showed that model adaptation using the SEFR technique could improve the performance of black-box models when used in a cross-domain setting. Our contributions are as follows. First, we propose a novel solution for adapting a black-box model to a new domain by formulating the problem as an ensemble learning one.
Second, we derive a filter-and-refine method to speed up the inference process, in some cases without sacrificing accuracy. Third, we conducted extensive experimental studies whose results validate the effectiveness of our solution. Fourth, we make our code available at github.com/mrpeerat/SEFR_CUT.
2 Stacked-Ensemble Method
2.1 Pipeline Structure
Figure 1 displays the pipeline structure of the proposed SEFR method, which consists of a Domain-Generic (DG) black box, uncertainty filtering, and a Domain-Specific (DS) model. Each character enters the pipeline through the Domain-Generic black box, which outputs a softmax or logistic score from the Domain-Generic model. We then use this output to calculate an uncertainty score. Uncertainty values are used to rank and filter samples that need re-examination by the Domain-Specific model. We then merge the results from Domain-Specific with the direct answers from Domain-Generic to form the final answers.
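The following Python snippet is a rough sketch of this filter-and-refine pipeline, not the released SEFR-CUT implementation; dg_model and ds_model stand in for a hypothetical domain-generic black box that returns per-character boundary probabilities and a hypothetical domain-specific model that re-labels selected positions.

```python
# Sketch: filter-and-refine over a black-box segmenter's per-character predictions.
import numpy as np

def entropy(probs):
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

def sefr_predict(chars, dg_model, ds_model, k):
    """chars: list of characters; dg_model(chars) -> (N, 2) boundary probabilities;
    ds_model(chars, indices) -> refined labels for the selected positions."""
    dg_probs = dg_model(chars)                 # domain-generic soft predictions
    labels = dg_probs.argmax(axis=-1)          # initial boundary decisions
    uncertainty = entropy(dg_probs)            # per-character uncertainty score
    refine_idx = np.argsort(-uncertainty)[:k]  # top-k most uncertain positions
    refined = ds_model(chars, refine_idx)      # domain-specific re-examination
    labels[refine_idx] = refined               # merge refined with remaining DG answers
    return labels
```

Entropy is used here as the uncertainty measure purely for illustration; any score derived from the black box's softmax or logistic outputs could be ranked in the same way.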
0
Mental health care has been of great importance as the ongoing COVID-19 pandemic has had a serious negative impact on people's mental wellbeing (Paredes et al., 2021). Not only is there a larger unmet need for counseling services, but health care workers are also under tremendous physical and mental strain (Huffman et al., 2021). With this in mind, it is natural to consider how advances in natural language processing can be leveraged to help counseling. Across different counseling styles, reflective listening has always been a fundamental procedure underlying effective counseling practices (Katz and McNulty, 1994). Reflective listening asks the counselor not only to listen to the client carefully, but also to actively make a guess at what the client means. If carried out the right way, it gives the client a sense of being understood and facilitates further self-exploration. However, people do not always say what they mean, which is especially the case for patients seeking mental support. A reflection, the response made on the basis of reflective listening, sometimes needs to decode meaning the client has not explicitly expressed in words. On the other hand, pressing the client to clarify the missing part may hinder them from expressing their own experience (Miller and Rollnick, 2012). Thus, counseling frequently calls for counselors to make inferences based on their prior knowledge. For example, when the client says I had a really hard time sticking to my diet this week, a plausible reflection may be You're wondering whether you'll be able to lose weight this way, which relates diet to losing weight through an inference based on commonsense knowledge. Moreover, making a good reflection may sometimes require domain knowledge. For example, to understand the client in Figure 1, the counselor needs to know that smoking can be a possible cause of emphysema, and that Chantix is a medication for smoking cessation. All these cases pose challenges to state-of-the-art language models. In this paper, we propose the task of knowledge-enhanced counseling reflection generation, which utilizes the dialogue context as well as commonsense and domain knowledge. This extra knowledge is needed since existing pre-trained language models struggle to produce coherent and informative responses that capture relevant knowledge, even if they have acquired some knowledge during the pre-training phase (Petroni et al., 2019a). A system that generates accurate counseling reflections can serve as a tool to aid counseling training or assist counselors during a session by providing alternative reflections in response to the client's statements. We experiment with two main strategies to incorporate knowledge. The first is retrieval, which acquires sentences containing relevant knowledge based on the vector representations of sentences from the dialogue and assertions in the knowledge base, using a BERT-based model (Reimers and Gurevych, 2019a). The second strategy is generative, where we first extract key phrases from the dialogue and query a COMET model for plausible knowledge triplets with a predefined set of relations (Bosselut et al., 2019). We propose a knowledge-grounded BART (Lewis et al., 2020) model using soft positional encoding and masked self-attention representations to indicate the knowledge position and make the introduced knowledge visible only to the key phrase it relates to. In addition, we explore the effect of different knowledge sources on the counseling response generation task.
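As an illustration of the retrieval strategy, the sketch below uses the sentence-transformers library to embed a client utterance and a toy knowledge base and return the most similar assertions; the model name and the knowledge sentences are assumptions made for the example, not the resources used in the paper.

```python
# Sketch: embedding-based retrieval of knowledge sentences for a client utterance.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

knowledge_base = [
    "Chantix is a prescription medication used to help people stop smoking.",
    "Smoking is a possible cause of emphysema.",
    "Dieting is often motivated by a desire to lose weight.",
]
kb_embeddings = model.encode(knowledge_base, convert_to_tensor=True)

def retrieve_knowledge(utterance, top_k=2):
    """Return the knowledge sentences most similar to the client's utterance."""
    query = model.encode(utterance, convert_to_tensor=True)
    scores = util.cos_sim(query, kb_embeddings)[0]
    best = scores.topk(min(top_k, len(knowledge_base)))
    return [(knowledge_base[int(i)], float(s)) for s, i in zip(best.values, best.indices)]

print(retrieve_knowledge("I had a really hard time sticking to my diet this week."))
```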
Although commonsense knowledge bases usually have high coverage of general-domain concepts, they contain a limited amount of domain-specific knowledge. This applies particularly to medical terminology. For instance, when querying ConceptNet (Speer et al., 2017), a well-known knowledge base, for the word Chantix (a prescription smoking cessation aid), we are only able to retrieve three relationships, including synonyms, related terms, and type-of, whereas for the common word daughter ConceptNet provides a total of eleven relationships. For the Chantix example in Figure 1, ConceptNet is also missing important causal relationships regarding side effects or suggested usage, which are especially relevant during a counseling conversation about smoking cessation. To address this challenge, we collect a dataset of counseling domain knowledge using web mining, with queries constructed from the medical concepts extracted from the dialogue and manually defined templates. We compare this web-collected data with a public commonsense knowledge base, and show that this data, collected with no human annotation, can serve as a complementary knowledge resource. We also conduct an ablation study on different categories of commonsense knowledge, and show that intentional or causal relationships are more useful for counseling response generation, a finding consistent with related medical literature (Miller and Rollnick, 2012). Contributions. The main contributions of this work are as follows: 1) We collect a counseling knowledge base and use it along with commonsense knowledge bases for the task of reflection generation using different retrieval-based methods. 2) We adopt the encoding scheme from K-BERT on BART to incorporate knowledge generated from COMET. 3) We analyze different types of commonsense and domain knowledge, and their effect on the generation task.
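The query construction step can be illustrated with a small sketch; the templates and extracted concepts below are hypothetical and only show the general pattern of combining extracted medical concepts with manually defined templates.

```python
# Sketch: expanding extracted medical concepts into web-mining queries via templates.
QUERY_TEMPLATES = [
    "what is {concept}",
    "what are the side effects of {concept}",
    "what causes {concept}",
]

def build_queries(concepts):
    """Expand every extracted concept into a set of search-engine queries."""
    return [t.format(concept=c) for c in concepts for t in QUERY_TEMPLATES]

print(build_queries(["Chantix", "emphysema"]))
```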
0
The editing of semi-automatic translations has been a common practice among users of Translation Memory (TM). TM tools such as SDL Trados and Wordfast provide user-friendly environments to aid human translators. The post-editing (PE) of Machine Translation (MT) output has only recently started to be more widely adopted as a way of incorporating MT into human translation workflows. Although a number of issues are yet to be addressed, such as an adequate pricing model for PE, this practice has been shown to minimise time and costs. A consequence of the widespread use of MT is the need for PE tools. Modern TM tools incorporate MT systems with a common PE interface for both MT and TM, e.g., SDL Trados. Some MT systems also incorporate PE facilities, such as Google Translate and Systran. However, these and other existing PE tools suffer from one or more of the following limitations, which we aim to address in this work:
• Restricted availability: most of them are proprietary tools only available as part of a major (more expensive) product. These tools generally do not allow the PE of a heterogeneous selection of translations from multiple MT systems.
• Lack of flexibility: these tools do not allow incorporating system- or task-specific functionalities, such as limiting the length of a post-edited segment.
• Limited logging: most tools collect neither explicit assessments nor detailed information about the post-editing process that could be used for measuring translation quality and diagnosing MT systems.
These limitations mostly constrain developers of translation technologies and researchers in machine (or computer-aided) translation. For a detailed study of translation tools that allow post-editing (e.g. Caitra, Lingotek, Déjà Vu X2, and OmegaT) and of requirements from the human translator's perspective, we refer the reader to Vieira and Specia (2011). We present PET (Post-Editing Tool) (Aziz et al., 2012a), a simple, freely available, open-source standalone tool that allows the PE of output from any MT system and records various types of segment-level information. While PET is not yet a full-fledged post-editing tool, offering only limited built-in functionalities (dictionaries, etc.), it provides the flexibility that other tools lack: (i) it enables the easy design of post-editing tasks with specific requirements (such as constraints on the revisions produced in terms of length, word use, etc.), and (ii) it collects a number of (customisable) effort indicators and statistics on post-editing tasks.
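To give a sense of what segment-level logging of this kind might capture, here is a hypothetical sketch of a per-segment record; the field names are illustrative assumptions and do not reflect PET's actual log format.

```python
# Sketch: the kind of per-segment effort indicators a post-editing tool might record.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SegmentLog:
    segment_id: int
    source: str
    mt_output: str
    post_edited: str
    editing_time_s: float                                   # time spent on the segment
    keystrokes: int                                         # number of key presses
    assessments: List[str] = field(default_factory=list)    # explicit quality judgements

log = SegmentLog(
    segment_id=1,
    source="Die Katze sitzt auf der Matte.",
    mt_output="The cat sit on the mat.",
    post_edited="The cat sits on the mat.",
    editing_time_s=8.7,
    keystrokes=4,
    assessments=["minor grammar fix"],
)
print(log)
```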
0
Modern weather reports present weather prediction information using tables, graphs, maps, icons and text. Among these different modalities, only text is currently produced manually, consuming significant human resources. Therefore, releasing meteorologists' time to add value elsewhere in the production chain, without sacrificing quality and consistency in weather reports, is an important industry goal. In addition, in order to remain competitive, modern weather services need to provide weather reports for any geolocation the end-user demands. As the quantity of required texts increases, manual production becomes infeasible. In this paper we describe a case study where data-to-text NLG techniques have been applied to a real-world use case involving the UK national weather service, the Met Office. In the UK, the Met Office provides daily weather reports for nearly 5000 locations, which are available through its public website. These reports contain a textual component that is not focused on the geolocation selected by the end-user, but instead describes the weather conditions over a broader geographic region. This is done partly because manually producing the thousands of texts required would take on the order of weeks rather than minutes. In this case study a data-to-text NLG system was built to demonstrate that the site-specific data could be enhanced with site-specific text for nearly 5000 locations. This system, running on a standard desktop, was tested and produced nearly 15000 texts (forecasts for 5000 locations, for 3 days into the future) in less than a minute. After internally assessing the quality of the machine-generated texts for nearly two years, the Met Office launched the system on its beta site (http://www.metoffice.gov.uk/public/weather/forecast-data2text/) in December 2013 for external assessment. A screenshot of the forecast for London Heathrow on 5th March 2014 is shown in Figure 1; in this figure, the machine-generated text is at the top of the table. Ongoing work has extended the processing capabilities of this system to handle double the number of locations and an additional two forecast days. Processing time has been found to scale linearly.
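A toy sketch of the data-to-text idea (in no way the Met Office system) can help illustrate how site-specific forecast data might be mapped to a short site-specific text; all field names and thresholds below are invented for the example.

```python
# Sketch: a tiny rule-based realiser that turns site-specific forecast data into text.
def describe_site_forecast(site, day):
    """day: dict with hypothetical fields temp_max_c, wind_mph and precip_prob."""
    parts = [f"At {site},"]
    if day["precip_prob"] >= 0.7:
        parts.append("rain is likely,")
    elif day["precip_prob"] >= 0.3:
        parts.append("there is a chance of showers,")
    else:
        parts.append("it will stay mostly dry,")
    parts.append(f"with highs of around {day['temp_max_c']} degrees")
    parts.append("and strong winds." if day["wind_mph"] >= 30 else "and light winds.")
    return " ".join(parts)

print(describe_site_forecast("Heathrow", {"temp_max_c": 11, "wind_mph": 14, "precip_prob": 0.2}))
```

A production system would of course need content selection, aggregation and far richer realisation rules; the point here is only the mapping from structured site data to a textual summary.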
0
Entity linking (EL) describes the task of disambiguating entity mentions in a text by linking them to a knowledge base (KB); e.g., the text span Earl of Orrery can be linked to the KB entry John Boyle, 5th Earl of Cork, thereby disambiguating it. EL is highly relevant in many fields such as digital humanities, classics, technical writing or the biomedical sciences, for applications like search (Meij et al., 2014), semantic enrichment (Schlögl and Lejtovicz, 2017) or information extraction (Nooralahzadeh and Øvrelid, 2018). In these scenarios, the first crucial step is typically to annotate data. Manual annotation is laborious and often prohibitively expensive. To improve annotation speed and quality, we have developed a novel Human-In-The-Loop (HITL) entity linking approach. It helps annotators find entity mentions in the text and link them to the correct knowledge base entries. The more mentions are linked over time, the better the annotation support becomes. We demonstrate the effectiveness of our approach with extensive simulations as well as a user study on different, challenging datasets. We have implemented our approach on top of the open-source annotation platform INCEpTION (Klie et al., 2018), and we publish all datasets and code.
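A minimal sketch of the human-in-the-loop idea is shown below: a candidate ranker whose suggestions improve as annotators confirm links. The scoring heuristic (token overlap plus a count of previously confirmed links) and the KB identifiers are illustrative only and are not the ranking model actually used in the approach.

```python
# Sketch: a candidate ranker that gets better as more mentions are confirmed.
from collections import Counter

class HitlCandidateRanker:
    def __init__(self, kb_entries):
        self.kb_entries = kb_entries   # e.g. {"Q1": "John Boyle, 5th Earl of Cork", ...}
        self.confirmed = Counter()     # how often each entry was previously chosen

    def rank(self, mention, top_k=5):
        def score(entry_id):
            label = self.kb_entries[entry_id].lower()
            overlap = len(set(mention.lower().split()) & set(label.split()))
            return overlap + self.confirmed[entry_id]
        return sorted(self.kb_entries, key=score, reverse=True)[:top_k]

    def confirm(self, mention, entry_id):
        """Called whenever an annotator accepts a suggestion; future rankings improve."""
        self.confirmed[entry_id] += 1

ranker = HitlCandidateRanker({
    "Q1": "John Boyle, 5th Earl of Cork",
    "Q2": "Orrery (planetarium model)",
})
print(ranker.rank("Earl of Orrery"))
ranker.confirm("Earl of Orrery", "Q1")  # the annotator accepts the first suggestion
```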
0
Past successes with conversational Intelligent Tutoring Systems (ITS) (Graesser et al., 2001) have helped to demonstrate the efficacy of computer-led tutorial dialogue. However, ITS will not reach their full potential until they can overcome current limitations in spoken dialogue technologies. Producing systems capable of leading open-ended, Socratic-style tutorials will likely require more sophisticated models to automate the analysis and generation of dialogue. A well-defined tutorial dialogue annotation scheme can serve as a stepping stone towards these goals. Such a scheme should account for differences in tutoring style and question scaffolding techniques and should capture the subtle distinctions between different question types. Doing this requires a representation that connects a turn's communicative and rhetorical functions to its underlying semantic content. While efforts such as DAMSL (Core and Allen, 1997) and DIT++ (Bunt, 2009) have helped to make dialogue act annotation more uniform and applicable to a wider audience, and while tutoring-specific initiatives (Tsovaltzi and Karagjosova, 2004; Buckley and Wolska, 2008) have helped to bring dialogue acts to tutorial dialogue, the granularity of moves in these schemes is too coarse to capture the differences in tutorial questioning styles exhibited in our corpus of Socratic-style tutorial dialogues. Conversely, question type categories (Graesser and Person, 1994; Nielsen et al., 2008) have been designed with education in mind, but they largely ignore how the student and tutor may work together to construct meaning. The DISCOUNT scheme's (Pilkington, 1999) combination of dialogue acts and rhetorical functions enabled it to better capture tutoring moves, but its omission of shallow semantics prevents it from capturing how content influences behavior. Our long-term goals of automatic dialogue characterization, tutorial move prediction and question generation led us to design our own dialogue representation called DISCUSS (Dialogue Scheme for Unifying Speech and Semantics). The design of this dialogue move taxonomy was based on preliminary observations from our corpus of tutorial dialogues and was influenced by the aforementioned research. We hope that undertaking this ambitious endeavor to capture not only a turn's pragmatic interpretation but also its rhetorical and semantic functions will enable us to better model the complexity of open-ended tutorial dialogue. The remainder of this paper is organized as follows. In the next section we describe our tutorial dialogue setting and our data. Section 3 discusses the organization of the DISCUSS annotation scheme. Section 4 briefly explains the current status of our annotation. Lastly, Section 5 outlines our future plans and conclusions.
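As a purely hypothetical illustration of what a representation connecting communicative, rhetorical and semantic functions might look like in code, the sketch below defines a turn-level annotation record; the dimension names and example labels are assumptions made for the example, not DISCUSS's actual tag set.

```python
# Sketch: a turn annotation that pairs a dialogue act with rhetorical and semantic labels.
from dataclasses import dataclass

@dataclass
class TutorialMove:
    speaker: str           # "tutor" or "student"
    dialogue_act: str      # communicative function, e.g. "ask"
    rhetorical_form: str   # e.g. "probing-question"
    predicate_type: str    # shallow semantics, e.g. "causal-relation"
    surface_text: str

move = TutorialMove(
    speaker="tutor",
    dialogue_act="ask",
    rhetorical_form="probing-question",
    predicate_type="causal-relation",
    surface_text="Why do you think the bulb lights up when the circuit is closed?",
)
print(move)
```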
0