### Introduction
Abusive online content, such as hate speech and harassment, has received substantial attention over the past few years for its malign social effects. Left unchallenged, abusive content risks harming those who are targeted, toxifying public discourse, exacerbating social tensions and excluding some groups from public spaces. As such, systems which can accurately detect and classify online abuse at scale, in real-time and without bias are of central interest to tech companies, policymakers and academics. Most detection systems rely on having the right training dataset, reflecting one of the most widely accepted mantras in computer science: Garbage In, Garbage Out. Put simply: to have systems which can detect and classify abusive online content effectively, one needs appropriate datasets with which to train them. However, creating training datasets is often a laborious and non-trivial task – and creating datasets which are non-biased, large and theoretically-informed is even more difficult (BIBREF0, p. 189). We address this issue by examining and reviewing publicly available datasets for abusive content detection, which we provide access to on a new dedicated website, hatespeechdata.com. In the first section, we examine previous reviews and present the four research aims which guide this paper. In the second section, we conduct a critical and in-depth analysis of the available datasets, discussing first what their aim is, how tasks have been described and what taxonomies have been constructed and then, second, what they contain and how they were annotated. In the third section, we discuss the challenges of open science in this research area and elaborate different ways of sharing training datasets, including the website hatespeechdata.com. In the final section, we draw on our findings to establish best practices when creating datasets for abusive content detection. ### Background
The volume of research examining the social and computational aspects of abusive content detection has expanded prodigiously in the past five years. This has been driven by growing awareness of the importance of the Internet more broadly BIBREF1, greater recognition of the harms caused by online abuse BIBREF2, and policy and regulatory developments, such as the EU's Code of Conduct on Hate, the UK Government's `Online Harms' white paper BIBREF3, Germany's NetzDG laws, the Public Pledge on Self-Discipline for the Chinese Internet Industry, and France's anti-hate regulation BIBREF2. In 2020 alone, three computer science venues will host workshops on online hate (TRAC and STOC at LREC, and WOAH at EMNLP), and a shared task at 2019's SemEval on online abuse detection reports that 800 teams downloaded the training data and 115 submitted detection systems BIBREF4. At the same time, social scientific interventions have also appeared, deepening our understanding of how online abuse spreads BIBREF5 and how its harmful impact can be mitigated and challenged BIBREF6. All analyses of online abuse ultimately rely on a way of measuring it, which increasingly means having a method which can handle the sheer volume of content produced, shared and engaged with online. Traditional qualitative methods cannot scale to handle the hundreds of millions of posts which appear on each major social media platform every day, and can also introduce inconsistencies and biases BIBREF7. Computational tools have emerged as the most promising way of classifying and detecting online abuse, drawing on work in machine learning, Natural Language Processing (NLP) and statistical modelling. Increasingly sophisticated architectures, features and processes have been used to detect and classify online abuse, leveraging technically sophisticated methods, such as contextual word embeddings, graph embeddings and dependency parsing. Despite their many differences BIBREF8, nearly all methods of online abuse detection rely on a training dataset, which is used to teach the system what is and is not abuse. However, there is a lacuna of research on this crucial aspect of the machine learning process. Indeed, although several general reviews of the field have been conducted, no previous research has reviewed training datasets for abusive content detection in sufficient breadth or depth. This is surprising given (i) their fundamental importance in the detection of online abuse and (ii) growing awareness that several existing datasets suffer from many flaws BIBREF9, BIBREF10. Closely related work includes: Schmidt and Wiegand conduct a comprehensive review of research into the detection and classification of abusive online content. They discuss training datasets, stating that `to perform experiments on hate speech detection, access to labelled corpora is essential' (BIBREF8, p. 7), and briefly discuss the sources and size of the most prominent existing training datasets, as well as how datasets are sampled and annotated. Schmidt and Wiegand identify two key challenges with existing datasets. First, `data sparsity': many training datasets are small and lack linguistic variety. Second, metadata (such as how data was sampled) is crucial as it lets future researchers understand unintended biases, but is often not adequately reported (BIBREF8, p. 6). Waseem et al. BIBREF11 outline a typology of detection tasks, based on a two-by-two matrix of (i) identity- versus person-directed abuse and (ii) explicit versus implicit abuse.
They emphasise the importance of high-quality datasets, particularly for more nuanced expressions of abuse: `Without high quality labelled data to learn these representations, it may be difficult for researchers to come up with models of syntactic structure that can help to identify implicit abuse.' (BIBREF11, p. 81) Jurgens et al. BIBREF12 also conduct a critical review of hate speech detection, and note that `labelled ground truth data for building and evaluating classifiers is hard to obtain because platforms typically do not share moderated content due to privacy, ethical and public relations concerns.' (BIBREF12, p. 3661) They argue that the field needs to `address the data scarcity faced by abuse detection research' in order to better address more complex research issues and pressing social challenges, such as `develop[ing] proactive technologies that counter or inhibit abuse before it harms' (BIBREF12, pp. 3658, 3661). Vidgen et al. describe several limitations with existing training datasets for abusive content, most notably how `they contain systematic biases towards certain types and targets of abuse.' (BIBREF13, p. 2). They describe three issues in the quality of datasets: degradation (whereby datasets decline in quality over time), annotation (whereby annotators often have low agreement, indicating considerable uncertainty in class assignments) and variety (whereby `The quality, size and class balance of datasets varies considerably.' [p. 6]). Chetty and Alathur BIBREF14 review the use of Internet-based technologies and online social networks to study the spread of hateful, offensive and extremist content. Their review covers both computational and legal/social scientific aspects of hate speech detection, and outlines the importance of distinguishing between different types of group-directed prejudice. However, they do not consider training datasets in any depth. Fortuna and Nunes BIBREF15 provide an end-to-end review of hate speech research, including the motivations for studying online hate, definitional challenges, dataset creation/sharing, and technical advances, both in terms of feature selection and algorithmic architecture (BIBREF15, 2018). They delineate between different types of online abuse, including hate, cyberbullying, discrimination and flaming, and add much needed clarity to the field. They show that (1) dataset size varies considerably but they are generally small (mostly containing fewer than 10,000 entries), (2) Twitter is the most widely-studied platform, and (3) most papers research hate speech per se (i.e. without specifying a target). Of those which do specify a target, racism and sexism are the most researched. However, their review focuses on publications rather than datasets: the same dataset might be used in multiple studies, limiting the relevance of their review for understanding the intrinsic role of training datasets. They also only engage with datasets fairly briefly, as part of a much broader review. Several classification papers also discuss the most widely used datasets, including Davidson et al. BIBREF16 who describe five datasets, and Salminen et al. who review 17 datasets and describe four in detail BIBREF17. This paper addresses this lacuna in existing research, providing a systematic review of available training datasets for online abuse.
To provide structure to this review, we adopt the `data statements' framework put forward by Bender and Friedman BIBREF18, as well as other work providing frameworks, schema and processes for analysing NLP artefacts BIBREF19, BIBREF20, BIBREF21. Data statements are a way of documenting the decisions which underpin the creation of datasets used for Natural Language Processing (NLP). They formalise how decisions should be documented, not only ensuring scientific integrity but also addressing `the open and urgent question of how we integrate ethical considerations in the everyday practice of our field' (BIBREF18, p. 587). In many cases, we find that it is not possible to fully recreate the level of detail recorded in an original data statement from how datasets are described in publications. This reinforces the importance of proper documentation at the point of dataset creation. As the field of online abusive content detection matures, it has started to tackle more complex research challenges, such as multi-platform, multi-lingual and multi-target abuse detection, and systems are increasingly being deployed in `the wild' for social scientific analyses and for content moderation BIBREF5. Such research heightens the focus on training datasets as exactly what is being detected comes under greater scrutiny. To enhance our understanding of this domain, our review paper has four research aims. Research Aim One: to provide an in-depth and critical analysis of the available training datasets for abusive online content detection. Research Aim Two: to map and discuss ways of addressing the lack of dataset sharing, and as such the lack of `open science', in the field of online abuse research. Research Aim Three: to introduce the website hatespeechdata.com, as a way of enabling more dataset sharing. Research Aim Four: to identify best practices for creating an abusive content training dataset.
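To illustrate the kind of documentation that a data statement captures, a minimal sketch is given below. The field names loosely follow the headings proposed by Bender and Friedman but are our own illustrative choices rather than a prescribed schema, and all example values are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataStatement:
    """Minimal record of dataset-creation decisions (illustrative only)."""
    curation_rationale: str        # why and how texts were selected
    language_variety: str          # e.g. "English (en), informal Twitter register"
    speaker_demographics: str      # what is known about the content creators
    annotator_demographics: str    # who annotated the data, and their expertise
    speech_situation: str          # platform, time period, modality
    text_characteristics: str      # genre, topics, class distribution
    known_limitations: List[str] = field(default_factory=list)

# Invented example for a hypothetical Twitter abuse dataset.
statement = DataStatement(
    curation_rationale="Keyword-sampled tweets to study group-directed abuse",
    language_variety="English (en), informal Twitter register",
    speaker_demographics="Unknown; public accounts only",
    annotator_demographics="Three graduate students, all native speakers",
    speech_situation="Public tweets collected January to March 2019",
    text_characteristics="Short posts, roughly 20% labelled abusive",
    known_limitations=["purposive keyword sampling", "no conversational context"],
)
```

Recording this information at the point of dataset creation, rather than attempting to reconstruct it afterwards from publications, is precisely what the data statements framework is designed to support. ### Analysis of training datasets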
To identify training datasets for abusive content detection, relevant publications have been identified from four sources: The Scopus database of academic publications, identified using keyword searches. The ACL Anthology database of NLP research papers, identified using keyword searches. The ArXiv database of preprints, identified using keyword searches. Proceedings of the 1st, 2nd and 3rd workshops on abusive language online (ACL). Most publications report on the creation of one abusive content training dataset. However, some describe several new datasets simultaneously or provide one dataset with several distinct subsets of data BIBREF22, BIBREF23, BIBREF24, BIBREF25. For consistency, we separate out each subset of data where they are in different languages or the data is collected from different platforms. As such, the number of datasets is greater than the number of publications. All of the datasets were released between 2016 and 2019, as shown in Figure FIGREF17. ### Analysis of training datasets ::: The purpose of training datasets ::: Problems addressed by datasets
Creating a training dataset for online abuse detection is typically motivated by the desire to address a particular social problem. These motivations can inform how a taxonomy of abusive language is designed, how data is collected and what instructions are given to annotators. We identify the following motivating reasons, which were explicitly referenced by dataset creators. Reducing harm: Aggressive, derogatory and demeaning online interactions can inflict harm on individuals who are targeted by such content and those who are not targeted but still observe it. This has been shown to have profound long-term consequences on individuals' well-being, with some vulnerable individuals expressing concerns about leaving their homes following experiences of abuse BIBREF26. Accordingly, many dataset creators state that aggressive language and online harassment are social problems which they want to help address. Removing illegal content: Many countries legislate against certain forms of speech, e.g. direct threats of violence. For instance, the EU's Code of Conduct requires that all content that is flagged for being illegal online hate speech is reviewed within 24 hours, and removed if necessary BIBREF27. Many large social media platforms and tech companies adhere to this code of conduct (including Facebook, Google and Twitter) and, as of September 2019, 89% of such content is reviewed within 24 hours BIBREF28. However, we note that in most cases the abuse that is marked up in training datasets falls short of the requirements of illegal online hate – indeed, as most datasets are taken from public API access points, the data has usually already been moderated by the platforms and most illegal content removed. Improving health of online conversations: The health of online communities can be severely affected by abusive language. It can fracture communities, exacerbate tensions and even repel users. This is not only bad for the community and for civic discourse in general, but it also negatively impacts engagement and thus the revenue of the host platforms. Therefore, there is a growing impetus to improve user experience and ensure online dialogues are healthy, inclusive and respectful where possible. There is ample scope for improvement: a study showed that 82% of personal attacks on Wikipedia against other editors are not addressed BIBREF29. Taking steps to improve the health of exchanges in online communities will also benefit commercial and voluntary content moderators. They are routinely exposed to such content, often with insufficient safeguards, and sometimes display symptoms similar to those of PTSD BIBREF30. Automatic tools could help to lessen this exposure, reducing the burden on moderators. ### Analysis of training datasets ::: The purpose of training datasets ::: Uses of datasets: How detection tasks are defined
Myriad tasks have been addressed in the field of abusive online content detection, reflecting the different disciplines, motivations and assumptions behind research. This has led to considerable variation in what is actually detected under the rubric of `abusive content', and establishing a degree of order over the diverse categorisations and subcategorisations is both difficult and somewhat arbitrary. Key dimensions which dataset creators have used to categorise detection tasks include who/what is targeted (e.g. groups vs. individuals), the strength of content (e.g. covert vs. overt), the nature of the abuse (e.g. benevolent vs. hostile sexism BIBREF31), how the abuse manifests (e.g. threats vs. derogatory statements), the tone (e.g. aggressive vs. non-aggressive), the specific target (e.g. ethnic minorities vs. women), and the subjective perception of the reader (e.g. disrespectful vs. respectful). Other important dimensions include the theme used to express abuse (e.g. Islamophobia which relies on tropes about terrorism vs. tropes about sexism) and the use of particular linguistic devices, such as appeals to authority, sincerity and irony. All of these dimensions can be combined in different ways, producing a large number of intersecting tasks. Consistency in how tasks are described will not necessarily ensure that datasets can be used interchangeably. From the description of a task, an annotation framework must be developed which converts the conceptualisation of abuse into a set of standards. This formalised representation of the `abuse' inevitably involves shortcuts, imperfect rules and simplifications. If annotation frameworks are developed and applied differently, then even datasets aimed at the same task can still vary considerably. Nonetheless, how detection tasks for online abuse are described is crucial for how the datasets – and in turn the systems trained on them – can subsequently be used. For example, a dataset annotated for hate speech can be used to examine bigoted biases, but the reverse is not true. How datasets are framed also impacts whether, and how, datasets can be combined to form large `mega-datasets' – a potentially promising avenue for overcoming data sparsity BIBREF17. In the remainder of this section, we provide a framework for splitting out detection tasks along the two most salient dimensions: (1) the nature of abuse and (2) the granularity of the taxonomy. ### Analysis of training datasets ::: The purpose of training datasets ::: Uses of datasets: How detection tasks are defined ::: Detection tasks: the nature of abuse
This refers to what is targeted/attacked by the content and, subsequently, how the taxonomy has been designed/framed by the dataset creators. The most well-established taxonomic distinction in this regard is the difference between (i) the detection of interpersonal abuse, and (ii) the detection of group-directed abuse BIBREF11. Other authors have sought to deductively theorise additional categories, such as `concept-directed' abuse, although these have not been widely adopted BIBREF13. Through an inductive investigation of existing training datasets, we extend this binary distinction to four primary categories of abuse which have been studied in previous work, as well as a fifth `Mixed' category. Person-directed abuse. Content which directs negativity against individuals, typically through aggression, insults, intimidation, hostility and trolling, amongst other tactics. Most research falls under the auspices of `cyber bullying', `harassment' and `trolling' BIBREF23, BIBREF32, BIBREF33. One major dataset of English Wikipedia editor comments BIBREF29 focuses on the `personal attack' element of harassment, drawing on prior investigations that mapped out harassment in that community. Another widely used dataset focuses on trolls' intent to intimidate, distinguishing between direct harassment and other behaviours BIBREF34. An important consideration in studies of person-directed abuse is (a) interpersonal relations, such as whether individuals engage in patterns of abuse or one-off acts and whether they are known to each other in the `real' world (both of which are a key concern in studies of cyberbullying) and (b) standpoint, such as whether individuals directly engage in abuse themselves or encourage others to do so. For example, the theoretically sophisticated synthetic dataset provided by BIBREF33 identifies not only harassment but also encouragement to harassment. BIBREF22 mark up posts from computer game forums (World of Warcraft and League of Legends) for cyberbullying and annotate these as $\langle$offender, victim, message$\rangle$ tuples. Group-directed abuse. Content which directs negativity against a social identity, which is defined in relation to a particular attribute (e.g. ethnic, racial, religious groups) BIBREF35. Such abuse is often directed against marginalised or under-represented groups in society. Group-directed abuse is typically described as `hate speech' and includes use of dehumanising language, making derogatory, demonising or hostile statements, making threats, and inciting others to engage in violence, amongst other dangerous communications. Common examples of group-directed abuse include sexism, which is included in datasets provided by BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF33 and racism, which is directly targeted in BIBREF36, BIBREF40. In some cases, specific types of group-directed abuse are subsumed within a broader category of identity-directed abuse, as in BIBREF41, BIBREF42, BIBREF4. Determining the limits of any group-directed abuse category requires careful theoretical reflection, as with the decision to include ethnic, caste-based and certain religious prejudices under `racism'. There is no `right' answer to such questions as they engage with ontological concerns about identification and `being' and the politics of categorization. Flagged content. Content which is reported by community members or assessed by community and professional content moderators.
This covers a broad range of focuses as moderators may also remove spam, sexually inappropriate content and other undesirable contributions. In this regard, `flagged' content is akin to the concept of `trolling', which covers a wide range of behaviours, from jokes and playful interventions through to sinister personal attacks such as doxxing BIBREF43. Some forms of trolling can be measured with tools such as the Global Assessment of Internet Trolling (GAIT) BIBREF43. Incivility. Content which is considered to be incivil, rude, inappropriate, offensive or disrespectful BIBREF24, BIBREF25, BIBREF44. Such categories are usually defined with reference to the tone that the author adopts rather than the substantive content of what they express, which is the basis of person- and group-directed categories. Such content usually contains obscene, profane or otherwise `dirty' words. This can be easier to detect as closed-class lists are effective at identifying single objectionable words (e.g. BIBREF45). However, one concern with this type of research is that the presence of `dirty' words does not necessarily signal malicious intent or abuse; they may equally be used as intensifiers or colloquialisms BIBREF46. At the same time, detecting incivility can be more difficult as it requires annotators to infer the subjective intent of the speaker or to understand (or guess) the social norms of a setting and thus whether disrespect has been expressed BIBREF42. Content can be incivil without directing hate against a group or person, and can be inappropriate in one setting but not another: as such it tends to be more subjective and contextual than other types of abusive language. Mixed. Content which contains multiple types of abuse, usually a combination of the four categories discussed above. The intersecting nature of online language means that this is common but can also manifest in unexpected ways. For instance, female politicians may receive more interpersonal abuse than other politicians. This might not appear as misogyny because their identity as women is not referenced – but it might have motivated the abuse they were subjected to. Mixed forms of abuse require further research, and have thus far been most fully explored in the OLID dataset provided by BIBREF4, who explore several facets of abuse under one taxonomy. ### Analysis of training datasets ::: The purpose of training datasets ::: Uses of datasets: How detection tasks are defined ::: Detection tasks: Granularity of taxonomies
This refers to how much detail a taxonomy contains, reflected in the number of unique classes. The most important and widespread distinction is whether a binary class is used (e.g. Hate / Not) or a multi-level class, such as a tripartite split (typically, Overt, Covert and Non-abusive). In some cases, a large number of complex classes are created, such as by combining whether the abuse is targeted or not along with its theme and strength. In general, social scientific analyses encourage creating a detailed taxonomy with a large number of fine-grained categories. However, this is only useful for machine learning if there are enough data points in each category and if annotators are capable of consistently distinguishing between them. Complex annotation schemas may not result in better training datasets if they are not implemented in a robust way. As such, it is unsurprising that binary classification schemas are the most prevalent, even though they are arguably the least useful given the variety of ways in which abuse can be articulated. This can range from the explicit and overt (e.g. directing threats against a group) to more subtle behaviours, such as micro-aggressions and dismissing marginalised groups' experiences of prejudice. Subsuming both types of behaviour within one category not only risks making detection difficult (due to considerable in-class variation) but also leads to a detection system which cannot make important distinctions between qualitatively different types of content. This has severe implications for whether detection systems trained on such datasets can actually be used for downstream tasks, such as content moderation and social scientific analysis. Drawing together the nature and granularity of abuse, our analyses identify a hierarchy of taxonomic granularity from least to most granular: Binary classification of a single `meta' category, such as hate/not or abuse/not. This can lead to very general and vague research, which is difficult to apply in practice. Binary classification of a single type of abuse, such as person-directed or group-directed. This can be problematic given that abuse is nearly always directed against a group rather than `groups' per se. Binary classification of abuse against a single well-defined group, such as racism/not or Islamophobia/not, or interpersonal abuse against a well-defined cohort, such as MPs and young people. Multi-class (or multi-label) classification of different types of abuse, such as multiple targets (e.g. racist, sexist and non-hateful content), multiple strengths (e.g. none, implicit and explicit content), or multiple types (e.g. threats versus derogatory statements or benevolent versus hostile statements). Multi-class classification of different types of abuse which is integrated with other dimensions of abuse.
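To make these levels of granularity concrete, the sketch below shows how label schemas of increasing detail might be specified; all class names are illustrative examples rather than categories drawn from any particular dataset.

```python
# Illustrative label schemas at increasing levels of granularity.
# Class names are invented examples, not taken from any specific dataset.

binary_meta = ["abusive", "not_abusive"]                    # single `meta' category
binary_single_type = ["group_directed_abuse", "other"]      # one type of abuse
binary_single_group = ["islamophobic", "not_islamophobic"]  # one well-defined group

multi_class_targets = ["racist", "sexist", "not_hateful"]   # multiple targets
multi_class_strength = ["explicit", "implicit", "none"]     # multiple strengths

# Integrated schema: several dimensions of abuse annotated jointly per post.
integrated_schema = {
    "target": ["group_directed", "person_directed", "none"],
    "strength": ["explicit", "implicit", "not_applicable"],
}
```

A single post therefore carries very different amounts of information depending on the schema: under the binary `meta' schema it is simply `abusive', whereas under the integrated schema it might be recorded as explicit, group-directed abuse. ### Analysis of training datasets ::: The content of training datasets ::: The `Level' of content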
49 of the training datasets are annotated at the level of the post, one dataset is annotated at the level of the user BIBREF47, and none of them are annotated at the level of the comment thread. Only two publications indicate that the entire conversational thread was presented to annotators when marking up individual entries, meaning that in most cases this important contextual information is not used. 49 of the training datasets contain only text. This is a considerable limitation of existing research BIBREF13, especially given the multimodal nature of online communication and the increasing ubiquity of digital-specific image-based forms of communication such as Memes, Gifs, Filters and Snaps BIBREF48. Although some work has addressed the task of detecting hateful images BIBREF49, BIBREF50, this has led to the creation of a publicly available labelled training dataset in only one case BIBREF51. To our knowledge, no research has tackled the problem of detecting hateful audio content. This is a distinct challenge; alongside the semantic content, audio also contains important vocal cues which provide more opportunities to investigate (but also potentially misinterpret) tone and intention. ### Analysis of training datasets ::: The content of training datasets ::: Language
The most common language in the training datasets is English, which appears in 20 datasets, followed by Arabic and Italian (5 datasets each), Hindi-English (4 datasets) and then German, Indonesian and Spanish (3 datasets). Notably, several major languages, both globally and in Europe, do not appear, which suggests considerable unevenness in the linguistic and cultural focuses of abusive language detection. For instance, there are major gaps in the coverage of European languages, including Danish and Dutch. Surprisingly, French only appears once. The dominance of English may be due to how we sampled publications (for which we used English terms), but may also reflect different publishing practices in different countries and how well-developed abusive content research is. ### Analysis of training datasets ::: The content of training datasets ::: Source of data
Training datasets use data collected from a range of online spaces, ranging from mainstream platforms, such as Twitter, Wikipedia and Facebook, to more niche forums, such as World of Warcraft and Stormfront. In most cases, data is collected from public sources and then manually annotated but in others data is sourced through proprietary data sharing agreements with host platforms. Unsurprisingly, Twitter is the most widely used source of data, accounting for 27 of the datasets. This reflects wider concerns in computational social research that Twitter is over-used, primarily because it has a very accessible API for data collection BIBREF52, BIBREF53. Facebook and Wikipedia are the second most used sources of data, accounting for three datasets each – although we note that all three Wikipedia datasets are reported in the same publication. Many of the most widely used online platforms are not represented at all, or only in one dataset, such as Reddit, Weibo, VK and YouTube. The lack of diversity in where data is collected from limits the development of detection systems. Three main issues emerge: Linguistic practices vary across platforms. Twitter only allows 280 characters (previously only 140), provoking stylistic changes BIBREF54, and abusive content detection systems trained on this data are unlikely to work as well with longer pieces of text. Dealing with longer pieces of text could necessitate different classification systems, potentially affecting the choice of algorithmic architecture. Additionally, the technical affordances of platforms may affect the style, tone and topic of the content they host. The demographics of users on different platforms vary considerably. Social science research indicates that `digital divides' exist, whereby online users are not representative of wider populations and differ across different online spaces BIBREF53, BIBREF55, BIBREF56. Blank draws attention to how Twitter users are usually younger and wealthier than offline populations; over-reliance on data from Twitter means, in effect, that we are over-sampling data from this privileged section of society. Blank also shows that there are important cross-national differences: British Twitter users are better-educated than the offline British population but the same is not true for American Twitter users compared with the offline American population BIBREF56. These demographic differences are likely to affect the types of content that users produce. Platforms have different norms and so host different types and amounts of abuse. Mainstream platforms have made efforts in recent times to `clean up' content and so the most overt and aggressive forms of abuse, such as direct threats, are likely to be taken down BIBREF57. However, more niche platforms, such as Gab or 4chan, tolerate more offensive forms of speech and are more likely to contain explicit abuse, such as racism and very intrusive forms of harassment, such as `doxxing' BIBREF58, BIBREF59, BIBREF60. Over-reliance on a few sources of data could mean that datasets are biased towards only a subset of types of abuse. ### Analysis of training datasets ::: The content of training datasets ::: Size
The size of the training datasets varies considerably from 469 posts to 17 million; a difference of four orders of magnitude. Differences in size partly reflect different annotation approaches. The largest datasets are from proprietary data sharing agreements with platforms. Smaller datasets tend to be carefully collected and then manually annotated. There are no established guidelines for how large an abusive language training dataset needs to be. However, smaller datasets are problematic because they contain too little linguistic variation and increase the likelihood of overfitting. Rizoiu et al. BIBREF61 train detection models on only a proportion of the Davidson et al. and Waseem training datasets and show that this leads to worse performance, with a lower F1-Score, particularly for `data hungry' deep learning approaches BIBREF61. At the same time, `big' datasets alone are not a panacea for the challenges of abusive content classification. Large training datasets which have been poorly sampled, annotated with theoretically problematic categories, or annotated inexpertly and without care could still lead to the development of poor classification systems. The challenges posed by small datasets could potentially be overcome through machine learning techniques such as `semi-supervised' and `active' learning BIBREF62, although these have so far only been applied to abusive content detection in a limited way BIBREF63. Sharifirad et al. propose using text augmentation and new text generation as a way of overcoming small datasets, which is a promising avenue for future research BIBREF64.
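As a rough illustration of the text augmentation idea, the sketch below generates extra training examples by substituting words from a small hand-written synonym map; this is a toy illustration of the general approach, not the specific method proposed by Sharifirad et al.

```python
import random

# Toy synonym map; a real system might draw on a lexical resource or a
# generative model instead (this is purely illustrative).
SYNONYMS = {
    "awful": ["terrible", "dreadful"],
    "people": ["folk", "individuals"],
    "stupid": ["foolish", "idiotic"],
}

def augment(text, n_variants=2, seed=0):
    """Create simple variants of a post by swapping in synonyms."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        tokens = [rng.choice(SYNONYMS.get(tok, [tok])) for tok in text.lower().split()]
        variants.append(" ".join(tokens))
    return variants

print(augment("these people are awful and stupid"))
```

Even such a simple procedure increases linguistic variation in a small dataset, although the augmented examples inherit any biases present in the original data. ### Analysis of training datasets ::: The content of training datasets ::: Class distribution and sampling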
Class distribution is an important, although often under-considered, aspect of the design of training datasets. Datasets with little abusive content will lack linguistic variation in terms of what is abusive, thereby increasing the risk of overfitting. More concerningly, the class distribution directly affects the nature of the engineering task and how performance should be evaluated. For instance, if a dataset is 70% hate speech then a zero-rule classification system (i.e. where everything is categorised as hate speech) will achieve 70% precision and 100% recall. This should be used as a baseline for evaluating performance: 80% precision is less impressive compared with this baseline. However, 80% precision on an evenly balanced dataset would be impressive. This is particularly important when evaluating the performance of ternary classifiers, when classes can be considerably imbalanced. On average, 35% of the content in the training datasets is abusive. However, class distributions vary considerably, from those with just 1% abusive content up to 100%. These differences are largely a product of how data is sampled and which platform it is taken from. Bretschneider BIBREF22 created two datasets without using purposive sampling, and as such they contain very low levels of abuse (around 1%). Other studies filter data collection based on platforms, time periods, keywords/hashtags and individuals to increase the prevalence of abuse. Four datasets comprise only abusive content; three cases are synthetic datasets, reported on in one publication BIBREF65, and in the other case the dataset is an amendment to an existing dataset and only contains misogynistic content BIBREF37. Purposive sampling has been criticised for introducing various forms of bias into datasets BIBREF66, such as missing out on mis-spelled content BIBREF67 and only focusing on the linguistic patterns of an atypical subset of users. One pressing risk is that a lot of data is sampled from far right communities – which means that most hate speech classifiers implicitly pick up on right wing styles of discourse rather than hate speech per se. This could have profound consequences for our understanding of online political dialogue if the classifiers are applied uncritically to other groups. Nevertheless, purposive sampling is arguably a necessary step when creating a training dataset given the low prevalence of abuse on social media in general BIBREF68.
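The effect of class distribution on baseline performance can be checked directly; the short sketch below computes precision and recall for a zero-rule classifier (one that labels every post abusive) under different class balances, using invented counts.

```python
def zero_rule_metrics(n_abusive, n_not_abusive):
    """Precision and recall for the abusive class when every post is
    labelled abusive (a zero-rule classifier)."""
    total = n_abusive + n_not_abusive
    precision = n_abusive / total  # true positives / all positive predictions
    recall = 1.0                   # every abusive post is trivially recovered
    return precision, recall

# A dataset that is 70% hate speech: the trivial baseline already achieves
# 70% precision and 100% recall, so 80% precision is only a modest gain.
print(zero_rule_metrics(700, 300))  # (0.7, 1.0)

# On an evenly balanced dataset the same 80% precision is far more impressive.
print(zero_rule_metrics(500, 500))  # (0.5, 1.0)
```

Reporting such baselines alongside classifier results makes performance claims easier to interpret across datasets with different class distributions. ### Analysis of training datasets ::: The content of training datasets ::: Identity of the content creators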
The identity of the users who originally created the content in training datasets is described in only two cases. In both cases the data is synthetic BIBREF65, BIBREF33. Chung et al. use `nichesourcing' to synthetically generate abuse, with experts in tackling hate speech creating hateful posts. Sprugnoli et al. ask children to adopt pre-defined roles in an experimental classroom setup, and ask them to engage in a cyberbullying scenario. In most of the non-synthetic training datasets, some information is given about the sampling criteria used to collect data, such as hashtags. However, this does not provide direct insight into who the content creators are, such as their identity, demographics, online behavioural patterns and affiliations. Providing more information about content creators may help address biases in existing datasets. For instance, Wiegand et al. show that 70% of the sexist tweets in the highly cited Waseem and Hovy dataset BIBREF36 come from two content creators and that 99% of the racist tweets come from just one BIBREF66. This is a serious constraint as it means that user-level metadata is artificially highly predictive of abuse. And, even when user-level metadata is not explicitly modelled, detection systems only need to pick up on the linguistic patterns of a few authors to nominally detect abuse. Overall, the complete lack of information about which users have created the content in most training datasets is a substantial limitation which may be driving as-yet-unrecognised biases. This can be remedied through the methodological rigour implicit in including a data statement with a corpus. ### Analysis of training datasets ::: Annotation of training datasets ::: Annotation process
How training datasets are annotated is one of the most important aspects of their creation. A range of annotation processes are used in training datasets, which we split into five high-level categories: Crowdsourcing (15 datasets). Crowdsourcing is widely used in NLP research because it is relatively cheap and easy to implement. The value of crowdsourcing lies in having annotations undertaken by `a large number of non-experts' (BIBREF69, p. 278) – any bit of content can be annotated by multiple annotators, effectively trading quality for quantity. Studies which use crowdsourcing with only a few annotators for each bit of content risk minimising quality without counterbalancing it with greater quantity. Furthermore, testing the work of many different annotators can be challenging BIBREF70, BIBREF71 and ensuring they are paid an ethical amount may make the cost comparable to using trained experts. Crowdsourcing has also been associated with `citizen science' initiatives to make academic research more accessible but this may not be fully realised in cases where annotation tasks are laborious and low-skilled BIBREF72, BIBREF20. Academic experts (22 datasets). Expert annotation is time-intensive but is considered to produce higher quality annotations. Waseem reports that `systems trained on expert annotations outperform systems trained on amateur annotations.' BIBREF73 and, similarly, D'Orazio et al. claim, `although expert coding is costly, it produces quality data.' BIBREF74. However, the notion of an `expert' remains somewhat fuzzy within abusive content detection research. In many cases, publications only report that `an expert' is used, without specifying the nature of their expertise – even though this can vary substantially. For example, an expert may refer to an NLP practitioner, an undergraduate student with only modest levels of training, a member of an attacked social group relevant to the dataset or a researcher with a doctorate in the study of prejudice. In general, we anticipate that experts in the social scientific study of prejudice/abuse would perform better at annotation tasks than NLP experts who may not have any direct expertise in the conceptual and theoretical issues of abusive content annotation. In particular, one risk of using NLP practitioners, whether students or professionals, is that they might `game' training datasets based on what they anticipate is technically feasible for existing detection systems. For instance, if existing systems perform poorly when presented with long range dependencies, humour or subtle forms of hate (which are nonetheless usually discernible to human readers) then NLP experts could unintentionally use this expectation to inform their annotations and not label such content as hateful. Professional moderators (3 datasets). Professional moderators offer a standardized approach to content annotation, implemented by experienced workers. This should, in principle, result in high quality annotations. However, one concern is that moderators are output-focused as their work involves determining whether content should be allowed or removed from platforms; they may not provide detailed labels about the nature of abuse and may also set the bar for content labelled `abusive' fairly high, missing out on more nuanced and subtle varieties. In most cases, moderators will annotate for a range of unacceptable content, such as spam and sexual content, and this must be marked in datasets. A mix of crowdsourcing and experts (6 datasets).
Synthetic data creation (4 datasets). Synthetic datasets are an interesting option but are inherently non-authentic and therefore not necessarily representative of how abuse manifests in real-world situations. However, if they are created in realistic conditions by experts or relevant content creators then they can mimic real behaviour and have the added advantage that they may have broader coverage of different types of abuse. They are also usually easier to share. ### Analysis of training datasets ::: Annotation of training datasets ::: Identity of the annotators
The data statements framework given by Bender and Friedman emphasises the importance of understanding who has completed annotations. Knowing who the annotators are is important because `their own “social address" influences their experience with language and thus their perception of what they are annotating.' BIBREF18 In the context of online abuse, Binns et al. show that the gender of annotators systematically influences what annotations they provide BIBREF75. No annotator will be well-versed in all of the slang or coded meanings used to construct abusive language. Indeed, many of these coded meanings are deliberately covert and obfuscated BIBREF76. To help mitigate these challenges, annotators should be (a) well-qualified and (b) diverse. A homogeneous group of annotators will be poorly equipped to catch all instances of abuse in a corpus. Recruiting an intentionally mixed group of annotators is likely to yield better recall of abuse and thus a more precise dataset BIBREF77. Information about annotators is unfortunately scarce. In 23 of the training datasets no information is given about the identity of annotators; in 17 datasets very limited information is given, such as whether the annotator is a native speaker of the language; and in just 10 cases is detailed information given. Interestingly, only 4 out of these 10 datasets are in the English language. Relevant information about annotators can be split into (i) Demographic information and (ii) annotators' expertise and experience. In none of the training sets is the full range of annotator information made available, which includes: Demographic information. The nature of the task affects what information should be provided, as well as the geographic and cultural context. For instance, research on Islamophobia should include, at the very least, information about annotators' religious affiliation. Relevant variables include: age, ethnicity and race, religion, gender, and sexual orientation. Expertise and experience. Relevant variables include: field of research, years of experience, and research status (e.g. research assistant or post-doc). Personal experiences of abuse. In our review, none of the datasets contained systematic information about whether annotators had been personally targeted by abuse or had viewed such abuse online, even though this can impact annotators' perceptions. Relevant variables include: experiences of being targeted by online abuse, and experiences of viewing online abuse.
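A minimal sketch of how such annotator information might be released alongside a dataset is shown below; the fields simply mirror the variables listed above and do not represent a standardised schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnnotatorProfile:
    """Illustrative, anonymised record of annotator information."""
    annotator_id: str
    age_band: Optional[str] = None           # e.g. "25-34"
    ethnicity: Optional[str] = None
    religion: Optional[str] = None
    gender: Optional[str] = None
    sexual_orientation: Optional[str] = None
    field_of_research: Optional[str] = None
    years_of_experience: Optional[int] = None
    research_status: Optional[str] = None    # e.g. "research assistant"
    targeted_by_online_abuse: Optional[bool] = None
    viewed_online_abuse: Optional[bool] = None

# Invented examples of the metadata that could accompany each annotation.
annotators = [
    AnnotatorProfile("A1", age_band="25-34", gender="female",
                     field_of_research="social psychology",
                     years_of_experience=4, targeted_by_online_abuse=True),
    AnnotatorProfile("A2", age_band="35-44", gender="male",
                     field_of_research="NLP", years_of_experience=2,
                     viewed_online_abuse=True),
]
```

Releasing such profiles in anonymised form would allow future researchers to examine how annotator backgrounds shape the labels in a dataset, as Binns et al. were able to do with existing annotator metadata BIBREF75. ### Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation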
A key source of variation across datasets is whether annotators were given detailed guidelines, very minimal guidelines or no guidelines at all. Analysing this issue is made difficult by the fact that many dataset creators do not share their annotation guidelines. 21 of the datasets we study do not provide the guidelines and 14 only provide them in a highly summarised form. In just 15 datasets is detailed information given (and these are reported on in just 9 publications). Requiring researchers to publish annotation guidelines not only helps future researchers to better understand what datasets contain but also to improve and extend them. This could be crucial for improving the quality of annotations; as Ross et al. recommend, `raters need more detailed instructions for annotation.' BIBREF78 The degree of detail given in guidelines is linked to how the notion of `abuse' is understood. Some dataset creators construct clear and explicit guidelines in an attempt to ensure that annotations are uniform and align closely with social scientific concepts. In other cases, dataset creators allow annotators to apply their own perception. For instance, in their Portuguese language dataset, Fortuna et al. ask annotators to `evaluate if according to your opinion, these tweets contain hate speech' BIBREF38. The risk here is that annotators' perceptions may differ considerably; Salminen et al. show that online hate interpretation varies considerably across individuals BIBREF79. This is also reflected in inter-annotator agreement scores for abusive content, which are often very low, particularly for tasks which deploy more than just a binary taxonomy. However, it is unlikely that annotators could ever truly divorce themselves from their own social experience and background to decide on a single `objective' annotation. Abusive content annotation is better understood, epistemologically, as an intersubjective process in which agreement is constructed, rather than an objective process in which a `true' annotation is `found'. For this reason, some researchers have shifted the question of `how can we achieve the correct annotation?' to `who should decide what the correct annotation is?' BIBREF73. Ultimately, whether annotators should be allowed greater freedom in making annotations, and whether this results in higher quality datasets, needs further research and conceptual examination. Some aspects of abusive language present fundamental issues that are prone to unreliable annotation, such as Irony, Calumniation and Intent. They are intrinsically difficult to annotate given a third-person perspective on a piece of text as they involve making a judgement about indeterminate issues. However, they cannot be ignored given their prevalence in abusive content and their importance to how abuse is expressed. Thus, although they are fundamentally conceptual problems, these issues also present practical problems for annotators, and should be addressed explicitly in coding guidelines. Otherwise, as BIBREF80 note, these issues are likely to drive type II errors in classification, i.e. labelling non-hate-speech utterances as hate speech.
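Inter-annotator agreement of the kind discussed here is usually reported with chance-corrected statistics; the sketch below computes Cohen's kappa for two hypothetical annotators using scikit-learn, with labels invented purely for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two annotators over the same ten posts
# (1 = hateful, 0 = not hateful); the values are invented.
annotator_a = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
annotator_b = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # chance-corrected agreement in [-1, 1]
```

Reporting agreement per class, rather than a single overall figure, is particularly informative for multi-level taxonomies where some distinctions are far harder to annotate consistently than others. ### Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation ::: Irony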
This covers statements that have a meaning contrary to what one might glean at first reading. Lachenicht BIBREF81 notes that Irony goes against Grice's quality maxim, and as such Ironic content requires closer attention from the reader as it is prone to being misinterpreted. Irony is a particularly difficult issue as in some cases it is primarily intended to provide humour (and thus might legitimately be considered non-abusive) but in other cases is used as a way of veiling genuine abuse. Previous research suggests that the problem is widespread. Sanguinetti et al. BIBREF82 find irony in 11% of hateful tweets in Italian. BIBREF25 find that irony is one of the most common phenomena in self-deleted comments, and that the prevalence of irony is 33.9% amongst deleted comments in a Croatian comment dataset and 18.1% amongst deleted comments in a Slovene comment dataset. Furthermore, annotating irony (as well as related constructs, such as sarcasm and humour) is inherently difficult. BIBREF83 report that agreement on sarcasm amongst annotators working in English is low, something echoed by annotations of Danish content BIBREF84. Irony is also one of the most common reasons for content to be re-moderated on appeal, according to Pavlopoulos et al. BIBREF24. ### Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation ::: Calumniation
This covers false statements, slander, and libel. From the surveyed set, this is annotated in datasets for Greek BIBREF24 and for Croatian and Slovene BIBREF25. Its prevalence varies considerably across these two datasets and reliable estimations of the prevalence of false statements are not available. Calumniation is not only an empirical issue, it also raises conceptual problems: should false information be considered abusive if it slanders or demeans a person? However, if the information is then found to be true, does it make the content any less abusive? Given the contentiousness of `objectivity', and the lack of consensus about most issues in a `post-truth' age BIBREF85, who should decide what is considered true? And, finally, how do we determine whether the content creator knows whether something is true? These ontological, epistemological and social questions are fundamental to the issue of truth and falsity in abusive language. Understandably, most datasets do not take any perspective on the truth and falsity of content. This is a practical solution: given error rates in abusive language detection as well as error rates in fact-checking, a system which combined both could be inapplicable in practice. ### Analysis of training datasets ::: Annotation of training datasets ::: Guidelines for annotation ::: Intent
This information about the utterer's state of mind is a core part of how many types of abusive language are defined. Intent is usually used to emphasize the wrongness of abusive behaviour, such as spreading, inciting, promoting or justifying hatred or violence towards a given target, or sending a message that aims at dehumanising, delegitimising, hurting or intimidating them BIBREF82. BIBREF81 postulate that “aggravation, invective and rudeness ... may be performed with varying degrees of intention to hurt", and cite five legal degrees of intent BIBREF86. However, it is difficult to discern the intent of another speaker in a verbal conversation between humans, and even more difficult to do so through written and computer-mediated communications BIBREF87. Nevertheless, intent is particularly important for some categories of abuse such as bullying, maliciousness and hostility BIBREF34, BIBREF32. Most of the guidelines for the datasets we have studied do not contain an explicit discussion of intent, although there are exceptions. BIBREF88 include intent as a core part of their annotation standard, noting that understanding context (such as by seeing a speakers' other online messages) is crucial to achieving quality annotations. However, this proposition poses conceptual challenges given that people's intent can shift over time. Deleted comments have been used to study potential expressions of regret by users and, as such, a change in their intent BIBREF89, BIBREF25; this has also been reported as a common motivator even in self-deletion of non-abusive language BIBREF90. Equally, engaging in a sequence of targeted abusive language is an indicator of aggressive intent, and appears in several definitions. BIBREF23 require an “intent to physically assert power over women" as a requirement for multiple categories of misogynistic behaviour. BIBREF34 find that messages that are “unapologetically or intentionally offensive" fit in the highest grade of trolling under their schema. Kenny et al. BIBREF86 note how sarcasm, irony, and humour complicate the picture of intent by introducing considerable difficulties in discerning the true intent of speakers (as discussed above). Part of the challenge is that many abusive terms, such as slurs and insults, are polysemic and may be co-opted by an ingroup into terms of entertainment and endearment BIBREF34. ### Dataset sharing ::: The challenges and opportunities of achieving Open Science
All of the training datasets we analyse are publicly accessible and as such can be used by researchers other than the authors of the original publication. Sharing data is an important aspect of open science but also poses ethical and legal risks, especially in light of recent regulatory changes, such as the introduction of GDPR in the UK BIBREF91, BIBREF92. This problem is particularly acute with abusive content, which can be deeply shocking, and some training datasets from highly cited publications have not been made publicly available BIBREF93, BIBREF94, BIBREF95. Open science initiatives can also raise concerns amongst the public, who may not be comfortable with researchers sharing their personal data BIBREF96, BIBREF97. The difficulty of sharing data in sensitive areas of research is reflected by the Islamist extremism research website, `Jihadology'. It chose to restrict public access in 2019, following efforts by Home Office counter-terrorism officials to shut it down completely. They were concerned that, whilst it aimed to support academic research into Islamist extremism, it may have inadvertently enabled individuals to radicalise by making otherwise banned extremist material available. By working with partners such as the not-for-profit Tech Against Terrorism, Jihadology created a secure area in the website, which can only be accessed by approved researchers. Some of the training datasets in our list have similar requirements, and can only be accessed following a registration process. Open sharing of datasets is not only a question of scientific integrity and a powerful way of advancing scientific knowledge. It is also, fundamentally, a question of fairness and power. Opening access to datasets will enable less-well funded researchers and organisations, which includes researchers in the Global South and those working for not-for-profit organisations, to steer and contribute to research. This is a particularly pressing issue in a field which is directly concerned with the experiences of often-marginalised communities and actors BIBREF36. For instance, one growing concern is the biases encoded in detection systems and the impact this could have when they are applied in real-world settings BIBREF9, BIBREF10. This research could be further advanced by making more datasets and detection systems more easily available. For instance, Binns et al. use the detailed metadata in the datasets provided by Wulczyn et al. to investigate how the demographics of annotators impact the annotations they make BIBREF75, BIBREF29. The value of such insights is only clear after the dataset has been shared – and, equally, is only possible because of data sharing. More effective ways of sharing datasets would address the fact that datasets often deteriorate after they have been published BIBREF13. Several of the most widely used datasets provide only the annotations and IDs and must be `rehydrated' to collect the content. Both of the datasets provided by Waseem and Hovy and Founta et al. must be collected in this way BIBREF98, BIBREF36, and both have degraded considerably since they were first released as the tweets are no longer available on Twitter. Chung et al. also estimate that within 12 months the recently released dataset for counterspeech by Mathew et al. had lost more than 60% of its content BIBREF65, BIBREF58. Dataset degradation poses three main risks: First, if less data is available then there is a greater likelihood of overfitting.
Second, the class distributions usually change as proportionally more of the abusive content is taken down than the non-abusive. Third, it is also likely that the more overt forms of abuse are taken down, rather than the covert instances, thereby changing the qualitative nature of the dataset.
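These risks can be quantified whenever a dataset is rehydrated; the sketch below compares the original annotations with whichever posts are still retrievable, reporting the share of content lost and the shift in class balance (the data is invented for illustration).

```python
def degradation_report(original_labels, still_available):
    """Compare a dataset's original class balance with the balance among
    posts that can still be rehydrated. `original_labels` maps post ID to
    class; `still_available` is the set of IDs still retrievable."""
    remaining = {pid: lab for pid, lab in original_labels.items()
                 if pid in still_available}

    def abusive_share(labels):
        return sum(1 for lab in labels.values() if lab == "abusive") / len(labels)

    return {
        "content_lost": round(1 - len(remaining) / len(original_labels), 3),
        "abusive_share_original": round(abusive_share(original_labels), 3),
        "abusive_share_remaining": round(abusive_share(remaining), 3),
    }

# Invented example: abusive posts are removed at a higher rate, so the
# rehydrated dataset is both smaller and less abusive than the original.
original = {f"t{i}": ("abusive" if i < 30 else "not_abusive") for i in range(100)}
available = {f"t{i}" for i in range(100) if i >= 30 or i % 3 != 0}
print(degradation_report(original, available))
# {'content_lost': 0.1, 'abusive_share_original': 0.3, 'abusive_share_remaining': 0.222}
```

Publishing such a report each time a rehydrated dataset is used would make it easier to judge how far results on the degraded data remain comparable with the original. ### Dataset sharing ::: Research infrastructure: Solutions for sharing training datasets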
The problem of data access and sharing remains unresolved in the field of abusive content detection, much like other areas of computational research BIBREF99. At present, an ethical, secure and easy way of sharing sensitive tools and resources has not been developed and adopted in the field. More effective dataset sharing would (1) enable greater collaboration amongst researchers, (2) enhance the reproducibility of research by encouraging greater scrutiny BIBREF100, BIBREF101, BIBREF102 and (3) substantively advance the field by enabling future researchers to better understand the biases and limitations of existing research and to identify new research directions. There are two main challenges which must be overcome to ensure that training datasets can be shared and used by future researchers. First, dataset quality: the size, class distribution and quality of their content must be maintained. Second, dataset access: access to datasets must be controlled so that researchers can use them, whilst respecting platforms' Terms of Service and preventing potential extremists from gaining access. These problems are closely entwined and the solutions available, which follow, have implications for both of them. Synthetic datasets. Four of the datasets we have reviewed were developed synthetically. This resolves the dataset quality problem but introduces additional biases and limitations because the data is not real. Synthetic datasets still need to be shared in such a way as to limit access for potential extremists but face no challenges from platforms' Terms of Service. Data `philanthropy' or `donations'. These are defined as `the act of an individual actively consenting to donate their personal data for research' BIBREF97. Donated data from many individuals could then be combined and shared – but it would still need to be annotated. A further challenge is that many individuals who share abusive content may be unwilling to `donate' their data as this is commonly associated with prosocial motivations, creating severe class imbalances BIBREF97. Data donations could also open new moral and ethical issues; individuals' privacy could be impacted if data is re-analysed to derive new unexpected insights BIBREF103. Informed consent is difficult given that the exact nature of analyses may not be known in advance. Finally, data donations alone do not solve how access can be responsibly protected and how platforms' Terms of Service can be met. For these reasons, data donations are unlikely to be a key part of future research infrastructure for abusive content detection. Platform-backed sharing. Platforms could share datasets and support researchers' access. There are no working examples of this in abusive content detection research, but it has been successfully used in other research areas. For instance, Twitter has made available a large dataset of accounts linked to potential information operations, known as the "IRA" dataset (Internet Research Agency). This would require considerably more interfaces between academia and industry, which may be difficult given the challenges associated with existing initiatives, such as Social Science One. However, in the long term, we propose that this is the most effective solution for the problem of sharing training datasets. This is not only because it removes Terms of Service limitations but also because platforms have large volumes of original content which has been annotated in a detailed way.
This could take one of two forms: platforms either make content which has violated their Community Guidelines available directly, or they provide special access post-hoc to datasets which researchers have already collected publicly through their API, thereby making sure that datasets do not degrade over time.

Data trusts. Data trusts have been described as a way of sharing data `in a fair, safe and equitable way' (BIBREF104, p. 46). However, there is considerable disagreement as to what they entail and how they would operate in practice BIBREF105. The Open Data Institute identifies that data trusts aim to make data open and accessible by providing a framework for storing and accessing data, terms and mechanisms for resolving disputes and, in some cases, contracts to enforce them. For abusive content training datasets, this would provide a way of enabling datasets to be shared, although it would require considerable institutional, legal and financial commitments. Arguably, the easiest way of ensuring data can be shared is to maintain a very simple data trust, such as a database, which would contain all available abusive content training datasets. This repository would need to be permissioned and access controlled to address concerns relating to privacy and ethics. Such a repository could substantially reduce the burden on researchers; once they have been approved for access to the repository, they could access all of the datasets it holds, with different levels of permission implemented for different datasets, depending on commercial or research sensitivity. Furthermore, this repository could contain all of the metadata reported with datasets, and such information could be included at the point of deposit, based on the `data statements' work of Bender and Friedman BIBREF18. A simple API could be developed for depositing and reading data, similar to that of HateBase. The permissioning system could be maintained either through a single institution or, to avoid power concentrating amongst a small group of researchers, through a decentralised blockchain.
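To illustrate the shape such a repository could take, the sketch below implements a minimal deposit-and-read interface with permission levels and a metadata requirement at the point of deposit. It is a sketch only, assuming a simple in-memory store; the names (AccessLevel, DatasetRecord, Repository) are hypothetical illustrations and do not refer to any existing service or API.

```python
# Minimal sketch of a permissioned dataset repository. All names are
# hypothetical; this is an illustration of the design, not an existing tool.
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List


class AccessLevel(Enum):
    PUBLIC = 1        # openly listed metadata
    RESEARCHER = 2    # approved academic accounts
    RESTRICTED = 3    # commercially or ethically sensitive datasets


@dataclass
class DatasetRecord:
    name: str
    task: str                    # e.g. "hate speech", "harassment"
    source_platform: str
    size: int
    access: AccessLevel
    data_statement: Dict[str, str] = field(default_factory=dict)
    texts: List[str] = field(default_factory=list)


class Repository:
    """Deposit datasets with metadata; serve them according to permissions."""

    def __init__(self) -> None:
        self._store: Dict[str, DatasetRecord] = {}

    def deposit(self, record: DatasetRecord) -> None:
        # Metadata (the data statement) is required at the point of deposit.
        if not record.data_statement:
            raise ValueError("A data statement must accompany every deposit.")
        self._store[record.name] = record

    def read(self, name: str, user_level: AccessLevel) -> DatasetRecord:
        record = self._store[name]
        if user_level.value < record.access.value:
            raise PermissionError(f"'{name}' requires {record.access.name} access.")
        return record
```

The key design point is that a data statement is required at deposit time and that read access is mediated by a permission level, whether that level is granted by a single institution or through a decentralised mechanism.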
### Dataset sharing ::: A new repository of training datasets: Hatespeechdata.com
The resources and infrastructure needed to create a dedicated data trust and API for sharing abusive content training datasets are substantial and require considerable further engagement with research teams in this field. In the interim, to encourage greater sharing of datasets, we have launched a dedicated website which contains all of the datasets analysed here: https://hatespeechdata.com. Based on the analysis in the previous sections, we have also provided partial data statements BIBREF18. The website also contains previously published abusive keyword dictionaries, which are not analysed here but which some researchers may find useful. Note that the website only contains information/data which the original authors have already made publicly available elsewhere. It will be updated with new datasets in the future.

### Best Practices for training dataset creation
Much can be learned from existing efforts to create abusive language datasets. We identify best practices which emerge at four distinct points in the process of creating a training dataset: (1) task formation, (2) data selection, (3) annotation, and (4) documentation.

### Best Practices for training dataset creation ::: Task formation: Defining the task addressed by the dataset
Dataset creation should be `problem driven' BIBREF106 and should address a well-defined and specific task, with a clear motivation. This will directly inform the taxonomy design, which should be well-specified and engage with social scientific theory as needed. Defining a clear task which the dataset addresses is especially important given the maturation of the field, ongoing terminological disagreement and the complexity of online abuse. The diversity of phenomena that fit under the umbrella of abusive language means that `general purpose' datasets are unlikely to advance the field. New datasets are most valuable when they address a new target, generator, phenomenon, or domain. Creating datasets which repeat existing work is not nearly as valuable.
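One lightweight way of making the task definition explicit is to record it as a small structured object alongside the data, so that the target, phenomenon, domain and label set are pinned down before annotation begins. The sketch below is purely illustrative; the field names and example values are our own and not a standard schema.

```python
# Illustrative only: recording a task definition and taxonomy up front.
from dataclasses import dataclass
from typing import List


@dataclass
class TaskDefinition:
    phenomenon: str      # e.g. "abusive language", "harassment"
    target: str          # e.g. "group-directed" vs "person-directed"
    domain: str          # e.g. "English-language social media comments"
    motivation: str      # the problem the dataset is intended to address
    labels: List[str]    # the taxonomy annotators will apply


task = TaskDefinition(
    phenomenon="abusive language",
    target="group-directed",
    domain="social media comments",
    motivation="capture implicit as well as explicit abuse",
    labels=["explicit abuse", "implicit abuse", "non-abusive"],
)
```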
### Best Practices for training dataset creation ::: Selecting data for abusive language annotation
Once the task is established, dataset creators should select what language will be annotated, where data will be sampled from and how sampling will be completed. Any data selection exercise is bound to introduce bias, and so it is important to record what decisions are made (and why) at this step. Dataset builders should have a specific target size in mind and also an idea of the minimum amount of data that is likely to be needed for the task. This is also where steps 1 and 2 intersect: the data selection should be driven by the problem that is addressed rather than by what is easy to collect. Ensuring there are enough positive examples of abuse will always be challenging as the prevalence of abuse is so low. However, given that purposive sampling inevitably introduces biases, creators should explore a range of options before determining the best one – and consider using multiple sampling methods at once, such as including data from different times, different locations, different types of users and different platforms. Other options include using measures of linguistic diversity to maximize the variety of text included in datasets, or including words that cluster close to known abusive terms.
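As a minimal sketch of what combining strategies might look like, the snippet below mixes a keyword-based (purposive) sample with a random background sample, and expands a seed list of abusive terms by finding embedding neighbours. The seed terms, embedding lookup and post source are placeholders, and the helper names (expand_seed_terms, sample_for_annotation) are our own illustrations rather than an established tool.

```python
# Sketch of mixed sampling for annotation. Keywords, embeddings and posts are
# placeholders; the point is that several strategies are combined and the
# decisions (keyword list, random seed, slice sizes) are recorded.
import random
from typing import Dict, List

import numpy as np


def expand_seed_terms(seeds: List[str],
                      embeddings: Dict[str, np.ndarray],
                      top_k: int = 5) -> List[str]:
    """Return words whose vectors cluster close to the known abusive seeds."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    present = [s for s in seeds if s in embeddings]
    scores = {
        word: max(cosine(vec, embeddings[s]) for s in present)
        for word, vec in embeddings.items()
        if present and word not in seeds
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]


def sample_for_annotation(posts: List[str],
                          keywords: List[str],
                          n_keyword: int,
                          n_random: int,
                          seed: int = 0) -> List[str]:
    """Mix purposive (keyword-matched) sampling with a random background sample."""
    rng = random.Random(seed)
    keyword_hits = [p for p in posts if any(k in p.lower() for k in keywords)]
    purposive = rng.sample(keyword_hits, min(n_keyword, len(keyword_hits)))
    remainder = [p for p in posts if p not in purposive]
    background = rng.sample(remainder, min(n_random, len(remainder)))
    return purposive + background
```

Keeping the keyword list, the random seed and the size of each slice alongside the released dataset makes the sampling decisions auditable later.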
### Best Practices for training dataset creation ::: Annotating abusive language
Annotators must be hired, trained and given appropriate guidelines. Annotators work best with solid guidelines that are easy to grasp and contain clear examples BIBREF107. The best examples are both illustrative, in order to capture the concepts (such as `threatening language'), and provide insight into `edge cases': content which only just crosses the line into abuse. Decisions should be made about how to handle intrinsically difficult aspects of abuse, such as irony, calumniation and intent (see above). Annotation guidelines should be developed iteratively by dataset creators; by working through the data, rules can be established for difficult or counter-intuitive coding decisions, and a set of shared practices developed. Annotators should be included in this iterative process. Discussing with annotators the language that they have seen `in the field' offers an opportunity to enhance and refine guidelines, and even taxonomies. Such discussions will lead to more consistent data and provide a knowledge base to draw on for future work. To achieve this, it is important to adopt an open culture in which annotators are comfortable providing honest feedback and describing their uncertainties. Annotators should also be given emotional and practical support (as well as appropriate financial compensation), and the harmful and potentially triggering effects of annotating online abuse should be recognised at all times. For a set of guidelines to help protect the well-being of annotators, see BIBREF13.
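One practical way to check whether iteratively revised guidelines are actually producing more consistent annotations is to score a shared calibration batch after each revision and track pairwise agreement over time. The sketch below assumes scikit-learn is available and uses toy labels; it computes the mean pairwise Cohen's kappa, although any agreement statistic could be substituted.

```python
# Sketch: tracking pairwise annotator agreement across guideline revisions.
# The labels are toy values; in practice they would come from each annotation
# round on a shared calibration set.
from itertools import combinations
from typing import Dict, List

from sklearn.metrics import cohen_kappa_score


def mean_pairwise_kappa(codings: Dict[str, List[str]]) -> float:
    """Average Cohen's kappa over all annotator pairs on the same items."""
    pairs = list(combinations(codings.keys(), 2))
    kappas = [cohen_kappa_score(codings[a], codings[b]) for a, b in pairs]
    return sum(kappas) / len(kappas)


round_1 = {
    "annotator_1": ["abuse", "none", "abuse", "none", "abuse"],
    "annotator_2": ["none",  "none", "abuse", "abuse", "abuse"],
    "annotator_3": ["abuse", "none", "none",  "none",  "abuse"],
}
print(f"Mean pairwise kappa: {mean_pairwise_kappa(round_1):.2f}")
```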
### Best Practices for training dataset creation ::: Documenting methods, data, and annotators
The best training datasets provide as much information as possible and are well-documented. When the method behind them is unclear, they are hard to evaluate, use and build on. Providing as much information as possible can open new and unanticipated analyses and gives more agency to future researchers who use the dataset to create classifiers. For instance, if all annotators' codings are provided (rather than just the `final' decision), then a more nuanced and aware classifier could be developed, as in some cases it can be better to maximise the recall of annotations rather than to maximise agreement BIBREF77. Our review found that most datasets have poor methodological descriptions and few (if any) provide enough information to construct an adequate data statement. It is crucial that dataset creators are up front about their biases and limitations: every dataset is biased, and this is only problematic when the biases are unknown. One strategy for doing this is to maintain a document of decisions made when designing and creating the dataset, and then to use it to explain to readers the rationale behind those decisions. Details about the end-to-end dataset creation process are welcome. For instance, if the task is crowdsourced then a screenshot of the micro-task presented to workers should be included, and the top-level parameters should be described (e.g. number of workers, maximum number of tasks per worker, number of annotations per piece of text) BIBREF20. If a dedicated interface is used for the annotation, this should also be described and screenshotted, as the interface design can influence the annotations.
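Releasing every annotator's coding, and not only the adjudicated label, lets downstream users choose their own aggregation rule. The sketch below, with toy labels and hypothetical label names, contrasts a majority vote with a recall-oriented rule that flags an item as abusive if any annotator did so, in the spirit of the point above.

```python
# Sketch: two ways of deriving training labels when all annotators' codings
# are released. Label names ("abuse", "none") are placeholders.
from collections import Counter
from typing import List


def majority_label(codings: List[str]) -> str:
    """Agreement-oriented: take the most common coding."""
    return Counter(codings).most_common(1)[0][0]


def recall_oriented_label(codings: List[str], positive: str = "abuse") -> str:
    """Recall-oriented: flag the item if any annotator flagged it."""
    return positive if positive in codings else "none"


item_codings = ["none", "abuse", "none"]    # three annotators, one item
print(majority_label(item_codings))          # -> "none"
print(recall_oriented_label(item_codings))   # -> "abuse"
```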
### Best Practices for training dataset creation ::: Best practice summary
Unfortunately, as with any burgeoning field, there is confusion and overlap around many of the phenomena discussed in this paper; coupled with the high degree of variation in the quality of method descriptions, this has led to many pieces of research that are hard to combine, compare, or re-use. Our reflections on best practices are driven by this review and the difficulties of creating high quality training datasets. For future researchers, we summarise our recommendations in the following seven points:

1. Bear in mind the purpose of the dataset; design the dataset to help address questions and problems from previous research.
2. Avoid using `easy to access' data, and instead explore new sources which may have greater diversity. Consider what biases may be created by your sampling method.
3. Determine size based on data sparsity and having enough positive classes rather than `what is possible'.
4. Establish a clear taxonomy to be used for the task, with meaningful and theoretically sound categories.
5. Provide annotators with guidelines; develop them iteratively and publish them with your dataset. Consider using trained annotators given the complexities of abusive content.
6. Involve people who have direct experience of the abuse which you are studying whenever possible (and provided that you can protect their well-being).
7. Report on every step of the research through a Data Statement (a minimal skeleton of such a record is sketched below).
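As an indicative skeleton only, the record below lists the kind of fields such a Data Statement might cover, loosely following the categories proposed by Bender and Friedman BIBREF18; the field names and placeholder values are our own.

```python
# Indicative skeleton of the metadata that could accompany a dataset release.
# Field names loosely follow the data statement categories of Bender and
# Friedman (BIBREF18); all values are placeholders.
data_statement = {
    "curation_rationale": "why this data, sampled this way, for this task",
    "language_variety": "e.g. en-GB, informal social media register",
    "speaker_demographics": "what is known (or unknown) about the authors",
    "annotator_demographics": "recruitment, training and background of annotators",
    "speech_situation": "platform, time period and conversational context",
    "text_characteristics": "genre, length, class distribution",
    "sampling_method": "random / keyword / purposive, with seed terms listed",
    "annotation_process": "guideline version, annotators per item, agreement",
    "known_limitations_and_biases": "explicit statement of known biases",
}
```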
### Conclusion
This paper examined a large set of datasets for the creation of abusive content detection systems, providing insight into what they contain, how they are annotated, and how tasks have been framed. Based on an evidence-driven review, we provided an extended discussion of how to make training datasets more readily available and useful, including the challenges and opportunities of open science as well as the need for more research infrastructure. We reported on the development of hatespeechdata.com – a new repository for online abusive content training datasets. Finally, we outlined best practices for the creation of training datasets for the detection of online abuse. We have thereby met the four research aims elaborated at the start of the paper. Training detection systems for online abuse is a substantial challenge with real social consequences. If we want the systems we develop to be usable, scalable and with few biases, then we need to train them on the right data: garbage in will only lead to garbage out.

Fig. 1. Year in which training datasets were originally published
Fig. 2. Primary language of dataset
Fig. 3. Platform from which data is gathered
Fig. 4. Distribution of dataset sizes
Fig. 5. Relative size of “abusive” data class
Table 1. Datasets surveyed. Synthetic data use indicated with asterisk.
 | hatespeechdata.com |
How does Alan interrupt the robots' communication signal?
A. Alan jams his knife into the fallen robot, which disrupts the signal.
B. Alan hurls the oozing blob-like creature at the robot. The blob dissolves the robot with its acid and that is what disrupts the signal.
C. Alan uses his pocket blaster to disrupt the signal.
D. Alan throws a handful of an anthill at the robot, using the brain waves of hundreds of ants to disrupt the signal.
| SURVIVAL TACTICS By AL SEVCIK ILLUSTRATOR NOVICK The robots were built to serve Man; to do his work, see to his comforts, make smooth his way. Then the robots figured out an additional service—putting Man out of his misery. There was a sudden crash that hung sharply in the air, as if a tree had been hit by lightning some distance away. Then another. Alan stopped, puzzled. Two more blasts, quickly together, and the sound of a scream faintly. Frowning, worrying about the sounds, Alan momentarily forgot to watch his step until his foot suddenly plunged into an ant hill, throwing him to the jungle floor. "Damn!" He cursed again, for the tenth time, and stood uncertainly in the dimness. From tall, moss-shrouded trees, wrist-thick vines hung quietly, scraping the spongy ground like the tentacles of some monstrous tree-bound octopus. Fitful little plants grew straggly in the shadows of the mossy trunks, forming a dense underbrush that made walking difficult. At midday some few of the blue sun's rays filtered through to the jungle floor, but now, late afternoon on the planet, the shadows were long and gloomy. Alan peered around him at the vine-draped shadows, listening to the soft rustlings and faint twig-snappings of life in the jungle. Two short, popping sounds echoed across the stillness, drowned out almost immediately and silenced by an explosive crash. Alan started, "Blaster fighting! But it can't be!" Suddenly anxious, he slashed a hurried X in one of the trees to mark his position then turned to follow a line of similar marks back through the jungle. He tried to run, but vines blocked his way and woody shrubs caught at his legs, tripping him and holding him back. Then, through the trees he saw the clearing of the camp site, the temporary home for the scout ship and the eleven men who, with Alan, were the only humans on the jungle planet, Waiamea. Stepping through the low shrubbery at the edge of the site, he looked across the open area to the two temporary structures, the camp headquarters where the power supplies and the computer were; and the sleeping quarters. Beyond, nose high, stood the silver scout ship that had brought the advance exploratory party of scientists and technicians to Waiamea three days before. Except for a few of the killer robots rolling slowly around the camp site on their quiet treads, there was no one about. "So, they've finally got those things working." Alan smiled slightly. "Guess that means I owe Pete a bourbon-and-soda for sure. Anybody who can build a robot that hunts by homing in on animals' mind impulses ..." He stepped forward just as a roar of blue flame dissolved the branches of a tree, barely above his head. Without pausing to think, Alan leaped back, and fell sprawling over a bush just as one of the robots rolled silently up from the right, lowering its blaster barrel to aim directly at his head. Alan froze. "My God, Pete built those things wrong!" Suddenly a screeching whirlwind of claws and teeth hurled itself from the smoldering branches and crashed against the robot, clawing insanely at the antenna and blaster barrel. With an awkward jerk the robot swung around and fired its blaster, completely dissolving the lower half of the cat creature which had clung across the barrel. But the back pressure of the cat's body overloaded the discharge circuits. The robot started to shake, then clicked sharply as an overload relay snapped and shorted the blaster cells. The killer turned and rolled back towards the camp, leaving Alan alone. 
Shakily, Alan crawled a few feet back into the undergrowth where he could lie and watch the camp, but not himself be seen. Though visibility didn't make any difference to the robots, he felt safer, somehow, hidden. He knew now what the shooting sounds had been and why there hadn't been anyone around the camp site. A charred blob lying in the grass of the clearing confirmed his hypothesis. His stomach felt sick. "I suppose," he muttered to himself, "that Pete assembled these robots in a batch and then activated them all at once, probably never living to realize that they're tuned to pick up human brain waves, too. Damn! Damn!" His eyes blurred and he slammed his fist into the soft earth. When he raised his eyes again the jungle was perceptibly darker. Stealthy rustlings in the shadows grew louder with the setting sun. Branches snapped unaccountably in the trees overhead and every now and then leaves or a twig fell softly to the ground, close to where he lay. Reaching into his jacket, Alan fingered his pocket blaster. He pulled it out and held it in his right hand. "This pop gun wouldn't even singe a robot, but it just might stop one of those pumas." They said the blast with your name on it would find you anywhere. This looked like Alan's blast. Slowly Alan looked around, sizing up his situation. Behind him the dark jungle rustled forbiddingly. He shuddered. "Not a very healthy spot to spend the night. On the other hand, I certainly can't get to the camp with a pack of mind-activated mechanical killers running around. If I can just hold out until morning, when the big ship arrives ... The big ship! Good Lord, Peggy!" He turned white; oily sweat punctuated his forehead. Peggy, arriving tomorrow with the other colonists, the wives and kids! The metal killers, tuned to blast any living flesh, would murder them the instant they stepped from the ship! A pretty girl, Peggy, the girl he'd married just three weeks ago. He still couldn't believe it. It was crazy, he supposed, to marry a girl and then take off for an unknown planet, with her to follow, to try to create a home in a jungle clearing. Crazy maybe, but Peggy and her green eyes that changed color with the light, with her soft brown hair, and her happy smile, had ended thirty years of loneliness and had, at last, given him a reason for living. "Not to be killed!" Alan unclenched his fists and wiped his palms, bloody where his fingernails had dug into the flesh. There was a slight creak above him like the protesting of a branch too heavily laden. Blaster ready, Alan rolled over onto his back. In the movement, his elbow struck the top of a small earthy mound and he was instantly engulfed in a swarm of locust-like insects that beat disgustingly against his eyes and mouth. "Fagh!" Waving his arms before his face he jumped up and backwards, away from the bugs. As he did so, a dark shapeless thing plopped from the trees onto the spot where he had been lying stretched out. Then, like an ambient fungus, it slithered off into the jungle undergrowth. For a split second the jungle stood frozen in a brilliant blue flash, followed by the sharp report of a blaster. Then another. Alan whirled, startled. The planet's double moon had risen and he could see a robot rolling slowly across the clearing in his general direction, blasting indiscriminately at whatever mind impulses came within its pickup range, birds, insects, anything. Six or seven others also left the camp headquarters area and headed for the jungle, each to a slightly different spot. 
Apparently the robot hadn't sensed him yet, but Alan didn't know what the effective range of its pickup devices was. He began to slide back into the jungle. Minutes later, looking back he saw that the machine, though several hundred yards away, had altered its course and was now headed directly for him. His stomach tightened. Panic. The dank, musty smell of the jungle seemed for an instant to thicken and choke in his throat. Then he thought of the big ship landing in the morning, settling down slowly after a lonely two-week voyage. He thought of a brown-haired girl crowding with the others to the gangway, eager to embrace the new planet, and the next instant a charred nothing, unrecognizable, the victim of a design error or a misplaced wire in a machine. "I have to try," he said aloud. "I have to try." He moved into the blackness. Powerful as a small tank, the killer robot was equipped to crush, slash, and burn its way through undergrowth. Nevertheless, it was slowed by the larger trees and the thick, clinging vines, and Alan found that he could manage to keep ahead of it, barely out of blaster range. Only, the robot didn't get tired. Alan did. The twin moons cast pale, deceptive shadows that wavered and danced across the jungle floor, hiding debris that tripped him and often sent him sprawling into the dark. Sharp-edged growths tore at his face and clothes, and insects attracted by the blood matted against his pants and shirt. Behind, the robot crashed imperturbably after him, lighting the night with fitful blaster flashes as some winged or legged life came within its range. There was movement also, in the darkness beside him, scrapings and rustlings and an occasional low, throaty sound like an angry cat. Alan's fingers tensed on his pocket blaster. Swift shadowy forms moved quickly in the shrubs and the growling became suddenly louder. He fired twice, blindly, into the undergrowth. Sharp screams punctuated the electric blue discharge as a pack of small feline creatures leaped snarling and clawing back into the night. Mentally, Alan tried to figure the charge remaining in his blaster. There wouldn't be much. "Enough for a few more shots, maybe. Why the devil didn't I load in fresh cells this morning!" The robot crashed on, louder now, gaining on the tired human. Legs aching and bruised, stinging from insect bites, Alan tried to force himself to run holding his hands in front of him like a child in the dark. His foot tripped on a barely visible insect hill and a winged swarm exploded around him. Startled, Alan jerked sideways, crashing his head against a tree. He clutched at the bark for a second, dazed, then his knees buckled. His blaster fell into the shadows. The robot crashed loudly behind him now. Without stopping to think, Alan fumbled along the ground after his gun, straining his eyes in the darkness. He found it just a couple of feet to one side, against the base of a small bush. Just as his fingers closed upon the barrel his other hand slipped into something sticky that splashed over his forearm. He screamed in pain and leaped back, trying frantically to wipe the clinging, burning blackness off his arm. Patches of black scraped off onto branches and vines, but the rest spread slowly over his arm as agonizing as hot acid, or as flesh being ripped away layer by layer. Almost blinded by pain, whimpering, Alan stumbled forward. Sharp muscle spasms shot from his shoulder across his back and chest. Tears streamed across his cheeks. 
A blue arc slashed at the trees a mere hundred yards behind. He screamed at the blast. "Damn you, Pete! Damn your robots! Damn, damn ... Oh, Peggy!" He stepped into emptiness. Coolness. Wet. Slowly, washed by the water, the pain began to fall away. He wanted to lie there forever in the dark, cool, wetness. For ever, and ever, and ... The air thundered. In the dim light he could see the banks of the stream, higher than a man, muddy and loose. Growing right to the edge of the banks, the jungle reached out with hairy, disjointed arms as if to snag even the dirty little stream that passed so timidly through its domain. Alan, lying in the mud of the stream bed, felt the earth shake as the heavy little robot rolled slowly and inexorably towards him. "The Lord High Executioner," he thought, "in battle dress." He tried to stand but his legs were almost too weak and his arm felt numb. "I'll drown him," he said aloud. "I'll drown the Lord High Executioner." He laughed. Then his mind cleared. He remembered where he was. Alan trembled. For the first time in his life he understood what it was to live, because for the first time he realized that he would sometime die. In other times and circumstances he might put it off for a while, for months or years, but eventually, as now, he would have to watch, still and helpless, while death came creeping. Then, at thirty, Alan became a man. "Dammit, no law says I have to flame-out now !" He forced himself to rise, forced his legs to stand, struggling painfully in the shin-deep ooze. He worked his way to the bank and began to dig frenziedly, chest high, about two feet below the edge. His arm where the black thing had been was swollen and tender, but he forced his hands to dig, dig, dig, cursing and crying to hide the pain, and biting his lips, ignoring the salty taste of blood. The soft earth crumbled under his hands until he had a small cave about three feet deep in the bank. Beyond that the soil was held too tightly by the roots from above and he had to stop. The air crackled blue and a tree crashed heavily past Alan into the stream. Above him on the bank, silhouetting against the moons, the killer robot stopped and its blaster swivelled slowly down. Frantically, Alan hugged the bank as a shaft of pure electricity arced over him, sliced into the water, and exploded in a cloud of steam. The robot shook for a second, its blaster muzzle lifted erratically and for an instant it seemed almost out of control, then it quieted and the muzzle again pointed down. Pressing with all his might, Alan slid slowly along the bank inches at a time, away from the machine above. Its muzzle turned to follow him but the edge of the bank blocked its aim. Grinding forward a couple of feet, slightly overhanging the bank, the robot fired again. For a split second Alan seemed engulfed in flame; the heat of hell singed his head and back, and mud boiled in the bank by his arm. Again the robot trembled. It jerked forward a foot and its blaster swung slightly away. But only for a moment. Then the gun swung back again. Suddenly, as if sensing something wrong, its tracks slammed into reverse. It stood poised for a second, its treads spinning crazily as the earth collapsed underneath it, where Alan had dug, then it fell with a heavy splash into the mud, ten feet from where Alan stood. 
Without hesitation Alan threw himself across the blaster housing, frantically locking his arms around the barrel as the robot's treads churned furiously in the sticky mud, causing it to buck and plunge like a Brahma bull. The treads stopped and the blaster jerked upwards wrenching Alan's arms, then slammed down. Then the whole housing whirled around and around, tilting alternately up and down like a steel-skinned water monster trying to dislodge a tenacious crab, while Alan, arms and legs wrapped tightly around the blaster barrel and housing, pressed fiercely against the robot's metal skin. Slowly, trying to anticipate and shift his weight with the spinning plunges, Alan worked his hand down to his right hip. He fumbled for the sheath clipped to his belt, found it, and extracted a stubby hunting knife. Sweat and blood in his eyes, hardly able to move on the wildly swinging turret, he felt down the sides to the thin crack between the revolving housing and the stationary portion of the robot. With a quick prayer he jammed in the knife blade—and was whipped headlong into the mud as the turret literally snapped to a stop. The earth, jungle and moons spun in a pinwheeled blur, slowed, and settled to their proper places. Standing in the sticky, sweet-smelling ooze, Alan eyed the robot apprehensively. Half buried in mud, it stood quiet in the shadowy light except for an occasional, almost spasmodic jerk of its blaster barrel. For the first time that night Alan allowed himself a slight smile. "A blade in the old gear box, eh? How does that feel, boy?" He turned. "Well, I'd better get out of here before the knife slips or the monster cooks up some more tricks with whatever it's got for a brain." Digging little footholds in the soft bank, he climbed up and stood once again in the rustling jungle darkness. "I wonder," he thought, "how Pete could cram enough brain into one of those things to make it hunt and track so perfectly." He tried to visualize the computing circuits needed for the operation of its tracking mechanism alone. "There just isn't room for the electronics. You'd need a computer as big as the one at camp headquarters." In the distance the sky blazed as a blaster roared in the jungle. Then Alan heard the approaching robot, crunching and snapping its way through the undergrowth like an onrushing forest fire. He froze. "Good Lord! They communicate with each other! The one I jammed must be calling others to help." He began to move along the bank, away from the crashing sounds. Suddenly he stopped, his eyes widened. "Of course! Radio! I'll bet anything they're automatically controlled by the camp computer. That's where their brain is!" He paused. "Then, if that were put out of commission ..." He jerked away from the bank and half ran, half pulled himself through the undergrowth towards the camp. Trees exploded to his left as another robot fired in his direction, too far away to be effective but churning towards him through the blackness. Alan changed direction slightly to follow a line between the two robots coming up from either side, behind him. His eyes were well accustomed to the dark now, and he managed to dodge most of the shadowy vines and branches before they could snag or trip him. Even so, he stumbled in the wiry underbrush and his legs were a mass of stinging slashes from ankle to thigh. 
The crashing rumble of the killer robots shook the night behind him, nearer sometimes, then falling slightly back, but following constantly, more unshakable than bloodhounds because a man can sometimes cover a scent, but no man can stop his thoughts. Intermittently, like photographers' strobes, blue flashes would light the jungle about him. Then, for seconds afterwards his eyes would see dancing streaks of yellow and sharp multi-colored pinwheels that alternately shrunk and expanded as if in a surrealist's nightmare. Alan would have to pause and squeeze his eyelids tight shut before he could see again, and the robots would move a little closer. To his right the trees silhouetted briefly against brilliance as a third robot slowly moved up in the distance. Without thinking, Alan turned slightly to the left, then froze in momentary panic. "I should be at the camp now. Damn, what direction am I going?" He tried to think back, to visualize the twists and turns he'd taken in the jungle. "All I need is to get lost." He pictured the camp computer with no one to stop it, automatically sending its robots in wider and wider forays, slowly wiping every trace of life from the planet. Technologically advanced machines doing the job for which they were built, completely, thoroughly, without feeling, and without human masters to separate sense from futility. Finally parts would wear out, circuits would short, and one by one the killers would crunch to a halt. A few birds would still fly then, but a unique animal life, rare in the universe, would exist no more. And the bones of children, eager girls, and their men would also lie, beside a rusty hulk, beneath the alien sun. "Peggy!" As if in answer, a tree beside him breathed fire, then exploded. In the brief flash of the blaster shot, Alan saw the steel glint of a robot only a hundred yards away, much nearer than he had thought. "Thank heaven for trees!" He stepped back, felt his foot catch in something, clutched futilely at some leaves and fell heavily. Pain danced up his leg as he grabbed his ankle. Quickly he felt the throbbing flesh. "Damn the rotten luck, anyway!" He blinked the pain tears from his eyes and looked up—into a robot's blaster, jutting out of the foliage, thirty yards away. Instinctively, in one motion Alan grabbed his pocket blaster and fired. To his amazement the robot jerked back, its gun wobbled and started to tilt away. Then, getting itself under control, it swung back again to face Alan. He fired again, and again the robot reacted. It seemed familiar somehow. Then he remembered the robot on the river bank, jiggling and swaying for seconds after each shot. "Of course!" He cursed himself for missing the obvious. "The blaster static blanks out radio transmission from the computer for a few seconds. They even do it to themselves!" Firing intermittently, he pulled himself upright and hobbled ahead through the bush. The robot shook spasmodically with each shot, its gun tilted upward at an awkward angle. Then, unexpectedly, Alan saw stars, real stars brilliant in the night sky, and half dragging his swelling leg he stumbled out of the jungle into the camp clearing. Ahead, across fifty yards of grass stood the headquarters building, housing the robot-controlling computer. Still firing at short intervals he started across the clearing, gritting his teeth at every step. 
Straining every muscle in spite of the agonizing pain, Alan forced himself to a limping run across the uneven ground, carefully avoiding the insect hills that jutted up through the grass. From the corner of his eye he saw another of the robots standing shakily in the dark edge of the jungle waiting, it seemed, for his small blaster to run dry. "Be damned! You can't win now!" Alan yelled between blaster shots, almost irrational from the pain that ripped jaggedly through his leg. Then it happened. A few feet from the building's door his blaster quit. A click. A faint hiss when he frantically jerked the trigger again and again, and the spent cells released themselves from the device, falling in the grass at his feet. He dropped the useless gun. "No!" He threw himself on the ground as a new robot suddenly appeared around the edge of the building a few feet away, aimed, and fired. Air burned over Alan's back and ozone tingled in his nostrils. Blinding itself for a few seconds with its own blaster static, the robot paused momentarily, jiggling in place. In this instant, Alan jammed his hands into an insect hill and hurled the pile of dirt and insects directly at the robot's antenna. In a flash, hundreds of the winged things erupted angrily from the hole in a swarming cloud, each part of which was a speck of life transmitting mental energy to the robot's pickup devices. Confused by the sudden dispersion of mind impulses, the robot fired erratically as Alan crouched and raced painfully for the door. It fired again, closer, as he fumbled with the lock release. Jagged bits of plastic and stone ripped past him, torn loose by the blast. Frantically, Alan slammed open the door as the robot, sensing him strongly now, aimed point blank. He saw nothing, his mind thought of nothing but the red-clad safety switch mounted beside the computer. Time stopped. There was nothing else in the world. He half-jumped, half-fell towards it, slowly, in tenths of seconds that seemed measured out in years. The universe went black. Later. Brilliance pressed upon his eyes. Then pain returned, a multi-hurting thing that crawled through his body and dragged ragged tentacles across his brain. He moaned. A voice spoke hollowly in the distance. "He's waking. Call his wife." Alan opened his eyes in a white room; a white light hung over his head. Beside him, looking down with a rueful smile, stood a young man wearing space medical insignia. "Yes," he acknowledged the question in Alan's eyes, "you hit the switch. That was three days ago. When you're up again we'd all like to thank you." Suddenly a sobbing-laughing green-eyed girl was pressed tightly against him. Neither of them spoke. They couldn't. There was too much to say. THE END Transcriber's Note: This etext was produced from Amazing Science Fiction Stories October 1958. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note. | C. Alan uses his pocket blaster to disrupt the signal. |
How did Giles feel about family in the beginning?
A. he cared less about them as time wore on
B. Harry reminded him too much of his ex-wife
C. he liked some of his children, but not all of them
D. he loved them but didn't want to travel for ninety years
| The Dwindling Years He didn’t expect to be last—but neither did he anticipate the horror of being the first! By LESTER DEL REY Illustrated by JOHNS NEARLY TWO hundred years of habit carried the chairman of Exodus Corporation through the morning ritual of crossing the executive floor. Giles made the expected comments, smiled the proper smiles and greeted his staff by the right names, but it was purely automatic. Somehow, thinking had grown difficult in the mornings recently. Inside his private office, he dropped all pretense and slumped into the padding of his chair, gasping for breath and feeling his heart hammering in his chest. He’d been a fool to come to work, he realized. But with the Procyon shuttle arriving yesterday, there was no telling what might turn up. Besides, that fool of a medicist had sworn the shot would cure any allergy or asthma. Giles heard his secretary come in, but it wasn’t until the smell of the coffee reached his nose that he looked up. She handed him a filled cup and set the carafe down on the age-polished surface of the big desk. She watched solicitously as he drank. “That bad, Arthur?” she asked. “Just a little tired,” he told her, refilling the cup. She’d made the coffee stronger than usual and it seemed to cut through some of the thickness in his head. “I guess I’m getting old, Amanda.” She smiled dutifully at the time-worn joke, but he knew she wasn’t fooled. She’d cycled to middle age four times in her job and she probably knew him better than he knew himself—which wouldn’t be hard, he thought. He’d hardly recognized the stranger in the mirror as he tried to shave. His normal thinness had looked almost gaunt and there were hollows in his face and circles under his eyes. Even his hair had seemed thinner, though that, of course, was impossible. “Anything urgent on the Procyon shuttle?” he asked as she continue staring at him with worried eyes. SHE JERKED her gaze away guiltily and turned to the incoming basket. “Mostly drugs for experimenting. A personal letter for you, relayed from some place I never heard of. And one of the super-light missiles! They found it drifting half a light-year out and captured it. Jordan’s got a report on it and he’s going crazy. But if you don’t feel well—” “I’m all right!” he told her sharply. Then he steadied himself and managed to smile. “Thanks for the coffee, Amanda.” She accepted dismissal reluctantly. When she was gone, he sat gazing at the report from Jordan at Research. For eighty years now, they’d been sending out the little ships that vanished at greater than the speed of light, equipped with every conceivable device to make them return automatically after taking pictures of wherever they arrived. So far, none had ever returned or been located. This was the first hope they’d found that the century-long trips between stars in the ponderous shuttles might be ended and he should have been filled with excitement at Jordan’s hasty preliminary report. He leafed through it. The little ship apparently had been picked up by accident when it almost collided with a Sirius-local ship. Scientists there had puzzled over it, reset it and sent it back. The two white rats on it had still been alive. Giles dropped the report wearily and picked up the personal message that had come on the shuttle. He fingered the microstrip inside while he drank another coffee, and finally pulled out the microviewer. There were three frames to the message, he saw with some surprise. He didn’t need to see the signature on the first projection. 
Only his youngest son would have sent an elaborate tercentenary greeting verse—one that would arrive ninety years too late! Harry had been born just before Earth passed the drastic birth limitation act and his mother had spoiled him. He’d even tried to avoid the compulsory emigration draft and stay on with his mother. It had been the bitter quarrels over that which had finally broken Giles’ fifth marriage. Oddly enough, the message in the next frame showed none of that. Harry had nothing but praise for the solar system where he’d been sent. He barely mentioned being married on the way or his dozen children, but filled most of the frame with glowing description and a plea for his father to join him there! GILES SNORTED and turned to the third frame, which showed a group picture of the family in some sort of vehicle, against the background of an alien but attractive world. He had no desire to spend ninety years cooped up with a bunch of callow young emigrants, even in one of the improved Exodus shuttles. And even if Exodus ever got the super-light drive working, there was no reason he should give up his work. The discovery that men could live practically forever had put an end to most family ties; sentiment wore thin in half a century—which wasn’t much time now, though it had once seemed long enough. Strange how the years seemed to get shorter as their number increased. There’d been a song once—something about the years dwindling down. He groped for the lines and couldn’t remember. Drat it! Now he’d probably lie awake most of the night again, trying to recall them. The outside line buzzed musically, flashing Research’s number. Giles grunted in irritation. He wasn’t ready to face Jordan yet. But he shrugged and pressed the button. The intense face that looked from the screen was frowning as Jordan’s eyes seemed to sweep around the room. He was still young—one of the few under a hundred who’d escaped deportation because of special ability—and patience was still foreign to him. Then the frown vanished as an expression of shock replaced it, and Giles felt a sinking sensation. If he looked that bad— But Jordan wasn’t looking at him; the man’s interest lay in the projected picture from Harry, across the desk from the communicator. “Antigravity!” His voice was unbelieving as he turned his head to face the older man. “What world is that?” Giles forced his attention on the picture again and this time he noticed the vehicle shown. It was enough like an old model Earth conveyance to pass casual inspection, but it floated wheellessly above the ground. Faint blur lines indicated it had been moving when the picture was taken. “One of my sons—” Giles started to answer. “I could find the star’s designation....” Jordan cursed harshly. “So we can send a message on the shuttle, begging for their secret in a couple of hundred years! While a hundred other worlds make a thousand major discoveries they don’t bother reporting! Can’t the Council see anything ?” Giles had heard it all before. Earth was becoming a backwater world; no real progress had been made in two centuries; the young men were sent out as soon as their first fifty years of education were finished, and the older men were too conservative for really new thinking. There was a measure of truth in it, unfortunately. “They’ll slow up when their populations fill,” Giles repeated his old answers. “We’re still ahead in medicine and we’ll get the other discoveries eventually, without interrupting the work of making the Earth fit for our longevity. 
We can wait. We’ll have to.” THE YOUNGER man stared at him with the strange puzzled look Giles had seen too often lately. “Damn it, haven’t you read my report? We know the super-light drive works! That missile reached Sirius in less than ten days. We can have the secret of this antigravity in less than a year! We—” “Wait a minute.” Giles felt the thickness pushing back at his mind and tried to fight it off. He’d only skimmed the report, but this made no sense. “You mean you can calibrate your guiding devices accurately enough to get a missile where you want it and back?” “ What? ” Jordan’s voice rattled the speaker. “Of course not! It took two accidents to get the thing back to us—and with a half-light-year miss that delayed it about twenty years before the Procyon shuttle heard its signal. Pre-setting a course may take centuries, if we can ever master it. Even with Sirius expecting the missiles and ready to cooperate. I mean the big ship. We’ve had it drafted for building long enough; now we can finish it in three months. We know the drive works. We know it’s fast enough to reach Procyon in two weeks. We even know life can stand the trip. The rats were unharmed.” Giles shook his head at what the other was proposing, only partly believing it. “Rats don’t have minds that could show any real damage such as the loss of power to rejuvenate. We can’t put human pilots into a ship with our drive until we’ve tested it more thoroughly, Bill, even if they could correct for errors on arrival. Maybe if we put in stronger signaling transmitters....” “Yeah. Maybe in two centuries we’d have a through route charted to Sirius. And we still wouldn’t have proved it safe for human pilots. Mr. Giles, we’ve got to have the big ship. All we need is one volunteer!” It occurred to Giles then that the man had been too fired with the idea to think. He leaned back, shaking his head again wearily. “All right, Bill. Find me one volunteer. Or how about you? Do you really want to risk losing the rest of your life rather than waiting a couple more centuries until we know it’s safe? If you do, I’ll order the big ship.” Jordan opened his mouth and for a second Giles’ heart caught in a flux of emotions as the man’s offer hovered on his lips. Then the engineer shut his mouth slowly. The belligerence ran out of him. He looked sick, for he had no answer. NO SANE man would risk a chance for near eternity against such a relatively short wait. Heroism had belonged to those who knew their days were numbered, anyhow. “Forget it, Bill,” Giles advised. “It may take longer, but eventually we’ll find a way. With time enough, we’re bound to. And when we do, the ship will be ready.” The engineer nodded miserably and clicked off. Giles turned from the blank screen to stare out of the windows, while his hand came up to twist at the lock of hair over his forehead. Eternity! They had to plan and build for it. They couldn’t risk that plan for short-term benefits. Usually it was too easy to realize that, and the sight of the solid, time-enduring buildings outside should have given him a sense of security. Today, though, nothing seemed to help. He felt choked, imprisoned, somehow lost; the city beyond the window blurred as he studied it, and he swung the chair back so violently that his hand jerked painfully on the forelock he’d been twisting. Then he was staring unbelievingly at the single white hair that was twisted with the dark ones between his fingers. 
Like an automaton, he bent forward, his other hand groping for the mirror that should be in one of the drawers. The dull pain in his chest sharpened and his breath was hoarse in his throat, but he hardly noticed as he found the mirror and brought it up. His eyes focused reluctantly. There were other white strands in his dark hair. The mirror crashed to the floor as he staggered out of the office. It was only two blocks to Giles’ residence club, but he had to stop twice to catch his breath and fight against the pain that clawed at his chest. When he reached the wood-paneled lobby, he was barely able to stand. Dubbins was at his side almost at once, with a hand under his arm to guide him toward his suite. “Let me help you, sir,” Dubbins suggested, in the tones Giles hadn’t heard since the man had been his valet, back when it was still possible to find personal servants. Now he managed the club on a level of quasi-equality with the members. For the moment, though, he’d slipped back into the old ways. GILES FOUND himself lying on his couch, partially undressed, with the pillows just right and a long drink in his hand. The alcohol combined with the reaction from his panic to leave him almost himself again. After all, there was nothing to worry about; Earth’s doctors could cure anything. “I guess you’d better call Dr. Vincenti,” he decided. Vincenti was a member and would probably be the quickest to get. Dubbins shook his head. “Dr. Vincenti isn’t with us, sir. He left a year ago to visit a son in the Centauri system. There’s a Dr. Cobb whose reputation is very good, sir.” Giles puzzled over it doubtfully. Vincenti had been an oddly morose man the last few times he’d seen him, but that could hardly explain his taking a twenty-year shuttle trip for such a slim reason. It was no concern of his, though. “Dr. Cobb, then,” he said. Giles heard the other man’s voice on the study phone, too low for the words to be distinguishable. He finished the drink, feeling still better, and was sitting up when Dubbins came back. “Dr. Cobb wants you to come to his office at once, sir,” he said, dropping to his knee to help Giles with his shoes. “I’d be pleased to drive you there.” Giles frowned. He’d expected Cobb to come to him. Then he grimaced at his own thoughts. Dubbins’ manners must have carried him back into the past; doctors didn’t go in for home visits now—they preferred to see their patients in the laboratories that housed their offices. If this kept on, he’d be missing the old days when he’d had a mansion and counted his wealth in possessions, instead of the treasures he could build inside himself for the future ahead. He was getting positively childish! Yet he relished the feeling of having Dubbins drive his car. More than anything else, he’d loved being driven. Even after chauffeurs were a thing of the past, Harry had driven him around. Now he’d taken to walking, as so many others had, for even with modern safety measures so strict, there was always a small chance of some accident and nobody had any desire to spend the long future as a cripple. “I’ll wait for you, sir,” Dubbins offered as they stopped beside the low, massive medical building. It was almost too much consideration. Giles nodded, got out and headed down the hall uncertainly. Just how bad did he look? Well, he’d soon find out. He located the directory and finally found the right office, its reception room wall covered with all the degrees Dr. Cobb had picked up in some three hundred years of practice. 
Giles felt better, realizing it wouldn’t be one of the younger men. COBB APPEARED himself, before the nurse could take over, and led Giles into a room with an old-fashioned desk and chairs that almost concealed the cabinets of equipment beyond. He listened as Giles stumbled out his story. Halfway through, the nurse took a blood sample with one of the little mosquito needles and the machinery behind the doctor began working on it. “Your friend told me about the gray hair, of course,” Cobb said. At Giles’ look, he smiled faintly. “Surely you didn’t think people could miss that in this day and age? Let’s see it.” He inspected it and began making tests. Some were older than Giles could remember—knee reflex, blood pressure, pulse and fluoroscope. Others involved complicated little gadgets that ran over his body, while meters bobbed and wiggled. The blood check came through and Cobb studied it, to go back and make further inspections of his own. At last he nodded slowly. “Hyper-catabolism, of course. I thought it might be. How long since you had your last rejuvenation? And who gave it?” “About ten years ago,” Giles answered. He found his identity card and passed it over, while the doctor studied it. “My sixteenth.” It wasn’t going right. He could feel it. Some of the panic symptoms were returning; the pulse in his neck was pounding and his breath was growing difficult. Sweat ran down his sides from his armpit and he wiped his palms against his coat. “Any particular emotional strain when you were treated—some major upset in your life?” Cobb asked. Giles thought as carefully as he could, but he remembered nothing like that. “You mean—it didn’t take? But I never had any trouble, Doctor. I was one of the first million cases, when a lot of people couldn’t rejuvenate at all, and I had no trouble even then.” Cobb considered it, hesitated as if making up his mind to be frank against his better judgment. “I can’t see any other explanation. You’ve got a slight case of angina—nothing serious, but quite definite—as well as other signs of aging. I’m afraid the treatment didn’t take fully. It might have been some unconscious block on your part, some infection not diagnosed at the time, or even a fault in the treatment. That’s pretty rare, but we can’t neglect the possibility.” HE STUDIED his charts again and then smiled. “So we’ll give you another treatment. Any reason you can’t begin immediately?” Giles remembered that Dubbins was waiting for him, but this was more important. It hadn’t been a joke about his growing old, after all. But now, in a few days, he’d be his old—no, of course not—his young self again! They went down the hall to another office, where Giles waited outside while Cobb conferred with another doctor and technician, with much waving of charts. He resented every second of it. It was as if the almost forgotten specter of age stood beside him, counting the seconds. But at last they were through and he was led into the quiet rejuvenation room, where the clamps were adjusted about his head and the earpieces were fitted. The drugs were shot painlessly into his arm and the light-pulser was adjusted to his brain-wave pattern. It had been nothing like this his first time. Then it had required months of mental training, followed by crude mechanical and drug hypnosis for other months. Somewhere in every human brain lay the memory of what his cells had been like when he was young. Or perhaps it lay in the cells themselves, with the brain as only a linkage to it. 
They’d discovered that, and the fact that the mind could effect physical changes in the body. Even such things as cancer could be willed out of existence—provided the brain could be reached far below the conscious level and forced to operate. There had been impossible faith cures for millenia—cataracts removed from blinded eyes within minutes, even—but finding the mechanism in the brain that worked those miracles had taken an incredible amount of study and finding a means of bringing it under control had taken even longer. Now they did it with dozens of mechanical aids in addition to the hypnotic instructions—and did it usually in a single sitting, with the full transformation of the body taking less than a week after the treatment! But with all the equipment, it wasn’t impossible for a mistake to happen. It had been no fault of his ... he was sure of that ... his mind was easy to reach ... he could relax so easily.... He came out of it without even a headache, while they were removing the probes, but the fatigue on the operator’s face told him it had been a long and difficult job. He stretched experimentally, with the eternal unconscious expectation that he would find himself suddenly young again. But that, of course, was ridiculous. It took days for the mind to work on all the cells and to repair the damage of time. COBB LED him back to the first office, where he was given an injection of some kind and another sample of his blood was taken, while the earlier tests were repeated. But finally the doctor nodded. “That’s all for now, Mr. Giles. You might drop in tomorrow morning, after I’ve had a chance to complete my study of all this. We’ll know by then whether you’ll need more treatment. Ten o’clock okay?” “But I’ll be all right?” Cobb smiled the automatic reassurance of his profession. “We haven’t lost a patient in two hundred years, to my knowledge.” “Thanks,” said Giles. “Ten o’clock is fine.” Dubbins was still waiting, reading a paper whose headlined feature carried a glowing account of the discovery of the super-light missile and what it might mean. He took a quick look at Giles and pointed to it. “Great work, Mr. Giles. Maybe we’ll all get to see some of those other worlds yet.” Then he studied Giles more carefully. “Everything’s in good shape now, sir?” “The doctor says everything’s going to be fine,” Giles answered. It was then he realized for the first time that Cobb had said no such thing. A statement that lightning had never struck a house was no guarantee that it never would. It was an evasion meant to give such an impression. The worry nagged at him all the way back. Word had already gone around the club that he’d had some kind of attack and there were endless questions that kept it on his mind. And even when it had been covered and recovered, he could still sense the glances of the others, as if he were Vincenti in one of the man’s more morose moods. He found a single table in the dining room and picked his way through the meal, listening to the conversation about him only when it was necessary because someone called across to him. Ordinarily, he was quick to support the idea of clubs in place of private families. A man here could choose his group and grow into them. Yet he wasn’t swallowed by them, as he might be by a family. Giles had been living here for nearly a century now and he’d never regretted it. But tonight his own group irritated him. He puzzled over it, finding no real reason. Certainly they weren’t forcing themselves on him. 
He remembered once when he’d had a cold, before they finally licked that; Harry had been a complete nuisance, running around with various nostrums, giving him no peace. Constant questions about how he felt, constant little looks of worry—until he’d been ready to yell at the boy. In fact, he had. Funny, he couldn’t picture really losing his temper here. Families did odd things to a man. HE LISTENED to a few of the discussions after the dinner, but he’d heard them all before, except for one about the super-speed drive, and there he had no wish to talk until he could study the final report. He gave up at last and went to his own suite. What he needed was a good night’s sleep after a little relaxation. Even that failed him, though. He’d developed one of the finest chess collections in the world, but tonight it held no interest. And when he drew out his tools and tried working on the delicate, lovely jade for the set he was carving his hands seemed to be all thumbs. None of the other interests he’d developed through the years helped to add to the richness of living now. He gave it up and went to bed—to have the fragment of that song pop into his head. Now there was no escaping it. Something about the years—or was it days—dwindling down to something or other. Could they really dwindle down? Suppose he couldn’t rejuvenate all the way? He knew that there were some people who didn’t respond as well as others. Sol Graves, for instance. He’d been fifty when he finally learned how to work with the doctors and they could only bring him back to about thirty, instead of the normal early twenties. Would that reduce the slice of eternity that rejuvenation meant? And what had happened to Sol? Or suppose it wasn’t rejuvenation, after all; suppose something had gone wrong with him permanently? He fought that off, but he couldn’t escape the nagging doubts at the doctor’s words. He got up once to stare at himself in the mirror. Ten hours had gone by and there should have been some signs of improvement. He couldn’t be sure, though, whether there were or not. He looked no better the next morning when he finally dragged himself up from the little sleep he’d managed to get. The hollows were still there and the circles under his eyes. He searched for the gray in his hair, but the traitorous strands had been removed at the doctor’s office and he could find no new ones. He looked into the dining room and then went by hastily. He wanted no solicitous glances this morning. Drat it, maybe he should move out. Maybe trying family life again would give him some new interests. Amanda probably would be willing to marry him; she’d hinted at a date once. He stopped, shocked by the awareness that he hadn’t been out with a woman for.... He couldn’t remember how long it had been. Nor why. “In the spring, a young man’s fancy,” he quoted to himself, and then shuddered. It hadn’t been that kind of spring for him—not this rejuvenation nor the last, nor the one before that. GILES TRIED to stop scaring himself and partially succeeded, until he reached the doctor’s office. Then it was no longer necessary to frighten himself. The wrongness was too strong, no matter how professional Cobb’s smile! He didn’t hear the preliminary words. He watched the smile vanish as the stack of reports came out. There was no nurse here now. The machines were quiet—and all the doors were shut. Giles shook his head, interrupting the doctor’s technical jargon. 
Now that he knew there was reason for his fear, it seemed to vanish, leaving a coldness that numbed him. “I’d rather know the whole truth,” he said. His voice sounded dead in his ears. “The worst first. The rejuvenation...?” Cobb sighed and yet seemed relieved. “Failed.” He stopped, and his hands touched the reports on his desk. “Completely,” he added in a low, defeated tone. “But I thought that was impossible!” “So did I. I wouldn’t believe it even yet—but now I find it isn’t the first case. I spent the night at Medical Center going up the ranks until I found men who really know about it. And now I wish I hadn’t.” His voice ran down and he gathered himself together by an effort. “It’s a shock to me, too, Mr. Giles. But—well, to simplify it, no memory is perfect—even cellular memory. It loses a little each time. And the effect is cumulative. It’s like an asymptotic curve—the further it goes, the steeper the curve. And—well, you’ve passed too far.” He faced away from Giles, dropping the reports into a drawer and locking it. “I wasn’t supposed to tell you, of course. It’s going to be tough enough when they’re ready to let people know. But you aren’t the first and you won’t be the last, if that’s any consolation. We’ve got a longer time scale than we used to have—but it’s in centuries, not in eons. For everybody, not just you.” It was no consolation. Giles nodded mechanically. “I won’t talk, of course. How—how long?” Cobb spread his hands unhappily. “Thirty years, maybe. But we can make them better. Geriatric knowledge is still on record. We can fix the heart and all the rest. You’ll be in good physical condition, better than your grandfather—” “And then....” Giles couldn’t pronounce the words. He’d grown old and he’d grow older. And eventually he’d die! An immortal man had suddenly found death hovering on his trail. The years had dwindled and gone, and only a few were left. He stood up, holding out his hand. “Thank you, Doctor,” he said, and was surprised to find he meant it. The man had done all he could and had at least saved him the suspense of growing doubt and horrible eventual discovery. OUTSIDE ON the street, he looked up at the Sun and then at the buildings built to last for thousands of years. Their eternity was no longer a part of him. Even his car would outlast him. He climbed into it, still partly numbed, and began driving mechanically, no longer wondering about the dangers that might possibly arise. Those wouldn’t matter much now. For a man who had thought of living almost forever, thirty years was too short a time to count. He was passing near the club and started to slow. Then he went on without stopping. He wanted no chance to have them asking questions he couldn’t answer. It was none of their business. Dubbins had been kind—but now Giles wanted no kindness. The street led to the office and he drove on. What else was there for him? There, at least, he could still fill his time with work—work that might even be useful. In the future, men would need the super-light drive if they were to span much more of the Universe than now. And he could speed up the work in some ways still, even if he could never see its finish. It would be cold comfort but it was something. And he might keep busy enough to forget sometimes that the years were gone for him. Automatic habit carried him through the office again, to Amanda’s desk, where her worry was still riding her. He managed a grin and somehow the right words came to his lips. 
“I saw the doctor, Amanda, so you can stop figuring ways to get me there.” She smiled back suddenly, without feigning it. “Then you’re all right?” “As all right as I’ll ever be,” he told her. “They tell me I’m just growing old.” This time her laugh was heartier. He caught himself before he could echo her mirth in a different voice and went inside where she had the coffee waiting for him. Oddly, it still tasted good to him. The projection was off, he saw, wondering whether he’d left it on or not. He snapped the switch and saw the screen light up, with the people still in the odd, wheelless vehicle on the alien planet. FOR A long moment, he stared at the picture without thinking, and then bent closer. Harry’s face hadn’t changed much. Giles had almost forgotten it, but there was still the same grin there. And his grandchildren had a touch of it, too. And of their grandfather’s nose, he thought. Funny, he’d never seen even pictures of his other grandchildren. Family ties melted away too fast for interstellar travel. Yet there seemed to be no slackening of them in Harry’s case, and somehow it looked like a family, rather than a mere group. A very pleasant family in a very pleasant world. He read Harry’s note again, with its praise for the planet and its invitation. He wondered if Dr. Vincenti had received an invitation like that, before he left. Or had he even been one of those to whom the same report had been delivered by some doctor? It didn’t matter, but it would explain things, at least. Twenty years to Centaurus, while the years dwindled down— Then abruptly the line finished itself. “The years dwindle down to a precious few....” he remembered. “A precious few.” Those dwindling years had been precious once. He unexpectedly recalled his own grandfather holding him on an old knee and slipping him candy that was forbidden. The years seemed precious to the old man then. Amanda’s voice came abruptly over the intercom. “Jordan wants to talk to you,” she said, and the irritation was sharp in her voice. “He won’t take no!” Giles shrugged and reached for the projector, to cut it off. Then, on impulse, he set it back to the picture, studying the group again as he switched on Jordan’s wire. But he didn’t wait for the hot words about whatever was the trouble. “Bill,” he said, “start getting the big ship into production. I’ve found a volunteer.” He’d been driven to it, he knew, as he watched the man’s amazed face snap from the screen. From the first suspicion of his trouble, something inside him had been forcing him to make this decision. And maybe it would do no good. Maybe the ship would fail. But thirty years was a number a man could risk. If he made it, though.... Well, he’d see those grandchildren of his this year—and Harry. Maybe he’d even tell Harry the truth, once they got done celebrating the reunion. And there’d be other grandchildren. With the ship, he’d have time enough to look them up. Plenty of time! Thirty years was a long time, when he stopped to think of it. —LESTER DEL REY | A. he cared less about them as time wore on |
What likely happens to Ledman after the story ends?
A. He is given new legs and can start a new life
B. He will rejoin the search for uranium
C. Even with his wheelchair he must receive mental health treatment
D. He will undergo physical and mental health care before starting over
| THE HUNTED HEROES By ROBERT SILVERBERG The planet itself was tough enough—barren, desolate, forbidding; enough to stop the most adventurous and dedicated. But they had to run head-on against a mad genius who had a motto: Death to all Terrans! "Let's keep moving," I told Val. "The surest way to die out here on Mars is to give up." I reached over and turned up the pressure on her oxymask to make things a little easier for her. Through the glassite of the mask, I could see her face contorted in an agony of fatigue. And she probably thought the failure of the sandcat was all my fault, too. Val's usually about the best wife a guy could ask for, but when she wants to be she can be a real flying bother. It was beyond her to see that some grease monkey back at the Dome was at fault—whoever it was who had failed to fasten down the engine hood. Nothing but what had stopped us could stop a sandcat: sand in the delicate mechanism of the atomic engine. But no; she blamed it all on me somehow: So we were out walking on the spongy sand of the Martian desert. We'd been walking a good eight hours. "Can't we turn back now, Ron?" Val pleaded. "Maybe there isn't any uranium in this sector at all. I think we're crazy to keep on searching out here!" I started to tell her that the UranCo chief had assured me we'd hit something out this way, but changed my mind. When Val's tired and overwrought there's no sense in arguing with her. I stared ahead at the bleak, desolate wastes of the Martian landscape. Behind us somewhere was the comfort of the Dome, ahead nothing but the mazes and gullies of this dead world. He was a cripple in a wheelchair—helpless as a rattlesnake. "Try to keep going, Val." My gloved hand reached out and clumsily enfolded hers. "Come on, kid. Remember—we're doing this for Earth. We're heroes." She glared at me. "Heroes, hell!" she muttered. "That's the way it looked back home, but, out there it doesn't seem so glorious. And UranCo's pay is stinking." "We didn't come out here for the pay, Val." "I know, I know, but just the same—" It must have been hell for her. We had wandered fruitlessly over the red sands all day, both of us listening for the clicks of the counter. And the geigers had been obstinately hushed all day, except for their constant undercurrent of meaningless noises. Even though the Martian gravity was only a fraction of Earth's, I was starting to tire, and I knew it must have been really rough on Val with her lovely but unrugged legs. "Heroes," she said bitterly. "We're not heroes—we're suckers! Why did I ever let you volunteer for the Geig Corps and drag me along?" Which wasn't anywhere close to the truth. Now I knew she was at the breaking point, because Val didn't lie unless she was so exhausted she didn't know what she was doing. She had been just as much inflamed by the idea of coming to Mars to help in the search for uranium as I was. We knew the pay was poor, but we had felt it a sort of obligation, something we could do as individuals to keep the industries of radioactives-starved Earth going. And we'd always had a roving foot, both of us. No, we had decided together to come to Mars—the way we decided together on everything. Now she was turning against me. I tried to jolly her. "Buck up, kid," I said. I didn't dare turn up her oxy pressure any higher, but it was obvious she couldn't keep going. She was almost sleep-walking now. We pressed on over the barren terrain. 
The geiger kept up a fairly steady click-pattern, but never broke into that sudden explosive tumult that meant we had found pay-dirt. I started to feel tired myself, terribly tired. I longed to lie down on the soft, spongy Martian sand and bury myself. I looked at Val. She was dragging along with her eyes half-shut. I felt almost guilty for having dragged her out to Mars, until I recalled that I hadn't. In fact, she had come up with the idea before I did. I wished there was some way of turning the weary, bedraggled girl at my side back into the Val who had so enthusiastically suggested we join the Geigs. Twelve steps later, I decided this was about as far as we could go. I stopped, slipped out of the geiger harness, and lowered myself ponderously to the ground. "What'samatter, Ron?" Val asked sleepily. "Something wrong?" "No, baby," I said, putting out a hand and taking hers. "I think we ought to rest a little before we go any further. It's been a long, hard day." It didn't take much to persuade her. She slid down beside me, curled up, and in a moment she was fast asleep, sprawled out on the sands. Poor kid , I thought. Maybe we shouldn't have come to Mars after all. But, I reminded myself, someone had to do the job. A second thought appeared, but I squelched it: Why the hell me? I looked down at Valerie's sleeping form, and thought of our warm, comfortable little home on Earth. It wasn't much, but people in love don't need very fancy surroundings. I watched her, sleeping peacefully, a wayward lock of her soft blonde hair trailing down over one eyebrow, and it seemed hard to believe that we'd exchanged Earth and all it held for us for the raw, untamed struggle that was Mars. But I knew I'd do it again, if I had the chance. It's because we wanted to keep what we had. Heroes? Hell, no. We just liked our comforts, and wanted to keep them. Which took a little work. Time to get moving. But then Val stirred and rolled over in her sleep, and I didn't have the heart to wake her. I sat there, holding her, staring out over the desert, watching the wind whip the sand up into weird shapes. The Geig Corps preferred married couples, working in teams. That's what had finally decided it for us—we were a good team. We had no ties on Earth that couldn't be broken without much difficulty. So we volunteered. And here we are. Heroes. The wind blasted a mass of sand into my face, and I felt it tinkle against the oxymask. I glanced at the suit-chronometer. Getting late. I decided once again to wake Val. But she was tired. And I was tired too, tired from our wearying journey across the empty desert. I started to shake Val. But I never finished. It would be so nice just to lean back and nuzzle up to her, down in the sand. So nice. I yawned, and stretched back. I awoke with a sudden startled shiver, and realized angrily I had let myself doze off. "Come on, Val," I said savagely, and started to rise to my feet. I couldn't. I looked down. I was neatly bound in thin, tough, plastic tangle-cord, swathed from chin to boot-bottoms, my arms imprisoned, my feet caught. And tangle-cord is about as easy to get out of as a spider's web is for a trapped fly. It wasn't Martians that had done it. There weren't any Martians, hadn't been for a million years. It was some Earthman who had bound us. I rolled my eyes toward Val, and saw that she was similarly trussed in the sticky stuff. The tangle-cord was still fresh, giving off a faint, repugnant odor like that of drying fish. 
It had been spun on us only a short time ago, I realized. "Ron—" "Don't try to move, baby. This stuff can break your neck if you twist it wrong." She continued for a moment to struggle futilely, and I had to snap, "Lie still, Val!" "A very wise statement," said a brittle, harsh voice from above me. I looked up and saw a helmeted figure above us. He wasn't wearing the customary skin-tight pliable oxysuits we had. He wore an outmoded, bulky spacesuit and a fishbowl helmet, all but the face area opaque. The oxygen cannisters weren't attached to his back as expected, though. They were strapped to the back of the wheelchair in which he sat. Through the fishbowl I could see hard little eyes, a yellowed, parchment-like face, a grim-set jaw. I didn't recognize him, and this struck me odd. I thought I knew everyone on sparsely-settled Mars. Somehow I'd missed him. What shocked me most was that he had no legs. The spacesuit ended neatly at the thighs. He was holding in his left hand the tanglegun with which he had entrapped us, and a very efficient-looking blaster was in his right. "I didn't want to disturb your sleep," he said coldly. "So I've been waiting here for you to wake up." I could just see it. He might have been sitting there for hours, complacently waiting to see how we'd wake up. That was when I realized he must be totally insane. I could feel my stomach-muscles tighten, my throat constrict painfully. Then anger ripped through me, washing away the terror. "What's going on?" I demanded, staring at the half of a man who confronted us from the wheelchair. "Who are you?" "You'll find out soon enough," he said. "Suppose now you come with me." He reached for the tanglegun, flipped the little switch on its side to MELT, and shot a stream of watery fluid over our legs, keeping the blaster trained on us all the while. Our legs were free. "You may get up now," he said. "Slowly, without trying to make trouble." Val and I helped each other to our feet as best we could, considering our arms were still tightly bound against the sides of our oxysuits. "Walk," the stranger said, waving the tanglegun to indicate the direction. "I'll be right behind you." He holstered the tanglegun. I glimpsed the bulk of an outboard atomic rigging behind him, strapped to the back of the wheelchair. He fingered a knob on the arm of the chair and the two exhaust ducts behind the wheel-housings flamed for a moment, and the chair began to roll. Obediently, we started walking. You don't argue with a blaster, even if the man pointing it is in a wheelchair. "What's going on, Ron?" Val asked in a low voice as we walked. Behind us the wheelchair hissed steadily. "I don't quite know, Val. I've never seen this guy before, and I thought I knew everyone at the Dome." "Quiet up there!" our captor called, and we stopped talking. We trudged along together, with him following behind; I could hear the crunch-crunch of the wheelchair as its wheels chewed into the sand. I wondered where we were going, and why. I wondered why we had ever left Earth. The answer to that came to me quick enough: we had to. Earth needed radioactives, and the only way to get them was to get out and look. The great atomic wars of the late 20th Century had used up much of the supply, but the amount used to blow up half the great cities of the world hardly compared with the amount we needed to put them back together again. In three centuries the shattered world had been completely rebuilt. 
The wreckage of New York and Shanghai and London and all the other ruined cities had been hidden by a shining new world of gleaming towers and flying roadways. We had profited by our grandparents' mistakes. They had used their atomics to make bombs. We used ours for fuel. It was an atomic world. Everything: power drills, printing presses, typewriters, can openers, ocean liners, powered by the inexhaustible energy of the dividing atom. But though the energy is inexhaustible, the supply of nuclei isn't. After three centuries of heavy consumption, the supply failed. The mighty machine that was Earth's industry had started to slow down. And that started the chain of events that led Val and me to end up as a madman's prisoners, on Mars. With every source of uranium mined dry on Earth, we had tried other possibilities. All sorts of schemes came forth. Project Sea-Dredge was trying to get uranium from the oceans. In forty or fifty years, they'd get some results, we hoped. But there wasn't forty or fifty years' worth of raw stuff to tide us over until then. In a decade or so, our power would be just about gone. I could picture the sort of dog-eat-dog world we'd revert back to. Millions of starving, freezing humans tooth-and-clawing in it in the useless shell of a great atomic civilization. So, Mars. There's not much uranium on Mars, and it's not easy to find or any cinch to mine. But what little is there, helps. It's a stopgap effort, just to keep things moving until Project Sea-Dredge starts functioning. Enter the Geig Corps: volunteers out on the face of Mars, combing for its uranium deposits. And here we are, I thought. After we walked on a while, a Dome became visible up ahead. It slid up over the crest of a hill, set back between two hummocks on the desert. Just out of the way enough to escape observation. For a puzzled moment I thought it was our Dome, the settlement where all of UranCo's Geig Corps were located, but another look told me that this was actually quite near us and fairly small. A one-man Dome, of all things! "Welcome to my home," he said. "The name is Gregory Ledman." He herded us off to one side of the airlock, uttered a few words keyed to his voice, and motioned us inside when the door slid up. When we were inside he reached up, clumsily holding the blaster, and unscrewed the ancient spacesuit fishbowl. His face was a bitter, dried-up mask. He was a man who hated. The place was spartanly furnished. No chairs, no tape-player, no decoration of any sort. Hard bulkhead walls, rivet-studded, glared back at us. He had an automatic chef, a bed, and a writing-desk, and no other furniture. Suddenly he drew the tanglegun and sprayed our legs again. We toppled heavily to the floor. I looked up angrily. "I imagine you want to know the whole story," he said. "The others did, too." Valerie looked at me anxiously. Her pretty face was a dead white behind her oxymask. "What others?" "I never bothered to find out their names," Ledman said casually. "They were other Geigs I caught unawares, like you, out on the desert. That's the only sport I have left—Geig-hunting. Look out there." He gestured through the translucent skin of the Dome, and I felt sick. There was a little heap of bones lying there, looking oddly bright against the redness of the sands. They were the dried, parched skeletons of Earthmen. Bits of cloth and plastic, once oxymasks and suits, still clung to them. Suddenly I remembered. There had been a pattern there all the time. 
We didn't much talk about it; we chalked it off as occupational hazards. There had been a pattern of disappearances on the desert. I could think of six, eight names now. None of them had been particularly close friends. You don't get time to make close friends out here. But we'd vowed it wouldn't happen to us. It had. "You've been hunting Geigs?" I asked. " Why? What've they ever done to you?" He smiled, as calmly as if I'd just praised his house-keeping. "Because I hate you," he said blandly. "I intend to wipe every last one of you out, one by one." I stared at him. I'd never seen a man like this before; I thought all his kind had died at the time of the atomic wars. I heard Val sob, "He's a madman!" "No," Ledman said evenly. "I'm quite sane, believe me. But I'm determined to drive the Geigs—and UranCo—off Mars. Eventually I'll scare you all away." "Just pick us off in the desert?" "Exactly," replied Ledman. "And I have no fears of an armed attack. This place is well fortified. I've devoted years to building it. And I'm back against those hills. They couldn't pry me out." He let his pale hand run up into his gnarled hair. "I've devoted years to this. Ever since—ever since I landed here on Mars." "What are you going to do with us?" Val finally asked, after a long silence. He didn't smile this time. "Kill you," he told her. "Not your husband. I want him as an envoy, to go back and tell the others to clear off." He rocked back and forth in his wheelchair, toying with the gleaming, deadly blaster in his hand. We stared in horror. It was a nightmare—sitting there, placidly rocking back and forth, a nightmare. I found myself fervently wishing I was back out there on the infinitely safer desert. "Do I shock you?" he asked. "I shouldn't—not when you see my motives." "We don't see them," I snapped. "Well, let me show you. You're on Mars hunting uranium, right? To mine and ship the radioactives back to Earth to keep the atomic engines going. Right?" I nodded over at our geiger counters. "We volunteered to come to Mars," Val said irrelevantly. "Ah—two young heroes," Ledman said acidly. "How sad. I could almost feel sorry for you. Almost." "Just what is it you're after?" I said, stalling, stalling. "Atomics cost me my legs," he said. "You remember the Sadlerville Blast?" he asked. "Of course." And I did, too. I'd never forget it. No one would. How could I forget that great accident—killing hundreds, injuring thousands more, sterilizing forty miles of Mississippi land—when the Sadlerville pile went up? "I was there on business at the time," Ledman said. "I represented Ledman Atomics. I was there to sign a new contract for my company. You know who I am, now?" I nodded. "I was fairly well shielded when it happened. I never got the contract, but I got a good dose of radiation instead. Not enough to kill me," he said. "Just enough to necessitate the removal of—" he indicated the empty space at his thighs. "So I got off lightly." He gestured at the wheelchair blanket. I still didn't understand. "But why kill us Geigs? We had nothing to do with it." "You're just in this by accident," he said. "You see, after the explosion and the amputation, my fellow-members on the board of Ledman Atomics decided that a semi-basket case like myself was a poor risk as Head of the Board, and they took my company away. All quite legal, I assure you. They left me almost a pauper!" Then he snapped the punchline at me. "They renamed Ledman Atomics. Who did you say you worked for?" I began, "Uran—" "Don't bother. 
A more inventive title than Ledman Atomics, but not quite as much heart, wouldn't you say?" He grinned. "I saved for years; then I came to Mars, lost myself, built this Dome, and swore to get even. There's not a great deal of uranium on this planet, but enough to keep me in a style to which, unfortunately, I'm no longer accustomed." He consulted his wrist watch. "Time for my injection." He pulled out the tanglegun and sprayed us again, just to make doubly certain. "That's another little souvenir of Sadlerville. I'm short on red blood corpuscles." He rolled over to a wall table and fumbled in a container among a pile of hypodermics. "There are other injections, too. Adrenalin, insulin. Others. The Blast turned me into a walking pin-cushion. But I'll pay it all back," he said. He plunged the needle into his arm. My eyes widened. It was too nightmarish to be real. I wasn't seriously worried about his threat to wipe out the entire Geig Corps, since it was unlikely that one man in a wheelchair could pick us all off. No, it wasn't the threat that disturbed me, so much as the whole concept, so strange to me, that the human mind could be as warped and twisted as Ledman's. I saw the horror on Val's face, and I knew she felt the same way I did. "Do you really think you can succeed?" I taunted him. "Really think you can kill every Earthman on Mars? Of all the insane, cockeyed—" Val's quick, worried head-shake cut me off. But Ledman had felt my words, all right. "Yes! I'll get even with every one of you for taking away my legs! If we hadn't meddled with the atom in the first place, I'd be as tall and powerful as you, today—instead of a useless cripple in a wheelchair." "You're sick, Gregory Ledman," Val said quietly. "You've conceived an impossible scheme of revenge and now you're taking it out on innocent people who've done nothing, nothing at all to you. That's not sane!" His eyes blazed. "Who are you to talk of sanity?" Uneasily I caught Val's glance from a corner of my eye. Sweat was rolling down her smooth forehead faster than the auto-wiper could swab it away. "Why don't you do something? What are you waiting for, Ron?" "Easy, baby," I said. I knew what our ace in the hole was. But I had to get Ledman within reach of me first. "Enough," he said. "I'm going to turn you loose outside, right after—" " Get sick! " I hissed to Val, low. She began immediately to cough violently, emitting harsh, choking sobs. "Can't breathe!" She began to yell, writhing in her bonds. That did it. Ledman hadn't much humanity left in him, but there was a little. He lowered the blaster a bit and wheeled one-hand over to see what was wrong with Val. She continued to retch and moan most horribly. It almost convinced me. I saw Val's pale, frightened face turn to me. He approached and peered down at her. He opened his mouth to say something, and at that moment I snapped my leg up hard, tearing the tangle-cord with a snicking rasp, and kicked his wheelchair over. The blaster went off, burning a hole through the Dome roof. The automatic sealers glued-in instantly. Ledman went sprawling helplessly out into the middle of the floor, the wheelchair upended next to him, its wheels slowly revolving in the air. The blaster flew from his hands at the impact of landing and spun out near me. In one quick motion I rolled over and covered it with my body. Ledman clawed his way to me with tremendous effort and tried wildly to pry the blaster out from under me, but without success. 
I twisted a bit, reached out with my free leg, and booted him across the floor. He fetched up against the wall of the Dome and lay there. Val rolled over to me. "Now if I could get free of this stuff," I said, "I could get him covered before he comes to. But how?" "Teamwork," Val said. She swivelled around on the floor until her head was near my boot. "Push my oxymask off with your foot, if you can." I searched for the clamp and tried to flip it. No luck, with my heavy, clumsy boot. I tried again, and this time it snapped open. I got the tip of my boot in and pried upward. The oxymask came off, slowly, scraping a jagged red scratch up the side of Val's neck as it came. "There," she breathed. "That's that." I looked uneasily at Ledman. He was groaning and beginning to stir. Val rolled on the floor and her face lay near my right arm. I saw what she had in mind. She began to nibble the vile-tasting tangle-cord, running her teeth up and down it until it started to give. She continued unfailingly. Finally one strand snapped. Then another. At last I had enough use of my hand to reach out and grasp the blaster. Then I pulled myself across the floor to Ledman, removed the tanglegun, and melted the remaining tangle-cord off. My muscles were stiff and bunched, and rising made me wince. I turned and freed Val. Then I turned and faced Ledman. "I suppose you'll kill me now," he said. "No. That's the difference between sane people and insane," I told him. "I'm not going to kill you at all. I'm going to see to it that you're sent back to Earth." " No! " he shouted. "No! Anything but back there. I don't want to face them again—not after what they did to me—" "Not so loud," I broke in. "They'll help you on Earth. They'll take all the hatred and sickness out of you, and turn you into a useful member of society again." "I hate Earthmen," he spat out. "I hate all of them." "I know," I said sarcastically. "You're just all full of hate. You hated us so much that you couldn't bear to hang around on Earth for as much as a year after the Sadlerville Blast. You had to take right off for Mars without a moment's delay, didn't you? You hated Earth so much you had to leave." "Why are you telling all this to me?" "Because if you'd stayed long enough, you'd have used some of your pension money to buy yourself a pair of prosthetic legs, and then you wouldn't need this wheelchair." Ledman scowled, and then his face went belligerent again. "They told me I was paralyzed below the waist. That I'd never walk again, even with prosthetic legs, because I had no muscles to fit them to." "You left Earth too quickly," Val said. "It was the only way," he protested. "I had to get off—" "She's right," I told him. "The atom can take away, but it can give as well. Soon after you left they developed atomic-powered prosthetics—amazing things, virtually robot legs. All the survivors of the Sadlerville Blast were given the necessary replacement limbs free of charge. All except you. You were so sick you had to get away from the world you despised and come here." "You're lying," he said. "It's not true!" "Oh, but it is," Val smiled. I saw him wilt visibly, and for a moment I almost felt sorry for him, a pathetic legless figure propped up against the wall of the Dome at blaster-point. But then I remembered he'd killed twelve Geigs—or more—and would have added Val to the number had he had the chance. "You're a very sick man, Ledman," I said. 
"All this time you could have been happy, useful on Earth, instead of being holed up here nursing your hatred. You might have been useful, on Earth. But you decided to channel everything out as revenge." "I still don't believe it—those legs. I might have walked again. No—no, it's all a lie. They told me I'd never walk," he said, weakly but stubbornly still. I could see his whole structure of hate starting to topple, and I decided to give it the final push. "Haven't you wondered how I managed to break the tangle-cord when I kicked you over?" "Yes—human legs aren't strong enough to break tangle-cord that way." "Of course not," I said. I gave Val the blaster and slipped out of my oxysuit. "Look," I said. I pointed to my smooth, gleaming metal legs. The almost soundless purr of their motors was the only noise in the room. "I was in the Sadlerville Blast, too," I said. "But I didn't go crazy with hate when I lost my legs." Ledman was sobbing. "Okay, Ledman," I said. Val got him into his suit, and brought him the fishbowl helmet. "Get your helmet on and let's go. Between the psychs and the prosthetics men, you'll be a new man inside of a year." "But I'm a murderer!" "That's right. And you'll be sentenced to psych adjustment. When they're finished, Gregory Ledman the killer will be as dead as if they'd electrocuted you, but there'll be a new—and sane—Gregory Ledman." I turned to Val. "Got the geigers, honey?" For the first time since Ledman had caught us, I remembered how tired Val had been out on the desert. I realized now that I had been driving her mercilessly—me, with my chromium legs and atomic-powered muscles. No wonder she was ready to fold! And I'd been too dense to see how unfair I had been. She lifted the geiger harnesses, and I put Ledman back in his wheelchair. Val slipped her oxymask back on and fastened it shut. "Let's get back to the Dome in a hurry," I said. "We'll turn Ledman over to the authorities. Then we can catch the next ship for Earth." "Go back? Go back? If you think I'm backing down now and quitting you can find yourself another wife! After we dump this guy I'm sacking in for twenty hours, and then we're going back out there to finish that search-pattern. Earth needs uranium, honey, and I know you'd never be happy quitting in the middle like that." She smiled. "I can't wait to get out there and start listening for those tell-tale clicks." I gave a joyful whoop and swung her around. When I put her down, she squeezed my hand, hard. "Let's get moving, fellow hero," she said. I pressed the stud for the airlock, smiling. THE END Transcriber's Note: This etext was produced from Amazing Stories September 1956. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note. | D. He will undergo physical and mental health care before starting over |
How big is the dataset used for fine-tuning the model for detection of red flag medical symptoms in individual statements? | ### Introduction
The spread of influenza is a major health concern. Without appropriate preventative measures, this can escalate to an epidemic, causing high levels of mortality. A potential route to early detection is to analyse statements on social media platforms to identify individuals who have reported experiencing symptoms of the illness. These numbers can be used as a proxy to monitor the spread of the virus. Since disease does not respect cultural borders and may spread between populations speaking different languages, we would like to build models for several languages without going through the difficult, expensive and time-consuming process of generating task-specific labelled data for each language. In this paper we explore ways of taking data and models generated in one language and transferring to other languages for which there is little or no data. ### Related Work
Previously, authors have created multilingual models which should allow transfer between languages by aligning models BIBREF0 or embedding spaces BIBREF1, BIBREF2. An alternative is translation of a high-resource language into the target low-resource language; for instance, BIBREF3 combined translation with subsequent selective correction by active learning of uncertain words and phrases believed to describe entities, to create a labelled dataset for named entity recognition. ### MedWeb Dataset
We use the MedWeb (“Medical Natural Language Processing for Web Document”) dataset BIBREF4 that was provided as part of a subtask at the NTCIR-13 Conference BIBREF5. The data is summarised in Table TABREF1. There are a total of 2,560 pseudo-tweets in three different languages: Japanese (ja), English (en) and Chinese (zh). These were created in Japanese and then manually translated into English and Chinese (see Figure FIGREF2). Each pseudo-tweet is labelled with a subset of the following 8 labels: influenza, diarrhoea/stomach ache, hay fever, cough/sore throat, headache, fever, runny nose, and cold. A positive label is assigned if the author (or someone they live with) has the symptom in question. As such it is more than a named entity recognition task, as can be seen in pseudo-tweet #3 in Figure FIGREF2 where the term “flu” is mentioned but the label is negative. ### Methods ::: Bidirectional Encoder Representations from Transformers (BERT):
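To make the MedWeb labelling scheme just described concrete, a pseudo-tweet can be represented as a text string paired with an eight-dimensional binary label vector; the example below is invented for illustration and is not taken from the dataset.

```python
LABELS = ["influenza", "diarrhoea/stomach ache", "hay fever", "cough/sore throat",
          "headache", "fever", "runny nose", "cold"]

# Hypothetical pseudo-tweet: the author reports a cough and a fever
pseudo_tweet = {"text": "Been coughing all night and I'm running a fever.",
                "labels": [0, 0, 0, 1, 0, 1, 0, 0]}

positives = [name for name, y in zip(LABELS, pseudo_tweet["labels"]) if y]
print(positives)  # ['cough/sore throat', 'fever']
```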
The BERT model BIBREF6 base version is a 12-layer Transformer model trained on two self-supervised tasks using a large corpus of text. In the first (denoising autoencoding) task, the model must map input sentences with some words replaced with a special “MASK” token back to the original unmasked sentences. In the second (binary classification) task, the model is given two sentences and must predict whether or not the second sentence immediately follows the first in the corpus. The output of the final Transformer layer is passed through a logistic output layer for classification. We have used the original (English) BERT-base, trained on Wikipedia and books corpus BIBREF7, and a Japanese BERT (jBERT) BIBREF8 trained on Japanese Wikipedia. The original BERT model and jBERT use a standard sentence piece tokeniser with roughly 30,000 tokens. ### Methods ::: Multilingual BERT:
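A minimal sketch of the single-language BERT classifier just described, assuming the HuggingFace transformers and PyTorch libraries (the paper does not name an implementation): the pooled [CLS] representation of the final layer is passed through a sigmoid output layer over the eight symptom labels.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class SymptomClassifier(nn.Module):
    """BERT-base encoder with a logistic (sigmoid) output layer for 8 symptom labels."""
    def __init__(self, model_name="bert-base-uncased", n_labels=8):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.out = nn.Linear(self.bert.config.hidden_size, n_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # pooler_output is the final-layer [CLS] vector after a tanh projection
        return torch.sigmoid(self.out(hidden.pooler_output))

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = SymptomClassifier()
enc = tokenizer("I can't stop coughing and my head hurts.",
                return_tensors="pt", padding=True, truncation=True)
probs = model(enc["input_ids"], enc["attention_mask"])  # shape (1, 8), one probability per label
```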
Multilingual BERT (mBERT) is a BERT model simultaneously trained on Wikipedia in 100 different languages. It makes use of a shared sentence piece tokeniser with roughly 100,000 tokens trained on the same data. This model provides state-of-the-art zero-shot transfer results on natural language inference and part-of-speech tagging tasks BIBREF9. ### Methods ::: Translation:
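The key property of mBERT noted above is its single shared sentence-piece vocabulary; a small check of this, again assuming the HuggingFace transformers library, is to tokenise pseudo-tweets from different languages with the same tokenizer.

```python
from transformers import BertTokenizer

# One shared sentence-piece tokenizer (~100k tokens) covers every language
mtok = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

# Parallel pseudo-tweets quoted elsewhere in the paper
print(mtok.tokenize("Allergy season is so exhausting."))
print(mtok.tokenize("花粉症の時期はすごい疲れる。"))
```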
We use two publicly available machine translation systems to provide two possible translations for each original sentence: Google's neural translation system BIBREF10 via Google Cloud, and Amazon Translate. We experiment using the translations singly and together. ### Methods ::: Training procedure:
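A sketch of obtaining the two candidate translations described above, assuming the google-cloud-translate and boto3 client libraries; both require valid credentials, and the exact client interfaces shown here are an assumption rather than the paper's own code.

```python
# Both clients are assumptions about the vendor SDKs and need valid credentials.
from google.cloud import translate_v2 as gcloud_translate
import boto3

def two_translations(text, source="en", target="ja"):
    google_client = gcloud_translate.Client()
    amazon_client = boto3.client("translate")
    g = google_client.translate(text, source_language=source, target_language=target)
    a = amazon_client.translate_text(Text=text, SourceLanguageCode=source,
                                     TargetLanguageCode=target)
    return g["translatedText"], a["TranslatedText"]
```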
Models are trained for 20 epochs, using the Adam optimiser BIBREF11 and a cyclical learning rate BIBREF12 varied linearly between $5 \times 10^{-6}$ and $3 \times 10^{-5}$. ### Experiments
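This training procedure can be sketched in PyTorch as follows (the framework is an assumption); the placeholder linear model and random batches stand in for the BERT classifier and the real data loader, and cycle_momentum is disabled because Adam exposes no momentum parameter for the scheduler to cycle.

```python
import torch
from torch import nn

model = nn.Linear(768, 8)  # placeholder standing in for the BERT classifier
optimizer = torch.optim.Adam(model.parameters(), lr=5e-6)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=5e-6, max_lr=3e-5,
    mode="triangular", cycle_momentum=False)  # Adam has no momentum to cycle
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):
    for _ in range(100):  # stand-in for iterating over real training batches
        x = torch.randn(16, 768)
        y = torch.randint(0, 2, (16, 8)).float()
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()  # stepping per batch lets the rate cycle during training
```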
Using the multilingual BERT model, we run three experiments as described below. The “exact match” metric from the original MedWeb challenge is reported, which means that all labels must be predicted correctly for a given pseudo-tweet to be considered correct; macro-averaged F1 is also reported. Each experiment is run 5 times (with different random seeds) and the mean performance is shown in Table TABREF11. Our experiments are focused around using Japanese as the low-resource target language, with English and Chinese as the more readily available source languages. ### Experiments ::: Baselines
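The two reported metrics can be computed as below, assuming NumPy and scikit-learn; exact match requires all eight labels of a pseudo-tweet to be predicted correctly, while macro-F1 averages the per-label F1 scores.

```python
import numpy as np
from sklearn.metrics import f1_score

def evaluate(y_true, y_pred):
    """y_true, y_pred: (n_tweets, 8) arrays of 0/1 labels."""
    exact_match = np.all(y_true == y_pred, axis=1).mean()
    macro_f1 = f1_score(y_true, y_pred, average="macro")
    return exact_match, macro_f1

# Toy check with three pseudo-tweets: two predicted perfectly, one not
y_true = np.array([[1, 0, 0, 1, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0, 0, 0, 0],
                   [0, 1, 0, 0, 0, 0, 1, 0]])
y_pred = np.array([[1, 0, 0, 1, 0, 0, 0, 0],
                   [0, 0, 0, 0, 1, 0, 0, 0],
                   [0, 1, 0, 0, 0, 0, 1, 0]])
print(evaluate(y_true, y_pred))  # exact match = 2/3; macro-F1 averages per-label F1
```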
To establish a target for our transfer techniques we train and test models on a single language, i.e. English to English, Japanese to Japanese, and Chinese to Chinese. For English we use the uncased base-BERT, for Japanese we use jBERT, and for Chinese we use mBERT (since there is no Chinese-specific model available in the public domain). This last choice seems reasonable since mBERT performed similarly to the single-language models when trained and tested on the same language. For comparison, we show the results of BIBREF13 who created the most successful model for the MedWeb challenge. Their final system was an ensemble of 120 trained models, using two architectures: a hierarchical attention network and a convolutional neural network. They exploited the fact that parallel data is available in three languages by ensuring consistency between outputs of the models in each language, giving a final exact match score of 0.880. However, for the purpose of demonstrating language transfer we report their highest single-model scores to show that our single-language models are competitive with the released results. We also show results for a majority class classifier (predicting all negative labels, see Table TABREF1) and a random classifier that uses the label frequencies from the training set to randomly predict labels. ### Experiments ::: Zero-shot transfer with multilingual pre-training
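The two trivial baselines described above can be sketched as follows (an assumed implementation, not taken from the paper): the majority-class baseline predicts all labels negative, and the random baseline samples each label according to its frequency in the training set.

```python
import numpy as np

def majority_baseline(n_examples, n_labels=8):
    # All labels negative: the most frequent outcome for every symptom
    return np.zeros((n_examples, n_labels), dtype=int)

def random_baseline(train_labels, n_examples, seed=0):
    # Sample each label independently with its frequency in the training set
    rng = np.random.default_rng(seed)
    p = train_labels.mean(axis=0)
    return (rng.random((n_examples, train_labels.shape[1])) < p).astype(int)
```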
Our first experiment investigates the zero-shot transfer ability of multilingual BERT. If mBERT has learned a shared embedding space for all languages, we would expect that if the model is fine-tuned on the English training dataset, then it should be applicable also to the Japanese dataset. To test this we have run this with both the English and Chinese training data, results are shown in Table TABREF11. We ran additional experiments where we froze layers within BERT, but observed no improvement. The results indicate poor transfer, especially between English and Japanese. To investigate why the model does not perform well, we visualise the output vectors of mBERT using t-SNE BIBREF14 in Figure FIGREF14. We can see that the language representations occupy separate parts of the representation space, with only small amounts of overlap. Further, no clear correlation can be observed between sentence pairs. The better transfer between Chinese and Japanese likely reflects the fact that these languages share tokens; one of the Japanese alphabets (the Kanji logographic alphabet) consists of Chinese characters. There is 21% vocabulary overlap for the training data and 19% for the test data, whereas there is no token overlap between English and Japanese. Our finding is consistent with previous claims that token overlap impacts mBERT's transfer capability BIBREF9. ### Experiments ::: Training on machine translated data
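The visualisation in Figure FIGREF14 can be reproduced roughly as follows, assuming scikit-learn and NumPy: each sentence's mBERT token vectors are max-pooled, reduced to 50 dimensions with PCA and then projected to two dimensions with t-SNE.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def project_sentence_vectors(token_vectors):
    """token_vectors: list of (n_tokens_i, 768) arrays, one per sentence."""
    pooled = np.stack([v.max(axis=0) for v in token_vectors])   # max-pool over tokens
    reduced = PCA(n_components=50).fit_transform(pooled)        # 768 -> 50 dimensions
    return TSNE(n_components=2).fit_transform(reduced)          # 50 -> 2 dimensions

# Toy stand-in for mBERT final-layer outputs of 100 sentences
toy = [np.random.randn(np.random.randint(5, 30), 768) for _ in range(100)]
points = project_sentence_vectors(toy)  # (100, 2) coordinates ready for plotting
```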
Our second experiment investigates the use of machine translated data for training a model. We train on the machine translated source data and test on the target test set. Results are shown in Table TABREF11. Augmenting the data by using two sets of translations rather than one proves beneficial. In the end, the difference between training on real Japanese and training on translations from English is around 9% while training on translations from Chinese is around 4%. ### Experiments ::: Mixing translated data with original data
Whilst the results for translated data are promising, we would like to bridge the gap to the performance of the original target data. Our premise is that we start with a fixed-size dataset in the source language, and we have a limited annotation budget to manually translate a proportion of this data into the target language. For this experiment we mix all the translated data with different portions of original Japanese data, varying the amount between 1% and 100%. The results of these experiments are shown in Figure FIGREF17. Using the translated data with just 10% of the original Japanese data, we close the gap by half; with 50% we match the single-language model, and with 100% we appear to achieve even a small improvement (for English), likely through the data augmentation provided by the translations. ### Discussion and Conclusions
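A sketch of the mixing experiment described above, under the assumption that the translated set and the original Japanese training set are plain lists of examples; the specific proportions listed below are illustrative rather than the paper's exact grid.

```python
import random

def mixed_training_sets(translated, original_ja,
                        proportions=(0.01, 0.05, 0.1, 0.25, 0.5, 1.0), seed=0):
    """Yield (proportion, training_set) pairs: all translated data plus a
    growing share of the original Japanese data."""
    rng = random.Random(seed)
    shuffled = original_ja[:]
    rng.shuffle(shuffled)
    for p in proportions:
        k = int(round(p * len(shuffled)))
        yield p, translated + shuffled[:k]
```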
Zero-shot transfer using multilingual BERT performs poorly when transferring to Japanese on the MedWeb data. However, training on machine translations gives promising performance, and this performance can be increased by adding small amounts of original target data. On inspection, the drop in performance between translated and original Japanese was often a result of translations that were reasonable but not consistent with the labels. For example, when translating the first example in Figure FIGREF2, both machine translations map “風邪”, which means cold (the illness), into “寒さ”, which means cold (low temperature). Another example is where the Japanese pseudo-tweet “花粉症の時期はすごい疲れる。” was provided alongside an English pseudo-tweet “Allergy season is so exhausting.”. Here, the Japanese word for hay fever “花粉症。” has been manually mapped to the less specific word “allergies” in English; the machine translation maps back to Japanese using the word for “allergies” i.e. “アレルギー” in the katakana alphabet (katakana is used to express words derived from foreign languages), since there is no kanji character for the concept of allergies. In future work, it would be interesting to understand how to detect such ambiguities in order to best deploy our annotation budget. Table 1: MedWeb dataset overview statistics. Table 2: Overall results, given as mean (standard deviation) of 5 runs, for different training/test data pairs. The leading results on the original challenge are shown as baselines for benchmarking purposes. EN - English, JA - Japanese, ZH - Chinese, TJA - Translated Japanese. Figure 2: Max-pooled output of mBERT final layer (before fine tuning), reduced using principal component analysis (to reduce from 768 to 50 dimensions) followed by t-SNE (to project into 2 dimensions). 20 sentence triplets are linked to give an idea of the mapping between languages. Figure 3: Exact match accuracy when training on different proportions of the original Japanese training set, with or without either the original English data or the translated data. The pink and orange dashed lines show the accuracy of the full set of translated Japanese data (from English and Chinese respectively) and the blue dashed line shows the accuracy of the full original Japanese data. | a total of 2,560 pseudo-tweets in three different languages: Japanese (ja), English (en) and Chinese (zh)
How is the clinical text structuring task defined? | ### Introduction
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches. However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost. Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training dataset. Pipeline methods break down the entire process into several pieces which improves the performance and generality. However, when the pipeline depth grows, error propagation will have a greater impact on the performance. To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows. We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and make dataset shareable. We also propose an effective model to integrate clinical named entity information into pre-trained language model. Experimental results show that QA-CTS task leads to significant improvement due to shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we also show that two-stage training mechanism has a great improvement on QA-CTS task. The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. 
Then, we present the question answering based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations into the key issues of our proposed model. Finally, conclusions are given in Section SECREF6. ### Related Work ::: Clinical Text Structuring
Clinical text structuring is a final problem which is highly related to practical applications. Most of existing studies are case-by-case. Few of them are developed for the general purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods. Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely extremely on heuristics and handcrafted extraction rules which is more of an art than a science and incurring extensive trial-and-error experiments. Fukuda et al. BIBREF0 identified protein names from biological papers by dictionaries and several features of protein names. Wang et al. BIBREF1 developed some linguistic rules (i.e. normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique and expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. This kind of approach features its interpretability and easy modifiability. However, with the increase of the rule amount, supplementing new rules to existing system will turn to be a rule disaster. Task-specific end-to-end methods BIBREF3, BIBREF4 use large amount of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five output. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models could be used to another task due to output format difference. This makes building a new model for a new task a costly job. Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attributes extraction which mainly relied on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components like noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, sentence splitter, named entity linking BIBREF19, BIBREF20, BIBREF21, relation extraction BIBREF22, BIBREF23. This kind of method focus on language itself, so it can handle tasks more general. However, as the depth of pipeline grows, it is obvious that error propagation will be more and more serious. In contrary, using less components to decrease the pipeline depth will lead to a poor performance. So the upper limit of this method depends mainly on the worst component. ### Related Work ::: Pre-trained Language Model
Recently, some works focused on pre-trained language representation models to capture language information from text and then utilizing the information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27 which makes language model a shared model to all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language model. Peters et al. BIBREF25 proposed ELMo which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag which causes pretrain-finetune discrepancy that BERT is subject to. The main motivation of introducing pre-trained language model is to solve the shortage of labeled data and polysemy problem. Although polysemy problem is not a common phenomenon in biomedical domain, shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT on large-scale biomedical unannotated data and achieved improvement on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT into multi-type named entity recognition and discovered new entities. Both of them demonstrates the usefulness of introducing pre-trained language model into biomedical domain. ### Question Answering based Clinical Text Structuring
Given a sequence of paragraph text $X=\langle x_1, x_2, ..., x_n\rangle$, clinical text structuring (CTS) can be regarded as extracting or generating a key-value pair, where the key $Q$ is typically a query term such as proximal resection margin and the value $V$ is the result for query term $Q$ according to the paragraph text $X$.

Generally, researchers solve the CTS problem in two steps. First, the answer-related text is picked out. Then several steps, such as entity-name conversion and negation-word recognition, are applied to produce the final answer. While the final answer varies from task to task, which is what causes the non-uniform output formats, finding the answer-related text is a step common to all tasks. Traditional methods treat the two steps as a whole. In this paper, we focus on finding the answer-related substring $X_s = \langle x_i, x_{i+1}, x_{i+2}, ..., x_j\rangle$ $(1 \le i < j \le n)$ from the paragraph text $X$. For example, given the sentence “远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm" (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and the query “上切缘距离" (proximal resection margin), the answer should be “6.0cm", which is located in the original text at character indices 32 to 37. This definition unifies the output format of CTS tasks and therefore makes training data shareable across tasks, reducing the amount of training data required.

Since BERT BIBREF26 has already demonstrated the usefulness of shared models, we expect that extracting what these tasks have in common and unifying their output format will make the model more powerful than dedicated models, and will also allow data from other tasks to supplement the training data of a specific clinical task.
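To make the unified input/output format concrete, the following minimal sketch builds one QA-CTS instance from the example above; the field layout is purely illustrative and not the authors' actual data schema.

```python
# A QA-CTS instance built from the example above; field names are illustrative.
paragraph = "远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm"
query = "上切缘距离"         # proximal resection margin
start, end = 32, 37          # character-level answer span (end index exclusive)

answer = paragraph[start:end]
print(answer)                # -> 6.0cm
```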
### The Proposed Model for QA-CTS Task
In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS) task. As shown in Fig. FIGREF8, the paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named-entity information and obtain one-hot CNER output tagging sequences for the query text ($I_{nq}$) and the paragraph text ($I_{nt}$) with the BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are passed to the contextualized representation model, here the pre-trained language model BERT BIBREF26, to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated and fed into a feed-forward network to calculate the start and end index of the answer-related text. We formulate this calculation as classifying, for each word, whether it is the start or end word.
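The following sketch traces this forward pass end to end with stand-in tensors for the CNER and BERT outputs. It uses PyTorch purely for brevity (the paper's implementation is Keras/TensorFlow), and all dimensions are assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

seq_len, d_bert, d_ner = 64, 768, 221                 # assumed dimensions

v_s = torch.randn(seq_len, d_bert)                    # stand-in for BERT output V_s
i_nt = torch.randn(seq_len, d_ner)                    # stand-in CNER tags of the paragraph
i_nq = torch.randn(seq_len, d_ner)                    # stand-in CNER tags of the query

i_n = torch.cat([i_nt, i_nq], dim=-1)                 # integrate the two tag sequences
h = torch.cat([v_s, i_n], dim=-1)                     # integrate V_s and I_n (concatenation)
ffn = nn.Sequential(nn.Linear(h.shape[-1], 256), nn.ReLU(), nn.Linear(256, 2))
h_f = ffn(h)                                          # per-token start/end scores, (seq_len, 2)
o_s, o_e = torch.softmax(h_f.permute(1, 0), dim=-1)   # start / end probability distributions
start, end = o_s.argmax().item(), o_e.argmax().item() # predicted answer span
```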
### The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text
For any clinical free-text paragraph $X$ and query $Q$, the contextualized representation step generates an encoded vector for both. Here we use the pre-trained BERT-base model BIBREF26 to capture contextual information. The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For Chinese sentences, each token in this input is mapped to a pre-trained embedding $e_i$. To tell the model that $Q$ and $X$ are two different sentences, a sentence-type input is generated: a binary label sequence denoting which sentence each character in the input belongs to. A positional encoding and a mask matrix are also constructed automatically, to bring in absolute position information and to eliminate the impact of zero padding, respectively. A hidden vector $V_s$ containing both query and text information is then generated by the BERT-base model.
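For illustration, the same input layout can be produced with a standard BERT tokenizer. This is an assumed stand-in for exposition (the paper loads Google's Chinese BERT-base weights in Keras), not the authors' code.

```python
from transformers import BertTokenizer

# Build "[CLS] Q [SEP] X [SEP]" with segment ids and an attention mask.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")

query = "上切缘距离"
paragraph = "距上切端6.0cm、下切端8.0cm。"

enc = tokenizer(query, paragraph, return_tensors="pt")
# enc["input_ids"]      -> ids for "[CLS] Q [SEP] X [SEP]"
# enc["token_type_ids"] -> 0 for the query segment, 1 for the paragraph segment
# enc["attention_mask"] -> 1 for real tokens, 0 for padding
```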
### The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information
Since BERT is trained on a general corpus, its performance on the biomedical domain can be improved by introducing domain-specific features. In this paper, we introduce clinical named-entity information into the model. The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task: the model outputs a sequence of tags, one label per character of the original sentence, following a tag scheme. In this paper, we recognize entities with the model from our previous work BIBREF12, trained on a different corpus with 44 entity types, including operations, numbers, unit words, examinations, symptoms, and negation words. An illustrative example of a named-entity information sequence is shown in Table TABREF2, where “远端胃切除" (distal gastrectomy) is tagged as an operation, `11.5' is a number word, and `cm' is a unit word. The named-entity tag sequence is encoded as one-hot vectors. We denote the sequences for the clinical sentence and the query term as $I_{nt}$ and $I_{nq}$, respectively.
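As a toy sketch of this encoding, a BIEOS tag sequence can be turned into one-hot vectors as follows; the tag inventory and tokenization here are illustrative and far smaller than the 44 entity types actually used.

```python
import numpy as np

# Toy tag inventory (BIEOS scheme over a few entity types).
tags = ["O", "B-operation", "I-operation", "E-operation", "S-number", "S-unit"]
tag2id = {t: i for i, t in enumerate(tags)}

# "远端胃切除" tagged as an operation, "11.5" as a number word, "cm" as a unit word.
sentence_tags = ["B-operation", "I-operation", "I-operation", "I-operation",
                 "E-operation", "S-number", "S-unit"]

# One-hot matrix of shape (sequence length, number of tags).
i_nt = np.eye(len(tags))[[tag2id[t] for t in sentence_tags]]
```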
### The Proposed Model for QA-CTS Task ::: Integration Method
There are two ways to integrate either the two named-entity information vectors $I_{nt}$ and $I_{nq}$, or the hidden contextualized representation $V_s$ and the named-entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first is to concatenate them, since both are sequence outputs with a common dimension; the integrated representation is then $H_i = [V_s; I_n]$. The second is to transform them into a new hidden representation using multi-head attention BIBREF30, defined as $\mathrm{MultiHead}(Q, K, V) = [\mathrm{head}_1; ...; \mathrm{head}_h]W_o$ with $\mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$, where $h$ is the number of heads and $W_o$ projects the concatenated matrix back to the original dimension. $\mathrm{Attention}$ denotes the standard scaled dot-product attention, $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(QK^T/\sqrt{d_k})V$, where $d_k$ is the length of the hidden vector.
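A sketch of the attention-based integration is given below (plain concatenation already appears in the earlier pipeline sketch). The query/key/value arrangement and the projection of the named-entity vectors to BERT's width are assumptions rather than the paper's exact formulation, and PyTorch fixes the per-head size to embed_dim / num_heads, so the paper's 16 x 256 setting is not reproduced here.

```python
import torch
import torch.nn as nn

seq_len, d_bert, d_ner = 64, 768, 221
v_s = torch.randn(seq_len, 1, d_bert)        # (seq_len, batch, dim) contextualized representation
i_n = torch.randn(seq_len, 1, 2 * d_ner)     # concatenated NE info of paragraph and query

proj = nn.Linear(2 * d_ner, d_bert)          # bring NE info to BERT's width (assumed step)
mha = nn.MultiheadAttention(embed_dim=d_bert, num_heads=16)

ne = proj(i_n)
h_attn, _ = mha(query=v_s, key=ne, value=ne)  # attention-integrated representation, (seq_len, 1, d_bert)
```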
### The Proposed Model for QA-CTS Task ::: Final Prediction
The final step is to use the integrated representation $H_i$ to predict the start and end index of the answer-related text. We formulate this as classifying, for each word, whether it is the start or end word. A feed-forward network (FFN) compresses the representation and computes a score $H_f$ for each word, reducing the dimension to $\left\langle l_s, 2\right\rangle$, where $l_s$ denotes the sequence length. We then permute the two dimensions for the softmax calculation. The loss is the cross-entropy between the predicted distributions and the true start and end positions, $\mathcal{L} = \mathrm{CE}(O_s, y_s) + \mathrm{CE}(O_e, y_e)$, where $O_s = softmax(permute(H_f)_0)$ denotes the probability of each word being the start word, $O_e = softmax(permute(H_f)_1)$ denotes the probability of each word being the end word, and $y_s$ and $y_e$ denote the true start and end words, respectively.
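A minimal sketch of this start/end classification loss is shown below, with toy tensors standing in for the FFN output and the gold indices; it assumes a plain summed cross-entropy, which is one standard reading of the formulation above rather than the authors' exact code.

```python
import torch
import torch.nn.functional as F

seq_len = 64
h_f = torch.randn(seq_len, 2)                       # FFN output scores H_f
y_s, y_e = torch.tensor([10]), torch.tensor([14])   # toy gold start / end positions

scores = h_f.permute(1, 0)                          # (2, seq_len)
loss = F.cross_entropy(scores[0].unsqueeze(0), y_s) + \
       F.cross_entropy(scores[1].unsqueeze(0), y_e)
```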
### The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism
A two-stage training mechanism has previously been applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model: one is trained first for coarse-grained features while the parameters of the other are frozen; the other is then unfrozen and the entire model is trained with a low learning rate to capture fine-grained features. Inspired by this, and because of the large number of parameters in BERT, we first fine-tune the BERT model with a new prediction layer to speed up training and obtain a better contextualized representation. We then build the proposed model, load the fine-tuned BERT weights, attach the named-entity information layers, and retrain the model.
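The toy sketch below illustrates only the mechanics of the weight hand-off between the two stages; the modules are stand-ins (a Linear layer replaces BERT), the dimensions are assumptions, and the actual training loops are omitted.

```python
import torch
import torch.nn as nn

class Stage1Model(nn.Module):                 # BERT plus a plain prediction layer
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(768, 768)    # stands in for the BERT encoder
        self.head = nn.Linear(768, 2)

class Stage2Model(nn.Module):                 # full model with NE information layers
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(768, 768)    # same encoder architecture as stage 1
        self.ne_proj = nn.Linear(221, 768)    # named-entity layers (assumed dims)
        self.head = nn.Linear(768 * 2, 2)

stage1 = Stage1Model()
# ... stage-1 fine-tuning of `stage1` happens here ...

stage2 = Stage2Model()
stage2.encoder.load_state_dict(stage1.encoder.state_dict())  # reuse fine-tuned weights
# ... stage-2 training of the full model happens here ...
```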
### Experimental Studies
In this section, we experimentally evaluate our proposed task and approach. The best results in each table are shown in bold. ### Experimental Studies ::: Dataset and Evaluation Metrics
Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs were annotated and reviewed by four clinicians and cover three types of questions: tumor size, proximal resection margin and distal resection margin. The annotated instances are partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences); each instance contains one or several sentences. Detailed statistics of the different entity types are listed in Table TABREF20. In the following experiments, two widely used performance measures, the EM-score BIBREF34 and the (macro-averaged) F$_1$-score BIBREF35, are used to evaluate the methods. The Exact Match (EM) score measures the percentage of predictions that match one of the ground-truth answers exactly. The F$_1$-score is a looser metric that measures the average overlap between the prediction and the ground-truth answer.
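For reference, a minimal character-level sketch of these two metrics on a single prediction is given below, following the usual SQuAD-style definitions cited above; it is not the authors' evaluation script, and macro-averaging over examples is omitted.

```python
def exact_match(pred: str, gold: str) -> float:
    """1.0 if the prediction matches the gold answer exactly, else 0.0."""
    return float(pred.strip() == gold.strip())

def f1(pred: str, gold: str) -> float:
    """Character-overlap F1 between prediction and gold answer."""
    gold_left = list(gold)
    common = 0
    for c in pred:
        if c in gold_left:
            common += 1
            gold_left.remove(c)
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(gold)
    return 2 * precision * recall / (precision + recall)

print(exact_match("6.0cm", "6.0cm"), round(f1("距上切端6.0cm", "6.0cm"), 2))  # 1.0 0.71
```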
### Experimental Studies ::: Experimental Settings
To implement the deep neural network models, we use the Keras library BIBREF36 with the TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained with the Adam optimization algorithm BIBREF38, with default parameters except for the learning rate, which is set to $5\times 10^{-5}$. The batch size is set to 3 or 4 due to limited GPU memory. We select BERT-base as the pre-trained language model in this paper; due to the high cost of pre-training a BERT language model, we directly adopt the parameters pre-trained by Google on a general Chinese corpus. Named entity recognition is applied to both the pathology report texts and the query texts.
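A hedged sketch of this optimization setup in Keras/TensorFlow follows; `model` is a placeholder layer, not the QA-CTS network, and the loss is only an illustrative choice.

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2)])   # placeholder model
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
              loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, batch_size=4, epochs=...)    # small batch size as in the paper
```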
### Experimental Studies ::: Comparison with State-of-the-art Methods
Since BERT has already achieved state-of-the-art performance on question answering, in this section we compare our proposed model with a state-of-the-art question answering model (QANet BIBREF39) and BERT-Base BIBREF26. BERT has two versions, BERT-Base and BERT-Large; due to limited computational resources, we compare only against BERT-Base. A prediction layer is attached to the end of the original BERT-Base model, and we fine-tune it on our dataset. In this section, the named-entity integration method is pure concatenation (the named-entity information for the pathology report text and the query text is concatenated first, and the result is then concatenated with the contextualized representation). Comparative results are summarized in Table TABREF23. Table TABREF23 shows that our proposed model achieved the best performance in both EM-score and F$_1$-score, with an EM-score of 91.84% and an F$_1$-score of 93.75%. QANet outperformed BERT-Base by 3.56% in F$_1$-score but underperformed it by 0.75% in EM-score. Compared with BERT-Base, our model brought a 5.64% improvement in EM-score and a 3.69% improvement in F$_1$-score. Although our model outperformed QANet only slightly in F$_1$-score (by 0.13%), it outperformed it substantially in EM-score (by 6.39%). ### Experimental Studies ::: Ablation Analysis
To further investigate the effects of named-entity information and the two-stage training mechanism, we perform an ablation analysis to measure the improvement brought by each, where $\times$ denotes removing that part from our model. As shown in Table TABREF25, with named-entity information enabled, the two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without the two-stage training mechanism, named-entity information brought an improvement of 1.28% in EM-score but a slight deterioration of 0.12% in F$_1$-score. With both enabled, our proposed model achieved a 5.64% improvement in EM-score and a 3.69% improvement in F$_1$-score. These results show that both the named-entity information and the two-stage training mechanism contribute to our model. ### Experimental Studies ::: Comparisons Between Two Integration Methods
There are two methods for integrating named-entity information into the model, and we compare them experimentally. Since named entity recognition is applied to both the pathology report text and the query text, two integration steps occur: one integrates the two named-entity information vectors, and the other integrates the contextualized representation with the combined named-entity information. For multi-head attention BIBREF30, we set the number of heads to $h = 16$ with a 256-dimensional hidden vector for each head. From Table TABREF27, we observe that applying concatenation at both steps achieved the best performance in both EM-score and F$_1$-score. Unfortunately, applying multi-head attention at both steps did not converge in our experiments, probably because it makes the model too complex to train. The remaining two methods differ only in the order of concatenation and multi-head attention. Applying multi-head attention to the two named-entity information vectors $I_{nt}$ and $I_{nq}$ first achieved better performance, with 89.87% EM-score and 92.88% F$_1$-score, whereas applying concatenation first achieved only 80.74% EM-score and 84.42% F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size: BERT's output has passed through many layers, while the named-entity representation is very close to the raw input. With its large number of parameters, multi-head attention requires a great deal of training data to find good parameters, but our dataset is much smaller than the corpus BERT was pre-trained on. This may also explain why applying multi-head attention at both steps failed to converge. Although Table TABREF27 shows that concatenation is the best integration method, multi-head attention still has potential. Due to limited computational resources, our experiments fixed the number of heads and the hidden vector size; tuning these hyperparameters, and using larger datasets, may further improve performance. ### Experimental Studies ::: Data Integration Analysis
To investigate how the shared task and shared model can help, we split our dataset by query type, train our proposed model on different subsets, and report its performance on each subset.

First, we examine the model without two-stage training and named-entity information. As shown in Table TABREF30, the model trained on mixed data outperforms the task-specific models on two of the three original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score but remained above 90%. The shared model brought improvements of 0.69% and 0.37% in EM-score for proximal and distal resection margin prediction, while the F$_1$-score for those two tasks declined by 3.11% and 0.77%.

Next, we examine the model with two-stage training and named-entity information; in this experiment, the first-stage training uses only the task-specific dataset, not the mixed data. From Table TABREF31 we observe that proximal and distal resection margin reach the best performance in both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score, and the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Most other results also improved considerably, which again confirms the usefulness of two-stage training and named-entity information.

Lastly, we fine-tune the model for each task starting from mixed-data pre-trained parameters; Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, mixed-data pre-trained parameters significantly improve performance over parameters pre-trained on task-specific data. Except for tumor size, results improved by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin. This shows that mixed-data pre-trained parameters bring a clear benefit to a specific task. Meanwhile, performance on tasks not trained in the final stage also rose from around 0 to 60 to 70 percent, which indicates that different tasks share commonality and that the proposed QA-CTS task makes it learnable. In conclusion, to achieve the best performance on a specific dataset, the best strategy is to pre-train the model on multiple datasets and then fine-tune it on that dataset. ### Conclusion
In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and allows different datasets to be used together. A novel model is also proposed to integrate named-entity information into a pre-trained language model and adapt it to the QA-CTS task. First, the sequential named entity recognition results on the paragraph and query texts are integrated. The contextualized representation of the paragraph and query texts is produced by a pre-trained language model. The integrated named-entity information and the contextualized representation are then combined and fed into a feed-forward network for the final prediction. Experimental results on a real-world dataset demonstrate that our proposed model compares favorably with strong baseline models on all three specific tasks. The shared task and shared model introduced by QA-CTS have also proved useful for improving performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance on a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on that dataset. ### Acknowledgment
We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital), who helped us greatly with data fetching and data cleansing. This work is supported by the National Key R&D Program of China for "Precision Medical Research" (No. 2018YFC0910500).

Fig. 1. An illustrative example of QA-CTS task.
TABLE I. An illustrative example of named entity feature tags.
Fig. 2. The architecture of our proposed model for QA-CTS task.
TABLE II. Statistics of different types of question answer instances.
TABLE V. Comparative results for different integration method of our proposed model.
TABLE III. Comparative results between BERT and our proposed model.
TABLE VI. Comparative results for data integration analysis (without two-stage training and named entity information).
TABLE VII. Comparative results for data integration analysis (with two-stage training and named entity information).
TABLE VIII. Comparative results for data integration analysis (using mixed-data pre-trained parameters).