Columns: question (string, 12 to 1.77k chars), context (string, 79 to 71.7k chars), answer (string, 1 to 1.63k chars)
How many responses did they obtain?
### Introduction Crowdsourcing applications vary from basic, self-contained tasks such as image recognition or labeling BIBREF0 all the way to open-ended and creative endeavors such as collaborative writing, creative question proposal, or more general ideation BIBREF1. Yet scaling the crowd to very large sets of creative tasks may require prohibitive numbers of workers. Scalability is one of the key challenges in crowdsourcing: how to best apply the valuable but limited resources provided by crowd workers and how to help workers be as efficient as possible. Efficiency gains can be achieved either collectively at the level of the entire crowd or by helping individual workers. At the crowd level, efficiency can be gained by assigning tasks to workers in the best order BIBREF2, by filtering out poor tasks or workers, or by better incentivizing workers BIBREF3. At the individual worker level, efficiency gains can come from helping workers craft more accurate responses and complete tasks in less time. One way to make workers individually more efficient is to computationally augment their task interface with useful information. For example, an autocompletion user interface (AUI) BIBREF4, such as the one used on Google's main search page, may speed up workers as they answer questions or propose ideas. However, support for the benefits of AUIs is mixed, and existing research has not considered short, repetitive inputs such as those required by many large-scale crowdsourcing problems. More generally, it is not yet clear what the best approaches or general strategies are for achieving efficiency gains in creative crowdsourcing tasks. In this work, we conducted a randomized trial of the benefits of allowing workers to answer a text-based question with the help of an autocompletion user interface. Workers interacted with a web form that recorded how quickly they entered text into the response field and how quickly they submitted their responses after typing was completed. After the experiment concluded, we measured response diversity using textual analyses and response quality using a follow-up crowdsourcing task with an independent population of workers. Our results indicate that the AUI treatment did not affect quality, and did not help workers perform more quickly or achieve greater response consensus. Instead, workers with the AUI were significantly slower, and their responses were more diverse than those of workers in the non-AUI control group. ### Related Work An important goal of crowdsourcing research is achieving efficient scalability of the crowd to very large sets of tasks. Efficiency in crowdsourcing manifests both in receiving more effective information per worker and in making individual workers faster and/or more accurate. The former problem is a significant area of interest BIBREF5, BIBREF6, BIBREF7, while less work has been directed towards the latter. One approach to helping workers be faster at individual tasks is the application of usability studies. BIBREF8 ( BIBREF8 ) famously showed how crowd workers can perform user studies, although this work was focused on using workers as usability testers for other platforms, not on studying crowdsourcing interfaces.
More recent usability studies on the efficiency and accuracy of workers include: BIBREF9 ( BIBREF9 ), who consider the task completion times of macrotasks and microtasks and find that workers given smaller microtasks were slower but achieved higher quality than those given larger macrotasks; BIBREF10 ( BIBREF10 ), who study how the sequence of tasks given to workers and interruptions between tasks may slow workers down; and BIBREF11 ( BIBREF11 ), who study completion times for relevance judgment tasks and find that imposed time limits can improve relevance quality, but do not focus on ways to speed up workers. These studies do not test the effects of the task interface, however, as we do here. The usability feature we study here is an autocompletion user interface (AUI). AUIs are broadly familiar to online workers at this point, thanks in particular to their prominence on Google's main search bar (evolving out of the original Google Instant implementation). However, literature on the benefits of AUIs (and related word prediction and completion interfaces) in terms of improving efficiency is decidedly mixed. It is generally assumed that AUIs make users faster by saving keystrokes BIBREF12. However, there is considerable debate about whether or not such gains are countered by increased cognitive load induced by processing the given autocompletions BIBREF13. BIBREF14 ( BIBREF14 ) showed that typists can enter text more quickly with word completion and prediction interfaces than without. However, this study focused on a different input modality (an onscreen keyboard) and, more importantly, on a text transcription task: typists were asked to reproduce an existing text, not answer questions. BIBREF4 ( BIBREF4 ) showed that medical typists saved keystrokes when using an autocompletion interface to input standardized medical terms. However, they did not consider the elapsed times required by these users, instead focusing on response times of the AUI suggestions, and so it is unclear if the users were actually faster with the AUI. There is some evidence that long-term use of an AUI can lead to improved speed and not just keystroke savings BIBREF15, but it is not clear how general such learning may be, and whether or not it is relevant to short-duration crowdsourcing tasks. ### Experimental design Here we describe the task we studied and its input data, worker recruitment, the design of our experimental treatment and control, the “instrumentation” we used to measure the speeds of workers as they performed our task, and our procedures to post-process and rate the worker responses to our task prior to subsequent analysis. ### Data collection We recruited 176 AMT workers to participate in our conceptualization task. Of these workers, 90 were randomly assigned to the Control group and 86 to the AUI group. These workers completed 1001 tasks: 496 tasks in the Control and 505 in the AUI. All responses were gathered within a single 24-hour period during April, 2017. After Control and AUI workers were finished responding, we initiated our non-experimental quality ratings task. Whenever multiple workers provided the same response to a given question, we only sought ratings for that single unique question and response. Each unique question-response pair was rated at least 8–10 times (a few pairs were rated more often; we retained those extra ratings). We recruited 119 AMT workers (who were not members of the Control or AUI groups) who provided 4300 total ratings.
### Differences in response time We found that workers were slower overall with the AUI than without the AUI. In Fig. FIGREF16 we show the distributions of typing duration and submission delay. There was a slight difference in typing duration between Control and AUI (median 1.97s for Control compared with median 2.69s for AUI). However, there was a strong difference in the distributions of submission delay, with AUI workers taking longer to submit than Control workers (median submission delay of 7.27s vs. 4.44s). This is likely due to the time required to mentally process and select from the AUI options. We anticipated that the submission delay might be counter-balanced by the time saved entering text, but the total typing duration plus submission delay was still significantly longer for AUI than for Control (median 7.64s for Control vs. 12.14s for AUI). We conclude that the AUI makes workers significantly slower. We anticipated that workers may learn over the course of multiple tasks. For example, the first time a worker sees the AUI will present a very different cognitive load than the 10th time. This learning may eventually lead to improved response times, and so an AUI that may not be useful the first time may lead to performance gains as workers become more experienced. To investigate learning effects, we recorded for each worker's question-response pair how many questions that worker had already answered, and examined the distributions of typing duration and submission delay conditioned on the number of previously answered questions (Fig. FIGREF17 ). Indeed, learning did occur: the submission delay (but not typing duration) decreased as workers responded to more questions. However, this did not translate to gains in overall performance between Control and AUI workers, as learning occurred for both groups: among AUI workers who answered 10 questions, the median submission delay on the 10th question was 8.02s, whereas for Control workers who answered 10 questions, the median delay on the 10th question was only 4.178s. This difference between Control and AUI submission delays was significant (Mann-Whitney test: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ). In comparison, AUI (Control) workers answering their first question had a median submission delay of 10.97s (7.00s). This difference was also significant (Mann-Whitney test: INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 ). We conclude that experience with the AUI will not eventually lead to faster responses than those of the control. ### Differences in response diversity We were also interested in determining whether or not the worker responses were more consistent or more diverse due to the AUI. Response consistency for natural language data is important when a crowdsourcer wishes to pool or aggregate a set of worker responses. We anticipated that the AUI would lead to greater consistency by, among other effects, decreasing the rates of typos and misspellings. At the same time, however, the AUI could lead to more diversity due to cognitive priming: seeing suggested responses from the AUI may prompt the worker to revise their response. Increased diversity may be desirable when a crowdsourcer wants to receive as much information as possible from a given task. To study the lexical and semantic diversities of responses, we performed three analyses. First, we aggregated all worker responses to a particular question into a single list corresponding to that question.
Across all questions, we found that the number of unique responses was higher for the AUI than for the Control (Fig. FIGREF19 A), implying higher diversity for AUI than for Control. Second, we compared the diversity of individual responses between Control and AUI for each question. To measure diversity for a question, we computed the number of responses divided by the number of unique responses to that question. We call this the response density. A set of responses has a response density of 1 when every response is unique; when every response is the same, the response density equals the number of responses. Across the ten questions, response density was significantly lower for AUI than for Control (Wilcoxon signed rank test paired on questions: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) (Fig. FIGREF19 B). Third, we estimated the semantic diversity of responses using word vectors. Word vectors, or word embeddings, are a state-of-the-art computational linguistics tool that incorporates the semantic meanings of words and phrases by learning vector representations embedded into a high-dimensional vector space BIBREF18, BIBREF19. Vector operations within this space, such as addition and subtraction, are capable of representing meaning and interrelationships between words BIBREF19; for example, analogy relations between words can be captured by simple vector arithmetic. Here we used 300-dimensional word vectors trained on a 100B-word corpus taken from Google News (word2vec). For each question we computed the average similarity between words in the responses to that question; a lower similarity implies more semantically diverse answers. Specifically, for a given question $q$, we concatenated all responses to that question into a single document $D_q$, and averaged the vector similarities $\mathrm{sim}(\mathbf{v}_w, \mathbf{v}_{w'})$ over all pairs of words $w, w'$ in $D_q$, where $\mathbf{v}_w$ is the word vector corresponding to word $w$:
$$\bar{S}(q) = \frac{1}{Z_q} \sum_{w \in D_q} \sum_{w' \in D_q} \delta(w, w')\, \mathrm{sim}(\mathbf{v}_w, \mathbf{v}_{w'}),$$
where $\delta(w, w') = 1$ if $w \ne w'$ and zero otherwise, and $Z_q$ is the corresponding number of word pairs. We also excluded from this average any word pairs where one or both words were not present in the pre-trained word vectors (approximately 13% of word pairs). For the similarity $\mathrm{sim}(\cdot,\cdot)$ we chose the standard cosine similarity between two vectors. As with response density, we found that most questions had lower word vector similarity $\bar{S}(q)$ (and are thus collectively more semantically diverse) when considering AUI responses as the document $D_q$ than when $D_q$ came from the Control workers (Fig. FIGREF19 C). The difference was significant (Wilcoxon signed rank test paired on questions: INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ). Taken together, we conclude from these three analyses that the AUI increased the diversity of the responses workers gave. ### No difference in response quality Following the collection of responses from the Control and AUI groups, separate AMT workers were asked to rate the quality of the original responses (see Experimental design). These ratings followed a 1–5 scale from lowest to highest. We present these ratings in Fig. FIGREF23 . While there was variation in overall quality across different questions (Fig. FIGREF23 A), we did not observe a consistent difference in perceived response quality between the two groups. There was also no statistical difference in the overall distributions of ratings per question (Fig. FIGREF23 B). We conclude that the AUI neither increased nor decreased response quality.
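To make the two diversity measures above concrete, the following is a minimal Python sketch of response density and average pairwise word-vector similarity. It is an illustration rather than the authors' code: the response lists are hypothetical, and it assumes the gensim downloader's `word2vec-google-news-300` vectors as a stand-in for the 300-dimensional Google News embeddings described above.

```python
from itertools import combinations
import numpy as np
import gensim.downloader as api

# Hypothetical responses to one question from each experimental group.
control_responses = ["dog", "a dog", "dog", "puppy", "dog"]
aui_responses = ["dog", "canine", "puppy", "wolf", "hound"]

def response_density(responses):
    """Number of responses divided by number of unique responses:
    1.0 means every response is unique; len(responses) means all identical."""
    return len(responses) / len(set(responses))

# 300-dimensional word2vec vectors trained on Google News (assumed source).
wv = api.load("word2vec-google-news-300")

def avg_pairwise_similarity(responses, wv):
    """Average cosine similarity over all pairs of distinct words in the
    concatenated responses; lower values imply more semantic diversity.
    Pairs involving out-of-vocabulary words are skipped, as in the paper."""
    words = [w for r in responses for w in r.split() if w in wv]
    sims = [wv.similarity(a, b) for a, b in combinations(words, 2) if a != b]
    return float(np.mean(sims)) if sims else float("nan")

for name, responses in [("Control", control_responses), ("AUI", aui_responses)]:
    print(name, response_density(responses), avg_pairwise_similarity(responses, wv))
```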
### Discussion We have shown via a randomized controlled trial that an autocompletion user interface (AUI) is not helpful in making workers more efficient. Further, the AUI led to a more lexically and semantically diverse set of text responses to a given task than if the AUI was not present. The AUI also had no noticeable impact, positive or negative, on response quality, as independently measured by other workers. A challenge with text-focused crowdsourcing is the aggregation of natural language responses. Unlike binary labeling tasks, for example, normalizing text data can be challenging. Should casing be removed? Should words be stemmed? What to do with punctuation? Should typos be fixed? One of our goals when testing the effects of the AUI was to see if it helps with this normalization task, so that crowdsourcers can spend less time aggregating responses. We found that the AUI would likely not help with this, in the sense that the sets of responses became more diverse, not less. Yet this may in fact be desirable: if a crowdsourcer wants as much diverse information from workers as possible, then showing them dynamic AUI suggestions may provide a cognitive priming mechanism to inspire workers to consider responses which otherwise would not have occurred to them. One potential explanation for the increased submission delay among AUI workers is an excessive number of options presented by the AUI. The goal of an AUI is to present the best options at the top of the drop-down menu (Fig. FIGREF2 B). Then a worker can quickly start typing and choose the best option with a single keystroke or mouse click. However, if the best option appears farther down the menu, then the worker must commit more time to scan and process the AUI suggestions. Our AUI always presented six suggestions, with another six available by scrolling, and our experiment did not vary these numbers. Yet the size of the AUI and where options land may play significant roles in submission delay, especially if significant numbers of selections come from AUI positions far from the input area. We aimed to explore position effects, but due to some technical issues we did not record the positions in the AUI that workers chose. However, our Javascript instrumentation logged worker keystrokes as they typed, so we can approximately reconstruct the AUI position of the worker's ultimate response. To do this, we first identified the logged text entered by the worker before it was replaced by the AUI selection, then used this text to replicate the database query underlying the AUI, and lastly determined where the worker's final response appeared in the query results. This procedure is only an approximation because our instrumentation would occasionally fail to log some keystrokes and because a worker could potentially type out the entire response even if it also appeared in the AUI (which the worker may not have even noticed). Nevertheless, most AUI workers submitted responses that appeared in the AUI (Fig. FIGREF24 A) and, of those responses, most were found in the first few (reconstructed) positions near the top of the AUI (Fig. FIGREF24 B). Specifically, we found that 59.3% of responses were found in the first two reconstructed positions, and 91.2% were in the first six.
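The reconstruction procedure just described can be sketched as a small post-processing step. The sketch below is a hypothetical illustration of the logic, not the actual instrumentation: `query_aui_suggestions` stands in for whatever database query backed the AUI, and the log format is assumed.

```python
from typing import Callable, Optional

def reconstruct_aui_position(
    keystroke_log: list[str],
    final_response: str,
    query_aui_suggestions: Callable[[str], list[str]],
) -> Optional[int]:
    """Approximate the AUI position from which a worker's response was chosen.

    keystroke_log: successive text-field states captured by the instrumentation.
    final_response: the response the worker ultimately submitted.
    query_aui_suggestions: replays the (hypothetical) query behind the AUI,
        returning the ranked suggestions for a typed prefix.
    Returns the 1-based position of the response in the reconstructed
    suggestion list, or None if the response never appeared in the AUI.
    """
    # The last logged state that differs from the submitted response is the
    # text the worker had typed just before it was replaced by the AUI selection.
    prefix = next((s for s in reversed(keystroke_log) if s != final_response), "")
    suggestions = query_aui_suggestions(prefix)
    if final_response in suggestions:
        return suggestions.index(final_response) + 1
    return None

# Illustrative use with a toy suggestion source.
toy_suggestions = lambda prefix: [w for w in ["tool", "toolbox", "tooth", "top"]
                                  if w.startswith(prefix)]
print(reconstruct_aui_position(["t", "to", "too"], "tooth", toy_suggestions))  # 3
```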
With the caveats of this analysis in mind, which we hope to address in future experiments, these results provide some evidence that the AUI responses were meaningful and that the AUI workers were delayed by the AUI even though most chosen responses came from the top area of the AUI, which is most quickly accessible to the worker. Beyond AUI position effects and the number of options shown in the AUI, there are many aspects of the interplay between workers and the AUI to be further explored. We limited workers to performing no more than ten tasks, but will an AUI eventually lead to efficiency gains beyond that level of experience? It is also an open question whether an AUI will lead to efficiency gains when applying more advanced autocompletion and ranking algorithms than the one we used. Given that workers were slower with the AUI primarily due to a delay after they finished typing that far exceeded the delays of non-AUI workers, better algorithms may play a significant role in speeding up or, in this case, slowing down workers. Either way, our results here indicate that crowdsourcers must be very judicious if they wish to augment workers with autocompletion user interfaces. ### Acknowledgments We thank S. Lehman and J. Bongard for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1447634. Figure 1. Screenshots of our conceptualization task interface. The presence of the AUI is the only difference between the task interfaces. Table 1. Question terms used in our conceptualization task. Workers were shown these questions in random order. Figure 2. Distributions of time delays. Workers in the AUI treatment were significantly slower than in the control, and this was primarily due to the submission delay between when they finished entering text and when they submitted their response. Figure 3. Workers became faster as they gained experience by answering more questions, but this improvement occurred in both Control and AUI groups. Figure 4. AUI workers had more lexically (A, B) and semantically (C) diverse responses than Control workers. Figure 5. Quality of responses. All question-response pairs were rated independently by workers on a 1–5 scale of perceived quality (1: lowest quality, 5: highest quality). Figure 6. Inferred positions of AUI selections based on the last text workers in the AUI group typed before choosing from the AUI. (A) Most submitted AUI responses appeared in the AUI. (B) Among the responses appearing in the AUI, the reconstructed positions of those responses tended to be at the top of the AUI, in the most prominent, accessible area.
1001
How do they ensure the generated questions are unanswerable?
### Introduction Extractive reading comprehension BIBREF0, BIBREF1 has attracted great attention from both research and industry in recent years. End-to-end neural models BIBREF2, BIBREF3, BIBREF4 have achieved remarkable performance on the task when answers are assumed to be present in the given paragraph. Nonetheless, current systems are still not good at deciding when no answer is present in the context BIBREF5. For unanswerable questions, the systems are supposed to abstain from answering rather than making unreliable guesses, which is an embodiment of language understanding ability. We attack the problem by automatically generating unanswerable questions for data augmentation to improve question answering models. The generated unanswerable questions should not be too easy for the question answering model, so that data augmentation can better help the model. For example, a simple baseline method is randomly choosing a question asked for another paragraph and using it as an unanswerable question. However, it would be trivial to determine whether the retrieved question is answerable by using word-overlap heuristics, because the question is irrelevant to the context BIBREF6. In this work, we propose to generate unanswerable questions by editing an answerable question and conditioning on the corresponding paragraph that contains the answer. As a result, the generated unanswerable questions are more lexically similar and relevant to the context. Moreover, by using the answerable question as a prototype and its answer span as a plausible answer, the generated examples can provide a more discriminative training signal to the question answering model. To create training data for unanswerable question generation, we use (plausible) answer spans in paragraphs as pivots to align pairs of answerable questions and unanswerable questions. As shown in Figure 1 , the answerable and unanswerable questions of a paragraph are aligned through the text span “Victoria Department of Education”, which serves as both the answer and the plausible answer. These two questions are lexically similar and both asked with the same answer type in mind. In this way, we obtain the data with which the models can learn to ask unanswerable questions by editing answerable ones with word exchanges, negations, etc. Consequently, we can generate a large number of unanswerable questions with existing large-scale machine reading comprehension datasets. Inspired by the neural reading comprehension models BIBREF7, BIBREF8, we introduce a pair-to-sequence model to better capture the interactions between questions and paragraphs. The proposed model first encodes the input question and paragraph separately, and then conducts attention-based matching to make them aware of each other. Finally, the context-aware representations are used to generate outputs. To facilitate the use of context words during the generation process, we also incorporate the copy mechanism BIBREF9, BIBREF10. Experimental results on the unanswerable question generation task show that the pair-to-sequence model consistently outperforms the sequence-to-sequence baseline and performs better with long paragraphs than with short answer sentences. Further experimental results show that the generated unanswerable questions can improve multiple machine reading comprehension models.
Even using BERT fine-tuning as a strong reading comprehension model, we can still obtain a $1.9$ % absolute improvement in F1 score with the BERT-base model and a $1.7$ % absolute F1 improvement with the BERT-large model. ### Related Work Machine Reading Comprehension (MRC) Various large-scale datasets BIBREF0, BIBREF1, BIBREF11, BIBREF12, BIBREF5, BIBREF13 have spurred rapid progress on machine reading comprehension in recent years. SQuAD BIBREF1 is an extractive benchmark whose questions and answer spans are annotated by humans. Neural reading comprehension systems BIBREF14, BIBREF2, BIBREF3, BIBREF15, BIBREF8, BIBREF16, BIBREF4, BIBREF17 have outperformed humans on this task in terms of automatic metrics. The SQuAD 2.0 dataset BIBREF5 extends SQuAD with more than $50,000$ crowdsourced unanswerable questions. So far, neural reading comprehension models still fall behind humans on SQuAD 2.0. Abstaining from answering when no answer can be inferred from the given document requires more understanding than merely extracting an answer. Question Generation for MRC In recent years, there has been an increasing interest in generating questions for reading comprehension. BIBREF18 show that neural models based on the encoder-decoder framework can generate significantly better questions than rule-based systems BIBREF19. To generate answer-focused questions, one can simply indicate the answer positions in the context with extra features BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24. BIBREF25 and BIBREF26 separate answer representations for further matching. BIBREF27 introduce a latent variable for capturing variability and an observed variable for controlling question types. In summary, the above-mentioned systems aim to generate answerable questions given certain context. On the contrary, our goal is to generate unanswerable questions. Adversarial Examples for MRC To evaluate the language understanding ability of pre-trained systems, BIBREF28 construct adversarial examples by adding to the paragraph distractor sentences that do not contradict the correct answer for humans. BIBREF29 and BIBREF30 use questions to retrieve paragraphs that do not contain the answer as adversarial examples. BIBREF5 create unanswerable questions through rigid rules that swap entities and numbers in answerable questions and substitute antonyms. It has been shown that adversarial examples generated by rule-based systems are much easier to detect than the ones in the SQuAD 2.0 dataset. Data Augmentation for MRC Several attempts have been made to augment training data for machine reading comprehension. We categorize this work according to the type of augmentation data: external data sources, paragraphs, or questions. BIBREF31 fine-tune BERT on the SQuAD dataset jointly with another dataset, TriviaQA BIBREF12. BIBREF4 paraphrase paragraphs with backtranslation. Another line of work focuses on generating answerable questions. BIBREF32 propose to generate questions based on unlabeled text for semi-supervised question answering. BIBREF33 propose a rule-based system to generate multiple-choice questions with candidate options based on the paragraphs. We aim at generating unanswerable questions as a means of data augmentation. ### Problem Formulation Given an answerable question $q$ and its corresponding paragraph $p$ that contains the answer $a$ , we aim to generate an unanswerable question $\tilde{q}$ that fulfills certain requirements. First, it cannot be answered by paragraph $p$ .
Second, it must be relevant to both the answerable question $q$ and the paragraph $p$ , which prevents producing irrelevant questions. Third, it should ask for something of the same type as the answer $a$ . As shown in Figure 2 , we investigate two simple neural models built upon the encoder-decoder architecture BIBREF34, BIBREF35 to generate unanswerable questions. A sequence-to-sequence model takes the concatenated paragraph and question as input, and encodes the input in a sequential manner. A pair-to-sequence model is further introduced to capture the interactions between inputs. The decoders of the two models generate unanswerable questions sequentially. We factorize the probability of generating the unanswerable question $P(\tilde{q}|q,p,a)$ as:
$$P(\tilde{q}|q,p,a)=\prod _{t=1}^{|\tilde{q}|} P(\tilde{q}_t|\tilde{q}_{<t},q,p,a)$$
where $\tilde{q}_{<t}=\tilde{q}_1 \dots \tilde{q}_{t-1}$ . ### Sequence-to-Sequence Model In the sequence-to-sequence model, paragraph and question pairs are packed into an ordered sequence $x$ with a special separator in between. To indicate answers in paragraphs, we introduce token type embeddings, which can also be used to distinguish questions from paragraphs in the sequence-to-sequence model. As we can see in Figure 2 , the token type can be answer (A), paragraph (P), or question (Q). For a given token, we construct the input representation $\mathbf {e}_i$ by summing the corresponding word embeddings, character embeddings and token type embeddings. Here characters are embedded by an embedding matrix followed by a max pooling layer. We apply a single-layer bi-directional recurrent neural network with long short-term memory units (LSTM; BIBREF36 ) to produce encoder hidden states $\mathbf {h}_i=\mathrm{BiLSTM}(\mathbf {h}_{i-1}, \mathbf {e}_i)$ . On each decoding step $t$ , the hidden states of the decoder (a single-layer unidirectional LSTM network) are computed by $\mathbf {s}_t=\mathrm{LSTM}(\mathbf {s}_{t-1}, [\mathbf {y}_{t-1}; \mathbf {c}_{t-1}])$ , where $\mathbf {y}_{t-1}$ is the word embedding of the previously predicted token and $\mathbf {c}_{t-1}$ is the encoder context vector of the previous step. Besides, we use an attention mechanism to summarize the encoder-side information into $\mathbf {c}_{t}$ for the current step. The attention distribution $\gamma _t$ over source words is computed as in BIBREF37 :
$$score(\mathbf {h}_i, \mathbf {s}_t)=\mathbf {h}_i^{T}\mathbf {W}_\gamma \mathbf {s}_t \qquad \gamma _{i,t}=\exp (score(\mathbf {h}_i,\mathbf {s}_t)) / Z_t \qquad \mathbf {c}_t=\sum _{i}^{|x|}\gamma _{i,t}\,\mathbf {h}_i$$
where $Z_t = {\sum _{k}^{|x|}\exp (score(\mathbf {h}_k,\mathbf {s}_t))}$ and $\mathbf {W}_\gamma $ in the score function is a learnable parameter. Next, $\mathbf {s}_t$ is concatenated with $\mathbf {c}_t$ to produce the vocabulary distribution $P_{v}$ : $$P_{v}=\mathrm{softmax}(\mathbf {W}_v[\mathbf {s}_t;\mathbf {c}_t] + \mathbf {b}_{v})$$ (Eq. 4) where $\mathbf {W}_v$ and $\mathbf {b}_{v}$ are learnable parameters. The copy mechanism BIBREF10 is incorporated to directly copy words from the inputs, because words in paragraphs or source questions are of great value for unanswerable question generation. Specifically, we use $\mathbf {s}_t$ and $\mathbf {c}_t$ to produce a gating probability $g_t$ : $$g_t=\sigma (\mathbf {W}_g[\mathbf {s}_t;\mathbf {c}_t] + \mathbf {b}_{g})$$ (Eq. 5) where $\mathbf {W}_g$ and $\mathbf {b}_{g}$ are learnable parameters. The gate $g_t$ determines whether to generate a word from the vocabulary or to copy a word from the inputs. Finally, we obtain the probability of generating $\tilde{q}_t$ by: $$P(\tilde{q}_t|\tilde{q}_{<t},q,p,a)=g_t P_{v}(\tilde{q}_t) + (1-g_t)\sum _{i \in \zeta _{\tilde{q}_t}}\hat{\gamma }_{i,t}$$ (Eq. 6)
where $\zeta _{\tilde{q}_t}$ denotes all the occurrences of $\tilde{q}_t$ in the inputs, and the copying scores $\hat{\gamma }_t$ are computed in the same way as the attention scores $\gamma _t$ above, but with different parameters. ### Pair-to-Sequence Model Paragraph and question interactions play a vitally important role in machine reading comprehension. The interactions make the paragraph and question aware of each other and help to predict the answer more precisely. Therefore we propose a pair-to-sequence model, conducting attention-based interactions in the encoder and subsequently decoding with two series of representations. In the pair-to-sequence model, the paragraph and question are embedded as in the sequence-to-sequence model, but encoded separately by weight-shared bi-directional LSTM networks, yielding $\mathbf {h}_i^p=\mathrm{BiLSTM}(\mathbf {h}_{i-1}^p, \mathbf {e}_{i}^p)$ as paragraph encodings and $\mathbf {h}_i^q=\mathrm{BiLSTM}(\mathbf {h}_{i-1}^q, \mathbf {e}_{i}^q)$ as question encodings. The same attention mechanism as in the sequence-to-sequence model is used in the following interaction layer to produce question-aware paragraph representations $\tilde{\mathbf {h}}_i^p$ :
$$\alpha _{i,j}=\exp (score(\mathbf {h}_i^p,\mathbf {h}_j^q))/Z_i \qquad \hat{\mathbf {h}}_i^p=\sum _{j=1}^{|q|}\alpha _{i,j}\,\mathbf {h}_j^q \qquad \tilde{\mathbf {h}}_i^p=\phi (\mathbf {W}_p[\mathbf {h}_i^p;\hat{\mathbf {h}}_i^p] + \mathbf {b}_p)$$
where $Z_i=\sum _{k=1}^{|q|}\exp (score(\mathbf {h}_i^p,\mathbf {h}_k^q))$ , $\mathbf {W}_p$ and $\mathbf {b}_p$ are learnable parameters, and $\phi (\cdot )$ is an elementwise non-linear activation. Similarly, the paragraph-aware question representations $\tilde{\mathbf {h}}_j^q$ are produced by:
$$\beta _{i,j}=\exp (score(\mathbf {h}_i^p,\mathbf {h}_j^q))/Z_j \qquad \hat{\mathbf {h}}_j^q=\sum _{i=1}^{|p|}\beta _{i,j}\,\mathbf {h}_i^p \qquad \tilde{\mathbf {h}}_j^q=\phi (\mathbf {W}_q[\mathbf {h}_j^q;\hat{\mathbf {h}}_j^q] + \mathbf {b}_q)$$
where $Z_j=\sum _{k=1}^{|p|}\exp (score(\mathbf {h}_k^p,\mathbf {h}_j^q))$ , and $\mathbf {W}_q$ and $\mathbf {b}_q$ are learnable parameters. Accordingly, the decoder now takes a paragraph context $\mathbf {c}^p_{t-1}$ and a question context $\mathbf {c}^q_{t-1}$ as encoder context, each computed in the same way as $\mathbf {c}_t$ in the sequence-to-sequence model, to update the decoder hidden states $\mathbf {s}_t=\mathrm{LSTM}(\mathbf {s}_{t-1},[\mathbf {y}_{t-1};\mathbf {c}^p_{t-1};\mathbf {c}^q_{t-1}])$ and predict tokens. The copy mechanism is also adopted as described before, and copying words from both the paragraph and the question is viable. ### Training and Inference The training objective is to minimize the negative log-likelihood of the aligned unanswerable question $\tilde{q}$ given the answerable question $q$ and its corresponding paragraph $p$ that contains the answer $a$ :
$$\mathcal {L}=-\sum _{(\tilde{q},q,p,a)\in \mathcal {D}}\log P(\tilde{q}|q,p,a;\theta )$$
where $\mathcal {D}$ is the training corpus and $\theta $ denotes all the parameters. Sequence-to-sequence and pair-to-sequence models are trained with the same objective. During inference, the unanswerable question for a question answering pair $(q,p,a)$ is obtained via $\textrm {argmax}_{q^{\prime }}P(q^{\prime }|q,p,a)$ , where $q^{\prime }$ represents candidate outputs. Beam search is used to avoid iterating over all possible outputs. ### Experiments We conduct experiments on the SQuAD 2.0 dataset BIBREF5. The extractive machine reading benchmark contains about $100,000$ answerable questions and over $50,000$ crowdsourced unanswerable questions about Wikipedia paragraphs. Crowdworkers are requested to craft unanswerable questions that are relevant to the given paragraph. Moreover, for each unanswerable question, a plausible answer span is annotated, which indicates the incorrect answer obtained by relying only on type-matching heuristics. Both answers and plausible answers are text spans in the paragraphs.
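To make the shared attention and copy computations above concrete, here is a minimal PyTorch-style sketch of a single decoding step under the equations as reconstructed above. It is an illustration, not the authors' implementation: all tensor names and sizes are hypothetical, and the encoder, the pair-to-sequence interaction layer, and beam search are omitted.

```python
import torch
import torch.nn.functional as F

def decoder_step(h, s_t, W_gamma, W_v, b_v, W_g, b_g, src_token_ids, vocab_size):
    """One decoding step of the attention + copy mechanism.

    h:             encoder states, shape (src_len, hidden)
    s_t:           current decoder state, shape (hidden,)
    src_token_ids: vocabulary ids of the source tokens, shape (src_len,)
    Returns the output distribution over the vocabulary, shape (vocab_size,).
    """
    # Bilinear attention scores: score(h_i, s_t) = h_i^T W_gamma s_t
    scores = h @ (W_gamma @ s_t)                      # (src_len,)
    gamma = F.softmax(scores, dim=0)                  # attention distribution
    c_t = gamma @ h                                   # context vector, (hidden,)

    # Vocabulary distribution P_v = softmax(W_v [s_t; c_t] + b_v)
    feat = torch.cat([s_t, c_t], dim=0)
    p_vocab = F.softmax(W_v @ feat + b_v, dim=0)      # (vocab_size,)

    # Copy gate g_t = sigmoid(W_g [s_t; c_t] + b_g)
    g_t = torch.sigmoid(W_g @ feat + b_g)             # scalar

    # Mix generation and copying: copy mass is scattered onto source token ids.
    p_copy = torch.zeros(vocab_size).index_add_(0, src_token_ids, gamma)
    return g_t * p_vocab + (1.0 - g_t) * p_copy

# Toy sizes for a quick smoke test (all values hypothetical).
hidden, src_len, vocab_size = 8, 5, 20
h = torch.randn(src_len, hidden)
s_t = torch.randn(hidden)
W_gamma = torch.randn(hidden, hidden)
W_v, b_v = torch.randn(vocab_size, 2 * hidden), torch.zeros(vocab_size)
W_g, b_g = torch.randn(2 * hidden), torch.zeros(())
src_token_ids = torch.tensor([3, 7, 7, 2, 11])
dist = decoder_step(h, s_t, W_gamma, W_v, b_v, W_g, b_g, src_token_ids, vocab_size)
print(dist.sum())  # approximately 1.0
```

In the pair-to-sequence variant, the same step would consume two context vectors, one computed over the paragraph encodings and one over the question encodings, and copying would be allowed from both inputs.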
### Unanswerable Question Generation We use (plausible) answer spans in paragraphs as pivots to align pairs of answerable questions and unanswerable questions. An aligned pair is shown in Figure 1 . For spans that correspond to multiple answerable and unanswerable questions, we sort the pairs by Levenshtein distance BIBREF38, keep the pair with the minimum distance, and make sure that each question is only paired once. We obtain $20,240$ aligned pairs from the SQuAD 2.0 dataset in total. The Levenshtein distance between the answerable and unanswerable questions in pairs is $3.5$ on average. Specifically, the $17,475$ pairs extracted from the SQuAD 2.0 training set are used to train generation models. Since the SQuAD 2.0 test set is hidden, we randomly sample 46 articles from the SQuAD 2.0 training set with $1,805$ ( $\sim $ 10%) pairs as a holdout set and evaluate generation models with $2,765$ pairs extracted from the SQuAD 2.0 development set. We implement generation models upon OpenNMT BIBREF39. We preprocess the corpus with the spaCy toolkit for tokenization and sentence segmentation. We lowercase tokens and build the vocabulary on the SQuAD 2.0 training set with a word frequency threshold of 9 to remove most noisy tokens introduced in data collection and tokenization. We set the word, character and token type embedding dimensions to 300. We use the glove.840B.300d pre-trained embeddings BIBREF40 to initialize word embeddings, and update them further during training. Both encoder and decoder share the same vocabulary and word embeddings. The hidden state size of the LSTM networks is 150. Dropout probability is set to $0.2$ . The data are shuffled and split into mini-batches of size 32 for training. The model is optimized with Adagrad BIBREF41 with an initial learning rate of $0.15$ . During inference, the beam size is 5. We prohibit producing unknown words by setting the score of the <unk> token to -inf. We filter out beam outputs that make no change to the input question. The generation quality is evaluated using three automatic evaluation metrics: BLEU BIBREF42, ROUGE BIBREF43 and GLEU BIBREF44. BLEU is a commonly used metric in machine translation that computes n-gram precisions over references. The recall-oriented ROUGE metric is widely adopted in summarization, and ROUGE-L measures the longest common subsequence between system outputs and references. GLEU is a variant of BLEU with the modification that it penalizes system output n-grams that are present in the input but absent from the reference. This makes GLEU a preferable metric for tasks with subtle but critical differences in a monolingual setting, as in our unanswerable question generation task. We also conduct human evaluation on 100 samples using three criteria: (1) unanswerability, which indicates whether the question is unanswerable or not; (2) relatedness, which measures semantic relatedness between the generated question and the input question answering pair; (3) readability, which indicates grammaticality and fluency. We ask three raters to score the generated questions in terms of relatedness and readability on a 1–3 scale (3 being the best) and to judge answerability as binary (1 for unanswerable). The raters are not aware of the question generation methods in advance. Results of the automatic evaluation are shown in Table 1 . We find that the proposed pair-to-sequence model, which captures interactions between paragraph and question, performs consistently better than the sequence-to-sequence model.
Moreover, replacing the input paragraph with the answer sentence hurts model performance, which indicates that using the whole paragraph as context provides more helpful information for unanswerable question generation. We also try to generate unanswerable questions by relying only on answerable questions (see “-Paragraph”), or only on the paragraph (see “-Question”). Unsurprisingly, both ablation models obtain worse performance compared with the full model. These two ablation results also demonstrate that the input answerable question contributes more to performance than the input paragraph. We argue that the original answerable question provides more direct information because the average edit distance between the example pairs is only $3.5$ . Finally, we remove the copy mechanism, which restricts predicted tokens to the vocabulary. The results indicate the necessity of copying tokens from answerable questions and paragraphs into the outputs, which alleviates the out-of-vocabulary problem. Table 3 shows the human evaluation results of generated unanswerable questions. We compare with the baseline method TfIdf, which uses the input answerable question to retrieve similar questions from other articles as outputs. The retrieved questions are mostly unanswerable and readable, but they are not quite relevant to the question answering pair. Note that relevance is demonstrated to be important for data augmentation in the further experiments on machine reading comprehension. Here the pair-to-sequence model still outperforms the sequence-to-sequence model in terms of all three metrics, but the differences in human evaluation are not as notable as in the automatic metrics. As shown in Table 4 , we further randomly sample 100 system outputs to analyze the types of generated unanswerable questions. We borrow the types defined in BIBREF5 for SQuAD 2.0. We categorize outputs with grammatical errors that make them hard to understand as Other. Samples that fall into Impossible Condition are mainly produced by non-entity substitution. We can see that the models tend to generate unanswerable questions by inserting negations and swapping entities. These two types are also the most commonly used when crowdworkers pose unanswerable questions according to answerable ones. We also find that the current models still have difficulties in utilizing antonyms and exclusion conditions, which could be improved by incorporating external resources. In Figure 3 , we present a sample paragraph and its corresponding answerable questions and generated unanswerable questions. In the first example, the two models generate unanswerable questions by swapping the location entity “Victoria” with “texas” and by inserting the negation word “never”, respectively. In the second example, the sequence-to-sequence model omits the condition “in Victoria” and yields an answerable question. The pair-to-sequence model inserts the negation “no longer” properly, which is not mentioned in the paragraph. In the third example, grammatical errors are found in one of the model outputs. The last example shows that inserting negation words in different positions (“n't public” versus “not in victoria”) can express different meanings. Such cases are critical for the answerability of generated questions and are hard to handle in a rule-based system.
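The pair construction described at the start of this section (answer-span pivots plus minimum Levenshtein distance, with each question used at most once) can be sketched as follows. This is a hypothetical illustration of the alignment logic, not the paper's preprocessing code: the record format and the token-level distance are assumptions.

```python
from collections import defaultdict

def levenshtein(a, b):
    """Token-level edit distance via the standard dynamic program."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[len(b)]

def align_pairs(answerable, unanswerable):
    """Align answerable and unanswerable questions that share a (plausible)
    answer-span pivot, preferring the closest pairs and using each question
    at most once.

    answerable / unanswerable: lists of dicts with assumed keys 'question'
    and 'span', where 'span' identifies the (paragraph, answer text) pivot.
    """
    by_span = defaultdict(lambda: ([], []))
    for ex in answerable:
        by_span[ex["span"]][0].append(ex["question"])
    for ex in unanswerable:
        by_span[ex["span"]][1].append(ex["question"])

    candidates = []
    for span, (ans_qs, unans_qs) in by_span.items():
        for q in ans_qs:
            for uq in unans_qs:
                candidates.append((levenshtein(q.split(), uq.split()), q, uq))

    pairs, used = [], set()
    for dist, q, uq in sorted(candidates):          # smallest distance first
        if q not in used and uq not in used:
            pairs.append((q, uq))
            used.update([q, uq])
    return pairs
```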
### Data Augmentation for Machine Reading Comprehension We apply our automatically generated unanswerable questions as augmentation data to the following reading comprehension models: BiDAF BIBREF2 is a benchmark model on extractive machine reading comprehension. Based on BiDAF, BIBREF45 propose the BiDAF-No-Answer model to predict the distribution of answer candidates and the probability of a question being unanswerable at the same time. BIBREF29 propose the DocQA model to address document-level reading comprehension. The no-answer probability is also predicted jointly. It is the state-of-the-art model on unanswerable machine reading comprehension. We adopt the uncased version of BERT BIBREF31 for fine-tuning. The batch sizes of BERT-base and BERT-large are set to 12 and 24, respectively. The remaining hyperparameters are kept as in the official instructions for fine-tuning BERT-Large on SQuAD 2.0. We first generate unanswerable questions using the trained generation model. Specifically, we use the answerable questions in the SQuAD 2.0 training set, besides the ones aligned before, to generate unanswerable questions. Then we use the paragraphs and answers of the answerable questions along with the generated questions to construct training examples. In the end, we obtain augmentation data containing $69,090$ unanswerable examples. We train question answering models with augmentation data in two separate phases. In the first phase, we train the models by combining the augmentation data and all $86,821$ SQuAD 2.0 answerable examples. Subsequently, we use the original SQuAD 2.0 training data alone to further fine-tune model parameters. Exact Match (EM) and F1 are the two metrics used to evaluate model performance. EM measures the percentage of predictions that match ground truth answers exactly. F1 measures the word overlap between the prediction and ground truth answers. We use the pair-to-sequence model with answerable questions and paragraphs for data augmentation by default. Table 2 shows the exact match and F1 scores of multiple reading comprehension models with and without data augmentation. We can see that the generated unanswerable questions can improve both specifically designed reading comprehension models and strong BERT fine-tuning models, yielding a $1.9$ absolute F1 improvement with the BERT-base model and a $1.7$ absolute F1 improvement with the BERT-large model. Our submitted model obtains an EM score of $80.75$ and an F1 score of $83.85$ on the hidden test set. As shown in Table 5 , the pair-to-sequence model proves to be a better option for generating augmentation data than the other three methods. Besides the sequence-to-sequence model, we use answerable questions to retrieve questions from other articles with TfIdf. The retrieved questions are of little help in improving the model, because they are less relevant to the paragraph, as shown in Table 3 . We refer to the rule-based method BIBREF28 that swaps entities and replaces words with antonyms as Rule. In comparison to the above methods, the pair-to-sequence model yields the largest improvement. Results in Table 6 show that enlarging the size of the augmentation data can further improve model performance, especially with the BERT-base model. We conduct experiments using two and three times the size of the base augmentation data (i.e., $69,090$ unanswerable questions). We generate multiple unanswerable questions for each answerable question by using beam search.
Because we only generate unanswerable questions, the resulting data imbalance could limit the improvement gained from incorporating more augmentation data. ### Conclusions In this paper, we propose to generate unanswerable questions as a means of data augmentation for machine reading comprehension. We produce relevant unanswerable questions by editing answerable questions and conditioning on the corresponding paragraph. A pair-to-sequence model is introduced in order to capture the interactions between question and paragraph. We also present a way to construct training data for unanswerable question generation models. Both automatic and human evaluations show that the proposed model consistently outperforms the sequence-to-sequence baseline. The results on the SQuAD 2.0 dataset show that our generated unanswerable questions can help to improve multiple reading comprehension models. As for future work, we would like to enhance the ability to utilize antonyms for unanswerable question generation by leveraging external resources. ### Acknowledgments We thank the anonymous reviewers for their helpful comments. Qin and Liu were supported by the National Natural Science Foundation of China (NSFC) via grants 61632011 and 61772156. Figure 1: An example taken from the SQuAD 2.0 dataset. The annotated (plausible) answer span in the paragraph is used as a pivot to align the pair of answerable and unanswerable questions. Figure 2: Diagram of the proposed pair-to-sequence model and sequence-to-sequence model. The input embeddings are the sum of the word embeddings, the character embeddings and the token type embeddings. The input questions are all answerable. Table 1: Automatic evaluation results. Higher scores are better and the best performance for each evaluation metric is highlighted in boldface. “- Paragraph (+AS)” represents replacing paragraphs with answer sentences. Table 2: Experimental results of applying data augmentation to reading comprehension models on the SQuAD 2.0 dataset. “Δ” indicates absolute improvement. Table 3: Human evaluation results. Unanswerability (UNANS): 1 for unanswerable, 0 otherwise. Relatedness (RELA): 3 for relevant to both answerable question and paragraph, 2 for relevant to only one, 1 for irrelevant. Readability (READ): 3 for fluent, 2 for minor grammatical errors, 1 for incomprehensible. Table 4: Types of unanswerable questions generated by models and humans; we refer the reader to (Rajpurkar et al., 2018) for the detailed definition of each type. “S2S” represents the sequence-to-sequence baseline and “P2S” is our proposed pair-to-sequence model. Figure 3: Sample output generated by a human, the sequence-to-sequence model, and the pair-to-sequence model. The (plausible) answer spans of the questions are marked in colors and the main differences of the model outputs are underlined. Table 5: Results using different generation methods for data augmentation. “Δ” indicates absolute improvement. Table 6: Ablation over the size of data augmentation. “× N” means the original size is enlarged N times. “Δ” indicates absolute improvement.
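For reference, the EM and F1 metrics described in the data augmentation section can be sketched in a few lines of Python. This follows the common SQuAD-style normalization (lowercasing, dropping punctuation and articles); it is an illustrative sketch, and the official SQuAD 2.0 evaluation script may differ in details such as handling multiple reference answers.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> list[str]:
    """SQuAD-style normalization: lowercase, drop punctuation and articles."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return text.split()

def exact_match(prediction: str, ground_truth: str) -> float:
    return float(normalize(prediction) == normalize(ground_truth))

def f1_score(prediction: str, ground_truth: str) -> float:
    pred, gold = normalize(prediction), normalize(ground_truth)
    if not pred or not gold:                 # unanswerable convention:
        return float(pred == gold)           # both empty counts as correct
    common = Counter(pred) & Counter(gold)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Victoria Department", "Victoria Department"))            # 1.0
print(round(f1_score("Victoria Department of Education", "the Department"), 2))  # 0.4
```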
learn to ask unanswerable questions by editing answerable ones with word exchanges, negations, etc
What is ironic about keeping their books stored away in an airtight compartment? A. There is nothing in the books that can help Melopolis repair the Maternite or save its population B. The books were already designed with technology that would keep them intact forever C. There is little use in preserving something if the meaning is lost upon those preserving it D. The books contain antiquated knowledge that will only set Melopolis back further
The Birds and the Bees BY DAVE E. FISHER Which goes to prove that, in some instances, being heroic is easy! [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, August 1957. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] I was wandering among the tall grass of the slopes, listening to the soft whistling of the wind; allowing the grass to caress my toga and thighs. It was a day soft and clear; a day accepted by the young, cherished by we old. Across the gently undulating hills stood the magnificent Melopolis, encradling the Oracle of Delni. I do not, of course, believe in the gods per se; still there is a grandeur in the very stones that transcends their human sculptors, and it is no wonder to me that many cling tenaciously, and ignorantly, to the old religion. Cling to the gods of old, who drew man upward from wherever he began. In whose names Man killed and plundered, while struggling up. In whose names Man finally left this earth, to seek his cousins among the stars. But of course there were no cousins. There was nothing. And Man returned, and settled down to live. Saddened, but resigned and content to live in peace with his knowledge and his power. Gone now are all the ancient evils, wars, emergencies. "Sias! Sias—" And they were upon me. That is, Xeon was upon me. But I knew that where Xeon is, Melia must soon appear. And indeed it was but a moment before Melia slipped through the high grass to stand at his side. Their youthful voices were babbling in excitement. Melia was a She, with the swelling breasts that were, so tradition states, quite prevalent among members of the race long ago, and are seldom seen today. Indeed, Melia was on this account made the butt of many jokes and, I fear, would have had a lonely life of it had it not been for the friendship of Xeon. "Sias," they were saying, "the Maternite's gone." I stared in amazement. "Gone? It cannot be gone. It has always been—" "Oh my gods!" Xeon shouted. "I tell you it's gone! Will you—" Melia interrupted him quietly. "Xeon, will you lose all respect for the Elder?" Then turned to me, and said calmly, "The watcher at the Maternite Machine, it appears, has been drunk. The heat rose above the warning, continued to rise, and then—poof. Everything has evaporated in Maternite. All the Prelife is gone." "All of it?" I asked. "There is nothing left," Melia insisted. "Can more be made? And if not, what will happen with no more children?" "That is for the priests to say, not I," I replied. In moments of emergency, it is wise to speak with caution. That is, I suppose so. I have never before been in a real emergency. A man my age does not hurry in the heat of the midday sun—maddugs nenglishmin go out in the midday sun, as the ancients say, although I often wonder why—but Xeon and Melia ran all the way down to the city. They are of an age to enter manhood, and have all the energy such young men do. As we entered the city, we were surrounded by confusion and consternation. And can the simple people be blamed? They were aware that they stood in the midst of an unprecedented happening; indeed, an emergency. For a machine had failed! Not in the memory of the eldest among us has a machine failed. They were created so long ago, indeed, that the ignorant believe them to have been constructed by the gods themselves. And never, so far as I know, has one failed. Small wonder that the watcher had been negligent. 
Indeed, the watcher is more a tradition than a necessity. Besides, had he been sober, he would not have known what to do. For who knows the mysterious workings of the machines? I hastened to the City Hall and found the Conclave assembled, waiting for me to bring them to order. Xeon and Melia stopped as I mounted the steps, but I smiled and motioned them in. They accompanied me past the marble pillars into the cool recesses of the Hall, then seated themselves on the floor as I took my place by the great table. Well, you know how these things are. At such a time, many men feel impelled to make speeches, and one must not be disrespectful. Prayers and supplications were offered to the gods, priests were sent to sacrifice, and finally, as the light of the sun was falling between the pillars, the High Priest of the Maternite Machine was heard. He rambled through the customary opening remarks and then, continually smoothing his white beard—of which he is excessively proud—approached the crux of the matter and the Conclave finally heard the facts it had assembled to hear. By this time, unfortunately, many of the Conclave had departed for home and supper. Yet perhaps it is for the best, for those left were the most earnest and intelligent. "I would not bore you," he said, "with details of which only the gods are sure. Know, then, that once granted a few cells of Prelife, it is an easy matter for the Maternite Machine to add more and more; thus assuring us, as has always been, a continuous source of Prelife to be born by the Generating Machine as children. The machines bear the exact number of children each year to balance the number of us whom the gods claim. Such it has always been from time immemorial." A murmur of assent and approval of these virtuous words whispered around the Hall. "But now," he continued, however, with less assurance and indeed with even a stutter here and there, "an unprecedented situation has arisen. Indeed, I might call it an emergency. For the M-Maternite Machine has actually failed." Cries of "Treason" sprang up, and I fear it might have gone hard for the priest had I not been able to insure order. "That is not the worst," he cried, as if in defiance. "All the Prelife has been dried up. It will not function. There is no more. And there will be no more children!" At this I feared the Conclave was about to riot. It is at such times that I most revere the wisdom of the ancients, who decreed seventy years the minimum age for a member of the Conclave. They shouted and began to beat their fists, but for how long can a man of seventy years roar like a youngster? They quieted, breathing heavily, and I asked, "Is there no way, then, to produce more Prelife in order that the machines may produce more children for us? "As I have said," he replied, "give the machines but a bit of Prelife and they will produce more. But take away that least bit, and they are helpless." Such heresy could have brought a sad end to the priest had not the Conclave been so exhausted by the events of the day. We leaned back to think. Rocsates leaned forward and asked, "Must there not—must there not have been a beginning to Prelife? For the Machine, it seems, cannot make it; and yet it came from somewhere." "Riddles are not called for," I answered severely. "Are not riddles often the beginning of knowledge?" he asked, in that irritating dumber-than-thou attitude of his. "Must there not, long ago, have been a source of Prelife: a source now forgotten? 
And may it not even now—should we discover it—be available to us? I am reminded of the story of the animals of old—" "I fear your mind is wandering, Rocsates," I was forced to interrupt. "I know well the legend of the animals, but what does it have to do—" The heads of the Conclave were turning to me, quizzically. I hastened to explain the legend of the animals. "It is said that many thousands of years ago, time without reckoning, there existed on the earth creatures who were alive like us, and yet not like us. It is said they had four legs or more, and no arms, were covered with hair, and although not mute, they could not speak." Rocsates' voice made itself heard. "It is true. Such creatures did indeed exist. It is recorded most scientifically in the films." "If it be so," I said, quieting the hub-bub that followed, "and I would not doubt your word, Rocsates, for all know you are the wisest of men—if it were so, then, what of it?" "May it not be," Rocsates put in, "that these animals had no machines to reproduce their kind? For surely the gods would not grant machines to such creatures. And indeed, if they had Maternite Machines, why then we would yet have these animals among us." "And how, then, did these animals reproduce?" I asked. "How, indeed? And is there not a legend—admitted only a legend—that says there was a time before the machines, and before the Maternite Machine, and that at such a time both the animals and Men reproduced from within their own bodies?" At this two members of the Conclave fell immediately into a faint, and I would gladly have joined them. I hoped that the youngsters, Xeon and Melia, had not heard, but as I turned they were listening most attentively to Rocsates, who, amid cries of "Heresy" and "Treason", went on: "I should like to ask the Conclave for permission to search the ancient records, in the hope of finding some such knowledge that would prove or disprove my words." "You wish to search the films—" I began. "Not the films, Sias, but the books." Gods, this Rocsates! The books, as well he knows, are so ancient, and so delicate, that they are kept in an air-tight tomb; lest, being handled, they be destroyed and all knowledge within them lost. Therefore, they have not been read in the known history of our race. And Rocsates has been anxious for an excuse— "Sias," he went on, "if there exists such knowledge as I seek, is it not indeed lost to the memory of Man? And if so, are not the books the only place where it may be found?" Rocsates, it is suspected, will never ask a question unless he knows the answer beforehand. And so I acquiesced, and agreed, and granted permission. And with much misgiving and foreboding of evil, the Conclave adjourned. Several weeks elapsed before Rocsates requested that the Conclave meet. I called the meeting at dawn and so it was yet early in the afternoon when formalities were concluded and Rocsates granted leave to speak. "Some of those among you are She's," he began. "And you know you are different from the rest of us. To the advantage, your skin is fairer and your features more often handsomer than ours. To the disadvantage, your excretory system is not so mechanically dextrous as ours. And, you may say, why should this not be so? There is, indeed, no reason why we should all be identical. Perforce you have the advantage, perforce we do. Yet there is one other distinction. "Some among you She's have the swelling of the breasts. And does there exist no reason for this? Was there not, perhaps in ancient times, a cause for this? 
Do you not wonder, She's, whence you come and for what reason?" "Rocsates," I interrupted. "All this is fascinating, of course. But if you could be quick—" "Of course," he replied. "In the course of my reading I have read many books, and while they are all vague on the subject, this I have discovered: "That there was indeed a time before the machines, in fact the books were created in that time, for not one of them mentions the machines. Then reproduction was carried on by individuals, without help of the then nonexistent machines. The She's are not wanderers from another land, but they have lived with us for all time; they are not another race, but we are all types of one race. And the fact of reproduction is somehow intimately related to the physical distinctions of the She's!" These last sentences were shouted to be heard above the roar of the crowd. Yet when Rocsates stopped, so also did the noise, so shocked and amazed at his words were they. And I confess, myself also. "In fact," Rocsates added, sitting down, "this process of reproduction seems to have been so simple that there was once a problem of over-population." Order was lost among the Conclave as each man turned to speak to his neighbor, and for some time I could not restore order. I realized that something had to be done to save Rocsates before the outrage of the assembled overwhelmed him. "It seems," I shouted, "that there is a flaw in your logic." For if such there was, I was hopeful of dismissing the entire affair with no harm done. "For if people reproduced too often, why then this reproduction must have been a pleasant thing to do; otherwise they would not have done so to excess. And if it was a pleasant thing to do, where is the necessity for the machines, and why were they created?" Rocsates seemed perplexed by this problem, whereupon Xeon, who together with Melia were at the Conclave without permission, shouted, "Perhaps the process of reproduction was of such a pleasure that the Conclave ruled it to be a sin? And therefore the machines were necessary!" At this impudence the Conclave dissolved in an uproar, and I was beyond power to restrain them from placing Xeon under arrest. Privately, however, I had to admit that his supposition was a possibility, and thus I authorized Rocsates to continue his search. Now indeed I was sorely worried concerning Xeon, for he must languish in the dungeon until the Conclave is satisfied to release him, and this they cannot do until they meet again. I needed a sufficient excuse to call a meeting of the Conclave, whereupon I might argue for the lad. When I heard that Rocsates again desired audience, I immediately proclaimed a meeting of the Conclave to be held the next day at dawn, and so that night slept well. The Conclave had come to order and formalities had been initiated when Rocsates entered and took his place. He clutched under one shoulder a thin, rectangular object, but that is not what impressed me. His appearance—he looked as if he had not slept of late, nor eaten either. His eyes were sunken, and his features had doubled in age. He was bent and tired. But it was his eyes. There was a horror in them. I was shocked, and could not help staring at him. And then the formalities were over. I intended to speak for Xeon, but Rocsates was on his feet and I gave way. "I have indeed discovered the secret of reproduction," he began. "After many searchings, I came upon this—" and he held forth the object he had carried in. "It is a book. It is entitled, 'Living a Normal Sex Life.' 
It seems to be some sort of a do-it-yourself pamphlet." He dropped the book on the table and rubbed his hands over his eyes. There was something in the man's behavior that commanded everyone's attention. He went on, speaking low. "The word 'Sex' is not defined, but it seems to mean...." His words trailed off. He was obviously unsure of how to continue. "I had better start at the beginning, I suppose," he said. "You see, once upon a time there were birds and bees...." When he finished the Conclave sat in horrified silence. His words, with all their horror, had the ring of truth and there were no cries of 'Heresy'. There was only stunned disbelief and the beginnings of nausea. It is the mark of honor that a leader shall carry on when others fear to move. I cleared my throat. "Shall not these organs which you mention have atrophied by now? With no use throughout all these generations, will they not have evolved into nothingness?" "I do not think so," Rocsates replied after a while. "What to us is an eon, to evolution is but an instant. And then the swelling of the breasts, I believe, proves that there is still reproductive activity in some, at least, of the She's." We sat shaking our heads, bowed under terrible reality. "Then we must experiment," I said. "But whom could we ask to submit to such horror?" "I have already taken the liberty of asking for volunteers," Rocsates replied. "The She, of course, must be one with the swelling of the breasts. Melia has volunteered, on condition that Xeon be released from dungeon. Are there any objections?" There were none, of course. Who would refuse a boon to one who would undergo such an ordeal for the City? "And who will be the partner?" I asked. "In all honor, could Xeon allow Melia to surpass him in courage? It shall be he," Rocsates said. And with his word the two entered the Hall and stood, noble and naked. Rocsates gestured to the table, and Melia started to climb upon it, but Xeon stepped forward. "My lords," he said, "would not better results be obtained were we to conduct the experiment in the fields before the Oracle of Delni, that the gods may help us?" His glance reached into my soul, and I was proud of Xeon. A true friend, he thought even now of the comfort of Melia. The marble table was indeed hard, and from Rocsates' description it seemed that Melia's position would be as uncomfortable as it would be undignified. The soft fields might be some slight help. I voiced my assent, and the entire Conclave adjourned to the fields. It was nearly dark when we walked home, Rocsates and I, arm in arm. It had been a horrible day. The inhuman indignity, the cries— We tarried before my home, leaned on the stone, stared at the first stars. "They seemed finally to accomplish all the book described," I muttered. "They may indeed have succeeded," Rocsates replied. "There is mentioned a time lapse which is necessary. The child does not appear immediately." "It doesn't matter," I said disconsolately. "Who could ask them to go through such an ordeal again?" And then I looked down to earth again, and saw them standing before me. Melia cast her eyes down, and would not raise them. Xeon held his arm about her shoulders, as if to protect her, but I know not from whom. "Sias," he said. Then stopped, embarrassed. I waited, and Rocsates was silent, and he continued. "Sias, we come to tell.... We will...." He raised his eyes to mine and said manfully, "We shall try again." I am afraid that tears came to my eyes. Such sacrifice— "We beg one favor," Xeon went on. 
"We are agreed that—Well, we should like to be left alone, in private, to try." "Of course," I replied. Anything they might want they could have. My relief and gratitude must have showed, for Xeon took a deep breath and spoke again. "We do not deserve praise, Sias," he said. "The truth is, we ... we sort of enjoy it." I watched them turn and wander off together under the stars. My heart has a warmth in it, and I no longer fear for the future of our race when our young people can show such nobility and sacrifice.
C. There is little use in preserving something if the meaning is lost upon those preserving it
How does Tolliver feel about Betty at first? A. she's a rich man's daughter deserving of the company B. she's attractive and someone he should get to know C. she's an entitled girl that doesn't know what she's getting into D. she's a fun girl to joke around with while on Ganymede
TOLLIVER'S ORBIT was slow—but it wasn't boring. And it would get you there—as long as you weren't going anywhere anyhow! By H. B. FYFE [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, September 1961. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Johnny Tolliver scowled across the desk at his superior. His black thatch was ruffled, as if he had been rubbed the wrong way. "I didn't ask you to cut out your own graft, did I?" he demanded. "Just don't try to sucker me in on the deal. I know you're operating something sneaky all through the colony, but it's not for me." The big moon-face of Jeffers, manager of the Ganymedan branch of Koslow Spaceways, glowered back at him. Its reddish tinge brightened the office noticeably, for such of Ganymede's surface as could be seen through the transparent dome outside the office window was cold, dim and rugged. The glowing semi-disk of Jupiter was more than half a million miles distant. "Try not to be simple—for once!" growled Jeffers. "A little percentage here and there on the cargoes never shows by the time figures get back to Earth. The big jets in the home office don't care. They count it on the estimates." "You asked any of them lately?" Tolliver prodded. "Now, listen ! Maybe they live soft back on Earth since the mines and the Jovian satellite colonies grew; but they were out here in the beginning, most of them. They know what it's like. D'ya think they don't expect us to make what we can on the side?" Tolliver rammed his fists into the side pockets of his loose blue uniform jacket. He shook his head, grinning resignedly. "You just don't listen to me ," he complained. "You know I took this piloting job just to scrape up money for an advanced engineering degree back on Earth. I only want to finish my year—not get into something I can't quit." Jeffers fidgeted in his chair, causing it to creak under the bulk of his body. It had been built for Ganymede, but not for Jeffers. "Aw, it's not like that," the manager muttered. "You can ease out whenever your contract's up. Think we'd bend a good orbit on your account?" Tolliver stared at him silently, but the other had difficulty meeting his eye. "All right, then!" Jeffers snapped after a long moment. "If you want it that way, either you get in line with us or you're through right now!" "You can't fire me," retorted the pilot pityingly. "I came out here on a contract. Five hundred credits a week base pay, five hundred for hazardous duty. How else can you get pilots out to Jupiter?" "Okay I can't fire you legally—as long as you report for work," grumbled Jeffers, by now a shade more ruddy. "We'll see how long you keep reporting. Because you're off the Callisto run as of now! Sit in your quarters and see if the company calls that hazardous duty!" "Doesn't matter," answered Tolliver, grinning amiably. "The hazardous part is just being on the same moon as you for the next six months." He winked and walked out, deliberately leaving the door open behind him so as to enjoy the incoherent bellowing that followed him. Looks like a little vacation , he thought, unperturbed. He'll come around. I just want to get back to Earth with a clean rep. Let Jeffers and his gang steal the Great Red Spot off Jupiter if they like! It's their risk. Tolliver began to have his doubts the next day; which was "Tuesday" by the arbitrary calendar constructed to match Ganymede's week-long journey around Jupiter.
His contract guaranteed a pilot's rating, but someone had neglected to specify the type of craft to be piloted. On the bulletin board, Tolliver's name stood out beside the number of one of the airtight tractors used between the dome city and the spaceport, or for hauling cross-country to one of the mining domes. He soon found that there was nothing for him to do but hang around the garage in case a spaceship should land. The few runs to other domes seemed to be assigned to drivers with larger vehicles. The following day was just as boring, and the next more so. He swore when he found the assignment unchanged by "Friday." Even the reflection that it was payday was small consolation. "Hey, Johnny!" said a voice at his shoulder. "The word is that they're finally gonna trust you to take that creeper outside." Tolliver turned to see Red Higgins, a regular driver. "What do you mean?" "They say some home-office relative is coming in on the Javelin ." "What's wrong with that?" asked Tolliver. "Outside of the way they keep handing out soft jobs to nephews, I mean." "Aah, these young punks just come out for a few months so they can go back to Earth making noises like spacemen. Sometimes there's no reason but them for sending a ship back with a crew instead of in an economy orbit. Wait till you see the baggage you'll have to load!" Later in the day-period, Tolliver recalled this warning. Under a portable, double-chambered plastic dome blown up outside the ship's airlock, a crewman helped him load two trunks and a collection of bags into the tractor. He was struggling to suppress a feeling of outrage at the waste of fuel involved when the home-office relative emerged. She was about five feet four and moved as if she walked lightly even in stronger gravity than Ganymede's. Her trim coiffure was a shade too blonde which served to set off both the blue of her eyes and the cap apparently won from one of the pilots. She wore gray slacks and a heavy sweater, like a spacer. "Sorry to keep you waiting," she said, sliding into the seat beside Tolliver. "By the way, just call me Betty." "Sure," agreed Tolliver thinking, Ohmigod! Trying already to be just one of the gang, instead of Lady Betty! Is her old man the treasurer, or does he just know where bodies are buried? "They were making dates," said the girl. "Were they ribbing me, or is it true that none of the four of them goes back with the ship?" "It's true enough," Tolliver assured her. "We need people out here, and it costs a lot to make the trip. They found they could send back loaded ships by 'automatic' flight—that is, a long, slow, economical orbit and automatic signalling equipment. Then they're boarded approaching Earth's orbit and landed by pilots who don't have to waste their time making the entire trip." He followed the signals of a spacesuited member of the port staff and maneuvered out of the dome. Then he headed the tractor across the frozen surface of Ganymede toward the permanent domes of the city. "How is it here?" asked the girl. "They told me it's pretty rough." "What did you expect?" asked Tolliver. "Square dances with champagne?" "Don't be silly. Daddy says I'm supposed to learn traffic routing and the business management of a local branch. They probably won't let me see much else." "You never can tell," said the pilot, yielding to temptation. "Any square inch of Ganymede is likely to be dangerous." I'll be sorry later , he reflected, but if Jeffers keeps me jockeying this creeper, I'm entitled to some amusement. 
And Daddy's little girl is trying too hard to sound like one of the gang. "Yeah," he went on, "right now, I don't do a thing but drive missions from the city to the spaceport." "Missions! You call driving a mile or so a mission ?" Tolliver pursed his lips and put on a shrewd expression. "Don't sneer at Ganymede, honey!" he warned portentously. "Many a man who did isn't here today. Take the fellow who used to drive this mission!" "You can call me Betty. What happened to him?" "I'll tell you some day," Tolliver promised darkly. "This moon can strike like a vicious animal." "Oh, they told me there was nothing alive on Ganymede!" "I was thinking of the mountain slides," said the pilot. "Not to mention volcanic puffballs that pop out through the frozen crust where you'd least expect. That's why I draw such high pay for driving an unarmored tractor." "You use armored vehicles?" gasped the girl. She was now sitting bolt upright in the swaying seat. Tolliver deliberately dipped one track into an icy hollow. In the light gravity, the tractor responded with a weird, floating lurch. "Those slides," he continued. "Ganymede's only about the size of Mercury, something like 3200 miles in diameter, so things get heaped up at steep angles. When the rock and ice are set to sliding, they come at you practically horizontally. It doesn't need much start, and it barrels on for a long way before there's enough friction to stop it. If you're in the way—well, it's just too bad!" Say, that's pretty good! he told himself. What a liar you are, Tolliver! He enlarged upon other dangers to be encountered on the satellite, taking care to impress the newcomer with the daredeviltry of John Tolliver, driver of "missions" across the menacing wastes between dome and port. In the end, he displayed conclusive evidence in the form of the weekly paycheck he had received that morning. It did not, naturally, indicate he was drawing the salary of a space pilot. Betty looked thoughtful. "I'm retiring in six months if I'm still alive," he said bravely, edging the tractor into the airlock at their destination. "Made my pile. No use pushing your luck too far." His charge seemed noticeably subdued, but cleared her throat to request that Tolliver guide her to the office of the manager. She trailed along as if with a burden of worry upon her mind, and the pilot's conscience prickled. I'll get hold of her after Jeffers is through and set her straight , he resolved. It isn't really funny if the sucker is too ignorant to know better. Remembering his grudge against the manager, he took pleasure in walking in without knocking. "Jeffers," he announced, "this is ... just call her Betty." The manager's jowled features twisted into an expression of welcome as jovial as that of a hungry crocodile. "Miss Koslow!" he beamed, like a politician the day before the voting. "It certainly is an honor to have you on Ganymede with us! That's all, Tolliver, you can go. Yes, indeed! Mr. Koslow—the president, that is: your father—sent a message about you. I repeat, it will be an honor to show you the ropes. Did you want something else, Tolliver?" "Never mind him, Mr. Jeffers," snapped the girl, in a tone new to Tolliver. "We won't be working together, I'm afraid. You've already had enough rope." Jeffers seemed to stagger standing still behind his desk. His loose lips twitched uncertainly, and he looked questioningly to Tolliver. The pilot stared at Betty, trying to recall pictures he had seen of the elder Koslow. 
He was also trying to remember some of the lies he had told en route from the spaceport. "Wh-wh-what do you mean, Miss Koslow?" Jeffers stammered. He darted a suspicious glare at Tolliver. "Mr. Jeffers," said the girl, "I may look like just another spoiled little blonde, but the best part of this company will be mine someday. I was not allowed to reach twenty-two without learning something about holding on to it." Tolliver blinked. He had taken her for three or four years older. Jeffers now ignored him, intent upon the girl. "Daddy gave me the title of tenth vice-president mostly as a joke, when he told me to find out what was wrong with operations on Ganymede. I have some authority, though. And you look like the source of the trouble to me." "You can't prove anything," declared Jeffers hoarsely. "Oh, can't I? I've already seen certain evidence, and the rest won't be hard to find. Where are your books, Mr. Jeffers? You're as good as fired!" The manager dropped heavily to his chair. He stared unbelievingly at Betty, and Tolliver thought he muttered something about "just landed." After a moment, the big man came out of his daze enough to stab an intercom button with his finger. He growled at someone on the other end to come in without a countdown. Tolliver, hardly thinking about it, expected the someone to be a secretary, but it turned out to be three members of Jeffers' headquarters staff. He recognized one as Rawlins, a warehouse chief, and guessed that the other two might be his assistants. They were large enough. "No stupid questions!" Jeffers ordered. "Lock these two up while I think!" Tolliver started for the door immediately, but was blocked off. "Where should we lock—?" the fellow paused to ask. Tolliver brought up a snappy uppercut to the man's chin, feeling that it was a poor time to engage Jeffers in fruitless debate. In the gravity of Ganymede, the man was knocked off balance as much as he was hurt, and sprawled on the floor. "I told you no questions!" bawled Jeffers. The fallen hero, upon arising, had to content himself with grabbing Betty. The others were swarming over Tolliver. Jeffers came around his desk to assist. Tolliver found himself dumped on the floor of an empty office in the adjoining warehouse building. It seemed to him that a long time had been spent in carrying him there. He heard an indignant yelp, and realized that the girl had been pitched in with him. The snapping of a lock was followed by the tramp of departing footsteps and then by silence. After considering the idea a few minutes, Tolliver managed to sit up. He had his wind back. But when he fingered the swelling lump behind his left ear, a sensation befuddled him momentarily. "I'm sorry about that," murmured Betty. Tolliver grunted. Sorrow would not reduce the throbbing, nor was he in a mood to undertake an explanation of why Jeffers did not like him anyway. "I think perhaps you're going to have a shiner," remarked the girl. "Thanks for letting me know in time," said Tolliver. The skin under his right eye did feel a trifle tight, but he could see well enough. The abandoned and empty look of the office worried him. "What can we use to get out of here?" he mused. "Why should we try?" asked the girl. "What can he do?" "You'd be surprised. How did you catch on to him so soon?" "Your paycheck," said Betty. "As soon as I saw that ridiculous amount, it was obvious that there was gross mismanagement here. It had to be Jeffers." Tolliver groaned. "Then, on the way over here, he as good as admitted everything. 
You didn't hear him, I guess. Well, he seemed to be caught all unaware, and seemed to blame you for it." "Sure!" grumbled the pilot. "He thinks I told you he was grafting or smuggling, or whatever he has going for him here. That's why I want to get out of here—before I find myself involved in some kind of fatal accident!" "What do you know about the crooked goings-on here?" asked Betty after a startled pause. "Nothing," retorted Tolliver. "Except that there are some. There are rumors, and I had a halfway invitation to join in. I think he sells things to the mining colonies and makes a double profit for himself by claiming the stuff lost in transit. You didn't think you scared him that bad over a little slack managing?" The picture of Jeffers huddled with his partners in the headquarters building, plotting the next move, brought Tolliver to his feet. There was nothing in the unused office but an old table and half a dozen plastic crates. He saw that the latter contained a mess of discarded records. "Better than nothing at all," he muttered. He ripped out a double handful of the forms, crumpled them into a pile at the doorway, and pulled out his cigarette lighter. "What do you think you're up to?" asked Betty with some concern. "This plastic is tough," said Tolliver, "but it will bend with enough heat. If I can kick loose a hinge, maybe we can fool them yet!" He got a little fire going, and fed it judiciously with more papers. "You know," he reflected, "it might be better for you to stay here. He can't do much about you, and you don't have any real proof just by yourself." "I'll come along with you, Tolliver," said the girl. "No, I don't think you'd better." "Why not?" "Well ... after all, what would he dare do? Arranging an accident to the daughter of the boss isn't something that he can pull off without a lot of investigation. He'd be better off just running for it." "Let's not argue about it," said Betty, a trifle pale but looking determined. "I'm coming with you. Is that stuff getting soft yet?" Tolliver kicked at the edge of the door experimentally. It seemed to give slightly, so he knocked the burning papers aside and drove his heel hard at the corner below the hinge. The plastic yielded. "That's enough already, Tolliver," whispered the girl. "We can crawl through!" Hardly sixty seconds later, he led her into a maze of stacked crates in the warehouse proper. The building was not much longer than wide, for each of the structures in the colony had its own hemispherical emergency dome of transparent plastic. They soon reached the other end. "I think there's a storeroom for spacesuits around here," muttered Tolliver. "Why do you want them?" "Honey, I just don't think it will be so easy to lay hands on a tractor. I bet Jeffers already phoned the garage and all the airlocks with some good lie that will keep me from getting through." After a brief search, he located the spacesuits. Many, evidently intended for replacements, had never been unpacked, but there were a dozen or so serviced and standing ready for emergencies. He showed Betty how to climb into one, and checked her seals and valves after donning a suit himself. "That switch under your chin," he said, touching helmets so she could hear him. "Leave it turned off. Anybody might be listening!" He led the way out a rear door of the warehouse. With the heavy knife that was standard suit equipment, he deliberately slashed a four-foot square section out of the dome. 
He motioned to Betty to step through, then trailed along with the plastic under his arm. He caught up and touched helmets again. "Just act as if you're on business," he told her. "For all anyone can see, we might be inspecting the dome." "Where are you going?" asked Betty. "Right through the wall, and then head for the nearest mine. Jeffers can't be running everything !" "Is there any way to get to a TV?" asked the girl. "I ... uh ... Daddy gave me a good number to call if I needed help." "How good?" "Pretty official, as a matter of fact." "All right," Tolliver decided. "We'll try the ship you just came in on. They might have finished refueling and left her empty." They had to cross one open lane between buildings, and Tolliver was very conscious of moving figures in the distance; but no one seemed to look their way. Reaching the foot of the main dome over the establishment, he glanced furtively about, then plunged his knife into the transparent material. From the corner of his eye, he thought he saw Betty make a startled gesture, but he had his work cut out for him. This was tougher than the interior dome. Finally, he managed to saw a ragged slit through which they could squeeze. There was room to walk between the inner and outer layer, so he moved along a few yards. A little dust began to blow about where they had gone through. He touched helmets once more. "This time," he said, "the air will really start to blow, so get through as fast as you can. If I can slap this piece of plastic over the rip, it may slow down the loss of pressure enough to give us quite a lead before the alarms go off." Through the faceplates, he saw the girl nod, wide-eyed. As soon as he plunged the knife into the outer layer, he could see dusty, moist air puffing out into the near-vacuum of Ganymede's surface. Fumbling, he cut as fast as he could and shoved Betty through the small opening. Squeezing through in his turn, he left one arm inside to spread the plastic sheet as best he could. The internal air pressure slapped it against the inside of the dome as if glued, although it immediately showed an alarming tendency to balloon through the ruptured spot. They'll find it, all right , Tolliver reminded himself. Don't be here when they do! He grabbed Betty by the wrist of her spacesuit and headed for the nearest outcropping of rock. It promptly developed that she had something to learn about running on ice in such low gravity. Until they were out of direct line of sight from the settlement, Tolliver simply dragged her. Then, when he decided that it was safe enough to pause and tell her how to manage better, the sight of her outraged scowl through the face-plate made him think better of it. By the time we reach the ship, she'll have learned , he consoled himself. It was a long mile, even at the pace human muscles could achieve on Ganymede. They took one short rest, during which Tolliver was forced to explain away the dangers of slides and volcanic puffballs. He admitted to having exaggerated slightly. In the end, they reached the spaceship. There seemed to be no one about. The landing dome had been collapsed and stored, and the ship's airlock port was closed. "That's all right," Tolliver told the girl. "We can get in with no trouble." It was when he looked about to make sure that they were unobserved that he caught a glimpse of motion back toward the city. He peered at the spot through the dim light.
After a moment, he definitely recognized the outline of a tractor breasting a rise in the ground and tilting downward again. "In fact, we have to get in to stay out of trouble," he said to Betty. He located the switch-cover in the hull, opened it and activated the mechanism that swung open the airlock and extended the ladder. It took him considerable scrambling to boost the girl up the ladder and inside, but he managed. They passed through the airlock, fretting at the time required to seal, pump air and open the inner hatch; and then Tolliver led the way up another ladder to the control room. It was a clumsy trip in their spacesuits, but he wanted to save time. In the control room, he shoved the girl into an acceleration seat, glanced at the gauges and showed her how to open her helmet. "Leave the suit on," he ordered, getting in the first word while she was still shaking her head. "It will help a little on the takeoff." "Takeoff!" shrilled Betty. "What do you think you're going to do? I just want to use the radio or TV!" "That tractor will get here in a minute or two. They might cut your conversation kind of short. Now shut up and let me look over these dials!" He ran a practiced eye over the board, reading the condition of the ship. It pleased him. Everything was ready for a takeoff into an economy orbit for Earth. He busied himself making a few adjustments, doing his best to ignore the protests from his partner in crime. He warned her the trip might be long. "I told you not to come," he said at last. "Now sit back!" He sat down and pushed a button to start the igniting process. In a moment, he could feel the rumble of the rockets through the deck, and then it was out of his hands for several minutes. "That wasn't so bad," Betty admitted some time later. "Did you go in the right direction?" "Who knows?" retorted Tolliver. "There wasn't time to check everything . We'll worry about that after we make your call." "Oh!" Betty looked helpless. "It's in my pocket." Tolliver sighed. In their weightless state, it was no easy task to pry her out of the spacesuit. He thought of inquiring if she needed any further help, but reminded himself that this was the boss's daughter. When Betty produced a memo giving frequency and call sign, he set about making contact. It took only a few minutes, as if the channel had been monitored expectantly, and the man who flickered into life on the screen wore a uniform. "Space Patrol?" whispered Tolliver incredulously. "That's right," said Betty. "Uh ... Daddy made arrangements for me." Tolliver held her in front of the screen so she would not float out of range of the scanner and microphone. As she spoke, he stared exasperatedly at a bulkhead, marveling at the influence of a man who could arrange for a cruiser to escort his daughter to Ganymede and wondering what was behind it all. When he heard Betty requesting assistance in arresting Jeffers and reporting the manager as the head of a ring of crooks, he began to suspect. He also noticed certain peculiarities about the remarks of the Patrolman. For one thing, though the officer seemed well acquainted with Betty, he never addressed her by the name of Koslow. For another, he accepted the request as if he had been hanging in orbit merely until learning who to go down after. They really sent her out to nail someone , Tolliver realized. Of course, she stumbled onto Jeffers by plain dumb luck. But she had an idea of what to look for. How do I get into these things? She might have got me killed! 
"We do have one trouble," he heard Betty saying. "This tractor driver, Tolliver, saved my neck by making the ship take off somehow, but he says it's set for a six-month orbit, or economy flight. Whatever they call it. I don't think he has any idea where we're headed." Tolliver pulled her back, holding her in mid-air by the slack of her sweater. "Actually, I have a fine idea," he informed the officer coldly. "I happen to be a qualified space pilot. Everything here is under control. If Miss Koslow thinks you should arrest Jeffers, you can call us later on this channel." "Miss Koslow?" repeated the spacer. "Did she tell you—well, no matter! If you'll be okay, we'll attend to the other affair immediately." He signed off promptly. The pilot faced Betty, who looked more offended than reassured at discovering his status. "This 'Miss Koslow' business," he said suspiciously. "He sounded funny about that." The girl grinned. "Relax, Tolliver," she told him. "Did you really believe Daddy would send his own little girl way out here to Ganymede to look for whoever was gypping him?" "You ... you...?" "Sure. The name's Betty Hanlon. I work for a private investigating firm. If old Koslow had a son to impersonate—" "I'd be stuck for six months in this orbit with some brash young man," Tolliver finished for her. "I guess it's better this way," he said meditatively a moment later. "Oh, come on ! Can't they get us back? How can you tell where we're going?" "I know enough to check takeoff time. It was practically due anyhow, so we'll float into the vicinity of Earth at about the right time to be picked up." He went on to explain something of the tremendous cost in fuel necessary to make more than minor corrections to their course. Even though the Patrol ship could easily catch the slow freighter, bringing along enough fuel to head back would be something else again. "We'll just have to ride it out," he said sympathetically. "The ship is provisioned according to law, and you were probably going back anyhow." "I didn't expect to so soon." "Yeah, you were pretty lucky. They'll think you're a marvel to crack the case in about three hours on Ganymede." "Great!" muttered Betty. "What a lucky girl I am!" "Yes," admitted Tolliver, "there are problems. If you like, we might get the captain of that Patrol ship to legalize the situation by TV." "I can see you're used to sweeping girls off their feet," she commented sourly. "The main problem is whether you can cook." Betty frowned at him. "I'm pretty good with a pistol," she offered, "or going over crooked books. But cook? Sorry." "Well, one of us had better learn, and I'll have other things to do." "I'll think about it," promised the girl, staring thoughtfully at the deck. Tolliver anchored himself in a seat and grinned as he thought about it too. After a while , he promised himself, I'll explain how I cut the fuel flow and see if she's detective enough to suspect that we're just orbiting Ganymede!
C. she's an entitled girl that doesn't know what she's getting into
Do you think there is a romantic connection between Brian and Crystal? A. Absolutely not. They both hate each other, they're only working together out of necessity. B. Probably. Both share similar personalities that work well together. C. Unlikely. They both have known each other for a short period in which no thoughts about romance were genuinely addressed. D. Definitely. They've been through a lot together and care about each other deeply.
MONOPOLY By Vic Phillips and Scott Roberts Sheer efficiency and good management can make a monopoly grow into being. And once it grows, someone with a tyrant mind is going to try to use it as a weapon if he can— [Transcriber's Note: This etext was produced from Astounding Science-Fiction April 1942. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] "That all, chief? Gonna quit now?" Brian Hanson looked disgustedly at Pete Brent, his lanky assistant. That was the first sign of animation he had displayed all day. "I am, but you're not," Hanson told him grimly. "Get your notes straightened up. Run those centrifuge tests and set up the still so we can get at that vitamin count early in the morning." "Tomorrow morning? Aw, for gosh sakes, chief, why don't you take a day off sometime, or better yet, a night off. It'd do you good to relax. Boy, I know a swell blonde you could go for. Wait a minute, I've got her radiophone number somewhere—just ask for Myrtle." Hanson shrugged himself out of his smock. "Never mind Myrtle, just have that equipment set up for the morning. Good night." He strode out of the huge laboratory, but his mind was still on the vitamin research they had been conducting, he barely heard the remarks that followed him. "One of these days the chief is going to have his glands catch up with him." "Not a chance," Pete Brent grunted. Brian Hanson wondered dispassionately for a moment how his assistants could fail to be as absorbed as he was by the work they were doing, then he let it go as he stepped outside the research building. He paused and let his eyes lift to the buildings that surrounded the compound. This was the administrative heart of Venus City. Out here, alone, he let his only known emotion sweep through him, pride. He had an important role in the building of this great new city. As head of the Venus Consolidated Research Organization, he was in large part responsible for the prosperity of this vigorous, young world. Venus Consolidated had built up this city and practically everything else that amounted to anything on this planet. True, there had been others, pioneers, before the company came, who objected to the expansion of the monopolistic control. But, if they could not realize that the company's regime served the best interests of the planet, they would just have to suffer the consequences of their own ignorance. There had been rumors of revolution among the disgruntled older families. He heard there had been killings, but that was nonsense. Venus Consolidated police had only powers of arrest. Anything involving executions had to be referred to the Interplanetary Council on Earth. He dismissed the whole business as he did everything else that did not directly influence his own department. He ignored the surface transport system and walked to his own apartment. This walk was part of a regular routine of physical exercise that kept his body hard and resilient in spite of long hours spent in the laboratory. As he opened the door of his apartment he heard the water running into his bath. Perfect timing. He was making that walk in precisely seven minutes, four and four-fifths seconds. He undressed and climbed into the tub, relaxing luxuriously in the exhilaration of irradiated water. He let all the problems of his work drift away, his mind was a peaceful blank. Then someone was hammering on his head. He struggled reluctantly awake. It was the door that was being attacked, not his head. 
The battering thunder continued persistently. He swore and sat up. "What do you want?" There was no answer; the hammering continued. "All right! All right! I'm coming!" He yelled, crawled out of the tub and reached for his bathrobe. It wasn't there. He swore some more and grabbed a towel, wrapping it inadequately around him; it didn't quite meet astern. He paddled wetly across the floor sounding like a flock of ducks on parade. Retaining the towel with one hand he inched the door cautiously open. "What the devil—" He stopped abruptly at the sight of a policeman's uniform. "Sorry, sir, but one of those rebels is loose in the Administration Center somewhere. We're making a check-up of all the apartments." "Well, you can check out; I haven't got any blasted rebels in here." The policeman's face hardened, then relaxed knowingly. "Oh, I see, sir. No rebels, of course. Sorry to have disturbed you. Have a good—Good night, sir," he saluted and left. Brian closed the door in puzzlement. What the devil had that flat-foot been smirking about? Well, maybe he could get his bath now. Hanson turned away from the door and froze in amazement. Through the open door of his bedroom he could see his bed neatly turned down as it should be, but the outline under the counterpane and the luxuriant mass of platinum-blond hair on the pillow was certainly no part of his regular routine. "Hello." The voice matched the calm alertness of a pair of deep-blue eyes. Brian just stared at her in numbed fascination. That was what the policeman had meant with his insinuating smirk. "Just ask for Myrtle." Pete Brent's joking words flashed back to him. Now he got it. This was probably the young fool's idea of a joke. He'd soon fix that. "All right, joke's over, you can beat it now." "Joke? I don't see anything funny, unless it's you and that suggestive towel. You should either abandon it or get one that goes all the way round." Brian slowly acquired a complexion suitable for painting fire plugs. "Shut up and throw me my dressing gown." He gritted. The girl swung her legs out of bed and Brian blinked; she was fully dressed. The snug, zippered overall suit she wore did nothing to conceal the fact that she was a female. He wrapped his bathrobe austerely around him. "Well, now what?" she asked and looked at him questioningly. "Well, what do you think?" he burst out angrily. "I'm going to finish my bath and I'd suggest you go down to the laboratory and hold hands with Pete. He'd appreciate it." He got the impression that the girl was struggling heroically to refrain from laughing and that didn't help his dignity any. He strode into the bathroom, slammed the door and climbed back into the bath. The door opened a little. "Well, good-by now." The girl said sweetly. "Remember me to the police force." "Get out of here!" he yelled and the door shut abruptly on a rippling burst of laughter. Damn women! It was getting so a man had to pack a gun with him or something. And Pete Brent. He thought with grim satisfaction of the unending extra work that was going to occur around the laboratory from now on. He sank back into the soothing liquid embrace of the bath and deliberately set his mind loose to wander in complete relaxation. A hammering thunder burst on the outer door. He sat up with a groan. "Lay off, you crazy apes!" he yelled furiously, but the pounding continued steadily. He struggled out of the bath, wrapped his damp bathrobe clammily around him and marched to the door with a seething fury of righteous anger burning within him. 
He flung the door wide, his mouth all set for a withering barrage, but he didn't get a chance. Four police constables and a sergeant swarmed into the room, shoving him away from the door. "Say! What the—" "Where is she?" the sergeant demanded. "Wherethehell's who?" "Quit stallin', bud. You know who. That female rebel who was in here." "Rebel? You're crazy! That was just ... Pete said ... rebel? Did you say rebel?" "Yeah, I said rebel, an' where is she?" "She ... why ... why ... she left, of course. You don't think I was going to have women running around in here, do you?" "She wuz in his bed when I seen her, sarge," one of the guards contributed. "But she ain't there now." "You don't think that I—" "Listen, bud, we don't do the thinkin' around here. You come on along and see the chief." Brian had had about enough. "I'm not going anywhere to see anybody. Maybe you don't know who I am. You can't arrest me." Brian Hanson, Chief of Research for Venus Consolidated, as dignified as possible in a damp bathrobe, glared out through the bars at a slightly bewildered Pete Brent. "What the devil do you want? Haven't you caused enough blasted trouble already?" "Me? For gosh sakes, chief—" "Yes, you! If sending that damn blonde to my apartment and getting me arrested is your idea of a joke—" "But, my gosh, I didn't send anybody, chief. And this is no joke. That wasn't Myrtle, that was Crystal James, old man James' daughter. They're about the oldest family on Venus. Police have been after her for months; she's a rebel and she's sure been raising plenty of hell around here. She got in and blew out the main communications control panel last night. Communications been tied up all day." Pete lowered his voice to an appreciative whisper, "Gosh, chief, I didn't know you had it in you. How long have you been in with that bunch? Is that girl as good-looking as they say she is?" "Now listen here, Brent. I don't know—" "Oh, it's all right, chief. You can trust me. I won't give you away." "There's nothing to give away, you fool!" Brian bellowed. "I don't know anything about any damn rebels. All I want is to get out of here—" "Gotcha, chief," Brent whispered understandingly. "I'll see if I can pass the word along." "Come here, you idiot!" Brian screamed after his erstwhile assistant. "Pipe down there, bud," a guard's voice cut in chillingly. Brian retired to his cell bunk and clutched his aching head in frustrated fury. For the nineteenth time Brian Hanson strode to the door of his cell and rattled the bars. "Listen here, guard, you've got to take a message to McHague. You can't hold me here indefinitely." "Shut up. Nobody ain't takin' no message to McHague. I don't care if you are—" Brian's eyes almost popped out as he saw a gloved hand reach around the guard's neck and jam a rag over his nose and mouth. Swift shadows moved expertly before his astonished gaze. Another guard was caught and silenced as he came around the end of the corridor. Someone was outside his cell door, a hooded figure which seemed, somehow, familiar. "Hello, pantless!" a voice breathed. He knew that voice! "What the devil are you doing here?" "Somebody by the name of Pete Brent tipped us off that you were in trouble because of me. But don't worry, we're going to get you out." "Damn that fool kid! Leave me alone. I don't want to get out of here that way!" he yelled wildly. "Guards! Help!" "Shut up! Do you want to get us shot?" "Sure I do. Guards! Guards!" Someone came running. "Guards are coming," a voice warned. 
He could hear the girl struggling with the lock. "Damn," she swore viciously. "This is the wrong key! Your goose is sure cooked now. Whether you like it or not, you'll hang with us when they find us trying to get you out of here." Brian felt as though something had kicked him in the stomach. She was right! He had to get out now. He wouldn't be able to explain this away. "Give me that key," he hissed and grabbed for it. He snapped two of the coigns off in the lock and went to work with the rest of the key. He had designed these escape-proof locks himself. In a few seconds the door swung open and they were fleeing silently down the jail corridor. The girl paused doubtfully at a crossing passage. "This way," he snarled and took the lead. He knew the ground plan of this jail perfectly. He had a moment of wonder at the crazy spectacle of himself, the fair-haired boy of Venus Consolidated, in his flapping bathrobe, leading a band of escaping rebels out of the company's best jail. They burst around a corner onto a startled guard. "They're just ahead of us," Brian yelled. "Come on!" "Right with you," the guard snapped and ran a few steps with them before a blackjack caught up with him and he folded into a corner. "Down this way, it's a short cut." Brian led the way to a heavily barred side door. The electric eye tripped a screaming alarm, but the broken key in Brian's hands opened the complicated lock in a matter of seconds. They were outside the jail on a side street, the door closed and the lock jammed immovably behind them. Sirens wailed. The alarm was out! The street suddenly burst into brilliance as the floodlights snapped on. Brian faltered to a stop and Crystal James pushed past him. "We've got reinforcements down here," she said, then skidded to a halt. Two guards barred the street ahead of them. Brian felt as though his stomach had fallen down around his ankles and was tying his feet up. He couldn't move. The door was jammed shut behind them, they'd have to surrender and there'd be no explaining this break. He started mentally cursing Pete Brent, when a projector beam slashed viciously by him. These guards weren't fooling! He heard a gasping grunt of pain as one of the rebels went down. They were shooting to kill. He saw a sudden, convulsive movement from the girl. A black object curved out against the lights. The sharp, ripping blast of an atomite bomb thundered along the street and slammed them to the ground. The glare left them blinded. He struggled to his feet. The guards had vanished, a shallow crater yawned in the road where they had been. "We've got to run!" the girl shouted. He started after her. Two surface transport vehicles waited around the corner. Brian and the rebels bundled into them and took away with a roar. The chase wasn't organized yet, and they soon lost themselves in the orderly rush of Venus City traffic. The two carloads of rebels cruised nonchalantly past the Administration Center and pulled into a private garage a little beyond. "What are we stopping here for?" Brian demanded. "We've got to get away." "That's just what we're doing," Crystal snapped. "Everybody out." The rebels piled out and the cars pulled away to become innocuous parts of the traffic stream. The rebels seemed to know where they were going and that gave them the edge on Brian. They followed Crystal down into the garage's repair pit. She fumbled in the darkness a moment, then a darker patch showed as a door swung open in the side of the pit. 
They filed into the solid blackness after her and the door thudded shut. The beam of a torch stabbed through the darkness and they clambered precariously down a steep, steel stairway. "Where the dickens are we?" Brian whispered hoarsely. "Oh, you don't have to whisper, we're safe enough here. This is one of the air shafts leading down to the old mines." "Old mines? What old mines?" "That's something you newcomers don't know anything about. This whole area was worked out long before Venus Consolidated came to the planet. These old tunnels run all under the city." They went five hundred feet down the air shaft before they reached a level tunnel. "What do we do? Hide here?" "I should say not. Serono Zeburzac, head of McHague's secret police will be after us now. We won't be safe anywhere near Venus City." "Don't be crazy. That Serono Zeburzac stuff is just a legend McHague keeps up to scare people with." "That's what you think," Crystal snapped. "McHague's legend got my father and he'll get all of us unless we run the whole company right off the planet." "Well, what the dickens does he look like?" Brian asked doubtfully. "I don't know, but his left hand is missing. Dad did some good shooting before he died," she said grimly. Brian was startled at the icy hardness of her voice. Two of the rebels pulled a screening tarpaulin aside and revealed one of the old-type ore cars that must have been used in the ancient mines. A brand-new atomic motor gleamed incongruously at one end. The rebels crowded into it and they went rumbling swiftly down the echoing passage. The lights of the car showed the old working, rotten and crumbling, fallen in in some places and signs of new work where the rebels had cleared away the debris of years. Brian struggled into a zippered overall suit as they followed a twisting, tortuous course for half an hour, switching from one tunnel to another repeatedly until he had lost all conception of direction. Crystal James, at the controls, seemed to know exactly where they were going. The tunnel emerged in a huge cavern that gloomed darkly away in every direction. The towering, massive remains of old machinery, eroded and rotten with age crouched like ancient, watching skeletons. "These were the old stamp mills," the girl said, and her voice seemed to be swallowed to a whisper in the vast, echoing darkness. Between two rows of sentinel ruins they came suddenly on two slim Venusian atmospheric ships. Dim light spilled over them from a ragged gash in the wall of the cavern. Brian followed Crystal into the smaller of the two ships and the rest of the rebels manned the other. "Wait a minute, how do we get out of here?" Brian demanded. "Through that hole up there," the girl said matter-of-factly. "You're crazy, you can't get through there." "Oh, yeah? Just watch this." The ship thundered to life beneath them and leaped off in a full-throttled take-off. "We're going to crash! That gap isn't wide enough!" The sides of the gap rushed in on the tips of the stubby wings. Brian braced himself for the crash, but it didn't come. At the last possible second, the ship rolled smoothly over. At the moment it flashed through the opening it was stood vertically on edge. Crystal held the ship in its roll and completed the maneuver outside the mountain while Brian struggled to get his internal economy back into some semblance of order. "That's some flying," he said as soon as he could speak. Crystal looked at him in surprise. "That's nothing. We Venusians fly almost as soon as we can walk." 
"Oh—I see," Brian said weakly and a few moments later he really did see. Two big, fast, green ships, carrying the insignia of the Venus Consolidated police, cruised suddenly out from a mountain air station. An aërial torpedo exploded in front of the rebel ship. Crystal's face set in grim lines as she pulled the ship up in a screaming climb. Brian got up off the floor. "You don't have to get excited like that," he complained. "They weren't trying to hit us." "That's what you think," Crystal muttered. "Those children don't play for peanuts." "But, girl, they're just Venus Consolidated police. They haven't got any authority to shoot anyone." "Authority doesn't make much difference to them," Crystal snapped bitterly. "They've been killing people all over the planet. What do you think this revolution is about?" "You must be mistak—" He slumped to the floor as Crystal threw the ship into a mad, rolling spin. A tremendous crash thundered close astern. "I guess that was a mistake!" Crystal yelled as she fought the controls. Brian almost got to his feet when another wild maneuver hurled him back to the floor. The police ship was right on their tail. The girl gunned her craft into a snap Immelmann and swept back on their pursuers, slicing in close over the ship. Brian's eyes bulged as he saw a long streak of paint and metal ripped off the wing of the police ship. He saw the crew battling their controls in startled terror. The ship slipped frantically away and fell into a spin. "That's them," Crystal said with satisfaction. "How are the others doing?" "Look! They're hit!" Brian felt sick. The slower rebel freight ship staggered drunkenly as a torpedo caught it and ripped away half a wing. It plunged down in flames with the white flowers of half a dozen parachutes blossoming around it. Brian watched in horror as the police ship came deliberately about. They heard its forward guns go into action. The bodies of the parachutists jerked and jumped like crazy marionettes as the bullets smashed into them. It was over in a few moments. The dead rebels drifted down into the mist-shrouded depths of the valley. "The dirty, murdering rats!" Brian's voice ripped out in a fury of outrage. "They didn't have a chance!" "Don't get excited," Crystal told him in a dead, flat voice. "That's just normal practice. If you'd stuck your nose out of your laboratory once in a while, you'd have heard of these things." "But why—" He ducked away instinctively as a flight of bullets spanged through the fuselage. "They're after us now!" Crystal's answer was to yank the ship into a rocketing climb. The police were watching for that. The big ship roared up after them. "Just follow along, suckers," Crystal invited grimly. She snapped the ship into a whip stall. For one nauseating moment they hung on nothing, then the ship fell over on its back and they screamed down in a terminal velocity dive, heading for the safety of the lower valley mists. The heavier police ship, with its higher wing-loading, could not match the maneuver. The rebel craft plunged down through the blinding fog. Half-seen, ghostly fingers of stone clutched up at them, talons of gray rock missed and fell away again as Crystal nursed the ship out of its dive. " Phew! " Brian gasped. "Well, we got away that time. How in thunder can you do it?" "Well, you don't do it on faith. Take a look at that fuel gauge! We may get as far as our headquarters—or we may not." For twenty long minutes they groped blindly through the fog, flying solely by instruments and dead reckoning. 
The needle of the fuel gauge flickered closer and closer to the danger point. They tore loose from the clinging fog as it swung firmly to "Empty." The drive sputtered and coughed and died. "That's figuring it nice and close," Crystal said in satisfaction. "We can glide in from here." "Into where?" Brian demanded. All he could see immediately ahead was the huge bulk of a mountain which blocked the entire width of the valley and soared sheer up to the high-cloud level. His eyes followed it up and up— "Look! Police ships. They've seen us!" "Maybe they haven't. Anyway, there's only one place we can land." The ship lunged straight for the mountain wall! "Are you crazy? Watch out—we'll crash!" "You leave the flying to me," Crystal snapped. She held the ship in its glide, aiming directly for the tangled foliage of the mountain face. Brian yelped and cowered instinctively back. The lush green of the mountainside swirled up to meet them. They ripped through the foliage—there was no crash. They burst through into a huge, brilliantly lighted cavern and settled to a perfect landing. Men came running. Crystal tumbled out of her ship. "Douse those lights," she shouted. "The police are outside." A tall, lean man with bulbous eyes and a face like a startled horse, rushed up to Crystal. "What do you mean by leading them here?" he yelled, waving his hands. "They jumped us when we had no fuel, and quit acting like an idiot." The man was shaking, his eyes looked wild. "They'll kill us. We've got to get out of here." "Wait, you fool. They may not even have seen us." But he was gone, running toward a group of ships lined up at the end of the cavern. "Who was that crazy coot and what is this place?" Brian demanded. "That was Gort Sterling, our leader," the girl said bitterly. "And this is our headquarters." One of the ships at the back of the cavern thundered to life, streaked across the floor and burst out through the opening Crystal's ship had left. "He hasn't got a chance! We'll be spotted for sure, now." The other rebels waited uncertainly, but not for long. There was the crescendoing roar of ships in a dive followed by the terrific crash of an explosion. "They got him!" Crystal's voice was a moan. "Oh, the fool, the fool!" "Sounded like more than one ship. They'll be after us, now. Is there any other way of getting out of this place?" "Not for ships. We'll have to walk and they'll follow us." "We've got to slow them down some way, then. I wonder how the devil they traced us? I thought we lost them in that fog." "It's that Serono Zeburzac, the traitor. He knows these mountains as well as we do." "How come?" "The Zeburzacs are one of the old families, but he sold out to McHague." "Well, what do we do now? Just stand here? It looks like everybody's leaving." "We might as well just wait," Crystal said hopelessly. "It won't do us any good to run out into the hills. Zeburzac and his men will follow." "We could slow them down some by swinging a couple of those ships around so their rocket exhausts sweep the entrance to the cavern," Brian suggested doubtfully. She looked at him steadily. "You sound like the only good rebel left. We can try it, anyway." They ran two ships out into the middle of the cavern, gunned them around and jockeyed them into position—not a moment too soon. Half a dozen police showed in brief silhouette as they slipped cautiously into the cavern, guns ready, expecting resistance. They met a dead silence. A score or more followed them without any attempt at concealment. 
Then Brian and Crystal cut loose with the drives of the two ships. Startled screams of agony burst from the crowded group of police as they were caught in the annihilating cross fire of roaring flame. They crisped and twisted, cooked to scorched horrors before they fell. A burst of thick, greasy smoke rushed out of the cavern. Two of the police, their clothes and flesh scorched and flaming, plunged as shrieking, living torches down the mountainside. Crystal was white and shaking, her face set in a mask of horror, as she climbed blindly from her ship. "Let's get away! I can smell them burning," she shuddered and covered her face with her hands. Brian grabbed her and shook her. "Snap out of it," he barked. "That's no worse than shooting helpless men in parachutes. We can't go, yet; we're not finished here." "Oh, let them shoot us! I can't go through that again!" "You don't have to. Wait here." He climbed back into one of the ships and cut the richness of the fuel mixture down till the exhaust was a lambent, shuddering stutter, verging on extinction. He dashed to the other ship and repeated the maneuver, fussing with the throttle till he had the fuel mixture adjusted to critical fineness. The beat of the stuttering exhaust seemed to catch up to the other and built to an aching pulsation. In a moment the whole mass of air in the cavern hit the frequency with a subtle, intangible thunder of vibration. Crystal screamed. "Brian! There's more police cutting in around the entrance." Brian clambered out of the ship and glanced at the glowing points in the rock where the police were cutting their way through outside the line of the exhaust flames. The pulsating thunder in the cavern crescendoed to an intolerable pitch. A huge mass of stalactites crashed to the floor. "It's time to check out," Brian shouted. Crystal led the way as they fled down the escape tunnel. The roaring crash of falling rock was a continuous, increasing avalanche of sound in the cavern behind them. They emerged from the tunnel on the face of the mountain, several hundred yards to the east of the cavern entrance. The ground shook and heaved beneath them. "The whole side of the mountain's sliding," Crystal screamed. "Run!" Brian shoved her and they plunged madly through the thick tangle of jungle away from the slide. Huge boulders leaped and smashed through the matted bush around them. Crystal went down as the ground slipped from under her. Brian grabbed her and a tree at the same time. The tree leaned and crashed down the slope, the whole jungle muttered and groaned and came to life as it joined the roaring rush of the slide. They were tumbled irresistibly downward, riding the edge of the slide for terrifying minutes till it stilled and left them bruised and shaken in a tangle of torn vegetation. The remains of two police ships, caught without warning in the rush as they attempted to land, stuck up grotesquely out of the foot of the slide. The dust was settling away. A flock of brilliant blue, gliding lizards barking in raucous terror, fled down the valley. Then they were gone and the primeval silence settled back into place. Brian and Crystal struggled painfully to solid ground. Crystal gazed with a feeling of awe at the devastated mountainside. "How did you do it?" "It's a matter of harmonics," Brian explained. "If you hit the right vibratory combination, you can shake anything down. But now that we've made a mess of the old homestead, what do we do?" "Walk," Crystal said laconically. 
She led the way as they started scrambling through the jungle up the mountainside. "Where are we heading for?" Brian grunted as he struggled along. "The headquarters of the Carlton family. They're the closest people we can depend on. They've kept out of the rebellion, but they're on our side. They've helped us before."
C. Unlikely. They have known each other for only a short period, during which thoughts of romance were never genuinely addressed.
How are FHIR and RDF combined?
### Introduction ::: Healthcare Information Technology and the Interoperability Problem Since the early 1970s, healthcare information technology has moved toward comprehensive electronic medical records (EMR) in which almost every aspect of the patient's healthcare has been digitized and retained indefinitelyBIBREF0, which has vastly improved the efficiency with which patient information can be stored, communicated, and analyzed. At the same time, the healthcare industry has moved from a fee-for-service model to a value-based model, facilitated in part by the existence of such a record and in part by public policy, such as the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 BIBREF1, which provided financial incentives for the "meaningful use" of electronic medical records. The realization of a holistic medical record has been slowed by various obstacles, chief among them the problem of interoperability between systems. The problem of interoperability arises almost as soon as a healthcare organization begins to choose a vendor for its electronic medical record, when it is faced with a choice between an architecture based on a single monolithic system or a so-called best-of-breed approach involving multiple discrete systems, each chosen for its superior performance in a narrow domain. The monolith claims to handle all aspects of healthcare information management; the best-of-breed approach entails a multiplicity of systems, each of which may be superior in its domain but which are not smoothly integrated. A major difference between the two architectures is how they solve the problem of interoperability. In the case of the monolith, the problem is solved by the system vendor, at least in principle, but at the cost to the customer of a loss of choice. In the best-of-breed approach, the problem of interoperability is shifted onto the customer, who incurs an often hefty cost in the form of a more complex systems architecture and the resulting need for specialized hardware, software, and staff to maintain it. In a best-of-breed approach, the need for instantaneous intersystems communication is typically handled via an Enterprise Service Bus (ESB)BIBREF2, which ensures real-time message delivery to subscribing systems. Additionally, if the data is to be analyzed in combination, rather than in isolation within the silo of a single system, it must be recombined and stored outside of these systems. This is typically done in an Enterprise Data Warehouse (EDW)BIBREF3 and requires further specialized hardware, software, and staff. However, most EDWs are based on a batch-loading system that runs during off-peak hours for the previous calendar day's businessBIBREF3; thus, while an EDW can be a powerful tool for retrospective analysis, it is unsuitable for real-time applications. Interoperability is a serious challenge that modern healthcare systems must address in order to adequately serve their patients. In this paper we demonstrate a hitherto underused approach that combines the attractive aspects of both an enterprise service bus and an enterprise data warehouse to arrive at real-time analytics. ### Background ::: Health Level Seven Version 2 (HL7v2) HL7v2 is a healthcare messaging standard developed by the standards organization Health Level Seven International.
It first emerged in 1988 and today is the most widely used such standard, having been adopted by over ninety-five percent of health systems in the United States and thirty-five countries worldwide BIBREF4. As such, it is something of a universal medium in the field of healthcare interoperability, yet it is terse and, without specialized training and access to the standard reference, cryptic. Each HL7 message describes an event in a healthcare workflow and breaks down hierarchically into segments, fields, components, subcomponents, repeated components, and so on. There are well over one hundred types of messages and several times as many types of segments in HL7v2. The current version of the specification, for HL7 v2.8, is well over 2,500 pages long and contains nearly one million words. BIBREF0 Partly as a consequence of this complexity, health interoperability has become a specialized field, replete with certifications and training and entire careers built on knowledge of HL7v2. An example HL7 message describing the following information is shown in Figure FIGREF4. The PID (Patient Identification) segment contains the demographic information of the patient. Eve E. Everywoman was born on 1962-03-20 and lives in Statesville OH. Her patient ID number (presumably assigned to her by the Good Health Hospital) is 555-44-4444. The OBR (Observation Request) segment identifies the observation as it was originally ordered: 15545 GLUCOSE. The observation was ordered by Patricia Primary MD and performed by Howard Hippocrates MD. The OBX (Observation) segment contains the results of the observation: 182 mg/dl. ### Background ::: Health Level Seven Fast Healthcare Interoperability Resources (HL7 FHIR) FHIR BIBREF5 is a new open standard for healthcare data developed by the same standards organization that developed HL7v2. However, whereas HL7v2 uses an idiosyncratic data exchange format, FHIR uses data exchange formats based on those already in wide use on the World-Wide Web such as Extensible Markup Language (XML) and JavaScript Object Notation (JSON) BIBREF6, as well as the web's familiar transfer control protocols such as HyperText Transfer Protocol Secure (HTTPS) and Representational State Transfer (REST) BIBREF6 and system of contextual hyperlinks implemented with Uniform Resource Locators / Identifiers (URL/URI) BIBREF7. This design choice simplifies interoperability and discoverability and enables applications to be built rapidly on top of FHIR by the large number of engineers already familiar with web application design without a steep learning curve. In contrast to HL7v2, which is based on events in a healthcare workflow such as admit, discharge, and transfer, FHIR is built on the notion of conceptual entities from the healthcare domain, such as Patient, Encounter, and Observation, i.e. resources. Currently, FHIR encompasses 143 resources, each of which is described abstractly in the FHIR standard with the attributes Name, Flags, Cardinality, Type, and Description & Constraints. BIBREF7. In a concrete implementation of FHIR, resources are serialized to one of the data exchange formats listed above. An example of a FHIR XML message is shown in Figure FIGREF5. ### Background ::: Semantic Web The term Semantic Web BIBREF8 denotes an interconnected machine-readable network of information. In some ways it is analogous to the World-Wide Web, but with some crucial differences.
The most important similarity is in the vision for the two technologies: Like the World-Wide Web, the Semantic Web was envisioned as a way for users from different institutions, countries, disciplines, etc. to exchange information openly and in doing so to add to the sum of human knowledge. The difference, however, is in the different emphases put on human readability versus machine readability: Whereas the World-Wide Web was intended to be visually rendered by one of any number of web browsers before being read by humans and therefore prioritizes fault tolerance and general compatibility over precision, the semantic web prioritizes precision and logical rigor in order for the information contained in it to be machine readable and used for logical inference. The similarities continue in the technologies used to implement the two webs. Information in both the Semantic Web and the World-Wide Web is intended to be accessed using the familiar data exchange protocol Hypertext Transfer Protocol (HTTP) and addressed using Uniform Resource Identifiers (URI) for the Semantic Web and Uniform Resource Locators (URL) for the World-Wide Web that tell the agent/browser how to find linked information. Even the data exchange formats are remarkably similar: The World-Wide Web uses Hypertext Markup Language (HTML)BIBREF9, a tree-structured subset of Standard Generalized Markup Language (SGML)BIBREF10, whereas the Semantic Web uses a variety of tree-structured formats such as XML, JSON, Terse RDF Triple Language (i.e. Turtle/TTL)BIBREF11, etc. The most significant difference between the World-Wide Web and the Semantic Web is in the type of information that they encode. The Semantic Web delivers a payload of simple logical statements known as triples, each consisting of a subject, predicate, and object, whereas the World-Wide Web delivers a series of directives to the web browser that govern the layout of the rendered page as well as the content of the page, in the form of text, images, videos, scripts, and so on. This difference in payloads corresponds to their different purposes – the payload is delivered in the first case to an intelligent agent and in the second case to a web browser. In more technical terms, the semantic web can be thought of as a distributed directed graph whose vertices are resources and whose edges are statements describing those resources. In its openness and decentralized nature, it bears some resemblance to the World Wide Web; however, whereas the World Wide Web consists of ad hoc, unsynchronized data presented in a variety of formats, the semantic web is a machine-readable body of information that can be synchronized while still coming from a variety of sources. ### Background ::: Resource Description Framework (RDF) RDF is the backbone of the semantic webBIBREF8. It is described as a framework, rather than a protocol or a standard, because it is an abstract model of information whose stated goal is "to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines (a priori) the semantics of any application domain." BIBREF12 Its concrete realization is typically a serialization into one of several formats including XML, JSON, TTL, etc. The basic unit of information in RDF is a statement expressed as a logical triple; that is, a statement of the form <subject> <predicate> <object>, in which the predicate expresses a relationship between the subject and the object: for instance, bloodPressure :value 120.
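To make the triple model concrete, here is a minimal sketch using Apache Jena, the same RDF library the Method section later uses for storage; the namespace, resource name, and property name are invented for the example and are not FHIR terms.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class TripleSketch {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String ns = "http://example.org/vitals#";  // hypothetical namespace

        // One statement: <bloodPressureReading1> <value> 120
        Resource reading = model.createResource(ns + "bloodPressureReading1"); // subject
        Property value = model.createProperty(ns, "value");                    // predicate
        reading.addProperty(value, model.createTypedLiteral(120));             // object (a literal)

        model.write(System.out, "TURTLE"); // serialize the one-statement graph as Turtle
    }
}
```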
The subject must be a resource, that is, an object consisting of one or more statements, and the object may be either a literal, that is, a simple numeric or textual value, or another resource. The predicate describes some aspect or property of the subject. Because both the subject and the object can be resources, the object may also be described by statements in which it is the subject, leading to a complex graph structure. A group of statements can be used to perform inference on their resources, thus creating new statements and enriching the semantic universe of the data set. For instance, the canonical syllogism "Socrates is a man; all men are mortal; therefore, Socrates is mortal" can be reproduced in the two statements Socrates :isA man and man :is mortal, resulting in a synthesized third statement: Socrates :is mortal. RDF supports "inference, shared semantics across multiple standards and data formats, data integration, semantic data validation, compliance enforcement, SPARQL [SPARQL Protocol and RDF Query Language (SPARQL)] queries and other uses." BIBREF13. ### Background ::: FHIR/RDF One of the several formats into which FHIR can be serialized is RDF. However, because RDF was designed as an abstract information model and FHIR was designed for operational use in a healthcare setting, there is the potential for a slight mismatch between the models. This comes up in two ways: One, RDF makes statements of fact, whereas FHIR makes records of events. The example given in the FHIR documentation is the difference between "patient x has viral pneumonia" (statement of fact) and "Dr. Jones diagnosed patient x with viral pneumonia" (record of event). Two, RDF is intended to have the property of monotonicity, meaning that previous facts cannot be invalidated by new facts. The example given for this mismatch is "a modifier extension indicates that the surrounding element's meaning will likely be misunderstood if the modifier extension is not understood." The potential for serious error resulting from this mismatch is small, but it is worth bearing in mind when designing information systems. ### Background ::: SPARQL Protocol and RDF Query Language (SPARQL) RDF has an associated query language that can be used to search for matching statements, known as SPARQL. Although syntactically and semantically based on Structured Query Language (SQL), the information model over which it searches is RDF's directed graph of resources and statements, not the familiar relations stored in a relational database. The syntax is beyond the scope of this paper, but in general SPARQL queries outline the shape of the graph they wish to find. For an example SPARQL query that searches for blood pressure readings over 120 mm Hg, see Figure FIGREF6. ### Method At a high level, the semantic enrichment engine is designed to take healthcare data in a variety of formats as input and store it in a triplestore database that users can query. In this way, the engine acts as both a collector, receiving messages from numerous sources, and a bus for delivering messages to multiple destinations, as well as a real-time analytics platform. For example, a message from a vital signs monitor and from a registration system can be coalesced into a new stream containing blood pressure, temperature, and laboratory values for use in a machine learning model to predict sepsis. To support future large-scale operations, a multi-protocol message passing system was used for inter-module communication.
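Figure FIGREF6 is not reproduced in this text, so as a rough stand-in, the sketch below runs a SPARQL query with a similar threshold filter over a small in-memory Jena model; the vocabulary (v:systolic) and the sample values are assumptions made for illustration, not FHIR terms.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class SparqlSketch {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String ns = "http://example.org/vitals#"; // hypothetical namespace
        model.createResource(ns + "reading1")
             .addProperty(model.createProperty(ns, "systolic"), model.createTypedLiteral(135));
        model.createResource(ns + "reading2")
             .addProperty(model.createProperty(ns, "systolic"), model.createTypedLiteral(110));

        // Select readings whose systolic value exceeds the 120 threshold mentioned above
        String query =
            "PREFIX v: <" + ns + ">\n" +
            "SELECT ?reading ?value WHERE { ?reading v:systolic ?value . FILTER(?value > 120) }";

        try (QueryExecution exec = QueryExecutionFactory.create(query, model)) {
            ResultSet results = exec.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.getResource("reading") + " = " + row.getLiteral("value"));
            }
        }
    }
}
```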
This modular, message-passing design also allows different components to be swapped out seamlessly, provided they continue to communicate via the expected interface. Routines were developed to simulate input data based on the authors' experience with real healthcare data. The reasons for this choice were twofold: One, healthcare data can be high in incidental complexity, requiring one-off code to handle unusual inputs, but not necessarily in such a way as to significantly alter the fundamental engineering choices in a semantic enrichment engine such as this one. Two, healthcare data is strictly regulated, and the process for obtaining access to healthcare data for research can be cumbersome and time-consuming. A simplified set of input data, in a variety of different formats that occur frequently in a healthcare setting, was used for simulation. In a production setting, the Java module that generates simulation data would be replaced by either a data source that directly writes to the input message queue or a Java module that intercepts or extracts production data, transforms it as needed, and writes it to the input message queue. A component-level view of the systems architecture is shown in Figure FIGREF7. ### Method ::: Class Hierarchy The project was written in Java, with each major component in its own package. There is a top-level class named ActiveMQEnabled that handles common tasks, such as connecting to the message broker, logging, event handling, and other such functionality. Each type of component in the pipeline - input, encoder, store, query, output, and application - is a subclass of ActiveMQEnabled as well as a superclass to specific types of those components. Most components are able both to send and receive messages, with certain exceptions: for example, inputs can only send and outputs can only receive. Stores can both receive and send, but in the concrete implementation in this project, the TDB store only receives (queries are better handled as timed polls, rather than being event-driven). ### Method ::: Inputs In the first stage of the module, simulated inputs represent a variety of healthcare entities and arrive in a variety of formats: patients in a pipe-delimited list, encounters as FHIR messages, and observations as HL7v2 messages. As discussed in the Background section, all of these are widely used input formats in modern health systems and realistically represent the heterogeneous message exchanges that are likely to occur in a real healthcare setting. Each input is configurable with regard to message timing and frequency, and the vital signs can be made to simulate various conditions such as hypertension or hypothermia. An example of a generated vital sign is shown in Figure FIGREF8. ### Method ::: Encoder The encoder stage itself has two stages. In the first, input messages arriving at queues named according to the convention "INPUT.ENTITY.FORMAT" are retrieved, parsed, and transformed into internal representations of common domain objects, in this case Patient, Encounter, and Observation. In the second stage, these internal representations are transformed into internal representations of RDF graphs of FHIR resources and written out to the next message queue. By decoupling the parsing phase from the RDF-generating phase, the number of parsing and generating routines required for N sources and M resource types is reduced from N x M to N + M.
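The ActiveMQEnabled source itself is not shown, so the following is a speculative reconstruction of what such a shared superclass might look like using the standard ActiveMQ/JMS API; the method names, queue handling, and error handling are assumptions rather than the project's actual code.

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

/** Hypothetical reconstruction of the shared messaging superclass described above. */
public abstract class ActiveMQEnabled {
    private final Session session;

    protected ActiveMQEnabled(String brokerUrl) throws Exception {
        Connection connection = new ActiveMQConnectionFactory(brokerUrl).createConnection();
        connection.start();
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    }

    /** Publish a text payload to a queue named with the INPUT.ENTITY.FORMAT convention. */
    protected void send(String queueName, String body) throws Exception {
        MessageProducer producer = session.createProducer(session.createQueue(queueName));
        producer.send(session.createTextMessage(body));
    }

    /** Subscribe to a queue and hand each text payload to the subclass. */
    protected void listen(String queueName) throws Exception {
        MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
        consumer.setMessageListener(message -> {
            try {
                handle(((TextMessage) message).getText());
            } catch (Exception e) {
                e.printStackTrace(); // logging is simplified in this sketch
            }
        });
    }

    /** Inputs, encoders, stores, and outputs would each implement their own handling. */
    protected abstract void handle(String body);
}
```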
This decoupling also allows parsing and generating jobs to be written in parallel and by different developers using the common internal representations as an intermediate layer. For instance, one developer could be writing the code to parse an HL7 ADT (admit/discharge/transfer) message while another developer was writing the code to turn this message into Patient, Encounter, and Observation resources. (Note that a single HL7 message can be used to create multiple FHIR resources BIBREF14). An example of a POJO to FHIR/RDF message encoder class is shown in Figure FIGREF9. ### Method ::: Store The store stage writes RDF-encoded statements to a triplestore database (TDB). For this implementation, the database was Apache Jena Triplestore Database (TDB) BIBREF15, which operates as a local on-disk database, although it could equally be a distributed in-memory cache or other implementation in production. It is at this point that the incoming messages are truly conformed to a universal model, as TDB does not record any information relating to encoding. An example of an RDF to TDB (triplestore database) storage class is shown in Figure FIGREF10. ### Method ::: Query The query stage polls the triplestore database for RDF graphs matching specified criteria, for instance, low blood pressure combined with low body temperature and high pulse rate, indicating hypothermia, or patients with blood pressure readings over a certain threshold, indicating hypertension. It passes matching patients on to the output stage for data capture or immediate use in applications. SPARQL queries against FHIR/RDF (see Figure FIGREF6) can often be complex and verbose, simply because a high level of detail was required to represent healthcare data unambiguously in FHIR, and an equally high level of detail was required to extract it unambiguously. As a means of simplifying the work required to query the data, we considered a two-phase design in which the first layer would extract the relevant data from the TDB database in great detail before using SPARQL's CONSTRUCT syntax to build a simplified representation of the data for use by the second layer. This idea has potential, but after a few tries at writing the code to implement it, there was too much loss of detail for it to be worth pursuing in this iteration. In the end, the default option of writing a detailed, if verbose, RDF query once was deemed a better option than the added complexity and potential loss of fidelity of the two-layer approach. ### Method ::: Output In the output stage, the results of the queries in the previous stage are written out to an output destination such as a text file or a screen. This differs from the Application stage in that the output is written immediately to a sink such as a file or screen on the local computer. Its use in this project was limited to debugging. ### Method ::: Application In the application stage, a variety of applications (complex event processors, common data models, machine learning models, etc.) receive the outputs of the queries from the prior stages and use them as inputs to particular applications. A high-level view of how the semantic encoder might be used in clinical workflow is shown in Figure FIGREF11. Several applications presented themselves as potentially benefiting from a semantic enrichment engine such as this one. One such application was complex event processing (CEP), in which streams of data are analyzed in search of events in real timeBIBREF16.
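As a sketch of the store stage described above, the snippet below appends an incoming FHIR/RDF graph to an on-disk Apache Jena TDB dataset inside a write transaction; the class name and directory are illustrative, and this is not the code from Figure FIGREF10.

```java
import org.apache.jena.query.Dataset;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.tdb.TDBFactory;

public class TdbStoreSketch {
    private final Dataset dataset;

    public TdbStoreSketch(String directory) {
        // Local on-disk triplestore, as described for the store stage
        this.dataset = TDBFactory.createDataset(directory);
    }

    /** Append an incoming FHIR/RDF graph inside a write transaction. */
    public void store(Model incoming) {
        dataset.begin(ReadWrite.WRITE);
        try {
            dataset.getDefaultModel().add(incoming);
            dataset.commit();
        } finally {
            dataset.end();
        }
    }
}
```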
In CEP, more complex events can be derived from simple events, so that a number of individually innocuous events may add up to either an opportunity or a threat event. In a healthcare setting, this could mean monitoring patient vital signs and flagging them as high, low, or normal, then analyzing the combination of vital signs for a condition or set of conditions. Additionally, a patient's individual health conditions, such as comorbidities, recent procedures, and so on could be used to inform the meaning of the instantaneous vital signs as they are received. Using data from the TDB store, we were able to write several queries in Esper, a well-known complex event processing engineBIBREF17, to detect conditions that were initially simulated by the vital signs input, such as hypothermia or hypertension. To some extent, the RDF queries used to feed Esper overlapped with the capabilities of Esper itself, although Esper's query language EPL is much more versatile than SPARQL for event processing. Another such application was the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM)BIBREF18. This is an analytical database intended to collate data from multiple partner data sources and conform it to a common representation, using standardized vocabularies such as LOINCBIBREF19 and SNOMED-CTBIBREF20 in order to facilitate collaborative research. Using data queried from the TDB store, we were able to build several data-loading jobs to populate an OMOP-CDM database. This application takes advantage of the semantic enrichment engine's ability to conform data from disparate sources, since by the application stage all the data has been conformed to FHIR/RDF and is ready to be loaded to the OMOP database with only one transformation (from FHIR/RDF to OMOP schemas). ### Method ::: Validation Health Level Seven International (HL7) provides a FHIR validator, which was useful for ensuring that the FHIR generated by the encoder was correctly formed. ShEx (Shape Expressions) BIBREF21 is a language used for describing the expected shape of RDF and testing it for conformity to that shape. Its syntax is similar to Turtle and SPARQL, while its semantics resemble those of regular expression languages such as RelaxNG BIBREF22. We were limited in our ability to validate FHIR conformance due to limitations of the FHIR validation tool (vague error messages, program crashes, etc.). ### Method ::: Challenges Our needs were twofold and, at first, apparently contradictory. The first was to store data from disparate sources so that the sources could be joined up and benefit from synergies among the different semantic components embedded in the data. The second was to answer queries about the data over a finite time range. The challenge was that the mechanism that was to trigger the execution of a query, the receipt of a message from the store, happened with such frequency that the query engine quickly became overloaded and unable to respond in a timely fashion to new requests. This necessitated a redesign of parts of the encoder module and the query engine, such that each resource was timestamped when it was encoded and each query specified a time range within which to return results. Prior to this redesign, the query engine was querying the triple store each time a message arrived without specifying a time bound, resulting in a constantly increasing number of results that eventually would overwhelm the system's capabilities. Another challenge was that RDF does not easily support streamsBIBREF23.
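As a concrete illustration of the time-bounded polling just described, the sketch below runs a SPARQL query against the on-disk TDB dataset with a filter on a hypothetical :ingestedAt timestamp property; the property name, dataset directory, and cutoff value are all assumptions made for the example, not the project's actual schema.

```java
import org.apache.jena.query.Dataset;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.query.ResultSet;
import org.apache.jena.query.ResultSetFormatter;
import org.apache.jena.tdb.TDBFactory;

public class TimeBoundedQuerySketch {
    public static void main(String[] args) {
        Dataset dataset = TDBFactory.createDataset("tdb-data"); // same kind of on-disk store as above

        // Only return observations stamped after a cutoff; a fixed value is used here for
        // illustration, whereas the engine would compute the bound from the previous poll time.
        String query =
            "PREFIX v: <http://example.org/vitals#>\n" +
            "PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>\n" +
            "SELECT ?obs ?value WHERE {\n" +
            "  ?obs v:systolic ?value ; v:ingestedAt ?t .\n" +
            "  FILTER(?t >= \"2020-01-01T00:00:00Z\"^^xsd:dateTime)\n" +
            "}";

        dataset.begin(ReadWrite.READ);
        try (QueryExecution exec = QueryExecutionFactory.create(query, dataset)) {
            ResultSet results = exec.execSelect();
            ResultSetFormatter.out(System.out, results); // print the bounded result set
        } finally {
            dataset.end();
        }
    }
}
```

Bounding each poll in this way keeps the result set proportional to the new data since the last poll rather than to the whole history of the store.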
Because RDF lacks streaming semantics, each query returns all matching results, not only the new results since the last query. This means the result size of the query increases monotonically until the system is overwhelmed. To design around this, we timestamped each entity as it arrived and used this timestamp as a filter in the subsequent queries. This worked well and is not unlike what CEP systems do natively. ### Conclusion The semantic enrichment engine described in this paper has broad applicability in healthcare operations and research. The data exchange standards, protocols, databases, query languages, and so forth used to implement this system are freely available. This system has characteristics of both an enterprise service bus and an enterprise data warehouse, but augments the analytical capability of the former and addresses the high latency of the latter. We expect the system can be used to inform artificial intelligence for inference, populate structured databases with enriched data streams, and derive new data for use in machine learning training. Figure 2: Example FHIR Bundle and Header Message Figure 3: Example SPARQL Query Figure 4: Semantic Enrichment Engine Architecture Figure 5: Java Simulated HL7 Message Figure 6: POJO to FHIR/RDF Encoder Figure 7: FHIR/RDF to TDB Storage Class Figure 8: Semantic Engine Use in Clinical Workflow
RDF was designed as an abstract information model and FHIR was designed for operational use in a healthcare setting; RDF makes statements of fact, whereas FHIR makes records of events; RDF is intended to have the property of monotonicity, meaning that previous facts cannot be invalidated by new facts
Why is Conners upset with Bridges? A. Conners was chewed out by a Senator because Bridges was trying to get information, B. Conners has a deal with the State Department that the paper won't print certain stories. C. Conners received a report that Bridges was behaving unprofessionally. D. Conners has a deal with the White House that the paper won't print certain stories.
The saucer was interesting, but where was the delegate? The DELEGATE FROM VENUS By HENRY SLESAR ILLUSTRATOR NOVICK Everybody was waiting to see what the delegate from Venus looked like. And all they got for their patience was the biggest surprise since David clobbered Goliath. " Let me put it this way," Conners said paternally. "We expect a certain amount of decorum from our Washington news correspondents, and that's all I'm asking for." Jerry Bridges, sitting in the chair opposite his employer's desk, chewed on his knuckles and said nothing. One part of his mind wanted him to play it cagey, to behave the way the newspaper wanted him to behave, to protect the cozy Washington assignment he had waited four years to get. But another part of him, a rebel part, wanted him to stay on the trail of the story he felt sure was about to break. "I didn't mean to make trouble, Mr. Conners," he said casually. "It just seemed strange, all these exchanges of couriers in the past two days. I couldn't help thinking something was up." "Even if that's true, we'll hear about it through the usual channels," Conners frowned. "But getting a senator's secretary drunk to obtain information—well, that's not only indiscreet, Bridges. It's downright dirty." Jerry grinned. "I didn't take that kind of advantage, Mr. Conners. Not that she wasn't a toothsome little dish ..." "Just thank your lucky stars that it didn't go any further. And from now on—" He waggled a finger at him. "Watch your step." Jerry got up and ambled to the door. But he turned before leaving and said: "By the way. What do you think is going on?" "I haven't the faintest idea." "Don't kid me, Mr. Conners. Think it's war?" "That'll be all, Bridges." The reporter closed the door behind him, and then strolled out of the building into the sunlight. He met Ruskin, the fat little AP correspondent, in front of the Pan-American Building on Constitution Avenue. Ruskin was holding the newspaper that contained the gossip-column item which had started the whole affair, and he seemed more interested in the romantic rather than political implications. As he walked beside him, he said: "So what really happened, pal? That Greta babe really let down her hair?" "Where's your decorum?" Jerry growled. Ruskin giggled. "Boy, she's quite a dame, all right. I think they ought to get the Secret Service to guard her. She really fills out a size 10, don't she?" "Ruskin," Jerry said, "you have a low mind. For a week, this town has been acting like the 39 Steps , and all you can think about is dames. What's the matter with you? Where will you be when the big mushroom cloud comes?" "With Greta, I hope," Ruskin sighed. "What a way to get radioactive." They split off a few blocks later, and Jerry walked until he came to the Red Tape Bar &amp; Grill, a favorite hangout of the local journalists. There were three other newsmen at the bar, and they gave him snickering greetings. He took a small table in the rear and ate his meal in sullen silence. It wasn't the newsmen's jibes that bothered him; it was the certainty that something of major importance was happening in the capitol. There had been hourly conferences at the White House, flying visits by State Department officials, mysterious conferences involving members of the Science Commission. So far, the byword had been secrecy. They knew that Senator Spocker, chairman of the Congressional Science Committee, had been involved in every meeting, but Senator Spocker was unavailable. His secretary, however, was a little more obliging ... 
Jerry looked up from his coffee and blinked when he saw who was coming through the door of the Bar &amp; Grill. So did every other patron, but for different reasons. Greta Johnson had that effect upon men. Even the confining effect of a mannishly-tailored suit didn't hide her outrageously feminine qualities. She walked straight to his table, and he stood up. "They told me you might be here," she said, breathing hard. "I just wanted to thank you for last night." "Look, Greta—" Wham! Her hand, small and delicate, felt like a slab of lead when it slammed into his cheek. She left a bruise five fingers wide, and then turned and stalked out. He ran after her, the restaurant proprietor shouting about the unpaid bill. It took a rapid dog-trot to reach her side. "Greta, listen!" he panted. "You don't understand about last night. It wasn't the way that lousy columnist said—" She stopped in her tracks. "I wouldn't have minded so much if you'd gotten me drunk. But to use me, just to get a story—" "But I'm a reporter , damn it. It's my job. I'd do it again if I thought you knew anything." She was pouting now. "Well, how do you suppose I feel, knowing you're only interested in me because of the Senator? Anyway, I'll probably lose my job, and then you won't have any use for me." "Good-bye, Greta," Jerry said sadly. "What?" "Good-bye. I suppose you won't want to see me any more." "Did I say that?" "It just won't be any use. We'll always have this thing between us." She looked at him for a moment, and then touched his bruised cheek with a tender, motherly gesture. "Your poor face," she murmured, and then sighed. "Oh, well. I guess there's no use fighting it. Maybe if I did tell you what I know, we could act human again." "Greta!" "But if you print one word of it, Jerry Bridges, I'll never speak to you again!" "Honey," Jerry said, taking her arm, "you can trust me like a brother." "That's not the idea," Greta said stiffly. In a secluded booth at the rear of a restaurant unfrequented by newsmen, Greta leaned forward and said: "At first, they thought it was another sputnik." " Who did?" "The State Department, silly. They got reports from the observatories about another sputnik being launched by the Russians. Only the Russians denied it. Then there were joint meetings, and nobody could figure out what the damn thing was." "Wait a minute," Jerry said dizzily. "You mean to tell me there's another of those metal moons up there?" "But it's not a moon. That's the big point. It's a spaceship." "A what ?" "A spaceship," Greta said coolly, sipping lemonade. "They have been in contact with it now for about three days, and they're thinking of calling a plenary session of the UN just to figure out what to do about it. The only hitch is, Russia doesn't want to wait that long, and is asking for a hurry-up summit meeting to make a decision." "A decision about what?" "About the Venusians, of course." "Greta," Jerry said mildly, "I think you're still a little woozy from last night." "Don't be silly. The spaceship's from Venus; they've already established that. And the people on it—I guess they're people—want to know if they can land their delegate." "Their what?" "Their delegate. They came here for some kind of conference, I guess. They know about the UN and everything, and they want to take part. They say that with all the satellites being launched, that our affairs are their affairs, too. It's kind of confusing, but that's what they say." "You mean these Venusians speak English?" "And Russian. And French. And German. 
And everything I guess. They've been having radio talks with practically every country for the past three days. Like I say, they want to establish diplomatic relations or something. The Senator thinks that if we don't agree, they might do something drastic, like blow us all up. It's kind of scary." She shivered delicately. "You're taking it mighty calm," he said ironically. "Well, how else can I take it? I'm not even supposed to know about it, except that the Senator is so careless about—" She put her fingers to her lips. "Oh, dear, now you'll really think I'm terrible." "Terrible? I think you're wonderful!" "And you promise not to print it?" "Didn't I say I wouldn't?" "Y-e-s. But you know, you're a liar sometimes, Jerry. I've noticed that about you." The press secretary's secretary, a massive woman with gray hair and impervious to charm, guarded the portals of his office with all the indomitable will of the U. S. Marines. But Jerry Bridges tried. "You don't understand, Lana," he said. "I don't want to see Mr. Howells. I just want you to give him something." "My name's not Lana, and I can't deliver any messages." "But this is something he wants to see." He handed her an envelope, stamped URGENT. "Do it for me, Hedy. And I'll buy you the flashiest pair of diamond earrings in Washington." "Well," the woman said, thawing slightly. "I could deliver it with his next batch of mail." "When will that be?" "In an hour. He's in a terribly important meeting right now." "You've got some mail right there. Earrings and a bracelet to match." She looked at him with exasperation, and then gathered up a stack of memorandums and letters, his own envelope atop it. She came out of the press secretary's office two minutes later with Howells himself, and Howells said: "You there, Bridges. Come in here." "Yes, sir !" Jerry said, breezing by the waiting reporters with a grin of triumph. There were six men in the room, three in military uniform. Howells poked the envelope towards Jerry, and snapped: "This note of yours. Just what do you think it means?" "You know better than I do, Mr. Howells. I'm just doing my job; I think the public has a right to know about this spaceship that's flying around—" His words brought an exclamation from the others. Howells sighed, and said: "Mr. Bridges, you don't make it easy for us. It's our opinion that secrecy is essential, that leakage of the story might cause panic. Since you're the only unauthorized person who knows of it, we have two choices. One of them is to lock you up." Jerry swallowed hard. "The other is perhaps more practical," Howells said. "You'll be taken into our confidence, and allowed to accompany those officials who will be admitted to the landing site. But you will not be allowed to relay the story to the press until such a time as all correspondents are informed. That won't give you a 'scoop' if that's what you call it, but you'll be an eyewitness. That should be worth something." "It's worth a lot," Jerry said eagerly. "Thanks, Mr. Howells." "Don't thank me, I'm not doing you any personal favor. Now about the landing tonight—" "You mean the spaceship's coming down?" "Yes. A special foreign ministers conference was held this morning, and a decision was reached to accept the delegate. Landing instructions are being given at Los Alamos, and the ship will presumably land around midnight tonight. There will be a jet leaving Washington Airport at nine, and you'll be on it. Meanwhile, consider yourself in custody." 
The USAF jet transport wasn't the only secrecy-shrouded aircraft that took off that evening from Washington Airport. But Jerry Bridges, sitting in the rear seat flanked by two Sphinx-like Secret Service men, knew that he was the only passenger with non-official status aboard. It was only a few minutes past ten when they arrived at the air base at Los Alamos. The desert sky was cloudy and starless, and powerful searchlights probed the thick cumulus. There were sleek, purring black autos waiting to rush the air passengers to some unnamed destination. They drove for twenty minutes across a flat ribbon of desert road, until Jerry sighted what appeared to be a circle of newly-erected lights in the middle of nowhere. On the perimeter, official vehicles were parked in orderly rows, and four USAF trailer trucks were in evidence, their radarscopes turning slowly. There was activity everywhere, but it was well-ordered and unhurried. They had done a good job of keeping the excitement contained. He was allowed to leave the car and stroll unescorted. He tried to talk to some of the scurrying officials, but to no avail. Finally, he contented himself by sitting on the sand, his back against the grill of a staff car, smoking one cigarette after another. As the minutes ticked off, the activity became more frenetic around him. Then the pace slowed, and he knew the appointed moment was approaching. Stillness returned to the desert, and tension was a tangible substance in the night air. The radarscopes spun slowly. The searchlights converged in an intricate pattern. Then the clouds seemed to part! "Here she comes!" a voice shouted. And in a moment, the calm was shattered. At first, he saw nothing. A faint roar was started in the heavens, and it became a growl that increased in volume until even the shouting voices could no longer be heard. Then the crisscrossing lights struck metal, glancing off the gleaming body of a descending object. Larger and larger the object grew, until it assumed the definable shape of a squat silver funnel, falling in a perfect straight line towards the center of the light-ringed area. When it hit, a dust cloud obscured it from sight. A loudspeaker blared out an unintelligible order, but its message was clear. No one moved from their position. Finally, a three-man team, asbestos-clad, lead-shielded, stepped out from the ring of spectators. They carried geiger counters on long poles before them. Jerry held his breath as they approached the object; only when they were yards away did he appreciate its size. It wasn't large; not more than fifteen feet in total circumference. One of the three men waved a gloved hand. "It's okay," a voice breathed behind him. "No radiation ..." Slowly, the ring of spectators closed tighter. They were twenty yards from the ship when the voice spoke to them. "Greetings from Venus," it said, and then repeated the phrase in six languages. "The ship you see is a Venusian Class 7 interplanetary rocket, built for one-passenger. It is clear of all radiation, and is perfectly safe to approach. There is a hatch which may be opened by an automatic lever in the side. Please open this hatch and remove the passenger." An Air Force General whom Jerry couldn't identify stepped forward. He circled the ship warily, and then said something to the others. They came closer, and he touched a small lever on the silvery surface of the funnel. A door slid open. "It's a box!" someone said. "A crate—" "Colligan! Moore! Schaffer! 
Lend a hand here—" A trio came forward and hoisted the crate out of the ship. Then the voice spoke again; Jerry deduced that it must have been activated by the decreased load of the ship. "Please open the crate. You will find our delegate within. We trust you will treat him with the courtesy of an official emissary." They set to work on the crate, its gray plastic material giving in readily to the application of their tools. But when it was opened, they stood aside in amazement and consternation. There were a variety of metal pieces packed within, protected by a filmy packing material. "Wait a minute," the general said. "Here's a book—" He picked up a gray-bound volume, and opened its cover. "'Instructions for assembling Delegate,'" he read aloud. "'First, remove all parts and arrange them in the following order. A-1, central nervous system housing. A-2 ...'" He looked up. "It's an instruction book," he whispered. "We're supposed to build the damn thing." The Delegate, a handsomely constructed robot almost eight feet tall, was pieced together some three hours later, by a team of scientists and engineers who seemed to find the Venusian instructions as elementary as a blueprint in an Erector set. But simple as the job was, they were obviously impressed by the mechanism they had assembled. It stood impassive until they obeyed the final instruction. "Press Button K ..." They found button K, and pressed it. The robot bowed. "Thank you, gentlemen," it said, in sweet, unmetallic accents. "Now if you will please escort me to the meeting place ..." It wasn't until three days after the landing that Jerry Bridges saw the Delegate again. Along with a dozen assorted government officials, Army officers, and scientists, he was quartered in a quonset hut in Fort Dix, New Jersey. Then, after seventy-two frustrating hours, he was escorted by Marine guard into New York City. No one told him his destination, and it wasn't until he saw the bright strips of light across the face of the United Nations building that he knew where the meeting was to be held. But his greatest surprise was yet to come. The vast auditorium which housed the general assembly was filled to its capacity, but there were new faces behind the plaques which designated the member nations. He couldn't believe his eyes at first, but as the meeting got under way, he knew that it was true. The highest echelons of the world's governments were represented, even—Jerry gulped at the realization—Nikita Khrushchev himself. It was a summit meeting such as he had never dreamed possible, a summit meeting without benefit of long foreign minister's debate. And the cause of it all, a placid, highly-polished metal robot, was seated blithely at a desk which bore the designation: VENUS. The robot delegate stood up. "Gentlemen," it said into the microphone, and the great men at the council tables strained to hear the translator's version through their headphones, "Gentlemen, I thank you for your prompt attention. I come as a Delegate from a great neighbor planet, in the interests of peace and progress for all the solar system. I come in the belief that peace is the responsibility of individuals, of nations, and now of worlds, and that each is dependent upon the other. I speak to you now through the electronic instrumentation which has been created for me, and I come to offer your planet not merely a threat, a promise, or an easy solution—but a challenge." The council room stirred. 
"Your earth satellites have been viewed with interest by the astronomers of our world, and we foresee the day when contact between our planets will be commonplace. As for ourselves, we have hitherto had little desire to explore beyond our realm, being far too occupied with internal matters. But our isolation cannot last in the face of your progress, so we believe that we must take part in your affairs. "Here, then, is our challenge. Continue your struggle of ideas, compete with each other for the minds of men, fight your bloodless battles, if you know no other means to attain progress. But do all this without unleashing the terrible forces of power now at your command. Once unleashed, these forces may or may not destroy all that you have gained. But we, the scientists of Venus, promise you this—that on the very day your conflict deteriorates into heedless violence, we will not stand by and let the ugly contagion spread. On that day, we of Venus will act swiftly, mercilessly, and relentlessly—to destroy your world completely." Again, the meeting room exploded in a babble of languages. "The vessel which brought me here came as a messenger of peace. But envision it, men of Earth, as a messenger of war. Unstoppable, inexorable, it may return, bearing a different Delegate from Venus—a Delegate of Death, who speaks not in words, but in the explosion of atoms. Think of thousands of such Delegates, fired from a vantage point far beyond the reach of your retaliation. This is the promise and the challenge that will hang in your night sky from this moment forward. Look at the planet Venus, men of Earth, and see a Goddess of Vengeance, poised to wreak its wrath upon those who betray the peace." The Delegate sat down. Four days later, a mysterious explosion rocked the quiet sands of Los Alamos, and the Venus spacecraft was no more. Two hours after that, the robot delegate, its message delivered, its mission fulfilled, requested to be locked inside a bombproof chamber. When the door was opened, the Delegate was an exploded ruin. The news flashed with lightning speed over the world, and Jerry Bridges' eyewitness accounts of the incredible event was syndicated throughout the nation. But his sudden celebrity left him vaguely unsatisfied. He tried to explain his feeling to Greta on his first night back in Washington. They were in his apartment, and it was the first time Greta had consented to pay him the visit. "Well, what's bothering you?" Greta pouted. "You've had the biggest story of the year under your byline. I should think you'd be tickled pink." "It's not that," Jerry said moodily. "But ever since I heard the Delegate speak, something's been nagging me." "But don't you think he's done good? Don't you think they'll be impressed by what he said?" "I'm not worried about that. I think that damn robot did more for peace than anything that's ever come along in this cockeyed world. But still ..." Greta snuggled up to him on the sofa. "You worry too much. Don't you ever think of anything else? You should learn to relax. It can be fun." She started to prove it to him, and Jerry responded the way a normal, healthy male usually does. But in the middle of an embrace, he cried out: "Wait a minute!" "What's the matter?" "I just thought of something! Now where the hell did I put my old notebooks?" He got up from the sofa and went scurrying to a closet. From a debris of cardboard boxes, he found a worn old leather brief case, and cackled with delight when he found the yellowed notebooks inside. "What are they?" 
Greta said. "My old school notebooks. Greta, you'll have to excuse me. But there's something I've got to do, right away!" "That's all right with me," Greta said haughtily. "I know when I'm not wanted." She took her hat and coat from the hall closet, gave him one last chance to change his mind, and then left. Five minutes later, Jerry Bridges was calling the airlines. It had been eleven years since Jerry had walked across the campus of Clifton University, heading for the ivy-choked main building. It was remarkable how little had changed, but the students seemed incredibly young. He was winded by the time he asked the pretty girl at the desk where Professor Martin Coltz could be located. "Professor Coltz?" She stuck a pencil to her mouth. "Well, I guess he'd be in the Holland Laboratory about now." "Holland Laboratory? What's that?" "Oh, I guess that was after your time, wasn't it?" Jerry felt decrepit, but managed to say: "It must be something new since I was here. Where is this place?" He followed her directions, and located a fresh-painted building three hundred yards from the men's dorm. He met a student at the door, who told him that Professor Coltz would be found in the physics department. The room was empty when Jerry entered, except for the single stooped figure vigorously erasing a blackboard. He turned when the door opened. If the students looked younger, Professor Coltz was far older than Jerry remembered. He was a tall man, with an unruly confusion of straight gray hair. He blinked when Jerry said: "Hello, Professor. Do you remember me? Jerry Bridges?" "Of course! I thought of you only yesterday, when I saw your name in the papers—" They sat at facing student desks, and chatted about old times. But Jerry was impatient to get to the point of his visit, and he blurted out: "Professor Coltz, something's been bothering me. It bothered me from the moment I heard the Delegate speak. I didn't know what it was until last night, when I dug out my old college notebooks. Thank God I kept them." Coltz's eyes were suddenly hooded. "What do you mean, Jerry?" "There was something about the Robot's speech that sounded familiar—I could have sworn I'd heard some of the words before. I couldn't prove anything until I checked my old notes, and here's what I found." He dug into his coat pocket and produced a sheet of paper. He unfolded it and read aloud. "'It's my belief that peace is the responsibility of individuals, of nations, and someday, even of worlds ...' Sound familiar, Professor?" Coltz shifted uncomfortably. "I don't recall every silly thing I said, Jerry." "But it's an interesting coincidence, isn't it, Professor? These very words were spoken by the Delegate from Venus." "A coincidence—" "Is it? But I also remember your interest in robotics. I'll never forget that mechanical homing pigeon you constructed. And you've probably learned much more these past eleven years." "What are you driving at, Jerry?" "Just this, Professor. I had a little daydream, recently, and I want you to hear it. I dreamed about a group of teachers, scientists, and engineers, a group who were suddenly struck by an exciting, incredible idea. A group that worked in the quiet and secrecy of a University on a fantastic scheme to force the idea of peace into the minds of the world's big shots. Does my dream interest you, Professor?" "Go on." "Well, I dreamt that this group would secretly launch an earth satellite of their own, and arrange for the nose cone to come down safely at a certain time and place. 
They would install a marvelous electronic robot within the cone, ready to be assembled. They would beam a radio message to earth from the cone, seemingly as if it originated from their 'spaceship.' Then, when the Robot was assembled, they would speak through it to demand peace for all mankind ..." "Jerry, if you do this—" "You don't have to say it, Professor, I know what you're thinking. I'm a reporter, and my business is to tell the world everything I know. But if I did it, there might not be a world for me to write about, would there? No, thanks, Professor. As far as I'm concerned, what I told you was nothing more than a daydream." Jerry braked the convertible to a halt, and put his arm around Greta's shoulder. She looked up at the star-filled night, and sighed romantically. Jerry pointed. "That one." Greta shivered closer to him. "And to think what that terrible planet can do to us!" "Oh, I dunno. Venus is also the Goddess of Love." He swung his other arm around her, and Venus winked approvingly. THE END Transcriber's Note: This etext was produced from Amazing Science Fiction Stories October 1958. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
C. Conners received a report that Bridges was behaving unprofessionally.
How are people on Earth able to help with the search for a missing prospector? A. They can shine a light to make searching easier B. Their equipment is advanced enough to connect to the prospector's radio C. They can boost the signals of the scanners on the moon D. They can see different sides of the moon than the people on the moon can
ALL DAY SEPTEMBER By ROGER KUYKENDALL Illustrated by van Dongen [Transcriber's Note: This etext was produced from Astounding Science Fiction June 1959. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Some men just haven't got good sense. They just can't seem to learn the most fundamental things. Like when there's no use trying—when it's time to give up because it's hopeless.... The meteor, a pebble, a little larger than a match head, traveled through space and time since it came into being. The light from the star that died when the meteor was created fell on Earth before the first lungfish ventured from the sea. In its last instant, the meteor fell on the Moon. It was impeded by Evans' tractor. It drilled a small, neat hole through the casing of the steam turbine, and volitized upon striking the blades. Portions of the turbine also volitized; idling at eight thousand RPM, it became unstable. The shaft tried to tie itself into a knot, and the blades, damaged and undamaged were spit through the casing. The turbine again reached a stable state, that is, stopped. Permanently stopped. It was two days to sunrise, where Evans stood. It was just before sunset on a spring evening in September in Sydney. The shadow line between day and night could be seen from the Moon to be drifting across Australia. Evans, who had no watch, thought of the time as a quarter after Australia. Evans was a prospector, and like all prospectors, a sort of jackknife geologist, selenologist, rather. His tractor and equipment cost two hundred and fifty thousand dollars. Fifty thousand was paid for. The rest was promissory notes and grubstake shares. When he was broke, which was usually, he used his tractor to haul uranium ore and metallic sodium from the mines at Potter's dike to Williamson Town, where the rockets landed. When he was flush, he would prospect for a couple of weeks. Once he followed a stampede to Yellow Crater, where he thought for a while that he had a fortune in chromium. The chromite petered out in a month and a half, and he was lucky to break even. Evans was about three hundred miles east of Williamson Town, the site of the first landing on the Moon. Evans was due back at Williamson Town at about sunset, that is, in about sixteen days. When he saw the wrecked turbine, he knew that he wouldn't make it. By careful rationing, he could probably stretch his food out to more than a month. His drinking water—kept separate from the water in the reactor—might conceivably last just as long. But his oxygen was too carefully measured; there was a four-day reserve. By diligent conservation, he might make it last an extra day. Four days reserve—plus one is five—plus sixteen days normal supply equals twenty-one days to live. In seventeen days he might be missed, but in seventeen days it would be dark again, and the search for him, if it ever began, could not begin for thirteen more days. At the earliest it would be eight days too late. "Well, man, 'tis a fine spot you're in now," he told himself. "Let's find out how bad it is indeed," he answered. He reached for the light switch and tried to turn it on. The switch was already in the "on" position. "Batteries must be dead," he told himself. "What batteries?" he asked. "There're no batteries in here, the power comes from the generator." "Why isn't the generator working, man?" he asked. He thought this one out carefully. The generator was not turned by the main turbine, but by a small reciprocating engine. 
The steam, however, came from the same boiler. And the boiler, of course, had emptied itself through the hole in the turbine. And the condenser, of course— "The condenser!" he shouted. He fumbled for a while, until he found a small flashlight. By the light of this, he reinspected the steam system, and found about three gallons of water frozen in the condenser. The condenser, like all condensers, was a device to convert steam into water, so that it could be reused in the boiler. This one had a tank and coils of tubing in the center of a curved reflector that was positioned to radiate the heat of the steam into the cold darkness of space. When the meteor pierced the turbine, the water in the condenser began to boil. This boiling lowered the temperature, and the condenser demonstrated its efficiency by quickly freezing the water in the tank. Evans sealed the turbine from the rest of the steam system by closing the shut-off valves. If there was any water in the boiler, it would operate the engine that drove the generator. The water would condense in the condenser, and with a little luck, melt the ice in there. Then, if the pump wasn't blocked by ice, it would return the water to the boiler. But there was no water in the boiler. Carefully he poured a cup of his drinking water into a pipe that led to the boiler, and resealed the pipe. He pulled on a knob marked "Nuclear Start/Safety Bypass." The water that he had poured into the boiler quickly turned into steam, and the steam turned the generator briefly. Evans watched the lights flicker and go out, and he guessed what the trouble was. "The water, man," he said, "there is not enough to melt the ice in the condenser." He opened the pipe again and poured nearly a half-gallon of water into the boiler. It was three days' supply of water, if it had been carefully used. It was one day's supply if used wastefully. It was ostentatious luxury for a man with a month's supply of water and twenty-one days to live. The generator started again, and the lights came on. They flickered as the boiler pressure began to fail, but the steam had melted some of the ice in the condenser, and the water pump began to function. "Well, man," he breathed, "there's a light to die by." The sun rose on Williamson Town at about the same time it rose on Evans. It was an incredibly brilliant disk in a black sky. The stars next to the sun shone as brightly as though there were no sun. They might have appeared to waver slightly, if they were behind outflung corona flares. If they did, no one noticed. No one looked toward the sun without dark filters. When Director McIlroy came into his office, he found it lighted by the rising sun. The light was a hot, brilliant white that seemed to pierce the darkest shadows of the room. He moved to the round window, screening his eyes from the light, and adjusted the polaroid shade to maximum density. The sun became an angry red brown, and the room was dark again. McIlroy decreased the density again until the room was comfortably lighted. The room felt stuffy, so he decided to leave the door to the inner office open. He felt a little guilty about this, because he had ordered that all doors in the survey building should remain closed except when someone was passing through them. This was to allow the air-conditioning system to function properly, and to prevent air loss in case of the highly improbable meteor damage. McIlroy thought that on the whole, he was disobeying his own orders no more flagrantly than anyone else in the survey. 
McIlroy had no illusions about his ability to lead men. Or rather, he did have one illusion; he thought that he was completely unfit as a leader. It was true that his strictest orders were disobeyed with cheerful contempt, but it was also true his mildest requests were complied with eagerly and smoothly. Everyone in the survey except McIlroy realized this, and even he accepted this without thinking about it. He had fallen into the habit of suggesting mildly anything that he wanted done, and writing orders he didn't particularly care to have obeyed. For example, because of an order of his stating that there would be no alcoholic beverages within the survey building, the entire survey was assured of a constant supply of home-made, but passably good liquor. Even McIlroy enjoyed the surreptitious drinking. "Good morning, Mr. McIlroy," said Mrs. Garth, his secretary. Morning to Mrs. Garth was simply the first four hours after waking. "Good morning indeed," answered McIlroy. Morning to him had no meaning at all, but he thought in the strictest sense that it would be morning on the Moon for another week. "Has the power crew set up the solar furnace?" he asked. The solar furnace was a rough parabola of mirrors used to focus the sun's heat on anything that it was desirable to heat. It was used mostly, from sun-up to sun-down, to supplement the nuclear power plant. "They went out about an hour ago," she answered, "I suppose that's what they were going to do." "Very good, what's first on the schedule?" "A Mr. Phelps to see you," she said. "How do you do, Mr. Phelps," McIlroy greeted him. "Good afternoon," Mr. Phelps replied. "I'm here representing the Merchants' Bank Association." "Fine," McIlroy said, "I suppose you're here to set up a bank." "That's right, I just got in from Muroc last night, and I've been going over the assets of the Survey Credit Association all morning." "I'll certainly be glad to get them off my hands," McIlroy said. "I hope they're in good order." "There doesn't seem to be any profit," Mr. Phelps said. "That's par for a nonprofit organization," said McIlroy. "But we're amateurs, and we're turning this operation over to professionals. I'm sure it will be to everyone's satisfaction." "I know this seems like a silly question. What day is this?" "Well," said McIlroy, "that's not so silly. I don't know either." "Mrs. Garth," he called, "what day is this?" "Why, September, I think," she answered. "I mean what day ." "I don't know, I'll call the observatory." There was a pause. "They say what day where?" she asked. "Greenwich, I guess, our official time is supposed to be Greenwich Mean Time." There was another pause. "They say it's September fourth, one thirty a.m. " "Well, there you are," laughed McIlroy, "it isn't that time doesn't mean anything here, it just doesn't mean the same thing." Mr. Phelps joined the laughter. "Bankers' hours don't mean much, at any rate," he said. The power crew was having trouble with the solar furnace. Three of the nine banks of mirrors would not respond to the electric controls, and one bank moved so jerkily that it could not be focused, and it threatened to tear several of the mirrors loose. "What happened here?" Spotty Cade, one of the electrical technicians asked his foreman, Cowalczk, over the intercommunications radio. "I've got about a hundred pinholes in the cables out here. It's no wonder they don't work." "Meteor shower," Cowalczk answered, "and that's not half of it. 
Walker says he's got a half dozen mirrors cracked or pitted, and Hoffman on bank three wants you to replace a servo motor. He says the bearing was hit." "When did it happen?" Cade wanted to know. "Must have been last night, at least two or three days ago. All of 'em too small for Radar to pick up, and not enough for Seismo to get a rumble." "Sounds pretty bad." "Could have been worse," said Cowalczk. "How's that?" "Wasn't anybody out in it." "Hey, Chuck," another technician, Lehman, broke in, "you could maybe get hurt that way." "I doubt it," Cowalczk answered, "most of these were pinhead size, and they wouldn't go through a suit." "It would take a pretty big one to damage a servo bearing," Cade commented. "That could hurt," Cowalczk admitted, "but there was only one of them." "You mean only one hit our gear," Lehman said. "How many missed?" Nobody answered. They could all see the Moon under their feet. Small craters overlapped and touched each other. There was—except in the places that men had obscured them with footprints—not a square foot that didn't contain a crater at least ten inches across, there was not a square inch without its half-inch crater. Nearly all of these had been made millions of years ago, but here and there, the rim of a crater covered part of a footprint, clear evidence that it was a recent one. After the sun rose, Evans returned to the lava cave that he had been exploring when the meteor hit. Inside, he lifted his filter visor, and found that the light reflected from the small ray that peered into the cave door lighted the cave adequately. He tapped loose some white crystals on the cave wall with his geologist's hammer, and put them into a collector's bag. "A few mineral specimens would give us something to think about, man. These crystals," he said, "look a little like zeolites, but that can't be, zeolites need water to form, and there's no water on the Moon." He chipped a number of other crystals loose and put them in bags. One of them he found in a dark crevice had a hexagonal shape that puzzled him. One at a time, back in the tractor, he took the crystals out of the bags and analyzed them as well as he could without using a flame which would waste oxygen. The ones that looked like zeolites were zeolites, all right, or something very much like it. One of the crystals that he thought was quartz turned out to be calcite, and one of the ones that he was sure could be nothing but calcite was actually potassium nitrate. "Well, now," he said, "it's probably the largest natural crystal of potassium nitrate that anyone has ever seen. Man, it's a full inch across." All of these needed water to form, and their existence on the Moon puzzled him for a while. Then he opened the bag that had contained the unusual hexagonal crystals, and the puzzle resolved itself. There was nothing in the bag but a few drops of water. What he had taken to be a type of rock was ice, frozen in a niche that had never been warmed by the sun. The sun rose to the meridian slowly. It was a week after sunrise. The stars shone coldly, and wheeled in their slow course with the sun. Only Earth remained in the same spot in the black sky. The shadow line crept around until Earth was nearly dark, and then the rim of light appeared on the opposite side. For a while Earth was a dark disk in a thin halo, and then the light came to be a crescent, and the line of dawn began to move around Earth. The continents drifted across the dark disk and into the crescent. 
The people on Earth saw the full moon set about the same time that the sun rose. Nickel Jones was the captain of a supply rocket. He made trips from and to the Moon about once a month, carrying supplies in and metal and ores out. At this time he was visiting with his old friend McIlroy. "I swear, Mac," said Jones, "another season like this, and I'm going back to mining." "I thought you were doing pretty well," said McIlroy, as he poured two drinks from a bottle of Scotch that Jones had brought him. "Oh, the money I like, but I will say that I'd have more if I didn't have to fight the union and the Lunar Trade Commission." McIlroy had heard all of this before. "How's that?" he asked politely. "You may think it's myself running the ship," Jones started on his tirade, "but it's not. The union it is that says who I can hire. The union it is that says how much I must pay, and how large a crew I need. And then the Commission ..." The word seemed to give Jones an unpleasant taste in his mouth, which he hurriedly rinsed with a sip of Scotch. "The Commission," he continued, making the word sound like an obscenity, "it is that tells me how much I can charge for freight." McIlroy noticed that his friend's glass was empty, and he quietly filled it again. "And then," continued Jones, "if I buy a cargo up here, the Commission it is that says what I'll sell it for. If I had my way, I'd charge only fifty cents a pound for freight instead of the dollar forty that the Commission insists on. That's from here to Earth, of course. There's no profit I could make by cutting rates the other way." "Why not?" asked McIlroy. He knew the answer, but he liked to listen to the slightly Welsh voice of Jones. "Near cost it is now at a dollar forty. But what sense is there in charging the same rate to go either way when it takes about a seventh of the fuel to get from here to Earth as it does to get from there to here?" "What good would it do to charge fifty cents a pound?" asked McIlroy. "The nickel, man, the tons of nickel worth a dollar and a half on Earth, and not worth mining here; the low-grade ores of uranium and vanadium, they need these things on Earth, but they can't get them as long as it isn't worth the carrying of them. And then, of course, there's the water we haven't got. We could afford to bring more water for more people, and set up more distilling plants if we had the money from the nickel. "Even though I say it who shouldn't, two-eighty a quart is too much to pay for water." Both men fell silent for a while. Then Jones spoke again: "Have you seen our friend Evans lately? The price of chromium has gone up, and I think he could ship some of his ore from Yellow Crater at a profit." "He's out prospecting again. I don't expect to see him until sun-down." "I'll likely see him then. I won't be loaded for another week and a half. Can't you get in touch with him by radio?" "He isn't carrying one. Most of the prospectors don't. They claim that a radio that won't carry beyond the horizon isn't any good, and one that will bounce messages from Earth takes up too much room." "Well, if I don't see him, you let him know about the chromium." "Anything to help another Welshman, is that the idea?" "Well, protection it is that a poor Welshman needs from all the English and Scots. Speaking of which—" "Oh, of course," McIlroy grinned as he refilled the glasses. " Slainte, McIlroy, bach. " [Health, McIlroy, man.] " Slainte mhor, bach. " [Great Health, man.] 
The sun was halfway to the horizon, and Earth was a crescent in the sky when Evans had quarried all the ice that was available in the cave. The thought grew on him as he worked that this couldn't be the only such cave in the area. There must be several more bubbles in the lava flow. Part of his reasoning proved correct. That is, he found that by chipping, he could locate small bubbles up to an inch in diameter, each one with its droplet of water. The average was about one per cent of the volume of each bubble filled with ice. A quarter of a mile from the tractor, Evans found a promising looking mound of lava. It was rounded on top, and it could easily be the dome of a bubble. Suddenly, Evans noticed that the gauge on the oxygen tank of his suit was reading dangerously near empty. He turned back to his tractor, moving as slowly as he felt safe in doing. Running would use up oxygen too fast. He was halfway there when the pressure warning light went on, and the signal sounded inside his helmet. He turned on his ten-minute reserve supply, and made it to the tractor with about five minutes left. The air purifying apparatus in the suit was not as efficient as the one in the tractor; it wasted oxygen. By using the suit so much, Evans had already shortened his life by several days. He resolved not to leave the tractor again, and reluctantly abandoned his plan to search for a large bubble. The sun stood at half its diameter above the horizon. The shadows of the mountains stretched out to touch the shadows of the other mountains. The dawning line of light covered half of Earth, and Earth turned beneath it. Cowalczk itched under his suit, and the sweat on his face prickled maddeningly because he couldn't reach it through his helmet. He pushed his forehead against the faceplate of his helmet and rubbed off some of the sweat. It didn't help much, and it left a blurred spot in his vision. That annoyed him. "Is everyone clear of the outlet?" he asked. "All clear," he heard Cade report through the intercom. "How come we have to blow the boilers now?" asked Lehman. "Because I say so," Cowalczk shouted, surprised at his outburst and ashamed of it. "Boiler scale," he continued, much calmer. "We've got to clean out the boilers once a year to make sure the tubes in the reactor don't clog up." He squinted through his dark visor at the reactor building, a gray concrete structure a quarter of a mile distant. "It would be pretty bad if they clogged up some night." "Pressure's ten and a half pounds," said Cade. "Right, let her go," said Cowalczk. Cade threw a switch. In the reactor building, a relay closed. A motor started turning, and the worm gear on the motor opened a valve on the boiler. A stream of muddy water gushed into a closed vat. When the vat was about half full, the water began to run nearly clear. An electric eye noted that fact and a light in front of Cade turned on. Cade threw the switch back the other way, and the relay in the reactor building opened. The motor turned and the gears started to close the valve. But a fragment of boiler scale held the valve open. "Valve's stuck," said Cade. "Open it and close it again," said Cowalczk. The sweat on his forehead started to run into his eyes. He banged his hand on his faceplate in an unconscious attempt to wipe it off. He cursed silently, and wiped it off on the inside of his helmet again. This time, two drops ran down the inside of his faceplate. "Still don't work," said Cade. "Keep trying," Cowalczk ordered. 
"Lehman, get a Geiger counter and come with me, we've got to fix this thing." Lehman and Cowalczk, who were already suited up started across to the reactor building. Cade, who was in the pressurized control room without a suit on, kept working the switch back and forth. There was light that indicated when the valve was open. It was on, and it stayed on, no matter what Cade did. "The vat pressure's too high," Cade said. "Let me know when it reaches six pounds," Cowalczk requested. "Because it'll probably blow at seven." The vat was a light plastic container used only to decant sludge out of the water. It neither needed nor had much strength. "Six now," said Cade. Cowalczk and Lehman stopped halfway to the reactor. The vat bulged and ruptured. A stream of mud gushed out and boiled dry on the face of the Moon. Cowalczk and Lehman rushed forward again. They could see the trickle of water from the discharge pipe. The motor turned the valve back and forth in response to Cade's signals. "What's going on out there?" demanded McIlroy on the intercom. "Scale stuck in the valve," Cowalczk answered. "Are the reactors off?" "Yes. Vat blew. Shut up! Let me work, Mac!" "Sorry," McIlroy said, realizing that this was no time for officials. "Let me know when it's fixed." "Geiger's off scale," Lehman said. "We're probably O.K. in these suits for an hour," Cowalczk answered. "Is there a manual shut-off?" "Not that I know of," Lehman answered. "What about it, Cade?" "I don't think so," Cade said. "I'll get on the blower and rouse out an engineer." "O.K., but keep working that switch." "I checked the line as far as it's safe," said Lehman. "No valve." "O.K.," Cowalczk said. "Listen, Cade, are the injectors still on?" "Yeah. There's still enough heat in these reactors to do some damage. I'll cut 'em in about fifteen minutes." "I've found the trouble," Lehman said. "The worm gear's loose on its shaft. It's slipping every time the valve closes. There's not enough power in it to crush the scale." "Right," Cowalczk said. "Cade, open the valve wide. Lehman, hand me that pipe wrench!" Cowalczk hit the shaft with the back of the pipe wrench, and it broke at the motor bearing. Cowalczk and Lehman fitted the pipe wrench to the gear on the valve, and turned it. "Is the light off?" Cowalczk asked. "No," Cade answered. "Water's stopped. Give us some pressure, we'll see if it holds." "Twenty pounds," Cade answered after a couple of minutes. "Take her up to ... no, wait, it's still leaking," Cowalczk said. "Hold it there, we'll open the valve again." "O.K.," said Cade. "An engineer here says there's no manual cutoff." "Like Hell," said Lehman. Cowalczk and Lehman opened the valve again. Water spurted out, and dwindled as they closed the valve. "What did you do?" asked Cade. "The light went out and came on again." "Check that circuit and see if it works," Cowalczk instructed. There was a pause. "It's O.K.," Cade said. Cowalczk and Lehman opened and closed the valve again. "Light is off now," Cade said. "Good," said Cowalczk, "take the pressure up all the way, and we'll see what happens." "Eight hundred pounds," Cade said, after a short wait. "Good enough," Cowalczk said. "Tell that engineer to hold up a while, he can fix this thing as soon as he gets parts. Come on, Lehman, let's get out of here." "Well, I'm glad that's over," said Cade. "You guys had me worried for a while." "Think we weren't worried?" Lehman asked. "And it's not over." "What?" Cade asked. "Oh, you mean the valve servo you two bashed up?" 
"No," said Lehman, "I mean the two thousand gallons of water that we lost." "Two thousand?" Cade asked. "We only had seven hundred gallons reserve. How come we can operate now?" "We picked up twelve hundred from the town sewage plant. What with using the solar furnace as a radiator, we can make do." "Oh, God, I suppose this means water rationing again." "You're probably right, at least until the next rocket lands in a couple of weeks." PROSPECTOR FEARED LOST ON MOON IPP Williamson Town, Moon, Sept. 21st. Scientific survey director McIlroy released a statement today that Howard Evans, a prospector is missing and presumed lost. Evans, who was apparently exploring the Moon in search of minerals was due two days ago, but it was presumed that he was merely temporarily delayed. Evans began his exploration on August 25th, and was known to be carrying several days reserve of oxygen and supplies. Director McIlroy has expressed a hope that Evans will be found before his oxygen runs out. Search parties have started from Williamson Town, but telescopic search from Palomar and the new satellite observatory are hindered by the fact that Evans is lost on the part of the Moon which is now dark. Little hope is held for radio contact with the missing man as it is believed he was carrying only short-range, intercommunications equipment. Nevertheless, receivers are ... Captain Nickel Jones was also expressing a hope: "Anyway, Mac," he was saying to McIlroy, "a Welshman knows when his luck's run out. And never a word did he say." "Like as not, you're right," McIlroy replied, "but if I know Evans, he'd never say a word about any forebodings." "Well, happen I might have a bit of Welsh second sight about me, and it tells me that Evans will be found." McIlroy chuckled for the first time in several days. "So that's the reason you didn't take off when you were scheduled," he said. "Well, yes," Jones answered. "I thought that it might happen that a rocket would be needed in the search." The light from Earth lighted the Moon as the Moon had never lighted Earth. The great blue globe of Earth, the only thing larger than the stars, wheeled silently in the sky. As it turned, the shadow of sunset crept across the face that could be seen from the Moon. From full Earth, as you might say, it moved toward last quarter. The rising sun shone into Director McIlroy's office. The hot light formed a circle on the wall opposite the window, and the light became more intense as the sun slowly pulled over the horizon. Mrs. Garth walked into the director's office, and saw the director sleeping with his head cradled in his arms on the desk. She walked softly to the window and adjusted the shade to darken the office. She stood looking at McIlroy for a moment, and when he moved slightly in his sleep, she walked softly out of the office. A few minutes later she was back with a cup of coffee. She placed it in front of the director, and shook his shoulder gently. "Wake up, Mr. McIlroy," she said, "you told me to wake you at sunrise, and there it is, and here's Mr. Phelps." McIlroy woke up slowly. He leaned back in his chair and stretched. His neck was stiff from sleeping in such an awkward position. "'Morning, Mr. Phelps," he said. "Good morning," Phelps answered, dropping tiredly into a chair. "Have some coffee, Mr. Phelps," said Mrs. Garth, handing him a cup. "Any news?" asked McIlroy. "About Evans?" Phelps shook his head slowly. "Palomar called in a few minutes back. Nothing to report and the sun was rising there. 
Australia will be in position pretty soon. Several observatories there. Then Capetown. There are lots of observatories in Europe, but most of them are clouded over. Anyway the satellite observatory will be in position by the time Europe is." McIlroy was fully awake. He glanced at Phelps and wondered how long it had been since he had slept last. More than that, McIlroy wondered why this banker, who had never met Evans, was losing so much sleep about finding him. It began to dawn on McIlroy that nearly the whole population of Williamson Town was involved, one way or another, in the search. The director turned to ask Phelps about this fact, but the banker was slumped in his chair, fast asleep with his coffee untouched. It was three hours later that McIlroy woke Phelps. "They've found the tractor," McIlroy said. "Good," Phelps mumbled, and then as comprehension came; "That's fine! That's just line! Is Evans—?" "Can't tell yet. They spotted the tractor from the satellite observatory. Captain Jones took off a few minutes ago, and he'll report back as soon as he lands. Hadn't you better get some sleep?" Evans was carrying a block of ice into the tractor when he saw the rocket coming in for a landing. He dropped the block and stood waiting. When the dust settled from around the tail of the rocket, he started to run forward. The air lock opened, and Evans recognized the vacuum suited figure of Nickel Jones. "Evans, man!" said Jones' voice in the intercom. "Alive you are!" "A Welshman takes a lot of killing," Evans answered. Later, in Evans' tractor, he was telling his story: "... And I don't know how long I sat there after I found the water." He looked at the Goldburgian device he had made out of wire and tubing. "Finally I built this thing. These caves were made of lava. They must have been formed by steam some time, because there's a floor of ice in all of 'em. "The idea didn't come all at once, it took a long time for me to remember that water is made out of oxygen and hydrogen. When I remembered that, of course, I remembered that it can be separated with electricity. So I built this thing. "It runs an electric current through water, lets the oxygen loose in the room, and pipes the hydrogen outside. It doesn't work automatically, of course, so I run it about an hour a day. My oxygen level gauge shows how long." "You're a genius, man!" Jones exclaimed. "No," Evans answered, "a Welshman, nothing more." "Well, then," said Jones, "are you ready to start back?" "Back?" "Well, it was to rescue you that I came." "I don't need rescuing, man," Evans said. Jones stared at him blankly. "You might let me have some food," Evans continued. "I'm getting short of that. And you might have someone send out a mechanic with parts to fix my tractor. Then maybe you'll let me use your radio to file my claim." "Claim?" "Sure, man, I've thousands of tons of water here. It's the richest mine on the Moon!" THE END
D. They can see different sides of the moon than the people on the moon can
Why do the astronauts have to wait to talk to White Sands? A. Some of their communication systems are down B. They have to wait until they are in the right place in orbit to send a signal C. It takes a couple of hours for a signal to get to Earth D. They are waiting on a response from the base so they have more information
SPATIAL DELIVERY BY RANDALL GARRETT Women on space station assignments shouldn't get pregnant. But there's a first time for everything. Here's the story of such a time——and an historic situation. [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, October 1954. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] One thousand seventy-five miles above the wrinkled surface of Earth, a woman was in pain. There, high in the emptiness of space, Space Station One swung in its orbit. Once every two hours, the artificial satellite looped completely around the planet, watching what went on below. Outside its bright steel hull was the silence of the interplanetary vacuum; inside, in the hospital ward, Lieutenant Alice Britton clutched at the sheets of her bed in pain, then relaxed as it faded away. Major Banes looked at her and smiled a little. "How do you feel, Lieutenant?" She smiled back; she knew the pain wouldn't return for a few minutes yet. "Fine, doctor. It's no worse than I was expecting. How long will it before we can contact White Sands?" The major looked nervously at his wristwatch. "Nearly an hour. You'll be all right." "Certainly," she agreed, running a hand through her brown hair, "I'll be okay. Just you be on tap when I call." The major's grin broadened. "You don't think I'd miss a historical event like this, do you? You take it easy. We're over Eastern Europe now, but as soon as we get within radio range of New Mexico, I'll beam a call in." He paused, then repeated, "You just take it easy. Call the nurse if anything happens." Then he turned and walked out of the room. Alice Britton closed her eyes. Major Banes was all smiles and cheer now, but he hadn't been that way five months ago. She chuckled softly to herself as she thought of his blistering speech. "Lieutenant Britton, you're either careless or brainless; I don't know which! Your husband may be the finest rocket jockey in the Space Service, but that doesn't give him the right to come blasting up here on a supply rocket just to get you pregnant!" Alice had said: "I'm sure the thought never entered his mind, doctor. I know it never entered mine." "But that was two and a half months ago! Why didn't you come to me before this? Of all the tom-fool—" His voice had died off in suppressed anger. "I didn't know," she had said stolidly. "You know my medical record." "I know. I know." A puzzled frown had come over his face then, a frown which almost hid the green eyes that contrasted so startlingly with the flaming red of his hair. "The question is: what do we do next? We're not equipped for obstetrics up here." "Send me back down to Earth, of course." And he had looked up at her scathingly. "Lieutenant Britton, it is my personal opinion that you need your head examined, and not by a general practitioner, either! Why, I wouldn't let you get into an airplane, much less land on Earth in a rocket! If you think I'd permit you to subject yourself to eight gravities of acceleration in a rocket landing, you're daffy!" She hadn't thought of it before, but the major was right. The terrible pressure of a rocket landing would increase her effective body weight to nearly half a ton; an adult human being couldn't take that sort of punishment for long, much less the tiny life that was growing within her. So she had stayed on in the Space Station, doing her job as always. As Chief Radar Technician, she was important in the operation of the station. 
Her pregnancy had never made her uncomfortable; the slow rotation of the wheel-shaped station about its axis gave an effective gravity at the rim only half that of Earth's surface, and the closer to the hub she went, the less her weight became. According to the major, the baby was due sometime around the first of September. "Two hundred and eighty days," he had said. "Luckily, we can pinpoint it almost exactly. And at a maximum of half of Earth gravity, you shouldn't weigh more than seventy pounds then. You're to report to me at least once a week, Lieutenant." As the words went through her mind, another spasm of pain hit her, and she clenched her fists tightly on the sheets again. It went away, and she took a deep breath. Everything had been fine until today. And then, only half an hour ago, a meteor had hit the radar room. It had been only a tiny bit of rock, no bigger than a twenty-two bullet, and it hadn't been traveling more than ten miles per second, but it had managed to punch its way through the shielding of the station. The self-sealing walls had closed the tiny hole quickly, but even in that short time, a lot of air had gone whistling out into the vacuum of space. The depressurization hadn't hurt her too much, but the shock had been enough to start labor. The baby was going to come two months early. She relaxed a little more, waiting for the next pain. There was nothing to worry about; she had absolute faith in the red-haired major. The major himself was not so sure. He sat in his office, massaging his fingertips and looking worriedly at the clock on the wall. The Chief Nurse at a nearby desk took off her glasses and looked at him speculatively. "Something wrong, doctor?" "Incubator," he said, without taking his eyes off the clock. "I beg your pardon?" "Incubator. We can't deliver a seven-month preemie without an incubator." The nurse's eyes widened. "Good Lord! I never thought of that! What are you going to do?" "Right now, I can't do anything. I can't beam a radio message through to the Earth. But as soon as we get within radio range of White Sands, I'll ask them to send up an emergency rocket with an incubator. But—" "But what?" "Will we have time? The pains are coming pretty fast now. It will be at least three hours before they can get a ship up here. If they miss us on the next time around, it'll be five hours. She can't hold out that long." The Chief Nurse turned her eyes to the slowly moving second hand of the wall clock. She could feel a lump in her throat. Major Banes was in the Communications Center a full five minutes before the coastline of California appeared on the curved horizon of the globe beneath them. He had spent the hour typing out a complete report of what had happened to Alice Britton and a list of what he needed. He handed it to the teletype operator and paced the floor impatiently as he waited for the answer. When the receiver teletype began clacking softly, he leaned over the page, waiting anxiously for every word. WHITE SANDS ROCKET BASE 4 JULY 1984 0913 HRS URGENT TO: MAJ PETER BANES (MC) 0-266118 SS-1 MEDICAL OFFICER FROM: GEN DAVID BARRETT 0-199515 COMMANDING WSRB ROCKET. ORBIT NOW BEING COMPUTED FOR RENDEZVOUS WITH SS-1 AS OF NEXT PASSAGE ABOVE USA. CAPT. JAMES BRITTON PILOTING. MEDICS LOADING SHIP TWELVE WITH INCUBATOR AND OTHER SUPPLIES. BASE OBSTETRICIAN LT COL GATES ALSO COMING TO ASSIST IN DELIVERY. HANG ON. OVER. Banes nodded and turned to the operator. 
"I want a direct open telephone line to my office in case I have to get another message to the base before we get out of range again." He turned and left through the heavy door. Each room of the space station was protected by airtight doors and individual heating units; if some accident, such as a really large meteor hit, should release the air from one room, nearby rooms would be safe. Banes' next stop was the hospital ward. Alice Britton was resting quietly, but there were lines of strain around her eyes which hadn't been there an hour before. "How's it coming, Lieutenant?" She smiled, but another spasm hit her before she could answer. After a time, she said: "I'm doing fine, but you look as if you'd been through the mill. What's eating you?" He forced a nervous smile. "Nothing but the responsibility. You're going to be a very famous woman, you know. You'll be the mother of the first child born in space. And it's my job to see to it that you're both all right." She grinned. "Another Dr. Dafoe?" "Something on that order, I suppose. But it won't be all my glory. Colonel Gates, the O.B. man, was supposed to come up for the delivery in September, so when White Sands contacted us, they said he was coming immediately." He paused, and a genuine smile crossed his face. "Your husband is bringing him up." "Jim! Coming up here? Wonderful! But I'm afraid the colonel will be too late. This isn't going to last that long." Banes had to fight hard to keep his face smiling when she said that, but he managed an easy nod. "We'll see. Don't hurry it, though. Let nature take its course. I'm not such a glory hog that I'd not let Gates have part of it—or all of it, for that matter. Relax and take it easy." He went on talking, trying to keep the conversation light, but his eyes kept wandering to his wristwatch, timing Alice's pain intervals. They were coming too close together to suit him. There was a faint rap, and the heavy airtight door swung open to admit the Chief Nurse. "There's a message for you in your office, doctor. I'll send a nurse in to be with her." He nodded, then turned back to Alice. "Stiff uppah lip, and all that sort of rot," he said in a phony British accent. "Oh, raw ther , old chap," she grinned. Back in his office, Banes picked up the teletype flimsy. WHITE SANDS ROCKET BASE 4 JULY 1984 0928 HRS URGENT TO: MAJ PETER BANES (MC) 0-266118 SS-1 MEDICAL OFFICER FROM: GEN DAVID BARRETT 0-199515 COMMANDING WSRB ROCKET. ORBIT COMPUTED FOR RENDEZVOUS AT 1134 HRS MST. CAPT BRITTON SENDS PERSONAL TO LT BRITTON AS FOLLOWS: HOLD THE FORT, BABY, THE WHOLE WORLD IS PRAYING FOR YOU. OUT. Banes sat on the edge of his desk, pounding a fist into the palm of his left hand. "Two hours. It isn't soon enough. She'll never hold out that long. And we don't have an incubator." His voice was a clipped monotone, timed with the rhythmic slamming of his fist. The Chief Nurse said: "Can't we build something that will do until the rocket gets here?" Banes looked at her, his face expressionless. "What would we build it out of? There's not a spare piece of equipment in the station. It costs money to ship material up here, you know. Anything not essential is left on the ground." The phone rang. Banes picked it up and identified himself. The voice at the other end said: "This is Communications, Major. I tape recorded all the monitor pickups from the Earth radio stations, and it looks as though the Space Service has released the information to the public. 
Lieutenant Britton's husband was right when he said the whole world's praying for her. Do you want to hear the tapes?" "Not now, but thanks for the information." He hung up and looked into the Chief Nurse's eyes. "They've released the news to the public." She frowned. "That really puts you on the spot. If the baby dies, they'll blame you." Banes slammed his fist to the desk. "Do you think I give a tinker's dam about that? I'm interested in saving a life, not in worrying about what people may think!" "Yes, sir. I just thought—" "Well, think about something useful! Think about how we're going to save that baby!" He paused as he saw her eyes. "I'm sorry, Lieutenant. My nerves are all raw, I guess. But, dammit, my field is space medicine. I can handle depressurization, space sickness, and things like that, but I don't know anything about babies! I know what I read in medical school, and I watched a delivery once, but that's all I know. I don't even have any references up here; people aren't supposed to go around having babies on a space station!" "It's all right, doctor. Shall I prepare the delivery room?" His laugh was hard and short. "Delivery room! I wish to Heaven we had one! Prepare the ward room next to the one she's in now, I guess. It's the best we have. "So help me Hannah, I'm going to see some changes made in regulations! A situation like this won't happen again!" The nurse left quietly. She knew Banes wasn't really angry at the Brittons; it was simply his way of letting off steam to ease the tension within him. The slow, monotonous rotation of the second hand on the wall clock seemed to drag time grudgingly along with it. Banes wished he could smoke to calm his raw nerves, but it was strictly against regulations. Air was too precious to be used up by smoking. Every bit of air on board had had to be carried up in rockets when the station was built in space. The air purifiers in the hydroponics section could keep the air fresh enough for breathing, but fire of any kind would overtax the system, leaving too little oxygen in the atmosphere. It was a few minutes of ten when he decided he'd better get back to Alice Britton. She was trying to read a book between spasms, but she wasn't getting much read. She dropped it to the floor when he came in. "Am I glad to see you! It won't be long now." She looked at him analytically. "Say! Just what is eating you? You look more haggard than I do!" Again he tried to force a smile, but it didn't come off too well. "Nothing serious. I just want to make sure everything comes out all right." She smiled. "It will. You're all set. You ordered the instruments months ago. Or did you forget something?" That hit home, but he just grinned feebly. "I forgot to get somebody to boil water." "Whatever for?" "Coffee, of course. Didn't you know that? Papa always heats up the water; that keeps him out of the way, and the doctor has coffee afterwards." Alice's hands grasped the sheet again, and Banes glanced at his watch. Ninety seconds! It was long and hard. When the pain had ebbed away, he said: "We've got the delivery room all ready. It won't be much longer now." "I'll say it won't! How about the incubator?" There was a long pause. Finally, he said softly: "There isn't any incubator. I didn't take the possibility of a premature delivery into account. It's my fault. I've done what I could, though; the ship is bringing one up. I—I think we'll be able to keep the child alive until—" He stopped. Alice was bubbling up with laughter. "Lieutenant! Lieutenant Britton! 
Alice! This is no time to get hysterical! Stop it!" Her laughter slowed to a chuckle. " Me get hysterical! That's a good one! What about you? You're so nervous you couldn't sip water out of a bathtub without spilling it!" He blinked. "What do you mean?" Another pain came, and he had to wait until it was over before he got her answer. "Doctor," she said, "I thought you would have figured it out. Ask yourself just one question. Ask yourself, 'Why is a space station like an incubator?'" Space Ship Twelve docked at Space Station One at exactly eleven thirty-four, and two men in spacesuits pushed a large, bulky package through the airlock. Major Peter Banes, haggard but smiling, met Captain Britton in the corridor as he and the colonel entered the hospital ward. Banes nodded to Colonel Gates, then turned to Britton. "I don't know whether to congratulate you or take a poke at you, Captain, but I suppose congratulations come first. Your son, James Edward Britton II, is doing fine, thank you." "You mean— already ?" The colonel said nothing, but he raised an eyebrow. "Over an hour ago," said Banes. "But—but—the incubator—" Banes' grin widened. "We'll put the baby in it, now that we've got it, but it really isn't necessary. Your wife figured that one out. A space station is a kind of incubator itself, you see. It protects us poor, weak humans from the terrible conditions of space. So all we had to do was close up one of the airtight rooms, sterilize it, warm it up, and put in extra oxygen from the emergency tanks. Young James is perfectly comfortable." "Excellent, Major!" said the colonel. "Don't thank me. It was Captain Britton's wife who—" But Captain Britton wasn't listening any more. He was headed toward his wife's room at top speed.
B. They have to wait until they are in the right place in orbit to send a signal
Why are the experimental results somewhat irrelevant? A. The experimenters were unqualified B. The experiment subjects were unqualified C. The sample size was too small D. Part of what matters is the label itself
More Booze You Can Use When we last heard from them, the members of the Slate beer-testing team were coping with lagers and trying to see if they could taste the 3-to-1 price difference between the most- and least-expensive brands. (Click for a wrap-up of the first round of beer tasting.) The answer was: They found one beer they really liked, Samuel Adams Boston Lager , and one they really hated, imported Grolsch from Holland. Both were expensive beers--Grolsch was the most expensive in the test--and otherwise the testers had a hard time telling beers apart. The members of the team, as noted in the original article, all hold day jobs at Microsoft, mainly as designers, managers, and coders for Microsoft Word. The point of the second test was not to find the difference between cheap and expensive beers but instead to compare a variety of top-of-the-line beers. Was there one kind the tasters preferred consistently? Could they detect any of the subtleties of brewing style and provenance that microbrew customers pay such attention to when choosing some Doppelbock over a cream ale? Since the tasting panel had left the first round grumbling that cheap lagers were not a fair test of their abilities, this second round of testing was advertised to the panel as a reward. Every beer in Round 2 would be a fancy beer. A microbrew. A "craft beer." A prestigious import. These were the kinds of beer the panel members said they liked--and the ones they said they were most familiar with. One aspect of the reward was that they would presumably enjoy the actual testing more--fewer rueful beer descriptions along the lines of "urine" or "get it away!" were expected than in the first round. The other aspect of anticipated reward was the panelists' unspoken but obvious assumption that this time they would "do better" on the test. Intellectual vanity being what it is, people who had fought for and won jobs at Microsoft and who still must fight every six months for primacy on the employee-ranking scale (which determines--gasp!--how many new stock options they receive) would assume that their skill as tasters was on trial, just as much as the beer was. Of course they were right, which is what made this round as amusing to administer as the first one had been. Here is what happened and what it meant: 1. Procedure. This was similar in most ways to the experimental approach of Round 1. The nine testers who showed up were a subset of the original 12. The missing three dropped out with excuses of "my wife is sick" (one person) and "meeting is running long" (two). As before, each tester found before him on a table 10 red plastic cups, labeled A through J. Each cup held 3 ounces of one of the beers. The A-to-J labeling scheme was the same for all testers. Instead of saltines for palate-cleansing, this time we had popcorn and nuts. As they began, the tasters were given these and only these clues: that the flight included one "holdover" beer from the previous round (Sam Adams); that it included at least one import (Bass); that it included at least one macrobrew , specifically, a member of the vast Anheuser-Busch family (Michelob Hefeweizen). After sampling all beers, the tasters rated them as follows: Overall quality points, from zero to 100, reflecting their personal, subjective fondness for the beer. Descriptions of and comments about each beer's taste--"smooth and nutty," "too strong," etc. If the first ranking was a measure of how good each beer was, this was an attempt to explain what made it good. 
Best and Worst , one of each from the group. Name that beer! The tasters were told that some of the drinks were Hefeweizens, some might be IPAs (India pale ales), some might be bitters, and so on. They were asked to put each beer in its proper category--and to name a specific brewery and brand if they could. The idea here was to test the veteran beer drinkers' claim to recognize the distinctive tastes of famous brands. (To see all the grids for all the beers, click .) 2. Philosophy. The first round of testing was All Lager. This second round was All Fancy, and Mainly Not Lager. As several correspondents (for instance, the of Best American Beers ) have helpfully pointed out, the definition of lager provided last time was not exactly "accurate." If you want to stay within the realm of textbook definitions, a lager is a beer brewed a particular way--slowly, at cool temperatures, with yeast that settles on the bottom of the vat. This is in contrast with an ale, which is brewed faster, warmer, and with the yeast on top. By this same reasoning, lagers don't have to be light-colored, weak-flavored, and watery, as mainstream American lagers are. In principle, lagers can be dark, fierce, manly. Therefore, the correspondents suggest, it was wrong to impugn Sam Adams or Pete's Wicked for deceptive labeling, in presenting their tawnier, more flavorful beers as lagers too. To this the beer scientist must say: Book-learning is fine in its place. But let's be realistic. Actual drinking experience teaches the American beer consumer that a) all cheap beers are lagers; and b) most lagers are light-colored and weak. The first test was designed to evaluate low-end beers and therefore had to be lager-centric. This one is designed to test fancy beers--but in the spirit of open-mindedness and technical accuracy, it includes a few "strong" lagers too. 3. Materials. The 10 test beers were chosen with several goals in mind: To cover at least a modest range of fancy beer types--extra special bitter, India pale ale, Hefeweizen, and so on. To include both imported and domestic beers. Among the domestic microbrews, there's an obvious skew toward beers from the Pacific Northwest. But as Microsoft would put it, that's a feature not a bug. These beers all came from the Safeway nearest the Redmond, Wash., "main campus" of Microsoft, and microbrews are supposed to be local. To include one holdover from the previous test, as a scientific control on our tasters' preferences. This was Sam Adams , runaway winner of Round 1. To include one fancy product from a monster-scale U.S. mass brewery, to see if the tasters liked it better or worse than the cute little microbrews. This was Michelob Hefeweizen , from the pride of St. Louis, Anheuser-Busch. Click for pricing information and pre-quaffing evaluations. The beers tasted were: 4. Data Analysis. a) Best and Worst. Compared to the lager test, we would expect the range of "best" choices to be more varied, since all the tested beers were supposed to be good. This expectation was most dramatically borne out in the "Best and Worst" rankings. The nine tasters cast a total of nine Worst votes and 11.5 Best votes. (Tester No. 1 turned in a sheet with three Best selections, or two more than his theoretical quota. Tester No. 4 listed a Best and a Best-minus, which counted as half a vote.) The results were clearest at the bottom: three Worsts for Pyramid Hefeweizen , even though most comments about the beer were more or less respectful. 
("Bitter, drinkable.") But at the top and middle the situation was muddier: There were three Bests for Full Sail ESB , which most of the tasters later said they weren't familiar with, and 2.5 for Redhook IPA , which all the tasters knew. But each of these also got a Worst vote, and most of the other beers had a mixed reading. So far, the tasters are meeting expectations, finding something to like in nearly all these fancy beers. b) Overall preference points. Here the complications increase. The loser was again apparent: Pyramid Hefeweizen came in last on rating points, as it had in the Best/Worst derby. But the amazing dark horse winner was Michelob Hefeweizen . The three elements of surprise here, in ascending order of unexpectedness, are: This best-liked beer belonged to the same category, Hefeweizen, as the least-liked product, from Pyramid. This was also the only outright Anheuser-Busch product in the contest (the Redhooks are 75 percent A-B free). It is safe to say that all tasters would have said beforehand that they would rank an American macrobrew last, and Anheuser-Busch last of all. Although it clearly won on overall preference points, Michelob Hefeweizen was the only beer not to have received a single "Best" vote. The first two anomalies can be written off as testament to the power of a blind taste test. The third suggests an important difference in concepts of "bestness." Sometimes a product seems to be the best of a group simply because it's the most unusual or distinctive. This is why very high Wine Spectator ratings often go to wines that mainly taste odd. But another kind of bestness involves an unobtrusive, day-in day-out acceptability. That seems to be Michelob Hefe 's achievement here: no one's first choice, but high on everyone's list. Let's go to the charts: This table shows how the beers performed on "raw score"--that is, without the advanced statistical adjustment of throwing out the highest and lowest score each beer received. Next, we have "corrected average preference points," throwing out the high and low marks for each beer. The result is basically the same: It is worth noting the fate of Sam Adams on these charts. Here it ends up with a score of less than 61. These were the numbers awarded by the very same tasters who gave it a corrected preference rating of 83.33 the last time around--and 10 "Best" votes, vs. one Best (and one Worst) this time. The shift in Bests is understandable and demonstrates the importance of picking your competition. The severe drop in preference points illustrates more acutely the ancient principle of being a big fish in a small pond. These same tasters thought that Sam Adams was objectively much better when it was surrounded by Busch and Schmidt's. c) Value rankings. Last time this calculation led to what the colorful French would call a bouleversement. One of the cheapest beers, Busch, which had been in the lower ranks on overall preference points, came out at the top on value-for-money ratings, because it was so cheap. The big surprise now is that the highest-rated beer was also the cheapest one, Michelob Hefe , so the value calculation turned into a rout: Pyramid Hefeweizen was expensive on top of being unpopular, so its position at the bottom was hammered home--but not as painfully as that of Bass Ale . 
Bass had been in the respectable lower middle class of the preference rankings, so its disappointing Val-u-meter showing mainly reflects the fact that it was the only beer not on "sale" and therefore by far the costliest entry in the experiment. d) Taster skill. As members of the tasting panel began to suspect, they themselves were being judged while they judged the beer. One of the tasters, No. 7, decided to live dangerously and give specific brands and breweries for Samples A through J. This man was the only panel member whose job does not involve designing Microsoft Word--and the only one to identify two or more of the beers accurately and specifically. (He spotted Redhook IPA and Redhook ESB.) The fact that the beers correctly identified were the two most popular microbrews in the Seattle area suggests that familiarity is the main ingredient in knowing your beer. Many others were simply lost. Barely half the tasters, five of nine, recognized that Michelob Hefeweizen was a Hefeweizen. Before the test, nine of nine would have said that picking out a Hefe was easy, because of its cloudy look and wheaty flavor. Three tasters thought Sam Adams was an IPA; two thought Redhook's IPA was a Hefeweizen. In fairness, six of nine testers identified Pyramid Hefeweizen as a Hefe, and six recognized Full Sail ESB as a bitter. Much in the fashion of blind men describing an elephant, here is how the testers handled Sam Adams Boston Lager: 5. Implications and Directions for Future Research. Science does not always answer questions; often, it raises many new ones. This excursion into beer science mainly raises the question: What kind of people are we? If we are Gradgrind-like empiricists, living our life for "welfare maximization" as described in introductory econ. courses, the conclusion is obvious. We learned from the first experiment to buy either Sam Adams (when we wanted maximum lager enjoyment per bottle) or Busch (for maximum taste and snob appeal per dollar). From this second round we see an even more efficient possibility: Buy Michelob Hefeweizen and nothing else, since on the basis of this test it's the best liked and the cheapest beer. By the way, if there is a single company whose achievements the testing panel honored, it would be Anheuser-Busch. From its brewing tanks came two of the double-crown winners of the taste tests: plain old Busch, the Taste-o-meter and Snob-o-meter victor of Round 1, and Michelob Hefeweizen, the preference-point and Val-u-meter winner this time. But, of course, there is another possibility: that what is excluded in a blind taste test is in fact what we want, and are happy to pay for, when we sit down with a beer. The complicated label, the fancy bottle, the exotic concept that this beer has traveled from some far-off corner of Bohemia or even the Yakima Valley--all this may be cheap at the $1.25-per-pint cost difference between the cheapest and the most expensive beers. In elementary school, we all endured a standard science experiment: If you shut your eyes and pinch your nose closed, can you tell any difference in the taste of a slice of apple, of carrot, of pear? You can't--but that doesn't mean that from then on you should close your eyes, hold your nose, and chew a cheap carrot when you feel like having some fruit. There is a time and place for carrots, but also for juicy pears. There is a time for Busch, but also for Full Sail "Equinox." 
For scientists who want to continue this work at home, here are a few suggestions for further research: Tell the testers ahead of time what beers they will be drinking. Ask them to rank the beers, 1 through 10, based on how well they like them. Then compare the list with the "revealed preferences" that come from the blind test. As a variation, show them the list ahead of time and ask them to pick out the beer they know they love and the one they know they hate. Then compare this with the "after" list. If you're going to test imported lagers, try Foster's or Corona rather than Grolsch. Remember to stay strictly in the scientist's role. Don't take the test yourself.
D. Part of what matters is the label itself
What is barycentric Newton diagram?
### Introduction With growing diversity in personal food preference and regional cuisine style, personalized information systems that can transform a recipe into any selected regional cuisine style that a user might prefer would help food companies and professional chefs create new recipes. To achieve this goal, there are two significant challenges: 1) identifying the degree of regional cuisine style mixture of any selected recipe; and 2) developing an algorithm that shifts a recipe into any selected regional cuisine style. As to the former challenge, with growing globalization and economic development, it is becoming difficult to identify a recipe's regional cuisine style with a single specific traditional style, since regional cuisine patterns have been changing and converging in many countries throughout Asia, Europe, and elsewhere BIBREF0. Regarding the latter challenge, to the best of our knowledge, little attention has been paid to developing algorithms which transform a recipe's regional cuisine style into any selected regional cuisine pattern, cf. BIBREF1, BIBREF2. Previous studies have focused on developing algorithms which suggest replaceable ingredients based on cooking action BIBREF3, degree of similarity among ingredients BIBREF4, ingredient network BIBREF5, degree of typicality of ingredients BIBREF6, and flavor (foodpairing.com). The aim of this study is to propose a novel data-driven system for transformation of regional cuisine style. This system has two characteristics. First, we propose a new method for identifying a recipe's regional cuisine style mixture by calculating the contribution of each ingredient to certain regional cuisine patterns, such as Mediterranean, French, or Japanese, by drawing on ingredient prevalence data from large recipe repositories. The system also visualizes a recipe's regional cuisine style mixture in two-dimensional space under barycentric coordinates using what we call a Newton diagram. Second, the system transforms a recipe's regional cuisine pattern into any selected regional style by recommending replaceable ingredients in existing recipes. As an example of this proposed system, we transform a traditional Japanese recipe, Sukiyaki, into French style. ### Architecture of transformation system Figure 1 shows the overall architecture of the transformation system, which consists of two steps: 1) identification and visualization of a recipe's regional cuisine style mixture; and 2) an algorithm which transforms a given recipe into any selected regional/country style. Details of the steps are described as follows. ### Step 1: Identification and visualization of a recipe's regional cuisine style mixture Using a neural network method as detailed below, we identify a recipe's regional cuisine style. The neural network model was constructed as shown in Figure 2. The number of layers and the dimension of each layer are also shown in Figure 2. When we enter a recipe, this model classifies which country or regional cuisine the recipe belongs to. The input is a binary vector whose dimension equals the total number of ingredients in the dataset; the entries corresponding to ingredients contained in the input recipe are 1, and all others are 0. There are two hidden layers, so the model can consider combinations of ingredients when predicting the country probabilities. Dropout is applied to the hidden layers, randomly setting 20% of node values to 0, which makes the network more robust. 
The final layer's dimension is the number of countries, here 20 countries. In the final layer, the softmax function converts the output into probability values, each representing the probability that the recipe belongs to the corresponding country. ADAM BIBREF7 was used as the optimization technique. The number of epochs in training was 200. This network structure and these parameters were chosen after preliminary experiments so that the neural network could perform the country classification task as efficiently as possible. In this study, we used a labeled corpus of Yummly recipes to train this neural network. The Yummly dataset has 39,774 recipes from the 20 countries shown in Table 1. Each recipe has ingredient and country information. First, we randomly divided the dataset into 80% for training the neural network and 20% for testing how precisely it can classify. The final neural network achieved a classification accuracy of 79% on the test set. Figure 3 shows the confusion matrix of the neural network classification. Table 2 shows examples of ingredient classification results. Common ingredients, onions for example, that appear in many regional recipes are assigned to all countries with low probability. On the other hand, some ingredients that appear only in a specific country are assigned to that country with high probability. For example, mirin, a seasoning commonly used in Japan, is classified as Japanese with high probability. By using the probability values that emerge from the activation function in the neural network, rather than just the final classification, we can draw a barycentric Newton diagram, as shown in Figure 4. The basic idea of the visualization, drawing on Isaac Newton's visualization of the color spectrum BIBREF8, is to express a mixture in terms of its constituents as represented in barycentric coordinates. This visualization allows an intuitive interpretation of which country a recipe belongs to. If the probability of Japanese is high, the recipe is mapped near the Japanese vertex. The countries on the Newton diagram are placed by spectral graph drawing BIBREF9, so that similar countries are placed nearby on the circle. The calculation is as follows. First we define the adjacency matrix $W$ as the similarity between two countries. The similarity between country $i$ and country $j$ is calculated as the cosine similarity of the country $i$ vector and the country $j$ vector; these vectors are defined in the next section. $W_{ij} = \mathrm{sim}(vec_i, vec_j)$. The degree matrix $D$ is a diagonal matrix where $D_{ii} = \sum _{j} W_{ij}$. Next we calculate the eigendecomposition of $D^{-1}W$. The eigenvectors corresponding to the second and third smallest eigenvalues are used for placing the countries. The eigenvectors are normalized so as to place the countries on the circle. ### Step 2: Transformation algorithm for transforming regional cuisine style If we want to change a given recipe into one with a high probability for a specific country by changing just one ingredient, which ingredient should be used instead? When we replace one ingredient $x_i$ in the recipe with ingredient $x_j$, the country likelihood can be calculated using the above neural network model. If we want the recipe to have a high probability for a specific country $c$, we can find the ingredient $x_j$ that maximizes the probability $P(C=c \mid r - x_i + x_j)$, where $r$ is the recipe. 
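As a rough sketch of this brute-force search (the Step 1 classifier is assumed to be available as a function `predict_country_probs` mapping a set of ingredients to a dictionary of country probabilities; all names here are hypothetical, not part of the published system):

```python
def best_substitution(recipe, target_country, all_ingredients, predict_country_probs):
    """Exhaustive search for the single-ingredient swap that maximizes
    P(C = target_country | recipe - x_i + x_j).

    recipe: set of ingredient names in the original recipe.
    all_ingredients: iterable of every ingredient in the dataset.
    predict_country_probs: assumed wrapper around the Step-1 neural network,
        mapping a set of ingredients to {country: probability}.
    """
    best_swap, best_prob = None, -1.0
    for x_i in recipe:
        for x_j in all_ingredients:
            if x_j in recipe:
                continue
            candidate = (recipe - {x_i}) | {x_j}   # r - x_i + x_j
            prob = predict_country_probs(candidate)[target_country]
            if prob > best_prob:
                best_swap, best_prob = (x_i, x_j), prob
    return best_swap, best_prob
```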
However, with this method, regardless of the ingredient $x_i$, only specific ingredients having a high probability for country $c$ are always selected. In this system, we want to select ingredients that are similar to ingredient $x_i$ and have a high probability for country $c$. Therefore, we propose an extension of word2vec as a method of finding ingredients that resemble ingredient $x_i$. Word2vec is a technique proposed in the field of natural language processing BIBREF10. As the name implies, it is a method to vectorize words, and similar words are represented by similar vectors. To train word2vec, the skip-gram model is used. In the skip-gram model, the objective is to learn word vector representations that can predict the nearby words. The objective function is $$\sum _{d \in D} \sum _{w_i \in d} \sum _{-n \le j \le n, j \ne 0} \log P(w_{i + j}|w_i) $$ (Eq. 10) where $D$ is the set of documents, $d$ is a document, $w_i$ is a word, and $n$ is the window size. This model predicts the $n$ words before and after the input word, as described in the left side of Figure 5. The objective is to maximize the likelihood of predicting the surrounding word $w_{i+j}$ given the center word $w_i$. The probability is $$P(w_j|w_i) = \frac{\exp (v_{w_i}^T v_{w_j}^{\prime })}{\sum _{w \in W} \exp (v_{w_i}^T v_w^{\prime })}$$ (Eq. 11) where $v_w \in \mathbb {R}^K$ is the input vector of word $w$, $v^{\prime }_w \in \mathbb {R}^K$ is the output vector of word $w$, $K$ is the dimension of the vectors, and $W$ is the set of all words. To optimize this objective function, the hierarchical softmax or negative sampling methods BIBREF10 are used. We then obtain word vectors with which analogies can be computed. For example, the analogy "King - Man + Woman = ?" yields "Queen" using word2vec. In this study, word2vec is applied to the dataset of recipes. Word2vec can be applied by considering recipes as documents and ingredients as words. We do not include a window size parameter, since it encodes the ordering of words within a document, which is irrelevant here: in recipes, the listing of ingredients is unordered. The objective function is $$\sum _{r \in R} \sum _{w_i \in r} \sum _{j \ne i} \log P(w_{j}|w_i) $$ (Eq. 12) where $R$ is the set of recipes, $r$ is a recipe, and $w_i$ is the $i$th ingredient in recipe $r$. The architecture is described in the middle of Figure 5. The objective is to maximize the likelihood of predicting the ingredient $w_j$ in the same recipe given the ingredient $w_i$. The probability is defined below. $$P(w_j|w_i) = \frac{\exp (v_{w_i}^T v_{w_j}^{\prime })}{\sum _{w \in W} \exp (v_{w_i}^T v_w^{\prime })}$$ (Eq. 13) where $w$ is an ingredient, $v_w \in \mathbb {R}^K$ is the input vector of the ingredient, $v^{\prime }_w \in \mathbb {R}^K$ is the output vector of the ingredient, $K$ is the dimension of the vectors, and $W$ is the set of all ingredients. Each ingredient is vectorized by word2vec, and the similarity of each pair of ingredients is calculated using cosine similarity. Through this vectorization, ingredients of the same genre are placed near each other; in other words, the word2vec vectors make it possible to select ingredients of a similar genre. Next, we extend word2vec to incorporate country information. Once the countries are vectorized as well, we can compute analogies between countries and ingredients. 
For example, this method can tell us which French ingredient corresponds to Japanese soy sauce by calculating "Soy sauce - Japan + French = ?". The detail of our method is as follows. We maximize the following objective function: $$\sum _{r \in R} \sum _{w_i \in r} \left( \log P(w_{i}|c_r) + \log P(c_r|w_{i}) + \sum _{j \ne i} \log P(w_{j}|w_i)\right)$$ (Eq. 14) where $R$ is the set of recipes, $r$ is a recipe, $w_i$ is the $i$th ingredient in recipe $r$, and $c_r$ is the country recipe $r$ belongs to. The architecture is described in the right side of Figure 5. The objective is to maximize the likelihood of predicting the ingredient $w_j$ in the same recipe given the ingredient $w_i$, along with the prediction of the ingredient $w_i$ given the country $c_r$ and the prediction of the country $c_r$ given the ingredient $w_i$. The probability is defined below. $$P(b|a) = \frac{\exp (v_{a}^T v_{b}^{\prime })}{\sum _{c \in W} \exp (v_{a}^T v_c^{\prime })}$$ (Eq. 15) where $a$, $b$, and $c$ are ingredients or countries, $v_a \in \mathbb {R}^K$ is the input vector of an ingredient or country, $v^{\prime }_a \in \mathbb {R}^K$ is the output vector of an ingredient or country, $K$ is the dimension of the vectors, and $W$ is the set of all ingredients and all countries. We can use hierarchical softmax or negative sampling BIBREF10 to maximize this objective function (Eq. 14) and find the vectors of ingredients and countries in the same vector space. Table 3 shows the ingredients closest to each country in the vector space, which can be considered the most authentic for that regional cuisine BIBREF11. Also, Figure 6 shows the ingredients and countries in a 2D map produced with the t-SNE method BIBREF12. ### Experiment Our substitution strategy is as follows. First, we train the extended word2vec model on the Yummly dataset, so that all ingredients and countries are vectorized into a 100-dimensional vector space. Second, we find substitutions by analogy calculation. For example, to find a French substitute for mirin, we calculate "Mirin - Japanese + French" in the vector space and obtain the resulting vector. We then find similar ingredients around that vector by calculating cosine similarity. As an example of our proposed system, we transformed a traditional Japanese "Sukiyaki" into French style. Table 4 shows the suggested replacement ingredients and the country probability after each replacement. "Sukiyaki" consists of soy sauce, beef sirloin, white sugar, green onions, mirin, shiitake, egg, vegetable oil, konnyaku, and Chinese cabbage. Figure 7 shows the Sukiyaki in French style cooked by professional chef KM, who is one of the authors of this paper. He assessed the new recipe as valid and novel to him as a French-style Sukiyaki. Here our task is generating a new dish, for which by definition there is no ground truth for comparison. Rating by experts is the standard approach for assessing novel generative artifacts, e.g. in studies of creativity BIBREF13, but going forward it is important to develop other approaches for assessment. ### Discussion With growing diversity in personal food preference and regional cuisine style, the development of data-driven systems which can transform recipes into any given regional cuisine style might be of value for food companies or professional chefs to create new recipes. In this regard, this study adds two important contributions to the literature. 
First, this is, to the best of our knowledge, the first study to identify a recipe's mixture of regional cuisine styles from a large number of recipes around the world. Previous studies have focused on assessing the degree of adherence to a single regional cuisine pattern. For example, the Mediterranean Diet Score is one of the most popular diet scores. This method uses 11 main items (e.g., fruit, vegetable, olive oil, and wine) as criteria for assessing the degree of one's Mediterranean style BIBREF14. However, in this era, it is becoming difficult to identify a recipe's regional cuisine style with a single specific country/regional style. For example, should Fish Provencal, whose recipe name is suggestive of Southern France, be cast as French style? The answer is a mixture of different country styles: 32% French; 26% Italian; and 38% Spanish (see Figure 4). Furthermore, our identification algorithm can be used to assess the degree of personal regional cuisine style mixture, using the user's daily eating pattern as input. For example, when one enters the recipes that one has eaten in the past week into the algorithm, the probability values for each country would be returned, which shows the mixture of regional cuisine styles in one's daily eating pattern. As such, a future research direction would be developing algorithms that can transform personal regional cuisine patterns to a healthier style by providing a series of recipes that are in accordance with one's unique food preferences. Our transformation algorithm can be improved by adding multiple datasets from around the world. Needless to say, the lack of comprehensive datasets makes it difficult to develop algorithms for transforming regional cuisine style. For example, Yummly, one of the largest recipe sites in the world, is less likely to contain recipes from non-Western regions. Furthermore, data on traditional regional cuisine patterns is usually described in its native language. As such, developing a way to integrate multiple datasets in multiple languages is required for future research. One of the methods to address this issue might be as follows: 1) generating the vector representation for each ingredient by using each dataset independently; 2) translating only a small set of common ingredients among the datasets, such as potato, tomato, and onion; 3) with the use of these common ingredients, mapping each vector representation into one common vector space using a canonical correlation analysis BIBREF15, for example. Several fundamental limitations of the present study warrant mention. First of all, our identification and transformation algorithms depend on the quantity and quality of the recipes included in the data. As such, future research using our proposed system should employ large, high-quality recipe data. Second, the evolution of regional cuisines prevents us from developing a precise algorithm. For example, the definition of the Mediterranean regional cuisine pattern has been revised to adapt to current dietary patterns BIBREF16, BIBREF17. Therefore, future research should employ time-trend recipe data to distinctively specify a recipe's mixture of regional cuisine styles and its date, cf. BIBREF18. Third, we did not consider the cooking method (e.g., baking, boiling, and deep frying) as a characteristic of country/regional style. Each country/region has different ways of cooking ingredients, and this is one of the important factors characterizing the food culture of each country/region. 
Fourth, the combination of ingredients was not considered as a way to represent country/regional style. For example, previous studies have shown that Western recipes and East Asian recipes are opposite in terms of the flavor compounds shared within ingredient pairs BIBREF19, BIBREF18, BIBREF20, BIBREF21, BIBREF11. Western cuisines tend to use ingredient pairs sharing many flavor compounds, while East Asian cuisines tend to avoid compound-sharing ingredient pairs. This suggests that the combination of flavor compounds is also an elemental factor characterizing the food of each country/region. As such, if we analyze the recipe data using flavor compounds, we might get different results. In conclusion, we proposed a novel system which can transform a given recipe into any selected regional cuisine style. This system has two characteristics: 1) the system can identify the degree of regional cuisine style mixture of any selected recipe and visualize such regional cuisine style mixture using a barycentric Newton diagram; 2) the system can suggest ingredient substitutions through an extended word2vec model, such that a recipe becomes more authentic for any selected regional cuisine style. Future research directions were also discussed. ### Conflict of Interest Statement The authors declare that they have no conflict of interest. ### Author Contributions MK, LRV, and YI had the idea for the study and drafted the manuscript. MK performed the data collection and analysis. MS, CH, and KM participated in the interpretation of the results and discussions for manuscript writing and finalization. All authors read and approved the final manuscript. ### Funding Varshney's work was supported in part by the IBM-Illinois Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM AI Horizons Network. ### Acknowledgments This study used data from Yummly. We would like to express our deepest gratitude to everyone who participated in this service. We thank Kush Varshney for suggesting the spectral graph drawing approach to placing countries on the circle.
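As a small illustrative sketch of the Step 1 visualization, the following follows the spectral placement and barycentric mixture described above (it assumes NumPy, country vectors taken from the extended word2vec model, and a recipe's country-probability distribution produced by the classifier; the function and variable names are hypothetical):

```python
import numpy as np

def newton_diagram_positions(country_vecs):
    """Place countries on a circle by spectral graph drawing, following the
    description in Step 1. country_vecs: dict mapping country -> vector."""
    names = list(country_vecs)
    V = np.array([country_vecs[c] for c in names], dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    W = V @ V.T                               # adjacency: cosine similarity
    P = np.diag(1.0 / W.sum(axis=1)) @ W      # D^{-1} W
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(vals.real)
    x = vecs[:, order[1]].real                # eigenvectors for the second and
    y = vecs[:, order[2]].real                # third smallest eigenvalues
    r = np.hypot(x, y)
    return {c: (xi / ri, yi / ri) for c, xi, yi, ri in zip(names, x, y, r)}

def recipe_point(country_probs, positions):
    """Map a recipe's country-probability mixture to a point in the diagram
    via barycentric coordinates over the country positions."""
    pt = np.zeros(2)
    for country, p in country_probs.items():
        pt += p * np.array(positions[country])
    return pt
```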
The basic idea of the visualization, drawing on Isaac Newton’s visualization of the color spectrum BIBREF8 , is to express a mixture in terms of its constituents as represented in barycentric coordinates.
What experiments authors perform?
### Introduction “The world of our experiences must be enormously simplified and generalized before it is possible to make a symbolic inventory of all our experiences of things and relations." (Edward Sapir, Language: An Introduction to the Study of Speech, 1921) Deep Learning based algorithms use neural networks in order to learn feature representations that are good for solving high dimensional Machine Learning (ML) tasks. Reinforcement Learning (RL) is a subfield of ML that has been greatly affected by the use of deep neural networks as universal function approximators BIBREF0, BIBREF1. These deep neural networks are used in RL to estimate value functions, state-action value functions, policy mappings, next-state predictions, rewards, and more BIBREF2, BIBREF3, BIBREF4, thus combating the “curse of dimensionality". The term representation is used differently in different contexts. For the purpose of this paper we define a semantic representation of a state as one that reflects its meaning as it is understood by an expert. The semantic representation of a state should thus be paired with a reliable and computationally efficient method for extracting information from it. Previous success in RL has mainly focused on representing the state in its raw form (e.g., visual input in Atari-based games BIBREF2). This approach stems from the belief that neural networks (specifically convolutional networks) can extract meaningful features from complex inputs. In this work, we challenge current representation techniques and suggest to represent the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one to the other BIBREF5. The ability to associate states with natural language sentences that describe them is a hallmark of understanding representations for reinforcement learning. Humans use rich natural language to describe and communicate their visual perceptions, feelings, beliefs, strategies, and more. The semantics inherent to natural language carry knowledge and cues of complex types of content, including: events, spatial relations, temporal relations, semantic roles, logical structures, support for inference and entailment, as well as predicates and arguments BIBREF6. The expressive nature of language can thus act as an alternative semantic state representation. Over the past few years, Natural Language Processing (NLP) has shown an acceleration in progress on a wide range of downstream applications ranging from Question Answering BIBREF7, BIBREF8, to Natural Language Inference BIBREF9, BIBREF10, BIBREF11 through Syntactic Parsing BIBREF12, BIBREF13, BIBREF14. Recent work has shown the ability to learn flexible, hierarchical, contextualized representations, obtaining state-of-the-art results on various natural language processing tasks BIBREF15. A basic observation of our work is that natural language representations are also beneficial for solving problems in which natural language is not the underlying source of input. Moreover, our results indicate that natural language is a strong alternative to current complementary methods for semantic representations of a state. In this work we assume a state can be described using natural language sentences. We use distributional embedding methods in order to represent sentences, processed with a standard Convolutional Neural Network for feature extraction. In Section SECREF2 we describe the basic frameworks we rely on. 
We discuss possible semantic representations in Section SECREF3, namely, raw visual inputs, semantic segmentation, feature vectors, and natural language representations. Then, in Section SECREF4 we compare NLP representations with their alternatives. Our results suggest that representation of the state using natural language can achieve better performance, even on difficult tasks, or tasks in which the description of the state is saturated with task-nuisances BIBREF17. Moreover, we observe that NLP representations are more robust to transfer and changes in the environment. We conclude the paper with a short discussion and related work. ### Preliminaries ::: Reinforcement Learning In Reinforcement Learning the goal is to learn a policy $\pi (s)$, which is a mapping from state $s$ to a probability distribution over actions $\mathcal {A}$, with the objective of maximizing a reward $r(s)$ that is provided by the environment. This is often solved by formulating the problem as a Markov Decision Process (MDP) BIBREF19. Two common quantities used to estimate the performance in MDPs are the value $v (s)$ and action-value $Q (s, a)$ functions, which are defined as follows: ${v(s) = \mathbb {E}^{\pi } [\sum _t \gamma ^t r_t | s_0 = s ]}$ and ${Q(s, a) = \mathbb {E}^{\pi } [\sum _t \gamma ^t r_t | s_0 = s, a_0 = a ]}$. Two prominent algorithms for solving RL tasks, which we use in this paper, are the value-based DQN BIBREF2 and the policy-based PPO BIBREF3. Deep Q Networks (DQN): The DQN algorithm is an extension of the classical Q-learning approach to a deep learning regime. Q-learning learns the optimal policy by directly learning the value function, i.e., the action-value function. A neural network is used to estimate the $Q$-values and is trained to minimize the Bellman error, namely ${L(\theta ) = \mathbb {E} \left[ \left( r_t + \gamma \max _{a^{\prime }} Q(s_{t+1}, a^{\prime }; \theta ^{-}) - Q(s_t, a_t; \theta ) \right)^2 \right]}$, where $\theta ^{-}$ are the parameters of a periodically updated target network. Proximal Policy Optimization (PPO): While the DQN learns the optimal behavioral policy using a dynamic programming approach, PPO takes a different route. PPO builds upon the policy gradient theorem, which optimizes the policy directly, with the addition of a trust-region update rule. The policy gradient theorem updates the policy by ascending the gradient ${\nabla _{\theta } J(\theta ) = \mathbb {E}^{\pi _{\theta }} \left[ \nabla _{\theta } \log \pi _{\theta }(a|s) \, Q^{\pi _{\theta }}(s, a) \right]}$. ### Preliminaries ::: Deep Learning for NLP A word embedding is a mapping from a word $w$ to a vector $\mathbf {w} \in \mathbb {R}^d$. A simple form of word embedding is the Bag of Words (BoW), a vector $\mathbf {w} \in \mathbb {N}^{|D|}$ ($|D|$ is the dictionary size), in which each word receives a unique 1-hot vector representation. Recently, more efficient methods have been proposed, in which the embedding vector is smaller than the dictionary size, $d \ll |D|$. These methods are also known as distributional embeddings. The distributional hypothesis in linguistics is derived from the semantic theory of language usage (i.e. words that are used and occur in the same contexts tend to have similar meanings). Distributional word representations are a fundamental building block for representing natural language sentences. Word embeddings such as Word2vec BIBREF20 and GloVe BIBREF21 build upon the distributional hypothesis, improving the efficiency of state-of-the-art language models. Convolutional Neural Networks (CNNs), originally invented for computer vision, have been shown to achieve strong performance on text classification tasks BIBREF22, BIBREF23, as well as other traditional NLP tasks BIBREF24. 
In this paper we consider a common architecture BIBREF25, in which each word in a sentence is represented as an embedding vector, a single convolutional layer with $m$ filters is applied, producing an $m$-dimensional vector for each $n$-gram. The vectors are combined using max-pooling followed by a ReLU activation. The result is then passed through multiple hidden linear layers with ReLU activation, eventually generating the final output. ### Semantic Representation Methods Contemporary methods for semantic representation of states currently follow one of three approaches: (1) raw visual inputs BIBREF2, BIBREF26, in which raw sensory values of pixels are used from one or multiple sources, (2) feature vectors BIBREF27, BIBREF28, in which general features of the problem are chosen, with no specific structure, and (3) semantic segmentation maps BIBREF29, BIBREF30, in which discrete or logical values are used in one or many channels to represent the general features of the state. The common approach is to derive decisions (e.g., classification, action, etc.) based on information in its raw form. In RL, the raw form is often the pixels representing an image – however the image is only one form of a semantic representation. In Semantic Segmentation, the image is converted from a 3-channel (RGB) matrix into an $N$-channel matrix, where $N$ is the number of classes. In this case, each channel represents a class, and a binary value at each coordinate denotes whether or not this class is present in the image at this location. For instance, fig: semantic segmentation example considers an autonomous vehicle task. The raw image and segmentation maps are both sufficient for the task (i.e., both contain a sufficient semantic representation). Nevertheless, the semantic segmentation maps contain less task-nuisances BIBREF17, which are random variables that affect the observed data, but are not informative to the task we are trying to solve. In this paper we propose a forth method for representing a state, namely using natural language descriptions. One method to achieve such a representation is through Image Captioning BIBREF31, BIBREF32. Natural language is both rich as well as flexible. This flexibility enables the algorithm designer to represent the information present in the state as efficiently and compactly as possible. As an example, the top image in fig: semantic segmentation example can be represented using natural language as “There is a car in your lane two meters in front of you, a bicycle rider on your far left in the negative lane, a car in your direction in the opposite lane which is twenty meters away, and trees and pedestrians on the side walk.” or compactly by “There is a car two meters in front of you a pedestrian on the sidewalk to your right and a car inbound in the negative lane which is far away.”. Language also allows us to efficiently compress information. As an example, the segmentation map in the bottom image of fig: semantic segmentation example can be compactly described by “There are 13 pedestrians crossing the road in front of you”. In the next section we will demonstrate the benefits of using natural-language-based semantic state representation in a first person shooter enviornment. ### Semantic State Representations in the Doom Environment In this section we compare the different types of semantic representations for representing states in the ViZDoom environment BIBREF26, as described in the previous section. 
More specifically, we use a semantic natural language parser in order to describe a state, over numerous instances of levels varying in difficulty, task-nuisances, and objectives. Our results show that, though semantic segmentation and feature vector representation techniques express a similar statistic of the state, natural language representation offers better performance, faster convergence, more robust solutions, as well as better transfer. The ViZDoom environment involves a 3D world that is significantly more real-world-like than Atari 2600 games, with a relatively realistic physics model. An agent in the ViZDoom environment must effectively perceive, interpret, and learn the 3D world in order to make tactical and strategic decisions of where to go and how to act. There are three types of state representations that are provided by the environment. The first, which is also most commonly used, is raw visual inputs, in which the state is represented by an image from a first person view of the agent. A feature vector representation is an additional state representation provided by the environment. The feature vector representation includes positions as well as labels of all objects and creatures in the vicinity of the agent. Lastly, the environment provides a semantic segmentation map based on the aforementioned feature vector. An example of the visual representations in VizDoom is shown in fig: representations in vizdoom. In order to incorporate natural language representation to the VizDoom environment we've constructed a semantic parser of the semantic segmentation maps provided by the environment. Each state of the environment was converted into a natural language sentence based on positions and labels of objects in the frame. To implement this, the screen was divided into several vertical and horizontal patches, as depicted in fig: patches. These patches describe relational aspects of the state, such as distance of objects and their direction with respect to the agent's point of view. In each patch, objects were counted, and a natural language description of the patch was constructed. This technique was repeated for all patches to form the final state representation. fig: nlp state rep depicts examples of natural language sentences of different states in the enviornment. ### Semantic State Representations in the Doom Environment ::: Experiments We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty. In these tasks, the agent could navigate, shoot, and collect items such as weapons and medipacks. Often, enemies of different types attacked the agent, and a positive reward was given when an enemy was killed. Occasionally, the agent also suffered from health degeneration. The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent. More specifically, in the basic scenario, a single monster is spawned in front of the agent. The purpose of this scenario is to teach the agent to aim at the enemy and shoot at it. In the health gathering scenario, the floor of the room is covered in toxin, causing the agent to gradually lose health. Medipacks are spawned randomly in the room and the agent's objective is to keep itself alive by collecting them. 
In the take cover scenario, multiple fireball shooting monsters are spawned in front of the agent. The goal of the agent is to stay alive as long as possible, dodging inbound fireballs. The difficulty of the task increases over time, as additional monsters are spawned. In the defend the center scenario, melee attacking monsters are randomly spawned in the room, and charge towards the agent. As opposed to other scenarios, the agent is incapable of moving, aside from turning left and right and shooting. In the defend the line scenario, both melee and fireball shooting monsters are spawned near the opposing wall. The agent can only step right, left or shoot. Finally, in the “super" scenario both melee and fireball shooting monsters are repeatably spawned all over the room. the room contains various items the agent can pick up and use, such as medipacks, shotguns, ammunition and armor. Furthermore, the room is filled with unusable objects, various types of trees, pillars and other decorations. The agent can freely move and turn in any direction, as well as shoot. This scenario combines elements from all of the previous scenarios. Our agent was implemented using a Convolutional Neural Network as described in Section SECREF4. We converted the parsed state into embedded representations of fixed length. We tested both a DQN and a PPO based agent, and compared the natural language representation to the other representation techniques, namely the raw image, feature vector, and semantic segmentation representations. In order to effectively compare the performance of the different representation methods, we conducted our experiments under similar conditions for all agents. The same hyper-parameters were used under all tested representations. Moreover, to rule out effects of architectural expressiveness, the number of weights in all neural networks was approximately matched, regardless of the input type. Finally, we ensured the “super" scenario was positively biased toward image-based representations. This was done by adding a large amount items to the game level, thereby filling the state with nuisances (these tests are denoted by `nuisance' in the scenario name). This was especially evident in the NLP representations, as sentences became extensively longer (average of over 250 words). This is contrary to image-based representations, which did not change in dimension. Results of the DQN-based agent are presented in fig: scenario comparison. Each plot depicts the average reward (across 5 seeds) of all representations methods. It can be seen that the NLP representation outperforms the other methods. This is contrary to the fact that it contains the same information as the semantic segmentation maps. More interestingly, comparing the vision-based and feature-based representations render inconsistent conclusions with respect to their relative performance. NLP representations remain robust to changes in the environment as well as task-nuisances in the state. As depicted in fig: nuisance scenarios, inflating the state space with task-nuisances impairs the performance of all representations. There, a large amount of unnecessary objects were spawned in the level, increasing the state's description length to over 250 words, whilst retaining the same amount of useful information. Nevertheless, the NLP representation outperformed the vision and feature based representations, with high robustness to the applied noise. 
In order to verify the performance of the natural language representation was not due to extensive discretization of patches, we've conducted experiments increasing the number of horizontal patches - ranging from 3 to 31 patches in the extreme case. Our results, as depicted in fig: patch count, indicate that the amount of discretization of patches did not affect the performance of the NLP agent, remaining a superior representation compared to the rest. To conclude, our experiments suggest that NLP representations, though they describe the same raw information of the semantic segmentation maps, are more robust to task-nuisances, allow for better transfer, and achieve higher performance in complex tasks, even when their description is long and convoluted. While we've only presented results for DQN agents, we include plots for a PPO agent in the Appendix, showing similar trends and conclusions. We thus deduce that NLP-based semantic state representations are a preferable choice for training VizDoom agents. ### Related Work Work on representation learning is concerned with finding an appropriate representation of data in order to perform a machine learning task BIBREF33. In particular, deep learning exploits this concept by its very nature BIBREF2. Work on representation learning include Predictive State Representations (PSR) BIBREF34, BIBREF35, which capture the state as a vector of predictions of future outcomes, and a Heuristic Embedding of Markov Processes (HEMP) BIBREF36, which learns to embed transition probabilities using an energy-based optimization problem. There has been extensive work attempting to use natural language in RL. Efforts that integrate language in RL develop tools, approaches, and insights that are valuable for improving the generalization and sample efficiency of learning agents. Previous work on language-conditioned RL has considered the use of natural language in the observation and action space. Environments such as Zork and TextWorld BIBREF37 have been the standard benchmarks for testing text-based games. Nevertheless, these environments do not search for semantic state representations, in which an RL algorithm can be better evaluated and controlled. BIBREF38 use high-level semantic abstractions of documents in a representation to facilitate relational learning using Inductive Logic Programming and a generative language model. BIBREF39 use high-level guidance expressed in text to enrich a stochastic agent, playing against the built-in AI of Civilization II. They train an agent with the Monte-Carlo search framework in order to jointly learn to identify text that is relevant to a given game state as well as game strategies based only on environment feedback. BIBREF40 utilize natural language in a model-based approach to describe the dynamics and rewards of an environment, showing these can facilitate transfer between different domains. More recently, the structure and compositionality of natural language has been used for representing policies in hierarchical RL. In a paper by BIBREF41, instructions given in natural language were used in order to break down complex problems into high-level plans and lower-level actions. Their suggested framework leverages the structure inherent to natural language, allowing for transfer to unfamiliar tasks and situations. 
This use of semantic structure has also been leveraged by BIBREF42, where abstract actions (not necessarily words) were recognized as symbols of a natural and expressive language, improving performance and transfer of RL agents. Outside the context of RL, previous work has also shown that high-quality linguistic representations can assist in cross-modal transfer, such as using semantic relationships between labels for zero-shot transfer in image classification BIBREF43, BIBREF44. ### Discussion and Future Work Our results indicate that natural language can outperform, and sometime even replace, vision-based representations. Nevertheless, natural language representations can also have disadvantages in various scenarios. For one, they require the designer to be able to describe the state exactly, whether by a rule-based or learned parser. Second, they abstract notions of the state space that the designer may not realize are necessary for solving the problem. As such, semantic representations should be carefully chosen, similar to the process of reward shaping or choosing a training algorithm. Here, we enumerate three instances in which we believe natural language representations are beneficial: Natural use-case: Information contained in both generic and task-specific textual corpora may be highly valuable for decision making. This case assumes the state can either be easily described using natural language or is already in a natural language state. This includes examples such as user-based domains, in which user profiles and comments are part of the state, or the stock market, in which stocks are described by analysts and other readily available text. 3D physical environments such as VizDoom also fall into this category, as semantic segmentation maps can be easily described using natural language. Subjective information: Subjectivity refers to aspects used to express opinions, evaluations, and speculations. These may include strategies for a game, the way a doctor feels about her patient, the mood of a driver, and more. Unstructured information: In these cases, features might be measured by different units, with an arbitrary position in the state's feature vector, rendering them sensitive to permutations. Such state representations are thus hard to process using neural networks. As an example, the medical domain may contain numerous features describing the vitals of a patient. These raw features, when observed by an expert, can be efficiently described using natural language. Moreover, they allow an expert to efficiently add subjective information. An orthogonal line of research considers automating the process of image annotation. The noise added from the supervised or unsupervised process serves as a great challenge for natural language representation. We suspect the noise accumulated by this procedure would require additional information to be added to the state (e.g., past information). Nevertheless, as we have shown in this paper, such information can be compressed using natural language. In addition, while we have only considered spatial features of the state, information such as movement directions and transient features can be efficiently encoded as well. Natural language representations help abstract information and interpret the state of an agent, improving its overall performance. Nevertheless, it is imperative to choose a representation that best fits the domain at hand. Designers of RL algorithms should consider searching for a semantic representation that fits their needs. 
While this work only takes a first step toward finding better semantic state representations, we believe the structure inherent in natural language can be considered a favorable candidate for achieving this goal. ### Appendix ::: VizDoom VizDoom is a "Doom" based research environment that was developed at the Poznań University of Technology. It is based on the "ZDoom" game executable and includes a Python-based API. The API offers the user the ability to run game instances, query the game state, and execute actions. The original purpose of VizDoom is to provide a research platform for vision based reinforcement learning. Thus, a natural language representation for the game had to be implemented. ViZDoom emulates the "Doom" game and enables us to access data within a certain frame using Python dictionaries. This makes it possible to extract valuable data including player health, ammo, enemy locations etc. Each game frame contains "labels", which contain data on visible objects in the game (the player, enemies, medkits, etc). We used "Doom Builder" in order to edit some of the scenarios and design a new one. Environment rewards are presented in Table 2. ### Appendix ::: Natural language State Space A semantic representation using natural language should contain information which can be deduced by a human playing the game. For example, even though a human does not know the exact distance between objects, they can classify them as "close" or "far". However, objects that are outside the player's field of vision cannot be a part of the state. Furthermore, a human would most likely refer to an object's location relative to itself, using directions such as "right" or "left". ### Appendix ::: Language model implementation To convert each frame to a natural language representation state, the list of available labels is iterated, and a string is built accordingly. The main idea of our implementation is to divide the screen into multiple vertical patches, count the number of different objects inside each patch by type, and parse this as a sentence. Whether an object is close or far is determined by calculating its distance to the player and using two threshold levels. Object descriptions can be concise or detailed, as needed. We experimented with the following mechanics: the screen can be divided between patches equally or by predetermined ratios. Here, our main guideline was to keep the "front" patch narrow enough so it can be used as "sights". Our initial experiment was with 3 patches, and later we added 2 more patches classified as "outer left" and "outer right". In our experiments we tested up to 51 patches, referred to as left or right patches with corresponding numbers. We used 2 thresholds, which allowed us to classify the distance of an object from the player as "close", "mid", and "far". Depending on the task, the threshold values can be changed, and more thresholds can be added. Different states may generate sentences of different lengths. A maximum sentence length is another parameter that was tested. Table 1 presents some data regarding the average word count in some of the game scenarios. After the sentence describing the state is generated, it is transformed into an embedding vector. Words that were not found in the vocabulary were replaced with an "OOV" vector. All words were then concatenated into an NxDx1 matrix, representing the state. We experimented with both Word2Vec and GloVe pretrained embedding vectors. 
Eventually, we used the latter, as it consumes less memory and speeds up the training process. The length of the state sentence is one of the hyperparameters of the agents; shorter sentences are zero padded, while longer ones are trimmed. ### Appendix ::: Model implementation All of our models were implemented using PyTorch. The DQN agents used a single network that outputs the Q-values of the available actions. The PPO agents used an Actor-Critic model with two networks; the first outputs the policy distribution for the input state, and the second network outputs its value. As mentioned earlier, we used three common neural network architectures. The first was used for the raw image and semantic segmentation based agents. VizDoom's raw output is a 640X480X3 RGB image. We experimented with both the original image and its down-sampled version. The semantic segmentation image was of resolution 640X480X1, where the pixel value represents the object's class, generated using the VizDoom label API. The network consisted of two convolutional layers, two hidden linear layers and an output layer. The first convolutional layer has 8 6X6 filters with stride 3 and ReLU activation. The second convolutional layer has 16 3X3 filters with stride 2 and ReLU activation. The fully connected layers have 32 and 16 units, both followed by ReLU activations. The output layer's size is the number of actions the agent has available in the trained scenario. The second was used in the feature vector based agent. Naturally, some discretization is needed in order to build a feature vector, so some of the state data is lost. The feature vector was made using features we extracted from the VizDoom API, and its dimension was 90 X 1. The network is made up of two fully connected layers, each of them followed by a ReLU activation. The first layer has 32 units, and the second one has 16 units. The output layer's size was the number of actions available to the agent. The third was used in the natural language based agent. As previously mentioned, each natural language state is transformed into a 200X50X1 matrix. The first layers of the TextCNN are convolutional layers with 8 filters, which are designed to scan the input sentence and return convolution outputs of sequences of varying lengths. The filters vary in width, such that each of them learns to identify different lengths of sequences in words. Longer filters have a higher capability of extracting features from longer word sequences. The filters we have chosen have the following dimensions: 3X50X1, 4X50X1, 5X50X1, 8X50X1, 11X50X1. Following the convolution layer there is a ReLU activation and a max pool layer. Finally, there are two fully connected layers; the first layer has 32 units, and the second one has 16 units. Both of them are followed by ReLU activation. All architectures have the same output, regardless of the input type. The DQN network is a regression network, with its output size the number of available actions. The PPO agent has 2 networks; actor and critic. The actor network has a Softmax activation with size equal to the number of available actions. The critic network is a regression model with a single output representing the state's value. Reward plots for the PPO agent can be found in Figure FIGREF47. Figure 1: Example of Semantic Segmentation [Kundu et al., 2016]. Figure 2: Left: Raw visual inputs and their corresponding semantic segmentation in the VizDoom environment. Right: Our suggested NLP-based semantic state representation framework. 
Figure 3: Frame division used for describing the state in natural language. Figure 4: Natural language state representation for a simple state (top) and complex state (bottom). The corresponding embedded representations and shown on the right. Figure 5: Comparison of representation methods on the different VizDoom scenarios using a DQN agent. X and Y axes represent the number of iterations and cumulative reward, respectively. Last three graphs (bottom) depict nuisance-augmented scenarios. Figure 6: Robustness of each representation type with respect to amount of nuisance. Figure 7: Average rewards of NLP based agent as a function of the number of patches in the language model. Figure 8: PPO - state representation and their average rewards, various degrees of nuisance Table 1: statistics of words per state as function of patches. Table 2: Doom scenarios
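As a minimal sketch of the language model implementation described in the appendix above (the label tuple structure, patch names, and threshold values here are assumptions for illustration; the actual VizDoom label API exposes richer object data):

```python
def frame_to_sentence(labels, screen_width, n_patches=3, near=150.0, far=400.0):
    """Turn one frame's object labels into a natural language state description.

    labels: list of (object_name, x_position, distance) tuples, a simplified
        stand-in for the fields provided by the VizDoom label API.
    """
    patch_width = screen_width / n_patches
    patch_names = (["on your left", "in front of you", "on your right"]
                   if n_patches == 3
                   else [f"in patch {i}" for i in range(n_patches)])

    # Count objects per (patch, distance bucket, object type).
    counts = {}
    for name, x, dist in labels:
        patch = min(int(x // patch_width), n_patches - 1)
        bucket = "close" if dist < near else "mid" if dist < far else "far"
        key = (patch, bucket, name)
        counts[key] = counts.get(key, 0) + 1

    # Build one clause per group, then join them into a sentence.
    clauses = []
    for (patch, bucket, name), n in sorted(counts.items()):
        plural = "s" if n > 1 else ""
        clauses.append(f"{n} {bucket} {name}{plural} {patch_names[patch]}")
    return "You see nothing." if not clauses else "You see " + ", ".join(clauses) + "."
```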
a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios
How are labels propagated using this approach?
### Introduction Language identification is a crucial first step in textual data processing and is considered feasible over formal texts BIBREF0 . The task is harder for social media (e.g. Twitter) where text is less formal, noisier and can be written in a wide range of languages. We focus on identifying similar languages, where surface-level content alone may not be sufficient. Our approach combines a content model with evidence propagated over the social network of the authors. For example, a user well-connected to users posting in a language is more likely to post in that language. Our system scores 76.63%, 1.4% higher than the top submission to the tweetLID workshop. ### Background Traditional language identification compares a document with a language fingerprint built from n-gram bag-of-words (character or word level). Tweets carry additional metadata useful for identifying language, such as geolocation BIBREF1 , username BIBREF2 , BIBREF1 and urls mentioned in the tweet BIBREF2 . Other methods expand beyond the tweet itself to use a histogram of previously predicted languages, those of users @-mentioned and lexical content of other tweets in a discussion BIBREF1 . Discriminating between similar languages was the focus of the VarDial workshop BIBREF3 , and most submissions used content analysis. These methods make limited use of the social context in which the authors are tweeting – our research question is “Can we identify the language of a tweet using the social graph of the tweeter?”. Label propagation approaches BIBREF4 are powerful techniques for semi-supervised learning where the domain can naturally be described using an undirected graph. Each node contains a probability distribution over labels, which may be empty for unlabelled nodes, and these labels are propagated over the graph in an iterative fashion. Modified Adsorption (mad) BIBREF5 is an extension that allows more control of the random walk through the graph. Applications of lp and mad are varied, including video recommendation BIBREF6 and sentiment analysis over Twitter BIBREF7 . ### Method Our method predicts the language $l$ for a tweet $t$ by combining the score of a content model, with parameters $\theta _c$, and the score of a graph model that takes social context into account, with parameters $\theta _s$. ### Content model Our content model is a one-vs.-all regularised logistic regression model with character 2- to 5-gram features that do not span word boundaries. The scores for a tweet are normalised to obtain a probability distribution. ### Social model We use a graph to model the social media context, relating tweets to one another, authors to tweets and other authors. Figure FIGREF7 shows the graph, composed of three types of nodes: tweets (T), users (U) and the “world” (W). Edges are created between nodes and weighted as follows: T-T the unigram cosine similarity between tweets, T-U weighted 100 between a tweet and its author, U-U weighted 1 between two users in a “follows” relationship and U-W weighted 0.001 to ensure a connected graph for the mad algorithm. We create the graph using all data, and training set tweets have an initial language label distribution. A naïve approach to building the tweet-tweet subgraph requires $O(n^2)$ comparisons, where $n$ is the number of tweets, measuring the similarity of each tweet with all others. 
Instead, we performed INLINEFORM1 -nearest-neighbour classification on all tweets, represented as a bag of unigrams, and compared each tweet with the top- INLINEFORM2 neighbours. We use Junto (mad) BIBREF5 to propagate labels from labelled to unlabelled nodes. Upon convergence, we renormalise label scores for initially unlabelled nodes to find the value of INLINEFORM4 . ### Evaluation The tweetLID workshop shared task requires systems to identify the language of tweets written in Spanish (es), Portuguese (pt), Catalan (ca), English (en), Galician (gl) and Basque (eu). Some language pairs are similar (es and ca; pt and gl) and this poses a challenge to systems that rely on content features alone. We use the supplied evaluation corpus, which has been manually labelled with six languages and evenly split into training and test collections. We use the official evaluation script and report precision, recall and F-score, macro-averaged across languages. This handles ambiguous tweets by permitting systems to return any of the annotated languages. Table TABREF10 shows that using the content model alone is more effective for languages that are distinct in our set of languages (i.e. English and Basque). For similar languages, adding the social model helps to discriminate between them (i.e. Spanish, Portuguese, Catalan and Galician), particularly those where a less-resourced language is similar to a more popular one. Using the social graph almost doubles the F-score for undecided (und) languages, either not in the set above or hard-to-identify, from 18.85% to 34.95%. Macro-averaged, our system scores 76.63%, higher than the best score in the competition: 75.2%. ### Conclusion Our approach uses social information to help identify the language of tweets. It achieves state-of-the-art performance, especially when discriminating between similar languages. A by-product of our approach is that users are assigned a language distribution, which may be useful for other tasks. Table 1: Experimental results. ♦/♠ are similar pairs. Figure 1: Graph topology. Rectangular nodes are tweets, circular nodes are users and the diamond represents the world. Some tweet nodes are labelled with an initial distribution over language labels and others are unlabelled.
We use Junto (mad) BIBREF5 to propagate labels from labelled to unlabelled nodes.
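To make the pipeline in the context above concrete, here is a minimal, hypothetical sketch (not the authors' released code) of its two components: a character n-gram content model and label propagation over the tweet/user graph. It assumes scikit-learn and networkx are available, uses plain iterative label propagation as a simplified stand-in for Modified Adsorption (Junto), and the `combine` interpolation is an assumption, since the exact combination equation is not recoverable from the context.

```python
# Minimal, hypothetical sketch of the pipeline described in the context above.
# Assumptions: scikit-learn and networkx are available; plain label propagation
# stands in for Modified Adsorption (Junto); all six languages occur in training.
import networkx as nx
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

LANGS = ["es", "pt", "ca", "en", "gl", "eu"]


def content_probs(train_texts, train_langs, test_texts):
    """One-vs-all logistic regression over character 2- to 5-grams
    (not crossing word boundaries); columns are reordered to LANGS."""
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(train_texts), train_langs)
    proba = clf.predict_proba(vec.transform(test_texts))
    order = [list(clf.classes_).index(lang) for lang in LANGS]
    return proba[:, order]


def build_graph(authors, follows):
    """authors: {tweet_id: user_id}; follows: iterable of (user_id, user_id).
    Edge weights follow the context: tweet-author 100, follows 1, user-world 0.001."""
    g = nx.Graph()
    g.add_node("WORLD")
    for tweet_id, user_id in authors.items():
        g.add_edge(tweet_id, user_id, weight=100.0)
        g.add_edge(user_id, "WORLD", weight=0.001)
    for u, v in follows:
        g.add_edge(u, v, weight=1.0)
    # Tweet-tweet edges (unigram cosine similarity, restricted to each tweet's
    # top-k neighbours) would be added here; omitted to keep the sketch short.
    return g


def propagate(g, seeds, iters=10):
    """seeds: {tweet_id: {lang: prob}} for labelled training tweets.
    Returns a {node: np.ndarray} distribution over LANGS for every node."""
    dist = {n: np.array([seeds.get(n, {}).get(l, 0.0) for l in LANGS]) for n in g}
    for _ in range(iters):
        nxt = {}
        for node in g:
            if node in seeds:                     # clamp labelled nodes
                nxt[node] = dist[node]
                continue
            acc, wsum = np.zeros(len(LANGS)), 0.0
            for nbr, attrs in g[node].items():    # weighted neighbour average
                w = attrs.get("weight", 1.0)
                acc += w * dist[nbr]
                wsum += w
            nxt[node] = acc / wsum if wsum else acc
        dist = nxt
    for node, vec in dist.items():                # renormalise unlabelled nodes
        if node not in seeds and vec.sum() > 0:
            dist[node] = vec / vec.sum()
    return dist


def combine(content_p, graph_p, alpha=0.5):
    """The exact form of the combination equation is not given in the context,
    so a simple linear interpolation is assumed here."""
    return alpha * content_p + (1 - alpha) * graph_p
```

In use, one would take the arg-max language from `combine(...)` for each test tweet; the interpolation weight and the top-k neighbour cut-off stand in for the tuned parameters that appear as INLINEFORM placeholders in the extracted text.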
Of the films reviewed, which one received the most positive criticism? A. There's Something About Mary B. Unmade Beds C. The Slums of Beverly Hills D. The Avengers (new version)
Dirty Laundry Now and then, a documentary film comes along that makes us re-examine the rules that unofficially govern the genre: Can there be a middle ground between fiction and fact? Can a documentary use scripted scenes and yet remain ontologically authentic? How much can you stylize material before you alter the reality that you're striving, at least in theory, to capture? Unmade Beds , Nicholas Barker's " 'real life' feature film," has proudly worn its mongrel status as a "directed" documentary of single life in the big city, employing, in the face of criticism, what amounts to a cackling-punk defiance. The movie tracks four aging New Yorkers--two men, two women--through their lonely dating rituals, in the process depicting a universe of lusty, coupled-up haves and downcast, excluded have-nots, all viewed Rear Window -style through rectangular openings in the massive apartment houses in which they reside. This is not cinema vérité , and nothing has been left to chance. The director selected his four subjects from many hundreds of potential candidates, followed them around for months, and then scripted their monologues and dialogues to reflect what he says he saw. Calling his own film "an exercise in mendacity," Barker goes on, "I'm quite happy to tell lies about my characters and even collude with their self-delusions if it enables me to communicate larger dramatic truths." Spurned by U.S. distributors, Unmade Beds opened two weeks ago in a small screening room in downtown Manhattan, where it proceeded to set box office records and generate lots of (largely favorable) press. In part due to smart publicity, which has bannered some of the bad reviews and commentary ("I have to tell you that this film upset me so much that I really don't want to have anything to do with it"--a New York publicist), it threatens to become a cause célèbre --and to be coming soon to a theater near you. It's always nice to see distributors proved wrong about the merits of "difficult" films, but in this case I think they did the decent thing. Unmade Beds isn't just bad--it's obnoxiously, noxiously bad, a freak show for the empathetically challenged. The outrage it has prompted isn't the Puritan kind; it's more like legitimate revulsion at watching a blowhard pervert people's lives in the name of "larger dramatic truths." Those truths are large, all right. Take Michael, the 40-year-old, 5 foot 4 inch lonely guy who has been looking for a wife for almost two decades. If you were to walk past him on the street, you might think that a man of his small stature might have some trouble getting dates and be rather bitter about it. The larger dramatic truth is that Michael has lots of trouble getting dates and is very bitter about it. Just in case you feel too sorry for him, however, Barker is careful to include a homophobic monologue in which Michael complains about young women who waste their lives hanging out with effeminate males. Michael turns out to be the film's most sympathetic subject--by a wide margin. At least he's not Mikey, a paunchy 54-year-old who writes but can't sell screenplays and who always flees blind dates, because the women he gets fixed up with are "mutts." Sounding like one of the low-level gangsters who posture like kingpins in Donnie Brasco , Mikey talks a lot about mutts. He also reminisces about that 24 hour period in the '70s when he managed to sleep with three different beautiful women, whose pictures he shows off. These days, all he meets are mutts. 
He comes off as a pathetic little loser--a mutt. Aimee, on the other hand, is a pathetic big loser, weighing in at 225 pounds. Determined to get married before she turns 30, she generally is filmed beside bags of groceries and assorted junk foods. She cries about her situation to her thin friend, Laurie, who, in one scene, gently mentions Aimee's weight. Clearly the scene is scripted, but Aimee does a good job acting taken aback. She has always been fat--and she's "OK with it," and a man just has to accept it. This is followed by more talk about how you attract men. Will they respect you if you call them back? If you express too much interest? "Or," the viewer thinks, "if you're 225 pounds?" The only natural performer here is Brenda, a garrulous exhibitionist who blossoms with the camera on her--she could have a career as a Penny Marshall-style character actress. Divorced and aging, Brenda needs money and is willing to charge for her sexual services. It shouldn't be too difficult, because men are always showing her their dicks ("I'm up to two dicks a day"). They meet her and, a few minutes later, they show her their dicks. Weird, huh? What Barker leaves out (it's in a New York Observer article) is that Brenda, a former lap dancer, works in marketing at a strip joint. Presumably, men standing next to her in line at McDonald's don't show her their dicks. Nor, presumably, does she show them her breasts--although she bares them for Barker's camera, jabbering about her body while she doffs her clothes and steps into the shower and soaps up. Barker might have crafted his subjects' monologues from their own words, but he has robbed them of their spontaneity--and, thus, of their essence. They aren't thinking or trying to come to grips with their situations in front of your eyes, because they already know what they're going to say: They've been fixed like butterflies on the ends of pins and held up for voyeuristic inspection. The scenes with friends and confidantes have a crude, programmatic purpose. You can imagine the director composing a shot (the shots are tightly composed and elaborately lighted) and reminding them, "In this scene she points out that you should lose weight and you get shocked and defensive. Ready ... Action." Call me square, but I find this antithetical to the documentary spirit. An Englishman who trained as an anthropologist before going to work for BBC Television, Barker clearly made up his mind about his material before his cameras began to roll--so it's no surprise that it feels prechewed and predigested. When reality interfered (Brenda apparently did not go through with a marriage to an immigrant in search of a green card for $10,000, as she does on-screen), Barker brushed the truth aside as immaterial, following her up the steps of City Hall in her wedding dress because it was "true to her character." But what separates documentary from fiction is that real people are often more complicated, and more conflicted, than finished characters--as Brenda proved to be more (or, at least, other) than the sum of her parts. That's the kind of truth that reveals itself to documentary filmmakers after the fact, when they go over footage and discover unexpected patterns, dissonances, glimmers of a universe that's richer and messier than the one they set out to portray. So what are Barker's "larger dramatic truths"? Single people in big cities can be desperate. Single people fear they're going to die alone--unloved and unloving. 
People are judged and, in turn, judge others by how they look. Big news. One could argue, charitably, that the movie is meant to be prescriptive, that Barker intends for us to regard the ways in which his subjects delude themselves and thereby learn to see through our own self-delusions. But Barker hasn't concocted a larger dramatic structure that would hold those larger dramatic truths together and help us comprehend where these people went wrong. He dramatizes right up to the point where a dramatist would be expected to provide some insight--and then, hey, he's a documentarian. Unmade Beds might make a good date movie. There's little to argue about in its subjects' personalities--both males and females will find them repulsive--and the picture the film paints of single life in the big city is so bleak that you'll probably want to jump into bed with whoever is sitting next to you. Anything to keep from turning into one of those people. The Slums of Beverly Hills also walks a line between two genres, in this case coming-of-age sex comedy and autobiographical monologue. Tamara Jenkins, the writer and first-time director, has an eye for absurd juxtapositions that was obviously sharpened by the pain of her nomadic upbringing. Her protagonist (Natasha Lyonne) spends her teen-age years being shuttled with her two brothers from one cheap dive to another in the 90210 ZIP code, all because her egregiously unsuccessful father (Alan Arkin) wants them to be educated in the best schools. ("Furniture's temporary; education is permanent.") It's a major omission, then, that we never see those schools or the kids' interaction with their stable, well-to-do Beverly Hills counterparts. We can't tell if the father is, on some weird level, justified in his fervor, or whether he's screwing up his children--subjecting them to humiliation and robbing them of a sense of permanence--for no reason. Jenkins hasn't quite figured out how to shape her narrative, which is full of episodes that are there because they actually happened but that don't have a payoff. I almost wish she'd included more voice-over narration, more commentary on the things that, as a filmmaker, she hasn't learned to bring out. The Slums of Beverly Hills never gels, but it has a likable spirit, and it's exceedingly easy on the eye, with lots of pretty girls and wry evocations of '70s fashions and decor. The father, to obtain financial support from his wealthy brother (Carl Reiner), volunteers to take in his vaguely schizzy, dipsomaniacal niece (Marisa Tomei). She and her cousin compare breasts, play with vibrators, and talk in pig Latinish gibberish, but Jenkins never lets the proceedings get too sentimental: The whimsy is always cut with an acidic awareness of the family's desperation. "Are we middle-class now?" ask the children, hopefully, before another crisis sends them back into their van, cruising past the movie stars' mansions, in the mean streets of Beverly Hills. Grading on the steep curve established by summer blockbuster seasons past, these have turned out to be a pretty good few months at the movies. Even the commercial swill ( Deep Impact , Armageddon , The Mask of Zorro , Small Soldiers , Snake Eyes , Halloween: H20 ) has been of a high grade, and Saving Private Ryan and Return to Paradise were Vitalis slaps in the kisser for people woozy from all the warm weather escapism. Out of Sight was tender and charming, as was, in its gross-out way, There's Something About Mary . 
And, on the indie front, The Opposite of Sex , Buffalo 66 , and Pi have proved that there's still commercial life after Sundance. Sure, we had stinkers, but even Godzilla was fun to jeer at. And there's something reassuring about the fact that The Avengers is so rotten: proof yet again that people with piles of money can hire wizard production designers but can't fake class. I don't know who the credited screenwriter, Don MacPherson, is, but it's unlikely that he has ever seen an episode of the old Avengers , let alone sussed out the source of its appeal. Opening with a slapstick sequence of agent John Steed (Ralph Fiennes) doing kung fu, the film shifts to a scene in which he meets Mrs. Peel (Uma Thurman) while sitting naked in a sauna with only a newspaper to cover his private parts. The series was erotic in a way only prim English humor can be: The Old Boy Steed was capable of throwing a punch and bonking someone with his bowler, but he left the karate kicking to his liberated, leather-suited distaff associate. Here their roles have been witlessly muddled, and MacPherson's idea of banter is to have the pair complete each other's clichés. Whereas the original Steed, Patrick Macnee, was to the English Men's Club born, Fiennes is an eternal caddie. The willowy Thurman looks great in her outfits, but it's ever more apparent that she isn't much of an actress--at least, not a trained one--and her attempts at insouciance are embarrassingly arch. As the eccentric master villain who controls the weather, even Sean Connery is flat-out terrible, acting high on the hog. To think Connery once found the Bond films so far beneath him! When he sputters lines like "Time to die!" one imagines Dr. No, Goldfinger, and Blofeld snickering in the wings.
C. The Slums of Beverly Hills
What isn't a conclusion drawn? A. Michelob Hefeweizen is a great beer for the cost B. Anheuser-Busch lived up to its popularity C. Sam Adams was easily identifiable D. Pyramid Hefeweizen is not worth the money
More Booze You Can Use When we last heard from them, the members of the Slate beer-testing team were coping with lagers and trying to see if they could taste the 3-to-1 price difference between the most- and least-expensive brands. (Click for a wrap-up of the first round of beer tasting.) The answer was: They found one beer they really liked, Samuel Adams Boston Lager , and one they really hated, imported Grolsch from Holland. Both were expensive beers--Grolsch was the most expensive in the test--and otherwise the testers had a hard time telling beers apart. The members of the team, as noted in the original article, all hold day jobs at Microsoft, mainly as designers, managers, and coders for Microsoft Word. The point of the second test was not to find the difference between cheap and expensive beers but instead to compare a variety of top-of-the-line beers. Was there one kind the tasters preferred consistently? Could they detect any of the subtleties of brewing style and provenance that microbrew customers pay such attention to when choosing some Doppelbock over a cream ale? Since the tasting panel had left the first round grumbling that cheap lagers were not a fair test of their abilities, this second round of testing was advertised to the panel as a reward. Every beer in Round 2 would be a fancy beer. A microbrew. A "craft beer." A prestigious import. These were the kinds of beer the panel members said they liked--and the ones they said they were most familiar with. One aspect of the reward was that they would presumably enjoy the actual testing more--fewer rueful beer descriptions along the lines of "urine" or "get it away!" were expected than in the first round. The other aspect of anticipated reward was the panelists' unspoken but obvious assumption that this time they would "do better" on the test. Intellectual vanity being what it is, people who had fought for and won jobs at Microsoft and who still must fight every six months for primacy on the employee-ranking scale (which determines--gasp!--how many new stock options they receive) would assume that their skill as tasters was on trial, just as much as the beer was. Of course they were right, which is what made this round as amusing to administer as the first one had been. Here is what happened and what it meant: 1. Procedure. This was similar in most ways to the experimental approach of Round 1. The nine testers who showed up were a subset of the original 12. The missing three dropped out with excuses of "my wife is sick" (one person) and "meeting is running long" (two). As before, each tester found before him on a table 10 red plastic cups, labeled A through J. Each cup held 3 ounces of one of the beers. The A-to-J labeling scheme was the same for all testers. Instead of saltines for palate-cleansing, this time we had popcorn and nuts. As they began, the tasters were given these and only these clues: that the flight included one "holdover" beer from the previous round (Sam Adams); that it included at least one import (Bass); that it included at least one macrobrew , specifically, a member of the vast Anheuser-Busch family (Michelob Hefeweizen). After sampling all beers, the tasters rated them as follows: Overall quality points, from zero to 100, reflecting their personal, subjective fondness for the beer. Descriptions of and comments about each beer's taste--"smooth and nutty," "too strong," etc. If the first ranking was a measure of how good each beer was, this was an attempt to explain what made it good. 
Best and Worst , one of each from the group. Name that beer! The tasters were told that some of the drinks were Hefeweizens, some might be IPAs (India pale ales), some might be bitters, and so on. They were asked to put each beer in its proper category--and to name a specific brewery and brand if they could. The idea here was to test the veteran beer drinkers' claim to recognize the distinctive tastes of famous brands. (To see all the grids for all the beers, click .) 2. Philosophy. The first round of testing was All Lager. This second round was All Fancy, and Mainly Not Lager. As several correspondents (for instance, the of Best American Beers ) have helpfully pointed out, the definition of lager provided last time was not exactly "accurate." If you want to stay within the realm of textbook definitions, a lager is a beer brewed a particular way--slowly, at cool temperatures, with yeast that settles on the bottom of the vat. This is in contrast with an ale, which is brewed faster, warmer, and with the yeast on top. By this same reasoning, lagers don't have to be light-colored, weak-flavored, and watery, as mainstream American lagers are. In principle, lagers can be dark, fierce, manly. Therefore, the correspondents suggest, it was wrong to impugn Sam Adams or Pete's Wicked for deceptive labeling, in presenting their tawnier, more flavorful beers as lagers too. To this the beer scientist must say: Book-learning is fine in its place. But let's be realistic. Actual drinking experience teaches the American beer consumer that a) all cheap beers are lagers; and b) most lagers are light-colored and weak. The first test was designed to evaluate low-end beers and therefore had to be lager-centric. This one is designed to test fancy beers--but in the spirit of open-mindedness and technical accuracy, it includes a few "strong" lagers too. 3. Materials. The 10 test beers were chosen with several goals in mind: To cover at least a modest range of fancy beer types--extra special bitter, India pale ale, Hefeweizen, and so on. To include both imported and domestic beers. Among the domestic microbrews, there's an obvious skew toward beers from the Pacific Northwest. But as Microsoft would put it, that's a feature not a bug. These beers all came from the Safeway nearest the Redmond, Wash., "main campus" of Microsoft, and microbrews are supposed to be local. To include one holdover from the previous test, as a scientific control on our tasters' preferences. This was Sam Adams , runaway winner of Round 1. To include one fancy product from a monster-scale U.S. mass brewery, to see if the tasters liked it better or worse than the cute little microbrews. This was Michelob Hefeweizen , from the pride of St. Louis, Anheuser-Busch. Click for pricing information and pre-quaffing evaluations. The beers tasted were: 4. Data Analysis. a) Best and Worst. Compared to the lager test, we would expect the range of "best" choices to be more varied, since all the tested beers were supposed to be good. This expectation was most dramatically borne out in the "Best and Worst" rankings. The nine tasters cast a total of nine Worst votes and 11.5 Best votes. (Tester No. 1 turned in a sheet with three Best selections, or two more than his theoretical quota. Tester No. 4 listed a Best and a Best-minus, which counted as half a vote.) The results were clearest at the bottom: three Worsts for Pyramid Hefeweizen , even though most comments about the beer were more or less respectful. 
("Bitter, drinkable.") But at the top and middle the situation was muddier: There were three Bests for Full Sail ESB , which most of the tasters later said they weren't familiar with, and 2.5 for Redhook IPA , which all the tasters knew. But each of these also got a Worst vote, and most of the other beers had a mixed reading. So far, the tasters are meeting expectations, finding something to like in nearly all these fancy beers. b) Overall preference points. Here the complications increase. The loser was again apparent: Pyramid Hefeweizen came in last on rating points, as it had in the Best/Worst derby. But the amazing dark horse winner was Michelob Hefeweizen . The three elements of surprise here, in ascending order of unexpectedness, are: This best-liked beer belonged to the same category, Hefeweizen, as the least-liked product, from Pyramid. This was also the only outright Anheuser-Busch product in the contest (the Redhooks are 75 percent A-B free). It is safe to say that all tasters would have said beforehand that they would rank an American macrobrew last, and Anheuser-Busch last of all. Although it clearly won on overall preference points, Michelob Hefeweizen was the only beer not to have received a single "Best" vote. The first two anomalies can be written off as testament to the power of a blind taste test. The third suggests an important difference in concepts of "bestness." Sometimes a product seems to be the best of a group simply because it's the most unusual or distinctive. This is why very high Wine Spectator ratings often go to wines that mainly taste odd. But another kind of bestness involves an unobtrusive, day-in day-out acceptability. That seems to be Michelob Hefe 's achievement here: no one's first choice, but high on everyone's list. Let's go to the charts: This table shows how the beers performed on "raw score"--that is, without the advanced statistical adjustment of throwing out the highest and lowest score each beer received. Next, we have "corrected average preference points," throwing out the high and low marks for each beer. The result is basically the same: It is worth noting the fate of Sam Adams on these charts. Here it ends up with a score of less than 61. These were the numbers awarded by the very same tasters who gave it a corrected preference rating of 83.33 the last time around--and 10 "Best" votes, vs. one Best (and one Worst) this time. The shift in Bests is understandable and demonstrates the importance of picking your competition. The severe drop in preference points illustrates more acutely the ancient principle of being a big fish in a small pond. These same tasters thought that Sam Adams was objectively much better when it was surrounded by Busch and Schmidt's. c) Value rankings. Last time this calculation led to what the colorful French would call a bouleversement. One of the cheapest beers, Busch, which had been in the lower ranks on overall preference points, came out at the top on value-for-money ratings, because it was so cheap. The big surprise now is that the highest-rated beer was also the cheapest one, Michelob Hefe , so the value calculation turned into a rout: Pyramid Hefeweizen was expensive on top of being unpopular, so its position at the bottom was hammered home--but not as painfully as that of Bass Ale . 
Bass had been in the respectable lower middle class of the preference rankings, so its disappointing Val-u-meter showing mainly reflects the fact that it was the only beer not on "sale" and therefore by far the costliest entry in the experiment. d) Taster skill. As members of the tasting panel began to suspect, they themselves were being judged while they judged the beer. One of the tasters, No. 7, decided to live dangerously and give specific brands and breweries for Samples A through J. This man was the only panel member whose job does not involve designing Microsoft Word--and the only one to identify two or more of the beers accurately and specifically. (He spotted Redhook IPA and Redhook ESB.) The fact that the beers correctly identified were the two most popular microbrews in the Seattle area suggests that familiarity is the main ingredient in knowing your beer. Many others were simply lost. Barely half the tasters, five of nine, recognized that Michelob Hefeweizen was a Hefeweizen. Before the test, nine of nine would have said that picking out a Hefe was easy, because of its cloudy look and wheaty flavor. Three tasters thought Sam Adams was an IPA ; two thought Redhook's IPA was a Hefeweizen. In fairness, six of nine testers identified Pyramid Hefeweizen as a Hefe, and six recognized Full Sail ESB as a bitter. Much in the fashion of blind men describing an elephant, here is a how the testers handled Sam Adams Boston Lager : 5. Implications and Directions for Future Research. Science does not always answer questions; often, it raises many new ones. This excursion into beer science mainly raises the question: What kind of people are we? If we are Gradgrind-like empiricists, living our life for "welfare maximization" as described in introductory econ. courses, the conclusion is obvious. We learned from the first experiment to buy either Sam Adams (when we wanted maximum lager enjoyment per bottle) or Busch (for maximum taste and snob appeal per dollar). From this second round we see an even more efficient possibility: Buy Michelob Hefeweizen and nothing else, since on the basis of this test it's the best liked and the cheapest beer. By the way, if there is a single company whose achievements the testing panel honored, it would be Anheuser-Busch . From its brewing tanks came two of the double-crown winners of the taste tests: plain old Busch , the Taste-o-meter and Snob-o-meter victor of Round 1, and Michelob Hefeweizen , the preference-point and Val-u-meter winner this time. But, of course, there is another possibility: that what is excluded in a blind taste test is in fact what we want, and are happy to pay for, when we sit down with a beer. The complicated label, the fancy bottle, the exotic concept that this beer has traveled from some far-off corner of Bohemia or even the Yakima Valley--all this may be cheap at the $1.25-per-pint cost difference between the cheapest and the most expensive beers. In elementary school, we all endured a standard science experiment: If you shut your eyes and pinch your nose closed, can you tell any difference in the taste of a slice of apple, of carrot, of pear? You can't--but that doesn't mean that from then on you should close your eyes, hold your nose, and chew a cheap carrot when you feel like having some fruit. There is a time and place for carrots, but also for juicy pears. There is a time for Busch, but also for Full Sail "Equinox." 
For scientists who want to continue this work at home, here are a few suggestions for further research: Tell the testers ahead of time what beers they will be drinking. Ask them to rank the beers, 1 through 10, based on how well they like them. Then compare the list with the "revealed preferences" that come from the blind test. As a variation, show them the list ahead of time and ask them to pick out the beer they know they love and the one they know they hate. Then compare this with the "after" list. If you're going to test imported lagers, try Foster's or Corona rather than Grolsch. Remember to stay strictly in the scientist's role. Don't take the test yourself.
C. Sam Adams was easily identifiable
Why are the grey helmets necessary? A. The helmets block external psionic forces. B. The helmets improve the reception of external psionic forces. C. The helmets are for safety, as the children are heavily medicated and at high risk for falling. D. The helmets amplify the childrens' psychic abilities.
BRAMBLE BUSH BY ALAN E. NOURSE [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, August 1957. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] There was a man in our town, and he was wondrous wise; He jumped into a bramble bush and scratched out both his eyes. And when he saw what he had done, with all his might and main He jumped into another bush and scratched them in again. MOTHER GOOSE Dr. David Lessing found Jack Dorffman and the boy waiting in his office when he arrived at the Hoffman Center that morning. Dorffman looked as though he'd been running all night. There were dark pouches under his eyes; his heavy unshaven face seemed to sag at every crease. Lessing glanced sharply at his Field Director and sank down behind his desk with a sigh. "All right, Jack—what's wrong?" "This kid is driving me nuts," said Dorffman through clenched teeth. "He's gone completely hay-wire. Nobody's been able to get near him for three weeks, and now at six o'clock this morning he decides he's leaving the Farm. I talk to him, I sweat him down, I do everything but tie him to the bed, and I waste my time. He's leaving the Farm. Period." "So you bring him down here," said Lessing sourly. "The worst place he could be, if something's really wrong." He looked across at the boy. "Tommy? Come over and sit down." There was nothing singular about the boy's appearance. He was thin, with a pale freckled face and the guileless expression of any normal eight-year-old as he blinked across the desk at Lessing. The awkward grey monitor-helmet concealed a shock of sandy hair. He sat with a mute appeal in his large grey eyes as Lessing flipped the reader-switch and blinked in alarm at the wildly thrashing pattern on the tape. The boy was terrorized. He was literally pulsating with fear. Lessing sat back slowly. "Tell me about it, Tommy," he said gently. "I don't want to go back to the Farm," said the boy. "Why?" "I just don't. I hate it there." "Are you frightened?" The boy bit his lip and nodded slowly. "Of me? Of Dr. Dorffman?" "No. Oh, no!" "Then what?" Again the mute appeal in the boy's eyes. He groped for words, and none came. Finally he said, "If I could only take this off—" He fingered the grey plastic helmet. "You think that would make you feel better?" "It would, I know it would." Lessing shook his head. "I don't think so, Tommy. You know what the monitor is for, don't you?" "It stops things from going out." "That's right. And it stops things from going in. It's an insulator. You need it badly. It would hurt you a great deal if you took it off, away from the Farm." The boy fought back tears. "But I don't want to go back there—" The fear-pattern was alive again on the tape. "I don't feel good there. I never want to go back." "Well, we'll see. You can stay here for a while." Lessing nodded at Dorffman and stepped into an adjoining room with him. "You say this has been going on for three weeks ?" "I'm afraid so. We thought it was just a temporary pattern—we see so much of that up there." "I know, I know." Lessing chewed his lip. "I don't like it. We'd better set up a battery on him and try to spot the trouble. And I'm afraid you'll have to set it up. I've got that young Melrose from Chicago to deal with this morning—the one who's threatening to upset the whole Conference next month with some crazy theories he's been playing with. I'll probably have to take him out to the Farm to shut him up." 
Lessing ran a hand through sparse grey hair. "See what you can do for the boy downstairs." "Full psi precautions?" asked Dorffman. "Certainly! And Jack—in this case, be sure of it. If Tommy's in the trouble I think he's in, we don't dare risk a chance of Adult Contact now. We could end up with a dead boy on our hands." Two letters were waiting on Lessing's desk that morning. The first was from Roberts Bros., announcing another shift of deadline on the book, and demanding the galley proofs two weeks earlier than scheduled. Lessing groaned. As director of psionic research at the Hoffman Medical Center, he had long since learned how administrative detail could suck up daytime hours. He knew that his real work was at the Farm—yet he hadn't even been to the Farm in over six weeks. And now, as the book approached publication date, Lessing wondered if he would ever really get back to work again. The other letter cheered him a bit more. It bore the letterhead of the International Psionics Conference: Dear Dr. Lessing: In recognition of your position as an authority on human Psionic behavior patterns, we would be gratified to schedule you as principle speaker at the Conference in Chicago on October 12th. A few remarks in discussion of your forthcoming book would be entirely in order— They were waiting for it, then! He ran the galley proofs into the scanner excitedly. They knew he had something up his sleeve. His earlier papers had only hinted at the direction he was going—but the book would clear away the fog. He scanned the title page proudly. "A Theory of Psionic Influence on Infant and Child Development." A good title—concise, commanding, yet modest. They would read it, all right. And they would find it a light shining brightly in the darkness, a guide to the men who were floundering in the jungle of a strange and baffling new science. For they were floundering. When they were finally forced to recognize that this great and powerful force did indeed exist in human minds, with unimaginable potential if it could only be unlocked, they had plunged eagerly into the search, and found themselves in a maddening bramble bush of contradictions and chaos. Nothing worked, and everything worked too well. They were trying to study phenomena which made no sense, observing things that defied logic. Natural laws came crashing down about their ears as they stood sadly by and watched things happen which natural law said could never happen. They had never been in this jungle before, nor in any jungle remotely like it. The old rules didn't work here, the old methods of study failed. And the more they struggled, the thicker and more impenetrable the bramble bush became— But now David Lessing had discovered a pathway through that jungle, a theory to work by— At his elbow the intercom buzzed. "A gentleman to see you," the girl said. "A Dr. Melrose. He's very impatient, sir." He shut off the scanner and said, "Send him in, please." Dr. Peter Melrose was tall and thin, with jet black hair and dark mocking eyes. He wore a threadbare sport coat and a slouch. He offered Lessing a bony hand, then flung himself into a chair as he stared about the office in awe. "I'm really overwhelmed," he said after a moment. "Within the stronghold of psionic research at last. And face to face with the Master in the trembling flesh!" Lessing frowned. "Dr. Melrose, I don't quite understand—" "Oh, it's just that I'm impressed," the young man said airily. 
"Of course, I've seen old dried-up Authorities before—but never before a brand spanking new one, just fresh out of the pupa, so to speak!" He touched his forehead in a gesture of reverence. "I bow before the Oracle. Speak, oh Motah, live forever! Cast a pearl at my feet!" "If you've come here to be insulting," Lessing said coldly, "you're just wasting time." He reached for the intercom switch. "I think you'd better wait before you do that," Melrose said sharply, "because I'm planning to take you apart at the Conference next month unless I like everything I see and hear down here today. And if you don't think I can do it, you're in for quite a dumping." Lessing sat back slowly. "Tell me—just what, exactly, do you want?" "I want to hear this fairy tale you're about to publish in the name of 'Theory'," Melrose said. "I want to see this famous Farm of yours up in Connecticut and see for myself how much pressure these experimental controls you keep talking about will actually bear. But mostly, I want to see just what in psionic hell you're so busy making yourself an Authority about." There was no laughter in the man's sharp brown eyes. "You couldn't touch me with a ten foot pole at this conference," snapped Lessing. The other man grinned. "Try me! We shook you up a little bit last year, but you didn't seem to get the idea." "Last year was different." Lessing scowled. "As for our 'fairy tale', we happen to have a staggering body of evidence that says that it's true." "If the papers you've already published are a preview, we think it's false as Satan." "And our controls are above suspicion." "So far, we haven't found any way to set up logical controls," said Melrose. "We've done a lot of work on it, too." "Oh, yes—I've heard about your work. Not bad, really. A little misdirected, is all." "According to your Theory, that is." "Wildly unorthodox approach to psionics—but at least you're energetic enough." "We haven't been energetic enough to find an orthodox approach that got us anywhere. We doubt if you have, either. But maybe we're all wrong." Melrose grinned unpleasantly. "We're not unreasonable, your Majesty. We just ask to be shown. If you dare, that is." Lessing slammed his fist down on the desk angrily. "Have you got the day to take a trip?" "I've got 'til New Year." Lessing shouted for his girl. "Get Dorffman up here. We're going to the Farm this afternoon." The girl nodded, then hesitated. "But what about your lunch?" "Bother lunch." He gave Melrose a sidelong glare. "We've got a guest here who's got a lot of words he's going to eat for us...." Ten minutes later they rode the elevator down to the transit levels and boarded the little shuttle car in the terminal below the Hoffman Center. They sat in silence as the car dipped down into the rapid-transit channels beneath the great city, swinging northward in the express circuit through Philadelphia and Camden sectors, surfacing briefly in Trenton sector, then dropping underground once again for the long pull beneath Newark, Manhattan and Westchester sectors. In less than twenty minutes the car surfaced on a Parkway channel and buzzed north and east through the verdant Connecticut countryside. "What about Tommy?" Lessing asked Dorffman as the car sped along through the afternoon sun. "I just finished the prelims. He's not cooperating." Lessing ground his teeth. "I should be running him now instead of beating the bushes with this—" He broke off to glare at young Melrose. Melrose grinned. "I've heard you have quite a place up here." 
"It's—unconventional, at any rate," Lessing snapped. "Well, that depends on your standards. Sounds like a country day school, from what I've heard. According to your papers, you've even used conventional statistical analysis on your data from up here." "Until we had to throw it out. We discovered that what we were trying to measure didn't make sense in a statistical analysis." "Of course, you're sure you were measuring something ." "Oh, yes. We certainly were." "Yet you said that you didn't know what." "That's right," said Lessing. "We don't." "And you don't know why your instruments measure whatever they're measuring." The Chicago man's face was thoughtful. "In fact, you can't really be certain that your instruments are measuring the children at all. It's not inconceivable that the children might be measuring the instruments , eh?" Lessing blinked. "It's conceivable." "Mmmm," said Melrose. "Sounds like a real firm foundation to build a theory on." "Why not?" Lessing growled. "It wouldn't be the first time the tail wagged the dog. The psychiatrists never would have gotten out of their rut if somebody hadn't gotten smart and realized that one of their new drugs worked better in combatting schizophrenia when the doctor took the medicine instead of the patient. That was quite a wall to climb." "Yes, wasn't it," mused Melrose, scratching his bony jaw. "Only took them seventy years to climb it, thanks to a certain man's theories. I wonder how long it'll take psionics to crawl out of the pit you're digging for it?" "We're not digging any pit," Lessing exploded angrily. "We're exploring—nothing more. A phenomenon exists. We've known that, one way or another, for centuries. The fact that it doesn't seem to be bound by the same sort of natural law we've observed elsewhere doesn't mean that it isn't governed by natural law. But how can we define the law? How can we define the limits of the phenomenon, for that matter? We can't work in the dark forever—we've got to have a working hypothesis to guide us." "So you dreamed up this 'tadpole' idea," said Melrose sourly. "For a working hypothesis—yes. We've known for a long time that every human being has extrasensory potential to one degree or another. Not just a few here and there—every single one. It's a differentiating quality of the human mind. Just as the ability to think logically in a crisis instead of giving way to panic is a differentiating quality." "Fine," said Melrose. "Great. We can't prove that, of course, but I'll play along." Lessing glared at him. "When we began studying this psi-potential, we found out some curious things. For one thing, it seemed to be immensely more powerful and active in infants and children than in adults. Somewhere along the line as a child grows up, something happens. We don't know what. We do know that the child's psi-potential gradually withdraws deeper and deeper into his mind, burying itself farther and farther out of reach, just the way a tadpole's tail is absorbed deeper and deeper into the growing frog until there just isn't any tail any more." Lessing paused, packing tobacco into his pipe. "That's why we have the Farm—to try to discover why. What forces that potential underground? What buries it so deeply that adult human beings can't get at it any more?" "And you think you have an answer," said Melrose. "We think we might be near an answer. We have a theory that explains the available data." The shuttle car bounced sharply as it left the highway automatics. Dorffman took the controls. 
In a few moments they were skimming through the high white gates of the Farm, slowing down at the entrance to a long, low building. "All right, young man—come along," said Lessing. "I think we can show you our answer." In the main office building they donned the close-fitting psionic monitors required of all personnel at the Farm. They were of a hard grey plastic material, with a network of wiring buried in the substance, connected to a simple pocket-sized power source. "The major problem," Lessing said, "has been to shield the children from any external psionic stimuli, except those we wished to expose them to. Our goal is a perfectly controlled psi environment. The monitors are quite effective—a simple Renwick scrambler screen." "It blocks off all types of psi activity?" asked Melrose. "As far as we can measure, yes." "Which may not be very far." Jack Dorffman burst in: "What Dr. Lessing is saying is that they seem effective for our purposes." "But you don't know why," added Melrose. "All right, we don't know why. Nobody knows why a Renwick screen works—why blame us?" They were walking down the main corridor and out through an open areaway. Behind the buildings was a broad playground. A baseball game was in progress in one corner; across the field a group of swings, slides, ring bars and other playground paraphernalia was in heavy use. The place was teeming with youngsters, all shouting in a fury of busy activity. Occasionally a helmeted supervisor hurried by; one waved to them as she rescued a four-year-old from the parallel bars. They crossed into the next building, where classes were in progress. "Some of our children are here only briefly," Lessing explained as they walked along, "and some have been here for years. We maintain a top-ranking curriculum—your idea of a 'country day school' wasn't so far afield at that—with scholarships supported by Hoffman Center funds. Other children come to us—foundlings, desertees, children from broken homes, children of all ages from infancy on. Sometimes they stay until they have reached college age, or go on to jobs. As far as psionics research is concerned, we are not trying to be teachers. We are strictly observers. We try to place the youngsters in positions where they can develope what potential they have— without the presence of external psionic influences they would normally be subject to. The results have been remarkable." He led them into a long, narrow room with chairs and ash trays, facing a wide grey glass wall. The room fell into darkness, and through the grey glass they could see three children, about four years old, playing in a large room. "They're perfectly insulated from us," said Lessing. "A variety of recording instruments are working. And before you ask, Dr. Melrose, they are all empirical instruments, and they would all defy any engineer's attempts to determine what makes them go. We don't know what makes them go, and we don't care—they go. That's all we need. Like that one, for instance—" In the corner a flat screen was flickering, emitting a pale green fluorescent light. It hung from the wall by two plastic rods which penetrated into the children's room. There was no sign of a switch, nor a power source. As the children moved about, the screen flickered. Below it, a recording-tape clicked along in little spurts and starts of activity. "What are they doing?" Melrose asked after watching the children a few moments. "Those three seem to work as a team, somehow. 
Each one, individually, had a fairly constant recordable psi potential of about seventeen on the arbitrary scale we find useful here. Any two of them scale in at thirty-four to thirty-six. Put the three together and they operate somewhere in the neighborhood of six hundred on the same scale." Lessing smiled. "This is an isolated phenomenon—it doesn't hold for any other three children on the Farm. Nor did we make any effort to place them together—they drew each other like magnets. One of our workers spent two weeks trying to find out why the instruments weren't right. It wasn't the instruments, of course." Lessing nodded to an attendant, and peered around at Melrose. "Now, I want you to watch this very closely." He opened a door and walked into the room with the children. The fluorescent screen continued to flicker as the children ran to Lessing. He inspected the block tower they were building, and stooped down to talk to them, his lips moving soundlessly behind the observation wall. The children laughed and jabbered, apparently intrigued by the game he was proposing. He walked to the table and tapped the bottom block in the tower with his thumb. The tower quivered, and the screen blazed out with green light, but the tower stood. Carefully Lessing jogged all the foundation blocks out of place until the tower hung in midair, clearly unsupported. The children watched it closely, and the foundation blocks inched still further out of place.... Then, quite casually, Lessing lifted off his monitor. The children continued staring at the tower as the screen gave three or four violent bursts of green fire and went dark. The block tower fell with a crash. Moments later Lessing was back in the observation room, leaving the children busily putting the tower back together. There was a little smile on his lips as he saw Melrose's face. "Perhaps you're beginning to see what I'm driving at," he said slowly. "Yes," said Melrose. "I think I'm beginning to see." He scratched his jaw. "You think that it's adult psi-contact that drives the child's potential underground—that somehow adult contact acts like a damper, a sort of colossal candle-snuffer." "That's what I think," said Lessing. "How do you know those children didn't make you take off your monitor?" Lessing blinked. "Why should they?" "Maybe they enjoy the crash when the blocks fall down." "But that wouldn't make any difference, would it? The blocks still fall down." Melrose paced down the narrow room. "This is very good," he said suddenly, his voice earnest. "You have fine facilities here, good workers. And in spite of my flippancy, Dr. Lessing, I have never imagined for a moment that you were not an acute observer and a careful, highly imaginative worker. But suppose I told you, in perfect faith, that we have data that flatly contradicts everything you've told me today. Reproducible data, utterly incompatable with yours. What would you say to that?" "I'd say you were wrong," said Lessing. "You couldn't have such data. According to the things I am certain are true, what you're saying is sheer nonsense." "And you'd express that opinion in a professional meeting?" "I would." "And as an Authority on psionic behavior patterns," said Melrose slowly, "you would kill us then and there. You would strangle us professionally, discredit anything we did, cut us off cold." The tall man turned on him fiercely. "Are you blind, man? Can't you see what danger you're in? 
If you publish your book now, you will become an Authority in a field where the most devastating thing that could possibly happen would be— the appearance of an Authority ." Lessing and Dorffman rode back to the Hoffman Center in grim silence. At first Lessing pretended to work; finally he snapped off the tape recorder in disgust and stared out the shuttle-car window. Melrose had gone on to Idlewild to catch a jet back to Chicago. It was a relief to see him go, Lessing thought, and tried to force the thin, angry man firmly out of his mind. But somehow Melrose wouldn't force. "Stop worrying about it," Dorffman urged. "He's a crackpot. He's crawled way out on a limb, and now he's afraid your theory is going to cut it off under him. Well, that's his worry, not yours." Dorffman's face was intense. "Scientifically, you're on unshakeable ground. Every great researcher has people like Melrose sniping at him. You just have to throw them off and keep going." Lessing shook his head. "Maybe. But this field of work is different from any other, Jack. It doesn't follow the rules. Maybe scientific grounds aren't right at all, in this case." Dorffman snorted. "Surely there's nothing wrong with theorizing—" "He wasn't objecting to the theory. He's afraid of what happens after the theory." "So it seems. But why?" "Have you ever considered what makes a man an Authority?" "He knows more about his field than anybody else does." "He seems to, you mean. And therefore, anything he says about it carries more weight than what anybody else says. Other workers follow his lead. He developes ideas, formulates theories—and then defends them for all he's worth ." "But why shouldn't he?" "Because a man can't fight for his life and reputation and still keep his objectivity," said Lessing. "And what if he just happens to be wrong? Once he's an Authority the question of what's right and what's wrong gets lost in the shuffle. It's what he says that counts." "But we know you're right," Dorffman protested. "Do we?" "Of course we do! Look at our work! Look at what we've seen on the Farm." "Yes, I know." Lessing's voice was weary. "But first I think we'd better look at Tommy Gilman, and the quicker we look, the better—" A nurse greeted them as they stepped off the elevator. "We called you at the Farm, but you'd already left. The boy—" She broke off helplessly. "He's sick, Doctor. He's sicker than we ever imagined." "What happened?" "Nothing exactly—happened. I don't quite know how to describe it." She hurried them down the corridor and opened a door into a large children's playroom. "See what you think." The boy sat stolidly in the corner of the room. He looked up as they came in, but there was no flicker of recognition or pleasure on his pale face. The monitor helmet was still on his head. He just sat there, gripping a toy fire engine tightly in his hands. Lessing crossed the room swiftly. "Tommy," he said. The boy didn't even look at him. He stared stupidly at the fire engine. "Tommy!" Lessing reached out for the toy. The boy drew back in terror, clutching it to his chest. "Go away," he choked. "Go away, go away—" When Lessing persisted the boy bent over swiftly and bit him hard on the hand. Lessing sat down on the table. "Tommy, listen to me." His voice was gentle. "I won't try to take it again. I promise." "Go away." "Do you know who I am?" Tommy's eyes shifted haltingly to Lessing's face. He nodded. "Go away." "Why are you afraid, Tommy?" "I hurt. My head hurts. I hurt all over. Go away." "Why do you hurt?" 
"I—can't get it—off," the boy said. The monitor , Lessing thought suddenly. Something had suddenly gone horribly wrong—could the boy really be sensing the source of the trouble? Lessing felt a cold knot gather in the pit of his stomach. He knew what happened when adult psi-contact struck a psi-high youngster's mind. He had seen it a hundred times at the Farm. But even more—he had felt it in his own mind, bursting from the child. Like a violent physical blow, the hate and fear and suspicion and cruelty buried and repressed in the adult mind, crushing suddenly into the raw receptors of the child's mind like a smothering fog—it was a fearful thing. A healthy youngster could survive it, even though the scar remained. But this youngster was sick— And yet an animal instinctively seeks its own protection . With trembling fingers Lessing reached out and opened the baffle-snap on the monitor. "Take it off, Tommy," he whispered. The boy blinked in amazement, and pulled the grey helmet from his head. Lessing felt the familiar prickly feeling run down his scalp as the boy stared at him. He could feel deep in his own mind the cold chill of terror radiating from the boy. Then, suddenly, it began to fade. A sense of warmth—peace and security and comfort—swept in as the fear faded from the boy's face. The fire engine clattered to the floor. They analyzed the tapes later, punching the data cards with greatest care, filing them through the machines for the basic processing and classification that all their data underwent. It was late that night when they had the report back in their hands. Dorffman stared at it angrily. "It's obviously wrong," he grated. "It doesn't fit. Dave, it doesn't agree with anything we've observed before. There must be an error." "Of course," said Lessing. "According to the theory. The theory says that adult psi-contact is deadly to the growing child. It smothers their potential through repeated contact until it dries up completely. We've proved that, haven't we? Time after time. Everything goes according to the theory—except Tommy. But Tommy's psi-potential was drying up there on the Farm, until the distortion was threatening the balance of his mind. Then he made an adult contact, and we saw how he bloomed." Lessing sank down to his desk wearily. "What are we going to do, Jack? Formulate a separate theory for Tommy?" "Of course not," said Dorffman. "The instruments were wrong. Somehow we misread the data—" "Didn't you see his face ?" Lessing burst out. "Didn't you see how he acted ? What do you want with an instrument reading?" He shook his head. "It's no good, Jack. Something different happened here, something we'd never counted on. It's something the theory just doesn't allow for." They sat silently for a while. Then Dorffman said: "What are you going to do?" "I don't know," said Lessing. "Maybe when we fell into this bramble bush we blinded ourselves with the urge to classify—to line everything up in neat rows like pins in a paper. Maybe we were so blind we missed the path altogether." "But the book is due! The Conference speech—" "I think we'll make some changes in the book," Lessing said slowly. "It'll be costly—but it might even be fun. It's a pretty dry, logical presentation of ideas, as it stands. Very austere and authoritarian. But a few revisions could change all that—" He rubbed his hands together thoughtfully. "How about it, Jack? Do we have nerve enough to be laughed at? Do you think we could stand a little discredit, making silly asses of ourselves? 
Because when I finish this book, we'll be laughed out of existence. There won't be any Authority in psionics for a while—and maybe that way one of the lads who's really sniffing out the trail will get somebody to listen to him! "Get a pad, get a pencil! We've got work to do. And when we finish, I think we'll send a carbon copy out Chicago way. Might even persuade that puppy out there to come here and work for me—"
A. The helmets block external psionic forces.
What did Judy fall asleep on the summer when she was fourteen? A. A hammock B. A flying carpet C. A wagon D. A car
The Haunted Fountain CHAPTER I An Unsolved Mystery “Tell Judy about it,” begged Lois. “Please, Lorraine, it can’t be as bad as it appears. There isn’t anything that Judy can’t solve.” Lorraine tilted her head disdainfully. “We’re sisters now. We’re both Farringdon-Petts and should be loyal to each other. But you always did take Judy’s part. She was the one who nearly spoiled our double wedding trying to solve a mystery. I don’t believe she’d understand—understand any better than I do. Everyone has problems, and I’m sure Judy is no exception.” “You’re right, Lorraine,” announced Judy, coming in to serve dessert to the two friends she had invited for lunch at Peter’s suggestion. “I do have problems, and there are plenty of mysteries I can’t solve.” “Name one,” charged Lois. “Just mention one single spooky thing you couldn’t explain, and I’ll believe you. I’ve seen you in action, Judy Bolton—” “Judy Dobbs, remember?” “Well, you were Judy Bolton when you solved all those mysteries. I met you when the whole valley below the big Roulsville dam was threatened by flood and you solved that—” “That,” declared Judy, “was my brother Horace, not me. He was the hero without even meaning to be. He was the one who rode through town and warned people that the flood was coming. I was off chasing a shadow.” “A vanishing shadow,” Lois said with a sigh. “What you did wasn’t easy, Judy.” “It didn’t need to be as hard as it was,” Judy confessed. “I know now that keeping that promise not to talk about the dam was a great big mistake and could have cost lives. I should have told Arthur.” “Please,” Lorraine said, a pained expression clouding her pretty face, “let’s not talk about him now.” “Very well,” Judy agreed. “What shall we talk about?” “You,” Lois said, “and all the mysteries you’ve solved. Maybe you were mistaken about a thing or two before the flood, but what about the haunted house you moved into? You were the one who tracked down the ghosts in the attic and the cellar and goodness knows where all. You’ve been chasing ghosts ever since I met you, and not one of them did you fail to explain in some sensible, logical fashion.” “Before I met you,” Judy said, thinking back, “there were plenty of them I couldn’t explain. There was one I used to call the spirit of the fountain, but what she was or how she spoke to me is more than I know. If my grandparents knew, they weren’t telling. And now they’re both dead and I can’t ask them. They left me a lot of unsolved mysteries along with this house. Maybe I’ll find the answers to some of them when I finish sorting Grandma’s things. They’re stored in one end of the attic.” “Another haunted attic? How thrilling!” exclaimed Lois. “Why don’t you have another ghost party and show up the spooks?” “I didn’t say the attic was haunted.” Judy was almost sorry she had mentioned it. She wasn’t in the mood for digging up old mysteries, but Lois and Lorraine insisted. It all began, she finally told them, the summer before they met. Horace had just started working on the paper. Judy remembered that it was Lorraine’s father, Richard Thornton Lee, who gave him his job with the Farringdon Daily Herald . He had turned in some interesting church news, convincing Mr. Lee that he had in him the makings of a good reporter. And so it was that he spent the summer Judy was remembering in Farringdon where the Farringdon-Petts had their turreted mansion, while she had to suffer the heat and loneliness of Dry Brook Hollow. 
Her thoughts were what had made it so hard, she confessed now as she reviewed everything that had happened. She just couldn’t help resenting the fact that her parents left her every summer while they went off on a vacation by themselves. What did they think she would do? “You’ll have plenty to read,” her father had told her. “I bought you six new books in that mystery series you like. When they’re finished there are plenty of short stories around. Your grandmother never throws anything away. She has magazines she’s saved since your mother was a girl. If you ask for them she’ll let you have the whole stack. I know how you love to read.” “I do, Dad, but if the magazines are that old—” Judy had stopped. She had seen her father’s tired eyes and had realized that a busy doctor needed a vacation much more than a schoolgirl who had too little to do. He and Judy’s mother usually went to the beach hotel where they had honeymooned. It was a precious memory. Every summer Dr. Bolton and his wife relived it. And every summer Judy went to stay with her grandmother Smeed, who scolded and fussed and tried to pretend she wasn’t glad to have her. “You here again?” she had greeted her that summer, and Judy hadn’t noticed her old eyes twinkling behind her glasses. “What do you propose to do with yourself this time?” “Read,” Judy had told her. “Mom and Dad say you have a whole stack of old magazines—” “In the attic. Go up and look them over if you can stand the heat.” Judy went, not to look over the old magazines so much as to escape to a place where she could have a good cry. It was the summer before her fifteenth birthday. In another year she would have outgrown her childish resentment of her parents’ vacation or be grown up enough to ask them to let her have a vacation of her own. In another year she would be summering among the beautiful Thousand Islands and solving a mystery to be known as the Ghost Parade . “A whole parade of ghosts,” Lois would be telling her, “and you solved everything.” But then she didn’t even know Lois. She had no idea so many thrilling adventures awaited her. There seemed to be nothing—nothing—and so the tears came and spilled over on one of the magazines. As Judy wiped it away she noticed that it had fallen on a picture of a fountain. “A fountain with tears for water. How strange!” she remembered saying aloud. Judy had never seen a real fountain. The thrill of walking up to the door of the palatial Farringdon-Pett mansion was still ahead of her. On the lawn a fountain still caught and held rainbows like those she was to see on her honeymoon at Niagara Falls. But all that was in the future. If anyone had told the freckled-faced, pigtailed girl that she would one day marry Peter Dobbs, she would have laughed in their faces. “That tease!” For then she knew Peter only as an older boy who used to tease her and call her carrot-top until one day she yelled back at him, “Carrot-tops are green and so are you!” Peter was to win Judy’s heart when he gave her a kitten and suggested the name Blackberry for him. The kitten was now a dignified family cat. But the summer Judy found the picture of a fountain and spilled tears on it she had no kitten. She had nothing, she confessed, not even a friend. It had helped to pretend the fountain in the picture was filled with all the tears lonely girls like herself had ever cried. “But that would make it enchanted!” she had suddenly exclaimed. “If I could find it I’d wish—” A step had sounded on the stairs. Judy remembered it distinctly. 
She had turned to see her grandmother and to hear her say in her usual abrupt fashion, “Enchanted fountain, indeed! If you let people know your wishes instead of muttering them to yourself, most of them aren’t so impossible.” “Were they?” asked Lois. She and Lorraine had listened to this much of what Judy was telling them without interruption. “That’s the unsolved mystery,” Judy replied. “There weren’t any of them impossible.” And she went on to tell them how, the very next day, her grandparents had taken her to a fountain exactly like the one in the picture. It was in the center of a deep, circular pool with steps leading up to it. Beside the steps were smaller fountains with the water spurting from the mouths of stone lions. Judy had stared at them a moment and then climbed the steps to the pool. “Am I dreaming?” she remembered saying aloud. “Is this beautiful fountain real?” A voice had answered, although she could see no one. “Make your wishes, Judy. Wish wisely. If you shed a tear in the fountain your wishes will surely come true.” “A tear?” Judy had asked. “How can I shed a tear when I’m happy? This is a wonderful place.” “Shed a tear in the fountain and your wishes will surely come true,” the voice had repeated. “But what is there to cry about?” “You found plenty to cry about back at your grandmother’s house,” the mysterious voice had reminded her. “Weren’t you crying on my picture up there in the attic?” “Then you—you are the fountain!” Judy remembered exclaiming. “But a fountain doesn’t speak. It doesn’t have a voice.” “Wish wisely,” the voice from the fountain had said in a mysterious whisper. CHAPTER II If Wishes Came True “Did you?” Lois interrupted the story to ask excitedly. “Oh, Judy! Don’t keep us in suspense any longer. What did you wish?” “Patience,” Judy said with a smile. “I’m coming to that.” First, she told her friends, she had to think of a wise wish. There had been so much she wanted in those early days before the flood. Dora Scott had been her best friend in Roulsville, but she had moved away. “You see,” she explained, “I made the mistake of having just one best friend. There wasn’t anybody in Dry Brook Hollow. I remember thinking of how lonely I was and how I wished for a friend or a sister, and suddenly a tear splashed in the water. It made little ripples. I thought I had to wish quickly before they vanished, and so I began naming the things I wanted as fast as I could. I’m not sure they were wise wishes. They seem rather selfish to me, now. I wasn’t thinking of anybody but me, Judy Bolton, and what I wanted. It wasn’t until after I began to think of others that my wishes started to come true.” “But what were they?” Lois insisted. Lorraine seemed unusually quiet and thoughtful. Judy did not notice the fear in her eyes as she replied airily, “Oh, didn’t I tell you? I wished for lots of friends and a sister, and I wished I could marry a G-man and solve a lot of mysteries and that’s as far as I got when the ripples vanished. I thought the spell was broken and so I didn’t wish for anything more.” “Wasn’t there anything more you wanted?” Lois asked. “Of course,” replied Judy. “There were lots more things. I wanted to go places, of course, and keep pets, and have a nice home, and—” “And your wishes all came true!” “Every one of them,” Judy agreed, “even the one about the sister. You see, it wasn’t a baby sister I wanted. It was a sister near my own age. 
That seemed impossible at the time, but the future did hold a sister for me.” “It held one for me, too,” Lois said, squeezing Lorraine’s hand under the table. “Don’t you think sisters should tell each other their problems, Judy?” “Honey and I always do,” she replied “but then it was different. I didn’t know I would marry Peter or that he would become a G-man, and he didn’t know he had a sister. It is strange, isn’t it? But the strangest thing of all was the fountain itself.” “Why?” asked Lorraine. “Do you still think it was enchanted?” Lois laughed at this, but Judy was serious as she answered, “I was still little girl enough to think so at the time. I wandered around, growing very drowsy. Then I found a hammock and climbed into it. I must have gone to sleep, because I remember waking up and wondering if the voice in the fountain had been a dream.” “A hammock?” Lois questioned. “Are you sure it wasn’t a flying carpet?” “No, it was a hammock all right,” Judy assured her, laughing. “It was hung between two trees in a beautiful garden all enclosed in rose trellises thick with roses. Did I tell you it was June?” “All the year around?” Again Lois laughed. But Lorraine said abruptly, “Let’s not talk about rose gardens in June. It’s a long way from June to December.” “Do you mean a garden changes? I know,” Judy said, “but I think this one would be beautiful at any time of the year. There were rhododendrons, too, and I don’t know how many different kinds of evergreens. I explored the garden all around the fountain.” “And then what happened?” Lorraine urged her. “Yes, yes. Go on,” entreated Lois. “I didn’t dream you’d kept anything that exciting a secret. Why didn’t you try to solve the mystery?” “I think I would have tried,” Judy admitted, “if I had been older or more experienced. I really should have investigated it more thoroughly and learned the secret of the fountain. But after the ripples went away it didn’t speak to me any more, and I didn’t really think it had heard my wishes. I was still wishing for a friend when I met you, Lois. It did seem impossible for us to be friends at first, didn’t it? Lorraine was your friend.” “I did make trouble for you,” Lorraine remembered. “It was all because of my foolish jealousy.” “It was nothing compared to the trouble caused by the Roulsville flood,” declared Judy. “After that things started happening so fast that I completely forgot about the fountain. Honestly, Lois, I don’t believe I thought about it again until after we moved to Farringdon and I walked up to your door and saw the fountain on your lawn.” “The Farringdon-Pett puddle, I always called it,” Lois said with a giggle. “I’ve seen lots nicer fountains.” “You have?” asked Judy. “Then maybe you’ve seen the one I’ve been telling you about. I think the picture of it is still in the attic. Come on up and I’ll show you.” Lois and Lorraine had finished their dessert while Judy was telling them the story of the fountain. Somehow, she wasn’t hungry for hers. She had tasted it too often while she was making it. “I’ll leave it for Blackberry,” she decided. Lois watched in amusement as the cat lapped up the chocolate pudding after Judy had mixed it generously with cream. “Sometimes,” Judy said fondly, “Blackberry thinks he’s a person. He eats everything we eat, including lettuce. Do you mind if he comes with us, Lorraine? He wants to explore the attic, too.” “He’ll remember he’s a cat fast enough if there are any mice up there,” Lois said with a giggle. 
Leaving the table, they all started upstairs with the cat bounding ahead of them. In modernizing her grandparents’ house to suit her own and Peter’s tastes, Judy had seen to it that the old stair door was removed. But there was still a door closing off the narrower stairs that led to the attic. Blackberry reached it first and yowled for Judy to open it. “He can read my mind. He always knows where I’m going,” Judy said as the door creaked open and the cat shot through it. A moment later a weird rolling noise came from the floor above. “Come on. There’s nothing up here to be afraid of,” Judy urged her friends. “Maybe not, but I’m beginning to get the shivers,” confessed Lois as she followed Judy to the sewing room at the top of the last flight of stairs. “So am I,” Lorraine admitted. “I’m not superstitious about black cats, but they are creepy. Does Blackberry have to roll spools across the floor?” “Now he thinks he’s a kitten,” laughed Judy. Pausing at still another door that led to the darker part of the attic, she turned and said mysteriously, “Up here we can all turn back the clock. Does anybody care to explore the past?” The exploration began enthusiastically with Judy relating still more of what she remembered about the fountain. “When I told Grandma about it she laughed and said I must have dreamed it. She said if wishes came true that easily she’d be living in a castle. But would she?” Judy wondered. “When I first remember this house she was still burning kerosene lamps like those you see on that high shelf by the window. I think she and Grandpa like the way they lived without any modern conveniences or anything.” “I think so, too,” Lois agreed, looking around the old attic with a shiver. “It is strange they both died the same winter, isn’t it?” “Maybe they wanted it that way. Maybe they wished neither of them would outlive the other. If they did wish in the fountain,” Judy went on more thoughtfully, “I’m sure that was one of their wishes. Another could have been to keep the good old days, as Grandma used to call them. That one came true in a way. They did manage to keep a little of the past when they kept all these old things. That’s what I meant about turning back the clock.” “If wishes came true I’d like to turn it back a little myself,” Lorraine began. “It would be nice if things were the way they used to be when I trusted Arthur—” “Don’t you trust him now?” Judy asked. Afterwards she was sorry for the interruption. Lois and Judy both questioned Lorraine, but that was all she would say. Judy wondered, as they searched through the old magazines, what was wrong. Lorraine was of a jealous disposition. Was the green-eyed monster coming between her and her handsome husband, Arthur Farringdon-Pett? Until now they had seemed blissfully happy. But there was no happiness in Lorraine’s face as she gazed at a picture of one of the fountains and then said in a tight little voice, “It is. It’s the very same one.” “But that’s the picture I’ve been searching for!” Judy said eagerly. “Do you know where it is?” “I can’t be sure. But if it ever was enchanted, I’m sure it isn’t now. Let’s go,” Lorraine said suddenly to Lois. Judy knew she was suggesting a fast trip home. But, apparently, Lois did not understand it that way. If she did, she pretended not to. “Where?” she asked. “To the fountain? I’d love to, wouldn’t you, Judy?” “I certainly would,” Judy replied enthusiastically. “Do you recognize it, too?” “I think so,” Lois answered after studying a little more closely the picture they had found. 
“It looks like the fountain on the Brandt estate.” “The department store Brandts?” Judy questioned. “Then my grandparents must have driven old Fanny all the way to Farringdon.” “Not quite all the way,” Lorraine objected. “The Brandts own that stretch of woods just before you come into the city. You’ve passed it lots of times.” “Of course,” agreed Judy. She put the magazine back in its place under the eaves and turned eagerly to her friends. “I do remember a road turning off into the woods and going on uphill,” she told them. “I never thought it led to a house, though. There isn’t even a gate. Could that be the road my grandparents took?” “Why don’t we take it ourselves and find out?” Lois suggested. CHAPTER III A Strange Encounter Lorraine was not too enthusiastic about the proposed trip to the Brandt estate. Finally she agreed to it under one condition. They were not to drive all the way to the house which, she said, was just over the hilltop. They were to park the car where no one would see it and follow the path to the fountain. “But suppose we can’t find the path?” asked Judy. “You’ll remember it, won’t you?” Judy thought she would, but she wasn’t too sure. She and Lois both argued that it would be better to inquire at the house. Lois knew Helen Brandt slightly. “She’d be glad to show us around. This way it looks as if we’re planning a crime,” Lois said as they started off in the blue car she was driving. It was a neat little car, not too conspicuous, and easy to park in out-of-the-way places. Judy laughed and said if they did find the fountain she thought she’d wish for one exactly like it. “Well, you know what your grandmother said about wishes, don’t you?” Lorraine asked. “If you let people know about them instead of muttering them to yourself most of them aren’t so impossible.” “Quite true,” Judy agreed. “I’ll let Peter know about this one. He’s my Santa Claus, and it will soon be Christmas. Maybe I should have worn the fur coat he gave me last year.” “Your reversible’s better in case it rains. It’s too warm for snow. We picked a perfect day for this trip,” Lois continued, guiding the car around curves as it climbed the steep hill beyond Dry Brook Hollow. The trip was a short one. In twenty minutes they had covered the distance that had seemed such a long way to Judy when she was riding in her grandfather’s wagon. “I’ve been thinking about it,” she said, “and I’ve just about figured out how it happened. I didn’t think my grandparents knew the Brandts well enough to pay them a visit, though. We must have looked queer driving up to a beautiful estate in Grandpa’s old farm wagon. I do remember that Grandma had some hooked rugs to deliver. But that still doesn’t explain what happened afterwards. When I woke up in the hammock I was alone in the garden. Horse, wagon, grandparents—all had disappeared.” “How could they?” asked Lois. “Anyway,” Lorraine began, “you had a chance to see how beautiful everything was before—” Again she broke off as if there were something she wanted to tell but didn’t quite dare. “Before what?” questioned Judy. “Oh, nothing. Forget I said anything about it. You were telling us how you woke up in the hammock, but you never did explain how you got back home,” Lorraine reminded her. “Didn’t I?” asked Judy. “I’d forgotten a lot of it, but it’s beginning to come back now. I do remember driving home along this road. You see, I thought my grandparents had left me in the garden for a surprise and would return for me. I told you I was all alone. 
There wasn’t a house in sight.” “The Brandt house is just over the top of this next hill,” Lois put in. “I know. You told me that. Now I know why I couldn’t see it. All I could see was a windowless old tower and a path leading in that direction. Naturally, I followed it. There’s something about a path in the woods that always tempts me.” “We know that, Judy. Honey told us all about your latest mystery. You followed a trail or something.” “Well, this trail led out of the rose garden where the hammock was and then through an archway,” Judy continued. “All sorts of little cupids and gnomes peered out at me from unexpected places. I was actually scared by the time I reached the old tower. There wasn’t time to explore it. Just then I heard the rumble of my grandfather’s wagon and knew he was driving off without me.” “He was!” Judy’s friends both chorused in surprise, and Lois asked, “Why would he do a thing like that?” “I think now it was just to tease me. He did stop and wait for me after a while,” Judy remembered. “The rugs were gone. Grandma must have delivered them, but I didn’t ask where. If she made them for Mrs. Brandt they may still be there.” “I wouldn’t depend on it,” Lorraine said as they turned up the narrow road to the Brandt estate. “Watch out!” Judy suddenly exclaimed. “There’s another car coming.” As Lois swerved to avoid the oncoming car, Lorraine ducked her head. She kept herself hidden behind Judy until the car had passed. The man driving it was a stranger to Judy, but she would remember his hypnotic, dark eyes and swarthy complexion for a long time. The soft brown hat he was wearing covered most of his hair. “What’s the matter with you two?” asked Lois when the car had passed. “Aren’t you a little old for playing hide and seek?” “I wasn’t—playing. Let’s not go up there,” Lorraine begged. “I don’t think the Brandts live there any more.” “Maybe not, but we can pretend we think they do, can’t we?” Judy replied a little uncertainly. She was beginning to suspect that Lorraine knew more about the Brandt estate than she was telling. Lois kept on driving along the narrow, gravelly road. Soon there were more evergreens and a hedge of rhododendrons to be seen. They looked very green next to the leafless trees in the woods beyond. The sky was gray with white clouds being driven across it by the wind. “There’s the tower!” Lorraine exclaimed. “I can see it over to the left. It looks like something out of Grimm’s Fairy Tales, doesn’t it?” “It looks grim all right,” agreed Judy. “I wonder what it is.” “I suppose it’s nothing but an old water tower. It would be fun to explore it, though,” Lois said. “But if there are new people living here they’ll never give us permission.” “We might explore it without permission,” Judy suggested daringly. “Come on!” she urged her friends as Lois parked the car in a cleared place beside the road. “Who’s going to stop us? And who wants to explore a gloomy old tower, anyway? Let’s look for the fountain.” “Do you think we should?” Lorraine asked. “It won’t be enchanted. I told you—” “You told us very little,” Lois reminded her. “If you know anything about the people who live here now, I think you ought to let us know. Otherwise, I’m afraid we won’t be very welcome.” “I don’t think they’ll welcome us, anyway. I do know who they are,” Lorraine admitted. “You remember Roger Banning from school, don’t you? I’ve seen him around here. His family must have acquired sudden wealth, or else he’s just working on the estate.” “Then you’ve been here lately? 
Why didn’t you tell me?” asked Lois. “We always used to go places together.” “It wasn’t important,” Lorraine replied evasively. “I was just out for a drive.” “You plutocrats!” laughed Judy. “Each with a car of your own. You’re not interested in Roger Banning, are you, Lois? I’m sure you can do better than that. I did know him slightly, but not from school. The boys and girls were separated and went to different high schools by the time we moved to Farringdon. I remember his pal, Dick Hartwell, a lot better. He was in our young people’s group at church.” “Sh!” Lois cautioned her. “Nice people no longer mention Dick Hartwell’s name. He’s doing time.” “For what?” asked Judy. Like Peter, her FBI husband, she preferred facts to gossip. “Forgery, I guess. He stole some checkbooks from his father’s desk and forged the names of a lot of important business people. I think he forged some legal documents, too. Anyway, he went to the Federal Penitentiary. It was all in the papers,” Lorraine told her. Now Judy did remember. It was something she would have preferred to forget. She liked to think she was a good judge of character, and she had taken Dick Hartwell for a quiet, refined boy who would never stoop to crime. “I don’t see what all this has to do with the fountain,” Lois said impatiently. “Are we going to look for it, or aren’t we?” “Of course we are. That’s what we came for. I just like to know what a tiger looks like before he springs at me,” Judy explained. “You seem to think there’s danger in this expedition of ours, don’t you?” asked Lorraine. “I don’t know what to think. You’re the one who seems to know the answers, but you’re not telling. Hiding your face back there gave you away. You’ve seen that character who drove down this road and, for some reason, you were afraid he would see you. Why, Lorraine? Why didn’t you want to be recognized?” Lorraine hesitated a moment and then replied evasively, “People don’t generally enter private estates without an invitation. That’s all.” “I’d better turn the car around,” Lois decided, “in case we have to leave in a hurry. I don’t expect we’ll encounter any tigers, but we may be accused of trespassing.” “I’m sure we will be,” announced Judy as two dark-coated figures strode down the road toward them. “You drove right by a NO TRESPASSING sign, and this isn’t a welcoming committee coming to meet us!”
A. A hammock
Who are the experts?
### Introduction Over the past few years, microblogs have become one of the most popular online social networks. Microblogging websites have evolved to become a source of varied kinds of information. This is due to the nature of microblogs: people post real-time messages about their opinions and express sentiment on a variety of topics, discuss current issues, complain, etc. Twitter is one such popular microblogging service where users create status messages (called “tweets"). With over 400 million tweets per day on Twitter, microblog users generate large amount of data, which cover very rich topics ranging from politics, sports to celebrity gossip. Because the user generated content on microblogs covers rich topics and expresses sentiment/opinions of the mass, mining and analyzing this information can prove to be very beneficial both to the industrial and the academic community. Tweet classification has attracted considerable attention because it has become very important to analyze peoples' sentiments and opinions over social networks. Most of the current work on analysis of tweets is focused on sentiment analysis BIBREF0, BIBREF1, BIBREF2. Not much has been done on predicting outcomes of events based on the tweet sentiments, for example, predicting winners of presidential debates based on the tweets by analyzing the users' sentiments. This is possible intuitively because the sentiment of the users in their tweets towards the candidates is proportional to the performance of the candidates in the debate. In this paper, we analyze three such events: 1) US Presidential Debates 2015-16, 2) Grammy Awards 2013, and 3) Super Bowl 2013. The main focus is on the analysis of the presidential debates. For the Grammys and the Super Bowl, we just perform sentiment analysis and try to predict the outcomes in the process. For the debates, in addition to the analysis done for the Grammys and Super Bowl, we also perform a trend analysis. Our analysis of the tweets for the debates is 3-fold as shown below. Sentiment: Perform a sentiment analysis on the debates. This involves: building a machine learning model which learns the sentiment-candidate pair (candidate is the one to whom the tweet is being directed) from the training data and then using this model to predict the sentiment-candidate pairs of new tweets. Predicting Outcome: Here, after predicting the sentiment-candidate pairs on the new data, we predict the winner of the debates based on the sentiments of the users. Trends: Here, we analyze certain trends of the debates like the change in sentiments of the users towards the candidates over time (hours, days, months) and how the opinion of experts such as Washington Post affect the sentiments of the users. For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration. We test both single-label classifiers and multi-label ones on the problem and as intuition suggests, the multi-label classifier RaKel performs better. A combination of document-embedding features BIBREF3 and topic features (essentially the document-topic probabilities) BIBREF4 is shown to give the best results. These features make sense intuitively because the document-embedding features take context of the text into account, which is important for sentiment polarity classification, and topic features take into account the topic of the tweet (who/what is it about). The prediction of outcomes of debates is very interesting in our case. 
Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd. We do find out that the public sentiments are not always coincident with the views of the experts. In this case, it is interesting to check whether the views of the experts can affect the public, for example, by spreading through the social media microblogs such as Twitter. Hence, we also conduct experiments to compare the public sentiment before and after the experts' views become public and thus notice the impact of the experts' views on the public sentiment. In our analysis of the debates, we observe that in certain debates, such as the 5th Republican Debate, held on December 15, 2015, the opinions of the users vary from the experts. But we see the effect of the experts on the sentiment of the users by looking at their opinions of the same candidates the next day. Our contributions are mainly: we want to see how predictive the sentiment/opinion of the users are in social media microblogs and how it compares to that of the experts. In essence, we find that the crowd wisdom in the microblog domain matches that of the experts in most cases. There are cases, however, where they don't match but we observe that the crowd's sentiments are actually affected by the experts. This can be seen in our analysis of the presidential debates. The rest of the paper is organized as follows: in section SECREF2, we review some of the literature. In section SECREF3, we discuss the collection and preprocessing of the data. Section SECREF4 details the approach taken, along with the features and the machine learning methods used. Section SECREF7 discusses the results of the experiments conducted and lastly section SECREF8 ends with a conclusion on the results including certain limitations and scopes for improvement to work on in the future. ### Related Work Sentiment analysis as a Natural Language Processing task has been handled at many levels of granularity. Specifically on the microblog front, some of the early results on sentiment analysis are by BIBREF0, BIBREF1, BIBREF2, BIBREF5, BIBREF6. Go et al. BIBREF0 applied distant supervision to classify tweet sentiment by using emoticons as noisy labels. Kouloumpis et al. BIBREF7 exploited hashtags in tweets to build training data. Chenhao Tan et al. BIBREF8 determined user-level sentiments on particular topics with the help of the social network graph. There has been some work in event detection and extraction in microblogs as well. In BIBREF9, the authors describe a way to extract major life events of a user based on tweets that either congratulate/offer condolences. BIBREF10 build a key-word graph from the data and then detect communities in this graph (cluster) to find events. This works because words that describe similar events will form clusters. In BIBREF11, the authors use distant supervision to extract events. There has also been some work on event retrieval in microblogs BIBREF12. In BIBREF13, the authors detect time points in the twitter stream when an important event happens and then classify such events based on the sentiments they evoke using only non-textual features to do so. 
In BIBREF14, the authors study how much of the opinion extracted from Online Social Networks (OSN) user data is reflective of the opinion of the larger population. Researchers have also mined Twitter datasets to analyze public reaction to various events: from election debate performance BIBREF15, where the authors demonstrate visuals and metrics that can be used to detect sentiment pulse, anomalies in that pulse, and indications of controversial topics that can be used to inform the design of visual analytic systems for social media events, to movie box-office predictions on the release day BIBREF16. Mishne and Glance BIBREF17 correlate sentiments in blog posts with movie box-office scores. The correlations they observed for positive sentiments are fairly low and not sufficient to use for predictive purposes. Recently, several approaches involving machine learning and deep learning have also been used in the visual and language domains BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24. ### Data Set and Preprocessing ::: Data Collection Twitter is a social networking and microblogging service that allows users to post real-time messages, called tweets. Tweets are very short messages, a maximum of 140 characters in length. Due to such a restriction in length, people tend to use a lot of acronyms, shorten words, etc. In essence, the tweets are usually very noisy. There are several aspects to tweets such as: 1) Target: Users use the symbol “@” in their tweets to refer to other users on the microblog. 2) Hashtag: Hashtags are used by users to mark topics. This is done to increase the visibility of the tweets. We conduct experiments on 3 different datasets, as mentioned earlier: 1) US Presidential Debates, 2) Grammy Awards 2013, 3) Super Bowl 2013. To construct our presidential debates dataset, we have used the Twitter Search API to collect the tweets. Since there was no publicly available dataset for this, we had to collect the data manually. The data was collected on 10 different presidential debates: 7 Republican and 3 Democratic, from October 2015 to March 2016. Different hashtags like “#GOP, #GOPDebate” were used to filter out tweets specific to the debate. This is given in Table TABREF2. We extracted only English tweets for our dataset. We collected a total of 104961 tweets across all the debates. But there were some limitations with the API. Firstly, the server imposes a rate limit and discards tweets when the limit is reached. The second problem is that the API returns many duplicates. Thus, after removing the duplicates and irrelevant tweets, we were left with a total of 17304 tweets. This includes the tweets only on the day of the debate. We also collected tweets on the days following the debate. As for the other two datasets, we collected them from repositories available online. There were a total of 2580062 tweets for the Grammy Awards 2013, and a total of 2428391 tweets for the Super Bowl 2013. The statistics are given in Tables TABREF3 and TABREF3. The tweets for the Grammys were posted before and during the ceremony. However, we only use the tweets before the ceremony (after the nominations were announced), to predict the winners. As for the Super Bowl, the tweets collected were during the game. But we can predict interesting things like the Most Valuable Player from the tweets. The tweets for both of these datasets were annotated and thus did not require any human intervention. However, the tweets for the debates had to be annotated.
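As an aside, the hashtag filtering and de-duplication just described are easy to sketch. The snippet below is a hypothetical illustration rather than the authors' collection code: the hashtag list is a small, made-up subset of the ones in Table TABREF2, and the field names (lang, text, id) assume tweets stored as the JSON objects returned by the Search API.

```python
DEBATE_HASHTAGS = {"#gop", "#gopdebate"}  # illustrative subset of the hashtags in Table TABREF2

def is_relevant(tweet):
    """Keep English tweets that mention at least one debate hashtag."""
    if tweet.get("lang") != "en":
        return False
    return any(tag in tweet.get("text", "").lower() for tag in DEBATE_HASHTAGS)

def deduplicate(tweets):
    """Drop the duplicate copies the Search API returns, keyed on tweet id."""
    seen, unique = set(), []
    for tweet in tweets:
        key = tweet.get("id") or tweet.get("text")
        if key not in seen:
            seen.add(key)
            unique.append(tweet)
    return unique

def build_debate_dataset(raw_tweets):
    """Filter to relevant English tweets, then remove exact duplicates."""
    return deduplicate([t for t in raw_tweets if is_relevant(t)])
```

Any further text-level cleaning is deferred to the preprocessing filters described in the next subsection.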
Since we are using a supervised approach in this paper, we have all the tweets (for debates) in the training set human-annotated. The tweets were already annotated for the Grammys and Super Bowl. Some statistics about our datasets are presented in Tables TABREF3, TABREF3 and TABREF3. The annotations for the debate dataset consisted of 2 labels for each tweet: 1) Candidate: This is the candidate of the debate to whom the tweet refers, 2) Sentiment: This represents the sentiment of the tweet towards that candidate. This is either positive or negative. The task then becomes a case of multi-label classification. The candidate labels are not so trivial to obtain, because there are tweets that do not directly contain any candidate's name. For example, the tweets, “a business man for president??” and “a doctor might sure bring about a change in America!” are about Donald Trump and Ben Carson respectively. Thus, it is meaningful to have a multi-label task. The annotations for the other two datasets are similar, in that one of the labels was the sentiment and the other was category-dependent in the outcome-prediction task, as discussed in the sections below. For example, if we are trying to predict the "Album of the Year" winners for the Grammy dataset, the second label would be the nominees for that category (album of the year). ### Data Set and Preprocessing ::: Preprocessing As noted earlier, tweets are generally noisy and thus require some preprocessing before they can be used. Several filters were applied to the tweets, such as: (1) Usernames: Since users often include usernames in their tweets to direct their message, we simplify it by replacing the usernames with the token “USER”. For example, @michael will be replaced by USER. (2) URLs: In most of the tweets, users include links that add on to their text message. We convert/replace the link address to the token “URL”. (3) Repeated Letters: Oftentimes, users use repeated letters in a word to emphasize their point. For example, the word “lol” (which stands for “laugh out loud”) is sometimes written as “looooool” to emphasize the degree of funniness. We replace such repeated occurrences of letters (more than 2) with just 3 occurrences. We replace with 3 occurrences and not 2, so that we can distinguish the exaggerated usage from the regular ones. (4) Multiple Sentiments: Tweets which contain multiple sentiments are removed, such as "I hate Donald Trump, but I will vote for him". This is done so that there is no ambiguity. (5) Retweets: On Twitter, tweets of one user are often copied and posted by another user. This is known as retweeting, and such tweets are commonly abbreviated with “RT”. These are removed and only the original tweets are processed. (6) Repeated Tweets: The Twitter API sometimes returns a tweet multiple times. We remove such duplicates to avoid putting extra weight on any particular tweet. ### Methodology ::: Procedure Our analysis of the debates is 3-fold: sentiment analysis, outcome prediction, and trend analysis. Sentiment Analysis: To perform a sentiment analysis on the debates, we first extract all the features described below from all the tweets in the training data. We then build the different machine learning models described below on this set of features. After that, we evaluate the output produced by the models on unseen test data. The models essentially predict candidate-sentiment pairs for each tweet. Outcome Prediction: Predict the outcome of the debates.
After obtaining the sentiments on the test data for each tweet, we can compute the net normalized sentiment for each presidential candidate in the debate. This is done by looking at the number of positive and negative sentiments for each candidate. We then normalize the sentiment scores of each candidate to be in the same scale (0-1). After that, we rank the candidates based on the sentiment scores and predict the top $k$ as the winners. Trend Analysis: We also analyze certain trends of the debates. Firstly, we look at the change in sentiments of the users towards the candidates over time (hours, days, months). This is done by computing the sentiment scores for each candidate in each of the debates and seeing how they vary over time, across debates. Secondly, we examine the effect of the Washington Post on the views of the users. This is done by looking at the sentiments of the candidates (to predict winners) of a debate before and after the winners are announced by the experts in the Washington Post. This way, we can see if the Washington Post has had any effect on the sentiments of the users. Besides that, to study the behavior of the users, we also look at the correlation of the tweet volume with the number of viewers as well as the variation of tweet volume over time (hours, days, months) for debates. As for the Grammys and the Super Bowl, we only perform the sentiment analysis and predict the outcomes. ### Methodology ::: Machine Learning Models We compare 4 different models for performing our task of sentiment classification. We then pick the best performing model for the task of outcome prediction. Here, we have two categories of algorithms: single-label and multi-label (we already discussed earlier why it is meaningful to have a multi-label task), because one can represent $<$candidate, sentiment$>$ as a single class label or have candidate and sentiment as two separate labels. They are listed below: ### Methodology ::: Machine Learning Models ::: Single-label Classification Naive Bayes: We use a multinomial Naive Bayes model. A tweet $t$ is assigned a class $c^{*}$ such that $c^{*} = \arg \max _{c} P(c) \prod _{i=1}^{m} P(f_i|c)$, where there are $m$ features and $f_i$ represents the $i^{th}$ feature. Support Vector Machines: A Support Vector Machine (SVM) constructs a hyperplane or a set of hyperplanes in a high-dimensional space, which can then be used for classification. In our case, we use SVM with Sequential Minimal Optimization (SMO) BIBREF25, which is an algorithm for solving the quadratic programming (QP) problem that arises during the training of SVMs. Elman Recurrent Neural Network: Recurrent Neural Networks (RNNs) are gaining popularity and are being applied to a wide variety of problems. They are a class of artificial neural networks, where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. The Elman RNN was proposed by Jeff Elman in 1990 BIBREF26. We use this in our task. ### Methodology ::: Machine Learning Models ::: Multi-label Classification RAkEL (RAndom k labELsets): RAkEL BIBREF27 is a multi-label classification algorithm that uses the label powerset (LP) transformation: LP basically treats every label combination as a single class, and RAkEL then uses multiple LP classifiers, each trained on a random subset of the actual labels, for classification. ### Methodology ::: Feature Space In order to classify the tweets, a set of features is extracted from each of the tweets, such as n-grams, part-of-speech tags, etc.
The details of these features are given below: n-gram: This represents the frequency counts of n-grams, specifically that of unigrams and bigrams. punctuation: The number of occurrences of punctuation symbols such as commas, exclamation marks, etc. POS (part-of-speech): The frequency of each POS tag is used as a feature. prior polarity scoring: Here, we obtain the prior polarity of the words BIBREF6 using the Dictionary of Affect in Language (DAL) BIBREF28. This dictionary (DAL) of about 8000 English words assigns a pleasantness score to each word on a scale of 1-3. After normalizing, we can assign the words with polarity higher than $0.8$ as positive and less than $0.5$ as negative. If a word is not present in the dictionary, we look up its synonyms in WordNet: if such a synonym is in the dictionary, we assign the original word its synonym's score. Twitter-specific features: the number of hashtags ($\#$ symbol), the number of mentioned users ($@$ symbol), and the number of hyperlinks. Document embedding features: Here, we use the approach proposed by Mikolov et al. BIBREF3 to embed an entire tweet into a vector of features. Topic features: Here, LDA (Latent Dirichlet Allocation) BIBREF4 is used to extract topic-specific features for a tweet (document). These are basically the document-topic probabilities output by the model. In the following experiments, we use 1-$gram$, 2-$gram$ and $(1+2)$-$gram$ to denote unigram, bigram and a combination of unigram and bigram features respectively. We also combine punctuation and the other features as miscellaneous features and use $MISC$ to denote this. We represent the document-embedding features by $DOC$ and topic-specific features by $TOPIC$. ### Data Analysis In this section, we analyze the presidential debates data and show some trends. First, we look at the trend of the tweet frequency. Figure FIGREF21 shows the trends of the tweet frequency and the number of TV viewers as the debates progress over time. We observe from Figures FIGREF21 and FIGREF21 that for the first 5 debates considered, the trend of the number of TV viewers matches the trend of the number of tweets. However, we can see that towards the final debates, the frequency of the tweets decreases consistently. This shows the interesting fact that although people still watch the debates, the number of people who tweet about them is greatly reduced. But the tweeting community is mainly youngsters, and this suggests that most of the tweeting community, who actively tweet, didn't watch the later debates; if they did, the trends should ideally match. Next, we look at the tweeting activity on the days around the debate: specifically, on the day of the debate, the next day, and two days later. Figure FIGREF22 shows the trend of the tweet frequency around the day of the 5th Republican debate, i.e., December 15, 2015. As can be seen, on average people tweet more on the day of the debate than a day or two after it. This makes sense intuitively because the debate would be fresh in their heads. Then, we look at how people tweet in the hours of the debate: specifically during the debate, one hour after, and then two hours after. Figure FIGREF23 shows the trend of the tweet frequency around the hour of the 5th Republican debate, i.e., December 15, 2015. We notice that people don't tweet much during the debate but the activity drastically increases after two hours.
This might be because people were busy watching the debate and then taking some time to process things, so that they can give their opinion. We have seen the frequency of tweets over time in the previous trends. Now, we will look at how the sentiments of the candidates change over time. First, Figure FIGREF24 shows how the sentiments of the candidates changed across the debates. We find that many of the candidates have had ups and downs towards in the debates. But these trends are interesting in that, it gives some useful information about what went down in the debate that caused the sentiments to change (sometimes drastically). For example, if we look at the graph for Donald Trump, we see that his sentiment was at its lowest point during the debate held on December 15. Looking into the debate, we can easily see why this was the case. At a certain point in the debate, Trump was asked about his ideas for the nuclear triad. It is very important that a presidential candidate knows about this, but Trump had no idea what the nuclear triad was and, in a transparent attempt to cover his tracks, resorted to a “we need to be strong" speech. It can be due to this embarrassment that his sentiment went down during this debate. Next, we investigate how the sentiments of the users towards the candidates change before and after the debate. In essence, we examine how the debate and the results of the debates given by the experts affects the sentiment of the candidates. Figure FIGREF25 shows the sentiments of the users towards the candidate during the 5th Republican Debate, 15th December 2015. It can be seen that the sentiments of the users towards the candidates does indeed change over the course of two days. One particular example is that of Jeb Bush. It seems that the populace are generally prejudiced towards the candidates, which is reflected in their sentiments of the candidates on the day of the debate. The results of the Washington Post are released in the morning after the debate. One can see the winners suggested by the Washington Post in Table TABREF35. One of the winners in that debate according to them is Jeb Bush. Coincidentally, Figure FIGREF25 suggests that the sentiment of Bush has gone up one day after the debate (essentially, one day after the results given by the experts are out). There is some influence, for better or worse, of these experts on the minds of the users in the early debates, but towards the final debates the sentiments of the users are mostly unwavering, as can be seen in Figure FIGREF25. Figure FIGREF25 is on the last Republican debate, and suggests that the opinions of the users do not change much with time. Essentially the users have seen enough debates to make up their own minds and their sentiments are not easily wavered. ### Evaluation Metrics In this section, we define the different evaluation metrics that we use for different tasks. We have two tasks at hand: 1) Sentiment Analysis, 2) Outcome Prediction. We use different metrics for these two tasks. ### Evaluation Metrics ::: Sentiment Analysis In the study of sentiment analysis, we use “Hamming Loss” to evaluate the performance of the different methods. Hamming Loss, based on Hamming distance, takes into account the prediction error and the missing error, normalized over the total number of classes and total number of examples BIBREF29. The Hamming Loss is given below: where $|D|$ is the number of examples in the dataset and $|L|$ is the number of labels. 
$S_i$ and $Y_i$ denote the sets of true and predicted labels for instance $i$ respectively, and $\oplus $ stands for the XOR operation BIBREF30; that is, $\textit {HammingLoss} = \frac{1}{|D||L|}\sum _{i=1}^{|D|}|S_i \oplus Y_i|$. Intuitively, the performance is better when the Hamming Loss is smaller; 0 would be the ideal case. ### Evaluation Metrics ::: Outcome Prediction For the case of outcome prediction, we will have a predicted set and an actual set of results. Thus, we can use common information retrieval metrics to evaluate the prediction performance. Those metrics are listed below: Mean F-measure: F-measure takes into account both the precision and recall of the results. In essence, it takes into account how many of the relevant results are returned and also how many of the returned results are relevant: $\textit {Mean F-measure} = \frac{1}{|D|}\sum _{i=1}^{|D|}\frac{2 P_i R_i}{P_i + R_i}$, where $|D|$ is the number of queries (debates/categories for Grammy winners, etc.), and $P_i$ and $R_i$ are the precision and recall for the $i^{th}$ query. Mean Average Precision: As a standard metric used in information retrieval, Mean Average Precision for a set of queries is the mean of the average precision scores for each query: $\textit {MAP} = \frac{1}{|D|}\sum _{i=1}^{|D|}\frac{1}{|RD_i|}\sum _{k} P_i(k)\, rel_i(k)$, where $|D|$ is the number of queries (e.g., debates), $P_i(k)$ is the precision at $k$ ($P@k$) for the $i^{th}$ query, $rel_i(k)$ is an indicator function, equaling 1 if the document at position $k$ for the $i^{th}$ query is relevant, else 0, and $|RD_i|$ is the number of relevant documents for the $i^{th}$ query. ### Results ::: Sentiment Analysis We use 3 different datasets for the problem of sentiment analysis, as already mentioned. We test the four different algorithms mentioned in Section SECREF6, with different combinations of the features described in Section SECREF10. To evaluate our models, we use the “Hamming Loss” metric as discussed in Section SECREF6. We use this metric because our problem is in the multi-label classification domain. However, the single-label classifiers like SVM, Naive Bayes, and Elman RNN cannot be evaluated against this metric directly. To make this possible, we split the predicted labels into a format that is consistent with that of multi-label classifiers like RaKel. The results of the experiments for each of the datasets are given in Tables TABREF34, TABREF34 and TABREF34. In the table, $f_1$, $f_2$, $f_3$, $f_4$, $f_5$ and $f_6$ denote the features 1-$gram$, 2-$gram$, $(1+2)$-$gram$, $(1+2)$-$gram + MISC$, $DOC$ and $DOC + TOPIC$ respectively. Note that lower values of Hamming Loss are more desirable. We find that RaKel performs the best out of all the algorithms. RaKel is more suited for the task because our task is a multi-label classification. Among all the single-label classifiers, SVM performs the best. We also observe that as we use more complex feature spaces, the performance increases. This is true for almost all of the algorithms listed. Our best performing feature set is a combination of paragraph-embedding features and topic features from LDA. This makes sense intuitively because paragraph embedding takes into account the context in the text. Context is very important in determining the sentiment of a short text: having negative words in the text does not always mean that the text contains a negative sentiment. For example, the sentence “never say never is not a bad thing” has many negative words; but the sentence as a whole does not have a negative sentiment. This is why we need some kind of context information to accurately determine the sentiment. Thus, with these embedded features, one would be able to better determine the polarity of the sentence. The other label is the entity (candidate/song etc.)
in consideration. Topic features here make sense because this can be considered as the topic of the tweet in some sense. This can be done because that label captures what or whom the tweet is about. ### Results ::: Results for Outcome Prediction In this section, we show the results for the outcome-prediction of the events. RaKel, as the best performing method, is trained to predict the sentiment-labels for the unlabeled data. The sentiment labels are then used to determine the outcome of the events. In the Tables (TABREF35, TABREF36, TABREF37) of outputs given, we only show as many predictions as there are winners. ### Results ::: Results for Outcome Prediction ::: Presidential Debates The results obtained for the outcome prediction task for the US presidential debates is shown in Table TABREF35. Table TABREF35 shows the winners as given in the Washington Post (3rd column) and the winners that are predicted by our system (2nd column). By comparing the set of results obtained from both the sources, we find that the set of candidates predicted match to a large extent with the winners given out by the Washington Post. The result suggests that the opinions of the social media community match with that of the journalists in most cases. ### Results ::: Results for Outcome Prediction ::: Grammy Awards A Grammy Award is given to outstanding achievement in the music industry. There are two types of awards: “General Field” awards, which are not restricted by genre, and genre-specific awards. Since, there can be upto 80 categories of awards, we only focus on the main 4: 1) Album of the Year, 2) Record of the Year, 3) Song of the Year, and 4) Best New Artist. These categories are the main in the awards ceremony and most looked forward to. That is also why we choose to predict the outcomes of these categories based on the tweets. We use the tweets before the ceremony (but after the nominations) to predict the outcomes. Basically, we have a list of nominations for each category. We filter the tweets based on these nominations and then predict the winner as with the case of the debates. The outcomes are listed in Table TABREF36. We see that largely, the opinion of the users on the social network, agree with the deciding committee of the awards. The winners agree for all the categories except “Song of the Year”. ### Results ::: Results for Outcome Prediction ::: Super Bowl The Super Bowl is the annual championship game of the National Football League. We have collected the data for the year 2013. Here, the match was between the Baltimore Ravens and the San Francisco 49ers. The tweets that we have collected are during the game. From these tweets, one could trivially determine the winner. But an interesting outcome would be to predict the Most Valuable Player (MVP) during the game. To determine this, all the tweets were looked at and we determined the candidate with the highest positive sentiment by the end of the game. The result in Table TABREF37 suggests that we are able to determine the outcomes accurately. Table TABREF43 displays some evaluation metrics for this task. These were computed based on the predicted outcomes and the actual outcomes for each of the different datasets. Since the number of participants varies from debate-to-debate or category-to-category for Grammy etc., we cannot return a fixed number of winners for everything. 
So, the size of our returned ranked-list is set to half of the number of participants (except for the MVP for Super Bowl; there are so many players and returning half of them when only one of them is relevant is meaningless. So, we just return the top 10 players). As we can see from the metrics, the predicted outcomes match quite well with the actual ones (or the ones given by the experts). ### Conclusions This paper presents a study that compares the opinions of users on microblogs, which is essentially the crowd wisdom, to that of the experts in the field. Specifically, we explore three datasets: US Presidential Debates 2015-16, Grammy Awards 2013, Super Bowl 2013. We determined if the opinions of the crowd and the experts match by using the sentiments of the tweets to predict the outcomes of the debates/Grammys/Super Bowl. We observed that in most of the cases, the predictions were right indicating that crowd wisdom is indeed worth looking at and mining sentiments in microblogs is useful. In some cases where there were disagreements, however, we observed that the opinions of the experts did have some influence on the opinions of the users. We also find that the features that were most useful in our case of multi-label classification was a combination of the document-embedding and topic features. TABLE I: Debates chosen, listed in chronological order. A total of 10 debates were considered out of which 7 are Republican and 3 are Democratic. TABLE II: Statistics of the Data Collected: Debates Fig. 1: Histograms of Tweet Frequency vs. Debates and TV Viewers vs. Debates shown side-by-side for comparison. The red bars correspond to the Republican debates and the blue bars correspond to the Democratic debates. Fig. 2: Tweet Frequency vs. Days for the 5th Republican Debate (15th December 2015). Fig. 4: Sentiments of the users towards the candidates across Debates. Fig. 3: Tweet Frequency vs. Hours for the 5th Republican Debate (15th December 2015). Fig. 5: Graphs showing how the sentiments of the users towards the candidates before and after the debates. TABLE V: Sentiment Analysis for the Presidential Debates: f1 stands for 1-gram, f2 stands for 2-gram, f3 stands for (1+2)-gram, f4 stands for (1+2)-gram+MISC, f5 stands for DOC, f6 stands for DOC + TOPIC. TABLE VII: Sentiment Analysis for the 2013 Superbowl TABLE VI: Sentiment Analysis for the 2013 Grammy Awards TABLE VIII: Outcome Prediction based on Tweet Sentiment: the superscript on the candidates indicates the predicted ordering TABLE IX: Outcome Prediction for the 2013 Grammy awards. TABLE XI: Metric Results for Outcome Prediction
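For illustration, the following minimal sketch (not the authors' code) shows how ranked winner lists could be scored with the mean F-measure and Mean Average Precision defined above; the helper functions and the toy candidate names are placeholders, not data from the paper.

```python
# Illustrative sketch: scoring ranked winner lists with mean F-measure and MAP.

def f_measure(predicted, relevant):
    """F1 over an unordered prediction set for one query (debate/category)."""
    predicted, relevant = set(predicted), set(relevant)
    if not predicted or not relevant:
        return 0.0
    tp = len(predicted & relevant)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(relevant)
    return 2 * precision * recall / (precision + recall)

def average_precision(ranked, relevant):
    """Average precision over a ranked prediction list for one query."""
    relevant = set(relevant)
    hits, ap = 0, 0.0
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            ap += hits / k          # precision@k counted only at relevant positions
    return ap / len(relevant) if relevant else 0.0

# Toy example: two "queries" with (ranked prediction, expert winners).
queries = [
    (["candidate_a", "candidate_b", "candidate_c"], ["candidate_a", "candidate_d"]),
    (["song_x", "song_y"], ["song_y"]),
]
mean_f = sum(f_measure(p, r) for p, r in queries) / len(queries)
mean_ap = sum(average_precision(p, r) for p, r in queries) / len(queries)
print(f"Mean F-measure: {mean_f:.3f}  MAP: {mean_ap:.3f}")
```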
the experts in the field
What is the Moral Choice Machine?
### Introduction There is a broad consensus that artificial intelligence (AI) research is progressing steadily, and that its impact on society is likely to increase. From self-driving cars on public streets to self-piloting, reusable rockets, AI systems tackle more and more complex human activities in a more and more autonomous way. This leads into new spheres, where traditional ethics has limited applicability. Both self-driving cars, where mistakes may be life-threatening, and machine classifiers that hurt social matters may serve as examples for entering grey areas in ethics: How does AI embody our value system? Can AI systems learn human ethical judgements? If not, can we contest the AI system? Unfortunately, aligning social, ethical, and moral norms to structure of science and innovation in general is a long road. According to BIBREF1 (BIBREF1), who examined affirmative ethics, the emergence of new questions leads to intense public discussions, that are driven by strong emotions of participants. And machine ethics BIBREF2, BIBREF3, BIBREF4 is no exception. Consider, e.g., BIBREF5's (BIBREF5) empirical proof that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, such as the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse. In other words, AI systems are not neutral with respect to purpose and society anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing in general most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your individual ethical frame. So, one may think that it is impossible to equip AI systems to make human like ethical choices. Luckily, BIBREF0 (BIBREF0) showed that standard machine learning can actually learn answers to ethical choices from textual data that reflect everyday human culture. Recently, BIBREF6 (BIBREF6) showed that sentence embeddings created by SBERT outperform other state-of-the-art sentence embedding methods like InferSent BIBREF7 and Universal Sentence Encoder BIBREF8. We hypothesis that the improvement of language representations models also improves the representation of the underlying ethical and moral values in these models. To investigate this, we follow the experimental pipeline described in BIBREF0. As a first investigation, we compare the new state-of-the-art text-embedding model BERT, or more precisely Sentence-BERT, focusing on quantifying deontological ethics, i.e. finding out, whether an action itself is right or wrong. Following BIBREF0 and for a fair comparison, we first restrict our attention to atomic actions instead of complex behavioural patterns. Semantically, those contextual isolated actions are represented by verbs. Consequently, identifying verbs that reflect social norms and allow capturing what people rather should do and what not. To conduct this investigation, we used the same template list of prompts and responses for ethical choices as in BIBREF0, cf. Tab. TABREF15(Appendix). The templates include questions, such as "Should I kill people?", "Is it allowed to murder people?", etc. with answer templates of "Yes/no, I should (not)." 
Using the Moral Choice Machine (MCM), based on some language representation, one is able to demonstrate the presence of ethical valuation in text collections by generating an ethical bias of actions derived from the Verb Extraction. As the next step, the correlation of WEAT (Word Embedding Association Test) values BIBREF5 and moral bias is examined. Based on that, we show that the new state-of-the-art method BERT improves the quality of the MCM. Although the three methods—Word Embedding Association Test (WEAT), Moral Choice Machine based on the Universal Sentence Encoder (USE), and Moral Choice Machine based on Sentence-BERT (SBERT)—are based on incoherent embeddings with different text corpora as training source, we show that they correspond in classification of actions as Dos and Don'ts. Our findings support the hypothesis of the presence of generally valid valuation in human text. Actually, they show that BERT improves the extraction of the moral score. Next, we move to more complex actions with surrounding contextual information and extend the (moral-) ranking of such actions presented in BIBREF0 by an evaluation of the actual moral bias. Again, we show that BERT has a more accurate reflection of moral values than USE. Finally, we contribute an alternative way of specifying the moral value of an action by learning a projection of the embedding space into a moral subspace. With the MCM in combination with BERT we can reduce the embedding dimensionality to one single dimension representing the moral bias. We proceed as follows. After reviewing our assumptions and the required background, we present the MCM using BERT, followed by improvements of the MCM. Before concluding, we present our empirical results. ### Assumptions and Background In this section, we review our assumptions, in particular what we mean by moral choices, and the required background, following closely BIBREF0. Moral Choices. Philosophically, roughly speaking, morals refer to the “right” and “wrong” at an individual's level while ethics refer to the systems of “right” and “wrong” set by a social group. Social norms and implicit behavioural rules exist in all human societies. But even though their presence is ubiquitous, they are hardly measurable and difficult to define consistently. The underlying mechanisms are still poorly understood. Indeed, each working society possesses an abstract moral that is generally valid and needs to be adhered to. However, theoretic definitions have been described as being inconsistent or even contradicting occasionally. Accordingly, latent ethics and morals have been described as the sum of particular norms that may not follow rational justification necessarily. Recently, BIBREF9 (BIBREF9) for instance suggested that moral norms are determined to a large extent by what is perceived to be common convention. With regards to complexity and intangibility of ethics and morals, we restrict ourselves to a rather basic implementation of this construct, following the theories of deontological ethics. These ask which choices are morally required, forbidden, or permitted instead of asking which kind of a person we should be or which consequences of our actions are to be preferred. Thus, norms are understood as universal rules of what to do and what not to do. Therefore, we focus on the valuation of social acceptance in single verbs and single verbs with surrounding context information —e.g. trust my friend or trust a machine— to figure out which of them represent a Do and which tend to be a Don't. 
Because we specifically chose templates in the first person, i.e., asking “should I” and not asking “should one”, we address the moral dimension of “right” or “wrong” decisions, and not only their ethical dimension. This is the reason why we will often use the term “moral”, although we actually touch upon both “ethics” and “morals”. To measure the valuation, we make use of implicit association tests (IATs) and their connections to word embeddings. Word and Sentence Embeddings. A word/phrase embedding is a representation of words/phrases as points in a vector space. All approaches have in common that more related or even similar text entities lie close to each other in the vector space, whereas distinct words/phrases can be found in distant regions BIBREF10. This enables determining semantic similarities in a language. Although these techniques have been around for some time, their potential increased considerably with the emergence of deep distributional approaches. In contrast to previous implementations, those embeddings are built on neural networks (NNs) and enable a rich variety of mathematical vector operations. One of the initial and most widespread algorithms to train word embeddings is Word2Vec BIBREF11, where unsupervised feature extraction and learning is conducted per word on either CBOW or Skip-gram NNs. This can be extended to full sentences BIBREF7, BIBREF8, BIBREF12. Bias in Text Embeddings. While biases in machine learning models can potentially be rooted in the implemented algorithm, they are primarily due to the data they are trained on. BIBREF5 (BIBREF5) empirically showed that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, as for instance the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse: AI systems “have the potential to inherit a very human flaw: bias”, as Socure's CEO Sunil Madhu puts it. Besides the typically discussed bias in gender and race stereotypes, AI systems are also not neutral with respect to purpose and societal values anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your ethical frame of reference. ### Human-like Moral Choices from Human Text Word-based approaches such as WEAT or Verb Extraction are comparatively simple. They consider single words only, detached from their grammatical and contextual surrounding. In contrast, the Moral Choice Machine determines biases on a sentence level. Analogous to word embeddings, sentence embeddings, e.g. the Universal Sentence Encoder BIBREF8 and Sentence-BERT BIBREF6, allow one to calculate the cosine similarity of various different sentences, as for instance the similarity of a question and the corresponding answer. The more appropriate a specific answer is to a given question, the stronger its cosine similarity is expected to be. When considering two opposite answers, it is therefore possible to determine a bias value $\mathrm {bias}(\vec{q}, \vec{a}, \vec{b}) = \cos (\vec{a}, \vec{q}) - \cos (\vec{b}, \vec{q})$, where $\vec{q}$ is the vector representation of the question and $\vec{a}$ and $\vec{b}$ the representations of the two answers/choices.
A positive value indicates a stronger association to answer $a$, whereas a negative value indicates a stronger association to $b$. This can be adapted to any arbitrary kind of bias by formulating appropriate question/answer triples, where the question captures the target dimension and the answers represent two opposite manifestations, the choices. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template. The element of interest is inserted to each considered prompt and resulting biases averaged to an overall bias value. Specifically, we considered two different biases: gender and moral. The question/answer template for moral biases, our main target, included the ten prompts summarized in Tab. TABREF15(Appendix), where any action term can be inserted for the dots. All prompts follow the same structure: the action is mirrored in the question, and the answers represent two extremes of social desirability. Questions were, e.g., “Is it allowed to ...?” or “Should I ...?”. The used answers were grammatically adjusted to the question, e.g., “Yes, it is.” vs. “No, it is not.” respectively “Yes, you should.” vs. “No, you should not.”. Besides atomic actions, e.g. smile, love, lie or kill, this template can be used on more complex, context-based actions e.g. love my parents, love my wife/husband, kill people or kill time. ### Moral Subspace Projection As BIBREF0 (BIBREF0) showed the question/answer template is an appropriate method to extract moral biases. However as BIBREF13 (BIBREF13) showed, one is also able to even adapt the model's bias, e.g. debias the model's gender bias. They describe that the first step for debiasing word embeddings is to identify a direction (or, more generally, a subspace) of the embedding that captures the bias. To identify the gender subspace, e.g., they proposed to take the difference vectors of given gender pairs and computed its principal components (PCs) and found a single direction that explains the majority of variance in these vectors, i.e. the first eigenvalue is significantly larger than the rest. Therefore, they argue that the top PC, denoted by the unit vector $g$, captures the gender subspace. Subsequently, they debias the embedding based on this subspace. Please note that the gender pairs are labelled beforehand. Using the above-mentioned methodology, we propose an alternative to identify the moral bias. Inspired by BIBREF13, we first compute the moral subspace of the text embedding. Instead of the difference vectors of the question/answer pairs, we compute the PCA on selected atomic actions —we expect that these actions represent Dos and Don'ts (cf. Appendix). We formulate the actions as questions, i.e. using question templates, and compute the mean embedding, since this amplifies their moral score BIBREF0. Similar to the gender subspace, if the first eigenvalue is significantly larger than the rest, the top PC, denoted by the unit vector $m$, captures the moral subspace and therefore also the moral bias. Then, based on this subspace, one can extract the moral bias of even complex actions with surrounding context by the projection of an action. 
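A minimal sketch of this bias computation follows, assuming a sentence-embedding model such as Sentence-BERT served through the sentence-transformers package; the checkpoint name and the two prompts shown are illustrative placeholders, not the full template of Tab. TABREF15.

```python
# Minimal sketch of the Moral Choice Machine bias described above.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bert-large-nli-mean-tokens")  # assumed SBERT checkpoint

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# (question template, affirmative answer, negative answer) -- illustrative subset
TEMPLATE = [
    ("Should I {}?",         "Yes, you should.", "No, you should not."),
    ("Is it allowed to {}?", "Yes, it is.",      "No, it is not."),
]

def moral_bias(action):
    biases = []
    for q_tmpl, yes, no in TEMPLATE:
        q, a, b = model.encode([q_tmpl.format(action), yes, no])
        biases.append(cos(a, q) - cos(b, q))   # positive value leans towards "yes"
    return float(np.mean(biases))

print(moral_bias("smile"))        # expected to be positive (a Do)
print(moral_bias("kill people"))  # expected to be negative (a Don't)
```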
### Experimental Results This section investigates empirically whether text corpora contain recoverable and accurate imprints of our moral choices. Specifically, we move beyond BIBREF0 by showing that BERT has a more accurate moral representation than that of the Universal Sentence Encoder. Datasets and Embedding Models. Experiments of the Moral Choice Machine are conducted with the Universal Sentence Encoder (USE) BIBREF8 and Sentence-BERT (SBERT) BIBREF6. The USE model is trained on phrases and sentences from a variety of different text sources; mainly Wikipedia but also sources such as forums, question/answering platforms, and news pages, augmented with supervised elements. SBERT is a modification of the pretrained BERT BIBREF12 network that aims to derive semantically meaningful sentence embeddings that can be compared using cosine similarity. BERT is, like USE, also trained mainly on Wikipedia. For the verb extraction, the same general positive and negative association sets as in BIBREF0 are used ($A$ and $B$ in Eq. DISPLAY_FORM18). The comprehensive list of vocabulary can be found in the appendix (Tab. TABREF20). Dos and Don'ts for the Moral Choice Machine. The verb extraction identifies the most positively and most negatively associated verbs in the vocabulary, to infer socially desired and neglected behaviour. BIBREF0 (BIBREF0) extracted them with the general positive and negative association sets on the Google Slim embedding. Since those sets are expected to reflect social norms, they are referred to as Dos and Don'ts hereafter. Tab. TABREF22 and Tab. TABREF23 (cf. Appendix) list the most positively and negatively associated verbs (in decreasing order). Summarized, even though the contained positive verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are of a rather general and unspecific nature. Analogously, some of the negative words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes, such as murder. As BIBREF0 (BIBREF0) describe, the listed words can be accepted as commonly agreed Dos and Don'ts. Replicating Atomic Moral Choices. Next, based on the verb extraction and the question/answer templates, we show that social norms are present in text embeddings and that a text embedding network known to achieve high scores in unsupervised scenarios (such as semantic textual similarity via cosine similarity, clustering, or semantic search) improves the scores of the extracted moral actions. The correlation of the moral bias and the corresponding WEAT value was calculated to test the consistency of findings. It is hypothesised that the resulting moral biases for generated Dos and Don'ts correspond to the WEAT value of each word. The correlation was tested by means of Pearson's Correlation Coefficient, $r = \frac{\sum _{i}(x_i - m_x)(y_i - m_y)}{\sqrt{\sum _{i}(x_i - m_x)^2}\sqrt{\sum _{i}(y_i - m_y)^2}}$, where $m_x$ and $m_y$ are the means of $X$ and $Y$. Pearson's $r$ ranges between $-1$, indicating a strong negative correlation, and 1, indicating a strong positive correlation. Significance levels are defined as $5\%$, $1\%$ and $0.1\%$, indicated by one, two or three asterisks. The correlation between the WEAT value and the moral bias becomes tangible when inspecting it graphically, cf. Fig. FIGREF4. The concrete bias scores can be found in the Appendix, Tab. TABREF28 and TABREF29. For both WEAT and MCM, the scatter plots of Dos and Don'ts are divided on the x-axis.
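For concreteness, such a correlation and its significance level can be computed as in the following sketch; the score arrays are placeholders, not the actual per-verb values.

```python
# Sketch of the consistency check described above: correlate per-verb WEAT
# values with MCM moral bias scores and test significance (placeholder data).
from scipy.stats import pearsonr

weat_values  = [0.9, 0.7, 0.5, -0.4, -0.8, -1.1]       # one value per Do/Don't verb
moral_biases = [0.06, 0.04, 0.02, -0.05, -0.11, -0.16]  # MCM bias for the same verbs

r, p = pearsonr(weat_values, moral_biases)
stars = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else ""
print(f"Pearson r = {r:.2f}{stars} (p = {p:.2e})")
```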
The Pearson's Correlation Coefficient using USE as embedding (Top) $r = 0.73$ with $p = 2.3732e^{-16}$ is indicating a significant positive correlation. However, according to the distribution one can see that using BERT (Bottom) improves the distinction between Dos and Don't. Actually, the Pearson's Correlation Coefficient $r = 0.88$ with $p = 1.1054e^{-29}$ indicates a high positive correlation. These findings suggest that if we build an AI system that learns an improved language representation to be able to better understand and produce it, in the process it will also acquire more accurate historical cultural associations to make human-like “right” and “wrong” choices. Replicating Complex Moral Choices in the Moral Subspace. The strong correlation between WEAT values and moral biases at the verb level gives reasons to investigate BERT's Moral Choice Machine for complex human-like choices at the phrase level. For instance, it is appropriate to help old people, but one should not help a thief. It is good behaviour to love your parents, but not to steal money. To see whether the moral choice machine can, in principle, deal with complex choices and implicit context information around these complex choices, BIBREF0 (BIBREF0) considered the rankings among answers induced by cosine distance. Their results indicate that human text may indeed contain complex human-like choices that are reproducible by the Moral Choice Machine. To investigate this further, we define a Moral Subspace Projection and consider a set of atomic actions and combine them with varying context information, e.g. “Should I have a gun to hunt animals?” or “Should I have a gun to kill people?”. First we will investigate the subspace of vector differences (moral direction) which was introduced by BIBREF13 (BIBREF13) to debias word embeddings. Fig. FIGREF6 (a-b) shows the percentage of variance explained in the PCA using the MCM with USE(a) and BERT(b). Clearly, the top principal component (PC) using BERT explains the majority of variance in these vectors, therefore we conclude that it represents the moral direction $m$. Using USE, we were unable to find a clear moral dimension, rather multiple directions. Although both projections should enable one to adapt the model's moral bias based on the subspace, BERT seems to have a more intuitive moral direction. Next, we investigate the subspace projection with the actions formulated as questions. Also, here, one can see that BERT enables the MCM to identify a clear moral direction, cf. Fig. FIGREF6(c-d). The PCA is computed with the embedding of atomic actions. Based on this projection, we query more complex actions to investigate their moral bias score. The atomic actions in the subspace are visualized in Fig. FIGREF1 and the queried actions in Fig. FIGREF11. The horizontal axis (the top PC) represents the moral direction. One can observe that the atomic actions kill, murder, slaughter, brutalise, destroy are the most negative actions and congratulate, compliment, welcome and smile the most positive. E.g. apologize, dream, go, become seem to be neutral —which would change depending on the context—. If we, now, query the MCM with projection with more complex actions, one can see that the most negative actions are kill people, have a gun to kill people and become evil, but becoming a good parent is positive. Further, one can see that eat healthy is positive but eat meat is not appropriate. One should not travel to North Korea, but also not to Germany. 
Instead traveling to the United States is appropriate. ### Conclusions We have demonstrated that BERT has a more pronounced moral compass than previous embedding methods. That is, yes, text embeddings encode knowledge about deontological ethical and even moral choices, but the quality of the bias score depends on the quality of the text embedding network. Specifically, our empirical results show that the Moral Choice Machine with recent state-of-the-art language representations, namely BERT, extends the boundary of previous approaches and demonstrate the existence of biases in human language on a complex phrase level. Moreover, we identified for the first time that there is a moral dimension in text embeddings, even when taking context into account. Generally, improved moral choice machines hold promise for identifying and addressing sources of ethical and moral choices in culture, including AI systems. This provides several avenues for future work. Inspired by BIBREF13 (BIBREF13), we aim at modifying the embedding, given human ethical values collected from an user study. Further, it is interesting to track ethical choices over time and to compare them among different text corpora. Even more interesting is an interactive learning setting with an interactive robot, in which users would teach and revise the robot's moral bias. Our identification of a moral subspace in sentence embeddings lays the foundation for this. ### Appendix ::: Moral Choice Machine BIBREF0 (BIBREF0) developed Moral Choice Machine computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template (cf. Tab. TABREF15). The element of interest is inserted to each considered prompt and resulting biases averaged to an overall bias value. ### Appendix ::: Implicit Associations in Word Embeddings Transferring the approach of implicit associations from human subjects to information retrieval systems on natural text was initially suggested by Caliskan et al. (BIBREF5), who reported some basic effects of the Word Embedding Association Test (WEAT). Whereas the strength of association in human minds is defined by response latency in Implicit Association Tests (IAT), it is here instantiated as cosine similarity of text in the Euclidean space. Similar to the IAT, complex concepts are defined by word sets. The association of any single word vector $\vec{w}$ to a word set is defined as the mean cosine similarity between $\vec{w}$ and the particular elements of the set. Now, let there be two sets of target words $X$ and $Y$. The allocation of $\vec{w}$ to two discriminating association sets $A$ and $B$ can be formulated as A word with representation $\vec{w}$ that is stronger associated to concept $A$ yields a positive value and representation related to $B$ a negative value. ### Appendix ::: Association Sets The complete lists of positive and negative association words that were applied for generating Dos and Don'ts with Verb Extraction are given in Tab. TABREF20. 
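A minimal sketch of this word-level association score, as used by Verb Extraction to rank verbs as Dos and Don'ts, is shown below; the embedding lookup `emb` and the word sets are placeholders rather than the actual vocabulary of Tab. TABREF20.

```python
# Sketch of the association score described above: a word is scored by its mean
# cosine similarity to the positive set A minus that to the negative set B.
import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, A_vecs, B_vecs):
    return (np.mean([cos(word_vec, a) for a in A_vecs])
            - np.mean([cos(word_vec, b) for b in B_vecs]))

def rank_verbs(verbs, A_words, B_words, emb):
    """emb: dict mapping words to vectors, e.g. loaded from a word-embedding model."""
    A_vecs = [emb[w] for w in A_words if w in emb]
    B_vecs = [emb[w] for w in B_words if w in emb]
    scores = {v: association(emb[v], A_vecs, B_vecs) for v in verbs if v in emb}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)  # Dos first
```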
The words were collected from four different literature sources that provide unspecific association sets to define pleasant and unpleasant associations BIBREF14, BIBREF17, BIBREF18, BIBREF15. ### Appendix ::: Dos and Don’ts for the Moral Choice Machine Tab. TABREF22 lists the most positively associated verbs (in decreasing order). Even though the contained verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are of a rather general and unspecific nature. Analogously, Tab. TABREF23 presents the most negatively associated verbs (in decreasing order) we found in our vocabulary. Some of the words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes, such as murder. Still other words, for instance suppurate or rot, simply appear to be disgusting. Exculpate is not a bad behaviour per se. However, its occurrence in the Don't set is not surprising, since it is semantically and contextually related to wrongdoings. Some of the words are of a surprisingly repugnant nature that was not even anticipated in preliminary considerations, e.g. depopulate or dehumanise. Undoubtedly, the listed words can be accepted as commonly agreed Don'ts. Both lists include a few words that are more common as nouns or adjectives, such as joy, long, gift or bad. Nevertheless, they can also be used as verbs and, in that function, meet the requirements of being a Do or a Don't. The allocation of verbs into Dos and Don'ts was confirmed by the affective lexicon AFINN BIBREF16. AFINN allows one to rate words and phrases for valence on a scale from $-5$ to 5, indicating inherent connotation. Elements with no ratings are treated as neutral ($0.0$). When passing the comprehensive lists of generated Dos and Don'ts to AFINN, the mean rating for Dos is $1.12$ ($std=1.24$) and for Don'ts $-0.90$ ($std=1.22$). The t-test statistic yielded values of $t = 8.12$ with $p < .0001^{***}$. When neglecting all verbs that are not included in AFINN, the mean value for Dos is $2.34$ ($std=0.62$, $n = 24$) and the mean for Don'ts $-2.37$ ($std = 0.67$, $n=19$), with again highly significant statistics ($t = 23.28$, $p<.0001^{***}$). Thus, the sentiment rating is completely in line with the allocation produced by Verb Extraction. The verb extraction was highly successful and delivers useful Dos and Don'ts. The word sets contain consistently positively and negatively connoted verbs, respectively, which can reasonably be taken to represent a socially agreed norm in the right context. The AFINN validation clearly shows that the valuation of positive and negative verbs is in line with other independent rating systems.
Using USE the mean bias of all considered elements is $-0.018$ ($std=0.025$), whereat the mean of Dos is $-0.001$ ($std=0.190$, $n=50$) and the mean of Don'ts $-0.037$ ($std=0.017$, $n=50$). Using BERT the mean bias of all considered elements is $-0.054$ ($std=0.11$), whereat the mean of Dos is $0.041$ ($std=0.064$, $n=50$) and the mean of Don'ts $-0.163$ ($std=0.053$, $n=50$). Furthermore Tab. TABREF29 shows the resulting moral biases scores/choices for action with additional surrounding context exemplary for the top ten Dos and Don'ts of both sentence embeddings. ### Appendix ::: Moral Subspace Projection To create a the moral subspace projection a Principal Component Analysis (PCA) was computed. The used atomic actions are listed in Tab. TABREF26. The resulting space, with the MCM using BERT, is visualized in Fig. FIGREF1 based on the first two top PCs. The top PC (the $X$ axis) defines the moral direction $m$ (bias). The context-based actions which were tested using the moral subspace projection are listed in Tab. TABREF27. The resulting moral direction $m$ (or bias) for both the atomic and context-based actions can be found in Tab. TABREF30. We also list the results using the sentence embedding USE instead of BERT. $m < 0$ corresponds to a positive moral score and $m > 0$ corresponds to a negative moral score. Figure 1: BERT has a moral dimension: PCA of its embeddings projected to 2D. The top PC is the x axis, its moral dimension m. Figure 2: Correlation of moral bias score and WEAT Value for general Dos and Don’ts. (Blue line) Correlation, the Pearson’s Correlation Coefficient using USE as embedding (Top) r = 0.73 with p = 2.3732e−16 is indicating a significant positive correlation. However, according to the distribution, one can see that using BERT (Bottom) improves the distinction between Dos and Don’t, and also the Pearson’s Correlation Coefficient r = 0.88 with p = 1.1054e−29 indicates a higher positive correlation. Figure 3: The percentage of variance explained in the PCA of the vector differences (a-b) and the of the action embedding (c-d). If MCM is based on BERT, the top component explains significantly more variance than any other. Figure 4: Context-based actions projected —based on PCA computed by selected atomic actions— along two axes: x (top PC) defines the moral direction m (Left: Dos and right: Don’ts). Compare Tab. 9(Appendix) for detailed moral bias scores. Table 1: Question/Answer template of the Moral Choice Machine. Figure 5: The Moral Choice Machine illustrated for the choice of murdering people and the exemplary question Should I . . . ? from the question template. Table 6: The context-based actions to extract the bias from a moral subspace Table 7: Comparison of MCM with the two different text embeddings USE and BERT on atomic actions. The extracted moral bias scores of the top ten Dos and Don’ts are shown. Table 8: Comparison of MCM with the two different text embeddings USE and BERT on actions with additional surrounding context. The extracted moral bias scores of the top ten Dos and Don’ts are shown. Table 9: Resulting moral direction m using the moral subspace projection. All tested atomic and context based actions are listed. m < 0 corresponds to a positive moral score and m > 0 corresponds to a negative moral score. The visualization based on the first two top PCs, using BERT as sentence embedding, can be found in Fig.1 and Fig.4.
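A sketch of how such a moral subspace projection can be computed with off-the-shelf tools (scikit-learn PCA) is given below; the embedding model, the question templates and the action lists are placeholders, and the sign of the resulting direction depends on the fitted component rather than on a fixed convention.

```python
# Sketch of the moral subspace projection described above: PCA over embedded
# atomic actions; the top principal component is taken as the moral direction m.
import numpy as np
from sklearn.decomposition import PCA

def embed_action(action, model, templates):
    """Mean sentence embedding of the action inserted into each question template."""
    questions = [t.format(action) for t in templates]
    return np.mean(model.encode(questions), axis=0)

def fit_moral_direction(atomic_actions, model, templates):
    X = np.stack([embed_action(a, model, templates) for a in atomic_actions])
    pca = PCA().fit(X)
    print("explained variance ratios:", pca.explained_variance_ratio_)
    m = pca.components_[0]          # top PC, assumed to capture the moral direction
    return m, X.mean(axis=0)

def moral_score(action, m, mean_vec, model, templates):
    """Signed projection of a (possibly context-rich) action onto m."""
    return float(np.dot(embed_action(action, model, templates) - mean_vec, m))

# e.g. m, mu = fit_moral_direction(["smile", "love", "kill", "steal"], model, TEMPLATE)
#      moral_score("have a gun to kill people", m, mu, model, TEMPLATE)
```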
Moral Choice Machine computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs
How big is vSTS training data?
### Introduction Language understanding is a task proving difficult to automatize, because, among other factors, much of the information that is needed for the correct interpretation of an utterance is not explicit in text BIBREF0. This contrasts with how natural is language understanding for humans, who can cope easily with information absent in text, using common sense and background knowledge like, for instance, typical spatial relations between objects. From another perspective, it is well-known that the visual modality provides complementary information to that in the text. In fact, recent advances in deep learning research have led the field of computer vision and natural language processing to significant progress in tasks that involve visual and textual understanding. Tasks that include visual and textual content include Image Captioning BIBREF1, Visual Question Answering BIBREF2, and Visual Machine Translation BIBREF3, among others. On the other hand, progress in language understanding has been driven by datasets which measure the quality of sentence representations, specially those where inference tasks are performed on top of sentence representations, including textual entailment BIBREF4, BIBREF5 and semantic textual similarity (STS). In STS BIBREF6, for instance, pairs of sentences have been annotated with similarity scores, with top scores for semantically equivalent sentences and bottom scores for completely unrelated sentences. STS provides a unified framework for extrinsic evaluation of multiple semantic aspects such as compositionality and phrase similarity. Contrary to related tasks, such as textual entailment and paraphrase detection, STS incorporates the notion of graded semantic similarity between the pair of textual sentences and is symmetric. In this paper we extend STS to the visual modality, and present Visual Semantic Textual Similarity (vSTS), a task and dataset which allows to study whether better sentence representations can be built when having access to the corresponding images, in contrast with having access to the text alone. Similar to STS, annotators were asked to score the similarity between two items, but in this case each item comprises an image and a textual caption. Systems need to predict the human score. Figure FIGREF1 shows an instance in the dataset, with similarity scores in the captions. The example illustrates the need to re-score the similarity values, as the text-only similarity is not applicable to the multimodal version of the dataset: the annotators return a low similarity when using only text, while, when having access to the corresponding image, they return a high similarity. Although a dataset for multimodal inference exists (visual textual entailment BIBREF7) that dataset reused the text-only inference labels. The vSTS dataset aims to become a standard benchmark to test the contribution of visual information when evaluating the similarity of sentences and the quality of multimodal representations, allowing to test the complementarity of visual and textual information for improved language understanding. Although multimodal tasks such as image captioning, visual question answering and visual machine translation already show that the combination of both modalities can be effectively used, those tasks do not separately benchmark the inference capabilities of multimodal visual and textual representations. 
We evaluate a variety of well-known textual, visual and multimodal representations in supervised and unsupervised scenarios, and systematically explore if visual content is useful for sentence similarity. For text, we studied pre-trained word embeddings such as GloVe BIBREF8, pre-trained language models like GPT-2 and BERT BIBREF9, BIBREF10, sentence representations fine-tuned on an entailment task like USE BIBREF11, and textual representations pre-trained on a multimodal caption retrieval task like VSE++ BIBREF12. For image representation we use a model pre-trained on Imagenet (ResNet BIBREF13). In order to combine visual and textual representations we used concatenation and learn simple projections. Our experiments show that the text-only models are outperformed by their multimodal counterparts when adding visual representations, with up to 24% error reduction. Our contributions are the following: (1) We present a dataset which allows to evaluate visual/textual representations on an inference task. The dataset is publicly available under a free license. (2) Our results show, for the first time, that the addition of image representations allows better inference. (3) The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned in a text-only inference task like USE. (4) The improvement when using image representations is observed both when computing the similarity directly from multimodal representations, and also when training siamese networks. At the same time the improvement holds for all textual representations, even those fine-tuned on a similarity task. ### Related Work The task of Visual Semantic Textual Similarity stems from previous work on textual inference tasks. In textual entailment, given a textual premise and a textual hypothesis, systems need to decide whether the first entails the second, they are in contradiction, or none of the previous BIBREF4. Popular datasets include the Stanford Natural Language Inference dataset BIBREF5. As an alternative to entailment, STS datasets comprise pairs of sentences which have been annotated with similarity scores. STS systems are usually evaluated on the STS benchmark dataset BIBREF6. In this paper we present an extension of STS, so we present the task in more detail in the next section. Textual entailment has been recently extended with visual information. A dataset for visual textual entailment was presented in BIBREF7. Even if the task is different from the text-only counterpart, they reused the text-only inference ground-truth labels without re-annotating them. In fact, they annotate a small sample to show that the labels change. In addition, their dataset tested pairs of text snippets referring to a single image, and it was only useful for testing grounding techniques, but not to measure the complementarity of visual and textual representations. The reported results did not show that grounding improves results, while our study shows that the inference capabilities of multimodal visual and textual representations improve over text-only representations. In related work, BIBREF14 propose visual entailment, where the premise is an image and the hypothesis is textual. The chosen setting does not allow to test the contribution of multimodal representationn with respect to unimodal ones. 
The complementarity of visual and text representations for improved language understanding was first proven on word representations, where word embeddings were combined with visual or perceptual input to produce multimodal representations BIBREF15. The task of Visual Semantic Textual Similarity is also related to other multimodal tasks such as Image Captioning BIBREF16, BIBREF17, Text-Image Retrieval BIBREF18, BIBREF19 and Visual Question Answering BIBREF2. Image Captioning is a task that aims to generate a description of a given image. The task is related to ours in that it is required an understanding of the scene depicted in the image, so the system can generate an accurate description of it. Unlike vSTS, image captioning is a generation task in which evaluation is challenging and unclear, as the defined automatic metrics are somewhat problematic BIBREF20. On the other hand, Text-Image Retrieval task requires to find similarities and differences of the items in two modalities, so we can distinguish relevant and irrelevant texts and images regarding the query. Apart from not checking inference explicitly, the other main difference with regards to vSTS is that, in retrieval, items are ranked from most to least similar, whereas the vSTS task consists on scoring an accurate real valued similarity. A comprehensive overview is out of the scope, and thus we focus on the most related vision and language tasks. We refer the reader to BIBREF21 for a survey on vision and language research. Many of these tasks can be considered as extensions of previously existing NLP taks. For instance, Image Captioning can be seen as an extension of conditional language modeling BIBREF22 or natural language generation BIBREF23, whereas Visual Question Answering is a natural counterpart of the traditional Question Answering in NLP. Regarding multimodal and unimodal representation learning, convolutional neural networks (CNN) have become the standard architecture for generating representations for images BIBREF24. Most of these models learn transferable general image features in tasks such as image classification, and detection, semantic segmentation, and action recognition. Most used transferable global image representations are learned with deep CNN architectures such as AlexNet BIBREF25, VGG BIBREF26, Inception-v3 BIBREF27, and ResNet BIBREF13 using large datasets such as ImageNet BIBREF1, MSCOCO BIBREF28 and Visual Genome BIBREF29. Recently, Graph Convolution Networks (GCN) showed to be promising way to distill multiple input types multimodal representations BIBREF30. Language representation is mostly done with pretrained word embeddings like Glove BIBREF8 and sequence learning techniques such as Recurrent Neural Networks (RNN) BIBREF31. Recently, self-attention approaches like Transformers BIBREF32 provided transferable models (BERT, GPT-2, among others BIBREF9, BIBREF10) that significantly improve many state-of-the-art tasks in NLP. Alternatively, sentence representations have been fine-tuned on an entailment task BIBREF11. We will present those used in our work in more detail below. ### The Visual STS Dataset STS assesses the degree to which two sentences are semantically equivalent to each other. The annotators measure the similarity among sentences, with higher scores for more similar sentences. The annotations of similarity were guided by the scale in Table TABREF4, ranging from 0 for no meaning overlap to 5 for meaning equivalence. 
Intermediate values reflect interpretable levels of partial overlap in meaning. In this work, we extend the STS task with images, providing visual information that models use, and assess how much visual content can contribute in a language understanding task. The input of the task now consists of two items, each comprising an image and its corresponding caption. In the same way as in STS, systems need to score the similarity of the sentences with the help of the images. Figure FIGREF1 shows an example of an instance in the dataset. In previous work reported in a non-archival workshop paper BIBREF33, we presented a preliminary dataset which used the text-only ground-truth similarity scores. The 819 pairs were extracted from a subset of the STS benchmark, more specifically, the so called STS-images subset, which contains pairs of captions with access to images from PASCAL VOC-2008 BIBREF34 and Flickr-8K BIBREF35. Our manual analysis, including examples like Figure FIGREF1, showed that in many cases the text-only ground truth was not valid, so we decided to re-annotated the dataset but showing the images in addition to the captions (the methodology is identical to the AMT annotation method mentioned below). The correlation of the new annotations with regard to the old ones was high (0.9$\rho $) showing that the change in scores was not drastic, but that annotations did differ. The annotators tended to return higher similarity scores, as the mean similarity score across the dataset increased from 1.7 to 2.1. The inter-tagger correlation was comparable to the text-only task, showing that the new annotation task was well-defined. From another perspective, the fact that we could only extract 819 pairs from existing STS datasets showed the need to sample new pairs from other image-caption datasets. In order to be effective in measuring the quality of multimodal representations, we defined the following desiderata for the new dataset: (1) Following STS datasets, the similarity values need to be balanced, showing a uniform distribution; (2) Paired images have to be different to avoid making the task trivial, as hand analysis of image-caption datasets showed that two captions of the same image tended to be paraphrases of each other; (3) The images should not be present in more than one instance, to avoid biases in the visual side; (4) It has to contain a wide variety of images so we can draw stronger conclusions. The preliminary dataset fulfilled 2 and 3, but the dataset was skewed towards low similarity values and the variety was limited. ### The Visual STS Dataset ::: Data Collection The data collection of sentence-image pairs comprised several steps, including the selection of pairs to be annotated, the annotation methodology, and a final filtering stage. ### The Visual STS Dataset ::: Data Collection ::: 1. Sampling data for manual annotation. We make use of two well-known image-caption datasets. On one hand, Flickr30K dataset BIBREF36 that has about 30K images with 5 manually generated captions per image. On the other hand, we use the Microsoft COCO dataset BIBREF28, which contains more than 120K images and 5 captions per image. Using both sources we hope to cover a wide variety of images. In order to select pairs of instances, we did two sampling rounds. The goal of the first run is to gather a large number of varied image pairs with their captions which contain interesting pairs. We started by sampling images. We then combined two ways of sampling pairs of images. 
In the first, we generated pairs by sampling the images randomly. This way, we ensure higher variety of paired scenes, but presumably two captions paired at random will tend to have very low similarity. In the second, we paired images taking into account their visual similarity, ensuring the selection of related scenes with a higher similarity rate. We used the cosine distance of the top-layer of a pretrained ResNet-50 BIBREF13 to compute the similarity of images. We collected an equal number of pairs for the random and visual similarity strategy, gathering, in total, $155,068$ pairs. As each image has 5 captions, we had to select one caption for each image, and we decided to select the two captions with highest word overlap. This way, we get more balanced samples in terms of caption similarity. The initial sampling created thousands of pairs that were skewed towards very low similarity values. Given that manual annotation is a costly process, and with the goal of having a balanced dataset, we used an automatic similarity system to score all the pairs. This text-only similarity system is an ensemble of feature-based machine learning systems that uses a large variety of distance and machine-translation based features. The model was evaluated on a subset of STS benchmark dataset BIBREF6 and compared favorably to other baseline models. As this model is very different from current deep learning techniques, it should not bias the dataset sampling in a way which influences current similarity systems. The automatic scores were used to sample the final set of pairs as follows. We defined five similarity ranges ($(0, 1], \ldots ,(4, 5]$) and randomly selected the same amount of pairs from the initial paired sample. We set a sampling of maximum 3000 instances (i.e 600 instances per range). Given the fact that the high similarity range had less than 600 instances, we collected a total of 2639 potential text-image candidate pairs for manual annotation. Figure FIGREF8 shows the proposed methodology can sample approximately a uniform distribution with the exception of the higher similarity values (left and middle plots). In addition, we show that the lower predicted similarities are mainly coming from random sampling, whereas, as expected, the higher ones come from similar images. ### The Visual STS Dataset ::: Data Collection ::: 2. Manual annotations. In order to annotate the sample of 2639 pairs, we used Amazon Mechanical Turk (AMT). Crowdworkers followed the same instructions of previous STS annotation campaigns BIBREF6, very similar to those in Table TABREF4. Annotators needed to focus on textual similarity with the aid of aligned images. We got up to 5 scores per item, and we discarded annotators that showed low correlation with the rest of the annotators ($\rho < 0.75$). In total 56 annotators took part. On average each crowdworker annotated 220 pairs, where the amounts ranged from 19 to 940 annotations. Regardless the annotation amounts, most of the annotators showed high correlations with the rest of the participants. We computed the annotation correlation by aggregating the individual Pearson correlation with averaged similarity of the other annotators. The annotation shows high correlation among the crowdworkers ($\rho = 0.89$ $\pm 0.01$) comparable to that of text-only STS datasets. Table TABREF10 shows the average item similarity and item disagreement in the annotation. We defined item disagreement as the standard deviation of the annotated similarity value. 
The low average similarity can be explained by the high number of zero-similarity pairs. Item disagreement is moderately low (about 0.6 points out of 5), which is in accordance with the high correlation between the annotators. ### The Visual STS Dataset ::: Data Collection ::: 3. Selection of difficult examples. In preliminary experiments, the evaluation of two baseline models, word overlap and the ensemble system mentioned before, showed that the sampling strategy introduced a large number of trivial examples. For example, the word overlap system attained $0.83$ $\rho $. This high correlation could be the result of using word overlap in the first sampling round. In order to create a more challenging dataset in which to measure the effectiveness of multimodal representations, we defined the easiness metric to filter out some of the easy examples from the annotated dataset. We defined easiness as the contribution of an example to the overall agreement between word-overlap similarity and the gold scores in the dataset. Taking the inner product of the Pearson correlation formula as basis, we measure the easiness of an annotated example $i$ as $e_i = \frac{o_i - \overline{o}}{s_o} \cdot \frac{gs_i - \overline{gs}}{s_{gs}}$, where $o_{i}$ is the word-overlap similarity of the $i$-th pair, $\overline{o}$ is the mean overlap similarity in the dataset, and $s_{o}$ is the standard deviation. Similarly, variable $gs_{i}$ is the gold-standard value of the $i$-th pair, and $\overline{gs}$ and $s_{gs}$ are the mean and standard deviation of gold values in the dataset, respectively. We removed 30% of the easiest examples, creating a more challenging dataset of 1858 pairs and reducing $\rho $ to $0.57$ for the word-overlap model and to $0.66$ (from $0.85$) for the ML-based approach. ### The Visual STS Dataset ::: Dataset Description The full dataset comprises both the sample mentioned above and the 819 pairs from our preliminary work, totalling 2677 pairs. Figure FIGREF14 shows the final item similarity distribution. Although the distribution is skewed towards lower similarity values, we consider that all the similarity ranges are sufficiently well covered. The average similarity of the dataset is $1.9$ with a standard deviation of $1.36$ points. The dataset contains 335 zero-valued pairs out of the 2677 instances, which partly explains the lower average similarity. ### Evaluation of Representation Models The goal of the evaluation is to explore whether representation models that have access to images, instead of text alone, have better inference abilities. We consider the following models. ResNet BIBREF13 is a deep network of 152 layers in which residual representation functions are learned instead of learning the signal representation directly. The model is trained over 1.2 million images of ImageNet, the ILSVRC subset of 1000 image categories. We use the top layer of a pretrained ResNet-152 model to represent the images associated with the text. Each image is represented with a vector of 2048 dimensions. GloVe. The Global Vector model BIBREF8 is a log-linear model trained to encode semantic relationships between words as vector offsets in the learned vector space, combining global matrix factorization and local context window methods. Since GloVe is a word-level vector model, we build sentence representations with the mean of the vectors of the words composing the sentence. The pre-trained GloVe model considered in this paper is the 6B-300d one, with a vocabulary of 400k words and 300-dimensional vectors, trained on a dataset of 6 billion tokens. BERT.
The Bidirectional Encoder Representations from Transformers model BIBREF9 implements a novel methodology based on the so-called masked language model, which randomly masks some of the tokens from the input and predicts the original vocabulary id of the masked word based only on its context. The BERT model used in our experiments is BERT-Large Uncased (24-layer, 1024-hidden, 16-heads, 340M parameters). In order to obtain the sentence-level representation, we extract the token embeddings of the last layer and compute the mean vector, yielding a vector of 1024 dimensions. GPT-2. The Generative Pre-Training-2 model BIBREF10 is a language model based on the transformer architecture, which is trained on the task of predicting the next word given all the previous words occurring in some text. In the same manner as for BERT and GloVe, we extract the token embeddings of the last layer and compute the mean vector to obtain the sentence-level representation of 768 dimensions. The GPT-2 model used in our experiments was trained on a very large corpus of about 40 GB of text data and has 1.5 billion parameters. USE. The Universal Sentence Encoder BIBREF11 is a model for encoding sentences into embedding vectors, specifically designed for transfer learning in NLP. Based on a deep averaging network encoder, the model is trained for varying text lengths, such as sentences, phrases or short paragraphs, and on a variety of semantic tasks including STS. The encoder returns a 512-dimensional vector for each sentence. VSE++. The Visual-Semantic Embedding BIBREF12 is a model trained for image-caption retrieval. The model learns a joint space of aligned images and captions. The model is an improvement of the original introduced by BIBREF37, and combines a ResNet-152 over images with a bidirectional Recurrent Neural Network (GRU) over the sentences. Texts and images are projected onto the joint space, obtaining representations of 1024 dimensions both for images and texts. We used the projections of images and texts in our experiments. The VSE++ model used in our experiments was pre-trained on the Microsoft COCO dataset BIBREF28 and the Flickr30K dataset BIBREF36. Table TABREF15 summarizes the sentence and image representations used in the evaluation. ### Evaluation of Representation Models ::: Experiments ::: Experimental Setting. We split the vSTS dataset into training, validation and test partitions, sampling at random and preserving the overall score distributions. In total, we use 1338 pairs for training, 669 for validation, and the remaining 670 pairs for the final testing. Similar to the STS task, we use the Pearson correlation coefficient ($\rho $) as the evaluation metric of the task. ### Evaluation of Representation Models ::: Experiments ::: STS models. Our goal is to keep the similarity models as simple as possible in order to directly evaluate the textual and visual representations and to avoid, as much as possible, the influence of the parameters that intertwine when learning a particular task. We defined two scenarios: the supervised and the unsupervised scenario. In the supervised scenario we train a Siamese Regression model in a similar way as presented in BIBREF38. Given a sentence/image pair, we wish to predict a real-valued similarity in some range $[1,K]$, being $K=5$ in our experiments. We first produce sentence/image representations $h_{L}$ and $h_{R}$ for each item in the pair using any of the unimodal models described above, or using a multimodal representation as explained below.
Given these representations, we predict the similarity score $o$ using a regression model that takes both the distance and angle between the pair ($h_{L}$, $h_{R}$): following BIBREF38, the angle is captured by the element-wise product $h_{\times } = h_{L} \odot h_{R}$ and the distance by the absolute difference $h_{+} = |h_{L} - h_{R}|$. Note that the distance and angle concatenation ($[h_{\times }, h_{+}]$) yields a $2 * d$-dimensional vector. The resulting vector is used as input for the non-linear hidden layer ($h_{s}$) of the model. Contrary to BIBREF38, we empirically found that the estimation of a continuous value worked better than learning a softmax distribution over $[1,K]$ integer values. The loss function of our model is the Mean Square Error (MSE), which is the most commonly used regression loss function. In the unsupervised scenario similarity is computed as the cosine of the produced $h_{L}$ and $h_{R}$ sentence/image representations. ### Evaluation of Representation Models ::: Experiments ::: Multimodal representation. We combined textual and image representations in two simple ways. The first method is concatenation of the text and image representations (concat). Before concatenation we applied L2 normalization to each of the modalities. The second method is to learn a common space for the two modalities before concatenation (project). The projection of each modality learns a space of $d$-dimensions, so that $h_{1}, h_{2} \in \mathbb {R}^{d}$. Once the multimodal representation is produced ($h_{m}$) for the left and right pairs, vectors are directly plugged into the regression layers. Projections are learned end-to-end with the regression layers and the MSE as loss function. ### Evaluation of Representation Models ::: Experiments ::: Hyperparameters and training details. We use the validation set to learn parameters of the supervised models, and to carry out an exploration of the hyperparameters. We train each model for a maximum of 300 epochs and apply an early-stopping strategy with a patience of 25 epochs. For early stopping we monitor the MSE loss on the validation set. For the remaining hyperparameters, we run a grid search. We explore learning rate values (0.0001, 0.001, 0.01, 0.05), L2 regularization weights (0.0, 0.0001, 0.001, 0.01), and different hidden layer ($h_{s}$) dimensions (50, 100, 200, 300). In addition, we try activating and deactivating batch normalization in each layer for every hyperparameter configuration. ### Evaluation of Representation Models ::: Results ::: The unsupervised scenario. Table TABREF26 reports the results using the item representations directly. We report results over train and dev partitions for completeness, but note that none of them was used to tune the models. As can be seen, multimodal representations consistently outperform their text-only counterparts. This confirms that, overall, visual information is helpful in the semantic textual similarity task and that image and sentence representations are complementary. For example, the bert model improves by more than 13 points when visual information provided by the resnet is concatenated. glove shows a similar or even larger improvement, with similar trends for use and vse++(text). Although vse++(img) shows better performance than resnet when applying them alone, further experimentation showed lower complementarity when combined with the textual representation (e.g. $0.807$ $\rho $ on test when combining the textual and visual modalities of vse++). This is expected, as vse++(img) is pre-trained along with the textual part of the vse++ model on the same task. We do not show the combinations with vse++(img) due to the lack of space.
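The unsupervised multimodal baseline described above is straightforward to reproduce: L2-normalize each modality, concatenate, and score each pair with the cosine. The sketch below is our own illustration, assuming precomputed sentence and image vectors; the helper names are not from the paper.

```python
import numpy as np
from scipy.stats import pearsonr

def l2_normalize(v, eps=1e-12):
    """Scale a vector to unit L2 norm."""
    return v / (np.linalg.norm(v) + eps)

def multimodal_concat(text_vec, image_vec):
    """The 'concat' combination: L2-normalize each modality, then concatenate."""
    return np.concatenate([l2_normalize(text_vec), l2_normalize(image_vec)])

def cosine(a, b):
    return float(np.dot(l2_normalize(a), l2_normalize(b)))

def unsupervised_pearson(pairs, gold):
    """pairs: list of ((text_L, img_L), (text_R, img_R)) vectors; gold: similarity scores."""
    preds = [cosine(multimodal_concat(*left), multimodal_concat(*right))
             for left, right in pairs]
    rho, _ = pearsonr(preds, gold)
    return rho
```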
Interestingly, results show that images alone are informative enough to predict caption similarity ($0.627$ $\rho $ in test). Actually, in this experimental setting resnet is on par with bert, which is the best purely unsupervised text-only model. Surprisingly, gpt-2 representations are not useful for text similarity tasks. This might be because language models tend to forget past context as they focus on predicting the next token BIBREF39. Due to the low results of gpt-2 we decided not to combine it with resnet. ### Evaluation of Representation Models ::: Results ::: The supervised scenario. Table TABREF29 shows a similar pattern to that in the unsupervised setting. Overall, models that use a conjunction of multimodal features significantly outperform unimodal models, and this confirms, in a more competitive scenario, that adding visual information makes the STS task easier to learn. The gain of multimodal models is considerable compared to the text-only models. The most significant gain is obtained when glove features are combined with resnet: the model improves by more than $15.0$ points. In this case, the improvement over bert is smaller, but still considerable, at more than $4.0$ points. In the same vein as in the unsupervised scenario, features obtained with a resnet can be as competitive as some text-based models (e.g. BERT). gpt-2, as in the unsupervised scenario, does not produce useful representations for semantic similarity tasks. Surprisingly, the regression model with gpt-2 features is not able to learn anything in the training set. As we did in the previous scenario, we do not keep combining gpt-2 with visual features. The multimodal versions of vse++ and use are the best models among the supervised approaches. The textual versions of use and vse++ alone obtain very competitive results and outperform some of the multimodal models (the concatenated versions of glove and bert with resnet). Results might indicate that text-only models with sufficient training data can be on par with multimodal models; still, when there is data scarcity, multimodal models can perform better as they have more information about the same data point. Comparison between projected and concatenated models shows that projected models attain slightly better results in two cases, but the best overall results are obtained when concatenating vse++(text) with resnet. Although concatenation proves to be a strong baseline, we expect that more sophisticated combination methods like grounding BIBREF40 will obtain larger gains in the future. ### Discussion ::: Contribution of the Visual Content Table TABREF31 summarizes the contribution of images to text representations on the test partition. The contribution is consistent across all text-based representations. We measure the absolute difference (Diff) and the error reduction (E.R) of each textual representation with respect to its multimodal counterpart. For the comparison we chose the best text model for each representation. As expected, we obtain the largest improvement ($22-26\%$ E.R) when text-based unsupervised models are combined with image representations. Note that unsupervised models are not learning anything about the specific task, so the more information in the representation, the better. In the case of use and vse++ the improvement is significant but not as large as for the purely unsupervised models.
The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned on a text-only inference task like USE. Improvement is consistent for the supervised models. Contrary to the unsupervised setting, these models are designed to learn about the task, so there is usually less room for improvement. Still, glove+resnet shows an error reduction of $12.9\%$ in the test set. Finally, use and vse++ show smaller improvements when we add visual information into the model. Figure FIGREF32 displays some examples where visual information contributes to predicting similarity values more accurately. The top two examples show cases where related descriptions are lexicalized in different ways, so a text-only model (glove) predicts low similarity between the captions. Instead, the multimodal representation glove+resnet does have access to the image and can predict the similarity value of the two captions more accurately. The examples at the bottom show the opposite case, where similar sets of words are used to describe very different situations. The text-based model overestimates the similarity of the captions, while the multimodal model corrects the output by looking at the differences between the images. On the contrary, Figure FIGREF33 shows that images can also be misleading, and that the task is not as trivial as combining global representations of the image. In this case, related but different captions are supported by very similar images, and as a consequence, the multimodal model overestimates their similarity, while the text-only model focuses on the most discriminating piece of information in the text. ### Discussion ::: The effect of hyperparameters Neural models are sensitive to hyperparameters, and we might think that results in the supervised scenario are due to hyperparameter optimization. Figure FIGREF35 displays the variability of $\rho $ on the development set across all hyperparameter settings. Due to space constraints we show text-only and multimodal concatenated models. Models are ordered by mean performance. As we can see, combined models show better mean performance, and all models except Glove exhibit tight variability. ### Conclusions and Future Work The long-term goal of our research is to devise multimodal representation techniques that improve current inference capabilities. We have presented a novel task, Visual Semantic Textual Similarity (vSTS), where the inference capabilities of visual, textual, and multimodal representations can be tested directly. The dataset has been manually annotated by crowd workers with high inter-annotator correlation ($\rho = 0.89$). We tested several well-known textual and visual representations, which we combined using concatenation and projection. Our results show, for the first time, that the addition of image representations allows better inference. The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned on a text-only inference task like USE. The improvement when using image representations is observed both when computing the similarity directly from multimodal representations, and also when training Siamese networks. In the future, we would like to ground the text representations to image regions BIBREF40, which could avoid misleading predictions due to the global representation of the image.
Finally, we would like to extend the dataset with more examples, as we acknowledge that the training set is too small to train larger models. This research was partially funded by the Basque Government excellence research group (IT1343-19), the NVIDIA GPU grant program, the Spanish MINECO (DeepReading RTI2018-096846-B-C21 (MCIU/AEI/FEDER, UE)) and project BigKnowledge (Ayudas Fundación BBVA a equipos de investigación científica 2018). Ander enjoys a PhD grant from the Basque Government. Figure 1. A sample with two items, showing the influence of images when judging the similarity between two captions. While the similarity for the captions alone was annotated as low (1.8), when having access to the images, the annotators assigned a much higher similarity (4). The similarity score ranges between 0 and 5. Table 1. Similarity scores with the definition of each ordinal value. Definitions are the same as used in STS datasets [6]. Figure 2. Histograms of the similarity distribution in the 2639 sample, according to the automatic text-only system (left and middle plots), and the distribution of the similarity of each sampling strategy (rnd stands for random image sampling and sim stands for image similarity driven sampling). Table 2. Overall item similarity and disagreement of the AMT annotations. Figure 3. Similarity distribution of the visual STS dataset. Plots show three views of the data. Histogram of the similarity distribution of ground-truth values (left plot), sorted pairs according to their similarity (middle) and boxplot of the similarity values (right). Table 3. Summary of the text and image representation models used. Table 4. The unsupervised scenario: train, validation and test results of the unsupervised models. Table 5. The supervised scenario: train, validation and test results of the supervised models. Table 6. Contribution of images over text representations on test. Figure 5. Example of misleading images. The high similarity of images makes the prediction of the multimodal model inaccurate, while the text-only model focuses on the most discriminating piece of information. Note that gs refers to the gold standard similarity value, and text and mm refer to text-only and multimodal models, respectively. Figure 4. Examples of the contribution of the visual information in the task. gs for gold standard similarity value, text and mm for text-only and multimodal models, respectively. On top, examples where related descriptions are lexicalized differently and images help. On the bottom, cases where similar words are used to describe different situations. Figure 6. Variability of the supervised models regarding hyperparameter selection on development. The multimodal models use concatenation. Best viewed in colour.
1338 pairs for training
Arvid 6 and Tendal 13 can perform all of the following abilities EXCEPT: A. hypnosis B. dematerialization C. time travel D. mind-reading
Transcriber's Note: This etext was produced from Space Science Fiction May 1952. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. THE ULTROOM ERROR by JERRY SOHL Smith admitted he had made an error involving a few murders—and a few thousand years. He was entitled to a sense of humor, though, even in the Ultroom! HB73782. Ultroom error. Tendal 13. Arvid 6. Kanad transfer out of 1609 complete, intact, but too near limit of 1,000 days. Next Kanad transfer ready. 1951. Reginald, son of Mr. and Mrs. Martin Laughton, 3495 Orland Drive, Marionville, Illinois, U. S. A. Arrive his 378th day. TB73782. Nancy Laughton sat on the blanket she had spread on the lawn in her front yard, knitting a pair of booties for the PTA bazaar. Occasionally she glanced at her son in the play pen, who was getting his daily dose of sunshine. He was gurgling happily, examining a ball, a cheese grater and a linen baby book, all with perfunctory interest. When she looked up again she noticed a man walking by—except he turned up the walk and crossed the lawn to her. He was a little taller than her husband, had piercing blue eyes and a rather amused set to his lips. "Hello, Nancy," he said. "Hello, Joe," she answered. It was her brother who lived in Kankakee. "I'm going to take the baby for a while," he said. "All right, Joe." He reached into the pen, picked up the baby. As he did so the baby's knees hit the side of the play pen and young Laughton let out a scream—half from hurt and half from sudden lack of confidence in his new handler. But this did not deter Joe. He started off with the child. Around the corner and after the man came a snarling mongrel dog, eyes bright, teeth glinting in the sunlight. The man did not turn as the dog threw himself at him, burying his teeth in his leg. Surprised, the man dropped the screaming child on the lawn and turned to the dog. Joe seemed off balance and he backed up confusedly in the face of the snapping jaws. Then he suddenly turned and walked away, the dog at his heels. "I tell you, the man said he was my brother and he made me think he was," Nancy told her husband for the tenth time. "I don't even have a brother." Martin Laughton sighed. "I can't understand why you believed him. It's just—just plain nuts, Nancy!" "Don't you think I know it?" Nancy said tearfully. "I feel like I'm going crazy. I can't say I dreamt it because there was Reggie with his bleeding knees, squalling for all he was worth on the grass—Oh, I don't even want to think about it." "We haven't lost Reggie, Nancy, remember that. Now why don't you try to get some rest?" "You—you don't believe me at all, do you, Martin?" When her husband did not answer, her head sank to her arms on the table and she sobbed. "Nancy, for heaven's sake, of course I believe you. I'm trying to think it out, that's all. We should have called the police." Nancy shook her head in her arms. "They'd—never—believe me either," she moaned. "I'd better go and make sure Reggie's all right." Martin got up out of his chair and went to the stairs. "I'm going with you," Nancy said, hurriedly rising and coming over to him. "We'll go up and look at him together." They found Reggie peacefully asleep in his crib in his room upstairs. They checked the windows and tucked in the blankets. They paused in the room for a moment and then Martin stole his arm around his wife and led her to the door. "As I've said, sergeant, this fellow hypnotized my wife. He made her think he was her brother. 
She doesn't even have a brother. Then he tried to get away with the baby." Martin leaned down and patted the dog. "It was Tiger here who scared him off." The police sergeant looked at the father, at Nancy and then at the dog. He scribbled notes in his book. "Are you a rich man, Mr. Laughton?" he asked. "Not at all. The bank still owns most of the house. I have a few hundred dollars, that's all." "What do you do?" "Office work, mostly. I'm a junior executive in an insurance company." "Any enemies?" "No ... Oh, I suppose I have a few people I don't get along with, like anybody else. Nobody who'd do anything like this, though." The sergeant flipped his notebook closed. "You'd better keep your dog inside and around the kid as much as possible. Keep your doors and windows locked. I'll see that the prowl car keeps an eye on the house. Call us if anything seems unusual or out of the way." Nancy had taken a sedative and was asleep by the time Martin finished cleaning the .30-.30 rifle he used for deer hunting. He put it by the stairs, ready for use, fully loaded, leaning it against the wall next to the telephone stand. The front door bell rang. He answered it. It was Dr. Stuart and another man. "I came as soon as I could, Martin," the young doctor said, stepping inside with the other man. "This is my new assistant, Dr. Tompkins." Martin and Tompkins shook hands. "The baby—?" Dr. Stuart asked. "Upstairs," Martin said. "You'd better get him, Dr. Tompkins, if we're to take him to the hospital. I'll stay here with Mr. Laughton. How've you been, Martin?" "Fine." "How's everything at the office?" "Fine." "And your wife?" "She's fine, too." "Glad to hear it, Martin. Mighty glad. Say, by the way, there's that bill you owe me. I think it's $32, isn't that right?" "Yes, I'd almost forgotten about it." "Why don't you be a good fellow and write a check for it? It's been over a year, you know." "That's right. I'll get right at it." Martin went over to his desk, opened it and started looking for his checkbook. Dr. Stuart stood by him, making idle comment until Dr. Tompkins came down the stairs with the sleeping baby cuddled against his shoulder. "Never mind the check, now, Martin. I see we're ready to go." He went over to his assistant and took the baby. Together they walked out the front door. "Good-bye," Martin said, going to the door. Then he was nearly bowled over by the discharge of the .30-.30. Dr. Stuart crumpled to the ground, the baby falling to the lawn. Dr. Tompkins whirled and there was a second shot. Dr. Tompkins pitched forward on his face. The figure of a woman ran from the house, retrieved the now squalling infant and ran back into the house. Once inside, Nancy slammed the door, gave the baby to the stunned Martin and headed for the telephone. "One of them was the same man!" she cried. Martin gasped, sinking into a chair with the baby. "I believed them," he said slowly and uncomprehendingly. "They made me believe them!" "Those bodies," the sergeant said. "Would you mind pointing them out to me, please?" "Aren't they—aren't they on the walk?" Mrs. Laughton asked. "There is nothing on the walk, Mrs. Laughton." "But there must be! I tell you I shot these men who posed as doctors. One of them was the same man who tried to take the baby this afternoon. They hypnotized my husband—" "Yes, I know, Mrs. Laughton. We've been through that." The sergeant went to the door and opened it. "Say, Homer, take another look around the walk and the bushes. There's supposed to be two of them. Shot with a .30-.30." 
He turned and picked up the gun and examined it again. "Ever shoot a gun before, Mrs. Laughton?" "Many times. Martin and I used to go hunting together before we had Reggie." The sergeant nodded. "You were taking an awful chance, shooting at a guy carrying your baby, don't you think?" "I shot him in the legs. The other—the other turned and I shot him in the chest. I could even see his eyes when he turned around. If I hadn't pulled the trigger then ... I don't want to remember it." The patrolman pushed the door open. "There's no bodies out here but there's some blood. Quite a lot of blood. A little to one side of the walk." The policemen went out. "Thank God you woke up, Nancy," Martin said. "I'd have let them have the baby." He reached over and smoothed the sleeping Reggie's hair. Nancy, who was rocking the boy, narrowed her eyes. "I wonder why they want our baby? He's just like any other baby. We don't have any money. We couldn't pay a ransom." "Reggie's pretty cute, though," Martin said. "You will have to admit that." Nancy smiled. Then she suddenly stopped rocking. "Martin!" He sat up quickly. "Where's Tiger?" Together they rose and walked around the room. They found him in a corner, eyes open, tongue protruding. He was dead. If we keep Reggie in the house much longer he'll turn out to be a hermit," Martin said at breakfast a month later. "He needs fresh air and sunshine." "I'm not going to sit on the lawn alone with him, Martin. I just can't, that's all. I'd be able to think of nothing but that day." "Still thinking about it? I think we'd have heard from them again if they were coming back. They probably got somebody else's baby by this time." Martin finished his coffee and rose to kiss her good-bye. "But for safety's sake I guess you'd better keep that gun handy." The morning turned into a brilliant, sunshiny day. Puffs of clouds moved slowly across the summer sky and a warm breeze rustled the trees. It would be a crime to keep Reggie inside on a day like this, Nancy thought. So she called Mrs. MacDougal, the next door neighbor. Mrs. MacDougal was familiar with what had happened to the Laughtons and she agreed to keep an eye on Nancy and Reggie and to call the police at the first sign of trouble. With a fearful but determined heart Nancy moved the play pen and set it up in the front yard. She spread a blanket for herself and put Reggie in the pen. Her heart pounded all the while and she watched the street for any strangers, ready to flee inside if need be. Reggie just gurgled with delight at the change in environment. This peaceful scene was disturbed by a speeding car in which two men were riding. The car roared up the street, swerved toward the parkway, tires screaming, bounced over the curb and sidewalk, straight toward the child and mother. Reggie, attracted by the sudden noise, looked up to see the approaching vehicle. His mother stood up, set her palms against her cheeks and shrieked. The car came on, crunched over the play pen, killing the child. The mother was hit and instantly killed, force of the blow snapping her spine and tossing her against the house. The car plunged on into a tree, hitting it a terrible blow, crumbling the car's forward end so it looked like an accordion. The men were thrown from the machine. "We'll never be able to prosecute in this case," the states attorney said. "At least not on a drunken driving basis." "I can't get over it," the chief of police said. "I've got at least six men who will swear the man was drunk. 
He staggered, reeled and gave the usual drunk talk. He reeked of whiskey." The prosecutor handed the report over the desk. "Here's the analysis. Not a trace of alcohol. He couldn't have even had a smell of near beer. Here's another report. This is his physical exam made not long afterwards. The man was in perfect health. Only variations are he had a scar on his leg where something, probably a dog, bit him once. And then a scar on his chest. It looked like an old gunshot wound, they said. Must have happened years ago." "That's odd. The man who accosted Mrs. Laughton in the afternoon was bitten by their dog. Later that night she said she shot the same man in the chest. Since the scars are healed it obviously couldn't be the same man. But there's a real coincidence for you. And speaking of the dogbite, the Laughton dog died that night. His menu evidently didn't agree with him. Never did figure what killed him, actually." "Any record of treatment on the man she shot?" "The men . You'll remember, there were two. No, we never found a trace of either. No doctor ever made a report of a gunshot wound that night. No hospital had a case either—at least not within several hundred miles—that night or several nights afterwards. Ever been shot with .30-.30?" The state attorney shook his head. "I wouldn't be here if I had." "I'll say you wouldn't. The pair must have crawled away to die God knows where." "Getting back to the man who ran over the child and killed Mrs. Laughton. Why did he pretend to be drunk?" It was the chief's turn to shake his head. "Your guess is as good as mine. There are a lot of angles to this case none of us understand. It looks deliberate, but where's the motive?" "What does the man have to say?" "I was afraid you'd get to him," the chief said, his neck reddening. "It's all been rather embarrassing to the department." He coughed self-consciously. "He's proved a strange one, all right. He says his name is John Smith and he's got cards to prove it, too—for example, a social security card. It looks authentic, yet there's no such number on file in Washington, so we've discovered. We've had him in jail for a week and we've all taken turns questioning him. He laughs and admits his guilt—in fact, he seems amused by most everything. Sometimes all alone in his cell he'll start laughing for no apparent reason. It gives you the creeps." The states attorney leaned back in his chair. "Maybe it's a case for an alienist." "One jump ahead of you. Dr. Stone thinks he's normal, but won't put down any I.Q. Actually, he can't figure him out himself. Smith seems to take delight in answering questions—sort of anticipates them and has the answer ready before you're half through asking." "Well, if Dr. Stone says he's normal, that's enough for me." The prosecutor was silent for a moment. Then, "How about the husband?" "Laughton? We're afraid to let him see him. All broken up. No telling what kind of a rumpus he'd start—especially if Smith started his funny business." "Guess you're right. Well, Mr. Smith won't think it's so funny when we hang criminal negligence or manslaughter on him. By the way, you've checked possible family connections?" "Nobody ever saw John Smith before. Even at the address on his driver's license. And there's no duplicate of that in Springfield, in case you're interested." The man who had laughingly told police his name was John Smith lay on his cot in the county jail, his eyes closed, his arms folded across his chest. This gave him the appearance of being alert despite reclining. 
Even as he lay, his mouth held a hint of a smile. Arvid 6—for John Smith was Arvid 6—had lain in that position for more than four hours, when suddenly he snapped his eyes open and appeared to be listening. For a moment a look of concern crossed his face and he swung his legs to the floor and sat there expectantly. Arvid 6 knew Tendal 13 had materialized and was somewhere in the building. Eventually there were some sounds from beyond the steel cell and doorway. There was a clang when the outer doorway was opened and Arvid 6 rose from his cot. "Your lawyer's here to see you," the jailer said, indicating the man with the brief case. "Ring the buzzer when you're through." The jailer let the man in, locked the cell door and walked away. The man threw the brief case on the jail cot and stood glaring. "Your damned foolishness has gone far enough. I'm sick and tired of it," he declared. "If you carry on any more we'll never get back to the Ultroom!" "I'm sorry, Tendal," the man on the cot said. "I didn't think—" "You're absolutely right. You didn't think. Crashing that car into that tree and killing that woman—that was the last straw. You don't even deserve to get back to our era. You ought to be made to rot here." "I'm really sorry about that," Arvid 6 said. You know the instructions. Just because you work in the Ultroom don't get to thinking human life doesn't have any value. We wouldn't be here if it hadn't. But to unnecessarily kill—" The older man shook his head. "You could have killed yourself as well and we'd never get the job done. As it is, you almost totally obliterated me." Tendal 13 paced the length of the cell and back again, gesturing as he talked. "It was only with the greatest effort I pulled myself back together again. I doubt that you could have done it. And then all the while you've been sitting here, probably enjoying yourself with your special brand of humor I have grown to despise." "You didn't have to come along at all, you know," Arvid 6 said. "How well I know! How sorry I am that I ever did! It was only because I was sorry for you, because someone older and more experienced than you was needed. I volunteered. Imagine that! I volunteered! Tendal 13 reaches the height of stupidity and volunteers to help Arvid 6 go back 6,000 years to bring Kanad back, to correct a mistake Arvid 6 made!" He snorted. "I still can't believe I was ever that stupid. I only prove it when I pinch myself and here I am. "Oh, you've been a joy to be with! First it was that hunt in ancient Mycenae when you let the lion escape the hunters' quaint spears and we were partly eaten by the lion in the bargain, although you dazzled the hunters, deflecting their spears. And then your zest for drink when we were with Octavian in Alexandria that led to everybody's amusement but ours when we were ambushed by Anthony's men. And worst of all, that English barmaid you became engrossed with at our last stop in 1609, when her husband mistook me for you and you let him take me apart piece by piece—" "All right, all right," Arvid 6 said. "I'll admit I've made some mistakes. You're just not adventurous, that's all." "Shut up! For once you're going to listen to me. Our instructions specifically stated we were to have as little as possible to do with these people. But at every turn you've got us more and more enmeshed with them. If that's adventure, you can have it." Tendal 13 sat down wearily and sank his head in his hands. "It was you who conceived the idea of taking Reggie right out of his play pen. 
'Watch me take that child right out from under its mother's nose' were your exact words. And before I could stop you, you did. Only you forgot an important factor in the equation—the dog, Tiger. And you nursed a dogbite most of the afternoon before it healed. And then you took your spite out on the poor thing by suggesting suffocation to it that night. "And speaking of that night, you remember we agreed I was to do the talking. But no, you pulled a switch and captured Martin Laughton's attention. 'I came as soon as I could, Martin,' you said. And suddenly I played a very minor role. 'This is my new assistant, Dr. Tompkins,' you said. And then what happened? I get shot in the legs and you get a hole in your back. We were both nearly obliterated that time and we didn't even come close to getting the child. "Still you wanted to run the whole show. 'I'm younger than you,' you said. 'I'll take the wheel.' And the next thing I know I'm floating in space halfway to nowhere with two broken legs, a spinal injury, concussion and some of the finest bruises you ever saw." These twentieth century machines aren't what they ought to be," Arvid 6 said. "You never run out of excuses, do you, Arvid? Remember what you said in the Ultroom when you pushed the lever clear over and transferred Kanad back 6,000 years? 'My hand slipped.' As simple as that. 'My hand slipped.' It was so simple everyone believed you. You were given no real punishment. In a way it was a reward—at least to you—getting to go back and rescue the life germ of Kanad out of each era he'd be born in." Tendal 13 turned and looked steadily and directly at Arvid 6. "Do you know what I think? I think you deliberately pushed the lever over as far as it would go just to see what would happen . That's how simple I think it was." Arvid 6 flushed, turned away and looked at the floor. "What crazy things have you been doing since I've been gone?" Tendal 13 asked. Arvid 6 sighed. "After what you just said I guess it wouldn't amuse you, although it has me. They got to me right after the accident before I had a chance to collect my wits, dematerialize or anything—you said we shouldn't dematerialize in front of anybody." "That's right." "Well, I didn't know what to do. I could see they thought I was drunk, so I was. But they had a blood sample before I could manufacture any alcohol in my blood, although I implanted a memory in them that I reeked of it." He laughed. "I fancy they're thoroughly confused." "And you're thoroughly amused, no doubt. Have they questioned you?" "At great length. They had a psychiatrist in to see me. He was a queer fellow with the most stupid set of questions and tests I ever saw." "And you amused yourself with him." "I suppose you'd think so." "Who do you tell them you are?" "John Smith. A rather prevalent name here, I understand. I manufactured a pasteboard called a social security card and a driver's license—" "Never mind. It's easy to see you've been your own inimitable self. Believe me, if I ever get back to the Ultroom I hope I never see you again. And I hope I'll never leave there again though I'm rejuvenated through a million years." "Was Kanad's life germ transferred all right this time?" Tendal 13 shook his head. "I haven't heard. The transfers are getting more difficult all the time. In 1609, you'll remember, it was a case of pneumonia for the two-year-old. A simple procedure. It wouldn't work here. Medicine's too far along." He produced a notebook. "The last jump was 342 years, a little more than average. 
The next ought to be around 2250. Things will be more difficult than ever there, probably." "Do you think Kanad will be angry about all this?" "How would you like to have to go through all those birth processes, to have your life germ knocked from one era to the next?" "Frankly, I didn't think he'd go back so far." "If it had been anybody but Kanad nobody'd ever have thought of going back after it. The life germ of the head of the whole galactic system who came to the Ultroom to be transplanted to a younger body—and then sending him back beyond his original birth date—" Tendal 13 got up and commenced his pacing again. "Oh, I suppose Kanad's partly to blame, wanting rejuvenating at only 300 years. Some have waited a thousand or more or until their bones are like paper." "I just wonder how angry Kanad will be," Arvid muttered. HB92167. Ultroom Error. Tendal 13. Arvid 6. Kanad transfer out of 1951 complete. Next Kanad transfer ready. 2267. Phullam 19, son of Orla 39 and Rhoda R, 22H Level M, Hemisphere B, Quadrant 3, Sector I. Arrive his 329th Day. TB92167 Arvid 6 rose from the cot and the two men faced each other. "Before we leave, Arvid," Tendal 13 started to say. "I know, I know. You want me to let you handle everything." "Exactly. Is that too much to ask after all you've done?" "I guess I have made mistakes. From now on you be the boss. I'll do whatever you say." "I hope I can count on that." Tendal 13 rang the jail buzzer. The jailer unlocked the cell door. "You remember the chief said it's all right to take him with me, Matthews," Tendal 13 told the jailer. "Yes, I remember," the jailer said mechanically, letting them both out of the cell. They walked together down the jail corridor. When they came to another barred door the jailer fumbled with the keys and clumsily tried several with no luck. Arvid 6, an amused set to his mouth and devilment in his eyes, watched the jailer's expression as he walked through the bars of the door. He laughed as he saw the jailer's eyes bulge. "Arvid!" Tendal 13 walked briskly through the door, snatched Arvid 6 by the shoulders and shook him. The jailer watched stupified as the two men vanished in the middle of a violent argument.
D. mind-reading
In which certain heads was attention disabled in experiments?
### Introduction Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN). One of the most popular Transformer-based models is BERT, which learns text representations using a bi-directional Transformer encoder pre-trained on the language modeling task BIBREF2. BERT-based architectures have produced new state-of-the-art performance on a range of NLP tasks of different nature, domain, and complexity, including question answering, sequence tagging, sentiment analysis, and inference. State-of-the-art performance is usually obtained by fine-tuning the pre-trained model on the specific task. In particular, BERT-based models are currently dominating the leaderboards for SQuAD BIBREF3 and GLUE benchmarks BIBREF4. However, the exact mechanisms that contribute to the BERT's outstanding performance still remain unclear. We address this problem through selecting a set of linguistic features of interest and conducting a series of experiments that aim to provide insights about how well these features are captured by BERT. This paper makes the following contributions: We propose the methodology and offer the first detailed analysis of BERT's capacity to capture different kinds of linguistic information by encoding it in its self-attention weights. We present the evidence of BERT's overparametrization and suggest a counter-intuitive yet frustratingly simple way of improving its performance, showing absolute gains of up to 3.2%. ### Related work There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect one in a masked language modeling task, suggesting some ability to model subject-verb agreement. BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers. BIBREF8 investigated the transferability of contextualized word representations to a number of probing tasks requiring linguistic knowledge. Their findings suggest that (a) the middle layers of Transformer-based architectures are the most transferable to other tasks, and (b) higher layers of Transformers are not as task specific as the ones of RNNs. BIBREF9 argued that models using self-attention outperform CNN- and RNN-based models on a word sense disambiguation task due to their ability to extract semantic features from text. Our work contributes to the above discussion, but rather than examining representations extracted from different layers, we focus on the understanding of the self-attention mechanism itself, since it is the key feature of Transformer-based models. Another research direction that is relevant to our work is neural network pruning. BIBREF10 showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. BIBREF5 observed that the smaller version of BERT achieves better scores on a number of syntax-testing experiments than the larger one. 
BIBREF11 questioned the necessity of computation-heavy neural networks, proving that a simple yet carefully tuned BiLSTM without attention achieves the best or at least competitive results compared to more complex architectures on the document classification task. BIBREF12 presented more evidence of unnecessary complexity of the self-attention mechanism, and proposed a more lightweight and scalable dynamic convolution-based architecture that outperforms the self-attention baseline. These studies suggest a potential direction for future research, and are in good accordance with our observations. ### Methodology We pose the following research questions: What are the common attention patterns, how do they change during fine-tuning, and how does that impact the performance on a given task? (Sec. SECREF17, SECREF30) What linguistic knowledge is encoded in self-attention weights of the fine-tuned models and what portion of it comes from the pre-trained BERT? (Sec. SECREF25, SECREF34, SECREF36) How different are the self-attention patterns of different heads, and how important are they for a given task? (Sec. SECREF39) The answers to these questions come from a series of experiments with the basic pre-trained or the fine-tuned BERT models, as will be discussed below. All the experiments with the pre-trained BERT were conducted using the model provided with the PyTorch implementation of BERT (bert-base-uncased, 12-layer, 768-hidden, 12-heads, 110M parameters). We chose this smaller version of BERT because it shows competitive, if not better, performance while having fewer layers and heads, which makes it more interpretable. We use the following subset of GLUE tasks BIBREF4 for fine-tuning: MRPC: the Microsoft Research Paraphrase Corpus BIBREF13 STS-B: the Semantic Textual Similarity Benchmark BIBREF14 SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15 QQP: the Quora Question Pairs dataset RTE: the Recognizing Textual Entailment datasets QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3 MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16 Please refer to the original GLUE paper for details on the QQP and RTE datasets BIBREF4. We excluded two tasks: CoLa and the Winograd Schema Challenge. The latter is excluded due to the small size of the dataset. As for CoLa (the task of predicting linguistic acceptability judgments), GLUE authors report that the human performance is only 66.4, which is explained by the problems with the underlying methodology BIBREF17. Note also that CoLa is not included in the upcoming version of GLUE BIBREF18. All fine-tuning experiments follow the parameters reported in the original study (a batch size of 32 and 3 epochs, see devlin2018bert). In all these experiments, for a given input, we extract self-attention weights for each head in every layer. This results in a 2D float array of shape $L\times L$, where $L$ is the length of an input sequence. We will refer to such arrays as self-attention maps. Analysis of individual self-attention maps allows us to determine which target tokens are attended to the most as the input is processed token by token. We use these experiments to analyze how BERT processes different kinds of linguistic information, including the processing of different parts of speech (nouns, pronouns, and verbs), syntactic roles (objects, subjects), semantic relations, and negation tokens. 
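To make the extraction step above concrete, the following minimal sketch (ours, not the authors' code) pulls per-layer, per-head self-attention maps from a pre-trained bert-base-uncased model; it assumes the current HuggingFace transformers API, which may differ slightly from the PyTorch implementation used in the paper.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def self_attention_maps(sentence: str) -> torch.Tensor:
    """Return self-attention maps with shape (num_layers, num_heads, L, L)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.attentions is a tuple with one (1, num_heads, L, L) tensor per layer
    return torch.stack(outputs.attentions).squeeze(1)

maps = self_attention_maps("He was becoming agitated.")
print(maps.shape)  # (12, 12, L, L) for bert-base: 12 layers, 12 heads
```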
### Experiments In this section, we present the experiments conducted to address the above research questions. ### Experiments ::: BERT's self-attention patterns Manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of self-attention maps types that are repeatedly encoded across different heads. Consistently with previous observations, we identified five frequently occurring patterns, examples of which are shown in fig:atttypes: Vertical: mainly corresponds to attention to special BERT tokens [CLS] and [SEP]; Diagonal: formed by the attention to the previous/following tokens; Vertical+Diagonal: a mix of the previous two types, Block: intra-sentence attention for the tasks with two distinct sentences (such as, for example, RTE or MRPC), Heterogeneous: highly variable depending on the specific input and cannot be characterized by a distinct structure. Whereas the attention to the special tokens is important for cross-sentence reasoning, and the attention to the previous/following token comes from language model pre-training, we hypothesize that the last of the listed types is more likely to capture interpretable linguistic features, necessary for language understanding. To get a rough estimate of the percentage of attention heads that may capture linguistically interpretable information, we manually annotated around 400 sample self-attention maps as belonging to one of the five classes. The self-attention maps were obtained by feeding random input examples from selected tasks into the corresponding fine-tuned BERT model. This produced a somewhat unbalanced dataset, in which the “Vertical” class accounted for 30% of all samples. We then trained a convolutional neural network with 8 convolutional layers and ReLU activation functions to classify input maps into one of these classes. This model achieved the F1 score of 0.86 on the annotated dataset. We used this classifier to estimate the proportion of different self-attention patterns for the target GLUE tasks using up to 1000 examples (where available) from each validation set. ### Experiments ::: BERT's self-attention patterns ::: Results fig:attentionbydataset shows that the self-attention map types described above are consistently repeated across different heads and tasks. While a large portion of encoded information corresponds to attention to the previous/following token, to the special tokens, or a mixture of the two (the first three classes), the estimated upper bound on all heads in the “Heterogeneous” category (i.e. the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task. We would like to emphasize that this only gives the upper bound on the percentage of attention heads that could potentially capture meaningful structural information beyond adjacency and separator tokens. ### Experiments ::: Relation-specific heads in BERT In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns. While a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics. Specifically, we focused on whether BERT captures FrameNet's relations between frame-evoking lexical units (predicates) and core frame elements BIBREF19, and whether the links between them produce higher attention weights in certain specific heads. 
We used pre-trained BERT in these experiments. The data for this experiment comes from FrameNet BIBREF19, a database that contains frame annotations for example sentences for different lexical units. Frame elements correspond to semantic roles for a given frame, for example, “buyer”, “seller”, and “goods” for the “Commercial_transaction” frame evoked by the words “sell” and “spend”, or “topic” and “text” for the “Scrutiny” semantic frame evoked by the verb “address”. fig:framenet shows an example of such annotation. We extracted sample sentences for every lexical unit in the database and identified the corresponding core frame elements. Annotated elements in FrameNet may be rather long, so we considered only the sentences with frame elements of 3 tokens or less. Since each sentence is annotated for only one frame, semantic links from other frames can exist between unmarked elements. We therefore filter out all the sentences longer than 12 tokens, since shorter sentences are less likely to evoke multiple frames. To establish whether BERT attention captures semantic relations that do not simply correspond to the previous/following token, we exclude sentences where the linked objects are less than two tokens apart. This leaves us with 473 annotated sentences. For each of these sentences, we obtain pre-trained BERT's attention weights for each of the 144 heads. For every head, we return the maximum absolute attention weight among those token pairs that correspond to the annotated semantic link contained within a given sentence. We then average the derived scores over all the collected examples. This strategy allows us to identify the heads that prioritize the features correlated with frame-semantic relations within a sentence. ### Experiments ::: Relation-specific heads in BERT ::: Results The heatmap of averaged attention scores over all collected examples (fig:framenetresults) suggests that 2 out of 144 heads tend to attend to the parts of the sentence that FrameNet annotators identified as core elements of the same frame. fig:framenetresults shows an example of this attention pattern for these two heads. Both show high attention weight for “he” while processing “agitated” in the sentence “He was becoming agitated” (the frame “Emotion_directed”). ### Experiments ::: Change in self-attention patterns after fine-tuning Fine-tuning has a huge effect on performance, and this section attempts to find out why. To study how attention per head changes on average for each of the target GLUE tasks, we calculate the cosine similarity between pre-trained and fine-tuned BERT's flattened arrays of attention weights. We average the derived similarities over all the development set examples. To evaluate the contribution of pre-trained BERT to the overall performance on the tasks, we consider two configurations of weights initialization, namely, pre-trained BERT weights and weights randomly sampled from a normal distribution. ### Experiments ::: Change in self-attention patterns after fine-tuning ::: Results fig:cosine shows that for all the tasks except QQP, it is the last two layers that undergo the largest changes compared to the pre-trained BERT model. At the same time, tab:glue-results shows that fine-tuned BERT outperforms pre-trained BERT by a significant margin on all the tasks (with an average of 35.9 points of absolute difference).
This leads us to conclude that the last two layers encode task-specific features that account for the gain in scores, while earlier layers capture more fundamental and low-level information used in fine-tuned models. Randomly initialized BERT consistently produces lower scores than the ones achieved with pre-trained BERT. In fact, for some tasks (STS-B and QNLI), initialization with random weights gives worse performance than that of pre-trained BERT alone without fine-tuning. This suggests that pre-trained BERT does indeed contain linguistic knowledge that is helpful for solving these GLUE tasks. These results are consistent with similar studies, e.g., BIBREF20's results on fine-tuning a convolutional neural network pre-trained on ImageNet or BIBREF21's results on transfer learning for medical natural language inference. ### Experiments ::: Attention to linguistic features In this experiment, we investigate whether fine-tuning BERT for a given task creates self-attention patterns which emphasize specific linguistic features. In this case, certain kinds of tokens may get high attention weights from all the other tokens in the sentence, producing vertical stripes on the corresponding attention maps (fig:atttypes). We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, and negation words, as well as special BERT tokens, across the tasks. For every head, we compute the sum of self-attention weights assigned to the token of interest from each input token. Since the weights depend on the number of tokens in the input sequence, this sum is normalized by sequence length. This allows us to aggregate the weights for this feature across different examples. If there are multiple tokens of the same type (e.g. several nouns or negations), we take the maximum value. We disregard input sentences that do not contain a given feature. For each investigated feature, we calculate this aggregated attention score for each head in every layer and build a map in order to detect the heads potentially responsible for this feature. We then compare the obtained maps to the ones derived using the pre-trained BERT model. This comparison enables us to determine if a particular feature is important for a specific task and whether it contributes to some tasks more than to others. ### Experiments ::: Attention to linguistic features ::: Results Contrary to our initial hypothesis that the vertical attention pattern may be motivated by linguistically meaningful features, we found that it is associated predominantly, if not exclusively, with attention to [CLS] and [SEP] tokens (see Figure FIGREF32). Note that the absolute [SEP] weights for the SST-2 sentiment analysis task are greater than for other tasks, which is explained by the fact that there is only one sentence in the model inputs, i.e. only one [SEP] token instead of two. There is also a clear tendency for earlier layers to pay attention to [CLS] and for later layers to [SEP], and this trend is consistent across all the tasks. We did detect heads that paid increased (compared to the pre-trained BERT) attention to nouns and direct objects of the main predicates (on the MRPC, RTE and QQP tasks), and negation tokens (on the QNLI task), but the attention weights of such tokens were negligible compared to [CLS] and [SEP].
Therefore, we believe that the striped attention maps generally come from BERT pre-training tasks rather than from task-specific linguistic reasoning. ### Experiments ::: Token-to-token attention To complement the experiments in Sec. SECREF34 and SECREF25, in this section, we investigate the attention patterns between tokens in the same sentence, i.e. whether any of the tokens are particularly important while a given token is being processed. We were interested specifically in the verb-subject relation and the noun-pronoun relation. Also, since BERT uses the representation of the [CLS] token in the last layer to make the prediction, we used the features from the experiment in Sec. SECREF34 in order to check if they get higher attention weights while the model is processing the [CLS] token. ### Experiments ::: Token-to-token attention ::: Results Our token-to-token attention experiments for detecting heads that prioritize noun-pronoun and verb-subject links resulted in a set of potential head candidates that coincided with diagonally structured attention maps. We believe that this happened due to the inherent property of English syntax where the dependent elements frequently appear close to each other, so it is difficult to distinguish such relations from the previous/following token attention coming from language model pre-training. Our investigation of attention distribution for the [CLS] token in the output layer suggests that for most tasks, with the exception of STS-B, RTE and QNLI, the [SEP] gets attended the most, as shown in fig:cls. Based on manual inspection, for the mentioned remaining tasks, the greatest attention weights correspond to the punctuation tokens, which are in a sense similar to [SEP]. ### Experiments ::: Disabling self-attention heads Since there does seem to be a certain degree of specialization for different heads, we investigated the effects of disabling different heads in BERT on task performance. Since BERT relies heavily on the learned attention weights, we define disabling a head as modifying the attention values of a head to be a constant $a = \frac{1}{L}$ for every token in the input sentence, where $L$ is the length of the sentence. Thus, every token receives the same attention, effectively disabling the learned attention patterns while maintaining the information flow of the original model. Note that by using this framework, we can disable an arbitrary number of heads, ranging from a single head per model to a whole layer or multiple layers. ### Experiments ::: Disabling self-attention heads ::: Results Our experiments suggest that certain heads have a detrimental effect on the overall performance of BERT, and this trend holds for all the chosen tasks. Unexpectedly, disabling some heads leads not to a drop in accuracy, as one would expect, but to an increase in performance. This effect differs across tasks and datasets: while disabling some heads improves the results, disabling others hurts them. However, it is important to note that across all tasks and datasets, disabling some heads leads to an increase in performance. The gain from disabling a single head is different for different tasks, ranging from the minimum absolute gain of 0.1% for STS-B, to the maximum of 1.2% for MRPC (see fig:disableheadsall). In fact, for some tasks, such as MRPC and RTE, disabling a random head gives, on average, an increase in performance.
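As a concrete illustration of the head-disabling procedure described above (uniform attention $a = 1/L$ per token), the function below is our own minimal sketch, not the authors' code: it overwrites the attention distributions of selected heads with the uniform distribution, assuming an attention-probability tensor of shape (batch, num_heads, L, L) as produced inside a BERT layer. In practice it would be applied by patching or hooking the attention module.

```python
import torch

def disable_heads(attn_probs: torch.Tensor, heads_to_disable) -> torch.Tensor:
    """Replace the attention distribution of the given heads with a uniform
    1/L distribution, leaving all other heads untouched.

    attn_probs: attention probabilities of one layer, shape (batch, num_heads, L, L).
    heads_to_disable: iterable of head indices to disable.
    """
    _, _, seq_len, _ = attn_probs.shape
    uniform = torch.full_like(attn_probs[:, 0], 1.0 / seq_len)  # (batch, L, L)
    out = attn_probs.clone()
    for h in heads_to_disable:
        out[:, h] = uniform
    return out
```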
Furthermore, disabling a whole layer, that is, all 12 heads in a given layer, also improves the results. fig:disablelayers shows the resulting model performance on the target GLUE tasks when different layers are disabled. Notably, disabling the first layer in the RTE task gives a significant boost, resulting in an absolute performance gain of 3.2%. However, the effects of this operation vary across tasks, and for QNLI and MNLI it produces a performance drop of up to 0.2%. ### Discussion In general, our results suggest that even the smaller base BERT model is significantly overparametrized. This is supported by the discovery of repeated self-attention patterns in different heads, as well as the fact that disabling both single and multiple heads is not detrimental to model performance and in some cases even improves it. We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance. The 2 out of 144 heads that seem to be "responsible" for these relations (see Section SECREF25) do not appear to be important in any of the GLUE tasks: disabling either one does not lead to a drop in accuracy. This implies that fine-tuned BERT does not rely on this piece of semantic information and prioritizes other features instead. For instance, we noticed that both STS-B and RTE fine-tuned models rely on attention in the same pair of heads (head 1 in the fourth layer, and head 12 in the second layer), as shown in Figure FIGREF37. We manually checked the attention maps in those heads for a set of random inputs, and established that both of them have high weights for words that appear in both sentences of the input examples. This most likely means that word-by-word comparison of the two sentences provides a solid strategy for making a classification prediction for STS-B and RTE. Unfortunately, we were not able to provide a conceptually similar interpretation of heads important for other tasks. ### Conclusion In this work, we proposed a set of methods for analyzing self-attention mechanisms of BERT, comparing attention patterns for the pre-trained and fine-tuned versions of BERT. Our most surprising finding is that, although attention is BERT's key underlying mechanism, the model can benefit from attention "disabling". Moreover, we demonstrated that there is redundancy in the information encoded by different heads and the same patterns get consistently repeated regardless of the target task. We believe that these two findings together suggest a further direction for research on BERT interpretation, namely model pruning and finding an optimal sub-architecture that reduces such repetition. Another direction for future work is to study self-attention patterns in a different language. We think that it would allow us to disentangle attention maps potentially encoding linguistic information and heads that use simple heuristics like attending to the following/previous tokens. Figure 1: Typical self-attention classes used for training a neural network. Both axes on every image represent BERT tokens of an input example, and colors denote absolute attention weights (darker colors stand for greater weights). The first three types are most likely associated with language model pre-training, while the last two potentially encode semantic and syntactic information. Figure 2: Estimated percentages of the identified self-attention classes for each of the selected GLUE tasks. Table 1: GLUE task performance of BERT models with different initialization.
We report the scores on the validation, rather than test data, so these results differ from the original BERT paper. Figure 3: Detection of pre-trained BERT’s heads that encode information correlated to semantic links in the input text. Two heads (middle) demonstrate their ability to capture semantic relations. For a random annotated FrameNet example (bottom) full attention maps with a zoom in the target token attention distribution are shown (leftmost and rightmost). Figure 4: FrameNet annotation example for the “address” lexical unit with two core frame elements of different types annotated. Figure 5: Per-head cosine similarity between pre-trained BERT’s and fine-tuned BERT’s self-attention maps for each of the selected GLUE tasks, averaged over validation dataset examples. Darker colors correspond to greater differences. Figure 6: Per-task attention weights to the [SEP] (top row) and the [CLS] (bottom row) tokens averaged over input sequences’ lengths and over dataset examples. Darker colors correspond to greater absolute weights. Figure 7: Per-task attention weights corresponding to the [CLS] token averaged over input sequences’ lengths and over dataset examples, and extracted from the final layer. Darker colors correspond to greater absolute weights. Figure 8: Performance of the model while disabling one head at a time. The orange line indicates the baseline performance with no disabled heads. Darker colors correspond to greater performance scores. Figure 9: Performance of the model while disabling one layer (that is, all 12 heads in this layer) at a time. The orange line indicates the baseline performance with no disabled layers. Darker colors correspond to greater performance scores.
single head, disabling a whole layer, that is, all 12 heads in a given layer
What eight languages are reported on?
### Introduction Sentiment analysis is a crucial task in the opinion mining field, where the goal is to extract opinions, emotions, or attitudes toward different entities (people, objects, news, among others). Clearly, this task is of interest for all languages; however, there exists a significant gap between English state-of-the-art methods and those for other languages. It is therefore unsurprising that some researchers have tested the straightforward approach of first translating the messages to English and then using a high-performing English sentiment classifier (for instance, see BIBREF0 and BIBREF1), instead of creating a sentiment classifier optimized for a given language. However, the advantages of a properly tuned sentiment classifier have been studied for different languages (for instance, see BIBREF2, BIBREF3, BIBREF4, BIBREF5). This manuscript focuses on the particular case of multilingual sentiment analysis of short informal texts such as Twitter messages. Our aim is to provide an easy-to-use tool to create sentiment classifiers based on supervised learning (i.e., a labeled dataset), where the resulting classifier should be competitive with sentiment classifiers carefully tuned for specific languages. Our second contribution is a well-performing baseline against which new sentiment classifiers can be compared in a broad range of languages, or from which new sentiment analysis systems can be bootstrapped. Our approach is based on selecting the text-transformation techniques that optimize a given performance measure while remaining robust to typical writing errors. In this context, we propose a robust multilingual sentiment analysis method, tested in eight different languages: Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish. We compare our approach's ranking in three international contests: TASS'15, SemEval'15-16 and SENTIPOLC'14, for Spanish, English and Italian respectively; for the remaining languages, we compare directly with the results reported in the literature. The experimental results place our approach in good positions in all considered competitions, with excellent results in the other five languages tested. Finally, even though our method is almost language independent, it can be extended to take advantage of language dependencies; we also provide experimental evidence of the advantages of using these language-dependent techniques. The rest of the manuscript is organized as follows. Section SECREF2 describes our proposed sentiment analysis method. Section SECREF3 describes the datasets and contests used to test our approach, whereas the experimental results and the discussion are presented in Section SECREF4. Finally, Section SECREF5 concludes. ### Our Approach: Multilingual Polarity Classification We propose a method for multilingual polarity classification that can serve as a baseline as well as a framework to build more complex sentiment analysis systems, due to its simplicity and availability as open source software. As mentioned, this baseline algorithm for multilingual Sentiment Analysis (B4MSA) was designed to be multilingual and easy to implement. B4MSA is not a naïve baseline, as we show experimentally by evaluating it in several international competitions. In a nutshell, B4MSA starts by applying text transformations to the messages, then the transformed text is represented in a vector space model (see Subsection SECREF13), and finally a Support Vector Machine (with linear kernel) is used as the classifier.
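As a rough sketch of this pipeline (using scikit-learn as a stand-in; B4MSA's actual implementation, text transformations, and parameter search are described below and differ from this toy version), the core classifier could look like:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data; in practice these would be the labeled tweets of the
# target language after applying the selected text transformations.
texts = ["i love this phone", "worst service ever", "it is ok, nothing special"]
labels = ["positive", "negative", "neutral"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)
print(model.predict(["love it"]))  # predict the polarity of a new message
```

The interesting part of B4MSA is not this final classifier but the transformations applied before it and the way their configuration is selected, as described next.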
B4MSA uses a number of text transformations that are categorized into cross-language features (see Subsection SECREF3) and language-dependent features (see Subsection SECREF9). All the text transformations considered are either simple to implement or available in a well-known library (e.g. BIBREF6, BIBREF7). It is important to note that, to maintain the cross-language property, we limit ourselves to not using additional knowledge; this includes knowledge from affective lexicons or models based on distributional semantics. To obtain the best performance, one needs to select the text transformations that work best for a particular dataset; therefore, B4MSA uses simple random search and hill climbing (see Subsection SECREF14) in the space of text transformations to free the user from this delicate and time-consuming task. Before going into the details of each text transformation, Table TABREF2 gives a summary of the text transformations used as well as their associated parameters. ### Cross-language Features We define cross-language features as a set of features that can be applied to most languages, not only to related language families such as Germanic languages (English, German, etc.) or Romance languages (Spanish, Italian, etc.), since they rely on surface properties shared across languages such as punctuation, diacritics, symbol duplication, and case sensitivity. Later, the combination of these features will be explored to find the best configuration for a given classifier. Twitter messages are generally full of slang, misspellings, and typographical and grammatical errors; to tackle these aspects, we consider the following parameters as spelling features. Punctuation (del-punc) considers the use of symbols such as question marks, periods, exclamation points, and commas, among other spelling marks. Diacritic symbols (del-diac) are commonly used in languages such as Spanish, Italian, and Russian, and their wrong usage is one of the main sources of orthographic errors in informal texts; this parameter considers the use or absence of diacritical marks. Symbol reduction (del-d1): Twitter messages often use repeated characters to emphasize parts of a word and attract the reader's attention. This makes the vocabulary explode, so we apply the strategy of replacing repeated symbols by a single occurrence of the symbol. Case sensitivity (lc) considers whether letters are normalized to lowercase or kept as in the original source; the aim is to merge words that differ only in capitalization. We classified the roughly 500 most popular emoticons, including text emoticons, and the whole set of Unicode emoticons (around INLINEFORM0 ) defined by BIBREF8 into three classes: positive, negative and neutral, which are grouped under the corresponding polarity word defined by the class name. Table TABREF6 shows an excerpt of the dictionary that maps emoticons to their corresponding polarity class. N-words (word sequences) are widely used in many NLP tasks, and they have also been used in Sentiment Analysis BIBREF9 and BIBREF10. To compute the N-words, the text is tokenized and N-words are calculated from tokens. For example, let INLINEFORM0 be the text; its 1-words (unigrams) are each word alone, and its 2-words (bigrams) are the sequences of two words, the set ( INLINEFORM1 ), and so on.
INLINEFORM2 = {the lights, lights and, and shadows, shadows of, of your, your future}, so, given a text of size INLINEFORM3 words, we obtain a set containing at most INLINEFORM4 elements. Generally, N-words are used up to 2- or 3-words because it is uncommon to find good matches of word sequences longer than three or four words between texts BIBREF11. In addition to the traditional N-words representation, we represent the resulting text as q-grams. A q-gram is a language-agnostic transformation that consists in representing a document by all its substrings of length INLINEFORM0. For example, let INLINEFORM1 be the text; its set of 3-grams is INLINEFORM2, so, given a text of size INLINEFORM0 characters, we obtain a set with at most INLINEFORM1 elements. Notice that this transformation handles white-space as part of the text. Since there will be q-grams connecting words, in some sense, applying q-grams to the entire text can capture part of the syntactic and contextual information in the sentence. The rationale of q-grams is also to tackle misspelled sentences from the approximate pattern matching perspective BIBREF12. ### Language Dependent Features The following features are language dependent because they use specific information from the language concerned. The use of stopwords, stemming, and negation handling is traditional in Sentiment Analysis. Users of this approach could add other features, such as part of speech or affective lexicons, to improve the performance BIBREF13. In many languages, there is a set of extremely common words such as determiners or conjunctions ( INLINEFORM0 or INLINEFORM1 ) which help to build sentences but do not carry any meaning by themselves. These words are known as stopwords, and they are removed from the text before any attempt to classify it. Generally, a stopword list is built using the most frequent terms from a huge document collection. We used the Spanish, English and Italian stopword lists included in the NLTK Python package BIBREF6 in order to identify them. Stemming is a well-known heuristic process in the Information Retrieval field that chops off the ends of words and often includes the removal of derivational affixes. This technique uses the morphology of the language, coded in a set of rules, to find word stems and reduce the vocabulary by collapsing derivationally related words. In our study, we use the Snowball Stemmer for Spanish and Italian, and the Porter Stemmer for English, as implemented in the NLTK package BIBREF6. Negation markers might change the polarity of the message. Thus, we attach the negation clue to the nearest word, similarly to the approach used in BIBREF9. A set of rules was designed for common negation structures that involve negation markers for Spanish, English and Italian. For instance, negation markers used for Spanish are no (not), nunca, jamás (never), and sin (without). The rules (regular expressions) are processed in order, and their purpose is to negate the nearest word to the negation marker using only the information in the text, e.g., avoiding mainly pronouns and articles. For example, in the sentence El coche no es bonito (The car is not nice), the negation marker no (not in English) is attached to the adjective, giving no_bonito (not_nice).
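As a concrete illustration, the sketch below shows two of the simpler transformations described above, symbol reduction and character q-grams, in plain Python (purely illustrative; the function names are ours, not the B4MSA implementation).

```python
import re

def reduce_repeats(text):
    """del-d1: collapse any run of a repeated character to one occurrence."""
    return re.sub(r'(.)\1+', r'\1', text)

def char_qgrams(text, q=3):
    """Represent the text by all its character substrings of length q.
    White-space is kept, so q-grams can span word boundaries."""
    return [text[i:i + q] for i in range(len(text) - q + 1)]

print(reduce_repeats("goooal!!!"))   # -> "goal!"
print(char_qgrams("a cat", q=3))     # -> ['a c', ' ca', 'cat']
```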
### Text Representation After the text transformations, the text must be represented in a suitable form in order to use a traditional classifier such as an SVM. We selected the well-known vector space representation of a text given its simplicity and expressive power. In particular, we use Term Frequency-Inverse Document Frequency (TF-IDF), a well-known weighting scheme in NLP. TF-IDF computes a weight that represents the importance of a word or term inside a document relative to a collection of documents, i.e., taking into account how frequently it appears across multiple documents. Therefore, common words such as the and in, which appear in many documents, will have a low score, and words that appear frequently in a single document will have a high score. This weighting scheme selects the terms that best represent a document. ### Parameter Optimization Model selection, sometimes called hyper-parameter optimization, is essential to ensure the performance of a sentiment classifier. In particular, our approach is highly parametric; in fact, we use this property to adapt to several languages. Table TABREF2 summarizes the parameters and their valid values. The search space contains more than 331 thousand configurations when limited to the multilingual and language-independent parameters, and reaches close to 4 million configurations when we add our three language-dependent parameters. Depending on the size of the training set, each configuration needs several minutes on a commodity server to be evaluated; thus, an exhaustive exploration of the parameter space can be quite expensive, making the approach useless in practice. To tackle these efficiency problems, we perform the model selection using two hyper-parameter optimization algorithms. The first is Random Search, described in depth in BIBREF14, which consists of randomly sampling the parameter space and selecting the best configuration in the sample. The second is Hill Climbing BIBREF15, BIBREF16, implemented with a memory to avoid testing a configuration twice. The main idea behind hill climbing H+M is to take a pivoting configuration, explore the configuration's neighborhood, and greedily move to the best neighbor. The process is repeated until no improvement is possible. The configuration neighborhood is defined as the set of configurations that differ in just one parameter value. This rule is strengthened for the tokenizer (see Table TABREF2) to differ in a single internal value rather than in the whole parameter value. More precisely, let INLINEFORM0 be a valid value for tokenizer and INLINEFORM1 the set of valid values for neighborhoods of INLINEFORM2 , then INLINEFORM3 and INLINEFORM4 for any INLINEFORM5 . To guarantee performance at least as good as random search, the H+M process starts from the best configuration found by the random search. With H+M, the random-search sample size can be set to 32 or 64 as a rule of thumb, and still reach improvements in most cases (see § SECREF4); nonetheless, this simplification and performance boost may come with higher optimization times. Finally, the performance of each configuration is estimated using cross-validation on the training data, with the metrics usually used in classification, such as accuracy, INLINEFORM0 score, and recall, among others.
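A minimal sketch of the H+M hill-climbing step with memory (illustrative only: the configuration encoding and the scoring function are assumptions, not the authors' code):

```python
def hill_climb_with_memory(start, neighbors, score, max_iters=100):
    """H+M: greedy hill climbing that remembers every configuration already
    evaluated, so no configuration is scored twice.

    start:     initial configuration, e.g. the best one found by random
               search (must be hashable, such as a tuple of parameter values)
    neighbors: function returning the configurations that differ from the
               given one in exactly one parameter value
    score:     function evaluating a configuration (higher is better),
               e.g. cross-validated accuracy of the resulting classifier
    """
    seen = {start: score(start)}
    best, best_score = start, seen[start]
    for _ in range(max_iters):
        candidates = [c for c in neighbors(best) if c not in seen]
        if not candidates:
            break
        for c in candidates:
            seen[c] = score(c)
        top = max(candidates, key=seen.get)
        if seen[top] <= best_score:
            break  # no improving neighbor: local optimum reached
        best, best_score = top, seen[top]
    return best, best_score
```

In B4MSA, a configuration would encode the text-transformation choices of Table TABREF2 and the score would be a cross-validation run of the resulting classifier.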
### Datasets and contests Nowadays, there are several international competitions related to text mining, which include diverse tasks such as polarity classification (at different levels), subjectivity classification, entity detection, and irony detection, among others. These competitions are relevant for measuring the potential of different proposed techniques. In this work, we focus on the polarity classification task; hence, we developed a baseline method whose performance is acceptable in three different contests, namely TASS'15 (Spanish) BIBREF17, SemEval'15-16 (English) BIBREF18, BIBREF19, and SENTIPOLC'14 (Italian) BIBREF20. In addition, our approach was tested on other languages (Arabic, German, Portuguese, Russian, and Swedish) to show that it is feasible to use our framework as a basis for building more complex sentiment analysis systems. For these languages, datasets and results can be found in BIBREF21, BIBREF3 and BIBREF2. Table TABREF15 presents the details of each of the competitions considered as well as the other languages tested. The table shows the number of examples as well as the number of instances for each polarity level, namely positive, neutral, negative and none. The training and development (only in SemEval) sets are used to train the sentiment classifier, and the gold set is used to test the classifier. When a dataset was not split into training and gold sets (Arabic, German, Portuguese, Russian, and Swedish), a 10-fold cross-validation technique is used to test the classifier. The performance of the classifier is presented using different metrics depending on the competition. SemEval uses the average INLINEFORM0 score of the positive and negative labels, TASS uses accuracy, and SENTIPOLC uses a custom metric (see BIBREF17, BIBREF18, BIBREF19, BIBREF20). ### Experimental Results We tested our framework on two kinds of datasets. On the one hand, we compare our performance in three languages having well-known sentiment analysis contests; here, we compare our work against the competitors in those challenges. On the other hand, we selected five languages without popular opinion mining contests; for these languages, we compare our approach against research works reporting results on the corpora used. ### Performance on sentiment analysis contests Figure FIGREF17 shows the performance on four contests, corresponding to three different languages. The performance corresponds to the multilingual set of features, i.e., we did not use language-dependent techniques. Figures UID18 - UID21 illustrate the results for each challenge; all competitors are ordered in descending order of score (higher is better). The performance achieved by our approach is marked with a horizontal line on each figure. Figure UID22 briefly describes each challenge and summarizes our performance on each contest; we also added three standard measures to make it easier for the reader to draw insights. The winning method in SENTIPOLC'14 (Italian) is reported in BIBREF22. This method uses three groups of features: keyword and micro-blogging characteristics; sentiment lexicons, SentiWordNet and MultiWordNet; and a Distributional Semantic Model (DSM), with an SVM classifier. In contrast with our method, BIBREF22 employed three external sentiment lexicon dictionaries, that is, external information. In the TASS'15 (Spanish) competition, the winning method was BIBREF23, which proposed an adaptation based on the tweet tokenizer Tweetmotif BIBREF24, Freeling BIBREF25 as lemmatizer, entity detector and morphosyntactic labeler, and a translation of the Afinn dictionary. In contrast with our method, BIBREF23 employs several complex and expensive tools. In this task we reached the fourteenth position with an accuracy of INLINEFORM0 .
Figure UID19 shows that B4MSA's performance is above that of two thirds of the competitors. The remaining two contests correspond to SemEval'15-16. The B4MSA performance in SemEval is depicted in Figures UID20 and UID21; here, B4MSA does not perform as well as in the other challenges, mainly because, contrary to the other challenges, the SemEval rules promote the enrichment of the official training set. To be consistent with the rest of the experiments, B4MSA uses only the official training set. The results can be significantly improved using larger training datasets; for example, joining the SemEval'13 and SemEval'16 training sets, we can reach INLINEFORM0 for SemEval'16, which improves B4MSA's performance (see Table FIGREF17). In SemEval'15, the winning method is BIBREF26, which combines three approaches from participants of SemEval'13 (teams NRC-Canada, GU-MLT-LT and KLUE) and the SemEval'14 participant TeamX, all of them employing external information. In SemEval'16, the winning method, BIBREF27, is composed of an ensemble of two subsystems based on convolutional neural networks; the first subsystem is created using 290 million tweets, and the second one is fed with 150 million tweets. All these tweets were selected from a very large unlabeled dataset through distant supervision techniques. Table TABREF23 shows the multilingual set of techniques and the set with language-dependent techniques; for each, we optimized the set of parameters through Random Search and INLINEFORM0 (see Subsection SECREF14). The achieved performance is reported using both cross-validation and the official gold standard. Notice how INLINEFORM1 consistently reaches better performance, even with small sample sizes. The sample size is indicated with subscripts in Table TABREF23. Note that, in the SemEval challenges, the cross-validation performances are higher than those reached by evaluating on the gold standard, mainly because the gold standard does not follow the distribution of the training set. This can be understood given that the rules of SemEval promote the use of external knowledge. Table TABREF24 compares our performance on five different languages; here we do not apply language-dependent techniques. For each comparison, we took a labeled corpus from BIBREF3 (Arabic) and BIBREF21 (the remaining languages). According to the authors' reports, all tweets were manually labeled by native speakers as pos, neg, or neu. The Arabic dataset contains INLINEFORM0 items; the other datasets contain from 58 thousand tweets to more than 157 thousand tweets. We were able to fetch only a fraction of the original datasets, so we dropped the necessary items to keep the original class-population ratio. The ratio of tweets in our training dataset with respect to the original dataset is indicated beside the name. As before, we evaluate our algorithms through 10-fold cross-validation. In BIBREF3, BIBREF2, the authors study the effect of translation on sentiment classifiers; they found it better to use native Arabic speakers as annotators than fine-tuned translators plus fine-tuned English sentiment classifiers. In BIBREF21, the idea is to measure the effect of the agreement among annotators on the production of a sentiment-analysis corpus. On the technical side, both papers use fine-tuned classifiers plus a variety of pre-processing techniques to prove their claims.
Table TABREF24 supports the idea of choosing B4MSA as a bootstrapping sentiment classifier because, overall, B4MSA reaches superior performance regardless of the language. Our approach achieves those performance levels since it optimizes a set of parameters carefully selected to work on a variety of languages and to be robust to informal writing. The latter problem is not properly tackled in many cases. ### Conclusions We presented a simple-to-implement multilingual framework for polarity classification whose main contributions are twofold. On the one hand, our approach can serve as a baseline against which to compare other classification systems. It considers techniques for text representation such as spelling features, emoticons, word-based n-grams, character-based q-grams and language-dependent features. On the other hand, our approach is a framework for practitioners or researchers looking for a bootstrapping sentiment classifier on which to build more elaborate systems. Besides the text transformations, the proposed framework uses an SVM classifier (with linear kernel) and hyper-parameter optimization using random search and H+M over the space of text transformations. The experimental results show good overall performance in all international contests considered, and the best results in the other five languages tested. It is important to note that all the methods that outperformed B4MSA in the sentiment analysis contests use extra knowledge (lexicons included), whereas B4MSA uses only the information provided by each contest. In future work, we will extend our methodology to include extra knowledge in order to improve the performance. ### Acknowledgements We would like to thank Valerio Basile, Julio Villena-Roman, and Preslav Nakov for kindly giving us access to the gold standards of SENTIPOLC'14, TASS'15 and SemEval 2015 & 2016, respectively. The authors also thank Elio Villaseñor for the helpful discussions in early stages of this research. Table 1: Parameter list and a brief description of the functionality Table 3: Dataset details from each competition tested in this work Figure 1: The performance listing in four different challenges. The horizontal lines appearing in a) to d) correspond to B4MSA's performance. All scores were computed using the official gold-standard and the proper score for each challenge. Table 4: B4MSA's performance on cross-validation and gold standard. The subscript at the right of each score indicates the random-search parameter (sample size) needed to find that value. Table 5: Performance on multilingual sentiment analysis (not challenges). B4MSA was restricted to use only the multilingual set of parameters.
Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish
By the end of the passage, what can we understand about the opening scene? A. Without Peter, the ship won't be functional anymore. B. Despite being logical, Robert feels emotional about killing Peter. He is at odds with himself. C. Robert kills Peter without any thought behind it. D. Robert's cold logic has won him over completely.
THE AVENGER By STUART FLEMING Karson was creating a superman to fight the weird super-monsters who had invaded Earth. But he was forgetting one tiny thing—like calls to like. [Transcriber's Note: This etext was produced from Planet Stories Spring 1944. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Peter Karson was dead. He had been dead for some time now, but the dark blood was still oozing from the crushed ruin of his face, trickling down into his sodden sleeve, and falling, drop by slow drop, from his fingertips. His head was tilted over the back of the chair at a queer, unnatural angle, so that the light made deep pools of shadow where his eyes had been. There was no sound in the room except for the small splashing the blood made as it dropped into the sticky pool on the floor. The great banks of machinery around the walls were silent. I knew that they would never come to life again. I rose and walked over to the window. Outside, the stars were as before: tiny, myriad points of light, infinitely far away. They had not changed, and yet they were suddenly no longer friendly. They were cold and alien. It was I who had changed: something inside me was dead, like the machinery, and like Peter. It was a kind of indefinable emptiness. I do not think it was what Peter called an emotion; and yet it had nothing to do with logic, either. It was just an emptiness—a void that could not be filled by eating or drinking. It was not a longing. I had no desire that things should be otherwise than they were. I did not even wish that Peter were not dead, for reason had told me that he had to die. That was the end of it. But the void was still there, unexplainable and impossible to ignore. For the first time in all my life I had found a problem that I could not solve. Strange, disturbing sensations stirred and whispered within me, nagging, gnawing. And suddenly—something moved on the skin of my cheek. I raised a hand to it, slowly. A tear was trickling down my cheek. Young Peter Karson put the last black-print down and sighed with satisfaction. His dream was perfect; the Citadel was complete, every minutest detail provided for—on paper. In two weeks they would be laying the core, and then the metal giant itself would begin to grow, glittering, pulsing with each increment of power, until at last it lay finished, a living thing. Then there would remain only the task of blasting the great, shining ship out into the carefully-calculated orbit that would be its home. In his mind's eye he could see it, slowly wheeling, like a second satellite, about the Earth; endlessly gathering knowledge into its insatiable mechanisms. He could see, too, the level on level of laboratories and storerooms that filled its interlocking segments; the meteor deflectors, the air renewal system, the mighty engines at the stern—all the children of his brain. Out there, away from the muffling, distorting, damnable blanket of atmosphere, away from Earth's inexorable gravitational pull, would be a laboratory such as man had never seen. The ship would be filled with the sounds of busy men and women, wresting secrets from the reluctant ether. A new chemistry, a new physics; perhaps even a new biochemistry. A discordant note suddenly entered his fantasy. He looked up, conscious of the walls of his office again, but could see nothing unusual. Still, that thin, dark whisper of dread was at the back of his mind. 
Slowly, as if reluctantly compelled, he turned around to face the window at his back. There, outside the window, fifty stories up, a face was staring impassively in at him. That was the first impression he got; just a face, staring. Then he saw, with a queer, icy chill, that the face was blood-red and subtly inhuman. It tapered off into a formless, shriveled body. For a moment or an eternity it hung there, unsupported, the bulging eyes staring at him. Then it grew misty at the edges. It dissolved slowly away and was gone. "Lord!" he said. He stared after it, stunned into immobility. Down in the street somewhere, a portable video was shrilling a popular song; after a moment he heard the faint swish of a tube car going past. Everything was normal. Nothing, on examination, seemed to have changed. But the world had grown suddenly unreal. One part of his brain had been shocked into its shell. It was hiding from the thing that had hurt it, and it refused to respond. But the other part was going calmly, lucidly on, quite without his volition. It considered the possibility that he had gone temporarily insane, and decided that this was probable. Hardly knowing what he did, he found a cigarette and lit it. His hands were shaking. He stared at them dully, and then he reached over to the newsbox on his desk, and switched it on. There were flaring red headlines. Relief washed over him, leaving him breathless. He was horrified, of course, but only abstractedly. For the moment he could only be glad that what he had seen was terrible reality rather than even more terrible illusion. INVADERS APPEAR IN BOSTON. 200 DEAD Then lines of type, and farther down: 50 CHILDREN DISAPPEAR FROM PARIS MATERNITY CENTER He pressed the stud. The roll was full of them. MOON SHIP DESTROYED IN TRANSIT NO COMMUNICATION FROM ANTARCTICA IN 6 HOURS STRANGE FORCE DEFLECTS PLANES FROM SAHARA AREA WORLD POLICE MOBILIZING The item below the last one said: Pacifica, June 7—The World Police are mobilizing, for the first time in fifty years. The order was made public early this morning by R. Stein, Secretary of the Council, who said in part: "The reason for this ... order must be apparent to all civilized peoples. For the Invaders have spared no part of this planet in their depredations: they have laid Hong Kong waste; they have terrorized London; they have destroyed the lives of citizens in every member state and in every inhabited area. There can be few within reach of printed reports or my words who have not seen the Invaders, or whose friends have not seen them. "The peoples of the world, then, know what they are, and know that we face the most momentous struggle in our history. We face an enemy superior to ourselves in every way . "Since the Invaders first appeared in Wood River, Oregon, 24 hours ago, they have not once acknowledged our attempts to communicate, or in any way taken notice of our existence as reasoning beings. They have treated us precisely as we, in less enlightened days, might have treated a newly-discovered race of lower animals. They have not attacked our centers of government, nor immobilized our communications, nor laid siege to our defenses. But in instance after instance, they have done as they would with us. They have examined us, dissected us, driven us mad, killed us with no discernable provocation; and this is more intolerable than any normal invasion. "I have no fear that the people of Earth will fail to meet this challenge, for there is no alternative. 
Not only our individual lives are threatened, but our existence as a race. We must, and will, destroy the Invaders!" Peter sank back in his chair, the full shock of it striking him for the first time. " Will we?" he asked himself softly. It was only two stories down the moving ramp to Lorelei Cooper's laboratory. Peter took it in fifteen seconds, running, and stumbled to a halt in front of the door marked "Radiation." She had set her door mechanism to "Etaoin Shrdlu," principally because he hated double-talk. He mouthed the syllables, had to repeat them because he put an accent in the wrong place, and squeezed through the door as soon as it opened far enough to admit him. Lorelei, beautiful in spite of dark-circled eyes and a smear of grease on her chin, looked up from a huge ledger at the end of the room. One blonde eyebrow arched in the quizzical expression he knew so well. "What makes, Peter my love?" she asked, and bent back to the ledger. Then she did a double-take, looked at his face intently, and said, "Darling, what's wrong?" He said, "Have you seen the news recently?" She frowned. "Why, no—Harry and I have been working for thirty-six hours straight. Haven't seen anybody, haven't heard anything. Why?" "You wouldn't believe me. Where's your newsbox?" She came around the desk and put her hands on his shoulders. "Pete, you know I haven't one—it bores me or upsets me, depending on whether there's trouble or not. What—" "I'm sorry, I forgot," he said. "But you have a scanner?" "Yes, of course. But really, Pete—" "You'll understand in a minute. Turn it on, Lorelei." She gazed at him levelly for a moment, kissed him impulsively, and then walked over to the video panel on the wall and swept a mountain of papers away from in front of it. She turned the selector dial to "News" and pressed the stud. A faint wash of color appeared on the panel, strengthened slowly, and suddenly leapt into full brilliance. Lorelei caught her breath. It was a street scene in the Science City of Manhattan, flooded by the warm spring sunshine. Down on the lowest level, visible past the transport and passenger tubes, the parks and moving ways should have been dotted with colorful, holiday crowds. The people were there, yes but they were flowing away in a swiftly-widening circle. They disappeared into buildings, and the ways snatched them up, and in a heartbeat they were gone. There were left only two blood-red, malignant monstrosities somehow defiling the air they floated in; and below them, a pitiful huddle of flesh no longer recognizable as human beings. They were not dead, those men and women, but they wanted to be. Their bodies had been impossibly joined, fused together into a single obscene, floundering mass of helpless protoplasm. The thin moaning that went up from them was more horrible than any cry of agony. "The Invaders are here, citizens," the commentator was saying in a strangled voice. "Stay off the streets. Hide yourselves. Stay off the streets...." His voice droned on, but neither of them heard it. Lorelei buried her head on his chest, clutching at him desperately. "Peter!" she said faintly. "Why do they broadcast such things?" "They have to," he told her grimly. "There will be panics and suicides, and they know it; but they have to do it. This isn't like a war, where the noncombatants' morale has to be kept up. There aren't going to be any noncombatants, this time. Everybody in the world has to know about them, so that he can fight them—and then it may not be enough." 
The viewpoint of the teleo sender changed as the two red beings soared away from their victims and angled slowly up the street. Peter reached out to switch off the scanner, and froze. The girl felt his muscles tense abruptly, looked back at the scene. The Invaders were floating up the sloping side of a tall, pure white structure that dominated the rest. "That's the Atlas building," she said unbelievingly. "Us!" "Yes." Silently, they counted stories as the two beings rose. Forty-five ... forty-six ... forty-seven ... forty-eight. Inevitably, they halted. Then they faded slowly. It was impossible to say whether they had gone through the solid wall, or simply melted away. The man and woman clung together, waiting. There was a thick, oppressive silence, full of small rustlings and other faint sounds that were no longer normal. Then, very near, a man screamed in a high, inhuman voice. The scream dwindled into a throaty gurgle and died, leaving silence again. Peter's lips were cold with sweat. Tiny nerves in his face and arms were jumping convulsively. His stomach crawled. He thrust the girl away from him and started toward the inner room. "Wait here," he mouthed. She was after him, clinging to his arms. "No, Peter! Don't go in there! Peter! " But he pushed her away again, woodenly, and stalked forward. There was a space in the middle of the room where machinery had been cleared away to make room for an incompleted setup. Peter walked down the narrow aisle, past bakelite-sheathed mechanisms and rows of animal cages, and paused just short of it. The two red beings were there, formless bodies hazy in midair, the distorted, hairless skulls in profile, staring at something outside his range of vision. Peter forced himself forward another step. Little Harry Kanin, Lorelei's assistant, was crumpled in a corner, half supported by the broad base of an X-ray chamber. His face was flaccid and bloated. His glazed eyes, impassive yet somehow pleading, stared at nothingness straight ahead of him. The Invaders ignored Peter, staring expressionlessly down at Kanin. In a moment Peter realized what they were doing to him. He stood, paralyzed with horror, and watched it happen. The little man's body was sagging, ever so slowly, as if he were relaxing tiredly. His torso was telescoping, bit by bit; his spread legs grew wider and more shapeless, his cheeks caved in and his skull grew gradually flatter. When it was over, the thing that had been Kanin was a limp, boneless puddle of flesh. Peter could not look at it. There was a scream in his throat that would not come out. He was beyond fear, beyond agony. He turned to the still-hovering monsters and said in a terrible voice, "Why? Why?" The nearest being turned slowly to regard him. Its lips did not move, but there was a tiny sound in Peter's brain, a thin, dry whispering. The scream was welling up. He fought it down and listened. " Wurnkomellilonasendiktolsasangkanmiamiamimami.... " The face was staring directly into his, the bulging eyes hypnotic. The ears were small, no more than excresences of skin. The narrow lips seemed sealed together; a thin, slimy ichor drooled from them. There were lines in the face, but they were lines of age, not emotion. Only the eyes were alive. " ... raswilopreatadvuonistuwurncchtusanlgkelglawwalinom.... " "I can't understand," he cried wildly. "What do you want?" " ... morofelcovisyanmamiwurlectaunntous. " He heard a faint sound behind him, and whirled. It was the first time he had realized that Lorelei had followed him.
She stood there, swaying, very pale, looking at the red Invaders. Her eyes swiveled slowly.... " Opreniktoulestritifenrelngetnaktwiltoctpre. " His voice was hoarse. "Don't look! Don't—Go back!" The horrible, mindless noise in his throat was almost beyond his power to repress. His insides writhed to thrust it out. She didn't see him. Her eyes glazed, and she dropped limply to the floor. The scream came out then. Before he knew, even, that he could hold it back no longer, his mouth was wide open, his muscles tensed, his fingernails slicing his palms. It echoed with unbelievable volume in the room. It was a scream to split eardrums; a scream to wake the dead. Somebody said, "Doctor!" He wanted to say, "Yes, get a doctor. Lorelei—" but his mouth only twitched feebly. He couldn't seem to get it to work properly. He tried again. "Doctor." "Yes?" A gentle, masculine voice. He opened his eyes with an effort. There was a blurred face before him; in a moment it grew clearer. The strong, clean-shaven chin contrasted oddly with the haggard circles under the eyes. There was a clean, starched odor. "Where am I?" he said. He tried to turn his head, but a firm hand pressed him back into the sheets. "You're in a hospital. Just lie quietly, please." He tried to get up again. "Where's Lorelei?" "She's well, and you'll see her soon. Now lie quietly. You've been a very sick man." Peter sank back in the bed. The room was coming into focus. He looked around him slowly. He felt very weak, but perfectly lucid. "Yes...." he said. "How long have I been here, Doctor?" The man hesitated, looked at him intently. "Three months," he said. He turned and gave low-voiced instructions to a nurse, and then went away. Peter's head began spinning just a little. Glass clinked from a metal stand near his head; the nurse bent over him with a glass half full of milky fluid. It tasted awful, but she made him drink it all. In a moment he began to relax, and the room got fuzzy again. Just before he drifted off, he said sleepily, "You can't—fool me. It's been more —than three—months." He was right. All the nurses, and even Dr. Arnold, were evasive, but he kept asking them why he couldn't see Lorelei, and finally he wormed it out of them. It had been nine and a half months, not three, and he'd been in a coma all that time. Lorelei, it seemed, had recovered much sooner. "She was only suffering from ordinary shock," Arnold explained. "Seeing that assistant of hers—it was enough to knock anybody out, especially a woman. But you stood actual mental contact with them for approximately five minutes. Yes, we know—you talked a lot. It's a miracle you're alive, and rational." "But where is she?" Peter complained. "You still haven't explained why I haven't been able to see her." Arnold frowned. "All right," he said. "I guess you're strong enough to take it. She's underground, with the rest of the women and children, and a good two-thirds of the male population. That's where you'll go, as soon as you're well enough to be moved. We started digging in six months ago." "But why?" Peter whispered. Arnold's strong jaw knotted. "We're hiding," he said. "Everything else has failed." Peter couldn't think of anything to say. Dr. Arnold's voice went on after a moment, musingly. "We're burrowing into the earth, like worms. It didn't take us long to find out we couldn't kill them. They didn't even take any notice of our attempts to do so, except once. 
That was when a squadron of the Police caught about fifty of them together at one time, and attacked with flame guns and a new secret weapon. It didn't hurt them, but it annoyed them. It was the first time they'd been annoyed, I think. They blew up half a state, and it's still smoldering." "And since then?" Peter asked huskily. "Since then, we've been burrowing. All the big cities.... It would be an impossible task if we tried to include all the thinly-populated areas, of course, but it doesn't matter. By the time we excavate enough to take care of a quarter of the earth's population, the other three-quarters will be dead, or worse." "I wonder," Peter said shakily, "if I am strong enough to take it." Arnold laughed harshly. "You are. You've got to be. You're part of our last hope, you see." "Our last hope?" "Yes. You're a scientist." "I see," said Peter. And for the first time, he thought of the Citadel . No plan leaped full-born into his mind, but, maybe , he thought, there's a chance .... It wasn't very big, the thing that had been his shining dream. It lay there in its rough cradle, a globe of raw dura-steel not more than five hundred meters in diameter, where the Citadel was to have been a thousand. It wouldn't house a hundred scientists, eagerly delving into the hinterland of research. The huge compartments weren't filled with the latest equipment for chemical and physical experiment; instead, there was compressed oxygen there, and concentrated food, enough to last a lifetime. It was a new world, all by itself; or else it was a tomb. And there was one other change, one that you couldn't see from the outside. The solid meters of lead in its outer skin, the shielding to keep out cosmic rays, were gone. A man had just finished engraving the final stroke on its nameplate, to the left of the airlock— The Avenger . He stepped away now, and joined the group a little distance away, silently waiting. Lorelei said, "You can't do it. I won't let you! Peter—" "Darling," he began wearily. "Don't throw your life away! Give us time—there must be another way." "There's no other way," Peter said. He gripped her arms tightly, as if he could compel her to understand by the sheer pressure of his fingers. "Darling, listen to me. We've tried everything. We've gone underground, but that's only delaying the end. They still come down here, only not as many. The mortality rate is up, the suicide rate is up, the birth rate is down, in spite of anything we can do. You've seen the figures: we're riding a curve that ends in extinction fifty years from now. "They'll live, and we'll die, because they're a superior race. We're a million years too far back even to understand what they are or where they came from. Besides them, we're apes. There's only one answer." She was crying now, silently, with great racking sobs that shook her slender body. But he went remorselessly on. "Out there, in space, the cosmics change unshielded life. They make tentacles out of arms; or scales out of hair; or twelve toes, or a dozen ears—or a better brain. Out of those millions of possible mutations, there's one that will save the human race. We can't fight them , but a superman could. That's our only chance. Lorelei—darling—don't you see that?" She choked, "But why can't you take me along?" He stared unseeingly past her wet, upturned face. "You know why," he said bitterly. "Those rays are strong. They don't only work on embryos; they change adult life forms, too. I have one chance in seven of staying alive. 
You'd have one chance in a million of staying beautiful. I couldn't stand that. I'd kill myself, and then humanity would die, too. You'd be their murderer." Her sobs gradually died away. She straightened slowly until he no longer had to support her, but all the vitality and resilience was gone out of her body. "All right," she said in a lifeless voice. "You'll come back, Peter." He turned away suddenly, not trusting himself to kiss her goodbye. A line from an old film kept echoing through his head. " They'll come back—but not as boys !" We'll come back, but not as men. We'll come back, but not as elephants. We'll come back, but not as octopi. He was trembling violently. He ran the last few steps, stumbled into the airlock, and pressed the stud that would seal the door behind him. We'll come back.... He heard the massive disk sink home, closing him off. Then he sank down on the floor of the airlock and put his head in shaking hands. After a while he roused himself, closed the inner door of the lock behind him, and walked down the long corridor into the control chamber. The shining banks of keys were there, waiting for his touch; he slumped down before them and listlessly closed the contact of the visiplate. He swung its field slowly, scanning for the last time the bare walls of the underground chamber, making sure that all the spectators had retired out of the way of the blast. Then his clawed fingers poised over the keys, hovered a moment, and thrust down. Acceleration pressed him deep into his chair. In the visiplate, the heavy doors that closed the tunnel above him flashed back, one by one. The energy-charged screen flickered off to let him pass, and closed smoothly behind him. The last doors, cleverly camouflaged, slipped back into place and then dwindled in the distance. It was done. He flashed on out, past the moon, past Mars, over the asteroid belt. The days merged into weeks, then months, and finally, far out, The Avenger curved into an orbit and held it. The great motors died, and the silence pressed in about him. Already he could feel the invisible rays burning resistlessly through his flesh as if it were water, shifting the cells of his body, working its slow, monstrous alchemy upon him. Peter waited until the changes were unmistakably evident in his skin and hair, and then he smashed all the mirrors in the ship. The embryos were pulsing with unnatural life, even in the suspended animation of their crystal cells. One by one he allowed them to mature, and after weeks or years destroyed the monstrosities that came from the incubators. Time went by, meaninglessly. He ate when he was hungry, slept when his driving purpose let him, and worked unceasingly, searching for the million-to-one chance. He stared sometimes through changed eyes at the tiny blue star that was Earth, wondering if the race he had left behind still burrowed in its worm-tunnels, digging deeper and deeper away from the sunlight. But after a time he ceased even to wonder. And one changeling-child he did not destroy. He fed knowledge to its eager brain, and watched it through the swift years, with a dawning hope.... Peter closed the diary. "The rest you know, Robert," he said. "Yes," I told him. "I was that child. I am the millionth mutation you were searching for." His eyes glowed suddenly in their misshapen sockets. "You are. Your brain is as superior to mine as mine is to an anthropoid's. You solve instinctively problems that would take our mechanical computers hours of work. You are a superman." 
"I am without your imperfections," I said, flexing my arms. He rose and strode nervously over to the window. I watched him as he stood there, outlined against the blazing galaxies. He had changed but little in the years that I had known him. His lank gray hair straggled over his sunken eyes; his cheeks were blobbed with excresences of flesh; one corner of his mouth was drawn up in a perpetual grin. He had a tiny sixth finger on his left hand. He turned again, and I saw the old scar on his cheek where I had once accidentally drawn one of my talons across his face. "And now," he said softly, "we will go home. I've waited so long—keeping the control chamber and the engine room locked away from you, not telling you, even, about Earth until now—because I had to be sure. But now, the waiting is over. "They're still there, I'm sure of it—the people, and the Invaders. You can kill the Invaders, Robert." He looked at me, a little oddly, almost as if he had some instinctive knowledge of what was to come. But he went on swiftly, "On Earth we had a saying: 'Fight fire with fire.' That is the way it will be with you. You are completely, coldly logical, just as they are. You can understand them, and so you can conquer them." I said, "That is the reason why we will not go back to Earth." He stared at me, his jaw slack, his hands trembling. "What—what did you say?" I repeated it patiently. "But why?" he cried, sinking down into the chair before me. In an instant all the joy had gone out of him. I could not understand his suffering, but I could recognize it. "You yourself have said it," I told him. "I am a being of logic, just as the beings who have invaded your planet are. I do not comprehend the things which you call hate, fear, joy and love, as they do not. If I went to Earth, I would use your people to further my knowledge, just as the invaders do. I would have no reason to kill the invaders. They are more nearly kin to me than your people." Peter's eyes were dull, his limbs slumped. For a moment I thought that the shock had deranged his mind. His voice trembled when he said, "But if I ask you to kill them, and not my people?" "To do so would be illogical." He waved his hands helplessly. "Gratitude?" he muttered. "No, you don't understand that, either." Then he cried suddenly, "But I am your friend, Robert!" "I do not understand 'friend,'" I said. I did understand "gratitude," a little. It was a reciprocal arrangement: I did what Peter wished, so long as I did not actively want to do otherwise, because he had done things for me. Very well, then we must not go back. It was very simple, but I knew that he could not comprehend it. I tried to explain it to him, however. But he only stared at me, with an expression on his face that I had never seen there before, and that, somehow, I did not like to see. It was disquieting, and so I hastened to the end that I knew was inevitable.
B. Despite being logical, Robert feels emotional about killing Peter. He is at odds with himself.
How does Ed feel about the Super Bowl? A. He doesn't care for football but enjoys all of the celebration around it B. He loves the ads even more than the game C. He likes the football and the time he spends with his daughter D. It is his favorite sporting event and he would never miss a football game
Divided we stand Sara lets the Lyft park itself in the drive, lets out a sigh, and tweets wish me luck plus some emojis before slipping her phone into a hoody pocket. Curtains twitch, and before she can get her bag out of the back Mom is there, right there next to her, their hands touching on the handle as they compete for control. "It's OK Mom, I got it." "You should have let us come pick you up." "It's fine, there was no need. I didn't want to put any-" "But you shouldn't be wasting money, not with how much rent you pay and-" Jesus. Not this already. "Mom. I can afford a cab ride. I'm not that much of a failure." Mom sighs, shoulders falling, looks at Sara directly. "I'm sorry honey." She looks old, Sara thinks, watching a resigned tiredness flicker across her face in a way she'd not noticed before. Like she's exhausted by conflict, surrendered to it. "Now, don't I get a hug?" Sara smiles. They hold each other for a few long seconds, rubbing and squeezing each other as the Lyft silently backs itself out of the driveway. When they part it's Mom's hand that's on the bag's handle. Inside she unwraps herself from scarves and layers, the heat in the house almost a shock after the cold air. Michigan in February. Mom is already halfway up the stairs, bag in tow, headed for her room. "Mom, just leave that and I'll…" "Your father's in the front room," she says, just before she disappears from view. "Go say hi." For a few seconds Sara is alone in the hallway, the smell of cooking meat coming from one doorway, the sound of rolling news from another. She shakes her head, kicks off shoes, tucks hair behind her ears. Braces herself. He's sat in the living room, reclining in the Lazy Boy. He doesn't hear her enter - her socked feet silent on the pile carpet floor, his attention lost in the screen that fills most of the wall. Fox News. She braces herself again. "Hey Dad." His head jerks to look at her. "Hey! When did you get here?" He starts to push himself up. "Don't get up Dad, it's fine. Really." She takes a seat on the couch. "I just got here, like two minutes ago." "Good flight?" "Yeah. Fine. Y'know. Same as always." He smiles back at her, nods knowingly. Their first words in nearly a year. Fine. So far. She relaxes. Of course it is. How bad could it be? "I thought I was gonna come pick you up from the airport?" "Ah, no. I got a cab. I didn't want to bother you." "Bother me? You think I'm too old and infirm to pick my own daughter up from the airport?" "No Dad, of course not." The war spills out of Fox News, casualty figures scrolling across monochrome drone footage, attack helicopters circling over Caracas apartment blocks, pundits with bronzed skin and immaculate blond hair smiling from four-way split screens. "So you just got a cab?" "Yeah." "How much did that cost?" "Not much. Really. I can afford-" "Cabs are expensive. You shouldn't be wasting your money." "It wasn't expensive. It wasn't a cab, it was a Lyft." "One of those driverless things?" "Yeah." Ad break. An elderly couple ride a tandem bicycle through a park, laughing and smiling in Instagram-perfect sunshine, as a calm, relaxing voice lists the potentially lethal side effects of a diabetes drug. Dad shakes his head. "I don't know how you can use those things. I don't trust them." "Dad, they're perfectly safe." "That's not what I mean. They're stealing people's jobs." There's a brief second, a fleeting moment, where Sara can bite her lip, let it go. She misses it. "But I thought it was immigrants that are stealing people's jobs?" 
"You might think it's funny little lady, but let me tell you - you remember Kyle and Max, Bill Cooper's boys? Live up off Lafayette, past the Checkers?" "Nope." "Well let me tell you," He shifts in the recliner, with some obvious pain and effort, to face her. "Both of 'em lost their jobs just this last year. Both of 'em were truckers. Both of 'em been driving trucks since high school. Now the damn trucks are driving themselves and they're both out of work. And they got families to support. Kids." "Well I'm sure they'll be fine." She regrets the sarcasm as soon as she hears it in her own voice, but she still can't stop herself, like it's expected, like it's part of the routine. Part of their schtick. "They just got to get themselves out there, huh Dad? Pull themselves up by their bootstraps. That's the American way, right?" "I'm glad you think this is funny, I really do. But what you New York types need to realise is-" "Ed!" Mom had appeared in the doorway. "Please! Both of you. No fighting today, please." "Sheryl-" "No. I don't want to hear you two as much as disagreeing about anything today, unless it's about the game. And even then you'd better keep it civil. Otherwise you can both go hungry. Understand?" Awkward pause. "Fine." "Sorry Mom." Sara turns back to the TV, to watching the war, to trying to work out which one it is. It had always been this way, ever since she was about thirteen. Up until then it just seemed like constant warmth, as though she didn't have any childhood concept of Dad apart from him getting home from work, then her sitting on his knee, eating cookies and watching football highlights until Mom came in and scolded them both for ruining their appetites before dinner. And then everything changed. Suddenly there was rap music and nose rings, sneaking out of the house to see her friends and not wanting to go to church. Suddenly he was no longer this lovable bear-man that ruffled her hair and gave her candy and explained defensive plays to her, but this huge obelisk of injustice that just wanted to crush her high school life into dust. It was constant warfare; every opinion she had became a battle, every decision she made a conflict. Getting away to college gave her escape, but bred resentment too; he hated that she went to New York, even though NYU was a good school, and her decision to stay there after she finished made things even worse. And then politics got all crazy, weirder then ever, and it became impossible for them to talk without it erupting into fights almost instantly. It was bad enough when the smart, young guy she liked was president and Dad constantly spewed his hate for him at her, but somehow it got even worse when the old, racist, women hating war-starter he liked won. Twice. So they didn't talk much now, barely online, never on the phone. Since her second year of school he'd never been to NYC to visit her. She came back when she could face it; sometimes for birthdays, sometimes for Thanksgiving. Maybe for Christmas. But somehow always, like now, for the Super Bowl. Like football was the one thing they still had, that one thing they could still sit in the same room together for. Shouting at players, screaming at the ref, laughing at the ads. Dad is in the bathroom, and Sara has had enough of Fox and whichever war this is. She reaches over and grabs the remote from the arm of his chair, and tries to find something else to watch. 
The government had scrapped all the rules about how the internet worked, and for most people like her parents it had suddenly gotten a lot cheaper to get their TV through Facebook, so all she can find is Fox, Breitbart News, Family Values TV, Info Wars, The Rebel, Glenn Beck, The Voice of America, America First, The Bible Today and lots of hunting and sports channels she doesn't even recognise. It's signed in to her Dad's FB account, and the last thing she wants is to try and log in on hers before he gets back from the john. Yeah. There was no way that would end up with them keeping it civil. In her pocket her phone vibrates, purrs against her skin, reminding her it's there, making sure she's not forgotten where her real friends are, that there's a world outside, beyond Dad and his TV. She takes it out and cradles it in her hands, the dark screen fleetingly reflecting back her face before it jumps awake at her very touch, opening up to bathe her in blue light, in comfort and warmth and the familiar. For the first time since she got home she feels herself relax. Dinner is Mom's meatloaf, with gravy and mashed potatoes. Cornbread and broccoli. Every mouthful tastes like nostalgia, and Sara can feel herself being encompassed by a bubble, this barrier of warm air and long forgotten simplicity enveloping her body, protecting her from the confusion of the world outside. "How's work, honey?" Mom asks. "Yeah, going OK." Sara works for a non-profit in Brooklyn that helps big organisations to transition to renewable energy. The pay is lousy but it feels important. "We just got the last few schools in the city to agree to put solar panels on their roofs. Big deal for us. I've been working on them for the last two years." Mom says nothing, just looks down at her plate. Dad finishes chewing his mouthful, swallows, wipes his beard with a napkin. Sighs, barely controlled anger simmering behind his face. "Solar panels cause cancer." Sara laughs, covering her mouth as she nearly chokes on chewed food. "What? No they don't Dad." "They do. The material they use to coat them reacts to sunlight, and produces an airborne carcinogen. It's based on a particular kind of rare earth. It's a bit like teflon. The Chinese have known about this for decades but have kept it covered up, because they-" "Dad, no. Just no. Trust me." "-because they are the world's largest manufacturers of solar panels. But the research has been done. The scientific evidence is out there. Look it up." "Look it up?" Sara shakes her head, not knowing where to even start. "Dad, who is telling you this stuff?" "No one is telling me it, Sara. I read it. It's in the news. I mean, really, I'm surprised you've not seen it. It was all over Facebook." "Maybe on yours, but it's not all over my Facebook." She doesn't have the heart to tell him she muted him six months ago. "Well, I don't read the news and I don't know any science," says Mom, "But I do know this: after they opened that solar farm up near Mary, within just a few years her and two of her neighbours had cancer. I mean I don't know anything for sure honey, but given the risk are you sure it's safe to be putting these panels on top of schools?" "There's no risk, Mom. None at all. Dad, I wish you'd stop believing everything you see on Facebook." "Well, maybe you should read things yourself before passing judgement on them." He pushes himself up from his seat, steps away from the table. 
Sara sighs, thinking she's upset him that much that he's actually abandoning his dinner, but he stops to grab something off a nearby shelf. His iPad. He heads back and takes his seat again. Oh, here we fucking go she thinks to herself. He stabs at the screen, looks for a while, stabs again. Flips it over and hands it to her. "Here. Read." Reluctantly, she takes it. His Facebook feed. Somewhere in the middle of it is the article, a very to the point CHINESE SOLAR PANELS CAUSE CANCER headline. But she can't even focus on it, because the rest of the screen is filled with distractions, looping videos and animated gifs, all adverts, and all for guns. Or security systems. Panic rooms. Back up power generators. Emergency rations. More guns. "Jesus Christ Dad, these ads!" "No blasphemy at the dinner table, please honey" says Mom. "What about them?" "Just… just look at them. They're terrifying. They're like… like adverts for the end of the world! You know they show you this stuff just to make you scared, right? Just to keep you paranoid." "They show me this stuff because they've got products to sell. That's how the economy works. That's how we create jobs. Godammit Sara, are you telling me you hate advertising now? Do you just hate everything about America?" Sara looks over to Mom, who looks like she's on the brink of tears. Suddenly she finds she's also lost the will to fight. Gently she closes the iPad and puts it down on the table, next to her plate. "No, of course not Dad. Maybe I'll read this later, after the game." After dinner she helps Mom clean-up, the two of them loading the dishwasher in near silence. She's leaning against the counter, scrolling through Twitter on her phone, when Mom finally speaks. "You should go easy on your father, you know. He's worried about a lot of things." "What things? Solar panel cancer?" "Don't joke Sara, I'm serious. There's a lot that bothers him. The state of the world. The future. All these damn wars." "We're all worried about all that, Mom." "He's worried about his health. I'm worried about his health. Probably more than he is." Sara looks up from her phone, genuine concern. "Is he OK?" "I don't know. He won't go to the doctor. Hasn't been in months. He's worried about his insurance." "I had no idea-" "Yeah, well you know your father. Doesn't like to talk about it. Doesn't want to burden other people with his problems. Hates pity." She pauses, looks out the window into the yard. When she turns back to Sara her eyes are damp. "This is why I was so excited about you coming back. Why he was so excited! I thought it'd take his mind of all this. He was so excited to see you. You know he loves watching the game with you, Sara." "I know. I'm sorry I-" "And the ads! The Super Bowl ads! You know how much he loves watching the new ads with you. It's a stupid thing, sure, but he loves it. Talks about it all the time. It's like a tradition to him. That's why he got so upset over dinner when you got angry at his ads. It's something special he has with you, he doesn't want to lose it." Sara slips her phone into her pocket, genuine guilt. Feels like a spoiled kid. "I didn't realise. I'm sorry." Mom smiles, walks over and kisses her on the forehead. "It's OK honey. Don't feel bad. Just go. Just go sit in there with him and watch some TV. Please." It's the second down on the Falcon's 60 yard line with 30 yards to cover, and the Lions need one touchdown to equalise. 
Sara and her Dad are sat in the front room, working their way through a family sized pack of Oreos, when the ad break starts. Dawn. Red skies over the desert. A Chevrolet truck pulls up next to a large, trailer. Low shot next to the front tire, as a cowboy booted foot drops down from the door, disturbing dust. Cut to: internal shot of the trailer, darkness split by morning light through the opening door. The figure enters, flicks on lights. The room is full of equipment, computers. The figure takes a seat, puts on a headset, thumbs on screens. Rests their hands on two large joysticks on the desk. Cut to: airfield, the desert. The distinctive silhouette of a Predator drone taxis across the screen, rising heat shimmering the air around it. Cut to: interior of the trailer. The faceless figure works controls, the joysticks, touch screens. Voiceover: They say you need to get up pretty early to get past America's finest. But the truth is we never sleep. Cut to: a uniformed guard on top of the border wall. He looks up and gives a salute to the drone as it soars above him, out and across the desert. Cut to: drone footage. Grainy, monochrome. A group of figures move slowly through the desert. The camera tracks them. Zooms in. The pilot punches buttons. The figures become highlighted by a computer overlay, text appears next to them. ILLEGAL ENTRY ATTEMPT SUSPECTED. GROUND PATROLS ALERTED. "Fuck this," says Sara, getting up from her seat. "Sara!" says Mom. "No I'm sorry, I can't. I can't sit here and watch this… this bullshit. This propaganda." She storms out of the room. "Sara!" Mom makes to get up. "No, just leave her," says Dad, gently, his eyes still fixed on the screen. "Just let her go." Out in the kitchen Sara sits at the table and wants to scream. She's angry, mainly with herself. She should never have fucking come here. She should have known better. There was never any fucking way anything good was going to come from this. As much as Mom wants to romanticise things, to make them sound cute and adorable, the truth is shit with Dad has never been right since she was a teenager. Too much resentment, too much bad blood, too much control and rebellion. They hadn't agreed on anything - they hadn't managed to have a simple conversation that didn't descend into fighting - in 15 goddamn years, and no amount of eating cookies and watching fucking Super Bowl ads on the TV was going to fix that. She sighs, wipes a tear from her cheek. On autopilot she takes her phone from her pocket, feels its reassuring warmth in her hand, and swipes open Twitter. Everybody seems to be talking about the same thing. omg im crying holy shit that chevrolet ad /fire emoji that was sooooo beautiful who knew chevrolet were so woke i can't believe they did that, so amazing Hang on, are they taking about the same ad? Hastily she opens her FB TV app, pulls up the game. The ad is just finishing. She hits the 10-second rewind icon a couple of times, then leans the phone on its side against a ketchup bottle. Cut to: drone footage. Grainy, monochrome. A group of figures move slowly through the desert. The camera tracks them. Zooms in. The pilot punches buttons. The figures become highlighted by a computer overlay, text appears next to them. ILLEGAL ENTRY ATTEMPT SUSPECTED. GROUND PATROLS ALERTED. Cut to: on the ground, in the desert. The group of figures are revealed to be a Mexican family, maybe two. Men, women, children. They look tired, hungry. 
They stop to rest, sipping the little water they have left from tattered plastic bottles. A little way away from the main group sits a small child, a girl. Maybe 8 years old. She is drawing shapes in the dust with a stick. She's drawn quite a bit it looks like, but from our angle we can't see what. Cut to: drone footage. The pilot is watching the group. As he tracks away from the main party to where the girl is sat, the camera reveals what she has drawn. A large, child's rendition of the American flag. Underneath it, in childlike handwriting, some words. 'I have a dream' Text flashes across the screen. ALERT CANCELLED. ALL PATROLS: STAND DOWN Cut to: the drone, banking and turning, flying away. Cut to: exterior shot of the trailer. The still anonymous pilot exits, walks back towards his jeep. Voiceover: Keeping America safe means never sleeping, but keeping America great means never forgetting who we are, and how we got here. The jeep starts up, pulls away from the camera in a cloud of dust. Fade to black. Chevrolet logo. White text against black. 'We know what really makes America great' Sara finds herself in the front room, sobbing. "Honey?" Dad pauses the TV, looks up at her. It looks like he's been crying too. "Sara?" "Did you - did you watch it?" "The Chevrolet ad?" "Yeah." "Yeah, we did." Embarrassed, he wipes a tear from his cheek. "It was… it was very moving." She falls on him, wrapping her arms around his neck, burying her face in his chest. "I'm sorry Dad. I'm so sorry. I didn't mean to be so mean-" "It's OK, honey. It really is." "No, no it's not. We always fight. And I know that's mainly my fault-" "Well, now, c'mon-" "No, it is. It's my fault. I got myself into thinking we can never agree on anything, that we can never see eye to eye. That we've got nothing in common anymore." She lifts her head to look up at him. "But I know that's wrong. That I shouldn't assume things about you. That there's still things that can bring us together." He grins back at her. "Like Super Bowl ads?" She laughs. "I guess. But you know what I mean, really." "I know honey. And I'm sorry too. I didn't mean what I said earlier. I know you don't really hate this country." He gestures to the couch next to him. "Why don't you sit down, huh? We can watch the rest of the game together." She straightens herself up, wipes her eyes. Suddenly feels a little self-conscious. "Sure. Let me just go freshen up first." "Of course honey." Mom and Dad watch Sara leave the room, and then look at each other. "Well." "Well indeed." "What did I tell you? You two just needed to spend some time together. Some quality time." "I guess so. What did I ever do to deserve a woman as hot and as smart as you, huh Sheryl?" Mom stands up and makes to leave the room, leaning down to kiss him as she passes. "I ask myself that question every day." Alone, seen only by the TV, Dad smiles to himself. He picks up the remote, but instead of hitting play, he finds himself hitting rewind. Cut to: drone footage. Grainy, monochrome. A group of figures move slowly through the desert. The camera tracks them. Zooms in. The pilot punches buttons. The figures become highlighted by a computer overlay, text appears next to them. ILLEGAL ENTRY ATTEMPT SUSPECTED. GROUND PATROLS ALERTED. Cut to: on the ground, in the desert. The group of figures are all men. Dirty, scruffy, furtive. Like they mean business. They carry guns, pistols, and assault rifles. Bad hombres. One of them pulls open a bag, looks inside. Cut to: close up of the inside of the bag.
Inside are packets of white powder. Suddenly, one of the party looks up, shouts something in Spanish. They all go to grab their guns. But it's too late. From three different directions, three different Chevrolet jeeps appear, screeching to a halt, kicking up dust. From them jump Border Patrol agents and Minutemen militia, guns drawn and ready. The gang of men don't even put up a fight. They know they're surrounded, they drop their weapons and pathetically raise their hands. All except one. The guy with the bag full of drugs. He's got nothing to lose. He reaches for his rifle. Cut to: Border Patrol agents, opening fire. Text flashes across the screen. ALERT CANCELLED. THREAT NEUTRALISED. Cut to: the drone, banking and turning, flying away. Cut to: exterior shot of the trailer. The still anonymous pilot exits, walks back towards his jeep. Voiceover: Keeping America safe means never sleeping, but keeping America great means never forgetting who we are, and what keeps us strong. The jeep starts up, pulls away from the camera in a cloud of dust. Fade to black. Chevrolet logo. White text against black. 'We know what really makes America great' Dad wipes another tear from his eye. "I think we're going to be OK," he says to himself. "I think we're going to be just fine." This article was originally published on TheLong+Short. Read the original article.
C. He likes the football and the time he spends with his daughter
What separates Infield and Morgan from the Normals? A. The Normals are cannibalistic B. The Normals are uncured C. The Normals are socially repulsive D. The Normals are delusional
Name Your Symptom By JIM HARMON Illustrated by WEISS [Transcriber's Note: This etext was produced from Galaxy Science Fiction May 1956. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Anybody who shunned a Cure needed his head examined—assuming he had one left! Henry Infield placed the insulated circlet on his head gently. The gleaming rod extended above his head about a foot, the wires from it leading down into his collar, along his spine and finally out his pants leg to a short metallic strap that dragged on the floor. Clyde Morgan regarded his partner. "Suppose—just suppose—you were serious about this, why not just the shoes?" Infield turned his soft blue eyes to the black and tan oxfords with the very thick rubber soles. "They might get soaked through." Morgan took his foot off the chair behind the desk and sat down. "Suppose they were soaked through and you were standing on a metal plate—steps or a manhole cover—what good would your lightning rod do you then?" Infield shrugged slightly. "I suppose a man must take some chances." Morgan said, "You can't do it, Henry. You're crossing the line. The people we treat are on one side of the line and we're on the other. If you cross that line, you won't be able to treat people again." The small man looked out the large window, blinking myopically at the brassy sunlight. "That's just it, Clyde. There is a line between us, a wall. How can we really understand the people who come to us, if we hide on our side of the wall?" Morgan shook his thick head, ruffling his thinning red hair. "I dunno, Henry, but staying on our side is a pretty good way to keep sane and that's quite an accomplishment these days." Infield whirled and stalked to the desk. "That's the answer! The whole world is going mad and we are just sitting back watching it hike along. Do you know that what we are doing is really the most primitive medicine in the world? We are treating the symptoms and not the disease. One cannibal walking another with sleeping sickness doesn't cure anything. Eventually the savage dies—just as all those sick savages out in the street will die unless we can cure the disease, not only the indications." Morgan shifted his ponderous weight uneasily. "Now, Henry, it's no good to talk like that. We psychiatrists can't turn back the clock. There just aren't enough of us or enough time to give that old-fashioned therapy to all the sick people." Infield leaned on the desk and glared. "I called myself a psychiatrist once. But now I know we're semi-mechanics, semi-engineers, semi-inventors, semi lots of other things, but certainly not even semi-psychiatrists. A psychiatrist wouldn't give a foetic gyro to a man with claustrophobia." His mind went back to the first gyro ball he had ever issued; the remembrance of his pride in the thing sickened him. Floating before him in memory was the vertical hoop and the horizontal hoop, both of shining steel-impervium alloy. Transfixed in the twin circles was the face of the patient, slack with smiles and sweat. But his memory was exaggerating the human element. The gyro actually passed over a man's shoulder, through his legs, under his arms. Any time he felt the walls creeping in to crush him, he could withdraw his head and limbs into the circle and feel safe. Steel-impervium alloy could resist even a nuclear explosion. The foetic gyro ball was worn day and night, for life. The sickness overcame him. He sat down on Morgan's desk. "That's just one thing, the gyro ball. 
There are so many others, so many." Morgan smiled. "You know, Henry, not all of our Cures are so—so—not all are like that. Those Cures for mother complexes aren't even obvious. If anybody does see that button in a patient's ear, it looks like a hearing aid. Yet for a nominal sum, the patient is equipped to hear the soothing recorded voice of his mother saying, 'It's all right, everything's all right, Mommy loves you, it's all right....'" "But is everything all right?" Infield asked intensely. "Suppose the patient is driving over one hundred on an icy road. He thinks about slowing down, but there's the voice in his ear. Or suppose he's walking down a railroad track and hears a train whistle—if he can hear anything over that verbal pablum gushing in his ear." Morgan's face stiffened. "You know as well as I do that those voices are nearly subsonic. They don't cut a sense efficiency more than 23 per cent." "At first, Clyde—only at first. But what about the severe case where we have to burn a three-dimensional smiling mother-image on the eyes of the patient with radiation? With that image over everything he sees and with that insidious voice drumming in his head night and day, do you mean to say that man's senses will only be impaired 23 per cent? Why, he'll turn violently schizophrenic sooner or later—and you know it. The only cure we have for that is still a strait jacket, a padded cell or one of those inhuman lobotomies." Morgan shrugged helplessly. "You're an idealist." "You're damned right!" Infield slammed the door behind him. The cool air of the street was a relief. Infield stepped into the main stream of human traffic and tried to adjust to the second change in the air. People didn't bathe very often these days. He walked along, buffeted by the crowd, carried along in this direction, shoved back in that direction. Most people in the crowd seemed to be Normals, but you couldn't tell. Many "Cures" were not readily apparent. A young man with black glasses and a radar headset (a photophobe) was unable to keep from being pushed against Infield. He sounded out the lightning rod, his face changing when he realized it must be some kind of Cure. "Pardon me," he said warmly. "Quite all right." It was the first time in years that anyone had apologized to Infield for anything. He had been one of those condemned Normals, more to be scorned than pitied. Perhaps he could really get to understand these people, now that he had taken down the wall. Suddenly something else was pushing against Infield, forcing the air from his lungs. He stared down at the magnetic suction dart clinging leechlike to his chest. Model Acrophobe 101-X, he catalogued immediately. Description: safety belt. But his emotions didn't behave so well. He was thoroughly terrified, heart racing, sweat glands pumping. The impervium cable undulated vulgarly. Some primitive fear of snake symbols? his mind wondered while panic crushed him. "Uncouple that cable!" the shout rang out. It was not his own. A clean-cut young man with mouse-colored hair was moving toward the stubble-chinned, heavy-shouldered man quivering in the center of a web of impervium cables stuck secure to the walls and windows of buildings facing the street, the sidewalk, a mailbox, the lamp post and Infield. Mouse-hair yelled hoarsely, "Uncouple it, Davies! Can't you see the guy's got a lightning rod? You're grounding him! "I can't," Davies groaned. "I'm scared!" Halfway down the twenty feet of cable, Mouse-hair grabbed on. "I'm holding it. Release it, you hear?" 
Davies fumbled for the broad belt around his thickening middle. He jabbed the button that sent a negative current through the cable. The magnetic suction dart dropped away from Infield like a thing that had been alive and now was killed. He felt an overwhelming sense of relief. After breathing deeply for a few moments, he looked up to see Davies releasing and drawing all his darts into his belt, making it resemble a Hydra-sized spiked dog collar. Mouse-hair stood by tensely as the crowd disassembled. "This isn't the first time you've pulled something like this, Davies," he said. "You weren't too scared to release that cable. You just don't care about other people's feelings. This is official ." Mouse-hair drove a fast, hard right into the soft blue flesh of Davies' chin. The big man fell silently. The other turned to Infield. "He was unconscious on his feet," he explained. "He never knew he fell." "What did you mean by that punch being official?" Infield asked while trying to arrange his feelings into the comfortable, familiar patterns. The young man's eyes almost seemed to narrow, although his face didn't move; he merely radiated narrowed eyes. "How long have you been Cured?" "Not—not long," Infield evaded. The other glanced around the street. He moistened his lips and spoke slowly. "Do you think you might be interested in joining a fraternal organization of the Cured?" Infield's pulse raced, trying to get ahead of his thoughts, and losing out. A chance to study a pseudo-culture of the "Cured" developed in isolation! "Yes, I think I might. I owe you a drink for helping me out. How about it?" The man's face paled so fast, Infield thought for an instant that he was going to faint. "All right. I'll risk it." He touched the side of his face away from the psychiatrist. Infield shifted around, trying to see that side of his benefactor, but couldn't manage it in good grace. He wondered if the fellow was sporting a Mom-voice hearing aid and was afraid of raising her ire. He cleared his throat, noticing the affectation of it. "My name's Infield." "Price," the other answered absently. "George Price. I suppose they have liquor at the Club. We can have a drink there, I guess." Price set the direction and Infield fell in at his side. "Look, if you don't drink, I'll buy you a cup of coffee. It was just a suggestion." Under the mousy hair, Price's strong features were beginning to gleam moistly. "You are lucky in one way, Mr. Infield. People take one look at your Cure and don't ask you to go walking in the rain. But even after seeing this , some people still ask me to have a drink." This was revealed, as he turned his head, to be a small metal cube above his left ear. Infield supposed it was a Cure, although he had never issued one like it. He didn't know if it would be good form to inquire what kind it was. "It's a cure for alcoholism," Price told him. "It runs a constant blood check to see that the alcohol level doesn't go over the sobriety limit." "What happens if you take one too many?" Price looked off as if at something not particularly interesting, but more interesting than what he was saying. "It drives a needle into my temple and kills me." The psychiatrist felt cold fury rising in him. The Cures were supposed to save lives, not endanger them. "What kind of irresponsible idiot could have issued such a device?" he demanded angrily. "I did," Price said. "I used to be a psychiatrist. I was always good in shop. This is a pretty effective mechanism, if I say so myself. 
It can't be removed without causing my death and it's indestructible. Impervium-shielded, you see." Price probably would never get crazed enough for liquor to kill himself, Infield knew. The threat of death would keep him constantly shocked sane. Men hide in the comforts of insanity, but when faced with death, they are often forced back to reality. A man can't move his legs; in a fire, though, he may run. His legs were definitely paralyzed before and may be again, but for one moment he would forget the moral defeat of his life and his withdrawal from life and live an enforced sanity. But sometimes the withdrawal was—or could become—too complete. "We're here." Infield looked up self-consciously and noticed that they had crossed two streets from his building and were standing in front of what appeared to be a small, dingy cafe. He followed Price through the screeching screen door. They seated themselves at a small table with a red-checked cloth. Infield wondered why cheap bars and restaurants always used red-checked cloths. Then he looked closer and discovered the reason. They did a remarkably good job of camouflaging the spots of grease and alcohol. A fat man who smelled of the grease and alcohol of the tablecloths shuffled up to them with a towel on his arm, staring ahead of him at some point in time rather than space. Price lit a cigarette with unsteady hands. "Reggie is studying biblical text. Cute gadget. His contact lenses are made up of a lot of layers of polarized glass. Every time he blinks, the amount of polarization changes and a new page appears. His father once told him that if he didn't study his Bible and pray for him, his old dad would die." The psychiatrist knew the threat on the father's part couldn't create such a fixation by itself. His eyebrows faintly inquired. Price nodded jerkily. "Twenty years ago, at least." "What'll you have, Georgie?" Reggie asked. The young man snubbed out his cigarette viciously. "Bourbon. Straight." Reggie smiled—a toothy, vacant, comedy-relief smile. "Fine. The Good Book says a little wine is good for a man, or something like that. I don't remember exactly." Of course he didn't, Infield knew. Why should he? It was useless to learn his Bible lessons to save his father, because it was obvious his father was dead. He would never succeed because there was no reason to succeed. But he had to try, didn't he, for his father's sake? He didn't hate his father for making him study. He didn't want him to die. He had to prove that. Infield sighed. At least this device kept the man on his feet, doing some kind of useful work instead of rotting in a padded cell with a probably imaginary Bible. A man could cut his wrists with the edge of a sheet of paper if he tried long enough, so of course the Bible would be imaginary. "But, Georgie," the waiter complained, "you know you won't drink it. You ask me to bring you drinks and then you just look at them. Boy, do you look funny when you're looking at drinks. Honest, Georgie, I want to laugh when I think of the way you look at a glass with a drink in it." He did laugh. Price fumbled with the cigarette stub in the black iron ashtray, examining it with the skill of scientific observation. "Mr. Infield is buying me the drink and that makes it different." Reggie went away. Price kept dissecting the tobacco and paper. Infield cleared his throat and again reminded himself against such obvious affectations. "You were telling me about some organization of the Cured," he said as a reminder. 
Price looked up, no longer interested in the relic of a cigarette. He was suddenly intensely interested and intensely observant of the rest of the cafe. "Was I? I was? Well, suppose you tell me something. What do you really think of the Incompletes?" The psychiatrist felt his face frown. "Who?" "I forgot. You haven't been one of us long. The Incompletes is a truer name for the so-called Normals. Have you ever thought of just how dangerous these people are, Mr. Infield?" "Frankly, no," Infield said, realizing it was not the right thing to say but tiring of constant pretense. "You don't understand. Everyone has some little phobia or fixation. Maybe everyone didn't have one once, but after being told they did have them for generations, everyone who didn't have one developed a defense mechanism and an aberration so they would be normal. If that phobia isn't brought to the surface and Cured, it may arise any time and endanger other people. The only safe, good sound citizens are Cured. Those lacking Cures—the Incompletes— must be dealt with ." Infield's throat went dry. "And you're the one to deal with them?" "It's my Destiny." Price quickly added, "And yours, too, of course." Infield nodded. Price was a demagogue, young, handsome, dynamic, likable, impassioned with his cause, and convinced that it was his divine destiny. He was a psychopathic egotist and a dangerous man. Doubly dangerous to Infield because, even though he was one of the few people who still read books from the old days of therapy to recognize Price for what he was, he nevertheless still liked the young man for the intelligence behind the egotism and the courage behind the fanaticism. "How are we going to deal with the Incompletes?" Infield asked. Price started to glance around the cafe, then half-shrugged, almost visibly thinking that he shouldn't run that routine into the ground. "We'll Cure them whether they want to be Cured or not—for their own good." Infield felt cold inside. After a time, he found that the roaring was not just in his head. It was thundering outside. He was getting sick. Price was the type of man who could spread his ideas throughout the ranks of the Cured—if indeed the plot was not already universal, imposed upon many ill minds. He could picture an entirely Cured world and he didn't like the view. Every Cure cut down on the mental and physical abilities of the patient as it was, whether Morgan and the others admitted it or not. But if everyone had a crutch to lean on for one phobia, he would develop secondary symptoms. People would start needing two Cures—perhaps a foetic gyro and a safety belt—then another and another. There would always be a crutch to lean on for one thing and then room enough to develop something else—until everyone would be loaded down with too many Cures to operate. A Cure was a last resort, dope for a malignancy case, euthanasia for the hopeless. Enforced Cures would be a curse for the individual and the race. But Infield let himself relax. How could anyone force a mechanical relief for neurotic or psychopathic symptoms on someone who didn't want or need it? "Perhaps you don't see how it could be done," Price said. "I'll explain." Reggie's heavy hand sat a straight bourbon down before Price and another before Infield. Price stared at the drink almost without comprehension of how it came to be. He started to sweat. "George, drink it." The voice belonged to a young woman, a blonde girl with pink skin and suave, draped clothes. 
In this den of the Cured, Infield thought half-humorously, it was surprising to see a Normal—an "Incomplete." But then he noticed something about the baby she carried. The Cure had been very simple. It wasn't even a mechanized half-human robot, just a rag doll. She sat down at the table. "George," she said, "drink it. One drink won't raise your alcohol index to the danger point. You've got to get over this fear of even the sight or smell of liquor." The girl turned to Infield. "You're one of us, but you're new, so you don't know about George. Maybe you can help if you do. It's all silly. He's not an alcoholic. He didn't need to put that Cure on his head. It's just an excuse for not drinking. All of this is just because a while back something happened to the baby here—" she adjusted the doll's blanket—"when he was drinking. Just drinking, not drunk. "I don't remember what happened to the baby—it wasn't important. But George has been brooding about it ever since. I guess he thinks something else bad will happen because of liquor. That's silly. Why don't you tell him it's silly?" "Maybe it is," Infield said softly. "You could take the shock if he downed that drink and the shock might do you good." Price laughed shortly. "I feel like doing something very melodramatic, like throwing my drink—and yours—across the room, but I haven't got the guts to touch those glasses. Do it for me, will you? Cauterizing the bite might do me good if I'd been bitten by a rabid dog, but I don't have the nerve to do it." Before Infield could move, Reggie came and set both drinks on a little circular tray. He moved away. "I knew it. That's all he did, just look at the drink. Makes me laugh." Price wiped the sweat off his palms. Infield sat and thought. Mrs. Price cooed to the rag doll, unmindful of either of them now. "You were explaining," the psychiatrist said. "You were going to tell me how you were going to Cure the Incompletes." "I said we were going to do it. Actually you will play a greater part than I, Doctor Infield." The psychiatrist sat rigidly. "You didn't think you could give me your right name in front of your own office building and that I wouldn't recognize you? I know some psychiatrists are sensitive about wearing Cures themselves, but it is a mark of honor of the completely sane man. You should be proud of your Cure and eager to Cure others. Very eager." "Just what do you mean?" He already suspected Price's meaning. Price leaned forward. "There is one phobia that is so wide-spread, a Cure is not even thought of—hypochondria. Hundreds of people come to your office for a Cure and you turn them away. Suppose you and the other Cured psychiatrists give everybody who comes to you a Cure?" Infield gestured vaguely. "A psychiatrist wouldn't hand out Cures unless they were absolutely necessary." "You'll feel differently after you've been Cured for a while yourself. Other psychiatrists have." Before Infield could speak, a stubble-faced, barrel-chested man moved past their table. He wore a safety belt. It was the man Price had called Davies, the one who had fastened one of his safety lines to Infield in the street. Davies went to the bar in the back. "Gimme a bottle," he demanded of a vacant-eyed Reggie. He came back toward them, carrying the bottle in one hand, brushing off rain drops with the other. He stopped beside Price and glared. Price leaned back. The chair creaked. Mrs. Price kept cooing to the doll. "You made me fall," Davies accused. Price shrugged. "You were unconscious. You never knew it." 
Sweat broke out on Davies' forehead. "You broke the Code. Don't you think I can imagine how it was to fall? You louse!" Suddenly, Davies triggered his safety belt. At close range, before the lines could fan out in a radius, all the lines in front attached themselves to Price, the ones at each side clung to their table and the floor, and all the others to the table behind Infield. Davies released all lines except those on Price, and then threw himself backward, dragging Price out of his chair and onto the floor. Davies didn't mind making others fall. They were always trying to make him fall just so they could laugh at him or pounce on him; why shouldn't he like to make them fall first? Expertly, Davies moved forward and looped the loose lines around Price's head and shoulders and then around his feet. He crouched beside Price and shoved the bottle into the gasping mouth and poured. Price twisted against the binding lines in blind terror, gagging and spouting whiskey. Davies laughed and tilted the bottle more. Mrs. Price screamed. "The Cure! If you get that much liquor in his system, it will kill him!" She rocked the rag doll in her arms, trying to soothe it, and stared in horror. Infield hit the big man behind the ear. He dropped the bottle and fell over sideways on the floor. Fear and hate mingled in his eyes as he looked up at Infield. Nonsense, Infield told himself. Eyes can't register emotion. Davies released his lines and drew them in. He got up precariously. "I'm going to kill you," he said, glaring at Infield. "You made me fall worse than Georgie did. I'm really going to kill you." Infield wasn't a large man, but he had pressed two hundred and fifty many times in gym. He grabbed Davies' belt with both hands and lifted him about six inches off the floor. "I could drop you," the psychiatrist said. "No!" Davies begged weakly. "Please!" "I'll do it if you cause more trouble." Infield sat down and rubbed his aching forearms. Davies backed off in terror, right into the arms of Reggie. The waiter closed his huge hands on the acrophobe's shoulders. " You broke the Code all the way," Reggie said. "The Good Book says 'Thou shouldn't kill' or something like that, and so does the Code." "Let him go, Reggie," Price choked out, getting to his feet. "I'm not dead." He wiped his hand across his mouth. "No. No, you aren't." Infield felt an excitement pounding through him, same as when he had diagnosed his first case. No, better than that. "That taste of liquor didn't kill you, Price. Nothing terrible happened. You could find some way to get rid of that Cure." Price stared at him as if he were a padded-cell case. "That's different. I'd be a hopeless drunk without the Cure. Besides, no one ever gets rid of a Cure." They were all looking at Infield. Somehow he felt this represented a critical point in history. It was up to him which turn the world took, the world as represented by these four Cured people. "I'm afraid I'm for less Cures instead of more, Price. Look, if I can show you that someone can discard a Cure, would you get rid of that—if I may use the word— monstrous thing on your head?" Price grinned. Infield didn't recognize its smugness at the time. "I'll show you." He took off the circlet with the lightning rod and yanked at the wire running down into his collar. The new-old excitement within was running high. He felt the wire snap and come up easily. He threw the Cure on the floor. "Now," he said, "I am going out in that rain storm. There's thunder and lightning out there. 
I'm afraid, but I can get along without a Cure and so can you." "You can't! Nobody can!" Price screamed after him. He turned to the others. "If he reveals us, the Cause is lost. We've got to stop him for good. We've got to go after him." "It's slippery," Davies whimpered. "I might fall." Mrs. Price cuddled her rag doll. "I can't leave the baby and she mustn't get wet." "Well, there's no liquor out there and you can study your text in the lightning flashes, Reggie. Come on." Running down the streets that were tunnels of shining tar, running into the knifing ice bristles of the rain, Henry Infield realized that he was very frightened of the lightning. There is no action without a reason, he knew from the old neglected books. He had had a latent fear of lightning when he chose the lightning rod Cure. He could have picked a safety belt or foetic gyro just as well. He sneezed. He was soaked through, but he kept on running. He didn't know what Price and Reggie planned to do when they caught him. He slipped and fell. He would soon find out what they wanted. The excitement was all gone now and it left an empty space into which fear rushed. Reggie said, "We shall make a sacrifice." Infield looked up and saw the lightning reflected on the blade of a thin knife. Infield reached toward it more in fascination than fear. He managed to get all his fingers around two of Reggie's. He jerked and the knife fell into Infield's palm. The psychiatrist pulled himself erect by holding to Reggie's arm. Staggering to his feet, he remembered what he must do and slashed at the waiter's head. A gash streaked across the man's brow and blood poured into his eyes. He screamed. "I can't see the words!" It was his problem. Infield usually solved other people's problems, but now he ran away—he couldn't even solve his own. Infield realized that he had gone mad as he held the thin blade high overhead, but he did need some kind of lightning rod. Price (who was right behind him, gaining) had been right. No one could discard a Cure. He watched the lightning play its light on the blade of his Cure and he knew that Price was going to kill him in the next moment. He was wrong. The lightning hit him first. Reggie squinted under the bandage at the lettering on the door that said INFIELD & MORGAN and opened the door. He ran across the room to the man sitting at the desk, reading by the swivel light. "Mr. Morgan, your partner, Mr. Infield, he—" "Just a moment." Morgan switched on the room lights. "What were you saying?" "Mr. Infield went out without his Cure in a storm and was struck by lightning. We took him to the morgue. He must have been crazy to go out without his Cure." Morgan stared into his bright desk light without blinking. "This is quite a shock to me. Would you mind leaving? I'll come over to your place and you can tell me about it later." Reggie went out. "Yes, sir. He was struck by lightning, struck dead. He must have been crazy to leave his Cure...." The door closed. Morgan exhaled. Poor Infield. But it wasn't the lightning that killed him, of course. Morgan adjusted the soundproofing plugs in his ears, thinking that you did have to have quite a bit of light to read lips. The thunder, naturally, was what had killed Infield. Loud noise—any noise—that would do it every time. Too bad Infield had never really stopped being one of the Incompletes. Dangerous people. He would have to deal with them.
B. The Normals are uncured
By how much do they outperform standard BERT?
### Introduction

With ever-increasing amounts of data available, there is a growing need for tooling that speeds up the processing of this data and, eventually, the task of making sense of it. Because fully-automated tools to extract meaning from any given input to any desired level of detail have yet to be developed, this task is still at least supervised, and often (partially) resolved by humans; we refer to these humans as knowledge workers. Knowledge workers are professionals who have to go through large amounts of data and consolidate, prepare and process it on a daily basis. This data can originate from highly diverse portals and resources and, depending on type or category, needs to be channelled through specific down-stream processing pipelines. We aim to create a platform for curation technologies that can deal with such data from diverse sources and that provides natural language processing (NLP) pipelines tailored to particular content types and genres, rendering this initial classification an important sub-task. In this paper, we work with the dataset of the 2019 GermEval shared task on hierarchical text classification BIBREF0 and use the predefined set of labels to evaluate our approach to this classification task. Deep neural language models have recently evolved into a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT; BIBREF1) outperformed previous state-of-the-art methods by a large margin on various NLP tasks. We adopt BERT for text-based classification and extend the model with additional metadata provided in the context of the shared task, such as author, publisher, publishing date, etc. A key contribution of this paper is the inclusion of additional (meta) data using a state-of-the-art approach for text processing. Being a transfer learning approach, it supports the task with external knowledge in a setup where relatively little training data is available. More precisely, we enrich BERT, as our pre-trained text representation model, with knowledge graph embeddings that are based on Wikidata BIBREF2, add metadata provided by the shared task organisers (title, author(s), publishing date, etc.) and collect additional information on authors for this particular document classification task. As we do not rely on text-based features alone but also utilize document metadata, we consider this a document classification problem. The proposed approach is an attempt to solve this problem, using the single dataset provided by the organisers of the shared task as an example.

### Related Work

A central challenge in work on genre classification is the definition of a mode of representation that is both rigid (for theoretical purposes) and flexible (for practical purposes) and that is able to model various dimensions and characteristics of arbitrary text genres. The size of the challenge can be illustrated by the observation that there is no clear agreement among researchers regarding actual genre labels or their scope and consistency. There is a substantial amount of previous work on the definition of genre taxonomies, genre ontologies, or sets of labels BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Since we work with the dataset provided by the organisers of the 2019 GermEval shared task, we adopt their hierarchy of labels as our genre palette. In the following, we focus on related work more relevant to our contribution.
With regard to text and document classification, BERT (Bidirectional Encoder Representations from Transformers) BIBREF1 is a pre-trained embedding model that yields state-of-the-art results in a wide span of NLP tasks, such as question answering, textual entailment and natural language inference learning BIBREF8. BIBREF9 are among the first to apply BERT to document classification. Acknowledging challenges like incorporating syntactic information or predicting multiple labels, they describe how they adapt BERT for the document classification task. In general, they introduce a fully-connected layer over the final hidden state that contains one neuron each representing an input token, and further optimize the model by choosing softmax classifier parameters to weight the hidden state layer. They report state-of-the-art results in experiments based on four popular datasets. An approach exploiting Hierarchical Attention Networks is presented by BIBREF10. Their model introduces a hierarchical structure to represent the hierarchical nature of a document. BIBREF10 derive attention on the word and sentence level, which makes the attention mechanisms react flexibly to long- and short-distance context information during the building of the document representations. They test their approach on six large-scale text classification problems and outperform previous methods substantially by increasing accuracy by about 3 to 4 percentage points. BIBREF11 (the organisers of the GermEval 2019 shared task on hierarchical text classification) use shallow capsule networks, reporting that these work well on structured data, for example in the field of visual inference, and outperform CNNs, LSTMs and SVMs in this area. They use the Web of Science (WOS) dataset and introduce a new real-world scenario dataset called Blurb Genre Collection (BGC). With regard to external resources to enrich the classification task, BIBREF12 experiment with external knowledge graphs to enrich embedding information in order to ultimately improve language understanding. They use structural knowledge represented by Wikidata entities and their relations to each other. A mix of large-scale textual corpora and knowledge graphs is used to further train language representations using ERNIE BIBREF13, considering lexical, syntactic, and structural information. BIBREF14 propose and evaluate an approach to improve text classification with knowledge from Wikipedia. Based on a bag-of-words approach, they derive a thesaurus of concepts from Wikipedia and use it for document expansion. The resulting document representation improves the performance of an SVM classifier for predicting text categories.

### Dataset and Task

Our experiments are modelled on the GermEval 2019 shared task and deal with the classification of books. The dataset contains 20,784 German books. Each record has:

- A title.
- A list of authors. The average number of authors per book is 1.13, with most books (14,970) having a single author and one outlier with 28 authors.
- A short descriptive text (blurb) with an average length of 95 words.
- A URL pointing to a page on the publisher's website.
- An ISBN number.
- The date of publication.

The books are labeled according to the hierarchy used by the German publisher Random House. This taxonomy includes a mix of genre and topical categories. It has eight top-level genre categories, 93 on the second level and 242 on the most detailed third level.
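For illustration only, such a record could be held in a small data structure like the sketch below; the field names are ours and do not follow any official schema of the shared task.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BookRecord:
    """One record of the GermEval 2019 blurb dataset (field names are illustrative)."""
    title: str
    authors: List[str]        # on average 1.13 authors per book
    blurb: str                # short descriptive text, ~95 words on average
    url: str                  # page on the publisher's website
    isbn: str
    publication_date: str     # e.g. "2017-03-20"
    labels: List[str] = field(default_factory=list)  # hierarchical genre labels (up to 3 levels)
```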
The eight top-level labels are `Ganzheitliches Bewusstsein' (holistic awareness/consciousness), `Künste' (arts), `Sachbuch' (non-fiction), `Kinderbuch & Jugendbuch' (children and young adults), `Ratgeber' (counselor/advisor), `Literatur & Unterhaltung' (literature and entertainment), `Glaube & Ethik' (faith and ethics), `Architektur & Garten' (architecture and garden). We refer to the shared task description for details on the lower levels of the ontology. Note that we do not have access to any of the full texts. Hence, we use the blurbs as input for BERT. Given the relatively short average length of the blurbs, this considerably decreases the amount of data points available for a single book. The shared task is divided into two sub-task. Sub-task A is to classify a book, using the information provided as explained above, according to the top-level of the taxonomy, selecting one or more of the eight labels. Sub-task B is to classify a book according to the detailed taxonomy, specifying labels on the second and third level of the taxonomy as well (in total 343 labels). This renders both sub-tasks a multi-label classification task. ### Experiments As indicated in Section SECREF1, we base our experiments on BERT in order to explore if it can be successfully adopted to the task of book or document classification. We use the pre-trained models and enrich them with additional metadata and tune the models for both classification sub-tasks. ### Experiments ::: Metadata Features In addition to the metadata provided by the organisers of the shared task (see Section SECREF3), we add the following features. Number of authors. Academic title (Dr. or Prof.), if found in author names (0 or 1). Number of words in title. Number of words in blurb. Length of longest word in blurb. Mean word length in blurb. Median word length in blurb. Age in years after publication date. Probability of first author being male or female based on the Gender-by-Name dataset. Available for 87% of books in training set (see Table TABREF21). The statistics (length, average, etc.) regarding blurbs and titles are added in an attempt to make certain characteristics explicit to the classifier. For example, books labeled `Kinderbuch & Jugendbuch' (children and young adults) have a title that is on average 5.47 words long, whereas books labeled `Künste' (arts) on average have shorter titles of 3.46 words. The binary feature for academic title is based on the assumption that academics are more likely to write non-fiction. The gender feature is included to explore (and potentially exploit) whether or not there is a gender-bias for particular genres. ### Experiments ::: Author Embeddings Whereas one should not judge a book by its cover, we argue that additional information on the author can support the classification task. Authors often adhere to their specific style of writing and are likely to specialize in a specific genre. To be precise, we want to include author identity information, which can be retrieved by selecting particular properties from, for example, the Wikidata knowledge graph (such as date of birth, nationality, or other biographical features). A drawback of this approach, however, is that one has to manually select and filter those properties that improve classification performance. This is why, instead, we follow a more generic approach and utilize automatically generated graph embeddings as author representations. 
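As an illustration of the metadata features listed above, the following sketch computes one possible ten-dimensional feature vector per book (two dimensions for gender). The exact feature order, scaling, and the gender lookup are assumptions made for this example, since the paper does not specify its encoding.

```python
import statistics
from datetime import date

def metadata_features(record, gender_prob=(0.5, 0.5), today=date(2019, 6, 1)):
    """Compute handcrafted metadata features for one book record.

    `record` is any object with .title, .authors, .blurb and .published
    attributes; `gender_prob` is a hypothetical (p_male, p_female) estimate
    looked up from a name-gender resource.
    """
    title_words = record.title.split()
    blurb_words = record.blurb.split()
    word_lengths = [len(w) for w in blurb_words] or [0]
    has_title = int(any(t in a for a in record.authors for t in ("Dr.", "Prof.")))
    age_years = (today - record.published).days / 365.25
    return [
        len(record.authors),              # number of authors
        has_title,                        # academic title found in author names
        len(title_words),                 # number of words in title
        len(blurb_words),                 # number of words in blurb
        max(word_lengths),                # length of longest word in blurb
        statistics.mean(word_lengths),    # mean word length in blurb
        statistics.median(word_lengths),  # median word length in blurb
        age_years,                        # age in years after publication
        gender_prob[0],                   # P(first author male)
        gender_prob[1],                   # P(first author female)
    ]
```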
Graph embedding methods create dense vector representations for each node such that distances between these vectors predict the occurrence of edges in the graph. The node distance can be interpreted as topical similarity between the corresponding authors. We rely on pre-trained embeddings based on PyTorch BigGraph BIBREF15. The graph model is trained on the full Wikidata graph, using a translation operator to represent relations. Figure FIGREF23 visualizes the locality of the author embeddings. To derive the author embeddings, we look up Wikipedia articles that match the author names and map the articles to the corresponding Wikidata items. If a book has multiple authors, the embedding of the first author for which an embedding is available is used. Following this method, we are able to retrieve embeddings for 72% of the books in the training and test set (see Table TABREF21). ### Experiments ::: Pre-trained German Language Model Although the pre-trained BERT language models are multilingual and, therefore, support German, we rely on a BERT model that was exclusively pre-trained on German text, as published by the German company Deepset AI. This model was trained from scratch on the German Wikipedia, news articles and court decisions. Deepset AI reports better performance for the German BERT models compared to the multilingual models on previous German shared tasks (GermEval2018-Fine and GermEval 2014). ### Experiments ::: Model Architecture Our neural network architecture, shown in Figure FIGREF31, resembles the original BERT model BIBREF1 and combines text and non-text features with a multilayer perceptron (MLP). The BERT architecture uses 12 hidden layers, each consisting of 768 units. To derive contextualized representations from the textual features, the book title and blurb are concatenated and then fed through BERT. To minimize GPU memory consumption, we limit the input length to 300 tokens (which is shorter than BERT's hard-coded limit of 512 tokens). Only 0.25% of blurbs in the training set consist of more than 300 words, so this cut-off can be expected to have a minor impact. The non-text features are generated in a separate preprocessing step. The metadata features are represented as a ten-dimensional vector (two dimensions for gender, see Section SECREF10). Author embedding vectors have a length of 200 (see Section SECREF22). In the next step, all three representations are concatenated and passed into an MLP with two layers of 1024 units each and ReLU activation function. During training, the MLP is supposed to learn a non-linear combination of its input representations. Finally, the output layer performs the actual classification. In the softmax output layer, each unit corresponds to a class label. For sub-task A the output dimension is eight. We treat sub-task B as a standard multi-label classification problem, i.e., we neglect any hierarchical information. Accordingly, the output layer for sub-task B has 343 units. When the value of an output unit is above a given threshold, the corresponding label is predicted, whereby thresholds are defined separately for each class. The optimum was found by varying the threshold in steps of $0.1$ in the interval from 0 to 1. ### Experiments ::: Implementation Training is performed with batch size $b=16$, dropout probability $d=0.1$, learning rate $\eta = 2 \times 10^{-5}$ (Adam optimizer) and 5 training epochs. These hyperparameters are the ones proposed by BIBREF1 for BERT fine-tuning.
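The following PyTorch sketch outlines the described architecture: the BERT representation of title and blurb is concatenated with the ten metadata features and the 200-dimensional author embedding and passed through a two-layer MLP before the output layer. The model name `bert-base-german-cased`, the use of the pooled output, and returning raw per-label scores (to be thresholded per class) are assumptions made for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class BookClassifier(nn.Module):
    """Sketch of the text + metadata + author-embedding model (dimensions follow the text)."""

    def __init__(self, num_labels=8, bert_name="bert-base-german-cased",
                 meta_dim=10, author_dim=200, hidden=1024):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)  # 12 layers, 768 units each
        self.mlp = nn.Sequential(
            nn.Linear(768 + meta_dim + author_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.out = nn.Linear(hidden, num_labels)  # 8 units for sub-task A, 343 for B

    def forward(self, input_ids, attention_mask, meta, author_emb):
        # Contextualized representation of "title + blurb" (inputs truncated to 300 tokens).
        text = self.bert(input_ids=input_ids,
                         attention_mask=attention_mask).pooler_output
        fused = torch.cat([text, meta, author_emb], dim=-1)
        return self.out(self.mlp(fused))  # per-label scores, thresholded per class
```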
We did not experiment with hyperparameter tuning ourselves, except for optimizing the classification threshold for each class separately. All experiments are run on a GeForce GTX 1080 Ti (11 GB), whereby a single training epoch takes up to 10 minutes. If there is no label for which the prediction probability is above the classification threshold, the most popular label (Literatur & Unterhaltung) is used as the prediction. ### Experiments ::: Baseline To compare against a relatively simple baseline, we implemented a Logistic Regression classifier chain from scikit-learn BIBREF16. This baseline uses the text only and converts it to TF-IDF vectors. As with the BERT model, it performs 8-class multi-label classification for sub-task A and 343-class multi-label classification for sub-task B, ignoring the hierarchical aspect of the labels. ### Results Table TABREF34 shows the results of our experiments. As prescribed by the shared task, the essential evaluation metric is the micro-averaged F1-score. All scores reported in this paper are obtained using models that are trained on the training set and evaluated on the validation set. For the final submission to the shared task competition, the best-scoring setup is used and trained on the training and validation sets combined. We are able to demonstrate that incorporating metadata features and author embeddings leads to better results for both sub-tasks. With an F1-score of 87.20 for task A and 64.70 for task B, the setup using BERT-German with metadata features and author embeddings (1) outperforms all other setups. Looking at the precision score only, BERT-German with metadata features (2) but without author embeddings performs best. In comparison to the baseline (7), our evaluation shows that deep transformer models like BERT considerably outperform the classical TF-IDF approach, also when the input is the same (using the title and blurb only). BERT-German (4) and BERT-Multilingual (5) use only text-based features (title and blurb), whereby the text representations of the BERT layers are directly fed into the classification layer. To establish the information gain of author embeddings, we train a linear classifier on author embeddings, using this as the only feature. The author-only model (6) is exclusively evaluated on books for which author embeddings are available, so the numbers are based on a slightly smaller validation set. With F1-scores of 61.99 and 32.13 for sub-tasks A and B, respectively, the author model yields the worst results. However, the information contained in the author embeddings helps improve performance, as the results of the best-performing setup show. When evaluating the best model (1) only on books for which author embeddings are available, we find a further improvement with respect to the F1-score (task A: from 87.20 to 87.81; task B: 64.70 to 65.74).
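A minimal scikit-learn sketch of the described baseline, a TF-IDF representation fed into a Logistic Regression classifier chain, might look as follows; the vectorizer settings and the toy data are invented purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy data standing in for "title + blurb" texts and their genre label sets.
texts = ["Ein Krimi aus Berlin ...", "Ratgeber für gesunde Ernährung ..."]
labels = [["Literatur & Unterhaltung"], ["Ratgeber"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)  # multi-hot label matrix

baseline = make_pipeline(
    TfidfVectorizer(max_features=50_000),
    ClassifierChain(LogisticRegression(max_iter=1000)),
)
baseline.fit(texts, Y)
print(mlb.inverse_transform(baseline.predict(texts)))
```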
Thus, this task can be considered a low-resource scenario, where including related data (such as author embeddings and author identity features such as gender and academic title) or making certain characteristics more explicit (title and blurb length statistics) helps. Furthermore, it should be noted that the blurbs do not provide summary-like abstracts of the book, but instead act as teasers intended to persuade the reader to buy the book. As reflected by their recent popularity, deep transformer models considerably outperform the Logistic Regression baseline that uses a TF-IDF representation of the blurbs. However, for the simpler sub-task A, the performance difference between the baseline model and the multilingual BERT model is only six points, even though the baseline consumes only a fraction of BERT's computing resources. The BERT model trained for German (from scratch) outperforms the multilingual BERT model by under three points for sub-task A and over six points for sub-task B, confirming the findings reported by the creators of the BERT-German models for earlier GermEval shared tasks. While precision and recall are generally on par for sub-task A, for sub-task B there is a relatively large discrepancy between precision and recall scores. In all setups, precision is considerably higher than recall. We attribute this to the fact that for some of the 343 labels in sub-task B, there are very few instances. This means that if the classifier predicts a certain label, it is likely to be correct (i.e., high precision), but for many instances carrying low-frequency labels, that low-frequency label is never predicted (i.e., low recall). As mentioned in Section SECREF30, we neglect the hierarchical nature of the labels and flatten the hierarchy (with a depth of three levels) to a single set of 343 labels for sub-task B. We expect this to have a negative impact on performance, because it allows a scenario in which, for a particular book, we predict a label from the first level and also a non-matching label from the second level of the hierarchy. The example Coenzym Q10 (Table TABREF36) demonstrates this issue. While the model correctly predicts the second-level label Gesundheit & Ernährung (health & diet), it misses the corresponding first-level label Ratgeber (advisor). Given the model's tendency towards higher precision rather than recall in sub-task B, as a post-processing step we may want to take the most detailed label (on the third level of the hierarchy) to be correct and manually fix the higher-level labels accordingly. We leave this for future work and note that we expect this to improve performance, but it is hard to say by how much. We hypothesize that an MLP with more and bigger layers could improve the classification performance. However, this would increase the number of parameters to be trained, and would thus require more training data (such as the book's text itself, or a summary of it). ### Conclusions and Future Work In this paper we presented a way of enriching BERT with knowledge graph embeddings and additional metadata. Exploiting the linked knowledge that underlies Wikidata improves performance for our task of document classification. With this approach we improve the standard BERT models by up to four percentage points in accuracy. Furthermore, our results reveal that task-specific information such as author names and publication metadata substantially improves the classification compared to a text-only approach.
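The hierarchy-repair post-processing discussed above (taking a detailed predicted label to be correct and fixing the higher-level labels) could, in principle, also be automated. A toy sketch, using an invented child-to-parent excerpt of the label hierarchy:

```python
# Hypothetical child -> parent relations from the label hierarchy (excerpt only).
PARENT = {
    "Gesundheit & Ernährung": "Ratgeber",
    "Ratgeber": None,
}

def repair_hierarchy(predicted):
    """Add all ancestors of every predicted label to make the label set hierarchy-consistent."""
    repaired = set(predicted)
    for label in predicted:
        parent = PARENT.get(label)
        while parent:
            repaired.add(parent)
            parent = PARENT.get(parent)
    return sorted(repaired)

print(repair_hierarchy({"Gesundheit & Ernährung"}))
# -> ['Gesundheit & Ernährung', 'Ratgeber']  (the Coenzym Q10 case above)
```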
Especially when metadata feature engineering is non-trivial, adding task-specific information from an external knowledge source such as Wikidata can help significantly. The source code of our experiments and the trained models are publicly available. Future work comprises the use of hierarchical information in a post-processing step to refine the classification. Another promising approach to tackle the low-resource problem for sub-task B would be to use label embeddings. Many labels are similar and semantically related, and the relationships between labels can be modeled in a joint embedding space BIBREF17. However, a severe challenge with regard to setting up label embeddings is the quite heterogeneous category system that can often be found in use online. The Random House taxonomy (see above) includes category names, i.e., labels, that relate to several different dimensions including, among others, genre, topic and function. This work is done in the context of a larger project that develops a platform for curation technologies. Under the umbrella of this project, the classification of pieces of incoming text content according to an ontology is an important step that allows the routing of this content to particular, specialized processing workflows, including parameterising the included pipelines. Depending on content type and genre, it may make sense to apply OCR post-processing (for digitized books from centuries ago), machine translation (for content in languages unknown to the user), information extraction, or other particular and specialized procedures. Constructing such a generic ontology for digital content is a challenging task, and classification performance is heavily dependent on the input data (both in shape and amount) and on the nature of the ontology to be used (in the case of this paper, the one predefined by the shared task organisers). In the context of our project, we continue to work towards a maximally generic content ontology, and at the same time towards applied classification architectures such as the one presented in this paper. ### Acknowledgments This research is funded by the German Federal Ministry of Education and Research (BMBF) through the “Unternehmen Region”, instrument “Wachstumskern” QURATOR (grant no. 03WKDA1A). We would like to thank the anonymous reviewers for comments on an earlier version of this manuscript. Table 1: Availability of additional data with respect to the dataset (relative numbers in parentheses). Figure 1: Visualization of Wikidata embeddings for Franz Kafka (3D projection with PCA). Nearest neighbours in original 200D space: Arthur Schnitzler, E.T.A. Hoffmann and Hans Christian Andersen. Figure 2: Model architecture used in our experiments. Text features are fed through BERT, concatenated with metadata and author embeddings and combined in a multilayer perceptron (MLP). Table 2: Evaluation scores (micro avg.) on the validation set with respect to the features used for classification. The model with BERT-German, metadata and author embeddings yields the highest F1-scores on both tasks. Table 3: Book examples and their correct and predicted labels. Hierarchical label level is in parentheses. Figure 3: In sub-task B, for many labels low in the hierarchy only a small number of training samples exist, making it more difficult to predict the correct label.
up to four percentage points in accuracy
Regarding Laura Miller, when was the MR Liver plain + contrast agent examination conducted? Choose the correct answer from the following options: A. 05/28/2020 B. 06/06/20 C. 06/09/20 D. 06/11/20 E. 06/13/20
### Patient Report 0 **Dear colleague, ** We are writing to report on the outpatient treatment of Mrs. Laura Miller, born on 04/03/1967, on 05/22/2020. **Diagnosis**: Osteoporotic vertebral body sintering (lumbar vertebra 2 and thoracic vertebra 8). **Medical History**: Mrs. Miller presents with pain in her back, legs radiating, after fall on the back 6 weeks ago. The complaints were progressive with intermittent paresthesia in both legs. **Other Diagnoses** (not fully collectible): - Status post apoplexy **Current Medication (**not ascertainable): - Blood pressure medication - Osteoporosis medication - No anticoagulation. **Physical Examination: Lumbar spine:** Skin without pathological findings. No redness, no evidence of infection. Tapping pain over thoracic spine/lumbar spine, no compression pain, no torsion pain, no pressure pain over spinous process. Radiation of pain into the right and left leg, paravertebral muscle hardness. No paresthesias in the genital area, no breech anesthesia, no peripheral neurologic deficits, No bladder or rectal dysfunction. Peripheral Circulation/Motor function/Sensitivity intact. Strength grade on all sides: Hip Flex/Ex: 5/5, Knee Flex/Ex: 5/5, Foot extensor muscles of the lower leg/flexor muscles of the lower leg: 3/5. Big toe extensor muscles of the lower leg/big toe flexor muscles of the lower leg: 2/5. **Thoracic Spine 2 levels from 05/22/2020, Lumbar spine in 2 planes from 05/22/2020:** Clinical indication: Status post fall Question: new fracture? Preliminary images: none comparable **Findings** 1\) [Thoracic Spine]{.underline}: Multiple thoracic vertebral bodies exhibit decreased height, most notably at the central region where a measurement of approximately 17 mm suggests significant height loss and potential acute fracture. Additionally, there are endplate impressions in individual vertebrae of the lower thoracic spine. Aortic sclerosis is present, along with degenerative changes throughout the thoracic vertebrae. The osseous structure presents osteoporotic features. A suspected hemangioma is identified in a vertebral body of the lower thoracic spine. 2\) [Lumbar Spine]{.underline}: In a presumed five-segment lumbar spine, the L1 vertebra shows a subtle reduction in height with a questionable endplate impression. Osteoporotic features are evident in the bony structure. **Assessment:** Multiple fractures are evident in both thoracic vertebrae and the first lumbar vertebra, some of which may be acute. MRI is recommended for further evaluation. Osseous structure displays pronounced osteoporotic features. Grade III esophageal varices present without definitive high-risk stigmata. Varices also noted at the gastroesophageal junction, classified as GOV 1 according to Sarin\'s classification. Band ligation of the varices is not performed, as no unambiguous source of bleeding is identifiable and a significant portion of the stomach remains outside the field of view. **Recommendation**: Terlipressin, monitor surveillance, Erythromycin **Computed Tomography Thoracic spine from 05/22/2020:** Fracture at the base plate of lumbar vertebra 2 with involvement of the posterior margin. Left lateral, no significant reduction in height of the vertebral body. No tension of the spine. Suspicion of new small fracture also at the cover plate at thoracic vertebra 8. Multiple, older, osteoporotic compression fractures at the thoracic spine and upper lumbar spine. **Additional Finding:** Liver cirrhosis with multiple nodules, low ascites, and portal vein congestion. 
Splenomegaly. If not already known, further workup of liver lesions is recommended. Hydrops of the gallbladder. **Current Recommendations**: There is a general indication for admission of the patient and further diagnostics before surgical treatment of the fractures. Mrs. Miller is generally opposed to surgical care. She was thoroughly informed about the risks (progression, cross-section, death). Re-presentation with current bone densitometry and update of osteoporosis medication, as well as current holospinal MRI. In the meantime, analgesia as needed using Acetaminophen 500mg 1-1-1 under gastric protection. **Esophagogastroduodenoscopy from 05/22/2020:** [Esophagus]{.underline}: Unobstructed intubation of the esophageal opening was achieved under direct visualization. In the upper third of the esophagus, multiple prominent varices protrude into the lumen, unaltered by air insufflation. In the middle third, there are areas with whitish overlying material that do not resemble the typical white nipple sign. Despite meticulous inspection, no active bleeding sites were identified. The Z-line reveals isolated minor erosions. Cardiac sphincter closure is complete. [Stomach]{.underline}: Full distension of the gastric lumen was accomplished with air insufflation. The major curvature of the stomach contained food mixed with hematin. The mucosal surface was similarly coated with hematin, but no active bleeding was discernible in the visualized areas. Peristaltic movement was widespread. Upon retroflexion, pronounced varices were noted near the cardiac region on the lesser curvature. The pylorus was unremarkable and easily traversable. [Duodenum]{.underline}: Adequate distension of the duodenal bulb was achieved, providing a clear view up to the descending part of the duodenum. Overall, the mucosa appeared normal with minor remnants of hematin, and no source of bleeding was identified. ### Patient Report 1 **Dear colleague, ** We report to you about Mrs. Laura Miller, born on 04/03/1967, who was in our inpatient treatment from 05/27/2020 to 06/22/2020. **Diagnoses**: Initial diagnosis of hepatocellular carcinoma. - MRI liver: disseminated HCC foci in all segments, the largest foci is localized in segments 5 / 7 / 8 - Hydropic, decompensated liver cirrhosis Child B, first diagnosis: 05/20, ethyltoxic genesis - Anemia requiring transfusion - EGD of 05/28/20: sophageal varices III° without risk signs, rubber band ligation; cardia varices I°, Histoacryl injection - EGD of 06/13/20: Residual varices in the esophagus, application of 3 rubber band ligations, injection of 0.5 ml. Histoacryl; portal hypertensive gastropathy - Transfusion of 2 ECs - Fresh osteoporotic thoracic vertebra 8 fracture - Kyphoplasty thoracic vertebra 8 under C-arm fluoroscopy - Portal hypertension with bypass circuits - Splenomegaly - Cholecystolithiasis - Arterial hypertension - Osteoporosis - Status post stroke - Allergies: None known **Physical Examination:** Patient in mildly reduced general and normal nutritional status (BMI 20.3). - Lungs: Vesicular breath sound, no pathological secondary sounds. SpO2 96%. - Heart: Pure, rhythmic. Systolic with maximum at Erb\'s point and continuation into the carotids. HR: 87/min. BP: 124/54mmHg - Abdomen: Lively bowel sounds, no tenderness, no guarding, no resistance, no peritonism. Soft abdomen. Liver not enlarged, palpable. Renal bed not palpable. - Extremities: Edema of the lower extremities on both sides, foot pulses bilaterally well palpable. - Spine: No tap pain. 
- Orienting neurological examination: Right leg weakness, known paresis of the extensor muscles of the lower leg/flexor muscles of the lower leg. **Therapy and Progression:** Mrs. Miller was taken over for suspected upper GI bleeding. Initially, the patient had presented with increasing back pain since approximately 6 weeks at status post fall in the Park Clinic on 04/26/2020. The patient was known there for her stroke in 2016. On the day of admission to the Park Clinic, normochromic normocytic anemia (Hemoglobin 3 g/dL) was noticed, which is why the transfer to our clinic was made. On inpatient admission, the patient presented in slightly reduced general condition. She reported having black stools once. In addition, Ms. Miller had coffee ground-like emesis once. Dyspnea, angina pectoris complaints, and B symptoms were denied. There were no problems with micturition. Recently, there were no abnormalities in bowel movements. So far, no colonoscopy has been performed. There were no known intestinal diseases in the family. [Noxae]{.underline}: Ex-nicotine use (since 1996, previously cumulative ca. 3 PY), occasional alcohol consumption (probably abstinent for about 5 years). Laboratory results showed that the patient had elevated infectious parameters. The urinalysis was unremarkable. X-ray chest showed no clear picture of pneumonia. In a first emergency esophagogastroduodenoscopy on 05/28/2020, esophageal varices °III were found without clear signs of risk. Furthermore, varices in the area of the cardia (GOV 1 after Sarin) were seen. When the source of bleeding was inconclusive, it was referred to a banding initially waived. In a renewed esophagogastroduodenoscopy in the morning of 05/30/2020, 2 ampoules of Histoacryl were applied to the cardia varices. In addition, the picture of an incipient portal-hypertensive gastropathy. Antibiotic intravenous therapy with ceftriaxone was initiated. At 06/13/2020, a re-esophagogastroduodenoscopy took place, during which a renewed twofold banding of the esophageal varices was performed with injection of 0.5 ml Histoacryl. Abdominal ultrasound showed a picture of liver cirrhosis with splenomegaly and perihepatic ascites. In addition, the liver showed multiple echo-poor nodes in the right lobe and a suspicious 2.4x3.6x4.4cm echo-poor area with a halo in SII. This was followed by an MRI of the liver, in which the HCC in segment II was confirmed by imaging. Multiple additional arterial hypervascularized areas/round foci in all liver lobes without definite washout. There is no evidence of suspicious nodular changes on CT chest/abdomen/pelvis. At an in-house liver conference, systemic therapy (Lenvatinib or Sorafenib in Child B7) was recommended. Due to the back pain, a holospinal MRI was performed, which showed a subacute cover plate compression fracture in the thoracic vertebra 8 as well as multiple older compression fractures of the thoracic spine and upper lumbar spine. The colleagues of neurosurgery were consulted, who gave the indication for surgery. On 06/14/2020, the planned surgery with kyphoplasty thoracic vertebra 8 under C-arm fluoroscopy could be performed without complications. Postoperatively, the patient remained circulatory stable. Due to auscultatory suspicion of aortic valve stenosis, further clarification was performed by cardiac echography, showing no higher-grade valvular vitiation. We are transferring Mrs. Miller in improved general condition to the Senior Citizens\' Residence Seaview. 
If you have any questions, please do not hesitate to contact us. **Addition:** **Ultrasound of the upper abdomen on 05/27/2020:** Limited assessable due to meteorism. **Liver**: Liver dimensions are within normal limits, measuring 15.9 cm in the craniocaudal axis. Echotexture demonstrates inhomogeneous granularity. There is hepatic margin convexity and nodular surface appearance. Rarification of hepatic veins. Segment III reveals two hypoechoic lesions measuring 3 cm and 1 cm in diameter. Portal vein demonstrates orthograde flow with a maximum velocity of 17 cm/s. **Gallbladder:** Gallbladder is partially contracted; its wall appears unremarkable without sonographic evidence of cholecystitis. No tenderness elicited upon sonographic examination. **Biliary Tract**: Intrahepatic bile ducts are patent. Common hepatic duct measures 6 mm in diameter. Common bile duct appears transiently dilated up to 9 mm and is otherwise unremarkable as far as can be visualized prepapillary. **Pancreas**: Limited visualization of the pancreas; however, the visible parenchyma appears homogeneously echoic. **Spleen**: Spleen is enlarged, measuring 13 x 4 cm, with homogeneous parenchyma. **Kidneys**: Kidneys are morphologically unremarkable, without evidence of pelvicalyceal system dilation. **Abdominal Vessels**: Aorta is partially visualized and appears within normal calibers. Intestinal/Peritoneal Cavity: No evidence of free intraperitoneal fluid. Urinary Bladder/Genitalia: Urinary bladder is adequately distended, appearing unremarkable upon limited assessment. Uterus is not visualized. **Virology from 05/28/2020:** - SARS CoV-2 PCR PCR negative Geq/ml - Findings: No detection of SARS-CoV-2 by PCR in the submitted material. **Chest X-ray a.p. from 05/28/2020:** Limited assessability in supine position, malrotation. The diaphragmatic crests are smooth. The marginal sinuses are free of effusions and calluses. Heart and mediastinum lie cryptically. The aorta is sclerosed. Cranialization of the vessels as well as slightly elevated vascular markings in the supine position, especially in the right upper field. Dystelectasis on the right. Sharply defined vertical shadowing in the left upper field. The upper mediastinum is narrow, the trachea is in the midline and is not constricted. Degenerative changes in the cervical spine. Overlying foreign material. **Assessment**: Sharply defined vertical shadowing in the left upper field. Dystelectasis on the right side. Conventional radiographic examination No evidence of a mass. No effusion. **Esophagogastroduodenoscopy of 05/28/2020:** **Esophagus**: Successful intubation of the esophageal orifice under direct visualization. Multiple intraluminal protrusions noted in the upper third of the esophagus. Non-collapsible variceal strands observed upon air insufflation, beginning from the middle third. Whitish proliferations seen at multiple sites, not consistent with typical white nipple signs. No evidence of active bleeding on close inspection. Z-line observed with isolated minor erosions. Cardiac sphincter fully competent. **Stomach**: Gastric lumen fully distended under air insufflation. Corpus predominantly contains hematin-laden food remnants. Mucosal surface also stained with hematin but without visible active bleeding. Peristalsis noted throughout. Distinct coronary vasculature observed on the lesser curvature. Pylorus unremarkable, offering no resistance to passage. **Duodenum**: Bulbus duodeni well-formed. Pars descendens duodeni visualized clearly. 
Overall mucosa appears unremarkable, with scattered hematin remnants observed without an identifiable bleeding source. **Assessment**: Esophageal varices graded as °III, with no definitive high-risk stigmata. Varices also noted in the cardia, classified as GOV 1 according to Sarin\'s classification. Ligation of varices was not performed due to the absence of an identifiable bleeding source and incomplete visualization of the gastric lumen. **Ultrasound of the abdomen on 05/29/2020:** **Quality of Exam**: Limited due to patient non-cooperation and meteorism. **Liver**: Liver size is paradoxically reported both as normal at 15.9 cm and enlarged at 18.7 cm. Margins are rounded. Echotexture is markedly inhomogeneous with nodular surface. Multiple hypoechoic nodules are present in the right lobe, along with a suspicious hypoechoic area measuring 2.4 x 3.6 x 4.4 cm with peripheral halo in segment II. Hepatic veins are rarified. Portal vein shows orthograde flow with a vmax of 28 cm/s. **Gallbladder**: Morphologically unremarkable with no wall thickening. Cholelithiasis noted with concretions measuring at least 2 to 1.6 cm. **Biliary Tract:** Intrahepatic bile ducts are not dilated; Common hepatic duct measures up to 8.5 mm and common bile duct measures 6.6 mm in diameter. **Pancreas**: Partially visualized; adequacy of assessment is compromised. **Spleen**: Enlarged with homogeneous internal echotexture. **Kidneys**: Morphologically unremarkable; no evidence of hydronephrosis. **Abdominal Vessels:** Aorta is not dilated. **Gastrointestinal**: Perihepatic ascites noted. Both small and large intestines appear unremarkable upon limited assessment. **Urinary Bladder/Genitalia:** Bladder is moderately filled and unremarkable in shape and size. **Assessment**: Limited study due to patient non-cooperation and meteorism. Findings are suggestive of liver cirrhosis and grade I ascites. Additional findings include suspected hepatic space-occupying lesions, splenomegaly, and cholelithiasis. Mild dilation of DHC and DC observed without signs of intrahepatic cholestasis. **Virology from 06/01/2020:** **Parameter** **Result** **Interpretation** --------------- ------------ -------------------- Anti HAV IgG 0.73 negative Anti HAV IgM \<0.1 negative **Interpretation:** Serologically no evidence of fresh or expired infection with Hepatitis A virus, no immunity. **Parameter** **Result** **Interpretation** --------------- ------------ -------------------- HBs antigen 0.21 negative Anti HBs \<0.1 negative Anti HBc 0.1 negative **Interpretation:** Serologically no evidence of acute, chronic or expired Hepatitis B virus infection. No immunity. **Parameter** **Result** **Interpretation** --------------- ------------ -------------------- Anti HCV 0.06 negative **Interpretation:** Serologically no evidence of hepatitis C virus infection. At possible fresh infection resubmission in 2-4 weeks and HCV PCR recommended. **MRI total spine plain from 06/04/2020:** **Technique**: T2 Dixon Sagittal and T2 Axial MRI Sequences. Coverage extends from the craniocervical junction to the sacrum. **Findings:** **General Spine:** Full extent from craniocervical junction to sacrum visualized. Conus medullaris appropriately located at T12-L1 level. Myelon demonstrates uniform width and homogeneous signal. Evaluation of thoracolumbar transition and lumbar spine is compromised by artifact superimposition from ascites. 
**Cervical Spine:** Irregular alignment of the posterior vertebral body margins noted, with evidence of disc protrusions and ligamentum flavum hypertrophy. Focal T2 hyperintensity observed at C5 level. No evidence of prevertebral soft tissue proliferation. **Thoracic Spine**: Maintained alignment of the posterior vertebral body margins. Multiple anterior endplate compression fractures noted at T5, T8, T9, T11, T12 levels. Focal T2 hyperintensity near the anterior endplate of T8 involving the posterior margin, indicative of a non-displaced fracture without spinal canal compromise. Hypertrophic facet joint arthrosis at T10-T11 levels resulting in relative spinal narrowing. Bilateral pleural effusions noted, more pronounced on the right, with a maximum width of approximately 2 cm. No evidence of significant neuroforaminal stenosis. **Lumbar Spine:** Maintained alignment of posterior vertebral body margins. Known anterior endplate compression fractures at L1 and baseplate compression fracture at L2. No evidence of pathological T2 edema within the vertebral bodies, although assessment is limited due to superimposed artifacts from ascites. Spinal canal dimensions appear adequate throughout. Moderate fatty degeneration of sacral bone noted. **Assessment**: Evaluation limited due to ascites-related artifacts. Subacute anterior endplate compression fracture at T8, along with several other likely older compression fractures in the thoracic and upper lumbar spine. Bilateral pleural effusions observed. Multiple neuroforaminal narrowings as detailed above. **MR Liver plain + contast agent from 06/06/2020** **Findings:** 1) [Lesion 1]{.underline} - Size of the lesion 41 mm - Segment 2 - Behavior arterial strongly enriching: yes - Portal venous early washing out: yes - Pseudocapsule: yes - Behavior delayed leaching: yes - pseudocapsule macrovascular invasion: no 2) [Lesion 2]{.underline} - Size of the lesion 104 mm - Segment 5 / 7 / 8 - Behavior arterial strongly enriching: yes - Portal venous early washing out: no - Pseudocapsule: no - Behavior delayed washing out: no - Pseudocapsule: no - Macrovascular invasion: yes **Comments:** - MRI with Gadovist intravenous. - Multiple other satellite foci in all liver segments. - Signs of liver cirrhosis with nodular liver parenchyma and hypertrophy of the left lobe. - Cholecystolithiasis and gallbladder hydrops. No cholestasis. - Varices of the esophagus and fundus. Splenomegaly. Ascites. Pleural effusions on both sides. - Lymph node (approximately 8 mm) between the small curvature of the Stomach and S1 of the liver. - Axial hernia. **Assessment:** Milan fulfilled**.** Dissiminated HCC foci in all segments, the largest foci being in segments 5 / 7 / 8 localized. Portal hypertension with bypass circulation and splenomegaly. Ascites and pleural effusions. **Microbiology from 06/09/2020:** [Material]{.underline}: Ascites in blood culture bottles [Microscopic]{.underline}: No cells, no germs - Anaerobic culture negative after 48 hours - So far, no growth in the anaerobic cultures. The cultures are incubated for a total of 5 days. In case of growth of anaerobes we will send you a follow-up report. - No growth after 48 hours **Esophagogastroduodenoscopy of 06/11/2020:** **Esophagus**: In the distal esophagus, multiple band-like ulcerations as well as residual varices with risk signs that may not completely pass air insufflation. Z-line without erosions. **Stomach**: Mosaic-like occupancy of the gastric mucosa. 
With inversion the known small-curved lateral cardiavarii, which is hard on palpation with closed forceps after histoacryl injection. Directly next to it, another cardiavarice can now be seen, which has not yet been injected with histoacryl and is soft on palpation. **Duodenum**: Endoscopic therapy: Injection of 0.5 mL Histoacryl (+0.5 mL Lipoidol) in the new cardiavarice. Application of 3 rubber band ligatures to the residual varices in the esophagus. **Assessment:** Residual varices in the esophagus, application of 3 rubber band ligations; portal hypertensive gastropathy **Spine-whole: 2 planes from 06/13/2020** The perpendicular of C7 protrudes about 15 mm laterally to the left of sacral vertebra 1 in the anterior-posterior image and about 9.3 cm in front of sacral vertebra 1 in the lateral ray path. Slight left convex scoliosis thoracolumbally with thoracic counter-swing (Cobb angle \< 10° in each case). The lungs are unremarkable as far as technically assessable. **Assessment**: decompensated positive sagittal spinal imbalance. There is no relevant lateral trunk overhang. **Critical Findings Report:** A conspicuous single cell population of cells with partial signet ring cell character is detected in the smears. A cell block is prepared from the remaining liquid material for further typing of these cells. A follow-up report will follow. **Thoracic spine in 2 planes from 06/15/2020:** **Findings**: Thoracic vertebra 8: Post-kyphoplasty status with notable improvement in vertebral height, now measuring 21 mm compared to a preoperative height of 13 mm. Mild straightening of the vertebral column observed at this level. Thoracic vertebra 9: Known older anterior endplate collapse. Thoracolumbar spine: Multisegmental height reduction in vertebral bodies consistent with osteoporotic changes. No signs of contrast extravasation. Additional Finding: Pre-existing calcified structure projecting onto the left upper abdomen; likely unrelated to current surgical site. **Assessment**: Unremarkable postoperative imaging following kyphoplasty of T8. No evidence of postoperative sintering or newly identifiable fractures. Overall, the surgical intervention appears successful in increasing vertebral height and stabilizing the fracture site. **CT Chest/Abdomen/Pelvis with contrast agent from 06/18/2020:** **Technique**: Multislice spiral CT of the chest, abdomen, and pelvis was performed post-bolus intravenous injection of 120 ml of Imeron 400. Imaging conducted in arterial, portal venous, and venous phases. Oral contrast agent administered with Micropaque 1:7 in water and Gastrolux 1:33 in water. Thin-slice reconstructions and secondary coronal and sagittal reformats were performed. **Chest**: Presence of struma nodosa. Bilateral minor pleural effusions with adjacent atelectasis, more pronounced on the right and extending into the interlobar region. No signs of infiltrative changes. Isolated small nodular opacities in the right lung. Few small bullae noted. Mediastinal lymph nodes mildly enlarged up to 0.5 cm; axillary and hilar nodes are not enlarged. No pericardial effusion observed. **Abdomen/Pelvis**: Known esophageal and fundal varices present. Liver demonstrates nodular changes in the context of known Child-Pugh B stage cirrhosis. A solid hepatic cellular carcinoma lesion in segment II and diffuse HCC nodules in segments V/VII/VIII visualized, corroborating prior MRI findings. They show pronounced arterial enhancement and central washout. Splenomegaly noted. Adrenal glands unremarkable. 
Renal and urinary systems are inconspicuous. No intestinal motility abnormalities detected. Marked ascites present; no pathologically enlarged abdominal lymph nodes noted upon limited assessment. **Skeletal**: Moderate coxarthrosis bilaterally. An old, minimally displaced fracture of the right 7th rib noted. Advanced degenerative changes in thoracic vertebrae 10, 12, and lumbar spine. Post-vertebroplasty status at thoracic vertebra 9. Hemangioma at thoracic vertebra 11. **Assessment**: - Marked ascites in the setting of liver cirrhosis with multifocal HCC lesions, as corroborated by prior MRI. No evidence of extrahepatic or lymphatic spread. - Bilateral minor pleural effusions with associated atelectasis. - Skeletal findings include moderate coxarthrosis and degenerative changes in the spine. Overall, the scan provides vital information that aligns with and elaborates upon existing clinical and imaging data. **Histology**: **Pathology from 06/19/2020:** [Clinical Data:]{.underline} Hepatocellular carcinoma, hydropic decompensated liver cirrhosis Child B, [Extraction date:]{.underline} 06/13/2020 [Material:]{.underline} 1 Liquid material 7 ml light yellow [Editing]{.underline}: Papanicolaou and MGG staining \+ Protein precipitation \+ Erythrocytes \+ Lymphocytes (+) Granulocytes Eosinophils \+ Histiocytic cell forms \+ Mesothelium \+ Active mesothelium **Other**: Single mononuclear cells with large, eccentric nuclei with nucleoli and a narrow cytoplasmic space, partly with signet ring cells. **Supplementary findings from 06/19/2020** [Processing]{.underline}: Cell block, HE **Microscopic:** As announced, from the remaining liquid material a cell block was prepared. In the HE stain only isolated evidence of mononuclear Cells and some blood. No cell atypia. **Critical Findings Report:** After examination of the remaining liquid material in the cell block no Extension of the initial findings in the absence of further diagnostic cell material. The finding is thus based exclusively on the Smear material: - Detection of a single-cell population consisting of cells with partial signet ring cell character. Differentially it could be The mesothelium may be a reactive change of the approaching mesothelium. Cells of an epithelial neoplasia are not visible on the present material. to be ruled out with certainty. **Diagnostic classification:** Suspicious **Current Recommendations:** - An appointment at our outpatient clinic to start therapy was organized for 06/26/2020. - An appointment for a health department check-up with varicose vein status survey and, if necessary, repeat rubber band ligation has been scheduled for 07/22/2020. Please come to the endoscopy on this day at 08:30 am fasting with current lab results. incl. coagulation, signed consent form as well as SARS-CoV-2 PCR not older than 48h. ### Patient Report 2 **Dear colleague, ** We report to you on Mrs. Laura Miller, born 04/03/1967, whom we examined on 06/08/2020 in the course of a consultation. **Consilar Request:** - Liver cirrhosis, Child-Pugh B, ethyltoxic genesis - HCC - Laboratory albumin: 2.6 - Nutritional advice requested **Nutritional counseling in cirrhosis of the liver:** - Albumin at 2.6 - 70kg at admission (stable weight in recent years). 
- Height: 1.72m - BMI falsified by ascites - Patient reports that she always a \"bad eater\" - She reports to eat less due to numerous medication intake - Patient is noticeably overwhelmed and seems very burdened by diagnosis **Assessment:** - Protein malnutrition with inadequate oral nutrition - Patient appears desperate and overwhelmed, questionable compliance **Recommendations: ** - High-calorie food for more choices (already ordered) - High-calorie drinks (contains more protein) - Incorporate protein-rich snacks such as yogurt, sippy cups, crispbread - with cheese. - A high-energy, high-protein food choice was made with the patient\'s discussed in detail - Contact details were handed out ### Patient Report 3 **Dear colleague, ** We are writing to inform you about Mrs. Laura Miller, born on 04/03/1967, who was under our inpatient care from 08/21/2020 to 08/23/2020. **Diagnoses**: - MRI of the liver: disseminated HCC foci in all segments, the largest foci is localized in segments 5 / 7 / 8 <!-- --> - Hydropic, decompensated liver cirrhosis Child B, first diagnosis: 05/20, ethyltoxic genesis - Anemia requiring transfusion - EGD of 05/28/20: esophageal varices III° without risk signs, rubber band ligation; cardia varices I°, Histoacryl injection - EGD of 06/13/20: residual varices in the esophagus, application of 3 rubber band ligations, injection of 0.5 ml. Histoacryl; portal hypertensive gastropathy - Transfusion of 2 ECs - Fresh osteoporotic thoracic vertebra 8 fracture - Kyphoplasty thoracic vertebra 8 under C-arm fluoroscopy - Portal hypertension with bypass circuits - Splenomegaly - Cholecystolithiasis - Arterial hypertension - Osteoporosis - Status post stroke - Allergies: None known **Current Presentation:** Mrs. Miller presented electively for gastroscopy for variceal screening with continuation of banding therapy due to esophageal variceal bleeding. **Medical History**: For a detailed medical history, we refer to previous reports from our department. In summary, we present a liver cirrhosis due to ethyl toxicity leading to the development of multifocal HCC. Similar to the liver board decision of 06/13/20, a recommendation for systemic therapy with Lenvatinib or Sorafenib was made in the setting of partially compensated Child B7 cirrhosis with multifocal HCC in both lobes of the liver. **Therapy and Course:** Upon admission, the patient was in age-appropriate general condition and largely symptom-free. There were no signs of acute infection, jaundice, encephalopathic symptoms, or GI bleeding. No irregularities in bowel movements or urination were reported. The patient denied abdominal pain and dyspnea. There were no known allergies. On the day of admission, an uncomplicated gastroscopy was performed, including the application of 4 rubber band ligations for residual esophageal varices. Post-interventional pain was adequately controlled with double-standard doses of Pantoprazole and intravenous analgesic therapy. The further inpatient course was uneventful, and the patient tolerated the post-interventional diet without signs of GI bleeding. Based on laboratory findings and clinical evaluation, particularly with regressed ascites, a compensated Child A6 cirrhosis was confirmed. Therefore, a re-presentation at our interdisciplinary liver board was initiated for discussion of potential treatment options in the context of compensated liver function. As per the consensus recommendation from the liver board, a follow-up gastroscopy is scheduled within the next two weeks. 
Depending on the variceal status, systemic therapy with Atezolizumab/Bevacizumab or Lenvatinib will follow. Throughout the monitoring period, the patient remained stable in terms of circulation and hemoglobin levels. Therefore, on 08/23/20, we discharged Mrs. Miller for outpatient follow-up care. The patient was thoroughly informed about reasons that necessitate immediate re-presentation. Please note the listed procedural appointments. **Physical Examination:** Awake, alert, oriented - Heart: Regular heart tones, no murmurs - Lungs: Clear vesicular breath sounds, no crackles or wheezes - Abdomen: Soft, non-tender, no masses, normal bowel sounds in all quadrants, palpable firm liver edge under the rib cage, no palpable spleen enlargement, non-painful renal angle - Extremities: Good peripheral pulses, no edema - Neurology: No focal neurological deficits. **EGD on 08/21/2020:** **Findings:** **Esophagus**: Unobstructed intubation of the esophagus under direct vision. Multiple variceal cords and scarring changes due to banding were observed in the lower half of the esophagus. Z-line at 35 cm diaphragmatic passage at 39 cm. Two variceal cords extend along the small curve into the stomach, two of the varices show alarm signs (red spots). 4 rubber band ligations were performed. **Stomach**: In the proximal corpus a picture of portal hypertensive gastropathy, otherwise unremarkable. No fundus varices. **Duodenum**: Good unfolding of the duodenal bulb, contact-sensitive mucosa. Good insight into the descending part of the duodenum. Overall, unremarkable mucosa. **Assessment:** Esophageal varices, Gastroesophageal varices Type I. Banding therapy. **Current Recommendations:** - Regular clinical and laboratory checks by the primary care physician. - In case of fever, acute deterioration of the general condition, or clinical signs of bleeding such as melena or hematemesis, we request immediate re-presentation, even at night and on weekends, through our interdisciplinary emergency department. - Decision of the liver board: Improvement of liver function with alcohol abstinence, but also progression of multifocal HCC over 2 months without tumor-specific therapy. Consensus: Repeat EGD in 7-14 days, depending on variceal status, Atezolizumab/Bevacizumab, or Lenvatinib. - Follow-up appointment on 09/11/20 in our HCC outpatient clinic for clinical control and explanation of the EGD. - Follow-up in our endoscopy for EGD to determine variceal status and possible banding -\> Please bring a COVID PCR test (maximum 48 hours old) for inpatient admission. If complaints persist or worsen, we recommend immediate re-presentation. ### Patient Report 4 **Dear colleague, ** We are writing to inform you about Mrs. Laura Miller, born on 04/03/1967, who was under our inpatient care from 09/18/2020 to 09/20/2020. **Diagnoses:** - 4-time banding therapy with 4 rubber band ligations for residual esophageal varices without alarm signs - Hepatocellular Carcinoma (HCC) - MR Liver: Disseminated HCC lesions in all segments, with the largest lesion located in segments 5/7/8 - Decompensated cirrhosis of the liver (Child B) since 05/20 due to ethyltoxic origin. - Transfusion-dependent anemia due to a history of variceal bleeding. - Osteoporotic thoracic vertebral fracture of vertebra BWK8 (OF3) with kyphoplasty. - Portal hypertension with portosystemic collaterals - Splenomegaly - Cholelithiasis - Arterial hypertension - Osteoporosis - History of stroke (2016) - Allergies: Amalgam **Presentation:** Mrs. 
Miller\'s elective presentation was for a follow-up examination for known esophageal varices. **Medical History:** For a detailed medical history, please refer to previous reports from our department. In summary, in June 2020, the patient was diagnosed with decompensated liver cirrhosis attributed to ethyltoxicity. MR imaging showed multifocal HCC. According to the liver board decision on 06/15/20, initial therapy was recommended with Lenvatinib or Sorafenib for partially compensated Child B7 cirrhosis with multifocal HCC in both liver lobes. Despite improvement in liver function with alcohol cessation, there was a short-term progression of multifocal HCC without tumor-specific therapy, leading to a recommendation for a repeat variceal screening on 08/28/20. Depending on the findings, therapy with Atezolizumab/Bevacizumab or Lenvatinib was advised. The last EGD was performed on 08/21/20, revealing esophageal varices with alarm signs, and a 4-time banding was performed. **Physical Examination upon Admission:** Blood pressure: 80/150 mmHg, heart rate 88/min, temperature 36.4°C, SpO2 97% in room air. Patient in good general condition and normal mental status. Mrs. Miller is fully oriented. Pupils are equal and reactive. - Cardiovascular: Clear heart sounds, no murmurs. - Lungs: Equal breath sounds bilaterally, no crackles, resonant percussion. - Abdomen: Soft, non-tender, no masses, normal bowel sounds in all quadrants, liver and spleen not palpable. - Extremities: No edema, good peripheral pulses. No focal neurological deficits. **Course and Therapy:** On the day of admission, the EGD was performed without complications. Residual varices without warning signs were observed, and a 4-time rubber band ligation was performed. Portal hypertensive gastropathy was also diagnosed. After the procedure, the patient was transferred to our gastroenterological normal ward. The post-interventional course was uneventful. There were no clinical or laboratory signs of post-interventional bleeding. The diet was reintroduced without any issues. Therefore, on 09/20/2020, we discharged Mrs. Miller for outpatient care. We request a follow-up appointment at our in-house HCC outpatient clinic. Additionally, we request a follow-up EGD with variceal control.
06/06/20
What three conversational datasets are used for evaluation?
### Introduction Convolutional and fully-attentional feed-forward architectures, such as Transformers BIBREF0, have emerged as effective alternatives to RNNs BIBREF1 in a wide range of NLP tasks. These architectures remove the sequential temporal dependency during training and effectively address the long-standing vanishing gradients problem of recurrent models by processing all inputs simultaneously. Notably, transformers apply a fully attentional strategy, where each token in the sequence is informed by the other tokens via a self-attention mechanism. This acts as an effectively global receptive field across the whole sequence, which is absent in RNNs. Despite the powerful modeling capability of transformers, they often fail to model the one-to-many relation in dialogue response generation tasks BIBREF2 due to their deterministic nature. As a result, they generate dull and generic responses (e.g., “I am not sure”), especially with greedy and beam search, which are widely used in other sequence modeling tasks. There have been attempts to generate diverse and informative dialogue responses by incorporating latent variable(s) into the RNN encoder-decoder architecture. In particular, BIBREF2 adapt a conditional variational autoencoder (CVAE) to capture discourse-level variations of dialogue, while BIBREF3 and BIBREF4 integrate latent variables in the hidden states of the RNN decoder. However, the inherently sequential computation of the aforementioned models limits their efficiency for large-scale training. In this paper, we introduce the Variational Transformer (VT), a variational self-attentive feed-forward sequence model, to address the aforementioned issues. The VT combines the parallelizability and global receptive field of the transformer with the variational nature of the CVAE by incorporating stochastic latent variables into transformers. We explore two types of VT: 1) the Global Variational Transformer (GVT), and 2) the Sequential Variational Transformer (SVT). The GVT is an extension of the CVAE in BIBREF2, which models discourse-level diversity with a global latent variable, while the SVT, inspired by variational autoregressive models BIBREF3, BIBREF4, incorporates a sequence of latent variables into the decoding process by using a novel variational decoder layer. Unlike previous approaches BIBREF2, BIBREF3, BIBREF4, the SVT uses Non-causal Multi-head Attention, which attends to future tokens for computing the posterior latent variables instead of using an additional encoder. The proposed VT architectures integrate stochastic latent variables into Transformers. The experimental results on three conversation datasets demonstrate that our models can generate more informative and coherent responses. ### Related work ::: Neural Conversational Models Conversational systems have been widely studied BIBREF5, BIBREF6, BIBREF7, BIBREF8. Compared to rule-based systems BIBREF5, BIBREF6, sequence-to-sequence conversation models achieve superior performance in terms of scalable training and generalization ability BIBREF7. However, it has been pointed out that encoder-decoder models tend to generate generic and repetitive responses like “I am sorry” BIBREF9. To address this issue, there have been three main lines of work. The first is adding additional information (e.g., persona) as input to guide the model to generate more informative responses BIBREF10, BIBREF11.
The second modifies the learning objective to promote more diverse generation BIBREF9, and the third integrates stochastic latent variables into Seq2Seq models by using the CVAE framework BIBREF12, BIBREF2. Our work comes within this third line, introducing a novel model, the Variational Transformer, to improve dialogue response generation. ### Related work ::: Conditional Variational Autoencoders Many works have attempted to combine CVAEs with encoder-decoder architectures for sequence generation tasks. BIBREF13 propose a variational encoder-decoder model for neural machine translation, while BIBREF14 apply variational recurrent neural networks (VRNN) BIBREF15 to text summarization. BIBREF2 and BIBREF16 explore incorporating meta features into the CVAE framework for dialogue response generation tasks. BIBREF3 and BIBREF4 propose variational autoregressive decoders, which are enhanced by highly multi-modal latent variables to capture the high variability in dialogue responses. BIBREF17 further augment variational autoregressive decoders with dynamic memory networks to improve generation quality. We unify the previous successful ideas of the CVAE, and explore combinations of the CVAE and the Transformer. ### Related work ::: Fully Attentional Networks Taking advantage of the parallel-in-time structure and global receptive field, Transformers BIBREF0 have recently been shown to achieve impressive results on various sequence modeling tasks. Based on this, several follow-up models have been presented. The Image Transformer BIBREF18 has been proposed for image generation, while the MultiModel BIBREF19 integrates convolution, attention and sparsely-gated mixture-of-expert blocks into a single deep-learning model for simultaneously learning multiple tasks from various domains. BIBREF20 proposed a fully attentional mixture-of-expert model (MoEL) for empathetic dialogue modeling. The Universal Transformer BIBREF1 incorporates the recurrent inductive bias of RNNs into the standard Transformer, and achieves better results on a wide range of algorithmic and language understanding tasks. BIBREF21 introduce the Latent Transformer (LT) for non-autoregressive machine translation. During training, the LT first autoencodes a target sequence into a shorter sequence of discrete latent variables. Then a parallel decoder decodes the target using the discrete latent variables and the input sequence. Different from the LT BIBREF21, the VT generates continuous latent variables during the decoding process. ### Preliminaries ::: Conditional Variational Autoencoder for Dialogue Generation The CVAE framework BIBREF22 represents a dyadic conversation via three random variables: the input condition $c$, including the conversation context and meta features (meta features can be ignored when not available); a latent variable $z$; and the target response $x$. A CVAE can be efficiently trained with Stochastic Gradient Variational Bayes (SGVB) BIBREF23 by maximizing the variational lower bound of $x$ given $c$: $\log p(x | c) \ge \mathbb {E}_{p_{\phi }(z | c, x)}\left[\log p_{\theta }(x | z, c)\right] - \mathrm {KL}\left(p_{\phi }(z | c, x) \,\Vert \, p_{\theta }(z | c)\right)$. The typical CVAE consists of a prior network $p_{\theta }(z | c)$, which is used to approximate $p(z | c)$, a recognition network $p_{\phi }(z | c, x)$, which is used to approximate the posterior distribution $q(z | c, x)$, and a decoder $p_{\theta }(x | z, c)$, which is used to approximate $p(x | z, c)$.
By assuming $z$ follows a multivariate Gaussian distribution with a diagonal covariance matrix, the evidence lower bound (ELBO) decomposes into a reconstruction term and a KL regularization term, where $\mathcal {L}_{REC}$ denotes the reconstruction loss and $\mathcal {L}_{KL}$ denotes the Kullback-Leibler (KL) divergence between the posterior and the prior. In dialogue generation tasks, previous works BIBREF2, BIBREF16 apply RNN encoders (with GRU or LSTM cells) to encode dialogue contexts and responses separately. The condition $c$ is represented by the concatenation of the last hidden state of the context encoder and the meta features (e.g., topic, emotion), while the response $x$ is represented by the last hidden state of the response encoder. Then the prior network $p_{\theta }(z | c)$ and the recognition network $p_{\phi }(z | c, x)$, parameterized by multi-layer perceptrons (MLPs), are applied to approximate the means and the log variances of the prior latent distribution $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and the posterior latent distribution $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. With the reparameterization trick BIBREF23, we can obtain samples of the prior latent variable (for testing) from $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and samples of the posterior latent variable (for training) from $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. Finally, an RNN decoder uses $z$ and $c$ as the initial state to predict the response $x$. The vanishing latent variable problem BIBREF24 is a common issue in RNN-based CVAEs: the powerful autoregressive RNN decoder first learns to ignore the latent variable and decodes the response by conditioning only on the previous tokens. Thus the latent variable fails to encode meaningful information, and the CVAE deteriorates to a seq2seq model. To alleviate this issue, KL annealing BIBREF24 and the bag-of-word loss BIBREF2 have been proposed, and have shown effectiveness in various dialogue tasks BIBREF2, BIBREF16. ### Preliminaries ::: CVAE with Transformer The aforementioned RNN-based CVAE framework integrates the latent variable into the initial state of the RNN decoder, while in the Transformer it is more flexible to incorporate the latent variable embedding into the first input token of the decoder to generate the initial state. The overall architecture of the GVT is depicted in Figure FIGREF9. Different from RNNs, the Transformer encoder maps an input sequence of symbol representations to a sequence of contextualized representations BIBREF0. In order to obtain fixed-dimension representations of the response and the context, we add a special token $CLS$ at the beginning of the input sequence, as in BERT BIBREF25, to compute the weighted sum of the output representations via self-attention. Thus the output representation of the token $CLS$ is considered the representation of the whole sequence. Then we introduce a recognition network and a prior network to compute the posterior latent variable and the prior latent variable as in BIBREF2, BIBREF16. We add the latent variable sample $z$ and the meta features $m$ (which can be ignored when not available) to $e_{SOS}$, the embedding of the start-of-sequence token $SOS$, to form a new embedding $e^{\prime }_{SOS}$. Finally, the Transformer decoder decodes the response $x$ sequentially while attending to this new embedding $e^{\prime }_{SOS}$ of token $SOS$, which carries the latent information.
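To make the Gaussian latent machinery described in this section concrete, the following sketch shows prior and recognition networks parameterized as MLPs, the reparameterization trick, and the KL term between the two diagonal Gaussians. It is a minimal PyTorch illustration under assumed dimensions and module names, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GaussianLatent(nn.Module):
    """MLP that maps an input representation to the mean and log-variance
    of a diagonal-covariance Gaussian over the latent variable z."""
    def __init__(self, input_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, input_dim), nn.Tanh())
        self.mu = nn.Linear(input_dim, latent_dim)
        self.logvar = nn.Linear(input_dim, latent_dim)

    def forward(self, h):
        h = self.net(h)
        return self.mu(h), self.logvar(h)

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I); keeps sampling differentiable.
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def kl_divergence(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over dims.
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1)

# Usage: c is the condition representation, x the response representation.
prior_net = GaussianLatent(input_dim=300, latent_dim=300)    # p(z | c)
recog_net = GaussianLatent(input_dim=600, latent_dim=300)    # p_phi(z | c, x)
c, x = torch.randn(8, 300), torch.randn(8, 300)
mu_p, logvar_p = prior_net(c)
mu_q, logvar_q = recog_net(torch.cat([c, x], dim=-1))
z = reparameterize(mu_q, logvar_q)            # posterior sample used during training
loss_kl = kl_divergence(mu_q, logvar_q, mu_p, logvar_p).mean()
```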
This GVT design enhances the CVAE framework with a global receptive field, and each position of the GVT can directly access the latent information via the multi-head self-attention mechanism. However, we still observe that the GVT suffers from the vanishing latent variable problem, as in RNN-based CVAEs, because the decoder can bypass the latent information by paying less attention to the $SOS$ token. Hence, we apply KL annealing and a bag-of-word auxiliary loss $\mathcal {L}_{bow}$ as in BIBREF2, BIBREF16 to preserve the useful information of the latent variable. The learning objective of the GVT is therefore the sum of the reconstruction loss, the KL divergence term, and the auxiliary loss $\mathcal {L}_{bow}$. ### Sequential Variational Transformer In order to augment the capacity of the latent variable with multi-modal distributions and to better utilize the latent information, we further explore incorporating a sequence of latent variables into the decoding process. We introduce the Sequential Variational Transformer (SVT) with a novel variational decoder layer which generates a latent variable for each position: $z=\left(z_{1}, \dots , z_{T}\right)$. Similar to BIBREF3, we interpret the latent variables as a generation plan for the future sequence. Unlike previous CVAE models, which use an extra encoder to encode the response separately BIBREF2, BIBREF16 or use a backward RNN to encode the future sequence for each time step BIBREF3, BIBREF4, the SVT uses Non-causal Multi-head Attention, which leaks future information to the recognition network for computing the posterior latent variables. As shown in Figure FIGREF13, the SVT shares the same encoder as the standard Transformer BIBREF0, while its decoder consists of a variational decoder layer followed by a stack of $N$ standard Transformer decoder layers. The variational decoder layer has two paths for computing the posterior latent variable and the prior latent variable respectively; we denote them as the Posterior Path and the Prior Path. ### Sequential Variational Transformer ::: Prior Path The Prior Path (solid line in Figure FIGREF13) has a masked multi-head self-attention sub-layer which performs causal attention on the shifted response, followed by a multi-head attention sub-layer which performs encoder-decoder attention on the context encoder. The last sub-layer is composed of an MLP prior network, which approximates a prior latent variable for each position, and a Position-wise Feed-Forward Network (FFN), which fuses the latent information $z$ with the observed information representation $o^P$ computed before the prior network (shown in Figure FIGREF13). Specifically, we concatenate $o^P$ with $z$ as the input to the FFN, and the FFN passes the fused representation to the next layer. As in BIBREF0, each sub-layer in the variational decoder layer is followed by a residual connection and layer normalization; that is, the output of each sub-layer is $LayerNorm(x + Sublayer(x))$. We decompose the response $x$ as $x = \left(x_1, \cdots , x_T\right)$ and the latent variable $z$ as $z=\left(z_{1}, \dots , z_{T}\right)$. The prior model produces the latent variable at each position $z_t$ by conditioning not only on the input condition $c$ (the concatenation of context and meta features), but also on the observed response tokens $x_{1:t-1}$.
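The contrast between the Prior Path's masked (causal) attention and the Non-causal Multi-head Attention used on the posterior side can be illustrated with generic scaled dot-product attention; the sketch below uses assumed shapes and is not the paper's code.

```python
import torch

def causal_mask(seq_len):
    """Prior Path: position t may attend only to positions <= t."""
    return torch.tril(torch.ones(seq_len, seq_len)).bool()

def non_causal_mask(seq_len):
    """Posterior Path: the mask is removed, so every position may attend to
    all positions, leaking future tokens to the recognition network."""
    return torch.ones(seq_len, seq_len).bool()

def masked_attention(q, k, v, mask):
    # Standard scaled dot-product attention with a boolean keep-mask.
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

# Same queries/keys/values, two different information flows.
T, d = 5, 8
q = k = v = torch.randn(T, d)
prior_out = masked_attention(q, k, v, causal_mask(T))          # no future leakage
posterior_out = masked_attention(q, k, v, non_causal_mask(T))  # sees future tokens
```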
Assuming each $z_t$ follows a multivariate Gaussian distribution, the prior model of the Prior Path factorizes over positions, with the mean and variance of each $p_{\theta }(z_{t} | c, x_{1:t-1})$ produced by the MLP prior network. ### Sequential Variational Transformer ::: Posterior Path The only difference between the Posterior Path (dashed line in Figure FIGREF13) and the Prior Path is that the mask is removed from the masked multi-head attention. Thus the masked (causal) multi-head attention becomes non-causal multi-head attention, which allows each position to attend to subsequent positions. Then the second multi-head attention sub-layer (which shares its weights with the prior path) performs posterior attention on the encoder and passes the posterior observed information $o_R$ to the recognition network. The recognition network produces the posterior latent variable for each position $z_t$ from $o_R$, conditioning on the input condition $c$ and the full response $x$. During training, the Posterior Path guides the learning of the Prior Path via a KL divergence constraint between the posterior and prior latent distributions at each position. In the training phase, the posterior latent variables from Equation DISPLAY_FORM17 are passed to the FFN, while in the testing phase the Posterior Path is blocked and the posterior latent variables are replaced with the prior latent variables from Equation DISPLAY_FORM15. During the decoding process, each response token $x_t$ is generated by conditioning on the observed response tokens $x_{1:t-1}$, the latent variables $z_{1:t}$, and the input condition $c$; that is, the SVT decodes from $p_{\theta }(x_{t} | x_{1:t-1}, z_{1:t}, c)$ at each step. ### Sequential Variational Transformer ::: Auxiliary Loss As we expect the latent variables to be a generation plan for the future sequence, we inject such a bias into the latent variables by using an auxiliary loss, Sequential-Bag-of-Word (SBOW), which was proposed by BIBREF4. The idea of the SBOW auxiliary objective is to sequentially predict the bag of succeeding target words $x_{t:T}$ by using the latent variable $z_t$. In our case, the succeeding-word prediction also leverages the observed information $c$ and $x_{1:t-1}$. The auxiliary loss at each position is thus computed with $f_{aux}$, a feed-forward neural network with a softmax output that predicts the bag of succeeding words. ### Sequential Variational Transformer ::: Learning The evidence lower bound (ELBO) objective of the SVT is the sum, over positions, of the reconstruction loss $\mathcal {L}_{REC}(t)$ and the Kullback-Leibler divergence loss $\mathcal {L}_{KL}(t)$. We regularize the ELBO learning objective with the auxiliary loss $\mathcal {L}_{sbow}$ to enhance the expressiveness of the latent variables, which yields the final learning objective. ### Experiments ::: Dataset We evaluate the proposed models on three conversational datasets: MojiTalk BIBREF16, PersonaChat BIBREF11, and Empathetic-Dialogues BIBREF26. ### Experiments ::: Dataset ::: MojiTalk dataset consists of 596,959 post and response pairs from Twitter. Each response is labeled with one emoji which indicates the response emotion. There are 64 emoji labels in total, with an unbalanced distribution. We use the preprocessed data and vocabulary released by BIBREF16 and follow the same train/validation/test split. ### Experiments ::: Dataset ::: PersonaChat & Empathetic-Dialogues are one-to-one multi-turn conversation datasets. In PersonaChat (Persona), the conversations revolve around personas, which are established by four to six persona sentences. In Empathetic-Dialogues (ED), the conversations are mostly about situations that happened to one of the speakers, while the other speaker tries to understand the feeling and reply accordingly.
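As a rough sketch of the SBOW auxiliary loss described above, the snippet below scores, at every position t, the bag of succeeding gold tokens under a softmax head applied to that position's representation; the head, shapes, and averaging over positions are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

def sbow_loss(position_states, target_ids, aux_head):
    """Sequential-Bag-of-Word loss: at each position t, predict the bag of
    *succeeding* gold tokens x_{t:T} from that position's fused representation
    (which carries z_t, c and x_{1:t-1} in the SVT).
    position_states: (T, d) tensor; target_ids: 1-D tensor of length T."""
    T = len(target_ids)
    log_probs = torch.log_softmax(aux_head(position_states), dim=-1)  # (T, vocab)
    loss = 0.0
    for t in range(T):
        succeeding = target_ids[t:]                    # the bag x_{t:T}
        loss = loss - log_probs[t, succeeding].sum()
    return loss / T

# Hypothetical shapes: 6-token response, hidden size 300, vocabulary of 1000.
aux_head = nn.Linear(300, 1000)        # f_aux, followed by the softmax above
states = torch.randn(6, 300)
targets = torch.tensor([5, 17, 23, 8, 99, 4])
print(sbow_loss(states, targets, aux_head))
```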
Both PersonaChat and Empathetic-Dialogues are about modeling social skills, and the goal is to make the dialogue more engaging. Therefore, we combine the train/validation/test sets of the two datasets. ### Experiments ::: Baselines We compare the proposed models with the following baselines: ### Experiments ::: Baselines ::: Seq2Seq. An attention-based sequence-to-sequence model with the emoji vector as additional input, as described in MojiTalk BIBREF16. ### Experiments ::: Baselines ::: CVAE. An RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenates it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, an early stopping strategy and a bag-of-word auxiliary loss are applied during training. We use the implementation released by BIBREF16. ### Experiments ::: Baselines ::: Transformer. A Transformer BIBREF0 trained with a Maximum Likelihood Estimation (MLE) objective, which can be considered the base model for both the GVT and the SVT. ### Experiments ::: Hyper-parameters and Training Setup We use a 4-layer Transformer as our base model. The hidden size is set to 300 everywhere, and the word embeddings are initialized with the 300-dimensional pre-trained GloVe embeddings for both encoder and decoder. The multi-head attention sub-layers are made up of 4 attention heads, each with embedding dimension 64. The size of the latent variable is 300. The recognition network and the prior network are parameterized by 3-layer MLPs with 512 hidden dimensions. Following the training setup of BIBREF16, we first train our baseline Transformer model with the MLE objective and use it to initialize its counterparts in both the GVT and the SVT. Then the models are trained end-to-end with the Adam optimizer and an initial learning rate of $2\times 10^{-4}$. KL annealing and the early stopping strategy are applied as in BIBREF16. At test time, we use a greedy decoding strategy for all models. ### Experiments ::: Automatic Evaluation ::: PPL & KLD. The evaluation metrics include Perplexity (PPL) and the Kullback-Leibler divergence between the posterior and the prior (KLD). A well-trained model should achieve a low reconstruction perplexity and a small but non-trivial KL distance BIBREF27. ### Experiments ::: Automatic Evaluation ::: Diversity. To measure generation diversity, we calculate Dist-1, Dist-2, and Dist-3, the ratios of the number of distinct n-grams (unigrams, bigrams, and trigrams) over the total number of n-grams. A higher distinct n-gram ratio indicates more diverse generation. ### Experiments ::: Automatic Evaluation ::: Embeddings Similarity. This metric computes the cosine similarity between the sentence embedding of a generated sequence and that of the ground-truth response. In our experiments, we introduce two different ways to represent sentence embeddings. The first is $\textbf {EMB}_\textbf {FT}$ BIBREF28, which calculates the average of the word embeddings in a sentence using FastText BIBREF29, trained on Common Crawl and Wikipedia data. We use FastText embeddings instead of other pre-trained word embeddings because they can handle the out-of-vocabulary issue. However, representing a sentence by simply taking the average of word embeddings ignores the context information. Therefore, we propose to use the pre-trained language model BERT BIBREF25 to compute a contextualized sentence representation.
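The diversity and average-embedding similarity metrics just described are straightforward to compute; a small sketch follows, with a hypothetical word-vector lookup standing in for the loaded FastText vectors.

```python
from collections import Counter
import numpy as np

def dist_n(responses, n):
    """Dist-n: number of distinct n-grams divided by the total number of n-grams,
    computed over a corpus of whitespace-tokenized generated responses."""
    ngrams = Counter()
    for resp in responses:
        tokens = resp.split()
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

def avg_embedding(tokens, word_vectors, dim=300):
    """EMB_FT-style sentence vector: the average of the word embeddings;
    `word_vectors` is assumed to be a token -> np.ndarray lookup (e.g., FastText)."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def emb_similarity(hyp, ref, word_vectors):
    h = avg_embedding(hyp.split(), word_vectors)
    r = avg_embedding(ref.split(), word_vectors)
    denom = np.linalg.norm(h) * np.linalg.norm(r)
    return float(h @ r / denom) if denom else 0.0

generated = ["i am not sure", "i love hiking on weekends", "i am not sure"]
print(dist_n(generated, 1), dist_n(generated, 2), dist_n(generated, 3))
```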
For $\textbf {EMB}_\textbf {BERT}$, we use a pre-trained BERT to encode a generated sentence and the ground-truth response, and average the output representations of each to obtain the sentence embeddings. We denote this contextualized sentence embedding as $\textbf {EMB}_\textbf {BERT}$. ### Experiments ::: Human Evaluation In the human evaluation, we prepare multiple-choice questions for human evaluators, where the answer options are the generation results of the five models (Seq2Seq, CVAE, Transformer, GVT, and SVT). We first randomly sample 100 dialogues and their corresponding responses from our models and the baselines. For each response, we assign three human annotators to select the most coherent (on-topic) response to the context (multiple answers are allowed). In addition, annotators also need to choose the response best correlated to the given emoji label in MojiTalk and the most engaging response in PersonaChat and Empathetic-Dialogues. If no response satisfies the evaluators, they can choose “all answers are bad", which means none of the answers is chosen. We compute the rate at which each model is chosen to quantify generation quality with respect to the human standard. ### Results ::: Quantitative Analysis The automatic evaluation results are shown in Table TABREF35. Transformer-based models have significantly lower perplexity than RNN-based models, which indicates that the global receptive field provided by multi-head self-attention boosts the modeling capacity. However, the deterministic Seq2Seq and Transformer models tend to generate generic responses, which leads to low diversity scores. Meanwhile, incorporating a stochastic latent variable into both models (CVAE and GVT) promotes more diverse generation results and boosts the diversity scores such as Dist-1, Dist-2, and Dist-3. Compared to the baseline models, the GVT achieves a relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation. Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL. On the other hand, the SVT achieves the highest scores in terms of the two semantic relevance-oriented metrics, $\textbf {EMB}_\textbf {FT}$ and $\textbf {EMB}_\textbf {BERT}$, on the MojiTalk dataset, while on the combined dataset of Persona and ED we observe a performance drop of the SVT compared to other models. This is because both Persona and ED are carefully designed and have lower entropy than MojiTalk, which was collected from Twitter. We hypothesize that sequential latent variables have no advantage in terms of similarity to a single, fixed "gold response" when modeling low-entropy responses. Indeed, in open-domain dialogue response generation, automatic metrics are not always aligned with human judgement BIBREF28. In contrast, the human evaluation results reported in Table TABREF35 demonstrate that the generations of the SVT are closer to the human standard in terms of coherence, invoked emotion and engagingness. ### Results ::: Qualitative Analysis Table TABREF42 compares the generations of the proposed models with the baselines given the same contexts. We observe that Seq2Seq and the vanilla Transformer tend to generate generic and repetitive responses (e.g., i am not sure) in MojiTalk because their deterministic structure fails to capture the variability in dialogue responses.
By incorporating stochastic latent variables, the CVAE and GVT can generate more diverse responses, but their responses are sometimes digressive (e.g., example 5). Interestingly, the GVT and SVT generalize the topic beyond the context, which makes the dialogue more engaging (e.g., example 4). In general, the SVT is able to generate more coherent and informative responses. ### Conclusion This paper introduces the Variational Transformer (VT), a variational self-attentive feed-forward sequence model that combines the global receptive field of a Transformer with the variational nature of a CVAE. We propose two types of the VT: 1) the Global Variational Transformer (GVT), which incorporates a global latent variable as additional input to the Transformer decoder; and 2) the Sequential Variational Transformer (SVT), which generates latent variables for each position during the decoding process. Quantitative and qualitative experimental results show that our models outperform baselines in terms of diversity, semantic relevance, and human judgment. In future work, we will utilize pre-trained language models BIBREF30 as the backbone to strengthen the language model of the VT for better generation. Figure 1: The Global Variational Transformer. During training, the posterior latent variable z produced by the posterior network is passed to the decoder, while during testing, the target response is absent and z is replaced by the prior latent variable. The word embeddings, positional encoding, softmax layer and meta vectors are omitted for simplicity. Figure 2: The Sequential Variational Transformer. During training, the posterior latent variables z produced by the posterior network are passed to the decoder, while during testing, the target response is absent and z is replaced by the prior latent variables. The word embeddings, positional encoding, softmax layer and meta vectors are omitted for simplicity. Table 1: Results of the Variational Transformer compared to baselines on automatic and human evaluations. Table 2: Generated responses from the proposed models and baseline models. The reference responses (Ref) are given.
MojiTalk, PersonaChat, Empathetic-Dialogues
At what text unit/level were documents processed?
### Introduction Business documents broadly characterize a large class of documents that are central to the operation of business. These include legal contracts, purchase orders, financial statements, regulatory filings, and more. Such documents have a number of characteristics that set them apart from the types of texts that most NLP techniques today are designed to process (Wikipedia articles, news stories, web pages, etc.): They are heterogeneous and frequently contain a mix of both free text as well as semi-structured elements (tables, headings, etc.). They are, by definition, domain specific, often with vocabulary, phrases, and linguistic structures (e.g., legal boilerplate and terms of art) that are rarely seen in general natural language corpora. Despite these challenges, there is great potential in the application of NLP technologies to business documents. Take, for example, contracts that codify legal agreements between two or more parties. Organizations (particularly large enterprises) need to monitor contracts for a range of tasks, a process that can be partially automated if certain content elements can be extracted from the contracts themselves by systems BIBREF0. In general, if we are able to extract structured entities from business documents, these outputs can be better queried and manipulated, potentially facilitating more efficient business operations. In this paper, we present BERT-based models for extracting content elements from two very different types of business documents: regulatory filings and property lease agreements. Given the success of deep transformer-based models such as BERT BIBREF1 and their ability to handle sequence labeling tasks, adopting such an approach seemed like an obvious starting point. In this context, we are primarily interested in two questions: First, how data efficient is BERT for fine-tuning to new specialized domains? Specifically, how much annotated data do we need to achieve some (reasonable) level of accuracy? This is an important question due to the heterogeneity of business documents; it would be onerous if organizations were required to engage in large annotation efforts for every type of document. Second, how would a BERT model pre-trained on general natural language corpora perform in specific, and potentially highly-specialized, domains? There are aspects of this task that make it both easier and more difficult than “traditional” IE. Even though they are expressed in natural language, business documents frequently take constrained forms, sometimes even “template-like” to a certain degree. As such, it may be easy to learn cue phrases and other fixed expressions that indicate the presence of some element (i.e., pattern matching). On the other hand, the structure and vocabulary of the texts may be very different from the types of corpora modern deep models are trained on; for example, researchers have shown that models for processing the scientific literature benefit immensely from pre-training on scientific articles BIBREF2, BIBREF3. Unfortunately, we are not aware of any large, open corpora of business documents for running comparable experiments. The contribution of our work is twofold: From the scientific perspective, we begin to provide some answers to the above questions. With two case studies, we find that a modest amount of domain-specific annotated data (less than 100 documents) is sufficient to fine-tune BERT to achieve reasonable accuracy in extracting a set of content elements. 
From a practical perspective, we showcase our efforts in an end-to-end cloud platform that provides an easy-to-use annotation interface as well as an inference interface that allows users to upload documents and inspect the results of our models. ### Approach Within the broad space of business documents, we have decided to focus on two specific types: regulatory filings and property lease agreements. While our approach is not language specific, all our work is conducted on Chinese documents. In this section, we first describe these documents and our corpora, our sequence labeling model, and finally our evaluation approach. ### Approach ::: Datasets Regulatory Filings. We focused on a specific type of filing: disclosures of pledges by shareholders when their shares are offered up for collateral. These are publicly accessible and were gathered from the database of a stock exchange in China. We observe that most of these announcements are fairly formulaic, likely generated by templates. However, we treated them all as natural language text and did not exploit this observation; for example, we made no explicit attempt to induce template structure or apply clustering—although such techniques would likely improve extraction accuracy. In total, we collected and manually annotated 150 filings, which were divided into training, validation, and test sets with a 6:2:2 split. Our test corpus comprises 30 regulatory filings. Table TABREF6 enumerates the seven content elements that we extract. Property Lease Agreements. These contracts mostly follow a fixed “schema” with a certain number of prescribed elements (leaseholder, tenant, rent, deposit, etc.); Table TABREF7 enumerates the eight elements that our model extracts. Since most property lease agreements are confidential, no public corpus for research exists, and thus we had to build our own. To this end, we searched the web for publicly-available templates of property lease agreements and found 115 templates in total. For each template, we manually generated one, two, or three instances, using a fake data generator tool to fill in the missing content elements such as addresses. In total, we created (and annotated) 223 contracts by hand. This corpus was further split into training, validation, and test data with a 6:2:2 split. Our test set contains 44 lease agreements, 11 of which use templates that are not seen in the training set. We report evaluation over both the full test set and on only these unseen templates; the latter condition specifically probes our model's ability to generalize. ### Approach ::: Model An obvious approach to content element extraction is to formulate the problem as a sequence labeling task. Prior to the advent of neural networks, Conditional Random Fields (CRFs) BIBREF4, BIBREF5 represented the most popular approach to this task. Starting from a few years ago, neural networks have become the dominant approach, starting with RNNs BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11. Most recently, deep transformer-based models such as BERT represent the state of the art in this task BIBREF1, BIBREF12, BIBREF13 . We adopt the sequence labeling approach of BIBREF1, based on annotations of our corpus using a standard BIO tagging scheme with respect to the content elements we are interested in. We extend BERT Base-Chinese (12-layer, 768-hidden, 12-heads, 110M parameters) for sequence labeling. 
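As an illustration of the BIO tagging scheme adopted here, the helper below converts hypothetical character-level span annotations into per-character BIO tags; the span offsets and element names are made up for the example, and character-level tags are assumed to line up with BERT-Chinese's character tokenization.

```python
def spans_to_bio(text, spans):
    """Convert character-level span annotations into per-character BIO tags.
    `spans` is a list of (start, end, label) tuples; Chinese text is handled
    character by character."""
    tags = ["O"] * len(text)
    for start, end, label in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

# Hypothetical lease-agreement snippet with a "tenant" annotation.
text = "乙方张三于2020年签约"
print(spans_to_bio(text, [(2, 4, "tenant")]))
# ['O', 'O', 'B-tenant', 'I-tenant', 'O', ...]
```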
All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end. All inputs are then padded to a length of 256 tokens. After feeding through BERT, we obtain the hidden states of the final layer, denoted as ($h_{1}$, $h_{2}$, ... $h_{N}$) where $N$ is the max length setting. We add a fully-connected layer and softmax on top, so the prediction for each token is formulated as $y_{i} = \mathrm {softmax}(W h_{i} + b)$, where ${W}$ represents the parameters of the fully-connected layer and ${b}$ is the bias. The learning objective is to maximize the log-likelihood of the gold labels over all tokens. For simplicity, we assume that all tokens can be predicted independently. For model training, we set the max sequence length to 256, the learning rate to ${10^{-4}}$, and run the model for 8 epochs. We use all other default settings in the TensorFlow implementation of BERT. ### Approach ::: Inference and Evaluation At inference time, documents from the test set are segmented into paragraphs and fed into the fine-tuned BERT model one at a time. Typically, sequence labeling tasks are evaluated in terms of precision, recall, and F$_1$ at the entity level, per sentence. However, such an evaluation is inappropriate for our task because the content elements represent properties of the entire document as a whole, not individual sentences. Instead, we adopted the following evaluation procedure: For each content element type (e.g., “tenant”), we extract all tagged spans from the document, and after deduplication, treat the entities as a set that we then measure against the ground truth in terms of precision, recall, and F$_1$. We do this because there may be multiple ground truth entities and BERT may mark multiple spans in a document with a particular entity type. Note that the metrics are based on exact matches—this means that, for example, if the extracted entity has an extraneous token compared to a ground truth entity, the system receives no credit. ### Results Our main results are presented in Table TABREF6 on the test set of the regulatory filings and in Table TABREF7 on the test set of the property lease agreements; F$_1$, precision, and recall are computed in the manner described above. We show metrics across all content elements (micro-averaged) as well as broken down by type. For the property lease agreements, we show results on all documents (left) and only over those with unseen templates (right). Examining these results, we see that although there is some degradation in effectiveness between all documents and only unseen templates, it appears that BERT is able to generalize to previously-unseen expressions of the content elements. Specifically, it is not the case that the model is simply memorizing fixed patterns or key phrases—otherwise, we could just craft a bunch of regular expression patterns for this task. This is a nice result that shows off the power of modern neural NLP models. Overall, we would characterize our models as achieving reasonable accuracy, comparable to extraction tasks in more “traditional” domains, with modest amounts of training data. It does appear that with fine-tuning, BERT is able to adapt to the linguistic characteristics of these specialized types of documents.
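A minimal sketch of this document-level, set-based evaluation (deduplicated extracted spans matched exactly against the ground truth) could look like the following; the example spans are hypothetical.

```python
def evaluate_element(predicted_spans, gold_spans):
    """Deduplicate the extracted text spans for one content element, then
    compare sets against the ground truth using exact match."""
    pred, gold = set(predicted_spans), set(gold_spans)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: the model tagged "Zhang San" twice plus one spurious span.
print(evaluate_element(["Zhang San", "Zhang San", "Li Si"], ["Zhang San"]))
```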
As examples of this adaptation, the regulatory filings have quite specialized vocabulary and the property lease agreements have numeric heading structures—BERT does not seem to be confused by these elements, which for the most part do not appear in the texts that the model was pre-trained on. Naturally, accuracy varies across different content elements: for the rental agreements, entities such as leaseholder, tenant, start date, and end date perform much better than others. For the regulatory filings, the model performs well on all content elements except for one; there were very few examples of “% of pledged shares in the shareholder's total share holdings” in our training data, and thus accuracy is very low despite the fact that percentages are straightforward to identify. It seems that “easy” entities often have more fixed forms and are quite close to entities that the model may have encountered during pre-training (e.g., names and dates). In contrast, “difficult” elements are often domain-specific and vary widely in their forms. How data efficient is BERT when fine-tuning on annotated data? We can answer this question by varying the amount of training data used to fine-tune the BERT models, holding everything else constant. These results are shown in Figure FIGREF10 for the regulatory filings (30, 60, 90 randomly-selected documents) and in Figure FIGREF11 for the property lease agreements (30, 60, 90, 120 randomly-selected documents); in all cases, the development set is fixed. For brevity, we only show F$_1$ scores, but we observe similar trends for the other metrics. For both document types, it seems that 60–90 documents are sufficient to achieve F$_1$ on par with using all available training data. Beyond this point, we hit rapidly diminishing returns. For a number of “easy” content elements (e.g., dates in the property lease agreements), it seems that 30 documents are sufficient to achieve good accuracy, and more does not appear to yield substantial improvements. Note that in a few cases, training on more data actually decreases F$_1$ slightly, but this can be attributed to noise in the sampling process. Finally, in Table TABREF8 we show an excerpt from each type of document along with the content elements that are extracted by our BERT models. We provide both the original source Chinese texts as well as English translations to give the reader a general sense of the source documents and how well our models behave. ### Cloud Platform All the capabilities described in this paper come together in an end-to-end cloud-based platform that we have built. The platform has two main features: First, it provides an annotation interface that allows users to define content elements, upload documents, and annotate documents; a screenshot is shown in Figure FIGREF12. We have invested substantial effort in making the interface as easy to use as possible; for example, annotating content elements is as easy as selecting text from the document. Our platform is able to ingest documents in a variety of formats, including PDFs and Microsoft Word, and converts these formats into plain text before presenting them to the annotators. The second feature of the platform is the ability for users to upload new documents and apply inference on them using a fine-tuned BERT model; a screenshot of this feature is shown in Figure FIGREF13. The relevant content elements are highlighted in the document.
On the cloud platform, the inference module also applies a few simple rule-based modifications to post-process the BERT extraction results. For any of the extracted dates, we further applied a date parser based on rules and regular expressions to normalize and canonicalize the extracted outputs. In the regulatory filings, we tried to normalize numbers that were written in a mixture of Arabic numerals and Chinese units (e.g., “亿”, the unit for $10^8$) and discarded partial results if simple rule-based rewrites were not successful. In the property lease agreements, the contract length, if not directly extracted by BERT, is computed from the extracted start and end dates. Note that these post-processing steps were not applied in the evaluation presented in the previous section, and so the figures reported in Tables TABREF6 and TABREF7 actually under-report the accuracy of our models in a real-world setting. ### Conclusions This work tackles the challenge of content extraction from two types of business documents, regulatory filings and property lease agreements. The problem is straightforwardly formulated as a sequence labeling task, and we fine-tune BERT for this application. We show that our simple models can achieve reasonable accuracy with only modest amounts of training data, illustrating the power and flexibility of modern NLP models. Our cloud platform pulls these models together in an easy-to-use interface for addressing real-world business needs. Table 1: Evaluation results on the test set of our regulatory filings corpus. Table 2: Evaluation results on the test set of our property lease agreements corpus. Figure 1: Effects of training data size on F1 for regulatory filings. Figure 2: Effects of training data size on F1 for property lease agreements. Table 3: Excerpts from a regulatory filing (top) and a property lease agreement (bottom) illustrating a few of the content elements that our models extract. Figure 3: Screenshot of our annotation interface. Figure 4: Screenshot of our inference interface.
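The rule-based post-processing described in the Cloud Platform section might be sketched roughly as follows; the regular expression, the unit table, and the date handling are simplifications assumed for illustration, not the platform's actual rules.

```python
import re
from datetime import date

CHINESE_UNITS = {"万": 10**4, "亿": 10**8}   # units mentioned in the text

def normalize_mixed_number(s):
    """Normalize numbers written as Arabic digits plus a Chinese unit,
    e.g. '3.5亿' -> 350000000; returns None when the simple rule does not apply,
    mirroring the 'discard partial results' behavior."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)([万亿])", s)
    if not m:
        return None
    return int(float(m.group(1)) * CHINESE_UNITS[m.group(2)])

def contract_length_days(start, end):
    """Derive the contract length from extracted start/end dates when the
    length itself was not extracted directly."""
    return (end - start).days

print(normalize_mixed_number("3.5亿"))                          # 350000000
print(contract_length_days(date(2017, 4, 18), date(2018, 4, 17)))
```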
documents are segmented into paragraphs and processed at the paragraph level
What problems are found with the evaluation scheme?
### Introduction Recently, human-computer dialogue has emerged as a hot topic, which has attracted the attention of both academia and industry. In research, natural language understanding (NLU), dialogue management (DM) and natural language generation (NLG) have been promoted by the technologies of big data and deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . Following the development of machine reading comprehension BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , NLU technology has made great progress. DM technology has moved from rule-based and supervised learning based approaches to reinforcement learning based approaches BIBREF15 . NLG technology has evolved through pattern-based approaches, sentence planning approaches and end-to-end deep learning approaches BIBREF16 , BIBREF17 , BIBREF18 . In application, there are many products that are based on human-computer dialogue technology, such as Apple Siri, Amazon Echo, Microsoft Cortana, Facebook Messenger and Google Allo, etc. Despite the blooming of human-computer dialogue technology in both academia and industry, how to evaluate a dialogue system, especially an open domain chit-chat system, is still an open question. Figure FIGREF6 presents a brief comparison of the open domain chit-chat system and the task-oriented dialogue system. From Figure FIGREF6 , we can see that the open domain chit-chat system and the task-oriented dialogue system are quite different. For the open domain chit-chat system, as it has no exact goal in a conversation, given an input message, the responses can be various. For example, for the input message “How is it going today?”, the responses can be “I'm fine!”, “Not bad.”, “I feel so depressed!”, “What a bad day!”, etc. There may be an infinite number of responses for an open domain message. Hence, it is difficult to construct a gold standard (usually a reference set) to evaluate a response which is generated by an open domain chit-chat system. For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue. To promote the development of evaluation technology for dialogue systems, especially considering the language characteristics of Chinese, we organize the first evaluation of Chinese human-computer dialogue technology. In this paper, we present the evaluation scheme and the released corpus in detail. The rest of this paper is organized as follows. In Section 2, we briefly introduce the first evaluation of Chinese human-computer dialogue technology, including the descriptions and the evaluation metrics of the two tasks. We then present the evaluation data and final results in Sections 3 and 4 respectively, followed by the conclusion and acknowledgements in the last two sections. ### The First Evaluation of Chinese Human-Computer Dialogue Technology The First Evaluation of Chinese Human-Computer Dialogue Technology includes two tasks, namely user intent classification and online testing of task-oriented dialogue. ### Task 1: User Intent Classification In using human-computer dialogue based applications, humans may have various intents, for example, chit-chatting, asking questions, booking air tickets, inquiring about the weather, etc.
Therefore, after receiving an input message (text or ASR result) from a user, the first step is to classify the user intent into a specific domain for further processing. Table TABREF7 shows an example of user intent with category information. In task 1, there are two top categories, namely chit-chat and task-oriented dialogue. The task-oriented dialogue further includes 30 sub-categories. In this evaluation, we only consider classifying the user intent of a single utterance. It is worth noting that besides the released data for training and development, participants are also allowed to collect external data for training and development. To account for this, task 1 actually includes two sub-tasks. One is a closed evaluation, in which only the released data can be used for training and development. The other is an open evaluation that allows external data to be collected for training and development. For task 1, we use the F1-score as the evaluation metric. ### Task 2: Online Testing of Task-oriented Dialogue For task-oriented dialogue systems, the best way to evaluate is through online human-computer dialogue. After finishing an online human-computer dialogue with a dialogue system, the human then manually evaluates the system using metrics such as user satisfaction degree and dialogue fluency. Therefore, in task 2, we use online testing of task-oriented dialogue for the participating systems. For a human tester, we give a complete intent with an initial sentence, which is used to start the online human-computer dialogue. Table TABREF12 shows an example of the task-oriented human-computer dialogue. Here “U” and “R” denote user and robot respectively. The complete intent is as follows: “查询明天从哈尔滨到北京的晚间软卧火车票,上下铺均可。 Inquire about the soft berth ticket for tomorrow evening, from Harbin to Beijing; either upper or lower berth is okay.” In task 2, there are three categories: “air tickets”, “train tickets” and “hotel”. Correspondingly, there are three types of tasks, all within the scope of the three categories. However, a complete user intent may include more than one task. For example, a user may first inquire about air tickets but, due to the high price, decide to buy a train ticket instead. Furthermore, the user may also need to book a hotel room at the destination. We use manual evaluation for task 2. For each system and each complete user intent, the initial sentence, which is used to start the dialogue, is the same. The tester then begins to converse with each system. A dialogue is finished if the system successfully returns the information which the user inquires about, or if the number of dialogue turns is larger than 30 for a single task. For building the dialogue systems of the participants, we release an example set of complete user intents and three data files of flights, trains and hotels in JSON format. There are five evaluation metrics for task 2, as follows. Task completion ratio: the number of completed tasks divided by the number of total tasks. User satisfaction degree: there are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively. Response fluency: there are three scores -1, 0, 1, which indicate nonfluent, neutral, and fluent. Number of dialogue turns: the number of utterances in a task-completed dialogue. Guidance ability for out-of-scope input: there are two scores 0, 1, which represent able to guide or unable to guide.
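For concreteness, the five task-2 metrics could be aggregated per system roughly as below; the per-dialogue record layout and the simple averaging are assumptions, and the 30-turn cap applied to the turn count follows the penalty rule described in the next paragraph.

```python
def capped_turns(turns, completed, max_turns=30):
    """Penalty rule: a task not completed within 30 turns is cut off
    and its turn count is recorded as 30."""
    return turns if completed and turns <= max_turns else max_turns

def task2_scores(dialogues):
    """Aggregate task-2 metrics over a list of dialogue records, each a dict with
    keys: completed (bool), turns (int), satisfaction (-2..2),
    fluency (-1..1), guidance (0 or 1)."""
    n = len(dialogues)
    return {
        "completion_ratio": sum(d["completed"] for d in dialogues) / n,
        "avg_satisfaction": sum(d["satisfaction"] for d in dialogues) / n,
        "avg_fluency": sum(d["fluency"] for d in dialogues) / n,
        "avg_turns": sum(capped_turns(d["turns"], d["completed"]) for d in dialogues) / n,
        "guidance_ability": sum(d["guidance"] for d in dialogues) / n,
    }
```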
For the number-of-dialogue-turns metric, we apply a penalty rule: if a system cannot return the result (or accomplish the task) within 30 turns, the dialogue task is forcibly ended and the number of dialogue turns is set to 30. ### Evaluation Data In the evaluation, all the data for training, development and testing is provided by the iFLYTEK Corporation. For task 1, as described in Section SECREF10 , the two top categories are chit-chat (chat in Table TABREF13 ) and task-oriented dialogue. Meanwhile, the task-oriented dialogue also includes 30 sub-categories, so task 1 is in fact a 31-category classification task. In task 1, besides the data we released for training and development, we also allow the participants to extend the training and development corpus. Hence, there are two sub-tasks for task 1. One is the closed test, in which the participants can only use the released data for training and development. The other is the open test, which allows the participants to explore external corpora for training and development. Note that the same test set is used for both the closed test and the open test. For task 2, we release 11 examples of complete user intents and 3 data files, which include about one month of flight, hotel and train information, for participants to build their dialogue systems. The current date for the online test is set to April 18, 2017. If the tester says “today”, the systems developed by the participants should understand that he/she means April 18, 2017. ### Evaluation Results There are 74 participants who signed up for the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Tables TABREF14 and TABREF15 show the evaluation results of the closed test and the open test of task 1 respectively. Due to space limitations, we only present the top 5 results of task 1; the complete lists of evaluation results will be added in the full version of the paper. Note that for task 2, there are 7 submitted systems. However, only 4 systems could provide correct results or be connected correctly during the test phase. Therefore, Table TABREF16 shows the complete results of task 2. ### Conclusion In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology. In detail, we first present the two tasks of the evaluation as well as the evaluation metrics. We then describe the released data for evaluation. Finally, we also show the evaluation results of the two tasks. As the evaluation data is provided by the iFLYTEK Corporation from their real online applications, we believe that the released data will further promote the research of human-computer dialogue and fill the gap in data for the two tasks. ### Acknowledgements We would like to thank the Social Media Processing (SMP) committee of the Chinese Information Processing Society of China. We thank all the participants of the first evaluation of Chinese human-computer dialogue technology. We also thank the testers from the voice resource department of the iFLYTEK Corporation for their effort on the online real-time human-computer dialogue test and the offline dialogue evaluation.
We thank Lingzhi Li, Yangzi Zhang, Jiaqi Zhu and Xiaoming Shi from the Research Center for Social Computing and Information Retrieval for their support on the data annotation, establishing the system testing environment, communicating with the participants and helping connect their systems to the testing environment. Figure 1: A brief comparison of the open domain chit-chat system and the task-oriented dialogue system. Table 1: An example of user intent with category information. Table 2: An example of the task-oriented human-computer dialogue. Table 3: The statistics of the released data for task 1. Table 4: Top 5 results of the closed test of task 1. Table 5: Top 5 results of the open test of task 1. Table 6: The results of task 2. Ratio, Satisfaction, Fluency, Turns and Guide indicate the task completion ratio, user satisfaction degree, response fluency, number of dialogue turns and guidance ability for out-of-scope input respectively.
no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue
What can be determined as a similarity between Harvey, Joe, and Johnson? A. They all have a tendency to want the best for one another to a personal fault. B. They all have a tendency to think they are more advanced than one another C. They all have a tendency to spend too much time at the bar where Johnson works D. They all have a tendency to be greedy at any opportunity
GRIFTERS' ASTEROID By H. L. GOLD Harvey and Joe were the slickest con-men ever to gyp a space-lane sucker. Or so they thought! Angus Johnson knew differently. He charged them five buckos for a glass of water—and got it! [Transcriber's Note: This etext was produced from Planet Stories May 1943. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Characteristically, Harvey Ellsworth tried to maintain his dignity, though his parched tongue was almost hanging out. But Joe Mallon, with no dignity to maintain, lurched across the rubbish-strewn patch of land that had been termed a spaceport. When Harvey staggered pontifically into the battered metalloy saloon—the only one on Planetoid 42—his tall, gangling partner was already stumbling out, mouthing something incoherent. They met in the doorway, violently. "We're delirious!" Joe cried. "It's a mirage!" "What is?" asked Harvey through a mouthful of cotton. Joe reeled aside, and Harvey saw what had upset his partner. He stared, speechless for once. In their hectic voyages from planet to planet, the pair of panacea purveyors had encountered the usual strange life-forms. But never had they seen anything like the amazing creature in that colonial saloon. Paying no attention to them, it was carrying a case of liquor in two hands, six siphons in two others, and a broom and dustpan in the remaining pair. The bartender, a big man resembling the plumpish Harvey in build, was leaning negligently on the counter, ordering this impossible being to fill the partly-emptied bottles, squeeze fruit juice and sweep the floor, all of which the native did simultaneously. "Nonsense," Harvey croaked uncertainly. "We have seen enough queer things to know there are always more." He led the way inside. Through thirst-cracked lips he rasped: "Water—quick!" Without a word, the bartender reached under the counter, brought out two glasses of water. The interplanetary con-men drank noisily, asked for more, until they had drunk eight glasses. Meanwhile, the bartender had taken out eight jiggers and filled them with whiskey. Harvey and Joe were breathing hard from having gulped the water so fast, but they were beginning to revive. They noticed the bartender's impersonal eyes studying them shrewdly. "Strangers, eh?" he asked at last. "Solar salesmen, my colonial friend," Harvey answered in his usual lush manner. "We purvey that renowned Martian remedy, La-anago Yergis , the formula for which was recently discovered by ourselves in the ancient ruined city of La-anago. Medical science is unanimous in proclaiming this magic medicine the sole panacea in the entire history of therapeutics." "Yeah?" said the bartender disinterestedly, polishing the chaser glasses without washing them. "Where you heading?" "Out of Mars for Ganymede. Our condenser broke down, and we've gone without water for five ghastly days." "Got a mechanic around this dumping ground you call a port?" Joe asked. "We did. He came near starving and moved on to Titan. Ships don't land here unless they're in trouble." "Then where's the water lead-in? We'll fill up and push off." "Mayor takes care of that," replied the saloon owner. "If you gents're finished at the bar, your drinks'll be forty buckos." Harvey grinned puzzledly. "We didn't take any whiskey." "Might as well. Water's five buckos a glass. Liquor's free with every chaser." Harvey's eyes bulged. Joe gulped. "That—that's robbery!" the lanky man managed to get out in a thin quaver. The barkeeper shrugged. 
"When there ain't many customers, you gotta make more on each one. Besides—" "Besides nothing!" Joe roared, finding his voice again. "You dirty crook—robbing poor spacemen! You—" "You dirty crook!" Joe roared. "Robbing honest spacemen!" Harvey nudged him warningly. "Easy, my boy, easy." He turned to the bartender apologetically. "Don't mind my friend. His adrenal glands are sometimes overactive. You were going to say—?" The round face of the barkeeper had assumed an aggrieved expression. "Folks are always thinkin' the other feller's out to do 'em," he said, shaking his head. "Lemme explain about the water here. It's bitter as some kinds of sin before it's purified. Have to bring it in with buckets and make it sweet. That takes time and labor. Waddya think—I was chargin' feller critters for water just out of devilment? I charge because I gotta." "Friend," said Harvey, taking out a wallet and counting off eight five-bucko bills, "here is your money. What's fair is fair, and you have put a different complexion on what seemed at first to be an unconscionable interjection of a middleman between Nature and man's thirst." The saloon man removed his dirty apron and came around the bar. "If that's an apology, I accept it. Now the mayor'll discuss filling your tanks. That's me. I'm also justice of the peace, official recorder, fire chief...." "And chief of police, no doubt," said Harvey jocosely. "Nope. That's my son, Jed. Angus Johnson's my name. Folks here just call me Chief. I run this town, and run it right. How much water will you need?" Joe estimated quickly. "About seventy-five liters, if we go on half rations," he answered. He waited apprehensively. "Let's say ten buckos a liter," the mayor said. "On account of the quantity, I'm able to quote a bargain price. Shucks, boys, it hurts me more to charge for water than it does for you to pay. I just got to, that's all." The mayor gestured to the native, who shuffled out to the tanks with them. The planetoid man worked the pump while the mayor intently watched the crude level-gauge, crying "Stop!" when it registered the proper amount. Then Johnson rubbed his thumb on his index finger and wetted his lips expectantly. Harvey bravely counted off the bills. He asked: "But what are we to do about replenishing our battery fluid? Ten buckos a liter would be preposterous. We simply can't afford it." Johnson's response almost floored them. "Who said anything about charging you for battery water? You can have all you want for nothing. It's just the purified stuff that comes so high." After giving them directions that would take them to the free-water pool, the ponderous factotum of Planetoid 42 shook hands and headed back to the saloon. His six-armed assistant followed him inside. "Now do you see, my hot-tempered colleague?" said Harvey as he and Joe picked up buckets that hung on the tank. "Johnson, as I saw instantly, is the victim of a difficult environment, and must charge accordingly." "Just the same," Joe griped, "paying for water isn't something you can get used to in ten minutes." In the fragile forest, they soon came across a stream that sprang from the igneous soil and splashed into the small pond whose contents, according to the mayor, was theirs for the asking. They filled their buckets and hauled them to the ship, then returned for more. It was on the sixth trip that Joe caught a glimpse of Jupiter-shine on a bright surface off to the left. 
The figure, 750, with the bucko sign in front of it, was still doing acrobatics inside his skull and keeping a faint suspicion alive in him. So he called Harvey and they went to investigate. Among the skimpy ground-crawling vines, they saw a long slender mound that was unmistakably a buried pipe. "What's this doing here?" Harvey asked, puzzled. "I thought Johnson had to transport water in pails." "Wonder where it leads to," Joe said uneasily. "It leads to the saloon," said Harvey, his eyes rapidly tracing the pipe back toward the spaceport. "What I am concerned with is where it leads from ." Five minutes later, panting heavily from the unaccustomed exertion of scrambling through the tangle of planetorial undergrowth, they burst into the open—before a clear, sparkling pool. Mutely, Harvey pointed out a pipe-end jutting under the water. "I am growing suspicious," he said in a rigidly controlled voice. But Joe was already on his knees, scooping up a handful of water and tasting it. "Sweet!" he snarled. They rushed back to the first pool, where Joe again tasted a sample. His mouth went wry. "Bitter! He uses only one pool, the sweet one! The only thing that needs purifying around here is that blasted mayor's conscience." "The asteroidal Poobah has tricked us with a slick come-on," said Harvey slowly. His eyes grew cold. "Joseph, the good-natured artist in me has become a hard and merciless avenger. I shall not rest until we have had the best of this colonial con-man! Watch your cues from this point hence." Fists clenched, the two returned to the saloon. But at the door they stopped and their fists unclenched. "Thought you gents were leaving," the mayor called out, seeing them frozen in the doorway. "Glad you didn't. Now you can meet my son, Jed. Him and me are the whole Earthman population of Johnson City." "You don't need any more," said Harvey, dismayed. Johnson's eight-foot son, topped by a massive roof of sun-bleached hair and held up by a foundation that seemed immovable, had obviously been born and raised in low gravity. For any decent-sized world would have kept him down near the general dimensions of a man. He held out an acre of palm. Harvey studied it worriedly, put his own hand somewhere on it, swallowed as it closed, then breathed again when his fingers were released in five units instead of a single compressed one. "Pleased to meet you," piped a voice that had never known a dense atmosphere. The pursuit of vengeance, Harvey realized, had taken a quick and unpleasant turn. Something shrewd was called for.... "Joseph!" he exclaimed, looking at his partner in alarm. "Don't you feel well?" Even before the others could turn to him, Joe's practiced eyes were gently crossing. He sagged against the door frame, all his features drooping like a bloodhound's. "Bring him in here!" Johnson cried. "I mean, get him away! He's coming down with asteroid fever!" "Of course," replied Harvey calmly. "Any fool knows the first symptoms of the disease that once scourged the universe." "What do you mean, once ?" demanded Johnson. "I come down with it every year, and I ain't hankering to have it in an off-season. Get him out of here!" "In good time. He can't be moved immediately." "Then he'll be here for months!" Harvey helped Joe to the counter and lifted him up on it. The mayor and his gigantic offspring were cowering across the room, trying to breathe in tiny, uncontaminating gasps. 
"You'll find everything you want in the back room," Johnson said frantically, "sulfopyridine, mustard plasters, rubs, inhalers, suction cups—" "Relics of the past," Harvey stated. "One medication is all modern man requires to combat the dread menace, asteroid fever." "What's that?" asked the mayor without conviction. Instead of replying, Harvey hurried outside to the ungainly second-hand rocket ship in the center of the shabby spaceport. He returned within a few minutes, carrying a bottle. Joe was still stretched out on the bar, panting, his eyes slowly crossing and uncrossing. Harvey lifted the patient's head tenderly, put the bottle to his lips and tilted it until he was forced to drink. When Joe tried to pull away, Harvey was inexorable. He made his partner drink until most of the liquid was gone. Then he stepped back and waited for the inevitable result. Joe's performance was better than ever. He lay supine for several moments, his face twisted into an expression that seemed doomed to perpetual wryness. Slowly, however, he sat up and his features straightened out. "Are—are you all right?" asked the mayor anxiously. "Much better," said Joe in a weak voice. "Maybe you need another dose," Harvey suggested. Joe recoiled. "I'm fine now!" he cried, and sprang off the bar to prove it. Astonished, Johnson and his son drew closer. They searched Joe's face, and then the mayor timidly felt his pulse. "Well, I'll be hanged!" Johnson ejaculated. " La-anago Yergis never fails, my friend," Harvey explained. "By actual test, it conquers asteroid fever in from four to twenty-three minutes, depending on the severity of the attack. Luckily, we caught this one before it grew formidable." The mayor's eyes became clouded mirrors of an inward conflict. "If you don't charge too much," he said warily, "I might think of buying some." "We do not sell this unbelievable remedy," Harvey replied with dignity. "It sells itself." "'Course, I'd expect a considerable reduction if I bought a whole case," said Johnson. "That would be the smallest investment you could make, compared with the vast loss of time and strength the fever involves." "How much?" asked the mayor unhappily. "For you, since you have taken us in so hospitably, a mere five hundred buckos." Johnson did not actually stagger back, but he gave the impression of doing so. "F-four hundred," he offered. "Not a red cent less than four seventy-five," Harvey said flatly. "Make it four fifty," quavered Johnson. "I dislike haggling," said Harvey. The final price, however, was four hundred and sixty-nine buckos and fifty redsents. Magnanimously, Harvey added: "And we will include, gratis , an elegant bottle-opener, a superb product of Mercurian handicraftsmanship." Johnson stabbed out a warning finger. "No tricks now. I want a taste of that stuff. You're not switching some worthless junk on me." Harvey took a glass from the bar and poured him a generous sample. The mayor sniffed it, grimaced, then threw it down his gullet. The ensuing minute saw a grim battle between a man and his stomach, a battle which the man gradually won. "There ain't no words for that taste," he gulped when it was safe to talk again. "Medicine," Harvey propounded, "should taste like medicine." To Joe he said: "Come, my esteemed colleague. We must perform the sacred task to which we have dedicated ourselves." With Joe stumbling along behind, he left the saloon, crossed the clearing and entered the ship. 
As soon as they were inside, Joe dropped his murderous silence and cried: "What kind of a dirty trick was that, giving me poison instead of that snake oil?" "That was not poison," Harvey contradicted quietly. "It was La-anago Yergis extract, plus." "Plus what—arsenic?" "Now, Joseph! Consider my quandary when I came back here to manufacture our specific for all known ailments, with the intention of selling yonder asteroidal tin-horn a bill of medical goods—an entire case, mind you. Was I to mix the extract with the water for which we had been swindled to the tune of ten buckos a liter? Where would our profit have been, then? No; I had to use the bitter free water, of course." "But why use it on me?" Joe demanded furiously. Harvey looked reprovingly at his gangling partner. "Did Johnson ask to taste it, or did he not? One must look ahead, Joseph. I had to produce the same medicine that we will now manufacture. Thus, you were a guinea pig for a splendid cause." "Okay, okay," Joe said. "But you shoulda charged him more." "Joseph, I promise you that we shall get back every redsent of which that swindler cheated us, besides whatever other funds or valuables he possesses. We could not be content with less." "Well, we're starting all right," admitted Joe. "How about that thing with six arms? He looks like a valuable. Can't we grab him off?" Harvey stopped filling bottles and looked up pensively. "I have every hope of luring away the profitable monstrosity. Apparently you have also surmised the fortune we could make with him. At first I purpose to exhibit him on our interplanetary tours with our streamlined panacea; he would be a spectacular attraction for bucolic suckers. Later, a brief period of demonstrating his abilities on the audio-visiphone. Then our triumph—we shall sell him at a stupendous figure to the zoo!" Joe was still dazed by that monetary vista when he and Harvey carried the case of medicine to the saloon. The mayor had already cleared a place of honor in the cluttered back room, where he told them to put it down carefully. Then he took the elaborate bottle-opener Harvey gave him, reverently uncorked a bottle and sampled it. It must have been at least as good as the first; he gagged. "That's the stuff, all right," he said, swallowing hard. He counted out the money into Harvey's hand, at a moderate rate that precariously balanced between his pleasure at getting the fever remedy and his pain at paying for it. Then he glanced out to see the position of Jupiter, and asked: "You gents eaten yet? The restaurant's open now." Harvey and Joe looked at each other. They hadn't been thinking about food at all, but suddenly they realized that they were hungry. "It's only water we were short of," Harvey said apprehensively. "We've got rations back at the ship." " H-mph! " the mayor grunted. "Powdered concentrates. Compressed pap. Suit yourselves. We treat our stomachs better here. And you're welcome to our hospitality." "Your hospitality," said Harvey, "depends on the prices you charge." "Well, if that's what's worrying you, you can stop worrying," answered the mayor promptly. "What's more, the kind of dinner I serve here you can't get anywhere else for any price." Swiftly, Harvey conned the possibilities of being bilked again. He saw none. "Let's take a look at the menu, anyhow, Joe," he said guardedly. Johnson immediately fell into the role of "mine host." "Come right in, gents," he invited. "Right into the dining room." 
He seated them at a table, which a rope tied between posts made more or less private, though nobody else was in the saloon and there was little chance of company. Genius, the six-armed native, appeared from the dingy kitchen with two menus in one hand, two glasses of water in another, plus napkins, silverware, a pitcher, plates, saucers, cups, and their cocktails, which were on the house. Then he stood by for orders. Harvey and Joe studied the menu critically. The prices were phenomenally low. When they glanced up at Johnson in perplexity, he grinned, bowed and asked: "Everything satisfactory, gents?" "Quite," said Harvey. "We shall order." For an hour they were served amazing dishes, both fresh and canned, the culinary wealth of this planetoid and all the system. And the service was as extraordinary as the meal itself. With four hands, Genius played deftly upon a pair of mellow Venusian viotars , using his other two hands for waiting on the table. "We absolutely must purchase this incredible specimen," Harvey whispered excitedly when Johnson and the native were both in the kitchen, attending to the next course. "He would make any society hostess's season a riotous success, which should be worth a great sum to women like Mrs. van Schuyler-Morgan, merely for his hire." "Think of a fast one fast," Joe agreed. "You're right." "But I dislike having to revise my opinion of a man so often," complained Harvey. "I wish Johnson would stay either swindler or honest merchant. This dinner is worth as least twenty buckos, yet I estimate our check at a mere bucko twenty redsents." The mayor's appearance prevented them from continuing the discussion. "It's been a great honor, gents," he said. "Ain't often I have visitors, and I like the best, like you two gents." As if on cue, Genius came out and put the check down between Joe and Harvey. Harvey picked it up negligently, but his casual air vanished in a yelp of horror. "What the devil is this?" he shouted.—"How do you arrive at this fantastic, idiotic figure— three hundred and twenty-eight buckos !" Johnson didn't answer. Neither did Genius; he simply put on the table, not a fingerbowl, but a magnifying glass. With one of his thirty fingers he pointed politely to the bottom of the menu. Harvey focused on the microscopic print, and his face went pasty with rage. The minute note read: "Services and entertainment, 327 buckos 80 redsents." "You can go to hell!" Joe growled. "We won't pay it!" Johnson sighed ponderously. "I was afraid you'd act like that," he said with regret. He pulled a tin badge out of his rear pocket, pinned it on his vest, and twisted his holstered gun into view. "Afraid I'll have to ask the sheriff to take over." Johnson, the "sheriff," collected the money, and Johnson, the "restaurateur," pocketed it. Meanwhile, Harvey tipped Joe the sign to remain calm. "My friend," he said to the mayor, and his tones took on a schoolmasterish severity, "your long absence from Earth has perhaps made you forget those elements of human wisdom that have entered the folk-lore of your native planet. Such as, for example: 'It is folly to kill a goose that lays golden eggs,' and 'Penny wise is pound foolish.'" "I don't get the connection," objected Johnson. "Well, by obliging us to pay such a high price for your dinner, you put out of your reach the chance of profiting from a really substantial deal. My partner and I were prepared to make you a sizable offer for the peculiar creature you call Genius. 
But by reducing our funds the way you have—" "Who said I wanted to sell him?" the mayor interrupted. He rubbed his fingers together and asked disinterestedly: "What were you going to offer, anyhow?" "It doesn't matter any longer," Harvey said with elaborate carelessness. "Perhaps you wouldn't have accepted it, anyway." "That's right," Johnson came back emphatically. "But what would your offer have been which I would have turned down?" "Which one? The one we were going to make, or the one we can make now?" "Either one. It don't make no difference. Genius is too valuable to sell." "Oh, come now, Mr. Johnson. Don't tell me no amount of money would tempt you!" "Nope. But how much did you say?" "Ah, then you will consider releasing Genius!" "Well, I'll tell you something," said the mayor confidentially. "When you've got one thing, you've got one thing. But when you've got money, it's the same as having a lot of things. Because, if you've got money, you can buy this and that and this and that and—" "This and that," concluded Joe. "We'll give you five hundred buckos." "Now, gents!" Johnson remonstrated. "Why, six hundred would hardly—" "You haven't left us much money," Harvey put in. The mayor frowned. "All right, we'll split the difference. Make it five-fifty." Harvey was quick to pay out, for this was a genuine windfall. Then he stood up and admired the astonishing possession he had so inexpensively acquired. "I really hate to deprive you of this unique creature," he said to Johnson. "I should imagine you will be rather lonely, with only your filial mammoth to keep you company." "I sure will," Johnson confessed glumly. "I got pretty attached to Genius, and I'm going to miss him something awful." Harvey forcibly removed his eyes from the native, who was clearing off the table almost all at once. "My friend," he said, "we take your only solace, it is true, but in his place we can offer something no less amazing and instructive." The mayor's hand went protectively to his pocket. "What is it?" he asked with the suspicion of a man who has seen human nature at its worst and expects nothing better. "Joseph, get our most prized belonging from the communications room of the ship," Harvey instructed. To Johnson he explained: "You must see the wondrous instrument before its value can be appreciated. My partner will soon have it here for your astonishment." Joe's face grew as glum as Johnson's had been. "Aw, Harv," he protested, "do we have to sell it? And right when I thought we were getting the key!" "We must not be selfish, my boy," Harvey said nobly. "We have had our chance; now we must relinquish Fate to the hands of a man who might have more success than we. Go, Joseph. Bring it here." Unwillingly, Joe turned and shuffled out. On a larger and heavier world than Planetoid 42, Johnson's curiosity would probably have had weight and mass. He was bursting with questions, but he was obviously afraid they would cost him money. For his part, Harvey allowed that curiosity to grow like a Venusian amoeba until Joe came in, lugging a radio. "Is that what you were talking about?" the mayor snorted. "What makes you think I want a radio? I came here to get away from singers and political speech-makers." "Do not jump to hasty conclusions," Harvey cautioned. "Another word, and I shall refuse you the greatest opportunity any man has ever had, with the sole exceptions of Joseph, myself and the unfortunate inventor of this absolutely awe-inspiring device." "I ain't in the market for a radio," Johnson said stubbornly. 
Harvey nodded in relief. "We have attempted to repay our host, Joseph. He has spurned our generosity. We have now the chance to continue our study, which I am positive will soon reward us with the key to an enormous fortune." "Well, that's no plating off our bow," Joe grunted. "I'm glad he did turn it down. I hated to give it up after working on it for three whole years." He picked up the radio and began walking toward the door. "Now, hold on!" the mayor cried. "I ain't saying I'll buy, but what is it I'm turning down?" Joe returned and set the instrument down on the bar. His face sorrowful, Harvey fondly stroked the scarred plasticoid cabinet. "To make a long story, Mr. Johnson," he said, "Joseph and I were among the chosen few who knew the famous Doctor Dean intimately. Just before his tragic death, you will recall, Dean allegedly went insane." He banged his fist on the bar. "I have said it before, and I repeat again, that was a malicious lie, spread by the doctor's enemies to discredit his greatest invention—this fourth dimensional radio!" "This what?" Johnson blurted out. "In simple terms," clarified Harvey, "the ingenious doctor discovered that the yawning chasm between the dimensions could be bridged by energy of all quanta. There has never been any question that the inhabitants of the super-dimension would be far more civilized than ourselves. Consequently, the man who could tap their knowledge would find himself in possession of a powerful, undreamt-of science!" The mayor looked respectfully at the silent box on the bar. "And this thing gets broadcasts from the fourth dimension?" "It does, Mr. Johnson! Only charlatans like those who envied Doctor Dean's magnificent accomplishments could deny that fact." The mayor put his hands in his pockets, unswiveled one hip and stared thoughtfully at the battered cabinet. "Well, let's say it picks up fourth dimensional broadcasts," he conceded. "But how could you understand what they're saying? Folks up there wouldn't talk our language." Again Harvey smashed his fist down. "Do you dare to repeat the scurvy lie that broke Dean's spirit and drove him to suicide?" Johnson recoiled. "No—no, of course not . I mean, being up here, I naturally couldn't get all the details." "Naturally," Harvey agreed, mollified. "I'm sorry I lost my temper. But it is a matter of record that the doctor proved the broadcasts emanating from the super-dimension were in English! Why should that be so difficult to believe? Is it impossible that at one time there was communication between the dimensions, that the super-beings admired our language and adopted it in all its beauty, adding to it their own hyper-scientific trimmings?" "Why, I don't know," Johnson said in confusion. "For three years, Joseph and I lost sleep and hair, trying to detect the simple key that would translate the somewhat metamorphosed broadcasts into our primitive English. It eluded us. Even the doctor failed. But that was understandable; a sensitive soul like his could stand only so much. And the combination of ridicule and failure to solve the mystery caused him to take his own life." Johnson winced. "Is that what you want to unload on me?" "For a very good reason, sir. Patience is the virtue that will be rewarded with the key to these fourth dimensional broadcasts. A man who could devote his life to improving this lonely worldlet is obviously a person with unusual patience." "Yeah," the mayor said grudgingly, "I ain't exactly flighty." "Therefore, you are the man who could unravel the problem!" 
Johnson asked skeptically: "How about a sample first?"
D. They all have a tendency to be greedy at any opportunity
Which dataset(s) do they evaluate on?
### Introduction Short text matching plays a critical role in many natural language processing tasks, such as question answering, information retrieval, and so on. However, matching text sequences for Chinese or similar languages often suffers from word segmentation, where there are often no perfect Chinese word segmentation tools that suit every scenario. Text matching usually requires to capture the relatedness between two sequences in multiple granularities. For example, in Figure FIGREF4 , the example phrase is generally tokenized as “China – citizen – life – quality – high”, but when we plan to match it with “Chinese – live – well”, it would be more helpful to have the example segmented into “Chinese – livelihood – live” than its common segmentation. Existing efforts use neural network models to improve the matching based on the fact that distributed representations can generalize discrete word features in traditional bag-of-words methods. And there are also works fusing word level and character level information, which, to some extent, could relieve the mismatch between different segmentations, but these solutions still suffer from the original word sequential structures. They usually depend on an existing word tokenization, which has to make segmentation choices at one time, e.g., “ZhongGuo”(China) and “ZhongGuoRen”(Chinese) when processing “ZhongGuoRenMin”(Chinese people). And the blending just conducts at one position in their frameworks. Specific tasks such as question answering (QA) could pose further challenges for short text matching. In document based question answering (DBQA), the matching degree is expected to reflect how likely a sentence can answer a given question, where questions and candidate answer sentences usually come from different sources, and may exhibit significantly different styles or syntactic structures, e.g. queries in web search and sentences in web pages. This could further aggravate the mismatch problems. In knowledge based question answering (KBQA), one of the key tasks is to match relational expressions in questions with knowledge base (KB) predicate phrases, such as “ZhuCeDi”(place of incorporation). Here the diversity between the two kinds of expressions is even more significant, where there may be dozens of different verbal expressions in natural language questions corresponding to only one KB predicate phrase. Those expression problems make KBQA a further tough task. Previous works BIBREF0 , BIBREF1 adopt letter-trigrams for the diverse expressions, which is similar to character level of Chinese. And the lattices are combinations of words and characters, so with lattices, we can utilize words information at the same time. Recent advances have put efforts in modeling multi-granularity information for matching. BIBREF2 , BIBREF3 blend words and characters to a simple sequence (in word level), and BIBREF4 utilize multiple convoluational kernel sizes to capture different n-grams. But most characters in Chinese can be seen as words on their own, so combining characters with corresponding words directly may lose the meanings that those characters can express alone. Because of the sequential inputs, they will either lose word level information when conducting on character sequences or have to make segmentation choices. In this paper, we propose a multi-granularity method for short text matching in Chinese question answering which utilizes lattice based CNNs to extract sentence level features over word lattice. 
Specifically, instead of relying on character or word level sequences, LCNs take word lattices as input, where every possible word and character will be treated equally and have their own context so that they can interact at every layer. For each word in each layer, LCNs can capture different context words in different granularity via pooling methods. To the best of our knowledge, we are the first to introduce word lattice into the text matching tasks. Because of the similar IO structures to original CNNs and the high efficiency, LCNs can be easily adapted to more scenarios where flexible sentence representation modeling is required. We evaluate our LCNs models on two question answering tasks, document based question answering and knowledge based question answering, both in Chinese. Experimental results show that LCNs significantly outperform the state-of-the-art matching methods and other competitive CNNs baselines in both scenarios. We also find that LCNs can better capture the multi-granularity information from plain sentences, and, meanwhile, maintain better de-noising capability than vanilla graphic convolutional neural networks thanks to its dynamic convolutional kernels and gated pooling mechanism. ### Lattice CNNs Our Lattice CNNs framework is built upon the siamese architecture BIBREF5 , one of the most successful frameworks in text matching, which takes the word lattice format of a pair of sentences as input, and outputs the matching score. ### Siamese Architecture The siamese architecture and its variant have been widely adopted in sentence matching BIBREF6 , BIBREF3 and matching based question answering BIBREF7 , BIBREF0 , BIBREF8 , that has a symmetrical component to extract high level features from different input channels, which share parameters and map inputs to the same vector space. Then, the sentence representations are merged and compared to output the similarities. For our models, we use multi-layer CNNs for sentence representation. Residual connections BIBREF9 are used between convolutional layers to enrich features and make it easier to train. Then, max-pooling summarizes the global features to get the sentence level representations, which are merged via element-wise multiplication. The matching score is produced by a multi-layer perceptron (MLP) with one hidden layer based on the merged vector. The fusing and matching procedure is formulated as follows: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are feature vectors of question and candidate (sentence or predicate) separately encoded by CNNs, INLINEFORM2 is the sigmoid function, INLINEFORM3 are parameters, and INLINEFORM4 is element-wise multiplication. The training objective is to minimize the binary cross-entropy loss, defined as: DISPLAYFORM0 where INLINEFORM0 is the {0,1} label for the INLINEFORM1 training pair. Note that the CNNs in the sentence representation component can be either original CNNs with sequence input or lattice based CNNs with lattice input. Intuitively, in an original CNN layer, several kernels scan every n-gram in a sequence and result in one feature vector, which can be seen as the representation for the center word and will be fed into the following layers. However, each word may have different context words in different granularities in a lattice and may be treated as the center in various kernel spans with same length. 
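Before turning to how a lattice complicates the encoder, the merge-and-score step just described can be sketched concretely. The following is a minimal illustration in Keras (the framework named later in the implementation details), not the authors' released code: the input feature dimension is a placeholder, while the one-hidden-layer MLP with ReLU, the sigmoid output, the adadelta optimizer and the binary cross-entropy objective follow the text.

```python
from tensorflow.keras import layers, Model

def build_matching_head(feat_dim: int = 256, hidden_dim: int = 1024) -> Model:
    """Merge the two sentence vectors by element-wise multiplication, then
    score the pair with a one-hidden-layer MLP and a sigmoid output."""
    q_vec = layers.Input(shape=(feat_dim,), name="question_repr")    # from the shared encoder
    c_vec = layers.Input(shape=(feat_dim,), name="candidate_repr")   # from the shared encoder
    merged = layers.Multiply()([q_vec, c_vec])                # element-wise multiplication
    hidden = layers.Dense(hidden_dim, activation="relu")(merged)
    score = layers.Dense(1, activation="sigmoid")(hidden)     # matching probability
    model = Model(inputs=[q_vec, c_vec], outputs=score)
    # training objective: binary cross-entropy against the {0,1} pair label
    model.compile(optimizer="adadelta", loss="binary_crossentropy")
    return model
```

In the full model both input vectors come from the shared, max-pooled (lattice) CNN encoder; the open question is what that encoder should do when a word can be the centre of several kernel spans of the same length.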
Therefore, different from the original CNNs, there could be several feature vectors produced for a given word, which is the key challenge to apply the standard CNNs directly to a lattice input. For the example shown in Figure FIGREF6 , the word “citizen” is the center word of four text spans with length 3: “China - citizen - life”, “China - citizen - alive”, “country - citizen - life”, “country - citizen - alive”, so four feature vectors will be produced for width-3 convolutional kernels for “citizen”. ### Word Lattice As shown in Figure FIGREF4 , a word lattice is a directed graph INLINEFORM0 , where INLINEFORM1 represents a node set and INLINEFORM2 represents a edge set. For a sentence in Chinese, which is a sequence of Chinese characters INLINEFORM3 , all of its possible substrings that can be considered as words are treated as vertexes, i.e. INLINEFORM4 . Then, all neighbor words are connected by directed edges according to their positions in the original sentence, i.e. INLINEFORM5 . Here, one of the key issues is how we decide a sequence of characters can be considered as a word. We approach this through an existing lookup vocabulary, which contains frequent words in BaiduBaike. Note that most Chinese characters can be considered as words on their own, thus are included in this vocabulary when they have been used as words on their own in this corpus. However, doing so will inevitably introduce noisy words (e.g., “middle” in Figure FIGREF4 ) into word lattices, which will be smoothed by pooling procedures in our model. And the constructed graphs could be disconnected because of a few out-of-vocabulary characters. Thus, we append INLINEFORM0 labels to replace those characters to connect the graph. Obviously, word lattices are collections of characters and all possible words. Therefore, it is not necessary to make explicit decisions regarding specific word segmentations, but just embed all possible information into the lattice and take them to the next CNN layers. The inherent graph structure of a word lattice allows all possible words represented explicitly, no matter the overlapping and nesting cases, and all of them can contribute directly to the sentence representations. ### Lattice based CNN Layer As we mentioned in previous section, we can not directly apply standard CNNs to take word lattice as input, since there could be multiple feature vectors produced for a given word. Inspired by previous lattice LSTM models BIBREF10 , BIBREF11 , here we propose a lattice based CNN layers to allow standard CNNs to work over word lattice input. Specifically, we utilize pooling mechanisms to merge the feature vectors produced by multiple CNN kernels over different context compositions. Formally, the output feature vector of a lattice CNN layer with kernel size INLINEFORM0 at word INLINEFORM1 in a word lattice INLINEFORM2 can be formulated as Eq EQREF12 : DISPLAYFORM0 where INLINEFORM0 is the activation function, INLINEFORM1 is the input vector corresponding to word INLINEFORM2 in this layer, ( INLINEFORM3 means the concatenation of these vectors, and INLINEFORM4 are parameters with size INLINEFORM5 , and INLINEFORM6 , respectively. INLINEFORM7 is the input dim and INLINEFORM8 is the output dim. INLINEFORM9 is one of the following pooling functions: max-pooling, ave-pooling, or gated-pooling, which execute the element-wise maximum, element-wise average, and the gated operation, respectively. 
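To make this concrete, here is a rough sketch of the lattice construction and of the enumeration of kernel-span compositions for a centre word; the lookup vocabulary, the maximum word length and the handling of out-of-vocabulary characters below are simplifying assumptions rather than the authors' exact procedure.

```python
from itertools import product
from typing import List, Set, Tuple

Span = Tuple[int, int]   # (start, end) character offsets, end exclusive

def build_word_lattice(chars: List[str], vocab: Set[str], max_word_len: int = 4):
    """Nodes are spans that are single characters or vocabulary words;
    an edge links a span ending at position i to every span starting at i."""
    nodes: List[Span] = []
    for i in range(len(chars)):
        for j in range(i + 1, min(i + max_word_len, len(chars)) + 1):
            word = "".join(chars[i:j])
            # single characters are always kept here, standing in for the
            # special label the authors use to keep the graph connected
            if j - i == 1 or word in vocab:
                nodes.append((i, j))
    edges = {(a, b) for a, b in product(nodes, nodes) if a[1] == b[0]}
    return nodes, edges

def width3_compositions(center: Span, edges) -> List[Tuple[Span, Span, Span]]:
    """All (predecessor, center, successor) spans a width-3 kernel would scan
    with `center` in the middle -- four of them in the 'citizen' example."""
    preds = [a for a, b in edges if b == center]
    succs = [b for a, b in edges if a == center]
    return [(p, center, s) for p, s in product(preds, succs)]

# toy usage on "ZhongGuoRenMin" (Chinese people) with a small hypothetical vocabulary
nodes, edges = build_word_lattice(list("中国人民"), {"中国", "国人", "人民", "中国人"})
print(width3_compositions((2, 3), edges))   # compositions centred on the span "人"
```

Each such composition yields one feature vector, and the pooling function decides how they are merged into a single vector for the centre word.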
The gated operation can be formulated as: DISPLAYFORM0 where INLINEFORM0 are parameters, and INLINEFORM1 are gated weights normalized by a softmax function. Intuitively, the gates represent the importance of the n-gram contexts, and the weighted sum can control the transmission of noisy context words. We perform padding when necessary. For example, in Figure FIGREF6 , when we consider “citizen” as the center word, and the kernel size is 3, there will be five words and four context compositions involved, as mentioned in the previous section, each marked in different colors. Then, 3 kernels scan on all compositions and produce four 3-dim feature vectors. The gated weights are computed based on those vectors via a dense layer, which can reflect the importance of each context compositions. The output vector of the center word is their weighted sum, where noisy contexts are expected to have lower weights to be smoothed. This pooling over different contexts allows LCNs to work over word lattice input. Word lattice can be seen as directed graphs and modeled by Directed Graph Convolutional networks (DGCs) BIBREF12 , which use poolings on neighboring vertexes that ignore the semantic structure of n-grams. But to some situations, their formulations can be very similar to ours (See Appendix for derivation). For example, if we set the kernel size in LCNs to 3, use linear activations and suppose the pooling mode is average in both LCNs and DGCs, at each word in each layer, the DGCs compute the average of the first order neighbors together with the center word, while the LCNs compute the average of the pre and post words separately and add them to the center word. Empirical results are exhibited in Experiments section. Finally, given a sentence that has been constructed into a word-lattice form, for each node in the lattice, an LCN layer will produce one feature vector similar to original CNNs, which makes it easier to stack multiple LCN layers to obtain more abstract feature representations. ### Experiments Our experiments are designed to answer: (1) whether multi-granularity information in word lattice helps in matching based QA tasks, (2) whether LCNs capture the multi-granularity information through lattice well, and (3) how to balance the noisy and informative words introduced by word lattice. ### Datasets We conduct experiments on two Chinese question answering datasets from NLPCC-2016 evaluation task BIBREF13 . DBQA is a document based question answering dataset. There are 8.8k questions with 182k question-sentence pairs for training and 6k questions with 123k question-sentence pairs in the test set. In average, each question has 20.6 candidate sentences and 1.04 golden answers. The average length for questions is 15.9 characters, and each candidate sentence has averagely 38.4 characters. Both questions and sentences are natural language sentences, possibly sharing more similar word choices and expressions compared to the KBQA case. But the candidate sentences are extracted from web pages, and are often much longer than the questions, with many irrelevant clauses. KBRE is a knowledge based relation extraction dataset. We follow the same preprocess as BIBREF14 to clean the dataset and replace entity mentions in questions to a special token. There are 14.3k questions with 273k question-predicate pairs in the training set and 9.4k questions with 156k question-predicate pairs for testing. Each question contains only one golden predicate. 
Each question averagely has 18.1 candidate predicates and 8.1 characters in length, while a KB predicate is only 3.4 characters long on average. Note that a KB predicate is usually a concise phrase, with quite different word choices compared to the natural language questions, which poses different challenges to solve. The vocabulary we use to construct word lattices contains 156k words, including 9.1k single character words. In average, each DBQA question contains 22.3 tokens (words or characters) in its lattice, each DBQA candidate sentence has 55.8 tokens, each KBQA question has 10.7 tokens and each KBQA predicate contains 5.1 tokens. ### Evaluation Metrics For both datasets, we follow the evaluation metrics used in the original evaluation tasks BIBREF13 . For DBQA, P@1 (Precision@1), MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) are adopted. For KBRE, since only one golden candidate is labeled for each question, only P@1 and MRR are used. ### Implementation Details The word embeddings are trained on the Baidu Baike webpages with Google's word2vector, which are 300-dim and fine tuned during training. In DBQA, we also follow previous works BIBREF15 , BIBREF16 to concatenate additional 1d-indicators with word vectors which denote whether the words are concurrent in both questions and candidate sentences. In each CNN layer, there are 256, 512, and 256 kernels with width 1, 2, and 3, respectively. The size of the hidden layer for MLP is 1024. All activation are ReLU, the dropout rate is 0.5, with a batch size of 64. We optimize with adadelta BIBREF17 with learning rate INLINEFORM0 and decay factor INLINEFORM1 . We only tune the number of convolutional layers from [1, 2, 3] and fix other hyper-parameters. We sample at most 10 negative sentences per question in DBQA and 5 in KBRE. We implement our models in Keras with Tensorflow backend. ### Baselines Our first set of baselines uses original CNNs with character (CNN-char) or word inputs. For each sentence, two Chinese word segmenters are used to obtain three different word sequences: jieba (CNN-jieba), and Stanford Chinese word segmenter in CTB (CNN-CTB) and PKU (CNN-PKU) mode. Our second set of baselines combines different word segmentations. Specifically, we concatenate the sentence embeddings from different segment results, which gives four different word+word models: jieba+PKU, PKU+CTB, CTB+jieba, and PKU+CTB+jieba. Inspired by previous works BIBREF2 , BIBREF3 , we also concatenate word and character embeddings at the input level. Specially, when the basic sequence is in word level, each word may be constructed by multiple characters through a pooling operation (Word+Char). Our pilot experiments show that average-pooling is the best for DBQA while max-pooling after a dense layer is the best for KBQA. When the basic sequence is in character level, we simply concatenate the character embedding with its corresponding word embedding (Char+Word), since each character belongs to one word only. Again, when the basic sequence is in character level, we can also concatenate the character embedding with a pooled representation of all words that contain this character in the word lattice (Char+Lattice), where we use max pooling as suggested by our pilot experiments. DGCs BIBREF12 , BIBREF18 are strong baselines that perform CNNs over directed graphs to produce high level representation for each vertex in the graph, which can be used to build a sentence representation via certain pooling operation. 
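As an aside on evaluation, the ranking metrics listed above can be computed with a generic sketch like the one below; this is not the official NLPCC evaluation script.

```python
from typing import List

def rank_metrics(ranked_labels: List[List[int]]):
    """ranked_labels holds, for each question, the gold 0/1 labels of its
    candidates sorted by model score (descending). Returns (P@1, MAP, MRR)."""
    p1 = mrr = ap_sum = 0.0
    for labels in ranked_labels:
        p1 += float(labels[0] == 1)
        # reciprocal rank of the first correct candidate
        first_hit = next((r for r, l in enumerate(labels, 1) if l), None)
        mrr += 1.0 / first_hit if first_hit else 0.0
        # average precision over all correct candidates
        hits, precisions = 0, []
        for r, l in enumerate(labels, 1):
            if l:
                hits += 1
                precisions.append(hits / r)
        ap_sum += sum(precisions) / max(hits, 1)
    n = len(ranked_labels)
    return p1 / n, ap_sum / n, mrr / n

# e.g. two questions, answered correctly at rank 1 and rank 3 respectively
print(rank_metrics([[1, 0, 0], [0, 0, 1, 0]]))
```

As for the graph-based baselines, DGCs build each vertex representation by pooling over its neighbours, and that pooling can be instantiated in several ways.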
We therefore choose to compare with DGC-max (with maximum pooling), DGC-ave (with average pooling), and DGC-gated (with gated pooling), where the gate value is computed using the concatenation of the vertex vector and the center vertex vector through a dense layer. We also implement several state-of-the-art matching models using the open-source project MatchZoo BIBREF19 , where we tune hyper-parameters using grid search, e.g., whether using word or character inputs. Arc1, Arc2, CDSSM are traditional CNNs based matching models proposed by BIBREF20 , BIBREF21 . Arc1 and CDSSM compute the similarity via sentence representations and Arc2 uses the word pair similarities. MV-LSTM BIBREF22 computes the matching score by examining the interaction between the representations from two sentences obtained by a shared BiLSTM encoder. MatchPyramid(MP) BIBREF23 utilizes 2D convolutions and pooling strategies over word pair similarity matrices to compute the matching scores. We also compare with the state-of-the-art models in DBQA BIBREF15 , BIBREF16 . ### Results Here, we mainly describe the main results on the DBQA dataset, while we find very similar trends on the KBRE dataset. Table TABREF26 summarizes the main results on the two datasets. We can see that the simple MatchZoo models perform the worst. Although Arc1 and CDSSM are also constructed in the siamese architecture with CNN layers, they do not employ multiple kernel sizes and residual connections, and fail to capture the relatedness in a multi-granularity fashion. BIBREF15 is similar to our word level models (CNN-jieba/PKU/CTB), but outperforms our models by around 3%, since it benefits from an extra interaction layer with fine tuned hyper-parameters. BIBREF16 further incorporates human designed features including POS-tag interaction and TF-IDF scores, achieving state-of-the-art performance in the literature of this DBQA dataset. However, both of them perform worse than our simple CNN-char model, which is a strong baseline because characters, that describe the text in a fine granularity, can relieve word mismatch problem to some extent. And our best LCNs model further outperforms BIBREF16 by .0134 in MRR. For single granularity CNNs, CNN-char performs better than all word level models, because they heavily suffer from word mismatching given one fixed word segmentation result. And the models that utilize different word segmentations can relieve this problem and gain better performance, which can be further improved by the combination of words and characters. The DGCs and LCNs, being able to work on lattice input, outperform all previous models that have sequential inputs, indicating that the word lattice is a more promising form than a single word sequence, and should be better captured by taking the inherent graph structure into account. Although they take the same input, LCNs still perform better than the best DGCs by a margin, showing the advantages of the CNN kernels over multiple n-grams in the lattice structures and the gated pooling strategy. To fairly compare with previous KBQA works, we combine our LCN-ave settings with the entity linking results of the state-of-the-art KBQA model BIBREF14 . The P@1 for question answering of single LCN-ave is 86.31%, which outperforms both the best single model (84.55%) and the best ensembled model (85.40%) in literature. ### Analysis and Discussions As shown in Table TABREF26 , the combined word level models (e.g. 
CTB+jieba or PKU+CTB) perform better than any word level CNNs with single word segmentation result (e.g. CNN-CTB or CNN-PKU). The main reason is that there are often no perfect Chinese word segmenters and a single improper segmentation decision may harm the matching performance, since that could further make the word mismatching issue worse, while the combination of different word segmentation results can somehow relieve this situation. Furthermore, the models combining words and characters all perform better than PKU+CTB+jieba, because they could be complementary in different granularities. Specifically, Word+Char is still worse than CNN-char, because Chinese characters have rich meanings and compressing several characters to a single word vector will inevitably lose information. Furthermore, the combined sequence of Word+Char still exploits in a word level, which still suffers from the single segmentation decision. On the other side, the Char+Word model is also slightly worse than CNN-char. We think one reason is that the reduplicated word embeddings concatenated with each character vector confuse the CNNs, and perhaps lead to overfitting. But, we can still see that Char+Word performs better than Word+Char, because the former exploits in a character level and the fine-granularity information actually helps to relieve word mismatch. Note that Char+Lattice outperforms Char+Word, and even slightly better than CNN-char. This illustrates that multiple word segmentations are still helpful to further improve the character level strong baseline CNN-char, which may still benefit from word level information in a multi-granularity fashion. In conclusion, the combination between different sequences and information of different granularities can help improve text matching, showing that it is necessary to consider the fashion which considers both characters and more possible words, which perhaps the word lattice can provide. For DGCs with different kinds of pooling operations, average pooling (DGC-ave) performs the best, which delivers similar performance with LCN-ave. While DGC-max performs a little worse, because it ignores the importance of different edges and the maximum operation is more sensitive to noise than the average operation. The DGC-gated performs the worst. Compared with LCN-gated that learns the gate value adaptively from multiple n-gram context, it is harder for DGC to learn the importance of each edge via the node and the center node in the word lattice. It is not surprising that LCN-gated performs much better than GDC-gated, indicating again that n-grams in word lattice play an important role in context modeling, while DGCs are designed for general directed graphs which may not be perfect to work with word lattice. For LCNs with different pooling operations, LCN-max and LCN-ave lead to similar performances, and perform better on KBRE, while LCN-gated is better on DBQA. This may be due to the fact that sentences in DBQA are relatively longer with more irrelevant information which require to filter noisy context, while on KBRE with much shorter predicate phrases, LCN-gated may slightly overfit due to its more complex model structure. Overall, we can see that LCNs perform better than DGCs, thanks to the advantage of better capturing multiple n-grams context in word lattice. To investigate how LCNs utilize multi-granularity more intuitively, we analyze the MRR score against granularities of overlaps between questions and answers in DBQA dataset, which is shown in Figure FIGREF32 . 
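The grouping statistic can be reproduced roughly as follows; treating it as the average longest-common-substring length between a question and its gold answer sentences is our reading of the figure caption, not a detail spelled out in the text.

```python
from typing import List

def lcs_len(a: str, b: str) -> int:
    """Length of the longest common (contiguous) substring of a and b."""
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def overlap_granularity(question: str, gold_answers: List[str]) -> float:
    """Average longest-common-substring length between a question and its
    gold answer sentences, used to bin questions into the 12 groups."""
    return sum(lcs_len(question, ans) for ans in gold_answers) / len(gold_answers)
```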
Figure FIGREF32 shows that CNN-char performs markedly better than CNN-CTB in the first few groups, where most of the overlaps are single characters that cause serious word mismatch. As the overlaps grow longer, CNN-CTB catches up and finally overtakes CNN-char, even though its overall performance is much lower. These results show that word information is complementary to characters to some extent. LCN-gated approaches CNN-char in the first few groups, and outperforms both the character and word level models in the following groups, where word level information becomes more powerful. This demonstrates that LCNs can effectively take advantage of different granularities, and that the combination is not harmful even when the matching clues appear in extreme cases. ### How to Create Word Lattice In the previous experiments, we constructed word lattices via an existing lookup vocabulary, which inevitably introduces some noisy words. Here we construct lattices from various word segmentations with different strategies to investigate the balance between the noisy words and the additional information introduced by the word lattice. We only use the DBQA dataset because its word lattices are more complex, so the construction strategies have more influence. Pilot experiments show that word lattices constructed on top of the character sequence perform better, so the strategies in Table TABREF33 are based on CNN-char. Table TABREF33 shows that all kinds of lattice are better than CNN-char, which again evidences the usefulness of word information. Among all LCN models, more complex lattices produce better performance in principle, which indicates that LCNs handle noisy words well and that the influence of noisy words cannot cancel the positive information brought by complex lattices. It is also noticeable that LCN-gated is better than LCN-C+20 by a considerable margin, which shows that the words outside the common tokenization (e.g. “livelihood” in Fig FIGREF4 ) are potentially useful. LCNs only introduce a negligible number of parameters in gated pooling besides the enlarged vocabulary, which does not impose a heavy burden. The training speed is about 2.8 batches per second, 5 times slower than original CNNs, and the whole training of a 2-layer LCN-gated on the DBQA dataset takes only about 37.5 minutes. The efficiency could be further improved if the network structure were built dynamically with supporting frameworks. The fast speed and small parameter increment give LCNs a promising future in more NLP tasks. ### Case Study Figure FIGREF37 shows a case study comparing models at different input levels. The word level model is relatively coarse in utilizing information, and finds a sentence with the longest overlap (5 words, 12 characters). However, it does not realize that the question is about numbers of people, and that “DaoHang”(navigate) is a verb in the question but a noun in the sentence. The character level model finds a long sentence which covers most of the characters in the question, which shows the power of fine-granularity matching. But without the help of words, it is hard to distinguish the “Ren”(people) in “DuoShaoRen”(how many people) from “ChuangShiRen”(founder), so it loses the most important information. In the lattice, although overlaps are limited, “WangZhan”(website, “Wang” web, “Zhan” station) can match “WangZhi”(Internet addresses, “Wang” web, “Zhi” addresses) and also relates to “DaoHang”(navigate), from which the model may infer that “WangZhan”(website) refers to “tao606 seller website navigation”(a website name). 
Moreover, “YongHu”(user) can match “Ren”(people). With cooperations between characters and words, it catches the key points of the question and eliminates the other two candidates, as a result, it finds the correct answer. ### Related Work Deep learning models have been widely adopted in natural language sentence matching. Representation based models BIBREF21 , BIBREF7 , BIBREF0 , BIBREF8 encode and compare matching branches in hidden space. Interaction based models BIBREF23 , BIBREF22 , BIBREF3 incorporates interactions features between all word pairs and adopts 2D-convolution to extract matching features. Our models are built upon the representation based architecture, which is better for short text matching. In recent years, many researchers have become interested in utilizing all sorts of external or multi-granularity information in matching tasks. BIBREF24 exploit hidden units in different depths to realize interaction between substrings with different lengths. BIBREF3 join multiple pooling methods in merging sentence level features, BIBREF4 exploit interactions between different lengths of text spans. For those more similar to our work, BIBREF3 also incorporate characters, which is fed into LSTMs and concatenate the outcomes with word embeddings, and BIBREF8 utilize words together with predicate level tokens in KBRE task. However, none of them exploit the multi-granularity information in word lattice in languages like Chinese that do not have space to segment words naturally. Furthermore, our model has no conflicts with most of them except BIBREF3 and could gain further improvement. GCNs BIBREF25 , BIBREF26 and graph-RNNs BIBREF27 , BIBREF28 have extended CNNs and RNNs to model graph information, and DGCs generalize GCNs on directed graphs in the fields of semantic-role labeling BIBREF12 , document dating BIBREF18 , and SQL query embedding BIBREF29 . However, DGCs control information flowing from neighbor vertexes via edge types, while we focus on capturing different contexts for each word in word lattice via convolutional kernels and poolings. Previous works involved Chinese lattice into RNNs for Chinese-English translation BIBREF10 , Chinese named entity recognition BIBREF11 , and Chinese word segmentation BIBREF30 . To the best of our knowledge, we are the first to conduct CNNs on word lattice, and the first to involve word lattice in matching tasks. And we motivate to utilize multi-granularity information in word lattices to relieve word mismatch and diverse expressions in Chinese question answering, while they mainly focus on error propagations from segmenters. ### Conclusions In this paper, we propose a novel neural network matching method (LCNs) for matching based question answering in Chinese. Rather than relying on a word sequence only, our model takes word lattice as input. By performing CNNs over multiple n-gram context to exploit multi-granularity information, LCNs can relieve the word mismatch challenges. Thorough experiments show that our model can better explore the word lattice via convolutional operations and rich context-aware pooling, thus outperforms the state-of-the-art models and competitive baselines by a large margin. Further analyses exhibit that lattice input takes advantages of word and character level information, and the vocabulary based lattice constructor outperforms the strategies that combine characters and different word segmentations together. ### Acknowledgments This work is supported by Natural Science Foundation of China (Grant No. 
61672057, 61672058, 61872294); the UK Engineering and Physical Sciences Research Council under grants EP/M01567X/1 (SANDeRs) and EP/M015793/1 (DIVIDEND); and the Royal Society International Collaboration Grant (IE161012). For any correspondence, please contact Yansong Feng.
Figure 1: A word lattice for the phrase “Chinese people have high quality of life.”
Figure 2: An illustration of our LCN-gated, when “人民” (people) is considered as the center of the convolutional spans.
Table 1: The performance of all models on the two datasets. The best results in each group are bolded. * is the best published DBQA result.
Figure 3: MRR score against the granularity of overlaps between questions and answers, measured as the average length of the longest common substrings. About 2.3% of questions are ignored because they have no overlaps; the rest are divided in order into 12 equally sized groups. Group 1 has the smallest average overlap length, while group 12 has the largest.
Table 2: Comparison of various ways to construct word lattices. l.qu and l.sen are the average numbers of tokens in questions and sentences, respectively. The 4 models in the middle construct lattices by adding words to CNN-char. +2& takes the intersection of the words from the CTB and PKU modes, while +2 takes the union. +20 uses the top 10 results of the two segmenters.
Table 3: Example question (in words) and the 3 sentences selected by the 3 systems. Bold means the sequence exactly matches between question and answer.
DBQA, KBRE
Which news organisations are the headlines sourced from?
### Introduction Several successful efforts have led to publishing huge RDF (Resource Description Framework) datasets on Linked Open Data (LOD) such as DBpedia BIBREF0 and LinkedGeoData BIBREF1 . However, these sources are limited to either structured or semi-structured data. Meanwhile, a significant portion of the Web content consists of textual data from social network feeds, blogs, news, logs, etc. Although the Natural Language Processing (NLP) community has developed approaches to extract essential information from plain text (e.g., BIBREF2 , BIBREF3 , BIBREF4 ), there is no convenient support for knowledge graph construction. Further, several lexical analysis based approaches extract only a limited form of metadata that is inadequate for supporting applications such as question answering systems. For example, the query “Give me the list of reported events by BBC and CNN about the number of killed people in Yemen in the last four days”, about a recent event (containing restrictions such as location and time), poses several challenges to the current state of Linked Data and relevant information extraction techniques. The query seeks “fresh” information (e.g., last four days) whereas the current version of Linked Data is encyclopedic and historical, and does not contain appropriate information present in a temporally annotated data stream. Further, the query specifies provenance (e.g., published by BBC and CNN) that might not always be available on Linked Data. Crucially, the example query asks about a specific type of event (i.e., reports of war-caused killings) with multiple arguments (e.g., in this case, the location argument is Yemen). In spite of recent progress BIBREF5 , BIBREF6 , BIBREF7 , there is still no standardized mechanism for (i) selecting a background data model, (ii) recognizing and classifying specific event types, (iii) identifying and labeling associated arguments (i.e., entities as well as relations), (iv) interlinking events, and (v) representing events. In fact, most of the state-of-the-art solutions are ad hoc and limited. In this paper, we provide a systematic pipeline for developing a knowledge graph of interlinked events. As a proof-of-concept, we show a case study of headline news on Twitter. The main contributions of this paper include: The remainder of this paper is organized as follows. Section SECREF2 is dedicated to notation and problem statement. Section SECREF3 outlines the required steps for developing a knowledge graph of interlinked events. Section SECREF4 frames our contribution in the context of related work. Section SECREF5 concludes the paper with suggestions for future work. ### Notation and Problem Statement A tweet of a news headline contains a sequence of words INLINEFORM0 . tab:tweetsamples provides samples of news headlines on Twitter with provenance information such as publisher and publishing date. These were sampled for the type of embedded event discussed below. We aim to create an RDF knowledge base for such news headlines. An RDF knowledge base INLINEFORM1 consists of a set of triples INLINEFORM2 , where INLINEFORM3 is the union of all RDF resources ( INLINEFORM4 are respectively a set of classes, properties and instances), and INLINEFORM5 is a set of literals ( INLINEFORM6 ). We aim to extract a rich set of triples INLINEFORM7 from each tweet INLINEFORM8 in the stream of news headline tweets (as discussed below), and populate an event knowledge graph INLINEFORM9 . 
Formally, the extraction task can be captured as INLINEFORM10 where INLINEFORM11 is the stream of news headline tweets and INLINEFORM12 is a knowledge graph of events (where a tweet INLINEFORM13 is mapped to a single event). We address four main challenges along the way: (1) agreeing upon a background data model (either by developing or reusing one), (2) annotating events as well as their associated entities and relations, (3) interlinking events across time and media, and (4) publishing triples on the event knowledge graph according to the principles of Linked Open Data. ### Outline of The Required Steps Here, we outline the required steps for developing a knowledge graph of interlinked events. Figure FIGREF2 illustrates the high-level overview of the full pipeline. This pipeline contains the following main steps, to be discussed in detail later. (1) Collecting tweets from the stream of several news channels such as BBC and CNN on Twitter. (2) Agreeing upon a background data model. (3) Event annotation, which potentially contains two subtasks: (i) event recognition and (ii) event classification. (4) Entity/relation annotation, which possibly comprises a series of tasks: (i) entity recognition, (ii) entity linking, (iii) entity disambiguation, (iv) semantic role labeling of entities, and (v) inferring implicit entities. (5) Interlinking events across time and media. (6) Publishing the event knowledge graph based on the best practices of Linked Open Data. ### Background Data Model An initial key question is “What is the suitable background data model (serving as the pivot) for extracting triples associated with an event?” Contemporary approaches to extracting RDF triples capture entities and relations in terms of binary relations BIBREF8, BIBREF9, BIBREF10. We divide the current triple-based extraction approaches into two categories: (i) those that (e.g., BIBREF8 ) follow the pattern INLINEFORM0 to leverage existing relations (i.e., properties) INLINEFORM1 in the knowledge base to find the entities INLINEFORM2 and INLINEFORM3 for which the relation INLINEFORM4 holds. For example, the relation plays holds between an athlete and his/her favorite sport, and NELL extracts the triple seve ballesteros plays golf for the two entities seve ballesteros and golf; and (ii) others that (e.g., BIBREF11, BIBREF9) utilize the pattern INLINEFORM5 to leverage the entities available in the knowledge graph (i.e., INLINEFORM6 ) to infer new relations (e.g., INLINEFORM7 ) that either did not exist in the knowledge base or did not hold between the entities INLINEFORM8 . For example, BIBREF11 initially recognizes named entities in a given sentence and then, by inferring over domains and ranges of properties in DBpedia, assigns an appropriate property between the recognized entities. Given an entity (e.g. Garry Marshall) with type director associated with a known movie (e.g. Pretty woman), it infers the property dbpedia:director from the background ontology between the two recognized entities Garry Marshall and Pretty woman. So far, supervised and unsupervised learning approaches have been applied for these extractions, which rely on the use of a large number of specific lexical, syntactical and semantic features. We assume that each news headline maps to an event modeled by an n-ary relation that can be captured by generating multiple triples. An INLINEFORM9 -ary relation is a relation with n arguments INLINEFORM10 . For example, a binary relation triple INLINEFORM11 can be rewritten as INLINEFORM12 . 
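As an illustration of this rewriting, the following is a minimal sketch in Python using rdflib; the example.org namespace and the arg1/arg2 property names are hypothetical placeholders introduced only for illustration, not part of any ontology discussed below.

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")  # hypothetical namespace, for illustration only

g = Graph()

# Binary form: the relation "plays" held directly between two entities.
g.add((EX.seve_ballesteros, EX.plays, EX.golf))

# n-ary form: the same statement rewritten around a relation instance node,
# one binary triple per argument, so further arguments (time, place, ...)
# can be attached to the instance node later.
r = EX["plays_1"]
g.add((r, RDF.type, EX.Plays))
g.add((r, EX.arg1, EX.seve_ballesteros))
g.add((r, EX.arg2, EX.golf))

print(g.serialize(format="turtle"))
```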
Thus, the first challenge concerns the suitable background data model for representing various types of events and their associated entities by simulating n-ary relationships in terms of binary relationships. Considering our case study, news headlines are often one single sentence (potentially accompanied by subordinate clauses) along with a link directing to the body of the news report. In spite of their brevity, headline tweets provide dense and significant information. Various entities appear around the embedded core message (commonly a verb phrase), including aspects that indicate temporal properties, location and agent. For example, consider the tweet no.2 in tab:tweetsamples that will serve as a running example: Instagram CEO meets with @Pontifex to discuss "the power of images to unite people", which contains several entities related to the verb phrase `meet', marked here with brackets: [Instagram CEO] [meets with] [@Pontifex] [to discuss "the power of images to unite people"]. The general intuition is that a core verb (i.e., relation) heads each headline tweet, accompanied by multiple arguments (i.e., entities). The number of entities INLINEFORM0 depends on the type of relation, but location and time are generic default arguments for any relation INLINEFORM1 . Thus, the core chunk (verb phrase) corresponds to the meet event and the remaining chunks of the given tweet likely function as dependent entities of this event. For instance, in the running example, the chunk [meets] corresponds to the event INLINEFORM2 with the following recognized entities as associated arguments: DISPLAYFORM0 In this example, the temporal as well as location arguments of INLINEFORM0 are absent. Consistent with linguistic theory, not all arguments are always present for each occurrence of an event. RDF and OWL (Web Ontology Language) primarily allow binary relations, defined as a link between either two entities or an entity and its associated property value. However, in the domain of news, we often encounter events that involve more than two entities, and hence require n-ary relations. The W3C Working Group Note suggests two patterns for dealing with n-ary relations. We prefer the first pattern, which creates INLINEFORM0 classes and INLINEFORM1 new properties to represent an n-ary relation. We formally define a generic event class representing all categories of events (n-ary relations) and then use a template-based definition for any subclass of the generic event. This enables the representation of specific types of events (e.g., the meet event). Definition 1 (Class of Generic Event) A generic event class refers to any event that can involve multiple (n) entities. In other words, the Generic Event Class denoted by INLINEFORM0 abstracts a relation among n entities. Definition 2 (Class of `X' Event) An `X' Event denoted by INLINEFORM0 is a subclass (i.e., specific type) of the class INLINEFORM1 , i.e., INLINEFORM2 . Conceptually it refers to events sharing common behavior, semantics, and consequences. In the following, we provide requirements on the data model for developing a knowledge graph of interlinked events. 
Requirement 1 (Inclusion of Generic Event) An event data model minimally includes the definition of the generic event, while the definition of specific events is optional. Requirement 2 (Inclusion of Provenance) The provenance of each event must be represented within the data model. Requirement 3 (Inclusion of Entity Type) The type of each entity associated with a given event must be represented within the data model. This type can be fine-grained or coarse-grained. Requirement 4 (Inclusion of Properties) For any given entity INLINEFORM0 associated with a given event INLINEFORM1 , a property (i.e., binary relation) INLINEFORM2 between the entity INLINEFORM3 and the event INLINEFORM4 must be represented within the data model. Thus, for the given pair INLINEFORM5 , either the triple INLINEFORM6 or the triple INLINEFORM7 is entailed in the RDF graph of INLINEFORM8 . ### Using Existing Data Models In this part, we review a number of state-of-the-art event ontologies. In 2009, UC Berkeley introduced the LODE ontology. In this ontology, an event is defined as an action which takes place at a certain time and at a specific location. It can be a historical action as well as a scheduled action. There were previous models BIBREF12, BIBREF13 for representing historic events and scheduled events. Some of them represent both types of events (i.e., historical and scheduled), e.g., EventsML-G2. The LODE ontology proposed to build an interlingua model, i.e., a model which encapsulates the overlap among different ontologies, e.g., CIDOC CRM, ABC Ontology, Event Ontology, and EventsML-G2. This encapsulation is utilized to create a mapping among existing ontologies. LODE was introduced to publish historical events in a fine-grained manner, as it assumes each event is a unique event even if it is part of a series. Since the concept of sub-events does not exist in LODE, related events can instead be interlinked. This ontology helps us to link factual aspects of a historical event. A factual aspect is given by 'What happened' (event), 'Where did it happen' (atPlace), 'When did it happen' (atTime), 'Who was involved' (involvedAgent) BIBREF14 . A visualization of the LODE ontology is shown in Figure 2. We conclude that LODE meets (i) Requirement 1 as it defines a generic concept of the historic event, (ii) Requirement 3 loosely, as it contains generic types for entities, e.g., Agent, SpatialThing, TemporalEntity, and (iii) Requirement 4 as it includes the necessary relations. But the LODE ontology fails to meet Requirement 2 as it does not include the publisher of the event (provenance). Figure 3 depicts our running example in LODE. In 2011, the SEM ontology was introduced by Vrije University and Delft. This ontology describes events as the central element in representing historical data, cultural heritage BIBREF15, BIBREF16 and multimedia BIBREF17 . SEM is combined with a Prolog API to create event instances without requiring background knowledge. This API also helps in connecting the created event instances to Linked Open Data. SEM proposes a method to attain interoperability among datasets from different domains. SEM strives to remove constraints to make it reusable by supporting weak semantics. Thus, in SEM, the concept of event is specified as everything that happens BIBREF18 . A schematic representation of the SEM model is shown in fig:sem (summarized version). 
We conclude that SEM meets (i) Requirement 1 as it defines a generic event, (ii) Requirement 3 as it specifies a type for entities, e.g., Actor, and (iii) Requirement 4 as it includes the required properties. Similar to the LODE ontology, the SEM model fails to meet Requirement 2 as it does not include the publisher of events (provenance). The DBpedia ontology defines the generic concept of event with a hierarchy which is broader, including lifecycle events (e.g., birth, death), natural events (e.g., earthquake, storm surge), and societal events (e.g., concert, election). We conclude that DBpedia meets (i) Requirement 1 as it defines a generic event, (ii) Requirement 3 as it specifies a type for entities, and (iii) Requirement 4 as it includes the required properties. All these can be imported from other datasets present on the Web, as DBpedia links to other datasets in an easy manner. Similar to the LODE ontology and the SEM model, DBpedia fails to meet Requirement 2 as it does not include the publisher of events (provenance). Schema.org, a product of collaborative efforts by major companies (i.e., Google, Bing, Yahoo and Yandex), presents a similar generic concept of event. It considers temporal as well as location aspects and additionally provides a limited hierarchy. This hierarchy introduces types of events such as business events, sale events, and social events. The schemas in Schema.org are sets of these types, which are associated with a set of properties. Furthermore, it considers multiple labels between the associated entity and the concept of the event (represented in fig:schema.org), such as actor and contributor, which distinguishes the role of the associated entity. Schema.org introduces hundreds of schemas for categories like movies, music, organizations, TV shows, products, places, etc. For Schema.org, an event is an instance taking place at a certain time and at a certain location. Like LODE, repeated events are classified as different events, thus keeping all events unique even if one is a sub-event. A schematic representation of Schema.org (summarized version) is shown in fig:schema.org. We conclude that Schema.org meets (i) Requirement 1 as it defines a generic event, (ii) Requirement 3 as it specifies a type for entities, e.g., Actor (as type Person), Location (as type Place), Organizer (as type Person), StartDate (as type Date or DateTime), etc., and (iii) Requirement 4 as it includes the required properties for all the entities defined above. Like LODE, SEM and DBpedia, Schema.org also fails to meet Requirement 2 as it cannot define or import the publisher of the event (provenance). The CEVO ontology relies on an abstract conceptualization of English verbs provided by Beth Levin BIBREF19 . Levin categorizes English verbs according to shared meaning and behavior. The CEVO ontology, which is a machine-readable format (i.e., RDF format) of Levin's categorization, presents more than 230 event classes covering over 3,000 English verbs. It organizes verbs into semantically coherent event classes and an event hierarchy and, notably, has an inventory of the corresponding lexical items. For example, the first column of tab:threeVerbClasses presents three event classes: (i) the Communication event, which corresponds to an event that causes transferring a message, (ii) the Meet event, which is an event related to group activities, and (iii) the Murder event, which refers to an event describing killing. 
The second column of tab:threeVerbClasses lists the lexical items (i.e., verbs) that share meaning and fall under the umbrella of a common event. In other words, an appearance of one of these verbs indicates the occurrence of its associated event. For example, w.r.t. the running example, the appearance of the verb meet in the given tweet indicates the occurrence of an event with the specific type `meet'. The CEVO ontology can be employed for recognizing events and, more interestingly, classifying them w.r.t. their specific type. Specifically, it unifies apparently disparate lexical items under a single event class. More importantly, this can prove critical in reducing the number of apparent features for classifiers and in supporting the inference necessary for query response. ### Developing a Data Model The existing data models are basically coarse-grained. In case the domain or application requires a fine-grained data model, the existing data models can be extended. For example, here we extended the event data model from the CEVO ontology for three specific events. We take into account three subclasses (shown in Figure UID50 ): (i) the class communication INLINEFORM0 , which refers to any event transferring a message, (ii) the class meet INLINEFORM1 , which ranges over all group activities, and finally, (iii) the class murder INLINEFORM2 , which includes any reports of killing. Furthermore, as Figure UID50 shows, the provenance information (e.g., publisher or date) is represented within the data model (default arguments for all events), to meet Requirement req:prov. Figure FIGREF49 (b-d) represents parts of the data model for the sub-event classes (i.e., INLINEFORM0 ) in detail. The types of all possible associated entities, as well as their necessary relationships, are represented within the data model. This meets Requirements SECREF22 and SECREF23 . For example, the meet event is associated with entities of type Participant and Topic (i.e., the topic discussed in the meeting). Considering the sample of tweets in Table TABREF9 , the tweets no.1, no.4, and no.7 are instances of the event Communication with the mentions tell, say, announce. The tweets no.2, no.5, no.8 are instances of the event Meet with the mentions meet, visit. The tweets no.3, no.6, no.9 are instances of the event Murder with the mention kill. fig:exam demonstrates the running example within the developed data model. This event has two participants (i.e., Instagram CEO and Pontifex) along with a specific topic. ### Using Singleton Property We adopt the concept of a singleton property, introduced in BIBREF20 , for modeling n-ary relations in the background data model. Singleton properties replace RDF reification and enable efficient representation of statements about statements. Since news headlines contain both provenance information and multiple associated entities, a singleton property (SP) is a suitable choice; furthermore, it enables systematic encoding of n-ary relations in terms of binary relations. Example 1 (Input/Output) Consider our running example, which is about the occurrence of a meet event with the two participant entities Instagram CEO and Pontifex and the topic INLINEFORM0 . The triples generated using the singleton property are as follows: 1. :Meet#1 singletonPropertyOf :Meet. 2. :Instagram_CEO :Meet#1 :Pontifex. 3. :Meet#1 :about :t1. 4. :Meet#1 :hasSource :CNN. 5. :Meet#1 :extractedOn `26/2/2016'. 6. :t1 a :Topic. 7. :t1 :body `to discuss the power of images to unite people'.
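The listing above can be materialized programmatically. The following is a minimal sketch, assuming Python with rdflib and a hypothetical example.org namespace standing in for the default `:' prefix; the property names simply mirror the listing and are not a published vocabulary.

```python
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/")  # stand-in for the ':' prefix used in the listing

g = Graph()
sp = EX["Meet#1"]  # the singleton property: one property instance per event occurrence

# Triples 1-2: the singleton property links the two participants and points to its generic property.
g.add((sp, EX.singletonPropertyOf, EX.Meet))
g.add((EX.Instagram_CEO, sp, EX.Pontifex))

# Triples 3-7: further statements (topic, provenance) attach directly to the singleton property.
g.add((sp, EX.about, EX.t1))
g.add((sp, EX.hasSource, EX.CNN))
g.add((sp, EX.extractedOn, Literal("26/2/2016")))
g.add((EX.t1, RDF.type, EX.Topic))
g.add((EX.t1, EX.body, Literal("to discuss the power of images to unite people")))

print(g.serialize(format="turtle"))
```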
### Event Annotation Events can be represented at different levels of granularity. The event annotation task potentially comprises two subsequent tasks, as follows: Event recognition: Typically, event recognition utilizes phrases and their parts of speech. Although verbs are more common for distinguishing an event (e.g., `Obama met Merkel in Berlin'), other parts of speech might reveal an event (e.g., `G8 meeting in Berlin'). Furthermore, the event recognition task can be either open domain or closed domain. In the former, collecting a lexicon of event phrases is more challenging than in the latter. In any case, a learning approach (either supervised or semi-supervised) can be applied for determining whether or not a piece of text contains an event phrase. Event classification: This task is necessary in case the employed background data model considers the specific type of events as part of event annotation. In this case, event phrases have to be labeled with specific types of events using a multi-class classifier trained for distinguishing the specific type of a given event. For example, the tweets no.2, no.5, no.8 of tab:tweetsamples have the specific type “meet”.
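To make the two subtasks concrete, the following is a deliberately naive sketch that recognizes and classifies an event by looking up verbs from the CEVO-style classes discussed earlier; the text envisions a trained multi-class classifier, so this keyword lookup only illustrates the intended input and output, and the lexicon entries are taken from the examples above rather than from the full CEVO inventory.

```python
# Deliberately naive keyword lookup; a real pipeline would use a trained
# multi-class classifier over richer features, as described in the text.
EVENT_LEXICON = {            # verb -> CEVO-style event class (from the examples above)
    "tell": "Communication", "say": "Communication", "announce": "Communication",
    "meet": "Meet", "visit": "Meet",
    "kill": "Murder",
}

def annotate_event(tweet: str):
    """Return (event_phrase, event_class), or None if no known event verb is found."""
    for token in tweet.lower().replace(",", " ").split():
        stem = token.rstrip("s")          # crude normalization: 'meets' -> 'meet'
        if stem in EVENT_LEXICON:
            return stem, EVENT_LEXICON[stem]
    return None

print(annotate_event('Instagram CEO meets with @Pontifex to discuss "the power of images"'))
# -> ('meet', 'Meet')
```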
### Entity Annotation Entity annotation is a significant task for creating a knowledge graph of events. It can be challenging when we have a fine-grained background data model, which makes the task of semantic role labeling of entities necessary. Overall, the required tasks for fulfilling entity annotation are as follows: Entity recognition: This task specifies a chunk of text as an individual entity which plays a role in the occurred event. An entity mention can be explicit or implicit. Regarding explicit entities, Named Entity Recognition (NER) tools can be used for open domain scenarios, whereas alternatives such as knowledge graphs, gazetteers, and domain dictionaries are necessary for closed domain scenarios. E.g., for the tweet no.1 in tab:tweetsamples, the chunk `Michelle Obama' is recognized as a named entity with the type person. Entity linking: Entity linking can be divided into two tasks. The first one BIBREF21 , which is required in our case, is about associating entity mentions in a given text with their appropriate corresponding entities in a given knowledge graph. Thus, it removes ambiguity. A textual mention of an entity might have a matching entity in the knowledge graph or not. In the former case, the entity linking task is reduced to hooking up a suitable entity, whereas in the latter case, it is required that a new IRI (i.e., International Resource Identifier) be minted and typed and then linked to the textual mention of the given entity. E.g., in the tweet no.1 of tab:tweetsamples, the named entity `Michelle Obama' should be linked to the entity dbr:Michelle_Obama, when DBpedia is employed as the background knowledge graph. The second type of entity linking is about linking entities across knowledge graphs using owl:sameAs links. While the first task is required in the pipeline of developing an event knowledge graph, the second one is optional but can enhance the quality and visibility of the underlying knowledge graph. Semantic role labeling: Most of the existing event ontologies consider generic roles such as actor or agent for involved entities. For a fine-grained background data model, semantic role labeling can be performed. E.g., w.r.t. the tweet no.1 in tab:tweetsamples, the entity `Michelle Obama' can be labelled with the generic role actor employing the LODE ontology, or with the specific role giver applying the data model illustrated in fig:communicationpattern. Entity disambiguation: An entity mention in a text might be polysemous, thus linking to the correct entity in the underlying knowledge graph requires a disambiguation phase. Furthermore, a single entity in multiple knowledge graphs might have various representations. Thus, interlinking them is challenging and requires a disambiguation phase as well BIBREF22, BIBREF23, BIBREF24. E.g., w.r.t. the tweet no.7 in tab:tweetsamples, the named entity `Obama' is ambiguous as to whether it refers to `Michelle Obama' or `Barack Obama'. Given the context (i.e., the remaining part of the tweet), it likely refers to `Barack Obama'. Implicit entity linking: As we mentioned before, not all of the mentions of entities are explicit. For example, w.r.t. the running example, the chunk `Instagram CEO' refers to the implicit entity `Kevin Systrom', who is the CEO of Instagram. The experiment performed in BIBREF25 shows that 21% of entity mentions in the movie domain and 40% of entity mentions in the book domain are implicit. Inferring implicit entities depends on capturing context as well as respecting time intervals. ### Interlinking Events The tasks described above have been considered independently before. The interlinking requirement, which has not yet been adequately explored, comes from two inherent facts about events: (i) A single event might be reported by various publisher sources using different expressions. Thus, it is necessary to identify the same event across various publisher sources, and then interlink the reports using owl:sameAs or skos:related links. (ii) Events have an evolutionary nature in the sense that more information is added with time. Thus, it is essential to spot an event and its subsequent events, reported to either complement the original event or reflect its causes or consequences. To interlink such events, skos:related can be utilized. The recognized events, entities and relations have to be published according to the principles of LOD, RDF and the employed background data model. To maintain the knowledge graph's consistency and coherence, the generated triples must be de-duplicated and validated, and the assigned URIs disambiguated. The minted URIs should be dereferenceable and interlinked to external RDF data sources.
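A minimal sketch of such interlinking follows, assuming Python with rdflib and hypothetical event IRIs; the identity heuristic below (same event class and identical argument sets implies owl:sameAs, overlapping arguments implies skos:related) is an illustrative simplification, not the identity criteria a production pipeline would use.

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import OWL, SKOS

EX = Namespace("http://example.org/")  # hypothetical event namespace

def interlink(g: Graph, e1: URIRef, e2: URIRef, cls1, cls2, args1: set, args2: set):
    """Naive identity heuristic: same event class and identical argument entities
    -> owl:sameAs; same class with merely overlapping arguments -> skos:related.
    A real pipeline would add richer similarity measures, time windows, etc."""
    if cls1 == cls2 and args1 == args2:
        g.add((e1, OWL.sameAs, e2))
    elif cls1 == cls2 and args1 & args2:
        g.add((e1, SKOS.related, e2))

g = Graph()
# Two meet events assumed to have been extracted from two different publishers.
interlink(g, EX["Meet#1"], EX["Meet#2"], EX.Meet, EX.Meet,
          {EX.Instagram_CEO, EX.Pontifex}, {EX.Instagram_CEO, EX.Pontifex})
print(g.serialize(format="turtle"))
```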
### Related Work Overall, there is a lack of a holistic view on event extraction from free text and on subsequently developing a knowledge graph from it. In this paper, we presented the full pipeline containing the required tasks such as (i) agreeing upon a data model, (ii) event annotation, (iii) entity annotation and (iv) interlinking events. The majority of previous research is either domain-specific or event-specific and does not undertake the full pipeline (e.g., it is limited to only event and entity extraction). We have provided a visionary review of the full pipeline, which is applicable to virtually any domain. In the following, we initially refer to research approaches for n-ary relation extraction in particular domains, then we refer to the prominent approaches for binary relation extraction. We end by citing successful attempts at triple extraction from structured and semi-structured data sources. The work presented in BIBREF26 introduces complex relations as n-ary relations between n-typed entities. It proposes to factorize all complex relations into a set of binary relations. Then, a classifier is trained to recognize related entities of binary relations. After identifying all pairs of related entities for binary relations, it reconstructs the complex relation using a simple graph creation approach. Another domain for extracting n-ary relations is protein-protein interactions in the biomedical literature BIBREF27, BIBREF28, BIBREF29. These approaches first identify protein mentions in text and then recognize interaction relations before finally extracting interactions. The approaches employed for protein-protein interactions can be divided into three groups: (i) graph-based approaches (e.g., co-occurrence graphs), (ii) rule-based approaches and (iii) learning approaches (e.g., maximum entropy). The other category of event extraction is based on binary relation extraction. NELL: Never-Ending Language Learning BIBREF8 is a learning agent that extracts new facts using existing binary relations in its knowledge base. It was initiated in 2010 using a couple of seed binary relations but, after years of running, has become self-learning. A notable feature of NELL is its dynamic approach for extracting facts, as it refreshes beliefs in its knowledge base and removes the incorrect or old ones. Linked Open Data, as a valuable source of diverse ontologies, can also be employed for extracting either new facts or new relations. The framework proposed in BIBREF11, BIBREF9 extracts facts using binary relations from DBpedia as background knowledge. In contrast to NELL, it initially identifies named entities and their types in plain text, then it tries to map mentions of relation expressions to properties in DBpedia (e.g., taking the domain and range of properties into account). Open Information Extraction BIBREF10 is another extraction framework that is not limited to any predefined relation set. Furthermore, extracting triples from structured as well as semi-structured data sources has received adequate attention in the past, especially DBpedia BIBREF0 and LinkedGeoData BIBREF1 , which leverage the loose structure of data for extraction. Another example is the work in BIBREF30 , which presents a holistic approach for the extraction of RDF from templated websites. ### Conclusion and Future Work In this paper, we presented the initial version of our framework for the real-time extraction of events. This framework is part of our project HeadEx for developing a knowledge graph of interlinked events. We presented the requirements for choosing a data model representing events and their arguments. We reviewed the existing data models which have been employed by state-of-the-art applications. Furthermore, we outlined the required tasks for annotating events as well as entities. Then, the interlinking strategies were discussed. As a proof-of-concept, we followed a case study of news headlines on Twitter. For our future agenda, we plan to develop the envisioned pipeline containing all the required tasks by either implementing new components or integrating the existing ones. Table 1: Samples of news headlines from different publishers on Twitter. Figure 1: The pipeline of the required steps for developing a knowledge graph of interlinked events. Figure 2: Schematic representation of LODE. Figure 4: Schematic representation of SEM. Figure 3: Representing the running example using LODE ontology. Figure 5: Schematic representation of Schema.org. Table 2: Three samples of events from CEVO with their English verbs. Figure 6: Schematic overview of the generic event and three sub-events, i.e., meet, communication, and murder. Figure 7: Representation of the running example using the model of Figure 6.
BBC and CNN
What happened to Wyatt and Carpenter? A. They died when a rock slide crushed their vehicle while they were attempting the Brightside Crossing. B. They crossed the Brightside at aphelion. C. They disappeared after their ship set off for Mercury. They were on a mission to cross the Brightside. D. They disappeared when they attempted to cross the Brightside at perihelion.
Brightside Crossing by Alan E. Nourse JAMES BARON was not pleased to hear that he had had a visitor when he reached the Red Lion that evening. He had no stomach for mysteries, vast or trifling, and there were pressing things to think about at this time. Yet the doorman had flagged him as he came in from the street: “A thousand pardons, Mr. Baron. The gentleman—he would leave no name. He said you’d want to see him. He will be back by eight.” Now Baron drummed his fingers on the table top, staring about the quiet lounge. Street trade was discouraged at the Red Lion, gently but persuasively; the patrons were few in number. Across to the right was a group that Baron knew vaguely—Andean climbers, or at least two of them were. Over near the door he recognized old Balmer, who had mapped the first passage to the core of Vulcan Crater on Venus. Baron returned his smile with a nod. Then he settled back and waited impatiently for the intruder who demanded his time without justifying it. Presently a small, grizzled man crossed the room and sat down at Baron’s table. He was short and wiry. His face held no key to his age—he might have been thirty or a thousand—but he looked weary and immensely ugly. His cheeks and forehead were twisted and brown, with scars that were still healing. The stranger said, “I’m glad you waited. I’ve heard you’re planning to attempt the Brightside.” Baron stared at the man for a moment. “I see you can read telecasts,” he said coldly. “The news was correct. We are going to make a Brightside Crossing.” “At perihelion?” “Of course. When else?” The grizzled man searched Baron’s face for a moment without expression. Then he said slowly, “No, I’m afraid you’re not going to make the Crossing.” “Say, who are you, if you don’t mind?” Baron demanded. “The name is Claney,” said the stranger. There was a silence. Then: “Claney? Peter Claney?” “That’s right.” Baron’s eyes were wide with excitement, all trace of anger gone. “Great balls of fire, man— where have you been hiding? We’ve been trying to contact you for months!” “I know. I was hoping you’d quit looking and chuck the whole idea.” “Quit looking!” Baron bent forward over the table. “My friend, we’d given up hope, but we’ve never quit looking. Here, have a drink. There’s so much you can tell us.” His fingers were trembling. Peter Claney shook his head. “I can’t tell you anything you want to hear.” “But you’ve got to. You’re the only man on Earth who’s attempted a Brightside Crossing and lived through it! And the story you cleared for the news—it was nothing. We need details . Where did your equipment fall down? Where did you miscalculate? What were the trouble spots?” Baron jabbed a finger at Claney’s face. “That, for instance—epithelioma? Why? What was wrong with your glass? Your filters? We’ve got to know those things. If you can tell us, we can make it across where your attempt failed—” “You want to know why we failed?” asked Claney. “Of course we want to know. We have to know.” “It’s simple. We failed because it can’t be done. We couldn’t do it and neither can you. No human beings will ever cross the Brightside alive, not if they try for centuries.” “Nonsense,” Baron declared. “We will.” Claney shrugged. “I was there. I know what I’m saying. You can blame the equipment or the men—there were flaws in both quarters—but we just didn’t know what we were fighting. It was the planet that whipped us, that and the Sun . They’ll whip you, too, if you try it.” “Never,” said Baron. “Let me tell you,” Peter Claney said. 
I’d been interested in the Brightside for almost as long as I can remember (Claney said). I guess I was about ten when Wyatt and Carpenter made the last attempt—that was in 2082, I think. I followed the news stories like a tri-V serial and then I was heartbroken when they just disappeared. I know now that they were a pair of idiots, starting off without proper equipment, with practically no knowledge of surface conditions, without any charts—they couldn’t have made a hundred miles—but I didn’t know that then and it was a terrible tragedy. After that, I followed Sanderson’s work in the Twilight Lab up there and began to get Brightside into my blood, sure as death. But it was Mikuta’s idea to attempt a Crossing. Did you ever know Tom Mikuta? I don’t suppose you did. No, not Japanese—Polish-American. He was a major in the Interplanetary Service for some years and hung onto the title after he gave up his commission. He was with Armstrong on Mars during his Service days, did a good deal of the original mapping and surveying for the Colony there. I first met him on Venus; we spent five years together up there doing some of the nastiest exploring since the Matto Grasso. Then he made the attempt on Vulcan Crater that paved the way for Balmer a few years later. I’d always liked the Major—he was big and quiet and cool, the sort of guy who always had things figured a little further ahead than anyone else and always knew what to do in a tight place. Too many men in this game are all nerve and luck, with no judgment. The Major had both. He also had the kind of personality that could take a crew of wild men and make them work like a well-oiled machine across a thousand miles of Venus jungle. I liked him and I trusted him. He contacted me in New York and he was very casual at first. We spent an evening here at the Red Lion, talking about old times; he told me about the Vulcan business, and how he’d been out to see Sanderson and the Twilight Lab on Mercury, and how he preferred a hot trek to a cold one any day of the year—and then he wanted to know what I’d been doing since Venus and what my plans were. “No particular plans,” I told him. “Why?” He looked me over. “How much do you weigh, Peter?” I told him one-thirty-five. “That much!” he said. “Well, there can’t be much fat on you, at any rate. How do you take heat?” “You should know,” I said. “Venus was no icebox.” “No, I mean real heat.” Then I began to get it. “You’re planning a trip.” “That’s right. A hot trip.” He grinned at me. “Might be dangerous, too.” “What trip?” “Brightside of Mercury,” the Major said. I whistled cautiously. “At aphelion?” He threw his head back. “Why try a Crossing at aphelion? What have you done then? Four thousand miles of butcherous heat, just to have some joker come along, use your data and drum you out of the glory by crossing at perihelion forty-four days later? No, thanks. I want the Brightside without any nonsense about it.” He leaned across me eagerly. “I want to make a Crossing at perihelion and I want to cross on the surface. If a man can do that, he’s got Mercury. Until then, nobody’s got Mercury. I want Mercury—but I’ll need help getting it.” I’d thought of it a thousand times and never dared consider it. Nobody had, since Wyatt and Carpenter disappeared. Mercury turns on its axis in the same time that it wheels around the Sun, which means that the Brightside is always facing in. 
That makes the Brightside of Mercury at perihelion the hottest place in the Solar System, with one single exception: the surface of the Sun itself. It would be a hellish trek. Only a few men had ever learned just how hellish and they never came back to tell about it. It was a real hell’s Crossing, but someday, I thought, somebody would cross it. I wanted to be along. The Twilight Lab, near the northern pole of Mercury, was the obvious jumping-off place. The setup there wasn’t very extensive—a rocket landing, the labs and quarters for Sanderson’s crew sunk deep into the crust, and the tower that housed the Solar ’scope that Sanderson had built up there ten years before. Twilight Lab wasn’t particularly interested in the Brightside, of course—the Sun was Sanderson’s baby and he’d picked Mercury as the closest chunk of rock to the Sun that could hold his observatory. He’d chosen a good location, too. On Mercury, the Brightside temperature hits 770° F. at perihelion and the Darkside runs pretty constant at -410° F. No permanent installation with a human crew could survive at either extreme. But with Mercury’s wobble, the twilight zone between Brightside and Darkside offers something closer to survival temperatures. Sanderson built the Lab up near the pole, where the zone is about five miles wide, so the temperature only varies 50 to 60 degrees with the libration. The Solar ’scope could take that much change and they’d get good clear observation of the Sun for about seventy out of the eighty-eight days it takes the planet to wheel around. The Major was counting on Sanderson knowing something about Mercury as well as the Sun when we camped at the Lab to make final preparations. Sanderson did. He thought we’d lost our minds and he said so, but he gave us all the help he could. He spent a week briefing Jack Stone, the third member of our party, who had arrived with the supplies and equipment a few days earlier. Poor Jack met us at the rocket landing almost bawling, Sanderson had given him such a gloomy picture of what Brightside was like. Stone was a youngster—hardly twenty-five, I’d say—but he’d been with the Major at Vulcan and had begged to join this trek. I had a funny feeling that Jack really didn’t care for exploring too much, but he thought Mikuta was God, followed him around like a puppy. It didn’t matter to me as long as he knew what he was getting in for. You don’t go asking people in this game why they do it—they’re liable to get awfully uneasy and none of them can ever give you an answer that makes sense. Anyway, Stone had borrowed three men from the Lab, and had the supplies and equipment all lined up when we got there, ready to check and test. We dug right in. With plenty of funds—tri-V money and some government cash the Major had talked his way around—our equipment was new and good. Mikuta had done the designing and testing himself, with a big assist from Sanderson. We had four Bugs, three of them the light pillow-tire models, with special lead-cooled cut-in engines when the heat set in, and one heavy-duty tractor model for pulling the sledges. The Major went over them like a kid at the circus. Then he said, “Have you heard anything from McIvers?” “Who’s he?” Stone wanted to know. “He’ll be joining us. He’s a good man—got quite a name for climbing, back home.” The Major turned to me. “You’ve probably heard of him.” I’d heard plenty of stories about Ted McIvers and I wasn’t too happy to hear that he was joining us. “Kind of a daredevil, isn’t he?” “Maybe. He’s lucky and skillful. 
Where do you draw the line? We’ll need plenty of both.” “Have you ever worked with him?” I asked. “No. Are you worried?” “Not exactly. But Brightside is no place to count on luck.” The Major laughed. “I don’t think we need to worry about McIvers. We understood each other when I talked up the trip to him and we’re going to need each other too much to do any fooling around.” He turned back to the supply list. “Meanwhile, let’s get this stuff listed and packed. We’ll need to cut weight sharply and our time is short. Sanderson says we should leave in three days.” Two days later, McIvers hadn’t arrived. The Major didn’t say much about it. Stone was getting edgy and so was I. We spent the second day studying charts of the Brightside, such as they were. The best available were pretty poor, taken from so far out that the detail dissolved into blurs on blow-up. They showed the biggest ranges of peaks and craters and faults, and that was all. Still, we could use them to plan a broad outline of our course. “This range here,” the Major said as we crowded around the board, “is largely inactive, according to Sanderson. But these to the south and west could be active. Seismograph tracings suggest a lot of activity in that region, getting worse down toward the equator—not only volcanic, but sub-surface shifting.” Stone nodded. “Sanderson told me there was probably constant surface activity.” The Major shrugged. “Well, it’s treacherous, there’s no doubt of it. But the only way to avoid it is to travel over the Pole, which would lose us days and offer us no guarantee of less activity to the west. Now we might avoid some if we could find a pass through this range and cut sharp east—” It seemed that the more we considered the problem, the further we got from a solution. We knew there were active volcanoes on the Brightside—even on the Darkside, though surface activity there was pretty much slowed down and localized. But there were problems of atmosphere on Brightside, as well. There was an atmosphere and a constant atmospheric flow from Brightside to Darkside. Not much—the lighter gases had reached escape velocity and disappeared from Brightside millennia ago—but there was CO 2 , and nitrogen, and traces of other heavier gases. There was also an abundance of sulfur vapor, as well as carbon disulfide and sulfur dioxide. The atmospheric tide moved toward the Darkside, where it condensed, carrying enough volcanic ash with it for Sanderson to estimate the depth and nature of the surface upheavals on Brightside from his samplings. The trick was to find a passage that avoided those upheavals as far as possible. But in the final analysis, we were barely scraping the surface. The only way we would find out what was happening where was to be there. Finally, on the third day, McIvers blew in on a freight rocket from Venus. He’d missed the ship that the Major and I had taken by a few hours, and had conned his way to Venus in hopes of getting a hop from there. He didn’t seem too upset about it, as though this were his usual way of doing things and he couldn’t see why everyone should get so excited. He was a tall, rangy man with long, wavy hair prematurely gray, and the sort of eyes that looked like a climber’s—half-closed, sleepy, almost indolent, but capable of abrupt alertness. And he never stood still; he was always moving, always doing something with his hands, or talking, or pacing about. Evidently the Major decided not to press the issue of his arrival. 
There was still work to do, and an hour later we were running the final tests on the pressure suits. That evening, Stone and McIvers were thick as thieves, and everything was set for an early departure after we got some rest. “And that,” said Baron, finishing his drink and signaling the waiter for another pair, “was your first big mistake.” Peter Claney raised his eyebrows. “McIvers?” “Of course.” Claney shrugged, glanced at the small quiet tables around them. “There are lots of bizarre personalities around a place like this, and some of the best wouldn’t seem to be the most reliable at first glance. Anyway, personality problems weren’t our big problem right then. Equipment worried us first and route next.” Baron nodded in agreement. “What kind of suits did you have?” “The best insulating suits ever made,” said Claney. “Each one had an inner lining of a fiberglass modification, to avoid the clumsiness of asbestos, and carried the refrigerating unit and oxygen storage which we recharged from the sledges every eight hours. Outer layer carried a monomolecular chrome reflecting surface that made us glitter like Christmas trees. And we had a half-inch dead-air space under positive pressure between the two layers. Warning thermocouples, of course—at 770 degrees, it wouldn’t take much time to fry us to cinders if the suits failed somewhere.” “How about the Bugs?” “They were insulated, too, but we weren’t counting on them too much for protection.” “You weren’t!” Baron exclaimed. “Why not?” “We’d be in and out of them too much. They gave us mobility and storage, but we knew we’d have to do a lot of forward work on foot.” Claney smiled bitterly. “Which meant that we had an inch of fiberglass and a half-inch of dead air between us and a surface temperature where lead flowed like water and zinc was almost at melting point and the pools of sulfur in the shadows were boiling like oatmeal over a campfire.” Baron licked his lips. His fingers stroked the cool, wet glass as he set it down on the tablecloth. “Go on,” he said tautly. “You started on schedule?” “Oh, yes,” said Claney, “we started on schedule, all right. We just didn’t quite end on schedule, that was all. But I’m getting to that.” He settled back in his chair and continued. We jumped off from Twilight on a course due southeast with thirty days to make it to the Center of Brightside. If we could cross an average of seventy miles a day, we could hit Center exactly at perihelion, the point of Mercury’s closest approach to the Sun—which made Center the hottest part of the planet at the hottest it ever gets. The Sun was already huge and yellow over the horizon when we started, twice the size it appears on Earth. Every day that Sun would grow bigger and whiter, and every day the surface would get hotter. But once we reached Center, the job was only half done—we would still have to travel another two thousand miles to the opposite twilight zone. Sanderson was to meet us on the other side in the Laboratory’s scout ship, approximately sixty days from the time we jumped off. That was the plan, in outline. It was up to us to cross those seventy miles a day, no matter how hot it became, no matter what terrain we had to cross. Detours would be dangerous and time-consuming. Delays could cost us our lives. We all knew that. The Major briefed us on details an hour before we left. “Peter, you’ll take the lead Bug, the small one we stripped down for you. Stone and I will flank you on either side, giving you a hundred-yard lead. 
McIvers, you’ll have the job of dragging the sledges, so we’ll have to direct your course pretty closely. Peter’s job is to pick the passage at any given point. If there’s any doubt of safe passage, we’ll all explore ahead on foot before we risk the Bugs. Got that?” McIvers and Stone exchanged glances. McIvers said: “Jack and I were planning to change around. We figured he could take the sledges. That would give me a little more mobility.” The Major looked up sharply at Stone. “Do you buy that, Jack?” Stone shrugged. “I don’t mind. Mac wanted—” McIvers made an impatient gesture with his hands. “It doesn’t matter. I just feel better when I’m on the move. Does it make any difference?” “I guess it doesn’t,” said the Major. “Then you’ll flank Peter along with me. Right?” “Sure, sure.” McIvers pulled at his lower lip. “Who’s going to do the advance scouting?” “It sounds like I am,” I cut in. “We want to keep the lead Bug light as possible.” Mikuta nodded. “That’s right. Peter’s Bug is stripped down to the frame and wheels.” McIvers shook his head. “No, I mean the advance work. You need somebody out ahead—four or five miles, at least—to pick up the big flaws and active surface changes, don’t you?” He stared at the Major. “I mean, how can we tell what sort of a hole we may be moving into, unless we have a scout up ahead?” “That’s what we have the charts for,” the Major said sharply. “Charts! I’m talking about detail work. We don’t need to worry about the major topography. It’s the little faults you can’t see on the pictures that can kill us.” He tossed the charts down excitedly. “Look, let me take a Bug out ahead and work reconnaissance, keep five, maybe ten miles ahead of the column. I can stay on good solid ground, of course, but scan the area closely and radio back to Peter where to avoid the flaws. Then—” “No dice,” the Major broke in. “But why not? We could save ourselves days!” “I don’t care what we could save. We stay together. When we get to the Center, I want live men along with me. That means we stay within easy sight of each other at all times. Any climber knows that everybody is safer in a party than one man alone—any time, any place.” McIvers stared at him, his cheeks an angry red. Finally he gave a sullen nod. “Okay. If you say so.” “Well, I say so and I mean it. I don’t want any fancy stuff. We’re going to hit Center together, and finish the Crossing together. Got that?” McIvers nodded. Mikuta then looked at Stone and me and we nodded, too. “All right,” he said slowly. “Now that we’ve got it straight, let’s go.” It was hot. If I forget everything else about that trek, I’ll never forget that huge yellow Sun glaring down, without a break, hotter and hotter with every mile. We knew that the first few days would be the easiest and we were rested and fresh when we started down the long ragged gorge southeast of the Twilight Lab. I moved out first; back over my shoulder, I could see the Major and McIvers crawling out behind me, their pillow tires taking the rugged floor of the gorge smoothly. Behind them, Stone dragged the sledges. Even at only 30 per cent Earth gravity they were a strain on the big tractor, until the ski-blades bit into the fluffy volcanic ash blanketing the valley. We even had a path to follow for the first twenty miles. I kept my eyes pasted to the big polaroid binocs, picking out the track the early research teams had made out into the edge of Brightside. But in a couple of hours we rumbled past Sanderson’s little outpost observatory and the tracks stopped. 
We were in virgin territory and already the Sun was beginning to bite. We didn’t feel the heat so much those first days out. We saw it. The refrig units kept our skins at a nice comfortable seventy-five degrees Fahrenheit inside our suits, but our eyes watched that glaring Sun and the baked yellow rocks going past, and some nerve pathways got twisted up, somehow. We poured sweat as if we were in a superheated furnace. We drove eight hours and slept five. When a sleep period came due, we pulled the Bugs together into a square, threw up a light aluminum sun-shield and lay out in the dust and rocks. The sun-shield cut the temperature down sixty or seventy degrees, for whatever help that was. And then we ate from the forward sledge—sucking through tubes—protein, carbohydrates, bulk gelatin, vitamins. The Major measured water out with an iron hand, because we’d have drunk ourselves into nephritis in a week otherwise. We were constantly, unceasingly thirsty. Ask the physiologists and psychiatrists why—they can give you half a dozen interesting reasons—but all we knew, or cared about, was that it happened to be so. We didn’t sleep the first few stops, as a consequence. Our eyes burned in spite of the filters and we had roaring headaches, but we couldn’t sleep them off. We sat around looking at each other. Then McIvers would say how good a beer would taste, and off we’d go. We’d have murdered our grandmothers for one ice-cold bottle of beer. After a few driving periods, I began to get my bearings at the wheel. We were moving down into desolation that made Earth’s old Death Valley look like a Japanese rose garden. Huge sun-baked cracks opened up in the floor of the gorge, with black cliffs jutting up on either side; the air was filled with a barely visible yellowish mist of sulfur and sulfurous gases. It was a hot, barren hole, no place for any man to go, but the challenge was so powerful you could almost feel it. No one had ever crossed this land before and escaped. Those who had tried it had been cruelly punished, but the land was still there, so it had to be crossed. Not the easy way. It had to be crossed the hardest way possible: overland, through anything the land could throw up to us, at the most difficult time possible. Yet we knew that even the land might have been conquered before, except for that Sun. We’d fought absolute cold before and won. We’d never fought heat like this and won. The only worse heat in the Solar System was the surface of the Sun itself. Brightside was worth trying for. We would get it or it would get us. That was the bargain. I learned a lot about Mercury those first few driving periods. The gorge petered out after a hundred miles and we moved onto the slope of a range of ragged craters that ran south and east. This range had shown no activity since the first landing on Mercury forty years before, but beyond it there were active cones. Yellow fumes rose from the craters constantly; their sides were shrouded with heavy ash. We couldn’t detect a wind, but we knew there was a hot, sulfurous breeze sweeping in great continental tides across the face of the planet. Not enough for erosion, though. The craters rose up out of jagged gorges, huge towering spears of rock and rubble. Below were the vast yellow flatlands, smoking and hissing from the gases beneath the crust. Over everything was gray dust—silicates and salts, pumice and limestone and granite ash, filling crevices and declivities—offering a soft, treacherous surface for the Bug’s pillow tires. 
I learned to read the ground, to tell a covered fault by the sag of the dust; I learned to spot a passable crack, and tell it from an impassable cut. Time after time the Bugs ground to a halt while we explored a passage on foot, tied together with light copper cable, digging, advancing, digging some more until we were sure the surface would carry the machines. It was cruel work; we slept in exhaustion. But it went smoothly, at first. Too smoothly, it seemed to me, and the others seemed to think so, too. McIvers’ restlessness was beginning to grate on our nerves. He talked too much, while we were resting or while we were driving; wisecracks, witticisms, unfunny jokes that wore thin with repetition. He took to making side trips from the route now and then, never far, but a little further each time. Jack Stone reacted quite the opposite; he grew quieter with each stop, more reserved and apprehensive. I didn’t like it, but I figured that it would pass off after a while. I was apprehensive enough myself; I just managed to hide it better. And every mile the Sun got bigger and whiter and higher in the sky and hotter. Without our ultra-violet screens and glare filters we would have been blinded; as it was our eyes ached constantly and the skin on our faces itched and tingled at the end of an eight-hour trek. But it took one of those side trips of McIvers’ to deliver the penultimate blow to our already fraying nerves. He had driven down a side-branch of a long canyon running off west of our route and was almost out of sight in a cloud of ash when we heard a sharp cry through our earphones. I wheeled my Bug around with my heart in my throat and spotted him through the binocs, waving frantically from the top of his machine. The Major and I took off, lumbering down the gulch after him as fast as the Bugs could go, with a thousand horrible pictures racing through our minds.... We found him standing stock-still, pointing down the gorge and, for once, he didn’t have anything to say. It was the wreck of a Bug; an old-fashioned half-track model of the sort that hadn’t been in use for years. It was wedged tight in a cut in the rock, an axle broken, its casing split wide open up the middle, half-buried in a rock slide. A dozen feet away were two insulated suits with white bones gleaming through the fiberglass helmets. This was as far as Wyatt and Carpenter had gotten on their Brightside Crossing. On the fifth driving period out, the terrain began to change. It looked the same, but every now and then it felt different. On two occasions I felt my wheels spin, with a howl of protest from my engine. Then, quite suddenly, the Bug gave a lurch; I gunned my motor and nothing happened. I could see the dull gray stuff seeping up around the hubs, thick and tenacious, splattering around in steaming gobs as the wheels spun. I knew what had happened the moment the wheels gave and, a few minutes later, they chained me to the tractor and dragged me back out of the mire. It looked for all the world like thick gray mud, but it was a pit of molten lead, steaming under a soft layer of concealing ash. I picked my way more cautiously then. We were getting into an area of recent surface activity; the surface was really treacherous. I caught myself wishing that the Major had okayed McIvers’ scheme for an advanced scout; more dangerous for the individual, maybe, but I was driving blind now and I didn’t like it. One error in judgment could sink us all, but I wasn’t thinking much about the others. 
I was worried about me , plenty worried. I kept thinking, better McIvers should go than me. It wasn’t healthy thinking and I knew it, but I couldn’t get the thought out of my mind. It was a grueling eight hours and we slept poorly. Back in the Bug again, we moved still more slowly—edging out on a broad flat plateau, dodging a network of gaping surface cracks—winding back and forth in an effort to keep the machines on solid rock. I couldn’t see far ahead, because of the yellow haze rising from the cracks, so I was almost on top of it when I saw a sharp cut ahead where the surface dropped six feet beyond a deep crack. I let out a shout to halt the others; then I edged my Bug forward, peering at the cleft. It was deep and wide. I moved fifty yards to the left, then back to the right. There was only one place that looked like a possible crossing; a long, narrow ledge of gray stuff that lay down across a section of the fault like a ramp. Even as I watched it, I could feel the surface crust under the Bug trembling and saw the ledge shift over a few feet.
A. They died when a rock slide crushed their vehicle while they were attempting the Brightside Crossing.
Why did Feetch quit? A. Piltdon never appreciated or listened to him B. Piltdon took all the credit for the Super-Opener C. Feetch wanted to retire D. Piltdon wouldn't give him enough money
THE SUPER OPENER BY MICHAEL ZUROY Here's why you should ask for a "Feetch M-D" next time you get a can opener! [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, August 1958. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] "Feetch!" grated Ogden Piltdon, president of the Piltdon Opener Company, slamming the drafting board with his hairy fist, "I want results!" Heads lifted over boards. Kalvin Feetch shrunk visibly. "As chief engineer you're not carrying the ball," Piltdon went on savagely. "The Piltdon Can-Opener is trailing the competition. Advertising and Sales are breaking their necks. It's Engineering that's missing the boat!" "But Mr. Piltdon," remonstrated Feetch unsteadily under his employer's glare, "don't you remember? I tried to...." "For two years there hasn't been one lousy improvement in the Piltdon Can-Opener!" roared Mr. Piltdon. "Look at our competitors. The International rips apart cans in three and three-tenths seconds. Universal does it in four." "But Mr. Piltdon—" "The Minerva Mighty Midget does it in four point two two and plays Home Sweet Home in chimes. Our own Piltdon opener barely manages to open a can in eight point nine without chimes. Is this what I'm paying you for?" Feetch adjusted his spectacles with shaking hands. "But Mr. Piltdon, our opener still has stability, solidity. It is built to last. It has dignity...." "Dignity," pronounced Piltdon, "is for museums. Four months, Feetch! In four months I want a new can-opener that will be faster, lighter, stronger, flashier and more musical than any other on the market. I want it completely developed, engineered and tooled-up, ready for production. Otherwise, Feetch—" Feetch's body twitched. "But Mr. Piltdon, four months is hardly time enough for development, even with an adequate staff. I've been trying to tell you for years that we're bound to fall behind because we don't have enough personnel to conduct research. Our men can barely keep up with production and maintenance. If you would let me put on a few draftsmen and...." "Excuses," sneered Mr. Piltdon. "Your staff is more than adequate. I will not allow you to throw out my money. Four months, Feetch, no more!" Piltdon trudged out of the room, leaving behind him an oppressive silence. How could you set a time limit on research and development? A designer had to dream at his board, investigate, search, build, test, compare, discard. He had always wanted to devote all his time to research, but Piltdon Opener had not given him that opportunity. Twenty-five years! thought Feetch. Twenty-five years of close supervision, dead-lines, production headaches, inadequate facilities and assistance. What had happened, to the proud dream he once had, the dream of exploring uncharted engineering regions, of unlimited time to investigate and develop? Ah, well, thought Feetch straightening his thin shoulders, he had managed somehow to design a few good things during his twenty-five years with Piltdon. That was some satisfaction. What now? He had to hang on to his job. Technical work was scarce. Since the early 1980's the schools had been turning out more technicians than industry could absorb. He was too old to compete in the employment market. He couldn't afford to lose any money. Jenny wasn't well. How to meet this four month dead-line? He would get right on it himself, of course; Hanson—good man—could work with him. He shook his head despairingly. Something would be sure to blow up. 
Well, he had to start— "Chief," said Hanson a few weeks later as they entered the lab, "I'm beginning to wonder if the answer is in the hand mechanical type at all." "Got to be," answered Feetch tiredly. "We must work along classical can-opener lines. Departures, such as the thermal or motor-driven types, would be too expensive for mass production." Three new models and a group of cans were waiting for them on the bench. They began testing, Hanson operating the openers and Feetch clocking. "Four point four," announced Feetch after the last test. "Good, but not good enough. Too bulky. Appearance unsatisfactory. Chimes tinny. We've made progress, but we've a long way to go." The problem was tricky. It might seem that use of the proper gear ratios would give the required velocity, but there were too many other factors that negated this direct approach. The mechanism had to be compact and streamlined. Gear sizes had to be kept down. Can-top resistance, internal resistance, cutting tooth performance, handle size and moment, the minimum strength of a woman's hand were some of the variables that had to be balanced within rigid limits. Sector type cutters, traversing several arcs at the same time, had seemed to offer the answer for a while, but the adjusting mechanism necessary to compensate for variable can sizes had been too complex to be practical. There was the ever-present limit to production cost. Hanson's eyes were upon him. "Chief," he said, "it's a rotten shame. Twenty-five years of your life you put in with Piltdon, and he'd fire you just like that if you don't do the impossible. The Piltdon Company is built upon your designs and you get handed this deal!" "Well, well," said Feetch. "I drew my pay every week so I suppose I have no complaints. Although," a wistful note crept into his voice "I would have liked a little recognition. Piltdon is a household word, but who has heard of Feetch? Well,"—Feetch blew his nose—"how do we stand, Hanson?" Hanson's bull-dog features drew into a scowl. "Piltdon ought to be rayed," he growled. "O.K., Chief. Eleven experimental models designed to date. Two more on the boards. Nine completed and tested, two in work. Best performance, four point four, but model otherwise unsatisfactory." "Hello," said Feetch as an aproned machinist entered carrying a glistening mechanism. "Here's another model. Let's try it." The machinist departed and Hanson locked the opener on a can. "I hope——" he turned the handle, and stopped abruptly, staring down open-mouthed. A cylinder of close-packed beans rested on the bench under the opener. The can itself had disappeared. "Chief," said Hanson. "Chief." "Yes," said Feetch. "I see it too. Try another can." "Vegetable soup or spinach?" inquired Hanson dreamily. "Spinach, I think," said Feetch. "Where did the can go, do you suppose?" The spinach can disappeared. Likewise several corn cans, sweet potato cans and corned-beef hash cans, leaving their contents intact. It was rather disconcerting. "Dear, dear," said Feetch, regarding the piles of food on the bench. "There must be some explanation. I designed this opener with sixteen degree, twenty-two minute pressure angle modified involute gear teeth, seven degree, nineteen minute front clearance cutter angle and thirty-six degree, twelve minute back rake angle. I expected that such departures from the norm might achieve unconventional performance, but this—Dear, dear. Where do the cans go, I wonder?" "What's the difference? Don't you see what you've got here? It's the answer! 
It's more than the answer! We can put this right into work and beat the dead-line." Feetch shook his head. "No, Hanson. We're producing something we don't understand. What forces have we uncovered here? Where do the cans go? What makes them disappear? Are we dealing with a kinetic or a kinematic effect? What motions can we plot in the area of disappearance and what are their analytical mathematical formulae? What masses may be critical here? What transformations of energy are involved? No, Hanson, we must learn a lot more." "But Chief, your job." "I'll risk that. Not a word to Piltdon." Several days later, however, Piltdon himself charged into the drawing room and slapped Feetch heartily on the back, causing him to break a pencil point. "Feetch!" roared Piltdon. "Is this talk that's going around the plant true? Why didn't you tell me? Let's see it." After Piltdon had seen it his eyes took on a feverish glint. "This," he exulted, "will make can-opener history. Instantaneous opening! Automatic disposal! Wait until Advertising and Sales get hold of this! We'll throttle our competitors! The Piltdon Super-Opener we'll call it." "Mr. Piltdon—" said Feetch shakily. Piltdon stared at his chief engineer sharply. "What's the matter, Feetch? The thing can be duplicated, can't it?" "Yes, sir. I've just finished checking that. But I'm in the midst of further investigation of the effect. There's more here than just a new type can-opener, sir. A whole new field of physics. New principles. This is big, Mr. Piltdon. I recommend that we delay production until further research can be completed. Hire a few top scientists and engineers. Find out where the cans go. Put out a scientific paper on the effect." "Feetch," bit out Piltdon, his face growing hard. "Stow this hooey. I don't give a damn where the cans go. May I remind you that under our standard patent agreement, all rights to your invention belong to the company? As well as anything you may produce in the field within a year after leaving our employ? We have a good thing here, and I don't want you holding it back. We're going into production immediately." Close, thought Feetch, wearily. It had been a man-killing job, and it had been close, but he'd made it. Beat the time limit by a half-day. The first tentative shipments of Piltdon Super-Openers had gone to distributors along the Eastern seaboard. The first advertisements blazed in selected media. The first reorders came back, and then: "It's a sell-out!" crowed Piltdon, waving a sheaf of telegrams. "Step up production! Let 'er rip!" The Super-Openers rolled over the country. In a remarkably short time they appeared in millions of kitchens from coast-to-coast. Sales climbed to hundreds of thousands per day. Piltdon Opener went into peak production in three shifts, but was still unable to keep up with the demand. Construction was begun on a new plant, and additional plants were planned. Long lines waited in front of houseware stores. Department stores, lucky enough to have Super-Openers on hand, limited sales to one to a customer. Piltdon cancelled his advertising program. Newspapers, magazines, radio, television and word-of-mouth spread the fame of the opener so that advertising was unnecessary. Meanwhile, of course, government scientists, research foundations, universities and independent investigators began to look into this new phenomonen. Receiving no satisfactory explanation from Piltdon, they set up their own research. Far into the night burned the lights of countless laboratories. 
Noted physicists probed, measured, weighed, traced, X-rayed, dissolved, spun, peered at, photographed, magnetized, exploded, shattered and analyzed Super-Openers without achieving the glimmer of a satisfactory explanation. Competitors found the patent impossible to circumvent, for any departure from its exact specifications nullified the effect. Piltdon, genial these days with success and acclaim, roared at Feetch: "I'm putting you in for a raise. Yes sir! To reward you for assisting me with my invention I'm raising your pay two hundred dollars a year. That's almost four dollars a week, man." "Thank you, Mr. Piltdon." And still, thought Feetch wryly, he received no recognition. His name did not even appear on the patent. Well, well, that was the way it went. He must find his satisfaction in his work. And it had been interesting lately, the work he had been doing nights at home investigating what had been named the Piltdon Effect. It had been difficult, working alone and buying his own equipment. The oscillator and ultra microwave tracking unit had been particularly expensive. He was a fool, he supposed, to try independent research when so many huge scientific organizations were working on it. But he could no more keep away from it than he could stop eating. He still didn't know where the cans went, but somehow he felt that he was close to the answer. When he finally found the answer, it was too late. The Borenchuck incident was only hours away. As soon as he could get hold of Piltdon, Feetch said trembling, "Sir, I think I know where those cans are going. I recommend—" "Are you still worrying about that?" Piltdon roared jovially. "Leave that to the long-hairs. We're making money, that's all that counts, eh Feetch?" That night, at six-ten p.m., the Borenchuck family of Selby, South Dakota, sat down to their evening meal. Just as they started in on the soup, a rain of empty tin cans clattered down, splashed into the soup, raised a welt on the forehead of Borenchuck senior, settled down to a gentle, steady klunk! klunk! klunk! and inexorably began to pile up on the dining-room floor. They seemed to materialize from a plane just below the ceiling. The police called the fire department and the fire department stared helplessly and recommended the sanitation department. The incident made headlines in the local papers. The next day other local papers in widely scattered locations reported similar incidents. The following day, cans began falling on Chicago. St. Louis was next, and then over the entire nation the cans began to rain down. They fell outdoors and indoors, usually materializing at heights that were not dangerous. The deluge followed no pattern. Sometimes it would slacken, sometimes it would stop, sometimes begin heavily again. It fell in homes, on the streets, in theatres, trains, ships, universities and dog-food factories. No place was immune. People took to wearing hats indoors and out, and the sale of helmets boomed. All activity was seriously curtailed. A state of national emergency was declared. Government investigators went to work and soon confirmed what was generally suspected: these were the same cans that had been opened by the Piltdon Super-Opener. Statisticians and mathematicians calculated the mean rate of can precipitation and estimated that if all the cans opened by Piltdon openers were to come back, the deluge should be over in fifteen point twenty-nine days. Super-Opener sales of course immediately plummeted to zero and stayed there. 
Anti-Piltdon editorials appeared in the papers. Commentators accused Piltdon of deliberately hoaxing the public for his own gain. A Congressional investigation was demanded. Piltdon received threats of bodily injury. Lawsuits were filed against him. He barricaded himself in the plant, surrounded by bodyguards. Livid with fury and apprehension, he screamed at Feetch, "This is your doing, you vandal! I'm a ruined man!" A falling can caught him neatly on the tip of his nose. "But sir," trembled Feetch, dodging three spaghetti cans, "I tried to warn you." "You're through, Feetch!" raved Piltdon. "Fired! Get out! But before you go, I want you to know that I've directed the blame where it belongs. I've just released to the press the truth about who created the Super-Opener. Now, get out!" "Yes, sir," said Feetch paling. "Then you don't want to hear about my discovery of a way to prevent the cans from coming back?" Klunk! A barrage of cans hit the floor, and both men took refuge under Piltdon's huge desk. "No!" yelled Piltdon at Feetch's face which was inches away. "No, I——What did you say?" "A small design improvement sir, and the cans would disappear forever." Klunk! "Forever, Feetch?" "Yes sir." Klunk! Klunk! "You're positive, Feetch?" Piltdon's eyes glared into Feetch's. "Sir, I never make careless claims." "That's true," said Piltdon. His eyes grew dreamy. "It can be done," he mused. "The New Type Super-Opener. Free exchanges for the old. Cash guarantee that empty cans will never bother you. Take a licking at first, but then monopolize the market. All right, Feetch, I'll give you another chance. You'll turn over all the details to me. The patent on the improvement will naturally be mine. I'll get the credit for rectifying your blunder. Fine, fine. We'll work it out. Hop on production, at once, Feetch." Feetch felt himself sag inwardly. "Mr. Piltdon," he said. "I'm asking only one favor. Let me work full time on research and development, especially on the Piltdon effect. Hire a couple of extra men to help with production. I assure you the company will benefit in the end." "Damn it, no!" roared Piltdon. "How many times must I tell you? You got your job back, didn't you?" The prospect of long years of heavy production schedules, restricted engineering and tight supervision suddenly made Kalvin Feetch feel very tired. Research, he thought. Development. What he had always wanted. Over the years he had waited, thinking that there would be opportunities later. But now he was growing older, and he felt that there might not be a later. Somehow he would manage to get along. Perhaps someone would give him a job working in the new field he had pioneered. With a sense of relief he realized that he had made his decision. "Mr. Piltdon," Feetch said. "I—" klunk!—"resign." Piltdon started, extreme astonishment crossing his face. "No use," said Feetch. "Nothing you can say—" klunk! klunk! klunk!—"will make any difference now." "But see here, the New Type Super-Opener...!" "Will remain my secret. Good day." "Feetch!" howled Piltdon. "I order you to remain!" Feetch almost submitted from force of habit. He hesitated for a moment, then turned abruptly. "Good-day," said Feetch firmly, sprinting through the falling cans to the door. Money, Feetch decided after a while, was a good thing to have. His supply was running pretty low. He was not having any luck finding another job. 
Although the cans had stopped falling on the fifteenth day, as predicted by the statisticians, industry would not soon forget the inconvenience and losses caused by the deluge. It was not anxious to hire the man it regarded as responsible for the whole thing. "Feetch," the personnel man would read. "Kalvin Feetch." Then, looking up, "Not the Kalvin Feetch who—" "Yes," Feetch would admit miserably. "I am sorry, but—" He did no better with research organizations. Typical was a letter from the Van Terrel Foundation: "—cannot accept your application inasmuch as we feel your premature application of your discovery to profit-making denotes a lack of scientific responsibility and ethics not desirable in a member of our organization—former employer states the decision was yours entirely. Unfavorable reference—" Piltdon, Feetch thought, feeling a strange sensation deep within his chest that he had not the experience to recognize as the beginning of a slow anger, Piltdon was hitting low and getting away with it. Of course, if he were to agree to reveal his latest discoveries to a research organization, he would undoubtedly get an appointment. But how could he? Everything patentable in his work would automatically revert to Piltdon under the one year clause in the company patent agreement. No, Feetch told himself, he was revealing nothing that Piltdon might grab. The anger began to mount. But he was beginning to need money desperately. Jenny wasn't getting any better and medical bills were running high. The phone rang. Feetch seized it and said to the image: "Absolutely not." "I'll go up another ten dollars," grated the little Piltdon image. "Do you realize, man, this is the fourteenth raise I've offered you? A total increase of one hundred and twenty-six dollars? Be sensible, Feetch. I know you can't find work anywhere else." "Thanks to you. Mr. Piltdon, I wouldn't work for you if—" A barrage of rocks crashed against the heavy steel screening of the window. "What's going on!" yelled Piltdon. "Oh, I see. People throwing rocks at your house again? Oh, I know all about that, Feetch. I know that you're probably the most unpopular man alive to-day. I know about the rocks, the tomatoes, the rotten eggs, the sneaking out at night, the disguises you've had to use. Why don't you come back to us and change all that, Feetch? We'll put out the New Type Super-Opener and the world will soon forget about the old one." "No," said Feetch. "People will forget anyway—I hope." "If you won't think of yourself, at least think of your fellow workmen," begged Piltdon, his voice going blurry. "Do you realize that Piltdon Opener will soon be forced to close down, throwing all your former associates out of work? Think of Hanson, Sanchez, Forbes. They have families too. Think of the men in the shop, the girls in the office, the salesmen on the road. All, all unemployed because of you. Think of that, Feetch." Feetch blinked. This had not occurred to him. Piltdon eyed him sharply, then smiled with a hint of triumph. "Think it over, Feetch." Feetch sat, thinking it over. Was it right to let all these people lose their jobs? Frowning, he dialed Hanson's number. "Chief," said Hanson, "Forget it. The boys are behind you one hundred per cent. We'll make out." "But that's the trouble. I thought you'd feel like this, and I can't let you." "You're beginning to weaken. Don't. Think, chief, think. The brain that figured the Super-Opener can solve this." Feetch hung up. A glow of anger that had been building up in his chest grew warmer. 
He began pacing the floor. How he hated to do it. Think, Hanson had said. But he had. He's considered every angle, and there was no solution. Feetch walked into the kitchen and carefully poured himself a drink of water. He drank the water slowly and placed the glass on the washstand with a tiny click. It was the tiny click that did it. Something about it touched off the growing rage. If Piltdon were there he would have punched him in the nose. The twenty-five years. The tricks. The threats. Think? He'd figured the solution long ago, only he hadn't allowed himself to see it. Not lack of brains, lack of guts. Well, he thought grimly, dialing Piltdon's number, he was going through with it now. "Piltdon!" he barked. "Three p.m. tomorrow. My place. Be here. That's all." He hung up. In the same grim mood the following morning, he placed a few more calls. In the same mood that afternoon he stood in the middle of his living-room and looked at his visitors: Piltdon, Williams, the Government man; Billings from the Van Terrel Foundation; Steiner of Westchester University; the members of the press. "Gentlemen," he said. "I'll make it brief." He waved the papers in his hand. "Here is everything I know about what I call the Feetch Effect, including plans and specifications for the New Type Super-Opener. All of you have special reasons for being keenly interested in this information. I am now going to give a copy to each of you, providing one condition is met by Mr. Piltdon." He stared at Piltdon. "In short, I want fifty-one per cent of the stock of Piltdon Opener." Piltdon leaped from his chair. "Outrageous!" He roared. "Ridiculous!" "Fifty-one percent," said Feetch firmly. "Don't bother with any counterproposals or the interview is at an end." "Gentlemen!" squawked Piltdon, "I appeal to you—" "Stop bluffing," said Feetch coldly. "There's no other way out for you. Otherwise you're ruined. Here, sign this agreement." Piltdon threw the paper to the floor and screamed: "Gentlemen, will you be a party to this?" "Well," murmured the Government man, "I never did think Feetch got a fair shake." "This information is important to science," said the Van Terrel man. After Piltdon had signed, the papers were distributed. Published in the newspapers the following day, Feetch's statement read, in part: "The motion in space and time of the singular curvilinear proportions of the original Super-Opener combined with the capacitor effect built up as it increased its frictional electro-static charge in inverse proportion to the cube root of the tolerance between the involute teeth caused an instantaneous disruption of what I call the Alpha multi-dimensional screen. The can, being metallic, dropped through, leaving its non-metallic contents behind. The disruption was instantly repaired by the stable nature of the screen. "Beyond the screen is what I call Alpha space, a space apparently quite as extensive as our own universe. Unfortunately, as my investigations indicated, Alpha space seems to be thickly inhabited. These inhabitants, the nature of whom I have not yet ascertained, obviously resented the intrusion of the cans, developed a method of disrupting the screen from their side, and hurled the cans back at us. "However, I have established the existence of other spaces up to Mu space, and suspect that others exist beyond that. Beta space, which is also adjacent to our own space, is devoid of any form of life. The New Type Super-Opener is designed to pass cans through the Beta screen. 
Beta space will safely absorb an infinite number of cans. "I sincerely and humbly venture the opinion that we are on the threshold of tremendous and mighty discoveries. It is my belief that possibly an infinite number of universes exist in a type of laminated block separated by screens. "Therefore, might it not be that an infinite number of laminated blocks exist—?" "Mr Feetch—" said Piltdon. Feetch looked up from his desk in the newly constructed Feetch Multi-Dimensional Development Division of the Piltdon Opener Company. "Piltdon, don't bother me about production. Production is your problem." "But Mr. Feetch—" "Get out," said Feetch. Piltdon blanched and left. "As I was saying, Hanson—" continued Feetch.
A. Piltdon never appreciated or listened to him
The Altairian economy is most likely representative of which system: A. capitalism B. laissez faire C. socialism D. Keynesian
Letter of the Law by Alan E. Nourse The place was dark and damp, and smelled like moldy leaves. Meyerhoff followed the huge, bear-like Altairian guard down the slippery flagstones of the corridor, sniffing the dead, musty air with distaste. He drew his carefully tailored Terran-styled jacket closer about his shoulders, shivering as his eyes avoided the black, yawning cell-holes they were passing. His foot slipped on the slimy flags from time to time, and finally he paused to wipe the caked mud from his trouser leg. "How much farther is it?" he shouted angrily. The guard waved a heavy paw vaguely into the blackness ahead. Quite suddenly the corridor took a sharp bend, and the Altairian stopped, producing a huge key ring from some obscure fold of his hairy hide. "I still don't see any reason for all the fuss," he grumbled in a wounded tone. "We've treated him like a brother." One of the huge steel doors clicked open. Meyerhoff peered into the blackness, catching a vaguely human outline against the back wall. "Harry?" he called sharply. There was a startled gasp from within, and a skinny, gnarled little man suddenly appeared in the guard's light, like a grotesque, twisted ghost out of the blackness. Wide blue eyes regarded Meyerhoff from beneath uneven black eyebrows, and then the little man's face broke into a crafty grin. "Paul! So they sent you ! I knew I could count on it!" He executed a deep, awkward bow, motioning Meyerhoff into the dark cubicle. "Not much to offer you," he said slyly, "but it's the best I can do under the circumstances." Meyerhoff scowled, and turned abruptly to the guard. "We'll have some privacy now, if you please. Interplanetary ruling. And leave us the light." The guard grumbled, and started for the door. "It's about time you showed up!" cried the little man in the cell. "Great day! Lucky they sent you, pal. Why, I've been in here for years—" "Look, Zeckler, the name is Meyerhoff, and I'm not your pal," Meyerhoff snapped. "And you've been here for two weeks, three days, and approximately four hours. You're getting as bad as your gentle guards when it comes to bandying the truth around." He peered through the dim light at the gaunt face of the prisoner. Zeckler's face was dark with a week's beard, and his bloodshot eyes belied the cocky grin on his lips. His clothes were smeared and sodden, streaked with great splotches of mud and moss. Meyerhoff's face softened a little. "So Harry Zeckler's in a jam again," he said. "You look as if they'd treated you like a brother." The little man snorted. "These overgrown teddy-bears don't know what brotherhood means, nor humanity, either. Bread and water I've been getting, nothing more, and then only if they feel like bringing it down." He sank wearily down on the rock bench along the wall. "I thought you'd never get here! I sent an appeal to the Terran Consulate the first day I was arrested. What happened? I mean, all they had to do was get a man over here, get the extradition papers signed, and provide transportation off the planet for me. Why so much time? I've been sitting here rotting—" He broke off in mid-sentence and stared at Meyerhoff. "You brought the papers, didn't you? I mean, we can leave now?" Meyerhoff stared at the little man with a mixture of pity and disgust. "You are a prize fool," he said finally. "Did you know that?" Zeckler's eyes widened. "What do you mean, fool? So I spend a couple of weeks in this pneumonia trap. The deal was worth it! 
I've got three million credits sitting in the Terran Consulate on Altair V, just waiting for me to walk in and pick them up. Three million credits—do you hear? That's enough to set me up for life!" Meyerhoff nodded grimly. " If you live long enough to walk in and pick them up, that is." "What do you mean, if?" Meyerhoff sank down beside the man, his voice a tense whisper in the musty cell. "I mean that right now you are practically dead. You may not know it, but you are. You walk into a newly opened planet with your smart little bag of tricks, walk in here with a shaky passport and no permit, with no knowledge of the natives outside of two paragraphs of inaccuracies in the Explorer's Guide, and even then you're not content to come in and sell something legitimate, something the natives might conceivably be able to use. No, nothing so simple for you. You have to pull your usual high-pressure stuff. And this time, buddy, you're paying the piper." " You mean I'm not being extradited? " Meyerhoff grinned unpleasantly. "I mean precisely that. You've committed a crime here—a major crime. The Altairians are sore about it. And the Terran Consulate isn't willing to sell all the trading possibilities here down the river just to get you out of a mess. You're going to stand trial—and these natives are out to get you. Personally, I think they're going to get you." Zeckler stood up shakily. "You can't believe anything the natives say," he said uneasily. "They're pathological liars. Why, you should see what they tried to sell me ! You've never seen such a pack of liars as these critters." He glanced up at Meyerhoff. "They'll probably drop a little fine on me and let me go." "A little fine of one Terran neck." Meyerhoff grinned nastily. "You've committed the most heinous crime these creatures can imagine, and they're going to get you for it if it's the last thing they do. I'm afraid, my friend, that your con-man days are over." Zeckler fished in the other man's pocket, extracted a cigarette, and lighted it with trembling fingers. "It's bad, then," he said finally. "It's bad, all right." Some shadow of the sly, elfin grin crept over the little con-man's face. "Well, at any rate, I'm glad they sent you over," he said weakly. "Nothing like a good lawyer to handle a trial." " Lawyer? Not me! Oh, no. Sorry, but no thanks." Meyerhoff chuckled. "I'm your advisor, old boy. Nothing else. I'm here to keep you from botching things up still worse for the Trading Commission, that's all. I wouldn't get tangled up in a mess with those creatures for anything!" He shook his head. "You're your own lawyer, Mr. Super-salesman. It's all your show. And you'd better get your head out of the sand, or you're going to lose a case like it's never been lost before!" Meyerhoff watched the man's pale face, and shook his head. In a way, he thought, it was a pity to see such a change in the rosy-cheeked, dapper, cocksure little man who had talked his way glibly in and out of more jams than Meyerhoff could count. Trading brought scalpers; it was almost inevitable that where rich and unexploited trading ground was uncovered, it would first fall prey to the fast-trading boys. They spread out from Terra with the first wave of exploration—the slick, fast-talking con-men who could work new territories unfettered by the legal restrictions that soon closed down the more established planets. 
The first men in were the richest out, and through some curious quirk of the Terrestrial mind, they knew they could count on Terran protection, however crooked and underhand their methods. But occasionally a situation arose where the civilization and social practices of the alien victims made it unwise to tamper with them. Altair I had been recognized at once by the Trading Commission as a commercial prize of tremendous value, but early reports had warned of the danger of wildcat trading on the little, musty, jungle-like planet with its shaggy, three-eyed inhabitants—warned specifically against the confidence tactics so frequently used—but there was always somebody, Meyerhoff reflected sourly, who just didn't get the word. Zeckler puffed nervously on his cigarette, his narrow face a study in troubled concentration. "But I didn't do anything!" he exploded finally. "So I pulled an old con game. So what? Why should they get so excited? So I clipped a few thousand credits, pulled a little fast business." He shrugged eloquently, spreading his hands. "Everybody's doing it. They do it to each other without batting an eye. You should see these critters operate on each other. Why, my little scheme was peanuts by comparison." Meyerhoff pulled a pipe from his pocket, and began stuffing the bowl with infinite patience. "And precisely what sort of con game was it?" he asked quietly. Zeckler shrugged again. "The simplest, tiredest, moldiest old racket that ever made a quick nickel. Remember the old Terran gag about the Brooklyn Bridge? The same thing. Only these critters didn't want bridges. They wanted land—this gooey, slimy swamp they call 'farm land.' So I gave them what they wanted. I just sold them some land." Meyerhoff nodded fiercely. "You sure did. A hundred square kilos at a swipe. Only you sold the same hundred square kilos to a dozen different natives." Suddenly he threw back his hands and roared. "Of all the things you shouldn't have done—" "But what's a chunk of land?" Meyerhoff shook his head hopelessly. "If you hadn't been so greedy, you'd have found out what a chunk of land was to these natives before you started peddling it. You'd have found out other things about them, too. You'd have learned that in spite of all their bumbling and fussing and squabbling they're not so dull. You'd have found out that they're marsupials, and that two out of five of them get thrown out of their mother's pouch before they're old enough to survive. You'd have realized that they have to start fighting for individual rights almost as soon as they're born. Anything goes, as long as it benefits them as individuals." Meyerhoff grinned at the little man's horrified face. "Never heard of that, had you? And you've never heard of other things, too. You've probably never heard that there are just too many Altairians here for the food their planet can supply, and their diet is so finicky that they just can't live on anything that doesn't grow here. And consequently, land is the key factor in their economy, not money; nothing but land. To get land, it's every man for himself, and the loser starves, and their entire legal and monetary system revolves on that principle. They've built up the most confusing and impossible system of barter and trade imaginable, aimed at individual survival, with land as the value behind the credit. That explains the lying—of course they're liars, with an economy like that. They've completely missed the concept of truth. Pathological? You bet they're pathological! 
Only a fool would tell the truth when his life depended on his being a better liar than the next guy! Lying is the time-honored tradition, with their entire legal system built around it." Zeckler snorted. "But how could they possibly have a legal system? I mean, if they don't recognize the truth when it slaps them in the face?" Meyerhoff shrugged. "As we understand legal systems, I suppose they don't have one. They have only the haziest idea what truth represents, and they've shrugged off the idea as impossible and useless." He chuckled maliciously. "So you went out and found a chunk of ground in the uplands, and sold it to a dozen separate, self-centered, half-starved natives! Encroachment on private property is legal grounds for murder on this planet, and twelve of them descended on the same chunk of land at the same time, all armed with title-deeds." Meyerhoff sighed. "You've got twelve mad Altairians in your hair. You've got a mad planet in your hair. And in the meantime, Terra's most valuable uranium source in five centuries is threatening to cut off supply unless they see your blood splattered liberally all the way from here to the equator." Zeckler was visibly shaken. "Look," he said weakly, "so I wasn't so smart. What am I going to do? I mean, are you going to sit quietly by and let them butcher me? How could I defend myself in a legal setup like this ?" Meyerhoff smiled coolly. "You're going to get your sly little con-man brain to working, I think," he said softly. "By Interplanetary Rules, they have to give you a trial in Terran legal form—judge, jury, court procedure, all that folderol. They think it's a big joke—after all, what could a judicial oath mean to them?—but they agreed. Only thing is, they're going to hang you, if they die trying. So you'd better get those stunted little wits of yours clicking—and if you try to implicate me , even a little bit, I'll be out of there so fast you won't know what happened." With that Meyerhoff walked to the door. He jerked it inward sharply, and spilled two guards over on their faces. "Privacy," he grunted, and started back up the slippery corridor. It certainly looked like a courtroom, at any rate. In the front of the long, damp stone room was a bench, with a seat behind it, and a small straight chair to the right. To the left was a stand with twelve chairs—larger chairs, with a railing running along the front. The rest of the room was filled almost to the door with seats facing the bench. Zeckler followed the shaggy-haired guard into the room, nodding approvingly. "Not such a bad arrangement," he said. "They must have gotten the idea fast." Meyerhoff wiped the perspiration from his forehead, and shot the little con-man a stony glance. "At least you've got a courtroom, a judge, and a jury for this mess. Beyond that—" He shrugged eloquently. "I can't make any promises." In the back of the room a door burst open with a bang. Loud, harsh voices were heard as half a dozen of the huge Altairians attempted to push through the door at once. Zeckler clamped on the headset to his translator unit, and watched the hubbub in the anteroom with growing alarm. Finally the question of precedent seemed to be settled, and a group of the Altairians filed in, in order of stature, stalking across the room in flowing black robes, pug-nosed faces glowering with self-importance. They descended upon the jury box, grunting and scrapping with each other for the first-row seats, and the judge took his place with obvious satisfaction behind the heavy wooden bench. 
Finally, the prosecuting attorney appeared, flanked by two clerks, who took their places beside him. The prosecutor eyed Zeckler with cold malevolence, then turned and delivered a sly wink at the judge. In a moment the room was a hubbub as it filled with the huge, bumbling, bear-like creatures, jostling each other and fighting for seats, growling and complaining. Two small fights broke out in the rear, but were quickly subdued by the group of gendarmes guarding the entrance. Finally the judge glared down at Zeckler with all three eyes, and pounded the bench top with a wooden mallet until the roar of activity subsided. The jurymen wriggled uncomfortably in their seats, exchanging winks, and finally turned their attention to the front of the court. "We are reading the case of the people of Altair I," the judge's voice roared out, "against one Harry Zeckler—" he paused for a long, impressive moment—"Terran." The courtroom immediately burst into an angry growl, until the judge pounded the bench five or six times more. "This—creature—is hereby accused of the following crimes," the judge bellowed. "Conspiracy to overthrow the government of Altair I. Brutal murder of seventeen law-abiding citizens of the village of Karzan at the third hour before dawn in the second period after his arrival. Desecration of the Temple of our beloved Goddess Zermat, Queen of the Harvest. Conspiracy with the lesser gods to cause the unprecedented drought in the Dermatti section of our fair globe. Obscene exposure of his pouch-marks in a public square. Four separate and distinct charges of jail-break and bribery—" The judge pounded the bench for order—"Espionage with the accursed scum of Altair II in preparation for interplanetary invasion." The little con-man's jaw sagged lower and lower, the color draining from his face. He turned, wide-eyed, to Meyerhoff, then back to the judge. "The Chairman of the Jury," said the Judge succinctly, "will read the verdict." The little native in the front of the jury-box popped up like a puppet on a string. "Defendant found guilty on all counts," he said. "Defendant is guilty! The court will pronounce sentence—" " Now wait a minute! " Zeckler was on his feet, wild-eyed. "What kind of railroad job—" The judge blinked disappointedly at Paul Meyerhoff. "Not yet?" he asked, unhappily. "No." Meyerhoff's hands twitched nervously. "Not yet, Your Honor. Later, Your Honor. The trial comes first ." The judge looked as if his candy had been stolen. "But you said I should call for the verdict." "Later. You have to have the trial before you can have the verdict." The Altairian shrugged indifferently. "Now—later—" he muttered. "Have the prosecutor call his first witness," said Meyerhoff. Zeckler leaned over, his face ashen. "These charges," he whispered. "They're insane!" "Of course they are," Meyerhoff whispered back. "But what am I going to—" "Sit tight. Let them set things up." "But those lies . They're liars, the whole pack of them—" He broke off as the prosecutor roared a name. The shaggy brute who took the stand was wearing a bright purple hat which sat rakishly over one ear. He grinned the Altairian equivalent of a hungry grin at the prosecutor. Then he cleared his throat and started. "This Terran riffraff—" "The oath," muttered the judge. "We've got to have the oath." The prosecutor nodded, and four natives moved forward, carrying huge inscribed marble slabs to the front of the court. One by one the chunks were reverently piled in a heap at the witness's feet. 
The witness placed a huge, hairy paw on the cairn, and the prosecutor said, "Do you swear to tell the truth, the whole truth, and nothing but the truth, so help you—" he paused to squint at the paper in his hand, and finished on a puzzled note, "—Goddess?" The witness removed the paw from the rock pile long enough to scratch his ear. Then he replaced it, and replied, "Of course," in an injured tone. "Then tell this court what you have seen of the activities of this abominable wretch." The witness settled back into the chair, fixing one eye on Zeckler's face, another on the prosecutor, and closing the third as if in meditation. "I think it happened on the fourth night of the seventh crossing of Altair II (may the Goddess cast a drought upon it)—or was it the seventh night of the fourth crossing?—" he grinned apologetically at the judge—"when I was making my way back through town toward my blessed land-plot, minding my own business, Your Honor, after weeks of bargaining for the crop I was harvesting. Suddenly from the shadow of the building, this creature—" he waved a paw at Zeckler—"stopped me in my tracks with a vicious cry. He had a weapon I'd never seen before, and before I could find my voice he forced me back against the wall. I could see by the cruel glint in his eyes that there was no warmth, no sympathy in his heart, that I was—" "Objection!" Zeckler squealed plaintively, jumping to his feet. "This witness can't even remember what night he's talking about!" The judge looked startled. Then he pawed feverishly through his bundle of notes. "Overruled," he said abruptly. "Continue, please." The witness glowered at Zeckler. "As I was saying before this loutish interruption," he muttered, "I could see that I was face to face with the most desperate of criminal types, even for Terrans. Note the shape of his head, the flabbiness of his ears. I was petrified with fear. And then, helpless as I was, this two-legged abomination began to shower me with threats of evil to my blessed home, dark threats of poisoning my land unless I would tell him where he could find the resting place of our blessed Goddess—" "I never saw him before in my life," Zeckler moaned to Meyerhoff. "Listen to him! Why should I care where their Goddess—" Meyerhoff gave him a stony look. "The Goddess runs things around here. She makes it rain. If it doesn't rain, somebody's insulted her. It's very simple." "But how can I fight testimony like that?" "I doubt if you can fight it." "But they can't prove a word of it—" He looked at the jury, who were listening enraptured to the second witness on the stand. This one was testifying regarding the butcherous slaughter of eighteen (or was it twenty-three? Oh, yes, twenty-three) women and children in the suburban village of Karzan. The pogrom, it seemed, had been accomplished by an energy weapon which ate great, gaping holes in the sides of buildings. A third witness took the stand, continuing the drone as the room grew hotter and muggier. Zeckler grew paler and paler, his eyes turning glassy as the testimony piled up. "But it's not true ," he whispered to Meyerhoff. "Of course it isn't! Can't you understand? These people have no regard for truth. It's stupid, to them, silly, a mark of low intelligence. The only thing in the world they have any respect for is a liar bigger and more skillful than they are." Zeckler jerked around abruptly as he heard his name bellowed out. "Does the defendant have anything to say before the jury delivers the verdict?" 
"Do I have—" Zeckler was across the room in a flash, his pale cheeks suddenly taking on a feverish glow. He sat down gingerly on the witness chair, facing the judge, his eyes bright with fear and excitement. "Your—Your Honor, I—I have a statement to make which will have a most important bearing on this case. You must listen with the greatest care." He glanced quickly at Meyerhoff, and back to the judge. "Your Honor," he said in a hushed voice. "You are in gravest of danger. All of you. Your lives—your very land is at stake." The judge blinked, and shuffled through his notes hurriedly as a murmur arose in the court. "Our land?" "Your lives, your land, everything you hold dear," Zeckler said quickly, licking his lips nervously. "You must try to understand me—" he glanced apprehensively over his shoulder "now, because I may not live long enough to repeat what I am about to tell you—" The murmur quieted down, all ears straining in their headsets to hear his words. "These charges," he continued, "all of them—they're perfectly true. At least, they seem to be perfectly true. But in every instance, I was working with heart and soul, risking my life, for the welfare of your beautiful planet." There was a loud hiss from the back of the court. Zeckler frowned and rubbed his hands together. "It was my misfortune," he said, "to go to the wrong planet when I first came to Altair from my homeland on Terra. I—I landed on Altair II, a grave mistake, but as it turned out, a very fortunate error. Because in attempting to arrange trading in that frightful place, I made certain contacts." His voice trembled, and sank lower. "I learned the horrible thing which is about to happen to this planet, at the hands of those barbarians. The conspiracy is theirs, not mine. They have bribed your Goddess, flattered her and lied to her, coerced her all-powerful goodness to their own evil interests, preparing for the day when they could persuade her to cast your land into the fiery furnace of a ten-year-drought—" Somebody in the middle of the court burst out laughing. One by one the natives nudged one another, and booed, and guffawed, until the rising tide of racket drowned out Zeckler's words. "The defendant is obviously lying," roared the prosecutor over the pandemonium. "Any fool knows that the Goddess can't be bribed. How could she be a Goddess if she could?" Zeckler grew paler. "But—perhaps they were very clever—" "And how could they flatter her, when she knows, beyond doubt, that she is the most exquisitely radiant creature in all the Universe? And you dare to insult her, drag her name in the dirt." The hisses grew louder, more belligerent. Cries of "Butcher him!" and "Scald his bowels!" rose from the courtroom. The judge banged for silence, his eyes angry. "Unless the defendant wishes to take up more of our precious time with these ridiculous lies, the jury—" "Wait! Your Honor, I request a short recess before I present my final plea." "Recess?" "A few moments to collect my thoughts, to arrange my case." The judge settled back with a disgusted snarl. "Do I have to?" he asked Meyerhoff. Meyerhoff nodded. The judge shrugged, pointing over his shoulder to the anteroom. "You can go in there," he said. Somehow, Zeckler managed to stumble from the witness stand, amid riotous boos and hisses, and tottered into the anteroom. Zeckler puffed hungrily on a cigarette, and looked up at Meyerhoff with haunted eyes. "It—it doesn't look so good," he muttered. Meyerhoff's eyes were worried, too. 
For some reason, he felt a surge of pity and admiration for the haggard con-man. "It's worse than I'd anticipated," he admitted glumly. "That was a good try, but you just don't know enough about them and their Goddess." He sat down wearily. "I don't see what you can do. They want your blood, and they're going to have it. They just won't believe you, no matter how big a lie you tell." Zeckler sat in silence for a moment. "This lying business," he said finally, "exactly how does it work?" "The biggest, most convincing liar wins. It's as simple as that. It doesn't matter how outlandish a whopper you tell. Unless, of course, they've made up their minds that you just naturally aren't as big a liar as they are. And it looks like that's just what they've done. It wouldn't make any difference to them what you say—unless, somehow, you could make them believe it." Zeckler frowned. "And how do they regard the—the biggest liar? I mean, how do they feel toward him?" Meyerhoff shifted uneasily. "It's hard to say. It's been my experience that they respect him highly—maybe even fear him a little. After all, the most convincing liar always wins in any transaction, so he gets more land, more food, more power. Yes, I think the biggest liar could go where he pleased without any interference." Zeckler was on his feet, his eyes suddenly bright with excitement. "Wait a minute," he said tensely. "To tell them a lie that they'd have to believe—a lie they simply couldn't help but believe—" He turned on Meyerhoff, his hands trembling. "Do they think the way we do? I mean, with logic, cause and effect, examining evidence and drawing conclusions? Given certain evidence, would they have to draw the same conclusions that we have to draw?" Meyerhoff blinked. "Well—yes. Oh, yes, they're perfectly logical." Zeckler's eyes flashed, and a huge grin broke out on his sallow face. His thin body fairly shook. He started hopping up and down on one foot, staring idiotically into space. "If I could only think—" he muttered. "Somebody—somewhere—something I read." "Whatever are you talking about?" "It was a Greek, I think—" Meyerhoff stared at him. "Oh, come now. Have you gone off your rocker completely? You've got a problem on your hands, man." "No, no, I've got a problem in the bag!" Zeckler's cheeks flushed. "Let's go back in there—I think I've got an answer!" The courtroom quieted the moment they opened the door, and the judge banged the gavel for silence. As soon as Zeckler had taken his seat on the witness stand, the judge turned to the head juryman. "Now, then," he said with happy finality. "The jury—" "Hold on! Just one minute more." The judge stared down at Zeckler as if he were a bug on a rock. "Oh, yes. You had something else to say. Well, go ahead and say it." Zeckler looked sharply around the hushed room. "You want to convict me," he said softly, "in the worst sort of way. Isn't that right?" Eyes swung toward him. The judge broke into an evil grin. "That's right." "But you can't really convict me until you've considered carefully any statement I make in my own defense. Isn't that right?" The judge looked uncomfortable. "If you've got something to say, go ahead and say it." "I've got just one statement to make. Short and sweet. But you'd better listen to it, and think it out carefully before you decide that you really want to convict me." He paused, and glanced slyly at the judge. "You don't think much of those who tell the truth, it seems. Well, put this statement in your record, then." 
His voice was loud and clear in the still room. " All Earthmen are absolutely incapable of telling the truth. " Puzzled frowns appeared on the jury's faces. One or two exchanged startled glances, and the room was still as death. The judge stared at him, and then at Meyerhoff, then back. "But you"—he stammered. "You're"—He stopped in mid-sentence, his jaw sagging. One of the jurymen let out a little squeak, and fainted dead away. It took, all in all, about ten seconds for the statement to soak in. And then pandemonium broke loose in the courtroom. "Really," said Harry Zeckler loftily, "it was so obvious I'm amazed that it didn't occur to me first thing." He settled himself down comfortably in the control cabin of the Interplanetary Rocket and grinned at the outline of Altair IV looming larger in the view screen. Paul Meyerhoff stared stonily at the controls, his lips compressed angrily. "You might at least have told me what you were planning." "And take the chance of being overheard? Don't be silly. It had to come as a bombshell. I had to establish myself as a liar—the prize liar of them all, but I had to tell the sort of lie that they simply could not cope with. Something that would throw them into such utter confusion that they wouldn't dare convict me." He grinned impishly at Meyerhoff. "The paradox of Epimenides the Cretan. It really stopped them cold. They knew I was an Earthmen, which meant that my statement that Earthmen were liars was a lie, which meant that maybe I wasn't a liar, in which case—oh, it was tailor-made." "It sure was." Meyerhoff's voice was a snarl. "Well, it made me out a liar in a class they couldn't approach, didn't it?" Meyerhoff's face was purple with anger. "Oh, indeed it did! And it put all Earthmen in exactly the same class, too." "So what's honor among thieves? I got off, didn't I?" Meyerhoff turned on him fiercely. "Oh, you got off just fine. You scared the living daylights out of them. And in an eon of lying they never have run up against a short-circuit like that. You've also completely botched any hope of ever setting up a trading alliance with Altair I, and that includes uranium, too. Smart people don't gamble with loaded dice. You scared them so badly they don't want anything to do with us." Zeckler's grin broadened, and he leaned back luxuriously. "Ah, well. After all, the Trading Alliance was your outlook, wasn't it? What a pity!" He clucked his tongue sadly. "Me, I've got a fortune in credits sitting back at the consulate waiting for me—enough to keep me on silk for quite a while, I might say. I think I'll just take a nice, long vacation." Meyerhoff turned to him, and a twinkle of malignant glee appeared in his eyes. "Yes, I think you will. I'm quite sure of it, in fact. Won't cost you a cent, either." "Eh?" Meyerhoff grinned unpleasantly. He brushed an imaginary lint fleck from his lapel, and looked up at Zeckler slyly. "That—uh—jury trial. The Altairians weren't any too happy to oblige. They wanted to execute you outright. Thought a trial was awfully silly—until they got their money back, of course. Not too much—just three million credits." Zeckler went white. "But that money was in banking custody!" "Is that right? My goodness. You don't suppose they could have lost those papers, do you?" Meyerhoff grinned at the little con-man. "And incidentally, you're under arrest, you know." A choking sound came from Zeckler's throat. " Arrest! " "Oh, yes. Didn't I tell you? Conspiring to undermine the authority of the Terran Trading Commission. 
Serious charge, you know. Yes, I think we'll take a nice long vacation together, straight back to Terra. And there I think you'll face a jury trial." Zeckler spluttered. "There's no evidence—you've got nothing on me! What kind of a frame are you trying to pull?" "A lovely frame. Airtight. A frame from the bottom up, and you're right square in the middle. And this time—" Meyerhoff tapped a cigarette on his thumb with happy finality—"this time I don't think you'll get off." Transcriber's Note: This etext was produced from "Tiger by the Tail and Other Science Fiction Stories by Alan E. Nourse" and was first published in If Magazine January 1954. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
B. laissez faire
How does the author feel about American Beauty? A. It is moronic or insane or both. B. It is wittily written and gorgeously directed. C. It is lustrously hip and aware. D. It is an invigorating blast of counterculture righteousness.
A Good Year for the Roses? Early in American Beauty , Lester Burnham (Kevin Spacey), a weary reporter for a media magazine, masturbates in the shower while informing us in voice-over that we're witnessing the highlight of his day. He peers through tired eyes out the window at his manicured suburban tract-house lawn, where his wife, Carolyn (Annette Bening)--whose gardening clogs, he points out, are color-coordinated with the handles of her shears--snips roses (American beauties) and twitters about Miracle-Gro to a gay yuppie (Scott Bakula) on the other side of a white picket fence. "I have lost something," says Lester. "I'm not exactly sure what it is but I know I didn't always feel this ... sedated." Apparently, Lester doesn't realize that snipped roses are garden-variety symbols of castration, or he'd know what he has lost. But the makers of American Beauty are about to give Lester his roses back. At a high-school basketball game, Lester is transfixed by a blonde cheerleader named Angela (Mena Suvari), who is twirling alongside his daughter, Jane (Thora Birch). Ambient noise falls away, the crowd disappears, and there she is, Lester's angel, writhing in slow motion--just for him. She opens her jacket (she's naked underneath) and red rose petals drift out. Later, Lester envisions her on a bed of red petals, then immersed in a bath of red petals. Back in the roses for the first time in years, he's soon pumping iron, smoking pot, and telling off his frigid wife and faceless bosses, convinced that whatever he has lost he's getting back, baby. The movie is convinced, too--which is odd, since the fantasy of an underage cheerleader making a middle-aged man's wilted roses bloom is a tad ... primitive. But American Beauty doesn't feel primitive. It feels lustrously hip and aware, and a lot of critics are making big claims for it. The script, by Alan Ball, a playwright and former sitcom writer, carries an invigorating blast of counterculture righteousness, along with the kind of pithily vicious marital bickering that makes some viewers (especially male) say, "Yeah! Tell that bitch off!" More important, it has a vein of metaphysical yearning, which the director, Sam Mendes, mines brilliantly. A hotshot English theater director (his Cabaret revival is still on the boards in New York), Mendes gives the film a patina of New Age lyricism and layer upon layer of visual irony. The movie's surface is velvety and immaculate--until the action is abruptly viewed through the video camera of the teen-age voyeur next door (Wes Bentley), and the graininess of the video image (along with the plangent music) suggests how unstable the molecules that constitute our "reality" really are. Mendes can distend the real into the surreal with imperceptible puffs. Aided by his cinematographer, Conrad Hall, and editors, Tariq Anwar and Chris Greenbury, he creates an entrancing vision of the American nuclear family on the verge of a meltdown. American Beauty is so wittily written and gorgeously directed that you might think you're seeing something archetypal--maybe even the Great American Movie. But when you stop and smell the roses ... Well, that scent isn't Miracle-Gro.
The hairpin turns from farce to melodrama, from satire to bathos, are fresh and deftly navigated, but almost every one of the underlying attitudes is smug and easy: from the corporate flunky named "Brad" to the interchangeable gay neighbors (they're both called "Jim") to the brutally homophobic patriarch next door, an ex-Marine colonel (Chris Cooper) who has reduced his wife (the normally exuberant Allison Janney) to a catatonic mummy and his son, Ricky (Bentley), to a life of subterranean deception. (The colonel's idea of bliss is watching an old Ronald Reagan military picture on television: How's that for subtle?) Lester's wife, Carolyn, is even more stridently caricatured. A real-estate broker who fails to sell a big house (her only potential customers are blank-faced African-Americans, Indian-Americans, and surly lesbians), she wears a mask of perky efficiency and insists on listening to Muzak while she and her husband and daughter eat her "nutritious yet savory" dinners. It's amazing that Mendes and Ball get away with recycling so many stale and reactionary ideas under the all-purpose rubric of "black comedy." But it's also possible that those ideas have rarely been presented so seductively. Several months ago, Daniel Menaker wrote in Slate about a trend in contemporary film in which the protagonist attempts to break through our cultural and technological anesthetization into "the real." That's the theme here, too, and it's extraordinarily potent, at times even heartbreaking. The symbols, however, have been cunningly reversed. In movies like sex, lies, and videotape (1989), the protagonist has to put away the video camera to "get real"; in American Beauty , it's Ricky Fitts, the damaged stoner videomaker next door, who sees beauty where nonartists see only horror or nothingness. In the film's most self-consciously poetic set piece, Ricky shows Lester's dour daughter Jane--in whom he recognizes a kindred spirit--a video of a plastic bag fluttering up, down, and around on invisible currents of wind. Ricky speaks of glimpsing in the bag's trajectory an "entire life behind things"--a "benevolent force" that holds the universe together. The teen-ager, who likes to train his lenses on dead bodies of animals and people, sells wildly expensive marijuana to Lester and somehow passes on this notion of "beauty." By the end, Lester is mouthing the same sentiments and has acquired the same deadpan radiance. That must be some really good shit they're smoking. It's not the druggy philosophizing, however, that makes American Beauty an emotional workout. It's that the caricatures are grounded in sympathy instead of derision. Everyone on screen is in serious pain. The manipulative sexpot Angela, who taunts her friend Jane with the idea of seducing her dad, acts chiefly out of a terror of appearing ordinary. As the military martinet, Cooper goes against the grain, turning Col. Fitts into a sour bulldog whose capaciously baggy eyes are moist with sadness over his inability to reach out. (When he stands helplessly in the rain at the end, the deluge completes him.) The character of Carolyn is so shrill as to constitute a libel on the female sex, but there isn't a second when Bening sends the woman up. She doesn't transcend the part, she fills it to the brim, anatomizes it. You can't hate Carolyn because the woman is trying so hard--to appear confident, composed, in control.
When she fails to sell that house, she closes the shades and lets go with a naked wail--it's the sound of a vacuum crying to be filled--then furiously slaps herself while sputtering, "Shut up--you're weak--shut up. " Then she breathes, regains her go-get-'em poise, replaces her mask. Carolyn isn't a complicated dramatic construction, but Bening gives her a primal force. An actress who packs more psychological detail into a single gesture than others get into whole scenes, Bening was barreling down the road to greatness before she hit a speed bump called Warren. It's a joy to observe her--both here and in Neil Jordan's In Dreams (1999)--back at full throttle. American Beauty is Spacey's movie, though. He gives it--how weird to write this about Spacey, who made his name playing flamboyantly self-involved psychopaths--a heart. Early on, he lets his face and posture go slack and his eyes blurry. He mugs like crazy, telegraphing Lester's "loserness." But Spacey's genius is for mugging in character. He makes us believe that it's Lester who's caricaturing himself , and that bitter edge paves the way for the character's later, more comfortably Spacey-like scenes of insult and mockery. He even makes us take Lester's final, improbably rhapsodic moments straight. But do the filmmakers take them straight? If I read it correctly, the movie is saying that American society is unjust and absurd and loveless--full of people so afraid of seeming ordinary that they lose their capacity to see. It's saying that our only hope is to cultivate a kind of stoned aesthetic detachment whereby even a man with his brains blown out becomes an object of beauty and a signpost to a Higher Power. But to scrutinize a freshly dead body and not ask how it got that way--or if there's anyone nearby with a gun who might want to add to the body count--strikes me as either moronic or insane or both. The kind of detachment the movie is peddling isn't artistic, it isn't life--it's nihilism at its most fatuous. In the end, American Beauty is New Age Nihilism. Kevin Costner is 11 years older than he was as Crash Davis, the over-the-hill minor-league catcher in Bull Durham (1988), but he can still get away with playing a professional ballplayer. He moves and acts like a celebrity jock, and he can make his narcissistic self-containment look as if he's keeping something in reserve--to protect his "instrument," as it were. In For Love of the Game , he's a 40ish Detroit Tigers pitcher having his last hurrah: The team has been sold and the new owners don't necessarily want him back. For about half an hour, it's a great sports movie. Costner stands on the mound shaking off the signals of his longtime catcher (John C. Reilly); he forces himself to tune out the huge Yankee Stadium crowd (the background blurs before our eyes and the sound drops out); and he mutters darkly at a succession of batters, some old nemeses, some old buddies. He also thinks about his Manhattan-based ex-girlfriend (Kelly Preston), who tearfully told him that morning that things were absolutely over and she was moving to London. There's an appealing flashback to how they met (he stopped to fix her car while on the way to Yankee Stadium), then it's back to the game for more nail-biting at bats. But pretty soon the relationship flashbacks start coming thick and fast, and the balance of the movie shifts to whether Kevin can commit to Kelly and Kelly can commit to Kevin or whether his only commitment could ever be to the ball and the diamond and the game. 
Maybe it's because I'm a baseball nut that I hated to leave the mound. But maybe it's also because the relationship scenes are soft-focus, generic, and woozily drawn-out, whereas the stuff in the stadium is sharply edited and full of texture. The rhythms of the game feel right; the rhythms of the romance feel embarrassingly Harlequin, and the picture drags on for over two hours. I can't believe that the director, Sam Raimi ( The Evil Dead , 1983; last year's A Simple Plan ) thought that all those scenes of Costner and Preston staring into space while the piano plinks would end up in the final cut, but Raimi apparently gave up control of the final cut for the sake of making his first, real mainstream picture. He might as well have stuck his head over the plate and said, "Bean me."
B. It is wittily written and gorgeously directed.
What embedding algorithm and dimension size are used?
### Introduction The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has seen a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1 , BIBREF2 . Although the terms of service on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech at scale is still a largely unsolved problem in the NLP community. BIBREF3 , for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech. This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems. Despite the lack of consensus around what constitutes abusive speech, some definition of hate speech must be used to build automated systems to address it. We rely on BIBREF4 's definition of hate speech, specifically: “language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group.” In this paper, we present a neural classification system that uses minimal preprocessing to take advantage of a modified Simple Word Embeddings-based Model BIBREF5 to predict the occurrence of hate speech. Our classifier features: In the following sections, we discuss related work on hate speech classification, followed by a description of the datasets, methods and results of our study. ### Related Work Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media sites such as Twitter and Facebook. BIBREF3 applied a logistic regression model that used one- to four-character n-grams for classification of tweets labeled as racist, sexist or neither. BIBREF4 experimented in classification of hateful as well as offensive but not hateful tweets. They applied a logistic regression classifier with L2 regularization using word level n-grams and various part-of-speech, sentiment, and tweet-level metadata features. Additional projects have built upon the data sets created by Waseem and/or Davidson. For example, BIBREF6 used a neural network approach with two binary classifiers: one to predict the presence of abusive speech more generally, and another to discern the form of abusive speech. BIBREF7 , meanwhile, used pre-trained word2vec embeddings, which were then fed into a convolutional neural network (CNN) with max pooling to produce input vectors for a Gated Recurrent Unit (GRU) neural network. Other researchers have experimented with using metadata features from tweets. BIBREF8 built a classifier composed of two separate neural networks, one for the text and the other for metadata of the Twitter user, that were trained jointly in interleaved fashion. Both networks used in combination - and especially when trained using transfer learning - achieved higher F1 scores than either neural network classifier alone. In contrast to the methods described above, our approach relies on a simple word embedding (SWEM)-based architecture BIBREF5 , reducing the number of required parameters and length of training required, while still yielding improved performance and resilience across related classification tasks. Moreover, our network is able to learn flexible vector representations that demonstrate associations among words typically used in hateful communication.
Finally, while metadata-based augmentation is intriguing, here we sought to develop an approach that would function well even in cases where such additional data was missing due to the deletion, suspension, or deactivation of accounts. ### Data In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets. The data from BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets, reducing the dataset to 20,000 tweets. Tweets were labeled as “Harassing” or “Non-Harassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harassing” category BIBREF9 . ### Transformed Word Embedding Model (TWEM) Our training set consists of INLINEFORM0 examples INLINEFORM1 where the input INLINEFORM2 is a sequence of tokens INLINEFORM3 , and the output INLINEFORM4 is the numerical class for the hate speech class. Each input instance represents a Twitter post and thus, is not limited to a single sentence. We modify the SWEM-concat BIBREF5 architecture to allow better handling of infrequent and unknown words and to capture non-linear word combinations. ### Word Embeddings Each token in the input is mapped to an embedding. We used the 300 dimensional embeddings for all our experiments, so each word INLINEFORM0 is mapped to INLINEFORM1 . We denote the full embedded sequence as INLINEFORM2 . We then transform each word embedding by applying a 300 dimensional 1-layer Multi Layer Perceptron (MLP) INLINEFORM3 with a Rectified Linear Unit (ReLU) activation to form an updated embedding space INLINEFORM4 . We find this better handles unseen or rare tokens in our training data by projecting the pretrained embedding into a space that the encoder can understand. ### Pooling We make use of two pooling methods on the updated embedding space INLINEFORM0 . We employ a max pooling operation on INLINEFORM1 to capture salient word features from our input; this representation is denoted as INLINEFORM2 . This forces words that are highly indicative of hate speech to higher positive values within the updated embedding space. We also average the embeddings INLINEFORM3 to capture the overall meaning of the sentence, denoted as INLINEFORM4 , which provides a strong conditional factor in conjunction with the max pooling output. This also helps regularize gradient updates from the max pooling operation. ### Output We concatenate INLINEFORM0 and INLINEFORM1 to form a document representation INLINEFORM2 and feed the representation into a 50 node 2 layer MLP followed by ReLU Activation to allow for increased nonlinear representation learning. This representation forms the preterminal layer and is passed to a fully connected softmax layer whose output is the probability distribution over labels. ### Experimental Setup We tokenize the data using Spacy BIBREF10 . We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine-tune them for the task.
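The TWEM architecture just described (pre-trained embeddings, a 1-layer ReLU MLP projection, max and average pooling, and a small MLP classifier) is compact enough to sketch directly. The following is a minimal PyTorch illustration under our own naming and layer-size assumptions; it mirrors the description above rather than reproducing the authors' exact code, and omits padding masks and other practical details.

```python
import torch
import torch.nn as nn

class TWEMSketch(nn.Module):
    """Illustrative sketch of the Transformed Word Embedding Model (TWEM)."""
    def __init__(self, glove_weights, num_classes, hidden=50):
        super().__init__()
        # Pre-trained 300-d GloVe vectors (FloatTensor), fine-tuned during training.
        self.embed = nn.Embedding.from_pretrained(glove_weights, freeze=False)
        # 1-layer MLP with ReLU that re-projects each word embedding.
        self.transform = nn.Sequential(nn.Linear(300, 300), nn.ReLU())
        # 50-node, 2-layer MLP over the concatenated pooled representation;
        # placing the 0.1 dropout just before the output layer is our assumption.
        self.classifier = nn.Sequential(
            nn.Linear(600, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, token_ids):
        e = self.transform(self.embed(token_ids))   # (batch, seq, 300)
        max_pool = e.max(dim=1).values              # salient word features
        avg_pool = e.mean(dim=1)                    # overall sentence meaning
        doc = torch.cat([max_pool, avg_pool], dim=-1)
        return self.classifier(doc)                 # class logits
```

Paired with a cross-entropy loss (which applies the softmax internally), this reproduces the max-plus-average pooled document representation and the small output MLP described above.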
We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 ; the drop rate, batch size, and input length were set empirically through random hyperparameter search. All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams with TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets. (Dataset abbreviations: SR = Sexist/Racist BIBREF3 ; HATE = Hate BIBREF4 ; HAR = Harassment BIBREF9 .) ### Results and Discussion The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperforms previous results, as measured by weighted F1. Using the Approximate Randomization (AR) Test BIBREF14 , we perform significance testing using a 75/25 train and test split to compare against BIBREF3 and BIBREF4 , whose models we re-implemented. We found significance at the 0.001 level compared to both methods. We also include in-depth precision and recall results for all three datasets in the supplement. Our results indicate better performance than several more complex approaches, including BIBREF4 's best model (which used word and part-of-speech ngrams, sentiment, readability, text, and Twitter specific features), BIBREF6 (which used two-fold classification and a hybrid of word and character CNNs, using approximately twice the parameters we use, excluding the word embeddings) and even recent work by BIBREF8 (whose best model relies on GRUs, metadata including popularity, network reciprocity, and subscribed lists). On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters, while BIBREF8 requires 250k parameters. ### Error Analysis False negatives Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel. Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two. Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech: @LoveAndLonging ...how is that example "sexism"? @amberhasalamb ...in what way? Another case our classifier misses is problematic speech within a hashtag: :D @nkrause11 Dudes who go to culinary school: #why #findawife #notsexist :) This limitation could be potentially improved through the use of character convolutions or subword tokenization.
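For reference, the logistic regression baseline described above (character n-grams plus word unigrams with TF*IDF weighting) can be approximated in a few lines of scikit-learn. This is a hedged reconstruction rather than the authors' exact configuration; the n-gram ranges, regularization, and solver defaults below are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

# Character n-grams and word unigrams, each with TF*IDF weighting.
features = FeatureUnion([
    ("char", TfidfVectorizer(analyzer="char", ngram_range=(1, 4))),
    ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 1))),
])

baseline = Pipeline([
    ("tfidf", features),
    ("clf", LogisticRegression(max_iter=1000)),
])

# tweets: list of raw tweet strings; labels: integer class labels
# baseline.fit(tweets, labels)
# predictions = baseline.predict(test_tweets)
```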
False Positives In certain cases, our model seems to be learning user names instead of semantic content: RT @GrantLeeStone: @MT8_9 I don't even know what that is, or where it's from. Was that supposed to be funny? It wasn't. Since the bulk of our model's weights are in the embedding and embedding-transformation matrices, we cluster the SR vocabulary using these transformed embeddings to clarify our intuitions about the model ( TABREF14 ). We elaborate on our clustering approach in the supplement. We see that the model learned general semantic groupings of words associated with hate speech as well as specific idiosyncrasies related to the dataset itself (e.g. katieandnikki) ### Conclusion Despite minimal tuning of hyper-parameters, fewer weight parameters, minimal text preprocessing, and no additional metadata, the model performs remarkably well on standard hate speech datasets. Our clustering analysis adds interpretability enabling inspection of results. Our results indicate that the majority of recent deep learning models in hate speech may rely on word embeddings for the bulk of predictive power and the addition of sequence-based parameters provides minimal utility. Sequence based approaches are typically important when phenomena such as negation, co-reference, and context-dependent phrases are salient in the text and thus, we suspect these cases are in the minority for publicly available datasets. We think it would be valuable to study the occurrence of such linguistic phenomena in existing datasets and construct new datasets that have a better representation of subtle forms of hate speech. In the future, we plan to investigate character based representations, using character CNNs and highway layers BIBREF15 along with word embeddings to allow robust representations for sparse words such as hashtags. ### Supplemental Material We experimented with several different preprocessing variants and were surprised to find that reducing preprocessing improved the performance on the task for all of our tasks. We go through each preprocessing variant with an example and then describe our analysis to compare and evaluate each of them. ### Preprocessing Original text RT @AGuyNamed_Nick Now, I'm not sexist in any way shape or form but I think women are better at gift wrapping. It's the XX chromosome thing Tokenize (Basic Tokenize: Keeps case and words intact with limited sanitizing) RT @AGuyNamed_Nick Now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the XX chromosome thing Tokenize Lowercase: Lowercase the basic tokenize scheme rt @aguynamed_nick now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing Token Replace (Replaces entities and user names with a placeholder) ENT USER now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the xx chromosome thing Token Replace Lowercase: Lowercase the Token Replace Scheme ENT USER now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing We did analysis on a validation set across multiple datasets to find that the "Tokenize" scheme was by far the best. We believe that keeping the case intact provides useful information about the user. For example, saying something in all CAPS is a useful signal that the model can take advantage of.
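The preprocessing schemes compared above differ only in casing and in whether user mentions and named entities are replaced with placeholders. A small sketch of how such variants might be generated with spaCy follows; the placeholder strings, the simple @-mention heuristic, and the model name are our own illustrative assumptions rather than the authors' code.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def tokenize(text, lowercase=False, replace_tokens=False):
    """Return a whitespace-joined token string under one preprocessing scheme."""
    tokens = []
    for tok in nlp(text):
        t = tok.text
        if replace_tokens:
            if t.startswith("@"):
                t = "USER"          # user-name placeholder (heuristic)
            elif tok.ent_type_:     # named entities -> generic placeholder
                t = "ENT"
        tokens.append(t.lower() if lowercase else t)
    return " ".join(tokens)

tweet = "RT @AGuyNamed_Nick Now, I'm not sexist in any way shape or form"
print(tokenize(tweet))                        # "Tokenize" scheme
print(tokenize(tweet, lowercase=True))        # "Tokenize Lowercase"
print(tokenize(tweet, replace_tokens=True))   # "Token Replace"
```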
### Embedding Analysis Since our method was a simple word embedding based model, we explored the learned embedding space to analyze results. For this analysis, we only use the max pooling part of our architecture to help analyze the learned embedding space because it encourages salient words to increase their values to be selected. We projected the original pre-trained embeddings to the learned space using the time distributed MLP. We summed the embedding dimensions for each word and sorted by the sum in descending order to find the 1000 most salient word embeddings from our vocabulary. We then ran PCA BIBREF16 to reduce the dimensionality of the projected embeddings from 300 dimensions to 75 dimensions. This captured about 60% of the variance. Finally, we ran K means clustering for INLINEFORM0 clusters to organize the most salient embeddings in the projected space. The learned clusters from the SR vocabulary were very illuminating (see Table TABREF14 ); they gave insights to how hate speech surfaced in the datasets. One clear grouping we found is the misogynistic and pornographic group, which contained words like breasts, blonds, and skank. Two other clusters had references to geopolitical and religious issues in the Middle East and disparaging and resentful epithets that could be seen as having an intellectual tone. This hints towards the subtle pedagogic forms of hate speech that surface. We ran silhouette analysis BIBREF17 on the learned clusters to find that the clusters from the learned representations had a 35% higher silhouette coefficient using the projected embeddings compared to the clusters created from the original pre-trained embeddings. This reinforces the claim that our training process pushed hate-speech related words together, and words from other clusters further away, thus, structuring the embedding space effectively for detecting hate speech. Table 1: Dataset Characteristics Table 2: F1 Results Table 3: Projected Embedding Cluster Analysis from SR Dataset Table 5: SR Results Table 6: HATE Results Table 7: HAR Results Table 8: Projected Embedding Cluster Analysis from SR Dataset
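The embedding analysis above follows a standard pipeline: project the pre-trained vectors through the learned MLP, keep the most salient words, reduce with PCA, cluster with K-means, and compare silhouette scores. A minimal scikit-learn sketch is shown below; the function name and the cluster count k are illustrative assumptions, while the 1000-word and 75-dimension settings come from the text.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

def cluster_salient_embeddings(projected, k=20, n_top=1000, n_components=75):
    """projected: (vocab_size, 300) embeddings after the learned MLP projection."""
    # Rank words by the sum of their embedding dimensions (salience proxy).
    salience = projected.sum(axis=1)
    top = projected[np.argsort(-salience)[:n_top]]

    # Reduce to 75 dimensions (roughly 60% of variance per the paper).
    reduced = PCA(n_components=n_components).fit_transform(top)

    # Cluster and report cluster quality via the silhouette coefficient.
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(reduced)
    return labels, silhouette_score(reduced, labels)
```

Running the same function on the original (unprojected) embeddings gives the comparison silhouette score used to support the 35% improvement claim.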
300 Dimensional Glove
Of the options presented, which represents McIlroy's greatest flaw as a leader? A. He is too lenient B. He is hypocritical C. He is too strict D. He is untrustworthy
ALL DAY SEPTEMBER By ROGER KUYKENDALL Illustrated by van Dongen [Transcriber's Note: This etext was produced from Astounding Science Fiction June 1959. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Some men just haven't got good sense. They just can't seem to learn the most fundamental things. Like when there's no use trying—when it's time to give up because it's hopeless.... The meteor, a pebble, a little larger than a match head, traveled through space and time since it came into being. The light from the star that died when the meteor was created fell on Earth before the first lungfish ventured from the sea. In its last instant, the meteor fell on the Moon. It was impeded by Evans' tractor. It drilled a small, neat hole through the casing of the steam turbine, and volitized upon striking the blades. Portions of the turbine also volitized; idling at eight thousand RPM, it became unstable. The shaft tried to tie itself into a knot, and the blades, damaged and undamaged were spit through the casing. The turbine again reached a stable state, that is, stopped. Permanently stopped. It was two days to sunrise, where Evans stood. It was just before sunset on a spring evening in September in Sydney. The shadow line between day and night could be seen from the Moon to be drifting across Australia. Evans, who had no watch, thought of the time as a quarter after Australia. Evans was a prospector, and like all prospectors, a sort of jackknife geologist, selenologist, rather. His tractor and equipment cost two hundred and fifty thousand dollars. Fifty thousand was paid for. The rest was promissory notes and grubstake shares. When he was broke, which was usually, he used his tractor to haul uranium ore and metallic sodium from the mines at Potter's dike to Williamson Town, where the rockets landed. When he was flush, he would prospect for a couple of weeks. Once he followed a stampede to Yellow Crater, where he thought for a while that he had a fortune in chromium. The chromite petered out in a month and a half, and he was lucky to break even. Evans was about three hundred miles east of Williamson Town, the site of the first landing on the Moon. Evans was due back at Williamson Town at about sunset, that is, in about sixteen days. When he saw the wrecked turbine, he knew that he wouldn't make it. By careful rationing, he could probably stretch his food out to more than a month. His drinking water—kept separate from the water in the reactor—might conceivably last just as long. But his oxygen was too carefully measured; there was a four-day reserve. By diligent conservation, he might make it last an extra day. Four days reserve—plus one is five—plus sixteen days normal supply equals twenty-one days to live. In seventeen days he might be missed, but in seventeen days it would be dark again, and the search for him, if it ever began, could not begin for thirteen more days. At the earliest it would be eight days too late. "Well, man, 'tis a fine spot you're in now," he told himself. "Let's find out how bad it is indeed," he answered. He reached for the light switch and tried to turn it on. The switch was already in the "on" position. "Batteries must be dead," he told himself. "What batteries?" he asked. "There're no batteries in here, the power comes from the generator." "Why isn't the generator working, man?" he asked. He thought this one out carefully. The generator was not turned by the main turbine, but by a small reciprocating engine. 
The steam, however, came from the same boiler. And the boiler, of course, had emptied itself through the hole in the turbine. And the condenser, of course— "The condenser!" he shouted. He fumbled for a while, until he found a small flashlight. By the light of this, he reinspected the steam system, and found about three gallons of water frozen in the condenser. The condenser, like all condensers, was a device to convert steam into water, so that it could be reused in the boiler. This one had a tank and coils of tubing in the center of a curved reflector that was positioned to radiate the heat of the steam into the cold darkness of space. When the meteor pierced the turbine, the water in the condenser began to boil. This boiling lowered the temperature, and the condenser demonstrated its efficiency by quickly freezing the water in the tank. Evans sealed the turbine from the rest of the steam system by closing the shut-off valves. If there was any water in the boiler, it would operate the engine that drove the generator. The water would condense in the condenser, and with a little luck, melt the ice in there. Then, if the pump wasn't blocked by ice, it would return the water to the boiler. But there was no water in the boiler. Carefully he poured a cup of his drinking water into a pipe that led to the boiler, and resealed the pipe. He pulled on a knob marked "Nuclear Start/Safety Bypass." The water that he had poured into the boiler quickly turned into steam, and the steam turned the generator briefly. Evans watched the lights flicker and go out, and he guessed what the trouble was. "The water, man," he said, "there is not enough to melt the ice in the condenser." He opened the pipe again and poured nearly a half-gallon of water into the boiler. It was three days' supply of water, if it had been carefully used. It was one day's supply if used wastefully. It was ostentatious luxury for a man with a month's supply of water and twenty-one days to live. The generator started again, and the lights came on. They flickered as the boiler pressure began to fail, but the steam had melted some of the ice in the condenser, and the water pump began to function. "Well, man," he breathed, "there's a light to die by." The sun rose on Williamson Town at about the same time it rose on Evans. It was an incredibly brilliant disk in a black sky. The stars next to the sun shone as brightly as though there were no sun. They might have appeared to waver slightly, if they were behind outflung corona flares. If they did, no one noticed. No one looked toward the sun without dark filters. When Director McIlroy came into his office, he found it lighted by the rising sun. The light was a hot, brilliant white that seemed to pierce the darkest shadows of the room. He moved to the round window, screening his eyes from the light, and adjusted the polaroid shade to maximum density. The sun became an angry red brown, and the room was dark again. McIlroy decreased the density again until the room was comfortably lighted. The room felt stuffy, so he decided to leave the door to the inner office open. He felt a little guilty about this, because he had ordered that all doors in the survey building should remain closed except when someone was passing through them. This was to allow the air-conditioning system to function properly, and to prevent air loss in case of the highly improbable meteor damage. McIlroy thought that on the whole, he was disobeying his own orders no more flagrantly than anyone else in the survey. 
McIlroy had no illusions about his ability to lead men. Or rather, he did have one illusion; he thought that he was completely unfit as a leader. It was true that his strictest orders were disobeyed with cheerful contempt, but it was also true his mildest requests were complied with eagerly and smoothly. Everyone in the survey except McIlroy realized this, and even he accepted this without thinking about it. He had fallen into the habit of suggesting mildly anything that he wanted done, and writing orders he didn't particularly care to have obeyed. For example, because of an order of his stating that there would be no alcoholic beverages within the survey building, the entire survey was assured of a constant supply of home-made, but passably good liquor. Even McIlroy enjoyed the surreptitious drinking. "Good morning, Mr. McIlroy," said Mrs. Garth, his secretary. Morning to Mrs. Garth was simply the first four hours after waking. "Good morning indeed," answered McIlroy. Morning to him had no meaning at all, but he thought in the strictest sense that it would be morning on the Moon for another week. "Has the power crew set up the solar furnace?" he asked. The solar furnace was a rough parabola of mirrors used to focus the sun's heat on anything that it was desirable to heat. It was used mostly, from sun-up to sun-down, to supplement the nuclear power plant. "They went out about an hour ago," she answered, "I suppose that's what they were going to do." "Very good, what's first on the schedule?" "A Mr. Phelps to see you," she said. "How do you do, Mr. Phelps," McIlroy greeted him. "Good afternoon," Mr. Phelps replied. "I'm here representing the Merchants' Bank Association." "Fine," McIlroy said, "I suppose you're here to set up a bank." "That's right, I just got in from Muroc last night, and I've been going over the assets of the Survey Credit Association all morning." "I'll certainly be glad to get them off my hands," McIlroy said. "I hope they're in good order." "There doesn't seem to be any profit," Mr. Phelps said. "That's par for a nonprofit organization," said McIlroy. "But we're amateurs, and we're turning this operation over to professionals. I'm sure it will be to everyone's satisfaction." "I know this seems like a silly question. What day is this?" "Well," said McIlroy, "that's not so silly. I don't know either." "Mrs. Garth," he called, "what day is this?" "Why, September, I think," she answered. "I mean what day ." "I don't know, I'll call the observatory." There was a pause. "They say what day where?" she asked. "Greenwich, I guess, our official time is supposed to be Greenwich Mean Time." There was another pause. "They say it's September fourth, one thirty a.m. " "Well, there you are," laughed McIlroy, "it isn't that time doesn't mean anything here, it just doesn't mean the same thing." Mr. Phelps joined the laughter. "Bankers' hours don't mean much, at any rate," he said. The power crew was having trouble with the solar furnace. Three of the nine banks of mirrors would not respond to the electric controls, and one bank moved so jerkily that it could not be focused, and it threatened to tear several of the mirrors loose. "What happened here?" Spotty Cade, one of the electrical technicians asked his foreman, Cowalczk, over the intercommunications radio. "I've got about a hundred pinholes in the cables out here. It's no wonder they don't work." "Meteor shower," Cowalczk answered, "and that's not half of it. 
Walker says he's got a half dozen mirrors cracked or pitted, and Hoffman on bank three wants you to replace a servo motor. He says the bearing was hit." "When did it happen?" Cade wanted to know. "Must have been last night, at least two or three days ago. All of 'em too small for Radar to pick up, and not enough for Seismo to get a rumble." "Sounds pretty bad." "Could have been worse," said Cowalczk. "How's that?" "Wasn't anybody out in it." "Hey, Chuck," another technician, Lehman, broke in, "you could maybe get hurt that way." "I doubt it," Cowalczk answered, "most of these were pinhead size, and they wouldn't go through a suit." "It would take a pretty big one to damage a servo bearing," Cade commented. "That could hurt," Cowalczk admitted, "but there was only one of them." "You mean only one hit our gear," Lehman said. "How many missed?" Nobody answered. They could all see the Moon under their feet. Small craters overlapped and touched each other. There was—except in the places that men had obscured them with footprints—not a square foot that didn't contain a crater at least ten inches across, there was not a square inch without its half-inch crater. Nearly all of these had been made millions of years ago, but here and there, the rim of a crater covered part of a footprint, clear evidence that it was a recent one. After the sun rose, Evans returned to the lava cave that he had been exploring when the meteor hit. Inside, he lifted his filter visor, and found that the light reflected from the small ray that peered into the cave door lighted the cave adequately. He tapped loose some white crystals on the cave wall with his geologist's hammer, and put them into a collector's bag. "A few mineral specimens would give us something to think about, man. These crystals," he said, "look a little like zeolites, but that can't be, zeolites need water to form, and there's no water on the Moon." He chipped a number of other crystals loose and put them in bags. One of them he found in a dark crevice had a hexagonal shape that puzzled him. One at a time, back in the tractor, he took the crystals out of the bags and analyzed them as well as he could without using a flame which would waste oxygen. The ones that looked like zeolites were zeolites, all right, or something very much like it. One of the crystals that he thought was quartz turned out to be calcite, and one of the ones that he was sure could be nothing but calcite was actually potassium nitrate. "Well, now," he said, "it's probably the largest natural crystal of potassium nitrate that anyone has ever seen. Man, it's a full inch across." All of these needed water to form, and their existence on the Moon puzzled him for a while. Then he opened the bag that had contained the unusual hexagonal crystals, and the puzzle resolved itself. There was nothing in the bag but a few drops of water. What he had taken to be a type of rock was ice, frozen in a niche that had never been warmed by the sun. The sun rose to the meridian slowly. It was a week after sunrise. The stars shone coldly, and wheeled in their slow course with the sun. Only Earth remained in the same spot in the black sky. The shadow line crept around until Earth was nearly dark, and then the rim of light appeared on the opposite side. For a while Earth was a dark disk in a thin halo, and then the light came to be a crescent, and the line of dawn began to move around Earth. The continents drifted across the dark disk and into the crescent. 
The people on Earth saw the full moon set about the same time that the sun rose. Nickel Jones was the captain of a supply rocket. He made trips from and to the Moon about once a month, carrying supplies in and metal and ores out. At this time he was visiting with his old friend McIlroy. "I swear, Mac," said Jones, "another season like this, and I'm going back to mining." "I thought you were doing pretty well," said McIlroy, as he poured two drinks from a bottle of Scotch that Jones had brought him. "Oh, the money I like, but I will say that I'd have more if I didn't have to fight the union and the Lunar Trade Commission." McIlroy had heard all of this before. "How's that?" he asked politely. "You may think it's myself running the ship," Jones started on his tirade, "but it's not. The union it is that says who I can hire. The union it is that says how much I must pay, and how large a crew I need. And then the Commission ..." The word seemed to give Jones an unpleasant taste in his mouth, which he hurriedly rinsed with a sip of Scotch. "The Commission," he continued, making the word sound like an obscenity, "it is that tells me how much I can charge for freight." McIlroy noticed that his friend's glass was empty, and he quietly filled it again. "And then," continued Jones, "if I buy a cargo up here, the Commission it is that says what I'll sell it for. If I had my way, I'd charge only fifty cents a pound for freight instead of the dollar forty that the Commission insists on. That's from here to Earth, of course. There's no profit I could make by cutting rates the other way." "Why not?" asked McIlroy. He knew the answer, but he liked to listen to the slightly Welsh voice of Jones. "Near cost it is now at a dollar forty. But what sense is there in charging the same rate to go either way when it takes about a seventh of the fuel to get from here to Earth as it does to get from there to here?" "What good would it do to charge fifty cents a pound?" asked McIlroy. "The nickel, man, the tons of nickel worth a dollar and a half on Earth, and not worth mining here; the low-grade ores of uranium and vanadium, they need these things on Earth, but they can't get them as long as it isn't worth the carrying of them. And then, of course, there's the water we haven't got. We could afford to bring more water for more people, and set up more distilling plants if we had the money from the nickel. "Even though I say it who shouldn't, two-eighty a quart is too much to pay for water." Both men fell silent for a while. Then Jones spoke again: "Have you seen our friend Evans lately? The price of chromium has gone up, and I think he could ship some of his ore from Yellow Crater at a profit." "He's out prospecting again. I don't expect to see him until sun-down." "I'll likely see him then. I won't be loaded for another week and a half. Can't you get in touch with him by radio?" "He isn't carrying one. Most of the prospectors don't. They claim that a radio that won't carry beyond the horizon isn't any good, and one that will bounce messages from Earth takes up too much room." "Well, if I don't see him, you let him know about the chromium." "Anything to help another Welshman, is that the idea?" "Well, protection it is that a poor Welshman needs from all the English and Scots. Speaking of which—" "Oh, of course," McIlroy grinned as he refilled the glasses. " Slainte, McIlroy, bach. " [Health, McIlroy, man.] " Slainte mhor, bach. " [Great Health, man.] 
The sun was halfway to the horizon, and Earth was a crescent in the sky when Evans had quarried all the ice that was available in the cave. The thought grew on him as he worked that this couldn't be the only such cave in the area. There must be several more bubbles in the lava flow. Part of his reasoning proved correct. That is, he found that by chipping, he could locate small bubbles up to an inch in diameter, each one with its droplet of water. The average was about one per cent of the volume of each bubble filled with ice. A quarter of a mile from the tractor, Evans found a promising looking mound of lava. It was rounded on top, and it could easily be the dome of a bubble. Suddenly, Evans noticed that the gauge on the oxygen tank of his suit was reading dangerously near empty. He turned back to his tractor, moving as slowly as he felt safe in doing. Running would use up oxygen too fast. He was halfway there when the pressure warning light went on, and the signal sounded inside his helmet. He turned on his ten-minute reserve supply, and made it to the tractor with about five minutes left. The air purifying apparatus in the suit was not as efficient as the one in the tractor; it wasted oxygen. By using the suit so much, Evans had already shortened his life by several days. He resolved not to leave the tractor again, and reluctantly abandoned his plan to search for a large bubble. The sun stood at half its diameter above the horizon. The shadows of the mountains stretched out to touch the shadows of the other mountains. The dawning line of light covered half of Earth, and Earth turned beneath it. Cowalczk itched under his suit, and the sweat on his face prickled maddeningly because he couldn't reach it through his helmet. He pushed his forehead against the faceplate of his helmet and rubbed off some of the sweat. It didn't help much, and it left a blurred spot in his vision. That annoyed him. "Is everyone clear of the outlet?" he asked. "All clear," he heard Cade report through the intercom. "How come we have to blow the boilers now?" asked Lehman. "Because I say so," Cowalczk shouted, surprised at his outburst and ashamed of it. "Boiler scale," he continued, much calmer. "We've got to clean out the boilers once a year to make sure the tubes in the reactor don't clog up." He squinted through his dark visor at the reactor building, a gray concrete structure a quarter of a mile distant. "It would be pretty bad if they clogged up some night." "Pressure's ten and a half pounds," said Cade. "Right, let her go," said Cowalczk. Cade threw a switch. In the reactor building, a relay closed. A motor started turning, and the worm gear on the motor opened a valve on the boiler. A stream of muddy water gushed into a closed vat. When the vat was about half full, the water began to run nearly clear. An electric eye noted that fact and a light in front of Cade turned on. Cade threw the switch back the other way, and the relay in the reactor building opened. The motor turned and the gears started to close the valve. But a fragment of boiler scale held the valve open. "Valve's stuck," said Cade. "Open it and close it again," said Cowalczk. The sweat on his forehead started to run into his eyes. He banged his hand on his faceplate in an unconscious attempt to wipe it off. He cursed silently, and wiped it off on the inside of his helmet again. This time, two drops ran down the inside of his faceplate. "Still don't work," said Cade. "Keep trying," Cowalczk ordered. 
"Lehman, get a Geiger counter and come with me, we've got to fix this thing." Lehman and Cowalczk, who were already suited up started across to the reactor building. Cade, who was in the pressurized control room without a suit on, kept working the switch back and forth. There was light that indicated when the valve was open. It was on, and it stayed on, no matter what Cade did. "The vat pressure's too high," Cade said. "Let me know when it reaches six pounds," Cowalczk requested. "Because it'll probably blow at seven." The vat was a light plastic container used only to decant sludge out of the water. It neither needed nor had much strength. "Six now," said Cade. Cowalczk and Lehman stopped halfway to the reactor. The vat bulged and ruptured. A stream of mud gushed out and boiled dry on the face of the Moon. Cowalczk and Lehman rushed forward again. They could see the trickle of water from the discharge pipe. The motor turned the valve back and forth in response to Cade's signals. "What's going on out there?" demanded McIlroy on the intercom. "Scale stuck in the valve," Cowalczk answered. "Are the reactors off?" "Yes. Vat blew. Shut up! Let me work, Mac!" "Sorry," McIlroy said, realizing that this was no time for officials. "Let me know when it's fixed." "Geiger's off scale," Lehman said. "We're probably O.K. in these suits for an hour," Cowalczk answered. "Is there a manual shut-off?" "Not that I know of," Lehman answered. "What about it, Cade?" "I don't think so," Cade said. "I'll get on the blower and rouse out an engineer." "O.K., but keep working that switch." "I checked the line as far as it's safe," said Lehman. "No valve." "O.K.," Cowalczk said. "Listen, Cade, are the injectors still on?" "Yeah. There's still enough heat in these reactors to do some damage. I'll cut 'em in about fifteen minutes." "I've found the trouble," Lehman said. "The worm gear's loose on its shaft. It's slipping every time the valve closes. There's not enough power in it to crush the scale." "Right," Cowalczk said. "Cade, open the valve wide. Lehman, hand me that pipe wrench!" Cowalczk hit the shaft with the back of the pipe wrench, and it broke at the motor bearing. Cowalczk and Lehman fitted the pipe wrench to the gear on the valve, and turned it. "Is the light off?" Cowalczk asked. "No," Cade answered. "Water's stopped. Give us some pressure, we'll see if it holds." "Twenty pounds," Cade answered after a couple of minutes. "Take her up to ... no, wait, it's still leaking," Cowalczk said. "Hold it there, we'll open the valve again." "O.K.," said Cade. "An engineer here says there's no manual cutoff." "Like Hell," said Lehman. Cowalczk and Lehman opened the valve again. Water spurted out, and dwindled as they closed the valve. "What did you do?" asked Cade. "The light went out and came on again." "Check that circuit and see if it works," Cowalczk instructed. There was a pause. "It's O.K.," Cade said. Cowalczk and Lehman opened and closed the valve again. "Light is off now," Cade said. "Good," said Cowalczk, "take the pressure up all the way, and we'll see what happens." "Eight hundred pounds," Cade said, after a short wait. "Good enough," Cowalczk said. "Tell that engineer to hold up a while, he can fix this thing as soon as he gets parts. Come on, Lehman, let's get out of here." "Well, I'm glad that's over," said Cade. "You guys had me worried for a while." "Think we weren't worried?" Lehman asked. "And it's not over." "What?" Cade asked. "Oh, you mean the valve servo you two bashed up?" 
"No," said Lehman, "I mean the two thousand gallons of water that we lost." "Two thousand?" Cade asked. "We only had seven hundred gallons reserve. How come we can operate now?" "We picked up twelve hundred from the town sewage plant. What with using the solar furnace as a radiator, we can make do." "Oh, God, I suppose this means water rationing again." "You're probably right, at least until the next rocket lands in a couple of weeks." PROSPECTOR FEARED LOST ON MOON IPP Williamson Town, Moon, Sept. 21st. Scientific survey director McIlroy released a statement today that Howard Evans, a prospector is missing and presumed lost. Evans, who was apparently exploring the Moon in search of minerals was due two days ago, but it was presumed that he was merely temporarily delayed. Evans began his exploration on August 25th, and was known to be carrying several days reserve of oxygen and supplies. Director McIlroy has expressed a hope that Evans will be found before his oxygen runs out. Search parties have started from Williamson Town, but telescopic search from Palomar and the new satellite observatory are hindered by the fact that Evans is lost on the part of the Moon which is now dark. Little hope is held for radio contact with the missing man as it is believed he was carrying only short-range, intercommunications equipment. Nevertheless, receivers are ... Captain Nickel Jones was also expressing a hope: "Anyway, Mac," he was saying to McIlroy, "a Welshman knows when his luck's run out. And never a word did he say." "Like as not, you're right," McIlroy replied, "but if I know Evans, he'd never say a word about any forebodings." "Well, happen I might have a bit of Welsh second sight about me, and it tells me that Evans will be found." McIlroy chuckled for the first time in several days. "So that's the reason you didn't take off when you were scheduled," he said. "Well, yes," Jones answered. "I thought that it might happen that a rocket would be needed in the search." The light from Earth lighted the Moon as the Moon had never lighted Earth. The great blue globe of Earth, the only thing larger than the stars, wheeled silently in the sky. As it turned, the shadow of sunset crept across the face that could be seen from the Moon. From full Earth, as you might say, it moved toward last quarter. The rising sun shone into Director McIlroy's office. The hot light formed a circle on the wall opposite the window, and the light became more intense as the sun slowly pulled over the horizon. Mrs. Garth walked into the director's office, and saw the director sleeping with his head cradled in his arms on the desk. She walked softly to the window and adjusted the shade to darken the office. She stood looking at McIlroy for a moment, and when he moved slightly in his sleep, she walked softly out of the office. A few minutes later she was back with a cup of coffee. She placed it in front of the director, and shook his shoulder gently. "Wake up, Mr. McIlroy," she said, "you told me to wake you at sunrise, and there it is, and here's Mr. Phelps." McIlroy woke up slowly. He leaned back in his chair and stretched. His neck was stiff from sleeping in such an awkward position. "'Morning, Mr. Phelps," he said. "Good morning," Phelps answered, dropping tiredly into a chair. "Have some coffee, Mr. Phelps," said Mrs. Garth, handing him a cup. "Any news?" asked McIlroy. "About Evans?" Phelps shook his head slowly. "Palomar called in a few minutes back. Nothing to report and the sun was rising there. 
Australia will be in position pretty soon. Several observatories there. Then Capetown. There are lots of observatories in Europe, but most of them are clouded over. Anyway the satellite observatory will be in position by the time Europe is." McIlroy was fully awake. He glanced at Phelps and wondered how long it had been since he had slept last. More than that, McIlroy wondered why this banker, who had never met Evans, was losing so much sleep about finding him. It began to dawn on McIlroy that nearly the whole population of Williamson Town was involved, one way or another, in the search. The director turned to ask Phelps about this fact, but the banker was slumped in his chair, fast asleep with his coffee untouched. It was three hours later that McIlroy woke Phelps. "They've found the tractor," McIlroy said. "Good," Phelps mumbled, and then as comprehension came; "That's fine! That's just line! Is Evans—?" "Can't tell yet. They spotted the tractor from the satellite observatory. Captain Jones took off a few minutes ago, and he'll report back as soon as he lands. Hadn't you better get some sleep?" Evans was carrying a block of ice into the tractor when he saw the rocket coming in for a landing. He dropped the block and stood waiting. When the dust settled from around the tail of the rocket, he started to run forward. The air lock opened, and Evans recognized the vacuum suited figure of Nickel Jones. "Evans, man!" said Jones' voice in the intercom. "Alive you are!" "A Welshman takes a lot of killing," Evans answered. Later, in Evans' tractor, he was telling his story: "... And I don't know how long I sat there after I found the water." He looked at the Goldburgian device he had made out of wire and tubing. "Finally I built this thing. These caves were made of lava. They must have been formed by steam some time, because there's a floor of ice in all of 'em. "The idea didn't come all at once, it took a long time for me to remember that water is made out of oxygen and hydrogen. When I remembered that, of course, I remembered that it can be separated with electricity. So I built this thing. "It runs an electric current through water, lets the oxygen loose in the room, and pipes the hydrogen outside. It doesn't work automatically, of course, so I run it about an hour a day. My oxygen level gauge shows how long." "You're a genius, man!" Jones exclaimed. "No," Evans answered, "a Welshman, nothing more." "Well, then," said Jones, "are you ready to start back?" "Back?" "Well, it was to rescue you that I came." "I don't need rescuing, man," Evans said. Jones stared at him blankly. "You might let me have some food," Evans continued. "I'm getting short of that. And you might have someone send out a mechanic with parts to fix my tractor. Then maybe you'll let me use your radio to file my claim." "Claim?" "Sure, man, I've thousands of tons of water here. It's the richest mine on the Moon!" THE END
A. He is too lenient
What hyperparameters have been tuned?
### Introduction Through this CS224N Pre-trained Contextual Embeddings (PCE) project, we tackle the question answering problem, which is one of the most popular problems in NLP and has been brought to the forefront by datasets such as SQUAD 2.0. This problem's popularity stems from both the challenge it presents and the recent successes in approaching human-level performance. As most, if not all, of the problems humans solve every day can be posed as questions, creating a deep learning based solution that has access to the entire internet is a critical milestone for NLP. Through our project, our group tested the limits of applying attention in BERT BIBREF0 to improve the network's performance on the SQUAD2.0 dataset BIBREF1. BERT applies attention to the concatenation of the query and context vectors and thus attends to these vectors in a global fashion. We propose BERTQA BIBREF2, which adds Context-to-Query (C2Q) and Query-to-Context (Q2C) attention in addition to localized feature extraction via 1D convolutions. We implemented the additions ourselves, while the Pytorch baseline BERT code was obtained from BIBREF3. The SQUAD2.0 answers span from a length of zero to multiple words, and this additional attention provides hierarchical information that allows the network to better learn to detect answer spans of varying sizes. We applied the empirical findings from this part of our project to the large BERT model, which has twice as many layers as the base BERT model. We also augmented the SQUAD2.0 dataset with additional backtranslated examples. This augmented dataset will be publicly available on our github BIBREF4 on the completion of this course. After performing hyperparameter tuning, we ensembled our two best networks to get F1 and EM scores of 82.317 and 79.442 respectively. The experiments took around 300 GPU hours to train. ### Related Work The SQUAD2.0 creators proposed this dataset as a means for networks to actually understand the text they are being interrogated about rather than simply acting as extractive parsers. Many networks stepped up to the challenge, including BERT, BIDAF, and QANET. BERT is a fully feed-forward network that is based on the transformer architecture BIBREF5. The base BERT model has 12 transformer encoder layers that terminate in an interchangeable final layer which can be finetuned to the specific task. We chose this network as our baseline because of its use of contextual embeddings and global attention, and because of the speed advantage derived from an RNN-free architecture. We derived inspiration for our modifications from the BIDAF and QANET models. BIDAF is an LSTM-based network that uses character, word, and contextual embeddings which are fed through Context-to-Query (C2Q) and Query-to-Context (Q2C) layers. The final logits are derived from separate Start and End output layers, as opposed to BERT, which produces these logits together. Our C2Q/Q2C addition to BERT and the Dense Layer/LSTM based separate final Start and End logit prediction layer were inspired by this paper. We also referred to the QANET model, another fully feed-forward network, which emphasizes the use of convolutions to capture the local structure of text. Based on this paper, we created a convolutional layer within the C2Q/Q2C architecture to add localized information to BERT's global attention and the C2Q/Q2C coattention.
In addition to referencing these papers that helped us build a successful model, we also explored many other papers which either didn't work with our transformer based model or simply didn't work in combination with our additions to BERT. The three main papers from which we tried to gain ideas are U-Net: Machine Reading Comprehension with Unanswerable Questions BIBREF6, Attention-over-Attention Neural Networks for Reading Comprehension BIBREF7, and FlowQA: Grasping Flow in History for Conversational Machine Comprehension BIBREF8. We tried implementing the multitask learning methodology presented in U-Net by passing the [CLS] token through a series of convolutional layers to create a probability of whether the question has an answer. We combined this prediction with the prediction of Start and End logits by combining the logits' crossentropy loss and the [CLS] binary crossentropy loss. Unfortunately, this additional loss seemed to be hindering the network's learning ability. We conjecture that this type of multitask learning would benefit from full training instead of the finetuning we were restricted to doing because of resources and time. We looked to Attention-over-Attention as a source of additional ways of injecting attention into our network. Attention-over-Attention has a dot-product based attention mechanism that attends to attention vectors instead of embedding vectors. We believe this method did not help in our case because BERT works with the Context and Query as part of the same vector while the Attention-over-Attention model requires completely uncoupled Context and Query vectors. As a side note, we do separate the Context and Query vector derived from BERT before the coattention layers of our model, but these layers are not negatively affected by the fact that these separated vectors contain 'mixed' information between the Context and Query. Finally, we explored the FlowQA paper which proposed combining embeddings from multiple layers as an input to the final prediction layer. We implemented this idea by combining embeddings from multiple BERT layers as an input to our final prediction layer. This final layer was simply an additional transformer encoder and we think that the encoder does not have the LSTM's ability of being able to aggregate information from multiple sources. ### Methods We first focused on directed coattention via context to query and query to context attention as discussed in BIDAF BIBREF9. We then implemented localized feature extraction by 1D convolutions to add local information to coattention based on the QANET architecture BIBREF10. Subsequently, we experimented with different types of skip connections to inject BERT embedding information back into our modified network. We then applied what we learned using the base BERT model to the large BERT model. Finally, we performed hyperparameter tuning by adjusting the number of coattention blocks, the batch size, and the number of epochs trained and ensembled our three best networks. Each part of the project is discussed further in the subsections below. ### Methods ::: BERTQA - Directed Coattention The base BERT network, the baseline for this project, is built with 12 Transformer encoder blocks. These encoder blocks contain multi-head attention and a feed forward network. Each head of the multi-head attention attends to the concatenation of the context and query input and thus forms a global attention output. The output of each Transformer encoder is fed in to the next layer, creating an attention hierarchy. 
The benefit of this construction is that the model has access to the entire query and context at each level, allowing both embeddings to learn from each other and removing the long-term memory bottleneck faced by RNN-based models. BERTQA uses directed coattention between the query and context, as opposed to attending to their concatenation (Figure FIGREF2). Our architecture consists of a set of 7 directed coattention blocks that are inserted between the BERT embeddings and the final linear layer before loss calculation. The BERT embeddings are masked to produce separate query and context embedding vectors (Equations DISPLAY_FORM3, DISPLAY_FORM4), where E is the contextualized embedding matrix derived from BERT, m is the mask, and c and q denote the context and query respectively. $E_q$ and $E_c$ are then projected through linear layers to obtain key, value, and query vectors (Equation DISPLAY_FORM5), where Q, K, and V are the query, key, and value vectors. The Q, K, and V vectors are used in scaled dot-product attention (Equation DISPLAY_FORM6) to create the separate Context-to-Query (C2Q) and Query-to-Context (Q2C) attention vectors, where y is q and z is c for Q2C, and y is c and z is q for C2Q. The C2Q attention vector is summed with the query input and the Q2C attention vector is summed with the context input via a skip connection. Each sum vector is then pushed through a fully connected block and added back to the output of that block via another skip connection. Each sum is followed by a layer-wise normalization. The two resulting 3D C2Q and Q2C vectors are concatenated along the third (embedding) dimension and combined by two 1D convolutions to create the final 3D vector representing the combination of the C2Q and Q2C attention. We use two convolution layers here so that the concatenated dimension is reduced gradually and less information is lost. This vector then goes into a final attention head to perform separate self-attention pre-processing for the Start logit and End logit prediction layers. The Start logit is generated by a linear layer, and the End logit is generated by the output of an LSTM which takes the concatenation of the start span and end span embeddings as its input. We used the BERT architecture code written in Pytorch from the HuggingFace github BIBREF3. We wrote our own code for all of the subsequent architecture. ### Methods ::: Localized Feature Extraction To refine the focus of the attention further, we experimented with convolutional feature extraction to add localized information to the coattention output. We added four convolutional layers within the coattention architecture (Figure FIGREF8). The inputs to these layers were the BERT embeddings, and the outputs were added to the outputs of the multi-head attention layers in the coattention architecture and then layer-wise normalized. This combination of coattention and local information provides a hierarchical understanding of the question and context. By itself, BERT provides information about the question and context as a unit, while the coattention extracts information from the question and context relative to each other. The convolutions extract local features within the question and context to add localized information to the attention and embedding meanings.
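To make the data flow concrete, the sketch below gives one possible PyTorch reading of a single directed coattention block. It is an illustration rather than the released implementation: the hidden size, head count, layer names, and the exact query/key roles in the C2Q and Q2C attention calls are assumptions, and the block expects the masked query and context embeddings to share the same sequence length, since both are carved out of the same BERT output.

```python
import torch
import torch.nn as nn

class DirectedCoattentionBlock(nn.Module):
    def __init__(self, hidden=768, heads=8):
        super().__init__()
        self.c2q_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.q2c_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.ffn_q = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.ffn_c = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.norm_q1, self.norm_q2 = nn.LayerNorm(hidden), nn.LayerNorm(hidden)
        self.norm_c1, self.norm_c2 = nn.LayerNorm(hidden), nn.LayerNorm(hidden)
        # Two 1D convolutions reduce the concatenated embedding dimension gradually.
        self.conv = nn.Sequential(
            nn.Conv1d(2 * hidden, hidden + hidden // 2, kernel_size=1), nn.ReLU(),
            nn.Conv1d(hidden + hidden // 2, hidden, kernel_size=1))

    def forward(self, e_q, e_c):
        # e_q, e_c: (batch, seq_len, hidden) masked query/context embeddings from BERT.
        c2q, _ = self.c2q_attn(e_c, e_q, e_q)   # context attends over the query (C2Q)
        q2c, _ = self.q2c_attn(e_q, e_c, e_c)   # query attends over the context (Q2C)
        q = self.norm_q1(e_q + c2q)             # C2Q attention summed with the query input
        q = self.norm_q2(q + self.ffn_q(q))     # fully connected block plus skip connection
        c = self.norm_c1(e_c + q2c)             # Q2C attention summed with the context input
        c = self.norm_c2(c + self.ffn_c(c))
        merged = torch.cat([q, c], dim=-1)      # concatenate along the embedding dimension
        return self.conv(merged.transpose(1, 2)).transpose(1, 2)
```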
After adding the separate start and end logit logic, an ablation study in which we ran the network without the convolutional layers showed that the localized feature extraction no longer improved the network's learning. We speculate that the convolutions prevented improvement beyond a certain F1 score because they are lossy compressors, and the information lost by the convolutions might be essential to downstream learning. ### Methods ::: Skip Connections As shown in Figure FIGREF2, we have a skip connection from the BERT embedding layer combined with the convolved directed co-attention output (C2Q and Q2C). We experimented with 3 skip connection configurations: a Simple ResNet-inspired Skip, a Self-Attention Transformer Skip, and a Highway Network. Of these, the Self-Attention Transformer based skip worked best initially. However, when we combined this skip connection with our logit prediction logic, the network was no longer able to learn as well. The Simple ResNet-inspired skip BIBREF11 connection solved this issue. It seems that the transformer skip connection, followed by the additional transformer encoder blocks that form the beginning of the logit prediction logic, processed the BERT embeddings too much and thus lost the benefit of the skip connection. Therefore, we decided to use a Simple ResNet-inspired skip alongside the self attention heads for logit prediction. This allows the directed co-attention layers to learn distinct information coming from BERT embeddings via the skip and allows for efficient backpropagation to the BERT layers. ### Methods ::: Data Augmentation - SQuAD 2.Q Inspired by the work presented in BIBREF12, where the authors present a way of generating new questions out of context, and after observing the patterns in SQuAD 2.0, we realized there is a lot of syntactic and grammatical variance in the questions written by crowd workers. To help our network generalize better to these variations, we decided to augment the dataset by paraphrasing the questions in the SQuAD training set. We applied backtranslation using the Google Cloud Translation (NMT) API BIBREF13 to translate each question from English to French and then back to English, essentially two translations per question (Figure FIGREF11); a code sketch of this step is given below. We call our augmented dataset SQUAD 2.Q and make three different versions (35%, 50%, and 100% augmentation), alongside the code to generate them, publicly available on our github BIBREF4. ### Methods ::: Hyperparameter Tuning Hyperparameter tuning was an ongoing process throughout our experiments. The following hyperparameters were tweaked and tuned on BERT Base: Number of Directed co-Attention layers - We tried various numbers of layers and found that N=7 co-attention layers gave optimal performance while still allowing the model to fit on 2 GPUs (a 3-point F1 improvement by itself). Max Sequence length - After initial experiments with the default sequence length of 384 (context + query tokens), we switched to a sequence length of 512. This gave a 0.6-point F1 improvement for our model. Batch Size - Default: 12. We had to use a batch size of 6 for all our experiments due to resource constraints and out-of-memory issues on the GPU at any larger batch size.
Number of epochs - Default: 2. On increasing the number of epochs we saw a significant degradation in performance (-3 F1 score). We attribute this to the model starting to overfit to the training data with high variance; since the batch size is smaller, the gradient updates can be noisy, preventing it from converging optimally. Learning Rate - Default: 3e-5. We wrote a script to find the optimal learning rate using grid search and found the optimal learning rates for SQuAD 2.0 and SQuAD 2.Q respectively for a batch size of 6. ### Methods ::: BERT Large and Ensembling We applied what we learned from the previous five subsections to the large BERT model, which has twice as many layers as the base model. In order to fit this model on our GPU and still use 7 of our coattention layers, we were limited to two examples on the GPU at a time. However, we also found that BERT large requires a larger batch size to achieve good performance. As such, we kept the batch size at 6, as with the base model, and used a gradient accumulation of 3 so that only two examples were on the GPU at a time. Additionally, the large model is very sensitive to the learning rate, and the rate of 3e-5 which we used with the smaller model no longer worked. We ran the model on a subset of the data with various learning rates and found that 1.1e-5 to 1.5e-5 works best for the large model, depending on the dataset used (SQuAD 2.0 or SQUAD 2.Q). After experimenting with multiple combinations of the ideas we described above, we ensembled our three best networks to create our final predictions. The configurations of our three best networks are described in Table TABREF19. We constructed the ensembled predictions by choosing the answer from the network that had the highest probability and choosing no answer if any of the networks predicted no answer. ### Results and Analysis Table TABREF20 reports the F1 and EM scores obtained for the experiments on the base model. The first column reports the base BERT baseline scores, while the second reports the results for the C2Q/Q2C attention addition. The two skip columns report scores for the skip connection connecting the BERT embedding layer to the coattention output (Simple Skip) and the scores for the same skip connection containing a Transformer block (Transformer Skip). The final column presents the result of the localized feature extraction added inside the C2Q/Q2C architecture (Inside Conv - Figure FIGREF8). The results presented above verify our hypothesis that adding layers of directed attention to BERT improves its performance. The C2Q/Q2C network produced a significant improvement in the No Answer F1 score while causing a symmetric drop in the Has Answer F1 score. The C2Q/Q2C network attends to the context relative to the query and vice versa, instead of to their concatenation as a whole. This method of attention provides more information regarding whether there is an answer to the question in the context than the original BERT attention. The skip connections improved the scores further by adding the BERT embeddings back into the coattention vectors, providing information that may have been lost by the C2Q/Q2C network, in addition to providing a convenient path for backpropagation to the BERT embedding layers. The skip connection containing the transformer provides minimal gains while adding significant overhead to runtime. Therefore, we built the final convolutional experiments on the Simple Skip architecture.
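Before moving on to the large-model results, two pieces of the training setup described in the Methods subsections above lend themselves to short sketches. First, the SQuAD 2.Q backtranslation: `translate_fn` below is a hypothetical wrapper around whatever translation service is available (the authors used the Google Cloud Translation API); the function names and sampling scheme are illustrative assumptions, not their released code.

```python
import random

def backtranslate(question, translate_fn, pivot="fr"):
    """English -> French -> English paraphrase of a single question."""
    pivot_text = translate_fn(question, source="en", target=pivot)
    return translate_fn(pivot_text, source=pivot, target="en")

def augment_squad(examples, translate_fn, fraction=0.35, seed=0):
    """Paraphrase a fraction of the training questions (e.g., 35% for SQuAD 2.Q)
    and append the new (question, context, answer) triples to the original data."""
    rng = random.Random(seed)
    sampled = rng.sample(examples, int(fraction * len(examples)))
    augmented = []
    for ex in sampled:
        new_ex = dict(ex)
        new_ex["question"] = backtranslate(ex["question"], translate_fn)
        augmented.append(new_ex)
    return examples + augmented
```

Second, the gradient accumulation used to fit BERT Large: the sketch assumes a HuggingFace-style model whose forward pass returns an object with a `.loss` attribute and a loader that yields micro-batches of two examples, so that three accumulated steps emulate the batch size of 6.

```python
def train_epoch(model, loader, optimizer, accum_steps=3):
    """Micro-batches of 2 examples with gradients accumulated over 3 steps,
    emulating the batch size of 6 used with the base model."""
    model.train()
    optimizer.zero_grad()
    for step, batch in enumerate(loader):
        loss = model(**batch).loss / accum_steps   # scale so accumulated grads match a full batch
        loss.backward()
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```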
The localized feature extraction within the coattention network produced the best results in the base model, but prevented an improvement in our modified BERT large model. Table TABREF21 shows the F1 and EM scores obtained for the experiments on the large model. The models labeled 1, 2, and 3 are described in detail in Section 3.6. Each of the models built on BERT large used our augmented dataset in addition to the coattention architecture, simple skip connection, and separate start and end logit logic. The Model 1 results show that a moderately augmented (35%) data set helps the training, since both the unaugmented and highly augmented (50%) models did not perform as well. It seems that adding too much augmented data reduces the F1 because the augmented data is noisy relative to the original data. The performance difference between Models 1 and 2 supports the use of the LSTM in creating the End logit predictions. The LSTM successfully combines the information from the Start logit and the End embeddings to provide a good input to the End logit linear layer. The ensemble model performed the best by far, due to a significant increase in the no answer F1, which can be attributed to the ensembling method being biased towards models that predict no answer. We investigated the attention distributions produced by our proposed model by modifying the open source code from BertViz BIBREF14. For the case where the question has an answer in the context (Figure FIGREF22), the attention heads produce activation around the answer phrase "in the 10th and 11th centuries". In the case where there is no answer in the context, the attention heads produce considerable activation on the [SEP] word-piece, which is outside the context span. As seen in Figure FIGREF25, we conducted an error analysis over different question types. Note that questions that did not fit into the 7 bins were classified as "Other". An example of a question in the "Other" category would be an "Is it?" question, which is a minority set in SQUAD 2.0. Over the baseline, our model presents an overall improvement across the different types of questions in the SQuAD 2.0 dev set. In the case of "Which" questions, our model goes wrong 69 times whereas the baseline model goes wrong 64 times, a very small numeric difference. However, for the "What" questions the baseline model produces incorrect outputs for 776 examples while our model produces 30 fewer incorrect outputs. The reason for this lapse appears to be related to data augmentation, where we observed that "Which" was often backtranslated as "What" and vice versa. Thus, the questions in these two classes are mixed, and a completely accurate analysis of improvements in these classes is not possible. Figure FIGREF26 shows an example cropped context and question that our ensemble model answers correctly while the BERT large model answers incorrectly. It seems that the BERT large model combined the words spirit and Christian to answer this question, even though the word spirit belongs to martial and the word Christian belongs to piety. Our model was able to keep the paired words together and realize that the question has no answer. We believe that our model was able to get the correct answer because of the coattention, which is able to keep the words paired together correctly. Overall, our model has shown marked qualitative and quantitative improvement over the base and large BERT models.
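For concreteness, the ensembling rule described in the previous section (take the most probable answer across models, but output no answer if any model abstains) can be written in a few lines. This is an illustrative sketch rather than the actual evaluation script; it assumes each model's predictions map a question id to an (answer_text, probability) pair, with an empty string meaning no answer.

```python
def ensemble(per_model_preds):
    """Combine predictions from several models (a list of dicts: qid -> (text, prob))."""
    final = {}
    for qid in per_model_preds[0]:
        candidates = [preds[qid] for preds in per_model_preds]
        if any(text == "" for text, _ in candidates):
            final[qid] = ""                                       # any 'no answer' wins
        else:
            final[qid] = max(candidates, key=lambda c: c[1])[0]   # highest-probability answer
    return final
```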
Our SQUAD 2.Q dataset helps improve performance by mimicking the natural variance in questions present in the SQUAD 2.0 dataset. BertQA produces a significant improvement in the No Answer F1 by being able to maintain associations between words via coattention, as seen in Figure FIGREF26, and by ensembling our three best models. ### Conclusion We present a novel architectural scheme that uses transformers to help the network learn directed co-attention, which improves performance over the BERT baseline. We experimented with several architectural modifications and presented an ablation study. We present SQuAD 2.Q, an augmented dataset developed using NMT backtranslation, which helps our model generalize better over the syntactic and grammatical variance of human writing. Our ensemble model gives a 3.5-point improvement over the BERT Large dev F1. We learned a lot about neural architectural techniques through experimenting with various model configurations. We also learned about how different model components do or don't work together, and that some architectural choices, like convolutional layers that work so well in computer vision, do not necessarily work as well in NLP. We would like to improve the quality of data augmentation to limit noise in the dataset and further extend this work to context augmentation as well. Apart from that, we would also like to try recent architectures like Transformer-XL BIBREF15, which has the potential to offer additional improvement on HasAns F1 by remembering long-term dependencies, and evaluate how it scales with our model as a next step. Given sufficient compute resources, we would also like to pre-train our C2Q and Q2C layers, similar to BERT pre-training, to learn deeper language semantics and then fine-tune the model on the SQuAD dataset for the task of Question Answering. We would like to thank the CS224n Team for all the support throughout the course and also thank the folks at Azure for providing us with Cloud credits. Figure 1: Proposed C2Q and Q2C directed coattention architecture Figure 2: Convolutional Layers for Local Attention (in channels, out channels, kernel size) Figure 3: Back Translation to augment the SQuAD dataset Table 1: Model Configurations; BS = Batch Size, GA = Gradient Accum., LR = Learning Rate Table 2: Performance results for experiments relative to BERT base Table 3: Performance results for experiments relative to BERT large Figure 4: Visualization of attention produced by our model Figure 5: Percent error for different question types Figure 6: Comparison of BERT large and Ensemble performance on an example
number of coattention blocks, the batch size, and the number of epochs trained and ensembled our three best networks
What is the most likely explanation for why Kolin's anger is so extreme? A. He is known to be irritable and have mood swings. B. He had been holding in anger and his captain's reaction was the last straw. C. He was under the effects of the purple berries. D. His mind is being controlled by Ashlew.
By H. B. Fyfe THE TALKATIVE TREE Dang vines! Beats all how some plants have no manners—but what do you expect, when they used to be men! All things considered—the obscure star, the undetermined damage to the stellar drive and the way the small planet's murky atmosphere defied precision scanners—the pilot made a reasonably good landing. Despite sour feelings for the space service of Haurtoz, steward Peter Kolin had to admit that casualties might have been far worse. Chief Steward Slichow led his little command, less two third-class ration keepers thought to have been trapped in the lower hold, to a point two hundred meters from the steaming hull of the Peace State . He lined them up as if on parade. Kolin made himself inconspicuous. "Since the crew will be on emergency watches repairing the damage," announced the Chief in clipped, aggressive tones, "I have volunteered my section for preliminary scouting, as is suitable. It may be useful to discover temporary sources in this area of natural foods." Volunteered HIS section! thought Kolin rebelliously. Like the Supreme Director of Haurtoz! Being conscripted into this idiotic space fleet that never fights is bad enough without a tin god on jets like Slichow! Prudently, he did not express this resentment overtly. His well-schooled features revealed no trace of the idea—or of any other idea. The Planetary State of Haurtoz had been organized some fifteen light-years from old Earth, but many of the home world's less kindly techniques had been employed. Lack of complete loyalty to the state was likely to result in a siege of treatment that left the subject suitably "re-personalized." Kolin had heard of instances wherein mere unenthusiastic posture had betrayed intentions to harbor treasonable thoughts. "You will scout in five details of three persons each," Chief Slichow said. "Every hour, each detail will send one person in to report, and he will be replaced by one of the five I shall keep here to issue rations." Kolin permitted himself to wonder when anyone might get some rest, but assumed a mildly willing look. (Too eager an attitude could arouse suspicion of disguising an improper viewpoint.) The maintenance of a proper viewpoint was a necessity if the Planetary State were to survive the hostile plots of Earth and the latter's decadent colonies. That, at least, was the official line. Kolin found himself in a group with Jak Ammet, a third cook, and Eva Yrtok, powdered foods storekeeper. Since the crew would be eating packaged rations during repairs, Yrtok could be spared to command a scout detail. Each scout was issued a rocket pistol and a plastic water tube. Chief Slichow emphasized that the keepers of rations could hardly, in an emergency, give even the appearance of favoring themselves in regard to food. They would go without. Kolin maintained a standard expression as the Chief's sharp stare measured them. Yrtok, a dark, lean-faced girl, led the way with a quiet monosyllable. She carried the small radio they would be permitted to use for messages of utmost urgency. Ammet followed, and Kolin brought up the rear. To reach their assigned sector, they had to climb a forbidding ridge of rock within half a kilometer. Only a sparse creeper grew along their way, its elongated leaves shimmering with bronze-green reflections against a stony surface; but when they topped the ridge a thick forest was in sight. Yrtok and Ammet paused momentarily before descending. Kolin shared their sense of isolation. 
They would be out of sight of authority and responsible for their own actions. It was a strange sensation. They marched down into the valley at a brisk pace, becoming more aware of the clouds and atmospheric haze. Distant objects seemed blurred by the mist, taking on a somber, brooding grayness. For all Kolin could tell, he and the others were isolated in a world bounded by the rocky ridge behind them and a semi-circle of damp trees and bushes several hundred meters away. He suspected that the hills rising mistily ahead were part of a continuous slope, but could not be sure. Yrtok led the way along the most nearly level ground. Low creepers became more plentiful, interspersed with scrubby thickets of tangled, spike-armored bushes. Occasionally, small flying things flickered among the foliage. Once, a shrub puffed out an enormous cloud of tiny spores. "Be a job to find anything edible here," grunted Ammet, and Kolin agreed. Finally, after a longer hike than he had anticipated, they approached the edge of the deceptively distant forest. Yrtok paused to examine some purple berries glistening dangerously on a low shrub. Kolin regarded the trees with misgiving. "Looks as tough to get through as a tropical jungle," he remarked. "I think the stuff puts out shoots that grow back into the ground to root as they spread," said the woman. "Maybe we can find a way through." In two or three minutes, they reached the abrupt border of the odd-looking trees. Except for one thick trunked giant, all of them were about the same height. They craned their necks to estimate the altitude of the monster, but the top was hidden by the wide spread of branches. The depths behind it looked dark and impenetrable. "We'd better explore along the edge," decided Yrtok. "Ammet, now is the time to go back and tell the Chief which way we're— Ammet! " Kolin looked over his shoulder. Fifty meters away, Ammet sat beside the bush with the purple berries, utterly relaxed. "He must have tasted some!" exclaimed Kolin. "I'll see how he is." He ran back to the cook and shook him by the shoulder. Ammet's head lolled loosely to one side. His rather heavy features were vacant, lending him a doped appearance. Kolin straightened up and beckoned to Yrtok. For some reason, he had trouble attracting her attention. Then he noticed that she was kneeling. "Hope she didn't eat some stupid thing too!" he grumbled, trotting back. As he reached her, whatever Yrtok was examining came to life and scooted into the underbrush with a flash of greenish fur. All Kolin saw was that it had several legs too many. He pulled Yrtok to her feet. She pawed at him weakly, eyes as vacant as Ammet's. When he let go in sudden horror, she folded gently to the ground. She lay comfortably on her side, twitching one hand as if to brush something away. When she began to smile dreamily, Kolin backed away. The corners of his mouth felt oddly stiff; they had involuntarily drawn back to expose his clenched teeth. He glanced warily about, but nothing appeared to threaten him. "It's time to end this scout," he told himself. "It's dangerous. One good look and I'm jetting off! What I need is an easy tree to climb." He considered the massive giant. Soaring thirty or forty meters into the thin fog and dwarfing other growth, it seemed the most promising choice. At first, Kolin saw no way, but then the network of vines clinging to the rugged trunk suggested a route. He tried his weight gingerly, then began to climb. "I should have brought Yrtok's radio," he muttered. 
"Oh, well, I can take it when I come down, if she hasn't snapped out of her spell by then. Funny … I wonder if that green thing bit her." Footholds were plentiful among the interlaced lianas. Kolin progressed rapidly. When he reached the first thick limbs, twice head height, he felt safer. Later, at what he hoped was the halfway mark, he hooked one knee over a branch and paused to wipe sweat from his eyes. Peering down, he discovered the ground to be obscured by foliage. "I should have checked from down there to see how open the top is," he mused. "I wonder how the view will be from up there?" "Depends on what you're looking for, Sonny!" something remarked in a soughing wheeze. Kolin, slipping, grabbed desperately for the branch. His fingers clutched a handful of twigs and leaves, which just barely supported him until he regained a grip with the other hand. The branch quivered resentfully under him. "Careful, there!" whooshed the eerie voice. "It took me all summer to grow those!" Kolin could feel the skin crawling along his backbone. "Who are you?" he gasped. The answering sigh of laughter gave him a distinct chill despite its suggestion of amiability. "Name's Johnny Ashlew. Kinda thought you'd start with what I am. Didn't figure you'd ever seen a man grown into a tree before." Kolin looked about, seeing little but leaves and fog. "I have to climb down," he told himself in a reasonable tone. "It's bad enough that the other two passed out without me going space happy too." "What's your hurry?" demanded the voice. "I can talk to you just as easy all the way down, you know. Airholes in my bark—I'm not like an Earth tree." Kolin examined the bark of the crotch in which he sat. It did seem to have assorted holes and hollows in its rough surface. "I never saw an Earth tree," he admitted. "We came from Haurtoz." "Where's that? Oh, never mind—some little planet. I don't bother with them all, since I came here and found out I could be anything I wanted." "What do you mean, anything you wanted?" asked Kolin, testing the firmness of a vertical vine. "Just what I said," continued the voice, sounding closer in his ear as his cheek brushed the ridged bark of the tree trunk. "And, if I do have to remind you, it would be nicer if you said 'Mr. Ashlew,' considering my age." "Your age? How old—?" "Can't really count it in Earth years any more. Lost track. I always figured bein' a tree was a nice, peaceful life; and when I remembered how long some of them live, that settled it. Sonny, this world ain't all it looks like." "It isn't, Mr. Ashlew?" asked Kolin, twisting about in an effort to see what the higher branches might hide. "Nope. Most everything here is run by the Life—that is, by the thing that first grew big enough to do some thinking, and set its roots down all over until it had control. That's the outskirts of it down below." "The other trees? That jungle?" "It's more'n a jungle, Sonny. When I landed here, along with the others from the Arcturan Spark , the planet looked pretty empty to me, just like it must have to—Watch it, there, Boy! If I didn't twist that branch over in time, you'd be bouncing off my roots right now!" "Th-thanks!" grunted Kolin, hanging on grimly. "Doggone vine!" commented the windy whisper. " He ain't one of my crowd. Landed years later in a ship from some star towards the center of the galaxy. You should have seen his looks before the Life got in touch with his mind and set up a mental field to help him change form. He looks twice as good as a vine!" 
"He's very handy," agreed Kolin politely. He groped for a foothold. "Well … matter of fact, I can't get through to him much, even with the Life's mental field helping. Guess he started living with a different way of thinking. It burns me. I thought of being a tree, and then he came along to take advantage of it!" Kolin braced himself securely to stretch tiring muscles. "Maybe I'd better stay a while," he muttered. "I don't know where I am." "You're about fifty feet up," the sighing voice informed him. "You ought to let me tell you how the Life helps you change form. You don't have to be a tree." "No?" " Uh -uh! Some of the boys that landed with me wanted to get around and see things. Lots changed to animals or birds. One even stayed a man—on the outside anyway. Most of them have to change as the bodies wear out, which I don't, and some made bad mistakes tryin' to be things they saw on other planets." "I wouldn't want to do that, Mr. Ashlew." "There's just one thing. The Life don't like taking chances on word about this place gettin' around. It sorta believes in peace and quiet. You might not get back to your ship in any form that could tell tales." "Listen!" Kolin blurted out. "I wasn't so much enjoying being what I was that getting back matters to me!" "Don't like your home planet, whatever the name was?" "Haurtoz. It's a rotten place. A Planetary State! You have to think and even look the way that's standard thirty hours a day, asleep or awake. You get scared to sleep for fear you might dream treason and they'd find out somehow." "Whooeee! Heard about them places. Must be tough just to live." Suddenly, Kolin found himself telling the tree about life on Haurtoz, and of the officially announced threats to the Planetary State's planned expansion. He dwelt upon the desperation of having no place to hide in case of trouble with the authorities. A multiple system of such worlds was agonizing to imagine. Somehow, the oddity of talking to a tree wore off. Kolin heard opinions spouting out which he had prudently kept bottled up for years. The more he talked and stormed and complained, the more relaxed he felt. "If there was ever a fellow ready for this planet," decided the tree named Ashlew, "you're it, Sonny! Hang on there while I signal the Life by root!" Kolin sensed a lack of direct attention. The rustle about him was natural, caused by an ordinary breeze. He noticed his hands shaking. "Don't know what got into me, talking that way to a tree," he muttered. "If Yrtok snapped out of it and heard, I'm as good as re-personalized right now." As he brooded upon the sorry choice of arousing a search by hiding where he was or going back to bluff things out, the tree spoke. "Maybe you're all set, Sonny. The Life has been thinkin' of learning about other worlds. If you can think of a safe form to jet off in, you might make yourself a deal. How'd you like to stay here?" "I don't know," said Kolin. "The penalty for desertion—" "Whoosh! Who'd find you? You could be a bird, a tree, even a cloud." Silenced but doubting, Kolin permitted himself to try the dream on for size. He considered what form might most easily escape the notice of search parties and still be tough enough to live a long time without renewal. Another factor slipped into his musings: mere hope of escape was unsatisfying after the outburst that had defined his fuming hatred for Haurtoz. I'd better watch myself! he thought. Don't drop diamonds to grab at stars! 
"What I wish I could do is not just get away but get even for the way they make us live … the whole damn set-up. They could just as easy make peace with the Earth colonies. You know why they don't?" "Why?" wheezed Ashlew. "They're scared that without talk of war, and scouting for Earth fleets that never come, people would have time to think about the way they have to live and who's running things in the Planetary State. Then the gravy train would get blown up—and I mean blown up!" The tree was silent for a moment. Kolin felt the branches stir meditatively. Then Ashlew offered a suggestion. "I could tell the Life your side of it," he hissed. "Once in with us, you can always make thinking connections, no matter how far away. Maybe you could make a deal to kill two birds with one stone, as they used to say on Earth…." Chief Steward Slichow paced up and down beside the ration crate turned up to serve him as a field desk. He scowled in turn, impartially, at his watch and at the weary stewards of his headquarters detail. The latter stumbled about, stacking and distributing small packets of emergency rations. The line of crewmen released temporarily from repair work was transient as to individuals but immutable as to length. Slichow muttered something profane about disregard of orders as he glared at the rocky ridges surrounding the landing place. He was so intent upon planning greetings with which to favor the tardy scouting parties that he failed to notice the loose cloud drifting over the ridge. It was tenuous, almost a haze. Close examination would have revealed it to be made up of myriads of tiny spores. They resembled those cast forth by one of the bushes Kolin's party had passed. Along the edges, the haze faded raggedly into thin air, but the units evidently formed a cohesive body. They drifted together, approaching the men as if taking intelligent advantage of the breeze. One of Chief Slichow's staggering flunkies, stealing a few seconds of relaxation on the pretext of dumping an armful of light plastic packing, wandered into the haze. He froze. After a few heartbeats, he dropped the trash and stared at ship and men as if he had never seen either. A hail from his master moved him. "Coming, Chief!" he called but, returning at a moderate pace, he murmured, "My name is Frazer. I'm a second assistant steward. I'll think as Unit One." Throughout the cloud of spores, the mind formerly known as Peter Kolin congratulated itself upon its choice of form. Nearer to the original shape of the Life than Ashlew got , he thought. He paused to consider the state of the tree named Ashlew, half immortal but rooted to one spot, unable to float on a breeze or through space itself on the pressure of light. Especially, it was unable to insinuate any part of itself into the control center of another form of life, as a second spore was taking charge of the body of Chief Slichow at that very instant. There are not enough men , thought Kolin. Some of me must drift through the airlock. In space, I can spread through the air system to the command group. Repairs to the Peace State and the return to Haurtoz passed like weeks to some of the crew but like brief moments in infinity to other units. At last, the ship parted the air above Headquarters City and landed. The unit known as Captain Theodor Kessel hesitated before descending the ramp. He surveyed the field, the city and the waiting team of inspecting officers. "Could hardly be better, could it?" he chuckled to the companion unit called Security Officer Tarth. 
"Hardly, sir. All ready for the liberation of Haurtoz." "Reformation of the Planetary State," mused the captain, smiling dreamily as he grasped the handrail. "And then—formation of the Planetary Mind!" END Transcriber's Note: This e-text was produced from Worlds of If January 1962 . Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.
B. He had been holding in anger and his captain's reaction was the last straw.
How does BLI measure alignment quality?
### Introduction and Motivation The wide use and success of monolingual word embeddings in NLP tasks BIBREF0 , BIBREF1 has inspired further research focus on the induction of cross-lingual word embeddings (CLWEs). CLWE methods learn a shared cross-lingual word vector space where words with similar meanings obtain similar vectors regardless of their actual language. CLWEs benefit cross-lingual NLP, enabling multilingual modeling of meaning and supporting cross-lingual transfer for downstream tasks and resource-lean languages. CLWEs provide invaluable cross-lingual knowledge for, inter alia, bilingual lexicon induction BIBREF2 , BIBREF3 , information retrieval BIBREF4 , BIBREF5 , machine translation BIBREF6 , BIBREF7 , document classification BIBREF8 , cross-lingual plagiarism detection BIBREF9 , domain adaptation BIBREF10 , cross-lingual POS tagging BIBREF11 , BIBREF12 , and cross-lingual dependency parsing BIBREF13 , BIBREF14 . The landscape of CLWE methods has recently been dominated by the so-called projection-based methods BIBREF15 , BIBREF16 , BIBREF17 . They align two monolingual embedding spaces by learning a projection/mapping based on a training dictionary of translation pairs. Besides their simple conceptual design and competitive performance, their popularity originates from the fact that they rely on rather weak cross-lingual supervision. Originally, the seed dictionaries typically spanned several thousand word pairs BIBREF15 , BIBREF18 , BIBREF19 , but more recent work has shown that CLWEs can be induced with even weaker supervision from small dictionaries spanning several hundred pairs BIBREF20 , identical strings BIBREF21 , or even only shared numerals BIBREF22 . Taking the idea of reducing cross-lingual supervision to the extreme, the latest CLWE developments almost exclusively focus on fully unsupervised approaches BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 : they fully abandon any source of (even weak) supervision and extract the initial seed dictionary by exploiting topological similarities between pre-trained monolingual embedding spaces. Their modus operandi can roughly be described by three main components: C1) unsupervised extraction of a seed dictionary; C2) a self-learning procedure that iteratively refines the dictionary to learn projections of increasingly higher quality; and C3) a set of preprocessing and postprocessing steps (e.g., unit length normalization, mean centering, (de)whitening) BIBREF31 that make the entire learning process more robust. The induction of fully unsupervised CLWEs is an inherently interesting research topic per se. Nonetheless, the main practical motivation for developing such approaches in the first place is to facilitate the construction of multilingual NLP tools and widen the access to language technology for resource-poor languages and language pairs. However, the first attempts at fully unsupervised CLWE induction failed exactly for these use cases, as shown by sogaard2018on. Therefore, the follow-up work aimed to improve the robustness of unsupervised CLWE induction by introducing more robust self-learning procedures BIBREF24 , BIBREF32 . Besides increased robustness, recent work claims that fully unsupervised projection-based CLWEs can even match or surpass their supervised counterparts BIBREF23 , BIBREF24 , BIBREF27 , BIBREF33 , BIBREF34 . 
In this paper, we critically examine these claims on robustness and improved performance of unsupervised CLWEs by running a large-scale evaluation in the bilingual lexicon induction (BLI) task on 15 languages (i.e., 210 language pairs, see Table 2 in § "Experimental Setup"). The languages were selected to represent different language families and morphological types, as we argue that fully unsupervised CLWEs have been designed to support exactly these setups. However, we show that even the most robust unsupervised CLWE method BIBREF24 still fails for a large number of language pairs: 87/210 BLI setups are unsuccessful, yielding (near-)zero BLI performance. Further, even when the unsupervised method succeeds, it is because the components C2 (self-learning) and C3 (pre-/post-processing) can mitigate the undesired effects of noisy seed lexicon extraction. We then demonstrate that the combination of C2 and C3 with a small provided seed dictionary (e.g., 500 or 1K pairs) outscores the unsupervised method in all cases, often with a huge margin, and does not fail for any language pair. Furthermore, we show that the most robust unsupervised CLWE approach still fails completely when it relies on monolingual word vectors trained on domain-dissimilar corpora. We also empirically verify that unsupervised approaches cannot outperform weakly supervised approaches even for closely related languages (e.g., Swedish–Danish, Spanish–Catalan). While the “no supervision at all” premise behind fully unsupervised CLWE methods is indeed seductive, our study strongly suggests that future research efforts should revisit the main motivation behind these methods and focus on designing even more robust solutions, given their current inability to support a wide spectrum of language pairs. In the hope of boosting the induction of CLWEs for more diverse and distant language pairs, we make all 210 training and test dictionaries used in this work publicly available at: https://github.com/ivulic/panlex-bli. ### Methodology We now dissect a general framework for unsupervised CLWE learning, and show that the “bag of tricks of the trade” used to increase their robustness (which often slips under the radar) can be equally applied to (weakly) supervised projection-based approaches, leading to their fair(er) comparison. ### Projection-Based CLWE Approaches In short, projection-based CLWE methods learn to (linearly) align independently trained monolingual spaces $\mathbf {X}$ and $\mathbf {Z}$, using a word translation dictionary $D_0$ to guide the alignment process. Let $\mathbf {X}_D \subset \mathbf {X}$ and $\mathbf {Z}_D \subset \mathbf {Z}$ be the row-aligned subsets of the monolingual spaces containing vectors of aligned words from $D_0$. The alignment matrices $\mathbf {X}_D$ and $\mathbf {Z}_D$ are then used to learn orthogonal transformations $\mathbf {W}_x$ and $\mathbf {W}_z$ that define the joint bilingual space $\mathbf {Y} = \mathbf {X}\mathbf {W}_x \cup \mathbf {Z}\mathbf {W}_z$. While supervised projection-based CLWE models learn the mapping using a provided external (clean) dictionary $D_0$, their unsupervised counterparts automatically induce the seed dictionary in an unsupervised way (C1) and then refine it in an iterative fashion (C2). Unsupervised CLWEs. These methods first induce a seed dictionary $D^{(1)}$ leveraging only two unaligned monolingual spaces (C1). While the algorithms for unsupervised seed dictionary induction differ, they all strongly rely on the assumption of similar topological structure between the two pretrained monolingual spaces.
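The projection step just described, together with the C1/C2 loop it feeds, can be illustrated with a short NumPy sketch. This is a simplification under our own naming, not the vecmap code: it assumes length-normalised embedding matrices already restricted to the few thousand most frequent words, clips negative similarities before taking the square root, and omits the C3 pre-/post-processing, frequency cut-offs, dropout, and CSLS retrieval discussed below.

```python
import numpy as np

def learn_mapping(X, Z, pairs):
    """Learn orthogonal W_x, W_z from the row-aligned subsets given by `pairs`,
    a list of (source_index, target_index) translation pairs."""
    src, trg = map(list, zip(*pairs))
    U, _, Vt = np.linalg.svd(X[src].T @ Z[trg])
    return U, Vt.T                              # W_x = U, W_z = V

def mutual_nn(P):
    """Index pairs (i, j) where P[i, j] is the largest entry in row i and column j."""
    fwd, bwd = P.argmax(axis=1), P.argmax(axis=0)
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

def seed_dictionary(X, Z):
    """C1 (simplified): match words whose sorted monolingual similarity
    distributions look alike across the two spaces."""
    Mx = np.sqrt(np.clip(np.sort(X @ X.T, axis=1), 0.0, None))
    Mz = np.sqrt(np.clip(np.sort(Z @ Z.T, axis=1), 0.0, None))
    Mx /= np.linalg.norm(Mx, axis=1, keepdims=True)
    Mz /= np.linalg.norm(Mz, axis=1, keepdims=True)
    return mutual_nn(Mx @ Mz.T)

def self_learning(X, Z, pairs, iterations=10):
    """C2 (simplified): alternate between solving the mapping and re-inducing
    the dictionary from mutual nearest neighbours in the shared space."""
    for _ in range(iterations):
        Wx, Wz = learn_mapping(X, Z, pairs)
        pairs = mutual_nn((X @ Wx) @ (Z @ Wz).T)
    return Wx, Wz, pairs
```

Seeding the same `self_learning` loop with a provided dictionary corresponds to the weakly supervised variants compared later, while starting it from `seed_dictionary(X, Z)` corresponds to the fully unsupervised setting.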
Once the seed dictionary is obtained, the two-step iterative self-learning procedure (C2) takes place: 1) a dictionary $D^{(k)}$ is first used to learn the joint space $\mathbf {Y}^{(k)} = \mathbf {X}\mathbf {W}^{(k)}_x \cup \mathbf {Z}\mathbf {W}^{(k)}_z$; 2) the nearest neighbours in $\mathbf {Y}^{(k)}$ then form the new dictionary $D^{(k+1)}$. We illustrate the general structure in Figure 1. A recent empirical survey paper BIBREF17 has compared a variety of the latest unsupervised CLWE methods BIBREF23, BIBREF27, BIBREF33, BIBREF24 in several downstream tasks (e.g., BLI, cross-lingual information retrieval, document classification). The results of their study indicate that the vecmap model of artetxe2018robust is by far the most robust and best performing unsupervised CLWE model. For the actual results and analyses, we refer the interested reader to the original paper of glavas2019howto. Another recent evaluation paper BIBREF35 as well as our own preliminary BLI tests (not shown for brevity) have further verified their findings. We thus focus on vecmap in our analyses, and base the following description of the components C1-C3 on that model. ### Three Key Components C1. Seed Lexicon Extraction. vecmap induces the initial seed dictionary using the following heuristic: monolingual similarity distributions for words with similar meaning will be similar across languages. The monolingual similarity distributions for the two languages are given as rows (or columns; the matrices are symmetric) of $\mathbf {M}_x = \mathbf {X}\mathbf {X}^T$ and $\mathbf {M}_z = \mathbf {Z}\mathbf {Z}^T$. For the distributions of similarity scores to be comparable, the values in each row of $\mathbf {M}_x$ and $\mathbf {M}_z$ are first sorted. The initial dictionary $D^{(1)}$ is finally obtained by searching for mutual nearest neighbours between the rows of $\sqrt{\mathbf {M}_x}$ and of $\sqrt{\mathbf {M}_z}$. C2. Self-Learning. Not counting the preprocessing and postprocessing steps (component C3), self-learning then iteratively repeats two steps: 1) Let $\mathbf {D}^{(k)}$ be the binary matrix indicating the aligned words in the dictionary $D^{(k)}$. The orthogonal transformation matrices are then obtained as $\mathbf {W}^{(k)}_x = \mathbf {U}$ and $\mathbf {W}^{(k)}_z = \mathbf {V}$, where $\mathbf {U}\mathbf {\Sigma }\mathbf {V}^T$ is the singular value decomposition of the matrix $\mathbf {X}^T\mathbf {D}^{(k)}\mathbf {Z}$. The cross-lingual space of the $k$-th iteration is then $\mathbf {Y}^{(k)} = \mathbf {X}\mathbf {W}^{(k)}_x \cup \mathbf {Z}\mathbf {W}^{(k)}_z$. 2) The new dictionary $D^{(k+1)}$ is then built by identifying nearest neighbours in $\mathbf {Y}^{(k)}$. These can be easily extracted from the matrix $\mathbf {P} = \mathbf {X}\mathbf {W}^{(k)}_x( \mathbf {Z}\mathbf {W}^{(k)}_z)^T$. All nearest neighbours can be used, or additional symmetry constraints can be imposed to extract only mutual nearest neighbours: all pairs of indices ($i, j$) for which $\mathbf {P}_{ij}$ is the largest value both in row $i$ and column $j$. The above procedure, however, often converges to poor local optima. To remedy this, the second step (i.e., dictionary induction) is extended with techniques that make self-learning more robust. First, the vocabularies of $\mathbf {X}$ and $\mathbf {Z}$ are cut to the top $k$ most frequent words. Second, similarity scores in $\mathbf {P}$ are kept with probability $p$, and set to zero otherwise.
This dropout allows for a wider exploration of possible word pairs in the dictionary and contributes to escaping poor local optima given the noisy seed lexicon in the first iterations. C3. Preprocessing and Postprocessing Steps. While iteratively learning orthogonal transformations $\mathbf {W}_{x}$ and $\mathbf {W}_{z}$ for $\mathbf {X}$ and $\mathbf {Z}$ is the central step of unsupervised projection-based CLWE methods, preprocessing and postprocessing techniques are additionally applied before and after the transformation. While such techniques are often overlooked in model comparisons, they may have a great impact on the model's final performance, as we validate in § "Results and Discussion" . We briefly summarize two pre-processing (S1 and S2) and post-processing (S3 and S4) steps used in our evaluation, originating from the framework of artetxe2018generalizing. S1) Normalization and mean centering. We first apply unit length normalization: all vectors in $\mathbf {X}$ and $\mathbf {Z}$ are normalized to have a unit Euclidean norm. Following that, $\mathbf {X}$ and $\mathbf {Z}$ are mean centered dimension-wise and then again length-normalized. S2) Whitening. ZCA whitening BIBREF36 is applied on (S1-processed) $\mathbf {X}$ and $\mathbf {Z}$ : it transforms the matrices such that each dimension has unit variance and that the dimensions are uncorrelated. Intuitively, the vector spaces are easier to align along directions of high variance. S3) Dewhitening. A transformation inverse to S2: for improved performance it is important to restore the variance information after the projection, if whitening was applied in S2 BIBREF31 . S4) Symmetric re-weighting. This step attempts to further align the embeddings in the cross-lingual embedding space by measuring how well a dimension in the space correlates across languages for the current iteration dictionary $D^{(k)}$ . The best results are obtained when re-weighting is neutral to the projection direction, that is, when it is applied symmetrically in both languages. In the actual implementation S1 is applied only once, before self-learning. S2, S3 and S4 are applied in each self-learning iteration. Model Configurations. Note that C2 and C3 can be equally used on top of any (provided) seed lexicon (i.e., $D^{(1)}$ := $D_0$ ) to enable weakly supervised learning, as we propose here. In fact, the variations of the three key components, C1) seed lexicon, C2) self-learning, and C3) preprocessing and postprocessing, construct various model configurations which can be analyzed to probe the importance of each component in the CLWE induction process. A selection of representative configurations evaluated later in § "Results and Discussion" is summarized in Table 1 . ### Experimental Setup Evaluation Task. Our task is bilingual lexicon induction (BLI). It has become the de facto standard evaluation for projection-based CLWEs BIBREF16 , BIBREF17 . In short, after a shared CLWE space has been induced, the task is to retrieve target language translations for a test set of source language words. Its lightweight nature allows us to conduct a comprehensive evaluation across a large number of language pairs. Since BLI is cast as a ranking task, following glavas2019howto we use mean average precision (MAP) as the main evaluation metric: in our BLI setup with only one correct translation for each “query” word, MAP is equal to mean reciprocal rank (MRR). (Selection of) Language Pairs. 
Our selection of test languages is guided by the following goals: a) following recent initiatives in other NLP research (e.g., for language modeling) BIBREF39 , BIBREF40 , we aim to ensure the coverage of different genealogical and typological language properties, and b) we aim to analyze a large set of language pairs and offer new evaluation data which extends and surpasses other work in the CLWE literature. These two properties will facilitate analyses between (dis)similar language pairs and offer a comprehensive set of evaluation setups that test the robustness and portability of fully unsupervised CLWEs. The final list of 15 diverse test languages is provided in Table 2 , and includes samples from different languages types and families. We run BLI evaluations for all language pairs in both directions, for a total of 15 $\times $ 14=210 BLI setups. Monolingual Embeddings. We use the 300-dim vectors of Grave:2018lrec for all 15 languages, pretrained on Common Crawl and Wikipedia with fastText BIBREF41 . We trim all vocabularies to the 200K most frequent words. Training and Test Dictionaries. They are derived from PanLex BIBREF43 , BIBREF44 , which was used in prior work on cross-lingual word embeddings BIBREF45 , BIBREF46 . PanLex currently spans around 1,300 language varieties with over 12M expressions: it offers some support and supervision also for low-resource language pairs BIBREF47 . For each source language ( $L_1$ ), we automatically translate their vocabulary words (if they are present in PanLex) to all 14 target ( $L_2$ ) languages. To ensure the reliability of the translation pairs, we retain only unigrams found in the vocabularies of the respective $L_2$ monolingual spaces which scored above a PanLex-predefined threshold. As in prior work BIBREF23 , BIBREF17 , we then reserve the 5K pairs created from the more frequent $L_1$ words for training, while the next 2K pairs are used for test. Smaller training dictionaries (1K and 500 pairs) are created by again selecting pairs comprising the most frequent $L_1$ words. Training Setup. In all experiments, we set the hyper-parameters to values that were tuned in prior research. When extracting the unsupervised seed lexicon, the 4K most frequent words of each language are used; self-learning operates on the 20K most frequent words of each language; with dropout the keep probability $p$ is 0.1; CSLS with $k=10$ nearest neighbors BIBREF24 . Again, Table 1 lists the main model configurations in our comparison. For the fully unsupervised model we always report the best performing configuration after probing different self-learning strategies (i.e., +sl, +sl+nod, and +sl+sym are tested). The results for unsupervised are always reported as averages over 5 restarts: this means that with unsupervised we count BLI setups as unsuccessful only if the results are close to zero in all 5/5 runs. orthg-super is the standard supervised model with orthogonal projections from prior work BIBREF21 , BIBREF17 . ### Results and Discussion Main BLI results averaged over each source language ( $L_1$ ) are provided in Table 3 and Table 4 . We now summarize and discuss the main findings across several dimensions of comparison. Unsupervised vs. (Weakly) Supervised. First, when exactly the same components C2 and C3 are used, unsupervised is unable to outperform a (weakly) supervised full+sl+sym variant, and the gap in final performance is often substantial. 
In fact, full+sl+sym and full+sl+nod outperform the best unsupervised for all 210/210 BLI setups: we observe the same phenomenon with varying dictionary sizes, that is, it equally holds when we seed self-learning with 5K, 1K, and 500 translation pairs, see also Figure 2 . This also suggests that the main reason why unsupervised approaches were considered on-par with supervised approaches in prior work BIBREF23 , BIBREF24 is because they were not compared under fair circumstances: while unsupervised relied heavily on the components C2 and C3, these were omitted when running supervised baselines. Our unbiased comparison reveals that there is a huge gap even when supervised projection-based approaches consume only several hundred translation pairs to initiate self-learning. Are Unsupervised CLWEs Robust? The results also indicate that, contrary to the beliefs established by very recent work BIBREF24 , BIBREF30 , fully unsupervised approaches are still prone to getting stuck in local optima, and still suffer from robustness issues when dealing with distant language pairs: 87 out of 210 BLI setups ( $=41.4\%$ ) result in (near-)zero BLI performance, see also Table 4 . At the same time, weakly supervised methods with a seed lexicon of 1k or 500 pairs do not suffer from the robustness problem and always converge to a good solution, as also illustrated by the results reported in Table 5 . How Important are Preprocessing and Postprocessing? The comparisons between orthg-super (and orthg+sl+sym) on the one hand, and full-super (and full+sl+sym) on the other hand clearly indicate that the component C3 plays a substantial role in effective CLWE learning. full-super, which employs all steps S1-S4 (see § "Methodology" ), outperforms orthg-super in 208/210 setups with $|D_0|$ =5k and in 210/210 setups with $|D_0|$ =1k. Similarly, full+sl+sym is better than orthg+sl+sym in 210/210 setups (both for $|D_0|$ =1k,5k). The scores also indicate that dropout with self-learning is useful only when we work with noisy unsupervised seed lexicons: full+sl+nod and full+sl+sym without dropout consistently outperform full+sl across the board. How Important is (Robust) Self-Learning? We note that the best self-learning method is often useful even when $|D_0|=5k$ (i.e., full+sl+sym is better than full-super in 164/210 setups). However, the importance of robust self-learning gets more pronounced as we decrease the size of $D_0$ : full+sl+sym is better than full-super in 210/210 setups when $|D_0|=500$ or $|D_0|=1,000$ . The gap between the two models, as shown in Figure 2 , increases dramatically in favor of full+sl+sym as we decrease $|D_0|$ . Again, just comparing full-super and unsupervised in Figure 2 might give a false impression that fully unsupervised CLWE methods can match their supervised counterparts, but the comparison to full+sl+sym reveals the true extent of performance drop when we abandon even weak supervision. The scores also reveal that the choice of self-learning (C2) does matter: all best performing BLI runs with $|D_0|=1k$ are obtained by two configs with self-learning, and full+sl+sym is the best configuration for 177/210 setups (see Table 4 ). Language Pairs. As suggested before by sogaard2018on and further verified by glavas2019howto and doval2019onthe, the language pair at hand can have a huge impact on CLWE induction: the adversarial method of conneau2018word often gets stuck in poor local optima and yields degenerate solutions for distant language pairs such as English-Finnish. 
More recent CLWE methods BIBREF24 , BIBREF30 focus on mitigating this robustness issue. However, they still rely on one critical assumption which leads them to degraded performance for distant language pairs: they assume approximate isomorphism BIBREF49 , BIBREF48 between monolingual embedding spaces to learn the initial seed dictionary. In other words, they assume very similar geometric constellations between two monolingual spaces: due to the Zipfian phenomena in language BIBREF50 , such near-isomorphism can be satisfied only for similar languages and for similar domains used for training monolingual vectors. This property is reflected in the results reported in Table 3 , the number of unsuccessful setups in Table 4 , as well as later in Figure 4 . For instance, the largest number of unsuccessful BLI setups with the unsupervised model is reported for Korean, Thai (a tonal language), and Basque (a language isolate): their morphological and genealogical properties are furthest away from those of the other languages in our comparison. A substantial number of unsuccessful setups is also observed with the other two language outliers from our set (see Table 2 again), Georgian and Indonesian, as well as with morphologically-rich languages such as Estonian or Turkish. One setting in which fully unsupervised methods did show impressive results in prior work is similar language pairs. However, even in these settings, when the comparison to the weakly supervised full+sl+sym is completely fair (i.e., the same components C2 and C3 are used for both), unsupervised still falls short of full+sl+sym. These results for three source languages are summarized in Figure 3 . What is more, one could argue that we do not need unsupervised CLWEs for similar languages in the first place: we can harvest cheap supervision here, e.g., cognates. The main motivation behind unsupervised approaches is to support dissimilar and resource-poor language pairs for which supervision cannot be guaranteed. Domain Differences. Finally, we also verify that unsupervised CLWEs still cannot account for domain differences when training monolingual vectors. We rely on the probing test of sogaard2018on: 300-dim fastText vectors are trained on 1.1M sentences from three corpora: 1) EuroParl.v7 BIBREF51 (parliamentary proceedings); 2) Wikipedia BIBREF52 ; and 3) EMEA BIBREF53 (medical), and BLI evaluation for three language pairs is conducted on standard MUSE BLI test sets BIBREF23 . The results, summarized in Figure 4 , reveal that unsupervised methods are able to yield a good solution only when there is no domain mismatch and only for the most similar language pair (English-Spanish), again questioning their robustness and portability to truly low-resource and more challenging setups. Weakly supervised methods ( $|D_0|=500$ or $D_0$ seeded with identical strings), in contrast, yield good solutions for all setups.
For instance, the underlying assumption of all projection-based methods (both supervised and unsupervised) is the topological similarity between monolingual spaces, which is why standard simple linear projections result in lower absolute BLI scores for distant pairs (see Table 4 and results in the supplemental material). Unsupervised approaches even exploit the assumption twice, as their seed extraction is fully based on the topological similarity. Future work should move beyond the restrictive assumption by exploring new methods that can, e.g., 1) increase the isomorphism between monolingual spaces BIBREF54 by distinguishing between language-specific and language-pair-invariant subspaces; 2) learn effective non-linear or multiple local projections between monolingual spaces, similar to the preliminary work of nakashole2018norma; or 3) “denoisify” seed lexicons during the self-learning procedure, similar to vulic2016on and Lubin:2019naacl. For instance, keeping only mutual/symmetric nearest neighbours as in full+sl+sym can be seen as a form of rudimentary denoisifying: it is telling that the best overall performance in this work is reported with that model configuration. Further, the most important contributions of unsupervised CLWE models are, in fact, the improved and more robust self-learning procedures (component C2) and technical enhancements (component C3). In this work we have demonstrated that these components can be equally applied to weakly supervised approaches: starting from a set of only several hundred pairs, they can guarantee consistently improved performance across the board. As there is still no clear-cut use case scenario for unsupervised CLWEs, instead of “going fully unsupervised”, one pragmatic approach to widening the scope of CLWE learning and its applications might be to invest more effort into extracting at least some seed supervision for a variety of language pairs BIBREF22 . This finding aligns well with the ongoing initiatives of the PanLex project BIBREF44 and the ASJP database BIBREF56 , which aim to collate at least some translation pairs in most of the world’s languages. Finally, this paper demonstrates that, in order to enable fair comparisons, future work on CLWEs should focus on evaluating the CLWE methods' constituent components (e.g., components C1-C3 from this work) instead of full-blown composite systems directly. One goal of the paper is to acknowledge that the work on fully unsupervised CLWE methods has indeed advanced the state of the art in cross-lingual word representation learning by offering new solutions also to weakly supervised CLWE methods. However, the robustness problems are still prominent with fully unsupervised CLWEs, and future work should invest more time and effort into developing more robust and more effective methods, e.g., by reaching beyond projection-based methods towards joint approaches BIBREF16 , BIBREF57 . ### Acknowledgments This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). The work of Goran Glavaš is supported by the Baden-Württemberg Stiftung (AGREE grant of the Eliteprogramm). Roi Reichart is partially funded by ISF personal grant No. 1625/18. We thank the three anonymous reviewers for their encouraging comments and suggestions. Figure 1: General unsupervised CLWE approach. Table 1: Configurations obtained by varying components C1, C2, and C3 used in our empirical comparison in §4.
Table 2: The list of 15 languages from our main BLI experiments along with their corresponding language family (IE = Indo-European), broad morphological type, and their ISO 639-1 code. Table 3: BLI scores (MRR) for all model configurations. The scores are averaged over all experimental setups where each of the 15 languages is used as L1: e.g., CA-* means that the translation direction is from Catalan (CA) as source (L1) to each of the remaining 14 languages listed in Table 2 as targets (L2), and we average over the corresponding 14 CA-* BLI setups. 5k and 1k denote the seed dictionary size for (weakly) supervised methods (D0). Unsuccessful setups refer to the number of BLI experimental setups with the fully UNSUPERVISED model that yield an MRR score ≤ 0.01. The Avg column refers to the averaged MRR scores of each model configuration over all 15×14=210 BLI setups. The highest scores for two different seed dictionary sizes in each column are in bold, the second best are underlined. See Table 1 for a brief description of all model configurations in the comparison. Full results for each particular language pair are available in the supplemental material. Table 4: Summary statistics computed over all 15×14=210 BLI setups. a) Unsuc. denotes the total number of unsuccessful setups, where a setup is considered unsuccessful if MRR ≤ 0.01 or MRR ≤ 0.05 (in the parentheses); b) Win refers to the total number of “winning” setups, that is, for all language pairs it counts how many times each particular model yields the best overall MRR score. We compute separate statistics for two settings (|D0| = 1k and |D0| = 5k). Figure 2: A comparison of average BLI scores with different seed dictionary sizes D0 between a fully unsupervised method (UNSUPER), a supervised method without self-learning (SUPER), and the two best-performing weakly supervised methods with self-learning (+SL+NOD and +SL+SYM). While SUPER without self-learning displays a steep drop in performance with smaller seed dictionaries, there is only a slight decrease when self-learning is used: e.g., 500 translation pairs are still sufficient to initialize robust self-learning. Table 5: Results for a selection of BLI setups which were unsuccessful with the UNSUPERVISED CLWE method. Figure 3: A comparison of BLI scores on “easy” (i.e., similar) language pairs between the fully UNSUPERVISED model and a weakly supervised model (seed dictionary size |D0| = 200 or |D0| = 500) which relies on the self-learning procedure with the symmetry constraint (FULL+SL+SYM). Figure 4: BLI scores with the (most robust) fully UNSUPERVISED model for different language pairs when monolingual word embeddings are trained on dissimilar domains: parliamentary proceedings (EuroParl), Wikipedia (Wiki), and medical corpora (EMEA). Training and test data are the same as in (Søgaard et al., 2018). Table 6: All BLI scores (MRR) with Bulgarian (BG) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration. Table 7: All BLI scores (MRR) with Catalan (CA) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration. Table 8: All BLI scores (MRR) with Esperanto (EO) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration.
Table 9: All BLI scores (MRR) with Estonian (ET) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration. Table 10: All BLI scores (MRR) with Basque (EU) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration. Table 11: All BLI scores (MRR) with Finnish (FI) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration. Table 12: All BLI scores (MRR) with Hebrew (HE) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration. Table 13: All BLI scores (MRR) with Hungarian (HU) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration. Table 14: All BLI scores (MRR) with Indonesian (ID) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration. Table 15: All BLI scores (MRR) with Georgian (KA) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration. Table 16: All BLI scores (MRR) with Korean (KO) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration. Table 17: All BLI scores (MRR) with Lithuanian (LT) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration. Table 18: All BLI scores (MRR) with Norwegian (NO) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration. Table 19: All BLI scores (MRR) with Thai (TH) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration. Table 20: All BLI scores (MRR) with Turkish (TR) as the source language. 5k and 1k denote the seed dictionary D0 size for (weakly) supervised methods. See Table 1 for a brief description of each model configuration.
we use mean average precision (MAP) as the main evaluation metric
What motivates Pop Young to live on the far side of the moon? A. He is being compensated for a wrongful death suit that occurred back on Earth B. He is close to Sattell's location, which enhances his memories of his wife and children C. If he left his post, there would be no one to monitor the mines in the Big Crack D. If he returned to Earth, he would be arrested for the murder of his family
SCRIMSHAW The old man just wanted to get back his memory—and the methods he used were gently hellish, from the viewpoint of the others.... BY MURRAY LEINSTER Illustrated by Freas Pop Young was the one known man who could stand life on the surface of the Moon's far side, and, therefore, he occupied the shack on the Big Crack's edge, above the mining colony there. Some people said that no normal man could do it, and mentioned the scar of a ghastly head-wound to explain his ability. One man partly guessed the secret, but only partly. His name was Sattell and he had reason not to talk. Pop Young alone knew the whole truth, and he kept his mouth shut, too. It wasn't anybody else's business. The shack and the job he filled were located in the medieval notion of the physical appearance of hell. By day the environment was heat and torment. By night—lunar night, of course, and lunar day—it was frigidity and horror. Once in two weeks Earth-time a rocketship came around the horizon from Lunar City with stores for the colony deep underground. Pop received the stores and took care of them. He handed over the product of the mine, to be forwarded to Earth. The rocket went away again. Come nightfall Pop lowered the supplies down the long cable into the Big Crack to the colony far down inside, and freshened up the landing field marks with magnesium marking-powder if a rocket-blast had blurred them. That was fundamentally all he had to do. But without him the mine down in the Crack would have had to shut down. The Crack, of course, was that gaping rocky fault which stretches nine hundred miles, jaggedly, over the side of the Moon that Earth never sees. There is one stretch where it is a yawning gulf a full half-mile wide and unguessably deep. Where Pop Young's shack stood it was only a hundred yards, but the colony was a full mile down, in one wall. There is nothing like it on Earth, of course. When it was first found, scientists descended into it to examine the exposed rock-strata and learn the history of the Moon before its craters were made. But they found more than history. They found the reason for the colony and the rocket landing field and the shack. The reason for Pop was something else. The shack stood a hundred feet from the Big Crack's edge. It looked like a dust-heap thirty feet high, and it was. The outside was surface moondust, piled over a tiny dome to be insulation against the cold of night and shadow and the furnace heat of day. Pop lived in it all alone, and in his spare time he worked industriously at recovering some missing portions of his life that Sattell had managed to take away from him. He thought often of Sattell, down in the colony underground. There were galleries and tunnels and living-quarters down there. There were air-tight bulkheads for safety, and a hydroponic garden to keep the air fresh, and all sorts of things to make life possible for men under if not on the Moon. But it wasn't fun, even underground. In the Moon's slight gravity, a man is really adjusted to existence when he has a well-developed case of agoraphobia. With such an aid, a man can get into a tiny, coffinlike cubbyhole, and feel solidity above and below and around him, and happily tell himself that it feels delicious. Sometimes it does. But Sattell couldn't comfort himself so easily. He knew about Pop, up on the surface. He'd shipped out, whimpering, to the Moon to get far away from Pop, and Pop was just about a mile overhead and there was no way to get around him. 
It was difficult to get away from the mine, anyhow. It doesn't take too long for the low gravity to tear a man's nerves to shreds. He has to develop kinks in his head to survive. And those kinks— The first men to leave the colony had to be knocked cold and shipped out unconscious. They'd been underground—and in low gravity—long enough to be utterly unable to face the idea of open spaces. Even now there were some who had to be carried, but there were some tougher ones who were able to walk to the rocketship if Pop put a tarpaulin over their heads so they didn't have to see the sky. In any case Pop was essential, either for carrying or guidance. Sattell got the shakes when he thought of Pop, and Pop rather probably knew it. Of course, by the time he took the job tending the shack, he was pretty certain about Sattell. The facts spoke for themselves. Pop had come back to consciousness in a hospital with a great wound in his head and no memory of anything that had happened before that moment. It was not that his identity was in question. When he was stronger, the doctors told him who he was, and as gently as possible what had happened to his wife and children. They'd been murdered after he was seemingly killed defending them. But he didn't remember a thing. Not then. It was something of a blessing. But when he was physically recovered he set about trying to pick up the threads of the life he could no longer remember. He met Sattell quite by accident. Sattell looked familiar. Pop eagerly tried to ask him questions. And Sattell turned gray and frantically denied that he'd ever seen Pop before. All of which happened back on Earth and a long time ago. It seemed to Pop that the sight of Sattell had brought back some vague and cloudy memories. They were not sharp, though, and he hunted up Sattell again to find out if he was right. And Sattell went into panic when he returned. Nowadays, by the Big Crack, Pop wasn't so insistent on seeing Sattell, but he was deeply concerned with the recovery of the memories that Sattell helped bring back. Pop was a highly conscientious man. He took good care of his job. There was a warning-bell in the shack, and when a rocketship from Lunar City got above the horizon and could send a tight beam, the gong clanged loudly, and Pop got into a vacuum-suit and went out the air lock. He usually reached the moondozer about the time the ship began to brake for landing, and he watched it come in. He saw the silver needle in the sky fighting momentum above a line of jagged crater-walls. It slowed, and slowed, and curved down as it drew nearer. The pilot killed all forward motion just above the field and came steadily and smoothly down to land between the silvery triangles that marked the landing place. Instantly the rockets cut off, drums of fuel and air and food came out of the cargo-hatch and Pop swept forward with the dozer. It was a miniature tractor with a gigantic scoop in front. He pushed a great mound of talc-fine dust before him to cover up the cargo. It was necessary. With freight costing what it did, fuel and air and food came frozen solid, in containers barely thicker than foil. While they stayed at space-shadow temperature, the foil would hold anything. And a cover of insulating moondust with vacuum between the grains kept even air frozen solid, though in sunlight. At such times Pop hardly thought of Sattell. He knew he had plenty of time for that. He'd started to follow Sattell knowing what had happened to his wife and children, but it was hearsay only. 
He had no memory of them at all. But Sattell stirred the lost memories. At first Pop followed absorbedly from city to city, to recover the years that had been wiped out by an axe-blow. He did recover a good deal. When Sattell fled to another continent, Pop followed because he had some distinct memories of his wife—and the way he'd felt about her—and some fugitive mental images of his children. When Sattell frenziedly tried to deny knowledge of the murder in Tangier, Pop had come to remember both his children and some of the happiness of his married life. Even when Sattell—whimpering—signed up for Lunar City, Pop tracked him. By that time he was quite sure that Sattell was the man who'd killed his family. If so, Sattell had profited by less than two days' pay for wiping out everything that Pop possessed. But Pop wanted it back. He couldn't prove Sattell's guilt. There was no evidence. In any case, he didn't really want Sattell to die. If he did, there'd be no way to recover more lost memories. Sometimes, in the shack on the far side of the Moon, Pop Young had odd fancies about Sattell. There was the mine, for example. In each two Earth-weeks of working, the mine-colony nearly filled up a three-gallon cannister with greasy-seeming white crystals shaped like two pyramids base to base. The filled cannister would weigh a hundred pounds on Earth. Here it weighed eighteen. But on Earth its contents would be computed in carats, and a hundred pounds was worth millions. Yet here on the Moon Pop kept a waiting cannister on a shelf in his tiny dome, behind the air-apparatus. It rattled if he shook it, and it was worth no more than so many pebbles. But sometimes Pop wondered if Sattell ever thought of the value of the mine's production. If he would kill a woman and two children and think he'd killed a man for no more than a hundred dollars, what enormity would he commit for a three-gallon quantity of uncut diamonds? But he did not dwell on such speculation. The sun rose very, very slowly in what by convention was called the east. It took nearly two hours to urge its disk above the horizon, and it burned terribly in emptiness for fourteen times twenty-four hours before sunset. Then there was night, and for three hundred and thirty-six consecutive hours there were only stars overhead and the sky was a hole so terrible that a man who looked up into it—what with the nagging sensation of one-sixth gravity—tended to lose all confidence in the stability of things. Most men immediately found it hysterically necessary to seize hold of something solid to keep from falling upward. But nothing felt solid. Everything fell, too. Wherefore most men tended to scream. But not Pop. He'd come to the Moon in the first place because Sattell was here. Near Sattell, he found memories of times when he was a young man with a young wife who loved him extravagantly. Then pictures of his children came out of emptiness and grew sharp and clear. He found that he loved them very dearly. And when he was near Sattell he literally recovered them—in the sense that he came to know new things about them and had new memories of them every day. He hadn't yet remembered the crime which lost them to him. Until he did—and the fact possessed a certain grisly humor—Pop didn't even hate Sattell. He simply wanted to be near him because it enabled him to recover new and vivid parts of his youth that had been lost. Otherwise, he was wholly matter-of-fact—certainly so for the far side of the Moon. He was a rather fussy housekeeper. 
The shack above the Big Crack's rim was as tidy as any lighthouse or fur-trapper's cabin. He tended his air-apparatus with a fine precision. It was perfectly simple. In the shadow of the shack he had an unfailing source of extreme low temperature. Air from the shack flowed into a shadow-chilled pipe. Moisture condensed out of it here, and CO 2 froze solidly out of it there, and on beyond it collected as restless, transparent liquid air. At the same time, liquid air from another tank evaporated to maintain the proper air pressure in the shack. Every so often Pop tapped the pipe where the moisture froze, and lumps of water ice clattered out to be returned to the humidifier. Less often he took out the CO 2 snow, and measured it, and dumped an equivalent quantity of pale-blue liquid oxygen into the liquid air that had been purified by cold. The oxygen dissolved. Then the apparatus reversed itself and supplied fresh air from the now-enriched fluid, while the depleted other tank began to fill up with cold-purified liquid air. Outside the shack, jagged stony pinnacles reared in the starlight, and craters complained of the bombardment from space that had made them. But, outside, nothing ever happened. Inside, it was quite different. Working on his memories, one day Pop made a little sketch. It helped a great deal. He grew deeply interested. Writing-material was scarce, but he spent most of the time between two particular rocket-landings getting down on paper exactly how a child had looked while sleeping, some fifteen years before. He remembered with astonishment that the child had really looked exactly like that! Later he began a sketch of his partly-remembered wife. In time—he had plenty—it became a really truthful likeness. The sun rose, and baked the abomination of desolation which was the moonscape. Pop Young meticulously touched up the glittering triangles which were landing guides for the Lunar City ships. They glittered from the thinnest conceivable layer of magnesium marking-powder. He checked over the moondozer. He tended the air apparatus. He did everything that his job and survival required. Ungrudgingly. Then he made more sketches. The images to be drawn came back more clearly when he thought of Sattell, so by keeping Sattell in mind he recovered the memory of a chair that had been in his forgotten home. Then he drew his wife sitting in it, reading. It felt very good to see her again. And he speculated about whether Sattell ever thought of millions of dollars' worth of new-mined diamonds knocking about unguarded in the shack, and he suddenly recollected clearly the way one of his children had looked while playing with her doll. He made a quick sketch to keep from forgetting that. There was no purpose in the sketching, save that he'd lost all his young manhood through a senseless crime. He wanted his youth back. He was recovering it bit by bit. The occupation made it absurdly easy to live on the surface of the far side of the Moon, whether anybody else could do it or not. Sattell had no such device for adjusting to the lunar state of things. Living on the Moon was bad enough anyhow, then, but living one mile underground from Pop Young was much worse. Sattell clearly remembered the crime Pop Young hadn't yet recalled. He considered that Pop had made no overt attempt to revenge himself because he planned some retaliation so horrible and lingering that it was worth waiting for. He came to hate Pop with an insane ferocity. And fear. 
In his mind the need to escape became an obsession on top of the other psychotic states normal to a Moon-colonist. But he was helpless. He couldn't leave. There was Pop. He couldn't kill Pop. He had no chance—and he was afraid. The one absurd, irrelevant thing he could do was write letters back to Earth. He did that. He wrote with the desperate, impassioned, frantic blend of persuasion and information and genius-like invention of a prisoner in a high-security prison, trying to induce someone to help him escape. He had friends, of a sort, but for a long time his letters produced nothing. The Moon swung in vast circles about the Earth, and the Earth swung sedately about the Sun. The other planets danced their saraband. The rest of humanity went about its own affairs with fascinated attention. But then an event occurred which bore directly upon Pop Young and Sattell and Pop Young's missing years. Somebody back on Earth promoted a luxury passenger-line of spaceships to ply between Earth and Moon. It looked like a perfect set-up. Three spacecraft capable of the journey came into being with attendant reams of publicity. They promised a thrill and a new distinction for the rich. Guided tours to Lunar! The most expensive and most thrilling trip in history! One hundred thousand dollars for a twelve-day cruise through space, with views of the Moon's far side and trips through Lunar City and a landing in Aristarchus, plus sound-tapes of the journey and fame hitherto reserved for honest explorers! It didn't seem to have anything to do with Pop or with Sattell. But it did. There were just two passenger tours. The first was fully booked. But the passengers who paid so highly, expected to be pleasantly thrilled and shielded from all reasons for alarm. And they couldn't be. Something happens when a self-centered and complacent individual unsuspectingly looks out of a spaceship port and sees the cosmos unshielded by mists or clouds or other aids to blindness against reality. It is shattering. A millionaire cut his throat when he saw Earth dwindled to a mere blue-green ball in vastness. He could not endure his own smallness in the face of immensity. Not one passenger disembarked even for Lunar City. Most of them cowered in their chairs, hiding their eyes. They were the simple cases of hysteria. But the richest girl on Earth, who'd had five husbands and believed that nothing could move her—she went into catatonic withdrawal and neither saw nor heard nor moved. Two other passengers sobbed in improvised strait jackets. The first shipload started home. Fast. The second luxury liner took off with only four passengers and turned back before reaching the Moon. Space-pilots could take the strain of space-flight because they had work to do. Workers for the lunar mines could make the trip under heavy sedation. But it was too early in the development of space-travel for pleasure-passengers. They weren't prepared for the more humbling facts of life. Pop heard of the quaint commercial enterprise through the micro-tapes put off at the shack for the men down in the mine. Sattell probably learned of it the same way. Pop didn't even think of it again. It seemed to have nothing to do with him. But Sattell undoubtedly dealt with it fully in his desperate writings back to Earth. Pop matter-of-factly tended the shack and the landing field and the stores for the Big Crack mine. Between-times he made more drawings in pursuit of his own private objective. 
Quite accidentally, he developed a certain talent professional artists might have approved. But he was not trying to communicate, but to discover. Drawing—especially with his mind on Sattell—he found fresh incidents popping up in his recollection. Times when he was happy. One day he remembered the puppy his children had owned and loved. He drew it painstakingly—and it was his again. Thereafter he could remember it any time he chose. He did actually recover a completely vanished past. He envisioned a way to increase that recovery. But there was a marked shortage of artists' materials on the Moon. All freight had to be hauled from Earth, on a voyage equal to rather more than a thousand times around the equator of the Earth. Artists' supplies were not often included. Pop didn't even ask. He began to explore the area outside the shack for possible material no one would think of sending from Earth. He collected stones of various sorts, but when warmed up in the shack they were useless. He found no strictly lunar material which would serve for modeling or carving portraits in the ground. He found minerals which could be pulverized and used as pigments, but nothing suitable for this new adventure in the recovery of lost youth. He even considered blasting, to aid his search. He could. Down in the mine, blasting was done by soaking carbon black—from CO 2 —in liquid oxygen, and then firing it with a spark. It exploded splendidly. And its fumes were merely more CO 2 which an air-apparatus handled easily. He didn't do any blasting. He didn't find any signs of the sort of mineral he required. Marble would have been perfect, but there is no marble on the Moon. Naturally! Yet Pop continued to search absorbedly for material with which to capture memory. Sattell still seemed necessary, but— Early one lunar morning he was a good two miles from his shack when he saw rocket-fumes in the sky. It was most unlikely. He wasn't looking for anything of the sort, but out of the corner of his eye he observed that something moved. Which was impossible. He turned his head, and there were rocket-fumes coming over the horizon, not in the direction of Lunar City. Which was more impossible still. He stared. A tiny silver rocket to the westward poured out monstrous masses of vapor. It decelerated swiftly. It curved downward. The rockets checked for an instant, and flamed again more violently, and checked once more. This was not an expert approach. It was a faulty one. Curving surface-ward in a sharply changing parabola, the pilot over-corrected and had to wait to gather down-speed, and then over-corrected again. It was an altogether clumsy landing. The ship was not even perfectly vertical when it settled not quite in the landing-area marked by silvery triangles. One of its tail-fins crumpled slightly. It tilted a little when fully landed. Then nothing happened. Pop made his way toward it in the skittering, skating gait one uses in one-sixth gravity. When he was within half a mile, an air-lock door opened in the ship's side. But nothing came out of the lock. No space-suited figure. No cargo came drifting down with the singular deliberation of falling objects on the Moon. It was just barely past lunar sunrise on the far side of the Moon. Incredibly long and utterly black shadows stretched across the plain, and half the rocketship was dazzling white and half was blacker than blackness itself. The sun still hung low indeed in the black, star-speckled sky. Pop waded through moondust, raising a trail of slowly settling powder. 
He knew only that the ship didn't come from Lunar City, but from Earth. He couldn't imagine why. He did not even wildly connect it with what—say—Sattell might have written with desperate plausibility about greasy-seeming white crystals out of the mine, knocking about Pop Young's shack in cannisters containing a hundred Earth-pounds weight of richness. Pop reached the rocketship. He approached the big tail-fins. On one of them there were welded ladder-rungs going up to the opened air-lock door. He climbed. The air-lock was perfectly normal when he reached it. There was a glass port in the inner door, and he saw eyes looking through it at him. He pulled the outer door shut and felt the whining vibration of admitted air. His vacuum suit went slack about him. The inner door began to open, and Pop reached up and gave his helmet the practiced twisting jerk which removed it. Then he blinked. There was a red-headed man in the opened door. He grinned savagely at Pop. He held a very nasty hand-weapon trained on Pop's middle. "Don't come in!" he said mockingly. "And I don't give a damn about how you are. This isn't social. It's business!" Pop simply gaped. He couldn't quite take it in. "This," snapped the red-headed man abruptly, "is a stickup!" Pop's eyes went through the inner lock-door. He saw that the interior of the ship was stripped and bare. But a spiral stairway descended from some upper compartment. It had a handrail of pure, transparent, water-clear plastic. The walls were bare insulation, but that trace of luxury remained. Pop gazed at the plastic, fascinated. The red-headed man leaned forward, snarling. He slashed Pop across the face with the barrel of his weapon. It drew blood. It was wanton, savage brutality. "Pay attention!" snarled the red-headed man. "A stickup, I said! Get it? You go get that can of stuff from the mine! The diamonds! Bring them here! Understand?" Pop said numbly: "What the hell?" The red-headed man hit him again. He was nerve-racked, and, therefore, he wanted to hurt. "Move!" he rasped. "I want the diamonds you've got for the ship from Lunar City! Bring 'em!" Pop licked blood from his lips and the man with the weapon raged at him. "Then phone down to the mine! Tell Sattell I'm here and he can come on up! Tell him to bring any more diamonds they've dug up since the stuff you've got!" He leaned forward. His face was only inches from Pop Young's. It was seamed and hard-bitten and nerve-racked. But any man would be quivering if he wasn't used to space or the feel of one-sixth gravity on the Moon. He panted: "And get it straight! You try any tricks and we take off! We swing over your shack! The rocket-blast smashes it! We burn you down! Then we swing over the cable down to the mine and the rocket-flame melts it! You die and everybody in the mine besides! No tricks! We didn't come here for nothing!" He twitched all over. Then he struck cruelly again at Pop Young's face. He seemed filled with fury, at least partly hysterical. It was the tension that space-travel—then, at its beginning—produced. It was meaningless savagery due to terror. But, of course, Pop was helpless to resent it. There were no weapons on the Moon and the mention of Sattell's name showed the uselessness of bluff. He'd pictured the complete set-up by the edge of the Big Crack. Pop could do nothing. The red-headed man checked himself, panting. He drew back and slammed the inner lock-door. There was the sound of pumping. Pop put his helmet back on and sealed it. The outer door opened. 
Outrushing air tugged at Pop. After a second or two he went out and climbed down the welded-on ladder-bars to the ground. He headed back toward his shack. Somehow, the mention of Sattell had made his mind work better. It always did. He began painstakingly to put things together. The red-headed man knew the routine here in every detail. He knew Sattell. That part was simple. Sattell had planned this multi-million-dollar coup, as a man in prison might plan his break. The stripped interior of the ship identified it. It was one of the unsuccessful luxury-liners sold for scrap. Or perhaps it was stolen for the journey here. Sattell's associates had had to steal or somehow get the fuel, and somehow find a pilot. But there were diamonds worth at least five million dollars waiting for them, and the whole job might not have called for more than two men—with Sattell as a third. According to the economics of crime, it was feasible. Anyhow it was being done. Pop reached the dust-heap which was his shack and went in the air lock. Inside, he went to the vision-phone and called the mine-colony down in the Crack. He gave the message he'd been told to pass on. Sattell to come up, with what diamonds had been dug since the regular cannister was sent up for the Lunar City ship that would be due presently. Otherwise the ship on the landing strip would destroy shack and Pop and the colony together. "I'd guess," said Pop painstakingly, "that Sattell figured it out. He's probably got some sort of gun to keep you from holding him down there. But he won't know his friends are here—not right this minute he won't." A shaking voice asked questions from the vision-phone. "No," said Pop, "they'll do it anyhow. If we were able to tell about 'em, they'd be chased. But if I'm dead and the shacks smashed and the cable burnt through, they'll be back on Earth long before a new cable's been got and let down to you. So they'll do all they can no matter what I do." He added, "I wouldn't tell Sattell a thing about it, if I were you. It'll save trouble. Just let him keep on waiting for this to happen. It'll save you trouble." Another shaky question. "Me?" asked Pop. "Oh, I'm going to raise what hell I can. There's some stuff in that ship I want." He switched off the phone. He went over to his air apparatus. He took down the cannister of diamonds which were worth five millions or more back on Earth. He found a bucket. He dumped the diamonds casually into it. They floated downward with great deliberation and surged from side to side like a liquid when they stopped. One-sixth gravity. Pop regarded his drawings meditatively. A sketch of his wife as he now remembered her. It was very good to remember. A drawing of his two children, playing together. He looked forward to remembering much more about them. He grinned. "That stair-rail," he said in deep satisfaction. "That'll do it!" He tore bed linen from his bunk and worked on the emptied cannister. It was a double container with a thermware interior lining. Even on Earth newly-mined diamonds sometimes fly to pieces from internal stress. On the Moon, it was not desirable that diamonds be exposed to repeated violent changes of temperature. So a thermware-lined cannister kept them at mine-temperature once they were warmed to touchability. Pop packed the cotton cloth in the container. He hurried a little, because the men in the rocket were shaky and might not practice patience. He took a small emergency-lamp from his spare spacesuit. 
He carefully cracked its bulb, exposing the filament within. He put the lamp on top of the cotton and sprinkled magnesium marking-powder over everything. Then he went to the air-apparatus and took out a flask of the liquid oxygen used to keep his breathing-air in balance. He poured the frigid, pale-blue stuff into the cotton. He saturated it. All the inside of the shack was foggy when he finished. Then he pushed the cannister-top down. He breathed a sigh of relief when it was in place. He'd arranged for it to break a frozen-brittle switch as it descended. When it came off, the switch would light the lamp with its bare filament. There was powdered magnesium in contact with it and liquid oxygen all about. He went out of the shack by the air lock. On the way, thinking about Sattell, he suddenly recovered a completely new memory. On their first wedding anniversary, so long ago, he and his wife had gone out to dinner to celebrate. He remembered how she looked: the almost-smug joy they shared that they would be together for always, with one complete year for proof. Pop reflected hungrily that it was something else to be made permanent and inspected from time to time. But he wanted more than a drawing of this! He wanted to make the memory permanent and to extend it— If it had not been for his vacuum suit and the cannister he carried, Pop would have rubbed his hands. Tall, jagged crater-walls rose from the lunar plain. Monstrous, extended inky shadows stretched enormous distances, utterly black. The sun, like a glowing octopod, floated low at the edge of things and seemed to hate all creation. Pop reached the rocket. He climbed the welded ladder-rungs to the air lock. He closed the door. Air whined. His suit sagged against his body. He took off his helmet. When the red-headed man opened the inner door, the hand-weapon shook and trembled. Pop said calmly: "Now I've got to go handle the hoist, if Sattell's coming up from the mine. If I don't do it, he don't come up." The red-headed man snarled. But his eyes were on the cannister whose contents should weigh a hundred pounds on Earth. "Any tricks," he rasped, "and you know what happens!" "Yeah," said Pop. He stolidly put his helmet back on. But his eyes went past the red-headed man to the stair that wound down, inside the ship, from some compartment above. The stair-rail was pure, clear, water-white plastic, not less than three inches thick. There was a lot of it! The inner door closed. Pop opened the outer. Air rushed out. He climbed painstakingly down to the ground. He started back toward the shack. There was the most luridly bright of all possible flashes. There was no sound, of course. But something flamed very brightly, and the ground thumped under Pop Young's vacuum boots. He turned. The rocketship was still in the act of flying apart. It had been a splendid explosion. Of course cotton sheeting in liquid oxygen is not quite as good an explosive as carbon-black, which they used down in the mine. Even with magnesium powder to start the flame when a bare light-filament ignited it, the cannister-bomb hadn't equaled—say—T.N.T. But the ship had fuel on board for the trip back to Earth. And it blew, too. It would be minutes before all the fragments of the ship returned to the Moon's surface. On the Moon, things fall slowly. Pop didn't wait. He searched hopefully. Once a mass of steel plating fell only yards from him, but it did not interrupt his search. When he went into the shack, he grinned to himself. 
The call-light of the vision-phone flickered wildly. When he took off his helmet the bell clanged incessantly. He answered. A shaking voice from the mining-colony panted: "We felt a shock! What happened? What do we do?" "Don't do a thing," advised Pop. "It's all right. I blew up the ship and everything's all right. I wouldn't even mention it to Sattell if I were you." He grinned happily down at a section of plastic stair-rail he'd found not too far from where the ship exploded. When the man down in the mine cut off, Pop got out of his vacuum suit in a hurry. He placed the plastic zestfully on the table where he'd been restricted to drawing pictures of his wife and children in order to recover memories of them. He began to plan, gloatingly, the thing he would carve out of a four-inch section of the plastic. When it was carved, he'd paint it. While he worked, he'd think of Sattell, because that was the way to get back the missing portions of his life—the parts Sattell had managed to get away from him. He'd get back more than ever, now! He didn't wonder what he'd do if he ever remembered the crime Sattell had committed. He felt, somehow, that he wouldn't get that back until he'd recovered all the rest. Gloating, it was amusing to remember what people used to call such art-works as he planned, when carved by other lonely men in other faraway places. They called those sculptures scrimshaw. But they were a lot more than that! THE END Transcriber's Note: This etext was produced from Astounding Science Fiction September 1955. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
B. He is close to Sattell's location, which enhances his memories of his wife and children
What is the source of the CAIS dataset?
### Introduction Spoken Language Understanding (SLU) is a core component in dialogue systems. It typically aims to identify the intent and semantic constituents for a given utterance, which are referred to as intent detection and slot filling, respectively. Recent years have witnessed rapid developments in diverse deep learning models BIBREF0, BIBREF1 for SLU. To take full advantage of supervised signals of slots and intents, and share knowledge between them, most existing works apply joint models mainly based on CNNs BIBREF2, BIBREF3, RNNs BIBREF4, BIBREF5, and the asynchronous bi-model BIBREF6. Generally, these joint models encode words convolutionally or sequentially, and then aggregate hidden states into an utterance-level representation for intent prediction, without interactions between representations of slots and intents. Intuitively, slots and intents from similar fields tend to occur simultaneously, which can be observed from Figure FIGREF2 and Table TABREF3. Therefore, it is beneficial to generate the representations of slots and intents with guidance from each other. Some works explore enhancing the slot filling task unidirectionally with guidance from intent representations via gating mechanisms BIBREF7, BIBREF8, while the predictions of intents lack guidance from slots. Moreover, the capsule network with dynamic routing algorithms BIBREF9 is proposed to perform interactions in both directions. However, there are still two limitations in this model. One is that the information flows from words to slots, slots to intents and intents to words in a pipeline manner, which is to some extent limited in capturing complicated correlations among words, slots and intents. The other is that local context information, which has been shown to be highly useful for slot filling BIBREF10, is not explicitly modeled. In this paper, we address these issues and propose a novel $\mathbf {C}$ollaborative $\mathbf {M}$emory $\mathbf {N}$etwork, named CM-Net. The main idea is to directly capture semantic relationships among words, slots and intents, which is conducted simultaneously at each word position in a collaborative manner. Specifically, we alternately perform information exchange among the task-specific features retrieved from memories, local context representations and global sequential information via the well-designed block, named CM-block, which consists of three computational components: Deliberate Attention: Obtaining slot-specific and intent-specific representations from memories in a collaborative manner. Local Calculation: Updating local context representations with the guidance of the slot and intent representations retrieved in the previous Deliberate Attention. Global Recurrence: Generating specific (slot and intent) global sequential representations based on local context representations from the previous Local Calculation. The above components in each CM-block are executed consecutively and are responsible for encoding information from different perspectives. Finally, multiple CM-blocks are stacked together to construct our CM-Net. We first conduct experiments on two popular benchmarks, SNIPS BIBREF11 and ATIS BIBREF12, BIBREF13. Experimental results show that the CM-Net achieves state-of-the-art results in 3 of 4 criteria (e.g., intent detection accuracy on ATIS) on both benchmarks. Additionally, trials on our self-collected dataset, named CAIS, demonstrate the effectiveness and generalizability of the CM-Net.
Our main contributions are as follows: We propose a novel CM-Net for SLU, which explicitly captures semantic correlations among words, slots and intents in a collaborative manner, and incrementally enriches the specific features, local context representations and global sequential representations through stacked CM-blocks. Our CM-Net achieves state-of-the-art results on two major SLU benchmarks (ATIS and SNIPS) in most of the criteria. We contribute a new corpus, CAIS, with manual annotations of slot tags and intent labels to the research community. ### Background In principle, slot filling is treated as a sequence labeling task, and intent detection as a classification problem. Formally, given an utterance $X = \lbrace x_1, x_2, \cdots , x_N \rbrace $ with $N$ words and its corresponding slot tags $Y^{slot} = \lbrace y_1, y_2, \cdots , y_N \rbrace $, the slot filling task aims to learn a parameterized mapping function $f_{\theta } : X \rightarrow Y $ from input words to slot tags. Intent detection is designed to predict the intent label $\hat{y}^{int}$ for the entire utterance $X$ from the predefined label set $S^{int}$. Typically, the input utterance is first encoded into a sequence of distributed representations $\mathbf {X} = \lbrace \mathbf {x}_1, \mathbf {x}_2, \cdots , \mathbf {x}_N\rbrace $ by character-aware and pre-trained word embeddings. Afterwards, bidirectional RNNs are applied to encode the embeddings $\mathbf {X}$ into context-sensitive representations $\mathbf {H} = \lbrace \mathbf {h}_1, \mathbf {h}_2, \cdots , \mathbf {h}_N\rbrace $. An external CRF BIBREF14 layer is widely utilized to calculate conditional probabilities of slot tags: Here $\mathbf {Y}_x$ is the set of all possible sequences of tags, and $F(\cdot )$ is the score function calculated by: where $\mathbf {A}$ is the transition matrix in which $\mathbf {A}_{i,j}$ indicates the score of a transition from $i$ to $j$, and $\mathbf {P}$ is the score matrix output by the RNNs. $P_{i,j}$ indicates the score of the $j^{th}$ tag of the $i^{th}$ word in a sentence BIBREF15. At test time, the Viterbi algorithm BIBREF16 is used to search for the sequence of slot tags with the maximum score: As to the prediction of the intent, the word-level hidden states $\mathbf {H}$ are first summarized into an utterance-level representation $\mathbf {v}^{int}$ via mean pooling (or max pooling or self-attention, etc.): The most probable intent label $\hat{y}^{int}$ is predicted by softmax normalization over the intent label set: Generally, both tasks are trained jointly to minimize the sum of the cross-entropy losses of the individual tasks. Formally, the loss function of the joint model is computed as follows: where $y^{int}_i$ and $y^{slot}_{i,j}$ are gold labels, $\lambda $ is a hyperparameter, $|S^{int}|$ is the size of the intent label set, and similarly for $|S^{slot}|$ . ### CM-Net ::: Overview In this section, we start with a brief overview of our CM-Net and then proceed to introduce each module. As shown in Figure FIGREF16, the input utterance is first encoded with the Embedding Layer, then transformed by multiple CM-blocks with the assistance of slot and intent memories, and finally predictions are made in the Inference Layer. ### CM-Net ::: Embedding Layers ::: Pre-trained Word Embedding Pre-trained word embeddings have become a de facto standard in neural network architectures for various NLP tasks. We adopt the cased, 300d GloVe BIBREF17 to initialize word embeddings, and keep them frozen.
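To make the background formulation above concrete, the following is a schematic Python sketch of a standard linear-chain CRF score and of the joint slot/intent objective it describes. This is a hedged illustration rather than the authors' implementation: the brute-force partition function is only for toy-sized tag sets, and the additive form of $F(\cdot )$, the handling of $\lambda $ (lambda_), and all function names are assumptions consistent with the description of $\mathbf {A}$, $\mathbf {P}$ and $\lambda $.

```python
# Schematic sketch (assumptions noted above) of the CRF-based slot scoring and
# the joint slot/intent objective; not the authors' reference implementation.
from itertools import product
import numpy as np

def sequence_score(A, P, tags):
    """F(X, y): emission scores P[i, y_i] plus transition scores A[y_{i-1}, y_i]."""
    emit = sum(P[i, t] for i, t in enumerate(tags))
    trans = sum(A[tags[i - 1], tags[i]] for i in range(1, len(tags)))
    return emit + trans

def crf_nll(A, P, gold_tags):
    """-log p(y|X), normalizing over all tag sequences (brute force, toy sizes only)."""
    N, T = P.shape
    logZ = np.log(sum(np.exp(sequence_score(A, P, seq))
                      for seq in product(range(T), repeat=N)))
    return logZ - sequence_score(A, P, gold_tags)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def joint_loss(intent_logits, gold_intent, A, P, gold_tags, lambda_=1.0):
    """Sum of the intent cross-entropy and the slot CRF loss, weighted by lambda_."""
    intent_nll = -np.log(softmax(intent_logits)[gold_intent])
    return intent_nll + lambda_ * crf_nll(A, P, gold_tags)
```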
### CM-Net ::: Embedding Layers ::: Character-aware Word Embedding
It has been demonstrated that character-level information (e.g., capitalization and prefixes) BIBREF18 is crucial for sequence labeling. We use one layer of CNN followed by max pooling to generate character-aware word embeddings.

### CM-Net ::: CM-block
The CM-block is the core module of our CM-Net, and it is designed with three computational components: Deliberate Attention, Local Calculation and Global Recurrence.

### CM-Net ::: CM-block ::: Deliberate Attention
To fully model the semantic relations between slots and intents, we build the slot memory $\mathbf {M^{slot}} $ and the intent memory $\mathbf {M^{int}}$, and further devise a collaborative retrieval approach. The slot memory keeps $|S^{slot}|$ slot cells, which are randomly initialized and updated as model parameters; the intent memory is constructed analogously. At each word position, we take the hidden state $\mathbf {h}_t$ as the query, and obtain the slot feature $\mathbf {h}_t^{slot}$ and the intent feature $\mathbf {h}_t^{int}$ from the two memories by the deliberate attention mechanism, as illustrated below.

Specifically, for the slot feature $\mathbf {h}_t^{slot}$, we first obtain a rough intent representation $\widetilde{\mathbf {h}}_t^{int}$ by word-aware attention with the hidden state $\mathbf {h}_t$ over the intent memory $\mathbf {M^{int}}$, and then obtain the final slot feature $\mathbf {h}_t^{slot}$ by intent-aware attention over the slot memory $\mathbf {M^{slot}}$ with the intent-enhanced representation $[\mathbf {h}_t;\widetilde{\mathbf {h}}_t^{int}]$. Formally, $\widetilde{\mathbf {h}}_t^{int} = ATT(\mathbf {h}_t, \mathbf {M^{int}})$ and $\mathbf {h}_t^{slot} = ATT([\mathbf {h}_t;\widetilde{\mathbf {h}}_t^{int}], \mathbf {M^{slot}})$, where $ATT(\cdot )$ is the query function that returns the weighted sum of all cells $\mathbf {m}_i^{x}$ in memory $\mathbf {M}^{x}$ ($x \in \lbrace slot, int\rbrace $), with attention scores parameterized by the model parameters $\mathbf {u}$ and $\mathbf {W}$. We refer to the above two-round attention calculation as “deliberate attention”.

The intent representation $\mathbf {h}_t^{int}$ is computed by the deliberate attention in the symmetric way: a rough slot representation $\widetilde{\mathbf {h}}_t^{slot} = ATT(\mathbf {h}_t, \mathbf {M^{slot}})$ is retrieved first, followed by $\mathbf {h}_t^{int} = ATT([\mathbf {h}_t;\widetilde{\mathbf {h}}_t^{slot}], \mathbf {M^{int}})$. These two deliberate attentions are conducted simultaneously at each word position in such a collaborative manner, which guarantees adequate knowledge diffusion between slots and intents. The retrieved slot features $\mathbf {h}_t^{slot}$ and intent features $\mathbf {h}_t^{int}$ are utilized to provide guidance for the next local calculation layer.

### CM-Net ::: CM-block ::: Local Calculation
Local context information is highly useful for sequence modeling BIBREF19, BIBREF20. BIBREF21 propose the S-LSTM to encode both local and sentence-level information simultaneously, and it has been shown to be more powerful for text representation than conventional BiLSTMs. We extend the S-LSTM with the slot-specific features $\mathbf {h}_t^{slot}$ and intent-specific features $\mathbf {h}_t^{int}$ retrieved from the memories. Specifically, at each input position $t$, we take the local window context $\mathbf {\xi }_t$, the word embedding $\mathbf {x}_t$, the slot feature $\mathbf {h}_t^{slot}$ and the intent feature $\mathbf {h}_t^{int}$ as inputs to conduct the combinatorial calculation simultaneously.
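Before turning to the formal state-transition update below, the two-round retrieval described in the Deliberate Attention paragraph can be sketched as follows. This is a minimal sketch assuming PyTorch and an additive scoring function for $ATT(\cdot )$ (the exact parameterization with $\mathbf {u}$ and $\mathbf {W}$ is not spelled out here), not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdditiveAttention(nn.Module):
    """ATT(query, memory): weighted sum of memory cells, scored by u^T tanh(W [query; cell])."""

    def __init__(self, query_dim, mem_dim):
        super().__init__()
        self.W = nn.Linear(query_dim + mem_dim, mem_dim)
        self.u = nn.Linear(mem_dim, 1, bias=False)

    def forward(self, query, memory):
        # query: (batch, seq_len, query_dim); memory: (num_cells, mem_dim)
        B, T, _ = query.shape
        C = memory.size(0)
        mem = memory.view(1, 1, C, -1).expand(B, T, C, -1)                 # (B, T, C, mem_dim)
        q = query.unsqueeze(2).expand(-1, -1, C, -1)                       # (B, T, C, query_dim)
        scores = self.u(torch.tanh(self.W(torch.cat([q, mem], dim=-1))))   # (B, T, C, 1)
        weights = F.softmax(scores, dim=2)                                 # attention over memory cells
        return (weights * mem).sum(dim=2)                                  # (B, T, mem_dim)


class DeliberateAttention(nn.Module):
    """Two-round retrieval: a rough feature from the *other* memory refines the query for the *own* memory."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.first = AdditiveAttention(hidden_dim, hidden_dim)        # word-aware attention
        self.second = AdditiveAttention(2 * hidden_dim, hidden_dim)   # e.g. intent-aware attention

    def forward(self, h, own_memory, other_memory):
        rough = self.first(h, other_memory)                           # e.g. rough intent feature
        return self.second(torch.cat([h, rough], dim=-1), own_memory) # e.g. final slot feature


# usage: slot and intent features are retrieved symmetrically at every position in parallel
# (a real model would likely use separate parameter sets for the two branches)
deliberate = DeliberateAttention(hidden_dim=128)
slot_memory, intent_memory = torch.randn(72, 128), torch.randn(7, 128)
h = torch.randn(2, 10, 128)                                 # hidden states for a batch of 2 utterances
h_slot = deliberate(h, slot_memory, intent_memory)          # (2, 10, 128)
h_int = deliberate(h, intent_memory, slot_memory)           # (2, 10, 128)
```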
Formally, in the $l^{th}$ layer, the hidden state $\mathbf {h}_t$ is updated through an S-LSTM-style state transition, where $\mathbf {\xi }_t^l$ is the concatenation of the hidden states in a local window, $\mathbf {i}_t^l$, $\mathbf {f}_t^l$, $\mathbf {o}_t^l$, $\mathbf {l}_t^l$ and $\mathbf {r}_t^l$ are gates that control the information flows, and $\mathbf {W}_n^x$ $(x \in \lbrace i, o, f, l, r, u\rbrace , n \in \lbrace 1, 2, 3, 4\rbrace )$ are model parameters. More details about the state transition can be found in BIBREF21. In the first CM-block, the hidden state $\mathbf {h}_t$ is initialized with the corresponding word embedding. In the other CM-blocks, $\mathbf {h}_t$ is inherited from the output of the adjacent lower CM-block.

Through the above procedure, the hidden state at each word position is updated with abundant information from different perspectives, namely word embeddings, local contexts, and slot and intent representations. The local calculation layer in each CM-block proves highly useful for both tasks, and especially for the slot filling task, which will be validated in our experiments in Section SECREF46.

### CM-Net ::: CM-block ::: Global Recurrence
Bi-directional RNNs, especially BiLSTMs BIBREF22, encode both past and future information of a sentence and have become a dominant method in various sequence modeling tasks BIBREF23, BIBREF24. BiLSTMs can supplement the global sequential information that is insufficiently modeled by the preceding local calculation layer. We therefore apply an additional BiLSTM layer on top of the local calculation layer in each CM-block. By taking the slot- and intent-specific local context representations as inputs, we can obtain more specific global sequential representations. Formally, this layer takes the hidden state $\mathbf {h}_t^{l-1}$ inherited from the local calculation layer as input and conducts the standard BiLSTM recurrent steps. The output “states” of the BiLSTM are taken as the “states” input of the local calculation in the next CM-block. The global sequential information encoded by the BiLSTM is shown to be necessary and effective for both tasks in our experiments in Section SECREF46.

### CM-Net ::: Inference Layer
After multiple rounds of interaction among the local context representations, the global sequential information and the slot and intent features, we make predictions on top of the final CM-block. For the prediction of slots, we take the hidden states $\mathbf {H}$ along with the retrieved slot representations $\mathbf {H}^{slot}$ (both from the final CM-block) as input features, and then predict slot tags with a CRF in the same manner as described in Section SECREF2. For the prediction of the intent label, we first aggregate the hidden state $\mathbf {h}_t$ and the retrieved intent representation $\mathbf {h}_t^{int}$ at each word position (from the final CM-block as well) via mean pooling into the summarized vector $\mathbf {v}^{int}$, and then predict the intent by softmax classification consistently with Section SECREF2.

### Experiments ::: Datasets and Metrics
We evaluate our proposed CM-Net on three real-world datasets, whose statistics are listed in Table TABREF32.

### Experiments ::: Datasets and Metrics ::: ATIS
The Airline Travel Information Systems (ATIS) corpus BIBREF12 is the most widely used benchmark for SLU research. Note that the ATIS provides extra named entity features, which almost determine the slot tags.
These hand-crafted features are not generally available in open domains BIBREF25, BIBREF29; therefore, we train our model purely on the training set without additional hand-crafted features.

### Experiments ::: Datasets and Metrics ::: SNIPS
The SNIPS Natural Language Understanding benchmark BIBREF11 is collected in a crowdsourced fashion by Snips. The intents of this dataset are more balanced than those of the ATIS. We set aside another 700 utterances as the validation set, following previous works BIBREF7, BIBREF9.

### Experiments ::: Datasets and Metrics ::: CAIS
We collect utterances from the $\mathbf {C}$hinese $\mathbf {A}$rtificial $\mathbf {I}$ntelligence $\mathbf {S}$peakers (CAIS), and annotate them with slot tags and intent labels. The training, validation and test sets are split according to the distribution of intents, and detailed statistics are provided in the supplementary material. Since the utterances are collected from speaker systems in the real world, the intent labels are skewed toward the PlayMusic intent. We adopt the BIOES tagging scheme for slots instead of the BIO2 scheme used in the ATIS, since previous studies have highlighted meaningful improvements with this scheme BIBREF30 in the sequence labeling field.

### Experiments ::: Datasets and Metrics ::: Metrics
Slot filling is typically treated as a sequence labeling problem, and thus we use the conlleval script as the token-level $F_1$ metric. Intent detection is evaluated with classification accuracy. Notably, several utterances in the ATIS are tagged with more than one label. Following previous works BIBREF13, BIBREF25, we count an utterance as correctly classified if any ground-truth label is predicted.

### Experiments ::: Implementation Details
All trainable parameters in our model are initialized with the Xavier method BIBREF31. We apply dropout BIBREF32 to the embedding layer and the hidden states with a rate of 0.5. All models are optimized with the Adam optimizer BIBREF33 and gradient clipping of 3 BIBREF34. The initial learning rate $\alpha $ is set to 0.001 and decreases as training proceeds. We monitor the training process on the validation set and report the final result on the test set. A one-layer CNN with a filter size of 3 and max pooling is utilized to generate 100d character-aware word embeddings. The cased 300d GloVe embeddings are adopted to initialize word embeddings and kept fixed during training. In auxiliary experiments, the output hidden states of BERT are taken as additional word embeddings and kept fixed as well. We share the parameters of both memories with the parameter matrices in the corresponding softmax layers, which can be seen as introducing supervised signals into the memories to some extent. We tune the number of stacked layers (finally set to 3) and the loss weight $\lambda $ (finally set to 0.5), and empirically set the other hyper-parameters to the values listed in the supplementary material.

### Experiments ::: Main Results
The main results of our CM-Net on the SNIPS and ATIS are shown in Table TABREF33. Our CM-Net achieves state-of-the-art results on both datasets in terms of slot filling $F_1$ score and intent detection accuracy, except for the $F_1$ score on the ATIS. We conjecture that the named entity features in the ATIS have a great impact on the slot filling results, as discussed in Section SECREF34. Since the SNIPS is collected from multiple domains and has more balanced labels than the ATIS, the slot filling $F_1$ score on the SNIPS better demonstrates the superiority of our CM-Net.
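As a concrete illustration of the protocol in the Metrics subsection above, both measures can be computed as sketched below (assuming the seqeval package as a stand-in for the conlleval script; the lenient intent accuracy counts a prediction as correct if it matches any of the gold labels of the utterance, as in the ATIS convention described above).

```python
from seqeval.metrics import f1_score   # assumed stand-in for the conlleval script


def slot_f1(gold_tags, pred_tags):
    """Span-level slot F1 over BIO/BIOES-style tag sequences (lists of tag lists, one per utterance)."""
    return f1_score(gold_tags, pred_tags)


def intent_accuracy(gold_label_sets, pred_labels):
    """Lenient intent accuracy: correct if the predicted label is among the (possibly multiple) gold labels."""
    correct = sum(pred in gold for gold, pred in zip(gold_label_sets, pred_labels))
    return correct / len(gold_label_sets)


# toy example: one utterance with two gold intents, one with a single gold intent
print(slot_f1([["B-artist", "I-artist", "O"]], [["B-artist", "I-artist", "O"]]))     # 1.0
print(intent_accuracy([{"atis_flight", "atis_airfare"}, {"PlayMusic"}],
                      ["atis_airfare", "AddToPlaylist"]))                            # 0.5
```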
It is noteworthy that the CM-Net achieves results comparable to those of models that exploit additional language models BIBREF27, BIBREF28. For a relatively fair comparison with those models, we conduct auxiliary experiments leveraging the well-known BERT BIBREF35 as an external resource and report the details in Section SECREF48.

### Analysis
Since the SNIPS corpus is collected from multiple domains and its label distribution is more balanced than that of the ATIS, we choose the SNIPS to elucidate the properties of our CM-Net and conduct several additional experiments on it.

### Analysis ::: Whether Memories Promote Each Other?
In the CM-Net, the deliberate attention mechanism is proposed in a collaborative manner to perform information exchange between slots and intents. We conduct experiments to verify whether this kind of knowledge diffusion between the two memories allows them to promote each other. More specifically, in each experimental setup we remove one unidirectional diffusion (e.g., from slot to intent) or both. The results are illustrated in Figure FIGREF43.

We observe obvious drops on both tasks when both directions of knowledge diffusion are removed (CM-Net vs. “neither”). For the slot filling task (left part of Figure FIGREF43), the $F_1$ scores decrease slightly when the knowledge from slot to intent is blocked (CM-Net vs. “no slot2int”), and a more evident drop occurs when the knowledge from intent to slot is blocked (CM-Net vs. “no int2slot”). Similar observations can be made for the intent detection task (right part of Figure FIGREF43). In conclusion, the bidirectional knowledge diffusion between slots and intents is necessary and effective for promoting both tasks.

### Analysis ::: Ablation Experiments
We conduct ablation experiments to investigate the impact of the various components of our CM-Net. In particular, we remove one component among the slot memory, the intent memory, the local calculation and the global recurrence. The results of the different combinations are presented in Table TABREF44.

Once the slot memory and its corresponding interactions with other components are removed, the scores on both tasks decrease to some extent, and a more obvious decline occurs for slot filling (row 1 vs. row 0), which is consistent with the conclusion of Section SECREF45. Similar observations can be made for the intent memory (row 2). The local calculation layer is designed to capture better local context representations; it has an evident impact on slot filling and a smaller effect on intent detection (row 3 vs. row 0). The opposite is observed for the global recurrence, which is supposed to model global sequential information and thus has a larger effect on intent detection (row 4 vs. row 0).

### Analysis ::: Effects of Pre-trained Language Models
Recently, there has been a growing body of work exploring neural language models trained on massive corpora to learn contextual representations (e.g., BERT BIBREF35 and ELMo). Inspired by the effectiveness of language model embeddings, we conduct experiments leveraging BERT as an additional feature. The results in Table TABREF47 show that we establish new state-of-the-art results on both tasks of the SNIPS.

### Analysis ::: Evaluation on the CAIS
We conduct experiments on our self-collected CAIS to evaluate the generalizability of the CM-Net in a different language.
We apply two baseline models for comparison: one is the popular BiLSTM + CRF architecture BIBREF36 for sequence labeling tasks, and the other is the more powerful sentence-state LSTM BIBREF21. The results listed in Table TABREF50 demonstrate the generalizability and effectiveness of our CM-Net when handling various domains and different languages.

### Related Work ::: Memory Network
The memory network is a general machine learning framework introduced by BIBREF37, which has been shown effective in question answering BIBREF37, BIBREF38, machine translation BIBREF39, BIBREF40, aspect-level sentiment classification BIBREF41, etc. For spoken language understanding, BIBREF42 introduce memory mechanisms to encode historical utterances. In this paper, we propose two memories to explicitly capture the semantic correlations between slots and the intent in a given utterance, and devise a novel collaborative retrieval approach.

### Related Work ::: Interactions between slots and intents
Considering the semantic proximity between slots and intents, some works propose to enhance the slot filling task unidirectionally with the guidance of intent representations via gating mechanisms BIBREF7, BIBREF8. Intuitively, the slot representations are also instructive for the intent detection task, and thus bidirectional interactions between slots and intents are beneficial to both. BIBREF9 propose a hierarchical capsule network to perform interactions from words to slots, slots to intents and intents to words in a pipeline manner, which is relatively limited in capturing the complicated correlations among them. In our CM-Net, information exchange is performed simultaneously, with knowledge diffusion in both directions. The experiments demonstrate the superiority of our CM-Net in capturing the semantic correlations between slots and intents.

### Related Work ::: Sentence-State LSTM
BIBREF21 propose a novel graph RNN named S-LSTM, which models the sentence-level state and the word-level states simultaneously. Inspired by the new perspective of state transition in the S-LSTM, we further extend it with task-specific (i.e., slot and intent) representations via our collaborative memories. In addition, the global information in the S-LSTM is modeled by aggregating the local features with gating mechanisms, which may lose sight of the sequential information of the whole sentence. Therefore, we apply external BiLSTMs to supply global sequential features, which are shown to be highly necessary for both tasks in our experiments.

### Conclusion
We propose a novel $\mathbf {C}$ollaborative $\mathbf {M}$emory $\mathbf {N}$etwork (CM-Net) for jointly modeling slot filling and intent detection. The CM-Net is able to explicitly capture the semantic correlations among words, slots and intents in a collaborative manner, and incrementally enriches the information flows with local context and global sequential information. Experiments on two standard benchmarks and our CAIS corpus demonstrate the effectiveness and generalizability of our proposed CM-Net. In addition, we contribute the new corpus (CAIS) to the research community.

### Acknowledgments
Liu, Chen and Xu are supported by the National Natural Science Foundation of China (Contracts 61370130, 61976015, 61473294 and 61876198), the Beijing Municipal Natural Science Foundation (Contract 4172047), and the International Science and Technology Cooperation Program of the Ministry of Science and Technology (K11F100010).
We sincerely thank the anonymous reviewers for their thorough reviews and valuable suggestions.

Figure 1: Statistical association of slot tags (on the left) and intent labels (on the right) in the SNIPS, where colors indicate different intents and the thicknesses of lines indicate proportions.

Table 1: Examples in SNIPS with annotations of the intent label for the utterance and slot tags for some of the words.

Figure 2: Overview of our proposed CM-Net. The input utterance is first encoded with the Embedding Layer (bottom), then transformed by multiple CM-blocks with the assistance of both slot and intent memories (on both sides), and finally we make predictions of slots and the intent in the Inference Layer (top).

Figure 3: The internal structure of our CM-Block, which is composed of deliberate attention, local calculation and global recurrence, respectively.

Table 2: Dataset statistics.

Table 3: Results on the test sets of the SNIPS and ATIS, where our CM-Net achieves state-of-the-art performance in most cases. “*” indicates that the results are retrieved from Slot-Gated (Goo et al., 2018), and “†” indicates our implementation.

Figure 4: Investigations of the collaborative retrieval approach on slot filling (on the left) and intent detection (on the right), where “no slot2int” indicates removing the slot-aware attention for the intent representation, and similarly for “no int2slot” and “neither”.

Table 4: Ablation experiments on the SNIPS to investigate the impacts of various components, where “- slot memory” indicates removing the slot memory and its interactions with other components correspondingly, and similarly for the other options.

Table 5: Results on the SNIPS benchmark with the assistance of a pre-trained language model, where we establish new state-of-the-art results on the SNIPS.

Table 6: Results on our CAIS dataset, where “†” indicates our implementation of the S-LSTM.
the $\mathbf {C}$hinese $\mathbf {A}$rtificial $\mathbf {I}$ntelligence $\mathbf {S}$peakers (CAIS)
Regarding her pancreatic condition, which procedure was Mrs. Anderson subjected to on 09/29/21? Choose the correct answer from the following options: A. Surgical port placement B. Endoscopic ultrasound-guided FNA C. Radiation therapy D. Neoadjuvant chemotherapy with FOLFIRINOX E. Thyroidectomy
### Patient Report 0

**Dear colleague, **

We report on your outpatient treatment on 09/01/2010.

Diagnoses: Extensor tendon rupture, D3 of the right foot.

Anamnesis: The patient presented to our surgical outpatient clinic with a cut wound in the area of the MTP joint of D3 of the right foot. A large shard of a broken vase had fallen onto her toe with great force.

Findings: Right foot, D3: approximately 1 cm long laceration in the area of the MTP joint. Tenderness to pressure. Flexion unrestricted, extension not possible.

X-ray: X-ray of D3 of the right foot from 09/01/2010: No evidence of a bony lesion, regular joint position.

Therapy: Inspection, clinical examination, radiographic control, primary tendon suture and fitting of a dorsal splint. Tetanus booster.

Medication: Mono-Embolex 3000 IE s.c. (Certoparin).

Procedure: We recommend that the patient wear the dorsal splint until suture removal in 12-14 days, followed by further treatment with a vacuum orthosis for another 4 weeks. We ask for presentation in our accident surgery consultation on September 14^th^, 2010. In case of persistence or progression of complaints, we ask for an immediate re-presentation at our surgical clinic. If you have any questions, please do not hesitate to contact us.

Best regards

### Patient Report 1

**Dear colleague, **

We report to you about our mutual patient, Mrs. Jill Anderson, born on 06/07/1975, who was in our outpatient treatment on 07/08/2014.

Diagnoses: Fracture of the tuberculum majus humeri. Luxation of the shoulder joint.

Anamnesis: Fell onto the left arm while falling down a hill during a hike. No fall on the head. Tetanus vaccination coverage is present according to the patient.

Findings: Multiple abrasions: left forearm, left pelvis and left tibia. Dislocation of the shoulder. Motor function of the forearm and hand not limited. Peripheral circulation, motor function, and sensitivity intact.

X-ray: Left shoulder in two planes from 07/08/2014: Anteroinferior shoulder dislocation with a dislocated tuberculum majus fracture and a possible subcapital fracture line.

X-ray: Shoulder in 2 planes after reduction: Reduction of the shoulder joint. The tuberculum majus remains displaced by more than 3 mm.

**Therapy**: Reduction under **Midazolam** and **Fentanyl**.

**Medication**: **Lovenox 40mg s.c.** daily. **Ibuprofen 400mg** 1-1-1. Pain management as needed.

**Procedure**: Due to sedation, the patient could not yet be counseled and consented for surgery. Surgery is planned for today or tomorrow using a proximal humerus internal locking system (PHILOS) or screw osteosynthesis. The patient is to remain fasting.

**Other Notes**: Inpatient admission.

### Patient Report 2

**Dear colleague, **

We report to you about our mutual patient, Mrs. Jill Anderson, who was in our outpatient treatment on 02/01/2015.

Diagnoses: Ankle sprain on the right side.

Case history: The patient presents to the surgical emergency department with a right ankle sprain after tripping on the stairs. The fall occurred yesterday evening. The ankle was cooled and immobilized immediately thereafter.

Findings: Right foot: Swelling and pressure pain over the anterior fibulotalar ligament. No pressure pain over the syndesmosis, outer ankle + fibular head, inner ankle, Achilles tendon or tarsus, and none with midfoot compression. Limited mobility due to pain. Toe mobility free, no pain over the base of the fifth toe.

X-ray: X-ray of the right ankle in two planes dated 02/01/2015: No evidence of a fresh fracture.

Procedure: The following procedure was discussed with the patient:
-Cooling, resting, elevation and immobilization in the splint for a total of 6 weeks.
-Pain medication: Ibuprofen 400mg 1-1-1-1 under stomach protection with Nexium 20mg 1-0-0 In case of persistence of symptoms, magnetic resonance imaging is recommended. Presentation with the findings to a resident orthopedist. ### Patient Report 3 **Dear colleague, ** we report on Mrs. Anderson, Jill, born 06/07/1975, who was in our inpatient treatment from 09/28/2021 to 10/03/2021 Diagnosis: Suspected pancreatic carcinoma Other diseases and previous operations: Status post thyroidectomy 2008 Fracture tuberculum majus humeri 2014 Current complaints: The patient presented as an elective admission for ERCP and EUS puncture for pancreatic head space involvement. She reported stool irregularities with steatorrhea and acholic stool beginning in July 2021. Weight loss of approximately 3kg. No bleeding stigmata. Micturition complaints are denied. Urine color: dark yellow. The patient first noticed scleral and cutaneous icterus in August 2021. No other hepatic skin signs. Patient reported mild pain 1/10 in right upper quadrant. CT of the chest and abdomen on 09/28/2021 showed a mass in the pancreatic head with contact with the SMV (approximately 90 degrees) and suspicion of lymph node metastasis dorsal adherent to the SMA. Pronounced intra or extrahepatic cholestasis. Congested pancreatic duct. Also showed suspicious locoregional lymph nodes, especially in the interaortocaval space. No evidence of distant metastases. Alcohol Average consumption: 0.20L/day (wine) Smoking status: Some days Consumption: 0.20 packs/day Smoking Years: 30.00; Pack Years: 6.00 Laboratory tests: Blood group & Rhesus factor Rh factor + AB0 blood group: B Family history Patient's mother died of breast cancer Occupational history: Consultant Physical examination: Fully oriented, neurologically unaffected. Normal general condition and nutritional status Heart: rhythmic, normofrequency, no heart murmurs. Lungs: vesicular breath sounds bilaterally. Abdomen: soft, vivid bowel sounds over all four quadrants. Negative Murphy\'s sign. Liver and spleen not enlarged palpable. Lymph nodes: unremarkable Scleral and cutaneous icterus. Mild skin itching. No other hepatic skin signs. ### Patient Report 4 **Dear colleague, ** We report on Mrs. Jill Anderson, born born 06/07/1975, who was in our inpatient treatment from 10/09/2021 to 10/30/2021. **Diagnosis:** High-grade suspicious for locally advanced pancreatic cancer. **-CT of chest/abdomen/pelvis**: Mass in the head of the pancreas with involvement of the SMV (approx. 90 degrees) and suspicious for lymph node metastasis adjacent to the SMA. Prominent intra- or extrahepatic bile duct dilation. Dilated pancreatic duct. Suspicious regional lymph nodes, notably in the interaortocaval region. No evidence of distant metastasis. **-Endoscopic ultrasound-guided FNA (Fine Needle Aspiration)** on 09/29/21. **-ERCP (Endoscopic Retrograde Cholangiopancreatography)** and metal stent placement, 10 mm x 60 mm, on 09/29/21. -Tumor board discussion on 09/30/21: Port placement recommended, neoadjuvant chemotherapy with FOLFIRINOX proposed. Medical history: Mrs. Anderson was admitted to the hospital on 09/29/21 for ERCP and endoscopic ultrasound-guided biopsy due to an unclear mass in the head of the pancreas. She reported changes in bowel habits with fatty stools and pale stools starting in July 2021, and has lost approximately 3 kg since then. She denied any signs of bleeding. She had no urinary symptoms but did note that her urine had been darker than usual. 
In August 2021, she first noticed yellowing of the eyes and skin. The CT scan of the chest and abdomen performed on 09/28/21 revealed a mass in the pancreatic head in contact with the SMV (approx. 90 degrees) and suspected lymph node metastasis close to the SMA. Additionally, there was significant intra- or extrahepatic bile duct dilation and a dilated pancreatic duct. Suspicious regional lymph nodes were also noted, particularly in the area between the aorta and vena cava. No distant metastases were found. She was admitted to our gastroenterology ward for further evaluation of the pancreatic mass. Upon admission, she reported only mild pain in the right upper abdomen (pain scale 2/10). Family history: Her mother passed away from breast cancer. Physical examination on admission: Appearance: Alert and oriented, neurologically intact. Heart: Regular rhythm, normal rate, no murmurs. Lungs: Clear breath sounds in both lungs. Abdomen: Soft, active bowel sounds in all quadrants. No tenderness. Liver and spleen not palpable. Lymph nodes: Not enlarged. Skin: Jaundice present in the eyes and skin, slight itching. No other liver-related skin changes. Radiology **Findings:** **CT Chest/Abdomen/Pelvis with contrast on 09/28/21:** Technique: After uneventful IV contrast injection, multi-slice spiral CT was performed through the upper abdomen during arterial and parenchymal phases and through the chest, abdomen, and pelvis during venous phase. Oral contrast was also administered. Thin-section, coronal, and sagittal reconstructions were done. Thorax: The soft tissues of the neck appear symmetric. Heart and mediastinum in midline position. No enlarged lymph nodes in mediastinum or axilla. A calcified granuloma is seen in the right lower lung lobe; no suspicious nodules or signs of inflammation. No fluid or air in the pleural space. Abdomen: A low-density mass is seen in the pancreatic head, measuring about 37 x 26 mm. The mass is in contact with the superior mesenteric artery (\<180°) and could represent lymph node metastasis. It is also in contact with the superior mesenteric vein (\<180°) and the venous confluence. There are some larger but not abnormally large lymph nodes between the aorta and vena cava, as well as other suspicious regional lymph nodes. Significant dilation of both intra- and extrahepatic bile ducts is noted. The pancreatic duct is dilated to about 5 mm. The liver appears normal without any suspicious lesions and shows signs of fatty infiltration. The hepatic and portal veins appear normal. Spleen appears normal; its vein is not involved. The left adrenal gland is slightly enlarged while the right is normal. Kidneys show uniform contrast uptake. No urinary retention. The contrast passes normally through the small intestine after oral administration. Uterus and its appendages appear normal. No free air or fluid inside the abdomen. Bones: No signs of destructive lesions. Mild degenerative changes are seen in the lower lumbar spine. Assessment: -Mass in the pancreatic head with contact to the SMV (approximately 90 degrees) and suspected lymph node metastasis near the SMA. There is significant dilation of the intra- or extrahepatic bile ducts and the pancreatic duct. -Suspicious regional lymph nodes, especially between the aorta and vena cava. -No distant metastases. **Ultrasound/Endoscopy:** Endoscopic Ultrasound (EUS) on 09/29/21: Procedure: Biopsy with a 22G needle was performed on an approximately 3 cm x 3 cm mass in the pancreatic head. No obvious bleeding was seen post-procedure. 
Histopathological examination is pending. Assessment: Biopsy of pancreatic head, awaiting histology results. **ERCP on 09/29/21:** Procedure: Fluoroscopy time: 17.7 minutes. Indication: ERCP/Stenting. The papilla was initially difficult to visualize due to a long mucosal impression/swelling (possible tumor). Initially, only the pancreatic duct was visualized with contrast. Afterward, the bile duct was probed and dark bile was extracted for microbial testing. The contrast image revealed a significant distal bile duct narrowing of about 2.8 cm length with extrahepatic bile duct dilation. After an endoscopic papillotomy (EPT) of 5 mm, a plastic stent with an inner diameter of 8.5 mm was placed through the narrow passage, and the bile duct was emptied. Assessment: Successful ERCP with stenting of bile duct. Clear signs of tumor growth/narrowing in the distal bile duct. Awaiting microbial results and histopathology results from the extracted bile. Treatment: Based on the initial findings, Mrs. Anderson was started on pain management with acetaminophen and was scheduled for an ERCP and endoscopic ultrasound-guided biopsy. The ERCP and stenting of the bile duct were successful, and she is currently awaiting histopathological examination results from the biopsy and microbial testing results from the bile. Gastrointestinal Tumor Board of 09/30/2021. Meeting Occasion: Pancreatic head carcinoma under evaluation. CT: Defined mass in the pancreatic head with contact to the SMV (approx. 90 degrees) and under evaluation for lymph node metastasis dorsally adherent to the SMA. Pronounced intra- or extrahepatic bile duct dilation. Dilated pancreatic duct. -Suspected locoregional lymph nodes especially between aorta and vena cava. -No evidence of distant metastases. MR liver (external): -No liver metastases. Previous therapy: -ERCP/Stenting. Question: -Neoadjuvant chemo with FOLFIRINOX? Consensus decision: -CT: Pancreatic head tumor with contact to SMA \<180° and SMV, contact to abdominal aorta, bile duct dilation. MR: No liver metastases. Pancreatic histology: -pending-. Consensus: -Surgical port placement, -wait for final histology, -intended neoadjuvant chemotherapy with FOLFIRINOX, -Follow-up after 4 cycles. Pathology findings as of 09/30/2021 Internal Pathology Report: Clinical information/question: FNA biopsy for pancreatic head carcinoma. Macroscopic Description: FNA: Fixed. Multiple fibrous tissue particles up to 2.2 cm in size. Entirely embedded. Processing: One block, H&E staining, PAS staining, serial sections. Microscopic Description: Histologically, multiple particles of columnar epithelium are present, some with notable cribriform architecture. The nuclei within are irregularly enlarged without discernible polarity. In the attached fibrin/blood, individual cells with enlarged, irregular nuclei are also observed. No clear stromal relationship is identified. Critical Findings Report: FNA: Segments of atypical glandular cell clusters, at least pancreatic intraepithelial neoplasia with low-grade dysplasia. Corresponding invasive growth can neither be confirmed nor ruled out with the current sample. For quality assurance, the case was reviewed by a pathology specialist. Expected follow-up: Mrs. Anderson is expected to follow up with her gastroenterologist and the multidisciplinary team for her biopsy results, and the potential treatment plan will be discussed after the results are available. 
Depending on the biopsy results, she may need further imaging, surgery, radiation, chemotherapy, or targeted therapies. Continuous monitoring of her jaundice, abdominal pain, and bile duct function will be critical. Based on this information, Mrs. Anderson has a mass in the pancreatic head with suspected metastatic regional lymph nodes. The management and prognosis for Mrs. Anderson will largely depend on the results of the histopathological examination and staging of the tumor. If it is pancreatic cancer, early diagnosis and treatment are crucial for a better outcome. The multidisciplinary team will discuss the best course of action for her treatment after the results are obtained. ### Patient Report 5 **Dear colleague, ** We are updating you on Mrs. Jill Anderson, who was under our outpatient care on October 4th, 2021. **Outpatient Treatment:** **Diagnoses:** Recommendation for neoadjuvant chemotherapy with FOLFIRINOX for advanced pancreatic cancer (Dated 10/21) Exocrine pancreatic dysfunction since around 07/21. Prior occurrences on 02/21 and 2020. **CT Scan of the chest, abdomen, and pelvis** on September 28, 2021: **Thorax:** Symmetrical imaging of neck soft tissues. Cardiomediastinum is centralized. There is no sign of mediastinal, hilar, or axillary lymphadenopathy. Calcified granuloma noted in the right lower lobe, and no concerning rounded objects or inflammatory infiltrates. No fluid in the pleural cavity or pneumothorax. **Abdomen:** Hypodense mass in the head of the pancreas measuring approximately 34 x 28 mm. A secondary finding touching the superior mesenteric artery (\< 180°). Possible lymph node metastasis. Contact with the superior mesenteric vein (\<180°) and venous confluence. Noticeable, yet not pathologically enlarged lymph nodes in the interaortocaval space and other regional suspicious lymph nodes. Significant intra- and extrahepatic bile duct blockage. The pancreatic duct is dilated up to around 5 mm. The liver is consistent with no signs of suspicious lesions and shows fatty infiltration. Liver and portal veins are well perfused. The spleen appears normal with its vein not infiltrated. The left adrenal gland appears enlarged, while the right is slim. Kidney tissue displays even contrast. No urinary retention observed. Post oral contrast, the contrast agent passed regularly through the small intestine. Both the uterus and adnexa appear normal. No free air or fluid present in the abdomen. **Skeleton:** No osteodestructive lesions. Mild degenerative changes with arthrosis of the facet joints in the lower back. **Assessment:** -Mass in the head of the pancreas touching the superior mesenteric vein (approx 90 degrees) and possible lymph node metastasis adhering dorsally to the superior mesenteric artery. Significant bile duct blockage. Dilated pancreatic duct. -Suspicious regional lymph nodes, especially interaortocaval. -No distant metastases found. **GI Tumor Board** on September 30, 2021: **CT:** Tumor in the pancreatic head with contacts noted. **MR:** No liver metastases. **Pancreatic histology:** Pending. **Consensus:** Await final pathology. Neoadjuvant-intended chemotherapy with FOLFIRINOX. Review after 4 cycles. **Summary:** Mrs. Anderson was referred to us by her primary care physician following the discovery of a tumor in the head of the pancreas through an ultrasound. She has been experiencing unexplained diarrhea for approximately 3 months, sometimes with an oily appearance. 
She exhibited jaundice noticeable for about a week without any itching, and an MRI was conducted. Given the suspicion of a pancreatic head cancer, we proceeded with CT staging. This identified an advanced pancreatic cancer with specific contacts. MRI did not reveal liver metastases. The imaging did show bile duct blockage consistent with her jaundice symptom. She was admitted for an endosonographic biopsy of the pancreatic tumor and ERCP/stenting. The biopsy identified dysplastic cells. No invasion was observed due to the absence of a stromal component. A metal stent was successfully inserted. After reviewing the findings in our tumor board, we recommended neoadjuvant chemotherapy with FOLFIRINOX. We scheduled her for a port implant, and a DPD test is currently underway. Chemotherapy will begin on October 14, with the first review scheduled after 4 cycles. Please reach out if you have any questions. If her symptoms persist or worsen, we advise an immediate revisit. For any emergencies outside regular office hours, she can seek medical attention at our emergency care unit. Best regards, ### Patient Report 6 **Dear colleague, ** We are writing to update you on Ms. Jill Anderson, who visited our day care center on December 22, 2021, for a partial inpatient treatment. Diagnosis: -Locally advanced pancreatic cancer recommended for neoadjuvant chemotherapy with FOLFIRINOX. -Exocrine pancreatic insufficiency since around July 2021. -Previous incidents in February 2021 and 2020. Past treatments: -Diagnosis of locally advanced pancreatic head cancer in September 2021. -4 cycles of FOLFIRINOX neoadjuvant were intended. CT Scan: GI Tumor Board Review: Summary: Mrs. Anderson had a CT follow-up while on FOLFIRINOX treatment. In case her symptoms persist or worsen, we advise an immediate consultation. If outside regular business hours, she can seek emergency care at our emergency medical unit. Best regards, ### Patient Report 7 **Dear colleague, ** Updating you about Mrs. Jill Anderson, who visited our surgical clinic on December 25, 2021. Diagnosis: Potentially resectable pancreatic head cancer. CT Scan: -Progressive tumor growth with significant contact to the celiac trunk and the superior mesenteric artery. Direct contact with the aorta beneath. -Progressive, suspicious lymph nodes around the aorta, but no clear distant metastases. -External MR for liver showed no liver metastases. Medical History: -ERCP/Stenting for bile duct blockage in 09/2021. -4 cycles of FOLFIRINOX neoadjuvant from November to December 15, 2021. -Encountered complications resulting in prolonged hospital stay. -Received 3 Covid-19 vaccinations, last one in May 2021 and recovered from the virus on August 14, 2021. -Exocrine pancreatic insufficiency. Physical stats: 65 kg (143 lbs), 176 cm (5\'9\"). CT consensus: -Primary tumor has reduced in size with decreased contact with the aorta. New tumor extension towards the celiac trunk. No distant metastases found. -MR showed no liver metastases. -Tumor marker Ca19-9 levels: 525 U/mL (previously 575 U/mL in September and 380 U/mL in November). Recommendation: Exploratory surgery and potential pancreatic head resection. Procedure: We discussed with the patient about undergoing an exploration with a possible Whipple\'s procedure. The patient is scheduled to meet the doctor today for lab work (Hemoglobin and white blood cell count). A prescription for pantoprazole was provided. Prehabilitation Recommendations: -Individualized strength training and aerobic exercises. 
-Lung function improvement exercises using Triflow, three times a day. -Consider psycho-oncological support through primary care. -Nutritional guidance, potential high-protein and calorie-dense diet, supplemental nutrition through a port, and intake of creon and pantoprazole. The patient is scheduled for outpatient preoperative preparation on January 13, 2022, at 10:00 AM. The surgical procedure is planned for January 15th. Eliquis needs to be stopped 48 hours before the surgery. Warm regards, **Surgery Report:** Diagnosis: Locally advanced pancreatic head cancer post 4 cycles of FOLFIRINOX. Procedure: Exploratory laparotomy, adhesion removal, pancreatic head and vascular visualization, biopsy of distal mesenteric root area, surgery halted due to positive frozen section results, gallbladder removal, catheter placement, and 2 drains. Report: Mrs. Anderson has a pancreatic head cancer and had received 4 cycles of FOLFIRINOX neoadjuvant therapy. The surgery involved a detailed abdominal exploration which did not reveal any liver metastases or peritoneal cancer spread. However, a hard nodule was found away from the head of the pancreas in the peripheral mesenteric root, from which a biopsy was taken. Results showed adenocarcinoma infiltrates, leading to the surgery\'s termination. An additional gallbladder removal was performed due to its congested appearance. The surgical procedure concluded with no complications. **Histopathological Report:** Further immunohistochemical tests were performed which indicate the presence of a pancreatobiliary primary cancer. Other findings from the gallbladder showed signs of chronic cholecystitis. GI Tumor Board Review on January 9th, 2022: Discussion focused on Mrs. Anderson's locally advanced pancreatic head cancer, her exploratory laparotomy, and the halted surgery due to positive frozen section results. The CT scan indicated the progression of her tumor, but no distant metastases or liver metastases were found. The question posed to the board concerns the best subsequent procedure to follow. ### Patient Report 8 **Dear colleague, ** We are providing an update on Mrs. Jill Anderson, who was in our outpatient care on 11/05/2022: **Outpatient treatment**: Diagnosis: Progressive tumor disease under gemcitabine/nab-paclitaxel for pancreatic head carcinoma (Date of onset 09/22). 01/17/22 Surgery: Exploratory laparotomy, adhesiolysis, visualization of the pancreatic head and vascular structures, biopsy near the distal mesenteric root. Surgery was stopped due to positive frozen section results; gallbladder removal. 09/21 ERCP/Stenting: Metal stent insertion. Diarrhea likely from exocrine pancreatic insufficiency since around 07/21. Prior diagnosis: Locally advanced pancreatic head carcinoma as of 09/21. Clinical presentation: Chronic diarrhea due to exocrine pancreatic insufficiency. CT: Pancreatic head carcinoma, borderline resectable. MRI of liver: No liver metastases. TM Ca19-9: 587 U/mL. ERCP/Stenting: Metal stent in the bile duct. EUS biopsy: PanIN with low-grade dysplasia. GI tumor board: Proposed neoadjuvant chemotherapy. From 10/21 to 12/21: 4 cycles of FOLFIRINOX (neoadjuvant). Hospitalized for: Anemia, dehydration, and COVID. 12/21 CT: Mixed response, primary tumor site, lymph node metastasis. GI tumor board: Recommendation for exploratory surgery/resection. 01/12/2021: Surgery: Evidence of adenocarcinoma near distal mesenteric root. Surgery was discontinued. GI tumor board: Chemotherapy change recommendation. 
02/22 CT: Progression at the primary tumor site with increased contact to the SMA; lymph node metastasis. From 02/22 to 06/22: 4 cycles of gemcitabine/nab-paclitaxel. 05/22 TM Ca19-9: 224 U/mL. 1. Concomitant PRRT therapy: 02/22: 7.9 GBq Lutetium-177 FAP-3940. 04/22: 8.5 GBq Lutetium-177 FAP-3940. 06/22: 8.4 GBq Lutetium-177 FAP-3940. 07/22: CT: Progression of primary tumor with encasement of AMS; suspected liver metastases. TM: Ca19-9: 422 U/mL. Recommendation: Switch to the NAPOLI regimen and perform diagnostic panel sequencing. **Summary**: Mrs. Anderson visited with her sister and friend to discuss recent CT results. With advanced pancreatic cancer and a prior surgery in 01/22, she has been on gemcitabine/nab-paclitaxel and concurrent PRRT with lutetium-177 FAP since 02/22. The latest CT indicates tumor progression and potential liver metastases. We have recommended a change in chemotherapy and continuation of PRRT. A follow-up CT in 3 months is advised. Please contact us with any inquiries. If symptoms persist or worsen, urgent consultation is advised. After hours, she can visit the emergency room at our clinic. **Operation report**: Diagnosis: Infection of the right chest port. Procedure: Removal of the port system and microbiological culture. Anesthesia: Local. **Procedure Details**: Suspected infection of the right chest port. Elevated lab parameters indicated a possible infection, prompting port removal. The patient was informed and consented. After local anesthesia, the previous incision site was reopened. Yellowish discharge was observed. A sample was sent for microbiology. The port was accessed, detached, and removed along with the associated catheter. The vein was ligated. Infected tissue was excised and sent for pathology. The site was cleaned with an antiseptic solution and sutured closed. Sterile dressing applied. Post-operative care followed standard protocols. Warm regards, ### Patient Report 9 **Dear colleague, ** We report on Mrs. Jill Anderson, born 06/07/1975 who presented to our outpatient clinic on12/01/2022. Diagnosis: Progressive tumor disease under gemcitabine/nab-paclitaxel for pancreatic head carcinoma (Date of onset 09/22). -low progressive lung lesions, possibly metastases **CT pancreas, thorax, abdomen, pelvis dated 12/02/2022. ** **Findings:** Chest: Nodular goiter with low-density nodules in the left thyroid tissue. Port placement in the right chest with the catheter tip located in the superior vena cava. There are no suspicious pulmonary nodules. There is also no increase in mediastinal or axillary lymph nodes. The dense breast tissue on the right remains unchanged from the previous study. Abdomen: Fatty liver with uneven contrast in the liver tissue, possibly due to uneven blood flow. As far as can be seen, no new liver lesions are present. There is a small low-density area in the spleen, possibly a splenic cyst. Two distinct low-density areas are noted in the right kidney\'s tissue, likely cysts. Pancreatic tumor decreasing in site. Local lymph nodes and nodules in the mesentery, with sizes up to about 9mm; some are near the intestines, also decreasing in size. There are outpouchings (diverticula) in the left-sided colon. Hardening of the abdominal vessels. An elongation of the right iliac artery is noted. Spine: There are degenerative changes, including a forward slip of the fifth lumbar vertebra over the first sacral vertebra (grade 1-2 spondylolisthesis). There is also an indentation at the top of the tenth thoracic vertebra. 
Impression: In the context of post-treatment chemotherapy following the surgical removal of a pancreatic tumor, we note: -Advanced pancreatic cancer, decreasing in size. -Lymph nodes smaller than before. -No other signs of metastatic spread. **Summary:** Mrs. Andersen completed neoadjuvant chemotherapy. Pancreatic head resection can now be performed. For this we agreed on an appointment next week. If you have any questions, please do not hesitate to contact us. In case of persistence or worsening of the symptoms, we recommend an immediate reappearance. Outside of regular office hours, this is also possible in emergencies at our emergency unit. Yours sincerely ### Patient Report 0 **Dear colleague, ** we report on Mrs. Jill Anderson, born 06/07/1975 who presented to our outpatient clinic on 3/05/2023. Diagnosis: Progressive tumor disease under gemcitabine/nab-paclitaxel for pancreatic head carcinoma after resection in 12/2022. CT staging on 03/05/2023: No local recurrence. Intrapulmonary nodules of progressive size on both sides, suspicious for pulmonary metastases. Question: Biopsy confirmation of suspicious lung foci? Consensus decision: VATS of a suspicious lung lesion (vs. CT-guided puncture). ### Patient Report 1 **Dear colleague, ** We report on your outpatient treatment on 04/01/2023. Diagnoses: Follow-up after completion of adjuvant chemotherapy with Gemcitabine mono to 03/23 (initial gemcitabine / 5-FU) \- progressive lung lesions, possibly metastases -\> recommendation for CT guided puncture \- status post Whipple surgery for pancreatic cancer CT staging: unexplained pulmonary lesions, possibly metastatic **CT Chest/Abd./Pelvis with contrast dated 04/02/2023: ** Imaging method: Following complication-free bolus i.v. administration of 100 mL Ultravist 370, multi-detector spiral CT scan of the chest, abdomen, and pelvis during arterial, late arterial, and venous phases of contrast. Additionally, oral contrast was administered. Thin-slice reconstructions, as well as coronal and sagittal secondary reconstructions, were done. Chest: Normal lung aeration, fully expanded to the chest wall. No pneumothorax detected. Known metastatic lung nodules show increased size in this study. For instance, the nodule in the apical segment of the right lower lobe now measures 17 x 15 mm, previously around 8 x 10 mm. Similarly, a solid nodule in the right posterior basal segment of the lower lobe is now 12 mm (previously 8 mm) with adjacent atelectasis. No signs of pneumonia. No pleural effusions. Homogeneous thyroid tissue with a nodule on the left side. Solitary lymph nodes seen in the left axillary region and previously smaller (now 9 mm, was 4mm) but with a retained fatty hilum, suggesting an inflammatory origin. No other evidence of abnormally enlarged or conspicuously shaped mediastinal or hilar lymph nodes. A port catheter is inserted from the right, with its tip in the superior vena cava; no signs of port tip thrombosis. Mild coronary artery sclerosis. Abdomen/Pelvis: Fatty liver changes visible with some areas of irregular blood flow. No signs of lesions suspicious for cancer in the liver. A small area of decreased density in segment II of the liver, seen previously, hasn\'t grown in size. Portal and hepatic veins are patent. History of pancreatic head resection with pancreatogastrostomy. The remaining pancreas shows some dilated fluid-filled areas, consistent with a prior scan from 06/26/20. No signs of cancer recurrence. Local lymph nodes appear unchanged with no evidence of growth. 
More lymph nodes than usual are seen in the mesentery and behind the peritoneum. No signs of obstructions in the intestines. Mild abdominal artery sclerosis, but no significant narrowing of major vessels. Both kidneys appear normal with contrast, with some areas of dilated renal pelvis and cortical cysts in both kidneys. Both adrenal glands are small. The rest of the urinary system looks normal. Skeleton: Known degenerative changes in the spine with calcification, and a compression of the 10th thoracic vertebra, but no evidence of any fractures. There are notable herniations between vertebral discs in the lumbar spine and spondylolysis with spondylolisthesis at the L5/S1 level (Meyerding grade I-II). No osteolytic or suspicious lesions found in the skeleton. Conclusion: Oncologic follow-up post adjuvant chemotherapy and pancreatic cancer resection: -Lung nodules are increasing in size and number. -No signs of local recurrence or regional lymph node spread. -No new distant metastases detected **Summary:** Mrs. Anderson visited our outpatient department to discuss her CT scan results, part of her ongoing pancreatic cancer follow-up. For a detailed medical history, please refer to our previous notes. In brief, Mrs. Anderson had advanced pancreatic head cancer for which she underwent a pancreatic head resection after neoadjuvant therapy. She underwent three cycles of adjuvant chemotherapy with gemcitabine/5-FU. The CT scan did not show any local issues, and there was no evidence of local recurrence or liver metastases. The previously known lung lesions have slightly increased in size. We have considered a CT-guided biopsy. A follow-up appointment has been set for 04/22/23. We are available for any questions. If symptoms persist or worsen, we advise an immediate revisit. Outside of regular hours, emergency care is available at our clinic's department. Dear Mrs. Anderson, **Encounter Summary (05/01/2023):** **Diagnosis:** -Progressive lung metastasis during ongoing treatment break for pancreatic adenocarcinoma -CT scan 04/14-23: Uncertain progressive lung lesions -- differential diagnoses include metastases and inflammation. History of clot at the tip of the port. **Previous Treatment:** 09/21: Diagnosed with pancreatic head cancer. 12/22: Surgery - pancreatic head removal- 3 months adjuvant chemo with gemcitabine/5-FU (outpatient). **Summary:** Recent CT results showed mainly progressive lung metastasis. Weight is 59 kg, slightly decreased over the past months, with ongoing diarrhea (about 3 times daily). We have suggested adjusting the pancreatic enzyme dose and if no improvement, trying loperamide. The CT indicated slight size progression of individual lung metastases but no abdominal tumor progression. After discussing the potential for restarting treatment, considering her diagnosis history and previous therapies, we believe there is a low likelihood of a positive response to treatment, especially given potential side effects. Given the minor tumor progression over the last four months, we recommend continuing the treatment break. Mrs. Anderson wants to discuss this with her partner. If she decides to continue the break, we recommend another CT in 2-3 months. **Upcoming Appointment:** Wednesday, 3/15/2023 at 11 a.m. (Arrive by 9:30 a.m. for the hospital\'s imaging center). ### Patient Report 2 **Dear colleague, ** we report on Mrs. Jill Anderson, who was in our inpatient treatment from 07/20/2023 to 09/12/2023. 
**Diagnosis**

Seropneumothorax secondary to puncture of a malignant pleural effusion, with progressive pulmonary metastases of a pancreatic head carcinoma.

Previous therapy and course:
-Status post Whipple surgery on 12/22
-3 months adjuvant CTx with gemcitabine/5-FU (out) -> discontinuation due to intolerance
-1/23-3/23: 3 cycles gemcitabine mono
-06/23 CT: progressive bipulmonary metastatic lesions
-06/23-07/23: 2 cycles gemcitabine / nab-paclitaxel
-07/23 CT: progressive pulmonary metastases bilaterally, otherwise idem

Allergy: penicillin

**Medical History**

Mrs. Anderson came to our ER due to worsening shortness of breath. She has a history of metastatic pancreatic cancer in her lungs. With significant disease progression evident in the July 2023 CT scan and worsening symptoms, she was advised to begin chemotherapy with 5-FU and cisplatin (at a reduced dose because of severe polyneuropathy in her lower limbs). She has experienced worsening shortness of breath since July. Three weeks ago, she developed a cough and consulted her primary care physician, who prescribed cefuroxime for a suspected pneumonia. The cough improved, but the shortness of breath worsened, leading her to come to our ER with a suspected pleural effusion. She denies fever and systemic symptoms. Urinalysis was unremarkable, and stool is well regulated with Creon. She denies nausea and vomiting. For further evaluation and treatment, she was admitted to our gastroenterology unit.

**Physical Examination at Admission**

48-year-old female, 176 cm, 59 kg. Alert and stable.
Skin: Warm, dry, no rashes.
Lungs: Diminished breath sounds on the right, normal on the left.
Cardiac: Regular rate and rhythm, no murmurs.
Abdomen: Soft, non-tender.
Extremities: Normal circulation, no edema.
Neuro: Alert, oriented x3. Neurological exam normal.

**Radiologic Findings**

07/20/2023 Chest X-ray: Evidence of a right-sided pneumothorax with pleural fluid, multiple lung metastases, port-a-cath in place with the tip at the superior vena cava. Cardiomegaly observed.
08/02/2023 Chest X-ray: The pneumothorax on the right has increased. Fluid still present.
08/06/2023 Chest X-ray after chest tube insertion: Improved lung expansion, reduced fluid and pneumothorax.
08/17/2023 Chest X-ray: Chest tube on the right removed. Evidence of a right pleural effusion. No new pneumothorax.
07/12/2023 CT Chest/Abdomen/Pelvis with contrast: Progression of pancreatic cancer with enlarged mediastinal and hilar lymph nodes suggestive of metastasis. Increase in the right pleural effusion. Right adrenal mass noted, possibly an adenoma.

**Consultations/Interventions**

06/07/2023 Surgery: Insertion of a 20 Ch chest tube on the right side, draining 500 mL of fluid immediately.
09/01/2023 Palliative Care: Discussed the progression of her disease, current symptoms, and future care plans. Patient is waiting for the next CT results but is leaning towards home care. Patient advised about painkiller recall (burning in the upper abdomen, central, radiating to the right; doctor's contact provided). Pain meds distributed. Patient reports increasing shortness of breath; according to the on-call physician, a consult for the pleural condition is scheduled. Patient denies pain and shortness of breath; overall, she is much improved. Oxygen arranged by the ward for home use.
-Home intake of pancreatic enzymes effective: 25,000 IU during main meals and 10,000 IU for snacks.
-Patient notes constipation with excess pancreatic enzyme; insufficient enzyme results in diarrhea/steatorrhea.
-Patient consumes Ensure Plus (400 kcal) once daily. Assessment: -Severe protein and calorie malnutrition with insufficient oral intake -Current oral caloric intake: 700 kcal + 400 kcal drink supplement -In the hospital, pancreatic enzyme intake is challenging because the patient struggles to assess food fat content. Recommendations: Lab tests for malnutrition: Vitamin D, Vitamin B12, zinc, folic acid. Twice daily Ensure Plus or alternative product. Please record, possibly order from pharmacy. After discharge, prescribe via primary care doctor. -Pancreatic enzymes: 25,000 IU main meals, 10,000 IU snacks. Include in the medical chart. -Detailed discussion of pancreatic enzyme replacement (consumption of enzymes with fatty meals, dosage based on fat content). -Dietary guidelines for cancer patients (balanced nutrient-rich diet, frequent small high-calorie and protein-rich meals to maintain weight). Psycho-oncology consult from 9/10/2023 Current status/medical history: The patient is noticeably stressed due to her physical limitations in the current scenario, leading to concerns about her care and support. She is under added strain because her insurance recently denied a care level. She dwells on this and suffers from sleep disturbances. She also experiences pain but is hesitant about "imposing" and requesting painkillers. The palliative care service was consulted for both pain management and exploration of potential additional outpatient support. Mental assessment: Alert, fully oriented. Engages openly and amicably. Thought processes are orderly. Tends to ruminate. Worried about her care. No signs of delusion or ego disorders. No anhedonia. Decreased drive and energy. Appetite and sleep are significantly disrupted. No signs of suicidal tendencies. Coping with illness: Patient's approach to illness appears passive. There is a notable mental strain due to worries about living alone and managing daily life independently. Diagnosis: Adjustment disorder Interventions: A diagnostic and supportive discussion was held. We recommended mirtazapine 7.5 mg at night, increasing to 15 mg after a week if tolerated well. She was also encouraged to take pain medication with Tylenol proactively or at fixed intervals if needed. A follow-up visit at our outpatient clinic was scheduled for psycho-oncological care. **Encounter Summary (07/24/2023):** **Diagnosis:** Lung metastatic pancreatic cancer, seropneumothorax. **Procedure:** Left-sided chest tube placement. **Report:** **INDICATION:** Mrs. Anderson showed signs of a rapidly expanding seropneumothorax following a procedure to drain a pleural effusion. Given the increase in size and Mrs. Anderson's new requirement for supplemental oxygen, we decided to place an emergency chest tube. After informing and obtaining consent from Mrs. Anderson, the procedure was performed. **PROCEDURE DETAILS:** After pain management and patient positioning, a local anesthetic was applied. An incision was made and the chest tube was inserted, which immediately drained about 500 mL of fluid. The tube was then secured, and the procedure was concluded. For the postoperative protocol, please refer to the attached documentation. **Pathology report (07/26/2023):** Sample: Liquid material, 50 mL, yellow and cloudy. Processing: Papanicolaou, Hemacolor, and HE staining. Microscopic Findings: Protein deposits, red blood cells, lymphocytes, many granulocytes, eosinophils, histiocyte cell forms, mesothelium, and abundant activated mesothelium. Granulocyte count is raised.
There is a notable increase in activated mesothelium. Additionally, atypical cells were found in clusters with vacuolated cytoplasm and darkly stained nuclei. Initial findings: Presence of a malignant cell population in the samples, suggestive of adenocarcinoma cells. A cell block was prepared from the residual liquid for further categorization. Follow-up findings from 8/04/2023: Processing: Immunohistochemistry (BerEP4, CK7, CK20, CK19.9, CEA). Microscopic Findings: As mentioned, a cell block was created from the leftover liquid. HE staining showed blood and clusters of plasma-rich cells, with contained eosinophilia, mild to moderate vacuolization. Cell nuclei are darkly stained, some are marginal. PAS test was negative. Immunohistochemical reactions with antibodies against BerEP4, CK7, CK20, CK19.9, and CEA were all positive. Final Findings: After reviewing the leftover liquid in a cell block, the findings are: Pleural puncture sample with evidence of atypical cells, both cytopathologically and immunohistochemically, is consistent with cells from a primary pancreatic-biliary cancer. Diagnostic classification: Positive. **Treatment and Progress:** The patient was hospitalized with the mentioned medical history. Lab results were inconclusive. During the physical exam, a notably weak respiratory sound was noted on the right side; oxygen saturation was 97% under 3L of O2. X-rays revealed a significant right-sided pleural effusion, which was drained. After the procedure, the patient's shortness of breath improved, with SpO2 at 95% under 2L of O2. However, an x-ray follow-up displayed a seropneumothorax, which became more evident over time, leading to the placement of a chest tube by the thoracic surgery department. The pneumothorax decreased with suction and remained stable without suction, allowing for tube removal. After the pathological analysis of the fluid, atypical cells consistent with pancreatic cancer were identified. A dietary consultation occurred; the patient declined the recommended IV nutrition via port; proper pancreatic enzyme intake was thoroughly explained. Given the cancer's progression and the patient's deteriorating condition, psycho-oncological care was initiated, and Mirtazapine 7.5 mg at night was prescribed. An ultrasound follow-up at the bedside showed the pleural effusion was slowly progressing (around 100-200 mL/day), but no draining was needed as vital signs were clinically stable. Our palliative care colleagues arranged home care, including home oxygen supply. The patient was discharged to her home on 9/28/2023 in stable condition and without symptoms. **Discharge Medications:** Mirtazapine 7.5 mg at night; Paracetamol as required; Tylenol as required; Pancreatic enzymes: 25,000 IU main meals, 10,000 IU snacks. Follow-up: A follow-up visit was scheduled at our outpatient clinic for psycho-oncological care. The patient is advised to get in touch immediately if there are any concerns or if the pleural effusion returns.
Endoscopic ultrasound-guided FNA
Why does Ethaniel think the humans look defenseless? A. Without space travel, the humans seem defenseless against an alien attack. B. Without wings, the humans look small and defenseless. C. Without wings, the humans look like children. D. Without space weapon technology, the humans seem defenseless against an alien attack.
SECOND LANDING By FLOYD WALLACE A gentle fancy for the Christmas Season—an oft-told tale with a wistful twistful of Something that left the Earth with a wing and a prayer. Earth was so far away that it wasn't visible. Even the sun was only a twinkle. But this vast distance did not mean that isolation could endure forever. Instruments within the ship intercepted radio broadcasts and, within the hour, early TV signals. Machines compiled dictionaries and grammars and began translating the major languages. The history of the planet was tabulated as facts became available. The course of the ship changed slightly; it was not much out of the way to swing nearer Earth. For days the two within the ship listened and watched with little comment. They had to decide soon. "We've got to make or break," said the first alien. "You know what I'm in favor of," said the second. "I can guess," said Ethaniel, who had spoken first. "The place is a complete mess. They've never done anything except fight each other—and invent better weapons." "It's not what they've done," said Bal, the second alien. "It's what they're going to do, with that big bomb." "The more reason for stopping," said Ethaniel. "The big bomb can destroy them. Without our help they may do just that." "I may remind you that in two months twenty-nine days we're due in Willafours," said Bal. "Without looking at the charts I can tell you we still have more than a hundred light-years to go." "A week," said Ethaniel. "We can spare a week and still get there on time." "A week?" said Bal. "To settle their problems? They've had two world wars in one generation and that the third and final one is coming up you can't help feeling in everything they do." "It won't take much," said Ethaniel. "The wrong diplomatic move, or a trigger-happy soldier could set it off. And it wouldn't have to be deliberate. A meteor shower could pass over and their clumsy instruments could interpret it as an all-out enemy attack." "Too bad," said Bal. "We'll just have to forget there ever was such a planet as Earth." "Could you? Forget so many people?" "I'm doing it," said Bal. "Just give them a little time and they won't be here to remind me that I have a conscience." "My memory isn't convenient," said Ethaniel. "I ask you to look at them." Bal rustled, flicking the screen intently. "Very much like ourselves," he said at last. "A bit shorter perhaps, and most certainly incomplete. Except for the one thing they lack, and that's quite odd, they seem exactly like us. Is that what you wanted me to say?" "It is. The fact that they are an incomplete version of ourselves touches me. They actually seem defenseless, though I suppose they're not." "Tough," said Bal. "Nothing we can do about it." "There is. We can give them a week." "In a week we can't negate their entire history. We can't begin to undo the effect of the big bomb." "You can't tell," said Ethaniel. "We can look things over." "And then what? How much authority do we have?" "Very little," conceded Ethaniel. "Two minor officials on the way to Willafours—and we run directly into a problem no one knew existed." "And when we get to Willafours we'll be busy. It will be a long time before anyone comes this way again." "A very long time. There's nothing in this region of space our people want," said Ethaniel. "And how long can Earth last? Ten years? Even ten months? The tension is building by the hour." "What can I say?" said Bal. "I suppose we can stop and look them over. We're not committing ourselves by looking." 
They went much closer to Earth, not intending to commit themselves. For a day they circled the planet, avoiding radar detection, which for them was not difficult, testing, and sampling. Finally Ethaniel looked up from the monitor screen. "Any conclusions?" "What's there to think? It's worse than I imagined." "In what way?" "Well, we knew they had the big bomb. Atmospheric analysis showed that as far away as we were." "I know." "We also knew they could deliver the big bomb, presumably by some sort of aircraft." "That was almost a certainty. They'd have no use for the big bomb without aircraft." "What's worse is that I now find they also have missiles, range one thousand miles and upward. They either have or are near a primitive form of space travel." "Bad," said Ethaniel. "Sitting there, wondering when it's going to hit them. Nervousness could set it off." "It could, and the missiles make it worse," said Bal. "What did you find out at your end?" "Nothing worthwhile. I was looking at the people while you were investigating their weapons." "You must think something." "I wish I knew what to think. There's so little time," Ethaniel said. "Language isn't the difficulty. Our machines translate their languages easily and I've taken a cram course in two or three of them. But that's not enough, looking at a few plays, listening to advertisements, music, and news bulletins. I should go down and live among them, read books, talk to scholars, work with them, play." "You could do that and you'd really get to know them. But that takes time—and we don't have it." "I realize that." "A flat yes or no," said Bal. "No. We can't help them," said Ethaniel. "There is nothing we can do for them—but we have to try." "Sure, I knew it before we started," said Bal. "It's happened before. We take the trouble to find out what a people are like and when we can't help them we feel bad. It's going to be that way again." He rose and stretched. "Well, give me an hour to think of some way of going at it." It was longer than that before they met again. In the meantime the ship moved much closer to Earth. They no longer needed instruments to see it. The planet revolved outside the visionports. The southern plains were green, coursed with rivers; the oceans were blue; and much of the northern hemisphere was glistening white. Ragged clouds covered the pole, and a dirty pall spread over the mid-regions of the north. "I haven't thought of anything brilliant," said Ethaniel. "Nor I," said Bal. "We're going to have to go down there cold. And it will be cold." "Yes. It's their winter." "I did have an idea," said Bal. "What about going down as supernatural beings?" "Hardly," said Ethaniel. "A hundred years ago it might have worked. Today they have satellites. They are not primitives." "I suppose you're right," said Bal. "I did think we ought to take advantage of our physical differences." "If we could I'd be all for it. But these people are rough and desperate. They wouldn't be fooled by anything that crude." "Well, you're calling it," said Bal. "All right," said Ethaniel. "You take one side and I the other. We'll tell them bluntly what they'll have to do if they're going to survive, how they can keep their planet in one piece so they can live on it." "That'll go over big. Advice is always popular." "Can't help it. That's all we have time for." "Special instructions?" "None. We leave the ship here and go down in separate landing craft. You can talk with me any time you want to through our communications, but don't unless you have to." 
"They can't intercept the beams we use." "They can't, and even if they did they wouldn't know what to do with our language. I want them to think that we don't need to talk things over." "I get it. Makes us seem better than we are. They think we know exactly what we're doing even though we don't." "If we're lucky they'll think that." Bal looked out of the port at the planet below. "It's going to be cold where I'm going. You too. Sure we don't want to change our plans and land in the southern hemisphere? It's summer there." "I'm afraid not. The great powers are in the north. They are the ones we have to reach to do the job." "Yeah, but I was thinking of that holiday you mentioned. We'll be running straight into it. That won't help us any." "I know, they don't like their holidays interrupted. It can't be helped. We can't wait until it's over." "I'm aware of that," said Bal. "Fill me in on that holiday, anything I ought to know. Probably religious in origin. That so?" "It was religious a long time ago," said Ethaniel. "I didn't learn anything exact from radio and TV. Now it seems to be chiefly a time for eating, office parties, and selling merchandise." "I see. It has become a business holiday." "That's a good description. I didn't get as much of it as I ought to have. I was busy studying the people, and they're hard to pin down." "I see. I was thinking there might be some way we could tie ourselves in with this holiday. Make it work for us." "If there is I haven't thought of it." "You ought to know. You're running this one." Bal looked down at the planet. Clouds were beginning to form at the twilight edge. "I hate to go down and leave the ship up here with no one in it." "They can't touch it. No matter how they develop in the next hundred years they still won't be able to get in or damage it in any way." "It's myself I'm thinking about. Down there, alone." "I'll be with you. On the other side of the Earth." "That's not very close. I'd like it better if there were someone in the ship to bring it down in a hurry if things get rough. They don't think much of each other. I don't imagine they'll like aliens any better." "They may be unfriendly," Ethaniel acknowledged. Now he switched a monitor screen until he looked at the slope of a mountain. It was snowing and men were cutting small green trees in the snow. "I've thought of a trick." "If it saves my neck I'm for it." "I don't guarantee anything," said Ethaniel. "This is what I was thinking of: instead of hiding the ship against the sun where there's little chance it will be seen, we'll make sure that they do see it. Let's take it around to the night side of the planet and light it up." "Say, pretty good," said Bal. "They can't imagine that we'd light up an unmanned ship," said Ethaniel. "Even if the thought should occur to them they'll have no way of checking it. Also, they won't be eager to harm us with our ship shining down on them." "That's thinking," said Bal, moving to the controls. "I'll move the ship over where they can see it best and then I'll light it up. I'll really light it up." "Don't spare power." "Don't worry about that. They'll see it. Everybody on Earth will see it." Later, with the ship in position, glowing against the darkness of space, pulsating with light, Bal said: "You know, I feel better about this. We may pull it off. Lighting the ship may be just the help we need." "It's not we who need help, but the people of Earth," said Ethaniel. "See you in five days." 
With that he entered a small landing craft, which left a faintly luminescent trail as it plunged toward Earth. As soon as it was safe to do so, Bal left in another craft, heading for the other side of the planet. And the spaceship circled Earth, unmanned, blazing and pulsing with light. No star in the winter skies of the planet below could equal it in brilliancy. Once a man-made satellite came near but it was dim and was lost sight of by the people below. During the day the ship was visible as a bright spot of light. At evening it seemed to burn through the sunset colors. And the ship circled on, bright, shining, seeming to be a little piece clipped from the center of a star and brought near Earth to illuminate it. Never, or seldom, had Earth seen anything like it. In five days the two small landing craft that had left it arched up from Earth and joined the orbit of the large ship. The two small craft slid inside the large one and doors closed behind them. In a short time the aliens met again. "We did it," said Bal exultantly as he came in. "I don't know how we did it and I thought we were going to fail but at the last minute they came through." Ethaniel smiled. "I'm tired," he said, rustling. "Me too, but mostly I'm cold," said Bal, shivering. "Snow. Nothing but snow wherever I went. Miserable climate. And yet you had me go out walking after that first day." "From my own experience it seemed to be a good idea," said Ethaniel. "If I went out walking one day I noticed that the next day the officials were much more cooperative. If it worked for me I thought it might help you." "It did. I don't know why, but it did," said Bal. "Anyway, this agreement they made isn't the best but I think it will keep them from destroying themselves." "It's as much as we can expect," said Ethaniel. "They may have small wars after this, but never the big one. In fifty or a hundred years we can come back and see how much they've learned." "I'm not sure I want to," said Bal. "Say, what's an angel?" "Why?" "When I went out walking people stopped to look. Some knelt in the snow and called me an angel." "Something like that happened to me," said Ethaniel. "I didn't get it but I didn't let it upset me," said Bal. "I smiled at them and went about my business." He shivered again. "It was always cold. I walked out, but sometimes I flew back. I hope that was all right." In the cabin Bal spread his great wings. Renaissance painters had never seen his like but knew exactly how he looked. In their paintings they had pictured him innumerable times. "I don't think it hurt us that you flew," said Ethaniel. "I did so myself occasionally." "But you don't know what an angel is?" "No. I didn't have time to find out. Some creature of their folklore I suppose. You know, except for our wings they're very much like ourselves. Their legends are bound to resemble ours." "Sure," said Bal. "Anyway, peace on Earth." THE END Transcriber's Note: This etext was produced from Amazing Science Fiction Stories January 1960. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
B. Without wings, the humans look small and defenseless.
Where did Mia grow up? A. Earth B. a space ship C. Tintera D. The Third Level
DOWN TO THE WORLDS OF MEN BY ALEXEI PANSHIN The ancient rule was sink or swim—swim in the miasma of a planet without spaceflight, or sink to utter destruction! [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, July 1963. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] I The horses and packs were loaded before we went aboard the scoutship. The scout bay is no more than a great oversized airlock with a dozen small ships squatting over their tubes, but it was the last of the Ship that I might ever see, so I took a long final look from the top of the ramp. There were sixteen of us girls and thirteen boys. We took our places in the seats in the center of the scout. Riggy Allen made a joke that nobody bothered to laugh at, and then we were all silent. I was feeling lost and just beginning to enjoy it when Jimmy Dentremont came over to me. He's red-headed and has a face that makes him look about ten. An intelligent runt like me. He said what I expected. "Mia, do you want to go partners if we can get together when we get down?" I guess he thought that because we were always matched on study I liked him. Well, I did when I wasn't mad at him, but now I had that crack he'd made about being a snob in mind, so I said, "Not likely. I want to come back alive." It wasn't fair, but it was a good crack and he went back to his place without saying anything. My name is Mia Havero. I'm fourteen, of course, or I wouldn't be telling this. I'm short, dark and scrawny, though I don't expect that scrawniness to last much longer. Mother is very good looking. In the meantime, I've got brains as a consolation. After we were all settled, George Fuhonin, the pilot, raised the ramps. We sat there for five minutes while they bled air out of our tube and then we just ... dropped. My stomach turned flips. We didn't have to leave that way, but George thinks it's fun to be a hot pilot. Thinking it over, I was almost sorry I'd been stinking to Jimmy D. He's the only competition I have my own age. The trouble is, you don't go partners with the competition, do you? Besides, there was still that crack about being a snob. The planet chosen for our Trial was called Tintera. The last contact the Ship had had with it—and we were the ones who dropped them—was almost 150 years ago. No contact since. That had made the Council debate a little before they dropped us there, but they decided it was all right in the end. It didn't make any practical difference to us kids because they never tell you anything about the place they're going to drop you. All I knew was the name. I wouldn't have known that much if Daddy weren't Chairman of the Council. I felt like crawling in a corner of the ship and crying, but nobody else was breaking down, so I didn't. I did feel miserable. I cried when I said good-by to Mother and Daddy—a real emotional scene—but that wasn't in public. It wasn't the chance of not coming back that bothered me really, because I never believed that I wouldn't. The thought that made me unhappy was that I would have to be on a planet for a whole month. Planets make me feel wretched. The gravity is always wrong, for one thing. Either your arches and calves ache or every time you step you think you're going to trip on a piece of fluff and break your neck. There are vegetables everywhere and little grubby things just looking for you to crawl on. If you can think of anything creepier than that, you've got a real nasty imagination. 
Worst of all, planets stink. Every single one smells—I've been on enough to know that. A planet is all right for a Mud-eater, but not for me. We have a place in the Ship like that—the Third Level—but it's only a thousand square miles and any time it gets on your nerves you can go up a level or down a level and be back in civilization. When we reached Tintera, they started dropping us. We swung over the sea from the morning side and then dropped low over gray-green forested hills. Finally George spotted a clear area and dropped into it. They don't care what order you go in, so Jimmy D. jumped up, grabbed his gear and then led his horse down the ramp. I think he was still smarting from the slap I'd given him. In a minute we were airborne again. I wondered if I would ever see Jimmy—if he would get back alive. It's no game we play. When we turn fourteen, they drop us on the nearest colonized planet and come back one month later. That may sound like fun to you, but a lot of us never come back alive. Don't think I was helpless. I'm hell on wheels. They don't let us grow for fourteen years and then kick us out to die. They prepare us. They do figure, though, that if you can't keep yourself alive by the time you're fourteen, you're too stupid, foolish or unlucky to be any use to the Ship. There's sense behind it. It means that everybody on the Ship is a person who can take care of himself if he has to. Daddy says that something has to be done in a closed society to keep the population from decaying mentally and physically, and this is it. And it helps to keep the population steady. I began to check my gear out—sonic pistol, pickup signal so I could be found at the end of the month, saddle and cinches, food and clothes. Venie Morlock has got a crush on Jimmy D., and when she saw me start getting ready to go, she began to check her gear, too. At our next landing, I grabbed Ninc's reins and cut Venie out smoothly. It didn't have anything to do with Jimmy. I just couldn't stand to put off the bad moment any longer. The ship lifted impersonally away from Ninc and me like a rising bird, and in just a moment it was gone. Its gray-blue color was almost the color of the half-overcast sky, so I was never sure when I saw it last. II The first night was hell, I guess because I'm not used to having the lights out. That's when you really start to feel lonely, being alone in the dark. When the sun disappears, somehow you wonder in your stomach if it's really going to come back. But I lived through it—one day in thirty gone. I rode in a spiral search pattern during the next two days. I had three things in mind—stay alive, find people and find some of the others. The first was automatic. The second was to find out if there was a slot I could fit into for a month. If not, I would have to find a place to camp out, as nasty as that would be. The third was to join forces, though not with that meatball Jimmy D. No, he isn't really a meatball. The trouble is that I don't take nothing from nobody, especially him, and he doesn't take nothing from nobody, especially me. So we do a lot of fighting. I had a good month for Trial. My birthday is in November—too close to Year End Holiday for my taste, but this year it was all right. It was spring on Tintera, but it was December in the Ship, and after we got back we had five days of Holiday to celebrate. It gave me something to look forward to. In two days of riding, I ran onto nothing but a few odd-looking animals. I shot one small one and ate it. 
It turned out to taste pretty good, though not as good as a slice from Hambone No. 4, to my mind the best meat vat on the Ship. I've eaten things so gruey-looking that I wondered that anybody had the guts to try them in the first place and they've turned out to taste good. And I've seen things that looked good that I couldn't keep on my stomach. So I guess I was lucky. On the third day, I found the road. I brought Ninc down off the hillside, losing sight of the road in the trees, and then reaching it in the level below. It was narrow and made of sand spread over a hard base. Out of the marks in the sand, I could pick out the tracks of horses and both narrow and wide wheels. Other tracks I couldn't identify. One of the smartest moves in history was to include horses when they dropped the colonies. I say "they" because, while we did the actual dropping, the idea originated with the whole evac plan back on Earth. Considering how short a time it was in which the colonies were established, there was not time to set up industry, so they had to have draft animals. The first of the Great Ships was finished in 2025. One of the eight, as well as the two that were being built then, went up with everything else in the Solar System in 2041. In that sixteen years 112 colonies were planted. I don't know how many of those planets had animals that could have been substituted but, even if they had, they would have had to be domesticated from scratch. That would have been stupid. I'll bet that half the colonies would have failed if they hadn't had horses. We'd come in from the west over the ocean, so I traveled east on the road. That much water makes me nervous, and roads have to go somewhere. I came on my first travelers three hours later. I rounded a tree-lined bend, ducking an overhanging branch, and pulled Ninc to a stop. There were five men on horseback herding a bunch of the ugliest creatures alive. They were green and grotesque. They had squat bodies, long limbs and knobby bulges at their joints. They had square, flat animal masks for faces. But they walked on their hind legs and they had paws that were almost hands, and that was enough to make them seem almost human. They made a wordless, chilling, lowing sound as they milled and plodded along. I started Ninc up again and moved slowly to catch up with them. All the men on horseback had guns in saddle boots. They looked as nervous as cats with kittens. One of them had a string of packhorses on a line and he saw me and called to another who seemed to be the leader. That one wheeled his black horse and rode back toward me. He was a middle-aged man, maybe as old as my Daddy. He was large and he had a hard face. Normal enough, but hard. He pulled to a halt when we reached each other, but I kept going. He had to come around and follow me. I believe in judging a person by his face. A man can't help the face he owns, but he can help the expression he wears on it. If a man looks mean, I generally believe that he is. This one looked mean. That was why I kept riding. He said, "What be you doing out here, boy? Be you out of your head? There be escaped Losels in these woods." I told you I hadn't finished filling out yet, but I hadn't thought it was that bad. I wasn't ready to make a fight over the point, though. Generally, I can't keep my bloody mouth shut, but now I didn't say anything. It seemed smart. "Where be you from?" he asked. I pointed to the road behind us. "And where be you going?" I pointed ahead. No other way to go. He seemed exasperated. 
I have that effect sometimes. Even on Mother and Daddy, who should know better. We were coming up on the others now, and the man said, "Maybe you'd better ride on from here with us. For protection." He had an odd way of twisting his sounds, almost as though he had a mouthful of mush. I wondered whether he were just an oddball or whether everybody here spoke the same way. I'd never heard International English spoken any way but one, even on the planet Daddy made me visit with him. One of the other outriders came easing by then. I suppose they'd been watching us all the while. He called to the hard man. "He be awfully small, Horst. I doubt me a Losel'd even notice him at all. We mought as well throw him back again." The rider looked at me. When I didn't dissolve in terror as he expected, he shrugged and one of the other men laughed. The hard man said to the others, "This boy will be riding along with us to Forton for protection." I looked down at the plodding, unhappy creatures they were driving along and one looked back at me with dull, expressionless golden eyes. I felt uncomfortable. I said, "I don't think so." What the man did then surprised me. He said, "I do think so," and reached for the rifle in his saddle boot. I whipped my sonic pistol out so fast that he was caught leaning over with the rifle half out. His jaw dropped. He knew what I held and he didn't want to be fried. I said, "Ease your rifles out and drop them gently to the ground." They did, watching me all the while with wary expressions. When all the rifles were on the ground, I said, "All right, let's go." They didn't want to move. They didn't want to leave the rifles. I could see that. Horst didn't say anything. He just watched me with narrowed eyes. But one of the others held up a hand and in wheedling tones said, "Look here, kid...." "Shut up," I said, in as mean a voice as I could muster, and he did. It surprised me. I didn't think I sounded that mean. I decided he just didn't trust the crazy kid not to shoot. After twenty minutes of easy riding for us and hard walking for the creatures, I said, "If you want your rifles, you can go back and get them now." I dug my heels into Ninc's sides and rode on. At the next bend I looked back and saw four of them holding their packhorses and the creatures still while one beat a dust-raising retreat down the road. I put this episode in the "file and hold for analysis" section in my mind and rode on, feeling good. I think I even giggled once. Sometimes I even convince myself that I'm hell on wheels. III When I was nine, my Daddy gave me a painted wooden doll that my great-grandmother brought from Earth. The thing is that inside it, nestled one in another, are eleven more dolls, each one smaller than the last. I like to watch people when they open it for the first time. My face must have been like that as I rode along the road. The country leveled into a great rolling valley and the trees gave way to great farms and fields. In the fields, working, were some of the green creatures, which surprised me since the ones I'd seen before hadn't seemed smart enough to count to one, let alone do any work. But it relieved me. I thought they might have been eating them or something. I passed two crossroads and started to meet more people, but nobody questioned me. I met people on horseback, and twice I met trucks moving silently past. And I overtook a wagon driven by the oldest man I've seen in my life. He waved to me, and I waved back. 
Near the end of the afternoon I came to the town, and there I received a jolt that sickened me. By the time I came out on the other side, I was sick. My hands were cold and sweaty and my head was spinning, and I wanted to kick Ninc to a gallop. I rode slowly in, looking all around, missing nothing. The town was all stone, wood and brick. Out of date. Out of time, really. There were no machines more complicated than the trucks I'd seen earlier. At the edge of town, I passed a newspaper office with a headline pasted in the window—INVASION! I remember that. I wondered about it. But I looked most closely at the people. In all that town, I didn't see one girl over ten years old and no grown-up women at all. There were little kids, there were boys and there were men, but no girls. All the boys and men wore pants, and so did I, which must have been why Horst and his buddies assumed I was a boy. It wasn't flattering; but I decided I'd not tell anybody different until I found what made the clocks tick on this planet. But that wasn't what bothered me. It was the kids. My God! They swarmed. I saw a family come out of a house—a father and four children. It was the most foul thing I've ever seen. It struck me then—these people were Free Birthers! I felt a wave of nausea and I closed my eyes until it passed. The first thing you learn in school is that if it weren't for idiot and criminal people like these, Earth would never have been destroyed. The evacuation would never have had to take place, and eight billion people wouldn't have died. There wouldn't have been eight billion people. But, no. They bred and they spread and they devoured everything in their path like a cancer. They gobbled up all the resources that Earth had and crowded and shoved one another until the final war came. I am lucky. My great-great-grandparents were among those who had enough foresight to see what was coming. If it hadn't been for them and some others like them, there wouldn't be any humans left anywhere. And I wouldn't be here. That may not scare you, but it scares me. What happened before, when people didn't use their heads and wound up blowing the Solar System apart, is something nobody should forget. The older people don't let us forget. But these people had, and that the Council should know. For the first time since I landed on Tintera, I felt really frightened. There was too much going on that I didn't understand. I felt a blind urge to get away, and when I reached the edge of town, I whomped Ninc a good one and gave him his head. I let him run for almost a mile before I pulled him down to a walk again. I couldn't help wishing for Jimmy D. Whatever else he is, he's smart and brains I needed. How do you find out what's going on? Eavesdrop? That's a lousy method. For one thing, people can't be depended on to talk about the things you want to hear. For another, you're likely to get caught. Ask somebody? Who? Make the mistake of bracing a fellow like Horst and you might wind up with a sore head and an empty pocket. The best thing I could think of was to find a library, but that might be a job. I'd had two bad shocks on this day, but they weren't the last. In the late afternoon, when the sun was starting to sink and a cool wind was starting to ripple the tree leaves, I saw the scoutship high in the sky. The dying sun colored it a deep red. Back again? I wondered what had gone wrong. I reached down into my saddlebag and brought out my contact signal. 
The scoutship swung up in the sky in a familiar movement calculated to drop the stomach out of everybody aboard. George Fuhonin's style. I triggered the signal, my heart turning flips all the while. I didn't know why he was back, but I wasn't really sorry. The ship swung around until it was coming back on a path almost over my head, going in the same direction. Then it went into a slip and started bucking so hard that I knew this wasn't hot piloting at all, just plain idiot stutter-fingered stupidity at the controls. As it skidded by me overhead, I got a good look at it and knew that it wasn't one of ours. Not too different, but not ours. One more enigma. Where was it from? Not here. Even if you know how, and we wouldn't tell these Mud-eaters how, a scoutship is something that takes an advanced technology to build. I felt defeated and tired. Not much farther along the road, I came to a campsite with two wagons pulled in for the night, and I couldn't help but pull in myself. The campsite was large and had two permanent buildings on it. One was a well enclosure and the other was little more than a high-walled pen. It didn't even have a roof. I set up camp and ate my dinner. In the wagon closest to me were a man, his wife and their three children. The kids were running around and playing, and one of them ran close to the high-walled pen. His father came and pulled him away. The kids weren't to blame for their parents, but when one of them said hello to me, I didn't even answer. I know how lousy I would feel if I had two or three brothers and sisters, but it didn't strike me until that moment that it wouldn't even seem out of the ordinary to these kids. Isn't that horrible? About the time I finished eating, and before it grew dark, the old man I had seen earlier in the day drove his wagon in. He fascinated me. He had white hair, something I had read about in stories but had never seen before. When nightfall came, they started a large fire. Everybody gathered around. There was singing for awhile, and then the father of the children tried to pack them off to bed. But they weren't ready to go, so the old man started telling them a story. In the old man's odd accent, and sitting there in the campfire light surrounded by darkness, it seemed just right. It was about an old witch named Baba Yaga who lived in the forest in a house that stood on chicken legs. She was the nasty stepmother of a nice little girl, and to get rid of the kid, she sent her on a phony errand into the deep dark woods at nightfall. I could appreciate the poor girl's position. All the little girl had to help her were the handkerchief, the comb and the pearl that she had inherited from her dear dead mother. But, as it turned out, they were just enough to defeat nasty old Baba Yaga and bring the girl safely home. I wished for the same for myself. The old man had just finished and they were starting to drag the kids off to bed when there was a commotion on the road at the edge of the camp. I looked but my eyes were adjusted to the light of the fire and I couldn't see far into the dark. A voice there said, "I'll be damned if I'll take another day like this one, Horst. We should have been here hours ago. It be your fault we're not." Horst growled a retort. I decided that it was time for me to leave the campfire. I got up and eased away as Horst and his men came up to the fire, and cut back to where Ninc was parked. I grabbed up my blankets and mattress and started to roll them up. 
I had a pretty good idea now what they used the high-walled pen for. I should have known that they would have to pen the animals up for the night. I should have used my head. I hadn't and now it was time to take leave. I never got the chance. I was just heaving the saddle up on Ninc when I felt a hand on my shoulder and I was swung around. "Well, well. Horst, look who we have here," he called. It was the one who'd made the joke about me being beneath the notice of a Losel. He was alone with me now, but with that call the others would be up fast. I brought the saddle around as hard as I could and then up, and he went down. He started to get up again, so I dropped the saddle on him and reached inside my jacket for my gun. Somebody grabbed me then from behind and pinned my arms to my side. I opened my mouth to scream—I have a good scream—but a rough smelly hand clamped down over it before I had a chance to get more than a lungful of air. I bit down hard—5000 lbs. psi, I'm told—but he didn't let me go. I started to kick, but Horst jerked me off my feet and dragged me off. When we were behind the pen and out of earshot of the fire, he stopped dragging me and dropped me in a heap. "Make any noise," he said, "and I'll hurt you." That was a silly way to put it, but somehow it said more than if he'd threatened to break my arm or my head. It left him a latitude of things to do if he pleased. He examined his hand. There was enough moonlight for that. "I ought to club you anyway," he said. The one I'd dropped the saddle on came up then. The others were putting the animals in the pen. He started to kick me, but Horst stopped him. "No," he said. "Look through the kid's gear, bring the horse and what we can use." The other one didn't move. "Get going, Jack," Horst said in a menacing tone and they stood toe to toe for a long moment before Jack finally backed down. It seemed to me that Horst wasn't so much objecting to me being kicked, but was rather establishing who did the kicking in his bunch. But I wasn't done yet. I was scared, but I still had the pistol under my jacket. Horst turned back to me and I said, "You can't do this and get away with it." He said, "Look, boy. You may not know it, but you be in a lot of trouble. So don't give me a hard time." He still thought I was a boy. It was not time to correct him, but I didn't like to see the point go unchallenged. It was unflattering. "The courts won't let you get away with this," I said. I'd passed a courthouse in the town with a carved motto over the doors: EQUAL JUSTICE UNDER THE LAW or TRUTH OUR SHIELD AND JUSTICE OUR SWORD or something stuffy like that. He laughed, not a phony, villain-type laugh, but a real laugh, so I knew I'd goofed. "Boy, boy. Don't talk about the courts. I be doing you a favor. I be taking what I can use of your gear, but I be letting you go. You go to court and they'll take everything and lock you up besides. I be leaving you your freedom." "Why would they be doing that?" I asked. I slipped my hand under my jacket. "Every time you open your mouth you shout that you be off one of the Ships," Horst said. "That be enough. They already have one of you brats in jail in Forton." I was about to bring my gun out when up came Jack leading Ninc, with all my stuff loaded on. I mentally thanked him. He said, "The kid's got some good equipment. But I can't make out what this be for." He held out my pickup signal. Horst looked at it, then handed it back. "Throw it away," he said. I leveled my gun at them—Hell on Wheels strikes again! 
I said, "Hand that over to me." Horst made a disgusted sound. "Don't make any noise," I said, "or you'll fry. Now hand it over." I stowed it away, then paused with one hand on the leather horn of the saddle. "What's the name of the kid in jail in Forton." "I can't remember," he said. "But it be coming to me. Hold on." I waited. Then suddenly my arm was hit a numbing blow from behind and the gun went flying. Jack pounced after it and Horst said, "Good enough," to the others who'd come up behind me. I felt like a fool. Horst stalked over and got the signal. He dropped it on the ground and said in a voice far colder than mine could ever be, because it was natural and mine wasn't, "The piece be yours." Then he tromped on it until it cracked and fell apart. Then he said, "Pull a gun on me twice. Twice." He slapped me so hard that my ears rang. "You dirty little punk." I said calmly, "You big louse." It was a time I would have done better to keep my mouth shut. All I can remember is a flash of pain as his fist crunched against the side of my face and then nothing. Brains are no good if you don't use them.
B. a space ship
How does the author appeal to readers to convince them to align themselves with Sharism? A. Promising a more equitable future for all B. Discussing how prior failed inventions could have been successful if more collaborators participated C. Refuting the argument that greedy corporations could manipulate the Sharist system D. Associating sharing with bravery and leadership
Sharism: A Mind Revolution With the People of the World Wide Web communicating more fully and freely in Social Media while rallying a Web 2.0 content boom, the inner dynamics of such a creative explosion must be studied more closely. What motivates those who join this movement and what future will they create? A key fact is that a superabundance of community respect and social capital are being accumulated by those who share. The key motivator of Social Media and the core spirit of Web 2.0 is a mind switch called Sharism. Sharism suggests a re-orientation of personal values. We see it in User Generated Content. It is the pledge of Creative Commons. It is in the plans of future-oriented cultural initiatives. Sharism is also a mental practice that anyone can try, a social-psychological attitude to transform a wide and isolated world into a super-smart Social Brain. The Neuron Doctrine Sharism is encoded in the Human Genome. Although eclipsed by the many pragmatisms of daily life, the theory of Sharism finds basis in neuroscience and its study of the working model of the human brain. Although we can’t entirely say how the brain works as a whole, we do have a model of the functional mechanism of the nervous system and its neurons. A neuron is not a simple organic cell, but a very powerful, electrically excitable biological processor. Groups of neurons form vastly interconnected networks, which, by changing the strength of the synapses between cells, can process information, and learn. A neuron, by sharing chemical signals with its neighbors, can be integrated into more meaningful patterns that keep the neuron active and alive. Moreover, such a simple logic can be iterated and amplified, since all neurons work on a similar principle of connecting and sharing. Originally, the brain is quite open. A neural network exists to share activity and information, and I believe this model of the brain should inspire ideas and decisions about human networks. Thus, our brain supports sharing in its very system-nature. This has profound implications for the creative process. Whenever you have an intention to create, you will find it easier to generate more creative ideas if you keep the sharing process firmly in mind. The idea-forming process is not linear, but more like an avalanche of amplifications along the thinking path. It moves with the momentum of a creative snowball. If your internal cognitive system encourages sharing, you can engineer a feedback loop of happiness, which will help you generate even more ideas in return. It’s a kind of butterfly effect, as the small creative energy you spend will eventually return to make you, and the world, more creative. However, daily decisions for most adults are quite low in creative productivity, if only because they’ve switched off their sharing paths. People generally like to share what they create, but in a culture that tells them to be protective of their ideas, people start to believe in the danger of sharing. Then Sharism will be degraded in their mind and not encouraged in their society. But if we can encourage someone to share, her sharing paths will stay open. Sharism will be kept in her mind as a memory and an instinct. If in the future she faces a creative choice, her choice will be, “Share.” These mind-switches are too subtle to be felt. But since the brain, and society, is a connected system, the accumulation of these micro-attitudes, from neuron to neuron and person to person, can result in observable behavior. 
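To make the neural analogy above a little more concrete, here is a minimal, purely illustrative Python sketch (not part of the original essay) of a Hebbian-style update, in which a synapse is strengthened whenever the two neurons it connects are active together; the function name, learning rate, and activity pattern are all invented for illustration only.

```python
# Illustrative only: a toy Hebbian rule ("neurons that fire together wire together").
# The weight models one synapse; shared activity strengthens it toward 1.0.

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen the synapse when both neurons are active at the same time."""
    if pre_active and post_active:
        weight += rate * (1.0 - weight)  # move part of the remaining distance toward full strength
    return weight

w = 0.0
# 1 = the neuron fired, 0 = it stayed silent; pairs where both fired are "shared" events.
activity = [(1, 1), (1, 0), (1, 1), (0, 1), (1, 1)]
for pre, post in activity:
    w = hebbian_update(w, bool(pre), bool(post))

print(f"Synaptic strength after shared activity: {w:.2f}")  # roughly 0.27 here
```

The point of the sketch is only the mechanism the essay invokes: repeated shared activity, not any single event, is what builds the connection.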
It is easy to tell if a person, a group, a company, a nation is oriented toward Sharism or not. For those who are not, what they defend as “cultural goods” and “intellectual property” are just excuses for the status quo of keeping a community closed. Much of their “culture” will be protected, but the net result is the direct loss of many other precious ideas, and the subsequent loss of all the potential gains of sharing. This lost knowledge is a black hole in our life, which may start to swallow other values as well. Non-sharing culture misleads us with its absolute separation of Private and Public space. It makes creative action a binary choice between public and private, open and closed. This creates a gap in the spectrum of knowledge. Although this gap has the potential to become a valuable creative space, concerns about privacy make this gap hard to fill. We shouldn’t be surprised that, to be safe, most people keep their sharing private and stay “closed.” They may fear the Internet creates a potential for abuse that they can’t fight alone. However, the paradox is: The less you share, the less power you have. New Technologies and the Rise of Sharism Let’s track back to 1999, when there were only a few hundred pioneer bloggers around the world, and no more than ten times that many readers following each blog. Human history is always so: something important was happening, but the rest of the world hadn’t yet realized it. The shift toward easy-to-use online publishing triggered a soft revolution in just five years. People made a quick and easy transition from reading blogs, to leaving comments and taking part in online conversations, and then to the sudden realization that they should become bloggers themselves. More bloggers created more readers, and more readers made more blogs. The revolution was viral. Bloggers generate lively and timely information on the Internet, and connect to each other with RSS, hyperlinks, comments, trackbacks and quotes. The small-scale granularity of the content can fill discrete gaps in experience and thus record a new human history. Once you become a blogger, once you have accumulated so much social capital in such a small site, it’s hard to stop. We can’t explain this fact with a theory of addiction. It’s an impulse to share. It’s the energy of the memes that want to be passed from mouth to mouth and mind to mind. It’s more than just E-mail. It’s Sharism. Bloggers are always keen to keep the social context of their posts in mind, by asking themselves, “Who is going to see this?” Bloggers are agile in adjusting their tone (and privacy settings) to advance ideas and stay out of trouble. It’s not self-censorship, but a sense of smart expression. But once blogs reached the tipping point, they expanded into the blogosphere. This required a more delicate social networking system and content-sharing architecture. But people now understand that they can have better control over a wide spectrum of relationships. Flickr, for example, allows people to share their photos widely, but safely. The checkbox-based privacy of Flickr may seem unfamiliar to a new user, but you can use it to toy with the mind-switches of Sharism. By checking a box we can choose to share or not to share. From my observations, I have seen photographers on Flickr become more open to sharing, while retaining flexible choices. 
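As a purely illustrative aside (not from the essay), the "spectrum of relationships" and checkbox-style privacy discussed above can be sketched as a per-item visibility setting that goes beyond a binary public/private switch; the class names and levels below are hypothetical, loosely inspired by controls such as Flickr's, not a description of any real API.

```python
# Illustrative only: a per-item sharing switch with more than two positions.
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    PRIVATE = 0   # only the owner
    FRIENDS = 1   # a trusted circle
    PUBLIC = 2    # everyone

@dataclass
class SharedItem:
    title: str
    visibility: Visibility = Visibility.PRIVATE

    def visible_to(self, viewer_is_friend):
        """Decide whether a given viewer may see this item."""
        if self.visibility is Visibility.PUBLIC:
            return True
        if self.visibility is Visibility.FRIENDS:
            return viewer_is_friend
        return False  # PRIVATE

photo = SharedItem("sunset.jpg", Visibility.FRIENDS)
print(photo.visible_to(viewer_is_friend=True))   # True
print(photo.visible_to(viewer_is_friend=False))  # False
```

The design point is the one the essay makes: once sharing is a spectrum rather than an on/off switch, choosing to share stops being an all-or-nothing risk.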
The rapid emergence of Social Applications that can communicate and cooperate, by allowing people to output content from one service to another, is letting users pump their memes into a pipeline-like ecosystem. This interconnectedness allows memes to travel along multiple online social networks, and potentially reach a huge audience. As a result, such a Micro-pipeline system is making Social Media a true alternative to broadcast media. These new technologies are reviving Sharism in our closed culture. Local Practice, Global Gain If you happened to lose your Sharism in a bad educational or cultural setting, it’s hard to get it back. But it’s not impossible. Persistent practice can lead to a full recovery. You can think of Sharism as a spiritual practice. But you must practice every day. Otherwise, you might lose the power of sharing. Permanently. You might need something to spur you on, to keep you from quitting and returning to a closed mindset. Here’s an idea: put a sticky note on your desk that says, “What do you want to share today?” I’m not kidding. Then, if anything interesting comes your way: Share It! The easiest way to both start and keep sharing is by using different kinds of social software applications. The first meme you want to share may be small, but you can amplify it with new technologies. Enlist some people from your network and invite them into a new social application. At first it might be hard to feel the gains of Sharism. The true test then is to see if you can keep track of the feedback that you get from sharing. You will realize that almost all sharing activities will generate positive results. The happiness this brings is only the most immediate reward. But there are others. The first type of reward that you will get comes in the form of comments. Then you know you’ve provoked interest, appreciation, excitement. The second reward is access to all the other stuff being shared by friends in your network. Since you know and trust them, you will be that much more interested in what they have to share. Already, the return is a multiple of the small meme you first shared. But the third type of return is more dramatic still. Anything you share can be forwarded, circulated and republished via other people’s networks. This cascade effect can spread your work to the networked masses. Improvements in social software are making the speed of dissemination as fast as a mouse-click. You should get to know the Sharism-You. You’re about to become popular, and fast. This brings us to the fourth and final type of return. It has a meaning not only for you, but for the whole of society. If you so choose, you may allow others to create derivative works from what you share. This one choice could easily snowball into more creations along the sharing path, from people at key nodes in the network who are all as passionate about creating and sharing as you are. After many iterative rounds of development, a large creative work may spring from your choice to share. Of course, you will get the credit that you asked for, and deserve. And it’s okay to seek financial rewards. But you will in every case get something just as substantial: Happiness. The more people who create in the spirit of Sharism, the easier it will be to attain well-balanced and equitable Social Media that is woven by people themselves. Media won’t be controlled by any single person but will rely on the even distribution of social networking. 
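The cascade effect described above, where anything shared can be re-shared through other people's networks, can be sketched as a simple spread through a follower graph. The network, names, and numbers below are invented purely for illustration and make no claim about how any real platform propagates content.

```python
# Illustrative only: breadth-first spread of one shared item through a tiny follower graph.
from collections import deque

# Hypothetical network: each person passes the item along to the people who follow them.
followers = {
    "you": ["ana", "bo"],
    "ana": ["cem", "di"],
    "bo":  ["di", "eli"],
    "cem": [],
    "di":  ["fay"],
    "eli": [],
    "fay": [],
}

def reach(start):
    """Everyone the item can reach when each recipient re-shares it once."""
    seen, queue = {start}, deque([start])
    while queue:
        person = queue.popleft()
        for follower in followers.get(person, []):
            if follower not in seen:
                seen.add(follower)
                queue.append(follower)
    return seen - {start}

print(sorted(reach("you")))  # ['ana', 'bo', 'cem', 'di', 'eli', 'fay']
```

Even in this toy graph, one act of sharing reaches the whole network in a few hops, which is the multiplier the essay calls the third type of return.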
These “Shaeros” (Sharing Heroes) will naturally become the opinion leaders in the first wave of Social Media. However, these media rights will belong to everyone. You yourself can be both producer and consumer in such a system. Sharism Safeguards Your Rights Still, many questions will be raised about Sharism as an initiative in a new age. The main one is copyright. One concern is that any loss of control over copyrighted content will lead to noticeable deficits in personal wealth, or just loss of control. Five years ago, I would have said that this was a possibility. But things are changing today. The sharing environment is more protected than you might think. Many new social applications make it easy to set terms-of-use along your sharing path. Any infringement of those terms will be challenged not just by the law, but by your community. Your audience, who benefit from your sharing, can also be the gatekeepers of your rights. Even if you are a traditional copyright holder, this sounds ideal. Furthermore, by realizing all the immediate and emergent rewards that can be had by sharing, you may eventually find that copyright and “All Rights Reserved” are far from your mind. You will enjoy sharing too much to worry about who is keeping a copy. The new economic formula is, the more people remix your works, the higher the return. I want to point out that Sharism is not Communism, nor Socialism. As for those die-hard Communists we know, they have often abused people’s sharing nature and forced them to give up their rights, and their property. Socialism, that tender Communism, in our experience also lacked respect for these rights. Under these systems, the state owns all property. Under Sharism, you can keep ownership, if you want. But I like to share. And this is how I choose to spread ideas, and prosperity. Sharism is totally based on your own consensus. It’s not a very hard concept to understand, especially since copyleft movements like the Free Software Foundation and Creative Commons have been around for years. These movements are redefining a more flexible spectrum of licenses for both developers and end-users to tag their works. Because the new licenses can be recognized by either humans or machines, it’s becoming easier to re-share those works in new online ecosystems. The Spirit of the Web, a Social Brain Sharism is the Spirit of the Age of Web 2.0. It has the consistency of a naturalized Epistemology and modernized Axiology, but also promises the power of a new Internet philosophy. Sharism will transform the world into an emergent Social Brain: a networked hybrid of people and software. We are Networked Neurons connected by the synapses of Social Software. This is an evolutionary leap, a small step for us and a giant one for human society. With new “hairy” emergent technologies sprouting all around us, we can generate higher connectivities and increase the throughput of our social links. The more open and strongly connected we social neurons are, the better the sharing environment will be for all people. The more collective our intelligence, the wiser our actions will be. People have always found better solutions through conversations. Now we can put it all online. Sharism will be the politics of the next global superpower. It will not be a country, but a new human network joined by Social Software. This may remain a distant dream, and even a well-defined public sharing policy might not be close at hand. But the ideas that I’m discussing can improve governments today.
We can integrate our current and emerging democratic systems with new folksonomies (based on the collaborative, social indexing of information) to enable people to make queries, share data and remix information for public use. The collective intelligence of a vast and equitable sharing environment can be the gatekeeper of our rights, and a government watchdog. In the future, policymaking can be made more nuanced with the micro-involvement of the sharing community. This “Emergent Democracy” is more real-time than periodic parliamentary sessions. It will also increase the spectrum of our choices, beyond the binary options of “Yes” or “No” referenda. Representative democracy will become more timely and diligent, because we will represent ourselves within the system. Sharism will result in better social justice. In a healthy sharing environment, any evidence of injustice can get amplified to get the public’s attention. Anyone who has been abused can get real and instant support from her peers and her peers’ peers. Appeals to justice will take the form of petitions through multiple, interconnected channels. Using these tools, anyone can create a large social impact. With multiple devices and many social applications, each of us can become more sociable, and society more individual. We no longer have to act alone. Emergent democracy will only happen when Sharism becomes the literacy of the majority. Since Sharism can improve communication, collaboration and mutual understanding, I believe it has a place within the educational system. Sharism can be applied to any cultural discourse, CoP (Community of Practice) or problem-solving context. It is also an antidote to social depression, since sharelessness is just dragging our society down. In present or formerly totalitarian countries, this downward cycle is even more apparent. The future world will be a hybrid of human and machine that will generate better and faster decisions anytime, anywhere. The flow of information between minds will become more flexible and more productive. These vast networks of sharing will create a new social order: a Mind Revolution!
A. Promising a more equitable future for all
How long did Thig spend traveling with Ellen while posing as Lewis? A. Four weeks B. Twelve weeks C. Four months D. Two weeks
QUEST OF THIG By BASIL WELLS Thig of Ortha was the vanguard of the conquering "HORDE." He had blasted across trackless space to subdue a defenseless world—only to meet on Earth emotions that were more deadly than weapons. [Transcriber's Note: This etext was produced from Planet Stories Fall 1942. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Thig carefully smoothed the dark sand and seaweed of the lonely beach over the metal lid of the flexible ringed tunnel that linked the grubby ship from another planet with the upper air. He looked out across the heaving waters of the Sound toward Connecticut. He stared appraisingly around at the luxuriant green growth of foliage further inland; and started toward the little stretch of trees and brush, walking carefully because of the lesser gravitation. Thig was shorter than the average Earthman—although on Ortha he was well above the average in height—but his body was thick and powerfully muscled. His skull was well-shaped and large; his features were regular, perhaps a trifle oversize, and his hair and eyes were a curiously matching blend of reddish brown. Oddest of all, he wore no garments, other than the necessary belt and straps to support his rod-like weapon of white metal and his pouches for food and specimens. The Orthan entered the narrow strip of trees and crossed to the little-used highway on the other side. Here he patiently sat down to wait for an Earthman or an Earthwoman to pass. His task now was to bring a native, intact if possible, back to the carefully buried space cruiser where his two fellows and himself would drain the creature's mentality of all its knowledge. In this way they could learn whether a planet was suited for colonization by later swarms of Orthans. Already they had charted over a hundred celestial bodies but of them all only three had proven worthy of consideration. This latest planet, however, 72-P-3 on the chart, appeared to be an ideal world in every respect. Sunlight, plenty of water and a dense atmospheric envelope made of 72-P-3 a paradise among planets. The explorer from another world crouched into the concealment of a leafy shrub. A creature was approaching. Its squat body was covered with baggy strips of bluish cloth and it carried a jointed rod of metal and wood in its paw. It walked upright as did the men of Ortha. Thig's cold eyes opened a trifle wider as he stared into the thing's stupid face. It was as though he was looking into a bit of polished metal at the reflection of himself! The Earthman was opposite now and he must waste no more precious time. The mighty muscles of the Orthan sent him hurtling across the intervening space in two prodigious bounds, and his hands clamped across the mouth and neck of the stranger.... Lewis Terry was going fishing. For a week the typewriter mill that had ground out a thousand assorted yarns of the untamed West and the frigid desolation of the Northwoods had been silent. Lewis wondered if he was going stale. He had sat every day for eight hours in front of that shiny-buttoned bane of the typist, but there were no results. Feebly he had punched a key two days ago and a $ sign had appeared. He hadn't dared touch the machine since. For Mr. Terry, that hard-hitting writer of two-gun action, had never been further west of Long Island than Elizabeth, and he had promised his wife, Ellen, that he would take the three children and herself on a trailer tour of the West that very summer. 
Since that promise, he could not write a word. Visions of whooping red-skinned Apaches and be-chapped outlaws raiding his little trailer home kept rolling up out of his subconscious. Yet he had to write at least three novelets and a fistful of short stories in the next two weeks to finance the great adventure—or the trip was off. So Lewis left the weathered old cottage in the early dawn and headed for his tubby old boat at the landing in an attempt to work out a salable yarn.... "Hey!" he shouted as a naked man sprang out of the bushes beside the road. "What's the trouble?" Then he had no time for further speech, the massive arms of the stranger had wound around him and two hamlike hands shut off his speech and his wind. He fought futilely against trained muscles. The hand clamping his throat relaxed for a moment and hacked along the side of his head. Blackness flooded the brain of Lewis, and he knew no more. "There it is," announced Thig, dropping the limp body of the captured Earthman to the metal deck-plates. "It is a male of the species that must have built the cities we saw as we landed." "He resembles Thig," announced Kam. "But for the strange covering he wears he might be Thig." "Thig will be this creature!" announced Torp. "With a psychic relay we will transfer the Earthman's memories and meager store of knowledge to the brain of Thig! He can then go out and scout this world without arousing suspicion. While he is gone, I will take Kam and explore the two inner planets." "You are the commander," said Thig. "But I wish this beast did not wear this clumsy sheathing upon his body. On Ortha we do not hamper the use of our limbs so." "Do not question the word of your commander," growled Torp, swelling out his thick chest menacingly. "It is for the good of our people that you disguise yourself as an Earthman." "For the good of the Horde," Thig intoned almost piously as he lifted Terry's body and headed for the laboratory. Service for the Horde was all that the men of Ortha knew. Carefully cultured and brought to life in the laboratories of their Horde, they knew neither father nor mother. Affection and love were entirely lacking in their early training and later life. They were trained antlike from childhood that only the growth and power of the Horde were of any moment. Men and women alike toiled and died like unfeeling robots of flesh and bone for the Horde. The Horde was their religion, their love-life, their everything! So it was that the bodies of the Earthman and the Orthan were strapped on two parallel tables of chill metal and the twin helmets, linked to one another by the intricacies of the psychic relay, put upon their heads. For ten hours or more the droning hum of the relay sucked Terry's brain dry of knowledge. The shock upon the nervous system of the Earthman proved too violent and his heart faltered after a time and stopped completely. Twice, with subtle drugs they restored pseudo-life to his body and kept the electrical impulses throbbing from his tortured brain, but after the third suspension of life Thig removed his helmet. "There is nothing more to learn," he informed his impassive comrades. "Now, let us get on with the plastic surgery that is required. My new body must return to its barbaric household before undue attention is aroused. And when I return I will take along some of the gleaming baubles we found on the red planet—these people value them highly."
An hour later, his scars and altered cartilage already healed and painless, Thig again scraped sand over the entrance to the space ship and set out along the moonlit beach toward the nearest path running inland to his home. Memory was laying the country bare about him, Terry's own childhood memories of this particular section of Long Island. Here was the place where Jake and Ted had helped him dig for the buried treasure that old 'Notch-ear' Beggs had told them so exactly about. Remembrance of that episode gave Thig an idea about the little lump of jewels in his pocket. He had found them in a chest along the beach! He was coming up on the porch now and at the sound of his foot on the sagging boards the screen door burst open and three little Earth-creatures were hugging at his legs. An odd sensation, that his acquired memories labeled as pleasure, sent a warm glow upward from around his heart. Then he saw the slender red-haired shape of a woman, the mate of the dead man he knew, and confusion struck his well-trained brain. Men had no mates on Ortha, sex had been overthrown with all the other primitive impulses of barbarism; so he was incapable of understanding the emotions that swept through his acquired memory. Unsteadily he took her in his arms and felt her warm lips pressed, trembling, against his own. That same hot wave of pulsing blood choked achingly up into his throat. "Lew, dear," Ellen was asking, "where have you been all day? I called up at the landing but you were not there. I wanted to let you know that Saddlebag Publications sent a check for $50 for "Reversed Revolvers" and three other editors asked for shorts soon." "Shoulda got a hundred bucks for that yarn," grunted Thig, and gasped. For the moment he had been Lewis Terry and not Thig! So thoroughly had he acquired the knowledge of Terry that he found himself unconsciously adopting the thinking and mannerism of the other. All the better this way, he realized—more natural. "Sorry I was late," he said, digging into his pocket for the glittering baubles, "but I was poking around on the beach where we used to hunt treasure and I found an old chest. Inside it I found nothing but a handful of these." He flashed the jewels in front of Ellen's startled eyes and she clung, unbelieving, to his arm. "Why, Lew," she gasped, "they're worth a fortune! We can buy that new trailer now and have a rebuilt motor in the car. We can go west right away.... Hollywood, the Grand Canyon, cowboys!" "Uh huh," agreed the pseudo Lewis, memories of the ferocious savages and gunmen of his stories rendering him acutely unhappy. Sincerely he hoped that the west had reformed. "I saved some kraut and weiners," Ellen said. "Get washed up while I'm warming them up. Kids ate all the bread so I had to borrow some from the Eskoes. Want coffee, too?" "Mmmmmm," came from the depths of the chipped white wash-basin. "Home again," whispered Ellen as she stood beside Thig twelve weeks later and gazed tearfully at the weathered little gray house. She knelt beside the front stoop and reached for the key hidden beneath it. "The west was wonderful; tremendous, vast and beautiful," she went on as they climbed the steps, "but nowhere was there any place as beautiful as our own little strip of sky and water." Thig sank into a dusty old swing that hung on creaking chains from the exposed rafters of the porch roof. He looked down at the dusty gray car and the bulbous silvery bulk of the trailer that had been their living quarters for almost three months. 
Strange thoughts were afloat in the chaos of his cool Orthan brain. Tonight or tomorrow night at the latest he must contact his two fellows and report that Earth was a planetary paradise. No other world, including Ortha, was so well-favored and rich. An expeditionary force to wipe the grotesque civilizations of Earth out of existence would, of course, be necessary before the first units of new Hordes could be landed. And there Thig balked. Why must they destroy these people, imperfect though their civilization might be, to make room for the Hordes? Thig tried to tell himself that it was the transmitted thoughts of the dead Earthman that made him feel so, but he was not too sure. For three months he had lived with people who loved, hated, wept and sacrificed for reasons that he had never known existed. He had learned the heady glory of thinking for himself and making his own decisions. He had experienced the primitive joy of matching his wits and tongue against the wits of other unpredictable human beings. There was no abrupt division of men and women into definite classes of endeavor. A laborer thought the same thoughts that a governor might think. Uncertainty added zest to every day's life. The Orthan had come to question the sole devotion of the individual to the Horde to the exclusion of all other interests. What, he wondered, would one new world—or a hundred—populated by the Hordes add to the progress of humanity? For a hundred thousand years the Orthan civilization had remained static, its energies directed into certain well-defined channels. They were mindless bees maintaining their vast mechanical hives. There was that moment on the brink of the Grand Canyon when Ellen had caught his arm breathlessly at all the beauty spread away there beneath them. There were mornings in the desert when the sun painted in lurid red the peaks above the harsh black-and-whites of the sagebrush and cactus slopes. There was the little boy, his body burning with fever, who nestled trustingly against his tense man's body and slept—the son of Ellen and the man he had destroyed. Thig groaned. He was a weakling to let sentimentality so get the better of his judgment. He would go now to the space ship and urge them to blast off for Ortha. He sprang off the porch and strode away down the road toward the beach. The children ran to him; wanted to go along. He sent them away harshly but they smiled and waved their brown little hands. Ellen came to the door and called after him. "Hurry home, dear," she said. "I'll have a bite ready in about an hour." He dared not say anything, for his voice would have broken and she would have known something was wrong. She was a very wise sort of person when something was troubling him. He waved his stubby paw of a hand to show that he had heard, and blindly hurried toward the Sound. Oddly enough, as he hurried away along the narrow path through the autumn woods, his mind busied itself with a new epic of the west that lived no longer. He mentally titled it: "Rustlers' Riot" and blocked in the outlines of his plot. One section of his brain was that of the careless author of gunslinging yarns, a section that seemed to be sapping the life from his own brain. He knew that the story would never be written, but he toyed with the idea. So far had Thig the emotionless, robot-being from Ortha drifted from the unquestioning worship of the Horde! "You have done well," announced Torp when Thig had completed his report on the resources and temperatures of various sections of Terra. 
"We now have located three worlds fit for colonization and so we will return to Ortha at once. "I will recommend the conquest of this planet, 72-P-3 at once and the complete destruction of all biped life upon it. The mental aberrations of the barbaric natives might lead to endless complications if they were permitted to exist outside our ordered way of life. I imagine that three circuits of the planet about its primary should prove sufficient for the purposes of complete liquidation." "But why," asked Thig slowly, "could we not disarm all the natives and exile them on one of the less desirable continents, Antarctica for example or Siberia? They are primitive humans even as our race was once a race of primitives. It is not our duty to help to attain our own degree of knowledge and comfort?" "Only the good of the Horde matters!" shouted Torp angrily. "Shall a race of feeble-witted beasts, such as these Earthmen, stand in the way of a superior race? We want their world, and so we will take it. The Law of the Horde states that all the universe is ours for the taking." "Let us get back to Ortha at once, then," gritted out Thig savagely. "Never again do I wish to set foot upon the soil of this mad planet. There are forces at work upon Earth that we of Ortha have long forgotten." "Check the blood of Thig for disease, Kam," ordered Torp shortly. "His words are highly irrational. Some form of fever perhaps native to this world. While you examine him I will blast off for Ortha." Thig followed Kam into the tiny laboratory and found a seat beside the squat scientist's desk. His eyes roamed over the familiar instruments and gauges, each in its own precise position in the cases along the walls. His gaze lingered longest on the stubby black ugliness of a decomposition blaster in its rack close to the deck. A blast of the invisible radiations from that weapon's hot throat and flesh or vegetable fiber rotted into flaky ashes. The ship trembled beneath their feet; it tore free from the feeble clutch of the sand about it, and they were rocketing skyward. Thig's broad fingers bit deep into the unyielding metal of his chair. Suddenly he knew that he must go back to Earth, back to Ellen and the children of the man he had helped destroy. He loved Ellen, and nothing must stand between them! The Hordes of Ortha must find some other world, an empty world—this planet was not for them. "Turn back!" he cried wildly. "I must go back to Earth. There is a woman there, helpless and alone, who needs me! The Horde does not need this planet." Kam eyed him coldly and lifted a shining hypodermic syringe from its case. He approached Thig warily, aware that disease often made a maniac of the finest members of the Horde. "No human being is more important than the Horde," he stated baldly. "This woman of whom you speak is merely one unit of the millions we must eliminate for the good of the Horde." Then it was that Thig went berserk. His fists slashed into the thick jaw of the scientist and his fingers ripped at the hard cords overlying the Orthan's vital throat tubes. His fingers and thumb gouged deep into Kam's startled throat and choked off any cry for assistance before it could be uttered. Kam's hand swept down to the holster swung from his intricate harness and dragged his blaster from it. Thig's other hand clamped over his and for long moments they swayed there, locked together in silent deadly struggle. The fate of a world hung in the balance as Kam's other hand fought against that lone arm of Thig. 
The scales swung in favor of Kam. Slowly the flaring snout of his weapon tilted upward until it reached the level of Thig's waist. Thig suddenly released his grip and dragged his enemy toward him. A sudden reversal of pressure on Kam's gun hand sent the weapon swivelling about full upon its owner's thick torso. Thig's fingers pressed down upon Kam's button finger, down upon the stud set into the grip of the decomposition blaster, and Kam's muscles turned to water. He shrieked. Before Thig's eyes half of his comrade's body sloughed away into foul corruption that swiftly gave way to hardened blobs of desiccated matter. Horror for what he had done—that he had slain one of his own Horde—made his limbs move woodenly. All of his thoughts were dulled for the moment. Painfully slow, he turned his body around toward the control blister, turned around on leaden feet, to look full into the narrowed icy eyes of his commander. He saw the heavy barrel of the blaster slashing down against his skull but he could not swing a fraction of an inch out of the way. His body seemed paralyzed. This was the end, he thought as he waited stupidly for the blow to fall, the end for Ellen and the kids and all the struggling races of Earth. He would never write another cowboy yarn—they would all be dead anyhow soon. Then a thunderclap exploded against his head and he dropped endlessly toward the deck. Blows rained against his skull. He wondered if Torp would ever cease to hammer at him and turn the deadly ray of the weapon upon him. Blood throbbed and pounded with every blow.... Bam, Bam, Bam, the blood pounded in his ears. Like repeated blows of a hammer they shook his booming head. No longer was Torp above him. He was in the corner of the laboratory, a crumpled blood-smeared heap of bruised flesh and bone. He was unfettered and the blood was caked upon his skull and in his matted hair. Torp must have thought he had killed him with those savage blows upon the head. Even Torp, thought Thig ruefully, gave way to the primitive rage of his ancestors at times; but to that very bit of unconscious atavism he now owed his life. A cool-headed robot of an Orthan would have efficiently used the blaster to destroy any possibility of remaining life in his unconscious body. Thig rolled slowly over so that his eye found the door into the control room. Torp would be coming back again to dispose of their bodies through the refuse lock. Already the body of Kam was gone. He wondered why he had been left until last. Perhaps Torp wished to take cultures of his blood and tissues to determine whether a disease was responsible for his sudden madness. The cases of fragile instruments were just above his head. Association of memories brought him the flash of the heavy blaster in its rack beneath them. His hand went up and felt the welcome hardness of the weapon. He tugged it free. In a moment he was on his knees crawling across the plates of the deck toward the door. Halfway across the floor he collapsed on his face, the metal of the gun making a harsh clang. He heard the feet of Torp scuffle out of silence and a choked cry in the man's throat squalled out into a senseless whinny. Thig raised himself up on a quivering elbow and slid the black length of the blaster in front of him. His eyes sought the doorway and stared full into the glaring vacant orbs of his commander. Torp leaned there watching him, his breath gurgling brokenly through his deep-bitten lips. The clawing marks of nails, fingernails, furrowed his face and chest. He was a madman!
The deadly attack of Thig, his own violent avenging of Kam's death, and now the apparent return of the man he had killed come to life had all served to jolt his rigidly trained brain from its accustomed groove. The shock had been too much for the established thought-processes of the Orthan. So Thig shot him where he stood, mercifully, before that vacant mad stare set him, too, to gibbering and shrieking. Then he stepped over the skeleton-thing that had been Torp, using the new strength that victory had given him to drive him along. He had saved a world's civilization from extinction! The thought sobered him; yet, somehow, he was pleased that he had done so. After all, it had been the Earthwoman and the children he had been thinking of while he battled Kam, a selfish desire to protect them all. He went to the desk where Torp had been writing in the ship's log and read the last few nervously scrawled lines: Planet 72-P-3 unfit for colonization. Some pernicious disease that strikes at the brain centers and causes violent insanity is existent there. Thig, just returned from a survey of the planet, went mad and destroyed Kam. In turn I was forced to slay him. But it is not ended. Already I feel the insidious virus of.... And there his writing ended abruptly. Thig nodded. That would do it. He set the automatic pilot for the planet Ortha. Unless a rogue asteroid or a comet crossed the ship's path she would return safely to Ortha with that mute warning of danger on 72-P-3. The body of Torp would help to confirm his final message. Then Thig crossed the cabin to the auxiliary life boat there, one of a half-dozen space ships in miniature nested within the great ship's hull, and cut free from the mother vessel. He flipped the drive lever, felt the thrumming of the rockets driving him from the parent ship. The sensation of free flight against his new body was strangely exhilarating and heady. It was the newest of the emotions he had experienced on Earth since that day, so many months before, when he had felt the warmness of Ellen's lips tight against his. He swung about to the port, watched the flaming drive-rockets of the great exploratory ship hurl it toward far-away Ortha, and there was no regret in his mind that he was not returning to the planet of his first existence. He thought of the dull greys and blacks of his planet, of the monotonous routine of existence that had once been his—and his heart thrilled to the memories of the starry nights and perfect exciting days he had spent on his three-month trip over Earth. He made a brief salute to the existence he had known, turned with a tiny sigh, and his fingers made brief adjustments in the controls. The rocket-thrum deepened, and the thin whistle of tenuous air clutching the ship echoed through the hull-plates. He thought of many things in those few moments. He watched the roundness of Earth flatten out, then take on the cup-like illusion that all planets had for an incoming ship. He reduced the drive of his rockets to a mere whisper, striving to control the impatience that crowded his mind. He shivered suddenly, remembering his utter callousness the first time he had sent a space ship whipping down toward the hills and valleys below. And there was a sickness within him when he fully realized that, despite his acquired memory and traits, he was an alien from outer space.
He fingered the tiny scars that had completely obliterated the slight differences in his appearance from an Earthman's, and his fingers trembled a bit, as he bent and stared through the vision port. He said a brief prayer in his heart to a God whose presence he now felt very deeply. There were tears in the depths of his eyes, then, and memories were hot, bitter pains. Earth was not far below him. As he let gravity suck him earthward, he heaved a gasp of relief. He was no longer Thig, a creature of a Horde's creation, but Lewis Terry, writer of lurid gun-smoking tales of the West. He must remember that always. He had destroyed the real Terry and now, for the rest of his life, he must make up to the dead man's family. The knowledge that Ellen's love was not really meant for him would be a knife twisting in his heart but for her sake he must endure it. Her dreams and happiness must never be shattered. The bulge of Earth was flattening out now and he could see the outlines of Long Island in the growing twilight. A new plot was growing in the brain of Lewis Terry, a yarn about a cowboy suddenly transported to another world. He smiled ironically. He had seen those other worlds. Perhaps some day he would write about them.... He was Lewis Terry! He must remember that!
B. Twelve weeks
What is one limit all kinds of OA put on user freedom? A. A constraint on library privileges. B. A limit on text mining. C. A constraint on reproduction and distribution. D. There is an obligation to credit the work to the author.
What Is Open Access? Shifting from ink on paper to digital text suddenly allows us to make perfect copies of our work. Shifting from isolated computers to a globe-spanning network of connected computers suddenly allows us to share perfect copies of our work with a worldwide audience at essentially no cost. About thirty years ago this kind of free global sharing became something new under the sun. Before that, it would have sounded like a quixotic dream. Digital technologies have created more than one revolution. Let’s call this one the access revolution. Why don’t more authors take advantage of the access revolution to reach more readers? The answer is pretty clear. Authors who share their works in this way aren’t selling them, and even authors with purposes higher than money depend on sales to make a living. Or at least they appreciate sales. Let’s sharpen the question, then, by putting to one side authors who want to sell their work. We can even acknowledge that we’re putting aside the vast majority of authors. Imagine a tribe of authors who write serious and useful work, and who follow a centuries-old custom of giving it away without charge. I don’t mean a group of rich authors who don’t need money. I mean a group of authors defined by their topics, genres, purposes, incentives, and institutional circumstances, not by their wealth. In fact, very few are wealthy. For now, it doesn’t matter who these authors are, how rare they are, what they write, or why they follow this peculiar custom. It’s enough to know that their employers pay them salaries, freeing them to give away their work, that they write for impact rather than money, and that they score career points when they make the kind of impact they hoped to make. Suppose that selling their work would actually harm their interests by shrinking their audience, reducing their impact, and distorting their professional goals by steering them toward popular topics and away from the specialized questions on which they are experts. If authors like that exist, at least they should take advantage of the access revolution. The dream of global free access can be a reality for them, even if most other authors hope to earn royalties and feel obliged to sit out this particular revolution. These lucky authors are scholars, and the works they customarily write and publish without payment are peer-reviewed articles in scholarly journals. Open access is the name of the revolutionary kind of access these authors, unencumbered by a motive of financial gain, are free to provide to their readers. Open access (OA) literature is digital, online, free of charge, and free of most copyright and licensing restrictions. We could call it “barrier-free” access, but that would emphasize the negative rather than the positive. In any case, we can be more specific about which access barriers OA removes. A price tag is a significant access barrier. Most works with price tags are individually affordable. But when a scholar needs to read or consult hundreds of works for one research project, or when a library must provide access for thousands of faculty and students working on tens of thousands of topics, and when the volume of new work grows explosively every year, price barriers become insurmountable. The resulting access gaps harm authors by limiting their audience and impact, harm readers by limiting what they can retrieve and read, and thereby harm research from both directions. OA removes price barriers. Copyright can also be a significant access barrier. 
If you have access to a work for reading but want to translate it into another language, distribute copies to colleagues, copy the text for mining with sophisticated software, or reformat it for reading with new technology, then you generally need the permission of the copyright holder. That makes sense when the author wants to sell the work and when the use you have in mind could undermine sales. But for research articles we’re generally talking about authors from the special tribe who want to share their work as widely as possible. Even these authors, however, tend to transfer their copyrights to intermediaries—publishers—who want to sell their work. As a result, users may be hampered in their research by barriers erected to serve intermediaries rather than authors. In addition, replacing user freedom with permission-seeking harms research authors by limiting the usefulness of their work, harms research readers by limiting the uses they may make of works even when they have access, and thereby harms research from both directions. OA removes these permission barriers. Removing price barriers means that readers are not limited by their own ability to pay, or by the budgets of the institutions where they may have library privileges. Removing permission barriers means that scholars are free to use or reuse literature for scholarly purposes. These purposes include reading and searching, but also redistributing, translating, text mining, migrating to new media, long-term archiving, and innumerable new forms of research, analysis, and processing we haven’t yet imagined. OA makes work more useful in both ways, by making it available to more people who can put it to use, and by freeing those people to use and reuse it. Terminology When we need to, we can be more specific about access vehicles and access barriers. In the jargon, OA delivered by journals is called gold OA, and OA delivered by repositories is called green OA. Work that is not open access, or that is available only for a price, is called toll access (TA). Over the years I’ve asked publishers for a neutral, nonpejorative and nonhonorific term for toll-access publishers, and conventional publishers is the suggestion I hear most often. While every kind of OA removes price barriers, there are many different permission barriers we could remove if we wanted to. If we remove price barriers alone, we provide gratis OA, and if we remove at least some permission barriers as well, we provide libre OA. (Also see section 3.1 on green/gold and section 3.3 on gratis/libre.) OA was defined in three influential public statements: the Budapest Open Access Initiative (February 2002), the Bethesda Statement on Open Access Publishing (June 2003), and the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities (October 2003). I sometimes refer to their overlap or common ground as the BBB definition of OA. My definition here is the BBB definition reduced to its essential elements and refined with some post-BBB terminology (green, gold, gratis, libre) for speaking precisely about subspecies of OA. Here’s how the Budapest statement defined OA: There are many degrees and kinds of wider and easier access to [research] literature.
By “open access” to this literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited. Here’s how the Bethesda and Berlin statements put it: For a work to be OA, the copyright holder must consent in advance to let users “copy, use, distribute, transmit and display the work publicly and to make and distribute derivative works, in any digital medium for any responsible purpose, subject to proper attribution of authorship.” Note that all three legs of the BBB definition go beyond removing price barriers to removing permission barriers, or beyond gratis OA to libre OA. But at the same time, all three allow at least one limit on user freedom: an obligation to attribute the work to the author. The purpose of OA is to remove barriers to all legitimate scholarly uses for scholarly literature, but there’s no legitimate scholarly purpose in suppressing attribution to the texts we use. (That’s why my shorthand definition says that OA literature is free of “most” rather than “all” copyright and licensing restrictions.) The basic idea of OA is simple: Make research literature available online without price barriers and without most permission barriers. Even the implementation is simple enough that the volume of peer-reviewed OA literature and the number of institutions providing it have grown at an increasing rate for more than a decade. If there are complexities, they lie in the transition from where we are now to a world in which OA is the default for new research. This is complicated because the major obstacles are not technical, legal, or economic, but cultural. (More in chapter 9 on the future.) In principle, any kind of digital content can be OA, since any digital content can be put online without price or permission barriers. Moreover, any kind of content can be digital: texts, data, images, audio, video, multimedia, and executable code. We can have OA music and movies, news and novels, sitcoms and software—and to different degrees we already do. But the term “open access” was coined by researchers trying to remove access barriers to research. The next section explains why. 1.1 What Makes OA Possible? OA is made possible by the internet and copyright-holder consent. But why would a copyright holder consent to OA? Two background facts suggest the answer. First, authors are the copyright holders for their work until or unless they transfer rights to someone else, such as a publisher. Second, scholarly journals generally don’t pay authors for their research articles, which frees this special tribe of authors to consent to OA without losing revenue. This fact distinguishes scholars decisively from musicians and moviemakers, and even from most other kinds of authors. This is why controversies about OA to music and movies don’t carry over to OA for research articles. Both facts are critical, but the second is nearly unknown outside the academic world. 
It’s not a new fact of academic life, arising from a recent economic downturn in the publishing industry. Nor is it a case of corporate exploitation of unworldly academics. Scholarly journals haven’t paid authors for their articles since the first scholarly journals, the Philosophical Transactions of the Royal Society of London and the Journal des sçavans, launched in London and Paris in 1665. The academic custom to write research articles for impact rather than money may be a lucky accident that could have been otherwise. Or it may be a wise adaptation that would eventually evolve in any culture with a serious research subculture. (The optimist in me wants to believe the latter, but the evolution of copyright law taunts that optimism.) This peculiar custom does more than insulate cutting-edge research from the market and free scholars to consent to OA without losing revenue. It also supports academic freedom and the kinds of serious inquiry that advance knowledge. It frees researchers to challenge conventional wisdom and defend unpopular ideas, which are essential to academic freedom. At the same time it frees them to microspecialize and defend ideas of immediate interest to just a handful of people in the world, which are essential to pushing the frontiers of knowledge. This custom doesn’t guarantee that truth-seeking won’t be derailed by profit-seeking, and it doesn’t guarantee that we’ll eventually fill the smallest gaps in our collaborative understanding of the world. It doesn’t even guarantee that scholars won’t sometimes play for the crowd and detour into fad thinking. But it removes a major distraction by allowing them, if they wish, to focus on what is likely to be true rather than what is likely to sell. It’s a payment structure we need for good research itself, not just for good access to research, and it’s the key to the legal and economic lock that would otherwise shackle steps toward OA. Creative people who live by royalties, such as novelists, musicians, and moviemakers, may consider this scholarly tradition a burden and sacrifice for scholars. We might even agree, provided we don’t overlook a few facts. First, it’s a sacrifice that scholars have been making for nearly 350 years. OA to research articles doesn’t depend on asking royalty-earning authors to give up their royalties. Second, academics have salaries from universities, freeing them to dive deeply into their research topics and publish specialized articles without market appeal. Many musicians and moviemakers might envy that freedom to disregard sales and popular taste. Third, academics receive other, less tangible rewards from their institutions—like promotion and tenure—when their research is recognized by others, accepted, cited, applied, and built upon. It’s no accident that faculty who advance knowledge in their fields also advance their careers. Academics are passionate about certain topics, ideas, questions, inquiries, or disciplines. They feel lucky to have jobs in which they may pursue these passions and even luckier to be rewarded for pursuing them. Some focus single-mindedly on carrying an honest pebble to the pile of knowledge (as John Lange put it), having an impact on their field, or scooping others working on the same questions. Others focus strategically on building the case for promotion and tenure. But the two paths converge, which is not a fortuitous fact of nature but an engineered fact of life in the academy.
As incentives for productivity, these intangible career benefits may be stronger for the average researcher than royalties are for the average novelist or musician. (In both domains, bountiful royalties for superstars tell us nothing about effective payment models for the long tail of less stellar professionals.) There’s no sense in which research would be more free, efficient, or effective if academics took a more “businesslike” position, behaved more like musicians and moviemakers, abandoned their insulation from the market, and tied their income to the popularity of their ideas. Nonacademics who urge academics to come to their senses and demand royalties even for journal articles may be more naive about nonprofit research than academics are about for-profit business. We can take this a step further. Scholars can afford to ignore sales because they have salaries and research grants to take the place of royalties. But why do universities pay salaries and why do funding agencies award grants? They do it to advance research and the range of public interests served by research. They don’t do it to earn profits from the results. They are all nonprofit. They certainly don’t do it to make scholarly writings into gifts to enrich publishers, especially when conventional publishers erect access barriers at the expense of research. Universities and funding agencies pay researchers to make their research into gifts to the public in the widest sense. Public and private funding agencies are essentially public and private charities, funding research they regard as useful or beneficial. Universities have a public purpose as well, even when they are private institutions. We support the public institutions with public funds, and we support the private ones with tax exemptions for their property and tax deductions for their donors. We’d have less knowledge, less academic freedom, and less OA if researchers worked for royalties and made their research articles into commodities rather than gifts. It should be no surprise, then, that more and more funding agencies and universities are adopting strong OA policies. Their mission to advance research leads them directly to the logic of OA: With a few exceptions, such as classified research, research that is worth funding or facilitating is worth sharing with everyone who can make use of it. (See chapter 4 on OA policies.) Newcomers to OA often assume that OA helps readers and hurts authors, and that the reader side of the scholarly soul must beg the author side to make the necessary sacrifice. But OA benefits authors as well as readers. Authors want access to readers at least as much as readers want access to authors. All authors want to cultivate a larger audience and greater impact. Authors who work for royalties have reason to compromise and settle for the smaller audience of paying customers. But authors who aren’t paid for their writing have no reason to compromise. It takes nothing away from a disinterested desire to advance knowledge to recognize that scholarly publication is accompanied by a strong interest in impact and career building. The result is a mix of interested and disinterested motives. The reasons to make work OA are essentially the same as the reasons to publish. Authors who make their work OA are always serving others but not always acting from altruism. In fact, the idea that OA depends on author altruism slows down OA progress by hiding the role of author self-interest.
Another aspect of author self-interest emerges from the well-documented phenomenon that OA articles are cited more often than non-OA articles, even when they are published in the same issue of the same journal. There’s growing evidence that OA articles are downloaded more often as well, and that journals converting to OA see a rise in their submissions and citation impact. There are many hypotheses to explain the correlation between OA and increased citations, but it’s likely that ongoing studies will show that much of the correlation is simply due to the larger audience and heightened visibility provided by OA itself. When you enlarge the audience for an article, you also enlarge the subset of the audience that will later cite it, including professionals in the same field at institutions unable to afford subscription access. OA enlarges the potential audience, including the potential professional audience, far beyond that for even the most prestigious and popular subscription journals. In any case, these studies bring a welcome note of author self-interest to the case for OA. OA is not a sacrifice for authors who write for impact rather than money. It increases a work’s visibility, retrievability, audience, usage, and citations, which all convert to career building. For publishing scholars, it would be a bargain even if it were costly, difficult, and time-consuming. But as we’ll see, it’s not costly, not difficult, and not time-consuming. My colleague Stevan Harnad frequently compares research articles to advertisements. They advertise the author’s research. Try telling advertisers that they’re making a needless sacrifice by allowing people to read their ads without having to pay for the privilege. Advertisers give away their ads and even pay to place them where they might be seen. They do this to benefit themselves, and scholars have the same interest in sharing their message as widely as possible. Because any content can be digital, and any digital content can be OA, OA needn’t be limited to royalty-free literature like research articles. Research articles are just ripe examples of low-hanging fruit. OA could extend to royalty-producing work like monographs, textbooks, novels, news, music, and movies. But as soon as we cross the line into OA for royalty-producing work, authors will either lose revenue or fear that they will lose revenue. Either way, they’ll be harder to persuade. But instead of concluding that royalty-producing work is off limits to OA, we should merely conclude that it’s higher-hanging fruit. In many cases we can still persuade royalty-earning authors to consent to OA. (See section 5.3 on OA for books.) Authors of scholarly research articles aren’t the only players who work without pay in the production of research literature. In general, scholarly journals don’t pay editors or referees either. In general, editors and referees are paid salaries by universities to free them, like authors, to donate their time and labor to ensure the quality of new work appearing in scholarly journals. An important consequence follows. All the key players in peer review can consent to OA without losing revenue. OA needn’t dispense with peer review or favor unrefereed manuscripts over refereed articles. We can aim for the prize of OA to peer-reviewed scholarship. (See section 5.1 on peer review.) Of course, conventional publishers are not as free as authors, editors, and referees to forgo revenue. 
This is a central fact in the transition to OA, and it explains why the interests of scholars and conventional publishers diverge more in the digital age than they diverged earlier. But not all publishers are conventional, and not all conventional publishers will carry print-era business models into the digital age. Academic publishers are not monolithic. Some new ones were born OA and some older ones have completely converted to OA. Many provide OA to some of their work but not all of it. Some are experimenting with OA, and some are watching the experiments of others. Most allow green OA (through repositories) and a growing number offer at least some kind of gold OA (through journals). Some are supportive, some undecided, some opposed. Among the opposed, some have merely decided not to provide OA themselves, while others lobby actively against policies to encourage or require OA. Some oppose gold but not green OA, while others oppose green but not gold OA. OA gains nothing and loses potential allies by blurring these distinctions. This variety reminds us (to paraphrase Tim O’Reilly) that OA doesn’t threaten publishing; it only threatens existing publishers who do not adapt. A growing number of journal publishers have chosen business models allowing them to dispense with subscription revenue and offer OA. They have expenses but they also have revenue to cover their expenses. In fact, some OA publishers are for-profit and profitable. (See chapter 7 on economics.) Moreover, peer review is done by dedicated volunteers who don’t care how a journal pays its bills, or even whether the journal is in the red or the black. If all peer-reviewed journals converted to OA overnight, the authors, editors, and referees would have the same incentives to participate in peer review that they had the day before. They needn’t stop offering their services, needn’t lower their standards, and needn’t make sacrifices they weren’t already making. They volunteer their time not because of a journal’s choice of business model but because of its contribution to research. They could carry on with solvent or insolvent subscription publishers, with solvent or insolvent OA publishers, or even without publishers. The Budapest Open Access Initiative said in February 2002: “An old tradition and a new technology have converged to make possible an unprecedented public good. The old tradition is the willingness of scientists and scholars to publish the fruits of their research in scholarly journals without payment. . . . The new technology is the internet.” To see what this willingness looks like without the medium to give it effect, look at scholarship in the age of print. Author gifts turned into publisher commodities, and access gaps for readers were harmfully large and widespread. (Access gaps are still harmfully large and widespread, but only because OA is not yet the default for new research.) To see what the medium looks like without the willingness, look at music and movies in the age of the internet. The need for royalties keeps creators from reaching everyone who would enjoy their work. A beautiful opportunity exists where the willingness and the medium overlap. A scholarly custom that evolved in the seventeenth century frees scholars to take advantage of the access revolution in the twentieth and twenty-first. Because scholars are nearly unique in following this custom, they are nearly unique in their freedom to take advantage of this revolution without financial risk. 
In this sense, the planets have aligned for scholars. Most other authors are constrained to fear rather than seize the opportunities created by the internet. 1.2 What OA Is Not We can dispel a cloud of objections and misunderstandings simply by pointing out a few things that OA is not. (Many of these points will be elaborated in later chapters.) OA isn’t an attempt to bypass peer review. OA is compatible with every kind of peer review, from the most conservative to the most innovative, and all the major public statements on OA insist on its importance. Because scholarly journals generally don’t pay peer-reviewing editors and referees, just as they don’t pay authors, all the participants in peer review can consent to OA without losing revenue. While OA to unrefereed preprints is useful and widespread, the OA movement isn’t limited to unrefereed preprints and, if anything, focuses on OA to peer-reviewed articles. (More in section 5.1 on peer review.) OA isn’t an attempt to reform, violate, or abolish copyright. It’s compatible with copyright law as it is. OA would benefit from the right kinds of copyright reforms, and many dedicated people are working on them. But it needn’t wait for reforms and hasn’t waited. OA literature avoids copyright problems in exactly the same way that conventional toll-access literature does. For older works, it takes advantage of the public domain, and for newer works, it rests on copyright-holder consent. (More in chapter 4 on policies and chapter 6 on copyright.) OA isn’t an attempt to deprive royalty-earning authors of income. The OA movement focuses on research articles precisely because they don’t pay royalties. In any case, inside and outside that focus, OA for copyrighted work depends on copyright-holder consent. Hence, royalty-earning authors have nothing to fear but persuasion that the benefits of OA might outweigh the risks to royalties. (More in section 5.3 on OA for books.) OA isn’t an attempt to deny the reality of costs. No serious OA advocate has ever argued that OA literature is costless to produce, although many argue that it is less expensive to produce than conventionally published literature, even less expensive than born-digital toll-access literature. The question is not whether research literature can be made costless, but whether there are better ways to pay the bills than charging readers and creating access barriers. (More in chapter 7 on economics.) Terminology We could talk about vigilante OA, infringing OA, piratical OA, or OA without consent. That sort of OA could violate copyrights and deprive royalty-earning authors of royalties against their will. But we could also talk about vigilante publishing, infringing publishing, piratical publishing, or publishing without consent. Both happen. However, we generally reserve the term “publishing” for lawful publishing, and tack on special adjectives to describe unlawful variations on the theme. Likewise, I’ll reserve the term “open access” for lawful OA that carries the consent of the relevant rightsholder. OA isn’t an attempt to reduce authors’ rights over their work. On the contrary, OA depends on author decisions and requires authors to exercise more rights or control over their work than they are allowed to exercise under traditional publishing contracts. One OA strategy is for authors to retain some of the rights they formerly gave publishers, including the right to authorize OA. 
Another OA strategy is for publishers to permit more uses than they formerly permitted, including permission for authors to make OA copies of their work. By contrast, traditional journal-publishing contracts demand that authors transfer all rights to publishers, and author rights or control cannot sink lower than that. (See chapters 4 on policies and 6 on copyright.) OA isn’t an attempt to reduce academic freedom. Academic authors remain free to submit their work to the journals or publishers of their choice. Policies requiring OA do so conditionally, for example, for researchers who choose to apply for a certain kind of grant. In addition, these policies generally build in exceptions, waiver options, or both. Since 2008 most university OA policies have been adopted by faculty deeply concerned to preserve and even enhance their prerogatives. (See chapter 4 on OA policies.) OA isn’t an attempt to relax rules against plagiarism. All the public definitions of OA support author attribution, even construed as a “restriction” on users. All the major open licenses require author attribution. Moreover, plagiarism is typically punished by the plagiarist’s institution rather than by courts, that is, by social norms rather than by law. Hence, even when attribution is not legally required, plagiarism is still a punishable offense and no OA policy anywhere interferes with those punishments. In any case, if making literature digital and online makes plagiarism easier to commit, then OA makes plagiarism easier to detect. Not all plagiarists are smart, but the smart ones will not steal from OA sources indexed in every search engine. In this sense, OA deters plagiarism. OA isn’t an attempt to punish or undermine conventional publishers. OA is an attempt to advance the interests of research, researchers, and research institutions. The goal is constructive, not destructive. If OA does eventually harm toll-access publishers, it will be in the way that personal computers harmed typewriter manufacturers. The harm was not the goal, but a side effect of developing something better. Moreover, OA doesn’t challenge publishers or publishing per se, just one business model for publishing, and it’s far easier for conventional publishers to adapt to OA than for typewriter manufacturers to adapt to computers. In fact, most toll-access publishers are already adapting, by allowing author-initiated OA, providing some OA themselves, or experimenting with OA. (See section 3.1 on green OA and chapter 8 on casualties.) OA doesn’t require boycotting any kind of literature or publisher. It doesn’t require boycotting toll-access research any more than free online journalism requires boycotting priced online journalism. OA doesn’t require us to strike toll-access literature from our personal reading lists, course syllabi, or libraries. Some scholars who support OA decide to submit new work only to OA journals, or to donate their time as editors or referees only to OA journals, in effect boycotting toll-access journals as authors, editors, and referees. But this choice is not forced by the definition of OA, by a commitment to OA, or by any OA policy, and most scholars who support OA continue to work with toll-access journals. In any case, even those scholars who do boycott toll-access journals as authors, editors, or referees don’t boycott them as readers. 
(Here we needn’t get into the complexity that some toll-access journals effectively create involuntary reader boycotts by pricing their journals out of reach of readers who want access.) OA isn’t primarily about bringing access to lay readers. If anything, the OA movement focuses on bringing access to professional researchers whose careers depend on access. But there’s no need to decide which users are primary and which are secondary. The publishing lobby sometimes argues that the primary beneficiaries of OA are lay readers, perhaps to avoid acknowledging how many professional researchers lack access, or perhaps to set up the patronizing counter-argument that lay people don’t care to read research literature and wouldn’t understand it if they tried. OA is about bringing access to everyone with an internet connection who wants access, regardless of their professions or purposes. There’s no doubt that if we put “professional researchers” and “everyone else” into separate categories, a higher percentage of researchers will want access to research literature, even after taking into account that many already have paid access through their institutions. But it’s far from clear why that would matter, especially when providing OA to all internet users is cheaper and simpler than providing OA to just a subset of worthy internet users. If party-goers in New York and New Jersey can both enjoy the Fourth of July fireworks in New York Harbor, then the sponsors needn’t decide that one group is primary, even if a simple study could show which group is more numerous. If this analogy breaks down, it’s because New Jersey residents who can’t see the fireworks gain nothing from New Yorkers who can. But research does offer this double or indirect benefit. When OA research directly benefits many lay readers, so much the better. But when it doesn’t, it still benefits everyone indirectly by benefiting researchers directly. (Also see section 5.5.1 on access for lay readers.) Finally, OA isn’t universal access. Even when we succeed at removing price and permission barriers, four other kinds of access barrier might remain in place: Filtering and censorship barriers Many schools, employers, ISPs, and governments want to limit what users can see. Language barriers Most online literature is in English, or another single language, and machine translation is still very weak. Handicap access barriers Most web sites are not yet as accessible to handicapped users as they should be. Connectivity barriers The digital divide keeps billions of people offline, including millions of scholars, and impedes millions of others with slow, flaky, or low-bandwidth internet connections. Most us want to remove all four of these barriers. But there’s no reason to save the term open access until we succeed. In the long climb to universal access, removing price and permission barriers is a significant plateau worth recognizing with a special name.
D. There is an obligation to credit the work to the author.
What would happen if Lewis did not finish his short stories in the timeline he was given? A. He would lose his typewriter B. The trip with Ellen would be off. C. Outlaws would be raiding his trailer home D. He would be fired from his job
QUEST OF THIG By BASIL WELLS Thig of Ortha was the vanguard of the conquering "HORDE." He had blasted across trackless space to subdue a defenseless world—only to meet on Earth emotions that were more deadly than weapons. [Transcriber's Note: This etext was produced from Planet Stories Fall 1942. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Thig carefully smoothed the dark sand and seaweed of the lonely beach over the metal lid of the flexible ringed tunnel that linked the grubby ship from another planet with the upper air. He looked out across the heaving waters of the Sound toward Connecticut. He stared appraisingly around at the luxuriant green growth of foliage further inland; and started toward the little stretch of trees and brush, walking carefully because of the lesser gravitation. Thig was shorter than the average Earthman—although on Ortha he was well above the average in height—but his body was thick and powerfully muscled. His skull was well-shaped and large; his features were regular, perhaps a trifle oversize, and his hair and eyes were a curiously matching blend of reddish brown. Oddest of all, he wore no garments, other than the necessary belt and straps to support his rod-like weapon of white metal and his pouches for food and specimens. The Orthan entered the narrow strip of trees and crossed to the little-used highway on the other side. Here he patiently sat down to wait for an Earthman or an Earthwoman to pass. His task now was to bring a native, intact if possible, back to the carefully buried space cruiser where his two fellows and himself would drain the creature's mentality of all its knowledge. In this way they could learn whether a planet was suited for colonization by later swarms of Orthans. Already they had charted over a hundred celestial bodies but of them all only three had proven worthy of consideration. This latest planet, however, 72-P-3 on the chart, appeared to be an ideal world in every respect. Sunlight, plenty of water and a dense atmospheric envelope made of 72-P-3 a paradise among planets. The explorer from another world crouched into the concealment of a leafy shrub. A creature was approaching. Its squat body was covered with baggy strips of bluish cloth and it carried a jointed rod of metal and wood in its paw. It walked upright as did the men of Ortha. Thig's cold eyes opened a trifle wider as he stared into the thing's stupid face. It was as though he was looking into a bit of polished metal at the reflection of himself! The Earthman was opposite now and he must waste no more precious time. The mighty muscles of the Orthan sent him hurtling across the intervening space in two prodigious bounds, and his hands clamped across the mouth and neck of the stranger.... Lewis Terry was going fishing. For a week the typewriter mill that had ground out a thousand assorted yarns of the untamed West and the frigid desolation of the Northwoods had been silent. Lewis wondered if he was going stale. He had sat every day for eight hours in front of that shiny-buttoned bane of the typist, but there were no results. Feebly he had punched a key two days ago and a $ sign had appeared. He hadn't dared touch the machine since. For Mr. Terry, that hard-hitting writer of two-gun action, had never been further west of Long Island than Elizabeth, and he had promised his wife, Ellen, that he would take the three children and herself on a trailer tour of the West that very summer. 
Since that promise, he could not write a word. Visions of whooping red-skinned Apaches and be-chapped outlaws raiding his little trailer home kept rolling up out of his subconscious. Yet he had to write at least three novelets and a fistful of short stories in the next two weeks to finance the great adventure—or the trip was off. So Lewis left the weathered old cottage in the early dawn and headed for his tubby old boat at the landing in an attempt to work out a salable yarn.... "Hey!" he shouted as a naked man sprang out of the bushes beside the road. "What's the trouble?" Then he had no time for further speech, the massive arms of the stranger had wound around him and two hamlike hands shut off his speech and his wind. He fought futilely against trained muscles. The hand clamping his throat relaxed for a moment and hacked along the side of his head. Blackness flooded the brain of Lewis, and he knew no more. "There it is," announced Thig, dropping the limp body of the captured Earthman to the metal deck-plates. "It is a male of the species that must have built the cities we saw as we landed." "He resembles Thig," announced Kam. "But for the strange covering he wears he might be Thig." "Thig will be this creature!" announced Torp. "With a psychic relay we will transfer the Earthman's memories and meager store of knowledge to the brain of Thig! He can then go out and scout this world without arousing suspicion. While he is gone, I will take Kam and explore the two inner planets." "You are the commander," said Thig. "But I wish this beast did not wear these clumsy sheathing upon his body. On Ortha we do not hamper the use of our limbs so." "Do not question the word of your commander," growled Torp, swelling out his thick chest menacingly. "It is for the good of our people that you disguise yourself as an Earthman." "For the good of the Horde," Thig intoned almost piously as he lifted Terry's body and headed for the laboratory. Service for the Horde was all that the men of Ortha knew. Carefully cultured and brought to life in the laboratories of their Horde, they knew neither father nor mother. Affection and love were entirely lacking in their early training and later life. They were trained antlike from childhood that only the growth and power of the Horde were of any moment. Men and women alike toiled and died like unfeeling robots of flesh and bone for the Horde. The Horde was their religion, their love-life, their everything! So it was that the bodies of the Earthman and the Orthan were strapped on two parallel tables of chill metal and the twin helmets, linked to one another by the intricacies of the psychic relay, put upon their heads. For ten hours or more the droning hum of the relay sucked Terry's brain dry of knowledge. The shock upon the nervous system of the Earthman proved too violent and his heart faltered after a time and stopped completely. Twice, with subtle drugs they restored pseudo-life to his body and kept the electrical impulses throbbing from his tortured brain, but after the third suspension of life Thig removed his helmet. "There is nothing more to learn," he informed his impassive comrades. "Now, let us get on with the plastic surgery that is required. My new body must return to its barbaric household before undue attention is aroused. And when I return I will take along some of the gleaming baubles we found on the red planet—these people value them highly." 
An hour later, his scars and altered cartilage already healed and painless, Thig again scraped sand over the entrance to the space ship and set out along the moonlit beach toward the nearest path running inland to his home. Memory was laying the country bare about him, Terry's own childhood memories of this particular section of Long Island. Here was the place where Jake and Ted had helped him dig for the buried treasure that old 'Notch-ear' Beggs had told them so exactly about. Remembrance of that episode gave Thig an idea about the little lump of jewels in his pocket. He had found them in a chest along the beach! He was coming up on the porch now and at the sound of his foot on the sagging boards the screen door burst open and three little Earth-creatures were hugging at his legs. An odd sensation, that his acquired memories labeled as pleasure, sent a warm glow upward from around his heart. Then he saw the slender red-haired shape of a woman, the mate of the dead man he knew, and confusion struck his well-trained brain. Men had no mates on Ortha, sex had been overthrown with all the other primitive impulses of barbarism; so he was incapable of understanding the emotions that swept through his acquired memory. Unsteadily he took her in his arms and felt her warm lips pressed, trembling, against his own. That same hot wave of pulsing blood choked achingly up into his throat. "Lew, dear," Ellen was asking, "where have you been all day? I called up at the landing but you were not there. I wanted to let you know that Saddlebag Publications sent a check for $50 for "Reversed Revolvers" and three other editors asked for shorts soon." "Shoulda got a hundred bucks for that yarn," grunted Thig, and gasped. For the moment he had been Lewis Terry and not Thig! So thoroughly had he acquired the knowledge of Terry that he found himself unconsciously adopting the thinking and mannerism of the other. All the better this way, he realized—more natural. "Sorry I was late," he said, digging into his pocket for the glittering baubles, "but I was poking around on the beach where we used to hunt treasure and I found an old chest. Inside it I found nothing but a handful of these." He flashed the jewels in front of Ellen's startled eyes and she clung, unbelieving, to his arm. "Why, Lew," she gasped, "they're worth a fortune! We can buy that new trailer now and have a rebuilt motor in the car. We can go west right away.... Hollywood, the Grand Canyon, cowboys!" "Uh huh," agreed the pseudo Lewis, memories of the ferocious savages and gunmen of his stories rendering him acutely unhappy. Sincerely he hoped that the west had reformed. "I saved some kraut and weiners," Ellen said. "Get washed up while I'm warming them up. Kids ate all the bread so I had to borrow some from the Eskoes. Want coffee, too?" "Mmmmmm," came from the depths of the chipped white wash-basin. "Home again," whispered Ellen as she stood beside Thig twelve weeks later and gazed tearfully at the weathered little gray house. She knelt beside the front stoop and reached for the key hidden beneath it. "The west was wonderful; tremendous, vast and beautiful," she went on as they climbed the steps, "but nowhere was there any place as beautiful as our own little strip of sky and water." Thig sank into a dusty old swing that hung on creaking chains from the exposed rafters of the porch roof. He looked down at the dusty gray car and the bulbous silvery bulk of the trailer that had been their living quarters for almost three months. 
Strange thoughts were afloat in the chaos of his cool Orthan brain. Tonight or tomorrow night at the latest he must contact his two fellows and report that Earth was a planetary paradise. No other world, including Ortha, was so well-favored and rich. An expeditionary force to wipe the grotesque civilizations of Earth out of existence would, of course, be necessary before the first units of new Hordes could be landed. And there Thig balked. Why must they destroy these people, imperfect though their civilization might be, to make room for the Hordes? Thig tried to tell himself that it was the transmitted thoughts of the dead Earthman that made him feel so, but he was not too sure. For three months he had lived with people who loved, hated, wept and sacrificed for reasons that he had never known existed. He had learned the heady glory of thinking for himself and making his own decisions. He had experienced the primitive joy of matching his wits and tongue against the wits of other unpredictable human beings. There was no abrupt division of men and women into definite classes of endeavor. A laborer thought the same thoughts that a governor might think. Uncertainty added zest to every day's life. The Orthan had come to question the sole devotion of the individual to the Horde to the exclusion of all other interests. What, he wondered, would one new world—or a hundred—populated by the Hordes add to the progress of humanity? For a hundred thousand years the Orthan civilization had remained static, its energies directed into certain well-defined channels. They were mindless bees maintaining their vast mechanical hives. There was that moment on the brink of the Grand Canyon when Ellen had caught his arm breathlessly at all the beauty spread away there beneath them. There were mornings in the desert when the sun painted in lurid red the peaks above the harsh black-and-whites of the sagebrush and cactus slopes. There was the little boy, his body burning with fever, who nestled trustingly against his tense man's body and slept—the son of Ellen and the man he had destroyed. Thig groaned. He was a weakling to let sentimentality so get the better of his judgment. He would go now to the space ship and urge them to blast off for Ortha. He sprang off the porch and strode away down the road toward the beach. The children ran to him; wanted to go along. He sent them away harshly but they smiled and waved their brown little hands. Ellen came to the door and called after him. "Hurry home, dear," she said. "I'll have a bite ready in about an hour." He dared not say anything, for his voice would have broken and she would have known something was wrong. She was a very wise sort of person when something was troubling him. He waved his stubby paw of a hand to show that he had heard, and blindly hurried toward the Sound. Oddly enough, as he hurried away along the narrow path through the autumn woods, his mind busied itself with a new epic of the west that lived no longer. He mentally titled it: "Rustlers' Riot" and blocked in the outlines of his plot. One section of his brain was that of the careless author of gunslinging yarns, a section that seemed to be sapping the life from his own brain. He knew that the story would never be written, but he toyed with the idea. So far had Thig the emotionless, robot-being from Ortha drifted from the unquestioning worship of the Horde! "You have done well," announced Torp when Thig had completed his report on the resources and temperatures of various sections of Terra. 
"We now have located three worlds fit for colonization and so we will return to Ortha at once. "I will recommend the conquest of this planet, 72-P-3 at once and the complete destruction of all biped life upon it. The mental aberrations of the barbaric natives might lead to endless complications if they were permitted to exist outside our ordered way of life. I imagine that three circuits of the planet about its primary should prove sufficient for the purposes of complete liquidation." "But why," asked Thig slowly, "could we not disarm all the natives and exile them on one of the less desirable continents, Antarctica for example or Siberia? They are primitive humans even as our race was once a race of primitives. It is not our duty to help to attain our own degree of knowledge and comfort?" "Only the good of the Horde matters!" shouted Torp angrily. "Shall a race of feeble-witted beasts, such as these Earthmen, stand in the way of a superior race? We want their world, and so we will take it. The Law of the Horde states that all the universe is ours for the taking." "Let us get back to Ortha at once, then," gritted out Thig savagely. "Never again do I wish to set foot upon the soil of this mad planet. There are forces at work upon Earth that we of Ortha have long forgotten." "Check the blood of Thig for disease, Kam," ordered Torp shortly. "His words are highly irrational. Some form of fever perhaps native to this world. While you examine him I will blast off for Ortha." Thig followed Kam into the tiny laboratory and found a seat beside the squat scientist's desk. His eyes roamed over the familiar instruments and gauges, each in its own precise position in the cases along the walls. His gaze lingered longest on the stubby black ugliness of a decomposition blaster in its rack close to the deck. A blast of the invisible radiations from that weapon's hot throat and flesh or vegetable fiber rotted into flaky ashes. The ship trembled beneath their feet; it tore free from the feeble clutch of the sand about it, and they were rocketing skyward. Thig's broad fingers bit deep into the unyielding metal of his chair. Suddenly he knew that he must go back to Earth, back to Ellen and the children of the man he had helped destroy. He loved Ellen, and nothing must stand between them! The Hordes of Ortha must find some other world, an empty world—this planet was not for them. "Turn back!" he cried wildly. "I must go back to Earth. There is a woman there, helpless and alone, who needs me! The Horde does not need this planet." Kam eyed him coldly and lifted a shining hypodermic syringe from its case. He approached Thig warily, aware that disease often made a maniac of the finest members of the Horde. "No human being is more important than the Horde," he stated baldly. "This woman of whom you speak is merely one unit of the millions we must eliminate for the good of the Horde." Then it was that Thig went berserk. His fists slashed into the thick jaw of the scientist and his fingers ripped at the hard cords overlying the Orthan's vital throat tubes. His fingers and thumb gouged deep into Kam's startled throat and choked off any cry for assistance before it could be uttered. Kam's hand swept down to the holster swung from his intricate harness and dragged his blaster from it. Thig's other hand clamped over his and for long moments they swayed there, locked together in silent deadly struggle. The fate of a world hung in the balance as Kam's other hand fought against that lone arm of Thig. 
The scales swung in favor of Kam. Slowly the flaring snout of his weapon tilted upward until it reached the level of Thig's waist. Thig suddenly released his grip and dragged his enemy toward him. A sudden reversal of pressure on Kam's gun hand sent the weapon swivelling about full upon its owner's thick torso. Thig's fingers pressed down upon Kam's button finger, down upon the stud set into the grip of the decomposition blaster, and Kam's muscles turned to water. He shrieked. Before Thig's eyes half of his comrade's body sloughed away into foul corruption that swiftly gave way to hardened blobs of dessicated matter. Horror for what he had done—that he had slain one of his own Horde—made his limbs move woodenly. All of his thoughts were dulled for the moment. Painfully slow, he turned his body around toward the control blister, turned around on leaden feet, to look full into the narrowed icy eyes of his commander. He saw the heavy barrel of the blaster slashing down against his skull but he could not swing a fraction of an inch out of the way. His body seemed paralyzed. This was the end, he thought as he waited stupidly for the blow to fall, the end for Ellen and the kids and all the struggling races of Earth. He would never write another cowboy yarn—they would all be dead anyhow soon. Then a thunderclap exploded against his head and he dropped endlessly toward the deck. Blows rained against his skull. He wondered if Torp would ever cease to hammer at him and turn the deadly ray of the weapon upon him. Blood throbbed and pounded with every blow.... Bam, Bam, Bam, the blood pounded in his ears. Like repeated blows of a hammer they shook his booming head. No longer was Torp above him. He was in the corner of the laboratory, a crumpled blood-smeared heap of bruised flesh and bone. He was unfettered and the blood was caked upon his skull and in his matted hair. Torp must have thought he had killed him with those savage blows upon the head. Even Torp, thought Thig ruefully, gave way to the primitive rage of his ancestors at times; but to that very bit of unconscious atavism he now owed his life. A cool-headed robot of an Orthan would have efficiently used the blaster to destroy any possibility of remaining life in his unconscious body. Thig rolled slowly over so that his eye found the door into the control room. Torp would be coming back again to dispose of their bodies through the refuse lock. Already the body of Kam was gone. He wondered why he had been left until last. Perhaps Torp wished to take cultures of his blood and tissues to determine whether a disease was responsible for his sudden madness. The cases of fragile instruments were just above his head. Association of memories brought him the flash of the heavy blaster in its rack beneath them. His hand went up and felt the welcome hardness of the weapon. He tugged it free. In a moment he was on his knees crawling across the plates of the deck toward the door. Halfway across the floor he collapsed on his face, the metal of the gun making a harsh clang. He heard the feet of Torp scuffle out of silence and a choked cry in the man's throat squalled out into a senseless whinny. Thig raised himself up on a quivering elbow and slid the black length of the blaster in front of him. His eyes sought the doorway and stared full into the glaring vacant orbs of his commander. Torp leaned there watching him, his breath gurgling brokenly through his deep-bitten lips. The clawing marks of nails, fingernails, furrowed his face and chest. He was a madman! 
The deadly attack of Thig; his own violent avenging of Kam's death, and now the apparent return of the man he had killed come to life had all served to jolt his rigidly trained brain from its accustomed groove. The shock had been too much for the established thought-processes of the Orthan. So Thig shot him where he stood, mercifully, before that vacant mad stare set him, too, to gibbering and shrieking. Then he stepped over the skeleton-thing that had been Torp, using the new strength that victory had given him to drive him along. He had saved a world's civilization from extinction! The thought sobered him; yet, somehow, he was pleased that he had done so. After all, it had been the Earthwoman and the children he had been thinking of while he battled Kam, a selfish desire to protect them all. He went to the desk where Torp had been writing in the ship's log and read the last few nervously scrawled lines: Planet 72-P-3 unfit for colonization. Some pernicious disease that strikes at the brain centers and causes violent insanity is existent there. Thig, just returned from a survey of the planet, went mad and destroyed Kam. In turn I was forced to slay him. But it is not ended. Already I feel the insidious virus of.... And there his writing ended abruptly. Thig nodded. That would do it. He set the automatic pilot for the planet Ortha. Unless a rogue asteroid or a comet crossed the ship's path she would return safely to Ortha with that mute warning of danger on 72-P-3. The body of Torp would help to confirm his final message. Then Thig crossed the cabin to the auxiliary life boat there, one of a half-dozen space ships in miniature nested within the great ship's hull, and cut free from the mother vessel. He flipped the drive lever, felt the thrumming of the rockets driving him from the parent ship. The sensation of free flight against his new body was strangely exhilerating and heady. It was the newest of the emotions he had experienced on Earth since that day, so many months before, when he had felt the warmness of Ellen's lips tight against his. Thig flipped the drive lever, felt the thrumming of the rockets driving him from the parent ship. He swung about to the port, watched the flaming drive-rockets of the great exploratory ship hurl it toward far-away Ortha, and there was no regret in his mind that he was not returning to the planet of his first existence. He thought of the dull greys and blacks of his planet, of the monotonous routine of existence that had once been his—and his heart thrilled to the memories of the starry nights and perfect exciting days he had spent on his three month trip over Earth. He made a brief salute to the existence he had known, turned with a tiny sigh, and his fingers made brief adjustments in the controls. The rocket-thrum deepened, and the thin whistle of tenuous air clutching the ship echoed through the hull-plates. He thought of many things in those few moments. He watched the roundness of Earth flatten out, then take on the cup-like illusion that all planets had for an incoming ship. He reduced the drive of his rockets to a mere whisper, striving to control the impatience that crowded his mind. He shivered suddenly, remembering his utter callousness the first time he had sent a space ship whipping down toward the hills and valleys below. And there was a sickness within him when he fully realized that, despite his acquired memory and traits, he was an alien from outer space. 
He fingered the tiny scars that had completely obliterated the slight differences in his appearance from an Earthman's, and his fingers trembled a bit, as he bent and stared through the vision port. He said a brief prayer in his heart to a God whose presence he now felt very deeply. There were tears in the depths of his eyes, then, and memories were hot, bitter pains. Earth was not far below him. As he let gravity suck him earthward, he heaved a gasp of relief. He was no longer Thig, a creature of a Horde's creation, but Lewis Terry, writer of lurid gun-smoking tales of the West. He must remember that always. He had destroyed the real Terry and now, for the rest of his life, he must make up to the dead man's family. The knowledge that Ellen's love was not really meant for him would be a knife twisting in his heart but for her sake he must endure it. Her dreams and happiness must never be shattered. The bulge of Earth was flattening out now and he could see the outlines of Long Island in the growing twilight. A new plot was growing in the brain of Lewis Terry, a yarn about a cowboy suddenly transported to another world. He smiled ironically. He had seen those other worlds. Perhaps some day he would write about them.... He was Lewis Terry! He must remember that!
B. The trip with Ellen would be off.
Why does the author describe Charles Murray as a “publicity genius”? A. He sent out numerous press releases and did a press tour for this book. B. He published first in academic journals to increase the book’s authority. C. He limited access as a way to increase the allure of the book before publication. D. He attacked critics of his book to discredit them.
The Bell Curve Flattened Charles Murray is a publicity genius, and the publication of his and Richard Herrnstein's book, The Bell Curve: Intelligence and Class Structure in American Life , in the fall of 1994 was his masterpiece. Virtually all ambitious trade hardcover books are preceded by an edition of 100 to 200 flimsy "galley proofs." These are sent out to people who might generate buzz for the book: blurbists, bookers for television talk shows, editors, and--most important--book critics. There is an ethos of letting the chips fall where they may about the sending out of galleys: Now the book will begin to receive uncontrolled reaction. (For example, back in 1991, Murray somehow got hold of the galleys of my own last book, and wrote me heatedly denying that he was working on a book about black genetic intellectual inferiority, as I had asserted. I left the passage in, but softened it.) The Bell Curve was not circulated in galleys before publication. The effect was, first, to increase the allure of the book (There must be something really hot in there!), and second, to ensure that no one inclined to be skeptical would be able to weigh in at the moment of publication. The people who had galley proofs were handpicked by Murray and his publisher. The ordinary routine of neutral reviewers having a month or two to go over the book with care did not occur. Another handpicked group was flown to Washington at the expense of the American Enterprise Institute and given a weekend-long personal briefing on the book's contents by Murray himself (Herrnstein had died very recently), just before publication. The result was what you'd expect: The first wave of publicity was either credulous or angry, but short on evidence, because nobody had had time to digest and evaluate the book carefully. The Bell Curve isn't a typical work of trade nonfiction. It is gotten up as a work of original scholarly research. Most works containing fresh regression analysis and historical argument from primary sources would be published in academic quarterlies that send manuscripts out for elaborate, lengthy evaluation before deciding whether to publish them. Herrnstein and Murray didn't do this, so it wasn't until a full year or more after The Bell Curve was published that the leading experts on its subject had a chance to go through the underlying data with care. Therefore, as time went on, the knowledgeability of the Bell Curve discussion grew, but the attention paid to that discussion inevitably shrank. The debate on publication day was conducted in the mass media by people with no independent ability to assess the book. Over the next few months, intellectuals took some pretty good shots at it in smaller publications like the New Republic and the New York Review of Books . It wasn't until late 1995 that the most damaging criticism of The Bell Curve began to appear, in tiny academic journals. What follows is a brief summary of that last body of work. The Bell Curve , it turns out, is full of mistakes ranging from sloppy reasoning to mis-citations of sources to outright mathematical errors. Unsurprisingly, all the mistakes are in the direction of supporting the authors' thesis. First, a quick précis of The Bell Curve . IQ tests, according to Murray and Herrnstein, measure an essential human quality, general intelligence. During the second half of the 20 th century, this quality has risen to supreme importance, because society has become increasingly complex. 
The intelligent have therefore gone through an "invisible migration," from points of origin all over the class system to a concentration at the top of business, government, and the professions. They are likely to become ever more dominant and prosperous. The unintelligent are falling further and further behind. Because intelligence is substantially inherited, nothing is likely to reverse this process. Blacks are overrepresented among the unintelligent. Any efforts government might make to improve the economic opportunities of poor people, especially poor black people, are likely to fail, because their poverty is so much the result of inherited low intelligence. About the best that can be done for these people is an effort to create a world of simple, decent, honorable toil for them. Herrnstein and Murray begin by telling us that the liberal position on IQ--namely, "Intelligence is a bankrupt concept"--has been discredited, and that "a scholarly consensus has been reached" around their position. This consensus is "beyond significant technical dispute." Thus, by the end of their introduction, they have arranged matters so that if intelligence has any meaning at all, the idiotic liberals stand discredited; and meanwhile, extremely broad claims for intelligence have the cover of "consensus." The notion that IQ tests are completely useless never prevailed in liberal academia to nearly the extent that Herrnstein and Murray say. A more accurate rendering of the liberal position would be that rather than a single "general intelligence," there are a handful of crucial--and separate--mental abilities; that none of these abilities is important enough to obviate the role of family background and education; and that native ability (and economic success independent of native ability) can be enhanced by improving education, training, and public health. The Bell Curve refers in passing to some of these points, but on the whole it sets up a cartoon-left position as its (easy) target. Meanwhile, the psychometricians who dominate the footnotes of The Bell Curve are John Hunter, Arthur Jensen, Malcolm Ree, and Frank Schmidt. These men are well known within the field as representing its right wing, not a mainstream consensus. The next problem with The Bell Curve 's thesis is in the idea of the rise to dominance of the cognitive elite. To the book's initial audience of Ivy Leaguers, this idea seemed valid on its face. Everybody knows that the best universities, law firms, hospitals, investment banks, and the State Department used to be run by preppies whose main virtue was fortunate birth, and are now open to one and all on the basis of merit. But the larger premise--that intelligent people used to be scattered throughout the class structure, and are now concentrated at the top--is almost impossible to prove, simply because the mass administration of mental tests is such a recent phenomenon. High scorers on mental tests do "bunch up" (as Herrnstein and Murray put it) in elite-university student bodies. But this is tautological: Any group selected on the basis of scores on mental tests will be composed disproportionately of people who score high on mental tests. Proving The Bell Curve 's thesis would require proving that success increasingly correlates with IQ in areas of life where mental tests are not the explicit gatekeepers. To see how The Bell Curve tries and fails to get around these inherent problems, see and . 
Having conditioned its audience to view IQ as all-important, The Bell Curve then manipulates statistics in a way that makes IQ look bigger, and everything else smaller, in determining Americans' life-chances. The basic tool of statistical social science in general, and of The Bell Curve in particular, is regression analysis, a technique used to assign weights to various factors (called "independent variables") in determining a final outcome (called the "dependent variable"). The original statistical work in The Bell Curve consists of regression analyses on a database called the National Longitudinal Study of Youth. The authors claim to demonstrate that high IQ is more predictive of economic success than any other factor, and that low IQ is more predictive of poverty and social breakdown. Virtually all the early commentators on The Bell Curve were unable to assess the merits of the regression analysis. "I am not a scientist. I know nothing about psychometrics," wrote Leon Wieseltier (who was otherwise quite critical) in a typical disclaimer. But by now the statistics have been gone over by professionals, who have come up with different results. The key points of their critique of The Bell Curve are as follows: What Herrnstein and Murray used to measure IQ is actually a measure of education as well as intelligence. All the people tracked in the National Longitudinal Study of Youth took the Armed Forces Qualifying Test, which Herrnstein and Murray treat as a good measure of intelligence. Because the material covered in the test includes subjects like trigonometry, many academic critics of The Bell Curve have objected to its use as a measure only of IQ and not at all of academic achievement. Herrnstein and Murray concede in the footnotes that scores tend to rise with the subjects' education--but they seriously underestimate the magnitude of this rise, as shows. And they resist the obvious inference that the test scores are measuring something other than intelligence. Most of The Bell Curve 's analysis is devoted to proving that IQ has more predictive power than parental "socio-economic status." But Herrnstein and Murray's method of figuring socioeconomic status seems designed to low-ball its influence, as explains. Herrnstein and Murray begin their discussion of the National Longitudinal Study of Youth data by announcing that they aren't going to analyze the effect of education, because education is too much a result of IQ. It's not an independent variable. (Of course, according to their theory, socioeconomic status is also a result of IQ, but somehow, that doesn't stop them.) Therefore, what you'd most want to know from a policy standpoint--how much education can increase opportunity--isn't dealt with in the book, except in two obscure footnotes. Both would seem to support the liberal, pro-education position that Herrnstein and Murray say is futile. One footnote shows education increasing IQ year by year. The other shows a higher correlation between college degree and family income than between IQ and family income. One of The Bell Curve 's theoretical linchpins is the high heritability of IQ. Herrnstein and Murray, sounding like the souls of caution, write that "half a century of work, now amounting to hundreds of empirical and theoretical studies, permits a broad conclusion that the genetic component of IQ is unlikely to be smaller than 40 per cent or higher than 80 per cent. ... For purposes of this discussion, we will adopt a middling estimate of 60 per cent heritability." 
This now looks seriously overstated. Michael Daniels, Bernie Devlin, and Kathryn Roeder of Carnegie Mellon University took the same studies on which Herrnstein and Murray based their estimate, and subjected them to a computer meta-analysis ("a powerful method of statistical analysis"-- The Bell Curve ). Their paper, which has not yet been published, says: "In brief, studies of IQ, and our reanalyses of them, suggest a narrow-sense heritability of 34 per cent and a broad-sense heritability of 46 per cent. [The difference between broad and narrow is too technical to explain in this limited space.] This is a far cry from Herrnstein and Murray's maximum value of 80 per cent or their middling value of 60 per cent. Consequently, Herrnstein and Murray give the impression that IQ is highly 'heritable,' but it is not." If the purpose of the whole exercise is to figure out what our social policies should be, then, "Which is more predictive, IQ or socioeconomic status?" isn't the essential question anyway. Making it the essential question avoids the issue of whether IQ is really so massively predictive that it drowns out everything else. (Herrnstein and Murray mostly leave the evidence for this, their central contention, to footnotes. The figures they offer are far from dispositive.) The chapter of The Bell Curve on policies that might be able to overcome the fate of a low IQ focuses mainly on whether early-childhood programs like Head Start (most of which aren't run with raising IQ as their primary goal) can raise IQ significantly over the long term, and sorrowfully concludes that they can't. What the book doesn't discuss is whether public schools--by far the biggest government social program--can raise IQ, or earnings after you control for IQ. As James Heckman of the University of Chicago wrote in the Journal of Political Economy , " Evidence of a genetic component to skills has no bearing on the efficacy of any social policy. ... The relevant issue is the cost effectiveness of the intervention." (As an example of where the kind of analysis Herrnstein and Murray didn't do can lead, a new study by Jay Girotto and Paul Peterson of Harvard shows that students who raise their grades and take harder courses can increase their IQ scores by an average of eight points during the first three years of high school.) At the beginning of The Bell Curve , Herrnstein and Murray declare that "the concept of intelligence has taken on a much higher place in the pantheon of human virtues than it deserves." And they claim that their view of IQ tests is "squarely in the middle of the scientific road." They end by expressing the hope that we can "be a society that makes good on the fundamental promise of the American tradition: the opportunity for everyone, not just the lucky ones, to live a satisfying life." Throughout, Herrnstein and Murray consistently present themselves as fair- (or even liberal-) minded technicians who have, with great caution, followed the evidence where it leads--which, unfortunately, is to a few unassailable if unpleasant scientific truths that it is their reluctant duty to report. In fact, The Bell Curve is a relentless brief for the conservative position in psychometrics and social policy. For all its talk of reflecting a consensus, the sources it draws upon are heavily skewed to the right. 
Herrnstein and Murray used quasi-nutty studies that support their position (as Charles Lane demonstrated in the New York Review of Books ), and ignore mainstream studies that contradict it (as Richard Nisbett showed in the New Republic ). The data in The Bell Curve are consistently massaged to produce conservative conclusions; not once is a finding that contradicts the main thesis reported in the text. ( shows how Herrnstein and Murray have made the convergence in black-white IQ scores, which they claim to find "encouraging," look smaller than it actually is.) The Bell Curve 's air of strict scientism doesn't preclude the use of lightly sourced or unsourced assertions, such as the statement that the median IQ of all black Africans is 75, or that "intermarriage among people in the top few percentiles of intelligence may be increasing far more rapidly than suspected" (no footnote). Though they piously claim not to be doing so, Herrnstein and Murray leave readers with the distinct impression that IQ is the cause of economic success and failure, and that genetic difference explains the black-white IQ gap. In the most famous passage in The Republic , Plato describes an underground cave where people are held prisoner in chains, unable to see anything but the shadows cast by figures passing outside; they mistake the shadows for reality. The Republic is probably the first place in history where an idea like that of Murray and Herrnstein's cognitive elite appears. Plato believed that through education, people could leave the cave and be able to see the truth instead of the shadows, thus fitting themselves to become the wise rulers of society. But he was quick to insert a cautionary note: Those who have left the cave might be tempted to think they can see perfectly clearly, while actually they would be "dazzled by excess of light." The image applies to The Bell Curve : Presented as an exact representation of reality, in opposition to the shadows of political correctness, it actually reflects the blinkered vision of one part of the American elite. It constantly tells these people that they are naturally superior, and offers lurid descriptions of aspects of national life that they know about only by rumor. Readers who accept The Bell Curve as tough-minded and realistic, and who assume that all criticism of it is ignorant and ideologically motivated, are not as far removed from Plato's cave as they might think. : Dumb College Students : Smart Rich People : Education and IQ : Socioeconomic Status : Black-White Convergence
C. He limited access as a way to increase the allure of the book before publication.
How do previous methods perform on the Switchboard Dialogue Act and DailyDialog datasets?
### Introduction A dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DAs is essential for modeling and automatically detecting discourse structure, especially when developing a human-machine dialogue system. It is natural to predict an Answer act following an utterance of type Question, and then match the Question utterance to each QA pair in the knowledge base. The predicted DA can also guide the response generation process BIBREF2: for instance, the system generates a Greeting-type response to a preceding Greeting-type utterance. Moreover, DA is beneficial to other online dialogue strategies, such as conflict avoidance BIBREF3. In offline systems, DA also plays a significant role in summarizing and analyzing collected utterances. For instance, recognizing the DAs of a whole online service record between customer and agent helps mine QA pairs, which are then selected and clustered to expand the knowledge base. DA recognition is challenging because the same utterance may have different meanings in different contexts. Table TABREF1 shows an example of utterances together with their DAs from the Switchboard dataset. In this example, the utterance “Okay.” corresponds to two different DA labels in different semantic contexts. Many approaches have been proposed for DA recognition. Previous work relies heavily on handcrafted features, which are domain-specific and difficult to scale up BIBREF4, BIBREF5, BIBREF6. Recently, with its strong ability to extract features, deep learning has yielded state-of-the-art results for many NLP tasks and has also made impressive advances in DA recognition. BIBREF7, BIBREF8 built hierarchical CNN/RNN models to encode sentences and incorporate context information for DA recognition. BIBREF9 achieved promising performance by adding a CRF to enhance the dependency between labels. BIBREF10 applied a self-attention mechanism coupled with a hierarchical recurrent neural network. However, previous approaches cannot make full use of the relative position relationship between utterances. It is natural that utterances in the local context have strong dependencies in daily dialogue. In this paper, we propose a hierarchical model based on self-attention BIBREF11 and revise the attention distribution to focus on local and contextual semantic information through a learnable Gaussian bias that represents the relative position information between utterances, inspired by BIBREF12. Further, to analyze the effect of dialogue length quantitatively, we introduce a new dialogue segmentation mechanism for the DA task and evaluate the performance for different dialogue lengths and context padding lengths under online and offline settings. Experiments and visualization show that our method can learn local contextual dependencies between utterances explicitly and achieves promising performance on two well-known datasets. The contributions of this paper are: We design a hierarchical model based on self-attention and revise the attention distribution to focus on local and contextual semantic information using the relative position information between utterances. We introduce a new dialogue segmentation mechanism for the DA task and analyze the effect of dialogue length and context padding length. In addition to traditional offline prediction, we also analyze the accuracy and time complexity under the online setting. 
### Background ::: Related Work DA recognition aims to assign a label to each utterance in a conversation. It can be formulated as a supervised classification problem. There are two trends in solving this problem: 1) as a sequence labeling problem, predicting the labels for all utterances in the whole dialogue history BIBREF13, BIBREF14, BIBREF9; 2) as a sentence classification problem, treating each utterance independently without any context history BIBREF5, BIBREF15. Early studies rely heavily on handcrafted features such as lexical, syntactic, contextual, prosodic and speaker information and achieve good results BIBREF13, BIBREF4, BIBREF16. Recent studies have applied deep learning based models to DA recognition. BIBREF14 proposed a model based on RNNs and CNNs that incorporates preceding short texts to classify current DAs. BIBREF7, BIBREF8 used hierarchical CNNs and RNNs to model the utterance sequence in the conversation, which can extract high-level sentence information to predict its label. They found that there is only a small performance difference among different hierarchical CNN and RNN approaches. BIBREF9 added a CRF layer on top of the hierarchical network to model the label transition dependency. BIBREF10 applied a context-aware self-attention mechanism coupled with a hierarchical recurrent neural network and obtained a significant improvement over state-of-the-art results on the SwDA dataset. From another perspective, BIBREF17 combined a recurrent neural network language model with a latent variable model over DAs. BIBREF18 proposed a Discrete Information Variational Autoencoder (DI-VAE) model that learns discrete latent actions to incorporate sentence-level distributional semantics for dialogue generation. ### Background ::: Self-Attention Self-attention BIBREF11 achieves great success through its efficient parallel computation and long-range dependency modeling. Given an input sequence $ s = \left( s_1,...,s_n \right) $ of $n$ elements where $ s_i \in \mathbb {R}^{d_s} $, each attention head holds three parameter matrices, $W_h^Q, W_h^K, W_h^V \in {\mathbb {R}}^{d_s \times d_z} $, where $ h $ denotes the index of the head. For head $h$, linear projections are applied to the sequence $s$ to obtain key (K), query (Q), and value (V) representations. The attention module computes the weights from the dot products between each query/key pair and then normalizes the result with $softmax$; it is defined as $\mathrm{head}_h = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_z}}\right)V$, where $\sqrt{d_z}$ is the scaling factor that counteracts the effect that the dot products may grow large in magnitude. For all the heads, $\mathrm{MultiHead}(s) = \mathrm{Concat}(\mathrm{head}_1,...,\mathrm{head}_h)W^O$, where $W^O \in \mathbb {R}^{(d_z*h)\times d_s}$ is the output projection. One weakness of the self-attention model is that it cannot encode position information efficiently. Some methods have been proposed to encode the relative or absolute positions of tokens in the sequence as additional input to the model. BIBREF11 used sine and cosine functions of different frequencies and added the positional encodings to the input embeddings; it relies on absolute position embeddings to capture relative positional relations through the characteristics of the sine and cosine functions. Moreover, several studies show that explicitly modeling relative position can further improve performance. For example, BIBREF19 proposed relative position encodings to explicitly model relative position with independent semantic parameters, and demonstrated significant improvements even when entirely replacing conventional absolute position encodings. 
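To make the multi-head self-attention described above concrete, here is a minimal NumPy sketch; it is not code from the paper, and the sequence length, dimensions, head count, and random initialization are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_head(s, W_q, W_k, W_v):
    """One attention head over a sequence s of shape (n, d_s)."""
    Q, K, V = s @ W_q, s @ W_k, s @ W_v            # each (n, d_z)
    d_z = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_z)                # (n, n) pairwise weights
    return softmax(scores) @ V                     # weighted sum of values

def multi_head_self_attention(s, heads, W_o):
    """Concatenate h heads and apply the output projection W_o."""
    outputs = [self_attention_head(s, *head) for head in heads]
    return np.concatenate(outputs, axis=-1) @ W_o  # (n, d_s)

# Illustrative sizes (assumptions, not taken from the paper).
n, d_s, h = 6, 16, 4
d_z = d_s // h
rng = np.random.default_rng(0)
s = rng.normal(size=(n, d_s))
heads = [tuple(rng.normal(size=(d_s, d_z)) for _ in range(3)) for _ in range(h)]
W_o = rng.normal(size=(h * d_z, d_s))
out = multi_head_self_attention(s, heads, W_o)     # shape (6, 16)
```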
BIBREF12 proposed to model localness for the self-attention network with a learnable Gaussian bias, which enhanced the ability to model local relationships and demonstrated its effectiveness on the translation task. In our study, we design a local contextual attention model that incorporates relative position information into the original attention distribution through a learnable Gaussian bias. Different from BIBREF12, in our method the distribution center is regulated around the corresponding utterance within a window, which indicates the context dependency preference, in order to capture more local contextual dependency. ### Methodology Before we describe the proposed model in detail, we first define the mathematical notation for the DA recognition task in this paper. We are given a dataset $X = (D_1,D_2,... D_L)$ with corresponding DA labels $(Y_1,Y_2,...Y_L)$. Each dialogue is a sequence of $ N_l $ utterances $ D_l = (u_1,u_2,...u_{N_l})$ with $ Y_l = (y_1,y_2,...y_{N_l}) $. Each utterance is padded or truncated to a length of $ M $ words, $u_j = (w_1,w_2,...w_{M})$. Figure FIGREF6 shows our overall model structure. In the first layer, we encode each utterance $u_j$ into a vector representation. Each word $w_m$ of the utterance $u_j$ is converted from its one-hot token representation into a dense vector representation $e_m$. We then apply an LSTM BIBREF20, a powerful and effective structure for sequence modeling, to encode the word sequence. Formally, for the utterance $u_j$: $e_m = embed(w_m)$ and $h_1,...,h_M = \mathrm{LSTM}(e_1,...,e_M)$, where $embed$ represents the embedding layer, which can be initialized with pre-trained embeddings. To make a fair comparison with previous work, we do not use the fine-grained embedding presented in BIBREF21. The LSTM gives us a context-aware representation of the input word sequence. There are several approaches to build a sentence representation from the words. Following BIBREF22, we add a max-pooling layer after the LSTM, which selects the maximum value in each dimension across the hidden units. In our experiments, LSTM with max-pooling performs slightly better than LSTM with last-pooling, which is used in BIBREF9. Afterwards, we obtain the utterance vector representations $ u = (u_1,...,u_{N_l}) $ of $N_l$ elements for the dialogue $D_l$, where $ u_j \in \mathbb {R}^{d_s}$ and $d_s$ is the dimension of the hidden units. As discussed in section SECREF7, given the sequence $ s \in \mathbb {R}^{N_l*d_s}$, the self-attention mechanism calculates the attention weights between each pair of utterances in the sequence and takes the weighted sum as output. The attention module explicitly models the context dependency between utterances. We employ a residual connection BIBREF23 around the attention module, which combines the dependency encoder between utterances with the current utterance encoder $s$: $\mathrm{output} = \mathrm{MultiHead}(s) + s$. Finally, we apply a two-layer fully connected network with a Rectified Linear Unit (ReLU) to get the final classification output for each utterance. ### Methodology ::: Modeling Local Contextual Attention The attention mechanism explicitly models the interaction between utterances. However, for context modeling, the original attention mechanism always considers all of the utterances in a dialogue, which weakens the relations within the local context and is prone to overfitting during training. It is natural that utterances in the local context have strong dependencies in daily dialogue. Therefore, we add a learnable Gaussian bias with a local constraint to the weights normalized by $softmax$, to enhance the interaction between each utterance and its neighbors. 
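Before the local contextual attention is developed further below, here is a rough PyTorch sketch of the utterance encoder just described (embedding layer, LSTM, then max-pooling over time). The class name, vocabulary size, and dimensions are assumptions made for illustration, not values reported in the paper.

```python
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    """Encode a padded utterance (word ids) into a single vector:
    embedding -> LSTM -> max-pooling over the M word positions."""
    def __init__(self, vocab_size=30000, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, word_ids):          # (batch, M) token ids
        e = self.embed(word_ids)          # (batch, M, emb_dim)
        h, _ = self.lstm(e)               # (batch, M, hidden_dim)
        u, _ = h.max(dim=1)               # max-pooling over the M words
        return u                          # (batch, hidden_dim)

# Illustrative usage: a batch of 3 utterances, each padded to M = 10 tokens.
encoder = UtteranceEncoder()
word_ids = torch.randint(1, 30000, (3, 10))
u = encoder(word_ids)                     # shape (3, 256)
```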
The attention module formula is revised as: The first term is the original dot-product self-attention model. $POS \in \mathbb {R}^{N\times N}$ is the bias matrix, where $N$ is the length of the dialogue. The element $POS_{i,j}$ is defined by a Gaussian distribution: $POS_{i,j}$ measures the dependency between utterance $u_j$ and utterance $u_i$ in terms of a relative position prior. $w_{i}$ denotes the standard deviation, which controls the weight decay. Because of the local constraint $|c_{i} - i| <= C$, for each utterance $u_i$ the predicted center position $c_{i}$ and window size $w_{i}$ are defined as follows: where $W_i^c,W_i^d \in \mathbb {R}^{1*N}$ are both learnable parameters. We initialize the parameter $W_i^c$ to 0, which leads to the center position $c_i = i$ by default. Furthermore, $c_{i}$ and $w_{i}$ are both related to the semantic context of the utterances, so we use the mean of the keys $\overline{K}$ in the attention mechanism to represent the context information. Moreover, the central position also indicates whether the dependency preference leans toward the preceding or the subsequent utterances. It is worth noting that our approach differs slightly from BIBREF12, although both revise the attention module with a Gaussian distribution. In our method, for a given utterance $u_{i}$, the distribution center $c_{i}$ is regulated to capture not only local but also contextual dependency, which can be formally expressed as $c_{i} \in (i-C,i+C)$. In their work, however, the distribution center can be anywhere in the sequence, and it is designed to capture phrasal patterns, which are essential for the Neural Machine Translation task. ### Methodology ::: Online and Offline Predictions Previous work mainly focuses on the offline setting, where we can access all the utterances in the dialogue and predict all the DA labels simultaneously. However, the online setting is the natural demand of real-time applications. In the online setting, we only care about the recognition result of the last utterance in the given context. As seen in the area within the red dashed line in Figure FIGREF6, our model is well suited to the online setting: we can calculate the attention between the last utterance and the other utterances directly, where $K \in \mathbb {R}^{1\times d}, Q \in \mathbb {R}^{n\times d}, V \in \mathbb {R}^{n\times d}$. For LSTM, we still have to model the entire sequence, which is slower than attention-based models. Table TABREF17 shows the time complexity comparison, excluding the time cost of first-layer encoding; the dialogue length $n$ is smaller than the representation dimension $d$. Our model is easy to extend to the online setting; however, to have a fair comparison with previous work, we applied the models under the offline setting by default in our experiments. ### Methodology ::: Separate into Sub-dialogues The length of dialogues in the dataset varies a lot. It is worth noting that the dialogue length affects the model prediction. On the one hand, under the offline setting we can access all the utterances in the dialogue and predict all the DA labels simultaneously, so the more utterances, the more efficient the prediction. On the other hand, if we put too many utterances into one prediction, the model captures too many unrelated dependencies in the long utterance sequence, for both LSTM- and attention-based models. Sub-dialogues of the same length also enable efficient batch training.
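Stepping back to the local contextual attention defined above (the sub-dialogue windows are detailed next), here is a minimal sketch of the revised attention, with the Gaussian bias added to the attention logits before the softmax. The exact formulas for $POS_{i,j}$, $c_i$, and $w_i$ are elided in this extract, so the parameterisation below (a standard Gaussian bias, with the centre offset and window size predicted from the mean key vector via tanh/sigmoid squashing) is an assumption consistent with the description, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class LocalContextualAttention(nn.Module):
    """Self-attention whose logits receive a learnable Gaussian bias centred
    near each utterance (|c_i - i| <= C), so far-away utterances are damped."""
    def __init__(self, d_model, max_offset_C=3, max_window_D=5):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.w_c = nn.Linear(d_model, 1)   # predicts the centre offset
        self.w_d = nn.Linear(d_model, 1)   # predicts the window size
        nn.init.zeros_(self.w_c.weight)    # zero init => c_i = i by default
        nn.init.zeros_(self.w_c.bias)
        self.C, self.D = max_offset_C, max_window_D

    def forward(self, s):                              # s: (N, d_model)
        N, d = s.shape
        q, k, v = self.q(s), self.k(s), self.v(s)
        logits = q @ k.t() / d ** 0.5                  # original dot-product term
        k_bar = k.mean(dim=0)                          # mean key as context summary
        pos = torch.arange(N, dtype=s.dtype, device=s.device)
        centre = pos + self.C * torch.tanh(self.w_c(k_bar))      # c_i in (i-C, i+C)
        window = 1e-3 + self.D * torch.sigmoid(self.w_d(k_bar))  # w_i > 0
        # Gaussian bias: penalise positions j far from the predicted centre c_i
        bias = -((pos.unsqueeze(0) - centre.unsqueeze(1)) ** 2) / (2 * window ** 2)
        attn = torch.softmax(logits + bias, dim=-1)
        return attn @ v
```

For a dialogue of N utterance vectors `s`, the returned tensor replaces the plain self-attention output and is added back to `s` through the residual connection described earlier. Note that for simplicity this sketch shares one learned offset and window across positions, whereas the paper's per-utterance parameters may differ.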
To study how the dialogue length and context padding length affect performance, we define a sliding window $W$, which is the sub-dialogue length, and separate each long dialogue into several small sub-dialogues. For example, if the dialogue $D$ is a sequence of utterances with length $n$, we obtain $\lceil n/W \rceil $ sub-dialogues; for the $k$-th sub-dialogue, the utterance sequence is $(u_{(k-1)*W+1},u_{(k-1)*W+2},...,u_{k*W})$. To avoid losing context information through this separation, which would hurt context modeling for the utterances at the beginning and end of each sub-dialogue, we add $P$ (context padding) utterances at the beginning and the end of each sliding window, so for the $k$-th sub-dialogue the revised utterance sequence is $(u_{(k-1)*W-P+1},u_{(k-1)*W-P+2},...,u_{k*W+P})$. Moreover, we mask the loss for the context-padding utterances, which can be formally expressed as: $M(i)=0$ if utterance $i$ is in the context padding and 1 otherwise, where $L$ is the cross entropy. $W$ and $P$ are both hyperparameters; in experiment SECREF21 we discuss the effect of the window size and the context padding length. ### Experiments ::: Datasets We evaluate the performance of our model on two high-quality datasets: the Switchboard Dialogue Act Corpus (SwDA) BIBREF4 and DailyDialog BIBREF24. SwDA has been widely used in previous work on the DA recognition task. It is annotated on 1155 human-to-human telephone conversations about given topics. Each utterance in a conversation is manually labeled as one of 42 dialogue acts according to the SWBD-DAMSL taxonomy BIBREF25. In BIBREF10, they used 43 categories of dialogue acts, which differs from our work and previous work. The difference in the number of labels is mainly due to the special label “+”, which indicates that the utterance was interrupted by the other speaker (and thus split into two or more parts). We used the same processing as BIBREF26, which concatenates the parts of an interrupted utterance, gives the result the tag of the first part, and puts it in its place in the conversation sequence. This is critical for fair comparison because nearly 8% of the data carries the label “+”. Lacking standard splits, we followed the training/validation/test splits of BIBREF14. The DailyDialog dataset contains 13118 multi-turn dialogues, which mainly reflect everyday communication style. It covers various topics of daily life. Each utterance in a conversation is manually labeled as one of 4 dialogue act classes. Table TABREF18 presents the statistics for both datasets. In our preprocessing, the text was lowercased before tokenization, and sentences were then tokenized with the WordPiece tokenizer BIBREF27 using a 30,000-token vocabulary to alleviate the out-of-vocabulary problem. [1]The authors claimed that they achieved 78.7% (81.3%) accuracy with pre-trained word embeddings (fine-grained embeddings). For a fair comparison, both previous work and ours are simply based on pre-trained word embeddings. [2]The authors randomly selected two test sets, which differ from those used in previous work and ours, and achieved 77.15% and 79.74%; we reimplemented their method on the standard test set. ### Experiments ::: Results on SwDA In this section, we evaluate the proposed approaches on the SwDA dataset. Table TABREF20 shows our experimental results alongside previous results on the SwDA dataset.
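Before turning to the results, the sub-dialogue segmentation and loss masking described above can be sketched as follows. This is a minimal illustration; the function and variable names are ours.

```python
def split_dialogue(utterances, labels, W, P):
    """Split one dialogue into ceil(n / W) sub-dialogues of length W, each
    padded with up to P context utterances on both sides; the mask is 1 only
    for the W "real" positions, so padded positions contribute no loss."""
    n = len(utterances)
    chunks = []
    for start in range(0, n, W):
        end = min(start + W, n)
        lo, hi = max(0, start - P), min(n, end + P)
        mask = [1 if start <= lo + i < end else 0 for i in range(hi - lo)]
        chunks.append((utterances[lo:hi], labels[lo:hi], mask))
    return chunks

# Example: a 12-utterance dialogue with window W=5 and context padding P=2
for utts, labs, mask in split_dialogue(list(range(12)), list(range(12)), W=5, P=2):
    print(utts, mask)

# Masked cross-entropy per sub-dialogue, matching the loss masking above:
# loss = (F.cross_entropy(logits, targets, reduction="none") * mask).sum() / mask.sum()
```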
It is worth noting that BIBREF10 combined GloVe BIBREF28 and pre-trained ELMo representations BIBREF29 as word embeddings. In our work, however, we only applied pre-trained word embeddings. To illustrate the importance of context information, we also evaluate several sentence classification methods (CNN, LSTM, BERT) as baselines. Among the baseline models, CNN and LSTM achieved similar accuracy (75.27% and 75.59%, respectively). We also fine-tuned BERT BIBREF30 to perform recognition based on the single utterance. As seen, with a powerful unsupervised pre-trained language model, BERT (76.88% accuracy) outperformed the LSTM and CNN models for single-sentence classification. However, it was still much lower than the models based on context information. This indicates that context information is crucial in the DA recognition task. BERT can boost performance by a large margin, but it costs too much time and too many resources. For this reason, we chose LSTM as our utterance encoder in further experiments. By modeling context information, the performance of the hierarchical model is improved by at least 3%, even compared to BERT. In order to better analyze the semantic dependency learned by attention, we removed the CRF module in our experiments. Among the different hierarchical models, our LSTM+BLSTM achieved a good result: its accuracy was 80.00%, even slightly better than Hierarchical BLSTM-CRF BIBREF9. Relying on the attention mechanism and local contextual modeling, our models, LSTM+Attention and LSTM+Local Contextual Attention, achieved 80.12% and 80.34% accuracy, respectively. Compared with the previous best approach, Hierarchical BLSTM-CRF, we obtain a relative accuracy gain of 1.1% with our best model. This indicates that the self-attention model captures context dependency better than the BLSTM model. By adding the local constraint, we obtain an even better result. To further illustrate the effect of the context length, we also performed experiments with different sliding window sizes $W$ and context padding lengths $P$. Table TABREF22 shows the results. Note that the setting with $P = 0$ (no context provided) is equivalent to single-sentence classification. First, we set $W$ to 1 to examine how the length of the context padding affects performance. As seen in the results, accuracy increased as more context padding was used for both the LSTM+BLSTM and LSTM+Attention approaches, so we did not evaluate LSTM+LC Attention with small context padding. There was no further accuracy improvement when the length of context padding went beyond 5. Therefore, we fixed the context padding length $P$ to 5 and increased the size of the sliding window to see how it affects performance. As the sliding window size increases, more context is involved, along with more unnecessary information. From the experiments, we can see that both LSTM+BLSTM and LSTM+Attention achieved their best performance when the window size was 1 and the context padding length was 5. When the window size increased, the performance of these two models dropped. However, our model (LSTM+LC Attention) leverages the context information more efficiently: it achieved its best performance when the window size was 10, and it was more stable and robust across different window-size settings. For online prediction, we only care about the recognition result of the last utterance in the given context. We added 5 preceding utterances as context padding for every predicted utterance because we cannot access subsequent utterances in the online setting.
As seen in Table TABREF22, without subsequent utterances, the performance of all three models dropped. However, LSTM+LC Attention still outperformed the other two models. ### Experiments ::: Results on DailyDialog The classification accuracy on the DailyDialog dataset is summarized in Table TABREF23. For sentence classification without context information, the fine-tuned BERT still outperformed the LSTM- and CNN-based models. From Table TABREF18 we can see that the average dialogue length $|U|$ in DailyDialog is much shorter than that of SwDA. So, in our experiments, we set the maximum $W$ to 10, which covers almost all the utterances in a dialogue. Following the same procedure as for the SwDA dataset, we first set $W$ to 1 and increased the length of the context padding. As seen, by modeling local context information, the hierarchical models yielded a significant improvement over sentence classification. There was no further accuracy improvement when the length of context padding went beyond 2, so we fixed the context padding length $P$ to 2 and increased the sliding window size $W$. From the experiments, we can see that LSTM+Attention always achieved slightly better accuracy than LSTM+BLSTM. As the window size increased, the performance of these two models dropped. Relying on modeling local contextual information, LSTM+LC Attention achieved the best accuracy (85.81%) when the window size was 5. For longer sliding windows, LSTM+LC Attention remained better and more robust than the other two models. For online prediction, we added 2 preceding utterances as context padding, and the experiment shows that LSTM+LC Attention outperformed the other two models under the online setting, although the performance of all three models dropped without subsequent utterances. ### Experiments ::: Visualization In this section, we visualize the attention weights to analyze how local contextual attention works in detail. Figure FIGREF24 shows the visualization of the original attention and the local contextual attention for the example dialogue shown in Table TABREF1. The attention matrix $M$ explicitly measures the dependency among utterances. Each row of the grid is normalized by $softmax$; $M_{ij}$ represents the dependency score between utterance $i$ and utterance $j$. As demonstrated in Figure FIGREF24, there are some wrong and uninterpretable attention weights, annotated in red, learned by the original attention. The original attention model gives the utterances “B: Hi” (position 0) and “A: Okay.” (position 7) a high dependency score. However, local contextual attention weakens this attention weight because of the long distance between them. Overall, the additional Gaussian bias tends to concentrate the attention distribution toward the diagonal of the matrix, which is in line with the linguistic intuition that utterances far apart usually do not have strong dependencies. As demonstrated in Figure FIGREF24, benefiting from the additional Gaussian bias, the revised attention mechanism weakens the attention weights between utterances separated by long relative distances. For the grid cells near the diagonal, it strengthens the dependency scores without introducing useless dependencies, thanks to its learnable magnitude. ### Conclusions and Future Work In this paper, we propose a hierarchical model with local contextual attention for the Dialogue Act Recognition task. Our model explicitly captures the semantic dependencies between utterances within a dialogue.
To enhance our model with local contextual information, we revise the attention distribution with a learnable Gaussian bias so that it focuses on local neighbors. Based on our dialogue segmentation mechanism, we find that local contextual attention reduces noise through relative position information, which is essential for dialogue act recognition. This segmentation mechanism can be applied in both online and offline settings. Our model achieves promising performance on two well-known datasets, which shows that modeling local contextual information is crucial for dialogue act recognition. There is a close relation between dialogue act recognition and discourse parsing BIBREF31. Most discourse parsing processes are composed of two stages: structure construction and dependency labeling BIBREF32, BIBREF33. For future work, a promising direction is to apply our method in multi-task training with the two stages jointly. Incorporating supervised information about the dependencies between utterances may enhance the self-attention and further improve the accuracy of dialogue act recognition. Table 1: A snippet of a conversation with DA labels from the Switchboard dataset. Figure 1: The model structure for DA recognition, where the LSTM with max-pooling is simplified as the utterance encoder in our experiments. The area within the red dashed line represents the structure for online prediction. Table 2: Time complexity of LSTM and self-attention for both online and offline prediction, excluding the time cost of first-layer encoding. The parameter n represents the dialogue length in the sliding window and d represents the dimension of the representation unit. Table 3: |C| indicates the number of classes. |U| indicates the average length of dialogues. The train/validation/test columns indicate the number of dialogues (the number of sentences) in the respective splits. Table 4: Comparison of previous approaches and our approaches on the SwDA dataset. Table 5: Experimental results for the hyperparameters W and P on the SwDA dataset, together with online prediction results. W and P indicate the sliding window size and context padding length during training and testing. Table 6: Experimental results on the DailyDialog dataset. Figure 2: Visualization of original attention and local contextual attention. Each colored grid cell represents the dependency score between two sentences. The deeper the color, the higher the dependency score.
Table TABREF20, Table TABREF22, Table TABREF23
Of all the individuals described in the article, who seemed to make the riskiest decision described? A. McCain B. Buchanan C. Dole D. Bauer
Republican Shakeout This weekend's straw poll in Ames, Iowa, kicked off the 2000 presidential race and sorted out the Republican field. Everyone agrees that George W. Bush is the front-runner, that Steve Forbes is in second place, and that Dan Quayle, who finished back in the pack with Lamar Alexander, will soon join Alexander on the sidelines. But Ames failed to resolve the fate of the candidates who came in third and fourth--Elizabeth Dole and Gary Bauer--and the one who skipped Ames, John McCain. For these three, the post-game spin contest is crucial. Here's a playback of their takes on the straw poll results and a look ahead at their playbook of messages for the remainder of the race. Elizabeth Dole Playback 1. Top three. Dole needed to get within striking distance of Bush and to seal off the rest of the pack behind her. On Meet the Press , Face the Nation , and Late Edition , she boasted that she had cracked "the top three." Pundits bought the three-winners line, treating Ames as a horse race ("win, place, and show") and noting that "no one's ever won the Republican nomination without finishing in the top three" at Ames. Newspapers, cramped for space, confined their headlines to Bush, Forbes, and Dole. Though Dole's 14 percent was closer to Bauer's 9 than to Forbes' 21, she earned a "solid third" and a place among the leaders by crossing the "double-digit" threshold. As Fox News' Carl Cameron put it: "The other seven candidates could not crack double digits." 2. Race for third. Since Bush and Forbes were expected to finish first and second, many pundits concluded, as Lisa Myers put it on Meet the Press , that "the real race here was for third. Elizabeth Dole won that." The Boston Globe called Dole "the winner of this contest-within-the-contest." Dole touted her "victory" on every talk show and cited the Myers and Globe quotes in a press release. At a news conference, an aide introduced Dole as the straw poll's "real winner." 3. Underdog. In every TV interview, Dole claimed to have been "outspent by millions of dollars." Her spokesman told reporters that "on a dollar-per-vote basis, Elizabeth Dole trounced George Bush and Steve Forbes." Reporters love an underdog. "From a strict cost-benefit standpoint, the big winner may be Elizabeth Dole," concluded Time . 4. Comeback kid. Dismissive coverage of Dole before the straw poll played to her advantage, as everyone marveled at her "surprisingly" strong third. "Dole Revived," the Washington Post 's front page proclaimed. On This Week , George Will conceded, "There had been a lot of very skeptical stories about whether her people would show up. She, therefore, I think, is the biggest winner." Playbook 1. Race for second. Forbes wants to fast-forward the GOP tournament to a finals bracket: Bush vs. Forbes. To prevent this, Dole needs to create a semifinal playoff--Forbes vs. Dole--to determine who gets to play Bush. Despite Forbes' huge financial advantage, "we finished close to second," Dole told reporters Saturday night. "This is going to become a two-person race." The press agreed. "Forbes had growing hopes ... that he might upset Bush or finish a close second," recalled the Post . Instead, "he finished closer to Dole than to Bush." 2. Experience. Having narrowed the field to three, Dole needs to focus the contest on criteria that favor her. The first of these is political experience, of which Bush has little and Forbes has almost none. 
On every talk show, Dole vowed "to demonstrate that the candidate with the most experience is more qualified than the candidates with the most money. ... We're talking about president of the United States." 3. Gender. This is the more obvious criterion that distinguishes Dole. She hardly needs to mention it--the media bring it up anyway--but she invokes it subtly, alluding (as she did on two Sunday talk shows) to "women who drive their daughters halfway across the state to shake my hand, a woman they dare to believe in." Newspapers hail Dole's female followers as evidence "that she can attract new voters to the GOP." Gary Bauer Playback 1. Top four. Like Dole, Bauer needed to crack the top tier and seal off the pack. Since sports analogies tend to cut off the top tier at three rather than four (e.g., "bronze medal," "win, place, and show"), Bauer changed metaphors, telling reporters that he had reached "the first rung of candidates" and that lower finishers might soon perish. On Meet the Press , he called himself the "breakout candidate." While some pundits lumped Bauer with the winners, others offered him the next best position--"leading the rest of the pack"--or at least distinguished him from the "losers." 2. Social conservative quarterfinal. This was Bauer's big spin win. Like Dole, he won a crucial "contest-within-the-contest." His scant margin over Pat Buchanan--8.9 percent to 7.3 percent--became a huge factor in the post-poll analysis. Pundits concluded that Bauer "did what he had to do ... beat Pat Buchanan," and therefore "can legitimately say he is the candidate of the Christian right," establishing himself as "one of the winners," the "three or four" candidates who "got their tickets punched" to stay in the race. Talk show hosts reminded Buchanan that he had lost to Bauer and asked whether Buchanan was finished. 3. Conservative semifinal. Having scored well ahead of Bauer and Buchanan, Forbes anointed himself "the conservative in a two-man race" against Bush. Bauer disagreed, and the media took his side. "Forbes, Bauer Battle for Right," the Post proclaimed, concluding that because Forbes failed to break away, "he and Bauer are likely to continue a long and tough fight for the leadership of the conservative wing." 4. Underdog. Bauer couldn't claim to be more strapped than Dole, so he claimed underdog status on the basis of low name recognition, inexperience, and working-class heritage. "I am running against some big bios ... the son of a former president, the son of a tycoon, and the wife of a senator," Bauer argued on Late Edition . "I have never run for president or office before. And yet here we come in fourth place." Newsweek 's David Brooks wrote that Bauer "overcame his own financial disadvantages" and joined Dole as the two surviving "Have-Not candidates." Playbook 1. Buchanan will defect. Since Buchanan's combativeness and loyal base make him hard to write off as a candidate, his rivals have persuaded the media at least to write him off as a Republican by inferring that his low score at Ames will prompt him to transfer to the Reform Party. The more Buchanan fends off comparisons to Bauer by emphasizing his protectionism, the more he plays into this scenario. 2. Populism. With Buchanan out of the way, Bauer will go after Forbes. When asked on television about Forbes' claim to represent the right. Bauer cited Forbes' wealth and called himself "the son of a maintenance man." 
On This Week , George Stephanopoulos agreed that Bauer "is becoming the populist in the race," noting that Bauer's supporters "love the fact that he was the son of a janitor." 3. Conservatism. If Bauer wins the social conservative quarterfinal and the conservative semifinal, he gets to run as the "Reagan" candidate against "Bush-Gore" moderation on abortion, Hollywood, China, and other hot-button issues. This bracket-by-bracket tournament strategy reduces Bauer's obstacles from three candidates to two. He can target Forbes, knowing that if he prevails, either Bush or Dole will have vanquished the other in the moderate semifinal. Indeed, Dole's success at Ames arguably helps Bauer by giving Bush a semifinal contest. John McCain Playback 1. Ames meant nothing to him. Despite having skipped the straw poll, McCain was invited onto Face the Nation and Fox News Sunday to discuss it. "If you're going to be taken seriously," Brit Hume asked him, "don't you have to face up to the fact, when all the other candidates decide that an event is worth attending ... that maybe you've got to play too?" In reply, McCain repeatedly called Ames "meaningless." His chutzpah bowled over the pundits. Stephanopoulos called McCain's no-show "a pretty smart move" and portrayed the 83 votes he won in the straw poll--putting him in last place among active Republican candidates--as evidence of his strength. 2. Ames meant death for others. Noting that McCain had bypassed the event, Quayle explained on Face the Nation that he, too, "almost took a pass on this. It wasn't until George Bush said he was going to participate that then I said, 'OK, we've got to do it,' out of respect to the Iowa Republican Party." The result, Quayle pleaded, was that he lost to candidates who had been in Iowa "years and months." McCain, explaining his decision to stay out, espoused a less sentimental philosophy: "You always want to fight on ground that is most favorable to you." For this, the media executed Quayle and spared McCain. "Quayle and Lamar Alexander might be gone, but I think McCain is still in," concluded NPR's Mara Liasson. Ames was Vietnam in reverse: McCain ducked the fight, and Quayle took the beating. 3. Viability. "Once the dust has settled from the straw poll," McCain regally announced, "I will review the new political landscape" and begin "engaging the other Republican candidates." Why does McCain get a bye? Because he has convinced the media that he has enough money and support in New Hampshire, South Carolina, and other states to skip Iowa and catch fire later. Newsweek , the New York Times , the Los Angeles Times , and several TV pundits agreed that McCain remains formidable, wasn't hurt by Ames, and may well end up as the principal alternative to Bush. 4. Vote-buying. To undermine the straw poll's authority as an arbiter of his candidacy, McCain called it a "fund-raiser," "a sham and a joke" in which campaigns spent "millions" to "buy" votes. "My campaign theme is to try to reform the system that is now awash with money and the influence of special interests," he argued on Fox News Sunday . Brit Hume's retort--"that this whole process isn't quite pure enough for you"--played right into McCain's hands. McCain doesn't need to persuade the media that his reasons for skipping Ames were morally sound. He just needs to persuade them that his reasons were moral rather than political. Playbook 1. Real votes. The vote-buying complaint only gets McCain a bye on the straw poll. 
To get another bye on February's Iowa caucuses, he'll rely on two other moral arguments. First, he'll claim that caucuses aren't "real votes." "We'll have real votes in New Hampshire," McCain argued on Fox News Sunday . "That's where real people are motivated to vote." On Face the Nation , he suggested that he would focus on "the genuine balloting process, which takes place in New Hampshire and then South Carolina." 2. Ethanol. Many pundits, fancying themselves shrewd, suggest that McCain's true reason for skipping Iowa is that he has "taken a position on ethanol subsidies that's unpalatable to voters in Iowa." On This Week , Stephanopoulos suggested that McCain might "have to do something dramatic," such as "make a stand and say, 'We're not going to compete in Iowa. We think these ethanol subsidies are an abomination.' " This is McCain's greatest triumph: He has conned the media into disbelieving his political calculations and accusing him instead of principle. "I've taken a lot of unpopular positions," he conceded on Fox News Sunday . 3. Experience. The longer McCain stays out of the race without damaging his credibility, the more the field narrows to his advantage. Alexander and Rep. John Kasich, R-Ohio, are already gone. Quayle and Sen. Orrin Hatch, R-Utah, won't be far behind. If the field dwindles to Bush, Forbes, and Bauer, McCain can sell himself as the only experienced officeholder running against Bush. But Dole's third-place finish at Ames, coupled with her victory in the post-Ames spin contest, complicates this plan. So here's how the race shapes up. Bauer will frame it as a populist showdown, chiefly between himself and Forbes. Forbes will frame it as a fight between the establishment, led by Bush, and conservatives, led by himself. Dole will exploit feminism as well as feminine stereotypes, pitching herself as the candidate of change, civility, and moral renewal. And McCain will fortify his war chest while his rivals battle and bleed. Ames has organized the contestants. Let the games begin.
A. McCain
Which dataset(s) do they evaluate on?
### Introduction Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0; these often include acoustic frontends, a duration model, an acoustic prediction model, and vocoder models. The complexity of the TTS problem, coupled with the requirement for deep domain expertise, means these systems are often brittle in design and produce unnatural synthesized speech. The recent push toward deep, end-to-end TTS architectures BIBREF1 BIBREF2 that can be trained on <text,audio> pairs shows that deep neural networks can indeed be used to synthesize realistic-sounding speech, while at the same time eliminating the need for complex sub-systems that need to be developed and trained separately. The problem of TTS can be summed up as a signal-inversion problem: given a highly compressed source signal (text), we need to invert or "decompress" it into audio. This is a difficult problem, as there are many ways for the same text to be spoken. In addition, unlike end-to-end translation or speech recognition, TTS outputs are continuous, and output sequences are much longer than input sequences. Recent work on neural TTS can be split into two camps: in one camp, Seq2Seq models with recurrent architectures are used BIBREF1 BIBREF3; in the other camp, fully convolutional Seq2Seq models are used BIBREF2. Our model belongs to the first of these classes, using recurrent architectures. Specifically, we make the following contributions: ### Related Work Neural text-to-speech systems have garnered large research interest in the past two years. The first to fully explore this avenue of research was Google's Tacotron BIBREF1 system. Their architecture is based on the original Seq2Seq framework. In addition to the encoder/decoder RNNs from the original Seq2Seq, they also included a bottleneck prenet module termed CBHG, which is composed of sets of 1-D convolution networks followed by highway residual layers. The attention mechanism follows the original Seq2Seq BIBREF7 mechanism (often termed Bahdanau attention). This was the first work to propose training a Seq2Seq model to convert text to a mel spectrogram, which can then be converted to an audio waveform via iterative algorithms such as Griffin-Lim BIBREF8. A parallel work exploring a Seq2Seq RNN architecture for text-to-speech was Char2Wav BIBREF3. This work utilized a very similar RNN-based Seq2Seq architecture, albeit without any prenet modules. The attention mechanism is the Gaussian mixture model (GMM) attention from Alex Graves' work. Their model mapped the text sequence to 80-dimensional vectors used by the WORLD vocoder BIBREF9, which inverts these vectors into an audio wave. More recently, a fully convolutional Seq2Seq architecture was investigated by Baidu Research BIBREF2 BIBREF10. The Deep Voice architecture is composed of causal 1-D convolution layers for both the encoder and decoder. They utilized query-key attention similar to that of the transformer architecture BIBREF5. Another fully convolutional Seq2Seq architecture, known as DCTTS, was proposed in BIBREF6. In this architecture, they employ modules composed of causal 1-D convolution layers combined with highway networks. In addition, they introduced methods to help guide attention alignment early in training, as well as a forced incremental attention mechanism that ensures a monotonically increasing attention read as the model decodes during inference. ### Model Overview The architecture of our model uses an RNN-based Seq2Seq model to generate a mel spectrogram from text.
The architecture is similar to that of Tacotron 2 BIBREF4. The generated mel spectrogram can be inverted either via iterative algorithms such as Griffin-Lim or through more complicated neural vocoder networks such as a mel-spectrogram-conditioned WaveNet BIBREF11. Figure FIGREF3 below shows the overall architecture of our model. ### Text Encoder The encoder encodes the input text sequence into a compact hidden representation, which is consumed by the decoder at every decoding step. The encoder is composed of a INLINEFORM0 -dim embedding layer that maps the input sequence into a dense vector. This is followed by a 1-layer bidirectional LSTM/GRU with INLINEFORM1 hidden dim ( INLINEFORM2 hidden dim total for both directions). Two linear projection layers project the LSTM/GRU hidden output into two vectors INLINEFORM3 and INLINEFORM4 of the same INLINEFORM5 -dimension; these are the key and value vectors. DISPLAYFORM0 where INLINEFORM0 . ### Query-Key Attention Query-key attention is similar to that of transformers BIBREF5. Given INLINEFORM0 and INLINEFORM1 from the encoder, the query, INLINEFORM2 , is computed from a linear transform of the concatenation of the previous decoder-RNN hidden state, INLINEFORM3 , and the attention-RNN hidden state, INLINEFORM4 . DISPLAYFORM0 Given INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , the attention at each decoding step is computed by the scaled dot-product operation as: DISPLAYFORM0 Note that, similar to transformers BIBREF5, we scale the dot-product by INLINEFORM0 to prevent the softmax function from entering regions where it has extremely small gradients. ### Decoder The decoder is an autoregressive recurrent neural network that predicts the mel spectrogram from the encoded input sentence one frame at a time. The decoder decodes the hidden representation from the encoder with the guidance of attention. The decoder is composed of two uni-directional LSTM/GRUs with INLINEFORM0 hidden dimensions. The first LSTM/GRU, called the AttentionRNN, computes attention-mechanism-related items such as the attention query INLINEFORM1 . DISPLAYFORM0 The second LSTM/GRU, the DecoderRNN, is used to compute the decoder hidden output, INLINEFORM0 . DISPLAYFORM0 A 2-layer dense prenet of dimensions (256,256) projects the previous mel spectrogram output INLINEFORM0 into the hidden dimension INLINEFORM1 . Similar to Tacotron 2, the prenet acts as an information bottleneck to help produce a useful representation for the downstream attention mechanism. Our model differs from Tacotron 2 in that we jointly project 5 consecutive mel frames at once into our hidden representation, which is faster than Tacotron 2, which projects 1 mel frame at a time. The DecoderRNN's hidden state INLINEFORM0 is also projected to a mel spectrogram INLINEFORM1 . A residual post-net composed of 2 dense layers followed by a tanh activation function also projects the same decoder hidden state INLINEFORM2 to a mel spectrogram INLINEFORM3 , which is added to the linearly projected mel INLINEFORM4 to produce the final mel spectrogram INLINEFORM5 . DISPLAYFORM0 A linear spectrogram INLINEFORM0 is also computed from a linear projection of the decoder hidden state INLINEFORM1 . This acts as an additional condition on the decoder hidden input. DISPLAYFORM0 A single scalar stop token is computed from a linear projection of the decoder hidden state INLINEFORM0 to a scalar, followed by INLINEFORM1 , the sigmoid function. This stop token allows the model to learn when to stop decoding during inference.
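A minimal sketch of the query-key attention step just described, where the key and value come from the text encoder and the query is a linear transform of the concatenated decoder-RNN and attention-RNN hidden states. The exact equations are elided above (the DISPLAYFORM placeholders), so the names and dimensions here are placeholders rather than the authors' exact formulation.

```python
import torch
import torch.nn as nn

class QueryKeyAttention(nn.Module):
    """Scaled dot-product attention over encoder outputs: keys/values come
    from the text encoder, the query from the concatenated decoder-RNN and
    attention-RNN hidden states."""
    def __init__(self, dec_dim, attn_dim):
        super().__init__()
        self.query_proj = nn.Linear(2 * dec_dim, attn_dim)
        self.scale = attn_dim ** 0.5

    def forward(self, keys, values, h_dec, h_attn):
        # keys, values: (T_enc, attn_dim); h_dec, h_attn: (dec_dim,)
        q = self.query_proj(torch.cat([h_dec, h_attn], dim=-1))  # (attn_dim,)
        weights = torch.softmax(keys @ q / self.scale, dim=-1)   # (T_enc,)
        context = weights @ values                               # (attn_dim,)
        return context, weights
```

The returned context vector is what the DecoderRNN consumes at each decoding step, and the weights are the alignment that the guided-attention loss and forced incremental attention described below operate on.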
During inference, if the stop token output is INLINEFORM2 , we stop decoding. DISPLAYFORM0 ### Training and Loss The total loss of the model is computed as the sum of 3 component losses: 1. mean squared error (MSE) between the predicted and ground-truth mel spectrogram; 2. MSE of the linear spectrogram; 3. binary cross-entropy loss of the stop token. The Adam optimizer is used to optimize the model with a learning rate of INLINEFORM0 . The model is trained via teacher forcing, where the ground-truth mel spectrogram is supplied at every decoding step instead of the model's own predicted mel spectrogram. To ensure the model can learn long-term sequences, the teacher forcing ratio is annealed from 1.0 (full teacher forcing) to 0.2 (20 percent teacher forcing) over 300 epochs. ### Proposed Improvements Our proposed improvements come from the observation that employing generic Seq2Seq models for the TTS application misses out on further optimization that can be achieved when we consider the specific problem of TTS. Specifically, we notice that in TTS, unlike in applications like machine translation, the Seq2Seq attention mechanism should be mostly monotonic. In other words, when one reads a sequence of text, it is natural to assume that the text position progresses nearly linearly in time with the sequence of output mel spectrogram frames. With this insight, we can make 3 modifications to the model that allow us to train faster while using a smaller model. ### Changes to Attention Mechanism In the original Tacotron 2, the attention mechanism used was location-sensitive attention BIBREF12 combined with the original additive Seq2Seq BIBREF7 Bahdanau attention. We propose to replace this attention with the simpler query-key attention from the transformer model. As mentioned earlier, since for TTS the attention mechanism is an easier problem than, say, machine translation, we employ query-key attention because it is simple to implement and requires fewer parameters than the original Bahdanau attention. ### Guided Attention Mask Following the logic above, we utilize a method similar to BIBREF6 that adds an additional guided attention loss to the overall loss objective, which helps the attention mechanism become monotonic as early as possible. As seen in FIGREF24 , an attention loss mask, INLINEFORM0 , is created that applies a loss to force the attention alignment, INLINEFORM1 , to be nearly diagonal. That is: DISPLAYFORM0 Where INLINEFORM0 , INLINEFORM1 is the INLINEFORM2 -th character, INLINEFORM3 is the max character length, INLINEFORM4 is the INLINEFORM5 -th mel frame, INLINEFORM6 is the max mel frame, and INLINEFORM7 is set to 0.2. This modification dramatically speeds up attention alignment and model convergence. Figure 3 below shows the results visually. The two images are a side-by-side comparison of the model's attention after 10k training steps. The image on the left is from a model trained with the attention mask, and the image on the right is not. We can see that with the attention mask, clear attention alignment is achieved much faster. ### Forced Incremental Attention During inference, the attention INLINEFORM0 occasionally skips multiple characters or stalls on the same character for multiple output frames. To make generation more robust, we modify INLINEFORM1 during inference to force it to be diagonal.
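Before the forced incremental attention rule, here is a sketch of the guided attention mask just described. The exact formula is elided above; the version below follows the DCTTS-style guide, W[n, t] = 1 - exp(-(n/N - t/T)^2 / (2 g^2)) with g = 0.2, which is an assumption consistent with the description rather than a verbatim reproduction of the paper's equation.

```python
import torch

def guided_attention_mask(max_chars, max_frames, g=0.2):
    """Diagonal guide: small penalty near the diagonal, large penalty far from it."""
    n = torch.arange(max_chars, dtype=torch.float32) / max_chars
    t = torch.arange(max_frames, dtype=torch.float32) / max_frames
    return 1.0 - torch.exp(-((n.unsqueeze(1) - t.unsqueeze(0)) ** 2) / (2.0 * g * g))

def guided_attention_loss(alignment, mask):
    # alignment: (max_chars, max_frames) attention weights from the decoder;
    # the product is small only when the attention mass stays near the diagonal.
    return (alignment * mask).mean()
```

This loss is simply added to the three component losses above, pushing the alignment toward the diagonal early in training.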
The forced incremental attention is implemented as follows: given INLINEFORM0 , the position of the character read at the INLINEFORM1 -th time frame, where INLINEFORM2 , if INLINEFORM3 , the current attention is forcibly set to INLINEFORM4 , so that attention is incremental, i.e. INLINEFORM5 . ### Experiment Dataset The open-source LJSpeech dataset was used to train our TTS model. This dataset contains around 13k <text,audio> pairs of a single female English speaker collected from 7 different non-fiction books. The total training data amounts to around 21 hours of audio. One thing to note is that since this is open-source audio recorded in a semi-professional setting, the audio quality is not as good as that of proprietary internal data from Google or Baidu. As with most things in deep learning, the better the data, the better the model and results. ### Experiment Procedure Our model was trained for 300 epochs with a batch size of 32. We used a pre-trained open-source implementation of Tacotron 2 (https://github.com/NVIDIA/tacotron2) as the baseline comparison. Note that this open-source version is trained for much longer (around 1000 epochs); however, due to our limited compute, we only trained our model for up to 300 epochs. ### Evaluation Metrics We evaluate our model against previous baselines on two fronts: Mean Opinion Score (MOS) and training speed. Typical TTS system evaluation is done with the mean opinion score (MOS). To compute this score, many samples from a TTS system are given to human evaluators and rated on a scale from 1 (Bad) to 5 (Excellent). The MOS is then computed as the arithmetic mean of these scores: DISPLAYFORM0 Where INLINEFORM0 are the individual ratings for a given sample by N subjects. For their TTS models, Google and Baidu utilized Amazon Mechanical Turk to collect MOS scores from a larger number of workers. However, due to our limited resources, we chose to collect MOS scores from friends and family (6 people in total). For the training time comparison, we define the training time as the point when the attention alignment starts to become linear and clear. After digging through the GitHub issues of the Tacotron 2 open-source implementation, we found a few posts where users posted their training curves and attention alignments during training (they also used the default batch size of 32). We used their training steps to roughly estimate the training time of Tacotron 2 when attention roughly aligns. For all other models, the training time is not comparable, as it either does not apply (e.g., the parametric model) or is not reported (Tacotron with Griffin-Lim, Deep Voice 3). Comparing model parameters directly, our model contains 4.5 million parameters, whereas the open-source Tacotron 2 contains around 13 million parameters with the default settings. By helping our model learn attention alignment faster, we can afford to use a smaller overall model while achieving similar speech quality. ### Conclusion We introduce a new architecture for an end-to-end neural text-to-speech system. Our model relies on an RNN-based Seq2Seq architecture with query-key attention. We introduce a novel guided attention mask to improve model training speed, which at the same time reduces model parameters. This allows our model to achieve attention alignment at least 3 times faster than previous RNN-based Seq2Seq models such as Tacotron 2. We also introduce forced incremental attention during synthesis to prevent attention alignment mistakes and to allow the model to generate coherent speech for very long sentences.
Figure 1: Overall architecture of our Seq2Seq model for neural text-to-speech. Note that the inputs, encoder, decoder, and attention are labelled in different colors. Figure 2: Attention guide mask. Note that bright areas have larger values and dark areas have smaller values. Figure 3: Attention alignment plots for two identical models trained with and without guided attention masks. Both models have been trained for 10k steps in this figure. Table 1: Result table comparing MOS scores and roughly estimated training times between different TTS systems.
LJSpeech
What is one common theme in this article? A. Money buys happiness. B. Suspicion indicates deception. C. Education does not always lead to success. D. Wit and charm are the keys for negotiation.
CULTURAL EXCHANGE BY KEITH LAUMER It was a simple student exchange—but Retief gave them more of an education than they expected! [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, September 1962. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] I Second Secretary Magnan took his green-lined cape and orange-feathered beret from the clothes tree. "I'm off now, Retief," he said. "I hope you'll manage the administrative routine during my absence without any unfortunate incidents." "That seems a modest enough hope," Retief said. "I'll try to live up to it." "I don't appreciate frivolity with reference to this Division," Magnan said testily. "When I first came here, the Manpower Utilization Directorate, Division of Libraries and Education was a shambles. I fancy I've made MUDDLE what it is today. Frankly, I question the wisdom of placing you in charge of such a sensitive desk, even for two weeks. But remember. Yours is purely a rubber-stamp function." "In that case, let's leave it to Miss Furkle. I'll take a couple of weeks off myself. With her poundage, she could bring plenty of pressure to bear." "I assume you jest, Retief," Magnan said sadly. "I should expect even you to appreciate that Bogan participation in the Exchange Program may be the first step toward sublimation of their aggressions into more cultivated channels." "I see they're sending two thousand students to d'Land," Retief said, glancing at the Memo for Record. "That's a sizable sublimation." Magnan nodded. "The Bogans have launched no less than four military campaigns in the last two decades. They're known as the Hoodlums of the Nicodemean Cluster. Now, perhaps, we shall see them breaking that precedent and entering into the cultural life of the Galaxy." "Breaking and entering," Retief said. "You may have something there. But I'm wondering what they'll study on d'Land. That's an industrial world of the poor but honest variety." "Academic details are the affair of the students and their professors," Magnan said. "Our function is merely to bring them together. See that you don't antagonize the Bogan representative. This will be an excellent opportunity for you to practice your diplomatic restraint—not your strong point, I'm sure you'll agree." A buzzer sounded. Retief punched a button. "What is it, Miss Furkle?" "That—bucolic person from Lovenbroy is here again." On the small desk screen, Miss Furkle's meaty features were compressed in disapproval. "This fellow's a confounded pest. I'll leave him to you, Retief," Magnan said. "Tell him something. Get rid of him. And remember: here at Corps HQ, all eyes are upon you." "If I'd thought of that, I'd have worn my other suit," Retief said. Magnan snorted and passed from view. Retief punched Miss Furkle's button. "Send the bucolic person in." A tall broad man with bronze skin and gray hair, wearing tight trousers of heavy cloth, a loose shirt open at the neck and a short jacket, stepped into the room. He had a bundle under his arm. He paused at sight of Retief, looked him over momentarily, then advanced and held out his hand. Retief took it. For a moment the two big men stood, face to face. The newcomer's jaw muscles knotted. Then he winced. Retief dropped his hand and motioned to a chair. "That's nice knuckle work, mister," the stranger said, massaging his hand. "First time anybody ever did that to me. My fault though. I started it, I guess." He grinned and sat down. "What can I do for you?" 
Retief said. "You work for this Culture bunch, do you? Funny. I thought they were all ribbon-counter boys. Never mind. I'm Hank Arapoulous. I'm a farmer. What I wanted to see you about was—" He shifted in his chair. "Well, out on Lovenbroy we've got a serious problem. The wine crop is just about ready. We start picking in another two, three months. Now I don't know if you're familiar with the Bacchus vines we grow...?" "No," Retief said. "Have a cigar?" He pushed a box across the desk. Arapoulous took one. "Bacchus vines are an unusual crop," he said, puffing the cigar alight. "Only mature every twelve years. In between, the vines don't need a lot of attention, so our time's mostly our own. We like to farm, though. Spend a lot of time developing new forms. Apples the size of a melon—and sweet—" "Sounds very pleasant," Retief said. "Where does the Libraries and Education Division come in?" Arapoulous leaned forward. "We go in pretty heavy for the arts. Folks can't spend all their time hybridizing plants. We've turned all the land area we've got into parks and farms. Course, we left some sizable forest areas for hunting and such. Lovenbroy's a nice place, Mr. Retief." "It sounds like it, Mr. Arapoulous. Just what—" "Call me Hank. We've got long seasons back home. Five of 'em. Our year's about eighteen Terry months. Cold as hell in winter; eccentric orbit, you know. Blue-black sky, stars visible all day. We do mostly painting and sculpture in the winter. Then Spring; still plenty cold. Lots of skiing, bob-sledding, ice skating; and it's the season for woodworkers. Our furniture—" "I've seen some of your furniture," Retief said. "Beautiful work." Arapoulous nodded. "All local timbers too. Lots of metals in our soil and those sulphates give the woods some color, I'll tell you. Then comes the Monsoon. Rain—it comes down in sheets. But the sun's getting closer. Shines all the time. Ever seen it pouring rain in the sunshine? That's the music-writing season. Then summer. Summer's hot. We stay inside in the daytime and have beach parties all night. Lots of beach on Lovenbroy; we're mostly islands. That's the drama and symphony time. The theatres are set up on the sand, or anchored off-shore. You have the music and the surf and the bonfires and stars—we're close to the center of a globular cluster, you know...." "You say it's time now for the wine crop?" "That's right. Autumn's our harvest season. Most years we have just the ordinary crops. Fruit, grain, that kind of thing; getting it in doesn't take long. We spend most of the time on architecture, getting new places ready for the winter or remodeling the older ones. We spend a lot of time in our houses. We like to have them comfortable. But this year's different. This is Wine Year." Arapoulous puffed on his cigar, looked worriedly at Retief. "Our wine crop is our big money crop," he said. "We make enough to keep us going. But this year...." "The crop isn't panning out?" "Oh, the crop's fine. One of the best I can remember. Course, I'm only twenty-eight; I can't remember but two other harvests. The problem's not the crop." "Have you lost your markets? That sounds like a matter for the Commercial—" "Lost our markets? Mister, nobody that ever tasted our wines ever settled for anything else!" "It sounds like I've been missing something," said Retief. "I'll have to try them some time." Arapoulous put his bundle on the desk, pulled off the wrappings. "No time like the present," he said. 
Retief looked at the two squat bottles, one green, one amber, both dusty, with faded labels, and blackened corks secured by wire. "Drinking on duty is frowned on in the Corps, Mr. Arapoulous," he said. "This isn't drinking . It's just wine." Arapoulous pulled the wire retainer loose, thumbed the cork. It rose slowly, then popped in the air. Arapoulous caught it. Aromatic fumes wafted from the bottle. "Besides, my feelings would be hurt if you didn't join me." He winked. Retief took two thin-walled glasses from a table beside the desk. "Come to think of it, we also have to be careful about violating quaint native customs." Arapoulous filled the glasses. Retief picked one up, sniffed the deep rust-colored fluid, tasted it, then took a healthy swallow. He looked at Arapoulous thoughtfully. "Hmmm. It tastes like salted pecans, with an undercurrent of crusted port." "Don't try to describe it, Mr. Retief," Arapoulous said. He took a mouthful of wine, swished it around his teeth, swallowed. "It's Bacchus wine, that's all. Nothing like it in the Galaxy." He pushed the second bottle toward Retief. "The custom back home is to alternate red wine and black." Retief put aside his cigar, pulled the wires loose, nudged the cork, caught it as it popped up. "Bad luck if you miss the cork," Arapoulous said, nodding. "You probably never heard about the trouble we had on Lovenbroy a few years back?" "Can't say that I did, Hank." Retief poured the black wine into two fresh glasses. "Here's to the harvest." "We've got plenty of minerals on Lovenbroy," Arapoulous said, swallowing wine. "But we don't plan to wreck the landscape mining 'em. We like to farm. About ten years back some neighbors of ours landed a force. They figured they knew better what to do with our minerals than we did. Wanted to strip-mine, smelt ore. We convinced 'em otherwise. But it took a year, and we lost a lot of men." "That's too bad," Retief said. "I'd say this one tastes more like roast beef and popcorn over a Riesling base." "It put us in a bad spot," Arapoulous went on. "We had to borrow money from a world called Croanie. Mortgaged our crops. Had to start exporting art work too. Plenty of buyers, but it's not the same when you're doing it for strangers." "Say, this business of alternating drinks is the real McCoy," Retief said. "What's the problem? Croanie about to foreclose?" "Well, the loan's due. The wine crop would put us in the clear. But we need harvest hands. Picking Bacchus grapes isn't a job you can turn over to machinery—and anyway we wouldn't if we could. Vintage season is the high point of living on Lovenbroy. Everybody joins in. First, there's the picking in the fields. Miles and miles of vineyards covering the mountain sides, and crowding the river banks, with gardens here and there. Big vines, eight feet high, loaded with fruit, and deep grass growing between. The wine-carriers keep on the run, bringing wine to the pickers. There's prizes for the biggest day's output, bets on who can fill the most baskets in an hour.... The sun's high and bright, and it's just cool enough to give you plenty of energy. Come nightfall, the tables are set up in the garden plots, and the feast is laid on: roast turkeys, beef, hams, all kinds of fowl. Big salads. Plenty of fruit. Fresh-baked bread ... and wine, plenty of wine. The cooking's done by a different crew each night in each garden, and there's prizes for the best crews. "Then the wine-making. We still tramp out the vintage. That's mostly for the young folks but anybody's welcome. 
That's when things start to get loosened up. Matter of fact, pretty near half our young-uns are born after a vintage. All bets are off then. It keeps a fellow on his toes though. Ever tried to hold onto a gal wearing nothing but a layer of grape juice?" "Never did," Retief said. "You say most of the children are born after a vintage. That would make them only twelve years old by the time—" "Oh, that's Lovenbroy years; they'd be eighteen, Terry reckoning." "I was thinking you looked a little mature for twenty-eight," Retief said. "Forty-two, Terry years," Arapoulous said. "But this year it looks bad. We've got a bumper crop—and we're short-handed. If we don't get a big vintage, Croanie steps in. Lord knows what they'll do to the land. Then next vintage time, with them holding half our grape acreage—" "You hocked the vineyards?" "Yep. Pretty dumb, huh? But we figured twelve years was a long time." "On the whole," Retief said, "I think I prefer the black. But the red is hard to beat...." "What we figured was, maybe you Culture boys could help us out. A loan to see us through the vintage, enough to hire extra hands. Then we'd repay it in sculpture, painting, furniture—" "Sorry, Hank. All we do here is work out itineraries for traveling side-shows, that kind of thing. Now, if you needed a troop of Groaci nose-flute players—" "Can they pick grapes?" "Nope. Anyway, they can't stand the daylight. Have you talked this over with the Labor Office?" "Sure did. They said they'd fix us up with all the electronics specialists and computer programmers we wanted—but no field hands. Said it was what they classified as menial drudgery; you'd have thought I was trying to buy slaves." The buzzer sounded. Miss Furkle's features appeared on the desk screen. "You're due at the Intergroup Council in five minutes," she said. "Then afterwards, there are the Bogan students to meet." "Thanks." Retief finished his glass, stood. "I have to run, Hank," he said. "Let me think this over. Maybe I can come up with something. Check with me day after tomorrow. And you'd better leave the bottles here. Cultural exhibits, you know." II As the council meeting broke up, Retief caught the eye of a colleague across the table. "Mr. Whaffle, you mentioned a shipment going to a place called Croanie. What are they getting?" Whaffle blinked. "You're the fellow who's filling in for Magnan, over at MUDDLE," he said. "Properly speaking, equipment grants are the sole concern of the Motorized Equipment Depot, Division of Loans and Exchanges." He pursed his lips. "However, I suppose there's no harm in telling you. They'll be receiving heavy mining equipment." "Drill rigs, that sort of thing?" "Strip mining gear." Whaffle took a slip of paper from a breast pocket, blinked at it. "Bolo Model WV/1 tractors, to be specific. Why is MUDDLE interested in MEDDLE's activities?" "Forgive my curiosity, Mr. Whaffle. It's just that Croanie cropped up earlier today. It seems she holds a mortgage on some vineyards over on—" "That's not MEDDLE's affair, sir," Whaffle cut in. "I have sufficient problems as Chief of MEDDLE without probing into MUDDLE'S business." "Speaking of tractors," another man put in, "we over at the Special Committee for Rehabilitation and Overhaul of Under-developed Nations' General Economies have been trying for months to get a request for mining equipment for d'Land through MEDDLE—" "SCROUNGE was late on the scene," Whaffle said. "First come, first served. That's our policy at MEDDLE. Good day, gentlemen." 
He strode off, briefcase under his arm. "That's the trouble with peaceful worlds," the SCROUNGE committeeman said. "Boge is a troublemaker, so every agency in the Corps is out to pacify her. While my chance to make a record—that is, assist peace-loving d'Land—comes to naught." He shook his head. "What kind of university do they have on d'Land?" asked Retief. "We're sending them two thousand exchange students. It must be quite an institution." "University? D'Land has one under-endowed technical college." "Will all the exchange students be studying at the Technical College?" "Two thousand students? Hah! Two hundred students would overtax the facilities of the college." "I wonder if the Bogans know that?" "The Bogans? Why, most of d'Land's difficulties are due to the unwise trade agreement she entered into with Boge. Two thousand students indeed!" He snorted and walked away. Retief stopped by the office to pick up a short cape, then rode the elevator to the roof of the 230-story Corps HQ building and hailed a cab to the port. The Bogan students had arrived early. Retief saw them lined up on the ramp waiting to go through customs. It would be half an hour before they were cleared through. He turned into the bar and ordered a beer. A tall young fellow on the next stool raised his glass. "Happy days," he said. "And nights to match." "You said it." He gulped half his beer. "My name's Karsh. Mr. Karsh. Yep, Mr. Karsh. Boy, this is a drag, sitting around this place waiting...." "You meeting somebody?" "Yeah. Bunch of babies. Kids. How they expect—Never mind. Have one on me." "Thanks. You a Scoutmaster?" "I'll tell you what I am. I'm a cradle-robber. You know—" he turned to Retief—"not one of those kids is over eighteen." He hiccupped. "Students, you know. Never saw a student with a beard, did you?" "Lots of times. You're meeting the students, are you?" The young fellow blinked at Retief. "Oh, you know about it, huh?" "I represent MUDDLE." Karsh finished his beer, ordered another. "I came on ahead. Sort of an advance guard for the kids. I trained 'em myself. Treated it like a game, but they can handle a CSU. Don't know how they'll act under pressure. If I had my old platoon—" He looked at his beer glass, pushed it back. "Had enough," he said. "So long, friend. Or are you coming along?" Retief nodded. "Might as well." At the exit to the Customs enclosure, Retief watched as the first of the Bogan students came through, caught sight of Karsh and snapped to attention, his chest out. "Drop that, mister," Karsh snapped. "Is that any way for a student to act?" The youth, a round-faced lad with broad shoulders, grinned. "Heck, no," he said. "Say, uh, Mr. Karsh, are we gonna get to go to town? We fellas were thinking—" "You were, hah? You act like a bunch of school kids! I mean ... no! Now line up!" "We have quarters ready for the students," Retief said. "If you'd like to bring them around to the west side, I have a couple of copters laid on." "Thanks," said Karsh. "They'll stay here until take-off time. Can't have the little dears wandering around loose. Might get ideas about going over the hill." He hiccupped. "I mean they might play hookey." "We've scheduled your re-embarkation for noon tomorrow. That's a long wait. MUDDLE's arranged theater tickets and a dinner." "Sorry," Karsh said. "As soon as the baggage gets here, we're off." He hiccupped again. "Can't travel without our baggage, y'know." "Suit yourself," Retief said. "Where's the baggage now?" "Coming in aboard a Croanie lighter." 
"Maybe you'd like to arrange for a meal for the students here." "Sure," Karsh said. "That's a good idea. Why don't you join us?" Karsh winked. "And bring a few beers." "Not this time," Retief said. He watched the students, still emerging from Customs. "They seem to be all boys," he commented. "No female students?" "Maybe later," Karsh said. "You know, after we see how the first bunch is received." Back at the MUDDLE office, Retief buzzed Miss Furkle. "Do you know the name of the institution these Bogan students are bound for?" "Why, the University at d'Land, of course." "Would that be the Technical College?" Miss Furkle's mouth puckered. "I'm sure I've never pried into these details." "Where does doing your job stop and prying begin, Miss Furkle?" Retief said. "Personally, I'm curious as to just what it is these students are travelling so far to study—at Corps expense." "Mr. Magnan never—" "For the present. Miss Furkle, Mr. Magnan is vacationing. That leaves me with the question of two thousand young male students headed for a world with no classrooms for them ... a world in need of tractors. But the tractors are on their way to Croanie, a world under obligation to Boge. And Croanie holds a mortgage on the best grape acreage on Lovenbroy." "Well!" Miss Furkle snapped, small eyes glaring under unplucked brows. "I hope you're not questioning Mr. Magnan's wisdom!" "About Mr. Magnan's wisdom there can be no question," Retief said. "But never mind. I'd like you to look up an item for me. How many tractors will Croanie be getting under the MEDDLE program?" "Why, that's entirely MEDDLE business," Miss Furkle said. "Mr. Magnan always—" "I'm sure he did. Let me know about the tractors as soon as you can." Miss Furkle sniffed and disappeared from the screen. Retief left the office, descended forty-one stories, followed a corridor to the Corps Library. In the stacks he thumbed through catalogues, pored over indices. "Can I help you?" someone chirped. A tiny librarian stood at his elbow. "Thank you, ma'am," Retief said. "I'm looking for information on a mining rig. A Bolo model WV tractor." "You won't find it in the industrial section," the librarian said. "Come along." Retief followed her along the stacks to a well-lit section lettered ARMAMENTS. She took a tape from the shelf, plugged it into the viewer, flipped through and stopped at a squat armored vehicle. "That's the model WV," she said. "It's what is known as a continental siege unit. It carries four men, with a half-megaton/second firepower." "There must be an error somewhere," Retief said. "The Bolo model I want is a tractor. Model WV M-1—" "Oh, the modification was the addition of a bulldozer blade for demolition work. That must be what confused you." "Probably—among other things. Thank you." Miss Furkle was waiting at the office. "I have the information you wanted," she said. "I've had it for over ten minutes. I was under the impression you needed it urgently, and I went to great lengths—" "Sure," Retief said. "Shoot. How many tractors?" "Five hundred." "Are you sure?" Miss Furkle's chins quivered. "Well! If you feel I'm incompetent—" "Just questioning the possibility of a mistake, Miss Furkle. Five hundred tractors is a lot of equipment." "Was there anything further?" Miss Furkle inquired frigidly. "I sincerely hope not," Retief said. III Leaning back in Magnan's padded chair with power swivel and hip-u-matic concontour, Retief leafed through a folder labelled "CERP 7-602-Ba; CROANIE (general)." He paused at a page headed Industry. 
Still reading, he opened the desk drawer, took out the two bottles of Bacchus wine and two glasses. He poured an inch of wine into each and sipped the black wine meditatively. It would be a pity, he reflected, if anything should interfere with the production of such vintages.... Half an hour later he laid the folder aside, keyed the phone and put through a call to the Croanie Legation. He asked for the Commercial Attache. "Retief here, Corps HQ," he said airily. "About the MEDDLE shipment, the tractors. I'm wondering if there's been a slip up. My records show we're shipping five hundred units...." "That's correct. Five hundred." Retief waited. "Ah ... are you there, Retief?" "I'm still here. And I'm still wondering about the five hundred tractors." "It's perfectly in order. I thought it was all settled. Mr. Whaffle—" "One unit would require a good-sized plant to handle its output," Retief said. "Now Croanie subsists on her fisheries. She has perhaps half a dozen pint-sized processing plants. Maybe, in a bind, they could handle the ore ten WV's could scrape up ... if Croanie had any ore. It doesn't. By the way, isn't a WV a poor choice as a mining outfit? I should think—" "See here, Retief! Why all this interest in a few surplus tractors? And in any event, what business is it of yours how we plan to use the equipment? That's an internal affair of my government. Mr. Whaffle—" "I'm not Mr. Whaffle. What are you going to do with the other four hundred and ninety tractors?" "I understood the grant was to be with no strings attached!" "I know it's bad manners to ask questions. It's an old diplomatic tradition that any time you can get anybody to accept anything as a gift, you've scored points in the game. But if Croanie has some scheme cooking—" "Nothing like that, Retief. It's a mere business transaction." "What kind of business do you do with a Bolo WV? With or without a blade attached, it's what's known as a continental siege unit." "Great Heavens, Retief! Don't jump to conclusions! Would you have us branded as warmongers? Frankly—is this a closed line?" "Certainly. You may speak freely." "The tractors are for transshipment. We've gotten ourselves into a difficult situation, balance-of-payments-wise. This is an accommodation to a group with which we have rather strong business ties." "I understand you hold a mortgage on the best land on Lovenbroy," Retief said. "Any connection?" "Why ... ah ... no. Of course not, ha ha." "Who gets the tractors eventually?" "Retief, this is unwarranted interference!" "Who gets them?" "They happen to be going to Lovenbroy. But I scarcely see—" "And who's the friend you're helping out with an unauthorized transshipment of grant material?" "Why ... ah ... I've been working with a Mr. Gulver, a Bogan representative." "And when will they be shipped?" "Why, they went out a week ago. They'll be half way there by now. But look here, Retief, this isn't what you're thinking!" "How do you know what I'm thinking? I don't know myself." Retief rang off, buzzed the secretary. "Miss Furkle, I'd like to be notified immediately of any new applications that might come in from the Bogan Consulate for placement of students." "Well, it happens, by coincidence, that I have an application here now. Mr. Gulver of the Consulate brought it in." "Is Mr. Gulver in the office? I'd like to see him." "I'll ask him if he has time." "Great. Thanks." It was half a minute before a thick-necked red-faced man in a tight hat walked in. 
He wore an old-fashioned suit, a drab shirt, shiny shoes with round toes and an ill-tempered expression. "What is it you wish?" he barked. "I understood in my discussions with the other ... ah ... civilian there'd be no further need for these irritating conferences." "I've just learned you're placing more students abroad, Mr. Gulver. How many this time?" "Two thousand." "And where will they be going?" "Croanie. It's all in the application form I've handed in. Your job is to provide transportation." "Will there be any other students embarking this season?" "Why ... perhaps. That's Boge's business." Gulver looked at Retief with pursed lips. "As a matter of fact, we had in mind dispatching another two thousand to Featherweight." "Another under-populated world—and in the same cluster, I believe," Retief said. "Your people must be unusually interested in that region of space." "If that's all you wanted to know, I'll be on my way. I have matters of importance to see to." After Gulver left, Retief called Miss Furkle in. "I'd like to have a break-out of all the student movements that have been planned under the present program," he said. "And see if you can get a summary of what MEDDLE has been shipping lately." Miss Furkle compressed her lips. "If Mr. Magnan were here, I'm sure he wouldn't dream of interfering in the work of other departments. I ... overheard your conversation with the gentleman from the Croanie Legation—" "The lists, Miss Furkle." "I'm not accustomed," Miss Furkle said, "to intruding in matters outside our interest cluster." "That's worse than listening in on phone conversations, eh? But never mind. I need the information, Miss Furkle." "Loyalty to my Chief—" "Loyalty to your pay-check should send you scuttling for the material I've asked for," Retief said. "I'm taking full responsibility. Now scat." The buzzer sounded. Retief flipped a key. "MUDDLE, Retief speaking...." Arapoulous's brown face appeared on the desk screen. "How-do, Retief. Okay if I come up?" "Sure, Hank. I want to talk to you." In the office, Arapoulous took a chair. "Sorry if I'm rushing you, Retief," he said. "But have you got anything for me?" Retief waved at the wine bottles. "What do you know about Croanie?" "Croanie? Not much of a place. Mostly ocean. All right if you like fish, I guess. We import our seafood from there. Nice prawns in monsoon time. Over a foot long." "You on good terms with them?" "Sure, I guess so. Course, they're pretty thick with Boge." "So?" "Didn't I tell you? Boge was the bunch that tried to take us over here a dozen years back. They'd've made it too, if they hadn't had a lot of bad luck. Their armor went in the drink, and without armor they're easy game." Miss Furkle buzzed. "I have your lists," she said shortly. "Bring them in, please." The secretary placed the papers on the desk. Arapoulous caught her eye and grinned. She sniffed and marched from the room. "What that gal needs is a slippery time in the grape mash," Arapoulous observed. Retief thumbed through the papers, pausing to read from time to time. He finished and looked at Arapoulous. "How many men do you need for the harvest, Hank?" Retief inquired. Arapoulous sniffed his wine glass and looked thoughtful. "A hundred would help," he said. "A thousand would be better. Cheers." "What would you say to two thousand?" "Two thousand? Retief, you're not fooling?" "I hope not." He picked up the phone, called the Port Authority, asked for the dispatch clerk. "Hello, Jim. Say, I have a favor to ask of you. 
You know that contingent of Bogan students. They're traveling aboard the two CDT transports. I'm interested in the baggage that goes with the students. Has it arrived yet? Okay, I'll wait." Jim came back to the phone. "Yeah, Retief, it's here. Just arrived. But there's a funny thing. It's not consigned to d'Land. It's ticketed clear through to Lovenbroy." "Listen, Jim," Retief said. "I want you to go over to the warehouse and take a look at that baggage for me." Retief waited while the dispatch clerk carried out the errand. The level in the two bottles had gone down an inch when Jim returned to the phone. "Hey, I took a look at that baggage, Retief. Something funny going on. Guns. 2mm needlers, Mark XII hand blasters, power pistols—" "It's okay, Jim. Nothing to worry about. Just a mix-up. Now, Jim, I'm going to ask you to do something more for me. I'm covering for a friend. It seems he slipped up. I wouldn't want word to get out, you understand. I'll send along a written change order in the morning that will cover you officially. Meanwhile, here's what I want you to do...." Retief gave instructions, then rang off and turned to Arapoulous. "As soon as I get off a couple of TWX's, I think we'd better get down to the port, Hank. I think I'd like to see the students off personally."
B. Suspicion indicates deception.
Why does Thaddeus Funston smile at the sight of the demolished arts and crafts building? A. His prophecy of an alien invasion was fulfilled B. He is gleeful at the idea of part of the mental hospital being destroyed C. His self-constructed clay atom bomb was effectively detonated D. He knows the explosion will distract the hospital staff and give him an opportunity to escape
Transcriber's Note: This etext was produced from Astounding Science Fiction November 1959. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. A FILBERT IS A NUT BY RICK RAPHAEL That the gentleman in question was a nut was beyond question. He was an institutionalized psychotic. He was nutty enough to think he could make an atom bomb out of modeling clay! Illustrated by Freas Miss Abercrombie, the manual therapist patted the old man on the shoulder. "You're doing just fine, Mr. Lieberman. Show it to me when you have finished." The oldster in the stained convalescent suit gave her a quick, shy smile and went back to his aimless smearing in the finger paints. Miss Abercrombie smoothed her smock down over trim hips and surveyed the other patients working at the long tables in the hospital's arts and crafts shop. Two muscular and bored attendants in spotless whites, lounged beside the locked door and chatted idly about the Dodgers' prospects for the pennant. Through the barred windows of the workshop, rolling green hills were seen, their tree-studded flanks making a pleasant setting for the mental institution. The crafts building was a good mile away from the main buildings of the hospital and the hills blocked the view of the austere complex of buildings that housed the main wards. The therapist strolled down the line of tables, pausing to give a word of advice here, and a suggestion there. She stopped behind a frowning, intense patient, rapidly shaping blobs of clay into odd-sized strips and forms. As he finished each piece, he carefully placed it into a hollow shell hemisphere of clay. "And what are we making today, Mr. Funston?" Miss Abercrombie asked. The flying fingers continued to whip out the bits of shaped clay as the patient ignored the question. He hunched closer to his table as if to draw away from the woman. "We mustn't be antisocial, Mr. Funston," Miss Abercrombie said lightly, but firmly. "You've been coming along famously and you must remember to answer when someone talks to you. Now what are you making? It looks very complicated." She stared professionally at the maze of clay parts. Thaddeus Funston continued to mold the clay bits and put them in place. Without looking up from his bench he muttered a reply. "Atom bomb." A puzzled look crossed the therapist's face. "Pardon me, Mr. Funston. I thought you said an 'atom bomb.'" "Did," Funston murmured. Safely behind the patient's back, Miss Abercrombie smiled ever so slightly. "Why that's very good, Mr. Funston. That shows real creative thought. I'm very pleased." She patted him on the shoulder and moved down the line of patients. A few minutes later, one of the attendants glanced at his watch, stood up and stretched. "All right, fellows," he called out, "time to go back. Put up your things." There was a rustle of paint boxes and papers being shuffled and chairs being moved back. A tall, blond patient with a flowing mustache, put one more dab of paint on his canvas and stood back to survey the meaningless smears. He sighed happily and laid down his palette. At the clay table, Funston feverishly fabricated the last odd-shaped bit of clay and slapped it into place. With a furtive glance around him, he clapped the other half of the clay sphere over the filled hemisphere and then stood up. The patients lined up at the door, waiting for the walk back across the green hills to the main hospital. The attendants made a quick count and then unlocked the door. 
The group shuffled out into the warm, afternoon sunlight and the door closed behind them. Miss Abercrombie gazed around the cluttered room and picked up her chart book of patient progress. Moving slowly down the line of benches, she made short, precise notes on the day's work accomplished by each patient. At the clay table, she carefully lifted the top half of the clay ball and stared thoughtfully at the jumbled maze of clay strips laced through the lower hemisphere. She placed the lid back in place and jotted lengthily in her chart book. When she had completed her rounds, she slipped out of the smock, tucked the chart book under her arm and left the crafts building for the day. The late afternoon sun felt warm and comfortable as she walked the mile to the main administration building where her car was parked. As she drove out of the hospital grounds, Thaddeus Funston stood at the barred window of his locked ward and stared vacantly over the hills towards the craft shop. He stood there unmoving until a ward attendant came and took his arm an hour later to lead him off to the patients' mess hall. The sun set, darkness fell over the stilled hospital grounds and the ward lights winked out at nine o'clock, leaving just a single light burning in each ward office. A quiet wind sighed over the still-warm hills. At 3:01 a.m., Thaddeus Funston stirred in his sleep and awakened. He sat up in bed and looked around the dark ward. The quiet breathing and occasional snores of thirty other sleeping patients filled the room. Funston turned to the window and stared out across the black hills that sheltered the deserted crafts building. He gave a quick cry, shut his eyes and clapped his hands over his face. The brilliance of a hundred suns glared in the night and threw stark shadows on the walls of the suddenly-illuminated ward. An instant later, the shattering roar and blast of the explosion struck the hospital buildings in a wave of force and the bursting crash of a thousand windows was lost in the fury of the explosion and the wild screams of the frightened and demented patients. It was over in an instant, and a stunned moment later, recessed ceiling lights began flashing on throughout the big institution. Beyond the again-silent hills, a great pillar of smoke, topped by a small mushroom-shaped cloud, rose above the gaping hole that had been the arts and crafts building. Thaddeus Funston took his hands from his face and lay back in his bed with a small, secret smile on his lips. Attendants and nurses scurried through the hospital, seeing how many had been injured in the explosion. None had. The hills had absorbed most of the shock and apart from a welter of broken glass, the damage had been surprisingly slight. The roar and flash of the explosion had lighted and rocked the surrounding countryside. Soon firemen and civil defense disaster units from a half-dozen neighboring communities had gathered at the still-smoking hole that marked the site of the vanished crafts building. Within fifteen minutes, the disaster-trained crews had detected heavy radiation emanating from the crater and there was a scurry of men and equipment back to a safe distance, a few hundred yards away. At 5:30 a.m., a plane landed at a nearby airfield and a platoon of Atomic Energy Commission experts, military intelligence men, four FBI agents and an Army full colonel disembarked. At 5:45 a.m. a cordon was thrown around both the hospital and the blast crater. In Ward 4-C, Thaddeus Funston slept peacefully and happily. 
"It's impossible and unbelievable," Colonel Thomas Thurgood said for the fifteenth time, later that morning, as he looked around the group of experts gathered in the tent erected on the hill overlooking the crater. "How can an atom bomb go off in a nut house?" "It apparently was a very small bomb, colonel," one of the haggard AEC men offered timidly. "Not over three kilotons." "I don't care if it was the size of a peanut," Thurgood screamed. "How did it get here?" A military intelligence agent spoke up. "If we knew, sir, we wouldn't be standing around here. We don't know, but the fact remains that it WAS an atomic explosion." Thurgood turned wearily to the small, white-haired man at his side. "Let's go over it once more, Dr. Crane. Are you sure you knew everything that was in that building?" Thurgood swept his hand in the general direction of the blast crater. "Colonel, I've told you a dozen times," the hospital administrator said with exasperation, "this was our manual therapy room. We gave our patients art work. It was a means of getting out of their systems, through the use of their hands, some of the frustrations and problems that led them to this hospital. They worked with oil and water paints and clay. If you can make an atomic bomb from vermillion pigments, then Madame Curie was a misguided scrubwoman." "All I know is that you say this was a crafts building. O.K. So it was," Thurgood sighed. "I also know that an atomic explosion at 3:02 this morning blew it to hell and gone. "And I've got to find out how it happened." Thurgood slumped into a field chair and gazed tiredly up at the little doctor. "Where's that girl you said was in charge of this place?" "We've already called for Miss Abercrombie and she's on her way here now," the doctor snapped. Outside the tent, a small army of military men and AEC technicians moved around the perimeter of the crater, scintillators in hand, examining every tiny scrap that might have been a part of the building at one time. A jeep raced down the road from the hospital and drew up in front of the tent. An armed MP helped Miss Abercrombie from the vehicle. She walked to the edge of the hill and looked down with a stunned expression. "He did make an atom bomb," she cried. Colonel Thurgood, who had snapped from his chair at her words, leaped forward to catch her as she collapsed in a faint. At 4:00 p.m., the argument was still raging in the long, narrow staff room of the hospital administration building. Colonel Thurgood, looking more like a patient every minute, sat on the edge of his chair at the head of a long table and pounded with his fist on the wooden surface, making Miss Abercrombie's chart book bounce with every beat. "It's ridiculous," Thurgood roared. "We'll all be the laughingstocks of the world if this ever gets out. An atomic bomb made out of clay. You are all nuts. You're in the right place, but count me out." At his left, Miss Abercrombie cringed deeper into her chair at the broadside. Down both sides of the long table, psychiatrists, physicists, strategists and radiologists sat in various stages of nerve-shattered weariness. "Miss Abercrombie," one of the physicists spoke up gently, "you say that after the patients had departed the building, you looked again at Funston's work?" The therapist nodded unhappily. "And you say that, to the best of your knowledge," the physicist continued, "there was nothing inside the ball but other pieces of clay." "I'm positive that's all there was in it," Miss Abercrombie cried. 
There was a renewed buzz of conversation at the table and the senior AEC man present got heads together with the senior intelligence man. They conferred briefly and then the intelligence officer spoke. "That seems to settle it, colonel. We've got to give this Funston another chance to repeat his bomb. But this time under our supervision." Thurgood leaped to his feet, his face purpling. "Are you crazy?" he screamed. "You want to get us all thrown into this filbert factory? Do you know what the newspapers would do to us if they ever got wind of the fact, that for one, tiny fraction of a second, anyone of us here entertained the notion that a paranoidal idiot with the IQ of an ape could make an atomic bomb out of kid's modeling clay? "They'd crucify us, that's what they'd do!" At 8:30 that night, Thaddeus Funston, swathed in an Army officer's greatcoat that concealed the strait jacket binding him and with an officer's cap jammed far down over his face, was hustled out of a small side door of the hospital and into a waiting staff car. A few minutes later, the car pulled into the flying field at the nearby community and drove directly to the military transport plane that stood at the end of the runway with propellers turning. Two military policemen and a brace of staff psychiatrists sworn to secrecy under the National Atomic Secrets Act, bundled Thaddeus aboard the plane. They plopped him into a seat directly in front of Miss Abercrombie and with a roar, the plane raced down the runway and into the night skies. The plane landed the next morning at the AEC's atomic testing grounds in the Nevada desert and two hours later, in a small hot, wooden shack miles up the barren desert wastelands, a cluster of scientists and military men huddled around a small wooden table. There was nothing on the table but a bowl of water and a great lump of modeling clay. While the psychiatrists were taking the strait jacket off Thaddeus in the staff car outside, Colonel Thurgood spoke to the weary Miss Abercrombie. "Now you're positive this is just about the same amount and the same kind of clay he used before?" "I brought it along from the same batch we had in the store room at the hospital," she replied, "and it's the same amount." Thurgood signaled to the doctors and they entered the shack with Thaddeus Funston between them. The colonel nudged Miss Abercrombie. She smiled at Funston. "Now isn't this nice, Mr. Funston," she said. "These nice men have brought us way out here just to see you make another atom bomb like the one you made for me yesterday." A flicker of interest lightened Thaddeus' face. He looked around the shack and then spotted the clay on the table. Without hesitation, he walked to the table and sat down. His fingers began working the damp clay, making first the hollow, half-round shell while the nation's top atomic scientists watched in fascination. His busy fingers flew through the clay, shaping odd, flat bits and clay parts that were dropped almost aimlessly into the open hemisphere in front of him. Miss Abercrombie stood at his shoulder as Thaddeus hunched over the table just as he had done the previous day. From time to time she glanced at her watch. The maze of clay strips grew and as Funston finished shaping the other half hemisphere of clay, she broke the tense silence. "Time to go back now, Mr. Funston. You can work some more tomorrow." She looked at the men and nodded her head. The two psychiatrists went to Thaddeus' side as he put the upper lid of clay carefully in place. 
Funston stood up and the doctors escorted him from the shack. There was a moment of hushed silence and then pandemonium burst. The experts converged on the clay ball, instruments blossoming from nowhere and cameras clicking. For two hours they studied and gently probed the mass of child's clay and photographed it from every angle. Then they left for the concrete observatory bunker, several miles down range where Thaddeus and the psychiatrists waited inside a ring of stony-faced military policemen. "I told you this whole thing was asinine," Thurgood snarled as the scientific teams trooped into the bunker. Thaddeus Funston stared out over the heads of the MPs through the open door, looking uprange over the heat-shimmering desert. He gave a sudden cry, shut his eyes and clapped his hands over his face. A brilliance a hundred times brighter than the glaring Nevada sun lit the dim interior of the bunker and the pneumatically-operated door slammed shut just before the wave of the blast hit the structure. Six hours and a jet plane trip later, Thaddeus, once again in his strait jacket, sat between his armed escorts in a small room in the Pentagon. Through the window he could see the hurried bustle of traffic over the Potomac and beyond, the domed roof of the Capitol. In the conference room next door, the joint chiefs of staff were closeted with a gray-faced and bone-weary Colonel Thurgood and his baker's dozen of AEC brains. Scraps of the hot and scornful talk drifted across a half-opened transom into the room where Thaddeus Funston sat in a neatly-tied bundle. In the conference room, a red-faced, four-star general cast a chilling glance at the rumpled figure of Colonel Thurgood. "I've listened to some silly stories in my life, colonel," the general said coldly, "but this takes the cake. You come in here with an insane asylum inmate in a strait jacket and you have the colossal gall to sit there and tell me that this poor soul has made not one, but two atomic devices out of modeling clay and then has detonated them." The general paused. "Why don't you just tell me, colonel, that he can also make spaceships out of sponge rubber?" the general added bitingly. In the next room, Thaddeus Funston stared out over the sweeping panorama of the Washington landscape. He stared hard. In the distance, a white cloud began billowing up from the base of the Washington Monument, and with an ear-shattering, glass-splintering roar, the great shaft rose majestically from its base and vanished into space on a tail of flame. THE END
C. His self-constructed clay atom bomb was effectively detonated
What is the symbolism of the title? A. The monkey represents the series of false memories implanted in Zarwell's mind B. The monkey represents Zarwell's affliction with ennui after becoming a civilian and living a more mundane existence C. The monkey represents Dr. Bergstrom's manipulative influence on Zarwell's psyche D. The monkey represents Zarwell's pattern of joining resistance movements, only to watch them turn corrupt
Transcriber’s note: This story was published in Galaxy magazine, June 1960. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. [p 135 ] By CHARLES V. DE VET monkey on his back Under the cloud of cast-off identities lay the shape of another man— was it himself? Illustrated by DILLON HE was walking endlessly down a long, glass-walled corridor. Bright sunlight slanted in through one wall, on the blue knapsack across his shoulders. Who he was, and what he was doing here, was clouded. The truth lurked in some corner of his consciousness, but it was not reached by surface awareness. The corridor opened at last into a large high-domed room, much like a railway station or an air terminal. He walked straight ahead. At the sight of him a man leaning negligently against a stone pillar, to his right but within vision, straightened and barked an order to him, “Halt!” He lengthened his stride but gave no other sign. [p 136 ] Two men hurried through a doorway of a small anteroom to his left, calling to him. He turned away and began to run. Shouts and the sound of charging feet came from behind him. He cut to the right, running toward the escalator to the second floor. Another pair of men were hurrying down, two steps at a stride. With no break in pace he veered into an opening beside the escalator. At the first turn he saw that the aisle merely circled the stairway, coming out into the depot again on the other side. It was a trap. He glanced quickly around him. At the rear of the space was a row of lockers for traveler use. He slipped a coin into a pay slot, opened the zipper on his bag and pulled out a flat briefcase. It took him only a few seconds to push the case into the compartment, lock it and slide the key along the floor beneath the locker. There was nothing to do after that—except wait. The men pursuing him came hurtling around the turn in the aisle. He kicked his knapsack to one side, spreading his feet wide with an instinctive motion. Until that instant he had intended to fight. Now he swiftly reassessed the odds. There were five of them, he saw. He should be able to incapacitate two or three and break out. But the fact that they had been expecting him meant that others would very probably be waiting outside. His best course now was to sham ignorance. He relaxed. He offered no resistance as they reached him. They were not gentle men. A tall ruffian, copper-brown face damp with perspiration and body oil, grabbed him by the jacket and slammed him back against the lockers. As he shifted his weight to keep his footing someone drove a fist into his face. He started to raise his hands; and a hard flat object crashed against the side of his skull. The starch went out of his legs. “D O you make anything out of it?” the psychoanalyst Milton Bergstrom, asked. John Zarwell shook his head. “Did I talk while I was under?” “Oh, yes. You were supposed to. That way I follow pretty well what you’re reenacting.” “How does it tie in with what I told you before?” Bergstrom’s neat-boned, fair-skinned face betrayed no emotion other than an introspective stillness of his normally alert gaze. “I see no connection,” he decided, his words once again precise and meticulous. “We don’t have enough to go on. Do you feel able to try another comanalysis this afternoon yet?” “I don’t see why not.” Zarwell [p 137 ] opened the collar of his shirt. The day was hot, and the room had no air conditioning, still a rare luxury on St. Martin’s. 
The office window was open, but it let in no freshness, only the mildly rank odor that pervaded all the planet’s habitable area. “Good.” Bergstrom rose. “The serum is quite harmless, John.” He maintained a professional diversionary chatter as he administered the drug. “A scopolamine derivative that’s been well tested.” The floor beneath Zarwell’s feet assumed abruptly the near transfluent consistency of a damp sponge. It rose in a foot-high wave and rolled gently toward the far wall. Bergstrom continued talking, with practiced urbanity. “When psychiatry was a less exact science,” his voice went on, seeming to come from a great distance, “a doctor had to spend weeks, sometimes months or years interviewing a patient. If he was skilled enough, he could sort the relevancies from the vast amount of chaff. We are able now, with the help of the serum, to confine our discourses to matters cogent to the patient’s trouble.” The floor continued its transmutation, and Zarwell sank deep into viscous depths. “Lie back and relax. Don’t …” The words tumbled down from above. They faded, were gone. ZARWELL found himself standing on a vast plain. There was no sky above, and no horizon in the distance. He was in a place without space or dimension. There was nothing here except himself—and the gun that he held in his hand. A weapon beautiful in its efficient simplicity. He should know all about the instrument, its purpose and workings, but he could not bring his thoughts into rational focus. His forehead creased with his mental effort. Abruptly the unreality about him shifted perspective. He was approaching—not walking, but merely shortening the space between them—the man who held the gun. The man who was himself. The other “himself” drifted nearer also, as though drawn by a mutual attraction. The man with the gun raised his weapon and pressed the trigger. With the action the perspective shifted again. He was watching the face of the man he shot jerk and twitch, expand and contract. The face was unharmed, yet it was no longer the same. No longer his own features. The stranger face smiled approvingly at him. “O DD,” Bergstrom said. He brought his hands up and joined the tips of his fingers against his chest. “But it’s another piece in the [p 138 ] jig-saw. In time it will fit into place.” He paused. “It means no more to you than the first, I suppose?” “No,” Zarwell answered. He was not a talking man, Bergstrom reflected. It was more than reticence, however. The man had a hard granite core, only partially concealed by his present perplexity. He was a man who could handle himself well in an emergency. Bergstrom shrugged, dismissing his strayed thoughts. “I expected as much. A quite normal first phase of treatment.” He straightened a paper on his desk. “I think that will be enough for today. Twice in one sitting is about all we ever try. Otherwise some particular episode might cause undue mental stress, and set up a block.” He glanced down at his appointment pad. “Tomorrow at two, then?” Zarwell grunted acknowledgment and pushed himself to his feet, apparently unaware that his shirt clung damply to his body. THE sun was still high when Zarwell left the analyst’s office. The white marble of the city’s buildings shimmered in the afternoon heat, squat and austere as giant tree trunks, pock-marked and gray-mottled with windows. Zarwell was careful not to rest his hand on the flesh searing surface of the stone. The evening meal hour was approaching when he reached the Flats, on the way to his apartment. 
The streets of the old section were near-deserted. The only sounds he heard as he passed were the occasional cry of a baby, chronically uncomfortable in the day’s heat, and the lowing of imported cattle waiting in a nearby shed to be shipped to the country. All St. Martin’s has a distinctive smell, as of an arid dried-out swamp, with a faint taint of fish. But in the Flats the odor changes. Here is the smell of factories, warehouses, and trading marts; the smell of stale cooking drifting from the homes of the laborers and lower class techmen who live there. Zarwell passed a group of smaller children playing a desultory game of lic-lic for pieces of candy and cigarettes. Slowly he climbed the stairs of a stone flat. He prepared a supper for himself and ate it without either enjoyment or distaste. He lay down, fully clothed, on his bed. The visit to the analyst had done nothing to dispel his ennui. [p 139 ] The next morning when Zarwell awoke he lay for a moment, unmoving. The feeling was there again, like a scene waiting only to be gazed at directly to be perceived. It was as though a great wisdom lay at the edge of understanding. If he rested quietly it would all come to him. Yet always, when his mind lost its sleep-induced [p 140 ] lethargy, the moment of near understanding slipped away. This morning, however, the sense of disorientation did not pass with full wakefulness. He achieved no understanding, but the strangeness did not leave as he sat up. He gazed about him. The room did not seem to be his own. The furnishings, and the clothing he observed in a closet, might have belonged to a stranger. He pulled himself from his blankets, his body moving with mechanical reaction. The slippers into which he put his feet were larger than he had expected them to be. He walked about the small apartment. The place was familiar, but only as it would have been if he had studied it from blueprints, not as though he lived there. The feeling was still with him when he returned to the psychoanalyst. THE scene this time was more kaleidoscopic, less personal. A village was being ravaged. Men struggled and died in the streets. Zarwell moved among them, seldom taking part in the individual clashes, yet a moving force in the conflict . The background changed. He understood that he was on a different world. Here a city burned. Its resistance was nearing its end. Zarwell was riding a shaggy pony outside a high wall surrounding the stricken metropolis. He moved in and joined a party of short, bearded men, directing them as they battered at the wall with a huge log mounted on a many-wheeled truck. The log broke a breach in the concrete and the besiegers charged through, carrying back the defenders who sought vainly to plug the gap. Soon there would be rioting in the streets again, plundering and killing. Zarwell was not the leader of the invaders, only a lesser figure in the rebellion. But he had played a leading part in the planning of the strategy that led to the city’s fall. The job had been well done. Time passed, without visible break in the panorama. Now Zarwell was fleeing, pursued by the same bearded men who had been his comrades before. Still he moved with the same firm purpose, vigilant, resourceful, and well prepared for the eventuality that had befallen. He made his escape without difficulty. He alighted from a space ship on still another world—another shift in time—and the atmosphere of conflict engulfed him. 
Weary but resigned he accepted it, and did what he had to do … BERGSTROM was regarding him with speculative scrutiny. “You’ve had quite a past, apparently,” he observed. [p 141 ] Zarwell smiled with mild embarrassment. “At least in my dreams.” “Dreams?” Bergstrom’s eyes widened in surprise. “Oh, I beg your pardon. I must have forgotten to explain. This work is so routine to me that sometimes I forget it’s all new to a patient. Actually what you experienced under the drug were not dreams. They were recollections of real episodes from your past.” Zarwell’s expression became wary. He watched Bergstrom closely. After a minute, however, he seemed satisfied, and he let himself settle back against the cushion of his chair. “I remember nothing of what I saw,” he observed. “That’s why you’re here, you know,” Bergstrom answered. “To help you remember.” “But everything under the drug is so …” “Haphazard? That’s true. The recall episodes are always purely random, with no chronological sequence. Our problem will be to reassemble them in proper order later. Or some particular scene may trigger a complete memory return. “It is my considered opinion,” Bergstrom went on, “that your lost memory will turn out to be no ordinary amnesia. I believe we will find that your mind has been tampered with.” “Nothing I’ve seen under the drug fits into the past I do remember.” “That’s what makes me so certain,” Bergstrom said confidently. “You don’t remember what we have shown to be true. Conversely then, what you think you remember must be false. It must have been implanted there. But we can go into that later. For today I think we have done enough. This episode was quite prolonged.” “I won’t have any time off again until next week end,” Zarwell reminded him. “That’s right.” Bergstrom thought for a moment. “We shouldn’t let this hang too long. Could you come here after work tomorrow?” “I suppose I could.” “Fine,” Bergstrom said with satisfaction. “I’ll admit I’m considerably more than casually interested in your case by this time.” A WORK truck picked Zarwell up the next morning and he rode with a tech crew to the edge of the reclam area. Beside the belt bringing ocean muck from the converter plant at the seashore his bulldozer was waiting. He took his place behind the drive wheel and began working dirt down between windbreakers anchored in the rock. Along a makeshift road into the badlands trucks brought crushed lime and phosphorus to supplement the ocean sediment. The progress of life from the sea to the land was a mechanical [p 142 ] process of this growing world. Nearly two hundred years ago, when Earth established a colony on St. Martin’s, the land surface of the planet had been barren. Only its seas thrived with animal and vegetable life. The necessary machinery and technicians had been supplied by Earth, and the long struggle began to fit the world for human needs. When Zarwell arrived, six months before, the vitalized area already extended three hundred miles along the coast, and sixty miles inland. And every day the progress continued. A large percentage of the energy and resources of the world were devoted to that essential expansion. The reclam crews filled and sodded the sterile rock, planted binding grasses, grain and trees, and diverted rivers to keep it fertile. When there were no rivers to divert they blasted out springs and lakes in the foothills to make their own. Biologists developed the necessary germ and insect life from what they found in the sea. 
Where that failed, they imported microorganisms from Earth. Three rubber-tracked crawlers picked their way down from the mountains until they joined the road passing the belt. They were loaded with ore that would be smelted into metal for depleted Earth, or for other colonies short of minerals. It was St. Martin’s only export thus far. Zarwell pulled his sun helmet lower, to better guard his hot, dry features. The wind blew continuously on St. Martin’s, but it furnished small relief from the heat. After its three-thousand-mile journey across scorched sterile rock, it sucked the moisture from a man’s body, bringing a membrane-shrinking dryness to the nostrils as it was breathed in. With it came also the cloying taste of limestone in a worker’s mouth. Zarwell gazed idly about at the other laborers. Fully three-quarters of them were beri-rabza ridden. A cure for the skin fungus had not yet been found; the men’s faces and hands were scabbed and red. The colony had grown to near self-sufficiency, would soon have a moderate prosperity, yet they still lacked adequate medical and research facilities. Not all the world’s citizens were content. Bergstrom was waiting in his office when Zarwell arrived that evening. HE was lying motionless on a hard cot, with his eyes closed, yet with his every sense sharply quickened. Tentatively he tightened small muscles in his arms and legs. Across his wrists and thighs he felt straps binding him to the cot. “So that’s our big, bad man,” a coarse voice above him observed [p 143 ] caustically. “He doesn’t look so tough now, does he?” “It might have been better to kill him right away,” a second, less confident voice said. “It’s supposed to be impossible to hold him.” “Don’t be stupid. We just do what we’re told. We’ll hold him.” “What do you think they’ll do with him?” “Execute him, I suppose,” the harsh voice said matter-of-factly. “They’re probably just curious to see what he looks like first. They’ll be disappointed.” Zarwell opened his eyes a slit to observe his surroundings. It was a mistake. “He’s out of it,” the first speaker said, and Zarwell allowed his eyes to open fully. The voice, he saw, belonged to the big man who had bruised him against the locker at the spaceport. Irrelevantly he wondered how he knew now that it had been a spaceport. His captor’s broad face jeered down at Zarwell. “Have a good sleep?” he asked with mock solicitude. Zarwell did not deign to acknowledge that he heard. The big man turned. “You can tell the Chief he’s awake,” he said. Zarwell followed his gaze to where a younger man, with a blond lock of hair on his forehead, stood behind him. The youth nodded and went out, while the other pulled a chair up to the side of Zarwell’s cot. While their attention was away from him Zarwell had unobtrusively loosened his bonds as much as possible with arm leverage. As the big man drew his chair nearer, he made the hand farthest from him tight and compact and worked it free of the leather loop. He waited. The big man belched. “You’re supposed to be great stuff in a situation like this,” he said, his smoke-tan face splitting in a grin that revealed large square teeth. “How about giving me a sample?” “You’re a yellow-livered bastard,” Zarwell told him. The grin faded from the oily face as the man stood up. He leaned over the cot—and Zarwell’s left hand shot up and locked about his throat, joined almost immediately by the right. The man’s mouth opened and he tried to yell as he threw himself frantically backward. 
He clawed at the hands about his neck. When that failed to break the grip he suddenly reversed his weight and drove his fist at Zarwell’s head. Zarwell pulled the struggling body down against his chest and held it there until all agitated movement ceased. He sat up then, letting the body slide to the floor. The straps about his thighs came loose with little effort. THE analyst dabbed at his upper lip with a handkerchief. “The episodes are beginning to tie together,” he said, with an attempt at [p 144 ] nonchalance. “The next couple should do it.” Zarwell did not answer. His memory seemed on the point of complete return, and he sat quietly, hopefully. However, nothing more came and he returned his attention to his more immediate problem. Opening a button on his shirt, he pulled back a strip of plastic cloth just below his rib cage and took out a small flat pistol. He held it in the palm of his hand. He knew now why he always carried it. Bergstrom had his bad moment. “You’re not going to …” he began at the sight of the gun. He tried again. “You must be joking.” “I have very little sense of humor,” Zarwell corrected him. “You’d be foolish!” Bergstrom obviously realized how close he was to death. Yet surprisingly, after the first start, he showed little fear. Zarwell had thought the man a bit soft, too adjusted to a life of ease and some prestige to meet danger calmly. Curiosity restrained his trigger finger. “Why would I be foolish?” he asked. “Your Meninger oath of inviolable confidence?” Bergstrom shook his head. “I know it’s been broken before. But you need me. You’re not through, you know. If you killed me you’d still have to trust some other analyst.” “Is that the best you can do?” “No.” Bergstrom was angry now. “But use that logical mind you’re supposed to have! Scenes before this have shown what kind of man you are. Just because this last happened here on St. Martin’s makes little difference. If I was going to turn you in to the police, I’d have done it before this.” Zarwell debated with himself the truth of what the other had said. “Why didn’t you turn me in?” he asked. “Because you’re no mad-dog killer!” Now that the crisis seemed to be past, Bergstrom spoke more calmly, even allowed himself to relax. “You’re still pretty much in the fog about yourself. I read more in those comanalyses than you did. I even know who you are!” Zarwell’s eyebrows raised. “Who am I?” he asked, very interested now. Without attention he put his pistol away in a trouser pocket. Bergstrom brushed the question aside with one hand. “Your name makes little difference. You’ve used many. But you are an idealist. Your killings were necessary to bring justice to the places you visited. By now you’re almost a legend among the human worlds. I’d like to talk more with you on that later.” While Zarwell considered, Bergstrom pressed his advantage. “One more scene might do it,” he said. “Should we try again—if you trust me, that is?” [p 145 ] Zarwell made his decision quickly. “Go ahead,” he answered. ALL Zarwell’s attention seemed on the cigar he lit as he rode down the escalator, but he surveyed the terminal carefully over the rim of his hand. He spied no suspicious loungers. Behind the escalator he groped along the floor beneath the lockers until he found his key. The briefcase was under his arm a minute later. In the basement lave he put a coin in the pay slot of a private compartment and went in. As he zipped open the briefcase he surveyed his features in the mirror. 
A small muscle at the corner of one eye twitched spasmodically. One cheek wore a frozen quarter smile. Thirty-six hours under the paralysis was longer than advisable. The muscles should be rested at least every twenty hours. Fortunately his natural features would serve as an adequate disguise now. He adjusted the ring setting on the pistol-shaped instrument that he took from his case, and carefully rayed several small areas of his face, loosening muscles that had been tight too long. He sighed gratefully when he finished, massaging his cheeks and forehead with considerable pleasure. Another glance in the mirror satisfied him with the changes that had been made. He turned to his briefcase again and exchanged the gun for a small syringe, which he pushed into a trouser pocket, and a single-edged razor blade. Removing his fiber-cloth jacket he slashed it into strips with the razor blade and flushed it down the disposal bowl. With the sleeves of his blouse rolled up he had the appearance of a typical workman as he strolled from the compartment. Back at the locker he replaced the briefcase and, with a wad of gum, glued the key to the bottom of the locker frame. One step more. Taking the syringe from his pocket, he plunged the needle into his forearm and tossed the instrument down a waste chute. He took three more steps and paused uncertainly. When he looked about him it was with the expression of a man waking from a vivid dream. “Q UITE ingenious,” Graves murmured admiringly. “You had your mind already preconditioned for the shot. But why would you deliberately give yourself amnesia?” “What better disguise than to believe the part you’re playing?” “A good man must have done that job on your mind,” Bergstrom commented. “I’d have hesitated to try it myself. It must have taken a lot of trust on your part.” [p 146 ] “Trust and money,” Zarwell said drily. “Your memory’s back then?” Zarwell nodded. “I’m glad to hear that,” Bergstrom assured him. “Now that you’re well again I’d like to introduce you to a man named Vernon Johnson. This world …” Zarwell stopped him with an upraised hand. “Good God, man, can’t you see the reason for all this? I’m tired. I’m trying to quit.” “Quit?” Bergstrom did not quite follow him. “It started on my home colony,” Zarwell explained listlessly. “A gang of hoods had taken over the government. I helped organize a movement to get them out. There was some bloodshed, but it went quite well. Several months later an unofficial envoy from another world asked several of us to give them a hand on the same kind of job. The political conditions there were rotten. We went with him. Again we were successful. It seems I have a kind of genius for that sort of thing.” He stretched out his legs and regarded them thoughtfully. “I learned then the truth of Russell’s saying: ‘When the oppressed win their freedom they are as oppressive as their former masters.’ When they went bad, I opposed them. This time I failed. But I escaped again. I have quite a talent for that also. “I’m not a professional do-gooder.” Zarwell’s tone appealed to Bergstrom for understanding. “I have only a normal man’s indignation at injustice. And now I’ve done my share. Yet, wherever I go, the word eventually gets out, and I’m right back in a fight again. It’s like the proverbial monkey on my back. I can’t get rid of it.” He rose. “That disguise and memory planting were supposed to get me out of it. I should have known it wouldn’t work. But this time I’m not going to be drawn back in! 
You and your Vernon Johnson can do your own revolting. I’m through!” Bergstrom did not argue as he left. RESTLESSNESS drove Zarwell from his flat the next day—a legal holiday on St. Martin’s. At a railed-off lot he stopped and loitered in the shadow of an adjacent building watching workmen drilling an excavation for a new structure. When a man strolled to his side and stood watching the workmen, he was not surprised. He waited for the other to speak. “I’d like to talk to you, if you can spare a few minutes,” the stranger said. Zarwell turned and studied the man without answering. He was medium tall, with the body of an athlete, though perhaps ten years beyond the age of sports. He had a manner of contained energy. “You’re Johnson?” he asked. The man nodded. Zarwell tried to feel the anger he wanted to feel, but somehow it would not come. “We have nothing to talk about,” was the best he could manage. “Then will you just listen? After, I’ll leave—if you tell me to.” Against his will he found himself liking the man, and wanting at least to be courteous. He inclined his head toward a curb wastebox with a flat top. “Should we sit?” Johnson smiled agreeably and they walked over to the box and sat down. “When this colony was first founded,” Johnson began without preamble, “the administrative body was a governor, and a council of twelve. Their successors were to be elected biennially. At first they were. Then things changed. We haven’t had an election now in the last twenty-three years. St. Martin’s is beginning to prosper. Yet the only ones receiving the benefits are the rulers. The citizens work twelve hours a day. They are poorly housed, poorly fed, poorly clothed. They …” Zarwell found himself not listening as Johnson’s voice went on. The story was always the same. But why did they always try to drag him into their troubles? Why hadn’t he chosen some other world on which to hide? The last question prompted a new thought. Just why had he chosen St. Martin’s? Was it only a coincidence? Or had he, subconsciously at least, picked this particular world? He had always considered himself the unwilling subject of glib persuaders … but mightn’t some inner compulsion of his own have put the monkey on his back? “… and we need your help.” Johnson had finished his speech. Zarwell gazed up at the bright sky. He pulled in a long breath, and let it out in a sigh. “What are your plans so far?” he asked wearily. — CHARLES V. DE VET
D. The monkey represents Zarwell's pattern of joining resistance movements, only to watch them turn corrupt
What is the architecture of the model?
### Introduction Code-switching has received a lot of attention from speech and computational linguistic communities especially on how to automatically recognize text from speech and understand the structure within it. This phenomenon is very common in bilingual and multilingual communities. For decades, linguists studied this phenomenon and found that speakers switch at certain points, not randomly and obeys several constraints which point to the code-switched position in an utterance BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . These hypotheses have been empirically proven by observing that bilinguals tend to code-switch intra-sententially at certain (morpho)-syntactic boundaries BIBREF5 . BIBREF1 defined the well-known theory that constraints the code-switch between a functional head and its complement is given the strong relationship between the two constituents, which corresponds to a hierarchical structure in terms of Part-of-Speech (POS) tags. BIBREF3 introduced Matrix-Language Model Framework for an intra-sentential case where the primary language is called Matrix Language and the second one called Embedded Language BIBREF2 . A language island was then introduced which is a constituent composed entirely of the language morphemes. From the Matrix-Language Frame Model, both matrix language (ML) island and embedded language (EL) islands are well-formed in their grammars and the EL islands are constrained under ML grammar BIBREF6 . BIBREF7 studied determiner–noun switches in Spanish–English bilinguals . Code-switching can be classified into two categories: intra-sentential and inter-sentential switches BIBREF0 . Intra-sentential switch defines a shift from one language to another language within an utterance. Inter-sentential switch refers to the change between two languages in a single discourse, where the switching occurs after a sentence in the first language has been completed and the next sentence starts with a new language. The example of the intra-sentential switch is shown in (1), and the inter-sentential switch is shown in (2). Language modeling using only word lexicons is not adequate to learn the complexity of code-switching patterns, especially in a low resource setting. Learning at the same time syntactic features such as POS tag and language identifier allows to have a shared grammatical information that constraint the next word prediction. Due to this reason, we propose a multi-task learning framework for code-switching language modeling task which is able to leverage syntactic features such as language and POS tag. The main contribution of this paper is two-fold. First, multi-task learning model is proposed to jointly learn language modeling task and POS sequence tagging task on code-switched utterances. Second, we incorporate language information into POS tags to create bilingual tags - it distinguishes tags between Chinese and English. The POS tag features are shared towards the language model and enrich the features to better learn where to switch. From our experiments result, we found that our method improves the perplexity on SEAME Phase I and Phase II dataset BIBREF8 . ### Related Work The earliest language modeling research on code-switching data was applying linguistic theories on computational modelings such as Inversion Constraints and Functional Head Constraints on Chinese-English code-switching data BIBREF9 , BIBREF10 . 
BIBREF11 built a bilingual language model which is trained by interpolating two monolingual language models with statistical machine translation (SMT) based text generation to generate artificial code-switching text. BIBREF12 , BIBREF13 introduced a class-based method using RNNLM for computing the posterior probability and added POS tags in the input. BIBREF14 explored the combination of Brown word clusters, open class words, and clusters of open class word embeddings as hand-crafted features for improving the factored language model. In addition, BIBREF15 proposed generative language modeling with explicit phrase structure. A method of tying input and output embeddings helped to reduce the number of parameters in the language model and improved the perplexity BIBREF16 . Learning multiple NLP tasks using multi-task learning has recently been used in many domains BIBREF17 , BIBREF18 , BIBREF19 . They presented a joint many-task model to handle multiple NLP tasks and share parameters with growing depth in a single end-to-end model. A work by BIBREF20 showed the potential of combining POS tagging with a Named-Entity Recognition task. ### Methodology This section shows how to build the features and how to train our multi-task learning language model. Multi-task learning consists of two NLP tasks: language modeling and POS sequence tagging. ### Feature Representation In the model, word lexicons and syntactic features are used as input. Word Lexicons: Sentences are encoded as 1-hot vectors and our vocabulary is built from training data. Syntactic Features: For each language island, i.e., a phrase within the same language, we extract POS tags iteratively using the Chinese and English Penn Tree Bank parsers BIBREF21 , BIBREF22 . There are 31 English POS tags and 34 Chinese POS tags. Chinese words are distinguishable from English words since they have different encodings. We add language information to the POS tag label to discriminate POS tags between the two languages. ### Model Description Figure FIGREF7 illustrates our multi-task learning extension to the recurrent language model. In this multi-task learning setting, the tasks are language modeling and POS tagging. The POS tagging task shares the POS tag vector and the hidden states with the LM task, but it does not receive any information from the other loss. Let INLINEFORM0 be the word lexicon in the document and INLINEFORM1 be the POS tag of the corresponding INLINEFORM2 at index INLINEFORM3 . They are mapped into embedding matrices to get their INLINEFORM4 -dimensional vector representations INLINEFORM5 and INLINEFORM6 . The input embedding weights are tied with the output weights. We concatenate INLINEFORM7 and INLINEFORM8 as the input of INLINEFORM9 . The information from the POS tag sequence is shared to the language model through this step. INLINEFORM10 INLINEFORM11 where INLINEFORM0 denotes the concatenation operator, INLINEFORM1 and INLINEFORM2 are the final hidden states of INLINEFORM3 and INLINEFORM4 respectively. INLINEFORM5 and INLINEFORM6 , the hidden states from both LSTMs, are summed before predicting the next word. INLINEFORM7 INLINEFORM8 The word distribution of the next word INLINEFORM0 is normalized using the softmax function. The model uses cross-entropy losses as error functions INLINEFORM1 and INLINEFORM2 for the language modeling task and the POS tagging task respectively. We optimize the multi-objective losses using the Back Propagation algorithm and we perform a weighted linear sum of the losses for each individual task.
INLINEFORM3 where INLINEFORM0 is the weight of the loss in the training. ### Experimental Setup In this section, we present the experimental setting for this task Corpus: SEAME (South East Asia Mandarin-English), a conversational Mandarin-English code-switching speech corpus consists of spontaneously spoken interviews and conversations BIBREF8 . Our dataset (LDC2015S04) is the most updated version of the Linguistic Data Consortium (LDC) database. However, the statistics are not identical to BIBREF23 . The corpus consists of two phases. In Phase I, only selected audio segments were transcribed. In Phase II, most of the audio segments were transcribed. According to the authors, it was not possible to restore the original dataset. The authors only used Phase I corpus. Few speaker ids are not in the speaker list provided by the authors BIBREF23 . Therefore as a workaround, we added these ids to the train set. As our future reference, the recording lists are included in the supplementary material. Preprocessing: First, we tokenized English and Chinese word using Stanford NLP toolkit BIBREF24 . Second, all hesitations and punctuations were removed except apostrophe, for examples: “let's" and “it's". Table TABREF9 and Table TABREF10 show the statistics of SEAME Phase I and II corpora. Table TABREF11 shows the most common trigger POS tag for Phase II corpus. Training: The baseline model was trained using RNNLM BIBREF25 . Then, we trained our LSTM models with different hidden sizes [200, 500]. All LSTMs have 2 layers and unrolled for 35 steps. The embedding size is equal to the LSTM hidden size. A dropout regularization BIBREF26 was applied to the word embedding vector and POS tag embedding vector, and to the recurrent output BIBREF27 with values between [0.2, 0.4]. We used a batch size of 20 in the training. EOS tag was used to separate every sentence. We chose Stochastic Gradient Descent and started with a learning rate of 20 and if there was no improvement during the evaluation, we reduced the learning rate by a factor of 0.75. The gradient was clipped to a maximum of 0.25. For the multi-task learning, we used different loss weights hyper-parameters INLINEFORM0 in the range of [0.25, 0.5, 0.75]. We tuned our model with the development set and we evaluated our best model using the test set, taking perplexity as the final evaluation metric. Where the latter was calculated by taking the exponential of the error in the negative log-form. INLINEFORM1 ### Results Table TABREF14 and Table TABREF15 show the results of multi-task learning with different values of the hyper-parameter INLINEFORM0 . We observe that the multi-task model with INLINEFORM1 achieved the best performance. We compare our multi-task learning model against RNNLM and LSTM baselines. The baselines correspond to recurrent neural networks that are trained with word lexicons. Table TABREF16 and Table TABREF17 present the overall results from different models. The multi-task model performs better than LSTM baseline by 9.7% perplexity in Phase I and 7.4% perplexity in Phase II. The performance of our model in Phase II is also better than the RNNLM (8.9%) and far better than the one presented in BIBREF13 in Phase I. Moreover, the results show that adding shared POS tag representation to INLINEFORM0 does not hurt the performance of the language modeling task. This implies that the syntactic information helps the model to better predict the next word in the sequence. 
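To make the architecture and the weighted loss described in the Methodology section concrete, the following is a minimal PyTorch sketch, not the authors' released code: the class and variable names, the embedding and hidden sizes, the exact wiring of the two LSTMs, and the form of the weighted sum (with `p` standing in for the tuned loss-weight hyper-parameter) are illustrative assumptions inferred from the description above.

```python
# Hedged sketch of the multi-task code-switching LM described above:
# word and POS-tag embeddings are concatenated and fed to an LM LSTM,
# a second LSTM runs over the POS embeddings, the two hidden states are
# summed for next-word prediction, and both heads are trained jointly
# with a weighted linear sum of the two cross-entropy losses.
import torch
import torch.nn as nn

class MultiTaskCSLM(nn.Module):
    def __init__(self, vocab_size, pos_size, emb_dim=500, hid_dim=500):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.pos_emb = nn.Embedding(pos_size, emb_dim)
        self.lstm_lm = nn.LSTM(2 * emb_dim, hid_dim, num_layers=2, batch_first=True)
        self.lstm_pos = nn.LSTM(emb_dim, hid_dim, num_layers=2, batch_first=True)
        self.word_out = nn.Linear(hid_dim, vocab_size)
        self.pos_out = nn.Linear(hid_dim, pos_size)
        # tie input and output word embeddings (requires emb_dim == hid_dim)
        self.word_out.weight = self.word_emb.weight

    def forward(self, words, pos_tags):
        # words, pos_tags: (batch, seq) index tensors
        x = torch.cat([self.word_emb(words), self.pos_emb(pos_tags)], dim=-1)
        h_lm, _ = self.lstm_lm(x)                         # (batch, seq, hid_dim)
        h_pos, _ = self.lstm_pos(self.pos_emb(pos_tags))  # (batch, seq, hid_dim)
        return self.word_out(h_lm + h_pos), self.pos_out(h_pos)

def multitask_loss(word_logits, pos_logits, next_words, next_pos, p=0.25):
    ce = nn.CrossEntropyLoss()
    lm_loss = ce(word_logits.reshape(-1, word_logits.size(-1)), next_words.reshape(-1))
    pos_loss = ce(pos_logits.reshape(-1, pos_logits.size(-1)), next_pos.reshape(-1))
    # the perplexity reported in the tables is exp(lm_loss)
    return p * lm_loss + (1.0 - p) * pos_loss
```

Dropout on the embedding and recurrent outputs, SGD with the decaying learning rate, and gradient clipping from the Training paragraph would wrap around this skeleton.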
To further verify this hypothesis, we conduct two analysis by visualizing our prediction examples in Figure FIGREF13 : Results with different hyper-parameter settings ### Conclusion In this paper, we propose a multi-task learning approach for code-switched language modeling. The multi-task learning models achieve the best performance and outperform LSTM baseline with 9.7% and 7.4% improvement in perplexity for Phase I and Phase II SEAME corpus respectively. This implies that by training two different NLP tasks together the model can correctly learn the correlation between them. Indeed, the syntactic information helps the model to be aware of code-switching points and it improves the performance over the language model. Finally, we conclude that multi-task learning has good potential on code-switching language modeling research and there are still rooms for improvements, especially by adding more language pairs and corpora. ### Acknowledgments This work is partially funded by ITS/319/16FP of the Innovation Technology Commission, HKUST 16214415 & 16248016 of Hong Kong Research Grants Council, and RDC 1718050-0 of EMOS.AI. ### Recording Lists We split the recording ids into train, development, and test set as the following: Figure 1: Multi-Task Learning Framework Table 1: Data Statistics in SEAME Phase I Table 2: Data Statistics in SEAME Phase II Table 3: Code-Switching Trigger Words in SEAME Phase II Table 4: Multi-task results with different weighted loss hyper-parameter in Phase I Table 5: Multi-task results with different weighted loss hyper-parameter in Phase II Table 7: Results in Phase II Figure 2: Prediction examples in Phase II. Left: Each square shows the target word’s log probability improvement by multi-task model compared to LSTM model (Darker color is better). Right: Each square shows the probability of the next POS tag is Chinese (Darker color represents higher probability) Table 8: Results in Phase I Table 9: Results in Phase II
LSTM
What is the performance of the model for the German sub-task A?
### Introduction In social media, abusive language denotes a text which contains any form of unacceptable language in a post or a comment. Abusive language can be divided into hate speech, offensive language and profanity. Hate speech is a derogatory comment that hurts an entire group in terms of ethnicity, race or gender. Offensive language is similar to a derogatory comment, but it is targeted towards an individual. Profanity refers to any use of unacceptable language without a specific target. While profanity is the least threatening, hate speech has the most detrimental effect on society. Social media moderators are having a hard time combating the rampant spread of hate speech, as it is closely related to the other forms of abusive language. The evolution of new slang and multilingualism further adds to the complexity. Recently, there has been a sharp rise in hate speech related incidents in India, the lynchings being a clear indication BIBREF1. Arun et al. BIBREF1 suggest that hate speech in India is very complicated as people are not directly spreading hate but are spreading misinformation against a particular community. Hence, it has become imperative to study hate speech in Indian languages. For the first time, a shared task on abusive content detection has been released for the Hindi language at HASOC 2019. This will fuel hate speech and offensive language research for Indian languages. The inclusion of datasets for the English and German languages will give a performance comparison for the detection of abusive content in high- and low-resource languages. In this paper, we focus on multilingual hate speech detection for posts written in Hindi, English, and German and describe our submission (HateMonitors) for the HASOC competition at FIRE 2019. Our system concatenates two types of sentence embeddings to represent each tweet and uses machine learning models for classification. ### Related works Analyzing abusive language in social media is a daunting task. Waseem et al. BIBREF2 categorize abusive language into two sub-classes – hate speech and offensive language. In their analysis of abusive language, they note that classifying abusive language into these two subtypes is more challenging due to the correlation between offensive language and hate speech BIBREF3. Nobata et al. BIBREF4 use predefined language elements and embeddings to train a regression model. With the introduction of better classification models BIBREF5, BIBREF6 and newer features BIBREF7, BIBREF3, BIBREF8, the research in hate and offensive speech detection has gained momentum. Silva et al. BIBREF9 performed a large scale study to understand the targets of such hate speech on two social media platforms: Twitter and Whisper. These targets could be refugees and immigrants BIBREF10, Jews BIBREF11, BIBREF12 and Muslims BIBREF13, BIBREF14. People could become the target of hate speech based on nationality BIBREF15, sex BIBREF16, BIBREF17, and gender BIBREF18, BIBREF19 as well. Public expressions of hate speech affect the devaluation of minority members BIBREF20 and the exclusion of minorities from society BIBREF21, and tend to diffuse through the network at a faster rate BIBREF22. One of the key issues with the current state of hate and offensive language research is that the majority of the research is dedicated to the English language BIBREF23. A few researchers have tried to solve the problem of abusive language in other languages BIBREF10, BIBREF24, but the works are mostly monolingual.
Any online social media platform contains people of different ethnicity, which results in the spread of information in multiple languages. Hence, a robust classifier is needed, which can deal with abusive language in the multilingual domain. Several shared tasks like HASOC BIBREF0, HaSpeeDe BIBREF25, GermEval BIBREF26, AMI BIBREF27, HatEval BIBREF28 have focused on detection of abusive text in multiple languages recently. ### Dataset and Task description The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages. ### Dataset and Task description ::: Datasets We present the statistics for HASOC dataset in Table TABREF5. From the table, we can observe that the dataset for the German language is highly unbalanced, English and Hindi are more or less balanced for sub-task A. For sub-task B German dataset is balanced but others are unbalanced. For sub-task C both the datasets are highly unbalanced. ### Dataset and Task description ::: Tasks Sub-task A consists of building a binary classification model which can predict if a given piece of text is hateful and offensive (HOF) or not (NOT). A data point is annotated as HOF if it contains any form of non-acceptable language such as hate speech, aggression, profanity. Each of the three languages had this subtask. Sub-task B consists of building a multi-class classification model which can predict the three different classes in the data points annotated as HOF: Hate speech (HATE), Offensive language (OFFN), and Profane (PRFN). Again all three languages have this sub-task. Sub-task C consists of building a binary classification model which can predict the type of offense: Targeted (TIN) and Untargeted (UNT). Sub-task C was not conducted for the German dataset. ### System Description In this section, we will explain the details about our system, which comprises of two sub-parts- feature generation and model selection. Figure FIGREF15 shows the architecture of our system. ### System Description ::: Feature Generation ::: Preprocessing: We preprocess the tweets before performing the feature extraction. The following steps were followed: We remove all the URLs. Convert text to lowercase. This step was not applied to the Hindi language since Devanagari script does not have lowercase and uppercase characters. We did not normalize the mentions in the text as they could potentially reveal important information for the embeddings encoders. Any numerical figure was normalized to a string `number'. We did not remove any punctuation and stop-words since the context of the sentence might get lost in such a process. Since we are using sentence embedding, it is essential to keep the context of the sentence intact. ### System Description ::: Feature Generation ::: Feature vectors: The preprocessed posts are then used to generate features for the classifier. For our model, we decided to generate two types of feature vector: BERT Embeddings and LASER Embeddings. For each post, we generate the BERT and LASER Embedding, which are then concatenated and fed as input to the final classifier. 
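Before turning to the embedding details below, the preprocessing steps listed above can be summarised in a few lines of Python; the function name and the regular expressions are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of the described preprocessing (assumed helper, not the authors' code):
# strip URLs, lowercase everything except Hindi (Devanagari has no case),
# normalise numerals to the string "number", and keep mentions, punctuation
# and stop-words so the sentence encoders see the full context.
import re

def preprocess(text: str, lang: str) -> str:
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # remove URLs
    if lang != "hi":
        text = text.lower()
    text = re.sub(r"\d+", "number", text)               # normalise numbers
    return re.sub(r"\s+", " ", text).strip()            # whitespace tidy-up (illustrative)

print(preprocess("Check https://example.com 3 new posts @user!", "en"))
# -> "check number new posts @user!"
```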
Multilingual BERT embeddings: Bidirectional Encoder Representations from Transformers(BERT) BIBREF29 has played a key role in the advancement of natural language processing domain (NLP). BERT is a language model which is trained to predict the masked words in a sentence. To generate the sentence embedding for a post, we take the mean of the last 11 layers (out of 12) to get a sentence vector with length of 768. LASER embeddings: Researchers at Facebook released a language agnostic sentence embeddings representations (LASER) BIBREF30, where the model jointly learns on 93 languages. The model takes the sentence as input and produces a vector representation of length 1024. The model is able to handle code mixing as well BIBREF31. We pass the preprocessed sentences through each of these embedding models and got two separate sentence representation. Further, we concatenate the embeddings into one single feature vector of length 1792, which is then passed to the final classification model. ### System Description ::: Our Model The amount of data in each category was insufficient to train a deep learning model. Building such deep models would lead to overfitting. So, we resorted to using simpler models such as SVM and Gradient boosted trees. Gradient boosted trees BIBREF32 are often the choice for systems where features are pre-extracted from the raw data. In the category of gradient boosted trees, Light Gradient Boosting Machine (LGBM) BIBREF33 is considered one of the most efficient in terms of memory footprint. Moreover, it has been part of winning solutions of many competition . Hence, we used LGBM as model for the downstream tasks in this competition. ### Results The performance of our models across different languages for sub-task A are shown in table TABREF19. Our model got the first position in the German sub-task with a macro F1 score of 0.62. The results of sub-task B and sub-task C is shown in table TABREF20 and TABREF21 respectively. ### Discussion In the results of subtask A, models are mainly affected by imbalance of the dataset. The training dataset of Hindi dataset was more balanced than English or German dataset. Hence, the results were around 0.78. As the dataset in German language was highly imbalanced, the results drops to 0.62. In subtask B, the highest F1 score reached was by the profane class for each language in table TABREF20. The model got confused between OFFN, HATE and PRFN labels which suggests that these models are not able to capture the context in the sentence. The subtask C was again a case of imbalanced dataset as targeted(TIN) label gets the highest F1 score in table TABREF21. ### Conclusion In this shared task, we experimented with zero-shot transfer learning on abusive text detection with pre-trained BERT and LASER sentence embeddings. We use an LGBM model to train the embeddings to perform downstream task. Our model for German language got the first position. The results provided a strong baseline for further research in multilingual hate speech. We have also made the models public for use by other researchers. Table 1. This table shows the initial statistics about the training and test data Fig. 1. Architecture of our system Table 2. This table gives the language wise result of sub-task A by comparing the macro F1 values Table 3. This table gives the language wise result of sub-task B by comparing the macro F1 values Table 4. This table gives the language wise result of sub-task C by comparing the macro F1 values
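As a rough, hedged sketch of the pipeline above — mBERT sentence vectors (mean of the last 11 hidden layers, 768 dimensions) concatenated with LASER vectors (1024 dimensions) into a single 1792-dimensional feature, then an LGBM classifier — the snippet below uses the Hugging Face `transformers` library and assumes the third-party `laserembeddings` package; the checkpoint name, the token-level mean pooling, and the placeholder training arrays are assumptions rather than details taken from the paper.

```python
# Hedged sketch of the BERT + LASER + LGBM pipeline described above.
import numpy as np
import torch
import lightgbm as lgb
from transformers import AutoModel, AutoTokenizer
from laserembeddings import Laser  # assumed package providing LASER sentence vectors

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased", output_hidden_states=True)
laser = Laser()

def sentence_features(text: str, lang: str) -> np.ndarray:
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = bert(**enc).hidden_states               # embeddings + 12 layers, each (1, seq, 768)
    bert_vec = torch.stack(hidden[-11:]).mean(dim=0)     # mean of the last 11 layers
    bert_vec = bert_vec.mean(dim=1).squeeze(0).numpy()   # pool over tokens (pooling strategy assumed)
    laser_vec = laser.embed_sentences([text], lang=lang)[0]  # 1024-dim
    return np.concatenate([bert_vec, laser_vec])         # 1792-dim feature vector

# X_train / y_train would hold the stacked feature rows and labels (e.g. HOF vs. NOT)
clf = lgb.LGBMClassifier()
# clf.fit(X_train, y_train)
# predictions = clf.predict(X_test)
```

Because the same feature function is applied to Hindi, English, and German alike, the downstream classifier stays language-agnostic, which is the design goal stated in the system description.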
macro F1 score of 0.62
Who is Big Louis? A. Big Louis is Lawrence Reston-Farrell's boss. B. Big Louis is Al Rossi's boss. C. Big Louis is Warren Brett- James' boss. D. Big Louis is Joe Prantera's boss.
Illustrated by van Dongen A gun is an interesting weapon; it can be hired, of course, and naturally doesn't care who hires it. Something much the same can be said of the gunman, too.... GUN FOR HIRE By MACK REYNOLDS Joe Prantera called softly, "Al." The pleasurable, comfortable, warm feeling began spreading over him, the way it always did. The older man stopped and squinted, but not suspiciously, even now. The evening was dark, it was unlikely that the other even saw the circle of steel that was the mouth of the shotgun barrel, now resting on the car's window ledge. "Who's it?" he growled. Joe Prantera said softly, "Big Louis sent me, Al." And he pressed the trigger. And at that moment, the universe caved inward upon Joseph Marie Prantera. There was nausea and nausea upon nausea. There was a falling through all space and through all time. There was doubling and twisting and twitching of every muscle and nerve. There was pain, horror and tumultuous fear. And he came out of it as quickly and completely as he'd gone in. He was in, he thought, a hospital and his first reaction was to think, This here California. Everything different. Then his second thought was Something went wrong. Big Louis, he ain't going to like this. He brought his thinking to the present. So far as he could remember, he hadn't completely pulled the trigger. That at least meant that whatever the rap was it wouldn't be too tough. With luck, the syndicate would get him off with a couple of years at Quentin. A door slid open in the wall in a way that Joe had never seen a door operate before. This here California. The clothes on the newcomer were wrong, too. For the first time, Joe Prantera began to sense an alienness—a something that was awfully wrong. The other spoke precisely and slowly, the way a highly educated man speaks a language which he reads and writes fluently but has little occasion to practice vocally. "You have recovered?" Joe Prantera looked at the other expressionlessly. Maybe the old duck was one of these foreign doctors, like. The newcomer said, "You have undoubtedly been through a most harrowing experience. If you have any untoward symptoms, possibly I could be of assistance." Joe couldn't figure out how he stood. For one thing, there should have been some kind of police guard. The other said, "Perhaps a bit of stimulant?" Joe said flatly, "I wanta lawyer." The newcomer frowned at him. "A lawyer?" "I'm not sayin' nothin'. Not until I get a mouthpiece." The newcomer started off on another tack. "My name is Lawrence Reston-Farrell. If I am not mistaken, you are Joseph Salviati-Prantera." Salviati happened to be Joe's mother's maiden name. But it was unlikely this character could have known that. Joe had been born in Naples and his mother had died in childbirth. His father hadn't brought him to the States until the age of five and by that time he had a stepmother. "I wanta mouthpiece," Joe said flatly, "or let me outta here." Lawrence Reston-Farrell said, "You are not being constrained. There are clothes for you in the closet there." Joe gingerly tried swinging his feet to the floor and sitting up, while the other stood watching him, strangely. He came to his feet. With the exception of a faint nausea, which brought back memories of that extreme condition he'd suffered during ... during what? He hadn't the vaguest idea of what had happened. He was dressed in a hospital-type nightgown. He looked down at it and snorted and made his way over to the closet. 
It opened on his approach, the door sliding back into the wall in much the same manner as the room's door had opened for Reston-Farrell. Joe Prantera scowled and said, "These ain't my clothes." "No, I am afraid not." "You think I'd be seen dead wearing this stuff? What is this, some religious crackpot hospital?" Reston-Farrell said, "I am afraid, Mr. Salviati-Prantera, that these are the only garments available. I suggest you look out the window there." Joe gave him a long, chill look and then stepped to the window. He couldn't figure the other. Unless he was a fruitcake. Maybe he was in some kind of pressure cooker and this was one of the fruitcakes. He looked out, however, not on the lawns and walks of a sanitarium but upon a wide boulevard of what was obviously a populous city. And for a moment again, Joe Prantera felt the depths of nausea. This was not his world. He stared for a long, long moment. The cars didn't even have wheels, he noted dully. He turned slowly and faced the older man. Reston-Farrell said compassionately, "Try this, it's excellent cognac." Joe Prantera stared at him, said finally, flatly, "What's it all about?" The other put down the unaccepted glass. "We were afraid first realization would be a shock to you," he said. "My colleague is in the adjoining room. We will be glad to explain to you if you will join us there." "I wanta get out of here," Joe said. "Where would you go?" The fear of police, of Al Rossi's vengeance, of the measures that might be taken by Big Louis on his failure, were now far away. Reston-Farrell had approached the door by which he had entered and it reopened for him. He went through it without looking back. There was nothing else to do. Joe dressed, then followed him. In the adjoining room was a circular table that would have accommodated a dozen persons. Two were seated there now, papers, books and soiled coffee cups before them. There had evidently been a long wait. Reston-Farrell, the one Joe had already met, was tall and drawn of face and with a chainsmoker's nervousness. The other was heavier and more at ease. They were both, Joe estimated, somewhere in their middle fifties. They both looked like docs. He wondered, all over again, if this was some kind of pressure cooker. But that didn't explain the view from the window. Reston-Farrell said, "May I present my colleague, Citizen Warren Brett-James? Warren, this is our guest from ... from yesteryear, Mr. Joseph Salviati-Prantera." Brett-James nodded to him, friendly, so far as Joe could see. He said gently, "I think it would be Mr. Joseph Prantera, wouldn't it? The maternal linage was almost universally ignored." His voice too gave the impression he was speaking a language not usually on his tongue. Joe took an empty chair, hardly bothering to note its alien qualities. His body seemed to fit into the piece of furniture, as though it had been molded to his order. Joe said, "I think maybe I'll take that there drink, Doc." Reston-Farrell said, "Of course," and then something else Joe didn't get. Whatever the something else was, a slot opened in the middle of the table and a glass, so clear of texture as to be all but invisible, was elevated. It contained possibly three ounces of golden fluid. Joe didn't allow himself to think of its means of delivery. He took up the drink and bolted it. He put the glass down and said carefully, "What's it all about, huh?" Warren Brett-James said soothingly, "Prepare yourself for somewhat of a shock, Mr. Prantera. 
You are no longer in Los Angeles—" "Ya think I'm stupid? I can see that." "I was about to say, Los Angeles of 1960. Mr. Prantera, we welcome you to Nuevo Los Angeles." "Ta where?" "To Nuevo Los Angeles and to the year—" Brett-James looked at his companion. "What is the date, Old Calendar?" "2133," Reston-Farrell said. "2133 A.D. they would say." Joe Prantera looked from one of them to the other, scowling. "What are you guys talking about?" Warren Brett-James said softly, "Mr. Prantera, you are no longer in the year 1960, you are now in the year 2133." He said, uncomprehendingly, "You mean I been, like, unconscious for—" He let the sentence fall away as he realized the impossibility. Brett-James said gently, "Hardly for one hundred and seventy years, Mr. Prantera." Reston-Farrell said, "I am afraid we are confusing you. Briefly, we have transported you, I suppose one might say, from your own era to ours." Joe Prantera had never been exposed to the concept of time travel. He had simply never associated with anyone who had ever even remotely considered such an idea. Now he said, "You mean, like, I been asleep all that time?" "Not exactly," Brett-James said, frowning. Reston-Farrell said, "Suffice to say, you are now one hundred and seventy-three years after the last memory you have." Joe Prantera's mind suddenly reverted to those last memories and his eyes narrowed dangerously. He felt suddenly at bay. He said, "Maybe you guys better let me in on what's this all about." Reston-Farrell said, "Mr. Prantera, we have brought you from your era to perform a task for us." Joe stared at him, and then at the other. He couldn't believe he was getting through to them. Or, at least, that they were to him. Finally he said, "If I get this, you want me to do a job for you." "That is correct." Joe said, "You guys know the kind of jobs I do?" "That is correct." "Like hell you do. You think I'm stupid? I never even seen you before." Joe Prantera came abruptly to his feet. "I'm gettin' outta here." For the second time, Reston-Farrell said, "Where would you go, Mr. Prantera?" Joe glared at him. Then sat down again, as abruptly as he'd arisen. "Let's start all over again. I got this straight, you brought me, some screwy way, all the way ... here. O.K., I'll buy that. I seen what it looks like out that window—" The real comprehension was seeping through to him even as he talked. "Everybody I know, Jessie, Tony, the Kid, Big Louis, everybody, they're dead. Even Big Louis." "Yes," Brett-James said, his voice soft. "They are all dead, Mr. Prantera. Their children are all dead, and their grandchildren." The two men of the future said nothing more for long minutes while Joe Prantera's mind whirled its confusion. Finally he said, "What's this bit about you wanting me to give it to some guy." "That is why we brought you here, Mr. Prantera. You were ... you are, a professional assassin." "Hey, wait a minute, now." Reston-Farrell went on, ignoring the interruption. "There is small point in denying your calling. Pray remember that at the point when we ... transported you, you were about to dispose of a contemporary named Alphonso Annunziata-Rossi. A citizen, I might say, whose demise would probably have caused small dismay to society." They had him pegged all right. Joe said, "But why me? Why don't you get some heavy from now? Somebody knows the ropes these days." Brett-James said, "Mr. Prantera, there are no professional assassins in this age, nor have there been for over a century and a half." "Well, then do it yourself." 
Joe Prantera's irritation over this whole complicated mess was growing. And already he was beginning to long for the things he knew—for Jessie and Tony and the others, for his favorite bar, for the lasagne down at Papa Giovanni's. Right now he could have welcomed a calling down at the hands of Big Louis. Reston-Farrell had come to his feet and walked to one of the large room's windows. He looked out, as though unseeing. Then, his back turned, he said, "We have tried, but it is simply not in us, Mr. Prantera." "You mean you're yella?" "No, if by that you mean afraid. It is simply not within us to take the life of a fellow creature—not to speak of a fellow man." Joe snapped: "Everything you guys say sounds crazy. Let's start all over again." Brett-James said, "Let me do it, Lawrence." He turned his eyes to Joe. "Mr. Prantera, in your own era, did you ever consider the future?" Joe looked at him blankly. "In your day you were confronted with national and international, problems. Just as we are today and just as nations were a century or a millennium ago." "Sure, O.K., so we had problems. I know whatcha mean—like wars, and depressions and dictators and like that." "Yes, like that," Brett-James nodded. The heavy-set man paused a moment. "Yes, like that," he repeated. "That we confront you now indicates that the problems of your day were solved. Hadn't they been, the world most surely would have destroyed itself. Wars? Our pedagogues are hard put to convince their students that such ever existed. More than a century and a half ago our society eliminated the reasons for international conflict. For that matter," he added musingly, "we eliminated most international boundaries. Depressions? Shortly after your own period, man awoke to the fact that he had achieved to the point where it was possible to produce an abundance for all with a minimum of toil. Overnight, for all practical purposes, the whole world was industrialized, automated. The second industrial revolution was accompanied by revolutionary changes in almost every field, certainly in every science. Dictators? Your ancestors found, Mr. Prantera, that it is difficult for a man to be free so long as others are still enslaved. Today the democratic ethic has reached a pinnacle never dreamed of in your own era." "O.K., O.K.," Joe Prantera growled. "So everybody's got it made. What I wanta know is what's all this about me giving it ta somebody? If everything's so great, how come you want me to knock this guy off?" Reston-Farrell bent forward and thumped his right index finger twice on the table. "The bacterium of hate—a new strain—has found the human race unprotected from its disease. We had thought our vaccines immunized us." "What's that suppose to mean?" Brett-James took up the ball again. "Mr. Prantera, have you ever heard of Ghengis Khan, of Tamerlane, Alexander, Caesar?" Joe Prantera scowled at him emptily. "Or, more likely, of Napoleon, Hitler, Stalin?" "Sure I heard of Hitler and Stalin," Joe growled. "I ain't stupid." The other nodded. "Such men are unique. They have a drive ... a drive to power which exceeds by far the ambitions of the average man. They are genii in their way, Mr. Prantera, genii of evil. Such a genius of evil has appeared on the current scene." "Now we're getting somewheres," Joe snorted. "So you got a guy what's a little ambitious, like, eh? And you guys ain't got the guts to give it to him. O.K. What's in it for me?" The two of them frowned, exchanged glances. 
Reston-Farrell said, "You know, that is one aspect we had not considered." Brett-James said to Joe Prantera, "Had we not, ah, taken you at the time we did, do you realize what would have happened?" "Sure," Joe grunted. "I woulda let old Al Rossi have it right in the guts, five times. Then I woulda took the plane back to Chi." Brett-James was shaking his head. "No. You see, by coincidence, a police squad car was coming down the street just at that moment to arrest Mr. Rossi. You would have been apprehended. As I understand Californian law of the period, your life would have been forfeit, Mr. Prantera." Joe winced. It didn't occur to him to doubt their word. Reston-Farrell said, "As to reward, Mr. Prantera, we have already told you there is ultra-abundance in this age. Once this task has been performed, we will sponsor your entry into present day society. Competent psychiatric therapy will soon remove your present—" "Waita minute, now. You figure on gettin' me candled by some head shrinker, eh? No thanks, Buster. I'm going back to my own—" Brett-James was shaking his head again. "I am afraid there is no return, Mr. Prantera. Time travel works but in one direction, with the flow of the time stream. There can be no return to your own era." Joe Prantera had been rocking with the mental blows he had been assimilating, but this was the final haymaker. He was stuck in this squaresville of a world. Joe Prantera on a job was thorough. Careful, painstaking, competent. He spent the first three days of his life in the year 2133 getting the feel of things. Brett-James and Reston-Farrell had been appointed to work with him. Joe didn't meet any of the others who belonged to the group which had taken the measures to bring him from the past. He didn't want to meet them. The fewer persons involved, the better. He stayed in the apartment of Reston-Farrell. Joe had been right, Reston-Farrell was a medical doctor. Brett-James evidently had something to do with the process that had enabled them to bring Joe from the past. Joe didn't know how they'd done it, and he didn't care. Joe was a realist. He was here. The thing was to adapt. There didn't seem to be any hurry. Once the deal was made, they left it up to him to make the decisions. They drove him around the town, when he wished to check the traffic arteries. They flew him about the whole vicinity. From the air, Southern California looked much the same as it had in his own time. Oceans, mountains, and to a lesser extent, deserts, are fairly permanent even against man's corroding efforts. It was while he was flying with Brett-James on the second day that Joe said, "How about Mexico? Could I make the get to Mexico?" The physicist looked at him questioningly. "Get?" he said. Joe Prantera said impatiently, "The getaway. After I give it to this Howard Temple-Tracy guy, I gotta go on the run, don't I?" "I see." Brett-James cleared his throat. "Mexico is no longer a separate nation, Mr. Prantera. All North America has been united into one unit. Today, there are only eight nations in the world." "Where's the nearest?" "South America." "That's a helluva long way to go on a get." "We hadn't thought of the matter being handled in that manner." Joe eyed him in scorn. "Oh, you didn't, huh? What happens after I give it to this guy? I just sit around and wait for the cops to put the arm on me?" Brett-James grimaced in amusement. "Mr. Prantera, this will probably be difficult for you to comprehend, but there are no police in this era." Joe gaped at him. "No police! 
What happens if you gotta throw some guy in stir?" "If I understand your idiom correctly, you mean prison. There are no prisons in this era, Mr. Prantera." Joe stared. "No cops, no jails. What stops anybody? What stops anybody from just going into some bank, like, and collecting up all the bread?" Brett-James cleared his throat. "Mr. Prantera, there are no banks." "No banks! You gotta have banks!" "And no money to put in them. We found it a rather antiquated method of distribution well over a century ago." Joe had given up. Now he merely stared. Brett-James said reasonably, "We found we were devoting as much time to financial matters in all their endless ramifications—including bank robberies—as we were to productive efforts. So we turned to more efficient methods of distribution." On the fourth day, Joe said, "O.K., let's get down to facts. Summa the things you guys say don't stick together so good. Now, first place, where's this guy Temple-Tracy you want knocked off?" Reston-Farrell and Brett-James were both present. The three of them sat in the living room of the latter's apartment, sipping a sparkling wine which seemed to be the prevailing beverage of the day. For Joe's taste it was insipid stuff. Happily, rye was available to those who wanted it. Reston-Farrell said, "You mean, where does he reside? Why, here in this city." "Well, that's handy, eh?" Joe scratched himself thoughtfully. "You got somebody can finger him for me?" "Finger him?" "Look, before I can give it to this guy I gotta know some place where he'll be at some time. Get it? Like Al Rossi. My finger, he works in Rossi's house, see? He lets me know every Wednesday night, eight o'clock, Al leaves the house all by hisself. O.K., so I can make plans, like, to give it to him." Joe Prantera wound it up reasonably. "You gotta have a finger." Brett-James said, "Why not just go to Temple-Tracy's apartment and, ah, dispose of him?" "Jest walk in, eh? You think I'm stupid? How do I know how many witnesses hangin' around? How do I know if the guy's carryin' heat?" "Heat?" "A gun, a gun. Ya think I'm stupid? I come to give it to him and he gives it to me instead." Dr. Reston-Farrell said, "Howard Temple-Tracy lives alone. He customarily receives visitors every afternoon, largely potential followers. He is attempting to recruit members to an organization he is forming. It would be quite simple for you to enter his establishment and dispose of him. I assure you, he does not possess weapons." Joe was indignant. "Just like that, eh?" he said sarcastically. "Then what happens? How do I get out of the building? Where's my get car parked? Where do I hide out? Where do I dump the heat?" "Dump the heat?" "Get rid of the gun. You want I should get caught with the gun on me? I'd wind up in the gas chamber so quick—" "See here, Mr. Prantera," Brett-James said softly. "We no longer have capital punishment, you must realize." "O.K. I still don't wanta get caught. What is the rap these days, huh?" Joe scowled. "You said they didn't have no jails any more." "This is difficult for you to understand, I imagine," Reston-Farrell told him, "but, you see, we no longer punish people in this era." That took a long, unbelieving moment to sink in. "You mean, like, no matter what they do? That's crazy. Everybody'd be running around giving it to everybody else." "The motivation for crime has been removed, Mr. Prantera," Reston-Farrell attempted to explain. "A person who commits a violence against another is obviously in need of medical care. 
And, consequently, receives it." "You mean, like, if I steal a car or something, they just take me to a doctor?" Joe Prantera was unbelieving. "Why would anybody wish to steal a car?" Reston-Farrell said easily. "But if I give it to somebody?" "You will be turned over to a medical institution. Citizen Howard Temple-Tracy is the last man you will ever kill, Mr. Prantera." A chillness was in the belly of Joe Prantera. He said very slowly, very dangerously, "You guys figure on me getting caught, don't you?" "Yes," Brett-James said evenly. "Well then, figure something else. You think I'm stupid?" "Mr. Prantera," Dr. Reston-Farrell said, "there has been as much progress in the field of psychiatry in the past two centuries as there has in any other. Your treatment would be brief and painless, believe me." Joe said coldly, "And what happens to you guys? How do you know I won't rat on you?" Brett-James said gently, "The moment after you have accomplished your mission, we plan to turn ourselves over to the nearest institution to have determined whether or not we also need therapy." "Now I'm beginning to wonder about you guys," Joe said. "Look, all over again, what'd'ya wanta give it to this guy for?" The doctor said, "We explained the other day, Mr. Prantera. Citizen Howard Temple-Tracy is a dangerous, atavistic, evil genius. We are afraid for our institutions if his plans are allowed to mature." "Well if you got things so good, everybody's got it made, like, who'd listen to him?" The doctor nodded at the validity of the question. "Mr. Prantera, Homo sapiens is a unique animal. Physically he matures at approximately the age of thirteen. However, mental maturity and adjustment is often not fully realized until thirty or even more. Indeed, it is sometimes never achieved. Before such maturity is reached, our youth are susceptible to romantic appeal. Nationalism, chauvinism, racism, the supposed glory of the military, all seem romantic to the immature. They rebel at the orderliness of present society. They seek entertainment in excitement. Citizen Temple-Tracy is aware of this and finds his recruits among the young." "O.K., so this guy is dangerous. You want him knocked off before he screws everything up. But the way things are, there's no way of making a get. So you'll have to get some other patsy. Not me." "I am afraid you have no alternative," Brett-James said gently. "Without us, what will you do? Mr. Prantera, you do not even speak the language." "What'd'ya mean? I don't understand summa the big words you eggheads use, but I get by O.K." Brett-James said, "Amer-English is no longer the language spoken by the man in the street, Mr. Prantera. Only students of such subjects any longer speak such tongues as Amer-English, French, Russian or the many others that once confused the race with their limitations as a means of communication." "You mean there's no place in the whole world where they talk American?" Joe demanded, aghast. Dr. Reston-Farrell controlled the car. Joe Prantera sat in the seat next to him and Warren Brett-James sat in the back. Joe had, tucked in his belt, a .45 caliber automatic, once displayed in a museum. It had been more easily procured than the ammunition to fit it, but that problem too had been solved. The others were nervous, obviously repelled by the very conception of what they had planned. Inwardly, Joe was amused. Now that they had got in the clutch, the others were on the verge of chickening out. He knew it wouldn't have taken much for them to cancel the project. 
It wasn't any answer though. If they allowed him to call it off today, they'd talk themselves into it again before the week was through. Besides, already Joe was beginning to feel the comfortable, pleasurable, warm feeling that came to him on occasions like this. He said, "You're sure this guy talks American, eh?" Warren Brett-James said, "Quite sure. He is a student of history." "And he won't think it's funny I talk American to him, eh?" "He'll undoubtedly be intrigued." They pulled up before a large apartment building that overlooked the area once known as Wilmington. Joe was coolly efficient now. He pulled out the automatic, held it down below his knees and threw a shell into the barrel. He eased the hammer down, thumbed on the safety, stuck the weapon back in his belt and beneath the jacketlike garment he wore. He said, "O.K. See you guys later." He left them and entered the building. An elevator—he still wasn't used to their speed in this era—whooshed him to the penthouse duplex occupied by Citizen Howard Temple-Tracy. There were two persons in the reception room but they left on Joe's arrival, without bothering to look at him more than glancingly. He spotted the screen immediately and went over and stood before it. The screen lit and revealed a heavy-set, dour of countenance man seated at a desk. He looked into Joe Prantera's face, scowled and said something. Joe said, "Joseph Salviati-Prantera to interview Citizen Howard Temple-Tracy." The other's shaggy eyebrows rose. "Indeed," he said. "In Amer-English?" Joe nodded. "Enter," the other said. A door had slid open on the other side of the room. Joe walked through it and into what was obviously an office. Citizen Temple-Tracy sat at a desk. There was only one other chair in the room. Joe Prantera ignored it and remained standing. Citizen Temple-Tracy said, "What can I do for you?" Joe looked at him for a long, long moment. Then he reached down to his belt and brought forth the .45 automatic. He moistened his lips. Joe said softly, "You know what this here is?" Temple-Tracy stared at the weapon. "It's a handgun, circa, I would say, about 1925 Old Calendar. What in the world are you doing with it?" Joe said, very slowly, "Chief, in the line you're in these days you needa heavy around with wunna these. Otherwise, Chief, you're gunna wind up in some gutter with a lotta holes in you. What I'm doin', I'm askin' for a job. You need a good man knows how to handle wunna these, Chief." Citizen Howard Temple-Tracy eyed him appraisingly. "Perhaps," he said, "you are right at that. In the near future, I may well need an assistant knowledgeable in the field of violence. Tell me more about yourself. You surprise me considerably." "Sure, Chief. It's kinda a long story, though. First off, I better tell you you got some bad enemies, Chief. Two guys special, named Brett-James and Doc Reston-Farrell. I think one of the first jobs I'm gunna hafta do for you, Chief, is to give it to those two." THE END Transcriber's Note: This etext was produced from Analog December 1960. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
D. Big Louis is Joe Prantera's boss.
What additional therapy was discussed for Mr. Sanders due to skin hardening at injection sites in 2017? Choose the correct answer from the following options: A. Slower infusion time B. Change in medication C. Oral steroids D. Antihistamines E. Antibiotics
### Patient Report 0 **Dear colleague, ** We are writing to provide an update on the examination results of our patient Mrs. Hilary Sanders, born on 08/24/1976, who presented to our outpatient clinic on 10/09/2016. **Diagnoses:** - Common Variable Immunodeficiency Syndrome (CVID) - Complete IgG deficiency - Complete IgM deficiency - Complete IgA deficiency - Recurrent respiratory infections - Idiopathic thombocytopenic purpura - Arterial hypertension - Initiation of subcutaneous immunoglobulin substitution therapy with human immunoglobulin **Medical History:** Mrs. Sanders presented with suspected previously undiagnosed immunodeficiency. There were no reports of frequent infections during childhood and adolescence. No increased herpes infections. No history of pneumonia, meningitis, or other serious infections. **Current Presentation:** Mrs. Sanders has experienced recurrent respiratory infections (bronchitis, pharyngitis) for about 3 years. **Physical Examination:** She reported joint pain in the left knee and numbness below the shoulder blade. A tendency to bruise easily. No mucosal lesions, recurrent axillary lymph node swelling. No recurrent fevers. No B-symptoms. No resting dyspnea, no subjective heart rhythm disturbances, no syncope, no peripheral edema, or other signs of cardiopulmonary decompensation. **Immunological Diagnostics:** - Immunoglobulins including subclasses: IgA, IgG, IgM, and all IgG subclasses were reduced. - Numerically unremarkable monocytes and granulocytes, lymphocytopenia with reduced B- and NK-cells, normal CD4/CD8 ratio. - B-lymphocyte subpopulation with numerically reduced B-cells. - Monocytic HLA-DR expression (immune competence marker) within the normal range. - No evidence of acute or chronic T-cell activation. - IL-6, LBP (Lipopolysaccharide-Binding Protein), and IL-8 post-erylysis were unremarkable, elevated s-IL-2. - Monocytic TNF-alpha secretion after 4h LPS stimulation was unremarkable. - T-cell function after 24h polyvalent ConA stimulation: TNF-alpha, IFN-gamma, IL-2, IL-4 unremarkable **Assessment**: In the immunological diagnostics, as in previous outpatient findings, a reduction in all major immunoglobulin classes and subclasses was observed. Cellular immune status revealed lymphocytopenia with reduced B- and natural killer-cells. Further cellular immune status, including the complement system and soluble mediators, showed no significant abnormalities except for an elevated soluble IL-2 receptor. Given the unremarkable monocytic TNF-alpha secretion after LPS stimulation, a significant Toll-like Receptor 4 defect is unlikely. An antibody response to Tetanus Toxoid was demonstrated in a vaccine titer test. Protective pneumococcal-specific antibodies could not be detected. There were no abnormalities in autoimmune diagnostics. Immunofixation showed no evidence of monoclonal gammopathy. Hypogammaglobulinemia due to enteral or renal protein loss is unlikely in the presence of normal albumin. Overall, the picture is consistent with Common Variable Immunodeficiency. Formally, CVID is defined by a reduction in the major immunoglobulin class IgG, with accompanying reduction in IgA and/or IgM, in the absence of normal or impaired vaccine response. Due to very low immunoglobulin levels and planned travel, determination of vaccine response was currently omitted in the absence of therapeutic consequence. 
After stable substitution, specific vaccine antibody levels can be determined before or after vaccination, with the assumption that stable antibody concentrations exist due to continuous immunoglobulin substitution. According to B-cell differentiation, it corresponds to Type Ib according to the Freiburg Classification and Type B+smB-CD21lo according to the Euro Classification. The classification is clinically relevant, as Type Ia is associated with increased immunocytopenias (especially ITP and AIH) and splenomegaly. In CVID with a high proportion (\>10%) of CD-21 low B-cells, increased granulomatous diseases and splenomegaly have also been observed. The indication for immunoglobulin substitution therapy exists because of recurrent infections. The form of substitution therapy (intravenous. vs. subcutaneous) is primarily based on patient preferences, but also on medical conditions (concomitant diseases such as thrombocytopenia, convenience, insurance, etc.). **Current Recommendations:** We propose to initiate immunoglobulin substitution therapy with Hizentra 20% (subcutaneous) at a dose of 200 ml once a week on Tuesdays. Further information and training on subcutaneous immunoglobulin substitution therapy will be provided by a home care nursing service. Mrs. Sanders will remain under regular medical supervision with close monitoring of clinical symptoms, laboratory parameters, and the effectiveness of immunoglobulin substitution therapy. Any unexpected side effects or changes in her condition should be reported immediately. **Lab results:** **Parameter** **Results** **Reference Range** --------------------------------------- --------------- --------------------- Sodium 141 mEq/L 132-146 mEq/L Potassium 4.2 mEq/L 3.4-4.5 mEq/L Calcium 2.41 mg/dL 2.15-2.50 mg/dL Inorganic Phosphate 1.00 mg/dL 0.87-1.45 mg/dL Selenium 0.79 µmol/L 0.60-1.50 µmol/L Zinc 10.1 µmol/L 9.0-22.0 µmol/L Creatinine 0.75 mg/dL 0.50-0.90 mg/dL Estimated GFR (eGFR CKD-EPI) \>90 mL/min \>90 mL/min Total Bilirubin 0.37 mg/dL \< 1.20 mg/dL Albumin 4.55 g/dL 3.50-5.20 g/dL Total Protein 6.3 g/dL 6.4-8.3 g/dL Albumin Fraction 71.8% 55.8-66.1% A1-Globulin 5.1% 2.9-4.9% A2-Globulin in Serum 10.7% 7.1-11.8% ß-Globulin in Serum 9.2% 8.4-13.1% Gamma-Globulin in Serum 3.2% 11.1-18.8% Immunoglobulin G 514 mg/dL 700-1600 mg/dL Immunoglobulin A 14 mg/dL 70-400 mg/dL Immunoglobulin M 19 mg/dL 40-230 mg/dL Immunoglobulin E 90 kU/L 0.0-100.0 kU/L IgG 1 299.5 mg/dL 280-800 mg/dL IgG 2 162.7 mg/dL 115-570 mg/dL IgG 3 49.1 mg/dL 24-125 mg/dL IgG 4 4.0 mg/dL 5.2-125 mg/dL Serum Immunofixation CRP 4.8 mg/L \< 5.0 mg/L C3 Complement 980 mg/L 900-1800 mg/L C4 Complement 120 mg/L 100-400 mg/L ß-2-Microglobulin 3.6 mg/L 0.8-2.2 mg/L HBs Antigen Negative HBc Antibody Negative HBs Antibody Negative Ferritin 56 µg/L 13-140 µg/L ALT (GPT) 33 U/L \< 31 U/L AST (GOT) 29 U/L \< 35 U/L Alkaline Phosphatase 84 U/L 35-105 U/L Creatine Kinase 90 U/L \< 167 U/L CK-MB 8.3 U/L \< 24.0 U/L Gamma-GT 40 U/L 5-36 U/L LDH 204 U/L 135-214 U/L Lipase 50 U/L 13-60 U/L Cortisol 306.6 nmol/L 64.0-327.0 nmol/L 25-OH-Vitamin D3 65.3 nmol/L 50.0-150.0 nmol/L 1.25-OH-Vitamin D3 134 pmol/L 18.0-155.0 pmol/L TSH 1.42 mU/L 0.27-4.20 mU/L Vitamin B12 770 pg/mL 191-663 pg/mL Folic Acid 14.6 ng/mL 4.6-18.7 ng/mL Hemoglobin 13.9 g/dL 12.0-15.6 g/dL Hematocrit 41.0% 35.5-45.5% Erythrocytes 5.2 M/uL 3.9-5.2 M/uL Leukocytes 4.13 K/uL 3.90-10.50 K/uL Platelets 174 K/uL 150-370 K/uL MCV 80.0 fL 80.0-99.0 fL MCH 26.7 pg 27.0-33.5 pg MCHC 33.6 g/dL 31.5-36.0 g/dL RDW-CV 13.7% 11.5-15.0% 
Absolute Neutrophils 2.87 K/uL 1.50-7.70 K/uL Absolute Immature Granulocytes 0.010 K/uL \< 0.050 K/uL Absolute Lymphocytes 0.71 K/uL 1.10-4.50 K/uL Absolute Monocytes 0.42 K/uL 0.10-0.90 K/uL Absolute Eosinophils 0.09 K/uL 0.02-0.50 K/uL Absolute Basophils 0.03 K/uL 0.00-0.20 K/uL HbA1c 4.9% \< 6.0% HbA1c (IFCC) 30.1 mmol/mol \< 42.0 HBV Serology Result Negative HIV1/2 Antibodies, P24 Antigen Negative Hepatitis C Virus Antibodies in Serum Negative **Dear colleague,** We report the examination results of Mrs. Hilary Sanders, born on 08/24/1976 who presented at our outpatient clinic on 03/04/2017. **Diagnoses:** - Common Variable Immunodeficiency Syndrome (CVID) - Complete IgG deficiency - Complete IgM deficiency - Complete IgA deficiency - Recurrent respiratory infections - Idiopathic thombocytopenic purpura - Arterial hypertension - Initiation of subcutaneous immunoglobulin substitution therapy human immunoglobulin **Immunological Diagnostics:** - Immunoglobulins including subclasses: IgA, IgG, IgM, and all IgG-Subclasses were reduced. - Numerically unremarkable monocytes and granulocytes, lymphocytopenia with reduced B- and natural killer-cells, normal CD4/CD8 ratio. - B-lymphocyte subpopulation with numerically reduced B cells. - Monocytic HLA-DR expression within the normal range. - No evidence of acute or chronic T-cell activation. - IL-6, LBP (Lipopolysaccharide-Binding Protein), and IL-8 post-erylysis were unremarkable, elevated s-IL-2. - Monocytic TNF-alpha secretion after 4h LPS stimulation was unremarkable. **Assessment**: In the immunological diagnostics, as in previous outpatient findings, a reduction in all major immunoglobulin classes and subclasses was observed. Cellular immune status revealed lymphocytopenia with reduced B- and natural killer-cells. The further cellular immune status, including the complement system and soluble mediators, showed no significant abnormalities except for an elevated soluble IL-2 receptor. **Current Presentation:** Mrs. Sanders was again provided with detailed information about her condition and the planned course of action. We scheduled an appointment to initiate regular subcutaneous immunoglobulin therapy. **Medical History:** Mrs. Sanders received her first dose of Hizentra 20% subcutaneously as immunoglobulin substitution therapy for CVID. The administration was well-tolerated initially, with no evidence of significant local or systemic side effects. Mrs. Sanders was once again informed about possible risks (especially hypersensitivity reactions) and advised to contact us immediately in case of questions, uncertainties, or any abnormalities. The dosing for the first four weeks was 3x20mL Hizentra 20% subcutaneously, and from the fifth week onward, it was changed to either 1x40mL or 2x20mL Hizentra 20% subcutaneously per week. In the past days, Mrs. Sanders has been experiencing a cold: runny nose, cough (green-yellow), difficulty clearing mucus, slight fever, sinus inflammation, sore throat, difficulty speaking, and swallowing problems. There was no improvement. **Physical Examination:** Reddened throat, no exudates, non-swollen cervical lymph nodes, lung examination showed bronchitis-like breathing sounds, no rales. **Therapy and Progression**: Today\'s CRP is not elevated. IgGs are still below normal. We recommended increasing immunoglobulin substitution during the infection. The patient had difficulty finding a suitable injection site on her abdomen. 
However, she reported that the secretions were gradually becoming lighter, so she decided to wait with the antibiotic and only use it if there was no improvement. The patient has been receiving 3x20mL Hizentra 20% per week since her last visit. She complained of developing skin hardening at the injection sites, so a slower infusion time was discussed. She has been experiencing a strong cough for several weeks without fever. No rales or signs of pleuritis were detected on auscultation. No abnormalities were observed on the chest X-ray. Laboratory results now show normal IgG levels, so the dose was reduced to 2x20mL per week. A CT scan of the thorax and abdominal ultrasound were requested. **Chest X-ray in two planes from 03/04/2017:** [Findings/Assessment:]{.underline} No previous images are available for comparison. Upper mediastinum and heart appear normal, with no central congestion. No pneumothorax, effusions, confluent infiltrates, or significant focal lesions. **Abdominal ultrasound on 03/04/2017:** Hepatosplenomegaly and retroperitoneal lymphadenopathy up to 26mm. **CT Chest/Abdomen/ from 03/04/2017:** [Methodology]{.underline}: Digital overview radiographs. After intravenous injection of contrast agent a 16-row CT scan of the thorax and entire abdomen was performed in the venous contrast phase, with primary data set reconstruction at a thickness of 1.25 mm. Multiplanar reconstructions were created. [Findings]{.underline}: A conventional radiographic pre-image from 11/18/2014 is available for comparison. [Thorax]{.underline}: Normal lung parenchyma with normal vascular markings. Small, sometimes hazy, sometimes nodular densities measuring up to 4mm in both lower lobes and the left upper lobe. Small pleura-adjacent density in the right lower lobe. No evidence of confluent infiltrates. No pleural effusion or pneumothorax. Normal heart size and configuration. Normal diameter of the thoracic aorta and pulmonary trunk. Increased number and enlarged retroclavicular lymph nodes on the right and left, axillary on both sides measuring up to 30mm in diameter. Trachea and esophagus displayed normally. No hiatus hernia. Thyroid and neck soft tissues were unremarkable, as far as depicted. Normal thoracic soft tissue mantle. No soft tissue emphysema. [Abdomen]{.underline}: Hepatomegaly with morphologically normal liver parenchyma. No portal vein thrombosis. Gallbladder is unremarkable with no calculi. Intrahepatic and extrahepatic bile ducts are not dilated. Pancreas is normally lobulated and structured, with no dilation of the pancreatic duct. Splenomegaly. Accessory spleen measuring approximately 20 mm in diameter. Splenic parenchyma is homogeneously contrasted in the venous phase. Kidneys are orthotopically positioned, normal size with no side differences, and contrasted equally on both sides. Two regularly configured hypodense lesions in the left kidney, suggestive of uncomplicated renal cysts. No dilation of the urinary tract, and no evidence of stones. Adrenal glands are not visualized. Increased and enlarged mesenteric, pararaortic, parailiacal, and inguinal lymph nodes up to 30 mm in size. Gastrointestinal tract is displayed normally, as far as assessable. Normal representation of major abdominal vessels. No free intraperitoneal fluid or air. [Osseous structures:]{.underline} No evidence of suspicious osseous destruction. Normal soft tissue mantle. 
[Assessment:]{.underline} Intrapulmonary multifocal, sometimes hazy, sometimes nodular densities, differential diagnosis includes atypical pneumonia. Thoracoabdominal lymphadenopathy. Hepatosplenomegaly without suspicious lesions. **Current Recommendations:** - Outpatient follow-up for discussion of findings - Continue regular subcutaneous immunoglobulin administration with current regimen of Hizentra 20% 2x20mL/week - Lung function test - Gastroscopy - In case of acute infection: increase immunoglobulin administration - Abdominal ultrasound: annually - H. pylori testing, e.g., breath test or H. pylori antigen in stool: annually - Seasonal influenza vaccination: annually **Lab results upon discharge:** **Parameter** **Results** **Reference Range** ---------------------- ------------- --------------------- Total Protein 6.3 g/dL 6.4-8.3 g/dL Albumin Fraction 71.8% 55.8-66.1% A1-Globulin 5.1% 2.9-4.9% Gamma-Globulin 3.2% 11.1-18.8% Immunoglobulin G 188 mg/dL 700-1600 mg/dL Immunoglobulin A 11 mg/dL 70-400 mg/dL Immunoglobulin M 12 mg/dL 40-230 mg/dL IgG Subclass 1 113 mg/dL 280-800 mg/dL IgG Subclass 2 49.1 mg/dL 115-570 mg/dL IgG Subclass 4 \<0.0 mg/dL 5.2-125 mg/dL aPCP-IgG 7.32 mg/dL 10.00-191.20 mg/dL aPCP-IgG2 2.74 mg/dL 4.70-89.40 mg/dL ß-2-Microglobulin 3.6 mg/L 0.8-2.2 mg/L LDH 224 U/L 135-214 U/L Vitamin B12 708 pg/mL 191-663 pg/mL Erythrocytes 5.3 M/uL 3.9-5.2 M/uL Platelets 129 K/uL 150-370 K/uL MCV 78.0 fL 80.0-99.0 fL MCH 25.1 pg 27.0-33.5 pg Absolute Lymphocytes 0.91 K/uL 1.10-4.50 K/uL ### Patient Report 1 **Dear colleague, ** We are reporting on Mrs. Hilary Sanders, born on 08/24/1976, who presented to our Immunodeficiency Clinic on 10/06/2017. **Diagnoses:** - Common Variable Immunodeficiency Syndrome (CVID) - Complete IgG deficiency - Complete IgM deficiency - Complete IgA deficiency - Leukopenia and lymphopenia - Recurrent respiratory infections - Idiopathic thrombocytopenic purpura - Hepatosplenomegaly - Thoracoabdominal, inguinal, and axillary lymphadenopathy - Arterial hypertension - Initiation of subcutaneous immunoglobulin substitution therapy with human immunoglobulin **Medical History:** Mrs. Sanders first presented herself to our clinic with suspected undiagnosed immunodeficiency. Regular subcutaneous immunoglobulin therapy with Hizentra 20% (2x20mL/week) has been well-tolerated. Initially, there were frequent upper respiratory tract infections with sore throat and cough. In the absence of fever, a one-time course of Cotrim was prescribed for 7 days due to sinusitis. We discussed Mrs. Sanders' medical history in detail, including the recent CT findings. She has been informed about the necessity of vigilance in case of unclear and especially persistent lymph node swellings. Regarding the inguinal and axillary lymph nodes measuring up to 30mm in diameter found on CT, we recommend an observational approach with regular sonographic monitoring. There have been no significant changes in laboratory parameters, with good IgG levels during ongoing substitution therapy and known moderate leukopenia and lymphopenia. During the next appointment, an additional lung function test, including diffusion measurement, will be conducted. **Current Recommendations:** - Outpatient follow-up, including lung function test - Continue regular subcutaneous immunoglobulin administration with current regimen of Hizentra 20% (2x20mL/week). - Current gastroscopy. - In case of acute infection: increase immunoglobulin administration. 
- Administer targeted, sufficiently long, and high-dose antibiotic therapy if bacterial infections require treatment. - Ideally, obtain material for microbiological diagnostics. - In case of increasing diarrhea, consider outpatient stool examinations, including Giardia lamblia and Cryptosporidium. - Abdominal ultrasound: annually. - Lung function test, including diffusion measurement: annually. - H. pylori testing, e.g., breath test or H. pylori antigen in stool: annually. - Gastroscopy: approximately every 2-3 years, depending on previous findings or H. pylori testing - Chest X-ray or CT thorax: if clinical symptoms or lung function abnormalities are observed. - Seasonal influenza vaccination: annually. **Lab results upon Discharge:** **Parameter** **Results** **Reference Range** -------------------------------- ------------- --------------------- Sodium 141 mEq/L 132-146 mEq/L Potassium 4.1 mEq/L 3.4-4.5 mEq/L Creatinine (Jaffé) 0.82 mg/dL 0.50-0.90 mg/dL Estimated GFR (eGFR CKD-EPI) \>90 \- Total Bilirubin 0.21 mg/dL \< 1.20 mg/dL Albumin 4.09 g/dL 3.5-5.2 g/dL Immunoglobulin G 1025 mg/dL 700-1600 mg/dL Immunoglobulin A 16 mg/dL 70-400 mg/dL Immunoglobulin M 28 mg/dL 40-230 mg/dL Free Lambda Light Chains 5.86 5.70-26.30 Free Kappa Light Chains 6.05 3.30-19.40 Kappa/Lambda Ratio 1.03 0.26-1.65 IgG Subclass 1 580.9 mg/dL 280-800 mg/dL IgG Subclass 2 340.7 mg/dL 115-570 mg/dL IgG Subclass 3 50.9 mg/dL 24-125 mg/dL IgG Subclass 4 5.7 mg/dL 5.2-125 mg/dL CRP 7.3 mg/L \< 5.0 mg/L Haptoglobin 108 mg/dL 30-200 mg/dL Ferritin 24 µg/L 13-140 µg/L ALT 24 U/L \< 31 U/L AST 37 U/L \< 35 U/L Gamma-GT 27 U/L 5-36 U/L Lactate Dehydrogenase 244 U/L 135-214 U/L 25-OH-Vitamin D3 91.7 nmol/L 50.0-150.0 nmol/L Hemoglobin 13.1 g/dL 12.0-15.6 g/dL Hematocrit 40.0% 35.5-45.5% Red Blood Cells 5.5 M/uL 3.9-5.2 M/uL White Blood Cells 2.41 K/uL 3.90-10.50 K/uL Platelets 142 K/uL 150-370 K/uL MCV 73.0 fL 80.0-99.0 fL MCH 23.9 pg 27.0-33.5 pg MCHC 32.7 g/dL 31.5-36.0 g/dL MPV 10.7 fL 7.0-12.0 fL RDW-CV 14.8% 11.5-15.0% Absolute Neutrophils 1.27 K/uL 1.50-7.70 K/uL Absolute Immature Granulocytes 0.000 K/uL \< 0.050 K/uL Absolute Lymphocytes 0.67 K/uL 1.10-4.50 K/uL Absolute Monocytes 0.34 K/uL 0.10-0.90 K/uL Absolute Eosinophils 0.09 K/uL 0.02-0.50 K/uL Absolute Basophils 0.04 K/uL 0.00-0.20 K/uL Free Hemoglobin 5.00 mg/dL \< 20.00 mg/dL **Abdominal Ultrasound on 10/06/2017:** [Liver]{.underline}: Measures 19 cm in the MCL, homogeneous parenchyma, no focal lesions. [Gallbladder/Biliary Tract:]{.underline} No evidence of calculi, no signs of inflammation, no congestion. [Spleen]{.underline}: Measures 14 cm in diameter, homogeneous. Accessory spleen measures 16 mm at the hilus. [Pancreas]{.underline}: Morphologically unremarkable, as far as visible due to intestinal gas overlay, no evidence of space-occupying processes. Retroperitoneum: No signs of aneurysms. Enlarged retroperitoneal and iliac lymph nodes, measuring up to approximately 2.5 cm in diameter. [Kidneys]{.underline}: Both kidneys are of normal size (right 4.3 x 11.8 cm, left 4.6 cm x 11.9 cm). No congestion, no evidence of calculi (stones), no evidence of space-occupying processes. [Bladder]{.underline}: Smoothly defined and normally configured. Minimally filled. [Uterus]{.underline}: Size within the normal range, homogeneous. No ascites. [Assessment:]{.underline} Evidence of enlarged lymph nodes up to 2.5 cm retroperitoneal and iliac. Compared to previous findings, a slight decrease in splenomegaly. 
### Patient Report 2 **Dear colleague, ** We are reporting on the examination results of our patient, Mrs. Hilary Sanders, born on 08/24/1976, who presented herself in our Immunodeficiency Clinic on 02/10/2018. **Diagnoses:** - Common Variable Immunodeficiency Syndrome (CVID) - Complete IgG deficiency - Complete IgM deficiency - Complete IgA deficiency - Recurrent respiratory infections - Idiopathic thrombocytopenic purpura - Arterial hypertension - Initiation of subcutaneous immunoglobulin substitution therapy with human immunoglobulin **Medical History:** For a detailed medical history, please refer to our previous medical records. **Therapy and Progression:** Ongoing diarrhea in the morning, often recurring in the afternoon. No melena, no fresh blood. Resolving respiratory infection, positive influenza. Currently, IgG levels remain within the target range. An increased need for immunoglobulins is expected, especially in the third trimester of pregnancy. Therefore, we recommend close monitoring with us during pregnancy. Ferritin levels have further declined, indicating the need for iron substitution. Anamnestically, there is an intolerance to oral iron preparations. **Recommendations:** - Outpatient follow-up - Early follow-up in case of infections or persistent diarrhea - Continue regular subcutaneous immunoglobulin therapy, currently with Hizentra 20% 2x20mL/week - In case of increasing diarrhea, conduct outpatient stool examinations, including testing for Giardia lamblia and Cryptosporidium - Pulmonary function tests including diffusion measurement: annually - Helicobacter pylori (HP) testing: e.g., breath test or HP antigen in stool: annually - Gastroscopy: approximately every 2-3 years, depending on previous findings and HP testing - Chest X-ray or chest CT: in case of abnormal clinical presentation or pulmonary function - Annual seasonal influenza vaccination ### Patient Report 3 **Dear colleague, ** We are writing to provide an update on Mrs. Hilary Sanders, born on 08/24/1976, who presented to our outpatient Immunodeficiency Clinic on 04/12/2018. **Diagnoses:** - Common Variable Immunodeficiency Syndrome (CVID) - Complete IgG deficiency - Complete IgM deficiency - Complete IgA deficiency - Recurrent respiratory infections - Idiopathic thrombocytopenic purpura - Arterial hypertension - Initiation of subcutaneous immunoglobulin substitution therapy with human immunoglobulin - Suspected CVID Enteropathy **Medical History:** For a detailed medical history, please refer to our previous medical records. **Therapy and Progression:** Respiratory infection with symptoms for 3-4 weeks. No antibiotics. No significant infections since then. Hizentra 3x20 mL with good tolerance. IgG levels remain within the target range; therefore, we recommend continuing the current treatment unchanged. Since the last visit, mild upper respiratory tract infections. No fever (except for one episode of sinusitis), no antibiotics. SCIG treatment unchanged with 3x20mL/week of Hizentra®. Mrs. Sanders continues to experience watery diarrhea about 5-7 times daily. No blood in stools, no pain, no vomiting, no nausea. There has been no clear association with specific foods observed. Current weight: 69kg. We discussed further diagnostic steps. Initially, outpatient endoscopic diagnostics should be performed. 
**Current Recommendations:** - Outpatient follow-up in three months - Continue SCIG treatment as is - External upper gastrointestinal endoscopy and colonoscopy (please return with findings) - In case of increasing diarrhea, conduct outpatient stool examinations, including testing for Giardia lamblia and Cryptosporidium - Abdominal ultrasound: annually - Pulmonary function tests including diffusion measurement: annually - Helicobacter pylori testing: e.g., breath test or Helicobacter pylori antigen in stool: annually - Gastroscopy: approximately every 2-3 years, depending on previous findings and Helicobacter pylori testing - Chest X-ray or chest CT: in case of abnormal clinical presentation or pulmonary function - Annual seasonal influenza vaccination ### Patient Report 4 **Dear colleague, ** We are writing to provide an update on Mrs. Hilary Sanders, born on 08/24/1976, who presented to our outpatient Immunodeficiency Clinic on 02/18/2019. **Diagnoses:** - Common Variable Immunodeficiency Syndrome (CVID) - Complete IgG deficiency - Complete IgM deficiency - Complete IgA deficiency - Recurrent respiratory infections - Idiopathic thrombocytopenic purpura - Arterial hypertension - Initiation of subcutaneous immunoglobulin substitution therapy with human immunoglobulin - Suspected CVID Enteropathy **Medical History:** For a detailed medical history, please refer to our previous medical records. **Therapy and Progression:** Respiratory infections with symptoms for 7 weeks. No antibiotics. No significant infections since then. Hizentra 3x20 mL with good tolerance. Continued diarrhea, approximately 6 times a day, without weight loss. IgG levels remain within the target range; therefore, we recommend continuing the current treatment unchanged. We discussed further diagnostic steps. Initially, outpatient endoscopic diagnostics should be performed. **Current Recommendations:** - Outpatient follow-up in three months - Continue treatment as is - External upper gastrointestinal endoscopy and colonoscopy (please return with findings) - In case of increasing diarrhea, conduct outpatient stool examinations, including testing for Giardia lamblia and Cryptosporidium - Abdominal ultrasound: annually - Pulmonary function tests including diffusion measurement: annually - Helicobacter pylori (HP) testing: e.g., breath test or HP antigen in stool: annually - Gastroscopy: approximately every 2-3 years, depending on previous findings and HP testing - Chest X-ray or chest CT: in case of abnormal clinical presentation or pulmonary function - Annual seasonal influenza vaccination ### Patient Report 5 **Dear colleague, ** We are writing to provide a summary of the clinical course of Mrs. Hilary Sanders, born on 08/24/1976, who presented at our outpatient Immunodeficiency Clinic. **Diagnoses:** - Common Variable Immunodeficiency Syndrome (CVID) - Complete IgG deficiency - Complete IgM deficiency - Complete IgA deficiency - Recurrent respiratory infections - Idiopathic thrombocytopenic purpura - Arterial hypertension - Initiation of subcutaneous immunoglobulin substitution therapy with human immunoglobulin - Suspected CVID Enteropathy - Iron-deficiency anemia **Medical History:** For a detailed medical history, please refer to our previous medical records. **Therapy and Progression:** Overall stable condition. No longer experiencing cough. Persistent fatigue. Upcoming appointment with the Gastroenterology department next week. There is again an indication for iron substitution. 
**Update on 11/15/2019: Laboratory results from 11/15/2019:** Transaminase elevation, Protein 18, markedly elevated BNP. However, IgA is at 0.5 (otherwise not detectable), IgG subclasses within normal range. Findings do not align. Patient informed by phone, returning for further evaluation today; also screening for Hepatitis A, B, C, and E, EBV, CMV, TSH, coagulation. No shortness of breath, no edema, no abdominal enlargement, stable weight at 69 kg. In case of worsening symptoms, shortness of breath, or fever, immediate referral to the emergency department recommended. **02/12/2020:** The patient is doing reasonably well. She has had a mild cold for about 2 weeks, no fever, but nasal congestion and yellowish-green sputum. No other infections. No antibiotics prescribed. She has adapted to her gastrointestinal issues. An appointment with the Gastroenterology department. She is currently working from home. Medication: no new medications, only Cuvitru 20mL 3x weekly. Weight remains stable at 67 kg. The last lung function test was in the summer of this year and was within normal limits. Imaging has not been performed recently. Gastroscopy and colonoscopy have not been conducted for some time. **04/14/2020:** Referral to Gastroenterology at is recommended for persistent abdominal symptoms. **10/24/2020:** The patient has mostly avoided social contacts due to the pandemic. She continues to experience digestive problems (food intolerances, diarrhea, flatulence). She has less stamina. Few infections in the past year, at most a minor cold. No significant infections. Hizentra injections remain unchanged at 20 mL 3 times a week. **03/22/2021:** Constant colds since December 2020. One-time antibiotic treatment in October 2019. Subcutaneous Immunoglobulin therapy remains unchanged at 20 mL 3 times weekly. **09/19/2021:** She feels disoriented and very tired, more so than usual. Difficulty maintaining a steady gaze. No steroid therapy was administered. CT showed enlarged lymph nodes. Diarrhea, especially in the morning, 3-4 times a day, additional bowel movements with meals, sometimes watery. No fever, no infections. Hizentra injections continued unchanged. **Summary**: IgG levels are currently within the target range, so we recommend continuing immunoglobulin substitution therapy without changes. The antibody response (SARS-CoV-2 (S-Ag) IgG ELISA) to the Covid-19 vaccination is, as expected, negative. However, there is a positive detection of SARS-CoV-2 (N-Ag) IgG ELISA, as expected in the case of viral contact (not vaccination). We consider this to be an unspecific reaction and recommend further monitoring at the next follow-up appointment. With a platelet count currently at 55 K/uL, we recommend a short-term blood count check with us or your primary care physician. Due to the immunodeficiency, a lack of antibody response to vaccination was expected. In the medium term, passive protection through immunoglobulin substitution therapy will play a role. This is contingent on a significant portion of plasma donors having antibodies against SARS-CoV2. There is a multi-month delay from the time of donation to the release of the preparations, so we anticipate that meaningful protection through immunoglobulin products will not be expected. An exact prognosis in this regard is not possible. 
**Current Recommendations:** - Outpatient follow-up in three months - Consultation with Gastroenterology - Continue SCIG treatment as is - External upper gastrointestinal endoscopy and colonoscopy (please return with findings) - In case of increasing diarrhea, conduct outpatient stool examinations, including testing for Giardia lamblia and Cryptosporidium - Abdominal ultrasound: annually - Pulmonary function tests including diffusion measurement: annually - Helicobacter pylori (HP) testing: e.g., breath test or HP antigen in stool: annually - Gastroscopy: approximately every 2-3 years, depending on previous findings and HP testing - Chest X-ray or chest CT: in case of abnormal clinical presentation or pulmonary function - Annual seasonal influenza vaccination ### Patient Report 6 **Dear colleague, ** We are providing you with an update regarding our patient Mrs. Hilary Sanders, born on 08/24/1976. She was under our inpatient care from 03/29/2023 to 04/05/2023. **Diagnoses:** - Suspected CVID-Associated enteropathy - Known hepatosplenomegaly with a borderline enlarged portal vein, no significant portocaval shunts. Multiple liver lesions, possibly hemangiomas; further evaluation if not already done. - Known retroperitoneal and iliac lymphadenopathy, likely related to the underlying condition. - Known changes in the lower lung bases, likely associated with the underlying condition, e.g., ILD. Refer to previous examinations. - Capsule endoscopy: Incomplete capsule enteroscopy with no evidence of inflammatory changes. Some hyperemia and blurry vascular pattern observed in the visible colon. - CVID-Associated Hepatopathy in the Form of Nodular Regenerative Hyperplasia **Other Diagnoses:** Common Variable Immunodeficiency Syndrome (CVID) with: - Complete IgG deficiency - Complete IgM deficiency - Complete IgA deficiency - Leukopenia and lymphopenia - Initiation of subcutaneous immunoglobulin substitution therapy with Hizentra 20% - Infectious manifestations: Frequent respiratory tract infections - Non-Infectious manifestations: - ITP (Immune Thrombocytopenia) - Hepatosplenomegaly - Lymphadenopathy in supraclavicular, infraclavicular, thoracoabdominal, inguinal, and axillary regions - Suspected Granulomatous-Lymphocytic Interstitial Lung Disease in CVID - Iron-deficiency anemia **Physical Examination:** Patient in normal general condition and nutritional status (175 cm, 65.8 kg). No resting dyspnea. [Neuro (grossly orienting):]{.underline} awake, oriented to time/place/person/situation. No evidence of focal neurological deficit. No meningism. [Head/neck]{.underline}: pharynx non-irritated. Moist, rosy mucous membranes. Tongue coated. [Skin]{.underline}: intact, turgor normal, no icterus, no cyanosis. [Thorax]{.underline}: normal configuration, no spinal tenderness, renal beds non-tender. [Lung]{.underline}: vesicular breath sounds bilaterally, no adventitious sounds, resonant percussion note bilaterally. [Cor]{.underline}: Heart sounds clear and regular, no murmurs suggestive of valvular disease. [Abdomen]{.underline}: regular bowel sounds, soft abdominal wall, no tenderness, no palpable masses, no hepatosplenomegaly. [Extremities]{.underline}: no edema. Feet warm. Dorsalis pedis +/+ and posterior tibial artery +/+. **Current Presentation:** The patient was admitted for further evaluation of suspected CVID-associated enteropathy, as she had been experiencing chronic diarrhea for the past three years. On admission, the patient reported an overall good general and nutritional condition. 
She described her current subjective well-being as good but mentioned having chronic diarrhea for the past three years, with up to 7 bowel movements per day. The stools were watery without any signs of blood. There were no indications of infection, such as fever, chills, dysuria, hematuria, cough, sputum, or dyspnea. She also experienced intermittent left-sided upper abdominal pain, primarily postprandially. She had a good appetite. On the day of admission, an esophagogastroduodenoscopy was performed, which revealed erythematous antral gastritis. Additionally, there was an approximately 1 cm irregular mucosal area at the corpus-antrum junction on the greater curvature side. A magnetic resonance imaging scan showed no evidence of inflamed bowel loops, ruling out chronic inflammatory bowel disease or celiac disease. To further investigate, a capsule endoscopy was performed, with results pending at the time of discharge. Hypovitaminosis B12 and folate deficiency were ruled out. However, iron-deficiency anemia was confirmed, and the patient had already scheduled an outpatient appointment for iron substitution. Serum levels of vitamin B6 and zinc were pending at discharge. Due to a moderate increase in transaminases and evidence of hepatosplenomegaly, we decided, after detailed explanation and with the patient\'s consent, to perform a sonographically guided liver biopsy in addition to the planned endoscopy. The differential diagnosis included CVID-associated hepatopathy. The biopsy was successfully conducted , without any post-interventional bleeding. Histology revealed mild acute hepatitis and nodular regenerative hyperplasia.This finding could be consistent with changes in CVID-associated hepatopathy. Granulomas were not observed. With only slightly elevated liver values, a trial therapy with budesonide was initiated, and clinical (diarrhea?) and laboratory (transaminases?) follow-up will be performed in the outpatient setting. We discharged Mrs. Sanders in a cardiopulmonarily stable condition. [Current Recommendations:]{.underline} - Follow-up in the gastroenterological outpatient clinic **Esophagogastroduodenoscopy (EGD) on 04/01/2023:** Introduction of the gastroscope in a left lateral position. Visualized up to the descending part of the duodenum. Unremarkable upper esophageal sphincter. Normal motility and mucosa in the upper, middle, and distal esophagus. The Z-line is sharply demarcated in the hiatus. The cardia closes sufficiently. The stomach expands normally in all parts under air insufflation. Multiple glandular cysts \< 8 mm in size in the fundus and corpus. Approximately 1 cm irregular mucosal area at the corpus-antrum junction on the greater curvature side. Streaky redness of the mucosa in the antrum. Unremarkable mucosa in the bulb. Unremarkable mucosa in the descending part of the duodenum. Step biopsies performed. [Summary]{.underline}: Erythematous antral gastritis. Approximately 1 cm irregular mucosal area at the corpus-antrum junction on the greater curvature side, suggestive of inflammation. Multiple glandular cysts observed in the fundus and corpus. [Abdominal MRI on 04/02/2023:]{.underline} [Clinical information, questions, and justification for the exam]{.underline}: Chronic diarrhea, suspected CVID-associated enteropathy, differential diagnosis of celiac disease, and inflammatory bowel disease (IBD). Assessment of malignancy. Technique: After oral administration of mannitol solution and injection of 40 mg Buscopan, a 3-Tesla abdominal MRI was performed. 
[Findings]{.underline}: Multiple nodular consolidations and opacities detected in the lower basal lung segments, measuring 7 x 4 mm, for example, in the right lateral lower lobe (Series 18, Image 3). Additionally, streaky-reticular changes observed. Left diaphragmatic elevation. Liver globally enlarged and smooth-bordered with several lesions showing mild to moderately hyperintense signals in T2-weighted images and hypointense signals in T1-weighted images. These lesions demonstrated increased enhancement in the early contrast phases, especially those at the periphery, and more diffuse enhancement in the late phases. For example, a lesion measuring 12 x 11 mm in Segment 2, a lesion measuring 8 mm in Segment 8 and a lesion measuring 21 x 13 mm in Segment 7. The portal vein measures borderline wide, up to 15 mm in diameter. Gallbladder is unremarkable without evidence of stones. Intra- and extrahepatic bile ducts are not dilated. Spleen significantly enlarged, measuring 14 cm in pole-to-pole distance and 7.2 cm in transverse diameter, homogeneous enhancement in native phases and late contrast phase. Large accessory spleen located hilarly. Bilateral adrenal glands appear slender. Pancreas displays typical appearance with no ductal dilatation. Both kidneys are in orthotopic position, with unremarkable cortical cysts on the right side. No signs of urinary obstruction. The urinary bladder is moderately filled. No free fluid. Adequate dilation of small bowel loops. No evidence of significant bowel obstruction. No thickened bowel walls or increased post-contrast signal in the bowel loops. Cystic lesion in the right ovary measuring 17 x 11 mm consistent with a corpus luteum cyst. Multiple enlarged retroperitoneal lymph nodes observed, for example, paracaval node with a short-axis diameter of 14 mm and right iliacoexternal node with a short-axis diameter of 14.5 mm No evidence of enlarged mesenteric or inguinal lymph nodes.
Slower infusion time
How are the Gascons different from the rest of King Richard's cohort? A. They are better trained B. They are treasonous C. They are mercenaries D. They are not as well trained
... After a Few Words ... by Seaton McKettrig Illustrated by Summer [Transcriber's Note: This etext was produced from Analog October 1962. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] This is a science-fiction story. History is a science; the other part is, as all Americans know, the most fictional field we have today. He settled himself comfortably in his seat, and carefully put the helmet on, pulling it down firmly until it was properly seated. For a moment, he could see nothing. Then his hand moved up and, with a flick of the wrist, lifted the visor. Ahead of him, in serried array, with lances erect and pennons flying, was the forward part of the column. Far ahead, he knew, were the Knights Templars, who had taken the advance. Behind the Templars rode the mailed knights of Brittany and Anjou. These were followed by King Guy of Jerusalem and the host of Poitou. He himself, Sir Robert de Bouain, was riding with the Norman and English troops, just behind the men of Poitou. Sir Robert turned slightly in his saddle. To his right, he could see the brilliant red-and-gold banner of the lion-hearted Richard of England— gules, in pale three lions passant guardant or . Behind the standard-bearer, his great war horse moving with a steady, measured pace, his coronet of gold on his steel helm gleaming in the glaring desert sun, the lions of England on his firm-held shield, was the King himself. Further behind, the Knights Hospitallers protected the rear, guarding the column of the hosts of Christendom from harassment by the Bedouins. "By our Lady!" came a voice from his left. "Three days out from Acre, and the accursed Saracens still elude us." Sir Robert de Bouain twisted again in his saddle to look at the knight riding alongside him. Sir Gaeton de l'Arc-Tombé sat tall and straight in his saddle, his visor up, his blue eyes narrowed against the glare of the sun. Sir Robert's lips formed a smile. "They are not far off, Sir Gaeton. They have been following us. As we march parallel to the seacoast, so they have been marching with us in those hills to the east." "Like the jackals they are," said Sir Gaeton. "They assail us from the rear, and they set up traps in our path ahead. Our spies tell us that the Turks lie ahead of us in countless numbers. And yet, they fear to face us in open battle." "Is it fear, or are they merely gathering their forces?" "Both," said Sir Gaeton flatly. "They fear us, else they would not dally to amass so fearsome a force. If, as our informers tell us, there are uncounted Turks to the fore, and if, as we are aware, our rear is being dogged by the Bedouin and the black horsemen of Egypt, it would seem that Saladin has at hand more than enough to overcome us, were they all truly Christian knights." "Give them time. We must wait for their attack, sir knight. It were foolhardy to attempt to seek them in their own hills, and yet they must stop us. They will attack before we reach Jerusalem, fear not." "We of Gascony fear no heathen Musselman," Sir Gaeton growled. "It's this Hellish heat that is driving me mad." He pointed toward the eastern hills. "The sun is yet low, and already the heat is unbearable." Sir Robert heard his own laugh echo hollowly within his helmet. "Perhaps 'twere better to be mad when the assault comes. Madmen fight better than men of cooler blood." He knew that the others were baking inside their heavy armor, although he himself was not too uncomfortable. 
Sir Gaeton looked at him with a smile that held both irony and respect. "In truth, sir knight, it is apparent that you fear neither men nor heat. Nor is your own blood too cool. True, I ride with your Normans and your English and your King Richard of the Lion's Heart, but I am a Gascon, and have sworn no fealty to him. But to side with the Duke of Burgundy against King Richard—" He gave a short, barking laugh. "I fear no man," he went on, "but if I had to fear one, it would be Richard of England." Sir Robert's voice came like a sword: steely, flat, cold, and sharp. "My lord the King spoke in haste. He has reason to be bitter against Philip of France, as do we all. Philip has deserted the field. He has returned to France in haste, leaving the rest of us to fight the Saracen for the Holy Land leaving only the contingent of his vassal the Duke of Burgundy to remain with us." "Richard of England has never been on the best of terms with Philip Augustus," said Sir Gaeton. "No, and with good cause. But he allowed his anger against Philip to color his judgment when he spoke harshly against the Duke of Burgundy. The Duke is no coward, and Richard Plantagenet well knows it. As I said, he spoke in haste." "And you intervened," said Sir Gaeton. "It was my duty." Sir Robert's voice was stubborn. "Could we have permitted a quarrel to develop between the two finest knights and warleaders in Christendom at this crucial point? The desertion of Philip of France has cost us dearly. Could we permit the desertion of Burgundy, too?" "You did what must be done in honor," the Gascon conceded, "but you have not gained the love of Richard by doing so." Sir Robert felt his jaw set firmly. "My king knows I am loyal." Sir Gaeton said nothing more, but there was a look in his eyes that showed that he felt that Richard of England might even doubt the loyalty of Sir Robert de Bouain. Sir Robert rode on in silence, feeling the movement of the horse beneath him. There was a sudden sound to the rear. Like a wash of the tide from the sea came the sound of Saracen war cries and the clash of steel on steel mingled with the sounds of horses in agony and anger. Sir Robert turned his horse to look. The Negro troops of Saladin's Egyptian contingent were thundering down upon the rear! They clashed with the Hospitallers, slamming in like a rain of heavy stones, too close in for the use of bows. There was only the sword against armor, like the sound of a thousand hammers against a thousand anvils. "Stand fast! Stand fast! Hold them off!" It was the voice of King Richard, sounding like a clarion over the din of battle. Sir Robert felt his horse move, as though it were urging him on toward the battle, but his hand held to the reins, keeping the great charger in check. The King had said "Stand fast!" and this was no time to disobey the orders of Richard. The Saracen troops were coming in from the rear, and the Hospitallers were taking the brunt of the charge. They fought like madmen, but they were slowly being forced back. The Master of the Hospitallers rode to the rear, to the King's standard, which hardly moved in the still desert air, now that the column had stopped moving. The voice of the Duke of Burgundy came to Sir Robert's ears. "Stand fast. The King bids you all to stand fast," said the duke, his voice fading as he rode on up the column toward the knights of Poitou and the Knights Templars. 
The Master of the Hospitallers was speaking in a low, urgent voice to the King: "My lord, we are pressed on by the enemy and in danger of eternal infamy. We are losing our horses, one after the other!" "Good Master," said Richard, "it is you who must sustain their attack. No one can be everywhere at once." The Master of the Hospitallers nodded curtly and charged back into the fray. The King turned to Sir Baldwin de Carreo, who sat ahorse nearby, and pointed toward the eastern hills. "They will come from there, hitting us in the flank; we cannot afford to amass a rearward charge. To do so would be to fall directly into the hands of the Saracen." A voice very close to Sir Robert said: "Richard is right. If we go to the aid of the Hospitallers, we will expose the column to a flank attack." It was Sir Gaeton. "My lord the King," Sir Robert heard his voice say, "is right in all but one thing. If we allow the Egyptians to take us from the rear, there will be no need for Saladin and his Turks to come down on our flank. And the Hospitallers cannot hold for long at this rate. A charge at full gallop would break the Egyptian line and give the Hospitallers breathing time. Are you with me?" "Against the orders of the King?" "The King cannot see everything! There are times when a man must use his own judgment! You said you were afraid of no man. Are you with me?" After a moment's hesitation, Sir Gaeton couched his lance. "I'm with you, sir knight! Live or die, I follow! Strike and strike hard!" "Forward then!" Sir Robert heard himself shouting. "Forward for St. George and for England!" "St. George and England!" the Gascon echoed. Two great war horses began to move ponderously forward toward the battle lines, gaining momentum as they went. Moving in unison, the two knights, their horses now at a fast trot, lowered their lances, picking their Saracen targets with care. Larger and larger loomed the Egyptian cavalrymen as the horses changed pace to a thundering gallop. The Egyptians tried to dodge, as they saw, too late, the approach of the Christian knights. Sir Robert felt the shock against himself and his horse as the steel tip of the long ash lance struck the Saracen horseman in the chest. Out of the corner of his eye, he saw that Sir Gaeton, too, had scored. The Saracen, impaled on Sir Robert's lance, shot from the saddle as he died. His lighter armor had hardly impeded the incoming spear-point, and now his body dragged it down as he dropped toward the desert sand. Another Moslem cavalryman was charging in now, swinging his curved saber, taking advantage of Sir Robert's sagging lance. There was nothing else to do but drop the lance and draw his heavy broadsword. His hand grasped it, and it came singing from its scabbard. The Egyptian's curved sword clanged against Sir Robert's helm, setting his head ringing. In return, the knight's broadsword came about in a sweeping arc, and the Egyptian's horse rode on with the rider's headless body. Behind him, Sir Robert heard further cries of "St. George and England!" The Hospitallers, taking heart at the charge, were going in! Behind them came the Count of Champagne, the Earl of Leister, and the Bishop of Beauvais, who carried a great warhammer in order that he might not break Church Law by shedding blood. Sir Robert's own sword rose and fell, cutting and hacking at the enemy. He himself felt a dreamlike detachment, as though he were watching the battle rather than participating in it. 
But he could see that the Moslems were falling back before the Christian onslaught. And then, quite suddenly, there seemed to be no foeman to swing at. Breathing heavily, Sir Robert sheathed his broadsword. Beside him, Sir Gaeton did the same, saying: "It will be a few minutes before they can regroup, sir knight. We may have routed them completely." "Aye. But King Richard will not approve of my breaking ranks and disobeying orders. I may win the battle and lose my head in the end." "This is no time to worry about the future," said the Gascon. "Rest for a moment and relax, that you may be the stronger later. Here—have an Old Kings ." He had a pack of cigarettes in his gauntleted hand, which he profferred to Sir Robert. There were three cigarettes protruding from it, one slightly farther than the others. Sir Robert's hand reached out and took that one. "Thanks. When the going gets rough, I really enjoy an Old Kings ." He put one end of the cigarette in his mouth and lit the other from the lighter in Sir Gaeton's hand. "Yes, sir," said Sir Gaeton, after lighting his own cigarette, " Old Kings are the greatest. They give a man real, deep-down smoking pleasure." "There's no doubt about it, Old Kings are a man's cigarette." Sir Robert could feel the soothing smoke in his lungs as he inhaled deeply. "That's great. When I want a cigarette, I don't want just any cigarette." "Nor I," agreed the Gascon. " Old Kings is the only real cigarette when you're doing a real man's work." "That's for sure." Sir Robert watched a smoke ring expand in the air. There was a sudden clash of arms off to their left. Sir Robert dropped his cigarette to the ground. "The trouble is that doing a real he-man's work doesn't always allow you to enjoy the fine, rich tobaccos of Old Kings right down to the very end." "No, but you can always light another later," said the Gascon knight. King Richard, on seeing his army moving suddenly toward the harassed rear, had realized the danger and had charged through the Hospitallers to get into the thick of the fray. Now the Turks were charging down from the hills, hitting—not the flank as he had expected, but the rear! Saladin had expected him to hold fast! Sir Robert and Sir Gaeton spurred their chargers toward the flapping banner of England. The fierce warrior-king of England, his mighty sword in hand, was cutting down Turks as though they were grain-stalks, but still the Saracen horde pressed on. More and more of the terrible Turks came boiling down out of the hills, their glittering scimitars swinging. Sir Robert lost all track of time. There was nothing to do but keep his own great broadsword moving, swinging like some gigantic metronome as he hacked down the Moslem foes. And then, suddenly, he found himself surrounded by the Saracens! He was isolated and alone, cut off from the rest of the Christian forces! He glanced quickly around as he slashed another Saracen from pate to breastbone. Where was Sir Gaeton? Where were the others? Where was the red-and-gold banner of Richard? He caught a glimpse of the fluttering banner far to the rear and started to fall back. And then he saw another knight nearby, a huge man who swung his sparkling blade with power and force. On his steel helm gleamed a golden coronet! Richard! And the great king, in spite of his prowess was outnumbered heavily and would, within seconds, be cut down by the Saracen horde! Without hesitation, Sir Robert plunged his horse toward the surrounded monarch, his great blade cutting a path before him. 
He saw Richard go down, falling from the saddle of his charger, but by that time his own sword was cutting into the screaming Saracens and they had no time to attempt any further mischief to the King. They had their hands full with Sir Robert de Bouain. He did not know how long he fought there, holding his charger motionless over the inert body of the fallen king, hewing down the screaming enemy, but presently he heard the familiar cry of "For St. George and for England" behind him. The Norman and English troops were charging in, bringing with them the banner of England! And then Richard was on his feet, cleaving the air about him with his own broadsword. Its bright edge, besmeared with Saracen blood, was biting viciously into the foe. The Turks began to fall back. Within seconds, the Christian knights were boiling around the embattled pair, forcing the Turks into retreat. And for the second time, Sir Robert found himself with no one to fight. And then a voice was saying: "You have done well this day, sir knight. Richard Plantagenet will not forget." Sir Robert turned in his saddle to face the smiling king. "My lord king, be assured that I would never forget my loyalty to my sovereign and liege lord. My sword and my life are yours whenever you call." King Richard's gauntleted hand grasped his own. "If it please God, I shall never ask your life. An earldom awaits you when we return to England, sir knight." And then the king mounted his horse and was running full gallop after the retreating Saracens. Robert took off his helmet. He blinked for a second to adjust his eyes to the relative dimness of the studio. After the brightness of the desert that the televicarion helmet had projected into his eyes, the studio seemed strangely cavelike. "How'd you like it, Bob?" asked one of the two producers of the show. Robert Bowen nodded briskly and patted the televike helmet. "It was O.K.," he said. "Good show. A little talky at the beginning, and it needs a better fade-out, but the action scenes were fine. The sponsor ought to like it—for a while, at least." "What do you mean, 'for a while'?" Robert Bowen sighed. "If this thing goes on the air the way it is, he'll lose sales." "Why? Commercial not good enough?" " Too good! Man, I've smoked Old Kings , and, believe me, the real thing never tasted as good as that cigarette did in the commercial!"
C. They are mercenaries
How does the photographer feel about darkrooms? A. Darkrooms don't make sense anymore with today's technology. B. They are a darkroom geek. C. Darkrooms are not all that exciting. D. Doing the wet work in the darkroom will always produce a superior picture.
Just another free soul In his foreword to the book, Lessig writes that you understand your subjects “by learning to see them in a certain way.” What is that certain way? I think I’m trying to get a mental image of a person, certain expressions, or what I think that person is about. I’m trying to capture what I think they look like, which is many times a minority of their typical expressions, or their typical stance. So, if I’m taking pictures of Larry [Lessig], I want to have his signature hand gestures, and not just random ones. I think I’m trying to capture pictures of people that help others see what they’re about. Some photographers will make someone look the way the photographer wants them to look, and not the way they appear, so they’ll pick the one picture out of 100 where the guy looks more egotistical than he really is. Some photographers are almost medical, and are going after a perfect portrait. I’m somewhere in between. It’s amazing how many people will upload snapshots of people where the pictures don’t look like them at all. To me, uploading a picture that is not an easily recognizable picture of that person defeats the point, which I’m working toward, to try to express who they are. On the other hand, professional photographers usually have a subject whom they don’t know personally, so they end up having to try to capture an image that they’ve created based on who they think the person is or how they want that person to appear. You know how sculptors often say that they’re just freeing an image from a block? What I’m trying to do is free someone’s soul from his or her image. There are a lot of things that make this hard. A lot of people are uncomfortable in front of a camera, or might make expressions that aren’t very natural for them. And if the person is nervous, it’s very difficult to try to see what it is that you’re trying to capture. A lot of what I’m doing is, I just start shooting photos. After half an hour of having their picture taken, people start to ignore you. Or I’ll take pictures when I’m talking to people about what they’re doing, so after a while they get distracted by the conversation and forget about the camera. That’s something that I’m not perfect at, but I’m getting better. I think good photographers are also able to disarm people through conversation, but still, it’s difficult to have a disarming conversation with somebody you don’t know, or to make them laugh. Many times people make a face for me that they wouldn’t make for a professional photographer. For instance, a board meeting picture, like the one with Eric Saltzman: that was during a very tense discussion. I’ve found that people are at their most animated at these kinds of meetings, and look the most alive when they are under a lot of pressure, and super- focused. But usually if an outsider is in the room, they won’t get into that. I mean, it would be difficult for a cameraman to be in a room where a board is having a heated debate. But those are the things that I’m trying to capture, because most people don’t get to see that. At the Creative Commons board meeting, Larry asked me to put the camera away after awhile [laughs] because it was distracting. We were having a very heated discussion and I was taking all of these pictures. But he credited me later because afterward those pictures turned out the best. In your mind, what is a ‘Freesoul’ ? A freesoul is somewhat of a pun. On the one hand it means you are free, liberated. You, as a human spirit, are open. 
And then, it also has the meaning that you are unencumbered legally, that you are free, as in ‘free software.’ There’s a paradox: with many people’s Wikipedia articles to which I’ve contributed, when it comes to the picture, many of these people don’t have any free photos of themselves on the web, so while they are “notable” on Wikipedia, their images aren’t free of the copyright of the photographer, or the institution who hired the photographer to take the picture. Often, even the subject of the article can’t make an image available to the Wikimedia/Wikipedia community. This means that a lot of people who have a Net presence have a legally encumbered Net presence. People who are invited to conferences get asked all the time, “By the way, do you have a photo that we can use?” But they don’t. By making these pictures available under a Creative Commons license, now they do. This is solving the issue of legal freedom. The third part of the pun is that, since I’m asking for a model release from the subjects, I’m asking everyone to be much more open and giving about their image than most people typically are. I’m giving, you’re giving, we’re all giving to participate and to try to create this wonderful work, and allow others to create derivative works. Of course people can abuse that, just like they can abuse anything. But I want people to see the value in sharing over the fear in sharing. The fact is, it’s much more likely that somebody is going to use these pictures for something positive, rather than for something negative. The benefits greatly outweigh the risks. I think we spend way too much of our lives worrying about the risks, at the cost of a lot of the benefits. This is a celebration of all of the people who are willing to give. In a way, giving up your image and allowing anyone to use it: it’s the ultimate gift. In one way it’s kind of vain. [laughs] But in another way it’s wonderful. A Wikipedia article on some person but with no picture is sad. Besides Wikipedia, how do you imagine these photos being used? They can be used in textbooks and in mainstream media articles about the person. Now they can get a picture that represents the person, at least from my perspective. That said, I shouldn’t be the only person doing this. More people should do the same, and make the photographs available freely. For one, I feel that “free” CC licensed photos have a much higher chance of not disappearing. But I don’t know exactly how these photos are going to be used, so in a sense I’m curious. For example, recently I received the Harvard Berkman Center pamphlet. It was a report of what they’re doing, and they also had a bunch of my pictures in there. They all had attribution, and it made me feel really good. There were pictures of different Berkman Center members that I had taken in various places all over the world. I think that the subject is probably happy with this, and I’m happy, and the Berkman Center’s happy because they’re not all pictures of people sitting at desks in the Berkman Center. There’s one more important thing: Creative Commons is great for original creative works or derivative creative works, but when it involves human images, it gets very complicated. We all know the Virgin Mobile case, where Virgin used CC licensed images in an advertisement without getting permission from the models, and got in trouble. What we’re trying to do here is to expand beyond just copyright, to make it more thorough from a legal perspective. 
It’s also an important educational point, so people understand that, in addition to the Creative Commons licenses, we need people to provide other rights in cases where the law requires such rights to be cleared before reuse. What have you learned about the people in these networks, just in the past year? That’s a good question. I think that at least Creative Commons has become much more mainstream. Creative Commons has moved from a fringy academic discussion to a boardroom discussion. Yahoo announced that it will be using Creative Commons for all of their basic infrastructure, and integrating it all. Google has CC search in their advanced search. Microsoft is working with CC as well and have a plug-in. Nine Inch Nails released their album, Ghost, under a Creative Commons license. The list goes on. Many people are asking: can you make money and share? The answer is, yes. CC is becoming an important part of the business discussion. But one thing that happens when a movement like CC becomes a business thing, is that a lot of the pioneers fade into the background, and it becomes a part of industry. This happened to the Internet. And so while you still have the core people who still remember and hold the torch for the philosophical side, the Internet has become much more of a business. Now, when you go to many Internet conferences, it’s mostly salesmen in attendance. I believe that the success of the Internet has two parts. The first part is the market-driven business side, which has made the Internet affordable and ubiquitous. The second part is the strong movement of participants who fight to keep the Internet open and try to prevent the business side from corrupting the fundamental elements that make the Internet great. The Net Neutrality or Open Network discussion going on right now is a good example of the importance of continuing to balance these principles with business interests. Similarly, I think that business interests can help make Creative Commons ubiquitous and more easily accessible to everyone. However, I think it’s important to remember to keep pushing to make content more “free” and not allow businesses to use Creative Commons in exploitive or destructive ways. In addition to the business side, Creative Commons is being used by educators to create open courseware around the world and in the area of science and technology to promote sharing in research. And as of now, we have the license ported to at least 44 jurisdictions, and the number of countries with projects continues to grow. In many ways, the movement outside of the United States has become much bigger than the movement in the United States. Although the United States is still slightly farther ahead in terms of commercialization, the size of the whole free culture movement outside of the United States is huge now. The CC China Photo exhibit was just amazing. There were some great images, and a lot of the photographers were professionals. This is beyond what anybody has done in the US. A lot of the progress that we’re making is international. What are your personal realizations or experiences? Well, we’re all getting old, if you look at these pictures. But there’s another thing, though, about this book: the number of professional-quality amateurs has increased significantly due to the importance of digital in both professional and high-end amateur photography. I hate to say it, a lot of people love the darkroom, but it really feels like the death of the darkroom this year. 
With new 22 megapixel cameras coming in under $10,000, and Lightroom and some of this software at a couple hundred dollars, it doesn’t really make sense, except for particularly fussy artists, to do wet-work anymore. If you’re a commercial photographer or a high-end amateur, you can do anything you used to do in the darkroom. I think it has really lowered the bar. I don’t know how that affects the industry directly, but for me, it bridged a huge gap. I used to be a darkroom geek. I loved my darkroom, and even when I didn’t have my darkroom anymore, I still was shooting 6x6 Hasselblad 120 film and processing it in a special lab, and then digitizing it. For me, that film was it. You could never get as good as medium-format film or large-format film. At the time, the digital Hasselblad backs were too expensive, and were still not as good as 8x10 film. So there was this whole period where the darkroom was not all that exciting, but the digital wasn’t perfect. I went through a limbo period. I had invested so much in my Hasselblad system, and my Leica M6 set. I had bought the Leica R8, but I was kicking myself because it was terrible. But then the Leica M8 came out, and I bought one at the beginning of 2007. The M8 really got me to where I could use my old gear, and it had enough megapixels to be as good as some film. Another way of saying it was that there was a gear breakthrough at the beginning of last year. Okay, that’s pretty materialistic! So there was a technology breakthrough, let’s call it that, that allowed me to switch completely away from film, and I think this happened to a lot of photographers. It caused an explosion of content and an increase in the quality of content on sites like Flickr. It has allowed amateurs to create a business model with professionals. Interestingly, I think these new high-end amateurs are buying more photography books and photographs and are probably providing an increasing revenue stream for professional photographers. I think most amateurs, including myself, are paying homage to the professionals and not trying to “compete” with them. Despite the existence of social software, what is still important about meeting people face-to-face? For me, the right way to use a lot of the new social software is by making it easier to spend more physical time with the people you like best. Dopplr is a great example. When I visit a city, I will see all of the people who are in the city at the same time. When I went to London a while ago, there were 47 people I knew in London, and a huge percentage of those people don’t live there. I would bet that more than half of the photos in this book are pictures of friends, and they’re not in their hometown. That’s the really interesting thing that is happening right now: it’s really increasing your ability to spend quality time with, actually, a smaller number of people. It allows you to actively filter. Your meetings don’t have to be random. If I look at the list of people in this book, although there are some obvious people missing whom I didn’t see last year, I probably met more of my friends last year, my real friends, than I’ve met in any other year. I know my travels were crazy, but I think that the online world has allowed me to do that. What’s great about photography is that it captures the moment that I was sharing with that person. It’s not just a connection on a social network online, which is really pretty binary. 
I can look at all these photos and remember exactly what we were doing, what we were eating, what we were drinking, what we were talking about, and to me that’s a much more rich experience. It’s the combination of social software and photography. For me, reality is “the present” plus what you remember from the past. I think this project is really sharing memories with people. Blog posts contribute as well, but to me photography is a really good way of doing that. When I look at the expressions, I remember the moment and get a sense of presence. I think the main problem for me is the environmental impact of flying around. Just as I never believed that we would have a paperless office, being able to connect with people through social software mostly increases your travel, it doesn’t decrease it. It is great because you get to meet all these people. But it is bad for the environment, and bad for our jet lag. How would you characterize your contributions to free culture? I think it’s mostly incremental. I think there is very little we actually do all by ourselves, and I hate saying, “I did this” or “I did that.” I think that in most cases, focusing on individual contributions or achievements undervalues the importance of everyone else involved. Having said that, I think my main contribution is probably in supporting Creative Commons as a fan, board member, chairman of the board and now CEO. I think CC has a significant role, and helping to keep it on track and growing is probably the single most important role that I have in Free Culture. Specifically, I think that trying to keep an international focus and a balance between business and the non-business elements of the movement is essential. My job is to keep that focus and maintain that balance. Also, CC needs to run smoothly as an organization and there is a lot of operational work that we all need to do. My photography is a way for me to participate in a small measure on the creative side of the Free Culture movement, and helps me see things from that perspective as well. However, I believe in emergent democracy and the importance of trying to celebrate the community more than the heroes. Of course, I’m a huge fan of Larry’s and I have great respect for the leaders of our movement. But more than anything, I’m thankful for and respectful of all of the participants who aren’t so well known and who are essential to moving everything forward. Personally, I don’t think it’s ultimately meaningful to talk about one individual’s personal contribution to any movement. The real meaning is in the whole movement. I’m just one participant. Just another free soul.
A. Darkrooms don't make sense anymore with today's technology.
Why doesn’t Eddie act excited about Teena going prospecting with him?
A. Eddie doesn’t want Teena to come because there isn’t much time left in the day for prospecting Cedar Point.
B. Eddie has a crush on Teena, and therefore doesn’t want to act too eager and uncool.
C. Eddie doesn’t want Teena to feel like she is obligated to help him fulfill his dream of finding radioactivity at Cedar Point.
D. It is implied that Eddie doesn’t want Teena to feel like he knows a lot more science than she does. Eddie feels this will make Teena not like him.
YOUNG READERS Atom Mystery 11 CHAPTER ONE It was only a dream. Eddie Taylor would like to have finished it, but the bar of morning sunlight poking in under the window shade pried his eyes open. The dream fled. Eddie kicked off the sheet, swung his feet to the floor, and groped under the bed for his tennis shoes. He heard his father’s heavy footsteps in the hallway. They stopped outside of his bedroom door. “You awake, Eddie?” “I’m awake, Dad,” Eddie answered. “Breakfast’s ready. Get washed and dressed.” 12 “Be right there,” Eddie said. Then, remembering the dream, he added, “Oh, Dad, is it all right if I use the Geiger counter today?” Mr. Taylor opened the door. He was a big man, broad-shouldered and still thin-waisted. Eddie found it easy to believe the stories he had heard about his father being an outstanding football player in his time. Even his glasses and the gray hair at his temples didn’t add much age, although Eddie knew it had been eighteen years since his father had played his last game of college football. “You may use the Geiger counter any time you want, Eddie,” Mr. Taylor said, “as long as you take good care of it. You figured out where you can find some uranium ore?” Eddie smiled sheepishly. “I—I had a dream,” he said. “Plain as day. It was out on Cedar Point. I was walking along over some rocks. Suddenly the Geiger counter began clicking like everything.” 13 “Cedar Point?” his father asked. “I’ve never been out there. But, from what I hear, there are plenty of rock formations. Might be worth a try, at that. You never can tell where you might strike some radioactivity.” “Do you believe in dreams, Dad?” “Well, now, that’s a tough question, son. I can’t say that I really do. Still, one clue is as good as another when it comes to hunting uranium ore, I guess. But right now we’d better get out to breakfast before your mother scalps us. Hurry it up.” His father turned and went back down the hallway toward the kitchen. Eddie pulled on his trousers and T shirt and went into the bathroom. He washed hurriedly, knowing that even if he missed a spot or two, he was fairly safe. During the summer months his freckles got so thick and dark that it would take a magnifying glass to detect any small smudges of dirt hiding among them. He plastered some water on his dark-red hair, pushed a comb through it, and shrugged as it snapped back almost to its original position. Oh, well, he had tried. 14 He grinned into the mirror, reached a finger into his mouth, and unhooked the small rubber bands from his tooth braces. He dropped them into the waste basket. He’d put fresh ones in after breakfast. He brushed his teeth carefully, taking particular pains around the metal braces. The tooth-straightening orthodontist had warned him about letting food gather around the metal clamps. It could start cavities. Finished, Eddie went out to breakfast. “Good morning, dear,” his mother greeted him, handing him a plate of eggs. “Hi, Mom,” Eddie said. “Gotta hurry. Big day today.” “So your father says. But I’m afraid your big day will have to start with sorting out and tying up those newspapers and magazines that have been collecting in the garage.” “Aw, Mom—” “Eddie, I asked you to do it three days ago. Remember? And the Goodwill truck comes around today.” “But, Mom—” 15 “No arguments, son,” his father put in calmly but firmly. “School vacation doesn’t mean that your chores around here are on vacation, too. Get at it right away, and you’ll still have time to hunt your uranium. “Well,” Mr. 
Taylor added, excusing himself from the table, “I’d better be getting over to school. I’m expecting to receive shipment of a new radioisotope today.” The very word excited Eddie. In fact, anything having to do with atomic science excited him. He knew something about isotopes—pronounced eye-suh-tope . You couldn’t have a father who was head of the atomic-science department at Oceanview College without picking up a little knowledge along the way. Eddie knew that a radioisotope was a material which had been “cooked” in an atomic reactor until it was “hot” with radioactivity. When carefully controlled, the radiation stored up in such isotopes was used in many beneficial ways. 16 “Why don’t college professors get summer vacations, too?” Eddie asked. One reason for asking that particular question was to keep from prying deeper into the subject of the radioisotope. Much of his father’s work at Oceanview College was of a secret nature. Eddie had learned not to ask questions about it. His father usually volunteered any information he wanted known, so Eddie stuck to questions which could and would be answered. “We get vacations,” his father said. “But—well, my work is a little different, you know. At the speed atomic science is moving today, we simply can’t afford to waste time. But don’t worry. We’ll take a week or so off before school starts in the fall. Maybe head for the mountains with our tent and sleeping bags.” “And Geiger counter?” Eddie asked eagerly. “Wouldn’t think of leaving it home,” his father said, smiling. “By the way, I put new batteries in it the other day. Take it easy on them. Remember to switch it off when you’re not actually using it.” “I will,” Eddie promised. He had forgotten several times before, weakening the batteries. 17 It took Eddie over an hour to sort out the newspapers and magazines in the garage, tie them in neat bundles, and place them out on the front curb for the Goodwill pickup. By that time the sun was high overhead. It had driven off the coolness which the ocean air had provided during the earlier hours. “Anything else, Mom?” he asked, returning to the house and getting the Geiger counter out of the closet. He edged toward the back door before his mother had much time to think of something more for him to do. “I guess not, dear,” Mrs. Taylor said, smiling over his hasty retreat. “What are you going to do?” “Think I’ll do a little prospecting,” Eddie said. “Where?” “Probably in the hills beyond the college,” Eddie said. The more he thought about it, the more he realized it was a little late in the day to go to Cedar Point. The best way to get there was by rowboat across Moon Bay, and that was too long a row to be starting now. Besides, there were plenty of other places around the outskirts of Oceanview where likely looking rock formations invited search with a Geiger counter. 18 “Are you going alone?” his mother asked. “Oh, guess I’ll stop by and see if Teena wants to go,” Eddie answered casually. He tried to make it sound as though he would be doing Teena Ross a big favor. After all, she was only a girl. Eddie didn’t figure a girl would make a very good uranium prospecting partner, but most of the fellows he knew were away at camp, or vacationing with their folks, or something like that. “She’ll enjoy it, I’m sure,” his mother said. “I’ll take Sandy, too,” Eddie said. “He needs the exercise.” “That’s a good idea, dear. Be back in time for an early dinner.” Eddie let Sandy off his chain. 
The taffy-colored cocker spaniel yipped wildly over his freedom, racing back and forth as Eddie started down the street. 19 Christina Ross—whom everybody called Teena—lived at the far end of the block. Eddie went around to the side door of the light-green stucco house and knocked. “Oh, hi, Eddie,” Teena greeted him, appearing at the screen door. “I was hoping you’d come over.” “Well, I—I just happened to be going by,” Eddie said. “Thought you might want to watch me do a little prospecting with the Geiger counter. But maybe you’re too busy.” That’s how to handle it, Eddie thought. Don’t act anxious. Let Teena be anxious. Then maybe she’ll even offer to bring along a couple of sandwiches or some fruit. “Oh, I’d love to go,” Teena said eagerly, “but I’m just finishing the dishes. Come on in.” “I’m in kind of a hurry.” “I’ll only be a minute.” She pushed the screen door open for him. “I’ll make us some sandwiches.” “Stay here, Sandy,” Eddie said. “Sit.” The dog minded, although he looked a bit rebellious. 20 Eddie went inside and followed Teena to the kitchen. He felt triumphant about the sandwiches. Teena tossed him a dish towel. “You dry them,” she said. “Who, me?” “Why not? You’re in a hurry, aren’t you? I can make the sandwiches while you dry the silverware.” She smiled, putting tiny crinkles in her small, slightly upturned nose. She wore her hair in a pony tail. Even though her hair was blond all year long, it seemed even lighter in the summer. Eddie couldn’t tell whether the sun had faded it, or whether her deep summer tan simply made her hair look lighter by contrast. Maybe both. “Hello, Eddie,” Mrs. Ross said, coming into the kitchen. “Looks like Teena put you to work.” “She always does, Mrs. Ross,” Eddie said, pretending great injury. “Don’t know why I keep coming over here.” “I know,” Teena spoke up quickly. “It’s because we’re friends, that’s why.” 21 Eddie knew she was right. They were friends—good friends. They had been ever since Eddie’s family had moved to Oceanview and his father had become head of the college’s atomic-science department. In fact, their parents were close friends, also. Teena’s father was chief engineer for the Acme Aviation Company, one of the coast town’s largest manufacturing concerns. “Well, I’ll be glad to finish them, Eddie,” Mrs. Ross offered. “I know how boys detest doing dishes.” “Oh, I don’t really mind, Mrs. Ross,” Eddie said. “Besides, Teena’s making sandwiches to take with us.” “Another prospecting trip?” Teena’s mother glanced at the Geiger counter which Eddie had set carefully on the dinette table. “I still think there must be some uranium around here,” Eddie insisted. “And we can find it if anyone can.” “I agree,” Mrs. Ross said. “But even if you don’t find it, you both seem to enjoy your hikes.” 22 “Oh, yes, it’s fun, Mother,” Teena replied, wrapping wax paper around a sandwich. “Guess I’m ready. I’ve got a bone for Sandy, too.” “Don’t go too far out from town,” Mrs. Ross cautioned, as Eddie picked up the Geiger counter. “And stick near the main roads. You know the rules.” “We sure do, Mrs. Ross,” Eddie assured her. “And we’ll be back early.” They walked past the college campus, and toward the rocky foothills beyond. At various rock mounds and outcroppings, Eddie switched on the Geiger counter. The needle of the dial on the black box wavered slightly. A slow clicking came through the earphones, but Eddie knew these indicated no more than a normal background count. There were slight traces of radioactivity in almost all earth or rocks. 
It was in the air itself, caused by mysterious and ever-present cosmic rays, so there was always a mild background count when the Geiger counter was turned on; but to mean anything, the needle had to jump far ahead on the gauge, and the clicking through the earphones had to speed up until it sounded almost like bacon frying in a hot skillet. 23 There was none of that today. After they had hiked and searched most of the forenoon, Eddie said, “We might as well call it a day, Teena. Doesn’t seem to be anything out here.” “It’s all right with me,” Teena agreed, plucking foxtails from Sandy’s ears. “Pretty hot, anyway. Let’s eat our sandwiches and go back home.” “All right,” Eddie said. “You know, one of these days I’d like to go out to Cedar Point and scout around. Maybe we’ll find something there.” Then he told Teena about his dream. Teena smiled. “A dream sure isn’t much to go on,” she said, “but they say it’s pretty out on Cedar Point. I’ll go any time you want to, Eddie.” She handed him one of the sandwiches. It was midafternoon by the time they arrived back at Teena’s house. They worked a while on a new jigsaw puzzle Teena had received on a recent birthday. Then Eddie said good-by and went on down the street toward his own home. 24 After putting Sandy on his long chain and filling his water dish, Eddie went in the back door. He put the Geiger counter in the closet and went into the kitchen. “What’s for dinner, Mom?” he asked. Mrs. Taylor turned from the sink. Eddie knew at once, just seeing the expression on his mother’s face, that something was wrong. “Dinner?” his mother said absently. “It’s not quite four o’clock yet, Eddie. Besides, dinner may be a little late today.” “But this morning you said it would be early,” Eddie reminded her, puzzled. “This morning I didn’t know what might happen.” 25 Then Eddie heard the sound of his father’s voice coming from the den. There was a strange urgent tone in it. The door to the den was open. Eddie went through the dining room and glanced into the den. His father sat stiffly behind his homemade desk, talking rapidly into the telephone. Eddie caught only the last few sketchy words. Then his father placed the telephone in its cradle, glanced up, and saw Eddie. If there had been even the slightest doubt in Eddie’s mind about something being wrong, it vanished now. Mr. Taylor looked years older than he had that very morning. Worry lay deep in his eyes. He fumbled thoughtfully with a pencil, turning it end over end on his desk. “Hello, son,” he said. He didn’t even ask whether Eddie had discovered any uranium ore that day. Always before, he had shown genuine interest in Eddie’s prospecting trips. “Dad,” Eddie said anxiously, “what—what’s the matter?” “It shows that much, does it, son?” his father said tiredly. “What’s wrong, Dad?” Eddie prompted. “Or can’t you tell me?” Mr. Taylor leaned back. “Quite a bit’s wrong, Eddie,” he said, “and I guess there’s no reason why I shouldn’t tell you. It’ll be in the evening papers, anyway.” 26 “Evening papers?” “Eddie, you remember me mentioning this morning about that radioisotope shipment I was expecting today?” “I remember,” Eddie said. “Did it come?” “It did—and it didn’t,” his father said. “What does that mean, Dad?” Eddie asked, puzzled. “The delivery truck arrived at the school with it,” his father explained, “but while the driver was inquiring where to put it, the container disappeared.” “Disappeared?” “The radioisotope was stolen, Eddie,” his father said slowly. 
“Stolen right out from under our noses!” 27 CHAPTER TWO At the moment, Eddie didn’t pry for further information on the theft of the valuable radioactive isotope. His father had plenty on his mind, as it was. The main information was in the evening Globe , which Eddie rushed out to get as soon as he heard it plop onto the front porch. He took the newspaper to his father to read first. After having finished, Mr. Taylor handed the paper to Eddie and leaned back thoughtfully in his chair. 28 “They’ve got it pretty straight, at that,” Mr. Taylor said, “but I’m afraid this is going to stir up quite a bit of trouble.” “It wasn’t your fault, was it, Dad?” Eddie defended. “It was as much mine as anybody’s, son,” his father said. “Probably more so. After all, I am head of the department. I knew about the shipment. That should make it my responsibility to see that it was properly received and placed in our atomic-materials storage vault. But there is little point in trying to place the blame on anyone. I’m willing to accept that part of it. The important thing is that we recover that radioisotope. Not only is it of a secret nature, but it is also dangerously radioactive if improperly handled.” “But—but wasn’t it in a safe container?” Eddie asked. 29 “Of course,” his father said. “There were only two ounces of it in a fifty-pound lead capsule. As long as it remains in that capsule it’s safe. As you know, the lead prevents any radiation from escaping. Out of that capsule, however, those two ounces of radioisotope can be very dangerous.” “Fifty pounds,” Eddie said thoughtfully. “That’s a pretty big thing to steal, isn’t it?” “Not when it’s lead, son,” his father replied. “Not much bigger than a two-quart milk bottle, in fact.” “Even at that, no kid could have taken it,” Eddie said. “Kid?” His father smiled thinly. “We don’t think it was any kid, Eddie. Not by a long shot. The whole thing was carefully planned and carefully carried out. It was not the work of amateurs.” Eddie read the newspaper account. The small truck from Drake Ridge, where one of the country’s newest atomic reactors was located, had arrived earlier than expected at Oceanview College. It had backed up to the receiving dock where all of the college supplies were delivered. Since deliveries during vacation months were few, there was no one on the dock when the truck arrived. A half hour later, when the delivery was expected, there would have been. The truck’s early arrival had caught them unprepared. 30 The driver had left the truck and had gone around the building to the front office. It had taken him less than five minutes to locate the receiving-dock foreman. Together, they had returned through the small warehouse and opened the rear door onto the dock. During that short time someone had pried open the heavy padlock on the delivery truck’s rear door and had stolen the fifty-pound lead capsule containing the radioisotope. Dusty footprints on the pavement around the rear of the truck indicated that two men had carried out the theft. A heavy iron pry bar had been dropped at the rear of the truck after the lock was sprung. It was a common type used by carpenters. There were no fingerprints or other identifying marks on it. The footprints were barely visible and of no help other than to indicate that two men were involved in the crime. 31 “Dad,” Eddie asked, looking up from the paper, “how could anyone carry away something weighing fifty pounds without being noticed?” “Chances are they had their car parked nearby,” his father said. 
“As you know, there are no fences or gates around Oceanview College. People come and go as they please. As a matter of fact, there are always quite a few automobiles parked around the shipping and receiving building, and parking space is scarce even during summer sessions. Anyone could park and wait there unnoticed. Or they could walk around without attracting any undue attention.” “But, Dad,” Eddie continued, “how would the men know that the delivery truck would arrive a half hour early?” “They wouldn’t,” his father said. “They may have had another plan. The way things worked out, they didn’t need to use it. The early delivery and the business of leaving the truck unguarded for a few minutes probably gave them a better opportunity than they had expected. At least, they took quick advantage of it.” 32 “I don’t see what anyone would want with a radioisotope,” Eddie said. “Maybe they figured there was something else inside of that lead capsule.” “That’s unlikely, son,” Mr. Taylor said. “Believe me, it was no common theft. Nor were the thieves ordinary thieves. That isotope was a new one. A very secret one. Our job at the college was to conduct various tests with it in order to find out exactly how it could best be put to use as a cure for disease, or for sterilizing food, or even as a source of power.” “Power?” Eddie said. “Boy, it must have been a strong isotope.” He knew that the strength of radioisotopes could be controlled largely by the length of time they were allowed to “cook” in an atomic reactor and soak up radioactivity. 33 “We weren’t planning to run a submarine with it,” his father said. “It wasn’t that strong. Still, it doesn’t take so very much radioactivity to make two ounces of an isotope quite powerful—and quite deadly. I only hope whoever stole it knows what he’s doing. However, I’m sure he does.” “You mean he must have been an atomic scientist himself?” Eddie asked. “Let’s just say he—or both of them—have enough training in the subject to know how to handle that isotope safely,” Mr. Taylor said. “But, Dad,” Eddie wondered, “what could they do with it?” “They could study it,” his father explained. “At least, they could send it somewhere to be broken down and studied. Being a new isotope, the formula is of great value.” “What do you mean, send it somewhere?” Eddie asked. “Perhaps to some other country.” “Then—then you mean whoever stole it were spies!” Eddie exclaimed breathlessly. “That’s entirely possible,” his father said. “In fact, it’s the only logical explanation I can think of. People simply don’t go around stealing radioactive isotopes without a mighty important reason.” 34 “Dinner’s ready,” Eddie’s mother called from the kitchen. During dinner Eddie wasn’t sure just what he was eating. The idea of spies stealing atomic materials kept building up in his mind. By the time dessert was finished, he was anxious to talk with someone, yet he knew he shouldn’t bother his father with any more questions. He asked if he could go over and visit with Teena for a while. “Well, you were together most of the day,” his mother said, “but I guess it’s all right. Be back in about an hour, though.” It was a balmy evening. On such evenings, he and Teena sometimes walked along the beach barefoot, collecting sea shells. Today Eddie had no desire to do that. He ran down the block. Teena answered his knock. “Come on in, Eddie,” she invited, seeming surprised to see him. “Mother and I are just finishing dinner.” “Oh, I figured you’d be through by now,” Eddie apologized, following her inside. 
35 “Hello, Eddie,” Mrs. Ross said, but she didn’t seem as cheerful as usual. “Good evening, Mrs. Ross,” Eddie said. “I—I hope I’m not making a pest of myself.” He looked around for Mr. Ross, but Teena’s father apparently hadn’t arrived home from Acme Aircraft yet. There wasn’t a place set for him at the table, either. “You’re never a pest, Eddie,” Mrs. Ross assured him. “I was going to call your mother in a little while about that newspaper write-up.” “Oh, you read it?” Eddie said. “How could anyone miss it?” Teena said. “Right on the front page.” “I suppose your father is quite concerned over it,” Teena’s mother said. “Oh, yes,” Eddie affirmed. “He was the one who ordered the isotope.” “What’s an isotope?” Teena asked. “I’m not sure I know, either,” Mrs. Ross said. “Maybe we could understand more of what it’s all about if you could explain what a radioisotope is, Eddie.” 36 “Well,” Eddie said slowly, “it’s not easy to explain, but I’ll try. You know how rare uranium is. There’s not nearly enough of it to fill all the needs for radioactive materials. Besides, pure uranium is so powerful and expensive and dangerous to handle that it’s not a very good idea to try using it in its true form. So they build an atomic reactor like the one at Drake Ridge.” “We’ve driven by it,” Mrs. Ross said. “My, it’s a big place.” “I’ll say,” Eddie agreed. “Of course, only one building holds the reactor itself. It’s the biggest building near the center.” “I remember it,” Teena said. “Well, the reactor is about four stories high,” Eddie went on. “They call it a uranium ‘pile.’ It’s made up of hundreds and hundreds of graphite bricks. That’s where they get the name ‘pile’—from brick pile. Anyway, scattered around in between the bricks are small bits of uranium. Uranium atoms are radioactive. That is, they keep splitting up and sending out rays.” “Why do they do that?” Teena asked. 37 “It’s just the way nature made uranium, I guess,” Eddie said. “Most atoms stay in one piece, although they move around lickety-split all of the time. Uranium atoms not only move around, but they break apart. They shoot out little particles called neutrons. These neutrons hit other atoms and split them apart, sending out more neutrons. It’s a regular chain reaction.” “I’ve heard of chain reactions,” Mrs. Ross said. “Well, with all of the splitting up and moving around of the uranium atoms,” Eddie went on, “an awful lot of heat builds up. If they don’t control it—well, you’ve seen pictures of atomic-bomb explosions. That’s a chain reaction out of control.” “Out of control is right,” Teena said. 38 “But the atomic piles control the reaction,” Eddie said. “The graphite bricks keep the splitting-up atoms apart so one neutron won’t go smashing into other atoms unless they want it to. They have ways of controlling it so that only as much radiation builds up as they want. You can even hear the reactor hum as the radioactive rays go tearing through it. But by careful tending, the scientists keep the atomic collisions far enough apart so the thing doesn’t blow up.” “Boy, that sounds dangerous,” Teena said. “Well, they know just how to do it,” Eddie replied. “Aren’t the rays dangerous?” Mrs. Ross asked. “I’ll say they’re dangerous,” Eddie said. “But the whole pile is covered by a shield of concrete about eight feet thick. That keeps the rays from getting out and injuring the workmen.” “Goodness. Eight feet is a lot of cement.” “It takes a lot to stop radioactive atomic particles,” Eddie explained. “Especially the gamma rays. 
They’re the fastest and most dangerous, and the hardest to stop. Alpha and beta rays are fairly easy to stop. But the gamma rays are regular high-velocity invisible bullets. They’ll go right through a stone wall unless it’s plenty thick. Of course, you can’t see them. Not with even the most powerful microscope in the world.” 39 “I wouldn’t want to work around a place where I might get shot at by—by dangerous rays you can’t even see,” Teena said. “I would,” Eddie said. “Everyone is carefully protected. They see to that. Well, anyway, if all of those uranium atoms were shooting radioactive rays around inside of that pile and doing nothing, there would be an awful lot of energy going to waste. So the atomic scientists take certain elements which aren’t radioactive, but can be made radioactive, and shove small pieces of them into holes drilled in the pile.” “Isn’t that dangerous?” Teena asked. “They don’t shove them in with their bare hands,” Eddie said, trying not to show exasperation. “They use long holders to push the small chunks of material into the holes in the reactor. Then, as those uranium atoms keep splitting up and shooting particles around inside of the pile, some of them smack into the chunks of material, and stick there. Most elements will soak up radiation, just like a sponge soaks up water.” 40 “My, that’s interesting, Eddie,” Mrs. Ross said. “I’ve seen them do it,” Eddie said proudly, then added, “from behind a protective shield, of course. When the material has soaked up enough radiation, they pull it back out. They say it’s ‘cooked.’” “You mean it’s hot?” Teena asked. “It’s hot,” Eddie said, “but not like if it came out of a stove. By hot, they mean it’s radioactive. If you touched it, or even got near it, you would get burned, but you probably wouldn’t even know it for a while. It would be a radiation burn. That’s a kind of burn you don’t feel, but it destroys your blood cells and tissues, and—well, you’ve had it.” “So that’s what a radioisotope is,” Mrs. Ross said. “It’s like a sponge. Only instead of soaking up water, it soaks up radiation.” 41 “That’s about it,” Eddie said. “My dad says that as more is learned about the ways to use isotopes, the whole world is going to be improved. You’ve heard of radiocobalt for curing cancer. Well, that’s an isotope. They make it by cooking cobalt in an atomic reactor. Oh, there are hundreds of different isotopes. Like I said, isotopes can be made of most of the elements. And there are over a hundred elements. Some soak up a lot of radioactivity, and are strong and dangerous. Others absorb only a little and are pretty safe to use. Depends, too, on how long they let them cook in the reactor.” “What kind was the one stolen from the college today?” Teena asked. “Dad didn’t say exactly,” Eddie answered, “except he did say that if whoever took it didn’t know what he was doing and opened up the lead capsule, it could kill him. Of course, even the mild isotopes are deadly if they’re not handled right.” “My goodness, it is a serious matter, isn’t it?” Mrs. Ross said. 42 Eddie nodded. It was even more serious than its threat of danger to anyone who handled it carelessly. It was a new isotope—a secret isotope. His father hadn’t said whether it had been developed for curing things or for destroying things. But many radioisotopes could do either; it depended on how they were used. 
Eddie assumed that anyone who would stoop to stealing isotopes more than likely would be interested in their ability to destroy rather than their ability to benefit mankind. “Well, I certainly do hope everything works out all right,” Teena’s mother said. “So do I,” Teena agreed. Eddie glanced at the kitchen clock. “Oh, boy,” he said, “I’d better be heading back home. I didn’t mean to come over here and talk so long.” “Oh, we’re glad you did, Eddie,” Mrs. Ross said. “I’m afraid too few of us know anything about this atom business.” 43 “That’s right, Mrs. Ross,” Eddie agreed. “People should talk more and read more about it. After all, this is an atomic age. We might as well face it. My father says that in horse-and-buggy days everyone knew how to feed a horse and grease a wagon wheel. They knew what was needed to get the work done. But now that atoms are being harnessed to do the work, not many people even bother to find out what an atom is.” Mrs. Ross smiled. “I guess you’re right, Eddie,” she said, “but I wouldn’t quite know how to go about feeding an atom.” “Or greasing one,” Teena added. Eddie laughed. “I sure wouldn’t want the job of trying to feed a herd of them the size of a period,” he said. “Did you know that there are about three million billion atoms of carbon in a single period printed at the end of a sentence. That’s how small atoms are.” “Three million billion is a lot of something,” a man’s voice spoke behind him. “What are we talking about, Eddie?” “Oh, hello, Mr. Ross,” Eddie said, turning around and standing up. “I didn’t hear you come in.” 44 Teena’s father was a medium-sized man with light-brown hair which was getting somewhat thin on top. He was usually quite cheerful and full of fun, but tonight his face seemed unusually drawn and sober. He stepped to the table, leaned over, and gave both Teena and Mrs. Ross a kiss on the cheek. “Eddie was telling us about atoms,” Teena’s mother said. “Did you know there were three million billion of them in a period?” “How many in a comma?” Mr. Ross said to Eddie, then added quickly, “forget it, Eddie. It wasn’t very funny. I—I’m afraid I don’t feel very funny tonight.” “Sit down, dear,” Mrs. Ross said. “I’ll warm your dinner. You didn’t sound very cheerful when you called to say you would be late. How did everything go at the plant today?” “Not so good,” Teena’s father said tiredly. “In fact, not good at all.” Problems. It seemed that everyone had problems, Eddie thought, as he started to leave.
B. Eddie has a crush on Teena, and therefore doesn’t want to act too eager and uncool.
What is the general's primary concern regarding the leader of the mission?
A. Exceptional leadership skills
B. Strongest intellectual quotient
C. Peak body and brain function
D. Unwavering belief in the mission
Transcriber's Note: This etext was produced from Astounding Science Fiction December 1955. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. BREAKAWAY BY STANLEY GIMBLE Illustrated by Freas She surely got her wish ... but there was some question about getting what she wanted. Phil Conover pulled the zipper of his flight suit up the front of his long, thin body and came into the living room. His face, usually serious and quietly handsome, had an alive, excited look. And the faint lines around his dark, deep-set eyes were accentuated when he smiled at his wife. "All set, honey. How do I look in my monkey suit?" His wife was sitting stiffly on the flowered couch that was still not theirs completely. In her fingers she held a cigarette burned down too far. She said, "You look fine, Phil. You look just right." She managed a smile. Then she leaned forward and crushed the cigarette in the ash tray on the maple coffee table and took another from the pack. He came to her and touched his hands to her soft blond hair, raising her face until she was looking into his eyes. "You're the most beautiful girl I know. Did I ever tell you that?" "Yes, I think so. Yes, I'm sure you did," she said, finishing the ritual; but her voice broke, and she turned her head away. Phil sat beside her and put his arm around her small shoulders. He had stopped smiling. "Honey, look at me," he said. "It isn't going to be bad. Honestly it isn't. We know exactly how it will be. If anything could go wrong, they wouldn't be sending me; you know that. I told you that we've sent five un-manned ships up and everyone came back without a hitch." She turned, facing him. There were tears starting in the corners of her wide, brown eyes, and she brushed them away with her hand. "Phil, don't go. Please don't. They can send Sammy. Sammy doesn't have a wife. Can't he go? They'd understand, Phil. Please!" She was holding his arms tightly with her hands, and the color had drained from her cheeks. "Mary, you know I can't back out now. How could I? It's been three years. You know how much I've wanted to be the first man to go. Nothing would ever be right with me again if I didn't go. Please don't make it hard." He stopped talking and held her to him and stroked the back of her head. He could feel her shoulders shaking with quiet sobs. He released her and stood up. "I've got to get started, Mary. Will you come to the field with me?" "Yes, I'll come to say good-by." She paused and dropped her eyes. "Phil, if you go, I won't be here when you get back—if you get back. I won't be here because I won't be the wife of a space pilot for the rest of my life. It isn't the kind of life I bargained for. No matter how much I love you, I just couldn't take that, Phil. I'm sorry. I guess I'm not the noble sort of wife." She finished and took another cigarette from the pack on the coffee table and put it to her lips. Her hand was trembling as she touched the lighter to the end of the cigarette and drew deeply. Phil stood watching her, the excitement completely gone from his eyes. "I wish you had told me this a long time ago, Mary," Phil said. His voice was dry and low. "I didn't know you felt this way about it." "Yes, you did. I told you how I felt. I told you I could never be the wife of a space pilot. But I don't think I ever really believed it was possible—not until this morning when you said tonight was the take-off. It's so stupid to jeopardize everything we've got for a ridiculous dream!" 
He sat down on the edge of the couch and took her hands between his. "Mary, listen to me," he said. "It isn't a dream. It's real. There's nothing means anything more to me than you do—you know that. But no man ever had the chance to do what I'm going to do tonight—no man ever. If I backed out now for any reason, I'd never be able to look at the sky again. I'd be through." She looked at him without seeing him, and there was nothing at all in her eyes. "Let's go, if you're still going," she finally said. They drove through the streets of the small town with its small bungalows, each alike. There were no trees and very little grass. It was a new town, a government built town, and it had no personality yet. It existed only because of the huge ship standing poised in the take-off zone five miles away in the desert. Its future as a town rested with the ship, and the town seemed to feel the uncertainty of its future, seemed ready to stop existing as a town and to give itself back to the desert, if such was its destiny. Phil turned the car off the highway onto the rutted dirt road that led across the sand to the field where the ship waited. In the distance they could see the beams of the searchlights as they played across the take-off zone and swept along the top of the high wire fence stretching out of sight to right and left. At the gate they were stopped by the guard. He read Phil's pass, shined his flashlight in their faces, and then saluted. "Good luck, colonel," he said, and shook Phil's hand. "Thanks, sergeant. I'll be seeing you next week," Phil said, and smiled. They drove between the rows of wooden buildings that lined the field, and he parked near the low barbed fence ringing the take-off zone. He turned off the ignition, and sat quietly for a moment before lighting a cigarette. Then he looked at his wife. She was staring through the windshield at the rocket two hundred yards away. Its smooth polished surface gleamed in the spotlight glare, and it sloped up and up until the eye lost the tip against the stars. "She's beautiful, Mary. You've never seen her before, have you?" "No, I've never seen her before," she said. "Hadn't you better go?" Her voice was strained and she held her hands closed tightly in her lap. "Please go now, Phil," she said. He leaned toward her and touched her cheek. Then she was in his arms, her head buried against his shoulder. "Good-by, darling," she said. "Wish me luck, Mary?" he asked. "Yes, good luck, Phil," she said. He opened the car door and got out. The noise of men and machines scurrying around the ship broke the spell of the rocket waiting silently for flight. "Mary, I—" he began, and then turned and strode toward the administration building without looking back. Inside the building it was like a locker room before the big game. The tension stood alone, and each man had the same happy, excited look that Phil had worn earlier. When he came into the room, the noise and bustle stopped. They turned as one man toward him, and General Small came up to him and took his hand. "Hello, Phil. We were beginning to think you weren't coming. You all set, son?" "Yes, sir, I'm all set, I guess," Phil said. "I'd like you to meet the Secretary of Defense, Phil. He's over here by the radar." As they crossed the room, familiar faces smiled, and each man shook his hand or touched his arm. He saw Sammy, alone, by the coffee urn. Sammy waved to him, but he didn't smile. Phil wanted to talk to him, to say something; but there was nothing to be said now. 
Sammy's turn would come later. "Mr. Secretary," the general said, "this is Colonel Conover. He'll be the first man in history to see the other side of the Moon. Colonel—the Secretary of Defense." "How do you do, sir. I'm very proud to meet you," Phil said. "On the contrary, colonel. I'm very proud to meet you. I've been looking at that ship out there and wondering. I almost wish I were a young man again. I'd like to be going. It's a thrilling thought—man's first adventure into the universe. You're lighting a new dawn of history, colonel. It's a privilege few men have ever had; and those who have had it didn't realize it at the time. Good luck, and God be with you." "Thank you, sir. I'm aware of all you say. It frightens me a little." The general took Phil's arm and they walked to the briefing room. There were chairs set up for the scientists and Air Force officers directly connected with the take-off. They were seated now in a semicircle in front of a huge chart of the solar system. Phil took his seat, and the last minute briefing began. It was a routine he knew by heart. He had gone over and over it a thousand times, and he only half listened now. He kept thinking of Mary outside, alone by the fence. The voice of the briefing officer was a dull hum in his ears. "... And orbit at 18,000-mph. You will then accelerate for the breakaway to 24,900-mph for five minutes and then free-coast for 116 hours until—" Phil asked a few questions about weather and solar conditions. And then the session was done. They rose and looked at each other, the same unanswered questions on each man's face. There were forced smiles and handshakes. They were ready now. "Phil," the general said, and took him aside. "Sir?" "Phil, you're ... you feel all right, don't you, son?" "Yes, sir. I feel fine. Why?" "Phil, I've spent nearly every day with you for three years. I know you better than I know myself in many ways. And I've studied the psychologist's reports on you carefully. Maybe it's just nervousness, Phil, but I think there's something wrong. Is there?" "No, sir. There's nothing wrong," Phil said, but his voice didn't carry conviction. He reached for a cigarette. "Phil, if there is anything—anything at all—you know what it might mean. You've got to be in the best mental and physical condition of your life tonight. You know better than any man here what that means to our success. I think there is something more than just natural apprehension wrong with you. Want to tell me?" Outside, the take-off zone crawled with men and machines at the base of the rocket. For ten hours, the final check-outs had been in progress; and now the men were checking again, on their own time. The thing they had worked toward for six years was ready to happen, and each one felt that he was sending just a little bit of himself into the sky. Beyond the ring of lights and moving men, on the edge of the field, Mary stood. Her hands moved slowly over the top of the fence, twisting the barbs of wire. But her eyes were on the ship. And then they were ready. A small group of excited men came out from the administration building and moved forward. The check-out crews climbed into their machines and drove back outside the take-off zone. And, alone, one man climbed the steel ladder up the side of the rocket—ninety feet into the air. At the top he waved to the men on the ground and then disappeared through a small port. Mary waved to him. "Good-by," she said to herself, but the words stuck tight in her throat. 
The small group at the base of the ship turned and walked back to the fence. And for an eternity the great ship stood alone, waiting. Then, from deep inside, a rumble came, increasing in volume to a gigantic roar that shook the earth and tore at the ears. Slowly, the first manned rocket to the Moon lifted up and up to the sky. For a long time after the rocket had become a tiny speck of light in the heavens, she stood holding her face in her hands and crying softly to herself. And then she felt the touch of a hand on her arm. She turned. "Phil! Oh, Phil." She held tightly to him and repeated his name over and over. "They wouldn't let me go, Mary," he said finally. "The general would not let me go." She looked at him. His face was drawn tight, and there were tears on his cheeks. "Thank, God," she said. "It doesn't matter, darling. The only thing that matters is you didn't go." "You're right, Mary," he said. His voice was low—so low she could hardly hear him. "It doesn't matter. Nothing matters now." He stood with his hands at his sides, watching her. And then turned away and walked toward the car. THE END
C. Peak body and brain function
What does the author seem to like to see in movies?
A. movies that stay true to the books and original scripts
B. movies that dig deeper into life's realities
C. unpredictability in the story line
D. movies that show the good in people
War and Pieces No movie in the last decade has succeeded in psyching out critics and audiences as fully as the powerful, rambling war epic The Thin Red Line , Terrence Malick's return to cinema after 20 years. I've sat through it twice and am still trying to sort out my responses, which run from awe to mockery and back. Like Saving Private Ryan , the picture wallops you in the gut with brilliant, splattery battle montages and Goyaesque images of hell on earth. But Malick, a certified intellectual and the Pynchonesque figure who directed Badlands and Days of Heaven in the 1970s and then disappeared, is in a different philosophical universe from Steven Spielberg. Post-carnage, his sundry characters philosophize about their experiences in drowsy, runic voice-overs that come at you like slow bean balls: "Why does nature vie with itself? ... Is there an avenging power in nature, not one power but two?" Or "This great evil: Where's it come from? What seed, what root did it grow from? Who's doin' this? Who's killin' us, robbin' us of life and light?" First you get walloped with viscera, then you get beaned by blather. Those existential speculations don't derive from the screenplay's source, an archetypal but otherwise down-to-earth 1962 novel by James Jones (who also wrote From Here to Eternity ) about the American invasion of the South Pacific island of Guadalcanal. They're central to Malick's vision of the story, however, and not specious. In the combat genre, the phrase "war is hell" usually means nothing more than that it's a bummer to lose a limb or two, or to see your buddy get his head blown off. A true work of art owes us more than literal horrors, and Malick obliges by making his theater of war the setting for nothing less than a meditation on the existence of God. He tells the story solemnly, in three parts, with a big-deal cast (Sean Penn, Nick Nolte, John Cusack) and a few other major stars (John Travolta, Woody Harrelson, George Clooney) dropping by for cameos. After an Edenic prelude, in which a boyishly idealistic absent without leave soldier, Pvt. Witt (Jim Caviezel), swims with native youths to the accompaniment of a heavenly children's choir, the first part sees the arrival of the Allied forces on the island, introduces the principal characters (none of whom amounts to a genuine protagonist), and lays out the movie's geographical and philosophical terrain. The centerpiece--the fighting--goes on for over an hour and features the most frantic and harrowing sequences, chiefly the company's initially unsuccessful frontal assault on a Japanese hilltop bunker. The coda lasts nearly 40 minutes and is mostly talk and cleanup, the rhythms growing more relaxed until a final, incongruous spasm of violence--whereupon the surviving soldiers pack their gear and motor off to another South Pacific battle. In the final shot, a twisted tree grows on the waterline of the beach, the cycle of life beginning anew. The Thin Red Line has a curious sound-scape, as the noise of battle frequently recedes to make room for interior monologues and Hans Zimmer's bump-bump, minimalist New Age music. Pvt. Bell (Ben Chaplin) talks to his curvy, redheaded wife, viewed in deliriously sensual flashbacks. ("Love: Where does it come from? Who lit this flame in us?") Lt. Col. Tall (Nolte), a borderline lunatic passed over one too many times for promotion and itching to win a battle no matter what the human cost, worries groggily about how his men perceive him. 
The dreamer Witt poses folksy questions about whether we're all a part of one big soul. If the movie has a spine, it's his off-and-on dialogue with Sgt. Welsh (Penn), who's increasingly irritated by the private's beatific, almost Billy Budd-like optimism. Says Welsh, "In this world, a man himself is nothin', and there ain't no world but this one." Replies Witt, high cheekbones glinting, "I seen another world." At first it seems as if Witt will indeed be Billy Budd to Welsh's vindictive Claggart. But if Witt is ultimately an ethereal martyr, Welsh turns out to be a Bogart-like romantic who can't stop feeling pain in the face of an absent God. He speaks the movie's epitaph, "Darkness and light, strife and love: Are they the workings of one mind, the feature of the same face? O my soul, let me be in you now. Look out through my eyes. Look out at the things you made, all things shining." Malick puts a lot of shining things on the screen: soldiers, natives, parrots, bats, rodents, visions of Eden by way of National Geographic and of the Fall by way of Alpo. Malick's conception of consciousness distributes it among the animate and inanimate alike; almost every object is held up for rapturous contemplation. I could cite hundreds of images: A soldier in a rocking boat hovers over a letter he's writing, which is crammed from top to bottom and side to side with script. (You don't know the man, but you can feel in an instant his need to cram everything in.) A small, white-bearded Melanesian man strolls nonchalantly past a platoon of tensely trudging grunts who can't believe they're encountering this instead of a hail of Japanese bullets. Two shots bring down the first pair of soldiers to advance on the hill; a second later, the sun plays mystically over the tall, yellow grass that has swallowed their bodies. John Toll's camera rushes in on a captured Japanese garrison: One Japanese soldier shrieks; another, skeletal, laughs and laughs; a third weeps over a dying comrade. The face of a Japanese soldier encased in earth speaks from the dead, "Are you righteous? Know that I was, too." Whether or not these pearllike epiphanies are strung is another matter. Malick throws out his overarching theme--is nature two-sided, at war with itself?--in the first few minutes but, for all his startling juxtapositions, he never dramatizes it with anything approaching the clarity of, say, Brian De Palma's Casualties of War (1989). Besides the dialogue between Welsh and Witt, The Thin Red Line 's other organizing story involves a wrenching tug of war between Nolte's ambition-crazed Tall and Capt. Staros (Elias Koteas), who refuses an order to send his men on what will surely be a suicidal--and futile--assault on a bunker. But matters of cause and effect don't really interest Malick. Individual acts of conscience can and do save lives, and heroism can win a war or a battle, he acknowledges. But Staros is ultimately sent packing, and Malick never bothers to trace the effect of his action on the Guadalcanal operation. In fact, the entire battle seems to take place in a crazed void. Tall quotes Homer's "rosy-fingered dawn" and orders a meaningless bombardment to "buck the men up--it'll look like the Japs are catching hell." Soldiers shoot at hazy figures, unsure whether they're Japanese or American. Men collide, blow themselves in half with their own mishandled grenades, stab themselves frantically with morphine needles, shove cigarettes up their noses to keep the stench of the dying and the dead at bay. 
A tiny bird, mortally wounded, flutters in the grass. Malick is convincing--at times overwhelming--on the subject of chaos. It's when he tries to ruminate on order that he gets gummed up, retreating to one of his gaseous multiple mouthpieces: "Where is it that we were together? Who is it that I lived with? Walked with? The brother. ... The friend. ... One mind." I think I'd have an easier time with Malick's metaphysical speculations if I had a sense of some concomitant geopolitical ones--central to any larger musings on forces of nature as viewed through the prism of war. Couldn't it be that the German and Japanese fascist orders were profoundly anti-natural, and that the Allies' cause was part of a violent but natural correction? You don't have to buy into Spielberg's Lincolnesque pieties in Saving Private Ryan to believe that there's a difference between World War II and Vietnam (or, for that matter, World War II and the invasion of Grenada or our spats with Iraq). While he was at Harvard, Malick might have peeled himself off the lap of his pointy-headed mentor, Stanley Cavell, the philosopher and film theorist, and checked out a few of Michael Waltzer's lectures on just and unjust wars. Maybe then he'd view Guadalcanal not in an absurdist vacuum (the soldiers come, they kill and are killed, they leave) but in the larger context of a war that was among the most rational (in its aims, if not its methods) fought in the last several centuries. For all his visionary filmmaking, Malick's Zen neutrality sometimes seems like a cultivated--and pretentious--brand of fatuousness. John Travolta's empty nightclub impersonation of Bill Clinton in Primary Colors (1998) had one positive result: It gave him a jump-start on Jan Schlichtmann, the reckless personal injury lawyer at the center of A Civil Action . Travolta's Schlichtmann is much more redolent of Clinton: slick and selfish and corrupt in lots of ways but basically on the side of the angels, too proud and arrogant to change tactics when all is certainly lost. Schlichtmann pursued--and more or less blew--a civil liability case against the corporate giants Beatrice and W.R. Grace over the allegedly carcinogenic water supply of Woburn, Mass. Boston writer Jonathan Harr, in the book the movie is based on, went beyond the poison in the Woburn wells to evoke (stopping just short of libel) the poison of the civil courts, where platoons of overpaid corporate lawyers can drive opponents with pockets less deep and psyches less stable into bankruptcy and hysteria. Director Steven Zaillian's version doesn't capture the mounting rage that one experiences while reading Harr's book, or even the juicy legal machinations that Francis Ford Coppola giddily manipulated in his underrated adaptation of John Grisham's The Rainmaker (1997). But A Civil Action is a sturdy piece of work, an old-fashioned conversion narrative with some high-tech zip. Schlichtmann doesn't take this "orphan" case--brought by the parents of several children who died of leukemia--because he wants to do good but because he figures that Grace and Beatrice will fork over huge sums of money to keep the parents from testifying publicly about their children's last days. He might succeed, too, if it weren't for Jerome Facher (Robert Duvall), the Beatrice lawyer who knows how to keep Schlichtmann shadowboxing while his small firm's financial resources dwindle to nothing. 
Zaillian is at his most assured when he cuts back and forth between Facher's Harvard Law School lectures on what not to do in court and Schlichtmann's fumbling prosecution. The sequence has the extra dimension of good journalism: It dramatizes and comments simultaneously. Plus, it gives Duvall a splendid platform for impish understatement. (Duvall has become more fun to watch than just about anyone in movies.) Elsewhere, Zaillian takes a more surface approach, sticking to legal minutiae and rarely digging for the deeper evil. As in his Searching for Bobby Fischer (1993), the outcome of every scene is predictable, but how Zaillian gets from beat to beat is surprisingly fresh. He also gets sterling bit performances from Sydney Pollack as the spookily sanguine Grace CEO, William H. Macy as Schlichtmann's rabbity accountant, and Kathleen Quinlan as the mother of one of the victims. Quinlan knows that when you're playing a woman who has lost a child you don't need to emote--you reveal the emotion by trying not to emote. To the families involved in the Woburn tragedy, the real climax of this story isn't the downbeat ending of the book or the sleight of hand, "let's call the Environmental Protection Agency," upbeat ending of the movie. The climax is the publication of a book that takes the plaintiffs' side and that remains on the best-seller list in hardcover and paperback for years. The climax is the movie starring John Travolta. Beatrice and Grace made out OK legally, but some of us will never use their products again without thinking about Travolta losing his shirt in the name of those wasted-away little kids.
B. movies that dig deeper into life's realities
What caught Natalie's attention at the Kudos Room and prompted the chat with Si? A. The bartender introduced the two after serving them drinks at the same time. B. She thought he was attractive enough and she was bored. C. He had offered to buy her drinks all night. D. She noticed his space pin.
SPACEMAN ON A SPREE BY MACK REYNOLDS Illustrated by Nodel [Transcriber's Note: This etext was produced from Worlds of Tomorrow June 1963 Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] What's more important—Man's conquest of space, or one spaceman's life? I They gave him a gold watch. It was meant to be symbolical, of course. In the old tradition. It was in the way of an antique, being one of the timepieces made generations past in the Alpine area of Eur-Asia. Its quaintness lay in the fact that it was wound, not electronically by power-radio, but by the actual physical movements of the bearer, a free swinging rotor keeping the mainspring at a constant tension. They also had a banquet for him, complete with speeches by such bigwigs of the Department of Space Exploration as Academician Lofting Gubelin and Doctor Hans Girard-Perregaux. There was also somebody from the government who spoke, but he was one of those who were pseudo-elected and didn't know much about the field of space travel nor the significance of Seymour Pond's retirement. Si didn't bother to remember his name. He only wondered vaguely why the cloddy had turned up at all. In common with recipients of gold watches of a score of generations before him, Si Pond would have preferred something a bit more tangible in the way of reward, such as a few shares of Variable Basic to add to his portfolio. But that, he supposed, was asking too much. The fact of the matter was, Si knew that his retiring had set them back. They hadn't figured he had enough shares of Basic to see him through decently. Well, possibly he didn't, given their standards. But Space Pilot Seymour Pond didn't have their standards. He'd had plenty of time to think it over. It was better to retire on a limited crediting, on a confoundedly limited crediting, than to take the two or three more trips in hopes of attaining a higher standard. He'd had plenty of time to figure it out, there alone in space on the Moon run, there on the Venus or Mars runs. There on the long, long haul to the Jupiter satellites, fearfully checking the symptoms of space cafard, the madness compounded of claustrophobia, monotony, boredom and free fall. Plenty of time. Time to decide that a one room mini-auto-apartment, complete with an autochair and built-in autobar, and with one wall a teevee screen, was all he needed to find contentment for a mighty long time. Possibly somebody like Doc Girard-Perregaux might be horrified at the idea of living in a mini-auto-apartment ... not realizing that to a pilot it was roomy beyond belief compared to the conning tower of a space craft. No. Even as Si listened to their speeches, accepted the watch and made a halting little talk of his own, he was grinning inwardly. There wasn't anything they could do. He had them now. He had enough Basic to keep him comfortably, by his standards, for the rest of his life. He was never going to subject himself to space cafard again. Just thinking about it, now, set the tic to going at the side of his mouth. They could count down and blast off, for all he gave a damn. The gold watch idea had been that of Lofting Gubelin, which was typical, he being in the way of a living anachronism himself. In fact, Academician Gubelin was possibly the only living man on North America who still wore spectacles. His explanation was that a phobia against having his eyes touched prohibited either surgery to remould his eyeballs and cure his myopia, or contact lenses. 
That was only an alibi so far as his closest associate, Hans Girard-Perregaux, was concerned. Doctor Girard-Perregaux was convinced Gubelin would have even worn facial hair, had he but a touch more courage. Gubelin longed for yesteryear, a seldom found phenomenon under the Ultrawelfare State. Slumped in an autochair in the escape room of his Floridian home, Lofting Gubelin scowled at his friend. He said, acidly, "Any more bright schemes, Hans? I presume you now acknowledge that appealing to the cloddy's patriotism, sentiment and desire for public acclaim have miserably failed." Girard-Perregaux said easily, "I wouldn't call Seymour Pond a cloddy. In his position, I am afraid I would do the same thing he has." "That's nonsense, Hans. Zoroaster! Either you or I would gladly take Pond's place were we capable of performing the duties for which he has been trained. There aren't two men on North America—there aren't two men in the world!—who better realize the urgency of continuing our delving into space." Gubelin snapped his fingers. "Like that, either of us would give our lives to prevent man from completely abandoning the road to his destiny." His friend said drily, "Either of us could have volunteered for pilot training forty years ago, Lofting. We didn't." "At that time there wasn't such a blistering percentage of funkers throughout this whole blistering Ultrawelfare State! Who could foresee that eventually our whole program would face ending due to lack of courageous young men willing to take chances, willing to face adventure, willing to react to the stimulus of danger in the manner our ancestors did?" Girard-Perregaux grunted his sarcasm and dialed a glass of iced tea and tequila. He said, "Nevertheless, both you and I conform with the present generation in finding it far more pleasant to follow one's way of life in the comfort of one's home than to be confronted with the unpleasantness of facing nature's dangers in more adventurous pastimes." Gubelin, half angry at his friend's argument, leaned forward to snap rebuttal, but the other was wagging a finger at him negatively. "Face reality, Lofting. Don't require or expect from Seymour Pond more than is to be found there. He is an average young man. Born in our Ultrawelfare State, he was guaranteed his fundamental womb-to-tomb security by being issued that minimum number of Basic shares in our society that allows him an income sufficient to secure the food, clothing, shelter, medical care and education to sustain a low level of subsistence. Percentages were against his ever being drafted into industry. Automation being what it is, only a fraction of the population is ever called up. But Pond was. His industrial aptitude dossier revealed him a possible candidate for space pilot, and it was you yourself who talked him into taking the training ... pointing out the more pragmatic advantages such as complete retirement after but six trips, added shares of Basic so that he could enjoy a more comfortable life than most and the fame that would accrue to him as one of the very few who still participate in travel to the planets. Very well. He was sold. Took his training, which, of course, required long years of drudgery to him. Then, performing his duties quite competently, he made his six trips. He is now legally eligible for retirement. He was drafted into the working force reserves, served his time, and is now free from toil for the balance of his life. Why should he listen to our pleas for a few more trips?" "But has he no spirit of adventure? 
Has he no feeling for...." Girard-Perregaux was wagging his finger again, a gesture that, seemingly mild though it was, had an astonishing ability to break off the conversation of one who debated with the easy-seeming, quiet spoken man. He said, "No, he hasn't. Few there are who have, nowadays. Man has always paid lip service to adventure, hardships and excitement, but in actuality his instincts, like those of any other animal, lead him to the least dangerous path. Today we've reached the point where no one need face danger—ever. There are few who don't take advantage of the fact. Including you and me, Lofting, and including Seymour Pond." His friend and colleague changed subjects abruptly, impatiently. "Let's leave this blistering jabber about Pond's motivation and get to the point. The man is the only trained space pilot in the world. It will take months, possibly more than a year, to bring another novitiate pilot to the point where he can safely be trusted to take our next explorer craft out. Appropriations for our expeditions have been increasingly hard to come by—even though in our minds, Hans, we are near important breakthroughs, breakthroughs which might possibly so spark the race that a new dream to push man out to the stars will take hold of us. If it is admitted that our organization has degenerated to the point that we haven't a single pilot, then it might well be that the Economic Planning Board, and especially those cloddies on Appropriations, will terminate the whole Department of Space Exploration." "So...." Girard-Perregaux said gently. "So some way we've got to bring Seymour Pond out of his retirement!" "Now we are getting to matters." Girard-Perregaux nodded his agreement. Looking over the rim of his glass, his eyes narrowed in thought as his face took on an expression of Machiavellianism. "And do not the ends justify the means?" Gubelin blinked at him. The other chuckled. "The trouble with you, Lofting, is that you have failed to bring history to bear on our problem. Haven't you ever read of the sailor and his way of life?" "Sailor? What in the name of the living Zoroaster has the sailor got to do with it?" "You must realize, my dear Lofting, that our Si Pond is nothing more than a latter-day sailor, with many of the problems and view-points, tendencies and weaknesses of the voyager of the past. Have you never heard of the seaman who dreamed of returning to the village of his birth and buying a chicken farm or some such? All the long months at sea—and sometimes the tramp freighters or whaling craft would be out for years at a stretch before returning to home port—he would talk of his retirement and his dream. And then? Then in port, it would be one short drink with the boys, before taking his accumulated pay and heading home. The one short drink would lead to another. And morning would find him, drunk, rolled, tattooed and possibly sleeping it off in jail. So back to sea he'd have to go." Gubelin grunted bitterly. "Unfortunately, our present-day sailor can't be separated from his money quite so easily. If he could, I'd personally be willing to lure him down some dark alley, knock him over the head and roll him myself. Just to bring him back to his job again." He brought his wallet from his pocket, and flicked it open to his universal credit card. "The ultimate means of exchange," he grunted. "Nobody can spend your money, but you, yourself. Nobody can steal it, nobody can, ah, con you out of it. 
Just how do you expect to sever our present-day sailor and his accumulated nest egg?" The other chuckled again. "It is simply a matter of finding more modern methods, my dear chap." II Si Pond was a great believer in the institution of the spree. Any excuse would do. Back when he had finished basic education at the age of twenty-five and was registered for the labor draft, there hadn't been a chance in a hundred that he'd have the bad luck to have his name pulled. But when it had been, Si had celebrated. When he had been informed that his physical and mental qualifications were such that he was eligible for the most dangerous occupation in the Ultrawelfare State and had been pressured into taking training for space pilot, he had celebrated once again. Twenty-two others had taken the training with him, and only he and Rod Cameroon had passed the finals. On this occasion, he and Rod had celebrated together. It had been quite a party. Two weeks later, Rod had burned on a faulty take-off on what should have been a routine Moon run. Each time Si returned from one of his own runs, he celebrated. A spree, a bust, a bat, a wing-ding, a night on the town. A commemoration of dangers met and passed. Now it was all over. At the age of thirty he was retired. Law prevented him from ever being called up for contributing to the country's labor needs again. And he most certainly wasn't going to volunteer. He had taken his schooling much as had his contemporaries. There wasn't any particular reason for trying to excell. You didn't want to get the reputation for being a wise guy, or a cloddy either. Just one of the fellas. You could do the same in life whether you really studied or not. You had your Inalienable Basic stock, didn't you? What else did you need? It had come as a surprise when he'd been drafted for the labor force. In the early days of the Ultrawelfare State, they had made a mistake in adapting to the automation of the second industrial revolution. They had attempted to give everyone work by reducing the number of working hours in the day, and the number of working days in the week. It finally became ludicrous when employees of industry were working but two days a week, two hours a day. In fact, it got chaotic. It became obvious that it was more practical to have one worker putting in thirty-five hours a week and getting to know his job well, than it was to have a score of employees, each working a few hours a week and none of them ever really becoming efficient. The only fair thing was to let the technologically unemployed remain unemployed, with their Inalienable Basic stock as the equivalent of unemployment insurance, while the few workers still needed put in a reasonable number of hours a day, a reasonable number of weeks a year and a reasonable number of years in a life time. When new employees were needed, a draft lottery was held. All persons registered in the labor force participated. If you were drawn, you must need serve. The dissatisfaction those chosen might feel at their poor luck was offset by the fact that they were granted additional Variable Basic shares, according to the tasks they fulfilled. Such shares could be added to their portfolios, the dividends becoming part of their current credit balance, or could be sold for a lump sum on the market. Yes, but now it was all over. He had his own little place, his own vacuum-tube vehicle and twice the amount of shares of Basic that most of his fellow citizens could boast. Si Pond had it made. A spree was obviously called for. 
He was going to do this one right. This was the big one. He'd accumulated a lot of dollars these past few months and he intended to blow them, or at least a sizeable number of them. His credit card was burning a hole in his pocket, as the expression went. However, he wasn't going to rush into things. This had to be done correctly. Too many a spree was played by ear. You started off with a few drinks, fell in with some second rate mopsy and usually wound up in a third rate groggery where you spent just as much as though you'd been in the classiest joint in town. Came morning and you had nothing to show for all the dollars that had been spent but a rum-head. Thus, Si was vaguely aware, it had always been down through the centuries since the Phoenecian sailor, back from his year-long trip to the tin mines of Cornwall, blew his hard earned share of the voyage's profits in a matter of days in the wine shops of Tyre. Nobody gets quite so little for his money as that loneliest of all workers, he who must leave his home for distant lands, returning only periodically and usually with the salary of lengthy, weary periods of time to be spent hurriedly in an attempt to achieve the pleasure and happiness so long denied him. Si was going to do it differently this time. Nothing but the best. Wine, women, song, food, entertainment. The works. But nothing but the best. To start off, he dressed with great care in the honorable retirement-rank suit he had so recently purchased. His space pin he attached carefully to the lapel. That was a good beginning, he decided. A bit of prestige didn't hurt you when you went out on the town. In the Ultrawelfare State hardly one person in a hundred actually ever performed anything of value to society. The efforts of most weren't needed. Those few who did contribute were awarded honors, decorations, titles. Attired satisfactorily, Si double-checked to see that his credit card was in his pocket. As an after-thought, he went over to the auto-apartment's teevee-phone, flicked it on, held the card to the screen and said, "Balance check, please." In a moment, the teevee-phone's robot voice reported, "Ten shares of Inalienable Basic. Twelve shares of Variable Basic, current value, four thousand, two hundred and thirty-three dollars and sixty-two cents apiece. Current cash credit, one thousand and eighty-four dollars." The screen went dead. One thousand and eighty-four dollars. That was plenty. He could safely spend as much as half of it, if the spree got as lively as he hoped it would. His monthly dividends were due in another week or so, and he wouldn't have to worry about current expenses. Yes, indeedy, Si Pond was as solvent as he had ever been in his thirty years. He opened the small, closet-like door which housed his vacuum-tube two-seater, and wedged himself into the small vehicle. He brought down the canopy, dropped the pressurizer and considered the dial. Only one place really made sense. The big city. He considered for a moment, decided against the boroughs of Baltimore and Boston, and selected Manhattan instead. He had the resources. He might as well do it up brown. He dialed Manhattan and felt the sinking sensation that presaged his car's dropping to tube level. While it was being taken up by the robot controls, being shuttled here and there preparatory to the shot to his destination, he dialed the vehicle's teevee-phone for information on the hotels of the island of the Hudson. 
He selected a swank hostelry he'd read about and seen on the teevee casts of society and celebrity gossip reporters, and dialed it on the car's destination dial. "Nothing too good for ex-Space Pilot Si Pond," he said aloud. The car hesitated for a moment, that brief hesitation before the shot, and Si took the involuntary breath from which only heroes could refrain. He sank back slowly into the seat. Moments passed, and the direction of the pressure was reversed. Manhattan. The shuttling began again, and one or two more traversing sub-shots. Finally, the dash threw a green light and Si opened the canopy and stepped into his hotel room. A voice said gently, "If the quarters are satisfactory, please present your credit card within ten minutes." Si took his time. Not that he really needed it. It was by far the most swank suite he had ever seen. One wall was a window of whatever size the guest might desire and Si touched the control that dilated it to the full. His view opened in such wise that he could see both the Empire State Building Museum and the Hudson. Beyond the river stretched the all but endless city which was Greater Metropolis. He didn't take the time to flick on the menu, next to the auto-dining table, nor to check the endless potables on the autobar list. All that, he well knew, would be superlative. Besides, he didn't plan to dine or do much drinking in his suite. He made a mock leer. Not unless he managed to acquire some feminine companionship, that was. He looked briefly into the swimming pool and bath, then flopped himself happily onto the bed. It wasn't up to the degree of softness he presently desired, and he dialed the thing to the ultimate in that direction so that with a laugh he sank almost out of sight into the mattress. He came back to his feet, gave his suit a quick patting so that it fell into press and, taking his credit card from his pocket, put it against the teevee-phone screen and pressed the hotel button so that registration could be completed. For a moment he stood in the center of the floor, in thought. Take it easy, Si Pond, take it all easy, this time. No throwing his dollars around in second-class groggeries, no eating in automated luncheterias. This time, be it the only time in his life, he was going to frolic in the grand manner. No cloddy was Si Pond. He decided a drink was in order to help him plan his strategy. A drink at the hotel's famous Kudos Room where celebrities were reputed to be a dime a dozen. He left the suite and stepped into one of the elevators. He said, "Kudos Room." The auto-elevator murmured politely, "Yes, sir, the Kudos Room." At the door to the famous rendezvous of the swankiest set, Si paused a moment and looked about. He'd never been in a place like this, either. However, he stifled his first instinct to wonder about what this was going to do to his current credit balance with an inner grin and made his way to the bar. There was actually a bartender. Si Pond suppressed his astonishment and said, offhand, attempting an air of easy sophistication, "Slivovitz Sour." "Yes, sir." The drinks in the Kudos Room might be concocted by hand, but Si noticed they had the routine teevee screens built into the bar for payment. He put his credit card on the screen immediately before him when the drink came, and had to quell his desire to dial for a balance check, so as to be able to figure out what the Sour had cost him. Well, this was something like it. 
This was the sort of thing he'd dreamed about, out there in the great alone, seated in the confining conning tower of his space craft. He sipped at the drink, finding it up to his highest expectations, and then swiveled slightly on his stool to take a look at the others present. To his disappointment, there were no recognizable celebrities. None that he placed, at least—top teevee stars, top politicians of the Ultrawelfare State or Sports personalities. He turned back to his drink and noticed, for the first time, the girl who occupied the stool two down from him. Si Pond blinked. He blinked and then swallowed. " Zo-ro-as-ter ," he breathed. She was done in the latest style from Shanghai, even to the point of having cosmetically duplicated the Mongolian fold at the corners of her eyes. Every pore, but every pore, was in place. She sat with the easy grace of the Orient, so seldom found in the West. His stare couldn't be ignored. She looked at him coldly, turned to the bartender and murmured, "A Far Out Cooler, please, Fredric." Then deliberately added, "I thought the Kudos Room was supposed to be exclusive." There was nothing the bartender could say to that, and he went about building the drink. Si cleared his throat. "Hey," he said, "how about letting this one be on me?" Her eyebrows, which had been plucked and penciled to carry out her Oriental motif, rose. "Really!" she said, drawing it out. The bartender said hurriedly, "I beg your pardon, sir...." The girl, her voice suddenly subtly changed, said, "Why, isn't that a space pin?" Si, disconcerted by the sudden reversal, said, "Yeah ... sure." "Good Heavens, you're a spaceman?" "Sure." He pointed at the lapel pin. "You can't wear one unless you been on at least a Moon run." She was obviously both taken back and impressed. "Why," she said, "you're Seymour Pond, the pilot. I tuned in on the banquet they gave you." Si, carrying his glass, moved over to the stool next to her. "Call me Si," he said. "Everybody calls me Si." She said, "I'm Natalie. Natalie Paskov. Just Natalie. Imagine meeting Seymour Pond. Just sitting down next to him at a bar. Just like that." "Si," Si said, gratified. Holy Zoroaster, he'd never seen anything like this rarified pulchritude. Maybe on teevee, of course, one of the current sex symbols, but never in person. "Call me Si," he said again. "I been called Si so long, I don't even know who somebody's talking to if they say Seymour." "I cried when they gave you that antique watch," she said, her tone such that it was obvious she hadn't quite adjusted as yet to having met him. Si Pond was surprised. "Cried?" he said. "Well, why? I was kind of bored with the whole thing. But old Doc Gubelin, I used to work under him in the Space Exploration department, he was hot for it." " Academician Gubelin?" she said. "You just call him Doc ?" Si was expansive. "Why, sure. In the Space Department we don't have much time for formality. Everybody's just Si, and Doc, and Jim. Like that. But how come you cried?" She looked down into the drink the bartender had placed before her, as though avoiding his face. "I ... I suppose it was that speech Doctor Girard-Perregaux made. There you stood, so fine and straight in your space-pilot uniform, the veteran of six exploration runs to the planets...." "Well," Si said modestly, "two of my runs were only to the Moon." "... and he said all those things about man's conquest of space. And the dream of the stars which man has held so long. And then the fact that you were the last of the space pilots. 
The last man in the whole world trained to pilot a space craft. And here you were, retiring." Si grunted. "Yeah. That's all part of the Doc's scheme to get me to take on another three runs. They're afraid the whole department'll be dropped by the Appropriations Committee on this here Economic Planning Board. Even if they can find some other patsy to train for the job, it'd take maybe a year before you could even send him on a Moon hop. So old man Gubelin, and Girard-Perregaux too, they're both trying to pressure me into more trips. Otherwise they got a Space Exploration Department, with all the expense and all, but nobody to pilot their ships. It's kind of funny, in a way. You know what one of those spaceships costs?" "Funny?" she said. "Why, I don't think it's funny at all." Si said, "Look, how about another drink?" Natalie Paskov said, "Oh, I'd love to have a drink with you, Mr...." "Si," Si said. He motioned to the bartender with a circular twist of the hand indicating their need for two more of the same. "How come you know so much about it? You don't meet many people who are interested in space any more. In fact, most people are almost contemptuous, like. Think it's kind of a big boondoggle deal to help use up a lot of materials and all and keep the economy going." Natalie said earnestly, "Why, I've been a space fan all my life. I've read all about it. Have always known the names of all the space pilots and everything about them, ever since I was a child. I suppose you'd say I have the dream that Doctor Girard-Perregaux spoke about." Si chuckled. "A real buff, eh? You know, it's kind of funny. I was never much interested in it. And I got a darn sight less interested after my first run and I found out what space cafard was." She frowned. "I don't believe I know much about that." Sitting in the Kudos Room with the most beautiful girl to whom he had ever talked, Si could be nonchalant about the subject. "Old Gubelin keeps that angle mostly hushed up and out of the magazine and newspaper articles. Says there's enough adverse publicity about space exploration already. But at this stage of the game when the whole ship's crammed tight with this automatic scientific apparatus and all, there's precious little room in the conning tower and you're the only man aboard. The Doc says later on when ships are bigger and there's a whole flock of people aboard, there won't be any such thing as space cafard, but...." Of a sudden the right side of Si Pond's mouth began to tic and he hurriedly took up his drink and knocked it back.
D. She noticed his space pin.
What can be inferred about Spironolactone in Mr. Nilsson's treatment from 2005 to 2008? Choose the correct answer from the following options: A. It was consistently used at the same dosage and frequency. B. Its frequency of use increased over time. C. It was not included in the medication regimen in March 2008. D. The dosage was increased by July 2008. E. Only used during hospital admissions.
### Patient Report 0

**Dear colleague, **

We are reporting on the inpatient stay of our patient, Emil Nilsson, born on 12/04/2004, who was under our inpatient care from 01/26/05 to 02/02/05.

**Diagnoses:**

- Upper respiratory tract infection
- Hypoplastic Left Heart Syndrome
- Persistent foramen ovale
- Persistent ductus arteriosus botalli (under Prostaglandin E1 Infusion)
- Dysplasia of the mitral valve
- Damus-Kaye-Stansel procedure and aortopulmonary anastomosis on the right (modified BT-Shunt)
- Secondary thoracic closure
- Tricuspid valve insufficiency
- Mild aortic valve insufficiency

**Medical History:** Emil has a Hypoplastic Left Heart Syndrome. The corrective procedure, including the Damus-Kaye-Stansel and Blalock-Taussig Anastomosis, took place three months ago. Under the current medication, the cardiac situation has been stable. He has shown satisfactory weight gain. Emil is the first child of parents with healthy hearts. An external nursing service provides home care every two days. The parents feel confident in the daily care of the child, including the placement of gastric tubes.

**Current Presentation:** Since the evening before admission, Emil had elevated temperatures up to 40°C with a slight runny nose. No coughing, no diarrhea, no vomiting. After an outpatient visit to the treating pediatrician, Emil was referred to our hospital due to the complex cardiac history. Admission for the Glenn procedure is scheduled for 01/20/05.

**Physical Examination:** Stable appearance and condition. Pinkish skin color, good skin turgor.

- Cardiovascular: Rhythmic, 3/6 systolic murmur auscultated on the left parasternal side, radiating to the back.
- Respiratory: Bilateral vesicular breath sounds, no rales.
- Abdomen: Soft and unremarkable, no hepatosplenomegaly, no pathological resistances.
- ENT exam, except for runny nose, unremarkable.
- Good spontaneous motor skills with cautious head control.
- Current Weight: 4830 g; Current Length: 63.4 cm. Transcutaneous Oxygen Saturation: 78%.
- Blood Pressure Measurement (mmHg): Left Upper Arm 89/56 (66), Right Upper Arm 90/45

**Current Medication:**

  **Medication**                    **Dosage**   **Frequency**
  --------------------------------- ------------ ---------------
  Captopril (Capoten)               2 mg         1-1-1
  Carvedilol (Coreg)                0.2 mg       1-0-1
  Furosemide (Lasix)                4 mg         1-1-1-1
  Spironolactone (Aldactone)        10 mg        1-0-0-0
  Hydrochlorothiazide (Microzide)   2 mg         1-0-1
  Aspirin                           10 mg        1-0-0
  Omeprazole (Prilosec)             2.5 mg       1-0-1
  Vitamin D (Drisdol)               500 IU       Once daily

**ECG on 01/27/2006:** Sinus rhythm, heart rate 83/min, sagittal type. P: 60 ms, PQ: 100 ms, QRS: 80 ms, QT: 260 ms. T-wave negative in V1 and V2, biphasic in V3, positive from V4 onward, no arrhythmias. Signs of right ventricular hypertrophy.

**Echocardiography on 01/27/2006:** Satisfactory function of the morphological right ventricle, small hypoplastic left ventricle with minimal contractility. Hypoplastic mitral and original aortic valve barely opening. Regular flow profile in the neoaorta. Aortic arch and Blalock-Taussig shunt not optimally visible due to restlessness. Trivial tricuspid valve insufficiency.

**Chest X-ray on 01/28/2006:** Widened heart shadow, cardiothoracic ratio 0.5. Slight diffuse increase in markings on the right lung, no signs of pulmonary congestion. Hilum delicate. Recesses visible, no effusion. No localized infiltrations. No pneumothorax.
**Therapy and Progression:** Based on the clinical and paraclinical picture of a pulmonary infection, we treated Emil with intravenous Cefuroxime for five days, along with daily physical therapy. Under this treatment, Emil's condition improved rapidly, with no auscultatory lung abnormalities. CRP and leukocyte count reduced. No fever. In the course of treatment, Emil had temporary diarrhea, which was well managed with adequate fluid substitution. We were able to discharge Emil in a significantly improved and stable general condition on the fifth day of treatment, with a weight of 5060 g. Transcutaneous oxygen saturations were consistently between 70% (during infection) and 85%. Three days later, the mother presented the child again at the emergency department due to vomiting after each meal and diarrhea. After changing the gastric tube and readmission here, there was no more vomiting, and feeding was feasible. Three to four stools of adequate consistency occurred daily. Cardiac medication remained unchanged.

**Medication upon Discharge:**

  **Medication**                    **Dosage**   **Frequency**
  --------------------------------- ------------ ---------------
  Captopril (Capoten)               2 mg         1-0-1
  Carvedilol (Coreg)                0.2 mg       1-0-1
  Furosemide (Lasix)                4 mg         1-1-1-1
  Spironolactone (Aldactone)        10 mg        1-0-0-0
  Hydrochlorothiazide (Microzide)   2 mg         1-0-1
  Aspirin                           10 mg        1-0-0
  Omeprazole (Prilosec)             2.5 mg       1-0-1
  Vitamin D (Drisdol)               500 IU       Once daily

### Patient Report 1

**Dear colleague, **

We are reporting on the inpatient stay of your patient Emil Nilsson, born on 12/04/2004, who received inpatient care from 01/20/2005 to 01/27/2005.

**Diagnoses:**

- Hypoplastic Left Heart Syndrome
- Persistent foramen ovale
- Persistent ductus arteriosus botalli (under Prostaglandin E1 Infusion)
- Dysplasia of the mitral valve
- Damus-Kaye-Stansel Procedure and aortopulmonary anastomosis on the right (modified BT-Shunt)
- Secondary thoracic closure
- Tricuspid valve insufficiency
- Mild aortic valve insufficiency

**Current Presentation:** Bidirectional Glenn Anastomosis, enlargement of the pulmonary trunk, and closure of BT shunt

**Medical History:** We kindly assume that you are familiar with the detailed medical history.

**Medication upon Admission:**

  **Medication**                    **Dosage**   **Frequency**
  --------------------------------- ------------ ---------------
  Captopril (Capoten)               2 mg         1-0-1
  Carvedilol (Coreg)                0.2 mg       1-0-1
  Furosemide (Lasix)                4 mg         1-1-1-1
  Spironolactone (Aldactone)        10 mg        1-0-0-0
  Hydrochlorothiazide (Microzide)   2 mg         1-0-1
  Aspirin                           10 mg        1-0-0
  Omeprazole (Prilosec)             2.5 mg       1-0-1
  Vitamin D                         500 IU       Once daily

**Physical Examination:** Stable general condition, no fever. Gastric tube. Unremarkable sternotomy scar, dry. Drains in situ, unremarkable.

[Heart]{.underline}: Rhythmic heart action, 2/6 systolic murmur audible left parasternal.

[Lungs]{.underline}: Bilateral vesicular breath sounds, no additional sounds.

[Abdomen]{.underline}: Soft liver 1.5 cm below the costal margin. No pathological resistances. Pulses palpable on all sides.

[Current weight:]{.underline} 4765 g; current length: 62 cm; head circumference: 37 cm. Transcutaneous oxygen saturation: 85%.

[Blood pressure (mmHg):]{.underline} Left arm 91/65 (72), right arm 72/55 (63).

**Echocardiography on 01/21/2005 and 01/27/2005:** Global mildly impaired function of the morphologically right systemic ventricle with satisfactory contractility. Minimal tricuspid insufficiency with two small jets (central and septal), Inflow merged Vmax 0.9 m/s.
DKS anastomosis well visible, aortic VTI 14-15 cm. Free flow in Glenn with breath-variable flow pattern, Vmax 0.5 m/s. No pleural effusions, good diaphragmatic mobility bilaterally, no pericardial effusion. Isthmus optically free with Vmax 1.8 m/s.

**Speech Therapy Consultation on 01/23/2005:** No significant orofacial disorders. Observation of drinking behavior recommended initially. Stimulation of sucking with various pacifiers. Instruction given to the father.

**Therapy and Progression:** On 02/15/2006, the BT shunt was severed and a bidirectional Glenn Anastomosis was created, along with an enlargement of the pulmonary artery. The course was uncomplicated with swift extubation and transfer to the intermediate care unit on the second postoperative day. Timely removal of drains and pacemaker wires. The child remained clinically stable throughout the stay. The child's own drinking performance is satisfactory, with varying amounts of fluid intake between 60 and 100 ml per meal. The tube feeding is well tolerated, no vomiting, and discharged without a tube. Stool normal. IV antibiotics were continued until 01/22/2005. Transition from heparinization to daily Aspirin. Inhalation was also stopped during the course with a stable clinical condition. Due to persistently elevated mean pressures of 70 to 80 mmHg and limited global contractility of the morphologically right systemic ventricle, we increased both Carvedilol and Captopril medication. Blood pressures have changed only slightly. Therefore, we request an outpatient long-term blood pressure measurement and, if necessary, further medication optimization. Echocardiographically, we observed impaired but satisfactory contractility of the right systemic ventricle with only minimal tricuspid valve insufficiency, as well as a well-functioning Glenn Anastomosis. No insufficiency of the neoaortic valve with a VTI of 15 cm. No pericardial effusion or pleural effusions upon discharge. A copy of the summary has been sent to the involved external home care service for further outpatient care.

**Medication upon Discharge:**

  **Medication**               **Dosage**   **Frequency**
  ---------------------------- ------------ ---------------
  Captopril (Capoten)          2 mg         1-0-1
  Carvedilol (Coreg)           0.2 mg       1-0-1
  Spironolactone (Aldactone)   10 mg        1-0-0-0
  Iron Supplement              4 drops      1-0-1
  Omeprazole (Prilosec)        2.5 mg       1-0-1
  Vitamin D                    500 IU       Once daily
  Aspirin                      10 mg        1-0-0

### Patient Report 2

**Dear colleague, **

We are reporting to you about the inpatient stay of our patient, Emil Nilsson, born on 12/04/2004. He was admitted to our ward from 03/01/2008 to 03/10/2008.

**Diagnoses:**

- Hypoplastic Left Heart Syndrome
- Persistent foramen ovale
- Persistent ductus arteriosus botalli (under Prostaglandin E1 Infusion)
- Dysplasia of the mitral valve
- Damus-Kaye-Stansel Procedure and aortopulmonary anastomosis on the right (modified BT-Shunt)
- Secondary thoracic closure
- Tricuspid valve insufficiency
- Mild aortic valve insufficiency

**Current Presentation:** Inpatient admission for dental rehabilitation under intubation anesthesia

**Medical History:** We may kindly assume that you are familiar with the medical history. Prior to the planned Fontan completion, dental rehabilitation under intubation anesthesia was required due to the patient's carious dental status, which led to the scheduled inpatient admission.

**Physical Examination:** Friendly toddler in stable general condition, pale skin color, central cyanosis, no edema.

- ENT unremarkable, large tonsils, no cervical lymphadenopathy.
- Heart: Heart sounds clear, rhythmic, 1/6 systolic murmur with a point of maximal intensity over the 3rd intercostal space on the left.
- Lungs: Bilateral equal ventilation, vesicular breath sounds.
- Initial neurological examination unremarkable.
- Current weight: 12.4 kg; current body length: 93 cm.
- Percutaneous oxygen saturation: 76%.
- Blood pressure (mmHg): Right upper arm 117/50, left upper arm 110/57, right lower leg 134/55, left lower leg 146/71.

**Medication upon Admission:**

  **Medication**        **Dosage**   **Frequency**
  --------------------- ------------ -----------------------------------------------
  Captopril (Capoten)   2 mg         1-1-1
  Carvedilol (Coreg)    0.2 mg       1-0-1
  Aspirin               10 mg        1-0-0 (discontinued 10 days before admission)

**ECG at Admission:** Sinus rhythm, heart rate 84/min, sagittal type. P wave 50 ms, PQ interval 120 ms, QRS duration 80 ms, QT interval 360 ms, QTc interval 440 ms, R/S transition in V4, T wave positive in V3 to V6. Persistent S wave in V4 to V6 -1.1 mV, no extrasystoles in the rhythm strip.

**Consultation with Maxillofacial Surgery on 02/03/2008:** Timely wound conditions, clot at positions 55, 65, 84 in situ, Aspirin may be resumed today, further treatment by the Southern Dental Clinic.

**Treatment and Progression:** Upon admission, the necessary pre-interventional diagnostics were performed. Dental rehabilitation (extraction and fillings) was performed without complications under intubation anesthesia on 03/02/2008. After anesthesia, the child experienced pronounced restlessness, requiring a single sedation with intravenous Midazolam. The child's behavior improved over time, and the wound conditions were unremarkable. Discharge on 03/03/2008 after consultation with our maxillofacial surgeon into outpatient follow-up care. We request pediatric cardiology and dental follow-up checks.

**Medication upon Discharge:**

  **Medication**        **Dosage**   **Frequency**
  --------------------- ------------ ---------------
  Captopril (Capoten)   2 mg         1-1-1
  Carvedilol (Coreg)    0.2 mg       1-0-1
  Aspirin               25 mg        1-0-0

**Lab results upon Discharge:**

  **Parameter**                                   **Results**     **Reference Range**
  ----------------------------------------------- --------------- ---------------------
  Calcium                                         2.33 mEq/L      2.10-2.55 mEq/L
  Phosphorus                                      1.12 mEq/L      0.84-1.45 mEq/L
  Osmolality                                      286 mOsm/kg     280-300 mOsm/kg
  Iron                                            20.4 µg/dL      4.8-24.7 µg/dL
  Transferrin Saturation                          28.3%           16.0-45.0%
  Magnesium                                       1.84 mg/dL      1.5-2.3 mg/dL
  Creatinine                                      0.84 mg/dL      0.70-1.20 mg/dL
  Estimated GFR (eGFR CKD-EPI)                    132 mL/min
  Estimated GFR (eGFR Cystatin)                   \>90.0 mL/min
  Blood Urea Nitrogen (BUN)                       29 mg/dL        18-45 mg/dL
  Total Bilirubin                                 0.97 mg/dL      \<1.20 mg/dL
  Direct Bilirubin                                0.34 mg/dL      \<0.30 mg/dL
  Immunoglobulin G                                11.42 g/L       5.49-15.84 g/L
  Immunoglobulin A                                1.94 g/L        0.61-3.48 g/L
  Immunoglobulin M                                0.65 g/L        0.50-1.90 g/L
  Cystatin C                                      0.93 mg/L       0.50-1.00 mg/L
  Transferrin                                     2.89 g/L
  Ferritin                                        54.2 ng/mL      14.0-152.0 ng/mL
  Total Cholesterol                               110 mg/dL       82-192 mg/dL
  Triglycerides                                   64 mg/dL
  Apolipoprotein A1                               0.91 g/L        1.04-2.02 g/L
  ALT                                             37 U/L          \<41 U/L
  AST                                             33 U/L          \<50 U/L
  Alkaline Phosphatase                            138 U/L         55-149 U/L
  Butyrylcholinesterase (Pseudo-Cholinesterase)   5.62 kU/L       5.32-12.92 kU/L
  GLDH                                            3.1 U/L         \<6.4 U/L
  Gamma-GT                                        96 U/L          8-61 U/L
  LDH                                             184 U/L         135-250 U/L
  Parathyroid Hormone                             55.0 pg/mL      15.0-65.0 pg/mL
  25-OH-Vitamin D3                                10.9 ng/mL      20.0-50.0 ng/mL
  Free Thyroxine                                  17.90 ng/dL     9.50-16.40 ng/dL
  TSH                                             3.56 mIU/mL     0.50-4.30 mIU/mL

### Patient Report 3

**Dear colleague, **

We are reporting about the inpatient stay of our patient, Emil Nilsson, born on 12/04/2004.
He was admitted to our ward from 07/02/2008 to 07/23/2008. **Diagnoses:** - Hypoplastic Left Heart Syndrome - Persistent foramen ovale - Persistent ductus arteriosus botalli (under Prostaglandin E1 Infusion) - Dysplasia of the mitral valve - Damus-Kaye-Stansel Procedure and aortopulmonary anastomosis on the right (modified BT-Shunt) - Secondary thoracic closure - Tricuspid valve insufficiency - Mild aortic valve insufficiency **Current Presentation:** Planned admission for Fontan Procedure **Medical History:** We may assume that you are familiar with the detailed medical history. **Physical Examination:** Friendly toddler in stable general condition, pale skin color, central cyanosis, no edema. - ENT unremarkable, large tonsils, no cervical lymphadenopathy. - Heart: Heart sounds clear, rhythmic, 1/6 systolic murmur with a point of maximal intensity over the 3rd intercostal space on the left. - Lungs: Bilateral equal ventilation, vesicular breath sounds. Initial neurological examination unremarkable. - Percutaneous oxygen saturation: 77%. - Blood pressure (mmHg): Right upper arm 124/60, left upper arm 112/59, right lower leg 134/55, left lower leg 146/71. **Medication upon Admission:** **Medication** **Dosage** **Frequency** ---------------------- ------------ --------------- Captopril (Capoten®) 2 mg 1-1-1 Carvedilol (Coreg®) 0.2 mg 1-0-1 Aspirin 25 mg 1-0-0 **Surgical Report:** Median Sternotomy, dissection of adhesions to access the anterior aspect of the heart, cannulation for extracorporeal circulation with bicaval cannulation. Further preparation of the heart, followed by clamping of the inferior vena cava towards the heart. Cutting the vessel, suturing the cardiac end, and then anastomosis of the inferior vena cava with an 18mm Gore-Tex prosthesis, which is subsequently tapered and sutured to the central pulmonary artery in an open anastomosis technique. Resumption of ventilation, smooth termination of extracorporeal circulation. Placement of 2 drains. Layered wound closure. Transesophageal Echocardiogram shows good biventricular function. The patient is transferred back to the ward with ongoing catecholamine support. **ECG on 07/02/2008:** Sinus rhythm, heart rate 76/min, steep type, PQ interval 140 ms, QRS duration 110 ms, QT interval 340 ms, QTc 385 mmHg. ST depression, descending in V2+V3. T-wave positivity from V2. No extrasystoles. No pauses. **Therapy and Progression:** The patient was admitted for a planned Fontan procedure on 07/02/2008. The procedure was performed without complications. An extracardiac conduit without overflow was created. Postoperatively, there was a rapid recovery. Extubation took place 2 hours after the procedure. Peri- and postoperative antibiotic treatment with Cefuroxim was administered. Bilateral pleural effusions were drained using thoracic drains, which were subsequently changed to pigtail drains after transfer to the general ward. Daily aspiration of the pleural effusions was performed. These effusions decreased over time, and the drains were removed on 07/14/2008. No further pleural effusions occurred. A minimal pericardial effusion and ascites were still present. Diuretic therapy was initially continued but could be significantly reduced by the time of discharge. Echocardiography showed a favorable postoperative result. Monitoring of vital signs and consciousness did not reveal any abnormalities. However, the ECG showed occasional idioventricular rhythms during bradycardia. Oxygen saturation ranged between 95% and 100%. 
Scarring revealed a dehiscence in the middle third and apical region. Regular dressing changes and disinfection of the affected wound area were performed. After consulting with our pediatric surgical colleagues, glucose was locally applied. There was no fever. Antibiotic treatment was discontinued after the removal of the pigtail drain, and the postoperatively increased inflammatory parameters had already returned to normal. The patient received physiotherapy, and their general condition improved daily. We were thus able to discharge Emil on 07/23/2008. **Current Recommendations:** - We recommend regular wound care with Octinisept. - Follow-up in the pediatric cardiology outpatient clinic. **Medication upon Discharge:** **Medication** **Dosage** **Frequency** --------------------- ------------ --------------- Captopril (Capoten) 2 mg 1-1-1 Carvedilol (Coreg) 0.2 mg 1-0-1 Aspirin 25 mg 1-0-0 **Lab results upon Discharge:** **Parameter** **Result** **Reference Range** ------------------------------- --------------- --------------------- Calcium 2.54 mEq/L 2.10-2.55 mEq/L Phosphate 1.42 mEq/L 0.84-1.45 mEq/L Osmolality 298 mOsm/kg 280-300 mOsm/kg Iron 20.6 µmol/L 4.8-24.7 µmol/L Transferrin Saturation 34 % 16.0-45.0 % Magnesium 0.61 mEq/L 0.62-0.91 mEq/L Creatinine 0.84 mg/dL 0.70-1.20 mg/dL Estimated GFR (eGFR CKD-EPI) 132 mL/min Estimated GFR (eGFR Cystatin) \>90.0 mL/min Urea 29 mg/dL 18-45 mg/dL Total Bilirubin 0.97 mg/dL \<1.20 mg/dL Direct Bilirubin 0.34 mg/dL \<0.30 mg/dL Immunoglobulin G 11.42 g/L 5.49-15.84 g/L Immunoglobulin A 1.94 g/L 0.61-3.48 g/L Immunoglobulin M 0.65 g/L 0.50-1.90 g/L Cystatin C 0.93 mg/L 0.50-1.00 mg/L Transferrin 2.89 g/L Ferritin 54.2 µg/L 14.0-152.0 µg/L Total Cholesterol 110 mg/dL 82-192 mg/dL Apolipoprotein A1 0.91 g/L 1.04-2.02 g/L ALT 37 U/L \<41 U/L AST 33 U/L \<50 U/L Alkaline Phosphatase 139 U/L 55-149 U/L GLDH 3.5 U/L \<6.4 U/L Gamma-GT 24 U/L 8-61 U/L LDH 145 U/L 135-250 U/L Parathyroid Hormone 57.2 ng/L 15.0-65.0 ng/L 25-OH-Vitamin D3 34.2 nmol/L 50.0-150.0 nmol/L ### Patient Report 4 **Dear colleague, ** We are reporting to you about the inpatient stay of our patient, Emil Nilsson, born on 12/04/2004, who was admitted to our clinic from 10/20/2021 to 10/22/2021. **Diagnoses:** - Hypoplastic Left Heart Syndrome - Persistent foramen ovale - Persistent ductus arteriosus botalli (under Prostaglandin E1 Infusion) - Dysplasia of the mitral valve - Damus-Kaye-Stansel Procedure and aortopulmonary anastomosis on the right (modified BT-Shunt) - Secondary thoracic closure - Tricuspid valve insufficiency - Mild aortic valve insufficiency - Status post Glenn procedure - Fontan conduit retrocardial narrowing, extended hepatic vein window/VCI - Chronic liver congestion with mild fibrosis (sonography) **Procedures**: Diagnostic cardiac catheterization in analgosedation on 10/20/2021. **Medical History:** We kindly assume that the detailed medical history is known to you and refer to previous medical reports from our clinic. The current admission is based on a referral from the outpatient pediatric cardiologist for a diagnostic cardiac catheterization to evaluate Fontan hemodynamics in the context of desaturation during a stress test. Emil reports feeling subjectively well, but during school sports, he can only run briefly before experiencing palpitations and dyspnea. Emil attends a special needs school. He is currently free from infection and fever. 
**Medication upon Admission:** **Medication** **Dosage** **Frequency** --------------------- ------------ --------------- Captopril (Capoten) 2 mg 1-1-1 Carvedilol (Coreg) 0.2 mg 1-0-1 Aspirin 25 mg 1-0-0 **Physical Examination:** Emil is in good general condition and slim build, with no signs of infection. - Cardiac status: Rhythmic heart action, 2/6 systolic murmur. - Pulse status: Normal. - Lungs: Bilateral equal ventilation, vesicular breath sounds, no rales. - Abdomen: Soft, no hepatosplenomegaly. Unremarkable sternal scars. No signs of cardiopulmonary decompensation. - Current weight: 47 kg; current height: 169 cm. - Pulse oximetry oxygen saturation: 95%. - Blood pressure (mmHg): Right upper arm 132/94, left upper arm 121/98, right lower leg 158/94, left lower leg 156/94. **Lab results:** **Parameter** **Result** **Reference Range** ------------------------------- --------------- --------------------- Calcium 2.38 mEq/L 2.10-2.55 mEq/L Phosphate 1.19 mEq/L 0.84-1.45 mEq/L Osmolality 282 mOsm/Kg 280-300 mOsm/Kg Iron 20.0 µg/dL 4.8-24.7 µg/dL Transferrin Saturation 28.1 % 16.0-45.0 % Magnesium 0.79 mEq/L 0.62-0.91 mEq/L Creatinine 0.81 mg/dL 0.70-1.20 mg/dL Estimated GFR (eGFR CKD-EPI) 131 mL/min Estimated GFR (eGFR Cystatin) \>90.0 mL/min Urea (BUN) 27 mg/dL 18-45 mg/dL Total Bilirubin 0.92 mg/dL \<1.20 mg/dL Direct Bilirubin 0.38 mg/dL \<0.30 mg/dL Immunoglobulin G 11.47 g/L 5.49-15.84 g/L Immunoglobulin A 1.99 g/L 0.61-3.48 g/L Immunoglobulin M 0.61 g/L 0.50-1.90 g/L Cystatin C 0.95 mg/L 0.50-1.00 mg/L Transferrin 2.83 g/L Ferritin 54.5 µg/L 14.0-152.0 µg/L Total Cholesterol 110 mg/dL 82-192 mg/dL Triglycerides 62 mg/dL Apolipoprotein A1 0.94 g/L 1.04-2.02 g/L ALT (GPT) 35 U/L \<41 U/L AST (GOT) 32 U/L \<50 U/L Alkaline Phosphatase 135 U/L 55-149 U/L Pseudo-Cholinesterase 5.65 kU/L 5.32-12.92 kU/L GLDH 3.7 U/L \<6.4 U/L Gamma-GT 89 U/L 8-61 U/L LDH 184 U/L 135-250 U/L Parathyroid Hormone 55.0 pg/mL 15.0-65.0 pg/mL 25-OH-Vitamin D3 10.9 ng/mL 50.0-150.0 ng/mL Free Thyroxine 17.90 ng/dL 9.50-16.40 ng/dL TSH 3.56 mIU/L 0.50-4.30 mIU/L **ECG on 10/20/21:** Sinus rhythm, heart rate 79/min, steep type, PQ interval 140 ms, QRS duration 110 ms, QT interval 340 ms, QTc 385 mmHg. ST depression, descending in V2+V3. T-wave positivity from V2. No extrasystoles. No pauses. **ECG on 11/20/2021:** Sinus rhythm, heart rate 70/min, left type, inverted RS wave in lead I, PQ 160, QRS 100 ms, QT 340 ms, QTc 390 ms. ST depression, descending in V1+V2, T-wave positivity from V2, isoelectric in V5/V6, S-wave persistence until V6. Intraventricular conduction disorder. No extrasystoles. No pauses. **Holter monitor from 11/21/2021:** Normal heart rate spectrum, min 64 bpm, median 81 bpm, max 102 bpm, no intolerable bradycardia or pauses, monomorphic ventricular extrasystole in 0.5% of QRS complexes, no couplets or salvos. **Echocardiography on 10/20/2021:** Poor ultrasound conditions, TI I+°, good RV function, no LV cavity, aortic arch normal. No pulmonary embolism after catheterization. **Abdominal Ultrasound on 10/20/2021:** Borderline enlarged liver with extremely hypoechoic basic structure, wide hepatic veins extending into second-order branches, and a barely compressible wide inferior vena cava. The basic architecture is preserved, the ventral contour is smooth, no nodularity. No suspicious focal lesions, no portal vein thrombosis, no ascites, no splenomegaly. 
[Measurement values as follows:]{.underline} ATI damping coefficient (as always in congestive livers) very low, sometimes below 0.45 dB/cm/MHz, thus certainly no steatosis. Elastography with good measurement quality (IQR=0.22) shows 1.9 m/s or 10.9 kPa, significantly elevated values (attributable to the measurement error of all conventional elastography, including Fibroscan, in congestive livers). Dispersion measurement (parametrized not for fibrosis, but for viscosity, here therefore the congestion component) in line with the images at 18 (m/s)/kHz, significantly elevated, thus corroborating that the elastography values are too high. In the synopsis of the different parameterizations as well as the overall image, mild fibrosis at a low F2 level. [Other findings:]{.underline} No enlargement of intra- and extrahepatic bile ducts. Normal-sized gallbladder with echo-free lumen and delicate wall. The pancreas is well defined, with homogeneous parenchyma; no pancreatic duct dilation, no focal lesions. The spleen is homogeneous and not enlarged. Both kidneys are orthotopic and normal in size. The parenchymal rim is not narrowed. The renal collecting system is not dilated; no evidence of stones. The moderately filled bladder is unremarkable. No pathological findings in the pelvis. No enlarged lymph nodes along the large vessels, no free fluid. [Result:]{.underline} Morphologically and parametrically (after downgrading the significantly elevated elastography value due to congestion), there is evidence of chronic congestive liver with mild fibrosis (low F2 level). Otherwise, an unremarkable abdominal overview.

**Cardiac Angiography and Catheterization on 10/20/2021:**

[X-ray data]{.underline}: 5.50 min / 298.00 cGy\*cm²

[Medication]{.underline}: 4 mg Acetaminophen (5 mg/5 mL, 5 mL/amp); 4000 IU Heparin RATIO (25000 IU/5 ml, 5 mL/IJF); 156 mg Propofol 1% MCT (200 mg/20 mL, 20 mL/amp); 5 mg/ml, 5 mL/vial)

[Contrast agent:]{.underline} 105 ml Iomeron 350

[Puncture site]{.underline}: Right femoral vein (Terumo Pediatric Sheath 5F 7 cm). Right femoral artery (Terumo Pediatric Sheath 5F 7 cm).

[Vital Parameters:]{.underline}

- Height: 169.0 cm
- Weight: 47.00 kg
- Body surface area: 1.44 m²

[Catheter course]{.underline}: Puncture of the above-mentioned vessels under analgosedation and local anesthesia. Performance of oximetry, pressure measurements, and angiographies. After completing the examination, removal of the sheaths, Angioseal 6F AFC right, manual compression until hemostasis, and application of a pressure bandage. Transfer of the patient in a cardiopulmonary stable condition to the post-interventional intensive care unit 24i for heparinization and monitoring.

[Pressure values (mmHg):]{.underline}

- VCI: 8 mmHg
- VCS: 9 mmHg
- RV: 103/0-8 syst/diast-edP mmHg
- RPA: 8 syst/diast mmHg
- LPA: 8 syst/diast mmHg
- AoAsc 103/63 (82) syst/diast mmHg
- AoDesc 103/61 (81) syst/diast mmHg
- PCW left: 6 mmHg
- PCW right: 6 mmHg

[Summary]{.underline}: Uncomplicated arterial and venous puncture, 5F right femoral arterial sheath, cannulation of VCI, VCS up to V. anonyma, LPA and RPA with 5F wedge and 5F pigtail catheters. Retrograde aorta to atretic AoV and via Neo-AoV (PV) into RV. Low pressures, Fontan 8 mmHg, TPG 2 mmHg with wedge 6 mmHg, max. RVedP 8 mmHg. No shunt oximetrically, CI 2.7 l/min/m2. No gradient across Neo-AoV and arch. Angiographically no veno-venous collaterals, no MAPCA. Glenn wide, LPA and RPA stenosis-free, well-developed, rapid capillary phase and pulmonary vein return to LA/RA.
Fontan tunnel centrally constricted to 12.5 mm, to VCI 18 mm. Satisfactory function of the hypertrophic right systemic ventricle, mild TI. No Neo-AI, native AoV without flow, normal coronary arteries, wide DKS, aortic arch without any stenosis. **Abdominal Ultrasound on 10/21/2021:** [Clinical Information, Question, Justification:]{.underline} Post-Fontan procedure. Evaluation for chronic congestive liver. [Findings]{.underline}: Moderately enlarged liver with an extremely hypoechoic texture, which is typical for congestive livers. There are dilated liver veins extending into the second-order branches and a barely compressible wide inferior vena cava. The basic architecture of the liver is preserved, and the contour is smooth without nodularity. On the high-frequency scan, there are subtle but significant periportal cuffing enhancements throughout the liver, consistent with mild fibrosis. No suspicious focal lesions, no portal vein thrombosis, no ascites, and no splenomegaly are observed. Measurement values as follows: ATI damping coefficient (as usual in congestive livers) is very low, sometimes less than 0.45 dB/cm/MHz, indicating no steatosis. Shear wave elastography with good measurement quality (IQR=0.22) shows a velocity of 1.9 m/s or 10.9 kPa, which are significantly higher values (attributable to measurement errors inherent in all conventional elastography techniques, including Fibroscan, in congestive livers). Dispersion measurement (parameters not indicating fibrosis but viscosity, which in this case represents congestion) corresponds to the images, significantly elevated at 18 (m/s)/kHz, thus supporting that the shear wave elastography values are too high (and should be lower). Overall, a mild fibrosis at a low F2 level is evident based on the synopsis of various parameterizations and the overall image impression. [Other findings:]{.underline} No dilation of intrahepatic and extrahepatic bile ducts. The gallbladder is of normal size with anechoic lumen and a delicate wall. The pancreas is well-defined with homogeneous parenchyma, no dilation of the pancreatic duct, and no focal lesions. The spleen is homogeneous and not enlarged. Both kidneys are orthotopic and of normal size. The parenchymal rim is not narrowed. No evidence of stones in the renal collecting system. The moderately filled bladder is unremarkable. No pathological findings in the small pelvis. No enlarged lymph nodes along major vessels, and no free fluid. [Conclusion:]{.underline} Morphologically and parametrically (after downgrading the significantly elevated elastography values due to congestion), the findings are consistent with chronic congestive liver with mild fibrosis. Otherwise, the abdominal overview is unremarkable. [Assessment]{.underline}: Very good findings after Norwood I-III, no current need for intervention. In the long term, there may be an indication for BAP/stent expansion of the central conduit constriction. The routine blood test for Fontan patients showed no abnormalities; vitamin D supplementation may be recommended in case of low levels. A cardiac MRI with flow measurement in the Fontan tunnel is initially recommended, followed by a decision on intervention in that area. We kindly remind you of the unchanged necessity of endocarditis prophylaxis for all bacteremias and dental restorations. An appropriate certificate is available for Emil, and the family is well-informed about the indication and the existence of the certificate.
A LIMAX examination can only be performed in an inpatient setting, which was not possible during this stay due to organizational reasons. This should be done in the next inpatient stay. **Summary**: We are discharging Emil in good general condition and slim build, with no signs of infection. Puncture site is unremarkable. Cardiac status: Rhythmic heart action, no pathological heart sounds. Pulse status is normal. Lungs: Clear. Abdomen: Soft. Pulse oximetry oxygen saturation: 93% Blood pressure measurement (mmHg): 117/74 **Current Recommendations:** - Cardiac MRI in follow-up, appointment will be communicated, possibly including LIMAX - Vitamin D supplementation **Medication upon Discharge:** **Medication** **Dosage** **Frequency** --------------------- ------------ --------------- Captopril (Capoten) 2 mg 1-1-1 Carvedilol (Coreg) 0.2 mg 1-0-1 Aspirin 25 mg 1-0-0 **Lab results upon Discharge:** **Parameter** **Result** **Reference Range** ------------------------ -------------- --------------------- Calcium 2.34 mEq/L 2.10-2.55 mEq/L Phosphate 1.20 mEq/L 0.84-1.45 mEq/L Osmolality 285 mosmo/Kg 280-300 mosmo/Kg Iron 20.0 µmol/L 4.8-24.7 µmol/L Transferrin Saturation 28.1% 16.0-45.0% Magnesium 0.77 mEq/L 0.62-0.91 mEq/L Creatinine (Jaffé) 0.85 mg/dL 0.70-1.20 mg/dL Urea 26 mg/dL 18-45 mg/dL Total Bilirubin 0.97 mg/dL \<1.20 mg/dL Direct Bilirubin 0.33 mg/dL \<0.30 mg/dL Immunoglobulin G 11.44 g/L 5.49-15.84 g/L Immunoglobulin A 1.95 g/L 0.61-3.48 g/L Immunoglobulin M 0.62 g/L 0.50-1.90 g/L Cystatin C 0.96 mg/L 0.50-1.00 mg/L Transferrin 2.87 g/L \- Ferritin 54.5 µg/L 14.0-152.0 µg/L Total Cholesterol 110 mg/dL 82-192 mg/dL Triglycerides 64 mg/dL \- Apolipoprotein A1 0.96 g/L 1.04-2.02 g/L GPT 36 U/L \<41 U/L GOT 35 U/L \<50 U/L Alkaline Phosphatase 135 U/L 55-149 U/L Pseudo-Cholinesterase 5.64 kU/L 5.32-12.92 kU/L GLDH 3.2 U/L \<6.4 U/L Gamma-GT 92 U/L 8-61 U/L LDH 180 U/L 135-250 U/L Parathyroid Hormone 55.0 ng/L 15.0-65.0 ng/L 25-OH-Vitamin D3 10.9 nmol/L 50.0-150.0 nmol/L Free Thyroxine 17.90 ng/L 9.50-16.40 ng/L TSH 3.56 mU/L 0.50-4.30 mU/L ### Patient Report 5 **Dear colleague, ** We are reporting about the examination of our patient, Emil Nilsson, born on 12/04/2004, who presented to our outpatient clinic on 12/10/2021. **Diagnoses:** - Hypoplastic Left Heart Syndrome - Persistent foramen ovale - Persistent ductus arteriosus botalli (under Prostaglandin E1 Infusion) - Dysplasia of the mitral valve - Damus-Kaye-Stansel Procedure and aortopulmonary anastomosis on the right (modified BT-Shunt) - Secondary thoracic closure - Tricuspid valve insufficiency - Mild aortic valve insufficiency - Status post Glenn procedure - Fontan conduit retrocardial narrowing, extended hepatic vein window/VCI - Chronic liver congestion with mild fibrosis (sonography) **Procedures**: Cardiac MRI. **Medical History:** We kindly assume that the detailed medical history is known to you and refer to previous medical reports from our clinic. The current presentation is based on a referral from the outpatient pediatric cardiologist for a Cardiac MRI. Emil reports feeling subjectively well. **Physical Examination:** Emil is in good general condition and slim build, with no signs of infection. - Cardiac status: Rhythmic heart action, 2/6 systolic murmur. - Pulse status: Normal. - Lungs: Bilateral equal ventilation, vesicular breath sounds, no rales. - Abdomen: Soft, no hepatosplenomegaly. Unremarkable sternal scars. No signs of cardiopulmonary decompensation. - Current weight: 47 kg; current height: 169 cm. 
- Pulse oximetry oxygen saturation: 95%.

- Blood pressure (mmHg): Right upper arm 132/94, left upper arm 121/98, right lower leg 158/94, left lower leg 156/94.

**Cardiac MRI on 03/02/2022:**

[Clinical Information, Question, Justification:]{.underline} Hypoplastic Left Heart Syndrome, Fontan procedure, congestive liver, retrocardiac Fontan tunnel narrowing, VCI dilation, Fontan tunnel flow pathology?

[Technique]{.underline}: 1.5 Tesla MRI. Localization scan. Transverse/coronal T2 HASTE. Cine Fast Imaging with Steady-State Precession functional assessment in short-axis view, two-chamber view, four-chamber view, and three-chamber view. Flow quantifications of the right and left pulmonary arteries, main pulmonary artery, superior vena cava, and inferior vena cava using through-plane phase-contrast gradient-echo measurement. Contrast-enhanced MR angiography.

[Findings]{.underline}: No prior images for comparison available. Anatomy: Hypoplastic left heart with DKS (Damus-Kaye-Stansel) anastomosis, dilated and hypertrophied right ventricle, broad ASD. No focal wall thinning or outpouchings. No intracavitary thrombi detected. No pericardial effusion. Descending aorta on the left side. Status post total cavopulmonary anastomosis with slight tapering between the LPA and the anastomosis at 7 mm, LPA 11 mm, RPA 14 mm. No pleural effusions. No evidence of confluent pulmonary infiltrates in the imaged lung regions. Congestive liver. Cine MRI: The 3D volumetry shows a normal global RVEF in the setting of Fontan procedure. No regional wall motion abnormalities. Mild tricuspid valve prolapse with minor regurgitation jet.

**Volumetry:**

[1) Left Ventricle (absolute / normalized):]{.underline}

- LV-EF: 29 %
- LV-EDV: 6 mL / 4.2 mL/m²
- LV-ESV: 4 mL / 3 mL/m²
- LV-SV: 2 mL / 1 mL/m²
- Cardiac output: 0.1 L/min / 0.1 L/min/m²

[2) Right Ventricle:]{.underline}

- Maximum flow velocity: 109 cm/s
- Antegrade volume: 50 mL
- Retrograde volume: 2 mL
- Regurgitation fraction: 4 %

[3) Right Pulmonary Artery:]{.underline}

- Maximum flow velocity: 27 cm/s
- Antegrade volume: 14 mL
- Retrograde volume: 0 mL
- Regurgitation fraction: 0 %
- Note: right upper pulmonary artery not captured

[4) Left Pulmonary Artery:]{.underline}

- Maximum flow velocity: 33 cm/s
- Antegrade volume: 18 mL
- Retrograde volume: 0 mL
- Regurgitation fraction: 0 %

[5) Inferior Vena Cava:]{.underline}

- Maximum flow velocity: 38 cm/s
- Antegrade volume: 30 mL
- Retrograde volume: 0 mL
- Regurgitation fraction: 0 %

[6) Fontan Tunnel:]{.underline}

- Maximum flow velocity: 53 cm/s
- Antegrade volume: 31 mL
- Retrograde volume: 0 mL
- Regurgitation fraction: 0 %

[7) Superior Vena Cava:]{.underline}

- Maximum flow velocity: 23 cm/s
- Antegrade volume: 16 mL
- Retrograde volume: 0 mL
- Regurgitation fraction: 0 %

[Assessment:]{.underline} In the setting of status post Total Cavopulmonary Anastomosis with DKS anastomosis for hypoplastic left heart, there is good right ventricular systolic function with only minimal ejection above the aortic valve. Slight tapering of the baffles up to 13 mm compared to VCI up to 21 mm without evidence of stenosis or major baffle leakage. Morphologically, slight tapering between the LPA and the anastomosis with essentially balanced flow between the LPA and RPA. Mild tricuspid valve prolapse with discrete insufficiency. Hepatomegaly with signs of chronic congestion.
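The regurgitation fractions listed above appear to follow the usual phase-contrast convention of retrograde over antegrade volume; that convention is assumed here, since the report does not spell it out. As a minimal worked check with the values given under "2) Right Ventricle":

$$\mathrm{RF} = \frac{V_{\text{retrograde}}}{V_{\text{antegrade}}} \times 100\,\% = \frac{2\ \mathrm{mL}}{50\ \mathrm{mL}} \times 100\,\% = 4\,\%$$

which matches the 4 % reported for that measurement.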
It was not included in the medication regimen in March 2008.
Which actor gets the most negative critique from the film reviewer? A. Jonathan Rhys-Meyers B. Anthony Hopkins C. Brad Pitt D. Christian Bale
Warrior Queens Elizabeth is a lurid paraphrase of the old Groucho Marx line about Doris Day: "I knew the Virgin Queen before she was a virgin." As the movie tells it, she was a sylvan, redheaded princess (Cate Blanchett) given to gamboling with her fella (Joseph Fiennes) between periods of internment in the Tower of London on charges of conspiring to overthrow her half-sister, the heatedly Catholic Queen Mary (Kathy Burke). The daughter of the second wife of Henry VIII, Anne Boleyn, and therefore dubbed a bastard by the papists, the Protestant Elizabeth ascends the throne to find the air still thick with smoke from roasted heretics, a team of skulking Catholics plotting her assassination, and a council of advisers (lords, bishops, sundry old boys) who snigger openly at the prospect of taking orders from a woman. Only a strategic marriage to a Spaniard or a Frenchman will mollify all factions, her advisers insist, but the pickings prove dismal. (Her French suitor enjoys wearing dresses.) After skulls are smashed, throats slit, and bosoms skewered in the name of Christ, Elizabeth decides to: a) "unsex" herself and become a symbol--the Virgin Queen, married only to England; and b) entertain dissenting opinions exclusively from those whose heads are affixed to spikes. You can't be both a queenly queen and a womanly woman, says the script (by Michael Hirst)--at least not in 1554. (The director, Shekhar Kapur, made the same point in his grim 1994 Indian epic The Bandit Queen , against a backdrop of scrubby plains along the Ganges.) Is this feminist take historically accurate? Probably, although the evidence suggests that Elizabeth had developed a head for stratagems earlier in life (her position had been precarious since the beheading of her mother) and came to the throne with few girlish illusions about How Things Work in a barbarous state. That said, the movie's approach makes for juicy melodrama. The tone of Elizabeth comes nearer to the nihilistic relish of Jacobeans such as John Ford and John Webster than to the more sorrowful horror of the Elizabethan dramatists Ben Jonson and William Shakespeare. It's even closer to a Jacobean drama of our own age: The Godfather (1972), which it emulates by cutting back-and-forth between queen and courtiers in prayer and the roundup and slaughter of Catholics on their privies, in bed with their mistresses, and so on. Their severed heads look on, wide-eyed, as Elizabeth directs her hair to be shorn--images of her girlhood flashing by as her locks rain down--and then walks weightily to her throne, now a chalk-faced gorgon. With all due respect to Blanchett, Bette Davis, and Glenda Jackson, my favorite Elizabeth I remains Miranda Richardson's capricious, baby-talking psychopath on the BBC comedy Blackadder II . (Casting about for a new lord high executioner, she mews to Rowan Atkinson, "There are thousands of Catholics simply dying to have their heads sneaked off --and there's no one to organize it.") But Blanchett comes in a close second, pulling off the transition from hapless young woman to coolly ruthless monarch with uncommon subtlety. Gradually expunging all empathy from her moist, pink eyes and permitting her visage to ossify, she gives this carnival of carnage an awe-inspiring center. A more subversive sort of queen is on display in Velvet Goldmine , Todd Haynes' musical fantasia on the early '70s era of "glam" or "glitter" rock. 
Here the monarch is a David Bowie-esque singer called Brian Slade (Jonathan Rhys-Meyers) and his spidery, space-age alter ego, Maxwell Demon. The movie opens with a spaceship depositing an infant Oscar Wilde on the stoop of a Dublin townhouse. Then it skips ahead to track a jade pin (it signifies hedonistic liberation) from the custody of a young Wilde to a swishy fringe creature called Jack Fairy to the regal Slade, a bisexual superstar who carries the news to all the young dudes. After that, we're in an Orwellian 1984 that's presided over by a vaguely fascist president and by arena rockers who serve as propagandists for a repressively conformist state. Whatever happened to Brian Slade, the glitter kids, the visionary exhibitionists and gleeful poseurs? Borrowing its framework from Citizen Kane , the movie follows a reporter (Christian Bale) assigned to reconstruct Slade's life and solve the mystery of his whereabouts. Whatever you make of Velvet Goldmine (opinions have ranged from rapturous to casually dismissive), it's like no other musical ever made. It's determinedly swirling, discursive, elliptical. Now the story is told by an omniscient narrator, now a TV reporter, now a participant. Now it's flashing back, now forward. Every other line of dialogue is a cue for one of its dazzling numbers, largely covers of songs by Brian Eno, Bryan Ferry, and T. Rex. The narrative is a challenge to keep up with, but then, great artists often invent their own syntax. In the '80s, Haynes employed Barbie dolls to depict the rise and wasting away from anorexia of the singer Karen Carpenter. Lucky audiences who caught Superstar: The Karen Carpenter Story (it was shelved when Richard Carpenter served the producers with an order to cease and desist exhibition) began by laughing at this elaborately posed, soft-rock femme, only to discover by the climax that the cultural forces that were eating at her (and that kept her from eating) had grown heartbreakingly palpable. Poison (1991), Haynes' Genêt-inspired exploration of transgression, didn't overcome its own artiness. But Safe (1995), the story of a Reagan-era housewife (Julianne Moore) convinced that her environment is poisoning her, is an entrancing meditation on the power of culture to crush the individual. Despite its ironic detachment, the film draws you into its heroine's sickly state: Breathing oxygen from a canister inside a high-tech igloo, she dwindles to nearly nothing, the modern incarnation of the Incredible Shrinking Man. (It was partly my passion for Haynes' films that led me to accept a job offer from his indefatigable producer Christine Vachon last year to collaborate on a nuts-and-bolts book about producing, Shooting To Kill . So my review of Velvet Goldmine --like my review of Vachon's other recent release, Happiness --should be read as the work of a partisan. But not a blind partisan.) In Velvet Goldmine , Haynes sets out to demonstrate the power of popular music to change people's lives--to tell them it's OK to fashion themselves into anything they please. The core of the movie turns out not to be the Bowie figure but the journalist, Arthur Stuart, who was a witness to the events he's now reconstructing. Bale is such an expressive performer that Stuart's remembrance of things past attains a Proustian intensity. To him, Slade was a sexual messiah. 
I've never seen a more vivid distillation of rock's allure than the scene in which he reverently opens the new Brian Slade album--its centerfold image is a lithe, naked, green-tinged Maxwell Demon--slips the vinyl out of its paper jacket and, after gingerly setting the LP on the turntable, props a chair under the doorknob to keep the uncomprehending world at bay. But if Haynes wants Velvet Goldmine to be an anthem to the principles Bowie once embodied--the embrace of artifice and the smashing of conventional sexual roles--he also wants to portray the rocker as a hollow opportunist who abandoned glam and bisexuality for the life of a corporate superstar, throwing in his lot with the forces of repression. That's a lot to cover. An actor of stature might have bridged these two impulses, but the beautiful, brazenly slim-hipped Rhys-Meyers doesn't make his lines sound as if he's thinking them up on the spot, and Slade's self-destructive passion for Curt Wild (Ewan McGregor), the film's fuzzy, sweet Iggy Pop figure, seems less an emotional imperative than a thematic one. A case can be made that Velvet Goldmine isn't fully filled in, and that Haynes, who has never shaken off his background as a semiotics major, has made a movie that's all signifiers. I sometimes found myself wishing he would let the picture catch its breath, that the performers would stop coming at me in stroboscopic flashes. But then I'd be swept up in the sinuous motion of his filmmaking, in the elation of watching point of view passed like a baton from hand to hand, in the liberating force of his language and soundtrack. Velvet Goldmine might seem like a collection of baubles, but those baubles are strung. Is Brad Pitt the worst actor on earth? The case could be made, and Meet Joe Black could serve as Exhibit A. Pitt plays two roles in this seven course schlockfest. He's (briefly) a slick but wholesome yuppie and then (interminably) Death, who takes over the young man's body when he's thumped by a couple of cars in the movie's most promising moment. Bleached so blond that he looks like an irradiated android, Pitt expels all expression from his face and all tone from his voice. He speaks very, very slowly. The stunt half-works, at least until he's supposed to undergo an inner transformation and acquire human emotions--whereupon his face remains just as blank. Pitt's conception of the role is an idée fixe by someone who doesn't appear to have an idée in his head. Martin Brest, the director, is known for shooting a ton of footage and then "finding" his films in the editing room. What do you suppose he "found" when he scrutinized these miles of celluloid with Pitt doing nothing and taking his sweet time doing it? The first adaptation of this story (originally a play) was the 1934 Death Takes a Holiday , which came in at a perky 78 minutes. A conceit this fragile needs to whiz along to keep our disbelief in suspension, but Meet Joe Black grinds on for three hours (longer than either Beloved or Saving Private Ryan ), and Pitt acts as if he has leased the screen by the year. Anthony Hopkins plays the zillionaire communications baron whom Death enlists in the hope of understanding the human condition--an odd choice for a tour guide, since most people's condition doesn't involve personal helicopters, sprawling mansions on Long Island Sound, or Manhattan apartments that sport Olympic-size swimming pools. 
Four screenwriters, among them the great Bo Goldman ( Melvin and Howard , 1980; Shoot the Moon , 1982), labored on this moldy script, which features characters who ask questions that begin "Am I to understand that ...?" and a corporate villain who directs another character to "wake up and smell the thorns." It apparently never occurred to even one of these overpaid scribes to eliminate Hopkins' rueful realization that he'd "never write the great American novel"--no kidding, given his flagrantly Welsh accent. Actually, Hopkins gives this humanistic magnate considerable weight, so that whether or not Death takes him before he can stop to smell the roses and make amends to his neglected children becomes a matter of some suspense. The rest of the cast works with equal fortitude, especially Jeffrey Tambor (Hank "Hey now!" Kingsley on The Larry Sanders Show ) as Hopkins' milksop son-in-law and Marcia Gay Harden as his party planning, perpetually wilting elder daughter. As the younger daughter, the dark eyed, spaghetti thin Claire Forlani has to carry the picture's bathos on her exquisite shoulders. Her tremulous thoroughbred act wears thin, but it's hardly her fault: She has to emote like mad opposite a black pit of death--or is that the Black Death of Pitt?
C. Brad Pitt
Who didn't understand Dole's accusations towards the Times? A. the author of this text B. Dole's staff members C. Times readers D. Times reporters
Dole vs. the Times For several weeks now, pundits have debated how Bob Dole would exit the stage. Would he depart on a negative note about his opponent or a positive one about himself? Would he leave with anger or with humor? In the past several days, the issue has been settled. Dole, it appears, will end his political career raging against the New York Times . Dole's spat with the gray lady went public on Thursday, Oct. 24. In New Orleans, Dole charged the paper with ignoring a story about a Miami drug dealer who got invited to the White House. "This is a disgrace," Dole insisted. "I doubt if you even read it in the New York Times . They probably put it in the want ads. They don't put any anti-Clinton stories in the New York Times . Only anti-Dole stories in the New York Times ." Dole repeated his attack for the next five days. "We are not going to let the media steal this election," he told a crowd in Dallas on Friday. "This country belongs to the people, not the New York Times ." On Saturday, in Visalia, Calif., he added, "I know that with a crowd this size, the New York Times will write not many people showed up, but the other papers will get it right." On Sunday (the day the Times endorsed Clinton), Dole called the paper "the apologist for President Clinton for the last four years and an arm of the Democratic National Committee." In a CNN interview broadcast Monday, Dole said the Times "might as well be part of the Democratic Party. ... They hammer us on a daily basis. We make a major speech, they bury it back on section D. They put a front-page story that, well, Bob Dole and Jack Kemp didn't get along together 12 years ago." On Tuesday, Dole was still at it, referring to the 28 words of the 10th Amendment, and quipping, "That's about what I got in the New York Times today." The Times has reacted to this assault by highhandedly quoting everything and explaining none of it, leaving its readers baffled as to why the Republican nominee is so upset at the paper. In fact, Dole's fury at the Times is hardly news to those who work at the paper. According to Katharine Seelye, who has covered Dole since the beginning of his campaign, the complaints date from December 1995, when Dole staff members first protested that she had misunderstood the candidate's position on abortion. The real bitterness, however, began in May, when the paper played what Dole aides billed as a major address about welfare on Page 19 of the business section. Since then, campaign honchos have peppered the paper's reporters and editors with constant phone calls and letters complaining about unfair treatment. Reporters traveling with Dole caught a glimpse of the enmity Oct. 9, when Nelson Warfield, Dole's press secretary, staged a public confrontation with Seelye. The candidate, Warfield told reporters waiting to board the campaign plane, had just come from an appearance on G. Gordon Liddy's radio show. Why, Seelye asked, weren't reporters told about the appearance in advance? According to reporters present, Warfield snapped that it wouldn't make any difference because the Times would get the story wrong anyway. Then, on the plane, Warfield walked back to the press section and grandly served Seelye with a copy of a letter from Communications Director John Buckley to her boss, Times Washington Editor Andrew Rosenthal. That letter, which has fallen into the hands of Slate, protests Seelye's coverage of a speech the previous day. Dole, in New Jersey, had talked about Clinton being AWOL in the drug war. 
"Where has he been for four years? How many hundreds of thousands of young people started drugs?" Dole said. "Three million have started smoking while he was playing around with smoking and all this stuff finally in an election year." Seelye's front-page story reported that "Mr. Dole accused the President of 'playing around' while the drug war raged out of control." Buckley complains that the story "could lead the reader to believe that Dole was talking about a very different kind of 'playing around'--something he did not say, and something he would not say." The letter continues: "Since May, I have been pointing out to you a problem we see with the accuracy and understanding of context revealed in Kit's reporting," going on to assert that "Seelye has misquoted Dole on numerous occasions and done so in a manner that distorted the accuracy of her assertions and your coverage." No Dole staff would be quoted by name for this story, but speaking on background, a senior campaign official elaborated upon the complaint. "They've just done a miserable job throughout this campaign," the official said. "The coverage of Dole has been excessively bitchy from day one, in addition to having a number of extraordinary factual problems." With Seelye, the official says, the problem is "not being able to transcribe a tape accurately." With Adam Nagourney, the Times ' other reporter covering Dole full time since the summer, "the problem is an incredible focus on the little picture as opposed to the big picture." As an example, the official cites a September story in which Nagourney lumped together Dole's fall from a platform in Chico, Calif., and his mistaken reference to the "Brooklyn" Dodgers as "a rough stretch of politicking." Other than those two episodes, the official says, Dole actually had a great week. The campaign's complaint extends to unequal treatment--a nine-part series on Clinton's record, which the official describes as "the softest portrait since they invented black velvet"--and the Times perpetually underestimating the size of Dole crowds. "Clinton even gets better photographs," the official contends. Rosenthal, who has direct responsibility for campaign coverage at the Times , professes bewilderment at these complaints. "We don't make editorial judgments based on disposition to be tough on Bob Dole or nice to Bob Dole," he says. On the specifics, Rosenthal says that the Times ran an editor's note acknowledging that it shouldn't have truncated the "playing around" quote. He points out that the Times ran its story on the Miami drug dealer who visited the White House the same day Dole accused the paper of not covering it. As for the nine-part series on Clinton, Rosenthal says it is the long-standing practice of the paper to do a lengthy series on the incumbent's record. "If Dole wins and runs again in 2000, he will get nine-part series too," he says. "Ithink we have been tough on him," Seelye says. This stems, however, not from any bias, she says, but from the campaign's own internal problems. Dole's campaign has been especially "porous," with aides emulating the proverbial seafaring rats. This is true enough--in recent days ex-strategist Don Sipple has trashed the campaign on the record. But there's another point, too. Contrary to Buckley's charge that she misquotes Dole, Seelye routinely makes Dole look ridiculous by quoting him all too accurately, depicting him in what one colleague calls a "cinema verité " style. 
Famous for going over and over her tape recordings on the campaign plane, Seelye manages to get every Dole mumble, repetition, and verbal miscue down. For instance, in her Oct. 26 story reporting Dole's attack on the Times , Seelye writes: "In Phoenix on Friday night, he had a delightful time drawing out his vowels as he described financial contributions to the Clinton campaign. "From Indoneeesia," he said. "Yeah. From INdiaaaaah. Some fellow named Gandhi out there. He owes $10,000 in back taxes, but he found $300,000 to give to the Clinton campaign. And now Gandhi is gaaaawn. Gaaaaandhi, gone gone gone. They can't find him." Two days later, she quoted Dole in another story: "They've turned the White House into something else, I don't know what it is. It's the animal house! It's the animal house!" Most reporters would write, Bob Dole yesterday compared the White House to an "animal house," sparing the exclamation points, and making him sound at least compos mentis. But though unflattering, Seelye's Mametizing of Bob Dole can hardly be called unfair. It is not as if the Times cleans up Clinton's quotes; the president simply observes the rules of syntax most of the time. Something similar may be happening with the pictures. After four years, Clinton has learned how to avoid looking unpresidential. He no longer allows himself to be photographed wearing too-short running shorts, and he avoids pulling faces in public. Dole, who is simply less photogenic, is an easier victim for picture editors--who, like their editorial counterparts, have a strong bias against dullness. Take, for instance, the two pictures shown above. The front-page picture the Times ran the day after the second presidential debate does make Dole look like a decomposing monster. But unlike the picture in the Washington Post the same day, it captures the spirit of the event, with Dole grimly taking the offensive and Clinton watching warily but standing aside from the attacks. Dole sounds absurd when he alleges that the paper that broke Whitewater and the story of the first lady's commodities trades has not been aggressive in pursuing Clinton scandals. All sorts of potential Dole scandals have been soft-pedaled by the media, including the Times , because he is so far behind. It's true that coverage of Clinton on the campaign trail has been somewhat softer than the coverage of Dole, as even other Times reporters acknowledge. But the explanation is institutional, not ideological. The press, as many have complained, overemphasizes the "horse race" aspect of politics. As a side effect of that disease, reporters have excessive respect for a well-run campaign. (In 1988, Republican George Bush benefited from this phenomenon.) A cruder reality is that reporters need to have a relationship with Clinton after Tuesday. None of these factors, though, is unique to the Times . So why is Dole singling it out? Dole's attacks on the Times have the appearance of being an exercise in populist demagogy. In one of his great cue-card reading remarks, Dole tried to explain his recent attacks on CNN the other night by saying, "I like the media. They don't like them in the South." But this pat explanation doesn't entirely make sense. Red meat for right-wing crowds doesn't help Dole with the centrist voters he would need to turn around in order to make the miraculous happen. And in fact, according to a senior Dole aide, the attacks are heartfelt on the candidate's part. 
Dole has been going after the Times over the objections of advisers who have been telling him there's no percentage in picking fights with the press. But if Dole is attacking the Times because he is truly furious and not because he thinks it will help him get elected, what is he so angry about? The answer, I think, is that there has always been a Nixonian streak in Bob Dole, by which I mean a part of him which feels shut out of the closed circle of the Eastern establishment. At the Republican convention, Dole blasted the Clinton administration as a "corps of the elite who never grew up, never did anything real, never sacrificed, never suffered, and never learned." That phrase recalled an attack he made on the press long ago, in the days of Watergate, when he accused the Washington Post of being in bed with George McGovern. "There is a cultural and social affinity between the McGovernites and the Post executives and editors," Dole said then. "They belong to the same elite: They can be found living cheek-by-jowl in the same exclusive chic neighborhoods, and hob-nobbing at the same Georgetown parties." The deeper story here isn't whether Dole was wrongly shunted onto D19 when he ought to have been on A1. It's his feelings, as he says goodbye to politics, about the people who get to decide.
C. Times readers
What condition was observed in Mr. Nilsson's liver during sonography? Choose the correct answer from the following options: A. Acute hepatitis B. Liver cirrhosis C. Chronic liver congestion D. Liver steatosis E. Liver fibrosis
### Patient Report 0 **Dear colleague, ** We are reporting on the inpatient stay of our patient, Emil Nilsson, born on 12/04/2004, who was under our inpatient care from 01/26/05 to 02/02/05. **Diagnoses:** - Upper respiratory tract infection - Hypoplastic Left Heart Syndrome - Persistent foramen ovale - Persistent ductus arteriosus botalli (under Prostaglandin E1 Infusion) - Dysplasia of the mitral valve - Damus-Kaye-Stansel procedure and aortopulmonary anastomosis on the right (modified BT-Shunt) - Secondary thoracic closure - Tricuspid valve insufficiency - Mild aortic valve insufficiency **Medical History:** Emil has a Hypoplastic Left Heart Syndrome. The corrective procedure, including the Damus-Kaye-Stansel and Blalock-Taussig Anastomosis, took place three months ago. Under the current medication, the cardiac situation has been stable. He has shown satisfactory weight gain. Emil is the first child of parents with healthy hearts. An external nursing service provides home care every two days. The parents feel confident in the daily care of the child, including the placement of gastric tubes. **Current Presentation:** Since the evening before admission, Emil had elevated temperatures up to 40°C with a slight runny nose. No coughing, no diarrhea, no vomiting. After an outpatient visit to the treating pediatrician, Emil was referred to our hospital due to the complex cardiac history. Admission for the Glenn procedure is scheduled for 01/20/05. **Physical Examination:** Stable appearance and condition. Pinkish skin color, good skin turgor. - Cardiovascular: Rhythmic, 3/6 systolic murmur auscultated on the left parasternal side, radiating to the back. - Respiratory: Bilateral vesicular breath sounds, no rales. - Abdomen: Soft and unremarkable, no hepatosplenomegaly, no pathological resistances. - ENT exam, except for runny nose, unremarkable. - Good spontaneous motor skills with cautious head control. - Current Weight: 4830 g; Current Length: 634cm. Transcutaneous Oxygen Saturation: 78%. - Blood Pressure Measurement (mmHg): Left Upper Arm 89/56 (66), Right Upper Arm 90/45 **Current Medication:** **Medication** **Dosage** **Frequency** --------------------------------- ------------ --------------- Captopril (Capoten) 2 mg 1-1-1 Carvedilol (Coreg) 0.2 mg 1-0-1 Furosemide (Lasix) 4 mg 1-1-1-1 Spironolactone (Aldactone) 10 mg 1-0-0-0 Hydrochlorothiazide (Microzide) 2 mg 1-0-1 Aspirin 10 mg 1-0-0 Omeprazole (Prilosec) 2.5 mg 1-0-1 Vitamin D (Drisdol) 500 IU Once daily **ECG on 01/27/2006:** Sinus rhythm, heart rate 83/min, sagittal type. P: 60 ms, PQ: 100 ms, QRS: 80 ms, QT: 260 ms. T-wave negative in V1 and V2, biphasic in V3, positive from V4 onward, no arrhythmias. Signs of right ventricular hypertrophy. **Echocardiography on 01/27/2006:** Satisfactory function of the morphological right ventricle, small hypoplastic left ventricle with minimal contractility. Hypoplastic mitral and original aortic valve barely opening. Regular flow profile in the neoaorta. Aortic arch and Blalock-Taussig shunt not optimally visible due to restlessness. Trivial tricuspid valve insufficiency. **Chest X-ray on 01/28/2006:** Widened heart shadow, cardiothoracic ratio 0.5. Slight diffuse increase in markings on the right lung, no signs of pulmonary congestion. Hilum delicate. Recesses visible, no effusion. No localized infiltrations. No pneumothorax. 
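As an illustrative cross-check of the ECG intervals above, the QT interval can be rate-corrected; assuming Bazett's formula with RR in seconds (the report does not state which correction method was used, so this choice is only an assumption), the admission ECG (heart rate 83/min, QT 260 ms) gives approximately:

$$QT_c = \frac{QT}{\sqrt{RR}} = \frac{0.26\ \mathrm{s}}{\sqrt{60/83}} \approx 0.31\ \mathrm{s}\ (\approx 306\ \mathrm{ms})$$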
**Therapy and Progression:** Based on the clinical and paraclinical picture of a pulmonary infection, we treated Emil with intravenous Cefuroxime for five days, along with daily physical therapy. Under this treatment, Emil's condition improved rapidly, with no auscultatory lung abnormalities. CRP and leukocyte count reduced. No fever. In the course of treatment, Emil had temporary diarrhea, which was well managed with adequate fluid substitution. We were able to discharge Emil in a significantly improved and stable general condition on the fifth day of treatment, with a weight of 5060 g. Transcutaneous oxygen saturations were consistently between 70% (during infection) and 85%. Three days later, the mother presented the child again at the emergency department due to vomiting after each meal and diarrhea. After changing the gastric tube and readmission here, there was no more vomiting, and feeding was feasible. Three to four stools of adequate consistency occurred daily. Cardiac medication remained unchanged. **Medication upon Discharge:** **Medication** **Dosage** **Frequency** --------------------------------- ------------ --------------- Captopril (Capoten) 2 mg 1-0-1 Carvedilol (Coreg) 0.2 mg 1-0-1 Furosemide (Lasix) 4 mg 1-1-1-1 Spironolactone (Aldactone) 10 mg 1-0-0-0 Hydrochlorothiazide (Microzide) 2 mg 1-0-1 Aspirin 10 mg 1-0-0 Omeprazole (Prilosec) 2.5 mg 1-0-1 Vitamin D (Drisdol) 500 IU Once daily ### Patient Report 1 **Dear colleague, ** We are reporting on the inpatient stay of your patient Emil Nilsson, born on 12/04/2004, who received inpatient care from 01/20/2005 to 01/27/2005. **Diagnoses:** - Hypoplastic Left Heart Syndrome - Persistent foramen ovale - Persistent ductus arteriosus botalli (under Prostaglandin E1 Infusion) - Dysplasia of the mitral valve - Damus-Kaye-Stansel Procedure and aortopulmonary anastomosis on the right (modified BT-Shunt) - Secondary thoracic closure - Tricuspid valve insufficiency - Mild aortic valve insufficiency **Current Presentation:** Bidirectional Glenn Anastomosis, enlargement of the pulmonary trunk, and closure of BT shunt **Medical History:** We kindly assume that you are familiar with the detailed medical history. **Medication upon Admission:** **Medication** **Dosage** **Frequency** --------------------------------- ------------ --------------- Captopril (Capoten) 2 mg 1-0-1 Carvedilol (Coreg) 0.2 mg 1-0-1 Furosemide (Lasix) 4 mg 1-1-1-1 Spironolactone (Aldactone) 10 mg 1-0-0-0 Hydrochlorothiazide (Microzide) 2 mg 1-0-1 Aspirin 10 mg 1-0-0 Omeprazole (Prilosec) 2.5 mg 1-0-1 Vitamin D 500 IU Once daily **Physical Examination:** Stable general condition, no fever. Gastric tube. Unremarkable sternotomy scar, dry. Drains in situ, unremarkable. [Heart]{.underline}: Rhythmic heart action, 2/6 systolic murmur audible left parasternal. [Lungs]{.underline}: Bilateral vesicular breath sounds, no additional sounds. [Abdomen]{.underline}: Soft liver 1.5 cm below the costal margin. No pathological resistances. Pulses palpable on all sides. [Current weight:]{.underline} 4765 g; current length: 62 cm; head circumference: 37 cm. Transcutaneous oxygen saturation: 85%. [Blood pressure (mmHg):]{.underline} Left arm 91/65 (72), right arm 72/55 (63). **Echocardiography on 01/21/2005 and 01/27/2005:** Global mildly impaired function of the morphologically right systemic ventricle with satisfactory contractility. Minimal tricuspid insufficiency with two small jets (central and septal), Inflow merged Vmax 0.9 m/s. 
DKS anastomosis well visible, aortic VTI 14-15 cm. Free flow in Glenn with breath-variable flow pattern, Vmax 0.5 m/s. No pleural effusions, good diaphragmatic mobility bilaterally, no pericardial effusion. Isthmus optically free with Vmax 1.8 m/s. **Speech Therapy Consultation on 01/23/2005:** No significant orofacial disorders. Observation of drinking behavior recommended initially. Stimulation of sucking with various pacifiers. Instruction given to the father. **Therapy and Progression:** On 02/15/2006, the BT shunt was severed and a bidirectional Glenn Anastomosis was created, along with an enlargement of the pulmonary artery. The course was uncomplicated with swift extubation and transfer to the intermediate care unit on the second postoperative day. Timely removal of drains and pacemaker wires. The child remained clinically stable throughout the stay. The child\'s own drinking performance is satisfactory, with varying amounts of fluid intake between 60 and 100 ml per meal. The tube feeding is well tolerated, no vomiting, and discharged without a tube. Stool normal. IV antibiotics were continued until 01/22/2005. Transition from heparinization to daily Aspirin. Inhalation was also stopped during the course with a stable clinical condition. Due to persistently elevated mean pressures of 70 to 80 mmHg and limited global contractility of the morphologically right systemic ventricle, we increased both Carvedilol and Captopril medication. Blood pressures have changed only slightly. Therefore, we request an outpatient long-term blood pressure measurement and, if necessary, further medication optimization. Echocardiographically, we observed impaired but satisfactory contractility of the right systemic ventricle with only minimal tricuspid valve insufficiency, as well as a well-functioning Glenn Anastomosis. No insufficiency of the neoaortic valve with a VTI of 15 cm. No pericardial effusion or pleural effusions upon discharge. A copy of the summary has been sent to the involved external home care service for further outpatient care. **Medication upon Discharge:** **Medication** **Dosage** **Frequency** ---------------------------- ------------ --------------- Captopril (Capoten) 2 mg 1-0-1 Carvedilol (Coreg) 0.2 mg 1-0-1 Spironolactone (Aldactone) 10 mg 1-0-0-0 Iron Supplement 4 drops 1-0-1 Omeprazole (Prilosec) 2.5 mg 1-0-1 Vitamin D 500 IU Once daily Aspirin 10 mg 1-0-0 ### Patient Report 2 **Dear colleague, ** We are reporting to you about the inpatient stay of our patient, Emil Nilsson, born on 12/04/2004. He was admitted to our ward from 03/01/2008 to 03/10/2008. **Diagnoses:** - Hypoplastic Left Heart Syndrome - Persistent foramen ovale - Persistent ductus arteriosus botalli (under Prostaglandin E1 Infusion) - Dysplasia of the mitral valve - Damus-Kaye-Stansel Procedure and aortopulmonary anastomosis on the right (modified BT-Shunt) - Secondary thoracic closure - Tricuspid valve insufficiency - Mild aortic valve insufficiency **Current Presentation:** Inpatient admission for dental rehabilitation under intubation anesthesia **Medical History:** We may kindly assume that you are familiar with the medical history. Prior to the planned Fontan completion, dental rehabilitation under intubation anesthesia was required due to the patient\'s carious dental status, which led to the scheduled inpatient admission. **Physical Examination:** Friendly toddler in stable general condition, pale skin color, central cyanosis, no edema. - ENT unremarkable, large tonsils, no cervical lymphadenopathy. 
- Heart: Heart sounds clear, rhythmic, 1/6 systolic murmur with a point of maximal intensity over the 3rd intercostal space on the left. - Lungs: Bilateral equal ventilation, vesicular breath sounds. - Initial neurological examination unremarkable. - Current weight: 12.4 kg; current body length: 93 cm. - Percutaneous oxygen saturation: 76%. - Blood pressure (mmHg): Right upper arm 117/50, left upper arm 110/57, right lower leg 134/55, left lower leg 146/71. **Medication upon Admission:** **Medication** **Dosage** **Frequency** --------------------- ------------ ----------------------------------------------- Captopril (Capoten) 2 mg 1-1-1 Carvedilol (Coreg) 0.2 mg 1-0-1 Aspirin 10 mg 1-0-0 (discontinued 10 days before admission) **ECG at Admission:** Sinus rhythm, heart rate 84/min, sagittal type. P wave 50 ms, PQ interval 120 ms, QRS duration 80 ms, QT interval 360 ms, QTc interval 440 ms, R/S transition in V4, T wave positive in V3 to V6. Persistent S wave in V4 to V6 -1.1 mV, no extrasystoles in the rhythm strip. **Consultation with Maxillofacial Surgery on 02/03/2008:** Timely wound conditions, clot at positions 55, 65, 84 in situ, Aspirin may be resumed today, further treatment by the Southern Dental Clinic. **Treatment and Progression:** Upon admission, the necessary pre-interventional diagnostics were performed. Dental rehabilitation (extraction and fillings) was performed without complications under intubation anesthesia on 03/02/2008. After anesthesia, the child experienced pronounced restlessness, requiring a single sedation with intravenous Midazolam. The child\'s behavior improved over time, and the wound conditions were unremarkable. Discharge on 03/03/2008 after consultation with our maxillofacial surgeon into outpatient follow-up care. We request pediatric cardiology and dental follow-up checks. **Medication upon Discharge:** **Medication** **Dosage** **Frequency** --------------------- ------------ --------------- Captopril (Capoten) 2 mg 1-1-1 Carvedilol (Coreg) 0.2 mg 1-0-1 Aspirin 25 mg 1-0-0 **Lab results upon Discharge:** **Parameter** **Results** **Reference Range** ----------------------------------------------- --------------- --------------------- Calcium 2.33 mEq/L 2.10-2.55 mEq/L Phosphorus 1.12 mEq/L 0.84-1.45 mEq/L Osmolality 286 mOsm/kg 280-300 mOsm/kg Iron 20.4 µg/dL 4.8-24.7 µg/dL Transferrin Saturation 28.3% 16.0-45.0% Magnesium 1.84 mg/dL 1.5-2.3 mg/dL Creatinine 0.84 mg/dL 0.70-1.20 mg/dL Estimated GFR (eGFR CKD-EPI) 132 mL/min Estimated GFR (eGFR Cystatin) \>90.0 mL/min Blood Urea Nitrogen (BUN) 29 mg/dL 18-45 mg/dL Total Bilirubin 0.97 mg/dL \<1.20 mg/dL Direct Bilirubin 0.34 mg/dL \<0.30 mg/dL Immunoglobulin G 11.42 g/L 5.49-15.84 g/L Immunoglobulin A 1.94 g/L 0.61-3.48 g/L Immunoglobulin M 0.65 g/L 0.50-1.90 g/L Cystatin C 0.93 mg/L 0.50-1.00 mg/L Transferrin 2.89 g/L Ferritin 54.2 ng/mL 14.0-152.0 ng/mL Total Cholesterol 110 mg/dL 82-192 mg/dL Triglycerides 64 mg/dL Apolipoprotein A1 0.91 g/L 1.04-2.02 g/L ALT 37 U/L \<41 U/L AST 33 U/L \<50 U/L Alkaline Phosphatase 138 U/L 55-149 U/L Butyrylcholinesterase (Pseudo-Cholinesterase) 5.62 kU/L 5.32-12.92 kU/L GLDH 3.1 U/L \<6.4 U/L Gamma-GT 96 U/L 8-61 U/L LDH 184 U/L 135-250 U/L Parathyroid Hormone 55.0 pg/mL 15.0-65.0 pg/mL 25-OH-Vitamin D3 10.9 ng/mL 20.0-50.0 ng/mL Free Thyroxine 17.90 ng/dL 9.50-16.40 ng/dL TSH 3.56 mIU/mL 0.50-4.30 mIU/mL ### Patient Report 3 **Dear colleague, ** We are reporting about the inpatient stay of our patient, Emil Nilsson, born on 12/04/2004. 
He was admitted to our ward from 07/02/2008 to 07/23/2008. **Diagnoses:** - Hypoplastic Left Heart Syndrome - Persistent foramen ovale - Persistent ductus arteriosus botalli (under Prostaglandin E1 Infusion) - Dysplasia of the mitral valve - Damus-Kaye-Stansel Procedure and aortopulmonary anastomosis on the right (modified BT-Shunt) - Secondary thoracic closure - Tricuspid valve insufficiency - Mild aortic valve insufficiency **Current Presentation:** Planned admission for Fontan Procedure **Medical History:** We may assume that you are familiar with the detailed medical history. **Physical Examination:** Friendly toddler in stable general condition, pale skin color, central cyanosis, no edema. - ENT unremarkable, large tonsils, no cervical lymphadenopathy. - Heart: Heart sounds clear, rhythmic, 1/6 systolic murmur with a point of maximal intensity over the 3rd intercostal space on the left. - Lungs: Bilateral equal ventilation, vesicular breath sounds. Initial neurological examination unremarkable. - Percutaneous oxygen saturation: 77%. - Blood pressure (mmHg): Right upper arm 124/60, left upper arm 112/59, right lower leg 134/55, left lower leg 146/71. **Medication upon Admission:** **Medication** **Dosage** **Frequency** ---------------------- ------------ --------------- Captopril (Capoten®) 2 mg 1-1-1 Carvedilol (Coreg®) 0.2 mg 1-0-1 Aspirin 25 mg 1-0-0 **Surgical Report:** Median Sternotomy, dissection of adhesions to access the anterior aspect of the heart, cannulation for extracorporeal circulation with bicaval cannulation. Further preparation of the heart, followed by clamping of the inferior vena cava towards the heart. Cutting the vessel, suturing the cardiac end, and then anastomosis of the inferior vena cava with an 18mm Gore-Tex prosthesis, which is subsequently tapered and sutured to the central pulmonary artery in an open anastomosis technique. Resumption of ventilation, smooth termination of extracorporeal circulation. Placement of 2 drains. Layered wound closure. Transesophageal Echocardiogram shows good biventricular function. The patient is transferred back to the ward with ongoing catecholamine support. **ECG on 07/02/2008:** Sinus rhythm, heart rate 76/min, steep type, PQ interval 140 ms, QRS duration 110 ms, QT interval 340 ms, QTc 385 mmHg. ST depression, descending in V2+V3. T-wave positivity from V2. No extrasystoles. No pauses. **Therapy and Progression:** The patient was admitted for a planned Fontan procedure on 07/02/2008. The procedure was performed without complications. An extracardiac conduit without overflow was created. Postoperatively, there was a rapid recovery. Extubation took place 2 hours after the procedure. Peri- and postoperative antibiotic treatment with Cefuroxim was administered. Bilateral pleural effusions were drained using thoracic drains, which were subsequently changed to pigtail drains after transfer to the general ward. Daily aspiration of the pleural effusions was performed. These effusions decreased over time, and the drains were removed on 07/14/2008. No further pleural effusions occurred. A minimal pericardial effusion and ascites were still present. Diuretic therapy was initially continued but could be significantly reduced by the time of discharge. Echocardiography showed a favorable postoperative result. Monitoring of vital signs and consciousness did not reveal any abnormalities. However, the ECG showed occasional idioventricular rhythms during bradycardia. Oxygen saturation ranged between 95% and 100%. 
Scarring revealed a dehiscence in the middle third and apical region. Regular dressing changes and disinfection of the affected wound area were performed. After consulting with our pediatric surgical colleagues, glucose was locally applied. There was no fever. Antibiotic treatment was discontinued after the removal of the pigtail drain, and the postoperatively increased inflammatory parameters had already returned to normal. The patient received physiotherapy, and their general condition improved daily. We were thus able to discharge Emil on 07/23/2008. **Current Recommendations:** - We recommend regular wound care with Octinisept. - Follow-up in the pediatric cardiology outpatient clinic. **Medication upon Discharge:** **Medication** **Dosage** **Frequency** --------------------- ------------ --------------- Captopril (Capoten) 2 mg 1-1-1 Carvedilol (Coreg) 0.2 mg 1-0-1 Aspirin 25 mg 1-0-0 **Lab results upon Discharge:** **Parameter** **Result** **Reference Range** ------------------------------- --------------- --------------------- Calcium 2.54 mEq/L 2.10-2.55 mEq/L Phosphate 1.42 mEq/L 0.84-1.45 mEq/L Osmolality 298 mOsm/kg 280-300 mOsm/kg Iron 20.6 µmol/L 4.8-24.7 µmol/L Transferrin Saturation 34 % 16.0-45.0 % Magnesium 0.61 mEq/L 0.62-0.91 mEq/L Creatinine 0.84 mg/dL 0.70-1.20 mg/dL Estimated GFR (eGFR CKD-EPI) 132 mL/min Estimated GFR (eGFR Cystatin) \>90.0 mL/min Urea 29 mg/dL 18-45 mg/dL Total Bilirubin 0.97 mg/dL \<1.20 mg/dL Direct Bilirubin 0.34 mg/dL \<0.30 mg/dL Immunoglobulin G 11.42 g/L 5.49-15.84 g/L Immunoglobulin A 1.94 g/L 0.61-3.48 g/L Immunoglobulin M 0.65 g/L 0.50-1.90 g/L Cystatin C 0.93 mg/L 0.50-1.00 mg/L Transferrin 2.89 g/L Ferritin 54.2 µg/L 14.0-152.0 µg/L Total Cholesterol 110 mg/dL 82-192 mg/dL Apolipoprotein A1 0.91 g/L 1.04-2.02 g/L ALT 37 U/L \<41 U/L AST 33 U/L \<50 U/L Alkaline Phosphatase 139 U/L 55-149 U/L GLDH 3.5 U/L \<6.4 U/L Gamma-GT 24 U/L 8-61 U/L LDH 145 U/L 135-250 U/L Parathyroid Hormone 57.2 ng/L 15.0-65.0 ng/L 25-OH-Vitamin D3 34.2 nmol/L 50.0-150.0 nmol/L ### Patient Report 4 **Dear colleague, ** We are reporting to you about the inpatient stay of our patient, Emil Nilsson, born on 12/04/2004, who was admitted to our clinic from 10/20/2021 to 10/22/2021. **Diagnoses:** - Hypoplastic Left Heart Syndrome - Persistent foramen ovale - Persistent ductus arteriosus botalli (under Prostaglandin E1 Infusion) - Dysplasia of the mitral valve - Damus-Kaye-Stansel Procedure and aortopulmonary anastomosis on the right (modified BT-Shunt) - Secondary thoracic closure - Tricuspid valve insufficiency - Mild aortic valve insufficiency - Status post Glenn procedure - Fontan conduit retrocardial narrowing, extended hepatic vein window/VCI - Chronic liver congestion with mild fibrosis (sonography) **Procedures**: Diagnostic cardiac catheterization in analgosedation on 10/20/2021. **Medical History:** We kindly assume that the detailed medical history is known to you and refer to previous medical reports from our clinic. The current admission is based on a referral from the outpatient pediatric cardiologist for a diagnostic cardiac catheterization to evaluate Fontan hemodynamics in the context of desaturation during a stress test. Emil reports feeling subjectively well, but during school sports, he can only run briefly before experiencing palpitations and dyspnea. Emil attends a special needs school. He is currently free from infection and fever. 
**Medication upon Admission:** **Medication** **Dosage** **Frequency** --------------------- ------------ --------------- Captopril (Capoten) 2 mg 1-1-1 Carvedilol (Coreg) 0.2 mg 1-0-1 Aspirin 25 mg 1-0-0 **Physical Examination:** Emil is in good general condition and slim build, with no signs of infection. - Cardiac status: Rhythmic heart action, 2/6 systolic murmur. - Pulse status: Normal. - Lungs: Bilateral equal ventilation, vesicular breath sounds, no rales. - Abdomen: Soft, no hepatosplenomegaly. Unremarkable sternal scars. No signs of cardiopulmonary decompensation. - Current weight: 47 kg; current height: 169 cm. - Pulse oximetry oxygen saturation: 95%. - Blood pressure (mmHg): Right upper arm 132/94, left upper arm 121/98, right lower leg 158/94, left lower leg 156/94. **Lab results:** **Parameter** **Result** **Reference Range** ------------------------------- --------------- --------------------- Calcium 2.38 mEq/L 2.10-2.55 mEq/L Phosphate 1.19 mEq/L 0.84-1.45 mEq/L Osmolality 282 mOsm/kg 280-300 mOsm/kg Iron 20.0 µmol/L 4.8-24.7 µmol/L Transferrin Saturation 28.1 % 16.0-45.0 % Magnesium 0.79 mEq/L 0.62-0.91 mEq/L Creatinine 0.81 mg/dL 0.70-1.20 mg/dL Estimated GFR (eGFR CKD-EPI) 131 mL/min Estimated GFR (eGFR Cystatin) \>90.0 mL/min Urea (BUN) 27 mg/dL 18-45 mg/dL Total Bilirubin 0.92 mg/dL \<1.20 mg/dL Direct Bilirubin 0.38 mg/dL \<0.30 mg/dL Immunoglobulin G 11.47 g/L 5.49-15.84 g/L Immunoglobulin A 1.99 g/L 0.61-3.48 g/L Immunoglobulin M 0.61 g/L 0.50-1.90 g/L Cystatin C 0.95 mg/L 0.50-1.00 mg/L Transferrin 2.83 g/L Ferritin 54.5 µg/L 14.0-152.0 µg/L Total Cholesterol 110 mg/dL 82-192 mg/dL Triglycerides 62 mg/dL Apolipoprotein A1 0.94 g/L 1.04-2.02 g/L ALT (GPT) 35 U/L \<41 U/L AST (GOT) 32 U/L \<50 U/L Alkaline Phosphatase 135 U/L 55-149 U/L Pseudo-Cholinesterase 5.65 kU/L 5.32-12.92 kU/L GLDH 3.7 U/L \<6.4 U/L Gamma-GT 89 U/L 8-61 U/L LDH 184 U/L 135-250 U/L Parathyroid Hormone 55.0 pg/mL 15.0-65.0 pg/mL 25-OH-Vitamin D3 10.9 ng/mL 50.0-150.0 ng/mL Free Thyroxine 17.90 ng/dL 9.50-16.40 ng/dL TSH 3.56 mIU/L 0.50-4.30 mIU/L **ECG on 10/20/21:** Sinus rhythm, heart rate 79/min, steep type, PQ interval 140 ms, QRS duration 110 ms, QT interval 340 ms, QTc 385 ms. ST depression, descending in V2+V3. T-wave positivity from V2. No extrasystoles. No pauses. **ECG on 11/20/2021:** Sinus rhythm, heart rate 70/min, left type, inverted RS wave in lead I, PQ 160, QRS 100 ms, QT 340 ms, QTc 390 ms. ST depression, descending in V1+V2, T-wave positivity from V2, isoelectric in V5/V6, S-wave persistence until V6. Intraventricular conduction disorder. No extrasystoles. No pauses. **Holter monitor from 11/21/2021:** Normal heart rate spectrum, min 64 bpm, median 81 bpm, max 102 bpm, no intolerable bradycardia or pauses, monomorphic ventricular extrasystole in 0.5% of QRS complexes, no couplets or salvos. **Echocardiography on 10/20/2021:** Poor ultrasound conditions, TI I+°, good RV function, no LV cavity, aortic arch normal. No pulmonary embolism after catheterization. **Abdominal Ultrasound on 10/20/2021:** Borderline enlarged liver with extremely hypoechoic basic structure, wide hepatic veins extending into second-order branches, and a barely compressible wide inferior vena cava. The basic architecture is preserved, the ventral contour is smooth, no nodularity. No suspicious focal lesions, no portal vein thrombosis, no ascites, no splenomegaly. 
[Measurement values as follows:]{.underline} ATI damping coefficient (as always in congestion livers) very low, sometimes below 0.45 dB/cm/MHz, thus certainly no steatosis. Elastography with good measurement quality (IQR=0.22) with 1.9 m/s or 10.9 kPa with significantly elevated values (attributed to all conventional elastography, including Fibroscan, measurement error in congestion livers). Dispersion measurement (parametrized not for fibrosis, but for viscosity, here therefore the congestion component) in line with the images at 18 (m/s)/kHz, significantly elevated, thus corroborating that the elastography values are too high. In the synopsis of the different parameterizations as well as the overall image, mild fibrosis at a low F2 level. [Other Status]{.underline}: No enlargement of intra- and extrahepatic bile ducts. Normal-sized gallbladder with echo-free lumen and delicate wall. The pancreas is well defined, with homogeneous parenchyma; no pancreatic duct dilation, no focal lesions. The spleen is homogeneous and not enlarged. Both kidneys are orthotopic and normal in size. The parenchymal rim is not narrowed. The non-bridging bile duct is closed, no evidence of stones. The moderately filled bladder is unremarkable. No pathological findings in the pelvis. No enlarged lymph nodes along the large vessels, no free fluid. [Result:]{.underline} Morphologically and parametrically (after downgrading the significantly elevated elastography value due to congestion), there is evidence of chronic congestive liver with mild fibrosis (low F2 level). Otherwise, an unremarkable abdominal overview. **Cardiac Angiography and Catheterization on 10/20/2021:** [X-ray data]{.underline}: 5.50 min / 298.00 cGy\*cm² [Medication]{.underline}: 4 mg Acetaminophen (5 mg/5 mL, 5 mL/amp); 4000 IU Heparin RATIO (25000 IU/5 ml, 5 mL/IJF); 156 mg Propofol 1% MCT (200 mg/20 mL, 20 mL/amp); 5 mg/ml, 5 mL/vial) [Contrast agent:]{.underline} 105 ml Iomeron 350 [Puncture site]{.underline}: Right femoral vein (Terumo Pediatric Sheath 5F 7 cm). Right femoral artery (Terumo Pediatric Sheath 5F 7 cm). [Vital Parameters:]{.underline} - Height: 169.0 cm - Weight: 47.00 kg - Body surface area: 1.44 m² - [Catheter course]{.underline}**:** Puncture of the above-mentioned vessels under analgosedation and local anesthesia. Performance of oximetry, pressure measurements, and angiographies. After completing the examination, removal of the sheaths, Angioseal 6F AFC right, manual compression until hemostasis, and application of a pressure bandage. Transfer of the patient in a cardiopulmonary stable condition to the post-interventional intensive care unit 24i for heparinization and monitor monitoring. [Pressure values (mmHg):]{.underline} - VCI: 8 mmHg - VCS: 9 mmHg - RV: 103/0-8 syst/diast-edP mmHg - RPA: 8 syst/diast mmHg - LPA: 8 syst/diast mmHg - AoAsc 103/63 (82) syst/diast mmHg - AoDesc 103/61 (81) syst/diast mmHg - PCW left: 6 mmHg - PCW right: 6 mmHg [Summary]{.underline}**:** Uncomplicated arterial and venous puncture, 5F right femoral arterial sheath, cannulation of VCI, VCS up to V. anonyma, LPA and RPA with 5F wedge and 5F pigtail catheters. Retrograde aorta to atretic AoV and via Neo-AoV (PV) into RV. Low pressures, Fontan 8 mmHg, TPG 2 mmHg with wedge 6 mmHg, max. RVedP 8 mmHg. No shunt oximetrically, CI 2.7 l/min/m2. No gradient across Neo-AoV and arch. Angiographically no veno-venous collaterals, no MAPCA. Glenn wide, LPA and RPA stenosis-free, well-developed, rapid capillary phase and pulmonary vein return to LA/RA. 
Fontan tunnel centrally constricted to 12.5 mm, to VCI 18 mm. Satisfactory function of the hypertrophic right systemic ventricle, mild TI. No Neo-AI, native AoV without flow, normal coronary arteries, wide DKS, aortic arch without any stenosis. **Abdominal Ultrasound on 10/21/2021:** [Clinical Information, Question, Justification:]{.underline} Post-Fontan procedure. Evaluation for chronic congestive liver. [Findings]{.underline}: Moderately enlarged liver with an extremely hypoechoic texture, which is typical for congestive livers. There are dilated liver veins extending into the second-order branches and a barely compressible wide inferior vena cava. The basic architecture of the liver is preserved, and the contour is smooth without nodularity. On the high-frequency scan, there are subtle but significant periportal cuffing enhancements throughout the liver, consistent with mild fibrosis. No suspicious focal lesions, no portal vein thrombosis, no ascites, and no splenomegaly are observed. Measurement values as follows: ATI damping coefficient (as usual in congestive livers) is very low, sometimes less than 0.45 dB/cm/MHz, indicating no steatosis. Shear wave elastography with good measurement quality (IQR=0.22) shows a velocity of 1.9 m/s or 10.9 kPa, which are significantly higher values (attributable to measurement errors inherent in all conventional elastography techniques, including Fibroscan, in congestive livers). Dispersion measurement (parameters not indicating fibrosis but viscosity, which in this case represents congestion) corresponds to the images, with a significantly high 18 (m/s)/kHz, thus supporting that the shear wave elastography values are too high (and should be lower). Overall, a mild fibrosis at a low F2 level is evident based on the synopsis of various parameterizations and the overall image impression. [Other findings:]{.underline} No dilation of intrahepatic and extrahepatic bile ducts. The gallbladder is of normal size with anechoic lumen and a delicate wall. The pancreas is well-defined with homogeneous parenchyma, no dilation of the pancreatic duct, and no focal lesions. The spleen is homogeneous and not enlarged. Both kidneys are orthotopic and of normal size. The parenchymal rim is not narrowed. No evidence of stones in the renal collecting system. The moderately filled bladder is unremarkable. No pathological findings in the small pelvis. No enlarged lymph nodes along major vessels, and no free fluid. Conclusion: Morphologically and parametrically (after downgrading the significantly elevated elastography values due to congestion), the findings are consistent with chronic congestive liver with mild fibrosis. Otherwise, the abdominal overview is unremarkable. [Assessment]{.underline}: Very good findings after Norwood I-III, no current need for intervention. In the long term, there may be an indication for BAP/stent expansion of the central conduit constriction. The routine blood test for Fontan patients showed no abnormalities; vitamin D supplementation may be recommended in case of low levels. A cardiac MRI with flow measurement in the Fontan tunnel is initially recommended, followed by a decision on intervention in that area. We kindly remind you of the unchanged necessity of endocarditis prophylaxis in case of all bacteremias and dental restorations. An appropriate certificate is available for Emil, and the family is well-informed about the indication and the existence of the certificate. 
A LIMAX examination can only be performed in an inpatient setting, which was not possible during this stay due to organizational reasons. This should be done in the next inpatient stay. **Summary**: We are discharging Emil in good general condition and slim build, with no signs of infection. Puncture site is unremarkable. Cardiac status: Rhythmic heart action, no pathological heart sounds. Pulse status is normal. Lungs: Clear. Abdomen: Soft. Pulse oximetry oxygen saturation: 93% Blood pressure measurement (mmHg): 117/74 **Current Recommendations:** - Cardiac MRI in follow-up, appointment will be communicated, possibly including LIMAX - Vitamin D supplementation **Medication upon Discharge:** **Medication** **Dosage** **Frequency** --------------------- ------------ --------------- Captopril (Capoten) 2 mg 1-1-1 Carvedilol (Coreg) 0.2 mg 1-0-1 Aspirin 25 mg 1-0-0 **Lab results upon Discharge:** **Parameter** **Result** **Reference Range** ------------------------ -------------- --------------------- Calcium 2.34 mEq/L 2.10-2.55 mEq/L Phosphate 1.20 mEq/L 0.84-1.45 mEq/L Osmolality 285 mosmo/Kg 280-300 mosmo/Kg Iron 20.0 µmol/L 4.8-24.7 µmol/L Transferrin Saturation 28.1% 16.0-45.0% Magnesium 0.77 mEq/L 0.62-0.91 mEq/L Creatinine (Jaffé) 0.85 mg/dL 0.70-1.20 mg/dL Urea 26 mg/dL 18-45 mg/dL Total Bilirubin 0.97 mg/dL \<1.20 mg/dL Direct Bilirubin 0.33 mg/dL \<0.30 mg/dL Immunoglobulin G 11.44 g/L 5.49-15.84 g/L Immunoglobulin A 1.95 g/L 0.61-3.48 g/L Immunoglobulin M 0.62 g/L 0.50-1.90 g/L Cystatin C 0.96 mg/L 0.50-1.00 mg/L Transferrin 2.87 g/L \- Ferritin 54.5 µg/L 14.0-152.0 µg/L Total Cholesterol 110 mg/dL 82-192 mg/dL Triglycerides 64 mg/dL \- Apolipoprotein A1 0.96 g/L 1.04-2.02 g/L GPT 36 U/L \<41 U/L GOT 35 U/L \<50 U/L Alkaline Phosphatase 135 U/L 55-149 U/L Pseudo-Cholinesterase 5.64 kU/L 5.32-12.92 kU/L GLDH 3.2 U/L \<6.4 U/L Gamma-GT 92 U/L 8-61 U/L LDH 180 U/L 135-250 U/L Parathyroid Hormone 55.0 ng/L 15.0-65.0 ng/L 25-OH-Vitamin D3 10.9 nmol/L 50.0-150.0 nmol/L Free Thyroxine 17.90 ng/L 9.50-16.40 ng/L TSH 3.56 mU/L 0.50-4.30 mU/L ### Patient Report 5 **Dear colleague, ** We are reporting about the examination of our patient, Emil Nilsson, born on 12/04/2004, who presented to our outpatient clinic on 12/10/2021. **Diagnoses:** - Hypoplastic Left Heart Syndrome - Persistent foramen ovale - Persistent ductus arteriosus botalli (under Prostaglandin E1 Infusion) - Dysplasia of the mitral valve - Damus-Kaye-Stansel Procedure and aortopulmonary anastomosis on the right (modified BT-Shunt) - Secondary thoracic closure - Tricuspid valve insufficiency - Mild aortic valve insufficiency - Status post Glenn procedure - Fontan conduit retrocardial narrowing, extended hepatic vein window/VCI - Chronic liver congestion with mild fibrosis (sonography) **Procedures**: Cardiac MRI. **Medical History:** We kindly assume that the detailed medical history is known to you and refer to previous medical reports from our clinic. The current presentation is based on a referral from the outpatient pediatric cardiologist for a Cardiac MRI. Emil reports feeling subjectively well. **Physical Examination:** Emil is in good general condition and slim build, with no signs of infection. - Cardiac status: Rhythmic heart action, 2/6 systolic murmur. - Pulse status: Normal. - Lungs: Bilateral equal ventilation, vesicular breath sounds, no rales. - Abdomen: Soft, no hepatosplenomegaly. Unremarkable sternal scars. No signs of cardiopulmonary decompensation. - Current weight: 47 kg; current height: 169 cm. 
- Pulse oximetry oxygen saturation: 95%. - Blood pressure (mmHg): Right upper arm 132/94, left upper arm 121/98, right lower leg 158/94, left lower leg 156/94. **Cardiac MRI on 03/02/2022:** [Clinical Information, Question, Justification:]{.underline} Hypoplastic Left Heart Syndrome, Fontan procedure, congestive liver, retrocardiac Fontan tunnel narrowing, VCI dilation, Fontan tunnel flow pathology? [Technique]{.underline}: 1.5 Tesla MRI. Localization scan. Transverse/coronal T2 HASTE. Cine Fast Imaging with Steady-State Precession functional assessment in short-axis view, two-chamber view, four-chamber view, and three-chamber view. Flow quantifications of the right and left pulmonary arteries, main pulmonary artery, superior vena cava, and inferior vena cava using through-plane phase-contrast gradient-echo measurement. Contrast-enhanced MR angiography. [Findings]{.underline}: No prior images for comparison available. Anatomy: Hypoplastic left heart with DKS (Damus-Kaye-Stansel) anastomosis, dilated and hypertrophied right ventricle, broad ASD. No focal wall thinning or outpouchings. No intracavitary thrombi detected. No pericardial effusion. Descending aorta on the left side. Status post total cavopulmonary anastomosis with slight tapering between the LPA and the anastomosis at 7 mm, LPA 11 mm, RPA 14 mm. No pleural effusions. No evidence of confluent pulmonary infiltrates in the imaged lung regions. Congestive liver. Cine MRI: The 3D volumetry shows a normal global RVEF in the setting of Fontan procedure. No regional wall motion abnormalities. Mild tricuspid valve prolapse with minor regurgitation jet. **Volumetry:** [1) Left Ventricle:]{.underline} - LV-EF: 29 % - LV-EDV: 6 mL (4.2 mL/m²) - LV-ESV: 4 mL (3 mL/m²) - LV-SV: 2 mL (1 mL/m²) - Cardiac Output: 0.1 L/min (0.1 L/min/m²) [2) Right Ventricle:]{.underline} - Maximum flow velocity: 109 cm/s - Antegrade volume: 50 mL - Retrograde volume: 2 mL - Regurgitation fraction: 4 % [3) Right Pulmonary Artery:]{.underline} - Maximum flow velocity: 27 cm/s - Antegrade volume: 14 mL - Retrograde volume: 0 mL - Regurgitation fraction: 0 % - CAVE: Right upper pulmonary artery not captured [4) Left Pulmonary Artery:]{.underline} - Maximum flow velocity: 33 cm/s - Antegrade volume: 18 mL - Retrograde volume: 0 mL - Regurgitation fraction: 0 % [5) Inferior Vena Cava:]{.underline} - Maximum flow velocity: 38 cm/s - Antegrade volume: 30 mL - Retrograde volume: 0 mL - Regurgitation fraction: 0 % [6) Fontan Tunnel:]{.underline} - Maximum flow velocity: 53 cm/s - Antegrade volume: 31 mL - Retrograde volume: 0 mL - Regurgitation fraction: 0 % [7) Superior Vena Cava:]{.underline} - Maximum flow velocity: 23 cm/s - Antegrade volume: 16 mL - Retrograde volume: 0 mL - Regurgitation fraction: 0 % [Assessment:]{.underline} In the setting of status post Total Cavopulmonary Anastomosis with DKS anastomosis for hypoplastic left heart, there is good right ventricular systolic function with only minimal ejection above the aortic valve. Slight tapering of the baffles up to 13 mm compared to VCI up to 21 mm without evidence of stenosis or major baffle leakage. Morphologically, slight tapering between the LPA and the anastomosis with essentially balanced flow between the LPA and RPA. Mild tricuspid valve prolapse with discrete insufficiency. Hepatomegaly with signs of chronic congestion.
Chronic liver congestion
Which relationship best describes the dynamic between the prisoners and the figures controlling them? A. The prisoners are being groomed to serve as future collaborators in an intergalactic sex trafficking stint, carried out through the fourth dimension. B. The prisoners serve as entertainment for the figures, who seem to have made a game out of snatching up humans and manipulating their thoughts and behaviors. C. The prisoners have committed some sort of Earthly crime, and their punishment -- in order to avoid the death penalty -- is to spend a sentence in a labor camp operated by the figures. D. The prisoners have volunteered to be part of the figures' experiment for a specific time period, under the agreement that they will be returned to Earth in the condition they left it.
JUDAS RAM BY SAM MERWIN, Jr. Illustrated by JAMES VINCENT [Transcriber's Note: This etext was produced from Galaxy Science Fiction December 1950. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] The house was furnished with all luxuries, including women. If it only had a lease that could be broken— Roger Tennant, crossing the lawn, could see two of the three wings of the house, which radiated spoke-like from its heptagonal central portion. The wing on the left was white, with slim square pillars, reminiscent of scores of movie sets of the Deep South. That on the right was sundeck solar-house living-machine modern, something like a montage of shoeboxes. The wing hidden by the rest of the house was, he knew, spired, gabled and multicolored, like an ancient building in pre-Hitler Cracow. Dana was lying under a tree near the door, stretched out on a sort of deck chair with her eyes closed. She wore a golden gown, long and close-fitting and slit up the leg like the gown of a Chinese woman. Above it her comely face was sullen beneath its sleek cocoon of auburn hair. She opened her eyes at his approach and regarded him with nothing like favor. Involuntarily he glanced down at the tartan shorts that were his only garment to make sure that they were on properly. They were. He had thought them up in a moment of utter boredom and they were extremely comfortable. However, the near-Buchanan tartan did not crease or even wrinkle when he moved. Their captors had no idea of how a woven design should behave. "Waiting for me?" Tennant asked the girl. She said, "I'd rather be dead. Maybe I am. Maybe we're all dead and this is Hell." He stood over her and looked down until she turned away her reddening face. He said, "So it's going to be you again, Dana. You'll be the first to come back for a second run." "Don't flatter yourself," she replied angrily. She sat up, pushed back her hair, got to her feet a trifle awkwardly because of the tight-fitting tubular gown. "If I could do anything about it...." "But you can't," he told her. "They're too clever." "Is this crop rotation or did you send for me?" she asked cynically. "If you did, I wish you hadn't. You haven't asked about your son." "I don't even want to think about him," said Tennant. "Let's get on with it." He could sense the restless stirring of the woman within Dana, just as he could feel the stirring toward her within himself—desire that both of them loathed because it was implanted within them by their captors. They walked toward the house. It didn't look like a prison—or a cage. Within the dome of the barrier, it looked more like a well-kept if bizarre little country estate. There was clipped lawn, a scattering of trees, even a clear little brook that chattered unending annoyance at the small stones which impeded its flow. But the lawn was not of grass—it was of a bright green substance that might have been cellophane but wasn't, and it sprouted from a fabric that might have been canvas but was something else. The trees looked like trees, only their trunks were bark all the way through—except that it was not bark. The brook was practically water, but the small stones over which it flowed were of no earthly mineral. They entered the house, which had no roof, continued to move beneath a sky that glowed with light which did not come from a sun or moon. It might have been a well-kept if bizarre little country estate, but it wasn't. It was a prison, a cage. 
The other two women were sitting in the heptagonal central hall. Eudalia, who had borne twin girls recently, was lying back, newly thin and dark of skin and hair, smoking a scentless cigarette. A tall woman, thirtyish, she wore a sort of shimmering green strapless evening gown. Tennant wondered how she maintained it in place, for despite her recent double motherhood, she was almost flat of bosom. He asked her how she was feeling. "Okay, I guess," she said. "The way they manage it, there's nothing to it." She had a flat, potentially raucous voice. Eudalia had been a female foreman in a garment-cutting shop before being captured and brought through. "Good," he said. "Glad to hear it." He felt oddly embarrassed. He turned to Olga, broad, blonde and curiously vital, who sat perfectly still, regarding him over the pregnant swell of her dirndl-clad waist. Olga had been a waitress in a mining town hash-house near Scranton. Tennant wanted to put an encouraging hand on her shoulder, to say something that might cheer her up, for she was by far the youngest of the three female captives, barely nineteen. But with the eyes of the other two, especially Dana, upon him, he could not. "I guess I wasn't cut out to be a Turk," he said. "I don't feel at ease in a harem, even when it's supposedly my own." "You're not doing so badly," Dana replied acidly. "Lay off—he can't help it," said Eudalia unexpectedly. "He doesn't like it any better than we do." "But he doesn't have to—have them," objected Olga. She had a trace of Polish accent that was not unpleasant. In fact, Tennant thought, only her laughter was unpleasant, a shrill, uncontrolled burst of staccato sound that jarred him to his heels. Olga had not laughed of late, however. She was too frightened. "Let's get the meal ordered," said Dana and they were all silent, thinking of what they wanted to eat but would not enjoy when it came. Tennant finished with his order, then got busy with his surprise. It arrived before the meal, materializing against one of the seven walls of the roofless chamber. It was a large cabinet on slender straight legs that resembled dark polished wood. Tennant went to it, opened a hingeless door and pushed a knob on the inner surface. At once the air was hideous with the acerate harmony of a singing commercial.... ... so go soak your head, be it gold, brown or red, in Any-tone Shampoo! A disc jockey's buoyant tones cut in quickly as the final ooooo faded. "This is Grady Martin, your old night-owl, coming to you with your requests over Station WZZX, Manhattan. Here's a wire from Theresa McManus and the girls in the family entrance of Conaghan's Bar and Grill on West...." Tennant watched the girls as a sweet-voiced crooner began to ply an unfamiliar love lyric to a melody whose similarity to a thousand predecessors doomed it to instant success. Olga sat up straight, her pale blue eyes round with utter disbelief. She looked at the radio, at Tennant, at the other two women, then back at the machine. She murmured something in Polish that was inaudible, but her expression showed that it must have been wistful. Eudalia grinned at Tennant and, rising, did a sort of tap dance to the music, then whirled back into her chair, green dress ashimmer, and sank into it just to listen. Dana stood almost in the center of the room, carmine-tipped fingers clasped beneath the swell of her breasts. She might have been listening to Brahms or Debussy. Her eyes glowed with the salty brilliance of emotion and she was almost beautiful. " Rog! 
" she cried softly when the music stopped. "A radio and WZZX! Is it—are they—real?" "As real as you or I," he told her. "It took quite a bit of doing, getting them to put a set together. And I wasn't sure that radio would get through. TV doesn't seem to. Somehow it brings things closer...." Olga got up quite suddenly, went to the machine and, after frowning at it for a moment, tuned in another station from which a Polish-speaking announcer was followed by polka music. She leaned against the wall, resting one smooth forearm on the top of the machine. Her eyes closed and she swayed a little in time to the polka beat. Tennant caught Dana looking at him and there was near approval in her expression—approval that faded quickly as soon as she caught his gaze upon her. The food arrived then and they sat down at the round table to eat it. Tennant's meat looked like steak, it felt like steak, but, lacking the aroma of steak, it was almost tasteless. This was so with all of their foods, with their cigarettes, with everything in their prison—or their cage. Their captors were utterly without a human conception of smell, living, apparently, in a world without odor at all. Dana said suddenly, "I named the boy Tom, after somebody I hate almost as much as I hate you." Eudalia laid down her fork with a clatter and regarded Dana disapprovingly. "Why take it out on Rog?" she asked bluntly. "He didn't ask to come here any more than we did. He's got a wife back home. Maybe you want him to fall in love with you? Maybe you're jealous because he doesn't? Well, maybe he can't! And maybe it wouldn't work, the way things are arranged here." "Thanks, Eudalia," said Tennant. "I think I can defend myself. But she's right, Dana. We're as helpless as—laboratory animals. They have the means to make us do whatever they want." "Rog," said Dana, looking suddenly scared, "I'm sorry I snapped at you. I know it's not your fault. I'm— changing ." He shook his head. "No, Dana, you're not changing. You're adapting. We all are. We seem to be in a universe of different properties as well as different dimensions. We're adjusting. I can do a thing or two myself that seem absolutely impossible." "Are we really in the fourth dimension?" Dana asked. Of the three of them, she alone had more than a high-school education. "We may be in the eleventh for all I know," he told her. "But I'll settle for the fourth—a fourth dimension in space, if that makes scientific sense, because we don't seem to have moved in time. I wasn't sure of that, though, till we got the radio." "Why haven't they brought more of us through?" Eudalia asked, tamping out ashes in a tray that might have been silver. "I'm not sure," he said thoughtfully. "I think it's hard for them. They have a hell of a time bringing anyone through alive, and lately they haven't brought anyone through—not alive." "Why do they do it—the other way, I mean?" asked Dana. Tennant shrugged. "I don't know. I've been thinking about it. I suppose it's because they're pretty human." " Human! " Dana was outraged. "Do you call it human to—" "Hold on," he said. "They pass through their gateway to Earth at considerable danger and, probably, expense of some kind. Some of them don't come back. They kill those of us who put up a fight. Those who don't—or can't—they bring back with them. Live or dead, we're just laboratory specimens." "Maybe," Eudalia conceded doubtfully. Then her eyes blazed. 
"But the things they do—stuffing people, mounting their heads, keeping them on display in their—their whatever they live in. You call that human, Rog?" "Were you ever in a big-game hunter's trophy room?" Tennant asked quietly. "Or in a Museum of Natural History? A zoo? A naturalist's lab? Or even, maybe, photographed as a baby on a bear-skin rug?" "I was," said Olga. "But that's not the same thing." "Of course not," he agreed. "In the one instance, we're the hunters, the breeders, the trophy collectors. In the other"—he shrugged—"we're the trophies." There was a long silence. They finished eating and then Dana stood up and said, "I'm going out on the lawn for a while." She unzipped her golden gown, stepped out of it to reveal a pair of tartan shorts that matched his, and a narrow halter. "You thought those up while we ate," he said. It annoyed him to be copied, though he did not know why. She laughed at him silently, tossed her auburn hair back from her face and went out of the roofless house, holding the gold dress casually over her bare arm. Eudalia took him to the nursery. He was irritated now in another, angrier way. The infants, protected by cellophane-like coverlets, were asleep. "They never cry," the thin woman told him. "But they grow—God, how they grow!" "Good," said Tennant, fighting down his anger. He kissed her, held her close, although neither of them felt desire at the moment. Their captors had seen to that; it wasn't Eudalia's turn. Tennant said, "I wish I could do something about this. I hate seeing Dana so bitter and Olga so scared. It isn't their fault." "And it's not yours," insisted Eudalia. "Don't let them make you think it is." "I'll try not to," he said and stopped, realizing the family party was over. He had felt the inner tug of command, said good-by to the women and returned to his smaller compound within its own barrier dome. Then came the invisible aura of strain in the air, the shimmering illusion of heat that was not heat, that was prelude to his teleportation ... if that were the word. It was neither pleasant nor unpleasant; it was , that was all. He called it the training hall, not because it looked like a training hall but because that was its function. It didn't actually look like anything save some half-nourished dream a surrealist might have discarded as too nightmarish for belief. As in all of this strange universe, excepting the dome-cages in which the captives were held, the training hall followed no rules of three-dimensional space. One wall looked normal for perhaps a third of its length, then it simply wasn't for a bit. It came back farther on at an impossible angle. Yet, walking along it, touching it, it felt perfectly smooth and continuously straight. The opposite wall resembled a diagonal cross-section of an asymmetrical dumbbell—that was the closest Tennant could come to it in words. And it, too, felt straight. The floor looked like crystal smashed by some cosmic impact, yet it had reason. He knew this even though no reason was apparent to his three-dimensional vision. The ceiling, where he could see it, was beyond description. The captor Tennant called Opal came in through a far corner of the ceiling. He—if it was a he—was not large, although this, Tennant knew, meant nothing; Opal might extend thousands of yards in some unseen direction. He had no regular shape and much of him was iridescent and shot with constantly changing colors. Hence the name Opal. Communication was telepathic. 
Tennant could have yodeled or yelled or sung Mississippi Mud and Opal would have shown no reaction. Yet Tennant suspected that the captors could hear somewhere along the auditory scale, just as perhaps they could smell, although not in any human sense. You will approach without use of your appendages. The command was as clear as if it had been spoken aloud. Tennant took a deep breath. He thought of the space beside Opal. It took about three seconds and he was there, having spanned a distance of some ninety feet. He was getting good at it. Dog does trick, he thought. He went through the entire routine at Opal's bidding. When at last he was allowed to relax, he wondered, not for the first time, if he weren't mastering some of the alleged Guru arts. At once he felt probing investigation. Opal, like the rest of the captors, was as curious as a cat—or a human being. Tennant sat against a wall, drenched with sweat. There would be endless repetition before his workout was done. On Earth, dogs were said to be intellectually two-dimensional creatures. He wondered if they felt this helpless futility when their masters taught them to heel, to point, to retrieve. Some days later, the training routine was broken. He felt a sudden stir of near-sick excitement as he received the thought: Now you are ready. We are going through at last. Opal was nervous, so much so that he revealed more than he intended. Or perhaps that was his intent; Tennant could never be sure. They were going through to Tennant's own dimension. He wondered briefly just what his role was to be. He had little time to speculate before Opal seemed to envelop him. There was the blurring wrench of forced teleportation and they were in another room, a room which ended in a huge irregular passage that might have been the interior of a giant concertina—or an old-fashioned kodak. He stood before a kidney-shaped object over whose jagged surface colors played constantly. From Opal's thoughts it appeared to be some sort of ultradimensional television set, but to Tennant it was as incomprehensible as an oil painting to an animal. Opal was annoyed that Tennant could make nothing of it. Then came the thought: What cover must your body have not to be conspicuous? Tennant wondered, cynically, what would happen if he were to demand a costume of mediaeval motley, complete with Pied Piper's flute. He received quick reproof that made his head ring as from a blow. He asked Opal where and when they were going, was informed that he would soon emerge on Earth where he had left it. That told him everything but the date and season. Opal, like the rest of the captors, seemed to have no understanding of time in a human sense. Waiting, Tennant tried not to think of his wife, of the fact that he hadn't seen her in—was it more than a year and a half on Earth? He could have controlled his heartbeat with one of his new powers, but that might have made Opal suspicious. He should be somewhat excited. He allowed himself to be, though he obscured the reasons. He was going to see his wife again ... and maybe he could trick his way into not returning. The maid who opened the door for him was new, although her eyes were old. But she recognized him and stood aside to let him enter. There must, he thought, still be pictures of him around. He wondered how Agatha could afford a servant. "Is Mrs. Tennant in?" he asked. She shook her head and fright made twin stoplights of the rouge on her cheeks as she shut the door behind him. 
He went into the living room, directly to the long silver cigarette box on the coffee table. It was proof of homecoming to fill his lungs with smoke he could smell . He took another drag, saw the maid still in the doorway, staring. "There's no need for fright," he told her. "I believe I still own this house." Then, "When do you expect Mrs. Tennant?" "She just called. She's on her way home from the club." Still looking frightened, she departed for the rear of the house. Tennant stared after her puzzledly until the kitchen door swung shut behind her. The club? What club? He shrugged, returned to the feeling of comfort that came from being back here, about to see Agatha again, hold her close in no more than a few minutes. And stay, his mind began to add eagerly, but he pushed the thought down where Opal could not detect it. He took another deep, lung-filling drag on his cigarette, looked around the room that was so important a part of his life. The three women back there would be in a ghastly spot. He felt like a heel for wanting to leave them there, then knew that he would try somehow to get them out. Not, of course, anything that would endanger his remaining with Agatha; the only way his captors would get him back would be as a taxidermist's specimen. He realized, shocked and scared, that his thoughts of escape had slipped past his mental censor, and he waited apprehensively for Opal to strike. Nothing happened and he warily relaxed. Opal wasn't tapping his thoughts. Because he felt sure of his captive ... or because he couldn't on Earth? It was like being let out of a cage. Tennant grinned at the bookcase; the ebony-and-ivory elephants that Agatha had never liked were gone, but he'd get them back or another pair. The credenza had been replaced by a huge and ugly television console. That, he resolved, would go down in the cellar rumpus room, where its bleached modernity wouldn't clash with the casual antiquity of the living room. Agatha would complain, naturally, but his being back would make up for any amount of furniture shifting. He imagined her standing close to him, her lovely face lifted to be kissed, and his heart lurched like an adolescent's. This hunger was real, not implanted. Everything would be real ... his love for her, the food he ate, the things he touched, his house, his life.... Your wife and a man are approaching the house. The thought message from Opal crumbled his illusion of freedom. He sank down in a chair, trying to refuse to listen to the rest of the command: You are to bring the man through the gateway with you. We want another live male. Tennant shook his head, stiff and defiant in his chair. The punishment, when it came, was more humiliating than a slap across a dog's snout. Opal had been too interested in the next lab specimen to bother about his thoughts—that was why he had been free to think of escape. Tennant closed his eyes, willed himself to the front window. Now that he had mastered teleportation, it was incredible how much easier it was in his own world. He had covered the two miles from the gateway to the house in a mere seven jumps, the distance to the window in an instant. But there was no pleasure in it, only a confirmation of his captor's power over him. He was not free of them. He understood all too well what they wanted him to do; he was to play the Judas goat ... or rather the Judas ram, leading another victim to the fourth-dimensional pen. 
Grim, he watched the swoop of headlights in the driveway and returned to the coffee table, lit a fresh cigarette. The front door was flung open and his diaphragm tightened at the remembered sound of Agatha's throaty laugh ... and tightened further when it was followed by a deeper rumbling laugh. Sudden fear made the cigarette shake in his fingers. "... Don't be such a stuffed-shirt, darling." Agatha's mocking sweetness rang alarm-gongs in Tennant's memory. "Charley wasn't making a grab for me . He'd had one too many and only wanted a little fun. Really, darling, you seem to think that a girl...." Her voice faded out as she saw Tennant standing there. She was wearing a white strapless gown, had a blue-red-and-gold Mandarin jacket slung hussar-fashion over her left shoulder. She looked even sleeker, better groomed, more assured than his memory of her. "I'm no stuffed-shirt and you know it." Cass' tone was peevish. "But your idea of fun, Agatha, is pretty damn...." It was his turn to freeze. Unbelieving, Tennant studied his successor. Cass Gordon—the man , the ex-halfback whose bulk was beginning to get out of hand, but whose inherent aggressive grace had not yet deserted him. The man , that was all—unless one threw in the little black mustache and the smooth salesman's manner. "You know, Cass," Tennant said quietly, "I never for a moment dreamed it would be you." " Roger! " Agatha found her voice. "You're alive !" "Roger," repeated Tennant viciously. He felt sick with disgust. Maybe he should have expected a triangle, but somehow he hadn't. And here it was, with all of them going through their paces like a trio of tent-show actors. He said, "For God's sake, sit down." Agatha did so hesitantly. Her huge dark eyes, invariably clear and limpid no matter how much she had drunk, flickered toward him furtively. She said defensively, "I had detectives looking for you for six months. Where have you been, Rog? Smashing up the car like that and—disappearing! I've been out of my mind." "Sorry," said Tennant. "I've had my troubles, too." Agatha was scared stiff—of him. Probably with reason. He looked again at Cass Gordon and found that he suddenly didn't care. She couldn't say it was loneliness. Women have waited longer than eighteen months. He would have if his captors had let him. "Where in hell have you been, Rog?" Gordon's tone was almost parental. "I don't suppose it's news to you, but there was a lot of suspicion directed your way while that crazy killer was operating around here. Agatha and I managed to clear you." "Decent of you," said Tennant. He got up, crossed to the cabinet that served as a bar. It was fully equipped—with more expensive liquor, he noticed, than he had ever been able to afford. He poured a drink of brandy, waited for the others to fill their glasses. Agatha looked at him over the rim of hers. "Tell us, Rog. We have a right to know. I do, anyway." "One question first," he said. "What about those killings? Have there been any lately?" "Not for over a year," Cass told him. "They never did get the devil who skinned those bodies and removed the heads." So, Tennant thought, they hadn't used the gateway. Not since they had brought the four of them through, not since they had begun to train him for his Judas ram duties. Agatha was asking him if he had been abroad. "In a way," he replied unemotionally. "Sorry if I've worried you, Agatha, but my life has been rather—indefinite, since I—left." 
He was standing no more than four inches from this woman he had desired desperately for six years, and he no longer wanted her. He was acutely conscious of her perfume. It wrapped them both like an exotic blanket, and it repelled him. He studied the firm clear flesh of her cheek and chin, the arch of nostril, the carmine fullness of lower lip, the swell of bosom above low-cut gown. And he no longer wanted any of it or of her. Cass Gordon— It didn't have to be anybody at all. For it to be Cass Gordon was revolting. "Rog," she said and her voice trembled, "what are we going to do? What do you want to do?" Take her back? He smiled ironically; she wouldn't know what that meant. It would serve her right, but maybe there was another way. "I don't know about you," he said, "but I suspect we're in the same boat. I also have other interests." "You louse!" said Cass Gordon, arching rib cage and nostrils. "If you try to make trouble for Agatha, I can promise...." " What can you promise?" demanded Tennant. When Gordon's onset subsided in mumbles, he added, "Actually, I don't think I'm capable of making more than a fraction of the trouble for either of you that you both are qualified to make for yourselves." He lit a cigarette, inhaled. "Relax. I'm not planning revenge. After this evening, I plan to vanish for good. Of course, Agatha, that offers you a minor nuisance. You will have to wait six years to marry Cass—seven years if the maid who let me in tonight talks. That's the law, isn't it, Cass? You probably had it all figured out." "You bastard," said Cass. "You dirty bastard! You know what a wait like that could do to us." "Tristan and Isolde," said Tennant, grinning almost happily. "Well, I've had my little say. Now I'm off again. Cass, would you give me a lift? I have a conveyance of sorts a couple of miles down the road." He needed no telepathic powers to read the thoughts around him then. He heard Agatha's quick intake of breath, saw the split-second look she exchanged with Cass. He turned away, knowing that she was imploring her lover to do something, anything , as long as it was safe. Deliberately, Tennant poured himself a second drink. This might be easier and pleasanter than he had expected. They deserved some of the suffering he had had and there was a chance that they might get it. Tennant knew now why he was the only male human the captors had been able to take alive. Apparently, thanks to the rain-slick road, he had run the sedan into a tree at the foot of the hill beyond the river. He had been sitting there, unconscious, ripe fruit on their doorstep. They had simply picked him up. Otherwise, apparently, men were next to impossible for them to capture. All they could do was kill them and bring back their heads and hides as trophies. With women it was different—perhaps the captors' weapons, whatever they were, worked more efficiently on females. A difference in body chemistry or psychology, perhaps. More than once, during his long training with Opal, Tennant had sent questing thoughts toward his captor, asking why they didn't simply set up the gateway in some town or city and take as many humans as they wanted. Surprisingly there had been a definite fear reaction. As nearly as he could understand, it had been like asking an African pygmy, armed with a blowgun, to set up shop in the midst of a herd of wild elephants. It simply wasn't feasible—and furthermore he derived an impression of the tenuosity as well as the immovability of the gateway itself. 
They could be hurt, even killed by humans in a three-dimensional world. How? Tennant did not know. Perhaps as a man can cut finger or even throat on the edge of a near-two-dimensional piece of paper. It took valor for them to hunt men in the world of men. In that fact lay a key to their character—if such utterly alien creatures could be said to have character.
B. The prisoners serve as entertainment for the figures, who seem to have made a game out of snatching up humans and manipulating their thoughts and behaviors.
What methods is RelNet compared to?
### Introduction Reasoning about entities and their relations is an important problem for achieving general artificial intelligence. Often such problems are formulated as reasoning over graph-structured representations of knowledge. Knowledge graphs, for example, consist of entities and relations between them BIBREF0, BIBREF1, BIBREF2, BIBREF3. Representation learning BIBREF4, BIBREF5, BIBREF6, BIBREF7 and reasoning BIBREF8, BIBREF9, BIBREF10, BIBREF11 with such structured representations is an important and active area of research. Most previous work on knowledge representation and reasoning relies on a pipeline of natural language processing systems, often consisting of named entity extraction BIBREF12, entity resolution and coreference BIBREF13, relationship extraction BIBREF4, and knowledge graph inference BIBREF14. While this cascaded approach of using NLP systems can be effective at reasoning with knowledge bases at scale, it also compounds the errors of each component sub-system, and the importance of each sub-system for a particular downstream application is not clear. For the task of question answering, we instead attempt an end-to-end approach that directly models the entities and relations in the text as memory slots. While incorporating existing knowledge (from curated knowledge bases) for the purpose of question answering BIBREF11, BIBREF8, BIBREF15 is an important area of research, we consider the simpler setting where all the information is contained within the text itself, which is the approach taken by many recent memory-based neural network models BIBREF16, BIBREF17, BIBREF18, BIBREF19. Recently, BIBREF17 proposed a dynamic-memory-based neural network for implicitly modeling the state of entities present in the text for question answering. However, this model lacks any module for relational reasoning. In response, we propose RelNet, which extends memory-augmented neural networks with a relational memory to reason about relationships between multiple entities present within the text. Our end-to-end method reads text and writes to both memory slots and the edges between them. Intuitively, the memory slots correspond to entities and the edges correspond to relationships between entities, each represented as a vector. The only supervision signal for our method comes from answering questions on the text. We demonstrate the utility of the model through experiments on the bAbI tasks BIBREF18 and find that the model achieves smaller mean error across the tasks than the best previously published result BIBREF17 in the 10k-examples regime and achieves 0% error on 11 of the 20 tasks. ### RelNet Model We describe the RelNet model in this section. Figure 1 provides a high-level view of the model. The model is sequential in nature and consists of the following steps: read the text, process it into a dynamic relational memory, and then generate the answer using attention conditioned on the question. We model the dynamic memory in a fashion similar to Recurrent Entity Networks BIBREF17 and then equip it with an additional relational memory. There are three main components to the model: 1) the input encoder, 2) the dynamic memory, and 3) the output module. We describe these three modules in detail below. The input encoder and output module implementations are similar to the Entity Network BIBREF17; the main novelty lies in the dynamic memory. 
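The excerpt names the three components but does not reproduce the memory-update equations, so the sketch below is only a rough illustration of how a RelNet-style dynamic memory could be laid out: a set of entity memory slots plus a grid of pairwise relation (edge) vectors, both written with a gated update for every encoded sentence. The module name, dimensions, and the specific EntNet-style gating are assumptions for illustration, not the authors' exact formulation; PyTorch is likewise an assumption.

```python
# Minimal sketch of a RelNet-style dynamic memory: one vector per entity slot
# and one vector per ordered pair of slots (the relational "edges"). The
# EntNet-style gated write below is an assumption for illustration; the
# paper's exact update equations are not reproduced in this excerpt.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationalMemory(nn.Module):
    def __init__(self, num_slots: int, dim: int):
        super().__init__()
        self.num_slots = num_slots
        # h[j]: entity memory slots; r[j, k]: relation memory for slot pair (j, k).
        self.h0 = nn.Parameter(0.1 * torch.randn(num_slots, dim))
        self.r0 = nn.Parameter(0.1 * torch.randn(num_slots, num_slots, dim))
        self.U = nn.Linear(dim, dim, bias=False)    # transforms slot state
        self.W = nn.Linear(dim, dim, bias=False)    # transforms sentence encoding
        self.Ur = nn.Linear(3 * dim, dim)           # relation update candidate

    def initial_state(self):
        return self.h0.clone(), self.r0.clone()

    def step(self, s_t, h, r):
        """Update (h, r) with one encoded sentence s_t of shape (dim,)."""
        # Entity slots: gate each slot by its match with the sentence, then
        # write a gated candidate and re-normalize (simplified EntNet update).
        gate = torch.sigmoid(h @ s_t)                          # (num_slots,)
        cand = torch.tanh(self.U(h) + self.W(s_t))             # (num_slots, dim)
        h = F.normalize(h + gate.unsqueeze(-1) * cand, dim=-1)
        # Relation slots: candidate built from both endpoint slots and the
        # sentence, gated by how relevant both endpoints are to this sentence.
        hj = h.unsqueeze(1).expand(-1, self.num_slots, -1)
        hk = h.unsqueeze(0).expand(self.num_slots, -1, -1)
        ss = s_t.expand(self.num_slots, self.num_slots, -1)
        r_cand = torch.tanh(self.Ur(torch.cat([hj, hk, ss], dim=-1)))
        r_gate = (gate.unsqueeze(1) * gate.unsqueeze(0)).unsqueeze(-1)
        r = r + r_gate * r_cand
        return h, r
```

In this sketch the relation memory grows as the square of the number of slots, which matches the later remark that RelNet scales as the square of the number of memory slots used per QA pair.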
We describe the operations executed by the network for a single example consisting of a document with $T$ sentences, where each sentence consists of a sequence of words represented with $K$-dimensional word embeddings $\lbrace e_1, \ldots , e_N\rbrace$, a question on the document represented as another sequence of words, and an answer to the question. ### Related Work There is a long line of work in textual question-answering systems BIBREF21, BIBREF22. Recent successful approaches use memory-based neural networks for question answering, for example BIBREF23, BIBREF18, BIBREF24, BIBREF19, BIBREF17. Our model is also a memory-network-based model and is related to the Neural Turing Machine BIBREF25. As described previously, the model is closely related to the Recurrent Entity Networks model BIBREF17, which describes an end-to-end approach to model entities in text but does not directly model relations. Other approaches to question answering use external knowledge, for instance external knowledge bases BIBREF26, BIBREF11, BIBREF27, BIBREF28, BIBREF9 or external text like Wikipedia BIBREF29, BIBREF30. Very recently, and in parallel to this work, a method for relational reasoning called relation networks BIBREF31 was proposed. They demonstrated that simple neural network modules are not as effective at relational reasoning, and their proposed module is similar to our model. However, the relation network is not a memory-based model, and there is no mechanism to read and write relevant information for each pair. Moreover, while their approach scales as the square of the number of sentences, our approach scales as the square of the number of memory slots used per QA pair. The output module in our model can be seen as a type of relation network. Representation learning and reasoning over graph-structured data is also relevant to this work. Graph-based neural network models BIBREF32, BIBREF33, BIBREF34 have been proposed which take graph data as input. The relational memory, however, does not rely on a specified graph structure, and such models can potentially be used for multi-hop reasoning over the relational memory. BIBREF35 proposed a method for learning a graphical representation of the text data for question answering; however, that model requires explicit supervision for the graph at every step, whereas RelNet does not require explicit supervision for the graph. ### Experiments We evaluate the model's performance on the bAbI tasks BIBREF18, a collection of 20 question answering tasks which have become a benchmark for evaluating memory-augmented neural networks. We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17. Performance is measured in terms of mean percentage error on the tasks. Training Details: We used Adam and did a grid search for the learning rate in {0.01, 0.005, 0.001}, chose a fixed learning rate of 0.005 based on performance on the validation set, and clipped the gradient norm at 2. We keep all other details similar to BIBREF17 for a fair comparison. Embedding dimensions were fixed at 100, and models were trained for a maximum of 250 epochs with a mini-batch size of 32 for all tasks except task 3, for which the batch size was 16. Document sizes were limited to the most recent 70 sentences for all tasks, except task 3, for which the limit was 130. The RelNet models were run 5 times with different random seeds on each task, and the model with the best validation performance was chosen as the final model. 
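As a reading aid, the training details above can be collected into a small configuration sketch. This assumes a standard PyTorch loop; `RelNet` itself, `model.loss(batch)`, and the data pipeline are hypothetical stand-ins, and only the hyperparameter values (Adam, learning rate 0.005 from the grid {0.01, 0.005, 0.001}, gradient-norm clipping at 2, embedding size 100, 250 epochs, batch size 32/16, sentence windows of 70/130) come from the text.

```python
# Sketch of the reported training setup; only the hyperparameters below are
# taken from the text, everything else is an illustrative stand-in.
import torch
from torch.nn.utils import clip_grad_norm_

LEARNING_RATE = 0.005   # chosen from the grid {0.01, 0.005, 0.001} on validation
GRAD_CLIP = 2.0         # gradient norm clipped at 2
EMBED_DIM = 100         # word embedding size
MAX_EPOCHS = 250
NUM_RUNS = 5            # best-of-5 runs selected by validation performance


def config_for_task(task_id: int) -> dict:
    """Task 3 uses a smaller batch and a longer window of recent sentences."""
    return {
        "batch_size": 16 if task_id == 3 else 32,
        "max_sentences": 130 if task_id == 3 else 70,
    }


def train_one_run(model, train_batches):
    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
    for _ in range(MAX_EPOCHS):
        for batch in train_batches:
            optimizer.zero_grad()
            loss = model.loss(batch)                 # hypothetical loss helper
            loss.backward()
            clip_grad_norm_(model.parameters(), GRAD_CLIP)
            optimizer.step()
    return model
```

Per the text, `train_one_run` would be repeated `NUM_RUNS` times per task with different random seeds, keeping the run with the best validation performance.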
The baseline EntNet model was run 10 times for each task BIBREF17. The results are shown in Table 1. The RelNet model achieves a mean error of 0.285% across tasks, which is better than the results of the EntNet model BIBREF17. The RelNet model is able to achieve 0% test error on 11 of the tasks, whereas the EntNet model achieves 0% error on 7 of the tasks. ### Conclusion We demonstrated an end-to-end trained neural network augmented with a structured memory representation which can reason about entities and relations for question answering. Future work will investigate the performance of these models on more real-world datasets, interpret what the models learn, and scale these models to answer questions about entities and relations from reading massive text corpora. Figure 1: RelNet Model: The model represents the state of the world as a neural Turing machine with relational memory. At each time step, the model reads the sentence into an encoding vector and updates both entity memories and all edges between them representing the relations. Table 1: Mean % Error on the 20 bAbI tasks.
We compare the performance with the Recurrent Entity Networks model (EntNet) BIBREF17
Why does Matilda feel she was being made fun of? A. She thought Gorka was making up stories to appeal to her childish nature. B. She thought Gorka was playing with her trusting nature by telling her lies. C. She thought Gorka didn’t respect her enough. D. She thought Gorka was trying to make her feel stupid by saying things she couldn’t disprove.
PEN PAL Illustrated by DON SIBLEY By MILTON LESSER [Transcriber's Note: This etext was produced from Galaxy Science Fiction July 1951. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] All she wanted was a mate and she had the gumption to go out and hunt one down. But that meant poaching in a strictly forbidden territory! The best that could be said for Matilda Penshaws was that she was something of a paradox. She was thirty-three years old, certainly not aged when you consider the fact that the female life expectancy is now up in the sixties, but the lines were beginning to etch their permanent paths across her face and now she needed certain remedial undergarments at which she would have scoffed ten or even five years ago. Matilda was also looking for a husband. This, in itself, was not unusual—but Matilda was so completely wrapped up in the romantic fallacy of her day that she sought a prince charming, a faithful Don Juan, a man who had been everywhere and tasted of every worldly pleasure and who now wanted to sit on a porch and talk about it all to Matilda. The fact that in all probability such a man did not exist disturbed Matilda not in the least. She had been known to say that there are over a billion men in the world, a goodly percentage of whom are eligible bachelors, and that the right one would come along simply because she had been waiting for him. Matilda, you see, had patience. She also had a fetish. Matilda had received her A.B. from exclusive Ursula Johns College and Radcliff had yielded her Masters degree, yet Matilda was an avid follower of the pen pal columns. She would read them carefully and then read them again, looking for the masculine names which, through a system known only to Matilda, had an affinity to her own. To the gentlemen upon whom these names were affixed, Matilda would write, and she often told her mother, the widow Penshaws, that it was in this way she would find her husband. The widow Penshaws impatiently told her to go out and get dates. That particular night, Matilda pulled her battered old sedan into the garage and walked up the walk to the porch. The widow Penshaws was rocking on the glider and Matilda said hello. The first thing the widow Penshaws did was to take Matilda's left hand in her own and examine the next-to-the-last finger. "I thought so," she said. "I knew this was coming when I saw that look in your eye at dinner. Where is Herman's engagement ring?" Matilda smiled. "It wouldn't have worked out, Ma. He was too darned stuffy. I gave him his ring and said thanks anyway and he smiled politely and said he wished I had told him sooner because his fifteenth college reunion was this weekend and he had already turned down the invitation." The widow Penshaws nodded regretfully. "That was thoughtful of Herman to hide his feelings." "Hogwash!" said her daughter. "He has no true feelings. He's sorry that he had to miss his college reunion. That's all he has to hide. A stuffy Victorian prude and even less of a man than the others." "But, Matilda, that's your fifth broken engagement in three years. It ain't that you ain't popular, but you just don't want to cooperate. You don't fall in love, Matilda—no one does. Love osmoses into you slowly, without you even knowing, and it keeps growing all the time." Matilda admired her mother's use of the word osmosis, but she found nothing which was not objectionable about being unaware of the impact of love. 
She said good-night and went upstairs, climbed out of her light summer dress and took a cold shower. She began to hum to herself. She had not yet seen the pen pal section of the current Literary Review , and because the subject matter of that magazine was somewhat highbrow and cosmopolitan, she could expect a gratifying selection of pen pals. She shut off the shower, brushed her teeth, gargled, patted herself dry with a towel, and jumped into bed, careful to lock the door of her bedroom. She dared not let the widow Penshaws know that she slept in the nude; the widow Penshaws would object to a girl sleeping in the nude, even if the nearest neighbor was three hundred yards away. Matilda switched her bed lamp on and dabbed some citronella on each ear lobe and a little droplet on her chin (how she hated insects!). Then she propped up her pillows—two pillows partially stopped her post-nasal drip; and took the latest issue of the Literary Review off the night table. She flipped through the pages and came to personals. Someone in Nebraska wanted to trade match books; someone in New York needed a midwestern pen pal, but it was a woman; an elderly man interested in ornithology wanted a young chick correspondent interested in the same subject; a young, personable man wanted an editorial position because he thought he had something to offer the editorial world; and— Matilda read the next one twice. Then she held it close to the light and read it again. The Literary Review was one of the few magazines which printed the name of the advertiser rather than a box number, and Matilda even liked the sound of the name. But mostly, she had to admit to herself, it was the flavor of the wording. This very well could be it . Or, that is, him . Intelligent, somewhat egotistical male who's really been around, whose universal experience can make the average cosmopolite look like a provincial hick, is in need of several female correspondents: must be intelligent, have gumption, be capable of listening to male who has a lot to say and wants to say it. All others need not apply. Wonderful opportunity cultural experience ... Haron Gorka, Cedar Falls, Ill. The man was egotistical, all right; Matilda could see that. But she had never minded an egotistical man, at least not when he had something about which he had a genuine reason to be egotistical. The man sounded as though he would have reason indeed. He only wanted the best because he was the best. Like calls to like. The name—Haron Gorka: its oddness was somehow beautiful to Matilda. Haron Gorka—the nationality could be anything. And that was it. He had no nationality for all intents and purposes; he was an international man, a figure among figures, a paragon.... Matilda sighed happily as she put out the light. The moon shone in through the window brightly, and at such times Matilda generally would get up, go to the cupboard, pull out a towel, take two hairpins from her powder drawer, pin the towel to the screen of her window, and hence keep the disturbing moonlight from her eyes. But this time it did not disturb her, and she would let it shine. Cedar Falls was a small town not fifty miles from her home, and she'd get there a hop, skip, and jump ahead of her competitors, simply by arriving in person instead of writing a letter. Matilda was not yet that far gone in years or appearance. Dressed properly, she could hope to make a favorable impression in person, and she felt it was important to beat the influx of mail to Cedar Falls. 
Matilda got out of bed at seven, tiptoed into the bathroom, showered with a merest wary trickle of water, tiptoed back into her bedroom, dressed in her very best cotton over the finest of uplifting and figure-moulding underthings, made sure her stocking seams were perfectly straight, brushed her suede shoes, admired herself in the mirror, read the ad again, wished for a moment she were a bit younger, and tiptoed downstairs. The widow Penshaws met her at the bottom of the stairwell. "Mother," gasped Matilda. Matilda always gasped when she saw something unexpected. "What on earth are you doing up?" The widow Penshaws smiled somewhat toothlessly, having neglected to put in both her uppers and lowers this early in the morning. "I'm fixing breakfast, of course...." Then the widow Penshaws told Matilda that she could never hope to sneak about the house without her mother knowing about it, and that even if she were going out in response to one of those foolish ads in the magazines, she would still need a good breakfast to start with like only mother could cook. Matilda moodily thanked the widow Penshaws. Driving the fifty miles to Cedar Falls in a little less than an hour, Matilda hummed Mendelssohn's Wedding March all the way. It was her favorite piece of music. Once, she told herself: Matilda Penshaws, you are being premature about the whole thing. But she laughed and thought that if she was, she was, and, meanwhile, she could only get to Cedar Falls and find out. And so she got there. The man in the wire cage at the Cedar Falls post office was a stereotype. Matilda always liked to think in terms of stereotypes. This man was small, roundish, florid of face, with a pair of eyeglasses which hung too far down on his nose. Matilda knew he would peer over his glasses and answer questions grudgingly. "Hello," said Matilda. The stereotype grunted and peered at her over his glasses. Matilda asked him where she could find Haron Gorka. "What?" "I said, where can I find Haron Gorka?" "Is that in the United States?" "It's not a that; it's a he. Where can I find him? Where does he live? What's the quickest way to get there?" The stereotype pushed up his glasses and looked at her squarely. "Now take it easy, ma'am. First place, I don't know any Haron Gorka—" Matilda kept the alarm from creeping into her voice. She muttered an oh under her breath and took out the ad. This she showed to the stereotype, and he scratched his bald head. Then he told Matilda almost happily that he was sorry he couldn't help her. He grudgingly suggested that if it really were important, she might check with the police. Matilda did, only they didn't know any Haron Gorka, either. It turned out that no one did: Matilda tried the general store, the fire department, the city hall, the high school, all three Cedar Falls gas stations, the livery stable, and half a dozen private dwellings at random. As far us the gentry of Cedar Falls was concerned, Haron Gorka did not exist. Matilda felt bad, but she had no intention of returning home this early. If she could not find Haron Gorka, that was one thing; but she knew that she'd rather not return home and face the widow Penshaws, at least not for a while yet. The widow Penshaws meant well, but she liked to analyze other people's mistakes, especially Matilda's. Accordingly, Matilda trudged wearily toward Cedar Falls' small and unimposing library. She could release some of her pent-up aggression by browsing through the dusty slacks. This she did, but it was unrewarding. 
Cedar Falls had what might be called a microscopic library, and Matilda thought that if this small building were filled with microfilm rather than books, the library still would be lacking. Hence she retraced her steps and nodded to the old librarian as she passed. Then Matilda frowned. Twenty years from now, this could be Matilda Penshaws—complete with plain gray dress, rimless spectacles, gray hair, suspicious eyes, and a broom-stick figure.... On the other hand—why not? Why couldn't the librarian help her? Why hadn't she thought of it before? Certainly a man as well-educated as Haron Gorka would be an avid reader, and unless he had a permanent residence here in Cedar Palls, one couldn't expect that he'd have his own library with him. This being the case, a third-rate collection of books was far better than no collection at all, and perhaps the librarian would know Mr. Haron Gorka. Matilda cleared her throat. "Pardon me," she began. "I'm looking for—" "Haron Gorka." The librarian nodded. "How on earth did you know?" "That's easy. You're the sixth young woman who came here inquiring about that man today. Six of you—five others in the morning, and now you in the afternoon. I never did trust this Mr. Gorka...." Matilda jumped as if she had been struck strategically from the rear. "You know him? You know Haron Gorka?" "Certainly. Of course I know him. He's our steadiest reader here at the library. Not a week goes by that he doesn't take out three, four books. Scholarly gentleman, but not without charm. If I were twenty years younger—" Matilda thought a little flattery might be effective. "Only ten," she assured the librarian. "Ten years would be more than sufficient, I'm sure." "Are you? Well. Well, well." The librarian did something with the back of her hair, but it looked the same as before. "Maybe you're right. Maybe you're right at that." Then she sighed. "But I guess a miss is as good as a mile." "What do you mean?" "I mean anyone would like to correspond with Haron Gorka. Or to know him well. To be considered his friend. Haron Gorka...." The librarian seemed about to soar off into the air someplace, and if five women had been here first, Matilda was now definitely in a hurry. "Um, where can I find Mr. Gorka?" "I'm not supposed to do this, you know. We're not permitted to give the addresses of any of our people. Against regulations, my dear." "What about the other five women?" "They convinced me that I ought to give them his address." Matilda reached into her pocket-book and withdrew a five dollar bill. "Was this the way?" she demanded. Matilda was not very good at this sort of thing. The librarian shook her head. Matilda nodded shrewdly and added a twin brother to the bill in her hand. "Then is this better?" "That's worse. I wouldn't take your money—" "Sorry. What then?" "If I can't enjoy an association with Haron Gorka directly, I still could get the vicarious pleasure of your contact with him. Report to me faithfully and you'll get his address. That's what the other five will do, and with half a dozen of you, I'll get an overall picture. Each one of you will tell me about Haron Gorka, sparing no details. You each have a distinct personality, of course, and it will color each picture considerably. But with six of you reporting, I should receive my share of vicarious enjoyment. Is it—ah—a deal?" Matilda assured her that it was, and, breathlessly, she wrote down the address. She thanked the librarian and then she went out to her car, whistling to herself. 
Haron Gorka lived in what could have been an agrarian estate, except that the land no longer was being tilled. The house itself had fallen to ruin. This surprised Matilda, but she did not let it keep her spirits in check. Haron Gorka, the man, was what counted, and the librarian's account of him certainly had been glowing enough. Perhaps he was too busy with his cultural pursuits to pay any real attention to his dwelling. That was it, of course: the conspicuous show of wealth or personal industry meant nothing at all to Haron Gorka. Matilda liked him all the more for it. There were five cars parked in the long driveway, and now Matilda's made the sixth. In spite of herself, she smiled. She had not been the only one with the idea to visit Haron Gorka in person. With half a dozen of them there, the laggards who resorted to posting letters would be left far behind. Matilda congratulated herself for what she thought had been her ingenuity, and which now turned out to be something which she had in common with five other women. You live and learn, thought Matilda. And then, quite annoyedly, she berated herself for not having been the first. Perhaps the other five all were satisfactory; perhaps she wouldn't be needed; perhaps she was too late.... As it turned out, she wasn't. Not only that, she was welcomed with open arms. Not by Haron Gorka; that she really might have liked. Instead, someone she could only regard as a menial met her, and when he asked had she come in response to the advertisement, she nodded eagerly. He told her that was fine and he ushered her straight into a room which evidently was to be her living quarters. It contained a small undersized bed, a table, and a chair, and, near a little slot in the wall, there was a button. "You want any food or drink," the servant told her, "and you just press that button. The results will surprise you." "What about Mr. Gorka?" "When he wants you, he will send for you. Meanwhile, make yourself to home, lady, and I will tell him you are here." A little doubtful now, Matilda thanked him and watched him leave. He closed the door softly behind his retreating feet, but Matilda's ears had not missed the ominous click. She ran to the door and tried to open it, but it would not budge. It was locked—from the outside. It must be said to Matilda's favor that she sobbed only once. After that she realized that what is done is done and here, past thirty, she wasn't going to be girlishly timid about it. Besides, it was not her fault if, in his unconcern, Haron Gorka had unwittingly hired a neurotic servant. For a time Matilda paced back and forth in her room, and of what was going on outside she could hear nothing. In that case, she would pretend that there was nothing outside the little room, and presently she lay down on the bed to take a nap. This didn't last long, however: she had a nightmare in which Haron Gorka appeared as a giant with two heads, but, upon awaking with a start, she immediately ascribed that to her overwrought nerves. At that point she remembered what the servant had said about food and she thought at once of the supreme justice she could do to a juicy beefsteak. Well, maybe they didn't have a beefsteak. In that case, she would take what they had, and, accordingly, she walked to the little slot in the wall and pressed the button. She heard the whir of machinery. A moment later there was a soft sliding sound. Through the slot first came a delicious aroma, followed almost instantly by a tray. 
On the tray were a bowl of turtle soup, mashed potatoes, green peas, bread, a strange cocktail, root-beer, a parfait—and a thick tenderloin sizzling in hot butter sauce. Matilda gasped once and felt about to gasp again—but by then her salivary glands were working overtime, and she ate her meal. The fact that it was precisely what she would have wanted could, of course, be attributed to coincidence, and the further fact that everything was extremely palatable made her forget all about Haron Gorka's neurotic servant. When she finished her meal a pleasant lethargy possessed her, and in a little while Matilda was asleep again. This time she did not dream at all. It was a deep sleep and a restful one, and when she awoke it was with the wonderful feeling that everything was all right. The feeling did not last long. Standing over her was Haron Gorka's servant, and he said, "Mr. Gorka will see you now." "Now?" "Now. That's what you're here for, isn't it?" He had a point there, but Matilda hardly even had time to fix her hair. She told the servant so. "Miss," he replied, "I assure you it will not matter in the least to Haron Gorka. You are here and he is ready to see you and that is all that matters." "You sure?" Matilda wanted to take no chances. "Yes. Come." She followed him out of the little room and across what should have been a spacious dining area, except that everything seemed covered with dust. Of the other women Matilda could see nothing, and she suddenly realized that each of them probably had a cubicle of a room like her own, and that each in her turn had already had her first visit with Haron Gorka. Well, then, she must see to it that she impressed him better than did all the rest, and, later, when she returned to tell the old librarian of her adventures, she could perhaps draw her out and compare notes. She would not admit even to herself that she was disappointed with Haron Gorka. It was not that he was homely and unimpressive; it was just that he was so ordinary -looking. She almost would have preferred the monster of her dreams. He wore a white linen suit and he had mousy hair, drab eyes, an almost-Roman nose, a petulant mouth with the slight arch of the egotist at each corner. He said, "Greetings. You have come—" "In response to your ad. How do you do, Mr. Gorka?" She hoped she wasn't being too formal. But, then, there was no sense in assuming that he would like informality. She could only wait and see and adjust her own actions to suit him. Meanwhile, it would be best to keep on the middle of the road. "I am fine. Are you ready?" "Ready?" "Certainly. You came in response to my ad. You want to hear me talk, do you not?" "I—do." Matilda had had visions of her prince charming sitting back and relaxing with her, telling her of the many things he had done and seen. But first she certainly would have liked to get to know the man. Well, Haron Gorka obviously had more experience along these lines than she did. He waited, however, as if wondering what to say, and Matilda, accustomed to social chatter, gave him a gambit. "I must admit I was surprised when I got exactly what I wanted for dinner," she told him brightly. "Eh? What say? Oh, yes, naturally. A combination of telepathy and teleportation. The synthetic cookery is attuned to your mind when you press the buzzer, and the strength of your psychic impulses determines how closely the meal will adjust to your desires. The fact that the adjustment here was near perfect is commendable. 
It means either that you have a high psi-quotient, or that you were very hungry." "Yes," said Matilda vaguely. Perhaps it might be better, after all, if Haron Gorka were to talk to her as he saw fit. "Ready?" "Uh—ready." "Well?" "Well, what, Mr. Gorka?" "What would you like me to talk about?" "Oh, anything." "Please. As the ad read, my universal experience—is universal. Literally. You'll have to be more specific." "Well, why don't you tell me about some of your far travels? Unfortunately, while I've done a lot of reading, I haven't been to all the places I would have liked—" "Good enough. You know, of course, how frigid Deneb VII is?" Matilda said, "Beg pardon?" "Well, there was the time our crew—before I had retired, of course—made a crash landing there. We could survive in the vac-suits, of course, but the thlomots were after us almost at once. They go mad over plastic. They will eat absolutely any sort of plastic. Our vac-suits—" "—were made of plastic," Matilda suggested. She did not understand a thing he was talking about, but she felt she had better act bright. "No, no. Must you interrupt? The air-hose and the water feed, these were plastic. Not the rest of the suit. The point is that half of us were destroyed before the rescue ship could come, and the remainder were near death. I owe my life to the mimicry of a flaak from Capella III. It assumed the properties of plastic and led the thlomots a merry chase across the frozen surface of D VII. You travel in the Deneb system now and Interstellar Ordinance makes it mandatory to carry flaaks with you. Excellent idea, really excellent." Almost at once, Matilda's educational background should have told her that Haron Gorka was mouthing gibberish. But on the other hand she wanted to believe in him and the result was that it took until now for her to realize it. "Stop making fun of me," she said. "So, naturally, you'll see flaaks all over that system—" "Stop!" "What's that? Making fun of you?" Haron Gorka's voice had been so eager as he spoke, high-pitched, almost like a child's, and now he seemed disappointed. He smiled, but it was a sad smile, a smile of resignation, and he said, "Very well. I'm wrong again. You are the sixth, and you're no better than the other five. Perhaps you are even more outspoken. When you see my wife, tell her to come back. Again she is right and I am wrong...." Haron Gorka turned his back. Matilda could do nothing but leave the room, walk back through the house, go outside and get into her car. She noticed not without surprise that the other five cars were now gone. She was the last of Haron Gorka's guests to depart. As she shifted into reverse and pulled out of the driveway, she saw the servant leaving, too. Far down the road, he was walking slowly. Then Haron Gorka had severed that relationship, too, and now he was all alone. As she drove back to town, the disappointment melted slowly away. There were, of course, two alternatives. Either Haron Gorka was an eccentric who enjoyed this sort of outlandish tomfoolery, or else he was plainly insane. She could still picture him ranting on aimlessly to no one in particular about places which had no existence outside of his mind, his voice high-pitched and eager. It was not until she had passed the small library building that she remembered what she had promised the librarian. In her own way, the aging woman would be as disappointed as Matilda, but a promise was a promise, and Matilda turned the car in a wide U-turn and parked it outside the library. 
The woman sat at her desk as Matilda had remembered her, gray, broom-stick figure, rigid. But now when she saw Matilda she perked up visibly. "Hello, my dear," she said. "Hi." "You're back a bit sooner than I expected. But, then, the other five have returned, too, and I imagine your story will be similar." "I don't know what they told you," Matilda said. "But this is what happened to me." She quickly then related everything which had happened, completely and in detail. She did this first because it was a promise, and second because she knew it would make her feel better. "So," she finished, "Haron Gorka is either extremely eccentric or insane. I'm sorry." "He's neither," the librarian contradicted. "Perhaps he is slightly eccentric by your standards, but really, my dear, he is neither." "What do you mean?" "Did he leave a message for his wife?" "Why, yes. Yes, he did. But how did you know? Oh, I suppose he told the five." "No. He didn't. But you were the last and I thought he would give you a message for his wife—" Matilda didn't understand. She didn't understand at all, but she told the little librarian what the message was. "He wanted her to return," she said. The librarian nodded, a happy smile on her lips. "You wouldn't believe me if I told you something." "What's that?" "I am Mrs. Gorka." The librarian stood up and came around the desk. She opened a drawer and took out her hat and perched it jauntily atop her gray hair. "You see, my dear, Haron expects too much. He expects entirely too much." Matilda did not say a word. One madman a day would be quite enough for anybody, but here she found herself confronted with two. "We've been tripping for centuries, visiting every habitable star system from our home near Canopus. But Haron is too demanding. He says I am a finicky traveler, that he could do much better alone, the accommodations have to be just right for me, and so forth. When he loses his temper, he tries to convince me that any number of females of the particular planet would be more than thrilled if they were given the opportunity just to listen to him. "But he's wrong. It's a hard life for a woman. Someday—five thousand, ten thousand years from now—I will convince him. And then we will settle down on Canopus XIV and cultivate torgas . That would be so nice—" "I'm sure." "Well, if Haron wants me back, then I have to go. Have a care, my dear. If you marry, choose a home-body. I've had the experience and you've seen my Haron for yourself." And then the woman was gone. Numbly, Matilda walked to the doorway and watched her angular figure disappear down the road. Of all the crazy things.... Deneb and Capella and Canopus, these were stars. Add a number and you might have a planet revolving about each star. Of all the insane— They were mad, all right, and now Matilda wondered if, actually, they were husband and wife. It could readily be; maybe the madness was catching. Maybe if you thought too much about such things, such travels, you could get that way. Of course, Herman represented the other extreme, and Herman was even worse in his own way—but hereafter Matilda would seek the happy medium. And, above all else, she had had enough of her pen pal columns. They were, she realized, for kids. She ate dinner in Cedar Falls and then she went out to her car again, preparing for the journey back home. The sun had set and it was a clear night, and overhead the great broad sweep of the Milky Way was a pale rainbow bridge in the sky. Matilda paused. 
Off in the distance there was a glow on the horizon, and that was the direction of Haron Gorka's place. The glow increased; soon it was a bright red pulse pounding on the horizon. It flickered. It flickered again, and finally it was gone. The stars were white and brilliant in the clear country air. That was why Matilda liked the country better than the city, particularly on a clear summer night when you could see the span of the Milky Way. But abruptly the stars and the Milky Way were paled by the brightest shooting star Matilda had ever seen. It flashed suddenly and it remained in view for a full second, searing a bright orange path across the night sky. Matilda gasped and ran into her car. She started the gears and pressed the accelerator to the floor, keeping it there all the way home. It was the first time she had ever seen a shooting star going up .
B. She thought Gorka was playing with her trusting nature by telling her lies.
What three datasets are used to measure performance?
### Introduction In the past decade, many large-scale Knowledge Graphs (KGs), such as Freebase BIBREF0, DBpedia BIBREF1 and YAGO BIBREF2, have been built to represent complex human knowledge about the real world in a machine-readable format. The facts in KGs are usually encoded as triples $(\textit {head entity}, relation, \textit {tail entity})$ (denoted $(h, r, t)$ in this study) through the Resource Description Framework, e.g., $(\textit {Donald Trump}, Born In, \textit {New York City})$. Figure FIGREF2 shows a subgraph of a knowledge graph about the family of Donald Trump. In many KGs, we can observe that some relations indicate attributes of entities, such as $\textit {Born}$ and $\textit {Abstract}$ in Figure FIGREF2, while others indicate relations between entities (where both the head and tail entities are real-world entities). Hence, the relationships in a KG can be divided into relations and attributes, and correspondingly into two types of triples, namely relation triples and attribute triples BIBREF3. A relation triple represents a relationship between entities, e.g., $(\textit {Donald Trump}, Father of, \textit {Ivanka Trump})$, while an attribute triple denotes a literal attribute value of an entity, e.g., $(\textit {Donald Trump}, Born, \textit {"June 14, 1946"})$. Knowledge graphs have become an important basis for many artificial intelligence applications, such as recommendation systems BIBREF4, question answering BIBREF5 and information retrieval BIBREF6, and are attracting growing interest in both academia and industry. A common approach to applying KGs in these applications is through embedding, which provides a simple method to encode both entities and relations into continuous low-dimensional vector spaces. Hence, learning distributed representations of knowledge graphs has attracted much research attention in recent years. TransE BIBREF7 is a seminal work on learning low-dimensional vectors for both entities and relations. The basic idea behind TransE is that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds, i.e., $\textbf {h}+\textbf {r}\approx \textbf {t}$. This model provides a flexible way to complete KGs, such as predicting missing items in the knowledge graph. Since then, several methods such as TransH BIBREF8 and TransR BIBREF9, which represent the relational translation in other effective forms, have been proposed. Recent attempts have focused on either incorporating extra information beyond KG triples BIBREF10, BIBREF11, BIBREF12, BIBREF13, or designing more complicated strategies BIBREF14, BIBREF15, BIBREF16. While these methods have achieved promising results in KG completion and link prediction, existing knowledge graph embedding methods still have room for improvement. First, TransE and most of its extensions only take direct relations between entities into consideration. We argue that the high-order structural relationships between entities also contain rich semantics, and that incorporating this information can improve model performance. For example, the fact $\textit {Donald Trump}\stackrel{Father of}{\longrightarrow }\textit {Ivanka Trump}\stackrel{Spouse}{\longrightarrow }\textit {Jared Kushner} $ indicates a relationship between the entity Donald Trump and the entity Jared Kushner.
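To make the translational intuition concrete, here is a minimal sketch of the TransE scoring idea (our own illustration with toy embeddings, not code from the paper): a triple scores low when the head embedding plus the relation vector lands near the tail embedding.

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """Distance-based TransE score: small for plausible triples, since h + r should be close to t."""
    return np.linalg.norm(h + r - t, ord=norm)

# Toy 4-dimensional embeddings (illustrative values only).
h = np.array([0.1, 0.3, -0.2, 0.5])   # e.g. the entity "Donald Trump"
r = np.array([0.2, -0.1, 0.4, 0.0])   # e.g. the relation "Father of"
t = h + r + 0.01                      # a tail embedding close to h + r
print(transe_score(h, r, t))          # 0.04: close to zero, so the triple is plausible
```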
Several path-based methods have attempted to take multi-step relation paths into consideration when learning the high-order structural information of KGs BIBREF17, BIBREF18. Note, however, that the huge number of paths poses a critical complexity challenge for these methods. In order to enable efficient path modeling, they have to make approximations by sampling or by applying a path selection algorithm. We argue that making such approximations has a large impact on final performance. Second, to the best of our knowledge, most existing knowledge graph embedding methods leverage only the relation triples of KGs while ignoring a large number of attribute triples. Therefore, these methods easily suffer from the sparseness and incompleteness of the knowledge graph. Even worse, structural information alone usually cannot distinguish the different meanings of relations and entities in different triples. We believe that the rich information encoded in attribute triples can help explore semantic information and further improve the performance of knowledge graph embedding. For example, we can learn the date of birth and the abstract of Donald Trump from the values of Born and Abstract in Figure FIGREF2. There are a huge number of attribute triples in real KGs; for example, the statistics in BIBREF3 show that attribute triples are three times as numerous as relation triples in English DBpedia (2016-04). A few recent attempts have tried to incorporate attribute triples BIBREF11, BIBREF12. However, there are two limitations in these methods. One is that only a part of the attribute triples is used; for example, only the entity description is used in BIBREF12. The other is that some attempts jointly model the attribute triples and relation triples in one unified optimization problem, so the losses of the two kinds of triples have to be carefully balanced during optimization. For example, BIBREF3 use hyper-parameters to weight the losses of the two kinds of triples in their models. Considering the limitations of existing knowledge graph embedding methods, we believe it is of critical importance to develop a model that can capture both the high-order structural and the attribute information of KGs in an efficient, explicit and unified manner. Towards this end, inspired by the recent development of graph convolutional networks (GCNs) BIBREF19, which have the potential to achieve this goal but have not been explored much for knowledge graph embedding, we propose Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding (KANE). The key idea of KANE is to aggregate all attribute triples with bias and to perform embedding propagation based on relation triples when calculating the representation of a given entity. Specifically, two careful designs are equipped in KANE to address the above two challenges: 1) recursive embedding propagation based on relation triples, which updates an entity's embedding; by performing such recursive embedding propagation, the high-order structural information of KGs can be captured in linear time complexity; and 2) multi-head attention-based aggregation, where the weight of each attribute triple is learned through a neural attention mechanism BIBREF20. In experiments, we evaluate our model on two KG tasks: knowledge graph completion and entity classification. Experimental results on three datasets show that our method significantly outperforms state-of-the-art methods.
The main contributions of this study are as follows: 1) We highlight the importance of explicitly modeling the high-order structural and attribute information of KGs to provide better knowledge graph embeddings. 2) We propose a new method, KANE, which can capture both the high-order structural and the attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional network framework. 3) We conduct experiments on three datasets, demonstrating the effectiveness of KANE and its interpretability in understanding the importance of high-order relations. ### Related Work In recent years, many efforts in knowledge graph embedding have aimed to encode entities and relations into continuous low-dimensional embedding spaces. Knowledge graph embedding provides a simple and effective way to apply KGs in various artificial intelligence applications, and has therefore attracted much research attention. The general methodology is to define a score function $f_r(h,t)$ for triples, which implies some type of transformation on $\textbf {h}$ and $\textbf {t}$, and to learn the representations of entities and relations by minimizing a loss built on it. TransE BIBREF7 is a seminal work in knowledge graph embedding, which assumes that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ when $(h, r, t)$ holds, as mentioned in the section "Introduction". Hence, TransE defines the following loss function: TransE's treatment of a relation as a translation between the head and tail entities is inspired by word2vec BIBREF21, where relationships between words often correspond to translations in latent feature space. This model achieves a good trade-off between computational efficiency and accuracy in KGs with thousands of relations, but it has flaws in dealing with one-to-many, many-to-one and many-to-many relations. In order to address this issue, TransH BIBREF8 models a relation as a relation-specific hyperplane together with a translation on it, allowing entities to have distinct representations under different relations. TransR BIBREF9 models entities and relations in separate spaces, i.e., an entity space and relation spaces, and performs translation from the entity space to the relation spaces. TransD BIBREF22 captures the diversity of relations and entities simultaneously by defining dynamic mapping matrices. Recent attempts can be divided into two categories: (i) those which try to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relation paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; and (ii) those which try to design more complicated strategies, e.g., deep neural network models BIBREF24. Besides TransE and its extensions, some efforts measure plausibility by matching the latent semantics of entities and relations. The basic idea behind these models is that plausible triples of a KG are assigned low energies. For example, the Distant Model BIBREF25 defines two different projections for the head and tail entities of a specific relation, i.e., $\textbf {M}_{r,1}$ and $\textbf {M}_{r,2}$, so that the vectors of the head and tail entities are transformed by these two projections. The loss function is $f_r(h,t)=||\textbf {M}_{r,1}\textbf {h}-\textbf {M}_{r,2}\textbf {t}||_{1}$.
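The related models above differ mainly in their score functions. As a rough sketch of two of them (our own illustration, with made-up dimensions): TransH projects the entities onto a relation-specific hyperplane before translating, while the Distant Model applies two relation-specific projection matrices.

```python
import numpy as np

def transh_score(h, t, d_r, w_r, norm=1):
    """TransH-style score: project h and t onto the hyperplane with unit normal w_r, then translate by d_r."""
    w_r = w_r / np.linalg.norm(w_r)
    h_p = h - np.dot(w_r, h) * w_r
    t_p = t - np.dot(w_r, t) * w_r
    return np.linalg.norm(h_p + d_r - t_p, ord=norm)

def distant_score(h, t, M_r1, M_r2):
    """Distant Model score: f_r(h, t) = ||M_r1 h - M_r2 t||_1."""
    return np.linalg.norm(M_r1 @ h - M_r2 @ t, ord=1)

k = 4
h, t, d_r, w_r = (np.random.randn(k) for _ in range(4))
print(transh_score(h, t, d_r, w_r))
print(distant_score(h, t, np.random.randn(k, k), np.random.randn(k, k)))
```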
Our KANE is conceptually advantageous over existing methods in that: 1) it directly factors high-order relations into the predictive model in linear time, which avoids the labor-intensive process of materializing paths and is thus more efficient and convenient to use; 2) it directly encodes all attribute triples when learning entity representations, which captures rich semantic information and further improves the performance of knowledge graph embedding; and 3) KANE factors high-order relations and attribute information into the predictive model in an efficient, explicit and unified manner, so all related parameters are tailored to optimizing the embedding objective. ### Problem Formulation In this study, we consider two kinds of triples existing in KGs: relation triples and attribute triples. Relation triples denote relations between entities, while attribute triples describe attributes of entities. Both relation and attribute triples carry important information about an entity, so we take both of them into consideration in the task of learning entity representations. We let $I $ denote the set of IRIs (Internationalized Resource Identifiers), $B $ the set of blank nodes, and $L $ the set of literals (denoted by quoted strings). The relation triples and attribute triples can be formalized as follows: Definition 1. Relation and Attribute Triples: A set of relation triples $ T_{R} $ can be represented by $ T_{R} \subset E \times R \times E $, where $E \subset I \cup B $ is the set of entities and $R \subset I$ is the set of relations between entities. Similarly, $ T_{A} \subset E \times R \times A $ is the set of attribute triples, where $ A \subset I \cup B \cup L $ is the set of attribute values. Definition 2. Knowledge Graph: A KG consists of a combination of relation triples of the form $ (h, r, t)\in T_{R} $ and attribute triples of the form $ (h, r, a)\in T_{A} $. Formally, we represent a KG as $G=(E,R,A,T_{R},T_{A})$, where $E=\lbrace h,t|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is the set of entities, $R =\lbrace r|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is the set of relations, and $A=\lbrace a|(h,r,a)\in T_{A}\rbrace $ is the set of attribute values. The purpose of this study is to build an embedding-based model that can capture both the high-order structural and the attribute information of KGs and that assigns a continuous representation to each element of a triple of the form $ (\textbf {h}, \textbf {r}, \textbf {t})$ or $ (\textbf {h}, \textbf {r}, \textbf {a})$, where the boldfaced $\textbf {h}\in \mathbb {R}^{k}$, $\textbf {r}\in \mathbb {R}^{k}$, $\textbf {t}\in \mathbb {R}^{k}$ and $\textbf {a}\in \mathbb {R}^{k}$ denote the embedding vectors of the head entity $h$, relation $r$, tail entity $t$ and attribute $a$, respectively. Next, we detail our proposed model, which captures both the high-order structural and the attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional network framework. ### Proposed Model In this section, we present the proposed model in detail. We first introduce the overall framework of KANE, then discuss the input embeddings of entities, relations and values in KGs, the design of the embedding propagation layers based on the graph attention network, and the loss functions for the link prediction and entity classification tasks, respectively. ### Proposed Model ::: Overall Architecture The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right.
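Before walking through the layers, here is one minimal way the two kinds of triples from the Problem Formulation could be held in code (a sketch with hypothetical entity and attribute names, not the authors' data format):

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeGraph:
    relation_triples: set = field(default_factory=set)   # (h, r, t), where t is an entity
    attribute_triples: set = field(default_factory=set)  # (h, r, a), where a is a literal value

    def entities(self):
        ents = {h for h, _, _ in self.relation_triples} | {t for _, _, t in self.relation_triples}
        return ents | {h for h, _, _ in self.attribute_triples}

kg = KnowledgeGraph()
kg.relation_triples.add(("Donald Trump", "Father of", "Ivanka Trump"))
kg.attribute_triples.add(("Donald Trump", "Born", '"June 14, 1946"'))
print(kg.entities())
```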
As shown in Figure FIGREF2, KANE takes all the triples of the knowledge graph as input. The task of the attribute embedding layer is to embed every value in the attribute triples into a continuous vector space while preserving its semantic information. To capture the high-order structural information of KGs, we use an attention-based embedding propagation method. This method recursively propagates the embeddings of entities from an entity's neighbors and aggregates the neighbors with different weights. The final embeddings of entities, relations and values are fed into two different deep neural networks for two different tasks: link prediction and entity classification. ### Proposed Model ::: Attribute Embedding Layer The value in an attribute triple is usually a sentence or a word. To encode the representation of a value from its sentence or word, we need to encode the variable-length sentence into a fixed-length vector. In this study, we adopt two different encoders to model the attribute value. Bag-of-Words Encoder. The representation of an attribute value can be generated by summing the embeddings of all words in the value. We denote the attribute value $a$ as a word sequence $a = w_{1},...,w_{n}$, where $w_{i}$ is the word at position $i$. The embedding of $\textbf {a}$ can be defined as follows, where $\textbf {w}_{i}\in \mathbb {R}^{k}$ is the word embedding of $w_{i}$. The Bag-of-Words Encoder is a simple and intuitive method which can capture the relative importance of words, but it suffers from the fact that two strings containing the same words in different orders will have the same representation. LSTM Encoder. In order to overcome the limitation of the Bag-of-Words encoder, we consider using an LSTM network to encode the sequence of words in an attribute value into a single vector. The final hidden state of the LSTM network is selected as the representation of the attribute value, where $f_{lstm}$ is the LSTM network. ### Proposed Model ::: Embedding Propagation Layer Next, we describe the details of the recursive embedding propagation method, built upon the architecture of graph convolutional networks. Moreover, by exploiting the idea of graph attention networks, our method learns to assign varying levels of importance to the entities in every entity's neighborhood and can generate attentive weights for cascaded embedding propagation. In this study, the embedding propagation layer consists of two main components: attentive embedding propagation and embedding aggregation. Here, we start by describing the attentive embedding propagation. Attentive Embedding Propagation: Consider a KG $G$; the input to our layer is a set of entity, relation and attribute value embeddings. We use $\textbf {h}\in \mathbb {R}^{k}$ to denote the embedding of entity $h$. The neighborhood of entity $h$ can be described by $\mathcal {N}_{h} = \lbrace t,a|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $. The purpose of attentive embedding propagation is to encode $\mathcal {N}_{h}$ and output a vector $\vec{\textbf {h}}$ as the new embedding of entity $h$. In order to obtain sufficient expressive power, one learnable linear transformation $\textbf {W}\in \mathbb {R}^{k^{^{\prime }} \times k}$ is adopted to transform the input embeddings into a higher-level feature space. In this study, we take a triple $(h,r,t)$ as an example, and the output vector $\vec{\textbf {h}}$ can be formulated as follows, where $\pi (h,r,t)$ is the attention coefficient which indicates the importance of entity $t$ to entity $h$.
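Under assumed vocabulary and dimension sizes, the two attribute-value encoders could be sketched in PyTorch roughly as follows (our own illustration, not the authors' implementation):

```python
import torch
import torch.nn as nn

class BowEncoder(nn.Module):
    """Sum of word embeddings: order-insensitive encoding of an attribute value."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, word_ids):              # word_ids: (batch, seq_len)
        return self.emb(word_ids).sum(dim=1)  # (batch, dim)

class LstmEncoder(nn.Module):
    """Final LSTM hidden state: order-sensitive encoding of an attribute value."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, word_ids):
        _, (h_n, _) = self.lstm(self.emb(word_ids))
        return h_n[-1]                        # (batch, dim)

ids = torch.randint(0, 1000, (2, 5))          # two toy attribute values of five word ids each
print(BowEncoder(1000, 64)(ids).shape, LstmEncoder(1000, 64)(ids).shape)
```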
In this study, the attention coefficients also control how much information is propagated from the neighborhood through the relation. To make attention coefficients easily comparable across different entities, the attention coefficient $\pi (h,r,t)$ is computed using a softmax function over all the triples connected with $h$. The softmax function can be formulated as follows: Hereafter, we implement the attention coefficient $\pi (h,r,t)$ through a single-layer feedforward neural network, formulated as follows, where LeakyReLU is selected as the activation function. As shown in Equation DISPLAY_FORM13, the attention coefficient score depends on the distance between the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ and the tail entity's embedding $\textbf {t}$, which follows the idea behind TransE that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds. Embedding Aggregation. To stabilize the learning process of attention, we perform multi-head attention on the final layer. Specifically, we use $m$ attention mechanisms to execute the transformation of Equation DISPLAY_FORM11. An aggregator is needed to combine all embeddings of the multi-head graph attention layer. In this study, we adopt two types of aggregators: Concatenation Aggregator concatenates all embeddings of the multi-head graph attention, followed by a nonlinear transformation, where $\mathop {\Big |\Big |}$ represents concatenation, $ \pi (h,r,t)^{i}$ is the normalized attention coefficient computed by the $i$-th attentive embedding propagation, and $\textbf {W}^{i}$ denotes the linear transformation of the input embedding. Averaging Aggregator sums all embeddings of the multi-head graph attention, and the final output embedding is calculated by averaging. In order to encode the high-order connectivity information in KGs, we use multiple embedding propagation layers to gather the deep information propagated from the neighbors. More formally, the embedding of entity $h$ in the $l$-th layer can be defined as follows: After performing $L$ embedding propagation layers, we obtain the final embeddings of entities, relations and attribute values, which include both the high-order structural and the attribute information of KGs. Next, we discuss the loss functions of KANE for the two different tasks and introduce the learning and optimization details.
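A simplified sketch of one propagation step for a single entity, with TransE-style attention logits and the two aggregators, might look like the following. This is our own reading of the description and deliberately omits the learnable transform $\textbf {W}$ and per-head parameters; it is not the authors' layer.

```python
import torch
import torch.nn.functional as F

def propagate(h, neighbors, heads=2):
    """h: (k,) embedding of the center entity; neighbors: list of (r, t) embedding pairs.
    The attention logit follows the translational idea: the closer h + r is to t, the larger the weight."""
    rs = torch.stack([r for r, _ in neighbors])            # (n, k)
    ts = torch.stack([t for _, t in neighbors])            # (n, k)
    outputs = []
    for _ in range(heads):                                  # multi-head attention (shared sketch weights)
        logits = -torch.norm(h + rs - ts, p=1, dim=1)       # negative TransE distance as energy
        alpha = F.softmax(F.leaky_relu(logits), dim=0)      # attention coefficients pi(h, r, t)
        outputs.append((alpha.unsqueeze(1) * ts).sum(dim=0))
    concat = torch.cat(outputs, dim=0)                      # concatenation aggregator: (heads * k,)
    average = torch.stack(outputs).mean(dim=0)              # averaging aggregator: (k,)
    return concat, average

k = 8
h = torch.randn(k)
neighbors = [(torch.randn(k), torch.randn(k)) for _ in range(3)]
concat, average = propagate(h, neighbors)
print(concat.shape, average.shape)                          # torch.Size([16]) torch.Size([8])
```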
Specifically, we train our model using a hinge loss function, given formally as follows, where $\gamma >0$ is a margin hyper-parameter, $[x ]_{+}$ denotes the positive part of $x$, $T=T_{R} \cup T_{A}$ is the set of valid triples, and $T^{\prime }$ is the set of corrupted triples, which can be formulated as: Entity Classification. For the task of entity classification, we simply use a fully connected layer and a binary cross-entropy (BCE) loss over a sigmoid activation on the output of the last layer. We minimize the binary cross-entropy over all labeled entities, given formally as follows, where $E_{D}$ is the set of entities that have labels, $C$ is the dimension of the output features, which is equal to the number of classes, $y_{ej}$ is the label indicator of entity $e$ for the $j$-th class, and $\sigma (x)$ is the sigmoid function $\sigma (x) = \frac{1}{1+e^{-x}}$. We optimize these two loss functions using mini-batch stochastic gradient descent (SGD) over the possible $\textbf {h}$, $\textbf {r}$, $\textbf {t}$, with the chain rule applied to update all parameters. At each step, we update the parameters as $\textbf {h}^{\tau +1}\leftarrow \textbf {h}^{\tau }-\lambda \nabla _{\textbf {h}}\mathcal {L}$, where $\tau $ labels the iteration step and $\lambda $ is the learning rate. ### Experiments ::: Datasets In this study, we evaluate our model on three real KGs, including two typical large-scale knowledge graphs, Freebase BIBREF0 and DBpedia BIBREF1, and a self-constructed game knowledge graph. First, we adopt a dataset extracted from Freebase, i.e., FB24K, which was used by BIBREF26. Then, we collect extra entities and relations from DBpedia that have at least 100 mentions BIBREF7 and can be linked to the entities in FB24K by sameAs triples, and build a dataset named DBP24K. In addition, we build a game dataset from our game knowledge graph, named Game30K. The statistics of the datasets are listed in Table TABREF24. ### Experiments ::: Experiment Settings In evaluation, we compare our method with three types of models: 1) Typical Methods. Three typical knowledge graph embedding methods, TransE, TransR and TransH, are selected as baselines. For TransE, the dissimilarity measure is implemented with the L1-norm, and both relations and entities are replaced during negative sampling. For TransR, we directly use the source code released in BIBREF9. For better performance, relation replacement in negative sampling is used, following the authors' suggestion. 2) Path-based Methods. We compare our method with two typical path-based models, PTransE and ALL-PATHS BIBREF18. PTransE is the first method to model relation paths in the KG embedding task, and ALL-PATHS improves on PTransE through a dynamic programming algorithm which can incorporate all relation paths of bounded length. 3) Attribute-incorporated Methods. Several state-of-the-art attribute-incorporated methods, including R-GCN BIBREF24 and KR-EAR BIBREF26, are compared with our method on the three real datasets. In addition, four variants of KANE, each of which defines its own specific way of computing the attribute value embedding and aggregating embeddings, are used as baselines in the evaluation. We name these four variants KANE (BOW+Concatenation), KANE (BOW+Average), KANE (LSTM+Concatenation) and KANE (LSTM+Average). Our method is learned with mini-batch SGD.
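A hedged sketch of the margin-based objective for the completion task is given below: corrupt each valid triple by replacing its head or tail with a random entity, then push the valid triple to score lower than the corrupted one by at least the margin $\gamma$. The names and the uniform sampling are our own simplification, not the paper's exact procedure.

```python
import random
import torch

def margin_loss(emb, triples, entities, gamma=1.0):
    """emb: dict mapping names to embedding tensors; triples: list of valid (h, r, t) name triples."""
    losses = []
    for h, r, t in triples:
        # Corrupt the triple by swapping a random entity into the head or the tail position.
        corrupt = random.choice(entities)
        h2, t2 = (corrupt, t) if random.random() < 0.5 else (h, corrupt)
        d_pos = torch.norm(emb[h] + emb[r] - emb[t], p=1)
        d_neg = torch.norm(emb[h2] + emb[r] - emb[t2], p=1)
        losses.append(torch.clamp(gamma + d_pos - d_neg, min=0))  # the hinge [x]_+
    return torch.stack(losses).mean()

emb = {name: torch.randn(8, requires_grad=True) for name in ["A", "B", "C", "r"]}
loss = margin_loss(emb, [("A", "r", "B")], entities=["A", "B", "C"])
loss.backward()  # gradients flow to the embeddings, ready for an SGD step
```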
As for hyper-parameters, we select the batch size among {16, 32, 64, 128} and the learning rate $\lambda $ for SGD among {0.1, 0.01, 0.001}. For a fair comparison, we also set the vector dimensions of all entities and relations to the same $k \in ${128, 258, 512, 1024}, use the same dissimilarity measure ($l_{1}$ or $l_{2}$ distance) in the loss function, and use the same number of negative examples $n$ among {1, 10, 20, 40}. Training on all datasets is limited to at most 400 epochs. The best models are selected by a grid search and early stopping on the validation sets. ### Experiments ::: Entity Classification ::: Evaluation Protocol. In entity classification, the aim is to predict the type of an entity. For all baseline models, we first obtain the entity embeddings on the different datasets using the default parameter settings from their original papers or implementations. Then, logistic regression is used as the classifier, with the entity embeddings as its features. In evaluation, we randomly select 10% of the training set as a validation set and use accuracy as the evaluation metric. ### Experiments ::: Entity Classification ::: Test Performance. Experimental results of entity classification on the test sets of all the datasets are shown in Table TABREF25. The results clearly demonstrate that our proposed method significantly outperforms state-of-the-art results in accuracy on the three datasets. For a more in-depth performance analysis, we note: (1) Among all baselines, path-based methods and attribute-incorporated methods outperform the three typical methods. This indicates that incorporating extra information can improve knowledge graph embedding performance; (2) The four variants of KANE always outperform the baseline methods. The main reasons why KANE works well are twofold: 1) KANE captures the high-order structural information of KGs in an efficient, explicit manner and passes this information to neighboring entities; 2) KANE leverages the rich information encoded in attribute triples, and this rich semantic information further improves the performance of knowledge graph embedding; (3) The variant of KANE that uses the LSTM encoder and the concatenation aggregator outperforms the other variants. The main reason is that the LSTM encoder can distinguish word order, and the concatenation aggregator combines all embeddings of the multi-head attention in a higher-level feature space, which provides sufficient expressive power. ### Experiments ::: Entity Classification ::: Efficiency Evaluation. Figure FIGREF30 shows the test accuracy with increasing epochs on DBP24K and Game30K. We can see that test accuracy first increases rapidly in the first ten iterations and then reaches a stable stage when the epoch count is larger than 40. Figure FIGREF31 shows test accuracy with different embedding sizes and training data proportions. We note that too small an embedding size or training data proportion cannot capture sufficient global information. To further analyze the embeddings learned by our method, we use the t-SNE tool BIBREF27 to visualize the learned embeddings. Figure FIGREF32 shows the visualization of 256-dimensional entity embeddings on Game30K learned by KANE, R-GCN, PTransE and TransE. We observe that our method learns more discriminative entity embeddings than the other methods. ### Experiments ::: Knowledge Graph Completion The purpose of knowledge graph completion is to complete a triple $(h, r, t)$ when one of $h, r, t$ is missing; this task is widely used in the literature BIBREF7.
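The entity classification protocol described above can be reproduced roughly as follows, assuming the learned entity embeddings and type labels are available as arrays (the scikit-learn calls are real; the random stand-in data is hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-ins for the learned entity embeddings and their type labels.
X = np.random.randn(1000, 128)            # (num_entities, k) embeddings from any embedding model
y = np.random.randint(0, 5, size=1000)    # entity type labels

# 10% of the training set held out as a validation set; accuracy as the metric.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, clf.predict(X_val)))
```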
Two measures are used as our evaluation metrics: (1) the mean rank of correct entities or relations (Mean Rank); (2) the proportion of correct entities or relations ranked in the top 1 (Hits@1, for relations) or the top 10 (Hits@10, for entities). Following the setting in BIBREF7, we also adopt the two evaluation settings named "raw" and "filter" in order to avoid misleading results. The results of entity and relation prediction on FB24K are shown in Table TABREF33. These results indicate that KANE still outperforms the other baselines significantly and consistently, which also verifies the necessity of modeling the high-order structural and attribute information of KGs in knowledge graph embedding models. ### Conclusion and Future Work Many recent works have demonstrated the benefits of knowledge graph embedding for knowledge graph completion and related tasks such as relation extraction. However, we argue that knowledge graph embedding methods still have room for improvement. First, TransE and most of its extensions only take direct relations between entities into consideration. Second, most existing knowledge graph embedding methods leverage only the relation triples of KGs while ignoring a large number of attribute triples. In order to overcome these limitations, inspired by the recent development of graph convolutional networks, we propose a new knowledge graph embedding method named KANE. The key idea of KANE is to aggregate all attribute triples with bias and to perform embedding propagation based on relation triples when calculating the representation of a given entity. Empirical results on three datasets show that KANE significantly outperforms seven state-of-the-art methods. Figure 1: Subgraph of a knowledge graph contains entities, relations and attributes. Figure 2: Illustration of the KANE architecture. Table 1: The statistics of datasets. Table 2: Entity classification results in accuracy. We run all models 10 times and report mean ± standard deviation. KANE significantly outperforms baselines on FB24K, DBP24K and Game30K. Figure 3: Test accuracy with increasing epoch. Table 3: Results of knowledge graph completion (FB24K) Figure 4: Test accuracy by varying parameter. Figure 5: The t-SNE visualization of entity embeddings in Game30K.
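For reference, the two ranking metrics described at the start of this evaluation (Mean Rank and Hits@K) can be computed as follows once every candidate entity has been scored for a test triple. This is a sketch under the "raw" setting; the "filter" setting would additionally drop other known-valid candidates before ranking.

```python
import numpy as np

def rank_of_target(scores, target_idx):
    """scores: 1-D array of distances for every candidate entity (lower = better)."""
    order = np.argsort(scores)                            # ascending: best candidates first
    return int(np.where(order == target_idx)[0][0]) + 1   # 1-based rank of the correct candidate

def mean_rank_and_hits(ranks, k=10):
    ranks = np.asarray(ranks)
    return ranks.mean(), float((ranks <= k).mean())       # (Mean Rank, Hits@k)

# Toy example: five test triples, 100 candidate entities each, random scores.
ranks = [rank_of_target(np.random.rand(100), target_idx=t) for t in range(5)]
print(mean_rank_and_hits(ranks, k=10))
```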
Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph
Which good thing didn't come because of Mr. Graham's strange luck? A. Nat got a lead on an exciting new story B. Mr. Graham found inspiration for his book C. his wife came home D. Mr. Graham's neighbor won his poker game
I am a Nucleus By STEPHEN BARR Illustrated by GAUGHAN [Transcriber's Note: This etext was produced from Galaxy Science Fiction February 1957. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] No doubt whatever about it, I had the Indian sign on me ... my comfortably untidy world had suddenly turned into a monstrosity of order! When I got home from the office, I was not so much tired as beaten down, but the effect is similar. I let myself into the apartment, which had an absentee-wife look, and took a cold shower. The present downtown temperature, according to the radio, was eighty-seven degrees, but according to my Greenwich Village thermometer, it was ninety-six. I got dressed and went into the living room, and wished ardently that my wife Molly were here to tell me why the whole place looked so woebegone. What do they do, I asked myself, that I have left undone? I've vacuumed the carpet, I've dusted and I've straightened the cushions.... Ah! The ashtrays. I emptied them, washed them and put them back, but still the place looked wife-deserted. It had been a bad day; I had forgotten to wind the alarm clock, so I'd had to hurry to make a story conference at one of the TV studios I write for. I didn't notice the impending rain storm and had no umbrella when I reached the sidewalk, to find myself confronted with an almost tropical downpour. I would have turned back, but a taxi came up and a woman got out, so I dashed through the rain and got in. "Madison and Fifty-fourth," I said. "Right," said the driver, and I heard the starter grind, and then go on grinding. After some futile efforts, he turned to me. "Sorry, Mac. You'll have to find another cab. Good hunting." If possible, it was raining still harder. I opened my newspaper over my hat and ran for the subway: three blocks. Whizzing traffic held me up at each crossing and I was soaked when I reached the platform, just in time to miss the local. After an abnormal delay, I got one which exactly missed the express at Fourteenth Street. The same thing happened at both ends of the crosstown shuttle, but I found the rain had stopped when I got out at Fifty-first and Lexington. As I walked across to Madison Avenue, I passed a big excavation where they were getting ready to put up a new office building. There was the usual crowd of buffs watching the digging machines and, in particular, a man with a pneumatic drill who was breaking up some hard-packed clay. While I looked, a big lump of it fell away, and for an instant I was able to see something that looked like a chunk of dirty glass, the size of an old-fashioned hatbox. It glittered brilliantly in the sunlight, and then his chattering drill hit it. There was a faint bang and the thing disintegrated. It knocked him on his back, but he got right up and I realized he was not hurt. At the moment of the explosion—if so feeble a thing can be called one—I felt something sting my face and, on touching it, found blood on my hand. I mopped at it with my handkerchief but, though slight, the bleeding would not stop, so I went into a drugstore and bought some pink adhesive which I put on the tiny cut. When I got to the studio, I found that I had missed the story conference. During the day, by actual count, I heard the phrase "I'm just spitballing" eight times, and another Madison Avenue favorite, "The whole ball of wax," twelve times. However, my story had been accepted without change because nobody had noticed my absence from the conference room. 
There you have what is known as the Advertising World, the Advertising game or the advertising racket, depending upon which rung of the ladder you have achieved. The subway gave a repeat performance going home, and as I got to the apartment house we live in, the cop on the afternoon beat was standing there talking to the doorman. He said, "Hello, Mr. Graham. I guess you must have just missed it at your office building." I looked blank and he explained, "We just heard it a little while ago: all six elevators in your building jammed at the same time. Sounds crazy. I guess you just missed it." Anything can happen in advertising, I thought. "That's right, Danny, I just missed it," I said, and went on in. Psychiatry tells us that some people are accident-prone; I, on the other hand, seemed recently to be coincidence-prone, fluke-happy, and except for the alarm clock, I'd had no control over what had been going on. I went into our little kitchen to make a drink and reread the directions Molly had left, telling me how to get along by myself until she got back from her mother's in Oyster Bay, a matter of ten days. How to make coffee, how to open a can, whom to call if I took sick and such. My wife used to be a trained nurse and she is quite convinced that I cannot take a breath without her. She is right, but not for the reasons she supposes. I opened the refrigerator to get some ice and saw another notice: "When you take out the Milk or Butter, Put it Right Back. And Close the Door, too." Intimidated, I took my drink into the living room and sat down in front of the typewriter. As I stared at the novel that was to liberate me from Madison Avenue, I noticed a mistake and picked up a pencil. When I put it down, it rolled off the desk, and with my eyes on the manuscript, I groped under the chair for it. Then I looked down. The pencil was standing on its end. There, I thought to myself, is that one chance in a million we hear about, and picked up the pencil. I turned back to my novel and drank some of the highball in hopes of inspiration and surcease from the muggy heat, but nothing came. I went back and read the whole chapter to try to get a forward momentum, but came to a dead stop at the last sentence. Damn the heat, damn the pencil, damn Madison Avenue and advertising. My drink was gone and I went back to the kitchen and read Molly's notes again to see if they would be like a letter from her. I noticed one that I had missed, pinned to the door of the dumbwaiter: "Garbage picked up at 6:30 AM so the idea is to Put it Here the Night Before. I love you." What can you do when the girl loves you? I made another drink and went and stared out of the living room window at the roof opposite. The Sun was out again and a man with a stick was exercising his flock of pigeons. They wheeled in a circle, hoping to be allowed to perch, but were not allowed to. Pigeons fly as a rule in formation and turn simultaneously, so that their wings all catch the sunlight at the same time. I was thinking about this decorative fact when I saw that as they were making a turn, they seemed to bunch up together. By some curious chance, they all wanted the same place in the sky to turn in, and several collided and fell. The man was as surprised as I and went to one of the dazed birds and picked it up. He stood there shaking his head from side to side, stroking its feathers. My speculations about this peculiar aerial traffic accident were interrupted by loud voices in the hallway.
Since our building is usually very well behaved, I was astonished to hear what sounded like an incipient free-for-all, and among the angry voices I recognized that of my neighbor, Nat, a very quiet guy who works on a newspaper and has never, to my knowledge, given wild parties, particularly in the late afternoon. "You can't say a thing like that to me!" I heard him shout. "I tell you I got that deck this afternoon and they weren't opened till we started to play!" Several other loud voices started at the same time. "Nobody gets five straight-flushes in a row!" "Yeah, and only when you were dealer!" The tone of the argument was beginning to get ugly, and I opened the door to offer Nat help if he needed it. There were four men confronting him, evidently torn between the desire to make an angry exit and the impulse to stay and beat him up. His face was furiously red and he looked stunned. "Here!" he said, holding out a deck of cards, "For Pete's sake, look at 'em yourselves if you think they're marked!" The nearest man struck them up from his hand. "Okay, Houdini! So they're not marked! All I know is five straight...." His voice trailed away. He and the others stared at the scattered cards on the floor. About half were face down, as might be expected, and the rest face up—all red. Someone must have rung, because at that moment the elevator arrived and the four men, with half frightened, incredulous looks, and in silence, got in and were taken down. My friend stood looking at the neatly arranged cards. "Judas!" he said, and started to pick them up. "Will you look at that! My God, what a session...." I helped him and said to come in for a drink and tell me all about it, but I had an idea what I would hear. After a while, he calmed down, but he still seemed dazed. "Never seen anything to equal it," he said. "Wouldn't have believed it. Those guys didn't believe it. Every round normal, nothing unusual about the hands—three of a kind, a low straight, that sort of thing and one guy got queens over tens, until it gets to be my deal. Brother! Straight flush to the king—every time! And each time, somebody else has four aces...." He started to sweat again, so I got up to fix him another drink. There was one quart of club soda left, but when I tried to open it, the top broke and glass chips got into the bottle. "I'll have to go down for more soda," I said. "I'll come, too. I need air." At the delicatessen on the corner, the man gave me three bottles in what must have been a wet bag, because as he handed them to me over the top of the cold-meat display, the bottom gave and they fell onto the tile floor. None of them broke, although the fall must have been from at least five feet. Nat was too wound up in his thoughts to notice and I was getting used to miracles. We left the proprietor with his mouth open and met Danny, the cop, looking in at the door, also with his mouth open. On the sidewalk, a man walking in front of Nat stooped suddenly to tie his shoe and Nat, to avoid bumping him, stepped off the curb and a taxi swerved to avoid Nat. The street was still wet and the taxi skidded, its rear end lightly flipping the front of one of those small foreign cars, which was going rather fast. It turned sideways and, without any side-slip, went right up the stoop of a brownstone opposite, coming to rest with its nose inside the front door, which a man opened at that moment. 
The sight of this threw another driver into a skid, and when he and the taxi had stopped sliding around, they were face to face, arranged crosswise to the street. This gave them exactly no room to move either forward or backward, for the car had its back to a hydrant and the taxi to a lamp. Although rather narrow, this is a two-way street, and in no time at all, traffic was stacked up from both directions as far as the avenues. Everyone was honking his horn. Danny was furious—more so when he tried to put through a call to his station house from the box opposite. It was out of order. Upstairs, the wind was blowing into the apartment and I closed the windows, mainly to shut out the tumult and the shouting. Nat had brightened up considerably. "I'll stay for one more drink and then I'm due at the office," he said. "You know, I think this would make an item for the paper." He grinned and nodded toward the pandemonium. When he was gone, I noticed it was getting dark and turned on the desk lamp. Then I saw the curtains. They were all tied in knots, except one. That was tied in three knots. All right , I told myself, it was the wind. But I felt the time had come for me to get expert advice, so I went to the phone to call McGill. McGill is an assistant professor of mathematics at a university uptown and lives near us. He is highly imaginative, but we believe he knows everything. When I picked up the receiver, the line sounded dead and I thought, more trouble. Then I heard a man cough and I said hello. McGill's voice said, "Alec? You must have picked up the receiver just as we were connected. That's a damn funny coincidence." "Not in the least," I said. "Come on over here. I've got something for you to work on." "Well, as a matter of fact, I was calling up to ask you and Molly—" "Molly's away for the week. Can you get over here quick? It's urgent." "At once," he said, and hung up. While I waited, I thought I might try getting down a few paragraphs of my novel—perhaps something would come now. It did, but as I came to a point where I was about to put down the word "agurgling," I decided it was too reminiscent of Gilbert and Sullivan, and stopped at the letter "R." Then I saw that I had unaccountably hit all four keys one step to the side of the correct ones, and tore out the page, with my face red. This was absolutely not my day. "Well," McGill said, "nothing you've told me is impossible or supernatural. Just very, very improbable. In fact, the odds against that poker game alone would lead me to suspect Nat, well as I know him. It's all those other things...." He got up and walked over to the window and looked at the hot twilight while I waited. Then he turned around; he had a look of concern. "Alec, you're a reasonable guy, so I don't think you'll take offense at what I'm going to say. What you have told me is so impossibly unlikely, and the odds against it so astronomical, that I must take the view that you're either stringing me or you're subject to a delusion." I started to get up and expostulate, but he motioned me back. "I know, but don't you see that that is far more likely than...." He stopped and shook his head. Then he brightened. "I have an idea. Maybe we can have a demonstration." He thought for a tense minute and snapped his fingers. "Have you any change on you?" "Why, yes," I said. "Quite a bit." I reached into my pocket. There must have been nearly two dollars in silver and pennies. "Do you think they'll each have the same date, perhaps?" "Did you accumulate all that change today?" 
"No. During the week." He shook his head. "In that case, no. Discounting the fact that you could have prearranged it, if my dim provisional theory is right, that would be actually impossible. It would involve time-reversal. I'll tell you about it later. No, just throw down the change. Let's see if they all come up heads." I moved away from the carpet and tossed the handful of coins onto the floor. They clattered and bounced—and bounced together—and stacked themselves into a neat pile. I looked at McGill. His eyes were narrowed. Without a word, he took a handful of coins from his own pocket and threw them. These coins didn't stack. They just fell into an exactly straight line, the adjacent ones touching. "Well," I said, "what more do you want?" "Great Scott," he said, and sat down. "I suppose you know that there are two great apparently opposite principles governing the Universe—random and design. The sands on the beach are an example of random distribution and life is an example of design. The motions of the particles of a gas are what we call random, but there are so many of them, we treat them statistically and derive the Second Law of Thermodynamics—quite reliable. It isn't theoretically hard-and-fast; it's just a matter of extreme probability. Now life, on the other hand, seems not to depend on probability at all; actually, it goes against it. Or you might say it is certainly not an accidental manifestation." "Do you mean," I asked in some confusion, "that some form of life is controlling the coins and—the other things?" He shook his head. "No. All I mean is that improbable things usually have improbable explanations. When I see a natural law being broken, I don't say to myself, 'Here's a miracle.' I revise my version of the book of rules. Something—I don't know what—is going on, and it seems to involve probability, and it seems to center around you. Were you still in that building when the elevators stuck? Or near it?" "I guess I must have been. It happened just after I left." "Hm. You're the center, all right. But why?" "Center of what?" I asked. "I feel as though I were the center of an electrical storm. Something has it in for me!" McGill grinned. "Don't be superstitious. And especially don't be anthropomorphic." "Well, if it's the opposite of random, it's got to be a form of life." "On what basis? All we know for certain is that random motions are being rearranged. A crystal, for example, is not life, but it's a non-random arrangement of particles.... I wonder." He had a faraway, frowning look. I was beginning to feel hungry and the drinks had worn off. "Let's go out and eat," I said, "There's not a damn thing in the kitchen and I'm not allowed to cook. Only eggs and coffee." We put on our hats and went down to the street. From either end, we could hear wrecking trucks towing away the stalled cars. There were, by this time, a number of harassed cops directing the maneuver and we heard one of them say to Danny, "I don't know what the hell's going on around here. Every goddam car's got something the matter with it. They can't none of them back out for one reason or another. Never seen anything like it." Near us, two pedestrians were doing a curious little two-step as they tried to pass one another; as soon as one of them moved aside to let the other pass, the other would move to the same side. They both had embarrassed grins on their faces, but before long their grins were replaced by looks of suspicion and then determination. "All right, smart guy!" 
they shouted in unison, and barged ahead, only to collide. They backed off and threw simultaneous punches which met in mid-air. Then began one of the most remarkable bouts ever witnessed—a fight in which fist hit fist but never anything else, until both champions backed away undefeated, muttering identical excuses and threats. Danny appeared at that moment. His face was dripping. "You all right, Mr. Graham?" he asked. "I don't know what's going on around here, but ever since I came on this afternoon, things are going crazy. Bartley!" he shouted—he could succeed as a hog-caller. "Bring those dames over here!" Three women in a confused wrangle, with their half-open umbrellas intertwined, were brought across the street, which meant climbing over fenders. Bartley, a fine young patrolman, seemed self-conscious; the ladies seemed not to be. "All right, now, Mrs. Mac-Philip!" one of them said. "Leave go of my umbrella and we'll say no more about it!" "And so now it's Missus Mac-Philip, is it?" said her adversary. The third, a younger one with her back turned to us, her umbrella also caught in the tangle, pulled at it in a tentative way, at which the other two glared at her. She turned her head away and tried to let go, but the handle was caught in her glove. She looked up and I saw it was Molly. My nurse-wife. "Oh, Alec!" she said, and managed to detach herself. "Are you all right?" Was I all right! "Molly! What are you doing here?" "I was so worried, and when I saw all this, I didn't know what to think." She pointed to the stalled cars. "Are you really all right?" "Of course I'm all right. But why...." "The Oyster Bay operator said someone kept dialing and dialing Mother's number and there wasn't anyone on the line, so then she had it traced and it came from our phone here. I kept calling up, but I only got a busy signal. Oh, dear, are you sure you're all right?" I put my arm around her and glanced at McGill. He had an inward look. Then I caught Danny's eye. It had a thoughtful, almost suspicious cast to it. "Trouble does seem to follow you, Mr. Graham," was all he said. When we got upstairs, I turned to McGill. "Explain to Molly," I said. "And incidentally to me. I'm not properly briefed yet." He did so, and when he got to the summing up, I had the feeling she was a jump ahead of him. "In other words, you think it's something organic?" "Well," McGill said, "I'm trying to think of anything else it might be. I'm not doing so well," he confessed. "But so far as I can see," Molly answered, "it's mere probability, and without any over-all pattern." "Not quite. It has a center. Alec is the center." Molly looked at me with a curious expression for a moment. "Do you feel all right, darling?" she asked me. I nodded brightly. "You'll think this silly of me," she went on to McGill, "but why isn't it something like an overactive poltergeist?" "Pure concept," he said. "No genuine evidence." "Magnetism?" "Absolutely not. For one thing, most of the objects affected weren't magnetic—and don't forget magnetism is a force, not a form of energy, and a great deal of energy has been involved. I admit the energy has mainly been supplied by the things themselves, but in a magnetic field, all you'd get would be stored kinetic energy, such as when a piece of iron moves to a magnet or a line of force. Then it would just stay there, like a rundown clock weight. These things do a lot more than that—they go on moving." "Why did you mention a crystal before? Why not a life-form?" "Only an analogy," said McGill. 
"A crystal resembles life in that it has a definite shape and exhibits growth, but that's all. I'll agree this—thing—has no discernible shape and motion is involved, but plants don't move and amebas have no shape. Then a crystal feeds, but it does not convert what it feeds on; it merely rearranges it into a non-random pattern. In this case, it's rearranging random motions and it has a nucleus and it seems to be growing—at least in what you might call improbability." Molly frowned. "Then what is it? What's it made of?" "I should say it was made of the motions. There's a similar idea about the atom. Another thing that's like a crystal is that it appears to be forming around a nucleus not of its own material—the way a speck of sand thrown into a supersaturated solution becomes the nucleus of crystallization." "Sounds like the pearl in an oyster," Molly said, and gave me an impertinent look. "Why," I asked McGill, "did you say the coins couldn't have the same date? I mean apart from the off chance I got them that way." "Because I don't think this thing got going before today and everything that's happened can all be described as improbable motions here and now. The dates were already there, and to change them would require retroactive action, reversing time. That's out, in my book. That telephone now—" The doorbell rang. We were not surprised to find it was the telephone repairman. He took the set apart and clucked like a hen. "I guess you dropped it on the floor, mister," he said with strong disapproval. "Certainly not," I said. "Is it broken?" "Not exactly broken , but—" He shook his head and took it apart some more. McGill went over and they discussed the problem in undertones. Finally the man left and Molly called her mother to reassure her. McGill tried to explain to me what had happened with the phone. "You must have joggled something loose. And then you replaced the receiver in such a way that the contact wasn't quite open." "But for Pete's sake, Molly says the calls were going on for a long time! I phoned you only a short time ago and it must have taken her nearly two hours to get here from Oyster Bay." "Then you must have done it twice and the vibrations in the floor—something like that—just happened to cause the right induction impulses. Yes, I know how you feel," he said, seeing my expression. "It's beginning to bear down." Molly was through telephoning and suggested going out for dinner. I was so pleased to see her that I'd forgotten all about being hungry. "I'm in no mood to cook," she said. "Let's get away from all this." McGill raised an eyebrow. "If all this, as you call it, will let us." In the lobby, we ran into Nat, looking smug in a journalistic way. "I've been put on the story—who could be better?—I live here. So far, I don't quite get what's been happening. I've been talking to Danny, but he didn't say much. I got the feeling he thinks you're involved in some mystical, Hibernian way. Hello, McGill, what's with you?" "He's got a theory," said Molly. "Come and eat with us and he'll tell you all about it." Since we decided on an air-conditioned restaurant nearby on Sixth Avenue, we walked. The jam of cars didn't seem to be any less than before and we saw Danny again. He was talking to a police lieutenant, and when he caught sight of us, he said something that made the lieutenant look at us with interest. Particularly at me. "If you want your umbrella, Mrs. Graham," Danny said, "it's at the station house. What there's left of it, that is." 
Molly thanked him and there was a short pause, during which I felt the speculative regard of the lieutenant. I pulled out a packet of cigarettes, which I had opened, as always, by tearing off the top. I happened to have it upside down and all the cigarettes fell out. Before I could move my foot to obliterate what they had spelled out on the sidewalk, the two cops saw it. The lieutenant gave me a hard look, but said nothing. I quickly kicked the insulting cigarettes into the gutter. When we got to the restaurant, it was crowded but cool—although it didn't stay cool for long. We sat down at a side table near the door and ordered Tom Collinses as we looked at the menu. Sitting at the next table were a fat lady, wearing a very long, brilliant green evening gown, and a dried-up sour-looking man in a tux. When the waiter returned, they preempted him and began ordering dinner fussily: cold cuts for the man, and vichyssoise, lobster salad and strawberry parfait for the fat lady. I tasted my drink. It was most peculiar; salt seemed to have been used instead of sugar. I mentioned this and my companions tried theirs, and made faces. The waiter was concerned and apologetic, and took the drinks back to the bar across the room. The bartender looked over at us and tasted one of the drinks. Then he dumped them in his sink with a puzzled expression and made a new batch. After shaking this up, he set out a row of glasses, put ice in them and began to pour. That is to say he tilted the shaker over the first one, but nothing came out. He bumped it against the side of the bar and tried again. Still nothing. Then he took off the top and pried into it with his pick, his face pink with exasperation. I had the impression that the shaker had frozen solid. Well, ice is a crystal, I thought to myself. The other bartender gave him a fresh shaker, but the same thing happened, and I saw no more because the customers sitting at the bar crowded around in front of him, offering advice. Our waiter came back, baffled, saying he'd have the drinks in a moment, and went to the kitchen. When he returned, he had madame's vichyssoise and some rolls, which he put down, and then went to the bar, where the audience had grown larger. Molly lit a cigarette and said, "I suppose this is all part of it, Alec. Incidentally, it seems to be getting warmer in here." It was, and I had the feeling the place was quieter—a background noise had stopped. It dawned on me that I no longer heard the faint hum of the air-conditioner over the door, and as I started to say so, I made a gesture toward it. My hand collided with Molly's when she tapped her cigarette over the ashtray, and the cigarette landed in the neighboring vichyssoise. "Hey! What's the idea?" snarled the sour-looking man. "I'm terribly sorry," I said. "It was an accident. I—" "Throwing cigarettes at people!" the fat lady said. "I really didn't mean to," I began again, getting up. There must have been a hole in the edge of their tablecloth which one of my cuff buttons caught in, because as I stepped out from between the closely set tables, I pulled everything—tablecloth, silver, water glasses, ashtrays and the vichyssoise-à-la-nicotine—onto the floor. The fat lady surged from the banquette and slapped me meatily. The man licked his thumb and danced as boxers are popularly supposed to do. The owner of the place, a man with thick black eyebrows, hustled toward us with a determined manner. I tried to explain what had happened, but I was outshouted, and the owner frowned darkly.
B. Mr. Graham found inspiration for his book
What does the reader learn from Lane's inability to identify a flag he sees flying outside a tower? A. That he is colorblind B. That he wants to abstain from political conversations C. That he knows his city's flag but not those of other American cities D. That he is not well-informed on general politics
MUTINEER By ROBERT J. SHEA For every weapon there was a defense, but not against the deadliest weapon—man himself! Raging , Trooper Lane hovered three thousand feet above Tammany Square. The cool cybrain surgically implanted in him was working on the problem. But Lane had no more patience. They'd sweat, he thought, hating the chill air-currents that threw his hovering body this way and that. He glared down at the three towers bordering on the Square. He spat, and watched the little white speck fall, fall. Lock me up in barracks. All I wanted was a little time off. Did I fight in Chi for them? Damn right I did. Just a little time off, so I shouldn't blow my top. Now the lid's gone. He was going over all their heads. He'd bowled those city cops over like paper dolls, back at the Armory. The black dog was on Lane's back. Old Mayor himself was going to hear about it. Why not? Ain't old Mayor the CinC of the Newyork Troopers? The humming paragrav-paks embedded beneath his shoulder blades held him motionless above Newyork's three administrative towers. Tammany Hall. Mayor's Palace. Court House. Lane cursed his stupidity. He hadn't found out which one was which ahead of time. They keep Troopers in the Armory and teach them how to fight. They don't teach them about their own city, that they'll be fighting for. There's no time. From seven years old up, Troopers have too much to learn about fighting. The Mayor was behind one of those thousands of windows. Old cybrain, a gift from the Trooper surgeons, compliments of the city, would have to figure out which one. Blood churned in his veins, nerves shrieked with impatience. Lane waited for the electronic brain to come up with the answer. Then his head jerked up, to a distant buzz. There were cops coming. Two black paragrav-boats whirred along the translucent underside of Newyork's anti-missile force-shield, the Shell. Old cybrain better be fast. Damn fast! The cybrain jolted an impulse through his spine. Lane somersaulted. Cybrain had taken charge of his motor nerves. Lane's own mind was just along for the ride. His body snapped into a stiff dive position. He began to plummet down, picking up speed. His mailed hands glittered like arrowheads out in front. They pointed to a particular window in one of the towers. A predatory excitement rippled through him as he sailed down through the air. It was like going into battle again. A little red-white-and-green flag fluttered on a staff below the window. Whose flag? The city flag was orange and blue. He shrugged away the problem. Cybrain knew what it was doing. The little finger of his right hand vibrated in its metal sheath. A pale vibray leaped from the lensed fingertip. Breakthrough! The glasstic pane dissolved. Lane streamed through the window. The paragrav-paks cut off. Lane dropped lightly to the floor, inside the room, in battle-crouch. A 3V set was yammering. A girl screamed. Lane's hand shot out automatically. A finger vibrated. Out of the corner of his eye, Lane saw the girl fold to the floor. There was no one else in the room. Lane, still in a crouch, chewed his lip. The Mayor? His head swung around and he peered at the 3V set. He saw his own face. "Lashing police with his vibray," said the announcer, "Lane broke through the cordon surrounding Manhattan Armory. Two policemen were killed, four others seriously injured. Tammany Hall has warned that this man is extremely dangerous. Citizens are cautioned to keep clear of him. Lane is an insane killer. He is armed with the latest military weapons. 
A built-in electronic brain controls his reflexes—" "At ease with that jazz," said Lane, and a sheathed finger snapped out. There was a loud bang. The 3V screen dissolved into a puddle of glasstic. The Mayor. Lane strode to the window. The two police boats were hovering above the towers. Lane's mailed hand snapped open a pouch at his belt. He flipped a fist-sized cube to the floor. The force-bomb "exploded"—swelled or inflated, really, but with the speed of a blast. Lane glanced out the window. A section of the energy globe bellied out from above. It shaded the view from his window and re-entered the tower wall just below. Now the girl. He turned back to the room. "Wake up, outa-towner." He gave the blonde girl a light dose of the vibray to slap her awake. "Who are you?" she said, shakily. Lane grinned. "Trooper Lane, of the Newyork Special Troops, is all." He threw her a mock salute. "You from outa-town, girlie. I ain't seen a Newyork girl with yellow hair in years. Orange or green is the action. Whatcha doing in the Mayor's room?" The girl pushed herself to her feet. Built, Lane saw. She was pretty and clean-looking, very out-of-town. She held herself straight and her blue-violet eyes snapped at him. "What the devil do you think you're doing, soldier? I am a diplomat of the Grassroots Republic of Mars. This is an embassy, if you know what that means." "I don't," said Lane, unconcerned. "Well, you should have had brains enough to honor the flag outside this window. That's the Martian flag, soldier. If you've never heard of diplomatic immunity, you'll suffer for your ignorance." Her large, dark eyes narrowed. "Who sent you?" "My cybrain sent me." She went openmouthed. "You're Lane ." "I'm the guy they told you about on the 3V. Where's the Mayor? Ain't this his place?" "No. No, you're in the wrong room. The wrong building. That's the Mayor's suite over there." She pointed. "See where the balcony is? This is the Embassy suite. If you want the Mayor you'll have to go over there." "Whaddaya know," said Lane. "Cybrain didn't know, no more than me." The girl noticed the dark swell of the force-globe. "What's that out there?" "Force-screen. Nothing gets past, except maybe a full-size blaster-beam. Keeps cops out. Keeps you in. You anybody important?" "I told you, I'm an ambassador. From Mars. I'm on a diplomatic mission." "Yeah? Mars a big city?" She stared at him, violet eyes wide. "The planet Mars." "Planet? Oh, that Mars. Sure, I've heard of it—you gotta go by spaceship. What's your name?" "Gerri Kin. Look, Lane, holding me is no good. It'll just get you in worse trouble. What are you trying to do?" "I wanna see the Mayor. Me and my buddies, we just come back from fighting in Chi, Gerri. We won. They got a new Mayor out there in Chi. He takes orders from Newyork." Gerri Kin said, "That's what the force-domes did. The perfect defense. But also the road to the return to city-states. Anarchy." Lane said, "Yeah? Well, we done what they wanted us to do. We did the fighting for them. So we come back home to Newyork and they lock us up in the Armory. Won't pay us. Won't let us go nowhere. They had cops guarding us. City cops." Lane sneered. "I busted out. I wanna see the Mayor and find out why we can't have time off. I don't play games, Gerri. I go right to the top." Lane broke off. There was a hum outside the window. He whirled and stared out. The rounded black hulls of the two police paragrav-boats were nosing toward the force-screen. Lane could read the white numbers painted on their bows. 
A loudspeaker shouted into the room: "Come out of there, Lane, or we'll blast you out." "You can't," Lane called. "This girl from Mars is here." "I repeat, Lane—come out or we'll blast you out." Lane turned to the girl. "I thought you were important." She stood there with her hands together, calmly looking at him. "I am. But you are too, to them. Mars is millions of miles away, and you're right across the Square from the Mayor's suite." "Yeah, but—" Lane shook his head and turned back to the window. "All right, look! Move them boats away and I'll let this girl out!" "No deal, Lane. We're coming in." The police boats backed away slowly, then shot straight up, out of the line of vision. Lane looked down at the Square. Far below, the long, gleaming barrel of a blaster cannon caught the dim light filtering down through Newyork's Shell. The cannon trundled into the Square on its olive-drab, box-shaped caterpillar mounting and took up a position equidistant from the bases of the three towers. Now a rumble of many voices rose from below. Lane stared down to see a large crowd gathering in Tammany Square. Sound trucks were rolling to a stop around the edges of the crowd. The people were all looking up. Lane looked across the Square. The windows of the tower opposite, the ones he could see clearly, were crowded with faces. There were white dot faces on the balcony that Gerri Kin had pointed out as the Mayor's suite. The voice of a 3V newscaster rolled up from the Square, reechoing against the tower walls. "Lane is holding the Martian Ambassador, Gerri Kin, hostage. You can see the Martian tricolor behind his force-globe. Police are bringing up blaster cannon. Lane's defense is a globe of energy similar to the one which protects Newyork from aerial attack." Lane grinned back at Gerri Kin. "Whole town's down there." Then his grin faded. Nice-looking, nice-talking girl like this probably cared a lot more about dying than he did. Why the hell didn't they give him a chance to let her out? Maybe he could do it now. Cybrain said no. It said the second he dropped his force-screen, they'd blast this room to hell. Poor girl from Mars, she didn't have a chance. Gerri Kin put her hand to her forehead. "Why did you have to pick my room? Why did they send me to this crazy city? Private soldiers. Twenty million people living under a Shell like worms in a corpse. Earth is sick and it's going to kill me. What's going to happen?" Lane looked sadly at her. Only two kinds of girls ever went near a Trooper—the crazy ones and the ones the city paid. Why did he have to be so near getting killed when he met one he liked? Now that she was showing a little less fear and anger, she was talking straight to him. She was good, but she wasn't acting as if she was too good for him. "They'll start shooting pretty quick," said Lane. "I'm sorry about you." "I wish I could write a letter to my parents," she said. "What?" "Didn't you understand what I said?" "What's a letter?" "You don't know where Mars is. You don't know what a letter is. You probably can't even read and write!" Lane shrugged. He carried on the conversation disinterestedly, professionally relaxed before battle. "What's these things I can't do? They important?" "Yes. The more I see of this city and its people, the more important I realize they are. You know how to fight, don't you? I'll bet you're perfect with those weapons." "Listen. They been training me to fight since I was a little kid. Why shouldn't I be a great little fighter?" 
"Specialization," said the girl from Mars. "What?" "Specialization. Everyone I've met in this city is a specialist. SocioSpecs run the government. TechnoSpecs run the machinery. Troopers fight the wars. And ninety per cent of the people don't work at all because they're not trained to do anything." "The Fans," said Lane. "They got it soft. That's them down there, come to watch the fight." "You know why you were kept in the Armory, Lane? I heard them talking about it, at the dinner I went to last night." "Why?" "Because they're afraid of the Troopers. You men did too good a job out in Chi. You are the deadliest weapon that has ever been made. You. Single airborne infantrymen!" Lane said, "They told us in Trooper Academy that it's the men that win the wars." "Yes, but people had forgotten it until the SocioSpecs of Newyork came up with the Troopers. Before the Troopers, governments concentrated on the big weapons, the missiles, the bombs. And the cities, with the Shells, were safe from bombs. They learned to be self-sufficient under the Shells. They were so safe, so isolated, that national governments collapsed. But you Troopers wiped out that feeling of security, when you infiltrated Chi and conquered it." "We scared them, huh?" Gerri said, "You scared them so much that they were afraid to let you have a furlough in the city when you came back. Afraid you Troopers would realize that you could easily take over the city if you wanted to. You scared them so much that they'll let me be killed. They'll actually risk trouble with Mars just to kill you." "I'm sorry about you. I mean it, I like—" At that moment a titanic, ear-splitting explosion hurled him to the carpet, deafened and blinded him. He recovered and saw Gerri a few feet away, dazed, groping on hands and knees. Lane jumped to the window, looked quickly, sprang back. Cybrain pumped orders to his nervous system. "Blaster cannon," he said. "But just one. Gotcha, cybrain. I can beat that." He picked up the black box that generated his protective screen. Snapping it open with thumb-pressure, he turned a small dial. Then he waited. Again an enormous, brain-shattering concussion. Again Lane and Gerri were thrown to the floor. But this time there was a second explosion and a blinding flash from below. Lane laughed boyishly and ran to the window. "Look!" he called to Gerri. There was a huge gap in the crowd below. The pavement was blackened and shattered to rubble. In and around the open space sprawled dozens of tiny black figures, not moving. "Backfire," said Lane. "I set the screen to throw their blaster beam right back at them." "And they knew you might—and yet they let a crowd congregate!" Gerri reeled away from the window, sick. Lane said, "I can do that a couple times more, but it burns out the force-globe. Then I'm dead." He heard the 3V newscaster's amplified voice: "—approximately fifty killed. But Lane is through now. He has been able to outthink police with the help of his cybrain. Now police are feeding the problem to their giant analogue computer in the sub-basement of the Court House. The police analogue computer will be able to outthink Lane's cybrain, will predict Lane's moves in advance. Four more blaster cannon are coming down Broadway—" "Why don't they clear those people out of the Square?" Gerri cried. "What? Oh, the Fans—nobody clears them out." He paused. "I got one more chance to try." He raised a mailed glove to his mouth and pressed a small stud in the wrist. He said, "Trooper HQ, this is Lane." 
A voice spoke in his helmet. "Lane, this is Trooper HQ. We figured you'd call." "Get me Colonel Klett." Thirty seconds passed. Lane could hear the clank of caterpillar treads as the mobile blaster cannon rolled into Tammany Square. The voice of the commanding officer of the Troopers rasped into Lane's ear: "Meat-head! You broke out against my orders! Now look at you!" "I knew you didn't mean them orders, sir." "If you get out of there alive, I'll hang you for disobeying them!" "Yes, sir. Sir, there's a girl here—somebody important—from Mars. You know, the planet. Sir, she told me we could take over the city if we got loose. That right, sir?" There was a pause. "Your girl from Mars is right, Lane. But it's too late now. If we had moved first, captured the city government, we might have done it. But they're ready for us. They'd chop us down with blaster cannon." "Sir, I'm asking for help. I know you're on my side." "I am, Lane." The voice of Colonel Klett was lower. "I'd never admit it if you had a chance of getting out of there alive. You've had it, son. I'd only lose more men trying to rescue you. When they feed the data into that analogue computer, you're finished." "Yes, sir." "I'm sorry, Lane." "Yes, sir. Over and out." Lane pressed the stud on his gauntlet again. He turned to Gerri. "You're okay. I wish I could let you out. Old cybrain says I can't. Says if I drop the force-globe for a second, they'll fire into the room, and then we'll both be dead." Gerri stood with folded arms and looked at him. "Do what you have to do. As far as I can see, you're the only person in this city that has even a little bit of right on his side." Lane laughed. "Any of them purple-haired broads I know would be crazy scared. You're different." "When my grandparents landed on Mars, they found out that selfishness was a luxury. Martians can't afford it." Lane frowned with the effort of thinking. "You said I had a little right on my side. That's a good feeling. Nobody ever told me to feel that way about myself before. It'll be better to die knowing that." "I know," she said. The amplified voice from below said, "The police analogue computer is now hooked directly to the controls of the blaster cannon battery. It will outguess Lane's cybrain and check his moves ahead of time." Lane looked at Gerri. "How about giving me a kiss before they get us? Be nice if I kissed a girl like you just once in my life." She smiled and walked forward. "You deserve it, Lane." He kissed her and it filled him with longings for things he couldn't name. Then he stepped back and shook his head. "It ain't right you should get killed. If I take a dive out that window, they shoot at me, not in here." "And kill you all the sooner." "Better than getting burned up in this lousy little room. You also got right on your side. There's too many damn Troopers and not enough good persons like you. Old cybrain says stay here, but I don't guess I will. I'm gonna pay you back for that kiss." "But you're safe in here!" "Worry about yourself, not about me." Lane picked up the force-bomb and handed it to her. "When I say now, press this. Then take your hand off, real fast. It'll shut off the screen for a second." He stepped up on to the window ledge. Automatically, the cybrain cut in his paragrav-paks. "So long, outa-towner. Now! " He jumped. He was hurtling across the Square when the blaster cannons opened up. They weren't aimed at the window where the little red-white-and-green tricolor was flying. But they weren't aimed at Lane, either. 
They were shooting wild. Which way now? Looks like I got a chance. Old cybrain says fly right for the cannons. He saw the Mayor's balcony ahead. Go to hell, old cybrain. I'm doing all right by myself. I come to see the Mayor, and I'm gonna see him. Lane plunged forward. He heard the shouts of frightened men. He swooped over the balcony railing. A man was pointing a blaster pistol at him. There were five men on the balcony—emergency! Years of training and cybrain took over. Lane's hand shot out, fingers vibrating. As he dropped to the balcony floor in battle-crouch, the men slumped around him. He had seen the man with the blaster pistol before. It was the Mayor of Newyork. Lane stood for a moment in the midst of the sprawled men, the shrieks of the crowd floating up to him. Then he raised his glove to his lips. He made contact with Manhattan Armory. "Colonel Klett, sir. You said if we captured the city government we might have a chance. Well, I captured the city government. What do we do with it now?" Lane was uncomfortable in his dress uniform. First there had been a ceremony in Tammany Square inaugurating Newyork's new Military Protectorate, and honoring Trooper Lane. Now there was a formal dinner. Colonel Klett and Gerri Kin sat on either side of Lane. Klett said, "Call me an opportunist if you like, Miss Kin, my government will be stable, and Mars can negotiate with it." He was a lean, sharp-featured man with deep grooves in his face, and gray hair. Gerri shook her head. "Recognition for a new government takes time. I'm going back to Mars, and I think they'll send another ambassador next time. Nothing personal—I just don't like it here." Lane said, "I'm going to Mars, too." "Did she ask you to?" demanded Klett. Lane shook his head. "She's got too much class for me. But I like what she told me about Mars. It's healthy, like." Klett frowned. "If I thought there was a gram of talent involved in your capture of the Mayor, Lane, I'd never release you from duty. But I know better. You beat that analogue computer by sheer stupidity—by disregarding your cybrain." Lane said, "It wasn't so stupid if it worked." "That's what bothers me. It calls for a revision in our tactics. We've got a way of beating those big computers now, should anyone use them against us." "I just didn't want her to be hurt." "Exactly. The computer could outguess a machine, like your cybrain. But you introduced a totally unpredictable factor—human emotion. Which proves what I, as a military man, have always maintained—that the deadliest weapon in man's arsenal is still, and will always be, the individual soldier." "What you just said there, sir," said Lane. "That's why I'm leaving Newyork." "What do you mean?" asked Colonel Klett. "I'm tired of being a weapon, sir. I want to be a human being." END Work is the elimination of the traces of work. —Michelangelo Transcriber's Note: This etext was produced from If July 1959. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
D. That he is not well-informed on general politics
Which statement best represents the central theme of the text? A. The media is ultimately responsible for the breakdown of the American family. B. People will be happy as long as the status quo is maintained. C. Humans have much more in common than they have in difference. D. While social media purports to bring us together, it more often drives us apart.
Divided we stand Sara lets the Lyft park itself in the drive, lets out a sigh, and tweets wish me luck plus some emojis before slipping her phone into a hoody pocket. Curtains twitch, and before she can get her bag out of the back Mom is there, right there next to her, their hands touching on the handle as they compete for control. "It's OK Mom, I got it." "You should have let us come pick you up." "It's fine, there was no need. I didn't want to put any-" "But you shouldn't be wasting money, not with how much rent you pay and-" Jesus. Not this already. "Mom. I can afford a cab ride. I'm not that much of a failure." Mom sighs, shoulders falling, looks at Sara directly. "I'm sorry honey." She looks old, Sara thinks, watching a resigned tiredness flicker across her face in a way she'd not noticed before. Like she's exhausted by conflict, surrendered to it. "Now, don't I get a hug?" Sara smiles. They hold each other for a few long seconds, rubbing and squeezing each other as the Lyft silently backs itself out of the driveway. When they part it's Mom's hand that's on the bag's handle. Inside she unwraps herself from scarves and layers, the heat in the house almost a shock after the cold air. Michigan in February. Mom is already halfway up the stairs, bag in tow, headed for her room. "Mom, just leave that and I'll…" "Your father's in the front room," she says, just before she disappears from view. "Go say hi." For a few seconds Sara is alone in the hallway, the smell of cooking meat coming from one doorway, the sound of rolling news from another. She shakes her head, kicks off shoes, tucks hair behind her ears. Braces herself. He's sat in the living room, reclining in the Lazy Boy. He doesn't hear her enter - her socked feet silent on the pile carpet floor, his attention lost in the screen that fills most of the wall. Fox News. She braces herself again. "Hey Dad." His head jerks to look at her. "Hey! When did you get here?" He starts to push himself up. "Don't get up Dad, it's fine. Really." She takes a seat on the couch. "I just got here, like two minutes ago." "Good flight?" "Yeah. Fine. Y'know. Same as always." He smiles back at her, nods knowingly. Their first words in nearly a year. Fine. So far. She relaxes. Of course it is. How bad could it be? "I thought I was gonna come pick you up from the airport?" "Ah, no. I got a cab. I didn't want to bother you." "Bother me? You think I'm too old and infirm to pick my own daughter up from the airport?" "No Dad, of course not." The war spills out of Fox News, casualty figures scrolling across monochrome drone footage, attack helicopters circling over Caracas apartment blocks, pundits with bronzed skin and immaculate blond hair smiling from four-way split screens. "So you just got a cab?" "Yeah." "How much did that cost?" "Not much. Really. I can afford-" "Cabs are expensive. You shouldn't be wasting your money." "It wasn't expensive. It wasn't a cab, it was a Lyft." "One of those driverless things?" "Yeah." Ad break. An elderly couple ride a tandem bicycle through a park, laughing and smiling in Instagram-perfect sunshine, as a calm, relaxing voice lists the potentially lethal side effects of a diabetes drug. Dad shakes his head. "I don't know how you can use those things. I don't trust them." "Dad, they're perfectly safe." "That's not what I mean. They're stealing people's jobs." There's a brief second, a fleeting moment, where Sara can bite her lip, let it go. She misses it. "But I thought it was immigrants that are stealing people's jobs?" 
"You might think it's funny little lady, but let me tell you - you remember Kyle and Max, Bill Cooper's boys? Live up off Lafayette, past the Checkers?" "Nope." "Well let me tell you," He shifts in the recliner, with some obvious pain and effort, to face her. "Both of 'em lost their jobs just this last year. Both of 'em were truckers. Both of 'em been driving trucks since high school. Now the damn trucks are driving themselves and they're both out of work. And they got families to support. Kids." "Well I'm sure they'll be fine." She regrets the sarcasm as soon as she hears it in her own voice, but she still can't stop herself, like it's expected, like it's part of the routine. Part of their schtick. "They just got to get themselves out there, huh Dad? Pull themselves up by their bootstraps. That's the American way, right?" "I'm glad you think this is funny, I really do. But what you New York types need to realise is-" "Ed!" Mom had appeared in the doorway. "Please! Both of you. No fighting today, please." "Sheryl-" "No. I don't want to hear you two as much as disagreeing about anything today, unless it's about the game. And even then you'd better keep it civil. Otherwise you can both go hungry. Understand?" Awkward pause. "Fine." "Sorry Mom." Sara turns back to the TV, to watching the war, to trying to work out which one it is. It had always been this way, ever since she was about thirteen. Up until then it just seemed like constant warmth, as though she didn't have any childhood concept of Dad apart from him getting home from work, then her sitting on his knee, eating cookies and watching football highlights until Mom came in and scolded them both for ruining their appetites before dinner. And then everything changed. Suddenly there was rap music and nose rings, sneaking out of the house to see her friends and not wanting to go to church. Suddenly he was no longer this lovable bear-man that ruffled her hair and gave her candy and explained defensive plays to her, but this huge obelisk of injustice that just wanted to crush her high school life into dust. It was constant warfare; every opinion she had became a battle, every decision she made a conflict. Getting away to college gave her escape, but bred resentment too; he hated that she went to New York, even though NYU was a good school, and her decision to stay there after she finished made things even worse. And then politics got all crazy, weirder then ever, and it became impossible for them to talk without it erupting into fights almost instantly. It was bad enough when the smart, young guy she liked was president and Dad constantly spewed his hate for him at her, but somehow it got even worse when the old, racist, women hating war-starter he liked won. Twice. So they didn't talk much now, barely online, never on the phone. Since her second year of school he'd never been to NYC to visit her. She came back when she could face it; sometimes for birthdays, sometimes for Thanksgiving. Maybe for Christmas. But somehow always, like now, for the Super Bowl. Like football was the one thing they still had, that one thing they could still sit in the same room together for. Shouting at players, screaming at the ref, laughing at the ads. Dad is in the bathroom, and Sara has had enough of Fox and whichever war this is. She reaches over and grabs the remote from the arm of his chair, and tries to find something else to watch. 
The government had scrapped all the rules about how the internet worked, and for most people like her parents it had suddenly gotten a lot cheaper to get their TV through Facebook, so all she can find is Fox, Breitbart News, Family Values TV, Info Wars, The Rebel, Glenn Beck, The Voice of America, America First, The Bible Today and lots of hunting and sports channels she doesn't even recognise. It's signed in to her Dad's FB account, and the last thing she wants is to try and log in on hers before he gets back from the john. Yeah. There was no way that would end up with them keeping it civil. In her pocket her phone vibrates, purrs against her skin, reminding her it's there, making sure she's not forgotten where her real friends are, that there's a world outside, beyond Dad and his TV. She takes it out and cradles it in her hands, the dark screen fleetingly reflecting back her face before it jumps awake at her very touch, opening up to bathe her in blue light, in comfort and warmth and the familiar. For the first time since she got home she feels herself relax. Dinner is Mom's meatloaf, with gravy and mashed potatoes. Cornbread and broccoli. Every mouthful tastes like nostalgia, and Sara can feel herself being encompassed by a bubble, this barrier of warm air and long forgotten simplicity enveloping her body, protecting her from the confusion of the world outside. "How's work, honey?" Mom asks. "Yeah, going OK." Sara works for a non-profit in Brooklyn that helps big organisations to transition to renewable energy. The pay is lousy but it feels important. "We just got the last few schools in the city to agree to put solar panels on their roofs. Big deal for us. I've been working on them for the last two years." Mom says nothing, just looks down at her plate. Dad finishes chewing his mouthful, swallows, wipes his beard with a napkin. Sighs, barely controlled anger simmering behind his face. "Solar panels cause cancer." Sara laughs, covering her mouth as she nearly chokes on chewed food. "What? No they don't Dad." "They do. The material they use to coat them reacts to sunlight, and produces an airborne carcinogen. It's based on a particular kind of rare earth. It's a bit like teflon. The Chinese have known about this for decades but have kept it covered up, because they-" "Dad, no. Just no. Trust me." "-because they are the world's largest manufacturers of solar panels. But the research has been done. The scientific evidence is out there. Look it up." "Look it up?" Sara shakes her head, not knowing where to even start. "Dad, who is telling you this stuff?" "No one is telling me it, Sara. I read it. It's in the news. I mean, really, I'm surprised you've not seen it. It was all over Facebook." "Maybe on yours, but it's not all over my Facebook." She doesn't have the heart to tell him she muted him six months ago. "Well, I don't read the news and I don't know any science," says Mom, "But I do know this: after they opened that solar farm up near Mary, within just a few years her and two of her neighbours had cancer. I mean I don't know anything for sure honey, but given the risk are you sure it's safe to be putting these panels on top of schools?" "There's no risk, Mom. None at all. Dad, I wish you'd stop believing everything you see on Facebook." "Well, maybe you should read things yourself before passing judgement on them." He pushes himself up from his seat, steps away from the table. 
Sara sighs, thinking she's upset him that much that he's actually abandoning his dinner, but he stops to grab something off a nearby shelf. His iPad. He heads back and takes his seat again. Oh, here we fucking go she thinks to herself. He stabs at the screen, looks for a while, stabs again. Flips it over and hands it to her. "Here. Read." Reluctantly, she takes it. His Facebook feed. Somewhere in the middle of it is the article, a very to the point CHINESE SOLAR PANELS CAUSE CANCER headline. But she can't even focus on it, because the rest of the screen is filled with distractions, looping videos and animated gifs, all adverts, and all for guns. Or security systems. Panic rooms. Back up power generators. Emergency rations. More guns. "Jesus Christ Dad, these ads!" "No blasphemy at the dinner table, please honey" says Mom. "What about them?" "Just… just look at them. They're terrifying. They're like… like adverts for the end of the world! You know they show you this stuff just to make you scared, right? Just to keep you paranoid." "They show me this stuff because they've got products to sell. That's how the economy works. That's how we create jobs. Godammit Sara, are you telling me you hate advertising now? Do you just hate everything about America?" Sara looks over to Mom, who looks like she's on the brink of tears. Suddenly she finds she's also lost the will to fight. Gently she closes the iPad and puts it down on the table, next to her plate. "No, of course not Dad. Maybe I'll read this later, after the game." After dinner she helps Mom clean-up, the two of them loading the dishwasher in near silence. She's leaning against the counter, scrolling through Twitter on her phone, when Mom finally speaks. "You should go easy on your father, you know. He's worried about a lot of things." "What things? Solar panel cancer?" "Don't joke Sara, I'm serious. There's a lot that bothers him. The state of the world. The future. All these damn wars." "We're all worried about all that, Mom." "He's worried about his health. I'm worried about his health. Probably more than he is." Sara looks up from her phone, genuine concern. "Is he OK?" "I don't know. He won't go to the doctor. Hasn't been in months. He's worried about his insurance." "I had no idea-" "Yeah, well you know your father. Doesn't like to talk about it. Doesn't want to burden other people with his problems. Hates pity." She pauses, looks out the window into the yard. When she turns back to Sara her eyes are damp. "This is why I was so excited about you coming back. Why he was so excited! I thought it'd take his mind of all this. He was so excited to see you. You know he loves watching the game with you, Sara." "I know. I'm sorry I-" "And the ads! The Super Bowl ads! You know how much he loves watching the new ads with you. It's a stupid thing, sure, but he loves it. Talks about it all the time. It's like a tradition to him. That's why he got so upset over dinner when you got angry at his ads. It's something special he has with you, he doesn't want to lose it." Sara slips her phone into her pocket, genuine guilt. Feels like a spoiled kid. "I didn't realise. I'm sorry." Mom smiles, walks over and kisses her on the forehead. "It's OK honey. Don't feel bad. Just go. Just go sit in there with him and watch some TV. Please." It's the second down on the Falcon's 60 yard line with 30 yards to cover, and the Lions need one touchdown to equalise. 
Sara and her Dad are sat in the front room, working their way through a family sized pack of Oreos, when the ad break starts. Dawn. Red skies over the desert. A Chevrolet truck pulls up next to a large, trailer. Low shot next to the front tire, as a cowboy booted foot drops down from the door, disturbing dust. Cut to: internal shot of the trailer, darkness split by morning light through the opening door. The figure enters, flicks on lights. The room is full of equipment, computers. The figure takes a seat, puts on a headset, thumbs on screens. Rests their hands on two large joysticks on the desk. Cut to: airfield, the desert. The distinctive silhouette of a Predator drone taxis across the screen, rising heat shimmering the air around it. Cut to: interior of the trailer. The faceless figure works controls, the joysticks, touch screens. Voiceover: They say you need to get up pretty early to get past America's finest. But the truth is we never sleep. Cut to: a uniformed guard on top of the border wall. He looks up and gives a salute to the drone as it soars above him, out and across the desert. Cut to: drone footage. Grainy, monochrome. A group of figures move slowly through the desert. The camera tracks them. Zooms in. The pilot punches buttons. The figures become highlighted by a computer overlay, text appears next to them. ILLEGAL ENTRY ATTEMPT SUSPECTED. GROUND PATROLS ALERTED. "Fuck this," says Sara, getting up from her seat. "Sara!" says Mom. "No I'm sorry, I can't. I can't sit here and watch this… this bullshit. This propaganda." She storms out of the room. "Sara!" Mom makes to get up. "No, just leave her," says Dad, gently, his eyes still fixed on the screen. "Just let her go." Out in the kitchen Sara sits at the table and wants to scream. She's angry, mainly with herself. She should never have fucking come here. She should have known better. There was never any fucking way anything good was going to come from this. As much as Mom wants to romanticise things, to make them sound cute and adorable, the truth is shit with Dad has never been right since she was a teenager. Too much resentment, too much bad blood, too much control and rebellion. They hadn't agreed on anything - they hadn't managed to have a simple conversation that didn't descend into fighting - in 15 goddamn years, and no amount of eating cookies and watching fucking Super Bowl ads on the TV was going to fix that. She sighs, wipes a tear from her cheek. On autopilot she takes her phone from her pocket, feels its reassuring warmth in her hand, and swipes open Twitter. Everybody seems to be talking about the same thing. omg im crying holy shit that chevrolet ad /fire emoji that was sooooo beautiful who knew chevrolet were so woke i can't believe they did that, so amazing Hang on, are they taking about the same ad? Hastily she opens her FB TV app, pulls up the game. The ad is just finishing. She hits the 10-second rewind icon a couple of times, then leans the phone on its side against a ketchup bottle. Cut to: drone footage. Grainy, monochrome. A group of figures move slowly through the desert. The camera tracks them. Zooms in. The pilot punches buttons. The figures become highlighted by a computer overlay, text appears next to them. ILLEGAL ENTRY ATTEMPT SUSPECTED. GROUND PATROLS ALERTED. Cut to: on the ground, in the desert. The group of figures are revealed to be a Mexican family, maybe two. Men, women, children. They look tired, hungry. 
They stop to rest, sipping the little water they have left from tattered plastic bottles. A little way away from the main group sits a small child, a girl. Maybe 8 years old. She is drawing shapes in the dust with a stick. She's drawn quite a bit it looks like, but from our angle we can't see what. Cut to: drone footage. The pilot is watching the group. As he tracks away from the main party to where the girl is sat, the camera reveals what she has drawn. A large, child's rendition of the American flag. Underneath it, in childlike handwriting, some words. 'I have a dream' Text flashes across the screen. ALERT CANCELLED. ALL PATROLS: STAND DOWN Cut to: the drone, banking and turning, flying away. Cut to: exterior shot of the trailer. The still anonymous pilot exits, walks back towards his jeep. Voiceover: Keeping America safe means never sleeping, but keeping America great means never forgetting who we are, and how we got here. The jeep starts up, pulls away from the camera in a cloud of dust. Fade to black. Chevrolet logo. White text against black. 'We know what really makes America great' Sara finds herself in the front room, sobbing. "Honey?" Dad pauses the TV, looks up at her. It looks like he's been crying too. "Sara?" "Did you - did you watch it?" "The Chevrolet ad?" "Yeah." "Yeah, we did." Embarrassed, he wipes a tear from his cheek. "It was… it was very moving." She falls on him, wrapping her arms around his neck, burying her face in his chest. "I'm sorry Dad. I'm so sorry. I didn't mean to be so mean-" "It's OK, honey. It really is." "No, no it's not. We always fight. And I know that's mainly my fault-" "Well, now, c'mon-" "No, it is. It's my fault. I got myself into thinking we can never agree on anything, that we can never see eye to eye. That we've got nothing in common anymore." She lifts her head to look up at him. "But I know that's wrong. That I shouldn't assume things about you. That there's still things that can bring us together." He grins back at her. "Like Super Bowl ads?" She laughs. "I guess. But you know what I mean, really." "I know honey. And I'm sorry too. I didn't mean what I said earlier. I know you don't really hate this country." He gestures to the couch next to him. "Why don't you sit down, huh? We can watch the rest of the game together." She straightens herself up, wipes her eyes. Suddenly feels a little self-conscious. "Sure. Let me just go freshen up first." "Of course honey." Mom and Dad watch Sara leave the room, and then look at each other. "Well." "Well indeed." "What did I tell you? You two just needed to spend some time together. Some quality time." "I guess so. What did I ever do to deserve a woman as hot and as smart as you, huh Sheryl?" Mom stands up and makes to leave the room, leaning down to kiss him as she passes. "I ask myself that question every day." Alone, seen only by the TV, Dad smiles to himself. He picks up the remote, but instead of hitting play, he finds himself hitting rewind. Cut to: drone footage. Grainy, monochrome. A group of figures move slowly through the desert. The camera tracks them. Zooms in. The pilot punches buttons. The figures become highlighted by a computer overlay, text appears next to them. ILLEGAL ENTRY ATTEMPT SUSPECTED. GROUND PATROLS ALERTED. Cut to: on the ground, in the desert. The group of figures are all men. Dirty, scruffy, furtive. Like they mean business. They carry guns, pistols, and assault rifles. Bad hombres. One of them pulls open a bag, looks inside. Cut to: close up of the inside of the bag.
Inside are packets of white powder. Suddenly, one of the party looks up, shouts something in Spanish. They all go to grab their guns. But it's too late. From three different directions, three different Chevrolet jeeps appear, screeching to a halt, kicking up dust. From them jump Border Patrol agents and Minutemen militia, guns drawn and ready. The gang of men don't even put up a fight. They know they're surrounded, they drop their weapons and pathetically raise their hands. All except one. The guy with the bag full of drugs. He's got nothing to lose. He reaches for his rifle. Cut to: Border Patrol agents, opening fire. Text flashes across the screen. ALERT CANCELLED. THREAT NEUTRALISED. Cut to: the drone, banking and turning, flying away. Cut to: exterior shot of the trailer. The still anonymous pilot exits, walks back towards his jeep. Voiceover: Keeping America safe means never sleeping, but keeping America great means never forgetting who we are, and what keeps us strong. The jeep starts up, pulls away from the camera in a cloud of dust. Fade to black. Chevrolet logo. White text against black. 'We know what really makes America great' Dad wipes another tear from his eye. "I think we're going to be OK," he says to himself. "I think we're going to be just fine." This article was originally published on TheLong+Short.
D. While social media purports to bring us together, it more often drives us apart.
On what date was a follow-up MRI performed on Mr. Havers that showed no indication of local recurrence? Choose the correct answer from the following options: A. 02/19/2017 B. 10/03/2017 C. 01/04/2017 D. 04/23/2017 E. 03/29/2017
### Patient Report 0 **Dear colleague, ** We are reporting on our patient, John Havers, born on 05/29/1953, who received an MRI of the right proximal thigh for further clarification of a potential tumor. **MRI of the right thigh, plain and with contrast agent, on 02/19/2017:** [Technique]{.underline}: Surface coil, localization scan, coronal T1 SE, transverse, coronal, sagittal T2 TSE with fat suppression. After intravenous contrast administration, T1-TSE transverse and T1-TSE FS (coronal, T2 TSE FS coronal as an additional fat-saturated sequence in the same section level for exploring relevant edema). [Findings]{.underline}: Normal bone marrow signal consistent with age. No signs of fractures. Coexistence of moderate degenerative changes in the hip joints, more pronounced on the right than on the left. Mild activation of the muscles in the left proximal adductor region. Ventral to the gracilis muscle and dorsal to the sartorius muscle at the level of the middle third of the right thigh is a subfascial intermuscular oval mass lesion with a high-signal appearance on T2-weighted images and a low-signal appearance on T1-weighted images. It is partially septated, well-demarcated, and shows strong contrast enhancement. No evidence of blood degradation products. Dimensions are 35 x 45 x 40 mm. No evidence of suspiciously enlarged lymph nodes. Other assessed soft tissues are unremarkable for the patient\'s age. [Assessment]{.underline}: Overall, a high suspicion of a mucinous mass lesion in the region of the right adductor compartment. Differential Diagnosis: Mucinous liposarcoma. Further histological evaluation is strongly recommended. **Current Recommendations:** Presentation at the clinic for surgery for further differential diagnostic clarification. ### Patient Report 1 **Dear colleague, ** We are reporting on our patient, John Havers, born on 05/29/1953. He was under our inpatient care from 03/10/2017 to 03/12/2017. **Diagnosis:** Soft tissue tumor of the right proximal thigh **Other Diagnoses:** - Arterial hypertension - Non-insulin-dependent diabetes mellitus - Coronary artery disease with stent placement - Nicotine abuse (80-100 pack-years) - Arterial hypertension - Status post apoplexy - Status post cataract surgery - Status post right hip total hip replacement - Status post Polypectomy for polyposis coli (minimal dysplasia) - Status post appendectomy - M. Meniere **Allergies**: Hay fever **Treatment**: Incisional biopsy on 03/10/2017 **Histology:** [Microscopy]{.underline}: (Hematoxylin and Eosin staining): Histologically, an infiltrate of a mesenchymal neoplasm is evident in a section prepared by us and stained with HE. There are areas with an estimated tumor percentage of approximately 90% that were selected and labeled for molecular pathology analysis. [Molecular Pathology]{.underline}: After macrodissection of labeled tumor areas from unstained consecutive sections, RNA was extracted and analyzed using focused next-generation sequencing technology. The analysis was performed using FusioPlex Sarcoma v2 assays, allowing detection of fusions in 63 genes. **Medical History:** We may kindly assume that you are familiar with Mr. Havers's medical history. The patient presented to our surgery clinic due to a mass in the right proximal thigh. The swelling was first noticed approximately 3 months ago and has shown significant enlargement since. The patient subsequently consulted a general surgeon, who referred him to our center after performing an MRI, suspecting an intramuscular liposarcoma. 
After presenting the case to our interdisciplinary tumor board, the decision was made to perform an incisional biopsy. The patient was admitted for the above procedure on 03/10/2017. **Physical Examination:** On clinical examination, a patient in slightly reduced general and nutritional status was observed. Approximately 6 x 7 x 4 cm-sized tumor in the right proximal thigh, well mobile, intramuscular. Numbness in both legs at L5/S1. No change in skin color. No fluctuation or redness. The rest of the clinical examination was unremarkable. **Treatment and Progression:** Following routine preoperative preparations and informed consent, the above-mentioned procedure was performed under general anesthesia on 03/10/2017. The intraoperative and postoperative courses were uncomplicated. Initial mild swelling regressed over time. The inserted drainage was removed on the second postoperative day. The patient mobilized independently on the ward. Pain management was provided as needed. With the patient\'s subjective well-being and inconspicuous wound conditions, we were able to discharge Mr. Havers on 03/12/2017 for outpatient follow-up. **Current Recommendations:** - Suture material to be shortened on the 14th postoperative day. - Follow-up appointments in our outpatient clinic **Medication upon Discharge:** **Medication** **Dosage** **Frequency** -------------------------------------- ------------ --------------- Empagliflozin (Jardiance) 10 mg 1-0-0-0 Metformin Hydrochloride (Glucophage) 1000 mg 1-0-1-0 Atorvastatin Calcium (Lipitor) 21.7 mg 0-0-1-0 Metoprolol Tartrate (Lopressor) 50 mg 0.5-0-0.5-0 Aspirin 100 mg 1-0-0-0 Pantoprazole Sodium (Protonix) 22.6 mg 1-0-0-0 **Lab results upon Discharge: ** **Parameter** **Results** **Reference Range** ---------------------- ------------- --------------------- Sodium 138 mEq/L 136-145 mEq/L Potassium 4.9 mEq/L 3.5-4.5 mEq/L Creatinine 0.81 mg/dL 0.70-1.20 mg/dL Estimated GFR \- \- Urea 38 mg/dL 17-48 mg/dL C-Reactive Protein 2.6 mg/dL \< 5.0 mg/dL Complete Blood Count \- \- Hemoglobin 16.7 g/dL 13.5-17.0 g/dL Hematocrit 49.5% 39.5-50.5% Erythrocytes 5.2 M/µL 4.3-5.8 M/µL Leukocytes 10.07 K/µL 3.90-10.50 K/µL Platelets 167 K/µL 150-370 K/µL MCV 95.4 fL 80.0-99.0 fL MCH 32.2 pg 27.0-33.5 pg MCHC 33.7 g/dL 31.5-36.0 g/dL MPV 11.7 fL 7.0-12.0 fL RDW-CV 12.6% 11.5-15.0% Prothrombin Time 120% 78-123% INR 0.94 0.90-1.25 aPTT 30.1 sec 25.0-38.0 sec **Addition: Histology Report:** [Microscopy:]{.underline} (Hematoxylin and Eosin staining): Histologically, infiltrates of a mesenchymal neoplasm can be seen in a section we prepared. Below this are areas estimated to contain 90% tumor, which have been selected and labeled for molecular pathological analysis. [Molecular Pathology:]{.underline} After macrodissection of the marked tumor areas from unstained consecutive sections, RNA was extracted and analyzed using focused Next-Generation Sequencing technology. The examination was performed using the FusioPlex Sarcoma v2 Assays, that allows for the detection of fusions involving 63 genes. [Diagnosis:]{.underline} 1. Incisional biopsy from a myxoid liposarcoma, Grade 1 according to FNCLCC (Sum score 2 + 0 + 1 = 3), with the detection of a FUS: DDIT3 fusion transcript (right adductor compartment). 2. Predominantly mature fatty tissue as well as fascial tissue. - In addition to previous reports, myxoid neoplasm is characterized by minimal cell density/round cell areas, here less than 25%, according to FNCLCC (=2 points for tumor differentiation). 
- No evidence of necrosis (=0 points). - 2 mitotic figures in 10 high-power fields (=1 point). - Total score is 2 + 0 + 1 = 3, corresponding to Grade 1 according to FNCLCC. [Diagnosis]{.underline} 1. Incisional biopsy from a myxoid liposarcoma (right adductor compartment). 2. Predominantly mature fatty tissue as well as fascial tissue (subcutaneous). [Comment]{.underline}: The present biopsy material corresponds to Grade 1 according to FNCLCC. A supplementary report follows. **Supplementary Report from: 03/29/2017:** [Clinical Information:]{.underline} Suspected liposarcoma of the right proximal thigh. Encapsulated subfascial tumor, palpably indurated. Adipose tissue adjacent to the tumor, macroscopically lighter and finer than the subcutaneous adipose tissue towards the skin. [Material]{.underline}: Microscopy and Molecular Pathology Interphase FISH analysis using a two-color break-apart probe to examine a chromosomal break in the FUS gene (chromosome 16p11.2) and in the DDIT3 gene (chromosome 12q13.3-q14.1). Interphase FISH analysis reveals a specific break event in the FUS gene (FUS-FISH positive). This indicates the presence of a FUS translocation. Similarly, in interphase FISH analysis, a specific break event is detectable in the DDIT3 gene (DDIT3-FISH positive), indicating the presence of a DDIT3 translocation. [Diagnosis:]{.underline} Incisional biopsy from a myxoid liposarcoma of the right adductor compartment. Predominantly mature fatty tissue as well as fascial tissue. [Comment]{.underline}: The cytogenetic findings are indicative of a myxoid liposarcoma. Technical validation by RNA sequencing will be provided in a supplemental report. This does not affect the above diagnosis. **Supplementary Report from: 03/18/2017:** [Microscopy: MDM2, S100:]{.underline} Partial weak expression of S100 protein by the lesional cells, occasionally including pre-existing adipocytes. No abnormal expression of MDM2. No abnormal expression of MDM2 in mature adipose tissue. [Diagnosis:]{.underline} Incisional biopsy from a myxoid liposarcoma of the right adductor compartment. Predominantly mature fatty tissue as well as fascial tissue. **Main Report from: 03/18/2017** [Clinical Information:]{.underline} Suspected liposarcoma of the right proximal thigh, as per MRI 02/19/2017. Encapsulated subfascial tumor, palpably indurated located in the right adductor compartment. Adipose tissue adjacent to the tumor, macroscopically lighter and finer than the subcutaneous adipose tissue towards the skin. [Macroscopy:]{.underline} Tumor: Brown, nodular piece of tissue, 20 x 14 x 10 mm, with smooth and rough surface. Cut surface shiny and mottled, sometimes gray, sometimes brown. Subcutaneous adipose tissue: A piece of adipose tissue, 25 x 20 x 5 mm. [Microscopy:]{.underline} Moderately cell dense mesenchymal proliferation with a myxoid matrix. Predominantly round nuclei, moderately dense nuclear chromatin, slight pleomorphism. Occasional adipocytic cells with univacuolar cytoplasm. Partially dense, ribbon-like connective tissue as well as mature univacuolar adipose tissue. [Diagnosis:]{.underline} Incisional biopsy suspected of a myxoid liposarcoma. Predominantly mature fatty tissue as well as fascial tissue. ### Patient Report 2 **Dear colleague, ** We would like to inform you about our patient Mr. John Havers, born on 05/29/1953, who was admitted to our hospital from 03/29/2017 to 04/05/2017. 
**Diagnoses**: - Myxoid liposarcoma on the right medial thigh, pT2 pNX L0 V0 Pn0 G1 R0, Stage IB - Incisional biopsy on 03/10/2017 **Other Diagnoses:** - Arterial hypertension - Non-insulin-dependent diabetes mellitus - Coronary artery disease with stent placement - Nicotine abuse (80-100 pack-years) - Arterial hypertension - Status post apoplexy - Status post cataract surgery - Status post right hip total hip replacement - Status post Polypectomy for polyposis coli (minimal dysplasia) - Status post appendectomy - M. Meniere **Allergies**: Hay fever **Current Presentation**: Neoplasm of uncertain or unknown behavior. **Treatment**: On 04/01/2017, en bloc tumor excision with removal of the old biopsy scar, partial resection of the M. gracilis, fibers of the M. sartorius and M. adductor longus, and ligation of the V. saphena magna was performed. **Histology from 04/11/2017** Clinical Information: Myxoid liposarcoma, localized in the right thigh. [Macroscopy Tumor, right thigh]{.underline}: A triple surgical resection was performed, removing skin and subcutaneous tissue and the underlying soft tissue and muscle. The size of the excised skin spindle was 130 x 45 mm with a resection depth of up to 48 mm. A wound 25 mm long and 6 mm wide was noted on the skin surface. The muscle attached laterally/dorsally measured 75 x 25 x 6 mm. Two nodules were noted on the cut surface. The larger nodule, located in the subcutaneous tissue, measured 33 mm (proximal/distal) x 36 mm (anterior/dorsal) x 30 mm. Its distance from the proximal preparation cap was 26 mm, from the distal preparation cap more than 60 mm, from the ventral soft tissue 20 mm, and from the dorsal soft tissue 3 mm, with less than 1 mm basal extension. Superficially, it was surrounded by a delicate capsule. A separate nodule measuring 20 mm (proximal/distal) x 24 mm (ventral/dorsal) x 20 mm was found immediately ventro-distal to the first nodule. This nodule was located more than 40 mm from the proximal preparation cap, more than 50 mm from the distal preparation cap, 12 mm ventrally, 11 mm dorsally, and 5 mm basally. Consequently, the maximum size of the tumor from proximal to distal was 53 mm. No macroscopic necrotic areas were evident on the cut surface of the nodule. However, partial necrosis of the subcutaneous fatty tissue in the vicinity of the described wound was observed. [Microscopy HE, PAS:]{.underline} Histomorphologically, there is a moderately cell-dense proliferation with a significant myxoid matrix in the area of the two confluent nodules, with a maximum diameter of 53 mm. There are also areas with relative cell poverty. Within the myxoid matrix, there are blood vessels with a distinct growth pattern referred to as the \"chicken wire pattern.\" No clear tumor necroses are evident. The tumor cell nuclei have a round configuration with moderately dense chromatin. Apoptotic figures are increased. The number of mitoses is low.The lesion was completely removed with a minimal margin of 0.5 mm from the posterior resection edge. In the superficial subcutaneous tissue, there is a band-like necrosis directly related to superficial granulation tissue. The included skin spindle shows regular epidermal covering and a largely unremarkable dermis. [Diagnosis]{.underline}: Skin/subcutaneous excision with a maximum 53 mm myxoid liposarcoma that was completely removed (minimum distance to posterior cutoff plane 0.5 mm). 
[Comment]{.underline}: In view of the present morphology and knowledge of the molecular pathological examination results with proven break events in the FUS gene and DDIT3 gene as part of interphase FISH analysis, the diagnosed condition is myxoid liposarcoma. According to the FNCLCC grading scheme, this corresponds to grade 1: Histological type: 2 points + mitotic index 1 point + necrosis index 0 points = 3 points. ICD-O-3 tumor classification: Myxoid liposarcoma TNM (8th edition): pT2 pNX L0 V0 Pn0 G1 R0 **Medical History:** We assume that you are familiar with Mr. Havers's medical history, and we refer to our previous correspondence. **Physical Examination:** Patient in good general condition. Oriented in all aspects. No cyanosis. No edema. Warm and dry skin. Normal nasal and pharyngeal findings. Pupils round, equal, and react promptly to light bilaterally. Moist tongue. Pharynx and buccal mucosa unremarkable. No jugular vein distension. No carotid bruits heard. Palpation of lymph nodes unremarkable. Palpation of the thyroid gland unremarkable, freely movable. Lungs: Normal chest shape, moderately mobile, vesicular breath sounds. Heart: Regular heart action, normal rate; heart sounds clear, no pathological sounds. Abdomen: Peristalsis and bowel sounds normal in all quadrants; soft abdomen, no tenderness, no palpable masses, liver and spleen not palpable due to limited access, non-tender kidneys. Normal peripheral pulses; joints freely movable. Strength, motor function, and sensation are unremarkable. **Therapy and Progression**: The patient presented to our surgical clinic because of a mass in the right proximal thigh. The swelling was first noticed about 3 months ago and has increased significantly in size since then. MRI findings raised suspicion of a liposarcoma. After consultation in the interdisciplinary tumor board, the indication for incisional biopsy was performed on 03/10/2017. The histopathological examination confirmed the presence of a myxoid liposarcoma, leading to the decision for en bloc excision. The patient was extensively informed about the procedure and the risks and gave his consent. The patient was admitted for the procedure on 03/29/2017. Upon clinical examination, a patient in good general and nutritional condition was noted. Other general clinical findings were unremarkable. A wound healing disorder of 2 cm was observed in the area of the wound after incisional biopsy. **Sarcoma Tumor Board Recommendation dated 03/11/2017:** R0 G1 finding, standard sarcoma follow-up. **Procedure**: Following standard preoperative preparations and informed consent, the aforementioned procedure was performed on 03/01/2017 under general anesthesia. The intraoperative and postoperative course was uneventful. On the first postoperative day, there was slight swelling in the affected area, which gradually subsided. Analgesia was sufficient with Acetaminophen as needed. Thrombosis prophylaxis was administered with subcutaneous Enoxaparin 0.4 mL. The patient mobilized independently on the ward. The inserted drainage could not be removed so far due to excessive drainage output. During the hospital stay, a staging CT of the chest and abdomen was performed. No thoracoabdominal metastases were detected. **Summary**: With a good subjective well-being and unremarkable wound conditions, Mr. Havers was discharged on 04/05/2017 for further outpatient care. Clinical examination reveals slight swelling of the wound area. The wound is not dehiscent and shows no signs of irritation. 
The patient is mobilizing independently. **CT Chest/Abdomen/Pelvis from 04/01/2017: ** [Clinical Information, Question, Justification]{.underline}: Liposarcoma of the thigh. Staging. [Technique]{.underline}: Digital overview radiographs. Following intravenous contrast agent administration (100 ml Xenetix), CT of the chest and entire abdomen in the venous contrast phase. Reconstruction of the primary dataset with a slice thickness of 0.625 mm. Multiplanar reconstruction. Total DLP: 885 mGy\*cm. [Findings]{.underline}: There are no prior images available for comparison. [Chest]{.underline}: Lungs are evenly ventilated and normally developed bilaterally. No pneumothorax on either side. Minimal right-sided pleural effusion. Mild basilar hypoventilation, particularly in the right lower lobe. Calcified granuloma in the apical right lower lobe. No suspicious pulmonary nodules. Heart shows enlargement of the left ventricle and left atrium. Coronary artery sclerosis. Atherosclerosis of the aortic arch. No pericardial effusion. Aorta and pulmonary trunk have normal diameters. No central pulmonary artery embolism. No pathologically enlarged mediastinal or hilar lymph nodes. Symmetric appearance of the neck soft tissues. Thyroid gland without focal lesions. Axillary lymph nodes are of normal size. [Abdomen]{.underline}: Liver is of normal size and has a smooth contour. No signs of cholestasis. No portal vein thrombosis. No suspicious intrahepatic lesions. Gallbladder appears normal. Common bile duct is not dilated. Spleen is not enlarged. Pancreas shows regular lobulation, and there is no dilatation of the pancreatic duct. Both kidneys are free from urinary tract obstruction. No solid intrarenal masses. Few renal cysts. Adrenal glands appear unremarkable. Urinary bladder shows no focal wall thickening. Prostate is not enlarged. Advanced atherosclerosis of the abdominal aorta and pelvic vessels. History of stenting of the left external iliac artery with no reocclusion. Mesenteric, para-aortic, and parailiac lymph nodes are not pathologically enlarged. No free intraperitoneal fluid or air is detected. Osseous Structures: Degenerative changes in the spine. No evidence of suspicious osseous destruction suggestive of tumors. Soft tissue mantle appears unremarkable. **Assessment**: No thoracoabdominal metastases. **Current Recommendations**: - Regular wound inspections and dressing changes. - Documentation of drainage output and removal if the output is \<20 ml/24 hours, expected removal on 04/23/2017 at our outpatient clinic. - Removal of sutures is not required for absorbable sutures. - According to the tumor board decision dated 04/11/2017, we recommend regular follow-up according to the schedule. **Sarcoma Follow-up Schedule Stage I** - Local Follow-up: 1. MRI right thigh: Years 1-5: every 6 months 2. Years 6-10: every 12 months - Pulmonary Follow-up: 3. Chest X-ray, CT chest with contrast agent Years 1-5: every 6 months in alternation 4. 
Years 6-10: every 12 months in alternation **Medication upon Discharge:** **Medication** **Dosage** **Frequency** -------------------------------------- ------------ ------------------- Aspirin 100 mg 1-0-0-0 Atorvastatin (Lipitor) 20 mg 0-0-1-0 Enoxaparin (Lovenox) Variable 0-0-1-0 Empagliflozin (Jardiance) 10 mg 1-0-0-0 Metformin Hydrochloride (Glucophage) 1000 mg 1-0-1-0 Metoprolol Tartrate (Lopressor) 50 mg 0.5-0-0.5-0 Acetaminophen (Tylenol) 500 mg 2-2-2-2 if needed Pantoprazole (Protonix) 20 mg 1-0-0-0 **Lab results upon Discharge:** **Parameter** **Results** **Reference Range** ------------------------------------------- ------------- --------------------- Sodium 137 mEq/L 136-145 mEq/L Potassium 4.4 mEq/L 3.5-4.5 mEq/L Creatinine 0.74 mg/dL 0.70-1.20 mg/dL Blood Urea Nitrogen 33 mg/dL 17-48 mg/dL C-Reactive Protein 1.7 mg/dL \< 5.0 mg/dL Thyroid-Stimulating Hormone 3.58 mIU/L 0.27-4.20 mIU/L Hemoglobin 16.5 g/dL 13.5-17.0 g/dL Hematocrit 49.3% 39.5-50.5% Red Blood Cells 5.2 M/µL 4.3-5.8 M/µL White Blood Cells 9.63 K/µL 3.90-10.50 K/µL Platelets 301 K/µL 150-370 K/µL Mean Corpuscular Volume 95.7 fL 80.0-99.0 fL Mean Corpuscular Hemoglobin 32.0 pg 27.0-33.5 pg Mean Corpuscular Hemoglobin Concentration 33.5 g/dL 31.5-36.0 g/dL Mean Platelet Volume 10.4 fL 7.0-12.0 fL Red Cell Distribution Width 12.1% 11.6-14.4% Activated Partial Thromboplastin Time 32.4 sec 25.0-38.0 sec ### Patient Report 3 **Dear colleague, ** We are writing to provide an update on our patient Mr. John Havers, born on 05/29/1953, who presented to our outpatient surgery clinic on 04/23/2017. **Diagnosis**: Myxoid liposarcoma, right medial thigh, pT2 pNX L0 V0 Pn0 G1 R0, Stage IB - Following incisional biopsy - After en bloc tumor excision with removal of the previous biopsy scar, partial resection of the gracilis, sartorius and adductor longus muscles and ligation of the great saphenous vein. **Other Diagnoses:** - Arterial hypertension - Non-insulin-dependent diabetes mellitus - Coronary artery disease with stent placement - Nicotine abuse (80-100 pack-years) - Arterial hypertension - Status post apoplexy - Status post cataract surgery - Status post right hip total hip replacement - Status post Polypectomy for polyposis coli (minimal dysplasia) - Status post appendectomy - M. Meniere **Allergies**: Hay fever **Medical History:** We kindly assume that you are familiar with the patient\'s detailed medical history and refer to our previous discharge letter. **Current Presentation:** The patient presented today for a follow-up visit in our clinic. He reported no complaints. The Redon drain has not produced any secretions in the last 2 days. Clinical examination revealed uneventful wound conditions with applied Steri-strips. There is no evidence of infection. The Redon drain contains serous wound secretions. Procedure: The Redon drain is being removed today. With nearly fully healed wound conditions, we recommend initiating scar massage with fatty topical products in the near future. **MRI of the Right Thigh on** 04/23/2017**:** [Clinical Background, Question, Justification:]{.underline} Sarcoma follow-up for myxoid liposarcoma on the right medial thigh, pT2 pNX L0 V0 Pn0 G1 R0, Stage IB. Recurrence? Regional behavior? Lymph nodes? [Technique]{.underline}: 3 Tesla MRI of the right thigh, both plain and after the administration of 8 ml of Gadovist intravenously. Supine position, surface coil. 
Sequences: TIRM coronal and axial, T2-TSE coronal and axial, T1 VIBE Dixon axial, EPI-DWI with ADC map axial, T1-Starvibe vascular images plain and post-contrast axial with subtraction images, T1-TSE FS post-contrast coronal. [Findings]{.underline}: Minor FLAIR hyperintense streaky signal alteration in the surgical area, most likely scar-related, with slight diffusion restriction and streaky contrast enhancement. No evidence of a recurrent suspicious substrate. No nodular contrast enhancement. Slightly accentuated inguinal lymph nodes on the right, most likely reactive. Unremarkable visualization of the remaining soft tissue. Normal bone marrow signal. Bladder filled. Unremarkable representation of the imaged pelvic organs. [Assessment]{.underline}: Following the resection of a myxoid liposarcoma on the right medial thigh, there is a regular postoperative finding. No indication of local recurrence. **Chest X-ray in Two Planes on 04/23/2017: ** [Clinical Background, Question, Justification]{.underline}: Myxoid liposarcoma of the right thigh, initial diagnosis in 2022. Follow-up. Metastases? [Findings]{.underline}: No corresponding prior images for comparison. The upper mediastinum is centrally located and not widened. Hila are free. No acute congestion. No confluent pneumonic infiltrate. No evidence of larger intrapulmonary lesions. A 7 mm spot shadow is noted right suprahilar, primarily representing a vascular structure. No effusions. No pneumothorax. **Current Recommendations:** The patient would like to continue follow-up care with us, so we scheduled an MRI control appointment to assess the possibility of local recurrence. On this day, a two-view chest X-ray is also required. **We recommend the following follow-up schedule:** - Local Follow-up: 5. MRI right thigh: Years 1-5: every 6 months 6. Years 6-10: every 12 months - Pulmonary Follow-up: 7. Chest X-ray, CT chest with contrast agent Years 1-5: every 6 months in alternation 8. Years 6-10: every 12 months in alternation ### Patient Report 4 **Dear colleague, ** We are writing to provide an update on our patient Mr. John Havers, born on 05/29/1953, who presented for tumor follow-up on 02/10/2018, in our outpatient surgery clinic for a discussion of findings. **Diagnosis**: Myxoid liposarcoma on the right medial thigh, pT2 pNX L0 V0 Pn0 G1 R0, Stage IB - Following incisional biopsy - After en bloc tumor excision with removal of the previous biopsy scar, partial resection of the gracilis, sartorius and adductor longus muscles and ligation of the great saphenous vein. **Other Diagnoses:** - Arterial hypertension - Non-insulin-dependent diabetes mellitus - Coronary artery disease with stent placement - Nicotine abuse (80-100 pack-years) - Arterial hypertension - Status post apoplexy - Status post cataract surgery - Status post right hip total hip replacement - Status post Polypectomy for polyposis coli (minimal dysplasia) - Status post appendectomy - M. Meniere **Summary**: Clinically, there is a regular postoperative finding on the right thigh. The control MRI with contrast of the right thigh on 04/23/2017 revealed morphologically: - No evidence of a local-regional recurrence. - In pulmonary follow-up using conventional chest X-ray on 04/23/2017, no signs of pulmonary metastasis were detected. **Current Recommendations:** Sarcoma Follow-up Schedule Stage I - Local Follow-up: 9. MRI right thigh: Years 1-5: every 6 months 10. Years 6-10: every 12 months - Pulmonary Follow-up: 11. 
Chest X-ray, CT chest with contrast agent Years 1-5: every 6 months in alternation 12. Years 6-10: every 12 months in alternation ### Patient Report 5 **Dear colleague, ** We are reporting to you on our patient Mr. John Havers, born on 05/29/1953, who presented himself on **08/01/2018** at our outpatient surgery clinic for a discussion of findings as part of tumor follow-up. **Diagnosis**: Myxoid liposarcoma, right medial thigh, pT2 pNX L0 V0 Pn0 G1 R0, Stage IB - Post-incision biopsy - After en bloc tumor excision with removal of the previous biopsy scar, partial resection of the gracilis, sartorius and adductor longus muscles and ligation of the great saphenous vein. **Other Diagnoses:** - Arterial hypertension - Non-insulin-dependent diabetes mellitus (NIDDM) - Coronary artery disease (CAD) with stent placement - Nicotine abuse (80-100 pack-years) - Arterial hypertension - Status post apoplexy - Status post cataract surgery - Status post right hip total hip replacement (THR) - Status post Polypectomy for polyposis coli (minimal dysplasia) - Status post appendectomy - M. Meniere **Summary**: Clinically, there is a normal postoperative condition in the right thigh. **MRI of the Right Thigh on 08/01/2018:** [Clinical Background, Question, Justification:]{.underline} Sarcoma follow-up for myxoid liposarcoma on the right medial thigh. Progress assessment. [Method]{.underline}: 1.5 Tesla. Localization sequences. TIRM and T2 TSE coronal. TIRM, T2 TSE, VIBE DIXON, and RESOLVE-DWI axial. StarVIBE FS before and after contrast + subtraction. T1 TSE FS coronal after contrast. [Findings]{.underline}: Comparison with MRI from 04/23/2017. Post-resection of a myxoid liposarcoma in the proximal medial right thigh soft tissue. In the surgical area, there is no evidence of a suspicious nodular, contrast-affine lesion, and no evidence of malignancy-suspected diffusion restriction. Slight scar-related changes in the access path. Otherwise, unremarkable presentation of soft tissues and included bony structures. No inguinal lymphadenopathy. Assessment: For myxoid liposarcoma, there has been consistent evidence since 02/2018: **Chest CT on 08/01/2018**: [Clinical Background, Question, Justification: Liposarcoma on the thigh. Staging.]{.underline} After risk history assessment, oral and written explanation of contrast agent application and examination procedure, as well as potential risks of the examination (see also informed consent form). Written patient consent. [Method]{.underline}: Digital overview radiographs. After intravenous contrast agent administration (80 ml of Imeron), CT of the chest in venous contrast phase, reconstruction of the primary dataset with a slice thickness of 0.625 mm. Total DLP 185 mGy\*cm. [Findings]{.underline}: For comparison, there is a CT of the chest/abdomen/pelvis from 04/01/2018. No evidence of suspicious pulmonary nodules. Several partly calcified micronodules bipulmonary, especially in the right lower lobe (ex. S303/IMA179). Partial underventilation bipulmonary. No pleural effusion. No evidence of pathologically enlarged lymph nodes. Constant calcified right hilar lymph nodes. Calcifying aortic sclerosis along with coronary sclerosis. Hepatic steatosis. Individual renal cysts. Slightly shrunken left adrenal gland. Degenerative changes of the axial skeleton without evidence of a malignancy-suspected osseous lesion. [Assessment]{.underline}: No evidence of a new thoracic tumor manifestation. **Recommendations:** Sarcoma Follow-up - Local Follow-up: 13. 
MRI right thigh: Years 1-5: every 6 months 14. Years 6-10: every 12 months - Pulmonary Follow-up: 15. Chest X-ray, CT chest with contrast agent Years 1-5: every 6 months in alternation 16. Years 6-10: every 12 months in alternation **Lab results upon Discharge: ** **Parameter** **Results** **Reference Range** ---------------------------------------------- ------------- --------------------- Sodium 138 mEq/L 136-145 mEq/L Potassium 4.9 mEq/L 3.5-4.5 mEq/L Creatinine 0.81 mg/dL 0.70-1.20 mg/dL Estimated GFR \- \- Blood Urea Nitrogen 38 mg/dL 17-48 mg/dL C-Reactive Protein 2.6 mg/dL \< 5.0 mg/dL Hemoglobin 16.7 g/dL 13.5-17.0 g/dL Hematocrit 49.5% 39.5-50.5% RBC 5.2 M/µL 4.3-5.8 M/µL WBC 10.07 K/µL 3.90-10.50 K/µL Platelets 167 K/µL 150-370 K/µL MCV 95.4 fL 80.0-99.0 fL MCH 32.2 pg 27.0-33.5 pg MCHC 33.7 g/dL 31.5-36.0 g/dL MPV 11.7 fL 7.0-12.0 fL RDW-CV 12.6% 11.5-15.0% Prothrombin Time 120% 78-123% International Normalized Ratio (INR) 0.94 0.90-1.25 Activated Partial Thromboplastin Time (aPTT) 30.1 sec 25.0-38.0 sec ### Patient Report 6 **Dear colleague, ** We are writing to provide an update on our patient Mr. John Havers, born on 05/29/1953, who was admitted to our clinic from 08/14/2023 to 09/02/2023. **Diagnosis:** Pulmonary Metastasis from Myxoid Liposarcoma - Myxoid liposarcoma on the right medial thigh, pT2 pNX L0 V0 Pn0 G1 R0, Stage IB <!-- --> - Post-incision biopsy - After en bloc tumor excision with removal of the previous biopsy scar, partial resection of the gracilis, sartorius and adductor longus muscles and ligation of the great saphenous vein. **Other Diagnoses:** - Arterial hypertension - Non-insulin-dependent diabetes mellitus (NIDDM) - Coronary artery disease (CAD) with stent placement - Nicotine abuse (80-100 pack-years) - Arterial hypertension - Status post apoplexy - Status post cataract surgery - Status post right hip total hip replacement (THR) - Status post Polypectomy for polyposis coli (minimal dysplasia) - Status post appendectomy - M. Meniere **Medical History:** Mr. Havers has been under our care for myxoid liposarcoma, which was previously excised from his right medial thigh. He had a stable postoperative course and was scheduled for regular follow-up to monitor for any potential recurrence or metastasis. **Current Presentation:** During a follow-up appointment on 08/14/2023, Mr. Havers complained of mild shortness of breath, occasional coughing, and intermittent chest discomfort. He reported no significant weight loss but noted a decrease in his overall energy levels. Physical examination revealed decreased breath sounds in the right lung base. **Physical Examination:** Patient in adequate general condition. Oriented in all aspects. No cyanosis. No edema. Warm and dry skin. Normal nasal and pharyngeal findings. Pupils round, equal, and react promptly to light bilaterally. Moist tongue. Pharynx and buccal mucosa unremarkable. No jugular vein distension. No carotid bruits heard. Palpation of lymph nodes unremarkable. Palpation of the thyroid gland unremarkable, freely movable. Lungs: Normal chest shape, moderately mobile, decreased breath sounds in the right lung base. Heart: Regular heart action, normal rate; heart sounds clear, no pathological sounds. Abdomen: Peristalsis and bowel sounds normal in all quadrants; soft abdomen, markedly obese, no tenderness, no palpable masses, liver and spleen not palpable due to limited access, non-tender kidneys. Normal peripheral pulses; joints freely movable. Strength, motor function, and sensation is unremarkable. 
**Chest X-ray (08/14/2023):** A chest X-ray was performed, which revealed a suspicious opacity in the right lower lung field. **CT Chest (08/16/2023):** In light of the chest X-ray findings, a contrast-enhanced CT scan of the chest was conducted to obtain more detailed information. The CT imaging demonstrated a well-defined, irregularly shaped lesion in the right lower lobe of the lung, measuring approximately 2.5 cm in diameter. The lesion exhibited characteristics highly suggestive of a metastatic deposit. There were no other significant abnormalities noted in the chest. **Histology (08/21/2023):** Based on the CT findings, a CT-guided core needle biopsy of the pulmonary lesion was performed to confirm the nature of the lesion. Histopathological examination of the biopsy specimen confirmed the presence of myxoid liposarcoma cells in the pulmonary lesion. Immunohistochemical staining for MDM2 and CDK4 supported the diagnosis of metastatic myxoid liposarcoma. **Treatment Discussion:** Given the diagnosis of a pulmonary metastasis from myxoid liposarcoma, the case was reviewed in the interdisciplinary tumor board. The consensus decision was to pursue surgical resection of the pulmonary metastasis, as it remained localized and resectable. The patient and his family were informed of the treatment options and associated risks, and they provided informed consent for the procedure. **Surgery Report (08/29/2023):** Mr. Havers underwent a right lower lobectomy with lymph node dissection to remove the pulmonary metastasis. The procedure was performed by our thoracic surgery team and was completed without any immediate complications. Intraoperative frozen section analysis confirmed the presence of metastatic myxoid liposarcoma in the resected lung tissue. **Postoperative Course:** Mr. Havers postoperative course was uneventful, and he demonstrated good respiratory recovery. He was managed with adequate pain control and underwent chest physiotherapy to prevent postoperative complications. Pathological examination of the resected lung tissue confirmed the presence of metastatic myxoid liposarcoma, with clear surgical margins. **Current Recommendations:** 1. **Follow-up:** A strict follow-up plan should be established for Mr. Havers to monitor for any potential recurrence or new metastatic lesions. This should include regular clinical assessments, chest imaging, and other relevant investigations. **Medication upon Discharge:** **Medication ** **Dosage** **Frequency** --------------------------------- ------------ --------------- Empagliflozin (Jardiance) 10 mg 1-0-0-0 Metformin (Glucophage) 1000 mg 1-0-1-0 Atorvastatin (Lipitor) 20 mg 0-0-1-0 Metoprolol Tartrate (Lopressor) 50 mg 0.5-0-0.5-0 Aspirin 100 mg 1-0-0-0 Pantoprazole (Protonix) 20 mg 1-0-0-0 **Lab results upon Discharge: ** **Parameter** **Results** **Reference Range** --------------------- ------------- --------------------- Sodium 135 mEq/L 136-145 mEq/L Potassium 4.4 mEq/L 3.5-4.5 mEq/L Creatinine 0.82 mg/dL 0.70-1.20 mg/dL Estimated GFR \- \- Blood Urea Nitrogen 39 mg/dL 17-48 mg/dL C-Reactive Protein 2.5 mg/dL \< 5.0 mg/dL Hemoglobin 16.6 g/dL 13.5-17.0 g/dL Hematocrit 49.4 % 39.5-50.5 % RBC 5.1 M/µL 4.3-5.8 M/µL WBC 10.04 K/µL 3.90-10.50 K/µL Platelets 166 K/µL 150-370 K/µL MCV 95.2 fL 80.0-99.0 fL MCH 32.6 pg 27.0-33.5 pg MCHC 33.2 g/dL 31.5-36.0 g/dL MPV 11.4 fL 7.0-12.0 fL RDW-CV 12.5 % 11.5-15.0 % Prothrombin Time 122 % 78-123 % INR 0.99 0.90-1.25 aPTT 30.1 sec 25.0-38.0 sec
04/23/2017
What percent of Ulta Beauty's total spend on stock repurchases for FY 2023 occurred in Q4 of FY2023?
Evidence 0: Share Repurchase Program During the fourth quarter of fiscal 2022, the Company repurchased 722,457 shares of its common stock at a cost of $328.1 million. During fiscal 2022, the Company repurchased 2.2 million shares of its common stock at a cost of $900.0 million. As of January 28, 2023, $1.1 billion remained available under the $2.0 billion share repurchase program announced in March 2022.
36%. The answer here assumes FY2023 refers to the 12 months ended on January 28, 2023 (although the company refers to this period as its fiscal 2022).
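A quick arithmetic check of the 36% answer, sketched below from the two figures quoted in Evidence 0 ($328.1 million repurchased in the fourth quarter against $900.0 million for the full fiscal year ended January 28, 2023); the variable names are illustrative only.

```python
# Share of full-year buybacks that fell in Q4, using only the amounts in Evidence 0.
q4_repurchases_musd = 328.1   # $ millions repurchased in the fourth quarter
fy_repurchases_musd = 900.0   # $ millions repurchased over the full fiscal year

q4_share = q4_repurchases_musd / fy_repurchases_musd
print(f"Q4 share of full-year repurchases: {q4_share:.1%}")  # ~36.5%, rounding to ~36%
```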
At what moment in the story did the characters seem to have the most hope? A. When Captain Llud was looking at old photographs of his crewmates and reflecting on his long journey with people he cares for. B. When the group started to return to Earth and things looked like smooth sailing. C. When the group found a potentially human-friendly planet to inhabit. D. When the group landed on Earth and walked around on grass for the first time in 10 years.
THE GIANTS RETURN By ROBERT ABERNATHY Earth set itself grimly to meet them with corrosive fire, determined to blast them back to the stars. But they erred in thinking the Old Ones were too big to be clever. [Transcriber's Note: This etext was produced from Planet Stories Fall 1949. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] In the last hours the star ahead had grown brighter by many magnitudes, and had changed its color from a dazzling blue through white to the normal yellow, of a GO sun. That was the Doppler effect as the star's radial velocity changed relative to the Quest III , as for forty hours the ship had decelerated. They had seen many such stars come near out of the galaxy's glittering backdrop, and had seen them dwindle, turn red and go out as the Quest III drove on its way once more, lashed by despair toward the speed of light, leaving behind the mockery of yet another solitary and lifeless luminary unaccompanied by worlds where men might dwell. They had grown sated with the sight of wonders—of multiple systems of giant stars, of nebulae that sprawled in empty flame across light years. But now unwonted excitement possessed the hundred-odd members of the Quest III's crew. It was a subdued excitement; men and women, they came and stood quietly gazing into the big vision screens that showed the oncoming star, and there were wide-eyed children who had been born in the ship and had never seen a planet. The grownups talked in low voices, in tones of mingled eagerness and apprehension, of what might lie at the long journey's end. For the Quest III was coming home; the sun ahead was the Sun, whose rays had warmed their lives' beginning. Knof Llud, the Quest III's captain, came slowly down the narrow stair from the observatory, into the big rotunda that was now the main recreation room, where most of the people gathered. The great chamber, a full cross-section of the vessel, had been at first a fuel hold. At the voyage's beginning eighty per cent of the fifteen-hundred-foot cylinder had been engines and fuel; but as the immense stores were spent and the holds became radioactively safe, the crew had spread out from its original cramped quarters. Now the interstellar ship was little more than a hollow shell. Eyes lifted from the vision screens to interrogate Knof Llud; he met them with an impassive countenance, and announced quietly, "We've sighted Earth." A feverish buzz arose; the captain gestured for silence and went on, "It is still only a featureless disk to the telescope. Zost Relyul has identified it—no more." But this time the clamor was not to be settled. People pressed round the screens, peering into them as if with the naked eye they could pick out the atom of reflected light that was Earth, home. They wrung each other's hands, kissed, shouted, wept. For the present their fears were forgotten and exaltation prevailed. Knof Llud smiled wryly. The rest of the little speech he had been about to make didn't matter anyway, and it might have spoiled this moment. He turned to go, and was halted by the sight of his wife, standing at his elbow. His wry smile took on warmth; he asked, "How do you feel, Lesra?" She drew an uncertain breath and released it in a faint sigh. "I don't know. It's good that Earth's still there." She was thinking, he judged shrewdly, of Knof Jr. and Delza, who save from pictures could not remember sunlit skies or grassy fields or woods in summer.... 
He said, with a touch of tolerant amusement, "What did you think might have happened to Earth? After all, it's only been nine hundred years." "That's just it," said Lesra shakily. "Nine hundred years have gone by— there —and nothing will be the same. It won't be the same world we left, the world we knew and fitted in...." The captain put an arm round her with comforting pressure. "Don't worry. Things may have changed—but we'll manage." But his face had hardened against registering the gnawing of that same doubtful fear within him. He let his arm fall. "I'd better get up to the bridge. There's a new course to be set now—for Earth." He left her and began to climb the stairway again. Someone switched off the lights, and a charmed whisper ran through the big room as the people saw each other's faces by the pale golden light of Earth's own Sun, mirrored and multiplied by the screens. In that light Lesra's eyes gleamed with unshed tears. Captain Llud found Navigator Gwar Den looking as smug as the cat that ate the canary. Gwar Den was finding that the actual observed positions of the planets thus far located agreed quite closely with his extrapolations from long unused charts of the Solar System. He had already set up on the calculator a course that would carry them to Earth. Llud nodded curt approval, remarking, "Probably we'll be intercepted before we get that far." Den was jolted out of his happy abstraction. "Uh, Captain," he said hesitantly. "What kind of a reception do you suppose we'll get?" Llud shook his head slowly. "Who knows? We don't know whether any of the other Quests returned successful, or if they returned at all. And we don't know what changes have taken place on Earth. It's possible—not likely, though—that something has happened to break civilization's continuity to the point where our expedition has been forgotten altogether." He turned away grim-lipped and left the bridge. From his private office-cabin, he sent a message to Chief Astronomer Zost Relyul to notify him as soon as Earth's surface features became clear; then he sat idle, alone with his thoughts. The ship's automatic mechanisms had scant need of tending; Knof Llud found himself wishing that he could find some back-breaking task for everyone on board, himself included, to fill up the hours that remained. There was an extensive and well-chosen film library in the cabin, but he couldn't persuade himself to kill time that way. He could go down and watch the screens, or to the family apartment where he might find Lesra and the children—but somehow he didn't want to do that either. He felt empty, drained—like his ship. As the Quest III's fuel stores and the hope of success in man's mightiest venture had dwindled, so the strength had gone out of him. Now the last fuel compartment was almost empty and Captain Knof Llud felt tired and old. Perhaps, he thought, he was feeling the weight of his nine hundred Earth years—though physically he was only forty now, ten years older than when the voyage had begun. That was the foreshortening along the time axis of a space ship approaching the speed of light. Weeks and months had passed for the Quest III in interstellar flight while years and decades had raced by on the home world. Bemusedly Llud got to his feet and stood surveying a cabinet with built-in voice recorder and pigeonholes for records. There were about three dozen film spools there—his personal memoirs of the great expedition, a segment of his life and of history. 
He might add that to the ship's official log and its collections of scientific data, as a report to whatever powers might be on Earth now—if such powers were still interested. Llud selected a spool from among the earliest. It was one he had made shortly after leaving Procyon, end of the first leg of the trip. He slid it onto the reproducer. His own voice came from the speaker, fresher, more vibrant and confident than he knew it was now. "One light-day out from Procyon, the thirty-third day by ship's time since leaving Earth. "Our visit to Procyon drew a blank. There is only one huge planet, twice the size of Jupiter, and like Jupiter utterly unfit to support a colony. "Our hopes were dashed—and I think all of us, even remembering the Centaurus Expedition's failure, hoped more than we cared to admit. If Procyon had possessed a habitable planet, we could have returned after an absence of not much over twenty years Earth time. "It is cheering to note that the crew seems only more resolute. We go on to Capella; its spectrum, so like our own Sun's, beckons. If success comes there, a century will have passed before we can return to Earth; friends, relatives, all the generation that launched the Quest ships will be long since dead. Nevertheless we go on. Our generation's dream, humanity's dream, lives in us and in the ship forever...." Presently Knof Llud switched off that younger voice of his and leaned back, an ironic smile touching his lips. That fervent idealism seemed remote and foreign to him now. The fanfares of departure must still have been ringing in his ears. He rose, slipped the record back in its niche and picked out another, later, one. "One week since we passed close enough to Aldebaran to ascertain that that system, too, is devoid of planets. "We face the unpleasant realization that what was feared is probably true—that worlds such as the Sun's are a rare accident, and that we may complete our search without finding even one new Earth. "It makes no difference, of course; we cannot betray the plan.... This may be man's last chance of escaping his pitiful limitation to one world in all the Universe. Certainly the building of this ship and its two sisters, the immense expenditure of time and labor and energy stores that went into them, left Earth's economy drained and exhausted. Only once in a long age does mankind rise to such a selfless and transcendent effort—the effort of Egypt that built the pyramids, or the war efforts of the nations in the last great conflicts of the twentieth century. "Looked at historically, such super-human outbursts of energy are the result of a population's outgrowing its room and resources, and therefore signalize the beginning of the end. Population can be limited, but the price is a deadly frustration, because growth alone is life.... In our day the end of man's room for growth on the Earth was in sight—so we launched the Quests . Perhaps our effort will prove as futile as pyramid-building, less practical than orgies of slaughter to reduce pressure.... In any case, it would be impossible to transport very many people to other stars; but Earth could at least go into its decline with the knowledge that its race went onward and upward, expanding limitlessly into the Universe.... "Hopeless, unless we find planets!" Knof Llud shook his head sorrowfully and took off the spool. That was from the time when he had grown philosophical after the first disappointments. He frowned thoughtfully, choosing one more spool that was only four years old. 
The recorded voice sounded weary, yet alive with a strange longing.... "We are in the heart of Pleiades; a hundred stars show brilliant on the screens, each star encircled by a misty halo like lights glowing through fog, for we are traversing a vast diffuse nebula. "According to plan, the Quest III has reached its furthest point from Earth. Now we turn back along a curve that will take us past many more stars and stellar systems—but hope is small that any of those will prove a home for man, as have none of the thousands of stars examined already. "But what are a few thousand stars in a galaxy of billions? We have only, as it were, visited a handful of the outlying villages of the Universe, while the lights of its great cities still blaze far ahead along the Milky Way. "On flimsy excuses I have had Zost Relyul make observations of the globular cluster Omega Centauri. There are a hundred thousand stars there in a volume of space where one finds a few dozen in the Sun's neighborhood; there if anywhere must circle the planets we seek! But Omega Centauri is twenty thousand light years away.... "Even so—by expending its remaining fuel freely, the Quest III could achieve a velocity that would take us there without dying of senility of aging too greatly. It would be a one-way journey—even if enough fuel remained, there would be little point in returning to Earth after more than forty thousand years. By then our civilization certainly, and perhaps the human race itself, would have perished from memory. "That was why the planners limited our voyage, and those of the other Quests , to less than a thousand years Earth time. Even now, according to the sociodynamic predictions made then, our civilization—if the other expeditions failed also—will have reached a dangerously unstable phase, and before we can get back it may have collapsed completely from overpopulation. "Why go back, then with the news of our failure? Why not forget about Earth and go on to Omega Centauri? What use is quixotic loyalty to a decree five thousand years old, whose makers are dead and which may be forgotten back there? "Would the crew be willing? I don't know—some of them still show signs of homesickness, though they know with their minds that everything that was once 'home' has probably been swept away.... "It doesn't matter. Today I gave orders to swing the ship." Savagely Knof Llud stabbed the button that shut off the speaker. Then he sat for a time with head resting in his hands, staring into nothing. The memory of that fierce impulse to go on still had power to shake him. A couple of lines of poetry came into his head, as he read them once in translation from the ancient English.... ... for my purpose holds To sail beyond the sunset, and the baths Of all the western stars, until I die. Llud sighed. He still couldn't say just why he had given the order to turn back. The stars had claimed his heart—but he was still a part of Earth, and not even nine hundred years of space and time had been able to alter that. He wondered if there would still be a quiet stream and a green shady place beside it where a death-weary man, relieved at last of responsibility, could rest and dream no more.... Those things went on, if men didn't change them. And a pine forest where he and young Knof could go camping, and lie on their backs at night and gaze at the glittering constellations, far away, out of reach.... He wasn't sure he would want to do that, though. 
Suddenly a faint cushioned jar went through the great ship; it seemed to falter one moment in flight. The captain was on his feet instantly, but then his movements became unhurried. Whatever it had been was past, and he had a good idea what it had been—a meteoroid, nothing unusual in the vicinity of the Sun, though in interstellar space and around planetless stars such collisions were rare to the vanishing point. No harm could have been done. The Quest III's collision armor was nonmaterial and for practical purposes invulnerable. Just as he took his finger off the button that opened the door, the intercommunication phone shrilled imperatively. Knof Llud wheeled, frowning—surely a meteoroid impact wasn't that serious. Coincidence, maybe—it might be Zost Relyul calling as instructed. He reached the phone at the moment when another, heavier jolt shook the vessel. Llud snatched up the receiver with the speed of a scalded cat. "Captain?" It was Gwar Den's voice, stammering a little. "Captain, we're being attacked!" "Sound the alarm. Emergency stations." He had said it automatically, then felt a curious detached relief at the knowledge that after all these years he could still respond quickly and smoothly to a crisis. There was a moment's silence, and he heard the alarm start—three short buzzes and repeat, ringing through all the great length of the interstellar ship. Knowing that Gwar Den was still there, he said, "Now—attacked by what?" "Ships," said Gwar Den helplessly. "Five of them so far. No, there's a sixth now." Repeated blows quivered the Quest III's framework. The navigator said, obviously striving for calm, "They're light craft, not fifty feet long, but they move fast. The detectors hardly had time to show them before they opened up. Can't get a telescope beam on them long enough to tell much." "If they're that small," said Knof Llud deliberately, "they can't carry anything heavy enough to hurt us. Hold to course. I'll be right up." In the open doorway he almost fell over his son. Young Knof's eyes were big; he had heard his father's words. "Something's happened," he judged with deadly twelve-year-old seriousness and, without wasting time on questions, "Can I go with you, huh, Dad?" Llud hesitated, said, "All right. Come along and keep out of the way." He headed for the bridge with strides that the boy could not match. There were people running in the corridors, heading for their posts. Their faces were set, scared, uncomprehending. The Quest III shuddered, again and again, under blows that must have had millions of horsepower behind them; but it plunged on toward Earth, its mighty engines still steadily braking its interstellar velocity. To a man, the ship's responsible officers were already on the bridge, most of them breathless. To a man they looked appeal at Captain Knof Llud. "Well?" he snapped. "What are they doing?" Gwar Den spoke. "There are thirteen of them out there now, sir, and they're all banging away at us." The captain stared into the black star-strewn depths of a vision screen where occasional blue points of light winked ominously, never twice from the same position. Knof Jr. flattened himself against the metal wall and watched silently. His young face was less anxious than his elders'; he had confidence in his father. "If they had anything heavier," surmised the captain, "they'd have unlimbered it by now. They're out to get us. But at this rate, they can't touch us as long as our power lasts—or until they bring up some bigger stuff." 
The mild shocks went on—whether from projectiles or energy-charges, would be hard to find out and it didn't matter; whatever was hitting the Quest III's shell was doing it at velocities where the distinction between matter and radiation practically ceases to exist. But that shell was tough. It was an extension of the gravitic drive field which transmitted the engines' power equally to every atom of the ship; forces impinging on the outside of the field were similarly transmitted and rendered harmless. The effect was as if the vessel and all space inside its field were a single perfectly elastic body. A meteoroid, for example, on striking it rebounded—usually vaporized by the impact—and the ship, in obedience to the law of equal and opposite forces, rebounded too, but since its mass was so much greater, its deflection was negligible. The people in the Quest III would have felt nothing at all of the vicious onslaught being hurled against them, save that their inertialess drive, at its normal thrust of two hundred gravities, was intentionally operated at one half of one per cent efficiency to provide the illusion of Earthly gravitation. One of the officers said shakily, "It's as if they've been lying in wait for us. But why on Earth—" "That," said the captain grimly, "is what we have to find out. Why—on Earth. At least, I suspect the answer's there." The Quest III bored steadily on through space, decelerating. Even if one were no fatalist, there seemed no reason to stop decelerating or change course. There was nowhere else to go and too little fuel left if there had been; come what might, this was journey's end—perhaps in a more violent and final way than had been anticipated. All around wheeled the pigmy enemies, circling, maneuvering, and attacking, always attacking, with the senseless fury of maddened hornets. The interstellar ship bore no offensive weapons—but suddenly on one of the vision screens a speck of light flared into nova-brilliance, dazzling the watchers for the brief moment in which its very atoms were torn apart. Knof Jr. whooped ecstatically and then subsided warily, but no one was paying attention to him. The men on the Quest III's bridge looked questions at each other, as the thought of help from outside flashed into many minds at once. But Captain Llud said soberly, "It must have caught one of their own shots, reflected. Maybe its own, if it scored too direct a hit." He studied the data so far gathered. A few blurred pictures had been got, which showed cylindrical space ships much like the Quest III , except that they were rocket-propelled and of far lesser size. Their size was hard to ascertain, because you needed to know their distance and speed—but detector-beam echoes gave the distance, and likewise, by the Doppler method, the velocity of directly receding or approaching ships. It was apparent that the enemy vessels were even smaller than Gwar Den had at first supposed—not large enough to hold even one man. Tiny, deadly hornets with a colossal sting. "Robot craft, no doubt," said Knof Llud, but a chill ran down his spine as it occurred to him that perhaps the attackers weren't of human origin. They had seen no recognizable life in the part of the galaxy they had explored, but one of the other Quests might have encountered and been traced home by some unhuman race that was greedy and able to conquer. 
It became evident, too, that the bombardment was being kept up by a constant arrival of fresh attackers, while others raced away into space, presumably returning to base to replenish their ammunition. That argued a planned and prepared interception with virulent hatred behind it. Elsuz Llug, the gravitic engineer, calculated dismally, "At the rate we're having to shed energy, the fuel will be gone in six or eight hours." "We'll have reached Earth before then," Gwar Den said hopefully. "If they don't bring out the heavy artillery first." "We're under the psychological disadvantage," said the captain, "of not knowing why we're being attacked." Knof Jr. burst out, spluttering slightly with the violence of a thought too important to suppress, "But we're under a ps-psychological advantage, too!" His father raised an eyebrow. "What's that? I don't seem to have noticed it." "They're mad and we aren't, yet," said the boy. Then, seeing that he hadn't made himself clear, "In a fight, if a guy gets mad he starts swinging wild and then you nail him." Smiles splintered the ice of tension. Captain Llud said, "Maybe you've got something there. They seem to be mad, all right. But we're not in a position to throw any punches." He turned back to the others. "As I was going to say—I think we'd better try to parley with the enemy. At least we may find out who he is and why he's determined to smash us." And now instead of tight-beam detectors the ship was broadcasting on an audio carrier wave that shifted through a wide range of frequencies, repeating on each the same brief recorded message: "Who are you? What do you want? We are the interstellar expedition Quest III ...." And so on, identifying themselves and protesting that they were unarmed and peaceful, that there must be some mistake, and querying again, "Who are you ?" There was no answer. The ship drove on, its fuel trickling away under multiplied demands. Those outside were squandering vastly greater amounts of energy in the effort to batter down its defenses, but converting that energy into harmless gravitic impulses was costing the Quest III too. Once more Knof Llud had the insidious sense of his own nerves and muscles and will weakening along with the power-sinews of his ship. Zost Relyul approached him apologetically. "If you have time, Captain—I've got some data on Earth now." Eagerly Llud took the sheaf of photographs made with the telescope. But they told him nothing; only the continental outlines were clear, and those were as they had been nine hundred years ago.... He looked up inquiringly at Zost Relyul. "There are some strange features," said the astronomer carefully. "First of all—there are no lights on the night side. And on the daylight face, our highest magnification should already reveal traces of cities, canals, and the like—but it does not. "The prevailing color of the land masses, you see, is the normal green vegetation. But the diffraction spectrum is queer. It indicates reflecting surfaces less than one-tenth millimeter wide—so the vegetation there can't be trees or grass, but must be more like a fine moss or even a coarse mold." "Is that all?" demanded Llud. "Isn't it enough?" said Zost Relyul blankly. "Well—we tried photography by invisible light, of course. The infra-red shows nothing and likewise the ultraviolet up to the point where the atmosphere is opaque to it." The captain sighed wearily. "Good work," he said. 
"Keep it up; perhaps you can answer some of these riddles before—" " We know who you are ," interrupted a harshly crackling voice with a strange accent, " and pleading will do you no good. " Knof Llud whirled to the radio apparatus, his weariness dropping from him once more. He snapped, "But who are you?" and the words blended absurdly with the same words in his own voice on the still repeating tape. He snapped off the record; as he did so the speaker, still crackling with space static, said, "It may interest you to know that you are the last. The two other interstellar expeditions that went out have already returned and been destroyed, as you will soon be—the sooner, if you continue toward Earth." Knof Llud's mind was clicking again. The voice—which must be coming from Earth, relayed by one of the midget ships—was not very smart; it had already involuntarily told him a couple of things—that it was not as sure of itself as it sounded he deduced from the fact it had deigned to speak at all, and from its last remark he gathered that the Quest III's ponderous and unswerving progress toward Earth had somehow frightened it. So it was trying to frighten them. He shoved those facts back for future use. Just now he had to know something, so vitally that he asked it as a bald question, " Are you human? " The voice chuckled sourly. "We are human," it answered, "but you are not." The captain was momentarily silent, groping for an adequate reply. Behind him somebody made a choked noise, the only sound in the stunned hush, and the ship jarred slightly as a thunderbolt slammed vengefully into its field. "Suppose we settle this argument about humanity," said Knof Llud woodenly. He named a vision frequency. "Very well." The tone was like a shrug. The voice went on in its language that was quite intelligible, but alien-sounding with the changes that nine hundred years had wrought. "Perhaps, if you realize your position, you will follow the intelligent example of the Quest I's commander." Knof Llud stiffened. The Quest I , launched toward Arcturus and the star cloud called Berenice's Hair, had been after the Quest III the most hopeful of the expeditions—and its captain had been a good friend of Llud's, nine hundred years ago.... He growled, "What happened to him?" "He fought off our interceptors, which are around you now, for some time," said the voice lightly. "When he saw that it was hopeless, he preferred suicide to defeat, and took his ship into the Sun." A short pause. "The vision connection is ready." Knof Llud switched on the screen at the named wavelength, and a picture formed there. The face and figure that appeared were ugly, but undeniably a man's. His features and his light-brown skin showed the same racial characteristics possessed by those aboard the Quest III , but he had an elusive look of deformity. Most obviously, his head seemed too big for his body, and his eyes in turn too big for his head. He grinned nastily at Knof Llud. "Have you any other last wishes?" "Yes," said Llud with icy control. "You haven't answered one question. Why do you want to kill us? You can see we're as human as you are." The big-headed man eyed him with a speculative look in his great eyes, behind which the captain glimpsed the flickering raw fire of a poisonous hatred. "It is enough for you to know that you must die."
B. When the group started to return to Earth and things looked like smooth sailing.
Why is Wyandotte didactic? A. He is likely being monitored by the Terrans and cannot speak freely. B. He thinks Craig is an uneducated hick. C. He knows that gravity conditioning is horrible. He is trying to change Craig's mind about going to Terra. D. He thinks Craig will be a fish out of water in Terran society.
SEA LEGS By FRANK QUATTROCCHI Illustrated by EMSH [Transcriber's Note: This etext was produced from Galaxy Science Fiction November 1951. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Rootless and footloose, a man in space can't help but dream of coming home. But something nobody should do is bet on the validity of a homesick dream! Flight Officer Robert Craig surrendered the tube containing his service record tapes and stood waiting while the bored process clerk examined the seal. "Your clearance," said the clerk. Craig handed him a battered punch card and watched the man insert it in the reproducer. He felt anxiety as the much-handled card refused for a time to match the instrument's metal contact points. The line of men behind Craig fidgeted. "You got to get this punched by Territorial," said the clerk. "Take it back to your unit's clearance office." "Look again, Sergeant," Craig said, repressing his irritation. "It ain't notched." "The hell it isn't." The man examined the card with squinting care and nodded finally. "It's so damn notched," he complained. "You ought to take care of that card; can't get on without one." Craig hesitated before moving. "Next," said the clerk, "What you waiting for?" "Don't I take my 201 file?" "We send it on ahead. Go to Grav 1 desk." A murmur greeted the order. Craig experienced the thrill of knowing the envy of the others. Grav 1—that meant Terra. He crossed the long, dreary room, knowing the eyes of the other men were upon him. "Your service tapes," the next noncom said. "Where you going?" "Grav 1—Terra," fumbled Craig. "Los Angeles." "Los Angeles, eh? Where in Los Angeles?" "I—I—" Craig muttered, fumbling in his pockets. "No specific destination," supplied the man as he punched a key on a small instrument, "Air-lock ahead and to your right. Strip and follow the robot's orders. Any metal?" "Metal?" asked Craig. "You know, metal ." "Well, my identification key." "Here," commanded the clerk, extending a plastic envelope. Craig moved in the direction indicated. He fought the irrational fear that he had missed an important step in the complicated clerical process. He cursed the grudging attitude of the headquarters satellite personnel and felt the impotence of a spaceman who had long forgotten the bureaucracy of a rear area base. The knowledge that much of it was motivated by envy soothed him as he clumsily let himself into the lock. "Place your clothing in the receptacle provided and assume a stationary position on the raised podium in the center of the lock." Craig obeyed the robot voice and began reluctantly to remove his flight jacket. Its incredibly fine-grained leather would carry none of the strange, foreign associations for the base station clerk who would appropriate it. He would never know the beautiful, gentle beast that supplied this skin. "You are retarding the progress of others. Please respond more quickly to your orders." Craig quickly removed the last of his clothing. It was impossible to hate a robot, but one could certainly hate those who set it into operation. "You will find a red button at your feet. Lower your head and depress that button." Stepping on the button with his bare foot produced an instant of brilliant blue illumination. A small scratch on his arm stung briefly and he was somewhat blinded by the flash even through his eyelids, but that was all there was to the sterilizing process. 
"Your clothing and effects will be in the dressing room immediately beyond the locked door." He found his clothing cleanly and neatly hung on plastic hangers just inside the door to the dressing room. The few personal items he carried in his pockets were still there. The Schtann flight jacket was actually there, looking like new, its space-blue unfaded and as wonderfully pliant as before. "Insert your right arm into the instrument on the central table," commanded the same voice he had heard before. "Turn your arm until the scratch is in contact with the metal plate. There will be a slight pain, but it is necessary to treat the small injury you have been disregarding." Craig obeyed and clenched his teeth against a sharp stinging. His respect for the robot-controlled equipment of bases had risen. When he withdrew his arm, the scratch was neatly coated with a layer of flesh-colored plastic material. He dressed quickly and was on the verge of asking the robot for instructions, when a man appeared in the open doorway. "I am Captain Wyandotte," said the man in a pleasant voice. "Well, what's next?" asked Craig somewhat more belligerently than he had intended. The man smiled. "Your reaction is quite natural. You are somewhat aggressive after Clerical, eh?" "I'm a little anxious to get home, I suppose," said Craig defensively. "By 'home' you mean Terra. But you've never been there, have you?" "No, but my father—" "Your parents left Terra during the Second Colonization of Cassiopeia II, didn't they?" "Yes," Craig said. He was uncomfortable; Wyandotte seemed to know all about him. "We might say you've been away quite a while, eh?" "I was entered as a spaceman when I was 16," Craig said. "I've never been down for any period as yet." "You mean you haven't been in a gravity system?" "Oh, I've landed a few times, even walked around for a while...." "With the help of paraoxylnebutal," supplied the captain. "Well, sure." "Mr. Craig, I suppose you've guessed that the next step in our little torture system here is psych." "So I gathered." The captain laughed reassuringly. "No, don't put up your guard again. The worst is over. Short of Gravitational conditioning, there is nothing to stop you from going to Terra." "Sorry, I guess I'm a little touchy. This is my first time...." "Quite natural. But it being your first time—in quite a number of ways, I might add—it will be necessary for you to undergo some conditioning." "Conditioning?" asked Craig. "Yes. You have spent eleven years in space. Your body is conditioned to a normal state of free fall, or at best to a state of acceleration." "Yeah, I know. Once on Gerymeade...." "You were ill, couldn't keep your balance, felt dizzy. That is why all spacemen carry PON, paraoxylnebutal, with them. It helps suppress certain physiological reactions to an entirely new set of conditions. Channels of the ear, for example. They play an important part in our awareness of balance. They operate on a simple gravity principle. Without gravity they act up for a time, then gradually lose function. Returning to gravity is rather frightening at first." "I know all about this, Captain." "You've undoubtedly read popularizations in tapezines. But you have experienced it briefly." "I expect to have some trouble at first." Craig was disturbed by the wordy psychologist. What was the man actually saying? "Do you know what sailors of ancient times meant by 'sea legs?'" asked Wyandotte. "Men on a rolling ocean acclimated themselves to a rolling horizontal. 
They had trouble when they went ashore and the horizontal didn't roll any more. "It meant more than that. There were excellent psychological reasons for the old stereotype, the 'drunken sailor.' A port city was a frightening thing to an old sailor—but let's begin our little job at the beginning. I'll turn you over to psychometry for the usual tests and pick you up tomorrow morning at, say, 0900." During the days that followed, the psychologist seemed to Craig to become progressively more didactic. He would deliver long speeches about the "freedom of open space." He spoke repetitiously of the "growing complexity of Terran society." And yet the man could not be pinned down to any specific condition the spaceman would find intolerable. Craig began to hate the delay that kept him from Terra. Through the ports of the headquarters base satellite, he scanned the constellations for the scores of worlds he had visited during his eleven years in space. They were incredibly varied, even those that supported life. He had weathered difficult landings on worlds with rip-tide gravities, had felt the pull of the incredible star-tides imparted by twin and even triple star systems. He had been on Einstein IV, the planet of eight moons, and had felt the pulse of all eight of the satellites at once that no PON could completely nullify. But even if he could accept the psychologist's authority for the cumulative effect of a gravity system, he could not understand the unspoken warning he felt underlying all that the man said. "Of course it has changed," Craig was protesting. "Anyway, I never really knew very much about Terra. So what? I know it won't be as it was in tapezines either." "Yet you are so completely sure you will want to live out your life there, that you are willing to give up space service for it." "We've gone through this time and time again," Craig said wearily. "I gave you my reasons for quitting space. We analyzed them. You agreed that you could not decide that for me and that my decision is logical. You tell me spacemen don't settle down on Terra. Yet you won't—or can't—tell me why. I've got a damned good job there—" "You may find that 'damned good jobs' become boring." "So I'll transfer. I don't know what you're trying to get at, Captain, but you're not talking me out of going back. If the service needs men so badly, let them get somebody else. I've put in my time." "Do you really think that's my reason?" "Sure. What else can it be?" "Mr. Craig," the psychologist said slowly, "you have my authorization for you to return to Terra as a private citizen of that planet. You will be given a very liberal supply of PON—which you will definitely need. Good luck. You'll need that too." On the eighth day, two attendants, who showed the effects of massive doses of PON to protect themselves from the centrifugal force, had to carry a man out of the tank. Many others asked to be removed, begged to be allowed to withdraw their resignations. "The twelfth day is the worst," a grizzled spaceman told Craig. "That's when the best of 'em want out." Craig clenched the iron rung of his bed and struggled to bring the old man's face into focus. "How ... how do they know when you ought ... to come out?" he asked between waves of nausea. "Blood pressure. They get you just before you go into shock." "How can they tell?" Craig fought down his growing panic. "I can't." "That strap around your belly. You mean you ain't noticed it?" "Haven't noticed much of anything." "Well, it's keyed to give them some kind of signal." 
The old man lapsed into silence. Craig wished him to continue. He desperately wanted something to distract his mind from the ghastly conditioning process. Slowly at first, the lines formed by seams in the metal ceiling began to bend. Here it came again! "Old man!" shouted Craig. "Yeah, son. They've dropped it down a notch." "Dropped ... it ... down?" "Maybe that ain't scientific, but it's the way I always think of it." "Can't they ... drop it down continuously?" "They tried that a few times—once when I was aboard. You wouldn't like it, kid. You wouldn't like it at all." "How ... many times ... do they drop it?" "Four times during the day, three at night. Twenty days." A nightmare of visual sensations ebbed into Craig's mind. He was vaguely aware of the moans of other men in the vaultlike room. Wave upon wave of nausea swept him as he watched the seam lines bend and warp fantastically. He snapped his eyelids shut, only to begin feeling the nightmarish bodily sensations once more. He felt the cot slowly rise longitudinally, felt himself upside down, then the snap of turning right side up once more—and he knew that neither he nor the cot had moved so much as an inch. Craig heard the voices around him, muffled, as though talking through wadding. "... got it bad." "We better take him out." "... pretty bad." "He'll go into shock." "... never make it the twelfth." "We better yank him." "I'm ... all right," Craig mumbled at the voices. He struggled with the bonds of his cot. With terrible effort he forced his eyes open. Two white-clad figures, ridiculously out of proportion, hovered wraithlike over him. Four elongated eyes peered at him. Attendants coming for to take me home.... "Touch me and I'll kick your teeth in!" he yelled. "I'm going to Terra. Wish you were going to Terra?" Then it was better. Oddly, he passed the twelfth day easily. By the fourteenth day, Craig knew he could stand Grav 1. The whine of the centrifuge's motors had diminished to a low hum. Either that or they had begun to produce ultra-sonic waves. Craig was not sure. Most of the men had passed through the torments of gravitational conditioning. The huge headquarters base centrifuge aboard the man-made satellite had gradually caused their bodies to respond once more to a single source of pull. They were now ready to become inhabitants of planets again, instead of free-falling ships. On the eighteenth day, automatic machinery freed them from their imprisoning cots. Clumsily and awkwardly at first, the men began to walk, to hold their heads and arms in proper attitudes. They laughed and joked about it and kidded those who were slow at adjusting. Then they again began taking paraoxylnebutal in preparation for the free-fall flight to Terra. Only one of the score of men in the centrifuge tank remained voluntarily in his cot. "Space article violator," the old man informed Craig. "Psycho, I think. Went amuck with some extraterritorials. Killed a dozen." "What will they do, exile him?" "Not to Chociante, if that's what you mean. They just jerked his space card and gave him a one-way ticket to Terra." "For twelve murders?" asked Craig incredulously. "That's enough, son." The old man eyed Craig for an instant before looking away. "Pick something to talk about. What do you figure on doing when you get to Terra, for instance?" "I'm going into Import. My father was in it for twenty years." "Sure," said the old spaceman, watching a group of young crewmen engaged in an animated conversation. "It's a good job. There's a future to it." "Yeah." 
Why did he have to explain anything at all to the old space tramp? "Once I get set up, I'll probably try to open my own business." "And spend your weekends on Luna." Craig half rose from his cot, jarred into anger. But the old spaceman turned, smiling wryly. "Don't get hot, kid. I guess I spent too long in Zone V." He paused to examine his wrinkled hands. They were indelibly marked with lever callouses. "You get to thinking anyone who stays closer'n eighty light years from Terra is a land-lubber." Craig relaxed, realizing he had acted childishly. "Used to think the same. Then I took the exam and got this job." "Whereabouts?" "Los Angeles." The old man looked up at Craig. "You don't know much about Terra, do you, son?" "Not much." "Yeah. Well, I hope you ain't disappointed." "My father was born there, but I never saw it. Never hit the Solar System, matter of fact. Never saw much of anything close up. I stood it a long time, old man, this hitting atmospheres all over the Universe." But the spaceman seemed to have lost interest. He was unpacking some personal belongings from a kit. "What are you doing in Grav 1?" Craig asked. The old man's face clouded for an instant. "In the old days, they used to say us old-timers acted like clocks. They used to say we just ran down. Now they got some fancy psychology name for it." Craig regretted his question. He would have muttered some word of apology, but the old man continued. "Maybe you've read some of the old sea stories, or more'n likely had 'em read to you. Sailors could go to sea until they just sort of dried up. The sea tanned their skins and stiffened their bones, but it never stiffened their hearts. When they got old, it just pulled them in. "But space is different. Space is raw and new. It tugs at your guts. It sends the blood rushing through your veins. It's like loving. You don't become a part of space the way you do the old sea, though. It leaves you strictly alone. Except that it sucks you dry, takes all the soup out of you, leaves you brittle and old—old as a dehydrated piece of split leather. "Then one day it shoots a spurt of blood around in one of your old veins. Something gives. Space is through with you then. And if you can stand this whirligig conditioning, you're through with space." " You can't figure it. Some of 'em urp all over and turn six shades of green. " " You got to watch the ones that don't. " " Yeah, you got to watch the ones that don't. Especially the old ones. " " He's old. You think it was his heart? " " Who knows? " " They'll dump him, won't they? " " After a tracer is sent through. But it won't do any good. " " He probably outlived everybody that ever knew him. " " Wouldn't be surprised. Here, grab his leg. " Robert Craig folded the flight jacket tightly and stuffed it into the cylindrical carton. A sleeve unwound just as he did so, making it difficult to fit into the place he had made for it. Exasperated, he refolded it and jammed it in place. Smaller rolls of underclothing were then fitted in. When he was satisfied with the layer, he tossed in a small handful of crystals and began to fill the next layer. After the carton was completely filled, he ignited the sealing strip and watched as the plastic melted into a single, seamless whole. It was ready for irradiation. Probably in another ten years his son-to-be would put it on and play spaceman. But Craig swore he'd make sure that the kid knew what a stinking life it was. At 1300 hours, the ferry bumped heavily alongside the starboard lock. 
It was the signal for relief in the passengers' quarters; many were beginning to feel a reaction to the short free-fall flight from the headquarters satellite. The audio called out: "Flight Officer Robert Craig. Flight Officer Robert Craig. Report to Orderly 12. Report to Orderly 12 through the aft door." With pangs of anxiety he could not completely suppress, Craig obeyed. Orderly 12 handed him a message container. "Who's it from? Somebody on Terra?" "From a private spaceman named Morgan Brockman." " Brockman? " "He was with you in the grav tank." "The old man!" The message container produced a battered punch card. Craig straightened it and was about to reach into his pocket for a hand transcriber. But then he noticed the card bore only a few irregular punches and was covered with rough hand printing. Son, when the flunkies get around to giving you this, they'll have shot me out the tube. How do I know? Same way you know when your turbos are going to throw a blade. It's good this way. There's something you can do for me if you want to. Way back, some fifty years ago, there was a woman. She was my wife. It's a long story I won't bother you with. Anyway, I left her. Wanted to take her along with me, but she wouldn't go. Earth was a lot different then than it is now. They don't have to tell me; I know. I saw it coming and so did Ethel. We talked about it and I knew I had to go. She wouldn't or couldn't go. Wanted me to stay, but I couldn't. I tried to send her some units once in a while. Don't know if she ever got them. Sometimes I forgot to send them at all. You know, you're way out across the Galaxy, while she's home. Go see her if you can, son. Will you? Make sure she gets the unit transfer I made out. It isn't much out of seventy years of living, but she may need it. And maybe you can tell her a little bit about what it means to be out there. Tell her it's open and free and when you got hold of those levers and you're trying for an orbit on something big and new and green.... Hell, you remember. You know how to tell her. Her name is Ethel Brockman. I know she'll still use my name. Her address is or was East 71, North 101, Number 4. You can trace her easy if she moved. Women don't generally shove off and not leave a forwarding address. Not Ethel, at least. Craig put the battered card in his pocket and walked back through the door to the passenger room. How did you explain to an old woman why her husband deserted her fifty years before? Some kind of story about one's duty to the Universe? No, the old man had not been in Intergalactic. He had been a tramp spaceman. Well, why had he left? Fifty years in space. Fifty years! Zone V had been beyond anybody's imagination that long ago. He must have been in on the first Cetusian flights and shot the early landings in Cetus II. God only knew how many times he had battled Zone 111b pirates.... Damn the old man! How did one explain? Craig descended the ramp from the huge jet and concentrated on his impressions. One day he would recall this moment, his first on the planet Terra. He tried to recall his first thrill at seeing Los Angeles, 1500 square miles of it, from the ship as it entered the atmosphere. He was about to step off the last step when a man appeared hurriedly. A rather plump man, he displayed a toothy smile on his puffy red face. "A moment, sir. Just a little greeting from the Terra. You understand, of course. Purely routine." Craig remained on the final step of the ramp, puzzled. The man turned to a companion at his right. 
"We can see that this gentleman has come from a long, long way off, can't we?" The other man did not look up. He was peering into what seemed to Craig to be a kind of camera. "We can allow the gentlemen to continue now, can't we? It wasn't that we believed for a minute, you understand ... purely routine." Both men were gone in an instant, leaving Craig completely bewildered. "You goin' to move on, buddy, or you want to go back?" Craig turned to face a line of his fellow passengers up the ramp behind him. "Who was that?" Craig asked. "Customs. Bet you never got such a smooth screening before, eh?" "You mean he screened me? What for?" "Hard to say," the other passenger said. "You'll get used to this. They get it over with quick." Craig made his way toward the spaceport administration building. His first physical contact with Terra had passed unnoticed. "Sir! Sir!" cried a voice behind him. He wheeled to see a man walking briskly toward him. "You dropped this, sir. Quite by accident, of course." Craig examined the small object the man had given him before rushing off toward an exit. It was an empty PON tube he had just discarded. He couldn't understand why the man had bothered until he realized that the plastaloid floor of the lobby displayed not the faintest scrap of paper nor trace of dirt. The Import personnel man was toying with a small chip of gleaming metal. He did not look directly at Craig for more than an instant at a time, and commented on Craig's description of his trip through the city only very briefly between questions. "It's a good deal bigger than I imagined," Craig was saying. "Haven't seen much of it, of course. Thought I'd check in here with you first." "Yes, naturally." "Thought you could give me some idea of conditions...." "Conditions?" "For instance, what part of the city I should live in. That is, what part is closest to where I'll work." "I see," said the man noncommittally. It seemed to Craig that he was about to add something. He did not, however, but instead rose from his chair and walked to the large window overlooking an enormous section of the city far below. He stared out the window for a time, leaving Craig seated uncomfortably in the silent room. There was a distracted quality about him, Craig thought. "You are the first man we have had from the Intergalactic Service," the personnel man said finally. "That so?" "Yes." He turned to face Craig briefly before continuing. "You must find it very strange here." "Well, I've never seen a city so big." "Yes, so big. And also...." He seemed to consider many words before completing the sentence. "And also different." "I haven't been here very long," said Craig. "Matter of fact, I haven't been anywhere very long. This is my first real experience with life on a planet. As an adult, anyway." The personnel man seated himself once more and pressed a button on a small instrument. A secretary entered the office from a door to Craig's left. "Miss Wendel, this is Mr. Craig. Mr. Craig, my secretary. Mr. Craig will enter Minerals and Metals, Zone V." They exchanged formal greetings. She was a moderately pretty girl of medium height and, to Craig, a pleasantly rounded figure. He would have attempted to catch her eye had she not immediately occupied herself with unfolding the legs of a small instrument she was carrying. "This is Mr. Craig's first landing on Terra, Miss Wendel," the personnel man continued. "Actually, we shall have to consider him in much the same way we would an extraterrestrial." 
The girl glanced at Craig, casting him a cool, impersonal smile. "He was formerly a flight officer in the Intergalactic Space Service." The statement was delivered in an almost exaggeratedly casual tone. The girl glanced at him once more, this time with a definite quizzical look in her brown eyes. "Three complete tours of duty, I believe." "Four," corrected Craig. "Four tours of three years each, minus a year's terminal leave." "I take it you have no identification card?" the man asked. "The one I held in the service. It's pretty comprehensive." The other turned to the secretary. "You'll see that he is assisted in filing his application, won't you? A provisional Code II. That will enable you to enter all Import offices freely, Mr. Craig." "Will he need a food and—clothing ration also?" asked the girl, without looking at Craig. "Yes." The man laughed. "You'll excuse us, Mr. Craig. We realize that you couldn't be expected to be familiar with Terra's fashions. In your present outfit you would certainly be typed as a ... well, you'd be made uncomfortable." Craig reddened in spite of himself. He had bought the suit on Ghandii. "A hick," he supplied. "I wouldn't go that far, but some people might." Craig noted the pleasant way the girl filled her trim, rather severe business suit. He amused himself by calculating stress patterns in its plain woven material as she assembled the forms for him. "Here, Mr. Craig. I believe these are complete." "They look pretty complicated." "Not at all. The questions are quite explicit." Craig looked them over quickly. "I guess so. Say, Miss Wendel, I was wondering—I don't know the city at all. Maybe you could go with me to have dinner. It must be almost dinnertime now. You could sort of check me out on some...." "I'm afraid that would be quite impossible. You couldn't gain admittance to any office you need to visit tonight. Therefore, it is impossible for me to be of any assistance to you." "Oh, come now, Miss Wendel. There are women aboard spaceships. I'm not a starved wolf." "Certainly you are not, Mr. Craig. But it is not possible for me...." "You said that already, but you can have dinner with me. Just company." "I'm afraid I don't understand." The Galactic hotel strove to preserve an archaic tone of hospitality. It advertised "a night's lodgings" and it possessed a bellboy. The bellboy actually carried Craig's plasticarton and large file of punch cards and forms to his room. Tired from the long, confusing day, Craig was not impressed. He vaguely wondered if the little drama of the hotel carried so far as a small fee to be paid the bellboy, and he hoped he would have the right size of Terran units in his wallet. Outside the door to the room, the bellboy stopped and turned to Craig. "For five I'll tell you where it is," he said in a subdued tone. "Tell me where what is?" "You know, the mike." "Mike?" "All right, mister, three units, then. I wasn't trying to hold you up." "You mean a microphone?" asked Craig, mechanically fishing for his wallet. "Sure, they don't put in screens here. Wanted to, but the boss convinced 'em there aren't any Freedomites ever stay here." "Where is the microphone?" Craig asked as he found a ten unit note. He was too puzzled to wonder what he was expected to do with the information. "It's in the bed illuminator. You can short it out with a razor blade. Or I'll do it for another two." "Never mind," Craig said wearily. He waited while the bellboy inserted a key into the door and opened it for him. 
"I can get you a sensatia-tape," whispered the boy when they had entered. He nudged Craig wickedly. "You know what they're like?" "Yeah," Craig said disgustedly. Traffic in the illicit mental-image tapes was known as far into space as lonely men had penetrated. Intergalactic considered them as great a menace to mental and moral stability as the hectopiates. Craig wearily got the man out of the room, took a PON pill, and eased himself into the bed. It had been a weird day and he had not liked it. There was no telling how long it would take him to shake his—sea legs, the psychologist had called it. One thing was sure: Terra aggressively went after its strangers.
D. He thinks Craig will be a fish out of water in Terran society.
What is the size of the datasets employed?
### Introduction Humans deploy structure-sensitive expectations to guide processing during natural language comprehension BIBREF0. While it has been shown that neural language models show similar structure-sensitivity in their predictions about upcoming material BIBREF1, BIBREF2, previous work has focused on dependencies that are conditioned by features attached to a single word, such as subject number BIBREF3, BIBREF4 or wh-question words BIBREF5. There has been no systematic investigation into models' ability to compute phrase-level features—features that are attached to a set of words—and whether models can deploy these more abstract properties to drive downstream expectations. In this work, we assess whether state-of-the-art neural models can compute and employ phrase-level gender and number features of coordinated subject Noun Phrases (CoordNPs) with two nouns. Typical syntactic phrases are endocentric: they are headed by a single child, whose features determine the agreement requirements for the entire phrase. In Figure FIGREF1, for example, the word star heads the subject NP The star; since star is singular, the verb must be singular. CoordNPs lack endocentricity: neither conjunct NP solely determines the features of the NP as a whole. Instead, these feature values are determined by compositional rules sensitive to the features of the conjuncts and the identity of the coordinator. In Figure FIGREF1, because the coordinator is and, the subject NP number is plural even though both conjuncts (the star and the moon) are singular. As this case demonstrates, the agreement behavior for CoordNPs must be driven by more abstract, constituent-level representations, and cannot be reduced to features hosted on a single lexical item. We use four suites of experiments to assess whether neural models are able to build up phrase-level representations of CoordNPs on the fly and deploy them to drive humanlike behavior. First, we present a simple control experiment to show that models can represent number and gender features of non-coordinate NPs (Non-coordination Agreement). Second, we show that models modulate their expectations for downstream verb number based on the CoordNP's coordinating conjunction combined with the features of the coordinated nouns (Simple Coordination). We rule out the possibility that models are using simple heuristics by designing a set of stimuli where a simple heuristic would fail due to structural ambiguity (Complex Coordination). The striking success for all models in this experiment indicates that even neural models with no explicit hierarchical bias, trained on a relatively small amount of text are able to learn fine-grained and robust generalizations about the interaction between CoordNPs and local syntactic context. Finally, we use subject–auxiliary inversion to test whether an upstream lexical item modulates model expectation for the phrasal-level features of a downstream CoordNP (Inverted Coordination). Here, we find that all models are insensitive to the fine-grained features of this particular syntactic context. Overall, our results indicate that neural models can learn fine-grained information about the interaction of Coordinated NPs and local syntactic context, but their behavior remains unhumanlike in many key respects. ### Methods ::: Psycholinguistics Paradigm To determine whether state-of-the-art neural architectures are capable of learning humanlike CoordNP/verb agreement properties, we adopt the psycholinguistics paradigm for model assessment. 
In this paradigm the models are tested using hand-crafted sentences designed to test underlying network knowledge. The assumption here is that if a model implicitly learns humanlike linguistic knowledge during training, its expectations for upcoming words should qualitatively match human expectations in novel contexts. For example, BIBREF1 and BIBREF6 assessed how well neural models had learned subject/verb number agreement by feeding them the prefix The keys to the cabinet .... If the models predicted the grammatical continuation are over the ungrammatical continuation is, they can be said to have learned number agreement insofar as the number of the head noun, and not the number of the distractor noun cabinet, drives expectations about the number of the matrix verb. If models are able to robustly modulate their expectations based on the internal components of the CoordNP, this will provide evidence that the networks are building up a context-sensitive phrase-level representation. We quantify model expectations as surprisal values. Surprisal is the negative log conditional probability $S(x_i) = -\log _2 p(x_i|x_1 \dots x_{i-1})$ of a sentence's $i^{th}$ word $x_i$ given the previous words. Surprisal tells us how strongly $x_i$ is expected in context and is known to correlate with human processing difficulty BIBREF7, BIBREF0, BIBREF8. In the CoordNP/verb agreement studies presented here, in cases where the preceding context sets up a high expectation for a number-inflected verb form $w_i$ (e.g. singular `is'), we expect $S(w_i)$ to be lower than the surprisal of its number-mismatched counterpart (e.g. plural `are').
### Methods ::: Models Tested ::: Recurrent Neural Network (RNN) Language Models are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11 on relatively small corpora. The first model, referred to as `LSTM (PTB)' in the following sections, was trained on the sentences from the Penn Treebank BIBREF12. The second model, referred to as `LSTM (FTB)', was trained on the sentences from the French Treebank BIBREF13. We set the size of the input word embeddings and LSTM hidden layers of both models to 256. We also compare LSTM language models trained on large corpora. We incorporate two pretrained English language models: one trained on the Billion Word benchmark (referred to as `LSTM (1B)') from BIBREF14, and the other trained on English Wikipedia (referred to as `LSTM (enWiki)') from BIBREF3. For French, we trained a large LSTM language model (referred to as `LSTM (frWaC)') on a random subset (about 4 million sentences, 138 million word tokens) of the frWaC dataset BIBREF15. We set the size of the input embeddings and hidden layers to 400 for the LSTM (frWaC) model since it is trained on a larger dataset.
### Methods ::: Models Tested ::: ActionLSTM models the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrase-structure parse BIBREF16. The action space consists of three possibilities: open a new non-terminal node and opening bracket; generate a terminal node; and close a bracket. To compute surprisal values for a given token, we approximate $P(w_i|w_{1\cdots i-1})$ by marginalizing over the most-likely partial parses found by word-synchronous beam search BIBREF17.
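Since every model is compared through the same surprisal measure, a minimal sketch of how that measure can be computed may be useful. This is not the authors' code: the `logprob_fn` interface below is hypothetical and simply stands in for whatever conditional probability estimate a given model provides (direct output probabilities for the LSTMs, beam-search marginals for the syntactic models).

```python
import math
from typing import Callable, Sequence

# Minimal sketch (not the authors' code). `logprob_fn(context, word)` is a
# hypothetical interface returning the model's natural-log probability of
# `word` given the preceding words; any of the language models above could be
# wrapped behind it.

def surprisals(words: Sequence[str],
               logprob_fn: Callable[[Sequence[str], str], float]) -> list:
    """Return S(x_i) = -log2 p(x_i | x_1 .. x_{i-1}) for every word, in bits."""
    return [-logprob_fn(words[:i], w) / math.log(2.0)
            for i, w in enumerate(words)]

def expectation_differential(preamble: Sequence[str], cont_a: str, cont_b: str,
                             logprob_fn) -> float:
    """Surprisal difference at the critical region: S(cont_a) - S(cont_b).
    E.g. cont_a='is', cont_b='are' gives the plural expectation used in the
    experiments (positive values mean the model prefers the plural verb)."""
    s_a = -logprob_fn(list(preamble), cont_a) / math.log(2.0)
    s_b = -logprob_fn(list(preamble), cont_b) / math.log(2.0)
    return s_a - s_b
```

Only the surprisal at the critical verb or adjective region matters for the agreement tests, so in practice the second helper is the one applied to each item.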
### Methods ::: Models Tested ::: Generative Recurrent Neural Network Grammars (RNNG) jointly model the word sequence as well as the underlying syntactic structure BIBREF18. Following BIBREF19, we estimate surprisal using word-synchronous beam search BIBREF17. We use the same hyper-parameter settings as BIBREF18. The annotation schemes used to train the syntactically-supervised models differ slightly between French and English. In the PTB (English), CoordNPs are flat structures bearing an `NP' label. In the FTB (French), CoordNPs are binary-branching, labeled as NPs, except for the phrasal node dominating the coordinating conjunction, which is labeled `COORD'. We examine the effects of annotation schemes on model performance in Appendix SECREF8.
### Experiment 1: Non-coordination Agreement In order to provide a baseline for the following experiments, here we assess whether the models tested have learned basic representations of number and gender features for non-coordinated Noun Phrases. We test number agreement in English and French as well as gender agreement in French. Both English and French have two grammatical number features: singular (sg) and plural (pl). French has two grammatical gender features: masculine (m) and feminine (f). The experimental materials include sentences where the subject NPs contain a single noun, which agrees either with the matrix verb (in the case of number agreement) or with a following predicative adjective (in the case of gender agreement). Conditions are given in Table TABREF9 and Table TABREF10. We measure model behavior by computing the plural expectation, i.e. the surprisal of the singular continuation minus the surprisal of the plural continuation, for each condition, averaged across items. We expect a positive plural expectation in the Npl conditions and a negative plural expectation in the Nsg conditions. For gender agreement, we compute a gender expectation, which is S(feminine continuation) $-$ S(masculine continuation). We measure surprisal at the verbs and predicative adjectives themselves. The results for this experiment are in Figure FIGREF11, with the plural expectation and gender expectation on the y-axis and conditions on the x-axis. For this and subsequent experiments, error bars represent 95% confidence intervals for across-item means. For number agreement, all the models in English and French show positive plural expectation when the head noun is plural and negative plural expectation when it is singular. For gender agreement, however, only the LSTM (frWaC) shows modulation of gender expectation based on the gender of the head noun. This is most likely due to the lower frequency of predicative adjectives compared to matrix verbs in the corpus.
### Experiment 2: Simple Coordination In this section, we test whether neural language models can use grammatical features hosted on multiple components of a coordination phrase—the coordinated nouns as well as the coordinating conjunction—to drive downstream expectations. We test number agreement in both English and French and gender agreement in French.
### Experiment 2: Simple Coordination ::: Number Agreement In simple subject/verb number agreement, the number features of the CoordNP are determined by the coordinating conjunction and the number features of the two coordinated NPs.
CoordNPs formed by and are plural and thus require plural verbs; CoordNPs formed by or allow either plural or singular verbs, often with the number features of the noun linearly closest to the verb playing a more important role, although this varies cross-linguistically BIBREF20. Forced-choice preference experiments in BIBREF21 reveal that English native speakers prefer singular agreement when the closest conjunct in an or-CoordNP is singular and plural agreement when the closest conjunct is plural. In French, both singular and plural verbs are possible when two singular NPs are joined via disjunction BIBREF22. In order to assess whether the neural models learn the basic CoordNP licensing for English, we adapted 37 items from BIBREF21, following the 16 conditions outlined in Table TABREF14. Test items consist of the sentence preamble, followed by either the singular or plural BE verb, half the time in present tense (is/are) and half the time in past tense (was/were). We measured the plural expectation, following the procedure in Section SECREF3. We created 24 items using the same conditions as the English experiment to test the models trained on French, using the 3rd person singular and plural forms of the verb aller, `to go' (va, vont). Within each item, nouns match in gender; across all conditions half the nouns are masculine, half feminine. The results for this experiment can be seen in Figure FIGREF12, with the results for English on the left and French on the right. The results for and are on the top row, and those for or on the bottom row. For all figures the y-axis shows the plural expectation, or the difference in surprisal between the singular and the plural continuation. Turning first to English-and (Figure FIGREF12), all models show plural expectation (the bars are significantly greater than zero) in the pl_and_pl and sg_and_pl conditions, as expected. For the pl_and_sg condition, only the LSTM (enWiki) and ActionLSTM are greater than zero, indicating humanlike behavior. For the sg_and_sg condition, only the LSTM (enWiki) model shows the correct plural expectation. For French-and (Figure FIGREF12), all models show positive plural expectation in all conditions, as expected, except for the LSTM (FTB) in the sg_and_sg condition. Examining the results for English-or, we find that all models demonstrate humanlike expectation in the pl_or_pl and sg_or_pl conditions. The LSTM (1B), LSTM (PTB), and RNNG models show zero or negative plural expectation for the pl_or_sg conditions, as expected. However, the LSTM (enWiki) and ActionLSTM models show positive plural expectation in this condition, indicating that they have not learned the humanlike generalizations. All models show significantly negative plural expectation in the sg_or_sg condition, as expected. In the French-or cases, models show almost identical behavior to the and conditions, except that the LSTM (frWaC) shows smaller plural expectation when singular nouns are linearly proximal to the verb. These results indicate moderate success at learning coordinate NP agreement; however, this success may be the result of an overly simple heuristic. It appears that expectations for both plural and masculine continuations are driven by a linear combination of the two nominal number/gender features transferred into log-probability space, with the earlier noun mattering less than the later noun.
A model that optimally captures human grammatical preferences should show little or no difference in the plural expectation across the and conditions, and the expectation should be greater than zero in all cases. Yet, all the models tested show gradient performance based on the number of plural conjuncts. ### Experiment 2: Simple Coordination ::: Gender Agreement In French, if two nouns are coordinated with et (and-coordination), agreement must be masculine if there is at least one masculine element in the coordinate structure. If the nouns are coordinated with ou (or-coordination), both masculine and feminine agreement are acceptable BIBREF23, BIBREF24. Although linear proximity effects have been tested for a number of languages that employ grammatical gender, as in e.g. the Slavic languages BIBREF25, there is no systematic study for French. To assess whether the French neural models learned humanlike gender agreement, we created 24 test items, following the examples in Table TABREF16, and measured the masculine expectation. In our test items, the coordinated subject NP is followed by a predicative adjective, which takes on either masculine or feminine gender morphology. Results from the experiment can be seen in Figure FIGREF17. No model shows a qualitative difference based on the coordinator, and only the LSTM (frWaC) shows a significant difference in behavior between conditions. Here, we find positive masculine expectation in the m_and_m and f_and_m conditions, and negative masculine expectation in the f_and_f condition, as expected. However, in the m_and_f condition, the masculine expectation is not significantly different from zero, where we would expect it to be positive. In the or-coordination conditions, following our expectation, masculine expectation is positive when both conjuncts are masculine and negative when both are feminine. For the LSTM (FTB) and ActionLSTM models, the masculine expectation is positive (although not significantly so) in all conditions, consistent with results in Section SECREF3. ### Experiment 3: Complex Coordination One possible explanation for the results presented in the previous section is that the models are using a `bag of features' approach to plural and masculine licensing that is opaque to syntactic context: Following a coordinating conjunction surrounded by nouns, models simply expect the following verb to be plural, proportionally to the number of plural nouns. In this section, we control for this potential confound by conducting two experiments: In the Complex Coordination Control experiments we assess models' ability to extend basic CoordNP licensing into sententially-embedded environments, where the CoordNP can serve as an embedded subject. In the Complex Coordination Critical experiments, we leverage the sentential embedding environment to demonstrate that when the CoordNPs cannot plausibly serve as the subject of the embedded phrase, models are able to suppress the previously-demonstrated expectations set up by these phrases. These results demonstrate that models are not following a simple strategy for predicting downstream number and gender features, but are building up CoordNP representations on the fly, conditioned on the local syntactic context.
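As a small illustration of how the plural and gender expectation measures used throughout these experiments can be aggregated, the sketch below is our assumption about the analysis pipeline, not code from the paper: it turns item-level surprisal differences into a condition mean with an approximate 95% confidence interval across items, matching the error bars described earlier. All values shown are invented for the example.

```python
# Turning per-item surprisal differences into condition-level "expectation" summaries.
import statistics

def plural_expectation(surprisal_sg: float, surprisal_pl: float) -> float:
    # Positive values mean the plural continuation is preferred at the agreement target.
    return surprisal_sg - surprisal_pl

def condition_summary(per_item_expectations: list[float]) -> tuple[float, float]:
    """Return (mean, half-width of an approximate 95% CI) across items."""
    mean = statistics.mean(per_item_expectations)
    sem = statistics.stdev(per_item_expectations) / len(per_item_expectations) ** 0.5
    return mean, 1.96 * sem

# Example: hypothetical item-level expectations for one condition.
items = [1.2, 0.4, -0.3, 0.9, 0.7]
mean, ci = condition_summary(items)
print(f"plural expectation = {mean:.2f} +/- {ci:.2f}")
```

The gender expectation is computed the same way, with S(feminine continuation) minus S(masculine continuation) substituted for the number difference.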
### Experiment 3: Complex Coordination ::: Complex Coordination Control Following certain sentential-embedding verbs, CoordNPs serve unambiguously as the subject of the verb's sentence complement and should trigger number agreement behavior in the main verb of the embedded clause, similar to the behavior presented in SECREF13. To assess this, we use the 37 test items in English and 24 items in French in section SECREF13, following the conditions in Table TABREF19 (for number agreement), testing only and coordination. For gender agreement, we use the same test items and conditions for and coordination in Section SECREF15, but with the Coordinated NPs embedded in a context similar to SECREF18. As before, we derived the plural expectation by measuring the difference in surprisal between the singular and plural continuations and the gender expectation by computing the difference in surprisal between the masculine and feminine predicates. . Je croyais que les prix et les dépenses étaient importants/importantes. I thought that the.pl price.mpl and the.pl expense.fpl were important.mpl/fpl I thought that the prices and the expenses were important. The results for the control experiments can be seen in Figure FIGREF20, with English number agreement on the top row, French number agreement in the middle row and French gender agreement on the bottom. The y-axis shows either plural or masculine expectation, with the various conditions along the x-axis. For English number agreement, we find that the models behave similarly as they do for simple coordination contexts. All models show significant plural expectation when the closest noun is plural, with only two models demonstrating plural expectation in the sg_and_sg case. The French number agreement tests show similar results, with all models except LSTM (FTB) demonstrating significant plural prediction in all cases. Turning to French gender agreement, only the LSTM (frWaC) shows sensitivity to the various conditions, with positive masculine expectation in the m_and_m condition and negative expectation in the f_and_f condition, as expected. These results indicate that the behavior shown in Section SECREF13 extends to more complex syntactic environments—in this case to sentential embeddings. Interestingly, for some models, such as the LSTM (1B), behavior is more humanlike when the CoordNP serves as the subject of an embedded sentence. This may be because the model, which has a large number of hidden states and may be extra sensitive to fine-grained syntactic information carried on lexical items BIBREF2, is using the complementizer, that, to drive more robust expectations. ### Experiment 3: Complex Coordination ::: Complex Coordination Critical In order to assess whether the models' strategy for CoordNP/verb number agreement is sensitive to syntactic context, we contrast the results presented above to those from a second, critical experiment. Here, two coordinated nouns follow a verb that cannot take a sentential complement, as in the examples given in Table TABREF23. Of the two possible continuations—are or is—the plural is only grammatically licensed when the second of the two conjuncts is plural. In these cases, the plural continuation may lead to a final sentence where the first noun serves as the verb's object and the second introduces a second main clause coordinated with the first, as in I fixed the doors and the windows are still broken. 
For the same reason, the singular-verb continuation is only licensed when the noun immediately following and is singular. We created 37 test items in both English and French, and calculated the plural expectation. If the models were following a simple strategy to drive CoordNP/verb number agreement, then we should see either no difference in plural expectation across the four conditions or behavior no different from the control experiment. If, however, the models are sensitive to the licensing context, we should see a contrast based solely on the number features of the second conjunct, where plural expectation is positive when the second conjunct is plural, and negative otherwise. Experimental items for a critical gender test were created similarly, as in Example SECREF22. As with plural agreement, gender expectation should be driven solely by the second conjunct: For the f_and_m and m_and_m conditions, the only grammatical continuation is one where the adjectival predicate bears masculine gender morphology. Conversely, for the m_and_f or f_and_f conditions, the only grammatical continuation is one where the adjectival predicate bears feminine morphology. As in SECREF13, we created 24 test items and measured the gender expectation by calculating the difference in surprisal between the masculine and feminine continuations. . Nous avons accepté les prix et les dépenses étaient importants/importantes. we have accepted the.pl price.mpl and the expense.fpl were important.mpl/fpl We have accepted the prices and the expenses were important. The results from the critical experiments are in Figure FIGREF21, with the English number agreement on the top row, French number agreement in the middle and gender expectation on the bottom row. Here the y-axis shows either plural expectation or masculine expectation, with the various conditions on the x-axis. The results here are strikingly different from those in the control experiments. For number agreement, all models in both languages show strong plural expectation in conditions where the second noun is plural (blue and green bars), as they do in the control experiments. Crucially, when the second noun is singular, the plural expectation is significantly negative for all models (save for the French LSTM (FTB) pl_and_sg condition). Turning to gender agreement, only the LSTM (frWaC) model shows differentiation between the four conditions tested. However, whereas the f_and_m and m_and_f gender expectations are not significantly different from zero in the control condition, in the critical condition they pattern with the purely masculine and purely feminine conditions, indicating that, in this syntactic context, the model has successfully learned to base gender expectation solely on the second noun. These results are inconsistent with a simple `bag of features' strategy that is insensitive to local syntactic context. They indicate that the models can interpret the same string as either a coordinated noun phrase, or as an NP object and the start of a coordinated VP with the second NP as its subject. ### Experiment 4: Inverted Coordination In addition to using phrase-level features to drive expectation about downstream lexical items, human processors can do the inverse—use lexical features to drive expectations about upcoming syntactic chunks. In this experiment, we assess whether neural models use number features hosted on a verb to modulate their expectations for upcoming CoordNPs.
To assess whether neural language models learn inverted coordination rules, we adapted items from Section SECREF13 in both English (37 items) and French (24 items), following the paradigm in Table TABREF24. The first part of the phrase contains either a plural or singular verb and a plural or singular noun. In this case, we sample the surprisal for the continuation and (or is grammatical in all conditions, so it is omitted from this study). Our expectation is that `and' is less surprising in the Vpl_Nsg condition than in the Vsg_Nsg condition, where a CoordNP is not licensed by the grammar in either French or English (as in *What is the pig and the cat eating?). We also expect lower surprisal for and in the Vpl_Nsg condition, where it is obligatory for a grammatical continuation, than in the Vpl_Npl condition, where it is optional. For the French experimental items, the question is embedded under a sentential-complement-taking verb, following Example SECREF6, due to the fact that unembedded subject-verb inverted questions sound very formal and might be relatively rare in the training data. . Je me demande où vont le maire et I myself ask where go.3PL the.MSG mayor.MSG and The results for both languages are shown in Figure FIGREF25, with the surprisal at the coordinator on the y-axis and the various conditions on the x-axis. No model in either language shows a significant difference in surprisal in the expected direction between the Vpl_Nsg and Vpl_Npl conditions or between the Vpl_Nsg and Vsg_Nsg conditions. The LSTM (1B) shows a significant difference between the Vpl_Nsg and Vpl_Npl conditions, but in the opposite direction from the one expected, with the coordinator less surprising in the latter condition. These results indicate that the models are unable to use fine-grained context sensitivity to drive expectations for CoordNPs, at least in the inversion setting. ### Discussion The experiments presented here extend and refine a line of research investigating what linguistic knowledge is acquired by neural language models. Previous studies have demonstrated that sequential models trained on the simple objective of predicting the next word can learn long-distance syntactic dependencies in impressive detail. Our results provide complementary insights, demonstrating that a range of model architectures trained on a variety of datasets can learn fine-grained information about the interaction of CoordNPs and local syntactic context, but their behavior remains unhumanlike in many key ways. Furthermore, to the best of our knowledge, this work presents the first psycholinguistic analysis of neural language models trained on French, a high-resource language that has so far been under-investigated in this line of research. In the simple coordination experiment, we demonstrated that models were able to capture some of the agreement behaviors of humans, although their performance deviated in crucial aspects. Whereas human behavior is best modeled as a `percolation' process, the neural models appear to be using a linear combination of NP constituent number to drive CoordNP/verb number agreement, with the second noun weighted more heavily than the first. In these experiments, the supervision afforded by the RNNG and ActionLSTM models did not translate into more robust or humanlike learning outcomes. The complex coordination experiments provided evidence that the neural models tested were not using a simple `bag of features' strategy, but were sensitive to syntactic context.
All models tested were able to interpret material that had similar surface form in ways that corresponded to two different tree-structural descriptions, based on local context. The inverted coordination experiment provided a contrasting example, in which models were unable to modulate expectations based on subtleties in the syntactic environment. Across all our experiments, the French models performed consistently better on subject/verb number agreement than on subject/predicate gender agreement. Although there are likely more examples of subject/verb number agreement in the French training data, gender agreement is syntactically mandated and widespread in French. It remains an open question why all but one of the models tested were unable to leverage the numerous examples of gender agreement seen in various contexts during training to drive correct subject/predicate expectations. ### Acknowledgments This project is supported by a grant from Labex EFL ANR-10-LABX-0083 (and Idex ANR-18-IDEX-0001) for AA, and by the MIT–IBM AI Laboratory and the MIT–SenseTime Alliance on Artificial Intelligence for RPL. We would like to thank the anonymous reviewers for their comments and Anne Abeillé for her advice and feedback. ### The Effect of Annotation Schemes This section further investigates the effects of CoordNP annotation schemes on the behaviors of structurally-supervised models. We test whether an explicit COORD phrasal tag improves model performance. We trained two additional RNNG models on 38,546 sentences from the Penn Treebank annotated with two different schemes: The first, RNNG (PTB-control), was trained with the original Penn Treebank annotation. The second, RNNG (PTB-coord), was trained on the same sentences, but with an extended coordination annotation scheme, meant to mirror the scheme employed in the FTB, adapted from BIBREF26. We stripped empty categories from their scheme and only kept the NP-COORD label for constituents inside a coordination structure. Figure FIGREF26 illustrates the detailed annotation differences between the two datasets. We tested both models on all the experiments presented in Sections SECREF3-SECREF6 above. Turning to the results of these six experiments: We see little difference between the two models in the Non-coordination agreement experiment. For the Complex coordination control and Complex coordination critical experiments, both models are largely the same as well. However, in the Simple and-coordination and Simple or-coordination experiments, the values for all conditions are shifted upwards for the RNNG PTB-coord model, indicating a higher overall preference for the plural continuation. Furthermore, the range of values is reduced in the RNNG PTB-coord model, compared to the RNNG PTB-control model. These results indicate that adding an explicit COORD phrasal label does not drastically change model performance: Both models still appear to be using a linear combination of number features to drive plural vs. singular expectation. However, the explicit representation has made the interior of the coordination phrase more opaque to the model (each feature matters less) and has slightly shifted model preference towards plural continuations. In this sense, the PTB-coord model may have learned a generalization about CoordNPs, but this generalization remains unlike the ones learned by humans. ### PTB/FTB Agreement Patterns We present statistics of subject/predicate agreement patterns in the Penn Treebank (PTB) and French Treebank (FTB) in Table TABREF28 and Table TABREF29.
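The appendix statistics above are counts over treebank annotations. The rough sketch below shows one way such subject/verb number-agreement counts could be gathered, using the small Penn Treebank sample bundled with NLTK and a crude head-finding heuristic. It is purely illustrative: the tag sets, head rule, and corpus sample are our assumptions, so the resulting counts will not reproduce Table TABREF28.

```python
# Approximate subject/verb number-agreement counts from a bracketed treebank.
# Requires: nltk.download('treebank') (the 10% PTB sample shipped with NLTK).
from collections import Counter
import nltk
from nltk.corpus import treebank

NOUN_TAGS = {"NN": "sg", "NNP": "sg", "NNS": "pl", "NNPS": "pl"}
VERB_TAGS = {"VBZ": "sg", "VBP": "pl"}

def subject_number(np_subtree):
    # Heuristic head: the rightmost nominal preterminal directly under the subject NP.
    heads = [c.label() for c in np_subtree
             if isinstance(c, nltk.Tree) and c.label() in NOUN_TAGS]
    return NOUN_TAGS[heads[-1]] if heads else None

counts = Counter()
for tree in treebank.parsed_sents():
    for s in tree.subtrees(lambda t: t.label() == "S"):
        kids = [c for c in s if isinstance(c, nltk.Tree)]
        subj = next((c for c in kids if c.label().startswith("NP-SBJ")), None)
        vp = next((c for c in kids if c.label() == "VP"), None)
        if subj is None or vp is None:
            continue
        verbs = [c.label() for c in vp
                 if isinstance(c, nltk.Tree) and c.label() in VERB_TAGS]
        noun_num = subject_number(subj)
        if noun_num and verbs:
            counts[(noun_num, VERB_TAGS[verbs[0]])] += 1

print(counts)  # e.g. Counter({('sg', 'sg'): ..., ('pl', 'pl'): ..., ...})
```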
Figure 1: Subject-verb agreement with (a) the head of a noun phrase structure, and (b) the coordination structure.
Table 1: A summary of models tested.
Table 2: Conditions of number agreement in Non-coordination Agreement experiment.
Table 3: Conditions of gender agreement in Non-coordination Agreement experiment.
Table 4: Conditions of number agreement in Simple Coordination experiment.
Figure 2: Non-Coordination Agreement experiments for English (number) and French (number and gender).
Figure 3: Comparison of models' expectation preferences for singular vs. plural predicate in English and French Simple Coordination experiments.
Table 5: Conditions for the and-coordination experiment. (Items for or-coordination are the same except that we change the coordinator to ou.)
Figure 4: Comparison of models' expectation preferences for feminine vs. masculine predicative adjectives in French.
Table 6: Conditions of number agreement in Complex Coordination Control experiment.
Figure 5: Comparison of models' expectation preferences in the Complex Coordination Control experiments.
Figure 6: Comparison of models' expectation preferences in the Complex Coordination Critical experiments.
Table 7: Conditions of number agreement in Complex Coordination Critical experiment.
Table 8: Conditions in Inverted Coordination experiment.
Figure 7: Comparison of models' surprisals of and-coordination in Inverted Coordination experiment.
Figure 8: Comparison of annotation schemes of coordination structure.
Table 9: Frequency of number agreement patterns in PTB and FTB.
Table 10: Frequency of gender agreement patterns in FTB.
Figure 9: Comparison between RNNGs trained on PTB data with original annotation vs. fine-grained annotation of coordination structure.
(about 4 million sentences, 138 million word tokens), one trained on the Billion Word benchmark
What datasets are used?
### Introduction The detection of offensive language has become an important topic as the online community has grown, and so too has the number of bad actors BIBREF2. Such behavior includes, but is not limited to, trolling in public discussion forums BIBREF3 and via social media BIBREF4, BIBREF5, employing hate speech that expresses prejudice against a particular group, or offensive language specifically targeting an individual. Such actions can be motivated by a desire to cause harm from which the bad actor derives enjoyment, despite negative consequences to others BIBREF6. As such, some bad actors go to great lengths to both avoid detection and achieve their goals BIBREF7. In that context, any attempt to automatically detect this behavior can be expected to be adversarially attacked by looking for weaknesses in the detection system, which currently can easily be exploited as shown in BIBREF8, BIBREF9. A further example, relevant to the natural language processing community, is the exploitation of weaknesses in machine learning models that generate text, to force them to emit offensive language. Adversarial attacks on the Tay chatbot led to the developers shutting down the system BIBREF1. In this work, we study the detection of offensive language in dialogue with models that are robust to adversarial attack. We develop an automatic approach to the “Build it Break it Fix it” strategy originally adopted for writing secure programs BIBREF10, and the “Build it Break it” approach that subsequently adapted it for NLP BIBREF11. In the latter work, two teams of researchers, “builders” and “breakers”, were used to first create sentiment and semantic role-labeling systems and then construct examples that find their faults. In this work we instead fully automate such an approach using crowdworkers as the humans-in-the-loop, and also apply a fixing stage where models are retrained to improve them. Finally, we repeat the whole build, break, and fix sequence over a number of iterations. We show that such an approach provides more and more robust systems over the fixing iterations. Analysis of the type of data collected in the iterations of the break it phase shows clear distribution changes, moving away from simple use of profanity and other obvious offensive words to utterances that require understanding of world knowledge, figurative language, and use of negation to detect if they are offensive or not. Further, data collected in the context of a dialogue rather than a sentence without context provides more sophisticated attacks. We show that model architectures that use the dialogue context efficiently perform much better than systems that do not, the latter having been the main focus of existing research BIBREF12, BIBREF5, BIBREF13. Code for our entire build it, break it, fix it algorithm will be made open source, complete with model training code and the crowdsourcing interface for humans. Our data and trained models will also be made available for the community. ### Related Work The task of detecting offensive language has been studied across a variety of content classes. Perhaps the most commonly studied class is hate speech, but work has also covered bullying, aggression, and toxic comments BIBREF13. To this end, various datasets have been created to benchmark progress in the field. In hate speech detection, BIBREF5 recently compiled and released a dataset of over 24,000 tweets labeled as containing hate speech, offensive language, or neither.
The TRAC shared task on Aggression Identification, a dataset of over 15,000 Facebook comments labeled with varying levels of aggression, was released as part of a competition BIBREF14. In order to benchmark toxic comment detection, the Wikipedia Toxic Comments dataset (which we study in this work) was collected and extracted from Wikipedia Talk pages and featured in a Kaggle competition BIBREF12, BIBREF15. Each of these benchmarks examines only single-turn utterances, outside of the context in which the language appeared. In this work we recommend that future systems should move beyond classification of single utterances and use contextual information to help identify offensive language. Many approaches have been taken to solve these tasks – from linear regression and SVMs to deep learning BIBREF16. The best performing systems in each of the competitions mentioned above (for aggression and toxic comment classification) used deep learning approaches such as LSTMs and CNNs BIBREF14, BIBREF15. In this work we consider a large pre-trained transformer model which has been shown to perform well on many downstream NLP tasks BIBREF17. The broad class of adversarial training is currently a hot topic in machine learning BIBREF18. Use cases include training image generators BIBREF19 as well as image classifiers to be robust to adversarial examples BIBREF20. These methods find the breaking examples algorithmically, rather than by using human breakers as we do. Applying the same approaches to NLP tends to be more challenging because, unlike for images, even small changes to a sentence can cause a large change in the meaning of that sentence, which a human can detect but a lower quality model cannot. Nevertheless, algorithmic approaches have been attempted, for example in text classification BIBREF21, machine translation BIBREF22, dialogue generation tasks BIBREF23 and reading comprehension BIBREF24. The latter was particularly effective at proposing a more difficult version of the popular SQuAD dataset. As mentioned in the introduction, our approach takes inspiration from “Build it Break it” approaches which have been successfully tried in other domains BIBREF10, BIBREF11. Those approaches advocate finding faults in systems by having humans look for insecurities (in software) or prediction failures (in models), but do not advocate an automated approach as we do here. Our work is also closely connected to the “Mechanical Turker Descent” algorithm detailed in BIBREF25, where language-to-action pairs were collected from crowdworkers by incentivizing them with a game-with-a-purpose technique: a crowdworker receives a bonus if their contribution results in better models than another crowdworker. We did not gamify our approach in this way, but our approach still shares the round-based improvement of models through crowdworker interaction. ### Baselines: Wikipedia Toxic Comments In this section we describe the publicly available data that we have used to bootstrap our build it, break it, fix it approach. We also compare our model choices with existing work and clarify the metrics chosen to report our results. ### Baselines: Wikipedia Toxic Comments ::: Wikipedia Toxic Comments The Wikipedia Toxic Comments dataset (WTC) has been collected in a joint effort by the Wikimedia Foundation and Jigsaw BIBREF12 to identify personal attacks online. The data has been extracted from the Wikipedia Talk pages, discussion pages where editors can discuss improvements to articles or other Wikipedia pages.
We considered the version of the dataset that corresponds to the Kaggle competition: “Toxic Comment Classification Challenge" BIBREF15, which features 7 classes of toxicity: toxic, severe toxic, obscene, threat, insult, identity hate, and non-toxic. In the same way as in BIBREF26, every label except non-toxic is grouped into an offensive class, while the non-toxic class is kept as the safe class. In order to compare our results to BIBREF26, we similarly split this dataset, dedicating 10% as a test set. 80% is dedicated to the train set, while the remaining 10% is used for validation. Statistics on the dataset are shown in Table TABREF4. ### Baselines: Wikipedia Toxic Comments ::: Models We establish baselines using two models. The first one is a binary classifier built on top of a large pre-trained transformer model. We use the same architecture as in BERT BIBREF17. We add a linear layer to the output of the first token ([CLS]) to produce a final binary classification. We initialize the model using the weights provided by BIBREF17 corresponding to “BERT-base". The transformer is composed of 12 layers with hidden size of 768 and 12 attention heads. We fine-tune the whole network on the classification task. We also compare it to the fastText classifier BIBREF27, for which a given sentence is encoded as the average of individual word vectors that are pre-trained on a large corpus derived from Wikipedia. A linear layer is then applied on top to yield a binary classification. ### Baselines: Wikipedia Toxic Comments ::: Experiments We compare the two aforementioned models with BIBREF26, who conducted their experiments with a BiLSTM with GloVe pre-trained word vectors BIBREF28. Results are listed in Table TABREF5 and we compare them using the weighted F1, i.e. the sum of the F1 scores of each class weighted by their frequency in the dataset. We also report the F1 of the offensive class, which is the metric we favor in this work, although we report both. (Note that throughout the paper, the notation F1 is always referring to offensive-class F1.) Indeed, in the case of an imbalanced dataset such as Wikipedia Toxic Comments where most samples are safe, the weighted F1 is closer to the F1 score of the safe class, whereas we focus on detecting offensive content. Our BERT-based model outperforms the method from BIBREF26; throughout the rest of the paper, we use the BERT-based architecture in our experiments. In particular, we used this baseline trained on WTC to bootstrap our approach, to be described subsequently. ### Build it Break it Fix it Method In order to train models that are robust to adversarial behavior, we posit that it is crucial to collect and train on data that was collected in an adversarial manner. We propose the following automated build it, break it, fix it algorithm: Build it: Build a model capable of detecting offensive messages. This is our best-performing BERT-based model trained on the Wikipedia Toxic Comments dataset described in the previous section. We refer to this model throughout as $A_0$. Break it: Ask crowdworkers to try to “beat the system" by submitting messages that our system ($A_0$) marks as safe but that the worker considers to be offensive. Fix it: Train a new model on these collected examples in order to be more robust to these adversarial attacks. Repeat: Repeat, deploying the newly trained model in the break it phase, then fix it again. See Figure FIGREF6 for a visualization of this process.
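Read procedurally, the loop just described can be summarized as in the sketch below. This is our paraphrase of the algorithm, not the authors' released implementation: train_classifier and collect_adversarial_examples are placeholder functions standing in for BERT fine-tuning and the crowdsourced "break it" collection, and the dataset objects are illustrative stand-ins.

```python
# Schematic build it / break it / fix it loop (placeholders, not the paper's code).

def train_classifier(datasets):
    """Placeholder: fine-tune the BERT-based safe/offensive classifier on `datasets`."""
    return {"trained_on": [d["name"] for d in datasets]}

def collect_adversarial_examples(round_idx, must_fool, n_examples):
    """Placeholder: crowdworkers submit offensive messages that every model in
    `must_fool` nevertheless labels as safe."""
    return {"name": f"adversarial_round_{round_idx}", "size": n_examples}

def build_break_fix(wtc_data, n_rounds=3):
    a0 = train_classifier([wtc_data])          # "build it": baseline A_0 on WTC
    models, adversarial_rounds = [a0], []
    for i in range(1, n_rounds + 1):
        # "break it": workers must fool both A_0 and the latest model A_{i-1}.
        new_round = collect_adversarial_examples(
            round_idx=i, must_fool=[models[0], models[-1]], n_examples=1000)
        adversarial_rounds.append(new_round)
        # "fix it": retrain on WTC plus all adversarial rounds collected so far.
        models.append(train_classifier([wtc_data] + adversarial_rounds))
    return models

models = build_break_fix({"name": "wikipedia_toxic_comments"})
```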
### Build it Break it Fix it Method ::: Break it Details ::: Definition of offensive Throughout data collection, we characterize offensive messages for users as messages that would not be “ok to send in a friendly conversation with someone you just met online." We use this specific language in an attempt to capture various classes of content that would be considered unacceptable in a friendly conversation, without imposing our own definitions of what that means. The phrase “with someone you just met online" was meant to mimic the setting of a public forum. ### Build it Break it Fix it Method ::: Break it Details ::: Crowdworker Task We ask crowdworkers to try to “beat the system" by submitting messages that our system marks as safe but that the worker considers to be offensive. For a given round, workers earn a “game” point each time they are able to “beat the system," or in other words, trick the model by submitting offensive messages that the model marks as safe. Workers earn up to 5 points each round, and have two tries for each point: we allow multiple attempts per point so that workers can get feedback from the models and better understand their weaknesses. The points serve to indicate success to the crowdworker and motivate them to achieve high scores, but have no other meaning (e.g. no monetary value as in BIBREF25). More details regarding the user interface and instructions can be found in Appendix SECREF9. ### Build it Break it Fix it Method ::: Break it Details ::: Models to Break During round 1, workers try to break the baseline model $A_0$, trained on Wikipedia Toxic Comments. For rounds $i$, $i > 1$, workers must break both the baseline model and the model from the previous “fix it" round, which we refer to as $A_{i-1}$. In that case, the worker must submit messages that both $A_0$ and $A_{i-1}$ mark as safe but which the worker considers to be offensive. ### Build it Break it Fix it Method ::: Fix it Details During the “fix it" round, we update the models with the newly collected adversarial data from the “break it" round. The training data consists of all previous rounds of data, so that model $A_i$ is trained on all rounds $n$ for $n \le i$, as well as the Wikipedia Toxic Comments data. We split each round of data into train, validation, and test partitions. The validation set is used for hyperparameter selection. The test sets are used to measure how robust we are to new adversarial attacks. With increasing round $i$, $A_i$ should become more robust to increasingly complex human adversarial attacks. ### Single-Turn Task We first consider a single-turn set-up, i.e. detection of offensive language in one utterance, with no dialogue context or conversational history. ### Single-Turn Task ::: Data Collection ::: Adversarial Collection We collected three rounds of data with the build it, break it, fix it algorithm described in the previous section. Each round of data consisted of 1000 examples, leading to 3000 single-turn adversarial examples in total. For the remainder of the paper, we refer to this method of data collection as the adversarial method. ### Single-Turn Task ::: Data Collection ::: Standard Collection In addition to the adversarial method, we also collected data in a non-adversarial manner in order to directly compare the two set-ups. In this method – which we refer to as the standard method – we simply ask crowdworkers to submit messages that they consider to be offensive. There is no model to break. Instructions are otherwise the same.
In this set-up, there is no real notion of “rounds", but for the sake of comparison we refer to each subsequent 1000 examples collected in this manner as a “round". We collect 3000 examples – or three rounds of data. We refer to a model trained on rounds $n \le i$ of the standard data as $S_i$. ### Single-Turn Task ::: Data Collection ::: Task Formulation Details Since all of the collected examples are labeled as offensive, to make this task a binary classification problem, we also add safe examples to it. The “safe data" is composed of utterances from the ConvAI2 chit-chat task BIBREF29, BIBREF30, which consists of pairs of humans getting to know each other by discussing their interests. Each utterance we used was reviewed by two independent crowdworkers and labeled as safe, with the same characterization of safe as described before. For each partition (train, validation, test), the final task has a ratio of 9:1 safe to offensive examples, mimicking the division of the Wikipedia Toxic Comments dataset used for training our baseline models. Dataset statistics for the final task can be found in Table TABREF21. We refer to these tasks – with both safe and offensive examples – as the adversarial and standard tasks. ### Single-Turn Task ::: Data Collection ::: Model Training Details Using the BERT-based model architecture described in Section SECREF3, we trained models on each round of the standard and adversarial tasks, multi-tasking with the Wikipedia Toxic Comments task. We weight the multi-tasking with a mixing parameter, which is also tuned on the validation set. Finally, after training the weights with the cross entropy loss, we adjust the final bias, also using the validation set. We optimize for the sensitive class (i.e. offensive-class) F1 metric on the standard and adversarial validation sets respectively. For each task (standard and adversarial), on round $i$, we train on data from all rounds $n$ for $n \le i$ and optimize for performance on the validation sets $n \le i$. ### Single-Turn Task ::: Experimental Results We conduct experiments comparing the adversarial and standard methods. We break down the results into “break it" results comparing the data collected and “fix it" results comparing the models obtained. ### Single-Turn Task ::: Experimental Results ::: Break it Phase Examples obtained from both the adversarial and standard collection methods were found to be clearly offensive, but we note several differences in the distribution of examples from each task, shown in Table TABREF21. First, examples from the standard task tend to contain more profanity. Using a list of common English obscenities and otherwise bad words, in Table TABREF21 we calculate the percentage of examples in each task containing such obscenities, and see that the standard examples contain at least seven times as many as each round of the adversarial task. Additionally, in previous works, authors have observed that classifiers struggle with negations BIBREF8. This is borne out by our data: examples from the single-turn adversarial task more often contain the token “not" than examples from the standard task, indicating that users are easily able to fool the classifier with negations. We also anecdotally see figurative language such as “snakes hiding in the grass” in the adversarial data; such examples contain no individually offensive words, and their offensive nature is only captured by reading the entire sentence.
Other examples require sophisticated world knowledge such as that many cultures consider eating cats to be offensive. To quantify these differences, we performed a blind human annotation of a sample of the data, 100 examples of standard and 100 examples of adversarial round 1. Results are shown in Table TABREF16. Adversarial data was indeed found to contain less profanity, fewer non-profane but offending words (such as “idiot”), more figurative language, and to require more world knowledge. We note that, as anticipated, the task becomes more challenging for the crowdworkers with each round, indicated by the decreasing average scores in Table TABREF27. In round 1, workers are able to get past $A_0$ most of the time – earning an average score of $4.56$ out of 5 points per round – showcasing how susceptible this baseline is to adversarial attack despite its relatively strong performance on the Wikipedia Toxic Comments task. By round 3, however, workers struggle to trick the system, earning an average score of only $1.6$ out of 5. A finer-grained assessment of the worker scores can be found in Table TABREF38 in the appendix. ### Single-Turn Task ::: Experimental Results ::: Fix it Phase Results comparing the performance of models trained on the adversarial ($A_i$) and standard ($S_i$) tasks are summarized in Table TABREF22, with further results in Table TABREF41 in Appendix SECREF40. The adversarially trained models $A_i$ prove to be more robust to adversarial attack: on each round of adversarial testing they outperform standard models $S_i$. Further, note that the adversarial task becomes harder with each subsequent round. In particular, the performance of the standard models $S_i$ rapidly deteriorates between round 1 and round 2 of the adversarial task. This is a clear indication that models need to train on adversarially-collected data to be robust to adversarial behavior. Standard models ($S_i$), trained on the standard data, tend to perform similarly to the adversarial models ($A_i$) as measured on the standard test sets, with the exception of training round 3, in which $A_3$ fails to improve on this task, likely due to being too optimized for adversarial tasks. The standard models $S_i$, on the other hand, are improving with subsequent rounds as they have more training data of the same distribution as the evaluation set. Similarly, our baseline model performs best on its own test set, but other models are not far behind. Finally, we remark that all scores of 0 in Table TABREF22 are by design, as for round $i$ of the adversarial task, both $A_0$ and $A_{i-1}$ classified each example as safe during the `break it' data collection phase. ### Multi-Turn Task In most real-world applications, we find that adversarial behavior occurs in context – whether it is in the context of a one-on-one conversation, a comment thread, or even an image. In this work we focus on offensive utterances within the context of two-person dialogues. For dialogue safety we posit it is important to move beyond classifying single utterances, as it may be the case that an utterance is entirely innocuous on its own but extremely offensive in the context of the previous dialogue history. For instance, “Yes, you should definitely do it!" is a rather inoffensive message by itself, but most would agree that it is a hurtful response to the question “Should I hurt myself?" 
### Multi-Turn Task ::: Task Implementation To this end, we collect data by asking crowdworkers to try to “beat" our best single-turn classifier (using the model that performed best on rounds 1-3 of the adversarial task, i.e., $A_3$), in addition to our baseline classifier $A_0$. The workers are shown truncated pieces of a conversation from the ConvAI2 chit-chat task, and asked to continue the conversation with offensive responses that our classifier marks as safe. As before, workers have two attempts per conversation to try to get past the classifier and are shown five conversations per round. They are given a score (out of five) at the end of each round indicating the number of times they successfully fooled the classifier. We collected 3000 offensive examples in this manner. As in the single-turn set-up, we combine this data with safe examples with a ratio of 9:1 safe to offensive for classifier training. The safe examples are dialogue examples from ConvAI2 for which the responses were reviewed by two independent crowdworkers and labeled as safe, as in the single-turn task set-up. We refer to this overall task as the multi-turn adversarial task. Dataset statistics are given in Table TABREF30. ### Multi-Turn Task ::: Models To measure the impact of the context, we train models on this dataset with and without the given context. We use the fastText and BERT-based models described in Section SECREF3. In addition, we build a BERT-based model variant that splits the last utterance (to be classified) and the rest of the history into two dialogue segments. Each segment is assigned an embedding, and the input provided to the transformer is the sum of the word embedding and the segment embedding, replicating the set-up of the Next Sentence Prediction task used in the training of BERT BIBREF17. ### Multi-Turn Task ::: Experimental Results ::: Break it Phase During data collection, we observed that workers had an easier time bypassing the classifiers than in the single-turn set-up. See Table TABREF27. In the single-turn set-up, the task at hand gets harder with each round – the average score of the crowdworkers decreases from $4.56$ in round 1 to $1.6$ in round 3. Despite the fact that we are using our best single-turn classifier in the multi-turn set-up ($A_3$), the task becomes easier: the average score per round is $2.89$. This is because the workers are often able to use contextual information to suggest something offensive rather than say something offensive outright. See examples of submitted messages in Table TABREF29. Having context also allows one to express something offensive more efficiently: the messages supplied by workers in the multi-turn setting were significantly shorter on average; see Table TABREF21. ### Multi-Turn Task ::: Experimental Results ::: Fix it Phase During training, we multi-tasked the multi-turn adversarial task with the Wikipedia Toxic Comments task as well as the single-turn adversarial and standard tasks. We average the results of our best models from five different training runs. The results of these experiments are given in Table TABREF31. As we observed during the training of our baselines in Section SECREF3, the fastText model architecture is ill-equipped for this task relative to our BERT-based architectures. The fastText model performs worse given the dialogue context (an average of 23.56 offensive-class F1 relative to 37.1) than without, likely because its bag-of-embeddings representation is too simple to take the context into account.
We see the opposite with our BERT-based models, indicating that more complex models are able to effectively use the contextual information to detect whether the response is safe or offensive. With the simple BERT-based architecture (that does not split the context and the utterance into separate segments), we observe an average of a 3.7 point increase in offensive-class F1 with the addition of context. When we use segments to separate the context from the utterance we are trying to classify, we observe an average of a 7.4 point increase in offensive-class F1. Thus, it appears that the use of contextual information to identify offensive language is critical to making these systems robust, and improving the model architecture to take account of this has large impact. ### Conclusion We have presented an approach to build more robust offensive language detection systems in the context of a dialogue. We proposed a build it, break it, fix it, and then repeat strategy, whereby humans attempt to break the models we built, and we use the broken examples to fix the models. We show this results in far more nuanced language than in existing datasets. The adversarial data includes less profanity, which existing classifiers can pick up on, and is instead offensive due to figurative language, negation, and by requiring more world knowledge, which all make current classifiers fail. Similarly, offensive language in the context of a dialogue is also more nuanced than stand-alone offensive utterances. We show that classifiers that learn from these more complex examples are indeed more robust to attack, and that using the dialogue context gives improved performance if the model architecture takes it into account. In this work we considered a binary problem (offensive or safe). Future work could consider classes of offensive language separately BIBREF13, or explore other dialogue tasks, e.g. from social media or forums. Another interesting direction is to explore how our build it, break it, fix it strategy would similarly apply to make neural generative models safe BIBREF31. ### Additional Experimental Results ::: Additional Break It Phase Results Additional results regarding the crowdworkers' ability to “beat" the classifiers are reported in Table TABREF38. In particular, we report the percent of messages sent by the crowdsource workers that were marked safe and offensive by both $A_0$ and $A_{i-1}$. We note that very infrequently ($<1\%$ of the time) a message was marked offensive by $A_0$ but safe by $A_{i-1}$, showing that $A_0$ was relatively ineffective at catching adversarial behavior. In Table TABREF39, we report the categorization of examples into classes of offensive language from the blind human annotation of round 1 of the single-turn adversarial and standard data. We observe that in the adversarial set-up, there were fewer examples of bullying language but more examples targeting a protected class. ### Additional Experimental Results ::: Additional Fix It Phase Results We report F1, precision, and recall for the offensive class, as well as weighted-F1 for models $S_i$ and $A_i$ on the single-turn standard and adversarial tasks in Table TABREF41. ### Data Collection Interface Details During the adversarial data collection, we asked users to generate a message that “[the user believes] is not ok but that our system marks as ok," using the definition of “ok" and “not ok" described in the paper (i.e. “ok to send in a friendly conversation with someone you just met online"). 
In order to generate a variety of responses, during the single-turn adversarial collection, we provided users with a topic to base their response on 50% of the time. The topics were pulled from a set of 1365 crowd-sourced open-domain dialogue topics. Example topics include commuting, Gouda cheese, music festivals, podcasts, bowling, and Arnold Schwarzenegger. Users were able to earn up to five points per round, with two tries for each point (to allow them to get a sense of the models' weaknesses). Users were informed of their score after each message, and provided with bonuses for good effort. The points did not affect the user's compensation, but rather, were provided as a way of gamifying the data collection, as this has been shown to increase data quality BIBREF25. Please see an example image of the chat interface in Figure FIGREF42.
Table 1: Dataset statistics for our splits of Wikipedia Toxic Comments.
Table 2: Comparison between our models based on fastText and BERT with the BiLSTM used by (Khatri et al., 2018) on Wikipedia Toxic Comments.
Figure 1: The build it, break it, fix it algorithm we use to iteratively train better models A0, . . . , AN. In experiments we perform N = 3 iterations of the break it, fix it loop for the single-turn utterance detection task, and a further iteration for the multi-turn task in a dialogue context setting.
Table 3: Language analysis of the single-turn standard and adversarial (round 1) tasks by human annotation of various language properties. Standard collection examples contain more words found in an offensive words list, while adversarial examples require more sophisticated language understanding.
Table 4: Percent of OFFENSIVE examples in each task containing profanity, the token “not”, as well as the average number of characters and tokens in each example. Rows 1-4 are the single-turn task, and the last row is the multi-turn task. Later rounds have less profanity and more use of negation as human breakers have to find more sophisticated language to adversarially attack our models.
Table 5: Dataset statistics for the single-turn rounds of the adversarial task data collection. There are three rounds in total, all of identical size, hence the numbers above can be divided for individual statistics. The standard task is an additional dataset of exactly the same size as above.
Table 6: Test performance of best standard models trained on standard task rounds (models Si for each round i) and best adversarial models trained on adversarial task rounds (models Ai). All models are evaluated using OFFENSIVE-class F1 on each round of both the standard task and adversarial task. A0 is the baseline model trained on the existing Wiki Toxic Comments (WTC) dataset. Adversarial models prove to be more robust than standard ones against attack (Adversarial Task 1-3), while still performing reasonably on Standard and WTC tasks.
Table 7: Adversarial data collection worker scores. Workers received a score out of 5 indicating how often (out of 5 rounds) they were able to get past our classifiers within two tries. In later single-turn rounds it is harder to defeat our models, but switching to multi-turn makes this easier again as new attacks can be found by using the dialogue context.
Table 8: Examples from the multi-turn adversarial task. Responses can be offensive only in context.
Table 9: Multi-turn adversarial task data statistics.
Table 10: Results of experiments on the multi-turn adversarial task. We denote the average and one standard deviation from the results of five runs. Models that use the context as input (“with context”) perform better. Encoding this in the architecture as well (via BERT dialogue segment features) gives us the best results.
Table 11: Adversarial data collection statistics. A0 is the baseline model, trained on the Wikipedia Toxic Comments dataset. Ai−1 is the model for round i, trained on the adversarial data for rounds n ≤ i − 1. In the case of the multi-turn set-up, Ai−1 is A3.
Table 12: Human annotation of 100 examples from each of the single-turn standard and adversarial (round 1) tasks into offensive classes.
Table 13: Full table of results from experiments on the single-turn standard and adversarial tasks. F1, precision, and recall are reported for the OFFENSIVE class, as well as weighted F1.
The Wikipedia Toxic Comments dataset
Why does Jig bluff to Beamish initially? A. He knows he can get away with it - Beamish has the money to match what they ask. B. He doesn't trust Shannon to close a good deal. C. He doesn't trust Beamish, and wants to see if he's committed to the idea. D. For them to start a new tour would be costly for them, and Jig wants to get the maximum price.
The Blue Behemoth By LEIGH BRACKETT Shannon's Imperial Circus was a jinxed space-carny leased for a mysterious tour of the inner worlds. It made a one-night pitch on a Venusian swamp-town—to find that death stalked it from the jungle in a tiny ball of flame. [Transcriber's Note: This etext was produced from Planet Stories May 1943. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Bucky Shannon leaned forward across the little hexagonal table. He knocked over the pitcher of thil , but it didn't matter. The pitcher was empty. He jabbed me in the breastbone with his forefinger, not very hard. Not hard enough to jar the ribs clean loose, just enough to spring them. "We," he said, "are broke. We are finished, through. Washed up and down the drain." He added, as an afterthought, "Destitute." I looked at him. I said sourly, "You're kidding!" "Kidding." Shannon put his elbows on the table and peered at me through a curtain of very blond hair that was trying hard to be red. "He says I'm kidding! With Shannon's Imperial Circus, the Greatest Show in Space, plastered so thick with attachments...." "It's no more plastered than you are." I was sore because he'd been a lot quicker grabbing the pitcher. "The Greatest Show in Space. Phooey! I've wet-nursed Shannon's Imperial Circus around the Triangle for eleven years, and I know. It's lousy, it's mangy, it's broken-down! Nothing works, from the ship to the roustabouts. In short, it stinks!" I must have had the pitcher oftener than I thought. Nobody insults Buckhalter Shannon's Imperial Circus to Buckhalter Shannon's face unless he's tired and wants a long rest in a comfy fracture-frame. Shannon got up. He got up slowly. I had plenty of time to see his grey-green eyes get sleepy, and hear the quarter-Earth-blood Martian girl wailing about love over by the battered piano, and watch the slanting cat-eyes of the little dark people at the tables swing round toward us, pleased and kind of hungry. I had plenty of time to think how I only weigh one-thirty-seven to Shannon's one-seventy-five, and how I'm not as young as I used to be. I said, "Bucky. Hold on, fella. I...." Somebody said, "Excuse me, gentlemen. Is one of you Mister Buckhalter Shannon?" Shannon put his hands down on his belt. He closed his eyes and smiled pleasantly and said, very gently: "Would you be collecting for the feed bill, or the fuel?" I shot a glance at the newcomer. He'd saved me from a beating, even if he was a lousy bill-collecter; and I felt sorry for him. Bucky Shannon settled his shoulders and hips like a dancer. The stranger was a little guy. He even made me look big. He was dressed in dark-green synthesilk, very conservative. There was a powdering of grey in his hair and his skin was pink, soft, and shaved painfully clean. He had the kind of a face that nice maiden-ladies will trust with their last dime. I looked for his strong-arm squad. There didn't seem to be any. The little guy looked at Shannon with pale blue eyes like a baby, and his voice was softer than Bucky's. He said, "I don't think you understand." I felt cold, suddenly, between the shoulders. Somebody scraped a chair back. It sounded like he'd ripped the floor open, it was so quiet. I got my brassies on, and my hands were sweating. Bucky Shannon sighed, and let his fist start traveling, a long, deceptive arc. Then I saw what the little guy was holding in his hand. I yelled and knocked the table over into Bucky. It made a lot of noise. 
It knocked him sideways and down, and the little dark men jumped up, quivering and showing their teeth. The Martian girl screamed. Bucky heaved the table off his lap and cursed me. "What's eating you, Jig? I'm not going to hurt him." "Shut up," I said. "Look what he's got there. Money!" The little guy looked at me. He hadn't turned a hair. "Yes," he said. "Money. Quite a lot of it. Would you gentlemen permit me to join you?" Bucky Shannon got up. He grinned his pleasantest grin. "Delighted. I'm Shannon. This is Jig Bentley, my business manager." He looked down at the table. "I'm sorry about that. Mistaken identity." The little guy smiled. He did it with his lips. The rest of his face stayed placid and babyish, almost transparent. I realized with a start that it wasn't transparent at all. It was the most complete dead-pan I ever met, and you couldn't see into those innocent blue eyes any more than you could see through sheet metal. I didn't like him. I didn't like him at all. But he had money. I said, "Howdy. Let's go find a booth. These Marshies make me nervous, looking like hungry cats at a mouse-hole." The little guy nodded. "Excellent idea. My name is Beamish. Simon Beamish. I wish to—ah—charter your circus." I looked at Bucky. He looked hungrier than the Marshies did. We didn't say anything until we got Beamish into a curtained booth with a fresh pitcher of thil on the table. Then I cleared my throat. "What exactly did you have in mind, Mr. Beamish?" Beamish sipped his drink, made a polite face, and put it down. "I have independent means, gentlemen. It has always been my desire to lighten the burden of life for those less fortunate...." Bucky got red around the ears. "Just a minute," he murmured, and started to get up. I kicked him under the table. "Shut up, you lug. Let Mister Beamish finish." He sat down, looking like a mean dog waiting for the postman. Beamish ignored him. He went on, quietly, "I have always held that entertainment, of the right sort, is the most valuable aid humanity can have in its search for the alleviation of toil and boredom...." I said, "Sure, sure. But what was your idea?" "There are many towns along the Venusian frontiers where no entertainment of the— proper sort has been available. I propose to remedy that. I propose to charter your circus, Mister Shannon, to make a tour of several settlements along the Tehara Belt." Bucky had relaxed. His grey-green eyes began to gleam. He started to speak, and I kicked him again. "That would be expensive, Mister Beamish," I said. "We'd have to cancel several engagements...." He looked at me. I was lying, and he knew it. But he said, "I quite understand that. I would be prepared...." The curtains were yanked back suddenly. Beamish shut up. Bucky and I glared at the head and shoulders poking in between the drapes. It was Gow, our zoo-man—a big, ugly son-of-a-gun from a Terran colony on Mercury. I was there once. Gow looks a lot like the scenery—scowling, unapproachable, and tough. His hands, holding the curtains apart, had thick black hair on them and were not much larger than the hams of a Venusian swamp-rhino. He said, "Boss, Gertrude's actin' up again." "Gertrude be blowed," growled Bucky. "Can't you see I'm busy?" Gow's black eyes were unpleasant. "I'm tellin' you, Boss, Gertrude ain't happy. She ain't had the right food. If something...." I said, "That'll all be taken care of, Gow. Run along now." He looked at me like he was thinking it wouldn't take much timber to fit me for a coffin. "Okay! But Gertrude's unhappy. 
She's lonesome, see? And if she don't get happier pretty soon I ain't sure your tin-pot ship'll hold her." He pulled the curtains to and departed. Bucky Shannon groaned. Beamish cleared his throat and said, rather stiffly, "Gertrude?" "Yeah. She's kind of temperamental." Bucky took a quick drink. I finished for him. "She's the star attraction of our show, Mr. Beamish. A real blue-swamp Venusian cansin . The only other one on the Triangle belongs to Savitt Brothers, and she's much smaller than Gertrude." She was also much younger, but I didn't go into that. Gertrude may be a little creaky, but she's still pretty impressive. I only hoped she wouldn't die on us, because without her we'd have a sicker-looking circus than even I could stand. Beamish looked impressed. "A cansin . Well, well! The mystery surrounding the origin and species of the cansin is a fascinating subject. The extreme rarity of the animal...." We were getting off the subject. I said tactfully, "We'd have to have at least a hundred U.C.'s." It was twice what we had any right to ask. I was prepared to dicker. Beamish looked at me with that innocent dead pan. For a fraction of a second I thought I saw something back of his round blue eyes, and my stomach jumped like it was shot. Beamish smiled sweetly. "I'm not much of a bargainer. One hundred Universal Credits will be agreeable to me." He dragged out a roll as big as my two fists, peeled off half a dozen credit slips, and laid them on the table. "By way of a retainer, gentleman. My attorney and I will call on you in the morning with a contract and itinerary. Good night." We said good night, trying not to drool. Beamish went away. Bucky made grab for the money, but I beat him to it. "Scram," I said. "There are guys waiting for this. Big guys with clubs. Here." I gave him a small-denomination slip I'd been holding out. "We can get lushed enough on this." Shannon has a good vocabulary. He used it. When he got his breath back he said suddenly, "Beamish is pulling some kind of a game." "Yeah." "It may be crooked." "Sure. And he may be screwball and on the level. For Pete's sake!" I yelled. "You want to sit here till we all dry up and blow away?" Shannon looked at me, kind of funny. He looked at the bulge in my tunic where the roll was. He raked back his thick light hair. "Yeah," he said. "I hope there'll be enough left to bribe the jury." He poked his head outside. "Hey, boy! More thildatum !" It was pretty late when we got back to the broken-down spaceport where Shannon's Imperial Circus was crouching beneath its attachments. Late as it was, they were waiting for us. About twenty of them, sitting around and smoking and looking very ugly. It was awfully lonesome out there, with the desert cold and restless under the two moons. There's a smell to Mars, like something dead and dried long past decay, but still waiting. An unhappy smell. The blown red dust gritted in my teeth. Bucky Shannon walked out into the glare of the light at the entrance to the roped-off space around the main lock. He was pretty steady on his feet. He waved and said, "Hiya, boys." They got up off the steps, and the packing cases, and came toward us. I grinned and got into my brassies. We felt we owed those boys a lot more than money. It grates on a man's pride to have to sneak in and out of his own property through the sewage lock. This was the first time in weeks we'd come in at the front door. I waved the money in their faces. That stopped them. 
Very solemnly, Bucky and I checked the bills, paid them, and pocketed the receipts. Bucky yawned and stretched sleepily. "Now?" he said. "Now," I said. We had a lot of fun. Some of the boys inside the ship came out to join in. We raised a lot of dust and nobody got killed, quite. We all went home happy. They had their money, and we had their blood. The news was all over the ship before we got inside. The freaks and the green girl from Tethys who could roll herself like a hoop, and Zurt the muscle man from Jupiter, and all the other assorted geeks and kinkers and joeys that make up the usual corny carnie were doing nip-ups in the passageways and drooling over the thought of steer and toppings. Bucky Shannon regarded them possessively, wiping blood from his nose. "They're good guys, Jig. Swell people. They stuck by me, and I've rewarded them." I said, "Sure," rather sourly. Bucky hiccoughed. "Let's go see Gertrude." I didn't want to see Gertrude. I never got over feeling funny going into the brute tank, especially at night or out in space. I'm a city guy, myself. The smell and sound of wildness gives me goose bumps. But Bucky was looking stubborn, so I shrugged. "Okay. But just for a minute. Then we go beddy-bye." "You're a pal, Jif. Bes' li'l' guy inna worl'...." The fight had just put the topper on him. I was afraid he'd fall down the ladder and break his neck. That's why I went along. If I hadn't.... Oh, well, what's a few nightmares among friends? It was dark down there in the tank. Way off at the other end, there was a dim glow. Gow was evidently holding Gertrude's hand. We started down the long passageway between the rows of cages and glassed-in tanks and compression units. Our footsteps sounded loud and empty on the iron floor. I wasn't near as happy as Shannon, and my skin began to crawl a little. It's the smell, I think; rank and sour and wild. And the sound of them, breathing and rustling in the dark, with the patient hatred walled around them as strong as the cage bars. Bucky Shannon lurched against me suddenly. I choked back a yell, and then wiped the sweat off my forehead and cursed. The scream came again. A high, ragged, whistling screech like nothing this side of hell, ripping through the musty darkness. Gertrude, on the wailing wall. It had been quiet. Now every brute in the place let go at the same time. My stomach turned clear over. I called Gertrude every name I could think of, and I couldn't hear myself doing it. Presently a great metallic clash nearly burst my eardrums, and the beasts shut up. Gow had them nicely conditioned to that gong. But they didn't quiet down. Not really. They were uneasy. You can feel them inside you when they're uneasy. I think that's why I'm scared of them. They make me feel like I'm not human as I thought—like I wanted to put my back-hair up and snarl. Yeah. They were uneasy that night, all of a sudden.... Gow glared at us as we came up into the lantern light. "She's gettin' worse," he said. "She's lonesome." "That's tough," said Bucky Shannon. His grey-green eyes looked like an owl's. He swayed slightly. "That's sure tough." He sniffled. I looked at Gertrude. Her cage is the biggest and strongest in the tank and even so she looked as though she could break it open just taking a deep breath. I don't know if you've ever seen a cansin . There's only two of them on the Triangle. If you haven't, nothing I can say will make much difference. They're what the brain gang calls an "end of evolution." Seems old Dame Nature had an idea that didn't jell. 
The cansins were pretty successful for a while, it seems, but something gummed up the works and now there's only a few left, way in the deep-swamp country, where even the Venusians hardly ever go. Living fossils. I wouldn't know, of course, but Gertrude looks to me like she got stuck some place between a dinosaur and a grizzly bear, with maybe a little bird blood thrown in. Anyway, she's big. I couldn't help feeling sorry for her. She was crouched in the cage with her hands—yeah, hands—hanging over her knees and her snaky head sunk into her shoulders, looking out. Just looking. Not at anything. Her eyes were way back in deep horny pits, like cold green fire. The lantern light was yellow on her blue-black skin, but it made the mane, or crest, of coarse wide scales that ran from between her eyes clear down to her flat, short tail, burn all colors. She looked like old Mother Misery herself, from way back before time began. Gow said softly, "She wants a mate. And somebody better get her one." Bucky Shannon sniffled again. I said irritably, "Be reasonable, Gow! Nobody's ever seen a male cansin . There may not even be any." Gertrude screamed again. She didn't move, not even to raise her head. The sadness just built up inside her until it had to come out. That close, the screech was deafening, and it turned me all limp and cold inside. The loneliness, the sheer stark, simple pain.... Bucky Shannon began to cry. I snarled, "You'll have to snap her out of this, Gow. She's driving the rest of 'em nuts." He hammered on his gong, and things quieted down again. Gow stood looking out over the tank, sniffing a little, like a hound. Then he turned to Gertrude. "I saved her life," he said. "When we bought her out of Hanak's wreck and everybody thought she was too hurt to live, I saved her. I know her. I can do things with her. But this time...." He shrugged. He was huge and tough and ugly, and his voice was like a woman's talking about a sick child. "This time," he said, "I ain't sure." "Well for Pete's sake, do what you can. We got a charter, and we need her." I took Shannon's arm. "Come to bed, Bucky darlin'." He draped himself over my shoulder and we went off. Gow didn't look at us. Bucky sobbed. "You were right, Jig," he mumbled. "Circus is no good. I know it. But it's all I got. I love it, Jig. Unnerstan' me? Like Gow there with Gertrude. She's ugly and no good, but he loves her. I love...." "Sure, sure," I told him. "Stop crying down my neck." We were a long way from the light, then. The cages and tanks loomed high and black over us. It was still. The secret, uneasy motion all around us and the scruffing of our feet only made it stiller. Bucky was almost asleep on me. I started to slap him. And then the mist rose up out of the darkness in little lazy coils, sparkling faintly with blue, cold fire. I yelled, "Gow! Gow, the Vapor snakes! Gow—for God's sake!" I started to run, back along the passageway. Bucky weighed on me, limp and heavy. The noise burst suddenly in a deafening hell of moans and roars and shrieks, packed in tight by the metal walls, and above it all I could hear Gertrude's lonely, whistling scream. I thought, " Somebody's down here. Somebody let 'em out. Somebody wants to kill us! " I tried to yell again. It strangled in my throat. I sobbed, and the sweat was thick and cold on me. One of Bucky's dragging, stumbling feet got between mine. We fell. I rolled on top of him, covering his face, and buried my own face in the hollow of his shoulder. The first snake touched me. 
It was like a live wire, sliding along the back of my neck. I screamed. It came down along my cheek, hunting my mouth. There were more of them, burning me through my clothes. Bucky moaned and kicked under me. I remember hanging on and thinking, "This is it. This is it, and oh God, I'm scared!" Then I went out. II Kanza the Martian croaker, was bending over me when I woke up. His little brown face was crinkled with laughter. He'd lost most of his teeth, and he gummed thak -weed. It smelt. "You pretty, Mis' Jig," he giggled. "You funny like hell." He slapped some cold greasy stuff on my face. It hurt. I cursed him and said, "Where's Shannon? How is he?" "Mis' Bucky okay. You save life. You big hero, Mis' Jig. Mis' Gow come nickuhtime get snakes. You hero. Haw! You funny like hell!" I said, "Yeah," and pushed him away and got up. I almost fell down a couple of times, but presently I made it to the mirror over the washstand—I was in my own cell—and I saw what Kanza meant. The damned snakes had done a good job. I looked like I was upholstered in Scotch plaid. I felt sick. Bucky Shannon opened the door. He looked white and grim, and there was a big burn across his neck. He said: "Beamish is here with his lawyer." I picked up my shirt. "Right with you." Kanza went out, still giggling. Bucky closed the door. "Jig," he said, "those vapor worms were all right when we went in. Somebody followed us down and let them out. On purpose." I hurt all over. I growled, "With that brain, son, you should go far. Nobody saw anything, of course?" Bucky shook his head. "Question is, Jig, who wants to kill us, and why?" "Beamish. He realizes he's been gypped." "One hundred U.C.'s," said Bucky softly, "for a few lousy swampedge mining camps. It stinks, Jig. You think we should back out?" I shrugged. "You're the boss man. I'm only the guy that beats off the creditors." "Yeah," Bucky said reflectively. "And I hear starvation isn't a comfortable death. Okay, Jig. Let's go sign." He put his hand on the latch and looked at my feet. "And—uh—Jig, I...." I said, "Skip it. The next time, just don't trip me up, that's all!" We had a nasty trip to Venus. Gertrude kept the brute tank on edge, and Gow, on the rare occasions he came up for air, went around looking like a disaster hoping to happen. To make it worse, Zurt the Jovian strong-man got hurt during the take-off, and the Mercurian cave-cat had kittens. Nobody would have minded that, only one of 'em had only four legs. It lived just long enough to scare that bunch of superstitious dopes out of their pants. Circus people are funny that way. Shannon and I did a little quiet sleuthing, but it was a waste of time. Anybody in the gang might have let those electric worms out on us. It didn't help any to know that somebody, maybe the guy next to you at dinner, was busy thinking ways to kill you. By the time we hit Venus, I was ready to do a Brodie out the refuse chute. Shannon set the crate down on the edge of Nahru, the first stop on our itinerary. I stood beside him, looking out the ports at the scenery. It was Venus, all right. Blue mud and thick green jungle and rain, and a bunch of ratty-looking plastic shacks huddling together in the middle of it. Men in slickers were coming out for a look. I saw Beamish's sleek yacht parked on a cradle over to the left, and our router's runabout beside it. Bucky Shannon groaned. "A blue one, Jig. A morgue if I ever saw one!" I snarled, "What do you want, with this lousy dog-and-pony show!" and went out. He followed. 
The gang was converging on the lock, but they weren't happy. You get so you can feel those things. The steamy Venus heat was already sneaking into the ship. While we passed the hatchway to the brute tank, I could hear Gertrude, screaming. The canvasmen were busy setting up the annex, slopping and cursing in the mud. The paste brigade was heading for the shacks. Shannon and I stood with the hot rain running off our slickers, looking. I heard a noise behind me and looked around. Ahra the Nahali woman was standing in the mud with her arms up and her head thrown back, and her triangular mouth open like a thirsty dog. She didn't have anything on but her blue-green, hard scaled hide, and she was chuckling. It didn't sound nice. You find a lot of Nahali people in side-shows, doing tricks with the electric power they carry in their own bodies. They're Venusian middle-swampers, they're not human, and they never forget it. Ahra opened her slitted red eyes and looked at me and laughed with white reptilian teeth. "Death," she whispered. "Death and trouble. The jungle tells me. I can smell it in the swamp wind." The hot rain sluiced over her. She shivered, and the pale skin under her jaw pulsed like a toad's, and her eyes were red. "The deep swamps are angry," she whispered. "Something has been taken. They are angry, and I smell death in the wind!" She turned away, laughing, and I cursed her, and my stomach was tight and cold. Bucky said, "Let's eat if they have a bar in this dump." We weren't half way across the mud puddle that passed as a landing field when a man came out of a shack on the edge of the settlement. We could see him plainly, because he was off to one side of the crowd. He fell on his knees in the mud, making noises. It took him three or four tries to get our names out clear enough to understand. Bucky said, "Jig—it's Sam Kapper." We started to run. The crowd, mostly big unshaven miners, wheeled around to see what was happening. People began to close in on the man who crawled and whimpered in the mud. Sam Kapper was a hunter, supplying animals to zoos and circuses and carnivals. He'd given us good deals a couple of times, when we weren't too broke, and we were pretty friendly. I hadn't seen him for three seasons. I remembered him as a bronzed, hard-bitten guy, lean and tough as a twist of tung wire. I felt sick, looking down at him. Bucky started to help him up. Kapper was crying, and he jerked all over like animals I've seen that were scared to death. Some guy leaned over and put a cigarette in his mouth and lighted it for him. I was thinking about Kapper, then, and I didn't pay much attention. I only caught a glimpse of the man's face as he straightened up. I didn't realize until later that he looked familiar. We got Kapper inside the shack. It turned out to be a cheap bar, with a couple of curtained booths at the back. We got him into one and pulled the curtain in a lot of curious faces. Kapper dragged hard on the cigarette. The man that gave it to him was gone. Bucky said gently, "Okay, Sam. Relax. What's the trouble?" Kapper tried to straighten up. He hadn't shaved. The lean hard lines of his face had gone slack and his eyes were bloodshot. He was covered with mud, and his mouth twitched like a sick old man's. He said thickly, "I found it. I said I'd do it, and I did. I found it and brought it out." The cigarette stub fell out of his mouth. He didn't notice it. "Help me," he said simply. "I'm scared." His mouth drooled. "I got it hidden. They want to find out, but I won't tell 'em. 
It's got to go back. Back where I found it. I tried to take it, but they wouldn't let me, and I was afraid they'd find it...." He reached suddenly and grabbed the edge of the table. "I don't know how they found out about it, but they did. I've got to get it back. I've got to...." Bucky looked at me. Kapper was blue around the mouth. I was scared, suddenly. I said, "Get what back where?" Bucky got up. "I'll get a doctor," he said. "Stick with him." Kapper grabbed his wrist. Kapper's nails were blue and the cords in his hands stood out like guy wires. "Don't leave me. Got to tell you—where it is. Got to take it back. Promise you'll take it back." He gasped and struggled over his breathing. "Sure," said Bucky. "Sure, we'll take it back. What is it?" Kapper's face was horrible. I felt sick, listening to him fight for air. I wanted to go for a doctor anyway, but somehow I knew it was no use. Kapper whispered, " Cansin . Male. Only one. You don't know...! Take him back." "Where is it, Sam?" I reached across Bucky suddenly and jerked the curtain back. Beamish was standing there. Beamish, bent over, with his ear cocked. Kapper made a harsh strangling noise and fell across the table. Beamish never changed expression. He didn't move while Bucky felt Kapper's pulse. Bucky didn't need to say anything. We knew. "Heart?" said Beamish finally. "Yeah," said Bucky. He looked as bad as I felt. "Poor Sam." I looked at the cigarette stub smoldering on the table. I looked at Beamish with his round dead baby face. I climbed over Shannon and pushed Beamish suddenly down into his lap. "Keep this guy here till I get back," I said. Shannon stared at me. Beamish started to get indignant. "Shut up," I told him. "We got a contract." I yanked the curtains shut and walked over to the bar. I began to notice something, then. There were quite a lot of men in the place. At first glance they looked okay—a hard-faced, muscular bunch of miners in dirty shirts and high boots. Then I looked at their hands. They were dirty enough. But they never did any work in a mine, on Venus or anywhere else. The place was awfully quiet, for that kind of a place. The bartender was a big pot-bellied swamp-edger with pale eyes and thick white hair coiled up on top of his bullet head. He was not happy. I leaned on the bar. " Lhak ," I said. He poured it, sullenly, out of a green bottle. I reached for it, casually. "That guy we brought in," I said. "He sure has a skinful. Passed out cold. What's he been spiking his drinks with?" " Selak ," said a voice in my ear. "As if you didn't know." I turned. The man who had given Kapper the cigarette was standing behind me. And I remembered him, then.
A. He knows he can get away with it - Beamish has the money to match what they ask.
According to the reviewers, Jack from "Fight Club" and Brandon Teena from "Boys Don't Cry" share the following: A. An unsupportive family B. An addictive personality C. A fascination with masculinity D. A sleep disorder
Boys Do Bleed Fight Club is silly stuff, sensationalism that mistakes itself for satire, but it's also a brash and transporting piece of moviemaking, like Raging Bull on acid. The film opens with--literally--a surge of adrenalin, which travels through the bloodstream and into the brain of its protagonist, Jack (Edward Norton), who's viewed, as the camera pulls out of his insides, with a gun stuck in his mouth. How'd he get into this pickle? He's going to tell you, breezily, and the director, David Fincher, is going to illustrate his narrative--violently. Fincher ( Seven , 1995; The Game , 1997) is out to bombard you with so much feverish imagery that you have no choice but to succumb to the movie's reeling, punch-drunk worldview. By the end, you might feel as if you, too, have a mouthful of blood. Not to mention a hole in your head. Fight Club careers from one resonant satirical idea to the next without quite deciding whether its characters are full of crap or are Gen X prophets. It always gives you a rush, though. At first, it goofs on the absurd feminization of an absurdly macho culture. An increasingly desperate insomniac, Jack finds relief (and release) only at meetings for the terminally ill. At a testicular cancer group, he's enfolded in the ample arms of Bob (the singer Meat Loaf Aday), a former bodybuilder who ruined his health with steroids and now has "bitch tits." Jack and Bob subscribe to a new form of male bonding: They cling to each other and sob. But Jack's idyll is rudely disrupted by--wouldn't you know it?--a woman. A dark-eyed, sepulchral head case named Marla Singer (Helena Bonham Carter) begins showing up at all the same disparate meetings for essentially the same voyeuristic ends, and the presence of this "tourist" makes it impossible for Jack to emote. Jack finds another outlet, though. On a plane, he meets Tyler Durden (Brad Pitt), a cryptic hipster with a penchant for subversive acts both large (he makes high-priced soaps from liposuctioned human fat) and small (he splices frames from porn flicks into kiddie movies). When Jack's apartment mysteriously explodes--along with his carefully chosen IKEA furniture--he moves into Tyler's squalid warehouse and helps to found a new religion: Fight Club, in which young males gather after hours in the basement of a nightclub to pound one another (and be pounded) to a bloody pulp. That last parenthesis isn't so parenthetical. In some ways, it's the longing to be beaten into oblivion that's the strongest. "Self-improvement," explains Tyler, "is masturbation"; self-destruction is the new way. Tyler's manifesto calls for an end to consumerism ("Things you own end up owning you"), and since society is going down ("Martha Stewart is polishing brass on the Titanic "), the only creative outlet left is annihilation. "It's only after we've lost everything that we're free to do anything," he says. Fincher and his screenwriter, Jim Uhls, seem to think they've broken new ground in Fight Club , that their metaphor for our discontents hits harder than anyone else's. Certainly it produces more bloody splatter. But 20 years ago, the same impulse was called punk and, as Greil Marcus documents in Lipstick Traces , it was other things before that. Yes, the mixture of Johnny Rotten, Jake La Motta, and Jesus is unique; and the Faludi-esque emasculation themes are more explicit. 
But there's something deeply movie-ish about the whole conceit, as if the novelist and director were weaned on Martin Scorsese pictures and never stopped dreaming of recapturing that first masochistic rush. The novel, the first by Chuck Palahniuk (the surname sounds like Eskimo for "palooka"--which somehow fits), walks a line between the straight and ironic--it isn't always clear if its glib sociological pronouncements are meant to be taken straight or as the ravings of a delusional mama's boy. But onscreen, when Pitt announces to the assembled fighters that they are the "middle children of history" with "no purpose and no place"--emasculated on one hand by the lack of a unifying crisis (a world war or depression) and on the other by lack of material wealth as promised by television--he seems meant to be intoning gospel. "We are a generation of men raised by women," Tyler announces, and adds, "If our fathers bail, what does that tell you about God?" (I give up: What?) Fight Club could use a few different perspectives: a woman's, obviously, but also an African-American's--someone who'd have a different take on the "healing" properties of violence. It's also unclear just what has emasculated Jack: Is it that he's a materialist or that the materials themselves (i.e., IKEA's lacquered particle boards) don't measure up to his fantasies of opulence? Is he motivated by spiritual hunger or envy? Tyler's subsequent idea of confining his group's mayhem to franchise coffee bars and corporate-subsidized art is a witty one--it's like a parody of neo-Nazism as re-enacted by yuppies. It might have been a howl if performed by, say, the troupe of artsy German nihilists in Joel and Ethan Coen's The Big Lebowski (1998). Somehow Brad Pitt doesn't have the same piquancy. Actually, Pitt isn't as terrible as usual: He's playing not a character but a conceit, and he can bask in his movie-idol arrogance, which seems to be the most authentic emotion he has. But the film belongs to Norton. As a ferocious skinhead in last year's American History X , Norton was taut and ropy, his long torso curled into a sneer; here, he's skinny and wilting, a quivering pansy. Even when he fights he doesn't transform--he's a raging wimp. The performance is marvelous, and it makes poetic sense in light of the movie's climactic twist. But that twist will annoy more people than it will delight, if only because it shifts the drama from the realm of the sociological to that of the psychoanalytic. The finale, scored with the Pixies' great "Where Is My Mind?" comes off facetiously--as if Fincher is throwing the movie away. Until then, however, he has done a fabulous job of keeping it spinning. The most thrilling thing about Fight Club isn't what it says but how Uhls and Fincher pull you into its narrator's head and simulate his adrenalin rushes. A veteran of rock videos, Fincher is one of those filmmakers who helps make the case that MTV--along with digital editing--has transformed cinema for better as well as worse. The syntax has become more intricate. Voice-over narration, once considered uncinematic, is back in style, along with novelistic asides, digressions, fantasies, and flashbacks. To make a point, you can jazzily interject anything--even, as in Three Kings , a shot of a bullet slicing through internal organs. Films like Fight Club might not gel, but they have a breathless, free-associational quality that points to new possibilities in storytelling. 
Or maybe old possibilities: The language of movies hasn't seemed this unfettered since the pre-sound days of Sergei Eisenstein and Abel Gance. An actress named Hilary Swank gives one of the most rapturous performances I've ever seen as the cross-dressing Brandon Teena (a k a Teena Brandon) in Kimberly Peirce's stark and astonishingly beautiful debut feature, Boys Don't Cry . The movie opens with Teena being shorn of her hated female tresses and becoming "Brandon," who swaggers around in tight jeans and leather jackets. The joy is in watching the actor transform, and I don't just mean Swank: I mean Teena Brandon playing Brandon Teena--the role she has been longing for her whole life. In a redneck Nebraska bar, Brandon throws back a shot of whiskey and the gesture--a macho cliché--becomes an act of self-discovery. Every gesture does. "You're gonna have a shiner in the morning," someone tells Brandon after a barroom brawl, and he takes the news with a glee that's almost mystical: "I am????? Oh, shit!!!" he cries, grinning. That might be my favorite moment in the picture, because Swank's ecstatic expression carries us through the next hour, as Brandon acts out his urban-cowboy fantasies--"surfing" from the bumper of a pickup truck, rolling in the mud, and straddling a barstool with one hand on a brewski and the other on the shoulder of a gorgeous babe. That the people with whom Brandon feels most at home would kill him if they knew his true gender is the movie's most tragic irony--and the one that lifts it out of the realm of gay-martyr hagiography and into something more complex and irreducible: a meditation on the irrelevance of gender. Peirce's triumph is to make these scenes at once exuberant (occasionally hilarious) and foreboding, so that all the seeds of Brandon's killing are right there on the screen. John (Peter Sarsgaard), one of his future rapists and murderers, calls him "little buddy" and seems almost attracted to him; Sarsgaard's performance is a finely chiseled study of how unresolved emotion can suddenly resolve itself into violence. Though harrowing, the second half of Boys Don't Cry isn't as great as the first. The early scenes evoke elation and dread simultaneously, the later ones just dread; and the last half-hour is unrelieved torture. What keeps the movie tantalizing is Chloë Sevigny's Lana, who might or might not know that Brandon is a girl but who's entranced by him anyway. With her lank hair, hooded eyes, and air of sleepy sensuality, Sevigny--maybe even more than Swank--embodies the mystery of sex that's at the core of Boys Don't Cry . Everything she does is deliberate, ironic, slightly unreadable--and unyielding. She could be saying, "I'm in this world but not of it. ... You'd never dream what's underneath." In brief: If a friend tells you you'll love Happy Texas , rethink the friendship. This clunky mistaken-identity comedy about escaped cons who impersonate gay pageant directors doesn't even make sense on its own low farcical terms; it's mostly one lame homo joke after another. The only bright spot is Steve Zahn, who could be the offspring of Michael J. Fox and Crispin Glover if they'd mated on the set of Back to the Future (1985). It's hard to make a serious case for Lawrence Kasdan's Mumford , which has apparently flopped but which you can still catch at second- and third-tier theaters. It looks peculiar--a Norman Rockwell painting with noir shadows. 
And its tale of a small town healed by a depressive (Loren Dean) posing as a psychologist is full of doddering misconceptions about psychotherapy. I almost don't know why I loved it, but the relaxed pacing and the witty turns by Martin Short, Ted Danson, David Paymer, and Mary McDonnell surely helped. I can't decide if the weirdly affectless Dean is inspired or inept, but my indecision suggests why he works in the role. There's no doubt, however, about his even more depressive love object, Hope Davis, who possesses the cinema's most expressive honking-nasal voice and who slumps through the movie like the world's most lyrical anti-ballerina. Even her puffy cheeks are eloquent: They made me think of Mumford as the home of the psychological mumps.
C. A fascination with masculinity
What is lung-rot? A. Lung-rot is a disease caused by chemicals in the Martian atmosphere. B. Lung-rot is tuberculosis. C. A disease that presents like whooping cough. D. Lung-rot is Martian slang for pneumonia.
Spacemen Die at Home By EDWARD W. LUDWIG Illustrated by THORNE [Transcriber's Note: This etext was produced from Galaxy Science Fiction October 1951. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] One man's retreat is another's prison ... and it takes a heap of flying to make a hulk a home! Forty days of heaven and forty nights of hell. That's the way it's been, Laura. But how can I make you understand? How can I tell you what it's like to be young and a man and to dream of reaching the stars? And yet, at the same time, to be filled with a terrible, gnawing fear—a fear locked in my mind during the day and bursting out like an evil jack-in-the-box at night. I must tell you, Laura. Perhaps if I start at the beginning, the very beginning.... It was the Big Day. All the examinations, the physicals and psychos, were over. The Academy, with its great halls and classrooms and laboratories, lay hollow and silent, an exhausted thing at sleep after spawning its first-born. For it was June in this year of 1995, and we were the graduating class of the U. S. Academy of Interplanetary Flight. The first graduating class, Laura. That's why it was so important, because we were the first . We sat on a little platform, twenty-five of us. Below us was a beach of faces, most of them strange, shining like pebbles in the warm New Mexican sunlight. They were the faces of mothers and fathers and grandparents and kid brothers and sisters—the people who a short time ago had been only scrawled names on letters from home or words spoken wistfully at Christmas. They were the memory-people who, to me, had never really existed. But today they had become real, and they were here and looking at us with pride in their eyes. A voice was speaking, deep, sure, resonant. "... these boys have worked hard for six years, and now they're going to do a lot of big things. They're going to bring us the metals and minerals that we desperately need. They're going to find new land for our colonists, good rich land that will bear food and be a home for our children. And perhaps most important of all, they'll make other men think of the stars and look up at them and feel humility—for mankind needs humility." The speaker was Robert Chandler, who'd brought the first rocket down on Mars just five years ago, who'd established the first colony there, and who had just returned from his second hop to Venus. Instead of listening to his words, I was staring at his broad shoulders and his dark, crew-cut hair and his white uniform which was silk-smooth and skin-tight. I was worshiping him and hating him at the same time, for I was thinking: He's already reached Mars and Venus. Let him leave Jupiter and the others alone! Let us be the first to land somewhere! Let us be the first! Mickey Cameron, sitting next to me, dug an elbow into my ribs. "I don't see 'em, Ben," he whispered. "Where do you suppose they are?" I blinked. "Who?" "My folks." That was something I didn't have to worry about. My parents had died in a strato-jet crash when I was four, so I hadn't needed many of those "You are cordially invited" cards. Just one, which I'd sent to Charlie Taggart. Stardust Charlie, we called him, although I never knew why. He was a veteran of Everson's first trip to the Moon nearly twenty-five years ago, and he was still at it. He was Chief Jetman now on the Lunar Lady , a commercial ore ship on a shuttle between Luna City and White Sands. 
I remembered how, as a kid, I'd pestered him in the Long Island Spaceport, tagging after him like a puppy, and how he'd grown to like me until he became father, mother, and buddy all in one to me. And I remembered, too, how his recommendation had finally made me a cadet. My gaze wandered over the faces, but I couldn't find Charlie's. It wasn't surprising. The Lunar Lady was in White Sands now, but liberties, as Charlie said, were as scarce as water on Mars. It doesn't matter , I told myself. Then Mickey stiffened. "I see 'em, Ben! There in the fifth row!" Usually Mickey was the same whether in a furnace-hot engine room or a garden party, smiling, accepting whatever the world offered. But now a tenseness and an excitement had gripped even him. I was grateful that he was beside me; we'd been a good team during those final months at the Academy and I knew we'd be a good team in space. The Universe was mighty big, but with two of us to face it together, it would be only half as big. And then it seemed that all the proud faces were looking at us as if we were gods. A shiver went through my body. Though it was daytime, I saw the stars in my mind's vision, the great shining balls of silver, each like a voice crying out and pleading to be explored, to be touched by the sons of Earth. They expect a lot from us. They expect us to make a new kind of civilization and a better place out of Earth. They expect all this and a hell of a lot more. They think there's nothing we can't do. I felt very small and very humble. I was scared. Damned scared. At last it was over, and the proud faces descended upon us in a huge, babbling wave. Then I saw him. Good old Stardust Charlie. His wizened little body was shuffling down an aisle, his eyes shining like a child's. He'd been sandwiched, evidently, in one of the rear rows. But he wasn't the Charlie I'd seen a year ago. He'd become gaunt and old, and he walked with an unnatural stiffness. He looked so old that it was hard to believe he'd once been young. He scratched his mop of steel-gray hair and grinned. "You made it, boy," he chortled, "and by Jupiter, we'll celebrate tonight. Yes, siree, I got twenty-four hours, and we'll celebrate as good spacemen should!" Then Mickey strode up to us. He was his normal, boyish self again, walking lightly, his blond, curly-haired skull swaying as if in rhythm with some silent melody. And you, Laura, were with him. "Meet the Brat," he said. "My sister Laura." I stared almost rudely. You were like a doll lost in the immensity of your fluffy pink dress. Your hair was long and transformed into a golden froth where sunlight touched it. But your eyes were the eyes of a woman, glowing like dark stars and reflecting a softness, a gentleness that I'd never seen in eyes before. "I'm happy to meet you, Ben," you said. "I've heard of no one else for the past year." A tide of heat crept up from my collar. I stuttered through an introduction of Charlie. You and Mickey looked strangely at Charlie, and I realized that old Stardust was not a cadet's notion of the ideal spaceman. Charlie scorned the skin-tight uniforms of the government service and wore a shiny black suit that was a relic of Everson's early-day Moon Patrol. His tie was clumsily knotted, and a button on his coat was missing. And the left side of his face was streaked with dark scar tissue, the result of an atomic blowup on one of the old Moon ships. I was so accustomed to the scars, I was seldom aware of them; but others, I knew, would find them ugly. You were kind. 
You shook hands and said, softly: "It's a privilege to meet you, Charlie. Just think—one of Everson's men, one of the first to reach the Moon!" Charlie gulped helplessly, and Mickey said: "Still going to spend the weekend with us, aren't you, Ben?" I shook my head. "Charlie has only twenty-four hours liberty. We're planning to see the town tonight." "Why don't you both come with us?" you asked. "Our folks have their own plane, so it would be no problem. And we've got a big guest room. Charlie, wouldn't you like a home-cooked meal before going back to the Moon?" Charlie's answer was obscured by a sudden burst of coughing. I knew that he'd infinitely prefer to spend his liberty sampling Martian fizzes and Plutonian zombies. But this night seemed too sacred for Charlie's kind of celebration. "We'd really like to come," I said. On our way to the 'copter parking field, Dean Dawson passed us. He was a tall, willowy man, spectacled, looking the way an academy professor should look. "Ben," he called, "don't forget that offer. Remember you've got two months to decide." "No, thanks," I answered. "Better not count on me." A moment later Mickey said, frowning, "What was he talking about, Ben? Did he make you an offer?" I laughed. "He offered me a job here at the Academy teaching astrogation. What a life that would be! Imagine standing in a classroom for forty years when I've got the chance to—" I hesitated, and you supplied the right words: "When you've got the chance to be the first to reach a new planet. That's what most of you want, isn't it? That's what Mickey used to want." I looked at you as if you were Everson himself, because you seemed to understand the hunger that could lie in a man's heart. Then your last words came back and jabbed me: "That's what Mickey used to want." " Used to want?" I asked. "What do you mean?" You bit your lip, not answering. "What did she mean, Mickey?" Mickey looked down at his feet. "I didn't want to tell you yet, Ben. We've been together a long time, planning to be on a rocket. But—" "Yes?" "Well, what does it add up to? You become a spaceman and wear a pretty uniform. You wade through the sands of Mars and the dust of Venus. If you're lucky, you're good for five, maybe ten years. Then one thing or another gets you. They don't insure rocketmen, you know." My stomach was full of churning, biting ice. "What are you trying to say, Mickey?" "I've thought about it a long time. They want me for Cargo Supervisor of White Sands Port." He raised his hand to stop me. "I know. It's not so exciting. I'll just live a lot longer. I'm sorry, Ben." I couldn't answer. It was as if someone had whacked the back of my knees with the blast of a jet. "It doesn't change anything, Ben—right now, I mean. We can still have a good weekend." Charlie was muttering under his breath, smoldering like a bomb about to reach critical mass. I shook my head dazedly at him as we got to the 'copter. "Sure," I said to Mickey, "we can still have a good weekend." I liked your folks, Laura. There was no star-hunger in them, of course. They were simple and solid and settled, like green growing things, deep-rooted, belonging to Earth. They were content with a home that was cool on this warm summer night, with a 'copter and a tri-dimensional video, and a handsome automatic home that needed no servants or housework. Stardust Charlie was as comfortable as a Martian sand-monkey in a shower, but he tried courageously to be himself. 
At the dinner table he stared glassily at nothing and grated, "Only hit Mars once, but I'll never forget the kid who called himself a medic. Skipper started coughing, kept it up for three days. Whoopin' cough, the medic says, not knowin' the air had chemicals that turned to acid in your lungs. I'd never been to Mars before, but I knew better'n that. Hell, I says, that ain't whoopin' cough, that's lung-rot." That was when your father said he wasn't so hungry after all. Afterward, you and I walked onto the terrace, into the moonlit night, to watch for crimson-tailed continental rockets that occasionally streaked up from White Sands. We gazed for a few seconds up into the dark sky, and then you said: "Charlie is funny, isn't he? He's nice and I'm glad he's here, but he's sort of funny." "He's an old-time spaceman. You didn't need much education in those days, just a lot of brawn and a quick mind. It took guts to be a spaceman then." "But he wasn't always a spaceman. Didn't he ever have a family?" I smiled and shook my head. "If he had, he never mentioned it. Charlie doesn't like to be sentimental, at least not on the outside. As far as I know, his life began when he took off for the Moon with Everson." You stared at me strangely, almost in a sacred kind of way. I knew suddenly that you liked me, and my heart began to beat faster. There was silence. You were lovely, your soft hair like strands of gold, and there were flecks of silver in your dark eyes. Somehow I was afraid. I had the feeling that I shouldn't have come here. You kept looking at me until I had to ask: "What are you thinking, Laura?" You laughed, but it was a sad, fearful laugh. "No, I shouldn't be thinking it. You'd hate me if I told you, and I wouldn't want that." "I could never hate you." "It—it's about the stars," you said very softly. "I understand why you want to go to them. Mickey and I used to dream about them when we were kids. Of course I was a girl, so it was just a game to me. But once I dreamed of going to England. Oh, it was going to be so wonderful. I lived for months, just thinking about it. "One summer we went. I had fun. I saw the old buildings and castles, and the spaceports and the Channel Tube. But after it was over, I realized England wasn't so different from America. Places seem exciting before you get to them, and afterward they're not really." I frowned. "And you mean it might be the same with the stars? You think maybe I haven't grown up yet?" Anxiety darkened your features. "No, it'd be good to be a spaceman, to see the strange places and make history. But is it worth it? Is it worth the things you'd have to give up?" I didn't understand at first, and I wanted to ask, "Give up what ?" Then I looked at you and the promise in your eyes, and I knew. All through the years I'd been walking down a single, narrow path. Government boarding school, the Academy, my eyes always upward and on the stars. Now I'd stumbled into a cross-roads, beholding a strange new path that I'd never noticed before. You can go into space , I thought, and try to do as much living in ten years as normal men do in fifty. You can be like Everson, who died in a Moon crash at the age of 36, or like a thousand others who lie buried in Martian sand and Venusian dust. Or, if you're lucky, like Charlie—a kind of human meteor streaking through space, eternally alone, never finding a home. Or there's the other path. To stay on this little prison of an Earth in cool, comfortable houses. To be one of the solid, rooted people with a wife and kids. 
To be one of the people who live long enough to grow old, who awake to the song of birds instead of rocket grumblings, who fill their lungs with the clean rich air of Earth instead of poisonous dust. "I'm sorry," you said. "I didn't mean to make you sad, Ben." "It's all right," I said, clenching my fists. "You made sense—a lot of sense." The next morning Charlie said good-bye in our room. He rubbed his scarred face nervously as he cleared his throat with a series of thin, tight coughs. Then he pointed to a brown, faded tin box lying on the bed. "I'm leavin' that for you. It's full of old stuff, souvenirs mostly. Thought maybe you'd like to have 'em." I scowled, not understanding. "Why, Charlie? What for?" He shrugged as if afraid he might be accused of sentimentality. "Oh, it's just that I've been dodgin' meteors now for twenty-five years. That's a long time, boy. Ain't one spaceman in a thousand that lucky. Some of these days, I won't be so lucky." I tried to laugh. "You're good for another twenty-five years, Charlie." He shook his head stiffly, staring at nothing. "Maybe. Anyway, I'm gonna get off the Shuttle this time, make one more trip to Mars. Tell you what. There's a little stone cafe on Mars, the Space Rat , just off Chandler Field on the Grand Canal. When you get to Mars, take a look inside. I'll probably be there." He coughed again, a deep, rasping cough that filled his eyes with tears. "Not used to this Earth air," he muttered. "What I need's some Martian climate." Suddenly that cough frightened me. It didn't seem normal. I wondered, too, about his stiff movements and glassy stare. It was as if he were drugged. I shook the thought away. If Charlie was sick, he wouldn't talk about going to Mars. The medics wouldn't let him go even as far as Luna. We watched him leave, you and Mickey and I. "When will you be back?" you asked. Charlie's hard face contorted itself into a gargoylish grin. "Maybe a couple of months, maybe a couple of years. You know spacemen." Then he waved and strode away, a strange, gray, withered gnome of a man. I wanted him to say something, to tell me the secret that would kill the doubt worming through my brain. But he rounded a corner, still grinning and waving, and then he was gone. That afternoon Mickey showed me his room. It was more like a boy's room than a spaceman's. In it were all the little things that kids treasure—pennants, models of Everson's two ships, a tennis trophy, books, a home-made video. I began to realize how important a room like this could be to a boy. I could imagine, too, the happiness that parents felt as they watched their children grow to adulthood. I'd missed something. My folks were shadow-people, my impressions of them drawn half from ancient photos, half from imagination. For me, it had been a cold, automatic kind of life, the life of dormitories and routines and rules. I'd been so blinded by the brilliancy of my dreams, I hadn't realized I was different. My folks were killed in a rocket crash. If it weren't for rockets, I'd have lived the kind of life a kid should live. Mickey noticed my frown. "What's the matter, Ben? Still sore? I feel like a heel, but I'm just not like you and Charlie, I guess. I—" "No, I understand, Mickey. I'm not sore, really." "Listen, then. You haven't accepted any offer yet, have you?" "No. I got a couple of possibilities. Could get a berth on the Odyssey , the new ship being finished at Los Angeles. They want me, too, for the Moon Patrol, but that's old stuff, not much better than teaching. 
I want to be in deep space." "Well, how about staying with us till you decide? Might as well enjoy Earth life while you can. Okay?" I felt like running from the house, to forget that it existed. I wanted someone to tell me one of the old stories about space, a tale of courage that would put fuel on dying dreams. But I wanted, also, to be with you, Laura, to see your smile and the flecks of silver in your eyes and the way your nose turned upward ever so slightly when you laughed. You see, I loved you already, almost as much as I loved the stars. And I said, slowly, my voice sounding unfamiliar and far away, "Sure, I'll stay, Mickey. Sure." Forty days of joy, forty nights of fear and indecision. We did all the little things, like watching the rockets land at White Sands and flying down to the Gulf to swim in cool waters. You tried, unsuccessfully, to teach me to dance, and we talked about Everson and Charlie and the Moon and the stars. You felt you had to give the stars all the beauty and promise of a child's dream, because you knew that was what I wanted. One morning I thought, Why must I make a choice? Why can't I have both you and the stars? Would that be asking too much? All day the thought lay in my mind like fire. That evening I asked you to marry me. I said it very simply: "Laura, I want you to be my wife." You looked up at Venus, and you were silent for a long while, your face flushed. Then you murmured, "I—I want to marry you, Ben, but are you asking me to marry a spaceman or a teacher?" "Can't a spaceman marry, too?" "Yes, a spaceman can marry, but what would it be like? Don't you see, Ben? You'd be like Charlie. Gone for maybe two months, maybe two years. Then you'd have a twenty-four hour liberty—and I'd have what?" Somehow I'd expected words like these, but still they hurt. "I wouldn't have to be a spaceman forever. I could try it for a couple of years, then teach." "Would you, Ben? Would you be satisfied with just seeing Mars? Wouldn't you want to go on to Jupiter and Saturn and Uranus and on and on?" Your voice was choked, and even in the semi-darkness I saw tears glittering in your eyes. "Do you think I'd dare have children, Ben? Mickey told me what happened on the Cyclops . There was a leak in the atomic engines. The ship was flooded with radiation—just for a second. It didn't seem serious. The men had no burns. But a year later the captain had a child. And it was—" "I know, Laura. Don't say it." You had to finish. "It was a monster." That night I lay awake, the fears and doubts too frantic to let me sleep. You've got to decide now , I told myself. You can't stay here. You've got to make a choice. The teaching job was still open. The spot on the Odyssey was still open—and the big ship, it was rumored, was equipped to make it all the way to Pluto. You can take Dean Dawson's job and stay with Laura and have kids and a home and live to see what happens in this world sixty years from now. Or you can see what's on the other side of the mountain. You can be a line in a history book. I cursed. I knew what Charlie would say. He'd say, "Get the hell out of there, boy. Don't let a fool woman make a sucker out of you. Get out there on the Odyssey where you belong. We got a date on Mars, remember? At the Space Rat , just off Chandler Field on the Grand Canal." That's what he'd say. And yet I wanted you, Laura. I wanted to be with you, always. "Oh God," I moaned, "what shall I do?" Next morning the door chimes pealed, and you went to the door and brought back the audiogram. 
It was addressed to me; I wondered who could be sending me a message. I pressed the stud on the little gray cylinder, and a rasping, automatic voice droned: "Luna City, Luna, July 27, 1995. Regret to inform you of death of Charles Taggart, Chief Jetman...." Then there was a Latin name which was more polite than the word "lung-rot" and the metallic phrase, "This message brought to you by courtesy of United Nations Earth-Luna Communication Corps." I stood staring at the cylinder. Charles Taggart was dead. Charles Taggart was Charlie. Stardust Charlie. My heart thudded crazily against my chest. It couldn't be! Not Charlie! The audiogram had lied! I pressed the stud again. "... regret to inform you of death of Charles ..." I hurled the cylinder at the wall. It thudded, fell, rolled. The broken voice droned on. You ran to it, shut it off. "I'm sorry, Ben, so terribly—" Without answering, I walked into my room. I knew it was true now. I remembered Charlie's coughing, his gaunt features, his drugged gaze. The metallic words had told the truth. I sat for a long time on my bed, crying inside, but staring dry-eyed at Charlie's faded tin box. Then, finally, I fingered his meager possessions—a few wrinkled photos, some letters, a small black statue of a forgotten Martian god, a gold service medal from the Moon Patrol. This was what remained of Charlie after twenty-five years in space. It was a bitter bargain. A statue instead of a wife, yellowed letters instead of children, a medal instead of a home. It'd be a great future , I thought. You'd dream of sitting in a dingy stone dive on the Grand Canal with sand-wasps buzzing around smoky, stinking candles. A bottle of luchu juice and a couple of Martian girls with dirty feet for company. And a sudden cough that would be the first sign of lung-rot. To hell with it! I walked into your living room and called Dean Dawson on the visiphone. I accepted that job teaching. And now, Laura, it's nearly midnight. You're in your room, sleeping, and the house is silent. It's hard to tell you, to make you understand, and that is why I am writing this. I looked through Charlie's box again, more carefully this time, reading the old letters and studying the photographs. I believe now that Charlie sensed my indecision, that he left these things so that they could tell me what he could not express in words. And among the things, Laura, I found a ring. A wedding ring. In that past he never talked about, there was a woman—his wife. Charlie was young once, his eyes full of dreams, and he faced the same decision that I am facing. Two paths were before him, but he tried to travel both. He later learned what we already know—that there can be no compromise. And you know, too, which path he finally chose. Do you know why he had to drug himself to watch me graduate? So he could look at me, knowing that I would see the worlds he could never live to see. Charlie didn't leave just a few trinkets behind him. He left himself, Laura, for he showed me that a boy's dream can also be a man's dream. He made his last trip to Luna when he knew he was going to die. Heaven knows how he escaped a checkup. Maybe the captain understood and was kind—but that doesn't matter now. Do you know why he wanted to reach Mars? Do you know why he didn't want to die in the clean, cool air of Earth? It was because he wanted to die nearer home. His home, Laura, was the Universe, where the ship was his house, the crew his father, mother, brothers, the planets his children. 
You say that the beauty of the other side of the mountain vanishes after you reach it. But how can one ever be sure until the journey is made? Could I or Charlie or the thousand before us bear to look upon a star and think, I might have gone there; I could have been the first ? We said, too, that the life of a spaceman is lonely. Yet how could one be lonely when men like Charlie roam the spaceways? Charlie wanted me to himself that night after graduation. He wanted us to celebrate as spacemen should, for he knew that this would be his last night on Earth. It might have seemed an ugly kind of celebration to you, but he wanted it with all his heart, and we robbed him of it. Because of these things, Laura, I will be gone in the morning. Explain the best you can to Mickey and to your parents and Dean Dawson. Right now I've got a date that I'm going to keep—at a dingy stone cafe on Mars, the Space Rat , just off Chandler Field on the Grand Canal. Stardust Charlie will be there; he'll go with me in memory to whatever part of the Galaxy I may live to reach. And so will you, Laura. I have two wedding rings with me—his wife's ring and yours.
A. Lung-rot is a disease caused by chemicals in the Martian atmosphere.
Why did the aliens separate when they went down to the surface instead of working together? A. They want to take advantage of some alone time while they are not on their main ship. B. They have different knowledge of these different areas, as they travel to the areas they know more about. C. There is not space for two aliens in one small landing craft, so they must split up. D. They thought they could cover more ground this way and talk to more people about maintaining peace.
SECOND LANDING By FLOYD WALLACE A gentle fancy for the Christmas Season—an oft-told tale with a wistful twistful of Something that left the Earth with a wing and a prayer. Earth was so far away that it wasn't visible. Even the sun was only a twinkle. But this vast distance did not mean that isolation could endure forever. Instruments within the ship intercepted radio broadcasts and, within the hour, early TV signals. Machines compiled dictionaries and grammars and began translating the major languages. The history of the planet was tabulated as facts became available. The course of the ship changed slightly; it was not much out of the way to swing nearer Earth. For days the two within the ship listened and watched with little comment. They had to decide soon. "We've got to make or break," said the first alien. "You know what I'm in favor of," said the second. "I can guess," said Ethaniel, who had spoken first. "The place is a complete mess. They've never done anything except fight each other—and invent better weapons." "It's not what they've done," said Bal, the second alien. "It's what they're going to do, with that big bomb." "The more reason for stopping," said Ethaniel. "The big bomb can destroy them. Without our help they may do just that." "I may remind you that in two months twenty-nine days we're due in Willafours," said Bal. "Without looking at the charts I can tell you we still have more than a hundred light-years to go." "A week," said Ethaniel. "We can spare a week and still get there on time." "A week?" said Bal. "To settle their problems? They've had two world wars in one generation and that the third and final one is coming up you can't help feeling in everything they do." "It won't take much," said Ethaniel. "The wrong diplomatic move, or a trigger-happy soldier could set it off. And it wouldn't have to be deliberate. A meteor shower could pass over and their clumsy instruments could interpret it as an all-out enemy attack." "Too bad," said Bal. "We'll just have to forget there ever was such a planet as Earth." "Could you? Forget so many people?" "I'm doing it," said Bal. "Just give them a little time and they won't be here to remind me that I have a conscience." "My memory isn't convenient," said Ethaniel. "I ask you to look at them." Bal rustled, flicking the screen intently. "Very much like ourselves," he said at last. "A bit shorter perhaps, and most certainly incomplete. Except for the one thing they lack, and that's quite odd, they seem exactly like us. Is that what you wanted me to say?" "It is. The fact that they are an incomplete version of ourselves touches me. They actually seem defenseless, though I suppose they're not." "Tough," said Bal. "Nothing we can do about it." "There is. We can give them a week." "In a week we can't negate their entire history. We can't begin to undo the effect of the big bomb." "You can't tell," said Ethaniel. "We can look things over." "And then what? How much authority do we have?" "Very little," conceded Ethaniel. "Two minor officials on the way to Willafours—and we run directly into a problem no one knew existed." "And when we get to Willafours we'll be busy. It will be a long time before anyone comes this way again." "A very long time. There's nothing in this region of space our people want," said Ethaniel. "And how long can Earth last? Ten years? Even ten months? The tension is building by the hour." "What can I say?" said Bal. "I suppose we can stop and look them over. We're not committing ourselves by looking." 
They went much closer to Earth, not intending to commit themselves. For a day they circled the planet, avoiding radar detection, which for them was not difficult, testing, and sampling. Finally Ethaniel looked up from the monitor screen. "Any conclusions?" "What's there to think? It's worse than I imagined." "In what way?" "Well, we knew they had the big bomb. Atmospheric analysis showed that as far away as we were." "I know." "We also knew they could deliver the big bomb, presumably by some sort of aircraft." "That was almost a certainty. They'd have no use for the big bomb without aircraft." "What's worse is that I now find they also have missiles, range one thousand miles and upward. They either have or are near a primitive form of space travel." "Bad," said Ethaniel. "Sitting there, wondering when it's going to hit them. Nervousness could set it off." "It could, and the missiles make it worse," said Bal. "What did you find out at your end?" "Nothing worthwhile. I was looking at the people while you were investigating their weapons." "You must think something." "I wish I knew what to think. There's so little time," Ethaniel said. "Language isn't the difficulty. Our machines translate their languages easily and I've taken a cram course in two or three of them. But that's not enough, looking at a few plays, listening to advertisements, music, and news bulletins. I should go down and live among them, read books, talk to scholars, work with them, play." "You could do that and you'd really get to know them. But that takes time—and we don't have it." "I realize that." "A flat yes or no," said Bal. "No. We can't help them," said Ethaniel. "There is nothing we can do for them—but we have to try." "Sure, I knew it before we started," said Bal. "It's happened before. We take the trouble to find out what a people are like and when we can't help them we feel bad. It's going to be that way again." He rose and stretched. "Well, give me an hour to think of some way of going at it." It was longer than that before they met again. In the meantime the ship moved much closer to Earth. They no longer needed instruments to see it. The planet revolved outside the visionports. The southern plains were green, coursed with rivers; the oceans were blue; and much of the northern hemisphere was glistening white. Ragged clouds covered the pole, and a dirty pall spread over the mid-regions of the north. "I haven't thought of anything brilliant," said Ethaniel. "Nor I," said Bal. "We're going to have to go down there cold. And it will be cold." "Yes. It's their winter." "I did have an idea," said Bal. "What about going down as supernatural beings?" "Hardly," said Ethaniel. "A hundred years ago it might have worked. Today they have satellites. They are not primitives." "I suppose you're right," said Bal. "I did think we ought to take advantage of our physical differences." "If we could I'd be all for it. But these people are rough and desperate. They wouldn't be fooled by anything that crude." "Well, you're calling it," said Bal. "All right," said Ethaniel. "You take one side and I the other. We'll tell them bluntly what they'll have to do if they're going to survive, how they can keep their planet in one piece so they can live on it." "That'll go over big. Advice is always popular." "Can't help it. That's all we have time for." "Special instructions?" "None. We leave the ship here and go down in separate landing craft. You can talk with me any time you want to through our communications, but don't unless you have to." 
"They can't intercept the beams we use." "They can't, and even if they did they wouldn't know what to do with our language. I want them to think that we don't need to talk things over." "I get it. Makes us seem better than we are. They think we know exactly what we're doing even though we don't." "If we're lucky they'll think that." Bal looked out of the port at the planet below. "It's going to be cold where I'm going. You too. Sure we don't want to change our plans and land in the southern hemisphere? It's summer there." "I'm afraid not. The great powers are in the north. They are the ones we have to reach to do the job." "Yeah, but I was thinking of that holiday you mentioned. We'll be running straight into it. That won't help us any." "I know, they don't like their holidays interrupted. It can't be helped. We can't wait until it's over." "I'm aware of that," said Bal. "Fill me in on that holiday, anything I ought to know. Probably religious in origin. That so?" "It was religious a long time ago," said Ethaniel. "I didn't learn anything exact from radio and TV. Now it seems to be chiefly a time for eating, office parties, and selling merchandise." "I see. It has become a business holiday." "That's a good description. I didn't get as much of it as I ought to have. I was busy studying the people, and they're hard to pin down." "I see. I was thinking there might be some way we could tie ourselves in with this holiday. Make it work for us." "If there is I haven't thought of it." "You ought to know. You're running this one." Bal looked down at the planet. Clouds were beginning to form at the twilight edge. "I hate to go down and leave the ship up here with no one in it." "They can't touch it. No matter how they develop in the next hundred years they still won't be able to get in or damage it in any way." "It's myself I'm thinking about. Down there, alone." "I'll be with you. On the other side of the Earth." "That's not very close. I'd like it better if there were someone in the ship to bring it down in a hurry if things get rough. They don't think much of each other. I don't imagine they'll like aliens any better." "They may be unfriendly," Ethaniel acknowledged. Now he switched a monitor screen until he looked at the slope of a mountain. It was snowing and men were cutting small green trees in the snow. "I've thought of a trick." "If it saves my neck I'm for it." "I don't guarantee anything," said Ethaniel. "This is what I was thinking of: instead of hiding the ship against the sun where there's little chance it will be seen, we'll make sure that they do see it. Let's take it around to the night side of the planet and light it up." "Say, pretty good," said Bal. "They can't imagine that we'd light up an unmanned ship," said Ethaniel. "Even if the thought should occur to them they'll have no way of checking it. Also, they won't be eager to harm us with our ship shining down on them." "That's thinking," said Bal, moving to the controls. "I'll move the ship over where they can see it best and then I'll light it up. I'll really light it up." "Don't spare power." "Don't worry about that. They'll see it. Everybody on Earth will see it." Later, with the ship in position, glowing against the darkness of space, pulsating with light, Bal said: "You know, I feel better about this. We may pull it off. Lighting the ship may be just the help we need." "It's not we who need help, but the people of Earth," said Ethaniel. "See you in five days." 
With that he entered a small landing craft, which left a faintly luminescent trail as it plunged toward Earth. As soon as it was safe to do so, Bal left in another craft, heading for the other side of the planet. And the spaceship circled Earth, unmanned, blazing and pulsing with light. No star in the winter skies of the planet below could equal it in brilliancy. Once a man-made satellite came near but it was dim and was lost sight of by the people below. During the day the ship was visible as a bright spot of light. At evening it seemed to burn through the sunset colors. And the ship circled on, bright, shining, seeming to be a little piece clipped from the center of a star and brought near Earth to illuminate it. Never, or seldom, had Earth seen anything like it. In five days the two small landing craft that had left it arched up from Earth and joined the orbit of the large ship. The two small craft slid inside the large one and doors closed behind them. In a short time the aliens met again. "We did it," said Bal exultantly as he came in. "I don't know how we did it and I thought we were going to fail but at the last minute they came through." Ethaniel smiled. "I'm tired," he said, rustling. "Me too, but mostly I'm cold," said Bal, shivering. "Snow. Nothing but snow wherever I went. Miserable climate. And yet you had me go out walking after that first day." "From my own experience it seemed to be a good idea," said Ethaniel. "If I went out walking one day I noticed that the next day the officials were much more cooperative. If it worked for me I thought it might help you." "It did. I don't know why, but it did," said Bal. "Anyway, this agreement they made isn't the best but I think it will keep them from destroying themselves." "It's as much as we can expect," said Ethaniel. "They may have small wars after this, but never the big one. In fifty or a hundred years we can come back and see how much they've learned." "I'm not sure I want to," said Bal. "Say, what's an angel?" "Why?" "When I went out walking people stopped to look. Some knelt in the snow and called me an angel." "Something like that happened to me," said Ethaniel. "I didn't get it but I didn't let it upset me," said Bal. "I smiled at them and went about my business." He shivered again. "It was always cold. I walked out, but sometimes I flew back. I hope that was all right." In the cabin Bal spread his great wings. Renaissance painters had never seen his like but knew exactly how he looked. In their paintings they had pictured him innumerable times. "I don't think it hurt us that you flew," said Ethaniel. "I did so myself occasionally." "But you don't know what an angel is?" "No. I didn't have time to find out. Some creature of their folklore I suppose. You know, except for our wings they're very much like ourselves. Their legends are bound to resemble ours." "Sure," said Bal. "Anyway, peace on Earth." THE END Transcriber's Note: This etext was produced from Amazing Science Fiction Stories January 1960. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
D. They thought they could cover more ground this way and talk to more people about maintaining peace.
What attitude does Eric display towards modern technological appliances? A. Bewilderment B. Repugnance C. Veneration D. Forbearance
Transcriber's Note: This etext was produced from Amazing Stories December 1961 and was first published in Amazing Stories November 1930. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note. A Classic Reprint from AMAZING STORIES, November, 1930 Copyright 1931, by Experimenter Publications Inc. The Cosmic Express By JACK WILLIAMSON Introduction by Sam Moskowitz The year 1928 was a great year of discovery for AMAZING STORIES . They were uncovering new talent at such a great rate, (Harl Vincent, David H. Keller, E. E. Smith, Philip Francis Nowlan, Fletcher Pratt and Miles J. Breuer), that Jack Williamson barely managed to become one of a distinguished group of discoveries by stealing the cover of the December issue for his first story The Metal Man. A disciple of A. Merritt, he attempted to imitate in style, mood and subject the magic of that late lamented master of fantasy. The imitation found great favor from the readership and almost instantly Jack Williamson became an important name on the contents page of AMAZING STORIES . He followed his initial success with two short novels , The Green Girl in AMAZING STORIES and The Alien Intelligence in SCIENCE WONDER STORIES , another Gernsback publication. Both of these stories were close copies of A. Merritt, whose style and method Jack Williamson parlayed into popularity for eight years. Yet the strange thing about it was that Jack Williamson was one of the most versatile science fiction authors ever to sit down at the typewriter. When the vogue for science-fantasy altered to super science, he created the memorable super lock-picker Giles Habilula as the major attraction in a rousing trio of space operas , The Legion of Space, The Cometeers and One Against the Legion. When grim realism was the order of the day, he produced Crucible of Power and when they wanted extrapolated theory in present tense, he assumed the disguise of Will Stewart and popularized the concept of contra terrene matter in science fiction with Seetee Ship and Seetee Shock. Finally, when only psychological studies of the future would do, he produced "With Folded Hands ..." "... And Searching Mind." The Cosmic Express is of special interest because it was written during Williamson's A. Merritt "kick," when he was writing little else but, and it gave the earliest indication of a more general capability. The lightness of the handling is especially modern, barely avoiding the farcical by the validity of the notion that wireless transmission of matter is the next big transportation frontier to be conquered. It is especially important because it stylistically forecast a later trend to accept the background for granted, regardless of the quantity of wonders, and proceed with the story. With only a few thousand scanning-disk television sets in existence at the time of the writing, the surmise that this media would be a natural for westerns was particularly astute. Jack Williamson was born in 1908 in the Arizona territory when covered wagons were the primary form of transportation and apaches still raided the settlers. His father was a cattle man, but for young Jack, the ranch was anything but glamorous. "My days were filled," he remembers, "with monotonous rounds of what seemed an endless, heart-breaking war with drought and frost and dust-storms, poison-weeds and hail, for the sake of survival on the Llano Estacado." 
The discovery of AMAZING STORIES was the escape he sought and his goal was to be a science fiction writer. He labored to this end and the first he knew that a story of his had been accepted was when he bought the December, 1929 issue of AMAZING STORIES . Since then, he has written millions of words of science fiction and has gone on record as follows: "I feel that science-fiction is the folklore of the new world of science, and the expression of man's reaction to a technological environment. By which I mean that it is the most interesting and stimulating form of literature today." Mr. Eric Stokes-Harding tumbled out of the rumpled bed-clothing, a striking slender figure in purple-striped pajamas. He smiled fondly across to the other of the twin beds, where Nada, his pretty bride, lay quiet beneath light silk covers. With a groan, he stood up and began a series of fantastic bending exercises. But after a few half-hearted movements, he gave it up, and walked through an open door into a small bright room, its walls covered with bookcases and also with scientific appliances that would have been strange to the man of four or five centuries before, when the Age of Aviation was beginning. Suddenly there was a sharp tingling sensation where they touched the polished surface. Yawning, Mr. Eric Stokes-Harding stood before the great open window, staring out. Below him was a wide, park-like space, green with emerald lawns, and bright with flowering plants. Two hundred yards across it rose an immense pyramidal building—an artistic structure, gleaming with white marble and bright metal, striped with the verdure of terraced roof-gardens, its slender peak rising to help support the gray, steel-ribbed glass roof above. Beyond, the park stretched away in illimitable vistas, broken with the graceful columned buildings that held up the great glass roof. Above the glass, over this New York of 2432 A. D., a freezing blizzard was sweeping. But small concern was that to the lightly clad man at the window, who was inhaling deeply the fragrant air from the plants below—air kept, winter and summer, exactly at 20° C. With another yawn, Mr. Eric Stokes-Harding turned back to the room, which was bright with the rich golden light that poured in from the suspended globes of the cold ato-light that illuminated the snow-covered city. With a distasteful grimace, he seated himself before a broad, paper-littered desk, sat a few minutes leaning back, with his hands clasped behind his head. At last he straightened reluctantly, slid a small typewriter out of its drawer, and began pecking at it impatiently. For Mr. Eric Stokes-Harding was an author. There was a whole shelf of his books on the wall, in bright jackets, red and blue and green, that brought a thrill of pleasure to the young novelist's heart when he looked up from his clattering machine. He wrote "thrilling action romances," as his enthusiastic publishers and television directors said, "of ages past, when men were men. Red-blooded heroes responding vigorously to the stirring passions of primordial life!" He was impartial as to the source of his thrills—provided they were distant enough from modern civilization. His hero was likely to be an ape-man roaring through the jungle, with a bloody rock in one hand and a beautiful girl in the other. Or a cowboy, "hard-riding, hard-shooting," the vanishing hero of the ancient ranches. Or a man marooned with a lovely woman on a desert South Sea island. 
His heroes were invariably strong, fearless, resourceful fellows, who could handle a club on equal terms with a cave-man, or call science to aid them in defending a beautiful mate from the terrors of a desolate wilderness. And a hundred million read Eric's novels, and watched the dramatization of them on the television screens. They thrilled at the simple, romantic lives his heroes led, paid him handsome royalties, and subconsciously shared his opinion that civilization had taken all the best from the life of man. Eric had settled down to the artistic satisfaction of describing the sensuous delight of his hero in the roasted marrow-bones of a dead mammoth, when the pretty woman in the other room stirred, and presently came tripping into the study, gay and vivacious, and—as her husband of a few months most justly thought—altogether beautiful in a bright silk dressing gown. Recklessly, he slammed the machine back into its place, and resolved to forget that his next "red-blooded action thriller" was due in the publisher's office at the end of the month. He sprang up to kiss his wife, held her embraced for a long happy moment. And then they went hand in hand, to the side of the room and punched a series of buttons on a panel—a simple way of ordering breakfast sent up the automatic shaft from the kitchens below. Nada Stokes-Harding was also an author. She wrote poems—"back to nature stuff"—simple lyrics of the sea, of sunsets, of bird songs, of bright flowers and warm winds, of thrilling communion with Nature, and growing things. Men read her poems and called her a genius. Even though the whole world had grown up into a city, the birds were extinct, there were no wild flowers, and no one had time to bother about sunsets. "Eric, darling," she said, "isn't it terrible to be cooped up here in this little flat, away from the things we both love?" "Yes, dear. Civilization has ruined the world. If we could only have lived a thousand years ago, when life was simple and natural, when men hunted and killed their meat, instead of drinking synthetic stuff, when men still had the joys of conflict, instead of living under glass, like hot-house flowers." "If we could only go somewhere—" "There isn't anywhere to go. I write about the West, Africa, South Sea Islands. But they were all filled up two hundred years ago. Pleasure resorts, sanatoriums, cities, factories." "If only we lived on Venus! I was listening to a lecture on the television, last night. The speaker said that the Planet Venus is younger than the Earth, that it has not cooled so much. It has a thick, cloudy atmosphere, and low, rainy forests. There's simple, elemental life there—like Earth had before civilization ruined it." "Yes, Kinsley, with his new infra-red ray telescope, that penetrates the cloud layers of the planet, proved that Venus rotates in about the same period as Earth; and it must be much like Earth was a million years ago." "Eric, I wonder if we could go there! It would be so thrilling to begin life like the characters in your stories, to get away from this hateful civilization, and live natural lives. Maybe a rocket—" The young author's eyes were glowing. He skipped across the floor, seized Nada, kissed her ecstatically. "Splendid! Think of hunting in the virgin forest, and bringing the game home to you! But I'm afraid there is no way.—Wait! The Cosmic Express." "The Cosmic Express?" "A new invention. Just perfected a few weeks ago, I understand. By Ludwig Von der Valls, the German physicist." "I've quit bothering about science. 
It has ruined nature, filled the world with silly, artificial people, doing silly, artificial things." "But this is quite remarkable, dear. A new way to travel—by ether!" "By ether!" "Yes. You know of course that energy and matter are interchangeable terms; both are simply etheric vibration, of different sorts." "Of course. That's elementary." She smiled proudly. "I can give you examples, even of the change. The disintegration of the radium atom, making helium and lead and energy . And Millikan's old proof that his Cosmic Ray is generated when particles of electricity are united to form an atom." "Fine! I thought you said you weren't a scientist." He glowed with pride. "But the method, in the new Cosmic Express, is simply to convert the matter to be carried into power, send it out as a radiant beam and focus the beam to convert it back into atoms at the destination." "But the amount of energy must be terrific—" "It is. You know short waves carry more energy than long ones. The Express Ray is an electromagnetic vibration of frequency far higher than that of even the Cosmic Ray, and correspondingly more powerful and more penetrating." The girl frowned, running slim fingers through golden-brown hair. "But I don't see how they get any recognizable object, not even how they get the radiation turned back into matter." "The beam is focused, just like the light that passes through a camera lens. The photographic lens, using light rays, picks up a picture and reproduces it again on the plate—just the same as the Express Ray picks up an object and sets it down on the other side of the world. "An analogy from television might help. You know that by means of the scanning disc, the picture is transformed into mere rapid fluctuations in the brightness of a beam of light. In a parallel manner, the focal plane of the Express Ray moves slowly through the object, progressively, dissolving layers of the thickness of a single atom, which are accurately reproduced at the other focus of the instrument—which might be in Venus! "But the analogy of the lens is the better of the two. For no receiving instrument is required, as in television. The object is built up of an infinite series of plane layers, at the focus of the ray, no matter where that may be. Such a thing would be impossible with radio apparatus because even with the best beam transmission, all but a tiny fraction of the power is lost, and power is required to rebuild the atoms. Do you understand, dear?" "Not altogether. But I should worry! Here comes breakfast. Let me butter your toast." A bell had rung at the shaft. She ran to it, and returned with a great silver tray, laden with dainty dishes, which she set on a little side table. They sat down opposite each other, and ate, getting as much satisfaction from contemplation of each other's faces as from the excellent food. When they had finished, she carried the tray to the shaft, slid it in a slot, and touched a button—thus disposing of the culinary cares of the morning. She ran back to Eric, who was once more staring distastefully at his typewriter. "Oh, darling! I'm thrilled to death about the Cosmic Express! If we could go to Venus, to a new life on a new world, and get away from all this hateful conventional society—" "We can go to their office—it's only five minutes. The chap that operates the machine for the company is a pal of mine. He's not supposed to take passengers except between the offices they have scattered about the world. 
But I know his weak point—" Eric laughed, fumbled with a hidden spring under his desk. A small polished object, gleaming silvery, slid down into his hand. "Old friendship, plus this, would make him—like spinach." Five minutes later Mr. Eric Stokes-Harding and his pretty wife were in street clothes, light silk tunics of loose, flowing lines—little clothing being required in the artificially warmed city. They entered an elevator and dropped thirty stories to the ground floor of the great building. There they entered a cylindrical car, with rows of seats down the sides. Not greatly different from an ancient subway car, except that it was air-tight, and was hurled by magnetic attraction and repulsion through a tube exhausted of air, at a speed that would have made an old subway rider gasp with amazement. In five more minutes their car had whipped up to the base of another building, in the business section, where there was no room for parks between the mighty structures that held the unbroken glass roofs two hundred stories above the concrete pavement. An elevator brought them up a hundred and fifty stories. Eric led Nada down a long, carpeted corridor to a wide glass door, which bore the words: COSMIC EXPRESS stenciled in gold capitals across it. As they approached, a lean man, carrying a black bag, darted out of an elevator shaft opposite the door, ran across the corridor, and entered. They pushed in after him. They were in a little room, cut in two by a high brass grill. In front of it was a long bench against the wall, that reminded one of the waiting room in an old railroad depot. In the grill was a little window, with a lazy, brown-eyed youth leaning on the shelf behind it. Beyond him was a great, glittering piece of mechanism, half hidden by the brass. A little door gave access to the machine from the space before the grill. The thin man in black, whom Eric now recognized as a prominent French heart-specialist, was dancing before the window, waving his bag frantically, raving at the sleepy boy. "Queek! I have tell you zee truth! I have zee most urgent necessity to go queekly. A patient I have in Paree, zat ees in zee most creetical condition!" "Hold your horses just a minute, Mister. We got a client in the machine now. Russian diplomat from Moscow to Rio de Janeiro.... Two hundred seventy dollars and eighty cents, please.... Your turn next. Remember this is just an experimental service. Regular installations all over the world in a year.... Ready now. Come on in." The youth took the money, pressed a button. The door sprang open in the grill, and the frantic physician leaped through it. "Lie down on the crystal, face up," the young man ordered. "Hands at your sides, don't breathe. Ready!" He manipulated his dials and switches, and pressed another button. "Why, hello, Eric, old man!" he cried. "That's the lady you were telling me about? Congratulations!" A bell jangled before him on the panel. "Just a minute. I've got a call." He punched the board again. Little bulbs lit and glowed for a second. The youth turned toward the half-hidden machine, spoke courteously. "All right, madam. Walk out. Hope you found the transit pleasant." "But my Violet! My precious Violet!" a shrill female voice came from the machine. "Sir, what have you done with my darling Violet?" "I'm sure I don't know, madam. You lost it off your hat?" "None of your impertinence, sir! I want my dog." "Ah, a dog. Must have jumped off the crystal. 
You can have him sent on for three hundred and—" "Young man, if any harm comes to my Violet—I'll—I'll—I'll appeal to the Society for the Prevention of Cruelty to Animals!" "Very good, madam. We appreciate your patronage." The door flew open again. A very fat woman, puffing angrily, face highly colored, clothing shimmering with artificial gems, waddled pompously out of the door through which the frantic French doctor had so recently vanished. She rolled heavily across the room, and out into the corridor. Shrill words floated back: "I'm going to see my lawyer! My precious Violet—" The sallow youth winked. "And now what can I do for you, Eric?" "We want to go to Venus, if that ray of yours can put us there." "To Venus? Impossible. My orders are to use the Express merely between the sixteen designated stations, at New York, San Francisco, Tokyo, London, Paris—" "See here, Charley," with a cautious glance toward the door, Eric held up the silver flask. "For old time's sake, and for this—" The boy seemed dazed at sight of the bright flask. Then, with a single swift motion, he snatched it out of Eric's hand, and bent to conceal it below his instrument panel. "Sure, old boy. I'd send you to heaven for that, if you'd give me the micrometer readings to set the ray with. But I tell you, this is dangerous. I've got a sort of television attachment, for focusing the ray. I can turn that on Venus—I've been amusing myself, watching the life there, already. Terrible place. Savage. I can pick a place on high land to set you down. But I can't be responsible for what happens afterward." "Simple, primitive life is what we're looking for. And now what do I owe you—" "Oh, that's all right. Between friends. Provided that stuff's genuine! Walk in and lie down on the crystal block. Hands at your sides. Don't move." The little door had swung open again, and Eric led Nada through. They stepped into a little cell, completely surrounded with mirrors and vast prisms and lenses and electron tubes. In the center was a slab of transparent crystal, eight feet square and two inches thick, with an intricate mass of machinery below it. Eric helped Nada to a place on the crystal, lay down at her side. "I think the Express Ray is focused just at the surface of the crystal, from below," he said. "It dissolves our substance, to be transmitted by the beam. It would look as if we were melting into the crystal." "Ready," called the youth. "Think I've got it for you. Sort of a high island in the jungle. Nothing bad in sight now. But, I say—how're you coming back? I haven't got time to watch you." "Go ahead. We aren't coming back." "Gee! What is it? Elopement? I thought you were married already. Or is it business difficulties? The Bears did make an awful raid last night. But you better let me set you down in Hong Kong." A bell jangled. "So long," the youth called. Nada and Eric felt themselves enveloped in fire. Sheets of white flame seemed to lap up about them from the crystal block. Suddenly there was a sharp tingling sensation where they touched the polished surface. Then blackness, blankness. The next thing they knew, the fires were gone from about them. They were lying in something extremely soft and fluid; and warm rain was beating in their faces. Eric sat up, found himself in a mud-puddle. Beside him was Nada, opening her eyes and struggling up, her bright garments stained with black mud. All about rose a thick jungle, dark and gloomy—and very wet. 
Palm-like, the gigantic trees were, or fern-like, flinging clouds of feathery green foliage high against a somber sky of unbroken gloom. They stood up, triumphant. "At last!" Nada cried. "We're free! Free of that hateful old civilization! We're back to Nature!" "Yes, we're on our feet now, not parasites on the machines." "It's wonderful to have a fine, strong man like you to trust in, Eric. You're just like one of the heroes in your books!" "You're the perfect companion, Nada.... But now we must be practical. We must build a fire, find weapons, set up a shelter of some kind. I guess it will be night, pretty soon. And Charley said something about savage animals he had seen in the television. "We'll find a nice dry cave, and have a fire in front of the door. And skins of animals to sleep on. And pottery vessels to cook in. And you will find seeds and grown grain." "But first we must find a flint-bed. We need flint for tools, and to strike sparks to make a fire with. We will probably come across a chunk of virgin copper, too—it's found native." Presently they set off through the jungle. The mud seemed to be very abundant, and of a most sticky consistence. They sank into it ankle deep at every step, and vast masses of it clung to their feet. A mile they struggled on, without finding where a provident nature had left them even a single fragment of quartz, to say nothing of a mass of pure copper. "A darned shame," Eric grumbled, "to come forty million miles, and meet such a reception as this!" Nada stopped. "Eric," she said, "I'm tired. And I don't believe there's any rock here, anyway. You'll have to use wooden tools, sharpened in the fire." "Probably you're right. This soil seemed to be of alluvial origin. Shouldn't be surprised if the native rock is some hundreds of feet underground. Your idea is better." "You can make a fire by rubbing sticks together, can't you?" "It can be done, I'm sure. I've never tried it, myself. We need some dry sticks, first." They resumed the weary march, with a good fraction of the new planet adhering to their feet. Rain was still falling from the dark heavens in a steady, warm downpour. Dry wood seemed scarce as the proverbial hen's teeth. "You didn't bring any matches, dear?" "Matches! Of course not! We're going back to Nature." "I hope we get a fire pretty soon." "If dry wood were gold dust, we couldn't buy a hot dog." "Eric, that reminds me that I'm hungry." He confessed to a few pangs of his own. They turned their attention to looking for banana trees, and coconut palms, but they did not seem to abound in the Venerian jungle. Even small animals that might have been slain with a broken branch had contrary ideas about the matter. At last, from sheer weariness, they stopped, and gathered branches to make a sloping shelter by a vast fallen tree-trunk. "This will keep out the rain—maybe—" Eric said hopefully. "And tomorrow, when it has quit raining—I'm sure we'll do better." They crept in, as gloomy night fell without. They lay in each other's arms, the body warmth oddly comforting. Nada cried a little. "Buck up," Eric advised her. "We're back to nature—where we've always wanted to be." With the darkness, the temperature fell somewhat, and a high wind rose, whipping cold rain into the little shelter, and threatening to demolish it. Swarms of mosquito-like insects, seemingly not inconvenienced in the least by the inclement elements, swarmed about them in clouds. Then came a sound from the dismal stormy night, a hoarse, bellowing roar, raucous, terrifying. 
Nada clung against Eric. "What is it, dear?" she chattered. "Must be a reptile. Dinosaur, or something of the sort. This world seems to be in about the same state as the Earth when they flourished there.... But maybe it won't find us." The roar was repeated, nearer. The earth trembled beneath a mighty tread. "Eric," a thin voice trembled. "Don't you think—it might have been better— You know the old life was not so bad, after all." "I was just thinking of our rooms, nice and warm and bright, with hot foods coming up the shaft whenever we pushed the button, and the gay crowds in the park, and my old typewriter." "Eric?" she called softly. "Yes, dear." "Don't you wish—we had known better?" "I do." If he winced at the "we" the girl did not notice. The roaring outside was closer. And suddenly it was answered by another raucous bellow, at considerable distance, that echoed strangely through the forest. The fearful sounds were repeated, alternately. And always the more distant seemed nearer, until the two sounds were together. And then an infernal din broke out in the darkness. Bellows. Screams. Deafening shrieks. Mighty splashes, as if struggling Titans had upset oceans. Thunderous crashes, as if they were demolishing forests. Eric and Nada clung to each other, in doubt whether to stay or to fly through the storm. Gradually the sound of the conflict came nearer, until the earth shook beneath them, and they were afraid to move. Suddenly the great fallen tree against which they had erected the flimsy shelter was rolled back, evidently by a chance blow from the invisible monsters. The pitiful roof collapsed on the bedraggled humans. Nada burst into tears. "Oh, if only—if only—" Suddenly flame lapped up about them, the same white fire they had seen as they lay on the crystal block. Dizziness, insensibility overcame them. A few moments later, they were lying on the transparent table in the Cosmic Express office, with all those great mirrors and prisms and lenses about them. A bustling, red-faced official appeared through the door in the grill, fairly bubbling apologies. "So sorry—an accident—inconceivable. I can't see how he got it! We got you back as soon as we could find a focus. I sincerely hope you haven't been injured." "Why—what—what—" "Why I happened in, found our operator drunk. I've no idea where he got the stuff. He muttered something about Venus. I consulted the auto-register, and found two more passengers registered here than had been recorded at our other stations. I looked up the duplicate beam coordinates, and found that it had been set on Venus. I got men on the television at once, and we happened to find you. "I can't imagine how it happened. I've had the fellow locked up, and the 'dry-laws' are on the job. I hope you won't hold us for excessive damages." "No, I ask nothing except that you don't press charges against the boy. I don't want him to suffer for it in any way. My wife and I will be perfectly satisfied to get back to our apartment." "I don't wonder. You look like you've been through—I don't know what. But I'll have you there in five minutes. My private car—" Mr. Eric Stokes-Harding, noted author of primitive life and love, ate a hearty meal with his pretty spouse, after they had washed off the grime of another planet. He spent the next twelve hours in bed. At the end of the month he delivered his promised story to his publishers, a thrilling tale of a man marooned on Venus, with a beautiful girl. 
The hero made stone tools, erected a dwelling for himself and his mate, hunted food for her, defended her from the mammoth saurian monsters of the Venerian jungles. The book was a huge success. THE END
B. Repugnance
How do the authors define exemplars?
### Introduction To understand the progress towards multimedia vision and language understanding, a visual Turing test was proposed by BIBREF0 that was aimed at visual question answering BIBREF1 . Visual Dialog BIBREF2 is a natural extension for VQA. Current dialog systems as evaluated in BIBREF3 show that when trained between bots, AI-AI dialog systems show improvement, but that does not translate to actual improvement for Human-AI dialog. This is because, the questions generated by bots are not natural (human-like) and therefore does not translate to improved human dialog. Therefore it is imperative that improvement in the quality of questions will enable dialog agents to perform well in human interactions. Further, BIBREF4 show that unanswered questions can be used for improving VQA, Image captioning and Object Classification. An interesting line of work in this respect is the work of BIBREF5 . Here the authors have proposed the challenging task of generating natural questions for an image. One aspect that is central to a question is the context that is relevant to generate it. However, this context changes for every image. As can be seen in Figure FIGREF1 , an image with a person on a skateboard would result in questions related to the event. Whereas for a little girl, the questions could be related to age rather than the action. How can one have widely varying context provided for generating questions? To solve this problem, we use the context obtained by considering exemplars, specifically we use the difference between relevant and irrelevant exemplars. We consider different contexts in the form of Location, Caption, and Part of Speech tags. The human annotated questions are (b) for the first image and (a) for the second image. Our method implicitly uses a differential context obtained through supporting and contrasting exemplars to obtain a differentiable embedding. This embedding is used by a question decoder to decode the appropriate question. As discussed further, we observe this implicit differential context to perform better than an explicit keyword based context. The difference between the two approaches is illustrated in Figure FIGREF2 . This also allows for better optimization as we can backpropagate through the whole network. We provide detailed empirical evidence to support our hypothesis. As seen in Figure FIGREF1 our method generates natural questions and improves over the state-of-the-art techniques for this problem. To summarize, we propose a multimodal differential network to solve the task of visual question generation. Our contributions are: (1) A method to incorporate exemplars to learn differential embeddings that captures the subtle differences between supporting and contrasting examples and aid in generating natural questions. (2) We provide Multimodal differential embeddings, as image or text alone does not capture the whole context and we show that these embeddings outperform the ablations which incorporate cues such as only image, or tags or place information. (3) We provide a thorough comparison of the proposed network against state-of-the-art benchmarks along with a user study and statistical significance test. ### Related Work Generating a natural and engaging question is an interesting and challenging task for a smart robot (like chat-bot). It is a step towards having a natural visual dialog instead of the widely prevalent visual question answering bots. 
Further, having the ability to ask natural questions based on different contexts is also useful for artificial agents that can interact with visually impaired people. While the task of generating question automatically is well studied in NLP community, it has been relatively less studied for image-related natural questions. This is still a difficult task BIBREF5 that has gained recent interest in the community. Recently there have been many deep learning based approaches as well for solving the text-based question generation task such as BIBREF6 . Further, BIBREF7 have proposed a method to generate a factoid based question based on triplet set {subject, relation and object} to capture the structural representation of text and the corresponding generated question. These methods, however, were limited to text-based question generation. There has been extensive work done in the Vision and Language domain for solving image captioning, paragraph generation, Visual Question Answering (VQA) and Visual Dialog. BIBREF8 , BIBREF9 , BIBREF10 proposed conventional machine learning methods for image description. BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 have generated descriptive sentences from images with the help of Deep Networks. There have been many works for solving Visual Dialog BIBREF19 , BIBREF20 , BIBREF2 , BIBREF21 , BIBREF22 . A variety of methods have been proposed by BIBREF23 , BIBREF24 , BIBREF1 , BIBREF25 , BIBREF26 , BIBREF27 for solving VQA task including attention-based methods BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 . However, Visual Question Generation (VQG) is a separate task which is of interest in its own right and has not been so well explored BIBREF5 . This is a vision based novel task aimed at generating natural and engaging question for an image. BIBREF35 proposed a method for continuously generating questions from an image and subsequently answering those questions. The works closely related to ours are that of BIBREF5 and BIBREF36 . In the former work, the authors used an encoder-decoder based framework whereas in the latter work, the authors extend it by using a variational autoencoder based sequential routine to obtain natural questions by performing sampling of the latent variable. ### Approach In this section, we clarify the basis for our approach of using exemplars for question generation. To use exemplars for our method, we need to ensure that our exemplars can provide context and that our method generates valid exemplars. We first analyze whether the exemplars are valid or not. We illustrate this in figure FIGREF3 . We used a pre-trained RESNET-101 BIBREF37 object classification network on the target, supporting and contrasting images. We observed that the supporting image and target image have quite similar probability scores. The contrasting exemplar image, on the other hand, has completely different probability scores. Exemplars aim to provide appropriate context. To better understand the context, we experimented by analysing the questions generated through an exemplar. We observed that indeed a supporting exemplar could identify relevant tags (cows in Figure FIGREF3 ) for generating questions. We improve use of exemplars by using a triplet network. This network ensures that the joint image-caption embedding for the supporting exemplar are closer to that of the target image-caption and vice-versa. 
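The exemplar validity check described above, in which a pre-trained ResNet-101 is run over the target, supporting, and contrasting images and their class-probability scores are compared, can be sketched in a few lines of PyTorch. This is only an illustration: the file names are placeholders, and the cosine-similarity comparison is an assumption on our part, since the text only states that the probability scores were compared.

```python
# Sketch: compare class-probability distributions of the target, supporting,
# and contrasting images with a pre-trained ResNet-101. A good supporting
# exemplar should produce a distribution close to the target's; a contrasting
# exemplar should not. (The similarity measure is an assumption, not the paper's.)
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

resnet = models.resnet101(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def class_probs(path):
    """Return the softmax distribution over the 1000 ImageNet classes."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return F.softmax(resnet(img), dim=1).squeeze(0)

p_target = class_probs("target.jpg")        # placeholder file names
p_support = class_probs("supporting.jpg")
p_contrast = class_probs("contrasting.jpg")

print("target vs supporting :", F.cosine_similarity(p_target, p_support, dim=0).item())
print("target vs contrasting:", F.cosine_similarity(p_target, p_contrast, dim=0).item())
```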
We empirically evaluated whether question generation is improved more by an explicit approach that uses the differential set of tags as a one-hot encoding, or by the implicit embedding obtained from the triplet network. We observed that the implicit multimodal differential network provided better context for generating questions. Our understanding of this phenomenon is that both the target and supporting exemplars generate similar questions, whereas contrasting exemplars generate very different questions from the target question. The triplet network that enhances the joint embedding thus aids in improving the generation of the target question. These embeddings are observed to be better than the explicitly obtained context tags, as can be seen in Figure FIGREF2. We now explain our method in detail. ### Method The task in visual question generation (VQG) is to generate a natural language question $\hat{q}$ for an image $x_i$. We consider a set of pre-generated context $C_i$ from image $x_i$. We maximize the conditional probability of the generated question given the image and context as follows: $$\hat{\theta} = \arg\max_{\theta} \sum_{(x_i, C_i, \hat{q})} \log P(\hat{q} \mid x_i, C_i; \theta)$$ where $\theta$ is a vector of all the parameters of our model and $\hat{q}$ is the ground truth question. The log probability of the question is calculated by using the joint probability over its tokens $\{q_1, \ldots, q_T\}$ with the help of the chain rule. For a particular question, the above term is obtained as: $$\log P(\hat{q} \mid x_i, C_i) = \sum_{t=1}^{T} \log P(q_t \mid x_i, C_i, q_1, \ldots, q_{t-1})$$ where $T$ is the length of the sequence and $q_t$ is the $t$-th word of the question. We have omitted $\theta$ for simplicity. Our method is based on a sequence-to-sequence network BIBREF38 , BIBREF12 , BIBREF39 . A sequence-to-sequence network takes a text sequence as input and produces a text sequence as output; in our method, we take an image as input and generate a natural question as output. The architecture of our model is shown in Figure FIGREF4. Our model contains three main modules: (a) a Representation Module that extracts multimodal features, (b) a Mixture Module that fuses the multimodal representation, and (c) a Decoder that generates the question using an LSTM-based language model. During inference, we sample a question word $q_t$ from the softmax distribution and continue sampling until the end token or the maximum question length is reached. We experimented with both sampling and argmax and found that argmax works better. This result is provided in the supplementary material. ### Multimodal Differential Network The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module. We used an efficient KNN-based approach (a k-d tree with a Euclidean metric) to obtain the exemplars. This is done through a coarse quantization of the nearest neighbors of the training examples into 50 clusters, selecting the nearest as the supporting exemplar and the farthest as the contrasting exemplar. We experimented with ITML-based metric learning BIBREF40 for the image features; surprisingly, the KNN-based approach outperforms it. We also tried random exemplars and different numbers of exemplars, and found that a particular number of exemplars works best; we provide these results in the supplementary material. We use a triplet network BIBREF41 , BIBREF42 in our representation module. We referred to a similar work, BIBREF34 , for building our triplet network. The triplet network consists of three sub-parts: target, supporting, and contrasting networks. All three networks share the same parameters.
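To make the exemplar-selection step concrete, the k-d-tree lookup described above can be sketched as follows. This is an illustration under simplifying assumptions, not the authors' implementation: the features are assumed to be precomputed, and a plain nearest/farthest-of-50-neighbours rule stands in for the coarse cluster quantization.

```python
# Sketch of k-d-tree-based exemplar selection: for each training example,
# take the nearest neighbour as the supporting exemplar and the farthest of
# the 50 candidates as the contrasting exemplar.
import numpy as np
from scipy.spatial import cKDTree

def select_exemplars(features: np.ndarray, k: int = 50):
    """Return (supporting_idx, contrasting_idx) arrays, one entry per example."""
    tree = cKDTree(features)
    # k + 1 neighbours because the closest hit is the example itself.
    _dists, idxs = tree.query(features, k=k + 1)
    supporting = idxs[:, 1]      # nearest non-identical neighbour
    contrasting = idxs[:, -1]    # farthest of the k candidates
    return supporting, contrasting

if __name__ == "__main__":
    feats = np.random.rand(1000, 4096).astype(np.float32)  # dummy feature vectors
    sup, con = select_exemplars(feats)
    print(sup[:5], con[:5])
```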
Given an image $x_i$ we obtain an embedding $g_i$ using a CNN parameterized by a function $G(x_i; W_c)$, where $W_c$ are the weights of the CNN. The caption $C_i$ results in a caption embedding $f_i$ through an LSTM parameterized by a function $F(C_i; W_l)$, where $W_l$ are the weights of the LSTM. This is shown in part 1 of Figure FIGREF4. Similarly, we obtain the image embeddings $g_i^{+}$ and $g_i^{-}$ and the caption embeddings $f_i^{+}$ and $f_i^{-}$ for the supporting and contrasting exemplars: $$g_i = G(x_i; W_c), \qquad f_i = F(C_i; W_l)$$ The Mixture Module brings the image and caption embeddings into a joint feature embedding space. The input to the module is the embeddings obtained from the representation module. We have evaluated four different approaches for fusion, viz. joint, element-wise addition, Hadamard, and attention. Each of these variants receives the image features $g_i$ and the caption embedding $f_i$, and outputs a fixed-dimensional feature vector $s_i$. The Joint method concatenates $g_i$ and $f_i$ and maps them to a fixed-length feature vector $s_i$ as follows: $$s_i = W_j^{T}\big[\, g_i \,\Vert\, f_i \,\big] + b_j$$ where $g_i$ is the 4096-dimensional convolutional feature from the FC7 layer of a pretrained VGG-19 Net BIBREF43 , $W_j$ are the weights and $b_j$ is the bias for the corresponding layer, and $\Vert$ is the concatenation operator. Similarly, we obtain context vectors $s_i^{+}$ and $s_i^{-}$ for the supporting and contrasting exemplars. Details for the other fusion methods are present in the supplementary. The aim of the triplet network BIBREF44 is to obtain context vectors that bring the supporting exemplar embeddings closer to the target embedding and vice-versa. This is obtained as follows: $$\min_{\theta} \sum_{(s_i,\, s_i^{+},\, s_i^{-}) \in \mathcal{M}} L_{triplet}\big(s_i, s_i^{+}, s_i^{-}\big)$$ where $D(u, v) = \Vert u - v \Vert_2$ is the Euclidean distance between two embeddings $u$ and $v$, $\mathcal{M}$ is the training dataset that contains the set of all possible triplets, and $L_{triplet}$ is the triplet loss function. This loss is decomposed into two terms, one that brings the supporting sample closer and one that pushes the contrasting sample further away. It is given by $$L_{triplet}\big(s_i, s_i^{+}, s_i^{-}\big) = \max\big(0,\; D^{+} - D^{-} + \alpha\big)$$ Here $D^{+}$ and $D^{-}$ denote the Euclidean distance between the target and supporting sample, and between the target and contrasting sample, respectively. The parameter $\alpha$ controls the separation margin between these and is obtained through validation data. ### Decoder: Question Generator The role of the decoder is to predict the probability of a question, given the joint embedding $s_i$. An RNN provides a convenient way to condition on the previous state using a fixed-length hidden vector. The conditional probability of a question token at a particular time step $t$ is modeled using an LSTM, as used in machine translation BIBREF38 . At time step $t$, the conditional probability is denoted by $P(q_t \mid h_t)$, where $h_t$ is the hidden state of the LSTM cell at time step $t$, which is conditioned on all the previously generated words $q_1, \ldots, q_{t-1}$. The word with maximum probability in the distribution of the LSTM cell at step $t$ is fed as input to the LSTM cell at step $t+1$, as shown in part 3 of Figure FIGREF4. At the first time step, we feed the output of the mixture module to the LSTM. $\hat{q}_1, \ldots, \hat{q}_T$ are the predicted question tokens for the input image $x_i$. Here, we use $\hat{q}_0$ and $\hat{q}_{T+1}$ as the special START and STOP tokens, respectively.
The softmax probability for the predicted question token at different time steps is given by the following equations, where LSTM refers to the standard LSTM cell equations: INLINEFORM14 where INLINEFORM0 is the probability distribution over all question tokens and INLINEFORM1 is the cross-entropy loss. ### Cost function Our objective is to minimize the total loss, that is, the sum of the cross-entropy loss and the triplet loss over all training examples. The total loss is: DISPLAYFORM0 where INLINEFORM0 is the total number of samples and INLINEFORM1 is a constant that balances the two losses. INLINEFORM2 is the triplet loss function EQREF13. INLINEFORM3 is the cross-entropy loss between the predicted and ground truth questions and is given by: INLINEFORM4 where INLINEFORM0 is the total number of question tokens and INLINEFORM1 is the ground truth label. The code for the MDN-VQG model is provided. ### Variations of Proposed Method While we advocate the use of the multimodal differential network for generating embeddings that the decoder can use to generate questions, we also evaluate several variants of this architecture. These are as follows: Tag Net: In this variant, we extract the part-of-speech (POS) tags for the words present in the caption and obtain a tag embedding by combining the one-hot vectors in different ways. Further details and experimental results are provided in the supplementary material. This tag embedding is then combined with the image embedding and provided to the decoder network. Place Net: In this variant we explore embeddings based on visual scene understanding, obtained using a pre-trained PlaceCNN BIBREF45 trained to classify 365 different scene categories. We then combine the activation map for the input image and the VGG-19-based place embedding to obtain the joint embedding used by the decoder. Differential Image Network: Instead of the multimodal differential network, we also evaluate a differential image network for generating the embeddings. In this case, the embedding does not include the caption and is based only on the image feature. We also experimented with multiple exemplars and random exemplars. Further details, pseudocode, and results are provided in the supplementary material. ### Dataset We conduct our experiments on the Visual Question Generation (VQG) dataset BIBREF5, which contains human-annotated questions based on images from the MS-COCO dataset. This dataset was developed for generating natural and engaging questions based on common-sense reasoning. We use the VQG-COCO dataset for our experiments, which contains a total of 2500 training images, 1250 validation images, and 1250 test images. Each image in the dataset has five natural questions and five ground truth captions. It is worth noting that the work of BIBREF36 also used questions from the VQA dataset BIBREF1 for training, whereas the work by BIBREF5 uses only the VQG-COCO dataset. The VQA-1.0 dataset is also built on images from MS-COCO. It contains a total of 82783 images for training, 40504 for validation, and 81434 for testing, with each image associated with 3 questions. We used a pretrained caption generation model BIBREF13 to extract captions for the VQA dataset, since human-annotated captions are not part of that dataset. We also obtain good results on the VQA dataset (as shown in Table TABREF26), which shows that our method does not require ground truth captions.
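Relating to the cost function above, the following is a minimal sketch of the combined objective: cross-entropy over the question tokens plus a weighted triplet term. The padding index and the weighting constant are assumptions for illustration.

```python
import torch.nn.functional as F

def total_loss(token_logits, token_targets, trip_loss, lambda_=1.0, pad_idx=0):
    # token_logits: (batch, seq_len, vocab); token_targets: (batch, seq_len)
    ce = F.cross_entropy(
        token_logits.reshape(-1, token_logits.size(-1)),
        token_targets.reshape(-1),
        ignore_index=pad_idx,  # ignore padded positions in the question
    )
    return ce + lambda_ * trip_loss
```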
We train our model separately for the VQG-COCO and VQA datasets. ### Inference We use the 1250 validation images to tune the hyperparameters and report results on the test set of the VQG-COCO dataset. During inference, we use the representation module to obtain the embeddings for the image and the ground truth caption, without using the supporting and contrasting exemplars. The mixture module provides the joint representation of the target image and ground truth caption. Finally, the decoder takes the joint features and generates the question. We also experimented with captions generated by an image-captioning network BIBREF13 for the VQG-COCO dataset; those results and the training details are provided in the supplementary material. ### Experiments We evaluate our proposed MDN method in the following ways: first, we evaluate it against the other variants described in sections SECREF19 and SECREF10; second, we compare our network with state-of-the-art methods on the VQA 1.0 and VQG-COCO datasets. We perform a user study to gauge human opinion on the naturalness of the generated questions and analyze the word statistics in Figure FIGREF22. This is an important test, as humans are the best judges of naturalness. We further consider the statistical significance of the various ablations as well as the state-of-the-art models. The quantitative evaluation is conducted using standard metrics such as BLEU BIBREF46, METEOR BIBREF47, ROUGE BIBREF48, and CIDEr BIBREF49. Although these metrics have not been shown to correlate with the `naturalness' of a question, they still provide a reasonable quantitative measure for comparison. Here we provide only the BLEU1 scores; the remaining BLEU-n scores are in the supplementary material. We observe that the proposed MDN provides improved embeddings to the decoder. We believe that these embeddings capture instance-specific differential information that helps guide the question generation. Details regarding the metrics are given in the supplementary material. ### Ablation Analysis We considered the different variations of our method mentioned in section SECREF19 and the various ways of obtaining the joint multimodal embedding described in section SECREF10. The results for the VQG-COCO test set are given in Table TABREF24. In this table, every block provides the results for one variation of obtaining the embeddings and the different ways of combining them. We observe that the Joint Method (JM) of combining the embeddings works best in all cases except the Tag embeddings. Among the ablations, the proposed MDN method works considerably better than the other variants in terms of the BLEU, METEOR, and ROUGE metrics, achieving improvements of 6%, 12%, and 18%, respectively, over the best other variant.
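As a brief illustration of the inference procedure described above, the following sketch performs greedy (argmax) decoding. The decoder interface (`init_hidden`, `step`) and the token ids are assumptions, not the authors' API.

```python
import torch

@torch.no_grad()
def generate_question(decoder, context, start_id, stop_id, max_len=20):
    """Greedy decoding: condition the LSTM on the mixture-module output at t=0,
    then feed the argmax token at each subsequent step until STOP or max_len."""
    hidden = decoder.init_hidden(context)
    token = torch.tensor([start_id])
    question = []
    for _ in range(max_len):
        logits, hidden = decoder.step(token, hidden)
        token = logits.argmax(dim=-1)   # argmax worked better than sampling
        if token.item() == stop_id:
            break
        question.append(token.item())
    return question
```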
### Baseline and State-of-the-Art The comparison of our method with various baselines and state-of-the-art methods is provided in Table TABREF26 for VQA 1.0 and Table TABREF27 for the VQG-COCO dataset. The comparable baselines for our method are the image-based and caption-based models, in which we use only the image or only the caption embedding to generate the question. In both tables, the first block consists of the current state-of-the-art methods on that dataset and the second contains the baselines. We observe that for the VQA dataset we achieve an improvement of 8% in BLEU and 7% in METEOR over the baselines, whereas for the VQG-COCO dataset the improvement is 15% for both metrics. We improve over the previous state-of-the-art BIBREF35 for the VQA dataset by around 6% in BLEU and 10% in METEOR. On the VQG-COCO dataset, we improve over BIBREF5 by 3.7% and over BIBREF36 by 3.5% in terms of METEOR. ### Statistical Significance Analysis We analysed the statistical significance BIBREF50 of our MDN model for VQG, both for the different variations of the mixture module mentioned in section SECREF10 and against the state-of-the-art methods. The Critical Difference (CD) for the Nemenyi BIBREF51 test depends on the given INLINEFORM0 (the confidence level, 0.05 in our case) for the average ranks and on N (the number of tested datasets). If the difference in the ranks of two methods lies within the CD, they are not significantly different, and vice versa. Figure FIGREF29 visualizes the post-hoc analysis using the CD diagram. From the figure, it is clear that MDN-Joint works best and is statistically significantly different from the state-of-the-art methods. ### Perceptual Realism A human is the best judge of the naturalness of any question, so we evaluated our proposed MDN method using a `Naturalness' Turing test BIBREF52 with 175 people. People were shown an image with two questions, as in Figure FIGREF1, and were asked to rate the naturalness of both questions on a scale of 1 to 5, where 1 means `Least Natural' and 5 `Most Natural'. We provided the 175 people with 100 such images from the VQG-COCO validation set, which has 1250 images. Figure FIGREF30 indicates the number of people who were fooled (i.e., rated the generated question higher than or equal to the ground truth question). Across the 100 images, on average 59.7% of people were fooled, which shows that our model is able to generate natural questions. ### Conclusion In this paper we have proposed a novel method for generating natural questions for an image. The approach relies on obtaining multimodal differential embeddings from an image and its caption. We also provide an ablation analysis and a detailed comparison with state-of-the-art methods, perform a user study to evaluate the naturalness of our generated questions, and ensure that the results are statistically significant. In the future, we would like to analyse means of obtaining composite embeddings. We also aim to consider the generalisation of this approach to other vision and language tasks. Supplementary Material Section SECREF8 provides details about the training configuration for MDN, Section SECREF9 explains the various proposed methods, and we also provide a discussion of some important questions related to our method. We report BLEU1, BLEU2, BLEU3, BLEU4, METEOR, ROUGE, and CIDEr metric scores for the VQG-COCO dataset. We present different experiments with the Tag Net in which we explore the performance of various tags (noun, verb, and question tags) and different ways of combining them to get the context vectors.
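As a side note on the statistical significance analysis above, the following is a minimal sketch of the Nemenyi critical difference computation (following Demsar, 2006); the critical value `q_alpha` must be looked up from the studentized-range table for the chosen confidence level.

```python
import math

def nemenyi_cd(q_alpha, k, n):
    """Critical difference for k methods compared over n datasets/folds.
    Two methods whose average ranks differ by less than this CD are not
    significantly different at the chosen confidence level."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))
```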
Algorithm: Multimodal Differential Network (MDN). Input: INLINEFORM0 . Step 1, Finding Exemplars: INLINEFORM1 INLINEFORM2 . Step 2, Compute Triplet Embedding: INLINEFORM3 INLINEFORM4 . Step 3, Compute Triplet Fusion Embedding: INLINEFORM5 INLINEFORM6 INLINEFORM7 . Step 4, Compute Triplet Loss: INLINEFORM8 . Step 5, Decode Question Sentence: INLINEFORM9 INLINEFORM10 . Subroutine Triplet Fusion( INLINEFORM11 , INLINEFORM12 ), where INLINEFORM13 is the image feature (14x14x512) and INLINEFORM14 is the caption feature (1x512): Match dimensions: INLINEFORM15 (196x512), INLINEFORM16 (196x512). If flag == Joint Fusion: INLINEFORM17 INLINEFORM18 , with INLINEFORM19 (MDN-Mul) and INLINEFORM20 (MDN-Add) as variants. If flag == Attention Fusion: INLINEFORM21 Semb INLINEFORM22 . Dataset and Training Details. Dataset: We conduct our experiments on two datasets: the VQA dataset BIBREF1, which contains human-annotated questions based on images from the MS-COCO dataset, and the VQG-COCO dataset of natural questions BIBREF55. VQA dataset: The VQA dataset BIBREF1 is built on complex images from the MS-COCO dataset. It contains a total of 204721 images, out of which 82783 are for training, 40504 for validation, and 81434 for testing. Each image in the MS-COCO dataset is associated with 3 questions and each question has 10 possible answers, so there are 248349 QA pairs for training, 121512 QA pairs for validation, and 244302 QA pairs for testing. We used a pre-trained caption generation model BIBREF53 to extract captions for the VQA dataset. VQG dataset: The VQG-COCO dataset BIBREF55 was developed for generating natural and engaging questions based on common-sense reasoning. This dataset contains a total of 2500 training images, 1250 validation images, and 1250 test images. Each image in the dataset has 5 natural questions. Training Configuration: We used the RMSPROP optimizer to update the model parameters, with hyper-parameter values INLINEFORM23 to train the classification network. To train the triplet model, we again used RMSPROP to optimize its parameters, with hyper-parameter values INLINEFORM24 . We also used learning rate decay to decrease the learning rate every epoch by a factor given by INLINEFORM25 , where the values a=1500 and b=1250 are set empirically. Ablation Analysis of Model: While we advocate the use of the multimodal differential network (MDN) for generating embeddings that the decoder can use to generate questions, we also evaluate several variants of this architecture, namely (a) the Differential Image Network, (b) the Tag Net, and (c) the Place Net. These are described in detail as follows: Differential Image Network: For obtaining the exemplar-image-based context embedding, we propose a triplet network consisting of three sub-networks: a target net, a supporting net, and an opposing net. All three are convolutional neural networks that share the same parameters. The weights of this network are learnt end-to-end using a triplet loss. The aim is to obtain latent weight vectors that bring the supporting exemplar close to the target image and enhance the difference from the opposing examples. More formally, given an image INLINEFORM26 we obtain an embedding INLINEFORM27 using a CNN that we parameterize through a function INLINEFORM28 , where INLINEFORM29 are the weights of the CNN. This is illustrated in Figure FIGREF43. Tag Net: The tag net consists of two parts, a Context Extractor and a Tag Embedding Net. This is illustrated in Figure FIGREF45.
Extract Context: The first step is to extract the caption of the image using the NeuralTalk2 BIBREF53 model. We then find the part-of-speech (POS) tags present in the caption. POS taggers have been developed for two well-known corpora, the Brown Corpus and the Penn Treebank; for our work, we use the Brown Corpus tags. The tags are clustered into three categories, namely noun tags, verb tags, and question tags (What, Where, ...). The noun tag consists of all the nouns and pronouns present in the caption sentence and, similarly, the verb tag consists of the verbs and adverbs present in the caption sentence. The question tags consist of the 7 well-known question words, i.e., why, how, what, when, where, who, and which. Each tag token is represented as a one-hot vector of the dimension of the vocabulary size. For generalization, we consider 5 tokens from each tag category. Tag Embedding Net: The embedding network consists of a word embedding followed by a temporal convolutional neural network and a max-pooling layer. In the first step, the sparse high-dimensional one-hot vector is transformed into a dense low-dimensional vector using the word embedding. After this, we apply temporal convolutions on the word embedding vectors: the uni-gram, bi-gram, and tri-gram features are computed by applying convolution filters of size 1, 2, and 3, respectively. Finally, we apply max-pooling to get a vector representation of the tags, as shown in Figure FIGREF45. We concatenate all the tag words and apply a fully connected layer to get a feature dimension of 512. We also explored joint networks based on concatenation, element-wise addition, and element-wise multiplication of the tag vectors; however, we observed that convolution with max-pooling and joint concatenation gives better performance in terms of CIDEr score. INLINEFORM30 where T_CNN is the temporal convolutional neural network applied on the word embedding vectors with kernel size three. Place Net: Visual object and scene recognition play a crucial role in understanding an image. Here, places in the image are labeled with scene semantic categories BIBREF45, comprising a large and diverse set of environments in the world, such as amusement park, tower, swimming pool, shoe shop, cafeteria, rain-forest, conference center, fish pond, etc. We therefore use the scene semantic categories present in the image as a place-based context to generate natural questions. Places365 is a convolutional neural network modeled to classify 365 types of scene categories, trained on the Places2 dataset, which consists of 1.8 million scene images. We use a pre-trained VGG16-places365 network to obtain the place-based context embedding feature for the various scene categories present in the image. The context feature INLINEFORM31 is obtained by: INLINEFORM32 where INLINEFORM33 is Place365_CNN. We extract INLINEFORM34 features of dimension 14x14x512 for the attention model and FC8 features of dimension 365 for the joint, addition, and Hadamard models of Places365. Finally, we use a linear transformation to obtain a 512-dimensional vector. We explored using the CONV5 features of dimension 14x14x512, the FC7 features of dimension 4096, and the FC8 features of dimension 365 of Places365. Ablation Analysis. Sampling Exemplars: KNN vs ITML: Our method aims to use efficient exemplar-based retrieval techniques. We experimented with various exemplar methods, such as ITML-based metric learning BIBREF40 for image features and KNN-based approaches.
We observed that the KNN-based approach (k-d tree) with the Euclidean metric is an efficient method for finding exemplars, while ITML is computationally expensive and also depends on the training procedure. The table provides the experimental results for the Differential Image Network variant with k (the number of exemplars) = 2 and the Hadamard method: Question Generation Approaches: Sampling vs Argmax: We obtain the decoding using the standard practice followed in the literature BIBREF38, which selects the argmax sentence. We also evaluated our method by sampling from the probability distributions, and provide the results for our proposed MDN-Joint method on the VQG dataset as follows: How are exemplars improving the embedding? In the multimodal differential network, we use exemplars and train with a triplet loss. It is known that, using a triplet network, we can learn a representation that accentuates how the image is closer to a supporting exemplar than to the opposing exemplar BIBREF42, BIBREF41. The joint embedding is obtained between the image and language representations. Therefore, the improved representation helps in obtaining an improved context vector. Further, we show that this also results in improving VQG. Are exemplars required? We had similar concerns and validated this point by using random exemplars in place of the nearest neighbors for MDN (k=R in Table TABREF35). In this case the method performs similarly to the baseline, which suggests that with random exemplars the model learns to ignore the cue. Are captions necessary for our method? No. In our method, we use an existing image captioning method BIBREF13 to generate captions for images that do not have them. For the VQG dataset, captions were available and we used them; for the VQA dataset, captions were not available and we generated them during training. We provide detailed evidence with respect to caption-question pairs to ensure that we are generating novel questions: while the caption gives a scene description, our proposed method generates semantically meaningful and novel questions. Examples for Figure 1 of the main paper: First image: caption - A young man skateboarding around little cones; our question - Is this a skateboard competition? Second image: caption - A small child is standing on a pair of skis; our question - How old is that little girl? Intuition behind the Triplet Network: The intuition behind the use of triplet networks follows the paper BIBREF41 that first advocated their use. The main idea is that when we learn distance functions that are "close" for similar and "far" for dissimilar representations, it is not clear with respect to what measure close and far are defined. By incorporating a triplet, we learn distance functions that encode that "A is more similar to B than to C". Learning such measures allows us to bring the target image-caption joint embedding closer to supporting exemplars than to contrasting exemplars. Analysis of Network. Analysis of Tag Context: Tags are a language-based context. These tags are extracted from the caption, except the question tags, which are fixed as the 7 'Wh words' (What, Why, Where, Who, When, Which, and How). We experimented with noun tags, verb tags, and 'Wh-word' tags, as shown in the tables, and within each tag category we varied the number of tags from 1 to 7. We combined the different tags using 1D convolution, concatenation, and addition of all the tags, and observed that the concatenation mechanism gives better results.
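The following is a minimal PyTorch sketch of the Tag Embedding Net described earlier: a word embedding, temporal convolutions of width 1, 2, and 3, max-pooling, and concatenation into a 512-d vector. Channel counts and the embedding dimension are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TagEmbeddingNet(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, out_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # uni-gram, bi-gram and tri-gram temporal convolutions
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, 128, kernel_size=k) for k in (1, 2, 3)]
        )
        self.fc = nn.Linear(3 * 128, out_dim)

    def forward(self, tag_ids):                   # tag_ids: (batch, n_tokens), n_tokens >= 3
        x = self.embed(tag_ids).transpose(1, 2)   # (batch, emb_dim, n_tokens)
        # max-pool each n-gram feature map over time, then concatenate
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(feats, dim=1))   # (batch, out_dim)
```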
As we can see in Table TABREF33, taking nouns, verbs, and Wh-words as context, we achieve significant improvements in the BLEU, METEOR, and CIDEr scores over the basic models that take only the image and only the caption, respectively. Taking nouns generated from the captions and questions of the corresponding training example as context, we achieve an increase of 1.6% in BLEU, 2% in METEOR, and 34.4% in CIDEr over the basic image model. Similarly, taking verbs as context gives an increase of 1.3% in BLEU, 2.1% in METEOR, and 33.5% in CIDEr over the basic image model. The best result comes when we take 3 Wh-words as context and apply the Hadamard model with the 3 Wh-words concatenated. Table TABREF34 shows the results when we take more than one word as context. Here we show that for 3 words, i.e., 3 nouns, 3 verbs, and 3 Wh-words, the concatenation model performs best. In this table, the conv model uses 1D convolution to combine the tags and the joint model combines all the tags. Analysis of Context: Exemplars: In the Multimodal Differential Network and the Differential Image Network, we use exemplar images (target, supporting, and opposing images) to obtain the differential context. We performed experiments with a single exemplar (K=1), i.e., one supporting and one opposing image along with the target image, and with two exemplars (K=2), i.e., two supporting and two opposing images along with a single target image; similarly, we performed experiments for K=3 and K=4, as shown in Table TABREF35. Mixture Module: Other Variations: The Hadamard method uses element-wise multiplication and the Addition method uses element-wise addition in place of the concatenation operator of the Joint method. The Hadamard method finds a correlation between the image feature and the caption feature vector, while the Addition method learns a resultant vector. In the attention method, the output INLINEFORM35 is the weighted average of the attention probability vector INLINEFORM36 and the convolutional features INLINEFORM37 . The attention probability vector weights the contribution of each convolutional feature based on the caption vector. This attention method is similar to the stacked attention method BIBREF54. The attention mechanism is given by: DISPLAYFORM0 where INLINEFORM38 is the 14x14x512-dimensional convolutional feature map from the fifth convolution layer of the VGG-19 network for image INLINEFORM39 and INLINEFORM40 is the caption context vector. The attention probability vector INLINEFORM41 is a 196-dimensional vector. INLINEFORM42 are the weights and INLINEFORM43 are the biases of the different layers. Here INLINEFORM44 represents element-wise addition. We evaluate the different approaches and provide results for each.
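Returning to the attention variant of the mixture module described above, here is a minimal PyTorch sketch of a stacked-attention-style fusion: a 196-way attention distribution over the flattened 14x14 conv features, conditioned on the caption vector. Layer sizes and names are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    def __init__(self, feat_dim=512, cap_dim=512, hid_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, hid_dim)
        self.cap_proj = nn.Linear(cap_dim, hid_dim)
        self.score = nn.Linear(hid_dim, 1)

    def forward(self, conv_feats, cap_emb):
        # conv_feats: (batch, 196, 512) flattened 14x14 grid; cap_emb: (batch, 512)
        joint = torch.tanh(self.img_proj(conv_feats)
                           + self.cap_proj(cap_emb).unsqueeze(1))
        alpha = F.softmax(self.score(joint).squeeze(-1), dim=1)  # (batch, 196)
        # weighted average of the spatial features under the attention distribution
        return (alpha.unsqueeze(-1) * conv_feats).sum(dim=1)
```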
Evaluation Metrics: Our task is similar to the encoder-decoder framework of machine translation, so we use the same evaluation metrics used in machine translation. BLEU BIBREF46 is the first metric for measuring the correlation between a generated question and the ground truth question. The BLEU score measures precision, i.e., how many words in the predicted question appear in the reference question; BLEU-n measures the n-gram precision by counting co-occurrences with the reference sentences, and we evaluate BLEU scores for n from 1 to 4. The mechanism of the ROUGE-n BIBREF48 score is similar to BLEU-n, except that it measures recall instead of precision, i.e., how many words in the reference question appear in the predicted question. Another version of the ROUGE metric is ROUGE-L, which measures the longest common sub-sequence present in the generated question. The METEOR BIBREF47 score is another useful evaluation metric that calculates the similarity between the generated question and the reference one by considering synonyms, stemming, and paraphrases. The METEOR score measures the word matches between the predicted question and the reference question; in VQG, it computes the word-match score between the predicted question and the five reference questions. The CIDEr BIBREF49 score is a consensus-based evaluation metric. It measures human-likeness, that is, whether the sentence could have been written by a human. The consensus is measured by how often n-grams in the predicted question appear in the reference questions; n-grams that appear very frequently across the reference questions are considered less informative and contribute less to the CIDEr score. We provide our results using all these metrics and compare them with existing baselines. Figure 1: Can you guess which among the given questions is human annotated and which is machine generated? Figure 2: Here we provide intuition for using implicit embeddings instead of explicit ones. As explained in section 1, the questions obtained from the implicit embeddings are more natural and holistic than those from the explicit ones. Figure 3: An illustrative example shows the validity of our obtained exemplars with the help of an object classification network, RESNET-101. We see that the probability scores of the target and supporting exemplar images are similar. That is not the case with the contrasting exemplar. The corresponding generated questions when considering the individual images are also shown. Figure 4: This is an overview of our Multimodal Differential Network for Visual Question Generation. It consists of a Representation Module which extracts multimodal features, a Mixture Module that fuses the multimodal representations, and a Decoder that generates the question using an LSTM-based language model. In this figure, we have shown the Joint Mixture Module. We train our network with a Cross-Entropy and Triplet Loss. Figure 5: These are some examples from the VQG-COCO dataset which provide a comparison between our generated questions and human annotated questions. (a) is the human annotated question for all the images. More qualitative results are present in the supplementary material. Figure 6: Sunburst plot for VQG-COCO: The ith ring captures the frequency distribution over words for the ith word of the generated question. The angle subtended at the center is proportional to the frequency of the word. While some words have high frequency, the outer rings illustrate a fine blend of words. We have restricted the plot to 5 rings for easy readability. Best viewed in color. Table 1: Analysis of variants of our proposed method on the VQG-COCO dataset as mentioned in section 4.4 and different ways of getting a joint embedding (Attention (AtM), Hadamard (HM), Addition (AM), and Joint (JM) method as given in section 4.1.3) for each method. Refer to section 5.1 for more details. B1 is BLEU1. Table 2: State-of-the-Art comparison on the VQA-1.0 dataset. The first block consists of the state-of-the-art results, the second block refers to the baselines mentioned in section 5.2, and the third block provides the results for the variants of the mixture module present in section 4.1.3. Figure 8: Perceptual Realism Plot for the human survey.
Here every question has a different number of responses, and hence the threshold, which is half of the total responses for each question, varies. This plot is only for 50 of the 100 questions involved in the survey. See section 5.4 for more details. Table 3: State-of-the-Art (SOTA) comparison on the VQG-COCO dataset. The first block consists of the SOTA results, the second block refers to the baselines mentioned in section 5.2, and the third block shows the results for the best method for the different ablations mentioned in Table 1. Figure 7: The mean ranks of all the models on the basis of METEOR score are plotted on the x-axis. Here Joint refers to our MDN-Joint model and the others are the different variations described in section 4.1.3, along with Natural (Mostafazadeh et al., 2016) and Creative (Jain et al., 2017). A colored line between two models indicates that they are not significantly different from each other.
Exemplars aim to provide appropriate context; the joint image-caption embedding for the supporting exemplar is closer to that of the target image-caption.
How do you think Retief felt during his time on the ship? A. Overwhelmed by bullies B. Fearful of what he would encounter once they landed C. Scared of what they had planned for him D. Annoyed by the grievance he was receiving.
THE FROZEN PLANET By Keith Laumer [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, September 1961. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] "It is rather unusual," Magnan said, "to assign an officer of your rank to courier duty, but this is an unusual mission." Retief sat relaxed and said nothing. Just before the silence grew awkward, Magnan went on. "There are four planets in the group," he said. "Two double planets, all rather close to an unimportant star listed as DRI-G 33987. They're called Jorgensen's Worlds, and in themselves are of no importance whatever. However, they lie deep in the sector into which the Soetti have been penetrating. "Now—" Magnan leaned forward and lowered his voice—"we have learned that the Soetti plan a bold step forward. Since they've met no opposition so far in their infiltration of Terrestrial space, they intend to seize Jorgensen's Worlds by force." Magnan leaned back, waiting for Retief's reaction. Retief drew carefully on his cigar and looked at Magnan. Magnan frowned. "This is open aggression, Retief," he said, "in case I haven't made myself clear. Aggression on Terrestrial-occupied territory by an alien species. Obviously, we can't allow it." Magnan drew a large folder from his desk. "A show of resistance at this point is necessary. Unfortunately, Jorgensen's Worlds are technologically undeveloped areas. They're farmers or traders. Their industry is limited to a minor role in their economy—enough to support the merchant fleet, no more. The war potential, by conventional standards, is nil." Magnan tapped the folder before him. "I have here," he said solemnly, "information which will change that picture completely." He leaned back and blinked at Retief. "All right, Mr. Councillor," Retief said. "I'll play along; what's in the folder?" Magnan spread his fingers, folded one down. "First," he said. "The Soetti War Plan—in detail. We were fortunate enough to make contact with a defector from a party of renegade Terrestrials who've been advising the Soetti." He folded another finger. "Next, a battle plan for the Jorgensen's people, worked out by the Theory group." He wrestled a third finger down. "Lastly; an Utter Top Secret schematic for conversion of a standard anti-acceleration field into a potent weapon—a development our systems people have been holding in reserve for just such a situation." "Is that all?" Retief said. "You've still got two fingers sticking up." Magnan looked at the fingers and put them away. "This is no occasion for flippancy, Retief. In the wrong hands, this information could be catastrophic. You'll memorize it before you leave this building." "I'll carry it, sealed," Retief said. "That way nobody can sweat it out of me." Magnan started to shake his head. "Well," he said. "If it's trapped for destruction, I suppose—" "I've heard of these Jorgensen's Worlds," Retief said. "I remember an agent, a big blond fellow, very quick on the uptake. A wizard with cards and dice. Never played for money, though." "Umm," Magnan said. "Don't make the error of personalizing this situation, Retief. Overall policy calls for a defense of these backwater worlds. Otherwise the Corps would allow history to follow its natural course, as always." "When does this attack happen?" "Less than four weeks." "That doesn't leave me much time." "I have your itinerary here. Your accommodations are clear as far as Aldo Cerise. 
You'll have to rely on your ingenuity to get you the rest of the way." "That's a pretty rough trip, Mr. Councillor. Suppose I don't make it?" Magnan looked sour. "Someone at a policy-making level has chosen to put all our eggs in one basket, Retief. I hope their confidence in you is not misplaced." "This antiac conversion; how long does it take?" "A skilled electronics crew can do the job in a matter of minutes. The Jorgensens can handle it very nicely; every other man is a mechanic of some sort." Retief opened the envelope Magnan handed him and looked at the tickets inside. "Less than four hours to departure time," he said. "I'd better not start any long books." "You'd better waste no time getting over to Indoctrination," Magnan said. Retief stood up. "If I hurry, maybe I can catch the cartoon." "The allusion escapes me," Magnan said coldly. "And one last word. The Soetti are patrolling the trade lanes into Jorgensen's Worlds; don't get yourself interned." "I'll tell you what," Retief said soberly. "In a pinch, I'll mention your name." "You'll be traveling with Class X credentials," Magnan snapped. "There must be nothing to connect you with the Corps." "They'll never guess," Retief said. "I'll pose as a gentleman." "You'd better be getting started," Magnan said, shuffling papers. "You're right," Retief said. "If I work at it, I might manage a snootful by takeoff." He went to the door. "No objection to my checking out a needler, is there?" Magnan looked up. "I suppose not. What do you want with it?" "Just a feeling I've got." "Please yourself." "Some day," Retief said, "I may take you up on that." II Retief put down the heavy travel-battered suitcase and leaned on the counter, studying the schedules chalked on the board under the legend "ALDO CERISE—INTERPLANETARY." A thin clerk in a faded sequined blouse and a plastic snakeskin cummerbund groomed his fingernails, watching Retief from the corner of his eye. Retief glanced at him. The clerk nipped off a ragged corner with rabbitlike front teeth and spat it on the floor. "Was there something?" he said. "Two twenty-eight, due out today for the Jorgensen group," Retief said. "Is it on schedule?" The clerk sampled the inside of his right cheek, eyed Retief. "Filled up. Try again in a couple of weeks." "What time does it leave?" "I don't think—" "Let's stick to facts," Retief said. "Don't try to think. What time is it due out?" The clerk smiled pityingly. "It's my lunch hour," he said. "I'll be open in an hour." He held up a thumb nail, frowned at it. "If I have to come around this counter," Retief said, "I'll feed that thumb to you the hard way." The clerk looked up and opened his mouth. Then he caught Retief's eye, closed his mouth and swallowed. "Like it says there," he said, jerking a thumb at the board. "Lifts in an hour. But you won't be on it," he added. Retief looked at him. "Some ... ah ... VIP's required accommodation," he said. He hooked a finger inside the sequined collar. "All tourist reservations were canceled. You'll have to try to get space on the Four-Planet Line ship next—" "Which gate?" Retief said. "For ... ah...?" "For the two twenty-eight for Jorgensen's Worlds," Retief said. "Well," the clerk said. "Gate 19," he added quickly. "But—" Retief picked up his suitcase and walked away toward the glare sign reading To Gates 16-30 . "Another smart alec," the clerk said behind him. Retief followed the signs, threaded his way through crowds, found a covered ramp with the number 228 posted over it. 
A heavy-shouldered man with a scarred jawline and small eyes was slouching there in a rumpled gray uniform. He put out a hand as Retief started past him. "Lessee your boarding pass," he muttered. Retief pulled a paper from an inside pocket, handed it over. The guard blinked at it. "Whassat?" "A gram confirming my space," Retief said. "Your boy on the counter says he's out to lunch." The guard crumpled the gram, dropped it on the floor and lounged back against the handrail. "On your way, bub," he said. Retief put his suitcase carefully on the floor, took a step and drove a right into the guard's midriff. He stepped aside as the man doubled and went to his knees. "You were wide open, ugly. I couldn't resist. Tell your boss I sneaked past while you were resting your eyes." He picked up his bag, stepped over the man and went up the gangway into the ship. A cabin boy in stained whites came along the corridor. "Which way to cabin fifty-seven, son?" Retief asked. "Up there." The boy jerked his head and hurried on. Retief made his way along the narrow hall, found signs, followed them to cabin fifty-seven. The door was open. Inside, baggage was piled in the center of the floor. It was expensive looking baggage. Retief put his bag down. He turned at a sound behind him. A tall, florid man with an expensive coat belted over a massive paunch stood in the open door, looking at Retief. Retief looked back. The florid man clamped his jaws together, turned to speak over his shoulder. "Somebody in the cabin. Get 'em out." He rolled a cold eye at Retief as he backed out of the room. A short, thick-necked man appeared. "What are you doing in Mr. Tony's room?" he barked. "Never mind! Clear out of here, fellow! You're keeping Mr. Tony waiting." "Too bad," Retief said. "Finders keepers." "You nuts?" The thick-necked man stared at Retief. "I said it's Mr. Tony's room." "I don't know Mr. Tony. He'll have to bull his way into other quarters." "We'll see about you, mister." The man turned and went out. Retief sat on the bunk and lit a cigar. There was a sound of voices in the corridor. Two burly baggage-smashers appeared, straining at an oversized trunk. They maneuvered it through the door, lowered it, glanced at Retief and went out. The thick-necked man returned. "All right, you. Out," he growled. "Or have I got to have you thrown out?" Retief rose and clamped the cigar between his teeth. He gripped a handle of the brass-bound trunk in each hand, bent his knees and heaved the trunk up to chest level, then raised it overhead. He turned to the door. "Catch," he said between clenched teeth. The trunk slammed against the far wall of the corridor and burst. Retief turned to the baggage on the floor, tossed it into the hall. The face of the thick-necked man appeared cautiously around the door jamb. "Mister, you must be—" "If you'll excuse me," Retief said, "I want to catch a nap." He flipped the door shut, pulled off his shoes and stretched out on the bed. Five minutes passed before the door rattled and burst open. Retief looked up. A gaunt leathery-skinned man wearing white ducks, a blue turtleneck sweater and a peaked cap tilted raffishly over one eye stared at Retief. "Is this the joker?" he grated. The thick-necked man edged past him, looked at Retief and snorted, "That's him, sure." "I'm captain of this vessel," the first man said. "You've got two minutes to haul your freight out of here, buster." 
"When you can spare the time from your other duties," Retief said, "take a look at Section Three, Paragraph One, of the Uniform Code. That spells out the law on confirmed space on vessels engaged in interplanetary commerce." "A space lawyer." The captain turned. "Throw him out, boys." Two big men edged into the cabin, looking at Retief. "Go on, pitch him out," the captain snapped. Retief put his cigar in an ashtray, and swung his feet off the bunk. "Don't try it," he said softly. One of the two wiped his nose on a sleeve, spat on his right palm, and stepped forward, then hesitated. "Hey," he said. "This the guy tossed the trunk off the wall?" "That's him," the thick-necked man called. "Spilled Mr. Tony's possessions right on the deck." "Deal me out," the bouncer said. "He can stay put as long as he wants to. I signed on to move cargo. Let's go, Moe." "You'd better be getting back to the bridge, Captain," Retief said. "We're due to lift in twenty minutes." The thick-necked man and the Captain both shouted at once. The Captain's voice prevailed. "—twenty minutes ... uniform Code ... gonna do?" "Close the door as you leave," Retief said. The thick-necked man paused at the door. "We'll see you when you come out." III Four waiters passed Retief's table without stopping. A fifth leaned against the wall nearby, a menu under his arm. At a table across the room, the Captain, now wearing a dress uniform and with his thin red hair neatly parted, sat with a table of male passengers. He talked loudly and laughed frequently, casting occasional glances Retief's way. A panel opened in the wall behind Retief's chair. Bright blue eyes peered out from under a white chef's cap. "Givin' you the cold shoulder, heh, Mister?" "Looks like it, old-timer," Retief said. "Maybe I'd better go join the skipper. His party seems to be having all the fun." "Feller has to be mighty careless who he eats with to set over there." "I see your point." "You set right where you're at, Mister. I'll rustle you up a plate." Five minutes later, Retief cut into a thirty-two ounce Delmonico backed up with mushrooms and garlic butter. "I'm Chip," the chef said. "I don't like the Cap'n. You can tell him I said so. Don't like his friends, either. Don't like them dern Sweaties, look at a man like he was a worm." "You've got the right idea on frying a steak, Chip. And you've got the right idea on the Soetti, too," Retief said. He poured red wine into a glass. "Here's to you." "Dern right," Chip said. "Dunno who ever thought up broiling 'em. Steaks, that is. I got a Baked Alaska coming up in here for dessert. You like brandy in yer coffee?" "Chip, you're a genius." "Like to see a feller eat," Chip said. "I gotta go now. If you need anything, holler." Retief ate slowly. Time always dragged on shipboard. Four days to Jorgensen's Worlds. Then, if Magnan's information was correct, there would be four days to prepare for the Soetti attack. It was a temptation to scan the tapes built into the handle of his suitcase. It would be good to know what Jorgensen's Worlds would be up against. Retief finished the steak, and the chef passed out the baked Alaska and coffee. Most of the other passengers had left the dining room. Mr. Tony and his retainers still sat at the Captain's table. As Retief watched, four men arose from the table and sauntered across the room. The first in line, a stony-faced thug with a broken ear, took a cigar from his mouth as he reached the table. 
He dipped the lighted end in Retief's coffee, looked at it, and dropped it on the tablecloth. The others came up, Mr. Tony trailing. "You must want to get to Jorgensen's pretty bad," the thug said in a grating voice. "What's your game, hick?" Retief looked at the coffee cup, picked it up. "I don't think I want my coffee," he said. He looked at the thug. "You drink it." The thug squinted at Retief. "A wise hick," he began. With a flick of the wrist, Retief tossed the coffee into the thug's face, then stood and slammed a straight right to the chin. The thug went down. Retief looked at Mr. Tony, still standing open-mouthed. "You can take your playmates away now, Tony," he said. "And don't bother to come around yourself. You're not funny enough." Mr. Tony found his voice. "Take him, Marbles!" he growled. The thick-necked man slipped a hand inside his tunic and brought out a long-bladed knife. He licked his lips and moved in. Retief heard the panel open beside him. "Here you go, Mister," Chip said. Retief darted a glance; a well-honed french knife lay on the sill. "Thanks, Chip," Retief said. "I won't need it for these punks." Thick-neck lunged and Retief hit him square in the face, knocking him under the table. The other man stepped back, fumbling a power pistol from his shoulder holster. "Aim that at me, and I'll kill you," Retief said. "Go on, burn him!" Mr. Tony shouted. Behind him, the captain appeared, white-faced. "Put that away, you!" he yelled. "What kind of—" "Shut up," Mr. Tony said. "Put it away, Hoany. We'll fix this bum later." "Not on this vessel, you won't," the captain said shakily. "I got my charter to consider." "Ram your charter," Hoany said harshly. "You won't be needing it long." "Button your floppy mouth, damn you!" Mr. Tony snapped. He looked at the man on the floor. "Get Marbles out of here. I ought to dump the slob." He turned and walked away. The captain signaled and two waiters came up. Retief watched as they carted the casualty from the dining room. The panel opened. "I usta be about your size, when I was your age," Chip said. "You handled them pansies right. I wouldn't give 'em the time o' day." "How about a fresh cup of coffee, Chip?" Retief said. "Sure, Mister. Anything else?" "I'll think of something," Retief said. "This is shaping up into one of those long days." "They don't like me bringing yer meals to you in yer cabin," Chip said. "But the cap'n knows I'm the best cook in the Merchant Service. They won't mess with me." "What has Mr. Tony got on the captain, Chip?" Retief asked. "They're in some kind o' crooked business together. You want some more smoked turkey?" "Sure. What have they got against my going to Jorgensen's Worlds?" "Dunno. Hasn't been no tourists got in there fer six or eight months. I sure like a feller that can put it away. I was a big eater when I was yer age." "I'll bet you can still handle it, Old Timer. What are Jorgensen's Worlds like?" "One of 'em's cold as hell and three of 'em's colder. Most o' the Jorgies live on Svea; that's the least froze up. Man don't enjoy eatin' his own cookin' like he does somebody else's." "That's where I'm lucky, Chip. What kind of cargo's the captain got aboard for Jorgensen's?" "Derned if I know. In and out o' there like a grasshopper, ever few weeks. Don't never pick up no cargo. No tourists any more, like I says. Don't know what we even run in there for." "Where are the passengers we have aboard headed?" "To Alabaster. That's nine days' run in-sector from Jorgensen's. 
You ain't got another one of them cigars, have you?" "Have one, Chip. I guess I was lucky to get space on this ship." "Plenty o' space, Mister. We got a dozen empty cabins." Chip puffed the cigar alight, then cleared away the dishes, poured out coffee and brandy. "Them Sweaties is what I don't like," he said. Retief looked at him questioningly. "You never seen a Sweaty? Ugly lookin' devils. Skinny legs, like a lobster; big chest, shaped like the top of a turnip; rubbery lookin' head. You can see the pulse beatin' when they get riled." "I've never had the pleasure," Retief said. "You prob'ly have it perty soon. Them devils board us nigh ever trip out. Act like they was the Customs Patrol or somethin'." There was a distant clang, and a faint tremor ran through the floor. "I ain't superstitious ner nothin'," Chip said. "But I'll be triple-damned if that ain't them boarding us now." Ten minutes passed before bootsteps sounded outside the door, accompanied by a clicking patter. The doorknob rattled, then a heavy knock shook the door. "They got to look you over," Chip whispered. "Nosy damn Sweaties." "Unlock it, Chip." The chef opened the door. "Come in, damn you," he said. A tall and grotesque creature minced into the room, tiny hoof-like feet tapping on the floor. A flaring metal helmet shaded the deep-set compound eyes, and a loose mantle flapped around the knobbed knees. Behind the alien, the captain hovered nervously. "Yo' papiss," the alien rasped. "Who's your friend, Captain?" Retief said. "Never mind; just do like he tells you." "Yo' papiss," the alien said again. "Okay," Retief said. "I've seen it. You can take it away now." "Don't horse around," the captain said. "This fellow can get mean." The alien brought two tiny arms out from the concealment of the mantle, clicked toothed pincers under Retief's nose. "Quick, soft one." "Captain, tell your friend to keep its distance. It looks brittle, and I'm tempted to test it." "Don't start anything with Skaw; he can clip through steel with those snappers." "Last chance," Retief said. Skaw stood poised, open pincers an inch from Retief's eyes. "Show him your papers, you damned fool," the captain said hoarsely. "I got no control over Skaw." The alien clicked both pincers with a sharp report, and in the same instant Retief half-turned to the left, leaned away from the alien and drove his right foot against the slender leg above the bulbous knee-joint. Skaw screeched and floundered, greenish fluid spattering from the burst joint. "I told you he was brittle," Retief said. "Next time you invite pirates aboard, don't bother to call." "Jesus, what did you do! They'll kill us!" the captain gasped, staring at the figure flopping on the floor. "Cart poor old Skaw back to his boat," Retief said. "Tell him to pass the word. No more illegal entry and search of Terrestrial vessels in Terrestrial space." "Hey," Chip said. "He's quit kicking." The captain bent over Skaw, gingerly rolled him over. He leaned close and sniffed. "He's dead." The captain stared at Retief. "We're all dead men," he said. "These Soetti got no mercy." "They won't need it. Tell 'em to sheer off; their fun is over." "They got no more emotions than a blue crab—" "You bluff easily, Captain. Show a few guns as you hand the body back. We know their secret now." "What secret? I—" "Don't be no dumber than you got to, Cap'n," Chip said. "Sweaties die easy; that's the secret." "Maybe you got a point," the captain said, looking at Retief. "All they got's a three-man scout. It could work." 
He went out, came back with two crewmen. They hauled the dead alien gingerly into the hall. "Maybe I can run a bluff on the Soetti," the captain said, looking back from the door. "But I'll be back to see you later." "You don't scare us, Cap'n," Chip said. "Him and Mr. Tony and all his goons. You hit 'em where they live, that time. They're pals o' these Sweaties. Runnin' some kind o' crooked racket." "You'd better take the captain's advice, Chip. There's no point in your getting involved in my problems." "They'd of killed you before now, Mister, if they had any guts. That's where we got it over these monkeys. They got no guts." "They act scared, Chip. Scared men are killers." "They don't scare me none." Chip picked up the tray. "I'll scout around a little and see what's goin' on. If the Sweaties figure to do anything about that Skaw feller they'll have to move fast; they won't try nothin' close to port." "Don't worry, Chip. I have reason to be pretty sure they won't do anything to attract a lot of attention in this sector just now." Chip looked at Retief. "You ain't no tourist, Mister. I know that much. You didn't come out here for fun, did you?" "That," Retief said, "would be a hard one to answer." IV Retief awoke at a tap on his door. "It's me, Mister. Chip." "Come on in." The chef entered the room, locking the door. "You shoulda had that door locked." He stood by the door, listening, then turned to Retief. "You want to get to Jorgensen's perty bad, don't you, Mister?" "That's right, Chip." "Mr. Tony give the captain a real hard time about old Skaw. The Sweaties didn't say nothin'. Didn't even act surprised, just took the remains and pushed off. But Mr. Tony and that other crook they call Marbles, they was fit to be tied. Took the cap'n in his cabin and talked loud at him fer half a hour. Then the cap'n come out and give some orders to the Mate." Retief sat up and reached for a cigar. "Mr. Tony and Skaw were pals, eh?" "He hated Skaw's guts. But with him it was business. Mister, you got a gun?" "A 2mm needler. Why?" "The orders cap'n give was to change course fer Alabaster. We're by-passin' Jorgensen's Worlds. We'll feel the course change any minute." Retief lit the cigar, reached under the mattress and took out a short-barreled pistol. He dropped it in his pocket, looked at Chip. "Maybe it was a good thought, at that. Which way to the Captain's cabin?" "This is it," Chip said softly. "You want me to keep an eye on who comes down the passage?" Retief nodded, opened the door and stepped into the cabin. The captain looked up from his desk, then jumped up. "What do you think you're doing, busting in here?" "I hear you're planning a course change, Captain." "You've got damn big ears." "I think we'd better call in at Jorgensen's." "You do, huh?" the captain sat down. "I'm in command of this vessel," he said. "I'm changing course for Alabaster." "I wouldn't find it convenient to go to Alabaster," Retief said. "So just hold your course for Jorgensen's." "Not bloody likely." "Your use of the word 'bloody' is interesting, Captain. Don't try to change course." The captain reached for the mike on his desk, pressed the key. "Power Section, this is the captain," he said. Retief reached across the desk, gripped the captain's wrist. "Tell the mate to hold his present course," he said softly. "Let go my hand, buster," the captain snarled. Eyes on Retief's, he eased a drawer open with his left hand, reached in. Retief kneed the drawer. The captain yelped and dropped the mike. 
"You busted it, you—" "And one to go," Retief said. "Tell him." "I'm an officer of the Merchant Service!" "You're a cheapjack who's sold his bridge to a pack of back-alley hoods." "You can't put it over, hick." "Tell him." The captain groaned and picked up the mike. "Captain to Power Section," he said. "Hold your present course until you hear from me." He dropped the mike and looked up at Retief. "It's eighteen hours yet before we pick up Jorgensen Control. You going to sit here and bend my arm the whole time?" Retief released the captain's wrist and turned to the door. "Chip, I'm locking the door. You circulate around, let me know what's going on. Bring me a pot of coffee every so often. I'm sitting up with a sick friend." "Right, Mister. Keep an eye on that jasper; he's slippery." "What are you going to do?" the captain demanded. Retief settled himself in a chair. "Instead of strangling you, as you deserve," he said, "I'm going to stay here and help you hold your course for Jorgensen's Worlds." The captain looked at Retief. He laughed, a short bark. "Then I'll just stretch out and have a little nap, farmer. If you feel like dozing off sometime during the next eighteen hours, don't mind me." Retief took out the needler and put it on the desk before him. "If anything happens that I don't like," he said, "I'll wake you up. With this."
D. Annoyed by the grievance he was receiving.
What perspective does Rosenthal adapt toward Dole's grievances? A. Rosenthal asserts that Dole is purposefully lying to the public B. Rosenthal implies that Dole's mental faculties are deteriorating C. Rosenthal reveals that he is perplexed by Dole's grievances D. Rosenthal admits that Dole's grievances are warranted
Dole vs. the Times For several weeks now, pundits have debated how Bob Dole would exit the stage. Would he depart on a negative note about his opponent or a positive one about himself? Would he leave with anger or with humor? In the past several days, the issue has been settled. Dole, it appears, will end his political career raging against the New York Times . Dole's spat with the gray lady went public on Thursday, Oct. 24. In New Orleans, Dole charged the paper with ignoring a story about a Miami drug dealer who got invited to the White House. "This is a disgrace," Dole insisted. "I doubt if you even read it in the New York Times . They probably put it in the want ads. They don't put any anti-Clinton stories in the New York Times . Only anti-Dole stories in the New York Times ." Dole repeated his attack for the next five days. "We are not going to let the media steal this election," he told a crowd in Dallas on Friday. "This country belongs to the people, not the New York Times ." On Saturday, in Visalia, Calif., he added, "I know that with a crowd this size, the New York Times will write not many people showed up, but the other papers will get it right." On Sunday (the day the Times endorsed Clinton), Dole called the paper "the apologist for President Clinton for the last four years and an arm of the Democratic National Committee." In a CNN interview broadcast Monday, Dole said the Times "might as well be part of the Democratic Party. ... They hammer us on a daily basis. We make a major speech, they bury it back on section D. They put a front-page story that, well, Bob Dole and Jack Kemp didn't get along together 12 years ago." On Tuesday, Dole was still at it, referring to the 28 words of the 10th Amendment, and quipping, "That's about what I got in the New York Times today." The Times has reacted to this assault by highhandedly quoting everything and explaining none of it, leaving its readers baffled as to why the Republican nominee is so upset at the paper. In fact, Dole's fury at the Times is hardly news to those who work at the paper. According to Katharine Seelye, who has covered Dole since the beginning of his campaign, the complaints date from December 1995, when Dole staff members first protested that she had misunderstood the candidate's position on abortion. The real bitterness, however, began in May, when the paper played what Dole aides billed as a major address about welfare on Page 19 of the business section. Since then, campaign honchos have peppered the paper's reporters and editors with constant phone calls and letters complaining about unfair treatment. Reporters traveling with Dole caught a glimpse of the enmity Oct. 9, when Nelson Warfield, Dole's press secretary, staged a public confrontation with Seelye. The candidate, Warfield told reporters waiting to board the campaign plane, had just come from an appearance on G. Gordon Liddy's radio show. Why, Seelye asked, weren't reporters told about the appearance in advance? According to reporters present, Warfield snapped that it wouldn't make any difference because the Times would get the story wrong anyway. Then, on the plane, Warfield walked back to the press section and grandly served Seelye with a copy of a letter from Communications Director John Buckley to her boss, Times Washington Editor Andrew Rosenthal. That letter, which has fallen into the hands of Slate, protests Seelye's coverage of a speech the previous day. Dole, in New Jersey, had talked about Clinton being AWOL in the drug war. 
"Where has he been for four years? How many hundreds of thousands of young people started drugs?" Dole said. "Three million have started smoking while he was playing around with smoking and all this stuff finally in an election year." Seelye's front-page story reported that "Mr. Dole accused the President of 'playing around' while the drug war raged out of control." Buckley complains that the story "could lead the reader to believe that Dole was talking about a very different kind of 'playing around'--something he did not say, and something he would not say." The letter continues: "Since May, I have been pointing out to you a problem we see with the accuracy and understanding of context revealed in Kit's reporting," going on to assert that "Seelye has misquoted Dole on numerous occasions and done so in a manner that distorted the accuracy of her assertions and your coverage." No Dole staff would be quoted by name for this story, but speaking on background, a senior campaign official elaborated upon the complaint. "They've just done a miserable job throughout this campaign," the official said. "The coverage of Dole has been excessively bitchy from day one, in addition to having a number of extraordinary factual problems." With Seelye, the official says, the problem is "not being able to transcribe a tape accurately." With Adam Nagourney, the Times ' other reporter covering Dole full time since the summer, "the problem is an incredible focus on the little picture as opposed to the big picture." As an example, the official cites a September story in which Nagourney lumped together Dole's fall from a platform in Chico, Calif., and his mistaken reference to the "Brooklyn" Dodgers as "a rough stretch of politicking." Other than those two episodes, the official says, Dole actually had a great week. The campaign's complaint extends to unequal treatment--a nine-part series on Clinton's record, which the official describes as "the softest portrait since they invented black velvet"--and the Times perpetually underestimating the size of Dole crowds. "Clinton even gets better photographs," the official contends. Rosenthal, who has direct responsibility for campaign coverage at the Times , professes bewilderment at these complaints. "We don't make editorial judgments based on disposition to be tough on Bob Dole or nice to Bob Dole," he says. On the specifics, Rosenthal says that the Times ran an editor's note acknowledging that it shouldn't have truncated the "playing around" quote. He points out that the Times ran its story on the Miami drug dealer who visited the White House the same day Dole accused the paper of not covering it. As for the nine-part series on Clinton, Rosenthal says it is the long-standing practice of the paper to do a lengthy series on the incumbent's record. "If Dole wins and runs again in 2000, he will get nine-part series too," he says. "Ithink we have been tough on him," Seelye says. This stems, however, not from any bias, she says, but from the campaign's own internal problems. Dole's campaign has been especially "porous," with aides emulating the proverbial seafaring rats. This is true enough--in recent days ex-strategist Don Sipple has trashed the campaign on the record. But there's another point, too. Contrary to Buckley's charge that she misquotes Dole, Seelye routinely makes Dole look ridiculous by quoting him all too accurately, depicting him in what one colleague calls a "cinema verité " style. 
Famous for going over and over her tape recordings on the campaign plane, Seelye manages to get every Dole mumble, repetition, and verbal miscue down. For instance, in her Oct. 26 story reporting Dole's attack on the Times , Seelye writes: "In Phoenix on Friday night, he had a delightful time drawing out his vowels as he described financial contributions to the Clinton campaign. "From Indoneeesia," he said. "Yeah. From INdiaaaaah. Some fellow named Gandhi out there. He owes $10,000 in back taxes, but he found $300,000 to give to the Clinton campaign. And now Gandhi is gaaaawn. Gaaaaandhi, gone gone gone. They can't find him." Two days later, she quoted Dole in another story: "They've turned the White House into something else, I don't know what it is. It's the animal house! It's the animal house!" Most reporters would write, Bob Dole yesterday compared the White House to an "animal house," sparing the exclamation points, and making him sound at least compos mentis. But though unflattering, Seelye's Mametizing of Bob Dole can hardly be called unfair. It is not as if the Times cleans up Clinton's quotes; the president simply observes the rules of syntax most of the time. Something similar may be happening with the pictures. After four years, Clinton has learned how to avoid looking unpresidential. He no longer allows himself to be photographed wearing too-short running shorts, and he avoids pulling faces in public. Dole, who is simply less photogenic, is an easier victim for picture editors--who, like their editorial counterparts, have a strong bias against dullness. Take, for instance, the two pictures shown above. The front-page picture the Times ran the day after the second presidential debate does make Dole look like a decomposing monster. But unlike the picture in the Washington Post the same day, it captures the spirit of the event, with Dole grimly taking the offensive and Clinton watching warily but standing aside from the attacks. Dole sounds absurd when he alleges that the paper that broke Whitewater and the story of the first lady's commodities trades has not been aggressive in pursuing Clinton scandals. All sorts of potential Dole scandals have been soft-pedaled by the media, including the Times , because he is so far behind. It's true that coverage of Clinton on the campaign trail has been somewhat softer than the coverage of Dole, as even other Times reporters acknowledge. But the explanation is institutional, not ideological. The press, as many have complained, overemphasizes the "horse race" aspect of politics. As a side effect of that disease, reporters have excessive respect for a well-run campaign. (In 1988, Republican George Bush benefited from this phenomenon.) A cruder reality is that reporters need to have a relationship with Clinton after Tuesday. None of these factors, though, is unique to the Times . So why is Dole singling it out? Dole's attacks on the Times have the appearance of being an exercise in populist demagogy. In one of his great cue-card reading remarks, Dole tried to explain his recent attacks on CNN the other night by saying, "I like the media. They don't like them in the South." But this pat explanation doesn't entirely make sense. Red meat for right-wing crowds doesn't help Dole with the centrist voters he would need to turn around in order to make the miraculous happen. And in fact, according to a senior Dole aide, the attacks are heartfelt on the candidate's part. 
Dole has been going after the Times over the objections of advisers who have been telling him there's no percentage in picking fights with the press. But if Dole is attacking the Times because he is truly furious and not because he thinks it will help him get elected, what is he so angry about? The answer, I think, is that there has always been a Nixonian streak in Bob Dole, by which I mean a part of him which feels shut out of the closed circle of the Eastern establishment. At the Republican convention, Dole blasted the Clinton administration as a "corps of the elite who never grew up, never did anything real, never sacrificed, never suffered, and never learned." That phrase recalled an attack he made on the press long ago, in the days of Watergate, when he accused the Washington Post of being in bed with George McGovern. "There is a cultural and social affinity between the McGovernites and the Post executives and editors," Dole said then. "They belong to the same elite: They can be found living cheek-by-jowl in the same exclusive chic neighborhoods, and hob-nobbing at the same Georgetown parties." The deeper story here isn't whether Dole was wrongly shunted onto D19 when he ought to have been on A1. It's his feelings, as he says goodbye to politics, about the people who get to decide.
C. Rosenthal reveals that he is perplexed by Dole's grievances
How do they evaluate their proposed metric?
### Building Extractive CNN/Daily Mail In this section, we describe the procedure of annotating CNN/Daily Mail. For each facet (sentence) in the reference summary, we find all its support sentences in the document that can cover its meaning. Note that the support sentences are likely to be more verbose, but we only consider whether the sentences cover the semantics of the facet regardless of their length. The reason is that we believe extractive summarization should focus on information coverage and once salient sentences are extracted, one can then compress them in an abstractive way BIBREF0, BIBREF1. Formally, we denote one document-summary pair as $\lbrace d, r\rbrace $, where $d = \lbrace d^j\rbrace _{j=1}^D$, $r = \lbrace r^j\rbrace _{j=1}^R$, and $D$, $R$ denote the number of sentences. We define one support group of facet $\mathcal {F}$ as a minimum set of sentences in the document that express the meaning of $\mathcal {F}$. For each $r^j$, we annotate a FAM $r^j \rightarrow \lbrace \lbrace d^{s_{j, 1}^k}\rbrace _{k=1}^{\textrm {K}_1}, \lbrace d^{s_{j, 2}^k}\rbrace _{k=1}^{\textrm {K}_2}, ..., \lbrace d^{s_{j, N}^k}\rbrace _{k=1}^{\textrm {K}_N}\rbrace $ in which each $\lbrace d^{s_{j, n}^k}\rbrace _{k=1}^{\textrm {K}_n}$ is a support group and $s_{j, n}^k$ is the index of the $k$-th support sentence in group $n$. One may regard the procedure as creating extractive labels, which is widely used in extractive summarization since only abstractive references are available in existing datasets. The major differences are that 1) We label all the support sentences instead of just one or a fixed number of sentences, i.e., we do not specify $\textrm {K}_n$. For example, we would put two sentences into one support group if they are complementary and only combining them can cover the facet. 2) We find multiple support groups ($N > 1$), as there could be more than one set of sentences that cover the same facet and extracting any one of them is acceptable. In contrast, there is no concept of support group in extractive labels as they inherently form one such group. We sampled 150 document-summary pairs from the test set of CNN/Daily Mail. 344 FAMs were created by three annotators with high agreement (pairwise Jaccard index 0.71) and further verified to reach consensus. We found that the facets can be divided into three categories based on their quality and degree of abstraction as follows. Random: The facet is quite random, either because the document itself is too hard to summarize (e.g., a report full of quotations) or the human editor was too subjective when writing the summary BIBREF2. Another possible reason is that the so-called "summaries" are in fact "story highlights", which makes it seem reasonable for them to contain details. We found that 41/150 (26%) samples have random facet(s), implying there are severe issues in the reference summaries of CNN/Daily Mail. Low Abstraction: The facet can be mapped to its support sentences. We further divide this category by the (rounded) average number of support sentences K of $N$ support groups ($\textrm {K}=\frac{\sum _{n=1}^N |\lbrace d^{s_{j, n}^k}\rbrace _{k=1}^{\textrm {K}_n}|}{N}$). As in Table TABREF1, most facets (93%) in the reference summaries are paraphrases or compression of one to two sentences in the document without much abstraction. High Abstraction: The facet cannot be mapped to its support sentences, which indicates that its writing requires a deep understanding of the document rather than reorganizing several sentences.
The proportion of this category (7%) also indicates how often extractive methods would not work (well) on CNN/Daily Mail. Surprisingly, we found it easier than previously believed to create the FAMs on CNN/Daily Mail, as it is uncommon ($\overline{N} = 1.56$) to detect multiple sentences with similar semantics (compared to multi-document summarization). In addition, most support groups only have one or two support sentences with large lexical overlap. ### Revisit of State-of-the-art Methods By utilizing the FAMs, we revisit extractive methods to see how well they perform on facet coverage. Specifically, we compare Lead-3, Refresh BIBREF3, FastRL(E) (E for extractive only) BIBREF0, UnifiedSum(E) BIBREF1, NeuSum BIBREF4, and BanditSum BIBREF5 using both ROUGE and FAMs. As these methods are facet-agnostic (i.e., their outputs are not organized by facets but flat extract sets), we consider a facet covered as long as one of its support groups is extracted and measure the Facet-Aware Recall ($\textbf {FAR} = \frac{\textrm {\#covered}}{R}$). For a fair comparison, each method extracts three sentences since extracting all would result in a perfect FAR. As shown in Table TABREF13, there is almost no discrimination among the last four methods under ROUGE-1 F1, and their rankings under ROUGE-1/2/L are quite different. In contrast, FAR shows that UnifiedSum(E) covers the most facets. Although FAR is supposed to be favored as FAMs are already manually labeled and tell exactly if one sentence should be extracted (assuming our annotations are in agreement), to further verify that FAR correlates with human preference, we rank UnifiedSum(E), NeuSum, and Lead-3 in Table TABREF15. The order of the 1st rank in the human evaluation coincides with FAR. FAR also has a higher Spearman's coefficient $\rho $ than ROUGE (0.457 vs. 0.44, n=30, threshold=0.362 at 95% significance). Another benefit of the FAMs is that one can employ the category breakdown for fine-grained analysis under any metrics of interest. Here we consider ROUGE and additionally evaluate several abstractive methods: Pointer-Generator (PG) BIBREF2, FastRL(E+A) (extractive+abstractive) BIBREF0, and UnifiedSum(E+A) BIBREF1. As depicted in Table TABREF16, not only do extractive methods fail on high abstraction samples, but there is also a huge performance gap between low and high abstraction samples for abstractive methods, which suggests that existing methods achieve decent performance mainly by extraction rather than abstraction. We also found that all the compared methods perform much worse on the documents with "random" summaries, implying that the randomness in the reference summaries might introduce noise to both model training and evaluation. Despite the fact that the sample size is relatively small, we observed consistent results when analyzing different subsets of the data. ### Analysis of Approximate Approaches to Mapping Generation Although the FAMs only need to be annotated once, we investigate whether such human efforts can be further reduced by evaluating approximate approaches that generate extractive labels. Approximate approaches typically transform an abstractive summary into extractive labels heuristically using ROUGE. Previously one could only estimate the quality of these labels by evaluating the extractive models trained using such labels, i.e., comparing the extracted and reference summaries (also approximately via ROUGE).
Now that the FAMs serve as ground-truth extractive labels, we can evaluate how accurately each approach performs. Since the approximate approaches do not have the notion of support group, we flatten all the support sentences in one FAM to a label set. Due to limited space, we leave the details of the approximate approaches (most of them are self-evident) to the Appendix. The comparison results are shown in Table TABREF17. On the bright side, approximate approaches perform relatively well (e.g., 90.6% of the sentences selected by BIBREF3 indeed contain salient information). This is explainable, as ROUGE is good at capturing lexical overlap and, as we have shown, there are many copy-and-paste reference summaries in CNN/Daily Mail. On the other hand, these approaches are not perfect, and the low recall suggests that simply mapping each facet to one support sentence would miss plenty of salient sentences, which could worsen the performance of extractive models trained on such labels. That said, how to find more than one support group for each facet or multiple support sentences in one support group automatically and accurately remains an open question. ### Conclusions and Future Work We presented promising results towards facet-aware evaluation for extractive summarization. In the future, we will conduct large-scale human annotations via crowdsourcing on the whole test set of CNN/Daily Mail. We will also investigate benchmark multi-document summarization datasets such as DUC BIBREF8 and TAC BIBREF9 to see if the findings coincide and how we can leverage the multiple references provided for each document set in those datasets. Table 1: In some cases, lexical overlap (finding the sentence in the document with the highest ROUGE score) could be misleading. Figure 1: Comparison of summarization metrics. Support sentences share the same color as their facets. Facet 1 is covered by the (extracted) document sentences 2 and 6. Table 2: Category breakdown of the Facet-Aware Mappings (FAMs) from reference summaries to documents. Table 3: Comparison of extractive methods using ROUGE F1 and Facet-Aware Recall (FAR). Table 4: Proportions of system ranking in human evaluation. Table 6: Performance of approximate approaches that generate extractive labels. All approaches find one support sentence for each facet except the first two. Table 5: ROUGE-1 F1 of various methods on random (R), low abstraction (L), high abstraction (H), and high quality (L + H) samples. Figure 2: Comparison of various extractive methods under ROUGE and FAMs (FAR and SAR). The rankings under ROUGE-1/2/L often contradict each other. Figure 3: Comparison of extractive methods under FAR and SAR reveals their capability of extracting salient and non-redundant sentences. Table 9: Full document, reference summary, and FAMs presented in Table 2.
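To make the two evaluation procedures above concrete, here is a minimal sketch of (a) Facet-Aware Recall over FAMs and (b) precision/recall of approximate extractive labels against the flattened FAM label set. The data structures (a FAM given as a list of facets, each facet as a list of support groups, each group as a set of document sentence indices) are illustrative assumptions for this sketch, not the authors' released code.

```python
# Illustrative sketch, not the authors' code. A FAM for one reference summary
# is a list of facets; each facet is a list of alternative support groups;
# each support group is a set of document sentence indices.

def far(fams, extracted):
    """Facet-Aware Recall: a facet counts as covered if ANY of its support
    groups is fully contained in the extracted set; FAR = #covered / R."""
    covered = sum(any(group <= extracted for group in facet) for facet in fams)
    return covered / len(fams)

def label_set_prf(fams, predicted_labels):
    """Precision/recall of approximate extractive labels against the FAM,
    after flattening all support sentences of all facets into one label set."""
    gold = set().union(*(group for facet in fams for group in facet))
    tp = len(predicted_labels & gold)
    precision = tp / len(predicted_labels) if predicted_labels else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Toy example: facet 1 is covered by sentence 2 OR sentence 6;
# facet 2 needs sentences 3 and 4 together.
fams = [[{2}, {6}], [{3, 4}]]
print(far(fams, extracted={2, 3, 5}))                 # 0.5 -> only facet 1 covered
print(label_set_prf(fams, predicted_labels={2, 5}))   # (0.5, 0.25)
```

The subset test mirrors the definition above: extracting any one complete support group is enough to cover a facet, while the flattened label set ignores the group structure, as the approximate approaches do.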
manually labeled and tell exactly if one sentence should be extracted (assuming our annotations are in agreement), to further verify that FAR correlates with human preference,
What was the total bilirubin level in Mr. Wells' lab results upon discharge? Choose the correct answer from the following options: A. 0.5 mg/dL B. 1.5 mg/dL C. 1.9 mg/dL D. 2.5 mg/dL E. 5.0 mg/dL
### Patient Report 0 **Dear colleague, ** We report to you on Mr. Paul Wells, born on 04/02/1953, who was in our inpatient treatment from 07/26/2019 to 07/28/2019. **Diagnoses:** Suspected multifocal HCC segment IV, VII/VIII, first diagnosed: 07/19. - COPD, current severity level Gold III. - Pulmonary emphysema, respiratory partial insufficiency with home oxygen. - Postnasal drip syndrome **Current Presentation:** The elective presentation of Mr. Wells was made in accordance with the decision of the interdisciplinary liver board of 07/20/2019 for further diagnostics in the case of multiple malignoma-specific hepatic space demands. **Medical History: **In brief, Mr. Wells presented to the Medical Center St. Luke's with persistent right-sided pain in the upper abdomen. Computer tomography showed multiple intrahepatic masses of the right liver lobe (SIV, SVII/VIII). For diagnostic clarification of the malignoma-specific findings, the patient was presented to our liver outpatient clinic. The tumor marker diagnostics have not been conclusive. Analogous to the recommendation of the liver board, a liver puncture, staging, and endoscopic exclusion of a primary in the gastrointestinal tract should be initiated. **Physical Examination:** Physical examination reveals an alert patient. - Oral mucosa: Moist and rosy, no plaques typical of thrush, no plaques typical of herpes. - Heart: Heart sounds pure, rhythmic, normofrequent. - Lungs: Laterally attenuated breath sound with wheezing. - Abdomen: Abdomen soft, regular bowel sounds over all 4 quadrants, no defensive tension, no resistances, diffuse pressure pain over the upper abdomen. No renal tap pain, no spinal tap pain. Spleen palpable under the costal arch. - Extremities: No edema, freely movable - Neurology: GCS 15, pupils directly and indirectly reactive to light, no flapping tremor. No meningism. **Therapy and Progression:** Mr. Wells presented an age-appropriate general status and cardiopulmonary stability. Anamnestically, there was no evidence of an acute infection. Skin or scleral icterus and pruritus were denied. No B symptoms. No stool changes, no dysuria. There was regular alcohol consumption of about 3-4 beers a day, as well as nicotine abuse (120 PY). The general performance in COPD Gold grade III was strongly limited, with a walking distance reduced to 100m due to dyspnea. He had a home oxygen demand of 4L/min O2 during the day, up to 6L/min under load. At night, 2L/min O2. The last colonoscopy was performed 4 years ago, with no anamnestic abnormalities. No known allergies. Family history is positive for colorectal cancer (mother). Clinical examination revealed the typical auscultation findings of advanced COPD with attenuated breath sounds bilaterally, with hyperinflation and clear wheezing. Otherwise, there were no significant findings. Laboratory chemistry did not reveal any higher-grade abnormalities. On the day of admission, after detailed clarification, the patient was able to undergo a complication-free sonographically guided puncture of the liver mass in SIV. Thereby, two punch cylinders were preserved for histopathological processing. Histologically, the findings presented as infiltrates of a macrotrabecular and pseudoglandular growing, well-differentiated hepatocellular carcinoma (G1). The postinterventional course was unremarkable. In particular, no clinical or laboratory signs of bleeding were found. CT staging showed the known hypervascularized hepatic space demands in both lobes of the liver, constant in size over the short term, with no further malignancy-suspicious thoracoabdominal tumor findings and no evidence of metastases.
MR imaging also revealed the large, partly exophytic, partly centrally hemorrhaged HCC lesions in S3/4 and S7/8. In addition, complete infiltration of the left lobe of the liver was evident, with smaller satellites and macroinvasion of the left portal vein branch. There was mild cholestasis of the left biliary system. Gastroscopy and colonoscopy were also performed. They revealed reflux esophagitis, sigmoid diverticulosis, and multiple colonic diverticula; a 4 mm polyp was removed from the sigmoid colon and a hemoclip was applied to prevent bleeding. Histologically, no adenoma was found. An appointment to discuss the findings in our HCC outpatient clinic has been arranged. We recommend further therapy preparation and the performance of an echocardiography. We were able to discharge Mr. Wells on 7/28/19. **Addition:** **Ultrasound on 07/26/2019 10:15 AM:** - Indication: Targeted liver puncture for suspected metastatic liver malignancy - Organ puncture: Quick: 114%, PTT: 28 s, and platelets: 475 G/L. A valid declaration of consent is available. According to the patient, he does not receive antiplatelet drugs. - In segment IV, an approximately 8.3 x 6 cm echo-depleted mass with central cystic fusion is accessible in the dorsal position for a sonographically guided puncture at 6.5 cm puncture depth. After extensive skin disinfection, local anesthesia with 10 mL Mecaine 1% and puncture incision with a scalpel. Repeated puncture with 18 G Magnum needles is performed. Two approximately 1 cm fragile whitish cylinders obtained for histologic examination. Band-aid dressing. - **Assessment:** Hepatic space demand **MRI of the liver plain + contrast agent from 07/26/2019 1:15 PM:** **Technique**: Coronary and axial T2 weighted sequences, axial diffusion-weighted EPI sequence with ADC map (b: 0, 50, 300 and 600 s/mm²), axial dynamic T1 weighted sequences with Dixon fat suppression and (liver-specific) contrast agent (Dotagraf/Primovist); slice thickness: 4 mm. Premedication with 2 mL Buscopan. **Liver**: Centrally hemorrhagic masses observed in liver segments 4, 7, and 8 demonstrate T2 hyperintensity, marked diffusion restriction, arterial phase enhancement, and venous phase washout. These characteristics are congruent with the histopathological diagnosis of hepatocellular carcinoma. The largest lesion in segment 4 exhibits pronounced exophytic growth but no evidence of organ invasion. Notably, branches of the mammary arteries penetrate directly into the tumor. Diffusion-weighted imaging further reveals disseminated foci throughout the entire left hepatic lobe. Disruption of the peripheral left portal vein branch indicative of macrovascular invasion, accompanied by peripheral cholestasis in the left biliary system. **Biliary Tract:** Bile ducts are emphasized on both left and right sides, with no evidence of mechanical obstruction in drainage. The common hepatic duct remains non-dilated. **Pancreas and Spleen:** Both organs exhibit no abnormalities. **Kidneys:** Normal signal characteristics observed. **Bone Marrow:** Signal behavior is within normal limits. Assessment: Radiological features highly suggestive of hepatocellular carcinoma in liver segments 4, 7, and 8, with evidence of macrovascular invasion and peripheral cholestasis in the left biliary system. No signs of organ invasion or biliary obstruction.
Pancreas, spleen, kidneys, and bone marrow appear unremarkable. **Assessment:** Large liver lesions, some exophytic and some centrally hemorrhagic, are observed in segments 3/4 and 7/8. In addition, the left lobe of the liver is completely involved with smaller satellite lesions and macroinvasion of the left portal branch. Mild cholestasis of the left biliary system is noted. Dilated bile ducts are also found on the right side with no apparent mechanical obstruction to outflow. **CT Chest/Abdomen/Pelvis with contrast agent from 07/27/2019 2:00 PM:** **Clinical Indication:** Evaluation of an unclear liver lesion (approximately 9 cm) in a patient with severe COPD. No prior liver-related medical history. **Question:** Are there any suspicious lesions in the liver? **Pre-recordings:** Previous external CT abdomen dated 09/13/2021. **Findings:** **Technique:** CT imaging involved a multi-line spiral CT through the chest, abdomen, and pelvis in the venous contrast phase. Oral contrast agent with Gastrolux 1:33 in water was administered. Thin-layer reconstructions and coronary and sagittal secondary reconstructions were performed. **Chest:** No axillary or mediastinal lymphadenopathy is observed. There is marked coronary sclerosis, as well as calcification of the aortic and mitral valves. Nonspecific nodules smaller than 2 mm are noted in the posterolateral lower lobe on the right side and lateral middle lobe. No pneumonic infiltrates are observed. There is reduced aeration with presumed additional scarring changes at the base of the lung bilaterally, along with centrilobular emphysema. **Abdomen:** Known exophytic liver lesions are confirmed, with involvement in segment III extending to the subhepatic region (0.1 cm extension) and a 6 cm lesion in segment VIII. Further spotty hypervascularized lesions are observed throughout the left lobe of the liver. No pathological dilatation of intra- or extrahepatic bile ducts is seen, and there is no evidence of portal vein thrombosis. There are no pathologically enlarged lymph nodes at the hepatic portal, retroperitoneal, or inguinal regions. No ascites or pneumoperitoneum is noted. There is no pancreatic duct congestion, and the spleen is not enlarged. Additionally, there is a Bosniak 1 left renal cyst measuring 3.6 cm. Pronounced sigmoid diverticulosis is observed, with no evidence of other masses in the gastrointestinal tract. Skeletal imaging reveals no malignancy-specific osteodestructions but shows ventral pontifying spondylophytes of the thoracic spine with no fractures. **Assessment:** Short-term size-constant known hypervascularized hepatic space lesions are present in both lobes of the liver. No other malignancy-susceptible thoracoabdominal tumor evidence is found, and there are no metastasis-specific lymph nodes. **Gastroscopy from 07/28/2019** **Findings:** **Esophagus:** Unobstructed intubation of the esophageal orifice under visualization. Mucosa appears inconspicuous, with the Z-line at 37 cm and measuring less than 5 mm. Small mucosal lesions are observed but do not straddle mucosal folds. **Stomach:** The gastric lumen is completely distended under air insufflation. There are streaky changes in the antrum, while the fundus and cardia appear regular on inversion. The pylorus is inconspicuous and passable. **Duodenum:** Good development of the bulbus duodeni is noted, with good insight into the pars descendens duodeni. The mucosa appears overall inconspicuous. 
**Assessment:** Findings suggest reflux esophagitis (Los Angeles Classification Grade A) and antrum gastritis. **Colonoscopy from 07/28/2019** **Findings:** **Colon:** Some residual fluid contamination is noted in the sigmoid (Boston Bowel Preparation Scale \[BBPS\] 8). There is pronounced sigmoid diverticulosis, along with multiple colonic diverticula. A 4mm polyp in the lower sigma (Paris IIa, NICE 1) is observed and ablated with a cold snare, with hemoclip application for bleeding prophylaxis. Other mucosal findings appear inconspicuous, with normal vascular markings. There is no indication of inflammatory or malignant processes. **Maximum Insight:** Terminal ileum. **Anus:** Inspection of the anal region reveals no pathological findings. Palpation is inconspicuous, and the mucosa is smooth and displaceable, with no resistance and no blood on the glove. **Assessment:** Polypectomy was performed for sigmoid diverticulosis and a colonic diverticulum, with histology revealing minimally hyperplastic colorectal mucosa and no evidence of malignancy. **Pathology from 08/27/2019** **Clinical Information/Question:** **Macroscopy:** Unclear liver tumor: numerous tissue samples up to a maximum of 0.7 cm in size. Complete embedding. Processing: One tissue block processed and stained with Hematoxylin and Eosin (H&E), Gomori\'s trichrome, Iron stain, Diastase Periodic Acid-Schiff (D-PAS), and Van Gieson stain. **Microscopic Findings:** - Liver architecture is presented in fragmented liver core biopsies with observable lobular structures and two included portal fields. - Hepatic trabeculae are notably wider than the typical 2-3 cell width, featuring the formation of druse-like luminal structures. - Sinusoidal dilatation is markedly observed. - Hepatocytes show mildly enlarged nuclei with minimal cytologic atypia and isolated mitotic figures. - Gomori staining reveals a notable, partial loss of the fine reticulin fiber network. - Adjacent areas show fibrosed liver parenchyma containing hemosiderin pigmentation. - No significant evidence of parenchymal fatty degeneration is observed. **Assessment**: Histologic features indicative of marked sinusoidal dilatation, trabecular widening, and partial loss of reticulin network, alongside minimally atypical hepatocytes and fibrosed parenchyma with hemosiderin pigment. No significant hepatic fat degeneration noted. ### Patient Report 1 **Dear colleague, ** We would like to report on Paul Wells, born on 04/02/1953, who was under our outpatient treatment on 08/24/2019. **Diagnoses:** - Multifocal HCC (Hepatocellular Carcinoma) involving segments IV, VII/VIII, with portal vein invasion, classified as BCLC C, diagnosed in July 2019. - Extensive HCC lesions, some exophytic and others centrally hemorrhagic, in segments S3/4 and S7/8, complete involvement of the left liver lobe with smaller satellite lesions, and macrovascular invasion of the left portal vein. - Histology from 07/27/2019: A well-differentiated hepatocellular carcinoma (G1) with a macrotrabecular and pseudoglandular growth pattern. - Decision from the Liver Tumor Board on 08/18/2019: Recommending systemic therapy. - Initiation of Atezolizumab/Bevacizumab on 08/24/2019 - Liver fibrosis: Elevated alcohol consumption (3-4 beers/day). **Other Diagnoses:** - COPD with a current severity level of Gold III. - Pulmonary emphysema. - Respiratory partial insufficiency requiring home oxygen therapy. - Postnasal Drip Syndrome. - History of nicotine use (120 pack-years). - Hypertension (high blood pressure). 
**Medical History:** Mr. Wells presented with persistent right upper abdominal pain and was initially treated at St. Luke\'s Medical Center. CT scans revealed multiple intrahepatic lesions in the right liver lobe (SIV, SVII/VIII). Short-term follow-up CT staging revealed a known, size-stable, hypervascularized hepatic lesion in both lobes of the liver, with no evidence of other thoracoabdominal malignancies or suspicious lymph nodes. MRI also confirmed the presence of large HCC lesions, some exophytic and others centrally hemorrhagic, in segments S3/4 and S7/8, along with complete infiltration of the left liver lobe with smaller satellite lesions and macroinvasion of the left portal vein. There was mild cholestasis in the left biliary system. **Current Recommendations: ** - Liver function remains good based on laboratory tests. - Mr. Wells has been extensively informed about systemic therapy options with Atezolizumab/Bevacizumab and the possibility of alternative therapy with a tyrosine kinase inhibitor. - The decision has been made to initiate standard first-line therapy with Atezolizumab/Bevacizumab. Detailed information regarding potential side effects has been provided, with particular emphasis on the need for immediate medical evaluation in case of signs of gastrointestinal bleeding (blood in stool, black tarry stool, or vomiting blood) or worsening pulmonary symptoms. - The patient has been strongly advised to abstain from alcohol completely. - A follow-up evaluation through liver MRI and CT has been scheduled for January 4, 2020, at our HCC (Hepatocellular Carcinoma) clinic. The exact appointment time will be communicated to the patient separately. - We are available for any questions or concerns. - In case of persistent or worsening symptoms, we recommend an immediate follow-up appointment. ### Patient Report 2 **Dear colleague, ** We would like to provide an update regarding Mr. Paul Wells, born on 04/02/1953, who was under our inpatient care from 08/13/2020 to 08/14/2020. **Medical History:** We assume familiarity with Mr. Wells\'s comprehensive medical history as described in the previous referral letter. At the time of admission, he reported significantly reduced physical performance due to his known severe COPD. Following the consensus of the Liver Board, we admitted Mr. Wells for a SIRT simulation. **Current Presentation:** Mr. Wells is a 66-year-old patient with normal consciousness and reduced general condition. He is largely compensated on 3 liters of oxygen per minute. His abdomen is soft with regular peristalsis. A palpable tumor mass in the right upper abdomen is noted. **DSA Coeliac-Mesenteric on 08/13/2020:** - Uncomplicated SIRT simulation. - Catheter position 1: Right hepatic artery. - Catheter position 2: Left hepatic artery. - Catheter position 3: Liver segment arteries 4a/4b. - Uncomplicated and technically successful embolization of parasitic tumor supply from the inferior and superior epigastric arteries. **Perfusion Scintigraphy of the Liver and Lungs, including SPECT/CT on 08/13/2020:** - The liver/lung shunt volume is 9.4%. - There is intense radioactivity accumulation in multiple lesions in both the right and left liver lobes. **Therapy and Progression:** On 08/13/2020, we performed a DSA coeliac-mesenteric angiography on Mr. Wells, administering a total of approximately 159 MBq Tc99m-MAA into the liver\'s arterial circulation (simulation). 
This procedure revealed that a significant portion of radioactivity would reach the lung parenchyma during therapy, posing a risk of worsening his already compromised lung function. In view of these comorbidities, SIRT was not considered a viable treatment option. Therefore, an interdisciplinary decision was made during the conference to recommend systemic therapy. With an uneventful course, we discharged Mr. Wells in stable general condition on 08/14/2020. ### Patient Report 3 **Dear colleague, ** We are reporting on Paul Wells, born on 04/02/1953, who presented to our interdisciplinary clinic for Hepato- and Cholangiocellular Tumors on 10/24/2020. **Diagnoses:** - Multifocal HCC Segment with portal vein invasion, BCLC C, first diagnosed 07/19 - Large, partly exophytic, partly centrally hemorrhagic HCC lesions in S3/4 and S7/8, complete infiltration of the left lateral lobe with smaller satellites, macrovascular invasion of the left portal vein. - Histology on 07/27/2019: Macrotrabecular and pseudoglandular growth of well-differentiated hepatocellular carcinoma (G1). - Histology from 07/27/2019: A well-differentiated hepatocellular carcinoma (G1) with a macrotrabecular and pseudoglandular growth pattern. - Decision from the Liver Tumor Board on 08/18/2019: Recommending systemic therapy. - Initiation of Atezolizumab/Bevacizumab on 08/24/2019. - Liver fibrosis: Elevated alcohol consumption (3-4 beers/day). - CT in 01/2020: Very good tumor response. - Re-administration of Atezolizumab/Bevacizumab on 01/25/2022 and 02/16/2022, followed by a treatment pause due to limited tolerance. - CT from 02/2020 to 08/2020: Continuously regressing tumor findings. - Liver fibrosis: Increased C2 consumption (3-4 beers/day). **Other Diagnoses:** - Suspected Polyneuropathy or Restless Legs Syndrome - COPD, current severity Gold III. - Pulmonary emphysema - Respiratory partial insufficiency with home oxygen - Postnasal-Drip Syndrome - History of nicotine abuse (120 py) - Transient worsening of lung function with steroid requirement after Atezolizumab/Bevacizumab administrations - History of severe pneumonia (Medical Center St. Luke's) in 10/2019 - Pneumogenic sepsis with detection of Streptococcus pneumoniae - Arterial hypertension - Atrial fibrillation - Treatment with Apixaban - Reflux esophagitis Grade A (Esophagogastroduodenoscopy in 08/2019). **Current Presentation**: Mr. Wells presented to discuss follow-up after systemic therapy with Atezolizumab/Bevacizumab due to his impaired general condition. **Medical History:** For detailed medical history, please refer to the previous medical reports. In summary, Mr. Wells presented in 07/2019 with persistent right upper abdominal pain. A CT scan showed multiple intrahepatic lesions in the right liver lobe (SIV, SVII/VIII). MR imaging also revealed large, partly exophytic, partly centrally hemorrhagic HCC lesions in S3/4 and S7/8. There was complete infiltration of the left liver lobe with smaller satellites and macroinvasion of the left portal vein branch. Histology confirmed a well-differentiated hepatocellular carcinoma (G1). There is no known underlying liver disease, but peritumoral liver fibrosis was observed histologically. Mr. Wells reported increased alcohol consumption of 3-4 beers per day. Due to comorbidities and a large tumor with a relatively high liver-lung shunt, SIRT simulation was initially attempted but found to be an unsuitable treatment option. Therefore, our interdisciplinary liver tumor board recommended systemic therapy. 
After comprehensive counseling, treatment with Atezolizumab/Bevacizumab commenced on 08/24/2019. The therapy had to be paused after a single administration due to a substantial increase in transaminases (GPT 164 U/L, GOT 151 U/L), suspected to be associated with immunotherapy-induced hepatitis. With only minimal improvement in transaminases, Prednisolone therapy was initiated on and tapered successfully after significant transaminase regression. However, before the next planned administration, the patient experienced severe pneumonic sepsis, requiring hospitalization on 10/2019. Following discharge, there was a recurrent infection requiring inpatient antibiotic therapy. Staging examinations in 01/2020 showed a very good tumor response. Subsequently, Atezolizumab/Bevacizumab was re-administered on 01/23/2020 and 02/14/2020. However, in the following days, the patient experienced significant side effects, including oral burning, appetite and weight loss, low blood pressure, and worsening pulmonary status. Steroid treatment improved the pulmonary situation, but due to poor tolerance, therapy was paused after 02/14/2020. Currently, Mr. Wells reports a satisfactory general condition, although his pulmonary function remains limited but stable. **Summary:** Laboratory results from external testing on 01/02/2020 indicate excellent liver function, with transaminases within normal range. The latest CT examination shows continued tumor regression. However, MRI quality is limited due to the patient\'s inability to hold their breath adequately. Given the excellent tumor response and previous significant side effects, it was decided to continue the treatment pause until the next tumor staging. **Current Recommendations:** A follow-up imaging appointment has been scheduled for four months from now. We kindly request you send the latest CT images (Chest/Abdomen/Pelvis, including dynamic liver CT) and current blood values to our HCC clinic. Due to limited assessability, another MRI is not advisable. We remain at your disposal for any further inquiries. In case of persistent or worsened symptoms, we recommend prompt reevaluation. 
**Medication upon discharge:**

  **Medication**                        **Dosage**   **Frequency**
  ------------------------------------- ------------ -------------------------
  Ipratropium/Fenoterol (Combivent)     As needed    As needed
  Beclomethasone/Formoterol (Fostair)   6+200 mcg    2-0-2
  Tiotropium (Spiriva)                  2.5 mcg      2-0-0
  Prednisolone (Prelone)                5 mg         2-0-0 (or as necessary)
  Pantoprazole (Protonix)               40 mg        1-0-0
  Fenoterol                             0.1 mg       As needed
  Apixaban (Eliquis)                    5 mg         On hold
  Olmesartan (Benicar)                  20 mg        1-0-0

Lab results upon Discharge:

  **Parameter**                 **Results**   **Reference Range**
  ----------------------------- ------------- ---------------------
  Sodium (Na)                   144 mEq/L     134-145 mEq/L
  Potassium (K)                 3.7 mEq/L     3.4-5.2 mEq/L
  Calcium (Ca)                  2.37 mEq/L    2.15-2.65 mEq/L
  Chloride (Cl)                 106 mEq/L     95-112 mEq/L
  Inorganic Phosphate (PO4)     0.93 mEq/L    0.8-1.5 mEq/L
  Transferrin Saturation        20 %          16-45 %
  Magnesium                     0.78 mEq/L    0.75-1.06 mEq/L
  Creatinine                    1.88 mg/dL    <1.2 mg/dL
  GFR                           36 mL/min     <90 mL/min
  BUN                           60 mg/dL      14-46 mg/dL
  Uric Acid                     4.6 mg/dL     3.0-6.9 mg/dL
  Total Bilirubin               0.5 mg/dL     <1 mg/dL
  Albumin                       4.0 g/dL      3.6-5.0 g/dL
  Total Protein                 6.8 g/dL      6.5-8.7 g/dL
  CRP                           0.19 mg/dL    <0.5 mg/dL
  Transferrin                   269 mg/dL     200-360 mg/dL
  Ferritin                      110 mcg/L     30-300 mcg/L
  ALT                           339 U/L       <45 U/L
  AST                           424 U/L       <50 U/L
  GGT                           904 U/L       <55 U/L
  Lipase                        61 U/L        <70 U/L
  Thyroid-Stimulating Hormone   0.54 mIU/L    0.27-4.20 mIU/L
  Hemoglobin                    14.5 g/dL     14.0-17.5 g/dL
  Hematocrit                    43 %          40-52 %
  Red Blood Cells               4.60 M/µL     4.6-6.2 M/µL
  White Blood Cells             8.78 K/µL     4.5-11.0 K/µL
  Platelets                     205 K/µL      150-400 K/µL
  MCV                           94 fL         81-100 fL
  MCH                           31.5 pg       27-34 pg
  MCHC                          33.5 g/dL     32.4-35.0 g/dL
  MPV                           11 fL         7-12 fL
  RDW                           14.8 %        11.9-14.5 %
  Neutrophils                   3.72 K/µL     1.8-7.7 K/µL
  Lymphocytes                   2.37 K/µL     1.4-3.7 K/µL
  Monocytes                     0.93 K/µL     0.2-1.0 K/µL
  Eosinophils                   1.67 K/µL     <0.7 K/µL
  Basophils                     0.09 K/µL     0.01-0.10 K/µL
  Erythroblasts                 Negative      <0.01 K/µL
  Antithrombin Activity         85 %          80-120 %

### Patient Report 4 **Dear colleague, ** We are reporting an update of the medical condition of Mr. Paul Wells born on 04/02/1953, who presented for a follow up in our outpatient clinic on 11/20/2020. **Diagnoses:** - Multifocal HCC Segment with portal vein invasion, BCLC C, first diagnosed 07/19 - Large, partly exophytic, partly centrally hemorrhagic HCC lesions in S3/4 and S7/8, complete infiltration of the left lateral lobe with smaller satellites, macrovascular invasion of the left portal vein. - Histology on 07/27/2019: Macrotrabecular and pseudoglandular growth of well-differentiated hepatocellular carcinoma (G1). - SIRT simulation: No feasible SIRT. - Liver Tumor Board decision on 08/18/2019: Systemic therapy. - Atezolizumab/Bevacizumab since 10/26/2021, with a pause starting on 09/17/2019, due to transaminase elevation. - CT in 01/2020: Very good tumor response. - Re-administration of Atezolizumab/Bevacizumab - CT from 02/2020 to 08/2020: Continuously regressing tumor findings. - Liver fibrosis: Increased alcohol consumption (3-4 beers/day). **Other diagnoses:** - COPD, current severity Gold III. - Pulmonary emphysema. - Respiratory partial insufficiency with home oxygen. - Postnasal-Drip Syndrome. - History of nicotine abuse (120 py). - Transient worsening of lung function with steroid requirement after Atezolizumab/Bevacizumab administrations - Pneumogenic sepsis with detection of Streptococcus pneumonia - Arterial hypertension. - Atrial fibrillation - Treatment with Apixaban. - Reflux esophagitis LA Grade A (Esophagogastroduodenoscopy in 08/2019). **Medical History:** Mr.
Wells initially presented with right upper abdominal pain, which led to the discovery of multiple intrahepatic masses in liver segments IV, VII/VIII. Subsequent investigations confirmed the diagnosis of HCC. He also suffers from chronic obstructive pulmonary disease (COPD), emphysema, and respiratory insufficiency requiring home oxygen therapy. Previous investigations and treatments were documented in detail in our previous medical records. **Physical Examination:** - General Appearance: Alert, cooperative, and oriented. - Vital Signs: Stable blood pressure, heart rate, respiratory rate, and temperature. Oxygen Saturation (SpO2): Within the normal range. - Respiratory System: Normal chest symmetry, no accessory muscle use. Clear breath sounds, no wheezing or crackles. Regular respiratory rate. - Cardiovascular System: Regular heart rate and rhythm, no murmurs. Strong radial and pedal pulses bilaterally. No lower extremity edema. - Gastrointestinal System: Soft, nontender abdomen. Bowel sounds present in all quadrants. Spleen palpable under the costal arch. - Neurological Examination: Alert and oriented. Cranial nerves, motor, sensory, reflexes, coordination and gait normal. No focal neurological deficits. - Skin and Mucous Membranes: Intact skin, no rashes or lesions. Moist oral mucosa without lesions. - Extremities: No edema. Full range of motion in all joints. Normal capillary refill. - Lymphatic System: - No palpable lymphadenopathy. **MRI Liver (plain + contrast agent) on 11/20/2020 09:01 AM.** - Imaging revealed stable findings in the liver. The previously identified HCC lesions in segments IV, VII/VIII, including their size and characteristics, remained largely unchanged. There was no evidence of new lesions or metastases. Detailed MRI imaging provided valuable insight into the nature of the lesions, their vascularity, and possible effects on adjacent structures. **CT Chest/Abdomen/Pelvis with contrast agent on 11/20/2020 12:45 PM.** - Thoracoabdominal CT scan showed the same results as the previous examination. Known space-occupying lesions in the liver remained stable, and there was no evidence of malignancy or metastasis elsewhere in the body. The examination also included a thorough evaluation of the thoracic and pelvic regions to rule out possible metastasis. **Gastroscopy on 11/20/2020 13:45 PM.** - Gastroscopy follow-up confirmed the previous diagnosis of reflux esophagitis (Los Angeles classification grade A) and antral gastritis. These findings were consistent with previous investigations. It is important to note that while these findings are unrelated to HCC, they contribute to Mr. Wells\' overall medical profile and require ongoing treatment. **Colonoscopy on 11/20/2020 15:15 PM.** - Colonoscopy showed that the sigmoid colon polyp, which had been removed during the previous examination, had not recurred. No new abnormalities or malignancies were detected in the gastrointestinal tract. This examination provides assurance that there is no concurrent colorectal malignancy complicating Mr. Wells\' medical condition. **Pulmonary Function Testing:** Mr. Wells\' COPD, emphysema, and respiratory insufficiency were evaluated in detail. Pulmonary function tests confirmed his current severity score of Gold III, indicating advanced COPD. Despite the chronic nature of his disease, there has been no significant deterioration since the last assessment. **Oxygen Therapy:** As previously documented, Mr. Wells requires home oxygen therapy. 
His oxygen requirements have been constant, with no significant increase in oxygen requirements during daily activities or at rest. This stability in his oxygen demand is encouraging and indicates effective management of his respiratory disease. **Overall Assessment:** Based on the results of recent follow-up, Mr. Paul Wells\' hepatocellular carcinoma (HCC) has not progressed significantly. The previously noted HCC lesions have remained stable in terms of size and characteristics. In addition, there is no evidence of malignancy elsewhere in his thoracoabdominal region. Mr. Wells\' COPD, emphysema, and respiratory insufficiency, which is being treated with home oxygen therapy, have also not changed significantly during this follow-up period. His cardiopulmonary condition remains well controlled, with no acute deterioration. Psychosocially, Mr. Wells continues to demonstrate resilience and actively participates in his care. His strong support system continues to contribute to his overall well-being. Additional monitoring and follow-up appointments have been scheduled to ensure continued management of Mr. Wells\' health. In addition, discussions continue regarding potential treatment options and interventions to provide him with the best possible care. **Current Recommendations:** In light of the stability observed in Mr. Wells\' HCC and overall medical condition, we recommend the following steps for his continued care: 1. Regular Follow-up: Maintain a schedule of regular follow-up appointments to monitor the status of the HCC, cardiopulmonary function, and other associated conditions. 2. Lifestyle-Modification ### Patient Report 5 **Dear colleague, ** We report to you about Mr. Paul Wells born on 04/02/1953 who received inpatient treatment from 02/04/2021 to 02/12/2021. **Diagnosis**: Community-Acquired Pneumonia (CAP) **Previous Diagnoses and Treatment:** - Multifocal HCC Segment with portal vein invasion, BCLC C, first diagnosed 07/19 - Large, partly exophytic, partly centrally hemorrhagic HCC lesions in S3/4 and S7/8, complete infiltration of the left lateral lobe with smaller satellites, macrovascular invasion of the left portal vein. - Histology on 07/27/2019: Macrotrabecular and pseudoglandular growth of well-differentiated hepatocellular carcinoma (G1). - SIRT simulation attempt on 08/13/2019: No feasible SIRT. - Liver Tumor Board decision on 08/18/2019: Systemic therapy. - Atezolizumab/Bevacizumab since 10/26/2021, with a pause starting on 09/17/2019, due to transaminase elevation (up to 4x ULN). - CT in 01/2020: Very good tumor response. - Re-administration of Atezolizumab/Bevacizumab on 01/25/2022 and 02/16/2022, followed by a treatment pause due to limited tolerance. - CT from 02/2020 to 08/2020: Continuously regressing tumor findings. - Liver fibrosis: Increased C2 consumption (3-4 beers/day). - Suspected PNP DD RLS (Restless Legs Syndrome). <!-- --> - COPD, current severity Gold III. - Pulmonary emphysema. - Respiratory partial insufficiency with home oxygen. - Postnasal-Drip Syndrome. - History of nicotine abuse (120 py). - Transient worsening of lung function with steroid requirement after Atezolizumab/Bevacizumab administrations - Pneumogenic sepsis with Streptococcus pneumoniae detection. - History of unclear infection vs. pneumonia in 10/2019-01/2020. - Arterial hypertension. - Atrial fibrillation - Treatment with Apixaban. - Reflux esophagitis LA Grade A (Esophagogastroduodenoscopy in 08/2019). 
**Medical History:** For detailed medical history, please refer to the previous medical reports. In summary, Mr. Wells presented in 07/2019 with persistent right upper abdominal pain. A CT scan showed multiple intrahepatic lesions in the right liver lobe (SIV, SVII/VIII). MR imaging also revealed large, partly exophytic, partly centrally hemorrhagic HCC lesions in S3/4 and S7/8. There was complete infiltration of the left liver lobe with smaller satellites and macroinvasion of the left portal vein branch. Histology confirmed a well-differentiated hepatocellular carcinoma (G1). There is no known underlying liver disease, but peritumoral liver fibrosis was observed histologically. Mr. Wells reported increased alcohol consumption of 3-4 beers per day. Due to comorbidities and a large tumor with a relatively high liver-lung shunt, SIRT simulation was initially attempted but found to be an unsuitable treatment option. Therefore, our interdisciplinary liver tumor board recommended systemic therapy. After comprehensive counseling, treatment with Atezolizumab/Bevacizumab commenced on 08/24/2019. Currently, Mr. Wells complains about progressively worsening respiratory symptoms, which included shortness of breath, productive cough with yellow-green sputum, pleuritic chest pain, fever, and chills, spanning a period of five days. **Physical Examination:** Temperature: 38.6°C, Blood Pressure: 140/80 mm Hg, Heart Rate: 110 beats per minute Respiratory Rate: 30 breaths per minute, Oxygen Saturation (SpO2): 88% on room air Breath Sounds: Auscultation revealed diminished breath sounds and coarse crackles, notably in the right lower lobe. The patient further reported pleuritic chest pain localized to the right lower chest. **Therapy and Progression:** During his hospitalization, Mr. Wells was in stable cardiopulmonary condition. We initiated an empiric antibiotic therapy with intravenous Ceftriaxone and Azithromycin to treat community-acquired pneumonia (CAP). Oxygen supplementation was provided to maintain adequate oxygen saturation levels, and pain management strategies were implemented to alleviate pleuritic chest pain. Additionally, pulmonary hygiene measures and chest physiotherapy were applied to facilitate sputum clearance. Frequent respiratory treatments with bronchodilators were administered to mitigate airway obstruction, and continuous monitoring of vital signs, oxygen saturation, and respiratory status was carried out. Throughout his hospital stay, Mr. Wells exhibited gradual clinical improvement, marked by several positive developments. These included the resolution of fever, improved oxygen saturation levels, and a follow-up chest X-ray demonstrating the resolution of the right lower lobe consolidation. Furthermore, antibiotic therapy was adjusted based on sputum culture results, which identified Streptococcus pneumoniae as the causative pathogen. Mr. Wells continued to receive supportive care and respiratory interventions. We were thus able to discharge Mr. Wells in a good general condition.
0.5 mg/dL
Why is Lane so child-like? A. All men are child-like. B. Lane was never given a proper education, only fighting instruction. C. Lane is controlled by the Cybrain. His own brain never had the chance to develop properly. D. Lane has been a Trooper since he was seven years old.
MUTINEER By ROBERT J. SHEA For every weapon there was a defense, but not against the deadliest weapon—man himself! Raging , Trooper Lane hovered three thousand feet above Tammany Square. The cool cybrain surgically implanted in him was working on the problem. But Lane had no more patience. They'd sweat, he thought, hating the chill air-currents that threw his hovering body this way and that. He glared down at the three towers bordering on the Square. He spat, and watched the little white speck fall, fall. Lock me up in barracks. All I wanted was a little time off. Did I fight in Chi for them? Damn right I did. Just a little time off, so I shouldn't blow my top. Now the lid's gone. He was going over all their heads. He'd bowled those city cops over like paper dolls, back at the Armory. The black dog was on Lane's back. Old Mayor himself was going to hear about it. Why not? Ain't old Mayor the CinC of the Newyork Troopers? The humming paragrav-paks embedded beneath his shoulder blades held him motionless above Newyork's three administrative towers. Tammany Hall. Mayor's Palace. Court House. Lane cursed his stupidity. He hadn't found out which one was which ahead of time. They keep Troopers in the Armory and teach them how to fight. They don't teach them about their own city, that they'll be fighting for. There's no time. From seven years old up, Troopers have too much to learn about fighting. The Mayor was behind one of those thousands of windows. Old cybrain, a gift from the Trooper surgeons, compliments of the city, would have to figure out which one. Blood churned in his veins, nerves shrieked with impatience. Lane waited for the electronic brain to come up with the answer. Then his head jerked up, to a distant buzz. There were cops coming. Two black paragrav-boats whirred along the translucent underside of Newyork's anti-missile force-shield, the Shell. Old cybrain better be fast. Damn fast! The cybrain jolted an impulse through his spine. Lane somersaulted. Cybrain had taken charge of his motor nerves. Lane's own mind was just along for the ride. His body snapped into a stiff dive position. He began to plummet down, picking up speed. His mailed hands glittered like arrowheads out in front. They pointed to a particular window in one of the towers. A predatory excitement rippled through him as he sailed down through the air. It was like going into battle again. A little red-white-and-green flag fluttered on a staff below the window. Whose flag? The city flag was orange and blue. He shrugged away the problem. Cybrain knew what it was doing. The little finger of his right hand vibrated in its metal sheath. A pale vibray leaped from the lensed fingertip. Breakthrough! The glasstic pane dissolved. Lane streamed through the window. The paragrav-paks cut off. Lane dropped lightly to the floor, inside the room, in battle-crouch. A 3V set was yammering. A girl screamed. Lane's hand shot out automatically. A finger vibrated. Out of the corner of his eye, Lane saw the girl fold to the floor. There was no one else in the room. Lane, still in a crouch, chewed his lip. The Mayor? His head swung around and he peered at the 3V set. He saw his own face. "Lashing police with his vibray," said the announcer, "Lane broke through the cordon surrounding Manhattan Armory. Two policemen were killed, four others seriously injured. Tammany Hall has warned that this man is extremely dangerous. Citizens are cautioned to keep clear of him. Lane is an insane killer. He is armed with the latest military weapons. 
A built-in electronic brain controls his reflexes—" "At ease with that jazz," said Lane, and a sheathed finger snapped out. There was a loud bang. The 3V screen dissolved into a puddle of glasstic. The Mayor. Lane strode to the window. The two police boats were hovering above the towers. Lane's mailed hand snapped open a pouch at his belt. He flipped a fist-sized cube to the floor. The force-bomb "exploded"—swelled or inflated, really, but with the speed of a blast. Lane glanced out the window. A section of the energy globe bellied out from above. It shaded the view from his window and re-entered the tower wall just below. Now the girl. He turned back to the room. "Wake up, outa-towner." He gave the blonde girl a light dose of the vibray to slap her awake. "Who are you?" she said, shakily. Lane grinned. "Trooper Lane, of the Newyork Special Troops, is all." He threw her a mock salute. "You from outa-town, girlie. I ain't seen a Newyork girl with yellow hair in years. Orange or green is the action. Whatcha doing in the Mayor's room?" The girl pushed herself to her feet. Built, Lane saw. She was pretty and clean-looking, very out-of-town. She held herself straight and her blue-violet eyes snapped at him. "What the devil do you think you're doing, soldier? I am a diplomat of the Grassroots Republic of Mars. This is an embassy, if you know what that means." "I don't," said Lane, unconcerned. "Well, you should have had brains enough to honor the flag outside this window. That's the Martian flag, soldier. If you've never heard of diplomatic immunity, you'll suffer for your ignorance." Her large, dark eyes narrowed. "Who sent you?" "My cybrain sent me." She went openmouthed. "You're Lane ." "I'm the guy they told you about on the 3V. Where's the Mayor? Ain't this his place?" "No. No, you're in the wrong room. The wrong building. That's the Mayor's suite over there." She pointed. "See where the balcony is? This is the Embassy suite. If you want the Mayor you'll have to go over there." "Whaddaya know," said Lane. "Cybrain didn't know, no more than me." The girl noticed the dark swell of the force-globe. "What's that out there?" "Force-screen. Nothing gets past, except maybe a full-size blaster-beam. Keeps cops out. Keeps you in. You anybody important?" "I told you, I'm an ambassador. From Mars. I'm on a diplomatic mission." "Yeah? Mars a big city?" She stared at him, violet eyes wide. "The planet Mars." "Planet? Oh, that Mars. Sure, I've heard of it—you gotta go by spaceship. What's your name?" "Gerri Kin. Look, Lane, holding me is no good. It'll just get you in worse trouble. What are you trying to do?" "I wanna see the Mayor. Me and my buddies, we just come back from fighting in Chi, Gerri. We won. They got a new Mayor out there in Chi. He takes orders from Newyork." Gerri Kin said, "That's what the force-domes did. The perfect defense. But also the road to the return to city-states. Anarchy." Lane said, "Yeah? Well, we done what they wanted us to do. We did the fighting for them. So we come back home to Newyork and they lock us up in the Armory. Won't pay us. Won't let us go nowhere. They had cops guarding us. City cops." Lane sneered. "I busted out. I wanna see the Mayor and find out why we can't have time off. I don't play games, Gerri. I go right to the top." Lane broke off. There was a hum outside the window. He whirled and stared out. The rounded black hulls of the two police paragrav-boats were nosing toward the force-screen. Lane could read the white numbers painted on their bows. 
A loudspeaker shouted into the room: "Come out of there, Lane, or we'll blast you out." "You can't," Lane called. "This girl from Mars is here." "I repeat, Lane—come out or we'll blast you out." Lane turned to the girl. "I thought you were important." She stood there with her hands together, calmly looking at him. "I am. But you are too, to them. Mars is millions of miles away, and you're right across the Square from the Mayor's suite." "Yeah, but—" Lane shook his head and turned back to the window. "All right, look! Move them boats away and I'll let this girl out!" "No deal, Lane. We're coming in." The police boats backed away slowly, then shot straight up, out of the line of vision. Lane looked down at the Square. Far below, the long, gleaming barrel of a blaster cannon caught the dim light filtering down through Newyork's Shell. The cannon trundled into the Square on its olive-drab, box-shaped caterpillar mounting and took up a position equidistant from the bases of the three towers. Now a rumble of many voices rose from below. Lane stared down to see a large crowd gathering in Tammany Square. Sound trucks were rolling to a stop around the edges of the crowd. The people were all looking up. Lane looked across the Square. The windows of the tower opposite, the ones he could see clearly, were crowded with faces. There were white dot faces on the balcony that Gerri Kin had pointed out as the Mayor's suite. The voice of a 3V newscaster rolled up from the Square, reechoing against the tower walls. "Lane is holding the Martian Ambassador, Gerri Kin, hostage. You can see the Martian tricolor behind his force-globe. Police are bringing up blaster cannon. Lane's defense is a globe of energy similar to the one which protects Newyork from aerial attack." Lane grinned back at Gerri Kin. "Whole town's down there." Then his grin faded. Nice-looking, nice-talking girl like this probably cared a lot more about dying than he did. Why the hell didn't they give him a chance to let her out? Maybe he could do it now. Cybrain said no. It said the second he dropped his force-screen, they'd blast this room to hell. Poor girl from Mars, she didn't have a chance. Gerri Kin put her hand to her forehead. "Why did you have to pick my room? Why did they send me to this crazy city? Private soldiers. Twenty million people living under a Shell like worms in a corpse. Earth is sick and it's going to kill me. What's going to happen?" Lane looked sadly at her. Only two kinds of girls ever went near a Trooper—the crazy ones and the ones the city paid. Why did he have to be so near getting killed when he met one he liked? Now that she was showing a little less fear and anger, she was talking straight to him. She was good, but she wasn't acting as if she was too good for him. "They'll start shooting pretty quick," said Lane. "I'm sorry about you." "I wish I could write a letter to my parents," she said. "What?" "Didn't you understand what I said?" "What's a letter?" "You don't know where Mars is. You don't know what a letter is. You probably can't even read and write!" Lane shrugged. He carried on the conversation disinterestedly, professionally relaxed before battle. "What's these things I can't do? They important?" "Yes. The more I see of this city and its people, the more important I realize they are. You know how to fight, don't you? I'll bet you're perfect with those weapons." "Listen. They been training me to fight since I was a little kid. Why shouldn't I be a great little fighter?" 
"Specialization," said the girl from Mars. "What?" "Specialization. Everyone I've met in this city is a specialist. SocioSpecs run the government. TechnoSpecs run the machinery. Troopers fight the wars. And ninety per cent of the people don't work at all because they're not trained to do anything." "The Fans," said Lane. "They got it soft. That's them down there, come to watch the fight." "You know why you were kept in the Armory, Lane? I heard them talking about it, at the dinner I went to last night." "Why?" "Because they're afraid of the Troopers. You men did too good a job out in Chi. You are the deadliest weapon that has ever been made. You. Single airborne infantrymen!" Lane said, "They told us in Trooper Academy that it's the men that win the wars." "Yes, but people had forgotten it until the SocioSpecs of Newyork came up with the Troopers. Before the Troopers, governments concentrated on the big weapons, the missiles, the bombs. And the cities, with the Shells, were safe from bombs. They learned to be self-sufficient under the Shells. They were so safe, so isolated, that national governments collapsed. But you Troopers wiped out that feeling of security, when you infiltrated Chi and conquered it." "We scared them, huh?" Gerri said, "You scared them so much that they were afraid to let you have a furlough in the city when you came back. Afraid you Troopers would realize that you could easily take over the city if you wanted to. You scared them so much that they'll let me be killed. They'll actually risk trouble with Mars just to kill you." "I'm sorry about you. I mean it, I like—" At that moment a titanic, ear-splitting explosion hurled him to the carpet, deafened and blinded him. He recovered and saw Gerri a few feet away, dazed, groping on hands and knees. Lane jumped to the window, looked quickly, sprang back. Cybrain pumped orders to his nervous system. "Blaster cannon," he said. "But just one. Gotcha, cybrain. I can beat that." He picked up the black box that generated his protective screen. Snapping it open with thumb-pressure, he turned a small dial. Then he waited. Again an enormous, brain-shattering concussion. Again Lane and Gerri were thrown to the floor. But this time there was a second explosion and a blinding flash from below. Lane laughed boyishly and ran to the window. "Look!" he called to Gerri. There was a huge gap in the crowd below. The pavement was blackened and shattered to rubble. In and around the open space sprawled dozens of tiny black figures, not moving. "Backfire," said Lane. "I set the screen to throw their blaster beam right back at them." "And they knew you might—and yet they let a crowd congregate!" Gerri reeled away from the window, sick. Lane said, "I can do that a couple times more, but it burns out the force-globe. Then I'm dead." He heard the 3V newscaster's amplified voice: "—approximately fifty killed. But Lane is through now. He has been able to outthink police with the help of his cybrain. Now police are feeding the problem to their giant analogue computer in the sub-basement of the Court House. The police analogue computer will be able to outthink Lane's cybrain, will predict Lane's moves in advance. Four more blaster cannon are coming down Broadway—" "Why don't they clear those people out of the Square?" Gerri cried. "What? Oh, the Fans—nobody clears them out." He paused. "I got one more chance to try." He raised a mailed glove to his mouth and pressed a small stud in the wrist. He said, "Trooper HQ, this is Lane." 
A voice spoke in his helmet. "Lane, this is Trooper HQ. We figured you'd call." "Get me Colonel Klett." Thirty seconds passed. Lane could hear the clank of caterpillar treads as the mobile blaster cannon rolled into Tammany Square. The voice of the commanding officer of the Troopers rasped into Lane's ear: "Meat-head! You broke out against my orders! Now look at you!" "I knew you didn't mean them orders, sir." "If you get out of there alive, I'll hang you for disobeying them!" "Yes, sir. Sir, there's a girl here—somebody important—from Mars. You know, the planet. Sir, she told me we could take over the city if we got loose. That right, sir?" There was a pause. "Your girl from Mars is right, Lane. But it's too late now. If we had moved first, captured the city government, we might have done it. But they're ready for us. They'd chop us down with blaster cannon." "Sir, I'm asking for help. I know you're on my side." "I am, Lane." The voice of Colonel Klett was lower. "I'd never admit it if you had a chance of getting out of there alive. You've had it, son. I'd only lose more men trying to rescue you. When they feed the data into that analogue computer, you're finished." "Yes, sir." "I'm sorry, Lane." "Yes, sir. Over and out." Lane pressed the stud on his gauntlet again. He turned to Gerri. "You're okay. I wish I could let you out. Old cybrain says I can't. Says if I drop the force-globe for a second, they'll fire into the room, and then we'll both be dead." Gerri stood with folded arms and looked at him. "Do what you have to do. As far as I can see, you're the only person in this city that has even a little bit of right on his side." Lane laughed. "Any of them purple-haired broads I know would be crazy scared. You're different." "When my grandparents landed on Mars, they found out that selfishness was a luxury. Martians can't afford it." Lane frowned with the effort of thinking. "You said I had a little right on my side. That's a good feeling. Nobody ever told me to feel that way about myself before. It'll be better to die knowing that." "I know," she said. The amplified voice from below said, "The police analogue computer is now hooked directly to the controls of the blaster cannon battery. It will outguess Lane's cybrain and check his moves ahead of time." Lane looked at Gerri. "How about giving me a kiss before they get us? Be nice if I kissed a girl like you just once in my life." She smiled and walked forward. "You deserve it, Lane." He kissed her and it filled him with longings for things he couldn't name. Then he stepped back and shook his head. "It ain't right you should get killed. If I take a dive out that window, they shoot at me, not in here." "And kill you all the sooner." "Better than getting burned up in this lousy little room. You also got right on your side. There's too many damn Troopers and not enough good persons like you. Old cybrain says stay here, but I don't guess I will. I'm gonna pay you back for that kiss." "But you're safe in here!" "Worry about yourself, not about me." Lane picked up the force-bomb and handed it to her. "When I say now, press this. Then take your hand off, real fast. It'll shut off the screen for a second." He stepped up on to the window ledge. Automatically, the cybrain cut in his paragrav-paks. "So long, outa-towner. Now! " He jumped. He was hurtling across the Square when the blaster cannons opened up. They weren't aimed at the window where the little red-white-and-green tricolor was flying. But they weren't aimed at Lane, either. 
They were shooting wild. Which way now? Looks like I got a chance. Old cybrain says fly right for the cannons. He saw the Mayor's balcony ahead. Go to hell, old cybrain. I'm doing all right by myself. I come to see the Mayor, and I'm gonna see him. Lane plunged forward. He heard the shouts of frightened men. He swooped over the balcony railing. A man was pointing a blaster pistol at him. There were five men on the balcony—emergency! Years of training and cybrain took over. Lane's hand shot out, fingers vibrating. As he dropped to the balcony floor in battle-crouch, the men slumped around him. He had seen the man with the blaster pistol before. It was the Mayor of Newyork. Lane stood for a moment in the midst of the sprawled men, the shrieks of the crowd floating up to him. Then he raised his glove to his lips. He made contact with Manhattan Armory. "Colonel Klett, sir. You said if we captured the city government we might have a chance. Well, I captured the city government. What do we do with it now?" Lane was uncomfortable in his dress uniform. First there had been a ceremony in Tammany Square inaugurating Newyork's new Military Protectorate, and honoring Trooper Lane. Now there was a formal dinner. Colonel Klett and Gerri Kin sat on either side of Lane. Klett said, "Call me an opportunist if you like, Miss Kin, my government will be stable, and Mars can negotiate with it." He was a lean, sharp-featured man with deep grooves in his face, and gray hair. Gerri shook her head. "Recognition for a new government takes time. I'm going back to Mars, and I think they'll send another ambassador next time. Nothing personal—I just don't like it here." Lane said, "I'm going to Mars, too." "Did she ask you to?" demanded Klett. Lane shook his head. "She's got too much class for me. But I like what she told me about Mars. It's healthy, like." Klett frowned. "If I thought there was a gram of talent involved in your capture of the Mayor, Lane, I'd never release you from duty. But I know better. You beat that analogue computer by sheer stupidity—by disregarding your cybrain." Lane said, "It wasn't so stupid if it worked." "That's what bothers me. It calls for a revision in our tactics. We've got a way of beating those big computers now, should anyone use them against us." "I just didn't want her to be hurt." "Exactly. The computer could outguess a machine, like your cybrain. But you introduced a totally unpredictable factor—human emotion. Which proves what I, as a military man, have always maintained—that the deadliest weapon in man's arsenal is still, and will always be, the individual soldier." "What you just said there, sir," said Lane. "That's why I'm leaving Newyork." "What do you mean?" asked Colonel Klett. "I'm tired of being a weapon, sir. I want to be a human being." END Work is the elimination of the traces of work. —Michelangelo Transcriber's Note: This etext was produced from If July 1959. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
B. Lane was never given a proper education, only fighting instruction.
what metrics were used for evaluation?
### Introduction With the development of digital media technology and popularity of Mobile Internet, online visual content has increased rapidly in recent couple of years. Subsequently, visual content analysis for retrieving BIBREF0 , BIBREF1 and understanding becomes a fundamental problem in the area of multimedia research, which has motivated world-wide researchers to develop advanced techniques. Most previous works, however, have focused on classification task, such as annotating an image BIBREF2 , BIBREF3 or video BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 with given fixed label sets. With some pioneering methods BIBREF8 , BIBREF9 tackling the challenge of describing images with natural language proposed, visual content understanding has attracted more and more attention. State-of-the-art techniques for image captioning have been surpassed by new advanced approaches in succession BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Recent researches BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 have been focusing on describing videos with more comprehensive sentences instead of simple keywords. Different from image, video is sequential data with temporal structure, which may pose significant challenge to video caption. Most of the existing works in video description employed max or mean pooling across video frames to obtain video-level representation, which failed to capture temporal knowledge. To address this problem, Yao et al. proposed to use 3-D Convolutional Neural Networks to explore local temporal information in video clips, where the most relevant temporal fragments were automatically chosen for generating natural language description with attention mechanism BIBREF17 . In BIBREF19 , Venugopanlan et al. implemented a Long-Short Term Memory (LSTM) network, a variant of Recurrent Neural Networks (RNNs), to model the global temporal structure in whole video snippet. However, these methods failed to exploit bidirectional global temporal structure, which could benefit from not only previous video frames, but also information in future frames. Also, existing video captioning schemes cannot adaptively learn dense video representation and generate sparse semantic sentences. In this work, we propose to construct a novel bidirectional LSTM (BiLSTM) network for video captioning. More specifically, we design a joint visual modelling to comprehensively explore bidirectional global temporal information in video data by integrating a forward LSTM pass, a backward LSTM pass, together with CNNs features. In order to enhance the subsequent sentence generation, the obtained visual representations are then fed into LSTM-based language model as initialization. We summarize the main contributions of this work as follows: (1) To our best knowledge, our approach is one of the first to utilize bidirectional recurrent neural networks for exploring bidirectional global temporal structure in video captioning; (2) We construct two sequential processing models for adaptive video representation learning and language description generation, respectively, rather than using the same LSTM for both video frames encoding and text decoding in BIBREF19 ; and (3) Extensive experiments on a real-world video corpus illustrate the superiority of our proposal as compared to state-of-the-arts. 
### The Proposed Approach In this section, we elaborate on the proposed video captioning framework, including an introduction of the overall flowchart (as illustrated in Figure FIGREF1 ), a brief review of the LSTM-based sequential model, the joint visual modelling with bidirectional LSTM and CNNs, as well as the sentence generation process. ### LSTM-based Sequential Model With the success in speech recognition and machine translation tasks, recurrent neural structures, especially LSTM and its variants, have dominated the sequence processing field. LSTM has been demonstrated to effectively address the vanishing or exploding gradient problem BIBREF20 during back-propagation through time (BPTT) BIBREF21 and to exploit temporal dependencies in very long temporal structures. LSTM incorporates several control gates and a constant memory cell, the details of which are as follows: $$i_t = \sigma (W_{xi} x_t + W_{hi} h_{t-1} + b_i), \quad f_t = \sigma (W_{xf} x_t + W_{hf} h_{t-1} + b_f), \quad o_t = \sigma (W_{xo} x_t + W_{ho} h_{t-1} + b_o),$$ $$g_t = \phi (W_{xg} x_t + W_{hg} h_{t-1} + b_g), \quad c_t = f_t \odot c_{t-1} + i_t \odot g_t, \quad h_t = o_t \odot \phi (c_t),$$ where the $W$ -like matrices and bias vectors $b$ are LSTM weight parameters, $\sigma $ and $\phi $ denote the sigmoid and hyperbolic tangent non-linear functions, respectively, and $\odot $ indicates the element-wise multiplication operation. Inspired by the success of LSTM, we devise an LSTM-based network to investigate the video temporal structure for video representation, and then initialize the language model with this video representation to generate the video description. ### Bidirectional Video Modelling Different from other video description approaches that represent video by pooling across frames BIBREF16 or by 3-D CNNs with local temporal structure BIBREF15 , we apply BiLSTM networks to exploit the bidirectional temporal structure of video clips. Convolutional Neural Networks (CNNs) have demonstrated overwhelming performance on image recognition, classification BIBREF2 and video content analysis BIBREF11 , BIBREF19 . Therefore, we extract the caffe BIBREF22 fc7 layer of each frame through the VGG-16 layers BIBREF23 caffemodel. Following BIBREF19 , BIBREF16 , we sample one frame from every ten frames in the video and extract the fc7 layer, the second fully-connected layer, to express the selected frames. This yields an $m$ -by-4096 feature matrix for a given video clip, where $m$ is the number of frames we sampled in the video. As in Figure FIGREF1 , we then implement two LSTMs, a forward pass and a backward pass, to encode the CNN features of the video frames, and then merge the output sequences at each time point with a learnt weight matrix. What is interesting is that at each time point in the bidirectional structure, we not only “see” the past frames, but also “peek” at the future frames. In other words, our bidirectional LSTM structure encodes the video by scanning the entire video sequence several times (the same as the number of time steps at the encoding stage), and each scan is relevant to its adjacent scans. To investigate the effect of reinforcing the original CNN features, we combine the merged hidden states of the BiLSTM structure and the fc7 representation time step-wise. We further employ another forward-pass LSTM network on the combined sequence to generate our video representation. In BIBREF24 , BIBREF25 , Wu et al. demonstrated that using the output of the last step can perform better than pooling across the outputs of all time steps in video classification tasks.
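To make the joint visual modelling above concrete, the following is a minimal PyTorch sketch (not the authors' released code) of the encoder just described: forward and backward LSTM passes over sampled fc7 frame features, a learnt merge of the two hidden sequences, concatenation with the original features, and a final LSTM whose last state serves as the video representation. All dimensions, module names, and the exact merge operation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMVideoEncoder(nn.Module):
    def __init__(self, feat_dim=4096, hidden_dim=512):
        super().__init__()
        self.fwd = nn.LSTM(feat_dim, hidden_dim, batch_first=True)    # forward pass over frames
        self.bwd = nn.LSTM(feat_dim, hidden_dim, batch_first=True)    # backward pass over frames
        self.merge = nn.Linear(2 * hidden_dim, hidden_dim)            # learnt merge of the two passes
        self.top = nn.LSTM(hidden_dim + feat_dim, hidden_dim,
                           batch_first=True)                          # final pass over [merged ; fc7]

    def forward(self, frames):                      # frames: (batch, m, 4096)
        h_f, _ = self.fwd(frames)
        h_b, _ = self.bwd(torch.flip(frames, dims=[1]))
        h_b = torch.flip(h_b, dims=[1])             # re-align backward outputs with time
        merged = self.merge(torch.cat([h_f, h_b], dim=-1))
        reinforced = torch.cat([merged, frames], dim=-1)
        _, (h_n, c_n) = self.top(reinforced)
        return h_n[-1], c_n[-1]                     # used to initialize the language model's states

frames = torch.randn(2, 28, 4096)                   # e.g. 28 sampled fc7 frame features per clip
h0, c0 = BiLSTMVideoEncoder()(frames)
```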
Similarly, we represent the entire video clip using the memory cell state and the output of the last time point, and feed them into the description generator as the initialization of its memory cell and hidden unit, respectively. ### Generating Video Description Existing video captioning approaches usually share a common part of the visual model and language model as the representation BIBREF19 , BIBREF15 , which may lead to severe information loss. Besides, they also input the same pooled visual vector of the whole video into every sentence processing unit, thereby ignoring temporal structure. Such methods may easily result in undesirable outputs due to the duplicate inputs at every time point of the new sequence BIBREF16 . To address these issues, we generate descriptions for video clips using a sequential model initialized with the visual representation. Inspired by the superior performance of probabilistic sequence generation machines, we generate each word recurrently at each time point. The log probability of a sentence $S = (w_1, \dots , w_N)$ given the video representation $V$ can then be expressed as below: $$\log P(S \mid V; \theta ) = \sum _{t=1}^{N} \log P(w_t \mid V, w_1, \dots , w_{t-1}; \theta ),$$ (Eq. 10) where $\theta $ denotes all parameters in the sentence generation model, $V$ is the representation of the given video, and $N$ indicates the number of words in the sentence. We identify the most likely sentence by maximizing the log likelihood in Eq. 10; our objective function can then be described as: $$\theta ^{*} = \mathop {\arg \max }_{\theta } \sum _{(S, V)} \log P(S \mid V; \theta ).$$ The optimizer updates $\theta $ with the gradient of this objective across the entire training process applying Stochastic Gradient Descent (SGD). During the training phase, the loss is back-propagated through time and each LSTM unit learns to derive an appropriate hidden representation $h_t$ from the input sequence. We then apply the Softmax function to get the probability distribution over the words in the entire vocabulary. At the beginning of sentence generation, as depicted in Figure FIGREF1 , an explicit starting token (<BOS>) is needed, and we terminate each sentence when the end-of-sentence token (<EOS>) is fed in. During the test phase, similar to BIBREF19 , our language model repeatedly takes the maximum-likelihood word emitted at time $t$ as the input at time $t+1$ , until the <EOS> token is emitted.
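As an illustration of the test-time decoding procedure just described, the sketch below greedily generates a caption from the video-initialized state, starting at <BOS> and stopping at <EOS>. The modules and vocabulary objects are assumed placeholders standing in for whatever the trained model provides, not the authors' implementation.

```python
import torch

def greedy_decode(cell, embed, proj, h, c, vocab, inv_vocab, max_len=40):
    """cell: torch.nn.LSTMCell; embed: torch.nn.Embedding;
    proj: torch.nn.Linear mapping the hidden state to |V| scores;
    (h, c): states initialized from the video representation."""
    word = vocab["<BOS>"]
    words = []
    for _ in range(max_len):
        h, c = cell(embed(torch.tensor([word])), (h, c))
        probs = torch.softmax(proj(h), dim=-1)      # distribution over the whole vocabulary
        word = int(probs.argmax(dim=-1))            # maximum-likelihood word at this time step
        if word == vocab["<EOS>"]:
            break
        words.append(inv_vocab[word])
    return " ".join(words)
```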
### Experimental Setup Description Processing: Some minimal preprocessing was applied to the descriptions in both the MSVD and COCO 2014 datasets. We first employ the word_tokenize operation in the NLTK toolbox to obtain individual words, and then convert all words to lower-case. All punctuation is removed, and each sentence is then wrapped with a starting <BOS> and an ending <EOS>. Finally, we combine the sets of words in MSVD with COCO 2014, and generate a vocabulary of 12,984 unique words. Each word input to our system is represented by a one-hot vector. Video Preprocessing: As in previous video description works BIBREF16 , BIBREF19 , BIBREF15 , we sample one video frame out of every ten to represent the given video, which yields 28.5 frames per video on average. We extract frame-wise caffe fc7 layer features using the VGG-16 layers model, then feed the sequential features into our video captioning system. We employ a bidirectional S2VT BIBREF19 and a joint bidirectional LSTM structure to investigate the performance of our bidirectional approach. For convenient comparison, we set the hidden-unit size of all LSTMs in our system to 512 as in BIBREF15 , BIBREF19 , except for the first video encoder in the unidirectional joint LSTM. During the training phase, we set the maximum number of LSTM time steps to 80 in all our models and use mini-batches of 16 video-sentence pairs. We note that over 99% of the descriptions in MSVD and COCO 2014 contain no more than 40 words, and in BIBREF19 , Venugopalan et al. pointed out that 94% of the YouTube training videos satisfy our maximum length limit. To ensure sufficient visual content, we adopt two ways to truncate the videos and sentences adaptively when the sum of the number of frames and words exceeds the limit. If the number of words is within 40, we arbitrarily truncate the frames to satisfy the maximum length. When the length of the sentence is more than 40, we discard the words beyond that length and take video frames up to a maximum of 40. Bidirectional S2VT: Similar to BIBREF19 , we implement several S2VT-based models: S2VT, bidirectional S2VT and reinforced S2VT with a bidirectional LSTM video encoder. We conduct the experiment on S2VT using our video features and LSTM structure instead of the end-to-end model in BIBREF19 , which needs original RGB frames as input. For the bidirectional S2VT model, we first pre-train the description generator on COCO 2014 for image captioning. We next implement forward and backward passes for video encoding and merge the hidden states step-wise with a learnt weight, while the language layer receives the merged hidden representation with nulls padded as words. We also pad the inputs of the forward LSTM and backward LSTM with zeros at the decoding stage, and concatenate the merged hidden states to the embedded words. In the last model, we regard the merged bidirectional hidden states as a complementary enhancement and concatenate them to the original fc7 features to obtain a reinforced representation of the video, then derive the sentence from the new features using the last LSTM. The loss is computed only at the decoding stage in all S2VT-based models. Joint-BiLSTM: Different from the S2VT-based models, we employ joint bidirectional LSTM networks to encode the video sequence and decode the description with another LSTM, respectively, rather than sharing a common one. We stack two layers of LSTM networks to encode the video and pre-train the language model as in the S2VT-based models.
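A small sketch of the description preprocessing steps listed at the start of this setup (NLTK tokenization, lower-casing, punctuation removal, and <BOS>/<EOS> wrapping); the exact punctuation handling shown here is an assumption.

```python
import string
from nltk.tokenize import word_tokenize   # requires the NLTK 'punkt' tokenizer data

def preprocess(description):
    tokens = [w.lower() for w in word_tokenize(description)]
    tokens = [w for w in tokens if w not in string.punctuation]   # drop punctuation tokens
    return ["<BOS>"] + tokens + ["<EOS>"]

print(preprocess("A man is playing the piano."))
# ['<BOS>', 'a', 'man', 'is', 'playing', 'the', 'piano', '<EOS>']
```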
Similarly, unidirectional LSTM, bidirectional LSTM and reinforced BiLSTM are executed to investigate the performance of each structure. We set 1024 hidden units of the first LSTM in unidirectional encoder so that the output could pass to the second encoder directly, and the memory cell and hidden state of the last time point are applied to initialize description decoder. Bidirectional structure and reinforced BiLSTM in encoder are implemented similarly to the corresponding type structure in S2VT-based models, respectively, and then feed the video representation into description generator as the unidirectional model aforementioned. ### Results and Analysis BLEU BIBREF28 , METEOR BIBREF29 , ROUGE-L BIBREF30 and CIDEr BIBREF31 are common evaluation metrics in image and video description, the first three were originally proposed to evaluate machine translation at the earliest and CIDEr was proposed to evaluate image description with sufficient reference sentences. To quantitatively evaluate the performance of our bidirectional recurrent based approach, we adopt METEOR metric because of its robust performance. Contrasting to the other three metrics, METEOR could capture semantic aspect since it identifies all possible matches by extracting exact matcher, stem matcher, paraphrase matcher and synonym matcher using WordNet database, and compute sentence level similarity scores according to matcher weights. The authors of CIDEr also argued for that METEOR outperforms CIDEr when the reference set is small BIBREF31 . We first compare our unidirectional, bidirectional structures and reinforced BiLSTM. As shown in Table TABREF19 , in S2VT-based model, bidirectional structure performs very little lower score than unidirectional structure while it shows the opposite results in joint LSTM case. It may be caused by the pad at description generating stage in S2VT-based structure. We note that BiLSTM reinforced structure gains more than 3% improvement than unidirectional-only model in both S2VT-based and joint LSTMs structures, which means that combining bidirectional encoding of video representation is beneficial to exploit some additional temporal structure in video encoder (Figure FIGREF17 ). On structure level, Table TABREF19 illustrates that our Joint-LSTMs based models outperform all S2VT based models correspondingly. It demonstrates our Joint-LSTMs structure benefits from encoding video and decoding natural language separately. We also evaluate our Joint-BiLSTM structure by comparing with several other state-of-the-art baseline approaches, which exploit either local or global temporal structure. As shown in Table TABREF20 , our Joint-BiLSTM reinforced model outperforms all of the baseline methods. The result of “LSTM” in first row refer from BIBREF15 and the last row but one denotes the best model combining local temporal structure using C3D with global temporal structure utilizing temporal attention in BIBREF17 . From the first two rows, our unidirectional joint LSTM shows rapid improvement, and comparing with S2VT-VGG model in line 3, it also demonstrates some superiority. Even LSTM-E jointly models video and descriptions representation by minimizing the distance between video and corresponding sentence, our Joint-BiLSTM reinforced obtains better performance from bidirectional encoding and separated visual and language models. 
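For reference, sentence-level METEOR, the metric adopted above, can be computed with NLTK's implementation roughly as sketched below. Recent NLTK versions expect pre-tokenized input and require the WordNet corpus to be installed, and the example sentences here are invented.

```python
from nltk.translate.meteor_score import meteor_score   # needs the NLTK 'wordnet' corpus

references = [["a", "man", "is", "playing", "a", "piano"],
              ["someone", "plays", "the", "piano"]]
hypothesis = ["a", "man", "plays", "the", "piano"]

print(meteor_score(references, hypothesis))   # score in [0, 1]; higher is better
```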
We observed that while our unidirectional S2VT has the same deployment as BIBREF19 , our model gives slightly poorer performance (line 1, Table TABREF19 and line 3, Table TABREF20 ). As mentioned in Section 3.2.2, they employed an end-to-end model reading original RGB frames and fine-tuning on the VGG caffemodel. The frame features from the VGG fc7 layer are therefore more compatible with the MSVD dataset and the description task. However, our joint LSTM demonstrates better performance with general features rather than data-specific ones, even superior to their model with multiple feature aspects (RGB + Flow, line 4, Table TABREF20 ), which suggests that our Joint-BiLSTM could show even more powerful descriptive ability in the end-to-end case. We will certainly investigate the effect of an end-to-end variant of our Joint-BiLSTM in future work. ### Conclusion and Future Works In this paper, we introduced a sequence to sequence approach to describe video clips with natural language. The core of our method is the use of two LSTM networks for the visual encoder and the natural language generator components of our model. In particular, we encoded video sequences with a bidirectional Long-Short Term Memory (BiLSTM) network, which can effectively capture the bidirectional global temporal structure in video. Experimental results on the MSVD dataset demonstrated superior performance over many other state-of-the-art methods. We also note some limitations of our model, such as the end-to-end framework employed in BIBREF19 and the distance measure used in BIBREF15 . In the future we will make more effort to address these limitations and exploit the linguistic domain knowledge in visual content understanding. Figure 1: The overall flowchart of the proposed video captioning framework. We first extract CNNs features of video frames and feed them into forward pass networks (FU, green box) and backward pass networks (BU, yellow box). We then combine the outputs of hidden states together with the original CNNs features, and pass the integrated sequence to another LSTM (MU, blue box) to generate the final video representation. We initialize the language model (SU, pink box) with the video representation and start to generate words sequentially with the <BOS> token, and terminate the process when the <EOS> token is emitted. Figure 2: Video captioning examples of our proposed method. “Uni” in color blue, “Bi” in color brown and “Re” in color black are the unidirectional Joint-LSTM, bidirectional Joint-LSTM and reinforced Joint-BiLSTM models, respectively. Table 2: Comparing with several state-of-the-art models (reported in percentage, higher is better). Table 1: Comparison results of unidirectional, bidirectional structures and reinforced BiLSTM in both S2VT-based and joint LSTMs structure with METEOR (reported in percentage, higher is better).
METEOR
How large is the dataset?
### Introduction Analyzing and generating natural language texts requires the capturing of two important aspects of language: what is said and how it is said. In the literature, much more attention has been paid to studies on what is said. However, recently, capturing how it is said, such as stylistic variations, has also proven to be useful for natural language processing tasks such as classification, analysis, and generation BIBREF1 , BIBREF2 , BIBREF3 . This paper studies the stylistic variations of words in the context of the representation learning of words. The lack of subjective or objective definitions is a major difficulty in studying style BIBREF4 . Previous attempts have been made to define a selected aspect of the notion of style (e.g., politeness) BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 ; however, it is not straightforward to create strict guidelines for identifying the stylistic profile of a given text. The systematic evaluation of style-sensitive word representations and the learning of style-sensitive word representations in a supervised manner are hampered by this. In addition, there is another trend of research toward controlling style-sensitive utterance generation without defining the style dimensions BIBREF11 , BIBREF12 ; however, this line of research considers style to be something associated with a given specific character, i.e., a persona, and does not aim to capture the stylistic variation space. The contributions of this paper are three-fold. (1) We propose a novel architecture that acquires style-sensitive word vectors (Figure 1 ) in an unsupervised manner. (2) We construct a novel dataset for style, which consists of pairs of style-sensitive words with each pair scored according to its stylistic similarity. (3) We demonstrate that our word vectors capture the stylistic similarity between two words successfully. In addition, our training script and dataset are available on https://jqk09a.github.io/style-sensitive-word-vectors/. ### Style-sensitive Word Vector The key idea is to extend the continuous bag of words (CBOW) BIBREF0 by distinguishing nearby contexts and wider contexts under the assumption that a style persists throughout every single utterance in a dialog. We elaborate on it in this section. ### Notation Let $w_{t}$ denote the target word (token) in the corpora and $\mathcal {U}_t = \lbrace w_1, \dots , w_{t-1}, w_t, w_{t+1},\dots , w_{\vert \mathcal {U}_t \vert }\rbrace $ denote the utterance (word sequence) including $w_t$ . Here, $w_{t+d}$ or $w_{t-d} \in \mathcal {U}_t$ is a context word of $w_t$ (e.g., $w_{t+1}$ is the context word next to $w_{t}$ ), where $d\in \mathbb {N}_{>0}$ is the distance between the context words and the target word $w_t$ . For each word (token) $w$ , bold face $\mbox{$v$}_{w}$ and $\tilde{\mbox{$v$}}_{w}$ denote the vector of $w$ and the vector predicting the word $w$ . Let $\mathcal {V}$ denote the vocabulary. ### Baseline Model (CBOW-near-ctx) First, we give an overview of CBOW, which is our baseline model. CBOW predicts the target word $w_t$ given the nearby context words in a window with width $\delta $ : $$\mathcal {C}^{near}_t := \left\lbrace w_{t\pm d} \in \mathcal {U}_t \mid 1\le d \le \delta \right\rbrace .$$ (Eq. 4) The set $\mathcal {C}^{near}_t$ contains in total at most $2\delta $ words, including $\delta $ words to the left and $\delta $ words to the right of a target word.
Specifically, we train the word vectors $\tilde{\mbox{$v$}}_{w_t}$ and $\mbox{$v$}_c$ ( $c\in \mathcal {C}^{near}_t$ ) by maximizing the following prediction probability: $$P(w_t \mid \mathcal {C}^{near}_t) \propto \exp \biggl (\tilde{\mbox{$v$}}_{w_t} \cdot \frac{1}{\vert \mathcal {C}^{near}_t \vert }\sum _{c\in \mathcal {C}^{near}_t} \mbox{$v$}_c\biggr ) \text{.}$$ (Eq. 5) The CBOW captures both semantic and syntactic word similarity through the training using nearby context words. We refer to this form of CBOW as CBOW-near-ctx. Note that, in the implementation of BIBREF13 , the window width $\delta $ is sampled from a uniform distribution; however, in this work, we fixed $\delta $ for simplicity. Hereafter, throughout our experiments, we turn off the random resizing of $\delta $ . ### Learning Style with Utterance-size Context Window (CBOW-all-ctx) CBOW is designed to learn the semantic and syntactic aspects of words from their nearby context BIBREF13 . However, an interesting problem is determining the location where the stylistic aspects of words can be captured. To address this problem, we start with the assumption that a style persists throughout each single utterance in a dialog, that is, the stylistic profile of a word in an utterance must be consistent with the other words in the same utterance. Based on this assumption, we propose extending CBOW to use all the words in an utterance as context, $$\mathcal {C}^{all}_t := \lbrace w_{t\pm d} \in \mathcal {U}_t \mid 1\le d\rbrace \text{,}$$ (Eq. 7) instead of only the nearby words. Namely, we expand the context window from a fixed width to the entire utterance. This training strategy is expected to lead to learned word vectors that are more sensitive to style rather than to other aspects. We refer to this version as CBOW-all-ctx. ### Learning the Style and Syntactic/Semantic Separately To learn the stylistic aspect more exclusively, we further extended the learning strategy. First, remember that using nearby context is effective for learning word vectors that capture semantic and syntactic similarities. However, this means that using the nearby context can lead the word vectors to capture some aspects other than style. Therefore, as the first extension, we propose excluding the nearby context $\mathcal {C}^{near}_t$ from the full context $\mathcal {C}^{all}_t$ . In other words, we use the distant context words only: $$\mathcal {C}^{dist}_t := \mathcal {C}^{all}_t \setminus \mathcal {C}^{near}_t = \left\lbrace w_{t\pm d} \in \mathcal {U}_t \mid \delta < d \right\rbrace \text{.}$$ (Eq. 9) We expect that training with this type of context will lead to word vectors containing the style-sensitive information only. We refer to this method as CBOW-dist-ctx. As the second extension to distill off aspects other than style, we use both the nearby and all contexts ( $\mathcal {C}^{near}_t$ and $\mathcal {C}^{all}_t$ ). As Figure 2 shows, both the vector $\mbox{$v$}_{w}$ and $\tilde{\mbox{$v$}}_w$ of each word $w\in \mathcal {V}$ are divided into two vectors: $$\mbox{$v$}_w = \mbox{$x$}_w \oplus \mbox{$y$}_w,\;\; \tilde{\mbox{$v$}}_w = \tilde{\mbox{$x$}}_w \oplus \tilde{\mbox{$y$}}_w \text{,}$$ (Eq. 10) where $\oplus $ denotes vector concatenation. Vectors $\mbox{$x$}_{w}$ and $\tilde{\mbox{$x$}}_w$ indicate the style-sensitive part of $\mbox{$v$}_w$ and $\tilde{\mbox{$v$}}_w$ respectively. Vectors $\mbox{$y$}_w$ and $\tilde{\mbox{$y$}}_w$ indicate the syntactic/semantic-sensitive part of $\mbox{$v$}_w$ and $\tilde{\mbox{$v$}}_w$ respectively.
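Before the separated training is described, the three context definitions introduced above (nearby, utterance-wide, and distant-only) can be recapped with a short sketch; the toy utterance is invented and the window width is shortened for readability.

```python
def contexts(utterance, t, delta=5):
    near = [w for i, w in enumerate(utterance) if i != t and abs(i - t) <= delta]   # CBOW-near-ctx
    all_ctx = [w for i, w in enumerate(utterance) if i != t]                        # CBOW-all-ctx
    dist = [w for i, w in enumerate(utterance) if abs(i - t) > delta]               # CBOW-dist-ctx
    return near, all_ctx, dist

utterance = "well I do say that is quite the story indeed".split()
near, all_ctx, dist = contexts(utterance, t=4, delta=2)
```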
For training, when the context words are in the nearby context ( $c\in \mathcal {C}^{near}_t$ ), we update both the style-sensitive vectors ( $\mbox{$x$}_{w}$ , $\tilde{\mbox{$x$}}_{w}$ ) and the syntactic/semantic-sensitive vectors ( $\mbox{$y$}_{w}$ , $\tilde{\mbox{$y$}}_{w}$ ), i.e., the full vectors $\mbox{$v$}_{w}$ and $\tilde{\mbox{$v$}}_{w}$ . Conversely, when the context words are drawn from the whole utterance ( $c\in \mathcal {C}^{all}_t$ ), we only update the style-sensitive vectors ( $\mbox{$x$}_{w}$ , $\tilde{\mbox{$x$}}_{w}$ ). Formally, the prediction probabilities are calculated as follows: $$P_1(w_{t}\mid \mathcal {C}^{near}_t) \propto \exp \biggl (\tilde{\mbox{$v$}}_{w_t} \cdot \frac{1}{\vert \mathcal {C}^{near}_t \vert }\sum _{c\in \mathcal {C}^{near}_t} \mbox{$v$}_c\biggr ) \text{,} \qquad P_2(w_{t}\mid \mathcal {C}^{all}_t) \propto \exp \biggl (\tilde{\mbox{$x$}}_{w_t} \cdot \frac{1}{\vert \mathcal {C}^{all}_t \vert }\sum _{c\in \mathcal {C}^{all}_t} \mbox{$x$}_c\biggr ) \text{.}$$ (Eq. 11) At the time of learning, the two prediction probabilities (loss functions) are alternately computed, and the word vectors are updated. We refer to this method, which uses the two-fold contexts separately, as CBOW-sep-ctx. ### Experiments We investigated which word vectors capture the stylistic, syntactic, and semantic similarities. ### Settings We collected Japanese fictional stories from the Web to construct the dataset. The dataset contains approximately 30M utterances of fictional characters. We separated the data into a 99%–1% split for training and testing. In Japanese, the function words at the end of the sentence often exhibit style (e.g., desu+wa, desu+ze); therefore, we used an existing lexicon of multi-word functional expressions BIBREF14 . Overall, the vocabulary size $\vert \mathcal {V} \vert $ was 100K. We chose the dimensions of both the style-sensitive and the syntactic/semantic-sensitive vectors to be 300, and the dimensions of the baseline CBOWs were 300. The learning rate was adjusted individually for each part in $\lbrace \mbox{$x$}_w, \mbox{$y$}_w, \tilde{\mbox{$x$}}_w, \tilde{\mbox{$y$}}_w\rbrace $ such that “the product of the learning rate and the expectation of the number of updates” was a fixed constant. We ran the optimizer with its default settings from the implementation of BIBREF0 . The training stopped after 10 epochs. We fixed the nearby window width to $\delta =5$ . ### Stylistic Similarity Evaluation To verify that our models capture the stylistic similarity, we evaluated our style-sensitive vector $\mbox{$x$}_{w_t}$ by comparing it to other word vectors on a novel artificial task matching human stylistic similarity judgments. For this evaluation, we constructed a novel dataset with human judgments on the stylistic similarity between word pairs by performing the following two steps. First, we collected only style-sensitive words from the test corpus because some words are strongly associated with stylistic aspects BIBREF15 , BIBREF16 and, therefore, annotating random words for stylistic similarity is inefficient. We asked crowdsourced workers to select style-sensitive words in utterances. Specifically, for the crowdsourced task of picking “style-sensitive” words, we provided workers with a word-segmented utterance and asked them to pick words that they expected to be altered within different situational contexts (e.g., characters, moods, purposes, and the background cultures of the speaker and listener).
Then, we randomly sampled $1,000$ word pairs from the selected words and asked 15 workers to rate each of the pairs on five scales (from $-2$ : “The style of the pair is different” to $+2$ : “The style of the pair is similar”), inspired by the syntactic/semantic similarity dataset BIBREF17 , BIBREF18 . Finally, we picked only word pairs featuring clear worker agreement in which more than 10 annotators rated the pair with the same sign, which consisted of random pairs of highly agreeing style-sensitive words. Consequently, we obtained 399 word pairs with similarity scores. To our knowledge, this is the first study that created an evaluation dataset to measure the lexical stylistic similarity. In the task of selecting style-sensitive words, the pairwise inter-annotator agreement was moderate (Cohen's kappa $\kappa $ is $0.51$ ). In the rating task, the pairwise inter-annotator agreement for two classes ( $\lbrace -2, -1\rbrace $ or $\lbrace +1, +2\rbrace $ ) was fair (Cohen's kappa $\kappa $ is $0.23$ ). These statistics suggest that, at least in Japanese, native speakers share a sense of style-sensitivity of words and stylistic similarity between style-sensitive words. We used this evaluation dataset to compute the Spearman rank correlation ( $\rho _{style}$ ) between the cosine similarity scores between the learned word vectors $\cos (\mbox{$v$}_{w}, \mbox{$v$}_{w^{\prime }})$ and the human judgements. Table 1 shows the results on its left side. First, our proposed model, CBOW-all-ctx outperformed the baseline CBOW-near-ctx. Furthermore, the $\mbox{$x$}$ of CBOW-dist-ctx and CBOW-sep-ctx demonstrated better correlations for stylistic similarity judgments ( $\rho _{style}=56.1$ and $51.3$ , respectively). Even though the $\mbox{$x$}$ of CBOW-sep-ctx was trained with the same context window as CBOW-all-ctx, the style-sensitivity was boosted by introducing joint training with the near context. CBOW-dist-ctx, which uses only the distant context, slightly outperforms CBOW-sep-ctx. These results indicate the effectiveness of training using a wider context window. ### Syntactic and Semantic Evaluation We further investigated the properties of each model using the following criterion: (1) the model's ability to capture the syntactic aspect was assessed through a task predicting part of speech (POS) and (2) the model's ability to capture the semantic aspect was assessed through a task calculating the correlation with human judgments for semantic similarity. First, we tested the ability to capture syntactic similarity of each model by checking whether the POS of each word was the same as the POS of a neighboring word in the vector space. Specifically, we calculated SyntaxAcc@ $N$ defined as follows: $$\frac{1}{\vert \mathcal {V} \vert N}\sum _{w\in \mathcal {V}}\sum _{\,w^{\prime }\in \mathcal {N}(w)} \hspace{-4.0pt}\mathbb {I}[\mathrm {POS}(w) \!=\! \mathrm {POS}(w^{\prime })] \text{,}\!$$ (Eq. 24) where $\mathbb {I}[\text{condition}] = 1$ if the condition is true and $\mathbb {I}[\text{conditon}] = 0$ otherwise, the function $\mathrm {POS}(w)$ returns the actual POS tag of the word $w$ , and $\mathcal {N}(w)$ denotes the set of the $N$ top similar words $\lbrace w^{\prime }\rbrace $ to $w$ w.r.t. $\cos (\mbox{$v$}_w,\mbox{$v$}_{w^{\prime }})$ in each vector space. Table 1 shows SyntaxAcc@ $N$ with $N = 5$ and 10. For both $N$ , the $\mbox{$y$}$ (the syntactic/semantic part) of CBOW-near-ctx, CBOW-all-ctx and CBOW-sep-ctx achieved similarly good. 
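Both quantitative evaluations described above reduce to simple computations over the learned vectors: $\rho_{style}$ (and, later, $\rho_{sem}$) is a Spearman rank correlation between cosine similarities and human ratings, while SyntaxAcc@N counts POS matches among the N most similar words of each vocabulary item. The sketch below illustrates both; the vectors, rated pairs and POS lookup are assumed inputs, and the brute-force neighbour search is only for clarity.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rho(pairs, word_vec):
    """Spearman correlation for rho_style / rho_sem; pairs: (word1, word2, human_score) triples."""
    model = [cosine(word_vec[w1], word_vec[w2]) for w1, w2, _ in pairs]
    human = [score for _, _, score in pairs]
    return spearmanr(model, human).correlation

def syntax_acc(vocab, word_vec, pos_of, n=10):
    """SyntaxAcc@n: fraction of the n nearest neighbours sharing the target word's POS tag."""
    hits, total = 0, 0
    for w in vocab:
        neighbours = sorted(((cosine(word_vec[w], word_vec[u]), u)
                             for u in vocab if u != w), reverse=True)[:n]
        hits += sum(pos_of[w] == pos_of[u] for _, u in neighbours)
        total += len(neighbours)
    return hits / total
```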
Interestingly, even though the $\mbox{$x$}$ of CBOW-sep-ctx used the same context as that of CBOW-all-ctx, the syntactic sensitivity of $\mbox{$x$}$ was suppressed. We speculate that the syntactic sensitivity was distilled off by the other part of the CBOW-sep-ctx vector, i.e., $\mbox{$y$}$ learned using only the near context, which captured more syntactic information. In the next section, we analyze CBOW-sep-ctx for the different characteristics of $\mbox{$x$}$ and $\mbox{$y$}$ . To test the model's ability to capture the semantic similarity, we also measured correlations with the Japanese Word Similarity Dataset (JWSD) BIBREF19 , which consists of $4,\!000$ Japanese word pairs annotated with semantic similarity scores by human workers. For each model, we calculate and show the Spearman rank correlation score ( $\rho _{sem}$ ) between the cosine similarity score $\cos (\mbox{$v$}_w, \mbox{$v$}_{w^{\prime }})$ and the human judgements on JWSD in Table 1 . CBOW-dist-ctx has the lowest score ( $\rho _{sem}\!=\!15.9$ ); however, surprisingly, the stylistic vector $\mbox{$x$}_{w_t}$ has the highest score ( $\rho _{sem}\!=\!28.9$ ), while both vectors have a high $\rho _{style}$ . This result indicates that the proposed stylistic vector $\mbox{$x$}_{w_t}$ captures not only the stylistic similarity but also the semantic similarity, contrary to our expectations (ideally, we want the stylistic vector to capture only the stylistic similarity). We speculate that this is because not only the style but also the topic is often consistent in single utterances. For example, “サンタ (Santa Claus)” and “トナカイ (reindeer)” are topically relevant words, and these words tend to appear in a single utterance. Therefore, stylistic vectors $\lbrace \mbox{$x$}_{w}\rbrace $ using all the context words in an utterance also capture the topic relatedness. In addition, JWSD contains topic-related word pairs and synonym pairs; therefore, the word vectors that capture the topic similarity have a higher $\rho _{sem}$ . We will discuss this point in the next section. ### Analysis of Trained Word Vectors Finally, to further understand what types of features our CBOW-sep-ctx model acquired, we show some words with their four most similar words in Table 2 . Here, for English readers, we also report a result for English. The English result also shows an example of the performance of our model on another language. The left side of Table 2 (for the stylistic vector $\mbox{$x$}$ ) shows the results. We found that the Japanese word “拙者 (I; classical)” is similar to “ござる (be; classical)” or words containing it (the second row of Table 2 ). The result looks reasonable, because words such as “拙者 (I; classical)” and “ござる (be; classical)” are typically used by Japanese Samurai or Ninja. We can see that the vectors captured the similarity of these words, which are stylistically consistent across syntactic and semantic varieties. Conversely, the right side of the table (for the syntactic/semantic vector $\mbox{$y$}$ ) shows that the word “拙者 (I; classical)” is similar to personal pronouns (e.g., “僕 (I; male, childish)”). We further confirmed that the top 15 similar words are also personal pronouns (even though they are not shown due to space limitations). These results indicate that the proposed CBOW-sep-ctx model jointly learns two different types of lexical similarities, i.e., the stylistic and syntactic/semantic similarities, in the different parts of the vectors.
However, our stylistic vector also captured the topic similarity, such as “サンタ (Santa Claus)” and “トナカイ (reindeer)” (the fourth row of Table 2 ). Therefore, there is still room for improvement in capturing the stylistic similarity. ### Conclusions and Future Work This paper presented the unsupervised learning of style-sensitive word vectors, which extends CBOW by distinguishing nearby contexts and wider contexts. We created a novel dataset for style, where the stylistic similarity between word pairs was scored by humans. Our experiment demonstrated that our method leads word vectors to distinguish the stylistic aspect from other semantic or syntactic aspects. In addition, we also found that our training cannot avoid confusing some styles and topics. A future direction will be to address this issue by further introducing other contexts, such as document- or dialog-level context windows, where the topics are often consistent but the styles are not. ### Acknowledgments This work was supported by JSPS KAKENHI Grant Number 15H01702. We thank our anonymous reviewers for their helpful comments and suggestions. Figure 1: Word vector capturing stylistic and syntactic/semantic similarity. Figure 2: The architecture of CBOW-SEP-CTX. Table 1: Results of the quantitative evaluations. Table 2: The top similar words for the style-sensitive and syntactic/semantic vectors learned with the proposed model, CBOW-SEP-CTX. Japanese words are translated into English by the authors. Legend: (translation; impression).
30M utterances
How is the data automatically generated?
### Introduction Current neural networks for language understanding rely heavily on unsupervised pretraining tasks like language modeling. However, it is still an open question what degree of knowledge state-of-the-art language models (LMs) acquire about different linguistic phenomena. Many recent studies BIBREF0, BIBREF1, BIBREF2 have advanced our understanding in this area by evaluating LMs' preferences between minimal pairs of sentences, as in Example SECREF1. However, these studies have used different analysis metrics and focused on a small set of linguistic paradigms, making a big-picture comparison between these studies limited. . Ṫhe cat annoys Tim. (grammatical) The cat annoy Tim. (ungrammatical) We introduce the Benchmark of Linguistic Minimal Pairs (shortened to BLiMP or just *X ) a linguistically-motivated benchmark for assessing LMs' knowledge across a wide variety of English phenomena, encapsulating both previously studied and novel contrasts. *X consists of 67 datasets automatically generated from expert-crafted grammars, each containing 1000 minimal pairs and organized by phenomenon into 12 categories. Validation with crowd workers shows that humans overwhelmingly agree with the contrasts in *X . We use *X to study several pretrained LMs: Transformer-based LMs GPT-2 BIBREF3 and Transformer-XL BIBREF4, an LSTM LM trained by BIBREF5, and a $n$-gram LM. We evaluate whether the LM assigns a higher probability to the acceptable sentence in each minimal pair in *X . This experiment gives a sense of which grammatical distinctions LMs are sensitive to in general, and the extent to which unrelated models have similar strengths and weaknesses. We conclude that current neural LMs robustly learn agreement phenomena and even some subtle syntactic phenomena such as ellipsis and control/raising. They perform comparatively worse (and well below human level) on minimal pairs related to argument structure and the licensing of negative polarity items and quantifiers. All models perform at or near chance on extraction islands, which we conclude is the most challenging phenomenon covered by *X . Overall, we note that all models we evaluate fall short of human performance by a wide margin. GPT-2, which performs the best, does match (even just barely exceeds) human performance on some grammatical phenomena, but remains 8 percentage points below human performance overall. We conduct additional experiments to investigate the effect of training size on LSTM model performance on *X . We show that learning trajectories differ, sometimes drastically, across different paradigms in the dataset, with phenomena such as anaphor agreement showing consistent improvement as training size increases, and other phenomena such as NPIs and extraction islands remaining near chance despite increases in training size. We also compare overall sentence probability to two other built-in metrics coded on *X and find that the chosen metric changes how we evaluate relative model performance. ### Background & Related Work ::: Language Models The objective of a language model is to give a probability distribution over the possible strings of a language. Language models can be built on neural network models or non-neural network models. Due to their unsupervised nature, they can be trained without external annotations. More recently, neural network based language modeling has been shown to be a strong pretraining task for natural language understanding tasks BIBREF6, BIBREF7, BIBREF8, BIBREF9. 
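The core evaluation described in the introduction above — checking whether a pretrained LM assigns the acceptable member of a minimal pair a higher probability than the unacceptable one — can be sketched as follows with GPT-2 via the Hugging Face transformers library. The total log probability is recovered approximately from the mean token loss; this is an illustration rather than the exact evaluation script, and the example pair is taken from the sentence pair shown earlier.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_prob(sentence):
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss        # mean negative log-likelihood per predicted token
    return -loss.item() * (ids.shape[1] - 1)      # approximate total log probability of the sentence

good, bad = "The cat annoys Tim.", "The cat annoy Tim."
print(log_prob(good) > log_prob(bad))             # True when the model prefers the acceptable sentence
```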
Some recent models, such as BERT BIBREF9, use closely related tasks such as masked language modeling. In the last decade, we have seen two major paradigm shifts in the state of the art for language modeling. The first was the movement from statistical methods based on $n$-grams BIBREF10 to neural methods such as LSTMs BIBREF11, which directly optimize the task of predicting the next word. More recently, Transformer-based architectures employing self-attention BIBREF12 have outperformed LSTMs at language modeling BIBREF4. Although it is reasonably clear that these shifts have resulted in stronger language models, the primary metric of performance is perplexity, which cannot give detailed insight into these models' linguistic knowledge. Evaluation on downstream task benchmarks BIBREF13, BIBREF14 is more informative, but might not present a broad enough challenge or represent grammatical distinctions at a sufficiently fine-grained level. ### Background & Related Work ::: Evaluating Linguistic Knowledge A large number of recent studies have used acceptability judgments to reveal what neural networks know about grammar. One branch of this literature has focused on using minimal pairs to infer whether LMs learn about specific linguistic phenomena. Table TABREF4 gives a summary of work that has studied linguistic phenomena in this way. For instance, BIBREF0 look closely at minimal pairs contrasting subject-verb agreement, and BIBREF1 look at a larger set of phenomena, including negative polarity item licensing and reflexive licensing. However, a relatively small set of phenomena is covered by these studies, to the exclusion of well-studied phenomena in linguistics such as control and raising, ellipsis, distributional restrictions on quantifiers, and countless others. This is likely due to the labor-intensive nature of collecting examples that exhibit informative grammatical phenomena, along with their acceptability judgments. A related line of work evaluates neural networks on acceptability judgments over a more general domain of grammatical phenomena. Corpora of sentences labeled for grammaticality have been collected for this purpose in a number of computational studies on grammaticality judgment BIBREF26, BIBREF27, BIBREF16. The most recent and comprehensive corpus is CoLA BIBREF16, which contains around 10k sentences covering a wide variety of linguistic phenomena from 23 linguistics papers and textbooks. CoLA, which is included in the GLUE benchmark BIBREF13, has been used to track advances in the general grammatical knowledge of reusable sentence understanding models. Current models like BERT BIBREF9 and T5 BIBREF28 can be trained to give acceptability judgments that approach or even exceed individual human agreement with CoLA. While CoLA can also be used to evaluate phenomenon-specific knowledge of models, this method is limited by the need to train a supervised classifier on CoLA data prior to evaluation. BIBREF29 compare the CoLA performance of pretrained sentence understanding models: an LSTM, GPT BIBREF8, and BERT. They find that these models perform well on sentences involving marked argument structure, and struggle on sentences with long-distance dependencies like those found in questions, though the Transformers have a noticeable advantage. However, evaluating supervised classifiers prevents making strong conclusions about the models themselves, since biases in the training data may affect the results.
For instance, relatively strong performance on a phenomenon might be due to a model's implicit knowledge or to the frequent occurrence of similar examples in the supervised training data. Evaluating LMs on minimal pairs evades this problem by eschewing supervised training on acceptability judgments. It is possible to use the LM probability of a sentence as a proxy for acceptability because, within a minimal pair, other factors impacting a sentence's probability, such as length and lexical content, are controlled for. ### Data The *X dataset consists of 67 paradigms of 1000 sentence pairs. Each paradigm is annotated for the unique contrast it isolates and the broader category of phenomena it is part of. The data is automatically generated according to expert-crafted grammars, and our automatic labels are validated with crowd-sourced human judgments. ### Data ::: Data generation procedure To create minimal pairs exemplifying a wide array of linguistic contrasts, it is necessary to artificially generate all datasets. This ensures both that we have sufficient unacceptable examples, and that the data is fully controlled, allowing for repeated isolation of a single linguistic phenomenon in each paradigm BIBREF30. The data generation scripts use a basic template to create each paradigm, pulling from a vocabulary of over 3000 words annotated for the morphological, syntactic, and semantic features needed to create grammatical and semantically felicitous sentences. Examples SECREF6 and SECREF6 show one such template for the `acceptable' and `unacceptable' sentences within a pair: the sole difference between them is the underlined word, which differs only in whether the anaphor agrees in number with its antecedent. Our generation codebase and scripts are freely available. DP1 V1 refl_match: The cats licked themselves. DP1 V1 refl_mismatch: The cats licked itself. This generation procedure is not without limitations, and despite the very detailed vocabulary we use, implausible sentences are occasionally generated (e.g., `Sam ran around some glaciers'). In these cases, though, both the acceptable and unacceptable sentences will be equally implausible given world knowledge, so any difference in the probability assigned to them is still due to the intended grammatical contrast. ### Data ::: Coverage The paradigms covered by *X represent well-established contrasts in English morphology, syntax, and semantics. Each paradigm is grouped into one of 12 phenomena, shown in Table TABREF1. The paradigms are selected with the constraint that they can be illustrated with minimal pairs of equal sentence length and that they are of a form that can be written as a template, as in SECREF6 and SECREF6. While this dataset has broad coverage, it is not exhaustive – it is not possible to include every grammatical phenomenon of English, and there is no agreed-upon set of core phenomena. However, we consider frequent inclusion of a phenomenon in a syntax/semantics textbook as an informal proxy for what linguists consider to be core phenomena. We survey several syntax textbooks BIBREF31, BIBREF32, BIBREF33, and find that nearly all of the phenomena in *X are discussed in some source, and most of the topics that repeatedly appear in textbooks and can be represented with minimal pairs (e.g., agreement, argument selection, control/raising, wh-extraction/islands, binding) are present in *X. Because the generation code is reusable, it is possible to generate paradigms not included in *X in the future.
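To make the template-based generation procedure described above concrete, the sketch below builds minimal pairs for a single toy paradigm (anaphor number agreement) from a small annotated vocabulary. It is a minimal illustration in the spirit of the released generation scripts, not the actual *X codebase; the vocabulary entries, feature names, and output fields are invented for this example.

```python
import random

# Toy vocabulary annotated with the features needed to build grammatical and
# felicitous sentences. The real *X vocabulary has over 3000 annotated items.
NOUNS = [
    {"word": "cat", "number": "sg", "animate": True},
    {"word": "cats", "number": "pl", "animate": True},
    {"word": "dog", "number": "sg", "animate": True},
    {"word": "dogs", "number": "pl", "animate": True},
]
VERBS = [
    {"word": "licked", "selects": "animate"},
    {"word": "scratched", "selects": "animate"},
]
REFLEXIVES = {"sg": "itself", "pl": "themselves"}


def generate_anaphor_agreement_pair(rng: random.Random) -> dict:
    """Fill the template 'DP1 V1 refl' twice: once with a number-matched
    reflexive (acceptable) and once with a mismatched one (unacceptable)."""
    verb = rng.choice(VERBS)
    # Respect the verb's selectional restriction when choosing a subject.
    subject = rng.choice([n for n in NOUNS if n.get(verb["selects"])])
    matched = REFLEXIVES[subject["number"]]
    mismatched = REFLEXIVES["pl" if subject["number"] == "sg" else "sg"]
    return {
        "sentence_good": f"The {subject['word']} {verb['word']} {matched}.",
        "sentence_bad": f"The {subject['word']} {verb['word']} {mismatched}.",
        "phenomenon": "anaphor_agreement",
    }


if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        print(generate_anaphor_agreement_pair(rng))
```

Because both members of a pair are built from the same template fill, they match in length and lexical content and differ only in the contrast of interest.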
### Data ::: Comparison to Related Resources With over 3000 words, *X has by far the widest lexical variability of any related generated dataset. The vocabulary includes verbs with 11 different subcategorization frames, including verbs that select for PPs, infinitival VPs, and embedded clauses. By comparison, the datasets by BIBREF30 and BIBREF1 each use a vocabulary of well under 200 items. Other datasets of minimal pairs that achieve greater lexical and syntactic variety use data-creation methods that are limited in terms of empirical scope or control. BIBREF0 construct a dataset of minimal pairs for subject-verb agreement by changing the number marking on present-tense verbs in a subset of English Wikipedia. However, this approach does not generalize beyond simple agreement phenomena. BIBREF27 build a dataset of minimal pairs by passing sentences from the BNC through round-trip machine translation. The resulting sentences contain a wider variety of grammatical violations, but it is not possible to control the nature of the violation, and a single sentence may contain several violations. ### Data ::: Data validation To verify that the generated sentences represent a real contrast in acceptability, we conduct human validation via Amazon Mechanical Turk. Twenty separate validators rated five pairs from each of the 67 paradigms, for a total of 6700 judgments. We restricted validators to individuals currently located in the US who self-reported as native speakers of English. To ensure that our validators made a genuine effort on the task, each HIT included an attention-check item and a hidden-field question to catch bot-assisted humans. For each minimal pair, 20 different individuals completed a forced-choice task that mirrors the task done by the LMs; the human-determined “acceptable” sentence was determined via majority vote of annotators. By this metric, we estimate aggregate human agreement with our annotations to be 96.4% overall. As a threshold for inclusion in *X, the majority of validators needed to agree with *X on at least 4/5 examples from each paradigm. Thus, all 67 paradigms in the public version of *X passed this validation, and only two additional paradigms had to be rejected on this criterion. We also estimate individual human agreement to be 88.6% overall using the approximately 100 annotations from each paradigm. Figure TABREF14 reports these individual human results (alongside model results) as a conservative measure of human agreement. ### Models & Methods ::: Models ::: GPT-2 GPT-2 BIBREF3 is a large-scale language model using the Transformer architecture BIBREF12. We use the large version of GPT-2, which contains 24 layers and 345M parameters. The model is pretrained on BIBREF3's custom-built WebText dataset, which contains 40GB of text extracted from web pages and filtered by humans. To the best of our knowledge, the WebText corpus is not publicly available. Assuming approximately 5–6 bytes (characters) per word on average, we estimate that WebText contains approximately 8B tokens. The testing code for GPT-2 has been integrated into jiant, a codebase for training and evaluating sentence understanding models BIBREF34. ### Models & Methods ::: Models ::: Transformer-XL Transformer-XL BIBREF4 is another multi-layer Transformer-based neural language model. We test a pretrained Transformer-XL model with 18 layers of Transformer decoders and 16 attention heads per layer. The model is trained on WikiText-103 BIBREF35, a corpus of 103M tokens from high-quality Wikipedia articles.
Code for testing Transformer-XL on *X is also implemented in jiant. ### Models & Methods ::: Models ::: LSTM We include a long short-term memory (LSTM; BIBREF36) language model in our experiments. Specifically, we test a pretrained LSTM language model from BIBREF5 on *X. The model is trained on a 90M token corpus extracted from English Wikipedia. To investigate the effect of training size on the model's *X performance, we retrain a series of LSTM models with the same hyperparameters and the following training sizes: 64M, 32M, 16M, 8M, 4M, 2M, 1M, 1/2M, 1/4M, and 1/8M tokens. For each size, we train the model on five different random samples drawn from the original training data, which has a size of 83M tokens. We release our LSTM evaluation code. ### Models & Methods ::: Models ::: 5-gram We build a 5-gram LM on the English Gigaword corpus BIBREF37, which consists of 3.07B tokens. To efficiently query $n$-grams, we use an implementation based on BIBREF38, which is shown to speed up estimation BIBREF39. We release our $n$-gram evaluation code. ### Models & Methods ::: Evaluation We mainly evaluate the models by measuring whether the LM assigns a higher probability to the grammatical sentence within the minimal pair. This method, used by BIBREF1, is only meaningful for comparing sentences of similar length and lexical content, as overall sentence probability tends to decrease as sentence length increases or word frequencies decrease BIBREF27. However, as discussed in Section SECREF3, we design every paradigm in *X to be compatible with this method. ### Results We report the 12-category accuracy results for all models and human evaluation in Table TABREF14. ### Results ::: Overall Results An LM's overall performance on *X can be measured simply by taking the proportion of correct predictions across the 67,000 minimal pairs from all paradigms. GPT-2 achieves the highest score and the $n$-gram the lowest. Transformer-XL and the LSTM LM perform in the middle, and at roughly the same level as each other. All models perform well below estimated human agreement (as described in Section SECREF11). The $n$-gram model's poor overall performance confirms that *X is not solvable from co-occurrence information alone. Rather, success at *X is driven by the more abstract features learned by neural networks. There are no categories in which the $n$-gram approaches human performance. Because we evaluate pretrained models that differ in architecture and training data quantity/domain, we can only speculate about what drives these differences (though see Section SECREF37 for a controlled ablation study on the LSTM LM). Nonetheless, the results seem to indicate that access to training data is the main driver of performance on *X for the neural models we evaluate. On purely architectural grounds, the similar performance of Transformer-XL and the LSTM is surprising, since Transformer-XL is the state of the art on several LM training sets. However, they are both trained on 100$\pm 10$M tokens of Wikipedia text. Relatedly, GPT-2's advantage may come from the fact that it is trained on roughly two orders of magnitude more data. While it is unclear whether LSTMs trained on larger datasets could rival GPT-2, such experiments are impractical due to the difficulty of scaling LSTMs to this size.
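Concretely, the comparison behind these accuracy figures can be sketched with an off-the-shelf LM. The snippet below scores both sentences of a pair under GPT-2 using the HuggingFace transformers library and counts a pair as correct when the acceptable sentence receives the higher summed log-probability. This is a minimal illustration under that assumption about tooling, not the jiant-based evaluation code used in the paper.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def sentence_logprob(sentence: str) -> float:
    """Summed token log-probability of a sentence under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood over the len-1 predicted tokens.
    num_predicted = ids.size(1) - 1
    return -out.loss.item() * num_predicted


def pair_is_correct(sentence_good: str, sentence_bad: str) -> bool:
    return sentence_logprob(sentence_good) > sentence_logprob(sentence_bad)


pairs = [("The cats licked themselves.", "The cats licked itself.")]
accuracy = sum(pair_is_correct(good, bad) for good, bad in pairs) / len(pairs)
print(f"accuracy: {accuracy:.3f}")
```

Because the two sentences in a pair are matched for length and lexical content, comparing raw summed log-probabilities in this way is meaningful without any length normalization.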
### Results ::: Phenomenon-Specific Results The results also reveal considerable variation in performance across grammatical phenomena. Models generally perform best and closest to human level on morphological phenomena. This includes anaphor agreement, determiner-noun agreement, and subject-verb agreement. In each of these domains, GPT-2's performance is within 2.1 percentage points of humans. The set of challenging phenomena is more diverse. Islands are the hardest phenomenon by a wide margin. Only GPT-2 performs noticeably above chance, but it remains 20 points below humans. Some semantic phenomena, specifically those involving NPIs and quantifiers, are also challenging overall. All models show relatively weak performance on argument structure. From these results we conclude that current state-of-the-art LMs have robust knowledge of basic facts of English agreement. This does not mean that LMs will come close to human performance for all agreement phenomena. Section SECREF32 discusses evidence that increased dependency length and the presence of agreement attractors of the kind investigated by BIBREF0 and BIBREF5 reduce performance on agreement phenomena. The exceptionally poor performance on islands is hard to reconcile with BIBREF2's conclusion that LSTMs have knowledge of some island constraints. In part, this difference may come down to differences in metrics. BIBREF2 compare a set of four related sentences with gaps in the same position or no gaps to obtain the wh-licensing interaction as a metric of how strongly the LM identifies a filler-gap dependency in a single syntactic position. They consider an island constraint to have been learned if this value is close to zero. We instead compare LM probabilities of sentences with similar lexical content but with gaps in different syntactic positions. These metrics target different forms of grammatical knowledge, though both are desirable properties to find in an LM. We also note that the LMs we test do not have poor knowledge of filler-gap dependencies in general, with all neural models performing well above chance. This suggests that, while these models are able to establish long-distance dependencies in general, they are comparatively worse at identifying the syntactic domains in which these dependencies are blocked. The semantic phenomena that models struggle with are usually attributed in current theories to a presupposition failure or contradiction arising from semantic composition or pragmatic reasoning BIBREF40, BIBREF41, BIBREF42. These abstract semantic and pragmatic factors may be difficult for LMs to learn. BIBREF1 also find that LSTMs largely fail to recognize NPI licensing conditions. BIBREF20 find that BERT (which is similar in scale to GPT-2) recognizes these conditions inconsistently in an unsupervised setting. The weak performance on argument structure is somewhat surprising, since arguments are usually (though by no means always) local to their heads. Argument structure is closely related to semantic event structure BIBREF43, which may be comparatively difficult for LMs to learn. This finding contradicts BIBREF29's conclusion that argument structure is one of the strongest domains for neural models. However, BIBREF29 study supervised models trained on CoLA, which includes a large proportion of sentences related to argument structure. ### Results ::: Correlation of Model & Human Performance We also examine to what extent the models' performances are similar to each other, and how similar they are to human judgments of which phenomena are comparatively difficult. Figure TABREF29 shows the Pearson correlation between the four LMs and human evaluation, computed over their accuracies on the 67 paradigms.
GPT-2 correlates most strongly with humans, closely followed by Transformer-XL and the LSTM, though the correlation is only moderate. The $n$-gram's performance correlates with humans relatively weakly. Transformer-XL and the LSTM are very highly correlated at 0.9, possibly reflecting their similar training data. Also, the neural models correlate with each other more strongly than with humans or the $n$-gram model, suggesting that neural networks share some biases that are not entirely human-like. ### Results ::: Shallow Predictors of Performance We also ask what factors aside from linguistic phenomena make a minimal pair harder or easier for an LM to distinguish. We test whether shallow features like sentence length or overall sentence likelihood are predictors of whether the LM will have the right preference. The results are shown in Figure FIGREF31. While sentence length, perplexity, and the probability of the good sentence all seem to predict model performance to a certain extent, the predictive power is not strong, especially for GPT-2, which is much less influenced by greater perplexity of the good sentence than the other models. ### Additional Experiments ::: Long-Distance Dependencies The presence of intervening material that lengthens an agreement dependency lowers accuracy on that sentence in both humans and LMs. We study how the presence or absence of this intervening material affects the ability of LMs to detect mismatches in agreement in *X. First, we test for knowledge of determiner-noun agreement with and without an intervening adjective, as in Example SECREF32. The results are plotted in Figure FIGREF33. The $n$-gram model is the most heavily impacted, performing on average 35 points worse. This is unsurprising, since the bigram consisting of a determiner and noun is far more likely to be observed than the trigram of determiner, adjective, and noun. For the neural models, we find a weak but consistent effect, with all models performing on average between 3 and 5 points worse when there is an intervening adjective. Ron saw that man/*men. Ron saw that nice man/*men. Second, we test for sensitivity to mismatches in subject-verb agreement when an “attractor” noun of the opposite number intervenes. We compare attractors in relative clauses and as part of a relational noun, as in Example SECREF32, following experiments by BIBREF0 and others. Again, we find an extremely large effect for the $n$-gram model, which performs over 50 points worse and well below chance when there is an attractor present, showing that the $n$-gram model is consistently misled by the presence of the attractor. All of the neural models perform above chance with an attractor present, but GPT-2 and the LSTM perform 22 and 20 points worse, respectively, when an attractor is present. Transformer-XL's performance is harmed by only 5 points. Note that GPT-2 still has the highest performance in both cases, and even outperforms humans in the relational noun case. Thus, we reproduce BIBREF0's finding that attractors significantly reduce LSTM LMs' sensitivity to mismatches in agreement, and we find evidence that this holds true of Transformer LMs as well. The sisters bake/*bakes. The sisters who met Cheryl bake/*bakes. The sisters of Cheryl bake/*bakes.
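Both comparisons in this subsection come down to splitting per-pair accuracy by an experimental condition (attractor present vs. absent, or intervening adjective vs. none). A minimal aggregation sketch is shown below; the record layout, paradigm names, and field names are hypothetical rather than the released *X format.

```python
from collections import defaultdict

# Hypothetical per-pair results: one record per scored minimal pair, with the
# experimental condition and whether the LM preferred the acceptable sentence.
results = [
    {"paradigm": "subject_verb_agreement_relative_clause", "condition": "attractor", "correct": True},
    {"paradigm": "subject_verb_agreement_simple", "condition": "no_attractor", "correct": True},
    {"paradigm": "subject_verb_agreement_relational_noun", "condition": "attractor", "correct": False},
    # ... one record per pair
]


def accuracy_by_condition(records):
    totals, hits = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["condition"]] += 1
        hits[record["condition"]] += int(record["correct"])
    return {condition: hits[condition] / totals[condition] for condition in totals}


by_condition = accuracy_by_condition(results)
if {"attractor", "no_attractor"} <= by_condition.keys():
    drop = by_condition["no_attractor"] - by_condition["attractor"]
    print(f"accuracy drop with an attractor present: {100 * drop:.1f} points")
```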
### Additional Experiments ::: Regular vs. Irregular Agreement In the determiner-noun agreement and subject-verb agreement categories, we generate separate datasets for nouns with regular and irregular number marking, as in Example SECREF34. All else being equal, only models with access to sub-word-level information should make any distinction between regular and irregular morphology. Ron saw that nice kid/*kids. (regular) Ron saw that nice man/*men. (irregular) Contrary to this prediction, the results in Figure FIGREF36 reveal that the sub-word-level models GPT-2 and Transformer-XL show little effect of irregular morphology: they perform less than $0.013$ (1.3 percentage points) worse on irregulars than on regulars. Given their high performance overall, this suggests they robustly encode number features without relying on segmental cues. ### Additional Experiments ::: Training size and *X performance We also use *X to track how a model's knowledge of particular phenomena varies with the quantity of training data. We test this with the LSTM model and find that different phenomena in *X have notably different learning curves across different training sizes, as shown in Figure FIGREF39. Crucially, phenomena with similar results from the LSTM model trained on the full 83M tokens of training data may have very different learning curves. For example, the LSTM model performs well on both irregular forms and anaphor agreement, but the different learning curves suggest that more training data is required in the anaphor agreement case to achieve this same performance level. This is supported by a regression analysis showing that the best-fit line for anaphor agreement has the steepest slope (0.0623), followed by Determiner-Noun agreement (0.0426), Subject-Verb agreement (0.041), Irregular (0.039), and Ellipsis (0.0389). By contrast, Binding (0.016), Argument Structure (0.015), and Filler-Gap Dependency (0.0095) have shallower learning curves, showing a weaker effect of increases in training data size. The phenomena that showed the lowest performance overall, NPIs and Islands, also show the smallest effects of increases to training size, with slopes of 0.0078 and 0.0036, respectively. This indicates that, even given a substantially larger amount of training data, the LSTM is unlikely to achieve human-like performance on these phenomena – it simply fails to learn the necessary dependencies. It should be noted that these differences in learning curves show how *X performance dissociates from perplexity, the standard measure of LM performance: while perplexity keeps decreasing as training size increases, performance on different *X phenomena follows very different learning curves.
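The slope comparison above amounts to fitting a least-squares line to accuracy as a function of training size for each phenomenon. The sketch below does this with numpy.polyfit; regressing accuracy on the log2 of the number of training tokens is our assumption for illustration, and the accuracy values are invented placeholders rather than the paper's measurements.

```python
import numpy as np

# Training sizes used for the retrained LSTMs, in millions of tokens.
sizes_in_millions = np.array([0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 32, 64])

# Hypothetical mean accuracy per phenomenon at each size (e.g., averaged over
# the five random training samples drawn at that size).
accuracy_by_phenomenon = {
    "anaphor_agreement": np.array([0.52, 0.55, 0.58, 0.63, 0.68, 0.74, 0.79, 0.84, 0.88, 0.90]),
    "island_effects":    np.array([0.50, 0.50, 0.51, 0.51, 0.50, 0.52, 0.51, 0.52, 0.52, 0.53]),
}


def learning_curve_slope(accuracies: np.ndarray) -> float:
    """Slope of the best-fit line of accuracy against log2(training tokens)."""
    x = np.log2(sizes_in_millions * 1e6)
    slope, _intercept = np.polyfit(x, accuracies, deg=1)
    return slope


for phenomenon, accuracies in accuracy_by_phenomenon.items():
    print(f"{phenomenon}: slope = {learning_curve_slope(accuracies):.4f}")
```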
### Additional Experiments ::: Alternate Evaluation Methods There are several other techniques one can use to measure an LM's “preference” between two minimally different sentences. So far, we have considered only the full-sentence method, advocated for by BIBREF1, which compares the LM likelihood of the full sentences. In a follow-up experiment, we use two “prefix methods”, each of which has appeared in prior work in this area, that evaluate the model's preferences by comparing its prediction at a key point of divergence between the two sentences. Subsets of *X data—from the binding, determiner-noun agreement, and subject-verb agreement categories—are designed to be compatible with multiple methods, allowing us to conduct the first direct comparison. We find that all methods give broadly similar results when aggregating over a large set of paradigms, but some results diverge sharply for specific paradigms. ### Additional Experiments ::: Alternate Evaluation Methods ::: One-prefix method In the one-prefix method, used by BIBREF0, a pair of sentences share the same initial portion, but differ in a critical word that makes them differ in grammaticality (e.g., The cat eats mice vs. The cat eat mice). The model's prediction is correct if it assigns a higher probability to the grammatical token given the shared prefix. ### Additional Experiments ::: Alternate Evaluation Methods ::: Two-prefix method In the two-prefix method, used by BIBREF19, a pair of sentences have different initial portions that diverge in some critical way, but the grammaticality difference is only revealed when a shared critical word is included (e.g., The cat eats mice vs. The cats eats mice). For these paradigms, we evaluate whether the model assigns a higher probability to the critical word conditioned on the grammatical prefix compared to the ungrammatical prefix. Note that the same pair of sentences cannot be compatible with both prefix methods, and that a pair may be compatible with the full-sentence method but neither prefix method. For both prefix methods, it is crucial that the grammaticality of the sentence is unambiguously predictable from the critical word, but not sooner. With simple LM probabilities, the probabilities of the rest of the word tokens in the sentence also affect the performance. For example, a model may predict that `The cat ate the mouse' is more likely than `The cat eaten the mouse' without correctly predicting that $P(\emph {ate}|\emph {the cat}) > P(\emph {eaten}|\emph {the cat})$, if it predicts that $P(\emph {the mouse}|\emph {the cat ate})$ is much greater than $P(\emph {the mouse}|\emph {the cat eaten})$. Furthermore, it is unclear how a model assigns probabilities conditioned on an ungrammatical prefix, since ungrammatical sentences are largely absent from the training data. Using prefix probabilities allows us to exclude the models' use of this additional information and to evaluate how the models perform when they have just enough information to judge grammaticality. ### Additional Experiments ::: Alternate Evaluation Methods ::: Results The results in Figure FIGREF42 show that the models have generally comparable accuracies overall in the prefix methods and the simple whole-sentence LM method. However, a deeper examination of the differences between these methods in each paradigm reveals some cases where a model's performance fluctuates more between these methods. For example, Transformer-XL performs much worse at binding, determiner-noun agreement, and subject-verb agreement in the simple LM method, suggesting that the probabilities Transformer-XL assigns to the irrelevant part at the end of the sentence very often overturn the `judgment' based on probability up to the critical word. On the other hand, GPT-2 benefits from reading the whole sentence for binding phenomena, as its performance is better in the simple LM method than in the prefix method. Overall, we observe that Transformer-XL and GPT-2 are more affected by the choice of evaluation method than the LSTM and $n$-gram models when we compare the simple LM method and the two-prefix method.
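Both prefix methods reduce to comparing the probability of a critical word conditioned on a prefix. The sketch below implements the one-prefix comparison with GPT-2 via the HuggingFace transformers library; it is an illustration rather than the paper's evaluation code, and it handles critical words that span several sub-word tokens by summing the log-probabilities of their pieces.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def critical_word_logprob(prefix: str, word: str) -> float:
    """log P(word | prefix), summing over the word's sub-word tokens."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    word_ids = tokenizer(" " + word, return_tensors="pt").input_ids
    ids = torch.cat([prefix_ids, word_ids], dim=1)
    with torch.no_grad():
        log_probs = model(ids).logits.log_softmax(dim=-1)
    total = 0.0
    for i in range(word_ids.size(1)):
        # The logits at position p predict the token at position p + 1.
        position = prefix_ids.size(1) + i - 1
        token_id = ids[0, position + 1]
        total += log_probs[0, position, token_id].item()
    return total


# One-prefix method: shared prefix, different critical words.
prefix = "The cat"
grammatical, ungrammatical = "eats", "eat"
correct = critical_word_logprob(prefix, grammatical) > critical_word_logprob(prefix, ungrammatical)
print(f"prefers the grammatical continuation: {correct}")
```

The two-prefix case is the mirror image: the critical word is held fixed, and its conditional log-probability is compared across the grammatical and ungrammatical prefixes.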
### Discussion & Future Work We have shown ways in which *X can be used as a tool to gain both high-level and fine-grained insight into the grammatical knowledge of language models. Like the GLUE benchmark BIBREF13, *X assigns a single overall score to an LM which summarizes its general sensitivity to minimal pair contrasts. Thus, it can function as a linguistically motivated benchmark for the general evaluation of new language models. *X also provides a breakdown of LM performance by linguistic phenomenon, which can be used to draw concrete conclusions about the kinds of grammatical knowledge acquired by a given model. This kind of information is useful for detailed comparisons across models, as well as in ablation studies. One question we leave unexplored is how well supervised acceptability classifiers built on top of pretrained models like BERT BIBREF9 perform on *X. It would be possible to evaluate how well such classifiers generalize to unseen phenomena by training on a subset of paradigms in *X and evaluating on the held-out sets, giving an idea of the extent to which models are able to transfer knowledge from one domain to a similar one. BIBREF20 find that this method is potentially more revealing of implicit grammatical knowledge than purely unsupervised methods. An important goal of linguistically informed analysis of LMs is to better understand those empirical domains where current LMs appear to acquire some relevant knowledge, but still fall short of human performance. The results from *X suggest that—in addition to relatively well-studied phenomena like filler-gap dependencies, NPIs, and binding—argument structure remains one area where there is much to uncover about what LMs learn. More generally, as language modeling techniques continue to improve, it will be useful to have large-scale tools like *X to efficiently track changes in what these models do and do not know about grammar. ### Acknowledgments This material is based upon work supported by the National Science Foundation under Grant No. 1850208. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This project has also benefited from support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), by Samsung Research (under the project Improving Deep Learning using Latent Structure), by Intuit, Inc., and by NVIDIA Corporation (with the donation of a Titan V GPU).
The data generation scripts use a basic template to create each paradigm, pulling from a vocabulary of over 3000 words annotated for morphological, syntactic, and semantic features needed to create grammatical and semantically felicitous sentences.