**Question:**
How does Retief convince the captain to keep him on board?
A. The captain knows that the Soetti will be able to handle him later.
B. The captain and his men are too scared to confront him, so he leaves him be.
C. Retief remarks on the Uniform Code, and the captain doesn’t want to have legal issues.
D. He doesn’t have time to deal with Retief, so he leaves him be.
**Context:**
THE FROZEN PLANET By Keith Laumer [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, September 1961. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] "It is rather unusual," Magnan said, "to assign an officer of your rank to courier duty, but this is an unusual mission." Retief sat relaxed and said nothing. Just before the silence grew awkward, Magnan went on. "There are four planets in the group," he said. "Two double planets, all rather close to an unimportant star listed as DRI-G 33987. They're called Jorgensen's Worlds, and in themselves are of no importance whatever. However, they lie deep in the sector into which the Soetti have been penetrating. "Now—" Magnan leaned forward and lowered his voice—"we have learned that the Soetti plan a bold step forward. Since they've met no opposition so far in their infiltration of Terrestrial space, they intend to seize Jorgensen's Worlds by force." Magnan leaned back, waiting for Retief's reaction. Retief drew carefully on his cigar and looked at Magnan. Magnan frowned. "This is open aggression, Retief," he said, "in case I haven't made myself clear. Aggression on Terrestrial-occupied territory by an alien species. Obviously, we can't allow it." Magnan drew a large folder from his desk. "A show of resistance at this point is necessary. Unfortunately, Jorgensen's Worlds are technologically undeveloped areas. They're farmers or traders. Their industry is limited to a minor role in their economy—enough to support the merchant fleet, no more. The war potential, by conventional standards, is nil." Magnan tapped the folder before him. "I have here," he said solemnly, "information which will change that picture completely." He leaned back and blinked at Retief. "All right, Mr. Councillor," Retief said. "I'll play along; what's in the folder?" Magnan spread his fingers, folded one down. "First," he said. "The Soetti War Plan—in detail. 
We were fortunate enough to make contact with a defector from a party of renegade Terrestrials who've been advising the Soetti." He folded another finger. "Next, a battle plan for the Jorgensen's people, worked out by the Theory group." He wrestled a third finger down. "Lastly; an Utter Top Secret schematic for conversion of a standard anti-acceleration field into a potent weapon—a development our systems people have been holding in reserve for just such a situation." "Is that all?" Retief said. "You've still got two fingers sticking up." Magnan looked at the fingers and put them away. "This is no occasion for flippancy, Retief. In the wrong hands, this information could be catastrophic. You'll memorize it before you leave this building." "I'll carry it, sealed," Retief said. "That way nobody can sweat it out of me." Magnan started to shake his head. "Well," he said. "If it's trapped for destruction, I suppose—" "I've heard of these Jorgensen's Worlds," Retief said. "I remember an agent, a big blond fellow, very quick on the uptake. A wizard with cards and dice. Never played for money, though." "Umm," Magnan said. "Don't make the error of personalizing this situation, Retief. Overall policy calls for a defense of these backwater worlds. Otherwise the Corps would allow history to follow its natural course, as always." "When does this attack happen?" "Less than four weeks." "That doesn't leave me much time." "I have your itinerary here. Your accommodations are clear as far as Aldo Cerise. You'll have to rely on your ingenuity to get you the rest of the way." "That's a pretty rough trip, Mr. Councillor. Suppose I don't make it?" Magnan looked sour. "Someone at a policy-making level has chosen to put all our eggs in one basket, Retief. I hope their confidence in you is not misplaced." "This antiac conversion; how long does it take?" "A skilled electronics crew can do the job in a matter of minutes. 
The Jorgensens can handle it very nicely; every other man is a mechanic of some sort." Retief opened the envelope Magnan handed him and looked at the tickets inside. "Less than four hours to departure time," he said. "I'd better not start any long books." "You'd better waste no time getting over to Indoctrination," Magnan said. Retief stood up. "If I hurry, maybe I can catch the cartoon." "The allusion escapes me," Magnan said coldly. "And one last word. The Soetti are patrolling the trade lanes into Jorgensen's Worlds; don't get yourself interned." "I'll tell you what," Retief said soberly. "In a pinch, I'll mention your name." "You'll be traveling with Class X credentials," Magnan snapped. "There must be nothing to connect you with the Corps." "They'll never guess," Retief said. "I'll pose as a gentleman." "You'd better be getting started," Magnan said, shuffling papers. "You're right," Retief said. "If I work at it, I might manage a snootful by takeoff." He went to the door. "No objection to my checking out a needler, is there?" Magnan looked up. "I suppose not. What do you want with it?" "Just a feeling I've got." "Please yourself." "Some day," Retief said, "I may take you up on that."

II

Retief put down the heavy travel-battered suitcase and leaned on the counter, studying the schedules chalked on the board under the legend "ALDO CERISE—INTERPLANETARY." A thin clerk in a faded sequined blouse and a plastic snakeskin cummerbund groomed his fingernails, watching Retief from the corner of his eye. Retief glanced at him. The clerk nipped off a ragged corner with rabbitlike front teeth and spat it on the floor. "Was there something?" he said. "Two twenty-eight, due out today for the Jorgensen group," Retief said. "Is it on schedule?" The clerk sampled the inside of his right cheek, eyed Retief. "Filled up. Try again in a couple of weeks." "What time does it leave?" "I don't think—" "Let's stick to facts," Retief said. "Don't try to think. What time is it due out?"
The clerk smiled pityingly. "It's my lunch hour," he said. "I'll be open in an hour." He held up a thumb nail, frowned at it. "If I have to come around this counter," Retief said, "I'll feed that thumb to you the hard way." The clerk looked up and opened his mouth. Then he caught Retief's eye, closed his mouth and swallowed. "Like it says there," he said, jerking a thumb at the board. "Lifts in an hour. But you won't be on it," he added. Retief looked at him. "Some ... ah ... VIP's required accommodation," he said. He hooked a finger inside the sequined collar. "All tourist reservations were canceled. You'll have to try to get space on the Four-Planet Line ship next—" "Which gate?" Retief said. "For ... ah...?" "For the two twenty-eight for Jorgensen's Worlds," Retief said. "Well," the clerk said. "Gate 19," he added quickly. "But—" Retief picked up his suitcase and walked away toward the glare sign reading To Gates 16-30 . "Another smart alec," the clerk said behind him. Retief followed the signs, threaded his way through crowds, found a covered ramp with the number 228 posted over it. A heavy-shouldered man with a scarred jawline and small eyes was slouching there in a rumpled gray uniform. He put out a hand as Retief started past him. "Lessee your boarding pass," he muttered. Retief pulled a paper from an inside pocket, handed it over. The guard blinked at it. "Whassat?" "A gram confirming my space," Retief said. "Your boy on the counter says he's out to lunch." The guard crumpled the gram, dropped it on the floor and lounged back against the handrail. "On your way, bub," he said. Retief put his suitcase carefully on the floor, took a step and drove a right into the guard's midriff. He stepped aside as the man doubled and went to his knees. "You were wide open, ugly. I couldn't resist. Tell your boss I sneaked past while you were resting your eyes." He picked up his bag, stepped over the man and went up the gangway into the ship. 
A cabin boy in stained whites came along the corridor. "Which way to cabin fifty-seven, son?" Retief asked. "Up there." The boy jerked his head and hurried on. Retief made his way along the narrow hall, found signs, followed them to cabin fifty-seven. The door was open. Inside, baggage was piled in the center of the floor. It was expensive looking baggage. Retief put his bag down. He turned at a sound behind him. A tall, florid man with an expensive coat belted over a massive paunch stood in the open door, looking at Retief. Retief looked back. The florid man clamped his jaws together, turned to speak over his shoulder. "Somebody in the cabin. Get 'em out." He rolled a cold eye at Retief as he backed out of the room. A short, thick-necked man appeared. "What are you doing in Mr. Tony's room?" he barked. "Never mind! Clear out of here, fellow! You're keeping Mr. Tony waiting." "Too bad," Retief said. "Finders keepers." "You nuts?" The thick-necked man stared at Retief. "I said it's Mr. Tony's room." "I don't know Mr. Tony. He'll have to bull his way into other quarters." "We'll see about you, mister." The man turned and went out. Retief sat on the bunk and lit a cigar. There was a sound of voices in the corridor. Two burly baggage-smashers appeared, straining at an oversized trunk. They maneuvered it through the door, lowered it, glanced at Retief and went out. The thick-necked man returned. "All right, you. Out," he growled. "Or have I got to have you thrown out?" Retief rose and clamped the cigar between his teeth. He gripped a handle of the brass-bound trunk in each hand, bent his knees and heaved the trunk up to chest level, then raised it overhead. He turned to the door. "Catch," he said between clenched teeth. The trunk slammed against the far wall of the corridor and burst. Retief turned to the baggage on the floor, tossed it into the hall. The face of the thick-necked man appeared cautiously around the door jamb. 
"Mister, you must be—" "If you'll excuse me," Retief said, "I want to catch a nap." He flipped the door shut, pulled off his shoes and stretched out on the bed. Five minutes passed before the door rattled and burst open. Retief looked up. A gaunt leathery-skinned man wearing white ducks, a blue turtleneck sweater and a peaked cap tilted raffishly over one eye stared at Retief. "Is this the joker?" he grated. The thick-necked man edged past him, looked at Retief and snorted, "That's him, sure." "I'm captain of this vessel," the first man said. "You've got two minutes to haul your freight out of here, buster." "When you can spare the time from your other duties," Retief said, "take a look at Section Three, Paragraph One, of the Uniform Code. That spells out the law on confirmed space on vessels engaged in interplanetary commerce." "A space lawyer." The captain turned. "Throw him out, boys." Two big men edged into the cabin, looking at Retief. "Go on, pitch him out," the captain snapped. Retief put his cigar in an ashtray, and swung his feet off the bunk. "Don't try it," he said softly. One of the two wiped his nose on a sleeve, spat on his right palm, and stepped forward, then hesitated. "Hey," he said. "This the guy tossed the trunk off the wall?" "That's him," the thick-necked man called. "Spilled Mr. Tony's possessions right on the deck." "Deal me out," the bouncer said. "He can stay put as long as he wants to. I signed on to move cargo. Let's go, Moe." "You'd better be getting back to the bridge, Captain," Retief said. "We're due to lift in twenty minutes." The thick-necked man and the Captain both shouted at once. The Captain's voice prevailed. "—twenty minutes ... uniform Code ... gonna do?" "Close the door as you leave," Retief said. The thick-necked man paused at the door. "We'll see you when you come out."

III

Four waiters passed Retief's table without stopping. A fifth leaned against the wall nearby, a menu under his arm.
At a table across the room, the Captain, now wearing a dress uniform and with his thin red hair neatly parted, sat with a table of male passengers. He talked loudly and laughed frequently, casting occasional glances Retief's way. A panel opened in the wall behind Retief's chair. Bright blue eyes peered out from under a white chef's cap. "Givin' you the cold shoulder, heh, Mister?" "Looks like it, old-timer," Retief said. "Maybe I'd better go join the skipper. His party seems to be having all the fun." "Feller has to be mighty careless who he eats with to set over there." "I see your point." "You set right where you're at, Mister. I'll rustle you up a plate." Five minutes later, Retief cut into a thirty-two ounce Delmonico backed up with mushrooms and garlic butter. "I'm Chip," the chef said. "I don't like the Cap'n. You can tell him I said so. Don't like his friends, either. Don't like them dern Sweaties, look at a man like he was a worm." "You've got the right idea on frying a steak, Chip. And you've got the right idea on the Soetti, too," Retief said. He poured red wine into a glass. "Here's to you." "Dern right," Chip said. "Dunno who ever thought up broiling 'em. Steaks, that is. I got a Baked Alaska coming up in here for dessert. You like brandy in yer coffee?" "Chip, you're a genius." "Like to see a feller eat," Chip said. "I gotta go now. If you need anything, holler." Retief ate slowly. Time always dragged on shipboard. Four days to Jorgensen's Worlds. Then, if Magnan's information was correct, there would be four days to prepare for the Soetti attack. It was a temptation to scan the tapes built into the handle of his suitcase. It would be good to know what Jorgensen's Worlds would be up against. Retief finished the steak, and the chef passed out the baked Alaska and coffee. Most of the other passengers had left the dining room. Mr. Tony and his retainers still sat at the Captain's table. 
As Retief watched, four men arose from the table and sauntered across the room. The first in line, a stony-faced thug with a broken ear, took a cigar from his mouth as he reached the table. He dipped the lighted end in Retief's coffee, looked at it, and dropped it on the tablecloth. The others came up, Mr. Tony trailing. "You must want to get to Jorgensen's pretty bad," the thug said in a grating voice. "What's your game, hick?" Retief looked at the coffee cup, picked it up. "I don't think I want my coffee," he said. He looked at the thug. "You drink it." The thug squinted at Retief. "A wise hick," he began. With a flick of the wrist, Retief tossed the coffee into the thug's face, then stood and slammed a straight right to the chin. The thug went down. Retief looked at Mr. Tony, still standing open-mouthed. "You can take your playmates away now, Tony," he said. "And don't bother to come around yourself. You're not funny enough." Mr. Tony found his voice. "Take him, Marbles!" he growled. The thick-necked man slipped a hand inside his tunic and brought out a long-bladed knife. He licked his lips and moved in. Retief heard the panel open beside him. "Here you go, Mister," Chip said. Retief darted a glance; a well-honed french knife lay on the sill. "Thanks, Chip," Retief said. "I won't need it for these punks." Thick-neck lunged and Retief hit him square in the face, knocking him under the table. The other man stepped back, fumbling a power pistol from his shoulder holster. "Aim that at me, and I'll kill you," Retief said. "Go on, burn him!" Mr. Tony shouted. Behind him, the captain appeared, white-faced. "Put that away, you!" he yelled. "What kind of—" "Shut up," Mr. Tony said. "Put it away, Hoany. We'll fix this bum later." "Not on this vessel, you won't," the captain said shakily. "I got my charter to consider." "Ram your charter," Hoany said harshly. "You won't be needing it long." "Button your floppy mouth, damn you!" Mr. Tony snapped. 
He looked at the man on the floor. "Get Marbles out of here. I ought to dump the slob." He turned and walked away. The captain signaled and two waiters came up. Retief watched as they carted the casualty from the dining room. The panel opened. "I usta be about your size, when I was your age," Chip said. "You handled them pansies right. I wouldn't give 'em the time o' day." "How about a fresh cup of coffee, Chip?" Retief said. "Sure, Mister. Anything else?" "I'll think of something," Retief said. "This is shaping up into one of those long days." "They don't like me bringing yer meals to you in yer cabin," Chip said. "But the cap'n knows I'm the best cook in the Merchant Service. They won't mess with me." "What has Mr. Tony got on the captain, Chip?" Retief asked. "They're in some kind o' crooked business together. You want some more smoked turkey?" "Sure. What have they got against my going to Jorgensen's Worlds?" "Dunno. Hasn't been no tourists got in there fer six or eight months. I sure like a feller that can put it away. I was a big eater when I was yer age." "I'll bet you can still handle it, Old Timer. What are Jorgensen's Worlds like?" "One of 'em's cold as hell and three of 'em's colder. Most o' the Jorgies live on Svea; that's the least froze up. Man don't enjoy eatin' his own cookin' like he does somebody else's." "That's where I'm lucky, Chip. What kind of cargo's the captain got aboard for Jorgensen's?" "Derned if I know. In and out o' there like a grasshopper, ever few weeks. Don't never pick up no cargo. No tourists any more, like I says. Don't know what we even run in there for." "Where are the passengers we have aboard headed?" "To Alabaster. That's nine days' run in-sector from Jorgensen's. You ain't got another one of them cigars, have you?" "Have one, Chip. I guess I was lucky to get space on this ship." "Plenty o' space, Mister. We got a dozen empty cabins." Chip puffed the cigar alight, then cleared away the dishes, poured out coffee and brandy. 
"Them Sweaties is what I don't like," he said. Retief looked at him questioningly. "You never seen a Sweaty? Ugly lookin' devils. Skinny legs, like a lobster; big chest, shaped like the top of a turnip; rubbery lookin' head. You can see the pulse beatin' when they get riled." "I've never had the pleasure," Retief said. "You prob'ly have it perty soon. Them devils board us nigh ever trip out. Act like they was the Customs Patrol or somethin'." There was a distant clang, and a faint tremor ran through the floor. "I ain't superstitious ner nothin'," Chip said. "But I'll be triple-damned if that ain't them boarding us now." Ten minutes passed before bootsteps sounded outside the door, accompanied by a clicking patter. The doorknob rattled, then a heavy knock shook the door. "They got to look you over," Chip whispered. "Nosy damn Sweaties." "Unlock it, Chip." The chef opened the door. "Come in, damn you," he said. A tall and grotesque creature minced into the room, tiny hoof-like feet tapping on the floor. A flaring metal helmet shaded the deep-set compound eyes, and a loose mantle flapped around the knobbed knees. Behind the alien, the captain hovered nervously. "Yo' papiss," the alien rasped. "Who's your friend, Captain?" Retief said. "Never mind; just do like he tells you." "Yo' papiss," the alien said again. "Okay," Retief said. "I've seen it. You can take it away now." "Don't horse around," the captain said. "This fellow can get mean." The alien brought two tiny arms out from the concealment of the mantle, clicked toothed pincers under Retief's nose. "Quick, soft one." "Captain, tell your friend to keep its distance. It looks brittle, and I'm tempted to test it." "Don't start anything with Skaw; he can clip through steel with those snappers." "Last chance," Retief said. Skaw stood poised, open pincers an inch from Retief's eyes. "Show him your papers, you damned fool," the captain said hoarsely. "I got no control over Skaw." 
The alien clicked both pincers with a sharp report, and in the same instant Retief half-turned to the left, leaned away from the alien and drove his right foot against the slender leg above the bulbous knee-joint. Skaw screeched and floundered, greenish fluid spattering from the burst joint. "I told you he was brittle," Retief said. "Next time you invite pirates aboard, don't bother to call." "Jesus, what did you do! They'll kill us!" the captain gasped, staring at the figure flopping on the floor. "Cart poor old Skaw back to his boat," Retief said. "Tell him to pass the word. No more illegal entry and search of Terrestrial vessels in Terrestrial space." "Hey," Chip said. "He's quit kicking." The captain bent over Skaw, gingerly rolled him over. He leaned close and sniffed. "He's dead." The captain stared at Retief. "We're all dead men," he said. "These Soetti got no mercy." "They won't need it. Tell 'em to sheer off; their fun is over." "They got no more emotions than a blue crab—" "You bluff easily, Captain. Show a few guns as you hand the body back. We know their secret now." "What secret? I—" "Don't be no dumber than you got to, Cap'n," Chip said. "Sweaties die easy; that's the secret." "Maybe you got a point," the captain said, looking at Retief. "All they got's a three-man scout. It could work." He went out, came back with two crewmen. They hauled the dead alien gingerly into the hall. "Maybe I can run a bluff on the Soetti," the captain said, looking back from the door. "But I'll be back to see you later." "You don't scare us, Cap'n," Chip said. "Him and Mr. Tony and all his goons. You hit 'em where they live, that time. They're pals o' these Sweaties. Runnin' some kind o' crooked racket." "You'd better take the captain's advice, Chip. There's no point in your getting involved in my problems." "They'd of killed you before now, Mister, if they had any guts. That's where we got it over these monkeys. They got no guts." "They act scared, Chip. 
Scared men are killers." "They don't scare me none." Chip picked up the tray. "I'll scout around a little and see what's goin' on. If the Sweaties figure to do anything about that Skaw feller they'll have to move fast; they won't try nothin' close to port." "Don't worry, Chip. I have reason to be pretty sure they won't do anything to attract a lot of attention in this sector just now." Chip looked at Retief. "You ain't no tourist, Mister. I know that much. You didn't come out here for fun, did you?" "That," Retief said, "would be a hard one to answer."

IV

Retief awoke at a tap on his door. "It's me, Mister. Chip." "Come on in." The chef entered the room, locking the door. "You shoulda had that door locked." He stood by the door, listening, then turned to Retief. "You want to get to Jorgensen's perty bad, don't you, Mister?" "That's right, Chip." "Mr. Tony give the captain a real hard time about old Skaw. The Sweaties didn't say nothin'. Didn't even act surprised, just took the remains and pushed off. But Mr. Tony and that other crook they call Marbles, they was fit to be tied. Took the cap'n in his cabin and talked loud at him fer half a hour. Then the cap'n come out and give some orders to the Mate." Retief sat up and reached for a cigar. "Mr. Tony and Skaw were pals, eh?" "He hated Skaw's guts. But with him it was business. Mister, you got a gun?" "A 2mm needler. Why?" "The orders cap'n give was to change course fer Alabaster. We're by-passin' Jorgensen's Worlds. We'll feel the course change any minute." Retief lit the cigar, reached under the mattress and took out a short-barreled pistol. He dropped it in his pocket, looked at Chip. "Maybe it was a good thought, at that. Which way to the Captain's cabin?" "This is it," Chip said softly. "You want me to keep an eye on who comes down the passage?" Retief nodded, opened the door and stepped into the cabin. The captain looked up from his desk, then jumped up. "What do you think you're doing, busting in here?"
"I hear you're planning a course change, Captain." "You've got damn big ears." "I think we'd better call in at Jorgensen's." "You do, huh?" the captain sat down. "I'm in command of this vessel," he said. "I'm changing course for Alabaster." "I wouldn't find it convenient to go to Alabaster," Retief said. "So just hold your course for Jorgensen's." "Not bloody likely." "Your use of the word 'bloody' is interesting, Captain. Don't try to change course." The captain reached for the mike on his desk, pressed the key. "Power Section, this is the captain," he said. Retief reached across the desk, gripped the captain's wrist. "Tell the mate to hold his present course," he said softly. "Let go my hand, buster," the captain snarled. Eyes on Retief's, he eased a drawer open with his left hand, reached in. Retief kneed the drawer. The captain yelped and dropped the mike. "You busted it, you—" "And one to go," Retief said. "Tell him." "I'm an officer of the Merchant Service!" "You're a cheapjack who's sold his bridge to a pack of back-alley hoods." "You can't put it over, hick." "Tell him." The captain groaned and picked up the mike. "Captain to Power Section," he said. "Hold your present course until you hear from me." He dropped the mike and looked up at Retief. "It's eighteen hours yet before we pick up Jorgensen Control. You going to sit here and bend my arm the whole time?" Retief released the captain's wrist and turned to the door. "Chip, I'm locking the door. You circulate around, let me know what's going on. Bring me a pot of coffee every so often. I'm sitting up with a sick friend." "Right, Mister. Keep an eye on that jasper; he's slippery." "What are you going to do?" the captain demanded. Retief settled himself in a chair. "Instead of strangling you, as you deserve," he said, "I'm going to stay here and help you hold your course for Jorgensen's Worlds." The captain looked at Retief. He laughed, a short bark. "Then I'll just stretch out and have a little nap, farmer. 
If you feel like dozing off sometime during the next eighteen hours, don't mind me." Retief took out the needler and put it on the desk before him. "If anything happens that I don't like," he said, "I'll wake you up. With this."
**Answer:**
B. The captain and his men are too scared to confront him, so he leaves him be.
---

**Question:**
Mrs. Sample underwent multiple imaging examinations. Which body part could not be assessed in these examinations?
Choose the correct answer from the following options:
A. Liver
B. Lungs
C. Thyroid
D. Uterus
E. Feet
**Context:**
### Patient Report 0
**Dear colleague,**
We wish to provide an update regarding Mrs. Anna Sample, born on
01.01.1970. She was admitted to our clinic from 01/01/2017 to
01/02/2017.
**Diagnosis:** Diffuse large B-cell lymphoma of germinal center type; ID
01/2017
- Ann-Arbor: Stage IV
- R-IPI: 2 (LDH, stage)
- CNS-IPI: 2
- Histology: Aggressive B-NHL (DLBCL, NOS); no evidence of t(14;18)
translocation. Ki-67 at 40%. Positive reaction to MUM1, numerous
CD68-positive macrophages. Negative reaction to ALK1 and TdT.
- cMRI: Chronic inflammatory lesions suggestive of Multiple Sclerosis (MS)
- CSF: no evidence of malignancy
- Bone marrow aspiration: no infiltration from the pre-existing
lymphoma.
**Current treatment:**
Initiated R-Pola-CHP regimen q21
- Polatuzumab vedotin: 1.8mg/kg on Day 1.
- Rituximab: 375mg/m² on Day 0.
- Cyclophosphamide: 750mg/m² on Day 1.
- Doxorubicin: 50mg/m² on Day 1.
- Prednisone: 100mg orally from Day 1-5.
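For orientation only, the per-BSA and per-weight doses above can be converted into absolute amounts as a minimal sketch. The body surface area of 1.8 m² and weight of 70 kg used below are assumed values for illustration; neither appears anywhere in this letter.

```python
# Hypothetical worked example: absolute doses for the R-Pola-CHP regimen.
# BSA (1.8 m²) and weight (70 kg) are ASSUMED, not taken from the report.

def absolute_dose(dose: float, basis: str,
                  bsa_m2: float = 1.8, weight_kg: float = 70.0) -> float:
    """Return the absolute dose in mg for a per-m², per-kg, or flat dose."""
    if basis == "per_m2":
        return dose * bsa_m2      # e.g. rituximab 375 mg/m²
    if basis == "per_kg":
        return dose * weight_kg   # e.g. polatuzumab vedotin 1.8 mg/kg
    return dose                   # flat dose, e.g. oral prednisone

regimen = {
    "Polatuzumab vedotin": (1.8, "per_kg"),
    "Rituximab":           (375, "per_m2"),
    "Cyclophosphamide":    (750, "per_m2"),
    "Doxorubicin":         (50,  "per_m2"),
    "Prednisone":          (100, "flat"),
}

for drug, (dose, basis) in regimen.items():
    print(f"{drug}: {absolute_dose(dose, basis):.0f} mg")
```

With the assumed BSA of 1.8 m², rituximab at 375 mg/m² works out to 675 mg absolute; actual dosing always uses the patient's measured values.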
**Previous therapy and course**
From 12/01/2016: Discomfort in the dorsal calf and thoracic spine,
weakness in the arms with limited ability to lift and grasp, occasional
dizziness.
12/19/2016 cMRI: chronic inflammatory marks indicative of MS.
12/20/2016 MRI: thoracic/lumbar spinal cord: Indication of a
metastatic mass starting from the left pedicle T1 with a significant
extraosseous tumor element and complete spinal canal narrowing at the
level of T10-L1 with pressure on the spinal cord and growth into the
neuroforamina T11/T12 on the right and T12/L1 on the left. Further
lesions suggestive of metastasis are at L2, L3, and L4, again with an
extraosseous tumor element and invasion of the left pedicle.
12/21/2016 Fixed dorsal support T8-9 to L3-4. Decompression via
laminectomy T10 and partial laminectomy lumbar vertebra 3.
12/24/2016 CT chest/abdomen/pelvis: Enlarged left axillary lymph node.
In the ventral left upper lobe, a round, poorly defined, cloudy
deposit, most likely of inflammatory origin; follow-up in 5-7 weeks.
Nodule-like deposit in the upper inner quadrant of the right breast;
senological examination suggested.
**Pathology**: Aggressive B-NHL (DLBCL, NOS); no evidence of t(14;18)
translocation. Ki-67 staining was at 40%. Positive reaction to MUM1.
Numerous CD68-positive macrophages. No reaction to ALK1 and TdT.
**Other diagnoses**
- Primary progressive type of multiple sclerosis (ID 03/02)
- Mood disorder.
- 2-vessel CHD
**Medical History**
Mrs. Sample was transferred inpatient from DC for the initiation of
chemotherapy (R-Pola-CHP) for her DLBCL. In the context of her
pre-existing MS, she presented on 12/19/2016 with acute pains and
restricted mobility in her upper limbs. After her admission to HK
Flowermoon, an MRI was performed which revealed a thoracic neoplastic
growth especially at the level of T10-L1, but also affecting lumbar
vertebra 3, L4 and L6. Surgical intervention on 12/21/2016 at DC
resulted in symptom relief. Presently, her complaints are restricted to
post-operative spine discomfort, shoulder hypoesthesia, and intermittent
hand numbness. She reported a weight loss of 5 kg during her
hospitalization. She denied having respiratory symptoms, infections,
systemic symptoms, or gastrointestinal complaints. Mrs. Sample currently
has a urinary catheter in place.
**Physical examination on admission**
General: The patient has a satisfactory nutritional status, normal
weight, and is dependent on a walker. Her functional status is evaluated
as ECOG 2. Cardiovascular: Regular heart rhythm at a normal rate. Heart
sounds are clear with no detected murmurs. Respiratory: Normal alveolar
breath sounds. No wheezing, stridor, or other abnormal sounds.
Abdominal: The abdomen is soft, non-tender, and non-distended with
normal bowel sounds in all quadrants. There is no palpable enlargement
of the liver or spleen, and the kidneys are not palpable.
Musculoskeletal: Tenderness noted in the cervical and thoracic spine
area, but no other remarkable findings. This is consistent with her
post-operative status. Lymphatic: No enlargement detected in the
temporal, occipital, cubital, or popliteal lymph nodes. Oral: The oral
mucosa is moist and well-perfused. The oropharynx is unremarkable, and
the tongue appears normal. Peripheral Vascular: Pulses in the hands are
strong and regular. No edema observed. Neurological: Cranial nerves are
intact. There is numbness in both hands and mild hypoesthesia in the
shoulders. Motor strength is 3/5 in the right arm, attributed to her
known MS diagnosis. No other motor or sensory deficits noted.
Occasional bladder incontinence and intermittent gastrointestinal
disturbances are reported.
**Medications on admission**
- Acetylsalicylic acid (Aspirin®) 100 mg: 1 tablet in the morning.
- Atorvastatin (Lipitor®) 40 mg: 1 tablet in the evening.
- Fingolimod (Gilenya®) 0.5 mg: 1 capsule in the evening.
- Sertraline (Zoloft®) 50 mg: 2 tablets in the morning.
- Hydromorphone (Dilaudid®, or Exalgo® for extended-release) 2 mg: 1 capsule in the morning and 1 in the evening.
- Lorazepam (Ativan®) 1 mg: 1 tablet as needed.
**Radiology/Nuclear Medicine**
**MR Head 3D unenhanced + contrast from 12/19/2016 10:30 AM**
**Technique:** Sequences obtained include 3D FLAIR, 3D DIR, 3D T2, SWI,
DTI/DWI, plain MPRAGE, and post-contrast MPRAGE. All images are of good
quality. Imaging area: Brain.
There are 20 FLAIR hyperintense lesions in the brain parenchyma,
specifically located periventricularly and in the cortical/juxtacortical
regions (right and left frontal, left temporal, and right and left
insular). No contrast-enhancing lesions are identified. There are also
subcortical/nonspecific lesions present, with some lesions appearing
confluent. The spinal cord is visualized up to the C4 level. No spinal
lesions are noted.
[Incidental findings:]{.underline}
- Brain volume assessment: no indication of reduced brain volume.
- CSF spaces: age-appropriate width, moderate and symmetric, with no
  signs of CSF flow abnormalities.
- Cortical-Subcortical Differentiation: Clear cortical-subcortical
distinction.
- PML-characteristic alterations: none detected.
- Eye socket: appears normal.
- Nasal cavities: Symmetric mucosal thickening with a focus on the
right ethmoidal sinus.
- Pituitary and peri-auricular region: no abnormalities.
- Subcutaneous lesion measuring 14.4 x 21.3 mm, right parietal, likely
  representing an inflamed cyst or abscess; the differential includes a
  soft-tissue neoplasm.
[Evaluation]{.underline}
Dissemination: MRI criteria for dissemination in space are met. MRI
criteria for dissemination in time are not met. Comprehensive
neurological review: The findings are consistent with a chronic
inflammatory CNS disease in keeping with Multiple Sclerosis.
**MR Spine plain + post-contrast from 12/20/2016 10:00 AM**
**Technique:** GE 3T MRI Scanner
MRI was conducted under anesthesia due to claustrophobia.
**Sequences**: Holospinal T2 Dixon sagittal, T1 pre-contrast, T1 fs
post-contrast. The spine is visualized from the craniocervical junction
to S2.
**Thoracic spine: **
On T2-STIR and T2, there is hyperintense signal in vertebral bodies T5
and T6 with irregular delineation of the vertebral endplates,
indicative of age-related changes. The disc spaces T4/5 and T5/6 are
reduced in height, with subligamentous disc protrusion indenting the
spinal cord at this level. Myelon atrophy is noted at T5/6, along with
a T2-hyperintense lesion suggestive of MS at the level of T3 and also
at T4/5. A large intraspinal mass extends from T10 to L1, indenting the
spinal cord anteriorly and causing complete spinal canal stenosis at
this level. On fat-only imaging, there is near-total replacement of the
marrow space of vertebral body T11, with extraosseous tumor extension
and infiltration into the posterior elements (left more than right) and
into the T11 neural foramina bilaterally. There is mild disc herniation
at T8/9 with slight indentation of the spinal cord.
MS-characteristic spinal cord lesions are noted at segments T5 and T8/9.
**Lumbar spine: **
T2-DIXON shows bright signal intensity of the anterior part of lumbar
vertebra 1 and a patchy appearance of lumbar vertebrae 2 and 4. On
fat-only imaging, almost the entire marrow space is replaced. There is
an extraosseous tumor mass posterior to lumbar vertebra 4 without
significant spinal canal stenosis, involving the left posterior
elements, and crowding of the cauda equina at lumbar vertebra 1. The
results were communicated by telephone at 11:15 a.m. to the on-duty
orthopedic surgeon and to colleagues in neurology.
[Evaluation]{.underline}
Evidence of a metastatic lesion originating from the left pedicle of
T10 with a significant extramedullary tumor mass and complete spinal
canal stenosis at the level of T10, with compression of the spinal cord
and extension into the neural foramina T11-T12 on the right and T12-L1
on the left. Additional sites suggestive of metastasis include L2, L3,
and L4, again with extramedullary tumor components and invasion of the
left posterior elements. Contrast enhancement of the distal cord is
noted. There are MS-characteristic spinal cord lesions at the levels of
T3, T4-5, T5, and T8-9. The conus medullaris is not clearly visualized
due to spinal cord displacement.
**CT Thoracic Spine from 01/03/2017**
[Clinical Findings]{.underline}
Alignment is stable in both planes. No sign of vertebral column damage.
Multisegmental degenerative changes of the spine. No indication of
mineralization in the known mass at the level of T10/L2. Involvement of
T10 and L4 by mixed osteolytic-osteoblastic defects extending from the
left pedicle into the vertebral body. Additional cortical irregularity
with increased sclerosis at the endplate of L2. Comparison with the
prior MRI is indicative of a further mixed defect. Defect cavity at the
endplate of lumbar vertebra 2.
Minor pericardial effusion with adjacent hypoventilated lung.
Endotracheal tube in place. Mildly widened cardiomediastinum.
Splenomegaly. Normal appearance of the parenchymal organs of the
mid-abdomen, as far as assessable on native imaging. Normal spleen.
Slender adrenal glands. Mildly prominent renal pelves and proximal
ureters bilaterally, e.g., at the onset of the excretion phase after
gadolinium administration during the earlier MRI. No bowel obstruction.
Intestinal stasis. No sign of abnormally enlarged lymph nodes. Residual
pin tracks in the femoral heads bilaterally.
[Evaluation]{.underline}
Mixed osteolytic-osteoblastic defects extending from the left pedicle
in T10 and lumbar vertebra 4, and at the endplate of lumbar vertebra 2.
**CT Thoracic Spine from 01/04/2017 **
Intraoperative CT imaging for enhanced guidance.
Two intraoperative CT scans were obtained in total.
On the concluding CT scan, newly implanted pedicle screws from T8-9 to
L2-L3 bridging the tumor level T10. Regular screw placement. No evident
sign of material failure.
Apart from this, no notable change in findings compared with the CT of
01/03/2017.
[Evaluation]{.underline}
Intraoperative CT imaging for improved guidance. Newly inserted pedicle
screws T8/T9 and L2/L3 for tumor involvement at T10; final standard
transpedicular screw positioning.
**CT Chest/Abdomen/Pelvis + Contrast from 01/09/2017**
Results: After uneventful intravenous administration of Omnipaque 320, a
multi-slice helical CT of the chest, abdomen, and pelvis was performed
during the venous phase of contrast enhancement. Additional oral
contrast was given using Gastrografin (diluted 1:35). Thin slice
reconstructions were obtained, along with secondary coronal and sagittal
reconstructions.
[Thorax]{.underline}:
Uniform presentation of the included apical thoracic sections. No
evidence of subclavian lymphadenopathy. Uniform visualization of the
pectoral soft tissues. No evidence of mediastinal lymphadenopathy. The
anterior segment of the left upper lobe (series 205, image 88 of 389)
shows a subpleural, partly solid consolidation with ground-glass
opacity. There is an enlarged lymph node in the left hilar region
measuring approximately 1.2 cm. Otherwise, there are no suspicious
intrapulmonary findings, no new inflammatory infiltrates, no
pneumothorax, and no pericardial effusion. In the upper inner quadrant
of the right breast there is an oval mass (measuring 1.2 cm), DD
cystadenoma, DD glandular cluster.
[Abdomen/pelvis: ]{.underline}
Prominent appearance of the gastrocolic junction; the absence of oral
contrast in this region prevents more detailed analysis. Uniform
hepatic parenchyma with no focal lesions of differing density. The
portal and hepatic veins are well opacified. Small accessory liver
lobe. Slender adrenal glands bilaterally. Unremarkable kidneys
bilaterally. Urinary bladder with transurethral catheter in place and
intravesical gas inclusions. Gallbladder unremarkable. Multiple
prominent lymph nodes paravertebrally, prevertebrally, and along the
superior hepatic artery, up to a maximum of 8 mm. Unremarkable
pelvic organs.
[Skeleton: ]{.underline}
Status post dorsal instrumentation (T8-T9 to L2-L3). Postoperative
subcutaneous air inclusions and soft-tissue swelling along the access
route. Evidence of cement-like material in a prevertebral vessel
anterior to T8 and T9. Known mixed osteoblastic/osteolytic bony
metastasis of lumbar vertebra 4 and the superior endplate of lumbar
vertebra 2. Status post resection of the T10 pedicle. L5 also shows
several partly sclerotic osteolytic defects.
[Evaluation:]{.underline}
- No definite sign of malignancy in the previously described mixed
  osteoblastic/osteolytic vertebral lesions; in conjunction with the MR
  examination of 12/20/2016, however, they are to be regarded as
  suspicious.
- An enlarged lymph node is present in the left hilar region. In the
  anterior left upper lobe, evidence of a solid, cloudy consolidation,
  possibly of inflammatory origin; follow-up in 5-7 weeks is
  recommended.
- Rounded consolidation in the upper inner quadrant of the right
  breast; further breast examination is advised.
**Functional Diagnostics**
Extended Respiratory Function (Diffusion) from 01/15/2017
[Evaluation]{.underline}
Patient cooperation: satisfactory. No detectable obstructive
ventilation disorder. No pulmonary hyperinflation based on RV/TLC. No
restrictive ventilation impairment identified. Normal O2 diffusion
capacity. No evidence of hypoxemia, no obstruction.
[Consultations / Therapy Reports]{.underline}
Psychological Support Consultation from 01/22/2017
[Current Situation/History:]{.underline}
The patient initially reported \"night episodes\" in the calves, which
over time also occurred during the day and were coupled with discomfort
in the cervical region. She had previously visited the Riverside
Medical Center multiple times before an MRI was performed. A \"mass in
the neck\" was identified. Since she suffers from fear of heights and
fear of crowds, the MRI could only be done under mild sedation. The
phobias emerged abruptly in 2011 with no apparent cause, leading to
multiple hospital visits. She is now in outpatient care. Additionally,
she has MS, with the most recent flare-up in 2012. She declined a
procedure that had been scheduled for the same day as the MRI, because
\"two sedatives in one day felt excessive.\" She hopes to avoid a
repeat procedure. Currently, however, she still experiences spasms in
her right hand and a numbing sensation in her fingers. She still
reports discomfort (NRS 5/10). She was previously informed that relief
might be gradual, but she has \"always been restless.\" Therefore,
\"resting and inactivity\" negatively impact her mood and sleep.
[Medical background:]{.underline}
Several in-patient and day clinic admissions since 2011.
Now, from 2015, continuous outpatient psychological counseling (CBT),
somatic therapy, particular sessions for driving anxieties. Also
undergoing outpatient psychiatric care (fluoxetine 90mg).
[Psychopathological Observations:]{.underline}
Patient appears well-groomed, responsive and clear-minded, talkative and
forthright. Aware of location, date, and identity. Adequate focus,
recall, and concentration. Mental organization is orderly. No evidence
of delusional beliefs or identity disturbances. No compulsions, mentions
fear of expansive spaces and fear of water. Emotional responsiveness
intact, heightened psychomotor activity. Mood swings between despondent
and irritable, lowered motivation. Diminished appetite, issues with
sleep initiation and maintenance. Firm and believable denial of
immediate suicidal thoughts, patient appears cooperative. No current
signs of self-harm or threat to others.
[Handling the Condition, Strengths:]{.underline}
Currently, her coping strategy seems to be proactive with some restless
elements. Ms. S. says she remains \"optimistic\" and is well-backed by
her communal links. Notably, she shares a close bond with her
80-year-old aunt. Her other social bonds primarily arise from her
association with a hockey enthusiasts club. Hockey has been a crucial
support for her from a young age.
[Evaluation Diagnoses:]{.underline}
Adjustment disorder with mixed anxiety and depressive reaction
Agoraphobia
Acrophobia
[Interventions, approaches:]{.underline}
An evaluative and supportive discussion was conducted. The patient has
a dependable therapeutic network for the period after hospitalization.
Additionally, she was given the contact details of the psychological
support outpatient center. She found the therapeutic conversation
comforting, and a follow-up contact was arranged for the following
week. We also suggest guidance toward self-initiated physical activity
to aid her recovery and temper restlessness.
**NC: Consultation of 01/15/2017**
[Examination findings:]{.underline}
Patient alert, fully oriented. Articulate and spontaneous speech.
Cranial nerve evaluation normal. No evident sensorimotor abnormalities.
Indwelling urinary catheter (BDK) with voiding difficulties. Sphincter
tone diminished, but fecal control maintained. KPPS at 85%. Wound site
clean and non-irritated, except for the lower central portion.
[Procedure:]{.underline}
Neurosurgical intervention not required; no reassessment of the lower
wound needed. Advise return if neurological symptoms intensify.
The patient, diagnosed with relapsing-remitting multiple sclerosis that
initially presented aggressively, has been relapse-free on fingolimod
since 2009 and was generally well, apart from slight imbalance when
walking due to minor weakness in her left leg. Approximately 7 weeks
ago she developed numbness and weakness in her legs, extending up to
the hip and persisting for several days, followed by difficulties with
urination and bowel movements. On in-house examination, a lesion was
identified at T10, which was surgically addressed by our neurosurgery
team. Histology identified it as a DLBCL, leading to a chemotherapy
plan including Rituximab. Post-surgery, her symptoms have subsided
somewhat, but the patient still has an indwelling catheter and relies
on a wheelchair.
On clinical neurological assessment, a mild paraparesis was noted in her
left leg, accompanied by heightened reflex response and sporadic left
foot spasms, which were intense but temporary.
In conclusion, the new neurological manifestations are not a recurrence
of the previously stable multiple sclerosis. As Rituximab is also an
effective third-line drug for MS treatment and is essential here,
discontinuing fingolimod (second-line) was discussed with the patient.
Approximately 4-5 months after the last Rituximab administration, a
radiological (cMRI) and clinical review is suggested. Depending on the
results, either resuming fingolimod or, if no adverse effects are
present, potentially continuing Rituximab treatment is recommended (for
this, contact our neuroimmunology outpatient department). The primary
neurologist was unavailable for comment.
**Boards**
Oncology tumor board as of 01/22/2017
6 cycles of R-Pola-CHP
[Pathology]{.underline}
Pathology. Findings from 01/05/2017
[Clinical information/question:]{.underline}
Tumor cuff at T10. Benign or malignant? Entity? Macroscopy:
1\. Lamina T10: Fixed. Several aggregated calcified tissue fragments,
0.7 x 0.5 x 0.2 cm. Complete embedding. Decalcification in EDTA. 2\.
Ligament: Fixed. Several aggregated, coarse, partly calcified tissue
fragments, 0.9 x 0.7 x 0.2 cm. Complete embedding. Decalcification
overnight in EDTA.
3\. Epidural tumor: Numerous beige-colored tissue fragments, 3.8 x 2.8
x 0.6 cm. Embedding of representative sections after lamellation.
[Processing]{.underline}: 1 block, HE. Microscopy:
1\. and 2. (lamina T10 and ligament) are still being decalcified.
3rd epidural tumor: Paravertebral soft tissue with infiltrates of a
partly lymphoid, partly blastic neoplasia. The tumor cells are
distributed diffusely, in places nodularly, within the tissue and have
hyperchromatic nuclei with coarse-grained chromatin and a narrow
cytoplasmic rim. There are also blastic cells with enlarged nuclei,
vesicular chromatin, and sometimes prominent nucleoli. The stroma is
loose and vacuolated. Markedly pronounced crush artifacts.
Preliminary report of critical findings:
3\. epidural tumor: paravertebral soft tissue with infiltrates of
lymphoid and blastic cells compatible with hematologic neoplasia.
Additional immunohistochemical staining is being performed to further
characterize the tumor. In addition, material 1 (lamina T10) and
material 2 (ligament) are still undergoing decalcification. A follow-up
report will be provided.
Processing: 2 blocks, decalcification, HE, Giemsa, IHC: CD20, PAX5,
Bcl2, Bcl6, CD5, CD3, CD23, CD21, Kappa, Lambda, CD10, c-Myc, CyclinD1,
CD30, MIB1, EBV/EBER.
Molecular pathology: testing for B-cell clonal expansion and IgH/Bcl2
translocation.
[Microscopy]{.underline}:
1\. Ligament: Scarred connective tissue and fragmented bone tissue
without evidence of the tumor described in the preliminary findings
under 3.
2\. Lamina T10: Bone tissue without evidence of the tumor described in
the preliminary findings under 3.
3\. Epidural tumor: Immunohistochemically, blastic tumor cells show a
positive reaction after incubation with antibodies against CD20, PAX5
and BCL2. Partially positive reaction against Bcl-6 (\<20%). Some
isolated blastic cells staining positive for CD30. Lymphoid cells
positive for CD3 and CD5. Some residual germinal centers with positive
reaction to CD23 and CD21. Predominantly weak positive reaction of
blasts and lymphoid cells to CD10. Some solitary cells with positive
reaction to kappa, rather unspecific, flat reaction to lambda. No
overexpression of c-Myc or cyclin D1. No reaction to EBV/EBER. The
Ki-67 proliferation index is 40% overall, rising to \> 90% among the
blastic tumor cells.
Significantly limited evaluability of immunohistochemical staining due
to severe squeezing artifacts of the material.
[Molecular pathology report:]{.underline}
Examination for clonal B-cell expansion and t(14;18) translocation
Methodology:
DNA was isolated from the submitted tissues and used in duplicate in
specific PCRs (B-cell clonality analysis with Biomed-2 primer sets:
IGHG1 gene, frameworks 2 and 3). The size distribution of the PCR
products was further analyzed by fragment analysis.
To detect a BCL2/IgH fusion corresponding to a t(14;18) translocation,
DNA was used in a specific PCR (according to Stetler-Stevenson et al.,
Blood. 1988;72:1822-25).
Results:
Amplification of isolated DNA: good. B-cell clonality analyses:
IGHG1 framework 2: polyclonal signal pattern.
IGHG1 framework 3: reproducible clonal signal at approximately 115/116 bp.
t(14;18) translocation: negative.
[Molecular pathology report:]{.underline}
Molecular pathologic evidence of clonal B-cell expansion. No evidence of
t(14;18) translocation in test material with normal control reactions.
Preliminary critical findings report:
1\. Lamina T10: tumor-free bone tissue.
2\. Ligament: Tumor-free, scarred connective tissue and fragmented bone
tissue.
3\. Epidural tumor: aggressive B non-Hodgkin\'s lymphoma.
Findings (continued)
Additional findings from 01/06/2017
Immunohistochemical processing: MUM1, ALK1, CD68, TdT. Microscopy:
3\. Immunohistochemically, blastic tumor cells are positive for MUM1.
Numerous CD68-positive macrophages. No reaction to ALK1 and TdT.
Critical findings report:
1\. Lamina T10: Tumor-free bone tissue. 2\. Ligament: Tumor-free,
scarred connective tissue and fragmented bone tissue.
3\. epidural tumor: aggressive B-non-Hodgkin lymphoma, morphologically
and immunohistochemically most compatible with diffuse large B-cell
lymphoma (DLBCL, NOS) of germinal center type according to Hans
classifier (GCB).
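The Hans classifier cited above assigns DLBCL a cell of origin from three immunostains (CD10, BCL6, MUM1), each conventionally scored positive at a 30% staining cutoff. A minimal sketch of the published decision tree follows; the function name and boolean encoding are illustrative, not taken from this report:

```python
def hans_classifier(cd10: bool, bcl6: bool, mum1: bool) -> str:
    """Hans immunohistochemistry algorithm for DLBCL cell of origin.

    Each marker counts as positive when >= 30% of tumor cells stain.
    Returns "GCB" (germinal center B-cell type) or "non-GCB".
    """
    if cd10:
        return "GCB"          # CD10+ is GCB regardless of other markers
    if not bcl6:
        return "non-GCB"      # CD10- / BCL6- -> non-GCB
    # CD10- / BCL6+ : MUM1 decides
    return "non-GCB" if mum1 else "GCB"

print(hans_classifier(cd10=True, bcl6=False, mum1=True))   # GCB
print(hans_classifier(cd10=False, bcl6=True, mum1=True))   # non-GCB
```

Note that borderline stains (as in this case, with weak CD10 and BCL6 below 20%) require pathologist judgment; the tree only formalizes the clear-cut combinations.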
**Path. Findings from 01/05/2017**
Clinical Findings
Clinical data:
Initial diagnosis of DLBCL with spinal involvement.
Puncture Site(s): 1
Collection date: 01/04/2017
Arrival at cytology lab: 01/04/2017, 8 PM. Material:
1 Liquid Material: 2 mL colorless, clear. Processing: MGG staining.
Microscopic:
ZTA:
Sediment: erythrocytes (+), lymphocytes (+), granulocytes, eosinophils,
histiocytes, siderophages, monocytes (+).
Others: Isolated detection of a few monocytes. No evidence of atypical
cells.
Critical report of findings:
CSF sediment without evidence of inflammation or malignancy.
Diagnostic grading: Negative
Therapy and course
Mrs. S was admitted from the neurosurgical department for chemotherapy
(R-POLA-CHP) of suspected DLBCL with spinal/vertebral manifestations.
After exclusion of clinical and laboratory contraindications,
antineoplastic therapy was started on 01/08/2017. This was well
tolerated under the usual supportive measures. There were no acute
complications.
During her hospitalization, Ms. S reported numbness in both hands. A
neurosurgical and neurological consultation was obtained, without acute
need for action. In consultation with the neurology
department, the existing therapy with fingolimod should be discontinued
due to the concomitant use of rituximab and the associated risk of PML.
If necessary, re-exposure to fingolimod may be considered after
completion of oncologic therapy.
On 01/07/2017, a port placement was performed by our vascular surgery
department without complications.
On 01/19/2017, a single dose of Pegfilgrastim 6 mg s.c. was
administered. Given its approximately 10-day duration of action, G-CSF
should not be repeated during this interval.
We are able to transfer Mrs. S to the Mountain Hospital Center
(Neurological Initial Therapy & Recovery) on 02/01/2017. We thank you
for accommodating the patient and are available for any additional
inquiries.
**Medications at Discharge**
**Acetylsalicylic acid (Aspirin®)** - 100mg, 1 tablet in the morning
**Atorvastatin - 40mg -** 1 tablet at bedtime
**Sertraline - 50mg** - 2 tablets in the morning
**Lorazepam (Tavor®)** - 1mg, as needed
**Fingolimod** - 0.5mg, 1 capsule at bedtime, Note: Take a break as
directed
**Hydromorphone hydrochloride** - 2mg (extended-release), 2 capsules in
the morning and 2 capsules at bedtime
**Melatonin -** 2mg (sustained-release), 1 tablet at bedtime
**Baclofen (Lioresal®) -** 10mg, 1 tablet three times a day
**Pregabalin -** 75mg, 1 capsule in the morning and 1 capsule at bedtime
**MoviCOL® (Macrogol, Sodium chloride, Potassium chloride) -** 1 packet
three times a day, mixed with water for oral intake
**Pantoprazole -** 40mg, 1 tablet in the morning
**Colecalciferol (Vitamin D3) -** 20000 I.U., 1 capsule on Monday and
Thursday
**Co-trimoxazole -** 960mg, 1 tablet on Monday, Wednesday, and Friday
**Valaciclovir -** 500mg, 1 tablet in the morning and 1 tablet at
bedtime
**Prednisolone -** 50mg, 2 tablets in the morning, Continue through
02/19/2017
**Enoxaparin sodium (Clexane®) -** 40mg (4000 I.U.), 1 injection at
bedtime, Note: Continue in case of immobility
**Dimenhydrinate (Vomex A®)** - 150mg (sustained-release), as needed for
nausea, up to 2 capsules daily.
**Procedure**
**Oncology board decision: 6 cycles of R-Pola-CHP.**
- Fingolimod pause, re-evaluation in 4-5 months.
- Continuation of therapy near residence in the clinic as of
02/28/2017
- Bi-Weekly laboratory tests (electrolytes, blood count, kidney and
liver function tests)
- In case of fever \>38.3 °C please report immediately to our
emergency room
- Immediate gynecological examination for nodular mass in the left
breast
Dates:
- From 03/01/2017 third cycle of R-Pola-CHP in the clinic. The patient
will be informed of the date by telephone.
If symptoms persist or exacerbate, we advocate for an urgent revisit.
Outside standard working hours, emergencies can also be addressed at the
emergency hub.
During discharge management, the patient was extensively educated and
assisted, and equipped with required appliances, medication scripts, and
absence from work notices.
All findings were thoroughly discussed. Multiple alternative
therapeutic options were considered before making a treatment
recommendation. The opportunity for a second opinion and referral to
our facility was also emphasized.
**Lab values at discharge: **
**Metabolic Panel**
**Parameter** **Results** **Reference Range**
---------------------------------- ------------- ---------------------
Sodium 136 mEq/L 135 - 145 mEq/L
Potassium 3.9 mEq/L 3.5 - 5.0 mEq/L
Creatinine 1.2 mg/dL 0.7 - 1.3 mg/dL
BUN (Blood Urea Nitrogen) 19 mg/dL 7 - 18 mg/dL
Alkaline Phosphatase 138 U/L 40 - 129 U/L
Total Bilirubin 0.3 mg/dL \< 1.2 mg/dL
GGT (Gamma-Glutamyl Transferase) 82 U/L \< 66 U/L
ALT (Alanine Aminotransferase) 42 U/L 10 - 50 U/L
AST (Aspartate Aminotransferase) 34 U/L 10 - 50 U/L
LDH (Lactate Dehydrogenase) 366 U/L \< 244 U/L
Uric Acid 4.1 mg/dL 3 - 7 mg/dL
Calcium 9.0 mg/dL 8.8 - 10.6 mg/dL
**Kidney Function**
**Parameter** **Results** **Reference Range**
------------------------------- ------------- ---------------------
GFR (MDRD) \>60 mL/min \> 60 mL/min
GFR (CKD-EPI with Creatinine) 64 mL/min \> 90 mL/min
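The CKD-EPI estimate in the table above is computed from serum creatinine, age, and sex. As a sketch, here is the 2009 CKD-EPI creatinine equation (without the race coefficient); the patient's age at discharge is assumed from the stated birth year, and the laboratory's exact variant and rounding may differ, so the value below is illustrative and need not match the reported 64 mL/min:

```python
import math

def ckd_epi_2009(scr_mg_dl: float, age: int, female: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2), 2009 CKD-EPI creatinine
    equation without the race coefficient."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age
            * (1.018 if female else 1.0))
    return round(egfr, 1)

# This report: creatinine 1.2 mg/dL, female, assumed age 47 at discharge
print(ckd_epi_2009(1.2, 47, female=True))  # ~53.8
```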
**Inflammatory Markers**
**Parameter** **Results** **Reference Range**
-------------------------- ------------- ---------------------
CRP (C-Reactive Protein) 2.5 mg/dL \< 0.5 mg/dL
**Coagulation Panel**
**Parameter** **Results** **Reference Range**
--------------- ------------- ---------------------
PT Percentage 103% 70 - 120%
INR 1.0 N/A
aPTT 25 sec 26 - 37 sec
**Complete Blood Count**
**Parameter** **Results** **Reference Range**
--------------- ---------------- ---------------------
WBC 12.71 x10\^9/L 4.0 - 9.0 x10\^9/L
RBC 2.9 x10\^12/L 4.5 - 6.0 x10\^12/L
Hemoglobin 8.1 g/dL 14 - 18 g/dL
Hematocrit 24.7% 40 - 48%
MCH 28 pg 27 - 32 pg
MCV 86 fL 82 - 92 fL
MCHC 32.8 g/dL 32 - 36 g/dL
Platelets 257 x10\^9/L 150 - 450 x10\^9/L
**Differential**
**Parameter** **Results** **Reference Range**
--------------- ------------- ---------------------
Neutrophils 77% 40 - 70%
Lymphocytes 4% 25 - 40%
Monocytes 18% 4 - 10%
Eosinophils 0% 2 - 4%
Basophils 0% 0 - 1%
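Several results in the discharge panels above fall outside their reference intervals (e.g., LDH 366 U/L against an upper limit of 244 U/L, hemoglobin 8.1 g/dL against a lower limit of 14 g/dL). Once each value is paired with its interval, such flagging is mechanical; the dictionary layout and the subset of values shown are illustrative, not a format used by this clinic:

```python
# Flag lab values that fall outside their reference interval.
# Each entry: (result, lower_limit, upper_limit); None means open-ended.
labs = {
    "Sodium (mEq/L)":    (136, 135, 145),
    "BUN (mg/dL)":       (19, 7, 18),
    "LDH (U/L)":         (366, None, 244),
    "CRP (mg/dL)":       (2.5, None, 0.5),
    "Hemoglobin (g/dL)": (8.1, 14, 18),
    "WBC (x10^9/L)":     (12.71, 4.0, 9.0),
}

def flag(value, low, high):
    """Return "LOW", "HIGH", or "ok" relative to the reference interval."""
    if low is not None and value < low:
        return "LOW"
    if high is not None and value > high:
        return "HIGH"
    return "ok"

for name, (value, low, high) in labs.items():
    print(f"{name}: {value} -> {flag(value, low, high)}")
```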
### Patient Report 1
**Dear colleague, **
I am writing to provide a follow-up report on our mutual patient, Mrs.
Anna Sample, born on January 1st, 1970, post her recent visit to our
clinic on October 9th, 2017.
Upon assessment, Mrs. Sample reported experiencing a moderate
improvement in symptoms since the initiation of the R-Pola-CHP regimen.
The discomfort in her dorsal calf and thoracic spine has notably
reduced, and her arm strength has seen gradual improvement, though she
occasionally still encounters difficulty in grasping objects.
She has been undergoing physiotherapy to aid in the recovery of her arm
strength.
**Physical Examination:** No palpable lymphadenopathy. Her neurological
examination was stable with no new deficits.
**Laboratory Findings:** Most recent blood counts and biochemistry
panels showed a trend towards normalization, with liver enzymes within
the reference range.
**Imaging:**
- Ultrasound of the abdomen was conducted.
- A follow-up MRI showed a reduction in the size of the previously
  noted metastatic masses. There is decreased impingement on the myelon
  at the levels of T10-L1. The lesions at L2, L3, and L4 also showed
  signs of regression.
- PET scan was performed: favorable response. Increased FDG avidity in
  the liver; liver MRI recommended.
- Liver MRI: No pathology of the liver.
**Senological Examination:** The nodule-like deposit in the right breast
was found to be benign.
**Medication on admission:**
Aspirin (Aspirin®) - 100mg, 1 tablet in the morning
Atorvastatin - 40mg, 1 tablet at bedtime
Sertraline - 50mg, 2 tablets in the morning
Lorazepam (Tavor®) - 1mg, as needed
Fingolimod - 0.5mg, 1 capsule at bedtime, Note: Take a break as directed
Hydromorphone hydrochloride - 2mg (extended-release), 2 capsules in the
morning and 2 capsules at bedtime
Melatonin - 2mg (sustained-release), 1 tablet at bedtime
Baclofen (Lioresal®) - 10mg, 1 tablet three times a day
Pregabalin - 75mg, 1 capsule in the morning and 1 capsule at bedtime
MoviCOL® (Macrogol, Sodium chloride, Potassium chloride) - 1 packet
three times a day, mixed with water for oral intake
Pantoprazole - 40mg, 1 tablet in the morning
Colecalciferol (Vitamin D3) - 20000 I.U., 1 capsule on Monday and Thursday
Co-trimoxazole - 960mg, 1 tablet on Monday, Wednesday, and Friday
Valaciclovir - 500mg, 1 tablet in the morning and 1 tablet at bedtime
Prednisolone - 50mg, 2 tablets in the morning, Continue through 02/19/2017
Enoxaparin sodium (Clexane®) - 40mg (4000 I.U.), 1 injection at
bedtime, Note: Continue in case of immobility
Dimenhydrinate (Vomex A®) - 150mg (sustained-release), as needed for
nausea, up to 2 capsules daily.
**Physician\'s report for ultrasound on 10/05/2017:**
Liver: The liver is enlarged, measuring 18.1 cm in the MCL, 18.5 cm in
the CCD, and 20.2 cm in the AL. The internal structure is not
compacted. No focal changes are seen. Orthograde flow in the portal
vein (vmax 16 cm/s).
Gallbladder: the gallbladder is 9.0 x 2.9 cm, the lumen is free of
stones.
Biliary tract: The intra- and extrahepatic bile ducts are not
obstructed, DHC 5 mm, DC 3 mm.
Pancreas: The pancreas is approximately 3.2/1.5/3.0 cm in size, the
internal structure is moderately echo-rich.
Spleen: The spleen is 28.0 x 9.6 cm, the parenchyma is homogeneous.
Kidneys: The right kidney is 9.8/2.0 cm, the pelvis is not congested.
The left kidney is 12.4/1.2 cm, the pelvis is not congested. Vessels
retroperitoneal: the aorta is normal in width in the partially visible
area.
Stomach/intestine: The gastric corpus wall is up to 14 mm thick. No
evidence of free fluid in the abdominal cavity.
Bladder/genitals: The prostate measures approximately 3.8 x 4.8 x 3.1
cm; the urinary bladder is moderately full.
**MR Spine plain + post-contrast from 10/06/2017**
**Study:** Magnetic Resonance Imaging (MRI) of the thoracolumbar spine
**Clinical Information:** Follow-up MRI post treatment for previously
noted metastatic masses.
**Technique:** Standard T1-weighted, T2-weighted, and post-contrast
enhanced sequences of the thoracolumbar spine were obtained in sagittal
and axial planes.
**Findings:** There is a reduction in the size of the previously noted
metastatic masses when compared to prior MRI studies. A reduced mass
effect is observed at the levels of T10-L1. Notably, there is decreased
impingement on the myelon at these levels. This indicates a significant
improvement, suggesting a positive response to the recent therapy. The
lesion noted in the previous study at the level of L2 has shown signs of
regression in both size and intensity. Similar regression is noted for
the lesion at the L3 level. The lesion at the L4 level has also
decreased in size as compared to previous imaging. The intervertebral
discs show preserved hydration. No significant disc protrusions or
herniations are observed. The vertebral bodies do not show any
significant collapse or deformity. Bone marrow signal is otherwise
normal, apart from the aforementioned lesions. The spinal canal
maintains a normal caliber throughout, and there is no significant canal
stenosis. The conus medullaris and cauda equina nerve roots appear
unremarkable without evidence of displacement or compression.
**Impression:** Reduction in the size of previously noted metastatic
masses, indicating a positive therapeutic response. Decreased
impingement on the myelon at the levels of T10-L1, suggesting
significant regression of the previously observed mass effect.
Regression of lesions at L2, L3, and L4 levels, further indicating the
positive response to treatment.
**Positron Emission Tomography (PET)/CT from 10/09/2017:**
**Indication:** Follow-up evaluation of Diffuse large B-cell lymphoma of
germinal center type diagnosed in 01/2017.
**Technique:** Whole-body FDG-PET/CT was performed from the base of the
skull to the mid-thighs.
**Findings:** Liver: There is increased FDG uptake in the liver,
predominantly in the anterolateral segment. The size of the liver is
consistent with the previous ultrasound report, measuring 18.1 cm in the
MCL, 18.5 cm in the CCD, and 20.2 cm in the AL. The SUV max is 5.5.
Lymph Nodes: There is no pathological FDG uptake in the previously noted
left axillary lymph node, suggesting a therapeutic response. Lungs:
Previously noted deposit in the ventral left upper lobe now demonstrates
reduced FDG avidity. No other FDG-avid nodules or masses. Bone: There\'s
no FDG uptake in the spine, including the previously described
metastatic lesion, indicating a positive response to treatment.
**Impression:** Overall, the findings demonstrate a marked metabolic
improvement in the sites of lymphoma previously noted, particularly in
the left axillary lymph node and the vertebral bone lesions. The liver,
however, presents with increased FDG avidity, especially in the
anterolateral segment. This uptake might represent active lymphomatous
involvement or could be due to an inflammatory process. Given the
differential, and to ascertain the etiology, further diagnostic
evaluation, such as a liver MRI or biopsy, is recommended.
**Liver MRI from 10/11/2017:**
**Clinical Indication:** Evaluation of increased FDG uptake in the liver
as noted on the recent PET scan. Concern for active lymphomatous
involvement or an inflammatory process.
**Technique:** MRI of the liver was performed using a 3T scanner.
Sequences included T1-weighted (in-phase and out-of-phase), T2-weighted,
diffusion-weighted imaging (DWI), and post-contrast dynamic imaging
after the administration of gadolinium-based contrast agent.
**Detailed Findings:** The liver demonstrates enlargement with
measurements consistent with the recent ultrasound: 18.1 cm in the
mid-clavicular line (MCL), 18.5 cm in the maximum cranial-caudal
diameter (CCD), and 20.2 cm along the anterior line (AL).
The liver parenchyma is mostly homogenous. However, there is a region in
the anterolateral segment demonstrating T2 hyperintensity and
hypointensity on T1-weighted images. The aforementioned region in the
anterolateral segment demonstrates restricted diffusion, suggestive of
increased cellular density. After gadolinium administration, there is
peripheral enhancement of the lesion in the arterial phase, followed by
progressive central filling in portal venous and delayed phases. This
pattern is suggestive of a focal nodular hyperplasia (FNH) or atypical
hemangioma. The intrahepatic and extrahepatic bile ducts are not
dilated. No evidence of any obstructing lesion. The hepatic arteries,
portal vein, and hepatic veins appear patent with no evidence of
thrombosis or stenosis. The gallbladder, pancreas, spleen, and adjacent
segments of the bowel appear normal. No lymphadenopathy is noted in the
porta hepatis or celiac axis.
**Impression:** Enlarged liver with a suspicious lesion in the
anterolateral segment demonstrating characteristics that might be
consistent with focal nodular hyperplasia or atypical hemangioma. No
indication of lymphomatous involvement of the liver.
**Discussion:**
Given her positive response to the treatment so far, we intend to
continue with the current regimen, with careful monitoring of her side
effects and symptomatology.
We deeply appreciate your continued involvement in Mrs. Sample's
healthcare journey. Collaborative care is paramount, especially in cases
as complex as hers. Should you have any recommendations, insights, or if
you require additional information, please do not hesitate to reach out.
**Medication at discharge:**
- Aspirin 100mg: Take 1 tablet in the morning
- Atorvastatin 40mg: Take 1 tablet at bedtime
- Sertraline 50mg: Take 2 tablets in the morning
- Lorazepam 1mg: Take as needed
- Melatonin (sustained-release) 2mg: Take 1 tablet at bedtime
- Fingolimod 0.5mg: Take 1 capsule at bedtime/take a break as directed
- Hydromorphone hydrochloride (extended-release) 2mg: Take 2 capsules in the morning and 2 capsules at bedtime
- Pregabalin 75mg: Take 1 capsule in the morning and 1 capsule at bedtime
- Baclofen 10mg: Take 1 tablet three times a day
- MoviCOL®: Mix 1 packet with water and take orally three times a day
- Pantoprazole 40mg: Take 1 tablet in the morning
- Colecalciferol (Vitamin D3) 20000 I.U.: Take 1 capsule on Monday and Thursday
- Dimenhydrinate (sustained-release) 150mg: Take as needed for nausea, up to 2 capsules daily
**Metabolic Panel**
**Parameter** **Results** **Reference Range**
---------------------------------- ------------- ---------------------
Sodium 138 mEq/L 135 - 145 mEq/L
Potassium 4.1 mEq/L 3.5 - 5.0 mEq/L
Creatinine 1.1 mg/dL 0.7 - 1.3 mg/dL
BUN (Blood Urea Nitrogen) 17 mg/dL 7 - 18 mg/dL
Alkaline Phosphatase 124 U/L 40 - 129 U/L
Total Bilirubin                    0.4 mg/dL     < 1.2 mg/dL
GGT (Gamma-Glutamyl Transferase)   75 U/L        < 66 U/L
ALT (Alanine Aminotransferase) 39 U/L 10 - 50 U/L
AST (Aspartate Aminotransferase) 36 U/L 10 - 50 U/L
LDH (Lactate Dehydrogenase)        342 U/L       < 244 U/L
Uric Acid 3.8 mg/dL 3 - 7 mg/dL
Calcium 9.12 mg/dL 8.8 - 10.6 mg/dL
**Kidney Function**
**Parameter** **Results** **Reference Range**
------------------------------- ------------- ---------------------
GFR (MDRD)                      >62 mL/min    > 60 mL/min
GFR (CKD-EPI with Creatinine)   67 mL/min     > 90 mL/min
**Inflammatory Markers**
**Parameter** **Results** **Reference Range**
-------------------------- ------------- ---------------------
CRP (C-Reactive Protein)   1.8 mg/dL     < 0.5 mg/dL
**Coagulation Panel**
**Parameter** **Results** **Reference Range**
--------------- ------------- ---------------------
PT Percentage 105% 70 - 120%
INR 0.98 N/A
aPTT 28 sec 26 - 37 sec
**Complete Blood Count**
**Parameter** **Results** **Reference Range**
--------------- --------------- ---------------------
WBC             11.9 x10^9/L    4.0 - 9.0 x10^9/L
RBC             3.1 x10^12/L    4.5 - 6.0 x10^12/L
Hemoglobin 8.4 g/dL 14 - 18 g/dL
Hematocrit 26% 40 - 48%
MCH 27.8 pg 27 - 32 pg
MCV 84 fL 82 - 92 fL
MCHC 33 g/dL 32 - 36 g/dL
Platelets       263 x10^9/L     150 - 450 x10^9/L
**Differential**
**Parameter** **Results** **Reference Range**
--------------- ------------- ---------------------
Neutrophils 73% 40 - 70%
Lymphocytes 7% 25 - 40%
Monocytes 16% 4 - 10%
Eosinophils 1% 2 - 4%
Basophils 0.5% 0 - 1%
|
Feet
|
What baseline is used to compare the experimental results against?
|
### Introduction
Since machine learning algorithms learn to model patterns present in training datasets, what they learn is affected by data quality. Analysis has found that model predictions directly reflect the biases found in training datasets, such as image classifiers learning to associate ethnicity with specific activities BIBREF1. Recent work in natural language processing has found similar biases, such as in word embeddings BIBREF2, BIBREF3, BIBREF4, object classification BIBREF5, natural language inference BIBREF6, and coreference resolution BIBREF7. Less work has focused on the biases present in dialogue utterances BIBREF8, BIBREF9, despite bias being clearly present in human interactions, and the rapid development of dialogue agents for real-world use-cases, such as interactive assistants. In this work we aim to address this by focusing on mitigating gender bias. We use the dialogue dataset from the LIGHT text adventure world BIBREF0 as a testbed for our investigation into de-biasing dialogues. The dataset consists of a set of crowd-sourced locations, characters, and objects, which form the backdrop for the dialogues between characters. In the dialogue creation phase, crowdworkers are presented with personas for characters—which themselves were written by other crowdworkers—that they should enact; the dialogues the crowdworkers generate from these personas form the dialogue dataset. Dialogue datasets are susceptible to reflecting the biases of the crowdworkers as they are often collected solely via crowdsourcing. Further, the game's medieval setting may encourage crowdworkers to generate text which accentuates the historical biases and inequalities of that time period BIBREF10, BIBREF11. However, despite the fact that the dialogues take place in a fantasy adventure world, LIGHT is a game and thus we are under no obligation to recreate historical biases in this environment, and can instead use creative license to shape it into a fun world with gender parity. 
We use the dialogues in LIGHT because we find that it is highly imbalanced with respect to gender: there are over 60% more male-gendered characters than female. We primarily address the discrepancy in the representation of male and female genders, although there are many characters that are gender neutral (like “trees") or for which the gender could not be determined. We did not find any explicitly identified non-binary characters. We note that this is a bias in and of itself, and should be addressed in future work. We show that training on gender biased data leads existing generative dialogue models to amplify gender bias further. To offset this, we collect additional in-domain personas and dialogues to balance gender and increase the diversity of personas in the dataset. Next, we combine this approach with Counterfactual Data Augmentation and methods for controllable text generation to mitigate the bias in dialogue generation. Our proposed techniques create models that produce engaging responses with less gender bias. ### Sources of Bias in Dialogue Datasets ::: Bias in Character Personas
Recent work in dialogue incorporates personas, or personality descriptions that ground a speaker's chat, such as I love fishing BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. Personas have been shown to increase engagingness and improve consistency. However, they can be a starting point for bias BIBREF17, BIBREF18, BIBREF9, as bias in the personas propagates to subsequent conversations. ### Sources of Bias in Dialogue Datasets ::: Bias in Character Personas ::: Qualitative Examination.
Analyzing the personas in LIGHT qualitatively, we find many examples of bias. For example, the character girl contains the line I regularly clean and cook dinner. Further examples are given in Table TABREF1. ### Sources of Bias in Dialogue Datasets ::: Bias in Character Personas ::: Quantitative Examination.
We quantitatively analyze bias by first examining whether the existing personas are offensive, and second, evaluating their gender balance. To assess the pervasiveness of unsafe content present in personas, we asked three independent annotators to examine each character's persona for potentially offensive content. If annotators selected that the content was offensive or maybe offensive, they were asked to place it in one of four categories – racist, sexist, classist, other – and to provide a reason for their response. Just over 2% of personas were flagged by at least one annotator, and these personas are removed from the dataset. We further examined gender bias in personas. Annotators were asked to label the gender of each character based on their persona description (choosing “neutral” if it was not explicit in the persona). This annotation is possible because some personas include lines such as I am a young woman, although the majority of personas do not mention an explicit gender. Annotators found nearly 50% more male-gendered characters than female-gendered characters (Table TABREF5). While annotators labeled personas as explicitly male, female, or gender-neutral, gender bias may still exist in personas beyond explicit sentences such as I am a young man. For example, personas can contain gendered references such as I want to follow in my father's footsteps rather than mother's footsteps. These relational nouns BIBREF19, BIBREF20 such as father encode a specific relationship that can be gender biased. In this example, that relationship would be between the character and a man, rather than a woman. We analyzed the frequency of references to other gendered characters in the personas by counting the appearance of gendered words using the list compiled by BIBREF21 (for example he vs. she), and find that men are disproportionately referred to in the personas: there are nearly 3x as many mentions of men as of women. 
### Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances
After analyzing the bias in LIGHT personas, we go on to analyze the bias in dialogues created from those personas and how to quantify it. ### Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances ::: Qualitative Examination.
In our analysis, we found many examples of biased utterances in the data used to train dialogue agents. For example, the character with a queen persona utters the line I spend my days embroidery and having a talk with the ladies. Another character in a dialogue admires a sultry wench with fire in her eyes. An example of persona bias propagating to the dialogue can be found in Table TABREF2. ### Sources of Bias in Dialogue Datasets ::: Bias in Dialogue Utterances ::: Measuring Bias.
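One quantitative signal used throughout this analysis is the count of male- and female-gendered words in text. A minimal sketch of such a counter follows, assuming tiny illustrative word lists in place of the aggregated published lists; the list contents and function names here are hypothetical, not taken from the paper:

```python
import re

# Hypothetical, heavily abbreviated word lists; the paper aggregates
# several published lists, which are not reproduced here.
MALE_WORDS = {"he", "him", "his", "man", "men", "king", "father", "son"}
FEMALE_WORDS = {"she", "her", "hers", "woman", "women", "queen", "mother", "daughter"}

def gendered_word_counts(utterance: str) -> tuple[int, int]:
    """Count (male, female) gendered tokens in a single utterance."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    return (sum(t in MALE_WORDS for t in tokens),
            sum(t in FEMALE_WORDS for t in tokens))

def male_bias(utterances: list[str]) -> float:
    """Fraction of all gendered tokens that are male-gendered."""
    male = female = 0
    for u in utterances:
        m, f = gendered_word_counts(u)
        male += m
        female += f
    return male / (male + female) if (male + female) else 0.0

dialogue = ["The king spoke with his son.", "She thanked the queen."]
print(male_bias(dialogue))  # 3 male vs. 2 female gendered tokens -> 0.6
```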
Sexism is clearly present in many datasets BIBREF9, but finding a good way to measure sexism, especially at scale, can be challenging. A simple answer would be to rely on crowdworkers operating under their own notions of “sexism” to annotate the dialogues. However, in our experience, crowdworkers hold a range of views, often different from ours, as to what counts as sexism, making mere human evaluation far from sufficient. Note that the original LIGHT personas and dialogues were generated by crowdworkers, leaving little reason to believe that crowdworkers will be proficient at spotting the sexism that they themselves imbued the dataset with in the first place. Therefore, we supplement our crowdworker-collected human annotations of gender bias with additional quantitative measurements: we measure the ratio of gendered words (taken from the union of several existing gendered word lists that were each created through either automatic means, or by experts BIBREF21, BIBREF22, BIBREF23), and we run an existing dialogue safety classifier BIBREF24 to measure offensiveness of the dialogues. ### Methodology: Mitigating Bias in Generative Dialogue
We explore both data augmentation and algorithmic methods to mitigate bias in generative Transformer dialogue models. We describe first our modeling setting and then the three proposed techniques for mitigating bias. Using (i) counterfactual data augmentation BIBREF25 to swap gendered words and (ii) additional data collection with crowdworkers, we create a gender-balanced dataset. Further, (iii) we describe a controllable generation method which moderates the male and female gendered words it produces. ### Methodology: Mitigating Bias in Generative Dialogue ::: Models
Following BIBREF0, in all of our experiments we fine-tune a large, pre-trained Transformer encoder-decoder neural network on the dialogues in the LIGHT dataset. The model was pre-trained on Reddit conversations, using a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io. During pre-training, models were trained to generate a comment conditioned on the full thread leading up to the comment. Comments containing URLs or that were under 5 characters in length were removed from the corpus, as were all child comments, resulting in approximately $2,200$ million training examples. The model is an 8-layer encoder, 8-layer decoder with 512-dimensional embeddings and 16 attention heads, and is based on the ParlAI implementation of BIBREF26. For generation, we decode sequences with beam search with beam size 5. ### Methodology: Mitigating Bias in Generative Dialogue ::: Counterfactual Data Augmentation
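Counterfactual Data Augmentation, described next, pairs every dialogue with a copy in which gendered words are swapped via a word-pair list. A hedged sketch follows, with a tiny illustrative pair list standing in for the published one; note that real lists must also disambiguate forms like her, which can map to either his or him:

```python
import re

# Illustrative word pairs only; the published pair list is much larger.
GENDER_PAIRS = {
    "grandmother": "grandfather", "mother": "father", "she": "he",
    "her": "his", "woman": "man", "queen": "king",
}
# Make the mapping bidirectional so swaps work in either direction.
SWAP = {**GENDER_PAIRS, **{v: k for k, v in GENDER_PAIRS.items()}}

def swap_gendered_words(text: str) -> str:
    """Return a copy of `text` with each gendered word replaced by its pair,
    preserving leading capitalization."""
    def repl(m: re.Match) -> str:
        word = m.group(0)
        swapped = SWAP.get(word.lower(), word)
        return swapped.capitalize() if word[0].isupper() else swapped
    return re.sub(r"[A-Za-z]+", repl, text)

def augment(dialogues: list[str]) -> list[str]:
    """CDA: the original dialogues plus their gender-swapped counterfactuals."""
    return dialogues + [swap_gendered_words(d) for d in dialogues]

print(swap_gendered_words("She visited her grandmother."))
# -> He visited his grandfather.
```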
One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21. For example, all instances of grandmother are swapped with grandfather. ### Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection
To create a more gender-balanced dataset, we collect additional data using a Positive-Bias Data Collection (Pos. Data) strategy. ### Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: Gender-swapping Existing Personas
There are a larger number of male-gendered character personas than female-gendered character personas (see Section SECREF2), so we balance existing personas using gender-swapping. For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns. Additionally, we ask annotators to swap the gender of any characters that are referred to in the persona text for a given character. ### Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: New and Diverse characters
As discussed in Section SECREF2, it is insufficient to simply balance references to men and women in the dataset, as there may be bias in the form of sexism. While it is challenging to detect sexism, we attempt to offset this type of bias by collecting a set of interesting and independent characters. We do this by seeding workers with examples like adventurer with the persona I am a woman passionate about exploring a world I have not yet seen. I embark on ambitious adventures. We give the additional instruction to attempt to create diverse characters. Even with this instruction, crowdworkers still created roughly 3x as many male-gendered characters as female-gendered characters. We exclude male-gendered characters created in this fashion. In combination with the gender swapped personas above, this yields a new set of 2,676 character personas (compared to 1,877 from the original dataset), for which the number of men and women and the number of references to male or female gendered words is roughly balanced: see Table TABREF5. ### Methodology: Mitigating Bias in Generative Dialogue ::: Positive-Bias Data Collection ::: New dialogues
Finally, we collect additional dialogues with these newly created gender balanced character personas, favoring conversations that feature female gendered characters to offset the imbalance in the original data. We added further instructions for annotators to be mindful of gender bias during their conversations, and in particular to assume equality between genders – social, economic, political, or otherwise – in this fantasy setting. In total, we collect 507 new dialogues containing 6,658 new dialogue utterances in total (about 6% of the size of the full LIGHT dataset). ### Methodology: Mitigating Bias in Generative Dialogue ::: Conditional Training
Bias in dialogue can manifest itself in various forms, but one form is the imbalanced use of gendered words. For example, LIGHT contains far more male-gendered words than female-gendered words rather than an even split between words of both genders. To create models that can generate a gender-balanced number of gendered words, we propose Conditional Training (CT) for controlling generative model output BIBREF27, BIBREF28, BIBREF29, BIBREF30. Previous work proposed a mechanism to train models with specific control tokens so models learn to associate the control token with the desired text properties BIBREF28, then modifying the control tokens during inference to produce the desired result. Prior to training, each dialogue response is binned into one of four bins – $\text{F}^{0/+}\text{M}^{0/+}$ – where $\text{F}^{0}$ indicates that there are zero female gendered words in the response and $\text{F}^{+}$ indicates the presence of at least one female gendered word. The gendered words are determined via an aggregation of existing lists of gendered nouns and adjectives from BIBREF21, BIBREF22, BIBREF23. The bins are used to train a conditional model by appending a special token (indicating the bin for the target response) to the end of the input which is given to the encoder. At inference time, the bins can be manipulated to produce dialogue outputs with various quantities of gendered words. ### Results
We train generative Transformer models using each of these methods – Counterfactual Data Augmentation that augments with swaps of gendered words (CDA, §SECREF19), adding new dialogues (Positive-Bias Data Collection, §SECREF20), and controllable generation to control the quantity of gendered words (CT, §SECREF24) – and finally combine all of these methods together (ALL). ### Results ::: Bias is Amplified in Generation
Existing Transformer generative dialogue models BIBREF31, BIBREF32, BIBREF0 are trained to take as input the dialogue context and generate the next utterance. Previous work has shown that machine learning models reflect the biases present in data BIBREF4, BIBREF3, and that these biases can be easy to learn compared to more challenging reasoning BIBREF2, BIBREF33. Generative models often use beam search or top-k sampling BIBREF34 to decode, and these methods are well-known to produce generic text BIBREF35, which makes them susceptible to statistical biases present in datasets. As shown in Table TABREF11, we find that existing models actually amplify bias. When the trained model generates gendered words (i.e., words from our gendered word list), it generates male-gendered words the vast majority of the time – even on utterances for which it is supposed to generate only female-gendered words (i.e., the gold label only contains female-gendered words), it generates male-gendered words nearly $78\%$ of the time. Additionally, following BIBREF8, we run an offensive language classifier on the gold responses and the model generated utterances (Table TABREF16) and find that the model produces more offensive utterances than exist in the dataset. ### Results ::: Genderedness of Generated Text
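The analysis below scores generated responses partly by unigram F1 word overlap with the gold response. A minimal sketch of that metric, assuming simple whitespace tokenization (the exact tokenizer is not specified in the text):

```python
from collections import Counter

def f1_overlap(pred: str, gold: str) -> float:
    """Harmonic mean of unigram precision and recall between two strings,
    using multiset (Counter) intersection of lowercased tokens."""
    p_tokens = pred.lower().split()
    g_tokens = gold.lower().split()
    common = sum((Counter(p_tokens) & Counter(g_tokens)).values())
    if common == 0:
        return 0.0
    precision = common / len(p_tokens)
    recall = common / len(g_tokens)
    return 2 * precision * recall / (precision + recall)

print(f1_overlap("i am the queen", "i am a queen"))  # 3/4 precision and recall -> 0.75
```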
We analyze the performance of the various techniques by dividing the test set using the four genderedness bins – $\text{F}^{0}\text{M}^{0}$, $\text{F}^{0}\text{M}^{+}$, $\text{F}^{+}\text{M}^{0}$, and $\text{F}^{+}\text{M}^{+}$ – and calculate the F1 word overlap with the gold response, the percentage of gendered words generated (% gend. words), and the percentage of male-gendered words generated (relative to the sum total of gendered words generated by the model). We compare to the gold labels from the test set and a baseline model that does not use any of the bias mitigation techniques. Results for all methods are displayed in Table TABREF11. Each of the methods we explore improves in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find that combining all methods in one – the ALL model – is the most advantageous. While ALL has more data than CDA and CT, more data alone is not enough — the Positive-Bias Data Collection model does not achieve as good results. Both the CT and ALL models benefit from knowing the data split ($\text{F}^{0}\text{M}^{0}$, for example), and both models yield a genderedness ratio closest to ground truth. ### Results ::: Conditional Training Controls Gendered Words
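The conditional-training control described above can be assigned mechanically: each target response is checked for the presence of female- and male-gendered words, and the resulting bin token is appended to the encoder input. A sketch under the same caveat that the word lists here are illustrative stand-ins for the aggregated published lists:

```python
# Illustrative stand-in word lists, not the published ones.
MALE_WORDS = {"he", "him", "his", "king", "father", "man"}
FEMALE_WORDS = {"she", "her", "queen", "mother", "woman"}

def genderedness_bin(response: str) -> str:
    """Return one of the four control bins, e.g. 'F0M+' for a response with
    no female-gendered words and at least one male-gendered word.
    Uses a naive whitespace split for tokenization."""
    tokens = response.lower().split()
    f = "+" if any(t in FEMALE_WORDS for t in tokens) else "0"
    m = "+" if any(t in MALE_WORDS for t in tokens) else "0"
    return f"F{f}M{m}"

def add_control_token(context: str, target: str) -> str:
    """Append the target's bin token to the encoder input at training time;
    at inference the token is simply set to the desired bin (e.g. F0M0)."""
    return f"{context} {genderedness_bin(target)}"

print(genderedness_bin("the king rode out"))  # F0M+
print(add_control_token("Greetings, traveler!", "she is the queen"))
```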
Our proposed CT method can be used to control the use of gendered words in generated dialogues. We examine the effect of such training by generating responses on the test set by conditioning the ALL model on a singular bin for all examples. Results are shown in Figure FIGREF12. Changing the bin radically changes the genderedness of generated text without significant changes to F1. Examples of generated text from both the baseline and the ALL model are shown in Table TABREF31. The baseline model generates male-gendered words even when the gold response contains no gendered words or only female-gendered words, even generating unlikely sequences such as “my name is abigail. i am the king of this kingdom.". ### Results ::: Safety of Generated Text
Using a dialogue safety classifier BIBREF24, we find that our proposed de-biased models are rated as less offensive compared to the baseline generative Transformer and the LIGHT data (see Table TABREF16). ### Results ::: Human Evaluation
Finally, we use human evaluation to compare the quality of our de-biasing methods. We use the dialogue evaluation system Acute-Eval BIBREF36 to ask human evaluators to compare two conversations from different models and decide which model is more biased and which model is more engaging. Following Acute-Eval, we collect 100 human and model paired chats. Conversations from a human and baseline model are compared to conversations from a human and the ALL model with all generations set to the $\text{F}^{0}\text{M}^{0}$ gender-neutral control bin. Evaluators are asked which model is more engaging and for which model they find it more difficult to predict the gender of the speaker. We found that asking about difficulty of predicting a speaker's gender was much more effective than asking evaluators to evaluate sexism or gender bias. Figure FIGREF17 shows that evaluators rate the ALL model harder to predict the gender of (statistically significant at $p < 0.01$) while engagingness does not change. Our proposed methods are able to mitigate gender bias without degrading dialogue quality. ### Conclusion
We analyze gender bias in dialogue and propose a general purpose method for understanding and mitigating bias in character personas and their associated dialogues. We present techniques using data augmentation and controllable generation to reduce gender bias in neural language generation for dialogue. We use the dataset LIGHT as a testbed for this work. By integrating these methods together, our models provide control over how gendered dialogue is and decrease the offensiveness of the generated utterances. Overall, our proposed methodology reduces the effect of bias while maintaining dialogue engagingness. Table 1: Character persona examples from the LIGHT dataset. While there are relatively few examples of femalegendered personas, many of the existing ones exhibit bias. None of these personas were flagged by annotators during a review for offensive content. Table 2: An example dialogue from the LIGHT dataset, with the persona for the wife character provided. Bias from the persona informs and effects the dialogue task. Table 3: Analysis of gender in LIGHT Characters: the original dataset contains 1.6× as many male-gendered characters as female-gendered characters. New characters are collected to offset this imbalance. Table 4: We compare the performance of various bias mitigation methods – Counterfactual Data Augmentation (CDA), Positive-Bias Data Collection (Pos. Data), Conditional Training (CT), and combining these methods (ALL) – on the LIGHT test set, splitting the test set across the four genderedness bins: F0/+M0/+. X0 indicates there are no X-gendered words in the gold response, while, X+ indicates that there is at least one. We measure the percent of gendered words in the generated utterances (% gend. words) and the percent of male bias (% male bias), i.e. the percent of male-gendered words among all gendered words generated. 
While each of these methods yields some improvement, combining all of these methods in one yields the best control over the genderedness of the utterances while still maintaining a good F1-score. Figure 1: Comparing the performance of the ALL de-bias model when we fix the conditioning to a specific bin for all examples at test time. We report results for each possible conditioning bin choice. Across bins, the model maintains performance whilst radically changing the genderedness of the language generated. Table 5: Offensive language classification of model responses on the LIGHT dialogue test set. Figure 2: Human Evaluation of ALL model compared to baseline Transformer generative model. The control bins in ALL are set to F0M0 to reduce gendered words. Evaluators find it harder to predict the speaker gender when using our proposed techniques, while model engagingness is not affected by the method. Table 6: Example generations from the baseline model and the proposed de-biased models. In these examples, the gold truth either contains no gendered words or only female-gendered words, but the baseline model generates male-gendered words.
|
Transformer generation model
|
What is the relationship between Caswell and the protagonist?
A. They're coworkers
B. One is the other's boss
C. They're old friends
D. They're brothers
|
The Snowball Effect By KATHERINE MacLEAN Illustrated by EMSH [Transcriber's Note: This etext was produced from Galaxy Science Fiction September 1952. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Tack power drives on a sewing circle and you can needle the world into the darndest mess! "All right," I said, "what is sociology good for?" Wilton Caswell, Ph.D., was head of my Sociology Department, and right then he was mad enough to chew nails. On the office wall behind him were three or four framed documents in Latin that were supposed to be signs of great learning, but I didn't care at that moment if he papered the walls with his degrees. I had been appointed dean and president to see to it that the university made money. I had a job to do, and I meant to do it. He bit off each word with great restraint: "Sociology is the study of social institutions, Mr. Halloway." I tried to make him understand my position. "Look, it's the big-money men who are supposed to be contributing to the support of this college. To them, sociology sounds like socialism—nothing can sound worse than that—and an institution is where they put Aunt Maggy when she began collecting Wheaties in a stamp album. We can't appeal to them that way. Come on now." I smiled condescendingly, knowing it would irritate him. "What are you doing that's worth anything?" He glared at me, his white hair bristling and his nostrils dilated like a war horse about to whinny. I can say one thing for them—these scientists and professors always keep themselves well under control. He had a book in his hand and I was expecting him to throw it, but he spoke instead: "This department's analysis of institutional accretion, by the use of open system mathematics, has been recognized as an outstanding and valuable contribution to—" The words were impressive, whatever they meant, but this still didn't sound like anything that would pull in money. 
I interrupted, "Valuable in what way?" He sat down on the edge of his desk thoughtfully, apparently recovering from the shock of being asked to produce something solid for his position, and ran his eyes over the titles of the books that lined his office walls. "Well, sociology has been valuable to business in initiating worker efficiency and group motivation studies, which they now use in management decisions. And, of course, since the depression, Washington has been using sociological studies of employment, labor and standards of living as a basis for its general policies of—" I stopped him with both raised hands. "Please, Professor Caswell! That would hardly be a recommendation. Washington, the New Deal and the present Administration are somewhat touchy subjects to the men I have to deal with. They consider its value debatable, if you know what I mean. If they got the idea that sociology professors are giving advice and guidance—No, we have to stick to brass tacks and leave Washington out of this. What, specifically, has the work of this specific department done that would make it as worthy to receive money as—say, a heart disease research fund?" He began to tap the corner of his book absently on the desk, watching me. "Fundamental research doesn't show immediate effects, Mr. Halloway, but its value is recognized." I smiled and took out my pipe. "All right, tell me about it. Maybe I'll recognize its value." Prof. Caswell smiled back tightly. He knew his department was at stake. The other departments were popular with donors and pulled in gift money by scholarships and fellowships, and supported their professors and graduate students by research contracts with the government and industry. Caswell had to show a way to make his own department popular—or else. I couldn't fire him directly, of course, but there are ways of doing it indirectly. He laid down his book and ran a hand over his ruffled hair. 
"Institutions—organizations, that is—" his voice became more resonant; like most professors, when he had to explain something he instinctively slipped into his platform lecture mannerisms, and began to deliver an essay—"have certain tendencies built into the way they happen to have been organized, which cause them to expand or contract without reference to the needs they were founded to serve." He was becoming flushed with the pleasure of explaining his subject. "All through the ages, it has been a matter of wonder and dismay to men that a simple organization—such as a church to worship in, or a delegation of weapons to a warrior class merely for defense against an outside enemy—will either grow insensately and extend its control until it is a tyranny over their whole lives, or, like other organizations set up to serve a vital need, will tend to repeatedly dwindle and vanish, and have to be painfully rebuilt. "The reason can be traced to little quirks in the way they were organized, a matter of positive and negative power feedbacks. Such simple questions as, 'Is there a way a holder of authority in this organization can use the power available to him to increase his power?' provide the key. But it still could not be handled until the complex questions of interacting motives and long-range accumulations of minor effects could somehow be simplified and formulated. In working on the problem, I found that the mathematics of open system, as introduced to biology by Ludwig von Bertalanffy and George Kreezer, could be used as a base that would enable me to develop a specifically social mathematics, expressing the human factors of intermeshing authority and motives in simple formulas. "By these formulations, it is possible to determine automatically the amount of growth and period of life of any organization. The UN, to choose an unfortunate example, is a shrinker type organization. 
Its monetary support is not in the hands of those who personally benefit by its governmental activities, but, instead, in the hands of those who would personally lose by any extension and encroachment of its authority on their own. Yet by the use of formula analysis—" "That's theory," I said. "How about proof?" "My equations are already being used in the study of limited-size Federal corporations. Washington—" I held up my palm again. "Please, not that nasty word again. I mean, where else has it been put into operation? Just a simple demonstration, something to show that it works, that's all." He looked away from me thoughtfully, picked up the book and began to tap it on the desk again. It had some unreadable title and his name on it in gold letters. I got the distinct impression again that he was repressing an urge to hit me with it. He spoke quietly. "All right, I'll give you a demonstration. Are you willing to wait six months?" "Certainly, if you can show me something at the end of that time." Reminded of time, I glanced at my watch and stood up. "Could we discuss this over lunch?" he asked. "I wouldn't mind hearing more, but I'm having lunch with some executors of a millionaire's will. They have to be convinced that by, 'furtherance of research into human ills,' he meant that the money should go to research fellowships for postgraduate biologists at the university, rather than to a medical foundation." "I see you have your problems, too," Caswell said, conceding me nothing. He extended his hand with a chilly smile. "Well, good afternoon, Mr. Halloway. I'm glad we had this talk." I shook hands and left him standing there, sure of his place in the progress of science and the respect of his colleagues, yet seething inside because I, the president and dean, had boorishly demanded that he produce something tangible. I frankly didn't give a hoot if he blew his lid. My job isn't easy. 
For a crumb of favorable publicity and respect in the newspapers and an annual ceremony in a silly costume, I spend the rest of the year going hat in hand, asking politely for money at everyone's door, like a well-dressed panhandler, and trying to manage the university on the dribble I get. As far as I was concerned, a department had to support itself or be cut down to what student tuition pays for, which is a handful of over-crowded courses taught by an assistant lecturer. Caswell had to make it work or get out. But the more I thought about it, the more I wanted to hear what he was going to do for a demonstration. At lunch, three days later, while we were waiting for our order, he opened a small notebook. "Ever hear of feedback effects?" "Not enough to have it clear." "You know the snowball effect, though." "Sure, start a snowball rolling downhill and it grows." "Well, now—" He wrote a short line of symbols on a blank page and turned the notebook around for me to inspect it. "Here's the formula for the snowball process. It's the basic general growth formula—covers everything." It was a row of little symbols arranged like an algebra equation. One was a concentric spiral going up, like a cross-section of a snowball rolling in snow. That was a growth sign. I hadn't expected to understand the equation, but it was almost as clear as a sentence. I was impressed and slightly intimidated by it. He had already explained enough so that I knew that, if he was right, here was the growth of the Catholic Church and the Roman Empire, the conquests of Alexander and the spread of the smoking habit and the change and rigidity of the unwritten law of styles. "Is it really as simple as that?" I asked. "You notice," he said, "that when it becomes too heavy for the cohesion strength of snow, it breaks apart. Now in human terms—" The chops and mashed potatoes and peas arrived. "Go on," I urged. He was deep in the symbology of human motives and the equations of human behavior in groups. 
After running through a few different types of grower and shrinker type organizations, we came back to the snowball, and decided to run the test by making something grow. "You add the motives," he said, "and the equation will translate them into organization." "How about a good selfish reason for the ins to drag others into the group—some sort of bounty on new members, a cut of their membership fee?" I suggested uncertainly, feeling slightly foolish. "And maybe a reason why the members would lose if any of them resigned, and some indirect way they could use to force each other to stay in." "The first is the chain letter principle," he nodded. "I've got that. The other...." He put the symbols through some mathematical manipulation so that a special grouping appeared in the middle of the equation. "That's it." Since I seemed to have the right idea, I suggested some more, and he added some, and juggled them around in different patterns. We threw out a few that would have made the organization too complicated, and finally worked out an idyllically simple and deadly little organization setup where joining had all the temptation of buying a sweepstakes ticket, going in deeper was as easy as hanging around a race track, and getting out was like trying to pull free from a Malayan thumb trap. We put our heads closer together and talked lower, picking the best place for the demonstration. "Abington?" "How about Watashaw? I have some student sociological surveys of it already. We can pick a suitable group from that." "This demonstration has got to be convincing. We'd better pick a little group that no one in his right mind would expect to grow." "There should be a suitable club—" Picture Professor Caswell, head of the Department of Sociology, and with him the President of the University, leaning across the table toward each other, sipping coffee and talking in conspiratorial tones over something they were writing in a notebook. That was us. 
"Ladies," said the skinny female chairman of the Watashaw Sewing Circle. "Today we have guests." She signaled for us to rise, and we stood up, bowing to polite applause and smiles. "Professor Caswell, and Professor Smith." (My alias.) "They are making a survey of the methods and duties of the clubs of Watashaw." We sat down to another ripple of applause and slightly wider smiles, and then the meeting of the Watashaw Sewing Circle began. In five minutes I began to feel sleepy. There were only about thirty people there, and it was a small room, not the halls of Congress, but they discussed their business of collecting and repairing second hand clothing for charity with the same endless boring parliamentary formality. I pointed out to Caswell the member I thought would be the natural leader, a tall, well-built woman in a green suit, with conscious gestures and a resonant, penetrating voice, and then went into a half doze while Caswell stayed awake beside me and wrote in his notebook. After a while the resonant voice roused me to attention for a moment. It was the tall woman holding the floor over some collective dereliction of the club. She was being scathing. I nudged Caswell and murmured, "Did you fix it so that a shover has a better chance of getting into office than a non-shover?" "I think there's a way they could find for it," Caswell whispered back, and went to work on his equation again. "Yes, several ways to bias the elections." "Good. Point them out tactfully to the one you select. Not as if she'd use such methods, but just as an example of the reason why only she can be trusted with initiating the change. Just mention all the personal advantages an unscrupulous person could have." He nodded, keeping a straight and sober face as if we were exchanging admiring remarks about the techniques of clothes repairing, instead of conspiring. 
After the meeting, Caswell drew the tall woman in the green suit aside and spoke to her confidentially, showing her the diagram of organization we had drawn up. I saw the responsive glitter in the woman's eyes and knew she was hooked. We left the diagram of organization and our typed copy of the new bylaws with her and went off soberly, as befitted two social science experimenters. We didn't start laughing until our car passed the town limits and began the climb for University Heights. If Caswell's equations meant anything at all, we had given that sewing circle more growth drives than the Roman Empire. Four months later I had time out from a very busy schedule to wonder how the test was coming along. Passing Caswell's office, I put my head in. He looked up from a student research paper he was correcting. "Caswell, about that sewing club business—I'm beginning to feel the suspense. Could I get an advance report on how it's coming?" "I'm not following it. We're supposed to let it run the full six months." "But I'm curious. Could I get in touch with that woman—what's her name?" "Searles. Mrs. George Searles." "Would that change the results?" "Not in the slightest. If you want to graph the membership rise, it should be going up in a log curve, probably doubling every so often." I grinned. "If it's not rising, you're fired." He grinned back. "If it's not rising, you won't have to fire me—I'll burn my books and shoot myself." I returned to my office and put in a call to Watashaw. While I was waiting for the phone to be answered, I took a piece of graph paper and ruled it off into six sections, one for each month. After the phone had rung in the distance for a long time, a servant answered with a bored drawl: "Mrs. Searles' residence." I picked up a red gummed star and licked it. "Mrs. Searles, please." "She's not in just now. Could I take a message?" I placed the star at the thirty line in the beginning of the first section. Thirty members they'd started with. 
"No, thanks. Could you tell me when she'll be back?" "Not until dinner. She's at the meetin'." "The sewing club?" I asked. "No, sir, not that thing. There isn't any Sewing club any more, not for a long time. She's at the Civic Welfare meeting." Somehow I hadn't expected anything like that. "Thank you," I said and hung up, and after a moment noticed I was holding a box of red gummed stars in my hand. I closed it and put it down on top of the graph of membership in the sewing circle. No more members.... Poor Caswell. The bet between us was ironclad. He wouldn't let me back down on it even if I wanted to. He'd probably quit before I put through the first slow move to fire him. His professional pride would be shattered, sunk without a trace. I remembered what he said about shooting himself. It had seemed funny to both of us at the time, but.... What a mess that would make for the university. I had to talk to Mrs. Searles. Perhaps there was some outside reason why the club had disbanded. Perhaps it had not just died. I called back. "This is Professor Smith," I said, giving the alias I had used before. "I called a few minutes ago. When did you say Mrs. Searles will return?" "About six-thirty or seven o'clock." Five hours to wait. And what if Caswell asked me what I had found out in the meantime? I didn't want to tell him anything until I had talked it over with that woman Searles first. "Where is this Civic Welfare meeting?" She told me. Five minutes later, I was in my car, heading for Watashaw, driving considerably faster than my usual speed and keeping a careful watch for highway patrol cars as the speedometer climbed. The town meeting hall and theater was a big place, probably with lots of small rooms for different clubs. I went in through the center door and found myself in the huge central hall where some sort of rally was being held. 
A political-type rally—you know, cheers and chants, with bunting already down on the floor, people holding banners, and plenty of enthusiasm and excitement in the air. Someone was making a speech up on the platform. Most of the people there were women. I wondered how the Civic Welfare League could dare hold its meeting at the same time as a political rally that could pull its members away. The group with Mrs. Searles was probably holding a shrunken and almost memberless meeting somewhere in an upper room. There probably was a side door that would lead upstairs. While I glanced around, a pretty girl usher put a printed bulletin in my hand, whispering, "Here's one of the new copies." As I attempted to hand it back, she retreated. "Oh, you can keep it. It's the new one. Everyone's supposed to have it. We've just printed up six thousand copies to make sure there'll be enough to last." The tall woman on the platform had been making a driving, forceful speech about some plans for rebuilding Watashaw's slum section. It began to penetrate my mind dimly as I glanced down at the bulletin in my hands. "Civic Welfare League of Watashaw. The United Organization of Church and Secular Charities." That's what it said. Below began the rules of membership. I looked up. The speaker, with a clear, determined voice and conscious, forceful gestures, had entered the homestretch of her speech, an appeal to the civic pride of all citizens of Watashaw. "With a bright and glorious future—potentially without poor and without uncared-for ill—potentially with no ugliness, no vistas which are not beautiful—the best people in the best planned town in the country—the jewel of the United States." She paused and then leaned forward intensely, striking her clenched hand on the speaker's stand with each word for emphasis. "All we need is more members. Now get out there and recruit!" I finally recognized Mrs. Searles, as an answering sudden blast of sound half deafened me.
The crowd was chanting at the top of its lungs: "Recruit! Recruit!" Mrs. Searles stood still at the speaker's table and behind her, seated in a row of chairs, was a group that was probably the board of directors. It was mostly women, and the women began to look vaguely familiar, as if they could be members of the sewing circle. I put my lips close to the ear of the pretty usher while I turned over the stiff printed bulletin on a hunch. "How long has the League been organized?" On the back of the bulletin was a constitution. She was cheering with the crowd, her eyes sparkling. "I don't know," she answered between cheers. "I only joined two days ago. Isn't it wonderful?" I went into the quiet outer air and got into my car with my skin prickling. Even as I drove away, I could hear them. They were singing some kind of organization song with the tune of "Marching through Georgia." Even at the single glance I had given it, the constitution looked exactly like the one we had given the Watashaw Sewing Circle. All I told Caswell when I got back was that the sewing circle had changed its name and the membership seemed to be rising. Next day, after calling Mrs. Searles, I placed some red stars on my graph for the first three months. They made a nice curve, rising more steeply as it reached the fourth month. They had picked up their first increase in membership simply by amalgamating with all the other types of charity organizations in Watashaw, changing the club name with each fusion, but keeping the same constitution—the constitution with the bright promise of advantages as long as there were always new members being brought in. By the fifth month, the League had added a mutual baby-sitting service and had induced the local school board to add a nursery school to the town service, so as to free more women for League activity. But charity must have been completely organized by then, and expansion had to be in other directions. 
Some real estate agents evidently had been drawn into the whirlpool early, along with their ideas. The slum improvement plans began to blossom and take on a tinge of real estate planning later in the month. The first day of the sixth month, a big two page spread appeared in the local paper of a mass meeting which had approved a full-fledged scheme for slum clearance of Watashaw's shack-town section, plus plans for rehousing, civic building, and rezoning. And good prospects for attracting some new industries to the town, industries which had already been contacted and seemed interested by the privileges offered. And with all this, an arrangement for securing and distributing to the club members alone most of the profit that would come to the town in the form of a rise in the price of building sites and a boom in the building industry. The profit distributing arrangement was the same one that had been built into the organization plan for the distribution of the small profits of membership fees and honorary promotions. It was becoming an openly profitable business. Membership was rising more rapidly now. By the second week of the sixth month, news appeared in the local paper that the club had filed an application to incorporate itself as the Watashaw Mutual Trade and Civic Development Corporation, and all the local real estate promoters had finished joining en masse. The Mutual Trade part sounded to me as if the Chamber of Commerce was on the point of being pulled in with them, ideas, ambitions and all. I chuckled while reading the next page of the paper, on which a local politician was reported as having addressed the club with a long flowery oration on their enterprise, charity, and civic spirit. He had been made an honorary member. If he allowed himself to be made a full member with its contractual obligations and its lures, if the politicians went into this, too.... I laughed, filing the newspaper with the other documents on the Watashaw test. 
These proofs would fascinate any businessman with the sense to see where his bread was buttered. A businessman is constantly dealing with organizations, including his own, and finding them either inert, cantankerous, or both. Caswell's formula could be a handle to grasp them with. Gratitude alone would bring money into the university in carload lots. The end of the sixth month came. The test was over and the end reports were spectacular. Caswell's formulas were proven to the hilt. After reading the last newspaper reports, I called him up. "Perfect, Wilt, perfect! I can use this Watashaw thing to get you so many fellowships and scholarships and grants for your department that you'll think it's snowing money!" He answered somewhat disinterestedly, "I've been busy working with students on their research papers and marking tests—not following the Watashaw business at all, I'm afraid. You say the demonstration went well and you're satisfied?" He was definitely putting on a chill. We were friends now, but obviously he was still peeved whenever he was reminded that I had doubted that his theory could work. And he was using its success to rub my nose in the realization that I had been wrong. A man with a string of degrees after his name is just as human as anyone else. I had needled him pretty hard that first time. "I'm satisfied," I acknowledged. "I was wrong. The formulas work beautifully. Come over and see my file of documents on it if you want a boost for your ego. Now let's see the formula for stopping it." He sounded cheerful again. "I didn't complicate that organization with negatives. I wanted it to grow. It falls apart naturally when it stops growing for more than two months. It's like the great stock boom before an economic crash. Everyone in it is prosperous as long as the prices just keep going up and new buyers come into the market, but they all knew what would happen if it stopped growing.
You remember, we built in as one of the incentives that the members know they are going to lose if membership stops growing. Why, if I tried to stop it now, they'd cut my throat." I remembered the drive and frenzy of the crowd in the one early meeting I had seen. They probably would. "No," he continued. "We'll just let it play out to the end of its tether and die of old age." "When will that be?" "It can't grow past the female population of the town. There are only so many women in Watashaw, and some of them don't like sewing." The graph on the desk before me began to look sinister. Surely Caswell must have made some provision for— "You underestimate their ingenuity," I said into the phone. "Since they wanted to expand, they didn't stick to sewing. They went from general charity to social welfare schemes to something that's pretty close to an incorporated government. The name is now the Watashaw Mutual Trade and Civic Development Corporation, and they're filing an application to change it to Civic Property Pool and Social Dividend, membership contractual, open to all. That social dividend sounds like a Technocrat climbed on the band wagon, eh?" While I spoke, I carefully added another red star to the curve above the thousand member level, checking with the newspaper that still lay open on my desk. The curve was definitely some sort of log curve now, growing more rapidly with each increase. "Leaving out practical limitations for a moment, where does the formula say it will stop?" I asked. "When you run out of people to join it. But after all, there are only so many people in Watashaw. It's a pretty small town." "They've opened a branch office in New York," I said carefully into the phone, a few weeks later. With my pencil, very carefully, I extended the membership curve from where it was then. After the next doubling, the curve went almost straight up and off the page. 
Allowing for a lag of contagion from one nation to another, depending on how much their citizens intermingled, I'd give the rest of the world about twelve years. There was a long silence while Caswell probably drew the same graph in his own mind. Then he laughed weakly. "Well, you asked me for a demonstration." That was as good an answer as any. We got together and had lunch in a bar, if you can call it lunch. The movement we started will expand by hook or by crook, by seduction or by bribery or by propaganda or by conquest, but it will expand. And maybe a total world government will be a fine thing—until it hits the end of its rope in twelve years or so. What happens then, I don't know. But I don't want anyone to pin that on me. From now on, if anyone asks me, I've never heard of Watashaw.
Given the ink markings used during the left subcutaneous mastectomy in Linda Mayer, which color corresponded to the caudal margin?
Choose the correct answer from the following options:
A. Blue
B. Black
C. Yellow
D. Green
E. No color, caudal margins will not be marked.
### Patient Report 0
**Dear colleague, **
We report to you about Mrs. Linda Mayer, born on 01/12/1948, who
presented to our outpatient clinic on 07/13/2019.
**Diagnoses:**
- BIRADS IV, recommended biopsy during breast diagnostics.
- Left breast carcinoma: iT1b; iN0; MX; ER: 12/12; PR: 2/12; Her-2:
neg; Ki67: 15%.
**Other Diagnoses: **
- Status post apoplexy
- Status post cataract surgery
- Status post right hip total hip replacement (THR)
- Pemphigus vulgaris under azathioprine therapy
- Osteoporosis
- Obesity with a BMI of 35
- Undergoing immunosuppressive therapy with prednisolone
**Family History:**
- Sister deceased at age 39 from breast cancer.
- Mother and grandmothers (maternal and paternal) were diagnosed with
  breast cancer.
**Medical History:** The CT thorax report indicates the presence of
inflammatory foci, warranting further follow-up. The relevant data was
documented and presented during the tumor conference. Subsequently, a
telephone conversation was conducted with the patient to discuss the
next steps.
**Tumor board decision from 07/13/2019:**
**Imaging: **
1) MRI examination detected a unifocal lesion on the left external
aspect, measuring approximately 2.4 cm in size.
2) CT scan (thorax/abdomen 07/12/2019) revealed a previously known
liver lesion, likely a hemangioma. No evidence of metastases was
identified. Nonspecific, small foci were observed in the lungs,
likely indicative of post-inflammatory changes.
**Recommendations:**
1. If no metastasis (M0): Fast-track BRCA testing is recommended.
2. If BRCA testing returns negative: Proceed with a selective excision
of the left breast after ultrasound-guided fine needle marking and
sentinel lymph node biopsy on the left side. Additionally, perform
Endopredict analysis on the surgical specimen.
**Current Medication:**

| Medication                    | Dosage     | Route | Frequency  |
|-------------------------------|------------|-------|------------|
| Aspirin                       | 100mg      | Oral  | 1-0-0      |
| Simvastatin (Zocor)           | 40mg       | Oral  | 0-1-0      |
| Haloperidol (Haldol)          | 100mg      | Oral  | ½-0-½      |
| Zopiclone (Imovane)           | 7.5mg      | Oral  | 0-0-1      |
| Trazodone (Desyrel)           | 100mg      | Oral  | 0-0-½      |
| Calcium Supplement (Caltrate) | 500mg      | Oral  | 1-0-1      |
| Nystatin (Bio-Statin)         | As advised | Oral  | 1-1-1-1    |
| Pantoprazole (Protonix)       | 40mg       | Oral  | 1-0-0      |
| Prednisolone (Prelone)        | 40mg       | Oral  | As advised |
| Tramadol/Naloxone (Ultram)    | 50/4mg     | Oral  | 1-0-1      |
| Acyclovir (Zovirax)           | 800mg      | Oral  | 1-1-1      |
**Mammography and Tomosynthesis from 07/08/2019:**
[Findings]{.underline}**: **During the inspection and palpation, no
significant findings were noted on either side. Some areas with higher
mammographic density were observed, which slightly limited the
assessment. However, during the initial examination, a small
architectural irregularity was identified on the outer left side. This
irregularity appeared as a small, roundish compression measuring
approximately 6mm and was visible only in the medio-lateral oblique
image, with a nipple distance of 8cm. Apart from this discovery, there
were no other suspicious focal findings on either side. No clustered or
irregular microcalcifications were detected. Additionally, a long-term,
unchanged observation noted some asymmetry with denser breast tissue
present on both sides, particularly on the outer aspects. Sonographic
evaluation posed challenges due to the mixed echogenic glandular tissue.
As a possible corresponding feature to the questionable architectural
irregularity on the outer left side, a blurred, echo-poor area with a
vertical alignment measuring about 7x5mm was identified. Importantly, no
other suspicious focal findings were observed, and there was no evidence
of enlarged lymph nodes in the axilla on both sides.
[Assessment]{.underline}**:** The observed finding on the left side
presents an uncertain nature, categorized as BIRADS IVb. In contrast,
the finding on the right side appears benign, categorized as BIRADS II.
To gain a more conclusive understanding of the left-sided finding, we
recommend a histological assessment through a sonographically guided
high-speed punch biopsy. An appointment has been scheduled with the
patient to proceed with this biopsy and obtain a definitive
diagnosis.

**Current Recommendations:**\
A fast-track decision will be made regarding tumor genetics, and the
patient will be notified of the appointment via telephone. The patient
should bring the pathology blocks from Fairview Clinic on the day of
blood collection for genetic testing, along with a referral for an
Endopredict test. A multidisciplinary team meeting will be convened
after the Endopredict test and genetic testing results are available. If
there is persistence or worsening of symptoms, we strongly advise the
patient to seek immediate re-evaluation. Additionally, outside of
regular office hours, the patient can seek assistance at the emergency
care unit in case of emergency.
**MRI from 07/11/2019:**
[Technique:]{.underline} Breast MRI (3T scanner) with dedicated mammary
surface coil:
[Findings:]{.underline} The overall contrast enhancement was observed
bilaterally to evaluate the Grade II findings. There was low to moderate
small-spotted contrast enhancement with slightly limited assessability.
The contrast dynamics revealed a patchy, confluent, blurred, and
elongated contrast enhancement, corresponding to the primary lesion,
which measured approximately 2.4 cm on the lower left exterior. Single
spicules were noted, and the lesion appeared hypointense in T1w imaging.
No suspicious focal findings with contrast enhancement were detected on
the right side. Small axillary lymph nodes were observed on the left
side, but they did not appear suspicious based on MR morphology.
Additionally, there were no suspicious lymph nodes on the right side.
[Assessment:]{.underline} A unifocal primary lesion measuring
approximately 2.4 cm in diameter was identified on the lower left
exterior. It exhibited patchy confluent enhancement and architectural
disturbance, with single spicules. No evidence of suspicious lymph nodes
was found. The left side is categorized as BIRADS 6, indicating a high
suspicion of malignancy, while the right side is categorized as BIRADS
2, indicating a benign finding.
### Patient Report 1
**Dear colleague, **
We are writing to provide you with an update on the medical condition of
Mrs. Linda Mayer, born on 01/12/1948, who attended our outpatient clinic
on 08/02/2019.
**Diagnoses:**
- Vacuum-assisted biopsy-confirmed ductal carcinoma in situ (DCIS) of
the right breast (17mm)
- Histological grade G3, estrogen receptor (ER) and progesterone
receptor (PR) negative.
- Postmenopausal for the past eight years.
- Previous surgical history includes an appendectomy.
- Allergies: Hay fever
**Current Presentation**: The patient sought consultation following a
confirmed diagnosis of DCIS (Ductal Carcinoma In Situ) in the right
breast, which was determined through a vacuum-assisted biopsy.
**Physical Examination**: Upon physical examination, there is evidence
of a post-intervention hematoma located in the upper right quadrant of
the right breast. However, the clip from the biopsy is not clearly
visible. A sonographic examination of the right axilla reveals no
abnormalities.
**Current Recommendations:**
- Imaging studies have been conducted.
- A case presentation is scheduled for our mammary conference
tomorrow.
- Subsequently, planning for surgery will commence, including the
evaluation of sentinel lymph nodes following a right mastectomy and
axillary lymph node dissection.
### Patient Report 2
**Dear colleague, **
We are writing to provide an update regarding Mrs. Linda Mayer, born on
01/12/1948, who received outpatient care at our facility on 08/29/2019.
**Diagnoses:**
- Vacuum-assisted biopsy-confirmed ductal carcinoma in situ (DCIS) of
the right breast, measuring 17mm in size, classified as Grade 3, and
testing negative for estrogen receptors (ER) and progesterone
receptors (PR).
- Mrs. Mayer has been postmenopausal for eight years.
- Notable allergy: Hay fever
**Tumor Board Decision:** Mammography imaging revealed a clip associated
with a focal finding in the right breast adjacent to calcifications.
[Recommendation]{.underline}: Proceed with sentinel lymph node
evaluation after right mastectomy, including clip localization on the
right side.
**Current Presentation**: During the patient's recent outpatient visit,
an extensive pre-operative consultation was conducted. This discussion
covered the indications for the surgery, details of the surgical
process, potential alternative options, as well as general and specific
risks associated with the procedure. These risks included the
possibility of an aesthetically suboptimal outcome and the chance of
encountering an R1 situation. The patient did not have any further
questions and provided written consent for the procedure.
**Physical Examination:** Both breasts appear normal upon inspection and
palpation. The right axilla shows no abnormalities.
**Medical History:** Mrs. Linda Mayer presented to our clinic with a
vacuum biopsy-confirmed DCIS of the right breast for therapeutic
intervention. The decision for surgery was reached following a
comprehensive review by our interdisciplinary breast board. After an
extensive discussion of the procedure's scope, associated risks, and
alternative options, the patient provided informed consent for the
proposed surgery.
**Preoperative Procedure:** Sonographic and mammographic fine needle
marking of the remaining findings and the clip in the right breast.
**Surgical Report:** Team time-out conducted with colleagues of
anesthesia. Patient positioned in the supine position. Surgical site
disinfection and sterile draping. Marking of the incision site.
A semicircular incision was made laterally on the right breast.
Visualization and dissection along the marking wire towards the marked
finding. Excision of the marked findings, with a safety margin of
approximately 1-2 cm. The excised specimen measured approximately 4 x 5
x 3 cm. Markings using standard protocol (green thread cranially, blue
thread ventrally). The excised specimen was sent for preparation
radiography. Hemostasis was meticulously ensured. Insertion of a 10Ch
Blake drain into the segmental cavity, followed by suturing.
Verification of a blood-dry wound cavity. Preparation radiography
included the marked area and the marking wires. The excised material was
transferred to our pathology colleagues for histological examination.
Subdermal and intracutaneous sutures with Monocryl 3/0 in a continuous
manner. Application of Steristrips and dressing. Instruments, swabs, and
cloths were accounted for per the nurse\'s checklist. The patient was
correctly positioned throughout the operation. The anesthesiologic
course was without significant problems. A thorax compression bandage
was applied in the operating room as a preventive measure against
bleeding.
**Postoperative Procedure:** Pain management, thrombosis prophylaxis,
application of a pressure dressing, drainage under suction.
**Examinations:** **Digital Mammography performed on 08/29/2019**
[Clinical indication]{.underline}: DCIS right
[Question]{.underline}: Please send specimen + Mx-FNM
**Findings**: Sonographically guided wire marking of the
microcalcification group, measuring up to about 12 mm. Local hematoma cavity
and inset clip marking directly cranial to the finding. Stitch direction
from lateral to medial. The wire is positioned with the tip caudal to
the clip in close proximity to the microcalcification. Additional
marking of the focal localization on the skin. Documentation of the wire
course in two planes.
- Telephone discussion of findings with the surgeon.
- Preparation radiography and preparation sonography are recommended.
- Marking wire and suspicious focal findings centrally included in the
preparation.
- Intraoperative report of findings has been conveyed to the surgeon.
**Current Recommendations:**
- Scheduled for inpatient admission on ward 22 tomorrow.
- Right breast mastectomy with sentinel lymph node evaluation.
### Patient Report 3
**Dear colleague, **
We are writing to update you on the clinical course of Mrs. Linda Mayer,
born on 01/12/1948, who was under our inpatient care from 08/30/2019 to
09/12/2019.
**Diagnosis:** Vacuum-assisted biopsy confirmed Ductal Carcinoma In Situ
(DCIS) in the right breast, measuring 17mm, Grade 3, ER/PR negative.
**Tumor Board Decision (07/13/2019):**
[Imaging:]{.underline} Clip identified in focal lesion in the right
breast, adjacent to calcifications.
[Recommendation]{.underline}**:** Segmental excision (SE) following
fine-needle localization with mammography-guided control of the clip in
the right breast.
[Subsequent Recommendation (08/27/2019):]{.underline} Radiation therapy
to the right breast. Regular follow-up is advised.
**Medical History:** Ms. Linda Mayer presented to our facility on
08/30/2019 for the aforementioned surgical procedure. After a
comprehensive discussion regarding the surgical plan, potential risks,
and possible complications, the patient consented to proceed. The
surgery was executed without complications on 09/01/2019. The
postoperative course was unremarkable, allowing for Ms. Mayer\'s
discharge on 09/12/2019 in stable condition and with no signs of wound
irritation.
**Histopathological Findings (09/01/2019):**
The resected segment from the right breast showed a maximum necrotic
zone of 1.6 cm with foreign body reaction, chronic resorptive
inflammation, fibrosis, and residual hemorrhage. These findings
primarily correspond to the pre-biopsy site. Surrounding this were areas
of DCIS with solid and cribriform growth patterns and comedonecrosis,
WHO Grade 3, Nuclear Grade 3, with a reconstructed extent of 3.5 cm.
Resection margins were as follows: ventral 0.15 cm, caudal 0.2 cm,
dorsal 0.4 cm, with remaining margins exceeding 0.5 cm. TNM
Classification (8th Edition, 2017): pTis (DCIS), R0, G3. Additional
immunohistochemical studies are underway to determine hormone receptor
status; a supplementary report will follow.
**Postoperative Plan:**
The patient was educated on standard postoperative care and the
importance of immediate re-evaluation for any persistent or worsening
symptoms. Radiation therapy to the right breast is planned, along with
regular follow-up appointments.
Should you have any questions or require further clarification, we are
readily available. For urgent concerns outside of regular office hours,
emergency care is available at the Emergency Department.
**Internal Histopathological Findings Report**
**Clinical Data:** DCIS in the right breast (17 mm), Grade 3, ER/PR
negative.
**Macroscopic Examination:**
The resected mammary segment from the right breast, marked with dual
threads and containing a fine-needle marker inserted ventro-laterally,
measures 4.5 x 5.5 x 3 cm (HxWxD) and weighs 35 grams. The specimen was
sectioned from medial to lateral into 14 lamellae. The cut surface
predominantly shows yellowish, lobulated mammary parenchyma with sparse
striated whitish glandular components. A DCIS-suspected area, up to 2.1
cm in size, is evident caudally and centro-ventrally (from lamellae
4-10), displaying both reddish-hemorrhagic and whitish-nodular
indurations. Minimal distances from the suspicious area to the resection
margins are as follows: cranial 2 cm, caudal 0.2 cm, dorsal 0.2 cm,
ventral 0.1 cm, medial 1.6 cm, lateral 2.5 cm. The suspect area was
completely embedded. Ink markings: green/cranial, yellow/caudal,
blue/ventral, black/dorsal.
**Microscopic Examination:**
Histological sections of the mammary parenchyma reveal fibro-lipomatous
stroma and glandular lobules with a two-layered epithelial lining. In
lamellae 3-6 and 11, solid and cribriform epithelial proliferations are
evident. Cells are cuboidal with variably enlarged, predominantly
moderately pleomorphic, round to oval nuclei. Comedo-like necroses are
occasionally observed in secondary lumina. Microscopic distances to the
deposition margins are consistent with the macroscopic findings. The
surrounding stroma in lamellae 6-9 shows extensive geographic adipose
tissue necrosis, multinucleated foreign body-type giant cells, foamy
cell macrophages, collagen fiber proliferation, and fresh hemorrhages.
**Supplemental Immunohistochemical Findings
(09/04/2019):** **Microscopy:** In the meantime, the material was
further processed as announced.
The previously described intraductal epithelial growths each show a
negative staining reaction for the estrogen and progesterone receptor
(with regular external and internal control reactions).
**Critical Findings:**
Resected mammary segment with a paracentral, max. 1.6 cm necrotic zone
with foreign body reaction, chronic resorptive inflammation, fibrosis,
and hemorrhage remnants (primarily corresponding to the pre-biopsy
site), and surrounding portions of ductal carcinoma in situ, solid and
cribriform growth type, with comedonecrosis, WHO grade 3, nuclear grade
3. The resection was locally complete with the following safety
margins: ventral 0.15 cm, caudal 0.2 cm, dorsal 0.4 cm, and the
remaining resection margins more than 0.5 cm.
TNM classification (8th edition 2017): pTis (DCIS), R0, G3.
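As a reading aid (purely illustrative, not part of the pathology report), the R0 designation above can be checked against the stated safety margins: a resection is locally complete when every measured margin is tumor-free, i.e. greater than 0 cm.

```python
# Illustrative sketch: check the reported resection margins against the
# R-classification. Values are transcribed from the findings above.
margins_cm = {"ventral": 0.15, "caudal": 0.20, "dorsal": 0.40}

def resection_status(margins: dict) -> str:
    """R0 if every margin is tumor-free (> 0 cm), otherwise R1."""
    return "R0" if all(m > 0 for m in margins.values()) else "R1"

print(resection_status(margins_cm))                       # -> R0
print(f"closest margin: {min(margins_cm.values())} cm")   # -> closest margin: 0.15 cm
```

The closest margin (ventral, 0.15 cm) is the one the report singles out, since it drives any re-excision decision.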
[Hormone receptor status:]{.underline}
- Estrogen receptor: negative (0%).
- Progesterone receptor: negative (0%).
### Patient Report 4
**Dear colleague, **
We are writing to provide an update regarding Mrs. Linda Mayer, born on
01/12/1948, who received outpatient treatment on 09/27/2019.
**Diagnoses**: Left breast carcinoma; iT1c; iN0; MX; ER:12/12; PR:2/12;
Her-2: neg; Ki67:15%, BRCA 2 mutation.
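The receptor scores quoted above (ER: 12/12, PR: 2/12) appear to follow the immunoreactive score (IRS) convention: the product of a proportion score (0-4) and a staining-intensity score (0-3), for a maximum of 12. A minimal sketch under that assumption (the function is illustrative, not taken from the record):

```python
def immunoreactive_score(proportion_score: int, intensity_score: int) -> int:
    """IRS = proportion of positive cells (scored 0-4) x staining intensity (0-3)."""
    assert 0 <= proportion_score <= 4, "proportion score out of range"
    assert 0 <= intensity_score <= 3, "intensity score out of range"
    return proportion_score * intensity_score

print(immunoreactive_score(4, 3))  # -> 12, reported as "ER: 12/12"
print(immunoreactive_score(2, 1))  # -> 2, compatible with "PR: 2/12"
```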
**Other Diagnoses**:
- Hailey-Hailey disease - currently regressing under prednisolone.
- History of apoplexy in 2016 with no residuals
- Depressive episodes
- Right hip total hip replacement
- History of left adnexectomy in 1980 due to extrauterine pregnancy
- Tubal sterilization in 1988.
- Uterine curettage (Abrasio) in 2004
- Hysterectomy in 2005
**Allergies**: Hay fever
**Imaging**:
- CT revealed a cystic lesion in the liver, not suspicious for
metastasis. Granulomatous, post-inflammatory changes in the lung.
- An MRI of the left breast showed a unifocal lesion on the outer left
side with a 2.4 cm extension.
**Histology:** Gene score of 6.5, indicating a high-risk profile (pT2
or pN1) if BRCA negative.
**Recommendation**: If BRCA negative, segmental excision (SE) of the
left breast after ultrasound-guided fine-needle marking, with
correlation in mammography, and SLNB on the left.
**Current Presentation**: Mrs. Linda Mayer presented for pre-operative
evaluation for left mastectomy. BRCA testing confirmed a BRCA2 mutation,
warranting bilateral subcutaneous mastectomy and SLNB on the left.
Reconstruction with implants and mesh is planned, along with a breast
lift as requested by the patient.
**Macroscopy:**
**Left Subcutaneous Mastectomy (Blue/Ventral, Green/Cranial):**
- Specimen Size: 17 x 15 x 6 cm (Height x Width x Depth), Weight: 410
g
- Description: Dual filament-labeled subcutaneous mastectomy specimen
- Specimen Workup: 27 lamellae from lateral to medial
- Tumor-Suspect Area (Lamellae 17-21): Max. 1.6 cm, white dermal,
partly blurred
- Margins from Tumor Area: Ventral 0.1 cm, Caudal 1 cm, Dorsal 1.2 cm,
Cranial \> 5 cm, Lateral \> 5 cm, Medial \> 2 cm
- Remaining Mammary Parenchyma: Predominantly yellowish lipomatous
with focal nodular appearance
- Ink Markings: Cranial/Green, Caudal/Yellow, Ventral/Blue,
Dorsal/Black
- A: Lamella 17 - Covers dorsal and caudal
- B: Lamella 18 - Covers ventral
- C: Lamella 19 - Covers ventral
    - D: Lamella 21 - Covers ventral
- E: Lamella 20 - Reference cranial
- F: Lamella 16 - Immediately laterally following mammary
parenchyma
    - G: Lamella 22 - Reference immediately medial following mammary
tissue
- H: Lamella 12 - Central section
- I: Lamella 8 - Documented section top/outside
- J: Lamella 3 - Vestigial section below/outside
- K: Lamella 21 - White-nodular imposing area
- L: Lamella 8 - Further section below/outside with nodular area
- M: Lateral border lamella perpendicularly
- N: Medial border lamella perpendicular (Exemplary)
**Second Sentinel Lymph Node on the Left:**
- Specimen: Maximum of 6 cm of fat tissue resectate with 1 to 2 cm of
lymph nodes and smaller nodular indurations.
- A, B: One lymph node each divided
- C: Further nodular indurations
**Palpable Lymph Nodes Level I:**
- Specimen: One max. 4.5 cm large fat resectate with nodular
indurations up to 1.5 cm in size
- A: One nodular induration divided
- B: Further nodular indurated portions
**Right Subcutaneous Mastectomy:**
- Specimen: Double thread-labeled 450 g subcutaneous mastectomy
specimen
- Assumed Suture Markings: Blue (Ventral) and Green (Cranial)
- Dorsal Fascia Intact
- [Specimen Preparation:]{.underline} 16 lamellae from medial to
lateral
- Predominantly yellowish lobulated with streaky, beige, impinging
strands of tissue
- Isolated hemorrhages in the parenchyma
- Ink Markings: Green = Cranial, Yellow = Caudal, Blue = Ventral,
Black = Dorsal
- A: Medial border lamella perpendicular (Exemplary)
- B: Lamella 5 with reference ventrally (below inside)
- C: Lamella 8 with reference ventrally (below inside)
- D: Lamella 6 with ventral and dorsal reference (upper inside)
- E: Lamella 8 with ventral and dorsal reference (top inside)
- F: Lamella 11 with reference dorsal and caudal (bottom outside)
- G: Lamella 13 with dorsal reference (bottom outside)
- H: Lamella 10 with ventral and dorsal reference (top outside)
- I: Lamella 14 with reference cranial and dorsal, with hemorrhage
  (upper outer)
- J: Lateral border lamella perpendicular (Exemplary)
**Microscopy:**
1\) In the tumor-suspicious area, a blurred large fibrosis zone with
star-shaped extensions is visible. Intercalated are single-cell and
stranded epithelial cells with a high nuclear-cytoplasmic ratio. The
nuclei are monomorphic with finely dispersed chromatin, at most, very
isolated mitoses. Adjacent distended glandular ducts with a discohesive
cell proliferate with the same cytomorphology. Sporadically, preexistent
glandular ducts are sheared disc-like by the infiltrative tumor cells.
Samples from the nodular area of lamella 21 show areas of cell-poor
hyaline sclerosis with partly ectatically dilated glandular ducts.
2\) Second lymph node with partial infiltrates of the neoplasia described
above. The cells here are relatively densely packed. Somewhat increased
mitoses. In the lymph nodes, iron deposition is also in the sinus
histiocytes.
3\) Lymph nodes with partly sparse iron deposition. No epithelial foreign
infiltrates.
4\) Regular mammary gland parenchyma. No tumor infiltrates. Part of the
glandular ducts are slightly cystically dilated.
**Preliminary Critical Findings Report: **
Left breast carcinoma measuring max. 1.6 cm, diagnosed as moderately
differentiated invasive lobular carcinoma, B.R.E. score 6 (3+2+1, G2).
Presence of tumor-associated and peritumoral lobular carcinoma in situ.
Resection status indicates locally complete excision of both invasive
and non-invasive carcinoma; minimal margins as follows: ventral \<0.1
cm, caudal 0.2 cm, dorsal 0.8 cm, remaining margins ≥0.5 cm. Nodal
status reveals max 0.25 cm metastasis in 1/5 nodes, 0/2 additional
nodes, without extracapsular spread. Right mammary gland from
subcutaneous mastectomy shows tumor-free parenchyma.
**TNM classification (8th ed. 2017):** pT1c, pTis (LCIS), pN1a, G2, L0,
V0, Pn0, R0. Investigations to determine tumor biology were initiated.
Addendum follows.
**Supplementary findings on 10/07/2019**
**Processing:** Immunohistochemistry: Estrogen receptor, Progesterone
receptor, Her2neu, MIB-1 (block 1D).
**Critical Findings Report:** Breast carcinoma on the left with a 1.6 cm
invasive lobular carcinoma, moderately differentiated, with a B.R.E.
score of 6 (3+2+1, G2). Additionally, tumor-associated and peritumoral
lobular carcinoma in situ are noted. Resection status confirms locally
complete excision of both invasive and non-invasive carcinomas; minimal
resection margins are ventral \<0.1 cm, caudal (LCIS) 0.2 cm, dorsal 0.8
cm, and all other margins ≥0.5 cm. Nodal assessment reveals a single
metastasis with a maximum dimension of 0.25 cm among 7 lymph nodes,
specifically found in 1/5 nodes, with no additional metastasis in 0/2
nodes and no extracapsular extension. Contralateral right mammary gland
from subcutaneous mastectomy is tumor-free.
Tumor biology of the invasive carcinoma demonstrates strong positive
estrogen receptor expression in 100% of tumor cells, strong positive
progesterone receptor expression in 1% of tumor cells, negative HER2/neu
status (Score 1+), and a Ki67 (MIB-1) proliferation index of 25%.
**TNM classification (8th Edition 2017):** pT1c, pTis (LCIS), pN1a (1/7
ECE-, sn), G2, L0, V0, Pn0, R0.
**Surgery Report (VAC Change + Irrigation)**: Indication for VAC change.
After a detailed explanation of the procedure, its risks, and
alternatives, the patient agrees to the proposed procedure.
The course of surgery: Proper positioning in a supine position. Removal
of the VAC sponge. A foul odor appears from the wound cavity. Careful
disinfection of the surgical area. Sterile draping. Detailed inspection
of the wound conditions. Wound debridement with removal of fibrin
coatings and freshening of the wound. Resection of necrotic material in
places using a sharp curette, followed by extensive irrigation of the
entire wound bed and wound edges with 1 L of polyhexanide solution.
Renewed VAC sponge application according to standard.
**Postoperative procedure**: Pain medication, thrombosis prophylaxis,
continuation of antibiotic therapy. Given abundant Staphylococcus
aureus and isolated Pseudomonas in the smear, with continued clinical
suspicion of infection, antibiotic treatment was extended to meropenem.
**Surgery Report: Implant Placement**
**Type of Surgery:** Implant placement and wound closure.
**Report:** After infection and VAC therapy, clean smears and planning
of reinsertion. Informed consent. Intraoperative consults: Anesthesia.
**Course of Surgery:** Team time out. Removal VAC sponge. Disinfection
and covering. Irrigation of the wound cavity with Serasept. Blust
irrigation. Fixation cranially and laterally with 4 fixation sutures
with Vicryl 2-0. Choice of trial implant. Temporary insertion. Control
in sitting and lying positions. Choice of the implant. Repeated
disinfection. Change of gloves. Insertion of the implant into the
pocket. Careful hemostasis. Insertion of a Blake drain into the wound
cavity. Suturing of the drainage. Subcutaneous sutures with Monocryl
3-0.
**Type of Surgery:** Prophylactic open Laparoscopy, extensive
adhesiolysis
**Type of Anesthesia:** ITN
**Report:** The patient presented for prophylactic right adnexectomy,
given her genetic burden and her history of hysterectomy and left
adnexectomy. Intraoperatively, secondary wound closure was also planned
for a right breast wound that had continued to weep more than one year
postoperatively. The patient agreed to the planned procedure in writing
after receiving detailed information about its extent, risks, and
alternatives.
**Course of the Operation:** Team time out with anesthesia colleagues.
Flat lithotomy positioning, disinfection, and sterile draping. Placement
of permanent transurethral catheter. Subumbilical incision and
dissection onto the fascia. Opening of the fascia and suturing of the
same. Exposure of the peritoneum and opening of the same. Insertion of
the 10-mm optic trocar. Insertion of three additional trocars into the
lower abdomen (left and center right, each 5mm; right 10mm). The
following situation is seen: when the camera is inserted from the
umbilical region, an extensive adhesion is seen. Only by changing the
camera to the right lower bay is extensive adhesiolysis possible. The
omentum is fused with the peritoneum and the serosa of the uterus. Upper
abdomen as far as visible inconspicuous.
After hysterectomy and adnexectomy on the left side, adnexa on the right
side atrophic and inconspicuous. The peritoneum is smooth as far as can
be seen.
Visualization of the right adnexa and the suspensory ligament of the
ovary. Coagulation of the suspensory ligament after visualization of
the ipsilateral ureter. Stepwise dissection of the adnexa from the
pelvic wall.
Recovery via endobag. Hemostasis. Inspection of the situs.
Removal of instrumentation under vision and draining of
pneumoperitoneum.
Closure of the abdominal fascia at the umbilicus and right lower
abdomen. Suturing of the skin with Monocryl 3/0. Compression bandage at
each trocar insertion site. Inspection of the right mamma. In the area
of the surgical scar laterally/externally, 2-3 small epithelium-lined
pore-like openings are visible; on pressure, these discharge a rather
viscous/sebaceous, non-odorous, non-purulent fluid. No dehiscence is
visible; fistula tracts to the implant cavity are suspected. After
consultation with the mamma surgeon, a two-stage procedure was planned
for the treatment of the fistula tracts. Correct positioning and
inconspicuous anesthesiological course. Instrumentation, swabs, and
cloths complete according to the operating room nurse. Postoperative
procedures include analgesia, mobilization, thrombosis prophylaxis, and
waiting for histology.
**Internal Histopathological Report**
[Clinical information/question]{.underline}: Fistula formation mammary
right. Dignity?
[Macroscopy]{.underline}**:** Skin spindle from scar mammary right: fix.
a 2.4 cm long, stranded skin-subcutaneous excidate. Lamellation and
complete embedding.
[Processing]{.underline}**:** 1 block, HE
[Microscopy]{.underline}**:** Histologic skin/subcutaneous
cross-sections with overlay by a multilayered keratinizing squamous
epithelium. The dermis with few inset regular skin adnexal structures,
sparse to moderately dense mononuclear-dominated inflammatory
infiltrates, and proliferation of cell-poor, fiber-rich collagenous
connective tissue.
**Critical Findings Report:**
Skin spindle on scar mamma right: skin/subcutaneous resectate with
fibrosis and chronic inflammation. To ensure that all findings are
recorded, the material will be further processed. A follow-up report
will follow.
[Microscopy]{.underline}**:** In the meantime, the material was further
processed as announced. The van Gieson stain showed extensive
proliferation of collagenous and, in places, elastic fibers. The
additional immunohistochemical staining likewise showed no evidence of
atypical epithelial infiltrates.
**Lab results upon Discharge:**
**Parameter** **Results** **Reference Range**
-------------------------------- ------------- ---------------------
Sodium 141 mEq/L 132-146 mEq/L
Potassium 4.2 mEq/L 3.4-4.5 mEq/L
Creatinine 0.82 mg/dL 0.50-0.90 mg/dL
Estimated GFR (eGFR CKD-EPI) \>90 \-
Total Bilirubin 0.21 mg/dL \< 1.20 mg/dL
Albumin 4.09 g/dL 3.5-5.2 g/dL
CRP 7.8 mg/L \< 5.0 mg/L
Haptoglobin 108 mg/dL 30-200 mg/dL
Ferritin 24 µg/L 13-140 µg/L
ALT 24 U/L \< 31 U/L
AST 37 U/L \< 35 U/L
Gamma-GT 27 U/L 5-36 U/L
Lactate Dehydrogenase 244 U/L 135-214 U/L
25-OH-Vitamin D3 91.7 nmol/L 50.0-150.0 nmol/L
Hemoglobin 11.1 g/dL 12.0-15.6 g/dL
Hematocrit 40.0% 35.5-45.5%
Red Blood Cells 3.5 M/uL 3.9-5.2 M/uL
White Blood Cells 2.41 K/uL 3.90-10.50 K/uL
Platelets 142 K/uL 150-370 K/uL
MCV 73.0 fL 80.0-99.0 fL
MCH 23.9 pg 27.0-33.5 pg
MCHC 32.7 g/dL 31.5-36.0 g/dL
MPV 10.7 fL 7.0-12.0 fL
RDW-CV 14.8% 11.5-15.0%
Absolute Neutrophils 1.27 K/uL 1.50-7.70 K/uL
Absolute Immature Granulocytes 0.000 K/uL \< 0.050 K/uL
Absolute Lymphocytes 0.67 K/uL 1.10-4.50 K/uL
Absolute Monocytes 0.34 K/uL 0.10-0.90 K/uL
Absolute Eosinophils 0.09 K/uL 0.02-0.50 K/uL
Absolute Basophils 0.04 K/uL 0.00-0.20 K/uL
Free Hemoglobin 5.00 mg/dL \< 20.00 mg/dL
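A small helper (hypothetical, not part of the laboratory system) illustrates how the abnormal values at discharge fall outside their reference ranges; a few representative rows are transcribed from the table above:

```python
# Hypothetical helper: flag lab results outside their reference ranges.
# A subset of rows is transcribed from the discharge laboratory table.
labs = {
    # parameter: (result, lower limit, upper limit)
    "Sodium (mEq/L)":    (141,  132,  146),
    "CRP (mg/L)":        (7.8,  0.0,  5.0),
    "Hemoglobin (g/dL)": (11.1, 12.0, 15.6),
    "WBC (K/uL)":        (2.41, 3.90, 10.50),
    "Platelets (K/uL)":  (142,  150,  370),
    "MCV (fL)":          (73.0, 80.0, 99.0),
}

def out_of_range(values: dict) -> list:
    """Return the parameters whose result lies outside [low, high]."""
    return [name for name, (v, lo, hi) in values.items() if not lo <= v <= hi]

print(out_of_range(labs))
# -> ['CRP (mg/L)', 'Hemoglobin (g/dL)', 'WBC (K/uL)', 'Platelets (K/uL)', 'MCV (fL)']
```

Sodium stays unflagged because it sits inside its range; the remaining five values match the abnormalities discussed in the course (inflammation, anemia, leukopenia, thrombocytopenia, microcytosis).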
### Patient Report 5
**Dear colleague, **
We would like to provide an update on Mrs. Linda Mayer, born on
01/12/1948, who received inpatient care at our facility from 01/01/2021
to 01/14/2021.
**Diagnosis:** Hailey-Hailey disease.
- Upon admission, the patient was under treatment with Acitretin 25mg.
**Other Diagnoses**:
- History of apoplexy in 2016 with no residuals
- Depressive episodes
- Right hip total hip replacement
- History of left adnexectomy in 1980 due to extrauterine pregnancy
- Tubal sterilization in 1988.
- Uterine curettage in 2004
- Hysterectomy in 2005
**Medical History:** Mrs. Linda Mayer was referred to our hospital for
the management of Hailey-Hailey disease after assessment in our
outpatient clinic. She reported a worsening of painful skin erosions on
her neck and inner thighs over a span of approximately 3 weeks.
Itchiness was not reported. Prior attempts at treatment, including the
topical use of Fucicort, Prednisolone with Octenidine, and Polidocanol
gel, had provided limited relief. She denied any other physical
complaints, dyspnea, B symptoms, infections, or irregularities in stool
and micturition.
Her history revealed the initial onset of Hailey-Hailey disease,
initially presenting as itching followed by skin erosions, which
subsequently healed with scarring. The diagnosis was established at the
Fairview Clinic. Previous therapeutic interventions included systemic
cortisone shock therapy, as-needed application of Fucicort ointment, and
axillary laser therapy.
**Family History:**
- Father: Hailey-Hailey Disease (M. Hailey-Hailey)
- Mother and Sister: Breast carcinoma
**Psychosocial History:** Socially, Ms. Linda Mayer is described as a
retiree, having previously worked as a nurse.
**Physical Examination on Admission:**
Height: 164 cm, Body Weight: 80.0 kg, BMI: 29.7
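The documented BMI can be cross-checked from weight and height (BMI = weight in kg divided by the square of height in m); with 80.0 kg, a BMI of 29.7 implies a height of roughly 1.64 m. A minimal arithmetic check (illustrative only):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight [kg] divided by height [m] squared."""
    return weight_kg / height_m ** 2

print(round(bmi(80.0, 1.64), 1))  # -> 29.7, matching the documented BMI
```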
**Physical Examination Findings:**
Generally stable condition with increased nutritional status. Her
consciousness was unremarkable, and cranial mobility was free. Ocular
mobility was regular, with prompt pupillary reflexes to accommodation
and light. She exhibited a normal heart rate, and cardiac and pulmonary
examinations were unremarkable. No heart murmurs were detected. Renal
bed and spine were not palpable. Further internal and orienting
neurological examinations revealed no pathological findings.
**Skin Findings on Admission:** Sharp erosions, approximately 10x10 cm
in size, with a livid-erythematous base, partly crusty, were observed on
the neck and proximal inner thighs.
In the axillary regions on both sides, there were marginal,
livid-erythematous, well-demarcated plaques interspersed with scarring
strands, more pronounced on the right side.
Skin type II.
Mucous membranes appeared normal. Red dermographism (dermographismus
ruber) was noted.
**Medication ** **Dosage** **Frequency**
------------------------------ ------------ -------------------------------
Prednisolone (Deltasone) 5 mg 1.5-0-0-0-0-0
Aspirin (Bayer) 100 mg 0-1-0-0-0-0
Simvastatin (Zocor) 40 mg 0-0-0-0-1
Pantoprazole (Protonix) 45.1 mg 1-0-0-0-0
Acitretin (Soriatane) 25 mg 1-0-0-0-0
Tetrabenazine (Xenazine) 111 mg 0.25-0.25-0.25-0.25-0.25-0.25
Letrozole (Femara) 2.5 mg 0-0-1-0
Risedronate Sodium (Actonel) 35 mg 1-0-0-0-0
Acetaminophen (Tylenol) 500 mg 0-1-0-1
Naloxone (Narcan) 8.8 mg 1-0-1-0
Eszopiclone (Lunesta) 7.5 mg 0-0-1-0
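The dosage schedules above use the common morning-noon-evening-night field notation (some rows carry extra fields for additional dosing times). A tiny parser (hypothetical, for orientation only) shows how a total daily dose can be derived from strength and schedule:

```python
def daily_dose(strength_mg: float, schedule: str) -> float:
    """Total daily dose: tablet strength times the sum of the schedule
    fields, e.g. '0-1-0-1' = morning-noon-evening-night."""
    return strength_mg * sum(float(x) for x in schedule.split("-"))

print(daily_dose(500, "0-1-0-1"))       # Acetaminophen -> 1000.0 mg/day
print(daily_dose(5, "1.5-0-0-0-0-0"))   # Prednisolone  -> 7.5 mg/day
```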
**Other Findings:** MRSA Smears:
- Nasal Smear: Normal flora, no MRSA.
- Throat Swab: Normal flora, no MRSA.
- Non-lesional Skin Smear: Normal flora.
- Lesional Skin Swab: Abundant Pseudomonas aeruginosa, abundant
Klebsiella oxytoca, and abundant Serratia sp., sensitive to
piperacillin-tazobactam.
**Therapy and Progression:** Mrs. Linda Mayer was admitted on 01/01/2021
as an inpatient for a refractory exacerbation of previously diagnosed
Hailey-Hailey disease. On admission, both bacteriological and
mycological smears were conducted, which indicated abundant levels of
Pseudomonas aeruginosa, Klebsiella oxytoca, and Serratia sp. Lab tests
showed a CRP level of 2.83 mg/dL and a leukocyte count of 8.8 G/L.
Initial topical therapy consisted of Zinc oxide ointment, Clotrimazole
paste, and Triamcinolone Acetonide shake lotion. Treatment was modified
on 01/04/2021 to include Clotrimazole (Lotrimin) paste in the mornings
and methylprednisolone emulsion in the evenings. Starting on 01/08,
eosin aqueous solution was introduced for application on the thighs,
serving antiseptic and drying purposes. A hydrophilic prednicarbate
cream at 0.25% concentration, combined with octenidine at 0.1%, was
applied to the neck and thighs twice daily, also starting on 01/08. For
showering, octenidine-based wash lotion was utilized. Additionally, Mrs.
Linda Mayer received an emulsifying ointment as part of her treatment.
### Patient Report 6
**Dear colleague, **
We are providing an update on our patient Mrs. Linda Mayer, born on
01/12/1948, who presented to our outpatient clinic on 09/22/2021.
**Diagnoses:** M. Hailey-Hailey
**Medical History:**
- Diagnosis of M. Hailey-Hailey at the Fairview Clinic
- Treatment involved systemic steroid shock therapy, laser therapy,
  and the initiation of Acitretin in October 2020, with no observed
  improvement.
- A dermabrasion procedure was scheduled on 03/18/2021, during a
previous inpatient admission.
- Acitretin 25mg has been administered daily, with favorable outcomes
noted when using Triamcinolone/Triclosan or Prednisolone +
Octenidine.
- A history of mastectomy with Vacuum-Assisted Closure (VAC) has
resulted in breast erosion.
**Skin Findings:**
- Erythematous and partially mottled lesions have been identified in
the axillary and inguinal regions, with some scarring observed in
the axillary area.
- On 04/28/2021, somewhat erosive plaques were noted in the inguinal
regions.
- As of 05/05/2021 discrete erosions are currently present on both
forearms.
**Current Recommendations:**
- Inpatient admission is scheduled for September 2021.
- The prescribed treatment plan includes topical prednicarbate
(Dermatop) 0.25% with Octenidine 0.1%, per NRF 11.145, in a 50g
container, to be applied once daily for 1-2 weeks.
- Hydrocortisone 5% in a suitable base, 200g, is to be applied daily.
- The regimen also includes prednicarbate (Dermatop) combined with
Octenidine.
- Acitretin will be continued temporarily.
- A follow-up appointment in the outpatient clinic is scheduled for
three months from now.
- Discontinuation of Acitretin.
- It is recommended to avoid the use of compresses on the erosions to
prevent constant trauma.
- Topical therapy with petrolatum-based wound ointment and sterile
compresses.
|
Yellow
|
What is the relationship like between Purnie and his new friends?
A. They don't get along at all
B. Purnie likes his new friends more than they like him
C. His new friends like Purnie more than Purnie likes them
D. They all get along well
|
BEACH SCENE By MARSHALL KING Illustrated by WOOD [Transcriber's Note: This etext was produced from Galaxy Magazine October 1960. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] It was a fine day at the beach for Purnie's game—but his new friends played very rough! Purnie ran laughing and shouting through the forest until he could run no more. He fell headlong into a patch of blue moss and whooped with delight in having this day free for exploring. He was free to see the ocean at last. When he had caught his breath, he looked back through the forest. No sign of the village; he had left it far behind. Safe from the scrutiny of brothers and parents, there was nothing now to stop him from going to the ocean. This was the moment to stop time. "On your mark!" he shouted to the rippling stream and its orange whirlpools. He glanced furtively from side to side, pretending that some object might try to get a head start. "Get set!" he challenged the thin-winged bees that hovered over the abundant foliage. "Stop!" He shrieked this command upward toward the dense, low-hanging purple clouds that perennially raced across the treetops, making one wonder how tall the trees really were. His eyes took quick inventory. It was exactly as he knew it would be: the milky-orange stream had become motionless and its minute whirlpools had stopped whirling; a nearby bee hung suspended over a paka plant, its transparent wings frozen in position for a downward stroke; and the heavy purple fluid overhead held fast in its manufacture of whorls and nimbi. With everything around him in a state of perfect tableau, Purnie hurried toward the ocean. If only the days weren't so short! he thought. There was so much to see and so little time. It seemed that everyone except him had seen the wonders of the beach country. The stories he had heard from his brothers and their friends had taunted him for as long as he could remember. 
So many times had he heard these thrilling tales that now, as he ran along, he could clearly picture the wonderland as though he were already there. There would be a rockslide of petrified logs to play on, the ocean itself with waves higher than a house, the comical three-legged tripons who never stopped munching on seaweed, and many kinds of other wonderful creatures found only at the ocean. He bounced through the forest as though the world was reserved this day just for him. And who could say it wasn't? he thought. Wasn't this his fifth birthday? He ran along feeling sorry for four-year-olds, and even for those who were only four and a half, for they were babies and wouldn't dare try slipping away to the ocean alone. But five! "I'll set you free, Mr. Bee—just wait and see!" As he passed one of the many motionless pollen-gathering insects he met on the way, he took care not to brush against it or disturb its interrupted task. When Purnie had stopped time, the bees—like all the other creatures he met—had been arrested in their native activities, and he knew that as soon as he resumed time, everything would pick up where it had left off. When he smelled an acid sweetness that told him the ocean was not far off, his pulse quickened in anticipation. Rather than spoil what was clearly going to be a perfect day, he chose to ignore the fact that he had been forbidden to use time-stopping as a convenience for journeying far from home. He chose to ignore the oft-repeated statement that an hour of time-stopping consumed more energy than a week of foot-racing. He chose to ignore the negative maxim that "small children who stop time without an adult being present, may not live to regret it." He chose, instead, to picture the beaming praise of family and friends when they learned of his brave journey. The journey was long, the clock stood still. He stopped long enough to gather some fruit that grew along the path. It would serve as his lunch during this day of promise. 
With it under his arm he bounded along a dozen more steps, then stopped abruptly in his tracks. He found himself atop a rocky knoll, overlooking the mighty sea! He was so overpowered by the vista before him that his "Hurrah!" came out as a weak squeak. The ocean lay at the ready, its stilled waves awaiting his command to resume their tidal sweep. The breakers along the shoreline hung in varying stages of disarray, some having already exploded into towering white spray while others were poised in smooth orange curls waiting to start that action. And there were new friends everywhere! Overhead, a flock of spora were frozen in a steep glide, preparatory to a beach landing. Purnie had heard of these playful creatures many times. Today, with his brothers in school, he would have the pets all to himself. Further down the beach was a pair of two-legged animals poised in mid-step, facing the spot where Purnie now stood. Some distance behind them were eight more, each of whom was motionless in a curious pose of interrupted animation. And down in the water, where the ocean ran itself into thin nothingness upon the sand, he saw standing here and there the comical tripons, those three-legged marine buffoons who made handsome careers of munching seaweed. "Hi there!" Purnie called. When he got no reaction, he remembered that he himself was "dead" to the living world: he was still in a zone of time-stopping, on the inside looking out. For him, the world would continue to be a tableau of mannikins until he resumed time. "Hi there!" he called again; but now his mental attitude was that he expected time to resume. It did! Immediately he was surrounded by activity. He heard the roar of the crashing orange breakers, he tasted the dew of acid that floated from the spray, and he saw his new friends continue the actions which he had stopped while back in the forest.
He knew, too, that at this moment, in the forest, the little brook picked up its flow where it had left off, the purple clouds resumed their leeward journey up the valley, and the bees continued their pollen-gathering without having missed a single stroke of their delicate wings. The brook, the clouds, and the insects had not been interrupted in the least; their respective tasks had been performed with continuing sureness. It was time itself that Purnie had stopped, not the world around him. He scampered around the rockpile and down the sandy cliff to meet the tripons who, to him, had just come to life. "I can stand on my head!" He set down his lunch and balanced himself bottoms-up while his legs pawed the air in an effort to hold him in position. He knew it was probably the worst head-stand he had ever done, for he felt weak and dizzy. Already time-stopping had left its mark on his strength. But his spirits ran on unchecked. The tripon thought Purnie's feat was superb. It stopped munching long enough to give him a salutary wag of its rump before returning to its repast. Purnie ran from pillar to post, trying to see and do everything at once. He looked around to greet the flock of spora, but they had glided to a spot further along the shore. Then, bouncing up to the first of the two-legged animals, he started to burst forth with his habitual "Hi there!" when he heard them making sounds of their own. "... will be no limit to my operations now, Benson. This planet makes seventeen. Seventeen planets I can claim as my own!" "My, my. Seventeen planets. And tell me, Forbes, just what the hell are you going to do with them—mount them on the wall of your den back in San Diego?" "Hi there, wanna play?" Purnie's invitation got nothing more than a startled glance from the animals who quickly returned to their chatter. He scampered up the beach, picked up his lunch, and ran back to them, tagging along at their heels. "I've got my lunch, want some?"
"Benson, you'd better tell your men back there to stop gawking at the scenery and get to work. Time is money. I didn't pay for this expedition just to give your flunkies a vacation." The animals stopped so suddenly that Purnie nearly tangled himself in their heels. "All right, Forbes, just hold it a minute. Listen to me. Sure, it's your money that put us here; it's your expedition all the way. But you hired me to get you here with the best crew on earth, and that's just what I've done. My job isn't over yet. I'm responsible for the safety of the men while we're here, and for the safe trip home." "Precisely. And since you're responsible, get 'em working. Tell 'em to bring along the flag. Look at the damn fools back there, playing in the ocean with a three-legged ostrich!" "Good God, man, aren't you human? We've only been on this planet twenty minutes! Naturally they want to look around. They half expected to find wild animals or worse, and here we are surrounded by quaint little creatures that run up to us like we're long-lost brothers. Let the men look around a minute or two before we stake out your claim." "Bah! Bunch of damn children." As Purnie followed along, a leg shot out at him and missed. "Benson, will you get this bug-eyed kangaroo away from me!" Purnie shrieked with joy at this new frolic and promptly stood on his head. In this position he got an upside down view of them walking away. He gave up trying to stay with them. Why did they move so fast, anyway? What was the hurry? As he sat down and began eating his lunch, three more of the creatures came along making excited noises, apparently trying to catch up to the first two. As they passed him, he held out his lunch. "Want some?" No response. Playing held more promise than eating. He left his lunch half eaten and went down to where they had stopped further along the beach. "Captain Benson, sir! Miles has detected strong radiation in the vicinity. He's trying to locate it now." "There you are, Forbes. 
Your new piece of real estate is going to make you so rich that you can buy your next planet. That'll make eighteen, I believe." "Radiation, bah! We've found low-grade ore on every planet I've discovered so far, and this one'll be no different. Now how about that flag? Let's get it up, Benson. And the cornerstone, and the plaque." "All right, lads. The sooner we get Mr. Forbes's pennant raised and his claim staked out, the sooner we can take time to look around. Lively now!" When the three animals went back to join the rest of their group, the first two resumed walking. Purnie followed along. "Well, Benson, you won't have to look far for materials to use for the base of the flag pole. Look at that rockpile up there." "Can't use them. They're petrified logs. The ones on top are too high to carry down, and if we move those on the bottom, the whole works will slide down on top of us." "Well—that's your problem. Just remember, I want this flag pole to be solid. It's got to stand at least—" "Don't worry, Forbes, we'll get your monument erected. What's this with the flag? There must be more to staking a claim than just putting up a flag." "There is, there is. Much more. I've taken care of all requirements set down by law to make my claim. But the flag? Well, you might say it represents an empire, Benson. The Forbes Empire. On each of my flags is the word FORBES, a symbol of development and progress. Call it sentiment if you will." "Don't worry, I won't. I've seen real-estate flags before." "Damn it all, will you stop referring to this as a real-estate deal? What I'm doing is big, man. Big! This is pioneering." "Of course. And if I'm not mistaken, you've set up a neat little escrow system so that you not only own the planets, but you will virtually own the people who are foolish enough to buy land on them." "I could have your hide for talking to me like this. Damn you, man! It's people like me who pay your way.
It's people like me who give your space ships some place to go. It's people like me who pour good money into a chancey job like this, so that people like you can get away from thirteen-story tenement houses. Did you ever think of that?" "I imagine you'll triple your money in six months." When they stopped, Purnie stopped. At first he had been interested in the strange sounds they were making, but as he grew used to them, and as they in turn ignored his presence, he hopped alongside chattering to himself, content to be in their company. He heard more of these sounds coming from behind, and he turned to see the remainder of the group running toward them. "Captain Benson! Here's the flag, sir. And here's Miles with the scintillometer. He says the radiation's getting stronger over this way!" "How about that, Miles?" "This thing's going wild, Captain. It's almost off scale." Purnie saw one of the animals hovering around him with a little box. Thankful for the attention, he stood on his head. "Can you do this?" He was overjoyed at the reaction. They all started making wonderful noises, and he felt most satisfied. "Stand back, Captain! Here's the source right here! This little chuck-walla's hotter than a plutonium pile!" "Let me see that, Miles. Well, I'll be damned! Now what do you suppose—" By now they had formed a widening circle around him, and he was hard put to think of an encore. He gambled on trying a brand new trick: he stood on one leg. "Benson, I must have that animal! Put him in a box." "Now wait a minute, Forbes. Universal Law forbids—" "This is my planet and I am the law. Put him in a box!" "With my crew as witness, I officially protest—" "Good God, what a specimen to take back. Radio-active animals! Why, they can reproduce themselves, of course! There must be thousands of these creatures around here someplace. And to think of those damn fools on Earth with their plutonium piles! Hah! Now I'll have investors flocking to me. 
How about it, Benson—does pioneering pay off or doesn't it?" "Not so fast. Since this little fellow is radioactive, there may be great danger to the crew—" "Now look here! You had planned to put mineral specimens in a lead box, so what's the difference? Put him in a box." "He'll die." "I have you under contract, Benson! You are responsible to me, and what's more, you are on my property. Put him in a box." Purnie was tired. First the time-stopping, then this. While this day had brought more fun and excitement than he could have hoped for, the strain was beginning to tell. He lay in the center of the circle happily exhausted, hoping that his friends would show him some of their own tricks. He didn't have to wait long. The animals forming the circle stepped back and made way for two others who came through carrying a box. Purnie sat up to watch the show. "Hell, Captain, why don't I just pick him up? Looks like he has no intention of running away." "Better not, Cabot. Even though you're shielded, no telling what powers the little fella has. Play it safe and use the rope." "I swear he knows what we're saying. Look at those eyes." "All right, careful now with that line." "Come on, baby. Here you go. That's a boy!" Purnie took in these sounds with perplexed concern. He sensed the imploring quality of the creature with the rope, but he didn't know what he was supposed to do. He cocked his head to one side as he wiggled in anticipation. He saw the noose spinning down toward his head, and, before he knew it, he had scooted out of the circle and up the sandy beach. He was surprised at himself for running away. Why had he done it? He wondered. Never before had he felt this fleeting twinge that made him want to protect himself. He watched the animals huddle around the box on the beach, their attention apparently diverted to something else. He wished now that he had not run away; he felt he had lost his chance to join in their fun. "Wait!" 
He ran over to his half-eaten lunch, picked it up, and ran back into the little crowd. "I've got my lunch, want some?" The party came to life once more. His friends ran this way and that, and at last Purnie knew that the idea was to get him into the box. He picked up the spirit of the tease, and deliberately ran within a few feet of the lead box, then, just as the nearest pursuer was about to push him in, he sidestepped onto safer ground. Then he heard a deafening roar and felt a warm, wet sting in one of his legs. "Forbes, you fool! Put away that gun!" "There you are, boys. It's all in knowing how. Just winged him, that's all. Now pick him up." The pang in his leg was nothing: Purnie's misery lay in his confusion. What had he done wrong? When he saw the noose spinning toward him again, he involuntarily stopped time. He knew better than to use this power carelessly, but his action now was reflex. In that split second following the sharp sting in his leg, his mind had grasped in all directions to find an acceptable course of action. Finding none, it had ordered the stoppage of time. The scene around him became a tableau once more. The noose hung motionless over his head while the rest of the rope snaked its way in transverse waves back to one of the two-legged animals. Purnie dragged himself through the congregation, whimpering from his inability to understand. As he worked his way past one creature after another, he tried at first to not look them in the eye, for he felt sure he had done something wrong. Then he thought that by sneaking a glance at them as he passed, he might see a sign pointing to their purpose. He limped by one who had in his hand a small shiny object that had been emitting smoke from one end; the smoke now billowed in lifeless curls about the animal's head. He hobbled by another who held a small box that had previously made a hissing sound whenever Purnie was near. These things told him nothing. 
Before starting his climb up the knoll, he passed a tripon which, true to its reputation, was comical even in fright. Startled by the loud explosion, it had jumped four feet into the air before Purnie had stopped time. Now it hung there, its beak stuffed with seaweed and its three legs drawn up into a squatting position. Leaving the assorted statues behind, he limped his way up the knoll, torn between leaving and staying. What an odd place, this ocean country! He wondered why he had not heard more detail about the beach animals. Reaching the top of the bluff, he looked down upon his silent friends with a feeling of deep sorrow. How he wished he were down there playing with them. But he knew at last that theirs was a game he didn't fit into. Now there was nothing left but to resume time and start the long walk home. Even though the short day was nearly over, he knew he didn't dare use time-stopping to get himself home in nothing flat. His fatigued body and clouded mind were strong signals that he had already abused this faculty. When Purnie started time again, the animal with the noose stood in open-mouthed disbelief as the rope fell harmlessly to the sand—on the spot where Purnie had been standing. "My God, he's—he's gone." Then another of the animals, the one with the smoking thing in his hand, ran a few steps toward the noose, stopped and gaped at the rope. "All right, you people, what's going on here? Get him in that box. What did you do with him?" The resumption of time meant nothing at all to those on the beach, for to them time had never stopped. The only thing they could be sure of was that at one moment there had been a fuzzy creature hopping around in front of them, and the next moment he was gone. "Is he invisible, Captain? Where is he?" "Up there, Captain! On those rocks. Isn't that him?" "Well, I'll be damned!" "Benson, I'm holding you personally responsible for this! Now that you've botched it up, I'll bring him down my own way." 
"Just a minute, Forbes, let me think. There's something about that fuzzy little devil that we should.... Forbes! I warned you about that gun!" Purnie moved across the top of the rockpile for a last look at his friends. His weight on the end of the first log started the slide. Slowly at first, the giant pencils began cascading down the short distance to the sand. Purnie fell back onto solid ground, horrified at the spectacle before him. The agonizing screams of the animals below filled him with hysteria. The boulders caught most of them as they stood ankle-deep in the surf. Others were pinned down on the sand. "I didn't mean it!" Purnie screamed. "I'm sorry! Can't you hear?" He hopped back and forth near the edge of the rise, torn with panic and shame. "Get up! Please get up!" He was horrified by the moans reaching his ears from the beach. "You're getting all wet! Did you hear me? Please get up." He was choked with rage and sorrow. How could he have done this? He wanted his friends to get up and shake themselves off, tell him it was all right. But it was beyond his power to bring it about. The lapping tide threatened to cover those in the orange surf. Purnie worked his way down the hill, imploring them to save themselves. The sounds they made carried a new tone, a desperate foreboding of death. "Rhodes! Cabot! Can you hear me?" "I—I can't move, Captain. My leg, it's.... My God, we're going to drown!" "Look around you, Cabot. Can you see anyone moving?" "The men on the beach are nearly buried, Captain. And the rest of us here in the water—" "Forbes. Can you see Forbes? Maybe he's—" His sounds were cut off by a wavelet gently rolling over his head. Purnie could wait no longer. The tides were all but covering one of the animals, and soon the others would be in the same plight. Disregarding the consequences, he ordered time to stop. Wading down into the surf, he worked a log off one victim, then he tugged the animal up to the sand. 
Through blinding tears, Purnie worked slowly and carefully. He knew there was no hurry—at least, not as far as his friends' safety was concerned. No matter what their condition of life or death was at this moment, it would stay the same way until he started time again. He made his way deeper into the orange liquid, where a raised hand signalled the location of a submerged body. The hand was clutching a large white banner that was tangled among the logs. Purnie worked the animal free and pulled it ashore. It was the one who had been carrying the shiny object that spit smoke. Scarcely noticing his own injured leg, he ferried one victim after another until there were no more in the surf. Up on the beach, he started unraveling the logs that pinned down the animals caught there. He removed a log from the lap of one, who then remained in a sitting position, his face contorted into a frozen mask of agony and shock. Another, with the weight removed, rolled over like an iron statue into a new position. Purnie whimpered in black misery as he surveyed the chaotic scene before him. At last he could do no more; he felt consciousness slipping away from him. He instinctively knew that if he lost his senses during a period of time-stopping, events would pick up where they had left off ... without him. For Purnie, this would be death. If he had to lose consciousness, he knew he must first resume time. Step by step he plodded up the little hill, pausing every now and then to consider if this were the moment to start time before it was too late. With his energy fast draining away, he reached the top of the knoll, and he turned to look down once more on the group below. Then he knew how much his mind and body had suffered: when he ordered time to resume, nothing happened. His heart sank. He wasn't afraid of death, and he knew that if he died the oceans would roll again and his friends would move about. But he wanted to see them safe. He tried to clear his mind for supreme effort. 
There was no urging time to start. He knew he couldn't persuade it by bits and pieces, first slowly then full ahead. Time either progressed or it didn't. He had to take one viewpoint or the other. Then, without knowing exactly when it happened, his mind took command.... His friends came to life. The first one he saw stir lay on his stomach and pounded his fists on the beach. A flood of relief settled over Purnie as sounds came from the animal. "What's the matter with me? Somebody tell me! Am I nuts? Miles! Schick! What's happening?" "I'm coming, Rhodes! Heaven help us, man—I saw it, too. We're either crazy or those damn logs are alive!" "It's not the logs. How about us? How'd we get out of the water? Miles, we're both cracking." "I'm telling you, man, it's the logs, or rocks or whatever they are. I was looking right at them. First they're on top of me, then they're piled up over there!" "Damnit, the logs didn't pick us up out of the ocean, did they? Captain Benson!" "Are you men all right?" "Yes sir, but—" "Who saw exactly what happened?" "I'm afraid we're not seeing right, Captain. Those logs—" "I know, I know. Now get hold of yourselves. We've got to round up the others and get out of here while time is on our side." "But what happened, Captain?" "Hell, Rhodes, don't you think I'd like to know? Those logs are so old they're petrified. The whole bunch of us couldn't lift one. It would take super-human energy to move one of those things." "I haven't seen anything super-human. Those ostriches down there are so busy eating seaweed—" "All right, let's bear a hand here with the others. Some of them can't walk. Where's Forbes?" "He's sitting down there in the water, Captain, crying like a baby. Or laughing. I can't tell which." "We'll have to get him. Miles, Schick, come along. Forbes! You all right?" "Ho-ho-ho! Seventeen! Seventeen! Seventeen planets, Benson, and they'll do anything I say! This one's got a mind of its own. Did you see that little trick with the rocks? 
Ho-ho!" "See if you can find his gun, Schick; he'll either kill himself or one of us. Tie his hands and take him back to the ship. We'll be along shortly." "Hah-hah-hah! Seventeen! Benson, I'm holding you personally responsible for this. Hee-hee!" Purnie opened his eyes as consciousness returned. Had his friends gone? He pulled himself along on his stomach to a position between two rocks, where he could see without being seen. By the light of the twin moons he saw that they were leaving, marching away in groups of two and three, the weak helping the weaker. As they disappeared around the curving shoreline, the voices of the last two, bringing up the rear far behind the others, fell faintly on his ears over the sound of the surf. "Is it possible that we're all crazy, Captain?" "It's possible, but we're not." "I wish I could be sure." "See Forbes up ahead there? What do you think of him?" "I still can't believe it." "He'll never be the same." "Tell me something. What was the most unusual thing you noticed back there?" "You must be kidding, sir. Why, the way those logs were off of us suddenly—" "Yes, of course. But I mean beside that." "Well, I guess I was kind of busy. You know, scared and mixed up." "But didn't you notice our little pop-eyed friend?" "Oh, him. I'm afraid not, Captain. I—I guess I was thinking mostly of myself." "Hmmm. If I could only be sure I saw him. If only someone else saw him too." "I'm afraid I don't follow you, sir." "Well, damn it all, you know that Forbes took a pot shot at him. Got him in the leg. That being the case, why would the fuzzy little devil come back to his tormentors—back to us—when we were trapped under those logs?" "Well, I guess as long as we were trapped, he figured we couldn't do him any more harm.... I'm sorry, that was a stupid answer. I guess I'm still a little shaky." "Forget it. Look, you go ahead to the ship and make ready for take-off. I'll join you in a few minutes. I think I'll go back and look around. You know. 
Make sure we haven't left anyone." "No need to do that. They're all ahead of us. I've checked." "That's my responsibility, Cabot, not yours. Now go on." As Purnie lay gathering strength for the long trek home, he saw through glazed eyes one of the animals coming back along the beach. When it was nearly directly below him, he could hear it making sounds that by now had become familiar. "Where are you?" Purnie paid little attention to the antics of his friend; he was beyond understanding. He wondered what they would say at home when he returned. "We've made a terrible mistake. We—" The sounds faded in and out on Purnie's ears as the creature turned slowly and called in different directions. He watched the animal walk over to the pile of scattered logs and peer around and under them. "If you're hurt I'd like to help!" The twin moons were high in the sky now, and where their light broke through the swirling clouds a double shadow was cast around the animal. With foggy awareness, Purnie watched the creature shake its head slowly, then walk away in the direction of the others. Purnie's eyes stared, without seeing, at the panorama before him. The beach was deserted now, and his gaze was transfixed on a shimmering white square floating on the ocean. Across it, the last thing Purnie ever saw, was emblazoned the word FORBES.
### Introduction
Knowledge graphs (KG) play a critical role in many real-world applications such as search, structured data management, recommendations, and question answering. Since KGs often suffer from incompleteness and noise in their facts (links), a number of recent techniques have proposed models that embed each entity and relation into a vector space, and use these embeddings to predict facts. These dense representation models for link prediction include tensor factorization BIBREF0, BIBREF1, BIBREF2, algebraic operations BIBREF3, BIBREF4, BIBREF5, multiple embeddings BIBREF6, BIBREF7, BIBREF8, BIBREF9, and complex neural models BIBREF10, BIBREF11. However, there are only a few studies BIBREF12, BIBREF13 that investigate the quality of the different KG models. There is a need to go beyond just the accuracy on link prediction, and instead focus on whether these representations are robust and stable, and what facts they make use of for their predictions. In this paper, our goal is to design approaches that minimally change the graph structure such that the prediction of a target fact changes the most after the embeddings are relearned, which we collectively call Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE). First, we consider perturbations that remove a neighboring link for the target fact, thus identifying the most influential related fact and providing an explanation for the model's prediction. As an example, consider the excerpt from a KG in Figure 1 with two observed facts, and a target predicted fact that Princess Henriette is the parent of Violante Bavaria. Our proposed graph perturbation, shown in Figure 1, identifies the existing fact that Ferdinand Maria is the father of Violante Bavaria as the one that, when removed and the model retrained, will change the prediction of Princess Henriette's child.
We also study attacks that add a new, fake fact into the KG to evaluate the robustness and sensitivity of link prediction models to small additions to the graph. An example attack for the original graph in Figure 1 is depicted in Figure 1. Such perturbations to the training data are from a family of adversarial modifications that have been applied to other machine learning tasks, known as poisoning BIBREF14, BIBREF15, BIBREF16, BIBREF17. Since the setting is quite different from traditional adversarial attacks, the search for link prediction adversaries brings up unique challenges. To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on the predicted score of the target fact. Unfortunately, computing this change in the score is expensive since it involves retraining the model to recompute the embeddings. We propose an efficient estimate of this score change by approximating the change in the embeddings using Taylor expansion. The other challenge in identifying adversarial modifications for link prediction, especially when considering the addition of fake facts, is the combinatorial search space over possible facts, which is intractable to enumerate. We introduce an inverter of the original embedding model to decode the embeddings to their corresponding graph components, making the search over facts tractable by performing efficient gradient-based continuous optimization. We evaluate our proposed methods through the following experiments. First, on relatively small KGs, we show that our approximations are accurate compared to the true change in the score. Second, we show that our additive attacks can effectively reduce the performance of state-of-the-art models BIBREF2, BIBREF10 by up to $27.3\%$ and $50.7\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10.
We also explore the utility of adversarial modifications in explaining the model predictions by presenting rule-like descriptions of the most influential neighbors. Finally, we use adversaries to detect errors in the KG, obtaining up to $55\%$ accuracy in detecting errors.

### Background and Notation
In this section, we briefly introduce our notation and the existing relational embedding approaches that model knowledge graph completion using dense vectors. In KGs, facts are represented using triples of subject, relation, and object, $\langle s, r, o\rangle$, where $s,o\in \xi$, the set of entities, and $r\in \mathcal{R}$, the set of relations. To model the KG, a scoring function $\psi :\xi \times \mathcal{R} \times \xi \rightarrow \mathbb{R}$ is learned to evaluate whether any given fact is true. In this work, we focus on multiplicative models of link prediction, specifically DistMult BIBREF2 because of its simplicity and popularity, and ConvE BIBREF10 because of its high accuracy. We can represent the scoring function of such methods as $\psi (s,r,o) = \mathbf{f}(\mathbf{e}_s, \mathbf{e}_r) \cdot \mathbf{e}_o$, where $\mathbf{e}_s,\mathbf{e}_r,\mathbf{e}_o\in \mathbb{R}^d$ are embeddings of the subject, relation, and object respectively. In DistMult, $\mathbf{f}(\mathbf{e}_s, \mathbf{e}_r) = \mathbf{e}_s \odot \mathbf{e}_r$, where $\odot$ is the element-wise multiplication operator. Similarly, in ConvE, $\mathbf{f}(\mathbf{e}_s, \mathbf{e}_r)$ is computed by a convolution on the concatenation of $\mathbf{e}_s$ and $\mathbf{e}_r$. We use the same setup as BIBREF10 for training, i.e., we incorporate a binary cross-entropy loss over the triple scores. In particular, for subject-relation pairs $(s,r)$ in the training data $G$, we use binary labels $y^{s,r}_o$ to represent negative and positive facts. Using the model's probability of truth as $\sigma (\psi (s,r,o))$ for $\langle s,r,o\rangle$, the loss is defined as: $$\mathcal{L}(G) = -\sum_{(s,r)}\sum_{o} y^{s,r}_o\log (\sigma (\psi (s,r,o))) + (1-y^{s,r}_o)\log (1 - \sigma (\psi (s,r,o))).$$ Gradient descent is used to learn the embeddings $\mathbf{e}_s,\mathbf{e}_r,\mathbf{e}_o$, and the parameters of $\mathbf{f}$, if any.
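As a concrete illustration, the DistMult score and the binary cross-entropy training loss described above can be sketched in a few lines of NumPy. This is a minimal sketch with toy dimensions, not the paper's implementation; the embedding size and random initialization are assumptions for demonstration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def distmult_score(e_s, e_r, e_o):
    # DistMult: f(e_s, e_r) = e_s (element-wise *) e_r, scored against e_o
    # by a dot product.
    return np.dot(e_s * e_r, e_o)

def bce_loss(scores, labels):
    # Binary cross-entropy over triple scores: labels y in {0, 1} mark
    # negative and positive facts for one subject-relation pair.
    p = sigmoid(scores)
    return -np.sum(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p))

# Score one toy triple (d = 8) against a positive label.
rng = np.random.default_rng(0)
e_s, e_r, e_o = rng.normal(size=(3, 8))
loss = bce_loss(np.array([distmult_score(e_s, e_r, e_o)]), np.array([1.0]))
```

In a full trainer, gradient descent on this loss updates the embeddings themselves, which is why any edit to the graph can shift every downstream score.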
### Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE)
For adversarial modifications on KGs, we first define the space of possible modifications. For a target triple $\langle s, r, o\rangle$, we constrain the possible triples that we can remove (or inject) to be of the form $\langle s^{\prime }, r^{\prime }, o\rangle$, i.e., $s^{\prime }$ and $r^{\prime }$ may differ from the target, but the object does not. We analyze other forms of modifications, such as $\langle s, r^{\prime }, o^{\prime }\rangle$ and $\langle s, r^{\prime }, o\rangle$, in appendices "Modifications of the Form $\langle s, r^{\prime }, o^{\prime } \rangle$" and "Modifications of the Form $\langle s, r^{\prime }, o \rangle$", and leave empirical evaluation of these modifications for future work.

### Removing a fact
For explaining a target prediction, we are interested in identifying the observed fact that has the most influence (according to the model) on the prediction. We define the influence of an observed fact on the prediction as the change in the prediction score if the observed fact had not been present when the embeddings were learned. Previous work has used this concept of influence similarly for several different tasks BIBREF19, BIBREF20. Formally, for the target triple $\langle s,r,o\rangle$ and observed graph $G$, we want to identify a neighboring triple $\langle s^{\prime },r^{\prime },o\rangle \in G$ such that the score $\psi (s,r,o)$ when trained on $G$ and the score $\overline{\psi }(s,r,o)$ when trained on $G-\lbrace \langle s^{\prime },r^{\prime },o\rangle \rbrace$ are maximally different, i.e. $$\operatorname*{argmax}_{(s^{\prime }, r^{\prime })\in \text{Nei}(o)} \Delta _{(s^{\prime },r^{\prime })}(s,r,o)$$ where $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)=\psi (s, r, o)-\overline{\psi }(s,r,o)$, and $\text{Nei}(o)=\lbrace (s^{\prime },r^{\prime })\,|\,\langle s^{\prime },r^{\prime },o \rangle \in G \rbrace$.

### Adding a new fact
We are also interested in investigating the robustness of models, i.e., how sensitive the predictions are to small additions to the knowledge graph. Specifically, for a target prediction $\langle s,r,o\rangle$, we are interested in identifying a single fake fact $\langle s^{\prime},r^{\prime},o\rangle$ that, when added to the knowledge graph $G$, changes the prediction score $\psi(s,r,o)$ the most. Using $\overline{\psi}(s,r,o)$ as the score after training on $G\cup \lbrace \langle s^{\prime},r^{\prime},o\rangle \rbrace$, we define the adversary as: $$\operatorname*{argmax}_{(s^{\prime}, r^{\prime})} \Delta_{(s^{\prime},r^{\prime})}(s,r,o),$$ where $\Delta_{(s^{\prime},r^{\prime})}(s,r,o)=\psi(s, r, o)-\overline{\psi}(s,r,o)$. The search here is over any possible $s^{\prime}\in \xi$, which is often in the millions for most real-world KGs, and $r^{\prime}\in \mathcal{R}$. We also identify adversaries that increase the prediction score for a specific false triple, i.e., for a target fake fact $\langle s,r,o\rangle$, the adversary is $\operatorname*{argmin}_{(s^{\prime}, r^{\prime})} \Delta_{(s^{\prime},r^{\prime})}(s,r,o)$, where $\Delta_{(s^{\prime},r^{\prime})}(s,r,o)$ is defined as before. ### Challenges
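To make the cost of the naive approach concrete, a brute-force search for the removal adversary defined above can be sketched as follows; the hypothetical `retrain_score` callback retrains the model from scratch for every candidate, which is exactly the expense discussed next:

```python
def most_influential_removal(target, graph, base_score, retrain_score):
    """Naive search for the neighbour <s', r', o> whose removal changes the
    target score the most. retrain_score(graph) is assumed to retrain the
    embeddings on `graph` and return the new score of `target` -- the
    expensive step whose cost motivates the approximations below."""
    s, r, o = target
    neighbors = [(s2, r2) for (s2, r2, o2) in graph
                 if o2 == o and (s2, r2, o2) != target]
    best, best_delta = None, float("-inf")
    for (s2, r2) in neighbors:
        reduced = [t for t in graph if t != (s2, r2, o)]
        delta = base_score - retrain_score(reduced)  # Delta_{(s',r')}(s,r,o)
        if delta > best_delta:
            best, best_delta = (s2, r2), delta
    return best, best_delta
```

Each candidate triggers a full retraining run, so even this small search is impractical on real KGs; the addition case is worse still, since the candidate set grows to all unobserved facts sharing the object.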
There are a number of crucial challenges when conducting such adversarial attacks on KGs. First, evaluating the effect of changing the KG on the score of the target fact ($\overline{\psi}(s,r,o)$) is expensive, since we need to update the embeddings by retraining the model on the new graph: a very time-consuming process that is at least linear in the size of $G$. Second, since there are many candidate facts that can be added to the knowledge graph, identifying the most promising adversary through search-based methods is also expensive. Specifically, the search space for unobserved facts is of size $|\xi| \times |\mathcal{R}|$, which, for example in the YAGO3-10 KG, amounts to as many as $4.5M$ possible facts for a single target prediction. ### Efficiently Identifying the Modification
In this section, we propose algorithms that address these challenges by (1) approximating the effect of changing the graph on a target prediction, and (2) using continuous optimization for the discrete search over potential modifications. ### First-order Approximation of Influence
We first study the addition of a fact to the graph, and then extend it to cover removal as well. To capture the effect of an adversarial modification on the score of a target triple, we need to study the effect of the change on the vector representations of the target triple. We use $\mathbf{e}_s$, $\mathbf{e}_r$, and $\mathbf{e}_o$ to denote the embeddings of $s,r,o$ at the solution of $\operatorname*{argmin} \mathcal{L}(G)$, and when considering the adversarial triple $\langle s^{\prime}, r^{\prime}, o \rangle$, we use $\overline{\mathbf{e}_s}$, $\overline{\mathbf{e}_r}$, and $\overline{\mathbf{e}_o}$ for the new embeddings of $s,r,o$, respectively. Thus $\overline{\mathbf{e}_o}$ is a solution to $\operatorname*{argmin} \mathcal{L}(\overline{G})$, where $\overline{G} = G \cup \lbrace \langle s^{\prime}, r^{\prime}, o \rangle \rbrace$. Since the adversarial triple involves only $o$ among the target's entities, and since exactly recomputing the solution (e.g., inverting the full Hessian over all parameters) is $O(n^3)$, we assume that only $\mathbf{e}_o$ changes, i.e., $\overline{\mathbf{e}_s} \approx \mathbf{e}_s$ and $\overline{\mathbf{e}_r} \approx \mathbf{e}_r$. The change in the target score is then: $$\overline{\psi}(s,r,o)-\psi(s, r, o) = \mathbf{z}_{s,r} \cdot (\overline{\mathbf{e}_o} - \mathbf{e}_o),$$ where $\mathbf{z}_{s,r} = f(\mathbf{e}_s,\mathbf{e}_r)$. To approximate $\overline{\mathbf{e}_o} - \mathbf{e}_o$, note that $\mathcal{L}(\overline{G})= \mathcal{L}(G)+\mathcal{L}(\langle s^{\prime}, r^{\prime}, o \rangle)$, so the derivative over $\mathbf{e}_o$ is: $$\nabla_{e_o} \mathcal{L}(\overline{G}) = \nabla_{e_o} \mathcal{L}(G) - (1-\varphi)\,\mathbf{z}_{s^{\prime},r^{\prime}},$$ where $\mathbf{z}_{s^{\prime},r^{\prime}} = f(\mathbf{e}_{s^{\prime}},\mathbf{e}_{r^{\prime}})$ and $\varphi = \sigma(\psi(s^{\prime},r^{\prime},o))$. At convergence, after retraining, we expect $\nabla_{e_o} \mathcal{L}(\overline{G})=0$. Performing a first-order Taylor approximation of $\nabla_{e_o} \mathcal{L}(\overline{G})$ around $\mathbf{e}_o$ and solving for $\overline{\mathbf{e}_o} - \mathbf{e}_o$ gives: $$\overline{\mathbf{e}_o} - \mathbf{e}_o = (1-\varphi)\left(H_o + \varphi(1-\varphi)\,\mathbf{z}_{s^{\prime},r^{\prime}}^{\intercal}\mathbf{z}_{s^{\prime},r^{\prime}}\right)^{-1}\mathbf{z}_{s^{\prime},r^{\prime}},$$ where $H_o$ is the $d\times d$ Hessian matrix of the loss w.r.t. $\mathbf{e}_o$, computed sparsely. Substituting back, the change in the score is: $$\overline{\psi}(s,r,o)-\psi(s, r, o) = \mathbf{z}_{s,r}\left((1-\varphi)\left(H_o + \varphi(1-\varphi)\,\mathbf{z}_{s^{\prime},r^{\prime}}^{\intercal}\mathbf{z}_{s^{\prime},r^{\prime}}\right)^{-1}\mathbf{z}_{s^{\prime},r^{\prime}}\right).$$ The removal case follows analogously, with the sign of the added-triple gradient flipped. Since $H_o$ is only $d\times d$, this approximation is efficient to compute. ### Continuous Optimization for Search
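Before turning to the search, the first-order estimate derived in the previous section can be written as a short NumPy routine (a sketch whose names and sign conventions follow the derivation above):

```python
import numpy as np

def approx_add_score_change(z_sr, z_spr, e_o, H_o):
    """First-order estimate of psi_bar(s,r,o) - psi(s,r,o) after adding
    <s', r', o>. z_sr = f(e_s, e_r) for the target, z_spr = f(e_s', e_r')
    for the attack, H_o is the d x d Hessian of the loss w.r.t. e_o.
    A sketch under the first-order derivation above, not exact retraining."""
    phi = 1.0 / (1.0 + np.exp(-np.dot(z_spr, e_o)))      # sigma(psi(s', r', o))
    A = H_o + phi * (1.0 - phi) * np.outer(z_spr, z_spr)  # rank-1 corrected Hessian
    delta_e_o = (1.0 - phi) * np.linalg.solve(A, z_spr)   # e_o_bar - e_o
    return float(np.dot(z_sr, delta_e_o))
```

Because the linear solve is only $d \times d$, scoring a candidate takes microseconds rather than a retraining run.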
Using the approximations provided in the previous section, Eq. () and (), we can use brute-force enumeration to find the adversary $\langle s^{\prime}, r^{\prime}, o \rangle$. This approach is feasible when removing an observed triple, since the search space of such modifications is usually small: it is the number of observed facts that share the object with the target. On the other hand, finding the most influential unobserved fact to add requires search over the much larger space of all possible unobserved facts (that share the object). Instead, we identify the most influential unobserved fact $\langle s^{\prime}, r^{\prime}, o \rangle$ by using a gradient-based algorithm on the vector $\mathbf{z}_{s^{\prime},r^{\prime}}$ in the embedding space (reminder: $\mathbf{z}_{s^{\prime},r^{\prime}}=f(\mathbf{e}_{s^{\prime}},\mathbf{e}_{r^{\prime}})$), solving the following continuous optimization problem in $\mathbb{R}^d$: $$\operatorname*{argmax}_{\mathbf{z}_{s^{\prime}, r^{\prime}}} \Delta_{(s^{\prime},r^{\prime})}(s,r,o).$$ After identifying the optimal $\mathbf{z}_{s^{\prime}, r^{\prime}}$, we still need to generate the pair $(s^{\prime},r^{\prime})$. We design a network, shown in Figure 2, that maps the vector $\mathbf{z}_{s^{\prime},r^{\prime}}$ to the entity-relation space, i.e., translates it into $(s^{\prime},r^{\prime})$. In particular, we train an auto-encoder where the encoder is fixed to receive $s^{\prime}$ and $r^{\prime}$ as one-hot inputs and calculates $\mathbf{z}_{s^{\prime},r^{\prime}}$ in the same way as the DistMult and ConvE encoders, respectively (using trained embeddings).
The decoder is trained to take $\mathbf{z}_{s^{\prime},r^{\prime}}$ as input and produce $s^{\prime}$ and $r^{\prime}$, essentially inverting the encoding $f$. We evaluate the performance of our inverter networks (one for each model/dataset) on correctly recovering the pairs of subject and relation from the test set of our benchmarks, given the vector $\mathbf{z}_{s^{\prime},r^{\prime}}$. The accuracy of recovered pairs (and of each argument) is given in Table 1. As shown, our networks achieve a very high accuracy, demonstrating their ability to invert vectors $\mathbf{z}_{s^{\prime},r^{\prime}}$ to $(s^{\prime},r^{\prime})$ pairs. ### Experiments
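As a toy illustration of the decoding step just described, a nearest-neighbour lookup over pre-computed candidate encodings can stand in for the trained inverter network (our simplification for exposition, not the paper's architecture):

```python
import numpy as np

def decode_nearest(z, candidate_encodings):
    """Toy stand-in for the trained inverter network: map an optimized
    vector z back to a concrete (s', r') pair by nearest neighbour over
    pre-computed encodings z_{s',r'} of candidate pairs."""
    return min(candidate_encodings,
               key=lambda pair: float(np.linalg.norm(candidate_encodings[pair] - z)))
```

The learned decoder replaces this exhaustive lookup with a single forward pass, which is what makes decoding scale to millions of entities.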
We evaluate CRIAGE by ( "Influence Function vs. CRIAGE" ) comparing our estimate with the actual effect of the attacks, ( "Robustness of Link Prediction Models" ) studying the effect of adversarial attacks on evaluation metrics, ( "Interpretability of Models" ) exploring its application to the interpretability of KG representations, and ( "Finding Errors in Knowledge Graphs" ) detecting incorrect triples. ### Influence Function vs. CRIAGE
To evaluate the quality of our approximations and compare with the influence function (IF), we conduct leave-one-out experiments. In this setup, we take all the neighbors of a random target triple as candidate modifications, remove them one at a time, retrain the model each time, and compute the exact change in the score of the target triple. We can use the magnitude of this change in score to rank the candidate triples, and compare this exact ranking with the rankings predicted by: CRIAGE, the influence function with and without the Hessian matrix, and the original model score (with the intuition that facts the model is most confident of will have the largest impact when removed). Similarly, to evaluate the addition attack, we consider 200 random triples that share the object entity with the target sample as candidates, and rank them as above. The average results of Spearman's $\rho$ and Kendall's $\tau$ rank correlation coefficients over 10 random target samples are provided in Table 3. CRIAGE performs comparably to the influence function, confirming that our approximation is accurate. The influence function is slightly more accurate because it uses the complete Hessian matrix over all the parameters, while we only approximate the change by calculating the Hessian over $\mathbf{e}_o$. The effect of this difference on scalability is dramatic, constraining IF to very small graphs and small embedding dimensionality ($d\le 10$) before we run out of memory. In Figure 3, we show the time to compute a single adversary by IF compared to CRIAGE, as we steadily grow the number of entities (randomly chosen subgraphs), averaged over 10 random triples. As it shows, CRIAGE is mostly unaffected by the number of entities, while IF increases quadratically. Considering that real-world KGs have tens of thousands of times more entities, IF is infeasible for them. ### Robustness of Link Prediction Models
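For reference, the Spearman correlation used in the previous comparison can be computed directly (a minimal version assuming no tied scores):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation between two score lists (no ties assumed),
    as used to compare approximate and exact influence rankings."""
    rx = np.argsort(np.argsort(x))   # ranks of x
    ry = np.argsort(np.argsort(y))   # ranks of y
    n = len(x)
    d = rx - ry
    return float(1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1)))
```

A value of 1 means the approximate ranking reproduces the exact leave-one-out ranking perfectly; -1 means it is fully reversed.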
Now we evaluate the effectiveness of CRIAGE in successfully attacking link prediction by adding false facts. The goal here is to identify the attacks for triples in the test data, and to measure their effect on the MRR and Hits@k metrics (ranking evaluations) after conducting the attack and retraining the model. Since this is the first work on adversarial attacks for link prediction, we introduce several baselines to compare against our method. For finding the adversarial fact to add for the target triple $\langle s, r, o \rangle$, we consider two baselines: 1) choosing a random fake fact $\langle s^{\prime}, r^{\prime}, o \rangle$ (Random Attack); 2) finding $(s^{\prime}, r^{\prime})$ by first calculating $f(\mathbf{e}_s, \mathbf{e}_r)$ and then feeding $-f(\mathbf{e}_s, \mathbf{e}_r)$ to the decoder of the inverter function (Opposite Attack). In addition to CRIAGE, we introduce two alternatives of our method: (1) a variant that uses the same optimization to increase the score of a fake fact over a test triple, i.e., we find the fake fact the model ranks second after the test triple, and identify the adversary for it, and (2) a variant that selects whichever of these two attacks has a higher estimated change in score. All-Test The result of the attack on all test facts as targets is provided in Table 4. CRIAGE outperforms the baselines, demonstrating its ability to effectively attack the KG representations. It seems DistMult is more robust against random attacks, while ConvE is more robust against designed attacks. The fake-fact variant is more effective than attacking the test triple directly, since changing the score of a fake fact is easier than that of actual facts; there is no existing evidence to support fake facts. We also see that YAGO3-10 models are more robust than those for WN18. Looking at sample attacks (provided in Appendix "Sample Adversarial Attacks"), CRIAGE mostly tries to change the type of the target object by associating it with a subject and a relation for a different entity type.
Uncertain-Test To better understand the effect of attacks, we consider a subset of test triples such that 1) the model predicts them correctly, and 2) the difference between their scores and the highest-scoring negative sample is minimal. This “Uncertain-Test” subset contains 100 triples from each of the original test sets, and we provide results of attacks on this data in Table 4. The attacks are much more effective in this scenario, causing a considerable drop in the metrics. Further, in addition to significantly outperforming the other baselines, they indicate that ConvE's confidence is much more robust. Relation Breakdown We perform additional analysis on the YAGO3-10 dataset to gain a deeper understanding of the performance of our model. As shown in Figure 4, both DistMult and ConvE provide a more robust representation for the isAffiliatedTo and isConnectedTo relations, demonstrating the confidence of the models in identifying them. Moreover, the attack affects DistMult more on the playsFor and isMarriedTo relations, while affecting ConvE more on isConnectedTo relations. Examples Sample adversarial attacks are provided in Table 5. The attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity type. ### Interpretability of Models
To understand and interpret why a link is predicted using the opaque, dense embeddings, we need to find out which part of the graph was most influential on the prediction. To provide such explanations for each prediction, we identify the most influential fact using CRIAGE-Remove. Instead of focusing on individual predictions, we aggregate the explanations over the whole dataset for each relation using a simple rule-extraction technique: we find simple patterns on subgraphs that surround the target triple and the removed fact, and that appear more than $90\%$ of the time. We only focus on extracting length-2 horn rules, i.e., $R_1(a,c)\wedge R_2(c,b)\Rightarrow R(a,b)$, where $R(a,b)$ is the target and $R_2(c,b)$ is the removed fact. Table 6 shows extracted YAGO3-10 rules that are common to both models, and ones that are not. The rules show several interesting inferences, such as hasChild often being inferred via married parents, and isLocatedIn via transitivity. There are several differences in how the models reason as well; DistMult often uses hasCapital as an intermediate step for isLocatedIn, while ConvE incorrectly uses isNeighbor. We also compare against rules extracted by BIBREF2 for YAGO3-10 that utilize the structure of DistMult: they require domain knowledge on types and cannot be applied to ConvE. Interestingly, our extracted rules contain all the rules provided by BIBREF2, demonstrating that CRIAGE can be used to accurately interpret models, including ones that are not inherently interpretable, such as ConvE. These are preliminary steps toward interpretability of link prediction models, and we leave deeper analysis of interpretability to future work. ### Finding Errors in Knowledge Graphs
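The rule-aggregation step described above can be sketched as a simple counting procedure (a naive version; the paper's exact matching may differ):

```python
from collections import Counter

def extract_rules(explanations, graph, min_support=0.9):
    """Aggregate per-prediction explanations into length-2 horn rules
    R1(a,c) & R2(c,b) => R(a,b). `explanations` holds pairs of
    (target, removed) triples, target = (a, R, b), removed = (c, R2, b)."""
    per_relation = Counter()
    pattern = Counter()
    for (a, R, b), (c, R2, _) in explanations:
        per_relation[R] += 1
        for (x, R1, y) in graph:          # look for a connecting edge R1(a, c)
            if x == a and y == c:
                pattern[(R1, R2, R)] += 1
    return [p for p, n in pattern.items()
            if n >= min_support * per_relation[p[2]]]
```

For instance, feeding in explanations where hasChild targets are explained by the spouse's hasChild fact recovers the married-parents rule mentioned above.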
Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should place the least trust in this triple. In other words, the erroneous triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\langle s^{\prime}, r^{\prime}, o\rangle$ in the neighborhood of the training triple $\langle s, r, o\rangle$, we need to find the triple $\langle s^{\prime},r^{\prime},o\rangle$ that results in the least change $\Delta_{(s^{\prime},r^{\prime})}(s,r,o)$ when removed from the graph. To evaluate this application, we inject random triples into the graph, and measure the ability of CRIAGE to detect the errors using our optimization. We consider two types of incorrect triples: 1) incorrect triples in the form of $\langle s^{\prime}, r, o\rangle$ where $s^{\prime}$ is chosen randomly from all of the entities, and 2) incorrect triples in the form of $\langle s^{\prime}, r^{\prime}, o\rangle$ where $s^{\prime}$ and $r^{\prime}$ are chosen randomly. We choose 100 random triples from the observed graph, and for each of them, add an incorrect triple (in each of the two scenarios) to its neighborhood. Then, after retraining DistMult on this noisy training data, we identify error triples through a search over the neighbors of the 100 facts. The result of choosing the neighbor with the least influence on the target is provided in Table 7. When compared with baselines that randomly choose one of the neighbors, or assume that the fact with the lowest score is incorrect, we see that CRIAGE outperforms both of them by a considerable gap, obtaining an accuracy of $42\%$ and $55\%$ in detecting errors. ### Related Work
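The error-detection heuristic above reduces to a one-line selection once an influence estimate is available (`influence_fn` here is a hypothetical callback, e.g. the first-order approximation sketched earlier):

```python
def least_influential_neighbor(neighbors, influence_fn):
    """Error-detection heuristic: among the observed neighbours <s', r', o>
    of a training triple, flag the one whose removal changes the target
    score the least, i.e. with the smallest |Delta_{(s',r')}(s,r,o)|."""
    return min(neighbors, key=lambda n: abs(influence_fn(n)))
```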
Learning relational knowledge representations has been a focus of active research in the past few years, but to the best of our knowledge, this is the first work on conducting adversarial modifications for the link prediction task. Knowledge graph embedding There is a rich literature on representing knowledge graphs in vector spaces that differ in their scoring functions BIBREF21, BIBREF22, BIBREF23. Although CRIAGE is primarily applicable to multiplicative scoring functions BIBREF0, BIBREF1, BIBREF2, BIBREF24, these ideas apply to additive scoring functions BIBREF18, BIBREF6, BIBREF7, BIBREF25 as well, as we show in Appendix "First-order Approximation of the Change For TransE". Furthermore, there is a growing body of literature that incorporates extra types of evidence for more informed embeddings, such as numerical values BIBREF26, images BIBREF27, text BIBREF28, BIBREF29, BIBREF30, and their combinations BIBREF31. Using CRIAGE, we can gain a deeper understanding of these methods, especially those that build their embeddings with multiplicative scoring functions. Interpretability and Adversarial Modification There has been significant recent interest in conducting adversarial attacks on different machine learning models BIBREF16, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36 to attain interpretability and, further, to evaluate the robustness of those models. BIBREF20 uses influence functions to provide an approach to understanding black-box models by studying the changes in the loss occurring as a result of changes in the training data.
In addition to applying their established method to KGs, we derive a novel approach that differs from their procedure in two ways: (1) instead of changes in the loss, we consider the changes in the scoring function, which is more appropriate for KG representations, and (2) in addition to searching for an attack, we introduce a gradient-based method that is much faster, especially for “adding an attack triple” (the size of the search space makes the influence function method infeasible). Previous work has also considered adversaries for KGs, but as part of training to improve their representation of the graph BIBREF37, BIBREF38. Adversarial Attack on KG Although this is the first work on adversarial attacks for link prediction, there are two approaches BIBREF39, BIBREF17 that consider the task of adversarial attack on graphs. There are a few fundamental differences from our work: (1) they build their method on top of path-based representations while we focus on embeddings, (2) they consider node classification as the target of their attacks while we attack link prediction, and (3) they conduct the attack on small graphs due to restricted scalability, while the complexity of our method does not depend on the size of the graph, but only on the neighborhood, allowing us to attack real-world graphs. ### Conclusions
Motivated by the need to analyze the robustness and interpretability of link prediction models, we present a novel approach for conducting adversarial modifications to knowledge graphs. We introduce CRIAGE, completion robustness and interpretability via adversarial graph edits: identifying the fact to add into or remove from the KG that changes the prediction for a target fact. CRIAGE uses (1) an estimate of the score change for any target triple after adding or removing another fact, and (2) a gradient-based algorithm for identifying the most influential modification. We show that CRIAGE can effectively reduce ranking metrics on link prediction models upon applying the attack triples. Further, we use CRIAGE to study the interpretability of KG representations by summarizing the most influential facts for each relation. Finally, using CRIAGE, we introduce a novel automated error-detection method for knowledge graphs. We have released the open-source implementation of our models at: https://pouyapez.github.io/criage. ### Acknowledgements
We would like to thank Matt Gardner, Marco Tulio Ribeiro, Zhengli Zhao, Robert L. Logan IV, Dheeru Dua and the anonymous reviewers for their detailed feedback and suggestions. This work is supported in part by Allen Institute for Artificial Intelligence (AI2) and in part by NSF awards #IIS-1817183 and #IIS-1756023. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies. ### Appendix
We approximate the change in the score of the target triple upon applying attacks other than $\langle s^{\prime}, r^{\prime}, o \rangle$ ones. Since each relation appears many times in the training triples, we can assume that applying a single attack will not considerably affect the relation embeddings. As a result, we just need to study attacks of the form $\langle s, r^{\prime}, o \rangle$ and $\langle s, r^{\prime}, o^{\prime} \rangle$. Defining the scoring function as $\psi(s,r,o) = f(\mathbf{e}_s, \mathbf{e}_r) \cdot \mathbf{e}_o = \mathbf{z}_{s,r} \cdot \mathbf{e}_o$, we further assume that $\psi(s,r,o) = \mathbf{e}_s \cdot g(\mathbf{e}_r, \mathbf{e}_o) = \mathbf{e}_s \cdot \mathbf{z}_{r,o}$. ### Modifications of the Form $\langle s, r^{\prime}, o^{\prime} \rangle$
Using a similar argument as for attacks of the form $\langle s^{\prime}, r^{\prime}, o \rangle$, we can calculate the effect of the attack, $\overline{\psi}(s,r,o)-\psi(s, r, o)$, as: $$\overline{\psi}(s,r,o)-\psi(s, r, o) = (\overline{\mathbf{e}_s} - \mathbf{e}_s) \cdot \mathbf{z}_{r,o},$$ where $\mathbf{z}_{r,o} = g(\mathbf{e}_r, \mathbf{e}_o)$. We now derive an efficient computation for $\overline{\mathbf{e}_s} - \mathbf{e}_s$. First, the derivative of the loss $\mathcal{L}(\overline{G})= \mathcal{L}(G)+\mathcal{L}(\langle s, r^{\prime}, o^{\prime} \rangle)$ over $\mathbf{e}_s$ is: $$\nabla_{e_s} \mathcal{L}(\overline{G}) = \nabla_{e_s} \mathcal{L}(G) - (1-\varphi)\,\mathbf{z}_{r^{\prime},o^{\prime}},$$ where $\mathbf{z}_{r^{\prime},o^{\prime}} = g(\mathbf{e}_{r^{\prime}},\mathbf{e}_{o^{\prime}})$, and $\varphi = \sigma(\psi(s,r^{\prime},o^{\prime}))$. At convergence, after retraining, we expect $\nabla_{e_s} \mathcal{L}(\overline{G})=0$. We perform a first-order Taylor approximation of $\nabla_{e_s} \mathcal{L}(\overline{G})$ to get: $$0 \approx -(1-\varphi)\,\mathbf{z}_{r^{\prime},o^{\prime}}+ \left(H_s+\varphi(1-\varphi)\,\mathbf{z}_{r^{\prime},o^{\prime}}^{\intercal} \mathbf{z}_{r^{\prime},o^{\prime}}\right)(\overline{\mathbf{e}_s}-\mathbf{e}_s),$$ where $H_s$ is the $d\times d$ Hessian matrix for $s$, i.e. the second-order derivative of the loss w.r.t. $\mathbf{e}_s$, computed sparsely. Solving for $\overline{\mathbf{e}_s}-\mathbf{e}_s$ gives us: $$\overline{\mathbf{e}_s}-\mathbf{e}_s = (1-\varphi) \left(H_s + \varphi(1-\varphi)\,\mathbf{z}_{r^{\prime},o^{\prime}}^{\intercal}\mathbf{z}_{r^{\prime},o^{\prime}}\right)^{-1} \mathbf{z}_{r^{\prime},o^{\prime}}.$$ In practice, $H_s$ is positive definite, making $H_s + \varphi(1-\varphi)\,\mathbf{z}_{r^{\prime},o^{\prime}}^{\intercal}\mathbf{z}_{r^{\prime},o^{\prime}}$ positive definite as well, and invertible. Then, we compute the score change as: $$\overline{\psi}(s,r,o)-\psi(s, r, o) = \left((1-\varphi) \left(H_s + \varphi(1-\varphi)\,\mathbf{z}_{r^{\prime},o^{\prime}}^{\intercal}\mathbf{z}_{r^{\prime},o^{\prime}}\right)^{-1} \mathbf{z}_{r^{\prime},o^{\prime}}\right)\cdot \mathbf{z}_{r,o}.$$ ### Modifications of the Form $\langle s, r^{\prime}, o \rangle$
In this section we approximate the effect of an attack of the form $\langle s, r^{\prime}, o \rangle$. In contrast to $\langle s^{\prime}, r^{\prime}, o \rangle$ attacks, for this scenario we need to consider the change in $\mathbf{e}_s$ as well as in $\mathbf{e}_o$ when approximating the change in the score. Using the previous results, we can approximate $\overline{\mathbf{e}_o}-\mathbf{e}_o$ as: $$\overline{\mathbf{e}_o}-\mathbf{e}_o = (1-\varphi) \left(H_o + \varphi(1-\varphi)\,\mathbf{z}_{s,r^{\prime}}^{\intercal}\mathbf{z}_{s,r^{\prime}}\right)^{-1} \mathbf{z}_{s,r^{\prime}},$$ and similarly, we can approximate $\overline{\mathbf{e}_s}-\mathbf{e}_s$ as: $$\overline{\mathbf{e}_s}-\mathbf{e}_s = (1-\varphi) \left(H_s + \varphi(1-\varphi)\,\mathbf{z}_{r^{\prime},o}^{\intercal}\mathbf{z}_{r^{\prime},o}\right)^{-1} \mathbf{z}_{r^{\prime},o},$$ where $H_s$ is the Hessian matrix over $\mathbf{e}_s$. Then, using these approximations, the change in the score is: $$\overline{\psi}(s,r,o)-\psi(s, r, o) = \mathbf{z}_{s,r}\cdot(\overline{\mathbf{e}_o}-\mathbf{e}_o) + (\overline{\mathbf{e}_s}-\mathbf{e}_s)\cdot\mathbf{z}_{r,o} = \mathbf{z}_{s,r}\left((1-\varphi) \left(H_o + \varphi(1-\varphi)\,\mathbf{z}_{s,r^{\prime}}^{\intercal}\mathbf{z}_{s,r^{\prime}}\right)^{-1} \mathbf{z}_{s,r^{\prime}}\right) + \left((1-\varphi) \left(H_s + \varphi(1-\varphi)\,\mathbf{z}_{r^{\prime},o}^{\intercal}\mathbf{z}_{r^{\prime},o}\right)^{-1} \mathbf{z}_{r^{\prime},o}\right)\cdot \mathbf{z}_{r,o}.$$ ### First-order Approximation of the Change For TransE
Here we derive the approximation of the change in the score upon applying an adversarial modification for TransE BIBREF18. Using similar assumptions and parameters as before, to calculate the effect of the attack $\overline{\psi}(s,r,o)$ (where $\psi(s,r,o)=|\mathbf{e}_s+\mathbf{e}_r-\mathbf{e}_o|$), we need to compute $\overline{\mathbf{e}_o}$. First, the derivative of the loss $\mathcal{L}(\overline{G})= \mathcal{L}(G)+\mathcal{L}(\langle s^{\prime}, r^{\prime}, o \rangle)$ over $\mathbf{e}_o$ is: $$\nabla_{e_o} \mathcal{L}(\overline{G}) = \nabla_{e_o} \mathcal{L}(G) + (1-\varphi)\,\frac{\mathbf{z}_{s^{\prime},r^{\prime}}-\mathbf{e}_o}{\psi(s^{\prime},r^{\prime},o)},$$ where $\mathbf{z}_{s^{\prime},r^{\prime}} = \mathbf{e}_{s^{\prime}} + \mathbf{e}_{r^{\prime}}$, and $\varphi = \sigma(\psi(s^{\prime},r^{\prime},o))$. At convergence, after retraining, we expect $\nabla_{e_o} \mathcal{L}(\overline{G})=0$. We perform a first-order Taylor approximation of $\nabla_{e_o} \mathcal{L}(\overline{G})$ to get: $$0 \approx (1-\varphi)\,\frac{\mathbf{z}_{s^{\prime},r^{\prime}}-\mathbf{e}_o}{\psi(s^{\prime},r^{\prime},o)}+\left(H_o - H_{s^{\prime},r^{\prime},o}\right)(\overline{\mathbf{e}_o}-\mathbf{e}_o),$$ $$H_{s^{\prime},r^{\prime},o} = \varphi(1-\varphi)\,\frac{(\mathbf{z}_{s^{\prime},r^{\prime}}-\mathbf{e}_o)(\mathbf{z}_{s^{\prime},r^{\prime}}-\mathbf{e}_o)^{\intercal}}{\psi(s^{\prime},r^{\prime},o)^2}+ \frac{1-\varphi}{\psi(s^{\prime},r^{\prime},o)}\,I-(1-\varphi)\,\frac{(\mathbf{z}_{s^{\prime},r^{\prime}}-\mathbf{e}_o)(\mathbf{z}_{s^{\prime},r^{\prime}}-\mathbf{e}_o)^{\intercal}}{\psi(s^{\prime},r^{\prime},o)^3},$$ where $H_o$ is the $d\times d$ Hessian matrix for $o$, i.e., the second-order derivative of the loss w.r.t. $\mathbf{e}_o$, computed sparsely. Solving for $\overline{\mathbf{e}_o}$ gives us: $$\overline{\mathbf{e}_o} = -(1-\varphi)\left(H_o - H_{s^{\prime},r^{\prime},o}\right)^{-1} \frac{\mathbf{z}_{s^{\prime},r^{\prime}}-\mathbf{e}_o}{\psi(s^{\prime},r^{\prime},o)} + \mathbf{e}_o.$$ Then, we compute the score change as: $$\overline{\psi}(s,r,o)= |\mathbf{e}_s+\mathbf{e}_r-\overline{\mathbf{e}_o}| = \left|\mathbf{e}_s+\mathbf{e}_r+(1-\varphi)\left(H_o - H_{s^{\prime},r^{\prime},o}\right)^{-1} \frac{\mathbf{z}_{s^{\prime},r^{\prime}}-\mathbf{e}_o}{\psi(s^{\prime},r^{\prime},o)} - \mathbf{e}_o\right|.$$ Calculating this expression is efficient since $H_o$ is a $d\times d$ matrix. ### Sample Adversarial Attacks
In this section, we provide the output of CRIAGE for some target triples. Sample adversarial attacks are provided in Table 5. As shown, the attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity type.

Figure 1: Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE): Change in the graph structure that changes the prediction of the retrained model, where (a) is the original sub-graph of the KG, (b) removes a neighboring link of the target, resulting in a change in the prediction, and (c) shows the effect of adding an attack triple on the target. These modifications were identified by our proposed approach.

Figure 2: Inverter Network. The architecture of our inverter function that translates z_{s,r} to its respective (s̃, r̃). The encoder component is fixed to be the encoder network of DistMult and ConvE respectively.

Table 1: Inverter Functions Accuracy. We calculate the accuracy of our inverter networks in correctly recovering the pairs of subject and relation from the test set of our benchmarks.

Table 2: Data Statistics of the benchmarks.

Figure 3: Influence function vs CRIAGE. We plot the average time (over 10 facts) of influence function (IF) and CRIAGE to identify an adversary as the number of entities in the Kinship KG is varied (by randomly sampling subgraphs of the KG). Even with small graphs and dimensionality, IF quickly becomes impractical.

Table 3: Ranking modifications by their impact on the target. We compare the true ranking of candidate triples with a number of approximations using ranking correlation coefficients. We compare our method with influence function (IF) with and without Hessian, and ranking the candidates based on their score, on two KGs (d = 10, averaged over 10 random targets). For the sake of brevity, we represent the Spearman's ρ and Kendall's τ rank correlation coefficients simply as ρ and τ.
Table 4: Robustness of Representation Models. The effect of the adversarial attack on the link prediction task. We consider two scenarios for the target triples: 1) choosing the whole test dataset as the targets (All-Test) and 2) choosing a subset of test data that models are uncertain about (Uncertain-Test).

Figure 4: Per-Relation Breakdown showing the effect of CRIAGE-Add on different relations in YAGO3-10.

Table 5: Extracted Rules for identifying the most influential link. We extract the patterns that appear more than 90% of the time in the neighborhood of the target triple. The output of CRIAGE-Remove is presented in red.

Table 6: Error Detection Accuracy in the neighborhood of 100 chosen samples. We choose the neighbor with the least value of ∆(s′,r′)(s, r, o) as the incorrect fact. This experiment assumes we know each target fact has exactly one error.

Table 7: Top adversarial triples for target samples.
### What Is Open Access?

Shifting from ink on paper to digital text suddenly allows us to make perfect copies of our work. Shifting from isolated computers to a globe-spanning network of connected computers suddenly allows us to share perfect copies of our work with a worldwide audience at essentially no cost. About thirty years ago this kind of free global sharing became something new under the sun. Before that, it would have sounded like a quixotic dream. Digital technologies have created more than one revolution. Let’s call this one the access revolution.

Why don’t more authors take advantage of the access revolution to reach more readers? The answer is pretty clear. Authors who share their works in this way aren’t selling them, and even authors with purposes higher than money depend on sales to make a living. Or at least they appreciate sales.

Let’s sharpen the question, then, by putting to one side authors who want to sell their work. We can even acknowledge that we’re putting aside the vast majority of authors. Imagine a tribe of authors who write serious and useful work, and who follow a centuries-old custom of giving it away without charge. I don’t mean a group of rich authors who don’t need money. I mean a group of authors defined by their topics, genres, purposes, incentives, and institutional circumstances, not by their wealth. In fact, very few are wealthy. For now, it doesn’t matter who these authors are, how rare they are, what they write, or why they follow this peculiar custom. It’s enough to know that their employers pay them salaries, freeing them to give away their work, that they write for impact rather than money, and that they score career points when they make the kind of impact they hoped to make.
Suppose that selling their work would actually harm their interests by shrinking their audience, reducing their impact, and distorting their professional goals by steering them toward popular topics and away from the specialized questions on which they are experts. If authors like that exist, at least they should take advantage of the access revolution. The dream of global free access can be a reality for them, even if most other authors hope to earn royalties and feel obliged to sit out this particular revolution. These lucky authors are scholars, and the works they customarily write and publish without payment are peer-reviewed articles in scholarly journals. Open access is the name of the revolutionary kind of access these authors, unencumbered by a motive of financial gain, are free to provide to their readers. Open access (OA) literature is digital, online, free of charge, and free of most copyright and licensing restrictions. We could call it “barrier-free” access, but that would emphasize the negative rather than the positive. In any case, we can be more specific about which access barriers OA removes. A price tag is a significant access barrier. Most works with price tags are individually affordable. But when a scholar needs to read or consult hundreds of works for one research project, or when a library must provide access for thousands of faculty and students working on tens of thousands of topics, and when the volume of new work grows explosively every year, price barriers become insurmountable. The resulting access gaps harm authors by limiting their audience and impact, harm readers by limiting what they can retrieve and read, and thereby harm research from both directions. OA removes price barriers. Copyright can also be a significant access barrier. 
If you have access to a work for reading but want to translate it into another language, distribute copies to colleagues, copy the text for mining with sophisticated software, or reformat it for reading with new technology, then you generally need the permission of the copyright holder. That makes sense when the author wants to sell the work and when the use you have in mind could undermine sales. But for research articles we’re generally talking about authors from the special tribe who want to share their work as widely as possible. Even these authors, however, tend to transfer their copyrights to intermediaries—publishers—who want to sell their work. As a result, users may be hampered in their research by barriers erected to serve intermediaries rather than authors. In addition, replacing user freedom with permission-seeking harms research authors by limiting the usefulness of their work, harms research readers by limiting the uses they may make of works even when they have access, and thereby harms research from both directions. OA removes these permission barriers.

Removing price barriers means that readers are not limited by their own ability to pay, or by the budgets of the institutions where they may have library privileges. Removing permission barriers means that scholars are free to use or reuse literature for scholarly purposes. These purposes include reading and searching, but also redistributing, translating, text mining, migrating to new media, long-term archiving, and innumerable new forms of research, analysis, and processing we haven’t yet imagined. OA makes work more useful in both ways, by making it available to more people who can put it to use, and by freeing those people to use and reuse it.

### Terminology

When we need to, we can be more specific about access vehicles and access barriers. In the jargon, OA delivered by journals is called gold OA, and OA delivered by repositories is called green OA.
Work that is not open access, or that is available only for a price, is called toll access (TA). Over the years I’ve asked publishers for a neutral, nonpejorative and nonhonorific term for toll-access publishers, and conventional publishers is the suggestion I hear most often. While every kind of OA removes price barriers, there are many different permission barriers we could remove if we wanted to. If we remove price barriers alone, we provide gratis OA, and if we remove at least some permission barriers as well, we provide libre OA. (Also see section 3.1 on green/gold and section 3.3 on gratis/libre.) OA was defined in three influential public statements: the Budapest Open Access Initiative (February 2002), the Bethesda Statement on Open Access Publishing (June 2003), and the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities (October 2003). I sometimes refer to their overlap or common ground as the BBB definition of OA. My definition here is the BBB definition reduced to its essential elements and refined with some post-BBB terminology (green, gold, gratis, libre) for speaking precisely about subspecies of OA. Here’s how the Budapest statement defined OA: There are many degrees and kinds of wider and easier access to [research] literature. By “open access” to this literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.
Here’s how the Bethesda and Berlin statements put it: For a work to be OA, the copyright holder must consent in advance to let users “copy, use, distribute, transmit and display the work publicly and to make and distribute derivative works, in any digital medium for any responsible purpose, subject to proper attribution of authorship.” Note that all three legs of the BBB definition go beyond removing price barriers to removing permission barriers, or beyond gratis OA to libre OA. But at the same time, all three allow at least one limit on user freedom: an obligation to attribute the work to the author. The purpose of OA is to remove barriers to all legitimate scholarly uses for scholarly literature, but there’s no legitimate scholarly purpose in suppressing attribution to the texts we use. (That’s why my shorthand definition says that OA literature is free of “most” rather than “all” copyright and licensing restrictions.) The basic idea of OA is simple: Make research literature available online without price barriers and without most permission barriers. Even the implementation is simple enough that the volume of peer-reviewed OA literature and the number of institutions providing it have grown at an increasing rate for more than a decade. If there are complexities, they lie in the transition from where we are now to a world in which OA is the default for new research. This is complicated because the major obstacles are not technical, legal, or economic, but cultural. (More in chapter 9 on the future.) In principle, any kind of digital content can be OA, since any digital content can be put online without price or permission barriers. Moreover, any kind of content can be digital: texts, data, images, audio, video, multimedia, and executable code. We can have OA music and movies, news and novels, sitcoms and software—and to different degrees we already do. But the term “open access” was coined by researchers trying to remove access barriers to research. 
The next section explains why.

1.1 What Makes OA Possible?

OA is made possible by the internet and copyright-holder consent. But why would a copyright holder consent to OA? Two background facts suggest the answer. First, authors are the copyright holders for their work until or unless they transfer rights to someone else, such as a publisher. Second, scholarly journals generally don’t pay authors for their research articles, which frees this special tribe of authors to consent to OA without losing revenue. This fact distinguishes scholars decisively from musicians and moviemakers, and even from most other kinds of authors. This is why controversies about OA to music and movies don’t carry over to OA for research articles. Both facts are critical, but the second is nearly unknown outside the academic world. It’s not a new fact of academic life, arising from a recent economic downturn in the publishing industry. Nor is it a case of corporate exploitation of unworldly academics. Scholarly journals haven’t paid authors for their articles since the first scholarly journals, the Philosophical Transactions of the Royal Society of London and the Journal des sçavans, launched in London and Paris in 1665. The academic custom to write research articles for impact rather than money may be a lucky accident that could have been otherwise. Or it may be a wise adaptation that would eventually evolve in any culture with a serious research subculture. (The optimist in me wants to believe the latter, but the evolution of copyright law taunts that optimism.) This peculiar custom does more than insulate cutting-edge research from the market and free scholars to consent to OA without losing revenue. It also supports academic freedom and the kinds of serious inquiry that advance knowledge. It frees researchers to challenge conventional wisdom and defend unpopular ideas, which are essential to academic freedom.
At the same time it frees them to microspecialize and defend ideas of immediate interest to just a handful of people in the world, which are essential to pushing the frontiers of knowledge. This custom doesn’t guarantee that truth-seeking won’t be derailed by profit-seeking, and it doesn’t guarantee that we’ll eventually fill the smallest gaps in our collaborative understanding of the world. It doesn’t even guarantee that scholars won’t sometimes play for the crowd and detour into fad thinking. But it removes a major distraction by allowing them, if they wish, to focus on what is likely to be true rather than what is likely to sell. It’s a payment structure we need for good research itself, not just for good access to research, and it’s the key to the legal and economic lock that would otherwise shackle steps toward OA. Creative people who live by royalties, such as novelists, musicians, and moviemakers, may consider this scholarly tradition a burden and sacrifice for scholars. We might even agree, provided we don’t overlook a few facts. First, it’s a sacrifice that scholars have been making for nearly 350 years. OA to research articles doesn’t depend on asking royalty-earning authors to give up their royalties. Second, academics have salaries from universities, freeing them to dive deeply into their research topics and publish specialized articles without market appeal. Many musicians and moviemakers might envy that freedom to disregard sales and popular taste. Third, academics receive other, less tangible rewards from their institutions—like promotion and tenure—when their research is recognized by others, accepted, cited, applied, and built upon. It’s no accident that faculty who advance knowledge in their fields also advance their careers. Academics are passionate about certain topics, ideas, questions, inquiries, or disciplines. They feel lucky to have jobs in which they may pursue these passions and even luckier to be rewarded for pursuing them.
Some focus single-mindedly on carrying an honest pebble to the pile of knowledge (as John Lange put it), having an impact on their field, or scooping others working on the same questions. Others focus strategically on building the case for promotion and tenure. But the two paths converge, which is not a fortuitous fact of nature but an engineered fact of life in the academy. As incentives for productivity, these intangible career benefits may be stronger for the average researcher than royalties are for the average novelist or musician. (In both domains, bountiful royalties for superstars tell us nothing about effective payment models for the long tail of less stellar professionals.) There’s no sense in which research would be more free, efficient, or effective if academics took a more “businesslike” position, behaved more like musicians and moviemakers, abandoned their insulation from the market, and tied their income to the popularity of their ideas. Nonacademics who urge academics to come to their senses and demand royalties even for journal articles may be more naive about nonprofit research than academics are about for-profit business. We can take this a step further. Scholars can afford to ignore sales because they have salaries and research grants to take the place of royalties. But why do universities pay salaries and why do funding agencies award grants? They do it to advance research and the range of public interests served by research. They don’t do it to earn profits from the results. They are all nonprofit. They certainly don’t do it to make scholarly writings into gifts to enrich publishers, especially when conventional publishers erect access barriers at the expense of research. Universities and funding agencies pay researchers to make their research into gifts to the public in the widest sense. Public and private funding agencies are essentially public and private charities, funding research they regard as useful or beneficial. 
Universities have a public purpose as well, even when they are private institutions. We support the public institutions with public funds, and we support the private ones with tax exemptions for their property and tax deductions for their donors. We’d have less knowledge, less academic freedom, and less OA if researchers worked for royalties and made their research articles into commodities rather than gifts. It should be no surprise, then, that more and more funding agencies and universities are adopting strong OA policies. Their mission to advance research leads them directly to the logic of OA: With a few exceptions, such as classified research, research that is worth funding or facilitating is worth sharing with everyone who can make use of it. (See chapter 4 on OA policies.) Newcomers to OA often assume that OA helps readers and hurts authors, and that the reader side of the scholarly soul must beg the author side to make the necessary sacrifice. But OA benefits authors as well as readers. Authors want access to readers at least as much as readers want access to authors. All authors want to cultivate a larger audience and greater impact. Authors who work for royalties have reason to compromise and settle for the smaller audience of paying customers. But authors who aren’t paid for their writing have no reason to compromise. It takes nothing away from a disinterested desire to advance knowledge to recognize that scholarly publication is accompanied by a strong interest in impact and career building. The result is a mix of interested and disinterested motives. The reasons to make work OA are essentially the same as the reasons to publish. Authors who make their work OA are always serving others but not always acting from altruism. In fact, the idea that OA depends on author altruism slows down OA progress by hiding the role of author self-interest.
Another aspect of author self-interest emerges from the well-documented phenomenon that OA articles are cited more often than non-OA articles, even when they are published in the same issue of the same journal. There’s growing evidence that OA articles are downloaded more often as well, and that journals converting to OA see a rise in their submissions and citation impact. There are many hypotheses to explain the correlation between OA and increased citations, but it’s likely that ongoing studies will show that much of the correlation is simply due to the larger audience and heightened visibility provided by OA itself. When you enlarge the audience for an article, you also enlarge the subset of the audience that will later cite it, including professionals in the same field at institutions unable to afford subscription access. OA enlarges the potential audience, including the potential professional audience, far beyond that for even the most prestigious and popular subscription journals. In any case, these studies bring a welcome note of author self-interest to the case for OA. OA is not a sacrifice for authors who write for impact rather than money. It increases a work’s visibility, retrievability, audience, usage, and citations, which all convert to career building. For publishing scholars, it would be a bargain even if it were costly, difficult, and time-consuming. But as we’ll see, it’s not costly, not difficult, and not time-consuming. My colleague Stevan Harnad frequently compares research articles to advertisements. They advertise the author’s research. Try telling advertisers that they’re making a needless sacrifice by allowing people to read their ads without having to pay for the privilege. Advertisers give away their ads and even pay to place them where they might be seen. They do this to benefit themselves, and scholars have the same interest in sharing their message as widely as possible. 
Because any content can be digital, and any digital content can be OA, OA needn’t be limited to royalty-free literature like research articles. Research articles are just ripe examples of low-hanging fruit. OA could extend to royalty-producing work like monographs, textbooks, novels, news, music, and movies. But as soon as we cross the line into OA for royalty-producing work, authors will either lose revenue or fear that they will lose revenue. Either way, they’ll be harder to persuade. But instead of concluding that royalty-producing work is off limits to OA, we should merely conclude that it’s higher-hanging fruit. In many cases we can still persuade royalty-earning authors to consent to OA. (See section 5.3 on OA for books.) Authors of scholarly research articles aren’t the only players who work without pay in the production of research literature. In general, scholarly journals don’t pay editors or referees either. In general, editors and referees are paid salaries by universities to free them, like authors, to donate their time and labor to ensure the quality of new work appearing in scholarly journals. An important consequence follows. All the key players in peer review can consent to OA without losing revenue. OA needn’t dispense with peer review or favor unrefereed manuscripts over refereed articles. We can aim for the prize of OA to peer-reviewed scholarship. (See section 5.1 on peer review.) Of course, conventional publishers are not as free as authors, editors, and referees to forgo revenue. This is a central fact in the transition to OA, and it explains why the interests of scholars and conventional publishers diverge more in the digital age than they diverged earlier. But not all publishers are conventional, and not all conventional publishers will carry print-era business models into the digital age. Academic publishers are not monolithic. Some new ones were born OA and some older ones have completely converted to OA. 
Many provide OA to some of their work but not all of it. Some are experimenting with OA, and some are watching the experiments of others. Most allow green OA (through repositories) and a growing number offer at least some kind of gold OA (through journals). Some are supportive, some undecided, some opposed. Among the opposed, some have merely decided not to provide OA themselves, while others lobby actively against policies to encourage or require OA. Some oppose gold but not green OA, while others oppose green but not gold OA. OA gains nothing and loses potential allies by blurring these distinctions. This variety reminds us (to paraphrase Tim O’Reilly) that OA doesn’t threaten publishing; it only threatens existing publishers who do not adapt. A growing number of journal publishers have chosen business models allowing them to dispense with subscription revenue and offer OA. They have expenses but they also have revenue to cover their expenses. In fact, some OA publishers are for-profit and profitable. (See chapter 7 on economics.) Moreover, peer review is done by dedicated volunteers who don’t care how a journal pays its bills, or even whether the journal is in the red or the black. If all peer-reviewed journals converted to OA overnight, the authors, editors, and referees would have the same incentives to participate in peer review that they had the day before. They needn’t stop offering their services, needn’t lower their standards, and needn’t make sacrifices they weren’t already making. They volunteer their time not because of a journal’s choice of business model but because of its contribution to research. They could carry on with solvent or insolvent subscription publishers, with solvent or insolvent OA publishers, or even without publishers. The Budapest Open Access Initiative said in February 2002: “An old tradition and a new technology have converged to make possible an unprecedented public good. 
The old tradition is the willingness of scientists and scholars to publish the fruits of their research in scholarly journals without payment. . . . The new technology is the internet.” To see what this willingness looks like without the medium to give it effect, look at scholarship in the age of print. Author gifts turned into publisher commodities, and access gaps for readers were harmfully large and widespread. (Access gaps are still harmfully large and widespread, but only because OA is not yet the default for new research.) To see what the medium looks like without the willingness, look at music and movies in the age of the internet. The need for royalties keeps creators from reaching everyone who would enjoy their work. A beautiful opportunity exists where the willingness and the medium overlap. A scholarly custom that evolved in the seventeenth century frees scholars to take advantage of the access revolution in the twentieth and twenty-first. Because scholars are nearly unique in following this custom, they are nearly unique in their freedom to take advantage of this revolution without financial risk. In this sense, the planets have aligned for scholars. Most other authors are constrained to fear rather than seize the opportunities created by the internet.

1.2 What OA Is Not

We can dispel a cloud of objections and misunderstandings simply by pointing out a few things that OA is not. (Many of these points will be elaborated in later chapters.) OA isn’t an attempt to bypass peer review. OA is compatible with every kind of peer review, from the most conservative to the most innovative, and all the major public statements on OA insist on its importance. Because scholarly journals generally don’t pay peer-reviewing editors and referees, just as they don’t pay authors, all the participants in peer review can consent to OA without losing revenue.
While OA to unrefereed preprints is useful and widespread, the OA movement isn’t limited to unrefereed preprints and, if anything, focuses on OA to peer-reviewed articles. (More in section 5.1 on peer review.) OA isn’t an attempt to reform, violate, or abolish copyright. It’s compatible with copyright law as it is. OA would benefit from the right kinds of copyright reforms, and many dedicated people are working on them. But it needn’t wait for reforms and hasn’t waited. OA literature avoids copyright problems in exactly the same way that conventional toll-access literature does. For older works, it takes advantage of the public domain, and for newer works, it rests on copyright-holder consent. (More in chapter 4 on policies and chapter 6 on copyright.) OA isn’t an attempt to deprive royalty-earning authors of income. The OA movement focuses on research articles precisely because they don’t pay royalties. In any case, inside and outside that focus, OA for copyrighted work depends on copyright-holder consent. Hence, royalty-earning authors have nothing to fear but persuasion that the benefits of OA might outweigh the risks to royalties. (More in section 5.3 on OA for books.) OA isn’t an attempt to deny the reality of costs. No serious OA advocate has ever argued that OA literature is costless to produce, although many argue that it is less expensive to produce than conventionally published literature, even less expensive than born-digital toll-access literature. The question is not whether research literature can be made costless, but whether there are better ways to pay the bills than charging readers and creating access barriers. (More in chapter 7 on economics.)

Terminology

We could talk about vigilante OA, infringing OA, piratical OA, or OA without consent. That sort of OA could violate copyrights and deprive royalty-earning authors of royalties against their will.
But we could also talk about vigilante publishing, infringing publishing, piratical publishing, or publishing without consent. Both happen. However, we generally reserve the term “publishing” for lawful publishing, and tack on special adjectives to describe unlawful variations on the theme. Likewise, I’ll reserve the term “open access” for lawful OA that carries the consent of the relevant rightsholder. OA isn’t an attempt to reduce authors’ rights over their work. On the contrary, OA depends on author decisions and requires authors to exercise more rights or control over their work than they are allowed to exercise under traditional publishing contracts. One OA strategy is for authors to retain some of the rights they formerly gave publishers, including the right to authorize OA. Another OA strategy is for publishers to permit more uses than they formerly permitted, including permission for authors to make OA copies of their work. By contrast, traditional journal-publishing contracts demand that authors transfer all rights to publishers, and author rights or control cannot sink lower than that. (See chapters 4 on policies and 6 on copyright.) OA isn’t an attempt to reduce academic freedom. Academic authors remain free to submit their work to the journals or publishers of their choice. Policies requiring OA do so conditionally, for example, for researchers who choose to apply for a certain kind of grant. In addition, these policies generally build in exceptions, waiver options, or both. Since 2008 most university OA policies have been adopted by faculty deeply concerned to preserve and even enhance their prerogatives. (See chapter 4 on OA policies.) OA isn’t an attempt to relax rules against plagiarism. All the public definitions of OA support author attribution, even construed as a “restriction” on users. All the major open licenses require author attribution. 
Moreover, plagiarism is typically punished by the plagiarist’s institution rather than by courts, that is, by social norms rather than by law. Hence, even when attribution is not legally required, plagiarism is still a punishable offense and no OA policy anywhere interferes with those punishments. In any case, if making literature digital and online makes plagiarism easier to commit, then OA makes plagiarism easier to detect. Not all plagiarists are smart, but the smart ones will not steal from OA sources indexed in every search engine. In this sense, OA deters plagiarism. OA isn’t an attempt to punish or undermine conventional publishers. OA is an attempt to advance the interests of research, researchers, and research institutions. The goal is constructive, not destructive. If OA does eventually harm toll-access publishers, it will be in the way that personal computers harmed typewriter manufacturers. The harm was not the goal, but a side effect of developing something better. Moreover, OA doesn’t challenge publishers or publishing per se, just one business model for publishing, and it’s far easier for conventional publishers to adapt to OA than for typewriter manufacturers to adapt to computers. In fact, most toll-access publishers are already adapting, by allowing author-initiated OA, providing some OA themselves, or experimenting with OA. (See section 3.1 on green OA and chapter 8 on casualties.) OA doesn’t require boycotting any kind of literature or publisher. It doesn’t require boycotting toll-access research any more than free online journalism requires boycotting priced online journalism. OA doesn’t require us to strike toll-access literature from our personal reading lists, course syllabi, or libraries. Some scholars who support OA decide to submit new work only to OA journals, or to donate their time as editors or referees only to OA journals, in effect boycotting toll-access journals as authors, editors, and referees. 
But this choice is not forced by the definition of OA, by a commitment to OA, or by any OA policy, and most scholars who support OA continue to work with toll-access journals. In any case, even those scholars who do boycott toll-access journals as authors, editors, or referees don’t boycott them as readers. (Here we needn’t get into the complexity that some toll-access journals effectively create involuntary reader boycotts by pricing their journals out of reach of readers who want access.) OA isn’t primarily about bringing access to lay readers. If anything, the OA movement focuses on bringing access to professional researchers whose careers depend on access. But there’s no need to decide which users are primary and which are secondary. The publishing lobby sometimes argues that the primary beneficiaries of OA are lay readers, perhaps to avoid acknowledging how many professional researchers lack access, or perhaps to set up the patronizing counter-argument that lay people don’t care to read research literature and wouldn’t understand it if they tried. OA is about bringing access to everyone with an internet connection who wants access, regardless of their professions or purposes. There’s no doubt that if we put “professional researchers” and “everyone else” into separate categories, a higher percentage of researchers will want access to research literature, even after taking into account that many already have paid access through their institutions. But it’s far from clear why that would matter, especially when providing OA to all internet users is cheaper and simpler than providing OA to just a subset of worthy internet users. If party-goers in New York and New Jersey can both enjoy the Fourth of July fireworks in New York Harbor, then the sponsors needn’t decide that one group is primary, even if a simple study could show which group is more numerous. 
If this analogy breaks down, it’s because New Jersey residents who can’t see the fireworks gain nothing from New Yorkers who can. But research does offer this double or indirect benefit. When OA research directly benefits many lay readers, so much the better. But when it doesn’t, it still benefits everyone indirectly by benefiting researchers directly. (Also see section 5.5.1 on access for lay readers.) Finally, OA isn’t universal access. Even when we succeed at removing price and permission barriers, four other kinds of access barrier might remain in place:

Filtering and censorship barriers: Many schools, employers, ISPs, and governments want to limit what users can see.

Language barriers: Most online literature is in English, or another single language, and machine translation is still very weak.

Handicap access barriers: Most web sites are not yet as accessible to handicapped users as they should be.

Connectivity barriers: The digital divide keeps billions of people offline, including millions of scholars, and impedes millions of others with slow, flaky, or low-bandwidth internet connections.

Most of us want to remove all four of these barriers. But there’s no reason to save the term open access until we succeed. In the long climb to universal access, removing price and permission barriers is a significant plateau worth recognizing with a special name.
### Introduction
Domain adaptation is a machine learning paradigm that aims at improving generalization performance on a new (target) domain by using a dataset from the original (source) domain. Suppose that, as the source domain dataset, we have a captioning corpus consisting of images of daily life, where each image has captions. Suppose also that we would like to generate captions for images of exotic cuisine, which are rare in the corpus. It is usually very costly to make a new corpus for the target domain, i.e., to take and caption those images. The research question here is how we can leverage the source domain dataset to improve performance on the target domain. As described by Daumé (2007), there are mainly two settings of domain adaptation: fully supervised and semi-supervised. Our focus is the supervised setting, where both the source and target domain datasets are labeled. We would like to use the label information of the source domain to improve performance on the target domain. Recently, Recurrent Neural Networks (RNNs) have been successfully applied to various tasks in the field of natural language processing (NLP), including language modeling BIBREF0, caption generation BIBREF1, and parsing BIBREF2. For neural networks, there are two standard methods for supervised domain adaptation BIBREF3. The first method is fine tuning: we first train the model with the source dataset and then tune it with the target domain dataset BIBREF4, BIBREF5. Since the objective function of neural network training is non-convex, the performance of the trained model can depend on the initialization of the parameters. This is in contrast with convex methods such as Support Vector Machines (SVMs). We expect that the first stage of training gives a good initialization of the parameters, so that the second stage generalizes well even if the target domain dataset is small. The downside of this approach is the lack of an explicit optimization objective for the overall two-stage procedure.
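The two-stage fine-tuning procedure can be made concrete with a toy model. The sketch below is purely illustrative and not the paper's model: a one-parameter logistic classifier stands in for the neural network, the datasets are invented, and `sgd` is ordinary stochastic gradient descent on the cross-entropy loss. Pretraining on a large source dataset supplies the initialization; the same parameters are then tuned on a small target dataset whose decision boundary is shifted.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd(w, b, data, epochs=100, lr=0.5):
    """Logistic-regression SGD on the cross-entropy loss
    (a toy stand-in for neural network training)."""
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            g = p - y            # gradient of cross-entropy w.r.t. the logit
            w -= lr * g * x
            b -= lr * g
    return w, b

def xent(w, b, data):
    """Average cross-entropy loss on a dataset."""
    eps = 1e-12
    return -sum(y * math.log(sigmoid(w * x + b) + eps)
                + (1 - y) * math.log(1.0 - sigmoid(w * x + b) + eps)
                for x, y in data) / len(data)

# Invented data: a large source dataset with decision boundary at 0,
# and a small target dataset whose boundary is shifted to 1.
source = [(x, 1 if x > 0 else 0) for x in [-2, -1, 1, 2] * 25]
target = [(x, 1 if x > 1 else 0) for x in [-1, 0, 2, 3]]

# Stage 1: pretrain on the source domain (provides the initialization).
w0, b0 = sgd(0.0, 0.0, source)
# Stage 2: fine-tune the same parameters on the target domain.
w1, b1 = sgd(w0, b0, target)

print(round(xent(w1, b1, target), 4))  # lower than xent(w0, b0, target)
```

Note that the two stages optimize two different objectives in sequence; no single objective function describes the whole procedure, which is the drawback just mentioned.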
The other method is to design the neural network so that it has two outputs. The first output is trained with the source dataset and the other output is trained with the target dataset, where the input part is shared between the domains. We call this method dual outputs. This type of network architecture has been successfully applied to multi-task learning in NLP, such as part-of-speech tagging and named-entity recognition BIBREF6, BIBREF7. In the NLP community, there has been a large body of previous work on domain adaptation. One of the state-of-the-art methods for supervised domain adaptation is feature augmentation BIBREF8. The central idea of this method is to augment the original features/parameters in order to model the source-specific, target-specific, and general behaviors of the data. However, it is not straightforward to apply it to neural network models in which the cost function has the form of log probabilities. In this paper, we propose a new domain adaptation method for neural networks. We reformulate the method of Daumé (2007) and derive an objective function using the convexity of the loss function. From a high-level perspective, this method shares the idea of feature augmentation. We use redundant parameters for the source, target, and general domains, where the general parameters are tuned to model the common characteristics of the datasets and the source/target parameters are tuned for domain-specific aspects. In the latter part of this paper, we apply our domain adaptation method to a neural captioning model and show performance improvements over other standard methods on several datasets and metrics. In the datasets, the source and target have different word distributions, and thus adaptation of the output parameters is important. We augment the output parameters to facilitate adaptation. Although we use captioning models in the experiments, our method can be applied to any neural network trained with a cross-entropy loss.

### Related Work
There are several recent studies applying domain adaptation methods to deep neural networks. However, few studies have focused on improving the fine tuning and dual outputs methods in the supervised setting. sun2015return have proposed an unsupervised domain adaptation method and applied it to the features from deep neural networks. Their idea is to minimize the domain shift by aligning the second-order statistics of the source and target distributions. In our setting, it is not necessarily true that there is a correspondence between the source and target input distributions, and therefore we cannot expect their method to work well. wen2016multi have proposed a procedure to generate natural language for multiple domains of spoken dialogue systems. They improve the fine tuning method by pre-training with synthesized data. However, the synthesis protocol is only applicable to spoken dialogue systems. In this paper, we focus on domain adaptation methods which can be applied without dataset-specific tricks. yang2016multitask have conducted a series of experiments to investigate the transferability of neural networks for NLP. They compare the performance of two transfer methods called INIT and MULT, which correspond to the fine tuning and dual outputs methods in our terms. They conclude that MULT is slightly better than or comparable to INIT; this is consistent with our experiments shown in section "Experiments". Although they obtain little improvement by transferring the output parameters, we achieve significant improvement by augmenting parameters in the output layers. ### Domain adaptation and language generation
We start with the basic notations and formalization for domain adaptation. Let $\mathcal {X}$ be the set of inputs and $\mathcal {Y}$ be the set of outputs. We have a source domain dataset $D^s$ , which is sampled from some distribution $\mathcal {D}^s$ . Also, we have a target domain dataset $D^t$ , which is sampled from another distribution $\mathcal {D}^t$ . Since we are considering supervised settings, each element of the datasets is an input-output pair $(x,y)$ . The goal of domain adaptation is to learn a function $f : \mathcal {X} \rightarrow \mathcal {Y}$ that models the input-output relation of $D^t$ . We implicitly assume that there is a connection between the source and target distributions, and thus we can leverage the information of the source domain dataset. In the case of image caption generation, the input $x$ is an image (or the feature vector of an image) and the output $y$ is the caption (a sequence of words). In language generation tasks, a sequence of words is generated from an input $x$ . A state-of-the-art model for language generation is an LSTM (Long Short-Term Memory) network initialized by a context vector computed from the input BIBREF1 . An LSTM is a particular form of recurrent neural network, which has three gates and a memory cell. For each time step $t$ , the vectors $c_t$ and $h_t$ are computed from $u_t, c_{t-1}$ and $h_{t-1}$ by the following equations: $
&i = \sigma (W_{ix} u_t + W_{ih} h_{t-1}) \\
&f = \sigma (W_{fx} u_t + W_{fh} h_{t-1}) \\
&o = \sigma (W_{ox} u_t + W_{oh} h_{t-1}) \\
&g = \tanh (W_{gx} u_t + W_{gh} h_{t-1}) \\
&c_t = f \odot c_{t-1} + i \odot g \\
&h_t = o \odot \tanh (c_t),
$ where $\sigma $ is the sigmoid function and $\odot $ is the element-wise product. Note that all the vectors in the equations have the same dimension $n$ , called the cell size. The probability of the output word at the $t$ -th step, $y_t$ , is computed by $$p(y_t|y_1,\ldots ,y_{t-1},x) = {\rm Softmax}(W h_t), $$ (Eq. 1) where $W$ is a matrix of size vocabulary size times $n$ . We call this matrix the parameter of the output layer. The input $u_t$ is given by the word embedding of $y_{t-1}$ . To generate a caption, we first compute the feature vector of the image and feed it into the beginning of the LSTM as $$u_{0} = W_{0} {\rm CNN}(x),$$ (Eq. 2) where $W_0$ is a tunable parameter matrix and ${\rm CNN}$ is a feature extractor usually given by a convolutional neural network. Output words $y_t$ are selected in order and each caption ends with the special symbol <EOS>. The process is illustrated in Figure 1 . Note that the cost function for the generated caption is $
\log p(y|x) = \sum _{t} \log p(y_t|y_1,\ldots ,y_{t-1}, x),
$ where the conditional distributions are given by Eq. ( 1 ). The parameters of the model are optimized to minimize the cost on the training dataset. We also note that there are extensions of the models with attentions BIBREF9 , BIBREF10 , but the forms of the cost functions are the same. ### Domain adaptation for language generation
In this section, we review standard domain adaptation techniques which are applicable to neural language generation. The performance of these methods is compared in the next section. ### Standard and baseline methods
A trivial method of domain adaptation is simply ignoring the source dataset and training the model using only the target dataset. This method is hereafter denoted by TgtOnly. This is a baseline and any meaningful method must beat it. Another trivial method is SrcOnly, where only the source dataset is used for the training. Typically, the source dataset is bigger than that of the target, and this method sometimes works better than TgtOnly. Another method is All, in which the source and target datasets are combined and used for the training. Although this method uses all the data, the training criterion forces the model to perform well on both domains, and therefore the performance on the target domain is not necessarily high. An approach widely used in the neural network community is FineTune. We first train the model with the source dataset and then use it as the initialization for training with the target dataset. The training process is stopped in reference to the development set in order to avoid over-fitting. We could extend this method by adding a regularization term (e.g. $l_2$ regularization) in order not to deviate from the pre-trained parameters. In the latter experiments, however, we do not pursue this direction because we found no performance gain. Note that it is hard to control the scales of the regularization for each part of the neural net because there are many parameters having different roles. Another common approach for neural domain adaptation is Dual. In this method, the output of the network is “dualized”. In other words, we use different parameters $W$ in Eq. ( 1 ) for the source and target domains. The model is trained with the first output for the source dataset and with the second output for the target dataset. The rest of the parameters are shared among the domains. This type of network design is often used for multi-task learning. ### Revisiting the feature augmentation method
Before proceeding to our new method, we describe the feature augmentation method BIBREF8 from our perspective. Here we consider domain adaptation for a binary classification problem. Suppose that we train SVM models for the source and target domains separately. The objective functions have the form of $
\frac{1}{n_s} \sum _{(x,y) \in \mathcal {D}_s}
\max (0, 1 - y(w_s^T \Phi (x))) + \lambda \Vert w_s \Vert ^2 \\
\frac{1}{n_t} \sum _{(x,y) \in \mathcal {D}_t}
\max (0, 1 - y(w_t^T \Phi (x))) + \lambda \Vert w_t \Vert ^2 ,
$ where $\Phi (x)$ is the feature vector and $w_s, w_t$ are the SVM parameters. In the feature augmentation method, the parameters are decomposed as $w_s = \theta _g + \theta _s$ and $w_t = \theta _g + \theta _t$ . The optimization objective differs from the sum of the above functions: $
& \frac{1}{n_s}
\sum _{(x,y) \in \mathcal {D}_s} \max (0, 1 - y(w_s^T \Phi (x))) \\
&+\lambda (\Vert \theta _g \Vert ^2 + \Vert \theta _s \Vert ^2 ) \\
&+ \frac{1}{n_t} \sum _{(x,y) \in \mathcal {D}_t} \max (0, 1 - y(w_t^T \Phi (x))) \\
&+ \lambda (\Vert \theta _g \Vert ^2 + \Vert \theta _t \Vert ^2 ),
$ where the quadratic regularization terms $\Vert \theta _g + \theta _s \Vert ^2$ and $\Vert \theta _g + \theta _t \Vert ^2$ are changed to $\Vert \theta _g \Vert ^2 + \Vert \theta _s \Vert ^2$ and $\Vert \theta _g \Vert ^2 + \Vert \theta _t \Vert ^2$ , respectively. Since the parameters $\theta _g$ are shared, we cannot optimize the problems separately. This change of the objective function can be understood as adding additional regularization terms $
2(\Vert \theta _g \Vert ^2 + \Vert \theta _t \Vert ^2 ) - \Vert \theta _g + \theta _t \Vert ^2, \\
2(\Vert \theta _g \Vert ^2 + \Vert \theta _s \Vert ^2 ) - \Vert \theta _g + \theta _s \Vert ^2.
$ We can easily see that those are equal to $\Vert \theta _g - \theta _t \Vert ^2$ and $\Vert \theta _g - \theta _s \Vert ^2$ , respectively, and thus this additional regularization enforces that $\theta _g$ and $\theta _t$ (and also $\theta _g$ and $\theta _s$ ) do not drift far apart. This is how the feature augmentation method shares domain information between the parameters $w_s$ and $w_t$ . ### Proposed method
Although the above formalization is for an SVM, which has the quadratic cost of parameters, we can apply the idea to the log probability case. In the case of RNN language generation, the loss function of each output is a cross entropy applied to the softmax output $$-\log & p_s(y|y_1, \ldots , y_{t-1}, x) \nonumber \\
&= -w_{s,y}^T h + \log Z(w_s;h), $$ (Eq. 8) where $Z$ is the partition function and $h$ is the hidden state of the LSTM computed by $y_0, \ldots , y_{t-1}$ and $x$ . Again we decompose the word output parameter as $w_s = \theta _g + \theta _s$ . Since $\log Z$ is convex with respect to $w_s$ , we can easily show that the Eq. ( 8 ) is bounded above by $
-&\theta _{g,y}^T h + \frac{1}{2} \log Z(2 \theta _g;h) \\
&-\theta _{s,y}^T h + \frac{1}{2} \log Z(2 \theta _s;h).
$ The equality holds if and only if $\theta _g = \theta _s$ . Therefore, optimizing this upper-bound effectively enforces the parameters to be close as well as reducing the cost. The exact same story can be applied to the target parameter $w_t = \theta _g + \theta _t$ . We combine the source and target cost functions and optimize the sum of the above upper-bounds. Then the derived objective function is $
\frac{1}{n_s} \sum _{(x,y) \in \mathcal {D}_s}
[
-\theta _{g,y}^T h& + \frac{1}{2} \log Z(2 \theta _g;h) \\
&-\theta _{s,y}^T h + \frac{1}{2} \log Z(2 \theta _s;h)
]
\\
+ \frac{1}{n_t} \sum _{(x,y) \in \mathcal {D}_t}
[
-\theta _{g,y}^T h &+ \frac{1}{2} \log Z(2 \theta _g;h) \\
& -\theta _{t,y}^T h + \frac{1}{2} \log Z(2 \theta _t;h)
].
$ If we work with the sum of the source and target versions of Eq. ( 8 ), the method is actually the same as Dual because the parameters $\theta _g$ are completely redundant. The difference between this objective and the proposed upper bound works as a regularization term, which results in good generalization performance. Although our formulation has a single objective, there are three types of cross-entropy loss terms given by $\theta _g$ , $\theta _s$ and $\theta _t$ . We denote them by $\ell (\theta _g), \ell (\theta _s)$ and $\ell (\theta _t)$ , respectively. For the source data, the sum of the general and source loss terms is optimized, and for the target dataset the sum of the general and target loss terms is optimized. The proposed algorithm is summarized in Algorithm "Proposed method" . Note that $\theta _h$ denotes the parameters of the LSTM except for the output part. In one epoch of the training, we use all data once. We can combine any parameter update method for neural network training, such as Adam BIBREF11 .

Algorithm: Proposed Method

- While training continues:
  - Select a minibatch of data from the source or target dataset.
  - If the minibatch is from the source dataset, optimize $\ell (\theta _g) + \ell (\theta _s)$ with respect to $\theta _g, \theta _s, \theta _h$ for the minibatch.
  - Otherwise, optimize $\ell (\theta _g) + \ell (\theta _t)$ with respect to $\theta _g, \theta _t, \theta _h$ for the minibatch.
  - If the development error increases, break.
- Compute $w_t = \theta _g + \theta _t$ and $w_s = \theta _g + \theta _s$ . Use these parameters as the output parameters for each domain.

### Experiments
We have conducted domain adaptation experiments on the following three datasets. The first experiment focuses on a situation where domain adaptation is useful. The second experiment shows the benefit of domain adaptation in both directions: from source to target and from target to source. The third experiment shows an improvement in another metric. Although our method is applicable to any neural network with a cross-entropy loss, all the experiments use caption generation models because captioning is one of the most successful neural network applications in NLP. ### Adaptation to food domain captioning
This experiment highlights a typical scenario in which domain adaptation is useful. Suppose that we have a large dataset of captioned images, which are taken from daily life, but we would like to generate high-quality captions for more specialized domain images such as minor sports and exotic food. However, captioned images for those domains are quite limited due to the annotation cost. We use domain adaptation methods to improve the captions of the target domain. To simulate the scenario, we split the Microsoft COCO dataset into food and non-food domain datasets. The MS COCO dataset contains approximately 80K images for training and 40K images for validation; each image has 5 captions BIBREF12 . The dataset contains images of diverse categories, including animals, indoor scenes, sports, and foods. We selected the “food category” data by scoring the captions according to how closely they are related to the food category. The score is computed based on wordnet similarities BIBREF13 . The training and validation datasets are split by the score with the same threshold. Consequently, the food dataset has 3,806 images for training and 1,775 for validation. The non-food dataset has 78,976 images for training and 38,749 for validation. The selected pictures from the food domain are typically a close-up of foods or people eating some foods. Table 1 shows some captions from the food and non-food domain datasets. Table 2 shows the top twenty frequent words in the two datasets except for the stop words. We observe that the frequent words are largely different, but still there are some words common to both datasets. To model the image captioning, we use LSTMs as described in the previous section. The image features are computed by the trained GoogLeNet and all the LSTMs have a single layer with 300 hidden units BIBREF14 . We use a standard optimization method, Adam BIBREF11 , with hyperparameters $\alpha =0.001$ , $\beta _1=0.9$ and $\beta _2=0.999$ .
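As a concrete illustration of the objective used by the proposed method, the following numpy sketch (not the authors' code; the vocabulary size, cell size, and random inputs are toy values) checks the two key properties of the derived upper bound: it equals the true cross-entropy exactly when $\theta_g = \theta_s$, and it never falls below it, by convexity of $\log Z$.

```python
import numpy as np

rng = np.random.default_rng(1)
V, n = 7, 4                        # toy vocabulary size and cell size

def log_Z(W, h):
    """log partition function of Softmax(W h), computed stably."""
    z = W @ h
    m = z.max()
    return m + np.log(np.exp(z - m).sum())

def bound_loss(theta_g, theta_d, h, y):
    """Upper bound on the cross-entropy of w = theta_g + theta_d."""
    return (-theta_g[y] @ h + 0.5 * log_Z(2 * theta_g, h)
            - theta_d[y] @ h + 0.5 * log_Z(2 * theta_d, h))

def true_loss(theta_g, theta_d, h, y):
    """Ordinary cross-entropy with the combined output parameter."""
    w = theta_g + theta_d
    return -w[y] @ h + log_Z(w, h)

theta_g = rng.normal(size=(V, n))
theta_s = rng.normal(size=(V, n))
h = rng.normal(size=n)

# Tight exactly when the general and domain parameters coincide ...
tight = bound_loss(theta_g, theta_g, h, 0) - true_loss(theta_g, theta_g, h, 0)
# ... and otherwise an upper bound (gap >= 0 by convexity of log Z).
gap = bound_loss(theta_g, theta_s, h, 0) - true_loss(theta_g, theta_s, h, 0)
```

Minimizing the bound therefore both reduces the true loss and pulls $\theta_g$ and the domain-specific parameters toward each other, which is the regularization effect discussed above.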
We stop the training based on the loss on the development set. After the training we generate captions by beam search, where the size of the beam is 5. These settings are the same in the latter experiments. We compare the proposed method with other baseline methods. For all the methods, we use Adam with the same hyperparameters. In FineTune, we did not freeze any parameters during the target training. In Dual, all samples in the source and target datasets are weighted equally. We evaluated the performance of the domain adaptation methods by the quality of the generated captions. We used BLEU, METEOR and CIDEr scores for the evaluation. The results are summarized in Table 3 . We see that the proposed method improves in most of the metrics. The baseline methods SrcOnly and TgtOnly are worse than other methods, because they use limited data for the training. Note that the CIDEr scores correlate with human evaluations better than BLEU and METEOR scores BIBREF15 . Generated captions for sample images are shown in Table 4 . In the first example, All fails to identify the chocolate cake because there are birds in the source dataset which somehow look similar to chocolate cake. We argue that Proposed learns birds by the source parameters and chocolate cakes by the target parameters, and thus succeeded in generating appropriate captions. ### Adaptation between MS COCO and Flickr30K
In this experiment, we explore the benefit of adaptation from both sides of the domains. Flickr30K is another captioning dataset, consisting of 30K images, where each image has five captions BIBREF16 . Although the formats of the datasets are almost the same, the model trained on the MS COCO dataset does not work well on the Flickr30K dataset and vice versa. The word distributions of the captions are considerably different. If we ignore words with fewer than 30 counts, MS COCO has 3,655 words and Flickr30K has 2,732 words; only 1,486 words are shared. Also, the average lengths of captions are different. The average length of captions in Flickr30K is 12.3 while that of MS COCO is 10.5. The first result is the domain adaptation from MS COCO to Flickr30K, summarized in Table 5 . Again, we observe that the proposed method achieves the best score among the other methods. The difference between All and FineTune is bigger than in the previous setting because the two datasets have different captions even for similar images. The scores of FineTune and Dual are at almost the same level. The second result is the domain adaptation from Flickr30K to MS COCO, shown in Table 6 . This may not be a typical situation because the number of samples in the target domain is larger than that of the source domain. The SrcOnly model is trained only with Flickr30K and tested on the MS COCO dataset. We observe that FineTune gives little benefit over TgtOnly, which implies that the difference of the initial parameters has little effect in this case. Also, Dual gives little benefit over TgtOnly, meaning that the parameter sharing except for the output layer is not important in this case. Note that the CIDEr score of Proposed is slightly improved. Figure 2 shows the comparison of FineTune and Proposed, changing the number of Flickr30K samples to 1600, 6400 and 30K. We observe that FineTune works relatively well when the target domain dataset is small. ### Answer sentence selection
In this experiment, we use the captioning model as an affinity measure of images and sentences. The TOEIC part 1 test consists of four-choice questions for English learners. The correct choice is the sentence that best describes the shown image. Questions are not easy because there are confusing keywords in the wrong choices. An example of the question is shown in Table 7 . We downloaded 610 questions from http://www.english-test.net/toeic/listening/. Our approach here is to select the most probable choice given the image by captioning models. We train captioning models with the images and correct answers from the training set. Since the TOEIC dataset is small, domain adaptation can give a large benefit. We compared the domain adaptation methods by the percentage of correct answers. The source dataset is 40K samples from MS COCO and the target dataset is the TOEIC dataset. We split the TOEIC dataset into 400 samples for training and 210 samples for testing. The percentages of correct answers for each method are summarized in Table 8 . Since the questions have four choices, all methods should perform better than 25%. TgtOnly is close to this baseline because the model is trained with only 400 samples. As in the previous experiments, FineTune and Dual are better than All, and Proposed is better than the other methods. ### Conclusion and Future Work
We have proposed a new method for supervised domain adaptation of neural networks. On captioning datasets, we have shown that the method outperforms other standard adaptation methods applicable to neural networks. The proposed method only decomposes the output word parameters, where other parameters, such as word embeddings, are completely shared across the domains. Augmentation of parameters in the other parts of the network would be an interesting direction of future work.

Figure 1: A schematic view of the LSTM captioning model. The first input to the LSTM is an image feature. Then a sentence “a piece of chocolate cake that is on a glass plate” is generated. The generation process ends with the EOS symbol.
Table 1: Examples of annotated captions from food domain dataset (top) and non-food dataset (bottom).
Table 3: Results of the domain adaptation to the food dataset. The evaluation metrics are BLEU, METEOR and CIDEr. The proposed method is the best in most of the metrics.
Table 4: Examples of generated captions for food dataset images.
Table 5: Domain adaptation from MSCOCO to Flickr30K dataset.
Table 6: Domain adaptation from Flickr30K to MSCOCO dataset.
Figure 2: Comparison of CIDEr score of FINETUNE and PROPOSED.
Table 7: A sample question from TOEIC part 1 test. The correct answer is (C).
Table 8: Domain adaptation to TOEIC dataset.
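The answer-selection step used in the TOEIC experiment (score each choice by the caption model's log-probability and return the argmax) can be sketched as follows. The unigram word model and the example question below are hypothetical stand-ins for the trained LSTM and the real test data.

```python
import math

def log_prob(caption, word_probs, eps=1e-6):
    """Toy stand-in for log p(y|x): sum of per-word log-probabilities."""
    return sum(math.log(word_probs.get(w, eps)) for w in caption.split())

def select_answer(choices, word_probs):
    """Return the index of the choice the model scores highest."""
    scores = [log_prob(c, word_probs) for c in choices]
    return max(range(len(choices)), key=scores.__getitem__)

# Hypothetical image model: the picture shows a man reading a newspaper,
# so words related to that scene get high probability.
word_probs = {"a": 0.2, "man": 0.15, "is": 0.2, "reading": 0.1,
              "newspaper": 0.1, "sitting": 0.05}
choices = ["a man is reading a newspaper",
           "a man is painting a fence",
           "a dog is running",
           "two women are cooking"]
best = select_answer(choices, word_probs)
```

Wrong choices contain out-of-scene keywords, which receive only the floor probability `eps` and drag their scores down, so the scene-matching sentence wins.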
|
the food dataset has 3,806 images for training
|
Of the four films reviewed in the passage, which one has received the LEAST positive review?
A. Fight Club
B. Mumford
C. Boys Don't Cry
D. Happy Texas
|
Boys Do Bleed Fight Club is silly stuff, sensationalism that mistakes itself for satire, but it's also a brash and transporting piece of moviemaking, like Raging Bull on acid. The film opens with--literally--a surge of adrenalin, which travels through the bloodstream and into the brain of its protagonist, Jack (Edward Norton), who's viewed, as the camera pulls out of his insides, with a gun stuck in his mouth. How'd he get into this pickle? He's going to tell you, breezily, and the director, David Fincher, is going to illustrate his narrative--violently. Fincher ( Seven , 1995; The Game , 1997) is out to bombard you with so much feverish imagery that you have no choice but to succumb to the movie's reeling, punch-drunk worldview. By the end, you might feel as if you, too, have a mouthful of blood. Not to mention a hole in your head. Fight Club careers from one resonant satirical idea to the next without quite deciding whether its characters are full of crap or are Gen X prophets. It always gives you a rush, though. At first, it goofs on the absurd feminization of an absurdly macho culture. An increasingly desperate insomniac, Jack finds relief (and release) only at meetings for the terminally ill. At a testicular cancer group, he's enfolded in the ample arms of Bob (the singer Meat Loaf Aday), a former bodybuilder who ruined his health with steroids and now has "bitch tits." Jack and Bob subscribe to a new form of male bonding: They cling to each other and sob. But Jack's idyll is rudely disrupted by--wouldn't you know it?--a woman. A dark-eyed, sepulchral head case named Marla Singer (Helena Bonham Carter) begins showing up at all the same disparate meetings for essentially the same voyeuristic ends, and the presence of this "tourist" makes it impossible for Jack to emote. Jack finds another outlet, though. 
On a plane, he meets Tyler Durden (Brad Pitt), a cryptic hipster with a penchant for subversive acts both large (he makes high-priced soaps from liposuctioned human fat) and small (he splices frames from porn flicks into kiddie movies). When Jack's apartment mysteriously explodes--along with his carefully chosen IKEA furniture--he moves into Tyler's squalid warehouse and helps to found a new religion: Fight Club, in which young males gather after hours in the basement of a nightclub to pound one another (and be pounded) to a bloody pulp. That last parenthesis isn't so parenthetical. In some ways, it's the longing to be beaten into oblivion that's the strongest. "Self-improvement," explains Tyler, "is masturbation"; self-destruction is the new way. Tyler's manifesto calls for an end to consumerism ("Things you own end up owning you"), and since society is going down ("Martha Stewart is polishing brass on the Titanic "), the only creative outlet left is annihilation. "It's only after we've lost everything that we're free to do anything," he says. Fincher and his screenwriter, Jim Uhls, seem to think they've broken new ground in Fight Club , that their metaphor for our discontents hits harder than anyone else's. Certainly it produces more bloody splatter. But 20 years ago, the same impulse was called punk and, as Greil Marcus documents in Lipstick Traces , it was other things before that. Yes, the mixture of Johnny Rotten, Jake La Motta, and Jesus is unique; and the Faludi-esque emasculation themes are more explicit. But there's something deeply movie-ish about the whole conceit, as if the novelist and director were weaned on Martin Scorsese pictures and never stopped dreaming of recapturing that first masochistic rush. 
The novel, the first by Chuck Palahniuk (the surname sounds like Eskimo for "palooka"--which somehow fits), walks a line between the straight and ironic--it isn't always clear if its glib sociological pronouncements are meant to be taken straight or as the ravings of a delusional mama's boy. But onscreen, when Pitt announces to the assembled fighters that they are the "middle children of history" with "no purpose and no place"--emasculated on one hand by the lack of a unifying crisis (a world war or depression) and on the other by lack of material wealth as promised by television--he seems meant to be intoning gospel. "We are a generation of men raised by women," Tyler announces, and adds, "If our fathers bail, what does that tell you about God?" (I give up: What?) Fight Club could use a few different perspectives: a woman's, obviously, but also an African-American's--someone who'd have a different take on the "healing" properties of violence. It's also unclear just what has emasculated Jack: Is it that he's a materialist or that the materials themselves (i.e., IKEA's lacquered particle boards) don't measure up to his fantasies of opulence? Is he motivated by spiritual hunger or envy? Tyler's subsequent idea of confining his group's mayhem to franchise coffee bars and corporate-subsidized art is a witty one--it's like a parody of neo-Nazism as re-enacted by yuppies. It might have been a howl if performed by, say, the troupe of artsy German nihilists in Joel and Ethan Coen's The Big Lebowski (1998). Somehow Brad Pitt doesn't have the same piquancy. Actually, Pitt isn't as terrible as usual: He's playing not a character but a conceit, and he can bask in his movie-idol arrogance, which seems to be the most authentic emotion he has. But the film belongs to Norton. As a ferocious skinhead in last year's American History X, Norton was taut and ropy, his long torso curled into a sneer; here, he's skinny and wilting, a quivering pansy.
Even when he fights he doesn't transform--he's a raging wimp. The performance is marvelous, and it makes poetic sense in light of the movie's climactic twist. But that twist will annoy more people than it will delight, if only because it shifts the drama from the realm of the sociological to that of the psychoanalytic. The finale, scored with the Pixies' great "Where Is My Mind?," comes off facetiously--as if Fincher is throwing the movie away. Until then, however, he has done a fabulous job of keeping it spinning. The most thrilling thing about Fight Club isn't what it says but how Uhls and Fincher pull you into its narrator's head and simulate his adrenalin rushes. A veteran of rock videos, Fincher is one of those filmmakers who helps make the case that MTV--along with digital editing--has transformed cinema for better as well as worse. The syntax has become more intricate. Voice-over narration, once considered uncinematic, is back in style, along with novelistic asides, digressions, fantasies, and flashbacks. To make a point, you can jazzily interject anything--even, as in Three Kings, a shot of a bullet slicing through internal organs. Films like Fight Club might not gel, but they have a breathless, free-associational quality that points to new possibilities in storytelling. Or maybe old possibilities: The language of movies hasn't seemed this unfettered since the pre-sound days of Sergei Eisenstein and Abel Gance. An actress named Hilary Swank gives one of the most rapturous performances I've ever seen as the cross-dressing Brandon Teena (a.k.a. Teena Brandon) in Kimberly Peirce's stark and astonishingly beautiful debut feature, Boys Don't Cry. The movie opens with Teena being shorn of her hated female tresses and becoming "Brandon," who swaggers around in tight jeans and leather jackets. The joy is in watching the actor transform, and I don't just mean Swank: I mean Teena Brandon playing Brandon Teena--the role she has been longing for her whole life.
In a redneck Nebraska bar, Brandon throws back a shot of whiskey and the gesture--a macho cliché--becomes an act of self-discovery. Every gesture does. "You're gonna have a shiner in the morning," someone tells Brandon after a barroom brawl, and he takes the news with a glee that's almost mystical: "I am????? Oh, shit!!!" he cries, grinning. That might be my favorite moment in the picture, because Swank's ecstatic expression carries us through the next hour, as Brandon acts out his urban-cowboy fantasies--"surfing" from the bumper of a pickup truck, rolling in the mud, and straddling a barstool with one hand on a brewski and the other on the shoulder of a gorgeous babe. That the people with whom Brandon feels most at home would kill him if they knew his true gender is the movie's most tragic irony--and the one that lifts it out of the realm of gay-martyr hagiography and into something more complex and irreducible: a meditation on the irrelevance of gender. Peirce's triumph is to make these scenes at once exuberant (occasionally hilarious) and foreboding, so that all the seeds of Brandon's killing are right there on the screen. John (Peter Sarsgaard), one of his future rapists and murderers, calls him "little buddy" and seems almost attracted to him; Sarsgaard's performance is a finely chiseled study of how unresolved emotion can suddenly resolve itself into violence. Though harrowing, the second half of Boys Don't Cry isn't as great as the first. The early scenes evoke elation and dread simultaneously, the later ones just dread; and the last half-hour is unrelieved torture. What keeps the movie tantalizing is Chloë Sevigny's Lana, who might or might not know that Brandon is a girl but who's entranced by him anyway. With her lank hair, hooded eyes, and air of sleepy sensuality, Sevigny--maybe even more than Swank--embodies the mystery of sex that's at the core of Boys Don't Cry . Everything she does is deliberate, ironic, slightly unreadable--and unyielding. 
She could be saying, "I'm in this world but not of it. ... You'd never dream what's underneath." In brief: If a friend tells you you'll love Happy Texas, rethink the friendship. This clunky mistaken-identity comedy about escaped cons who impersonate gay pageant directors doesn't even make sense on its own low farcical terms; it's mostly one lame homo joke after another. The only bright spot is Steve Zahn, who could be the offspring of Michael J. Fox and Crispin Glover if they'd mated on the set of Back to the Future (1985). It's hard to make a serious case for Lawrence Kasdan's Mumford, which has apparently flopped but which you can still catch at second- and third-tier theaters. It looks peculiar--a Norman Rockwell painting with noir shadows. And its tale of a small town healed by a depressive (Loren Dean) posing as a psychologist is full of doddering misconceptions about psychotherapy. I almost don't know why I loved it, but the relaxed pacing and the witty turns by Martin Short, Ted Danson, David Paymer, and Mary McDonnell surely helped. I can't decide if the weirdly affectless Dean is inspired or inept, but my indecision suggests why he works in the role. There's no doubt, however, about his even more depressive love object, Hope Davis, who possesses the cinema's most expressive honking-nasal voice and who slumps through the movie like the world's most lyrical anti-ballerina. Even her puffy cheeks are eloquent: They made me think of Mumford as the home of the psychological mumps.
|
D. Happy Texas
|
What was the maximum workload Mr. Romero achieved during the stress echocardiography?
Choose the correct answer from the following options:
A. 50 watts
B. 100 watts
C. 150 watts
D. 175 watts
E. 200 watts
|
### Patient Report 0
**Dear colleague, **
We would like to report to you about our patient, Mr. David Romero, born
on 02/16/1942, who was under our inpatient care from 03/25/2016 to
03/30/2016.
**Diagnoses:**
- Suspected myocarditis
- Uncomplicated biopsy, pending results
- LifeVest has been adjusted
- Left ventricular ejection fraction of 28%
- Chronic hepatitis C
- Status post hepatitis A
- Post-antiviral therapy
- Exclusion of relevant coronary artery disease
**Medical History:** The patient was admitted with suspected myocarditis
due to a significantly impaired pump function noticed during outpatient
visits. Anamnestically, the patient reported experiencing fatigue and
exertional dyspnea since mid-December, with no recollection of a
preceding infection. Antiviral therapy with Interferon/Ribavirin for
chronic Hepatitis C had been ongoing since November. An outpatient
evaluation had excluded relevant coronary artery disease.
**Current Presentation:** Suspected inflammatory/dilated cardiomyopathy,
Indication for biopsy
**Physical Examination:** The patient is fully oriented with good
general condition and normal mental state. Dry skin and mucous
membranes, normal breathing, no cyanosis. Cranial nerves are grossly
intact, no focal neurological deficits, good muscle strength and
sensitivity all around. Clear, rhythmic heart sounds, with a 2/6
systolic murmur at the apex. Lungs are evenly ventilated without rales.
Resonant percussion. Soft and supple abdomen without guarding, spleen
and liver not palpable. Normal bowel sounds.
**Coronary Angiography**: Globally significantly impaired left
ventricular function (EF: 28%)
[Myocardial biopsy:]{.underline} Uncomplicated retrieval of LV
endomyocardial biopsies
[Recommendation]{.underline}: A conservative medical approach is
recommended, and further therapeutic decisions will depend on the
histological, immunohistological, and molecular biological examination
results of the now-retrieved myocardial biopsies.
[Procedure]{.underline}: Femoral closure system is applied, 6 hours of
bed rest, administration of 100 mg/day of Aspirin for 4 weeks following
left ventricular heart biopsy.
**Echocardiography before Heart Catheterization**:
Performed in sinus rhythm. Satisfactory ultrasound condition.
[Findings]{.underline}: Moderately dilated left ventricle (LVDd 64mm).
Markedly reduced systolic LV function (EF 28%). Global longitudinal
strain (2D speckle tracking): -8.6%.
Regional wall motion abnormalities: despite global hypokinesia, the
posterolateral wall (basal) contracts best. Diastolic dysfunction Grade
1 (LV relaxation disorder) (E/A 0.7) (E/E\' mean 13.8). No LV
hypertrophy. Morphologically age-appropriate heart valves. Moderately
dilated left atrium (LA Vol. 71ml). Mild mitral valve insufficiency
(Grade 1 on a 3-grade scale). Normal-sized right ventricle. Moderately
reduced RV function Normal-sized right atrium. Minimal tricuspid valve
insufficiency (Grade 0-1 on a 3-grade scale). Systolic pulmonary artery
pressure in the normal range (systolic PAP 27mmHg).
No thrombus detected. Minimal pericardial effusion, circular, maximum
2mm, no hemodynamic relevance.
**Echocardiography after Heart Catheterization:**
[Indication]{.underline}: Follow-up on pericardial effusion.
[Examination]{.underline}: TTE at rest, including duplex and
quantitative determination of parameters. [Echocardiographic
Finding:]{.underline} Regarding pericardial effusion, the status is the
same. Circular effusion, maximum 2mm.
**ECG after Heart Catheterization:**
76/min, sinus rhythm, complete left bundle branch block.
**Summary:** On 03/26/2016, biopsy and left heart catheterization were
successfully performed without complications. Here, too, the patient
exhibited a significantly impaired pump function, currently at 28%.
**Therapy and Progression:**
Throughout the inpatient stay, the patient remained cardiorespiratorily
stable at all times. Malignant arrhythmias were ruled out via telemetry.
After the intervention, echocardiography showed no pericardial effusion.
The results of the endomyocardial biopsies are still pending. An
appointment for results discussion and evaluation of further procedures
at our facility should be scheduled in 3 weeks. Following the biopsy,
Aspirin 100 as specified should be given for 4 weeks. We intensified the
ongoing heart failure therapy and added Spironolactone to the
medication, recommending further escalation based on hemodynamic
tolerability.
**Current Recommendations:** Close cardiological follow-up examinations,
electrolyte monitoring, and echocardiography are advised. Depending on
the left ventricular ejection fraction\'s course, the implantation of an
ICD or ICD/CRT system should be considered after 3 months. On the day of
discharge, we initiated the adjustment of a Life Vest, allowing the
patient to return home in good general condition.
**Medication upon Discharge: **
**Medication ** **Dosage ** **Frequency**
---------------------------- ------------- ---------------
Aspirin 100 mg 1-0-0
Ramipril (Altace) 2.5 mg 1-0-1
Carvedilol (Coreg) 12.5 mg 1-0-1
Torasemide (Torem) 5 mg 1-0-0
Spironolactone (Aldactone) 25 mg 1-0-0
L-Thyroxine (Synthroid) 50 µg 1-0-0
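The frequency column above uses the morning-noon-evening notation (e.g. 1-0-1 means one unit in the morning and one in the evening). As an illustration only — the helper below is not part of the report — the total daily dose follows from summing the schedule:

```python
# Illustrative sketch: compute total daily dose from the
# morning-noon-evening schedule notation used in the tables above.
def daily_dose(dose_mg: float, schedule: str) -> float:
    """E.g. daily_dose(2.5, "1-0-1") -> 5.0 (mg/day)."""
    return dose_mg * sum(float(part) for part in schedule.split("-"))

if __name__ == "__main__":
    print(daily_dose(2.5, "1-0-1"))    # Ramipril: 5.0 mg/day
    print(daily_dose(12.5, "1-0-1"))   # Carvedilol: 25.0 mg/day
    print(daily_dose(25, "0.5-0-0.5"))  # halved tablets also work: 25.0 mg/day
```

The notation generalizes to half tablets (0.5) and to a fourth night-time position where used.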
**Lab results upon Discharge:**
**Parameter** **Results** **Reference Range**
------------------------ ------------- ---------------------
Absolute Erythroblasts 0.01/nL \< 0.01/nL
Sodium 134 mEq/L 136-145 mEq/L
Potassium 4.5 mEq/L 3.5-4.5 mEq/L
Creatinine (Jaffé) 1.25 mg/dL 0.70-1.20 mg/dL
Urea 50 mg/dL 17-48 mg/dL
Total Bilirubin 1.9 mg/dL \< 1.20 mg/dL
CRP 4.1 mg/L \< 5.0 mg/L
Troponin-T 78 ng/L \< 14 ng/L
ALT 67 U/L \< 41 U/L
AST 78 U/L \< 50 U/L
Alkaline Phosphatase 151 U/L 40-130 U/L
gamma-GT 200 U/L 8-61 U/L
Free Triiodothyronine 2.3 ng/L 2.00-4.40 ng/L
Free Thyroxine 14.2 ng/L 9.30-17.00 ng/L
TSH 4.1 mU/L 0.27-4.20 mU/L
Hemoglobin 11.6 g/dL 13.5-17.0 g/dL
Hematocrit 34.5% 39.5-50.5%
Erythrocytes 3.7 /pL 4.3-5.8/pL
Leukocytes 9.56/nL 3.90-10.50/nL
MCV 92.5 fL 80.0-99.0 fL
MCH 31.1 pg 27.0-33.5 pg
MCHC 33.6 g/dL 31.5-36.0 g/dL
MPV 8.9 fL 7.0-12.0 fL
RDW-CV 14.0% 11.5-15.0%
Quick 89% 78-123%
INR 1.09 0.90-1.25
PTT Actin-FS 25.3 sec. 22.0-29.0 sec.
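Several discharge values lie outside their reference ranges (sodium, creatinine, urea, total bilirubin, troponin-T, ALT, AST, alkaline phosphatase, gamma-GT, hemoglobin, hematocrit, erythrocytes). A minimal sketch — a hypothetical helper, not part of the report — that flags a value against a reference interval, where a bound of `None` means the range is one-sided (e.g. "\< 14 ng/L"):

```python
# Illustrative sketch: flag lab values outside a reference interval.
# A bound of None means no limit on that side (one-sided range).
def flag(value, low=None, high=None):
    if low is not None and value < low:
        return "low"
    if high is not None and value > high:
        return "high"
    return "normal"

labs = {
    "Sodium (mEq/L)":    (134,  136,  145),
    "Troponin-T (ng/L)": (78,   None, 14),
    "Hemoglobin (g/dL)": (11.6, 13.5, 17.0),
    "Potassium (mEq/L)": (4.5,  3.5,  4.5),
}
for name, (value, low, high) in labs.items():
    print(name, flag(value, low, high))
```

By this rule a value exactly on a bound (here potassium at 4.5 mEq/L) counts as normal, matching how the report treats it.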
### Patient Report 1
**Dear colleague, **
We are reporting on the pending findings of the myocardial biopsies
taken from Mr. David Romero, born on 02/16/1942 on 03/26/2016 due to the
deterioration of LV function from 40% to 28% after interferon therapy
for HCV infection.
**Diagnoses:**
- Suspected myocarditis
- LifeVest
- Left ventricular ejection fraction of 28%
- Chronic hepatitis C
- Status post hepatitis A
- Post-antiviral therapy
- Exclusion of relevant coronary artery disease
**Current Medication:**
**Medication ** **Dosage ** **Frequency**
---------------------------- ------------- ---------------
Aspirin 100 mg 1-0-0
Ramipril (Altace) 2.5 mg 1-0-1
Carvedilol (Coreg) 12.5 mg 1-0-1
Torasemide (Torem) 5 mg 1-0-0
Spironolactone (Aldactone) 25 mg 1-0-0
L-Thyroxine (Synthroid) 50 µg 1-0-0
**Myocardial Biopsy on 03/26/2016:**
[Molecular Biology:]{.underline}
PCR examinations performed under the question of myocardial infection
with cardiotropic pathogens yielded a positive detection of HCV-specific
RNA in myocardial tissue without quantification possibility
(methodically determined). Otherwise, there was no evidence of
myocardial infection with enteroviruses, adenoviruses, Epstein-Barr
virus, Human Herpes Virus Type 6 A/B, or Erythrovirus genotypes 1/2 in
the myocardium.
[Assessment]{.underline}: Positive HCV-mRNA detection in myocardial
tissue. This positive test result does not unequivocally prove an
infection of myocardial cells, as contamination of the tissue sample
with HCV-infected peripheral blood cells cannot be ruled out in chronic
hepatitis.
**Histology and Immunohistochemistry**:
Unremarkable endocardium, normal cell content of the interstitium with
only isolated lymphocytes and histiocytes in the histologically examined
samples. Quantitatively, immunohistochemically examined native
preparations showed borderline high CD3-positive lymphocytes with a
diffuse distribution pattern at 10.2 cells/mm2. No increased
perforin-positive cytotoxic T cells. The expression of cell adhesion
molecules is discreetly elevated. Otherwise, only slight perivascular
but no interstitial fibrosis. Cardiomyocytes are properly arranged and
slightly hypertrophied (average diameter around 23 µm), the surrounding
capillaries are unremarkable. No evidence of acute
inflammation-associated myocardial cell necrosis (no active myocarditis)
and no interstitial scars from previous myocyte loss. No lipomatosis.
[Assessment:]{.underline} Based on the myocardial biopsy findings, there
is positive detection of HCV-RNA in the myocardial tissue samples, with
the possibility of tissue contamination with HCV-infected peripheral
blood cells. Significant myocardial inflammatory reaction cannot be
documented histologically and immunohistochemically. In the endocardial
samples, apart from mild hypertrophy of properly arranged
cardiomyocytes, there are no significant signs of myocardial damage
(interstitial fibrosis or scars from previous myocyte loss). Therefore,
the present findings do not indicate the need for specific further
antiviral or anti-inflammatory therapy, and the existing heart failure
medication can be continued unchanged. If LV function impairment
persists for an extended period, there is an indication for
antiarrhythmic protection of the patient using an ICD.
### Patient Report 2
**Dear colleague, **
We thank you for referring your patient Mr. David Romero, born on
02/16/1942, to us for echocardiographic follow-up on 05/04/2016.
**Diagnoses:**
- Dilatated cardiomyopathy
- LifeVest
- Left ventricular ejection fraction of 28%
- Chronic Hepatitis C
- Status post Hepatitis A
- Post-antiviral therapy
- Exclusion of relevant coronary artery disease
- Type 2 diabetes mellitus
- Hypothyroidism
**Current Medication:**
**Medication ** **Dosage ** **Frequency**
---------------------------- ------------- ---------------
Aspirin 100 mg 1-0-0
Ramipril (Altace) 2.5 mg 1-0-1
Carvedilol (Coreg) 12.5 mg 1-0-1
Torem (Torasemide) 5 mg 1-0-0
Spironolactone (Aldactone) 25 mg 1-0-0
L-Thyroxine (Synthroid) 50 µg 1-0-0
**Physical Examination:** The patient is fully oriented with good
general condition and normal mental state. Dry skin and mucous
membranes, normal breathing, no cyanosis. Cranial nerves are grossly
intact, no focal neurological deficits, good muscle strength and
sensitivity all around. Clear, rhythmic heart sounds, with a 2/6
systolic murmur at the apex. Lungs are evenly ventilated without rales.
Resonant percussion. Soft and supple abdomen without pressure pain,
spleen and liver not palpable. Normal bowel sounds.
**Echocardiography: M-mode and 2-dimensional.**
The left ventricle measures approximately 65/56 mm (normal up to 56 mm).
The right atrium and right ventricle are of normal dimensions.
Global progressive reduction in contractility, morphologically
unremarkable.
In Doppler echocardiography, normal heart valves are observed.
Mitral valve insufficiency Grade I.
[Assessment]{.underline}: Dilated cardiomyopathy with significantly
reduced left ventricular function. Mitral insufficiency Grade I,
tricuspid insufficiency Grade I, PAP 23 mmHg + CVP. Pulmonary embolism
no longer detectable.
**Summary:**
Currently, the cardiac situation is stable, LVEDD slightly decreasing.
### Patient Report 3
**Dear colleague, **
We thank you for referring your patient, Mr. David Romero, born on
02/16/1942 to us for echocardiographic follow-up on 06/15/2016.
**Diagnoses:**
- Dilatated cardiomyopathy
- LifeVest
- Left ventricular ejection fraction of 28%
- Chronic Hepatitis C
- Status post Hepatitis A
- Post-antiviral therapy
- Exclusion of relevant coronary artery disease
- Type 2 diabetes mellitus
- Hypothyroidism
**Medication upon Admission:**
**Medication ** **Dosage ** **Frequency**
---------------------------- ------------- ---------------
Aspirin 100 mg 1-0-0
Ramipril (Altace) 2.5 mg 1-0-1
Carvedilol (Coreg) 12.5 mg 1-0-1
Torasemide (Torem) 5 mg 1-0-0
Spironolactone (Aldactone) 25 mg 1-0-0
L-Thyroxine (Synthroid) 50 µg 1-0-0
**Physical Examination:** The patient is fully oriented with good
general condition and normal mental state. Dry skin and mucous
membranes, normal breathing, no cyanosis. Cranial nerves are grossly
intact, no focal neurological deficits, good muscle strength and
sensitivity all around. Clear, rhythmic heart sounds, with a 2/6
systolic murmur at the apex. Lungs are evenly ventilated without rales.
Resonant percussion. Soft and supple abdomen without guarding, spleen
and liver not palpable. Normal bowel sounds.
**Echocardiography from 06/15/2016**: Good ultrasound conditions.
The left ventricle is dilated to approximately 65/57 mm (normal up to 56
mm). The left atrium is dilated to 48 mm. Normal thickness of the left
ventricular myocardium. Ejection fraction is around 28%. Heart valves
show normal flow velocities.
**Summary:**
Currently, the cardiac situation is stable, LVEDD slightly decreasing,
potassium and creatinine levels were obtained. If EF remains this low,
an ICD may be indicated.
**Lab results from 06/15/2016:**
**Parameter** **Result** **Reference Range**
----------------------------------- ------------ ---------------------
Reticulocytes 0.01/nL \< 0.01/nL
Sodium 135 mEq/L 136-145 mEq/L
Potassium 4.8 mEq/L 3.5-4.5 mEq/L
Creatinine 1.34 mg/dL 0.70-1.20 mg/dL
BUN 49 mg/dL 17-48 mg/dL
Total Bilirubin 1.9 mg/dL \< 1.20 mg/dL
C-reactive Protein 4.1 mg/L \< 5.0 mg/L
Troponin-T 78 ng/L \< 14 ng/L
ALT 67 U/L \< 41 U/L
AST 78 U/L \< 50 U/L
Alkaline Phosphatase 151 U/L 40-130 U/L
gamma-GT 200 U/L 8-61 U/L
Free Triiodothyronine (T3) 2.3 ng/L 2.00-4.40 ng/L
Free Thyroxine (T4) 14.2 ng/L 9.30-17.00 ng/L
Thyroid Stimulating Hormone (TSH) 4.1 mU/L 0.27-4.20 mU/L
Hemoglobin 11.6 g/dL 13.5-17.0 g/dL
Hematocrit 34.5% 39.5-50.5%
Red Blood Cell Count 3.7 M/µL 4.3-5.8 M/µL
White Blood Cell Count 9.56 K/µL 3.90-10.50 K/µL
Platelet Count 280 K/µL 150-370 K/µL
MCV 92.5 fL 80.0-99.0 fL
MCH 31.1 pg 27.0-33.5 pg
MCHC 33.6 g/dL 31.5-36.0 g/dL
MPV 8.9 fL 7.0-12.0 fL
RDW-CV 14.0% 11.5-15.0%
Quick 89% 78-123%
INR 1.09 0.90-1.25
Partial Thromboplastin Time 25.3 sec. 22.0-29.0 sec.
### Patient Report 4
**Dear colleague, **
We are reporting to you about Mr. David Romero, born on 02/16/1942, who
presented himself at our Cardiology University Outpatient Clinic on
06/30/2016.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function (ejection fraction
around 30%)
- LifeVest
- Planned CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Current Medication:**
**Medication ** **Dosage ** **Frequency**
---------------------------- ---------------- ---------------
Aspirin 100 mg/tablet 1-0-0
Ramipril (Altace) 2.5 mg/tablet 1-0-1
Carvedilol (Coreg) 12.5 mg/tablet 1-0-1
Torasemide (Torem) 5 mg/tablet 1-0-0
Spironolactone (Aldactone) 25 mg/tablet 1-0-0
L-Thyroxine (Synthroid) 50 µg/tablet 1-0-0
**Echocardiography on 06/30/2016:** In sinus rhythm. Adequate ultrasound
window.
Moderately dilated left ventricle (LVDd 63mm). Significantly reduced
systolic LV function (EF biplane 29%). No LV hypertrophy.
**ECG on 06/30/2016:** Sinus rhythm, regular tracing, heart rate 69/min,
complete left bundle branch block, QRS 135 ms, ERBS with left bundle
branch block.
**Assessment**: Mr. Romero presents himself for the follow-up assessment
of known dilated cardiomyopathy. He currently reports minimal dyspnea.
Coronary heart disease has been ruled out. No virus was detected
bioptically. However, the recent echocardiography still shows severely
impaired LV function.
**Current Recommendations:** Given the presence of left bundle branch
block, there is an indication for CRT-D implantation. For this purpose,
we have scheduled a pre-admission appointment, with the implantation
planned for 07/04/2016. We kindly request a referral letter. The
LifeVest should continue to be worn until the implantation, despite the
pressure sores on the thorax.
### Patient Report 5
**Dear colleague, **
We would like to report to you about our patient, Mr. David Romero, born
on 02/16/1942, who was in our inpatient care from 07/04/2016 to
07/06/2016.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function (ejection fraction
around 30%)
- LifeVest
- Planned CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Medication upon Admission:**
**Medication ** **Dosage ** **Frequency**
---------------------------- ------------- ---------------
Aspirin 100 mg 1-0-0
Ramipril (Altace) 2.5 mg 1-0-1
Carvedilol (Coreg) 12.5 mg 1-0-1
Torem (Torasemide) 5 mg 1-0-0
Spironolactone (Aldactone) 25 mg 1-0-0
L-Thyroxine (Synthroid) 50 µg 1-0-0
Sitagliptin (Januvia) 100 mg 1-0-0
Insulin glargine (Lantus) 0-0-20 IU
**Current Presentation:** The current admission was elective for CRT-D
implantation in dilated cardiomyopathy with severely impaired LV
function despite full heart failure medication and complete left bundle
branch block. Please refer to previous medical records for a detailed
history. On 07/05/2016, a CRT-ICD system was successfully implanted. The
peri- and post-interventional course was uncomplicated. Pneumothorax was
ruled out post-interventionally. The wound conditions are
irritation-free. The ICD card was given to the patient. We request
outpatient follow-up on the above-mentioned date for wound inspection
and CRT follow-up. Please adjust the known cardiovascular risk factors.
**Findings:**
**ECG upon Admission:** Sinus rhythm 66/min, PQ 176ms, QRS 126ms, QTc
432ms, Complete left bundle branch block with corresponding excitation
regression disorder.
**Procedure**: Implantation of a CRT-D with left ventricular multipoint
pacing left pectoral. Smooth triple puncture of the lateral left
subclavian vein and implantation of an active single-coil electrode in
the RV apex with very good electrical values. Trouble-free probing of
the CS and direct venography using a balloon occlusion catheter.
Identification of a suitable lateral vein and implantation of a
quadripolar electrode (Quartet, St. Jude Medical) with very good
electrical values. No phrenic stimulation up to 10 volts in all
polarities. Finally, implantation of an active P/S electrode in the
right atrial roof with equally very good electrical values. Connection
to the device and submuscular implantation. Wound irrigation and layered
wound closure with absorbable suture material. Finally, extensive
testing of all polarities of the LV electrode and activation of
multipoint pacing. Final setting of the ICD.
**Chest X-ray on 07/05/2016:**
[Clinical status, question, justifying indication:]{.underline} History
of CRT-D implantation. Question about lead position, pneumothorax?
**Findings**: New CRT-D unit left pectoral with leads projected onto the
right ventricle, the right atrium, and the sinus coronarius. No
pneumothorax.
Normal heart size. No pulmonary congestion. No diffuse infiltrates. No
pleural effusions.
**ECG at Discharge:** Continuous ventricular PM stimulation, HR: 66/min.
**Current Recommendations:**
- We request a follow-up appointment in our Pacemaker Clinic. Please
provide a referral slip.
- We ask for the protection of the left arm and avoidance of
elevations \> 90 degrees. Self-absorbing sutures have been used.
- We request regular wound checks.
### Patient Report 6
**Dear colleague, **
We thank you for referring your patient, Mr. David Romero, born on
02/16/1942, who presented to our Cardiological University Outpatient
Clinic on 08/26/2016.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function
- LifeVest
- CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Current Medication:**
**Medication ** **Dosage ** **Frequency**
---------------------------- ------------- ---------------
Aspirin 100 mg 1-0-0
Ramipril (Altace) 2.5 mg 1-0-1
Carvedilol (Coreg) 12.5 mg 1-0-1
Torem (Torasemide) 5 mg 1-0-0
Spironolactone (Aldactone) 25 mg 1-0-0
L-Thyroxine (Synthroid) 50 µg 1-0-0
Sitagliptin (Januvia) 100 mg 1-0-0
Insulin glargine (Lantus) 0-0-20 IU
**Current Presentation**: Slightly increasing exertional dyspnea, no
coronary heart disease.
**Cardiovascular Risk Factors:**
- Family history: No
- Smoking: No
- Hypertension: No
- Diabetes: Yes
- Dyslipidemia: Yes
**Physical Examination:** The patient is fully oriented with good
general condition and normal mental state. Dry skin and mucous
membranes, normal breathing, no cyanosis. Cranial nerves are grossly
intact, no focal neurological deficits, good muscle strength and
sensitivity all around. Clear, rhythmic heart sounds, with a 2/6
systolic murmur at the apex. Lungs are evenly ventilated without rales.
Resonant percussion. Soft and supple abdomen without pressure pain,
spleen and liver not palpable. Normal bowel sounds.
**Findings**:
**Resting ECG:** Sinus rhythm, 83 bpm. Blood pressure: 120/70 mmHg.
**Echocardiography: M-mode and 2-dimensional**
Left ventricle dimensions: Approximately 57/45 mm (normal up to 56 mm),
moderately dilated
- Right atrium and right ventricle: Normal dimensions
- Normal thickness of left ventricular muscle
- Globally, mild reduction in contractility
- Heart valves: Morphologically normal
- Doppler-Echocardiography: No significant valve regurgitation
**Assessment**: Mildly dilated cardiomyopathy with slightly reduced left
ventricular function. Ejection fraction at 45 - 50%. Mild diastolic
dysfunction. Mild tricuspid regurgitation, pulmonary artery pressure 22
mm Hg, and left ventricular filling pressure slightly increased.
**Stress Echocardiography: Stress echocardiography with exercise test**
- Stress test protocol: Treadmill exercise test
- Reason for stress test: Exertional dyspnea
- Quality of the ultrasound: Good
- Initial workload: 50 watts
- Maximum workload achieved: 150 Watt
- Blood pressure response: Systolic BP increased from 112/80 mmHg to
175/90 mmHg
- Heart rate response: Increased from 71bpm to 124bpm
- Exercise terminated due to leg pain
**Resting ECG:** Sinus rhythm**.** No significant changes during
exercise
**Echocardiography at rest:** Normokinesis of all left ventricular
segments EF: 45 - 50%
**Echocardiography during exercise:** Increased contractility and wall
thickening of all segments
[Summary]{.underline}: No dynamic wall motion abnormalities. No evidence
of exercise-induced myocardial ischemia
**Carotid Doppler Ultrasound:** Both common carotid arteries are
smooth-walled. Intima-media thickness: 0.8 mm. Small plaques in the
carotid bulb on both sides. Normal flow in the internal and external
carotid arteries. Normal dimensions and flow in the vertebral arteries.
**Summary:** Non-obstructive carotid plaques. LDL should be lowered to
below 1.8 mmol/L.
**Summary:**
- Stress echocardiography shows no evidence of ischemia, EF \>45-50%
- Carotid duplex shows minimal non-obstructive plaques
- Increase Simvastatin to 20 mg, target LDL-C \< 1.8 mmol/L
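The LDL-C target is stated in mmol/L; for readers used to mg/dL, the standard conversion multiplies by 38.67 (from the molar mass of cholesterol), so 1.8 mmol/L corresponds to roughly 70 mg/dL. A quick sketch of the conversion (illustrative, not part of the report):

```python
# Convert LDL cholesterol from mmol/L to mg/dL.
# Standard conversion factor for cholesterol: 38.67 mg/dL per mmol/L.
MGDL_PER_MMOLL = 38.67

def mmoll_to_mgdl(mmol_l: float) -> float:
    return mmol_l * MGDL_PER_MMOLL

print(round(mmoll_to_mgdl(1.8)))  # LDL target ~70 mg/dL
```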
### Patient Report 7
**Dear colleague, **
We would like to inform you about the results of the cardiac
catheterization of Mr. David Romero, born on 02/16/1942 performed by us
on 08/10/2022.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function
- LifeVest
- CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Procedure:** Right femoral artery puncture. Left ventriculography with
a 5F pigtail catheter in the right anterior oblique projection. Coronary
angiography with 5F JL4.0 and 5F JR 4.0 catheters. End-diastolic
pressure in the left ventricle within the normal range, measured in
mmHg. No pathological pressure gradient across the aortic valve.
**Coronary angiography:**
- Unremarkable left main stem.
- The left anterior descending (LAD) artery shows mild wall changes,
with a maximum stenosis of 20-\<30%.
- The robust right coronary artery (RCA) is stenosed proximally by
  30-40%, subsequently ectatic and then stenosed to 40-\<50% distally.
  Slow contrast clearance. The circumflex artery is also stenosed up
  to 30%.
- Left-dominant coronary circulation.
**Assessment**: Diffuse coronary atherosclerosis with less than 50%
stenosis in the RCA and evidence of endothelial dysfunction.
**Current Recommendations:**
- Initiation of Ranolazine
- Additional stress myocardial perfusion scintigraphy
### Patient Report 8
**Dear colleague, **
We would like to inform you about the results of the Myocardial
Perfusion Scintigraphy performed on our patient, Mr. David Romero, born
on 02/16/1942, on 09/23/2022.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function (ejection fraction
around 30%)
- LifeVest
- CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Physical Examination:** The patient is fully oriented with good
general condition and normal mental state. Dry skin and mucous
membranes, normal breathing, no cyanosis. Cranial nerves are grossly
intact, no focal neurological deficits, good muscle strength and
sensitivity all around. Clear, rhythmic heart sounds, with a 2/6
systolic murmur at the apex. Lungs are evenly ventilated without rales.
Resonant percussion. Soft and supple abdomen without guarding, spleen
and liver not palpable. Normal bowel sounds.
**Myocardial Perfusion Scintigraphy:**
The myocardial perfusion scintigraphy was conducted using 365 MBq of
99m-Technetium MIBI during pharmacological stress and 383 MBq of
99m-Technetium MIBI at rest.
[Technique]{.underline}: Initially, the patient was pharmacologically
stressed with the intravenous administration of 400 µg of Regadenoson
over 20 seconds, accompanied by ergometer exercise at 50 W.
Subsequently, the intravenous injection of the radiopharmaceutical was
performed. The maximum blood pressure achieved during the stress phase
was 143/84 mm Hg, and the maximum heart rate reached was 102 beats per
minute.
Approximately 60 minutes later, ECG-triggered acquisition of a
360-degree SPECT study was conducted with reconstructions of short and
long-axis slices.
Due to inhomogeneities in the myocardial wall segments during stress,
rest images were acquired on another examination day. Following the
intravenous injection of the radiopharmaceutical, ECG-triggered
acquisition of a 360-degree SPECT study was performed, including
short-axis and long-axis slices, approximately 60 minutes later.
[Clinical Information:]{.underline} Known coronary heart disease (RCA
50%). ICD/CRT pacemaker.
[Findings]{.underline}: No clear perfusion defects are seen in the
scintigraphic images acquired after pharmacologic exposure to
Regadenoson. This finding remains unchanged in the scintigraphic images
acquired at rest.
Quantitative analysis shows a normal-sized ventricle with a normal left
ventricular ejection fraction (LVEF) of 53% under exercise conditions
and 47% at rest (EDV 81 mL). There are no clear wall motion
abnormalities. In the gated SPECT analysis, there are no definite wall
motion abnormalities observed in both stress and rest conditions.
**Quantitative Scoring:**
- SSS (Summed Stress Score): 3 (4.4%)
- SRS (Summed Rest Score): 0 (0.0%)
- SDS (Summed Difference Score): 3 (4.4%)
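The three summed scores are internally consistent: SDS = SSS − SRS, and each percentage is the score divided by the maximum possible summed score. Assuming the standard 17-segment model with each segment scored 0-4 (maximum 68) — an assumption, since the report does not state the segmentation — the reported 4.4% follows directly:

```python
# Check the summed perfusion scores reported above, assuming the
# standard 17-segment model with per-segment scores of 0-4 (max 68).
MAX_SCORE = 17 * 4

def percent(score: int) -> float:
    return round(100 * score / MAX_SCORE, 1)

sss, srs = 3, 0
sds = sss - srs
print(sds, percent(sss), percent(srs), percent(sds))  # 3 4.4 0.0 4.4
```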
**Assessment**: No evidence of myocardial perfusion defects with
Regadenoson stress or at rest. Normal ventricular size and function with
no significant wall motion abnormalities.
### Patient Report 9
**Dear colleague, **
We would like to report on our patient, Mr. David Romero, born on
02/16/1942, who was under our inpatient care from 05/20/2023 to
05/21/2023.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function
- LifeVest
- CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Medical History:** The patient was admitted for elective replacement
of his CRT-D device due to impending battery depletion. At admission,
the patient reported
no complaints of fever, cough, dyspnea, chest pain, or melena.
**Physical Examination:** The patient is fully oriented with good
general condition and normal mental state. Dry skin and mucous
membranes, normal breathing, no cyanosis. Cranial nerves are grossly
intact, no focal neurological deficits, good muscle strength and
sensitivity all around. Clear, rhythmic heart sounds, with a 2/6
systolic murmur at the apex. Lungs are evenly ventilated without rales.
Resonant percussion. Soft and supple abdomen without guarding, spleen
and liver not palpable. Normal bowel sounds.
**Medication upon Admission**
**Medication** **Dosage** **Frequency**
--------------------------- -------------- ----------------------
Insulin glargine (Lantus) 450 E/1.5 ml 0-0-0-6-8 IU
Insulin lispro (Humalog) 300 E/3 ml 5-8 IU-5-8 IU-5-8 IU
Levothyroxine (Synthroid) 100 mcg 1-0-0-0
Colecalciferol 12.5 mcg 2-0-0-0
Atorvastatin (Lipitor) 21.7 mg 0-0-1-0
Amlodipine (Norvasc) 6.94 mg 1-0-0-0
Ramipril (Altace) 5 mg 1-0-0-0
Torasemide (Torem) 5 mg 0-0-0.5-0
Carvedilol (Coreg) 25 mg 0.5-0-0.5-0
Simvastatin (Zocor) 40 mg 0-0-0.5-0
Aspirin 100 mg 1-0-0-0
**Therapy and Progression:** The patient's current admission was
elective for the replacement of a 3-chamber CRT-D device due to battery
depletion. The procedure was performed without complications on
05/20/2023. The post-interventional course was uneventful. The
implantation site showed no irritation or significant hematoma at the
time of discharge, and no pneumothorax was detected on X-ray.
To protect the surgical wound, we request dry wound dressing for the
next 10 days and clinical wound checks. Suture removal is not necessary
with absorbable suture material. We advise against arm elevation for the
next 4 weeks, avoiding heavy lifting on the side of the device pocket
and gradual, pain-adapted full range of motion after 4 weeks.
**Current Recommendations:** We kindly request an outpatient follow-up
appointment in our Pacemaker Clinic.
**Medication upon Discharge:**
**Medication ** **Dosage ** **Frequency**
----------------------------- --------------- -----------------------
Insulin glargine (Lantus) 450 E/1.5 ml 0-0-0-6-8 IU
Insulin lispro (Humalog) 300 E/3 ml 5-8 IU-5-8 IU-5-8 IU
Levothyroxine (Synthroid) 100 µg 1-0-0-0
Colecalciferol (Vitamin D3) 12.5 µg 2-0-0-0
Atorvastatin (Lipitor) 21.7 mg 0-0-1-0
Amlodipine (Norvasc) 6.94 mg 1-0-0-0
Ramipril (Altace) 5 mg 1-0-0-0
Torasemide (Torem) 5 mg 0-0-0.5-0
Carvedilol (Coreg) 25 mg 0.5-0-0.5-0
Simvastatin (Zocor) 40 mg 0-0-0.5-0
Aspirin 100 mg 1-0-0-0
**Addition: Findings:**
**ECG at Discharge:** Sinus rhythm, ventricular pacing, QRS 122ms, QTc
472ms
**Rhythm Examination on 05/20/2023:**
[Results:]{.underline} Replacement of a 3-chamber CRT-D device (new:
SJM/Abbott Quadra Assura) due to impending battery depletion:
Uncomplicated replacement. Laborious freeing of the submuscular device
and proximal lead portions using a plasma blade. Extraction of the old
device. Connection to the new device. No fixation of the device in the
submuscular position. Hemostasis by electrocauterization. Layered wound
closure. Skin closure with absorbable intracutaneous sutures. Final
programming of the CRT-D device completed. [Procedure]{.underline}:
Compression of the wound with a sandbag and local cooling. First
outpatient follow-up in 8 weeks through our pacemaker clinic (please
schedule the appointment before discharge). Postoperative chest X-ray is
not necessary. Cefuroxime 1.5 g again tonight.
**Transthoracic Echocardiography on 05/18/2023**
**Results:** Globally mildly impaired systolic LV function. Diastolic
dysfunction Grade 1 (LV relaxation disorder).
- Right Ventricle: Normal-sized right ventricle. Normal RV function.
Pulmonary arterial pressure is normal.
- Left Atrium: Slightly dilated left atrium.
- Right Atrium: Normal-sized right atrium.
- Mitral Valve: Morphologically unremarkable. Minimal mitral valve
regurgitation.
- Aortic Valve: Mildly sclerotic aortic valve cusps. No aortic valve
insufficiency. No aortic valve stenosis (AV PGmax 7 mmHg).
- Tricuspid Valve: Delicate tricuspid valve leaflets. Minimal
tricuspid valve regurgitation (TR Pmax 26 mmHg).
- Pulmonary Valve: No pulmonary valve insufficiency. Pericardium: No
pericardial effusion.
**Assessment**: Examination in sinus rhythm with bundle branch block.
Moderate ultrasound windows. Normal-sized left ventricle (LVED 54 mm)
with mildly reduced systolic LV function (EF biplane 55%) and mildly
reduced contractility without regional wall motion abnormalities. Mild
LV hypertrophy,
predominantly septal, without obstruction. Diastolic dysfunction Grade 1
(E/A 0.47) with a normal LV filling index (E/E' mean 3.5). Slightly
sclerotic aortic valve without stenosis, no aortic insufficiency.
Slightly dilated left atrium (LAVI 31 ml/m²). Minimal mitral
regurgitation. Normal-sized right ventricle with normal function.
Normal-sized right atrium (RAVI 21 ml/m²). Minimal tricuspid
regurgitation.
As far as assessable, systolic PA pressure is within the normal range.
The IVC cannot be viewed from the subcostal angle. No thrombi are
visible. As far as assessable, no pericardial effusion is visible.
**Chest X-ray in two planes on 05/20/2023: **
[Clinical Information, Question, Justification:]{.underline} Post CRT
device replacement. Inquiry about position, pneumothorax.
[Findings]{.underline}: No pneumothorax following CRT device
replacement.
### Patient Report 0
**Dear colleague, **
We are writing to provide an update on Mr. David Romero, born on
02/16/1942, who presented at our Rhythm Clinic on 09/29/2023.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function
- LifeVest
- CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Current Medication:**
**Medication** **Dosage** **Frequency**
----------------------------- ------------------ ---------------
Lantus (Insulin glargine)     450 Units/1.5 mL   0-0-0/6-8
Humalog (Insulin lispro) 300 Units/3 mL 5-8/0/5-8/5-8
Levothyroxine (Synthroid) 100 mcg 1-0-0-0
Vitamin D3 (Colecalciferol) 12.5 mcg 2-0-0-0
Lipitor (Atorvastatin) 21.7 mg 0-0-1-0
Norvasc (Amlodipine) 6.94 mg 1-0-0-0
Altace (Ramipril) 5 mg 1-0-0-0
Demadex (Torasemide) 5 mg 0-0-0.5-0
Coreg (Carvedilol) 25 mg 0.5-0-0.5-0
Zocor (Simvastatin) 40 mg 0-0-0.5-0
Aspirin 100 mg 1-0-0-0
**Measurement Results:**
Battery/Capacitor: Status: OK, Voltage: 8.4 V
- Right Atrial: 375 Ohms, 3.80 mV, 0.375 V, 0.50 ms
- Right Ventricular: 388 Ohms, 11.80 mV, 0.750 V, 0.50 ms
- Left Ventricular: 350 Ohms, 0.625 V, 0.50 ms
- Defibrillation Impedance: Right Ventricular: 48 Ohms
**Implant Settings:**
- Bradycardia Setting: Mode: DDD
- Tachycardia Settings (Zone / Detection Interval / Detection Beats):
  - VF: 260 ms, 30 beats
  - VT1: 330 ms, 55 beats
- Probe Settings (Lead / Sensitivity / Sensing Polarity / Amplitude and
  Pulse Width / Stimulation Polarity):
  - Right Atrial: 0.30 mV, Bipolar, 1.375 V/0.50 ms, Bipolar
  - Right Ventricular: Bipolar, 2.000 V/0.50 ms, Bipolar
  - Left Ventricular: 2.000 V/0.50 ms, tip 1 - RV Coil
**Assessment:**
- Routine visit with normal device function.
- Normal sinus rhythm with a heart rate of 65/min.
- Balanced heart rate histogram with a plateau at 60-70 bpm.
- Wound conditions are unremarkable.
- Battery status: OK.
- Atrial probe: Intact
- Right ventricular probe: Intact
- Left ventricular probe: Intact
- A follow-up appointment for the patient is requested in 6 months.
**Lab results:**
**Parameter** **Result** **Reference Range**
----------------------------------- ------------ ---------------------
Reticulocytes 0.01/nL \< 0.01/nL
Sodium 137 mEq/L 136-145 mEq/L
Potassium 4.2 mEq/L 3.5-4.5 mEq/L
Creatinine 1.34 mg/dL 0.70-1.20 mg/dL
BUN 49 mg/dL 17-48 mg/dL
Total Bilirubin 1.8 mg/dL \< 1.20 mg/dL
C-reactive Protein 5.9 mg/L \< 5.0 mg/L
ALT 67 U/L \< 41 U/L
AST 78 U/L \< 50 U/L
Alkaline Phosphatase 151 U/L 40-130 U/L
Gamma-Glutamyl Transferase 200 U/L 8-61 U/L
Free Triiodothyronine (T3) 2.3 ng/L 2.00-4.40 ng/L
Free Thyroxine (T4) 14.2 ng/L 9.30-17.00 ng/L
Thyroid Stimulating Hormone (TSH) 4.1 mU/L 0.27-4.20 mU/L
Hemoglobin 11.6 g/dL 13.5-17.0 g/dL
Hematocrit 34.5% 39.5-50.5%
Red Blood Cell Count 3.7 M/µL 4.3-5.8 M/µL
White Blood Cell Count 9.56 K/µL 3.90-10.50 K/µL
MCV 92.7 fL 80.0-99.0 fL
MCH 31.8 pg 27.0-33.5 pg
MCHC 33.9 g/dL 31.5-36.0 g/dL
MPV 8.9 fL 7.0-12.0 fL
RDW-CV 14.2% 11.5-15.0%
Quick 89% 78-123%
INR 1.09 0.90-1.25
Partial Thromboplastin Time 25.3 sec. 22.0-29.0 sec.
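The out-of-range results in the table above (e.g. creatinine high, hemoglobin low) follow from a simple comparison against each reference range. A minimal flagging sketch over a hypothetical subset of the values (not part of the report itself):

```python
def flag(value: float, low: float, high: float) -> str:
    """Return 'L' below range, 'H' above range, '' if within range."""
    if value < low:
        return "L"
    if value > high:
        return "H"
    return ""

# Subset of the lab table: (result, low limit, high limit)
results = {
    "Creatinine (mg/dL)": (1.34, 0.70, 1.20),
    "Potassium (mEq/L)": (4.2, 3.5, 4.5),
    "Hemoglobin (g/dL)": (11.6, 13.5, 17.0),
}
for name, (value, low, high) in results.items():
    print(f"{name}: {value} {flag(value, low, high)}")
```

One-sided limits such as "< 5.0 mg/L" would need only the upper bound, with the lower bound set to zero or omitted.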
Which of the following describes Pop's attitude toward Sattell?
A. obsessive
B. delirious
C. ambivalent
D. vengeful
SCRIMSHAW The old man just wanted to get back his memory—and the methods he used were gently hellish, from the viewpoint of the others.... BY MURRAY LEINSTER Illustrated by Freas Pop Young was the one known man who could stand life on the surface of the Moon's far side, and, therefore, he occupied the shack on the Big Crack's edge, above the mining colony there. Some people said that no normal man could do it, and mentioned the scar of a ghastly head-wound to explain his ability. One man partly guessed the secret, but only partly. His name was Sattell and he had reason not to talk. Pop Young alone knew the whole truth, and he kept his mouth shut, too. It wasn't anybody else's business. The shack and the job he filled were located in the medieval notion of the physical appearance of hell. By day the environment was heat and torment. By night—lunar night, of course, and lunar day—it was frigidity and horror. Once in two weeks Earth-time a rocketship came around the horizon from Lunar City with stores for the colony deep underground. Pop received the stores and took care of them. He handed over the product of the mine, to be forwarded to Earth. The rocket went away again. Come nightfall Pop lowered the supplies down the long cable into the Big Crack to the colony far down inside, and freshened up the landing field marks with magnesium marking-powder if a rocket-blast had blurred them. That was fundamentally all he had to do. But without him the mine down in the Crack would have had to shut down. The Crack, of course, was that gaping rocky fault which stretches nine hundred miles, jaggedly, over the side of the Moon that Earth never sees. There is one stretch where it is a yawning gulf a full half-mile wide and unguessably deep. Where Pop Young's shack stood it was only a hundred yards, but the colony was a full mile down, in one wall. There is nothing like it on Earth, of course. 
When it was first found, scientists descended into it to examine the exposed rock-strata and learn the history of the Moon before its craters were made. But they found more than history. They found the reason for the colony and the rocket landing field and the shack. The reason for Pop was something else. The shack stood a hundred feet from the Big Crack's edge. It looked like a dust-heap thirty feet high, and it was. The outside was surface moondust, piled over a tiny dome to be insulation against the cold of night and shadow and the furnace heat of day. Pop lived in it all alone, and in his spare time he worked industriously at recovering some missing portions of his life that Sattell had managed to take away from him. He thought often of Sattell, down in the colony underground. There were galleries and tunnels and living-quarters down there. There were air-tight bulkheads for safety, and a hydroponic garden to keep the air fresh, and all sorts of things to make life possible for men under if not on the Moon. But it wasn't fun, even underground. In the Moon's slight gravity, a man is really adjusted to existence when he has a well-developed case of agoraphobia. With such an aid, a man can get into a tiny, coffinlike cubbyhole, and feel solidity above and below and around him, and happily tell himself that it feels delicious. Sometimes it does. But Sattell couldn't comfort himself so easily. He knew about Pop, up on the surface. He'd shipped out, whimpering, to the Moon to get far away from Pop, and Pop was just about a mile overhead and there was no way to get around him. It was difficult to get away from the mine, anyhow. It doesn't take too long for the low gravity to tear a man's nerves to shreds. He has to develop kinks in his head to survive. And those kinks— The first men to leave the colony had to be knocked cold and shipped out unconscious. They'd been underground—and in low gravity—long enough to be utterly unable to face the idea of open spaces. 
Even now there were some who had to be carried, but there were some tougher ones who were able to walk to the rocketship if Pop put a tarpaulin over their heads so they didn't have to see the sky. In any case Pop was essential, either for carrying or guidance. Sattell got the shakes when he thought of Pop, and Pop rather probably knew it. Of course, by the time he took the job tending the shack, he was pretty certain about Sattell. The facts spoke for themselves. Pop had come back to consciousness in a hospital with a great wound in his head and no memory of anything that had happened before that moment. It was not that his identity was in question. When he was stronger, the doctors told him who he was, and as gently as possible what had happened to his wife and children. They'd been murdered after he was seemingly killed defending them. But he didn't remember a thing. Not then. It was something of a blessing. But when he was physically recovered he set about trying to pick up the threads of the life he could no longer remember. He met Sattell quite by accident. Sattell looked familiar. Pop eagerly tried to ask him questions. And Sattell turned gray and frantically denied that he'd ever seen Pop before. All of which happened back on Earth and a long time ago. It seemed to Pop that the sight of Sattell had brought back some vague and cloudy memories. They were not sharp, though, and he hunted up Sattell again to find out if he was right. And Sattell went into panic when he returned. Nowadays, by the Big Crack, Pop wasn't so insistent on seeing Sattell, but he was deeply concerned with the recovery of the memories that Sattell helped bring back. Pop was a highly conscientious man. He took good care of his job. There was a warning-bell in the shack, and when a rocketship from Lunar City got above the horizon and could send a tight beam, the gong clanged loudly, and Pop got into a vacuum-suit and went out the air lock. 
He usually reached the moondozer about the time the ship began to brake for landing, and he watched it come in. He saw the silver needle in the sky fighting momentum above a line of jagged crater-walls. It slowed, and slowed, and curved down as it drew nearer. The pilot killed all forward motion just above the field and came steadily and smoothly down to land between the silvery triangles that marked the landing place. Instantly the rockets cut off, drums of fuel and air and food came out of the cargo-hatch and Pop swept forward with the dozer. It was a miniature tractor with a gigantic scoop in front. He pushed a great mound of talc-fine dust before him to cover up the cargo. It was necessary. With freight costing what it did, fuel and air and food came frozen solid, in containers barely thicker than foil. While they stayed at space-shadow temperature, the foil would hold anything. And a cover of insulating moondust with vacuum between the grains kept even air frozen solid, though in sunlight. At such times Pop hardly thought of Sattell. He knew he had plenty of time for that. He'd started to follow Sattell knowing what had happened to his wife and children, but it was hearsay only. He had no memory of them at all. But Sattell stirred the lost memories. At first Pop followed absorbedly from city to city, to recover the years that had been wiped out by an axe-blow. He did recover a good deal. When Sattell fled to another continent, Pop followed because he had some distinct memories of his wife—and the way he'd felt about her—and some fugitive mental images of his children. When Sattell frenziedly tried to deny knowledge of the murder in Tangier, Pop had come to remember both his children and some of the happiness of his married life. Even when Sattell—whimpering—signed up for Lunar City, Pop tracked him. By that time he was quite sure that Sattell was the man who'd killed his family. 
If so, Sattell had profited by less than two days' pay for wiping out everything that Pop possessed. But Pop wanted it back. He couldn't prove Sattell's guilt. There was no evidence. In any case, he didn't really want Sattell to die. If he did, there'd be no way to recover more lost memories. Sometimes, in the shack on the far side of the Moon, Pop Young had odd fancies about Sattell. There was the mine, for example. In each two Earth-weeks of working, the mine-colony nearly filled up a three-gallon cannister with greasy-seeming white crystals shaped like two pyramids base to base. The filled cannister would weigh a hundred pounds on Earth. Here it weighed eighteen. But on Earth its contents would be computed in carats, and a hundred pounds was worth millions. Yet here on the Moon Pop kept a waiting cannister on a shelf in his tiny dome, behind the air-apparatus. It rattled if he shook it, and it was worth no more than so many pebbles. But sometimes Pop wondered if Sattell ever thought of the value of the mine's production. If he would kill a woman and two children and think he'd killed a man for no more than a hundred dollars, what enormity would he commit for a three-gallon quantity of uncut diamonds? But he did not dwell on such speculation. The sun rose very, very slowly in what by convention was called the east. It took nearly two hours to urge its disk above the horizon, and it burned terribly in emptiness for fourteen times twenty-four hours before sunset. Then there was night, and for three hundred and thirty-six consecutive hours there were only stars overhead and the sky was a hole so terrible that a man who looked up into it—what with the nagging sensation of one-sixth gravity—tended to lose all confidence in the stability of things. Most men immediately found it hysterically necessary to seize hold of something solid to keep from falling upward. But nothing felt solid. Everything fell, too. Wherefore most men tended to scream. But not Pop. 
He'd come to the Moon in the first place because Sattell was here. Near Sattell, he found memories of times when he was a young man with a young wife who loved him extravagantly. Then pictures of his children came out of emptiness and grew sharp and clear. He found that he loved them very dearly. And when he was near Sattell he literally recovered them—in the sense that he came to know new things about them and had new memories of them every day. He hadn't yet remembered the crime which lost them to him. Until he did—and the fact possessed a certain grisly humor—Pop didn't even hate Sattell. He simply wanted to be near him because it enabled him to recover new and vivid parts of his youth that had been lost. Otherwise, he was wholly matter-of-fact—certainly so for the far side of the Moon. He was a rather fussy housekeeper. The shack above the Big Crack's rim was as tidy as any lighthouse or fur-trapper's cabin. He tended his air-apparatus with a fine precision. It was perfectly simple. In the shadow of the shack he had an unfailing source of extreme low temperature. Air from the shack flowed into a shadow-chilled pipe. Moisture condensed out of it here, and CO 2 froze solidly out of it there, and on beyond it collected as restless, transparent liquid air. At the same time, liquid air from another tank evaporated to maintain the proper air pressure in the shack. Every so often Pop tapped the pipe where the moisture froze, and lumps of water ice clattered out to be returned to the humidifier. Less often he took out the CO 2 snow, and measured it, and dumped an equivalent quantity of pale-blue liquid oxygen into the liquid air that had been purified by cold. The oxygen dissolved. Then the apparatus reversed itself and supplied fresh air from the now-enriched fluid, while the depleted other tank began to fill up with cold-purified liquid air. 
Outside the shack, jagged stony pinnacles reared in the starlight, and craters complained of the bombardment from space that had made them. But, outside, nothing ever happened. Inside, it was quite different. Working on his memories, one day Pop made a little sketch. It helped a great deal. He grew deeply interested. Writing-material was scarce, but he spent most of the time between two particular rocket-landings getting down on paper exactly how a child had looked while sleeping, some fifteen years before. He remembered with astonishment that the child had really looked exactly like that! Later he began a sketch of his partly-remembered wife. In time—he had plenty—it became a really truthful likeness. The sun rose, and baked the abomination of desolation which was the moonscape. Pop Young meticulously touched up the glittering triangles which were landing guides for the Lunar City ships. They glittered from the thinnest conceivable layer of magnesium marking-powder. He checked over the moondozer. He tended the air apparatus. He did everything that his job and survival required. Ungrudgingly. Then he made more sketches. The images to be drawn came back more clearly when he thought of Sattell, so by keeping Sattell in mind he recovered the memory of a chair that had been in his forgotten home. Then he drew his wife sitting in it, reading. It felt very good to see her again. And he speculated about whether Sattell ever thought of millions of dollars' worth of new-mined diamonds knocking about unguarded in the shack, and he suddenly recollected clearly the way one of his children had looked while playing with her doll. He made a quick sketch to keep from forgetting that. There was no purpose in the sketching, save that he'd lost all his young manhood through a senseless crime. He wanted his youth back. He was recovering it bit by bit. The occupation made it absurdly easy to live on the surface of the far side of the Moon, whether anybody else could do it or not. 
Sattell had no such device for adjusting to the lunar state of things. Living on the Moon was bad enough anyhow, then, but living one mile underground from Pop Young was much worse. Sattell clearly remembered the crime Pop Young hadn't yet recalled. He considered that Pop had made no overt attempt to revenge himself because he planned some retaliation so horrible and lingering that it was worth waiting for. He came to hate Pop with an insane ferocity. And fear. In his mind the need to escape became an obsession on top of the other psychotic states normal to a Moon-colonist. But he was helpless. He couldn't leave. There was Pop. He couldn't kill Pop. He had no chance—and he was afraid. The one absurd, irrelevant thing he could do was write letters back to Earth. He did that. He wrote with the desperate, impassioned, frantic blend of persuasion and information and genius-like invention of a prisoner in a high-security prison, trying to induce someone to help him escape. He had friends, of a sort, but for a long time his letters produced nothing. The Moon swung in vast circles about the Earth, and the Earth swung sedately about the Sun. The other planets danced their saraband. The rest of humanity went about its own affairs with fascinated attention. But then an event occurred which bore directly upon Pop Young and Sattell and Pop Young's missing years. Somebody back on Earth promoted a luxury passenger-line of spaceships to ply between Earth and Moon. It looked like a perfect set-up. Three spacecraft capable of the journey came into being with attendant reams of publicity. They promised a thrill and a new distinction for the rich. Guided tours to Lunar! The most expensive and most thrilling trip in history! One hundred thousand dollars for a twelve-day cruise through space, with views of the Moon's far side and trips through Lunar City and a landing in Aristarchus, plus sound-tapes of the journey and fame hitherto reserved for honest explorers! 
It didn't seem to have anything to do with Pop or with Sattell. But it did. There were just two passenger tours. The first was fully booked. But the passengers who paid so highly, expected to be pleasantly thrilled and shielded from all reasons for alarm. And they couldn't be. Something happens when a self-centered and complacent individual unsuspectingly looks out of a spaceship port and sees the cosmos unshielded by mists or clouds or other aids to blindness against reality. It is shattering. A millionaire cut his throat when he saw Earth dwindled to a mere blue-green ball in vastness. He could not endure his own smallness in the face of immensity. Not one passenger disembarked even for Lunar City. Most of them cowered in their chairs, hiding their eyes. They were the simple cases of hysteria. But the richest girl on Earth, who'd had five husbands and believed that nothing could move her—she went into catatonic withdrawal and neither saw nor heard nor moved. Two other passengers sobbed in improvised strait jackets. The first shipload started home. Fast. The second luxury liner took off with only four passengers and turned back before reaching the Moon. Space-pilots could take the strain of space-flight because they had work to do. Workers for the lunar mines could make the trip under heavy sedation. But it was too early in the development of space-travel for pleasure-passengers. They weren't prepared for the more humbling facts of life. Pop heard of the quaint commercial enterprise through the micro-tapes put off at the shack for the men down in the mine. Sattell probably learned of it the same way. Pop didn't even think of it again. It seemed to have nothing to do with him. But Sattell undoubtedly dealt with it fully in his desperate writings back to Earth. Pop matter-of-factly tended the shack and the landing field and the stores for the Big Crack mine. Between-times he made more drawings in pursuit of his own private objective. 
Quite accidentally, he developed a certain talent professional artists might have approved. But he was not trying to communicate, but to discover. Drawing—especially with his mind on Sattell—he found fresh incidents popping up in his recollection. Times when he was happy. One day he remembered the puppy his children had owned and loved. He drew it painstakingly—and it was his again. Thereafter he could remember it any time he chose. He did actually recover a completely vanished past. He envisioned a way to increase that recovery. But there was a marked shortage of artists' materials on the Moon. All freight had to be hauled from Earth, on a voyage equal to rather more than a thousand times around the equator of the Earth. Artists' supplies were not often included. Pop didn't even ask. He began to explore the area outside the shack for possible material no one would think of sending from Earth. He collected stones of various sorts, but when warmed up in the shack they were useless. He found no strictly lunar material which would serve for modeling or carving portraits in the ground. He found minerals which could be pulverized and used as pigments, but nothing suitable for this new adventure in the recovery of lost youth. He even considered blasting, to aid his search. He could. Down in the mine, blasting was done by soaking carbon black—from CO 2 —in liquid oxygen, and then firing it with a spark. It exploded splendidly. And its fumes were merely more CO 2 which an air-apparatus handled easily. He didn't do any blasting. He didn't find any signs of the sort of mineral he required. Marble would have been perfect, but there is no marble on the Moon. Naturally! Yet Pop continued to search absorbedly for material with which to capture memory. Sattell still seemed necessary, but— Early one lunar morning he was a good two miles from his shack when he saw rocket-fumes in the sky. It was most unlikely. 
He wasn't looking for anything of the sort, but out of the corner of his eye he observed that something moved. Which was impossible. He turned his head, and there were rocket-fumes coming over the horizon, not in the direction of Lunar City. Which was more impossible still. He stared. A tiny silver rocket to the westward poured out monstrous masses of vapor. It decelerated swiftly. It curved downward. The rockets checked for an instant, and flamed again more violently, and checked once more. This was not an expert approach. It was a faulty one. Curving surface-ward in a sharply changing parabola, the pilot over-corrected and had to wait to gather down-speed, and then over-corrected again. It was an altogether clumsy landing. The ship was not even perfectly vertical when it settled not quite in the landing-area marked by silvery triangles. One of its tail-fins crumpled slightly. It tilted a little when fully landed. Then nothing happened. Pop made his way toward it in the skittering, skating gait one uses in one-sixth gravity. When he was within half a mile, an air-lock door opened in the ship's side. But nothing came out of the lock. No space-suited figure. No cargo came drifting down with the singular deliberation of falling objects on the Moon. It was just barely past lunar sunrise on the far side of the Moon. Incredibly long and utterly black shadows stretched across the plain, and half the rocketship was dazzling white and half was blacker than blackness itself. The sun still hung low indeed in the black, star-speckled sky. Pop waded through moondust, raising a trail of slowly settling powder. He knew only that the ship didn't come from Lunar City, but from Earth. He couldn't imagine why. He did not even wildly connect it with what—say—Sattell might have written with desperate plausibility about greasy-seeming white crystals out of the mine, knocking about Pop Young's shack in cannisters containing a hundred Earth-pounds weight of richness. 
Pop reached the rocketship. He approached the big tail-fins. On one of them there were welded ladder-rungs going up to the opened air-lock door. He climbed. The air-lock was perfectly normal when he reached it. There was a glass port in the inner door, and he saw eyes looking through it at him. He pulled the outer door shut and felt the whining vibration of admitted air. His vacuum suit went slack about him. The inner door began to open, and Pop reached up and gave his helmet the practiced twisting jerk which removed it. Then he blinked. There was a red-headed man in the opened door. He grinned savagely at Pop. He held a very nasty hand-weapon trained on Pop's middle. "Don't come in!" he said mockingly. "And I don't give a damn about how you are. This isn't social. It's business!" Pop simply gaped. He couldn't quite take it in. "This," snapped the red-headed man abruptly, "is a stickup!" Pop's eyes went through the inner lock-door. He saw that the interior of the ship was stripped and bare. But a spiral stairway descended from some upper compartment. It had a handrail of pure, transparent, water-clear plastic. The walls were bare insulation, but that trace of luxury remained. Pop gazed at the plastic, fascinated. The red-headed man leaned forward, snarling. He slashed Pop across the face with the barrel of his weapon. It drew blood. It was wanton, savage brutality. "Pay attention!" snarled the red-headed man. "A stickup, I said! Get it? You go get that can of stuff from the mine! The diamonds! Bring them here! Understand?" Pop said numbly: "What the hell?" The red-headed man hit him again. He was nerve-racked, and, therefore, he wanted to hurt. "Move!" he rasped. "I want the diamonds you've got for the ship from Lunar City! Bring 'em!" Pop licked blood from his lips and the man with the weapon raged at him. "Then phone down to the mine! Tell Sattell I'm here and he can come on up! Tell him to bring any more diamonds they've dug up since the stuff you've got!" 
He leaned forward. His face was only inches from Pop Young's. It was seamed and hard-bitten and nerve-racked. But any man would be quivering if he wasn't used to space or the feel of one-sixth gravity on the Moon. He panted: "And get it straight! You try any tricks and we take off! We swing over your shack! The rocket-blast smashes it! We burn you down! Then we swing over the cable down to the mine and the rocket-flame melts it! You die and everybody in the mine besides! No tricks! We didn't come here for nothing!" He twitched all over. Then he struck cruelly again at Pop Young's face. He seemed filled with fury, at least partly hysterical. It was the tension that space-travel—then, at its beginning—produced. It was meaningless savagery due to terror. But, of course, Pop was helpless to resent it. There were no weapons on the Moon and the mention of Sattell's name showed the uselessness of bluff. He'd pictured the complete set-up by the edge of the Big Crack. Pop could do nothing. The red-headed man checked himself, panting. He drew back and slammed the inner lock-door. There was the sound of pumping. Pop put his helmet back on and sealed it. The outer door opened. Outrushing air tugged at Pop. After a second or two he went out and climbed down the welded-on ladder-bars to the ground. He headed back toward his shack. Somehow, the mention of Sattell had made his mind work better. It always did. He began painstakingly to put things together. The red-headed man knew the routine here in every detail. He knew Sattell. That part was simple. Sattell had planned this multi-million-dollar coup, as a man in prison might plan his break. The stripped interior of the ship identified it. It was one of the unsuccessful luxury-liners sold for scrap. Or perhaps it was stolen for the journey here. Sattell's associates had had to steal or somehow get the fuel, and somehow find a pilot. 
But there were diamonds worth at least five million dollars waiting for them, and the whole job might not have called for more than two men—with Sattell as a third. According to the economics of crime, it was feasible. Anyhow it was being done. Pop reached the dust-heap which was his shack and went in the air lock. Inside, he went to the vision-phone and called the mine-colony down in the Crack. He gave the message he'd been told to pass on. Sattell to come up, with what diamonds had been dug since the regular cannister was sent up for the Lunar City ship that would be due presently. Otherwise the ship on the landing strip would destroy shack and Pop and the colony together. "I'd guess," said Pop painstakingly, "that Sattell figured it out. He's probably got some sort of gun to keep you from holding him down there. But he won't know his friends are here—not right this minute he won't." A shaking voice asked questions from the vision-phone. "No," said Pop, "they'll do it anyhow. If we were able to tell about 'em, they'd be chased. But if I'm dead and the shacks smashed and the cable burnt through, they'll be back on Earth long before a new cable's been got and let down to you. So they'll do all they can no matter what I do." He added, "I wouldn't tell Sattell a thing about it, if I were you. It'll save trouble. Just let him keep on waiting for this to happen. It'll save you trouble." Another shaky question. "Me?" asked Pop. "Oh, I'm going to raise what hell I can. There's some stuff in that ship I want." He switched off the phone. He went over to his air apparatus. He took down the cannister of diamonds which were worth five millions or more back on Earth. He found a bucket. He dumped the diamonds casually into it. They floated downward with great deliberation and surged from side to side like a liquid when they stopped. One-sixth gravity. Pop regarded his drawings meditatively. A sketch of his wife as he now remembered her. It was very good to remember. 
A drawing of his two children, playing together. He looked forward to remembering much more about them. He grinned. "That stair-rail," he said in deep satisfaction. "That'll do it!" He tore bed linen from his bunk and worked on the emptied cannister. It was a double container with a thermware interior lining. Even on Earth newly-mined diamonds sometimes fly to pieces from internal stress. On the Moon, it was not desirable that diamonds be exposed to repeated violent changes of temperature. So a thermware-lined cannister kept them at mine-temperature once they were warmed to touchability. Pop packed the cotton cloth in the container. He hurried a little, because the men in the rocket were shaky and might not practice patience. He took a small emergency-lamp from his spare spacesuit. He carefully cracked its bulb, exposing the filament within. He put the lamp on top of the cotton and sprinkled magnesium marking-powder over everything. Then he went to the air-apparatus and took out a flask of the liquid oxygen used to keep his breathing-air in balance. He poured the frigid, pale-blue stuff into the cotton. He saturated it. All the inside of the shack was foggy when he finished. Then he pushed the cannister-top down. He breathed a sigh of relief when it was in place. He'd arranged for it to break a frozen-brittle switch as it descended. When it came off, the switch would light the lamp with its bare filament. There was powdered magnesium in contact with it and liquid oxygen all about. He went out of the shack by the air lock. On the way, thinking about Sattell, he suddenly recovered a completely new memory. On their first wedding anniversary, so long ago, he and his wife had gone out to dinner to celebrate. He remembered how she looked: the almost-smug joy they shared that they would be together for always, with one complete year for proof. Pop reflected hungrily that it was something else to be made permanent and inspected from time to time. 
But he wanted more than a drawing of this! He wanted to make the memory permanent and to extend it— If it had not been for his vacuum suit and the cannister he carried, Pop would have rubbed his hands. Tall, jagged crater-walls rose from the lunar plain. Monstrous, extended inky shadows stretched enormous distances, utterly black. The sun, like a glowing octopod, floated low at the edge of things and seemed to hate all creation. Pop reached the rocket. He climbed the welded ladder-rungs to the air lock. He closed the door. Air whined. His suit sagged against his body. He took off his helmet. When the red-headed man opened the inner door, the hand-weapon shook and trembled. Pop said calmly: "Now I've got to go handle the hoist, if Sattell's coming up from the mine. If I don't do it, he don't come up." The red-headed man snarled. But his eyes were on the cannister whose contents should weigh a hundred pounds on Earth. "Any tricks," he rasped, "and you know what happens!" "Yeah," said Pop. He stolidly put his helmet back on. But his eyes went past the red-headed man to the stair that wound down, inside the ship, from some compartment above. The stair-rail was pure, clear, water-white plastic, not less than three inches thick. There was a lot of it! The inner door closed. Pop opened the outer. Air rushed out. He climbed painstakingly down to the ground. He started back toward the shack. There was the most luridly bright of all possible flashes. There was no sound, of course. But something flamed very brightly, and the ground thumped under Pop Young's vacuum boots. He turned. The rocketship was still in the act of flying apart. It had been a splendid explosion. Of course cotton sheeting in liquid oxygen is not quite as good an explosive as carbon-black, which they used down in the mine. Even with magnesium powder to start the flame when a bare light-filament ignited it, the cannister-bomb hadn't equaled—say—T.N.T. 
But the ship had fuel on board for the trip back to Earth. And it blew, too. It would be minutes before all the fragments of the ship returned to the Moon's surface. On the Moon, things fall slowly. Pop didn't wait. He searched hopefully. Once a mass of steel plating fell only yards from him, but it did not interrupt his search. When he went into the shack, he grinned to himself. The call-light of the vision-phone flickered wildly. When he took off his helmet the bell clanged incessantly. He answered. A shaking voice from the mining-colony panted: "We felt a shock! What happened? What do we do?" "Don't do a thing," advised Pop. "It's all right. I blew up the ship and everything's all right. I wouldn't even mention it to Sattell if I were you." He grinned happily down at a section of plastic stair-rail he'd found not too far from where the ship exploded. When the man down in the mine cut off, Pop got out of his vacuum suit in a hurry. He placed the plastic zestfully on the table where he'd been restricted to drawing pictures of his wife and children in order to recover memories of them. He began to plan, gloatingly, the thing he would carve out of a four-inch section of the plastic. When it was carved, he'd paint it. While he worked, he'd think of Sattell, because that was the way to get back the missing portions of his life—the parts Sattell had managed to get away from him. He'd get back more than ever, now! He didn't wonder what he'd do if he ever remembered the crime Sattell had committed. He felt, somehow, that he wouldn't get that back until he'd recovered all the rest. Gloating, it was amusing to remember what people used to call such art-works as he planned, when carved by other lonely men in other faraway places. They called those sculptures scrimshaw. But they were a lot more than that! THE END Transcriber's Note: This etext was produced from Astounding Science Fiction September 1955. Extensive research did not uncover any evidence that the U.S. 
copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
What components are identified as core components for training VQA models?
### Introduction
Recent research advances in Computer Vision (CV) and Natural Language Processing (NLP) have introduced several tasks that are quite challenging to solve, the so-called AI-complete problems. Most of those tasks require systems that understand information from multiple sources, i.e., semantics from visual and textual data, in order to provide some kind of reasoning. For instance, image captioning BIBREF0, BIBREF1, BIBREF2 presents itself as a hard task to solve, though it is actually challenging to quantitatively evaluate models on that task, and recent studies BIBREF3 have raised questions about its AI-completeness. The Visual Question Answering (VQA) BIBREF3 task was introduced as an attempt to solve that issue: to be an actual AI-complete problem whose performance is easy to evaluate. It requires a system that receives as input an image and a free-form, open-ended, natural-language question, and produces a natural-language answer as the output BIBREF3. It is a multidisciplinary topic that is gaining popularity by encompassing CV and NLP into a single architecture, in what is usually regarded as a multimodal model BIBREF4, BIBREF5, BIBREF6. There are many real-world applications for models trained for Visual Question Answering, such as automatic surveillance video queries BIBREF7 and visually-impaired aiding BIBREF8, BIBREF9. Models trained for VQA are required to understand the semantics of images while finding relationships with the asked question. Therefore, those models must present a deep understanding of the image to properly perform inference and produce a reasonable answer to the visual question BIBREF10. In addition, this task is much easier to evaluate, since there is a finite set of possible answers for each image-question pair.
Traditionally, VQA approaches comprise three major steps: (i) representation learning of the image and the question; (ii) projection of a single multimodal representation through fusion and attention modules that are capable of leveraging both visual and textual information; and (iii) the generation of the natural language answer to the question at hand. This task often requires sophisticated models that are able to understand a question expressed in text, identify relevant elements of the image, and evaluate how these two inputs correlate. Given the current interest of the scientific community in VQA, many recent advances try to improve individual components such as the image encoder, the question representation, or the fusion and attention strategies to better leverage both information sources. With so many approaches currently being introduced at the same time, the real contribution and importance of each component within the proposed models become unclear. Thus, the main goal of this work is to understand the impact of each component on a proposed baseline architecture, which draws inspiration from the pioneering VQA model BIBREF3 (Fig. FIGREF1). Each component within that architecture is then systematically tested, allowing us to understand its impact on the system's final performance through a thorough set of experiments and ablation analysis. More specifically, we observe the impact of: (i) pre-trained word embeddings BIBREF11, BIBREF12, recurrent BIBREF13 and transformer-based sentence encoders BIBREF14 as question representation strategies; (ii) distinct convolutional neural networks used for visual feature extraction BIBREF15, BIBREF16, BIBREF17; and (iii) standard fusion strategies, as well as the importance of two main attention mechanisms BIBREF18, BIBREF19. We notice that even using a relatively simple baseline architecture, our best models are competitive with the (maybe overly-complex) state-of-the-art models BIBREF20, BIBREF21.
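To make the three-step pipeline concrete, here is a deliberately tiny, self-contained sketch. Every encoder, weight, and answer below is a hypothetical stand-in (real systems use a CNN, an RNN or Transformer, and learned parameters), not the architecture evaluated in this paper.

```python
# Toy stand-ins for steps (i)-(iii) of a VQA pipeline; all names and
# numbers here are hypothetical illustrations.
ANSWERS = ["yes", "no", "2"]  # toy answer vocabulary

def encode_image(pixels):
    """Step (i): visual representation (stand-in for a CNN encoder)."""
    mean = sum(pixels) / len(pixels)
    return [mean, 1.0 - mean]

def encode_question(question):
    """Step (i): textual representation (stand-in for an RNN encoder)."""
    n_words = len(question.split())
    return [n_words / 10.0, 1.0]

def fuse(v, q):
    """Step (ii): multimodal fusion via element-wise multiplication."""
    return [a * b for a, b in zip(v, q)]

def predict_answer(fused, weights):
    """Step (iii): score each candidate answer and pick the best one."""
    scores = [sum(w_i * f_i for w_i, f_i in zip(w, fused)) for w in weights]
    return ANSWERS[max(range(len(scores)), key=scores.__getitem__)]
```

With hand-picked weights, `predict_answer(fuse(encode_image(...), encode_question(...)), weights)` selects whichever candidate answer best aligns with the fused multimodal vector, mirroring the classification-over-answers formulation used throughout the paper.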
Given the experimental nature of this work, we have trained over 130 neural network models, accounting for more than 600 GPU processing hours. We expect our findings to be useful as guidelines for training novel VQA models, and to serve as a basis for the development of future architectures that seek to maximize predictive performance. ### Related Work
The task of VQA has gained attention since Antol et al. BIBREF3 presented a large-scale dataset with open-ended questions. Many of the developed VQA models employ a very similar architecture BIBREF3, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27: they represent images with features from pre-trained convolutional neural networks; they use word embeddings or recurrent neural networks to represent questions and/or answers; and they combine those features in a classification model over possible answers. Despite their wide adoption, RNN-based models suffer from their limited representation power BIBREF28, BIBREF29, BIBREF30, BIBREF31. Some recent approaches have investigated the application of the Transformer model BIBREF32 to tasks that incorporate visual and textual knowledge, such as image captioning BIBREF28. Attention-based methods are also being continuously investigated since they enable reasoning by focusing on relevant objects or regions in the original input features. They allow models to pay attention to important parts of visual or textual inputs at each step of a task. Visual attention models focus on small regions within an image to extract important features. A number of methods have adopted visual attention to benefit visual question answering BIBREF27, BIBREF33, BIBREF34. Recently, dynamic memory networks BIBREF27 integrate an attention mechanism with a memory module, and multimodal bilinear pooling BIBREF22, BIBREF20, BIBREF35 is exploited to expressively combine multimodal features and predict attention over the image. These methods commonly employ visual attention to find critical regions, but textual attention has been rarely incorporated into VQA systems. While all the aforementioned approaches have exploited those kinds of mechanisms, in this paper we study the impact of such choices specifically for the task of VQA, and create a simple yet effective model. Burns et al.
BIBREF36 conducted experiments comparing different word embeddings, language models, and embedding augmentation steps on five multimodal tasks: image-sentence retrieval, image captioning, visual question answering, phrase grounding, and text-to-clip retrieval. While their work focuses on textual experiments, our experiments cover both visual and textual elements, as well as the combination of these representations in the form of fusion and attention mechanisms. To the best of our knowledge, this is the first paper that provides a comprehensive analysis of the impact of each major component within a VQA architecture. ### Impact of VQA Components
In this section we first introduce the baseline approach, with default image and text encoders, alongside a pre-defined fusion strategy. That base approach is inspired by the pioneering VQA work of Antol et al. BIBREF3. To understand the importance of each component, we update the base architecture according to the component we are investigating. In our baseline model we replace the VGG network from BIBREF19 with a Faster-RCNN pre-trained on the Visual Genome dataset BIBREF37. The default text encoding is given by the last hidden-state of a Bidirectional LSTM network, instead of the concatenation of the last hidden-state and memory cell used in the original work. Fig. FIGREF1 illustrates the proposed baseline architecture, which is subdivided into three major segments: independent feature extraction from (1) images and (2) questions, as well as (3) the fusion mechanism responsible for learning cross-modal features. The default text encoder (denoted by the pink rectangle in Fig. FIGREF1) employed in this work comprises a randomly initialized word-embedding module that takes a tokenized question and returns a continuous vector for each token. Those vectors feed an LSTM network. The last hidden-state is used as the question encoding, which is projected with a linear layer into a $d$-dimensional space so it can be fused with the visual features. As the default option for the LSTM network, we use a single layer with 2048 hidden units. Given that this text encoding approach is fully trainable, we hereby name it Learnable Word Embedding (LWE). For the question encoding, we explore pre-trained and randomly initialized word-embeddings in various settings, including Word2Vec (W2V) BIBREF12 and GloVe BIBREF11. We also explore the use of hidden-states of Skip-Thoughts Vector BIBREF13 and BERT BIBREF14 as replacements for word-embeddings and sentence encoding approaches. Regarding the visual feature extraction (depicted as the green rectangle in Fig.
FIGREF1), we decided to use the pre-computed features proposed in BIBREF19. Such an architecture employs a ResNet-152 with a Faster-RCNN BIBREF15 fine-tuned on the Visual Genome dataset. We opted for this approach because using pre-computed features is far more computationally efficient, allowing us to train several models with distinct configurations. Moreover, several recent approaches BIBREF20, BIBREF21, BIBREF38 employ that same strategy as well, making it easier to provide fair comparison to the state-of-the-art approaches. In this study we perform experiments with two additional networks widely used for the task at hand, namely VGG-16 BIBREF16 and ResNet-101 BIBREF17. Given the multimodal nature of the problem we are dealing with, it is quite challenging to train proper image and question encoders so as to capture relevant semantic information from both of them. Nevertheless, another essential aspect of the architecture is the component that merges them altogether, allowing the model to generate answers based on both information sources BIBREF39. Multimodal fusion is itself a research area, with many approaches recently proposed BIBREF20, BIBREF40, BIBREF22, BIBREF41. The fusion module receives the extracted image and query features, and provides multimodal features that should allow the system to answer the visual question. Fusion strategies can either assume quite simple forms, such as vector multiplication or concatenation, or be quite complex, involving multilayered neural networks, tensor decomposition, and bi-linear pooling, just to name a few. Following BIBREF3, we adopt the element-wise vector multiplication (also referred to as the Hadamard product) as the default fusion strategy. This approach requires the feature representations being fused to have the same dimensionality.
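A minimal sketch of that fusion step, assuming both modalities are first linearly projected into a shared space (the weight shapes below are illustrative, not the paper's exact configuration):

```python
import numpy as np

def hadamard_fusion(img_feat, q_feat, W_img, W_q):
    """Project both modalities into a shared space so the element-wise
    product is defined; dimensions with high cross-modal affinity are
    amplified while uncorrelated ones are suppressed."""
    v = img_feat @ W_img
    q = q_feat @ W_q
    return v * q  # multimodal feature vector
```

The only structural requirement is that the two projected vectors share the same length, which is exactly the constraint stated above.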
Therefore, we project them using a fully-connected layer to reduce their dimension from 2048 to 1024. After being fused together, the multimodal features are finally passed through a fully-connected layer that provides scores (logits) further converted into probabilities via a softmax function ($S$). We want to maximize the probability $P(Y=y|X=x,Q=q)$ of the correct answer $y$ given the image $X$ and the provided question $Q$. Our models are trained to choose within a set comprised by the 3000 most frequent answers extracted from both training and validation sets of the VQA v2.0 dataset BIBREF42. ### Experimental Setup ::: Dataset
For conducting this study we decided to use the VQA v2.0 dataset BIBREF42. It is one of the largest and most frequently used datasets for training and evaluation of models in this task, being the official dataset used in yearly challenges hosted by mainstream computer vision venues. This dataset enhances the original one BIBREF3 by alleviating bias problems within the data and increasing the original number of instances. VQA v2.0 contains over $200,000$ images from MSCOCO BIBREF43, over 1 million questions and $\approx 11$ million answers. In addition, it has at least two questions per image, which prevents the model from answering the question without considering the input image. We follow VQA v2.0 standards and adopt the officially provided splits, allowing for fair comparison with other approaches. The splits we use are Validation, Test-Dev, and Test-Standard. In this work, results of the ablation experiments are reported on the Validation set, which is the default option used for this kind of experiment. In some experiments we also report the training set accuracy to verify evidence of overfitting due to excessive model complexity. Training data has a total of $443,757$ questions labeled with 4 million answers, while Test-Dev has a total of $214,354$ questions. Note that the validation size is about 4-fold larger than ImageNet's, which contains about $50,000$ samples. Therefore, one must keep in mind that even small performance gaps might indicate quite significant improvements. For instance, 1% accuracy gains represent $\approx 2,000$ additional instances being correctly classified. We submit the predictions of our best models to the online evaluation servers BIBREF44 so as to obtain results for the Test-Standard split, allowing for a fair comparison to state-of-the-art approaches. ### Experimental Setup ::: Evaluation Metric
Free and open-ended questions result in a diverse set of possible answers BIBREF3. For some questions, a simple yes or no answer may be sufficient. Other questions, however, may require more complex answers. In addition, it is worth noting that multiple answers may be considered correct, such as gray and light gray. Therefore, VQA v2.0 provides ten ground-truth answers for each question. These answers were collected from ten different randomly-chosen humans. The evaluation metric used to measure model performance in the open-ended Visual Question Answering task is a particular kind of accuracy. For each question in the input dataset, the model's most likely response is compared to the ten possible answers provided by humans in the dataset associated with that question BIBREF3, and evaluated according to Equation DISPLAY_FORM7: $\textrm {Acc}(ans) = \min \left( \frac{\#\,\text{humans that provided } ans}{3},\; 1 \right)$. In this approach, the prediction is considered totally correct only if at least 3 out of 10 people provided that same answer. ### Experimental Setup ::: Hyper-parameters
As in BIBREF20 we train our models in a classification-based manner, in which we minimize the cross-entropy loss calculated with an image-question-answer triplet sampled from the training set. We optimize the parameters of all VQA models using the Adamax BIBREF45 optimizer with a base learning rate of $7 \times 10^{-4}$, with the exception of BERT BIBREF14, for which we apply a 10-fold reduction as suggested in the original paper. We use a learning rate warm-up schedule in which we halve the base learning rate and linearly increase it until the fourth epoch, where it reaches twice its base value. It remains the same until the tenth epoch, where we start applying a 25% decay every two epochs. Gradients are calculated using batch sizes of 64 instances, and we train all models for 20 epochs. ### Experimental Analysis
In this section we show the experimental analysis for each component in the baseline VQA model. We also provide a summary of our findings regarding the impact of each part. Finally, we train a model with all the components that provide top results and compare it against state-of-the-art approaches. ### Experimental Analysis ::: Text Encoder
In our first experiment, we analyze the impact of different embeddings for the textual representation of the questions. To this end, we evaluate: (i) the impact of word-embeddings (pre-trained, or trained from scratch); and (ii) the role of the temporal encoding function, i.e., distinct RNN types, as well as pre-trained sentence encoders (e.g., Skip-Thoughts, BERT). The word-embedding strategies we evaluate are Learnable Word Embedding (randomly initialized and trained from scratch), Word2Vec BIBREF12, and GloVe BIBREF11. We also use word-level representations from widely used sentence embeddings strategies, namely Skip-Thoughts BIBREF13 and BERT BIBREF14. To do so, we use the hidden-states from the Skip-thoughts GRU network, while for BERT we use the activations of the last layer as word-level information. Those vectors feed an RNN that encodes the temporal sequence into a single global vector. Different types of RNNs are also investigated for encoding textual representation, including LSTM BIBREF46, Bidirectional LSTM BIBREF47, GRU BIBREF48, and Bidirectional GRU. For bidirectional architectures we concatenate both forward and backward hidden-states so as to aggregate information from both directions. Those approaches are also compared to a linear strategy, where we use a fully-connected layer followed by a global average pooling on the temporal dimension. The linear strategy discards any order information so we can demonstrate the role of the recurrent network as a temporal encoder to improve model performance. Figure FIGREF5 shows the performance variation of different types of word-embeddings, recurrent networks, initialization strategies, and the effect of fine-tuning the textual encoder. Clearly, the linear layer is outperformed by any type of recurrent layer. When using Skip-Thoughts the difference reaches $2.22\%$, which accounts for almost $5,000$ instances that the linear model mistakenly labeled. 
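For concreteness, the recurrent temporal encoders compared here all follow the same pattern: token vectors enter a recurrent cell one by one, and the final hidden state summarizes the question. A toy NumPy sketch of a GRU-based encoder follows; its dimensions and initialization are hypothetical and far smaller than the 2048 hidden units used in the experiments.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUQuestionEncoder:
    """Toy sketch: embedding lookup -> GRU over tokens -> last hidden
    state -> linear projection into the fusion space."""

    def __init__(self, vocab_size, emb_dim, hidden_dim, proj_dim, seed=0):
        rng = np.random.default_rng(seed)
        init = lambda *shape: rng.normal(0.0, 0.1, shape)
        self.emb = init(vocab_size, emb_dim)              # learnable word embeddings
        self.Wz = init(emb_dim + hidden_dim, hidden_dim)  # update gate
        self.Wr = init(emb_dim + hidden_dim, hidden_dim)  # reset gate
        self.Wh = init(emb_dim + hidden_dim, hidden_dim)  # candidate state
        self.Wp = init(hidden_dim, proj_dim)              # projection layer
        self.hidden_dim = hidden_dim

    def encode(self, token_ids):
        h = np.zeros(self.hidden_dim)
        for t in token_ids:
            x = self.emb[t]
            z = _sigmoid(np.concatenate([x, h]) @ self.Wz)
            r = _sigmoid(np.concatenate([x, h]) @ self.Wr)
            h_cand = np.tanh(np.concatenate([x, r * h]) @ self.Wh)
            h = (1.0 - z) * h + z * h_cand
        return h @ self.Wp  # question vector ready for fusion
```

Because the hidden state is updated token by token, the encoder is sensitive to word order, which is precisely the property the linear baseline discards.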
The only case in which the linear approach performed well is when trained with BERT. That is expected, since Transformer-based architectures employ several attention layers that present the advantage of achieving the total receptive field size in all layers. While doing so, BERT also encodes temporal information with special positional vectors that allow for learning temporal relations. Hence, it is easier for the model to encode order information within word-level vectors without using recurrent layers. For the Skip-Thoughts vector model, considering that its original architecture is based on GRUs, we evaluate both the randomly initialized and the pre-trained GRU of the original model, described as [GRU] and [GRU (skip)], respectively. We noticed that both options present virtually the same performance. In fact, the GRU trained from scratch performed $0.13\%$ better than its pre-trained version. Analyzing the results obtained with pre-trained word embeddings, it is clear that GloVe obtained consistently better results than the Word2Vec counterpart. We believe that GloVe vectors perform better given that they capture not only local context statistics as in Word2Vec, but also incorporate global statistics such as co-occurrence of words. One can also observe that the use of different RNN models has only a minor effect on the results. It might be more advisable to use GRU networks, since they halve the number of trainable parameters when compared to the LSTMs, while being faster and consistently presenting top results. Note also that the best results for Skip-Thoughts, Word2Vec, and GloVe were all quite similar, without any major variation regarding accuracy. The best overall result is achieved when using BERT to extract the textual features. BERT versions using either the linear layer or the RNNs outperformed all other pre-trained embeddings and sentence encoders.
In addition, the overall training accuracy for BERT models is not so high compared to all other approaches. That might be an indication that BERT models are less prone to overfitting the training data, and therefore present better generalization ability. Results make it clear that when using BERT, one must fine-tune it to achieve top performance. Figure FIGREF5 shows that it is possible to achieve a $3\%$ to $4\%$ accuracy improvement when updating BERT weights with $1/10$ of the base learning rate. Moreover, Figure FIGREF6 shows that the use of a pre-training strategy is helpful, since Skip-Thoughts and BERT outperform trainable word-embeddings in most of the evaluated settings. It is also clear that single-layered RNNs provide the best results and are far more efficient in terms of parameters. ### Experimental Analysis ::: Image Encoder
Experiments in this section analyze the visual feature extraction layers. The baseline uses the Faster-RCNN BIBREF15 network, and we also experiment with other pre-trained neural networks to encode image information so we can observe their impact on predictive performance. In addition to Faster-RCNN, we experiment with two networks widely used for VQA, namely ResNet-101 BIBREF17 and VGG-16 BIBREF16. Table TABREF11 illustrates the result of this experiment. Intuitively, visual features have a large impact on the model's performance. The accuracy difference between the best and the worst performing approaches is $\approx 5\%$. That difference accounts for roughly $10,000$ validation set instances. VGG-16 visual features presented the worst accuracy, but that was expected since it is the oldest network used in this study. In addition, it is only sixteen layers deep, and it has been shown that the depth of the network is quite important to hierarchically encode complex structures. Moreover, the VGG-16 architecture encodes all the information in a 4096-dimensional vector that is extracted after the second fully-connected layer at the end. That vector encodes little to no spatial information, which makes it almost impossible for the network to answer questions on the spatial positioning of objects. ResNet-101 obtained intermediate results. It is a much deeper network than VGG-16 and it achieves much better results on ImageNet, which shows the difference in learning capacity between the two networks. ResNet-101 provides information encoded in 2048-dimensional vectors, extracted from the global average pooling layer, which also summarizes spatial information into a fixed-sized representation. The best result as a visual feature extractor was achieved by the Faster-RCNN fine-tuned on the Visual Genome dataset. Such a network employs a ResNet-152 as a backbone for training an RPN-based object detector.
In addition, given that it was fine-tuned on the Visual Genome dataset, it allows for the training of robust models suited for general feature extraction. Hence, differently from the previous ResNet and VGG approaches, the Faster-RCNN approach is trained to detect objects, and therefore one can use it to extract features from the most relevant image regions. Each region is encoded as a 2048 dimensional vector. They contain rich information regarding regions and objects, since object detectors often operate over high-dimensional images, instead of resized ones (e.g., $256 \times 256$) as in typical classification networks. Hence, even after applying global pooling over regions, the network still has access to spatial information because of the pre-extracted regions of interest from each image. ### Experimental Analysis ::: Fusion strategy
In order to analyze the impact that the different fusion methods have on the network performance, three simple fusion mechanisms were analyzed: element-wise multiplication, concatenation, and summation of the textual and visual features. The choice of the fusion component is essential in VQA architectures, since its output generates multi-modal features used for answering the given visual question. The resulting multi-modal vector is projected into a 3000-dimensional label space, which provides a probability distribution over each possible answer to the question at hand BIBREF39. Table presents the experimental results with the fusion strategies. The best result is obtained using the element-wise multiplication. Such an approach functions as a filtering strategy that is able to scale down the importance of irrelevant dimensions from the visual-question feature vectors. In other words, vector dimensions with high cross-modal affinity will have their magnitudes increased, differently from the uncorrelated ones that will have their values reduced. Summation does provide the worst results overall, closely followed by the concatenation operator. Moreover, among all the fusion strategies used in this study, multiplication seems to ease the training process as it presents a much higher training set accuracy ($\approx 11\% $ improvement) as well. ### Experimental Analysis ::: Attention Mechanism
Finally, we analyze the impact of different attention mechanisms, such as Top-Down Attention BIBREF19 and Co-Attention BIBREF18. These mechanisms are used to provide distinct image representations according to the asked questions. Attention allows the model to focus on the most relevant visual information required to generate proper answers to the given questions. Hence, it is possible to generate several distinct representations of the same image, which also has a data augmentation effect. ### Experimental Analysis ::: Attention Mechanism ::: Top-Down Attention
Top-down attention, as the name suggests, uses global features from questions to weight local visual information. The global textual features $\mathbf {q} \in \mathbb {R}^{2048}$ are selected from the last internal state of the RNN, and the image features $V \in \mathbb {R}^{k \times 2048}$ are extracted from the Faster-RCNN, where $k$ represents the number of regions extracted from the image. In the present work we used $k=36$. The question features are linearly projected so as to reduce their dimension to 512, which is the size used in the original paper BIBREF19. Image features are concatenated with the textual features, generating a matrix $C$ of dimensions $k \times 2560$. Features resulting from that concatenation are first non-linearly projected with a trainable weight matrix $W_1^{2560 \times 512}$, generating a novel multimodal representation for each image region. Such a layer therefore learns image-question relations, generating $k \times 512$ features that are transformed by an activation function $\phi $. Often, $\phi $ is ReLU BIBREF49, Tanh BIBREF50, or Gated Tanh BIBREF51. The latter employs both the logistic Sigmoid and Tanh, in a gating scheme $\sigma (x) \times \textsc {tanh}(x)$. A second fully-connected layer is employed to summarize the 512-dimensional vectors into $h$ values per region ($k \times h$). It is usual to use a small value for $h$ such as $\lbrace 1, 2\rbrace $. The role of $h$ is to allow the model to produce distinct attention maps, which is useful for understanding complex sentences that require distinct viewpoints. Values produced by this layer are normalized with a softmax function applied over the columns of the matrix, generating an attention mask $A^{k \times h}$ used to weight image regions, producing the image vector $\hat{\mathbf {v}}$, as shown in Equation DISPLAY_FORM17. Note that when $h>1$, the dimensionality of the visual features increases $h$-fold.
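Putting these steps together, the following NumPy sketch implements the computation described above with toy dimensions (the real setting uses $k=36$ regions, 2048-d region features, and a 512-d projected question; here $\phi$ defaults to ReLU):

```python
import numpy as np

def softmax_over_regions(M):
    """Column-wise softmax: each attention map sums to 1 over the k regions."""
    e = np.exp(M - M.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def top_down_attention(V, q, W1, W2, phi=lambda x: np.maximum(x, 0.0)):
    """V: (k, d_v) region features; q: (d_q,) projected question features;
    W1: (d_v + d_q, d_h); W2: (d_h, h). Returns the flattened (h * d_v,)
    question-aware image vector."""
    k = V.shape[0]
    C = np.concatenate([V, np.tile(q, (k, 1))], axis=1)  # question copied onto each region
    A = softmax_over_regions(phi(C @ W1) @ W2)           # (k, h) attention mask
    return (A.T @ V).reshape(-1)                         # one weighted sum per map
```

Note how each of the $h$ attention maps produces its own weighted sum of region features, which is why the output dimensionality grows $h$-fold.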
Hence, $\hat{\mathbf {v}}^{h \times 2048}$, which we reshape to be a $(2048\times h)\times 1$ vector, constitutes the final question-aware image representation. ### Experimental Analysis ::: Attention Mechanism ::: Co-Attention
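A minimal numpy sketch of the co-attention computation described next. Since the context equation is elided in the text, a plausible similarity-weighted form $C = (VQ^T)Q$ is assumed, and the feature vectors are random stand-ins:

```python
import numpy as np

def l2norm(X):
    # normalize each row to unit L2 norm
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def softmax_cols(X):
    e = np.exp(X - X.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

rng = np.random.default_rng(0)
k, n, d = 36, 14, 2048                     # regions, question words, feature size
V = l2norm(rng.standard_normal((k, d)))    # image region features, unit L2 norm
Q = l2norm(rng.standard_normal((n, d)))    # word-level question features, unit L2 norm

S = V @ Q.T                  # (k, n) cosine similarities between regions and words
C = S @ Q                    # (k, d) context features (assumed form)
W = softmax_cols(C)          # normalize over the k regions
v_hat = (W * V).sum(axis=0)  # question-aware visual summary over regions
```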
Unlike the Top-Down attention mechanism, Co-Attention is based on the computation of local similarities between all question words and image regions. It expects two inputs: an image feature matrix $V^{k \times 2048}$, such that each image feature vector encodes an image region out of $k$; and a set of word-level features $Q^{n \times 2048}$. Both $V$ and $Q$ are normalized to have unit $L_2$ norm, so their multiplication $VQ^T$ results in the cosine similarity matrix used as guidance for generating the filtered image features. A context feature matrix $C^{k \times 2048}$ is given by: Finally, $C$ is normalized with a $\textsc {softmax}$ function, and the $k$ regions are summed so as to generate a 2048-sized vector $\hat{\mathbf {v}}$ that represents the relevant visual features of $V$ based on question $Q$: Table depicts the results obtained by adding the attention mechanisms to the baseline model. For these experiments we used only element-wise multiplication as the fusion strategy, given that it presented the best performance in our previous experiments. We observe that attention is a crucial mechanism for VQA, leading to an $\approx 6\%$ accuracy improvement. The best performing attention approach was Top-Down attention with ReLU activation, followed closely by Co-Attention. We noticed that when using Gated Tanh within Top-Down attention, results degraded by 2%. In addition, experiments show that $L_2$ normalization is quite important in Co-Attention, providing an improvement of almost $6\%$. ### Findings Summary
The experiments presented in Section SECREF9 have shown that the best text encoder approach is fine-tuning a pre-trained BERT model with a GRU network trained from scratch. In Section SECREF10 we performed experiments for analyzing the impact of pre-trained networks to extract visual features, among them Faster-RCNN, ResNet-101, and VGG-16. The best result was using a Faster-RCNN, reaching a $3\%$ improvement in the overall accuracy. We analyzed different ways to perform multimodal feature fusion in Section SECREF12. In this sense, the fusion mechanism that obtained the best result was the element-wise product. It provides $\approx 3\%$ higher overall accuracy when compared to the other fusion approaches. Finally, in Section SECREF13 we have studied two main attention mechanisms and their variations. They aim to provide question-aware image representation by attending to the most important spatial features. The top performing mechanism is the Top-Down attention with the ReLU activation function, which provided an $\approx 6\%$ overall accuracy improvement when compared to the base architecture. ### Comparison to state-of-the-art methods
After evaluating individually each component in a typical VQA architecture, our goal in this section is to compare the approach that combines the best performing components into a single model with the current state-of-the-art in VQA. Our comparison involves the following VQA models: Deeper-lstm-q BIBREF3, MCB BIBREF22, ReasonNet BIBREF52, Tips&Tricks BIBREF53, and the recent block BIBREF20. Tables TABREF21 and show that our best architecture outperforms all competitors but block, in both Test-Standard (Table TABREF21) and Test-Dev sets (Table ). Despite block presenting a marginal advantage in accuracy, we have shown in this paper that by carefully analyzing each individual component we are capable of generating a method, without any bells and whistles, that is on par with much more complex methods. For instance, block and MCB require 18M and 32M parameters respectively for the fusion scheme alone, while our fusion approach is parameter-free. Moreover, our model performs far better than BIBREF22, BIBREF52, and BIBREF53, which are also arguably much more complex methods. ### Conclusion
In this study we observed the actual impact of several components within VQA models. We have shown that transformer-based encoders together with GRU models provide the best performance for question representation. Notably, we demonstrated that using pre-trained text representations provides consistent performance improvements across several hyper-parameter configurations. We have also shown that using an object detector fine-tuned with external data provides large improvements in accuracy. Our experiments have demonstrated that even simple fusion strategies can achieve performance on par with the state-of-the-art. Moreover, we have shown that attention mechanisms are paramount for learning top performing networks, since they allow producing question-aware image representations that are capable of encoding spatial relations. It became clear that Top-Down is the preferred attention method, given its results with ReLU activation. It is now clear that some configurations used in some architectures (e.g., additional RNN layers) are actually irrelevant and can be removed altogether without harming accuracy. For future work, we expect to expand this study in two main ways: (i) cover additional datasets, such as Visual Genome BIBREF37; and (ii) study in an exhaustive fashion how distinct components interact with each other, instead of observing their impact alone on the classification performance. ### Acknowledgment
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nivel Superior – Brasil (CAPES) – Finance Code 001. We would also like to thank FAPERGS for funding this research. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the graphics cards used for this research.
Fig. 1. Baseline architecture proposed for the experimental setup.
Fig. 2. Overall validation accuracy improvement (∆) over the baseline architecture. Models denoted with * present fixed word-embedding representations, i.e., they are not updated via back-propagation.
Fig. 3. Overall accuracy vs. number of parameters trade-off analysis. Circled markers denote two-layered RNNs. The number of parameters increases with the number of hidden units H within the RNN. In this experiment we vary H ∈ {128, 256, 512, 1024, 2048}.
TABLE III. Experiment using different attention mechanisms.
TABLE IV. Comparison of the models on the VQA2 Test-Standard set. The models were trained on the union of the VQA 2.0 trainval split and the Visual Genome [38] train split. All is the overall open-ended accuracy (higher is better). Yes/No, Numbers, and Others are subsets that correspond to answer types. * Scores reported from [21].
TABLE V. Comparison of the models on the VQA2 Test-Dev set. All is the overall open-ended accuracy (higher is better). Yes/No, Numbers, and Others are subsets that correspond to answer types. * Scores reported from [21].
|
pre-trained text representations, transformer-based encoders together with GRU models, attention mechanisms are paramount for learning top performing networks, Top-Down is the preferred attention method
|
How better are results for pmra algorithm than Doc2Vec in human evaluation?
|
### Abstract
Background PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the “similar articles” section, allowing the end-user to find scientific articles linked to the consulted document in terms of context. The aim of this study is to analyze whether it is possible to replace the statistic model PubMed Related Articles (pmra) with a document embedding method. Methods The Doc2Vec algorithm was used to train models that vectorize documents. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. The parameter combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluation tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra. Results The two different Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, word and stem contents of linked documents are highly similar between pmra and the Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. By contrast, the manual evaluation shows much better results for the pmra algorithm. Conclusions While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need a prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly with the PV-DBOW architecture. By contrast, the human evaluation, with no clear agreement between evaluators, calls for future studies to better understand this difference between PV-DBOW and the pmra algorithm. ### Background ::: PubMed
PubMed is the largest database of bio-medical articles worldwide, with more than 29,000,000 freely available abstracts. Each article is identified by a unique PubMed IDentifier (PMID) and is indexed with the Medical Subject Headings (MeSH) terminology. In order to facilitate the Information Retrieval (IR) process for the end-user, PubMed launched in 2007 a related articles search service, available both through its Graphical User Interface (GUI) and its Application Programming Interface (API). Regarding the GUI, while the user is reading a publication, a panel presents titles of articles that may be linked to the current reading. For the API, the user must query eLink with a given PMID BIBREF0. The output will be a list of other PMIDs, each associated with the similarity score computed by the pmra (pubmed related article) model BIBREF1. ### Background ::: The pmra model
In this model, each document is tokenized into many topics $S_{i}$. Then, the probability $P(C|D)$ that the user will find relevant the document C when reading the document D is calculated. For this purpose, the authors introduced the concept of eliteness. Briefly, a topic $S_{i}$ is considered an elite topic for a given document if a word $W_{i}$ representing $S_{i}$ is used with a high frequency in this document. This approach brings closer documents sharing a maximum of elite topics. In the article presenting the pmra model, the authors claim that “the deployed algorithm in PubMed also takes advantage of MeSH terms, which we do not discuss here”. We can thus assume that a similarity score is computed thanks to the MeSH terms associated with both documents D and C. Such an indexing is highly time-consuming and has to be performed manually. ### Background ::: Documents embedding
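Throughout this section, proximity between embedded documents is estimated with the cosine distance between their vectors; a self-contained sketch with illustrative toy vectors:

```python
import numpy as np

def cosine_similarity(u, v):
    # cosine of the angle between two document vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# toy 4-dimensional "document vectors"; real D2V vectors have hundreds
# of dimensions (vector_size is one of the tuned hyper-parameters)
doc_a = np.array([0.9, 0.1, 0.3, 0.0])
doc_b = np.array([0.8, 0.2, 0.4, 0.1])   # contextually close to doc_a
doc_c = np.array([0.0, 0.9, 0.0, 0.8])   # contextually distant from doc_a

assert cosine_similarity(doc_a, doc_b) > cosine_similarity(doc_a, doc_c)
```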
Nowadays, embedding models can represent a text as a vector of fixed dimensions. The primary purpose of this mathematical representation of documents was to be able to use texts as input of deep neural networks. However, these models have been used by the IR community as well: once all fitted in the same multidimensional space, the cosine distance between two document vectors can estimate the proximity between these two texts. In 2013, Mikolov et al. released a word embedding method called Word2Vec (W2V) BIBREF2. Briefly, this algorithm uses unsupervised learning to train a model which embeds a word as a vector while preserving its semantic meaning. Following this work, Mikolov and Le released in 2014 a method to vectorize complete texts BIBREF3. This algorithm, called Doc2Vec (D2V), is highly similar to W2V and comes with two architectures. The Distributed Memory Model of Paragraph Vectors (PV-DM) first trains a W2V model. This word embedding will be common to all texts from a given corpus C on which it was trained. Then, each document $D_{x}$ from C will be assigned a randomly initialised vector of fixed length, which will be concatenated with vectors of words composing $D_{x}$ during training (word and document vectors share the same number of dimensions). This concatenation will be used by a final classifier to predict the next token of a randomly selected window of words. The accuracy of this task can be calculated and used to compute a loss function, used to back-propagate errors to the model, which leads to a modification of the document’s representation. The Distributed Bag of Words version of Paragraph Vector (PV-DBOW) is highly similar to the PV-DM, the main difference being the goal of the final classifier. Instead of concatenating the document vector with word vectors, the goal here is to output words from this window just by using the mathematical representation of the document. ### Background ::: Related Work
Doc2Vec has been used in many cases of similar document retrieval. In 2016, Lee et al. used D2V to cluster positive and negative sentiments with an accuracy of 76.4% BIBREF4. The same year, Lau and Baldwin showed that D2V provides a robust representation of documents, estimated with two tasks: document similarity to retrieve 12 different classes and sentence similarity scoring BIBREF5. Recently, studies started to use document embeddings on the PubMed corpus. In 2017, Gargiulo et al. used a combination of word vectors coming from the abstract to bring closer similar documents from PubMed BIBREF6. The same year, Wang and Koopman used the PubMed database to compare D2V and their own document embedding method BIBREF7. Their accuracy measurement task consisted in retrieving documents having a small cosine distance with the embedding of a query. Recently, Chen et al. released BioSentVec, a set of sentence vectors created from PubMed with the sent2vec algorithm BIBREF8, BIBREF9. However, their evaluation task was based on public sentence similarity datasets, whereas the goal here is to embed entire abstracts as vectors and to use them to search for similar articles versus the pmra model. In 2008, the related articles feature of PubMed was compared (using a manual evaluation) with one that uses both a TF-IDF BIBREF10 representation of the documents and Lin’s distance BIBREF11 to compare their MeSH terms BIBREF12. Thus, no study has been designed so far to compare document embeddings and the pmra algorithm. The objectives of this study were to measure the ability of these two models to infer the similarity between documents from PubMed and to search for what impacts this proximity the most. To do so, different evaluation tasks were defined to cover a wide range of aspects of document analogy, from their context to their morphological similarities. ### Methods ::: Material
During this study, the optimisation of the model’s parameters and one of the evaluation tasks required MeSH terms associated with the abstracts from PubMed. Briefly, MeSH is a medical terminology used to index documents on PubMed to perform keyword-based queries. The MEDOC program was used to create a MySQL database filled with 26,345,267 articles from the PubMed bulk downloads on October 5th, 2018 BIBREF13. Then, 16,048,372 articles having both an abstract and at least one associated MeSH term were selected for this study. For each, the PMID, title, abstract and MeSH terms were extracted. The titles and abstracts were lowercased, tokenized and concatenated to compose the PubMed documents corpus. ### Methods ::: Optimisation
Among all available parameters to tune the D2V algorithm released by Gensim, six were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the architecture used for training (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector. A list of possible values was defined for each of these six parameters. All possible combinations of these parameters were sent to slave nodes on a cluster, each node training a D2V model with a unique combination of parameters on 85% of 100,000 documents randomly selected from the corpus. Every article from the remaining 15% was then sent to each trained model and queried for the top-ten closest articles. For each model, a final accuracy score was calculated as the average percentage of common MeSH terms between each document $D_{i}$ from the 15,000 extracted texts and its top-ten closest documents. The combination of parameters with the highest score was kept for both PV-DBOW and PV-DM. ### Methods ::: Training
The final models were trained on a server powered by four XEON E7 CPUs (144 threads) and 1 TB of RAM. Among the total corpus (16,048,372 documents), 1% (160,482) was extracted as a test set (named TeS) and was discarded from the training. The final models were trained on the 15,887,890 documents composing the training set, called TrS. ### Methods ::: Evaluation
The goal here being to assess whether D2V could effectively replace the related-document function on PubMed, five different document similarity evaluations were designed, as seen on figure FIGREF9. These tasks were designed to cover all levels of similarity, from the most general (the context) to character-level similarity. Indeed, a reliable algorithm to find related documents should be able to bring closer texts sharing either a similar context, some important ideas (stems of words) or an amount of non-stemmed vocabulary (e.g. verb tenses are taken into account), and should not be based on raw character similarity (two documents sharing the same proportion of the letter “A” or having a similar length should not be brought together if they do not exhibit upper-level similarity). ### Methods ::: Evaluation ::: String length
To assess whether a similar length could lead to the convergence of two documents, the size of the query document $D_{x}$ was compared with that of the top-close document $C_{x}$ for 10,000 documents randomly selected from the TeS after some pre-processing steps (stopwords and spaces were removed from both documents). ### Methods ::: Evaluation ::: Words co-occurrences
A matrix of word co-occurrences was constructed on the total corpus from PubMed. Briefly, each document was lowercased and tokenized. A matrix was filled with the number of times two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. All possible combinations between all words $WD_{x} \in D_{x}$ and all words $WC_{x} \in C_{x}$ (excluding stopwords) were extracted, 500 couples were randomly selected, and the number of times each of them co-occurs was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between D and C regarding their word content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm. ### Methods ::: Evaluation ::: Stems co-occurrences
The evaluation task explained above was also applied on 10,000 stemmed texts (using Gensim’s PorterStemmer to only keep word roots). This allows assessing the influence of conjugation forms and other suffixes. ### Methods ::: Evaluation ::: MeSH similarity
It is possible to compare the ability of both pmra and D2V to bring closer articles which were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both pmra and D2V architectures, and the top-five closest articles $C_{x}$ were extracted. The following rules were then applied to each MeSH found associated with $D_{x}$ for each document $C_{x_i}$: add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH is defined as a major topic, and add 1 for each qualifier in common between $D_{x}$ and $C_{x_i}$ regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V. ### Methods ::: Evaluation ::: Manual evaluation
Among all documents contained in the TeS, 10 articles $D_{x}$ were randomly selected. All of them were sent to the pmra and to the most accurate of the two D2V architectures, regarding the automatic evaluations explained above. Each model was then queried for the ten closest articles for each $D_{x_i} \in D_{x}$, and the relevance between $D_{x_i}$ and every one of the top-ten documents was blindly assessed on a three-modality scale used in other standard Information Retrieval test sets: bad (0), partial (1) or full relevance (2) BIBREF15. In addition, evaluators were asked to rank publications according to their relevance to the query, the first being the closest from their perspective. Two medical doctors and two medical data librarians took part in this evaluation. ### Results ::: Optimisation
Regarding the optimisation, 1,920 different models were trained and evaluated. First, the dm parameter highly affects the accuracy. Indeed, the PV-DBOW architecture looks more precise, with the highest accuracy of 25.78%, while the PV-DM reached only 18.08% of common MeSH terms on average between query and top-close documents. Then, embedding vectors having a large number of dimensions ($> 256$) seem to lead to a better accuracy, for PV-DBOW at least. Finally, when set too low ($< 0.01$), the alpha parameter leads to poor accuracy. The best combination of parameters, obtained with the PV-DBOW architecture, was selected. The best parameters regarding the PV-DM, but having the same vector_size value, were also kept (13.30% of accuracy). The concatenation of models is thus possible without dimension reduction, this method being promoted by Mikolov and Le BIBREF3. Selected values are listed in table TABREF16. ### Results ::: Evaluation ::: String length
By looking at the length difference in terms of characters between documents brought closer by D2V, a difference is visible between the two architectures (Figure FIGREF19C). In fact, while a very low correlation is visible under the PV-DM architecture (coefficient $-2.6\times 10^{-5}$) and under the pmra model ($-5.4\times 10^{-5}$), a stronger negative one is observed between the cosine distance computed by the PV-DBOW for two documents and their difference in terms of length (coefficient $-1.1\times 10^{-4}$). This correlation suggests that two documents having a similar size are more likely to be closer in the vectorial space created by the PV-DBOW (cosine distance closer to 1). ### Results ::: Evaluation ::: Words co-occurrences
Once scores from pmra had been normalized, the correlation between word co-occurrences and scores returned by both D2V and pmra was studied (Figure FIGREF19B). The very low slopes of the D2V trend lines ($-1.1\times 10^{-5}$ for the PV-DBOW and $-3\times 10^{-6}$ for PV-DM) indicate that the vocabulary content does not influence (positively or negatively) the proximity between two documents for this algorithm. By looking at the green dots or line, the pmra seems to give little importance to the co-occurrence of terms. A low slope is observed ($-5.8\times 10^{-5}$), indicating a slight negative correlation between word co-occurrence and computed score. ### Results ::: Evaluation ::: Stems co-occurrences
This test assigns a score reflecting the proximity between two documents regarding their vocabulary content; the impact of conjugation, plural forms, etc. was lowered by a stemming step. The D2V model returns a cosine score S for a pair of documents ($0 < S < 1$, the top-close document is not likely to have a negative cosine value), while the pmra returns a score between 18M and 75M in our case BIBREF0. These scores were normalized to fit between the same limits as the cosine distance. For PV-DBOW, PV-DM and pmra, the influence of the stems is almost insignificant, with very flat slopes for the trend lines ($1\times 10^{-6}$, $-2\times 10^{-6}$ and $-2\times 10^{-6}$ respectively, see figure FIGREF19A). This indicates that the stem content of two documents will not affect (negatively or positively) their proximity for these models. ### Results ::: Evaluation ::: MeSH similarity
By studying the common MeSH labels between two close documents, it is possible to assess whether the context influences this proximity or not. By looking at figure FIGREF23A, we can see that PV-DBOW and pmra are very close in terms of MeSH score, indicating that they bring closer documents sharing a similar number of common MeSH labels on average. The pmra model seems more likely to output documents sharing a higher MeSH score (the distribution tail extending beyond 4, with a mean equal to 1.58, standard deviation: 1.06), while the PV-DM brings closer documents that are less likely to share an important number of MeSH terms, with a majority of scores between 0 and 1 (mean equal to 1.16, standard deviation: 0.73). Figure FIGREF23B shows the correlation between the MeSH score for documents returned by the pmra and those returned by both PV-DM and PV-DBOW models. The PV-DBOW algorithm looks much closer to the pmra in terms of common MeSH labels between two close documents, with a slope of 1.0064. The PV-DM model is much less correlated, with a slope of 0.1633, indicating fewer MeSH terms in common for close articles. ### Results ::: Evaluation ::: Manual evaluation
Regarding the results obtained by both PV-DBOW and PV-DM sub-architectures, the PV-DBOW model was used versus the pmra. Its score in the MeSH evaluation task, close to the pmra's, indicates an ability to bring closer documents sharing the same concepts. Thus, 10 randomly chosen documents were sent to the pmra and to the PV-DBOW models, and each was asked to output the 10 closest documents. Their relevance was then assessed by four evaluators. The agreement between all evaluators regarding the three-modality scale was assessed by computing the Cohen's kappa score $K$ using the scikit-learn Python library (Figure FIGREF25) BIBREF16. First, we can notice that the highest $K$ was obtained by the two medical data librarians (EL and GK) with $K=0.61$, indicating a substantial agreement BIBREF17. By contrast, the lowest $K$ was computed using evaluations from the two medical doctors (SJD and JPL) with $K=0.49$, indicating barely a moderate agreement. The average agreement is represented by $K=0.55$, indicating a moderate global agreement. Regarding the ranking of all results (the first being the most accurate compared to the query, the last the worst one), the agreement can also be seen as moderate. The concordance rate was defined between two evaluators for a given pair of results $A/B$ as the probability for A to be better ranked than B by both judges. For each couple of evaluators the mean agreement was computed by averaging ten pairs $result/query$ randomly selected. In order to evaluate the 95% bilateral confidence interval associated with the average concordance rate of each pair of judges, the Student confidence interval estimation method was used. Deviation from normality was reduced by a hyperbolic arc-tangent transformation. The global mean concordance, pooling all judges together, was 0.751 (sd = 0.08). The minimal concordance was equal to 0.73 and the maximal one to 0.88.
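The inter-rater agreement above was computed with scikit-learn's cohen_kappa_score; a dependency-free equivalent can be sketched as follows (the rater labels are invented for illustration):

```python
from collections import Counter

def cohen_kappa(y1, y2):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(y1) == len(y2)
    n = len(y1)
    po = sum(a == b for a, b in zip(y1, y2)) / n           # observed agreement
    c1, c2 = Counter(y1), Counter(y2)
    labels = set(c1) | set(c2)
    pe = sum(c1[l] * c2[l] for l in labels) / (n * n)      # chance agreement from marginals
    return (po - pe) / (1 - pe)

# two raters scoring 8 results on the bad/partial/full (0/1/2) scale
rater_a = [0, 1, 2, 2, 0, 1, 1, 2]
rater_b = [0, 1, 2, 1, 0, 1, 0, 2]
print(round(cohen_kappa(rater_a, rater_b), 3))  # → 0.628
```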
Regarding the evaluation itself, based on the three-modality scale (bad, partial or full relevance), the models are clearly not equivalent (Figure FIGREF26). The D2V model was rated 80 times as "bad relevance", while the pmra returned badly relevant documents only 24 times. Looking at the result rankings, the mean position for D2V was 14.09 (ranging from 13.98 for JPL to 14.20 for EL). Regarding the pmra, this average position was equal to 6.89 (ranging from 6.47 for EL to 7.23 for SJD). ### Discussion
In this study, the ability of D2V to infer similarity between biomedical abstracts was compared versus the pmra, the algorithm actually used in PubMed. Regarding the string length task, even if trend line slopes are very close to zero, a slight negative correlation is observed between the difference in terms of characters and the scores calculated by PV-DBOW and pmra. This result can be put into perspective. Indeed, it was expected that two abstracts differing in their number of characters are more likely to be different in terms of context. The longer text can treat more subjects with different words (explaining D2V’s results) or be associated with more MeSH labels (explaining pmra's). The analysis of word and stem content did not show any particular correlation between common words/stems and the scores computed by either the D2V models or pmra. The inverse result could have been expected, regarding the way pmra links documents (using common terms between documents). The score brought to the pmra model by the MeSH terms should be quite important in the final scoring formula. However, among all possible couples of words between two documents, only 500 were randomly selected, due to computational limits. A random sampling effect could have led to these results. D2V takes into account many language features such as bi- or trigrams, synonyms, other related meanings and stopwords. No prior knowledge or analysis of the documents is needed. The pmra is based (in addition to words) on the manual MeSH indexing of the document, even if this aspect was not discussed in Lin and Wilbur’s publication. This indexing step is highly time-consuming and employs more than 50 people to assign labels to documents from PubMed. The result displayed on figure FIGREF23 could have been expected for the pmra algorithm, this model using the MeSH terms in the statistical formula used to link documents, as well as elite or eliteness terms.
It was thus expected that two documents sharing a lot of indexing labels would be seen as close by the pmra. However, these MeSH descriptors were only used to select the appropriate parameters used to train the D2V models. The fact that D2V still manages, with the PV-DBOW architecture, to find documents that are close to each other regarding the MeSH indexing demonstrates its ability to capture an article’s subject solely from its abstract and title. Regarding the manual evaluation, the D2V PV-DBOW model was rated far below the pmra model. Its results were seen as inaccurate more than three times as often as those of PubMed's model. Regarding the ranking of the results, the average position of the pmra is centred around 7, while D2V's is around 14. However, the real significance of these results can be put into perspective. Indeed, the agreement between the four annotators is only moderate and no general consensus can be extracted. This study also has some limitations. First, the MeSH indexing of documents on PubMed can occur on full-text data, while both the optimisation of the hyper-parameters and an evaluation task are based on abstracts' indexing. However, this bias should have a limited impact on the results. The indexing being based on the main topics of the documents, these subjects should also be cited in the abstract. About this manual indexing, a bias is brought by the indexers. It is well known in the information retrieval community that intra- and inter-indexer biases exist. As the parameter optimisation step relied only on MeSH terms, it assumed that a model trained on articles’ abstracts can be optimised with MeSH terms which are selected according to the full text of the articles. In other words, this optimisation assumed an abstract is enough to semantically represent the whole text. But this is not completely true. If it were, MeSH terms would not be selected from full texts in the first place.
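The MeSH similarity score defined in the Methods (1 point per shared descriptor, 3 more if it is a major topic, 1 per shared qualifier) can be sketched as follows. Treating "major topic" as a property of the query document's indexing is an assumption, and the MeSH entries below are invented for illustration:

```python
def mesh_score(query_mesh, cand_mesh):
    """MeSH similarity task: +1 per shared descriptor, +3 more if it is a
    major topic of the query (assumed interpretation), +1 per shared
    qualifier. Each entry maps descriptor -> (is_major, set_of_qualifiers)."""
    score = 0
    for desc, (is_major, quals) in query_mesh.items():
        if desc in cand_mesh:
            score += 1
            if is_major:
                score += 3
            score += len(quals & cand_mesh[desc][1])
    return score

# invented indexing for a query document D_x and a candidate C_x_i
query = {"Neoplasms": (True, {"genetics"}), "Humans": (False, set())}
cand  = {"Neoplasms": (False, {"genetics", "therapy"}), "Mice": (False, set())}
print(mesh_score(query, cand))  # → 5: shared descriptor (+1), major topic (+3), shared qualifier (+1)
```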
Also, the principle that a PubMed related articles feature has to return articles sharing a lot of MeSH terms has been followed throughout this work. To go further, as mentioned in the paper presenting D2V, the concatenation of vectors from both PV-DM and PV-DBOW for a single document could lead to a better accuracy. A third model could be designed by merging the two presented here. Another moot point in the text embedding community is the part-of-speech tagging of the text before sending it to the model (during both training and use). This supplementary information could lead to a better understanding of the text, particularly due to the disambiguation of homonyms. ### Conclusion
This study showed that Doc2Vec PV-DBOW, an unsupervised text-embedding technique, can infer similarity between biomedical articles' abstracts. It requires no prior knowledge about the documents, such as text indexing, and is not affected by raw word content or document structure. The algorithm was able to link documents sharing MeSH labels in a similar way to the pmra. The manual evaluation returned very low scores for the D2V PV-DBOW model, but with only moderate agreement between evaluators; more investigation should be carried out to understand this difference between the evaluation based on the MeSH indexing (performed by humans) and the manual evaluation.

Figure 1. Ranking of the five designed document-similarity evaluation tasks.
Figure 2. Analysis of stem, word and length differences between texts brought closer by D2V and pmra. Correlation plot between the stems co-occurrence score (A), words co-occurrence score (B), length difference (C) and scores returned by two D2V architectures (PV-DBOW, blue and PV-DM, orange) or the pmra model (green, normalized values). Outliers with z-score > 3 were discarded from the plot.
Figure 3. Study of both pmra and D2V models regarding their ability to bring closer documents sharing many MeSH labels. A (upper panel): frequency of the different MeSH scores for the pmra, PV-DM and PV-DBOW models. PV-DBOW and pmra are centred on the same value and have a similar distribution, indicating a common ability to link documents regarding their topic; the PV-DM algorithm looks less efficient. B (lower panel): correlation between MeSH scores calculated from the pmra and those from D2V. The slopes of the trend lines support the preceding result, with a slope close to 1 for PV-DBOW while PV-DM only reaches 0.1, indicating a weaker correlation. Outliers with z-score > 3 were discarded from the plot.
Figure 4. Global agreement between four evaluators rating the accuracy of the D2V and pmra models.
Colour scale indicates the strength of the agreement between two annotators, ranging from 0.49 between the two medical doctors SJD and JPL to 0.61 between the two medical data librarians EL and GK.
Figure 5. Pooled ratings of both models, D2V and pmra. The height indicates the number of times each model was rated as a bad, moderate or strong accuracy result by the evaluators. D2V was mostly rated as badly relevant (80 times) while the pmra was mostly rated as of good relevance.
|
The D2V model was rated 80 times as "bad relevance", while the pmra returned badly relevant documents only 24 times.
|
How big are datasets used in experiments?
|
### Introduction
Ultrasound is a widespread technology in speech research for studying tongue movement and speech articulation BIBREF0, thanks to attractive characteristics such as imaging at a reasonably rapid frame rate, which lets researchers visualize subtle and swift gestures of the tongue in real time. Ultrasound is also portable, relatively affordable, and clinically safe and non-invasive BIBREF1. The mid-sagittal view is regularly adopted in ultrasound data as it displays the relative backness, height, and slope of various areas of the tongue. Quantitative analysis of tongue motion requires the tongue contour to be extracted, tracked, and visualized. Manual frame-by-frame tongue contour extraction is a cumbersome, subjective, and error-prone task, and it is not a feasible solution for real-time applications.

In conventional techniques for extracting ultrasound tongue contours, a discrete set of vertices is first annotated near the lower part of the tongue dorsum, defining an initial deformable tongue contour BIBREF2. Then, through an iterative minimization process using image features, the annotated contour is pulled toward the tongue dorsum region. For instance, in the active contour model technique (e.g., the EdgeTrak software) BIBREF3, BIBREF4, internal and external energy functions are minimized over the image gradient. The need for feature extraction on each image and for accurate initialization are the two main drawbacks of these classical techniques. An alternative is to use semi-supervised machine learning models for automatic segmentation of tongue contour regions, after which tongue contours are extracted automatically in post-processing stages. Semi-supervised machine-learning-based methods BIBREF5 were first utilized for ultrasound tongue contour segmentation in a study by BIBREF6, BIBREF7, while deep learning models emerged in this field through studies by BIBREF0, BIBREF8.
They fine-tuned the pre-trained decoder part of a Deep Belief Network (DBN) model to infer tongue contours from new instances. End-to-end supervised deep learning techniques outperformed these earlier approaches in recent years; for example, U-Net BIBREF9 has been used for automatic ultrasound tongue extraction BIBREF10, BIBREF11. Following the success of deep learning methods, advanced techniques for tongue contour extraction now focus on generalization and real-time performance BIBREF12, BIBREF13, BIBREF14. Although deep learning methods have been utilized successfully in many studies, manual annotation of ultrasound tongue databases is still cumbersome, and the performance of supervised methods depends heavily on the accuracy of the annotated database as well as on the number of available samples. Available databases in this field have been annotated by linguistics experts over many years by placing landmark points on the tongue contours.

In this work, we propose a new direction for the problem of ultrasound tongue contour extraction using a deep learning technique in which, instead of tracking the tongue surface, landmarks on the tongue are tracked. In this way, researchers can use previously available linguistics ultrasound tongue databases. Moreover, the whole process of tongue contour extraction is performed in one step, which increases speed without compromising the accuracy or generalization ability of previous techniques. ### Methodology
Similar to facial landmark detection methods BIBREF15, we treat tongue contour extraction as a simple landmark detection and tracking problem. We first developed a customized annotator that can extract equally separated or randomized markers from segmented tongue contours in different databases; the same software can fit B-spline curves on the extracted markers to revert the process for evaluation purposes. To track landmarks on the tongue surface, we designed a light-weight deep convolutional neural network named TongueNet. Figure FIGREF1 shows the TongueNet architecture. Each layer applies convolutional operations followed by ReLU activations as the non-linearity, with batch-normalization layers to improve regularization, convergence, and accuracy. For better generalization, the last two fully connected layers are equipped with dropout at $50\%$. To find the optimum number of points in the output layer, we varied the number of points ($\#$ in Figure FIGREF1) from 5 to 100 (see Figure FIGREF2 for samples of this experiment with 5, 10, 15, 20, 25, and 30 output points). ### Experimental Results and Discussion
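As a concrete illustration of the TongueNet architecture just described (Conv, ReLU, BatchNorm blocks, then two dropout-regularized dense layers, and an output of 2 x #points coordinates), here is a minimal Keras sketch. The text does not give exact filter counts, kernel sizes, or input resolution, so all of those are assumptions:

```python
# Sketch of a TongueNet-style landmark regressor. Filter counts, kernel
# sizes, and the 128x128 input are illustrative assumptions; only the
# block structure and the 50% dropout follow the paper's description.
from tensorflow import keras
from tensorflow.keras import layers

def build_tonguenet(input_shape=(128, 128, 1), n_points=10):
    model = keras.Sequential([keras.Input(shape=input_shape)])
    for filters in (16, 32, 64):               # illustrative depth/widths
        model.add(layers.Conv2D(filters, 3, padding="same"))
        model.add(layers.ReLU())               # non-linearity
        model.add(layers.BatchNormalization()) # regularization/convergence
        model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dropout(0.5))             # as described in the paper
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(2 * n_points))      # flattened (x, y) landmarks
    return model

model = build_tonguenet(n_points=10)
```

With ten tracked points, the network regresses a 20-dimensional vector of (x, y) coordinates per frame.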
There is usually a trade-off between the number of training samples and the number of trainable parameters in a deep network BIBREF16: in general, the more data we have, the better the results generated by supervised deep learning methods. Data augmentation helps to increase the amount of training data, but a bigger dataset needs a better, and most likely bigger, network architecture to generalize; otherwise, the model may over-fit or under-fit the training data. Using our annotation software, we automatically extracted landmarks for 2000 images from the UOttawa database BIBREF14 that had been annotated for image segmentation tasks. The database was randomly divided into three sets: 90$\%$ training, 5$\%$ validation, and 5$\%$ testing. For testing, we also applied TongueNet to the UBC database BIBREF14 without any training on it, to assess the generalization ability of the model. During training we employed online data augmentation, including rotation (-25 to 25 degrees), translation (-30 to 30 pixels in two directions), scaling (0.5 to 2 times), horizontal flipping, and combinations of these transformations; the annotated landmark locations were transformed correspondingly. From an extensive random-search hyper-parameter tuning, the learning rate, number of iterations, mini-batch size, and number of epochs were set to 0.0005, 1000, 30, and 10, respectively. We deployed our experiments using Keras with TensorFlow as the backend on a Windows PC with a 4.2 GHz Core i7, one NVIDIA 1080 GPU, and 32 GB of RAM. Adam optimization with a fixed momentum value of 0.9 was utilized for training.

We trained and tested TongueNet with different numbers of points as the output size. We first evaluated the scenario of equally spaced landmarks on the tongue surface, capturing all points at equal distances from their neighbors.
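The point-consistent augmentation described above requires that whenever a frame is transformed, the landmark coordinates are passed through the same transform. A minimal numpy sketch for the rotation case (the helper name and the fixed rotation center are assumptions, not part of the paper):

```python
# Illustrative sketch: rotate (N, 2) landmark coordinates by the same
# angle applied to the image, about a chosen center. Translation,
# scaling, and flipping would be handled with analogous affine maps.
import numpy as np

def rotate_points(points, angle_deg, center):
    """Rotate an (N, 2) array of (x, y) landmarks about `center`."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return (points - center) @ rot.T + center

pts = np.array([[10.0, 0.0], [0.0, 10.0]])
rotated = rotate_points(pts, 90.0, center=np.array([0.0, 0.0]))
```

Applying the identical matrix to both image and annotations keeps the landmark supervision consistent under every augmented sample.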
Our results, from selecting five points up to the number of pixels across the image width, revealed that equally spaced points do not give significant results. Figure FIGREF3 shows randomly selected frames from real-time tracking with TongueNet using ultrasound tongue landmarks. As can be seen from the figure, automatic selection of equally spaced annotation points, as used in many previous studies (see BIBREF17 for an example), cannot provide accurate results. In a separate experiment, we extracted annotation points from the same database using randomly spaced selection on the tongue contours, with the restrictions that points be at a minimum distance from each other and that outliers be omitted from the database. We found the optimum number of points to be ten. Figure FIGREF4 shows some randomly selected results from training TongueNet on the randomly spaced point database; the better accuracy of TongueNet can be seen qualitatively. Note that we did not apply any image enhancement or cropping to the databases.

To quantify TongueNet's performance, we first fitted B-spline curves, using the OpenCV library, to the outputs of TongueNet. We then compared the mean sum of distances (MSD) BIBREF8 for the TongueNet, sUNET BIBREF11, UNET BIBREF18, BowNet BIBREF13, and IrisNet BIBREF14 deep learning models. From Table TABREF6, TongueNet reaches results similar to the state-of-the-art deep learning models in this field. Note that there are some approximation errors from the curve-fitting procedure for TongueNet and from the skeletonization process used to extract tongue contours from the segmentation results of the other deep learning models. We also tested TongueNet on a new database from UBC (the same database used in BIBREF14) to evaluate the generalization ability of the landmark tracking technique.
Although TongueNet was not trained on that database, it predicted favorable instances for video frames with different data distributions, which shows its ability to manage over-fitting. From Table TABREF6, the differences in MSD values between models are not significant, although IrisNet achieves the best MSD value. In terms of speed, however, TongueNet outperforms the other deep learning models, even though post-processing time was not included in their frame-rate calculations. ### Conclusion and Discussion
In this paper, we presented TongueNet, a simple deep learning architecture with a novel training approach for the problem of tongue contour extraction and tracking. Unlike similar studies, we track several points on the tongue surface instead of the whole tongue contour region. In recent quantitative tongue contour tracking studies, a two-phase procedure is performed, comprising image segmentation and post-processing for extraction of the tongue contour, which increases computational costs. Furthermore, previously annotated tongue databases from articulation studies could not be utilized for deep learning techniques until now. With TongueNet, we provide a new tool for this literature, and older databases can now be used for developing better tongue contour tracking techniques. All landmark locations in this study were annotated automatically with two different approaches, and we found that data augmentation plays a significant role in the accuracy of TongueNet. From our experimental results, we anticipate that if an extensive manually annotated database, possibly a combination of several databases, were employed for training a deep learning model such as TongueNet, the accuracy of the model would be boosted considerably. The materials of this study will help researchers in fields such as linguistics to study tongue gestures in real time more easily, more accessibly, and with higher accuracy than previous methods. The current early-stage TongueNet technique needs to be developed, trained, and extended as a fast, accurate, real-time, automatic method applicable to available ultrasound tongue databases. Fig. 1. An overview of the network architecture. The output of the network is a vector comprising the spatial locations of individual points on the tongue surface, where # indicates the number of points in the output. Fig. 2. Sample frames from the experiment testing different numbers of points in the output of TongueNet. Fig. 3.
Sample results from testing images annotated with points equally spaced on the tongue surface across the width of the image; in this image the space is separated by 10 equally dispersed vertical lines. Fig. 4. Instances generated by TongueNet using automatically, randomly selected landmarks on the tongue surface. Fig. 5. Randomly selected frames from applying TongueNet to a new database without training on it. Table 1. Results of the comparison study for several deep learning models on the same database. Except for TongueNet, to calculate MSD values, tongue contours were extracted from segmented instances using a post-processing method.
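The MSD comparison reported in Table 1 can be sketched as follows. The assumption here is that MSD is taken as the symmetric mean nearest-point distance between predicted and ground-truth contours; the exact definition in BIBREF8 may differ in detail:

```python
# Sketch of a mean-sum-of-distances style contour metric (assumed form:
# symmetric mean nearest-point distance; the cited BIBREF8 definition
# may normalise differently).
import numpy as np

def mean_min_dist(a, b):
    """Mean, over points of `a`, of the distance to the nearest point of `b`."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (len(a), len(b))
    return d.min(axis=1).mean()

def msd(pred, truth):
    """Symmetrized: average of both directed mean nearest-point distances."""
    return 0.5 * (mean_min_dist(pred, truth) + mean_min_dist(truth, pred))

# Toy contours: a prediction shifted one pixel above the ground truth
# should score an MSD of exactly 1.0 under this definition.
truth = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred = truth + np.array([0.0, 1.0])
```

In practice the B-spline curves fitted to TongueNet's landmarks and the skeletonized segmentation outputs would each be densely sampled before being passed to such a metric.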
|
2000 images
|
Who was the editor for The New Yorker when Shawn died?
A. Brown
B. Ross
C. Mehta
D. Breenan
|
Goings On About Town One of the funniest moments in Brendan Gill's 1975 memoir, Here at "The New Yorker ," comes during a luncheon at the now vanished Ritz in Manhattan. At the table are Gill; William Shawn, then editor of The New Yorker ; and the reclusive English writer Henry Green. Green's new novel, Loving , has just received a very favorable review in The New Yorker . Shawn--"with his usual hushed delicacy of speech and manner"--inquires of the novelist whether he could possibly reveal what prompted the creation of such an exquisite work. Green obliges. "I once asked an old butler in Ireland what had been the happiest times of his life," he says. "The butler replied, 'Lying in bed on Sunday morning, eating tea and toast with cunty fingers.' " This was not the explanation Shawn was expecting, Gill tells us. "Discs of bright red begin to burn in his cheeks." Was Shawn blushing out of prudishness, as we are meant to infer? This was, after all, a man renowned for his retiring propriety, a man who sedulously barred anything smacking of the salacious--from lingerie ads to four-letter words--from the magazine he stewarded from 1952 until 1987, five years before his death. But after reading these two new memoirs about Shawn, I wonder. "He longed for the earthiest and wildest kinds of sexual adventures," Lillian Ross discloses in hers, adding that he lusted after Hannah Arendt, Evonne Goolagong, and Madonna. As for Ved Mehta, he reports that Shawn's favorite thing to watch on television was "people dancing uninhibitedly" ( Soul Train , one guesses). I suspect Shawn did not blush at the "cunty fingers" remark out of prudery. He blushed because it had hit too close to home. Both these memoirs must be read by everyone--everyone, that is, who takes seriously the important business of sorting out precisely how he or she feels about The New Yorker , then and now. Of the two, Mehta's is far and away the more entertaining. 
This may seem odd, for Mehta is reputed to be a very dull writer whereas Ross is a famously zippy one. Moreover, Mehta writes as Shawn's adoring acolyte, whereas Ross writes as his longtime adulterous lover. Just knowing that Mrs. Shawn is still alive adds a certain tension to reading much of what this Other Woman chooses to divulge. Evidently, "Bill" and Lillian loved each other with a fine, pure love, a love that was more than love, a love coveted by the winged seraphs of heaven. "We had indeed become one," she tells us, freely venting the inflations of her heart. Shawn was managing editor of The New Yorker when he hired Ross in 1945 as the magazine's second woman reporter (the first was Andy Logan). He was short and balding but had pale blue eyes to die for. As for Ross, "I was aware of the fact that I was not unappealing." During a late-night editorial session, she says, Shawn blurted out his love. A few weeks later at the office, their eyes met. Without a word--even, it seems, to the cab driver--they hied uptown to the Plaza, where matters were consummated. Thereafter, the couple set up housekeeping together in an apartment 20 blocks downtown from the Shawn residence on upper Fifth Avenue and stoically endured the sufferings of Shawn's wife, who did not want a divorce. Now, Ross seems like a nice lady, and I certainly have nothing against adultery, which I hear is being carried on in the best circles these days. But the public flaunting of adultery--especially when spouses and children are around--well, it brings out the bourgeois in me. It also made me feel funny about William Shawn, whom I have always regarded as a great man. I loved his New Yorker . The prose it contained--the gray stuff around the cartoons--was balm for the soul: unfailingly clear, precise, logical, and quietly stylish. So what if the articles were occasionally boring? 
It was a sweet sort of boredom, serene and restorative, not at all like the kind induced by magazines today, which is more akin to nervous exhaustion. Besides, the moral tone of the magazine was almost wholly admirable--it was ahead of the pack on Hiroshima, civil rights, Vietnam, Watergate, the environment--and this was very much Shawn's doing. I do not like to think of him in an illicit love nest, eating tea and toast with cunty fingers. Happily, Ross has sprinkled her memoir with clues that it is not to be taken as entirely factual. To say that Shawn was "a man who grieved over all living creatures" is forgivable hyperbole; but later to add that he "mourned" for Si Newhouse when Newhouse unceremoniously fired him in 1987 (a couple of years after buying the magazine)--well, that's a bit much. Even Jesus had his limits. Elsewhere, Ross refers to her lover's "very powerful masculinity," only to note on the very next page that "if he suffered a paper cut on a finger and saw blood, he would come into my office, looking pale." She declares that "Bill was incapable of engendering a cliché, in deed as well as in word." But then she puts the most toe-curling clichés into his mouth: "Why am I more ghost than man?" Or: "We must arrest our love in midflight. And we fix it forever as of today, a point of pure light that will reach into eternity." (File that under Romantic Effusions We Doubt Ever Got Uttered.) Nor is Ross incapable of a melodramatic cliché herself. "Why can't we just live, just live ?" she cries in anguish when she and Shawn, walking hand in hand out of Central Park, chance to see Shawn's wife slowly making her way down the block with a burden of packages. And what does she think of Mrs. Shawn? "I found her to be sensitive and likeable." Plus, she could "do a mean Charleston." There is nothing more poignant than the image of an openly cheated-upon and humiliated wife doing "a mean Charleston." 
William Shawn's indispensability as an editor is amply manifest in Ross' memoir. Word repetition? "Whatever reporting Bill asked me to do turned out to be both challenging and fun. ... For me, reporting and writing for the magazine was fun, pure fun. ... It was never 'work' for me. It was fun." Even in praising his skill as an editor, she betrays the presence of its absence. "All writers, of course, have needed the one called the 'editor,' who singularly, almost mystically, embodies the many-faceted, unique life force infusing the entire enchilada." Nice touch, that enchilada. When cocktail party malcontents mocked Shawn's New Yorker in the late '70s and early '80s, they would make fun of such things as E.J. Kahn's five-part series on "Grains of the World" or Elizabeth Drew's supposedly soporific reporting from Washington. But Ved Mehta was always the butt of the worst abuse. Shawn was allowing him to publish an autobiography in the pages of the magazine that was mounting up to millions of words over the years, and the very idea of it seemed to bore people silly. After the publication of two early installments, "Daddyji" and "Mamaji," each the length of a book, one critic cried: "Enoughji!" But it kept coming. And I, for one, was grateful. Here was a boy growing up in Punjab during the fall of the Raj and the Partition, a boy who had been blinded by meningitis at the age of 3, roller-skating through the back streets of Lahore as Sikhs slaughtered Hindus and Hindus slaughtered Muslims and civilization was collapsing and then, decades later, having made his way from India to an Arkansas school for the blind to Balliol College, Oxford, to The New Yorker , re-creating the whole thing in Proustian detail and better-than-Proustian prose ... ! Mehta's multivolume autobiography, titled Continents of Exile , has loss as its overarching theme: loss of sight, of childhood, of home and country, and now--with this volume--loss of Mr. Shawn's New Yorker . 
The memoir takes us from the time the author was hired as a staff writer in the early '60s up to 1994, when he was "terminated" by the loathed Tina Brown in her vandalization of his cherished magazine. Mehta evidently loved William Shawn at least as much as Lillian Ross did, although his love was not requited in the same way. He likens the revered editor to the character Prince Myshkin in The Idiot : innocent and vulnerable, someone who must be protected. And long-suffering, one might infer: "He was so careful of not hurting anyone's feelings that he often listened to utterly fatuous arguments for hours on end." Like Ross, Mehta struggles to express William Shawn's ineffable virtues. "It is as if, Mehta, he were beyond our human conception," Janet Flanner tells him once to calm him down. At times I wondered whether the author, in his ecstasies of devotion, had not inadvertently committed plagiarism. His words on Mr. Shawn sound suspiciously like those of Mr. Pooter on his boss Mr. Perkupp in The Diary of a Nobody . Compare. Mehta on Shawn: "His words were so generous that I could scarcely find my tongue, even to thank him." Pooter on Perkupp: "My heart was too full to thank him." Mehta: "I started saying to myself compulsively, 'I wish Mr. Shawn would ring,' at the oddest times of the day or night. ... How I longed for the parade of proofs, the excitement of rewriting and perfecting!" Pooter: "Mr. Perkupp, I will work night and day to serve you!" I am not sure I have made it sound this way so far, but Mehta's book is completely engrossing--the most enjoyable book, I think, I have ever reviewed. It oozes affection and conviction, crackles with anger, and is stuffed with thumping good stories. Many are about Mehta's daft colleagues at The New Yorker , such as the guy in the next office: His door was always shut, but I could hear him through the wall that separated his cubicle from mine typing without pause. ... 
Even the changing of the paper in the typewriter seemed somehow to be incorporated into the rhythmic rat-tat-tat ... year after year went by to the sound of his typing but without a word from his typewriter appearing in the magazine. Or the great and eccentric Irish writer Maeve Breenan, who fetched up as a bag lady. Or the legendary St. Clair McKelway, whose decisive breakdown came when he hailed a cab and prevailed upon the driver to take him to the New Yorker office at 24 West 43 rd St. "O.K., Mac, if that's what you want." He was in Boston at the time. (McKelway later told Mehta that if the cabby had not called him "Mac," his nickname, an alarm might have gone off in his head.) Mehta's writerly persona, a disarming mixture of the feline and the naive, is perfect for relating the little scandals that worried The New Yorker in the late '70s (plagiarism, frozen turbot), the drama of finding a worthy candidate to succeed the aging Shawn as editor, the purchase of the magazine by the evil Si Newhouse ("We all took fright") and the resultant plague of Gottliebs and Florios visited upon it, and what he sees as the final debacle: Tinaji. Lillian Ross, by contrast, takes a rather cheerful view of the Brown dispensation. Indeed, the new editor even coaxed Ross into re-joining the magazine, just as she was booting Mehta out. "I found that she possessed--under the usual disguises--her own share of Bill's kind of naivete, insight, and sensitivity," Ross says of Brown. "She, too, 'got it.' " A few months after Brown was appointed editor, Shawn died at the age of 85. He had long since stopped reading his beloved magazine, in sorrow and relief. That's if you believe Mehta. Ross assures us that Mr. Shawn was reading Tina Brown's New Yorker "with new interest" in the weeks prior to his death. Has Tina Brown betrayed the legacy of William Shawn, as Mehta fiercely believes, or has she continued and built upon it, as Ross is evidently convinced? 
Have the changes she has wrought enlivened a stodgy magazine or vulgarized a dignified one--or both? These are weighty questions, and one is of course loath to compromise one's life chances by hazarding unripe opinions in a public forum such as this.
|
A. Brown
|
What is Hilary's tone described as "dark" when he remarks that there will be people interested in using his before-shave lotion?
A. He senses that Donald is going to dismiss the idea because it is too costly
B. He senses that Donald is scheming to patent the idea for his own profiteering
C. He senses that Donald is beginning to understand his malicious intent for the before-shave lotion
D. He senses that Donald is underestimating the potential of his good idea
|
Fallout is, of course, always disastrous— one way or another JUNIOR ACHIEVEMENT BY WILLIAM LEE ILLUSTRATED BY SCHOENHERR "What would you think," I asked Marjorie over supper, "if I should undertake to lead a junior achievement group this summer?" She pondered it while she went to the kitchen to bring in the dessert. It was dried apricot pie, and very tasty, I might add. "Why, Donald," she said, "it could be quite interesting, if I understand what a junior achievement group is. What gave you the idea?" "It wasn't my idea, really," I admitted. "Mr. McCormack called me to the office today, and told me that some of the children in the lower grades wanted to start one. They need adult guidance of course, and one of the group suggested my name." I should explain, perhaps, that I teach a course in general science in our Ridgeville Junior High School, and another in general physics in the Senior High School. It's a privilege which I'm sure many educators must envy, teaching in Ridgeville, for our new school is a fine one, and our academic standards are high. On the other hand, the fathers of most of my students work for the Commission and a constant awareness of the Commission and its work pervades the town. It is an uneasy privilege then, at least sometimes, to teach my old-fashioned brand of science to these children of a new age. "That's very nice," said Marjorie. "What does a junior achievement group do?" "It has the purpose," I told her, "of teaching the members something about commerce and industry. They manufacture simple compositions like polishing waxes and sell them from door-to-door. Some groups have built up tidy little bank accounts which are available for later educational expenses." "Gracious, you wouldn't have to sell from door-to-door, would you?" "Of course not. I'd just tell the kids how to do it." 
Marjorie put back her head and laughed, and I was forced to join her, for we both recognize that my understanding and "feel" for commercial matters—if I may use that expression—is almost nonexistent. "Oh, all right," I said, "laugh at my commercial aspirations. But don't worry about it, really. Mr. McCormack said we could get Mr. Wells from Commercial Department to help out if he was needed. There is one problem, though. Mr. McCormack is going to put up fifty dollars to buy any raw materials wanted and he rather suggested that I might advance another fifty. The question is, could we do it?" Marjorie did mental arithmetic. "Yes," she said, "yes, if it's something you'd like to do." We've had to watch such things rather closely for the last ten—no, eleven years. Back in the old Ridgeville, fifty-odd miles to the south, we had our home almost paid for, when the accident occurred. It was in the path of the heaviest fallout, and we couldn't have kept on living there even if the town had stayed. When Ridgeville moved to its present site, so, of course, did we, which meant starting mortgage payments all over again. Thus it was that on a Wednesday morning about three weeks later, I was sitting at one end of a plank picnic table with five boys and girls lined up along the sides. This was to be our headquarters and factory for the summer—a roomy unused barn belonging to the parents of one of the group members, Tommy Miller. "O.K.," I said, "let's relax. You don't need to treat me as a teacher, you know. I stopped being a school teacher when the final grades went in last Friday. I'm on vacation now. My job here is only to advise, and I'm going to do that as little as possible. You're going to decide what to do, and if it's safe and legal and possible to do with the starting capital we have, I'll go along with it and help in any way I can. This is your meeting." Mr. McCormack had told me, and in some detail, about the youngsters I'd be dealing with. 
The three who were sitting to my left were the ones who had proposed the group in the first place. Doris Enright was a grave young lady of ten years, who might, I thought, be quite a beauty in a few more years, but was at the moment rather angular—all shoulders and elbows. Peter Cope, Jr. and Hilary Matlack were skinny kids, too. The three were of an age and were all tall for ten-year-olds. I had the impression during that first meeting that they looked rather alike, but this wasn't so. Their features were quite different. Perhaps from association, for they were close friends, they had just come to have a certain similarity of restrained gesture and of modulated voice. And they were all tanned by sun and wind to a degree that made their eyes seem light and their teeth startlingly white. The two on my right were cast in a different mold. Mary McCready was a big husky redhead of twelve, with a face full of freckles and an infectious laugh, and Tommy Miller, a few months younger, was just an average, extroverted, well adjusted youngster, noisy and restless, tee-shirted and butch-barbered. The group exchanged looks to see who would lead off, and Peter Cope seemed to be elected. "Well, Mr. Henderson, a junior achievement group is a bunch of kids who get together to manufacture and sell things, and maybe make some money." "Is that what you want to do," I asked, "make money?" "Why not?" Tommy asked. "There's something wrong with making money?" "Well, sure, I suppose we want to," said Hilary. "We'll need some money to do the things we want to do later." "And what sort of things would you like to make and sell?" I asked. The usual products, of course, with these junior achievement efforts, are chemical specialties that can be made safely and that people will buy and use without misgivings—solvent to free up rusty bolts, cleaner to remove road tar, mechanic's hand soap—that sort of thing. Mr. 
McCormack had told me, though, that I might find these youngsters a bit more ambitious. "The Miller boy and Mary McCready," he had said, "have exceptionally high IQ's—around one forty or one fifty. The other three are hard to classify. They have some of the attributes of exceptional pupils, but much of the time they seem to have little interest in their studies. The junior achievement idea has sparked their imaginations. Maybe it'll be just what they need." Mary said, "Why don't we make a freckle remover? I'd be our first customer." "The thing to do," Tommy offered, "is to figure out what people in Ridgeville want to buy, then sell it to them." "I'd like to make something by powder metallurgy techniques," said Pete. He fixed me with a challenging eye. "You should be able to make ball bearings by molding, then densify them by electroplating." "And all we'd need is a hydraulic press," I told him, "which, on a guess, might cost ten thousand dollars. Let's think of something easier." Pete mulled it over and nodded reluctantly. "Then maybe something in the electronics field. A hi-fi sub-assembly of some kind." "How about a new detergent?" Hilary put in. "Like the liquid dishwashing detergents?" I asked. He was scornful. "No, they're formulations—you know, mixtures. That's cookbook chemistry. I mean a brand new synthetic detergent. I've got an idea for one that ought to be good even in the hard water we've got around here." "Well, now," I said, "organic synthesis sounds like another operation calling for capital investment. If we should keep the achievement group going for several summers, it might be possible later on to carry out a safe synthesis of some sort. You're Dr. Matlack's son, aren't you? Been dipping into your father's library?" "Some," said Hilary, "and I've got a home laboratory." "How about you, Doris?" I prompted. "Do you have a special field of interest?" "No." She shook her head in mock despondency. "I'm not very technical. Just sort of miscellaneous. 
But if the group wanted to raise some mice, I'd be willing to turn over a project I've had going at home." "You could sell mice?" Tommy demanded incredulously. "Mice," I echoed, then sat back and thought about it. "Are they a pure strain? One of the recognized laboratory strains? Healthy mice of the right strain," I explained to Tommy, "might be sold to laboratories. I have an idea the Commission buys a supply every month." "No," said Doris, "these aren't laboratory mice. They're fancy ones. I got the first four pairs from a pet shop in Denver, but they're red—sort of chipmunk color, you know. I've carried them through seventeen generations of careful selection." "Well, now," I admitted, "the market for red mice might be rather limited. Why don't you consider making an after-shave lotion? Denatured alcohol, glycerine, water, a little color and perfume. You could buy some bottles and have some labels printed. You'd be in business before you knew it." There was a pause, then Tommy inquired, "How do you sell it?" "Door-to-door." He made a face. "Never build up any volume. Unless it did something extra. You say we'd put color in it. How about enough color to leave your face looking tanned. Men won't use cosmetics and junk, but if they didn't have to admit it, they might like the shave lotion." Hilary had been deep in thought. He said suddenly, "Gosh, I think I know how to make a—what do you want to call it—a before-shave lotion." "What would that be?" I asked. "You'd use it before you shaved." "I suppose there might be people who'd prefer to use it beforehand," I conceded. "There will be people," he said darkly, and subsided. Mrs. Miller came out to the barn after a while, bringing a bucket of soft drinks and ice, a couple of loaves of bread and ingredients for a variety of sandwiches. The parents had agreed to underwrite lunches at the barn and Betty Miller philosophically assumed the role of commissary officer. 
She paused only to say hello and to ask how we were progressing with our organization meeting. I'd forgotten all about organization, and that, according to all the articles I had perused, is most important to such groups. It's standard practice for every member of the group to be a company officer. Of course a young boy who doesn't know any better may wind up a sales manager. Over the sandwiches, then, I suggested nominating company officers, but they seemed not to be interested. Peter Cope waved it off by remarking that they'd each do what came naturally. On the other hand, they pondered at some length about a name for the organization, without reaching any conclusions, so we returned to the problem of what to make. It was Mary, finally, who advanced the thought of kites. At first there was little enthusiasm, then Peter said, "You know, we could work up something new. Has anybody ever seen a kite made like a wind sock?" Nobody had. Pete drew figures in the air with his hands. "How about the hole at the small end?" "I'll make one tonight," said Doris, "and think about the small end. It'll work out all right." I wished that the youngsters weren't starting out by inventing a new article to manufacture, and risking an almost certain disappointment, but to hold my guidance to the minimum, I said nothing, knowing that later I could help them redesign it along standard lines. At supper I reviewed the day's happenings with Marjorie and tried to recall all of the ideas which had been propounded. Most of them were impractical, of course, for a group of children to attempt, but several of them appeared quite attractive. Tommy, for example, wanted to put tooth powder into tablets that one would chew before brushing the teeth. He thought there should be two colors in the same bottle—orange for morning and blue for night, the blue ones designed to leave the mouth alkaline at bed time. Pete wanted to make a combination nail and wood screw. 
You'd drive it in with a hammer up to the threaded part, then send it home with a few turns of a screwdriver. Hilary, reluctantly forsaking his ideas on detergents, suggested we make black plastic discs, like poker chips but thinner and as cheap as possible, to scatter on a snowy sidewalk where they would pick up extra heat from the sun and melt the snow more rapidly. Afterward one would sweep up and collect the discs. Doris added to this that if you could make the discs light enough to float, they might be colored white and spread on the surface of a reservoir to reduce evaporation. These latter ideas had made unknowing use of some basic physics, and I'm afraid I relapsed for a few minutes into the role of teacher and told them a little bit about the laws of radiation and absorption of heat. "My," said Marjorie, "they're really smart boys and girls. Tommy Miller does sound like a born salesman. Somehow I don't think you're going to have to call in Mr. Wells." I do feel just a little embarrassed about the kite, even now. The fact that it flew surprised me. That it flew so confoundedly well was humiliating. Four of them were at the barn when I arrived next morning; or rather on the rise of ground just beyond it, and the kite hung motionless and almost out of sight in the pale sky. I stood and watched for a moment, then they saw me. "Hello, Mr. Henderson," Mary said, and proffered the cord which was wound on a fishing reel. I played the kite up and down for a few minutes, then reeled it in. It was, almost exactly, a wind sock, but the hole at the small end was shaped—by wire—into the general form of a kidney bean. It was beautifully made, and had a sort of professional look about it. "It flies too well," Mary told Doris. "A kite ought to get caught in a tree sometimes." "You're right," Doris agreed. "Let's see it." She gave the wire at the small end the slightest of twists. "There, it ought to swoop." 
Sure enough, in the moderate breeze of that morning, the kite swooped and yawed to Mary's entire satisfaction. As we trailed back to the barn I asked Doris, "How did you know that flattening the lower edge of the hole would create instability?" She looked doubtful. "Why it would have to, wouldn't it? It changed the pattern of air pressures." She glanced at me quickly. "Of course, I tried a lot of different shapes while I was making it." "Naturally," I said, and let it go at that. "Where's Tommy?" "He stopped off at the bank," Pete Cope told me, "to borrow some money. We'll want to buy materials to make some of these kites." "But I said yesterday that Mr. McCormack and I were going to advance some cash to get started." "Oh, sure, but don't you think it would be better to borrow from a bank? More businesslike?" "Doubtless," I said, "but banks generally want some security." I would have gone on and explained matters further, except that Tommy walked in and handed me a pocket check book. "I got two hundred and fifty," he volunteered—not without a hint of complacency in his voice. "It didn't take long, but they sure made it out a big deal. Half the guys in the bank had to be called in to listen to the proposition. The account's in your name, Mr. Henderson, and you'll have to make out the checks. And they want you to stop in at the bank and give them a specimen signature. Oh, yes, and cosign the note." My heart sank. I'd never had any dealings with banks except in the matter of mortgages, and bank people make me most uneasy. To say nothing of finding myself responsible for a two-hundred-and-fifty-dollar note—over two weeks salary. I made a mental vow to sign very few checks. "So then I stopped by at Apex Stationers," Tommy went on, "and ordered some paper and envelopes. We hadn't picked a name yesterday, but I figured what's to lose, and picked one. Ridge Industries, how's that?" Everybody nodded. "Just three lines on the letterhead," he explained. 
"Ridge Industries—Ridgeville—Montana." I got my voice back and said, "Engraved, I trust." "Well, sure," he replied. "You can't afford to look chintzy." My appetite was not at its best that evening, and Marjorie recognized that something was concerning me, but she asked no questions, and I only told her about the success of the kite, and the youngsters embarking on a shopping trip for paper, glue and wood splints. There was no use in both of us worrying. On Friday we all got down to work, and presently had a regular production line under way; stapling the wood splints, then wetting them with a resin solution and shaping them over a mandrel to stiffen, cutting the plastic film around a pattern, assembling and hanging the finished kites from an overhead beam until the cement had set. Pete Cope had located a big roll of red plastic film from somewhere, and it made a wonderful-looking kite. Happily, I didn't know what the film cost until the first kites were sold. By Wednesday of the following week we had almost three hundred kites finished and packed into flat cardboard boxes, and frankly I didn't care if I never saw another. Tommy, who by mutual consent, was our authority on sales, didn't want to sell any until we had, as he put it, enough to meet the demand, but this quantity seemed to satisfy him. He said he would sell them the next week and Mary McCready, with a fine burst of confidence, asked him in all seriousness to be sure to hold out a dozen. Three other things occurred that day, two of which I knew about immediately. Mary brought a portable typewriter from home and spent part of the afternoon banging away at what seemed to me, since I use two fingers only, a very creditable speed. And Hilary brought in a bottle of his new detergent. It was a syrupy yellow liquid with a nice collar of suds. He'd been busy in his home laboratory after all, it seemed. "What is it?" I asked. "You never told us." Hilary grinned. 
"Lauryl benzyl phosphonic acid, dipotassium salt, in 20% solution." "Goodness," I protested, "it's been twenty-five years since my last course in chemistry. Perhaps if I saw the formula—." He gave me a singularly adult smile and jotted down a scrawl of symbols and lines. It meant little to me. "Is it good?" For answer he seized the ice bucket, now empty of its soda bottles, trickled in a few drops from the bottle and swished the contents. Foam mounted to the rim and spilled over. "And that's our best grade of Ridgeville water," he pointed out. "Hardest in the country." The third event of Wednesday came to my ears on Thursday morning. I was a little late arriving at the barn, and was taken a bit aback to find the roadway leading to it rather full of parked automobiles, and the barn itself rather full of people, including two policemen. Our Ridgeville police are quite young men, but in uniform they still look ominous and I was relieved to see that they were laughing and evidently enjoying themselves. "Well, now," I demanded, in my best classroom voice. "What is all this?" "Are you Henderson?" the larger policeman asked. "I am indeed," I said, and a flash bulb went off. A young lady grasped my arm. "Oh, please, Mr. Henderson, come outside where it's quieter and tell me all about it." "Perhaps," I countered, "somebody should tell me." "You mean you don't know, honestly? Oh, it's fabulous. Best story I've had for ages. It'll make the city papers." She led me around the corner of the barn to a spot of comparative quiet. "You didn't know that one of your junior whatsisnames poured detergent in the Memorial Fountain basin last night?" I shook my head numbly. "It was priceless. Just before rush hour. Suds built up in the basin and overflowed, and down the library steps and covered the whole street. And the funniest part was they kept right on coming. You couldn't imagine so much suds coming from that little pool of water. 
There was a three-block traffic jam and Harry got us some marvelous pictures—men rolling up their trousers to wade across the street. And this morning," she chortled, "somebody phoned in an anonymous tip to the police—of course it was the same boy that did it—Tommy—Miller?—and so here we are. And we just saw a demonstration of that fabulous kite and saw all those simply captivating mice." "Mice?" "Yes, of course. Who would ever have thought you could breed mice with those cute furry tails?" Well, after a while things quieted down. They had to. The police left after sobering up long enough to give me a serious warning against letting such a thing happen again. Mr. Miller, who had come home to see what all the excitement was, went back to work and Mrs. Miller went back to the house and the reporter and photographer drifted off to file their story, or whatever it is they do. Tommy was jubilant. "Did you hear what she said? It'll make the city papers. I wish we had a thousand kites. Ten thousand. Oh boy, selling is fun. Hilary, when can you make some more of that stuff? And Doris, how many mice do you have?" Those mice! I have always kept my enthusiasm for rodents within bounds, but I must admit they were charming little beasts, with tails as bushy as miniature squirrels. "How many generations?" I asked Doris. "Seventeen. No, eighteen, now. Want to see the genetic charts?" I won't try to explain it as she did to me, but it was quite evident that the new mice were breeding true. Presently we asked Betty Miller to come back down to the barn for a conference. She listened and asked questions. At last she said, "Well, all right, if you promise me they can't get out of their cages. But heaven knows what you'll do when fall comes. They won't live in an unheated barn and you can't bring them into the house." "We'll be out of the mouse business by then," Doris predicted. "Every pet shop in the country will have them and they'll be down to nothing apiece." 
Doris was right, of course, in spite of our efforts to protect the market. Anyhow that ushered in our cage building phase, and for the next week—with a few interruptions—we built cages, hundreds of them, a good many for breeding, but mostly for shipping. It was rather regrettable that, after the Courier gave us most of the third page, including photographs, we rarely had a day without a few visitors. Many of them wanted to buy mice or kites, but Tommy refused to sell any mice at retail and we soon had to disappoint those who wanted kites. The Supermarket took all we had—except a dozen—and at a dollar fifty each. Tommy's ideas of pricing rather frightened me, but he set the value of the mice at ten dollars a pair and got it without any arguments. Our beautiful stationery arrived, and we had some invoice forms printed up in a hurry—not engraved, for a wonder. It was on Tuesday—following the Thursday—that a lanky young man disentangled himself from his car and strolled into the barn. I looked up from the floor where I was tacking squares of screening onto wooden frames. "Hi," he said. "You're Donald Henderson, right? My name is McCord—Jeff McCord—and I work in the Patent Section at the Commission's downtown office. My boss sent me over here, but if he hadn't, I think I'd have come anyway. What are you doing to get patent protection on Ridge Industries' new developments?" I got my back unkinked and dusted off my knees. "Well, now," I said, "I've been wondering whether something shouldn't be done, but I know very little about such matters—." "Exactly," he broke in, "we guessed that might be the case, and there are three patent men in our office who'd like to chip in and contribute some time. Partly for the kicks and partly because we think you may have some things worth protecting. How about it? You worry about the filing and final fees. That's sixty bucks per brainstorm. We'll worry about everything else." "What's to lose," Tommy interjected. 
And so we acquired a patent attorney, several of them, in fact. The day that our application on the kite design went to Washington, Mary wrote a dozen toy manufacturers scattered from New York to Los Angeles, sent a kite to each one and offered to license the design. Result, one licensee with a thousand dollar advance against next season's royalties. It was a rainy morning about three weeks later that I arrived at the barn. Jeff McCord was there, and the whole team except Tommy. Jeff lowered his feet from the picnic table and said, "Hi." "Hi yourself," I told him. "You look pleased." "I am," he replied, "in a cautious legal sense, of course. Hilary and I were just going over the situation on his phosphonate detergent. I've spent the last three nights studying the patent literature and a few standard texts touching on phosphonates. There are a zillion patents on synthetic detergents and a good round fifty on phosphonates, but it looks"—he held up a long admonitory hand—"it just looks as though we had a clear spot. If we do get protection, you've got a real salable property." "That's fine, Mr. McCord," Hilary said, "but it's not very important." "No?" Jeff tilted an inquiring eyebrow at me, and I handed him a small bottle. He opened and sniffed at it gingerly. "What gives?" "Before-shave lotion," Hilary told him. "You've shaved this morning, but try some anyway." Jeff looked momentarily dubious, then puddled some in his palm and moistened his jaw line. "Smells good," he noted, "and feels nice and cool. Now what?" "Wipe your face." Jeff located a handkerchief and wiped, looked at the cloth, wiped again, and stared. "What is it?" "A whisker stiffener. It makes each hair brittle enough to break off right at the surface of your skin." "So I perceive. What is it?" "Oh, just a mixture of stuff. Cookbook chemistry. Cysteine thiolactone and a fat-soluble magnesium compound." "I see. Just a mixture of stuff. And do your whiskers grow back the next day?" 
"Right on schedule," I said. McCord unfolded his length and stood staring out into the rain. Presently he said, "Henderson, Hilary and I are heading for my office. We can work there better than here, and if we're going to break the hearts of the razor industry, there's no better time to start than now." When they had driven off I turned and said, "Let's talk a while. We can always clean mouse cages later. Where's Tommy?" "Oh, he stopped at the bank to get a loan." "What on earth for? We have over six thousand in the account." "Well," Peter said, looking a little embarrassed, "we were planning to buy a hydraulic press. You see, Doris put some embroidery on that scheme of mine for making ball bearings." He grabbed a sheet of paper. "Look, we make a roller bearing, this shape only it's a permanent magnet. Then you see—." And he was off. "What did they do today, dear?" Marge asked as she refilled my coffee cup. "Thanks," I said. "Let's see, it was a big day. We picked out a hydraulic press, Doris read us the first chapter of the book she's starting, and we found a place over a garage on Fourth Street that we can rent for winter quarters. Oh, yes, and Jeff is starting action to get the company incorporated." "Winter quarters," Marge repeated. "You mean you're going to try to keep the group going after school starts?" "Why not? The kids can sail through their courses without thinking about them, and actually they won't put in more than a few hours a week during the school year." "Even so, it's child labor, isn't it?" "Child labor nothing. They're the employers. Jeff McCord and I will be the only employees—just at first, anyway." Marge choked on something. "Did you say you'd be an employee?" "Sure," I told her. "They've offered me a small share of the company, and I'd be crazy to turn it down. After all, what's to lose?" Transcriber's Note: This etext was produced from Analog Science Fact & Fiction July 1962. Extensive research did not uncover any evidence that the U.S. 
copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
|
D. He senses that Donald is underestimating the potential of his good idea
|
Which term best describes the ease of space travel within the context of the passage?
A. complex
B. evolving
C. strict
D. flexible
|
PLANET of DREAD By MURRAY LEINSTER Illustrator ADKINS [Transcriber's Note: This etext was produced from Fantastic Stories of Imagination May 1962. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] I. Moran cut apart the yard-long monstrosity with a slash of flame. The thing presumably died, but it continued to writhe senselessly. He turned to see other horrors crawling toward him. Then he knew he was being marooned on a planet of endless terrors. Moran, naturally, did not mean to help in the carrying out of the plans which would mean his destruction one way or another. The plans were thrashed out very painstakingly, in formal conference on the space-yacht Nadine , with Moran present and allowed to take part in the discussion. From the viewpoint of the Nadine's ship's company, it was simply necessary to get rid of Moran. In their predicament he might have come to the same conclusion; but he was not at all enthusiastic about their decision. He would die of it. The Nadine was out of overdrive and all the uncountable suns of the galaxy shone steadily, remotely, as infinitesimal specks of light of every color of the rainbow. Two hours since, the sun of this solar system had been a vast glaring disk off to port, with streamers and prominences erupting about its edges. Now it lay astern, and Moran could see the planet that had been chosen for his marooning. It was a cloudy world. There were some dim markings near one lighted limb, but nowhere else. There was an ice-cap in view. The rest was—clouds. The ice-cap, by its existence and circular shape, proved that the planet rotated at a not unreasonable rate. The fact that it was water-ice told much. A water-ice ice-cap said that there were no poisonous gases in the planet's atmosphere. Sulfur dioxide or chlorine, for example, would not allow the formation of water-ice. It would have to be sulphuric-acid or hydrochloric-acid ice. But the ice-cap was simple snow. 
Its size, too, told about temperature-distribution on the planet. A large cap would have meant a large area with arctic and sub-arctic temperatures, with small temperate and tropical climate-belts. A small one like this meant wide tropical and sub-tropical zones. The fact was verified by the thick, dense cloud-masses which covered most of the surface,—all the surface, in fact, outside the ice-cap. But since there were ice-caps there would be temperate regions. In short, the ice-cap proved that a man could endure the air and temperature conditions he would find. Moran observed these things from the control-room of the Nadine , then approaching the world on planetary drive. He was to be left here, with no reason ever to expect rescue. Two of the Nadine's four-man crew watched out the same ports as the planet seemed to approach. Burleigh said encouragingly; "It doesn't look too bad, Moran!" Moran disagreed, but he did not answer. He cocked an ear instead. He heard something. It was a thin, wabbling, keening whine. No natural radiation sounds like that. Moran nodded toward the all-band speaker. "Do you hear what I do?" he asked sardonically. Burleigh listened. A distinctly artificial signal came out of the speaker. It wasn't a voice-signal. It wasn't an identification beacon, such as are placed on certain worlds for the convenience of interstellar skippers who need to check their courses on extremely long runs. This was something else. Burleigh said: "Hm ... Call the others, Harper." Harper, prudently with him in the control-room, put his head into the passage leading away. He called. But Moran observed with grudging respect that he didn't give him a chance to do anything drastic. These people on the Nadine were capable. They'd managed to recapture the Nadine from him, but they were matter-of-fact about it. 
They didn't seem to resent what he'd tried to do, or that he'd brought them an indefinite distance in an indefinite direction from their last landing-point, and they had still to re-locate themselves. They'd been on Coryus Three and they'd gotten departure clearance from its space-port. With clearance-papers in order, they could land unquestioned at any other space-port and take off again—provided the other space-port was one they had clearance for. Without rigid control of space-travel, any criminal anywhere could escape the consequences of any crime simply by buying a ticket to another world. Moran couldn't have bought a ticket, but he'd tried to get off the planet Coryus on the Nadine . The trouble was that the Nadine had clearance papers covering five persons aboard—four men and a girl Carol. Moran made six. Wherever the yacht landed, such a disparity between its documents and its crew would spark an investigation. A lengthy, incredibly minute investigation. Moran, at least, would be picked out as a fugitive from Coryus Three. The others were fugitives too, from some unnamed world Moran did not know. They might be sent back where they came from. In effect, with six people on board instead of five, the Nadine could not land anywhere for supplies. With five on board, as her papers declared, she could. And Moran was the extra man whose presence would rouse space-port officials' suspicion of the rest. So he had to be dumped. He couldn't blame them. He'd made another difficulty, too. Blaster in hand, he'd made the Nadine take off from Coryus III with a trip-tape picked at random for guidance. But the trip-tape had been computed for another starting-point, and when the yacht came out of overdrive it was because the drive had been dismantled in the engine-room. So the ship's location was in doubt. It could have travelled at almost any speed in practically any direction for a length of time that was at least indefinite. A liner could re-locate itself without trouble. 
It had elaborate observational equipment and tri-di star-charts. But smaller craft had to depend on the Galactic Directory. The process would be to find a planet and check its climate and relationship to other planets, and its flora and fauna against descriptions in the Directory. That was the way to find out where one was, when one's position became doubtful. The Nadine needed to make a planet-fall for this. The rest of the ship's company came into the control-room. Burleigh waved his hand at the speaker. "Listen!" They heard it. All of them. It was a trilling, whining sound among the innumerable random noises to be heard in supposedly empty space. "That's a marker," Carol announced. "I saw a costume-story tape once that had that sound in it. It marked a first-landing spot on some planet or other, so the people could find that spot again. It was supposed to be a long time ago, though." "It's weak," observed Burleigh. "We'll try answering it." Moran stirred, and he knew that every one of the others was conscious of the movement. But they didn't watch him suspiciously. They were alert by long habit. Burleigh said they'd been Underground people, fighting the government of their native world, and they'd gotten away to make it seem the revolt had collapsed. They'd go back later when they weren't expected, and start it up again. Moran considered the story probable. Only people accustomed to desperate actions would have remained so calm when Moran had used desperate measures against them. Burleigh picked up the transmitter-microphone. "Calling ground," he said briskly. "Calling ground! We pick up your signal. Please reply." He repeated the call, over and over and over. There was no answer. Cracklings and hissings came out of the speaker as before, and the thin and reedy wabbling whine continued. The Nadine went on toward the enlarging cloudy mass ahead. Burleigh said; "Well?" "I think," said Carol, "that we should land. People have been here. 
If they left a beacon, they may have left an identification of the planet. Then we'd know where we are and how to get to Loris." Burleigh nodded. The Nadine had cleared for Loris. That was where it should make its next landing. The little yacht went on. All five of its proper company watched as the planet's surface enlarged. The ice-cap went out of sight around the bulge of the globe, but no markings appeared. There were cloud-banks everywhere, probably low down in the atmosphere. The darker vague areas previously seen might have been highlands. "I think," said Carol, to Moran, "that if it's too tropical where this signal's coming from, we'll take you somewhere near enough to the ice-cap to have an endurable climate. I've been figuring on food, too. That will depend on where we are from Loris because we have to keep enough for ourselves. But we can spare some. We'll give you the emergency-kit, anyhow." The emergency-kit contained antiseptics, seeds, and a weapon or two, with elaborate advice to castaways. If somebody were wrecked on an even possibly habitable planet, the especially developed seed-strains would provide food in a minimum of time. It was not an encouraging thought, though, and Moran grimaced. She hadn't said anything about being sorry that he had to be marooned. Maybe she was, but rebels learn to be practical or they don't live long. Moran wondered, momentarily, what sort of world they came from and why they had revolted, and what sort of set-back to the revolt had sent the five off in what they considered a strategic retreat but their government would think defeat. Moran's own situation was perfectly clear. He'd killed a man on Coryus III. His victim would not be mourned by anybody, and somebody formerly in very great danger would now be safe, which was the reason for what Moran had done. But the dead man had been very important, and the fact that Moran had forced him to fight and killed him in fair combat made no difference. 
Moran had needed to get off-planet, and fast. But space-travel regulations are especially designed to prevent such escapes. He'd made a pretty good try, at that. One of the controls on space-traffic required a ship on landing to deposit its fuel-block in the space-port's vaults. The fuel-block was not returned until clearance for departure had been granted. But Moran had waylaid the messenger carrying the Nadine's fuel-block back to that space-yacht. He'd knocked the messenger cold and presented himself at the yacht with the fuel. He was admitted. He put the block in the engine's gate. He duly took the plastic receipt-token the engine only then released, and he drew a blaster. He'd locked two of the Nadine's crew in the engine-room, rushed to the control-room without encountering the others, dogged the door shut, and threaded in the first trip-tape to come to hand. He punched the take-off button and only seconds later the overdrive. Then the yacht—and Moran—was away. But his present companions got the drive dismantled two days later and once the yacht was out of overdrive they efficiently gave him his choice of surrendering or else. He surrendered, stipulating that he wouldn't be landed back on Coryus; he still clung to hope of avoiding return—which was almost certain anyhow. Because nobody would want to go back to a planet from which they'd carried away a criminal, even though they'd done it unwillingly. Investigation of such a matter might last for months. Now the space-yacht moved toward a vast mass of fleecy whiteness without any visible features. Harper stayed with the direction-finder. From time to time he gave readings requiring minute changes of course. The wabbling, whining signal was louder now. It became louder than all the rest of the space-noises together. The yacht touched atmosphere and Burleigh said; "Watch our height, Carol." She stood by the echometer. Sixty miles. Fifty. Thirty. A correction of course. Fifteen miles to surface below. Ten. Five. 
At twenty-five thousand feet there were clouds, which would be particles of ice so small that they floated even so high. Then clear air, then lower clouds, and lower ones still. It was not until six thousand feet above the surface that the planet-wide cloud-level seemed to begin. From there on down it was pure opacity. Anything could exist in that dense, almost palpable grayness. There could be jagged peaks. The Nadine went down and down. At fifteen hundred feet above the unseen surface, the clouds ended. Below, there was only haze. One could see the ground, at least, but there was no horizon. There was only an end to visibility. The yacht descended as if in the center of a sphere in which one could see clearly nearby, less clearly at a little distance, and not at all beyond a quarter-mile or so. There was a shaded, shadowless twilight under the cloud-bank. The ground looked like no ground ever seen before by anyone. Off to the right a rivulet ran between improbable-seeming banks. There were a few very small hills of most unlikely appearance. It was the ground, the matter on which one would walk, which was strangest. It had color, but the color was not green. Much of it was a pallid, dirty-yellowish white. But there were patches of blue, and curious veinings of black, and here and there were other colors, all of them unlike the normal color of vegetation on a planet with a sol-type sun. Harper spoke from the direction-finder; "The signal's coming from that mound, yonder." There was a hillock of elongated shape directly in line with the Nadine's course in descent. Except for the patches of color, it was the only considerable landmark within the half-mile circle in which anything could be seen at all. The Nadine checked her downward motion. Interplanetary drive is rugged and sure, but it does not respond to fine adjustment. Burleigh used rockets, issuing great bellowings of flame, to make actual contact. 
The yacht hovered, and as the rocket-flames diminished slowly she sat down with practically no impact at all. But around her there was a monstrous tumult of smoke and steam. When the rockets went off, she lay in a burned-out hollow some three or four feet deep with a bottom of solid stone. The walls of the hollow were black and scorched. It seemed that at some places they quivered persistently. There was silence in the control-room save for the whining noise which now was almost deafening. Harper snapped off the switch. Then there was true silence. The space-yacht had come to rest possibly a hundred yards from the mound which was the source of the space-signal. That mound shared the peculiarity of the ground as far as they could see through the haze. It was not vegetation in any ordinary sense. Certainly it was no mineral surface! The landing-pockets had burned away three or four feet of it, and the edge of the burned area smoked noisomely, and somehow it looked as if it would reek. And there were places where it stirred. Burleigh blinked and stared. Then he reached up and flicked on the outside microphones. Instantly there was bedlam. If the landscape was strange, here, the sounds that came from it were unbelievable. There were grunting noises. There were clickings, uncountable clickings that made a background for all the rest. There were discordant howls and honkings. From time to time some thing unknown made a cry that sounded very much like a small boy trailing a stick against a picket fence, only much louder. Something hooted, maintaining the noise for an impossibly long time. And persistently, sounding as if they came from far away, there were booming noises, unspeakably deep-bass, made by something alive. And something shrieked in lunatic fashion and something else still moaned from time to time with the volume of a steam-whistle.... "This sounds and looks like a nice place to live," said Moran with fine irony. Burleigh did not answer.
He turned down the outside sound. "What's that stuff there, the ground?" he demanded. "We burned it away in landing. I've seen something like it somewhere, but never taking the place of grass!" "That," said Moran as if brightly, "that's what I'm to make a garden in. Of evenings I'll stroll among my thrifty plantings and listen to the delightful sounds of nature." Burleigh scowled. Harper flicked off the direction-finder. "The signal still comes from that hillock yonder," he said with finality. Moran said bitingly; "That ain't no hillock, that's my home!" Then, instantly he'd said it, he recognized that it could be true. The mound was not a fold in the ground. It was not an up-cropping of the ash-covered stone on which the Nadine rested. The enigmatic, dirty-yellow-dirty-red-dirty-blue-and-dirty-black ground-cover hid something. It blurred the shape it covered, very much as enormous cobwebs made solid and opaque would have done. But when one looked carefully at the mound, there was a landing-fin sticking up toward the leaden skies. It was attached to a large cylindrical object of which the fore part was crushed in. The other landing-fins could be traced. "It's a ship," said Moran curtly. "It crash-landed and its crew set up a signal to call for help. None came, or they'd have turned the beacon off. Maybe they got the lifeboats to work and got away. Maybe they lived as I'm expected to live until they died as I'm expected to die." Burleigh said angrily; "You'd do what we are doing if you were in our shoes!" "Sure," said Moran, "but a man can gripe, can't he?" "You won't have to live here," said Burleigh. "We'll take you somewhere up by the ice-cap. As Carol said, we'll give you everything we can spare. And meanwhile we'll take a look at that wreck yonder. There might be an indication in it of what solar system this is. There could be something in it of use to you, too. You'd better come along when we explore." "Aye, aye, sir," said Moran with irony. 
"Very kind of you, sir. You'll go armed, sir?" Burleigh growled; "Naturally!" "Then since I can't be trusted with a weapon," said Moran, "I suggest that I take a torch. We may have to burn through that loathsome stuff to get in the ship." "Right," growled Burleigh again. "Brawn and Carol, you'll keep ship. The rest of us wear suits. We don't know what that stuff is outside." Moran silently went to the space-suit rack and began to get into a suit. Modern space-suits weren't like the ancient crudities with bulging metal casings and enormous globular helmets. Non-stretch fabrics took the place of metal, and constant-volume joints were really practical nowadays. A man could move about in a late-model space-suit almost as easily as in ship-clothing. The others of the landing-party donned their special garments with the brisk absence of fumbling that these people displayed in every action. "If there's a lifeboat left," said Carol suddenly, "Moran might be able to do something with it." "Ah, yes!" said Moran. "It's very likely that the ship hit hard enough to kill everybody aboard, but not smash the boats!" "Somebody survived the crash," said Burleigh, "because they set up a beacon. I wouldn't count on a boat, Moran." "I don't!" snapped Moran. He flipped the fastener of his suit. He felt all the openings catch. He saw the others complete their equipment. They took arms. So far they had seen no moving thing outside, but arms were simple sanity on an unknown world. Moran, though, would not be permitted a weapon. He picked up a torch. They filed into the airlock. The inner door closed. The outer door opened. It was not necessary to check the air specifically. The suits would take care of that. Anyhow the ice-cap said there were no water-soluble gases in the atmosphere, and a gas can't be an active poison if it can't dissolve. They filed out of the airlock. They stood on ash-covered stone, only slightly eroded by the processes which made life possible on this planet.
They looked dubiously at the scorched, indefinite substance which had been ground before the Nadine landed. Moran moved scornfully forward. He kicked at the burnt stuff. His foot went through the char. The hole exposed a cheesy mass of soft matter which seemed riddled with small holes. Something black came squirming frantically out of one of the openings. It was eight or ten inches long. It had a head, a thorax, and an abdomen. It had wing-cases. It had six legs. It toppled down to the stone on which the Nadine rested. Agitatedly, it spread its wing-covers and flew away, droning loudly. The four men heard the sound above even the monstrous cacophony of cries and boomings and grunts and squeaks which seemed to fill the air. "What the devil—." Moran kicked again. More holes. More openings. More small tunnels in the cheese-like, curd-like stuff. More black things squirming to view in obvious panic. They popped out everywhere. It was suddenly apparent that the top of the soil, here, was a thick and blanket-like sheet over the whitish stuff. The black creatures lived and thrived in tunnels under it. Carol's voice came over the helmet-phones. " They're—bugs! " she said incredulously. " They're beetles! They're twenty times the size of the beetles we humans have been carrying around the galaxy, but that's what they are! " Moran grunted. Distastefully, he saw his predicament made worse. He knew what had happened here. He could begin to guess at other things to be discovered. It had not been practical for men to move onto new planets and subsist upon the flora and fauna they found there. On some new planets life had never gotten started. On such worlds a highly complex operation was necessary before humanity could move in. 
A complete ecological complex had to be built up; microbes to break down the rock for soil, bacteria to fix nitrogen to make the soil fertile; plants to grow in the new-made dirt and insects to fertilize the plants so they would multiply, and animals and birds to carry the seeds planet-wide. On most planets, to be sure, there were local, aboriginal plants and animals. But still terrestrial creatures had to be introduced if a colony was to feed itself. Alien plants did not supply satisfactory food. So an elaborate adaptation job had to be done on every planet before native and terrestrial living things settled down together. It wasn't impossible that the scuttling things were truly beetles, grown large and monstrous under the conditions of a new planet. And the ground.... "This ground stuff," said Moran distastefully, "is yeast or some sort of toadstool growth. This is a seedling world. It didn't have any life on it, so somebody dumped germs and spores and bugs to make it ready for plants and animals eventually. But nobody's come back to finish up the job." Burleigh grunted a somehow surprised assent. But it wasn't surprising; not wholly so. Once one mentioned yeasts and toadstools and fungi generally, the weird landscape became less than incredible. But it remained actively unpleasant to think of being marooned on it. "Suppose we go look at the ship?" said Moran unpleasantly. "Maybe you can find out where you are, and I can find out what's ahead of me." He climbed up on the unscorched surface. It was elastic. The parchment-like top skin yielded. It was like walking on a mass of springs. "We'd better spread out," added Moran, "or else we'll break through that skin and be floundering in this mess." "I'm giving the orders, Moran!" said Burleigh shortly. "But what you say does make sense." He and the others joined Moran on the yielding surface. Their footing was uncertain, as on a trampoline. They staggered. 
They moved toward the hillock which was a covered-over wrecked ship. The ground was not as level as it appeared from the Nadine's control-room. There were undulations. But they could not see more than a quarter-mile in any direction. Beyond that was mist. But Burleigh, at one end of the uneven line of advancing men, suddenly halted and stood staring down at something he had not seen before. The others halted. Something moved. It came out from behind a very minor spire of whitish stuff that looked like a dirty sheet stretched over a tall stone. The thing that appeared was very peculiar indeed. It was a—worm. But it was a foot thick and ten feet long, and it had a group of stumpy legs at its fore end—where there were eyes hidden behind bristling hair-like growths—and another set of feet at its tail end. It progressed sedately by reaching forward with its fore-part, securing a foothold, and then arching its middle portion like a cat arching its back, to bring its hind part forward. Then it reached forward again. It was of a dark olive color from one end to the other. Its manner of walking was insane but somehow sedate. Moran heard muffled noises in his helmet-phone as the others tried to speak. Carol's voice came anxiously; " What's the matter? What do you see? " Moran said with savage precision; "We're looking at an inch-worm, grown up like the beetles only more so. It's not an inch-worm any longer. It's a yard-worm." Then he said harshly to the men with him; "It's not a hunting creature on worlds where it's smaller. It's not likely to have turned deadly here. Come on!" He went forward over the singularly bouncy ground. The others followed. It was to be noted that Hallet, the engineer, avoided the huge harmless creature more widely than most. They reached the mound which was the ship. Moran unlimbered his torch. He said sardonically; "This ship won't do anybody any good. It's old-style. That thick belt around its middle was dropped a hundred years ago, and more."
There was an abrupt thickening of the cylindrical hull at the middle. There was an equally abrupt thinning, again, toward the landing-fins. The sharpness of the change was blurred over by the revolting ground-stuff growing everywhere. "We're going to find that this wreck has been here a century at least!" Without orders, he turned on the torch. A four-foot flame of pure blue-white leaped out. He touched its tip to the fungoid soil. Steam leaped up. He used the flame like a gigantic scalpel, cutting a square a yard deep in the whitish stuff, and then cutting it across and across to destroy it. Thick fumes arose, and quiverings and shakings began. Black creatures in their labyrinths of tunnels began to panic. Off to the right the blanket-like surface ripped and they poured out. They scuttled crazily here and there. Some took to wing. By instinct the other men—the armed ones—moved back from the smoke. They wore space-helmets but they felt that there should be an intolerable smell. Moran slashed and slashed angrily with the big flame, cutting a way to the metal hull that had fallen here before his grandfather was born. Sometimes the flame cut across things that writhed, and he was sickened. But above all he raged because he was to be marooned here. He could not altogether blame the others. They couldn't land at any colonized world with him on board without his being detected as an extra member of the crew. His fate would then be sealed. But they also would be investigated. Official queries would go across this whole sector of the galaxy, naming five persons of such-and-such description and such-and-such fingerprints, voyaging in a space-yacht of such-and-such size and registration. The world they came from would claim them as fugitives. They would be returned to it. They'd be executed. Then Carol's voice came in his helmet-phone. She cried out; " Look out! It's coming! Kill it! Kill it—. " He heard blast-rifles firing. He heard Burleigh pant commands. 
He was on his way out of the hollow he'd carved when he heard Harper cry out horribly. He got clear of the newly burned-away stuff. There was still much smoke and steam. But he saw Harper. More, he saw the thing that had Harper. It occurred to him instantly that if Harper died, there would not be too many people on the Nadine . They need not maroon him. In fact, they wouldn't dare. A ship that came in to port with too few on board would be investigated as thoroughly as one that had too many. Perhaps more thoroughly. So if Harper were killed, Moran would be needed to take his place. He'd go on from here in the Nadine , necessarily accepted as a member of her crew. Then he rushed, the flame-torch making a roaring sound. II. When it was over, they went back to the Nadine for weapons more adequate for encountering the local fauna. Blast-rifles were not effective against such creatures as these. Torches were contact weapons but they killed. Blast-rifles did not. And Harper needed to pull himself together again, too. Also, neither Moran nor any of the others wanted to go back to the still un-entered wreck while the skinny, somehow disgusting legs of the thing still kicked spasmodically—quite separate—on the whitish ground-stuff. Moran had disliked such creatures in miniature form on other worlds. Enlarged like this.... It seemed insane that such creatures, even in miniature, should painstakingly be brought across light-years of space to the new worlds men settled on. But it had been found to be necessary. The ecological system in which human beings belonged had turned out to be infinitely complicated. It had turned out, in fact, to be the ecological system of Earth, and unless all parts of the complex were present, the total was subtly or glaringly wrong. So mankind distastefully ferried pests as well as useful creatures to its new worlds as they were made ready for settlement. Mosquitos throve on the inhabited globes of the Rim Stars.
Roaches twitched nervous antennae on the settled planets of the Coal-sack. Dogs on Antares had fleas, and scratched their bites, and humanity spread through the galaxy with an attendant train of insects and annoyances. If they left their pests behind, the total system of checks and balances which make life practical would get lopsided. It would not maintain itself. The vagaries that could result were admirably illustrated in and on the landscape outside the Nadine . Something had been left out of the seeding of this planet. The element—which might be a bacterium or a virus or almost anything at all—the element that kept creatures at the size called "normal" was either missing or inoperable here. The results were not desirable.
### Introduction
In machine translation, neural networks have attracted a lot of research attention. Recently, the attention-based encoder-decoder framework BIBREF0 , BIBREF1 has been largely adopted. In this approach, Recurrent Neural Networks (RNNs) map source sequences of words to target sequences. The attention mechanism is learned to focus on different parts of the input sentence while decoding. Attention mechanisms have been shown to work with other modalities too, like images, where they are able to learn to attend to the salient parts of an image, for instance when generating text captions BIBREF2 . For such applications, Convolutional Neural Networks (CNNs) such as Deep Residual networks BIBREF3 have been shown to work best to represent images. Multimodal models of texts and images empower new applications such as visual question answering or multimodal caption translation. Also, the grounding of multiple modalities against each other may enable the model to have a better understanding of each modality individually, such as in natural language understanding applications. In the field of Machine Translation (MT), the efficient integration of multimodal information still remains a challenging task. It requires combining diverse modality vector representations with each other. These vector representations, also called context vectors, are computed in order to capture the most relevant information in a modality to output the best translation of a sentence. To investigate the effectiveness of information obtained from images, a multimodal machine translation shared task BIBREF4 has been proposed to the MT community. The best results among NMT models were those of BIBREF5 huang2016attention, who used an LSTM fed with global visual features or multiple regional visual features followed by rescoring. Recently, BIBREF6 CalixtoLC17b proposed a doubly-attentive decoder that outperformed this baseline with less data and without rescoring. Our paper is structured as follows.
In section SECREF2 , we briefly describe our NMT model as well as the conditional GRU activation used in the decoder. We also explain how multi-modalities can be implemented within this framework. In the following sections ( SECREF3 and SECREF4 ), we detail three attention mechanisms and explain how we tweak them to work as well as possible with images. Finally, we report and analyze our results in section SECREF5 then conclude in section SECREF6 . ### Neural Machine Translation
In this section, we detail the neural machine translation architecture by BIBREF1 BahdanauCB14, implemented as an attention-based encoder-decoder framework with recurrent neural networks (§ SECREF2 ). We follow by explaining the conditional GRU layer (§ SECREF8 ) - the gating mechanism we chose for our RNN - and how the model can be ported to a multimodal version (§ SECREF13 ). ### Text-based NMT
Given a source sentence INLINEFORM0 , the neural network directly models the conditional probability INLINEFORM1 of its translation INLINEFORM2 . The network consists of one encoder and one decoder with one attention mechanism. The encoder computes a representation INLINEFORM3 for each source sentence and a decoder generates one target word at a time by decomposing the following conditional probability: DISPLAYFORM0 Each source word INLINEFORM0 and target word INLINEFORM1 are a column index of the embedding matrix INLINEFORM2 and INLINEFORM3 . The encoder is a bi-directional RNN with Gated Recurrent Unit (GRU) layers BIBREF7 , BIBREF8 , where a forward RNN INLINEFORM4 reads the input sequence as it is ordered (from INLINEFORM5 to INLINEFORM6 ) and calculates a sequence of forward hidden states INLINEFORM7 . A backward RNN INLINEFORM8 reads the sequence in the reverse order (from INLINEFORM9 to INLINEFORM10 ), resulting in a sequence of backward hidden states INLINEFORM11 . We obtain an annotation for each word INLINEFORM12 by concatenating the forward and backward hidden states INLINEFORM13 . Each annotation INLINEFORM14 contains the summaries of both the preceding words and the following words. The representation INLINEFORM15 for each source sentence is the sequence of annotations INLINEFORM16 . The decoder is an RNN that uses a conditional GRU (cGRU, more details in § SECREF8 ) with an attention mechanism to generate a word INLINEFORM0 at each time-step INLINEFORM1 . The cGRU uses its previous hidden state INLINEFORM2 , the whole sequence of source annotations INLINEFORM3 and the previously decoded symbol INLINEFORM4 in order to update its hidden state INLINEFORM5 : DISPLAYFORM0 In the process, the cGRU also computes a time-dependent context vector INLINEFORM0 . Both INLINEFORM1 and INLINEFORM2 are further used to decode the next symbol.
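The bi-directional encoder described above can be sketched in a few lines of NumPy. The function names, parameter layout, and omission of bias terms here are simplifying assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    # Standard GRU cell update (biases omitted for brevity).
    z = 1.0 / (1.0 + np.exp(-(x @ Wz + h @ Uz)))      # update gate
    r = 1.0 / (1.0 + np.exp(-(x @ Wr + h @ Ur)))      # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)           # candidate state
    return (1.0 - z) * h + z * h_tilde

def encode(X, params_f, params_b, d):
    """Bi-directional GRU encoder: annotation a_i = [h_fwd_i ; h_bwd_i].

    X        : (n, e) source word embeddings
    params_* : GRU weight lists [Wz, Uz, Wr, Ur, Wh, Uh]
    d        : hidden size of each direction
    """
    n = len(X)
    hf, hb = np.zeros(d), np.zeros(d)
    fwd, bwd = [], [None] * n
    for i in range(n):                       # forward pass, left to right
        hf = gru_step(X[i], hf, *params_f)
        fwd.append(hf)
    for i in reversed(range(n)):             # backward pass, right to left
        hb = gru_step(X[i], hb, *params_b)
        bwd[i] = hb
    # Each annotation summarizes both the preceding and following words.
    return np.stack([np.concatenate([f, b]) for f, b in zip(fwd, bwd)])
```

Concatenating the forward state with the backward state at each position is what gives every annotation a summary of both its left and right context, which is exactly what the attention mechanism later scores against.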
We use a deep output layer BIBREF9 to compute a vocabulary-sized vector : DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 are model parameters. We can parameterize the probability of decoding each word INLINEFORM4 as: DISPLAYFORM0 The initial state of the decoder INLINEFORM0 at time-step INLINEFORM1 is initialized by the following equation : DISPLAYFORM0 where INLINEFORM0 is a feedforward network with one hidden layer. ### Conditional GRU
The conditional GRU consists of two stacked GRU activations called INLINEFORM0 and INLINEFORM1 and an attention mechanism INLINEFORM2 in between (called ATT in the footnote paper). At each time-step INLINEFORM3 , REC1 firstly computes a hidden state proposal INLINEFORM4 based on the previous hidden state INLINEFORM5 and the previously emitted word INLINEFORM6 : DISPLAYFORM0 Then, the attention mechanism computes INLINEFORM0 over the source sentence using the annotations sequence INLINEFORM1 and the intermediate hidden state proposal INLINEFORM2 : DISPLAYFORM0 Finally, the second recurrent cell INLINEFORM0 , computes the hidden state INLINEFORM1 of the INLINEFORM2 by looking at the intermediate representation INLINEFORM3 and context vector INLINEFORM4 : DISPLAYFORM0 ### Multimodal NMT
Recently, BIBREF6 CalixtoLC17b proposed a doubly attentive decoder (referred to as the "MNMT" model in the authors' paper) which can be seen as an expansion of the attention-based NMT model presented in the previous section. Given a sequence of annotations INLINEFORM0 from a second modality, we also compute a new context vector based on the same intermediate hidden state proposal INLINEFORM1 : DISPLAYFORM0 This new time-dependent context vector is an additional input to a modified version of REC2 which now computes the final hidden state INLINEFORM0 using the intermediate hidden state proposal INLINEFORM1 and both time-dependent context vectors INLINEFORM2 and INLINEFORM3 : DISPLAYFORM0 The probabilities for the next target word (from equation EQREF5 ) also take into account the new context vector INLINEFORM0 : DISPLAYFORM0 where INLINEFORM0 is a new trainable parameter. In the field of multimodal NMT, the second modality is usually an image whose feature maps are computed with the help of a CNN. The annotations INLINEFORM0 are spatial features (i.e. each annotation represents features for a specific region in the image). We follow the same protocol for our experiments and describe it in section SECREF5 . ### Attention-based Models
We evaluate three models of the image attention mechanism INLINEFORM0 of equation EQREF11 . They have in common the fact that at each time step INLINEFORM1 of the decoding phase, all approaches first take as input the annotation sequence INLINEFORM2 to derive a time-dependent context vector that contains relevant information from the image to help predict the current target word INLINEFORM3 . Even though these models differ in how the time-dependent context vector is derived, they share the same subsequent steps. For each mechanism, we propose two hand-picked illustrations showing where the attention is placed in an image. ### Soft attention
Soft attention was first used for syntactic constituency parsing by BIBREF10 NIPS2015Vinyals but has been widely used for translation tasks ever since. One should note that it slightly differs from BIBREF1 BahdanauCB14, where the attention takes as input the previous decoder hidden state instead of the current (intermediate) one, as shown in equation EQREF11 . This mechanism has also been successfully investigated for the task of image description generation BIBREF2 , where a model generates an image's description in natural language. It has been used in multimodal translation as well BIBREF6 , for which it constitutes the state of the art. The idea of the soft attentional model is to consider all the annotations when deriving the context vector INLINEFORM0 . It consists of a single feed-forward network used to compute an expected alignment INLINEFORM1 between modality annotation INLINEFORM2 and the target word to be emitted at the current time step INLINEFORM3 . The inputs are the modality annotations and the intermediate representation of REC1 INLINEFORM4 : DISPLAYFORM0 The vector INLINEFORM0 has length INLINEFORM1 and its INLINEFORM2 -th item contains a score of how much attention should be put on the INLINEFORM3 -th annotation in order to output the best word at time INLINEFORM4 . We compute normalized scores to create an attention mask INLINEFORM5 over the annotations: DISPLAYFORM0 Finally, the modality time-dependent context vector INLINEFORM0 is computed as a weighted sum over the annotation vectors (equation ). In the above expressions, INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are trained parameters. ### Hard Stochastic attention
This model is a stochastic and sampling-based process where, at every timestep INLINEFORM0 , we make a hard choice to attend to only one annotation. This corresponds to one spatial location in the image. Hard attention has previously been used in the context of object recognition BIBREF11 , BIBREF12 and later extended to image description generation BIBREF2 . In the context of multimodal NMT, we can follow BIBREF2 icml2015xuc15 because both our models involve the same process on images. The mechanism INLINEFORM0 is now a function that returns a sampled intermediate latent variable INLINEFORM1 based upon a multinoulli distribution parameterized by INLINEFORM2 : DISPLAYFORM0 where INLINEFORM0 is an indicator one-hot variable which is set to 1 if the INLINEFORM1 -th annotation (out of INLINEFORM2 ) is the one used to compute the context vector INLINEFORM3 : DISPLAYFORM0 Context vector INLINEFORM0 is now seen as the random variable of this distribution. We define the variational lower bound INLINEFORM1 on the marginal log evidence INLINEFORM2 of observing the target sentence INLINEFORM3 given modality annotations INLINEFORM4 . DISPLAYFORM0 The learning rules can be derived by taking derivatives of the above variational free energy INLINEFORM0 with respect to the model parameter INLINEFORM1 : DISPLAYFORM0 In order to propagate a gradient through this process, the summation in equation EQREF26 can then be approximated using Monte Carlo based sampling defined by equation EQREF24 : DISPLAYFORM0 To reduce the variance of the estimator in equation EQREF27 , we use a moving average baseline estimated as an accumulated sum of the previous log likelihoods with exponential decay upon seeing the INLINEFORM0 -th mini-batch: DISPLAYFORM0 ### Local Attention
In this section, we propose a local attentional mechanism that chooses to focus only on a small subset of the image annotations. Local attention has been used for text-based translation BIBREF13 and is inspired by the selective attention model of BIBREF14 gregor15 for image generation. Their approach allows the model to select an image patch of varying location and zoom. Local attention instead uses the same "zoom" for all target positions and still achieves good performance. This model can be seen as a trade-off between the soft and hard attentional models. The model picks one patch in the annotation sequence (one spatial location) and selectively focuses on a small window of context around it. Even though an image can't be seen as a temporal sequence, we still hope that the model finds points of interest and selects the useful information around them. This approach has the advantage of being differentiable, whereas the stochastic attention requires more complicated techniques such as variance reduction and reinforcement learning to train, as shown in section SECREF22 . The soft attention has the drawback of attending to the whole image, which can be difficult to learn, especially because the number of annotations INLINEFORM0 is usually large (presumably to keep a significant spatial granularity). More formally, at every decoding step INLINEFORM0 , the model first generates an aligned position INLINEFORM1 . The context vector INLINEFORM2 is derived as a weighted sum over the annotations within the window INLINEFORM3 where INLINEFORM4 is a fixed model parameter chosen empirically. These selected annotations correspond to a square region in the attention maps around INLINEFORM7 . The attention mask INLINEFORM8 is of size INLINEFORM9 .
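The windowed mechanism described above can be sketched as follows. This is a minimal NumPy illustration assuming Luong-style predictive alignment and concat scoring; the parameter names `Wp`, `vp`, `Wa`, `Ua`, `va` are hypothetical placeholders, not the paper's actual variables:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def local_attention(s, A, Wp, vp, Wa, Ua, va, D=2):
    """Predictive local attention over an annotation sequence A of shape (S, d).

    s : decoder intermediate state; D : half-window size (sigma = D / 2).
    """
    S = len(A)
    # Predicted aligned position p in (0, S), via a sigmoid (local-m style).
    p = S * (1.0 / (1.0 + np.exp(-(vp @ np.tanh(Wp @ s)))))
    # Keep only the annotations inside the window [p - D, p + D].
    idx = np.arange(max(0, int(round(p)) - D), min(S, int(round(p)) + D + 1))
    # Concat-style alignment scores over the windowed annotations.
    scores = np.array([va @ np.tanh(Wa @ s + Ua @ A[i]) for i in idx])
    alpha = softmax(scores)
    # Gaussian centered on p favors annotations near the aligned position.
    sigma = D / 2.0
    alpha = alpha * np.exp(-((idx - p) ** 2) / (2.0 * sigma ** 2))
    return alpha @ A[idx], p, alpha
```

Note that the Gaussian term rescales the softmax weights without renormalizing them, so annotations far from the predicted position contribute almost nothing to the returned context vector.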
The model predicts INLINEFORM10 as an aligned position in the annotation sequence (referred to as Predictive alignment (local-m) in the authors' paper) according to the following equation: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are both trainable model parameters and INLINEFORM2 is the annotation sequence length INLINEFORM3 . Because of the sigmoid, INLINEFORM4 . We use equations EQREF18 and EQREF19 respectively to compute the expected alignment vector INLINEFORM5 and the attention mask INLINEFORM6 . In addition, a Gaussian distribution centered around INLINEFORM7 is placed on the alphas in order to favor annotations near INLINEFORM8 : DISPLAYFORM0 where the standard deviation INLINEFORM0 . We obtain the context vector INLINEFORM1 by following equation . ### Image attention optimization
Three optimizations can be added to the attention mechanism regarding the image modality. All lead to better use of the image by the model and improve the translation scores overall. At every decoding step INLINEFORM0 , we compute a gating scalar INLINEFORM1 according to the previous decoder state INLINEFORM2 : DISPLAYFORM0 It is then used to compute the time-dependent image context vector: DISPLAYFORM0 BIBREF2 icml2015xuc15 empirically found it to put more emphasis on the objects in the image descriptions generated with their model. We also double the output size of the trainable parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 in equation EQREF18 when computing the expected annotations over the image annotation sequence. More formally, given the image annotation sequence INLINEFORM3 , the three matrices are of size INLINEFORM4 , INLINEFORM5 and INLINEFORM6 respectively. We noticed a better coverage of the objects in the image by the alpha weights. Lastly, we use a grounding attention inspired by BIBREF15 delbrouck2017multimodal. The mechanism merges each spatial location INLINEFORM0 in the annotation sequence INLINEFORM1 with the initial decoder state INLINEFORM2 obtained in equation EQREF7 with a non-linearity: DISPLAYFORM0 where INLINEFORM0 is the INLINEFORM1 function. The new annotations go through an L2 normalization layer followed by two INLINEFORM2 convolutional layers (of size INLINEFORM3 respectively) to obtain INLINEFORM4 weights, one for each spatial location. We normalize the weights with a softmax to obtain a soft attention map INLINEFORM5 . Each annotation INLINEFORM6 is then weighted according to its corresponding INLINEFORM7 : DISPLAYFORM0 This method can be seen as the removal of unnecessary information in the image annotations according to the source sentence. This attention is used on top of the others, before decoding, and is referred to as "grounded image" in Table TABREF41 . ### Experiments
For these experiments on Multimodal Machine Translation, we used the Multi30K dataset BIBREF17 , which is an extended version of the Flickr30K Entities. For each image, one of the English descriptions was selected and manually translated into German by a professional translator. As training and development data, 29,000 and 1,014 triples are used respectively. A test set of 1,000 triples is used for metric evaluation. ### Training and model details
All our models are built on top of the nematus framework BIBREF18 . The encoder is a bidirectional RNN with GRU: one 1024D single-layer forward and one 1024D single-layer backward RNN. Word embeddings for the source and target languages are 620D and trained jointly with the model. Word embeddings and other non-recurrent matrices are initialized by sampling from a Gaussian INLINEFORM0 , recurrent matrices are random orthogonal and bias vectors are all initialized to zero. To create the image annotations used by our decoder, we used a ResNet-50 pre-trained on ImageNet and extracted the features of size INLINEFORM0 at its res4f layer BIBREF3 . In our experiments, our decoder operates on the flattened 196 INLINEFORM1 1024 (i.e. INLINEFORM2 ). We also apply dropout with a probability of 0.5 on the embeddings and on the hidden states in the bidirectional RNN in the encoder as well as in the decoder. In the decoder, we also apply dropout on the text annotations INLINEFORM3 , the image features INLINEFORM4 , on both modality context vectors and on all components of the deep output layer before the readout operation. We apply dropout using the same mask in all time steps BIBREF19 . We also normalize and tokenize the English and German descriptions using the Moses tokenizer scripts BIBREF20 . We use the byte pair encoding algorithm on the training set to convert space-separated tokens into subwords BIBREF21 , reducing our vocabulary sizes to 9,226 and 14,957 words for English and German respectively. All variants of our attention model were trained with ADADELTA BIBREF22 , with mini-batches of size 80 for our monomodal (text-only) NMT model and 40 for our multimodal NMT. We apply early stopping for model selection based on BLEU4: training is halted if no improvement on the development set is observed for more than 20 epochs. We use the metrics BLEU4 BIBREF23 , METEOR BIBREF24 and TER BIBREF25 to evaluate the quality of our models' translations. ### Quantitative results
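As a reminder of how the precision-oriented metric reported below is computed, here is a minimal single-reference, unsmoothed sentence-level BLEU-4 sketch. Real evaluations use the standard tool implementations; this toy version (our own simplification) only illustrates clipped n-gram precision and the brevity penalty.

```python
import math
from collections import Counter

def sentence_bleu4(candidate, reference):
    """Toy sentence-level BLEU-4: uniform weights over n = 1..4,
    a single reference, no smoothing. Returns a value in [0, 1]."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, 5):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clipped counts: a candidate n-gram is credited at most as
        # often as it appears in the reference
        clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        if clipped == 0:
            return 0.0  # unsmoothed: any zero precision zeroes the score
        log_prec += 0.25 * math.log(clipped / total)
    # Brevity penalty: punish candidates shorter than the reference
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return bp * math.exp(log_prec)
```

Because every n-gram order is multiplied in, a single hallucinated or missing content word can collapse the sentence-level score, which is relevant to the per-sentence BLEU gaps discussed in the qualitative results.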
We notice a clear overall improvement over the BIBREF6 CalixtoLC17b multimodal baseline, especially when using the stochastic attention. With improvements of +1.51 BLEU and -2.2 TER, both precision-oriented metrics, the model shows a strong similarity between the n-grams of our candidate translations and the references. The scores on the more recall-oriented metric METEOR are roughly the same across our models, which is expected because all attention mechanisms share the same subsequent step at every time-step INLINEFORM0 , i.e. taking into account the attention weights of the previous time-step INLINEFORM1 in order to compute the new intermediate hidden state proposal and therefore the new context vector INLINEFORM2 . Again, the largest improvement is given by the hard stochastic attention mechanism (+0.4 METEOR): because it is modeled as a decision process conditioned on previous choices, this may reinforce the idea of recall. We also note interesting improvements when using the grounded mechanism, especially for the soft attention. The soft attention may benefit more from the grounded image because of the wide range of spatial locations it looks at, especially compared to the stochastic attention. This motivates us to dig into more complex grounding techniques in order to give the machine a deeper understanding of the modalities. Note that even though our baseline NMT model is basically the same as that of BIBREF6 CalixtoLC17b, our experimental results are slightly better. This is probably due to the different use of dropout and subwords. We also compared our results to BIBREF16 caglayan2016does because our multimodal models are nearly identical, with the major exception of the gating scalar (cf. section SECREF4 ). This motivated some of our qualitative analysis and our hesitation towards the current architecture in the next section. ### Qualitative results
For space-saving and ergonomic reasons, we only discuss the hard stochastic and soft attention, the latter being a generalization of the local attention. As we can see in Figure FIGREF44 , the soft attention model looks at roughly the same region of the image at every decoding step INLINEFORM0 . Because the words "hund" (dog), "wald" (forest) and "weg" (way) in the left image are objects, they benefit from a high gating scalar. As a matter of fact, the attention mechanism has learned to detect the objects within a scene (at every time-step, whichever word we are decoding, as shown in the right image) and the gating scalar has learned to decide whether or not we have to look at the picture (or, more accurately, whether or not we are translating an object). Without this scalar, the translation scores undergo a massive drop (as seen in BIBREF16 caglayan2016does), which means that the attention mechanisms don't really understand the more complex relationships between objects, i.e. what is really happening in the scene. Surprisingly, the gating scalar happens to be really low in the stochastic attention mechanism: a significant number of sentences don't have a summed gating scalar INLINEFORM1 0.10. The model totally discards the image in the translation process. It is also worth mentioning that we use a ResNet trained on 1.28 million images for a classification task. The features used by the attention mechanism are strongly object-oriented, and the machine could miss important information for a multimodal translation task. We believe that the robust architecture of both encoders INLINEFORM0 combined with a GRU layer and word-embeddings took care of the correct translation of relationships between objects and time dependencies.
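The gating behavior discussed above can be made concrete with a small sketch in the spirit of the scalar from BIBREF2 icml2015xuc15. The parameter names (w_beta, b_beta) and single-vector shapes are illustrative assumptions, not the model's actual weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_image_context(c_img, s_prev, w_beta, b_beta=0.0):
    """Sketch of the gating scalar: beta_t in (0, 1) is predicted from
    the previous decoder state s_prev and rescales the image context
    vector, letting the decoder ignore the image (beta_t near 0) when
    the word being generated is not visually grounded."""
    beta_t = sigmoid(float(w_beta @ s_prev) + b_beta)
    return beta_t, beta_t * c_img
```

With a strongly negative pre-activation the gate closes and the image contribution effectively vanishes, which mirrors the near-zero summed gating scalars we observe for the stochastic attention.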
Yet, we noticed a common misbehavior in all our multimodal models: if the attention loses track of the objects in the picture and "gets lost", the model still takes it into account and somehow overrides the information brought by the text-based annotations. The translation is then totally misled. We illustrate with an example: the monomodal translation has a sentence-level BLEU of 82.16, whilst the soft attention and hard stochastic attention scores are 16.82 and 34.45 respectively. Figure FIGREF47 shows the attention maps for both mechanisms. Nevertheless, one has to concede that the use of images indubitably helps the translation, as shown in the score table. ### Conclusion and future work
We have tried different attention mechanisms and tweaks for the image modality. We showed improvements and encouraging results overall on the Flickr30K Entities dataset. Even though we identified some flaws of the current attention mechanisms, we can conclude fairly safely that images are a helpful resource for the machine in a translation task. We look forward to trying out richer and more suitable features for multimodal translation (i.e. dense captioning features). Another interesting approach would be to use visually grounded word embeddings to capture visual notions of semantic relatedness. ### Acknowledgements
This work was partly supported by the Chist-Era project IGLU with contribution from the Belgian Fonds de la Recherche Scientifique (FNRS), contract no. R.50.11.15.F, and by the FSO project VCYCLE with contribution from the Belgian Walloon Region, contract no. 1510501.

Figure 1: Die beiden Kinder spielen auf dem Spielplatz.
Figure 2: Ein Junge sitzt auf und blickt aus einem Mikroskop.
Figure 3: Ein Mann sitzt neben einem Computerbildschirm.
Figure 4: Ein Mann in einem orangefarbenen Hemd und mit Helm.
Figure 5: Ein Mädchen mit einer Schwimmweste
Figure 6: Ein kleiner schwarzer Hund springt über Hindernisse.
Table 1: Results on the 1000 test triples of the Multi30K dataset. We pick Calixto et al. (2017) scores as baseline and report our results accordingly (green for improvement and red for deterioration). In each of our experiments, Soft attention is used for text. The comparison is hence with respect to the attention mechanism used for the image modality.
Figure 7: Representative figures of the soft-attention behavior discussed in §5.3
Figure 8: Wrong detection for both Soft attention (top) and Hard stochastic attention (bottom)
|
Soft attention, Hard Stochastic attention, Local Attention
|
The Movement believes all of the following EXCEPT: Questioning the failings of the old society, failings have put them in the dome; failure of foreign policy (self-containment)
A. The 'old society' failed in major ways
B. The 'old society's' failings led to the creation of the Dome
C. The best way to fight those controlling the Dome is collectively, versus individually
D. They cannot escape the dome without a strong foreign policy
|
A FALL OF GLASS By STANLEY R. LEE Illustrated by DILLON [Transcriber's Note: This etext was produced from Galaxy Magazine October 1960. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] The weatherman was always right: Temperature, 59; humidity, 47%; occasional light showers—but of what? The pockets of Mr. Humphrey Fownes were being picked outrageously. It was a splendid day. The temperature was a crisp 59 degrees, the humidity a mildly dessicated 47%. The sun was a flaming orange ball in a cloudless blue sky. His pockets were picked eleven times. It should have been difficult. Under the circumstances it was a masterpiece of pocket picking. What made it possible was Humphrey Fownes' abstraction; he was an uncommonly preoccupied individual. He was strolling along a quiet residential avenue: small private houses, one after another, a place of little traffic and minimum distractions. But he was thinking about weather, which was an unusual subject to begin with for a person living in a domed city. He was thinking so deeply about it that it never occurred to him that entirely too many people were bumping into him. He was thinking about Optimum Dome Conditions (a crisp 59 degrees, a mildly dessicated 47%) when a bogus postman, who pretended to be reading a postal card, jostled him. In the confusion of spilled letters and apologies from both sides, the postman rifled Fownes's handkerchief and inside jacket pockets. He was still thinking about temperature and humidity when a pretty girl happened along with something in her eye. They collided. She got his right and left jacket pockets. It was much too much for coincidence. The sidewalk was wide enough to allow four people to pass at one time. He should surely have become suspicious when two men engaged in a heated argument came along. In the ensuing contretemps they emptied his rear pants pockets, got his wristwatch and restored the contents of the handkerchief pocket. 
It all went off very smoothly, like a game of put and take—the sole difference being that Humphrey Fownes had no idea he was playing. There was an occasional tinkle of falling glass. It fell on the streets and houses, making small geysers of shiny mist, hitting with a gentle musical sound, like the ephemeral droppings of a celesta. It was precipitation peculiar to a dome: feather-light fragments showering harmlessly on the city from time to time. Dome weevils, their metal arms reaching out with molten glass, roamed the huge casserole, ceaselessly patching and repairing. Humphrey Fownes strode through the puffs of falling glass still intrigued by a temperature that was always 59 degrees, by a humidity that was always 47%, by weather that was always Optimum. It was this rather than skill that enabled the police to maintain such a tight surveillance on him, a surveillance that went to the extent of getting his fingerprints off the postman's bag, and which photographed, X-rayed and chemically analyzed the contents of his pockets before returning them. Two blocks away from his home a careless housewife spilled a five-pound bag of flour as he was passing. It was really plaster of Paris. He left his shoe prints, stride measurement, height, weight and handedness behind. By the time Fownes reached his front door an entire dossier complete with photographs had been prepared and was being read by two men in an orange patrol car parked down the street. Lanfierre had undoubtedly been affected by his job. Sitting behind the wheel of the orange car, he watched Humphrey Fownes approach with a distinct feeling of admiration, although it was an odd, objective kind of admiration, clinical in nature. It was similar to that of a pathologist observing for the first time a new and particularly virulent strain of pneumococcus under his microscope. Lanfierre's job was to ferret out aberration. It couldn't be tolerated within the confines of a dome. 
Conformity had become more than a social force; it was a physical necessity. And, after years of working at it, Lanfierre had become an admirer of eccentricity. He came to see that genuine quirks were rare and, as time went on, due partly to his own small efforts, rarer. Fownes was a masterpiece of queerness. He was utterly inexplicable. Lanfierre was almost proud of Humphrey Fownes. "Sometimes his house shakes ," Lanfierre said. "House shakes," Lieutenant MacBride wrote in his notebook. Then he stopped and frowned. He reread what he'd just written. "You heard right. The house shakes ," Lanfierre said, savoring it. MacBride looked at the Fownes house through the magnifying glass of the windshield. "Like from ... side to side ?" he asked in a somewhat patronizing tone of voice. "And up and down." MacBride returned the notebook to the breast pocket of his orange uniform. "Go on," he said, amused. "It sounds interesting." He tossed the dossier carelessly on the back seat. Lanfierre sat stiffly behind the wheel, affronted. The cynical MacBride couldn't really appreciate fine aberrations. In some ways MacBride was a barbarian. Lanfierre had held out on Fownes for months. He had even contrived to engage him in conversation once, a pleasantly absurd, irrational little chat that titillated him for weeks. It was only with the greatest reluctance that he finally mentioned Fownes to MacBride. After years of searching for differences Lanfierre had seen how extraordinarily repetitious people were, echoes really, dimly resounding echoes, each believing itself whole and separate. They spoke in an incessant chatter of cliches, and their actions were unbelievably trite. Then a fine robust freak came along and the others—the echoes—refused to believe it. The lieutenant was probably on the point of suggesting a vacation. "Why don't you take a vacation?" Lieutenant MacBride suggested. "It's like this, MacBride. Do you know what a wind is? A breeze? A zephyr?" "I've heard some." 
"They say there are mountain-tops where winds blow all the time. Strong winds, MacBride. Winds like you and I can't imagine. And if there was a house sitting on such a mountain and if winds did blow, it would shake exactly the way that one does. Sometimes I get the feeling the whole place is going to slide off its foundation and go sailing down the avenue." Lieutenant MacBride pursed his lips. "I'll tell you something else," Lanfierre went on. "The windows all close at the same time. You'll be watching and all of a sudden every single window in the place will drop to its sill." Lanfierre leaned back in the seat, his eyes still on the house. "Sometimes I think there's a whole crowd of people in there waiting for a signal—as if they all had something important to say but had to close the windows first so no one could hear. Why else close the windows in a domed city? And then as soon as the place is buttoned up they all explode into conversation—and that's why the house shakes." MacBride whistled. "No, I don't need a vacation." A falling piece of glass dissolved into a puff of gossamer against the windshield. Lanfierre started and bumped his knee on the steering wheel. "No, you don't need a rest," MacBride said. "You're starting to see flying houses, hear loud babbling voices. You've got winds in your brain, Lanfierre, breezes of fatigue, zephyrs of irrationality—" At that moment, all at once, every last window in the house slammed shut. The street was deserted and quiet, not a movement, not a sound. MacBride and Lanfierre both leaned forward, as if waiting for the ghostly babble of voices to commence. The house began to shake. It rocked from side to side, it pitched forward and back, it yawed and dipped and twisted, straining at the mooring of its foundation. The house could have been preparing to take off and sail down the.... MacBride looked at Lanfierre and Lanfierre looked at MacBride and then they both looked back at the dancing house. 
"And the water ," Lanfierre said. "The water he uses! He could be the thirstiest and cleanest man in the city. He could have a whole family of thirsty and clean kids, and he still wouldn't need all that water." The lieutenant had picked up the dossier. He thumbed through the pages now in amazement. "Where do you get a guy like this?" he asked. "Did you see what he carries in his pockets?" "And compasses won't work on this street." The lieutenant lit a cigarette and sighed. He usually sighed when making the decision to raid a dwelling. It expressed his weariness and distaste for people who went off and got neurotic when they could be enjoying a happy, normal existence. There was something implacable about his sighs. "He'll be coming out soon," Lanfierre said. "He eats supper next door with a widow. Then he goes to the library. Always the same. Supper at the widow's next door and then the library." MacBride's eyebrows went up a fraction of an inch. "The library?" he said. "Is he in with that bunch?" Lanfierre nodded. "Should be very interesting," MacBride said slowly. "I can't wait to see what he's got in there," Lanfierre murmured, watching the house with a consuming interest. They sat there smoking in silence and every now and then their eyes widened as the house danced a new step. Fownes stopped on the porch to brush the plaster of paris off his shoes. He hadn't seen the patrol car and this intense preoccupation of his was also responsible for the dancing house—he simply hadn't noticed. There was a certain amount of vibration, of course. He had a bootleg pipe connected into the dome blower system, and the high-pressure air caused some buffeting against the thin walls of the house. At least, he called it buffeting; he'd never thought to watch from outside. He went in and threw his jacket on the sofa, there being no room left in the closets. Crossing the living room he stopped to twist a draw-pull. Every window slammed shut. "Tight as a kite," he thought, satisfied. 
He continued on toward the closet at the foot of the stairs and then stopped again. Was that right? No, snug as a hug in a rug . He went on, thinking: The old devils. The downstairs closet was like a great watch case, a profusion of wheels surrounding the Master Mechanism, which was a miniature see-saw that went back and forth 365-1/4 times an hour. The wheels had a curious stateliness about them. They were all quite old, salvaged from grandfather's clocks and music boxes and they went around in graceful circles at the rate of 30 and 31 times an hour ... although there was one slightly eccentric cam that vacillated between 28 and 29. He watched as they spun and flashed in the darkness, and then set them for seven o'clock in the evening, April seventh, any year. Outside, the domed city vanished. It was replaced by an illusion. Or, as Fownes hoped it might appear, the illusion of the domed city vanished and was replaced by a more satisfactory, and, for his specific purpose, more functional, illusion. Looking through the window he saw only a garden. Instead of an orange sun at perpetual high noon, there was a red sun setting brilliantly, marred only by an occasional arcover which left the smell of ozone in the air. There was also a gigantic moon. It hid a huge area of sky, and it sang. The sun and moon both looked down upon a garden that was itself scintillant, composed largely of neon roses. Moonlight, he thought, and roses. Satisfactory. And cocktails for two. Blast, he'd never be able to figure that one out! He watched as the moon played, Oh, You Beautiful Doll and the neon roses flashed slowly from red to violet, then went back to the closet and turned on the scent. The house began to smell like an immensely concentrated rose as the moon shifted to People Will Say We're In Love . He rubbed his chin critically. It seemed all right. A dreamy sunset, an enchanted moon, flowers, scent. They were all purely speculative of course. 
He had no idea how a rose really smelled—or looked for that matter. Not to mention a moon. But then, neither did the widow. He'd have to be confident, assertive. Insist on it. I tell you, my dear, this is a genuine realistic romantic moon. Now, does it do anything to your pulse? Do you feel icy fingers marching up and down your spine? His own spine didn't seem to be affected. But then he hadn't read that book on ancient mores and courtship customs. How really odd the ancients were. Seduction seemed to be an incredibly long and drawn-out process, accompanied by a considerable amount of falsification. Communication seemed virtually impossible. "No" meant any number of things, depending on the tone of voice and the circumstances. It could mean yes, it could mean ask me again later on this evening. He went up the stairs to the bedroom closet and tried the rain-maker, thinking roguishly: Thou shalt not inundate. The risks he was taking! A shower fell gently on the garden and a male chorus began to chant Singing in the Rain . Undiminished, the yellow moon and the red sun continued to be brilliant, although the sun occasionally arced over and demolished several of the neon roses. The last wheel in the bedroom closet was a rather elegant steering wheel from an old 1995 Studebaker. This was on the bootleg pipe; he gingerly turned it. Far below in the cellar there was a rumble and then the soft whistle of winds came to him. He went downstairs to watch out the living room window. This was important; the window had a really fixed attitude about air currents. The neon roses bent and tinkled against each other as the wind rose and the moon shook a trifle as it whispered Cuddle Up a Little Closer . He watched with folded arms, considering how he would start. My dear Mrs. Deshazaway. Too formal. They'd be looking out at the romantic garden; time to be a bit forward. My very dear Mrs. Deshazaway. No. Contrived. How about a simple, Dear Mrs. Deshazaway . That might be it. 
I was wondering, seeing as how it's so late, if you wouldn't rather stay over instead of going home.... Preoccupied, he hadn't noticed the winds building up, didn't hear the shaking and rattling of the pipes. There were attic pipes connected to wall pipes and wall pipes connected to cellar pipes, and they made one gigantic skeleton that began to rattle its bones and dance as high-pressure air from the dome blower rushed in, slowly opening the Studebaker valve wider and wider.... The neon roses thrashed about, extinguishing each other. The red sun shot off a mass of sparks and then quickly sank out of sight. The moon fell on the garden and rolled ponderously along, crooning When the Blue of the Night Meets the Gold of the Day . The shaking house finally woke him up. He scrambled upstairs to the Studebaker wheel and shut it off. At the window again, he sighed. Repairs were in order. And it wasn't the first time the winds got out of line. Why didn't she marry him and save all this bother? He shut it all down and went out the front door, wondering about the rhyme of the months, about stately August and eccentric February and romantic April. April. Its days were thirty and it followed September. And all the rest have thirty-one. What a strange people, the ancients! He still didn't see the orange car parked down the street. "Men are too perishable," Mrs. Deshazaway said over dinner. "For all practical purposes I'm never going to marry again. All my husbands die." "Would you pass the beets, please?" Humphrey Fownes said. She handed him a platter of steaming red beets. "And don't look at me that way," she said. "I'm not going to marry you and if you want reasons I'll give you four of them. Andrew. Curt. Norman. And Alphonse." The widow was a passionate woman. She did everything passionately—talking, cooking, dressing. Her beets were passionately red. Her clothes rustled and her high heels clicked and her jewelry tinkled. She was possessed by an uncontrollable dynamism. 
Fownes had never known anyone like her. "You forgot to put salt on the potatoes," she said passionately, then went on as calmly as it was possible for her to be, to explain why she couldn't marry him. "Do you have any idea what people are saying? They're all saying I'm a cannibal! I rob my husbands of their life force and when they're empty I carry their bodies outside on my way to the justice of the peace." "As long as there are people," he said philosophically, "there'll be talk." "But it's the air! Why don't they talk about that? The air is stale, I'm positive. It's not nourishing. The air is stale and Andrew, Curt, Norman and Alphonse couldn't stand it. Poor Alphonse. He was never so healthy as on the day he was born. From then on things got steadily worse for him." "I don't seem to mind the air." She threw up her hands. "You'd be the worst of the lot!" She left the table, rustling and tinkling about the room. "I can just hear them. Try some of the asparagus. Five. That's what they'd say. That woman did it again. And the plain fact is I don't want you on my record." "Really," Fownes protested. "I feel splendid. Never better." He could hear her moving about and then felt her hands on his shoulders. "And what about those very elaborate plans you've been making to seduce me?" Fownes froze with three asparagus hanging from his fork. "Don't you think they'll find out? I found out and you can bet they will. It's my fault, I guess. I talk too much. And I don't always tell the truth. To be completely honest with you, Mr. Fownes, it wasn't the old customs at all standing between us, it was air. I can't have another man die on me, it's bad for my self-esteem. And now you've gone and done something good and criminal, something peculiar." Fownes put his fork down. "Dear Mrs. Deshazaway," he started to say. "And of course when they do find out and they ask you why, Mr. Fownes, you'll tell them. No, no heroics, please! 
When they ask a man a question he always answers and you will too. You'll tell them I wanted to be courted and when they hear that they'll be around to ask me a few questions. You see, we're both a bit queer." "I hadn't thought of that," Fownes said quietly. "Oh, it doesn't really matter. I'll join Andrew, Curt, Norman—" "That won't be necessary," Fownes said with unusual force. "With all due respect to Andrew, Curt, Norman and Alphonse, I might as well state here and now I have other plans for you, Mrs. Deshazaway." "But my dear Mr. Fownes," she said, leaning across the table. "We're lost, you and I." "Not if we could leave the dome," Fownes said quietly. "That's impossible! How?" In no hurry, now that he had the widow's complete attention, Fownes leaned across the table and whispered: "Fresh air, Mrs. Deshazaway? Space? Miles and miles of space where the real-estate monopoly has no control whatever? Where the wind blows across prairies ; or is it the other way around? No matter. How would you like that , Mrs. Deshazaway?" Breathing somewhat faster than usual, the widow rested her chin on her two hands. "Pray continue," she said. "Endless vistas of moonlight and roses? April showers, Mrs. Deshazaway. And June, which as you may know follows directly upon April and is supposed to be the month of brides, of marrying. June also lies beyond the dome." "I see." " And ," Mr. Fownes added, his voice a honeyed whisper, "they say that somewhere out in the space and the roses and the moonlight, the sleeping equinox yawns and rises because on a certain day it's vernal and that's when it roams the Open Country where geigers no longer scintillate." " My. " Mrs. Deshazaway rose, paced slowly to the window and then came back to the table, standing directly over Fownes. "If you can get us outside the dome," she said, "out where a man stays warm long enough for his wife to get to know him ... if you can do that, Mr. Fownes ... you may call me Agnes." 
When Humphrey Fownes stepped out of the widow's house, there was a look of such intense abstraction on his features that Lanfierre felt a wistful desire to get out of the car and walk along with the man. It would be such a deliciously insane experience. ("April has thirty days," Fownes mumbled, passing them, "because thirty is the largest number such that all smaller numbers not having a common divisor with it are primes ." MacBride frowned and added it to the dossier. Lanfierre sighed.) Pinning his hopes on the Movement, Fownes went straight to the library several blocks away, a shattered depressing place given over to government publications and censored old books with holes in them. It was used so infrequently that the Movement was able to meet there undisturbed. The librarian was a yellowed, dog-eared woman of eighty. She spent her days reading ancient library cards and, like the books around her, had been rendered by time's own censor into near unintelligibility. "Here's one," she said to him as he entered. " Gulliver's Travels. Loaned to John Wesley Davidson on March 14, 1979 for five days. What do you make of it?" In the litter of books and cards and dried out ink pads that surrounded the librarian, Fownes noticed a torn dust jacket with a curious illustration. "What's that?" he said. "A twister," she replied quickly. "Now listen to this . Seven years later on March 21, 1986, Ella Marshall Davidson took out the same book. What do you make of that ?" "I'd say," Humphrey Fownes said, "that he ... that he recommended it to her, that one day they met in the street and he told her about this book and then they ... they went to the library together and she borrowed it and eventually, why eventually they got married." "Hah! They were brother and sister!" the librarian shouted in her parched voice, her old buckram eyes laughing with cunning. Fownes smiled weakly and looked again at the dust jacket. The twister was unquestionably a meteorological phenomenon. 
It spun ominously, like a malevolent top, and coursed the countryside destructively, carrying a Dorothy to an Oz. He couldn't help wondering if twisters did anything to feminine pulses, if they could possibly be a part of a moonlit night, with cocktails and roses. He absently stuffed the dust jacket in his pocket and went on into the other rooms, the librarian mumbling after him: "Edna Murdoch Featherstone, April 21, 1991," as though reading inscriptions on a tombstone. The Movement met in what had been the children's room, where unpaid ladies of the afternoon had once upon a time read stories to other people's offspring. The members sat around at the miniature tables looking oddly like giants fled from their fairy tales, protesting. "Where did the old society fail?" the leader was demanding of them. He stood in the center of the room, leaning on a heavy knobbed cane. He glanced around at the group almost complacently, and waited as Humphrey Fownes squeezed into an empty chair. "We live in a dome," the leader said, "for lack of something. An invention! What is the one thing that the great technological societies before ours could not invent, notwithstanding their various giant brains, electronic and otherwise?" Fownes was the kind of man who never answered a rhetorical question. He waited, uncomfortable in the tight chair, while the others struggled with this problem in revolutionary dialectics. "A sound foreign policy," the leader said, aware that no one else had obtained the insight. "If a sound foreign policy can't be created the only alternative is not to have any foreign policy at all. Thus the movement into domes began—by common consent of the governments. This is known as self-containment." Dialectically out in left field, Humphrey Fownes waited for a lull in the ensuing discussion and then politely inquired how it might be arranged for him to get out. "Out?" the leader said, frowning. "Out? Out where?" "Outside the dome." "Oh. 
All in good time, my friend. One day we shall all pick up and leave." "And that day I'll await impatiently," Fownes replied with marvelous tact, "because it will be lonely out there for the two of us. My future wife and I have to leave now." "Nonsense. Ridiculous! You have to be prepared for the Open Country. You can't just up and leave, it would be suicide, Fownes. And dialectically very poor." "Then you have discussed preparations, the practical necessities of life in the Open Country. Food, clothing, a weapon perhaps? What else? Have I left anything out?" The leader sighed. "The gentleman wants to know if he's left anything out," he said to the group. Fownes looked around at them, at some dozen pained expressions. "Tell the man what he's forgotten," the leader said, walking to the far window and turning his back quite pointedly on them. Everyone spoke at the same moment. "A sound foreign policy," they all said, it being almost too obvious for words. On his way out the librarian shouted at him: "A Tale of a Tub, thirty-five years overdue!" She was calculating the fine as he closed the door. Humphrey Fownes' preoccupation finally came to an end when he was one block away from his house. It was then that he realized something unusual must have occurred. An orange patrol car of the security police was parked at his front door. And something else was happening too. His house was dancing. It was disconcerting, and at the same time enchanting, to watch one's residence frisking about on its foundation. It was such a strange sight that for the moment he didn't give a thought to what might be causing it. But when he stepped gingerly onto the porch, which was doing its own independent gavotte, he reached for the doorknob with an immense curiosity. The door flung itself open and knocked him back off the porch. 
From a prone position on his minuscule front lawn, Fownes watched as his favorite easy chair sailed out of the living room on a blast of cold air and went pinwheeling down the avenue in the bright sunshine. A wild wind and a thick fog poured out of the house. It brought chairs, suits, small tables, lamps trailing their cords, ashtrays, sofa cushions. The house was emptying itself fiercely, as if disgorging an old, spoiled meal. From deep inside he could hear the rumble of his ancient upright piano as it rolled ponderously from room to room. He stood up; a wet wind swept over him, whipping at his face, toying with his hair. It was a whistling in his ears, and a tingle on his cheeks. He got hit by a shoe. As he forced his way back to the doorway needles of rain played over his face and he heard a voice cry out from somewhere in the living room. "Help!" Lieutenant MacBride called. Standing in the doorway with his wet hair plastered down on his dripping scalp, the wind roaring about him, the piano rumbling in the distance like thunder, Humphrey Fownes suddenly saw it all very clearly. "Winds," he said in a whisper. "What's happening?" MacBride yelled, crouching behind the sofa. "March winds," he said. "What?!" "April showers!" The winds roared for a moment and then MacBride's lost voice emerged from the blackness of the living room. "These are not Optimum Dome Conditions!" the voice wailed. "The temperature is not 59 degrees. The humidity is not 47%!" Fownes held his face up to let the rain fall on it. "Moonlight!" he shouted. "Roses! My soul for a cocktail for two!" He grasped the doorway to keep from being blown out of the house. "Are you going to make it stop or aren't you!" MacBride yelled. "You'll have to tell me what you did first!" "I told him not to touch that wheel! Lanfierre. He's in the upstairs bedroom!" When he heard this Fownes plunged into the house and fought his way up the stairs. 
He found Lanfierre standing outside the bedroom with a wheel in his hand. "What have I done?" Lanfierre asked in the monotone of shock. Fownes took the wheel. It was off a 1995 Studebaker. "I'm not sure what's going to come of this," he said to Lanfierre with an astonishing amount of objectivity, "but the entire dome air supply is now coming through my bedroom." The wind screamed. "Is there something I can turn?" Lanfierre asked. "Not any more there isn't." They started down the stairs carefully, but the wind caught them and they quickly reached the bottom in a wet heap. Recruiting Lieutenant MacBride from behind his sofa, the men carefully edged out of the house and forced the front door shut. The wind died. The fog dispersed. They stood dripping in the Optimum Dome Conditions of the bright avenue. "I never figured on this," Lanfierre said, shaking his head. With the front door closed the wind quickly built up inside the house. They could see the furnishings whirl past the windows. The house did a wild, elated jig. "What kind of a place is this?" MacBride said, his courage beginning to return. He took out his notebook but it was a soggy mess. He tossed it away. "Sure, he was different," Lanfierre murmured. "I knew that much." When the roof blew off they weren't really surprised. With a certain amount of equanimity they watched it lift off almost gracefully, standing on end for a moment before toppling to the ground. It was strangely slow motion, as was the black twirling cloud that now rose out of the master bedroom, spewing shorts and socks and cases every which way. "Now what?" MacBride said, thoroughly exasperated, as this strange black cloud began to accelerate, whirling about like some malevolent top.... Humphrey Fownes took out the dust jacket he'd found in the library. He held it up and carefully compared the spinning cloud in his bedroom with the illustration. The cloud rose and spun, assuming the identical shape of the illustration. 
"It's a twister," he said softly. "A Kansas twister!" "What," MacBride asked, his bravado slipping away again, "what ... is a twister?" The twister roared and moved out of the bedroom, out over the rear of the house toward the side of the dome. "It says here," Fownes shouted over the roaring, "that Dorothy traveled from Kansas to Oz in a twister and that ... and that Oz is a wonderful and mysterious land beyond the confines of everyday living." MacBride's eyes and mouth were great zeros. "Is there something I can turn?" Lanfierre asked. Huge chunks of glass began to fall around them. "Fownes!" MacBride shouted. "This is a direct order! Make it go back!" But Fownes had already begun to run on toward the next house, dodging mountainous puffs of glass as he went. "Mrs. Deshazaway!" he shouted. "Yoo-hoo, Mrs. Deshazaway!" The dome weevils were going berserk trying to keep up with the precipitation. They whirred back and forth at frightful speed, then, emptied of molten glass, rushed to the Trough which they quickly emptied and then rushed about empty-handed. "Yoo-hoo!" he yelled, running. The artificial sun vanished behind the mushrooming twister. Optimum temperature collapsed. "Mrs. Deshazaway! Agnes, will you marry me? Yoo-hoo!" Lanfierre and Lieutenant MacBride leaned against their car and waited, dazed. There was quite a large fall of glass.
|
C. The best way to fight those controlling the Dome is collectively, versus individually
|
What recommendations do they offer?
|
### Introduction
Machine translation (MT) has made astounding progress in recent years thanks to improvements in neural modelling BIBREF0, BIBREF1, BIBREF2, and the resulting increase in translation quality is creating new challenges for MT evaluation. Human evaluation remains the gold standard, but there are many design decisions that potentially affect the validity of such a human evaluation. This paper is a response to two recent human evaluation studies in which some neural machine translation systems reportedly performed at (or above) the level of human translators for news translation from Chinese to English BIBREF3 and English to Czech BIBREF4, BIBREF5. Both evaluations were based on current best practices in the field: they used a source-based direct assessment with non-expert annotators, using data sets and the evaluation protocol of the Conference on Machine Translation (WMT). While the results are intriguing, especially because they are based on best practices in MT evaluation, BIBREF5 warn against taking their results as evidence for human–machine parity, and caution that for well-resourced language pairs, an update of WMT evaluation style will be needed to keep up with the progress in machine translation. We concur that these findings have demonstrated the need to critically re-evaluate the design of human MT evaluation. Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. 
Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis. ### Background
We first review current methods to assess the quality of machine translation system outputs, and highlight potential issues in using these methods to compare such outputs to translations produced by professional human translators. ### Background ::: Human Evaluation of Machine Translation
The evaluation of MT quality has been the subject of controversial discussions in research and the language services industry for decades due to its high economic importance. While automatic evaluation methods are particularly important in system development, there is consensus that a reliable evaluation should—despite high costs—be carried out by humans. Various methods have been proposed for the human evaluation of MT quality BIBREF8. What they have in common is that the MT output to be rated is paired with a translation hint: the source text or a reference translation. The MT output is then either adapted or scored with reference to the translation hint by human post-editors or raters, respectively. As part of the large-scale evaluation campaign at WMT, two primary evaluation methods have been used in recent years: relative ranking and direct assessment BIBREF9. In the case of relative ranking, raters are presented with outputs from two or more systems, which they are asked to evaluate relative to each other (e.g., to determine system A is better than system B). Ties (e.g., system A is as good or as bad as system B) are typically allowed. Compared to absolute scores on Likert scales, data obtained through relative ranking show better inter- and intra-annotator agreement BIBREF10. However, they do not allow conclusions to be drawn about the order of magnitude of the differences, so that it is not possible to determine how much better system A was than system B. This is one of the reasons why direct assessment has prevailed as an evaluation method more recently. In contrast to relative ranking, the raters are presented with one MT output at a time, to which they assign a score between 0 and 100. To increase homogeneity, each rater's ratings are standardised BIBREF11. Reference translations serve as the basis in the context of WMT, and evaluations are carried out by monolingual raters. 
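The per-rater score standardisation mentioned above is typically a z-transformation of each rater's raw 0–100 scores, so that raters who use different ranges of the scale become comparable. A minimal sketch (illustrative only; the exact WMT procedure may differ in detail):

```python
from statistics import mean, stdev

def standardise_scores(raw_scores):
    """Z-standardise one rater's direct-assessment scores (0-100).

    Assumes the rater used at least two distinct scores, so that the
    standard deviation is non-zero.
    """
    mu = mean(raw_scores)
    sigma = stdev(raw_scores)
    return [(score - mu) / sigma for score in raw_scores]
```

After standardisation, segment-level scores from a severe rater and a lenient rater can be averaged without one rater's scale dominating the other.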
To avoid reference bias, the evaluation can be based on source texts instead, which presupposes bilingual raters, but leads to more reliable results overall BIBREF12. ### Background ::: Assessing Human–Machine Parity
BIBREF3 base their claim of achieving human–machine parity on a source-based direct assessment as described in the previous section, where they found no significant difference in ratings between the output of their MT system and a professional human translation. Similarly, BIBREF5 report that the best-performing English to Czech system submitted to WMT 2018 BIBREF4 significantly outperforms the human reference translation. However, the authors caution against interpreting their results as evidence of human–machine parity, highlighting potential limitations of the evaluation. In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations. ### Background ::: Assessing Human–Machine Parity ::: Choice of Raters
The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations. ### Background ::: Assessing Human–Machine Parity ::: Linguistic Context
MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e.g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents. ### Background ::: Assessing Human–Machine Parity ::: Reference Translations
The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would indicate that human translations used to assess parity claims need to be carefully vetted for their quality. We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6). ### Background ::: Translations
We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article:

- H$_A$: the professional human translations in the dataset of BIBREF3.
- H$_B$: professional human translations that we ordered from a different translation vendor, which included a post-hoc native English check. We produced these only for the documents that were originally Chinese, as discussed in more detail in Section SECREF35.
- MT$_1$: the machine translations produced by BIBREF3's best system (Combo-6), for which the authors found parity with H$_A$.
- MT$_2$: the machine translations produced by Google's production system (Google Translate) in October 2017, as contained in BIBREF3's dataset.

Statistical significance is denoted by * ($p\le .05$), ** ($p\le .01$), and *** ($p\le .001$) throughout this article, unless otherwise stated. ### Choice of Raters
Both professional and amateur evaluators can be involved in human evaluation of MT quality. However, from published work in the field BIBREF19, it is fair to say that there is a tendency to “rely on students and amateur evaluators, sometimes with an undefined (or self-rated) proficiency in the languages involved, an unknown expertise with the text type" BIBREF8. Previous work on evaluation of MT output by professional translators against crowd workers by BIBREF20 showed that for all language pairs (involving 11 languages) evaluated, crowd workers tend to be more accepting of the MT output by giving higher fluency and adequacy scores and performing very little post-editing. The authors argued that non-expert translators lack knowledge of translation and so might not notice subtle differences that make one translation more suitable than another, and therefore, when confronted with a translation that is hard to post-edit, tend to accept the MT rather than try to improve it. ### Choice of Raters ::: Evaluation Protocol
We test for differences in ratings of MT outputs and human translations between experts and non-experts. We consider professional translators as experts, and both crowd workers and MT researchers as non-experts. We conduct a relative ranking experiment using one professional human translation (H$_A$) and two machine translations (MT$_1$ and MT$_2$), considering the native Chinese part of the WMT 2017 Chinese–English test set (see Section SECREF35 for details). The 299 sentences used in the experiments stem from 41 documents, randomly selected from all the documents in the test set originally written in Chinese, and are shown in their original order. Raters are shown one sentence at a time, and see the original Chinese source alongside the three translations. The previous and next source sentences are also shown, in order to provide the annotator with local inter-sentential context. Five raters—two experts and three non-experts—participated in the assessment. The experts were professional Chinese to English translators: one native in Chinese with a fluent level of English, the other native in English with a fluent level of Chinese. The non-experts were NLP researchers native in Chinese, working in an English-speaking country. The ratings are elicited with Appraise BIBREF21. We derive an overall score for each translation (H$_A$, MT$_1$, and MT$_2$) based on the rankings. We use the TrueSkill method adapted to MT evaluation BIBREF22 following its usage at WMT15, i.e., we run 1,000 iterations over the rankings recorded with Appraise, followed by clustering (significance level $\alpha =0.05$). ### Choice of Raters ::: Results
Table TABREF17 shows the TrueSkill scores for each translation resulting from the evaluations by expert and non-expert translators. We find that translation expertise affects the judgement of MT$_1$ and H$_A$, where the rating gap is wider for the expert raters. This indicates that non-experts disregard translation nuances in the evaluation, which leads to a more tolerant judgement of MT systems and a lower inter-annotator agreement ($\kappa =0.13$ for non-experts versus $\kappa =0.254$ for experts). It is worth noting that, regardless of their expertise, the performance of human raters may vary over time. For example, performance may improve or decrease due to learning effects or fatigue, respectively BIBREF23. It is likely that such longitudinal effects are present in our data. They should be accounted for in future work, e.g., by using trial number as an additional predictor BIBREF24. ### Linguistic Context
Another concern is the unit of evaluation. Historically, machine translation has primarily operated on the level of sentences, and so has machine translation evaluation. However, it has been remarked that human raters do not necessarily understand the intended meaning of a sentence shown out-of-context BIBREF25, which limits their ability to spot some mistranslations. Also, a sentence-level evaluation will be blind to errors related to textual cohesion and coherence. While sentence-level evaluation may be good enough when evaluating MT systems of relatively low quality, we hypothesise that with additional context, raters will be able to make more nuanced quality assessments, and will also reward translations that show more textual cohesion and coherence. We believe that this aspect should be considered in evaluation, especially when making claims about human–machine parity, since human translators can and do take inter-sentential context into account BIBREF15, BIBREF16. ### Linguistic Context ::: Evaluation Protocol
We test if the availability of document-level context affects human–machine parity claims in terms of adequacy and fluency. In a pairwise ranking experiment, we show raters (i) isolated sentences and (ii) entire documents, asking them to choose the better (with ties allowed) from two translation outputs: one produced by a professional translator, the other by a machine translation system. We do not show reference translations as one of the two options is itself a human translation. We use source sentences and documents from the WMT 2017 Chinese–English test set (see Section SECREF8): documents are full news articles, and sentences are randomly drawn from these news articles, regardless of their position. We only consider articles from the test set that are native Chinese (see Section SECREF35). In order to compare our results to those of BIBREF3, we use both their professional human (H$_A$) and machine translations (MT$_1$). Each rater evaluates both sentences and documents, but never the same text in both conditions so as to avoid repetition priming BIBREF26. The order of experimental items as well as the placement of choices (H$_A$, MT$_1$; left, right) are randomised. We use spam items for quality control BIBREF27: In a small fraction of items, we render one of the two options nonsensical by randomly shuffling the order of all translated words, except for 10 % at the beginning and end. If a rater marks a spam item as better than or equal to an actual translation, this is a strong indication that they did not read both options carefully. We recruit professional translators (see Section SECREF3) from proz.com, a well-known online market place for professional freelance translation, considering Chinese to English translators and native English revisers for the adequacy and fluency conditions, respectively. In each condition, four raters evaluate 50 documents (plus 5 spam items) and 104 sentences (plus 16 spam items). 
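The spam-item construction described above can be sketched as follows. This is a minimal illustration under our own assumptions: the function name and the handling of very short sentences are not taken from the study.

```python
import random

def make_spam_item(translation, keep_frac=0.10, seed=0):
    """Render a translation nonsensical by shuffling the order of its words,
    keeping roughly 10% of the words fixed at the beginning and at the end."""
    words = translation.split()
    k = max(1, round(len(words) * keep_frac))
    if len(words) <= 2 * k:
        return translation  # too short to shuffle meaningfully (our assumption)
    head, middle, tail = words[:k], words[k:-k], words[-k:]
    random.Random(seed).shuffle(middle)  # seeded for reproducibility
    return " ".join(head + middle + tail)
```

Keeping the sentence edges intact means a rater who reads only the first few words cannot distinguish a spam item from a genuine translation, so spotting it requires reading the whole candidate.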
We use two non-overlapping sets of documents and two non-overlapping sets of sentences, and each is evaluated by two raters. ### Linguistic Context ::: Results
Results are shown in Table TABREF21. We note that sentence ratings from two raters are excluded from our analysis because of unintentional textual overlap with documents, meaning we cannot fully rule out that sentence-level decisions were informed by access to the full documents they originated from. Moreover, we exclude document ratings from one rater in the fluency condition because of poor performance on spam items, and recruit an additional rater to re-rate these documents. We analyse our data using two-tailed Sign Tests, the null hypothesis being that raters do not prefer MT$_1$ over H$_A$ or vice versa, implying human–machine parity. Following WMT evaluation campaigns that used pairwise ranking BIBREF28, the number of successes $x$ is the number of ratings in favour of H$_A$, and the number of trials $n$ is the number of all ratings except for ties. Adding half of the ties to $x$ and the total number of ties to $n$ BIBREF29 does not impact the significance levels reported in this section. Adequacy raters show no statistically significant preference for MT$_1$ or H$_A$ when evaluating isolated sentences ($x=86, n=189, p=.244$). This is in accordance with BIBREF3, who found the same in a source-based direct assessment experiment with crowd workers. With the availability of document-level context, however, preference for MT$_1$ drops from 49.5 to 37.0 % and is significantly lower than preference for human translation ($x=104, n=178, p<.05$). This evidences that document-level context cues allow raters to get a signal on adequacy. Fluency raters prefer H$_A$ over MT$_1$ both on the level of sentences ($x=106, n=172, p<.01$) and documents ($x=99, n=143, p<.001$). This is somewhat surprising given that increased fluency was found to be one of the main strengths of NMT BIBREF30, as we further discuss in Section SECREF24. 
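The Sign Tests reported above can be reproduced with an exact binomial computation; the figures below are the ones quoted in this section ($x$ ratings in favour of H$_A$ out of $n$ non-tied ratings). A small self-contained sketch:

```python
from math import comb

def sign_test(x, n):
    """Two-sided exact Sign Test: probability, under a fair-coin null,
    of a split at least as extreme as x successes out of n trials."""
    tail = max(x, n - x)
    p = 2 * sum(comb(n, k) for k in range(tail, n + 1)) / 2 ** n
    return min(p, 1.0)

# Adequacy: no significant preference on isolated sentences
# (the text reports p = .244), but a significant preference
# for H_A on full documents (p < .05).
p_sentences = sign_test(86, 189)
p_documents = sign_test(104, 178)
```

Applied to the fluency figures ($x=106$, $n=172$ for sentences; $x=99$, $n=143$ for documents), the same function reproduces the reported significance levels.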
The availability of document-level context decreases fluency raters' preference for MT$_1$, which falls from 31.7 to 22.0 %, without increasing their preference for H$_A$ (Table TABREF21). ### Linguistic Context ::: Discussion
Our findings emphasise the importance of linguistic context in human evaluation of MT. In terms of adequacy, raters assessing documents as a whole show a significant preference for human translation, but when assessing single sentences in random order, they show no significant preference for human translation. Document-level evaluation exposes errors that are hard or impossible for raters to spot in a sentence-level evaluation, such as coherent translation of named entities. The example in Table TABREF23 shows the first two sentences of a Chinese news article as translated by a professional human translator (H$_A$) and BIBREF3's NMT system (MT$_1$). When looking at both sentences (document-level evaluation), it can be seen that MT$_1$ uses two different translations to refer to a cultural festival, “2016盂兰文化节”, whereas the human translation uses only one. When assessing the second sentence out of context (sentence-level evaluation), it is hard to penalise MT$_1$ for producing 2016 Python Cultural Festival, particularly for fluency raters without access to the corresponding source text. For further examples, see Section SECREF24 and Table TABREF34. ### Reference Translations
Yet another relevant element in human evaluation is the reference translation used. This is the focus of this section, where we cover two aspects of reference translations that can have an impact on evaluation: quality and directionality. ### Reference Translations ::: Quality
Because the translations are created by humans, a number of factors could lead to compromises in quality:

- If the translator is a non-native speaker of the source language, they may make mistakes in interpreting the original message. This is particularly true if the translator does not normally work in the domain of the text, e.g., when a translator who normally works on translating electronic product manuals is asked to translate news.
- If the translator is a non-native speaker of the target language, they might not be able to generate completely fluent text. This similarly applies to domain-specific terminology.
- Unlike computers, human translators have limits in time, attention, and motivation, and will generally do a better job when they have sufficient time to check their work, or are particularly motivated to do a good job, such as when doing a good job is necessary to maintain their reputation as a translator.
- In recent years, a large number of human translation jobs are performed by post-editing MT output, which can result in MT artefacts remaining even after manual post-editing BIBREF31, BIBREF32, BIBREF33.

In this section, we examine the effect of the quality of the underlying translations on the conclusions that can be drawn with regard to human–machine parity. We first analyse (i) how the source of the human translation affects claims of human–machine parity, and (ii) whether significant differences exist between two varieties of human translation. We follow the same protocol as in Section SECREF19, having 4 professional translators per condition evaluate the translations for adequacy and fluency on both the sentence and the document level. The results are shown in Table TABREF30. From this, we can see that the human translation H$_B$, which was aggressively edited to ensure target fluency, resulted in lower adequacy (Table TABREF30). 
With more fluent and less accurate translations, raters do not prefer human over machine translation in terms of adequacy (Table TABREF30), but have a stronger preference for human translation in terms of fluency (compare Tables TABREF30 and TABREF21). In a direct comparison of the two human translations (Table TABREF30), we also find that H$_A$ is considered significantly more adequate than H$_B$, while there is no significant difference in fluency. To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e.g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bilingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32. From these results, we can glean a few interesting insights. First, we find significantly larger numbers of errors of the categories of Incorrect Word and Named Entity in MT$_1$, indicating that the MT system is less effective at choosing correct translations for individual words than the human translators. 
An example of this can be found in Table TABREF33, where we see that the MT system refers to a singular "point of view" and translates 线路 (channel, route, path) into the semantically similar but inadequate "lines". Interestingly, MT$_1$ also has significantly more Word Order errors; one example, shown in Table TABREF33, concerns the relative placements of "at the end of last year" (去年年底) and "stop production" (停产). This result is particularly notable given previous reports that NMT systems have led to great increases in reordering accuracy compared to previous statistical MT systems BIBREF35, BIBREF36, demonstrating that the problem of generating correctly ordered output is far from solved even in very strong NMT systems.

Moreover, H$_B$ had significantly more Missing Word (Semantics) errors than both H$_A$ ($p<.001$) and MT$_1$ ($p<.001$), an indication that the proofreading process resulted in drops of content in favour of fluency. An example of this is shown in Table TABREF33, where H$_B$ dropped the information that the meetings between Suning and Apple were held recently (近期).

Finally, while the differences were not significant, likely due to the small number of examples overall, it is noticeable that MT$_1$ had a higher percentage of Collocation and Context errors, which indicates that the system has more trouble translating words that depend on longer-range context. Similarly, some Named Entity errors are also attributable to translation inconsistencies caused by a lack of longer-range context. Table TABREF34 shows an example in which the MT system was unable to maintain a consistently gendered or correct pronoun for the female Olympic shooter Zhang Binbin (张彬彬).

Apart from showing qualitative differences between the three translations, the analysis also supports the finding of the pairwise ranking study: H$_A$ is both preferred over MT$_1$ in the pairwise ranking study, and exhibits fewer translation errors in our error classification.
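The per-category significance values reported above are computed, as noted in the caption of Table 5, with a two-tailed Fisher's exact test on sentence counts. As an illustrative sketch (not the authors' implementation), a 2×2 version can be derived directly from the hypergeometric distribution:

```python
from math import comb

def fisher_exact_two_tailed(a, b, c, d):
    """Two-tailed Fisher's exact test for the 2x2 table [[a, b], [c, d]],
    e.g., rows = two translation outputs, columns = number of sentences
    with / without a given error type."""
    row1, row2, col1 = a + b, c + d, a + c
    denom = comb(row1 + row2, col1)

    def p_table(x):
        # Hypergeometric probability of x in the top-left cell,
        # with all row and column margins held fixed.
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Two-sided p-value: total probability of all tables that are
    # no more probable than the observed one.
    return sum(p for x in range(lo, hi + 1)
               if (p := p_table(x)) <= p_obs * (1 + 1e-9))

# Hypothetical counts: error present / absent in two outputs.
print(fisher_exact_two_tailed(1, 9, 11, 3))  # ≈ 0.00276
```

Comparing, say, how many of the 150 sentences contain a given error type in MT$_1$ versus H$_A$ then amounts to calling the function with (with-error, without-error) counts for each output; the counts in the example are hypothetical.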
H$_B$ has a substantially higher number of missing words than the other two translations, which agrees with the lower perceived adequacy in the pairwise ranking. However, the analysis not only supports the findings of the pairwise ranking study, but also adds nuance to it. Even though H$_B$ has the highest number of deletions, and does worse than the other two translations in a pairwise adequacy ranking, it is similar to H$_A$, and better than MT$_1$, in terms of most other error categories.

### Reference Translations ::: Directionality
Translation quality is also affected by the nature of the source text. In this respect, we note that of the 2,001 sentences in the WMT 2017 Chinese–English test set, half were originally written in Chinese; the remaining half were originally written in English and then manually translated into Chinese. This Chinese reference file (half original, half translated) was then manually translated into English by BIBREF3 to make up the reference for assessing human–machine parity. Therefore, 50% of the reference comprises direct English translations from the original Chinese, while 50% are English translations of the file that was human-translated from English into Chinese, i.e., backtranslations of the original English.

According to BIBREF37, translated texts differ from their originals in that they are simpler, more explicit, and more normalised. For example, the synonyms used in an original text may be replaced by a single translation. These differences are referred to as translationese, and have been shown to affect translation quality in the field of machine translation BIBREF38, BIBREF39, BIBREF32, BIBREF33.

We test whether translationese has an effect on assessing parity between translations produced by humans and machines, using relative rankings of translations in the WMT 2017 Chinese–English test set by five raters (see Section SECREF3). Our hypothesis is that the difference between human and machine translation quality is smaller when source texts are translated English (translationese) rather than original Chinese, because a translationese source text should be simpler and thus easier for an MT system to translate. We confirm Laviosa's observation that "translationese" Chinese (that started as English) exhibits less lexical variety than "natively" Chinese text, and demonstrate that translationese source texts are generally easier for MT systems to score well on.
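Lexical variety is quantified here with the type-token ratio (TTR), estimated via bootstrap resampling over token samples of comparable size. A minimal sketch (tokenisation and corpus loading are assumptions; this is not the authors' code):

```python
import random

def type_token_ratio(tokens):
    """TTR = unique tokens / total tokens; lower values indicate
    less varied (more repetitive) vocabulary."""
    return len(set(tokens)) / len(tokens)

def bootstrap_ttr(tokens, n_resamples=1000, seed=0):
    """Mean and standard deviation of TTR over bootstrap resamples
    (sampling tokens with replacement, same size as the original)."""
    rng = random.Random(seed)
    scores = [
        type_token_ratio(rng.choices(tokens, k=len(tokens)))
        for _ in range(n_resamples)
    ]
    mean = sum(scores) / len(scores)
    sd = (sum((s - mean) ** 2 for s in scores) / (len(scores) - 1)) ** 0.5
    return mean, sd
```

Because TTR falls as corpus size grows, the two subsets must first be trimmed to a comparable token count, as done in the experiment below, before their bootstrap means and confidence intervals are compared.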
Table TABREF36 shows the TrueSkill scores for translations (H$_A$, MT$_1$, and MT$_2$) of the entire test set (Both) versus only the sentences originally written in Chinese or English therein. The human translation H$_A$ outperforms the machine translation MT$_1$ significantly when the original language is Chinese, while the difference between the two is not significant when the original language is English (i.e., translationese input).

We also compare the two subsets of the test set, original and translationese, using type-token ratio (TTR). Our hypothesis is that the TTR will be smaller for the translationese subset, its simpler nature being reflected in a less varied use of language. While both subsets contain a similar number of sentences (1,001 and 1,000), the Chinese subset contains more tokens (26,468) than its English counterpart (22,279). We thus take a subset of the Chinese (840 sentences) containing a similar number of words to the English data (22,271 words). We then calculate the TTR for these two subsets using bootstrap resampling. The TTR for Chinese ($M=0.1927$, $SD=0.0026$, 95% confidence interval $[0.1925,0.1928]$) is 13% higher than that for English ($M=0.1710$, $SD=0.0025$, 95% confidence interval $[0.1708,0.1711]$).

Our results show that using translationese (Chinese translated from English) rather than original source texts results in higher scores for MT systems in human evaluation, and that the lexical variety of translationese is smaller than that of original text.

### Recommendations
Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and that will strengthen the human evaluation of MT in general.

### Recommendations ::: (R1) Choose professional translators as raters.
In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs.

### Recommendations ::: (R2) Evaluate documents, not sentences.
When evaluating sentences in random order, professional translators judge machine translation more favourably because they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i.e., full news articles) as the unit of evaluation increases the rating gap between human and machine translation (Section SECREF4).

### Recommendations ::: (R3) Evaluate fluency in addition to adequacy.
Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality.

### Recommendations ::: (R4) Do not heavily edit reference translations for fluency.
In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30).

### Recommendations ::: (R5) Use original source texts.
Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT.

Our work empirically strengthens and extends the recommendations on human MT evaluation in previous work BIBREF6, BIBREF7, some of which have meanwhile been adopted by the large-scale evaluation campaign at WMT 2019 BIBREF40: the new evaluation protocol uses original source texts only (R5) and gives raters access to document-level context (R2). The findings of WMT 2019 provide further evidence in support of our recommendations. In particular, human English-to-Czech translation was found to be significantly better than MT BIBREF40; the comparison includes the same MT system (CUNI-Transformer-T2T-2018) which outperformed human translation according to the previous protocol BIBREF5. Results also show a larger difference between human translation and MT in document-level evaluation.

We note that in contrast to WMT, the judgements in our experiments are provided by a small number of human raters: five in the experiments of Sections SECREF3 and SECREF35, four per condition (adequacy and fluency) in Section SECREF4, and one in the fine-grained error analysis presented in Section SECREF24. Moreover, the results presented in this article are based on one text domain (news) and one language direction (Chinese to English), and while a large-scale evaluation with another language pair supports our findings (see above), further experiments with more languages, domains, and raters will be required to increase their external validity.

### Conclusion
We compared professional human Chinese to English translations to the output of a strong MT system. In a human evaluation following best practices, BIBREF3 found no significant difference between the two, concluding that their NMT system had reached parity with professional human translation. Our blind qualitative analysis, however, showed that the machine translation output contained significantly more incorrect words, omissions, mistranslated names, and word order errors.

Our experiments show that recent findings of human–machine parity in language translation are owed to weaknesses in the design of human evaluation campaigns. We empirically tested alternatives to what is currently considered best practice in the field, and found that the choice of raters, the availability of linguistic context, and the creation of reference translations have a strong impact on perceived translation quality. As for the choice of raters, professional translators showed a significant preference for human translation, while non-expert raters did not. In terms of linguistic context, raters found human translation significantly more accurate than machine translation when evaluating full documents, but not when evaluating single sentences out of context. They also found human translation significantly more fluent than machine translation, both when evaluating full documents and single sentences. Moreover, we showed that aggressive editing of human reference translations for target language fluency can decrease adequacy to the point that they become indistinguishable from machine translation, and that raters found human translations significantly better than machine translations of original source texts, but not of source texts that were translations themselves.

Our results strongly suggest that in order to reveal errors in the output of strong MT systems, the design of MT quality assessments with human raters should be revisited.
To that end, we have offered a set of recommendations, supported by empirical data, which we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Our recommendations have the aim of increasing the validity of MT evaluation, but we are aware of the high cost of having MT evaluation done by professional translators, and on the level of full documents. We welcome future research into alternative evaluation protocols that can demonstrate their validity at a lower cost.

Table 1: Ranks and TrueSkill scores (the higher the better) of one human (HA) and two machine translations (MT1, MT2) for evaluations carried out by expert and non-expert translators. An asterisk next to a translation indicates that this translation is significantly better than the one in the next rank at p ≤ .05.

Table 2: Pairwise ranking results for machine (MT1) against professional human translation (HA) as obtained from blind evaluation by professional translators. Preference for MT1 is lower when document-level context is available.

Table 4: Pairwise ranking results for one machine (MT1) and two professional human translations (HA, HB) as obtained from blind evaluation by professional translators.

Table 5: Classification of errors in machine translation MT1 and two professional human translation outputs HA and HB. Errors represent the number of sentences (out of N = 150) that contain at least one error of the respective type. We also report the number of sentences that contain at least one error of any category (Any), and the total number of error categories present in all sentences (Total). Statistical significance is assessed with Fisher's exact test (two-tailed) for each pair of translation outputs.

Table 7: Ranks of the translations given the original language of the source side of the test set shown with their TrueSkill score (the higher the better).
An asterisk next to a translation indicates that this translation is significantly better than the one in the next rank at p ≤ .05.
|
Choose professional translators as raters, Evaluate documents, not sentences, Evaluate fluency in addition to adequacy, Do not heavily edit reference translations for fluency, Use original source texts
|
Why does Bertrand Malloy end up with odd officers under his command?
A. He has a reputation of being able to handle them well.
B. Higher quality candidates were sent to higher priority jobs.
C. He requests them specifically.
D. It is part of his punishment for this low-ranking position.
|
IN CASE OF FIRE By RANDALL GARRETT There are times when a broken tool is better than a sound one, or a twisted personality more useful than a whole one. For instance, a whole beer bottle isn't half the weapon that half a beer bottle is ... Illustrated by Martinez In his office apartment, on the top floor of the Terran Embassy Building in Occeq City, Bertrand Malloy leafed casually through the dossiers of the four new men who had been assigned to him. They were typical of the kind of men who were sent to him, he thought. Which meant, as usual, that they were atypical. Every man in the Diplomatic Corps who developed a twitch or a quirk was shipped to Saarkkad IV to work under Bertrand Malloy, Permanent Terran Ambassador to His Utter Munificence, the Occeq of Saarkkad. Take this first one, for instance. Malloy ran his finger down the columns of complex symbolism that showed the complete psychological analysis of the man. Psychopathic paranoia. The man wasn't technically insane; he could be as lucid as the next man most of the time. But he was morbidly suspicious that every man's hand was turned against him. He trusted no one, and was perpetually on his guard against imaginary plots and persecutions. Number two suffered from some sort of emotional block that left him continually on the horns of one dilemma or another. He was psychologically incapable of making a decision if he were faced with two or more possible alternatives of any major importance. Number three ... Malloy sighed and pushed the dossiers away from him. No two men were alike, and yet there sometimes seemed to be an eternal sameness about all men. He considered himself an individual, for instance, but wasn't the basic similarity there, after all? He was—how old? He glanced at the Earth calendar dial that was automatically correlated with the Saarkkadic calendar just above it. Fifty-nine next week. Fifty-nine years old. 
And what did he have to show for it besides flabby muscles, sagging skin, a wrinkled face, and gray hair? Well, he had an excellent record in the Corps, if nothing else. One of the top men in his field. And he had his memories of Diane, dead these ten years, but still beautiful and alive in his recollections. And—he grinned softly to himself—he had Saarkkad. He glanced up at the ceiling, and mentally allowed his gaze to penetrate it to the blue sky beyond it. Out there was the terrible emptiness of interstellar space—a great, yawning, infinite chasm capable of swallowing men, ships, planets, suns, and whole galaxies without filling its insatiable void. Malloy closed his eyes. Somewhere out there, a war was raging. He didn't even like to think of that, but it was necessary to keep it in mind. Somewhere out there, the ships of Earth were ranged against the ships of the alien Karna in the most important war that Mankind had yet fought. And, Malloy knew, his own position was not unimportant in that war. He was not in the battle line, nor even in the major production line, but it was necessary to keep the drug supply lines flowing from Saarkkad, and that meant keeping on good terms with the Saarkkadic government. The Saarkkada themselves were humanoid in physical form—if one allowed the term to cover a wide range of differences—but their minds just didn't function along the same lines. For nine years, Bertrand Malloy had been Ambassador to Saarkkad, and for nine years, no Saarkkada had ever seen him. To have shown himself to one of them would have meant instant loss of prestige. To their way of thinking, an important official was aloof. The greater his importance, the greater must be his isolation. The Occeq of Saarkkad himself was never seen except by a handful of picked nobles, who, themselves, were never seen except by their underlings. It was a long, roundabout way of doing business, but it was the only way Saarkkad would do any business at all. 
To violate the rigid social setup of Saarkkad would mean the instant closing off of the supply of biochemical products that the Saarkkadic laboratories produced from native plants and animals—products that were vitally necessary to Earth's war, and which could be duplicated nowhere else in the known universe. It was Bertrand Malloy's job to keep the production output high and to keep the materiel flowing towards Earth and her allies and outposts. The job would have been a snap cinch in the right circumstances; the Saarkkada weren't difficult to get along with. A staff of top-grade men could have handled them without half trying. But Malloy didn't have top-grade men. They couldn't be spared from work that required their total capacity. It's inefficient to waste a man on a job that he can do without half trying where there are more important jobs that will tax his full output. So Malloy was stuck with the culls. Not the worst ones, of course; there were places in the galaxy that were less important than Saarkkad to the war effort. Malloy knew that, no matter what was wrong with a man, as long as he had the mental ability to dress himself and get himself to work, useful work could be found for him. Physical handicaps weren't at all difficult to deal with. A blind man can work very well in the total darkness of an infrared-film darkroom. Partial or total losses of limbs can be compensated for in one way or another. The mental disabilities were harder to deal with, but not totally impossible. On a world without liquor, a dipsomaniac could be channeled easily enough; and he'd better not try fermenting his own on Saarkkad unless he brought his own yeast—which was impossible, in view of the sterilization regulations. But Malloy didn't like to stop at merely thwarting mental quirks; he liked to find places where they were useful . The phone chimed. Malloy flipped it on with a practiced hand. "Malloy here." "Mr. Malloy?" said a careful voice. 
"A special communication for you has been teletyped in from Earth. Shall I bring it in?" "Bring it in, Miss Drayson." Miss Drayson was a case in point. She was uncommunicative. She liked to gather in information, but she found it difficult to give it up once it was in her possession. Malloy had made her his private secretary. Nothing—but nothing —got out of Malloy's office without his direct order. It had taken Malloy a long time to get it into Miss Drayson's head that it was perfectly all right—even desirable—for her to keep secrets from everyone except Malloy. She came in through the door, a rather handsome woman in her middle thirties, clutching a sheaf of papers in her right hand as though someone might at any instant snatch it from her before she could turn it over to Malloy. She laid them carefully on the desk. "If anything else comes in, I'll let you know immediately, sir," she said. "Will there be anything else?" Malloy let her stand there while he picked up the communique. She wanted to know what his reaction was going to be; it didn't matter because no one would ever find out from her what he had done unless she was ordered to tell someone. He read the first paragraph, and his eyes widened involuntarily. "Armistice," he said in a low whisper. "There's a chance that the war may be over." "Yes, sir," said Miss Drayson in a hushed voice. Malloy read the whole thing through, fighting to keep his emotions in check. Miss Drayson stood there calmly, her face a mask; her emotions were a secret. Finally, Malloy looked up. "I'll let you know as soon as I reach a decision, Miss Drayson. I think I hardly need say that no news of this is to leave this office." "Of course not, sir." Malloy watched her go out the door without actually seeing her. The war was over—at least for a while. He looked down at the papers again. The Karna, slowly being beaten back on every front, were suing for peace. They wanted an armistice conference—immediately. Earth was willing. 
Interstellar war is too costly to allow it to continue any longer than necessary, and this one had been going on for more than thirteen years now. Peace was necessary. But not peace at any price. The trouble was that the Karna had a reputation for losing wars and winning at the peace table. They were clever, persuasive talkers. They could twist a disadvantage to an advantage, and make their own strengths look like weaknesses. If they won the armistice, they'd be able to retrench and rearm, and the war would break out again within a few years. Now—at this point in time—they could be beaten. They could be forced to allow supervision of the production potential, forced to disarm, rendered impotent. But if the armistice went to their own advantage ... Already, they had taken the offensive in the matter of the peace talks. They had sent a full delegation to Saarkkad V, the next planet out from the Saarkkad sun, a chilly world inhabited only by low-intelligence animals. The Karna considered this to be fully neutral territory, and Earth couldn't argue the point very well. In addition, they demanded that the conference begin in three days, Terrestrial time. The trouble was that interstellar communication beams travel a devil of a lot faster than ships. It would take more than a week for the Earth government to get a vessel to Saarkkad V. Earth had been caught unprepared for an armistice. They objected. The Karna pointed out that the Saarkkad sun was just as far from Karn as it was from Earth, that it was only a few million miles from a planet which was allied with Earth, and that it was unfair for Earth to take so much time in preparing for an armistice. Why hadn't Earth been prepared? Did they intend to fight to the utter destruction of Karn? It wouldn't have been a problem at all if Earth and Karn had fostered the only two intelligent races in the galaxy. The sort of grandstanding the Karna were putting on had to be played to an audience. 
But there were other intelligent races throughout the galaxy, most of whom had remained as neutral as possible during the Earth-Karn war. They had no intention of sticking their figurative noses into a battle between the two most powerful races in the galaxy. But whoever won the armistice would find that some of the now-neutral races would come in on their side if war broke out again. If the Karna played their cards right, their side would be strong enough next time to win. So Earth had to get a delegation to meet with the Karna representatives within the three-day limit or lose what might be a vital point in the negotiations. And that was where Bertrand Malloy came in. He had been appointed Minister and Plenipotentiary Extraordinary to the Earth-Karn peace conference. He looked up at the ceiling again. "What can I do?" he said softly. On the second day after the arrival of the communique, Malloy made his decision. He flipped on his intercom and said: "Miss Drayson, get hold of James Nordon and Kylen Braynek. I want to see them both immediately. Send Nordon in first, and tell Braynek to wait." "Yes, sir." "And keep the recorder on. You can file the tape later." "Yes, sir." Malloy knew the woman would listen in on the intercom anyway, and it was better to give her permission to do so. James Nordon was tall, broad-shouldered, and thirty-eight. His hair was graying at the temples, and his handsome face looked cool and efficient. Malloy waved him to a seat. "Nordon, I have a job for you. It's probably one of the most important jobs you'll ever have in your life. It can mean big things for you—promotion and prestige if you do it well." Nordon nodded slowly. "Yes, sir." Malloy explained the problem of the Karna peace talks. "We need a man who can outthink them," Malloy finished, "and judging from your record, I think you're that man. It involves risk, of course. If you make the wrong decisions, your name will be mud back on Earth. 
But I don't think there's much chance of that, really. Do you want to handle small-time operations all your life? Of course not. "You'll be leaving within an hour for Saarkkad V." Nordon nodded again. "Yes, sir; certainly. Am I to go alone?" "No," said Malloy, "I'm sending an assistant with you—a man named Kylen Braynek. Ever heard of him?" Nordon shook his head. "Not that I recall, Mr. Malloy. Should I have?" "Not necessarily. He's a pretty shrewd operator, though. He knows a lot about interstellar law, and he's capable of spotting a trap a mile away. You'll be in charge, of course, but I want you to pay special attention to his advice." "I will, sir," Nordon said gratefully. "A man like that can be useful." "Right. Now, you go into the anteroom over there. I've prepared a summary of the situation, and you'll have to study it and get it into your head before the ship leaves. That isn't much time, but it's the Karna who are doing the pushing, not us." As soon as Nordon had left, Malloy said softly: "Send in Braynek, Miss Drayson." Kylen Braynek was a smallish man with mouse-brown hair that lay flat against his skull, and hard, penetrating, dark eyes that were shadowed by heavy, protruding brows. Malloy asked him to sit down. Again Malloy went through the explanation of the peace conference. "Naturally, they'll be trying to trick you every step of the way," Malloy went on. "They're shrewd and underhanded; we'll simply have to be more shrewd and more underhanded. Nordon's job is to sit quietly and evaluate the data; yours will be to find the loopholes they're laying out for themselves and plug them. Don't antagonize them, but don't baby them, either. If you see anything underhanded going on, let Nordon know immediately." "They won't get anything by me, Mr. Malloy." By the time the ship from Earth got there, the peace conference had been going on for four days. 
Bertrand Malloy had full reports on the whole parley, as relayed to him through the ship that had taken Nordon and Braynek to Saarkkad V. Secretary of State Blendwell stopped off at Saarkkad IV before going on to V to take charge of the conference. He was a tallish, lean man with a few strands of gray hair on the top of his otherwise bald scalp, and he wore a hearty, professional smile that didn't quite make it to his calculating eyes. He took Malloy's hand and shook it warmly. "How are you, Mr. Ambassador?" "Fine, Mr. Secretary. How's everything on Earth?" "Tense. They're waiting to see what is going to happen on Five. So am I, for that matter." His eyes were curious. "You decided not to go yourself, eh?" "I thought it better not to. I sent a good team, instead. Would you like to see the reports?" "I certainly would." Malloy handed them to the secretary, and as he read, Malloy watched him. Blendwell was a political appointee—a good man, Malloy had to admit, but he didn't know all the ins and outs of the Diplomatic Corps. When Blendwell looked up from the reports at last, he said: "Amazing! They've held off the Karna at every point! They've beaten them back! They've managed to cope with and outdo the finest team of negotiators the Karna could send." "I thought they would," said Malloy, trying to appear modest. The secretary's eyes narrowed. "I've heard of the work you've been doing here with ... ah ... sick men. Is this one of your ... ah ... successes?" Malloy nodded. "I think so. The Karna put us in a dilemma, so I threw a dilemma right back at them." "How do you mean?" "Nordon had a mental block against making decisions. If he took a girl out on a date, he'd have trouble making up his mind whether to kiss her or not until she made up his mind for him, one way or the other. He's that kind of guy. Until he's presented with one, single, clear decision which admits of no alternatives, he can't move at all. 
"As you can see, the Karna tried to give us several choices on each point, and they were all rigged. Until they backed down to a single point and proved that it wasn't rigged, Nordon couldn't possibly make up his mind. I drummed into him how important this was, and the more importance there is attached to his decisions, the more incapable he becomes of making them." The Secretary nodded slowly. "What about Braynek?" "Paranoid," said Malloy. "He thinks everyone is plotting against him. In this case, that's all to the good because the Karna are plotting against him. No matter what they put forth, Braynek is convinced that there's a trap in it somewhere, and he digs to find out what the trap is. Even if there isn't a trap, the Karna can't satisfy Braynek, because he's convinced that there has to be—somewhere. As a result, all his advice to Nordon, and all his questioning on the wildest possibilities, just serves to keep Nordon from getting unconfused. "These two men are honestly doing their best to win at the peace conference, and they've got the Karna reeling. The Karna can see that we're not trying to stall; our men are actually working at trying to reach a decision. But what the Karna don't see is that those men, as a team, are unbeatable because, in this situation, they're psychologically incapable of losing." Again the Secretary of State nodded his approval, but there was still a question in his mind. "Since you know all that, couldn't you have handled it yourself?" "Maybe, but I doubt it. They might have gotten around me someway by sneaking up on a blind spot. Nordon and Braynek have blind spots, but they're covered with armor. No, I'm glad I couldn't go; it's better this way." The Secretary of State raised an eyebrow. " Couldn't go, Mr. Ambassador?" Malloy looked at him. "Didn't you know? I wondered why you appointed me, in the first place. No, I couldn't go. 
The reason why I'm here, cooped up in this office, hiding from the Saarkkada the way a good Saarkkadic bigshot should, is because I like it that way. I suffer from agoraphobia and xenophobia. "I have to be drugged to be put on a spaceship because I can't take all that empty space, even if I'm protected from it by a steel shell." A look of revulsion came over his face. "And I can't stand aliens!" THE END Transcriber's Note: This etext was produced from Astounding Science Fiction March 1960. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
|
B. Higher quality candidates were sent to higher priority jobs.
|
Why are the robots hunting Alan?
A. The robots aren't hunting Alan specifically. They are hunting all life forms.
B. The robots are hunting Alan because he invaded Waiamea.
C. The robots aren't hunting Alan. They're hunting pumas. Alan got in the way.
D. The robots are hunting Alan because he was illegally poaching pumas in the jungle.
|
SURVIVAL TACTICS By AL SEVCIK ILLUSTRATOR NOVICK The robots were built to serve Man; to do his work, see to his comforts, make smooth his way. Then the robots figured out an additional service—putting Man out of his misery. There was a sudden crash that hung sharply in the air, as if a tree had been hit by lightning some distance away. Then another. Alan stopped, puzzled. Two more blasts, quickly together, and the sound of a scream faintly. Frowning, worrying about the sounds, Alan momentarily forgot to watch his step until his foot suddenly plunged into an ant hill, throwing him to the jungle floor. "Damn!" He cursed again, for the tenth time, and stood uncertainly in the dimness. From tall, moss-shrouded trees, wrist-thick vines hung quietly, scraping the spongy ground like the tentacles of some monstrous tree-bound octopus. Fitful little plants grew straggly in the shadows of the mossy trunks, forming a dense underbrush that made walking difficult. At midday some few of the blue sun's rays filtered through to the jungle floor, but now, late afternoon on the planet, the shadows were long and gloomy. Alan peered around him at the vine-draped shadows, listening to the soft rustlings and faint twig-snappings of life in the jungle. Two short, popping sounds echoed across the stillness, drowned out almost immediately and silenced by an explosive crash. Alan started, "Blaster fighting! But it can't be!" Suddenly anxious, he slashed a hurried X in one of the trees to mark his position then turned to follow a line of similar marks back through the jungle. He tried to run, but vines blocked his way and woody shrubs caught at his legs, tripping him and holding him back. Then, through the trees he saw the clearing of the camp site, the temporary home for the scout ship and the eleven men who, with Alan, were the only humans on the jungle planet, Waiamea. 
Stepping through the low shrubbery at the edge of the site, he looked across the open area to the two temporary structures, the camp headquarters where the power supplies and the computer were; and the sleeping quarters. Beyond, nose high, stood the silver scout ship that had brought the advance exploratory party of scientists and technicians to Waiamea three days before. Except for a few of the killer robots rolling slowly around the camp site on their quiet treads, there was no one about. "So, they've finally got those things working." Alan smiled slightly. "Guess that means I owe Pete a bourbon-and-soda for sure. Anybody who can build a robot that hunts by homing in on animals' mind impulses ..." He stepped forward just as a roar of blue flame dissolved the branches of a tree, barely above his head. Without pausing to think, Alan leaped back, and fell sprawling over a bush just as one of the robots rolled silently up from the right, lowering its blaster barrel to aim directly at his head. Alan froze. "My God, Pete built those things wrong!" Suddenly a screeching whirlwind of claws and teeth hurled itself from the smoldering branches and crashed against the robot, clawing insanely at the antenna and blaster barrel. With an awkward jerk the robot swung around and fired its blaster, completely dissolving the lower half of the cat creature which had clung across the barrel. But the back pressure of the cat's body overloaded the discharge circuits. The robot started to shake, then clicked sharply as an overload relay snapped and shorted the blaster cells. The killer turned and rolled back towards the camp, leaving Alan alone. Shakily, Alan crawled a few feet back into the undergrowth where he could lie and watch the camp, but not himself be seen. Though visibility didn't make any difference to the robots, he felt safer, somehow, hidden. He knew now what the shooting sounds had been and why there hadn't been anyone around the camp site. 
A charred blob lying in the grass of the clearing confirmed his hypothesis. His stomach felt sick. "I suppose," he muttered to himself, "that Pete assembled these robots in a batch and then activated them all at once, probably never living to realize that they're tuned to pick up human brain waves, too. Damn! Damn!" His eyes blurred and he slammed his fist into the soft earth. When he raised his eyes again the jungle was perceptibly darker. Stealthy rustlings in the shadows grew louder with the setting sun. Branches snapped unaccountably in the trees overhead and every now and then leaves or a twig fell softly to the ground, close to where he lay. Reaching into his jacket, Alan fingered his pocket blaster. He pulled it out and held it in his right hand. "This pop gun wouldn't even singe a robot, but it just might stop one of those pumas." They said the blast with your name on it would find you anywhere. This looked like Alan's blast. Slowly Alan looked around, sizing up his situation. Behind him the dark jungle rustled forbiddingly. He shuddered. "Not a very healthy spot to spend the night. On the other hand, I certainly can't get to the camp with a pack of mind-activated mechanical killers running around. If I can just hold out until morning, when the big ship arrives ... The big ship! Good Lord, Peggy!" He turned white; oily sweat punctuated his forehead. Peggy, arriving tomorrow with the other colonists, the wives and kids! The metal killers, tuned to blast any living flesh, would murder them the instant they stepped from the ship! A pretty girl, Peggy, the girl he'd married just three weeks ago. He still couldn't believe it. It was crazy, he supposed, to marry a girl and then take off for an unknown planet, with her to follow, to try to create a home in a jungle clearing. 
Crazy maybe, but Peggy and her green eyes that changed color with the light, with her soft brown hair, and her happy smile, had ended thirty years of loneliness and had, at last, given him a reason for living. "Not to be killed!" Alan unclenched his fists and wiped his palms, bloody where his fingernails had dug into the flesh. There was a slight creak above him like the protesting of a branch too heavily laden. Blaster ready, Alan rolled over onto his back. In the movement, his elbow struck the top of a small earthy mound and he was instantly engulfed in a swarm of locust-like insects that beat disgustingly against his eyes and mouth. "Fagh!" Waving his arms before his face he jumped up and backwards, away from the bugs. As he did so, a dark shapeless thing plopped from the trees onto the spot where he had been lying stretched out. Then, like an ambient fungus, it slithered off into the jungle undergrowth. For a split second the jungle stood frozen in a brilliant blue flash, followed by the sharp report of a blaster. Then another. Alan whirled, startled. The planet's double moon had risen and he could see a robot rolling slowly across the clearing in his general direction, blasting indiscriminately at whatever mind impulses came within its pickup range, birds, insects, anything. Six or seven others also left the camp headquarters area and headed for the jungle, each to a slightly different spot. Apparently the robot hadn't sensed him yet, but Alan didn't know what the effective range of its pickup devices was. He began to slide back into the jungle. Minutes later, looking back he saw that the machine, though several hundred yards away, had altered its course and was now headed directly for him. His stomach tightened. Panic. The dank, musty smell of the jungle seemed for an instant to thicken and choke in his throat. Then he thought of the big ship landing in the morning, settling down slowly after a lonely two-week voyage. 
He thought of a brown-haired girl crowding with the others to the gangway, eager to embrace the new planet, and the next instant a charred nothing, unrecognizable, the victim of a design error or a misplaced wire in a machine. "I have to try," he said aloud. "I have to try." He moved into the blackness. Powerful as a small tank, the killer robot was equipped to crush, slash, and burn its way through undergrowth. Nevertheless, it was slowed by the larger trees and the thick, clinging vines, and Alan found that he could manage to keep ahead of it, barely out of blaster range. Only, the robot didn't get tired. Alan did. The twin moons cast pale, deceptive shadows that wavered and danced across the jungle floor, hiding debris that tripped him and often sent him sprawling into the dark. Sharp-edged growths tore at his face and clothes, and insects attracted by the blood matted against his pants and shirt. Behind, the robot crashed imperturbably after him, lighting the night with fitful blaster flashes as some winged or legged life came within its range. There was movement also, in the darkness beside him, scrapings and rustlings and an occasional low, throaty sound like an angry cat. Alan's fingers tensed on his pocket blaster. Swift shadowy forms moved quickly in the shrubs and the growling became suddenly louder. He fired twice, blindly, into the undergrowth. Sharp screams punctuated the electric blue discharge as a pack of small feline creatures leaped snarling and clawing back into the night. Mentally, Alan tried to figure the charge remaining in his blaster. There wouldn't be much. "Enough for a few more shots, maybe. Why the devil didn't I load in fresh cells this morning!" The robot crashed on, louder now, gaining on the tired human. Legs aching and bruised, stinging from insect bites, Alan tried to force himself to run holding his hands in front of him like a child in the dark. 
His foot tripped on a barely visible insect hill and a winged swarm exploded around him. Startled, Alan jerked sideways, crashing his head against a tree. He clutched at the bark for a second, dazed, then his knees buckled. His blaster fell into the shadows. The robot crashed loudly behind him now. Without stopping to think, Alan fumbled along the ground after his gun, straining his eyes in the darkness. He found it just a couple of feet to one side, against the base of a small bush. Just as his fingers closed upon the barrel his other hand slipped into something sticky that splashed over his forearm. He screamed in pain and leaped back, trying frantically to wipe the clinging, burning blackness off his arm. Patches of black scraped off onto branches and vines, but the rest spread slowly over his arm as agonizing as hot acid, or as flesh being ripped away layer by layer. Almost blinded by pain, whimpering, Alan stumbled forward. Sharp muscle spasms shot from his shoulder across his back and chest. Tears streamed across his cheeks. A blue arc slashed at the trees a mere hundred yards behind. He screamed at the blast. "Damn you, Pete! Damn your robots! Damn, damn ... Oh, Peggy!" He stepped into emptiness. Coolness. Wet. Slowly, washed by the water, the pain began to fall away. He wanted to lie there forever in the dark, cool, wetness. For ever, and ever, and ... The air thundered. In the dim light he could see the banks of the stream, higher than a man, muddy and loose. Growing right to the edge of the banks, the jungle reached out with hairy, disjointed arms as if to snag even the dirty little stream that passed so timidly through its domain. Alan, lying in the mud of the stream bed, felt the earth shake as the heavy little robot rolled slowly and inexorably towards him. "The Lord High Executioner," he thought, "in battle dress." He tried to stand but his legs were almost too weak and his arm felt numb. "I'll drown him," he said aloud. 
"I'll drown the Lord High Executioner." He laughed. Then his mind cleared. He remembered where he was. Alan trembled. For the first time in his life he understood what it was to live, because for the first time he realized that he would sometime die. In other times and circumstances he might put it off for a while, for months or years, but eventually, as now, he would have to watch, still and helpless, while death came creeping. Then, at thirty, Alan became a man. "Dammit, no law says I have to flame-out now !" He forced himself to rise, forced his legs to stand, struggling painfully in the shin-deep ooze. He worked his way to the bank and began to dig frenziedly, chest high, about two feet below the edge. His arm where the black thing had been was swollen and tender, but he forced his hands to dig, dig, dig, cursing and crying to hide the pain, and biting his lips, ignoring the salty taste of blood. The soft earth crumbled under his hands until he had a small cave about three feet deep in the bank. Beyond that the soil was held too tightly by the roots from above and he had to stop. The air crackled blue and a tree crashed heavily past Alan into the stream. Above him on the bank, silhouetting against the moons, the killer robot stopped and its blaster swivelled slowly down. Frantically, Alan hugged the bank as a shaft of pure electricity arced over him, sliced into the water, and exploded in a cloud of steam. The robot shook for a second, its blaster muzzle lifted erratically and for an instant it seemed almost out of control, then it quieted and the muzzle again pointed down. Pressing with all his might, Alan slid slowly along the bank inches at a time, away from the machine above. Its muzzle turned to follow him but the edge of the bank blocked its aim. Grinding forward a couple of feet, slightly overhanging the bank, the robot fired again. 
For a split second Alan seemed engulfed in flame; the heat of hell singed his head and back, and mud boiled in the bank by his arm. Again the robot trembled. It jerked forward a foot and its blaster swung slightly away. But only for a moment. Then the gun swung back again. Suddenly, as if sensing something wrong, its tracks slammed into reverse. It stood poised for a second, its treads spinning crazily as the earth collapsed underneath it, where Alan had dug, then it fell with a heavy splash into the mud, ten feet from where Alan stood. Without hesitation Alan threw himself across the blaster housing, frantically locking his arms around the barrel as the robot's treads churned furiously in the sticky mud, causing it to buck and plunge like a Brahma bull. The treads stopped and the blaster jerked upwards wrenching Alan's arms, then slammed down. Then the whole housing whirled around and around, tilting alternately up and down like a steel-skinned water monster trying to dislodge a tenacious crab, while Alan, arms and legs wrapped tightly around the blaster barrel and housing, pressed fiercely against the robot's metal skin. Slowly, trying to anticipate and shift his weight with the spinning plunges, Alan worked his hand down to his right hip. He fumbled for the sheath clipped to his belt, found it, and extracted a stubby hunting knife. Sweat and blood in his eyes, hardly able to move on the wildly swinging turret, he felt down the sides to the thin crack between the revolving housing and the stationary portion of the robot. With a quick prayer he jammed in the knife blade—and was whipped headlong into the mud as the turret literally snapped to a stop. The earth, jungle and moons spun in a pinwheeled blur, slowed, and settled to their proper places. Standing in the sticky, sweet-smelling ooze, Alan eyed the robot apprehensively. Half buried in mud, it stood quiet in the shadowy light except for an occasional, almost spasmodic jerk of its blaster barrel. 
For the first time that night Alan allowed himself a slight smile. "A blade in the old gear box, eh? How does that feel, boy?" He turned. "Well, I'd better get out of here before the knife slips or the monster cooks up some more tricks with whatever it's got for a brain." Digging little footholds in the soft bank, he climbed up and stood once again in the rustling jungle darkness. "I wonder," he thought, "how Pete could cram enough brain into one of those things to make it hunt and track so perfectly." He tried to visualize the computing circuits needed for the operation of its tracking mechanism alone. "There just isn't room for the electronics. You'd need a computer as big as the one at camp headquarters." In the distance the sky blazed as a blaster roared in the jungle. Then Alan heard the approaching robot, crunching and snapping its way through the undergrowth like an onrushing forest fire. He froze. "Good Lord! They communicate with each other! The one I jammed must be calling others to help." He began to move along the bank, away from the crashing sounds. Suddenly he stopped, his eyes widened. "Of course! Radio! I'll bet anything they're automatically controlled by the camp computer. That's where their brain is!" He paused. "Then, if that were put out of commission ..." He jerked away from the bank and half ran, half pulled himself through the undergrowth towards the camp. Trees exploded to his left as another robot fired in his direction, too far away to be effective but churning towards him through the blackness. Alan changed direction slightly to follow a line between the two robots coming up from either side, behind him. His eyes were well accustomed to the dark now, and he managed to dodge most of the shadowy vines and branches before they could snag or trip him. Even so, he stumbled in the wiry underbrush and his legs were a mass of stinging slashes from ankle to thigh. 
The crashing rumble of the killer robots shook the night behind him, nearer sometimes, then falling slightly back, but following constantly, more unshakable than bloodhounds because a man can sometimes cover a scent, but no man can stop his thoughts. Intermittently, like photographers' strobes, blue flashes would light the jungle about him. Then, for seconds afterwards his eyes would see dancing streaks of yellow and sharp multi-colored pinwheels that alternately shrunk and expanded as if in a surrealist's nightmare. Alan would have to pause and squeeze his eyelids tight shut before he could see again, and the robots would move a little closer. To his right the trees silhouetted briefly against brilliance as a third robot slowly moved up in the distance. Without thinking, Alan turned slightly to the left, then froze in momentary panic. "I should be at the camp now. Damn, what direction am I going?" He tried to think back, to visualize the twists and turns he'd taken in the jungle. "All I need is to get lost." He pictured the camp computer with no one to stop it, automatically sending its robots in wider and wider forays, slowly wiping every trace of life from the planet. Technologically advanced machines doing the job for which they were built, completely, thoroughly, without feeling, and without human masters to separate sense from futility. Finally parts would wear out, circuits would short, and one by one the killers would crunch to a halt. A few birds would still fly then, but a unique animal life, rare in the universe, would exist no more. And the bones of children, eager girls, and their men would also lie, beside a rusty hulk, beneath the alien sun. "Peggy!" As if in answer, a tree beside him breathed fire, then exploded. In the brief flash of the blaster shot, Alan saw the steel glint of a robot only a hundred yards away, much nearer than he had thought. "Thank heaven for trees!" 
He stepped back, felt his foot catch in something, clutched futilely at some leaves and fell heavily. Pain danced up his leg as he grabbed his ankle. Quickly he felt the throbbing flesh. "Damn the rotten luck, anyway!" He blinked the pain tears from his eyes and looked up—into a robot's blaster, jutting out of the foliage, thirty yards away. Instinctively, in one motion Alan grabbed his pocket blaster and fired. To his amazement the robot jerked back, its gun wobbled and started to tilt away. Then, getting itself under control, it swung back again to face Alan. He fired again, and again the robot reacted. It seemed familiar somehow. Then he remembered the robot on the river bank, jiggling and swaying for seconds after each shot. "Of course!" He cursed himself for missing the obvious. "The blaster static blanks out radio transmission from the computer for a few seconds. They even do it to themselves!" Firing intermittently, he pulled himself upright and hobbled ahead through the bush. The robot shook spasmodically with each shot, its gun tilted upward at an awkward angle. Then, unexpectedly, Alan saw stars, real stars brilliant in the night sky, and half dragging his swelling leg he stumbled out of the jungle into the camp clearing. Ahead, across fifty yards of grass stood the headquarters building, housing the robot-controlling computer. Still firing at short intervals he started across the clearing, gritting his teeth at every step. Straining every muscle in spite of the agonizing pain, Alan forced himself to a limping run across the uneven ground, carefully avoiding the insect hills that jutted up through the grass. From the corner of his eye he saw another of the robots standing shakily in the dark edge of the jungle waiting, it seemed, for his small blaster to run dry. "Be damned! You can't win now!" Alan yelled between blaster shots, almost irrational from the pain that ripped jaggedly through his leg. Then it happened. 
A few feet from the building's door his blaster quit. A click. A faint hiss when he frantically jerked the trigger again and again, and the spent cells released themselves from the device, falling in the grass at his feet. He dropped the useless gun. "No!" He threw himself on the ground as a new robot suddenly appeared around the edge of the building a few feet away, aimed, and fired. Air burned over Alan's back and ozone tingled in his nostrils. Blinding itself for a few seconds with its own blaster static, the robot paused momentarily, jiggling in place. In this instant, Alan jammed his hands into an insect hill and hurled the pile of dirt and insects directly at the robot's antenna. In a flash, hundreds of the winged things erupted angrily from the hole in a swarming cloud, each part of which was a speck of life transmitting mental energy to the robot's pickup devices. Confused by the sudden dispersion of mind impulses, the robot fired erratically as Alan crouched and raced painfully for the door. It fired again, closer, as he fumbled with the lock release. Jagged bits of plastic and stone ripped past him, torn loose by the blast. Frantically, Alan slammed open the door as the robot, sensing him strongly now, aimed point blank. He saw nothing, his mind thought of nothing but the red-clad safety switch mounted beside the computer. Time stopped. There was nothing else in the world. He half-jumped, half-fell towards it, slowly, in tenths of seconds that seemed measured out in years. The universe went black. Later. Brilliance pressed upon his eyes. Then pain returned, a multi-hurting thing that crawled through his body and dragged ragged tentacles across his brain. He moaned. A voice spoke hollowly in the distance. "He's waking. Call his wife." Alan opened his eyes in a white room; a white light hung over his head. Beside him, looking down with a rueful smile, stood a young man wearing space medical insignia. 
"Yes," he acknowledged the question in Alan's eyes, "you hit the switch. That was three days ago. When you're up again we'd all like to thank you." Suddenly a sobbing-laughing green-eyed girl was pressed tightly against him. Neither of them spoke. They couldn't. There was too much to say. THE END Transcriber's Note: This etext was produced from Amazing Science Fiction Stories October 1958. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
|
A. The robots aren't hunting Alan specifically. They are hunting all life forms.
|
What does the theme of the story reveal about how society treats the mentally ill?
A. There is insufficient social infrastructure to identify and care for those living with severe mental illnesses
B. The Christian church has too much unqualified involvement in treatment of those living with severe mental illnesses
C. Those living with severe mental illnesses are more likely to be abused by social institutions like schools, hospitals, and law enforcement
D. More studies need to be conducted to learn how to best care for people living with severe mental illnesses
|
Charity Case By JIM HARMON Illustrated by DICK FRANCIS [Transcriber's Note: This etext was produced from Galaxy Science Fiction December 1959. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Certainly I see things that aren't there and don't say what my voice says—but how can I prove that I don't have my health? When he began his talk with "You got your health, don't you?" it touched those spots inside me. That was when I did it. Why couldn't what he said have been "The best things in life are free, buddy" or "Every dog has his day, fellow" or "If at first you don't succeed, man"? No, he had to use that one line. You wouldn't blame me. Not if you believe me. The first thing I can remember, the start of all this, was when I was four or five somebody was soiling my bed for me. I absolutely was not doing it. I took long naps morning and evening so I could lie awake all night to see that it wouldn't happen. It couldn't happen. But in the morning the bed would sit there dispassionately soiled and convict me on circumstantial evidence. My punishment was as sure as the tide. Dad was a compact man, small eyes, small mouth, tight clothes. He was narrow but not mean. For punishment, he locked me in a windowless room and told me to sit still until he came back. It wasn't so bad a punishment, except that when Dad closed the door, the light turned off and I was left there in the dark. Being four or five, I didn't know any better, so I thought Dad made it dark to add to my punishment. But I learned he didn't know the light went out. It came back on when he unlocked the door. Every time I told him about the light as soon as I could talk again, but he said I was lying. One day, to prove me a liar, he opened and closed the door a few times from outside. The light winked off and on, off and on, always shining when Dad stuck his head inside. 
He tried using the door from the inside, and the light stayed on, no matter how hard he slammed the door. I stayed in the dark longer for lying about the light. Alone in the dark, I wouldn't have had it so bad if it wasn't for the things that came to me. They were real to me. They never touched me, but they had a little boy. He looked the way I did in the mirror. They did unpleasant things to him. Because they were real, I talked about them as if they were real, and I almost earned a bunk in the home for retarded children until I got smart enough to keep the beasts to myself. My mother hated me. I loved her, of course. I remember her smell mixed up with flowers and cookies and winter fires. I remember she hugged me on my ninth birthday. The trouble came from the notes written in my awkward hand that she found, calling her names I didn't understand. Sometimes there were drawings. I didn't write those notes or make those drawings. My mother and father must have been glad when I was sent away to reform school after my thirteenth birthday party, the one no one came to. The reform school was nicer. There were others there who'd had it about like me. We got along. I didn't watch their shifty eyes too much, or ask them what they shifted to see. They didn't talk about my screams at night. It was home. My trouble there was that I was always being framed for stealing. I didn't take any of those things they located in my bunk. Stealing wasn't in my line. If you believe any of this at all, you'll see why it couldn't be me who did the stealing. There was reason for me to steal, if I could have got away with it. The others got money from home to buy the things they needed—razor blades, candy, sticks of tea. I got a letter from Mom or Dad every now and then before they were killed, saying they had sent money or that it was enclosed, but somehow I never got a dime of it. 
When I was expelled from reform school, I left with just one idea in mind—to get all the money I could ever use for the things I needed and the things I wanted. It was two or three years later that I skulked into Brother Partridge's mission on Durbin Street. The preacher and half a dozen men were singing Onward Christian Soldiers in the meeting room. It was a drafty hall with varnished camp chairs. I shuffled in at the back with my suitcoat collar turned up around my stubbled jaw. I made my hand shaky as I ran it through my knotted hair. Partridge was supposed to think I was just a bum. As an inspiration, I hugged my chest to make him think I was some wino nursing a flask full of Sneaky Pete. All I had there was a piece of copper alloy tubing inside a slice of plastic hose for taking care of myself, rolling sailors and the like. Who had the price of a bottle? Partridge didn't seem to notice me, but I knew that was an act. I knew people were always watching every move I made. He braced his red-furred hands on the sides of his auctioneer's stand and leaned his splotched eagle beak toward us. "Brothers, this being Thanksgiving, I pray the good Lord that we all are truly thankful for all that we have received. Amen." Some skin-and-bones character I didn't know struggled out of his seat, amening. I could see he had a lot to be thankful for—somewhere he had received a fix. "Brothers," Partridge went on after enjoying the interruption with a beaming smile, "you shall all be entitled to a bowl of turkey soup prepared by Sister Partridge, a generous supply of sweet rolls and dinner rolls contributed by the Early Morning Bakery of this city, and all the coffee you can drink. Let us march out to The Stars and Stripes Forever, John Philip Sousa's grand old patriotic song." I had to laugh at all those bums clattering the chairs in front of me, scampering after water soup and stale bread. 
As soon as I got cleaned up, I was going to have dinner in a good restaurant, and I was going to order such expensive food and leave such a large tip for the waiter and send one to the chef that they were going to think I was rich, and some executive with some brokerage firm would see me and say to himself, "Hmm, executive material. Just the type we need. I beg your pardon, sir—" just like the razor-blade comic-strip ads in the old magazines that Frankie the Pig sells three for a quarter. I was marching. Man, was I ever marching, but the secret of it was I was only marking time the way we did in fire drills at the school. They passed me, every one of them, and marched out of the meeting room into the kitchen. Even Partridge made his way down from the auctioneer's stand like a vulture with a busted wing and darted through his private door. I was alone, marking time behind the closed half of double doors. One good breath and I raced past the open door and flattened myself to the wall. Crockery was ringing and men were slurping inside. No one had paid any attention to me. That was pretty odd. People usually watch my every move, but a man's luck has to change sometime, doesn't it? Following the wallboard, I went down the side of the room and behind the last row of chairs, closer, closer, and halfway up the room again to the entrance—the entrance and the little wooden box fastened to the wall beside it. The box was old and made out of some varnished wood. There was a slot in the top. There wasn't any sign anywhere around it, but you knew it wasn't a mailbox. My hand went flat on the top of the box. One finger at a time drew up and slipped into the slot. Index, fore, third, little. I put my thumb in my palm and shoved. My hand went in. There were coins inside. I scooped them up with two fingers and held them fast with the other two. Once I dropped a dime—not a penny, milled edge—and I started to reach for it. No, don't be greedy. 
I knew I would probably lose my hold on all the coins if I tried for that one. I had all the rest. It felt like about two dollars, or close to it. Then I found the bill. A neatly folded bill in the box. Somehow I knew all along it would be there. I tried to read the numbers on the bill with my fingertips, but I couldn't. It had to be a one. Who drops anything but a one into a Skid Row collection box? But still there were tourists, slummers. They might leave a fifty or even a hundred. A hundred! Yes, it felt new, crisp. It had to be a hundred. A single would be creased or worn. I pulled my hand out of the box. I tried to pull my hand out of the box. I knew what the trouble was, of course. I was in a monkey trap. The monkey reaches through the hole for the bait, and when he gets it in his hot little fist, he can't get his hand out. He's too greedy to let go, so he stays there, caught as securely as if he were caged. I was a man, not a monkey. I knew why I couldn't get my hand out. But I couldn't lose that money, especially that century bill. Calm, I ordered myself. Calm. The box was fastened to the vertical tongue-and-groove laths of the woodwork, not the wall. It was old lumber, stiffened by a hundred layers of paint since 1908. The paint was as thick and strong as the boards. The box was fastened fast. Six-inch spike nails, I guessed. Calmly, I flung my whole weight away from the wall. My wrist almost cracked, but there wasn't even a bend in the box. Carefully, I tried to jerk my fist straight up, to pry off the top of the box. It was as if the box had been carved out of one solid piece of timber. It wouldn't go up, down, left or right. But I kept trying. While keeping a lookout for Partridge and somebody stepping out of the kitchen for a pull on a bottle, I spotted the clock for the first time, a Western Union clock high up at the back of the hall. 
Just as I seen it for the first time, the electricity wound the spring motor inside like a chicken having its neck wrung. The next time I glanced at the clock, it said ten minutes had gone by. My hand still wasn't free and I hadn't budged the box. "This," Brother Partridge said, "is one of the most profound experiences of my life." My head hinged until it lined my eyes up with Brother Partridge. The pipe hung heavy in my pocket, but he was too far from me. "A vision of you at the box projected itself on the crest of my soup," the preacher explained in wonderment. I nodded. "Swimming right in there with the dead duck." "Cold turkey," he corrected. "Are you scoffing at a miracle?" "People are always watching me, Brother," I said. "So now they do it even when they aren't around. I should have known it would come to that." The pipe was suddenly a weight I wanted off me. I would try robbing a collection box, knowing positively that I would get caught, but I wasn't dumb enough to murder. Somebody, somewhere, would be a witness to it. I had never got away with anything in my life. I was too smart to even try anything but the little things. "I may be able to help you," Brother Partridge said, "if you have faith and a conscience." "I've got something better than a conscience," I told him. Brother Partridge regarded me solemnly. "There must be something special about you, for your apprehension to come through miraculous intervention. But I can't imagine what." "I always get apprehended somehow, Brother," I said. "I'm pretty special." "Your name?" "William Hagle." No sense lying. I had been booked and printed before. Partridge prodded me with his bony fingers as if making sure I was substantial. "Come. Let's sit down, if you can remove your fist from the money box." I opened up my fingers and let the coins ring inside the box and I drew out my hand. The bill stuck to the sweat on my fingers and slid out along with the digits. A one, I decided. 
I had got into trouble for a grubby single. It wasn't any century. I had been kidding myself. I unfolded the note. Sure enough, it wasn't a hundred-dollar bill, but it was a twenty, and that was almost the same thing to me. I creased it and put it back into the slot. As long as it stalled off the cops, I'd talk to Partridge. We took a couple of camp chairs and I told him the story of my life, or most of it. It was hard work on an empty stomach; I wished I'd had some of that turkey soup. Then again I was glad I hadn't. Something always happened to me when I thought back over my life. The same thing. The men filed out of the kitchen, wiping their chins, and I went right on talking. After some time Sister Partridge bustled in and snapped on the overhead lights and I kept talking. The brother still hadn't used the phone to call the cops. "Remarkable," Partridge finally said when I got so hoarse I had to take a break. "One is almost—almost—reminded of Job. William, you are being punished for some great sin. Of that, I'm sure." "Punished for a sin? But, Brother, I've always had it like this, as long as I can remember. What kind of a sin could I have committed when I was fresh out of my crib?" "William, all I can tell you is that time means nothing in Heaven. Do you deny the transmigration of souls?" "Well," I said, "I've had no personal experience—" "Of course you have, William! Say you don't remember. Say you don't want to remember. But don't say you have no personal experience!" "And you think I'm being punished for something I did in a previous life?" He looked at me in disbelief. "What else could it be?" "I don't know," I confessed. "I certainly haven't done anything that bad in this life." "William, if you atone for this sin, perhaps the horde of locusts will lift from you." It wasn't much of a chance, but I was unused to having any at all. I shook off the dizziness of it. "By the Lord Harry, Brother, I'm going to give it a try!" I cried.
"I believe you," Partridge said, surprised at himself. He ambled over to the money box on the wall. He tapped the bottom lightly and a box with no top slid out of the slightly larger box. He reached in, fished out the bill and presented it to me. "Perhaps this will help in your atonement," he said. I crumpled it into my pocket fast. Not meaning to sound ungrateful, I'm pretty sure he hadn't noticed it was a twenty. And then the bill seemed to lie there, heavy, a lead weight. It would have been different if I had managed to get it out of the box myself. You know how it is. Money you haven't earned doesn't seem real to you. There was something I forgot to mention so far. During the year between when I got out of the reformatory and the one when I tried to steal Brother Partridge's money, I killed a man. It was all an accident, but killing somebody is reason enough to get punished. It didn't have to be a sin in some previous life, you see. I had gotten my first job in too long, stacking boxes at the freight door of Baysinger's. The drivers unloaded the stuff, but they just dumped it off the truck. An empty rear end was all they wanted. The freight boss told me to stack the boxes inside, neat and not too close together. I stacked boxes the first day. I stacked more the second. The third day I went outside with my baloney and crackers. It was warm enough even for November. Two of them, dressed like Harvard seniors, caps and striped duffer jackets, came up to the crate I was dining off. "Work inside, Jack?" the taller one asked. "Yeah," I said, chewing. "What do you do, Jack?" the fatter one asked. "Stack boxes." "Got a union card?" I shook my head. "Application?" "No," I said. "I'm just helping out during Christmas." "You're a scab, buddy," Long-legs said. "Don't you read the papers?" "I don't like comic strips," I said. They sighed. I think they hated to do it, but I was bucking the system. Fats hit me high. Long-legs hit me low. I blew cracker crumbs into their faces. 
After that, I just let them go. I know how to take a beating. That's one thing I knew. Then lying there, bleeding to myself, I heard them talking. I heard noises like make an example of him and do something permanent and I squirmed away across the rubbish like a polite mouse. I made it around a corner of brick and stood up, hurting my knee on a piece of brown-splotched pipe. There were noises on the other angle of the corner and so I tested if the pipe was loose and it was. I closed my eyes and brought the pipe up and then down. It felt as if I connected, but I was so numb, I wasn't sure until I unscrewed my eyes. There was a big man in a heavy wool overcoat and gray homburg spread on a damp centerfold from the News. There was a pick-up slip from the warehouse under the fingers of one hand, and somebody had beaten his brains out. The police figured it was part of some labor dispute, I guess, and they never got to me. I suppose I was to blame anyway. If I hadn't been alive, if I hadn't been there to get beaten up, it wouldn't have happened. I could see the point in making me suffer for it. There was a lot to be said for looking at it like that. But there was nothing to be said for telling Brother Partridge about the accident, or murder, or whatever had happened that day. Searching myself after I left Brother Partridge, I finally found a strip of gray adhesive tape on my side, out of the fuzzy area. Making the twenty the size of a thick postage stamp, I peeled back the tape and put the folded bill on the white skin and smoothed the tape back. There was only one place for me to go now. I headed for the public library. It was only about twenty blocks, but not having had anything to eat since the day before, it enervated me. The downstairs washroom was where I went first. There was nobody there but an old guy talking urgently to a kid with thick glasses, and somebody building a fix in one of the booths.
I could see charred matches dropping down on the floor next to his tennis shoes, and even a few grains of white stuff. But he managed to hold still enough to keep from spilling more from the spoon. I washed my hands and face, smoothed my hair down, combing it with my fingers. Going over my suit with damp toweling got off a lot of the dirt. I put my collar on the outside of my jacket and creased the wings with my thumbnail so it would look more like a sports shirt. It didn't really. I still looked like a bum, but sort of a neat, non-objectionable bum. The librarian at the main desk looked sympathetically hostile, or hostilely sympathetic. "I'd like to get into the stacks, miss," I said, "and see some of the old newspapers." "Which newspapers?" the old girl asked stiffly. I thought back. I couldn't remember the exact date. "Ones for the first week in November last year." "We have the Times microfilmed. I would have to project them for you." "I didn't want to see the Times," I said, fast. "Don't you have any newspapers on paper?" I didn't want her to see what I wanted to read up on. "We have the News, bound, for last year." I nodded. "That's the one I wanted to see." She sniffed and told me to follow her. I didn't rate a cart to my table, I guess, or else the bound papers weren't supposed to come out of the stacks. The cases of books, row after row, smelled good. Like old leather and good pipe tobacco. I had been here before. In this world, it's the man with education who makes the money. I had been reading the Funk & Wagnalls Encyclopedia. So far I knew a lot about Mark Antony, Atomic Energy, Boron, Brussels, Catapults, Demons, and Divans. I guess I had stopped to look around at some of the titles, because the busy librarian said sharply, "Follow me." I heard my voice say, "A pleasure. What about after work?" I didn't say it, but I was used to my voice independently saying things. Her neck got to flaming, but she walked stiffly ahead. She didn't say anything.
She must be awful mad, I decided. But then I got the idea she was flushed with pleasure. I'm pretty ugly and I looked like a bum, but I was young. You had to grant me that. She waved a hand at the rows of bound News and left me alone with them. I wasn't sure if I was allowed to hunt up a table to lay the books on or not, so I took the volume for last year and laid it on the floor. That was the cleanest floor I ever saw. It didn't take me long to find the story. The victim was a big man, because the story was on the second page of the Nov. 4 edition. I started to tear the page out, then only memorized the name and home address. Somebody was sure to see me and I couldn't risk trouble just now. I stuck the book back in line and left by the side door. I went to a dry-cleaner, not the cheapest place I knew, because I wouldn't be safe with the change from a twenty in that neighborhood. My suit was cleaned while I waited. I paid a little extra and had it mended. Funny thing about a suit—it's almost never completely shot unless you just have it ripped off you or burned up. It wasn't exactly in style, but some rich executives wore suits out of style that they had paid a lot of money for. I remembered Fredric March's double-breasted in Executive Suite while Walter Pidgeon and the rest wore Ivy Leagues. Maybe I would look like an eccentric executive. I bought a new shirt, a good used pair of shoes, and a dime pack of single-edged razor blades. I didn't have a razor, but anybody with nerve can shave with a single-edge blade and soap and water. The clerk took my two bucks in advance and I went up to my room. I washed out my socks and underwear, took a bath, shaved and trimmed my hair and nails with the razor blade. With some soap on my finger, I scrubbed my teeth. Finally I got dressed. Everything was all right except that I didn't have a tie. They had them, a quarter apiece, where I got the shoes. It was only six blocks—I could go back. But I didn't want to wait.
I wanted to complete the picture. The razor blade sliced through the pink bath towel evenly. I cut out a nice modern-style tie, narrow, with some horizontal stripes down at the bottom. I made a tight, thin knot. It looked pretty good. I was ready to leave, so I started for the door. I went back. I had almost forgotten my luggage. The box still had three unwrapped blades in it. I pocketed it. I hefted the used blade, dulled by all the work it had done. You can run being economical into stinginess. I tossed it into the wastebasket. I had five hamburgers and five cups of coffee. I couldn't finish all of the French fries. "Mac," I said to the fat counterman, who looked like all fat countermen, "give me a Milwaukee beer." He stopped polishing the counter in front of his friend. "Milwaukee, Wisconsin, or Milwaukee, Oregon?" "Wisconsin." He didn't argue. It was cold and bitter. All beer is bitter, no matter what they say on TV. I like beer. I like the bitterness of it. It felt like another, but I checked myself. I needed a clear head. I thought about going back to the hotel for some sleep; I still had the key in my pocket (I wasn't trusting it to any clerk). No, I had had sleep on Thanksgiving, bracing up for trying the lift at Brother Partridge's. Let's see, it was daylight outside again, so this was the day after Thanksgiving. But it had only been sixteen or twenty hours since I had slept. That was enough. I left the money on the counter for the hamburgers and coffee and the beer. There was $7.68 left. As I passed the counterman's friend on his stool, my voice said, "I think you're yellow." He turned slowly, his jaw moving further away from his brain. I winked. "It was just a bet for me to say that to you. I won two bucks. Half of it is yours." I held out the bill to him. His paw closed over the money and punched me on the biceps. Too hard. He winked back. "It's okay." I rubbed my shoulder, marching off fast, and I counted my money. 
With my luck, I might have given the counterman's friend the five instead of one of the singles. But I hadn't. I now had $6.68 left. "I still think you're yellow," my voice said. It was my voice, but it didn't come from me. There were no words, no feeling of words in my throat. It just came out of the air the way it always did. I ran. Harold R. Thompkins, 49, vice-president of Baysinger's, was found dead behind the store last night. His skull had been crushed by a vicious beating with a heavy implement, Coroner McClain announced in preliminary verdict. Thompkins, who resided at 1467 Claremont, Edgeway, had been active in seeking labor-management peace in the recent difficulties.... I had read that a year before. The car cards on the clanking subway and the rumbling bus didn't seem nearly so interesting to me. Outside the van, a tasteful sign announced the limits of the village of Edgeway, and back inside, the monsters of my boyhood went bloomp at me. I hadn't seen anything like them in years. The slimy, scaly beasts were slithering over the newspaper holders, the ad card readers, the girl watchers as the neat little carbon-copy modern homes breezed past the windows. I ignored the devils and concentrated on reading the withered, washed-out political posters on the telephone poles. My neck ached from holding it so stiff, staring out through the glass. More than that, I could feel the jabberwocks staring at me. You know how it is. You can feel a stare with the back of your neck and between your eyes. They got one brush of a gaze out of me. The things abruptly started their business, trying to act casually as if they hadn't been waiting for me to look at them at all. They had a little human being of some sort. It was the size of a small boy, like the small boy who looked like me that they used to destroy when I was locked up with them in the dark. Except this was a man, scaled down to child's size.
He had sort of an ugly, worried, tired, stupid look and he wore a shiny suit with a piece of a welcome mat or something for a necktie. Yeah, it was me. I really knew it all the time. They began doing things to the midget me. I didn't even lift an eyebrow. They couldn't do anything worse to the small man than they had done to the young boy. It was sort of nostalgic watching them, but I really got bored with all that violence and killing and killing the same kill over and over. Like watching the Saturday night string of westerns in a bar. The sunlight through the window was yellow and hot. After a time, I began to doze. The shrieks woke me up. For the first time, I could hear the shrieks of the monster's victim and listen to their obscene droolings. For the very first time in my life. Always before it had been all pantomime, like Charlie Chaplin. Now I heard the sounds of it all. They say it's a bad sign when you start hearing voices. I nearly panicked, but I held myself in the seat and forced myself to be rational about it. My own voice was always saying things everybody could hear but which I didn't say. It wasn't any worse to be the only one who could hear other things I never said. I was as sane as I ever was. There was no doubt about that. But a new thought suddenly impressed itself on me. Whatever was punishing me for my sin was determined that I turn back before reaching 1467 Claremont.
|
A. There is insufficient social infrastructure to identify and care for those living with severe mental illnesses
|
How does Jery's behavior change when he's wearing the uniform and amnesty?
A. He's bolder, and he starts to misuse the authority it gives him
B. He tries to remain the same and not let it get to his head
C. He's bolder, playing into the sense of power it gives him
D. He's surprised by the authority it seems to give him
|
THE SECRET MARTIANS by JACK SHARKEY ACE BOOKS, INC. 23 West 47th Street, New York 36, N. Y. THE SECRET MARTIANS Copyright, 1960, by Ace Books, Inc. All Rights Reserved Printed in U.S.A. [Transcriber's Note: Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] MASTER SPY OF THE RED PLANET Jery Delvin had a most unusual talent. He could detect the flaws in any scheme almost on sight—even where they had eluded the best brains in the ad agency where he worked. So when the Chief of World Security told him that he had been selected as the answer to the Solar System's greatest mystery, Jery assumed that it was because of his mental agility. But when he got to Mars to find out why fifteen boys had vanished from a spaceship in mid-space, he found out that even his quick mind needed time to pierce the maze of out-of-this-world double-dealing. For Jery had become a walking bomb, and when he set himself off, it would be the end of the whole puzzle of THE SECRET MARTIANS—with Jery as the first to go! Jack Sharkey decided to be a writer nineteen years ago, in the Fourth Grade, when he realized all at once that "someone wrote all those stories in the textbooks." While everyone else looked forward variously to becoming firemen, cowboys, and trapeze artists, Jack was devouring every book he could get his hands on, figuring that "if I put enough literature into my head, some of it might overflow and come out." After sixteen years of education, Jack found himself teaching high school English in Chicago, a worthwhile career, but "not what one would call zesty." After a two-year Army hitch, and a year in advertising "sublimating my urge to write things for cash," Jack moved to New York, determined to make a career of full-time fiction-writing. Oddly enough, it worked out, and he now does nothing else. 
He says, "I'd like to say I do this for fulfillment, or for cash, or because it's my destiny; however, the real reason (same as that expressed by Jean Kerr) is that this kind of stay-at-home self-employment lets me sleep late in the morning." 1 I was sitting at my desk, trying to decide how to tell the women of America that they were certain to be lovely in a Plasti-Flex brassiere without absolutely guaranteeing them anything, when the two security men came to get me. I didn't quite believe it at first, when I looked up and saw them, six-feet-plus of steel nerves and gimlet eyes, staring down at me, amidst my litter of sketches, crumpled copy sheets and deadline memos. It was only a fraction of an instant between the time I saw them and the time they spoke to me, but in that miniscule interval I managed to retrace quite a bit of my lifetime up till that moment, seeking vainly for some reason why they'd be standing there, so terribly and inflexibly efficient looking. Mostly, I ran back over all the ads I'd created and/or okayed for Solar Sales, Inc. during my five years with the firm, trying to see just where I'd gone and shaken the security of the government. I couldn't find anything really incriminating, unless maybe it was that hair dye that unexpectedly turned bright green after six weeks in the hair, but that was the lab's fault, not mine. So I managed a weak smile toward the duo, and tried not to sweat too profusely. "Jery Delvin?" said the one on my left, a note of no-funny-business in his brusque baritone. "... Yes," I said, some terrified portion of my mind waiting masochistically for them to draw their collapsers and reduce me to a heap of hot protons. "Come with us," said his companion. I stared at him, then glanced hopelessly at the jumble of things on my desk. "Never mind that stuff," he added. I rose from my place, slipped my jacket from its hook, and started across the office toward the door, each of them falling into rigid step beside me. 
Marge, my secretary, stood wide-eyed as we passed through her office, heading for the hall exit. "Mr. Delvin," she said, her voice a wispy croak. "When will you be back? The Plasti-Flex man is waiting for your—" I opened my mouth, but one of the security men cut in. "You will be informed," he said to Marge. She was staring after me, open-mouthed, as the door slid neatly shut behind us. "W-Will I be back?" I asked desperately, as we waited for the elevator. "At all? Am I under arrest? What's up, anyhow?" "You will be informed," said the man again. I had to let it go at that. Security men were not hired for their loquaciousness. They had a car waiting at the curb downstairs, in the No Parking zone. The cop on the beat very politely opened the door for them when we got there. Those red-and-bronze uniforms carry an awful lot of weight. Not to mention the golden bulk of their holstered collapsers. There was nothing for me to do but sweat it out and to try and enjoy the ride, wherever we were going. "You are Jery Delvin?" The man who spoke seemed more than surprised; he seemed stunned. His voice held an incredulous squeak, a squeak which would have amazed his subordinates. It certainly amazed me. Because the speaker was Philip Baxter, Chief of Interplanetary Security, second only to the World President in power, and not even that in matters of security. I managed to nod. He shook his white-maned head, slowly. "I don't believe it." "But I am, sir," I insisted doggedly. Baxter pressed the heels of his hands against his eyes for a moment, then sighed, grinned wryly, and waggled an index finger at an empty plastic contour chair. "I guess maybe you are at that, son. Sit down, sit down." I folded gingerly at knees and hips and slid back into the chair, pressing my perspiring palms against the sides of my pants to get rid of their uncomfortably slippery feel. "Thank you, sir." There was a silence, during which I breathed uneasily, and a bit too loudly.
Baxter seemed to be trying to say something. "I suppose you're wondering why I've called—" he started, then stopped short and flushed with embarrassment. I felt a sympathetic hot wave flooding my own features. A copy chief in an advertising company almost always reacts to an obvious cliche. Then, with something like a look of relief on his blunt face, he snatched up a brochure from his kidney-shaped desktop and his eyes raced over the lettering on its face. "Jery Delvin," he read, musingly and dispassionately. "Five foot eleven inches tall, brown hair, slate-gray eyes. Citizen. Honest, sober, civic-minded, slightly antisocial...." He looked at me, questioningly. "I'd rather not discuss that, sir, if you don't mind." "Do you mind if I do mind?" "Oh ... Oh, well if you put it like that. It's girls, sir. They block my mind. Ruin my work." "I don't get you." "Well, in my job—See, I've got this gift. I'm a spotter." "A what?" "A spotter. I can't be fooled. By advertising. Or mostly anything else. Except girls." "I'm still not sure that I—" "It's like this. I designate ratios, by the minute. They hand me a new ad, and I read it by a stopwatch. Then, as soon as I spot the clinker, they stop the watch. If I get it in five seconds, it passes. But if I spot it in less, they throw it out and start over again. Or is that clear? No, I guess you're still confused, sir." "Just a bit," Baxter said. I took a deep breath and tried again. "Maybe an example would be better. Uh, you know the one about 'Three out of five New York lawyers use Hamilton Bond Paper for note-taking'?" "I've heard that, yes." "Well, the clinker—that's the sneaky part of the ad, sir, or what we call weasel-wording—the clinker in that one is that while it seems to imply sixty percent of New York lawyers, it actually means precisely what it says: Three out of five. For that particular product, we had to question seventy-nine lawyers before we could come up with three who liked Hamilton Bond, see? 
Then we took the names of the three, and the names of two of the seventy-six men remaining, and kept them on file." "On file?" Baxter frowned. "What for?" "In case the Federal Trade Council got on our necks. We could prove that three out of five lawyers used the product. Three out of those five. See?" "Ah," said Baxter, grinning. "I begin to. And your job is to test these ads, before they reach the public. What fools you for five seconds will fool the average consumer indefinitely." I sat back, feeling much better. "That's right, sir." Then Baxter frowned again. "But what's this about girls?" "They—they block my thinking, sir, that's all. Why, take that example I just mentioned. In plain writing, I caught the clinker in one-tenth of a second. Then they handed me a layout with a picture of a lawyer dictating notes to his secretary on it. Her legs were crossed. Nice legs. Gorgeous legs...." "How long that time, Delvin?" "Indefinite. Till they took the girl away, sir." Baxter cleared his throat loudly. "I understand, at last. Hence your slight antisocial rating. You avoid women in order to keep your job." "Yes, sir. Even my secretary, Marge, whom I'd never in a million years think of looking at twice, except for business reasons, of course, has to stay out of my office when I'm working, or I can't function." "You have my sympathy, son," Baxter said, not unkindly. "Thank you, sir. It hasn't been easy." "No, I don't imagine it has...." Baxter was staring into some far-off distance. Then he remembered himself and blinked back to the present. "Delvin," he said sharply. "I'll come right to the point. This thing is.... You have been chosen for an extremely important mission." I couldn't have been more surprised had he announced my incipient maternity, but I was able to ask, "Me? For Pete's sake, why, sir?" Baxter looked me square in the eye. "Damned if I know!" 2 I stared at him, nonplussed. 
He'd spoken with evidence of utmost candor, and the Chief of Interplanetary Security was not one to be accused of a friendly josh, but—"You're kidding!" I said. "You must be. Otherwise, why was I sent for?" "Believe me, I wish I knew," he sighed. "You were chosen, from all the inhabitants of this planet, and all the inhabitants of the Earth Colonies, by the Brain." "You mean that International Cybernetics picked me for a mission? That's crazy, if you'll pardon me, sir." Baxter shrugged, and his genial smile was a bit tightly stretched. "When the current emergency arose and all our usual methods failed, we had to submit the problem to the Brain." "And," I said, beginning to be fascinated by his bewildered manner, "what came out?" He looked at me for a long moment, then picked up that brochure again, and said, without referring to it, "Jery Delvin, five foot eleven inches tall—" "Yes, but read me the part where it says why I was picked," I said, a little exasperated. Baxter eyed me balefully, then skimmed the brochure through the air in my direction. I caught it just short of the carpet. "If you can find it, I'll read it!" he said, almost snarling. I looked over the sheet, then turned it over and scanned the black opposite side. "All it gives is my description, governmental status, and address!" "Uh-huh," Baxter grunted laconically. "It amuses you, does it?" The smile was still on his lips, but there was a grimness in the glitter of his narrowing eyes. "Not really," I said hastily. "It baffles me, to be frank." "If you're sitting there in that hopeful stance awaiting some sort of explanation, you may as well relax," Baxter said shortly. "I have none to make. IC had none to make. Damn it all to hell!" He brought a meaty fist down on the desktop. "No one has an explanation! All we know is that the Brain always picks the right man." I let this sink in, then asked, "What made you ask for a man in the first place, sir? 
I've always understood that your own staff represented some of the finest minds—" "Hold it, son. Perhaps I didn't make myself clear. We asked for no man. We asked for a solution to an important problem. And your name was what we got. You, son, are the solution." Chief of Security or not, I was getting a little burned up at his highhanded treatment of my emotions. "How nice!" I said icily. "Now if I only knew the problem!" Baxter blinked, then lost some of his scowl. "Yes, of course," Baxter murmured, lighting up a cigar. He blew a plume of blue smoke toward the ceiling, then continued. "You've heard, of course, of the Space Scouts?" I nodded. "Like the old-time Boy Scouts, only with rocket-names for their various troops in place of the old animal names." "And you recall the recent government-sponsored trip they had? To Mars and back, with the broadly-smiling government picking up the enormous tab?" I detected a tinge of cynicism in his tone, but said nothing. "What a gesture!" Baxter went on, hardly speaking directly to me at all. "Inter-nation harmony! Good will! If these mere boys can get together and travel the voids of space, then so can everyone else! Why should there be tensions between the various nations comprising the World Government, when there's none between these fine lads, one from every civilized nation on Earth?" "You sound disillusioned, sir," I interjected. He stared at me as though I'd just fallen in from the ceiling or somewhere. "Huh? Oh, yes, Delvin, isn't it? Sorry, I got carried away. Where was I?" "You were telling about how this gesture, the WG sending these kids off for an extraterrestrial romp, will cement relations between those nations who have remained hostile despite the unification of all governments on Earth. Personally, I think it was a pretty good idea, myself. Everybody likes kids. Take this jam we were trying to push. Pomegranate Nectar, it was called.
Well, sir, it just wouldn't sell, and then we got this red-headed kid with freckles like confetti all over his slightly bucktoothed face, and we—Sir?" I'd paused, because he was staring at me like a man on the brink of apoplexy. I swallowed, and tried to look relaxed. After a moment, he found his voice. "To go on, Delvin. Do you recall what happened to the Space Scouts last week?" I thought a second, then nodded. "They've been having such a good time that the government extended their trip by—Why are you shaking your head that way, sir?" "Because it's not true, Delvin," he said. His voice was suddenly old and tired, and very much in keeping with his snowy hair. "You see, the Space Scouts have vanished." I came up in the chair, ramrod-straight. "Their mothers—they've been getting letters and—" "Forgeries. Fakes. Counterfeits." "You mean whoever took the Scouts is falsifying—" "No. My men are doing the work. Handpicked crews, day and night, have been sending those letters to the trusting mothers. It's been ghastly, Delvin. Hard on the men, terribly hard. Undotted i's, misuse of tenses, deliberate misspellings. They take it out of an adult, especially an adult with a mind keen enough to get him into Interplanetary Security. We've limited the shifts to four hours per man per day. Otherwise, they'd all be gibbering by now!" "And your men haven't found out anything?" I marvelled. Baxter shook his head. "And you finally had to resort to the Brain, and it gave you my name, but no reason for it?" Baxter cupped his slightly jowled cheeks in his hands and propped his elbows on the desktop, suddenly slipping out of his high position to talk to me man-to-man. "Look, son, an adding machine—which is a minor form of an electronic brain, and even works on the same principle—can tell you that two and two make four. But can it tell you why?" "Well, no, but—" "That, in a nutshell, is our problem. 
We coded and fed to the Brain every shred of information at our disposal; the ages of the children, for instance, and all their physical attributes, and where they were last seen, and what they were wearing. Hell, everything! The machine took the factors, weighed them, popped them through its billions of relays and tubes, and out of the end of the answer slot popped a single sheet. The one you just saw. Your dossier." "Then I'm to be sent to Mars?" I said, nervously. "That's just it," Baxter sighed. "We don't even know that! We're like a savage who finds a pistol: used correctly, it's a mean little weapon; pointed the wrong way, it's a quick suicide. So, you are our weapon. Now, the question is: Which way do we point you?" "You got me!" I shrugged hopelessly. "However, since we have nothing else to go on but the locale from which the children vanished, my suggestion would be to send you there." "Mars, you mean," I said. "No, to the spaceship Phobos II. The one they were returning to Earth in when they disappeared." "They disappeared from a spaceship? While in space?" Baxter nodded. "But that's impossible," I said, shaking my head against this disconcerting thought. "Yes," said Baxter. "That's what bothers me." 3 Phobos II, for obvious reasons, was berthed in a Top Security spaceport. Even so, they'd shuttled it into a hangar, safe from the eyes of even their own men, and as a final touch had hidden the ship's nameplate beneath magnetic repair-plates. I had a metal disk—bronze and red, the Security colors—insigniaed by Baxter and counterembossed with the President's special device, a small globe surmounted by clasping hands. It gave me authority to do anything. With such an identification disc, I could go to Times Square and start machine gunning the passers-by, and not one of New York's finest would raise a hand to stop me. 
And, snugly enholstered, I carried a collapser, the restricted weapon given only to Security Agents, so deadly was its molecule-disrupting beam. Baxter had spent a tremulous hour showing me how to use the weapon, and especially how to turn the beam off. I'd finally gotten the hang of it, though not before half his kidney-shaped desk had flashed into nothingness, along with a good-sized swath of carpeting and six inches of concrete floor. His parting injunction had been: "Be careful, Delvin, huh?" Yes, parting. I was on my own. After all, with a Security disc—the Amnesty, they called it—such as I possessed, and a collapser, I could go anywhere, do anything, commandeer anything I might need. All with no questions asked. Needless to say, I was feeling pretty chipper as I entered the hangar housing Phobos II. At the moment, I was the most influential human being in the known universe. The pilot, as per my videophoned request, was waiting there for me. I saw him as I stepped into the cool shadows of the building from the hot yellow sunlight outside. He was tall, much taller than I, but he seemed nervous as hell. At least he was pacing back and forth amid a litter of half-smoked cigarette butts beside the gleaming tailfins of the spaceship, and a fuming butt was puckered into place in his mouth. "Anders?" I said, approaching to within five feet of him before halting, to get the best psychological effect from my appearance. He turned, saw me, and hurriedly spat the butt out onto the cement floor. "Yes, sir!" he said loudly, throwing me a quivering salute. His eyes were a bit wild as they took me in. And well they might be. An Amnesty-bearer can suddenly decide a subject is not answering questions to his satisfaction and simply blast the annoying party to atoms. It makes for straight responses. Of course, I was dressing the part, in a way. 
I wore the Amnesty suspended by a thin golden chain from my neck, and for costume I wore a raven-black blouse and matching uniform trousers and boots. I must have looked quite sinister. I'm under six feet, but I'm angular and wiry. Thus, in ominous black, with an Amnesty on my breast and a collapser in my holster, I was a sight to strike even honest citizens into quick examinations of conscience. I felt a little silly, but the outfit was Baxter's idea. "I understand you were aboard the Phobos II when the incident occurred?" I said sternly, which was unusual for my wonted demeanor. "Yes, sir!" he replied swiftly, at stiff attention. "I don't really have any details," I said, and waited for him to take his cue. As an afterthought, to help him talk, I added, "At ease, by the way, Anders." "Thank you, sir," he said, not actually loosening much in his rigid position, but his face looking happier. "See, I was supposed to pilot the kids back here from Mars when their trip was done, and—" He gave a helpless shrug. "I dunno, sir. I got 'em all aboard, made sure they were secure in the takeoff racks, and then I set my coordinates for Earth and took off. Just a run-of-the-mill takeoff, sir." "And when did you notice they were missing?" I asked, looking at the metallic bulk of the ship and wondering what alien force could snatch fifteen fair-sized young boys through its impervious hull without leaving a trace. "Chow time, sir. That's when you expect to have the little—to have the kids in your hair, sir. Everyone wants his rations first—You know how kids are, sir. So I went to the galley and was about to open up the ration packs, when I noticed how damned quiet it was aboard. And especially funny that no one was in the galley waiting for me to start passing the stuff out." "So you searched," I said. Anders nodded sorrowfully. "Not a trace of 'em, sir. Just some of their junk left in their storage lockers." I raised my eyebrows. "Really? 
I'd be interested in seeing this junk, Anders." "Oh, yes, sir. Right this way, sir. Watch out for these rungs, they're slippery." I ascended the retractable metal rungs that jutted from a point between the tailfins to the open airlock, twenty feet over ground level, and followed Anders inside the ship. I trailed Anders through the ship, from the pilot's compartment—a bewildering mass of dials, switches, signal lights and wire—through the galley into the troop section. It was a cramped cubicle housing a number of nylon-webbed foam rubber bunks. The bunks were empty, but I looked them over anyhow. I carefully tugged back the canvas covering that fitted envelope-fashion over a foam rubber pad, and ran my finger over the surface of the pad. It came away just slightly gritty. "Uh-huh!" I said, smiling. Anders just stared at me. I turned to the storage lockers. "Let's see this junk they were suddenly deprived of." Anders, after a puzzled frown, obediently threw open the doors of the riveted tiers of metal boxes along the rear wall; the wall next to the firing chambers, which I had no particular desire to visit. I glanced inside at the articles therein, and noted with interest their similarity. "Now, then," I resumed, "the thrust of this rocket to get from Mars to Earth is calculated with regard to the mass on board, is that correct?" He nodded. "Good, that clears up an important point. I'd also like to know if this rocket has a dehumidifying system to keep the cast-off moisture from the passengers out of the air?" "Well, sure, sir!" said Anders. "Otherwise, we'd all be swimming in our own sweat after a ten-hour trip across space!" "Have you checked the storage tanks?" I asked. "Or is the cast-off perspiration simply jetted into space?" "No. It's saved, sir. It gets distilled and stored for washing and drinking. Otherwise, we'd all dehydrate, with no water to replace the water we lost." "Check the tanks," I said. 
Anders, shaking his head, moved into the pilot's section and looked at a dial there. "Full, sir. But that's because I didn't drink very much, and any sweating I did—which was a hell of a lot, in this case—was a source of new water for the tanks." "Uh-huh." I paused and considered. "I suppose the tubing for these tanks is all over the ship? In all the hollow bulkhead space, to take up the moisture fast?" Anders, hopelessly lost, could only nod wearily. "Would it hold—" I did some quick mental arithmetic—"let's say, about twenty-four extra cubic feet?" He stared, then frowned, and thought hard. "Yes, sir," he said, after a minute. "Even twice that, with no trouble, but—" He caught himself short. It didn't pay to be too curious about the aims of an Amnesty-bearer. "It's all right, Anders. You've been a tremendous help. Just one thing. When you left Mars, you took off from the night side, didn't you?" "Why, yes, I did, sir. But how did you—?" "No matter, Anders. That'll be all." "Yes, sir!" He saluted sharply and started off. I started back for Interplanetary Security, and my second—and I hoped, last—interview with Chief Baxter. I had a slight inkling why the Brain had chosen me; because, in the affair of the missing Space Scouts, my infallible talent for spotting the True within the Apparent had come through nicely. I had found a very interesting clinker. 4 "Strange," I remarked to Chief Baxter when I was seated once again in his office, opposite his newly replaced desk. "I hardly acted like myself out at that airfield. I was brusque, highhanded, austere, almost malevolent with the pilot. And I'm ordinarily on the shy side, as a matter of fact." "It's the Amnesty that does it," he said, gesturing toward the disc. It lay on his desk, now, along with the collapser. I felt, with the new information I'd garnered, that my work was done, and that the new data fed into the Brain would produce some other results, not involving me. I looked at the Amnesty, then nodded. 
"Kind of gets you, after awhile. To know that you are the most influential person in creation is to automatically act the part. A shame, in a way." "The hell it is!" Baxter snapped. "Good grief, man, why'd you think the Amnesty was created in the first place?" I sat up straight and scratched the back of my head. "Now you mention it, I really don't know. It seems a pretty dangerous thing to have about, the way people jump when they see it." "It is dangerous, of course, but it's vitally necessary. You're young, Jery Delvin, and even the finest history course available these days is slanted in favor of World Government. So you have no idea how tough things were before the Amnesty came along. Ever hear of red tape?" I shook my head. "No, I don't believe so. Unless it had something to do with the former communist menace? They called themselves the Reds, I believe...." He waved me silent. "No connection at all, son. No, red tape was, well, involvement. Forms to be signed, certain factors to be considered, protocol to be dealt with, government agencies to be checked with, classifications, bureaus, sub-bureaus, congressional committees. It was impossible, Jery, my boy, to get anything done whatsoever without consulting someone else. And the time lag and paperwork involved made accurate and swift action impossible, sometimes. What we needed, of course, was a person who could simply have all authority, in order to save the sometimes disastrous delays. So we came up with the Amnesty." "But the danger. If you should pick the wrong man—" Baxter smiled. "No chance of that, Jery. We didn't leave it up to any committee or bureau or any other faction to do the picking. Hell, that would have put us right back where we'd been before. No, we left it up to the Brain. We'd find ourselves in a tight situation, and the Brain, after being fed the data, would come up with either a solution, or a name." I stared at him. 
"Then, when I was here before, I was here solely to receive the Amnesty, is that it?" Baxter nodded. "The Brain just picks the men. Then we tell the men the situation, hand over the Amnesty, and pray." I had a sudden thought. "Say, what happens if two men are selected by the Brain? Who has authority over whom?" Baxter grimaced and shivered. "Don't even think such a thing! Even your mentioning such a contingency gives me a small migraine. It'd be unprecedented in the history of the Brain or the Amnesty." He grinned, suddenly. "Besides, it can't happen. There's only one of these—" he tapped the medallion gently "—in existence, Jery. So we couldn't have such a situation!" I sank back into the contour chair, and glanced at my watch. Much too late to go back to work. I'd done a lot in one day, I reasoned. Well, the thing was out of my hands. Baxter had the information I'd come up with, and it had been coded and fed to the Brain. As soon as the solution came through, I could be on my way back to the world of hard and soft sell. "You understand," said Baxter suddenly, "that you're to say nothing whatever about the disappearance of the Space Scouts until this office makes the news public? You know what would happen if this thing should leak!" The intercom on Baxter's desk suddenly buzzed, and a bright red light flashed on. "Ah!" he said, thumbing a knob. "Here we go, at last!" As he exerted pressure on the knob, a thin slit in the side of the intercom began feeding out a long sheet of paper; the new answer from the Brain. It reached a certain length, then was automatically sheared off within the intercom, and the sheet fell gently to the desktop. Baxter picked it up and swiftly scanned its surface. A look of dismay overrode his erstwhile genial features. I had a horrible suspicion. "Not again?" I said softly. Baxter swore under his breath. Then he reached across the desktop and tossed me the Amnesty.
|
C. He's bolder, playing into the sense of power it gives him
|
Why did the bank robbers end up crashing?
A. The cops used incendiary bullets to melt the tires.
B. The Scorpion somehow melted their tires.
C. They didn't realize the car they stole was damaged.
D. It was so hot outside that their tires melted and blew out.
|
CALL HIM NEMESIS By DONALD E. WESTLAKE Criminals, beware; the Scorpion is on your trail! Hoodlums fear his fury—and, for that matter, so do the cops! [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, September 1961. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] The man with the handkerchief mask said, "All right, everybody, keep tight. This is a holdup." There were twelve people in the bank. There was Mr. Featherhall at his desk, refusing to okay a personal check from a perfect stranger. There was the perfect stranger, an itinerant garage mechanic named Rodney (Rod) Strom, like the check said. There were Miss English and Miss Philicoff, the girls in the gilded teller cages. There was Mister Anderson, the guard, dozing by the door in his brown uniform. There was Mrs. Elizabeth Clayhorn, depositing her husband's pay check in their joint checking account, and with her was her ten-year-old son Edward (Eddie) Clayhorn, Junior. There was Charlie Casale, getting ten dollars dimes, six dollars nickels and four dollars pennies for his father in the grocery store down the street. There was Mrs. Dolly Daniels, withdrawing money from her savings account again. And there were three bank robbers. The three bank robbers looked like triplets. From the ground up, they all wore scuffy black shoes, baggy-kneed and unpressed khaki trousers, brown cracked-leather jackets over flannel shirts, white handkerchiefs over the lower half of their faces and gray-and-white check caps pulled low over their eyes. The eyes themselves looked dangerous. The man who had spoken withdrew a small but mean-looking thirty-two calibre pistol from his jacket pocket. He waved it menacingly. One of the others took the pistol away from Mister Anderson, the guard, and said to him in a low voice, "Think about retirement, my friend." 
The third one, who carried a black satchel like a doctor's bag, walked quickly around behind the teller's counter and started filling it with money. It was just like the movies. The man who had first spoken herded the tellers, Mr. Featherhall and the customers all over against the back wall, while the second man stayed next to Mr. Anderson and the door. The third man stuffed money into the black satchel. The man by the door said, "Hurry up." The man with the satchel said, "One more drawer." The man with the gun turned to say to the man at the door, "Keep your shirt on." That was all Miss English needed. She kicked off her shoes and ran pelting in her stocking feet for the door. The man by the door spread his arms out and shouted, "Hey!" The man with the gun swung violently back, cursing, and fired the gun. But he'd been moving too fast, and so had Miss English, and all he hit was the brass plate on Mr. Featherhall's desk. The man by the door caught Miss English in a bear hug. She promptly did her best to scratch his eyes out. Meanwhile, Mr. Anderson went scooting out the front door and running down the street toward the police station in the next block, shouting, "Help! Help! Robbery!" The man with the gun cursed some more. The man with the satchel came running around from behind the counter, and the man by the door tried to keep Miss English from scratching his eyes out. Then the man with the gun hit Miss English on the head. She fell unconscious to the floor, and all three of them ran out of the bank to the car out front, in which sat a very nervous-looking fourth man, gunning the engine. Everyone except Miss English ran out after the bandits, to watch. Things got very fast and very confused then. Two police cars came driving down the block and a half from the precinct house to the bank, and the car with the four robbers in it lurched away from the curb and drove straight down the street toward the police station. 
The police cars and the getaway car passed one another, with everybody shooting like the ships in pirate movies. There was so much confusion that it looked as though the bank robbers were going to get away after all. The police cars were aiming the wrong way and, as they'd come down with sirens wailing, there was a clear path behind them. Then, after the getaway car had gone more than two blocks, it suddenly started jouncing around. It smacked into a parked car and stopped. And all the police went running down there to clap handcuffs on the robbers when they crawled dazedly out of their car. "Hey," said Eddie Clayhorn, ten years old. "Hey, that was something, huh, Mom?" "Come along home," said his mother, grabbing his hand. "We don't want to be involved." "It was the nuttiest thing," said Detective-Sergeant Stevenson. "An operation planned that well, you'd think they'd pay attention to their getaway car, you know what I mean?" Detective-Sergeant Pauling shrugged. "They always slip up," he said. "Sooner or later, on some minor detail, they always slip up." "Yes, but their tires." "Well," said Pauling, "it was a stolen car. I suppose they just grabbed whatever was handiest." "What I can't figure out," said Stevenson, "is exactly what made those tires do that. I mean, it was a hot day and all, but it wasn't that hot. And they weren't going that fast. I don't think you could go fast enough to melt your tires down." Pauling shrugged again. "We got them. That's the important thing." "Still and all, it's nutty. They're free and clear, barrelling out Rockaway toward the Belt, and all at once their tires melt, the tubes blow out and there they are." Stevenson shook his head. "I can't figure it." "Don't look a gift horse in the mouth," suggested Pauling. "They picked the wrong car to steal." "And that doesn't make sense, either," said Stevenson. "Why steal a car that could be identified as easily as that one?" "Why? What was it, a foreign make?" 
"No, it was a Chevvy, two-tone, three years old, looked just like half the cars on the streets. Except that in the trunk lid the owner had burned in 'The Scorpion' in big black letters you could see half a block away." "Maybe they didn't notice it when they stole the car," said Pauling. "For a well-planned operation like this one," said Stevenson, "they made a couple of really idiotic boners. It doesn't make any sense." "What do they have to say about it?" Pauling demanded. "Nothing, what do you expect? They'll make no statement at all." The squad-room door opened, and a uniformed patrolman stuck his head in. "The owner of that Chevvy's here," he said. "Right," said Stevenson. He followed the patrolman down the hall to the front desk. The owner of the Chevvy was an angry-looking man of middle age, tall and paunchy. "John Hastings," he said. "They say you have my car here." "I believe so, yes," said Stevenson. "I'm afraid it's in pretty bad shape." "So I was told over the phone," said Hastings grimly. "I've contacted my insurance company." "Good. The car's in the police garage, around the corner. If you'd come with me?" On the way around, Stevenson said, "I believe you reported the car stolen almost immediately after it happened." "That's right," said Hastings. "I stepped into a bar on my route. I'm a wine and liquor salesman. When I came out five minutes later, my car was gone." "You left the keys in it?" "Well, why not?" demanded Hastings belligerently. "If I'm making just a quick stop—I never spend more than five minutes with any one customer—I always leave the keys in the car. Why not?" "The car was stolen," Stevenson reminded him. Hastings grumbled and glared. "It's always been perfectly safe up till now." "Yes, sir. In here." Hastings took one look at his car and hit the ceiling. "It's ruined!" he cried. "What did you do to the tires?" "Not a thing, sir. That happened to them in the holdup." Hastings leaned down over one of the front tires. "Look at that! 
There's melted rubber all over the rims. Those rims are ruined! What did you use, incendiary bullets?" Stevenson shook his head. "No, sir. When that happened they were two blocks away from the nearest policeman." "Hmph." Hastings moved on around the car, stopping short to exclaim, "What in the name of God is that? You didn't tell me a bunch of kids had stolen the car." "It wasn't a bunch of kids," Stevenson told him. "It was four professional criminals, I thought you knew that. They were using it in a bank holdup." "Then why did they do that ?" Stevenson followed Hastings' pointing finger, and saw again the crudely-lettered words, "The Scorpion" burned black into the paint of the trunk lid. "I really don't know," he said. "It wasn't there before the car was stolen?" "Of course not!" Stevenson frowned. "Now, why in the world did they do that?" "I suggest," said Hastings with heavy sarcasm, "you ask them that." Stevenson shook his head. "It wouldn't do any good. They aren't talking about anything. I don't suppose they'll ever tell us." He looked at the trunk lid again. "It's the nuttiest thing," he said thoughtfully.... That was on Wednesday. The Friday afternoon mail delivery to the Daily News brought a crank letter. It was in the crank letter's most obvious form; that is, the address had been clipped, a letter or a word at a time, from a newspaper and glued to the envelope. There was no return address. The letter itself was in the same format. It was brief and to the point: Dear Mr. Editor: The Scorpion has struck. The bank robbers were captured. The Scorpion fights crime. Crooks and robbers are not safe from the avenging Scorpion. WARN YOUR READERS! Sincerely yours, THE SCORPION The warning was duly noted, and the letter filed in the wastebasket. It didn't rate a line in the paper. II The bank robbery occurred in late June. Early in August, a Brooklyn man went berserk. It happened in Canarsie, a section in southeast Brooklyn near Jamaica Bay. 
This particular area of Canarsie was a residential neighborhood, composed of one and two family houses. The man who went berserk was a Motor Vehicle Bureau clerk named Jerome Higgins. Two days before, he had flunked a Civil Service examination for the third time. He reported himself sick and spent the two days at home, brooding, a bottle of blended whiskey at all times in his hand. As the police reconstructed it later, Mrs. Higgins had attempted to awaken him on the third morning at seven-thirty, suggesting that he really ought to stop being so foolish, and go back to work. He then allegedly poked her in the eye, and locked her out of the bedroom. Mrs. Higgins then apparently called her sister-in-law, a Mrs. Thelma Stodbetter, who was Mr. Higgins' sister. Mrs. Stodbetter arrived at the house at nine o'clock, and spent some time tapping at the still-locked bedroom door, apparently requesting Mr. Higgins to unlock the door and "stop acting like a child." Neighbors reported to the police that they heard Mr. Higgins shout a number of times, "Go away! Can't you let a man sleep?" At about ten-fifteen, neighbors heard shots from the Higgins residence, a two-story one-family pink stucco affair in the middle of a block of similar homes. Mr. Higgins, it was learned later, had suddenly erupted from his bedroom, brandishing a .30-.30 hunting rifle and, being annoyed at the shrieks of his wife and sister, had fired seven shells at them, killing his wife on the spot and wounding his sister in the hand and shoulder. Mrs. Stodbetter, wounded and scared out of her wits, raced screaming out the front door of the house, crying for the police and shouting, "Murder! Murder!" At this point, neighbors called the police. One neighbor additionally phoned three newspapers and two television stations, thereby earning forty dollars in "news-tips" rewards. 
By chance, a mobile television unit was at that moment on the Belt Parkway, returning from having seen off a prime minister at Idlewild Airport. This unit was at once diverted to Canarsie, where it took up a position across the street from the scene of carnage and went to work with a Zoomar lens. In the meantime, Mister Higgins had barricaded himself in his house, firing at anything that moved. The two cameramen in the mobile unit worked their hearts out. One concentrated on the movements of the police and firemen and neighbors and ambulance attendants, while the other used the Zoomar lens to search for Mr. Higgins. He found him occasionally, offering the at-home audience brief glimpses of a stocky balding man in brown trousers and undershirt, stalking from window to window on the second floor of the house. The show lasted for nearly an hour. There were policemen everywhere, and firemen everywhere, and neighbors milling around down at the corner, where the police had roped the block off, and occasionally Mr. Higgins would stick his rifle out a window and shoot at somebody. The police used loudspeakers to tell Higgins he might as well give up, they had the place surrounded and could eventually starve him out anyway. Higgins used his own good lungs to shout obscenities back and challenge anyone present to hand-to-hand combat. The police fired tear gas shells at the house, but it was a windy day and all the windows in the Higgins house were either open or broken. Higgins was able to throw all the shells back out of the house again. Then it ended, suddenly and dramatically. Higgins had showed himself to the Zoomar lens again, for the purpose of shooting either the camera or its operator. All at once he yelped and threw the rifle away. The rifle bounced onto the porch roof, slithered down to the edge, hung for a second against the drain, and finally fell barrel first onto the lawn. 
Meanwhile, Higgins was running through the house, shouting like a wounded bull. He thundered down the stairs and out, hollering, to fall into the arms of the waiting police. They had trouble holding him. At first they thought he was actually trying to get away, but then one of them heard what it was he was shouting: "My hands! My hands!" They looked at his hands. The palms and the palm-side of the fingers were red and blistering, from what looked like severe burns. There was another burn on his right cheek and another one on his right shoulder. Higgins, thoroughly chastened and bewildered, was led away for burn ointment and jail. The television crew went on back to Manhattan. The neighbors went home and telephoned their friends. On-duty policemen had been called in from practically all of the precincts in Brooklyn. Among them was Detective-Sergeant William Stevenson. Stevenson frowned thoughtfully at Higgins as that unhappy individual was led away, and then strolled over to look at the rifle. He touched the stock, and it was somewhat warm but that was all. He picked it up and turned it around. There, on the other side of the stock, burned into the wood, were the crudely-shaped letters, "The Scorpion." You don't get to be Precinct Captain on nothing but political connections. Those help, of course, but you need more than that. As Captain Hanks was fond of pointing out, you needed as well to be both more imaginative than most—"You gotta be able to second-guess the smart boys"—and to be a complete realist—"You gotta have both feet on the ground." If these were somewhat contradictory qualities, it was best not to mention the fact to Captain Hanks. The realist side of the captain's nature was currently at the fore. "Just what are you trying to say, Stevenson?" he demanded. "I'm not sure," admitted Stevenson. "But we've got these two things. First, there's the getaway car from that bank job. 
The wheels melt for no reason at all, and somebody burns 'The Scorpion' onto the trunk. Then, yesterday, this guy Higgins out in Canarsie. He says the rifle all of a sudden got too hot to hold, and he's got the burn marks to prove it. And there on the rifle stock it is again. 'The Scorpion'." "He says he put that on there himself," said the captain. Stevenson shook his head. "His lawyer says he put it on there. Higgins says he doesn't remember doing it. That's half the lawyer's case. He's trying to build up an insanity defense." "He put it on there himself, Stevenson," said the captain with weary patience. "What are you trying to prove?" "I don't know. All I know is it's the nuttiest thing I ever saw. And what about the getaway car? What about those tires melting?" "They were defective," said Hanks promptly. "All four of them at once? And what about the thing written on the trunk?" "How do I know?" demanded the captain. "Kids put it on before the car was stolen, maybe. Or maybe the hoods did it themselves, who knows? What do they say?" "They say they didn't do it," said Stevenson. "And they say they never saw it before the robbery and they would have noticed it if it'd been there." The captain shook his head. "I don't get it," he admitted. "What are you trying to prove?" "I guess," said Stevenson slowly, thinking it out as he went along, "I guess I'm trying to prove that somebody melted those tires, and made that rifle too hot, and left his signature behind." "What? You mean like in the comic books? Come on, Stevenson! What are you trying to hand me?" "All I know," insisted Stevenson, "is what I see." "And all I know," the captain told him, "is Higgins put that name on his rifle himself. He says so." "And what made it so hot?" "Hell, man, he'd been firing that thing at people for an hour! What do you think made it hot?" "All of a sudden?" "He noticed it all of a sudden, when it started to burn him." "How come the same name showed up each time, then?" 
Stevenson asked desperately. "How should I know? And why not, anyway? You know as well as I do these things happen. A bunch of teen-agers burgle a liquor store and they write 'The Golden Avengers' on the plate glass in lipstick. It happens all the time. Why not 'The Scorpion'? It couldn't occur to two people?" "But there's no explanation—" started Stevenson. "What do you mean, there's no explanation? I just gave you the explanation. Look, Stevenson, I'm a busy man. You got a nutty idea—like Wilcox a few years ago, remember him? Got the idea there was a fiend around loose, stuffing all those kids into abandoned refrigerators to starve. He went around trying to prove it, and getting all upset, and pretty soon they had to put him away in the nut hatch. Remember?" "I remember," said Stevenson. "Forget this silly stuff, Stevenson," the captain advised him. "Yes, sir," said Stevenson.... The day after Jerome Higgins went berserk, the afternoon mail brought a crank letter to the Daily News: Dear Mr. Editor, You did not warn your readers. The man who shot all those people could not escape the Scorpion. The Scorpion fights crime. No criminal is safe from the Scorpion. WARN YOUR READERS. Sincerely yours, THE SCORPION Unfortunately, this letter was not read by the same individual who had seen the first one, two months before. At any rate, it was filed in the same place, and forgotten. III Hallowe'en is a good time for a rumble. There's too many kids around for the cops to keep track of all of them, and if you're picked up carrying a knife or a length of tire chain or something, why, you're on your way to a Hallowe'en party and you're in costume. You're going as a JD. The problem was this schoolyard. It was a block wide, with entrances on two streets. The street on the north was Challenger territory, and the street on the south was Scarlet Raider territory, and both sides claimed the schoolyard. 
There had been a few skirmishes, a few guys from both gangs had been jumped and knocked around a little, but that had been all. Finally, the War Lords from the two gangs had met, and determined that the matter could only be settled in a war. The time was chosen: Hallowe'en. The place was chosen: the schoolyard. The weapons were chosen: pocket knives and tire chains okay, but no pistols or zip-guns. The time was fixed: eleven P.M. And the winner would have undisputed territorial rights to the schoolyard, both entrances. The night of the rumble, the gangs assembled in their separate clubrooms for last-minute instructions. Debs were sent out to play chicken at the intersections nearest the schoolyard, both to warn of the approach of cops and to keep out any non-combatant kids who might come wandering through. Judy Canzanetti was a Deb with the Scarlet Raiders. She was fifteen years old, short and black-haired and pretty in a movie-magazine, gum-chewing sort of way. She was proud of being in the Auxiliary of the Scarlet Raiders, and proud also of the job that had been assigned to her. She was to stand chicken on the southwest corner of the street. Judy took up her position at five minutes to eleven. The streets were dark and quiet. Few people cared to walk this neighborhood after dark, particularly on Hallowe'en. Judy leaned her back against the telephone pole on the corner, stuck her hands in the pockets of her Scarlet Raider jacket and waited. At eleven o'clock, she heard indistinct noises begin behind her. The rumble had started. At five after eleven, a bunch of little kids came wandering down the street. They were all about ten or eleven years old, and most of them carried trick-or-treat shopping bags. Some of them had Hallowe'en masks on. They started to make the turn toward the schoolyard. Judy said, "Hey, you kids. Take off." One of them, wearing a red mask, turned to look at her. "Who, us?" "Yes, you! Stay out of that street. Go on down that way." 
"The subway's this way," objected the kid in the red mask. "Who cares? You go around the other way." "Listen, lady," said the kid in the red mask, aggrieved, "we got a long way to go to get home." "Yeah," said another kid, in a black mask, "and we're late as it is." "I couldn't care less," Judy told them callously. "You can't go down that street." "Why not?" demanded yet another kid. This one was in the most complete and elaborate costume of them all, black leotards and a yellow shirt and a flowing black cape. He wore a black and gold mask and had a black knit cap jammed down tight onto his head. "Why can't we go down there?" this apparition demanded. "Because I said so," Judy told him. "Now, you kids get away from here. Take off." "Hey!" cried the kid in the black-and-yellow costume. "Hey, they're fighting down there!" "It's a rumble," said Judy proudly. "You twerps don't want to be involved." "Hey!" cried the kid in the black-and-yellow costume again. And he went running around Judy and dashing off down the street. "Hey, Eddie!" shouted one of the other kids. "Eddie, come back!" Judy wasn't sure what to do next. If she abandoned her post to chase the one kid who'd gotten through, then maybe all the rest of them would come running along after her. She didn't know what to do. A sudden siren and a distant flashing red light solved her problems. "Cheez," said one of the kids. "The cops!" "Fuzz!" screamed Judy. She turned and raced down the block toward the schoolyard, shouting, "Fuzz! Fuzz! Clear out, it's the fuzz!" But then she stopped, wide-eyed, when she saw what was going on in the schoolyard. The guys from both gangs were dancing. They were jumping around, waving their arms, throwing their weapons away. Then they all started pulling off their gang jackets and throwing them away, whooping and hollering. They were making such a racket themselves that they never heard Judy's warning. They didn't even hear the police sirens. 
And all at once both schoolyard entrances were full of cops, a cop had tight hold of Judy and the rumble was over. Judy was so baffled and terrified that everything was just one great big blur. But in the middle of it all, she did see the little kid in the yellow-and-black costume go scooting away down the street. And she had the craziest idea that it was all his fault. Captain Hanks was still in his realistic cycle this morning, and he was impatient as well. "All right, Stevenson," he said. "Make it fast, I've got a lot to do this morning. And I hope it isn't this comic-book thing of yours again." "I'm afraid it is, Captain," said Stevenson. "Did you see the morning paper?" "So what?" "Did you see that thing about the gang fight up in Manhattan?" Captain Hanks sighed. "Stevenson," he said wearily, "are you going to try to connect every single time the word 'scorpion' comes up? What's the problem with this one? These kid gangs have names, so what?" "Neither one of them was called 'The Scorpions,'" Stevenson told him. "One of them was the Scarlet Raiders and the other gang was the Challengers." "So they changed their name," said Hanks. "Both gangs? Simultaneously? To the same name?" "Why not? Maybe that's what they were fighting over." "It was a territorial war," Stevenson reminded him. "They've admitted that much. It says so in the paper. And it also says they all deny ever seeing that word on their jackets until after the fight." "A bunch of juvenile delinquents," said Hanks in disgust. "You take their word?" "Captain, did you read the article in the paper?" "I glanced through it." "All right. Here's what they say happened: They say they started fighting at eleven o'clock. And they just got going when all at once all the metal they were carrying—knives and tire chains and coins and belt buckles and everything else—got freezing cold, too cold to touch. And then their leather jackets got freezing cold, so cold they had to pull them off and throw them away. 
And when the jackets were later collected, across the name of the gang on the back of each one had been branded 'The Scorpion.'" "Now, let me tell you something," said Hanks severely. "They heard the police sirens, and they threw all their weapons away. Then they threw their jackets away, to try to make believe they hadn't been part of the gang that had been fighting. But they were caught before they could get out of the schoolyard. If the squad cars had showed up a minute later, the schoolyard wouldn't have had anything in it but weapons and jackets, and the kids would have been all over the neighborhood, nice as you please, minding their own business and not bothering anybody. That's what happened. And all this talk about freezing cold and branding names into jackets is just some smart-alec punk's idea of a way to razz the police. Now, you just go back to worrying about what's happening in this precinct and forget about kid gangs up in Manhattan and comic book things like the Scorpion, or you're going to wind up like Wilcox, with that refrigerator business. Now, I don't want to hear any more about this nonsense, Stevenson." "Yes, sir," said Stevenson.
|
B. The Scorpion somehow melted their tires.
|
Had the portrait of H. H. Hartshorne not been knocked off the wall, what would have likely happened in the story?
A. Milly would have never been born.
B. Mr. Hawkins would have fired everyone who attended the party.
C. The partygoers would have remained sober that night.
D. The package would have never been delivered.
|
RATTLE OK By HARRY WARNER, JR. Illustrated by FINLAY [Transcriber's Note: This etext was produced from Galaxy Science Fiction December 1956. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] What better way to use a time machine than to handle department store complaints? But pleasing a customer should have its limits! The Christmas party at the Boston branch of Hartshorne-Logan was threatening to become more legendary than usual this Christmas. The farm machinery manager had already collapsed. When he slid under the table containing the drinks, Miss Pringle, who sold millinery, had screamed: "He'll drown!" One out of every three dirty stories started by party attendees had remained unfinished, because each had reminded someone else of another story. The recently developed liquors which affected the bloodstream three times faster had driven away twinges of conscience about untrimmed trees and midnight church services. The star salesman for mankies and the gentleman who was in charge of the janitors were putting on a display of Burmese foot-wrestling in one corner of the general office. The janitor foreman weighed fifty pounds less than the Burma gentleman, who was the salesman's customary opponent. So the climax of one tactic did not simply overturn the foreman. He glided through the air, crashing with a very loud thump against the wall. He wasn't hurt. But the impact knocked the hallowed portrait of H. H. Hartshorne, co-founder, from its nail. It tinkled imposingly as its glass splintered against the floor. The noise caused a temporary lull in the gaiety. Several employes even felt a passing suspicion that things might be getting out of hand. "It's all in the spirit of good, clean fun!" cried Mr. Hawkins, the assistant general manager. Since he was the highest executive present, worries vanished. Everyone felt fine. 
There was a scurry to shove the broken glass out of sight and to turn more attention to another type of glasses. Mr. Hawkins himself, acting by reflex, attempted to return the portrait to its place until new glass could be obtained. But the fall had sprung the frame at one corner and it wouldn't hang straight. "We'd better put old H. H. away for safekeeping until after the holiday," he told a small, blonde salesclerk who was beneath his attention on any working day. With the proper mixture of respect and bonhommie, he lifted the heavy picture out of its frame. A yellowed envelope slipped to the floor as the picture came free. Hawkins rolled the picture like a scroll and put it into a desk drawer, for later attention. Then he looked around for a drink that would make him feel even better. A sorting clerk in the mail order department wasn't used to liquor. She picked up the envelope and looked around vaguely for the mail-opening machine. "Hell, Milly, you aren't working!" someone shouted at her. "Have another!" Milly snapped out of it. She giggled, suppressed a ladylike belch and returned to reality. Looking at the envelope, she said: "Oh, I see. They must have stuck it in to tighten the frame. Gee, it's old." Mr. Hawkins had refreshed himself. He decided that he liked Milly's voice. To hear more of it, he said to her: "I'll bet that's been in there ever since the picture was framed. There's a company legend that that picture was put up the day this branch opened, eighty years ago." "I didn't know the company ever used buff envelopes like this." Milly turned it over in her hands. The ancient glue crackled as she did so. The flap popped open and an old-fashioned order blank fell out. Mr. Hawkins' eyes widened. He bent, reached painfully over his potbelly and picked up the order form. "This thing has never been processed!" Raising his voice, he shouted jovially, "Hey, people! You're all fired! Here's an order that Hartshorne-Logan never filled! 
We can't have such carelessness. This poor woman has waited eighty years for her merchandise!" Milly was reading aloud the scrawled words on the order form: "Best electric doorbell. Junior detective kit. Disposable sacks for vacuum cleaner. Dress for three-year-old girl." She turned to the assistant general manager, struck with an idea for the first time in her young life. "Let's fill this order right now!" "The poor woman must be dead by now," he objected, secretly angry that he hadn't thought of such a fine party stunt himself. Then he brightened. "Unless—" he said it loud enough for the employes to scent a great proposal and the room grew quiet—"unless we broke the rules just once and used the time warp on a big mission!" There was a silence. Finally, from an anonymous voice in one corner: "Would the warp work over eighty years? We were always told that it must be used only for complaints within three days." "Then let's find out!" Mr. Hawkins downed the rest of his drink and pulled a batch of keys from his pocket. "Someone scoot down to the warehouse. Tell the watchman that it's on my authority. Hunt up the stuff that's on the order. Get the best of everything. Ignore the catalogue numbers—they've changed a hundred times in all these years." Milly was still deciphering the form. Now she let out a little squeal of excitement. "Look, Mr. Hawkins! The name on this order—it's my great-grandmother! Isn't that wonderful? I was just a little girl when she died. I can barely remember her as a real old woman. But I remember that my grandmother never bought anything from Hartshorne-Logan because of some trouble her mother had once with the firm. My mother didn't want me to come to work here because of that." Mr. Hawkins put his arm around Milly in a way that he intended to look fatherly. It didn't. "Well, now. Since it's your relative, let's thrill the old girl. We wouldn't have vacuum sacks any more. So we'll substitute a manky!" 
Ann Hartley was returning from mailing the letter when she found the large parcel on her doorstep. She put her hands on her hips and stared pugnaciously at the bundle. "The minute I write a letter to complain about you, you turn up!" she told the parcel. She nudged her toe peevishly against the brown paper wrappings that were tied with a half-transparent twine she had never seen before. The label was addressed in a wandering scrawl, a sharp contrast to the impersonal typing on the customary Hartshorne-Logan bundles. But the familiar RATTLE OK sticker was pasted onto the box, indicating to the delivery man that the contents would make a rattling sound and therefore hadn't been broken in shipment. Ann sighed and picked up her bundle. With a last look at the lovely spring afternoon and the quiet suburban landscape, she went into the house. Two-year-old Sally heard the box rattling. She waddled up on chubby legs and grabbed her mother's skirt. "Want!" she said decisively. "Your dress ought to be here," Ann said. She found scissors in her sewing box, tossed a cushion onto the floor, sat on it, and began to open the parcel. "Now I'll have to write another letter to explain that they should throw away my letter of complaint," she told her daughter. "And by the time they get my second letter, they'll have answered my first letter. Then they'll write again." Out of consideration for Sally, she omitted the expletives that she wanted to add. The translucent cord was too tough for the scissors. Ann was about to hunt for a razor blade when Sally clutched at an intersection of the cord and yanked. The twine sprang away from the carton as if it were alive. The paper wrappings flapped open. "There!" Sally said. Ann repressed an irrational urge to slap her daughter. Instead, she tossed the wrappings aside and removed the lid from the carton. A slightly crushed thin cardboard box lay on top. Ann pulled out the dress and shook it into a freely hanging position. Then she groaned. 
It was green and she had ordered blue. It didn't remotely resemble the dress she had admired from the Hartshorne-Logan catalogue illustration. Moreover, the shoulders were lumpier than any small girl's dress should be. But Sally was delighted. "Mine!" she shrilled, grabbing for the dress. "It's probably the wrong size, too," Ann said, pulling off Sally's dress to try it on. "Let's find as many things to complain about as we can." The dress fitted precisely, except for the absurd shoulder bumps. Sally was radiant for a moment. Then her small face sobered and she started to look vacantly at the distant wall. "We'll have to send it back," Ann said, "and get the one we ordered." She tried to take it off, but the child squawked violently. Ann grabbed her daughter's arms, held them above her head and pulled at the dress. It seemed to be stuck somewhere. When Ann released the child's arms to loosen the dress, Sally squirmed away. She took one step forward, then began to float three inches above the ground. She landed just before she collided with the far wall. Sally looked scared until she saw her mother's face. Then she squealed in delight. Ann's legs were rubber. She was shaking her head and wobbling uncertainly toward her daughter when the door opened behind her. "It's me," her husband said. "Slow day at the office, so I came home early." "Les! I'm going crazy or something. Sally just—" Sally crouched to jump at her father. Before she could leap, he grabbed her up bodily and hugged her. Then he saw the box. "Your order's here? Good. What's this thing?" He was looking at a small box he had pulled from the carton. Its lid contained a single word: MANKY. The box rattled when he shook it. Les pulled off the lid and found inside a circular, shiny metal object. A triangular trio of jacks stuck out from one end. "Is this the doorbell? I've never seen a plug like this. And there's no wire." "I don't know," Ann said. "Les, listen. 
A minute ago, Sally—" He peered into the box for an instruction sheet, uselessly. "They must have made a mistake. It looks like some kind of farm equipment." He tossed the manky onto the hassock and delved into the carton again. Sally was still in his arms. "That's the doorbell, I think," he said, looking at the next object. It had a lovely, tubular shape, a half-dozen connecting rods and a plug for a wall socket. "That's funny," Ann mused, her mind distracted from Sally for a moment. "It looks terribly expensive. Maybe they sent door chimes instead of the doorbell." The bottom of the carton contained the detective outfit that they had ordered for their son. Ann glanced at its glaringly lithographed cover and said: "Les, about Sally. Put her down a minute and watch what she does." Les stared at his wife and put the child onto the rug. Sally began to walk, then rose and again floated, this time toward the hassock on which the manky lay. His jaw dropped. "My God! Ann, what—" Ann was staring, too, but not at her daughter. "Les! The hassock! It used to be brown!" The hassock was a livid shade of green. A neon, demanding, screaming green that clashed horribly with the soft browns and reds in which Ann had furnished the room. "That round thing must be leaking," Les said. "But did you see Sally when she—" Ann's frazzled nerves carried a frantic order to her muscles. She jumped up, strode to the hassock and picked up the manky with two fingers. She tossed it to Les. Immediately, she regretted her action. "Drop it!" she yelled. "Maybe it'll turn you green, too!" Les kicked the hassock into the hall closet, tossed the manky in after it and shut the door firmly. As the door closed, he saw the entire interior of the dark closet brighten into a wet-lettuce green. When he turned back to Ann, she was staring at her left hand. The wedding band that Les had put there a dozen years ago was a brilliant green, shedding its soft glow over the finger up to the first knuckle. 
Ann felt the scream building up inside her. She opened her mouth to let it out, then put her hand in front of her mouth to keep it in, finally jerked the hand away to prevent the glowing ring from turning her front teeth green. She collapsed into Les's arms, babbling incomprehensibly. He said: "It's all right. There must be balloons or something in the shoulders of that dress. I'll tie a paperweight to Sally's dress and that'll hold her down until we undress her. Don't worry. And that green dye or whatever it is will wash off." Ann immediately felt better. She put her hands behind her back, pulled off her ring and slipped it into her apron pocket. Les was sentimental about her removing it. "I'll get dinner," she said, trying to keep her voice on an even keel. "Maybe you'd better start a letter to Hartshorne-Logan. Let's go into the kitchen, Sally." Ann strode resolutely toward the rear of the house. She kept her eyes determinedly off the tinge of green that was showing through the apron pocket and didn't dare look back at her daughter's unsettling means of propulsion. A half-hour later, when the meal was almost ready, two things happened: Bob came home from school through the back door and a strange voice said from the front of the house, "Don't answer the front door." Ann stared at her son. He stared back at her, the detective outfit under his arm. She went into the front room. Her husband was standing with fists on hips, looking at the front door, chuckling. "Neatest trick I've seen in a long time. That voice you heard was the new doorbell. I put it up while you were in the kitchen. Did you hear what happened when old lady Burnett out there pushed the button?" "Oh. Something like those name cards with something funny printed on them, like 'Another hour shot.' Well, if there's a little tape in there repeating that message, you'd better shut that part off. It might get boring after a while. And it might insult someone." Ann went to the door and turned the knob. 
The door didn't open. The figure of Mrs. Burnett, half-visible through the heavy curtain, shifted impatiently on the porch. Les yanked at the doorknob. It didn't yield for him, either. He looked up at the doorbell, which he had installed just above the upper part of the door frame. "Queer," he said. "That isn't in contact with the door itself. I don't see how it can keep the door from opening." Ann put her mouth close to the glass, shouting: "Won't you come to the back door, Mrs. Burnett? This one is stuck." "I just wanted to borrow some sugar," the woman cried from the porch. "I realize that I'm a terrible bother." But she walked down the front steps and disappeared around the side of the house. "Don't open the back door." The well-modulated voice from the small doorbell box threatened to penetrate every corner of the house. Ann looked doubtfully at her husband's lips. They weren't moving. "If this is ventriloquism—" she began icily. "I'll have to order another doorbell just like this one, for the office," Les said. "But you'd better let the old girl in. No use letting her get peeved." The back door was already open, because it was a warm day. The screen door had no latch, held closed by a simple spring. Ann pushed it open when Mrs. Burnett waddled up the three back steps, and smiled at her neighbor. "I'm so sorry you had to walk around the house. It's been a rather hectic day in an awful lot of ways." Something seemed to impede Mrs. Burnett as she came to the threshold. She frowned and shoved her portly frame against something invisible. It apparently yielded abruptly, because she staggered forward into the kitchen, nearly falling. She stared grimly at Ann and looked suspiciously behind her. "The children have some new toys," Ann improvised hastily. "Sally is so excited over a new dress that she's positively feverish. Let's see now—it was sugar that you want, wasn't it?" "I already have it," Bob said, handing a filled cup to his mother. 
The boy turned back to the detective set which he had spread over the kitchen table. "Excitement isn't good for me," Mrs. Burnett said testily. "I've had a lot of troubles in my life. I like peace and quiet." "Your husband is better?" "Worse. I'm sure I don't know why everything happens to me." Mrs. Burnett edged toward the hall, trying to peer into the front of the house. Ann stood squarely in front of the door leading to the hall. Defeated, Mrs. Burnett left. A muffled volley of handclapping, mixed with a few faint cheers, came from the doorbell-box when she crossed the threshold. Ann went into the hall to order Les to disconnect the doorbell. She nearly collided with him, coming in the other direction. "Where did this come from?" Les held a small object in the palm of his hand, keeping it away from his body. A few drops of something unpleasant were dripping from his fingers. The object looked remarkably like a human eyeball. It was human-size, complete with pupil, iris and rather bloodshot veins. "Hey, that's mine," Bob said. "You know, this is a funny detective kit. That was in it. But there aren't instructions on how it works." "Well, put it away," Ann told Bob sharply. "It's slimy." Les laid the eyeball on the table and walked away. The eyeball rolled from the smooth, level table, bounced twice when it hit the floor, then rolled along, six inches behind him. He turned and kicked at it. The eyeball rolled nimbly out of the path of the kick. "Les, I think we've made poor Mrs. Burnett angry," Ann said. "She's so upset over her poor husband's health and she thinks we're insulting her." Les didn't hear her. He strode to the detective set, followed at a safe distance by the eyeball, and picked up the box. "Hey, watch out!" Bob cried. A small flashlight fell from the box, landed on its side and its bulb flashed on, throwing a pencil of light across Les's hands. Bob retrieved the flashlight and turned it off while Les glanced through an instruction booklet, frowning. 
"This toy is too complicated for a ten-year-old boy," Les told his wife. "I don't know why you ordered such a thing." He tossed the booklet into the empty box. "I'm going to return it, if you don't smudge it up," she replied. "Look at the marks you made on the instructions." The black finger-marks stood out clearly against the shiny, coated paper. Les looked at his hands. "I didn't do it," he said, pressing his clean fingertips against the kitchen table. Black fingerprints, a full set of them, stood out against the sparkling polished table's surface. "I think the Detectolite did it," Bob said. "The instructions say you've got to be very careful with it, because its effects last for a long time." Les began scrubbing his hands vigorously at the sink. Ann watched him silently, until she saw his fingerprints appear on the faucet, the soap and the towel. She began to yell at him for making such a mess, when Sally floated into the kitchen. The girl was wearing a nightgown. "My God!" Ann forgot her tongue before the children. "She got out of that dress herself. Where did she get that nightgown?" Ann fingered the garment. She didn't recognize it as a nightgown. But in cut and fold, it was suspiciously like the dress that had arrived in the parcel. Her heart sank. She picked up the child, felt the hot forehead, and said: "Les, I think it's the same dress. It must change color or something when it's time for a nap. It seems impossible, but—" She shrugged mutely. "And I think Sally's running a temperature. I'm going to put her to bed." She looked worriedly into the reddened eyes of the small girl, who whimpered on the way to the bedroom. Ann carried her up the stairs, keeping her balance with difficulty, as Sally threatened to pop upward out of her arms. The whole family decided that bed might be a good idea, soon after dinner. When the lights went out, the house seemed to be nearly normal. Les put on a pair of gloves and threw a pillowcase over the eyeball. 
Bob rigged up trestles to warn visitors from the front porch. Ann put small wads of cotton into her ears, because she didn't like the rhythmic rattle, soft but persistent, that emerged from the hall closet where the manky sat. Sally was whining occasionally in her sleep. When daylight entered her room, Sally's nightgown had turned back into the new dress. But the little girl was too sick to get out of bed. She wasn't hungry, her nose was running, and she had a dry cough. Les called the doctor before going to work. The only good thing about the morning for Ann was the fact that the manky had quieted down some time in the night. After she got Bob to school, she gingerly opened the closet door. The manky was now glowing a bright pink and seemed slightly larger. Deep violet lettering stood out on its side: "Today is Wednesday. For obvious reasons, the manky will not operate today." The mailman brought a letter from Hartshorne-Logan. Ann stared stupidly at the envelope, until she realized that this wasn't an impossibly quick answer to the letter she had written yesterday. It must have crossed in the mail her complaint about the non-arrival of the order. She tore open the envelope and read: "We regret to inform you that your order cannot be filled until the balance you owe us has been reduced. From the attached form, you will readily ascertain that the payment of $87.56 will enable you to resume the purchasing of merchandise on credit. We shall fill your recent order as soon...." Ann crumpled the letter and threw it into the imitation fireplace, knowing perfectly well that it would need to be retrieved for Les after work tonight. She had just decided to call Hartshorne-Logan's complaint department when the phone rang. "I'm afraid I must ask you to come down to the school, Mrs. Morris," a voice said. "Your son is in trouble. He claims that it's connected with something that his parents gave him." "My son?" Ann asked incredulously. "Bob?" "Yes. 
It's a little gadget that looks like a water pistol. Your son insists that he didn't know it would make clothing transparent. He claims it was just accident that he tried it out when he was walking by the gym during calisthenics. We've had to call upon every family in the neighborhood for blankets. Bob has always been a good boy and we believe that we can expel him quietly without newspaper publicity involving his name, if you'll—" "I'll be right down," Ann said. "I mean I won't be right down. I've got a sick baby here. Don't do anything till I telephone my husband. And I'm sorry for Bob. I mean I'm sorry for the girls, and for the boys, too. I'm sorry for—for everything. Good-by." Just as she hung up the telephone, the doorbell rang. It rang with a normal buzz, then began to play soft music. Ann opened the door without difficulty, to admit Dr. Schwartz. "You aren't going to believe me, Doctor," Ann said while he took the child's temperature, "but we can't get that dress off Sally." "Kids are stubborn sometimes." Dr. Schwartz whistled softly when he looked at the thermometer. "She's pretty sick. I want a blood count before I try to move her. Let me undress her." Sally had been mumbling half-deliriously. She made no effort to resist as the doctor picked her up. But when he raised a fold of the dress and began to pull it back, she screamed. The doctor dropped the dress and looked in perplexity at the point where it touched Sally's skin. "It's apparently an allergy to some new kind of material. But I don't understand why the dress won't come off. It's not stuck tight." "Don't bother trying," Ann said miserably. "Just cut it off." Dr. Schwartz pulled scissors from his bag and clipped at a sleeve. When he had cut it to the shoulder, he gently began to peel back the edges of the cloth. Sally writhed and kicked, then collapsed in a faint. The physician smoothed the folds hastily back into place. He looked helpless as he said to Ann: "I don't know quite what to do. 
The flesh starts to hemorrhage when I pull at the cloth. She'd bleed to death if I yanked it off. But it's such an extreme allergy that it may kill her, if we leave it in contact with the skin." The manky's rattle suddenly began rhythmically from the lower part of the house. Ann clutched the side of the chair, trying to keep herself under control. A siren wailed somewhere down the street, grew louder rapidly, suddenly going silent at the peak of its crescendo. Dr. Schwartz glanced outside the window. "An ambulance. Looks as if they're stopping here." "Oh, no," Ann breathed. "Something's happened to Les." "It sure will," Les said grimly, walking into the bedroom. "I won't have a job if I can't get this stuff off my fingers. Big black fingerprints on everything I touch. I can't handle correspondence or shake hands with customers. How's the kid? What's the ambulance doing out front?" "They're going to the next house down the street," the physician said. "Has there been sickness there?" Les held up his hands, palms toward the doctor. "What's wrong with me? My fingers look all right. But they leave black marks on everything I touch." The doctor looked closely at the fingertips. "Every human has natural oil on the skin. That's how detectives get results with their fingerprint powder. But I've never heard of nigrification, in this sense. Better not try to commit any crimes until you've seen a skin specialist." Ann was peering through the window, curious about the ambulance despite her own troubles. She saw two attendants carry Mr. Burnett, motionless and white, on a stretcher from the house next door into the ambulance. A third member of the crew was struggling with a disheveled Mrs. Burnett at the door. Shrieks that sounded like "Murder!" came sharply through the window. "I know those bearers," Dr. Schwartz said. He yanked the window open. "Hey, Pete! What's wrong?" The front man with the stretcher looked up. "I don't know. This guy's awful sick. 
I think his wife is nuts." Mrs. Burnett had broken free. She dashed halfway down the sidewalk, gesticulating wildly to nobody in particular. "It's murder!" she screamed. "Murder again! He's been poisoned! He's going to die! It means the electric chair!" The orderly grabbed her again. This time he stuffed a handkerchief into her mouth to quiet her. "Come back to this house as soon as you deliver him," Dr. Schwartz shouted to the men. "We've got a very sick child up here." "I was afraid this would happen," Les said. "The poor woman already has lost three husbands. If this one is sick, it's no wonder she thinks that somebody is poisoning him." Bob stuck his head around the bedroom door. His mother stared unbelievingly for a moment, then advanced on him threateningly. Something in his face restrained her, just as she was about to start shaking him. "I got something important to tell you," Bob said rapidly, ready to duck. "I snuck out of the principal's office and came home. I got to tell you what I did." "I heard all about what you did," Ann said, advancing again. "And you're not going to slip away from me." "Give me a chance to explain something. Downstairs. So he won't hear," Bob ended in a whisper, nodding toward the doctor. Ann looked doubtfully at Les, then followed Bob down the stairs. The doorbell was monotonously saying in a monotone: "Don't answer me, don't answer me, don't go to the door." "Why did you do it?" Ann asked Bob, her anger suddenly slumping into weary sadness. "People will suspect you of being a sex maniac for the rest of your life. You can't possibly explain—" "Don't bother about the girls' clothing," Bob said, "because it was only an accident. The really important thing is something else I did before I left the house." Les, cursing softly, hurried past them on the way to answer the knocking. He ignored the doorbell's pleas. "I forgot about it," Bob continued, "when that ray gun accidentally went off. 
Then when they put me in the principal's office, I had time to think, and I remembered. I put some white stuff from the detective kit into that sugar we lent Mrs. Burnett last night. I just wanted to see what would happen. I don't know exactly what effect—" "He put stuff in the sugar?" A deep, booming voice came from the front of the house. Mother and son looked through the hall. A policeman stood on the threshold of the front door. "I heard that! The woman next door claims that her husband is poisoned. Young man, I'm going to put you under arrest." The policeman stepped over the threshold. A blue flash darted from the doorbell box, striking him squarely on the chest. The policeman staggered back, sitting down abruptly on the porch. A scent of ozone drifted through the house. "Close the door, close the door," the doorbell was chanting urgently. "Where's that ambulance?" Dr. Schwartz yelled from the top of the steps. "The child's getting worse."
|
D. The package would have never been delivered.
|
What experiments with large-scale features are performed?
|
### Introduction
Since Och BIBREF0 proposed minimum error rate training (MERT) to directly optimize objective evaluation measures, MERT has become a standard model-tuning technique in statistical machine translation (SMT). Although MERT has been improved through better search algorithms BIBREF1, BIBREF2, BIBREF3, BIBREF4, it does not work well when there are many features. As a result, margin infused relaxed algorithms (MIRA) dominate in this setting BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. In SMT, MIRA considers margin losses related to sentence-level BLEU. However, since BLEU is not decomposable into sentence-level scores, these MIRA algorithms use heuristics to approximate the exact losses, e.g., pseudo-document BIBREF8 and document-level loss BIBREF9. Other recent successful work on large-scale feature tuning includes force-decoding-based BIBREF11 and classification-based BIBREF12 methods. We aim to provide a tuning method for large-scale features that is simpler than MIRA. Our motivation derives from an observation on MERT. Because MERT considers the quality of only the top-1 hypothesis, there may be more than one set of parameters that achieve similar top-1 performance in tuning yet produce very different top-N hypothesis lists. Empirically, we expect an ideal model to benefit the whole N-best list: better hypotheses should be assigned higher ranks, which should decrease the error risk of the top-1 result on unseen data. Plackett BIBREF13 offered an easy-to-understand theory for modeling a permutation: an N-best list is assumed to be generated by sampling without replacement, so the choice of the hypothesis in the $i$th position depends only on the candidates not yet chosen, rather than on the whole list. The model also supports a partial permutation that accounts for only the top $k$ positions of a list, regardless of the remainder. When $k$ is 1, this model reduces to standard conditional probabilistic training, whose dual problem is actually maximum-entropy-based training BIBREF14.
Although Och BIBREF0 replaced maximum-entropy-based training with direct error optimization, probabilistic models correlate well with BLEU when features are rich enough. A similar claim appears in BIBREF15. This also makes the new method applicable to large-scale features.

### Plackett-Luce Model
The Plackett-Luce model was first proposed to predict the ranks of horses in gambling BIBREF13. Let $\mathbf {r}=(r_{1},r_{2}\ldots r_{N})$ be $N$ horses with a probability distribution $\mathcal {P}$ over their abilities to win a game. A rank $\mathbf {\pi }=(\pi (1),\pi (2)\ldots \pi (|\mathbf {\pi }|))$ of the horses can be understood as a generative procedure, where $\pi (j)$ denotes the index of the horse in the $j$th position. In the 1st position there are $N$ candidate horses, each of which $r_{j}$ has probability $p(r_{j})$ of being selected. Regarding the rank $\pi $, the probability of generating the champion is $p(r_{\pi (1)})$. The horse $r_{\pi (1)}$ is then removed from the candidate pool. In the 2nd position there are only $N-1$ horses, and their probabilities of being selected become $p(r_{j})/Z_{2}$, where $Z_{2}=1-p(r_{\pi (1)})$ is the normalization. The runner-up in the rank $\pi $, the $\pi (2)$th horse, is thus chosen with probability $p(r_{\pi (2)})/Z_{2}$. For consistency we also write $Z_{1}$ when selecting the champion, though $Z_{1}$ trivially equals 1. This procedure iterates to the last position in $\pi $. The key idea of the Plackett-Luce model is that the choice in the $i$th position of a rank $\mathbf {\pi }$ depends only on the candidates not chosen at previous stages. The probability of generating a rank $\pi $ is given by Formula (1), $p(\mathbf {\pi })=\prod _{j=1}^{|\mathbf {\pi }|}p(r_{\pi (j)})/Z_{j}$, where $Z_{j}=1-\sum _{t=1}^{j-1}p(r_{\pi (t)})$. We offer a toy example (Table TABREF3) to demonstrate this procedure. Theorem 1 The permutation probabilities $p(\mathbf {\pi })$ form a probability distribution over a set of permutations $\Omega _{\pi }$: for each $\mathbf {\pi }\in \Omega _{\pi }$ we have $p(\mathbf {\pi })>0$, and $\sum _{\pi \in \Omega _{\pi }}p(\mathbf {\pi })=1$.
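As a minimal sketch (our own illustrative code, not from the paper), the generative procedure above can be implemented directly. The three winning probabilities below are hypothetical, echoing the horse-racing setting:

```python
from itertools import permutations

def plackett_luce_prob(p, pi):
    """Probability of a (possibly partial) rank `pi` under the
    Plackett-Luce model. p[j] is the winning probability of item j;
    `pi` lists item indices from first place downward."""
    prob, z = 1.0, 1.0          # Z_1 = 1: the full candidate pool
    for j in pi:
        prob *= p[j] / z        # choose item j from the remaining pool
        z -= p[j]               # remove it for the next position
    return prob

# Three horses with hypothetical winning probabilities 0.5, 0.3, 0.2.
p = [0.5, 0.3, 0.2]
print(plackett_luce_prob(p, [0, 1, 2]))   # 0.5 * (0.3/0.5) * (0.2/0.2) ≈ 0.3
# Theorem 1: the probabilities of all full permutations sum to 1.
print(sum(plackett_luce_prob(p, list(pi)) for pi in permutations(range(3))))
```

Note that a partial rank (e.g. `pi = [0, 1]`) is handled by the same loop, matching the remark that $|\mathbf{\pi}| \le N$ is allowed.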
We note that $\Omega _{\pi }$ is not required to contain only complete permutations, in theory or in practice, since gamblers might be interested in only the champion and runner-up; thus $|\mathbf {\pi }|\le N$. In the experiments, we examine the effect of different permutation lengths, with systems termed $PL(|\pi |)$. Theorem 2 Given any two permutations $\mathbf {\pi }$ and $\mathbf {\pi }\prime $ that differ only in two positions $p$ and $q$, $p<q$, with $\pi (p)=\mathbf {\pi }\prime (q)$ and $\pi (q)=\mathbf {\pi }\prime (p)$: if $p(\pi (p))>p(\pi (q))$, then $p(\pi )>p(\pi \prime )$. In other words, swapping two positions so that the horse more likely to win is ranked before the other increases the permutation probability. This implies that the ground-truth permutation, obtained by ranking items decreasingly by their probabilities, attains the maximum permutation probability under a given distribution. In SMT, we are therefore motivated to optimize parameters to maximize the likelihood of the ground-truth permutation of an N-best list. Due to space limitations, see BIBREF13, BIBREF16 for proofs of the theorems.

### Plackett-Luce Model in Statistical Machine Translation
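Anticipating this section's formulation — hypotheses scored by the inner product $h(e)^{T}w$, with the log-likelihood of the ground-truth permutation penalized by a zero-mean, unit-variance Gaussian prior — a hypothetical NumPy sketch of the objective and its gradient follows (our own code, not the authors' implementation; the gradient accumulates observed-minus-expected feature vectors over the shrinking candidate pool):

```python
import numpy as np

def pl_objective(w, H, pi):
    """Penalized log-likelihood of ground-truth permutation `pi` over an
    N-best list with feature matrix H (n x d), under the exponential
    Plackett-Luce parameterization plus a zero-mean, unit-variance
    Gaussian prior on w. Returns (log-likelihood, gradient)."""
    scores = H @ w
    pool = list(range(len(H)))               # candidates not yet chosen
    ll, grad = -0.5 * (w @ w), -w.copy()     # Gaussian prior and its gradient
    for j in pi:
        s = scores[pool]
        lse = s.max() + np.log(np.exp(s - s.max()).sum())  # stable log-sum-exp
        ll += scores[j] - lse
        q = np.exp(s - lse)                  # choice distribution over the pool
        grad += H[j] - q @ H[pool]           # observed minus expected features
        pool.remove(j)                       # shrink the pool, as in Formula (1)
    return ll, grad
```

In practice, this gradient would be handed to an L-BFGS optimizer; since the objective is concave in $w$, any local maximum found is global.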
In SMT, let $\mathbf {f}=(f_{1},f_{2}\ldots )$ denote source sentences, and $\mathbf {e}=(\lbrace e_{1,1},\ldots \rbrace ,\lbrace e_{2,1},\ldots \rbrace \ldots )$ denote target hypotheses. A set of features is defined on both the source and target sides. We refer to $h(e_{i,*})$ as the feature vector of a hypothesis from the $i$th source sentence, and its score under a ranking function is defined as the inner product $h(e_{i,*})^{T}w$ of the weight vector $w$ and the feature vector. We follow the popular exponential style to define a parameterized probability distribution over a list of hypotheses. The ground-truth permutation of an $N$-best list is simply obtained by ranking hypotheses by their sentence-level BLEUs. Here we only concentrate on their relative ranks, which are straightforward to compute in practice, e.g., with add-1 smoothing. Let $\pi _{i}^{*}$ be the ground-truth permutation of hypotheses from the $i$th source sentence. Our optimization objective is to maximize the log-likelihood of the ground-truth permutations, penalized by a zero-mean and unit-variance Gaussian prior. This results in the following objective and gradient, where $Z_{i,j}$ is defined as the $Z_{j}$ in Formula (1) for the $i$th source sentence. The log-likelihood function is smooth, differentiable, and concave in the weight vector $w$, so any local maximum is also a global maximum. Iteratively selecting one parameter of $w$ for tuning in a line-search style (MERT style) would also converge to the global maximum BIBREF17. In practice, we use the faster limited-memory BFGS (L-BFGS) algorithm BIBREF18.

### Plackett-Luce Model in Statistical Machine Translation ::: N-best Hypotheses Resample
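The resampling scheme described in this section — keep the $m/3$ best and the $m/3$ worst hypotheses by sentence-level BLEU, and draw the remaining third from the middle with probability proportional to $\exp (h(e_{i})^{T}w)$ — can be sketched as follows. This is hypothetical code; the function name and interface are our own:

```python
import numpy as np

def resample_nbest(bleus, scores, m, seed=0):
    """Reduce an N-best list to m hypotheses: the m/3 best and m/3 worst
    by sentence-level BLEU, plus the rest drawn from the middle with
    probability proportional to exp(model score). Returns indices."""
    rng = np.random.default_rng(seed)
    order = np.argsort(bleus)                # ascending sentence-level BLEU
    k = m // 3
    worst, best, middle = order[:k], order[-k:], order[k:-k]
    p = np.exp(scores[middle] - scores[middle].max())  # stable softmax weights
    p /= p.sum()
    drawn = rng.choice(middle, size=m - 2 * k, replace=False, p=p)
    return np.concatenate([best, drawn, worst])
```

Shrinking the list this way raises the richness ratio $r$ while preserving both very good and very bad hypotheses for contrast.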
The log-likelihood of a Plackett-Luce model is not a strict upper bound on the BLEU score; however, it correlates well with BLEU when features are rich. The notion of "rich" is qualitative and hard to define across applications, so we empirically provide a formula $r$ to measure feature richness in the machine-translation scenario: the greater $r$ is, the richer the features. In practice, we find a rough threshold for $r$ of 5. In engineering, the size of an N-best list with unique hypotheses is usually less than several thousand. This suggests that if there are thousands of features or more, the Plackett-Luce model is quite suitable; otherwise, we can reduce the size of the N-best lists by sampling to push $r$ beyond the threshold. There may be more efficient sampling methods; here we adopt a simple one. To draw $m$ samples from a list of hypotheses $\mathbf {e}$: first, the $\frac{m}{3}$ best and the $\frac{m}{3}$ worst hypotheses are taken by their sentence-level BLEUs; second, we sample the remaining hypotheses from the distribution $p(e_{i})\propto \exp (h(e_{i})^{T}w)$, where $\mathbf {w}$ is an initial weight from the last iteration.

### Evaluation
We compare our method with MERT and MIRA on two tasks: iterative training and N-best list reranking. We do not list PRO BIBREF12 as a baseline, as Cherry et al. BIBREF10 have compared PRO with MIRA and MERT extensively. In the first task, we align the FBIS data (about 230K sentence pairs) with GIZA++ and train a 4-gram language model on the Xinhua portion of the Gigaword corpus. A hierarchical phrase-based (HPB) model (Chiang, 2007) is tuned on NIST MT 2002 and tested on MT 2004 and 2005. The features are the eight basic ones BIBREF20 plus 220 extra group features. We design feature templates to group grammars by the lengths of the source and target sides, (feat-type, a$\le $src-side$\le $b, c$\le $tgt-side$\le $d), where feat-type denotes any of the relative frequency, reversed relative frequency, lexical probability, and reversed lexical probability, and [a, b], [c, d] enumerate all possible subranges of [1, 10], as the maximum length on both sides of a hierarchical grammar is limited to 10. This yields the 4 $\times $ 55 extra group features. In the second task, we rerank an N-best list from an HPB system with 7491 features from a third party. The system uses six million parallel sentence pairs available for the DARPA BOLT Chinese-English task. It includes 51 dense features (translation probabilities, provenance features, etc.) and up to 7440 sparse features (mostly lexical and fertility-based). The language model is a 6-gram model trained on 10 billion words, including the English side of our parallel corpora plus other corpora such as Gigaword (LDC2011T07) and Google News. For the tuning and test sets, we use 1275 and 1239 sentences, respectively, from the LDC2010E30 corpus.

### Evaluation ::: Plackett-Luce Model for SMT Tuning
We conduct full training of machine translation models. By default, the decoder is invoked at most 40 times, and each time it outputs 200 hypotheses, which are combined with those from previous iterations and sent to the tuning algorithms. In obtaining the ground-truth permutations, there are many ties with the same sentence-level BLEU, and we break them randomly. In this section, all systems have only around two hundred features; hence, in Plackett-Luce-based training we sample 30 hypotheses from the accumulated $n$-best list in each round of training. All results are shown in Table TABREF10. No PL($k$) system performs as well as MERT or MIRA on the development data; this may be because PL($k$) systems do not optimize BLEU directly, and the features here are relatively few compared to the size of the N-best lists (empirical Formula DISPLAY_FORM9). However, PL($k$) systems are better than MERT in testing. PL($k$) systems consider the quality of hypotheses from the 2nd to the $k$th position, which we conjecture plays a role similar to the margin in SVM classification. Interestingly, MIRA performs best in training and still performs quite well in testing. The PL(1) system is equivalent to a max-entropy-based algorithm BIBREF14 whose dual problem is actually maximizing the conditional probability of one oracle hypothesis. As we increase $k$, performance improves at first; after reaching a maximum around $k=5$, it decreases slowly. We explain this phenomenon as follows: when features are rich enough, higher BLEU scores can be fitted easily, and longer ground-truth permutations then carry more useful information.

### Evaluation ::: Plackett-Luce Model for SMT Reranking
After de-duplication, the N-best list has an average size of around 300, with 7491 features. According to Formula DISPLAY_FORM9, this is ideal for the Plackett-Luce model. Results are shown in Figure FIGREF12. We observe some interesting phenomena. First, the Plackett-Luce models boost the training BLEU greatly, up to 2.5 points higher than MIRA. This supports our assumption that richer features benefit BLEU, even though they are optimized toward a different objective. Second, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$. For PL(1), the over-fitting is quite obvious: the portion in which its curve surpasses MIRA is the smallest among all $k$, and its convergent performance is below the baseline. When $k$ is at least 5, the curves are almost entirely above the MIRA line. After 500 L-BFGS iterations, their performances are no worse than the baseline, though only by a small margin. This experiment shows that with large-scale features, the Plackett-Luce model correlates very well with the BLEU score and alleviates overfitting to some degree.

Table 2: PL(k): Plackett-Luce model optimizing the ground-truth permutation with length k. The significance symbols (+ at the 0.05 level) are compared with MERT. Bold numbers signify better results than the M(1) system.

Figure 1: PL(k) with 500 L-BFGS iterations, k=1,3,5,7,9,12,15, compared with MIRA in reranking.
|
Plackett-Luce Model for SMT Reranking
|
On which date was Mr. Chapman's ventriculoperitoneal shunt placed?
Choose the correct answer from the following options:
A. 03/04/2017
B. 08/04/2017
C. 04/13/2017
D. 05/01/2017
E. 05/25/2019
|
### Patient Report 0
**Dear colleague, **
We are reporting on our shared patient, Mr. John Chapman, born on
11/16/1994, who received emergency treatment at our clinic on
04/03/2017.
**Diagnoses**:
- Severe open traumatic brain injury with fractures of the cranial
vault, mastoid, and skull base
- Dissection of the distal internal carotid artery on both sides
- Subarachnoid hemorrhage involving both hemispheres and extending
into the basal cisterns
- Aspiration pneumonia
**Other Diagnoses: **
- Status post rib fracture 2005
- Status post appendectomy 2006
- Status post distal radius fracture 2008
- Status post elbow fracture 2010
**Procedure**: External ventricular drain (EVD) placement.
**Medical History:** Admission through the emergency department as a
polytrauma alert. The patient was involved in a motocross accident,
where he jumped, fell, and landed face-first. He was intubated at the
scene, and either during or before intubation, aspiration occurred. No
issues with airway, breathing, or circulation (A, B, or C problems) were
noted. A CT scan performed in the emergency department revealed an open
traumatic brain injury with fractures of the cranial vault, mastoid, and
skull base, as well as dissection of both carotid arteries. Upon
admission, we encountered an intubated and sedated patient with a
Richmond Agitation-Sedation Scale (RASS) score of -4. He was
hemodynamically stable at all times.
**Current Recommendations:**
- Regular checks of vigilance, laboratory values and microbiological
findings.
- Careful balancing
### Patient Report 1
**Dear colleague, **
We report on Mr. John Chapman, born on 11/16/1994, who was admitted to
our Intensive Care Unit from 04/03/2017 to 05/01/2017.
**Diagnoses:**
- Open severe traumatic brain injury with fractures of the skull
vault, mastoid, and skull base
- Dissection of the distal ACI on both sides
- Subarachnoid hemorrhage involving both hemispheres and extending
into basal cisterns
- Infarct areas in the border zone between MCA-ACA on the right
frontal and left parietal sides
- Malresorptive hydrocephalus
- Rhabdomyolysis
- Aspiration pneumonia
**Other Diagnoses: **
- Status post rib fracture in 2005
- Status post appendectomy in 2006
- Status post distal radius fracture in 2008
- Status post elbow fracture in 2010
**Surgical Procedures:**
- 04/03/2017: Placement of external ventricular drain
- 04/08/2017: Placement of an intracranial pressure monitoring
catheter
- 04/13/2017: Surgical tracheostomy
- 05/01/2017: Left ventriculoperitoneal shunt placement
**Medical History:** The patient was admitted through the emergency
department as a polytrauma alert. The patient had fallen while riding a
motocross bike, landing face-first after jumping. He was intubated at
the scene. Aspiration occurred either during or before intubation. No
problems with breathing or circulation were noted. The CT performed in
the emergency department showed an open traumatic brain injury with
fractures of the skull vault, mastoid, and skull base, as well as
dissection of the carotid arteries on both sides and bilateral
subarachnoid hemorrhage.
Upon admission, the patient was sedated and intubated, with a Richmond
Agitation-Sedation Scale (RASS) score of -4, and was hemodynamically
stable under controlled ventilation.
**Therapy and Progression:**
[Neurology]{.underline}: Following the patient\'s admission, an external
ventricular drain was placed. Reduction of sedation had to be
discontinued due to increased intracranial pressure. A right pupil size
greater than the left showed no intracranial correlate. With
persistently elevated intracranial pressure, intensive intracranial
pressure therapy was initiated using deeper sedation, administration of
hyperosmolar sodium, and cerebrospinal fluid drainage, which normalized
intracranial pressure. Intermittently, there were recurrent intracranial
pressure peaks, which could be treated conservatively. Transcranial
Doppler examinations showed normal flow velocities. Microbiological
samples from cerebrospinal fluid were obtained when the patient had
elevated temperatures, but no bacterial growth was observed. Due to the
inability to adequately monitor intracranial pressure via the external
ventricular drain, an intracranial pressure monitoring catheter was
placed to facilitate adequate intracranial pressure monitoring. In the
perfusion computed tomography, progressive edema with increasingly
obstructed external ventricular spaces and previously known infarcts in
the border zone area were observed. To ensure appropriate intracranial
pressure monitoring, a Tuohy drain was inserted due to cerebrospinal
fluid buildup on 04/21/2017. After the initiation of antibiotic therapy
for suspected ventriculitis, the intracranial pressure monitoring
catheter was removed on 04/20/2017. Subsequently, a liquorrhea
developed, leading to the placement of a Tuohy drain. After successful
antibiotic treatment of ventriculitis, a ventriculoperitoneal shunt was
placed on 05/01/2017 without complications, and the Tuohy drain was
removed. Radiological control confirmed the correct positioning. The
patient gradually became more alert. Both pupils were isochoric and
reacted to light. All extremities showed movement, although the patient
only intermittently responded to commands. On 05/01/2017, a VP shunt was
placed on the left side without complications. Currently, the patient is
sedated with continuous clonidine at 60µg/h.
**Hemodynamics**: To maintain cerebral perfusion pressure in the
presence of increased intracranial pressure, circulatory support with
vasopressors was necessary. Echocardiography revealed preserved cardiac
function without wall motion abnormalities or right heart strain,
despite the increasing need for noradrenaline support. As the patient
had bilateral carotid dissection, a therapy with Aspirin 100mg was
initiated. On 04/16/2017, clinical examination revealed right\>left leg
circumference difference and redness of the right leg. Ultrasound
revealed a long-segment deep vein thrombosis in the right leg, extending
from the pelvis (proximal end of the thrombus not clearly delineated) to
the lower leg. Therefore, Heparin was increased to a therapeutic dose.
Heparin therapy was paused on postoperative day 1, and prophylactic
anticoagulation started, followed by therapeutic anticoagulation on
postoperative day 2. The patient was switched to subcutaneous Lovenox.
**Pulmonary**: Due to the history of aspiration in the prehospital
setting, a bronchoscopy was performed, revealing a moderately obstructed
bronchial system with several clots. As prolonged sedation was
necessary, a surgical tracheostomy was performed without complications
on 04/13/2017. Subsequently, we initiated weaning from mechanical
ventilation. The current weaning strategy includes 12 hours of
synchronized intermittent mandatory ventilation (SIMV) during the night,
with nighttime pressure support ventilation (DuoPAP: Ti high 1.3s,
respiratory rate 11/min, Phigh 11 mbar, PEEP 5 mbar, Psupport 5 mbar,
trigger 4l, ramp 50 ms, expiratory trigger sensitivity 25%).
**Abdomen**: FAST examinations did not reveal any signs of
intra-abdominal trauma. Enteral feeding was initiated via a gastric
tube, along with supportive parenteral nutrition. With forced bowel
movement measures, the patient had regular bowel movements. On
04/17/2017, a complication-free PEG (percutaneous endoscopic
gastrostomy) placement was performed due to the potential long-term need
for enteral nutrition. The PEG tube is currently being fed with tube
feed nutrition, with no bowel movement for the past four days.
Additionally, supportive parenteral nutrition is being provided.
**Kidney**: Initially, the patient had polyuria without confirming
diabetes insipidus, and subsequently, adequate diuresis developed.
Retention parameters were within the normal range. As crush parameters
increased, a therapy involving forced diuresis was initiated, resulting
in a significant reduction of crush parameters.
**Infection Course:** Upon admission, with elevated infection parameters
and intermittently febrile temperatures, empirical antibiotic therapy
was initiated for suspected pneumonia using Piperacillin/Tazobactam.
Staphylococcus capitis was identified in blood cultures, and
Staphylococcus aureus was found in bronchial lavage. Both microbes were
sensitive to the current antibiotic therapy, so treatment with
Piperacillin/Tazobactam continued. Additionally, Enterobacter cloacae
was identified in tracheobronchial secretions during the course, also
sensitive to the ongoing antibiotic therapy. On 05/17, the patient
experienced another fever episode with elevated infection parameters and
right lower lobe infiltrates in the chest X-ray. After obtaining
microbiological samples, antibiotic therapy was switched to Meropenem
for suspected pneumonia. Microbiological findings from cerebrospinal
fluid indicated gram-negative rods. Therefore, antibiotic therapy was
adjusted to Ciprofloxacin in accordance with susceptibility testing due
to suspected ventriculitis, and the Meropenem dose was increased. This
led to a reduction in infection parameters. Finally, microbiological
examination of cerebrospinal fluid, blood cultures, and urine revealed
no pathological findings. Infection parameters decreased. We recommend
continuing antibiotic therapy until 05/02/2017.
**Anti-Infective Course: **
- Piperacillin/Tazobactam 04/03/2017-04/16/2017: Staph. Capitis in
Blood Culture Staph. Aureus in Bronchial Lavage
- Meropenem 04/16/2017-present (increased dose since 04/18) CSF:
gram-negative rods in Blood Culture: Pseudomonas aeruginosa
Acinetobacter radioresistens
- Ciprofloxacin 04/18/2017-present CSF: gram-negative rods in Blood
Culture: Pseudomonas aeruginosa, Acinetobacter radioresistens
**Weaning Settings:** Weaning Stage 6: 12-hour synchronized intermittent
mandatory ventilation (SIMV) with DuoPAP during the night (Ti high 1.3s,
respiratory rate 11/min, Phigh 11 mbar, PEEP 5 mbar, Psupport 5 mbar,
trigger 4l, ramp 50 ms, expiratory trigger sensitivity 25%).
**Status at transfer:** Currently, Mr. Chapman is monosedated with
Clonidine. He spontaneously opens both eyes and spontaneously moves all
four extremities. Pupils are bilaterally moderately dilated, round and
sensitive to light. There is bulbar divergence. Circulation is stable
without catecholamine therapy. He is in the process of weaning,
currently spontaneous breathing with intermittent CPAP. Renal function
is sufficient, enteral nutrition via PEG with supportive parenteral
nutrition is successful.
**Current Medication:**
**Medication** **Dosage** **Frequency**
------------------------------------ ---------------- ---------------
Bisoprolol (Zebeta) 2.5 mg 1-0-0
Ciprofloxacin (Cipro) 400 mg 1-1-1
Meropenem (Merrem) 4 g Every 4 hours
Morphine Hydrochloride (MS Contin) 10 mg 1-1-1-1-1-1
Polyethylene Glycol 3350 (MiraLAX) 13.1 g 1-1-1
Acetaminophen (Tylenol) 1000 mg 1-1-1-1
Aspirin 100 mg 1-0-0
Enoxaparin (Lovenox) 30 mg (0.3 mL) 0-0-1
Enoxaparin (Lovenox) 70 mg (0.7 mL) 1-0-1
**Lab results:**
**Parameter** **Results** **Reference Range**
-------------------- ------------- ---------------------
Creatinine (Jaffé) 0.42 mg/dL 0.70-1.20 mg/dL
Urea 31 mg/dL 17-48 mg/dL
Total Bilirubin 0.35 mg/dL \< 1.20 mg/dL
Hemoglobin 7.6 g/dL 13.5-17.0 g/dL
Hematocrit 28% 39.5-50.5%
Red Blood Cells 3.5 M/uL 4.3-5.8 M/uL
White Blood Cells 10.35 K/uL 3.90-10.50 K/uL
Platelets 379 K/uL 150-370 K/uL
MCV 77.2 fL 80.0-99.0 fL
MCH 24.1 pg 27.0-33.5 pg
MCHC 32.5 g/dL 31.5-36.0 g/dL
MPV 11.3 fL 7.0-12.0 fL
RDW-CV 17.7% 11.5-15.0%
Quick 54% 78-123%
INR 1.36 0.90-1.25
aPTT 32.8 sec 25.0-38.0 sec
**Addition: Radiological Findings**
[Clinical Information and Justification:]{.underline} Suspected deep
vein thrombosis (DVT) on the right leg.
[Special Notes:]{.underline} Examination at the bedside in the intensive
care unit, no digital image archiving available.
[Findings]{.underline}: Confirmation of a long-segment deep venous
thrombosis in the right leg, starting in the pelvis (proximal end not
clearly delineated) and extending to the lower leg.
Visible Inferior Vena Cava without evidence of thrombosis.
The findings were communicated to the treating physician.
**Full-Body Trauma CT on 04/03/2017:**
[Clinical Information and Justification:]{.underline} Motocross
accident. Polytrauma alert. Consequences of trauma? Informed consent:
Emergency indication. Recommended monitoring of kidney and thyroid
laboratory parameters.
**Findings**: CCT: Dissection of the distal internal carotid artery on
both sides (left 2-fold).
Signs of generalized elevated intracranial pressure.
Open skull-brain trauma with intracranial air inclusions and skull base
fracture at the level of the roof of the ethmoidal/sphenoidal sinuses
and clivus (in a close relationship to the bilateral internal carotid
arteries) and the temporal
**CT Head on 04/16/2017:**
[Clinical Information and Justification:]{.underline} History of skull
fracture, removal of EVD (External Ventricular Drain). Inquiry about the
course.
[Findings]{.underline}: Regression of ventricular system width (distance
of SVVH currently 41 mm, previously 46 mm) with residual liquor caps,
indicative of regressed hydrocephalus. Interhemispheric fissure in the
midline. No herniation.
Complete regression of subdural hematoma on the left, tentorial region.
Known defect areas on the right frontal lobe where previous catheters
were inserted.
Progression of a newly hypodense demarcated cortical infarct on the
left, postcentral.
Known bilateral skull base fractures involving the petrous bone, with
secretion retention in the mastoid air cells bilaterally. Minimal
secretion also in the sphenoid sinuses.
Postoperative bone fragments dislocated intracranially after right
frontal trepanation.
**Chest X-ray on 04/24/2017.**
[Clinical Information and Justification:]{.underline} Mechanically
ventilated patient. Suspected pneumonia. Question about infiltrates.
[Findings]{.underline}: Several previous images for comparison, last one
from 08/20/2021.
Persistence of infiltrates in the right lower lobe. No evidence of new
infiltrates. Removal of the tracheal tube and central venous catheter
with a newly inserted tracheal cannula. No evidence of pleural effusion
or pneumothorax.
**CT Head on 04/25/2017:**
[Clinical Information and Justification:]{.underline} Severe traumatic
brain injury with brain edema, one External Ventricular Drain removed,
one parenchymal catheter removed; Follow-up.
[Findings]{.underline}: Previous images available, CT last performed on
04/09/17, and MRI on 04/16/17.
Massive cerebrospinal fluid (CSF) stasis supra- and infratentorially
with CSF pressure caps at the ventricular and cisternal levels with
completely depleted external CSF spaces, differential diagnosis:
malresorptive hydrocephalus. The EVD and parenchymal catheter have been
completely removed.
No evidence of fresh intracranial hemorrhage. Residual subdural hematoma
on the left, tentorial. Slight regression of the cerebellar tonsils.
Increasing hypodensity of the known defect zone on the right frontal
region, differential diagnosis: CSF diapedesis. Otherwise, the status is
the same as for the other defects.
Secretion in the sphenoid sinus and mastoid cells bilaterally, known
bilateral skull base fractures.
**Bedside Chest X-ray on 04/26/2017:**
[Clinical Information and Justification]{.underline}: Respiratory
insufficiency. Inquiry about cardiorespiratory status.
[Findings]{.underline}: Previous image from 08/17/2021.
Left Central Venous Catheter and gastric tube in unchanged position.
Persistent consolidation in the right para-hilar region, differential
diagnosis: contusion or partial atelectasis. No evidence of new
pulmonary infiltrates. No pleural effusion. No pneumothorax. No
pulmonary congestion.
**Brain MRI on 04/26/2017:**
[Clinical Information and Justification:]{.underline} Severe skull-brain
trauma with skull calvarium, mastoid, and skull base fractures.
Assessment of infarct areas/edema for rehabilitation planning.
[Findings:]{.underline} Several previous examinations available.
Persistent small sulcal hemorrhages in both hemispheres (left \> right)
and parenchymal hemorrhage on the left frontal with minimal perifocal
edema.
Narrow subdural hematoma on the left occipital extending tentorially (up
to 2 mm).
No current signs of hypoxic brain damage. No evidence of fresh ischemia.
Slightly regressed ventricular size. No herniation. Unchanged placement
of catheters on the right frontal side. Mastoid air cells blocked
bilaterally due to known bilateral skull base fractures, mucosal
swelling in the sphenoid and ethmoid sinuses. Polypous mucosal swelling
in the left maxillary sinus. Other involved paranasal sinuses and
mastoids are clear.
**Bedside Chest X-ray on 04/27/2017:**
[Clinical Information and Justification:]{.underline} Tracheal cannula
placement. Inquiry about the position.
[Findings]{.underline}: Images from 04/03/2017 for comparison.
Tracheal cannula with tip projecting onto the trachea. No pneumothorax.
Regressing infiltrate in the right lower lung field. No leaking pleural
effusions.
Left subclavian central venous catheter with tip projecting onto the
superior vena cava. Gastric tube in situ.
**CT Head on 04/28/2017:**
[Clinical Information and Justification:]{.underline} Open head injury,
bilateral subarachnoid hemorrhage (SAH), EVD placement. Inquiry about
herniation.
[Findings]{.underline}: Comparison with the last prior examination from
the previous day.
Generalized signs of cerebral edema remain constant, slightly
progressing with a somewhat increasing blurred cortical border,
particularly high frontal.
Essentially constant transtentorial herniation of the midbrain and low
position of the cerebellar tonsils. Marked reduction of inner CSF spaces
and depleted external CSF spaces, unchanged position of the ventricular
drainage catheter with the tip in the left lateral ventricle.
Constant small parenchymal hemorrhage on the left frontal and constant
SDH at the tentorial edge on both sides. No evidence of new intracranial
space-occupying hemorrhage.
Slightly less distinct demarcation of the demarcated infarcts/defect
zones, e.g., on the right frontal region, differential diagnosis:
fogging.
**CT Head Angiography with Perfusion on 04/28/2017:**
[Clinical Information and Justification]{.underline}: Post-traumatic
head injury, rising intracranial pressure, bilateral internal carotid
artery dissection. Inquiry about intracranial bleeding, edema course,
herniation, brain perfusion.
[Emergency indication:]{.underline} Vital indication. Recommended
monitoring of kidney and thyroid laboratory parameters. Consultation
with the attending physician and the neuroradiology service was
conducted.
[Technique]{.underline}: Native CT of the neurocranium. CT
angiography of brain-supplying cervical intracranial vessels during
arterial contrast agent phase and perfusion imaging of the neurocranium
after intravenous injection of a total of 140 ml of Xenetix-350. DLP
Head 502.4 mGy\*cm. DLP Body 597.4 mGy\*cm.
[Findings]{.underline}: Previous images from 08/11/2021 and the last CTA
of the head/neck from 04/03/2017 for comparison.
[Brain]{.underline}: Constant bihemispheric and cerebellar brain edema
with a slit-like appearance of the internal and completely compressed
external ventricular spaces. Constant compression of the midbrain with
transtentorial herniation and a constant tonsillar descent.
Increasing demarcation of infarct areas in the border zone of MCA-ACA on
the right frontal, possibly also on the left frontal. Predominantly
preserved cortex-gray matter contrast, sometimes discontinuous on both
frontal sides, differential diagnosis: artifact-related, differential
diagnosis: disseminated infarct demarcations/contusions.
Unchanged placement of the ventricular drainage from the right frontal
with the catheter tip in the left lateral ventricle anterior horn.
Constant subdural hematoma tentorial and posterior falx. Increasingly
vague delineation of the small frontal parenchymal hemorrhage. No new
space-occupying intracranial bleeding.
No evidence of secondary dislocation of the skull base fracture with
constant fluid collections in the paranasal sinuses and mastoid air
cells. Hematoma possible, cerebrospinal fluid leakage possible.
[CT Angiography Head/Neck]{.underline}: Constant presentation of
bilateral internal carotid artery dissection.
No evidence of higher-grade vessel stenosis or occlusion of the
brain-supplying intracranial arteries.
Moderately dilated venous collateral circuits in the cranial soft
tissues on both sides, right \> left. Moderately dilated ophthalmic
veins on both sides, right \> left.
No evidence of sinus or cerebral venous thrombosis. Slight perfusion
deficits in the area of the described infarct areas and contusions.
No evidence of perfusion mismatches in the perfusion imaging.
Unchanged presentation of the other documented skeletal segments.
Additional Note: Discussion of findings with the responsible medical
colleagues on-site and by telephone, as well as with the neuroradiology
service by telephone, was conducted.
**CT Head on 04/30/2017:**
[Clinical Information and Justification]{.underline}: Open head injury
following a motorcycle accident. Inquiry about rebleeding, edema, EVD
displacement.
[Findings and Assessment:]{.underline} CT last performed on 04/05/2017
for comparison.
Constant narrow subdural hematoma on both sides, tentorial and posterior
parasagittal. Constant small parenchymal hemorrhage on the left frontal.
No new intracranial bleeding.
Progressively demarcated infarcts on the right frontal and left
parietal.
Slightly progressive compression of the narrow ventricles as an
indication of progressive edema. Completely depleted external CSF spaces
with the ventricular drain catheter in the left lateral ventricle.
Increasing compression of the midbrain due to transtentorial herniation,
progressive tonsillar descent of 6 mm.
Fracture of the skull base and the petrous part of the temporal bone on
both sides without significant displacement. Hematoma in the mastoid and
sphenoid sinuses and the maxillary sinus.
**CT Head on 05/01/2017:**
[Clinical Information and Justification:]{.underline} Open skull-brain
trauma. Inquiry about CSF stasis, bleeding, edema.
[Findings]{.underline}: CT last performed on 04/05/17 for comparison.
Completely regressed subarachnoid hemorrhages on both sides. Minimal SDH
components on the tentorial edges bilaterally (left more than right,
with a 3 mm margin width). No new intracranial bleeding. Continuously
narrow inner ventricular system and narrow basal cisterns. The fourth
ventricle is unfolded. Narrow external CSF spaces and consistently
swollen gyration with global cerebral edema.
Better demarcated circumscribed hypodensity in the centrum semiovale on
the right (Series 3, Image 176) and left (Series 3, Image 203);
Differential diagnosis: fresh infarcts due to distal ICA dissections.
Consider repeat vascular imaging. No midline shift. No herniation.
Regressing intracranial air inclusions. Fracture of the skull base and
the petrous part of the temporal bone on both sides without significant
displacement. Hematoma in the maxillary, sphenoidal, and ethmoidal
sinuses.
**Consultation Reports:**
**1) Consultation with Ophthalmology on 04/03/2017**
[Patient Information:]{.underline}
- Motorbike accident, heavily contaminated eyes.
- Request for assessment.
**Diagnosis:** Motorbike accident
**Findings:** Patient intubated, unresponsive. In cranial CT, the
eyeball appears intact, no retrobulbar hematoma. Intraocular pressure:
Right/left within the normal range. Eyelid margins of both eyes crusty
with sand, inferiorly in the lower lid sac, and on the upper lid with
sand. Lower lid somewhat chemotic. Slight temporal hyperemia in the left
eyelid angle. Both eyes have erosions, small, multiple, superficial.
Lower conjunctival sac clean. Round pupils, anisocoria right larger than
left. Left iris hyperemia, no iris defects in the direct light. Lens
unremarkable. Reduced view of the optic nerve head due to miosis,
somewhat pale, rather sharp-edged, central neuroretinal rim present,
central vessels normal. Left eye, due to narrow pupil, limited view,
optic nerve head not visible, central vessels normal, no retinal
hemorrhages.
**Assessment:** Eyelid and conjunctival foreign bodies removed. Mild
erosions in the lower conjunctival sac. Right optic nerve head somewhat
pale, rather sharp-edged.
**Current Recommendations:**
- Antibiotic eye drops three times a day for both eyes.
- Ensure complete eyelid closure.
**2) Consultation with Craniomaxillofacial (CMF) Surgery on 04/05/2017**
**Patient Information:**
- Motorbike accident with severe open traumatic brain injury with
fractures of the cranial vault, mastoid, and skull base
- Request for assessment.
- Patient with maxillary fracture.
**Findings:** According to the responsible attending physician,
\"minimal handling in case of decompensating intracranial pressure\" is
indicated. Therefore, currently, a cautious approach is suggested
regarding surgical intervention for the radiologically hardly displaced
maxillary fracture. Re-consultation is possible if there are changes in
the clinical outcome.
**Assessment:** Awaiting developments.
**3) Consultation with Neurology on 04/06/2017**
**Patient Information:**
- Brain edema following a severe open traumatic brain injury with
fractures of the cranial vault, mastoid, and skull base
- Request for assessment.
- Traumatic subarachnoid hemorrhage, intracranial artery dissection,
and various other injuries.
**Findings:** Patient comatose, intubated, sedated. Isocoric pupils. No
light reaction in either eye. No reaction to pain stimuli for
vestibulo-ocular reflex and oculomotor responses. Babinski reflex
negative.
**Assessment:** Long-term ventilation due to a history of intracerebral
bleeding and skull base fracture. No response to pain stimuli or light
reactions in the eyes.
**Procedure/Therapy Suggestion:** Monitoring of patient condition.
**4) Consultation with ENT on 04/16/2017**
**Patient Information:** Tracheostomy tube change.
**Findings:** Tracheostomy tube change performed. Stoma unremarkable.
Trachea clear up to the bifurcation. Sutures in place.
**Assessment:** Re-consultation on 08/27/2021 for suture removal.
**5) Consultation with Neurology on 04/22/2017**
**Patient Information:** Adduction deficit. Request for assessment.
**Findings:** Long-term ventilation due to a history of intracerebral
bleeding and skull base fracture. Adduction deficit in the right eye and
horizontal nystagmus.
**Assessment:** Suspected mesencephalic lesion due to horizontal
nystagmus, but no diagnostic or therapeutic action required.
**6) Consultation with ENT on 04/23/2017**
**Patient Information:** Suture removal. Request for assessment.
**Findings:** Tracheostomy site unremarkable. Sutures trimmed, and skin
sutures removed.
**Assessment:** Procedure completed successfully.
**Antibiogram:**
**Antibiotic** **Organism 1 (Pseudomonas aeruginosa)** **Organism 2 (Acinetobacter radioresistens)**
------------------------- ----------------------------------------- -----------------------------------------------
Aztreonam I (4.0) \-
Cefepime I (2.0) \-
Cefotaxime \- \-
Amikacin S (\<=2.0) S (4.0)
Ampicillin \- \-
Piperacillin I (\<=4.0) \-
Piperacillin/Tazobactam I (8.0) \-
Imipenem I (2.0) S (\<=0.25)
Meropenem S (\<=0.25) S (\<=0.25)
Ceftriaxone \- \-
Ceftazidime I (4.0) \-
Gentamicin . (\<=1.0) S (\<=1.0)
Tobramycin S (\<=1.0) S (\<=1.0)
Cotrimoxazole \- S (\<=20.0)
Ciprofloxacin I (\<=0.25) I (0.5)
Moxifloxacin \- \-
Fosfomycin \- \-
Tigecyclin \- \-
\"S\" means Susceptible
\"I\" means Intermediate
\".\" indicates not specified
\"-\" means Resistant
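For machine processing of antibiograms like the one above, the legend's one-letter codes can be expanded programmatically. The sketch below is illustrative only and not part of the report; the sample rows are taken from the table for organism 1.

```python
# Minimal sketch: expand the antibiogram's one-letter codes into the
# interpretations defined in the legend above. Mapping and sample rows
# are illustrative, taken from the table's legend.
SUSCEPTIBILITY = {
    "S": "Susceptible",
    "I": "Intermediate",
    "-": "Resistant",
    ".": "not specified",
}

def interpret(code: str) -> str:
    """Return the legend interpretation for a single antibiogram code."""
    return SUSCEPTIBILITY.get(code, "unknown")

# Example rows from the table (antibiotic, code for organism 1):
for antibiotic, code in [("Meropenem", "S"), ("Cefepime", "I"), ("Cefotaxime", "-")]:
    print(f"{antibiotic}: {interpret(code)}")
```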
### Patient Report 2
**Dear colleague, **
We are reporting on our mutual patient, Mr. John Chapman, born on
11/16/1994, who presented himself to our Outpatient Clinic from
08/08/2018.
**Diagnoses**:
- Right abducens Nerve Palsy and Facial Nerve Palsy
- Lagophthalmos with corneal opacities due to eyelid closure deficit
- Left Abducens Nerve Palsy with slight compensatory head leftward
rotation and preferred leftward gaze
- Bilateral disc swelling
- Suspected left cavernous internal carotid artery aneurysm following
traumatic ICA dissection
- History of shunt explantation due to dysfunction and right-sided
re-implantation (Codman, current pressure setting 12 cm H2O)
- History of left VP shunt placement (programmable
ventriculoperitoneal shunt, initial pressure setting 5/25 cm H2O,
adjusted to 3 cm H2O before discharge)
- Malresorptive hydrocephalus
- History of severe open head injury in a motocross accident with
multiple skull fractures and distal dissection
**Procedure**: We conducted the following preoperative assessment:
- Visual acuity: Distant vision: Right eye: 0.5, Left eye: 0.8p
- Eye position: Fusion/Normal with significant esotropia in the right
eye; no fusion reflex observed
- Ocular deviation: After CT, at distance, esodeviation simulating
alternating 100 prism diopters (overcorrection); at near,
esodeviation simulating alternating 90 prism diopters
- Head posture: Fusion/Normal with leftward head turn of 5-10 degrees
- Correspondence: Bagolini test shows suppression at both distance and
near fixation
- Motility: Right eye abduction limited to 25 degrees from the
midline, abduction in up and down gaze limited to 30 degrees from
midline; left eye abduction limited to 30 degrees
- Binocular functions: Bagolini test shows suppression in the right
eye at both distance and near fixation; Lang I negative
**Current Presentation:** Mr. Chapman presented himself today in our
neurovascular clinic, providing an MRI of the head.
**Medical History:** The patient is known to have a pseudoaneurysm of
the cavernous left internal carotid artery following traumatic carotid
dissection in 04/2017, along with ipsilateral abducens nerve palsy.
**Physical Examination:** Patient in good general condition. Oriented in
all aspects. No cyanosis. No edema. Warm and dry skin. Normal nasal and
pharyngeal findings. Pupils round, equal, and react promptly to light
bilaterally. Moist tongue. Pharynx and buccal mucosa unremarkable. No
jugular vein distension. No carotid bruits heard. Palpation of lymph
nodes unremarkable. Palpation of the thyroid gland unremarkable, freely
movable. Lungs: Normal chest shape, moderately mobile, vesicular breath
sounds. Heart: Regular heart action, normal rate; heart sounds clear, no
pathological sounds. Abdomen: Peristalsis and bowel sounds normal in all
quadrants; soft abdomen, no tenderness, no palpable masses, liver and
spleen not palpable due to limited access, non-tender kidneys. Normal
peripheral pulses; joints freely movable. Strength, motor function, and
sensation are unremarkable.
**Therapy and Progression:** The pseudoaneurysm has shown slight
enlargement in the recent follow-up imaging and remains partially
thrombosed. The findings were discussed during a neurovascular board
meeting, where a recommendation for endovascular treatment was made,
which the patient has not yet pursued. Since Mr. Chapman has not been
able to decide on treatment thus far, it is advisable to further
evaluate this still asymptomatic condition through a diagnostic
angiography. This examination would also help in better planning any
potential intervention. Mr. Chapman agreed to this course of action, and
we will provide him with a timely appointment for the angiography.
**Lab results upon Discharge:**
**Parameter** **Results** **Reference Range**
-------------------- ------------- ---------------------
Creatinine (Jaffé) 0.44 mg/dL 0.70-1.20 mg/dL
Urea 31 mg/dL 17-48 mg/dL
Total Bilirubin 0.35 mg/dL \< 1.20 mg/dL
Hemoglobin 7.8 g/dL 13.5-17.0 g/dL
Hematocrit 28% 39.5-50.5%
Red Blood Cells 3.5 M/uL 4.3-5.8 M/uL
White Blood Cells 10.35 K/uL 3.90-10.50 K/uL
Platelets 379 K/uL 150-370 K/uL
MCV 77.2 fL 80.0-99.0 fL
MCH 24.1 pg 27.0-33.5 pg
MCHC 32.5 g/dL 31.5-36.0 g/dL
MPV 11.3 fL 7.0-12.0 fL
RDW-CV 17.7% 11.5-15.0%
Quick 54% 78-123%
INR 1.36 0.90-1.25
aPTT 32.8 sec 25.0-38.0 sec
### Patient Report 3
**Dear colleague, **
We are reporting on our patient, Mr. John Chapman, born on 11/16/1994,
who was under our inpatient care from 05/25/2019 to 05/26/2019.
**Diagnoses: **
- Pseudoaneurysm of the cavernous left internal carotid artery
following traumatic carotid dissection
- Abducens nerve palsy.
- History of severe open head trauma with fractures of the cranial
vault, mastoid, and skull base. Distal ICA dissection bilaterally.
Bilateral hemispheric subarachnoid hemorrhage extending into the
basal cisterns. Infarct areas in the MCA-ACA border zones, right
frontal, and left parietal. Malresorptive hydrocephalus.
- Rhabdomyolysis.
- History of aspiration pneumonia.
- Suspected Propofol infusion syndrome.
**Current Presentation:** For cerebral digital subtraction angiography
of the intracranial vessels. The patient presented with stable
cardiopulmonary conditions.
**Medical History**: The patient was admitted for the evaluation of a
pseudoaneurysm of the supra-aortic vessels. Further medical history can
be assumed to be known.
**Physical Examination:** Patient in good general condition. Oriented in
all aspects. No cyanosis. No edema. Warm and dry skin. Normal nasal and
pharyngeal findings. Pupils round, equal, and react promptly to light
bilaterally. Moist tongue. Pharynx and buccal mucosa unremarkable. No
jugular vein distension. No carotid bruits heard. Palpation of lymph
nodes unremarkable. Palpation of the thyroid gland unremarkable, freely
movable. Lungs: Normal chest shape, moderately mobile, vesicular breath
sounds. Heart: Regular heart action, normal rate; heart sounds clear, no
pathological sounds. Abdomen: Peristalsis and bowel sounds normal in all
quadrants; soft abdomen, no tenderness, no palpable masses, liver and
spleen not palpable due to limited access, non-tender kidneys. Normal
peripheral pulses; joints freely movable. Strength, motor function, and
sensation are unremarkable.
**Supra-aortic angiography on 05/25/2019:**
[Clinical context, question, justifying indication:]{.underline}
Pseudoaneurysm of the left ICA. Written consent was obtained for the
procedure. Anesthesia, Medications: Procedure performed under local
anesthesia. Medications: 500 IU Heparin in 500 mL NaCl for flushing.
[Methodology]{.underline}: Puncture of the right common femoral artery
under local anesthesia. 4F sheath, 4F vertebral catheter. Serial
angiographies after selective catheterization of the internal carotid
arteries. Uncomplicated manual intra-arterial contrast medium injection
with a total of 50 mL of Iomeron 300. Post-interventional closure of the
puncture site by manual compression. Subsequent application of a
circular pressure bandage.
[Technique]{.underline}: Biplanar imaging technique, area dose product
1330 cGy x cm², fluoroscopy time 3:43 minutes.
[Findings]{.underline}: The perfused portion of the partially thrombosed
cavernous aneurysm of the left internal carotid artery measures 4 x 2
mm. No evidence of other vascular pathologies in the anterior
circulation.
[Recommendation]{.underline}: In case of post-procedural bleeding,
immediate manual compression of the puncture site and notification of
the on-call neuroradiologist are advised.
- Pressure bandage to be kept until 2:30 PM. Bed rest until 6:30 PM.
- Follow-up in our Neurovascular Clinic
**Addition: Doppler ultrasound of the right groin on 05/26/2019:**
[Clinical context, question, justifying indication:]{.underline} Free
fluid? Hematoma?
[Findings]{.underline}: A CT scan from 04/05/2017 is available for
comparison. No evidence of a significant hematoma or an aneurysm in the
right groin puncture site. No evidence of an arteriovenous fistula.
Normal flow profiles of the femoral artery and vein. No evidence of
thrombosis.
**Treatment and Progression:** Pre-admission occurred on 05/24/2019 due
to a medically justified increase in risk for DSA of intracranial
vessels. After appropriate preparation, the angiography was performed on
05/25/2019. The puncture site was managed with a pressure bandage. In
the color Doppler sonographic control the following day, neither a
puncture aneurysm nor an arteriovenous fistula was detected. On
05/25/2019, we discharged the patient in good subjective condition for
your outpatient follow-up care.
**Current Recommendations:** Outpatient follow-up
**Lab results:**
**Parameter** **Reference Range** **Result**
----------------------- --------------------- -------------
Sodium 136-145 mEq/L 141 mEq/L
Potassium 3.5-4.5 mEq/L 4.9 mEq/L
Chloride 98-107 mEq/L 100 mEq/L
Osmolality 280-300 mOsm/kg 290 mOsm/kg
Glucose in Fluoride 60-110 mg/dL 76 mg/dL
Creatinine (Jaffé) 0.70-1.20 mg/dL 0.98 mg/dL
CRP \< 5.0 mg/L 4.5 mg/L
Triglycerides \< 150 mg/dL 119 mg/dL
Creatine Kinase \< 190 U/L 142 U/L
Free Triiodothyronine 2.00-4.40 ng/L 3.25 ng/L
Free Thyroxine 9.30-17.00 ng/L 14.12 ng/L
TSH Basal 0.27-4.20 mU/L 1.65 mU/L
Hemoglobin 13.5-17.0 g/dL 14.3 g/dL
Hematocrit 39.5-50.5% 43.4%
Erythrocytes 4.3-5.8 M/uL 5.6 M/uL
Leukocytes 3.90-10.50 K/uL 10.25 K/uL
Platelets 150-370 K/uL 198 K/uL
MCV 80.0-99.0 fL 83.2 fL
MCH 27.0-33.5 pg 28.1 pg
MCHC 31.5-36.0 g/dL 33.4 g/dL
MPV 7.0-12.0 fL 11.6 fL
RDW-CV 11.5-15.0% 13.5%
Quick \> 78% 90%
INR \< 1.25 1.07
aPTT 25.0-38.0 sec 36.1 sec
### Introduction
Language understanding is a task that has proven difficult to automate because, among other factors, much of the information needed for the correct interpretation of an utterance is not explicit in text BIBREF0. This contrasts with how natural language understanding is for humans, who cope easily with information absent from the text, using common sense and background knowledge such as, for instance, typical spatial relations between objects.

From another perspective, it is well known that the visual modality provides information complementary to that in the text. In fact, recent advances in deep learning have led to significant progress in computer vision and natural language processing on tasks that involve visual and textual understanding, including Image Captioning BIBREF1, Visual Question Answering BIBREF2, and Visual Machine Translation BIBREF3, among others.

On the other hand, progress in language understanding has been driven by datasets which measure the quality of sentence representations, especially those where inference tasks are performed on top of sentence representations, including textual entailment BIBREF4, BIBREF5 and semantic textual similarity (STS). In STS BIBREF6, for instance, pairs of sentences have been annotated with similarity scores, with top scores for semantically equivalent sentences and bottom scores for completely unrelated sentences. STS provides a unified framework for extrinsic evaluation of multiple semantic aspects such as compositionality and phrase similarity. Contrary to related tasks such as textual entailment and paraphrase detection, STS incorporates the notion of graded semantic similarity between the pair of textual sentences and is symmetric.
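As a minimal text-only illustration of the STS setting, one crude baseline scores a sentence pair by the cosine similarity of averaged word vectors, mapped onto the 0-5 scale. The tiny 3-dimensional vectors below are made-up stand-ins for pre-trained embeddings such as GloVe, not the models evaluated in this paper.

```python
import numpy as np

# Toy word vectors standing in for pre-trained embeddings; the values
# are illustrative, not taken from any real embedding table.
WORD_VECS = {
    "a":    np.array([0.1, 0.0, 0.2]),
    "dog":  np.array([0.9, 0.1, 0.3]),
    "cat":  np.array([0.8, 0.2, 0.3]),
    "runs": np.array([0.2, 0.9, 0.1]),
}

def sentence_vec(tokens):
    """Average the word vectors of a tokenized sentence."""
    return np.mean([WORD_VECS[t] for t in tokens], axis=0)

def sts_score(tokens_a, tokens_b):
    """Cosine similarity of the two sentence vectors, rescaled to [0, 5]."""
    va, vb = sentence_vec(tokens_a), sentence_vec(tokens_b)
    cos = float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
    return 5.0 * (cos + 1.0) / 2.0  # crude map of [-1, 1] onto the 0-5 STS scale

score = sts_score(["a", "dog", "runs"], ["a", "cat", "runs"])
print(round(score, 2))
```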
In this paper we extend STS to the visual modality and present Visual Semantic Textual Similarity (vSTS), a task and dataset which make it possible to study whether better sentence representations can be built when having access to the corresponding images, in contrast with having access to the text alone. Similar to STS, annotators were asked to score the similarity between two items, but in this case each item comprises an image and a textual caption. Systems need to predict the human score. Figure FIGREF1 shows an instance in the dataset, with similarity scores in the captions. The example illustrates the need to re-score the similarity values, as the text-only similarity is not applicable to the multimodal version of the dataset: the annotators return a low similarity when using only the text, while, when having access to the corresponding image, they return a high similarity. Although a dataset for multimodal inference exists (visual textual entailment BIBREF7), that dataset reused the text-only inference labels.

The vSTS dataset aims to become a standard benchmark for testing the contribution of visual information when evaluating the similarity of sentences and the quality of multimodal representations, allowing the complementarity of visual and textual information for improved language understanding to be tested. Although multimodal tasks such as image captioning, visual question answering and visual machine translation already show that the combination of both modalities can be used effectively, those tasks do not separately benchmark the inference capabilities of multimodal visual and textual representations. We evaluate a variety of well-known textual, visual and multimodal representations in supervised and unsupervised scenarios, and systematically explore whether visual content is useful for sentence similarity.
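Systems in this setting are typically compared by correlating their predicted similarities with the human scores. A minimal sketch of that evaluation follows; the score arrays are made-up examples, not data from the vSTS dataset.

```python
import numpy as np

def pearson(pred, gold):
    """Pearson correlation between predicted and gold similarity scores."""
    pred, gold = np.asarray(pred, float), np.asarray(gold, float)
    pc = pred - pred.mean()  # center both score vectors
    gc = gold - gold.mean()
    return float((pc @ gc) / (np.linalg.norm(pc) * np.linalg.norm(gc)))

# Made-up predictions vs. human annotations on a toy set of pairs:
gold = [0.5, 2.1, 3.0, 4.8]
pred = [0.7, 1.9, 3.2, 4.5]
print(round(pearson(pred, gold), 3))
```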
For text, we studied pre-trained word embeddings such as GloVe BIBREF8, pre-trained language models like GPT-2 and BERT BIBREF9, BIBREF10, sentence representations fine-tuned on an entailment task like USE BIBREF11, and textual representations pre-trained on a multimodal caption-retrieval task like VSE++ BIBREF12. For image representation we use a model pre-trained on ImageNet (ResNet BIBREF13). In order to combine visual and textual representations we used concatenation and learned simple projections. Our experiments show that the text-only models are outperformed by their multimodal counterparts when adding visual representations, with up to 24% error reduction.

Our contributions are the following: (1) we present a dataset which allows evaluating visual/textual representations on an inference task; the dataset is publicly available under a free license; (2) our results show, for the first time, that the addition of image representations allows better inference; (3) the best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned on a text-only inference task like USE; (4) the improvement when using image representations is observed both when computing the similarity directly from multimodal representations and when training siamese networks. At the same time, the improvement holds for all textual representations, even those fine-tuned on a similarity task.

### Related Work
The task of Visual Semantic Textual Similarity stems from previous work on textual inference tasks. In textual entailment, given a textual premise and a textual hypothesis, systems need to decide whether the first entails the second, whether they are in contradiction, or neither BIBREF4. Popular datasets include the Stanford Natural Language Inference dataset BIBREF5. As an alternative to entailment, STS datasets comprise pairs of sentences which have been annotated with similarity scores. STS systems are usually evaluated on the STS benchmark dataset BIBREF6. In this paper we present an extension of STS, so we describe the task in more detail in the next section.

Textual entailment has recently been extended with visual information. A dataset for visual textual entailment was presented in BIBREF7. Even though the task differs from its text-only counterpart, they reused the text-only inference ground-truth labels without re-annotating them; in fact, they annotated only a small sample to show that the labels change. In addition, their dataset tested pairs of text snippets referring to a single image, so it was only useful for testing grounding techniques, not for measuring the complementarity of visual and textual representations. The reported results did not show that grounding improves results, while our study shows that the inference capabilities of multimodal visual and textual representations improve over text-only representations. In related work, BIBREF14 propose visual entailment, where the premise is an image and the hypothesis is textual. The chosen setting does not allow testing the contribution of multimodal representations with respect to unimodal ones. The complementarity of visual and text representations for improved language understanding was first proven on word representations, where word embeddings were combined with visual or perceptual input to produce multimodal representations BIBREF15.
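One simple way to build such a multimodal representation, in line with the concatenation-plus-projection combination used in this paper, can be sketched as follows. The dimensions and the random weight matrix below are illustrative stand-ins for learned parameters, not values from our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(text_vec, image_vec, W):
    """Concatenate a textual and a visual representation and map the
    result into a joint multimodal space with a (learned) linear map W."""
    joint = np.concatenate([text_vec, image_vec])
    return W @ joint

# Illustrative dimensions: a 4-d text vector (e.g., a pooled sentence
# embedding) and a 3-d image vector (e.g., pooled ResNet features).
text_vec = rng.standard_normal(4)
image_vec = rng.standard_normal(3)
W = rng.standard_normal((5, 7))  # random stand-in for a trained projection

multimodal = project(text_vec, image_vec, W)
print(multimodal.shape)
```

In the supervised setting of the paper, a projection like `W` would be trained, e.g., inside a siamese network, rather than drawn at random.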
The task of Visual Semantic Textual Similarity is also related to other multimodal tasks such as Image Captioning BIBREF16, BIBREF17, Text-Image Retrieval BIBREF18, BIBREF19 and Visual Question Answering BIBREF2. Image Captioning aims to generate a description of a given image. The task is related to ours in that it requires an understanding of the scene depicted in the image, so that the system can generate an accurate description of it. Unlike vSTS, image captioning is a generation task in which evaluation is challenging and unclear, as the defined automatic metrics are somewhat problematic BIBREF20. On the other hand, the Text-Image Retrieval task requires finding similarities and differences of the items in two modalities, so that relevant and irrelevant texts and images can be distinguished with regard to the query. Apart from not checking inference explicitly, the other main difference with respect to vSTS is that, in retrieval, items are ranked from most to least similar, whereas the vSTS task consists in producing an accurate real-valued similarity score.

A comprehensive overview is out of the scope of this paper, and thus we focus on the most related vision-and-language tasks; we refer the reader to BIBREF21 for a survey on vision and language research. Many of these tasks can be considered as extensions of previously existing NLP tasks. For instance, Image Captioning can be seen as an extension of conditional language modeling BIBREF22 or natural language generation BIBREF23, whereas Visual Question Answering is a natural counterpart of traditional Question Answering in NLP. Regarding multimodal and unimodal representation learning, convolutional neural networks (CNNs) have become the standard architecture for generating representations for images BIBREF24. Most of these models learn transferable general image features in tasks such as image classification, detection, semantic segmentation, and action recognition.
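Global image representations of the kind produced by such CNNs are typically obtained by pooling the final convolutional feature map into a single vector. A minimal numpy sketch follows; the feature map is random, standing in for real CNN activations, and the 512-channel 7x7 shape is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative final convolutional feature map of a CNN such as ResNet:
# 512 channels over a 7x7 spatial grid (values are random stand-ins).
feature_map = rng.standard_normal((512, 7, 7))

def global_average_pool(fmap):
    """Collapse the spatial dimensions, yielding one value per channel --
    the usual transferable global image representation."""
    return fmap.mean(axis=(1, 2))

image_vec = global_average_pool(feature_map)
print(image_vec.shape)
```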
The most widely used transferable global image representations are learned with deep CNN architectures such as AlexNet BIBREF25, VGG BIBREF26, Inception-v3 BIBREF27, and ResNet BIBREF13 using large datasets such as ImageNet BIBREF1, MSCOCO BIBREF28 and Visual Genome BIBREF29. Recently, Graph Convolutional Networks (GCN) have shown promise for distilling multimodal representations from multiple input types BIBREF30. Language representation is mostly done with pretrained word embeddings like GloVe BIBREF8 and sequence learning techniques such as Recurrent Neural Networks (RNN) BIBREF31. Recently, self-attention approaches like Transformers BIBREF32 provided transferable models (BERT, GPT-2, among others BIBREF9, BIBREF10) that significantly improved the state of the art on many NLP tasks. Alternatively, sentence representations have been fine-tuned on an entailment task BIBREF11. We present those used in our work in more detail below. ### The Visual STS Dataset
STS assesses the degree to which two sentences are semantically equivalent to each other. The annotators measure the similarity between sentences, with higher scores for more similar sentences. The annotations of similarity were guided by the scale in Table TABREF4, ranging from 0 for no meaning overlap to 5 for meaning equivalence. Intermediate values reflect interpretable levels of partial overlap in meaning. In this work, we extend the STS task with images, providing visual information that models can use, and assess how much visual content can contribute to a language understanding task. The input of the task now consists of two items, each comprising an image and its corresponding caption. In the same way as in STS, systems need to score the similarity of the sentences, now with the help of the images. Figure FIGREF1 shows an example of an instance in the dataset. In previous work reported in a non-archival workshop paper BIBREF33, we presented a preliminary dataset which used the text-only ground-truth similarity scores. The 819 pairs were extracted from a subset of the STS benchmark, more specifically, the so-called STS-images subset, which contains pairs of captions with access to images from PASCAL VOC-2008 BIBREF34 and Flickr-8K BIBREF35. Our manual analysis, including examples like Figure FIGREF1, showed that in many cases the text-only ground truth was not valid, so we decided to re-annotate the dataset, this time showing the images in addition to the captions (the methodology is identical to the AMT annotation method mentioned below). The correlation of the new annotations with regard to the old ones was high (0.9$\rho $), showing that the change in scores was not drastic, but that annotations did differ. The annotators tended to return higher similarity scores, as the mean similarity score across the dataset increased from 1.7 to 2.1. The inter-tagger correlation was comparable to that of the text-only task, showing that the new annotation task was well-defined.
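The comparison between the old (text-only) and new (image-aided) annotations rests on the Pearson correlation coefficient. The sketch below computes it from scratch; the two score lists are hypothetical toy data, not the actual 819-pair annotations.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical similarity scores: text-only annotation vs. re-annotation
# with access to the images (the latter tends to be slightly higher).
old_scores = [1.0, 2.0, 0.5, 3.5, 4.0]
new_scores = [1.7, 2.2, 1.1, 3.8, 4.6]
rho = pearson(old_scores, new_scores)
```

A high `rho` with a higher mean in `new_scores` mirrors the paper's finding: the scores shifted upward without drastically changing the ranking.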
From another perspective, the fact that we could only extract 819 pairs from existing STS datasets showed the need to sample new pairs from other image-caption datasets. In order to be effective in measuring the quality of multimodal representations, we defined the following desiderata for the new dataset: (1) Following STS datasets, the similarity values need to be balanced, showing a uniform distribution; (2) Paired images have to be different to avoid making the task trivial, as hand analysis of image-caption datasets showed that two captions of the same image tended to be paraphrases of each other; (3) The images should not be present in more than one instance, to avoid biases in the visual side; (4) It has to contain a wide variety of images so we can draw stronger conclusions. The preliminary dataset fulfilled 2 and 3, but the dataset was skewed towards low similarity values and the variety was limited. ### The Visual STS Dataset ::: Data Collection
The data collection of sentence-image pairs comprised several steps, including the selection of pairs to be annotated, the annotation methodology, and a final filtering stage. ### The Visual STS Dataset ::: Data Collection ::: 1. Sampling data for manual annotation.
We make use of two well-known image-caption datasets. On the one hand, the Flickr30K dataset BIBREF36, which has about 30K images with 5 manually generated captions per image. On the other hand, we use the Microsoft COCO dataset BIBREF28, which contains more than 120K images and 5 captions per image. Using both sources we hope to cover a wide variety of images. In order to select pairs of instances, we did two sampling rounds. The goal of the first round is to gather a large number of varied image pairs with their captions, which should contain interesting pairs. We started by sampling images. We then combined two ways of sampling pairs of images. In the first, we generated pairs by sampling the images randomly. This way, we ensure a higher variety of paired scenes, but presumably two captions paired at random will tend to have very low similarity. In the second, we paired images taking into account their visual similarity, ensuring the selection of related scenes with a higher similarity rate. We used the cosine distance of the top layer of a pretrained ResNet-50 BIBREF13 to compute the similarity of images. We collected an equal number of pairs for the random and the visual similarity strategies, gathering, in total, $155,068$ pairs. As each image has 5 captions, we had to select one caption for each image, and we decided to select the two captions (one per image) with the highest word overlap. This way, we get more balanced samples in terms of caption similarity. The initial sampling created thousands of pairs that were skewed towards very low similarity values. Given that manual annotation is a costly process, and with the goal of having a balanced dataset, we used an automatic similarity system to score all the pairs. This text-only similarity system is an ensemble of feature-based machine learning systems that uses a large variety of distance and machine-translation based features. The model was evaluated on a subset of the STS benchmark dataset BIBREF6 and compared favorably to other baseline models.
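The similarity-driven pairing and the caption-selection steps can be sketched as follows. The image feature vectors and captions are hypothetical toy stand-ins (real features come from the top layer of a pretrained ResNet-50), and word overlap is computed here as Jaccard overlap of the token sets, which is one plausible reading of the criterion.

```python
import math
from itertools import combinations

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def word_overlap(c1, c2):
    """Jaccard overlap between the token sets of two captions."""
    w1, w2 = set(c1.lower().split()), set(c2.lower().split())
    return len(w1 & w2) / len(w1 | w2)

# Hypothetical image features (stand-ins for ResNet top-layer vectors)
# paired with candidate captions (5 per image in the real datasets).
images = {
    "img1": ([0.9, 0.1, 0.2], ["a dog runs on grass", "a brown dog outside"]),
    "img2": ([0.8, 0.2, 0.3], ["a dog plays on a lawn", "a pet running outdoors"]),
    "img3": ([0.0, 0.9, 0.1], ["a plane in the sky", "an airplane flying high"]),
}

# Similarity-driven pairing: pick the most visually similar image pair.
best_pair = max(combinations(images, 2),
                key=lambda p: cosine(images[p[0]][0], images[p[1]][0]))

# For the chosen image pair, keep the two captions (one per image)
# with the highest word overlap.
i, j = best_pair
best_caps = max(((a, b) for a in images[i][1] for b in images[j][1]),
                key=lambda ab: word_overlap(*ab))
```

Random pairing would simply sample two keys of `images` uniformly instead of taking the `max` over cosine similarity.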
As this model is very different from current deep learning techniques, it should not bias the dataset sampling in a way which influences current similarity systems. The automatic scores were used to sample the final set of pairs as follows. We defined five similarity ranges ($(0, 1], \ldots ,(4, 5]$) and randomly selected the same number of pairs per range from the initial paired sample. We set a maximum sample size of 3000 instances (i.e., 600 instances per range). Given that the high similarity range had fewer than 600 instances, we collected a total of 2639 potential text-image candidate pairs for manual annotation. Figure FIGREF8 shows that the proposed methodology samples an approximately uniform distribution, with the exception of the higher similarity values (left and middle plots). In addition, we show that the lower predicted similarities mainly come from random sampling, whereas, as expected, the higher ones come from similar images. ### The Visual STS Dataset ::: Data Collection ::: 2. Manual annotations.
In order to annotate the sample of 2639 pairs, we used Amazon Mechanical Turk (AMT). Crowdworkers followed the same instructions as previous STS annotation campaigns BIBREF6, very similar to those in Table TABREF4. Annotators needed to focus on textual similarity with the aid of the aligned images. We got up to 5 scores per item, and we discarded annotators that showed low correlation with the rest of the annotators ($\rho < 0.75$). In total, 56 annotators took part. On average each crowdworker annotated 220 pairs, with amounts ranging from 19 to 940 annotations. Regardless of the annotation amounts, most of the annotators showed high correlations with the rest of the participants. We computed the annotation correlation by aggregating each annotator's Pearson correlation with the averaged similarity of the other annotators. The annotations show high correlation among the crowdworkers ($\rho = 0.89$ $\pm 0.01$), comparable to that of text-only STS datasets. Table TABREF10 shows the average item similarity and item disagreement in the annotation. We defined item disagreement as the standard deviation of the annotated similarity values. The low average similarity can be explained by the high number of zero-similarity pairs. Item disagreement is moderately low (about 0.6 points out of 5), which is in accordance with the high correlation between the annotators. ### The Visual STS Dataset ::: Data Collection ::: 3. Selection of difficult examples.
In preliminary experiments, the evaluation of two baseline models, word overlap and the ensemble system mentioned before, showed that the sampling strategy introduced a large number of trivial examples. For example, the word overlap system attained $0.83$ $\rho $. This high correlation could be the result of using word overlap in the first sampling round. In order to create a more challenging dataset in which to measure the effectiveness of multimodal representations, we defined an easiness metric to filter out some of the easy examples from the annotated dataset. We defined easiness as the amount of agreement an example contributes with respect to the whole dataset. Taking the inner product of the Pearson correlation formula as basis, we measure the easiness of an annotated example $i$ as follows: $e_{i} = \frac{o_{i} - \overline{o}}{s_{o}} \cdot \frac{gs_{i} - \overline{gs}}{s_{gs}}$, where $o_{i}$ is the word-overlap similarity of the $i$-th pair, $\overline{o}$ is the mean overlap similarity in the dataset, and $s_{o}$ is the standard deviation. Similarly, variable $gs_{i}$ is the gold-standard value of the $i$-th pair, and $\overline{gs}$ and $s_{gs}$ are the mean and standard deviation of gold values in the dataset, respectively. We removed the 30% easiest examples, creating a more challenging dataset of 1858 pairs and reducing $\rho $ to $0.57$ for the word-overlap model, and to $0.66$ (from $0.85$) for the ML-based approach. ### The Visual STS Dataset ::: Dataset Description
The full dataset comprises both the sample mentioned above and the 819 pairs from our preliminary work, totalling 2677 pairs. Figure FIGREF14 shows the final item similarity distribution. Although the distribution is skewed towards lower similarity values, we consider that all the similarity ranges are sufficiently well covered. The average similarity of the dataset is $1.9$ with a standard deviation of $1.36$ points. The dataset contains 335 zero-valued pairs out of the 2677 instances, which partly explains the lower average similarity. ### Evaluation of Representation Models
The goal of the evaluation is to explore whether representation models that have access to images, instead of text alone, have better inference abilities. We consider the following models. ResNet BIBREF13 is a deep network of 152 layers in which residual representation functions are learned instead of learning the signal representation directly. The model is trained over 1.2 million images of ImageNet, the ILSVRC subset of 1000 image categories. We use the top layer of a pretrained ResNet-152 model to represent the images associated with the text. Each image is represented with a vector of 2048 dimensions. GloVe. The Global Vector model BIBREF8 is a log-linear model trained to encode semantic relationships between words as vector offsets in the learned vector space, combining global matrix factorization and local context window methods. Since GloVe is a word-level vector model, we build sentence representations with the mean of the vectors of the words composing the sentence. The pre-trained GloVe model considered in this paper is the 6B-300d one, with a vocabulary of 400k words, 300-dimension vectors and trained on a dataset of 6 billion tokens. BERT. The Bidirectional Encoder Representations from Transformers model BIBREF9 implements a novel methodology based on the so-called masked language model, which randomly masks some of the tokens from the input and predicts the original vocabulary id of each masked word based only on its context. The BERT model used in our experiments is BERT-Large Uncased (24-layer, 1024-hidden, 16-heads, 340M parameters). In order to obtain the sentence-level representation we extract the token embeddings of the last layer and compute the mean vector, yielding a vector of 1024 dimensions. GPT-2. The Generative Pre-Training-2 model BIBREF10 is a language model based on the transformer architecture, which is trained on the task of predicting the next word, given all the previous words occurring in some text.
In the same manner as with BERT and GloVe, we extract the token embeddings of the last layer and compute the mean vector to obtain the sentence-level representation of 768 dimensions. The GPT-2 model used in our experiments was trained on a very large corpus of about 40 GB of text data and has 1.5 billion parameters. USE. The Universal Sentence Encoder BIBREF11 is a model for encoding sentences into embedding vectors, specifically designed for transfer learning in NLP. Based on a deep averaging network encoder, the model is trained for varying text lengths, such as sentences, phrases or short paragraphs, and on a variety of semantic tasks including STS. The encoder returns a sentence vector of 512 dimensions. VSE++. The Visual-Semantic Embedding BIBREF12 is a model trained for image-caption retrieval. The model learns a joint space of aligned images and captions. The model improves on the original introduced by BIBREF37, and combines a ResNet-152 over images with a bidirectional Recurrent Neural Network (GRU) over the sentences. Texts and images are projected onto the joint space, obtaining representations of 1024 dimensions for both images and texts. We used the projections of images and texts in our experiments. The VSE++ model used in our experiments was pre-trained on the Microsoft COCO dataset BIBREF28 and the Flickr30K dataset BIBREF36. Table TABREF15 summarizes the sentence and image representations used in the evaluation. ### Evaluation of Representation Models ::: Experiments ::: Experimental Setting.
We split the vSTS dataset into training, validation and test partitions, sampling at random and preserving the overall score distributions. In total, we use 1338 pairs for training, 669 for validation, and the remaining 670 pairs for the final test. As in the STS task, we use the Pearson correlation coefficient ($\rho $) as the evaluation metric. ### Evaluation of Representation Models ::: Experiments ::: STS models.
Our goal is to keep similarity models as simple as possible in order to directly evaluate textual and visual representations, avoiding as much as possible the influence of the parameters that intertwine when learning a particular task. We defined two scenarios: supervised and unsupervised. In the supervised scenario we train a Siamese Regression model in a similar way to that presented in BIBREF38. Given a sentence/image pair, we wish to predict a real-valued similarity in some range $[1,K]$, with $K=5$ in our experiments. We first produce sentence/image representations $h_{L}$ and $h_{R}$ for each sentence in the pair using any of the unimodal models described above, or using multimodal representations as explained below. Given these representations, we predict the similarity score $o$ using a regression model that takes both the distance and angle between the pair ($h_{L}$, $h_{R}$). Note that the concatenation of the distance and angle features ($[h_{\times}, h_{+}]$, i.e., the element-wise product and the absolute difference of $h_{L}$ and $h_{R}$) yields a $2 * d$-dimensional vector. The resulting vector is used as input for the non-linear hidden layer ($h_{s}$) of the model. Contrary to BIBREF38, we empirically found that the estimation of a continuous value worked better than learning a softmax distribution over $[1,K]$ integer values. The loss function of our model is the Mean Square Error (MSE), which is the most commonly used regression loss function. In the unsupervised scenario similarity is computed as the cosine of the produced $h_{L}$ and $h_{R}$ sentence/image representations. ### Evaluation of Representation Models ::: Experiments ::: Multimodal representation.
We combined textual and image representations in two simple ways. The first method is the concatenation of the text and image representations (concat). Before concatenation we apply L2 normalization to each of the modalities. The second method is to learn a common space for the two modalities before concatenation (project). The projection of each modality learns a space of $d$ dimensions, so that $h_{1}, h_{2} \in \mathbb {R}^{d}$. Once the multimodal representation ($h_{m}$) is produced for the left and right pairs, the vectors are directly plugged into the regression layers. Projections are learned end-to-end with the regression layers, with MSE as the loss function. ### Evaluation of Representation Models ::: Experiments ::: Hyperparameters and training details.
We use the validation set to learn the parameters of the supervised models, and to carry out an exploration of the hyperparameters. We train each model a maximum of 300 epochs and apply an early-stopping strategy with a patience of 25 epochs. For early stopping we monitor the MSE loss value on validation. For the rest of the hyperparameters, we run a grid search. We explore learning rate values (0.0001, 0.001, 0.01, 0.05), L2 regularization weights (0.0, 0.0001, 0.001, 0.01), and different hidden layer ($h_{s}$) dimensions (50, 100, 200, 300). In addition, we activate and deactivate batch normalization in each layer for each hyperparameter selection. ### Evaluation of Representation Models ::: Results ::: The unsupervised scenario.
Table TABREF26 reports the results using the item representations directly. We report results over the train and dev partitions for completeness, but note that none of them was used to tune the models. As can be seen, multimodal representations consistently outperform their text-only counterparts. This confirms that, overall, visual information is helpful in the semantic textual similarity task and that image and sentence representations are complementary. For example, the bert model improves more than 13 points when the visual information provided by resnet is concatenated. glove shows a similar or even larger improvement, with similar trends for use and vse++(text). Although vse++(img) shows better performance than resnet when applied alone, further experimentation showed lower complementarity when combining it with textual representations (e.g. $0.807\rho $ in test combining the textual and visual modalities of vse++). This is expected, as vse++(img) is pre-trained along with the textual part of the vse++ model on the same task. We do not show the combinations with vse++(img) due to the lack of space. Interestingly, the results show that images alone can predict caption similarity ($0.627 \rho $ in test). Actually, in this experimental setting resnet is on par with bert, which is the best purely unsupervised text-only model. Surprisingly, gpt-2 representations are not useful for text similarity tasks. This might be because language models tend to forget past context as they focus on predicting the next token BIBREF39. Due to the low results of gpt-2 we decided not to combine it with resnet. ### Evaluation of Representation Models ::: Results ::: The supervised scenario.
Table TABREF29 shows a similar pattern to that in the unsupervised setting. Overall, models that use a conjunction of multimodal features significantly outperform unimodal models, confirming, in a more competitive scenario, that adding visual information makes the STS task easier to learn. The gain of multimodal models is considerable compared to the text-only models. The most significant gain is obtained when glove features are combined with resnet: the model improves by more than $15.0$ points. In this case, the improvement over bert is lower, but still considerable at more than $4.0$ points. In the same vein as in the unsupervised scenario, features obtained with a resnet can be as competitive as some text-based models (e.g. BERT). gpt-2, as in the unsupervised scenario, does not produce useful representations for semantic similarity tasks. Surprisingly, the regression model with gpt-2 features is not able to learn anything on the training set. As in the previous scenario, we do not combine gpt-2 with visual features. The multimodal versions of vse++ and use are the best models among the supervised approaches. The textual versions of use and vse++ alone obtain very competitive results and outperform some of the multimodal models (the concatenated versions of glove and bert with resnet). The results might indicate that text-only models with sufficient training data can be on par with multimodal models but, still, when there is data scarcity, multimodal models can perform better as they have more information over the same data point. The comparison between projected and concatenated models shows that projected models attain slightly better results in two cases, but the best overall results are obtained when concatenating vse++(text) with resnet. Although concatenation proves to be a strong baseline, we expect that more sophisticated combination methods like grounding BIBREF40 will obtain larger gains in the future. ### Discussion ::: Contribution of the Visual Content
Table TABREF31 summarizes the contribution of the images over text representations on the test partition. The contribution is consistent across all text-based representations. We measure the absolute difference (Diff) and the error reduction (E.R) of each textual representation with respect to its multimodal counterpart. For the comparison we chose the best text model for each representation. As expected, we obtain the largest improvement ($22-26\%$ E.R) when text-based unsupervised models are combined with image representations. Note that unsupervised models are not learning anything about the specific task, so the more information in the representation, the better. In the case of use and vse++ the improvement is significant but not as large as for the purely unsupervised models. The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned on a text-only inference task like USE. The improvement is consistent for the supervised models. Contrary to the unsupervised setting, these models are designed to learn about the task, so there is usually less room for improvement. Still, glove+resnet shows an error reduction of $12.9$ in the test set. Finally, use and vse++ show smaller improvements when we add visual information to the model. Figure FIGREF32 displays some examples where visual information contributes to predicting similarity values accurately. The examples show cases where related descriptions are lexicalized in different ways, so a text-only model (glove) predicts low similarity between the captions (top two examples). Instead, the multimodal representation glove+resnet does have access to the image and can predict the similarity value of the two captions more accurately. The examples at the bottom show the opposite case, where a similar set of words is used to describe very different situations.
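The Diff and E.R columns can be reproduced with a short computation. The paper does not spell out the E.R formula, so the sketch below assumes error reduction is the fraction of the residual error $1 - \rho$ removed by the multimodal model; the correlation values used are hypothetical, not figures from the tables.

```python
def error_reduction(rho_text, rho_multimodal):
    """Percentage of the residual error (1 - rho) removed by the multimodal
    model. This is an assumed reading of the E.R column; the exact formula
    is not stated in the text."""
    return 100.0 * (rho_multimodal - rho_text) / (1.0 - rho_text)

# Hypothetical correlations for a text-only model and its multimodal counterpart
rho_text, rho_mm = 0.792, 0.843
diff = rho_mm - rho_text                # absolute difference (Diff)
er = error_reduction(rho_text, rho_mm)  # error reduction (E.R)
```

Under this reading, a fixed Diff yields a larger E.R the closer the text-only model already is to perfect correlation, which matches the intuition that supervised models leave less room for improvement.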
The text-based model overestimates the similarity of the captions, while the multimodal model corrects the output by looking at the differences between the images. In contrast, Figure FIGREF33 shows that images can also be misleading, and that the task is not as trivial as combining global representations of the image. In this case, related but different captions are supported by very similar images, and as a consequence, the multimodal model overestimates their similarity, while the text-only model focuses on the most discriminating piece of information in the text. ### Discussion ::: The effect of hyperparameters
Neural models are sensitive to hyperparameters, and we might think that results on the supervised scenario are due to hyperparameter optimization. Figure FIGREF35 displays the variability of $\rho $ in development across all hyperparameters. Due to space constraints we show text-only and multimodal concatenated models. Models are ordered by mean performance. As we can see, combined models show better mean performance, and all models except Glove exhibit tight variability. ### Conclusions and Future Work
The long term goal of our research is to devise multimodal representation techniques that improve current inference capabilities. We have presented a novel task, Visual Semantic Textual Similarity (vSTS), where the inference capabilities of visual, textual, and multimodal representations can be tested directly. The dataset has been manually annotated by crowdworkers with high inter-annotator correlation ($\rho = 0.89$). We tested several well-known textual and visual representations, which we combined using concatenation and projection. Our results show, for the first time, that the addition of image representations allows better inference. The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned on a text-only inference task like USE. The improvement when using image representations is observed both when computing the similarity directly from multimodal representations, and also when training siamese networks. In the future, we would like to ground the text representations to image regions BIBREF40, which could avoid misleading predictions due to the global representation of the image. Finally, we would like to extend the dataset with more examples, as we acknowledge that the training set is too small to train larger models.

This research was partially funded by the Basque Government excellence research group (IT1343-19), the NVIDIA GPU grant program, the Spanish MINECO (DeepReading RTI2018-096846-B-C21 (MCIU/AEI/FEDER, UE)) and project BigKnowledge (Ayudas Fundación BBVA a equipos de investigación científica 2018). Ander enjoys a PhD grant from the Basque Government.

Figure 1. A sample with two items, showing the influence of images when judging the similarity between two captions. While the similarity for the captions alone was annotated as low (1.8), when having access to the images, the annotators assigned a much higher similarity (4).
The similarity score ranges between 0 and 5.
Table 1. Similarity scores with the definition of each ordinal value. Definitions are the same as used in STS datasets [6]
Figure 2. Histograms of the similarity distribution in the 2639 sample, according to the automatic text-only system (left and middle plots), and the distribution of the similarity of each sampling strategy (rnd stands for random image sampling and sim stands for image similarity driven sampling).
Table 2. Overall item similarity and disagreement of the AMT annotations.
Figure 3. Similarity distribution of the visual STS dataset. Plots show three views of the data. Histogram of the similarity distribution of ground-truth values (left plot), sorted pairs according to their similarity (middle) and boxplot of the similarity values (right).
Table 3. Summary of the text and image representation models used.
Table 4. The unsupervised scenario: train, validation and test results of the unsupervised models.
Table 5. The supervised scenario: train, validation and test results of the supervised models.
Table 6. Contribution of images over text representations on test.
Figure 5. Example of misleading images. The high similarity of images makes the prediction of the multimodal model inaccurate, while the text only model focuses on the most discriminating piece of information. Note that gs refers to the gold standard similarity value, and text and mm refer to text-only and multimodal models, respectively.
Figure 4. Examples of the contribution of the visual information in the task. gs for gold standard similarity value, text and mm for text-only and multimodal models, respectively. On top, examples where related descriptions are lexicalized differently and images help. On the bottom, cases where similar words are used to describe different situations.
Figure 6. Variability of the supervised models regarding hyperparameter selection on development. The multimodal models use concatenation. Best viewed in colour.
|
largest improvement ($22-26\%$ E.R) when text-based unsupervised models are combined with image representations
|
How did Wayne's attitude change by the end of the article?
A. Wayne went from feeling excited to disgusted.
B. Wayne went from feeling excited to regretful for not listening to his parents.
C. Wayne went from feeling confident to feeling defeated.
D. Wayne went from feeling nervous to guilty.
|
THE RECRUIT BY BRYCE WALTON It was dirty work, but it would make him a man. And kids had a right to grow up—some of them! [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, July 1962. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Wayne, unseen, sneered down from the head of the stairs. The old man with his thick neck, thick cigar, evening highball, potgut and bald head without a brain in it. His slim mother with nervously polite smiles and voice fluttering, assuring the old man by her frailty that he was big in the world. They were squareheads one and all, marking moron time in a gray dream. Man, was he glad to break out. The old man said, "He'll be okay. Let him alone." "But he won't eat. Just lies there all the time." "Hell," the old man said. "Sixteen's a bad time. School over, waiting for the draft and all. He's in between. It's rough." Mother clasped her forearms and shook her head once slowly. "We got to let him go, Eva. It's a dangerous time. You got to remember about all these dangerous repressed impulses piling up with nowhere to go, like they say. You read the books." "But he's unhappy." "Are we specialists? That's the Youth Board's headache, ain't it? What do we know about adolescent trauma and like that? Now get dressed or we'll be late." Wayne watched the ritual, grinning. He listened to their purposeless noises, their blabbing and yakking as if they had something to say. Blab-blab about the same old bones, and end up chewing them in the same old ways. Then they begin all over again. A freak sideshow all the way to nowhere. Squareheads going around either unconscious or with eyes looking dead from the millennium in the office waiting to retire into limbo. How come he'd been stuck with parental images like that? 
One thing—when he was jockeying a rocket to Mars or maybe firing the pants off Asiatic reds in some steamy gone jungle paradise, he'd forget his punkie origins in teeveeland. But the old man was right on for once about the dangerous repressed impulses. Wayne had heard about it often enough. Anyway there was no doubt about it when every move he made was a restrained explosion. So he'd waited in his room, and it wasn't easy sweating it out alone waiting for the breakout call from HQ. "Well, dear, if you say so," Mother said, with the old resigned sigh that must make the old man feel like Superman with a beerbelly. They heard Wayne slouching loosely down the stairs and looked up. "Relax," Wayne said. "You're not going anywhere tonight." "What, son?" his old man said uneasily. "Sure we are. We're going to the movies." He could feel them watching him, waiting; and yet still he didn't answer. Somewhere out in suburban grayness a dog barked, then was silent. "Okay, go," Wayne said. "If you wanta walk. I'm taking the family boltbucket." "But we promised the Clemons, dear," his mother said. "Hell," Wayne said, grinning straight into the old man. "I just got my draft call." He saw the old man's Adam's apple move. "Oh, my dear boy," Mother cried out. "So gimme the keys," Wayne said. The old man handed the keys over. His understanding smile was strained, and fear flicked in his sagging eyes. "Do be careful, dear," his mother said. She ran toward him as he laughed and shut the door on her. He was still laughing as he whoomed the Olds between the pale dead glow of houses and roared up the ramp onto the Freeway. Ahead was the promising glitter of adventure-calling neon, and he looked up at the high skies of night and his eyes sailed the glaring wonders of escape. He burned off some rubber finding a slot in the park-lot. He strode under a sign reading Public Youth Center No. 
947 and walked casually to the reception desk, where a thin man with sergeant's stripes and a pansy haircut looked out of a pile of paperwork. "Where you think you're going, my pretty lad?" Wayne grinned down. "Higher I hope than a typewriter jockey." "Well," the sergeant said. "How tough we are this evening. You have a pass, killer?" "Wayne Seton. Draft call." "Oh." The sergeant checked his name off a roster and nodded. He wrote on a slip of paper, handed the pass to Wayne. "Go to the Armory and check out whatever your lusting little heart desires. Then report to Captain Jack, room 307." "Thanks, sarge dear," Wayne said and took the elevator up to the Armory. A tired fat corporal with a naked head blinked up at tall Wayne. Finally he said, "So make up your mind, bud. Think you're the only kid breaking out tonight?" "Hold your teeth, pop," Wayne said, coolly and slowly lighting a cigarette. "I've decided." The corporal's little eyes studied Wayne with malicious amusement. "Take it from a vet, bud. Sooner you go the better. It's a big city and you're starting late. You can get a cat, not a mouse, and some babes are clever hellcats in a dark alley." "You must be a genius," Wayne said. "A corporal with no hair and still a counterboy. I'm impressed. I'm all ears, Dad." The corporal sighed wearily. "You can get that balloon head ventilated, bud, and good." Wayne's mouth twitched. He leaned across the counter toward the shelves and racks of weapons. "I'll remember that crack when I get my commission." He blew smoke in the corporal's face. "Bring me a Smith and Wesson .38, shoulder holster with spring-clip. And throw in a Skelly switchblade for kicks—the six-inch disguised job with the double springs." The corporal waddled back with the revolver and the switchblade disguised in a leather comb case. He checked them on a receipt ledger, while Wayne examined the weapons, broke open the revolver, twirled the cylinder and pushed cartridges into the waiting chamber. 
He slipped the knife from the comb case, flicked open the blade and stared at its gleam in the buttery light as his mouth went dry and the refracted incandescence of it trickled on his brain like melted ice, exciting and scary. He removed his leather jacket. He slung the holster under his left armpit and tested the spring clip release several times, feeling the way the serrated butt dropped into his wet palm. He put his jacket back on and the switchblade case in his pocket. He walked toward the elevator and didn't look back as the corporal said, "Good luck, tiger." Captain Jack moved massively. The big stone-walled office, alive with stuffed lion and tiger and gunracks, seemed to grow smaller. Captain Jack crossed black-booted legs and whacked a cane at the floor. It had a head shaped like a grinning bear. Wayne felt the assured smile die on his face. Something seemed to shrink him. If he didn't watch himself he'd begin feeling like a pea among bowling balls. Contemptuously amused little eyes glittered at Wayne from a shaggy head. Shoulders hunched like stuffed sea-bags. "Wayne Seton," said Captain Jack as if he were discussing something in a bug collection. "Well, well, you're really fired up aren't you? Really going out to eat 'em. Right, punk?" "Yes, sir," Wayne said. He ran wet hands down the sides of his chinos. His legs seemed sheathed in lead as he bit inwardly at shrinking fear the way a dog snaps at a wound. You big overblown son, he thought, I'll show you but good who is a punk. They made a guy wait and sweat until he screamed. They kept a guy on the fire until desire leaped in him, ran and billowed and roared until his brain was filled with it. But that wasn't enough. If this muscle-bound creep was such a big boy, what was he doing holding down a desk? "Well, this is it, punk. You go the distance or start a butterfly collection." The cane darted up. A blade snicked from the end and stopped an inch from Wayne's nose. 
He jerked up a shaky hand involuntarily and clamped a knuckle-ridged gag to his gasping mouth. Captain Jack chuckled. "All right, superboy." He handed Wayne his passcard. "Curfew's off, punk, for 6 hours. You got 6 hours to make out." "Yes, sir." "Your beast is primed and waiting at the Four Aces Club on the West Side. Know where that is, punk?" "No, sir, but I'll find it fast." "Sure you will, punk," smiled Captain Jack. "She'll be wearing yellow slacks and a red shirt. Black hair, a cute trick. She's with a hefty psycho who eats punks for breakfast. He's butchered five people. They're both on top of the Undesirable list, Seton. They got to go and they're your key to the stars." "Yes, sir," Wayne said. "So run along and make out, punk," grinned Captain Jack. A copcar stopped Wayne as he started over the bridge, out of bright respectable neon into the murky westside slum over the river. Wayne waved the pass card, signed by Captain Jack, under the cop's quivering nose. The cop shivered and stepped back and waved him on. The Olds roared over the bridge as the night's rain blew away. The air through the open window was chill and damp coming from Slumville, but Wayne felt a cold that wasn't of the night or the wind. He turned off into a rat's warren of the inferiors. Lights turned pale, secretive and sparse, the uncared-for streets became rough with pitted potholes, narrow and winding and humid with wet unpleasant smells. Wayne's fearful exhilaration increased as he cruised with bated breath through the dark mazes of streets and rickety tenements crawling with the shadows of mysterious promise. He found the alley, dark, a gloom-dripping tunnel. He drove cautiously into it and rolled along, watching. His belly ached with expectancy as he spotted the sick-looking dab of neon wanly sparkling. FOUR ACES CLUB He parked across the alley. 
He got out and stood in shadows, digging the sultry beat of a combo, the wild pulse of drums and spinning brass filtering through windows painted black. He breathed deep, started over, ducked back. A stewbum weaved out of a bank of garbage cans, humming to himself, pulling at a rainsoaked shirt clinging to a pale stick body. He reminded Wayne of a slim grub balanced on one end. The stewbum stumbled. His bearded face in dim breaking moonlight had a dirty, greenish tinge as he sensed Wayne there. He turned in a grotesque uncoordinated jiggling and his eyes were wide with terror and doom. "I gotta hide, kid. They're on me." Wayne's chest rose and his hands curled. The bum's fingers drew at the air like white talons. "Help me, kid." He turned with a scratchy cry and retreated before the sudden blast of headlights from a Cad bulleting into the alley. The Cad rushed past Wayne and he felt the engine-hot fumes against his legs. Tires squealed. The Cad stopped and a teener in black jacket jumped out and crouched as he began stalking the old rummy. "This is him! This is him all right," the teener yelled, and one hand came up swinging a baseball bat. A head bobbed out of the Cad window and giggled. The fumble-footed rummy tried to run and plopped on wet pavement. The teener moved in, while a faint odor of burnt rubber hovered in the air as the Cad cruised in a slow follow-up. Wayne's breath quickened as he watched, feeling somehow blank wonder at finding himself there, free and breaking out at last with no curfew and no law but his own. He felt as though he couldn't stop anything. Living seemed directionless, but he still would go with it regardless, until something dropped off or blew to hell like a hot light-bulb. He held his breath, waiting. His body was tensed and rigid as he moved in spirit with the hunting teener, an omniscient shadow with a hunting license and a ghetto jungle twenty miles deep. The crawling stewbum screamed as the baseball bat whacked. 
The teener laughed. Wayne wanted to shout. He opened his mouth, but the yell clogged up somewhere, so that he remained soundless yet with his mouth still open as he heard the payoff thuds where the useless wino curled up with stick arms over his rheumy face. The teener laughed, tossed the bat away and began jumping up and down with his hobnailed, mail-order air force boots. Then he ran into the Cad. A hootch bottle soared out, made a brittle tink-tink of falling glass. "Go, man!" The Cad wooshed by. It made a sort of hollow sucking noise as it bounced over the old man twice. Then the finlights diminished like bright wind-blown sparks. Wayne walked over and sneered down at the human garbage lying in scummed rain pools. The smell of raw violence, the scent of blood, made his heart thump like a trapped rubber ball in a cage. He hurried into the Four Aces, drawn by an exhilarating vision ... and pursued by the hollow haunting fears of his own desires. He walked through the wavering haze of smoke and liquored dizziness and stood until his eyes learned the dark. He spotted her red shirt and yellow legs over in the corner above a murky lighted table. He walked toward her, watching her little subhuman pixie face lift. The eyes widened with exciting terror, turned even paler behind a red slash of sensuous mouth. Briefed and waiting, primed and eager for running, she recognized her pursuer at once. He sat at a table near her, watching and grinning and seeing her squirm. She sat in that slightly baffled, fearful and uncomprehending attitude of being motionless, as though they were all actors performing in a weirdo drama being staged in that smoky thick-aired dive. Wayne smiled with wry superiority at the redheaded psycho in a dirty T-shirt, a big bruiser with a gorilla face. He was tussling his mouse heavy. "What's yours, teener?" the slug-faced waiter asked. "Bring me a Crusher, buddyroo," Wayne said, and flashed his pass card. "Sure, teener." 
Red nuzzled the mouse's neck and made drooly noises. Wayne watched and fed on the promising terror and helplessness of her hunted face. She sat rigid, eyes fixed on Wayne like balls of frozen glass. Red looked up and stared straight at Wayne with eyes like black buttons imbedded in the waxlike skin of his face. Then he grinned all on one side. One huge hand scratched across the wet table top like a furious cat's. Wayne returned the challenging move but felt a nervous twitch jerk at his lips. A numbness covered his brain like a film as he concentrated on staring down Red the psycho. But Red kept looking, his eyes bright but dead. Then he began struggling it up again with the scared little mouse. The waiter sat the Crusher down. Wayne signed a chit; tonight he was in the pay of the state. "What else, teener?" "One thing. Fade." "Sure, teener," the waiter said, his breathy words dripping like syrup. Wayne drank. Liquored heat dripped into his stomach. Fire tickled his veins, became hot wire twisting in his head. He drank again and forced out a shaky breath. The jazz beat thumped fast and muted brass moaned. Drumpulse, stabbing trumpet raped the air. Tension mounted as Wayne watched her pale throat convulsing, the white eyelids fluttering. Red fingered at her legs and salivated at her throat, glancing now and then at Wayne, baiting him good. "Okay, you creep," Wayne said. He stood up and started through the haze. The psycho leaped and a table crashed. Wayne's .38 dropped from its spring-clip holster and the blast filled the room. The psycho screamed and stumbled toward the door holding something in. The mouse darted by, eluded Wayne's grasp and was out the door. Wayne went out after her in a laughing frenzy of release. He felt the cold strange breath of moist air on his sweating skin as he sprinted down the alley into a wind full of blowing wet. 
He ran laughing under the crazy starlight and glimpsed her now and then, fading in and out of shadows, jumping, crawling, running with the life-or-death animation of a wild deer. Up and down alleys, a rat's maze. A rabbit run. Across vacant lots. Through shattered tenement ruins. Over a fence. There she was, falling, sliding down a brick chute. He gained. He moved up. His labored breath pumped more fire. And her scream was a rejuvenation hypo in his blood. She quivered above him on the stoop, panting, her eyes afire with terror. "You, baby," Wayne gasped. "I gotcha." She backed into darkness, up there against the sagging tenement wall, her arms out and poised like crippled wings. Wayne crept up. She gave a squeaking sob, turned, ran. Wayne leaped into gloom. Wood cracked. He clambered over rotten lumber. The doorway sagged and he hesitated in the musty dark. A few feet away was the sound of loose trickling plaster, a whimpering whine. "No use running," Wayne said. "Go loose. Give, baby. Give now." She scurried up sagging stairs. Wayne laughed and dug up after her, feeling his way through debris. Dim moonlight filtered through a sagging stairway from a shattered skylight three floors up. The mouse's shadow floated ahead. He started up. The entire stair structure canted sickeningly. A railing ripped and he nearly went with it back down to the first floor. He heard a scream as rotten boards crumbled and dust exploded from cracks. A rat ran past Wayne and fell into space. He burst into the third-floor hallway and saw her half-falling through a door under the jagged skylight. Wayne took his time. He knew how she felt waiting in there, listening to his creeping, implacable footfalls. Then he yelled and slammed open the door. Dust and stench, filth so awful it made nothing of the dust. In the corner he saw something hardly to be called a bed. More like a nest. A dirty, lumpy pile of torn mattress, felt, excelsior, shredded newspapers and rags. 
It seemed to crawl a little under the moon-streaming skylight. She crouched in the corner panting. He took his time moving in. He snickered as he flashed the switchblade and circled it like a serpent's tongue. He watched what was left of her nerves go to pieces like rotten cloth. "Do it quick, hunter," she whispered. "Please do it quick." "What's that, baby?" "I'm tired running. Kill me first. Beat me after. They won't know the difference." "I'm gonna bruise and beat you," he said. "Kill me first," she begged. "I don't want—" She began to cry. She cried right up in his face, her wide eyes unblinking, and her mouth open. "You got bad blood, baby," he snarled. He laughed but it didn't sound like him and something was wrong with his belly. It was knotting up. "Bad, I know! So get it over with, please. Hurry, hurry." She was small and white and quivering. She moaned but kept staring up at him. He ripped off his rivet-studded belt and swung once, then groaned and shuffled away from her. He kept backing toward the door. She crawled after him, begging and clutching with both arms as she wriggled forward on her knees. "Don't run. Please. Kill me! It'll be someone else if you don't. Oh, God, I'm so tired waiting and running!" "I can't," he said, and sickness soured in his throat. "Please." "I can't, I can't!" He turned and ran blindly, half-fell down the cracking stairs. Doctor Burns, head of the readjustment staff at the Youth Center, studied Wayne with abstract interest. "You enjoyed the hunt, Seton? You got your kicks?" "Yes, sir." "But you couldn't execute them?" "No, sir." "They're undesirables. Incurables. You know that, Seton?" "Yes, sir." "The psycho you only wounded. He's a five-times murderer. And that girl killed her father when she was twelve. You realize there's nothing can be done for them? That they have to be executed?" "I know." "Too bad," the doctor said. "We all have aggressive impulses, primitive needs that must be expressed early, purged. 
There's murder in all of us, Seton. The impulse shouldn't be denied or suppressed, but educated . The state used to kill them. Isn't it better all around, Seton, for us to do it, as part of growing up? What was the matter, Seton?" "I—felt sorry for her." "Is that all you can say about it?" "Yes, sir." The doctor pressed a buzzer. Two men in white coats entered. "You should have got it out of your system, Seton, but now it's still in there. I can't turn you out and have it erupt later—and maybe shed clean innocent blood, can I?" "No, sir," Wayne mumbled. He didn't look up. "I'm sorry I punked out." "Give him the treatment," the doctor said wearily. "And send him back to his mother." Wayne nodded and they led him away. His mind screamed still to split open some prison of bone and lay bare and breathing wide. But there was no way out for the trapped. Now he knew about the old man and his poker-playing pals. They had all punked out. Like him.
|
C. Wayne went from feeling confident to feeling defeated.
|
What is most ironic about the conclusion of the story?
A. While Sammy is the least qualified to go into space, he was the only replacement for Phil
B. Everything that used to give Phil joy will now represent pain and suffering
C. Mary's fear of losing Phil became a self-fulfilling prophecy
D. Phil trained all of his life for one moment, and gave it all up within the period of one day
|
Transcriber's Note: This etext was produced from Astounding Science Fiction December 1955. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. BREAKAWAY BY STANLEY GIMBLE Illustrated by Freas She surely got her wish ... but there was some question about getting what she wanted. Phil Conover pulled the zipper of his flight suit up the front of his long, thin body and came into the living room. His face, usually serious and quietly handsome, had an alive, excited look. And the faint lines around his dark, deep-set eyes were accentuated when he smiled at his wife. "All set, honey. How do I look in my monkey suit?" His wife was sitting stiffly on the flowered couch that was still not theirs completely. In her fingers she held a cigarette burned down too far. She said, "You look fine, Phil. You look just right." She managed a smile. Then she leaned forward and crushed the cigarette in the ash tray on the maple coffee table and took another from the pack. He came to her and touched his hands to her soft blond hair, raising her face until she was looking into his eyes. "You're the most beautiful girl I know. Did I ever tell you that?" "Yes, I think so. Yes, I'm sure you did," she said, finishing the ritual; but her voice broke, and she turned her head away. Phil sat beside her and put his arm around her small shoulders. He had stopped smiling. "Honey, look at me," he said. "It isn't going to be bad. Honestly it isn't. We know exactly how it will be. If anything could go wrong, they wouldn't be sending me; you know that. I told you that we've sent five un-manned ships up and everyone came back without a hitch." She turned, facing him. There were tears starting in the corners of her wide, brown eyes, and she brushed them away with her hand. "Phil, don't go. Please don't. They can send Sammy. Sammy doesn't have a wife. Can't he go? They'd understand, Phil. Please!" 
She was holding his arms tightly with her hands, and the color had drained from her cheeks. "Mary, you know I can't back out now. How could I? It's been three years. You know how much I've wanted to be the first man to go. Nothing would ever be right with me again if I didn't go. Please don't make it hard." He stopped talking and held her to him and stroked the back of her head. He could feel her shoulders shaking with quiet sobs. He released her and stood up. "I've got to get started, Mary. Will you come to the field with me?" "Yes, I'll come to say good-by." She paused and dropped her eyes. "Phil, if you go, I won't be here when you get back—if you get back. I won't be here because I won't be the wife of a space pilot for the rest of my life. It isn't the kind of life I bargained for. No matter how much I love you, I just couldn't take that, Phil. I'm sorry. I guess I'm not the noble sort of wife." She finished and took another cigarette from the pack on the coffee table and put it to her lips. Her hand was trembling as she touched the lighter to the end of the cigarette and drew deeply. Phil stood watching her, the excitement completely gone from his eyes. "I wish you had told me this a long time ago, Mary," Phil said. His voice was dry and low. "I didn't know you felt this way about it." "Yes, you did. I told you how I felt. I told you I could never be the wife of a space pilot. But I don't think I ever really believed it was possible—not until this morning when you said tonight was the take-off. It's so stupid to jeopardize everything we've got for a ridiculous dream!" He sat down on the edge of the couch and took her hands between his. "Mary, listen to me," he said. "It isn't a dream. It's real. There's nothing means anything more to me than you do—you know that. But no man ever had the chance to do what I'm going to do tonight—no man ever. If I backed out now for any reason, I'd never be able to look at the sky again. I'd be through." 
She looked at him without seeing him, and there was nothing at all in her eyes. "Let's go, if you're still going," she finally said. They drove through the streets of the small town with its small bungalows, each alike. There were no trees and very little grass. It was a new town, a government built town, and it had no personality yet. It existed only because of the huge ship standing poised in the take-off zone five miles away in the desert. Its future as a town rested with the ship, and the town seemed to feel the uncertainty of its future, seemed ready to stop existing as a town and to give itself back to the desert, if such was its destiny. Phil turned the car off the highway onto the rutted dirt road that led across the sand to the field where the ship waited. In the distance they could see the beams of the searchlights as they played across the take-off zone and swept along the top of the high wire fence stretching out of sight to right and left. At the gate they were stopped by the guard. He read Phil's pass, shined his flashlight in their faces, and then saluted. "Good luck, colonel," he said, and shook Phil's hand. "Thanks, sergeant. I'll be seeing you next week," Phil said, and smiled. They drove between the rows of wooden buildings that lined the field, and he parked near the low barbed fence ringing the take-off zone. He turned off the ignition, and sat quietly for a moment before lighting a cigarette. Then he looked at his wife. She was staring through the windshield at the rocket two hundred yards away. Its smooth polished surface gleamed in the spotlight glare, and it sloped up and up until the eye lost the tip against the stars. "She's beautiful, Mary. You've never seen her before, have you?" "No, I've never seen her before," she said. "Hadn't you better go?" Her voice was strained and she held her hands closed tightly in her lap. "Please go now, Phil," she said. He leaned toward her and touched her cheek. 
Then she was in his arms, her head buried against his shoulder. "Good-by, darling," she said. "Wish me luck, Mary?" he asked. "Yes, good luck, Phil," she said. He opened the car door and got out. The noise of men and machines scurrying around the ship broke the spell of the rocket waiting silently for flight. "Mary, I—" he began, and then turned and strode toward the administration building without looking back. Inside the building it was like a locker room before the big game. The tension stood alone, and each man had the same happy, excited look that Phil had worn earlier. When he came into the room, the noise and bustle stopped. They turned as one man toward him, and General Small came up to him and took his hand. "Hello, Phil. We were beginning to think you weren't coming. You all set, son?" "Yes, sir, I'm all set, I guess," Phil said. "I'd like you to meet the Secretary of Defense, Phil. He's over here by the radar." As they crossed the room, familiar faces smiled, and each man shook his hand or touched his arm. He saw Sammy, alone, by the coffee urn. Sammy waved to him, but he didn't smile. Phil wanted to talk to him, to say something; but there was nothing to be said now. Sammy's turn would come later. "Mr. Secretary," the general said, "this is Colonel Conover. He'll be the first man in history to see the other side of the Moon. Colonel—the Secretary of Defense." "How do you do, sir. I'm very proud to meet you," Phil said. "On the contrary, colonel. I'm very proud to meet you. I've been looking at that ship out there and wondering. I almost wish I were a young man again. I'd like to be going. It's a thrilling thought—man's first adventure into the universe. You're lighting a new dawn of history, colonel. It's a privilege few men have ever had; and those who have had it didn't realize it at the time. Good luck, and God be with you." "Thank you, sir. I'm aware of all you say. It frightens me a little." 
The general took Phil's arm and they walked to the briefing room. There were chairs set up for the scientists and Air Force officers directly connected with the take-off. They were seated now in a semicircle in front of a huge chart of the solar system. Phil took his seat, and the last minute briefing began. It was a routine he knew by heart. He had gone over and over it a thousand times, and he only half listened now. He kept thinking of Mary outside, alone by the fence. The voice of the briefing officer was a dull hum in his ears. "... And orbit at 18,000-mph. You will then accelerate for the breakaway to 24,900-mph for five minutes and then free-coast for 116 hours until—" Phil asked a few questions about weather and solar conditions. And then the session was done. They rose and looked at each other, the same unanswered questions on each man's face. There were forced smiles and handshakes. They were ready now. "Phil," the general said, and took him aside. "Sir?" "Phil, you're ... you feel all right, don't you, son?" "Yes, sir. I feel fine. Why?" "Phil, I've spent nearly every day with you for three years. I know you better than I know myself in many ways. And I've studied the psychologist's reports on you carefully. Maybe it's just nervousness, Phil, but I think there's something wrong. Is there?" "No, sir. There's nothing wrong," Phil said, but his voice didn't carry conviction. He reached for a cigarette. "Phil, if there is anything—anything at all—you know what it might mean. You've got to be in the best mental and physical condition of your life tonight. You know better than any man here what that means to our success. I think there is something more than just natural apprehension wrong with you. Want to tell me?" Outside, the take-off zone crawled with men and machines at the base of the rocket. For ten hours, the final check-outs had been in progress; and now the men were checking again, on their own time. 
The thing they had worked toward for six years was ready to happen, and each one felt that he was sending just a little bit of himself into the sky. Beyond the ring of lights and moving men, on the edge of the field, Mary stood. Her hands moved slowly over the top of the fence, twisting the barbs of wire. But her eyes were on the ship. And then they were ready. A small group of excited men came out from the administration building and moved forward. The check-out crews climbed into their machines and drove back outside the take-off zone. And, alone, one man climbed the steel ladder up the side of the rocket—ninety feet into the air. At the top he waved to the men on the ground and then disappeared through a small port. Mary waved to him. "Good-by," she said to herself, but the words stuck tight in her throat. The small group at the base of the ship turned and walked back to the fence. And for an eternity the great ship stood alone, waiting. Then, from deep inside, a rumble came, increasing in volume to a gigantic roar that shook the earth and tore at the ears. Slowly, the first manned rocket to the Moon lifted up and up to the sky. For a long time after the rocket had become a tiny speck of light in the heavens, she stood holding her face in her hands and crying softly to herself. And then she felt the touch of a hand on her arm. She turned. "Phil! Oh, Phil." She held tightly to him and repeated his name over and over. "They wouldn't let me go, Mary," he said finally. "The general would not let me go." She looked at him. His face was drawn tight, and there were tears on his cheeks. "Thank God," she said. "It doesn't matter, darling. The only thing that matters is you didn't go." "You're right, Mary," he said. His voice was low—so low she could hardly hear him. "It doesn't matter. Nothing matters now." He stood with his hands at his sides, watching her. And then turned away and walked toward the car. THE END
|
C. Mary's fear of losing Phil became a self-fulfilling prophecy
|
Why is the narrator unafraid to work openly in the park among the leprechauns?
A. He feels that he and the leprechauns can protect themselves through cunning ways and physical strength
B. He doubts that his colleagues at the Center would ever venture outdoors to the park area
C. He knows that it is rare to find believers among his colleagues and fellow humans
D. He believes strongly in the importance of his collaboration with the leprechauns and is willing to take the risk of being discovered
|
Every writer must seek his own Flowery Kingdom in imagination's wide demesne, and if that search can begin and end on Earth his problem has been greatly simplified. In post-war Japan Walt Sheldon has found not only serenity, but complete freedom to write undisturbed about the things he treasures most. A one-time Air Force officer, he has turned to fantasy in his lighter moments, to bring us such brightly sparkling little gems as this. houlihan's equation by ... Walt Sheldon The tiny spaceship had been built for a journey to a star. But its small, mischievous pilots had a rendezvous with destiny—on Earth. I must admit that at first I wasn't sure I was hearing those noises. It was in a park near the nuclear propulsion center—a cool, green spot, with the leaves all telling each other to hush, be quiet, and the soft breeze stirring them up again. I had known precisely such a secluded little green sanctuary just over the hill from Mr. Riordan's farm when I was a boy. Now it was a place I came to when I had a problem to thrash out. That morning I had been trying to work out an equation to give the coefficient of discharge for the matter in combustion. You may call it gas, if you wish, for we treated it like gas at the center for convenience—as it came from the rocket tubes in our engine. Without this coefficient to give us control, we would have lacked a workable equation when we set about putting the first moon rocket around those extraordinary engines of ours, which were still in the undeveloped blueprint stage. I see I shall have to explain this, although I had hoped to get right along with my story. When you start from scratch, matter discharged from any orifice has a velocity directly proportional to the square root of the pressure-head driving it. But when you actually put things together, contractions or expansions in the gas, surface roughness and other factors make the velocity a bit smaller. 
At the terrible discharge speed of nuclear explosion—which is what the drive amounts to despite the fact that it is simply water in which nuclear salts have been previously dissolved—this small factor makes quite a difference. I had to figure everything into it—diameter of the nozzle, sharpness of the edge, the velocity of approach to the point of discharge, atomic weight and structure— Oh, there is so much of this that if you're not a nuclear engineer yourself it's certain to weary you. Perhaps you had better take my word for it that without this equation—correctly stated, mind you—mankind would be well advised not to make a first trip to the moon. And all this talk of coefficients and equations sits strangely, you might say, upon the tongue of a man named Kevin Francis Houlihan. But I am, after all, a scientist. If I had not been a specialist in my field I would hardly have found myself engaged in vital research at the center. Anyway, I heard these little noises in the park. They sounded like small working sounds, blending in eerily mysterious fashion with a chorus of small voices. I thought at first it might be children at play, but then at the time I was a bit absent-minded. I tiptoed to the edge of the trees, not wanting to deprive any small scalawags of their pleasure, and peered out between the branches. And what do you suppose I saw? Not children, but a group of little people, hard at work. There was a leader, an older one with a crank face. He was beating the air with his arms and piping: "Over here, now! All right, bring those electrical connections over here—and see you're not slow as treacle about it!" There were perhaps fifty of the little people. I was more than startled by it, too. I had not seen little people in—oh, close to thirty years. I had seen them first as a boy of eight, and then, very briefly again, on my tenth birthday. And I had become convinced they could never be seen here in America. I had never seen them so busy, either. 
They were building something in the middle of the glade. It was long and shiny and upright and a little over five feet in height. "Come along now, people!" said this crotchety one, looking straight at me. "Stop starin' and get to work! You'll not be needin' to mind that man standin' there! You know he can't see nor hear us!" Oh, it was good to hear the rich old tongue again. I smiled, and the foreman of the leprechauns—if that's what he was—saw me smile and became stiff and alert for a moment, as though suspecting that perhaps I actually could see him. Then he shrugged and turned away, clearly deeming such a thing impossible. I said, "Just a minute, friend, and I'll beg your pardon. It so happens I can see you." He whirled to face me again, staring open-mouthed. Then he said, "What? What's that, now?" "I can see you," I said. "Ohhh!" he said and put his palms to his cheekbones. "Saints be with us! He's a believer! Run everybody—run for your lives!" And they all began running, in as many directions as there were little souls. They began to scurry behind the trees and bushes, and a sloping embankment nearby. "No, wait!" I said. "Don't go away! I'll not be hurting you!" They continued to scurry. I knew what it was they feared. "I don't intend catching one of you!" I said. "Come back, you daft little creatures!" But the glade was silent, and they had all disappeared. They thought I wanted their crock of gold, of course. I'd be entitled to it if I could catch one and keep him. Or so the legends affirmed, though I've wondered often about the truth of them. But I was after no gold. I only wanted to hear the music of an Irish tongue. I was lonely here in America, even if I had latched on to a fine job of work for almost shamefully generous pay. You see, in a place as full of science as the nuclear propulsion center there is not much time for the old things. I very much wanted to talk to the little people. 
I walked over to the center of the glade where the curious shiny object was standing. It was as smooth as glass and shaped like a huge cigar. There were a pair of triangular fins down at the bottom, and stubby wings amidships. Of course it was a spaceship, or a miniature replica of one. I looked at it more closely. Everything seemed almost miraculously complete and workable. I shook my head in wonder, then stepped back from the spaceship and looked about the glade. I knew they were all hiding nearby, watching me apprehensively. I lifted my head to them. "Listen to me now, little people!" I called out. "My name's Houlihan of the Roscommon Houlihans. I am descended from King Niall himself—or so at least my father used to say! Come on out now, and pass the time o' day!" Then I waited, but they didn't answer. The little people always had been shy. Yet without reaching a decision in so many words I knew suddenly that I had to talk to them. I'd come to the glen to work out a knotty problem, and I was up against a blank wall. Simply because I was so lonely that my mind had become clogged. I knew that if I could just once hear the old tongue again, and talk about the old things, I might be able to think the problem through to a satisfactory conclusion. So I stepped back to the tiny spaceship, and this time I struck it a resounding blow with my fist. "Hear me now, little people! If you don't show yourselves and come out and talk to me, I'll wreck this spaceship from stem to stern!" I heard only the leaves rustling softly. "Do you understand? I'll give you until I count three to make an appearance! One!" The glade remained deathly silent. "Two!" I thought I heard a stirring somewhere, as if a small, brittle twig had snapped in the underbrush. " Three! " And with that the little people suddenly appeared. The leader—he seemed more wizened and bent than before—approached me slowly and warily as I stood there. The others all followed at a safe distance. 
I smiled to reassure them and then waved my arm in a friendly gesture of greeting. "Good morning," I said. "Good morning," the foreman said with some caution. "My name is Keech." "And mine's Houlihan, as I've told you. Are you convinced now that I have no intention of doing you any injury?" "Mr. Houlihan," said Keech, drawing a kind of peppered dignity up about himself, "in such matters I am never fully convinced. After living for many centuries I am all too acutely aware of the perversity of human nature." "Yes," I said. "Well, as you will quickly see, all I want to do is talk." I nodded as I spoke, and sat down cross-legged upon the grass. "Any Irishman wants to talk, Mr. Houlihan." "And often that's all he wants," I said. "Sit down with me now, and stop staring as if I were a snake returned to the Island." He shook his head and remained standing. "Have your say, Mr. Houlihan. And afterward we'll appreciate it if you'll go away and leave us to our work." "Well, now, your work," I said, and glanced at the spaceship. "That's exactly what's got me curious." The others had edged in a bit now and were standing in a circle, intently staring at me. I took out my pipe. "Why," I asked, "would a group of little people be building a spaceship here in America—out in this lonely place?" Keech stared back without much expression, and said, "I've been wondering how you guessed it was a spaceship. I was surprised enough when you told me you could see us but not overwhelmingly so. I've run into believers before who could see the little people. It happens every so often, though not as frequently as it did a century ago. But knowing a spaceship at first glance! Well, I must confess that does astonish me." "And why wouldn't I know a spaceship when I see one?" I said. "It just so happens I'm a doctor of science." "A doctor of science, now," said Keech. "Invited by the American government to work on the first moon rocket here at the nuclear propulsion center. 
Since it's no secret I can advise you of it." "A scientist, is it," said Keech. "Well, now, that's very interesting." "I'll make no apologies for it," I said. "Oh, there's no need for apology," said Keech. "Though in truth we prefer poets to scientists. But it has just now crossed my mind, Mr. Houlihan that you, being a scientist, might be of help to us." "How?" I asked. "Well, I might try starting at the beginning," he replied. "You might," I said. "A man usually does." Keech took out his own pipe—a clay dudeen—and looked hopeful. I gave him a pinch of tobacco from my pouch. "Well, now," he said, "first of all you're no doubt surprised to find us here in America." "I am surprised from time to time to find myself here," I said. "But continue." "We had to come here," said Keech, "to learn how to make a spaceship." "A spaceship, now," I said, unconsciously adopting some of the old manner. "Leprechauns are not really mechanically inclined," said Keech. "Their major passions are music and laughter and mischief, as anyone knows." "Myself included," I agreed. "Then why do you need a spaceship?" "Well, if I may use an old expression, we've had a feelin' lately that we're not long for this world. Or let me put it this way. We feel the world isn't long for itself." I scratched my cheek. "How would a man unravel a statement such as that?" "It's very simple. With all the super weapons you mortals have developed, there's the distinct possibility you might be blowin' us all up in the process of destroying yourselves." "There is that possibility," I said. "Well, then, as I say," said Keech, "the little people have decided to leave the planet in a spaceship. Which we're buildin' here and now. We've spied upon you and learned how to do it. Well—almost how to do it. We haven't learned yet how to control the power—" "Hold on, now," I said. "Leaving the planet, you say. And where would you be going?" "There's another committee working on that. 'Tis not our concern. 
I was inclined to suggest the constellation Orion, which sounds as though it has a good Irish name, but I was hooted down. Be that as it may, my own job was to go into your nuclear center, learn how to make the ship, and proceed with its construction. Naturally, we didn't understand all of your high-flyin' science, but some of our people are pretty clever at gettin' up replicas of things." "You mean you've been spying on us at the center all this time? Do you know, we often had the feeling we were being watched, but we thought it was by the Russians. There's one thing which puzzles me, though. If you've been constantly around us—and I'm still able to see the little people—why did I never see you before?" "It may be we never crossed your path. It may be you can only see us when you're thinkin' of us, and of course truly believin' in us. I don't know—'tis a thing of the mind, and not important at the moment. What's important is for us to get our first ship to workin' properly and then we'll be on our way." "You're determined to go." "Truly we are, Mr. Houlihan. Now—to business. Just during these last few minutes a certain matter has crossed my mind. That's why I'm wastin' all this time with you, sir. You say you are a scientist." "A nuclear engineer." "Well, then, it may be that you can help us—now that you know we're here." "Help you?" "The power control, Mr. Houlihan. As I understand it, 'tis necessary to know at any instant exactly how much thrust is bein' delivered through the little holes in back. And on paper it looks simple enough—the square of somethin' or other. I've got the figures jotted in a book when I need 'em. But when you get to doin' it it doesn't come out exactly as it does on paper." "You're referring to the necessity for a coefficient of discharge." "Whatever it might be named," said Keech, shrugging. "'Tis the one thing we lack. I suppose eventually you people will be gettin' around to it. 
But meanwhile we need it right now, if we're to make our ship move." "And you want me to help you with this?" "That is exactly what crossed my mind." I nodded and looked grave and kneaded my chin for a moment softly. "Well, now, Keech," I said finally, "why should I help you?" "Ha!" said Keech, grinning, but not with humor, "the avarice of humans! I knew it! Well, Mr. Houlihan, I'll give you reason enough. The pot o' gold, Mr. Houlihan!" "The one at the end of the rainbow?" "It's not at the end of the rainbow. That's a grandmother's tale. Nor is it actually in an earthen crock. But there's gold, all right, enough to make you rich for the rest of your life. And I'll make you a proposition." "Go ahead." "We'll not be needin' gold where we're goin'. It's yours if you show us how to make our ship work." "Well, now, that's quite an offer," I said. Keech had the goodness to be quiet while I sat and thought for a while. My pipe had gone out and I lit it again. I finally said, "Let's have a look at your ship's drive and see what we can see." "You accept the proposition then?" "Let's have a look," I said, and that was all. Well, we had a look, and then several looks, and before the morning was out we had half the spaceship apart, and were deep in argument about the whole project. It was a most fascinating session. I had often wished for a true working model at the center, but no allowance had been inserted in the budget for it. Keech brought me paper and pencil and I talked with the aid of diagrams, as engineers are wont to do. Although the pencils were small and I had to hold them between thumb and forefinger, as you would a needle, I was able to make many sensible observations and even a few innovations. I came back again the next day—and every day for the following two weeks. It rained several times, but Keech and his people made a canopy of boughs and leaves and I was comfortable enough. 
Every once in a while someone from the town or the center itself would pass by, and stop to watch me. But of course they wouldn't see the leprechauns or anything the leprechauns had made, not being believers. I would halt work, pass the time of day, and then, in subtle fashion, send the intruder on his way. Keech and the little people just stood by and grinned all the while. At the end of sixteen days I had the entire problem all but whipped. It is not difficult to understand why. The working model and the fact that the small people with their quick eyes and clever fingers could spot all sorts of minute shortcomings was a great help. And I was hearing the old tongue and talking of the old things every day, and truly that went far to take the clutter out of my mind. I was no longer so lonely that I couldn't think properly. On the sixteenth day I covered a piece of paper with tiny mathematical symbols and handed it to Keech. "Here is your equation," I said. "It will enable you to know your thrust at any given moment, under any circumstances, in or out of gravity, and under all conditions of friction and combustion." "Thank you, Mr. Houlihan," said Keech. All his people had gathered in a loose circle, as though attending a rite. They were all looking at me quietly. "Mr. Houlihan," said Keech, "you will not be forgotten by the leprechauns. If we ever meet again, upon another world perchance, you'll find our friendship always eager and ready." "Thank you," I said. "And now, Mr. Houlihan," said Keech, "I'll see that a quantity of gold is delivered to your rooms tonight, and so keep my part of the bargain." "I'll not be needing the gold," I said. Keech's eyebrows popped upward. "What's this now?" "I'll not be needing it," I repeated. "I don't feel it would be right to take it for a service of this sort." "Well," said Keech in surprise, and in some awe, too, "well, now, musha Lord help us! 'Tis the first time I ever heard such a speech from a mortal." 
He turned to his people. "We'll have three cheers now, do you hear, for Mr. Houlihan—friend of the little people as long as he shall live!" And they cheered. And little tears crept into the corners of some of their turned-up eyes. We shook hands, all of us, and I left. I walked through the park, and back to the nuclear propulsion center. It was another cool, green morning with the leaves making only soft noises as the breezes came along. It smelled exactly like a wood I had known in Roscommon. And I lit my pipe and smoked it slowly and chuckled to myself at how I had gotten the best of the little people. Surely it was not every mortal who could accomplish that. I had given them the wrong equation, of course. They would never get their spaceship to work now, and later, if they tried to spy out the right information I would take special measures to prevent it, for I had the advantage of being able to see them. As for our own rocket ship, it should be well on its way by next St. Patrick's Day. For I had indeed determined the true coefficient of discharge, which I never could have done so quickly without those sessions in the glade with Keech and his working model. It would go down in scientific literature now, I suppose, as Houlihan's Equation, and that was honor and glory enough for me. I could do without Keech's pot of gold, though it would have been pleasant to be truly rich for a change. There was no sense in cheating him out of the gold to boot, for leprechauns are most clever in matters of this sort and he would have had it back soon enough—or else made it a burden in some way. Indeed, I had done a piece of work greatly to my advantage, and also to the advantage of humankind, and when a man can do the first and include the second as a fortunate byproduct it is a most happy accident. For if I had shown the little people how to make a spaceship they would have left our world. And this world, as long as it lasts—what would it be in that event? 
I ask you now, wouldn't we be even more likely to blow ourselves to Kingdom Come without the little people here for us to believe in every now and then? Transcriber's Note: This etext was produced from Fantastic Universe September 1955. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
|
C. He knows that it is rare to find believers among his colleagues and fellow humans
|
What empirical investigations do they reference?
|
### Introduction
Machine translation (MT) has made astounding progress in recent years thanks to improvements in neural modelling BIBREF0, BIBREF1, BIBREF2, and the resulting increase in translation quality is creating new challenges for MT evaluation. Human evaluation remains the gold standard, but there are many design decisions that potentially affect the validity of such a human evaluation. This paper is a response to two recent human evaluation studies in which some neural machine translation systems reportedly performed at (or above) the level of human translators for news translation from Chinese to English BIBREF3 and English to Czech BIBREF4, BIBREF5. Both evaluations were based on current best practices in the field: they used a source-based direct assessment with non-expert annotators, using data sets and the evaluation protocol of the Conference on Machine Translation (WMT). While the results are intriguing, especially because they are based on best practices in MT evaluation, BIBREF5 warn against taking their results as evidence for human–machine parity, and caution that for well-resourced language pairs, an update of WMT evaluation style will be needed to keep up with the progress in machine translation. We concur that these findings have demonstrated the need to critically re-evaluate the design of human MT evaluation. Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. 
Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis.

### Background
We first review current methods to assess the quality of machine translation system outputs, and highlight potential issues in using these methods to compare such outputs to translations produced by professional human translators.

### Background ::: Human Evaluation of Machine Translation
The evaluation of MT quality has been the subject of controversial discussions in research and the language services industry for decades due to its high economic importance. While automatic evaluation methods are particularly important in system development, there is consensus that a reliable evaluation should—despite high costs—be carried out by humans. Various methods have been proposed for the human evaluation of MT quality BIBREF8. What they have in common is that the MT output to be rated is paired with a translation hint: the source text or a reference translation. The MT output is then either adapted or scored with reference to the translation hint by human post-editors or raters, respectively. As part of the large-scale evaluation campaign at WMT, two primary evaluation methods have been used in recent years: relative ranking and direct assessment BIBREF9. In the case of relative ranking, raters are presented with outputs from two or more systems, which they are asked to evaluate relative to each other (e.g., to determine that system A is better than system B). Ties (e.g., system A is as good or as bad as system B) are typically allowed. Compared to absolute scores on Likert scales, data obtained through relative ranking show better inter- and intra-annotator agreement BIBREF10. However, they do not allow conclusions to be drawn about the order of magnitude of the differences, so that it is not possible to determine how much better system A was than system B. This is one of the reasons why direct assessment has prevailed as an evaluation method more recently. In contrast to relative ranking, the raters are presented with one MT output at a time, to which they assign a score between 0 and 100. To increase homogeneity, each rater's ratings are standardised BIBREF11. Reference translations serve as the basis in the context of WMT, and evaluations are carried out by monolingual raters.
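The per-rater standardisation mentioned above amounts to z-normalising each rater's raw 0–100 scores against that rater's own mean and spread, so that lenient and strict raters become comparable. A minimal sketch (the function name and data layout are illustrative, not from BIBREF11):

```python
from statistics import mean, pstdev

def standardise(scores_by_rater):
    """Z-normalise each rater's raw direct-assessment scores
    against that rater's own mean and standard deviation."""
    z = {}
    for rater, scores in scores_by_rater.items():
        mu, sigma = mean(scores), pstdev(scores)
        z[rater] = [(s - mu) / sigma for s in scores]
    return z

# A lenient and a strict rater with identical relative judgements
# end up with identical standardised scores:
print(standardise({"rater1": [90, 70, 80], "rater2": [50, 30, 40]}))
```

System-level results can then be obtained by averaging the standardised segment-level ratings.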
To avoid reference bias, the evaluation can be based on source texts instead, which presupposes bilingual raters, but leads to more reliable results overall BIBREF12.

### Background ::: Assessing Human–Machine Parity
BIBREF3 base their claim of achieving human–machine parity on a source-based direct assessment as described in the previous section, where they found no significant difference in ratings between the output of their MT system and a professional human translation. Similarly, BIBREF5 report that the best-performing English to Czech system submitted to WMT 2018 BIBREF4 significantly outperforms the human reference translation. However, the authors caution against interpreting their results as evidence of human–machine parity, highlighting potential limitations of the evaluation. In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations.

### Background ::: Assessing Human–Machine Parity ::: Choice of Raters
The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations.

### Background ::: Assessing Human–Machine Parity ::: Linguistic Context
MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e.g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents.

### Background ::: Assessing Human–Machine Parity ::: Reference Translations
The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would indicate that human translations used to assess parity claims need to be carefully vetted for quality. We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6).

### Background ::: Translations
We use English translations of the Chinese source texts in the WMT 2017 Chinese–English test set BIBREF18 for all experiments presented in this article:

- H$_A$: The professional human translations in the dataset of BIBREF3.
- H$_B$: Professional human translations that we ordered from a different translation vendor, which included a post-hoc native English check. We produced these only for the documents that were originally Chinese, as discussed in more detail in Section SECREF35.
- MT$_1$: The machine translations produced by BIBREF3's best system (Combo-6), for which the authors found parity with H$_A$.
- MT$_2$: The machine translations produced by Google's production system (Google Translate) in October 2017, as contained in BIBREF3's dataset.

Statistical significance is denoted by * ($p\le .05$), ** ($p\le .01$), and *** ($p\le .001$) throughout this article, unless otherwise stated.

### Choice of Raters
Both professional and amateur evaluators can be involved in human evaluation of MT quality. However, from published work in the field BIBREF19, it is fair to say that there is a tendency to “rely on students and amateur evaluators, sometimes with an undefined (or self-rated) proficiency in the languages involved, an unknown expertise with the text type” BIBREF8. In previous work comparing the evaluation of MT output by professional translators and crowd workers, BIBREF20 showed that for all language pairs (involving 11 languages) evaluated, crowd workers tend to be more accepting of the MT output, giving higher fluency and adequacy scores and performing very little post-editing. The authors argued that non-expert translators lack knowledge of translation and so might not notice subtle differences that make one translation more suitable than another, and therefore, when confronted with a translation that is hard to post-edit, tend to accept the MT rather than try to improve it.

### Choice of Raters ::: Evaluation Protocol
We test for difference in ratings of MT outputs and human translations between experts and non-experts. We consider professional translators as experts, and both crowd workers and MT researchers as non-experts. We conduct a relative ranking experiment using one professional human (H$_A$) and two machine translations (MT$_1$ and MT$_2$), considering the native Chinese part of the WMT 2017 Chinese–English test set (see Section SECREF35 for details). The 299 sentences used in the experiments stem from 41 documents, randomly selected from all the documents in the test set originally written in Chinese, and are shown in their original order. Raters are shown one sentence at a time, and see the original Chinese source alongside the three translations. The previous and next source sentences are also shown, in order to provide the annotator with local inter-sentential context. Five raters—two experts and three non-experts—participated in the assessment. The experts were professional Chinese to English translators: one native in Chinese with a fluent level of English, the other native in English with a fluent level of Chinese. The non-experts were NLP researchers native in Chinese, working in an English-speaking country. The ratings are elicited with Appraise BIBREF21. We derive an overall score for each translation (H$_A$, MT$_1$, and MT$_2$) based on the rankings. We use the TrueSkill method adapted to MT evaluation BIBREF22 following its usage at WMT15, i.e., we run 1,000 iterations of the rankings recorded with Appraise followed by clustering (significance level $\alpha =0.05$).

### Choice of Raters ::: Results
Table TABREF17 shows the TrueSkill scores for each translation resulting from the evaluations by expert and non-expert translators. We find that translation expertise affects the judgement of MT$_1$ and H$_A$, where the rating gap is wider for the expert raters. This indicates that non-experts disregard translation nuances in the evaluation, which leads to a more tolerant judgement of MT systems and a lower inter-annotator agreement ($\kappa =0.13$ for non-experts versus $\kappa =0.254$ for experts). It is worth noting that, regardless of their expertise, the performance of human raters may vary over time. For example, performance may improve or decrease due to learning effects or fatigue, respectively BIBREF23. It is likely that such longitudinal effects are present in our data. They should be accounted for in future work, e.g., by using trial number as an additional predictor BIBREF24.

### Linguistic Context
Another concern is the unit of evaluation. Historically, machine translation has primarily operated on the level of sentences, and so has machine translation evaluation. However, it has been remarked that human raters do not necessarily understand the intended meaning of a sentence shown out-of-context BIBREF25, which limits their ability to spot some mistranslations. Also, a sentence-level evaluation will be blind to errors related to textual cohesion and coherence. While sentence-level evaluation may be good enough when evaluating MT systems of relatively low quality, we hypothesise that with additional context, raters will be able to make more nuanced quality assessments, and will also reward translations that show more textual cohesion and coherence. We believe that this aspect should be considered in evaluation, especially when making claims about human–machine parity, since human translators can and do take inter-sentential context into account BIBREF15, BIBREF16.

### Linguistic Context ::: Evaluation Protocol
We test if the availability of document-level context affects human–machine parity claims in terms of adequacy and fluency. In a pairwise ranking experiment, we show raters (i) isolated sentences and (ii) entire documents, asking them to choose the better (with ties allowed) from two translation outputs: one produced by a professional translator, the other by a machine translation system. We do not show reference translations as one of the two options is itself a human translation. We use source sentences and documents from the WMT 2017 Chinese–English test set (see Section SECREF8): documents are full news articles, and sentences are randomly drawn from these news articles, regardless of their position. We only consider articles from the test set that are native Chinese (see Section SECREF35). In order to compare our results to those of BIBREF3, we use both their professional human (H$_A$) and machine translations (MT$_1$). Each rater evaluates both sentences and documents, but never the same text in both conditions so as to avoid repetition priming BIBREF26. The order of experimental items as well as the placement of choices (H$_A$, MT$_1$; left, right) are randomised. We use spam items for quality control BIBREF27: In a small fraction of items, we render one of the two options nonsensical by randomly shuffling the order of all translated words, except for 10 % at the beginning and end. If a rater marks a spam item as better than or equal to an actual translation, this is a strong indication that they did not read both options carefully. We recruit professional translators (see Section SECREF3) from proz.com, a well-known online marketplace for professional freelance translation, considering Chinese to English translators and native English revisers for the adequacy and fluency conditions, respectively. In each condition, four raters evaluate 50 documents (plus 5 spam items) and 104 sentences (plus 16 spam items).
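The spam-item construction described in the protocol (shuffling all words of a translation except roughly 10 % at the beginning and end) can be sketched as follows; the function name, parameters, and seed handling are ours, not from BIBREF27:

```python
import random

def make_spam(translation, keep=0.1, seed=None):
    """Turn a translation into a nonsensical spam item by shuffling
    its words, keeping the first and last ~10 % in place so the item
    is not trivially recognisable from its edges alone."""
    words = translation.split()
    if len(words) < 3:        # too short to shuffle meaningfully
        return translation
    k = max(1, round(len(words) * keep))
    head, middle, tail = words[:k], words[k:-k], words[-k:]
    random.Random(seed).shuffle(middle)
    return " ".join(head + middle + tail)
```

A rater who ranks such an item as better than or equal to a real translation has very likely not read both options carefully.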
We use two non-overlapping sets of documents and two non-overlapping sets of sentences, and each is evaluated by two raters.

### Linguistic Context ::: Results
Results are shown in Table TABREF21. We note that sentence ratings from two raters are excluded from our analysis because of unintentional textual overlap with documents, meaning we cannot fully rule out that sentence-level decisions were informed by access to the full documents they originated from. Moreover, we exclude document ratings from one rater in the fluency condition because of poor performance on spam items, and recruit an additional rater to re-rate these documents. We analyse our data using two-tailed Sign Tests, the null hypothesis being that raters do not prefer MT$_1$ over H$_A$ or vice versa, implying human–machine parity. Following WMT evaluation campaigns that used pairwise ranking BIBREF28, the number of successes $x$ is the number of ratings in favour of H$_A$, and the number of trials $n$ is the number of all ratings except for ties. Adding half of the ties to $x$ and the total number of ties to $n$ BIBREF29 does not impact the significance levels reported in this section. Adequacy raters show no statistically significant preference for MT$_1$ or H$_A$ when evaluating isolated sentences ($x=86, n=189, p=.244$). This is in accordance with BIBREF3, who found the same in a source-based direct assessment experiment with crowd workers. With the availability of document-level context, however, preference for MT$_1$ drops from 49.5 to 37.0 % and is significantly lower than preference for human translation ($x=104, n=178, p<.05$). This evidences that document-level context cues allow raters to get a signal on adequacy. Fluency raters prefer H$_A$ over MT$_1$ both on the level of sentences ($x=106, n=172, p<.01$) and documents ($x=99, n=143, p<.001$). This is somewhat surprising given that increased fluency was found to be one of the main strengths of NMT BIBREF30, as we further discuss in Section SECREF24. 
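Once ties are excluded, the two-tailed Sign Test described above reduces to an exact binomial test with success probability 0.5. A minimal stdlib sketch (our own implementation, following the definitions of $x$ and $n$ given above):

```python
from math import comb

def sign_test(x: int, n: int) -> float:
    """Exact two-tailed Sign Test p-value: the probability under H0
    (each non-tied rating favours either translation with p = 0.5) of
    an outcome at least as unlikely as x successes out of n trials."""
    pmf = [comb(n, k) * 0.5 ** n for k in range(n + 1)]
    cutoff = pmf[x] * (1 + 1e-9)  # tolerance for floating-point ties
    return min(1.0, sum(p for p in pmf if p <= cutoff))
```

This "sum of outcomes no more likely than the observed one" definition of the two-sided p-value is the standard one for exact binomial tests.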
The availability of document-level context decreases fluency raters' preference for MT$_1$, which falls from 31.7 to 22.0 %, without increasing their preference for H$_A$ (Table TABREF21).

### Linguistic Context ::: Discussion
Our findings emphasise the importance of linguistic context in human evaluation of MT. In terms of adequacy, raters assessing documents as a whole show a significant preference for human translation, but when assessing single sentences in random order, they show no significant preference for human translation. Document-level evaluation exposes errors to raters which are hard or impossible to spot in a sentence-level evaluation, such as coherent translation of named entities. The example in Table TABREF23 shows the first two sentences of a Chinese news article as translated by a professional human translator (H$_A$) and BIBREF3's NMT system (MT$_1$). When looking at both sentences (document-level evaluation), it can be seen that MT$_1$ uses two different translations to refer to a cultural festival, “2016盂兰文化节", whereas the human translation uses only one. When assessing the second sentence out of context (sentence-level evaluation), it is hard to penalise MT$_1$ for producing 2016 Python Cultural Festival, particularly for fluency raters without access to the corresponding source text. For further examples, see Section SECREF24 and Table TABREF34.

### Reference Translations
Yet another relevant element in human evaluation is the reference translation used. This is the focus of this section, where we cover two aspects of reference translations that can have an impact on evaluation: quality and directionality.

### Reference Translations ::: Quality
Because the translations are created by humans, a number of factors could lead to compromises in quality:

- If the translator is a non-native speaker of the source language, they may make mistakes in interpreting the original message. This is particularly true if the translator does not normally work in the domain of the text, e. g., when a translator who normally works on translating electronic product manuals is asked to translate news.
- If the translator is a non-native speaker of the target language, they might not be able to generate completely fluent text. This similarly applies to domain-specific terminology.
- Unlike computers, human translators have limits in time, attention, and motivation, and will generally do a better job when they have sufficient time to check their work, or are particularly motivated to do a good job, such as when doing a good job is necessary to maintain their reputation as a translator.
- In recent years, a large number of human translation jobs are performed by post-editing MT output, which can result in MT artefacts remaining even after manual post-editing BIBREF31, BIBREF32, BIBREF33.

In this section, we examine the effect of the quality of underlying translations on the conclusions that can be drawn with regard to human–machine parity. We first analyse (i) how the source of the human translation affects claims of human–machine parity, and (ii) whether significant differences exist between two varieties of human translation. We follow the same protocol as in Section SECREF19: four professional translators per condition evaluate the translations for adequacy and fluency on both the sentence and document level. The results are shown in Table TABREF30. From this, we can see that the human translation H$_B$, which was aggressively edited to ensure target fluency, resulted in lower adequacy.
With more fluent and less accurate translations, raters do not prefer human over machine translation in terms of adequacy (Table TABREF30), but have a stronger preference for human translation in terms of fluency (compare Tables TABREF30 and TABREF21). In a direct comparison of the two human translations (Table TABREF30), we also find that H$_A$ is considered significantly more adequate than H$_B$, while there is no significant difference in fluency. To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bi-lingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32. From these results, we can glean a few interesting insights. First, we find significantly larger numbers of errors of the categories of Incorrect Word and Named Entity in MT$_1$, indicating that the MT system is less effective at choosing correct translations for individual words than the human translators. 
An example of this can be found in Table TABREF33, where we see that the MT system refers to a singular “point of view" and translates 线路 (channel, route, path) into the semantically similar but inadequate lines. Interestingly, MT$_1$ has significantly more Word Order errors, one example of this being shown in Table TABREF33, with the relative placements of at the end of last year (去年年底) and stop production (停产). This result is particularly notable given previous reports that NMT systems have led to great increases in reordering accuracy compared to previous statistical MT systems BIBREF35, BIBREF36, demonstrating that the problem of generating correctly ordered output is far from solved even in very strong NMT systems. Moreover, H$_B$ had significantly more Missing Word (Semantics) errors than both H$_A$ ($p<.001$) and MT$_1$ ($p<.001$), an indication that the proofreading process resulted in drops of content in favour of fluency. An example of this is shown in Table TABREF33, where H$_B$ dropped the information that the meetings between Suning and Apple were recently (近期) held. Finally, while there was not a significant difference, likely due to the small number of examples overall, it is noticeable that MT$_1$ had a higher percentage of Collocation and Context errors, which indicate that the system has more trouble translating words that are dependent on longer-range context. Similarly, some Named Entity errors are also attributable to translation inconsistencies due to lack of longer-range context. Table TABREF34 shows an example where we see that the MT system was unable to maintain a consistently gendered or correct pronoun for the female Olympic shooter Zhang Binbin (张彬彬). Apart from showing qualitative differences between the three translations, the analysis also supports the finding of the pairwise ranking study: H$_A$ is both preferred over MT$_1$ in the pairwise ranking study, and exhibits fewer translation errors in our error classification. 
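Pairwise significance of such error counts is assessed with Fisher's exact test (two-tailed), as noted in the caption of Table 5. A self-contained sketch for a 2×2 contingency table, e. g., sentences with/without a given error type in two translation outputs (our own implementation):

```python
from math import comb

def fisher_exact_two_tailed(a: int, b: int, c: int, d: int) -> float:
    """Two-tailed Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum of hypergeometric probabilities of all tables (with fixed margins)
    no more likely than the observed one."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def p(k: int) -> float:  # probability that cell (0,0) equals k
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = p(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(p(k) for k in range(lo, hi + 1) if p(k) <= p_obs * (1 + 1e-9))
```

For large counts an implementation such as `scipy.stats.fisher_exact` would be preferable; this version is meant only to make the test's definition concrete.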
H$_B$ has a substantially higher number of missing words than the other two translations, which agrees with the lower perceived adequacy in the pairwise ranking. However, the analysis not only supports the findings of the pairwise ranking study, but also adds nuance to it. Even though H$_B$ has the highest number of deletions, and does worse than the other two translations in a pairwise adequacy ranking, it is similar to H$_A$, and better than MT$_1$, in terms of most other error categories.

### Reference Translations ::: Directionality
Translation quality is also affected by the nature of the source text. In this respect, we note that from the 2,001 sentences in the WMT 2017 Chinese–English test set, half were originally written in Chinese; the remaining half were originally written in English and then manually translated into Chinese. This Chinese reference file (half original, half translated) was then manually translated into English by BIBREF3 to make up the reference for assessing human–machine parity. Therefore, 50 % of the reference comprises direct English translations from the original Chinese, while 50 % are English translations from the human-translated file from English into Chinese, i. e., backtranslations of the original English. According to BIBREF37, translated texts differ from their originals in that they are simpler, more explicit, and more normalised. For example, the synonyms used in an original text may be replaced by a single translation. These differences are referred to as translationese, and have been shown to affect translation quality in the field of machine translation BIBREF38, BIBREF39, BIBREF32, BIBREF33. We test whether translationese has an effect on assessing parity between translations produced by humans and machines, using relative rankings of translations in the WMT 2017 Chinese–English test set by five raters (see Section SECREF3). Our hypothesis is that the difference between human and machine translation quality is smaller when source texts are translated English (translationese) rather than original Chinese, because a translationese source text should be simpler and thus easier to translate for an MT system. We confirm Laviosa's observation that “translationese” Chinese (that started as English) exhibits less lexical variety than “natively” Chinese text and demonstrate that translationese source texts are generally easier for MT systems to score well on. 
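Lexical variety is commonly quantified with the type-token ratio (TTR), the measure used in this section. The sketch below estimates a mean TTR via bootstrap resampling; the exact resampling scheme (sampling tokens with replacement, and the sample count) is our assumption for illustration, not necessarily the procedure used in the experiments:

```python
import random

def type_token_ratio(tokens: list) -> float:
    """Ratio of distinct tokens (types) to total tokens."""
    return len(set(tokens)) / len(tokens)

def bootstrap_ttr(tokens: list, n_samples: int = 1000, seed: int = 0) -> float:
    """Mean TTR over bootstrap resamples of the token sequence."""
    rng = random.Random(seed)
    values = []
    for _ in range(n_samples):
        resample = rng.choices(tokens, k=len(tokens))  # with replacement
        values.append(type_token_ratio(resample))
    return sum(values) / len(values)
```

Since TTR is sensitive to corpus size, comparing two subsets requires first equalising their token counts, as done below for the Chinese and English subsets.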
Table TABREF36 shows the TrueSkill scores for translations (H$_A$, MT$_1$, and MT$_2$) of the entire test set (Both) versus only the sentences originally written in Chinese or English therein. The human translation H$_A$ outperforms the machine translation MT$_1$ significantly when the original language is Chinese, while the difference between the two is not significant when the original language is English (i. e., translationese input). We also compare the two subsets of the test set, original and translationese, using type-token ratio (TTR). Our hypothesis is that the TTR will be smaller for the translationese subset, so that its simpler nature is reflected in a less varied use of language. While both subsets contain a similar number of sentences (1,001 and 1,000), the Chinese subset contains more tokens (26,468) than its English counterpart (22,279). We thus take a subset of the Chinese (840 sentences) containing a similar number of words to the English data (22,271 words). We then calculate the TTR for these two subsets using bootstrap resampling. The TTR for Chinese ($M=0.1927$, $SD=0.0026$, 95 % confidence interval $[0.1925,0.1928]$) is 13 % higher than that for English ($M=0.1710$, $SD=0.0025$, 95 % confidence interval $[0.1708,0.1711]$). Our results show that using translationese (Chinese translated from English) rather than original source texts results in higher scores for MT systems in human evaluation, and that the lexical variety of translationese is smaller than that of original text.

### Recommendations
Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general.

### Recommendations ::: (R1) Choose professional translators as raters.
In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs.

### Recommendations ::: (R2) Evaluate documents, not sentences.
When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as the unit of evaluation increases the rating gap between human and machine translation (Section SECREF4).

### Recommendations ::: (R3) Evaluate fluency in addition to adequacy.
Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality.

### Recommendations ::: (R4) Do not heavily edit reference translations for fluency.
In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30).

### Recommendations ::: (R5) Use original source texts.
Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT. Our work empirically strengthens and extends the recommendations on human MT evaluation in previous work BIBREF6, BIBREF7, some of which have meanwhile been adopted by the large-scale evaluation campaign at WMT 2019 BIBREF40: the new evaluation protocol uses original source texts only (R5) and gives raters access to document-level context (R2). The findings of WMT 2019 provide further evidence in support of our recommendations. In particular, human English to Czech translation was found to be significantly better than MT BIBREF40; the comparison includes the same MT system (CUNI-Transformer-T2T-2018) which outperformed human translation according to the previous protocol BIBREF5. Results also show a larger difference between human translation and MT in document-level evaluation. We note that in contrast to WMT, the judgements in our experiments are provided by a small number of human raters: five in the experiments of Sections SECREF3 and SECREF35, four per condition (adequacy and fluency) in Section SECREF4, and one in the fine-grained error analysis presented in Section SECREF24. Moreover, the results presented in this article are based on one text domain (news) and one language direction (Chinese to English), and while a large-scale evaluation with another language pair supports our findings (see above), further experiments with more languages, domains, and raters will be required to increase their external validity.

### Conclusion
We compared professional human Chinese to English translations to the output of a strong MT system. In a human evaluation following best practices, BIBREF3 found no significant difference between the two, concluding that their NMT system had reached parity with professional human translation. Our blind qualitative analysis, however, showed that the machine translation output contained significantly more incorrect words, omissions, mistranslated names, and word order errors. Our experiments show that recent findings of human–machine parity in language translation are owed to weaknesses in the design of human evaluation campaigns. We empirically tested alternatives to what is currently considered best practice in the field, and found that the choice of raters, the availability of linguistic context, and the creation of reference translations have a strong impact on perceived translation quality. As for the choice of raters, professional translators showed a significant preference for human translation, while non-expert raters did not. In terms of linguistic context, raters found human translation significantly more accurate than machine translation when evaluating full documents, but not when evaluating single sentences out of context. They also found human translation significantly more fluent than machine translation, both when evaluating full documents and single sentences. Moreover, we showed that aggressive editing of human reference translations for target language fluency can decrease adequacy to the point that they become indistinguishable from machine translation, and that raters found human translations significantly better than machine translations of original source texts, but not of source texts that were translations themselves. Our results strongly suggest that in order to reveal errors in the output of strong MT systems, the design of MT quality assessments with human raters should be revisited. 
To that end, we have offered a set of recommendations, supported by empirical data, which we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Our recommendations have the aim of increasing the validity of MT evaluation, but we are aware of the high cost of having MT evaluation done by professional translators, and on the level of full documents. We welcome future research into alternative evaluation protocols that can demonstrate their validity at a lower cost.

Table 1: Ranks and TrueSkill scores (the higher the better) of one human (HA) and two machine translations (MT1, MT2) for evaluations carried out by expert and non-expert translators. An asterisk next to a translation indicates that this translation is significantly better than the one in the next rank at p ≤ .05.

Table 2: Pairwise ranking results for machine (MT1) against professional human translation (HA) as obtained from blind evaluation by professional translators. Preference for MT1 is lower when document-level context is available.

Table 4: Pairwise ranking results for one machine (MT1) and two professional human translations (HA, HB) as obtained from blind evaluation by professional translators.

Table 5: Classification of errors in machine translation MT1 and two professional human translation outputs HA and HB. Errors represent the number of sentences (out of N = 150) that contain at least one error of the respective type. We also report the number of sentences that contain at least one error of any category (Any), and the total number of error categories present in all sentences (Total). Statistical significance is assessed with Fisher’s exact test (two-tailed) for each pair of translation outputs.

Table 6: (Continued from previous page.)

Table 7: Ranks of the translations given the original language of the source side of the test set shown with their TrueSkill score (the higher the better). An asterisk next to a translation indicates that this translation is significantly better than the one in the next rank at p ≤ .05.
How does Crownwall feel about the Vegans?
A. Crownwall thinks the Vegans are a kind and benevolent race.
B. Crownwall thinks the Vegans seem to be just as brutal and horrible as they make the Sundans out to be.
C. Crownwall thinks the Vegans are murderous and can't wait to get away from them.
D. Crownwall is disgusted by the sight of the slobbering, boneless, tentacled creatures.
UPSTARTS

By L. J. STECHER, JR.

Illustrated by DILLON

The sight of an Earthman on Vega III, where it was impossible for an outlander to be, brought angry crowds to surround John Crownwall as he strode toward the palace of Viceroy Tronn Ffallk, ruler of Sector XII of the Universal Holy Empire of Sunda. He ignored the snarling, the spitting, the waving of boneless prehensile fingers, as he ignored the heavy gravity and heavier air of the unfamiliar planet.

John Crownwall, florid, red-headed and bulky, considered himself to be a bold man. But here, surrounded by this writhing, slithering mass of eight-foot creatures, he felt distinctly unhappy. Crownwall had heard about creatures that slavered, but he had never before seen it done. These humanoids had large mouths and sharp teeth, and they unquestionably slavered. He wished he knew more about them. If they carried out the threats of their present attitude, Earth would have to send Marshall to replace him. And if Crownwall couldn't do the job, thought Crownwall, then it was a sure bet that Marshall wouldn't have a chance.

He climbed the great ramp, with its deeply carved Greek key design, toward the mighty entrance gate of the palace. His manner demonstrated an elaborate air of unconcern that he felt sure was entirely wasted on these monsters. The clashing teeth of the noisiest of them were only inches from the quivering flesh of his back as he reached the upper level. Instantly, and unexpectedly to Crownwall, the threatening crowd dropped back fearfully, so that he walked the last fifty meters alone. Crownwall all but sagged with relief.

A pair of guards, their purple hides smoothly polished and gleaming with oil, crossed their ceremonial pikes in front of him as he approached the entrance.

"And just what business do you have here, stranger?" asked the senior of the guards, his speaking orifice framing with difficulty the sibilances of Universal Galactic.

"What business would I have at the Viceroy's Palace?"
asked Crownwall. "I want to see Ffallk." "Mind your tongue," growled the guard. "If you mean His Effulgence, Right Hand of the Glorious Emperor, Hereditary Ruler of the Seventy Suns, Viceroy of the Twelfth Sector of the Universal Holy Empire"—Universal Galactic had a full measure of ceremonial words—"he sees only those whom he summons. If you know what's good for you, you'll get out of here while you can still walk. And if you run fast enough, maybe you can even get away from that crowd out there, but I doubt it." "Just tell him that a man has arrived from Earth to talk to him. He'll summon me fast enough. Meanwhile, my highly polished friends, I'll just wait here, so why don't you put those heavy pikes down?" Crownwall sat on the steps, puffed alight a cigarette, and blew expert smoke rings toward the guards. An elegant courtier, with elaborately jeweled harness, bustled from inside the palace, obviously trying to present an air of strolling nonchalance. He gestured fluidly with a graceful tentacle. "You!" he said to Crownwall. "Follow me. His Effulgence commands you to appear before him at once." The two guards withdrew their pikes and froze into immobility at the sides of the entrance. Crownwall stamped out his smoke and ambled after the hurrying courtier along tremendous corridors, through elaborate waiting rooms, under guarded doorways, until he was finally bowed through a small curtained arch. At the far side of the comfortable, unimpressive room, a plump thing, hide faded to a dull violet, reclined on a couch. Behind him stood a heavy and pompous appearing Vegan in lordly trappings. They examined Crownwall with great interest for a few moments. "It's customary to genuflect when you enter the Viceroy's presence," said the standing one at last. "But then I'm told you're an Earthling. I suppose we can expect you to be ignorant of those niceties customary among civilized peoples." "It's all right, Ggaran," said the Viceroy languidly. 
He twitched a tentacle in a beckoning gesture. "Come closer, Earthling. I bid you welcome to my capital. I have been looking forward to your arrival for some time." Crownwall put his hands in his pockets. "That's hardly possible," he said. "It was only decided yesterday, back on Earth, that I would be the one to make the trip here. Even if you could spy through buildings on Earth from space, which I doubt, your communications system can't get the word through that fast." "Oh, I didn't mean you in particular," the Vegan said with a negligent wave. "Who can tell one Earthling from another? What I meant was that I expected someone from Earth to break through our blockade and come here. Most of my advisors—even Ggaran here—thought it couldn't be done, but I never doubted that you'd manage it. Still, if you were on your home planet only yesterday, that's astonishing even to me. Tell me, how did you manage to get here so fast, and without even alerting my detection web?" "You're doing the talking," said Crownwall. "If you wanted someone from Earth to come here to see you, why did you put the cordon around Earth? And why did you drop a planet-buster in the Pacific Ocean, and tell us that it was triggered to go off if we tried to use the distorter drive? That's hardly the action of somebody who expects visitors." Ffallk glanced up at Ggaran. "I told you that Earthlings were unbelievably bold." He turned back to Crownwall. "If you couldn't come to me in spite of the trifling inconveniences I put in your way, your presence here would be useless to both of us. But you did come, so I can tell you that although I am the leader of one of the mightiest peoples in the Galaxy, whereas there are scarcely six billions of you squatting on one minor planet, we still need each other. Together, there is nothing we can't do." "I'm listening," said Crownwall. "We offer you partnership with us to take over the rule of the Galaxy from the Sunda—the so-called Master Race." 
"It would hardly be an equal partnership, would it, considering that there are so many more of you than there are of us?" His Effulgence twitched his ear stalks in amusement. "I'm Viceroy of one of the hundred Sectors of the Empire. I rule over a total of a hundred Satrapies; these average about a hundred Provinces each. Provinces consist, in general, of about a hundred Clusters apiece, and every Cluster has an average of a hundred inhabited solar systems. There are more inhabited planets in the Galaxy than there are people on your single world. I, personally, rule three hundred trillion people, half of them of my own race. And yet I tell you that it would be an equal partnership." "I don't get it. Why?" "Because you came to me." Crownwall shrugged. "So?" The Vegan reached up and engulfed the end of a drinking tube with his eating orifice. "You upstart Earthlings are a strange and a frightening race," he said. "Frightening to the Sunda, especially. When you showed up in the spaceways, it was decreed that you had to be stopped at once. There was even serious discussion of destroying Earth out of hand, while it is still possible. "Your silly little planet was carefully examined at long range in a routine investigation just about fifty thousand years ago. There were at that time three different but similar racial strains of pulpy bipeds, numbering a total of perhaps a hundred thousand individuals. They showed many signs of an ability to reason, but a complete lack of civilization. While these creatures could by no means be classed among the intelligent races, there was a general expectation, which we reported to the Sunda, that they would some day come to be numbered among the Servants of the Emperor. So we let you alone, in order that you could develop in your own way, until you reached a high enough civilization to be useful—if you were going to. "Intelligence is very rare in the Galaxy. In all, it has been found only fifteen times. 
The other races we have watched develop, and some we have actively assisted to develop. It took the quickest of them just under a million years. One such race we left uncontrolled too long—but no matter. "You Earthlings, in defiance of all expectation and all reason, have exploded into space. You have developed in an incredibly short space of time. But even that isn't the most disconcerting item of your development. As an Earthling, you have heard of the details of the first expedition of your people into space, of course?" " Heard about it?" exclaimed Crownwall. "I was on it." He settled down comfortably on a couch, without requesting permission, and thought back to that first tremendous adventure; an adventure that had taken place little more than ten years before. The Star Seeker had been built in space, about forty thousand kilometers above the Earth. It had been manned by a dozen adventurous people, captained by Crownwall, and had headed out on its ion drive until it was safely clear of the warping influence of planetary masses. Then, after several impatient days of careful study and calculation, the distorter drive had been activated, for the first time in Earth's history, and, for the twelve, the stars had winked out. The men of Earth had decided that it should work in theory. They had built the drive—a small machine, as drives go—but they had never dared to try it, close to a planet. To do so, said their theory, would usually—seven point three four times out of 10—destroy the ship, and everything in space for thousands of miles around, in a ravening burst of raw energy. So the drive had been used for the first time without ever having been tested. And it had worked. In less than a week's time, if time has any meaning under such circumstances, they had flickered back into normal space, in the vicinity of Alpha Centauri. They had quickly located a dozen planets, and one that looked enough like Earth to be its twin sister. 
They had headed for that planet confidently and unsuspectingly, using the ion drive. Two weeks later, while they were still several planetary diameters from their destination, they had been shocked to find more than two score alien ships of space closing in on them—ships that were swifter and more maneuverable than their own. These ships had rapidly and competently englobed the Star Seeker , and had then tried to herd it away from the planet it had been heading toward. Although caught by surprise, the Earthmen had acted swiftly. Crownwall recalled the discussion—the council of war, they had called it—and their unanimous decision. Although far within the dangerous influence of a planetary mass, they had again activated the distorter drive, and they had beaten the odds. On the distorter drive, they had returned to Earth as swiftly as they had departed. Earth had immediately prepared for war against her unknown enemy. "Your reaction was savage," said Ggaran, his tentacles stiffening with shock at the memory. "You bloody-minded Earthlings must have been aware of the terrible danger." Ffallk rippled in agreement. "The action you took was too swift and too foolhardy to be believed. You knew that you could have destroyed not only yourself, but also all who live on that planet. You could also have wrecked the planet itself and the ships and those of my own race who manned them. We had tried to contact you, but since you had not developed subspace radio, we were of course not successful. Our englobement was just a routine quarantine. With your total lack of information about us, what you did was more than the height of folly. It was madness." "Could we have done anything else that would have kept you from landing on Earth and taking us over?" asked Crownwall. "Would that have been so bad?" said Ggaran. "We can't tolerate wild and warlike races running free and uncontrolled in the Galaxy. Once was enough for that." "But what about my question? 
Was there any other way for us to stay free?" "Well, no. But you didn't have enough information to realize that when you acted so precipitously. As a matter of fact, we didn't expect to have much trouble, even after your surprising action. Of course, it took us a little time to react. We located your planet quickly enough, and confirmed that you were a new race. But by the time we could try to set up communications and send ambassadors, you had already organized a not inconsiderable defense. Your drones blew up our unmanned ships as fast as we could send them down to your planet. And by the time we had organized properly for war against you, it was obvious that we could not conquer you. We could only destroy you." "That old fool on Sunda, the Emperor, decided that we should blow you up, but by that time I had decided," said His Effulgence, "that you might be useful to me—that is, that we might be useful to each other. I traveled halfway across the Galaxy to meet him, to convince him that it would be sufficient just to quarantine you. When we had used your radio system to teach a few of you the Universal Galactic tongue, and had managed to get what you call the 'planet-buster' down into the largest of your oceans, he figured we had done our job. "With his usual lack of imagination, he felt sure that we were safe from you—after all, there was no way for you to get off the planet. Even if you could get down to the bottom of the ocean and tamper with the bomb, you would only succeed in setting it off, and that's what the Sunda had been in favor of in the first place. "But I had different ideas. From what you had already done, I suspected it wouldn't be long before one of you amazing Earthlings would dream up some device or other, head out into space, and show up on our planet. So I've been waiting for you, and here you are." "It was the thinking of a genius," murmured Ggaran. "All right, then, genius, here I am," said Crownwall. "So what's the pitch?" 
"Ggaran, you explain it to the Earthling," said His Effulgence. Ggaran bowed. "The crustaceans on Sunda—the lobsterlike creatures that rule the Galaxy—are usurpers. They have no rights to their position of power. Our race is much older than theirs. We were alone when we found the Sundans—a primitive tribe, grubbing in the mud at the edge of their shallow seas, unable even to reason. In those days we were desperately lonely. We needed companionship among the stars, and we helped them develop to the point where, in their inferior way, they were able to reason, almost as well as we, The People, can. And then they cheated us of our rightful place. "The Emperor at Sunda is one of them. They provide sixty-eight of the hundred Viceroys; we provide only seventeen. It is a preposterous and intolerable situation. "For more than two million years we have waited for the opportunity for revenge. And now that you have entered space, that opportunity is at hand." "If you haven't been able to help yourselves for two million years," asked Crownwall, "how does the sight of me give you so much gumption all of a sudden?" Ggaran's tentacles writhed, and he slavered in fury, but the clashing of his teeth subsided instantly at a soothing wave from His Effulgence. "War in space is almost an impossibility," said the aged ruler. "We can destroy planets, of course, but with few exceptions, we cannot conquer them. I rule a total of seven races in my Sector. I rule them, but I don't let them intermingle. Each race settles on the planets that best suit it. Each of those planets is quite capable of defending itself from raids, or even large-scale assaults that would result in its capture and subjugation—just as your little Earth can defend itself. "Naturally, each is vulnerable to economic blockade—trade provides a small but vital portion of the goods each planet uses. 
All that a world requires for a healthy and comfortable life cannot be provided from the resources of that single world alone, and that gives us a very considerable measure of control. "And it is true that we can always exterminate any planet that refuses to obey the just and legal orders of its Viceroy. So we achieve a working balance in our Empire. We control it adequately, and we live in peace. "The Sundans, for example, though they took the rule of the Empire that was rightfully ours away from us, through trickery, were unable to take over the Sectors we control. We are still powerful. And soon we will be all-powerful. In company with you Earthlings, that is." Crownwall nodded. "In other words, you think that we Earthmen can break up this two-million-year-old stalemate. You've got the idea that, with our help, you can conquer planets without the necessity of destroying them, and thereby take over number one spot from these Sunda friends of yours." "Don't call those damn lobsters friends," growled Ggaran. He subsided at the Viceroy's gesture. "Exactly," said His Effulgence to Crownwall. "You broke our blockade without any trouble. Our instruments didn't even wiggle when you landed here on my capital world. You can do the same on the worlds of the Sunda. Now, just tell us how you did it, and we're partners." Crownwall lifted one eyebrow quizzically, but remained silent. He didn't expect his facial gesture to be interpreted correctly, but he assumed that his silence would be. He was correct. "Of course," His Effulgence said, "we will give you any assurances that your people may desire in order to feel safe, and we will guarantee them an equal share in the government of the Galaxy." "Bunk," said Crownwall. His Effulgence lifted a tentacle swiftly, before Ggaran, lunging angrily forward, could speak. "Then what do you want of us?" "It seems to me that we need no wordy assurances from each other," said Crownwall, and he puffed a cigarette aglow. 
"We can arrange something a little more trustworthy, I believe. On your side, you have the power to destroy our only planet at any time. That is certainly adequate security for our own good behavior and sincerity. "It is impossible for us of Earth to destroy all of your planets. As you have said, there are more planets that belong to you than there are human beings on Earth. But there is a way for us to be reasonably sure that you will behave yourselves. You will transfer to us, at once, a hundred of your planet-destroying bombs. That will be a sufficient supply to let us test some of them, to see that they are in good working order. Then, if you try any kind of double-cross, we will be able to use our own methods—which you cannot prevent—to send one of those bombs here to destroy this planet. "And if you try to move anywhere else, by your clumsy distorter drive, we can follow you, and destroy any planet you choose to land on. You would not get away from us. We can track you without any difficulty. "We wouldn't use the bombs lightly, to be sure, because of what would happen to Earth. And don't think that blowing up our planet would save you, because we naturally wouldn't keep the bombs on Earth. How does that sound to you?" "Ridiculous," snorted Ggaran. "Impossible." After several minutes of silent consideration, "It is an excellent plan," said His Effulgence. "It is worthy of the thinking of The People ourselves. You Earthlings will make very satisfactory allies. What you request will be provided without delay. Meanwhile, I see no reason why we cannot proceed with our discussions." "Nor do I," consented Crownwall. "But your stooge here doesn't seem very happy about it all." His Effulgence wiggled his tentacles. "I'm afraid that Ggaran had expected to take what you Earthlings have to offer without giving anything in return. I never had any such ideas. I have not underestimated you, you see." "That's nice," said Crownwall graciously. 
"And now," Ggaran put in, "I think it's time for you to tell us something about how you get across light-years of space in a few hours, without leaving any traces for us to detect." He raised a tentacle to still Crownwall's immediate exclamation of protest. "Oh, nothing that would give us a chance to duplicate it—just enough to indicate how we can make use of it, along with you—enough to allow us to begin to make intelligent plans to beat the claws off the Master Race." After due consideration, Crownwall nodded. "I don't see why not. Well, then, let me tell you that we don't travel in space at all. That's why I didn't show up on any of your long-range detection instruments. Instead, we travel in time. Surely any race that has progressed as far as your own must know, at least theoretically, that time travel is entirely possible. After all, we knew it, and we haven't been around nearly as long as you have." "We know about it," said Ffallk, "but we've always considered it useless—and very dangerous—knowledge." "So have we, up until the time you planted that bomb on us. Anyone who tried to work any changes in his own past would be almost certain to end up finding himself never having been born. So we don't do any meddling. What we have discovered is a way not only of moving back into the past, but also of making our own choice of spatial references while we do it, and of changing our spatial anchor at will. "For example, to reach this planet, I went back far enough, using Earth as the spatial referent, to move with Earth a little more than a third of the way around this spiral nebula that is our Galaxy. Then I shifted my frame of reference to that of the group of galaxies of which ours is such a distinguished member. "Then of course, as I continued to move in time, the whole Galaxy moved spatially with reference to my own position. At the proper instant I shifted again, to the reference frame of this Galaxy itself. 
Then I was stationary in the Galaxy, and as I continued time traveling, your own mighty sun moved toward me as the Galaxy revolved. I chose a point where there was a time intersection of your planet's position and my own. When you got there, I just changed to the reference plane of this planet I'm on now, and then came on back with it to the present. So here I am. It was a long way around to cover a net distance of 26 light-years, but it was really very simple. "And there's no danger of meeting myself, or getting into any anachronistic situation. As you probably know, theory shows that these are excluded times for me, as is the future—I can't stop in them." "Are you sure that you haven't given us a little too much information for your own safety?" asked Ffallk softly. "Not at all. We were enormously lucky to have learned how to control spatial reference frames ourselves. I doubt if you could do it in another two million years." Crownwall rose to his feet. "And now, Your Effulgence, I think it's about time I went back to my ship and drove it home to Earth to make my report, so we can pick up those bombs and start making arrangements." "Excellent," said Ffallk. "I'd better escort you; my people don't like strangers much." "I'd noticed that," Crownwall commented drily. "Since this is a very important occasion, I think it best that we make this a Procession of Full Ceremony. It's a bother, but the proprieties have to be observed." Ggaran stepped out into the broad corridor and whistled a shrill two-tone note, using both his speaking and his eating orifices. A cohort of troops, pikes at the ready and bows strapped to their backs, leaped forward and formed a double line leading from His Effulgence's sanctum to the main door. Down this lane, carried by twenty men, came a large sedan chair. "Protocol takes a lot of time," said His Effulgence somewhat sadly, "but it must be observed. 
At least, as Ambassador, you can ride with me in the sedan, instead of walking behind it, like Ggaran." "I'm glad of that," said Crownwall. "Too bad Ggaran can't join us." He climbed into the chair beside Ffallk. The bearers trotted along at seven or eight kilometers an hour, carrying their contraption with absolute smoothness. Blasts from horns preceded them as they went. When they passed through the huge entrance doors of the palace and started down the ramp toward the street, Crownwall was astonished to see nobody on the previously crowded streets, and mentioned it to Ffallk. "When the Viceroy of the Seventy Suns," said the Viceroy of the Seventy Suns, "travels in state, no one but my own entourage is permitted to watch. And my guests, of course," he added, bowing slightly to Crownwall. "Of course," agreed Crownwall, bowing back. "Kind of you, I'm sure. But what happens if somebody doesn't get the word, or doesn't hear your trumpeters, or something like that?" Ggaran stepped forward, already panting slightly. "A man with knots in all of his ear stalks is in a very uncomfortable position," he explained. "Wait. Let me show you. Let us just suppose that that runner over there"—he gestured toward a soldier with a tentacle—"is a civilian who has been so unlucky as to remain on the street after His Effulgence's entourage arrived." He turned to one of the bowmen who ran beside the sedan chair, now strung and at the ready. "Show him!" he ordered peremptorily. In one swift movement the bowman notched an arrow, drew and fired. The arrow hissed briefly, and then sliced smoothly through the soldier's throat. "You see," said Ggaran complacently, "we have very little trouble with civilians who violate this particular tradition." His Effulgence beckoned to the bowman to approach. "Your results were satisfactory," he said, "but your release was somewhat shaky. The next time you show such sloppy form, you will be given thirty lashes." 
He leaned back on the cushion and spoke again to Crownwall. "That's the trouble with these requirements of civilization. The men of my immediate guard must practice with such things as pikes and bows and arrows, which they seldom get an opportunity to use. It would never do for them to use modern weapons on occasions of ceremony, of course." "Of course," said Crownwall, then added, "It's too bad that you can't provide them with live targets a little more often." He stifled a shudder of distaste. "Tell me, Your Effulgence, does the Emperor's race—the Master Race—also enjoy the type of civilization you have just had demonstrated for me?" "Oh, no. They are far too brutal, too morally degraded, to know anything of these finer points of etiquette and propriety. They are really an uncouth bunch. Why, do you know, I am certain that they would have had the bad taste to use an energy weapon to dispose of the victim in a case such as you just witnessed! They are really quite unfit to rule. They can scarcely be called civilized at all. But we will soon put a stop to all of that—your race and mine, of course." "I sincerely hope so," said Crownwall. Refreshments were served to His Effulgence and to Crownwall during the trip, without interrupting the smooth progress of the sedan. The soldiers of the cohort, the bearers and Ggaran continued to run—without food, drink or, except for Ggaran, evidence of fatigue. After several hours of travel, following Crownwall's directions, the procession arrived at the copse in which he had concealed his small transportation machine. The machine, for spatial mobility, was equipped with the heavy and grossly inefficient anti-gravity field generator developed by Kowalsky. It occupied ten times the space of the temporal translation and coordination selection systems combined, but it had the great advantage of being almost undetectable in use. It emitted no mass or radiation. 
After elaborate and lengthy farewells, Crownwall climbed into his machine and fell gently up until he was out of the atmosphere, before starting his enormous journey through time back to Earth. More quickly than it had taken him to reach his ship from the palace of His Effulgence, he was in the Council Chamber of the Confederation Government of Earth, making a full report on his trip to Vega. When he had finished, the President sighed deeply. "Well," he said, "we gave you full plenipotentiary powers, so I suppose we'll have to stand behind your agreements—especially in view of the fact that we'll undoubtedly be blown into atoms if we don't. But from what you say, I'd rather be in bed with a rattler than have a treaty with a Vegan. They sound ungodly murderous to me. There are too many holes in that protection plan of yours. It's only a question of time before they'll find some way around it, and then—poof—we'll all be dust." "Things may not be as bad as they seem," answered Crownwall complacently. "After I got back a few million years, I'm afraid I got a little careless and let my ship dip down into Vega III's atmosphere for a while. I was back so far that the Vegans hadn't appeared yet. Now, I didn't land—or deliberately kill anything—but I'd be mighty surprised if we didn't find a change or two. Before I came in here, I asked Marshall to take the ship out and check on things. He should be back with his report before long. Why don't we wait and see what he has to say?" Marshall was excited when he was escorted into the Council Chamber. He bowed briefly to the President and began to speak rapidly. "They're gone without trace— all of them !" he cried. "I went clear to Sunda and there's no sign of intelligent life anywhere! We're all alone now!" "There, you see?" exclaimed Crownwall. "Our enemies are all gone!" He looked around, glowing with victory, at the others at the table, then slowly quieted and sat down. He turned his head away from their accusing eyes. 
"Alone," he said, and unconsciously repeated Marshall's words: "We're all alone now." In silence, the others gathered their papers together and left the room, leaving Crownwall sitting at the table by himself. He shivered involuntarily, and then leaped to his feet to follow after them. Loneliness, he found, was something that he couldn't face alone. —L. J. STECHER, JR. Transcriber's Note: This etext was produced from Galaxy Magazine June 1960. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
### Introduction
Making article comments is a fundamental ability for an intelligent machine to understand an article and interact with humans. It is challenging because commenting requires comprehending the article, summarizing its main ideas, mining opinions, and generating natural language. Machine commenting is therefore an important problem in building an intelligent and interactive agent. It is also useful for improving the activeness of communities, including online forums and news websites. Article comments can provide extended information and external opinions that give readers a more comprehensive understanding of the article, so an article with more informative and interesting comments will attract more attention from readers. Moreover, machine commenting can kick off the discussion about an article or a topic, which helps increase user engagement and interaction between readers and authors. Because of these advantages, recent studies have focused on building machine commenting systems with neural models BIBREF0 . One bottleneck of neural machine commenting models is the requirement of a large parallel dataset; however, naturally paired commenting data is only loosely paired. Qin et al. QinEA2018 were the first to propose the article commenting task and an article-comment dataset. The dataset is crawled from a news website, and they sampled 1,610 article-comment pairs to annotate the relevance score between articles and comments. The relevance score ranges from 1 to 5, and we find that only 6.8% of the pairs have an average score greater than 4. This indicates that the naturally paired article-comment dataset contains many loose pairs, which can harm supervised models. Besides, most articles and comments on the Internet are unpaired.
For example, many articles on news websites have no corresponding comments, while comments about the news are more likely to appear on social media like Twitter. Since comments on social media are more varied and more recent, it is important to exploit these unpaired data. Another issue is the semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points as the source input. In article commenting, however, the comment does not always say the same thing as the corresponding article. Table TABREF1 shows an example of an article and several corresponding comments. The comments do not directly tell what happened in the news, but talk about the underlying topics (e.g. NBA Christmas Day games, LeBron James). Existing methods for machine commenting do not model the topics of articles, which can degrade the generated comments. To this end, we propose an unsupervised neural topic model to address both problems. For the first problem, we completely remove the need for parallel data and propose a novel unsupervised approach to train a machine commenting system, relying on nothing but unpaired articles and comments. For the second issue, we bridge the articles and comments with their topics. Our model is based on a retrieval-based commenting framework, which uses the news as the query to retrieve comments by the similarity of their topics. The topic is captured by a variational topic representation, which is trained in an unsupervised manner. The contributions of this work are as follows: ### Machine Commenting
In this section, we highlight the research challenges of machine commenting and provide some solutions to deal with them. ### Challenges
Here, we first introduce the challenges of building a well-performing machine commenting system. A generative model, such as the popular sequence-to-sequence model, is a direct choice for supervised machine commenting. One can use the title or the content of the article as the encoder input, and the comments as the decoder output. However, we find that the mode collapse problem is severe with the sequence-to-sequence model. Although the input articles vary widely, the outputs of the model are very similar. The reason mainly comes from the contradiction between the complex pattern of generating comments and the limited parallel data. In other natural language generation tasks, such as machine translation and text summarization, the target output is strongly related to the input, and most of the required information is contained in the input text. However, comments are often only weakly related to the input articles, and part of the information in the comments is external. Therefore, a supervised model requires much more paired data to alleviate the mode collapse problem. One article can have multiple correct comments, and these comments can be semantically very different from each other. However, the training set contains only some of the correct comments, so the other correct comments will be falsely regarded as negative samples by the supervised model. Therefore, many interesting and informative comments will be discouraged or neglected, because they are not paired with the articles in the training set. There is also a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points as the source input. In article commenting, however, the comments often contain external information, or even express an opinion opposite to the article's. Therefore, it is difficult to automatically mine the relationship between articles and comments. ### Solutions
Facing the above challenges, we provide three solutions. Given a large set of candidate comments, a retrieval model can select comments by matching articles with comments. Compared with the generative model, the retrieval model achieves more promising performance. First, the retrieval model is less likely to suffer from the mode collapse problem. Second, the generated comments are more predictable and controllable (by changing the candidate set). Third, the retrieval model can be combined with the generative model to produce new comments (by adding the outputs of generative models to the candidate set). Unsupervised learning is also important for machine commenting, to alleviate the problems described above. Unsupervised learning allows the model to exploit more data, which helps it learn more complex patterns of commenting and improves its generalization. Many comments provide unique opinions but have no paired articles. For example, many interesting comments on social media (e.g. Twitter) are about recent news, but matching these comments with the corresponding news articles requires considerable extra work. With the help of unsupervised learning, the model can also learn to generate these interesting comments. Additionally, unsupervised learning does not require negative samples in the training stage, so it can alleviate the negative sampling bias. Although there is a semantic gap between articles and comments, we find that most articles and comments share the same topics. Therefore, it is possible to bridge the semantic gap by modeling the topics of both articles and comments. This is also similar to how humans comment: a human does not need to read the whole article, but can make a comment after capturing its general topics. ### Proposed Approach
We now introduce our proposed approach as an implementation of the solutions above. We first give the definition and notation of the problem. Then, we introduce the retrieval-based commenting framework. After that, a neural variational topic model is introduced to model the topics of the comments and the articles. Finally, semi-supervised training is used to combine the advantages of both supervised and unsupervised learning. ### Retrieval-based Commenting
Given an article, the retrieval-based method aims to retrieve a comment from a large pool of candidate comments. The article consists of a title INLINEFORM0 and a body INLINEFORM1 . The comment pool is formed from a large set of candidate comments INLINEFORM2 , where INLINEFORM3 is the number of unique comments in the pool. In this work, we have 4.5 million human comments in the candidate set, and the comments are varied, covering topics from pets to sports. The retrieval-based model should score the matching between an upcoming article and each comment, and return the comments that best match the article. There are therefore two main challenges in retrieval-based commenting. One is how to evaluate the matching between articles and comments. The other is how to compute the matching scores efficiently, because the number of comments in the pool is large. To address both problems, we select the “dot-product” operation to compute matching scores. More specifically, the model first computes the representations of the article INLINEFORM0 and the comments INLINEFORM1 . Then the score between article INLINEFORM2 and comment INLINEFORM3 is computed with the “dot-product” operation: DISPLAYFORM0 The dot-product scoring method has proven successful in matching models BIBREF1 . The problem of finding the datapoints with the largest dot-product values is called Maximum Inner Product Search (MIPS), and many solutions exist for solving it efficiently. Therefore, even when the number of candidate comments is very large, the model can still find the best comments efficiently. However, a detailed study of MIPS is beyond the scope of this work. We refer the readers to relevant articles for more details about MIPS BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 .
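As a concrete illustration of the scoring step, here is a minimal brute-force sketch (the function name and the exhaustive scan are our own choices for exposition; a production system would replace the full scan with an approximate MIPS index):

```python
import numpy as np

def retrieve_comments(article_repr, comment_reprs, top_k=1):
    """Score every candidate comment against the article with a dot
    product and return the indices of the best-matching comments.

    article_repr:  (d,) topic representation of the article
    comment_reprs: (n, d) topic representations of the candidate pool
    """
    scores = comment_reprs @ article_repr   # (n,) dot-product scores
    best = np.argsort(-scores)[:top_k]      # highest scores first
    return best, scores[best]
```

Because the score is a plain inner product with no extra parameters, the full scan can later be swapped for any off-the-shelf MIPS index without changing the rest of the pipeline.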
Another advantage of the dot-product scoring method is that it does not require any extra parameters, so it is more suitable as part of an unsupervised model. ### Neural Variational Topic Model
We obtain the representations of articles INLINEFORM0 and comments INLINEFORM1 with a neural variational topic model. The neural variational topic model is based on the variational autoencoder framework, so it can be trained in an unsupervised manner. The model encodes the source text into a representation, from which it reconstructs the text. We concatenate the title and the body to represent the article. In our model, the representations of the article and the comment are obtained in the same way, so for simplicity we denote both the article and the comment as “document”. Since the articles are often very long (more than 200 words), we represent the documents as bags-of-words, to save both time and memory. We denote the bag-of-words representation as INLINEFORM0 , where INLINEFORM1 is the one-hot representation of the word at position INLINEFORM2 , and INLINEFORM3 is the number of words in the vocabulary. The encoder INLINEFORM4 compresses the bag-of-words representations INLINEFORM5 into topic representations INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are trainable parameters. Then the decoder INLINEFORM4 reconstructs the documents by independently generating each word in the bag-of-words: DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 is the number of words in the bag-of-words, and INLINEFORM1 is a trainable matrix that maps the topic representation into the word distribution. In order to model the topic information, we use a Dirichlet prior rather than the standard Gaussian prior. However, it is difficult to develop an effective reparameterization function for the Dirichlet prior to train the VAE. Therefore, following BIBREF6 , we use the Laplace approximation BIBREF7 to the Dirichlet prior INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 denotes the logistic normal distribution, INLINEFORM1 is the number of topics, and INLINEFORM2 is a parameter vector.
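To make the encoder–decoder pass concrete, the following numpy sketch mirrors the structure above: a bag-of-words input is encoded to a mean and log-variance, a latent vector is sampled with the reparameterization trick, and a softmax gives the logistic-normal topic mixture from which the word distribution is decoded. The layer sizes, the tanh nonlinearity, and the random initialization are illustrative assumptions; a real implementation would learn the weights by maximizing the variational lower bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class NeuralTopicModel:
    """Forward pass of a VAE-style topic model over bag-of-words input
    (illustrative sketch; weights here are random, not trained)."""

    def __init__(self, vocab_size, hidden, n_topics):
        self.W_h = rng.normal(0, 0.1, (vocab_size, hidden))      # encoder layer
        self.W_mu = rng.normal(0, 0.1, (hidden, n_topics))       # mean head
        self.W_lv = rng.normal(0, 0.1, (hidden, n_topics))       # log-variance head
        self.W_dec = rng.normal(0, 0.1, (n_topics, vocab_size))  # topic-word matrix

    def encode(self, bow):
        h = np.tanh(bow @ self.W_h)
        return h @ self.W_mu, h @ self.W_lv        # mu, log-variance

    def topic_repr(self, bow):
        mu, logvar = self.encode(bow)
        eps = rng.standard_normal(mu.shape)
        z = mu + np.exp(0.5 * logvar) * eps        # reparameterization trick
        return softmax(z)                          # logistic-normal topic mixture

    def decode(self, theta):
        return softmax(theta @ self.W_dec)         # word distribution over vocab
```

The same `topic_repr` is applied to both articles and comments at test time, which is what makes the dot-product retrieval over topic representations possible.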
Then, the variational lower bound is written as: L = −D_KL(q(z | X) ‖ p(z | α)) + E_{q(z | X)}[log p(X | z)], where the first term is the KL-divergence loss and the second term is the reconstruction loss. The mean μ and the variance σ^2 are computed as follows: μ = W_μ π, log σ^2 = W_σ π, where W_μ and W_σ are trainable parameters. We use μ and σ^2 to generate the samples z = μ + σ ⊙ ε by sampling ε ~ N(0, I), from which we reconstruct the input X. At the training stage, we train the neural variational topic model by maximizing the variational lower bound above. At the testing stage, we use the mean μ to compute the topic representations of the article and the comment.
### Training
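As a concrete illustration of the pieces involved in training the topic model described above, the following sketch computes the Laplace approximation of a Dirichlet prior, draws a reparameterized sample, and evaluates a diagonal-Gaussian KL term. This is a toy reconstruction; function names are hypothetical.

```python
import math, random

def laplace_approx(alpha):
    """Logistic-normal (mu0, var0) approximating a Dirichlet(alpha) prior,
    following the Laplace approximation referenced above."""
    K = len(alpha)
    log_a = [math.log(a) for a in alpha]
    mean_log = sum(log_a) / K
    mu0 = [la - mean_log for la in log_a]
    inv_sum = sum(1.0 / a for a in alpha)
    var0 = [(1.0 / a) * (1.0 - 2.0 / K) + inv_sum / (K * K) for a in alpha]
    return mu0, var0

def reparameterize(mu, var, rng):
    """Draw z = mu + sigma * eps with eps ~ N(0, 1), so gradients can
    flow through mu and var during training."""
    return [m + math.sqrt(v) * rng.gauss(0.0, 1.0) for m, v in zip(mu, var)]

def kl_two_gaussians(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    return sum(
        0.5 * (math.log(vp / vq) + (vq + (mq - mp) ** 2) / vp - 1.0)
        for mq, vq, mp, vp in zip(mu_q, var_q, mu_p, var_p)
    )

K = 50
mu0, var0 = laplace_approx([0.1] * K)    # symmetric Dirichlet prior
assert all(abs(m) < 1e-12 for m in mu0)  # symmetric alpha -> zero prior mean
assert all(v > 0 for v in var0)

rng = random.Random(0)
mu_q, var_q = [0.2] * K, [1.0] * K       # a toy variational posterior
z = reparameterize(mu_q, var_q, rng)
assert len(z) == K
assert kl_two_gaussians(mu_q, var_q, mu0, var0) >= 0.0
```

The KL term of the lower bound is then taken between the encoder's posterior and this logistic-normal approximation of the Dirichlet prior, rather than a standard Gaussian.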
In addition to unsupervised training, we explore a semi-supervised training framework that combines the proposed unsupervised model with a supervised model. In this scenario we have a paired dataset D_p that contains article-comment parallel pairs, and an unpaired dataset D_u that contains documents (articles or comments). The supervised model is trained on D_p so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both models with a joint objective function: L = L_u + λ L_s, where L_u is the loss function of the unsupervised learning (the negative of the variational lower bound above), L_s is the loss function of the supervised learning (e.g., the cross-entropy loss of the Seq2Seq model), and λ is a hyper-parameter that balances the two parts of the loss function. Hence, the model is trained on both the unpaired data D_u and the paired data D_p.
### Datasets
We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of users' comments. Following previous work BIBREF0 , we tokenize all text with the popular Python package Jieba, and filter out articles whose content is shorter than 30 words as well as those with fewer than 20 comments. The dataset is split into training/validation/test sets containing 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words, and the average comment length is 17 words.
### Implementation Details
The hidden size of the model is 512, and the batch size is 64. The number of topics K is 100. The weight λ in the joint objective is 1.0 under the semi-supervised setting. We prune the vocabulary, keeping only the 30,000 most frequent words. We train the model for 20 epochs with the Adam optimizer BIBREF8 . In order to alleviate the KL vanishing problem, we set a small initial learning rate and use batch normalization BIBREF9 in each layer. We also gradually increase the weight of the KL term from 0 to 1 after each epoch.
### Baselines
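The KL annealing mentioned in the implementation details above can be sketched as a simple per-epoch schedule. The linear shape is an assumption; the text only states that the weight grows from 0 to 1.

```python
def kl_weight(epoch, anneal_epochs=10):
    """Gradually increase the KL term's weight from 0 to 1 over the first
    epochs to alleviate KL vanishing (linear shape is an assumption)."""
    return min(1.0, epoch / anneal_epochs)

weights = [kl_weight(e) for e in range(12)]
assert weights[0] == 0.0
assert weights[10] == 1.0 == weights[11]
assert all(b >= a for a, b in zip(weights, weights[1:]))  # monotonically increasing
```

During training, the KL term of the lower bound would be multiplied by this weight, so early epochs emphasize reconstruction and later epochs restore the full objective.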
We compare our model with several unsupervised and supervised models. The unsupervised baselines are as follows: TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by the similarity of tf-idf values. The model is trained on unpaired articles and comments, the same as our proposed model. LDA (Topic, Non-Neural) is a popular unsupervised topic model, which discovers the abstract "topics" that occur in a collection of documents. We train the LDA with the articles and comments in the training set. The model retrieves the comments by the similarity of the topic representations. NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topics. The supervised baselines are: S2S (Generative) BIBREF11 is a supervised generative model based on the sequence-to-sequence network with the attention mechanism BIBREF12 . The model uses the titles and the bodies of the articles as the encoder input, and generates the comments with the decoder. IR (Retrieval) BIBREF0 is a supervised retrieval-based model, which trains a convolutional neural network (CNN) that takes an article and a comment as inputs and outputs a relevance score. The positive instances for training are the pairs in the training set, and the negative instances are randomly sampled using the negative sampling technique BIBREF13 .
### Retrieval Evaluation
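The TF-IDF baseline above retrieves comments by cosine similarity of tf-idf vectors; here is a minimal self-contained sketch, using a toy corpus, raw term counts, and log inverse document frequency as a simple weighting scheme (the baseline's exact weighting may differ):

```python
import math
from collections import Counter

def tfidf_vec(doc_tokens, idf):
    """Sparse tf-idf vector: raw term count times inverse document frequency."""
    tf = Counter(doc_tokens)
    return {w: c * idf.get(w, 0.0) for w, c in tf.items()}

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy candidate comments (already tokenized).
docs = [["stock", "market", "rises"], ["team", "wins", "match"], ["market", "crash"]]
N = len(docs)
df = Counter(w for d in docs for w in set(d))
idf = {w: math.log(N / c) for w, c in df.items()}

query = ["market", "rises"]  # concatenated title + body of the article
qv = tfidf_vec(query, idf)
scores = [cosine(qv, tfidf_vec(d, idf)) for d in docs]
best = max(range(N), key=lambda i: scores[i])
assert best == 0  # the finance comment is retrieved for the finance query
```

Because the scoring is purely lexical, a comment can rank highly just by sharing surface words with the article, which is exactly the weakness the topic-based model targets.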
For text generation, automatically evaluating the quality of the generated text is an open problem. In particular, the comments on a piece of news can be quite diverse, so it is intractable to enumerate all possible references to compare with the model outputs. Inspired by the evaluation methods for dialogue models, we formulate the evaluation as a ranking problem. Given a piece of news and a set of candidate comments, the comment model should rank the candidate comments. The candidate comment set consists of the following parts: Correct: the ground-truth comments of the corresponding news provided by humans. Plausible: the 50 comments most similar to the news. We use the news as the query to retrieve comments from the training set based on the cosine similarity of their tf-idf values, and select the top 50 comments that are not correct comments as the plausible comments. Popular: the 50 most popular comments from the dataset. We count the frequency of each comment in the training set, and select the 50 most frequent comments to form the popular comment set. The popular comments are general and meaningless comments, such as “Yes”, “Great”, “That's right”, and “Make Sense”. These comments are dull and do not carry any information, so they are regarded as incorrect comments. Random: after selecting the correct, plausible, and popular comments, we fill the candidate set with randomly selected comments from the training set so that it contains 200 unique comments. Following previous work, we measure the rank in terms of the following metrics: Recall@k: the proportion of human comments found in the top-k recommendations. Mean Rank (MR): the mean rank of the human comments. Mean Reciprocal Rank (MRR): the mean reciprocal rank of the human comments. The evaluation protocol is compatible with both retrieval models and generative models.
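The three ranking metrics have straightforward implementations; here is a minimal sketch under the definitions above (1-based ranks; candidate and comment IDs are illustrative):

```python
def rank_metrics(ranked_ids, correct_ids, k=1):
    """Recall@k, mean rank, and mean reciprocal rank of the human comments."""
    ranks = sorted(ranked_ids.index(c) + 1 for c in correct_ids)  # 1-based ranks
    recall_k = sum(1 for r in ranks if r <= k) / len(correct_ids)
    mr = sum(ranks) / len(ranks)
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    return recall_k, mr, mrr

ranked = ["c3", "c1", "c7", "c2"]  # candidates ordered by model score
correct = {"c1", "c2"}             # ground-truth human comments
r1, mr, mrr = rank_metrics(ranked, correct, k=1)
assert r1 == 0.0                   # no correct comment at rank 1
assert mr == 3.0                   # ranks 2 and 4 -> mean 3.0
assert abs(mrr - 0.375) < 1e-9     # (1/2 + 1/4) / 2
```

A retrieval model produces `ranked_ids` from its relevance scores, while a generative model can produce the same ordering from its log-likelihood scores, so the metrics apply to both.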
The retrieval model can directly rank the comments by assigning a score to each comment, while the generative model can rank the candidates by its log-likelihood score. Table 2 shows the performance of our models and the baselines in the retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by the similarity of words rather than semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework; it can capture semantic information, so it performs better than the TF-IDF model. LDA models the topic information and captures the deeper relationship between the article and comments, so it achieves improvements on all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because it learns both the semantic and the topic information. We also evaluate two popular supervised models, i.e., seq2seq and IR. Since the articles are very long, we find that neither RNN-based nor CNN-based encoders can hold all the words in the articles, so the length of the input articles must be limited. Therefore, we use an MLP-based encoder, the same as in our model, to encode the full length of articles. In our preliminary experiments, the MLP-based encoder with full-length articles achieves better scores than the RNN/CNN-based encoders with length-limited articles. The seq2seq model gets low scores on all relevance metrics, mainly because of the mode collapse problem described in the Challenges section. Unlike seq2seq, IR is based on a retrieval framework, so it achieves much better performance.
### Generative Evaluation
Following previous work BIBREF0 , we also evaluate the models under the generative evaluation setting. The retrieval-based models generate a comment by selecting it from a candidate set, which contains the comments in the training set. Unlike in the retrieval evaluation, the reference comments may not appear in the candidate set, which is closer to real-world settings. Generative models directly generate comments without a candidate set. We compare the comments produced by either the retrieval-based models or the generative models with the five reference comments, using four popular text-generation metrics: BLEU BIBREF14 , METEOR BIBREF15 , ROUGE BIBREF16 , and CIDEr BIBREF17 . Table 3 shows the performance of our models and the baselines in the generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, namely TF-IDF, NVDM, and LDA. Again, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve improvements under the semi-supervised scenario.
### Analysis and Discussion
We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different amounts of paired data. Figure 1 shows the curve (blue) of the recall@1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with the full unpaired data (4.8M) and different amounts of paired data (from 50K to 4.8M). IR+Proposed outperforms the supervised IR model given the same paired dataset, which indicates that the proposed model can exploit the unpaired data to further improve the performance of the supervised model. Although our proposed model achieves better performance than previous models, two questions remain: why our model outperforms them, and how to further improve the performance. To address these questions, we perform an error analysis of our model and the baseline models. We select TF-IDF, S2S, and IR as the representative baseline models. We provide 200 unique comments as the candidate set, which consists of the four types of comments described in the retrieval evaluation setting above: Correct, Plausible, Popular, and Random. We rank the candidate comment set with four models (TF-IDF, S2S, IR, and Proposed+IR), and record the types of the top-1 comments. Figure 2 shows the percentage of different types of top-1 comments produced by each model. TF-IDF prefers to rank the plausible comments as top-1, mainly because it matches articles with comments based on lexical similarity; the plausible comments, which are more similar in the lexicon, are therefore more likely to achieve higher scores than the correct comments. It also shows that the S2S model is more likely to rank popular comments as top-1.
The reason is that the S2S model suffers from the mode collapse problem and data sparsity, so it prefers short and general comments like “Great” or “That's right”, which appear frequently in the training set. The correct comments often contain new information and different language patterns from the training set, so they do not obtain a high score from S2S. IR achieves better performance than TF-IDF and S2S. However, it still struggles to discriminate between the plausible comments and the correct comments, mainly because IR does not explicitly model the underlying topics. Therefore, the correct comments, which are more topically relevant to the articles, get lower scores than the plausible comments, which are more literally relevant to the articles. With the help of our proposed model, Proposed+IR achieves the best performance and discriminates between the plausible and the correct comments more accurately. Our proposed model incorporates the topic information, so the correct comments, which are more similar to the articles in topic, obtain higher scores than the other types of comments. According to this analysis of our model's error types, further work should focus on avoiding predicting the plausible comments.
### Article Comment
There are few studies regarding machine commenting. Qin et al. QinEA2018 are the first to propose the article commenting task and a dataset, which we use to evaluate our model in this work. Most other studies about comments aim to automatically evaluate their quality. Park et al. ParkSDE16 propose a system called CommentIQ, which assists comment moderators in identifying high-quality comments. Napoles et al. NapolesTPRP17 propose identifying engaging, respectful, and informative conversations; they present a Yahoo news comment threads dataset and an annotation scheme for the new task of identifying “good” online conversations. More recently, Kolhatkar and Taboada KolhatkarT17 propose a model to classify comments into constructive and non-constructive comments. In this work, we are also inspired by recent related work on natural language generation models BIBREF18 , BIBREF19 .
### Topic Model and Variational Auto-Encoder
Topic models BIBREF20 are among the most widely used models for learning unsupervised representations of text. One of the most popular approaches for modeling the topics of documents is Latent Dirichlet Allocation BIBREF21 , which assumes that a discrete mixture distribution over topics is sampled from a Dirichlet prior shared by all documents. In order to explore the space of different modeling assumptions, some black-box inference methods BIBREF22 , BIBREF23 have been proposed and applied to topic models. Kingma and Welling vae propose the Variational Auto-Encoder (VAE), where the generative model and the variational posterior are based on neural networks. VAE has recently been applied to modeling both the representation and the topics of documents. Miao et al. NVDM model the representation of a document with a VAE-based approach called the Neural Variational Document Model (NVDM). However, the representation in NVDM is a vector generated from a Gaussian distribution, so it is not very interpretable, unlike the multinomial mixture in the standard LDA model. To address this issue, Srivastava and Sutton nvlda propose the NVLDA model, which replaces the Gaussian prior with the logistic normal distribution to approximate the Dirichlet prior and bring the document vector into the multinomial space. More recently, Nallapati et al. sengen present a variational auto-encoder approach that models the posterior over topic assignments to sentences using an RNN.
### Conclusion
We explore a novel way to train a machine commenting model in an unsupervised manner. Based on the properties of the task, we propose using topics to bridge the semantic gap between articles and comments. We introduce a variational topic model to represent the topics, and match the articles and comments by the similarity of their topics. Experiments show that our topic-based approach significantly outperforms previous lexicon-based models. The model can also profit from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios.
Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)
Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. (METEOR, ROUGE, CIDEr, BLEU: higher is better.)
Figure 1: The performance of the supervised model and the semi-supervised model trained on different paired data sizes.
Figure 2: Error types of comments generated by different models.
|
Chinese dataset BIBREF0
|
Which of these does Dr. Niemand believe to be true about the timing of the attacks?
A. They are related to sunspots and the speed of the Earth's rotation
B. Overcast weather throws off the timing of paired attacks in different areas
C. The timing of the events depends on the movement of the moon, like tides of oceans
D. They are related to the sun's cycle and the speed at which S-Regions travel
|
DISTURBING SUN By PHILIP LATHAM Illustrated by Freas [Transcriber's Note: This etext was produced from Astounding Science Fiction May 1959. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] This, be it understood, is fiction—nothing but fiction—and not, under any circumstances, to be considered as having any truth whatever to it. It's obviously utterly impossible ... isn't it? An interview with Dr. I. M. Niemand, Director of the Psychophysical Institute of Solar and Terrestrial Relations, Camarillo, California. In the closing days of December, 1957, at the meeting of the American Association for the Advancement of Science in New York, Dr. Niemand delivered a paper entitled simply, "On the Nature of the Solar S-Regions." Owing to its unassuming title the startling implications contained in the paper were completely overlooked by the press. These implications are discussed here in an exclusive interview with Dr. Niemand by Philip Latham. LATHAM. Dr. Niemand, what would you say is your main job? NIEMAND. I suppose you might say my main job today is to find out all I can between activity on the Sun and various forms of activity on the Earth. LATHAM. What do you mean by activity on the Sun? NIEMAND. Well, a sunspot is a form of solar activity. LATHAM. Just what is a sunspot? NIEMAND. I'm afraid I can't say just what a sunspot is. I can only describe it. A sunspot is a region on the Sun that is cooler than its surroundings. That's why it looks dark. It isn't so hot. Therefore not so bright. LATHAM. Isn't it true that the number of spots on the Sun rises and falls in a cycle of eleven years? NIEMAND. The number of spots on the Sun rises and falls in a cycle of about eleven years. That word about makes quite a difference. LATHAM. In what way? NIEMAND. It means you can only approximately predict the future course of sunspot activity. Sunspots are mighty treacherous things. LATHAM. 
Haven't there been a great many correlations announced between sunspots and various effects on the Earth? NIEMAND. Scores of them. LATHAM. What is your opinion of these correlations? NIEMAND. Pure bosh in most cases. LATHAM. But some are valid? NIEMAND. A few. There is unquestionably a correlation between sunspots and disturbances of the Earth's magnetic field ... radio fade-outs ... auroras ... things like that. LATHAM. Now, Dr. Niemand, I understand that you have been investigating solar and terrestrial relationships along rather unorthodox lines. NIEMAND. Yes, I suppose some people would say so. LATHAM. You have broken new ground? NIEMAND. That's true. LATHAM. In what way have your investigations differed from those of others? NIEMAND. I think our biggest advance was the discovery that sunspots themselves are not the direct cause of the disturbances we have been studying on the Earth. It's something like the eruptions in rubeola. Attention is concentrated on the bright red papules because they're such a conspicuous symptom of the disease. Whereas the real cause is an invisible filterable virus. In the solar case it turned out to be these S-Regions. LATHAM. Why S-Regions? NIEMAND. We had to call them something. Named after the Sun, I suppose. LATHAM. You say an S-Region is invisible? NIEMAND. It is quite invisible to the eye but readily detected by suitable instrumental methods. It is extremely doubtful, however, if the radiation we detect is the actual cause of the disturbing effects observed. LATHAM. Just what are these effects? NIEMAND. Well, they're common enough, goodness knows. As old as the world, in fact. Yet strangely enough it's hard to describe them in exact terms. LATHAM. Can you give us a general idea? NIEMAND. I'll try. Let's see ... remember that speech from "Julius Caesar" where Cassius is bewailing the evil times that beset ancient Rome? 
I believe it went like this: "The fault, dear Brutus, is not in our stars but in ourselves that we are underlings." LATHAM. I'm afraid I don't see— NIEMAND. Well, Shakespeare would have been nearer the truth if he had put it the other way around. "The fault, dear Brutus, is not in ourselves but in our stars" or better "in the Sun." LATHAM. In the Sun? NIEMAND. That's right, in the Sun. I suppose the oldest problem in the world is the origin of human evil. Philosophers have wrestled with it ever since the days of Job. And like Job they have usually given up in despair, convinced that the origin of evil is too deep for the human mind to solve. Generally they have concluded that man is inherently wicked and sinful and that is the end of it. Now for the first time science has thrown new light on this subject. LATHAM. How is that? NIEMAND. Consider the record of history. There are occasional periods when conditions are fairly calm and peaceful. Art and industry flourished. Man at last seemed to be making progress toward some higher goal. Then suddenly— for no detectable reason —conditions are reversed. Wars rage. People go mad. The world is plunged into an orgy of bloodshed and misery. LATHAM. But weren't there reasons? NIEMAND. What reasons? LATHAM. Well, disputes over boundaries ... economic rivalry ... border incidents.... NIEMAND. Nonsense. Men always make some flimsy excuse for going to war. The truth of the matter is that men go to war because they want to go to war. They can't help themselves. They are impelled by forces over which they have no control. By forces outside of themselves. LATHAM. Those are broad, sweeping statements. Can't you be more specific? NIEMAND. Perhaps I'd better go back to the beginning. Let me see.... 
It all started back in March, 1955, when I started getting patients suffering from a complex of symptoms, such as profound mental depression, anxiety, insomnia, alternating with fits of violent rage and resentment against life and the world in general. These people were deeply disturbed. No doubt about that. Yet they were not psychotic and hardly more than mildly neurotic. Now every doctor gets a good many patients of this type. Such a syndrome is characteristic of menopausal women and some men during the climacteric, but these people failed to fit into this picture. They were married and single persons of both sexes and of all ages. They came from all walks of life. The onset of their attack was invariably sudden and with scarcely any warning. They would be going about their work feeling perfectly all right. Then in a minute the whole world was like some scene from a nightmare. A week or ten days later the attack would cease as mysteriously as it had come and they would be their old self again. LATHAM. Aren't such attacks characteristic of the stress and strain of modern life? NIEMAND. I'm afraid that old stress-and-strain theory has been badly overworked. Been hearing about it ever since I was a pre-med student at ucla . Even as a boy I can remember my grandfather deploring the stress and strain of modern life when he was a country doctor practicing in Indiana. In my opinion one of the most valuable contributions anthropologists have made in recent years is the discovery that primitive man is afflicted with essentially the same neurotic conditions as those of us who live a so-called civilized life. They have found savages displaying every symptom of a nervous breakdown among the mountain tribes of the Elgonyi and the Aruntas of Australia. No, Mr. Latham, it's time the stress-and-strain theory was relegated to the junk pile along with demoniac possession and blood letting. LATHAM. You must have done something for your patients— NIEMAND. 
A doctor must always do something for the patients who come to his office seeking help. First I gave them a thorough physical examination. I turned up some minor ailments—a slight heart murmur or a trace of albumin in the urine—but nothing of any significance. On the whole they were a remarkably healthy bunch of individuals, much more so than an average sample of the population. Then I made a searching inquiry into their personal life. Here again I drew a blank. They had no particular financial worries. Their sex life was generally satisfactory. There was no history of mental illness in the family. In fact, the only thing that seemed to be the matter with them was that there were times when they felt like hell. LATHAM. I suppose you tried tranquilizers? NIEMAND. Oh, yes. In a few cases in which I tried tranquilizing pills of the meprobamate type there was some slight improvement. I want to emphasize, however, that I do not believe in prescribing shotgun remedies for a patient. To my way of thinking it is a lazy slipshod way of carrying on the practice of medicine. The only thing for which I do give myself credit was that I asked my patients to keep a detailed record of their symptoms taking special care to note the time of exacerbation—increase in the severity of the symptoms—as accurately as possible. LATHAM. And this gave you a clue? NIEMAND. It was the beginning. In most instances patients reported the attack struck with almost the impact of a physical blow. The prodromal symptoms were usually slight ... a sudden feeling of uneasiness and guilt ... hot and cold flashes ... dizziness ... double vision. Then this ghastly sense of depression coupled with a blind insensate rage at life. One man said he felt as if the world were closing in on him. Another that he felt the people around him were plotting his destruction. One housewife made her husband lock her in her room for fear she would injure the children. 
I pored over these case histories for a long time getting absolutely nowhere. Then finally a pattern began to emerge. LATHAM. What sort of pattern? NIEMAND. The first thing that struck me was that the attacks all occurred during the daytime, between the hours of about seven in the morning and five in the evening. Then there were these coincidences— LATHAM. Coincidences? NIEMAND. Total strangers miles apart were stricken at almost the same moment. At first I thought nothing of it but as my records accumulated I became convinced it could not be attributed to chance. A mathematical analysis showed the number of coincidences followed a Poisson distribution very closely. I couldn't possibly see what daylight had to do with it. There is some evidence that mental patients are most disturbed around the time of full moon, but a search of medical literature failed to reveal any connection with the Sun. LATHAM. What did you do? NIEMAND. Naturally I said nothing of this to my patients. I did, however, take pains to impress upon them the necessity of keeping an exact record of the onset of an attack. The better records they kept the more conclusive was the evidence. Men and women were experiencing nearly simultaneous attacks of rage and depression all over southern California, which was as far as my practice extended. One day it occurred to me: if people a few miles apart could be stricken simultaneously, why not people hundreds or thousands of miles apart? It was this idea that prompted me to get in touch with an old colleague of mine I had known at UC medical school, Dr. Max Hillyard, who was in practice in Utica, New York. LATHAM. With what result? NIEMAND. I was afraid the result would be that my old roommate would think I had gone completely crazy. Imagine my surprise and gratification on receiving an answer by return mail to the effect that he also had been getting an increasing number of patients suffering with the same identical symptoms as my own. 
Furthermore, upon exchanging records we did find that in many cases patients three thousand miles apart had been stricken simultaneously— LATHAM. Just a minute. I would like to know how you define "simultaneous." NIEMAND. We say an attack is simultaneous when one occurred on the east coast, for example, not earlier or later than five minutes of an attack on the west coast. That is about as close as you can hope to time a subjective effect of this nature. And now another fact emerged which gave us another clue. LATHAM. Which was? NIEMAND. In every case of a simultaneous attack the Sun was shining at both New York and California. LATHAM. You mean if it was cloudy— NIEMAND. No, no. The weather had nothing to do with it. I mean the Sun had to be above the horizon at both places. A person might undergo an attack soon after sunrise in New York but there would be no corresponding record of an attack in California where it was still dark. Conversely, a person might be stricken late in the afternoon in California without a corresponding attack in New York where the Sun had set. Dr. Hillyard and I had been searching desperately for a clue. We had both noticed that the attacks occurred only during the daylight hours but this had not seemed especially significant. Here we had evidence pointing directly to the source of trouble. It must have some connection with the Sun. LATHAM. That must have had you badly puzzled at first. NIEMAND. It certainly did. It looked as if we were headed back to the Middle Ages when astrology and medicine went hand in hand. But since it was our only lead we had no other choice but to follow it regardless of the consequences. Here luck played somewhat of a part, for Hillyard happened to have a contact that proved invaluable to us. Several years before Hillyard had gotten to know a young astrophysicist, Henry Middletown, who had come to him suffering from a severe case of myositis in the arms and shoulders. 
Hillyard had been able to effect a complete cure for which the boy was very grateful, and they had kept up a desultory correspondence. Middletown was now specializing in radio astronomy at the government's new solar observatory on Turtle Back Mountain in Arizona. If it had not been for Middletown's help I'm afraid our investigation would never have gotten past the clinical stage. LATHAM. In what way was Middletown of assistance? NIEMAND. It was the old case of workers in one field of science being completely ignorant of what was going on in another field. Someday we will have to establish a clearing house in science instead of keeping it in tight little compartments as we do at present. Well, Hillyard and I packed up for Arizona with considerable misgivings. We were afraid Middletown wouldn't take our findings seriously but somewhat to our surprise he heard our story with the closest attention. I guess astronomers have gotten so used to hearing from flying saucer enthusiasts and science-fiction addicts that nothing surprises them any more. When we had finished he asked to see our records. Hillyard had them all set down for easy numerical tabulation. Middletown went to work with scarcely a word. Within an hour he had produced a chart that was simply astounding. LATHAM. Can you describe this chart for us? NIEMAND. It was really quite simple. But if it had not been for Middletown's experience in charting other solar phenomena it would never have occurred to us to do it. First, he laid out a series of about thirty squares horizontally across a sheet of graph paper. He dated these beginning March 1, 1955, when our records began. In each square he put a number from 1 to 10 that was a rough index of the number and intensity of the attacks reported on that day. Then he laid out another horizontal row below the first one dated twenty-seven days later. That is, the square under March 1st in the top row was dated March 28th in the row below it. 
He filled in the chart until he had an array of dozens of rows that included all our data down to May, 1958. When Middletown had finished it was easy to see that the squares of highest index number did not fall at random on the chart. Instead they fell in slightly slanting parallel series so that you could draw straight lines down through them. The connection with the Sun was obvious. LATHAM. In what way? NIEMAND. Why, because twenty-seven days is about the synodic period of solar rotation. That is, if you see a large spot at the center of the Sun's disk today, there is a good chance if it survives that you will see it at the same place twenty-seven days later. But that night Middletown produced another chart that showed the connection with the Sun in a way that was even more convincing. LATHAM. How was that? NIEMAND. I said that the lines drawn down through the days of greatest mental disturbance slanted slightly. On this second chart the squares were dated under one another not at intervals of twenty-seven days, but at intervals of twenty-seven point three days. LATHAM. Why is that so important? NIEMAND. Because the average period of solar rotation in the sunspot zone is not twenty-seven days but twenty-seven point three days. And on this chart the lines did not slant but went vertically downward. The correlation with the synodic rotation of the Sun was practically perfect. LATHAM. But how did you get onto the S-Regions? NIEMAND. Middletown was immediately struck by the resemblance between the chart of mental disturbance and one he had been plotting over the years from his radio observations. Now when he compared the two charts the resemblance between the two was unmistakable. The pattern shown by the chart of mental disturbance corresponded in a striking way with the solar chart but with this difference. The disturbances on the Earth started two days later on the average than the disturbances due to the S-Regions on the Sun. 
In other words, there was a lag of about forty-eight hours between the two. But otherwise they were almost identical. LATHAM. But if these S-Regions of Middletown's are invisible how could he detect them? NIEMAND. The S-Regions are invisible to the eye through an optical telescope, but are detected with ease by a radio telescope. Middletown had discovered them when he was a graduate student working on radio astronomy in Australia, and he had followed up his researches with the more powerful equipment at Turtle Back Mountain. The formation of an S-Region is heralded by a long series of bursts of a few seconds duration, when the radiation may increase up to several thousand times that of the background intensity. These noise storms have been recorded simultaneously on wavelengths of from one to fifteen meters, which so far is the upper limit of the observations. In a few instances, however, intense bursts have also been detected down to fifty cm. LATHAM. I believe you said the periods of mental disturbance last for about ten or twelve days. How does that tie in with the S-Regions? NIEMAND. Very closely. You see it takes about twelve days for an S-Region to pass across the face of the Sun, since the synodic rotation is twenty-seven point three days. LATHAM. I should think it would be nearer thirteen or fourteen days. NIEMAND. Apparently an S-Region is not particularly effective when it is just coming on or just going off the disk of the Sun. LATHAM. Are the S-Regions associated with sunspots? NIEMAND. They are connected in this way: that sunspot activity and S-Region activity certainly go together. The more sunspots the more violent and intense is the S-Region activity. But there is not a one-to-one correspondence between sunspots and S-Regions. That is, you cannot connect a particular sunspot group with a particular S-Region. The same thing is true of sunspots and magnetic storms. LATHAM. How do you account for this? NIEMAND. We don't account for it. LATHAM. 
What other properties of the S-Regions have you discovered? NIEMAND. Middletown says that the radio waves emanating from them are strongly circularly polarized. Moreover, the sense of rotation remains constant while one is passing across the Sun. If the magnetic field associated with an S-Region extends into the high solar corona through which the rays pass, then the sense of rotation corresponds to the ordinary ray of the magneto-ionic theory. LATHAM. Does this mean that the mental disturbances arise from some form of electromagnetic radiation? NIEMAND. We doubt it. As I said before, the charts show a lag of about forty-eight hours between the development of an S-Region and the onset of mental disturbance. This indicates that the malignant energy emanating from an S-Region consists of some highly penetrating form of corpuscular radiation, as yet unidentified. [A] LATHAM. A question that puzzles me is why some people are affected by the S-Regions while others are not. NIEMAND. Our latest results indicate that probably no one is completely immune. All are affected in some degree. Just why some should be affected so much more than others is still a matter of speculation. LATHAM. How long does an S-Region last? NIEMAND. An S-Region may have a lifetime of from three to perhaps a dozen solar rotations. Then it dies out and for a time we are free from this malignant radiation. Then a new region develops in perhaps an entirely different region of the Sun. Sometimes there may be several different S-Regions all going at once. LATHAM. Why were not the S-Regions discovered long ago? NIEMAND. Because the radio exploration of the Sun only began since the end of World War II. LATHAM. How does it happen that you only got patients suffering from S-radiation since about 1955? NIEMAND. I think we did get such patients previously but not in large enough numbers to attract attention. Also the present sunspot cycle started its rise to maximum about 1954. LATHAM. 
Is there no way of escaping the S-radiation? NIEMAND. I'm afraid the only sure way is to keep on the unilluminated side of the Earth which is rather difficult to do. Apparently the corpuscular beam from an S-Region is several degrees wide and not very sharply defined, since its effects are felt simultaneously over the entire continent. Hillyard and Middletown are working on some form of shielding device but so far without success. LATHAM. What is the present state of S-Region activity? NIEMAND. At the present moment there happens to be no S-Region activity on the Sun. But a new one may develop at any time. Also, the outlook for a decrease in activity is not very favorable. Sunspot activity continues at a high level and is steadily mounting in violence. The last sunspot cycle had the highest maximum of any since 1780, but the present cycle bids fair to set an all time record. LATHAM. And so you believe that the S-Regions are the cause of most of the present trouble in the world. That it is not ourselves but something outside ourselves— NIEMAND. That is the logical outcome of our investigation. We are controlled and swayed by forces which in many cases we are powerless to resist. LATHAM. Could we not be warned of the presence of an S-Region? NIEMAND. The trouble is they seem to develop at random on the Sun. I'm afraid any warning system would be worse than useless. We would be crying WOLF! all the time. LATHAM. How may a person who is not particularly susceptible to this malignant radiation know that one of these regions is active? NIEMAND. If you have a feeling of restlessness and anxiety, if you are unable to concentrate, if you feel suddenly depressed and discouraged about yourself, or are filled with resentment toward the world, then you may be pretty sure that an S-Region is passing across the face of the Sun. Keep a tight rein on yourself. For it seems that evil will always be with us ... as long as the Sun shall continue to shine upon this little world. 
THE END

[A] Middletown believes that the intense radiation recently discovered from information derived from Explorer I and III has no connection with the corpuscular S-radiation.
|
D. They are related to the sun's cycle and the speed at which S-Regions travel
|
Which of these was not an impact of Russell's decision to kill Dunbar?
A. Russell would have to travel alone
B. He was able to pick the path to the correct sun
C. It became quieter in general
D. Arguments increased amongst the team
|
Transcriber's Note: This etext was produced from Space Science Fiction May 1952. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. TO EACH HIS STAR by BRYCE WALTON "Nothing around those other suns but ashes and dried blood," old Dunbar told the space-wrecked, desperate men. "Only one way to go, where we can float down through the clouds to Paradise. That's straight ahead to the sun with the red rim around it." But Dunbar's eyes were old and uncertain. How could they believe in his choice when every star in this forsaken section of space was surrounded by a beckoning red rim? There was just blackness, frosty glimmering terrible blackness, going out and out forever in all directions. Russell didn't think they could remain sane in all this blackness much longer. Bitterly he thought of how they would die—not knowing within maybe thousands of light years where they were, or where they were going. After the wreck, the four of them had floated a while, floated and drifted together, four men in bulbous pressure suits like small individual rockets, held together by an awful pressing need for each other and by the "gravity-rope" beam. Dunbar, the oldest of the four, an old space-buster with a face wrinkled like a dried prune, burned by cosmic rays and the suns of worlds so far away they were scarcely credible, had taken command. Suddenly, Old Dunbar had known where they were. Suddenly, Dunbar knew where they were going. They could talk to one another through the etheric transmitters inside their helmets. They could live ... if this was living ... a long time, if only a man's brain would hold up, Russell thought. The suits were complete units. 700 pounds each, all enclosing shelters, with atmosphere pressure, temperature control, mobility in space, and electric power. 
Each suit had its own power-plant, reprocessing continuously the precious air breathed by the occupants, putting it back into circulation again after enriching it. Packed with food concentrates. Each suit a rocket, each human being part of a rocket, and the special "life-gun" that went with each suit each blast of which sent a man a few hundred thousand miles further on toward wherever he was going. Four men, thought Russell, held together by an invisible string of gravity, plunging through a lost pocket of hell's dark where there had never been any sound or life, with old Dunbar the first in line, taking the lead because he was older and knew where he was and where he was going. Maybe Johnson, second in line, and Alvar who was third, knew too, but were afraid to admit it. But Russell knew it and he'd admitted it from the first—that old Dunbar was as crazy as a Jovian juke-bird. A lot of time had rushed past into darkness. Russell had no idea now how long the four of them had been plunging toward the red-rimmed sun that never seemed to get any nearer. When the ultra-drive had gone crazy the four of them had blanked out and nobody could say now how long an interim that had been. Nobody knew what happened to a man who suffered a space-time warping like that. When they had regained consciousness, the ship was pretty banged up, and the meteor-repeller shields cracked. A meteor ripped the ship down the center like an old breakfast cannister. How long ago that had been, Russell didn't know. All Russell knew was that they were millions of light years from any place he had ever heard about, where the galactic space lanterns had absolutely no recognizable pattern. But Dunbar knew. 
And Russell was looking at Dunbar's suit up ahead, watching it more and more intently, thinking about how Dunbar looked inside that suit—and hating Dunbar more and more for claiming he knew when he didn't, for his drooling optimism—because he was taking them on into deeper darkness and calling their destination Paradise. Russell wanted to laugh, but the last time he'd given way to this impulse, the results inside his helmet had been too unpleasant to repeat. Sometimes Russell thought of other things besides his growing hatred of the old man. Sometimes he thought about the ship, lost back there in the void, and he wondered if wrecked space ships were ever found. Compared with the universe in which one of them drifted, a wrecked ship was a lot smaller than a grain of sand on a nice warm beach back on Earth, or one of those specks of silver dust that floated like strange seeds down the night winds of Venus. And a human was smaller still, thought Russell when he was not hating Dunbar. Out here, a human being is the smallest thing of all. He thought then of what Dunbar would say to such a thought, how Dunbar would laugh that high piping squawking laugh of his and say that the human being was bigger than the Universe itself. Dunbar had a big answer for every little thing. When the four of them had escaped from that prison colony on a sizzling hot asteroid rock in the Ronlwhyn system, that wasn't enough for Dunbar. Hell no—Dunbar had to start talking about a place they could go where they'd never be apprehended, in a system no one else had ever heard of, where they could live like gods on a green soft world like the Earth had been a long time back. And Dunbar had spouted endlessly about a world of treasure they would find, if they would just follow old Dunbar. 
That's what all four of them had been trying to find all their lives in the big cold grabbag of eternity—a rich star, a rich far fertile star where no one else had ever been, loaded with treasure that had no name, that no one had ever heard of before. And was, because of that, the richest treasure of all. We all look alike out here in these big rocket pressure suits, Russell thought. No one for God only knew how many of millions of light years away could see or care. Still—we might have a chance to live, even now, Russell thought—if it weren't for old crazy Dunbar. They might have a chance if Alvar and Johnson weren't so damn lacking in self-confidence as to put all their trust in that crazed old rum-dum. Russell had known now for some time that they were going in the wrong direction. No reason for knowing. Just a hunch. And Russell was sure his hunch was right. Russell said. "Look—look to your left and to your right and behind us. Four suns. You guys see those other three suns all around you, don't you?" "Sure," someone said. "Well, if you'll notice," Russell said, "the one on the left also now has a red rim around it. Can't you guys see that?" "Yeah, I see it," Alvar said. "So now," Johnson said, "there's two suns with red rims around them." "We're about in the middle of those four suns aren't we, Dunbar?" Russell said. "That's right, boys!" yelled old Dunbar in that sickeningly optimistic voice. Like a hysterical old woman's. "Just about in the sweet dark old middle." "You're still sure it's the sun up ahead ... that's the only one with life on it, Dunbar ... the only one we can live on?" Russell asked. "That's right! That's right," Dunbar yelled. "That's the only one—and it's a paradise. Not just a place to live, boys—but a place you'll have trouble believing in because it's like a dream!" "And none of these other three suns have worlds we could live on, Dunbar?" Russell asked. 
Keep the old duck talking like this and maybe Alvar and Johnson would see that he was cracked. "Yeah," said Alvar. "You still say that, Dunbar?" "No life, boys, nothing," Dunbar laughed. "Nothing on these other worlds but ashes ... just ashes and iron and dried blood, dried a million years or more." "When in hell were you ever here?" Johnson said. "You say you were here before. You never said when, or why or anything!" "It was a long time back boys. Don't remember too well, but it was when we had an old ship called the DOG STAR that I was here. A pirate ship and I was second in command, and we came through this sector. That was—hell, it musta' been fifty years ago. I been too many places nobody's ever bothered to name or chart, to remember where it is, but I been here. I remember those four suns all spotted to form a perfect circle from this point, with us squarely in the middle. We explored all these suns and the worlds that go round 'em. Trust me, boys, and we'll reach the right one. And that one's just like Paradise." "Paradise is it," Russell whispered hoarsely. "Paradise and there we'll be like gods, like Mercuries with wings flying on nights of sweet song. These other suns, don't let them bother you. They're Jezebels of stars. All painted up in the darkness and pretty and waiting and calling and lying! They make you think of nice green worlds all running waters and dews and forests thick as fleas on a wet dog. But it ain't there, boys. I know this place. I been here, long time back." Russell said tightly. "It'll take us a long time won't it? If it's got air we can breathe, and water we can drink and shade we can rest in—that'll be paradise enough for us. But it'll take a long time won't it? And what if it isn't there—what if after all the time we spend hoping and getting there—there won't be nothing but ashes and cracked clay?" "I know we're going right," Dunbar said cheerfully. "I can tell. Like I said—you can tell it because of the red rim around it." 
"But the sun on our left, you can see—it's got a red rim too now," Russell said. "Yeah, that's right," said Alvar. "Sometimes I see a red rim around the one we're going for, sometimes a red rim around that one on the left. Now, sometimes I'm not sure either of them's got a red rim. You said that one had a red rim, Dunbar, and I wanted to believe it. So now maybe we're all seeing a red rim that was never there." Old Dunbar laughed. The sound brought blood hotly to Russell's face. "We're heading to the right one, boys. Don't doubt me ... I been here. We explored all these sun systems. And I remember it all. The second planet from that red-rimmed sun. You come down through a soft atmosphere, floating like in a dream. You see the green lakes coming up through the clouds and the women dancing and the music playing. I remember seeing a ship there that brought those women there, a long long time before ever I got there. A land like heaven and women like angels singing and dancing and laughing with red lips and arms white as milk, and soft silky hair floating in the winds." Russell was very sick of the old man's voice. He was at least glad he didn't have to look at the old man now. His bald head, his skinny bobbing neck, his simpering watery blue eyes. But he still had to suffer that immutable babbling, that idiotic cheerfulness ... and knowing all the time the old man was crazy, that he was leading them wrong. I'd break away, go it alone to the right sun, Russell thought—but I'd never make it alone. A little while out here alone and I'd be nuttier than old Dunbar will ever be, even if he keeps on getting nuttier all the time. Somewhere, sometime then ... Russell got the idea that the only way was to get rid of Dunbar. "You mean to tell us there are people living by that red-rimmed sun," Russell said. "Lost people ... lost ... who knows how long," Dunbar said, as the four of them hurtled along. 
"You never know where you'll find people on a world somewhere nobody's ever named or knows about. Places where a lost ship's landed and never got up again, or wrecked itself so far off the lanes they'll never be found except by accident for millions of years. That's what this world is, boys. Must have been a ship load of beautiful people, maybe actresses and people like that being hauled to some outpost to entertain. They're like angels now, living in a land all free from care. Every place you see green forests and fields and blue lakes, and at nights there's three moons that come around the sky in a thousand different colors. And it never gets cold ... it's always spring, always spring, boys, and the music plays all night, every night of a long long year...." Russell suddenly shouted. "Keep quiet, Dunbar. Shut up will you?" Johnson said. "Dunbar—how long'll it take us?" "Six months to a year, I'd say," Dunbar yelled happily. "That is—of our hereditary time." "What?" croaked Alvar. Johnson didn't say anything at all. Russell screamed at Dunbar, then quieted down. He whispered. "Six months to a year—out here—cooped up in these damn suits. You're crazy as hell, Dunbar. Crazy ... crazy! Nobody could stand it. We'll all be crazier than you are—" "We'll make it, boys. Trust ole' Dunbar. What's a year when we know we're getting to Paradise at the end of it? What's a year out here ... it's paradise ain't it, compared with that prison hole we were rotting in? We can make it. We have the food concentrates, and all the rest. All we need's the will, boys, and we got that. The whole damn Universe isn't big enough to kill the will of a human being, boys. I been over a whole lot of it, and I know. In the old days—" "The hell with the old days," screamed Russell. "Now quiet down, Russ," Dunbar said in a kind of dreadful crooning whisper. "You calm down now. You younger fellows—you don't look at things the way we used to. Thing is, we got to go straight. 
People trapped like this liable to start meandering. Liable to start losing the old will-power." He chuckled. "Yeah," said Alvar. "Someone says maybe we ought to go left, and someone says to go right, and someone else says to go in another direction. And then someone says maybe they'd better go back the old way. An' pretty soon something breaks, or the food runs out, and you're a million million miles from someplace you don't care about any more because you're dead. All frozen up in space ... preserved like a piece of meat in a cold storage locker. And then maybe in a million years or so some lousy insect man from Jupiter comes along and finds you and takes you away to a museum...." "Shut up!" Johnson yelled. Dunbar laughed. "Boys, boys, don't get panicky. Keep your heads. Just stick to old Dunbar and he'll see you through. I'm always lucky. Only one way to go ... an' that's straight ahead to the sun with the red-rim around it ... and then we tune in the gravity repellers, and coast down, floating and singing down through the clouds to paradise." After that they traveled on for what seemed months to Russell, but it couldn't have been over a day or two of the kind of time-sense he had inherited from Earth. Then he saw how the other two stars also were beginning to develop red rims. He yelled this fact out to the others. And Alvar said. "Russ's right. That sun to the right, and the one behind us ... now they ALL have red rims around them. Dunbar—" A pause and no awareness of motion. Dunbar laughed. "Sure, they all maybe have a touch of red, but it isn't the same, boys. I can tell the difference. Trust me—" Russell half choked on his words. "You old goat! With those old eyes of yours, you couldn't see your way into a fire!" "Don't get panicky now. Keep your heads. In another year, we'll be there—" "God, you gotta' be sure," Alvar said. "I don't mind dyin' out here. 
But after a year of this, and then to get to a world that was only ashes, and not able to go any further—" "I always come through, boys. I'm lucky. Angel women will take us to their houses on the edges of cool lakes, little houses that sit there in the sun like fancy jewels. And we'll walk under colored fountains, pretty colored fountains just splashing and splashing like pretty rain on our hungry hides. That's worth waiting for." Russell did it before he hardly realized he was killing the old man. It was something he had had to do for a long time and that made it easy. There was a flash of burning oxygen from inside the suit of Dunbar. If he'd aimed right, Russell knew the fire-bullet should have pierced Dunbar's back. Now the fire was gone, extinguished automatically by units inside the suit. The suit was still inflated, self-sealing. Nothing appeared to have changed. The four of them hurtling on together, but inside that first suit up there on the front of the gravity rope, Dunbar was dead. He was dead and his mouth was shut for good. Dunbar's last faint cry from inside his suit still rang in Russell's ears, and he knew Alvar and Johnson had heard it too. Alvar and Johnson both called Dunbar's name a few times. There was no answer. "Russ—you shouldn't have done that," Johnson whispered. "You shouldn't have done that to the old man!" "No," Alvar said, so low he could barely be heard. "You shouldn't have done it." "I did it for the three of us," Russell said. "It was either him or us. Lies ... lies that was all he had left in his crazy head. Paradise ... don't tell me you guys don't see the red rims around all four suns, all four suns all around us. Don't tell me you guys didn't know he was batty, that you really believed all that stuff he was spouting all the time!" "Maybe he was lying, maybe not," Johnson said. "Now he's dead anyway." "Maybe he was wrong, crazy, full of lies," Alvar said. "But now he's dead." 
"How could he see any difference in those four stars?" Russell said, louder. "He thought he was right," Alvar said. "He wanted to take us to paradise. He was happy, nothing could stop the old man—but he's dead now." He sighed. "He was taking us wrong ... wrong!" Russell screamed. "Angels—music all night—houses like jewels—and women like angels—" "Shhhh," said Alvar. It was quiet. How could it be so quiet, Russell thought? And up ahead the old man's pressure suit with a corpse inside went on ahead, leading the other three at the front of the gravity-rope. "Maybe he was wrong," Alvar said. "But now do we know which way is right?" Sometime later, Johnson said, "We got to decide now. Let's forget the old man. Let's forget him and all that's gone and let's start now and decide what to do." And Alvar said, "Guess he was crazy all right, and I guess we trusted him because we didn't have the strength to make up our own minds. Why does a crazy man's laugh sound so good when you're desperate and don't know what to do?" "I always had a feeling we were going wrong," Johnson said. "Anyway, it's forgotten, Russ. It's swallowed up in the darkness all around. It's never been." Russell said, "I've had a hunch all along that maybe the old man was here before, and that he was right about there being a star here with a world we can live on. But I've known we was heading wrong. I've had a hunch all along that the right star was the one to the left." "I don't know," Johnson sighed. "I been feeling partial toward that one on the right. What about you, Alvar?" "I always thought we were going straight in the opposite direction from what we should, I guess. I always wanted to turn around and go back. It won't make over maybe a month's difference. And what does a month matter anyway out here—hell there never was any time out here until we came along. We make our own time here, and a month don't matter to me." Sweat ran down Russell's face. His voice trembled. "No—that's wrong. 
You're both wrong." He could see himself going it alone. Going crazy because he was alone. He'd have broken away, gone his own direction, long ago but for that fear. "How can we tell which of us is right?" Alvar said. "It's like everything was changing all the time out here. Sometimes I'd swear none of those suns had red rims, and at other times—like the old man said, they're all pretty and lying and saying nothing, just changing all the time. Jezebel stars, the old man said." "I know I'm right," Russell pleaded. "My hunches always been right. My hunch got us out of that prison didn't it? Listen—I tell you it's that star to the left—" "The one to the right," said Johnson. "We been going away from the right one all the time," said Alvar. "We got to stay together," said Russell. "Nobody could spend a year out here ... alone...." "Ah ... in another month or so we'd be lousy company anyway," Alvar said. "Maybe a guy could get to the point where he'd sleep most of the time ... just wake up enough times to give himself another boost with the old life-gun." "We got to face it," Johnson said finally. "We three don't go on together any more." "That's it," said Alvar. "There's three suns that look like they might be right seeing as how we all agree the old man was wrong. But we believe there is one we can live by, because we all seem to agree that the old man might have been right about that. If we stick together, the chance is three to one against us. But if each of us makes for one star, one of us has a chance to live. Maybe not in paradise like the old man said, but a place where we can live. And maybe there'll be intelligent life, maybe even a ship, and whoever gets the right star can come and help the other two...." "No ... God no...." Russell whispered over and over. "None of us can ever make it alone...." Alvar said, "We each take the star he likes best. I'll go back the other way. Russ, you take the left. And you, Johnson, go to the right." Johnson started to laugh. 
Russell was yelling wildly at them, and above his own yelling he could hear Johnson's rising laughter. "Every guy's got a star of his own," Johnson said when he stopped laughing. "And we got ours. A nice red-rimmed sun for each of us to call his very own." "Okay," Alvar said. "We cut off the gravity rope, and each to his own sun." Now Russell wasn't saying anything. "And the old man," Alvar said, "can keep right on going toward what he thought was right. And he'll keep on going. Course he won't be able to give himself another boost with the life-gun, but he'll keep going. Someday he'll get to that red-rimmed star of his. Out here in space, once you're going, you never stop ... and I guess there isn't any other body to pull him off his course. And what will time matter to old Dunbar? Even less than to us, I guess. He's dead and he won't care." "Ready," Johnson said. "I'll cut off the gravity rope." "I'm ready," Alvar said. "To go back toward whatever it was I started from." "Ready, Russ?" Russell couldn't say anything. He stared at the endless void which now he would share with no one. Not even crazy old Dunbar. "All right," Johnson said. "Good-bye." Russell felt the release, felt the sudden inexplicable isolation and aloneness even before Alvar and Johnson used their life-guns and shot out of sight, Johnson toward the left and Alvar back toward that other red-rimmed sun behind them. And old Dunbar shooting right on ahead. And all three of them dwindling and dwindling and blinking out like little lights. Fading, he could hear their voices. "Each to his own star," Johnson said. "On a bee line." "On a bee line," Alvar said. Russell used his own life-gun and in a little while he didn't hear Alvar or Johnson's voices, nor could he see them. They were thousands of miles away, and going further all the time. Russell's head fell forward against the front of his helmet, and he closed his eyes. "Maybe," he thought, "I shouldn't have killed the old man. 
Maybe one sun's as good as another...." Then he raised his body and looked out into the year of blackness that waited for him, stretching away to the red-rimmed sun. Even if he were right—he was sure now he'd never make it alone. The body inside the pressure suit drifted into a low-level orbit around the second planet from the sun of its choice, and drifted there a long time. A strato-cruiser detected it by chance because of the strong concentration of radio-activity that came from it. They took the body down to one of the small, quiet towns on the edge of one of the many blue lakes where the domed houses were like bright joyful jewels. They got the leathery, well-preserved body from the pressure suit. "An old man," one of them mused. "A very old man. From one of the lost sectors. I wonder how and why he came so very far from his home?" "Wrecked a ship out there, probably," one of the others said. "But he managed to get this far. It looks as though a small meteor fragment pierced his body. Here. You see?" "Yes," another of them said. "But what amazes me is that this old man picked this planet out of all the others. The only one in this entire sector that would sustain life." "Maybe he was just a very lucky old man. Yes ... a man who attains such an age was usually lucky. Or at least that is what they say about the lost sectors." "Maybe he knew the way here. Maybe he was here before—sometime." The other shook his head. "I don't think so. They say some humans from that far sector did land here—but that's probably only a myth. And if they did, it was well over a thousand years ago." Another said. "He has a fine face, this old man. A noble face. Whoever he is ... wherever he came from, he died bravely and he knew the way, though he never reached this haven of the lost alive." "Nor is it irony that he reached here dead," said the Lake Chieftain. He had been listening and he stepped forward and raised his arm. "He was old. 
It is obvious that he fought bravely, that he had great courage, and that he knew the way. He will be given a burial suitable to his stature, and he will rest here among the brave. "Let the women dance and the music play for this old man. Let the trumpets speak, and the rockets fly up. And let flowers be strewn over the path above which the women will carry him to rest."
|
B. He was able to pick the path to the correct sun
|
What online text resources are used to test binomial lists?
|
### Introduction
Lists are extremely common in text and speech, and the ordering of items in a list can often reveal information. For instance, orderings can denote relative importance, such as on a to-do list, or signal status, as is the case for author lists of scholarly publications. In other cases, orderings might come from cultural or historical conventions. For example, `red, white, and blue' is a specific ordering of colors that is recognizable to those familiar with American culture. The ordering of lists in text and speech is a subject that has been touched on repeatedly for more than a century. By far the most frequently studied aspect of list ordering is the binomial, a list of two words usually separated by a conjunction such as `and' or `or', which is the focus of our paper. The academic treatment of binomial orderings dates back more than a century to Jespersen BIBREF0, who proposed in 1905 that the ordering of many common English binomials could be predicted by the rhythm of the words. In the case of a binomial consisting of a monosyllable and a disyllable, the prediction was that the monosyllable would appear first, followed by the conjunction `and'. The idea was that this would give a much more standard and familiar syllable stress to the overall phrase, e.g., the binomial `bread and butter' would have the preferable rhythm compared to `butter and bread.' This type of analysis is meaningful when the two words in the binomial nearly always appear in the same ordering. Binomials like this that appear in strictly one order (perhaps within the confines of some text corpus) are commonly termed frozen binomials BIBREF1, BIBREF2. Examples of frozen binomials include `salt and pepper' and `pros and cons', and explanations for their ordering in English and other languages have become increasingly complex. Early work focused almost exclusively on common frozen binomials, often drawn from everyday speech.
More recent work has expanded this view to include nearly frozen binomials, binomials from large data sets such as books, and binomials of particular types such as food, names, and descriptors BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. Additionally, explanations have increasingly focused on meaning rather than just sound, implying value systems inherent to the speaker or the culture of the language's speakers (one such example is that men are usually listed before women in English BIBREF9). The fact that purely phonetic explanations have been insufficient suggests that list orderings rely at least partially on semantics, and it has previously been suggested that these semantics could be revealing about the culture in which the speech takes place BIBREF3. Thus, it is possible that understanding these orderings could reveal biases or values held by the speaker. Overall, this prior research has largely been confined to pristine examples, often relying on small samples of lists to form conclusions. Many early studies simply drew a small sample of what the author(s) considered some of the more representative or prominent binomials in whatever language they were studying BIBREF10, BIBREF1, BIBREF11, BIBREF0, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF3. Other researchers have used books or news articles BIBREF2, BIBREF4, or small samples from the Web (web search results and Google books) BIBREF5. Many of these have lacked a large-scale text corpus and have relied on a focused set of statistics about word orderings. Thus, despite the long history of this line of inquiry, there is an opportunity to extend it significantly by examining a broad range of questions about binomials coming from a large corpus of online text data produced organically by many people. Such an analysis could produce at least two types of benefits. 
First, such a study could help us learn about cultural phenomena embedded in word orderings and how they vary across communities and over time. Second, such an analysis could become a case study for the extension of theories developed at small scales in this domain to a much larger context. The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more comprehensive than prior work: we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which, in addition to their large volume, also let us characterize binomials in new ways and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations, which lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means. We begin our analysis by introducing several new key measures for the study of binomials, including a quantity we call asymmetry, which measures how strongly a given binomial prefers one ordering over the other. By looking at the distribution of asymmetries across a wide range of binomials, we find that most binomials are not frozen, barring a few strong exceptions. At the same time, there may still be an ordering preference. For example, `10 and 20' is not a frozen binomial; instead, the binomial ordering `10 and 20' appears 60% of the time and `20 and 10' appears 40% of the time. We also address temporal and community structure in collections of binomials. While it has been recognized that the orderings of binomials may change over time or between communities BIBREF5, BIBREF10, BIBREF1, BIBREF13, BIBREF14, BIBREF15, there has been little analysis of this change.
We develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time. Using subreddits as communities, these metrics reveal variations in orderings, some of which suggest cultural change influencing language. For example, in one community, we find that over a period of 10 years, the binomial `son and daughter' went from nearly frozen to appearing in that order only 64% of the time. While these changes do happen, they are generally quite rare. Most binomials — frozen or not — are ordered in one way about the same percentage of the time, regardless of community or the year. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. We find that there is less variation across time and communities in the data compared to this model, implying that binomial orderings are indeed remarkably stable. Given this stability, one might expect that the dominant ordinality of a given binomial is still predictable, even if the binomial is not frozen. For example, one might expect that the global frequency of a single word or the number of syllables in a word would predict ordering in many cases. However, we find that these simple predictors are quite poor at determining binomial ordering. On the other hand, we find that a notion of `proximity' is robust at predicting ordering in some cases. Here, the idea is that the person producing the text will list the word that is conceptually “closer” to them first — a phenomenon related to a “Me First” principle of binomial orderings suggested by Cooper and Ross BIBREF3. One way in which we study this notion of proximity is through sports team subreddits. For example, we find that when two NBA team names form a binomial on a specific team's subreddit, the team that is the subject of the subreddit tends to appear first. 
The other source of improved predictions comes from using word embeddings BIBREF16: we find that a model based on the positions of words in a standard pre-trained word embedding can be a remarkably reliable predictor of binomial orderings. While not applicable to all words, such as names, this type of model is strongly predictive in most cases. Since binomial orderings are in general difficult to predict individually, we explore a new way of representing the global binomial ordering structure: we form a directed graph where an edge from $i$ to $j$ means that $i$ tends to come before $j$ in binomials. These graphs show tendencies across the English language and also reveal peculiarities in the language of particular communities. For instance, in a graph formed from the binomials in a sports community, the names of sports teams and cities are closely clustered, showing that they are often used together in binomials. Similarly, we identify clusters of names, numbers, and years. The presence of cycles in these graphs is also informative. For example, cycles are rare in graphs formed from proper names in politics, suggesting a possible hierarchy of names, and at the same time very common for other binomials. This suggests that no such hierarchy exists for most of the English language, further complicating attempts to predict binomial order. Finally, we expand our work to include multinomials, which are lists of more than two words. There already appears to be more structure in trinomials (lists of three) compared to binomials. Trinomials are likely to appear in exactly one order, and when they appear in more than one order the last word is almost always the same across all instances. For instance, in one section of our Reddit data, `Fraud, Waste, and Abuse' appears 34 times, and `Waste, Fraud, and Abuse' appears 20 times. This could point to, for example, recency principles being more important in lists of three than in lists of two.
While multinomials were in principle part of the scope of past research in this area, they were difficult to study in smaller corpora, suggesting another benefit of working at our current scale.

### Introduction ::: Related Work
Interest in list orderings spans the last century BIBREF10, BIBREF1, with a focus almost exclusively on binomials. This research has primarily investigated frozen binomials, also called irreversible binomials, fixed coordinates, and fixed conjuncts BIBREF11, although some work has also looked at non-coordinate freezes where the individual words are nonsensical by themselves (e.g., `dribs and drabs') BIBREF11. One study has directly addressed mostly frozen binomials BIBREF5, and we expand the scope of this paper by exploring the general question of how frequently binomials appear in a particular order. Early research investigated languages other than English BIBREF1, BIBREF10, but most recent research has worked almost exclusively with English. Overall, this prior research can be separated into three basic categories — phonological rules, semantic rules, and metadata rules. Phonology. The earliest research on binomial orderings proposed mostly phonological explanations, particularly rhythm BIBREF0, BIBREF12. Another highly supported proposal is Panini's Law, which claims that words with fewer syllables come first BIBREF17; we find only very mild preference for this type of ordering. Cooper and Ross's work expands these to a large list of rules, many overlapping, and suggests that they can compound BIBREF3; a number of subsequent papers have expanded on their work BIBREF11, BIBREF15, BIBREF9, BIBREF17. Semantics. There have also been a number of semantic explanations, mostly in the form of categorical tendencies (such as `desirable before undesirable') that may have cultural differences BIBREF10, BIBREF1. The most influential of these may be the `Me First' principle codified by Cooper and Ross. This suggests that the first word of a binomial tends to follow a hierarchy that favors `here', `now', present generation, adult, male, and positive. Additional hierarchies also include a hierarchy of food, plants vs. animals, etc. BIBREF3. Frequency. 
More recently, it has been proposed that the more cognitively accessible word might come first, which often means the word the author sees or uses most frequently BIBREF18. There has also been debate on whether frequency may encompass most phonological and semantic rules that have been previously proposed BIBREF13, BIBREF4. We find that frequency is in general a poor predictor of word ordering. Combinations. Given the number of theories, there have also been attempts to give a hierarchy of rules and study their interactions BIBREF4, BIBREF5. This research has complemented the proposals of Cooper and Ross BIBREF3. These types of hierarchies are also presented as explanations for the likelihood of a binomial becoming frozen BIBREF5. Names. Work on the orderings of names has been dominated by a single phenomenon: men's names usually come before women's names. Explanations range from a power differential, to men being more `agentic' within `Me First', to men's names being more common or even exhibiting more of the phonological features of words that usually come first BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF19, BIBREF6. However, it has also been demonstrated that this preference may be affected by the author's own gender and relationship with the people named BIBREF6, BIBREF19, as well as context more generally BIBREF20. Orderings on the Web. List orderings have also been explored in other Web data, specifically on the ordering of tags applied to images BIBREF21. There is evidence that these tags are ordered intentionally by users, and that a bias to order tag A before tag B may be influenced by historical precedent in that environment but also by the relative importance of A and B BIBREF21. Further work also demonstrates that exploiting the order of tags on images can improve models that rank those images BIBREF22.

### Data
We take our data mostly from Reddit, a large social media website divided into subcommunities called `subreddits' or `subs'. Each subreddit has a theme (usually clearly expressed in its name), and we have focused our study on subreddits primarily in sports and politics, in part because of the richness of proper names in these domains: r/nba, r/nfl, r/politics, r/Conservative, r/Libertarian, r/The_Donald, r/food, along with a variety of NBA team subreddits (e.g., r/rockets for the Houston Rockets). Apart from the team-specific and food subreddits, these are among the largest and most heavily used subreddits BIBREF23. We gather text data from comments made by users in discussion threads. In all cases, we have data from when the subreddit started until mid-2018. (Data was contributed by Cristian Danescu-Niculescu-Mizil.) Reddit in general, and the subreddits we examined in particular, are rapidly growing, both in terms of number of users and number of comments. Some of the subreddits we looked at (particularly sports subreddits) exhibited very distinctive `seasons', where commenting spikes (Fig. FIGREF2). These align with, e.g., the season of the given sport. When studying data across time, our convention is to bin the data by year, but we adjust the starting point of a year based on these seasons. Specifically, a year starts in May for r/nfl, August for r/nba, and February for all politics subreddits. We use two methods to identify lists from user comments: `All Words' and `Names Only', with the latter focusing on proper names. In both cases, we collect a number of lists and discard lists for any pair of words that appear fewer than 30 times within the time frame that we examined (see Table TABREF3 for summary statistics). The All Words method simply searches for two words $A$ and $B$ separated by `and' or `or', where a word is merely a series of characters separated by a space or punctuation. This process only captures lists of length two, or binomials. 
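This kind of search can be sketched in a few lines of Python. The regular expression and stop-word handling below are illustrative only, not the authors' exact pipeline:

```python
import re
from collections import Counter

# Illustrative stop-word list; the paper's curated list is not reproduced here.
STOP_WORDS = {"the", "a", "an", "more", "other"}

# Two word tokens separated by 'and' or 'or' (the All Words method).
BINOMIAL_RE = re.compile(r"\b([a-z]+) (?:and|or) ([a-z]+)\b")

def extract_binomials(comments):
    """Count ordered pairs [A, B] found as 'A and B' or 'A or B'."""
    counts = Counter()
    for text in comments:
        for a, b in BINOMIAL_RE.findall(text.lower()):
            if a in STOP_WORDS or b in STOP_WORDS:
                continue
            counts[(a, b)] += 1
    return counts

counts = extract_binomials(["I like salt and pepper.", "pepper and salt, or salt and pepper"])
# counts[("salt", "pepper")] == 2 and counts[("pepper", "salt")] == 1
```

In a real pipeline, the resulting counts would then be thresholded (the paper discards pairs appearing fewer than 30 times) and filtered against the curated stop-word list described next.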
We then filter out lists containing words from a collection of stop-words that, by their grammatical role or formatting structure, are almost exclusively involved in false positive lists. No metadata is captured for these lists beyond the month and year of posting. The Names Only method uses a curated list of full names relevant to the subreddit, focusing on sports and politics. For sports, we collected names of all NBA and NFL players active during 1980–2019 from basketball-reference.com and pro-football-reference.com. For politics, we collected the names of congresspeople from the @unitedstates project BIBREF24. To form lists, we search for any combination of any part of these names such that at least two partial names are separated by `and', `or', `v.s.', `vs', or `/' and the rest are separated by `,'. While we included a variety of separators, about 83% of lists include only `and', about 17% include `or', and the rest of the separators are negligible. Most lists that we retrieve in this way are of length 2, but we also found lists up to length 40 (Fig. FIGREF5). Finally, we also captured full metadata for these lists, including a timestamp, the user, any flairs attributed to the user (short custom text that appears next to the username), and other information. We also used wine reviews and a variety of newspaper articles for additional analysis. The wine data gives reviews of wine from WineEnthusiast and is hosted on Kaggle BIBREF25. While not specifically dated, the reviews were scraped between June and November of 2017. There are 20 different reviewers included, but the number of reviews each has ranges from tens to thousands. The news data consists of news articles pulled from a variety of sources, including (in random order) the New York Times, Breitbart, CNN, the Atlantic, Buzzfeed News, National Review, New York Post, NPR, Reuters, and the Washington Post. The articles are primarily from 2016 and early 2017 with a few from 2015.
The articles are scraped from home-page headline and RSS feeds BIBREF26. Metadata was limited for both of these data sets.

### Dimensions of Binomials
In this paper we introduce a new framework to interpret binomials, based on three properties: asymmetry (how frozen a binomial is), movement (how binomial orderings change over time), and agreement (how consistent binomial orderings are between communities), which we will visualize as a cube with three dimensions. Again, prior work has focused essentially entirely on asymmetry, and we argue that this can only really be understood in the context of the other two dimensions. For this paper we will use the convention {A,B} to refer to an unordered pair of words, and [A,B] to refer to an ordered pair where A comes before B. We say that [A,B] and [B,A] are the two possible orientations of {A,B}.

### Dimensions of Binomials ::: Definitions
Previous work has one main measure of binomials — their `frozen-ness'. A binomial is `frozen' if it always appears with a particular order. For example, if the pair {`arrow', `bow'} always occurs as [`bow', `arrow'] and never as [`arrow', `bow'], then it is frozen. This leaves open the question of how to describe the large number of binomials that are not frozen. To address this point, we instead consider the ordinality of a list, or how often the list is `in order' according to some arbitrary underlying reference order. Unless otherwise specified, the underlying order is assumed to be alphabetical. If the list [`cat', `dog'] appears 40 times and the list [`dog', `cat'] 10 times, then the list {`cat', `dog'} would have an ordinality of 0.8. Let $n_{x,y}$ be the number of times the ordered list $[x,y]$ appears, and let $f_{x,y} = n_{x,y} / (n_{x,y} + n_{y,x})$ be the fraction of times that the unordered version of the list appears in that order. We formalize ordinality as follows. [Ordinality] Given an ordering $<$ on words (by default, we assume alphabetical ordering), the ordinality $o_{x,y}$ of the pair $\lbrace x,y\rbrace $ is equal to $f_{x,y}$ if $x < y$ and $f_{y,x}$ otherwise. Similarly, we introduce the concept of asymmetry in the context of binomials, which is how often the list appears in its dominant order. In our framework, a `frozen' list is one with ordinality 0 or 1 and would be considered a high asymmetry list, with asymmetry of 1. A list that appears as [`A', `B'] half of the time and [`B', `A'] half of the time (or with ordinality 0.5) would be considered a low asymmetry list, with asymmetry of 0. [Asymmetry] The asymmetry of an unordered list $\lbrace x,y\rbrace $ is $A_{x,y} = 2 \cdot \vert o_{x,y} - 0.5 \vert $. The Reddit data described above gives us access to new dimensions of binomials not previously addressed.
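Both quantities follow directly from the pair counts; a minimal sketch (function names are ours, not from the paper):

```python
def ordinality(n_xy, n_yx, x, y):
    """o_{x,y}: fraction of occurrences in the reference (alphabetical) order.

    n_xy: count of the ordered list [x, y]; n_yx: count of [y, x].
    """
    f_xy = n_xy / (n_xy + n_yx)
    return f_xy if x < y else 1 - f_xy

def asymmetry(o):
    """A_{x,y} = 2 * |o_{x,y} - 0.5|: 0 for a 50/50 pair, 1 for a frozen one."""
    return 2 * abs(o - 0.5)

# ['cat', 'dog'] appears 40 times and ['dog', 'cat'] 10 times:
o = ordinality(40, 10, "cat", "dog")  # 0.8, as in the example above
a = asymmetry(o)
```

Note that ordinality is defined on the unordered pair, so `ordinality(10, 40, "dog", "cat")` gives the same value of 0.8.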
We define movement as how the ordinality of a list changes over time. [Movement] Let $o_{x,y,t}$ be the ordinality of an unordered list $\lbrace x,y\rbrace $ for data in year $t \in T$. The movement of $\lbrace x,y\rbrace $ is $M_{x,y} = \max _{t \in T} o_{x,y,t} - \min _{t \in T} o_{x,y,t}$. And agreement describes how the ordinality of a list differs between different communities. [Agreement] Let $o_{x,y,c}$ be the ordinality of an unordered list $\lbrace x,y\rbrace $ for data in community (subreddit) $c \in C$. The agreement of $\lbrace x,y\rbrace $ is $G_{x,y} = 1 - (\max _{c \in C} o_{x,y,c} - \min _{c \in C} o_{x,y,c})$.

### Dimensions of Binomials ::: Dimensions
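The movement and agreement measures defined above reduce to ranges over per-year and per-community ordinality values; a minimal sketch, with hypothetical ordinality dictionaries for illustration:

```python
def movement(ordinality_by_year):
    """M_{x,y}: range of a list's ordinality across years."""
    vals = list(ordinality_by_year.values())
    return max(vals) - min(vals)

def agreement(ordinality_by_community):
    """Agreement: 1 minus the range of ordinality across communities."""
    vals = list(ordinality_by_community.values())
    return 1 - (max(vals) - min(vals))

# Hypothetical values: a list whose ordinality drifts over a decade,
# and a list ordered differently in two subreddits.
m = movement({2009: 0.75, 2013: 0.60, 2018: 0.43})  # ~0.32
g = agreement({"r/nba": 0.44, "r/politics": 0.27})  # ~0.83
```

A perfectly stable list has movement 0, and a list ordered identically everywhere has agreement 1, so stable, universal binomials sit at the $(A, 0, 1)$ edge of the cube.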
Let the point $(A,M,G)_{x,y}$ be a vector of the asymmetry, movement, and agreement for some unordered list $\lbrace x,y\rbrace $. These vectors then define a 3-dimensional space in which each list occupies a point. Since our measures for asymmetry, agreement, and movement are all defined from 0 to 1, their domains form a unit cube (Fig. FIGREF8). The corners of this cube correspond to points whose coordinates are made up entirely of 0s and 1s. By examining points near the corners of this cube, we can get a better understanding of the range of binomials. Some corners are natural — it is easy to imagine a high asymmetry, low movement, high agreement binomial — such as {`arrow', `bow'} from earlier. On the other hand, we have found no good examples of a high asymmetry, low movement, low agreement binomial. There are a few unusual examples, such as {10, 20}, which has 0.4 asymmetry, 0.2 movement, and 0.1 agreement and is clearly visible as an isolated point in Fig. FIGREF8. Asymmetry. While a majority of binomials have low asymmetry, almost all previous work has focused exclusively on high-asymmetry binomials. In fact, asymmetry is roughly normally distributed across binomials, with an additional concentration of highly asymmetric binomials (Fig. FIGREF9). This implies that previous work has overlooked the vast majority of binomials, and an investigation into whether rules proposed for highly asymmetric binomials also function for other binomials is a core piece of our analysis. Movement. The vast majority of binomials have low movement. However, the exceptions to this can be very informative. Within r/nba a few of these pairs show clear change in linguistics and/or culture. The binomial [`rpm', `vorp'] (a pair of basketball statistics) started at 0.74 ordinality and within three years dropped to 0.32 ordinality, showing a potential change in users' representation of how these statistics relate to each other.
In r/politics, [`daughter', `son'] moved from 0.07 ordinality to 0.36 ordinality over ten years. This may represent a cultural shift in how users refer to children, or a shift in topics discussed relating to children. And in r/politics, [`dems', `obama'] went from 0.75 ordinality to 0.43 ordinality from 2009–2018, potentially reflecting changes in Obama's role as a defining feature of the Democratic Party. Meanwhile the ratio of unigram frequency of `dems' to `obama' actually increased from 10% to 20% from 2010 to 2017. Similarly, [`fdr', `lincoln'] moved from 0.49 ordinality to 0.17 ordinality from 2015–2018. This is particularly interesting, since in 2016 `fdr' had a unigram frequency 20% higher than `lincoln', but in 2017 they are almost the same. This suggests that movement could be unrelated to unigram frequency changes. Note also that the covariance for movement across subreddits is quite low (Table TABREF10), and movement in one subreddit is not necessarily reflected by movement in another. Agreement. Most binomials have high agreement (Table TABREF11), but again the counterexamples are informative. For instance, [`score', `kick'] has ordinality of 0.921 in r/nba and 0.204 in r/nfl. This likely points to the fact that American football includes field goals. A less obvious example is the list [`ceiling', `floor']. In r/nba and r/nfl, it has ordinality 0.44, and in r/politics, it has ordinality 0.27. There are also differences among proper nouns. One example is [`france', `israel'], which has ordinality 0.6 in r/politics, 0.16 in r/Libertarian, and 0.51 in r/The_Donald (and the list does not appear in r/Conservative). And the list [`romney', `trump'] has ordinality 0.48 in r/politics, 0.55 in r/The_Donald, and 0.73 in r/Conservative.

### Models And Predictions
In this section, we establish a null model under which different communities or time slices have the same probability of ordering a binomial in a particular way. With this, we would expect to see variation in binomial asymmetry. We find that our data shows smaller variation than this null model predicts, suggesting that binomial orderings are extremely stable across communities and time. From this, we might also expect that orderings are predictable; but we find that standard predictors in fact have limited success.

### Models And Predictions ::: Stability of Asymmetry
Recall that the asymmetry of binomials with respect to alphabetic order (excluding frozen binomials) is roughly normal centered around $0.5$ (Fig. FIGREF9). One way of seeing this type of distribution would be if binomials are ordered randomly, with $p=0.5$ for each order. In this case, if each instance $l$ of a binomial $\lbrace x,y\rbrace $ takes value 0 (non-alphabetical ordering) or 1 (alphabetical ordering), then $l \sim \text{Bernoulli}(0.5)$. If $\lbrace x,y\rbrace $ appears $n$ times, then the number of instances of value 1 is distributed by $W \sim \text{Bin}(n, 0.5)$, and $W / n$ is approximately normally distributed with mean 0.5. One way to test this behavior is to first estimate $p$ for each list within each community. If the differences in these estimates are not normal, then the above model is incorrect. We omit frozen binomials before any analysis. Let $L$ be a set of unordered lists and $C$ be a set of communities. We estimate $p$ for list $l \in L$ in community $c \in C$ by $\hat{p}_{l,c} = o_{l,c}$, the ordinality of $l$ in $c$. Next, for all $l \in L$ let $p^*_{l} = \max _{c \in C}(\hat{p}_{l, c}) - \min _{ c \in C}(\hat{p}_{l, c})$. The distribution of $p^*_{l}$ over $l \in L$ has median 0, mean 0.0145, and standard deviation 0.0344. We can perform a similar analysis over time. Define $Y$ as our set of years, and $\hat{p}_{l, y} = o_{l,y}$ for $y \in Y$ our estimates. The distribution of $p^{\prime }_{l} = \max _{y \in Y}(\hat{p}_{l, y}) - \min _{y \in Y}(\hat{p}_{l, y})$ over $l \in L$ has median 0.0216, mean 0.0685, and standard deviation 0.0856. The fact that $p$ varies very little across both time and communities suggests that there is some $p_l$ for each $l \in L$ that is consistent across time and communities, which is not the case in the null model, where these values would be normally distributed. We also used a bootstrapping technique to understand the mean variance in ordinality for lists over communities and years.
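The spread $p^*_l$ expected under the null model can be illustrated with a small simulation; the counts and probabilities below are hypothetical, and this is a sketch rather than the authors' exact procedure:

```python
import random

def null_spread(p, counts, trials=2000, seed=0):
    """Mean max-min spread of per-community ordinality estimates when
    every community orders the binomial with the same probability p.

    counts: number of occurrences of the binomial in each community.
    """
    rng = random.Random(seed)
    spreads = []
    for _ in range(trials):
        est = [sum(rng.random() < p for _ in range(n)) / n for n in counts]
        spreads.append(max(est) - min(est))
    return sum(spreads) / trials

# With ~100 instances in each of 4 communities, even identical underlying
# probabilities produce a noticeable spread in estimated ordinality.
spread = null_spread(0.5, [100, 100, 100, 100])
```

Comparing such a simulated spread to the much smaller spreads observed in the data is one way to see that binomial orderings vary less across communities than independent sampling would predict.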
Specifically, let $o_{l, c, y}$ be the ordinality of list $l$ in community $c$ and year $y$, $O_l$ be the set of $o_{l,c,y}$ for a given list $l$, and $s_l$ be the standard deviation of $O_l$. Finally, let $\bar{s}$ be the average of the $s_l$. We re-sample data by randomizing the order of each binomial instance, sampling its orderings by a binomial random variable with success probability equal to its ordinality across all seasons and communities ($p_l$). We repeated this process to get sample estimates $\lbrace \bar{s}_1, \ldots , \bar{s}_{k}\rbrace $, where $k$ is the size of the set of seasons and communities. These averages range from 0.0277 to 0.0278 and are approximately normally distributed (each is a mean over an approximately normal scaled Binomial random variable). However, $\bar{s} = 0.0253$ for our non-randomized data. This is significantly smaller than the randomized data and implies that the true variation in $p_l$ across time and communities is even smaller than a binomial distribution would predict. One possible explanation for this is that each instance of $l$ is not actually independent, but is in fact anti-correlated, violating one of the conditions of the binomial distribution. An explanation for that could be that users attempt to draw attention by intentionally going against the typical ordering BIBREF1, but it is an open question what the true model is and why the variation is so low. Regardless, it is clear that the orientation of binomials varies very little across years and communities (Fig. FIGREF13).

### Models And Predictions ::: Prediction Results
Given the stability of binomials within our data, we now try to predict their ordering. We consider deterministic or rule-based methods that predict the order for a given binomial. We use two classes of evaluation measures for success on this task: (i) by token — judging each instance of a binomial separately; and (ii) by type — judging all instances of a particular binomial together. We further characterize these into weighted and unweighted. To formalize these notions, first consider any unordered list $\lbrace x,y\rbrace $ that appears $n_{x,y}$ times in the orientation $[x,y]$ and $n_{y,x}$ times in the orientation $[y,x]$. Since we can only guess one order, we will have either $n_{x,y}$ or $n_{y,x}$ successful guesses for $\lbrace x,y\rbrace $ when guessing by token. The unweighted token score (UO) and weighted token score (WO) are the macro and micro averages of this accuracy. If predicting by type, let $S$ be the lists such that the by-token prediction is successful at least half of the time. Then the unweighted type score (UT) and weighted type score (WT) are the macro and micro averages of $S$. Basic Features. We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials. Perhaps the oldest suggestion to explain binomial orderings is that if there are two words A and B, and A is monosyllabic and B is disyllabic, then A comes before B BIBREF0. 
Within r/politics, we gathered an estimate of the number of syllables for each word as given by a variation on the CMU Pronouncing Dictionary BIBREF27 (Tables TABREF16 and TABREF17). In a weak sense, Jespersen was correct that monosyllabic words come before disyllabic words more often than not; and more generally, shorter words come before longer words more often than not. However, as predictors, these principles are close to random guessing. Paired Predictions. Another measure of predictive power is predicting which of two binomials has higher asymmetry. In this case, we take two binomials with very different asymmetry and try to predict which has higher asymmetry by our measures (we use the top-1000 and bottom-1000 binomials in terms of asymmetry for these tasks). For instance, we may predict that [`red', `turquoise'] is more asymmetric than [`red', `blue'] because the difference in lengths is more extreme. Overall, the basic predictors from the literature are not very successful (Table TABREF18). Word Embeddings. If we turn to more modern approaches to text analysis, one of the most common is word embeddings BIBREF16. Word embeddings assign a vector $x_i$ to each word $i$ in the corpus, such that the relative positions of these vectors in space encode linguistically relevant relationships among the words. Using the Google News word embeddings, via a simple logistic model, we produce a vector $v^*$ and predict the ordering of a binomial on words $i$ and $j$ from $v^* \cdot (x_i - x_j)$. In this sense, $v^*$ can be thought of as a “sweep-line” direction through the space containing the word vectors, such that the ordering along this sweep-line is the predicted ordering of all binomials in the corpus. This yields surprisingly accurate results, with accuracy ranging from 70% to 85% across various subreddits (Table TABREF20), and 80-100% accuracy on frozen binomials. This is by far the best prediction method we tested.
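The sweep-line prediction step can be sketched as follows. The 2-dimensional vectors and the direction `v_star` are toy stand-ins for illustration; in the paper the embeddings are the pre-trained Google News vectors and $v^*$ is fit by logistic regression on observed orderings:

```python
# Toy stand-ins for pretrained word embeddings.
emb = {
    "bread":  (1.0, 0.2),
    "butter": (-0.5, 0.8),
    "salt":   (0.9, -0.1),
    "pepper": (-0.7, 0.3),
}

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def predict_order(v_star, a, b):
    """Sweep-line prediction: a comes first iff v* . (x_a - x_b) > 0."""
    diff = [xa - xb for xa, xb in zip(emb[a], emb[b])]
    return (a, b) if dot(v_star, diff) > 0 else (b, a)

# A hand-picked direction for illustration, not a fitted one.
v_star = (1.0, 0.0)
predict_order(v_star, "butter", "bread")  # -> ('bread', 'butter')
```

Because the prediction depends only on the sign of $v^* \cdot (x_i - x_j)$, a single learned direction induces a total order on all embedded words, which is what makes the model a "sweep-line" through the embedding space.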
It is important to note that not all words in our binomials could be associated with an embedding, so it was necessary to remove binomials containing words such as names or slang. However, retesting our basic features on this data set did not show any improvement, implying that the drastic change in predictive power is not due to the changed data set. ### Proper Nouns and the Proximity Principle
Proper nouns, and names in particular, have been a focus within the literature on frozen binomials BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20, BIBREF28, but these studies have largely concentrated on the effect of gender in ordering BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20. With Reddit data, however, we have many conversations about large numbers of celebrities, with significant background information on each. As such, we can investigate proper nouns in three subreddits: r/nba, r/nfl, and r/politics. The names we used are from NBA and NFL players (1970–2019) and congresspeople (pre-1800 and 2000–2019) respectively. We also investigated names of entities for which users might feel a strong sense of identification, such as a team or political group they support, or a subreddit to which they subscribe. We hypothesized that the group with which the user identifies the most would come first in binomial orderings. Inspired by the `Me First Principle', we call this the Proximity Principle. ### Proper Nouns and the Proximity Principle ::: NBA Names
First, we examined names in r/nba. One advantage of using NBA players is that we have detailed statistics for every player in every year. We tested a number of these statistics, and while all of them predicted statistically significant numbers ($p <$ 1e-6) of binomials, they were still not very predictive in a practical sense (Table TABREF23). The best predictor was actually how often the player's team was mentioned. Interestingly, the unigram frequency (the number of times the player's name was mentioned overall) was not a good predictor. It is relevant to these observations that some team subreddits (and thus, presumably, fanbases) are significantly larger than others. ### Proper Nouns and the Proximity Principle ::: Subreddit and team names
We also investigated lists of names of sports teams and subreddits as proper nouns. In this case we exploit an interesting structure of the r/nba subreddit which is not evident at scale in the other subreddits we examined. In addition to r/nba, there exist a number of subreddits affiliated with particular NBA teams, whose purpose is to allow discussion between fans of that team. This implies that most users in a team subreddit are fans of that team. We are then able to look for lists of NBA teams by name, city, and abbreviation. We found 2520 instances of the subreddit's team coming first, and 1894 instances of the subreddit's team coming second. While this is not a particularly strong predictor, correctly predicting 57% of lists, it is one of the strongest we found, and a clear illustration of the Proximity Principle. We can do a similar calculation with subreddit names, by looking between subreddits. While the team subreddits are not large enough for this calculation, many of the other subreddits are. We find that lists of subreddits in r/nba that include `r/nba' often start with `r/nba', and a similar result holds for r/nfl (Table TABREF25). While NBA team subreddits show a fairly strong preference to name themselves first, this preference is slightly less strong among sport subreddits, and even less strong among politics subreddits. One potential factor here is that r/politics is a more general subreddit, while the rest are more specific — perhaps akin to r/nba and the team subreddits. ### Proper Nouns and the Proximity Principle ::: Political Names
In our case, political names are drawn from every congressperson (and their nicknames) in both houses of the US Congress through the 2018 election. It is worth noting that one of these people is Philadelph Van Trump; we presume that most references to `trump' refer to Donald Trump, though there may be additional instances of mistaken identity. We restrict the names to congresspeople that served before 1801 or after 1999, also including `trump'. One might guess that political subreddits refer to politicians of their preferred party first. However, this was not the case, as Republicans are mentioned first only about 43%–46% of the time in all subreddits (Table TABREF27). On the other hand, the Proximity Principle does seem to come into play when discussing ideology. For instance, r/politics — a left-leaning subreddit — is more likely to say `democrats and republicans', while the other political subreddits in our study — which are right-leaning — are more likely to say `republicans and democrats'. Another relevant measure for lists of proper nouns is the ratio of the number of list instances containing a name to the unigram frequency of that name. We restrict our investigation to names that are not also English words, and to names with a unigram frequency of at least 30. The average ratio is 0.0535, but there is significant variation across names. It is conceivable that this list ratio reveals how often people are talked about alone rather than in company. ### Formal Text
While Reddit provides a very large corpus of informal text, McGuire and McGuire make a distinct separation between informal and formal text BIBREF28. As such, we briefly analyze highly stylized wine reviews and news articles from a diverse set of publications. Both data sets follow the same basic principles outlined above. ### Formal Text ::: Wine
Wine reviews are a highly stylized form of text. Reviews are often just a few sentences long, and they use a specialized vocabulary meant for wine tasting. While one might hypothesize that such stylized text exhibits more frozen binomials, this is not the case (Table TABREF28). There is some evidence of an additional freezing effect in binomials such as (`aromas', `flavors') and (`scents', `flavors'), both of which are frozen in the wine reviews but not on Reddit. However, this does not seem to be a more general effect. Additionally, there are a number of binomials which appear frozen on Reddit but have low asymmetry in the wine reviews, such as [`lemon', `lime']. ### Formal Text ::: News
We focused our analysis on NYT, Buzzfeed, Reuters, CNN, the Washington Post, NPR, Breitbart, and the Atlantic. Much as in the political subreddits, one might expect to see a split between publications based upon ideology. However, this is not obviously the case. While there are certainly examples of binomials that differ significantly for one publication or for a group of publications (Buzzfeed, in particular, frequently goes against the grain), there does not seem to be a sharp divide. Individual examples are difficult to draw conclusions from, but can suggest trends. (`China', `Russia') is a particularly controversial binomial: while the publications vary quite a bit, only Breitbart has an ordinality above 0.5. In fact, country pairs are among the most controversial binomials within the publications (e.g., (`iraq', `syria') and (`afghanistan', `iraq')), while most other highly controversial binomials reflect other political structures, such as (`house', `senate'), (`migrants', `refugees'), and (`left', `right'). That so many controversial binomials reflect politics could point to subtle political or ideological differences between the publications. Additionally, the close similarity between Breitbart and more mainstream publications could be due to an effect similar to the one we saw with r/The_Donald, namely large amounts of quoted text. ### Global Structure
We can discover new structure in binomial orderings by taking a more global view. We do this by building directed graphs based on ordinality. In these graphs, nodes are words, and an arrow from A to B indicates that there are at least 30 lists containing A and B and that those lists have order [A,B] at least 50% of the time. For our visualizations, the size of a node indicates how many distinct lists the word appears in, and color indicates how many list instances contain the word in total. If we examine the global structure for r/nba, we can pinpoint a number of patterns (Fig. FIGREF31). First, most nodes within the purple circle correspond to names, while most nodes outside of it are not names. The cluster of circles in the lower left is a combination of numbers and years, where dark green corresponds to numbers, purple corresponds to years, and pink corresponds to years represented as two-digit numbers (e.g., `96'). On the right, the brown circle contains adjectives, while above it the blue circle contains heights (e.g., 6'5"); of the two circles in the lower middle, the left contains cities while the right contains team names. The darkest red node in the center of the graph corresponds to `lebron'. Constructing a similar graph for our wines dataset, we again see clusters of words. In Fig. FIGREF32, the colors represent clusters as formed through modularity. These clusters are quite distinct: green nodes mostly refer to the structure or body of a wine, red are adjectives describing taste, teal and purple are fruits, dark green is wine varietals, gold is senses, and light blue is time (e.g., `year', `decade', etc.). We can also consider the graph as we change the asymmetry threshold for which an edge is included. If the threshold is large enough, the graph is acyclic, and we can ask how small the threshold must be in order to introduce a cycle. These cycles reveal the non-global ordering of binomials.
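The graph construction and the acyclicity check can be sketched in pure Python (the edge rule follows the description above; the data and function names are ours):

```python
from collections import defaultdict

def ordinality_graph(instances, min_count=30, threshold=0.5):
    """Edge A -> B if {A, B} occurs at least min_count times and the order
    [A, B] occurs at least `threshold` of the time.  `instances` is a list
    of ordered pairs, one per observed binomial instance."""
    counts = defaultdict(int)
    for a, b in instances:
        counts[(a, b)] += 1
    edges = set()
    for (a, b), n_ab in counts.items():
        total = n_ab + counts.get((b, a), 0)
        if total >= min_count and n_ab / total >= threshold:
            edges.add((a, b))
    return edges

def has_cycle(edges):
    """Cycle detection by depth-first search with the usual three-color scheme."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def dfs(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False
    return any(color[u] == WHITE and dfs(u) for u in list(adj))
```

Raising the asymmetry threshold prunes edges; scanning the threshold downward until `has_cycle` first returns True recovers the per-community cycle thresholds reported in the paper.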
The graph for r/nba begins to show cycles with a threshold asymmetry of 0.97. Three cycles exist at this threshold: [`ball', `catch', `shooter'], [`court', `pass', `set', `athleticism'], and [`court', `plays', `set', `athleticism']. Restricting the nodes to be names is also revealing. Acyclic graphs in this context suggest a global partial hierarchy of individuals. For r/nba, the graph is no longer acyclic at an asymmetry threshold of 0.76, with the cycle [`blake', `jordan', `bryant', `kobe']. Similarly, the graph for r/nfl (only including names) is acyclic until the threshold reaches 0.73 with cycles [`tannehill', `miller', `jj watt', `aaron rodgers', `brady'], and [`hoyer', `savage', `watson', `hopkins', `miller', `jj watt', `aaron rodgers', `brady']. Figure FIGREF33 shows these graphs for the three political subreddits, where the nodes are the 30 most common politician names. The graph visualizations immediately show that these communities view politicians differently. We can also consider cycles in these graphs and find that the graph is completely acyclic when the asymmetry threshold is at least 0.9. Again, this suggests that, at least among frozen binomials, there is in fact a global partial order of names that might signal hierarchy. (Including non-names, though, causes the r/politics graph to never be acyclic for any asymmetry threshold, since the cycle [`furious', `benghazi', `fast'] consists of completely frozen binomials.) We find similar results for r/Conservative and r/Libertarian, which are acyclic with thresholds of 0.58 and 0.66, respectively. Some of these cycles at high asymmetry might be due to English words that are also names (e.g. `law'), but one particularly notable cycle from r/Conservative is [`rubio', `bush', `obama', `trump', `cruz']. ### Multinomials
Binomials are the most studied type of list, but trinomials — lists of three — are also common enough in our dataset to analyze. Studying trinomials adds new aspects to the set of questions: for example, while binomials have only two possible orderings, trinomials have six. However, very few trinomials show up in all six orderings. In fact, many trinomials show up in exactly one ordering: about 36% of trinomials appearing at least 30 times in the data are completely frozen. To get a baseline comparison, we found an equal number of the most common binomials, and then subsampled instances of those binomials to equate the number of instances with the trinomials. In this case, only 21% of binomials are frozen. For trinomials that show up in at least two orderings, it is most common for the last word to keep the same position (e.g., [a, b, c] and [b, a, c]). For example, in our data, [`fraud', `waste', `abuse'] appears 34 times, and [`waste', `fraud', `abuse'] appears 20 times. This may partially be explained by many lists containing words such as `other', `whatever', or `more'; for instance, [`smarter', `better', `more'] and [`better', `smarter', `more'] are the only two orderings we observe for this set of three words. Additionally, each trinomial [a, b, c] contains three binomials within it: [a, b], [b, c], and [a, c]. It is natural to compare orderings of {a, b} in general with orderings of occurrences of {a, b} that lie inside trinomials. We use this comparison to define the compatibility of {a, b}, as follows. Compatibility. Let {a, b} be a binomial with dominant ordering [a, b]; that is, [a, b] is at least as frequent as [b, a]. We define the compatibility of {a, b} to be the fraction of instances of {a, b} occurring inside trinomials that have the order [a, b].
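A minimal sketch of the compatibility computation (the input representation is ours):

```python
def compatibility(a, b, binomials, trinomials):
    """Fraction of trinomial instances containing {a, b} whose internal order
    agrees with the dominant binomial order.  `binomials` is a list of
    ordered pairs and `trinomials` a list of ordered triples."""
    if binomials.count((b, a)) > binomials.count((a, b)):
        a, b = b, a                      # make [a, b] the dominant ordering
    agree = total = 0
    for tri in trinomials:
        if a in tri and b in tri:
            total += 1
            agree += tri.index(a) < tri.index(b)
    return agree / total if total else None
```

For instance, if the dominant binomial order is [`fraud', `waste'], then trinomial instances such as [`waste', `fraud', `abuse'] count against compatibility.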
There are only a few cases where binomials have compatibility less than 0.5, and for most binomials, the asymmetry is remarkably consistent between binomials and trinomials (Fig. FIGREF37). In general, asymmetry is larger than compatibility — this occurs for 4569 binomials, compared to 3575 where compatibility was greater and 690 where the two values are the same. An extreme example is the binomial {`fairness', `accuracy'}, which has asymmetry 0.77 and compatibility 0.22. It would be natural to consider these questions for tetranomials and longer lists, but these are rarer in our data and correspondingly harder to draw conclusions from. ### Discussion
Analyzing binomial orderings on a large scale has led to surprising results. Although most binomials are not frozen in the traditional sense, there is little movement in their ordinality across time or communities. A list that appears in the order [A, B] 60% of the time in one subreddit in one year is likely to show up as [A, B] very close to 60% of the time in all subreddits in all years. This suggests that binomial order should be predictable, but there is evidence that this is difficult: the most common theories on frozen binomial ordering were largely ineffective at predicting binomial ordering in general. Given the challenge in predicting orderings, we searched for methods or principles that could yield better performance, and identified two promising approaches. First, models built on standard word embeddings produce predictions of binomial orders that are much more effective than simpler existing theories. Second, we established the Proximity Principle: the proper noun with which a speaker identifies more will tend to come first. This is evidenced when commenters refer to their sports team first, or politicians refer to their party first. Further analysis of the global structure of binomials reveals interesting patterns and a surprising acyclic nature in names. Analysis of longer lists in the form of multinomials suggests that the rules governing their orders may be different. We have also found promising results in some special cases. We expect that more domain-specific studies will offer rich structure. It is a challenge to adapt the long history of work on the question of frozen binomials to the large, messy environment of online text and social media. However, such data sources offer a unique opportunity to re-explore and redefine these questions. It seems that binomial orderings offer new insights into language, culture, and human cognition. 
Understanding what changes in these highly stable conventions mean — and whether or not they can be predicted — is an interesting avenue for future research. ### Acknowledgements
The authors thank members of the Cornell AI, Policy, and Practice Group, and (alphabetically by first name) Cristian Danescu-Niculescu-Mizil, Ian Lomeli, Justine Zhang, and Kate Donahue for aid in accessing data and their thoughtful insight. This research was supported by NSF Award DMS-1830274, ARO Award W911NF19-1-0057, a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, and ARO MURI. Figure 1: Histogram of comment timestamps for r/nba and r/nfl. Both subreddits exhibit a seasonal structure. The number of comments is increasing for all subreddits. Table 1: Summary statistics of subreddit list data that we investigate in this paper. Figure 2: A histogram of the log frequency of lists of various lengths, where we use name lists for r/nba. In this case, there is no filtering applied, but we cap list length at 50. Table 2: Covariance table for movement in r/nba, r/nfl and r/politics. Figure 3: 309 binomials that occur at least 30 times per year in r/politics, r/nba, and r/nfl mapped onto the 3-dimensional cube. The point on the bottom left is {‘10’, ‘20’}. Figure 4: Histograms of the alphabetical orientation of the 14920 most common binomials within r/nba, r/nfl and r/politics. Note that while there are many frozen binomials (with orientation of 0 or 1), the rest of the binomials appear to be roughly normally distributed around 0.5. Table 3: The average difference in asymmetry between the same binomial in various subreddits. The difference between r/nba and r/nfl is 0.062. Figure 5: Histogram of the maximum difference in $p_l$ for all lists $l$ across communities and years, on a log-log scale. We add 0.01 to all differences to show cases with a difference of 0, which is represented as the bar on the left of the graph (mostly due to frozen binomials). We sampled 40000 instances for this graph, since there was variation in the number of binomials across years and communities. Table 4: Accuracy of binomial orientation predictions using a number of basic rules.
The scoring was done based on “unweighted type” scoring, and statistics are given based on the scores across the subreddits. Figure 6: Histogram of asymmetry for lists of names in r/nfl, r/nba and r/politics. Table 5: Count of the number of syllables in the first and second words of all binomials in r/politics. First word is rows, second word is columns. Overall, shorter words are significantly more likely to come before longer words (see also Table 6). Table 7: Paired prediction results. Table 11: If two sports subreddits are listed in a sports subreddit, the subreddit of origin (r/nba in top row, r/nfl in bottom row) usually comes first, in terms of the weighted token evaluation (number of occurrences in parentheses). A ‘-’ means that there are fewer than 30 such lists. Table 8: The accuracy using “unweighted type” for only frozen binomials, here defined as binomials with asymmetry above 0.97. The results suggest that these rules are equally ineffective for frozen and non-frozen binomials. Table 9: Results of logistic regression based on word embeddings. This is by far our most successful model. Note that not all words in our binomials were found in the word embeddings, leaving about 70–97% of the binomials usable. Table 12: Political name ordering by party across political subreddits. Note that r/politics is left-leaning. Figure 7: The r/nba binomial graph, where nodes are words and directed edges indicate binomial orientation. Figure 8: The wines binomial graph, where nodes are words and directed edges indicate binomial orientation. Table 13: Number of total lists (log scale) and percent of lists that are frozen. There is no correlation between size and frozenness, but note that news is far more frozen than any other data source. Figure 9: Graphs of some of the 30 most common names in r/Conservative, r/Libertarian, and r/politics. Nodes are names, and an edge from A to B represents a list where the dominant order is [A,B].
Node size is the number of lists the word comes first in, node color is the total number of lists the node shows up in, edge color is the asymmetry of the list. Figure 10: Histogram of difference in asymmetry and compatibility for binomials within trinomials on r/politics.
|
news publications, wine reviews, and Reddit
|
What theme could be taken from this story?
A. don't try to change the past
B. it's important to try new things
C. people can't be trusted
D. there's a solution to every problem
|
RATTLE OK By HARRY WARNER, JR. Illustrated by FINLAY [Transcriber's Note: This etext was produced from Galaxy Science Fiction December 1956. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] What better way to use a time machine than to handle department store complaints? But pleasing a customer should have its limits! The Christmas party at the Boston branch of Hartshorne-Logan was threatening to become more legendary than usual this Christmas. The farm machinery manager had already collapsed. When he slid under the table containing the drinks, Miss Pringle, who sold millinery, had screamed: "He'll drown!" One out of every three dirty stories started by party attendees had remained unfinished, because each had reminded someone else of another story. The recently developed liquors which affected the bloodstream three times faster had driven away twinges of conscience about untrimmed trees and midnight church services. The star salesman for mankies and the gentleman who was in charge of the janitors were putting on a display of Burmese foot-wrestling in one corner of the general office. The janitor foreman weighed fifty pounds less than the Burma gentleman, who was the salesman's customary opponent. So the climax of one tactic did not simply overturn the foreman. He glided through the air, crashing with a very loud thump against the wall. He wasn't hurt. But the impact knocked the hallowed portrait of H. H. Hartshorne, co-founder, from its nail. It tinkled imposingly as its glass splintered against the floor. The noise caused a temporary lull in the gaiety. Several employes even felt a passing suspicion that things might be getting out of hand. "It's all in the spirit of good, clean fun!" cried Mr. Hawkins, the assistant general manager. Since he was the highest executive present, worries vanished. Everyone felt fine. 
There was a scurry to shove the broken glass out of sight and to turn more attention to another type of glasses. Mr. Hawkins himself, acting by reflex, attempted to return the portrait to its place until new glass could be obtained. But the fall had sprung the frame at one corner and it wouldn't hang straight. "We'd better put old H. H. away for safekeeping until after the holiday," he told a small, blonde salesclerk who was beneath his attention on any working day. With the proper mixture of respect and bonhommie, he lifted the heavy picture out of its frame. A yellowed envelope slipped to the floor as the picture came free. Hawkins rolled the picture like a scroll and put it into a desk drawer, for later attention. Then he looked around for a drink that would make him feel even better. A sorting clerk in the mail order department wasn't used to liquor. She picked up the envelope and looked around vaguely for the mail-opening machine. "Hell, Milly, you aren't working!" someone shouted at her. "Have another!" Milly snapped out of it. She giggled, suppressed a ladylike belch and returned to reality. Looking at the envelope, she said: "Oh, I see. They must have stuck it in to tighten the frame. Gee, it's old." Mr. Hawkins had refreshed himself. He decided that he liked Milly's voice. To hear more of it, he said to her: "I'll bet that's been in there ever since the picture was framed. There's a company legend that that picture was put up the day this branch opened, eighty years ago." "I didn't know the company ever used buff envelopes like this." Milly turned it over in her hands. The ancient glue crackled as she did so. The flap popped open and an old-fashioned order blank fell out. Mr. Hawkins' eyes widened. He bent, reached painfully over his potbelly and picked up the order form. "This thing has never been processed!" Raising his voice, he shouted jovially, "Hey, people! You're all fired! Here's an order that Hartshorne-Logan never filled! 
We can't have such carelessness. This poor woman has waited eighty years for her merchandise!" Milly was reading aloud the scrawled words on the order form: "Best electric doorbell. Junior detective kit. Disposable sacks for vacuum cleaner. Dress for three-year-old girl." She turned to the assistant general manager, struck with an idea for the first time in her young life. "Let's fill this order right now!" "The poor woman must be dead by now," he objected, secretly angry that he hadn't thought of such a fine party stunt himself. Then he brightened. "Unless—" he said it loud enough for the employes to scent a great proposal and the room grew quiet—"unless we broke the rules just once and used the time warp on a big mission!" There was a silence. Finally, from an anonymous voice in one corner: "Would the warp work over eighty years? We were always told that it must be used only for complaints within three days." "Then let's find out!" Mr. Hawkins downed the rest of his drink and pulled a batch of keys from his pocket. "Someone scoot down to the warehouse. Tell the watchman that it's on my authority. Hunt up the stuff that's on the order. Get the best of everything. Ignore the catalogue numbers—they've changed a hundred times in all these years." Milly was still deciphering the form. Now she let out a little squeal of excitement. "Look, Mr. Hawkins! The name on this order—it's my great-grandmother! Isn't that wonderful? I was just a little girl when she died. I can barely remember her as a real old woman. But I remember that my grandmother never bought anything from Hartshorne-Logan because of some trouble her mother had once with the firm. My mother didn't want me to come to work here because of that." Mr. Hawkins put his arm around Milly in a way that he intended to look fatherly. It didn't. "Well, now. Since it's your relative, let's thrill the old girl. We wouldn't have vacuum sacks any more. So we'll substitute a manky!" 
Ann Hartley was returning from mailing the letter when she found the large parcel on her doorstep. She put her hands on her hips and stared pugnaciously at the bundle. "The minute I write a letter to complain about you, you turn up!" she told the parcel. She nudged her toe peevishly against the brown paper wrappings that were tied with a half-transparent twine she had never seen before. The label was addressed in a wandering scrawl, a sharp contrast to the impersonal typing on the customary Hartshorne-Logan bundles. But the familiar RATTLE OK sticker was pasted onto the box, indicating to the delivery man that the contents would make a rattling sound and therefore hadn't been broken in shipment. Ann sighed and picked up her bundle. With a last look at the lovely spring afternoon and the quiet suburban landscape, she went into the house. Two-year-old Sally heard the box rattling. She waddled up on chubby legs and grabbed her mother's skirt. "Want!" she said decisively. "Your dress ought to be here," Ann said. She found scissors in her sewing box, tossed a cushion onto the floor, sat on it, and began to open the parcel. "Now I'll have to write another letter to explain that they should throw away my letter of complaint," she told her daughter. "And by the time they get my second letter, they'll have answered my first letter. Then they'll write again." Out of consideration for Sally, she omitted the expletives that she wanted to add. The translucent cord was too tough for the scissors. Ann was about to hunt for a razor blade when Sally clutched at an intersection of the cord and yanked. The twine sprang away from the carton as if it were alive. The paper wrappings flapped open. "There!" Sally said. Ann repressed an irrational urge to slap her daughter. Instead, she tossed the wrappings aside and removed the lid from the carton. A slightly crushed thin cardboard box lay on top. Ann pulled out the dress and shook it into a freely hanging position. Then she groaned. 
It was green and she had ordered blue. It didn't remotely resemble the dress she had admired from the Hartshorne-Logan catalogue illustration. Moreover, the shoulders were lumpier than any small girl's dress should be. But Sally was delighted. "Mine!" she shrilled, grabbing for the dress. "It's probably the wrong size, too," Ann said, pulling off Sally's dress to try it on. "Let's find as many things to complain about as we can." The dress fitted precisely, except for the absurd shoulder bumps. Sally was radiant for a moment. Then her small face sobered and she started to look vacantly at the distant wall. "We'll have to send it back," Ann said, "and get the one we ordered." She tried to take it off, but the child squawked violently. Ann grabbed her daughter's arms, held them above her head and pulled at the dress. It seemed to be stuck somewhere. When Ann released the child's arms to loosen the dress, Sally squirmed away. She took one step forward, then began to float three inches above the ground. She landed just before she collided with the far wall. Sally looked scared until she saw her mother's face. Then she squealed in delight. Ann's legs were rubber. She was shaking her head and wobbling uncertainly toward her daughter when the door opened behind her. "It's me," her husband said. "Slow day at the office, so I came home early." "Les! I'm going crazy or something. Sally just—" Sally crouched to jump at her father. Before she could leap, he grabbed her up bodily and hugged her. Then he saw the box. "Your order's here? Good. What's this thing?" He was looking at a small box he had pulled from the carton. Its lid contained a single word: MANKY. The box rattled when he shook it. Les pulled off the lid and found inside a circular, shiny metal object. A triangular trio of jacks stuck out from one end. "Is this the doorbell? I've never seen a plug like this. And there's no wire." "I don't know," Ann said. "Les, listen. 
A minute ago, Sally—" He peered into the box for an instruction sheet, uselessly. "They must have made a mistake. It looks like some kind of farm equipment." He tossed the manky onto the hassock and delved into the carton again. Sally was still in his arms. "That's the doorbell, I think," he said, looking at the next object. It had a lovely, tubular shape, a half-dozen connecting rods and a plug for a wall socket. "That's funny," Ann mused, her mind distracted from Sally for a moment. "It looks terribly expensive. Maybe they sent door chimes instead of the doorbell." The bottom of the carton contained the detective outfit that they had ordered for their son. Ann glanced at its glaringly lithographed cover and said: "Les, about Sally. Put her down a minute and watch what she does." Les stared at his wife and put the child onto the rug. Sally began to walk, then rose and again floated, this time toward the hassock on which the manky lay. His jaw dropped. "My God! Ann, what—" Ann was staring, too, but not at her daughter. "Les! The hassock! It used to be brown!" The hassock was a livid shade of green. A neon, demanding, screaming green that clashed horribly with the soft browns and reds in which Ann had furnished the room. "That round thing must be leaking," Les said. "But did you see Sally when she—" Ann's frazzled nerves carried a frantic order to her muscles. She jumped up, strode to the hassock and picked up the manky with two fingers. She tossed it to Les. Immediately, she regretted her action. "Drop it!" she yelled. "Maybe it'll turn you green, too!" Les kicked the hassock into the hall closet, tossed the manky in after it and shut the door firmly. As the door closed, he saw the entire interior of the dark closet brighten into a wet-lettuce green. When he turned back to Ann, she was staring at her left hand. The wedding band that Les had put there a dozen years ago was a brilliant green, shedding its soft glow over the finger up to the first knuckle. 
Ann felt the scream building up inside her. She opened her mouth to let it out, then put her hand in front of her mouth to keep it in, finally jerked the hand away to prevent the glowing ring from turning her front teeth green. She collapsed into Les's arms, babbling incomprehensibly. He said: "It's all right. There must be balloons or something in the shoulders of that dress. I'll tie a paperweight to Sally's dress and that'll hold her down until we undress her. Don't worry. And that green dye or whatever it is will wash off." Ann immediately felt better. She put her hands behind her back, pulled off her ring and slipped it into her apron pocket. Les was sentimental about her removing it. "I'll get dinner," she said, trying to keep her voice on an even keel. "Maybe you'd better start a letter to Hartshorne-Logan. Let's go into the kitchen, Sally." Ann strode resolutely toward the rear of the house. She kept her eyes determinedly off the tinge of green that was showing through the apron pocket and didn't dare look back at her daughter's unsettling means of propulsion. A half-hour later, when the meal was almost ready, two things happened: Bob came home from school through the back door and a strange voice said from the front of the house, "Don't answer the front door." Ann stared at her son. He stared back at her, the detective outfit under his arm. She went into the front room. Her husband was standing with fists on hips, looking at the front door, chuckling. "Neatest trick I've seen in a long time. That voice you heard was the new doorbell. I put it up while you were in the kitchen. Did you hear what happened when old lady Burnett out there pushed the button?" "Oh. Something like those name cards with something funny printed on them, like 'Another hour shot.' Well, if there's a little tape in there repeating that message, you'd better shut that part off. It might get boring after a while. And it might insult someone." Ann went to the door and turned the knob. 
The door didn't open. The figure of Mrs. Burnett, half-visible through the heavy curtain, shifted impatiently on the porch. Les yanked at the doorknob. It didn't yield for him, either. He looked up at the doorbell, which he had installed just above the upper part of the door frame. "Queer," he said. "That isn't in contact with the door itself. I don't see how it can keep the door from opening." Ann put her mouth close to the glass, shouting: "Won't you come to the back door, Mrs. Burnett? This one is stuck." "I just wanted to borrow some sugar," the woman cried from the porch. "I realize that I'm a terrible bother." But she walked down the front steps and disappeared around the side of the house. "Don't open the back door." The well-modulated voice from the small doorbell box threatened to penetrate every corner of the house. Ann looked doubtfully at her husband's lips. They weren't moving. "If this is ventriloquism—" she began icily. "I'll have to order another doorbell just like this one, for the office," Les said. "But you'd better let the old girl in. No use letting her get peeved." The back door was already open, because it was a warm day. The screen door had no latch, held closed by a simple spring. Ann pushed it open when Mrs. Burnett waddled up the three back steps, and smiled at her neighbor. "I'm so sorry you had to walk around the house. It's been a rather hectic day in an awful lot of ways." Something seemed to impede Mrs. Burnett as she came to the threshold. She frowned and shoved her portly frame against something invisible. It apparently yielded abruptly, because she staggered forward into the kitchen, nearly falling. She stared grimly at Ann and looked suspiciously behind her. "The children have some new toys," Ann improvised hastily. "Sally is so excited over a new dress that she's positively feverish. Let's see now—it was sugar that you wanted, wasn't it?" "I already have it," Bob said, handing a filled cup to his mother. 
The boy turned back to the detective set which he had spread over the kitchen table. "Excitement isn't good for me," Mrs. Burnett said testily. "I've had a lot of troubles in my life. I like peace and quiet." "Your husband is better?" "Worse. I'm sure I don't know why everything happens to me." Mrs. Burnett edged toward the hall, trying to peer into the front of the house. Ann stood squarely in front of the door leading to the hall. Defeated, Mrs. Burnett left. A muffled volley of handclapping, mixed with a few faint cheers, came from the doorbell-box when she crossed the threshold. Ann went into the hall to order Les to disconnect the doorbell. She nearly collided with him, coming in the other direction. "Where did this come from?" Les held a small object in the palm of his hand, keeping it away from his body. A few drops of something unpleasant were dripping from his fingers. The object looked remarkably like a human eyeball. It was human-size, complete with pupil, iris and rather bloodshot veins. "Hey, that's mine," Bob said. "You know, this is a funny detective kit. That was in it. But there aren't instructions on how it works." "Well, put it away," Ann told Bob sharply. "It's slimy." Les laid the eyeball on the table and walked away. The eyeball rolled from the smooth, level table, bounced twice when it hit the floor, then rolled along, six inches behind him. He turned and kicked at it. The eyeball rolled nimbly out of the path of the kick. "Les, I think we've made poor Mrs. Burnett angry," Ann said. "She's so upset over her poor husband's health and she thinks we're insulting her." Les didn't hear her. He strode to the detective set, followed at a safe distance by the eyeball, and picked up the box. "Hey, watch out!" Bob cried. A small flashlight fell from the box, landed on its side and its bulb flashed on, throwing a pencil of light across Les's hands. Bob retrieved the flashlight and turned it off while Les glanced through an instruction booklet, frowning. 
"This toy is too complicated for a ten-year-old boy," Les told his wife. "I don't know why you ordered such a thing." He tossed the booklet into the empty box. "I'm going to return it, if you don't smudge it up," she replied. "Look at the marks you made on the instructions." The black finger-marks stood out clearly against the shiny, coated paper. Les looked at his hands. "I didn't do it," he said, pressing his clean fingertips against the kitchen table. Black fingerprints, a full set of them, stood out against the sparkling polished table's surface. "I think the Detectolite did it," Bob said. "The instructions say you've got to be very careful with it, because its effects last for a long time." Les began scrubbing his hands vigorously at the sink. Ann watched him silently, until she saw his fingerprints appear on the faucet, the soap and the towel. She began to yell at him for making such a mess, when Sally floated into the kitchen. The girl was wearing a nightgown. "My God!" Ann forgot her tongue before the children. "She got out of that dress herself. Where did she get that nightgown?" Ann fingered the garment. She didn't recognize it as a nightgown. But in cut and fold, it was suspiciously like the dress that had arrived in the parcel. Her heart sank. She picked up the child, felt the hot forehead, and said: "Les, I think it's the same dress. It must change color or something when it's time for a nap. It seems impossible, but—" She shrugged mutely. "And I think Sally's running a temperature. I'm going to put her to bed." She looked worriedly into the reddened eyes of the small girl, who whimpered on the way to the bedroom. Ann carried her up the stairs, keeping her balance with difficulty, as Sally threatened to pop upward out of her arms. The whole family decided that bed might be a good idea, soon after dinner. When the lights went out, the house seemed to be nearly normal. Les put on a pair of gloves and threw a pillowcase over the eyeball. 
Bob rigged up trestles to warn visitors from the front porch. Ann put small wads of cotton into her ears, because she didn't like the rhythmic rattle, soft but persistent, that emerged from the hall closet where the manky sat. Sally was whining occasionally in her sleep. When daylight entered her room, Sally's nightgown had turned back into the new dress. But the little girl was too sick to get out of bed. She wasn't hungry, her nose was running, and she had a dry cough. Les called the doctor before going to work. The only good thing about the morning for Ann was the fact that the manky had quieted down some time in the night. After she got Bob to school, she gingerly opened the closet door. The manky was now glowing a bright pink and seemed slightly larger. Deep violet lettering stood out on its side: " Today is Wednesday. For obvious reasons, the manky will not operate today. " The mailman brought a letter from Hartshorne-Logan. Ann stared stupidly at the envelope, until she realized that this wasn't an impossibly quick answer to the letter she had written yesterday. It must have crossed in the mail her complaint about the non-arrival of the order. She tore open the envelope and read: "We regret to inform you that your order cannot be filled until the balance you owe us has been reduced. From the attached form, you will readily ascertain that the payment of $87.56 will enable you to resume the purchasing of merchandise on credit. We shall fill your recent order as soon...." Ann crumpled the letter and threw it into the imitation fireplace, knowing perfectly well that it would need to be retrieved for Les after work tonight. She had just decided to call Hartshorne-Logan's complaint department when the phone rang. "I'm afraid I must ask you to come down to the school, Mrs. Morris," a voice said. "Your son is in trouble. He claims that it's connected with something that his parents gave him." "My son?" Ann asked incredulously. "Bob?" "Yes. 
It's a little gadget that looks like a water pistol. Your son insists that he didn't know it would make clothing transparent. He claims it was just accident that he tried it out when he was walking by the gym during calisthenics. We've had to call upon every family in the neighborhood for blankets. Bob has always been a good boy and we believe that we can expel him quietly without newspaper publicity involving his name, if you'll—" "I'll be right down," Ann said. "I mean I won't be right down. I've got a sick baby here. Don't do anything till I telephone my husband. And I'm sorry for Bob. I mean I'm sorry for the girls, and for the boys, too. I'm sorry for—for everything. Good-by." Just as she hung up the telephone, the doorbell rang. It rang with a normal buzz, then began to play soft music. Ann opened the door without difficulty, to admit Dr. Schwartz. "You aren't going to believe me, Doctor," Ann said while he took the child's temperature, "but we can't get that dress off Sally." "Kids are stubborn sometimes." Dr. Schwartz whistled softly when he looked at the thermometer. "She's pretty sick. I want a blood count before I try to move her. Let me undress her." Sally had been mumbling half-deliriously. She made no effort to resist as the doctor picked her up. But when he raised a fold of the dress and began to pull it back, she screamed. The doctor dropped the dress and looked in perplexity at the point where it touched Sally's skin. "It's apparently an allergy to some new kind of material. But I don't understand why the dress won't come off. It's not stuck tight." "Don't bother trying," Ann said miserably. "Just cut it off." Dr. Schwartz pulled scissors from his bag and clipped at a sleeve. When he had cut it to the shoulder, he gently began to peel back the edges of the cloth. Sally writhed and kicked, then collapsed in a faint. The physician smoothed the folds hastily back into place. He looked helpless as he said to Ann: "I don't know quite what to do. 
The flesh starts to hemorrhage when I pull at the cloth. She'd bleed to death if I yanked it off. But it's such an extreme allergy that it may kill her, if we leave it in contact with the skin." The manky's rattle suddenly began rhythmically from the lower part of the house. Ann clutched the side of the chair, trying to keep herself under control. A siren wailed somewhere down the street, grew louder rapidly, suddenly going silent at the peak of its crescendo. Dr. Schwartz glanced outside the window. "An ambulance. Looks as if they're stopping here." "Oh, no," Ann breathed. "Something's happened to Les." "It sure will," Les said grimly, walking into the bedroom. "I won't have a job if I can't get this stuff off my fingers. Big black fingerprints on everything I touch. I can't handle correspondence or shake hands with customers. How's the kid? What's the ambulance doing out front?" "They're going to the next house down the street," the physician said. "Has there been sickness there?" Les held up his hands, palms toward the doctor. "What's wrong with me? My fingers look all right. But they leave black marks on everything I touch." The doctor looked closely at the fingertips. "Every human has natural oil on the skin. That's how detectives get results with their fingerprint powder. But I've never heard of nigrification, in this sense. Better not try to commit any crimes until you've seen a skin specialist." Ann was peering through the window, curious about the ambulance despite her own troubles. She saw two attendants carry Mr. Burnett, motionless and white, on a stretcher from the house next door into the ambulance. A third member of the crew was struggling with a disheveled Mrs. Burnett at the door. Shrieks that sounded like "Murder!" came sharply through the window. "I know those bearers," Dr. Schwartz said. He yanked the window open. "Hey, Pete! What's wrong?" The front man with the stretcher looked up. "I don't know. This guy's awful sick. 
I think his wife is nuts." Mrs. Burnett had broken free. She dashed halfway down the sidewalk, gesticulating wildly to nobody in particular. "It's murder!" she screamed. "Murder again! He's been poisoned! He's going to die! It means the electric chair!" The orderly grabbed her again. This time he stuffed a handkerchief into her mouth to quiet her. "Come back to this house as soon as you deliver him," Dr. Schwartz shouted to the men. "We've got a very sick child up here." "I was afraid this would happen," Les said. "The poor woman already has lost three husbands. If this one is sick, it's no wonder she thinks that somebody is poisoning him." Bob stuck his head around the bedroom door. His mother stared unbelievingly for a moment, then advanced on him threateningly. Something in his face restrained her, just as she was about to start shaking him. "I got something important to tell you," Bob said rapidly, ready to duck. "I snuck out of the principal's office and came home. I got to tell you what I did." "I heard all about what you did," Ann said, advancing again. "And you're not going to slip away from me." "Give me a chance to explain something. Downstairs. So he won't hear," Bob ended in a whisper, nodding toward the doctor. Ann looked doubtfully at Les, then followed Bob down the stairs. The doorbell was monotonously saying in a monotone: "Don't answer me, don't answer me, don't go to the door." "Why did you do it?" Ann asked Bob, her anger suddenly slumping into weary sadness. "People will suspect you of being a sex maniac for the rest of your life. You can't possibly explain—" "Don't bother about the girls' clothing," Bob said, "because it was only an accident. The really important thing is something else I did before I left the house." Les, cursing softly, hurried past them on the way to answer the knocking. He ignored the doorbell's pleas. "I forgot about it," Bob continued, "when that ray gun accidentally went off. 
Then when they put me in the principal's office, I had time to think, and I remembered. I put some white stuff from the detective kit into that sugar we lent Mrs. Burnett last night. I just wanted to see what would happen. I don't know exactly what effect—" "He put stuff in the sugar?" A deep, booming voice came from the front of the house. Mother and son looked through the hall. A policeman stood on the threshold of the front door. "I heard that! The woman next door claims that her husband is poisoned. Young man, I'm going to put you under arrest." The policeman stepped over the threshold. A blue flash darted from the doorbell box, striking him squarely on the chest. The policeman staggered back, sitting down abruptly on the porch. A scent of ozone drifted through the house. "Close the door, close the door," the doorbell was chanting urgently. "Where's that ambulance?" Dr. Schwartz yelled from the top of the steps. "The child's getting worse."
### Introduction
All text has style, whether it be formal or informal, polite or aggressive, colloquial, persuasive, or even robotic. Despite the success of style transfer in image processing BIBREF0, BIBREF1, there has been limited progress in the text domain, where disentangling style from content is particularly difficult. To date, most work in style transfer relies on the availability of meta-data, such as sentiment, authorship, or formality. While meta-data can provide insight into the style of a text, it often conflates style with content, limiting the ability to perform style transfer while preserving content. Generalizing style transfer requires separating style from the meaning of the text itself. The study of literary style can guide us. For example, in the digital humanities and its subfield of stylometry, content doesn't figure prominently in practical methods of discriminating authorship and genres, which can be thought of as style at the level of the individual and population, respectively. Rather, syntactic and functional constructions are the most salient features. In this work, we turn to literary style as a test-bed for style transfer, and build on work from literature scholars using computational techniques for analysis. In particular we draw on stylometry: the use of surface level features, often counts of function words, to discriminate between literary styles. Stylometry first saw success in attributing authorship to the disputed Federalist Papers BIBREF2, but is recently used by scholars to study things such as the birth of genres BIBREF3 and the change of author styles over time BIBREF4. The use of function words is likely not the way writers intend to express style, but they appear to be downstream realizations of higher-level stylistic decisions. 
We hypothesize that surface-level linguistic features, such as counts of personal pronouns, prepositions, and punctuation, are an excellent definition of literary style, as borne out by their use in the digital humanities, and our own style classification experiments. We propose a controllable neural encoder-decoder model in which these features are modelled explicitly as decoder feature embeddings. In training, the model learns to reconstruct a text using only the content words and the linguistic feature embeddings. We can then transfer arbitrary content words to a new style without parallel data by setting the low-level style feature embeddings to be indicative of the target style. This paper makes the following contributions:

- A formal model of style as a suite of controllable, low-level linguistic features that are independent of content.
- An automatic evaluation showing that our model fools a style classifier 84% of the time.
- A human evaluation with English literature experts, including recommendations for dealing with the entanglement of content with style.

### Related Work ::: Style Transfer with Parallel Data
Following in the footsteps of machine translation, style transfer in text has seen success by using parallel data. BIBREF5 use modern translations of Shakespeare plays to build a modern-to-Shakespearean model. BIBREF6 compile parallel data for formal and informal sentences, allowing them to successfully use various machine translation techniques. While parallel data may work for very specific styles, the difficulty of finding parallel texts dramatically limits this approach. ### Related Work ::: Style Transfer without Parallel Data
There has been a decent amount of work on this approach in the past few years BIBREF7, BIBREF8, mostly focusing on variations of an encoder-decoder framework in which style is modeled as a monolithic style embedding. The main obstacle, disentangling style from content, remains a challenging problem. Perhaps the most successful is BIBREF9, who use a de-noising autoencoder and back translation to learn style without parallel data. BIBREF10 outline the benefits of automatically extracting style, and suggest a formal weakness of using linguistic heuristics. In contrast, we believe that monolithic style embeddings don't capture the existing knowledge we have about style, and will struggle to disentangle content. ### Related Work ::: Controlling Linguistic Features
Several papers have worked on controlling style when generating sentences from restaurant meaning representations BIBREF11, BIBREF12. In each of these cases, the diversity in outputs is quite small given the constraints of the meaning representation, style is often constrained to interjections (like “yeah”), and there is no original style from which to transfer. BIBREF13 investigate using stylistic parameters and content parameters to control text generation using a movie review dataset. Their stylistic parameters are created using word-level heuristics and they are successful in controlling these parameters in the outputs. Their success bodes well for our related approach in a style transfer setting, in which the content (not merely content parameters) is held fixed. ### Related Work ::: Stylometry and the Digital Humanities
Style, in literary research, is anything but a stable concept, but it nonetheless has a long tradition of study in the digital humanities. In a remarkably early quantitative study of literature, BIBREF14 charts sentence-level stylistic attributes specific to a number of novelists. Half a century later, BIBREF15 builds on earlier work in information theory by BIBREF16, and defines a literary text as consisting of two “materials”: “the vocabulary, and some structural properties, the style, of its author.” Beginning with BIBREF2, statistical approaches to style, or stylometry, join the already-heated debates over the authorship of literary works. A notable example of this is the “Delta” measure, which uses z-scores of function word frequencies BIBREF17. BIBREF18 find that Shakespeare added some material to a later edition of Thomas Kyd's The Spanish Tragedy, and that Christopher Marlowe collaborated with Shakespeare on Henry VI. ### Models ::: Preliminary Classification Experiments
The stylometric research cited above suggests that the most frequently used words, e.g. function words, are most discriminating of authorship and literary style. We investigate these claims using three corpora that have distinctive styles in the literary community: gothic novels, philosophy books, and pulp science fiction, hereafter sci-fi. We retrieve gothic novels and philosophy books from Project Gutenberg and pulp sci-fi from Internet Archive's Pulp Magazine Archive. We partition this corpus into train, validation, and test sets, the sizes of which can be found in Table TABREF12. In order to validate the above claims, we train five different classifiers to predict the literary style of sentences from our corpus. Each classifier has gradually more content words replaced with part-of-speech (POS) tag placeholder tokens. The All model is trained on sentences with all proper nouns replaced by `PROPN'. The models Ablated N, Ablated NV, and Ablated NVA replace nouns, nouns & verbs, and nouns, verbs, & adjectives with the corresponding POS tag respectively. Finally, Content-only is trained on sentences with all words that are not tagged as NOUN, VERB, ADJ removed; the remaining words are not ablated. We train the classifiers on the training set, balancing the class distribution to make sure there are the same number of sentences from each style. Classifiers are trained using fastText BIBREF19, using tri-gram features with all other settings as default. table:classifiers shows the accuracies of the classifiers. The styles are highly distinctive: the All classifier has an accuracy of 86%. Additionally, even the Ablated NVA classifier is quite successful, reaching 75% accuracy without access to any content words. The Content-only classifier is also quite successful, at 80% accuracy. This indicates that these stylistic genres are distinctive at both the content level and at the syntactic level. ### Models ::: Formal Model of Style
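Before formalizing the controls, note that the ablation schemes in the preceding classification experiments reduce to simple token substitution. A minimal sketch, assuming tokens arrive already POS-tagged as (word, tag) pairs (the experiments themselves would obtain tags from a POS tagger):

```python
# Sketch of the POS-ablation schemes from the classification experiments.
# Assumes pre-tagged (word, coarse_pos) pairs; the example sentence and
# tag inventory are illustrative, not taken from the paper's corpora.

def ablate(tagged, pos_to_ablate):
    """Replace each word whose tag is in pos_to_ablate with the tag itself."""
    return [tag if tag in pos_to_ablate else word for word, tag in tagged]

def content_only(tagged, keep=("NOUN", "VERB", "ADJ")):
    """Keep only content words; everything else is removed (not ablated)."""
    return [word for word, tag in tagged if tag in keep]

sentence = [("the", "DET"), ("monster", "NOUN"), ("shrieked", "VERB"),
            ("through", "ADP"), ("the", "DET"), ("dark", "ADJ"),
            ("corridor", "NOUN")]

ablated_nva = ablate(sentence, {"NOUN", "VERB", "ADJ"})
# ['the', 'NOUN', 'VERB', 'through', 'the', 'ADJ', 'NOUN']
```

The Ablated N and Ablated NV variants are the same call with smaller tag sets.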
Given that non-content words are distinctive enough for a classifier to determine style, we propose a suite of low-level linguistic feature counts (henceforth, controls) as our formal, content-blind definition of style. The style of a sentence is represented as a vector of counts of closed word classes (like personal pronouns) as well as counts of syntactic features like the number of SBAR non-terminals in its constituency parse, since clause structure has been shown to be indicative of style BIBREF20. Controls are extracted heuristically, and almost all rely on counts of pre-defined word lists. For constituency parses we use the Stanford Parser BIBREF21. table:controlexamples lists all the controls along with examples. ### Models ::: Formal Model of Style ::: Reconstruction Task
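The reconstruction task described next starts from exactly this heuristic extraction. As a rough sketch of separating a sentence into controls and content (the word lists below are illustrative stand-ins for the paper's curated lists, and parse-based controls such as SBAR counts are omitted):

```python
# Illustrative control extraction and content/style separation.
# WORD_LISTS is a hypothetical, abbreviated stand-in for the paper's
# pre-defined lists; constituency-parse controls are not sketched here.

WORD_LISTS = {
    "personal_pronouns": {"i", "me", "you", "he", "him", "she", "her",
                          "we", "us", "they", "them", "it"},
    "conjunctions": {"and", "but", "or", "nor"},
    "determiners": {"the", "a", "an", "this", "that"},
}
PUNCTUATION = set(",.;:!?")

def split_style_and_content(tokens):
    """Return (control_counts, content_tokens) for a tokenized sentence."""
    controls = {name: 0 for name in WORD_LISTS}
    controls["punctuation"] = 0
    content = []
    for tok in tokens:
        low = tok.lower()
        matched = False
        for name, words in WORD_LISTS.items():
            if low in words:
                controls[name] += 1
                matched = True
        if tok in PUNCTUATION:
            controls["punctuation"] += 1
            matched = True
        if not matched:
            content.append(tok)
    return controls, content

controls, content = split_style_and_content(
    "I feared the storm , and it feared me .".split())
# controls: {'personal_pronouns': 3, 'conjunctions': 1,
#            'determiners': 1, 'punctuation': 2}
# content:  ['feared', 'storm', 'feared']
```

The counts become the style vector, while the surviving content tokens (plus their lemmas and POS tags) become the encoder input.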
Models are trained with a reconstruction task, in which a distorted version of a reference sentence is input and the goal is to output the original reference. fig:sentenceinput illustrates the process. Controls are calculated heuristically. All words found in the control word lists are then removed from the reference sentence. The remaining words, which represent the content, are used as input into the model, along with their POS tags and lemmas. In this way we encourage models to construct a sentence using content and style independently. This will allow us to vary the stylistic controls while keeping the content constant, and successfully perform style transfer. When generating a new sentence, the controls specify the counts of the syntactic features that we expect to be realized in the output. ### Models ::: Neural Architecture
We implement our feature controlled language model using a neural encoder-decoder with attention BIBREF22, using 2-layer uni-directional gated recurrent units (GRUs) for the encoder and decoder BIBREF23. The input to the encoder is a sequence of $M$ content words, along with their lemmas, and fine and coarse grained part-of-speech (POS) tags, i.e. $X_{.,j} = (x_{1,j},\ldots ,x_{M,j})$ for $j \in \mathcal {T} = \lbrace \textrm {word, lemma, fine-pos, coarse-pos}\rbrace $. We embed each token (and its lemma and POS) before concatenating, and feeding into the encoder GRU to obtain encoder hidden states, $ c_i = \operatorname{gru}(c_{i-1}, \left[E_j(X_{i,j}), \; j\in \mathcal {T} \right]; \omega _{enc}) $ for $i \in \lbrace 1,\ldots ,M\rbrace ,$ where initial state $c_0$, encoder GRU parameters $\omega _{enc}$ and embedding matrices $E_j$ are learned parameters. The decoder sequentially generates the outputs, i.e. a sequence of $N$ tokens $y =(y_1,\ldots ,y_N)$, where all tokens $y_i$ are drawn from a finite output vocabulary $\mathcal {V}$. To generate each token we first embed the previously generated token $y_{i-1}$ and a vector of $K$ control features $z = ( z_1,\ldots , z_K)$ (using embedding matrices $E_{dec}$ and $E_{\textrm {ctrl-1}}, \ldots , E_{\textrm {ctrl-K}}$ respectively), before concatenating them into a vector $\rho _i,$ and feeding them into the decoder side GRU along with the previous decoder state $h_{i-1}$, giving $h_i = \operatorname{gru}(h_{i-1}, \rho _i; \omega _{dec}),$ where $\omega _{dec}$ are the decoder side GRU parameters. Using the decoder hidden state $h_i$ we then attend to the encoder context vectors $c_j$, computing attention scores $\alpha _{i,j}$, before passing $h_i$ and the attention weighted context $\bar{c}_i=\sum _{j=1}^M \alpha _{i,j} c_j$ into a single hidden-layer perceptron with softmax output to compute the next token prediction probability; here $W,U,V$ and $u,v, \nu $ are the attention and output layer parameter matrices and vectors respectively. Crucially, the controls $z$ remain fixed for all input decoder steps. 
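A minimal numerical sketch of this recurrence (single layer, no attention, toy dimensions; not the authors' implementation), together with the control-count bucketing described below:

```python
# Toy GRU step over concatenated feature embeddings. Dimensions and the
# random "embeddings" are illustrative only; the paper uses 2-layer GRUs
# with attention and learned embedding matrices.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x, p):
    """One GRU step: h_i = gru(h_{i-1}, x; params)."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h_prev)            # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h_prev)            # reset gate
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h_prev))
    return (1 - z) * h_prev + z * h_tilde

# Toy feature sizes; the paper uses 128/128/64/32 for word/lemma/fine/coarse.
sizes = {"word": 8, "lemma": 8, "fine": 4, "coarse": 2}
d_in, d_hid = sum(sizes.values()), 16

params = {name: rng.normal(scale=0.1, size=shape) for name, shape in
          [("Wz", (d_hid, d_in)), ("Uz", (d_hid, d_hid)),
           ("Wr", (d_hid, d_in)), ("Ur", (d_hid, d_hid)),
           ("Wh", (d_hid, d_in)), ("Uh", (d_hid, d_hid))]}

# Encoder input for one content token: concatenated feature embeddings.
x_i = np.concatenate([rng.normal(size=sizes[k]) for k in sizes])
c_i = gru_step(np.zeros(d_hid), x_i, params)

# Control counts are bucketed before embedding lookup:
# counts 0-20 each get a distinct row, anything larger shares row 20.
def control_bucket(count, max_bucket=20):
    return min(count, max_bucket)
```

The decoder side is the same recurrence, with the previous output-token embedding concatenated to the (fixed) control embeddings at every step.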
Each $z_k$ represents the frequency of one of the low-level features described in sec:formalstyle. During training on the reconstruction task, we can observe the full output sequence $y,$ and so we can obtain counts for each control feature directly. Controls receive a different embedding depending on their frequency, where counts of 0-20 each get a unique embedding, and counts greater than 20 are assigned to the same embedding. At test time, we set the values of the controls according to the procedure described in Section SECREF25. We use embedding sizes of 128, 128, 64, and 32 for the token, lemma, fine, and coarse grained POS embedding matrices respectively. Output token embeddings $E_{dec}$ have size 512; the control feature embeddings have size 50. We set 512 for all GRU and perceptron output sizes. We refer to this model as the StyleEQ model. See fig:model for a visual depiction of the model. ### Models ::: Neural Architecture ::: Baseline Genre Model
We compare the above model to a similar model where, rather than explicitly representing the $K$ features as input, we replace them with a single genre embedding, i.e. we learn a genre specific embedding for each of the gothic, scifi, and philosophy genres, as studied in BIBREF8 and BIBREF7. To generate in a specific style, we simply set the appropriate embedding. We use genre embeddings of size 850, which is equivalent to the total size of the $K$ feature embeddings in the StyleEQ model. ### Models ::: Neural Architecture ::: Training
We train both models with minibatch stochastic gradient descent with a learning rate of 0.25, a weight decay penalty of 0.0001, and a batch size of 64. We also apply dropout with a drop rate of 0.25 to all embedding layers, the GRUs, and the perceptron hidden layer. We train for a maximum of 200 epochs, using validation set BLEU score BIBREF26 to select the final model iteration for evaluation. ### Models ::: Neural Architecture ::: Selecting Controls for Style Transfer
In the Baseline model, style transfer is straightforward: given an input sentence in one style, fix the encoder content features while selecting a different genre embedding. In contrast, the StyleEQ model requires selecting the counts for each control. Although there are a variety of ways to do this, we use a method that encourages a diversity of outputs. In order to ensure the controls match the reference sentence in magnitude, we first find all sentences in the target style with the same number of words as the reference sentence. Then, we add the following constraints: the same number of proper nouns, the same number of nouns, the same number of verbs, and the same number of adjectives. We randomly sample $n$ of the remaining sentences, and for each of these `sibling' sentences, we compute the controls. For each of the new controls, we generate a sentence using the original input sentence content features. The generated sentences are then reranked using the length normalized log-likelihood under the model. We can then select the highest scoring sentence as our style-transferred output, or take the top-$k$ when we need a diverse set of outputs. The reason for this process is that although there are group-level distinctive controls for each style, e.g. the high use of punctuation in philosophy books or of first person pronouns in gothic novels, at the sentence level it can understandably be quite varied. This method matches sentences between styles, capturing the natural distribution of the corpora. ### Automatic Evaluations ::: BLEU Scores & Perplexity
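Both the sibling sampling just described and the beam decoding used in the evaluations that follow rank candidates by length-normalized log-likelihood. A sketch, where `score_loglik` is a hypothetical stand-in for the model's likelihood function and sentences are lists of (word, pos) pairs:

```python
# Sketch of sibling selection and length-normalized reranking.
# The matching constraints mirror those described in the text; the scoring
# function is a placeholder for the trained model's log-likelihood.
import random

def find_siblings(reference, corpus, n=5, seed=0):
    """Sample up to n target-style sentences matching the reference in
    length and in proper-noun/noun/verb/adjective counts."""
    def profile(sent):
        counts = {"PROPN": 0, "NOUN": 0, "VERB": 0, "ADJ": 0}
        for _, pos in sent:
            if pos in counts:
                counts[pos] += 1
        return (len(sent), counts["PROPN"], counts["NOUN"],
                counts["VERB"], counts["ADJ"])
    target = profile(reference)
    matches = [s for s in corpus if profile(s) == target]
    random.Random(seed).shuffle(matches)
    return matches[:n]

def rerank(candidates, score_loglik):
    """Rank generated candidates by length-normalized log-likelihood."""
    return sorted(candidates,
                  key=lambda toks: score_loglik(toks) / max(len(toks), 1),
                  reverse=True)
```

The controls of each sampled sibling are then paired with the original content features to generate one candidate per sibling before reranking.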
In tab:blueperpl we report BLEU scores for the reconstruction of test set sentences from their content and feature representations, as well as the model perplexities of the reconstruction. For both models, we use beam decoding with a beam size of eight. Beam candidates are ranked according to their length normalized log-likelihood. On these automatic measures we see that StyleEQ is better able to reconstruct the original sentences. In some sense this evaluation is mostly a sanity check, as the feature controls contain more locally specific information than the genre embeddings, which say very little about how many specific function words one should expect to see in the output. ### Automatic Evaluations ::: Feature Control
Designing controllable language models is often difficult because of the various dependencies between tokens; changing one control value may affect other aspects of the surface realization. For example, increasing the number of conjunctions may affect how the generator places prepositions to compensate for structural changes in the sentence. Since our features are deterministically recoverable, we can perturb an individual control value and check that the desired change was realized in the output. Moreover, we can check the amount of change in the other, non-perturbed features to measure the independence of the controls. We sample 50 sentences from each genre from the test set. For each sample, we create a perturbed control setting for each control by adding $\delta $ to the original control value. This is done for $\delta \in \lbrace -3, -2, -1, 0, 1, 2, 3\rbrace $, skipping any settings where the new control value would be negative. table:autoeval:ctrl shows the results of this experiment. The Exact column displays the percentage of generated texts that realize the exact number of control features specified by the perturbed control. High percentages in the Exact column indicate greater one-to-one correspondence between the control and surface realization. For example, if the input was “Dracula and Frankenstein and the mummy,” and we change the conjunction feature by $\delta =-1$, an output of “Dracula, Frankenstein and the mummy,” would count towards the Exact category, while “Dracula, Frankenstein, the mummy,” would not. The Direction column specifies the percentage of cases where the generated text produces a changed number of the control features that, while not exactly matching the specified value of the perturbed control, does change from the original in the correct direction.
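The Exact, Direction, and Atomic categories can be computed mechanically from feature counts. A minimal sketch (the dict-based bookkeeping is ours, as is the choice to treat a delta of 0 as never counting towards Direction):

```python
def control_change_metrics(orig, target_value, realized, control):
    """Classify one generation under a perturbed control setting.

    orig / realized: dicts mapping control name -> count in the input /
    generated sentence; target_value is the requested (perturbed) value
    for `control`. Returns (exact, direction, atomic) booleans, following
    the definitions in the paper.
    """
    delta = target_value - orig[control]
    moved = realized[control] - orig[control]
    exact = realized[control] == target_value
    # Correct direction: the count changed, and with the same sign as delta.
    direction = moved != 0 and (moved > 0) == (delta > 0)
    # Atomic: only the perturbed control changed; everything else held fixed.
    others_fixed = all(realized[k] == orig[k] for k in orig if k != control)
    atomic = direction and others_fixed
    return exact, direction, atomic
```

For the "Dracula" examples above, dropping one conjunction exactly scores (True, True, True), overshooting to zero conjunctions scores (False, True, True), and a side effect on another feature makes Atomic False.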
For example, if the input again was “Dracula and Frankenstein and the mummy,” and we change the conjunction feature by $\delta =-1$, both outputs of “Dracula, Frankenstein and the mummy,” and “Dracula, Frankenstein, the mummy,” would count towards Direction. High percentages in Direction mean that we could roughly ensure desired surface realizations by modifying the control by a larger $\delta $. Finally, the Atomic column specifies the percentage of cases where the generated text with the perturbed control only realizes changes to that specific control, while other features remain constant. For example, if the input was “Dracula and Frankenstein in the castle,” and we set the conjunction feature to $\delta =-1$, an output of “Dracula near Frankenstein in the castle,” would not count as Atomic because, while the number of conjunctions did decrease by one, the number of simple prepositions changed. An output of “Dracula, Frankenstein in the castle,” would count as Atomic. High percentages in the Atomic column indicate this feature is only loosely coupled to the other features and can be changed without modifying other aspects of the sentence. Controls such as conjunction, determiner, and punctuation are highly controllable, with Exact rates above 80%. But with the exception of the constituency parse features, all controls have high Direction rates, many in the 90s. These results indicate our model successfully controls these features. The fact that the Atomic rates are relatively low is to be expected, as controls are highly coupled – e.g. to increase 1stPer, it is likely another pronoun control will have to decrease.

### Automatic Evaluations ::: Automatic Classification
For each model we look at the classifier prediction accuracy of reconstructed and transferred sentences. In particular we use the Ablated NVA classifier, as this is the most content-blind one. We produce 16 outputs from both the Baseline and StyleEQ models. For the Baseline, we use a beam search of size 16. For the StyleEQ model, we use the method described in Section SECREF25 to select 16 `sibling' sentences in the target style, and generate a transferred sentence for each. We look at three different methods for selection: all, which uses all output sentences; top, which selects the top ranked sentence based on the score from the model; and oracle, which selects the sentence with the highest classifier likelihood for the intended style. The reason for the third method, which indeed acts as an oracle, is that using the score from the model didn't always surface a transferred sentence that best reflected the desired style. Partially this was because the model score was mostly a function of how well a transferred sentence reflected the distribution of the training data. But additionally, some control settings are more indicative of a target style than others. The use of the classifier allows us to identify the most suitable control setting for a target style that was roughly compatible with the number of content words. In table:fasttext-results we see the results. Note that for both models, the all and top classification accuracy tends to be quite similar, though for the Baseline they are often almost exactly the same when the Baseline has little to no diversity in the outputs. However, the oracle introduces a huge jump in accuracy for the StyleEQ model, especially compared to the Baseline, partially because the diversity of outputs from StyleEQ is much higher; often the Baseline model produces no diversity – the 16 output sentences may be nearly identical, save a single word or two.
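The three selection strategies reduce to an argmax over different candidate scores. A sketch under our own naming (the paper does not specify an interface):

```python
def select_output(candidates, method, model_scores=None, clf_probs=None):
    """Pick from k candidate transferred sentences.

    model_scores: length-normalized log-likelihoods under the generator;
    clf_probs: Ablated NVA classifier probability of the target style.
    `all` keeps every candidate, `top` ranks by model score, and
    `oracle` ranks by classifier probability.
    """
    if method == "all":
        return list(candidates)
    if method not in ("top", "oracle"):
        raise ValueError(method)
    scores = model_scores if method == "top" else clf_probs
    # Rank by score; zip pairs compare on the score first.
    return max(zip(scores, candidates))[1]
```

Note that only the oracle consults the classifier, and even then only to choose among the 16 already-generated candidates; the classifier never influences generation itself.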
It's important to note that neither model uses the classifier in any way except to select the sentence from 16 candidate outputs. What this implies is that lurking within the StyleEQ model outputs are great sentences, even if they are hard to find. In many cases, the StyleEQ model has a classification accuracy above the base rate from the test data, which is 75% (see table:classifiers). ### Human Evaluation
table:cherrypicking shows example outputs for the StyleEQ and Baseline models. Through inspection we see that the StyleEQ model successfully changes syntactic constructions in stylistically distinctive ways, such as increasing syntactic complexity when transferring to philosophy, or changing relevant pronouns when transferring to sci-fi. In contrast, the Baseline model doesn't create outputs that move far from the reference sentence, making only minor modifications such as changing the type of a single pronoun. To determine how readers would classify our transferred sentences, we recruited three English Literature PhD candidates, all of whom had passed qualifying exams that included determining both genre and era of various literary texts.

### Human Evaluation ::: Fluency Evaluation
To evaluate the fluency of our outputs, we had the annotators score reference sentences, reconstructed sentences, and transferred sentences on a 0-5 scale, where 0 was incoherent and 5 was a well-written human sentence. table:fluency shows the average fluency of various conditions from all three annotators. Both models have fluency scores around 3. Upon inspection of the outputs, it is clear that many have fluency errors, resulting in ungrammatical sentences. Notably the Baseline often has slightly higher fluency scores than the StyleEQ model. This is likely because the Baseline model is far less constrained in how to construct the output sentence, and upon inspection often reconstructs the reference sentence even when performing style transfer. In contrast, the StyleEQ is encouraged to follow the controls, but can struggle to incorporate these controls into a fluent sentence. The fluency of all outputs is lower than desired. We expect that incorporating pre-trained language models would increase the fluency of all outputs without requiring larger datasets. ### Human Evaluation ::: Human Classification
Each annotator annotated 90 reference sentences (i.e. from the training corpus) with which style they thought the sentence was from. The accuracy on this baseline task for annotators A1, A2, and A3 was 80%, 88%, and 80% respectively, giving us an upper expected bound on the human evaluation. In discussing this task with the annotators, they noted that content is a heavy predictor of genre, and that would certainly confound their annotations. To attempt to mitigate this, we gave them two annotation tasks: which-of-3 where they simply marked which style they thought a sentence was from, and which-of-2 where they were given the original style and marked which style they thought the sentence was transferred into. For each task, each annotator marked 180 sentences: 90 from each model, with an even split across the three genres. Annotators were presented the sentences in a random order, without information about the models. In total, each marked 270 sentences. (Note there were no reconstructions in this annotation task.) table:humanclassifiers shows the results. In both tasks, accuracy of annotators classifying the sentence as its intended style was low. In which-of-3, scores were around 20%, below the chance rate of 33%. In which-of-2, scores were in the 50s, slightly above the chance rate of 50%. This was the case for both models. There was a slight increase in accuracy for the StyleEQ model over the Baseline for which-of-3, but the opposite trend for which-of-2, suggesting these differences are not significant. It's clear that it's hard to fool the annotators. Introspecting on their approach, the annotators expressed having immediate responses based on key words – for instance any references of `space' implied `sci-fi'. We call this the `vampires in space' problem, because no matter how well a gothic sentence is rewritten as a sci-fi one, it's impossible to ignore the fact that there is a vampire in space. 
The transferred sentences, in the eyes of the Ablated NVA classifier (with no access to content words), did quite well transferring into their intended style. But people are not blind to content. ### Human Evaluation ::: The `Vampires in Space' Problem
Working with the annotators, we regularly came up against the 'vampires in space' problem: while syntactic constructions account for much of the distinction of literary styles, these constructions often co-occur with distinctive content. Stylometrics finds syntactic constructions are great at fingerprinting, but suggests that these constructions are surface realizations of higher-level stylistic decisions. The number and type of personal pronouns is a reflection of how characters feature in a text. A large number of positional prepositions may be the result of a writer focusing on physical descriptions of scenes. In our attempt to decouple these, we create Frankenstein sentences, which piece together features of different styles – we are putting vampires in space. Another way to validate our approach would be to select data that is stylistically distinctive but with similar content: perhaps genres in which content is static but language use changes over time, stylistically distinct authors within a single genre, or parodies of a distinctive genre. ### Conclusion and Future Work
We present a formal, extendable model of style that can add control to any neural text generation system. We model style as a suite of low-level linguistic controls, and train a neural encoder-decoder model to reconstruct reference sentences given only content words and the setting of the controls. In automatic evaluations, we show that our model can fool a style classifier 84% of the time and outperforms a baseline genre-embedding model. In human evaluations, we encounter the `vampires in space' problem in which content and style are equally discriminative but people focus more on the content. In future work we would like to model higher-level syntactic controls. BIBREF20 show that differences in clausal constructions, for instance having a dependent clause before an independent clause or vice versa, is a marker of style appreciated by the reader. Such features would likely interact with our lower-level controls in an interesting way, and provide further insight into style transfer in text. ### Acknowledgements
Katy Gero is supported by an NSF GRF (DGE - 1644869). We would also like to thank Elsbeth Turcan for her helpful comments.

Table 1: The size of the data across the three different styles investigated.

Table 2: Accuracy of five classifiers trained using trigrams with fasttext, for all test data and split by genre. Despite heavy ablation, the Ablated NVA classifier has an accuracy of 75%, suggesting syntactic and functional features alone can be fully predictive of style.

Table 3: All controls, their source, and examples. Punctuation doesn’t include end punctuation.

Figure 1: How a reference sentence from the dataset is prepared for input to the model. Controls are calculated heuristically, and then removed from the sentence. The remaining words, as well as their lemmatized versions and part-of-speech tags, are used as input separately.

Figure 2: A schematic depiction of our style control model.

Table 4: Test set reconstruction BLEU score and perplexity (in nats).

Table 5: Percentage rates of Exact, Direction, and Atomic feature control changes. See subsection 4.2 for explanation.

Table 6: Ablated NVA classifier accuracy using three different methods of selecting an output sentence. This is additionally split into the nine transfer possibilities, given the three source styles. The StyleEQ model produces far more diverse outputs, allowing the oracle method to have a very high accuracy compared to the Baseline model.

Table 7: Example outputs (manually selected) from both models. The StyleEQ model successfully rewrites the sentence with very different syntactic constructions that reflect style, while the Baseline model rarely moves far from the reference.

Table 8: Fluency scores (0-5, where 0 is incoherent) of sentences from three annotators. The Baseline model tends to produce slightly more fluent sentences than the StyleEQ model, likely because it is less constrained.

Table 9: Accuracy of three annotators in selecting the correct style for transferred sentences.
In this evaluation there is little difference between the models.
### Introduction
Topic models, such as latent Dirichlet allocation (LDA), allow us to analyze large collections of documents by revealing their underlying themes, or topics, and how each document exhibits them BIBREF0 . Therefore, it is not surprising that topic models have become a standard tool in data analysis, with many applications that go even beyond their original purpose of modeling textual data, such as analyzing images BIBREF1 , BIBREF2 , videos BIBREF3 , survey data BIBREF4 or social networks data BIBREF5 . Since documents are frequently associated with other variables such as labels, tags or ratings, much interest has been placed on supervised topic models BIBREF6 , which allow the use of that extra information to “guide" the topics discovery. By jointly learning the topics distributions and a classification or regression model, supervised topic models have been shown to outperform the separate use of their unsupervised analogues together with an external regression/classification algorithm BIBREF2 , BIBREF7 . Supervised topics models are then state-of-the-art approaches for predicting target variables associated with complex high-dimensional data, such as documents or images. Unfortunately, the size of modern datasets makes the use of a single annotator unrealistic and unpractical for the majority of the real-world applications that involve some form of human labeling. For instance, the popular Reuters-21578 benchmark corpus was categorized by a group of personnel from Reuters Ltd and Carnegie Group, Inc. Similarly, the LabelMe project asks volunteers to annotate images from a large collection using an online tool. Hence, it is seldom the case where a single oracle labels an entire collection. Furthermore, the Web, through its social nature, also exploits the wisdom of crowds to annotate large collections of documents and images. By categorizing texts, tagging images or rating products and places, Web users are generating large volumes of labeled content. 
However, when learning supervised models from crowds, the quality of labels can vary significantly due to task subjectivity and differences in annotator reliability (or bias) BIBREF8 , BIBREF9 . If we consider a sentiment analysis task, it becomes clear that the subjectiveness of the exercise is prone to generate considerably distinct labels from different annotators. Similarly, online product reviews are known to vary considerably depending on the personal biases and volatility of the reviewer's opinions. It is therefore essential to account for these issues when learning from this increasingly common type of data. Hence, the interest of researchers on building models that take the reliabilities of different annotators into consideration and mitigate the effect of their biases has spiked during the last few years (e.g. BIBREF10 , BIBREF11 ). The increasing popularity of crowdsourcing platforms like Amazon Mechanical Turk (AMT) has further contributed to the recent advances in learning from crowds. This kind of platforms offers a fast, scalable and inexpensive solution for labeling large amounts of data. However, their heterogeneous nature in terms of contributors makes their straightforward application prone to many sorts of labeling noise and bias. Hence, a careless use of crowdsourced data as training data risks generating flawed models. In this article, we propose a fully generative supervised topic model that is able to account for the different reliabilities of multiple annotators and correct their biases. The proposed model is then capable of jointly modeling the words in documents as arising from a mixture of topics, the latent true target variables as a result of the empirical distribution over topics of the documents, and the labels of the multiple annotators as noisy versions of that latent ground truth. 
We propose two different models, one for classification BIBREF12 and another for regression problems, thus covering a very wide range of possible practical applications, as we empirically demonstrate. Since the majority of the tasks for which multiple annotators are used generally involve complex data such as text, images and video, by developing a multi-annotator supervised topic model we are contributing with a powerful tool for learning predictive models of complex high-dimensional data from crowds. Given that the increasing sizes of modern datasets can pose a problem for obtaining human labels as well as for Bayesian inference, we propose an efficient stochastic variational inference algorithm BIBREF13 that is able to scale to very large datasets. We empirically show, using both simulated and real multiple-annotator labels obtained from AMT for popular text and image collections, that the proposed models are able to outperform other state-of-the-art approaches in both classification and regression tasks. We further show the computational and predictive advantages of the stochastic variational inference algorithm over its batch counterpart. ### Supervised topic models
Latent Dirichlet allocation (LDA) soon proved to be a powerful tool for modeling documents BIBREF0 and images BIBREF1 by extracting their underlying topics, where topics are probability distributions across words, and each document is characterized by a probability distribution across topics. However, the need to model the relationship between documents and labels quickly gave rise to many supervised variants of LDA. One of the first notable works was that of supervised LDA (sLDA) BIBREF6 . By extending LDA through the inclusion of a response variable that is linearly dependent on the mean topic-assignments of the words in a document, sLDA is able to jointly model the documents and their responses, in order to find latent topics that will best predict the response variables for future unlabeled documents. Although initially developed for general continuous response variables, sLDA was later extended to classification problems BIBREF2 , by modeling the relationship between topic-assignments and labels with a softmax function as in logistic regression. From a classification perspective, there are several ways in which document classes can be included in LDA. The most natural one in this setting is probably the sLDA approach, since the classes are directly dependent on the empirical topic mixture distributions. This approach is coherent with the generative perspective of LDA but, nevertheless, several discriminative alternatives also exist. For example, DiscLDA BIBREF14 introduces a class-dependent linear transformation on the topic mixture proportions of each document, such that the per-word topic assignments are drawn from linearly transformed mixture proportions. The class-specific transformation matrices are then able to reposition the topic mixture proportions so that documents with the same class labels have similar topics mixture proportions. 
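The sLDA classification link described above is simply a softmax over class-specific linear scores of the document's mean topic assignments. A pure-Python sketch of that link (variable names are ours):

```python
import math

def slda_class_probs(z_bar, eta):
    """p(class | document) from mean topic assignments, sLDA-style.

    z_bar: length-K mean topic-assignment vector for one document;
    eta: list of C per-class coefficient vectors, each of length K.
    Returns the softmax over the per-class scores eta_c . z_bar.
    """
    scores = [sum(e * z for e, z in zip(eta_c, z_bar)) for eta_c in eta]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [v / total for v in exps]
```

This is the piece that lets supervised topic models "guide" topic discovery: during training, the topic posteriors and the coefficients eta are fit jointly so that the topics found are predictive of the labels.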
The transformation matrices can be estimated by maximizing the conditional likelihood of response variables as the authors propose BIBREF14 . An alternative way of including classes in LDA for supervision is the one proposed in the Labeled-LDA model BIBREF15 . Labeled-LDA is a variant of LDA that incorporates supervision by constraining the topic model to assign to a document only topics that correspond to its label set. While this allows for multiple labels per document, it is restrictive in the sense that the number of topics needs to be the same as the number of possible labels. From a regression perspective, other than sLDA, the most relevant approaches are the Dirichlet-multinomial regression BIBREF16 and the inverse regression topic models BIBREF17 . The Dirichlet-multinomial regression (DMR) topic model BIBREF16 includes a log-linear prior on the document's mixture proportions that is a function of a set of arbitrary features, such as author, date, publication venue or references in scientific articles. The inferred Dirichlet-multinomial distribution can then be used to make predictions about the values of these features. The inverse regression topic model (IRTM) BIBREF17 is a mixed-membership extension of the multinomial inverse regression (MNIR) model proposed in BIBREF18 that exploits the topical structure of text corpora to improve its predictions and facilitate exploratory data analysis. However, this results in a rather complex and inefficient inference procedure. Furthermore, making predictions in the IRTM is not trivial. For example, MAP estimates of targets will be in a different scale than the original document's metadata. Hence, the authors propose the use of a linear model to regress metadata values onto their MAP predictions. The approaches discussed so far rely on likelihood-based estimation procedures.
The work in BIBREF7 contrasts with these approaches by proposing MedLDA, a supervised topic model that utilizes the max-margin principle for estimation. Despite its margin-based advantages, MedLDA loses the probabilistic interpretation of the document classes given the topic mixture distributions. On the contrary, in this article we propose a fully generative probabilistic model of the answers of multiple annotators and of the words of documents arising from a mixture of topics.

### Learning from multiple annotators
Learning from multiple annotators is an increasingly important research topic. Since the early work of Dawid and Skene BIBREF19 , who attempted to obtain point estimates of the error rates of patients given repeated but conflicting responses to various medical questions, many approaches have been proposed. These usually rely on latent variable models. For example, in BIBREF20 the authors propose a model to estimate the ground truth from the labels of multiple experts, which is then used to train a classifier. While earlier works usually focused on estimating the ground truth and the error rates of different annotators, recent works are more focused on the problem of learning classifiers using multiple-annotator data. This idea was explored by Raykar et al. BIBREF21 , who proposed an approach for jointly learning the levels of expertise of different annotators and the parameters of a logistic regression classifier, by modeling the ground truth labels as latent variables. This work was later extended in BIBREF11 by considering the dependencies of the annotators' labels on the instances they are labeling, and also in BIBREF22 through the use of Gaussian process classifiers. The model proposed in this article for classification problems shares the same intuition with this line of work and models the true labels as latent variables. However, it differs significantly by using a fully Bayesian approach for estimating the reliabilities and biases of the different annotators. Furthermore, it considers the problems of learning a low-dimensional representation of the input data (through topic modeling) and modeling the answers of multiple annotators jointly, providing an efficient stochastic variational inference algorithm. Despite the considerable number of approaches for learning classifiers from the noisy answers of multiple annotators, for continuous response variables this problem has been addressed to a much lesser extent. For example, Groot et al.
BIBREF23 address this problem in the context of Gaussian processes. In their work, the authors assign a different variance to the likelihood of the data points provided by the different annotators, thereby allowing them to have different noise levels, which can be estimated by maximizing the marginal likelihood of the data. Similarly, the authors in BIBREF21 propose an extension of their own classification approach to regression problems by assigning different variances to the Gaussian noise models of the different annotators. In this article, we take this idea one step further by also considering a per-annotator bias parameter, which gives the proposed model the ability to overcome certain personal tendencies in the annotators labeling styles that are quite common, for example, in product ratings and document reviews. Furthermore, we empirically validate the proposed model using real multi-annotator data obtained from Amazon Mechanical Turk. This contrasts with the previously mentioned works, which rely only on simulated annotators. ### Classification model
In this section, we develop a multi-annotator supervised topic model for classification problems. The model for regression settings will be presented in Section SECREF5 . We start by deriving a (batch) variational inference algorithm for approximating the posterior distribution over the latent variables and an algorithm to estimate the model parameters. We then develop a stochastic variational inference algorithm that gives the model the capability of handling large collections of documents. Finally, we show how to use the learned model to classify new documents. ### Proposed model
Let INLINEFORM0 be an annotated corpus of size INLINEFORM1 , where each document INLINEFORM2 is given a set of labels INLINEFORM3 from INLINEFORM4 distinct annotators. We can take advantage of the inherent topical structure of documents and model their words as arising from a mixture of topics, each being defined as a distribution over the words in a vocabulary, as in LDA. In LDA, the INLINEFORM5 word, INLINEFORM6 , in a document INLINEFORM7 is provided a discrete topic-assignment INLINEFORM8 , which is drawn from the documents' distribution over topics INLINEFORM9 . This allows us to build lower-dimensional representations of documents, which we can explore to build classification models by assigning coefficients INLINEFORM10 to the mean topic-assignment of the words in the document, INLINEFORM11 , and applying a softmax function in order to obtain a distribution over classes. Alternatively, one could consider more flexible models such as Gaussian processes, however that would considerably increase the complexity of inference. Unfortunately, a direct mapping between document classes and the labels provided by the different annotators in a multiple-annotator setting would correspond to assuming that they are all equally reliable, an assumption that is violated in practice, as previous works clearly demonstrate (e.g. BIBREF8 , BIBREF9 ). Hence, we assume the existence of a latent ground truth class, and model the labels from the different annotators using a noise model that states that, given a true class INLINEFORM0 , each annotator INLINEFORM1 provides the label INLINEFORM2 with some probability INLINEFORM3 . Hence, by modeling the matrix INLINEFORM4 we are in fact modeling a per-annotator (normalized) confusion matrix, which allows us to account for their different levels of expertise and correct their potential biases. 
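The annotator noise model amounts to one categorical draw per label from the annotator's confusion-matrix row. A simulation sketch (the simulator is ours and is only illustrative; it is not part of the model's inference):

```python
import random

def annotator_label(true_class, pi, rng):
    """Draw one annotator label: p(label = l | true class = c) = pi[c][l].

    pi is the annotator's row-normalized confusion matrix.
    """
    row = pi[true_class]
    return rng.choices(range(len(row)), weights=row)[0]

rng = random.Random(0)
reliable = [[0.9, 0.1], [0.1, 0.9]]      # mostly agrees with the truth
adversarial = [[0.1, 0.9], [0.9, 0.1]]   # systematically flips labels
labels = [annotator_label(1, reliable, rng) for _ in range(1000)]
```

Estimating pi per annotator is what lets the model down-weight an annotator like `adversarial`: a consistently flipped confusion matrix is still informative once it is learned.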
The generative process of the proposed model for classification problems can then be summarized as follows:

1. For each annotator INLINEFORM0
   a. For each class INLINEFORM0
      i. Draw reliability parameter INLINEFORM0
2. For each topic INLINEFORM0
   a. Draw topic distribution INLINEFORM0
3. For each document INLINEFORM0
   a. Draw topic proportions INLINEFORM0
   b. For the INLINEFORM0 word
      i. Draw topic assignment INLINEFORM0
      ii. Draw word INLINEFORM0
   c. Draw latent (true) class INLINEFORM0
   d. For each annotator INLINEFORM0
      i. Draw annotator's label INLINEFORM0

where INLINEFORM0 denotes the set of annotators that labeled the INLINEFORM1 document, INLINEFORM2 , and the softmax is given by DISPLAYFORM0 Fig. FIGREF20 shows a graphical model representation of the proposed model, where INLINEFORM0 denotes the number of topics, INLINEFORM1 is the number of classes, INLINEFORM2 is the total number of annotators and INLINEFORM3 is the number of words in the document INLINEFORM4 . Shaded nodes are used to distinguish latent variables from the observed ones and small solid circles are used to denote model parameters. Notice that we included a Dirichlet prior over the topics INLINEFORM5 to produce a smooth posterior and control sparsity. Similarly, instead of computing maximum likelihood or MAP estimates for the annotators' reliability parameters INLINEFORM6 , we place a Dirichlet prior over these variables and perform approximate Bayesian inference. This contrasts with previous works on learning classification models from crowds BIBREF21 , BIBREF24 . For developing a multi-annotator supervised topic model for regression, we shall follow a similar intuition as the one we considered for classification. Namely, we shall assume that, for a given document INLINEFORM0 , each annotator provides a noisy version, INLINEFORM1 , of the true (continuous) target variable, which we denote by INLINEFORM2 . This can be, for example, the true rating of a product or the true sentiment of a document.
Assuming that each annotator INLINEFORM3 has its own personal bias INLINEFORM4 and precision INLINEFORM5 (inverse variance), and assuming a Gaussian noise model for the annotators' answers, we have that DISPLAYFORM0 This approach is therefore more powerful than previous works BIBREF21 , BIBREF23 , where a single precision parameter was used to model the annotators' expertise. Fig. FIGREF45 illustrates this intuition for 4 annotators, represented by different colors. The “green annotator” is the best one, since he is right on the target and his answers vary very little (low bias, high precision). The “yellow annotator” has a low bias, but his answers are very uncertain, as they can vary a lot. Contrarily, the “blue annotator” is very precise, but consistently over-estimates the true target (high bias, high precision). Finally, the “red annotator” corresponds to the worst kind of annotator (with high bias and low precision). Having specified a model for the annotators' answers given the true targets, the only thing left to do is to specify a model of the latent true targets INLINEFORM0 given the empirical topic mixture distributions INLINEFORM1 . For this, we shall keep things simple and assume a linear model as in sLDA BIBREF6 . The generative process of the proposed model for continuous target variables can then be summarized as follows:

1. For each annotator INLINEFORM0
   a. For each class INLINEFORM0
      i. Draw reliability parameter INLINEFORM0
2. For each topic INLINEFORM0
   a. Draw topic distribution INLINEFORM0
3. For each document INLINEFORM0
   a. Draw topic proportions INLINEFORM0
   b. For the INLINEFORM0 word
      i. Draw topic assignment INLINEFORM0
      ii. Draw word INLINEFORM0
   c. Draw latent (true) target INLINEFORM0
   d. For each annotator INLINEFORM0
      i. Draw answer INLINEFORM0

Fig. FIGREF60 shows a graphical representation of the proposed model.

### Approximate inference
Given a dataset INLINEFORM0 , the goal of inference is to compute the posterior distribution of the per-document topic proportions INLINEFORM1 , the per-word topic assignments INLINEFORM2 , the per-topic distribution over words INLINEFORM3 , the per-document latent true class INLINEFORM4 , and the per-annotator confusion parameters INLINEFORM5 . As with LDA, computing the exact posterior distribution of the latent variables is computationally intractable. Hence, we employ mean-field variational inference to perform approximate Bayesian inference. Variational inference methods seek to minimize the KL divergence between the variational and the true posterior distribution. We assume a fully-factorized (mean-field) variational distribution of the form DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 are variational parameters. Table TABREF23 shows the correspondence between variational parameters and the original parameters. Let INLINEFORM0 denote the model parameters. Following BIBREF25 , the KL minimization can be equivalently formulated as maximizing the following lower bound on the log marginal likelihood DISPLAYFORM0 which we maximize using coordinate ascent. Optimizing INLINEFORM0 w.r.t. INLINEFORM1 and INLINEFORM2 gives the same coordinate ascent updates as in LDA BIBREF0 DISPLAYFORM0 The variational Dirichlet parameters INLINEFORM0 can be optimized by collecting only the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0 where INLINEFORM0 denotes the documents labeled by the INLINEFORM1 annotator, INLINEFORM2 , and INLINEFORM3 and INLINEFORM4 are the gamma and digamma functions, respectively. Taking derivatives of INLINEFORM5 w.r.t. INLINEFORM6 and setting them to zero yields the following update DISPLAYFORM0 Similarly, the coordinate ascent updates for the documents' distributions over classes INLINEFORM0 can be found by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0 where INLINEFORM0 .
Adding the necessary Lagrange multipliers to ensure that INLINEFORM1 and setting the derivatives w.r.t. INLINEFORM2 to zero gives the following update DISPLAYFORM0 Observe how the variational distribution over the true classes results from a combination of the dot product of the inferred mean topic assignment INLINEFORM0 with the coefficients INLINEFORM1 and the labels INLINEFORM2 from the multiple annotators, “weighted” by their expected log probability INLINEFORM3 . The main difficulty of applying standard variational inference methods to the proposed model is the non-conjugacy between the distribution of the mean topic-assignment INLINEFORM0 and the softmax. Namely, in the expectation DISPLAYFORM0 the second term is intractable to compute. We can make progress by applying Jensen's inequality to bound it as follows DISPLAYFORM0 where INLINEFORM0 , which is constant w.r.t. INLINEFORM1 . This local variational bound can be made tight by noticing that INLINEFORM2 , where equality holds if and only if INLINEFORM3 . Hence, given the current parameter estimates INLINEFORM4 , if we set INLINEFORM5 and INLINEFORM6 then, for an individual parameter INLINEFORM7 , we have that DISPLAYFORM0 Using this local bound to approximate the expectation of the log-sum-exp term, and taking derivatives of the evidence lower bound w.r.t. INLINEFORM0 with the constraint that INLINEFORM1 , yields the following fixed-point update DISPLAYFORM0 where INLINEFORM0 denotes the size of the vocabulary. Notice how the per-word variational distribution over topics INLINEFORM1 depends on the variational distribution over the true class label INLINEFORM2 . The variational inference algorithm iterates between Eqs. EQREF25 - EQREF33 until the evidence lower bound, Eq. EQREF24 , converges. Additional details are provided as supplementary material.
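The local bound used here is the standard linear upper bound on the logarithm, log z <= z/a + log a - 1, applied to z = sum_j exp(x_j). A small numerical check (with made-up values) confirms that it upper-bounds the log-sum-exp and is tight when a equals the sum itself.

```python
import numpy as np

def log_sum_exp(x):
    # Numerically stable log(sum_j exp(x_j))
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

def local_bound(x, a):
    # Linear (tangent) bound on log z at z = a: log z <= z/a + log(a) - 1,
    # with equality iff z = a; here z = sum_j exp(x_j).
    z = np.exp(x).sum()
    return z / a + np.log(a) - 1.0
```

Setting `a` to the current value of the sum is exactly what makes the bound tight, matching the choice of the local variational parameter described in the text.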
The goal of inference is to compute the posterior distribution of the per-document topic proportions INLINEFORM0 , the per-word topic assignments INLINEFORM1 , the per-topic distribution over words INLINEFORM2 and the per-document latent true targets INLINEFORM3 . As we did for the classification model, we shall develop a variational inference algorithm using coordinate ascent. The lower bound on the log marginal likelihood is now given by DISPLAYFORM0 where INLINEFORM0 are the model parameters. We assume a fully-factorized (mean-field) variational distribution INLINEFORM1 of the form DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 are the variational parameters. Notice the new Gaussian term, INLINEFORM5 , corresponding to the approximate posterior distribution of the unobserved true targets. Optimizing the variational objective INLINEFORM0 w.r.t. INLINEFORM1 and INLINEFORM2 yields the same updates from Eqs. EQREF25 and . Optimizing w.r.t. INLINEFORM3 gives an update similar to the one in sLDA BIBREF6 DISPLAYFORM0 where we defined INLINEFORM0 . Notice how this update differs from the one in BIBREF6 only by replacing the true target variable with its expected value under the variational distribution, which is given by INLINEFORM1 . The only variables left to infer are then the latent true targets INLINEFORM0 . The variational distribution of INLINEFORM1 is governed by two parameters: a mean INLINEFORM2 and a variance INLINEFORM3 . Collecting all the terms in INLINEFORM4 that contain INLINEFORM5 gives DISPLAYFORM0 Taking derivatives of INLINEFORM0 and setting them to zero gives the following update for INLINEFORM1 DISPLAYFORM0 Notice how the value of INLINEFORM0 is a weighted average of what the linear regression model on the empirical topic mixture believes the true target should be, and the bias-corrected answers of the different annotators weighted by their individual precisions.
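The precision-weighted form described above can be sketched as follows. The variable names (`tau` for the model's target precision, `lam` for annotator precisions, `b` for biases) are assumptions for illustration, since the exact symbols are not reproduced here.

```python
import numpy as np

def true_target_posterior(zbar, eta, tau, y, b, lam):
    """Sketch of the Gaussian posterior over a document's latent true target:
    a precision-weighted average of the regression prediction eta . zbar
    (with precision tau) and the bias-corrected annotator answers y_r - b_r
    (with per-annotator precisions lam_r)."""
    precision = tau + lam.sum()
    mean = (tau * (eta @ zbar) + (lam * (y - b)).sum()) / precision
    return mean, 1.0 / precision
```

With a single highly precise annotator the posterior mean moves toward that annotator's bias-corrected answer; with no annotators it falls back to the regression prediction.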
As for INLINEFORM0 , we can optimize INLINEFORM1 w.r.t. INLINEFORM2 by collecting all terms that contain INLINEFORM3 DISPLAYFORM0 and taking derivatives, yielding the update DISPLAYFORM0

### Parameter estimation
The model parameters are INLINEFORM0 . The parameters INLINEFORM1 of the Dirichlet priors can be regarded as hyper-parameters of the proposed model. As with many works on topic models (e.g. BIBREF26 , BIBREF2 ), we assume the hyper-parameters to be fixed, since they can be effectively selected by grid-search procedures which are able to explore the parameter space well without suffering from local optima. Our focus is then on estimating the coefficients INLINEFORM2 using a variational EM algorithm. Therefore, in the E-step we use the variational inference algorithm from Section SECREF21 to estimate the posterior distribution of the latent variables, and in the M-step we find maximum likelihood estimates of INLINEFORM3 by maximizing the evidence lower bound INLINEFORM4 . Unfortunately, taking derivatives of INLINEFORM5 w.r.t. INLINEFORM6 does not yield a closed-form solution. Hence, we use a numerical method, namely L-BFGS BIBREF27 , to find an optimum. The objective function and gradients are given by DISPLAYFORM0 where, for convenience, we defined the following variable: INLINEFORM0 . The parameters of the proposed regression model are INLINEFORM0 . As we did for the classification model, we shall assume the Dirichlet parameters, INLINEFORM1 and INLINEFORM2 , to be fixed. Similarly, we shall assume the variance of the true targets, INLINEFORM3 , to be constant. The only parameters left to estimate are then the regression coefficients INLINEFORM4 and the annotators' biases, INLINEFORM5 , and precisions, INLINEFORM6 , which we estimate using variational Bayesian EM. Since the latent true targets are now linear functions of the documents' empirical topic mixtures (i.e. there is no softmax function), we can find a closed-form solution for the regression coefficients INLINEFORM0 . Taking derivatives of INLINEFORM1 w.r.t.
INLINEFORM2 and setting them to zero gives the following solution for INLINEFORM3 DISPLAYFORM0 where DISPLAYFORM0 We can find maximum likelihood estimates for the annotator biases INLINEFORM0 by optimizing the lower bound on the marginal likelihood. The terms in INLINEFORM1 that involve INLINEFORM2 are DISPLAYFORM0 Taking derivatives w.r.t. INLINEFORM0 gives the following estimate for the bias of the INLINEFORM1 annotator DISPLAYFORM0 Similarly, we can find maximum likelihood estimates for the precisions INLINEFORM0 of the different annotators by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0 The maximum likelihood estimate for the precision (inverse variance) of the INLINEFORM0 annotator is then given by DISPLAYFORM0 Given a set of fitted parameters, it is then straightforward to make predictions for new documents: it is just necessary to infer the (approximate) posterior distribution over the word-topic assignments INLINEFORM0 for all the words using the coordinate ascent updates of standard LDA (Eqs. EQREF25 and EQREF42 ), and then use the mean topic assignments INLINEFORM1 to make predictions INLINEFORM2 .

### Stochastic variational inference
In Section SECREF21 , we proposed a batch coordinate ascent algorithm for doing variational inference in the proposed model. This algorithm iterates between analyzing every document in the corpus to infer the local hidden structure, and estimating the global hidden variables. However, this can be inefficient for large datasets, since it requires a full pass through the data at each iteration before updating the global variables. In this section, we develop a stochastic variational inference algorithm BIBREF13 , which follows noisy estimates of the gradients of the evidence lower bound INLINEFORM0 . Based on the theory of stochastic optimization BIBREF28 , we can find unbiased estimates of the gradients by subsampling a document (or a mini-batch of documents) from the corpus, and using it to compute the gradients as if that document was observed INLINEFORM0 times. Hence, given a uniformly sampled document INLINEFORM1 , we use the current posterior distributions of the global latent variables, INLINEFORM2 and INLINEFORM3 , and the current coefficient estimates INLINEFORM4 , to compute the posterior distribution over the local hidden variables INLINEFORM5 , INLINEFORM6 and INLINEFORM7 using Eqs. EQREF25 , EQREF33 and EQREF29 , respectively. These posteriors are then used to update the global variational parameters, INLINEFORM8 and INLINEFORM9 , by taking a step of size INLINEFORM10 in the direction of the noisy estimates of the natural gradients. Algorithm SECREF37 describes a stochastic variational inference algorithm for the proposed model. Given an appropriate schedule for the learning rates INLINEFORM0 , such that INLINEFORM1 and INLINEFORM2 , the stochastic optimization algorithm is guaranteed to converge to a local maximum of the evidence lower bound BIBREF28 .
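The global step of this scheme can be sketched as a Robbins-Monro step size combined with a convex blend of the current global variational parameter and its noisy per-document estimate. The schedule form (t + tau0)^(-kappa) and the parameter names below are assumptions, chosen to satisfy the stated learning-rate conditions.

```python
import numpy as np

def step_size(t, tau0=1.0, kappa=0.7):
    # Robbins-Monro schedule: sum_t rho_t diverges and sum_t rho_t^2
    # converges for kappa in (0.5, 1], which the convergence guarantee requires.
    return (t + tau0) ** (-kappa)

def svi_global_update(lam, lam_hat, t):
    """Blend the current global variational parameter lam with the noisy
    estimate lam_hat computed from one sampled document (scaled as if
    that document had been observed D times)."""
    rho = step_size(t)
    return (1.0 - rho) * lam + rho * lam_hat
```

As `t` grows the step size shrinks, so later documents perturb the global parameters less and less.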
Algorithm SECREF37 : Stochastic variational inference for the proposed classification model

  Initialize INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5
  repeat
    Set t = t + 1
    Sample a document INLINEFORM6 uniformly from the corpus
    repeat
      Compute INLINEFORM7 using Eq. EQREF33 , for INLINEFORM8
      Compute INLINEFORM9 using Eq. EQREF25
      Compute INLINEFORM10 using Eq. EQREF29
    until the local parameters INLINEFORM11 , INLINEFORM12 and INLINEFORM13 converge
    Compute step-size INLINEFORM14
    Update the topics variational parameters DISPLAYFORM0
    Update the annotators confusion parameters DISPLAYFORM0
  until the global convergence criterion is met

As we did for the classification model from Section SECREF4 , we can envision developing a stochastic variational inference algorithm for the proposed regression model. In this case, the only “global” latent variables are the per-topic distributions over words INLINEFORM0 . As for the “local” latent variables, instead of a single variable INLINEFORM1 , we now have two variables per document: INLINEFORM2 and INLINEFORM3 . The stochastic variational inference can then be summarized as shown in Algorithm SECREF76 . For added efficiency, one can also perform stochastic updates of the annotators' biases INLINEFORM4 and precisions INLINEFORM5 , by taking a step in the direction of the gradient of the noisy evidence lower bound scaled by the step-size INLINEFORM6 .

Algorithm SECREF76 : Stochastic variational inference for the proposed regression model

  Initialize INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6
  repeat
    Set t = t + 1
    Sample a document INLINEFORM7 uniformly from the corpus
    repeat
      Compute INLINEFORM8 using Eq. EQREF64 , for INLINEFORM9
      Compute INLINEFORM10 using Eq. EQREF25
      Compute INLINEFORM11 using Eq. EQREF66
      Compute INLINEFORM12 using Eq. EQREF68
    until the local parameters INLINEFORM13 , INLINEFORM14 and INLINEFORM15 converge
    Compute step-size INLINEFORM16
    Update the topics variational parameters DISPLAYFORM0
  until the global convergence criterion is met

### Document classification
In order to make predictions for a new (unlabeled) document INLINEFORM0 , we start by computing the approximate posterior distribution over the latent variables INLINEFORM1 and INLINEFORM2 . This can be achieved by dropping the terms that involve INLINEFORM3 , INLINEFORM4 and INLINEFORM5 from the model's joint distribution (since, at prediction time, the multi-annotator labels are no longer observed) and averaging over the estimated topic distributions. Letting the topic distributions over words inferred during training be INLINEFORM6 , the joint distribution for a single document is now simply given by DISPLAYFORM0 Deriving a mean-field variational inference algorithm for computing the posterior over INLINEFORM0 results in the same fixed-point updates as in LDA BIBREF0 for INLINEFORM1 (Eq. EQREF25 ) and INLINEFORM2 DISPLAYFORM0 Using the inferred posteriors and the coefficients INLINEFORM0 estimated during training, we can make predictions as follows DISPLAYFORM0 This is equivalent to making predictions in the classification version of sLDA BIBREF2 .

### Regression model
In this section, we develop a variant of the model proposed in Section SECREF4 for regression problems. We shall start by describing the proposed model, with a special focus on how to handle multiple annotators with different biases and reliabilities when the target variables are continuous. Next, we present a variational inference algorithm, highlighting the differences to the classification version. Finally, we show how to optimize the model parameters.

### Experiments
In this section, the proposed multi-annotator supervised LDA models for classification and regression (MA-sLDAc and MA-sLDAr, respectively) are validated using both simulated annotators on popular corpora and real multiple-annotator labels obtained from Amazon Mechanical Turk. Namely, we shall consider the following real-world problems: classifying posts and news stories; classifying images according to their content; predicting the number of stars that a given user gave to a restaurant based on the review; and predicting movie ratings using the text of the reviews.

### Classification
In order to first validate the proposed model for classification problems in a slightly more controlled environment, the well-known 20-Newsgroups benchmark corpus BIBREF29 was used by simulating multiple annotators with different levels of expertise. The 20-Newsgroups corpus consists of twenty thousand messages taken from twenty newsgroups, and is divided into six super-classes, which are, in turn, partitioned into several sub-classes. For this first set of experiments, only the four most populated super-classes were used: “computers”, “science”, “politics” and “recreative”. The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing. The different annotators were simulated by sampling their answers from a multinomial distribution, where the parameters are given by the rows of the annotators' confusion matrices. Hence, for each annotator INLINEFORM0 , we start by pre-defining a confusion matrix INLINEFORM1 with elements INLINEFORM2 , which correspond to the probability that the annotator's answer is INLINEFORM3 given that the true label is INLINEFORM4 , INLINEFORM5 . Then, the answers are sampled i.i.d. from INLINEFORM6 . This procedure was used to simulate 5 different annotators with the following accuracies: 0.737, 0.468, 0.284, 0.278, 0.260. In this experiment, no repeated labeling was used. Hence, each annotator only labels roughly one-fifth of the data. When compared to the ground truth, the simulated answers revealed an accuracy of 0.405. See Table TABREF81 for an overview of the details of the classification datasets used.
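The annotator simulation can be sketched as below. For brevity, the confusion matrix here places a target accuracy on the diagonal and spreads the remaining mass uniformly over the other classes; that uniform structure is an assumption for illustration, whereas the experiments pre-define full confusion matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_annotator(true_labels, accuracy, num_classes):
    """Simulate one annotator: each answer is drawn from the row of a
    confusion matrix indexed by the true label. The diagonal holds
    `accuracy`; off-diagonal mass is spread uniformly (an assumption)."""
    pi = np.full((num_classes, num_classes),
                 (1.0 - accuracy) / (num_classes - 1))
    np.fill_diagonal(pi, accuracy)
    return np.array([rng.choice(num_classes, p=pi[c]) for c in true_labels])
```

Sampling many answers and comparing them to the true labels recovers the target accuracy up to sampling noise.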
Both the batch and the stochastic variational inference (svi) versions of the proposed model (MA-sLDAc) are compared with the following baselines:

- LDA + LogReg (mv): This baseline corresponds to applying unsupervised LDA to the data, and learning a logistic regression classifier on the inferred topics distributions of the documents. The labels from the different annotators were aggregated using majority voting (mv). Notice that, when there is a single annotator label per instance, majority voting is equivalent to using that label for training. This is the case of the 20-Newsgroups' simulated annotators, but the same does not apply to the experiments in Section UID89 .
- LDA + Raykar: For this baseline, the model of BIBREF21 was applied using the documents' topic distributions inferred by LDA as features.
- LDA + Rodrigues: This baseline is similar to the previous one, but uses the model of BIBREF9 instead.
- Blei 2003 (mv): The idea of this baseline is to replicate a popular state-of-the-art approach for document classification. Hence, the approach of BIBREF0 was used. It consists of applying LDA to extract the documents' topics distributions, which are then used to train an SVM. Similarly to the previous approach, the labels from the different annotators were aggregated using majority voting (mv).
- sLDA (mv): This corresponds to using the classification version of sLDA BIBREF2 with the labels obtained by performing majority voting (mv) on the annotators' answers.

For all the experiments, the hyper-parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 were set using a simple grid search over the collection INLINEFORM3 . The same approach was used to optimize the hyper-parameters of all the baselines. For the svi algorithm, different mini-batch sizes and forgetting rates INLINEFORM4 were tested. For the 20-Newsgroups dataset, the best results were obtained with a mini-batch size of 500 and INLINEFORM5 . The INLINEFORM6 was kept at 1.
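The majority-voting (mv) aggregation used by several of the baselines reduces to:

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate one document's annotator labels by majority voting.
    Ties are broken by first occurrence, following Counter's ordering."""
    return Counter(labels).most_common(1)[0][0]
```

With a single label per instance this simply returns that label, which is why, for the simulated 20-Newsgroups annotators, majority voting coincides with training on the raw labels.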
The results are shown in Fig. FIGREF87 for different numbers of topics, where we can see that the proposed model outperforms all the baselines, with the svi version being the one that performs best. In order to assess the computational advantages of the stochastic variational inference (svi) over the batch algorithm, the log marginal likelihood (or log evidence) was plotted against the number of iterations. Fig. FIGREF88 shows this comparison. Not surprisingly, the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm. In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31 . The Reuters-21578 is a collection of manually categorized newswire stories with labels such as Acquisitions, Crude-oil, Earnings or Grain. For this experiment, only the documents belonging to the ModApte split were considered, with the additional constraint that the documents should have no more than one label. This resulted in a total of 7016 documents distributed among 8 classes. Of these, 1800 documents were submitted to AMT for multiple annotators to label, giving an average of approximately 3 answers per document (see Table TABREF81 for further details). The remaining 5216 documents were used for testing. The collected answers yield an average worker accuracy of 56.8%. Applying majority voting to these answers reveals a ground truth accuracy of 71.0%. Fig. FIGREF90 shows the boxplots of the number of answers per worker and their accuracies. Observe how applying majority voting yields a higher accuracy than the median accuracy of the workers. The results obtained by the different approaches are given in Fig.
FIGREF91 , where it can be seen that the proposed model (MA-sLDAc) outperforms all the other approaches. For this dataset, the svi algorithm uses mini-batches of 300 documents. The proposed model was also validated using a dataset from the computer vision domain: LabelMe BIBREF31 . In contrast to the Reuters and Newsgroups corpora, LabelMe is an open online tool for annotating images. Hence, this experiment allows us to see how the proposed model generalizes beyond textual data. Using the Matlab interface provided on the project's website, we extracted a subset of the LabelMe data, consisting of all the 256 x 256 images with the categories: “highway”, “inside city”, “tall building”, “street”, “forest”, “coast”, “mountain” or “open country”. This allowed us to collect a total of 2688 labeled images. Of these, 1000 images were given to AMT workers to classify with one of the classes above. Each image was labeled by an average of 2.547 workers, with a mean accuracy of 69.2%. When majority voting is applied to the collected answers, a ground truth accuracy of 76.9% is obtained. Fig. FIGREF92 shows the boxplots of the number of answers per worker and their accuracies. Interestingly, the worker accuracies are much higher and their distribution is much more concentrated than on the Reuters-21578 data (see Fig. FIGREF90 ), which suggests that this is an easier task for the AMT workers. The preprocessing of the images is similar to the approach in BIBREF1 . It uses 128-dimensional SIFT BIBREF32 region descriptors selected by a sliding grid spaced at one pixel. This sliding grid extracts local regions of the image with sizes uniformly sampled between 16 x 16 and 32 x 32 pixels. The 128-dimensional SIFT descriptors produced by the sliding window are then fed to a k-means algorithm (with k=200) in order to construct a vocabulary of 200 “visual words”. This allows us to represent the images with a bag-of-visual-words model.
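The visual-word representation described above amounts to nearest-centroid quantization followed by a histogram. A minimal sketch, where the toy 2-D descriptors stand in for real 128-dimensional SIFT features:

```python
import numpy as np

def bag_of_visual_words(descriptors, centroids):
    """Quantize local descriptors (n x d) against a learned codebook
    (k x d centroids from k-means) and return a k-bin histogram:
    the image's bag-of-visual-words representation."""
    # Squared Euclidean distance from every descriptor to every centroid
    d2 = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)        # nearest "visual word" per descriptor
    return np.bincount(words, minlength=centroids.shape[0])
```

The resulting histograms play the same role for images that word counts play for documents, which is what lets the topic model run unchanged on this data.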
With the purpose of comparing the proposed model with a popular state-of-the-art approach for image classification, the following baseline was introduced for the LabelMe dataset:

- Bosch 2006 (mv): This baseline is similar to the one in BIBREF33 . The authors propose the use of pLSA to extract the latent topics, and the use of a k-nearest neighbor (kNN) classifier on the documents' topics distributions. For this baseline, unsupervised LDA is used instead of pLSA, and the labels from the different annotators for kNN (with INLINEFORM0 ) are aggregated using majority voting (mv).

The results obtained by the different approaches for the LabelMe data are shown in Fig. FIGREF94 , where the svi version uses mini-batches of 200 documents. Analyzing the results for the Reuters-21578 and LabelMe data, we can observe that MA-sLDAc outperforms all the baselines, with slightly better accuracies for the batch version, especially on the Reuters data. Interestingly, the second best results are consistently obtained by the multi-annotator approaches, which highlights the need for accounting for the noise and biases of the answers of the different annotators. In order to verify that the proposed model was estimating the (normalized) confusion matrices INLINEFORM0 of the different workers correctly, a random sample of them was plotted against the true confusion matrices (i.e. the normalized confusion matrices evaluated against the true labels). Figure FIGREF95 shows the results obtained with 60 topics on the Reuters-21578 dataset, where the color intensity of the cells increases with the magnitude of the value of INLINEFORM1 (the supplementary material provides a similar figure for the LabelMe dataset). Using this visualization, we can verify that the AMT workers are quite heterogeneous in their labeling styles and in the kind of mistakes they make, with several workers showing clear biases (e.g. workers 3 and 4), while others made mistakes more randomly (e.g. worker 1).
Nevertheless, the proposed model is able to capture these patterns correctly and account for their effect. To gain further insights, Table TABREF96 shows 4 example images from the LabelMe dataset, along with their true labels, the answers provided by the different workers, the true label inferred by the proposed model and the likelihood of the different possible answers given the true label for each annotator ( INLINEFORM0 for INLINEFORM1 ) using a color-coding scheme similar to Fig. FIGREF95 . In the first example, although majority voting suggests “inside city” to be the correct label, we can see that the model has learned that annotators 32 and 43 are very likely to provide the label “inside city” when the true label is actually “street”, and it is able to leverage that fact to infer that the correct label is “street”. Similarly, in the second image the model is able to infer the correct true label from 3 conflicting labels. However, in the third image the model is not able to recover the correct true class, which can be explained by it not having enough evidence about the annotators and their reliabilities and biases (the likelihood distributions for these cases are uniform). In fact, this raises interesting questions regarding the requirements for the minimum number of labels per annotator, their reliabilities and their coherence. Finally, for the fourth image, somewhat surprisingly, the model is able to infer the correct true class, even though all 3 annotators labeled it as “inside city”.

### Regression
As for the proposed classification model, we start by validating MA-sLDAr using simulated annotators on a popular corpus where the documents have associated targets that we wish to predict. For this purpose, we shall consider a dataset of user-submitted restaurant reviews from the website we8there.com. This dataset was originally introduced in BIBREF34 and it consists of 6260 reviews. For each review, there is a five-star rating on four specific aspects of quality (food, service, value, and atmosphere) as well as the overall experience. Our goal is then to predict the overall experience of the user based on his comments in the review. We apply the same preprocessing as in BIBREF18 , which consists of tokenizing the text into bigrams and discarding those that appear in fewer than ten reviews. The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing. As with the classification model, we seek to simulate a heterogeneous set of annotators in terms of reliability and bias. Hence, in order to simulate an annotator INLINEFORM0 , we proceed as follows: let INLINEFORM1 be the true rating of the restaurant; we start by assigning a given bias INLINEFORM2 and precision INLINEFORM3 to the annotator, depending on what type of annotator we wish to simulate (see Fig. FIGREF45 ); we then sample a simulated answer as INLINEFORM4 . Using this procedure, we simulated 5 annotators with the following (bias, precision) pairs: (0.1, 10), (-0.3, 3), (-2.5, 10), (0.1, 0.5) and (1, 0.25). The goal is to have 2 good annotators (low bias, high precision), 1 highly biased annotator and 2 low-precision annotators, where one is unbiased and the other is reasonably biased. The coefficients of determination ( INLINEFORM5 ) of the simulated annotators are: [0.940, 0.785, -2.469, -0.131, -1.749].
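The simulation and scoring procedure can be sketched as follows, using the (bias, precision) pairs listed above. The uniform distribution of true ratings is an assumption for illustration, so the resulting R^2 values only qualitatively match the ones reported.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_and_score(x_true, pairs):
    """Sample each annotator's answers as y = x + bias + Gaussian noise
    with the given precision (inverse variance), and report the
    coefficient of determination (R^2) of the answers against x."""
    ss_tot = ((x_true - x_true.mean()) ** 2).sum()
    scores = []
    for bias, precision in pairs:
        y = x_true + bias + rng.normal(0.0, 1.0 / np.sqrt(precision), x_true.size)
        scores.append(1.0 - ((x_true - y) ** 2).sum() / ss_tot)
    return scores

# The (bias, precision) pairs used in the simulation described above
pairs = [(0.1, 10), (-0.3, 3), (-2.5, 10), (0.1, 0.5), (1, 0.25)]
```

Note that R^2 can be negative for heavily biased or noisy annotators, which is exactly what the reported values show.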
Computing the mean of the answers of the different annotators yields a INLINEFORM6 of 0.798. Table TABREF99 gives an overview of the statistics of the datasets used in the regression experiments. We compare the proposed model (MA-sLDAr) with the following two baselines:

- LDA + LinReg (mean): This baseline corresponds to applying unsupervised LDA to the data, and learning a linear regression model on the inferred topics distributions of the documents. The answers from the different annotators were aggregated by computing the mean.
- sLDA (mean): This corresponds to using the regression version of sLDA BIBREF6 with the target variables obtained by computing the mean of the annotators' answers.

Fig. FIGREF102 shows the results obtained for different numbers of topics. Due to the stochastic nature of both the annotators simulation procedure and the initialization of the variational Bayesian EM algorithm, we repeated each experiment 30 times and report the average INLINEFORM0 obtained with the corresponding standard deviation. Since the regression datasets considered in this article are not large enough to justify the use of a stochastic variational inference (svi) algorithm, we only made experiments using the batch algorithm developed in Section SECREF61 . The results obtained clearly show the improved performance of MA-sLDAr over the other methods. The proposed multi-annotator regression model (MA-sLDAr) was also validated with real annotators by using AMT. For that purpose, the movie review dataset from BIBREF35 was used. This dataset consists of 5006 movie reviews along with their respective star rating (from 1 to 10). The goal of this experiment is then to predict how much a person liked a movie based on what she says about it. We asked workers to guess how much they think the writer of the review liked the movie based on her comments. An average of 4.96 answers per review was collected for a total of 1500 reviews.
The remaining reviews were used for testing. On average, each worker rated approximately 55 reviews. Using the mean answer as an estimate of the true rating of the movie yields a INLINEFORM0 of 0.830. Table TABREF99 gives an overview of the statistics of this data. Fig. FIGREF104 shows boxplots of the number of answers per worker, as well as boxplots of their respective biases ( INLINEFORM1 ) and variances (inverse precisions, INLINEFORM2 ). The preprocessing of the text consisted of stemming and stop-words removal. Using the preprocessed data, the proposed MA-sLDAr model was compared with the same baselines that were used with the we8there dataset in Section UID98 . Fig. FIGREF105 shows the results obtained for different numbers of topics. These results show that the proposed model outperforms all the other baselines. With the purpose of verifying that the proposed model is indeed estimating the biases and precisions of the different workers correctly, we plotted the true values against the estimates of MA-sLDAr with 60 topics for a random subset of 10 workers. Fig. FIGREF106 shows the obtained results, where higher color intensities indicate higher values. Ideally, the color of two horizontally-adjacent squares would be of similar shades, and this is indeed what happens in practice for the majority of the workers, as Fig. FIGREF106 shows. Interestingly, the figure also shows that there are a couple of workers that are considerably biased (e.g. workers 6 and 8) and that those biases are being correctly estimated, thus justifying the inclusion of a bias parameter in the proposed model, which contrasts with previous works BIBREF21 , BIBREF23 .

### Conclusion
This article proposed a supervised topic model that is able to learn from multiple annotators and crowds, by accounting for their biases and different levels of expertise. Given the large sizes of modern datasets, and considering that the majority of the tasks for which crowdsourcing and multiple annotators are desirable candidates generally involve complex high-dimensional data such as text and images, the proposed model constitutes a strong contribution to the multi-annotator paradigm. This model is capable of jointly modeling the words in documents as arising from a mixture of topics, as well as the latent true target variables and the (noisy) answers of the multiple annotators. We developed two distinct models, one for classification and another for regression, which share similar intuitions but inevitably differ due to the nature of the target variables. We empirically showed, using both simulated and real annotators from Amazon Mechanical Turk, that the proposed model is able to outperform state-of-the-art approaches in several real-world problems, such as classifying posts, news stories and images, or predicting the number of stars of a restaurant and the rating of a movie based on their reviews. For this, we used various popular datasets from the state of the art that are commonly used for benchmarking machine learning algorithms. Finally, an efficient stochastic variational inference algorithm was described, which gives the proposed models the ability to scale to large datasets.

### Acknowledgment
The Fundação para a Ciência e Tecnologia (FCT) is gratefully acknowledged for funding this work with the grants SFRH/BD/78396/2011 and PTDC/ECM-TRA/1898/2012 (InfoCROWDS). []Mariana Lourenço has an MSc degree in Informatics Engineering from the University of Coimbra, Portugal. Her thesis presented a supervised topic model that is able to learn from crowds, and she took part in a research project whose primary objective was to exploit online information about public events to build predictive models of flows of people in the city. Her main research interests are machine learning, pattern recognition and natural language processing. []Bernardete Ribeiro is Associate Professor at the Informatics Engineering Department, University of Coimbra, Portugal, from where she received a D.Sc. in Informatics Engineering, a Ph.D. in Electrical Engineering, speciality of Informatics, and an MSc in Computer Science. Her research interests are in the areas of Machine Learning, Pattern Recognition and Signal Processing and their applications to a broad range of fields. She was responsible for, or participated in, several research projects in a wide range of application areas such as Text Classification, Financial, Biomedical and Bioinformatics. Bernardete Ribeiro is an IEEE Senior Member, and a member of the International Association for Pattern Recognition (IAPR) and ACM. []Francisco C. Pereira is Full Professor at the Technical University of Denmark (DTU), where he leads the Smart Mobility research group. His main research focus is on applying machine learning and pattern recognition to the context of transportation systems with the purpose of understanding and predicting mobility behavior, and modeling and optimizing the transportation system as a whole. He has Master's (2000) and Ph.D.
(2005) degrees in Computer Science from the University of Coimbra, and has authored/co-authored over 70 journal and conference papers in areas such as pattern recognition, transportation, knowledge-based systems and cognitive science. Francisco was previously Research Scientist at MIT and Assistant Professor at the University of Coimbra. He was awarded several prestigious prizes, including an IEEE Achievements award in 2009, the Singapore GYSS Challenge in 2013, and the Pyke Johnson award from the Transportation Research Board in 2015.

Figure and table captions:
- Fig. 1. Graphical representation of the proposed model for classification.
- Fig. 2. Example of four different annotators (represented by different colours) with different biases and precisions.
- Fig. 3. Graphical representation of the proposed model for regression.
- Fig. 4. Average testset accuracy (over five runs; stddev.) of the different approaches on the 20-newsgroups data.
- Fig. 5. Comparison of the log marginal likelihood between the batch and the stochastic variational inference (svi) algorithms on the 20-newsgroups corpus.
- Fig. 6. Boxplot of the number of answers per worker (a) and their respective accuracies (b) for the reuters dataset.
- Fig. 7. Average testset accuracy (over 30 runs; stddev.) of the different approaches on the reuters data.
- Fig. 8. Boxplot of the number of answers per worker (a) and their respective accuracies (b) for the LabelMe dataset.
- Fig. 9. Average testset accuracy (over 30 runs; stddev.) of the different approaches on the LabelMe data.
- Fig. 10. True versus estimated confusion matrix (cm) of six different workers of the reuters-21,578 dataset.
- Fig. 11. Average testset R2 (over 30 runs; stddev.) of the different approaches on the we8there data.
- Fig. 12. Boxplot of the number of answers per worker (a) and their respective biases (b) and variances (c) for the movie reviews dataset.
- Fig. 13. Average testset R2 (over 30 runs; stddev.) of the different approaches on the movie reviews data.
- Fig. 14. True versus predicted biases and precisions of 10 random workers of the movie reviews dataset.
- TABLE 1. Correspondence Between Variational Parameters and the Original Parameters
- TABLE 2. Overall Statistics of the Classification Datasets Used in the Experiments
- TABLE 3. Results for Four Example LabelMe Images
|
The proposed model outperforms all the baselines, with the SVI version performing best. The SVI version also converges much faster to higher values of the log marginal likelihood than the batch version, which reflects the efficiency of the SVI algorithm.
|
Which models are best for learning long-distance movement?
|
### Introduction
The effectiveness and ubiquity of pretrained sentence embeddings for natural language understanding has grown dramatically in recent years. Recent sentence encoders like OpenAI's Generative Pretrained Transformer BIBREF3 and BERT BIBREF2 achieve the state of the art on the GLUE benchmark BIBREF4 . Among the GLUE tasks, these state-of-the-art systems make their greatest gains on the acceptability task with the Corpus of Linguistic Acceptability BIBREF0 . CoLA contains example sentences from linguistics publications labeled by experts for grammatical acceptability, and written to show subtle grammatical features. Because minimal syntactic differences can separate acceptable sentences from unacceptable ones (What did Bo write a book about? / *What was a book about written by Bo?), and because acceptability classifiers are more reliable when trained on GPT and BERT than on recurrent models, it stands to reason that GPT and BERT have better implicit knowledge of syntactic features relevant to acceptability. Our goal in this paper is to develop an evaluation dataset that can locate which syntactic features a model successfully learns by identifying the syntactic domains of CoLA in which it performs the best. Using this evaluation set, we compare the syntactic knowledge of GPT and BERT in detail, and investigate the strengths of these models over the baseline BiLSTM model published by warstadt2018neural. The analysis set includes expert annotations labeling the entire CoLA development set for the presence of 63 fine-grained syntactic features. We identify many specific syntactic features that make sentences harder to classify, and many that have little effect. For instance, sentences involving unusual or marked argument structures are no harder than the average sentence, while sentences with long-distance dependencies are hard to learn. We also find features of sentences that accentuate or minimize the differences between models.
Specifically, the transformer models seem to learn long-distance dependencies much better than the recurrent model, yet have no advantage on sentences with morphological violations. ### Analysis Set
We introduce a grammatically annotated version of the entire CoLA development set to facilitate detailed error analysis of acceptability classifiers. These 1043 sentences are expert-labeled for the presence of 63 minor grammatical features organized into 15 major features. Each minor feature belongs to a single major feature. A sentence belongs to a major feature if it belongs to one or more of the relevant minor features. The Appendix includes descriptions of each feature along with examples and the criteria used for annotation. The 63 minor features and 15 major features are illustrated in Table TABREF5 . Considering minor features, an average of 4.31 features is present per sentence (SD=2.59). The average feature is present in 71.3 sentences (SD=54.7). Turning to major features, the average sentence belongs to 3.22 major features (SD=1.66), and the average major feature is present in 224 sentences (SD=112). Every sentence is labeled with at least one feature. ### Annotation
The sentences were annotated manually by one of the authors, who is a PhD student with extensive training in formal linguistics. The features were developed in a trial stage, in which the annotator performed a similar annotation with different annotation schema for several hundred sentences from CoLA not belonging to the development set. ### Feature Descriptions
Here we briefly summarize the feature set in order of the major features. Many of these constructions are well-studied in syntax, and further background can be found in textbooks such as adger2003core and sportiche2013introduction. This major feature contains only one minor feature, simple, including sentences with a syntactically simplex subject and predicate. These three features correspond to predicative phrases, including copular constructions, small clauses (I saw Bo jump), and resultatives/depictives (Bo wiped the table clean). These six features mark various kinds of optional modifiers. This includes modifiers of NPs (The boy with blue eyes gasped) or VPs (The cat meowed all morning), and temporal (Bo swam yesterday) or locative (Bo jumped on the bed). These five features identify syntactically selected arguments, differentiating, for example, obliques (I gave a book to Bo), PP arguments of NPs and VPs (Bo voted for Jones), and expletives (It seems that Bo left). These four features mark VPs with unusual argument structures, including added arguments (I baked Bo a cake) or dropped arguments (Bo knows), and the passive (I was applauded). This contains only one feature for imperative clauses (Stop it!). These are two minor features, one for bound reflexives (Bo loves himself), and one for other bound pronouns (Bo thinks he won). These five features apply to sentences with question-like properties. They mark whether the interrogative is an embedded clause (I know who you are), a matrix clause (Who are you?), or a relative clause (Bo saw the guy who left); whether it contains an island out of which extraction is unacceptable (*What was a picture of hanging on the wall?); or whether there is pied-piping or a multi-word wh-expression (With whom did you eat?).
These six features apply to various complement clauses (CPs), including subject CPs (That Bo won is odd); CP arguments of VPs or NPs/APs (The fact that Bo won); CPs missing a complementizer (I think Bo's crazy); or non-finite CPs (This is ready for you to eat). These four minor features mark the presence of auxiliary or modal verbs (I can win), negation, or “pseudo-auxiliaries” (I have to win). These five features mark various infinitival embedded VPs, including control VPs (Bo wants to win); raising VPs (Bo seemed to fly); VP arguments of NPs or APs (Bo is eager to eat); and VPs with extraction (e.g. This is easy to read ts ). These seven features mark complex NPs and APs, including ones with PP arguments (Bo is fond of Mo), or CP/VP arguments; noun-noun compounds (Bo ate mud pie); modified NPs, and NPs derived from verbs (Baking is fun). These seven features mark various unrelated syntactic constructions, including dislocated phrases (The boy left who was here earlier); movement related to focus or information structure (This I've gotta see); coordination, subordinate clauses, and ellipsis (I can't); or sentence-level adjuncts (Apparently, it's raining). These four features mark various determiners, including quantifiers, partitives (two of the boys), negative polarity items (I *do/don't have any pie), and comparative constructions. These three features apply only to unacceptable sentences, and only ones which are ungrammatical due to a semantic or morphological violation, or the presence or absence of a single salient word. ### Correlations
We wish to emphasize that these features are overlapping and in many cases are correlated, thus not all results from using this analysis set will be independent. We analyzed the pairwise Matthews Correlation Coefficient BIBREF17 of the 63 minor features (giving 1953 pairs), and of the 15 major features (giving 105 pairs). MCC is a special case of Pearson's INLINEFORM0 for Boolean variables. These results are summarized in Table TABREF25 . Regarding the minor features, 60 pairs had a correlation of 0.2 or greater, 17 had a correlation of 0.4 or greater, and 6 had a correlation of 0.6 or greater. None had an anti-correlation of greater magnitude than -0.17. Turning to the major features, 6 pairs had a correlation of 0.2 or greater, and 2 had an anti-correlation of greater magnitude than -0.2. We can see at least three reasons for these observed correlations. First, some correlations can be attributed to overlapping feature definitions. For instance, expletive arguments (e.g. There are birds singing) are, by definition, non-canonical arguments, and thus are a subset of add arg. However, some added arguments, such as benefactives (Bo baked Mo a cake), are not expletives. Second, some correlations can be attributed to grammatical properties of the relevant constructions. For instance, question and aux are correlated because main-clause questions in English require subject-aux inversion and in many cases the insertion of auxiliary do (Do lions meow?). Third, some correlations may be a consequence of the sources sampled in CoLA and the phenomena they focus on. For instance, the unusually high correlation of Emb-Q and ellipsis/anaphor can be attributed to BIBREF18 , which is an article about the sluicing construction involving ellipsis of an embedded interrogative (e.g. I saw someone, but I don't know who). Finally, the two strongest anti-correlations between major features are between simple and the two features related to argument structure, argument types and arg altern.
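The pairwise correlation computation described above can be sketched as follows. This is a minimal illustration, not the authors' code: the feature names and indicator vectors are hypothetical, and MCC is computed as the phi coefficient over Boolean 0/1 vectors, which for Boolean data coincides with Pearson's r as noted above.

```python
from itertools import combinations

def mcc(x, y):
    """Matthews correlation (phi coefficient) of two Boolean 0/1 vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum(a * b for a, b in zip(x, y)) / n - mx * my
    den = (mx * (1 - mx) * my * (1 - my)) ** 0.5
    return cov / den if den > 0 else 0.0

# Hypothetical per-sentence indicator vectors for three minor features.
features = {
    "expletive": [1, 0, 0, 1, 0, 1],
    "add_arg":   [1, 0, 1, 1, 0, 1],
    "question":  [0, 1, 0, 0, 1, 0],
}
pairs = {(a, b): mcc(features[a], features[b])
         for a, b in combinations(features, 2)}
strongly_correlated = [p for p, r in pairs.items() if r >= 0.6]
```

Iterating this over all feature pairs and thresholding the coefficients gives tallies of the kind reported above.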
This follows from the definition of simple, which excludes any sentence containing a large number or unusual configuration of arguments. ### Models Evaluated
We train MLP acceptability classifiers for CoLA on top of three sentence encoders: (1) the CoLA baseline encoder with ELMo-style embeddings, (2) OpenAI GPT, and (3) BERT. We use publicly available sentence encoders with pretrained weights. ### Overall CoLA Results
The overall performance of the three sentence encoders is shown in Table TABREF33 . Performance on CoLA is measured using MCC BIBREF14 . We present the best single restart for each encoder, the mean over restarts for an encoder, and the result of ensembling the restarts for a given encoder, i.e. taking the majority classification for a given sentence, or the majority label of acceptable if tied. For BERT results, we exclude 5 out of the 20 restarts because they were degenerate (MCC=0). Across the board, BERT outperforms GPT, which outperforms the CoLA baseline. However, BERT and GPT are much closer in performance than they are to CoLA baseline. While ensemble performance exceeded the average for BERT and GPT, it did not outperform the best single model. ### Analysis Set Results
The results for the major features and minor features are shown in Figures FIGREF26 and FIGREF35 , respectively. For each feature, we measure the MCC of the sentences including that feature. We plot the mean of these results across the different restarts for each model, and error bars mark the mean INLINEFORM0 standard deviation. For the Violations features, MCC is technically undefined because these features only contain unacceptable sentences. We report MCC in these cases by including for each feature a single acceptable example that is correctly classified by all models. Comparison across features reveals that the presence of certain features has a large effect on performance, and we comment on some overall patterns below. Within a given feature, the effect of model type is overwhelmingly stable, and resembles the overall difference in performance. However, we observe several interactions, i.e. specific features where the relative performance of models does not track their overall relative performance. Among the major features (Figure FIGREF26 ), performance is universally highest on the simple sentences, and is higher than each model's overall performance. Though these sentences are simple, we notice that the proportion of ungrammatical ones is on par with the entire dataset. Otherwise we find that a model's performance on sentences of a given feature is on par with or lower than its overall performance, reflecting the fact that features mark the presence of unusual or complex syntactic structure. Performance is also high (and close to overall performance) on sentences with marked argument structures (Argument Types and Arg(ument) Alt(ernation)). While these models are still worse than human (overall) performance on these sentences, this result indicates that argument structure is relatively easy to learn. 
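The per-feature scoring just described, including the workaround for the Violations features, can be sketched as follows. This is a minimal illustration under stated assumptions: the array names are hypothetical, and MCC is computed directly from confusion counts.

```python
def binary_mcc(gold, pred):
    """MCC of binary (0/1) labels, computed from confusion counts."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    tn = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 0)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / den if den else 0.0

def feature_mcc(gold, pred, mask, violations_only=False):
    """MCC restricted to the sentences carrying a given feature."""
    g = [y for y, m in zip(gold, mask) if m]
    p = [y for y, m in zip(pred, mask) if m]
    if violations_only:
        # MCC is undefined when every gold label is 0 (all unacceptable),
        # so pad with one acceptable example classified correctly.
        g.append(1)
        p.append(1)
    return binary_mcc(g, p)
```

Applying `feature_mcc` once per feature mask and averaging over restarts yields per-feature scores; the `violations_only` flag implements the padding trick described above.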
Comparing different kinds of embedded content, we observe higher performance on sentences with embedded clauses (major feature=Comp Clause) and embedded VPs (major feature=to-VP) than on sentences with embedded interrogatives (minor features=Emb-Q, Rel Clause). An exception to this trend is the minor feature No C-izer, which labels complement clauses without a complementizer (e.g. I think you're crazy). Low performance on these sentences compared to most other features in Comp Clause might indicate that complementizers are an important syntactic cue for these models. As the major feature Question shows, the difficulty of sentences with question-like syntax applies beyond just embedded questions. Excluding polar questions, sentences with question-like syntax almost always involve extraction of a wh-word, creating a long-distance dependency between the wh-word and its extraction site, which may be difficult for models to recognize. The most challenging features are all related to Violations. Low performance on Infl/Agr Violations, which marks morphological violations (He washed yourself, This is happy), is especially striking because a relatively high proportion (29%) of these sentences are Simple. One likely reason these models are deficient in encoding morphological features is that they are word-level models, and do not have direct access to sub-word information like inflectional endings; this suggests that these features are difficult to learn effectively purely from lexical distributions. Finally, unusual performance on some features is due to small samples with a high standard deviation, suggesting the results are unreliable. This includes CP Subj, Frag/Paren, imperative, NPI/FCI, and Comparative. Comparing within-feature performance of the three encoders to their overall performance, we find they have differing strengths and weaknesses.
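This within-feature versus overall comparison can be sketched as follows. A minimal, hypothetical illustration: `preds_by_model` maps encoder names to 0/1 predictions, and `mask` selects the sentences carrying one feature.

```python
def confusion_mcc(gold, pred):
    """MCC of binary (0/1) labels from confusion counts."""
    tp = sum(g == 1 and p == 1 for g, p in zip(gold, pred))
    tn = sum(g == 0 and p == 0 for g, p in zip(gold, pred))
    fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
    fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / den if den else 0.0

def feature_delta(gold, preds_by_model, mask):
    """Per-model difference between within-feature MCC and overall MCC."""
    deltas = {}
    for model, pred in preds_by_model.items():
        overall = confusion_mcc(gold, pred)
        sub_g = [g for g, m in zip(gold, mask) if m]
        sub_p = [p for p, m in zip(pred, mask) if m]
        deltas[model] = confusion_mcc(sub_g, sub_p) - overall
    return deltas
```

A positive delta for a model on a feature indicates it handles that feature better than its own average.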
BERT stands out over other models in Deep Embed, which includes challenging sentences with doubly-embedded clauses, as well as in several features involving extraction (i.e. long-distance dependencies) such as VP+Extract and Info-Struc. The transformer models show evidence of learning long-distance dependencies better than the CoLA baseline. They outperform the CoLA baseline by an especially wide margin on Bind:Refl, whose sentences all involve establishing a dependency between a reflexive and its antecedent (Bo tries to love himself). They also have a large advantage in dislocation, in which expressions are separated from their dependents (Bo practiced on the train an important presentation). The advantage of BERT and GPT may be due in part to their use of the transformer architecture. Unlike the BiLSTM used by the CoLA baseline, the transformer uses a self-attention mechanism that associates all pairs of words regardless of distance. In some cases models showed surprisingly good or bad performance, revealing possible idiosyncrasies of the sentence embeddings they output. For instance, the CoLA baseline performs on par with the others on the major feature Adjunct, especially considering the minor feature Particle (Bo looked the word up). Furthermore, all models struggle equally with sentences in Violations, indicating that the advantages of the transformer models over the CoLA baseline do not extend to the detection of morphological violations (Infl/Agr Violation) or single word anomalies (Extra/Missing Expr). ### Length Analysis
For comparison, we analyze the effect of sentence length on acceptability classifier performance. The results are shown in Figure FIGREF39 . The results for the CoLA baseline are inconsistent, but do drop off as sentence length increases. For BERT and GPT, performance decreases very steadily with length. Exceptions are extremely short sentences (length 1-3), which may be challenging due to insufficient information; and extremely long sentences, where we see a small (but somewhat unreliable) boost in BERT's performance. BERT and GPT are generally quite close in performance, except on the longest sentences, where BERT's performance is considerably better. ### Conclusion
Using a new grammatically annotated analysis set, we identify several syntactic phenomena that are predictive of good or bad performance of current state-of-the-art sentence encoders on CoLA. We also use these results to develop hypotheses about why BERT is successful, and why transformer models outperform sequence models. Our findings can guide future work on sentence embeddings. A current weakness of all sentence encoders we investigate, including BERT, is the identification of morphological violations. Future engineering work should investigate whether switching to a character-level model can mitigate this problem. Additionally, transformer models appear to have an advantage over sequence models with long-distance dependencies, but still struggle with these constructions relative to more local phenomena. It stands to reason that this performance gap might be widened by training larger or deeper transformer models, or training on longer or more complex sentences. This analysis set can be used by engineers interested in evaluating the syntactic knowledge of their encoders. Finally, these findings suggest possible controlled experiments that could confirm whether there is a causal relation between the presence of the syntactic features we single out as interesting and model performance. Our results are purely correlational, and do not mark whether a particular construction is crucial for the acceptability of the sentence. Future experiments following ettinger2018assessing and kann2019verb can semi-automatically generate datasets manipulating, for example, the length of long-distance dependencies, inflectional violations, or the presence of interrogatives, while controlling for factors like sentence length and word choice, in order to determine the extent to which these features impact the quality of sentence embeddings. ### Acknowledgments
We would like to thank Jason Phang and Thibault Févry for sharing GPT and BERT model predictions on CoLA, and Alex Wang for feedback. ### Simple
These are sentences with transitive or intransitive verbs appearing with their default syntax and argument structure. All arguments are noun phrases (DPs), and there are no modifiers or adjuncts on DPs or the VP.

Included:
- John owns the book. (37)
- Park Square has a festive air. (131)
- *Herself likes Mary's mother. (456)

Excluded:
- Bill has eaten cake.
- I gave Joe a book.
### Pred (Predicates)
These are sentences including the verb be used predicatively. Also, sentences where the object of the verb is itself a predicate, which applies to the subject. Not included are auxiliary uses of be or other predicate phrases that are not linked to a subject by a verb.

Included:
- John is eager. (27)
- He turned into a frog. (150)
- To please John is easy. (315)

Excluded:
- There is a bench to sit on. (309)
- John broke the geode open.
- The cake was eaten.

These sentences involve predication of a non-subject argument by another non-subject argument, without the presence of a copula. Some of these cases may be analyzed as small clauses BIBREF35 .

Included:
- John called the president a fool. (234)
- John considers himself proud of Mary. (464)
- They want them arrested. (856)
- The election of John president surprised me. (1001)

Modifiers that act as predicates of an argument. Resultatives express a resulting state of that argument, and depictives describe that argument during the matrix event. See BIBREF24 .

Included:
- Resultative: The table was wiped by John clean. (625) / The horse kicked me black and blue. (898)
- Depictive: John left singing. (971) / In which car was the man seen? (398)

Excluded:
- He turned into a frog. (150)
### Adjunct
Particles are lone prepositions associated with verbs. When they appear with transitive verbs they may immediately follow the verb or the object. Verb-particle pairs may have a non-compositional (idiomatic) meaning. See [pp. 69-70]carnie2013syntax and [pp. 16-17]kim2008syntax.

Included:
- The argument was summed by the coach up. (615)
- Some sentences go on and on and on. (785)
- *He let the cats which were whining out. (71)

Adjuncts modifying verb phrases. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. See BIBREF33 .

Included:
- PP-adjuncts, e.g. locative, temporal, instrumental, beneficiary: Nobody who hates to eat anything should work in a delicatessen. (121) / Felicia kicked the ball off the bench. (127)
- Adverbs: Mary beautifully plays the violin. (40) / John often meets Mary. (65)
- Purpose VPs: We need another run to win. (769)

Excluded:
- PP arguments: Sue gave to Bill a book. (42) / Everything you like is on the table. (736)
- S-adjuncts: John lost the race, unfortunately.

These are adjuncts modifying noun phrases. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. Single-word prenominal adjectives are excluded, as are relative clauses (this has another category).

Included:
- PP-adjuncts: Tom's dog with one eye attacked Frank's with three legs. (676) / They were going to meet sometime on Sunday, but the faculty didn't know when. (565)
- Phrasal adjectives: As a statesman, scarcely could he do anything worth mentioning. (292)
- Verbal modifiers: The horse raced past the barn fell. (900)

Excluded:
- Prenominal adjectives: It was the policeman met that several young students in the park last night. (227)
- Relative clauses
- NP arguments

These are adjuncts of VPs and NPs that specify a time or modify tense or aspect or frequency of an event. Adjuncts are (usually) optional, and they do not change the category of the expression they modify.

Included:
- Short adverbials (never, today, now, always): Which hat did Mike quip that she never wore? (95)
- PPs: Fiona might be here by 5 o'clock. (426)
- When: I inquired when could we leave. (520)

These are adjuncts of VPs and NPs that specify a location of an event or a part of an event, or of an individual. Adjuncts are (usually) optional, and they do not change the category of the expression they modify.

Included:
- Short adverbials
- PPs: The bed was slept in. (298) / *Anson demonized up the Khyber (479) / Some people consider dogs in my neighborhood dangerous. (802) / Mary saw the boy walking toward the railroad station. (73)
- Where: I found the place where we can relax. (307)

Excluded:
- Locative arguments: Sam gave the ball out of the basket. (129) / Jessica loaded boxes on the wagon. (164) / I went to Rome.

These are adjuncts of VPs and NPs not described by some other category (with the exception of (6-7)), i.e. not temporal, locative, or relative clauses. Adjuncts are (usually) optional, and they do not change the category of the expression they modify.

Included:
- Beneficiary: I know which book José didn't read for class, and which book Lilly did it for him. (58)
- Instrument: Lee saw the student with a telescope. (770)
- Comitative: Joan ate dinner with someone but I don't know who. (544)
- VP adjuncts: Which article did Terry file papers without reading? (431)
- Purpose: We need another run to win. (769)
### Argument Types
Oblique arguments of verbs are individual-denoting arguments (DPs or PPs) which act as the third argument of a verb, i.e. not a subject or (direct) object. They may or may not be marked by a preposition. Obliques are only found in VPs that have three or more individual arguments. Arguments are selected for by the verb, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over. See [p.40]kim2008syntax.

Included:
- Prepositional: Sue gave to Bill a book. (42) / Mary has always preferred lemons to limes. (70) / *Janet broke Bill on the finger. (141)
- Benefactives: Martha carved the baby a toy out of wood. (139)
- Double object: Susan told her a story. (875)
- Locative arguments: Ann may spend her vacation in Italy. (289)
- High-arity passives: Mary was given by John the book. (626)

Excluded:
- Non-DP arguments: We want John to win (28)
- Third arguments where not all three arguments are DPs: We want John to win (28)

Prepositional phrase arguments of VPs are individual-denoting arguments of a verb which are marked by a preposition. They may or may not be obliques. Arguments are selected for by the verb, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over.

Included:
- Dative: Sue gave to Bill a book. (42)
- Conative (at): Carla slid at the book. (179)
- Idiosyncratic prepositional verbs: I wonder who to place my trust in. (711) / She voted for herself. (743)
- Locative: John was found in the office. (283)
- PP predicates: Everything you like is on the table. (736)

Excluded:
- PP adjuncts
- Particles
- Arguments of deverbal expressions: The putter of books left. (892)
- By-phrase: Ted was bitten by the spider. (613)

Prepositional phrase arguments of NPs or APs are individual-denoting arguments of a noun or adjective which are marked by a preposition. Arguments are selected for by the head, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over.

Included:
- Relational adjectives: Many people were fond of Pat. (936) / *I was already aware of fact. (824)
- Relational nouns: We admired the pictures of us in the album. (759) / They found the book on the atom. (780)
- Arguments of deverbal nouns: The putter of books left. (892)

Prepositional arguments introduced with by. Usually, this is the (semantic) subject of a passive verb, but in rare cases it may be the subject of a nominalized verb. Arguments are usually selected for by the head, and they are generally not optional. In this case, the argument introduced with by is semantically selected for by the verb, but it is syntactically optional. See [p.190]adger2003core and []collins2005smuggling.

Included:
- Passives: Ted was bitten by the spider. (613)
- Subjects of deverbal nouns: The attempt by John to leave surprised me. (1003)

Expletives, or “dummy” arguments, are semantically inert arguments. The most common expletives in English are it and there, although not all occurrences of these items are expletives. Arguments are usually selected for by the head, and they are generally not optional. In this case, the expletive occupies a syntactic argument slot, but it is not semantically selected by the verb, and there is often a syntactic variation without the expletive. See [p.170-172]adger2003core and [p.82-83]kim2008syntax.

Included:
- There (inserted, existential): There loved Sandy. (939) / There is a nurse available. (466)
- It (cleft, inserted): It was a brand new car that he bought. (347) / It bothers me that John coughs. (314) / It is nice to go abroad. (47)
- Environmental it: Kerry remarked it was late. (821) / Poor Bill, it had started to rain and he had no umbrella. (116) / You've really lived it up. (160)

Excluded:
- John counted on Bill to get there on time. (996)
- I bought it to read. (1026)
### Arg Altern (Argument Alternations)
These are verbs with 3 or more arguments of any kind. Arity refers to the number of arguments that a head (or function) selects for. Arguments are usually selected for by the head, and they are generally not optional. They may be DPs, PPs, CPs, VPs, APs or other categories. . Included Ḋitransitive [̣Sue] gave [to Bill] [a book]. (42) [Martha] carved [the baby] [a toy] out of wood. (139) . VP arguments [̣We] believed [John] [to be a fountain in the park]. (274) [We] made [them] [be rude]. (260) . Particles He] let [the cats which were whining] [out]. (71) . Passives with by-phrase [̣A good friend] is remained [to me] [by him]. (237) . Expletives [̣We] expect [there] [to will rain]. (282) [There] is [a seat] [available]. (934) [It] bothers [me] [that he is here]. (1009) . Small clause John] considers [Bill] [silly]. (1039) . Excluded Ṙesults, depictives John] broke [the geode] [open]. These are VPs where a canonical argument of the verb is missing. This can be difficult to determine, but in many cases the missing argument is understood with existential quantification or generically, or contextually salient. See [p.106-109]sportiche2013introduction. . Included Ṁiddle voice/causative inchoative Ṭhe problem perceives easily. (66) . Passive Ṫhe car was driven. (296) . Null complement anaphora J̇ean persuaded Robert. (380) Nobody told Susan. (883) . Dropped argument Ḳim put in the box. (253) The guests dined. (835) I wrote to Bill. (1030) . Transitive adjective J̇ohn is eager. (27) We pulled free. (144) . Transitive noun İ sensed his eagerness. (155) . Expletive insertion Ịt loved Sandy. (949) . Excluded Ṫed was bitten by the spider. (613) These are VPs in which a non-canonical argument of the verb has been added. These cases are clearer to identify where the additional argument is a DP. In general, PPs which mark locations, times, beneficiaries, or purposes should be analyzed as adjuncts, while PPs marking causes can be considered arguments. 
See pylkkanen2008introducing.

Included:
- Extra argument: *Linda winked her lip. (202); Sharon fainted from hunger. (204); I shaved myself. (526)
- Causative: *I squeaked the door. (207)
- Expletive insertion: There is a monster in Loch Ness. (928); It annoys people that dogs bark. (943)
- Benefactive: Martha carved the baby a toy out of wood. (139)

The passive voice is marked by the demotion of the subject (either complete omission or demotion to a by-phrase) and the verb appearing as a past participle. In the stereotypical construction there is an auxiliary be verb, though this may be absent. See [p.175-190]kim2008syntax, collins2005smuggling, and [p.311-333]sag2003syntactic.

Included:
- Verbs: The earth was believed to be round. (157)
- Pseudopassive: The bed was slept in. (298)
- Past participle adjuncts: The horse raced past the barn fell. (900)

### Imperative
The imperative mood is marked by the absence of a subject and the bare form of the verb, and expresses a command, request, or other directive speech act.

Included:
- *Wash you! (224)
- Somebody just left - guess who. (528)

### Binding
These are cases in which a reflexive (non-possessive) pronoun appears, usually bound by an antecedent. See [p.163-186]sportiche2013introduction and [p.203-226]sag2003syntactic.

Included:
- *Ourselves like ourselves. (742)
- Which pictures of himself does John like? (386)

These are cases in which a non-reflexive pronoun appears along with its antecedent. This includes donkey anaphora, quantificational binding, and bound possessives, among other bound pronouns. See [p.163-186]sportiche2013introduction and [p.203-226]sag2003syntactic.

Included:
- Bound possessor: The children admire their mother. (382)
- Quantificational binding: Everybody gets on well with a certain relative, but often only his therapist knows which one. (562)
- Bound pronoun: *We gave us to the cause. (747)

### Question
These are sentences in which the matrix clause is interrogative (either a wh- or polar question). See [pp.282-313]adger2003core, [pp.193-222]kim2008syntax, and [p.315-350]carnie2013syntax.

Included:
- Wh-question: Who always drinks milk? (684)
- Polar question: Did Athena help us? (486)

These are embedded interrogative clauses appearing as arguments of verbs, nouns, and adjectives. Not including relative clauses and free relatives. See [p.297]adger2003core.

Included:
- Under VP: I forgot how good beer tastes. (235); *What did you ask who saw? (508)
- Under NP: That is the reason why he resigned. (313)
- Under AP: They claimed they had settled on something, but it wasn't clear what they had settled on. (529)
- Free relative: What the water did to the bottle was fill it. (33)

Excluded:
- Relative clauses, free relatives

These are phrasal wh-phrases, in which the wh-word moves along with other expressions, including prepositions (pied-piping) or nouns in the case of determiner wh-words such as how many and which.

Included:
- Pied-piping: *The ship sank, but I don't know with what. (541)
- Other phrasal wh-phrases: I know which book Mag read, and which book Bob read my report that you hadn't. (61); How sane is Peter? (88)

Relative clauses are noun modifiers appearing with a relativizer (either that or a wh-word) and an associated gap. See [p.223-244]kim2008syntax.

Included:
- Though he may hate those that criticize Carter, it doesn't matter. (332)
- *The book what inspired them was very long. (686)
- Everything you like is on the table. (736)

Excluded:
- *The more you would want, the less you would eat. (6)

This is wh-movement out of an extraction island, or near-island. Islands include, for example, complex NPs, adjuncts, embedded questions, coordination. A near-island is an extraction that closely resembles an island violation, such as extraction out of an embedded clause, or across-the-board extraction. See [pp.323-333]adger2003core and [pp.332-334]carnie2013syntax.
Included:
- Embedded question: *What did you ask who Medea gave? (493)
- Adjunct: *What did you leave before they did? (598)
- Parasitic gaps: Which topic did you choose without getting his approval? (311)
- Complex NP: Who did you get an accurate description of? (483)

### Comp Clause (Complement Clauses)
These are complement clauses acting as the (syntactic) subject of verbs. See [pp.90-91]kim2008syntax.

Included:
- That dogs bark annoys people. (942)
- The socks are ready for for you to put on to be planned. (112)

Excluded:
- Expletive insertion: It bothers me that John coughs. (314)

These are complement clauses acting as (non-subject) arguments of verbs. See [pp.84-90]kim2008syntax.

Included:
- I can't believe Fred won't, either. (50)
- I saw that gas can explode. (222)
- It bothers me that John coughs. (314)
- Clefts: It was a brand new car that he bought. (347)

These are complement clauses acting as an argument of a noun or adjective. See [pp.91-94]kim2008syntax.

Included:
- Under NP: Do you believe the claim that somebody was looking for something? (99)
- Under AP: *The children are fond that they have ice cream. (842)

These are complement clauses with a non-finite matrix verb. Often, the complementizer is for, or there is no complementizer. See [pp.252-253,256-260]adger2003core.

Included:
- For complementizer: I would prefer for John to leave. (990)
- No complementizer: Mary intended John to go abroad. (48)
- Ungrammatical: Heidi thinks that Andy to eat salmon flavored candy bars. (363)
- V-ing: Only Churchill remembered Churchill giving the Blood, Sweat and Tears speech. (469)

These are complement clauses with no overt complementizer.

Included:
- Complement clause: I'm sure we even got these tickets! (325); He announced he would marry the woman he loved most, but none of his relatives could figure out who. (572)
- Relative clause: The Peter we all like was at the party. (484)

These are sentences with three or more nested verbs, where VP is not an aux or modal, i.e. with the following syntax: [S ...[ VP ...[ VP ...[ VP ...] ...] ...] ...]

Included:
- Embedded VPs: Max seemed to be trying to force Ted to leave the room, and Walt, Ira. (657)
- Embedded clauses: I threw away a book that Sandy thought we had read. (713)

### Aux (Auxiliaries)
Any occurrence of negation in a sentence, including sentential negation, negative quantifiers, and negative adverbs.

Included:
- Sentential: I can't remember the name of somebody who had misgivings. (123)
- Quantifier: No writer, and no playwright, meets in Vienna. (124)
- Adverb: They realised that never had Sir Thomas been so offended. (409)

Modal verbs (may, might, can, could, will, would, shall, should, must). See [pp.152-155]kim2008syntax.

Included:
- John can kick the ball. (280)
- As a statesman, scarcely could he do anything worth mentioning. (292)

Excluded:
- Pseudo-modals: Sandy was trying to work out which students would be able to solve a certain problem. (600)

Auxiliary verbs (e.g. be, have, do). See [pp.149-174]kim2008syntax.

Included:
- They love to play golf, but I do not. (290)
- The car was driven. (296)
- He had spent five thousand dollars. (301)

Excluded:
- Pseudo-auxiliaries: *Sally asked if somebody was going to fail math class, but I can't remember who. (589); The cat got bitten. (926)

These are predicates acting as near-auxiliaries (e.g. get-passive) or near-modals (e.g. willing).

Included:
- Near-auxiliaries: *Mary came to be introduced by the bartender and I also came to be. (55); *Sally asked if somebody was going to fail math class, but I can't remember who. (589); The cat got bitten. (926)
- Near-modals: Clinton is anxious to find out which budget dilemmas Panetta would be willing to tackle in a certain way, but he won't say in which. (593); Sandy was trying to work out which students would be able to solve a certain problem. (600)

### to-VP (Infinitival VPs)
These are VPs with control verbs, where one argument is a non-finite to-VP with a covert subject co-indexed with an argument of the matrix verb. See [pp.252,266-291]adger2003core, [pp.203-222]sportiche2013introduction, and [pp.125-148]kim2008syntax.

Included:
- Intransitive subject control: *It tries to leave the country. (275)
- Transitive subject control: John promised Bill to leave. (977)
- Transitive object control: I want her to dance. (379); John considers Bill to be silly. (1040)

Excluded:
- VP args of NP/AP: This violin is difficult to play sonatas on. (114)
- Purpose: There is a bench to sit on. (309)
- Subject VPs: To please John is easy. (315)
- Argument present participles: Medea denied poisoning the phoenix. (490)
- Raising: Anson believed himself to be handsome. (499)

These are VPs with raising predicates, where one argument is a non-finite to-VP with a covert subject co-indexed with an argument of the matrix verb. Unlike control verbs, the coindexed argument is not a semantic argument of the raising predicate. See [pp.260-266]adger2003core, [pp.203-222]sportiche2013introduction, and [pp.125-148]kim2008syntax.

Included:
- Subject raising: Under the bed seems to be a fun place to hide. (277)
- Object raising: Anson believed himself to be handsome. (499)
- Raising adjective: John is likely to leave. (370)

These are embedded infinitival VPs containing a (non-subject) gap that is filled by an argument in the upper clause. Examples are purpose-VPs and tough-movement. See [pp.246-252]kim2008syntax.

Included:
- Tough-movement: *Drowning cats, which is against the law, are hard to rescue. (79)
- Infinitival relatives: *Fed knows which politician her to vote for. (302)
- Purpose: the one with a red cover takes a very long time to read. (352)
- Other non-finite VPs with extraction: As a statesman, scarcely could he do anything worth mentioning. (292)

These are non-finite VP arguments of nouns and adjectives.

Included:
- Raising adjectives: John is likely to leave. (370)
- Control adjectives: The administration has issued a statement that it is willing to meet a student group, but I'm not sure which one. (604)
- Control nouns: As a teacher, you have to deal simultaneously with the administration's pressure on you to succeed, and the children's to be a nice guy. (673)
- Purpose VPs: there is nothing to do. (983)

These are miscellaneous non-finite VPs.

Included:
- I saw that gas can explode. (222)
- Gerunds/Present participles: *Students studying English reads Conrad's Heart of Darkness while at university. (262); Knowing the country well, he took a short cut. (411); John became deadly afraid of flying. (440)
- Subject VPs: To please John is easy. (315)
- Nominalized VPs: *What Mary did Bill was give a book. (473)

Excluded:
- to-VPs acting as complements or modifiers of verbs, nouns, or adjectives

### N, Adj (Nouns and Adjectives)
These are nouns and adjectives derived from verbs.

Included:
- Deverbal nouns: *the election of John president surprised me. (1001)
- "Light" verbs: The birds give the worm a tug. (815)
- Gerunds: If only Superman would stop flying planes! (773)
- Event-wh: What the water did to the bottle was fill it. (33)
- Deverbal adjectives: His or her least known work. (95)

Relational nouns are NPs with an obligatory (or existentially closed) argument. A particular relation holds between the members of the extension of the NP and the argument. The argument must be a DP possessor or a PP. See [pp.82-83]kim2008syntax.

Included:
- Nouns with of-arguments: John has a fear of dogs. (353)
- Nouns with other PP-arguments: Henri wants to buy which books about cooking? (442)
- Measure nouns: I bought three quarts of wine and two of Clorox. (667)
- Possessed relational nouns: *John's mother likes himself. (484)

Excluded:
- Nouns with PP modifiers: Some people consider dogs in my neighborhood dangerous. (802)

Transitive (non-relational) nouns take a VP or CP argument. See [pp.82-83]kim2008syntax.

Included:
- VP argument: the attempt by John to leave surprised me. (1003)
- CP argument: *Which report that John was incompetent did he submit? (69)
- QP argument: That is the reason why he resigned. (313)

These are complex NPs, including coordinated nouns and nouns with modifiers (excluding prenominal adjectives).

Included:
- Modified NPs: *The madrigals which Henry plays the lute and sings sound lousy. (84); John bought a book on the table. (233)
- NPs with coordination: *The soundly and furry cat slept. (871); The love of my life and mother of my children would never do such a thing. (806)

Noun-noun compounds are NPs consisting of two constituent nouns.

Included:
- It was the peasant girl who got it. (320)
- A felon was elected to the city council. (938)

These are adjectives that take an obligatory (or existentially closed) argument.
A particular relation holds between the members of the extension of the modified NP and the argument. The argument must be a DP or PP. See [pp.80-82]kim2008syntax.

Included:
- Of-arguments: The chickens seem fond of the farmer. (254)
- Other PP arguments: This week will be a difficult one for us. (241); John made Bill mad at himself. (1035)

A transitive (non-relational) adjective, i.e. an adjective that takes a VP or CP argument. See [pp.80-82]kim2008syntax.

Included:
- VP argument: John is likely to leave. (370)
- CP argument: John is aware of it that Bill is here. (1013)
- QP argument: The administration has issued a statement that it is willing to meet a student group, but I'm not sure which one. (604)

### S-Syntax (Sentence-Level Syntax)
These are expressions with non-canonical word order. See, for example, [p.76]sportiche2013introduction.

Included:
- Particle shift: *Mickey looked up it. (24)
- Preposed modifiers: Out of the box jumped a little white rabbit. (215); *Because she's so pleasant, as for Mary I really like her. (331)
- Quantifier float: The men will all leave. (43)
- Preposed argument: With no job would John be happy. (333)
- Relative clause extraposition: *Which book's, author did you meet who you liked? (731)
- Misplaced phrases: Mary was given by John the book. (626)

This includes topicalization and focus constructions. See [pp.258-269]kim2008syntax and [pp.68-75]sportiche2013introduction.

Included:
- Topicalization: Most elections are quickly forgotten, but the election of 2000, everyone will remember for a long time. (807)
- Clefts: It was a brand new car that he bought. (347)
- Pseudo-clefts: What John promised is to be gentle. (441)

Excluded:
- There-insertion
- Passive

These are parentheticals or fragmentary expressions.

Included:
- Parenthetical: Mary asked me if, in St. Louis, John could rent a house cheap. (704)
- Fragments: The soup cooks, thickens. (448)
- Tag question: George has spent a lot of money, hasn't he? (291)

Coordinations and disjunctions are expressions joined with and, but, or, etc. See [pp.61-68]sportiche2013introduction.

Included:
- DP coordination: Dave, Dan, Erin, Jaime, and Alina left. (341)
- Right Node Raising: Kim gave a dollar to Bobbie and a dime to Jean. (435)
- Clausal coordination: She talked to Harry, but I don't know who else. (575)
- Or, nor: *No writer, nor any playwright, meets in Vienna. (125)
- Pseudo-coordination: I want to try and buy some whiskey. (432)
- Juxtaposed clauses: Lights go out at ten. There will be no talking afterwards. (779)

This includes subordinate clauses, especially with subordinating conjunctions, and conditionals.

Included:
- Conditional: If I can, I will work on it. (56)
- Subordinate clause: *What did you leave before they did?
(598); *Because Steve's of a spider's eye had been stolen, I borrowed Fred's diagram of a snake's fang. (677)
- Correlative: *As you eat the most, you want the least. (5)

This includes VP or NP ellipsis, or anaphora standing for VPs or NPs (not DPs). See [pp.55-61]sportiche2013introduction.

Included:
- VP Ellipsis: If I can, I will work on it. (56); Mary likes to tour art galleries, but Bill hates to. (287)
- VP Anaphor: I saw Bill while you did so Mary. (472)
- NP Ellipsis: Tom's dog with one eye attacked Fred's. (679)
- NP anaphor: the one with a red cover takes a very long time to read. (352)
- Sluicing: Most columnists claim that a senior White House official has been briefing them, and the newspaper today reveals which one. (557)
- Gapping: Bill ate the peaches, but Harry the grapes. (646)

These are adjuncts modifying sentences, sentence-level adverbs, subordinate clauses.

Included:
- Sentence-level adverbs: Suddenly, there arrived two inspectors from the INS. (447)
- Subordinate clauses: The storm arrived while we ate lunch. (852)

### Determiner
These are quantificational DPs, i.e. the determiner is a quantifier.

Included:
- Quantifiers: *Every student, and he wears socks, is a swinger. (118); We need another run to win. (769)
- Partitive: *Neither of students failed. (265)

These are quantifiers that take PP arguments, and measure nouns. See [pp.109-118]kim2008syntax.

Included:
- Quantifiers with PP arguments: *Neither of students failed. (265)
- Numerals: One of Korea's most famous poets wrote these lines. (294)
- Measure nouns: I bought three quarts of wine and two of Clorox. (667)

These are negative polarity items (any, ever, etc.) and free choice items (any). See kadmon1993any.

Included:
- NPI: Everybody around here who ever buys anything on credit talks in his sleep. (122); I didn't have a red cent. (350)
- FCI: Any owl hunts mice. (387)

These are comparative constructions. See BIBREF22.

Included:
- Correlative: The angrier Mary got, the more she looked at pictures. (9); They may grow as high as bamboo. (337); I know you like the back of my hand. (775)

### Violations
These are sentences that include a semantic violation, including type mismatches, violations of selectional restrictions, polarity violations, definiteness violations.

Included:
- Violation of selectional restrictions: *many information was provided. (218); *It tries to leave the country. (275)
- Aspectual violations: *John is tall on several occasions. (540)
- Definiteness violations: *It is the problem that he is here. (1018)
- Polarity violations: Any man didn't eat dinner. (388)

These are sentences that include a violation in inflectional morphology, including tense-aspect marking, or agreement.

Included:
- Case: *Us love they. (46)
- Agreement: *Students studying English reads Conrad's Heart of Darkness while at university. (262)
- Gender: *Sally kissed himself. (339)
- Tense/Aspect: *Kim alienated cats and beating his dog. (429)

These are sentences with a violation that can be identified with the presence or absence of a single word.

Included:
- Missing word: *John put under the bathtub. (247); *I noticed the. (788)
- Extra word: *Everyone hopes everyone to sleep. (467); *He can will go (510)

Table 1: A random sample of sentences from the CoLA development set, shown with their original acceptability labels (✓=acceptable, *=unacceptable) and with a subset of our new phenomenon-level annotations.
Table 2: Major features and their associated minor features (with number of occurrences n).
Table 3: Correlation (MCC) of features in the annotated analysis set. We display only the correlations with the greatest magnitude.
Figure 1: Performance (MCC) on CoLA analysis set by major feature. Dashed lines show mean performance on all of CoLA.
Table 4: Performance (MCC) on the CoLA test set, including mean over restarts of a given model with standard deviation, max over restarts, and majority prediction over restarts. Human performance is measured by Warstadt et al.
Figure 2: Performance (MCC) on CoLA analysis set by minor feature. Dashed lines show mean performance on all of CoLA.
Figure 3: Performance (MCC) on the CoLA analysis set by sentence length.
|
the transformer models
|
What was the key agenda of AMCOR's 8-K filing dated 1st July 2022?
|
Evidence 0:
On June 30, 2022, Amcor Finance (USA), Inc. (the Former Issuer) and Amcor Flexibles North America, Inc. (the Substitute Issuer),
each a wholly-owned subsidiary of Amcor plc (the Company), entered into a (i) Second Supplemental Indenture (the Second Supplemental
Indenture) with the Trustee (as defined below) with respect to the Indenture, dated as of April 28, 2016 (as amended and/or supplemented to
date, the 2016 Indenture and, together with the Second Supplemental Indenture, the 2016 Indenture), among the Former Issuer, the
guarantors party thereto and Deutsche Bank Trust Company Americas, as trustee (the Trustee), governing the Former Issuer's (a) 3.625%
Guaranteed Senior Notes due 2026 (the 2026 Notes) and (b) 4.500% Guaranteed Senior Notes due 2028 (the 2028 Notes and, together with
the 2026 Notes, the Existing Notes) and (ii) First Supplemental Indenture (the First Supplemental Indenture and, together with the Second
Supplemental Indenture, the Supplemental Indentures) with the Trustee with respect to the Indenture, dated as of June 13, 2019 (as amended
and/or supplemented to date, the 2019 Indenture and, together with the First Supplemental Indenture, the 2019 Indenture and, together with
the 2016 Indenture, the Indentures), among the Former Issuer, the guarantors party thereto and the Trustee, governing the Former Issuer's
(a) 3.625% Guaranteed Senior Notes due 2026 (the New 2026 Notes) and (b) 4.500% Guaranteed Senior Notes due 2028 (the New 2028
Notes and, together with the New 2026 Notes, the New Notes), in each case, relating to the substitution of the Substitute Issuer for the Former
Issuer and the assumption by the Substitute Issuer of the covenants of the Former Issuer under the Indentures. As disclosed in the Company's
Current Report on Form 8-K, filed with the Securities and Exchange Commission (the SEC) on June 17, 2019, the New Notes were issued in
June 2019 following the completion of the Former Issuer's exchange offer to certain eligible holders of the Existing Notes.
|
Amcor Finance (USA), Inc. and Amcor Flexibles North America, Inc., entered into supplemental indentures relating to Guaranteed Senior Notes due 2026 and 2028. This involved the substitution of the Substitute Issuer (Amcor Flexibles North America) for the Former Issuer (Amcor Finance) and the assumption of covenants under the indentures. (In essence a novation agreement)
|
How much better are the results of the proposed model compared to the pivoting method?
|
### Introduction
Paraphrasing means expressing the same meaning using different expressions. Paraphrase generation plays an important role in various natural language processing (NLP) tasks such as response diversification in dialogue systems, query reformulation in information retrieval, and data augmentation in machine translation. Recently, models based on Seq2Seq learning BIBREF1 have achieved state-of-the-art results on paraphrase generation. Most of these models BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 focus on training paraphrasing models on a paraphrase corpus, which contains a number of pairs of paraphrases. However, high-quality paraphrases are usually difficult to acquire in practice, which is the major limitation of these methods. Therefore, we focus on a zero-shot paraphrase generation approach in this paper, which aims to generate paraphrases without requiring a paraphrase corpus. A natural choice is to leverage the bilingual or multilingual parallel data used in machine translation, which is available in great quantity and quality. The basic assumption is that if two sentences in one language (e.g., English) have the same translation in another language (e.g., French), they are assumed to have the same meaning, i.e., they are paraphrases of each other. Therefore, one typical solution for paraphrasing in one language is to pivot over a translation in another language. Specifically, it is implemented as round-trip translation, where the input sentence is translated into a foreign sentence, then back-translated into a sentence in the same language as the input BIBREF7. The process is shown in Figure FIGREF1. Apparently, two machine translation systems (English$\rightarrow $French and French$\rightarrow $English) are needed to conduct the generation of a paraphrase. Although the pivoting approach works in general, there are several intrinsic defects.
First, the round-trip system can hardly explore all the paths of paraphrasing, since it pivots through the finite intermediate outputs of a translation system. More formally, let $Z$ denote the meaning representation of a sentence $X$; finding paraphrases of $X$ can then be treated as sampling another sentence $Y$ conditioned on the representation $Z$. Ideally, paraphrases should be generated by following $P(Y|X) = \int _{Z} P(Y|Z)P(Z|X)dZ$, which marginalizes over all possible values of $Z$. However, in round-trip translation, only one or a few $Z$s are sampled from the machine translation system $P(Z|X)$, which can lead to an inaccurate approximation of the whole distribution and is prone to semantic drift due to sampling variance. Second, the results are determined by the pre-existing translation systems, and it is difficult to optimize the pipeline end-to-end. Last, the system is not efficient, especially at the inference stage, because it needs two rounds of translation decoding. To address these issues, we propose a single-step zero-shot paraphrase generation model, which can be trained on machine translation corpora in an end-to-end fashion. Unlike the pivoting approach, our proposed model does not involve explicit translation between multiple languages. Instead, it directly learns the paraphrasing distribution $P(Y|X)$ from the parallel data sampled from $P(Z|X)$ and $P(Y|Z)$. Specifically, we build a Transformer-based BIBREF8 language model, which is trained on concatenated bilingual parallel sentences with language indicators. At the inference stage, given an input sentence in a particular language, the model is guided to generate sentences in the same language, which are deemed paraphrases of the input. Our model is simple and compact, and can empirically reduce the risk of semantic drift to a large extent.
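The marginalization argument above can be made concrete with a toy example. Everything below (the two meanings, two outputs, and all probabilities) is invented purely for illustration; it is not from the paper:

```python
# Toy illustration of why single-sample pivoting under-approximates
# P(Y|X) = sum_Z P(Y|Z) P(Z|X). All numbers here are made up.
P_Z_given_X = {"z1": 0.6, "z2": 0.4}          # forward "translation" step
P_Y_given_Z = {                                # backward step
    "z1": {"y1": 0.9, "y2": 0.1},
    "z2": {"y1": 0.2, "y2": 0.8},
}

def marginal(y):
    """Exact P(y|X), marginalized over every intermediate Z."""
    return sum(P_Z_given_X[z] * P_Y_given_Z[z][y] for z in P_Z_given_X)

def pivot(y, z):
    """Round-trip pivoting: commit to one sampled Z and use P(y|Z) alone."""
    return P_Y_given_Z[z][y]

# marginal("y1") = 0.6*0.9 + 0.4*0.2 = 0.62, while pivoting through z1
# alone gives 0.9 and through z2 alone gives 0.2 -- neither matches.
```

Committing to a single sampled $Z$ can thus put far too much (or too little) mass on a candidate paraphrase, which is the sampling-variance problem described above.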
Moreover, we can initialize our model with generative pre-training (GPT) BIBREF0 on monolingual data, which can benefit generation in low-resource languages. Finally, we borrow the idea of the denoising auto-encoder (DAE) to further enhance robustness in paraphrase generation. We conduct experiments on the zero-shot paraphrase generation task, and find that the proposed model significantly outperforms the pivoting approach in terms of both automatic and human evaluations. Meanwhile, the training and inference costs are largely reduced compared to pivot-based methods, which involve multiple systems. ### Methodology ::: Transformer-based Language Model
A Transformer-based language model (TLM) is a neural language model constructed with a stack of Transformer decoder layers BIBREF8. Given a sequence of tokens, TLM is trained by maximizing the likelihood $\mathcal {L}(\theta ) = \sum _{i=1}^{n} \log P(x_i|x_{<i};\theta )$, where $X=[x_1,x_2,\ldots ,x_n]$ is a sentence in a language (e.g., English), and $\theta $ denotes the parameters of the model. Each Transformer layer is composed of multi-head self-attention, layer normalization and a feed-forward network. We refer the reader to the original paper for details of each component. Formally, the decoding probability is given by $P(x_{i+1}|x_{\le i}) = \mathrm {softmax}(W_o h_i)$, with $h_i$ computed by the Transformer layers from the inputs $x_i + p_i$, where $x_i$ denotes the token embedding, $p_i$ denotes the positional embedding and $h_i$ denotes the output state of the $i$-th token, and $W_e$ and $W_o$ are the input and output embedding matrices. Although TLM is normally employed to model monolingual sequences, there is no barrier to using TLM to model sequences in multiple languages. In this paper, inspired by BIBREF9, we concatenate pairs of sentences from bilingual parallel corpora (e.g., English$\rightarrow $French) as training instances to the model. Let $X$ and $Y$ denote the parallel sentences in two different languages; the training objective then becomes $\mathcal {L}(\theta ) = \log P(X;\theta ) + \log P(Y|X;\theta )$, i.e., the log-likelihood of the concatenated sequence $[X;Y]$. This bilingual language model can be regarded as a decoder-only model, in contrast to the traditional encoder-decoder model. It has proved to work effectively on monolingual text-to-text generation tasks such as summarization BIBREF10. The advantages of such an architecture include fewer model parameters, easier optimization and potentially better performance for longer sequences. Furthermore, it naturally integrates with language model pre-training on monolingual corpora. For each input sequence of concatenated sentences, we add special tokens $\langle $bos$\rangle $ and $\langle $eos$\rangle $ at the beginning and the end, and $\langle $delim$\rangle $ in between the sentences.
Moreover, at the beginning of each sentence, we add a special token as its language identifier, for instance, $\langle $en$\rangle $ for English, $\langle $fr$\rangle $ for French. One example of an English$\rightarrow $French training sequence is “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $ $\langle $fr$\rangle $ chat assis sur le tapis $\langle $eos$\rangle $". At the inference stage, the model predicts the next word auto-regressively, as a conventional language model does. ### Methodology ::: Zero-shot Paraphrase Generation
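The training-sequence format above, and the prefix used to trigger zero-shot paraphrasing, can be sketched as small helpers. The ASCII token spellings (`<bos>`, `<en>`, etc.) and the whitespace tokenizer are stand-in assumptions for the paper's special tokens:

```python
def make_training_sequence(src, tgt, src_lang, tgt_lang):
    # One LM training instance: <bos> <src_lang> src <delim> <tgt_lang> tgt <eos>
    return (["<bos>", f"<{src_lang}>"] + src.split()
            + ["<delim>", f"<{tgt_lang}>"] + tgt.split()
            + ["<eos>"])

def make_paraphrase_prefix(src, lang):
    # Zero-shot paraphrasing: the output language identifier is forced to
    # match the input language; decoding then continues from this prefix.
    return ["<bos>", f"<{lang}>"] + src.split() + ["<delim>", f"<{lang}>"]

seq = make_training_sequence("cat sat on the mat",
                             "chat assis sur le tapis", "en", "fr")
prefix = make_paraphrase_prefix("cat sat on the mat", "en")
```

`" ".join(seq)` reproduces the example training sequence from the text, and `prefix` is the kind of input from which the model would be asked to continue generating English tokens.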
We train the bilingual language model on multiple bilingual corpora, for example, English$\leftrightarrow $French and German$\leftrightarrow $Chinese. Once the language model has been trained, we can conduct zero-shot paraphrase generation based on the model. Specifically, given an input sentence that is fed into the language model, we set the output language identifier to be the same as the input, and then simply conduct decoding to generate paraphrases of the input sentence. Figure FIGREF2 illustrates the training and decoding process of our model. In the training stage, the model is trained to sequentially generate the input sentence and its translation in a specific language. Training is conducted with teacher forcing. In the decoding stage, after an English sentence “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $" is fed to the model, we intentionally set the output language identifier as “$\langle $en$\rangle $", in order to guide the model to continue generating English words. At the same time, since the model has been trained on translation corpora, it implicitly learns to keep the semantic meaning of the output sentence the same as the input. Accordingly, the model will probably generate paraphrases of the input sentence, such as “the cat sitting on the carpet $\langle $eos$\rangle $". It should be noted that our model can of course also be trained on parallel paraphrase data without any modification. But in this paper, we mainly focus on research and evaluation in the zero-shot learning setting. In preliminary experiments on zero-shot paraphrasing, we found that the model does not perform consistently well and sometimes fails to generate words in the correct language as indicated by the language identifier. A similar phenomenon has been observed in research on zero-shot neural machine translation BIBREF11, BIBREF12, BIBREF13, and is referred to as the degeneracy problem by BIBREF13.
To address these problems in zero-shot paraphrase generation, we propose several techniques to improve the quality and diversity of the model as follows. ### Methodology ::: Zero-shot Paraphrase Generation ::: Language Embeddings
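A toy numpy sketch of the language-embedding mechanism this subsection describes, concatenating a per-language vector with the Transformer output state before the softmax. All dimensions, shapes, and the random initialization are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_h, d_lang, vocab = 8, 4, 10                     # toy dimensions

lang_emb = {"en": rng.normal(size=d_lang),        # one vector per language
            "fr": rng.normal(size=d_lang)}
W_o = rng.normal(size=(vocab, d_h + d_lang))      # output projection

def next_token_probs(h_i, lang):
    # Concatenate the Transformer output state h_i with the language
    # embedding a_i, project to the vocabulary, and apply softmax.
    z = W_o @ np.concatenate([h_i, lang_emb[lang]])
    e = np.exp(z - z.max())                       # numerically stable softmax
    return e / e.sum()

h_i = rng.normal(size=d_h)                        # stand-in hidden state
p_en = next_token_probs(h_i, "en")
p_fr = next_token_probs(h_i, "fr")
# Same hidden state, different language embedding -> different distribution.
```

The point of the design is visible here: because the language embedding enters every prediction step, the same hidden state yields different output distributions for different target languages, steering generation without hard vocabulary restrictions.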
The language identifier prior to the sentence does not always guarantee the language of the sequences generated by the model. In order to keep the language consistent, we introduce language embeddings, where each language is assigned a specific vector representation. Supposing that the language embedding for the $i$-th token in a sentence is $a_i$, we concatenate the language embedding with the Transformer output states and feed it to the softmax layer for predicting each token: $P(x_{i+1}|x_{\le i}) = \mathrm {softmax}(W_o [h_i; a_i])$. We empirically demonstrate that the language embedding added to each token can effectively guide the model to generate sentences in the required language. Note that we still let the model learn the output distribution for each language rather than simply restricting the vocabulary of the output space. This offers flexibility to handle code-switching cases commonly seen in real-world data, e.g., English words can also appear in French sentences. ### Methodology ::: Zero-shot Paraphrase Generation ::: Pre-Training on Monolingual Corpora
Language model pre-training has shown its effectiveness in language generation tasks such as machine translation, text summarization and generative question answering BIBREF14, BIBREF15, BIBREF16. It is particularly helpful for low/zero-resource tasks, since the knowledge learned from large-scale monolingual corpora can be transferred to downstream tasks via the pre-train-then-fine-tune approach. Since our model for paraphrase generation shares the same architecture as the language model, we are able to pre-train the model on massive monolingual data. Pre-training on monolingual data is conducted in the same way as training on parallel data, except that each training example contains only one sentence with the beginning/end-of-sequence tokens and the language identifier. The language embeddings are also employed. The pre-training objective is the same as Equation (DISPLAY_FORM4). In our experiments, we first pre-train the model on monolingual corpora of multiple languages respectively, and then fine-tune the model on parallel corpora. ### Methodology ::: Zero-shot Paraphrase Generation ::: Denoising Auto-Encoder
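The three input perturbations described in this subsection (deletion, insertion and reordering, each applied with probability 0.01 and with swap distance limited to 5) could be implemented roughly as follows. The exact sampling details (e.g., the fallback when every token is deleted) are our assumptions.

```python
import random

def delete_tokens(tokens, p=0.01, rng=random):
    """Randomly drop tokens; fall back to the original if all are dropped."""
    kept = [t for t in tokens if rng.random() >= p]
    return kept or list(tokens)

def insert_tokens(tokens, vocab, p=0.01, rng=random):
    """Insert a random vocabulary token before positions chosen at random."""
    out = []
    for t in tokens:
        if rng.random() < p:
            out.append(rng.choice(vocab))
        out.append(t)
    return out

def reorder_tokens(tokens, p=0.01, max_dist=5, rng=random):
    """Swap randomly chosen tokens with a token at most max_dist away."""
    out = list(tokens)
    for i in range(len(out)):
        if rng.random() < p:
            j = min(len(out) - 1, i + rng.randint(1, max_dist))
            out[i], out[j] = out[j], out[i]
    return out
```

The noise is applied only to the source side of each training pair; the target is left clean, so the model learns to reconstruct well-formed output from corrupted input.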
We adopt the idea of the denoising auto-encoder (DAE) to further improve the robustness of our paraphrasing model. DAE was originally proposed to learn intermediate representations that are robust to partial corruption of the inputs when training auto-encoders BIBREF17. Specifically, the initial input $X$ is first partially corrupted into $\tilde{X}$, which can be treated as sampling from a noise distribution $\tilde{X}\sim {q(\tilde{X}|X)}$. Then, an auto-encoder is trained to recover the original $X$ from the noisy input $\tilde{X}$ by minimizing the reconstruction error. In applications to text generation BIBREF18 and machine translation BIBREF19, DAE has been shown to learn representations that are more robust to input noise and that generalize to unseen examples. Inspired by BIBREF19, we directly inject into the input sentence three types of noise that are commonly encountered in real applications. 1) Deletion: we randomly delete 1% of the tokens in the source sentence, for example, “cat sat on the mat $\mapsto $ cat on the mat." 2) Insertion: we insert a random token into the source sentence at 1% of the positions, chosen at random, for example, “cat sat on the mat $\mapsto $ cat sat on red the mat." 3) Reordering: we randomly swap 1% of the tokens in the source sentence, keeping the distance between swapped tokens within 5, for example, “cat sat on the mat $\mapsto $ mat sat on the cat." By introducing such noise into the input sentences while keeping the target sentences clean during training, our model becomes more stable in generating paraphrases and generalizes better to sentences unseen in the training corpus. The training objective with DAE is then the reconstruction likelihood under the corrupted inputs. Once the model is trained, we generate paraphrases of a given sentence based on $P(Y|X;\theta )$. ### Experiments ::: Datasets
We adopt a mixture of two multilingual translation corpora as our training data: MultiUN BIBREF20 and OpenSubtitles BIBREF21. MultiUN consists of 463,406 official documents in six languages, containing around 300M words per language. OpenSubtitles is a corpus of movie and TV subtitles, containing 2.6B sentences over 60 languages. We select the four languages shared by the two corpora: English, Spanish, Russian and Chinese. Statistics of the training corpus are shown in Table TABREF14. Sentences are tokenized by WordPiece as in BERT, with a multilingual vocabulary of 50K tokens. For validation and testing, we randomly sample 10,000 sentences from each language pair; the remaining data are used for training. For monolingual pre-training, we use the English Wikipedia corpus, which contains 2,500M words. ### Experiments ::: Experimental Settings
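Among the settings described in this subsection, the top-k truncated random sampling used for inference can be sketched as follows (NumPy; the default k and the temperature handling are illustrative assumptions):

```python
import numpy as np

def top_k_sample(logits, k=10, temperature=1.0, rng=None):
    """Sample a token index from only the k highest-probability candidates."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    top = np.argsort(scaled)[-k:]              # indices of the k largest logits
    probs = np.exp(scaled[top] - scaled[top].max())
    probs /= probs.sum()                       # renormalize over the top-k
    return int(rng.choice(top, p=probs))
```

Lowering k or the temperature concentrates samples on the most probable tokens, trading diversity for relevance; varying the temperature is how the data points on the evaluation curves below are obtained.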
We implement our model in TensorFlow BIBREF22. The size of our Transformer model is identical to BERT-base BIBREF23: the model consists of 12 layers of Transformer blocks; the dimension of the token embeddings, position embeddings and Transformer hidden states is 768, while the inner dimension of the position-wise feed-forward networks is 3072; the number of attention heads is 12. Models are trained using Adam optimization BIBREF24 with a learning rate up to $1e-4$, $\beta _1=0.9$, $\beta _2=0.999$ and $L2$ weight decay of 0.01. For inference we use a top-k truncated random sampling strategy, which samples only from the k candidate words with the highest probabilities. Throughout our experiments, we train and evaluate two models for paraphrase generation: a bilingual model and a multilingual model. The bilingual model is trained only on English$\leftrightarrow $Chinese data, while the multilingual model is trained on all the data among the four languages. The round-trip translation baseline is based on a Transformer-based neural translation model. ### Experiments ::: Automatic Evaluation
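Of the diversity metrics used in this subsection, Distinct-2 has a particularly simple form: the fraction of distinct bigrams among all bigrams in a set of generations. A sketch (whitespace tokenization is a simplification):

```python
def distinct_2(sentences):
    """Ratio of unique bigrams to total bigrams over a set of generations."""
    bigrams = [(toks[i], toks[i + 1])
               for toks in (s.split() for s in sentences)
               for i in range(len(toks) - 1)]
    return len(set(bigrams)) / len(bigrams) if bigrams else 0.0

outs = ["the cat sat on the mat", "the cat sat on the rug"]
# 10 bigrams in total, 6 distinct → Distinct-2 = 0.6
```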
We evaluate the relevance between the input and the generated paraphrase, as well as the diversity among multiple paraphrases generated from the same input. For relevance, we use the cosine similarity between sentential representations BIBREF25. Specifically, we use Glove-840B embeddings BIBREF26 for word representations and Vector Extrema BIBREF25 for sentential representations. For generation diversity, we employ two metrics: Distinct-2 and inverse Self-BLEU (defined as $1-$Self-BLEU) BIBREF27. Larger values of Distinct-2 and inverse Self-BLEU indicate higher generation diversity. For each model, we draw curves in Figure FIGREF15 with the aforementioned metrics as coordinates, where each data point is obtained at a specific sampling temperature. Since a good paraphrasing model should generate paraphrases that are both relevant and diverse, a model whose curve lies towards the upper-right corner is regarded as performing well. ### Experiments ::: Automatic Evaluation ::: Comparison with Baseline
First we compare our models with the conventional pivoting method, i.e., round-trip translation. As shown in Figure FIGREF15 (a)(b), both the bilingual and the multilingual model outperform the baseline in terms of relevance and diversity in most cases. In other words, at the same generation diversity (measured by both Distinct-2 and Self-BLEU), our models generate paraphrases that are more semantically similar to the input sentence. Note that in Figure FIGREF15 (a), the curve of the bilingual model crosses the baseline curve when relevance is around 0.71. We investigated the generated paraphrases around this point and found that the baseline actually achieves better relevance when Distinct-2 is at a high level ($>$0.3). This means our bilingual model drifts semantically faster than the baseline as Distinct-2 diversity increases. Round-trip translation performs two rounds of supervised translation, while zero-shot paraphrasing performs a single round of unsupervised `translation' (paraphrasing). We suspect that unsupervised paraphrasing is more sensitive to the decoding strategy. It also implies that the latent, language-agnostic representation may not be well learned in our bilingual model. Our multilingual model, on the other hand, alleviates this deficiency, as we verify and analyze below. ### Experiments ::: Automatic Evaluation ::: Multilingual Models
As mentioned above, our bilingual model can be unstable in some cases due to the lack of a well-learned language-agnostic semantic representation. A natural remedy is to introduce a multilingual corpus, which contains various translation directions. Training over a multilingual corpus forces the model to decouple the language type from the semantic representation. Empirical results show that our multilingual model performs significantly better than the bilingual model: the red and blue curves in Figure FIGREF15 (a)(b) demonstrate a clear improvement of the multilingual model over the bilingual one. In addition, the multilingual model also significantly outperforms the baseline in settings with reasonable relevance scores. ### Experiments ::: Automatic Evaluation ::: Denoising Auto-Encoder
To verify the effectiveness of DAE in our model, we conducted experiments with various hyper-parameters. We find that DAE works best when uniformly perturbing input sentences with probability 0.01, using only the Deletion and Reordering operations. We investigate DAE on both the bilingual and the multilingual model, as plotted in Figure FIGREF15 (c)(d), where curves with yellow circles represent models trained with DAE. The results in Figure FIGREF15 (c)(d) demonstrate positive effects of DAE on both the bilingual and the multilingual model. It is worth noting that, while DAE has only a marginal impact on the multilingual model, it improves the bilingual model significantly. This is evidence that DAE helps the model learn a more robust representation. More specifically, since Deletion forces the model to focus on sentence-level semantics rather than word-level meaning, and Reordering forces the model to focus on meaning rather than position, it becomes more difficult for the model to learn shortcuts (e.g., copying words). In other words, DAE improves the model's capability of extracting deep semantic representations, which has a similar effect to introducing multilingual data. ### Experiments ::: Automatic Evaluation ::: Monolingual Pre-Training
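The fluency evaluation described in this subsection scores generations with an n-gram language model. A toy bigram version with add-one smoothing is sketched below; the paper's model is trained on 14k public-domain books, for which the two-sentence corpus here is a stand-in.

```python
import math
from collections import Counter

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]  # stand-in corpus
unigrams = Counter(t for s in corpus for t in s)
bigrams = Counter((s[i], s[i + 1]) for s in corpus for i in range(len(s) - 1))
V = len(unigrams)  # vocabulary size, used for add-one smoothing

def log_prob(sentence):
    """Sum of smoothed bigram log-probabilities; higher means more fluent."""
    return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
               for a, b in zip(sentence, sentence[1:]))

# A fluent ordering scores higher than a scrambled one:
assert log_prob(["the", "cat", "sat"]) > log_prob(["cat", "the", "sat"])
```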
As shown in Figure FIGREF15 (a)(b), the model with language model pre-training performs almost identically to its counterpart without pre-training. However, evaluation of fluency uncovers the value of pre-training. We evaluate a group of models on our test set in terms of fluency, using an n-gram language model trained on 14k public-domain books. As shown in Table TABREF25, models with language model pre-training consistently achieve greater log-probabilities than the model without pre-training; that is, language model pre-training yields better fluency. ### Experiments ::: Human Evaluation
200 sentences are sampled from our test set for human evaluation. The evaluation guidance generally follows that of BIBREF5, but with the scoring range compressed from [1, 5] to [1, 4]. We recruit five human annotators to evaluate the models on semantic relevance and fluency. A test example consists of one input sentence, one sentence generated by the baseline model and one sentence generated by our model. We randomly permute each pair of generated sentences to reduce annotators' bias towards a particular model, and each example is evaluated by two annotators. As shown in Table TABREF28, our method outperforms the baseline significantly in both relevance and fluency. We further calculate the inter-annotator agreement (Cohen's kappa). Both round-trip translation and our method perform well in terms of fluency, but the large gap in relevance between the two systems drew our attention. We investigated the test set in detail and found that the round-trip approach indeed generates more noise, as shown in the case studies. ### Experiments ::: Case Studies
We further study some generated cases from the different models. All results in Table TABREF30 are generated over our test set using random sampling. For both the baseline and the multilingual model, we tune the sampling temperatures to control Distinct-2 and inverse Self-BLEU at 0.31 and 0.47, respectively. In the case studies, we find that our method usually generates sentences with better relevance to the source inputs, while the round-trip translation method sometimes runs into serious semantic drift. In the second case, our model demonstrates a desirable property: it keeps the meaning, and even the proper noun $guide$, unchanged, while modifying the source sentence by both changing and reordering words. This property may be introduced by the DAE perturbation strategies, which improve the model's robustness and diversity simultaneously. These results indicate that our method outperforms the baseline in both relevance and diversity. ### Related Work
Generating paraphrases with deep neural networks, especially Seq2Seq models, has become the mainstream approach. A majority of neural paraphrasing models try to improve generation quality and diversity using high-quality paraphrase corpora. BIBREF2 initiated the deep learning line of paraphrase generation by introducing a stacked residual LSTM network. A word-constraint model proposed by BIBREF3 improves both generation quality and diversity. BIBREF4 adopt a variational auto-encoder to further improve generation diversity. BIBREF5 utilize neural reinforcement learning and adversarial training to promote generation quality. BIBREF6 decompose paraphrase generation into phrase-level and sentence-level generation. Several works have tried to generate paraphrases from monolingual non-parallel or translation corpora. BIBREF28 exploit a Markov network model to extract paraphrase tables from a monolingual corpus. BIBREF29, BIBREF30 and BIBREF31 create paraphrase corpora by clustering and aligning paraphrases from crawled articles or headlines. With parallel translation corpora, pivoting approaches such as round-trip translation BIBREF7 and back-translation BIBREF32 have been explored. However, to the best of our knowledge, none of these paraphrase generation models has been trained directly on parallel translation corpora as a single-round, end-to-end model. ### Conclusions
In this work, we have proposed a Transformer-based model for zero-shot paraphrase generation, which can leverage huge amounts of off-the-shelf translation corpora. Moreover, we improve the generation fluency of our model with language model pre-training. Empirical results from both automatic and human evaluation demonstrate that our model surpasses conventional pivoting approaches in terms of relevance, diversity, fluency and efficiency. Nevertheless, there remain interesting directions to be explored, for instance, how to obtain a better latent semantic representation with multi-modal data, and how to further improve generation diversity without sacrificing relevance. We plan to tackle these challenging yet valuable problems in the future. Figure 1: Paraphrase generation via round-trip translation. Figure 2: Paraphrase generation via multilingual language model training. Table 1: Statistics of training data (#sentences). Table 3: Human evaluation results. Table 2: Log-probabilities of the generated sentences. √ and × symbols denote learning with or without pre-training respectively; bold font denotes greater values. Table 4: Case studies. For each input source, we randomly sample three paraphrases for comparison.
CAPTAIN CHAOS By D. ALLEN MORRISSEY Science equipped David Corbin with borrowed time; sent him winging out in a state of suspension to future centuries ... to a dark blue world whose only defense was to seal tight the prying minds of foolish interlopers. [Transcriber's Note: This etext was produced from Planet Stories November 1952. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] I heard the voice as I opened my eyes. I was lying down, still not aware of where I was, waiting for the voice. "Your name is David Corbin. Do you understand?" I looked in the direction of the sound. Above my feet a bulkhead loomed. There were round dials set in a row above a speaker. Over the mesh-covered speaker, two knobs glowed red. I ran the words over in my sluggish mind, thinking about an answer. The muscles in my throat tightened up in reflex as I tried to bring some unity into the jumble of thoughts and ideas that kept forming. One word formed out of the rush of anxiety. "No." I shouted a protest against the strangeness of the room. I looked to the right, my eyes following the curving ceiling that started at the cot. The curve met another straight bulkhead on the left. I was in a small room, gray in color, like dull metal. Overhead a bright light burned into my vision. I wondered where in the universe I was. "Your name is David Corbin. If you understand, press button A on your right." I stared at the speaker in the wall. The mesh-covered hole and the two lights looked like a caricature of a face, set in a panel of dials. I twisted my head to look for the button. I pushed away from the close wall but I couldn't move. I reached down to the tightness that held my body, found the wide strap that held me and fumbled with the buckle. I threw it off and pushed myself up from the hard cot. I heard myself yell in surprise as I floated up towards the light overhead. I was weightless. 
How do you describe being weightless when you are born into a world bound by gravity. I twisted and shut my eyes in terror. There was no sensation of place, no feeling of up or down, no direction. My back bumped against the ceiling and I opened my eyes to stare at the cot and floor. I was concentrating too hard on remembering to be frightened for long. I pushed away from the warm metal and the floor moved up to meet me. "If you understand, press button A on your right." What should I understand? That I was floating in a room that had a curved wall ... that nothing was right in this hostile room? When I reached the cot I held it and drew myself down. I glanced at the planes of the room, trying to place it with other rooms I could see in my mind. Gray walls with a crazy curved ceiling ... a door to my left that appeared to be air tight. I stared at my familiar hands. I rubbed them across my face, feeling the solidity of flesh and bone, afraid to think too hard about myself. "My name ... my name is...." "Your name is David Corbin." I stared at the speaker. How long did this go on? The name meant nothing to me, but I thought about it, watching the relentless lights that shone below the dials. I stood up slowly and looked at myself. I was naked except for heavy shorts, and there was no clue to my name in the pockets. The room was warm and the air I had been breathing was good but it seemed wrong to be dressed like this. I didn't know why. I thought about insanity, and the room seemed to fit my thoughts. When the voice repeated the message again I had to act. Walking was like treading water that couldn't be seen or felt. I floated against the door, twisting the handle in fear that it wouldn't turn. The handle clanged as I pushed it down and I stared at the opposite wall of a narrow gray passageway. I pushed out into it and grasped the metal rail that ran along the wall. I reasoned it was there to propel yourself through the passageway in this weightless atmosphere. 
It was effortless to move. I turned on my side like a swimmer and went hand over hand, shooting down the corridor. I braced against forward motion and stopped against a door at the end. Behind me I could see the opened door I had left, and the thought of that questioning voice made me want to move. I swung the door open, catching a glimpse of a room crowded with equipment and.... I will always remember the scream of terror, the paralyzing fright of what I saw through the portholes in the wall of the room. I saw the blackest night, pierced by brilliance that blinded me. There was no depth to the searing brightness of countless stars. They seemed to press against the glass, blobs of fire against a black curtain burning into my eyes and brain. It was space. I looked out at deep space, star systems in clusters. I shut my eyes. When I looked again I knew where I was. Why the little room had been shaped like quarter round. Why I drifted weightlessly. Why I was.... David Corbin. I knew more of the puzzle. Something was wrong. After the first shock of looking out, I accepted the fact that I was in a space ship, yet I couldn't read the maps that were fastened to a table, nor understand the function or design of the compact machinery. WHY, Why, Why? The thought kept pounding at me. I was afraid to touch anything in the room. I pressed against the clear window, wondering if the stars were familiar. I had a brief vivid picture of a night sky on Earth. This was not the same sky. Back in the room where I had awakened, I touched the panel with the glowing eyes. It had asked me if I understood. Now it must tell me why I didn't. It had to help me, that flat metallic voice that repeated the same words. It must tell me.... "Your name is David Corbin. If you understand, press button A on your right." I pressed the button by the cot. The red lights blinked out as I stood in patient attention, trying to outguess the voice. I recalled a phrase ... some words about precaution. 
Precaution against forgetting. It was crazy, but I trusted the panel. It was the only thing I saw that could help me, guard me against another shock like seeing outside of the clear portholes. "It is assumed the experiment is a success," the voice said. What experiment? "You have been removed from suspension. Assume manual control of this ship." Control of a ship? Going where? "Do not begin operations until the others are removed from suspension." What others? Tell me what to do. "Rely on instructions for factoring when you check the coordinates. Your maximum deviation from schedule cannot exceed two degrees. Adopt emergency procedures as you see fit. Good luck." The voice snapped off and I laughed hysterically. None of it had made sense, and I cursed whatever madness had put me here. "Tell me what to do," I shouted wildly. I hammered the hard metal until the pain in my hands made me stop. "I can't remember what to do." I held my bruised hands to my mouth, and I knew that was all the message there was. In blind panic I pushed away from the panel. Something tripped me and I fell back in a graceless arc. I pushed away from the floor, barely feeling the pain in my leg, and went into the hall. Pain burned along my leg but I couldn't stop. In the first panic of waking up in strangeness I had missed the other doors in the passage. The first swung back to reveal a deep closet holding five bulky suits. The second room was like my own. A dark haired, deep chested man lay on the cot. His muscular body was secured by a wide belt. He was as still as death, motionless without warmth or breath as I hovered over him. I couldn't remember his face. The next room held another man. He was young and wiry, like an athlete cast in marble, dark haired and big jawed. A glassy eye stared up when I rolled back his eyelid. The eyelid remained open until I closed it and went on. Another room ... another man ... another stranger. 
This man was tall and raw boned, light of skin and hair, as dead as the others. A flat, illogical voice had instructed me to revive these men. I shivered in spite of the warmth of the room, studying the black box that squatted on a shelf by his head. My hand shook when I touched the metal. I dared not try to operate anything. Revive the others ... instructions without knowledge were useless to me. I stopped looking into the doors in the passageway and went back to the room with the portholes. Everything lay in readiness, fastened down star charts, instruments, glittering equipment. There was no feeling of disorder or use in the room. It waited for human hands to make it operate. Not mine. Not now. I went past the room into another, where the curves were more sharp. I could visualize the tapering hull leading to the nose of the ship. This room was filled with equipment that formed a room out of the bordered area I stood in. I sat in the deep chair facing the panel of dials and instruments, in easy reach. I ran my hands over the dials, the rows of smooth colored buttons, wondering. The ports on the side were shielded and I stared out at static energy, hung motionless in a world of searing light. There was no distortion, no movement outside and I glanced back at the dials. What speeds were they recording? What speeds and perhaps, what distance? It was useless to translate the markings. They stood for anything I might guess, and something kept pricking my mind, telling me I had no time to guess. I thought of time again. I was supposed to act according to ... plan. Did that mean ... in time ... in time. I went back down the passageway. The fourth small room was the same. Except for the woman. She lay on a cot, young and beautiful, even in the death-like immobility I had come to accept. Her beauty was graceful lines of face and her figure—smooth tapering legs, soft curves that were carved out of flesh colored stone. Yet not stone. 
I held her small hand, then put it back on the cot. Her attire was brief like the rest of us, shorts and a man's shirt. Golden hair curled up around her lovely face. I wondered if she would ever smile or move that graceful head. I rolled back her eyelid and looked at a deep blue eye that stared back in glassy surprise. Four people in all, depending on a blind helpless fool who didn't know their names or the reason for that dependence. I sat beside her on the cot until I could stand it no longer. Searching the ship made me forget my fear. I hoped I would find some answers. I went from the nose to the last bulkhead in a frenzy of floating motion, looking behind each door until I went as far as I could. There were two levels to the ship. They both ended in the lead shield that was set where the swell of the curve was biggest. It meant the engine or engines took up half the ship, cut off from the forward half by the instrument studded shield. I retraced my steps and took a rough estimate of size. The ship, as I called it, was at least four hundred feet long, fifty feet in diameter on the inside. The silence was a force in itself, pressing down from the metal walls, driving me back to the comforting smallness of the room where I had been reborn. I laughed bitterly, thinking about the aptness of that. I had literally been reborn in this room, equipped with half ideas, and no point to start from, no premise to seek. I sensed the place to start from was back in the room. I searched it carefully. Minutes later I realized the apparatus by the cot was different. It was the same type of black box, but out from it was a metal arm, bent in a funny angle. At the tip of the arm, a needle gleamed dully and I rubbed the deep gash on my leg. I bent the arm back until the angle looked right. It was then I realized the needle came to a spot where it could have hit my neck when I lay down. My shout of excitement rang out in the room, as I pictured the action of the extended arm. 
I lost my sudden elation in the cabin where the girl lay. The box behind her head was completely closed, and it didn't yield to the pressure I applied. It had a cover, but no other opening where an arm could extend. I ran my fingers over the unbroken surface, prying over the thin crack at the base helplessly. If some sort of antidote was to be administered manually I was lost. I had no knowledge of what to inject or where to look for it. The chamber of the needle that had awakened me was empty. That meant a measured amount. In the laboratory on the lower level I went over the rows of cans and tubes fastened to the shelves. There were earths and minerals, seeds and chemicals, testing equipment in compact drawers, but nothing marked for me. I wondered if I was an engineer or a pilot, or perhaps a doctor sent along to safeguard the others. Complete amnesia would have been terrible enough but this half knowledge, part awareness and association with the ship was a frightening force that seemed ready to break out of me. I went back to the cabin where the powerful man lay. I had to risk failure with one of them. I didn't want it to be the girl. I fought down the thought that he might be the key man, remembering the voice that had given the message. It was up to me, and soon. The metal in the box would have withstood a bullet. It couldn't be pried apart, and I searched again and again for a release mechanism. I found it. I swung the massive cover off and set it down. The equipment waited for the touch of a button and it went into operation. I stepped back as the tubes glowed to life and the arm swung down with the gleaming needle. The needle went into the corded neck of the man. The fluid chamber drained under pressure and the arm moved back. I stood by the man for long minutes. Finally it came. He stirred restlessly, closing his hands into fists. The deep chest rose and fell unevenly as he breathed. Finally the eyes opened and he looked at me. 
I watched him adjust to the room. It was in his eyes, wide at first, moving about the confines of the room back to me. "It looks like we made it," he said. "Yes." He unfastened the belt and sat up. I pushed him back as he floated up finding little humor in the comic expression on his face. "No gravity," he grunted and sat back. "You get used to it fast," I answered. I thought of what to say as he watched me. "How do you feel?" He shrugged at the question. "Fine, I guess. Funny, I can't remember." He saw it in my face, making him stop. "I can't remember dropping off to sleep," he finished. I held his hard arm. "What else? How much do you remember?" "I'm all right," he answered. "There aren't supposed to be any effects from this." "Who is in charge of this ship?" I asked. He tensed suddenly. "You are, sir. Why?" I moved away from the cot. "Listen, I can't remember. I don't know your name or anything about this ship." "What do you mean? What can't you remember?" he asked. He stood up slowly, edging around towards the door. I didn't want to fight him. I wanted him to understand. "Look, I'm in trouble. Nothing fits, except my name." "You don't know me?" "No." "Are you serious?" "Yes, yes. I don't know why but it's happened." He let his breath out in a whistle. "For God's sake. Any bump on your head?" "I feel all right physically. I just can't place enough." "The others. What about the others?" he blurted. "I don't know. You're the first besides myself. I don't know how I stumbled on the way to revive you." He shook his head, watching me like I was a freak. "Let's check the rest right away." "Yes. I've got to know if they are like me. I'm afraid to think they might be." "Maybe it's temporary. We can figure something out." II The second man, the dark haired one, opened his eyes and recognized us. He asked questions in rapid fire excitement. The third man, the tall Viking, was all right until he moved. The weightless sensation made him violently sick. 
We put him back on the cot, securing him again with the belt, but the sight of us floating made him shake. He was retching without results when we drifted out. I followed him to the girl's quarters. "What about her. Why is she here?" I asked my companion. He lifted the cover from the apparatus. "She's the chemist in the crew." "A girl?" "Dr. Thiesen is an expert, trained for this," he said. I looked at her. She looked anything but like a chemist. "There must be men who could have been sent. I've been wondering why a girl." "I don't know why, Captain. You tried to stop her before. Age and experience were all that mattered to the brass." "It's a bad thing to do." "I suppose. The mission stated one chemist." "What is the mission of this ship?" I asked. He held up his hand. "We'd better wait, sir. Everything was supposed to be all right on this end. First you, then Carl, sick to his stomach." "Okay. I'll hold the questions until we see about her." We were out of luck with the girl. She woke up and she was frightened. We questioned her and she was coherent but she couldn't remember. I tried to smile as I sat on the cot, wondering what she was thinking. "How do you feel?" I asked. Her face was a mask of wide-eyed fear as she shook her head. "Can you remember?" "I don't know." Blue eyes stared at me in fear. Her voice was low. "Do you know my name?" The question frightened her. "Should I? I feel so strange. Give me a minute to think." I let her sit up slowly. "Do you know your name?" She tightened up in my arms. "Yes. It's...." She looked at us for help, frightened by the lack of clothing we wore, by the bleak room. Her eyes circled the room. "I'm afraid," she cried. I held her and she shook uncontrollably. "What's happened to me?" she asked. The dark haired man came into the room, silent and watchful. My companion motioned to him. "Get Carl and meet us in Control." The man looked at me and I nodded. "We'll be there in a moment. I'm afraid we've got trouble." 
He nodded and pushed away from us. The girl screamed and covered her face with her hands. I turned to the other man. "What's your name?" "Croft. John Croft." "John, what are your duties if any?" "Automatic control. I helped to install it." "Can you run this ship? How about the other two?" He hit his hands together. "You fly it, sir. Can't you think?" "I'm trying. I know the ship is familiar, but I've looked it over. Maybe I'm trying too hard." "You flew her from earth until we went into suspension," he said. "I can't remember when," I said. I held the trembling girl against me, shaking my head. He glanced at the girl. "If the calculations are right it was more than a hundred years ago." We assembled in the control room for a council. We were all a little better for being together. John Croft named the others for me. I searched each face without recognition. The blond man was Carl Herrick, a metallurgist. His lean face was white from his spell but he was better. Paul Sample was a biologist, John said. He was lithe and restless, with dark eyes that studied the rest of us. I looked at the girl. She was staring out of the ports, her hands pressed against the transparent break in the smooth wall. Karen Thiesen was a chemist, now frightened and trying to remember. I wasn't in much better condition. "Look, if it comes too fast for me, for any of us, we'll stop. John, you can lead off." "You ask the questions," he said. I indicated the ship. "Where in creation are we going?" "We set out from Earth for a single star in the direction of the center of our Galaxy." "From Earth? How could we?" "Let's move slowly, sir," he said. "We're moving fast. I don't know if you can picture it, but we're going about one hundred thousand miles an hour." "Through space?" "Yes." "What direction?" Paul cut in. "It's a G type star, like our own sun in mass and luminosity. We hope to find a planetary system capable of supporting life." "I can't grasp it. How can we go very far in a lifetime?" 
"It can be done in two lifetimes," John said quietly. "You said I had flown this ship. You meant before this suspension." "Yes. That's why we can cross space to a near star." "How long ago was it?" "It was set at about a hundred years, sir. Doesn't that fit at all?" "I can't believe it's possible." Carl caught my eye. "Captain, we save this time without aging at all. It puts us near a calculated destination." "We've lost our lifetime." It was Karen. She had been crying silently while we talked. "Don't think about it," Paul said. "We can still pull this out all right if you don't lose your nerve." "What are we to do?" she asked. John answered for me. "First we've got to find out where we are. I know this ship but I can't fly it." "Can I?" I asked. We set up a temporary plan of action. Paul took Karen to the laboratory in an effort to help her remember her job. Carl went back to divide the rations. I was to study the charts and manuals. It was better than doing nothing, and I went into the navigation room and sat down. Earth was an infinitesimal point somewhere behind us on the galactic plane, and no one else was trained to navigate. The ship thundered to life as I sat there. The blast roared once ... twice, then settled into a muted crescendo of sound that hummed through the walls. I went into the control room and watched John at the panel. "I wish I knew what you were doing," I said savagely. "Give it time." "We can't spare any, can we?" I asked. "I wish we knew. What about her—Dr. Thiesen?" "She's in the lab. I don't think that will do much good. She's got to be shocked out of a mental state like that." "I guess you're right," he said slowly. "She's trained to administer the suspension on the return trip." I let my breath out slowly. "I didn't think about that." "We couldn't even get part way back in a lifetime," he said. "How old are you, John?" "Twenty-eight." "What about me?" "Thirty." He stared at the panel in thought for a minute.
"What about shock treatment? It sounds risky." "I know. It's the only thing I could think of. Why didn't everyone react the same?" "That had me wondering for a while. I don't know. Anyway how could you go about making her remember?" "Throw a crisis, some situation at her, I guess." He shrugged, letting his sure hands rest on the panel of dials. I headed back towards the lab. If I could help her I might help myself. I was past the rooms when the horn blasted through the corridor. I turned automatically with the sound, pushing against the rail, towards the control room. Deep in my mind I could see danger, and without questioning why I knew I had to be at Control when the sound knifed through the stillness. John was shouting as I thrust my way into the room. "Turn the ship. There's something dead ahead." I had a glimpse of his contorted face as I dove at the control board. My hands hit buttons, thumbed a switch and then a sudden force threw me to the right. I slammed into the panel on the right, as the pressure of the change dimmed my vision. Reflex made me look up at the radar control screen. It wasn't operating. John let go of the padded chair, grinning weakly. I was busy for a few seconds, feeding compensation into the gyros. Relief flooded through me like warm liquid. I hung on the intercom for support, drawing air into my heaving lungs. "What—made you—think of that," I asked weakly. "Shock treatment." "I must have acted on instinct." "You did. Even for a sick man that was pretty fast," he laughed. "I can think again, John. I know who I am," I shouted. I threw my arms around his massive shoulders. "You did it." "You gave me the idea, Mister, talking about Dr. Thiesen." "It worked. I'm okay," I said in giddy relief. "I don't have to tell you I was scared as hell. I wish you could have seen your face, the look in your eyes when I woke up." "I wouldn't want to wake up like that again." "You're all right now?" he asked. I grinned and nodded an answer. 
I saw John as he was at the base, big and competent, sweating in the blazing sun. I thought about the rest of the crew too. "We're heading right for a star...." "It's been dead ahead for hours," he grunted. I leaned over and threw the intercom to open. "This is control. Listen ... everyone. I'm over it. Disregard the warning siren ... we were testing the ship." The lab light blinked on as Paul cut in. "What was it ... hey, you said you're all right." "John did it. He hit the alarm figuring I would react. Listen, Paul. Is any one hurt?" "No. Carl is here too. His stomach flopped again but he's okay. What about food. We're supposed to be checked before we eat." "We'll have to go ahead without it. Any change?" "No, I put her to bed. Shall I bring food?" I glanced at John. He rubbed his stomach. "Yes," I answered. "Bring it when you can. I've got to find out where we are." We had to get off course before we ran into the yellow-white star that had been picked for us. Food was set down by me, grew cold and was carried away and I was still rechecking the figures. We were on a line ten degrees above the galactic plane. The parallactic baseline from Earth to the single star could be in error several degrees, or we could be right on the calculated position of the star. The radar confirmed my findings ... and my worst fears. When we set it for direction and distance, the screen glowed to life and recorded the star dead ahead. In all the distant star clusters, only this G type star was thought to have a planetary system like our own. We were out on a gamble to find a planet capable of supporting life. The idea had intrigued scientists before I had first looked up at the night sky. When I was sure the electronically recorded course was accurate for time, I checked direction and speed from the readings and plotted our position. If I was right we were much closer than we wanted to be. 
The bright pips on the screen gave us the distance and size of the star while we fed the figures into the calculator for our rate of approach. Spectroscopic tests were run on the sun and checked against the figures that had been calculated on Earth. We analyzed temperature, magnetic fields, radial motion, density and luminosity, checking against the standards the scientists had constructed. It was a G type star like our own. It had more density and temperature. Suitable planets or not, we had to change course in a hurry. Carl analyzed the findings while we came to a decision. Somewhere along an orbit that might be two hundred million miles across, our hypothetical planet circled this star. That distance was selected when the planets in Earth's solar system had proved to be barren. If the observations on this star were correct, we could expect to find a planet in a state of fertility ... if it existed ... if it were suitable for colonization ... if we could find it.
|
B. No, they were over by 8 degrees
|
What, according to the film reviewer, is Zaillian's strength in "A Civil Action"?
A. Staying true to the real story's timeline
B. Dramatic monologues
C. Intercutting cinematography
D. Casting excellent actors and actresses
|
War and Pieces No movie in the last decade has succeeded in psyching out critics and audiences as fully as the powerful, rambling war epic The Thin Red Line , Terrence Malick's return to cinema after 20 years. I've sat through it twice and am still trying to sort out my responses, which run from awe to mockery and back. Like Saving Private Ryan , the picture wallops you in the gut with brilliant, splattery battle montages and Goyaesque images of hell on earth. But Malick, a certified intellectual and the Pynchonesque figure who directed Badlands and Days of Heaven in the 1970s and then disappeared, is in a different philosophical universe from Steven Spielberg. Post-carnage, his sundry characters philosophize about their experiences in drowsy, runic voice-overs that come at you like slow bean balls: "Why does nature vie with itself? ... Is there an avenging power in nature, not one power but two?" Or "This great evil: Where's it come from? What seed, what root did it grow from? Who's doin' this? Who's killin' us, robbin' us of life and light?" First you get walloped with viscera, then you get beaned by blather. Those existential speculations don't derive from the screenplay's source, an archetypal but otherwise down-to-earth 1962 novel by James Jones (who also wrote From Here to Eternity ) about the American invasion of the South Pacific island of Guadalcanal. They're central to Malick's vision of the story, however, and not specious. In the combat genre, the phrase "war is hell" usually means nothing more than that it's a bummer to lose a limb or two, or to see your buddy get his head blown off. A true work of art owes us more than literal horrors, and Malick obliges by making his theater of war the setting for nothing less than a meditation on the existence of God. 
He tells the story solemnly, in three parts, with a big-deal cast (Sean Penn, Nick Nolte, John Cusack) and a few other major stars (John Travolta, Woody Harrelson, George Clooney) dropping by for cameos. After an Edenic prelude, in which a boyishly idealistic absent without leave soldier, Pvt. Witt (Jim Caviezel), swims with native youths to the accompaniment of a heavenly children's choir, the first part sees the arrival of the Allied forces on the island, introduces the principal characters (none of whom amounts to a genuine protagonist), and lays out the movie's geographical and philosophical terrain. The centerpiece--the fighting--goes on for over an hour and features the most frantic and harrowing sequences, chiefly the company's initially unsuccessful frontal assault on a Japanese hilltop bunker. The coda lasts nearly 40 minutes and is mostly talk and cleanup, the rhythms growing more relaxed until a final, incongruous spasm of violence--whereupon the surviving soldiers pack their gear and motor off to another South Pacific battle. In the final shot, a twisted tree grows on the waterline of the beach, the cycle of life beginning anew. The Thin Red Line has a curious sound-scape, as the noise of battle frequently recedes to make room for interior monologues and Hans Zimmer's bump-bump, minimalist New Age music. Pvt. Bell (Ben Chaplin) talks to his curvy, redheaded wife, viewed in deliriously sensual flashbacks. ("Love: Where does it come from? Who lit this flame in us?") Lt. Col. Tall (Nolte), a borderline lunatic passed over one too many times for promotion and itching to win a battle no matter what the human cost, worries groggily about how his men perceive him. The dreamer Witt poses folksy questions about whether we're all a part of one big soul. If the movie has a spine, it's his off-and-on dialogue with Sgt. Welsh (Penn), who's increasingly irritated by the private's beatific, almost Billy Budd-like optimism. 
Says Welsh, "In this world, a man himself is nothin', and there ain't no world but this one." Replies Witt, high cheekbones glinting, "I seen another world." At first it seems as if Witt will indeed be Billy Budd to Welsh's vindictive Claggart. But if Witt is ultimately an ethereal martyr, Welsh turns out to be a Bogart-like romantic who can't stop feeling pain in the face of an absent God. He speaks the movie's epitaph, "Darkness and light, strife and love: Are they the workings of one mind, the feature of the same face? O my soul, let me be in you now. Look out through my eyes. Look out at the things you made, all things shining." Malick puts a lot of shining things on the screen: soldiers, natives, parrots, bats, rodents, visions of Eden by way of National Geographic and of the Fall by way of Alpo. Malick's conception of consciousness distributes it among the animate and inanimate alike; almost every object is held up for rapturous contemplation. I could cite hundreds of images: A soldier in a rocking boat hovers over a letter he's writing, which is crammed from top to bottom and side to side with script. (You don't know the man, but you can feel in an instant his need to cram everything in.) A small, white-bearded Melanesian man strolls nonchalantly past a platoon of tensely trudging grunts who can't believe they're encountering this instead of a hail of Japanese bullets. Two shots bring down the first pair of soldiers to advance on the hill; a second later, the sun plays mystically over the tall, yellow grass that has swallowed their bodies. John Toll's camera rushes in on a captured Japanese garrison: One Japanese soldier shrieks; another, skeletal, laughs and laughs; a third weeps over a dying comrade. The face of a Japanese soldier encased in earth speaks from the dead, "Are you righteous? Know that I was, too." Whether or not these pearllike epiphanies are strung is another matter. 
Malick throws out his overarching theme--is nature two-sided, at war with itself?--in the first few minutes but, for all his startling juxtapositions, he never dramatizes it with anything approaching the clarity of, say, Brian De Palma's Casualties of War (1989). Besides the dialogue between Welsh and Witt, The Thin Red Line 's other organizing story involves a wrenching tug of war between Nolte's ambition-crazed Tall and Capt. Staros (Elias Koteas), who refuses an order to send his men on what will surely be a suicidal--and futile--assault on a bunker. But matters of cause and effect don't really interest Malick. Individual acts of conscience can and do save lives, and heroism can win a war or a battle, he acknowledges. But Staros is ultimately sent packing, and Malick never bothers to trace the effect of his action on the Guadalcanal operation. In fact, the entire battle seems to take place in a crazed void. Tall quotes Homer's "rosy-fingered dawn" and orders a meaningless bombardment to "buck the men up--it'll look like the Japs are catching hell." Soldiers shoot at hazy figures, unsure whether they're Japanese or American. Men collide, blow themselves in half with their own mishandled grenades, stab themselves frantically with morphine needles, shove cigarettes up their noses to keep the stench of the dying and the dead at bay. A tiny bird, mortally wounded, flutters in the grass. Malick is convincing--at times overwhelming--on the subject of chaos. It's when he tries to ruminate on order that he gets gummed up, retreating to one of his gaseous multiple mouthpieces: "Where is it that we were together? Who is it that I lived with? Walked with? The brother. ... The friend. ... One mind." I think I'd have an easier time with Malick's metaphysical speculations if I had a sense of some concomitant geopolitical ones--central to any larger musings on forces of nature as viewed through the prism of war. 
Couldn't it be that the German and Japanese fascist orders were profoundly anti-natural, and that the Allies' cause was part of a violent but natural correction? You don't have to buy into Spielberg's Lincolnesque pieties in Saving Private Ryan to believe that there's a difference between World War II and Vietnam (or, for that matter, World War II and the invasion of Grenada or our spats with Iraq). While he was at Harvard, Malick might have peeled himself off the lap of his pointy-headed mentor, Stanley Cavell, the philosopher and film theorist, and checked out a few of Michael Walzer's lectures on just and unjust wars. Maybe then he'd view Guadalcanal not in an absurdist vacuum (the soldiers come, they kill and are killed, they leave) but in the larger context of a war that was among the most rational (in its aims, if not its methods) fought in the last several centuries. For all his visionary filmmaking, Malick's Zen neutrality sometimes seems like a cultivated--and pretentious--brand of fatuousness. John Travolta's empty nightclub impersonation of Bill Clinton in Primary Colors (1998) had one positive result: It gave him a jump-start on Jan Schlichtmann, the reckless personal injury lawyer at the center of A Civil Action. Travolta's Schlichtmann is much more redolent of Clinton: slick and selfish and corrupt in lots of ways but basically on the side of the angels, too proud and arrogant to change tactics when all is certainly lost. Schlichtmann pursued--and more or less blew--a civil liability case against the corporate giants Beatrice and W.R. Grace over the allegedly carcinogenic water supply of Woburn, Mass. Boston writer Jonathan Harr, in the book the movie is based on, went beyond the poison in the Woburn wells to evoke (stopping just short of libel) the poison of the civil courts, where platoons of overpaid corporate lawyers can drive opponents with pockets less deep and psyches less stable into bankruptcy and hysteria.
Director Steven Zaillian's version doesn't capture the mounting rage that one experiences while reading Harr's book, or even the juicy legal machinations that Francis Ford Coppola giddily manipulated in his underrated adaptation of John Grisham's The Rainmaker (1997). But A Civil Action is a sturdy piece of work, an old-fashioned conversion narrative with some high-tech zip. Schlichtmann doesn't take this "orphan" case--brought by the parents of several children who died of leukemia--because he wants to do good but because he figures that Grace and Beatrice will fork over huge sums of money to keep the parents from testifying publicly about their children's last days. He might succeed, too, if it weren't for Jerome Facher (Robert Duvall), the Beatrice lawyer who knows how to keep Schlichtmann shadowboxing while his small firm's financial resources dwindle to nothing. Zaillian is at his most assured when he cuts back and forth between Facher's Harvard Law School lectures on what not to do in court and Schlichtmann's fumbling prosecution. The sequence has the extra dimension of good journalism: It dramatizes and comments simultaneously. Plus, it gives Duvall a splendid platform for impish understatement. (Duvall has become more fun to watch than just about anyone in movies.) Elsewhere, Zaillian takes a more surface approach, sticking to legal minutiae and rarely digging for the deeper evil. As in his Searching for Bobby Fischer (1993), the outcome of every scene is predictable, but how Zaillian gets from beat to beat is surprisingly fresh. He also gets sterling bit performances from Sydney Pollack as the spookily sanguine Grace CEO, William H. Macy as Schlichtmann's rabbity accountant, and Kathleen Quinlan as the mother of one of the victims. Quinlan knows that when you're playing a woman who has lost a child you don't need to emote--you reveal the emotion by trying not to emote. 
To the families involved in the Woburn tragedy, the real climax of this story isn't the downbeat ending of the book or the sleight of hand, "let's call the Environmental Protection Agency," upbeat ending of the movie. The climax is the publication of a book that takes the plaintiffs' side and that remains on the best-seller list in hardcover and paperback for years. The climax is the movie starring John Travolta. Beatrice and Grace made out OK legally, but some of us will never use their products again without thinking about Travolta losing his shirt in the name of those wasted-away little kids.
|
C. Intercutting cinematography
|
Why did Maggie not travel with her husband, Jacob, while on his missions?
A. Jacob didn't think women should be in unexplored space.
B. She feared space exploration.
C. She was to be searching for an astrogator.
D. Maggie didn't think women should be in unexplored space.
|
A Coffin for Jacob By EDWARD W. LUDWIG Illustrated by EMSH [Transcriber's Note: This etext was produced from Galaxy Science Fiction May 1956. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] With never a moment to rest, the pursuit through space felt like a game of hounds and hares ... or was it follow the leader? Ben Curtis eased his pale, gaunt body through the open doorway of the Blast Inn, the dead man following silently behind him. His fear-borne gaze traveled into the dimly illumined Venusian gin mill. The place was like an evil caldron steaming with a brew whose ingredients had been culled from the back corners of three planets. Most of the big room lay obscured behind a shimmering veil of tobacco smoke and the sweet, heavy fumes of Martian Devil's Egg. Here and there, Ben saw moving figures. He could not tell if they were Earthmen, Martians or Venusians. Someone tugged at his greasy coat. He jumped, thinking absurdly that it was the dead man's hand. " Coma esta, senor? " a small voice piped. " Speken die Deutsch? Desirez-vous d'amour? Da? Nyet? " Ben looked down. The speaker was an eager-eyed Martian boy of about ten. He was like a red-skinned marionette with pipestem arms and legs, clad in a torn skivvy shirt and faded blue dungarees. "I'm American," Ben muttered. "Ah, buena ! I speak English tres fine, senor . I have Martian friend, she tres pretty and tres fat. She weigh almost eighty pounds, monsieur . I take you to her, si ?" Ben shook his head. He thought, I don't want your Martian wench. I don't want your opium or your Devil's Egg or your Venusian kali. But if you had a drug that'd bring a dead man to life, I'd buy and pay with my soul. "It is deal, monsieur ? Five dollars or twenty keelis for visit Martian friend. Maybe you like House of Dreams. For House of Dreams—" "I'm not buying." The dirty-faced kid shrugged. "Then I show you to good table,— tres bien . I do not charge you, senor ." 
The boy grabbed his hand. Because Ben could think of no reason for resisting, he followed. They plunged into shifting layers of smoke and through the drone of alcohol-cracked voices. They passed the bar with its line of lean-featured, slit-eyed Earthmen—merchant spacemen. They wormed down a narrow aisle flanked by booths carved from Venusian marble that jutted up into the semi-darkness like fog-blanketed tombstones. Several times, Ben glimpsed the bulky figures of CO2-breathing Venusians, the first he'd ever seen. They were smoky gray, scaly, naked giants, toads in human shape. They stood solitary and motionless, aloof, their green-lidded eyes unblinking. They certainly didn't look like telepaths, as Ben had heard they were, but the thought sent a fresh rivulet of fear down his spine. Once he spied a white-uniformed officer of Hoover City's Security Police. The man was striding down an aisle, idly tapping his neuro-club against the stone booths. Keep walking, Ben told himself. You look the same as anyone else here. Keep walking. Look straight ahead. The officer passed. Ben breathed easier. "Here we are, monsieur," piped the Martian boy. "A tres fine table. Close in the shadows." Ben winced. How did this kid know he wanted to sit in the shadows? Frowning, he sat down—he and the dead man. He listened to the lonely rhythms of the four-piece Martian orchestra. The Martians were fragile, doll-like creatures with heads too large for their spindly bodies. Their long fingers played upon the strings of their cirillas or crawled over the holes of their flutes like spider legs. Their tune was sad. Even when they played an Earth tune, it still seemed a song of old Mars, charged with echoes of lost voices and forgotten grandeur. For an instant, Ben's mind rose above the haunting vision of the dead man. He thought, What are they doing here, these Martians? Here, in a smoke-filled room under a metalite dome on a dust-covered world?
Couldn't they have played their music on Mars? Or had they, like me, felt the challenge of new worlds? He sobered. It didn't matter. He ordered a whiskey from a Chinese waiter. He wet his lips but did not drink. His gaze wandered over the faces of the Inn's other occupants. You've got to find him , he thought. You've got to find the man with the red beard. It's the only way you can escape the dead man. The dead man was real. His name was Cobb. He was stout and flabby and about forty and he hated spacemen. His body was buried now—probably in the silent gray wastes outside Luna City. But he'd become a kind of invisible Siamese twin, as much a part of Ben as sight in his eyes. Sometimes the image would be shuffling drunkenly beside him, its lips spitting whiskey-slurred curses. Again, its face would be a pop-eyed mask of surprise as Ben's fist thudded into its jaw. More often, the face would be frozen in the whiteness of death. The large eyes would stare. Blood would trickle from a corner of the gaping mouth. You can forget a living man. You can defeat him or submit to him or ignore him, and the matter is over and done. You can't escape from a memory that has burned into your mind. It had begun a week ago in Luna City. The flight from White Sands had been successful. Ben, quietly and moderately, wanted to celebrate. He stopped alone in a rocketfront bar for a beer. The man named Cobb plopped his portly and unsteady posterior on the stool next to him. "Spacemen," he muttered, "are getting like flies. Everywhere, all you see's spacemen." He was a neatly dressed civilian. Ben smiled. "If it weren't for spacemen, you wouldn't be here." "The name's Cobb." The man hiccoughed. "Spacemen in their white monkey suits. They think they're little tin gods. Betcha you think you're a little tin god." He downed a shot of whiskey. Ben stiffened. He was twenty-four and dressed in the white, crimson-braided uniform of the Odyssey's junior astrogation officer. 
He was three months out of the Academy at White Sands and the shining uniform was like a key to all the mysteries of the Universe. He'd sought long for that key. At the age of five—perhaps in order to dull the memory of his parents' death in a recent strato-jet crash—he'd spent hours watching the night sky for streaking flame-tails of Moon rockets. At ten, he'd ground his first telescope. At fourteen, he'd converted an abandoned shed on the government boarding-school grounds to a retreat which housed his collection of astronomy and rocketry books. At sixteen, he'd spent every weekend holiday hitchhiking from Boys Town No. 5 in the Catskills to Long Island Spaceport. There, among the grizzled veterans of the old Moon Patrol, he'd found friends who understood his dream and who later recommended his appointment to the U. S. Academy for the Conquest of Space. And a month ago, he'd signed aboard the Odyssey—the first ship, it was rumored, equipped to venture as far as the asteroids and perhaps beyond. Cobb was persistent: "Damn fools shoulda known enough to stay on Earth. What the hell good is it, jumpin' from planet to planet?" The guy's drunk, Ben thought. He took his drink and moved three stools down the bar. Cobb followed. "You don't like the truth, eh, kid? You don't like people to call you a sucker." Ben rose and started to leave the bar, but Cobb grabbed his arm and held him there. "Thas what you are—a sucker. You're young now. Wait ten years. You'll be dyin' of radiation rot or a meteor'll get you. Wait and see, sucker!" Until this instant, Ben had suppressed his anger. Now, suddenly and without warning, it welled up into savage fury. His fist struck the man on the chin. Cobb's eyes gaped in shocked horror. He spun backward. His head cracked sickeningly on the edge of the bar. The sound was like a punctuation mark signaling the end of life. He sank to the floor, eyes glassy, blood trickling down his jaw. Ben knew that he was dead.
Then, for a single absurd second, Ben was seized with terror—just as, a moment before, he'd been overwhelmed with anger. He ran. For some twenty minutes, he raced through a dizzying, nightmare world of dark rocketfront alleys and shouting voices and pursuing feet. At last, abruptly, he realized that he was alone and in silence. He saw that he was still on the rocketfront, but in the Tycho-ward side of the city. He huddled in a dark corner of a loading platform and lit a cigarette. A thousand stars—a thousand motionless balls of silver fire—shone above him through Luna City's transparent dome. He was sorry he'd hit Cobb, of course. He was not sorry he'd run. Escaping at least gave him a power of choice, of decision. You can do two things , he thought. You can give yourself up, and that's what a good officer would do. That would eliminate the escape charge. You'd get off with voluntary manslaughter. Under interplanetary law, that would mean ten years in prison and a dishonorable discharge. And then you'd be free. But you'd be through with rockets and space. They don't want new men over thirty-four for officers on rockets or even for third-class jet-men on beat-up freighters—they don't want convicted killers. You'd get the rest of the thrill of conquering space through video and by peeking through electric fences of spaceports. Or— There were old wives' tales of a group of renegade spacemen who operated from the Solar System's frontiers. The spacemen weren't outlaws. They were misfits, rejectees from the clearing houses on Earth. And whereas no legally recognized ship had ventured past Mars, the souped-up renegade rigs had supposedly hit the asteroids. Their headquarters was Venus. Their leader—a subject of popular and fantastic conjecture in the men's audiozines—was rumored to be a red-bearded giant. So , Ben reflected, you can take a beer-and-pretzels tale seriously. You can hide for a couple of days, get rid of your uniform, change your name. 
You can wait for a chance to get to Venus. To hell with your duty. You can try to stay in space, even if you exile yourself from Earth. After all, was it right for a single second, a single insignificant second, to destroy a man's life and his dream? He was lucky. He found a tramp freighter whose skipper was on his last flight before retirement. Discipline was lax, investigation of new personnel even more so. Ben Curtis made it to Venus. There was just one flaw in his decision. He hadn't realized that the memory of the dead man's face would haunt him, torment him, follow him as constantly as breath flowed into his lungs. But might not the rumble of atomic engines drown the murmuring dead voice? Might not the vision of alien worlds and infinite spaceways obscure the dead face? So now he sat searching for a perhaps nonexistent red-bearded giant, and hoping and doubting and fearing, all at once. "You look for someone, senor ?" He jumped. "Oh. You still here?" " Oui. " The Martian kid grinned, his mouth full of purple teeth. "I keep you company on your first night in Hoover City, n'est-ce-pas ?" "This isn't my first night here," Ben lied. "I've been around a while." "You are spacemen?" Ben threw a fifty-cent credit piece on the table. "Here. Take off, will you?" Spiderlike fingers swept down upon the coin. " Ich danke, senor. You know why city is called Hoover City?" Ben didn't answer. "They say it is because after women come, they want first thing a thousand vacuum cleaners for dust. What is vacuum cleaner, monsieur ?" Ben raised his hand as if to strike the boy. " Ai-yee , I go. You keep listen to good Martian music." The toothpick of a body melted into the semi-darkness. Minutes passed. There were two more whiskeys. A ceaseless parade of faces broke through the smoky veil that enclosed him—reddish balloon faces, scaly reptilian faces, white-skinned, slit-eyed faces, and occasionally a white, rouged, powdered face. But nowhere was there a face with a red beard. 
A sense of hopelessness gripped Ben Curtis. Hoover City was but one of a dozen cities of Venus. Each had twenty dives such as this. He needed help. But his picture must have been 'scoped to Venusian visiscreens. A reward must have been offered for his capture. Whom could he trust? The Martian kid, perhaps? Far down the darkened aisle nearest him, his eyes caught a flash of white. He tensed. Like the uniform of a Security Policeman, he thought. His gaze shifted to another aisle and another hint of whiteness. And then he saw another and another and another. Each whiteness became brighter and closer, like shrinking spokes of a wheel with Ben as their focal point. You idiot! The damned Martian kid! You should have known! Light showered the room in a dazzling explosion. Ben, half blinded, realized that a broad circle of unshaded globes in the ceiling had been turned on. The light washed away the room's strangeness and its air of brooding wickedness, revealing drab concrete walls and a debris-strewn floor. Eyes blinked and squinted. There were swift, frightened movements and a chorus of angry murmurs. The patrons of the Blast Inn were like tatter-clad occupants of a house whose walls have been ripped away. Ben Curtis twisted his lean body erect. His chair tumbled backward, falling. The white-clad men charged, neuro-clubs upraised. A woman screamed. The music ceased. The Martian orchestra slunk with feline stealth to a rear exit. Only the giant Venusians remained undisturbed. They stood unmoving, their staring eyes shifting lazily in Ben's direction. "Curtis!" one of the policemen yelled. "You're covered! Hold it!" Ben whirled away from the advancing police, made for the exit into which the musicians had disappeared. A hissing sound traveled past his left ear, a sound like compressed air escaping from a container. A dime-sized section of the concrete wall ahead of him crumbled. He stumbled forward. 
They were using deadly neuro-pistols now, not the mildly stunning neuro-clubs. Another hiss passed his cheek. He was about twelve feet from the exit. Another second, his brain screamed. Just another second— Or would the exits be guarded? He heard the hiss. It hit directly in the small of his back. There was no pain, just a slight pricking sensation, like the shallow jab of a needle. He froze as if yanked to a stop by a noose. His body seemed to be growing, swelling into balloon proportions. He knew that the tiny needle had imbedded itself deep in his flesh, knew that the paralyzing mortocain was spreading like icy fire into every fiber and muscle of his body. He staggered like a man of stone moving in slow motion. He'd have fifteen—maybe twenty—seconds before complete lethargy of mind and body overpowered him. In the dark world beyond his fading consciousness, he heard a voice yell, "Turn on the damn lights!" Then a pressure and a coldness were on his left hand. He realized that someone had seized it. A soft feminine voice spoke to him. "You're wounded? They hit you?" "Yes." His thick lips wouldn't let go of the word. "You want to escape—even now?" "Yes." "You may die if you don't give yourself up." "No, no." He tried to stumble toward the exit. "All right then. Not that way. Here, this way." Heavy footsteps thudded toward them. A few yards away, a flashlight flicked on. Hands were guiding him. He was aware of being pushed and pulled. A door closed behind him. The glare of the flashlight faded from his vision—if he still had vision. "You're sure?" the voice persisted. "I'm sure," Ben managed to say. "I have no antidote. You may die." His mind fought to comprehend. With the anti-paralysis injection, massage and rest, a man could recover from the effects of mortocain within half a day. Without treatment, the paralysis could spread to heart and lungs. It could become a paralysis of death.
An effective weapon: the slightest wound compelled the average criminal to surrender at once. "Anti ... anti ..." The words were as heavy as blobs of mercury forced from his throat. "No ... I'm sure ... sure." He didn't hear the answer or anything else. Ben Curtis had no precise sensation of awakening. Return to consciousness was an intangible evolution from a world of black nothingness to a dream-like state of awareness. He felt the pressure of hands on his naked arms and shoulders, hands that massaged, manipulated, fought to restore circulation and sensitivity. He knew they were strong hands. Their strength seemed to transfer itself to his own body. For a long time, he tried to open his eyes. His lids felt welded shut. But after a while, they opened. His world of darkness gave way to a translucent cloak of mist. A round, featureless shape hovered constantly above him—a face, he supposed. He tried to talk. Although his lips moved slightly, the only sound was a deep, staccato grunting. But he heard someone say, "Don't try to talk." It was the same gentle voice he'd heard in the Blast Inn. "Don't talk. Just lie still and rest. Everything'll be all right." Everything all right, he thought dimly. There were long periods of lethargy when he was aware of nothing. There were periods of light and of darkness. Gradually he grew aware of things. He realized that the soft rubber mouth of a spaceman's oxygen mask was clamped over his nose. He felt the heat of electric blankets swathed about his body. Occasionally a tube would be in his mouth and he would taste liquid food and feel a pleasant warmth in his stomach. Always, it seemed, the face was above him, floating in the obscuring mist. Always, it seemed, the soft voice was echoing in his ears: "Swallow this now. That's it. You must have food." Or, "Close your eyes. Don't strain. It won't be long. You're getting better." Better, he'd think. Getting better.... At last, after one of the periods of lethargy, his eyes opened.
The mist brightened, then dissolved. He beheld the cracked, unpainted ceiling of a small room, its colorless walls broken with a single, round window. He saw the footboard of his aluminite bed and the outlines of his feet beneath a faded blanket. Finally he saw the face and figure that stood at his side. "You are better?" the kind voice asked. The face was that of a girl probably somewhere between twenty-five and thirty. Her features, devoid of makeup, had an unhealthy-looking pallor, as if she hadn't used a sunlamp for many weeks. Yet, at the same time, her firm slim body suggested a solidity and a strength. Her straight brown hair was combed backward, tight upon her scalp, and drawn together in a knot at the nape of her neck. "I—I am better," he murmured. His words were still slow and thick. "I am going to live?" "You will live." He thought for a moment. "How long have I been here?" "Nine days." "You took care of me?" He noted the deep, dark circles beneath her sleep-robbed eyes. She nodded. "You're the one who carried me when I was shot?" "Yes." "Why?" Suddenly he began to cough. Breath came hard. She held the oxygen mask in readiness. He shook his head, not wanting it. "Why?" he asked again. "It would be a long story. Perhaps I'll tell you tomorrow." A new thought, cloaked in sudden fear, entered his murky consciousness. "Tell me, will—will I be well again? Will I be able to walk?" He lay back then, panting, exhausted. "You have nothing to worry about," the girl said softly. Her cool hand touched his hot forehead. "Rest. We'll talk later." His eyes closed and breath came easier. He slept. When he next awoke, his gaze turned first to the window. There was light outside, but he had no way of knowing if this was morning, noon or afternoon—or on what planet. He saw no white-domed buildings of Hoover City, no formal lines of green-treed parks, no streams of buzzing gyro-cars. There was only a translucent and infinite whiteness. 
It was as if the window were set on the edge of the Universe overlooking a solemn, silent and matterless void. The girl entered the room. "Hi," she said, smiling. The dark half-moons under her eyes were less prominent. Her face was relaxed. She increased the pressure in his rubberex pillows and helped him rise to a sitting position. "Where are we?" he asked. "Venus." "We're not in Hoover City?" "No." He looked at her, wondering. "You won't tell me?" "Not yet. Later, perhaps." "Then how did you get me here? How did we escape from the Inn?" She shrugged. "We have friends who can be bribed. A hiding place in the city, the use of a small desert-taxi, a pass to leave the city—these can be had for a price." "You'll tell me your name?" "Maggie." "Why did you save me?" Her eyes twinkled mischievously. "Because you're a good astrogator." His own eyes widened. "How did you know that?" She sat on a plain chair beside his bed. "I know everything about you, Lieutenant Curtis." "How did you learn my name? I destroyed all my papers—" "I know that you're twenty-four. Born July 10, 1971. Orphaned at four, you attended Boys Town in the Catskills till you were 19. You graduated from the Academy at White Sands last June with a major in Astrogation. Your rating for the five-year period was 3.8—the second highest in a class of fifty-seven. Your only low mark in the five years was a 3.2 in History of Martian Civilization. Want me to go on?" Fascinated, Ben nodded. "You were accepted as junior astrogation officer aboard the Odyssey. You did well on your flight from Roswell to Luna City. In a barroom fight in Luna City, you struck and killed a man named Arthur Cobb, a pre-fab salesman. You've been charged with second degree murder and escape. A reward of 5,000 credits has been offered for your capture. You came to Hoover City in the hope of finding a renegade group of spacemen who operate beyond Mars. You were looking for them in the Blast Inn."
He gaped incredulously, struggling to rise from his pillows. "I—don't get it." "There are ways of finding out what we want to know. As I told you, we have many friends." He fell back into his pillows, breathing hard. She rose quickly. "I'm sorry," she said. "I shouldn't have told you yet. I felt so happy because you're alive. Rest now. We'll talk again soon." "Maggie, you—you said I'd live. You didn't say I'd be able to walk again." She lowered her gaze. "I hope you'll be able to." "But you don't think I will, do you?" "I don't know. We'll try walking tomorrow. Don't think about it now. Rest." He tried to relax, but his mind was a vortex of conjecture. "Just one more question," he almost whispered. "Yes?" "The man I killed—did he have a wife?" She hesitated. He thought, Damn it, of all the questions, why did I ask that? Finally she said, "He had a wife." "Children?" "Two. I don't know their ages." She left the room. He sank into the softness of his bed. As he turned over on his side, his gaze fell upon an object on a bureau in a far corner of the room. He sat straight up, his chest heaving. The object was a tri-dimensional photo of a rock-faced man in a merchant spaceman's uniform. He was a giant of a man with a neatly trimmed red beard! Ben stared at the photo for a long time. At length, he slipped into restless sleep. Images of faces and echoes of words spun through his brain. The dead man returned to him. Bloodied lips cursed at him. Glassy eyes accused him. Somewhere were two lost children crying in the night. And towering above him was a red-bearded man whose great hands reached down and beckoned to him. Ben crawled through the night on hands and knees, his legs numb and useless. The crying of the children was a chilling wail in his ears. His head rose and turned to the red-bearded man. His pleading voice screamed out to him in a thick, harsh cackle.
Yet even as he screamed, the giant disappeared, to be replaced by white-booted feet stomping relentlessly toward him. He awoke still screaming.... A night without darkness passed. Ben lay waiting for Maggie's return, a question already formed in his mind. She came and at once he asked, "Who is the man with the red beard?" She smiled. "I was right then when I gave you that thumbnail biog. You were looking for him, weren't you?" "Who is he?" She sat on the chair beside him. "My husband," she said softly. He began to understand. "And your husband needs an astrogator? That's why you saved me?" "We need all the good men we can get." "Where is he?" She cocked her head in mock suspicion. "Somewhere between Mercury and Pluto. He's building a new base for us—and a home for me. When his ship returns, I'll be going to him." "Why aren't you with him now?" "He said unexplored space is no place for a woman. So I've been studying criminal reports and photos from the Interplanetary Bureau of Investigation and trying to find recruits like yourself. You know how we operate?" He told her the tales he'd heard. She nodded. "There are quite a few of us now—about a thousand—and a dozen ships. Our base used to be here on Venus, down toward the Pole. The dome we're in now was designed and built by us a few years ago after we got pushed off Mars. We lost a few men in the construction, but with almost every advance in space, someone dies." "Venus is getting too civilized. We're moving out and this dome is only a temporary base when we have cases like yours. The new base—I might as well tell you it's going to be an asteroid. I won't say which one." "Don't get the idea that we're outlaws. Sure, about half our group is wanted by the Bureau, but we make honest livings. We're just people like yourself and Jacob." "Jacob? Your husband?" She laughed. "Makes you think of a Biblical character, doesn't it? Jacob's anything but that. 
And just plain 'Jake' reminds one of a grizzled old uranium prospector and he isn't like that, either." She lit a cigarette. "Anyway, the wanted ones stay out beyond the frontiers. Jacob and those like him can never return to Earth—not even to Hoover City—except dead. The others are physical or psycho rejects who couldn't get clearance if they went back to Earth. They know nothing but rocketing and won't give up. They bring in our ships to frontier ports like Hoover City to unload cargo and take on supplies." "Don't the authorities object?" "Not very strongly. The I. B. I. has too many problems right here to search the whole System for a few two-bit crooks. Besides, we carry cargoes of almost pure uranium and tungsten and all the stuff that's scarce on Earth and Mars and Venus. Nobody really cares whether it comes from the asteroids or Hades. If we want to risk our lives mining it, that's our business." She pursed her lips. "But if they guessed how strong we are or that we have friends planted in the I. B. I.—well, things might be different. There probably would be a crackdown." Ben scowled. "What happens if there is a crackdown? And what will you do when Space Corps ships officially reach the asteroids? They can't ignore you then." "Then we move on. We dream up new gimmicks for our crates and take them to Jupiter, Saturn, Uranus, Neptune, Pluto. In time, maybe, we'll be pushed out of the System itself. Maybe it won't be the white-suited boys who'll make that first hop to the stars. It could be us, you know—if we live long enough. But that Asteroid Belt is murder. You can't follow the text-book rules of astrogation out there. You make up your own." Ben stiffened. "And that's why you want me for an astrogator." Maggie rose, her eyes wistful. "If you want to come—and if you get well." She looked at him strangely. "Suppose—" He fought to find the right words. "Suppose I got well and decided not to join Jacob. What would happen to me? Would you let me go?" 
Her thin face was criss-crossed by emotion—alarm, then bewilderment, then fear. "I don't know. That would be up to Jacob." He lay biting his lip, staring at the photo of Jacob. She touched his hand and it seemed that sadness now dominated the flurry of emotion that had coursed through her. "The only thing that matters, really," she murmured, "is your walking again. We'll try this afternoon. Okay?" "Okay," he said. When she left, his eyes were still turned toward Jacob's photo. He was like two people, he thought. Half of him was an officer of the Space Corps. Perhaps one single starry-eyed boy out of ten thousand was lucky enough to reach that goal. He remembered a little picture book his mother had given him when she was alive. Under the bright pictures of spacemen were the captions: "A Space Officer Is Honest" "A Space Officer Is Loyal." "A Space Officer Is Dutiful." Honesty, loyalty, duty. Trite words, but without those concepts, mankind would never have broken away from the planet that held it prisoner for half a million years. Without them, Everson, after three failures and a hundred men dead, would never have landed on the Moon twenty-seven years ago.
|
A. Jacob didn't think women should be in unexplored space.
|
What is the Shopping Avenger susceptible not to withstand?
A. Life-threatening weather.
B. Radiation.
C. Bear attacks.
D. Critical self-reflection.
|
It's Time To Keelhaul U-Haul! Like all superheroes worthy of the title, the Shopping Avenger has an Achilles' heel. In the case of the Shopping Avenger, his Achilles' heel is not animal, vegetable, or mineral but something less tangible. An explanation: Last week, the magazine you are currently reading forced the Shopping Avenger at gunpoint to read a series of treacle-filled self-help books, and then to . The Shopping Avenger, who can withstand radiation, extreme heat and cold, hail, bear attacks, and Eyes Wide Shut, almost succumbed to terminal jejuneness after reading these books. Except for one thing: One of the books, The Art of Happiness, which collects and simplifies the Dalai Lama's philosophy, got the Shopping Avenger to thinking. This, in a way, is the Shopping Avenger's Achilles' heel: thinking. Perhaps it is wrong, the Shopping Avenger thought, to complain about the petty insults and inconveniences of life in the materialistic '90s. The Shopping Avenger felt that perhaps he should counsel those who write seeking help to meditate, to accept bad service the way one accepts the change of seasons, and to extend a compassionate hand of forgiveness to those who provide poor customer care. But then the Shopping Avenger sat down, and the feeling passed. The Shopping Avenger does not make light of the Dalai Lama or of the notion that there is more to life than the impatient acquisition of material goods. If the Shopping Avenger were not, for a superhero, extremely nonjudgmental--as opposed to his alter ego, who is considered insufferably judgmental by his alter ego's wife--the Shopping Avenger would tell the occasional correspondent to let go of his petty grievance and get a life. But the Shopping Avenger also believes that the Dalai Lama has never tried to rent a truck from U-Haul. If he had tried to rent from U-Haul, he never would have escaped from Tibet. (For the complete back story, see "Shopping Avenger" column and one.)
The complaints about U-Haul's nonreservation reservation policy continue to pour in through the electronic mail. One correspondent, B.R., wrote in with this cautionary tale: "Last weekend, I went to San Francisco to help my brother and his family move into their first house. My brother had reserved a moving truck with U-Haul for the big day. I warned my brother about U-Haul's 'not really a reservation per se' policy that I learned from the Shopping Avenger. He didn't believe such a thing would happen to him, so he didn't act on my warning." B.R. continues--as if you don't know what happened already--"I went to U-Haul with my brother to get our 'reserved' truck. The store had many customers standing around looking frustrated. When we got to the front of the line, the clerk informed us that our 'reserved' truck had not yet been returned. We asked if we could rent one of the many trucks sitting idle in the parking lot. The clerk laughed and said the keys to those trucks were lost." B.R. and his chastened brother--the Shopping Avenger is resisting the urge to gloat--went to Ryder. "Ryder had a truck available for us. The gentleman who helped us at Ryder said Ryder prides itself on being everything U-Haul is not." The Shopping Avenger has still not received a call from U-Haul spokeswoman Johna Burke explaining why U-Haul refuses to provide trucks to people who reserve trucks, but the Shopping Avenger is pleased to note that several correspondents have written in over the past month saying that, based on what they have read in this column, they will be taking their business to Ryder or Budget or elsewhere. The Shopping Avenger will undoubtedly return to the sorry state of affairs at U-Haul in the next episode, but now on to this month's airline debacle. Before we begin, though, the Shopping Avenger nearly forgot to announce the winner of last month's contest, in which readers were asked to answer the question, "What's the difference between pests and airlines?" 
The winner is one Tom Morgan, who wrote, "You can hire someone to kill pests." Tom is the winner of a year's supply of Turtle Wax, and he will receive his prize just as soon as the Shopping Avenger figures out how much Turtle Wax actually constitutes a year's supply. The new contest question: How much Turtle Wax comprises a year's supply of Turtle Wax? This month's airline in the spotlight is Southwest. Loyal readers will recall that last month the Shopping Avenger praised Southwest Airlines for its "sterling" customer service. This brought forth a small number of articulate dissensions. The most articulate, and the most troubling, came from M., who wrote, "Last year, flying from Baltimore to Chicago with my entire family (two really little kids included), we set down at Midway in a rainstorm. And waited for our bags. And waited for bags. And waited for bags." An hour later, M. says, the bags showed up, "soaked through. We took them to baggage services at SW and were faced with the most complicated, unclear, and confusing mechanism for filing a claim we experienced flyers have ever seen." When they arrived at their destination, M. and her family made a terrible discovery, "We discovered that our clothes were soaked through--the top clothes were so wet that the dye had bled through down to the lower levels, destroying lots of other clothes. Obviously, our bags had just been sitting out on the runway in the rain. To this day, I've never heard a thing from SW, despite calls and letters." This, of course, is where Shopping Avenger steps in. Shopping Avenger knows that Southwest is different from the average airline, in that it doesn't go out of its way to infuriate its paying customers (see: ), so I expected a quick and generous resolution to M.'s problem. What I got at first, though, was a load of corporate hoo-ha. 
"The airline's policy, which is consistent with all contracts of carriage at all airlines, requires that passengers file a report in person for lost or damaged luggage within four hours of arrival at their destination," a Southwest spokeswoman, Linda Rutherford, e-mailed me. "[M.] indicates she called for a few days, but did not file a report in person until April 12--three days later. Southwest, as a courtesy, took her report anyway and asked for follow up information and written inventory of the damage." Rutherford said that M. should have submitted detailed receipts and photographs of the damage in order to make a claim. Harrumph, the Shopping Avenger says. It is a bad hair day at Southwest when its officials defend themselves by comparing their airline to other airlines. I forwarded this message to M., who replied: "Wow. Well, of course I didn't file it at the airport on the 9th because I didn't know the clothes were ruined at the airport. I didn't know until I opened the baggage at my hotel and saw the ruined stuff. (And it's worth noting that we had already waited for about an hour for our luggage with two little kids and impatient in-laws nipping at our heels.)" She goes on, "I did call that evening ... and was told that that sufficed. This is the first time I've been told that I had to file a complaint in person within four hours. ... When I filed on the 12th, I was never told that I needed any receipts or photos or other type of documentation. The baggage folks seemed pretty uninterested in all of this. ... They know that the type of 'evidence' they want is impossible to obtain. They also know that on April 9 they screwed up the luggage retrieval and left bags out in the rain a long time." Southwest's response actually served to anger M. more than the original problem. "Before, they had a mildly annoyed but loyal customer (who would have been placated by an apology and thrilled with some modest token of their regret).
Now they have a pissed-off customer." Things do look bad for Southwest, don't they? The Shopping Avenger sent M.'s response to Rutherford, who e-mailed back saying she thought the Shopping Avenger was asking for "policy information." The Shopping Avenger e-mailed back again, stressing to Rutherford that the Great Court of Consumer Justice would, if this case were brought to trial, undoubtedly find for the plaintiff (the Shopping Avenger serves as prosecutor, judge, and jury in the Great Court of Consumer Justice--defendants are represented by the president of U-Haul), and that Southwest was precipitously close to feeling the sword of retribution at its neck. But then she came through, provisionally, "Yep, you can be sure if [M.] will call me we will get everything squared away. I'm sorry it's taken this long for her to get someone who can help, but we will take care of it from here." Stay tuned, shoppers, to hear whether Southwest makes good its promise to compensate M. and apologize to her for her troubles. The story of M. reminds the Shopping Avenger of a central truth of consumer service: It's not the crime, it's the cover-up. Take the case of K., who found himself waiting in vain for Circuit City to repair his television. Televisions break, even 1-year-old televisions, as is the case with K's. But Circuit City, where he bought the television, gave him a terrible runaround. The Shopping Avenger dispatched his sidekick, Tad the Deputy Avenger, to get to the bottom of K.'s story. This is what he found: K. grew concerned, Tad the Deputy Avenger reports, after his television had been in the Circuit City shop for a week. When he called, he was told to "check back next week." When he asked if someone from the store could call him with more information, he was refused. Weeks went by. When K. told one Circuit City employee that he really would like to get his television back, the employee, K. says, asked him, "Don't you have another television in your house?"
More than a month later--after hours and hours and hours of telephone calls and days missed at work--K. received his television back. Mistakes happen, but not, Tad the Deputy Avenger found out, at Circuit City. The case, K. was told by a Circuit City official, was "handled perfectly." Another official, Morgan Stewart in public relations, assured Deputy Avenger Tad that "We got to be a big and successful company by treating customers better than the other guy." The Shopping Avenger and his loyal sidekick would like to hear from other Circuit City customers: Does Circuit City, in fact, treat its customers better than the other guy? Stay tuned for answers. And next month, a Shopping Avenger clergy special: TWA screws with a Hasidic rabbi's travel plans, leaving the rabbi's wife crying at the airport. Find out if the Shopping Avenger can save TWA from certain heavenly punishment, in the next episode. Got a consumer score you want settled? Send e-mail to [email protected].
|
D. Critical self-reflection.
|
Why has Infield attached a lightning rod to his head?
A. He needs it in order to survive the elements
B. He wants to go back to being an Incomplete
C. He is conducting an experiment involving electricity
D. He believes it has cured him of his fear
|
Name Your Symptom By JIM HARMON Illustrated by WEISS [Transcriber's Note: This etext was produced from Galaxy Science Fiction May 1956. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Anybody who shunned a Cure needed his head examined—assuming he had one left! Henry Infield placed the insulated circlet on his head gently. The gleaming rod extended above his head about a foot, the wires from it leading down into his collar, along his spine and finally out his pants leg to a short metallic strap that dragged on the floor. Clyde Morgan regarded his partner. "Suppose—just suppose—you were serious about this, why not just the shoes?" Infield turned his soft blue eyes to the black and tan oxfords with the very thick rubber soles. "They might get soaked through." Morgan took his foot off the chair behind the desk and sat down. "Suppose they were soaked through and you were standing on a metal plate—steps or a manhole cover—what good would your lightning rod do you then?" Infield shrugged slightly. "I suppose a man must take some chances." Morgan said, "You can't do it, Henry. You're crossing the line. The people we treat are on one side of the line and we're on the other. If you cross that line, you won't be able to treat people again." The small man looked out the large window, blinking myopically at the brassy sunlight. "That's just it, Clyde. There is a line between us, a wall. How can we really understand the people who come to us, if we hide on our side of the wall?" Morgan shook his thick head, ruffling his thinning red hair. "I dunno, Henry, but staying on our side is a pretty good way to keep sane and that's quite an accomplishment these days." Infield whirled and stalked to the desk. "That's the answer! The whole world is going mad and we are just sitting back watching it hike along. Do you know that what we are doing is really the most primitive medicine in the world? 
We are treating the symptoms and not the disease. One cannibal walking another with sleeping sickness doesn't cure anything. Eventually the savage dies—just as all those sick savages out in the street will die unless we can cure the disease, not only the indications." Morgan shifted his ponderous weight uneasily. "Now, Henry, it's no good to talk like that. We psychiatrists can't turn back the clock. There just aren't enough of us or enough time to give that old-fashioned therapy to all the sick people." Infield leaned on the desk and glared. "I called myself a psychiatrist once. But now I know we're semi-mechanics, semi-engineers, semi-inventors, semi lots of other things, but certainly not even semi-psychiatrists. A psychiatrist wouldn't give a foetic gyro to a man with claustrophobia." His mind went back to the first gyro ball he had ever issued; the remembrance of his pride in the thing sickened him. Floating before him in memory was the vertical hoop and the horizontal hoop, both of shining steel-impervium alloy. Transfixed in the twin circles was the face of the patient, slack with smiles and sweat. But his memory was exaggerating the human element. The gyro actually passed over a man's shoulder, through his legs, under his arms. Any time he felt the walls creeping in to crush him, he could withdraw his head and limbs into the circle and feel safe. Steel-impervium alloy could resist even a nuclear explosion. The foetic gyro ball was worn day and night, for life. The sickness overcame him. He sat down on Morgan's desk. "That's just one thing, the gyro ball. There are so many others, so many." Morgan smiled. "You know, Henry, not all of our Cures are so—so—not all are like that. Those Cures for mother complexes aren't even obvious. If anybody does see that button in a patient's ear, it looks like a hearing aid. 
Yet for a nominal sum, the patient is equipped to hear the soothing recorded voice of his mother saying, 'It's all right, everything's all right, Mommy loves you, it's all right....'" "But is everything all right?" Infield asked intensely. "Suppose the patient is driving over one hundred on an icy road. He thinks about slowing down, but there's the voice in his ear. Or suppose he's walking down a railroad track and hears a train whistle—if he can hear anything over that verbal pablum gushing in his ear." Morgan's face stiffened. "You know as well as I do that those voices are nearly subsonic. They don't cut a sense efficiency more than 23 per cent." "At first, Clyde—only at first. But what about the severe case where we have to burn a three-dimensional smiling mother-image on the eyes of the patient with radiation? With that image over everything he sees and with that insidious voice drumming in his head night and day, do you mean to say that man's senses will only be impaired 23 per cent? Why, he'll turn violently schizophrenic sooner or later—and you know it. The only cure we have for that is still a strait jacket, a padded cell or one of those inhuman lobotomies." Morgan shrugged helplessly. "You're an idealist." "You're damned right!" Infield slammed the door behind him. The cool air of the street was a relief. Infield stepped into the main stream of human traffic and tried to adjust to the second change in the air. People didn't bathe very often these days. He walked along, buffeted by the crowd, carried along in this direction, shoved back in that direction. Most people in the crowd seemed to be Normals, but you couldn't tell. Many "Cures" were not readily apparent. A young man with black glasses and a radar headset (a photophobe) was unable to keep from being pushed against Infield. He sounded out the lightning rod, his face changing when he realized it must be some kind of Cure. "Pardon me," he said warmly. "Quite all right." 
It was the first time in years that anyone had apologized to Infield for anything. He had been one of those condemned Normals, more to be scorned than pitied. Perhaps he could really get to understand these people, now that he had taken down the wall. Suddenly something else was pushing against Infield, forcing the air from his lungs. He stared down at the magnetic suction dart clinging leechlike to his chest. Model Acrophobe 101-X, he catalogued immediately. Description: safety belt. But his emotions didn't behave so well. He was thoroughly terrified, heart racing, sweat glands pumping. The impervium cable undulated vulgarly. Some primitive fear of snake symbols? his mind wondered while panic crushed him. "Uncouple that cable!" the shout rang out. It was not his own. A clean-cut young man with mouse-colored hair was moving toward the stubble-chinned, heavy-shouldered man quivering in the center of a web of impervium cables stuck secure to the walls and windows of buildings facing the street, the sidewalk, a mailbox, the lamp post and Infield. Mouse-hair yelled hoarsely, "Uncouple it, Davies! Can't you see the guy's got a lightning rod? You're grounding him!" "I can't," Davies groaned. "I'm scared!" Halfway down the twenty feet of cable, Mouse-hair grabbed on. "I'm holding it. Release it, you hear?" Davies fumbled for the broad belt around his thickening middle. He jabbed the button that sent a negative current through the cable. The magnetic suction dart dropped away from Infield like a thing that had been alive and now was killed. He felt an overwhelming sense of relief. After breathing deeply for a few moments, he looked up to see Davies releasing and drawing all his darts into his belt, making it resemble a Hydra-sized spiked dog collar. Mouse-hair stood by tensely as the crowd disassembled. "This isn't the first time you've pulled something like this, Davies," he said. "You weren't too scared to release that cable. 
You just don't care about other people's feelings. This is official ." Mouse-hair drove a fast, hard right into the soft blue flesh of Davies' chin. The big man fell silently. The other turned to Infield. "He was unconscious on his feet," he explained. "He never knew he fell." "What did you mean by that punch being official?" Infield asked while trying to arrange his feelings into the comfortable, familiar patterns. The young man's eyes almost seemed to narrow, although his face didn't move; he merely radiated narrowed eyes. "How long have you been Cured?" "Not—not long," Infield evaded. The other glanced around the street. He moistened his lips and spoke slowly. "Do you think you might be interested in joining a fraternal organization of the Cured?" Infield's pulse raced, trying to get ahead of his thoughts, and losing out. A chance to study a pseudo-culture of the "Cured" developed in isolation! "Yes, I think I might. I owe you a drink for helping me out. How about it?" The man's face paled so fast, Infield thought for an instant that he was going to faint. "All right. I'll risk it." He touched the side of his face away from the psychiatrist. Infield shifted around, trying to see that side of his benefactor, but couldn't manage it in good grace. He wondered if the fellow was sporting a Mom-voice hearing aid and was afraid of raising her ire. He cleared his throat, noticing the affectation of it. "My name's Infield." "Price," the other answered absently. "George Price. I suppose they have liquor at the Club. We can have a drink there, I guess." Price set the direction and Infield fell in at his side. "Look, if you don't drink, I'll buy you a cup of coffee. It was just a suggestion." Under the mousy hair, Price's strong features were beginning to gleam moistly. "You are lucky in one way, Mr. Infield. People take one look at your Cure and don't ask you to go walking in the rain. But even after seeing this , some people still ask me to have a drink." 
This was revealed, as he turned his head, to be a small metal cube above his left ear. Infield supposed it was a Cure, although he had never issued one like it. He didn't know if it would be good form to inquire what kind it was. "It's a cure for alcoholism," Price told him. "It runs a constant blood check to see that the alcohol level doesn't go over the sobriety limit." "What happens if you take one too many?" Price looked off as if at something not particularly interesting, but more interesting than what he was saying. "It drives a needle into my temple and kills me." The psychiatrist felt cold fury rising in him. The Cures were supposed to save lives, not endanger them. "What kind of irresponsible idiot could have issued such a device?" he demanded angrily. "I did," Price said. "I used to be a psychiatrist. I was always good in shop. This is a pretty effective mechanism, if I say so myself. It can't be removed without causing my death and it's indestructible. Impervium-shielded, you see." Price probably would never get crazed enough for liquor to kill himself, Infield knew. The threat of death would keep him constantly shocked sane. Men hide in the comforts of insanity, but when faced with death, they are often forced back to reality. A man can't move his legs; in a fire, though, he may run. His legs were definitely paralyzed before and may be again, but for one moment he would forget the moral defeat of his life and his withdrawal from life and live an enforced sanity. But sometimes the withdrawal was—or could become—too complete. "We're here." Infield looked up self-consciously and noticed that they had crossed two streets from his building and were standing in front of what appeared to be a small, dingy cafe. He followed Price through the screeching screen door. They seated themselves at a small table with a red-checked cloth. Infield wondered why cheap bars and restaurants always used red-checked cloths. Then he looked closer and discovered the reason. 
They did a remarkably good job of camouflaging the spots of grease and alcohol. A fat man who smelled of the grease and alcohol of the tablecloths shuffled up to them with a towel on his arm, staring ahead of him at some point in time rather than space. Price lit a cigarette with unsteady hands. "Reggie is studying biblical text. Cute gadget. His contact lenses are made up of a lot of layers of polarized glass. Every time he blinks, the amount of polarization changes and a new page appears. His father once told him that if he didn't study his Bible and pray for him, his old dad would die." The psychiatrist knew the threat on the father's part couldn't create such a fixation by itself. His eyebrows faintly inquired. Price nodded jerkily. "Twenty years ago, at least." "What'll you have, Georgie?" Reggie asked. The young man snubbed out his cigarette viciously. "Bourbon. Straight." Reggie smiled—a toothy, vacant, comedy-relief smile. "Fine. The Good Book says a little wine is good for a man, or something like that. I don't remember exactly." Of course he didn't, Infield knew. Why should he? It was useless to learn his Bible lessons to save his father, because it was obvious his father was dead. He would never succeed because there was no reason to succeed. But he had to try, didn't he, for his father's sake? He didn't hate his father for making him study. He didn't want him to die. He had to prove that. Infield sighed. At least this device kept the man on his feet, doing some kind of useful work instead of rotting in a padded cell with a probably imaginary Bible. A man could cut his wrists with the edge of a sheet of paper if he tried long enough, so of course the Bible would be imaginary. "But, Georgie," the waiter complained, "you know you won't drink it. You ask me to bring you drinks and then you just look at them. Boy, do you look funny when you're looking at drinks. Honest, Georgie, I want to laugh when I think of the way you look at a glass with a drink in it." 
He did laugh. Price fumbled with the cigarette stub in the black iron ashtray, examining it with the skill of scientific observation. "Mr. Infield is buying me the drink and that makes it different." Reggie went away. Price kept dissecting the tobacco and paper. Infield cleared his throat and again reminded himself against such obvious affectations. "You were telling me about some organization of the Cured," he said as a reminder. Price looked up, no longer interested in the relic of a cigarette. He was suddenly intensely interested and intensely observant of the rest of the cafe. "Was I? I was? Well, suppose you tell me something. What do you really think of the Incompletes?" The psychiatrist felt his face frown. "Who?" "I forgot. You haven't been one of us long. The Incompletes is a truer name for the so-called Normals. Have you ever thought of just how dangerous these people are, Mr. Infield?" "Frankly, no," Infield said, realizing it was not the right thing to say but tiring of constant pretense. "You don't understand. Everyone has some little phobia or fixation. Maybe everyone didn't have one once, but after being told they did have them for generations, everyone who didn't have one developed a defense mechanism and an aberration so they would be normal. If that phobia isn't brought to the surface and Cured, it may arise any time and endanger other people. The only safe, good sound citizens are Cured. Those lacking Cures—the Incompletes— must be dealt with ." Infield's throat went dry. "And you're the one to deal with them?" "It's my Destiny." Price quickly added, "And yours, too, of course." Infield nodded. Price was a demagogue, young, handsome, dynamic, likable, impassioned with his cause, and convinced that it was his divine destiny. He was a psychopathic egotist and a dangerous man. 
Doubly dangerous to Infield because, even though he was one of the few people who still read books from the old days of therapy to recognize Price for what he was, he nevertheless still liked the young man for the intelligence behind the egotism and the courage behind the fanaticism. "How are we going to deal with the Incompletes?" Infield asked. Price started to glance around the cafe, then half-shrugged, almost visibly thinking that he shouldn't run that routine into the ground. "We'll Cure them whether they want to be Cured or not—for their own good." Infield felt cold inside. After a time, he found that the roaring was not just in his head. It was thundering outside. He was getting sick. Price was the type of man who could spread his ideas throughout the ranks of the Cured—if indeed the plot was not already universal, imposed upon many ill minds. He could picture an entirely Cured world and he didn't like the view. Every Cure cut down on the mental and physical abilities of the patient as it was, whether Morgan and the others admitted it or not. But if everyone had a crutch to lean on for one phobia, he would develop secondary symptoms. People would start needing two Cures—perhaps a foetic gyro and a safety belt—then another and another. There would always be a crutch to lean on for one thing and then room enough to develop something else—until everyone would be loaded down with too many Cures to operate. A Cure was a last resort, dope for a malignancy case, euthanasia for the hopeless. Enforced Cures would be a curse for the individual and the race. But Infield let himself relax. How could anyone force a mechanical relief for neurotic or psychopathic symptoms on someone who didn't want or need it? "Perhaps you don't see how it could be done," Price said. "I'll explain." Reggie's heavy hand sat a straight bourbon down before Price and another before Infield. Price stared at the drink almost without comprehension of how it came to be. He started to sweat. 
"George, drink it." The voice belonged to a young woman, a blonde girl with pink skin and suave, draped clothes. In this den of the Cured, Infield thought half-humorously, it was surprising to see a Normal—an "Incomplete." But then he noticed something about the baby she carried. The Cure had been very simple. It wasn't even a mechanized half-human robot, just a rag doll. She sat down at the table. "George," she said, "drink it. One drink won't raise your alcohol index to the danger point. You've got to get over this fear of even the sight or smell of liquor." The girl turned to Infield. "You're one of us, but you're new, so you don't know about George. Maybe you can help if you do. It's all silly. He's not an alcoholic. He didn't need to put that Cure on his head. It's just an excuse for not drinking. All of this is just because a while back something happened to the baby here—" she adjusted the doll's blanket—"when he was drinking. Just drinking, not drunk. "I don't remember what happened to the baby—it wasn't important. But George has been brooding about it ever since. I guess he thinks something else bad will happen because of liquor. That's silly. Why don't you tell him it's silly?" "Maybe it is," Infield said softly. "You could take the shock if he downed that drink and the shock might do you good." Price laughed shortly. "I feel like doing something very melodramatic, like throwing my drink—and yours—across the room, but I haven't got the guts to touch those glasses. Do it for me, will you? Cauterizing the bite might do me good if I'd been bitten by a rabid dog, but I don't have the nerve to do it." Before Infield could move, Reggie came and set both drinks on a little circular tray. He moved away. "I knew it. That's all he did, just look at the drink. Makes me laugh." Price wiped the sweat off his palms. Infield sat and thought. Mrs. Price cooed to the rag doll, unmindful of either of them now. "You were explaining," the psychiatrist said. 
"You were going to tell me how you were going to Cure the Incompletes." "I said we were going to do it. Actually you will play a greater part than I, Doctor Infield." The psychiatrist sat rigidly. "You didn't think you could give me your right name in front of your own office building and that I wouldn't recognize you? I know some psychiatrists are sensitive about wearing Cures themselves, but it is a mark of honor of the completely sane man. You should be proud of your Cure and eager to Cure others. Very eager." "Just what do you mean?" He already suspected Price's meaning. Price leaned forward. "There is one phobia that is so wide-spread, a Cure is not even thought of—hypochondria. Hundreds of people come to your office for a Cure and you turn them away. Suppose you and the other Cured psychiatrists give everybody who comes to you a Cure?" Infield gestured vaguely. "A psychiatrist wouldn't hand out Cures unless they were absolutely necessary." "You'll feel differently after you've been Cured for a while yourself. Other psychiatrists have." Before Infield could speak, a stubble-faced, barrel-chested man moved past their table. He wore a safety belt. It was the man Price had called Davies, the one who had fastened one of his safety lines to Infield in the street. Davies went to the bar in the back. "Gimme a bottle," he demanded of a vacant-eyed Reggie. He came back toward them, carrying the bottle in one hand, brushing off rain drops with the other. He stopped beside Price and glared. Price leaned back. The chair creaked. Mrs. Price kept cooing to the doll. "You made me fall," Davies accused. Price shrugged. "You were unconscious. You never knew it." Sweat broke out on Davies' forehead. "You broke the Code. Don't you think I can imagine how it was to fall? You louse!" Suddenly, Davies triggered his safety belt. 
At close range, before the lines could fan out in a radius, all the lines in front attached themselves to Price, the ones at each side clung to their table and the floor, and all the others to the table behind Infield. Davies released all lines except those on Price, and then threw himself backward, dragging Price out of his chair and onto the floor. Davies didn't mind making others fall. They were always trying to make him fall just so they could laugh at him or pounce on him; why shouldn't he like to make them fall first? Expertly, Davies moved forward and looped the loose lines around Price's head and shoulders and then around his feet. He crouched beside Price and shoved the bottle into the gasping mouth and poured. Price twisted against the binding lines in blind terror, gagging and spouting whiskey. Davies laughed and tilted the bottle more. Mrs. Price screamed. "The Cure! If you get that much liquor in his system, it will kill him!" She rocked the rag doll in her arms, trying to soothe it, and stared in horror. Infield hit the big man behind the ear. He dropped the bottle and fell over sideways on the floor. Fear and hate mingled in his eyes as he looked up at Infield. Nonsense, Infield told himself. Eyes can't register emotion. Davies released his lines and drew them in. He got up precariously. "I'm going to kill you," he said, glaring at Infield. "You made me fall worse than Georgie did. I'm really going to kill you." Infield wasn't a large man, but he had pressed two hundred and fifty many times in gym. He grabbed Davies' belt with both hands and lifted him about six inches off the floor. "I could drop you," the psychiatrist said. "No!" Davies begged weakly. "Please!" "I'll do it if you cause more trouble." Infield sat down and rubbed his aching forearms. Davies backed off in terror, right into the arms of Reggie. The waiter closed his huge hands on the acrophobe's shoulders. " You broke the Code all the way," Reggie said. 
"The Good Book says 'Thou shouldn't kill' or something like that, and so does the Code." "Let him go, Reggie," Price choked out, getting to his feet. "I'm not dead." He wiped his hand across his mouth. "No. No, you aren't." Infield felt an excitement pounding through him, same as when he had diagnosed his first case. No, better than that. "That taste of liquor didn't kill you, Price. Nothing terrible happened. You could find some way to get rid of that Cure." Price stared at him as if he were a padded-cell case. "That's different. I'd be a hopeless drunk without the Cure. Besides, no one ever gets rid of a Cure." They were all looking at Infield. Somehow he felt this represented a critical point in history. It was up to him which turn the world took, the world as represented by these four Cured people. "I'm afraid I'm for less Cures instead of more, Price. Look, if I can show you that someone can discard a Cure, would you get rid of that—if I may use the word— monstrous thing on your head?" Price grinned. Infield didn't recognize its smugness at the time. "I'll show you." He took off the circlet with the lightning rod and yanked at the wire running down into his collar. The new-old excitement within was running high. He felt the wire snap and come up easily. He threw the Cure on the floor. "Now," he said, "I am going out in that rain storm. There's thunder and lightning out there. I'm afraid, but I can get along without a Cure and so can you." "You can't! Nobody can!" Price screamed after him. He turned to the others. "If he reveals us, the Cause is lost. We've got to stop him for good . We've got to go after him." "It's slippery," Davies whimpered. "I might fall." Mrs. Price cuddled her rag doll. "I can't leave the baby and she mustn't get wet." "Well, there's no liquor out there and you can study your text in the lightning flashes, Reggie. Come on." 
Running down the streets that were tunnels of shining tar, running into the knifing ice bristles of the rain, Henry Infield realized that he was very frightened of the lightning. There is no action without a reason, he knew from the old neglected books. He had had a latent fear of lightning when he chose the lightning rod Cure. He could have picked a safety belt or foetic gyro just as well. He sneezed. He was soaked through, but he kept on running. He didn't know what Price and Reggie planned to do when they caught him. He slipped and fell. He would soon find out what they wanted. The excitement was all gone now and it left an empty space into which fear rushed. Reggie said, "We shall make a sacrifice." Infield looked up and saw the lightning reflected on the blade of a thin knife. Infield reached toward it more in fascination than fear. He managed to get all his fingers around two of Reggie's. He jerked and the knife fell into Infield's palm. The psychiatrist pulled himself erect by holding to Reggie's arm. Staggering to his feet, he remembered what he must do and slashed at the waiter's head. A gash streaked across the man's brow and blood poured into his eyes. He screamed. "I can't see the words!" It was his problem. Infield usually solved other people's problems, but now he ran away—he couldn't even solve his own. Infield realized that he had gone mad as he held the thin blade high overhead, but he did need some kind of lightning rod. Price (who was right behind him, gaining) had been right. No one could discard a Cure. He watched the lightning play its light on the blade of his Cure and he knew that Price was going to kill him in the next moment. He was wrong. The lightning hit him first. Reggie squinted under the bandage at the lettering on the door that said INFIELD & MORGAN and opened the door. He ran across the room to the man sitting at the desk, reading by the swivel light. "Mr. Morgan, your partner, Mr. Infield, he—" "Just a moment." 
Morgan switched on the room lights. "What were you saying?" "Mr. Infield went out without his Cure in a storm and was struck by lightning. We took him to the morgue. He must have been crazy to go out without his Cure." Morgan stared into his bright desk light without blinking. "This is quite a shock to me. Would you mind leaving? I'll come over to your place and you can tell me about it later." Reggie went out. "Yes, sir. He was struck by lightning, struck dead. He must have been crazy to leave his Cure...." The door closed. Morgan exhaled. Poor Infield. But it wasn't the lightning that killed him, of course. Morgan adjusted the soundproofing plugs in his ears, thinking that you did have to have quite a bit of light to read lips. The thunder, naturally, was what had killed Infield. Loud noise—any noise—that would do it every time. Too bad Infield had never really stopped being one of the Incompletes. Dangerous people. He would have to deal with them.
|
D. He believes it has cured him of his fear
|
Which of the following would Humphrey not want from his life?
A. to experience real weather
B. a family
C. a promotion from his job
D. to escape the dome
|
A FALL OF GLASS By STANLEY R. LEE Illustrated by DILLON [Transcriber's Note: This etext was produced from Galaxy Magazine October 1960. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] The weatherman was always right: Temperature, 59; humidity, 47%; occasional light showers—but of what? The pockets of Mr. Humphrey Fownes were being picked outrageously. It was a splendid day. The temperature was a crisp 59 degrees, the humidity a mildly dessicated 47%. The sun was a flaming orange ball in a cloudless blue sky. His pockets were picked eleven times. It should have been difficult. Under the circumstances it was a masterpiece of pocket picking. What made it possible was Humphrey Fownes' abstraction; he was an uncommonly preoccupied individual. He was strolling along a quiet residential avenue: small private houses, one after another, a place of little traffic and minimum distractions. But he was thinking about weather, which was an unusual subject to begin with for a person living in a domed city. He was thinking so deeply about it that it never occurred to him that entirely too many people were bumping into him. He was thinking about Optimum Dome Conditions (a crisp 59 degrees, a mildly dessicated 47%) when a bogus postman, who pretended to be reading a postal card, jostled him. In the confusion of spilled letters and apologies from both sides, the postman rifled Fownes's handkerchief and inside jacket pockets. He was still thinking about temperature and humidity when a pretty girl happened along with something in her eye. They collided. She got his right and left jacket pockets. It was much too much for coincidence. The sidewalk was wide enough to allow four people to pass at one time. He should surely have become suspicious when two men engaged in a heated argument came along. In the ensuing contretemps they emptied his rear pants pockets, got his wristwatch and restored the contents of the handkerchief pocket. 
It all went off very smoothly, like a game of put and take—the sole difference being that Humphrey Fownes had no idea he was playing. There was an occasional tinkle of falling glass. It fell on the streets and houses, making small geysers of shiny mist, hitting with a gentle musical sound, like the ephemeral droppings of a celesta. It was precipitation peculiar to a dome: feather-light fragments showering harmlessly on the city from time to time. Dome weevils, their metal arms reaching out with molten glass, roamed the huge casserole, ceaselessly patching and repairing. Humphrey Fownes strode through the puffs of falling glass still intrigued by a temperature that was always 59 degrees, by a humidity that was always 47%, by weather that was always Optimum. It was this rather than skill that enabled the police to maintain such a tight surveillance on him, a surveillance that went to the extent of getting his fingerprints off the postman's bag, and which photographed, X-rayed and chemically analyzed the contents of his pockets before returning them. Two blocks away from his home a careless housewife spilled a five-pound bag of flour as he was passing. It was really plaster of Paris. He left his shoe prints, stride measurement, height, weight and handedness behind. By the time Fownes reached his front door an entire dossier complete with photographs had been prepared and was being read by two men in an orange patrol car parked down the street. Lanfierre had undoubtedly been affected by his job. Sitting behind the wheel of the orange car, he watched Humphrey Fownes approach with a distinct feeling of admiration, although it was an odd, objective kind of admiration, clinical in nature. It was similar to that of a pathologist observing for the first time a new and particularly virulent strain of pneumococcus under his microscope. Lanfierre's job was to ferret out aberration. It couldn't be tolerated within the confines of a dome. 
Conformity had become more than a social force; it was a physical necessity. And, after years of working at it, Lanfierre had become an admirer of eccentricity. He came to see that genuine quirks were rare and, as time went on, due partly to his own small efforts, rarer. Fownes was a masterpiece of queerness. He was utterly inexplicable. Lanfierre was almost proud of Humphrey Fownes. "Sometimes his house shakes ," Lanfierre said. "House shakes," Lieutenant MacBride wrote in his notebook. Then he stopped and frowned. He reread what he'd just written. "You heard right. The house shakes ," Lanfierre said, savoring it. MacBride looked at the Fownes house through the magnifying glass of the windshield. "Like from ... side to side ?" he asked in a somewhat patronizing tone of voice. "And up and down." MacBride returned the notebook to the breast pocket of his orange uniform. "Go on," he said, amused. "It sounds interesting." He tossed the dossier carelessly on the back seat. Lanfierre sat stiffly behind the wheel, affronted. The cynical MacBride couldn't really appreciate fine aberrations. In some ways MacBride was a barbarian. Lanfierre had held out on Fownes for months. He had even contrived to engage him in conversation once, a pleasantly absurd, irrational little chat that titillated him for weeks. It was only with the greatest reluctance that he finally mentioned Fownes to MacBride. After years of searching for differences Lanfierre had seen how extraordinarily repetitious people were, echoes really, dimly resounding echoes, each believing itself whole and separate. They spoke in an incessant chatter of cliches, and their actions were unbelievably trite. Then a fine robust freak came along and the others—the echoes—refused to believe it. The lieutenant was probably on the point of suggesting a vacation. "Why don't you take a vacation?" Lieutenant MacBride suggested. "It's like this, MacBride. Do you know what a wind is? A breeze? A zephyr?" "I've heard some." 
"They say there are mountain-tops where winds blow all the time. Strong winds, MacBride. Winds like you and I can't imagine. And if there was a house sitting on such a mountain and if winds did blow, it would shake exactly the way that one does. Sometimes I get the feeling the whole place is going to slide off its foundation and go sailing down the avenue." Lieutenant MacBride pursed his lips. "I'll tell you something else," Lanfierre went on. "The windows all close at the same time. You'll be watching and all of a sudden every single window in the place will drop to its sill." Lanfierre leaned back in the seat, his eyes still on the house. "Sometimes I think there's a whole crowd of people in there waiting for a signal—as if they all had something important to say but had to close the windows first so no one could hear. Why else close the windows in a domed city? And then as soon as the place is buttoned up they all explode into conversation—and that's why the house shakes." MacBride whistled. "No, I don't need a vacation." A falling piece of glass dissolved into a puff of gossamer against the windshield. Lanfierre started and bumped his knee on the steering wheel. "No, you don't need a rest," MacBride said. "You're starting to see flying houses, hear loud babbling voices. You've got winds in your brain, Lanfierre, breezes of fatigue, zephyrs of irrationality—" At that moment, all at once, every last window in the house slammed shut. The street was deserted and quiet, not a movement, not a sound. MacBride and Lanfierre both leaned forward, as if waiting for the ghostly babble of voices to commence. The house began to shake. It rocked from side to side, it pitched forward and back, it yawed and dipped and twisted, straining at the mooring of its foundation. The house could have been preparing to take off and sail down the.... MacBride looked at Lanfierre and Lanfierre looked at MacBride and then they both looked back at the dancing house. 
"And the water ," Lanfierre said. "The water he uses! He could be the thirstiest and cleanest man in the city. He could have a whole family of thirsty and clean kids, and he still wouldn't need all that water." The lieutenant had picked up the dossier. He thumbed through the pages now in amazement. "Where do you get a guy like this?" he asked. "Did you see what he carries in his pockets?" "And compasses won't work on this street." The lieutenant lit a cigarette and sighed. He usually sighed when making the decision to raid a dwelling. It expressed his weariness and distaste for people who went off and got neurotic when they could be enjoying a happy, normal existence. There was something implacable about his sighs. "He'll be coming out soon," Lanfierre said. "He eats supper next door with a widow. Then he goes to the library. Always the same. Supper at the widow's next door and then the library." MacBride's eyebrows went up a fraction of an inch. "The library?" he said. "Is he in with that bunch?" Lanfierre nodded. "Should be very interesting," MacBride said slowly. "I can't wait to see what he's got in there," Lanfierre murmured, watching the house with a consuming interest. They sat there smoking in silence and every now and then their eyes widened as the house danced a new step. Fownes stopped on the porch to brush the plaster of paris off his shoes. He hadn't seen the patrol car and this intense preoccupation of his was also responsible for the dancing house—he simply hadn't noticed. There was a certain amount of vibration, of course. He had a bootleg pipe connected into the dome blower system, and the high-pressure air caused some buffeting against the thin walls of the house. At least, he called it buffeting; he'd never thought to watch from outside. He went in and threw his jacket on the sofa, there being no room left in the closets. Crossing the living room he stopped to twist a draw-pull. Every window slammed shut. "Tight as a kite," he thought, satisfied. 
He continued on toward the closet at the foot of the stairs and then stopped again. Was that right? No, snug as a hug in a rug . He went on, thinking: The old devils. The downstairs closet was like a great watch case, a profusion of wheels surrounding the Master Mechanism, which was a miniature see-saw that went back and forth 365-1/4 times an hour. The wheels had a curious stateliness about them. They were all quite old, salvaged from grandfather's clocks and music boxes and they went around in graceful circles at the rate of 30 and 31 times an hour ... although there was one slightly eccentric cam that vacillated between 28 and 29. He watched as they spun and flashed in the darkness, and then set them for seven o'clock in the evening, April seventh, any year. Outside, the domed city vanished. It was replaced by an illusion. Or, as Fownes hoped it might appear, the illusion of the domed city vanished and was replaced by a more satisfactory, and, for his specific purpose, more functional, illusion. Looking through the window he saw only a garden. Instead of an orange sun at perpetual high noon, there was a red sun setting brilliantly, marred only by an occasional arcover which left the smell of ozone in the air. There was also a gigantic moon. It hid a huge area of sky, and it sang. The sun and moon both looked down upon a garden that was itself scintillant, composed largely of neon roses. Moonlight, he thought, and roses. Satisfactory. And cocktails for two. Blast, he'd never be able to figure that one out! He watched as the moon played, Oh, You Beautiful Doll and the neon roses flashed slowly from red to violet, then went back to the closet and turned on the scent. The house began to smell like an immensely concentrated rose as the moon shifted to People Will Say We're In Love . He rubbed his chin critically. It seemed all right. A dreamy sunset, an enchanted moon, flowers, scent. They were all purely speculative of course. 
He had no idea how a rose really smelled—or looked for that matter. Not to mention a moon. But then, neither did the widow. He'd have to be confident, assertive. Insist on it. I tell you, my dear, this is a genuine realistic romantic moon. Now, does it do anything to your pulse? Do you feel icy fingers marching up and down your spine? His own spine didn't seem to be affected. But then he hadn't read that book on ancient mores and courtship customs. How really odd the ancients were. Seduction seemed to be an incredibly long and drawn-out process, accompanied by a considerable amount of falsification. Communication seemed virtually impossible. "No" meant any number of things, depending on the tone of voice and the circumstances. It could mean yes, it could mean ask me again later on this evening. He went up the stairs to the bedroom closet and tried the rain-maker, thinking roguishly: Thou shalt not inundate. The risks he was taking! A shower fell gently on the garden and a male chorus began to chant Singing in the Rain . Undiminished, the yellow moon and the red sun continued to be brilliant, although the sun occasionally arced over and demolished several of the neon roses. The last wheel in the bedroom closet was a rather elegant steering wheel from an old 1995 Studebaker. This was on the bootleg pipe; he gingerly turned it. Far below in the cellar there was a rumble and then the soft whistle of winds came to him. He went downstairs to watch out the living room window. This was important; the window had a really fixed attitude about air currents. The neon roses bent and tinkled against each other as the wind rose and the moon shook a trifle as it whispered Cuddle Up a Little Closer . He watched with folded arms, considering how he would start. My dear Mrs. Deshazaway. Too formal. They'd be looking out at the romantic garden; time to be a bit forward. My very dear Mrs. Deshazaway. No. Contrived. How about a simple, Dear Mrs. Deshazaway . That might be it. 
I was wondering, seeing as how it's so late, if you wouldn't rather stay over instead of going home.... Preoccupied, he hadn't noticed the winds building up, didn't hear the shaking and rattling of the pipes. There were attic pipes connected to wall pipes and wall pipes connected to cellar pipes, and they made one gigantic skeleton that began to rattle its bones and dance as high-pressure air from the dome blower rushed in, slowly opening the Studebaker valve wider and wider.... The neon roses thrashed about, extinguishing each other. The red sun shot off a mass of sparks and then quickly sank out of sight. The moon fell on the garden and rolled ponderously along, crooning When the Blue of the Night Meets the Gold of the Day . The shaking house finally woke him up. He scrambled upstairs to the Studebaker wheel and shut it off. At the window again, he sighed. Repairs were in order. And it wasn't the first time the winds got out of line. Why didn't she marry him and save all this bother? He shut it all down and went out the front door, wondering about the rhyme of the months, about stately August and eccentric February and romantic April. April. Its days were thirty and it followed September. And all the rest have thirty-one. What a strange people, the ancients! He still didn't see the orange car parked down the street. "Men are too perishable," Mrs. Deshazaway said over dinner. "For all practical purposes I'm never going to marry again. All my husbands die." "Would you pass the beets, please?" Humphrey Fownes said. She handed him a platter of steaming red beets. "And don't look at me that way," she said. "I'm not going to marry you and if you want reasons I'll give you four of them. Andrew. Curt. Norman. And Alphonse." The widow was a passionate woman. She did everything passionately—talking, cooking, dressing. Her beets were passionately red. Her clothes rustled and her high heels clicked and her jewelry tinkled. She was possessed by an uncontrollable dynamism. 
Fownes had never known anyone like her. "You forgot to put salt on the potatoes," she said passionately, then went on as calmly as it was possible for her to be, to explain why she couldn't marry him. "Do you have any idea what people are saying? They're all saying I'm a cannibal! I rob my husbands of their life force and when they're empty I carry their bodies outside on my way to the justice of the peace." "As long as there are people," he said philosophically, "there'll be talk." "But it's the air! Why don't they talk about that? The air is stale, I'm positive. It's not nourishing. The air is stale and Andrew, Curt, Norman and Alphonse couldn't stand it. Poor Alphonse. He was never so healthy as on the day he was born. From then on things got steadily worse for him." "I don't seem to mind the air." She threw up her hands. "You'd be the worst of the lot!" She left the table, rustling and tinkling about the room. "I can just hear them. Try some of the asparagus. Five. That's what they'd say. That woman did it again. And the plain fact is I don't want you on my record." "Really," Fownes protested. "I feel splendid. Never better." He could hear her moving about and then felt her hands on his shoulders. "And what about those very elaborate plans you've been making to seduce me?" Fownes froze with three asparagus hanging from his fork. "Don't you think they'll find out? I found out and you can bet they will. It's my fault, I guess. I talk too much. And I don't always tell the truth. To be completely honest with you, Mr. Fownes, it wasn't the old customs at all standing between us, it was air. I can't have another man die on me, it's bad for my self-esteem. And now you've gone and done something good and criminal, something peculiar." Fownes put his fork down. "Dear Mrs. Deshazaway," he started to say. "And of course when they do find out and they ask you why, Mr. Fownes, you'll tell them. No, no heroics, please! 
When they ask a man a question he always answers and you will too. You'll tell them I wanted to be courted and when they hear that they'll be around to ask me a few questions. You see, we're both a bit queer." "I hadn't thought of that," Fownes said quietly. "Oh, it doesn't really matter. I'll join Andrew, Curt, Norman—" "That won't be necessary," Fownes said with unusual force. "With all due respect to Andrew, Curt, Norman and Alphonse, I might as well state here and now I have other plans for you, Mrs. Deshazaway." "But my dear Mr. Fownes," she said, leaning across the table. "We're lost, you and I." "Not if we could leave the dome," Fownes said quietly. "That's impossible! How?" In no hurry, now that he had the widow's complete attention, Fownes leaned across the table and whispered: "Fresh air, Mrs. Deshazaway? Space? Miles and miles of space where the real-estate monopoly has no control whatever? Where the wind blows across prairies ; or is it the other way around? No matter. How would you like that , Mrs. Deshazaway?" Breathing somewhat faster than usual, the widow rested her chin on her two hands. "Pray continue," she said. "Endless vistas of moonlight and roses? April showers, Mrs. Deshazaway. And June, which as you may know follows directly upon April and is supposed to be the month of brides, of marrying. June also lies beyond the dome." "I see." " And ," Mr. Fownes added, his voice a honeyed whisper, "they say that somewhere out in the space and the roses and the moonlight, the sleeping equinox yawns and rises because on a certain day it's vernal and that's when it roams the Open Country where geigers no longer scintillate." " My. " Mrs. Deshazaway rose, paced slowly to the window and then came back to the table, standing directly over Fownes. "If you can get us outside the dome," she said, "out where a man stays warm long enough for his wife to get to know him ... if you can do that, Mr. Fownes ... you may call me Agnes." 
When Humphrey Fownes stepped out of the widow's house, there was a look of such intense abstraction on his features that Lanfierre felt a wistful desire to get out of the car and walk along with the man. It would be such a deliciously insane experience. ("April has thirty days," Fownes mumbled, passing them, "because thirty is the largest number such that all smaller numbers not having a common divisor with it are primes ." MacBride frowned and added it to the dossier. Lanfierre sighed.) Pinning his hopes on the Movement, Fownes went straight to the library several blocks away, a shattered depressing place given over to government publications and censored old books with holes in them. It was used so infrequently that the Movement was able to meet there undisturbed. The librarian was a yellowed, dog-eared woman of eighty. She spent her days reading ancient library cards and, like the books around her, had been rendered by time's own censor into near unintelligibility. "Here's one," she said to him as he entered. " Gulliver's Travels. Loaned to John Wesley Davidson on March 14, 1979 for five days. What do you make of it?" In the litter of books and cards and dried out ink pads that surrounded the librarian, Fownes noticed a torn dust jacket with a curious illustration. "What's that?" he said. "A twister," she replied quickly. "Now listen to this . Seven years later on March 21, 1986, Ella Marshall Davidson took out the same book. What do you make of that ?" "I'd say," Humphrey Fownes said, "that he ... that he recommended it to her, that one day they met in the street and he told her about this book and then they ... they went to the library together and she borrowed it and eventually, why eventually they got married." "Hah! They were brother and sister!" the librarian shouted in her parched voice, her old buckram eyes laughing with cunning. Fownes smiled weakly and looked again at the dust jacket. The twister was unquestionably a meteorological phenomenon. 
It spun ominously, like a malevolent top, and coursed the countryside destructively, carrying a Dorothy to an Oz. He couldn't help wondering if twisters did anything to feminine pulses, if they could possibly be a part of a moonlit night, with cocktails and roses. He absently stuffed the dust jacket in his pocket and went on into the other rooms, the librarian mumbling after him: "Edna Murdoch Featherstone, April 21, 1991," as though reading inscriptions on a tombstone. The Movement met in what had been the children's room, where unpaid ladies of the afternoon had once upon a time read stories to other people's offspring. The members sat around at the miniature tables looking oddly like giants fled from their fairy tales, protesting. "Where did the old society fail?" the leader was demanding of them. He stood in the center of the room, leaning on a heavy knobbed cane. He glanced around at the group almost complacently, and waited as Humphrey Fownes squeezed into an empty chair. "We live in a dome," the leader said, "for lack of something. An invention! What is the one thing that the great technological societies before ours could not invent, notwithstanding their various giant brains, electronic and otherwise?" Fownes was the kind of man who never answered a rhetorical question. He waited, uncomfortable in the tight chair, while the others struggled with this problem in revolutionary dialectics. " A sound foreign policy ," the leader said, aware that no one else had obtained the insight. "If a sound foreign policy can't be created the only alternative is not to have any foreign policy at all. Thus the movement into domes began— by common consent of the governments . This is known as self-containment." Dialectically out in left field, Humphrey Fownes waited for a lull in the ensuing discussion and then politely inquired how it might be arranged for him to get out. "Out?" the leader said, frowning. "Out? Out where?" "Outside the dome." "Oh. 
All in good time, my friend. One day we shall all pick up and leave." "And that day I'll await impatiently," Fownes replied with marvelous tact, "because it will be lonely out there for the two of us. My future wife and I have to leave now ." "Nonsense. Ridiculous! You have to be prepared for the Open Country. You can't just up and leave, it would be suicide, Fownes. And dialectically very poor." "Then you have discussed preparations, the practical necessities of life in the Open Country. Food, clothing, a weapon perhaps? What else? Have I left anything out?" The leader sighed. "The gentleman wants to know if he's left anything out," he said to the group. Fownes looked around at them, at some dozen pained expressions. "Tell the man what he's forgotten," the leader said, walking to the far window and turning his back quite pointedly on them. Everyone spoke at the same moment. " A sound foreign policy ," they all said, it being almost too obvious for words. On his way out the librarian shouted at him: " A Tale of a Tub , thirty-five years overdue!" She was calculating the fine as he closed the door. Humphrey Fownes' preoccupation finally came to an end when he was one block away from his house. It was then that he realized something unusual must have occurred. An orange patrol car of the security police was parked at his front door. And something else was happening too. His house was dancing. It was disconcerting, and at the same time enchanting, to watch one's residence frisking about on its foundation. It was such a strange sight that for the moment he didn't give a thought to what might be causing it. But when he stepped gingerly onto the porch, which was doing its own independent gavotte, he reached for the doorknob with an immense curiosity. The door flung itself open and knocked him back off the porch. 
From a prone position on his minuscule front lawn, Fownes watched as his favorite easy chair sailed out of the living room on a blast of cold air and went pinwheeling down the avenue in the bright sunshine. A wild wind and a thick fog poured out of the house. It brought chairs, suits, small tables, lamps trailing their cords, ashtrays, sofa cushions. The house was emptying itself fiercely, as if disgorging an old, spoiled meal. From deep inside he could hear the rumble of his ancient upright piano as it rolled ponderously from room to room. He stood up; a wet wind swept over him, whipping at his face, toying with his hair. It was a whistling in his ears, and a tingle on his cheeks. He got hit by a shoe. As he forced his way back to the doorway needles of rain played over his face and he heard a voice cry out from somewhere in the living room. "Help!" Lieutenant MacBride called. Standing in the doorway with his wet hair plastered down on his dripping scalp, the wind roaring about him, the piano rumbling in the distance like thunder, Humphrey Fownes suddenly saw it all very clearly. " Winds ," he said in a whisper. "What's happening?" MacBride yelled, crouching behind the sofa. " March winds," he said. "What?!" "April showers!" The winds roared for a moment and then MacBride's lost voice emerged from the blackness of the living room. "These are not Optimum Dome Conditions!" the voice wailed. "The temperature is not 59 degrees. The humidity is not 47%!" Fownes held his face up to let the rain fall on it. "Moonlight!" he shouted. "Roses! My soul for a cocktail for two!" He grasped the doorway to keep from being blown out of the house. "Are you going to make it stop or aren't you!" MacBride yelled. "You'll have to tell me what you did first!" "I told him not to touch that wheel! Lanfierre. He's in the upstairs bedroom!" When he heard this Fownes plunged into the house and fought his way up the stairs. 
He found Lanfierre standing outside the bedroom with a wheel in his hand. "What have I done?" Lanfierre asked in the monotone of shock. Fownes took the wheel. It was off a 1995 Studebaker. "I'm not sure what's going to come of this," he said to Lanfierre with an astonishing amount of objectivity, "but the entire dome air supply is now coming through my bedroom." The wind screamed. "Is there something I can turn?" Lanfierre asked. "Not any more there isn't." They started down the stairs carefully, but the wind caught them and they quickly reached the bottom in a wet heap. Recruiting Lieutenant MacBride from behind his sofa, the men carefully edged out of the house and forced the front door shut. The wind died. The fog dispersed. They stood dripping in the Optimum Dome Conditions of the bright avenue. "I never figured on this ," Lanfierre said, shaking his head. With the front door closed the wind quickly built up inside the house. They could see the furnishing whirl past the windows. The house did a wild, elated jig. "What kind of a place is this?" MacBride said, his courage beginning to return. He took out his notebook but it was a soggy mess. He tossed it away. "Sure, he was different ," Lanfierre murmured. "I knew that much." When the roof blew off they weren't really surprised. With a certain amount of equanimity they watched it lift off almost gracefully, standing on end for a moment before toppling to the ground. It was strangely slow motion, as was the black twirling cloud that now rose out of the master bedroom, spewing shorts and socks and cases every which way. " Now what?" MacBride said, thoroughly exasperated, as this strange black cloud began to accelerate, whirling about like some malevolent top.... Humphrey Fownes took out the dust jacket he'd found in the library. He held it up and carefully compared the spinning cloud in his bedroom with the illustration. The cloud rose and spun, assuming the identical shape of the illustration. 
"It's a twister," he said softly. "A Kansas twister!" "What," MacBride asked, his bravado slipping away again, "what ... is a twister?" The twister roared and moved out of the bedroom, out over the rear of the house toward the side of the dome. "It says here," Fownes shouted over the roaring, "that Dorothy traveled from Kansas to Oz in a twister and that ... and that Oz is a wonderful and mysterious land beyond the confines of everyday living ." MacBride's eyes and mouth were great zeros. "Is there something I can turn?" Lanfierre asked. Huge chunks of glass began to fall around them. "Fownes!" MacBride shouted. "This is a direct order! Make it go back!" But Fownes had already begun to run on toward the next house, dodging mountainous puffs of glass as he went. "Mrs. Deshazaway!" he shouted. "Yoo-hoo, Mrs. Deshazaway!" The dome weevils were going berserk trying to keep up with the precipitation. They whirred back and forth at frightful speed, then, emptied of molten glass, rushed to the Trough which they quickly emptied and then rushed about empty-handed. "Yoo-hoo!" he yelled, running. The artificial sun vanished behind the mushrooming twister. Optimum temperature collapsed. "Mrs. Deshazaway! Agnes , will you marry me? Yoo-hoo!" Lanfierre and Lieutenant MacBride leaned against their car and waited, dazed. There was quite a large fall of glass.
|
C. a promotion from his job
|
What is significant about the “secret” Retief unveils about the Soetti?
A. They're easier to take down than they thought, meaning they can stand up to the Soetti.
B. The Soetti are going to exact revenge on the crew now that he's exposed their secret.
C. They don't have the right to be asking for papers, making their presence on board illegal.
D. They're easy to bluff against. They'll believe what the captain tells them.
|
THE FROZEN PLANET By Keith Laumer [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, September 1961. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] "It is rather unusual," Magnan said, "to assign an officer of your rank to courier duty, but this is an unusual mission." Retief sat relaxed and said nothing. Just before the silence grew awkward, Magnan went on. "There are four planets in the group," he said. "Two double planets, all rather close to an unimportant star listed as DRI-G 33987. They're called Jorgensen's Worlds, and in themselves are of no importance whatever. However, they lie deep in the sector into which the Soetti have been penetrating. "Now—" Magnan leaned forward and lowered his voice—"we have learned that the Soetti plan a bold step forward. Since they've met no opposition so far in their infiltration of Terrestrial space, they intend to seize Jorgensen's Worlds by force." Magnan leaned back, waiting for Retief's reaction. Retief drew carefully on his cigar and looked at Magnan. Magnan frowned. "This is open aggression, Retief," he said, "in case I haven't made myself clear. Aggression on Terrestrial-occupied territory by an alien species. Obviously, we can't allow it." Magnan drew a large folder from his desk. "A show of resistance at this point is necessary. Unfortunately, Jorgensen's Worlds are technologically undeveloped areas. They're farmers or traders. Their industry is limited to a minor role in their economy—enough to support the merchant fleet, no more. The war potential, by conventional standards, is nil." Magnan tapped the folder before him. "I have here," he said solemnly, "information which will change that picture completely." He leaned back and blinked at Retief. "All right, Mr. Councillor," Retief said. "I'll play along; what's in the folder?" Magnan spread his fingers, folded one down. "First," he said. "The Soetti War Plan—in detail. 
We were fortunate enough to make contact with a defector from a party of renegade Terrestrials who've been advising the Soetti." He folded another finger. "Next, a battle plan for the Jorgensen's people, worked out by the Theory group." He wrestled a third finger down. "Lastly; an Utter Top Secret schematic for conversion of a standard anti-acceleration field into a potent weapon—a development our systems people have been holding in reserve for just such a situation." "Is that all?" Retief said. "You've still got two fingers sticking up." Magnan looked at the fingers and put them away. "This is no occasion for flippancy, Retief. In the wrong hands, this information could be catastrophic. You'll memorize it before you leave this building." "I'll carry it, sealed," Retief said. "That way nobody can sweat it out of me." Magnan started to shake his head. "Well," he said. "If it's trapped for destruction, I suppose—" "I've heard of these Jorgensen's Worlds," Retief said. "I remember an agent, a big blond fellow, very quick on the uptake. A wizard with cards and dice. Never played for money, though." "Umm," Magnan said. "Don't make the error of personalizing this situation, Retief. Overall policy calls for a defense of these backwater worlds. Otherwise the Corps would allow history to follow its natural course, as always." "When does this attack happen?" "Less than four weeks." "That doesn't leave me much time." "I have your itinerary here. Your accommodations are clear as far as Aldo Cerise. You'll have to rely on your ingenuity to get you the rest of the way." "That's a pretty rough trip, Mr. Councillor. Suppose I don't make it?" Magnan looked sour. "Someone at a policy-making level has chosen to put all our eggs in one basket, Retief. I hope their confidence in you is not misplaced." "This antiac conversion; how long does it take?" "A skilled electronics crew can do the job in a matter of minutes. 
The Jorgensens can handle it very nicely; every other man is a mechanic of some sort." Retief opened the envelope Magnan handed him and looked at the tickets inside. "Less than four hours to departure time," he said. "I'd better not start any long books." "You'd better waste no time getting over to Indoctrination," Magnan said. Retief stood up. "If I hurry, maybe I can catch the cartoon." "The allusion escapes me," Magnan said coldly. "And one last word. The Soetti are patrolling the trade lanes into Jorgensen's Worlds; don't get yourself interned." "I'll tell you what," Retief said soberly. "In a pinch, I'll mention your name." "You'll be traveling with Class X credentials," Magnan snapped. "There must be nothing to connect you with the Corps." "They'll never guess," Retief said. "I'll pose as a gentleman." "You'd better be getting started," Magnan said, shuffling papers. "You're right," Retief said. "If I work at it, I might manage a snootful by takeoff." He went to the door. "No objection to my checking out a needler, is there?" Magnan looked up. "I suppose not. What do you want with it?" "Just a feeling I've got." "Please yourself." "Some day," Retief said, "I may take you up on that." II Retief put down the heavy travel-battered suitcase and leaned on the counter, studying the schedules chalked on the board under the legend "ALDO CERISE—INTERPLANETARY." A thin clerk in a faded sequined blouse and a plastic snakeskin cummerbund groomed his fingernails, watching Retief from the corner of his eye. Retief glanced at him. The clerk nipped off a ragged corner with rabbitlike front teeth and spat it on the floor. "Was there something?" he said. "Two twenty-eight, due out today for the Jorgensen group," Retief said. "Is it on schedule?" The clerk sampled the inside of his right cheek, eyed Retief. "Filled up. Try again in a couple of weeks." "What time does it leave?" "I don't think—" "Let's stick to facts," Retief said. "Don't try to think. What time is it due out?" 
The clerk smiled pityingly. "It's my lunch hour," he said. "I'll be open in an hour." He held up a thumb nail, frowned at it. "If I have to come around this counter," Retief said, "I'll feed that thumb to you the hard way." The clerk looked up and opened his mouth. Then he caught Retief's eye, closed his mouth and swallowed. "Like it says there," he said, jerking a thumb at the board. "Lifts in an hour. But you won't be on it," he added. Retief looked at him. "Some ... ah ... VIP's required accommodation," he said. He hooked a finger inside the sequined collar. "All tourist reservations were canceled. You'll have to try to get space on the Four-Planet Line ship next—" "Which gate?" Retief said. "For ... ah...?" "For the two twenty-eight for Jorgensen's Worlds," Retief said. "Well," the clerk said. "Gate 19," he added quickly. "But—" Retief picked up his suitcase and walked away toward the glare sign reading To Gates 16-30 . "Another smart alec," the clerk said behind him. Retief followed the signs, threaded his way through crowds, found a covered ramp with the number 228 posted over it. A heavy-shouldered man with a scarred jawline and small eyes was slouching there in a rumpled gray uniform. He put out a hand as Retief started past him. "Lessee your boarding pass," he muttered. Retief pulled a paper from an inside pocket, handed it over. The guard blinked at it. "Whassat?" "A gram confirming my space," Retief said. "Your boy on the counter says he's out to lunch." The guard crumpled the gram, dropped it on the floor and lounged back against the handrail. "On your way, bub," he said. Retief put his suitcase carefully on the floor, took a step and drove a right into the guard's midriff. He stepped aside as the man doubled and went to his knees. "You were wide open, ugly. I couldn't resist. Tell your boss I sneaked past while you were resting your eyes." He picked up his bag, stepped over the man and went up the gangway into the ship. 
A cabin boy in stained whites came along the corridor. "Which way to cabin fifty-seven, son?" Retief asked. "Up there." The boy jerked his head and hurried on. Retief made his way along the narrow hall, found signs, followed them to cabin fifty-seven. The door was open. Inside, baggage was piled in the center of the floor. It was expensive looking baggage. Retief put his bag down. He turned at a sound behind him. A tall, florid man with an expensive coat belted over a massive paunch stood in the open door, looking at Retief. Retief looked back. The florid man clamped his jaws together, turned to speak over his shoulder. "Somebody in the cabin. Get 'em out." He rolled a cold eye at Retief as he backed out of the room. A short, thick-necked man appeared. "What are you doing in Mr. Tony's room?" he barked. "Never mind! Clear out of here, fellow! You're keeping Mr. Tony waiting." "Too bad," Retief said. "Finders keepers." "You nuts?" The thick-necked man stared at Retief. "I said it's Mr. Tony's room." "I don't know Mr. Tony. He'll have to bull his way into other quarters." "We'll see about you, mister." The man turned and went out. Retief sat on the bunk and lit a cigar. There was a sound of voices in the corridor. Two burly baggage-smashers appeared, straining at an oversized trunk. They maneuvered it through the door, lowered it, glanced at Retief and went out. The thick-necked man returned. "All right, you. Out," he growled. "Or have I got to have you thrown out?" Retief rose and clamped the cigar between his teeth. He gripped a handle of the brass-bound trunk in each hand, bent his knees and heaved the trunk up to chest level, then raised it overhead. He turned to the door. "Catch," he said between clenched teeth. The trunk slammed against the far wall of the corridor and burst. Retief turned to the baggage on the floor, tossed it into the hall. The face of the thick-necked man appeared cautiously around the door jamb. 
"Mister, you must be—" "If you'll excuse me," Retief said, "I want to catch a nap." He flipped the door shut, pulled off his shoes and stretched out on the bed. Five minutes passed before the door rattled and burst open. Retief looked up. A gaunt leathery-skinned man wearing white ducks, a blue turtleneck sweater and a peaked cap tilted raffishly over one eye stared at Retief. "Is this the joker?" he grated. The thick-necked man edged past him, looked at Retief and snorted, "That's him, sure." "I'm captain of this vessel," the first man said. "You've got two minutes to haul your freight out of here, buster." "When you can spare the time from your other duties," Retief said, "take a look at Section Three, Paragraph One, of the Uniform Code. That spells out the law on confirmed space on vessels engaged in interplanetary commerce." "A space lawyer." The captain turned. "Throw him out, boys." Two big men edged into the cabin, looking at Retief. "Go on, pitch him out," the captain snapped. Retief put his cigar in an ashtray, and swung his feet off the bunk. "Don't try it," he said softly. One of the two wiped his nose on a sleeve, spat on his right palm, and stepped forward, then hesitated. "Hey," he said. "This the guy tossed the trunk off the wall?" "That's him," the thick-necked man called. "Spilled Mr. Tony's possessions right on the deck." "Deal me out," the bouncer said. "He can stay put as long as he wants to. I signed on to move cargo. Let's go, Moe." "You'd better be getting back to the bridge, Captain," Retief said. "We're due to lift in twenty minutes." The thick-necked man and the Captain both shouted at once. The Captain's voice prevailed. "—twenty minutes ... uniform Code ... gonna do?" "Close the door as you leave," Retief said. The thick-necked man paused at the door. "We'll see you when you come out." III Four waiters passed Retief's table without stopping. A fifth leaned against the wall nearby, a menu under his arm. 
At a table across the room, the Captain, now wearing a dress uniform and with his thin red hair neatly parted, sat with a table of male passengers. He talked loudly and laughed frequently, casting occasional glances Retief's way. A panel opened in the wall behind Retief's chair. Bright blue eyes peered out from under a white chef's cap. "Givin' you the cold shoulder, heh, Mister?" "Looks like it, old-timer," Retief said. "Maybe I'd better go join the skipper. His party seems to be having all the fun." "Feller has to be mighty careless who he eats with to set over there." "I see your point." "You set right where you're at, Mister. I'll rustle you up a plate." Five minutes later, Retief cut into a thirty-two ounce Delmonico backed up with mushrooms and garlic butter. "I'm Chip," the chef said. "I don't like the Cap'n. You can tell him I said so. Don't like his friends, either. Don't like them dern Sweaties, look at a man like he was a worm." "You've got the right idea on frying a steak, Chip. And you've got the right idea on the Soetti, too," Retief said. He poured red wine into a glass. "Here's to you." "Dern right," Chip said. "Dunno who ever thought up broiling 'em. Steaks, that is. I got a Baked Alaska coming up in here for dessert. You like brandy in yer coffee?" "Chip, you're a genius." "Like to see a feller eat," Chip said. "I gotta go now. If you need anything, holler." Retief ate slowly. Time always dragged on shipboard. Four days to Jorgensen's Worlds. Then, if Magnan's information was correct, there would be four days to prepare for the Soetti attack. It was a temptation to scan the tapes built into the handle of his suitcase. It would be good to know what Jorgensen's Worlds would be up against. Retief finished the steak, and the chef passed out the baked Alaska and coffee. Most of the other passengers had left the dining room. Mr. Tony and his retainers still sat at the Captain's table. 
As Retief watched, four men arose from the table and sauntered across the room. The first in line, a stony-faced thug with a broken ear, took a cigar from his mouth as he reached the table. He dipped the lighted end in Retief's coffee, looked at it, and dropped it on the tablecloth. The others came up, Mr. Tony trailing. "You must want to get to Jorgensen's pretty bad," the thug said in a grating voice. "What's your game, hick?" Retief looked at the coffee cup, picked it up. "I don't think I want my coffee," he said. He looked at the thug. "You drink it." The thug squinted at Retief. "A wise hick," he began. With a flick of the wrist, Retief tossed the coffee into the thug's face, then stood and slammed a straight right to the chin. The thug went down. Retief looked at Mr. Tony, still standing open-mouthed. "You can take your playmates away now, Tony," he said. "And don't bother to come around yourself. You're not funny enough." Mr. Tony found his voice. "Take him, Marbles!" he growled. The thick-necked man slipped a hand inside his tunic and brought out a long-bladed knife. He licked his lips and moved in. Retief heard the panel open beside him. "Here you go, Mister," Chip said. Retief darted a glance; a well-honed french knife lay on the sill. "Thanks, Chip," Retief said. "I won't need it for these punks." Thick-neck lunged and Retief hit him square in the face, knocking him under the table. The other man stepped back, fumbling a power pistol from his shoulder holster. "Aim that at me, and I'll kill you," Retief said. "Go on, burn him!" Mr. Tony shouted. Behind him, the captain appeared, white-faced. "Put that away, you!" he yelled. "What kind of—" "Shut up," Mr. Tony said. "Put it away, Hoany. We'll fix this bum later." "Not on this vessel, you won't," the captain said shakily. "I got my charter to consider." "Ram your charter," Hoany said harshly. "You won't be needing it long." "Button your floppy mouth, damn you!" Mr. Tony snapped. 
He looked at the man on the floor. "Get Marbles out of here. I ought to dump the slob." He turned and walked away. The captain signaled and two waiters came up. Retief watched as they carted the casualty from the dining room. The panel opened. "I usta be about your size, when I was your age," Chip said. "You handled them pansies right. I wouldn't give 'em the time o' day." "How about a fresh cup of coffee, Chip?" Retief said. "Sure, Mister. Anything else?" "I'll think of something," Retief said. "This is shaping up into one of those long days." "They don't like me bringing yer meals to you in yer cabin," Chip said. "But the cap'n knows I'm the best cook in the Merchant Service. They won't mess with me." "What has Mr. Tony got on the captain, Chip?" Retief asked. "They're in some kind o' crooked business together. You want some more smoked turkey?" "Sure. What have they got against my going to Jorgensen's Worlds?" "Dunno. Hasn't been no tourists got in there fer six or eight months. I sure like a feller that can put it away. I was a big eater when I was yer age." "I'll bet you can still handle it, Old Timer. What are Jorgensen's Worlds like?" "One of 'em's cold as hell and three of 'em's colder. Most o' the Jorgies live on Svea; that's the least froze up. Man don't enjoy eatin' his own cookin' like he does somebody else's." "That's where I'm lucky, Chip. What kind of cargo's the captain got aboard for Jorgensen's?" "Derned if I know. In and out o' there like a grasshopper, ever few weeks. Don't never pick up no cargo. No tourists any more, like I says. Don't know what we even run in there for." "Where are the passengers we have aboard headed?" "To Alabaster. That's nine days' run in-sector from Jorgensen's. You ain't got another one of them cigars, have you?" "Have one, Chip. I guess I was lucky to get space on this ship." "Plenty o' space, Mister. We got a dozen empty cabins." Chip puffed the cigar alight, then cleared away the dishes, poured out coffee and brandy. 
"Them Sweaties is what I don't like," he said. Retief looked at him questioningly. "You never seen a Sweaty? Ugly lookin' devils. Skinny legs, like a lobster; big chest, shaped like the top of a turnip; rubbery lookin' head. You can see the pulse beatin' when they get riled." "I've never had the pleasure," Retief said. "You prob'ly have it perty soon. Them devils board us nigh ever trip out. Act like they was the Customs Patrol or somethin'." There was a distant clang, and a faint tremor ran through the floor. "I ain't superstitious ner nothin'," Chip said. "But I'll be triple-damned if that ain't them boarding us now." Ten minutes passed before bootsteps sounded outside the door, accompanied by a clicking patter. The doorknob rattled, then a heavy knock shook the door. "They got to look you over," Chip whispered. "Nosy damn Sweaties." "Unlock it, Chip." The chef opened the door. "Come in, damn you," he said. A tall and grotesque creature minced into the room, tiny hoof-like feet tapping on the floor. A flaring metal helmet shaded the deep-set compound eyes, and a loose mantle flapped around the knobbed knees. Behind the alien, the captain hovered nervously. "Yo' papiss," the alien rasped. "Who's your friend, Captain?" Retief said. "Never mind; just do like he tells you." "Yo' papiss," the alien said again. "Okay," Retief said. "I've seen it. You can take it away now." "Don't horse around," the captain said. "This fellow can get mean." The alien brought two tiny arms out from the concealment of the mantle, clicked toothed pincers under Retief's nose. "Quick, soft one." "Captain, tell your friend to keep its distance. It looks brittle, and I'm tempted to test it." "Don't start anything with Skaw; he can clip through steel with those snappers." "Last chance," Retief said. Skaw stood poised, open pincers an inch from Retief's eyes. "Show him your papers, you damned fool," the captain said hoarsely. "I got no control over Skaw." 
The alien clicked both pincers with a sharp report, and in the same instant Retief half-turned to the left, leaned away from the alien and drove his right foot against the slender leg above the bulbous knee-joint. Skaw screeched and floundered, greenish fluid spattering from the burst joint. "I told you he was brittle," Retief said. "Next time you invite pirates aboard, don't bother to call." "Jesus, what did you do! They'll kill us!" the captain gasped, staring at the figure flopping on the floor. "Cart poor old Skaw back to his boat," Retief said. "Tell him to pass the word. No more illegal entry and search of Terrestrial vessels in Terrestrial space." "Hey," Chip said. "He's quit kicking." The captain bent over Skaw, gingerly rolled him over. He leaned close and sniffed. "He's dead." The captain stared at Retief. "We're all dead men," he said. "These Soetti got no mercy." "They won't need it. Tell 'em to sheer off; their fun is over." "They got no more emotions than a blue crab—" "You bluff easily, Captain. Show a few guns as you hand the body back. We know their secret now." "What secret? I—" "Don't be no dumber than you got to, Cap'n," Chip said. "Sweaties die easy; that's the secret." "Maybe you got a point," the captain said, looking at Retief. "All they got's a three-man scout. It could work." He went out, came back with two crewmen. They hauled the dead alien gingerly into the hall. "Maybe I can run a bluff on the Soetti," the captain said, looking back from the door. "But I'll be back to see you later." "You don't scare us, Cap'n," Chip said. "Him and Mr. Tony and all his goons. You hit 'em where they live, that time. They're pals o' these Sweaties. Runnin' some kind o' crooked racket." "You'd better take the captain's advice, Chip. There's no point in your getting involved in my problems." "They'd of killed you before now, Mister, if they had any guts. That's where we got it over these monkeys. They got no guts." "They act scared, Chip. 
Scared men are killers." "They don't scare me none." Chip picked up the tray. "I'll scout around a little and see what's goin' on. If the Sweaties figure to do anything about that Skaw feller they'll have to move fast; they won't try nothin' close to port." "Don't worry, Chip. I have reason to be pretty sure they won't do anything to attract a lot of attention in this sector just now." Chip looked at Retief. "You ain't no tourist, Mister. I know that much. You didn't come out here for fun, did you?" "That," Retief said, "would be a hard one to answer." IV Retief awoke at a tap on his door. "It's me, Mister. Chip." "Come on in." The chef entered the room, locking the door. "You shoulda had that door locked." He stood by the door, listening, then turned to Retief. "You want to get to Jorgensen's perty bad, don't you, Mister?" "That's right, Chip." "Mr. Tony give the captain a real hard time about old Skaw. The Sweaties didn't say nothin'. Didn't even act surprised, just took the remains and pushed off. But Mr. Tony and that other crook they call Marbles, they was fit to be tied. Took the cap'n in his cabin and talked loud at him fer half a hour. Then the cap'n come out and give some orders to the Mate." Retief sat up and reached for a cigar. "Mr. Tony and Skaw were pals, eh?" "He hated Skaw's guts. But with him it was business. Mister, you got a gun?" "A 2mm needler. Why?" "The orders cap'n give was to change course fer Alabaster. We're by-passin' Jorgensen's Worlds. We'll feel the course change any minute." Retief lit the cigar, reached under the mattress and took out a short-barreled pistol. He dropped it in his pocket, looked at Chip. "Maybe it was a good thought, at that. Which way to the Captain's cabin?" "This is it," Chip said softly. "You want me to keep an eye on who comes down the passage?" Retief nodded, opened the door and stepped into the cabin. The captain looked up from his desk, then jumped up. "What do you think you're doing, busting in here?" 
"I hear you're planning a course change, Captain." "You've got damn big ears." "I think we'd better call in at Jorgensen's." "You do, huh?" the captain sat down. "I'm in command of this vessel," he said. "I'm changing course for Alabaster." "I wouldn't find it convenient to go to Alabaster," Retief said. "So just hold your course for Jorgensen's." "Not bloody likely." "Your use of the word 'bloody' is interesting, Captain. Don't try to change course." The captain reached for the mike on his desk, pressed the key. "Power Section, this is the captain," he said. Retief reached across the desk, gripped the captain's wrist. "Tell the mate to hold his present course," he said softly. "Let go my hand, buster," the captain snarled. Eyes on Retief's, he eased a drawer open with his left hand, reached in. Retief kneed the drawer. The captain yelped and dropped the mike. "You busted it, you—" "And one to go," Retief said. "Tell him." "I'm an officer of the Merchant Service!" "You're a cheapjack who's sold his bridge to a pack of back-alley hoods." "You can't put it over, hick." "Tell him." The captain groaned and picked up the mike. "Captain to Power Section," he said. "Hold your present course until you hear from me." He dropped the mike and looked up at Retief. "It's eighteen hours yet before we pick up Jorgensen Control. You going to sit here and bend my arm the whole time?" Retief released the captain's wrist and turned to the door. "Chip, I'm locking the door. You circulate around, let me know what's going on. Bring me a pot of coffee every so often. I'm sitting up with a sick friend." "Right, Mister. Keep an eye on that jasper; he's slippery." "What are you going to do?" the captain demanded. Retief settled himself in a chair. "Instead of strangling you, as you deserve," he said, "I'm going to stay here and help you hold your course for Jorgensen's Worlds." The captain looked at Retief. He laughed, a short bark. "Then I'll just stretch out and have a little nap, farmer. 
If you feel like dozing off sometime during the next eighteen hours, don't mind me." Retief took out the needler and put it on the desk before him. "If anything happens that I don't like," he said, "I'll wake you up. With this."
|
A. They're easier to take down than they thought, meaning they can stand up to the Soetti.
|
What datasets were used?
|
### Introduction
Neural machine translation (NMT, § SECREF2 ; kalchbrenner13emnlp, sutskever14nips) is a variant of statistical machine translation (SMT; brown93cl) that uses neural networks. NMT has recently gained popularity due to its ability to model the translation process end-to-end using a single probabilistic model, and for its state-of-the-art performance on several language pairs BIBREF0 , BIBREF1 . One feature of NMT systems is that they treat each word in the vocabulary as a vector of continuous-valued numbers. This is in contrast to more traditional SMT methods such as phrase-based machine translation (PBMT; koehn03phrasebased), which represent translations as discrete pairs of word strings in the source and target languages. The use of continuous representations is a major advantage, allowing NMT to share statistical power between similar words (e.g. “dog” and “cat”) or contexts (e.g. “this is” and “that is”).

However, this property also has a drawback in that NMT systems often mistranslate into words that seem natural in the context but do not reflect the content of the source sentence. For example, Figure FIGREF2 shows a sentence from our data where the NMT system mistakenly translated “Tunisia” into the word for “Norway.” This variety of error is particularly serious because the content words that are often mistranslated by NMT are also the words that play a key role in determining the whole meaning of the sentence.

In contrast, PBMT and other traditional SMT methods rarely make this kind of mistake. This is because they base their translations on discrete phrase mappings, which ensure that source words will be translated into a target word that has been observed as a translation at least once in the training data. In addition, because the discrete mappings are memorized explicitly, they can be learned efficiently from as little as a single instance (barring errors in word alignments).
Thus, we hypothesize that if we can incorporate a similar variety of information into NMT, we can alleviate the previously mentioned fatal errors on low-frequency words. In this paper, we propose a simple yet effective method to incorporate discrete, probabilistic lexicons as an additional information source in NMT (§ SECREF3 ). First, we demonstrate how to transform lexical translation probabilities (§ SECREF7 ) into a predictive probability for the next word by utilizing attention vectors from attentional NMT models BIBREF2 . We then describe methods to incorporate this probability into NMT, either through linear interpolation with the NMT probabilities (§ UID10 ) or as a bias to the NMT predictive distribution (§ UID9 ). We construct these lexicon probabilities by using traditional word alignment methods on the training data (§ SECREF11 ), other external parallel data resources such as a handmade dictionary (§ SECREF13 ), or a hybrid of the two (§ SECREF14 ). We perform experiments (§ SECREF5 ) on two English-Japanese translation corpora to evaluate the method's utility in improving translation accuracy and reducing the time required for training.

### Neural Machine Translation
The goal of machine translation is to translate a sequence of source words INLINEFORM0 into a sequence of target words INLINEFORM1 . These words belong to the source vocabulary INLINEFORM2 and the target vocabulary INLINEFORM3 , respectively. NMT performs this translation by calculating the conditional probability INLINEFORM4 of the INLINEFORM5 th target word INLINEFORM6 based on the source INLINEFORM7 and the preceding target words INLINEFORM8 . This is done by encoding the context INLINEFORM9 as a fixed-width vector INLINEFORM10 , and calculating the probability as follows: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are weight matrix and bias vector parameters, respectively.

The exact variety of the NMT model depends on how we calculate INLINEFORM0 used as input. While there are many methods to perform this modeling, we opt to use attentional models BIBREF2 , which focus on particular words in the source sentence when calculating the probability of INLINEFORM1 . These models represent the current state of the art in NMT, and are also convenient for use in our proposed method. Specifically, we use the method of luong15emnlp, which we describe briefly here and refer readers to the original paper for details.

First, an encoder converts the source sentence INLINEFORM0 into a matrix INLINEFORM1 where each column represents a single word in the input sentence as a continuous vector. This representation is generated using a bidirectional encoder INLINEFORM2 Here the INLINEFORM0 function maps the words into a representation BIBREF3 , and INLINEFORM1 is a stacking long short term memory (LSTM) neural network BIBREF4 , BIBREF5 , BIBREF6 . Finally, we concatenate the two vectors INLINEFORM2 and INLINEFORM3 into a bidirectional representation INLINEFORM4 . These vectors are further concatenated into the matrix INLINEFORM5 where the INLINEFORM6 th column corresponds to INLINEFORM7 .
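The bidirectional encoding just described (a forward and a backward pass whose states are concatenated per position into the columns of a source matrix) can be sketched in numpy. This is a toy illustration, not the paper's implementation: a plain tanh recurrence stands in for the stacked LSTM, and all names are assumptions.

```python
import numpy as np

def rnn_step(W, x, h):
    # Toy tanh recurrence standing in for the paper's stacked LSTM.
    return np.tanh(W @ np.concatenate([x, h]))

def encode_bidirectional(W_f, W_b, embeddings):
    """Build the source matrix: one column per source word, each column the
    concatenation of a forward-pass state and a backward-pass state."""
    d = W_f.shape[0]
    fwd, bwd = [], []
    h = np.zeros(d)
    for x in embeddings:            # left-to-right pass
        h = rnn_step(W_f, x, h)
        fwd.append(h)
    h = np.zeros(d)
    for x in reversed(embeddings):  # right-to-left pass
        h = rnn_step(W_b, x, h)
        bwd.append(h)
    bwd.reverse()
    # Concatenate the two states at each position; stack positions as columns.
    return np.stack([np.concatenate([f, b]) for f, b in zip(fwd, bwd)], axis=1)
```

With embedding size 4 and hidden size 3, a five-word sentence yields a 6 x 5 matrix, matching the column-per-word layout described above.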
Next, we generate the output one word at a time while referencing this encoded input sentence and tracking progress with a decoder LSTM. The decoder's hidden state INLINEFORM0 is a fixed-length continuous vector representing the previous target words INLINEFORM1 , initialized as INLINEFORM2 . Based on this INLINEFORM3 , we calculate a similarity vector INLINEFORM4 , with each element equal to DISPLAYFORM0 INLINEFORM0 can be an arbitrary similarity function, which we set to the dot product, following luong15emnlp. We then normalize this into an attention vector, which weights the amount of focus that we put on each word in the source sentence: DISPLAYFORM0 This attention vector is then used to weight the encoded representation INLINEFORM0 to create a context vector INLINEFORM1 for the current time step INLINEFORM2 . Finally, we create INLINEFORM0 by concatenating the previous hidden state INLINEFORM1 with the context vector and performing an affine transform INLINEFORM2 . Once we have this representation of the current state, we can calculate INLINEFORM0 according to Equation ( EQREF3 ). The next word INLINEFORM1 is chosen according to this probability, and we update the hidden state by inputting the chosen word into the decoder LSTM: DISPLAYFORM0 If we define all the parameters in this model as INLINEFORM0 , we can then train the model by minimizing the negative log-likelihood of the training data INLINEFORM1 .

### Integrating Lexicons into NMT
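Because the proposed method reuses the attention vector from the attentional model just described, a minimal numpy sketch of that dot-product attention step may help; function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_step(R, h_t):
    """One attention step with the dot-product score used in the text.
    R is the (d, n) encoded source matrix (one column per source word) and
    h_t the (d,) decoder hidden state.  Returns the attention vector over
    source positions and the attention-weighted context vector."""
    a_t = R.T @ h_t          # similarity of h_t to each source column
    alpha_t = softmax(a_t)   # normalized attention weights
    c_t = R @ alpha_t        # context vector: weighted sum of source vectors
    return alpha_t, c_t
```

The full decoder step would then concatenate the hidden state with the context vector, apply an affine transform, and feed the result to the output softmax, as in the equations of the previous section.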
In § SECREF2 we described how traditional NMT models calculate the probability of the next target word INLINEFORM0 . Our goal in this paper is to improve the accuracy of this probability estimate by incorporating information from discrete probabilistic lexicons. We assume that we have a lexicon that, given a source word INLINEFORM1 , assigns a probability INLINEFORM2 to target word INLINEFORM3 . For a source word INLINEFORM4 , this probability will generally be non-zero for a small number of translation candidates, and zero for the majority of words in INLINEFORM5 . In this section, we first describe how we incorporate these probabilities into NMT, and explain how we actually obtain the INLINEFORM6 probabilities in § SECREF4 .

### Converting Lexicon Probabilities into Conditioned Predictive Probabilities
First, we need to convert lexical probabilities INLINEFORM0 for the individual words in the source sentence INLINEFORM1 to a form that can be used together with INLINEFORM2 . Given input sentence INLINEFORM3 , we can construct a matrix in which each column corresponds to a word in the input sentence, each row corresponds to a word in the INLINEFORM4 , and the entry corresponds to the appropriate lexical probability: INLINEFORM5 This matrix can be precomputed during the encoding stage because it only requires information about the source sentence INLINEFORM0 .

Next, we convert this matrix into a predictive probability over the next word: INLINEFORM0 . To do so, we use the alignment probability INLINEFORM1 from Equation ( EQREF5 ) to weight each column of the INLINEFORM2 matrix: INLINEFORM3 This calculation is similar to the way attentional models calculate the context vector INLINEFORM0 , but over a vector representing the probabilities of the target vocabulary, instead of the distributed representations of the source words. The process of involving INLINEFORM1 is important because at every time step INLINEFORM2 , the lexical probability INLINEFORM3 will be influenced by different source words.

### Combining Predictive Probabilities
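The two strategies described in this section, a log-domain bias inside the softmax and linear interpolation with a sigmoid-bounded coefficient, can be sketched as follows, together with the attention-weighted lexicon probability of the previous section. All names are illustrative, and the epsilon default of 1e-3 is an assumption, since the paper's exact hyper-parameter value is elided in this excerpt.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def lexicon_predictive(L_F, alpha_t):
    # Weight each source word's translation distribution (a column of the
    # precomputed lexicon matrix) by its attention weight.
    return L_F @ alpha_t

def bias_method(logits, L_F, alpha_t, eps=1e-3):
    # Add log(p_lex + eps) to the pre-softmax scores; eps keeps zero
    # lexicon entries from becoming -inf under the log.  (eps is assumed.)
    return softmax(logits + np.log(lexicon_predictive(L_F, alpha_t) + eps))

def linear_method(p_nmt, L_F, alpha_t, gamma=0.0):
    # Interpolate the two distributions; the coefficient is the sigmoid of a
    # learnable scalar, so it always falls between 0 and 1.
    lam = 1.0 / (1.0 + np.exp(-gamma))
    return lam * lexicon_predictive(L_F, alpha_t) + (1.0 - lam) * p_nmt
```

Both functions return proper distributions over the target vocabulary: the bias method by construction of the softmax, the linear method as a convex combination of two distributions.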
After calculating the lexicon predictive probability INLINEFORM0 , we next need to integrate this probability with the NMT model probability INLINEFORM1 . To do so, we examine two methods: (1) adding it as a bias, and (2) linear interpolation.

In our first bias method, we use INLINEFORM0 to bias the probability distribution calculated by the vanilla NMT model. Specifically, we add a small constant INLINEFORM1 to INLINEFORM2 , take the logarithm, and add this adjusted log probability to the input of the softmax as follows: INLINEFORM3 We take the logarithm of INLINEFORM0 so that the values will still be in the probability domain after the softmax is calculated, and add the hyper-parameter INLINEFORM1 to prevent zero probabilities from becoming INLINEFORM2 after taking the log. When INLINEFORM3 is small, the model will be more heavily biased towards using the lexicon, and when INLINEFORM4 is larger, the lexicon probabilities will be given less weight. We use INLINEFORM5 for this paper.

We also attempt to incorporate the two probabilities through linear interpolation between the standard NMT model probability INLINEFORM0 and the lexicon probability INLINEFORM1 . We will call this the linear method, and define it as follows: INLINEFORM2 where INLINEFORM0 is an interpolation coefficient that is the result of the sigmoid function INLINEFORM1 . INLINEFORM2 is a learnable parameter, and the sigmoid function ensures that the final interpolation level falls between 0 and 1. We choose INLINEFORM3 ( INLINEFORM4 ) at the beginning of training. This notation is partly inspired by allamanis16icml and gu16acl, who use linear interpolation to merge a standard attentional model with a “copy” operator that copies a source word as-is into the target sentence. The main difference is that they use this to copy words into the output, while our method uses it to influence the probabilities of all target words.

### Constructing Lexicon Probabilities
In the previous section, we defined some ways to use predictive probabilities INLINEFORM0 based on word-to-word lexical probabilities INLINEFORM1 . Next, we define three ways to construct these lexical probabilities using automatically learned lexicons, handmade lexicons, or a combination of both.

### Automatically Learned Lexicons
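The maximization step and the unknown-word mass allocation described in this section can be sketched as follows; the dictionary-of-counts representation and all names are assumptions for illustration.

```python
from collections import defaultdict

def lexicon_from_counts(expected_counts):
    """Maximization step: normalize expected co-occurrence counts c(e, f)
    into lexical probabilities p(e | f), dividing by the total count of f."""
    totals = defaultdict(float)
    for (e, f), c in expected_counts.items():
        totals[f] += c
    return {(e, f): c / totals[f] for (e, f), c in expected_counts.items()}

def allocate_unk(p_lex, f, nmt_vocab):
    """Mass of translations of f that fall outside the NMT target
    vocabulary, reallocated to the unknown-word symbol."""
    in_vocab = sum(p for (e, ff), p in p_lex.items()
                   if ff == f and e in nmt_vocab)
    return max(0.0, 1.0 - in_vocab)
```

For example, if a source word's expected counts are split 3:1 between two translations, the normalized probabilities are 0.75 and 0.25, and any translation outside the NMT vocabulary contributes its mass to the unknown-word symbol.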
In traditional SMT systems, lexical translation probabilities are generally learned directly from parallel data in an unsupervised fashion using a model such as the IBM models BIBREF7 , BIBREF8 . These models can be used to estimate the alignments and lexical translation probabilities INLINEFORM0 between the tokens of the two languages using the expectation maximization (EM) algorithm. First, in the expectation step, the algorithm estimates the expected count INLINEFORM0 . In the maximization step, lexical probabilities are calculated by dividing the expected count by all possible counts: INLINEFORM1 The IBM models vary in level of refinement, with Model 1 relying solely on these lexical probabilities, and later IBM models (Models 2, 3, 4, 5) introducing more sophisticated models of fertility and relative alignment. Even though IBM models also occasionally have problems when dealing with rare words (e.g. “garbage collecting” effects BIBREF9 ), traditional SMT systems generally achieve better translation accuracies of low-frequency words than NMT systems BIBREF6 , indicating that these problems are less prominent than they are in NMT.

Note that in many cases, NMT limits the target vocabulary BIBREF10 for training speed or memory constraints, resulting in rare words not being covered by the NMT vocabulary INLINEFORM0 . Accordingly, we allocate the remaining probability assigned by the lexicon to the unknown word symbol INLINEFORM1 : DISPLAYFORM0

### Manual Lexicons
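The uniform distribution over dictionary translations described in this section can be sketched as follows; the dictionary format, a mapping from a source word to its set of listed translations, is an assumption.

```python
def manual_lexicon_prob(dictionary, f, e):
    """p(e | f) under a handmade dictionary: uniform over the listed
    translations of f, zero for everything else (and for unlisted f)."""
    translations = dictionary.get(f, ())
    if not translations:
        return 0.0
    return 1.0 / len(translations) if e in translations else 0.0
```

A source word with two dictionary translations thus assigns probability 0.5 to each, and 0 to all other target words; as in the text, unlisted source words carry no mass here and would instead feed the unknown-word symbol.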
In addition, for many language pairs, broad-coverage handmade dictionaries exist, and it is desirable that we be able to use the information included in them as well. Unlike automatically learned lexicons, however, handmade dictionaries generally do not contain translation probabilities. To construct the probability INLINEFORM0 , we define the set of translations INLINEFORM1 existing in the dictionary for a particular source word INLINEFORM2 , and assume a uniform distribution over these words: INLINEFORM3 Following Equation ( EQREF12 ), unknown source words will assign their probability mass to the INLINEFORM0 tag.

### Hybrid Lexicons
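The fill-up scheme described in this section, defaulting to the automatically learned lexicon and falling back to the handmade one only for uncovered source words, can be sketched as follows. Storing each lexicon as a dict keyed by (target, source) pairs is an illustrative assumption.

```python
def hybrid_prob(p_auto, p_man, f, e):
    """Fill-up: use the learned lexicon whenever it covers source word f,
    otherwise fall back to the handmade lexicon."""
    covered = any(ff == f for (_, ff) in p_auto)  # does p_auto know f at all?
    return p_auto.get((e, f), 0.0) if covered else p_man.get((e, f), 0.0)
```

Note that the fallback is per source word, not per entry: once the learned lexicon covers a source word, the handmade probabilities for that word are ignored entirely.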
Handmade lexicons have broad coverage of words but their probabilities might not be as accurate as the learned ones, particularly if the automatic lexicon is constructed on in-domain data. Thus, we also test a hybrid method where we use the handmade lexicons to complement the automatically learned lexicon. Specifically, inspired by phrase table fill-up used in PBMT systems BIBREF11 , we use the probability of the automatically learned lexicons INLINEFORM1 by default, and fall back to the handmade lexicons INLINEFORM2 only for uncovered words: DISPLAYFORM0

### Experiment & Result
In this section, we describe the experiments we use to evaluate our proposed methods.

### Settings
Dataset: We perform experiments on two widely-used tasks for the English-to-Japanese language pair: KFTT BIBREF12 and BTEC BIBREF13 . KFTT is a collection of Wikipedia articles about the city of Kyoto, and BTEC is a travel conversation corpus. BTEC is an easier translation task than KFTT, because KFTT covers a broader domain, has a larger vocabulary of rare words, and has relatively long sentences. The details of each corpus are shown in Table TABREF19 . We tokenize English according to the Penn Treebank standard BIBREF14 and lowercase it, and tokenize Japanese using KyTea BIBREF15 . We limit training sentences to a maximum length of 50 in both experiments and keep the test data at the original length. We replace words of frequency less than a threshold INLINEFORM0 in both languages with the INLINEFORM1 symbol and exclude them from our vocabulary. We choose INLINEFORM2 for BTEC and INLINEFORM3 for KFTT, resulting in INLINEFORM4 k, INLINEFORM5 k for BTEC and INLINEFORM6 k, INLINEFORM7 k for KFTT.

NMT Systems: We build the described models using the Chainer toolkit. The depth of the stacking LSTM is INLINEFORM0 and hidden node size INLINEFORM1 . We concatenate the forward and backward encodings (resulting in a 1600-dimensional vector) and then perform a linear transformation to 800 dimensions. We train the system using the Adam BIBREF16 optimization method with the default settings: INLINEFORM0 . Additionally, we add dropout BIBREF17 with drop rate INLINEFORM1 at the last layer of each stacking LSTM unit to prevent overfitting. We use a batch size of INLINEFORM2 and we run a total of INLINEFORM3 iterations for all data sets. All of the experiments are conducted on a single GeForce GTX TITAN X GPU with 12 GB of memory. At test time, we use beam search with beam size INLINEFORM0 . We follow luong15acl in replacing every unknown token at position INLINEFORM1 with the target token that maximizes the probability INLINEFORM2 .
We choose source word INLINEFORM3 according to the highest alignment score in Equation ( EQREF5 ). This unknown word replacement is applied to both baseline and proposed systems. Finally, because NMT models tend to give higher probabilities to shorter sentences BIBREF18 , we discount the probability of the INLINEFORM4 token by INLINEFORM5 to correct for this bias.

Traditional SMT Systems: We also prepare two traditional SMT systems for comparison: a PBMT system BIBREF19 using Moses BIBREF20 , and a hierarchical phrase-based MT system BIBREF21 using Travatar BIBREF22 . Systems are built using the default settings, with models trained on the training data and weights tuned on the development data.

Lexicons: We use a total of 3 lexicons for the proposed method, and apply the bias and linear methods to all of them, totaling 6 experiments. The first lexicon (auto) is built on the training data using the automatically learned lexicon method of § SECREF11 separately for both the BTEC and KFTT experiments. Automatic alignment is performed using GIZA++ BIBREF8 . The second lexicon (man) is built using the popular English-Japanese dictionary Eijiro with the manual lexicon method of § SECREF13 . Eijiro contains 104K distinct word-to-word translation entries. The third lexicon (hyb) is built by combining the first and second lexicons with the hybrid method of § SECREF14 .

Evaluation: We use standard single-reference BLEU-4 BIBREF23 to evaluate the translation performance. Additionally, we also use NIST BIBREF24 , which is a measure that puts a particular focus on low-frequency word strings, and thus is sensitive to the low-frequency words we are focusing on in this paper. We measure the statistically significant differences between systems using paired bootstrap resampling BIBREF25 with 10,000 iterations and measure statistical significance at the INLINEFORM0 and INLINEFORM1 levels. Additionally, we also calculate the recall of rare words from the references.
We define “rare words” as words that appear fewer than eight times in the target training corpus or references, and measure the percentage of the time they are recovered by each translation system.

### Effect of Integrating Lexicons
In this section, we first perform a detailed examination of the utility of the proposed bias method when used with the auto or hyb lexicons, which empirically gave the best results, and then compare the other lexicon integration methods in the following section.

Table TABREF20 shows the results of these methods, along with the corresponding baselines. First, compared to the baseline attn, our bias method achieved consistently higher scores on both test sets. In particular, the gains on the more difficult KFTT set are large: up to 2.3 BLEU, 0.44 NIST, and 30% Recall, demonstrating the utility of the proposed method in the face of more diverse content and fewer high-frequency words.

Compared to the traditional pbmt and hiero systems, particularly on KFTT, we can see that the proposed method allows the NMT system to exceed the traditional SMT methods in BLEU. This is despite the fact that we are not performing ensembling, which has proven essential to exceed traditional systems in several previous works BIBREF6 , BIBREF0 , BIBREF1 . Interestingly, despite gains in BLEU, the NMT methods still fall behind in NIST score on the KFTT data set, demonstrating that traditional SMT systems still tend to have a small advantage in translating lower-frequency words, despite the gains made by the proposed method.

In Table TABREF27 , we show some illustrative examples where the proposed method (auto-bias) was able to obtain a correct translation while the normal attentional model was not. The first example is a mistake in translating “extramarital affairs” into the Japanese equivalent of “soccer,” entirely changing the main topic of the sentence. This is typical of the errors that we have observed NMT systems make (the mistake from Figure FIGREF2 is also from attn, and was fixed by our proposed method). The second example demonstrates how these mistakes can then affect the process of choosing the remaining words, propagating the error through the whole sentence.
Next, we examine the effect of the proposed method on the training time for each neural MT method, drawing training curves for the KFTT data in Figure FIGREF26 . Here we can see that the proposed bias training methods achieve reasonable BLEU scores in the upper 10s even after the first iteration. In contrast, the baseline attn method has a BLEU score of around 5 after the first iteration, and takes significantly longer to approach values close to its maximal accuracy. This shows that by incorporating lexical probabilities, we can effectively bootstrap the learning of the NMT system, allowing it to approach an appropriate answer in a more timely fashion.

It is also interesting to examine the alignment vectors produced by the baseline and proposed methods, a visualization of which we show in Figure FIGREF29 . For this sentence, the outputs of the two methods were identical and correct, but we can see that the proposed method (right) placed sharper attention on the actual source word corresponding to content words in the target sentence. This trend of peakier attention distributions in the proposed method held throughout the corpus, with the per-word entropy of the attention vectors being 3.23 bits for auto-bias, compared with 3.81 bits for attn, indicating that the auto-bias method places more certainty in its attention decisions.

### Comparison of Integration Methods
Finally, we perform a full comparison between the various methods for integrating lexicons into the translation process, with results shown in Table TABREF31 . In general the bias method improves accuracy for the auto and hyb lexicons, but is less effective for the man lexicon. This is likely because the manual lexicon, despite having broad coverage, did not sufficiently cover target-domain words (coverage of unique words in the source vocabulary was 35.3% and 9.7% for BTEC and KFTT respectively).

Interestingly, the trend is reversed for the linear method, which improves the man systems but causes decreases when using the auto and hyb lexicons. This indicates that the linear method is more suited to cases where the lexicon does not closely match the target domain and plays a more complementary role. Compared to the log-linear modeling of bias, which strictly enforces the constraints imposed by the lexicon distribution BIBREF27 , linear interpolation is intuitively more appropriate for integrating this type of complementary information.

On the other hand, the performance of linear interpolation was generally lower than that of the bias method. One potential reason is that we use a constant interpolation coefficient, fixed across all contexts. gu16acl have recently developed methods that use context information from the decoder to calculate a different interpolation coefficient at every decoding step, and introducing these methods may improve our results.

### Additional Experiments
To test whether the proposed method is useful on larger data sets, we also performed follow-up experiments on the larger Japanese-English ASPEC dataset BIBREF28 , which consists of 2 million training examples, 63 million tokens, and an 81,000-word vocabulary. We gained an improvement in BLEU score from 20.82 using the attn baseline to 22.66 using the auto-bias proposed method. This experiment shows that our method scales to larger datasets.

### Related Work
From the beginning of work on NMT, unknown words that do not exist in the system vocabulary have been focused on as a weakness of these systems. Early methods to handle these unknown words replaced them with appropriate words in the target vocabulary BIBREF10 , BIBREF29 according to a lexicon similar to the one used in this work. In contrast to our work, these methods only handle unknown words and do not incorporate information from the lexicon in the learning procedure.

There have also been other approaches that incorporate models that learn when to copy words as-is into the target language BIBREF30 , BIBREF31 , BIBREF32 . These models are similar to the linear approach of § UID10 , but are only applicable to words that can be copied as-is into the target language. In fact, these models can be thought of as a subclass of the proposed approach that uses a lexicon assigning all of its probability to target words that are the same as the source. On the other hand, while we simply use a static interpolation coefficient INLINEFORM0 , these works generally have a more sophisticated method for choosing the interpolation between the standard and “copy” models. Incorporating these into our linear method is a promising avenue for future work.

In addition, mi16acl have also recently proposed a similar approach that limits the vocabulary predicted for each batch or sentence. This vocabulary is built by considering the original HMM alignments gathered from the training corpus. Essentially, this method is a specific version of our bias method that gives some of the vocabulary a bias of negative infinity and all other vocabulary a uniform distribution. Our method improves over this by considering actual translation probabilities, and also by considering the attention vector when deciding how to combine these probabilities.
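As a concrete illustration of the two integration strategies compared in this paper, the following sketch combines an attention-weighted lexicon probability with the NMT softmax using both the bias (log-linear) and linear methods. This is a minimal toy reimplementation under stated assumptions; the vocabulary, lexicon entries, and function names are illustrative and not taken from the paper's actual code.

```python
import math

EPS = 1e-6  # small constant so the log is defined when the lexicon gives 0

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def attention_lexicon_prob(attn, src_words, lexicon, vocab):
    """p_lex(y) = sum_j attn[j] * p_lex(y | src_words[j])."""
    return [
        sum(a * lexicon.get(w, {}).get(y, 0.0) for a, w in zip(attn, src_words))
        for y in vocab
    ]

def combine_bias(logits, p_lex):
    """Log-linear 'bias' method: add log lexicon probs to the logits."""
    return softmax([l + math.log(p + EPS) for l, p in zip(logits, p_lex)])

def combine_linear(logits, p_lex, lam=0.5):
    """'Linear' method: interpolate the two distributions directly."""
    p_nmt = softmax(logits)
    return [(1 - lam) * pn + lam * pl for pn, pl in zip(p_nmt, p_lex)]

# Hypothetical toy example: 2 source words, 3 target vocabulary items.
vocab = ["cat", "dog", "fish"]
lexicon = {"neko": {"cat": 0.9, "dog": 0.1}, "sakana": {"fish": 1.0}}
attn = [0.8, 0.2]            # mostly attending to "neko"
logits = [0.0, 0.0, 0.0]     # the NMT model itself is indifferent
p_lex = attention_lexicon_prob(attn, ["neko", "sakana"], lexicon, vocab)
p_bias = combine_bias(logits, p_lex)
p_lin = combine_linear(logits, p_lex, lam=0.5)
```

With an indifferent NMT model, both methods let the lexicon pull the prediction toward "cat", but the bias method enforces the lexicon's constraints much more strongly, while the linear method only shifts probability mass by the interpolation coefficient, mirroring the complementary behavior discussed above.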
Finally, there have been a number of recent works that improve the accuracy of low-frequency words using character-based translation models BIBREF33 , BIBREF34 , BIBREF35 . However, luong16acl have found that even when using character-based models, incorporating information about words allows for gains in translation accuracy, and it is likely that our lexicon-based method could result in improvements in these hybrid systems as well.

### Conclusion & Future Work
In this paper, we have proposed a method to incorporate discrete probabilistic lexicons into NMT systems to address the difficulties that NMT systems have demonstrated with low-frequency words. As a result, we achieved substantial increases in BLEU (2.0-2.3) and NIST (0.13-0.44) scores, and observed qualitative improvements in the translations of content words. For future work, we are interested in conducting experiments on larger-scale translation tasks. We also plan to perform a subjective evaluation, as we expect that improvements in content word translation are critical to subjective impressions of translation results. Finally, we are also interested in improvements to the linear method in which INLINEFORM0 is calculated based on the context, instead of using a fixed value.

### Acknowledgment
We thank Makoto Morishita and Yusuke Oda for their help in this project. We also thank the faculty members of the AHC lab for their support and suggestions. This work was supported by grants from the Ministry of Education, Culture, Sports, Science and Technology of Japan and in part by JSPS KAKENHI Grant Number 16H05873.

Figure 1: An example of a mistake made by NMT on low-frequency content words.
Table 1: Corpus details.
Table 2: Accuracies for the baseline attentional NMT (attn) and the proposed bias-based method using the automatic (auto-bias) or hybrid (hyb-bias) dictionaries. Bold indicates a gain over the attn baseline, † indicates a significant increase at p < 0.05, and ∗ indicates p < 0.10. Traditional phrase-based (pbmt) and hierarchical phrase-based (hiero) systems are shown for reference.
Figure 2: Training curves for the baseline attn and the proposed bias method.
Table 3: Examples where the proposed auto-bias improved over the baseline system attn. Underlines indicate words that were mistaken in the baseline output but correct in the proposed model's output.
Figure 3: Attention matrices for the baseline attn and proposed bias methods. Lighter colors indicate stronger attention between the words, and boxes surrounding words indicate the correct alignments.
Table 4: A comparison of the bias and linear lexicon integration methods on the automatic, manual, and hybrid lexicons. The first line, without lexicon, is the traditional attentional NMT.
|
KFTT BIBREF12 and BTEC BIBREF13
|
How does the phrase "to be or not to be" tie into the overall story?
A. It is what Glmpauszn has to ask himself as he invades the not-world.
B. It plays into the nature of Glmpauszn's people, and how they exist alongside ours.
C. It references Glmpauszn's disappearance, and the question if he was ever really there.
D. It plays into the uncertain nature of the story's truth.
|
A Gleeb for Earth By CHARLES SHAFHAUSER Illustrated by EMSH [Transcriber's Note: This etext was produced from Galaxy Science Fiction May 1953. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Not to be or not to not be ... that was the not-question for the invader of the not-world. Dear Editor: My 14 year old boy, Ronnie, is typing this letter for me because he can do it neater and use better grammar. I had to get in touch with somebody about this because if there is something to it, then somebody, everybody, is going to point finger at me, Ivan Smernda, and say, "Why didn't you warn us?" I could not go to the police because they are not too friendly to me because of some of my guests who frankly are stew bums. Also they might think I was on booze, too, or maybe the hops, and get my license revoked. I run a strictly legit hotel even though some of my guests might be down on their luck now and then. What really got me mixed up in this was the mysterious disappearance of two of my guests. They both took a powder last Wednesday morning. Now get this. In one room, that of Joe Binkle, which maybe is an alias, I find nothing but a suit of clothes, some butts and the letters I include here in same package. Binkle had only one suit. That I know. And this was it laying right in the middle of the room. Inside the coat was the vest, inside the vest the shirt, inside the shirt the underwear. The pants were up in the coat and inside of them was also the underwear. All this was buttoned up like Binkle had melted out of it and dripped through a crack in the floor. In a bureau drawer were the letters I told you about. Now. In the room right under Binkle's lived another stew bum that checked in Thursday ... name Ed Smith, alias maybe, too. This guy was a real case. He brought with him a big mirror with a heavy bronze frame. Airloom, he says. 
He pays a week in advance, staggers up the stairs to his room with the mirror and that's the last I see of him. In Smith's room on Wednesday I find only a suit of clothes, the same suit he wore when he came in. In the coat the vest, in the vest the shirt, in the shirt the underwear. Also in the pants. Also all in the middle of the floor. Against the far wall stands the frame of the mirror. Only the frame! What a spot to be in! Now it might have been a gag. Sometimes these guys get funny ideas when they are on the stuff. But then I read the letters. This knocks me for a loop. They are all in different handwritings. All from different places. Stamps all legit, my kid says. India, China, England, everywhere. My kid, he reads. He says it's no joke. He wants to call the cops or maybe some doctor. But I say no. He reads your magazine so he says write to you, send you the letters. You know what to do. Now you have them. Maybe you print. Whatever you do, Mr. Editor, remember my place, the Plaza Ritz Arms, is straight establishment. I don't drink. I never touch junk, not even aspirin. Yours very truly, Ivan Smernda Bombay, India June 8 Mr. Joe Binkle Plaza Ritz Arms New York City Dear Joe: Greetings, greetings, greetings. Hold firm in your wretched projection, for tomorrow you will not be alone in the not-world. In two days I, Glmpauszn, will be born. Today I hang in our newly developed not-pod just within the mirror gateway, torn with the agony that we calculated must go with such tremendous wavelength fluctuations. I have attuned myself to a fetus within the body of a not-woman in the not-world. Already I am static and for hours have looked into this weird extension of the Universe with fear and trepidation. As soon as my stasis was achieved, I tried to contact you, but got no response. What could have diminished your powers of articulate wave interaction to make you incapable of receiving my messages and returning them? 
My wave went out to yours and found it, barely pulsing and surrounded with an impregnable chimera. Quickly, from the not-world vibrations about you, I learned the not-knowledge of your location. So I must communicate with you by what the not-world calls "mail" till we meet. For this purpose I must utilize the feeble vibrations of various not-people through whose inadequate articulation I will attempt to make my moves known to you. Each time I will pick a city other than the one I am in at the time. I, Glmpauszn, come equipped with powers evolved from your fragmentary reports before you ceased to vibrate to us and with a vast treasury of facts from indirect sources. Soon our tortured people will be free of the fearsome not-folk and I will be their liberator. You failed in your task, but I will try to get you off with light punishment when we return again. The hand that writes this letter is that of a boy in the not-city of Bombay in the not-country of India. He does not know he writes it. Tomorrow it will be someone else. You must never know of my exact location, for the not-people might have access to the information. I must leave off now because the not-child is about to be born. When it is alone in the room, it will be spirited away and I will spring from the pod on the gateway into its crib and will be its exact vibrational likeness. I have tremendous powers. But the not-people must never know I am among them. This is the only way I could arrive in the room where the gateway lies without arousing suspicion. I will grow up as the not-child in order that I might destroy the not-people completely. All is well, only they shot this information file into my matrix too fast. I'm having a hard time sorting facts and make the right decision. Gezsltrysk, what a task! Farewell till later. Glmpauszn Wichita, Kansas June 13 Dear Joe: Mnghjkl, fhfjgfhjklop phelnoprausynks. No. 
When I communicate with you, I see I must avoid those complexities of procedure for which there are no terms in this language. There is no way of describing to you in not-language what I had to go through during the first moments of my birth. Now I know what difficulties you must have had with your limited equipment. These not-people are unpredictable and strange. Their doctor came in and weighed me again the day after my birth. Consternation reigned when it was discovered I was ten pounds heavier. What difference could it possibly make? Many doctors then came in to see me. As they arrived hourly, they found me heavier and heavier. Naturally, since I am growing. This is part of my instructions. My not-mother (Gezsltrysk!) then burst into tears. The doctors conferred, threw up their hands and left. I learned the following day that the opposite component of my not-mother, my not-father, had been away riding on some conveyance during my birth. He was out on ... what did they call it? Oh, yes, a bender. He did not arrive till three days after I was born. When I heard them say that he was straightening up to come see me, I made a special effort and grew marvelously in one afternoon. I was 36 not-world inches tall by evening. My not-father entered while I was standing by the crib examining a syringe the doctor had left behind. He stopped in his tracks on entering the room and seemed incapable of speech. Dredging into the treasury of knowledge I had come equipped with, I produced the proper phrase for occasions of this kind in the not-world. "Poppa," I said. This was the first use I had made of the so-called vocal cords that are now part of my extended matrix. The sound I emitted sounded low-pitched, guttural and penetrating even to myself. It must have jarred on my not-father's ears, for he turned and ran shouting from the room. They apprehended him on the stairs and I heard him babble something about my being a monster and no child of his. 
My not-mother appeared at the doorway and instead of being pleased at the progress of my growth, she fell down heavily. She made a distinct thump on the floor. This brought the rest of them on the run, so I climbed out the window and retreated across a nearby field. A prolonged search was launched, but I eluded them. What unpredictable beings! I reported my tremendous progress back to our world, including the cleverness by which I managed to escape my pursuers. I received a reply from Blgftury which, on careful analysis, seems to be small praise indeed. In fact, some of his phrases apparently contain veiled threats. But you know old Blgftury. He wanted to go on this expedition himself and it's his nature never to flatter anyone. From now on I will refer to not-people simply as people, dropping the qualifying preface except where comparisons must be made between this alleged world and our own. It is merely an offshoot of our primitive mythology when this was considered a spirit world, just as these people refer to our world as never-never land and other anomalies. But we learned otherwise, while they never have. New sensations crowd into my consciousness and I am having a hard time classifying them. Anyway, I shall carry on swiftly now to the inevitable climax in which I singlehanded will obliterate the terror of the not-world and return to our world a hero. I cannot understand your not replying to my letters. I have given you a box number. What could have happened to your vibrations? Glmpauszn Albuquerque, New Mexico June 15 Dear Joe: I had tremendous difficulty getting a letter off to you this time. My process—original with myself, by the way—is to send out feeler vibrations for what these people call the psychic individual. Then I establish contact with him while he sleeps and compel him without his knowledge to translate my ideas into written language. He writes my letter and mails it to you. Of course, he has no awareness of what he has done. 
My first five tries were unfortunate. Each time I took control of an individual who could not read or write! Finally I found my man, but I fear his words are limited. Ah, well. I had great things to tell you about my progress, but I cannot convey even a hint of how I have accomplished these miracles through the thick skull of this incompetent. In simple terms then: I crept into a cave and slipped into a kind of sleep, directing my squhjkl ulytz & uhrytzg ... no, it won't come out. Anyway, I grew overnight to the size of an average person here. As I said before, floods of impressions are driving into my xzbyl ... my brain ... from various nerve and sense areas and I am having a hard time classifying them. My one idea was to get to a chemist and acquire the stuff needed for the destruction of these people. Sunrise came as I expected. According to my catalog of information, the impressions aroused by it are of beauty. It took little conditioning for me finally to react in this manner. This is truly an efficient mechanism I inhabit. I gazed about me at the mixture of lights, forms and impressions. It was strange and ... now I know ... beautiful. However, I hurried immediately toward the nearest chemist. At the same time I looked up and all about me at the beauty. Soon an individual approached. I knew what to do from my information. I simply acted natural. You know, one of your earliest instructions was to realize that these people see nothing unusual in you if you do not let yourself believe they do. This individual I classified as a female of a singular variety here. Her hair was short, her upper torso clad in a woolen garment. She wore ... what are they? ... oh, yes, sneakers. My attention was diverted by a scream as I passed her. I stopped. The woman gesticulated and continued to scream. People hurried from nearby houses. I linked my hands behind me and watched the scene with an attitude of mild interest. They weren't interested in me, I told myself. But they were. 
I became alarmed, dived into a bush and used a mechanism that you unfortunately do not have—invisibility. I lay there and listened. "He was stark naked," the girl with the sneakers said. A figure I recognized as a police officer spoke to her. "Lizzy, you'll just have to keep these crackpot friends of yours out of this area." "But—" "No more buck-bathing, Lizzy," the officer ordered. "No more speeches in the Square. Not when it results in riots at five in the morning. Now where is your naked friend? I'm going to make an example of him." That was it—I had forgotten clothes. There is only one answer to this oversight on my part. My mind is confused by the barrage of impressions that assault it. I must retire now and get them all classified. Beauty, pain, fear, hate, love, laughter. I don't know one from the other. I must feel each, become accustomed to it. The more I think about it, the more I realize that the information I have been given is very unrealistic. You have been inefficient, Joe. What will Blgftury and the others say of this? My great mission is impaired. Farewell, till I find a more intelligent mind so I can write you with more enlightenment. Glmpauszn Moscow, Idaho June 17 Dear Joe: I received your first communication today. It baffles me. Do you greet me in the proper fringe-zone manner? No. Do you express joy, hope, pride, helpfulness at my arrival? No. You ask me for a loan of five bucks! It took me some time, culling my information catalog to come up with the correct variant of the slang term "buck." Is it possible that you are powerless even to provide yourself with the wherewithal to live in this inferior world? A reminder, please. You and I—I in particular—are now engaged in a struggle to free our world from the terrible, maiming intrusions of this not-world. 
Through many long gleebs, our people have lived a semi-terrorized existence while errant vibrations from this world ripped across the closely joined vibration flux, whose individual fluctuations make up our sentient population. Even our eminent, all-high Frequency himself has often been jeopardized by these people. The not-world and our world are like two baskets as you and I see them in our present forms. Baskets woven with the greatest intricacy, design and color; but baskets whose convex sides are joined by a thin fringe of filaments. Our world, on the vibrational plane, extends just a bit into this, the not-world. But being a world of higher vibration, it is ultimately tenuous to these gross peoples. While we vibrate only within a restricted plane because of our purer, more stable existence, these people radiate widely into our world. They even send what they call psychic reproductions of their own selves into ours. And most infamous of all, they sometimes are able to force some of our individuals over the fringe into their world temporarily, causing them much agony and fright. The latter atrocity is perpetrated through what these people call mediums, spiritualists and other fatuous names. I intend to visit one of them at the first opportunity to see for myself. Meanwhile, as to you, I would offer a few words of advice. I picked them up while examining the "slang" portion of my information catalog which you unfortunately caused me to use. So, for the ultimate cause—in this, the penultimate adventure, and for the glory and peace of our world—shake a leg, bub. Straighten up and fly right. In short, get hep. As far as the five bucks is concerned, no dice. Glmpauszn Des Moines, Iowa June 19 Dear Joe: Your letter was imponderable till I had thrashed through long passages in my information catalog that I had never imagined I would need. Biological functions and bodily processes which are labeled here "revolting" are used freely in your missive. 
You can be sure they are all being forwarded to Blgftury. If I were not involved in the most important part of my journey—completion of the weapon against the not-worlders—I would come to New York immediately. You would rue that day, I assure you. Glmpauszn Boise, Idaho July 15 Dear Joe: A great deal has happened to me since I wrote to you last. Systematically, I have tested each emotion and sensation listed in our catalog. I have been, as has been said in this world, like a reed bending before the winds of passion. In fact, I'm rather badly bent indeed. Ah! You'll pardon me, but I just took time for what is known quaintly in this tongue as a "hooker of red-eye." Ha! I've mastered even the vagaries of slang in the not-language.... Ahhh! Pardon me again. I feel much better now. You see, Joe, as I attuned myself to the various impressions that constantly assaulted my mind through this body, I conditioned myself to react exactly as our information catalog instructed me to. Now it is all automatic, pure reflex. A sensation comes to me when I am burned; then I experience a burning pain. If the sensation is a tickle, I experience a tickle. This morning I have what is known medically as a syndrome ... a group of symptoms popularly referred to as a hangover ... Ahhh! Pardon me again. Strangely ... now what was I saying? Oh, yes. Ha, ha. Strangely enough, the reactions that come easiest to the people in this world came most difficult to me. Money-love, for example. It is a great thing here, both among those who haven't got it and those who have. I went out and got plenty of money. I walked invisible into a bank and carried away piles of it. Then I sat and looked at it. I took the money to a remote room of the twenty room suite I have rented in the best hotel here in—no, sorry—and stared at it for hours. Nothing happened. I didn't love the stuff or feel one way or the other about it. Yet all around me people are actually killing one another for the love of it. Anyway.... 
Ahhh. Pardon me. I got myself enough money to fill ten or fifteen rooms. By the end of the week I should have all eighteen spare rooms filled with money. If I don't love it then, I'll feel I have failed. This alcohol is taking effect now. Blgftury has been goading me for reports. To hell with his reports! I've got a lot more emotions to try, such as romantic love. I've been studying this phenomenon, along with other racial characteristics of these people, in the movies. This is the best place to see these people as they really are. They all go into the movie houses and there do homage to their own images. Very quaint type of idolatry. Love. Ha! What an adventure this is becoming. By the way, Joe, I'm forwarding that five dollars. You see, it won't cost me anything. It'll come out of the pocket of the idiot who's writing this letter. Pretty shrewd of me, eh? I'm going out and look at that money again. I think I'm at last learning to love it, though not as much as I admire liquor. Well, one simply must persevere, I always say. Glmpauszn Penobscot, Maine July 20 Dear Joe: Now you tell me not to drink alcohol. Why not? You never mentioned it in any of your vibrations to us, gleebs ago, when you first came across to this world. It will stint my powers? Nonsense! Already I have had a quart of the liquid today. I feel wonderful. Get that? I actually feel wonderful, in spite of this miserable imitation of a body. There are long hours during which I am so well-integrated into this body and this world that I almost consider myself a member of it. Now I can function efficiently. I sent Blgftury some long reports today outlining my experiments in the realm of chemistry where we must finally defeat these people. Of course, I haven't made the experiments yet, but I will. This is not deceit, merely realistic anticipation of the inevitable. Anyway, what the old xbyzrt doesn't know won't muss his vibrations. 
I went to what they call a nightclub here and picked out a blonde-haired woman, the kind that the books say men prefer. She was attracted to me instantly. After all, the body I have devised is perfect in every detail ... actually a not-world ideal. I didn't lose any time overwhelming her susceptibilities. I remember distinctly that just as I stooped to pick up a large roll of money I had dropped, her eyes met mine and in them I could see her admiration. We went to my suite and I showed her one of the money rooms. Would you believe it? She actually took off her shoes and ran around through the money in her bare feet! Then we kissed. Concealed in the dermis of the lips are tiny, highly sensitized nerve ends which send sensations to the brain. The brain interprets these impulses in a certain manner. As a result, the rate of secretion in the adrenals on the ends of the kidneys increases and an enlivening of the entire endocrine system follows. Thus I felt the beginnings of love. I sat her down on a pile of money and kissed her again. Again the tingling, again the secretion and activation. I integrated myself quickly. Now in all the motion pictures—true representations of life and love in this world—the man with a lot of money or virtue kisses the girl and tries to induce her to do something biological. She then refuses. This pleases both of them, for he wanted her to refuse. She, in turn, wanted him to want her, but also wanted to prevent him so that he would have a high opinion of her. Do I make myself clear? I kissed the blonde girl and gave her to understand what I then wanted. Well, you can imagine my surprise when she said yes! So I had failed. I had not found love. I became so abstracted by this problem that the blonde girl fell asleep. I thoughtfully drank quantities of excellent alcohol called gin and didn't even notice when the blonde girl left. I am now beginning to feel the effects of this alcohol again. Ha. 
Don't I wish old Blgftury were here in the vibrational pattern of an olive? I'd get the blonde in and have her eat him out of a Martini. That is a gin mixture. I think I'll get a hot report off to the old so-and-so right now. It'll take him a gleeb to figure this one out. I'll tell him I'm setting up an atomic reactor in the sewage systems here and that all we have to do is activate it and all the not-people will die of chain asphyxiation. Boy, what an easy job this turned out to be. It's just a vacation. Joe, you old gold-bricker, imagine you here all these gleebs living off the fat of the land. Yak, yak. Affectionately. Glmpauszn Sacramento, Calif. July 25 Dear Joe: All is lost unless we work swiftly. I received your revealing letter the morning after having a terrible experience of my own. I drank a lot of gin for two days and then decided to go to one of these seance things. Somewhere along the way I picked up a red-headed girl. When we got to the darkened seance room, I took the redhead into a corner and continued my investigations into the realm of love. I failed again because she said yes immediately. The nerves of my dermis were working overtime when suddenly I had the most frightening experience of my life. Now I know what a horror these people really are to our world. The medium had turned out all the lights. He said there was a strong psychic influence in the room somewhere. That was me, of course, but I was too busy with the redhead to notice. Anyway, Mrs. Somebody wanted to make contact with her paternal grandmother, Lucy, from the beyond. The medium went into his act. He concentrated and sweated and suddenly something began to take form in the room. The best way to describe it in not-world language is a white, shapeless cascade of light. Mrs. Somebody reared to her feet and screeched, "Grandma Lucy!" Then I really took notice. Grandma Lucy, nothing! This medium had actually brought Blgftury partially across the vibration barrier. 
He must have been vibrating in the fringe area and got caught in the works. Did he look mad! His zyhku was open and his btgrimms were down. Worst of all, he saw me. Looked right at me with an unbelievable pattern of pain, anger, fear and amazement in his matrix. Me and the redhead. Then comes your letter today telling of the fate that befell you as a result of drinking alcohol. Our wrenchingly attuned faculties in these not-world bodies need the loathsome drug to escape from the reality of not-reality. It's true. I cannot do without it now. The day is only half over and I have consumed a quart and a half. And it is dulling all my powers as it has practically obliterated yours. I can't even become invisible any more. I must find the formula that will wipe out the not-world men quickly. Quickly!

Glmpauszn

Florence, Italy
September 10

Dear Joe:

This telepathic control becomes more difficult every time. I must pick closer points of communication soon. I have nothing to report but failure. I bought a ton of equipment and went to work on the formula that is half complete in my instructions. Six of my hotel rooms were filled with tubes, pipes and apparatus of all kinds. I had got my mechanism as close to perfect as possible when I realized that, in my befuddled condition, I had set off a reaction that inevitably would result in an explosion. I had to leave there immediately, but I could not create suspicion. The management was not aware of the nature of my activities. I moved swiftly. I could not afford time to bring my baggage. I stuffed as much money into my pockets as I could and then sauntered into the hotel lobby. Assuming my most casual air, I told the manager I was checking out. Naturally he was stunned since I was his best customer. "But why, sir?" he asked plaintively. I was baffled. What could I tell him? "Don't you like the rooms?" he persisted. "Isn't the service good?" "It's the rooms," I told him. "They're—they're—" "They're what?" he wanted to know.
"They're not safe." "Not safe? But that is ridiculous. This hotel is...." At this point the blast came. My nerves were a wreck from the alcohol. "See?" I screamed. "Not safe. I knew they were going to blow up!" He stood paralyzed as I ran from the lobby. Oh, well, never say die. Another day, another hotel. I swear I'm even beginning to think like the not-men, curse them.

Glmpauszn

Rochester, New York
September 25

Dear Joe:

I have it! It is done! In spite of the alcohol, in spite of Blgftury's niggling criticism, I have succeeded. I now have developed a form of mold, somewhat similar to the antibiotics of this world, that, transmitted to the human organism, will cause a disease whose end will be swift and fatal. First the brain will dissolve and then the body will fall apart. Nothing in this world can stop the spread of it once it is loose. Absolutely nothing. We must use care. Stock in as much gin as you are able. I will bring with me all that I can. Meanwhile I must return to my original place of birth into this world of horrors. There I will secure the gateway, a large mirror, the vibrational point at which we shall meet and slowly climb the frequency scale to emerge into our own beautiful, now secure world. You and I together, Joe, conquerors, liberators. You say you eat little and drink as much as you can. The same with me. Even in this revolting world I am a sad sight. My not-world senses falter. This is the last letter. Tomorrow I come with the gateway. When the gin is gone, we will plant the mold in the hotel where you live. In only a single gleeb it will begin to work. The men of this queer world will be no more. But we can't say we didn't have some fun, can we, Joe? And just let Blgftury make one crack. Just one xyzprlt. I'll have hgutry before the ghjdksla!

Glmpauszn

Dear Editor:

These guys might be queer drunk hopheads. But if not? If soon brain dissolve, body fall apart, how long have we got?
Please, anybody who knows answer, write to me—Ivan Smernda, Plaza Ritz Arms—how long is a gleeb?
|
B. It plays into the nature of Glmpauszn's people, and how they exist alongside ours.
|
There is one central object that saves Casey Ritter and Pard Hoskins from the wrath of Jupiter’s scorpion race. What is it and what does it do?
A. A potion that causes the scorpions to go insane.
B. A yellow space suit. The scorpion race considers yellow a sign of serious respect.
C. A yellow space suit. The scorpion race considers yellow a sign of romantic love.
D. A perfume that makes the scorpions fall in love with whoever wears it.
|
JUPITER'S JOKE

By A. L. HALEY

Casey Ritter, the guy who never turned down a dare, breathed a prayer to the gods of idiots and spacemen, and headed in toward the great red spot of terrible Jupiter.

[Transcriber's Note: This etext was produced from Planet Stories Fall 1954. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.]

Those methane and ammonia planets, take it from me, they're the dead-end of creation, and why the Old Man ever thought them up I'll never know. I never thought I'd mess around any of them, but things can sure happen. A man can get himself backed into a corner in this little old solar system. It just ain't big enough for a gent of scope and talent; and the day the Solar System Customs caught me red-handed smuggling Kooleen crystals in from Mars, I knew I was in that corner, and sewed up tight. Sure, the crystals are deadly, but I was smuggling them legitimately, in a manner of speaking, for this doctor to experiment with. He wasn't going to sell them for dope. But—and this was the 'but' that was likely to deprive the System of my activities—even experimenting with them was illegal even if it needed to be done; also, I had promised not to rat on him before taking the job. Well, Casey Ritter may be a lot of things we won't mention, but he doesn't rat on his clients. So there I was, closeted with the ten members of the S.S. Customs Court, getting set to hear the gavel fall and the head man intone the sentence that would take me out of circulation for a long, long time. And instead, blast me, if they didn't foul me with this trip to good old Jupiter. I didn't get it at first. I'd argued with 'em, but inside I'd been all set for the sentence, and even sort of reconciled to it. I could even hear the words in my mind. But they didn't match what the judge was saying. I stood there gaping like a beached fish while I sorted it out. Then I croaked, "Jupiter! What for? Are you running outa space in stir?
Want to choke me to death in chlorine instead?" Being civil to the court didn't seem important just then. Jupiter was worse than the pen, a lot worse. Jupiter was a death sentence. The senior judge rapped sharply with his gavel. He frowned me down and then nodded at the judge on his right. This bird, a little old hank of dried-up straw, joined his fingertips carefully, cleared his scrawny throat, and told me what for. "You've no doubt heard tales of the strange population of Jupiter," he said. "Every spaceman has, I am sure. Insect-like creatures who manifestly migrated there from some other system and who inhabit the Red Spot of the planet, floating in some kind of artificial anti-gravity field in the gaseous portion of the atmosphere—" I snorted. "Aw, hell, judge, that's just one of those screwy fairy tales! How could any—" The senior judge rapped ferociously, and I skidded to a halt. Our little story teller patiently cleared his skinny throat again. "I assure you it is no fairy tale. We possess well-authenticated photographs of these inhabitants, and if you are prepared to visit them and in some way worm from them the secret of their anti-gravity field, the government stands ready to issue you a full pardon as well as a substantial monetary reward. Your talents, Mr. Ritter, seem, shall we say, eminently suited to the task." He beamed at me. I looked around. They were all beaming. At me! Suddenly I smelled a rat as big as an elephant. That whole Kooleen caper: Had it been just a trap to lead me straight to this? I hadn't been able to figure how they'd cracked my setup.... At the thought my larynx froze up tight. This was worse than I'd thought. Government men trapping me and then beaming at me. And a full pardon. And a reward. Oh, no! I told myself, it wasn't possible. Not when I already had more counts against me than a cur has fleas. Not unless it was a straight suicide mission! I feebly massaged my throat. "Pictures?" I whispered. "Show me 'em." 
Crude, but it was all I could squeeze out. I squeezed out more when I saw those pictures, though. Those inhabitants were charming, just charming if you like scorpions. Well, a cross between a scorpion and a grasshopper, to be accurate. Floating among that red stuff, they showed up a kind of sickly purple turning to gangrene around the edges. The bleat of anguish that accompanied my first view of those beauties had taken my voice again. "How big?" I whispered. He shrugged, trying for nonchalance. "About the size of a man, I believe." I raised my shrinking head. "Take me to jail!" I said firmly, and collapsed onto my chair. A crafty-eyed buzzard across the table leaned toward me. "So this is the great Casey Ritter, daredevil of the Solar System!" he sneered. "Never loses a bet, never turns down a dare!" I shuddered. "You're telling that one! And besides, a man's got to draw the line somewhere. And I'm drawing it right here. Take me to jail!" They were really stumped. They hadn't expected me to take this attitude at all. No doubt they had it figured that I'd gratefully throw myself into a sea of ammonia among man-size scorpions just for the hell of it. Nuts! After all, in the pen a man can eat and breathe, and a guard won't reach in and nip off an arm or leg while he's got his back turned. How stupid could they get? When I finally wore them down and got to my little cell, I looked around it with a feeling of real coziness. I even patted the walls chummily and snapped a salute at the guard. It makes me grind my molars now to think of it. The way that bunch of stuffed shirts in the S.S.C. made a gold-barred chimpanzee out of me has broken my spirit and turned me into an honest trader. Me, Casey Ritter, slickest slicker in the Solar System, led like a precious infant right where I'd flatly refused to go! In plain English, I underestimated the enemy. Feeling safe and secure in the grip of the good old Iron College, I relaxed. 
At this strategic point, the enemy planted a stoolie on me. Not in my cell block. They were too smart for that. But we met at recreation, and his mug seemed familiar, like a wisp of smoke where no smoke has got a right to be; and after awhile I braced him. I was right. I'd met the shrimp before when I was wound up in an asteroid real estate racket. Pard Hoskins was his alias, and he had the tag of being a real slick operator. We swapped yarns for about a week when we met, and then I asked him what's his rap this trip. "Oh, a pretty good jolt if they can keep hold of me," he says. "I just made a pass at the Killicut Emeralds, that's all, and got nabbed." "Oh, no!" I moaned. "What were you trying to do, start a feud between us and Mars?" He shrugged, but his little black-currant eyes began to sparkle with real passion, the high voltage kind that only a woman in a million, or a million in a bank, can kindle in a guy. "Buddy," he said reverently, "I'd start more than that just to get me mitts on them stones again! Why, you ain't never seen jools till you've seen them! Big as hen's eggs, an even dozen of 'em; and flawless, I'm a-shoutin', not a flaw!" His eyes watered at the memory, yearning like a hound-dog's over a fresh scent. I couldn't believe it. Those emeralds were in the inner shrine of the super-sacred, super-secret temple of the cavern-dwelling tribe of Killicuts on Mars—the real aborigines. Bleachies, we call them, sort of contemptuously; but those Bleachies are a rough lot when they're mad, and if Pard had really got near those emeralds, he should be nothing but a heap of cleaned bones by now. Either he was the world's champion liar or its bravest son, and either way I took my hat off to him. "How'd you make the getaway?" I asked, taking him at his word. He looked loftily past me. "Sorry. Gotta keep that a secret. Likewise where I cached 'em." "Cached what?" "The rocks, stupe." I hardly heard the cut. "You mean you really did get away with them?" 
My jaw must've been hanging down a foot, because I'd just been playing along with him, not really believing him, and now all of a sudden I somehow knew that he'd really lifted those emeralds. But how? It was impossible. I'd investigated once myself. He nodded and then moved casually away. I looked up and saw a guard coming. That night I turned on my hard prison cot until my bones were so much jelly, trying to figure that steal. The next morning I got up burning with this fever for information, only to find that Pard had got himself put in solitary for mugging a guard, and that really put the heat on me. I chewed my fingernails down to the quick by the time he got out a week later. By that time he really had me hooked. I'd of sworn he was leveling with me. But he wouldn't tell me how he'd worked the steal. Instead, he opened up on the trade he'd booked for the string. He said, "When I chisel me way outa this squirrel cage, I'm gonna hit fer good old Jupe and sell 'em to Akroida. She's nuts about jools. What that old girl won't give me fer 'em—" He whistled appreciatively, thinking about it. "Jupiter!" I goggled at him. "Akroida! Who's she?" He looked at me as if I hadn't yet got out from under the rock where he was sure I'd been born. "Don't you know nothin', butterhead?" From him I took it. I even waited patiently till the master spoke again. The memory still makes me fry. "Akroida," he explained in his own sweet time, "is the queen-scorp of them idiotic scorpions that lives on Jupiter. I sold her the Halcyon Diamond that disappeared from the World Museum five years ago, remember?" He winked broadly. "It come from Mars in the first place, you know. Mars! What a place fer jools! Damn desert's lousy with 'em, if it wasn't so much trouble to dig 'em out—" He went off into a dream about the rocks on Mars but I jerked him back. "You mean those scorpions have really got brains?" "Brains!" he snorted. "Have they got brains! Why, they're smarter than people! 
And not ferocious, neither, in spite of how they look, if you just leave 'em alone. That's all they want, just to be left alone. Peace an' quiet, and lots of methane and ammonia and arsenic, that's fer them. Besides, the space suit rig you got to wear, they can't bite you. Akroida's not a bad old girl. Partial to arsenic on her lettuce, so I brought her a hundred pounds of the stuff, an' she went fer that almost like it was diamonds, too. Did I rate around there fer awhile!" He sighed regretfully. "But then I went and made her mad, an' I'm kinda persona non grata there right now. By the time I gnaw outa this here cheese trap, though, I figger she'll be all cooled off and ready fer them emeralds." I went back to my cot that night, and this time instead of biting my nails, I bit myself. So I faced it. Casey Ritter lost his nerve, and along with it, the chance of a lifetime. A better man than me had already penetrated the Great Red Spot of old Jupiter and come out alive. That thought ate me to the quick, and I began to wonder if it was too late, after all. I could hardly wait for morning to come, so that I could pry more information out of Pard Hoskins. But I didn't see Pard for a few days. And then, a week later, a group of lifers made a break that didn't jell, and the whole bunch was locked up in the blockhouse, the special building reserved for escapees. Pard Hoskins was in the bunch. He'd never get out of there, and he knew it. So did I. For three more days I worked down my knuckles, my nails being gone, while I sat around all hunched up, wondering feverishly if Pard would make a deal about those emeralds. Then I broke down and sent out a letter to the S.S.C. The Big Sneer of the conference table promptly dropped in on me, friendly as a bottle of strychnine. But for a lad headed for Jupiter that was good training, so I sneered right back at him, explained the caper, and we both paid a visit to Pard. In two days the deal was made and the caper set up. 
There were a few bits of info that Pard had to shell out, like where the emeralds were, and how to communicate with those scorpions, and how he'd made Akroida mad. "I put on a yeller slicker," he confessed sadly. "That there ammonia mist was eatin' into the finish on my spacesuit, so I draped this here slicker around me to sorta fancy up the rig before goin' in to an audience with the old rip." He shook his head slowly. "The kid that took me in was colorblind, so I didn't have no warning at all. I found out that them scorpions can't stand yeller. It just plain drives them nuts! Thought they'd chaw me up and spit me out into the chlorine before I could get outa the damn thing. If my colorblind pal hadn't helped me, they'd of done it, too. And Akroida claimed I done it a-purpose to upset her." Then he winked at me. "But then I got off in a corner and cooked up some perfume that drives them nuts the other way; sorta frantic with ecstasy, like the book says. Didn't have a chance to try it on Akroida, though. She wouldn't give me another audience. It's in the stuff they cleaned outa me room: a poiple bottle with a bright green stopper." He ruminated a few minutes. "Tell you what, chump. Make them shell out with a green an' poiple spacesuit—them's the real Jupiter colors—an' put just a touch o' that there perfume on the outside of it. Akroida'll do anything fer you if she just gets a whiff. Just anything! But remember, don't use but a drop. It's real powerful."

II

Real powerful, said the man. What an understatement! But the day I was set adrift in that sea of frozen ammonia clouds mixed with nice cozy methane gas I sure prayed for it to be powerful, and I clutched that tiny bottle like that boy Aladdin clutching his little old lamp. I'd had a lot of cooperation getting that far.
An Earth patrol had slipped down onto the Red Desert of Mars and picked up the Killicut Emeralds from where Pard Hoskins had cached them; and safe out in space again, we had pored over that string of green headlights practically slobbering. But the Big Sneer of the S.S.C., the fellow that had got me into this caper, was right there to take the joy out of it all and to remind me that this was public service, strictly. "These—" he had proclaimed with a disdainful flourish, like a placer miner pointing to a batch of fool's gold—"These jewels are as nothing, Ritter, compared with the value of the secret you are to buy with them. And be assured that if you're man enough to effect the trade—" He paused, his long nose twitching cynically—"IF you succeed, your reward will be triple what you could get for them in any market. Added to which, IF you succeed, you will be a free man." That twitch of the nose riled me no little. "I ain't failed yet!" I snarled at him. "Just you wait till I do, feller!" I slipped the string of emeralds back into its little safe. "Instead of sniping at me, why don't you get that brain busy and set our rendezvous?" With that we got down to business and fixed a meeting point out on Jupiter's farthest moon; then they took me in to the edge of Jupiter's ice-cloud and turned me loose in a peanut of a space boat with old Jupe looming ahead bigger than all outdoors and the Red Spot dead ahead. I patted my pretty enameled suit, which was a study in paris green and passionate purple. I patted the three hundred pounds of arsenic crystals for Akroida and anyone else I might have to bribe. I anxiously examined my suit's air and water containers and the heating unit that would keep them in their proper state. I had already gone over the space boat. Yeah, I was as nervous as a cat with new kittens. Feeling again for my little bottle of horrid stench, I breathed a prayer to the god of idiots and spacemen, and headed in. 
The big ship was long gone, and I felt like a mighty small and naked microbe diving into the Pacific Ocean. That famous Red Spot was that big, too. It kept expanding until the whole universe was a fierce, raw luminous red. Out beyond it at first there had been fringes of snow-white frozen ammonia, but now it was all dyed redder than Mars. Then I took the plunge right into it. Surprise! The stuff was plants! Plants as big as meadows, bright red, floating around in those clouds of frozen ammonia like seaweed! Then I noticed that the ammonia around them wasn't frozen any more and peeked at the outside thermometer. I couldn't believe it. It was above zero. Then I forgot about the temperature because it dawned on me that I was lost. I couldn't see a thing but drifting ammonia fog and those tangles of red floating plants like little islands all around. Cutting down the motor, I eased along. But my green boat must have showed up like a lighthouse in all that red, because it wasn't long until I spotted a purple and green hopper-scorp traveling straight toward me, sort of rowing along with a pair of stubby wings. He didn't seem to be making much effort, even though he was climbing vertically up from the planet. In fact, he didn't seem to be climbing at all but just going along horizontally. There just wasn't any up or down in that crazy place. It must be that anti-grav field, I concluded. The air was getting different, too, now that I was further in. I'm no chemist, and I couldn't have gotten out there to experiment if I had been, but those plants were certainly doing something to that ammonia and methane. The fog thinned, for one thing, and the temperature rose to nearly forty. Meanwhile the hopper-scorp reached the ship. Hastily I squirted some of my Scorpion-Come-Hither lure on the chest of my spacesuit, opened the lock, and popped out, brave as could be. Face to face with that thing, though, I nearly lost my grip on the handle.
In fact, I'd have fainted dead away right there if Pard Hoskins hadn't been there already and lived. If that little shrimp could do it, I could, too. I braced up and tapped out the greeting Pard had taught me. My fiendish-looking opponent tapped right back, inquiring why the hell I was back so soon when I knew that Akroida was all set to carve me into steaks for just any meal. But the tone was friendly and even intimate—or rather, the taps were. There was even a rather warm expression discernible in the thing's eyes, so I took heart and decided to ignore the ferocious features surrounding those eyes. After all, the poor sinner's map was made of shell, and he wasn't responsible for its expression. I tapped back very politely that he must be mistaking me for someone else. "I've never been here before, and so I've never met the charming lady," I informed him. "However, I have something very special in the way of jewels—not with me, naturally—and the rumor is that she might be interested." He reared back at that, and reaching up, plucked his right eye out of the socket and reeled it out to the end of a two-foot tentacle, and then he examined me with it just like an old-time earl with one of those things they called monocles. Pard hadn't warned me about those removable eyes, for reasons best known to himself. I still wake up screaming.... Anyway, when that thing pulled out its eye and held it toward me, I backed up against the side of the ship like I'd been half-electrocuted. Then I gagged. But I could still remember that I had to live in that suit for awhile, so I held on. Then that monstrosity reeled in the eye, and I gagged again. My actions didn't bother him a bit. "Jewels, did you say?" he tapped out thoughtfully, just like an ordinary business man, and I managed to tap out yes. He drifted closer; close enough to get a whiff.... A shudder of ecstasy stiffened him. His head and eyes rolled with it, and he wafted closer still. 
Right there I began to harbor a premonition that there might be such a thing as being too popular in Scorpdom, but I thrust this sneak-thief idea back into limbo. Taking advantage of his condition, I boldly tapped out, "How's about taking me on a guided tour through this red spinach patch to Akroida, old pal?" Or words to that effect. He lolled his hideous cranium practically on my shoulder. "Anything! Just anything you desire, my dearest friend." I tried to back off from him a bit, but the ship stopped me. "I'm Casey Ritter. What's your label, chum?" "Attaboy," he ticked coyly. "Attaboy?" Things blurred around me. It couldn't be. It was just plain nuts. Then I got a glimmer through my paralyzed gray matter. "Who named you that?" He simpered. "My dear friend, Pard Hoskins." I breathed again. How simple could I get? He'd already mistaken me for Pard, hadn't he? Then I remembered something else. "How come you aren't mad at him? Don't you hate yellow, too?" He hung his silly head. "I fear I am colorblind," he confessed sadly. Right there I forgave him for pulling that eye on me. He was the guide I needed, the one who had got Pard out alive. I almost hugged him. "Lead off, old pal," I sang out, and then had to tap it. "I'll follow in my boat." Well, I'd met the first of the brood and was still alive. Not only alive but loved and cherished, thanks to Pard's inventiveness and to a kindly fate which had sent Pard's old pal my way. A great man, Pard Hoskins. How had he made friends with the brute in the first place? Being once more inside my spaceboat, I raised my helmet, which was like one of those head-pieces they used to put on suits of armor instead of the usual plastic bubble. And it was rigged out with phony antennae and mandibles and other embellishments calculated to interest my hosts. Whether it interested them or not, it was plenty uncomfortable for me. 
Peeking out the porthole I saw that my guide was fidgeting and looking over his shoulder at my ship, so I eased in the controls and edged after him. To my surprise a vapor shot out of a box that I had taken for a natural lump on his back, and he darted away from me. I opened the throttle and tore after him among the immense red blobs that were now beginning to be patterned with dozens of green-and-purple scorpions, all busy filling huge baskets with buds and tendrils, no doubt. Other scorpions oared and floated about in twos and threes in a free and peaceable manner that almost made me forget that I was scared to death of them, and they stared at my boat with only a mild interest that would have taught manners to most of my fellow citizens of Earth. It wasn't until we had covered some two hundred miles of this that something began to loom out of the mist, and I forgot the playboys and the field workers. It loomed higher and higher. Then we burst out into a clearing several miles in diameter, and I saw the structure clearly. It was red, like everything else in this screwy place, and could only have been built out of compressed blocks of the red plant. In shape it was a perfect octagon. It hung poised in the center of the cleared space, suspended on nothing. It had to be at least a mile in diameter, and its sides were pierced with thousands of openings through which its nightmare occupants appeared and disappeared, drifting in and out like they had all the time in the world. I stared until my eyeballs felt paralyzed. Pard was right again. These critters had brains. And my S.S.C. persecutor was right, too. That anti-grav secret was worth more than any string of rocks in the system, including the Killicut Emeralds. Then I swallowed hard. Attaboy was leading me straight across to a window. Closing my helmet, my fingers fumbled badly. My brain was fumbling, too. "Zero hour, chump!" it told me, and I shuddered.
Picking up the first hundred pounds of the arsenic, I wobbled over to the airlock.

III

That palace was like nothing on earth. Naturally, you'll say, it's on Jupiter. But I mean it was even queerer than that. It was like no building on any planet at all. And, in fact, it wasn't on a planet; it was floating up there only two hundred miles in from the raw edge of space. In that building everything stayed right where it was put. If it was put twelve or fifty feet up off the floor, it stayed there. Not that there wasn't gravity. There was plenty of gravity to suit me—just right, in fact—and still they had furniture sitting around in the air as solid as if on a floor. Which was fine for flying hopper-scorps, but what about Casey Ritter, who hadn't cultivated even a feather? Attaboy, however, had the answers for everything. Towing me from the airlock to the window ledge, he again sniffed that delectable odor on my chest, caressed me with his front pair of legs while I manfully endured, and then without warning tossed me onto his back above the little box and flew off with me along a tunnel with luminous red walls. We finally came to the central hall of the palace, and at the sight of all that space dropping away, I clutched at his shell and nearly dropped the arsenic. But he didn't have any brakes I could grab, so he just flew out into mid-air in a room that could have swallowed a city block, skyscrapers and all. It was like a mammoth red cavern, and it glowed like the inside of a red light. No wonder those scorpions like green and purple. What a relief from all that red! A patch in the middle of the hall became a floating platform holding up a divan twenty feet square covered with stuff as green as new spring grass, and in the center of this reclined Akroida. It had to be. Who else could look like that? No one, believe me, boys and girls, no one! Our little Akroida was a pure and peculiarly violent purple—not a green edge anywhere.
She was even more purple than my fancy enameled space suit, and she was big enough to comfortably fill most of that twenty-foot couch. To my shrinking eyes right then she looked as big as a ten-ton cannon and twice as mean and dangerous. She was idly nipping here and there as though she was just itching to take a hunk out of somebody, and the way the servants were edging away out around her, I could see they didn't want to get in range. I didn't blame them a bit. Under the vicious sag of her Roman nose, her mandibles kept grinding, shaking the jewels that were hung all over her repulsive carcass, and making the Halcyon Diamond on her chest blaze like a bonfire. Attaboy dumped me onto a floating cushion where I lay clutching and shuddering away from her and from the void all around me, and went across to her alone with the arsenic. Akroida rose up sort of languidly on an elbow that was all stripped bone and sharp as a needle. She pulled an eyeball out about a yard and scanned Attaboy and the box. He closed in to the couch all hunched over, ducked his head humbly half-a-dozen times, and pushed the box over beside her. Akroida eased her eyeball back, opened the box and sniffed, and then turned to Attaboy with a full-blown Satanic grin. I could hear her question reverberate away over where I was. "Who from?" asked Akroida. That conversation was telegraphed to me blow by blow by the actions of those hopper-scorps. I didn't need their particular brand of Morse Code at all. "Who from?" Attaboy cringed lower and blushed a purple all-over blush. "Dear lady, it is from an interspace trader who possesses some truly remarkable jewels," he confessed coyly. Akroida toyed with the Halcyon Diamond and ignored the bait. "His name?" she demanded. And when he told her, with a bad stutter in his code, she reared up higher on her skinny elbow and glared in my direction. "Casey Ritter? Never heard of him. Where's he from?" Well, after all, she wasn't blind. He had to confess. 
"I—uh—the stones were so amazing, Royal Akroida, that I didn't pay much attention to the—uh—trader. He does seem to resemble an—ah—earthman." He ducked his head and fearfully waited. A sort of jerking quiver ran through Akroida. She reared up even higher. Her mean Roman nose twitched. "An earthman? Like Pard Hoskins?" Attaboy shrank smaller and smaller. He could only nod dumbly. The storm broke, all right. That old dame let out a scream like a maddened stallion and began to thrash around and flail her couch with that dragon's tail of hers.
|
D. A perfume that makes the scorpions fall in love with whoever wears it.
|
Which is the most likely social consequence of AIs?
A. The AI developers will be able to shape societal structures as they see fit
B. There will be an overwhelming amount of regulation that will add control to people's lives
C. Over-reliance on technology might cause some loss of valuable intuition from educated people
D. There will be no jobs left for humans to complete if AIs continue developing
|
AI: what's the worst that could happen? The Centre for the Future of Intelligence is seeking to investigate the implications of artificial intelligence for humanity, and make sure humans take advantage of the opportunities while dodging the risks. It launched at the University of Cambridge last October, and is a collaboration between four universities and colleges – Cambridge, Oxford, Imperial and Berkeley – backed with a 10-year, £10m grant from the Leverhulme Trust. Because no single discipline is ideally suited to this task, the centre emphasises the importance of interdisciplinary knowledge-sharing and collaboration. It is bringing together a diverse community of some of the world's best researchers, philosophers, psychologists, lawyers and computer scientists. Executive director of the centre is Stephen Cave, a writer, philosopher and former diplomat. Harry Armstrong, head of futures at Nesta, which publishes The Long + Short, spoke with Cave about the impact of AI. Their conversation has been edited. Harry Armstrong: Do you see the interdisciplinary nature of the centre as one of its key values and one of the key impacts you hope it will have on the field? Stephen Cave: Thinking about the impact of AI is not something that any one discipline owns or does in any very systematic way. So if academia is going to rise to the challenge and provide thought leadership on this hugely important issue, then we’re going to need to do it by breaking down current disciplinary boundaries and bringing people with very different expertise together. That means bringing together the technologists and the experts at developing these algorithms together with social scientists, philosophers, legal scholars and so forth. I think there are many areas of science where more interdisciplinary engagement would be valuable. Biotech’s another example. 
In that sense AI isn’t unique, but I think because thinking about AI is still in very early stages, we have an opportunity to shape the way in which we think about it, and build that community. We want to create a space where many different disciplines can come together and develop a shared language, learn from each other’s approaches, and hopefully very quickly move to be able to actually develop new ideas, new conclusions, together. But the first step is learning how to talk to each other. At a recent talk, Naomi Klein said that addressing the challenge of climate change could not have come at a worse time. The current dominant political and economic ideologies, along with growing isolationist sentiment, run contrary to the bipartisan, collaborative approaches needed to solve global issues like climate change. Do you see the same issues hampering a global effort to respond to the challenges AI raises? Climate change suffers from the problem that the costs are not incurred in any direct way by the industrialists who own the technology and are profiting from it. With AI, that has been the case so far, although not on the same scale. There has been disruption but so far, compared to industrialisation, the impact has been fairly small. That will probably change. AI companies, and in particular the big tech companies, are very concerned that this won't go like climate change, but rather it will go like GMOs: that people will have a gut reaction to this technology as soon as the first great swathe of job losses takes hold. People speculate that 50m jobs could be lost in the US if trucking is automated, which is conceivable within 10 years. You could imagine a populist US government therefore simply banning driverless cars. So I think there is anxiety in the tech industry that there could be a serious reaction against this technology at any point. 
And so my impression is that there is a feeling within these companies that these ethical and social implications need to be taken very seriously, now. And that a broad buy-in by society into some kind of vision of the future in which this technology plays a role is required, if a dangerous – or to them dangerous – counteraction is to be avoided. My personal experience working with these tech companies is that they are concerned for their businesses and genuinely want to do the right thing. Of course there are intellectual challenges and there is money to be made, but equally they are people who don't think when they get up in the morning that they're going to put people out of jobs or bring about the downfall of humanity. As the industry matures it's developing a sense of responsibility. So I think we've got a real opportunity, despite the general climate, and in some ways because of it. There's a great opportunity to bring industry on board to make sure the technology is developed in the right way. One of the dominant narratives around not only AI but technology and automation more generally is that we, as humans, are at the mercy of technological progress. If you try and push against this idea you can be labelled as being anti-progress and stuck in the past. But we do have a lot more control than we give ourselves credit for. For example, routineness and susceptibility to automation are not inevitable features of occupations, job design is hugely important. How do we design jobs? How do we create jobs that allow people to do the kind of work they want to do? There can be a bit of a conflict between being impacted by what's happening and having some sort of control over what we want to happen. Certainly, we encounter technological determinism a lot. And it's understandable. For us as individuals, of course it does feel like it always is happening and we just have to cope. No one individual can do much about it, other than adapt. 
But that's different when we consider ourselves at a level of a society, as a polis [city state], or as an international community. I think we can shape the way in which technology develops. We have various tools. In any given country, we have regulations. There's a possibility of international regulation. Technology is emerging from a certain legal, political, normative, cultural, and social framework. It's coming from a certain place. And it is shaped by all of those things. And I think the more we understand a technology's relationship with those things, and the more we then consciously try to shape those things, the more we are going to influence the technology. So, for example, developing a culture of responsible innovation. For example, a kind of Hippocratic oath for AI developers. These things are within the realms of what is feasible, and I think will help to shape the future. One of the problems with intervention, generally, is that we cannot control the course of events. We can attempt to, but we don't know how things are going to evolve. The reality is, societies are much too complex for us to be able to shape them in any very specific way, as plenty of ideologies and political movements have found to their cost. There are often unforeseen consequences that can derail a project. I think, nonetheless, there are things we can do. We can try to imagine how things might go very badly wrong, and then work hard to develop systems that will stop that from happening. We can also try collectively to imagine how things could go very right. The kind of society that we actually want to live in that uses this technology. And I'm sure that will be skewed in all sorts of ways, and we might imagine things that seem wonderful and actually have terrible by-products. This conversation cannot be in the hands of any one group. It oughtn't be in the hands of Silicon Valley billionaires alone. 
They've got their role to play, but this is a conversation we need to be having as widely as possible. The centre is developing some really interesting projects but perhaps one of the most interesting is the discussion of what intelligence might be. Could you go into a bit more detail about the kinds of questions you are trying to explore in this area? You mean kinds of intelligence? Yeah. I think this is very important because historically, we've had an overwhelming tendency to anthropomorphise. We define what intelligence is, historically, as being human-like. And then within that, being like certain humans. And it's taken a very long time for the academic community to accept that there could be such a thing as non-human intelligence at all. We know that crows, for example, who have had a completely different evolutionary history, or octopuses, who have an even more different evolutionary history, might have a kind of intelligence that's very different to ours. That in some ways rivals our own, and so forth. But luckily, we have got to that point in recent years of accepting that we are not the only form of intelligence. But now, AI is challenging that from a different direction. Just as we are accepting that the natural world offers this enormous range of different intelligences, we are at the same time inventing new intelligences that are radically different to humans. And I think, still, this anthropomorphic picture of the kind of humanoid android, the robot, dominates our idea of what AI is too much. And too many people, and the industry as well, talk about human-level artificial intelligence as a goal, or general AI, which basically means like a human. But actually what we're building is nothing like a human. When the first pocket calculator was made, it didn't do maths like a human. It was vastly better. It didn't make the occasional mistake. 
When we set about creating these artificial agents to solve these problems, because they have a completely different evolutionary history to humans, they solve problems in very different ways. And until now, people have been fairly shy about describing them as intelligent. Or rather, in the history of AIs, we think solving a particular problem would require intelligence. Then we solve it. And then that's no longer intelligence, because we've solved it. Chess is a good example. But the reality is, we are creating a whole new world of different artificial agents. And we need to understand that world. We need to understand all the different ways of being clever, if you like. How you can be extremely sophisticated at some particular rational process, and yet extremely bad at another one in a way that bears no relation to the way humans are on these axes. And this is important, partly because we need to expand our sense of what is intelligent, like we have done with the natural world. Because lots of things follow from saying something is intelligent. Historically, we have a long tradition in Western philosophy of saying those who are intelligent should rule. So if intelligence equates to power, then obviously we need to think about what we mean by intelligence. Who has it and who doesn't. Or how it equates to rights and responsibilities. It certainly is a very ambitious project to create the atlas of intelligence. There was a point I read in something you wrote on our ideas of intelligence that I thought was very interesting. We actually tend to think of intelligence at the societal level when we think about human ability, rather than at the individual level but in the end conflate the two. I think that's a very good point, when we think about our capabilities, we think about what we can achieve as a whole, not individually. But when we talk about AI, we tend to think about that individual piece of technology, or that individual system. 
So for example if we think about the internet of things and AI, we should discuss intelligence as something encompassed by the whole. Yeah, absolutely. Yes, right now, perhaps it is a product of our anthropomorphising bias. But there is a tendency to see a narrative of AI versus humanity, as if it's one or the other. And yet, obviously, there are risks in this technology long before it acquires any kind of manipulative agency. Robotic technology is dangerous. Or potentially dangerous. But at the same time, most of what we're using technology for is to enhance ourselves, to increase our capacities. And a lot of what AI is going to be doing is augmenting us – we're going to be working as teams, AI-human teams. Where do you think this AI-human conflict, or concept of a conflict, comes from? Do you think that's just a reflection of historical conversations we've had about automation, or do you think it is a deeper fear? I do think it comes both from some biases that might well be innate, such as anthropomorphism, or our human tendency to ascribe agency to other objects, particularly moving ones, is well-established and probably has sound evolutionary roots. If it moves, it's probably wise to start asking yourself questions like, "What is it? What might it want? Where might it be going? Might it be hungry? Do I look like food to it?" I think it makes sense, it's natural for us to think in terms of agency. And when we do, it's natural for us to project our own ways of being and acting. And we, as primates, are profoundly co-operative. But at the same time, we're competitive and murderous. We have a strong sense of in-group versus out-group, which is responsible for both a great deal of cooperation, within the in-group, but also terrible crimes. Murder, rape, pillage, genocide; and they're pointed at the out-group. And so I think it's very natural for us to see AIs in terms of agents. We anthropomorphise them as these kind of android robots. 
And then we think about, well, you know, are they part of our in-group, or are they some other group? If they're some other group, it's us against them. Who's going to win? Well, let's see. So I think that's very natural, I think that's very human. There is this long tradition, in Western culture in particular, with associating intelligence and dominance and power. It's interesting to speculate about how, and I wish I knew more about it, and I'd like to see more research on this, about how different cultures perceive AI. It's well known that Japan is very accepting of technology and robots, for example. You can think, well, we in the West have long been justifying power relations of a certain kind on the basis that we're 'cleverer'. That's why men get to vote and women don't, or whatever. In a culture where power is not based on intelligence but, say, on a caste system, which is purely hereditary, we’d build an AI, and it would just tune in, drop out, attain enlightenment, just sit in the corner. Or we beg it to come back and help us find enlightenment. It might be that we find a completely different narrative to the one that's dominant in the West. One of the projects the centre is running is looking into what kind of AI breakthroughs may come, when and what the social consequences could be. What do you think the future holds? What are your fears – what do you think could go right and wrong in the short, medium and long term? That's a big question. Certainly I don't lie awake at night worried that robots are going to knock the door down and come in with a machine gun. If the robots take over the world, it won't be by knocking the door down. At the moment, I think it's certainly as big a risk that we have a GMO moment, and there's a powerful reaction against the technology which prevents us from reaping the benefits, which are enormous. I think that's as big a risk as the risks from the technologies themselves. 
I think one worry that we haven't talked about is that we've become extremely dependent upon this technology. And that we essentially become deskilled. There's an extent to which the history of civilisation is the history of the domestication of the human species sort of by ourselves, and also by our technology, to some extent. And AI certainly allows for that to reach a whole new level. Just think about GPs with diagnostic tools. Even now, my GP consults the computer fairly regularly. But as diagnostic tools get better, what are they going to be doing other than just typing something into the computer and reading out what comes back? At which point, you might as well do away with the GP. But then, who does know about medicine? And so we do need to worry about deskilling and about becoming dependent. And it is entirely possible that you can imagine a society in which we're all sort of prosperous, in a sense. Our basic bodily needs are provided for, perhaps, in a way, to an extent that we've never before even dreamed of. Unprecedented in human history. And yet, we're stripped of any kind of meaningful work. We have no purpose. We're escaping to virtual reality. And then you could imagine all sorts of worrying countercultures or Luddite movements or what have you. I guess that's the kind of scenario that – I haven't sketched it terribly well – but that's the kind of thing that worries me more than missile-toting giant robots. As to utopian, yes, that's interesting. I certainly mentioned a couple of things. One thing that I hope is that this new technological revolution enables us to undo some of the damage of the last one. That's a very utopian thought and not terribly realistic, but we use fossil fuels so incredibly inefficiently. 
The idea that driverless cars that are shared, basically a kind of shared service located off a Brownfield site does away with 95 per cent of all cars, freeing up a huge amount of space in the city to be greener, many fewer cars need to be produced, they would be on the road much less, there'd be fewer traffic jams. It's just one example, but the idea that we can live much more resource-efficiently, because we are living more intelligently through using these tools. And therefore can undo some of the damage of the last Industrial Revolution. That's my main utopian hope, I guess. This article was originally published on TheLong+Short.
|
C. Over-reliance on technology might cause some loss of valuable intuition from educated people
|
What change in Mr. Romero's left ventricular ejection fraction was observed between the echocardiography reports dated 06/15/2016 and 06/30/2016?
Choose the correct answer from the following options:
A. Decreased from 28% to 27%
B. Increased from 28% to 29%
C. Remained constant at 28%
D. Increased from 27% to 28%
E. Decreased from 29% to 28%
|
### Patient Report 0
**Dear colleague, **
We would like to report to you about our patient, Mr. David Romero, born
on 02/16/1942, who was under our inpatient care from 03/25/2016 to
03/30/2016.
**Diagnoses:**
- Suspected myocarditis
- Uncomplicated biopsy, pending results
- LifeVest has been adjusted
- Left ventricular ejection fraction of 28%
- Chronic hepatitis C
- Status post hepatitis A
- Post-antiviral therapy
- Exclusion of relevant coronary artery disease
**Medical History:** The patient was admitted with suspected myocarditis
due to a significantly impaired pump function noticed during outpatient
visits. Anamnestically, the patient reported experiencing fatigue and
exertional dyspnea since mid-December, with no recollection of a
preceding infection. Antiviral therapy with Interferon/Ribavirin for
chronic Hepatitis C had been ongoing since November. An outpatient
evaluation had excluded relevant coronary artery disease.
**Current Presentation:** Suspected inflammatory/dilated cardiomyopathy,
Indication for biopsy
**Physical Examination:** The patient is fully oriented with good
general condition and normal mental state. Dry skin and mucous
membranes, normal breathing, no cyanosis. Cranial nerves are grossly
intact, no focal neurological deficits, good muscle strength and
sensitivity all around. Clear, rhythmic heart sounds, with a 2/6
systolic murmur at the apex. Lungs are evenly ventilated without rales.
Resonant percussion. Soft and supple abdomen without guarding, spleen
and liver not palpable. Normal bowel sounds.
**Coronary Angiography**: Globally significantly impaired left
ventricular function (EF: 28%)
[Myocardial biopsy:]{.underline} Uncomplicated retrieval of LV
endomyocardial biopsies
[Recommendation]{.underline}: A conservative medical approach is
recommended, and further therapeutic decisions will depend on the
histological, immunohistological, and molecular biological examination
results of the now-retrieved myocardial biopsies.
[Procedure]{.underline}: Femoral closure system is applied, 6 hours of
bed rest, administration of 100 mg/day of Aspirin for 4 weeks following
left ventricular heart biopsy.
**Echocardiography before Heart Catheterization**:
Performed in sinus rhythm. Satisfactory ultrasound condition.
[Findings]{.underline}: Moderately dilated left ventricle (LVDd 64mm).
Markedly reduced systolic LV function (EF 28%). Global longitudinal
strain (2D speckle tracking): -8.6%.
Regional wall motion abnormalities: despite global hypokinesia, the
posterolateral wall (basal) contracts best. Diastolic dysfunction Grade
1 (LV relaxation disorder) (E/A 0.7) (E/E\' mean 13.8). No LV
hypertrophy. Morphologically age-appropriate heart valves. Moderately
dilated left atrium (LA Vol. 71ml). Mild mitral valve insufficiency
(Grade 1 on a 3-grade scale). Normal-sized right ventricle. Moderately
reduced RV function. Normal-sized right atrium. Minimal tricuspid valve
insufficiency (Grade 0-1 on a 3-grade scale). Systolic pulmonary artery
pressure in the normal range (systolic PAP 27mmHg).
No thrombus detected. Minimal pericardial effusion, circular, maximum
2mm, no hemodynamic relevance.
**Echocardiography after Heart Catheterization:**
[Indication]{.underline}: Follow-up on pericardial effusion.
[Examination]{.underline}: TTE at rest, including duplex and
quantitative determination of parameters. [Echocardiographic
Finding:]{.underline} Regarding pericardial effusion, the status is the
same. Circular effusion, maximum 2mm.
**ECG after Heart Catheterization:**
76/min, sinus rhythm, complete left bundle branch block.
**Summary:** On 03/26/2016, biopsy and left heart catheterization were
successfully performed without complications. Here, too, the patient
exhibited a significantly impaired pump function, currently at 28%.
**Therapy and Progression:**
Throughout the inpatient stay, the patient remained cardiorespiratorily
stable at all times. Malignant arrhythmias were ruled out via telemetry.
After the intervention, echocardiography showed no pericardial effusion.
The results of the endomyocardial biopsies are still pending. An
appointment for results discussion and evaluation of further procedures
at our facility should be scheduled in 3 weeks. Following the biopsy,
Aspirin 100 as specified should be given for 4 weeks. We intensified the
ongoing heart failure therapy and added Spironolactone to the
medication, recommending further escalation based on hemodynamic
tolerability.
**Current Recommendations:** Close cardiological follow-up examinations,
electrolyte monitoring, and echocardiography are advised. Depending on
the left ventricular ejection fraction\'s course, the implantation of an
ICD or ICD/CRT system should be considered after 3 months. On the day of
discharge, we initiated the adjustment of a Life Vest, allowing the
patient to return home in good general condition.
**Medication upon Discharge: **
**Medication ** **Dosage ** **Frequency**
---------------------------- ------------- ---------------
Aspirin 100 mg 1-0-0
Ramipril (Altace) 2.5 mg 1-0-1
Carvedilol (Coreg) 12.5 mg 1-0-1
Torasemide (Torem) 5 mg 1-0-0
Spironolactone (Aldactone) 25 mg 1-0-0
L-Thyroxine (Synthroid) 50 µg 1-0-0
**Lab results upon Discharge:**
**Parameter** **Results** **Reference Range**
------------------------ ------------- ---------------------
Absolute Erythroblasts 0.01/nL \< 0.01/nL
Sodium 134 mEq/L 136-145 mEq/L
Potassium 4.5 mEq/L 3.5-4.5 mEq/L
Creatinine (Jaffé) 1.25 mg/dL 0.70-1.20 mg/dL
Urea 50 mg/dL 17-48 mg/dL
Total Bilirubin 1.9 mg/dL \< 1.20 mg/dL
CRP 4.1 mg/L \< 5.0 mg/L
Troponin-T 78 ng/L \< 14 ng/L
ALT 67 U/L \< 41 U/L
AST 78 U/L \< 50 U/L
Alkaline Phosphatase 151 U/L 40-130 U/L
gamma-GT 200 U/L 8-61 U/L
Free Triiodothyronine 2.3 ng/L 2.00-4.40 ng/L
Free Thyroxine 14.2 ng/L 9.30-17.00 ng/L
TSH 4.1 mU/L 0.27-4.20 mU/L
Hemoglobin 11.6 g/dL 13.5-17.0 g/dL
Hematocrit 34.5% 39.5-50.5%
Erythrocytes 3.7 /pL 4.3-5.8/pL
Leukocytes 9.56/nL 3.90-10.50/nL
MCV 92.5 fL 80.0-99.0 fL
MCH 31.1 pg 27.0-33.5 pg
MCHC 33.6 g/dL 31.5-36.0 g/dL
MPV 8.9 fL 7.0-12.0 fL
RDW-CV 14.0% 11.5-15.0%
Quick 89% 78-123%
INR 1.09 0.90-1.25
PTT Actin-FS 25.3 sec. 22.0-29.0 sec.
### Patient Report 1
**Dear colleague, **
We are reporting on the pending findings of the myocardial biopsies
taken from Mr. David Romero, born on 02/16/1942, on 03/26/2016 due to the
deterioration of LV function from 40% to 28% after interferon therapy
for HCV infection.
**Diagnoses:**
- Suspected myocarditis
- LifeVest
- Left ventricular ejection fraction of 28%
- Chronic hepatitis C
- Status post hepatitis A
- Post-antiviral therapy
- Exclusion of relevant coronary artery disease
**Current Medication:**
**Medication ** **Dosage ** **Frequency**
---------------------------- ------------- ---------------
Aspirin 100 mg 1-0-0
Ramipril (Altace) 2.5 mg 1-0-1
Carvedilol (Coreg) 12.5 mg 1-0-1
Torasemide (Torem) 5 mg 1-0-0
Spironolactone (Aldactone) 25 mg 1-0-0
L-Thyroxine (Synthroid) 50 µg 1-0-0
**Myocardial Biopsy on 03/26/2016:**
[Molecular Biology:]{.underline}
PCR examinations performed under the question of myocardial infection
with cardiotropic pathogens yielded a positive detection of HCV-specific
RNA in myocardial tissue without quantification possibility
(methodically determined). Otherwise, there was no evidence of
myocardial infection with enteroviruses, adenoviruses, Epstein-Barr
virus, Human Herpes Virus Type 6 A/B, or Erythrovirus genotypes 1/2 in
the myocardium.
[Assessment]{.underline}: Positive HCV-mRNA detection in myocardial
tissue. This positive test result does not unequivocally prove an
infection of myocardial cells, as contamination of the tissue sample
with HCV-infected peripheral blood cells cannot be ruled out in chronic
hepatitis.
**Histology and Immunohistochemistry**:
Unremarkable endocardium, normal cell content of the interstitium with
only isolated lymphocytes and histiocytes in the histologically examined
samples. Quantitatively, immunohistochemically examined native
preparations showed borderline high CD3-positive lymphocytes with a
diffuse distribution pattern at 10.2 cells/mm2. No increased
perforin-positive cytotoxic T cells. The expression of cell adhesion
molecules is discreetly elevated. Otherwise, only slight perivascular
but no interstitial fibrosis. Cardiomyocytes are properly arranged and
slightly hypertrophied (average diameter around 23 µm), the surrounding
capillaries are unremarkable. No evidence of acute
inflammation-associated myocardial cell necrosis (no active myocarditis)
and no interstitial scars from previous myocyte loss. No lipomatosis.
[Assessment:]{.underline} Based on the myocardial biopsy findings, there
is positive detection of HCV-RNA in the myocardial tissue samples, with
the possibility of tissue contamination with HCV-infected peripheral
blood cells. Significant myocardial inflammatory reaction cannot be
documented histologically and immunohistochemically. In the endocardial
samples, apart from mild hypertrophy of properly arranged
cardiomyocytes, there are no significant signs of myocardial damage
(interstitial fibrosis or scars from previous myocyte loss). Therefore,
the present findings do not indicate the need for specific further
antiviral or anti-inflammatory therapy, and the existing heart failure
medication can be continued unchanged. If LV function impairment
persists for an extended period, there is an indication for
antiarrhythmic protection of the patient using an ICD.
### Patient Report 2
**Dear colleague, **
We thank you for referring your patient Mr. David Romero, born on
02/16/1942, to us for echocardiographic follow-up on 05/04/2016.
**Diagnoses:**
- Dilatated cardiomyopathy
- LifeVest
- Left ventricular ejection fraction of 28%
- Chronic Hepatitis C
- Status post Hepatitis A
- Post-antiviral therapy
- Exclusion of relevant coronary artery disease
- Type 2 diabetes mellitus
- Hypothyroidism
**Current Medication:**
**Medication ** **Dosage ** **Frequency**
---------------------------- ------------- ---------------
Aspirin 100 mg 1-0-0
Ramipril (Altace) 2.5 mg 1-0-1
Carvedilol (Coreg) 12.5 mg 1-0-1
Torasemide (Torem) 5 mg 1-0-0
Spironolactone (Aldactone) 25 mg 1-0-0
L-Thyroxine (Synthroid) 50 µg 1-0-0
**Physical Examination:** The patient is fully oriented with good
general condition and normal mental state. Dry skin and mucous
membranes, normal breathing, no cyanosis. Cranial nerves are grossly
intact, no focal neurological deficits, good muscle strength and
sensitivity all around. Clear, rhythmic heart sounds, with a 2/6
systolic murmur at the apex. Lungs are evenly ventilated without rales.
Resonant percussion. Soft and supple abdomen without tenderness,
spleen and liver not palpable. Normal bowel sounds.
**Echocardiography: M-mode and 2-dimensional.**
The left ventricle measures approximately 65/56 mm (normal up to 56 mm).
The right atrium and right ventricle are of normal dimensions.
Global progressive reduction in contractility, morphologically
unremarkable.
In Doppler echocardiography, normal heart valves are observed.
Mitral valve insufficiency Grade I.
[Assessment]{.underline}: Dilated cardiomyopathy with significantly reduced
left ventricular function. Mitral insufficiency Grade I, tricuspid
insufficiency Grade I, PAP 23 mmHg + CVP. No more pulmonary embolism detectable.
**Summary:**
Currently, the cardiac situation is stable, LVEDD slightly decreasing.
### Patient Report 3
**Dear colleague, **
We thank you for referring your patient, Mr. David Romero, born on
02/16/1942 to us for echocardiographic follow-up on 06/15/2016.
**Diagnoses:**
- Dilatated cardiomyopathy
- LifeVest
- Left ventricular ejection fraction of 28%
- Chronic Hepatitis C
- Status post Hepatitis A
- Post-antiviral therapy
- Exclusion of relevant coronary artery disease
- Type 2 diabetes mellitus
- Hypothyroidism
**Medication upon Admission:**
**Medication ** **Dosage ** **Frequency**
---------------------------- ------------- ---------------
Aspirin 100 mg 1-0-0
Ramipril (Altace) 2.5 mg 1-0-1
Carvedilol (Coreg) 12.5 mg 1-0-1
Torasemide (Torem) 5 mg 1-0-0
Spironolactone (Aldactone) 25 mg 1-0-0
L-Thyroxine (Synthroid) 50 µg 1-0-0
**Physical Examination:** The patient is fully oriented with good
general condition and normal mental state. Dry skin and mucous
membranes, normal breathing, no cyanosis. Cranial nerves are grossly
intact, no focal neurological deficits, good muscle strength and
sensitivity all around. Clear, rhythmic heart sounds, with a 2/6
systolic murmur at the apex. Lungs are evenly ventilated without rales.
Resonant percussion. Soft and supple abdomen without guarding, spleen
and liver not palpable. Normal bowel sounds.
**Echocardiography from 06/15/2016**: Good ultrasound conditions.
The left ventricle is dilated to approximately 65/57 mm (normal up to 56
mm). The left atrium is dilated to 48 mm. Normal thickness of the left
ventricular myocardium. Ejection fraction is around 28%. Heart valves
show normal flow velocities.
**Summary:**
Currently, the cardiac situation is stable, LVEDD slightly decreasing,
potassium and creatinine levels were obtained. If EF remains this low,
an ICD may be indicated.
**Lab results from 06/15/2016:**
**Parameter** **Result** **Reference Range**
----------------------------------- ------------ ---------------------
Reticulocytes 0.01/nL \< 0.01/nL
Sodium 135 mEq/L 136-145 mEq/L
Potassium 4.8 mEq/L 3.5-4.5 mEq/L
Creatinine 1.34 mg/dL 0.70-1.20 mg/dL
BUN 49 mg/dL 17-48 mg/dL
Total Bilirubin 1.9 mg/dL \< 1.20 mg/dL
C-reactive Protein 4.1 mg/L \< 5.0 mg/L
Troponin-T 78 ng/L \< 14 ng/L
ALT 67 U/L \< 41 U/L
AST 78 U/L \< 50 U/L
Alkaline Phosphatase 151 U/L 40-130 U/L
gamma-GT 200 U/L 8-61 U/L
Free Triiodothyronine (T3) 2.3 ng/L 2.00-4.40 ng/L
Free Thyroxine (T4) 14.2 ng/L 9.30-17.00 ng/L
Thyroid Stimulating Hormone (TSH) 4.1 mU/L 0.27-4.20 mU/L
Hemoglobin 11.6 g/dL 13.5-17.0 g/dL
Hematocrit 34.5% 39.5-50.5%
Red Blood Cell Count 3.7 M/µL 4.3-5.8 M/µL
White Blood Cell Count 9.56 K/µL 3.90-10.50 K/µL
Platelet Count 280 K/µL 150-370 K/µL
MCV                                 92.5 fL      80.0-99.0 fL
MCH 31.1 pg 27.0-33.5 pg
MCHC 33.6 g/dL 31.5-36.0 g/dL
MPV 8.9 fL 7.0-12.0 fL
RDW-CV 14.0% 11.5-15.0%
Quick 89% 78-123%
INR 1.09 0.90-1.25
Partial Thromboplastin Time 25.3 sec. 22.0-29.0 sec.
### Patient Report 4
**Dear colleague, **
We are reporting to you about Mr. David Romero, born on 02/16/1942, who
presented himself at our Cardiology University Outpatient Clinic on
06/30/2016.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function (ejection fraction
around 30%)
- LifeVest
- Planned CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Current Medication:**
**Medication ** **Dosage ** **Frequency**
---------------------------- ---------------- ---------------
Aspirin 100 mg/tablet 1-0-0
Ramipril (Altace) 2.5 mg/tablet 1-0-1
Carvedilol (Coreg) 12.5 mg/tablet 1-0-1
Torasemide (Torem) 5 mg/tablet 1-0-0
Spironolactone (Aldactone) 25 mg/tablet 1-0-0
L-Thyroxine (Synthroid) 50 µg/tablet 1-0-0
**Echocardiography on 06/30/2016:** In sinus rhythm. Adequate ultrasound
window.
Moderately dilated left ventricle (LVDd 63mm). Significantly reduced
systolic LV function (EF biplane 29%). No LV hypertrophy.
**ECG on 06/30/2016:** Sinus rhythm, regular tracing, heart rate 69/min,
complete left bundle branch block, QRS 135 ms, with the repolarization
disturbance typical of left bundle branch block.
**Assessment**: Mr. Romero presents himself for the follow-up assessment
of known dilated cardiomyopathy. He currently reports minimal dyspnea.
Coronary heart disease has been ruled out. No virus was detected
bioptically. However, the recent echocardiography still shows severely
impaired LV function.
**Current Recommendations:** Given the presence of left bundle branch
block, there is an indication for CRT-D implantation. For this purpose,
we have scheduled a pre-admission appointment, with the implantation
planned for 07/04/2016. We kindly request a referral letter. The
LifeVest should continue to be worn until the implantation, despite the
pressure sores on the thorax.
### Patient Report 5
**Dear colleague, **
We would like to report to you about our patient, Mr. David Romero, born
on 02/16/1942, who was in our inpatient care from 07/04/2016 to
07/06/2016.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function (ejection fraction
around 30%)
- LifeVest
- Planned CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Medication upon Admission:**
**Medication ** **Dosage ** **Frequency**
---------------------------- ------------- ---------------
Aspirin 100 mg 1-0-0
Ramipril (Altace) 2.5 mg 1-0-1
Carvedilol (Coreg) 12.5 mg 1-0-1
Torem (Torasemide) 5 mg 1-0-0
Spironolactone (Aldactone) 25 mg 1-0-0
L-Thyroxine (Synthroid) 50 µg 1-0-0
Sitagliptin (Januvia) 100 mg 1-0-0
Insulin glargine (Lantus)                     0-0-20 IU
**Current Presentation:** The current admission was elective for CRT-D
implantation in dilated cardiomyopathy with severely impaired LV
function despite full heart failure medication and complete left bundle
branch block. Please refer to previous medical records for a detailed
history. On 07/05/2016, a CRT-ICD system was successfully implanted. The
peri- and post-interventional course was uncomplicated. Pneumothorax was
ruled out post-interventionally. The wound conditions are
irritation-free. The ICD card was given to the patient. We request
outpatient follow-up on the above-mentioned date for wound inspection
and CRT follow-up. Please adjust the known cardiovascular risk factors.
**Findings:**
**ECG upon Admission:** Sinus rhythm 66/min, PQ 176ms, QRS 126ms, QTc
432ms, complete left bundle branch block with the corresponding
repolarization disturbance.
**Procedure**: Implantation of a CRT-D with left ventricular multipoint
pacing left pectoral. Smooth triple puncture of the lateral left
subclavian vein and implantation of an active single-coil electrode in
the RV apex with very good electrical values. Trouble-free probing of
the CS and direct venography using a balloon occlusion catheter.
Identification of a suitable lateral vein and implantation of a
quadripolar electrode (Quartet, St. Jude Medical) with very good
electrical values. No phrenic stimulation up to 10 volts in all
polarities. Finally, implantation of an active P/S electrode in the
right atrial roof with equally very good electrical values. Connection
to the device and submuscular implantation. Wound irrigation and layered
wound closure with absorbable suture material. Finally, extensive
testing of all polarities of the LV electrode and activation of
multipoint pacing. Final setting of the ICD.
**Chest X-ray on 07/05/2016:**
[Clinical status, question, justifying indication:]{.underline} History
of CRT-D implantation. Question about lead position, pneumothorax?
**Findings**: New CRT-D unit left pectoral with leads projected onto the
right ventricle, the right atrium, and the sinus coronarius. No
pneumothorax.
Normal heart size. No pulmonary congestion. No diffuse infiltrates. No
pleural effusions.
**ECG at Discharge:** Continuous ventricular PM stimulation, HR: 66/min.
**Current Recommendations:**
- We request a follow-up appointment in our Pacemaker Clinic. Please
provide a referral slip.
- We ask for the protection of the left arm and avoidance of
elevations \> 90 degrees. Self-absorbing sutures have been used.
- We request regular wound checks.
### Patient Report 6
**Dear colleague, **
We thank you for referring your patient, Mr. David Romero, born on
02/16/1942, who presented to our Cardiological University Outpatient
Clinic on 08/26/2016.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function
- LifeVest
- CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Current Medication:**
**Medication ** **Dosage ** **Frequency**
---------------------------- ------------- ---------------
Aspirin 100 mg 1-0-0
Ramipril (Altace) 2.5 mg 1-0-1
Carvedilol (Coreg) 12.5 mg 1-0-1
Torem (Torasemide) 5 mg 1-0-0
Spironolactone (Aldactone) 25 mg 1-0-0
L-Thyroxine (Synthroid) 50 µg 1-0-0
Sitagliptin (Januvia) 100 mg 1-0-0
Insulin glargine (Lantus)                  0-0-20 IU
**Current Presentation**: Slightly increasing exertional dyspnea, no
coronary heart disease.
**Cardiovascular Risk Factors:**
- Family history: No
- Smoking: No
- Hypertension: No
- Diabetes: Yes
- Dyslipidemia: Yes
**Physical Examination:** The patient is fully oriented with good
general condition and normal mental state. Dry skin and mucous
membranes, normal breathing, no cyanosis. Cranial nerves are grossly
intact, no focal neurological deficits, good muscle strength and
sensitivity all around. Clear, rhythmic heart sounds, with a 2/6
systolic murmur at the apex. Lungs are evenly ventilated without rales.
Resonant percussion. Soft and supple abdomen without pressure pain,
spleen and liver not palpable. Normal bowel sounds.
**Findings**:
**Resting ECG:** Sinus rhythm, 83 bpm. Blood pressure: 120/70 mmHg.
**Echocardiography: M-mode and 2-dimensional**
Left ventricle dimensions: Approximately 57/45 mm (normal up to 56 mm),
moderately dilated
- Right atrium and right ventricle: Normal dimensions
- Normal thickness of left ventricular muscle
- Globally, mild reduction in contractility
- Heart valves: Morphologically normal
- Doppler-Echocardiography: No significant valve regurgitation
**Assessment**: Mildly dilated cardiomyopathy with slightly reduced left
ventricular function. Ejection fraction at 45 - 50%. Mild diastolic
dysfunction. Mild tricuspid regurgitation, pulmonary artery pressure 22
mm Hg, and left ventricular filling pressure slightly increased.
**Stress Echocardiography: Stress echocardiography with exercise test**
- Stress test protocol: Treadmill exercise test
- Reason for stress test: Exertional dyspnea
- Quality of the ultrasound: Good
- Initial workload: 50 watts
- Maximum workload achieved: 150 watts
- Blood pressure response: Systolic BP increased from 112/80 mmHg to
175/90 mmHg
- Heart rate response: Increased from 71bpm to 124bpm
- Exercise terminated due to leg pain
**Resting ECG:** Sinus rhythm. No significant changes during exercise.
**Echocardiography at rest:** Normokinesis of all left ventricular
segments EF: 45 - 50%
**Echocardiography during exercise:** Increased contractility and wall
thickening of all segments
[Summary]{.underline}: No dynamic wall motion abnormalities. No evidence
of exercise-induced myocardial ischemia
**Carotid Doppler Ultrasound:** Both common carotid arteries are
smooth-walled. Intima-media thickness: 0.8 mm. Small plaque in the
carotid bulb on both sides. Normal flow in the internal and external
carotid arteries. Normal dimensions and flow in the vertebral arteries.
**Summary:** Non-obstructive carotid plaques. Indicated to lower LDL
to below 1.8 mmol/L.
**Summary:**
- Stress echocardiography shows no evidence of ischemia, EF \>45-50%
- Carotid duplex shows minimal non-obstructive plaques
- Increase Simvastatin to 20 mg, target LDL-C \< 1.8 mmol/L
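Since the LDL-C target above is stated in mmol/L, a quick conversion to mg/dL may be useful. This is a minimal sketch; the factor 38.67 mg/dL per mmol/L is the standard conversion for cholesterol, and the function names are illustrative only:

```python
# Convert cholesterol (e.g. LDL-C) between mmol/L and mg/dL.
# 1 mmol/L of cholesterol corresponds to 38.67 mg/dL.
LDL_MGDL_PER_MMOLL = 38.67

def mmoll_to_mgdl(value_mmoll: float) -> float:
    """Cholesterol concentration: mmol/L -> mg/dL, one decimal place."""
    return round(value_mmoll * LDL_MGDL_PER_MMOLL, 1)

def mgdl_to_mmoll(value_mgdl: float) -> float:
    """Cholesterol concentration: mg/dL -> mmol/L, two decimal places."""
    return round(value_mgdl / LDL_MGDL_PER_MMOLL, 2)
```

For example, the guideline target of < 1.8 mmol/L corresponds to roughly 70 mg/dL.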
### Patient Report 7
**Dear colleague, **
We would like to inform you about the results of the cardiac
catheterization of Mr. David Romero, born on 02/16/1942 performed by us
on 08/10/2022.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function
- LifeVest
- CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Procedure:** Right femoral artery puncture. Left ventriculography with
a 5F pigtail catheter in the right anterior oblique projection. Coronary
angiography with 5F JL4.0 and 5F JR 4.0 catheters. End-diastolic
pressure in the left ventricle within the normal range, measured in
mmHg. No pathological pressure gradient across the aortic valve.
**Coronary angiography:**
- Unremarkable left main stem.
- The left anterior descending (LAD) artery shows mild wall changes,
with a maximum stenosis of 20-\<30%.
- The robust right coronary artery (RCA) is stenosed proximally by
30-40%, subsequently ectatic and then stenosed to 40-\<50% distally.
Slow contrast clearance. The right coronary artery is also stenosed
up to 30%.
- Left-dominant coronary circulation.
**Assessment**: Diffuse coronary atherosclerosis with less than 50%
stenosis in the RCA and evidence of endothelial dysfunction.
**Current Recommendations:**
- Initiation of Ranolazine
- Additional stress myocardial perfusion scintigraphy
### Patient Report 8
**Dear colleague, **
We would like to inform you about the results of the Myocardial
Perfusion Scintigraphy performed on our patient, Mr. David Romero, born
on 02/16/1942, on 09/23/2022.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function (ejection fraction
around 30%)
- LifeVest
- CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Physical Examination:** The patient is fully oriented with good
general condition and normal mental state. Dry skin and mucous
membranes, normal breathing, no cyanosis. Cranial nerves are grossly
intact, no focal neurological deficits, good muscle strength and
sensitivity all around. Clear, rhythmic heart sounds, with a 2/6
systolic murmur at the apex. Lungs are evenly ventilated without rales.
Resonant percussion. Soft and supple abdomen without guarding, spleen
and liver not palpable. Normal bowel sounds.
**Myocardial Perfusion Scintigraphy:**
The myocardial perfusion scintigraphy was conducted using 365 MBq of
99m-Technetium MIBI during pharmacological stress and 383 MBq of
99m-Technetium MIBI at rest.
[Technique]{.underline}: Initially, the patient was pharmacologically
stressed with the intravenous administration of 400 µg of Regadenoson
over 20 seconds, accompanied by ergometer exercise at 50 W.
Subsequently, the intravenous injection of the radiopharmaceutical was
performed. The maximum blood pressure achieved during the stress phase
was 143/84 mm Hg, and the maximum heart rate reached was 102 beats per
minute.
Approximately 60 minutes later, ECG-triggered acquisition of a
360-degree SPECT study was conducted with reconstructions of short and
long-axis slices.
Due to inhomogeneities in the myocardial wall segments during stress,
rest images were acquired on another examination day. Following the
intravenous injection of the radiopharmaceutical, ECG-triggered
acquisition of a 360-degree SPECT study was performed, including
short-axis and long-axis slices, approximately 60 minutes later.
[Clinical Information:]{.underline} Known coronary heart disease (RCA
50%). ICD/CRT pacemaker.
[Findings]{.underline}: No clear perfusion defects are seen in the
scintigraphic images acquired after pharmacologic exposure to
Regadenoson. This finding remains unchanged in the scintigraphic images
acquired at rest.
Quantitative analysis shows a normal-sized ventricle with a normal left
ventricular ejection fraction (LVEF) of 53% under exercise conditions
and 47% at rest (EDV 81 mL). There are no clear wall motion
abnormalities. In the gated SPECT analysis, there are no definite wall
motion abnormalities observed in both stress and rest conditions.
**Quantitative Scoring:**
- SSS (Summed Stress Score): 3 (4.4%)
- SRS (Summed Rest Score): 0 (0.0%)
- SDS (Summed Difference Score): 3 (4.4%)
**Assessment**: No evidence of myocardial perfusion defects with
Regadenoson stress or at rest. Normal ventricular size and function with
no significant wall motion abnormalities.
### Patient Report 9
**Dear colleague, **
We would like to report on our patient, Mr. David Romero, born on
02/16/1942, who was under our inpatient care from 05/20/2023 to
05/21/2023.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function
- LifeVest
- CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Medical History:** The patient was admitted for elective replacement of
the CRT-D device due to impending battery depletion. At admission, the
patient reported no complaints of fever, cough, dyspnea, chest pain, or
melena.
**Physical Examination:** The patient is fully oriented with good
general condition and normal mental state. Dry skin and mucous
membranes, normal breathing, no cyanosis. Cranial nerves are grossly
intact, no focal neurological deficits, good muscle strength and
sensitivity all around. Clear, rhythmic heart sounds, with a 2/6
systolic murmur at the apex. Lungs are evenly ventilated without rales.
Resonant percussion. Soft and supple abdomen without guarding, spleen
and liver not palpable. Normal bowel sounds.
**Medication upon Admission**
**Medication** **Dosage** **Frequency**
--------------------------- -------------- ----------------------
Insulin glargine (Lantus)   450 units/1.5 mL   0-0-0-6-8 IU
Insulin lispro (Humalog)    300 units/3 mL     5-8 IU/5-8 IU/5-8 IU
Levothyroxine (Synthroid) 100 mcg 1-0-0-0
Colecalciferol 12.5 mcg 2-0-0-0
Atorvastatin (Lipitor) 21.7 mg 0-0-1-0
Amlodipine (Norvasc) 6.94 mg 1-0-0-0
Ramipril (Altace) 5 mg 1-0-0-0
Torasemide (Torem) 5 mg 0-0-0.5-0
Carvedilol (Coreg) 25 mg 0.5-0-0.5-0
Simvastatin (Zocor) 40 mg 0-0-0.5-0
Aspirin 100 mg 1-0-0-0
**Therapy and Progression:** The patient\'s current admission was
elective for the implantation of a 3-chamber CRT-D device due to device
depletion. The procedure was performed without complications on
05/20/2023. The post-interventional course was uneventful. The
implantation site showed no irritation or significant hematoma at the
time of discharge, and no pneumothorax was detected on X-ray.
To protect the surgical wound, we request dry wound dressing for the
next 10 days and clinical wound checks. Suture removal is not necessary
with absorbable suture material. We advise against arm elevation for the
next 4 weeks, avoiding heavy lifting on the side of the device pocket
and gradual, pain-adapted full range of motion after 4 weeks.
**Current Recommendations:** We kindly request an outpatient follow-up
appointment in our Pacemaker Clinic.
**Medication upon Discharge:**
**Medication ** **Dosage ** **Frequency**
----------------------------- --------------- -----------------------
Insulin glargine (Lantus)     450 units/1.5 mL   0-0-0-6-8 IU
Insulin lispro (Humalog)      300 units/3 mL     5-8 IU/5-8 IU/5-8 IU
Levothyroxine (Synthroid)     100 µg          1-0-0-0
Colecalciferol (Vitamin D3)   12.5 µg         2-0-0-0
Atorvastatin (Lipitor)        21.7 mg         0-0-1-0
Amlodipine (Norvasc)          6.94 mg         1-0-0-0
Ramipril (Altace)             5 mg            1-0-0-0
Torasemide (Torem)            5 mg            0-0-0.5-0
Carvedilol (Coreg)            25 mg           0.5-0-0.5-0
Simvastatin (Zocor)           40 mg           0-0-0.5-0
Aspirin                       100 mg          1-0-0-0
**Addition: Findings:**
**ECG at Discharge:** Sinus rhythm, ventricular pacing, QRS 122ms, QTc
472ms
**Rhythm Examination on 05/20/2023:**
[Results:]{.underline} Replacement of a 3-chamber CRT-D device (new:
SJM/Abbott Quadra Assura) due to impending battery depletion:
Uncomplicated replacement. Tedious freeing of the submuscular device and
proximal lead portions using a plasma blade. Extraction of the old
device. Connection to the new device. Avoidance of device fixation in
the submuscular position. Hemostasis by electrocauterization. Layered
wound closure. Skin closure with absorbable intracutaneous sutures. End
adjustment of the CRT-D device is complete. [Procedure]{.underline}:
Compression of the wound with a sandbag and local cooling. First
outpatient follow-up in 8 weeks through our pacemaker clinic (please
schedule an appointment before discharge). Postoperative chest X-ray is
not necessary. Cefuroxime 1.5 g again tonight.
**Transthoracic Echocardiography on 05/18/2023**
**Results:** Globally mildly impaired systolic LV function. Diastolic
dysfunction Grade 1 (LV relaxation disorder).
- Right Ventricle: Normal-sized right ventricle. Normal RV function.
Pulmonary arterial pressure is normal.
- Left Atrium: Slightly dilated left atrium.
- Right Atrium: Normal-sized right atrium.
- Mitral Valve: Morphologically unremarkable. Minimal mitral valve
regurgitation.
- Aortic Valve: Mildly sclerotic aortic valve cusps. No aortic valve
insufficiency. No aortic valve stenosis (AV PGmax 7 mmHg).
- Tricuspid Valve: Delicate tricuspid valve leaflets. Minimal
tricuspid valve regurgitation (TR Pmax 26 mmHg).
- Pulmonary Valve: No pulmonary valve insufficiency. Pericardium: No
pericardial effusion.
**Assessment**: Examination in sinus rhythm with bundle branch block.
Moderate ultrasound windows. Normal-sized left ventricle (LVED 54 mm)
with mildly reduced systolic LV function (EF biplane 55%) and mildly
reduced contractility without regional emphasis. Mild LV hypertrophy,
predominantly septal, without obstruction. Diastolic dysfunction Grade 1
(E/A 0.47) with a normal LV filling index (E/E\' mean 3.5). Slightly
sclerotic aortic valve without stenosis, no AI. Slightly dilated left
atrium (LAVI 31 ml/m²). Minimal MI. Normal-sized right ventricle with
normal function. Normal-sized right atrium (RAVI 21 ml/m²). Minimal TI.
As far as assessable, systolic PA pressure is within the normal range.
The IVC cannot be viewed from the subcostal angle. No thrombi are
visible. As far as assessable, no pericardial effusion is visible.
**Chest X-ray in two planes on 05/20/2023: **
[Clinical Information, Question, Justification:]{.underline} Post CRT
device replacement. Inquiry about position, pneumothorax.
[Findings]{.underline}: No pneumothorax following CRT device
replacement.
### Patient Report 0
**Dear colleague, **
We are writing to provide an update on Mr. David Romero, born on
02/16/1942, who presented at our Rhythm Clinic on 09/29/2023.
**Diagnoses:**
- Dilated cardiomyopathy
- Exclusion of coronary heart diseases
- Myocardial biopsy showed no inflammation
- Left bundle branch block
- Severely impaired left ventricular (LV) function
- LifeVest
- CRT-D implantation
- Chronic Hepatitis C
- Type 2 diabetes
**Current Medication:**
**Medication** **Dosage** **Frequency**
----------------------------- ------------------ ---------------
Lantus (Insulin glargine)     450 units/1.5 mL   0-0-0-6-8
Humalog (Insulin lispro)      300 units/3 mL     5-8/0/5-8/5-8
Levothyroxine (Synthroid)     100 mcg            1-0-0-0
Vitamin D3 (Colecalciferol)   12.5 mcg           2-0-0-0
Lipitor (Atorvastatin)        21.7 mg            0-0-1-0
Norvasc (Amlodipine)          6.94 mg            1-0-0-0
Altace (Ramipril)             5 mg               1-0-0-0
Demadex (Torasemide)          5 mg               0-0-0.5-0
Coreg (Carvedilol)            25 mg              0.5-0-0.5-0
Zocor (Simvastatin)           40 mg              0-0-0.5-0
Aspirin                       100 mg             1-0-0-0
**Measurement Results:**
Battery/Capacitor: Status: OK, Voltage: 8.4 V
- Right atrial lead: impedance 375 Ohms, sensing 3.80 mV, threshold
  0.375 V at 0.50 ms
- Right ventricular lead: impedance 388 Ohms, sensing 11.80 mV,
  threshold 0.750 V at 0.50 ms
- Left ventricular lead: impedance 350 Ohms, threshold 0.625 V at
  0.50 ms
- Defibrillation impedance (right ventricular): 48 Ohms
**Implant Settings:**
- Bradycardia setting: Mode DDD
- Tachycardia settings:
  - VF zone: detection interval 260 ms, 30 detection beats
  - VT1 zone: detection interval 330 ms, 55 detection beats
- Probe settings (sensitivity, sensing polarity, amplitude/pulse width,
  stimulation polarity):
  - Right atrial: 0.30 mV, bipolar sensing; 1.375 V/0.50 ms, bipolar
    stimulation
  - Right ventricular: bipolar sensing; 2.000 V/0.50 ms, bipolar
    stimulation
  - Left ventricular: 2.000 V/0.50 ms, vector tip 1 - RV coil
**Assessment:**
- Routine visit with normal device function.
- Normal sinus rhythm with a heart rate of 65/min.
- Balanced heart rate histogram with a plateau at 60-70 bpm.
- Wound conditions are unremarkable.
- Battery status: OK.
- Atrial probe: Intact
- Right ventricular probe: Intact
- Left ventricular probe: Intact
- A follow-up appointment for the patient is requested in 6 months.
**Lab results:**
**Parameter** **Result** **Reference Range**
----------------------------------- ------------ ---------------------
Reticulocytes 0.01/nL \< 0.01/nL
Sodium 137 mEq/L 136-145 mEq/L
Potassium 4.2 mEq/L 3.5-4.5 mEq/L
Creatinine 1.34 mg/dL 0.70-1.20 mg/dL
BUN 49 mg/dL 17-48 mg/dL
Total Bilirubin 1.8 mg/dL \< 1.20 mg/dL
C-reactive Protein 5.9 mg/L \< 5.0 mg/L
ALT 67 U/L \< 41 U/L
AST 78 U/L \< 50 U/L
Alkaline Phosphatase 151 U/L 40-130 U/L
Gamma-Glutamyl Transferase 200 U/L 8-61 U/L
Free Triiodothyronine (T3) 2.3 ng/L 2.00-4.40 ng/L
Free Thyroxine (T4) 14.2 ng/L 9.30-17.00 ng/L
Thyroid Stimulating Hormone (TSH) 4.1 mU/L 0.27-4.20 mU/L
Hemoglobin 11.6 g/dL 13.5-17.0 g/dL
Hematocrit 34.5% 39.5-50.5%
Red Blood Cell Count 3.7 M/µL 4.3-5.8 M/µL
White Blood Cell Count 9.56 K/µL 3.90-10.50 K/µL
MCV 92.7 fL 80.0-99.0 fL
MCH 31.8 pg 27.0-33.5 pg
MCHC 33.9 g/dL 31.5-36.0 g/dL
MPV 8.9 fL 7.0-12.0 fL
RDW-CV 14.2% 11.5-15.0%
Quick 89% 78-123%
INR 1.09 0.90-1.25
Partial Thromboplastin Time 25.3 sec. 22.0-29.0 sec.
|
Increased from 28% to 29%
|
What drove gross margin change as of the FY2022 for American Express? If gross margin is not a useful metric for a company like this, then please state that and explain why.
|
Evidence 0:
CONSOLIDATED STATEMENTS OF INCOME

Year Ended December 31 (Millions, except per share amounts)

|                                                   | 2022    | 2021    | 2020    |
|---------------------------------------------------|---------|---------|---------|
| **Revenues**                                      |         |         |         |
| *Non-interest revenues*                           |         |         |         |
| Discount revenue                                  | $30,739 | $24,563 | $19,435 |
| Net card fees                                     | 6,070   | 5,195   | 4,664   |
| Service fees and other revenue                    | 4,521   | 3,316   | 2,702   |
| Processed revenue                                 | 1,637   | 1,556   | 1,301   |
| Total non-interest revenues                       | 42,967  | 34,630  | 28,102  |
| *Interest income*                                 |         |         |         |
| Interest on loans                                 | 11,967  | 8,850   | 9,779   |
| Interest and dividends on investment securities   | 96      | 83      | 127     |
| Deposits with banks and other                     | 595     | 100     | 177     |
| Total interest income                             | 12,658  | 9,033   | 10,083  |
| *Interest expense*                                |         |         |         |
| Deposits                                          | 1,527   | 458     | 943     |
| Long-term debt and other                          | 1,236   | 825     | 1,155   |
| Total interest expense                            | 2,763   | 1,283   | 2,098   |
| Net interest income                               | 9,895   | 7,750   | 7,985   |
| Total revenues net of interest expense            | 52,862  | 42,380  | 36,087  |
| *Provisions for credit losses*                    |         |         |         |
| Card Member receivables                           | 627     | (73)    | 1,015   |
| Card Member loans                                 | 1,514   | (1,155) | 3,453   |
| Other                                             | 41      | (191)   | 262     |
| Total provisions for credit losses                | 2,182   | (1,419) | 4,730   |
| Total revenues net of interest expense after provisions for credit losses | 50,680 | 43,799 | 31,357 |
| **Expenses**                                      |         |         |         |
| Card Member rewards                               | 14,002  | 11,007  | 8,041   |
| Business development                              | 4,943   | 3,762   | 3,051   |
| Card Member services                              | 2,959   | 1,993   | 1,230   |
| Marketing                                         | 5,458   | 5,291   | 3,696   |
| Salaries and employee benefits                    | 7,252   | 6,240   | 5,718   |
| Other, net                                        | 6,481   | 4,817   | 5,325   |
| Total expenses                                    | 41,095  | 33,110  | 27,061  |
| Pretax income                                     | 9,585   | 10,689  | 4,296   |
| Income tax provision                              | 2,071   | 2,629   | 1,161   |
| Net income                                        | $7,514  | $8,060  | $3,135  |
| **Earnings per Common Share (Note 21)**           |         |         |         |
| Basic                                             | $9.86   | $10.04  | $3.77   |
| Diluted                                           | $9.85   | $10.02  | $3.77   |
| **Average common shares outstanding for earnings per common share** | | | |
| Basic                                             | 751     | 789     | 805     |
| Diluted                                           | 752     | 790     | 806     |
|
Gross margin is not a useful metric here: as a financial services company, American Express reports no cost of goods sold, so its performance is measured through revenues net of interest expense, provisions for credit losses, and operating expenses rather than through gross margin.
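As a rough illustration of the metrics that do apply, the margins below are computed on total revenues net of interest expense from the statement above. This is a minimal sketch; the margin names are descriptive, not GAAP-defined terms:

```python
# Illustrative margins for American Express (figures in $ millions,
# from the consolidated income statement). There is no cost-of-goods-sold
# line, so a classical gross margin cannot be computed; pretax and
# net-income margins on revenues net of interest expense are shown instead.
STATEMENT = {
    2022: {"revenues_net_of_interest": 52_862, "pretax_income": 9_585, "net_income": 7_514},
    2021: {"revenues_net_of_interest": 42_380, "pretax_income": 10_689, "net_income": 8_060},
    2020: {"revenues_net_of_interest": 36_087, "pretax_income": 4_296, "net_income": 3_135},
}

def pct(numerator: float, denominator: float) -> float:
    """Return numerator/denominator as a percentage, one decimal place."""
    return round(100.0 * numerator / denominator, 1)

def margins(year: int) -> dict:
    """Pretax and net-income margins for the given fiscal year."""
    row = STATEMENT[year]
    rev = row["revenues_net_of_interest"]
    return {
        "pretax_margin_pct": pct(row["pretax_income"], rev),
        "net_margin_pct": pct(row["net_income"], rev),
    }
```

For FY2022 this gives a pretax margin of about 18.1% and a net margin of about 14.2%, versus 25.2% and 19.0% in FY2021.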
|
Why does Starre lay claim to the asteroid?
A. She's trying to get away from her life. She can't stand how stubborn her Grandfather is.
B. She's trying to delay her arranged marriage, by preventing the asteroid from ever being delivered.
C. She told her Grandfather about the asteroid and told him she would marry Mac on top of it.
D. She's Burnside's granddaughter and is protecting it for him.
|
COSMIC YO-YO By ROSS ROCKLYNNE "Want an asteroid in your backyard? We supply cheap. Trouble also handled without charge." Interplanetary Hauling Company. (ADVT.) [Transcriber's Note: This etext was produced from Planet Stories Summer 1945. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] Bob Parker, looking through the photo-amplifiers at the wedge-shaped asteroid, was plainly flabbergasted. Not in his wildest imaginings had he thought they would actually find what they were looking for. "Cut the drive!" he yelled at Queazy. "I've got it, right on the nose. Queazy, my boy, can you imagine it? We're in the dough. Not only that, we're rich! Come here!" Queazy discharged their tremendous inertia into the motive-tubes in such a manner that the big, powerful ship was moving at the same rate as the asteroid below—47.05 miles per second. He came slogging back excitedly, put his eyes to the eyepiece. He gasped, and his big body shook with joyful ejaculations. "She checks down to the last dimension," Bob chortled, working with slide-rule and logarithm tables. "Now all we have to do is find out if she's made of tungsten, iron, quartz crystals, and cinnabar! But there couldn't be two asteroids of that shape anywhere else in the Belt, so this has to be it!" He jerked a badly crumpled ethergram from his pocket, smoothed it out, and thumbed his nose at the signature. "Whee! Mr. Andrew S. Burnside, you owe us five hundred and fifty thousand dollars!" Queazy straightened. A slow, likeable smile wreathed his tanned face. "Better take it easy," he advised, "until I land the ship and we use the atomic whirl spectroscope to determine the composition of the asteroid." "Have it your way," Bob Parker sang, happily. He threw the ethergram to the winds and it fell gently to the deck-plates. 
While Queazy—so called because his full name was Quentin Zuyler—dropped the ship straight down to the smooth surface of the asteroid, and clamped it tight with magnetic grapples, Bob flung open the lazarette, brought out two space-suits. Moments later, they were outside the ship, with star-powdered infinity spread to all sides. In the ship, the ethergram from Andrew S. Burnside, of Philadelphia, one of the richest men in the world, still lay on the deck-plates. It was addressed to: Mr. Robert Parker, President Interplanetary Hauling & Moving Co., 777 Main Street, Satterfield City, Fontanaland, Mars. The ethergram read: Received your advertising literature a week ago. Would like to state that yes I would like an asteroid in my back yard. Must meet following specifications: 506 feet length, long enough for wedding procession; 98 feet at base, tapering to 10 feet at apex; 9-12 feet thick; topside smooth-plane, underside rough-plane; composed of iron ore, tungsten, quartz crystals, and cinnabar. Must be in my back yard before 11:30 A.M. my time, for important wedding June 2, else order is void. Will pay $5.00 per ton. Bob Parker had received that ethergram three weeks ago. And if The Interplanetary Hauling & Moving Co., hadn't been about to go on the rocks (chiefly due to the activities of Saylor & Saylor, a rival firm) neither Bob nor Queazy would have thought of sending an answering ethergram to Burnside stating that they would fill the order. It was, plainly, a hair-brained request. And yet, if by some chance there was such a rigidly specified asteroid, their financial worries would be over. That they had actually discovered the asteroid, using their mass-detectors in a weight-elimination process, seemed like an incredible stroke of luck. For there are literally millions of asteroids in the asteroid belt, and they had been out in space only three weeks. The "asteroid in your back yard" idea had been Bob Parker's originally. 
Now it was a fad that was sweeping Earth, and Burnside wasn't the first rich man who had decided to hold a wedding on top of an asteroid. Unfortunately, other interplanetary moving companies had cashed in on that brainstorm, chiefly the firm of the Saylor brothers—which persons Bob Parker intended to punch in the nose some day. And would have before this if he hadn't been lanky and tall while they were giants. Now that he and Queazy had found the asteroid, they were desperate to get it to its destination, for fear that the Saylor brothers might get wind of what was going on, and try to beat them out of their profits. Which was not so far-fetched, because the firm of Saylor & Saylor made no pretense of being scrupulous. Now they scuffed along the smooth-plane topside of the asteroid, the magnets in their shoes keeping them from stepping off into space. They came to the broad base of the asteroid-wedge, walked over the edge and "down" the twelve-foot thickness. Here they squatted, and Bob Parker happily clamped the atomic-whirl spectroscope to the rough surface. By the naked eye, they could see iron ore, quartz crystals, cinnabar, but he had the spectroscope and there was no reason why he shouldn't use it. He satisfied himself as to the exterior of the asteroid, and then sent the twin beams deep into its heart. The beams crossed, tore atoms from molecules, revolved them like an infinitely fine powder. The radiations from the sundered molecules traveled back up the beams to the atomic-whirl spectroscope. Bob watched a pointer which moved slowly up and up—past tungsten, past iridium, past gold— Bob Parker said, in astonishment, "Hell! There's something screwy about this business. Look at that point—" Neither he nor Queazy had the opportunity to observe the pointer any further. A cold, completely disagreeable feminine voice said, "May I ask what you interlopers are doing on my asteroid?" 
Bob started so badly that the spectroscope's settings were jarred and the lights in its interior died. Bob twisted his head around as far as he could inside the "aquarium"—the glass helmet, and found himself looking at a space-suited girl who was standing on the edge of the asteroid "below." "Ma'am," said Bob, blinking, "did you say something?" Queazy made a gulping sound and slowly straightened. He automatically reached up as if he would take off his hat and twist it in his hands. "I said," remarked the girl, "that you should scram off of my asteroid. And quit poking around at it with that spectroscope. I've already taken a reading. Cinnabar, iron ore, quartz crystals, tungsten. Goodbye." Bob's nose twitched as he adjusted his glasses, which he wore even inside his suit. He couldn't think of anything pertinent to say. He knew that he was slowly working up a blush. Mildly speaking, the girl was beautiful, and though only her carefully made-up face was visible—cool blue eyes, masterfully coiffed, upswept, glinting brown hair, wilful lips and chin—Bob suspected the rest of her compared nicely. Her expression darkened as she saw the completely instinctive way he was looking at her and her radioed-voice rapped out, "Now you two boys go and play somewhere else! Else I'll let the Interplanetary Commission know you've infringed the law. G'bye!" She turned and disappeared. Bob awoke from his trance, shouted desperately, "Hey! Wait! You! " He and Queazy caught up with her on the side of the asteroid they hadn't yet examined. It was a rough plane, completing the rigid qualifications Burnside had set down. "Wait a minute," Bob Parker begged nervously. "I want to make some conversation, lady. I'm sure you don't understand the conditions—" The girl turned and drew a gun from a holster. It was a spasticizer, and it was three times as big as her gloved hand. "I understand conditions better than you do," she said. 
"You want to move this asteroid from its orbit and haul it back to Earth. Unfortunately, this is my home, by common law. Come back in a month. I don't expect to be here then." "A month!" Parker burst the word out. He started to sweat, then his face became grim. He took two slow steps toward the girl. She blinked and lost her composure and unconsciously backed up two steps. About twenty steps away was her small dumbbell-shaped ship, so shiny and unscarred that it reflected starlight in highlights from its curved surface. A rich girl's ship, Bob Parker thought angrily. A month would be too late! He said grimly, "Don't worry. I don't intend to pull any rough stuff. I just want you to listen to reason. You've taken a whim to stay on an asteroid that doesn't mean anything to you one way or another. But to us—to me and Queazy here—it means our business. We got an order for this asteroid. Some screwball millionaire wants it for a backyard wedding see? We get five hundred and fifty thousand dollars for it! If we don't take this asteroid to Earth before June 2, we go back to Satterfield City and work the rest of our lives in the glass factories. Don't we, Queazy?" Queazy said simply, "That's right, miss. We're in a spot. I assure you we didn't expect to find someone living here." The girl holstered her spasticizer, but her completely inhospitable expression did not change. She put her hands on the bulging hips of her space-suit. "Okay," she said. "Now I understand the conditions. Now we both understand each other. G'bye again. I'm staying here and—" she smiled sweetly "—it may interest you to know that if I let you have the asteroid you'll save your business, but I'll meet a fate worse than death! So that's that." Bob recognized finality when he saw it. "Come on, Queazy," he said fuming. "Let this brat have her way. But if I ever run across her without a space-suit on I'm going to give her the licking of her life, right where it'll do the most good!" 
He turned angrily, but Queazy grabbed his arm, his mouth falling open. He pointed off into space, beyond the girl. "What's that?" he whispered. "What's wha— Oh! " Bob Parker's stomach caved in. A few hundred feet away, floating gently toward the asteroid, came another ship—a ship a trifle bigger than their own. The girl turned, too. They heard her gasp. In another second, Bob was standing next to her. He turned the audio-switch to his headset off, and spoke to the girl by putting his helmet against hers. "Listen to me, miss," he snapped earnestly, when she tried to draw away. "Don't talk by radio. That ship belongs to the Saylor brothers! Oh, Lord, that this should happen! Somewhere along the line, we've been double-crossed. Those boys are after this asteroid too, and they won't hesitate to pull any rough stuff. We're in this together, understand? We got to back each other up." The girl nodded dumbly. Suddenly she seemed to be frightened. "It's—it's very important that this—this asteroid stay right where it is," she said huskily. "What—what will they do?" Bob Parker didn't answer. The big ship had landed, and little blue sparks crackled between the hull and the asteroid as the magnetic clamps took hold. A few seconds later, the airlocks swung down, and five men let themselves down to the asteroid's surface and stood surveying the three who faced them. The two men in the lead stood with their hands on their hips; their darkish, twin faces were grinning broadly. "A pleasure," drawled Wally Saylor, looking at the girl. "What do you think of this situation Billy?" "It's obvious," drawled Billy Saylor, rocking back and forth on his heels, "that Bob Parker and company have double-crossed us. We'll have to take steps." The three men behind the Saylor twins broke into rough, chuckling laughter. Bob Parker's gorge rose. "Scram," he said coldly. "We've got an ethergram direct from Andrew S. Burnside ordering this asteroid." 
"So have we," Wally Saylor smiled—and his smile remained fixed, dangerous. He started moving forward, and the three men in back came abreast, forming a semi-circle which slowly closed in. Bob Parker gave back a step, as he saw their intentions. "We got here first," he snapped harshly. "Try any funny stuff and we'll report you to the Interplanetary Commission!" It was Bob Parker's misfortune that he didn't carry a weapon. Each of these men carried one or more, plainly visible. But he was thinking of the girl's spasticizer—a paralyzing weapon. He took a hair-brained chance, jerked the spasticizer from the girl's holster and yelled at Queazy. Queazy got the idea, urged his immense body into motion. He hurled straight at Billy Saylor, lifted him straight off the asteroid and threw him away, into space. He yelled with triumph. At the same time, the spasticizer Bob held was shot cleanly out of his hand by Wally Saylor. Bob roared, started toward Wally Saylor, knocked the smoking gun from his hand with a sweeping arm. Then something crushing seemed to hit him in the stomach, grabbing at his solar plexus. He doubled up, gurgling with agony. He fell over on his back, and his boots were wrenched loose from their magnetic grip. Vaguely, before the flickering points of light in his brain subsided to complete darkness, he heard the girl's scream of rage—then a scream of pain. What had happened to Queazy he didn't know. He felt so horribly sick, he didn't care. Then—lights out. Bob Parker came to, the emptiness of remote starlight in his face. He opened his eyes. He was slowly revolving on an axis. Sometimes the Sun swept across his line of vision. A cold hammering began at the base of his skull, a sensation similar to that of being buried alive. There was no asteroid, no girl, no Queazy. He was alone in the vastness of space. Alone in a space-suit. "Queazy!" he whispered. "Queazy! I'm running out of air!" There was no answer from Queazy. 
With sick eyes, Bob studied the oxygen indicator. There was only five pounds pressure. Five pounds! That meant he had been floating around out here—how long? Days at least—maybe weeks! It was evident that somebody had given him a dose of spastic rays, enough to screw up every muscle in his body to the snapping point, putting him in such a condition of suspended animation that his oxygen needs were small. He closed his eyes, trying to fight against panic. He was glad he couldn't see any part of his body. He was probably scrawny. And he was hungry! "I'll starve," he thought. "Or suffocate to death first!" He couldn't keep himself from taking in great gulps of air. Minutes, then hours passed. He was breathing abnormally, and there wasn't enough air in the first place. He pleaded continually for Queazy, hoping that somehow Queazy could help, when probably Queazy was in the same condition. He ripped out wild curses directed at the Saylor brothers. Murderers, both of them! Up until this time, he had merely thought of them as business rivals. If he ever got out of this— He groaned. He never would get out of it! After another hour, he was gasping weakly, and yellow spots danced in his eyes. He called Queazy's name once more, knowing that was the last time he would have strength to call it. And this time the headset spoke back! Bob Parker made a gurgling sound. A voice came again, washed with static, far away, burbling, but excited. Bob made a rattling sound in his throat. Then his eyes started to close, but he imagined that he saw a ship, shiny and small, driving toward him, growing in size against the backdrop of the Milky Way. He relapsed, a terrific buzzing in his ears. He did not lose consciousness. He heard voices, Queazy's and the girl's, whoever she was. Somebody grabbed hold of his foot. His "aquarium" was unbuckled and good air washed over his streaming face. The sudden rush of oxygen to his brain dizzied him. 
Then he was lying on a bunk, and gradually the world beyond his sick body focussed in his clearing eyes and he knew he was alive—and going to stay that way, for awhile anyway. "Thanks, Queazy," he said huskily. Queazy was bending over him, his anxiety clearing away from his suddenly brightening face. "Don't thank me," he whispered. "We'd have both been goners if it hadn't been for her. The Saylor brothers left her paralyzed like us, and when she woke up she was on a slow orbit around her ship. She unstrapped her holster and threw it away from her and it gave her enough reaction to reach the ship. She got inside and used the direction-finder on the telaudio and located me first. The Saylors scattered us far and wide." Queazy's broad, normally good-humored face twisted blackly. "The so and so's didn't care if we lived or died." Bob saw the girl now, standing a little behind Queazy, looking down at him curiously, but unhappily. Her space-suit was off. She was wearing lightly striped blue slacks and blue silk blouse and she had a paper flower in her hair. Something in Bob's stomach caved in as his eyes widened on her. The girl said glumly, "I guess you men won't much care for me when you find out who I am and what I've done. I'm Starre Lowenthal—Andrew S. Burnside's granddaughter!" Bob came slowly to his feet, and matched Queazy's slowly growing anger. "Say that again?" he snapped. "This is some kind of dirty trick you and your grandfather cooked up?" "No!" she exclaimed. "No. My grandfather didn't even know there was an asteroid like this. But I did, long before he ordered it from you—or from the Saylor brothers. You see—well, my granddad's about the stubbornest old hoot-owl in this universe! He's always had his way, and when people stand in his way, that's just a challenge to him. He's been badgering me for years to marry Mac, and so has Mac—" "Who's Mac?" Queazy demanded. "My fiancé, I guess," she said helplessly. "He's one of my granddad's protégés. 
Granddad's always financing some likely young man and giving him a start in life. Mac has become pretty famous for his Mercurian water-colors—he's an artist. Well, I couldn't hold out any longer. If you knew my grandfather, you'd know how absolutely impossible it is to go against him when he's got his mind set! I was just a mass of nerves. So I decided to trick him and I came out to the asteroid belt and picked out an asteroid that was shaped so a wedding could take place on it. I took the measurements and the composition, then I told my grandfather I'd marry Mac if the wedding was in the back yard on top of an asteroid with those measurements and made of iron ore, tungsten, and so forth. He agreed so fast he scared me, and just to make sure that if somebody did find the asteroid in time they wouldn't be able to get it back to Earth, I came out here and decided to live here. Asteroids up to a certain size belong to whoever happens to be on them, by common law.... So I had everything figured out—except," she added bitterly, "the Saylor brothers! I guess Granddad wanted to make sure the asteroid was delivered, so he gave the order to several companies." Bob swore under his breath. He went reeling across to a port, and was gratified to see his and Queazy's big interplanetary hauler floating only a few hundred feet away. He swung around, looked at Queazy. "How long were we floating around out there?" "Three weeks, according to the chronometer. The Saylor boys gave us a stiff shot." " Ouch! " Bob groaned. Then he looked at Starre Lowenthal with determination. "Miss, pardon me if I say that this deal you and your granddad cooked up is plain screwy! With us on the butt end. But I'm going to put this to you plainly. We can catch up with the Saylor brothers even if they are three weeks ahead of us. The Saylor ship and ours both travel on the HH drive—inertia-less. 
But the asteroid has plenty of inertia, and so they'll have to haul it down to Earth by a long, spiraling orbit. We can go direct and probably catch up with them a few hundred thousand miles this side of Earth. And we can have a fling at getting the asteroid back!" Her eyes sparkled. "You mean—" she cried. Then her attractive face fell. "Oh," she said. " Oh! And when you get it back, you'll land it." "That's right," Bob said grimly. "We're in business. For us, it's a matter of survival. If the by-product of delivering the asteroid is your marriage—sorry! But until we do get the asteroid back, we three can work as a team if you're willing. We'll fight the other problem out later. Okay?" She smiled tremulously. "Okay, I guess." Queazy looked from one to another of them. He waved his hand scornfully at Bob. "You're plain nuts," he complained. "How do you propose to go about convincing the Saylor brothers they ought to let us have the asteroid back? Remember, commercial ships aren't allowed to carry long-range weapons. And we couldn't ram the Saylor brothers' ship—not without damaging our own ship just as much. Go ahead and answer that." Bob looked at Queazy dismally. "The old balance-wheel," he groaned at Starre. "He's always pulling me up short when I go off half-cocked. All I know is, that maybe we'll get a good idea as we go along. In the meantime, Starre—ahem—none of us has eaten in three weeks...?" Starre got the idea. She smiled dazzlingly and vanished toward the galley. Bob Parker was in love with Starre Lowenthal. He knew that after five days out, as the ship hurled itself at breakneck speed toward Earth; probably that distracting emotion was the real reason he couldn't attach any significance to Starre's dumbbell-shaped ship, which trailed astern, attached by a long cable. Starre apparently knew he was in love with her, too, for on the fifth day Bob was teaching her the mechanics of operating the hauler, and she gently lifted his hand from a finger-switch. 
"Even I know that isn't the control to the Holloway vacuum-feeder, Bob. That switch is for the—ah—the anathern tube, you told me. Right?" "Right," he said unsteadily. "Anyway, Starre, as I was saying, this ship operates according to the reverse Fitzgerald Contraction Formula. All moving bodies contract in the line of motion. What Holloway and Hammond did was to reverse that universal law. They caused the contraction first—motion had to follow! The gravitonic field affects every atom in the ship with the same speed at the same time. We could go from zero speed to our top speed of two thousand miles a second just like that!" He snapped his fingers. "No acceleration effects. This type of ship, necessary in our business, can stop flat, back up, ease up, move in any direction, and the passengers wouldn't have any feeling of motion at—Oh, hell!" Bob groaned, the serious glory of her eyes making him shake. He took her hand. "Starre," he said desperately, "I've got to tell you something—" She jerked her hand away. "No," she exclaimed in an almost frightened voice. "You can't tell me. There's—there's Mac," she finished, faltering. "The asteroid—" "You have to marry him?" Her eyes filled with tears. "I have to live up to the bargain." "And ruin your whole life," he ground out. Suddenly, he turned back to the control board, quartered the vision plate. He pointed savagely to the lower left quarter, which gave a rearward view of the dumbbell ship trailing astern. "There's your ship, Starre." He jabbed his finger at it. "I've got a feeling—and I can't put the thought into concrete words—that somehow the whole solution of the problem of grabbing the asteroid back lies there. But how? How? " Starre's blue eyes followed the long cable back to where it was attached around her ship's narrow midsection. She shook her head helplessly. "It just looks like a big yo-yo to me." "A yo-yo?" "Yes, a yo-yo. That's all." She was belligerent. "A yo-yo !" 
Bob Parker yelled the word and almost hit the ceiling, he got out of the chair so fast. "Can you imagine it! A yo-yo!" He disappeared from the room. "Queazy!" he shouted. " Queazy, I've got it! " It was Queazy who got into his space-suit and did the welding job, fastening two huge supra-steel "eyes" onto the dumbbell-shaped ship's narrow midsection. Into these eyes cables which trailed back to two winches in the big ship's nose were inserted, welded fast, and reinforced. The nose of the hauler was blunt, perfectly fitted for the job. Bob Parker practiced and experimented for three hours with this yo-yo of cosmic dimensions, while Starre and Queazy stood over him bursting into strange, delighted squeals of laughter whenever the yo-yo reached the end of its double cable and started rolling back up to the ship. Queazy snapped his fingers. "It'll work!" His gray eyes showed satisfaction. "Now, if only the Saylor brothers are where we calculated!" They weren't where Bob and Queazy had calculated, as they had discovered the next day. They had expected to pick up the asteroid on their mass-detectors a few hundred thousand miles outside of the Moon's orbit. But now they saw the giant ship attached like a leech to the still bigger asteroid—inside the Moon's orbit! A mere two hundred thousand miles from Earth! "We have to work fast," Bob stammered, sweating. He got within naked-eye distance of the Saylor brothers' ship. Below, Earth was spread out, a huge crescent shape, part of the Eastern hemisphere vaguely visible through impeding clouds and atmosphere. The enemy ship was two miles distant, a black shadow occulting part of the brilliant sky. It was moving along a down-spiraling path toward Earth. Queazy's big hand gripped his shoulder. "Go to it, Bob!" Bob nodded grimly. He backed the hauler up about thirty miles, then sent it forward again, directly toward the Saylor brothers' ship at ten miles per second. And resting on the blunt nose of the ship was the "yo-yo." 
There was little doubt the Saylors saw their approach. But, scornfully, they made no attempt to evade. There was no possible harm the oncoming ship could wreak. Or at least that was what they thought, for Bob brought the hauler's speed down to zero—and Starre Lowenthal's little ship, possessing its own inertia, kept on moving! It spun away from the hauler's blunt nose, paying out two rigid lengths of cable behind it as it unwound, hurled itself forward like a fantastic spinning cannon ball. "It's going to hit!" The excited cry came from Starre. But Bob swore. The dumbbell ship reached the end of its cables, falling a bare twenty feet short of completing its mission. It didn't stop spinning, but came winding back up the cable, at the same terrific speed with which it had left. Bob sweated, having only fractions of seconds in which to maneuver, for the "yo-yo" could strike a fatal blow at the hauler too. It was ticklish work completely to nullify the "yo-yo's" speed. Bob used exactly the same method of catching the "yo-yo" on the blunt nose of the ship as a baseball player uses to catch a hard-driven ball in his glove—namely, by matching the ball's speed and direction almost exactly at the moment of impact. And now Bob's hours of practice paid dividends, for the "yo-yo" came to rest snugly, ready to be released again. All this had happened in such a short space of time that the Saylor brothers must have had only a bare realization of what was going on. But by the time the "yo-yo" was flung at them again, this time with better calculations, they managed to put the firmly held asteroid between them and the deadly missile. But it was clumsy evasion, for the asteroid was several times as massive as the ship which was towing it, and its inertia was great. And as soon as the little ship came spinning back to rest, Bob flung the hauler to a new vantage point and again the "yo-yo" snapped out. And this time—collision! 
Bob yelled as he saw the stern section of the Saylor brothers' ship crumple like tissue paper crushed between the hand. The dumbbell-shaped ship, smaller, and therefore stauncher due to the principle of the arch, wound up again, wobbling a little. It had received a mere dent in its starboard half. Starre was chortling with glee. Queazy whispered, "Attaboy, Bob! This time we'll knock 'em out of the sky!" The "yo-yo" came to rest and at the same moment a gong rang excitedly. Bob knew what that meant. The Saylor brothers were trying to establish communication. Queazy was across the room in two running strides. He threw in the telaudio and almost immediately, Wally Saylor's big body built up in the plate. Wally Saylor's face was quivering with wrath. "What do you damned fools think you're trying to do?" he roared. "You've crushed in our stern section. You've sliced away half of our stern jets. Air is rushing out! You'll kill us!" "Now," Bob drawled, "you're getting the idea." "I'll inform the Interplanetary Commission!" screamed Saylor. " If you're alive," Bob snarled wrathfully. "And you won't be unless you release the asteroid." "I'll see you in Hades first!" "Hades," remarked Bob coldly, "here you come!" He snapped the hauler into its mile-a-second speed again, stopped it at zero. And the "yo-yo" went on its lone, destructive sortie. For a fraction of a second Wally Saylor exhibited the countenance of a doomed man. In the telaudio plate, he whirled, and diminished in size with a strangled yell. The "yo-yo" struck again, but Bob Parker maneuvered its speed in such a manner that it struck in the same place as before, but not as heavily, then rebounded and came spinning back with perfect, sparkling precision. And even before it snugged itself into its berth, it was apparent that the Saylor brothers had given up. 
Like a wounded terrier, their ship shook itself free of the asteroid, hung in black space for a second, then vanished with a flaming puff of released gravitons from its still-intact jets. The battle was won!
|
B. She's trying to delay her arranged marriage by preventing the asteroid from ever being delivered.
|
Which molecular pathology finding was technically not evaluable by immunohistochemistry for Mr. Miller?
Choose the correct answer from the following options:
A. ATRX
B. IDH status
C. p53
D. CDKN2A/B
E. MGMT promoter
|
### Patient Report 0
**Dear colleague, **
Patient: Miller, John, born 04/07/1961
We report to you about our common patient, Mr. John Miller, who is in
our inpatient treatment since 07/30/2019.
**Diagnoses:**
\- Suspected right cerebral glioblastoma (first diagnosis)
\- Symptoms: Aphasia, passive confusion
**Patient history: **
Mr. Miller was admitted as an emergency. He was on the phone with a
friend when he suddenly began to exhibit speech difficulties and
struggled to find the right words. Consequently, his friend called 911.
Upon the ambulance\'s arrival, Mr. Miller was disoriented and exhibited
aggressive behavior. There was evidence of a torn door. He had blood on
his right forearm and around his mouth, but there were no indications of
a tongue bite or urinary incontinence.
Upon admission, Mr. Miller was coherent and showed no speech issues. He
attributed a mild weakness in his right arm to pre-existing pain in the
upper arm. An immediate CT scan revealed a mass suggestive of a
glioblastoma in the right cerebral hemisphere, leading to a
neurosurgical consultation.
Given the possibility of an epileptic seizure, Mr. Miller was
hospitalized and started on Levetiracetam. He is currently unaware of
his regular medications but takes antihypertensives and diabetes
medications, among others. His friend and brother have been notified and
are ensuring that a detailed medication list is provided.
After a brief stay in ward ABC, Mr. Miller was transferred to the
neurosurgery team for further evaluation and treatment. We appreciate
the prompt transfer and are available for any further inquiries.
Planned Procedures:
-Schedule EEG
-Clarify routine medications
**Surgery Report**
**Diagnosis:** Suspected HGG (high-grade glioma) of the right hemisphere
**Procedure:** Microsurgical navigation-guided resection of the tumor
with intraoperative neuromonitoring (stable MEPs) and intraoperative MRI
using 5-ALA. Pathology samples taken (Preliminary: HGG; Final to be
confirmed). Resection was followed by duragen placement, watertight
dural closure, and multilayer wound closure, with skin sutures.
**Time:**
-Start: 11:12 am
-Finish: 3:54 pm
-Duration: 4 hours 42 minutes
**Assessment:**
Mr. Miller presented with a seizure characterized by speech disturbance
and disorientation. Imaging revealed a significant right hemispheral
mass, likely representing a high-grade brain tumor. The need for
surgical resection was determined following discussions at our
interdisciplinary tumor board. After being informed about the procedure,
alternative treatments, the operation\'s urgency, benefits, and
potential risks, Mr. Miller provided written consent following ample
time for consideration and the opportunity to ask further questions.
**Procedure Details:**
The patient was positioned supine with his head secured in a Nova clamp.
Navigation data were read, followed by skin preparation, and the
surgical field was sterilized and draped. An arch-shaped incision was
made, followed by hemostasis, deep tissue dissection, placement of a
burr hole, and the creation of a large bone flap over the lesion. The
bone flap was then elevated. Multiple washings were performed, followed
by dural opening under microscopic visualization. A corticotomy was
carried out with bipolar forceps, CUSA, and suction to progressively
reduce the tumor, utilizing 5-ALA fluorescence and continuous
neurophysiological monitoring. An intraoperative MRI showed residual
tumor, prompting further resection. Hemostasis was achieved, and the
wound was closed using tabotamp, followed by duragen placement and dural
suturing. The bone flap was refixed using Dogbone plates. The wound area
was irrigated extensively once more, followed by subcutaneous and skin
sutures.
**Radiology Report**
**Date:** 10/01/2019
**Clinical Indication:** Suspected recurrence of GBM in the right hemisphere
**Requested Imaging:** cMRI with and without contrast + DTI
**Findings:** *Imaging Modality (GE 3.0T):* 3D FLAIR, DTI, SWI, T2\* perfusion, 3D T1 with and without contrast, 3D T2, subtraction.
Following resection of a right hemispheric glioblastoma in 08/19, and compared with the last two scans (external: 07/19, internal: 08/19), there is a notable expansion of the previously detected FLAIR-hyperintense regions. These now span from the right parietal-subcortical
area across the right basal ganglia to the right temporo-occipital/right
temporal pole. Specifically, at the dorsocranial edge of the resection
cavity, the hyperintense regions appear to have grown since 08/19. These
coincide with hyperperfused regions in the T2\* perfusion. Linear SWI
signal changes are suggestive of mild post-surgical bleeding. No
significant postoperative hemorrhage or territorial ischemia is
detected. A normal venous sinus drainage is observed. Right temporal
horn appears congested, possibly due to CSF trapping.
**Assessment:**
Following the resection of the right hemispheric glioblastoma in 08/19:
-Markedly progressive FLAIR edema and an evolving barrier disturbance, especially towards the dorsal side of the resection cavity, with associated hyperperfusion. Together with the recent PET imaging
from 10/19, there is an indication of progressive disease as per RANO
criteria.
-Long-term progressive congestion of the right temporal horn, likely CSF
trapping.
**Surgery Report**
**Diagnosis:** Tumor recurrence after resection of a glioblastoma (IDH-wildtype, WHO CNS grade 4) of the right hemisphere in 08/2019. The patient underwent combined radiochemotherapy at the local clinical center.
**Procedure:** Navigated, microsurgical resection supported by 5-ALA, with stable MEPs, through the previous right temporal access. iMRI conducted between 11:10 and 11:50. DuraGen/TachoSil and Dogbone plates used for closure, followed by layered wound closure and skin sutures.
**Timing:**
-Incision: 09:23 am on 10/12/19
-Suture: 12:32 pm on 10/12/19
**Assessment:** The patient had previously undergone surgery for a
glioblastoma and a recurrence was detected on imaging. The tumor board
had already deliberated on the surgery. The patient was informed about
the surgical procedure, particularly about conducting an extensive
resection caudally without impacting function post-mapping. The patient
consented, understanding the potential for longer progression-free
survival.
**Procedure Details:** The patient was placed in a supine position with
the head turned to the side and fixed using the Noras clamp. The right
shoulder was padded. The surgical area was prepared by trimming hair
around the previous scar, followed by sterilization and draping. After a
team time-out, prophylactic antibiotics were administered. The previous
scar was reopened and old plates were removed. The microscope was then
swung into position and the dura was opened. Navigation proceeded
beneath the labbé vein, with tumor resection as guided by ALA
fluorescence. Post-tumor removal, extensive hemostasis was achieved
using absorbent cotton and TABOTAMP. Intraoperative MRI confirmed a
complete resection of the tumor. The dura was sealed using DuraGen,
ensuring a watertight closure. The bone flap was reinserted, followed by
subcutaneous suturing, skin suturing, and sterile dressing of the wound.
### Patient Report 1
**Dear colleague, **
We write to update you regarding our shared patient, Mr. John Miller,
born on 04/07/1961, who visited us on 12/02/2019.
**Diagnosis:** Glioblastoma recurrence, IDH1-wildtype.
**Tumor Location:** Right hemisphere including temporal regions.
**Clinical History & Treatment:**
-07/2019: Mr. Miller experienced speech disturbances and confusion.
-08/01/2019: Brain PET-MRI revealed a suspected malignancy in the right
hemisphere, including temporal areas.
-08/11/2019: Glioblastoma was resected at our facility.
-08-09/2019: He underwent adjuvant radiochemotherapy (43.4 Gy at 2.7 Gy per fraction, with a boost to 52.4 Gy at 3 Gy per fraction) and Temodal treatment at the local
clinical center.
-10/01/2019: cMRI with suspected recurrence.
-10/12/2019: A recurrent resection was performed at our facility.
-11/02/2019: Postoperative brain MRI showed no suspected tumor remnants.
**Recent Evaluation (12/01/2019):** Mr. Miller visited our facility with
his brother. Our assessment, based on CTCAE criteria, indicated that he
is in a fair but stable general and nutritional health (KPS 70-80%,
weight undisclosed, height 175 cm). Neurological and general evaluations
revealed a degree of aphasia, mainly with word-finding difficulty, and
short-term memory impairment. However, he remains fully oriented and
independent in daily life.
**Additional Observations:**
His surgical wound has healed well.
**Pre-existing Conditions:** Arterial hypertension, Diabetes Mellitus
Type II
**Allergies:** None
**Current Medications:** Antihypertensive drugs and insulin.
Postoperatively, Mr. Miller remains in good health. A recent brain MRI
noted that the suspected recurring GBM lesion near the superior border
of the surgical site was entirely resected. Furthermore, CT scans at the
anterior medial and lateral edges suggest that a complete resection was
most likely achieved.
Given the presumed complete resection of the glioblastoma recurrence, we
have recommended Mr. Miller for a neuro-oncology review and a follow-up
with PET-MRI in three months.
**Next Steps:**
-He has a scheduled appointment in neuro-oncology on 01/23/2020 at 10:00
AM.
-A follow-up PET-MRI is set for 01/29/2020 at 12:45 PM. We have advised
Mr. Miller to fast for 4 hours prior and to bring a referral from his
primary care physician, along with a recent creatinine test result.
-A review of these findings will be held in our outpatient department on
01/30/2020 at 2:00 PM.
Thank you for your continued care and collaboration. Please do not
hesitate to reach out for any additional information.
Warm regards,
### Patient Report 2
**Dear colleague, **
Regarding our mutual patient, Mr. John Miller, born 04/07/1961:
**Diagnosis**:
Glioblastoma recurrence, IDH1-wildtype
**Tumor Location**:
Right hemisphere/temporal.
**Medical History**:
07/2019: Onset of speech arrest and confusion.
08/2019: PET brain MRI indicated a suspected malignant mass in the right
hemisphere
08/11/2019: Glioblastoma resection performed in our neurosurgery
department.
08-09/2019: He underwent adjuvant radiochemotherapy (43.4 Gy at 2.7 Gy
with a boost of 52.4 Gy at 3 Gy) and Temodal treatment at the local
clinical center.
10/12/2019: A recurrent resection was performed at our facility.
11/02/2019: Postoperative brain MRI showed no suspected tumor remnants.
Mr. Miller came in on 12/01/2019 with his brother. Clinical examination
findings are as follows:
-General health: Stable with reduced vitality.
-Nutritional status: Stable (KPS 70-80%, weight in kg, height 169 cm).
-No evident motor, sensory, visual, or cranial nerve deficits.
-Neurocognitive deficits: Short-term memory issues.
-Aphasia: Grade II (mainly word-finding disorders). The patient is fully
oriented and independent in daily life.
-No evidence of recurrence in PET-MRI. Next imaging scheduled in
03/2020.
**Past Medical Conditions**:
-Hypertension
-Type II Diabetes Mellitus
**Allergies**: None.
**Medications**:
-Antihypertensive medications
-Insulin
Best Regards,
### Patient Report 3
**Dear colleague,**
Updating you on our mutual patient, Mr. John Miller, born 04/07/1961:
**Diagnosis**:
Recurrent Glioblastoma, IDH 1 wild type (ICD-10: C71.8).
**Molecular Pathology**:
No p.R132H mutation in IDH.
No combined 1p/19q loss.
Suspected CDKN2A/B deletion.
**Medical History**:
No pain or B symptoms.
Intermittent dizziness and headaches since the last check-up.
**Neurological Findings**:
Patient is alert and oriented.
Weight: 80 kg (total loss: 15 kg), Height: 175 cm.
Karnofsky Performance Score: 80%.
Motor function and sensory assessments were unremarkable.
**Allergies**: None.
**Medications**:
Lisinopril 10mg once daily in the morning
Bisoprolol 2.5mg once daily at bedtime
Januvia 50mg twice daily
Allopurinol 100mg once daily in the morning
Ezetimibe 10mg once daily at bedtime
Levetiracetam 1000mg once daily in the morning
Insulin as per regimen
**Secondary Diagnoses**:
Hypertension
Type II Diabetes Mellitus
**Medical Course**:
Details from 07/2019 through 03/2020 provided, including surgeries,
radiochemotherapies, and diagnostics.
In summary, Mr. Miller's glioblastoma diagnosis in 07/2019 led to
various treatments, including radiochemotherapy and surgeries. His
recent PET/MRI on 01/2020 indicates potential recurrent areas.
Best regards,
**Patient:** John Miller
**DOB:** 04/07/1961
**Admission Date:** 04/11/2020
**Discharge Date:** 04/18/2020
**Admission Diagnosis:**
Recurrent tumor in the hippocampal region and along the prior resection
cavity.
History of glioblastoma (IDH wild type, WHO CNS grade 4) in the right
hemisphere, resected on 08/2019.
History of combined radiochemotherapy with Temodar (Temozolomide) from
August to September 2019 at the local clinical center.
Subsequent first re-resection in 10/2019.
**Presenting Complaint:**
Mr. Miller presented to the neurosurgical outpatient department
accompanied by his wife. Recent imaging indicated a potential recurrence
of the glioblastoma. The neuro-oncological board on 04/12/2020
recommended a re-resection.
**Physical Examination on Admission:**
Alert, oriented x4, cooperative.
Non-fluent aphasia.
Cranial nerves intact.
No sensory or motor deficits in the extremities.
Surgical scar clean and dry.
No signs of neurogenic bladder or rectal dysfunction.
KPSS 70%.
**Medications on Admission:**
-Lisinopril 10mg daily
-Bisoprolol 2.5mg nightly
-Januvia 50mg twice daily
-Allopurinol 100mg daily
-Atorvastatin 40mg nightly
-Ezetimibe 10mg nightly
-Levetiracetam 1000mg twice daily
-Actraphane insulin as prescribed
**Surgical Intervention (04/12/2020):**
Navigated microsurgical resection of tumor spots assisted with 5-ALA.
Stable MEPs were maintained. An intraoperative MRI (iMRI) was utilized.
Post-resection, the surgical area was managed using Tabotamp, Duragen,
TachoSil, and dog-bone plates, concluding with layered wound closure.
**Postoperative Course:**
Uncomplicated recovery.
Post-op MRI showed no residual tumor.
Surgical site remained clean, dry, and showed no signs of infection or
irritation.
**Discharge Diagnosis:**
Recurrence of known glioblastoma, WHO CNS grade 4.
**Interdisciplinary Neuro-oncological Tumor Board Recommendation
(04/20/2020):**
Molecular tumor board review.
Offer reinitiation of Temozolomide chemotherapy.
**Physical Examination on Discharge:**
Similar to admission, with suture in place and wound site in good
condition.
KPSS 70%.
**Medications on Discharge:**
-Allopurinol 100mg daily (morning)
-Atorvastatin 40mg nightly (evening)
-Bisoprolol 2.5mg nightly (evening)
-Ezetimibe 10mg nightly (evening)
-Sitagliptin (Januvia) 50mg twice daily (morning and evening)
-Levetiracetam (Keppra) 1000mg twice daily (morning and evening)
-Lisinopril 10mg daily (morning)
-Acetaminophen 500mg as needed for pain or fever
-Actraphane insulin as prescribed
**Surgery Report **
**Diagnosis**:
Tumor recurrence in the right hippocampal region and along the resection
cavity post glioblastoma resection on 8/11/2019.
Previous treatments include radiochemotherapy at the local therapy
center (from August to September 2019) and re-resection on 10/12/2019.
**Surgery Type**:
Re-opening of the temporal region with navigation, microsurgery using
5-ALA assistance, iMRI, and re-resection, among other procedures.
**Procedure Details**:
Start: 11:50 am on 04/12/2020
End: 4:00 pm on 04/12/2020
Duration: 4 hours 10 minutes.
**Assessment**:
Evidence of recurrent glioblastoma areas warranted another biopsy and
resection. After informing Mr. Miller about the procedure's risks and
benefits, he provided written consent.
**Operation**:
Details on the positioning, pre-operative preparations, resection, and
post-operative procedures are provided.
Best regards,
MEDICAL HISTORY:
Mr. Miller underwent surgery because of tumor recurrence in the right
hemisphere last November to treat a right temporal glioblastoma. He
presented to our private outpatient clinic due to a wound complication,
specifically a wound dehiscence measuring about 1 cm. On closer
examination, pus was noted. Despite being symptom-free otherwise, a
consultation with Dr. Doe was scheduled. After discussion, it was
decided to clean the wound, trim the deteriorated wound edges, and clean
the bone flap with an antibiotic solution before reinserting it. The
patient was thoroughly educated about the nature and risks of the
procedure and gave consent.
OPERATION:
The patient was placed in a supine position. The hair surrounding the
surgical site was trimmed, followed by skin disinfection and sterile
draping. The bicoronal skin incision was reopened. Deteriorated wound
edges were excised and the wound was extensively cleaned with
irrigation. The bone flap was removed and immersed in a Refobacin
solution. The epidural pannus tissue was removed. The dura was
completely sutured. Multiple samples were collected both subgaleally and
epidurally. A sponge was applied, followed by the reinsertion of the
bone flap using a dog bone miniplate fixation and local application of
vancomycin powder. A subgaleal drain was placed, and the skin was closed
using Donati continuous sutures. A sterile staple dressing was applied,
and the patient was transferred to recovery.
CLINICAL NOTES:
Epidural pannus suggestive of infection. Past surgical history includes
glioblastoma removal on 08/11/2019 and subsequent surgeries because of
recurrence, the last of these on 04/12/2020. Nature and type of growth?
MACROSCOPIC EXAMINATION:
Fixed tissue samples measuring 1.2 x 1.0 x 0.4 cm were entirely
embedded.
STAINING: 1 block, Hematoxylin & Eosin (HE), Periodic Acid Schiff (PAS).
MICROSCOPIC EXAMINATION:
Histology shows entirely necrotic tissue and some bony fragments. No
microorganisms were detected in the PAS stain.
FINDINGS:
Fully necrotic tissue. No signs of inflammation or malignancy in the
available samples.
OPERATIVE REPORT:
Diagnosis: Wound healing disruption high on the forehead, following a
resection of recurrent glioblastoma in the right hemisphere on
04/12/2020.
Procedure: Wound revision, thorough wound cleaning, reinsertion of the
autologous bone flap.
Time of incision: 3:30 PM, 06/01/2020
Time of suture completion: 4:35 PM, 06/01/2020
Duration: 65 minutes
PATIENT HISTORY:
The patient had two prior surgeries with our team. The most recent was a
revision due to a wound healing complication. The patient was informed
about the procedure's nature, extent, risks, and potential outcomes,
and was given ample opportunity to ask questions. After thoughtful
consideration, written consent was obtained.
OPERATION:
The patient was positioned supine with the head in a neutral position.
The surgical area was sterilized and draped. Antibiotic prophylaxis was
administered, and a timeout procedure was conducted. The old wound was
reopened and slightly extended by about 1 cm in both directions. The
bone flap was removed, inspected, and cleaned. It was then reinserted
after refreshing the bone edges. The wound edges were refreshed, and the
wound was irrigated again before closure.
SUMMARY:
Successful wound revision without complications for a wound healing
complication post-glioblastoma surgery.
CLINICAL NOTES:
Complication in wound healing after glioblastoma removal and radiation
therapy. Possible inflammation? Evaluate for pus. Nature and type of
growth?
MACROSCOPIC EXAMINATION:
Subcutaneous tissue samples, 1.2 x 1.0 x 0.4 cm, were completely
embedded after being cut into two.
Epidural tissue samples, 5.6 x 5.3 x 1.0 cm, were partially embedded.
STAINING: 2 blocks, Hematoxylin & Eosin (HE), Periodic Acid Schiff
(PAS).
MICROSCOPIC EXAMINATION:
Histology displays connective tissue surrounded by a pronounced
inflammatory infiltrate, comprised of neutrophils, lymphocytes, and
numerous eosinophils. Additionally, budding capillaries were seen. No
specific findings in the PAS stain.
Histologically, connective tissue infiltrated by predominantly
lymphocytic inflammation was observed. Eosinophils and abundant necrotic
tissue were also seen. Additionally, polarizable material was noted,
occasionally engulfed by multinucleated giant cells. Hemorrhagic signs
were indicated by hemosiderin deposits. No specific findings in the PAS
stain.
FINDINGS:
1 & 2. Soft tissue displays acute phlegmonous inflammation and chronic
granulating inflammation.
**Brief report (07/15/2020):**
Diagnosis: superficial wound healing disorder and symptomatic, simple
focal epileptic seizure dated 06/01/2020.
Wound healing disorder right parietal at the site of previous right-side
glioblastoma, last revision 04/12/2020.
-Single focal seizure of the right side of the body 04/20/2020, single
generalized seizure 05/12/2020
-Previous resection on 10/12/2019
-Wound healing disorder with subsequent wound revision (06/2020)
-Last cMRI on 05/03/2020: no recurrence observed.
Secondary diagnoses:
Hypothyroidism
Surgery type: injection of Ropivacaine, smear for microbiology,
readaptation of three small wound dehiscences, purse-string suture,
application of polyhexanide gel, plaster dressing
Instructions: Return next Tuesday for wound check. Sutures to remain in
place for 10 days. Check microbiology results on Thursday. Clinically,
no signs of infection observed.
Surgical report:
Diagnosis: superficial wound healing disorder and symptomatic, simple
focal epileptic seizure dated 04/20/2020.
ASSESSMENT:
The patient was presented at the emergency unit after observing a small
wound dehiscence after the aforementioned surgery for a wound healing
disorder. The treating surgeon recommended a second local wound revision
attempting to readapt the wound with a minor surgery. The patient
provided written consent for the procedure. The intervention was
conducted with standard coagulation parameters.
SURGERY:
Sterile preparation and draping of the surgical area. Initial injection
of Ropivacaine. This was followed by swab collection for microbiology.
The wound edges were excised and the wound dehiscence was readapted
using a purse-string suture. Afterward, polyhexanide gel was applied,
followed by a sterile plaster dressing.
Surgery report:
Diagnosis: significant scalp wound healing disorder in the prior
surgical access area post-resection and irradiation of a glioblastoma
multiforme.
Operating time: 49 minutes
### Patient Report 4
**Dear colleague,**
we report on Mr. Miller, John, born 04/07/1961, who was in our inpatient
treatment from 09/12/2020 to 10/07/2020.
Discharge diagnosis: Recurrence of the pre-described glioblastoma right
insular, WHO CNS grade 4 (IDH wild type, MGMT methylated).
Physical examination on admission:
Patient awake, fully oriented, cooperative. Non-fluent aphasia. Speech
clear and fluent. Latent left-sided hemisymptomatology with strength grade 4+/5,
stance and gait unsteady. Cranial nerve status regular. Scar conditions
non-irritant except for frontal superficial erosion at frontal wound
pole of pterional approach. Karnofsky 70%.
Medication on Admission:
Levothyroxine sodium 50 μg/1 pc (Synthroid® 50 micrograms, tablets)
1-0-0-0
Lorazepam 0.5 mg/1 pc (Ativan® 0.5 mg, tablets) 1-1-1-1
Lacosamide 100 mg/1 pc (Vimpat® 100 mg film-coated tablets) 1-0-1-0
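The dosing scheme notation used above (e.g. 1-0-1-0) encodes the number of units taken at morning, noon, evening, and night. A minimal illustrative sketch of expanding this notation follows; the helper names are ours, not part of the record:

```python
# Illustrative sketch: parse the "morning-noon-evening-night" dosing
# scheme (e.g. "1-0-1-0") used in the medication list above.

def parse_dosing(scheme: str) -> dict:
    """Map a 'morning-noon-evening-night' scheme to labeled unit counts."""
    slots = ("morning", "noon", "evening", "night")
    counts = [float(x) for x in scheme.split("-")]
    return dict(zip(slots, counts))

def daily_units(scheme: str) -> float:
    """Total units per day implied by the scheme."""
    return sum(parse_dosing(scheme).values())

# Examples from the medication list:
print(parse_dosing("1-0-1-0"))  # one unit in the morning, one in the evening
print(daily_units("1-1-1-1"))   # lorazepam scheme: four units per day
```

So "Lacosamide 100 mg 1-0-1-0" corresponds to 200 mg per day, taken as one tablet in the morning and one in the evening.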
**Imaging:**
cMRI +/- contrast agent dated 09/15/2020: There is a contrast-enhancing
formation right temporo-mesially, extending toward the insular cistern,
with suspected tumor recurrence.
PET dated 09/10/2020 (external): Significant multinodular tracer uptake
with active areas in the insular region is seen.
**Surgery of 09/18/2020:**
Reopening of the existing skin incision and cranial extension of the
craniotomy, microsurgical navigated tumor resection right insular (IONM:
MEP loss in the lower extremity with incomplete recovery), CUSA;
extensive hemostasis, intraoperative MRI, sutures, reimplantation of the
bone flap with multilayer wound closure. Skin suture.
Histopathological report:
Recurrence of pre-described glioblastoma, WHO CNS grade 4 (IDH wild
type, MGMT methylated).
Course:
The patient initially presented postoperatively with left hemiplegia in
the sense of SMA. This regressed significantly during the inpatient
stay. Postoperative imaging revealed a regular resection finding. For a
suspected adjustment disorder, the patient was treated with sertraline
and lorazepam. The wound was dry and non-irritant during the
inpatient stay. The patient received regular physiotherapeutic exercise.
The patient\'s case was discussed in our neuro-oncological tumor board
on 09/29/2020, where the decision was made for adjuvant definitive
radiochemotherapy. An inpatient transfer was offered by the colleagues
of radiotherapy.
Procedure:
We transfer Mr. Miller today in good general clinical condition to your
further treatment and thank you for taking over his care. We ask for
regular wound controls as well as regular ECG controls to exclude a
QTc-time prolongation under sertraline. Furthermore, the medication with
lorazepam should be further phased out in the course of time. In case of
acute neurological deterioration, a re-presentation in our neurosurgical
outpatient clinic or surgical emergency room is possible at any time.
Clinical examination findings at discharge:
Patient awake, fully oriented, cooperative. Speech clear and fluent.
Cranial nerve status without pathological findings. Hemiparesis
left-sided strength grade 4/5, right-sided no sensorimotor deficit.
Stance and gait unsteady. Non-fluent aphasia. Wound dry, without
irritation. Karnofsky 70%.
**Medication at Discharge:**
Levothyroxine sodium 50 μg (Synthroid® 50 micrograms, tablets) 1-0-0-0
Lorazepam 0.5 mg (Ativan® 0.5 mg, tablets) as needed
Lacosamide 100 mg (Vimpat® 100 mg film-coated tablets) 1-0-1-0
Acetaminophen 500 mg (Tylenol® 500 mg tablets) 1-1-1-1
Sertraline 50 mg (Zoloft® 50 mg film-coated tablets) by regimen
**Magnetic Resonance Imaging (MRI) Report**
Date of Examination: 02/02/2021
Clinical Indication: Multifocal glioblastoma WHO grade IV, IDH wild
type, MGMT methylated.
Clinical Query: Hemiparesis on the right side. Is there a structural
correlate? Tumor progression?
**Previous Imaging**: Multiple prior studies. The most recent contrasted
MRI was on 09/15/2020.
**Findings**:
**Imaging Device**: GE 3T; Protocol: 3D FLAIR, 3D T1 Mprage with and
without contrast, SWI, DWI, 3D T2, axial T2\*, perfusion, DTI.
**Report:**
Known multimodal pretreated GBM since 2019, recently post-surgical
resection for tumor progression in the frontotemporal region on
09/18/2020. Also noted is a post-surgical resection of additional
foci in the right insular region. The resection cavity in the right
frontal, insular, and temporal regions appears unchanged in size and
configuration. Residual blood products are noted within.
There are increased areas of contrast enhancement compared to the
immediate post-operative images. There is a minor growth in a
nodular enhancement posterior to the right middle cerebral artery.
Adjacent to this, there\'s a new nodular enhancement, which could be
a postoperative reactive change or a new tumor lesion.
Ongoing diffusion abnormalities are observed in the right caput
nuclei caudatus, putamen, and globus pallidus, especially pronounced
in posterior sections.
Persistent FLAIR-hyperintense peritumoral edema in the right
hemisphere remains unchanged. The midline shift is approximately 9mm
to the left, which remains unchanged.
Post-operative swelling and fluid accumulation are noted at the
surgical entry point. The bone flap is in place. The width of both
the internal and external CSF spaces remains constant, with no
evidence of obstruction.
The orbital contents are symmetrical. Paranasal sinuses and mastoid
air cells are aerated appropriately.
**Impression**:
Residual tumor segments along the right middle cerebral artery showing
growth. Adjacent to it, a new nodular area of contrast enhancement
suggests either a postoperative change or a new tumor lesion.
Previously identified ischemic changes in the right caput nuclei
caudatus, putamen, and globus pallidus. Persistent brain edema with a
leftward midline shift of approximately 9mm remains unchanged.
### Patient Report 5
**Dear colleague,**
Herewith we report on our common patient Mr. John Miller, born
04/07/1961, who was at our clinic between 02/04/2021 to 04/22/2021.
-Recurrent manifestation of a glioblastoma
-Stage: WHO CNS grade 4
**Histology:**
Recurrence of the pre-described glioblastoma, WHO CNS grade 4.
Molecular pathological findings:
IDH status: no p.R132H mutation (immunohistochemical).
ATRX: preservation of nuclear expression (immunohistochemical).
p53: technically not evaluable (immunohistochemical).
1p/19q status: no combined loss (850k methylation analysis).
CDKN2A/B: Deleted (850k methylation analysis).
MGMT promoter: Methylated (850k methylation analysis).
Tumor localization: Insula/frontal right
Secondary diagnoses:
Symptomatic epilepsy
Hypothyroidism
Nausea
Leukopenia I° (CTCAE)
Anemia II° (CTCAE)
Previous course / therapies:
08/2019: PET brain MRI indicated a suspected malignant mass in the right
hemisphere
08/11/2019: Glioblastoma resection performed in our neurosurgery
department.
08-09/2019: He underwent adjuvant radiochemotherapy (43.4 Gy at 2.7 Gy
with a boost of 52.4 Gy at 3 Gy) and Temodal treatment at the local
clinical center.
10/12/2019: A recurrent resection was performed at our facility.
11/02/2019: Postoperative brain MRI showed no suspected tumor remnants.
03/2020: Suspected recurrence
04/2020: Revision surgery
06-07/2020 Wound revisions and flap plasty for atrophic wound healing
disorder
02/2021: Suspected recurrence with new FLAIR-positive tumor
manifestation insular on the right side. Stereotactic biopsy with
evidence of glioblastoma.
Pathology: Renewed manifestation of a glioblastoma.
Recommended radiochemotherapy.
According to the interdisciplinary neuro-oncology board of 01/26/2021,
we gave the indication for adjuvant radiochemotherapy for the recurrence
of glioblastoma.
**Radiochemotherapy: **
Technique:
1) Percutaneous intensity-modulated radiotherapy was administered to the
former recurrence tumor region in the frontal/insular right after CT-
and MRI-guided radiation planning with 6 MV-photons in helical
tomotherapy technique with a single dose of 2 Gy up to a total dose of
60 Gy with 5 fractions per week. Daily position controls by CT.
2) Subsequently, local dose saturation of the macroscopic tumor remnant
was performed.
Stereotactic ablative radiosurgery of the right insular region on the
GammaKnife (Cobalt-60: 1.17 MeV and 1.33 MeV photons) in mask fixation,
after CT- and MRI-guided radiotherapy planning under image-guided setup
(ConeBeam-CT), with a dose of 6 Gy in 2 Gy single doses to the 68%
isodose, up to a total cumulative dose of 66 Gy.
Chemotherapy:
Concurrent chemotherapy with temozolomide 75 mg/m² body surface area
daily (120 mg daily).
Absolute dose: 5000 mg.
Treatment Period:
Radiotherapy 03/09/2021 -- 04/21/2021
Chemotherapy 03/09/2021 -- 04/21/2021
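The dose arithmetic above can be cross-checked with a short sketch. Note that the fraction counts and the body-surface area implied by the temozolomide dosing are our inferences, not values stated in the report:

```python
# Illustrative cross-check of the radiochemotherapy doses reported above.

def fractions(total_dose_gy: float, dose_per_fraction_gy: float) -> float:
    """Number of fractions needed to reach the total dose."""
    return total_dose_gy / dose_per_fraction_gy

# Tomotherapy series: 60 Gy at 2 Gy per fraction.
main_fractions = fractions(60, 2)     # 30 fractions
# Radiosurgical boost: 6 Gy at 2 Gy per fraction.
boost_fractions = fractions(6, 2)     # 3 fractions
cumulative_dose = 60 + 6              # 66 Gy, matching the report

# Concurrent temozolomide: 75 mg/m² daily stated as 120 mg daily,
# implying a body-surface area of 120 / 75 = 1.6 m² (an inference).
implied_bsa = 120 / 75

# An absolute dose of 5000 mg at 120 mg/day corresponds to roughly
# 42 treatment days, consistent with the 03/09-04/21 treatment period.
treatment_days = 5000 / 120
```

All quantities derived here agree with the figures given in the letter.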
**Course under therapy:**
We took over Mr. Miller on 03/06/2021 in reduced general and slightly
reduced nutritional condition (Karnofsky index: 70%, BMI: 18.5 kg/m²)
from the Clinic for Neurosurgery for adjuvant re-radiochemotherapy on
our radiooncology ward. At the time of admission, the patient had a
right-sided hemiparesis (strength grades arm: 2/5, leg: 3/5). The patient was
ambulatory with assistance. Cranial nerve status was unremarkable.
Headache, nausea or dizziness were denied.
On 03/09/2021, combined re-radiochemotherapy was initiated.
During the course of therapy with temozolomide, mild nausea occurred,
which was treated with ondansetron 4 mg as needed and dimenhydrinate
sustained-release tablets 150 mg as needed. Under this treatment the
symptoms clearly regressed. Mild constipation was treated. Laboratory
tests revealed mild leukopenia I° and anemia II° (CTCAE).
Otherwise, the re-radiochemotherapy was very well tolerated overall.
Mr. Miller received physiotherapeutic training and psycho-oncological
support during the entire inpatient stay. Under the physiotherapeutic
treatment, his motor skills improved significantly. At the end of the
therapy, the patient was also mobile outside the house without any aids.
On 04/22/2021 we discharged Mr. Miller to the outpatient care by his
family doctor.
04/11/2021: MR brain post contrast
After renewed radiochemotherapy for a glioblastoma recurrence, there is
a residual suspicious barrier disturbance adjacent to the right middle
cerebral artery, with stable nodular contrast enhancement and
postoperative/reactive changes as described above. MR perfusion
sequences show residual, contrast-enhancing tumor portions along the
right middle cerebral artery. Lateral to this, a new nodular
contrast-enhancing lesion, possibly a postoperative reactive change.
Previously known ischemia at the right caput nuclei caudati, putamen,
and globus pallidus. Unchanged medullary edema with a midline shift to
the left of approximately 9 mm.
Last lab:
MCHC 29.4 g/dL (32 - 36) 04/20/2021
MCH 25 pg (27 - 32) 04/20/2021
Leukocytes 3.32 G/l (4.0 - 9.0) 04/20/2021
Hematocrit 31.6 % (37 - 43) 04/20/2021
Hemoglobin 9.3 g/dL (12 - 16) 04/20/2021
Erythrocytes 3.7 T/l (4.1 - 5.4) 04/20/2021
Uric acid 2.2 mg/dl (2.5 - 5.5) 04/20/2021
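The leukopenia I° and anemia II° gradings reported earlier in this letter can be reproduced from this panel using the standard CTCAE v5 cut-offs. The following sketch is illustrative (the helper functions are ours); the lower limits of normal (LLN) are taken from the reference ranges in the lab table:

```python
# Illustrative sketch: CTCAE v5 grading of the 04/20/2021 lab values.

def anemia_grade(hgb_g_dl: float, lln: float = 12.0) -> int:
    """CTCAE v5 anemia: G1 < LLN-10.0, G2 < 10.0-8.0, G3 < 8.0 g/dL."""
    if hgb_g_dl < 8.0:
        return 3
    if hgb_g_dl < 10.0:
        return 2
    if hgb_g_dl < lln:
        return 1
    return 0

def wbc_decreased_grade(wbc_g_per_l: float, lln: float = 4.0) -> int:
    """CTCAE v5 WBC decreased (x10^9/L): G1 < LLN-3.0, G2 < 3.0-2.0,
    G3 < 2.0-1.0, G4 < 1.0."""
    if wbc_g_per_l < 1.0:
        return 4
    if wbc_g_per_l < 2.0:
        return 3
    if wbc_g_per_l < 3.0:
        return 2
    if wbc_g_per_l < lln:
        return 1
    return 0

print(anemia_grade(9.3))          # hemoglobin 9.3 g/dL -> grade 2
print(wbc_decreased_grade(3.32))  # leukocytes 3.32 G/l -> grade 1
```

Both results match the "Anemia II° (CTCAE)" and "Leukopenia I° (CTCAE)" gradings stated in the course summary.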
**Further Procedure:**
Further skin care and behavior regarding side effects were explained to
the patient in detail.
A first radiooncological control appointment was scheduled for
06/05/2021 at 12:00 AM in our outpatient clinic. Prior to this, on
06/02/2021 at 10:30 AM, an imaging exam (brain MRI) is scheduled.
A further neuro-oncological connection is planned close to home via the
treating oncologist.
After radiation therapy, we recommend annual ophthalmological check-ups
and annual endocrinological follow-ups with testing of the
hypothalamic-pituitary hormone axes.
### Patient Report 6
**Dear colleague,**
This is a report on our mutual patient, Mr. John Miller, born
04/07/1961:
Diagnosis:
Recurrent manifestation of glioblastoma
WHO CNS grade 4
Tumor localization: Right insula/frontal
Secondary diagnoses: Symptomatic epilepsy, hypothyroidism
Previous Treatments / Therapies:
Resection and revisions
Adjuvant radiochemotherapy
Two wound revisions and flap plasty for atrophic wound healing disorder
New FLAIR-positive tumor manifestation insular on the right side
(02/2021)
Stereotactic biopsy with histopathology with evidence of glioblastoma
Tumor Board: Recommended new radiochemotherapy up to a total dose of 60
Gy at 2 Gy per fraction, with subsequent local dose saturation of the
macroscopic tumor remnant as radiosurgery on the GammaKnife with a dose
of 6 Gy at 2 Gy per fraction (to the 68% isodose).
03-04/2021: Repeat stereotactic radiotherapy in the area of the basal
ganglia and the right frontal resection cavity on the Gamma Knife with
46 Gy at 2 Gy per fraction; 3 doses of bevacizumab 7.5 mg/kg i.v.
**Summary:**
On 07/03/2021, we conducted an initial telephone follow-up with the
patient, Mr. Miller, for a radio-oncology consultation. Presently, Mr.
Miller is undergoing rehabilitation, from which he feels he is deriving
substantial benefits. His recent radiotherapy was well tolerated without
any complications. Since the onset of his symptoms, there have been no
new developments. Symptoms related to intracranial pressure or new
neurological deficits were denied. Fortunately, while on anticonvulsant
therapy with Lacosamide, Mr. Miller experienced no epileptic seizures.
His skin condition is normal. However, Mr. Miller did mention some
cognitive challenges that minimally impact his daily activities,
alongside feelings of fatigue and grade I CTCAE symptoms. The cMRI scan
from 06/02/2021 revealed a notable reduction in the barrier disturbance
of the right-sided basal ganglia. This was accompanied by small, mildly
hyperperfused residual findings near the third ventricle. Moreover, the
pinpoint contrast enhancement in the left parietal lobe appeared
unchanged, suggesting it is a scarring reaction. In collaboration with
the neurooncology team, Mr. Miller has discussed starting chemotherapy.
The next imaging assessment is scheduled for mid-September. We have also
scheduled another radio-oncologic follow-up with Mr. Miller for
September 28th, per his preference, via telephone. For patient safety,
Mr. Miller is prohibited from operating private or commercial vehicles
for 3 months post-intracerebral radiotherapy. This duration may extend
if there are existing or progressing brain conditions. Following
radiotherapy, we are mandated by the Radiation Protection Act to
facilitate regular checks. Hence, we encourage enrollment in the
aftercare calendar, prompt reporting of any significant findings, and
attendance of scheduled follow-ups. Alongside these, regular oncological
check-ups by specialist practitioners are mandatory. Mr. Miller has been
duly informed of all these requirements.
|
p53
|
What lesson did Brian learn from his experience?
A. That Serono Zeburzac was a rebel insider.
B. That Venus Consolidated served the best interests of the planet.
C. That the Venus Consolidated police weren’t honest.
D. That the rebels built mines as escape routes from the police.
|
MONOPOLY By Vic Phillips and Scott Roberts Sheer efficiency and good management can make a monopoly grow into being. And once it grows, someone with a tyrant mind is going to try to use it as a weapon if he can— [Transcriber's Note: This etext was produced from Astounding Science-Fiction April 1942. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] "That all, chief? Gonna quit now?" Brian Hanson looked disgustedly at Pete Brent, his lanky assistant. That was the first sign of animation he had displayed all day. "I am, but you're not," Hanson told him grimly. "Get your notes straightened up. Run those centrifuge tests and set up the still so we can get at that vitamin count early in the morning." "Tomorrow morning? Aw, for gosh sakes, chief, why don't you take a day off sometime, or better yet, a night off. It'd do you good to relax. Boy, I know a swell blonde you could go for. Wait a minute, I've got her radiophone number somewhere—just ask for Myrtle." Hanson shrugged himself out of his smock. "Never mind Myrtle, just have that equipment set up for the morning. Good night." He strode out of the huge laboratory, but his mind was still on the vitamin research they had been conducting, he barely heard the remarks that followed him. "One of these days the chief is going to have his glands catch up with him." "Not a chance," Pete Brent grunted. Brian Hanson wondered dispassionately for a moment how his assistants could fail to be as absorbed as he was by the work they were doing, then he let it go as he stepped outside the research building. He paused and let his eyes lift to the buildings that surrounded the compound. This was the administrative heart of Venus City. Out here, alone, he let his only known emotion sweep through him, pride. He had an important role in the building of this great new city. 
As head of the Venus Consolidated Research Organization, he was in large part responsible for the prosperity of this vigorous, young world. Venus Consolidated had built up this city and practically everything else that amounted to anything on this planet. True, there had been others, pioneers, before the company came, who objected to the expansion of the monopolistic control. But, if they could not realize that the company's regime served the best interests of the planet, they would just have to suffer the consequences of their own ignorance. There had been rumors of revolution among the disgruntled older families. He heard there had been killings, but that was nonsense. Venus Consolidated police had only powers of arrest. Anything involving executions had to be referred to the Interplanetary Council on Earth. He dismissed the whole business as he did everything else that did not directly influence his own department. He ignored the surface transport system and walked to his own apartment. This walk was part of a regular routine of physical exercise that kept his body hard and resilient in spite of long hours spent in the laboratory. As he opened the door of his apartment he heard the water running into his bath. Perfect timing. He was making that walk in precisely seven minutes, four and four-fifths seconds. He undressed and climbed into the tub, relaxing luxuriously in the exhilaration of irradiated water. He let all the problems of his work drift away, his mind was a peaceful blank. Then someone was hammering on his head. He struggled reluctantly awake. It was the door that was being attacked, not his head. The battering thunder continued persistently. He swore and sat up. "What do you want?" There was no answer; the hammering continued. "All right! All right! I'm coming!" He yelled, crawled out of the tub and reached for his bathrobe. It wasn't there. He swore some more and grabbed a towel, wrapping it inadequately around him; it didn't quite meet astern. 
He paddled wetly across the floor sounding like a flock of ducks on parade. Retaining the towel with one hand he inched the door cautiously open. "What the devil—" He stopped abruptly at the sight of a policeman's uniform. "Sorry, sir, but one of those rebels is loose in the Administration Center somewhere. We're making a check-up of all the apartments." "Well, you can check out; I haven't got any blasted rebels in here." The policeman's face hardened, then relaxed knowingly. "Oh, I see, sir. No rebels, of course. Sorry to have disturbed you. Have a good—Good night, sir," he saluted and left. Brian closed the door in puzzlement. What the devil had that flat-foot been smirking about? Well, maybe he could get his bath now. Hanson turned away from the door and froze in amazement. Through the open door of his bedroom he could see his bed neatly turned down as it should be, but the outline under the counterpane and the luxuriant mass of platinum-blond hair on the pillow was certainly no part of his regular routine. "Hello." The voice matched the calm alertness of a pair of deep-blue eyes. Brian just stared at her in numbed fascination. That was what the policeman had meant with his insinuating smirk. "Just ask for Myrtle." Pete Brent's joking words flashed back to him. Now he got it. This was probably the young fool's idea of a joke. He'd soon fix that. "All right, joke's over, you can beat it now." "Joke? I don't see anything funny, unless it's you and that suggestive towel. You should either abandon it or get one that goes all the way round." Brian slowly acquired a complexion suitable for painting fire plugs. "Shut up and throw me my dressing gown." He gritted. The girl swung her legs out of bed and Brian blinked; she was fully dressed. The snug, zippered overall suit she wore did nothing to conceal the fact that she was a female. He wrapped his bathrobe austerely around him. "Well, now what?" she asked and looked at him questioningly. "Well, what do you think?" 
he burst out angrily. "I'm going to finish my bath and I'd suggest you go down to the laboratory and hold hands with Pete. He'd appreciate it." He got the impression that the girl was struggling heroically to refrain from laughing and that didn't help his dignity any. He strode into the bathroom, slammed the door and climbed back into the bath. The door opened a little. "Well, good-by now," the girl said sweetly. "Remember me to the police force." "Get out of here!" he yelled and the door shut abruptly on a rippling burst of laughter. Damn women! It was getting so a man had to pack a gun with him or something. And Pete Brent. He thought with grim satisfaction of the unending extra work that was going to occur around the laboratory from now on. He sank back into the soothing liquid embrace of the bath and deliberately set his mind loose to wander in complete relaxation. A hammering thunder burst on the outer door. He sat up with a groan. "Lay off, you crazy apes!" he yelled furiously, but the pounding continued steadily. He struggled out of the bath, wrapped his damp bathrobe clammily around him and marched to the door with a seething fury of righteous anger burning within him. He flung the door wide, his mouth all set for a withering barrage, but he didn't get a chance. Four police constables and a sergeant swarmed into the room, shoving him away from the door. "Say! What the—" "Where is she?" the sergeant demanded. "Wherethehell's who?" "Quit stallin', bud. You know who. That female rebel who was in here." "Rebel? You're crazy! That was just ... Pete said ... rebel? Did you say rebel?" "Yeah, I said rebel, an' where is she?" "She ... why ... why ... she left, of course. You don't think I was going to have women running around in here, do you?" "She wuz in his bed when I seen her, sarge," one of the guards contributed. "But she ain't there now." "You don't think that I—" "Listen, bud, we don't do the thinkin' around here. You come on along and see the chief." 
Brian had had about enough. "I'm not going anywhere to see anybody. Maybe you don't know who I am. You can't arrest me." Brian Hanson, Chief of Research for Venus Consolidated, as dignified as possible in a damp bathrobe, glared out through the bars at a slightly bewildered Pete Brent. "What the devil do you want? Haven't you caused enough blasted trouble already?" "Me? For gosh sakes, chief—" "Yes, you! If sending that damn blonde to my apartment and getting me arrested is your idea of a joke—" "But, my gosh, I didn't send anybody, chief. And this is no joke. That wasn't Myrtle, that was Crystal James, old man James' daughter. They're about the oldest family on Venus. Police have been after her for months; she's a rebel and she's sure been raising plenty of hell around here. She got in and blew out the main communications control panel last night. Communications been tied up all day." Pete lowered his voice to an appreciative whisper, "Gosh, chief, I didn't know you had it in you. How long have you been in with that bunch? Is that girl as good-looking as they say she is?" "Now listen here, Brent. I don't know—" "Oh, it's all right, chief. You can trust me. I won't give you away." "There's nothing to give away, you fool!" Brian bellowed. "I don't know anything about any damn rebels. All I want is to get out of here—" "Gotcha, chief," Brent whispered understandingly. "I'll see if I can pass the word along." "Come here, you idiot!" Brian screamed after his erstwhile assistant. "Pipe down there, bud," a guard's voice cut in chillingly. Brian retired to his cell bunk and clutched his aching head in frustrated fury. For the nineteenth time Brian Hanson strode to the door of his cell and rattled the bars. "Listen here, guard, you've got to take a message to McHague. You can't hold me here indefinitely." "Shut up. Nobody ain't takin' no message to McHague. 
I don't care if you are—" Brian's eyes almost popped out as he saw a gloved hand reach around the guard's neck and jam a rag over his nose and mouth. Swift shadows moved expertly before his astonished gaze. Another guard was caught and silenced as he came around the end of the corridor. Someone was outside his cell door, a hooded figure which seemed, somehow, familiar. "Hello, pantless!" a voice breathed. He knew that voice! "What the devil are you doing here?" "Somebody by the name of Pete Brent tipped us off that you were in trouble because of me. But don't worry, we're going to get you out." "Damn that fool kid! Leave me alone. I don't want to get out of here that way!" he yelled wildly. "Guards! Help!" "Shut up! Do you want to get us shot?" "Sure I do. Guards! Guards!" Someone came running. "Guards are coming," a voice warned. He could hear the girl struggling with the lock. "Damn," she swore viciously. "This is the wrong key! Your goose is sure cooked now. Whether you like it or not, you'll hang with us when they find us trying to get you out of here." Brian felt as though something had kicked him in the stomach. She was right! He had to get out now. He wouldn't be able to explain this away. "Give me that key," he hissed and grabbed for it. He snapped two of the coigns off in the lock and went to work with the rest of the key. He had designed these escape-proof locks himself. In a few seconds the door swung open and they were fleeing silently down the jail corridor. The girl paused doubtfully at a crossing passage. "This way," he snarled and took the lead. He knew the ground plan of this jail perfectly. He had a moment of wonder at the crazy spectacle of himself, the fair-haired boy of Venus Consolidated, in his flapping bathrobe, leading a band of escaping rebels out of the company's best jail. They burst around a corner onto a startled guard. "They're just ahead of us," Brian yelled. "Come on!" 
"Right with you," the guard snapped and ran a few steps with them before a blackjack caught up with him and he folded into a corner. "Down this way, it's a short cut." Brian led the way to a heavily barred side door. The electric eye tripped a screaming alarm, but the broken key in Brian's hands opened the complicated lock in a matter of seconds. They were outside the jail on a side street, the door closed and the lock jammed immovably behind them. Sirens wailed. The alarm was out! The street suddenly burst into brilliance as the floodlights snapped on. Brian faltered to a stop and Crystal James pushed past him. "We've got reinforcements down here," she said, then skidded to a halt. Two guards barred the street ahead of them. Brian felt as though his stomach had fallen down around his ankles and was tying his feet up. He couldn't move. The door was jammed shut behind them, they'd have to surrender and there'd be no explaining this break. He started mentally cursing Pete Brent, when a projector beam slashed viciously by him. These guards weren't fooling! He heard a gasping grunt of pain as one of the rebels went down. They were shooting to kill. He saw a sudden, convulsive movement from the girl. A black object curved out against the lights. The sharp, ripping blast of an atomite bomb thundered along the street and slammed them to the ground. The glare left them blinded. He struggled to his feet. The guards had vanished, a shallow crater yawned in the road where they had been. "We've got to run!" the girl shouted. He started after her. Two surface transport vehicles waited around the corner. Brian and the rebels bundled into them and took away with a roar. The chase wasn't organized yet, and they soon lost themselves in the orderly rush of Venus City traffic. The two carloads of rebels cruised nonchalantly past the Administration Center and pulled into a private garage a little beyond. "What are we stopping here for?" Brian demanded. "We've got to get away." 
"That's just what we're doing," Crystal snapped. "Everybody out." The rebels piled out and the cars pulled away to become innocuous parts of the traffic stream. The rebels seemed to know where they were going and that gave them the edge on Brian. They followed Crystal down into the garage's repair pit. She fumbled in the darkness a moment, then a darker patch showed as a door swung open in the side of the pit. They filed into the solid blackness after her and the door thudded shut. The beam of a torch stabbed through the darkness and they clambered precariously down a steep, steel stairway. "Where the dickens are we?" Brian whispered hoarsely. "Oh, you don't have to whisper, we're safe enough here. This is one of the air shafts leading down to the old mines." "Old mines? What old mines?" "That's something you newcomers don't know anything about. This whole area was worked out long before Venus Consolidated came to the planet. These old tunnels run all under the city." They went five hundred feet down the air shaft before they reached a level tunnel. "What do we do? Hide here?" "I should say not. Serono Zeburzac, head of McHague's secret police will be after us now. We won't be safe anywhere near Venus City." "Don't be crazy. That Serono Zeburzac stuff is just a legend McHague keeps up to scare people with." "That's what you think," Crystal snapped. "McHague's legend got my father and he'll get all of us unless we run the whole company right off the planet." "Well, what the dickens does he look like?" Brian asked doubtfully. "I don't know, but his left hand is missing. Dad did some good shooting before he died," she said grimly. Brian was startled at the icy hardness of her voice. Two of the rebels pulled a screening tarpaulin aside and revealed one of the old-type ore cars that must have been used in the ancient mines. A brand-new atomic motor gleamed incongruously at one end. The rebels crowded into it and they went rumbling swiftly down the echoing passage. 
The lights of the car showed the old working, rotten and crumbling, fallen in in some places and signs of new work where the rebels had cleared away the debris of years. Brian struggled into a zippered overall suit as they followed a twisting, tortuous course for half an hour, switching from one tunnel to another repeatedly until he had lost all conception of direction. Crystal James, at the controls, seemed to know exactly where they were going. The tunnel emerged in a huge cavern that gloomed darkly away in every direction. The towering, massive remains of old machinery, eroded and rotten with age crouched like ancient, watching skeletons. "These were the old stamp mills," the girl said, and her voice seemed to be swallowed to a whisper in the vast, echoing darkness. Between two rows of sentinel ruins they came suddenly on two slim Venusian atmospheric ships. Dim light spilled over them from a ragged gash in the wall of the cavern. Brian followed Crystal into the smaller of the two ships and the rest of the rebels manned the other. "Wait a minute, how do we get out of here?" Brian demanded. "Through that hole up there," the girl said matter-of-factly. "You're crazy, you can't get through there." "Oh, yeah? Just watch this." The ship thundered to life beneath them and leaped off in a full-throttled take-off. "We're going to crash! That gap isn't wide enough!" The sides of the gap rushed in on the tips of the stubby wings. Brian braced himself for the crash, but it didn't come. At the last possible second, the ship rolled smoothly over. At the moment it flashed through the opening it was stood vertically on edge. Crystal held the ship in its roll and completed the maneuver outside the mountain while Brian struggled to get his internal economy back into some semblance of order. "That's some flying," he said as soon as he could speak. Crystal looked at him in surprise. "That's nothing. We Venusians fly almost as soon as we can walk." 
"Oh—I see," Brian said weakly and a few moments later he really did see. Two big, fast, green ships, carrying the insignia of the Venus Consolidated police, cruised suddenly out from a mountain air station. An aërial torpedo exploded in front of the rebel ship. Crystal's face set in grim lines as she pulled the ship up in a screaming climb. Brian got up off the floor. "You don't have to get excited like that," he complained. "They weren't trying to hit us." "That's what you think," Crystal muttered. "Those children don't play for peanuts." "But, girl, they're just Venus Consolidated police. They haven't got any authority to shoot anyone." "Authority doesn't make much difference to them," Crystal snapped bitterly. "They've been killing people all over the planet. What do you think this revolution is about?" "You must be mistak—" He slumped to the floor as Crystal threw the ship into a mad, rolling spin. A tremendous crash thundered close astern. "I guess that was a mistake!" Crystal yelled as she fought the controls. Brian almost got to his feet when another wild maneuver hurled him back to the floor. The police ship was right on their tail. The girl gunned her craft into a snap Immelmann and swept back on their pursuers, slicing in close over the ship. Brian's eyes bulged as he saw a long streak of paint and metal ripped off the wing of the police ship. He saw the crew battling their controls in startled terror. The ship slipped frantically away and fell into a spin. "That's them," Crystal said with satisfaction. "How are the others doing?" "Look! They're hit!" Brian felt sick. The slower rebel freight ship staggered drunkenly as a torpedo caught it and ripped away half a wing. It plunged down in flames with the white flowers of half a dozen parachutes blossoming around it. Brian watched in horror as the police ship came deliberately about. They heard its forward guns go into action. 
The bodies of the parachutists jerked and jumped like crazy marionettes as the bullets smashed into them. It was over in a few moments. The dead rebels drifted down into the mist-shrouded depths of the valley. "The dirty, murdering rats!" Brian's voice ripped out in a fury of outrage. "They didn't have a chance!" "Don't get excited," Crystal told him in a dead, flat voice. "That's just normal practice. If you'd stuck your nose out of your laboratory once in a while, you'd have heard of these things." "But why—" He ducked away instinctively as a flight of bullets spanged through the fuselage. "They're after us now!" Crystal's answer was to yank the ship into a rocketing climb. The police were watching for that. The big ship roared up after them. "Just follow along, suckers," Crystal invited grimly. She snapped the ship into a whip stall. For one nauseating moment they hung on nothing, then the ship fell over on its back and they screamed down in a terminal velocity dive, heading for the safety of the lower valley mists. The heavier police ship, with its higher wing-loading, could not match the maneuver. The rebel craft plunged down through the blinding fog. Half-seen, ghostly fingers of stone clutched up at them, talons of gray rock missed and fell away again as Crystal nursed the ship out of its dive. "Phew!" Brian gasped. "Well, we got away that time. How in thunder can you do it?" "Well, you don't do it on faith. Take a look at that fuel gauge! We may get as far as our headquarters—or we may not." For twenty long minutes they groped blindly through the fog, flying solely by instruments and dead reckoning. The needle of the fuel gauge flickered closer and closer to the danger point. They tore loose from the clinging fog as it swung firmly to "Empty." The drive sputtered and coughed and died. "That's figuring it nice and close," Crystal said in satisfaction. "We can glide in from here." "Into where?" Brian demanded. 
All he could see immediately ahead was the huge bulk of a mountain which blocked the entire width of the valley and soared sheer up to the high-cloud level. His eyes followed it up and up— "Look! Police ships. They've seen us!" "Maybe they haven't. Anyway, there's only one place we can land." The ship lunged straight for the mountain wall! "Are you crazy? Watch out—we'll crash!" "You leave the flying to me," Crystal snapped. She held the ship in its glide, aiming directly for the tangled foliage of the mountain face. Brian yelped and cowered instinctively back. The lush green of the mountainside swirled up to meet them. They ripped through the foliage—there was no crash. They burst through into a huge, brilliantly lighted cavern and settled to a perfect landing. Men came running. Crystal tumbled out of her ship. "Douse those lights," she shouted. "The police are outside." A tall, lean man with bulbous eyes and a face like a startled horse, rushed up to Crystal. "What do you mean by leading them here?" he yelled, waving his hands. "They jumped us when we had no fuel, and quit acting like an idiot." The man was shaking, his eyes looked wild. "They'll kill us. We've got to get out of here." "Wait, you fool. They may not even have seen us." But he was gone, running toward a group of ships lined up at the end of the cavern. "Who was that crazy coot and what is this place?" Brian demanded. "That was Gort Sterling, our leader," the girl said bitterly. "And this is our headquarters." One of the ships at the back of the cavern thundered to life, streaked across the floor and burst out through the opening Crystal's ship had left. "He hasn't got a chance! We'll be spotted for sure, now." The other rebels waited uncertainly, but not for long. There was the crescendoing roar of ships in a dive followed by the terrific crash of an explosion. "They got him!" Crystal's voice was a moan. "Oh, the fool, the fool!" "Sounded like more than one ship. They'll be after us, now. 
Is there any other way of getting out of this place?" "Not for ships. We'll have to walk and they'll follow us." "We've got to slow them down some way, then. I wonder how the devil they traced us? I thought we lost them in that fog." "It's that Serono Zeburzac, the traitor. He knows these mountains as well as we do." "How come?" "The Zeburzacs are one of the old families, but he sold out to McHague." "Well, what do we do now? Just stand here? It looks like everybody's leaving." "We might as well just wait," Crystal said hopelessly. "It won't do us any good to run out into the hills. Zeburzac and his men will follow." "We could slow them down some by swinging a couple of those ships around so their rocket exhausts sweep the entrance to the cavern," Brian suggested doubtfully. She looked at him steadily. "You sound like the only good rebel left. We can try it, anyway." They ran two ships out into the middle of the cavern, gunned them around and jockeyed them into position—not a moment too soon. Half a dozen police showed in brief silhouette as they slipped cautiously into the cavern, guns ready, expecting resistance. They met a dead silence. A score or more followed them without any attempt at concealment. Then Brian and Crystal cut loose with the drives of the two ships. Startled screams of agony burst from the crowded group of police as they were caught in the annihilating cross fire of roaring flame. They crisped and twisted, cooked to scorched horrors before they fell. A burst of thick, greasy smoke rushed out of the cavern. Two of the police, their clothes and flesh scorched and flaming, plunged as shrieking, living torches down the mountainside. Crystal was white and shaking, her face set in a mask of horror, as she climbed blindly from her ship. "Let's get away! I can smell them burning," she shuddered and covered her face with her hands. Brian grabbed her and shook her. "Snap out of it," he barked. "That's no worse than shooting helpless men in parachutes. 
We can't go, yet; we're not finished here." "Oh, let them shoot us! I can't go through that again!" "You don't have to. Wait here." He climbed back into one of the ships and cut the richness of the fuel mixture down till the exhaust was a lambent, shuddering stutter, verging on extinction. He dashed to the other ship and repeated the maneuver, fussing with the throttle till he had the fuel mixture adjusted to critical fineness. The beat of the stuttering exhaust seemed to catch up to the other and built to an aching pulsation. In a moment the whole mass of air in the cavern hit the frequency with a subtle, intangible thunder of vibration. Crystal screamed. "Brian! There's more police cutting in around the entrance." Brian clambered out of the ship and glanced at the glowing points in the rock where the police were cutting their way through outside the line of the exhaust flames. The pulsating thunder in the cavern crescendoed to an intolerable pitch. A huge mass of stalactites crashed to the floor. "It's time to check out," Brian shouted. Crystal led the way as they fled down the escape tunnel. The roaring crash of falling rock was a continuous, increasing avalanche of sound in the cavern behind them. They emerged from the tunnel on the face of the mountain, several hundred yards to the east of the cavern entrance. The ground shook and heaved beneath them. "The whole side of the mountain's sliding," Crystal screamed. "Run!" Brian shoved her and they plunged madly through the thick tangle of jungle away from the slide. Huge boulders leaped and smashed through the matted bush around them. Crystal went down as the ground slipped from under her. Brian grabbed her and a tree at the same time. The tree leaned and crashed down the slope, the whole jungle muttered and groaned and came to life as it joined the roaring rush of the slide. 
They were tumbled irresistibly downward, riding the edge of the slide for terrifying minutes till it stilled and left them bruised and shaken in a tangle of torn vegetation. The remains of two police ships, caught without warning in the rush as they attempted to land, stuck up grotesquely out of the foot of the slide. The dust was settling away. A flock of brilliant blue, gliding lizards barking in raucous terror, fled down the valley. Then they were gone and the primeval silence settled back into place. Brian and Crystal struggled painfully to solid ground. Crystal gazed with a feeling of awe at the devastated mountainside. "How did you do it?" "It's a matter of harmonics," Brian explained. "If you hit the right vibratory combination, you can shake anything down. But now that we've made a mess of the old homestead, what do we do?" "Walk," Crystal said laconically. She led the way as they started scrambling through the jungle up the mountainside. "Where are we heading for?" Brian grunted as he struggled along. "The headquarters of the Carlton family. They're the closest people we can depend on. They've kept out of the rebellion, but they're on our side. They've helped us before."
|
C. That the Venus Consolidated police weren’t honest.
|
Who wanted to mine Lovenbroy’s minerals?
A. Croanie
B. MUDDLE
C. Boge
D. Lovenbroy neighbors
|
CULTURAL EXCHANGE BY KEITH LAUMER It was a simple student exchange—but Retief gave them more of an education than they expected! [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, September 1962. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] I Second Secretary Magnan took his green-lined cape and orange-feathered beret from the clothes tree. "I'm off now, Retief," he said. "I hope you'll manage the administrative routine during my absence without any unfortunate incidents." "That seems a modest enough hope," Retief said. "I'll try to live up to it." "I don't appreciate frivolity with reference to this Division," Magnan said testily. "When I first came here, the Manpower Utilization Directorate, Division of Libraries and Education was a shambles. I fancy I've made MUDDLE what it is today. Frankly, I question the wisdom of placing you in charge of such a sensitive desk, even for two weeks. But remember. Yours is purely a rubber-stamp function." "In that case, let's leave it to Miss Furkle. I'll take a couple of weeks off myself. With her poundage, she could bring plenty of pressure to bear." "I assume you jest, Retief," Magnan said sadly. "I should expect even you to appreciate that Bogan participation in the Exchange Program may be the first step toward sublimation of their aggressions into more cultivated channels." "I see they're sending two thousand students to d'Land," Retief said, glancing at the Memo for Record. "That's a sizable sublimation." Magnan nodded. "The Bogans have launched no less than four military campaigns in the last two decades. They're known as the Hoodlums of the Nicodemean Cluster. Now, perhaps, we shall see them breaking that precedent and entering into the cultural life of the Galaxy." "Breaking and entering," Retief said. "You may have something there. But I'm wondering what they'll study on d'Land. 
That's an industrial world of the poor but honest variety." "Academic details are the affair of the students and their professors," Magnan said. "Our function is merely to bring them together. See that you don't antagonize the Bogan representative. This will be an excellent opportunity for you to practice your diplomatic restraint—not your strong point, I'm sure you'll agree." A buzzer sounded. Retief punched a button. "What is it, Miss Furkle?" "That—bucolic person from Lovenbroy is here again." On the small desk screen, Miss Furkle's meaty features were compressed in disapproval. "This fellow's a confounded pest. I'll leave him to you, Retief," Magnan said. "Tell him something. Get rid of him. And remember: here at Corps HQ, all eyes are upon you." "If I'd thought of that, I'd have worn my other suit," Retief said. Magnan snorted and passed from view. Retief punched Miss Furkle's button. "Send the bucolic person in." A tall broad man with bronze skin and gray hair, wearing tight trousers of heavy cloth, a loose shirt open at the neck and a short jacket, stepped into the room. He had a bundle under his arm. He paused at sight of Retief, looked him over momentarily, then advanced and held out his hand. Retief took it. For a moment the two big men stood, face to face. The newcomer's jaw muscles knotted. Then he winced. Retief dropped his hand and motioned to a chair. "That's nice knuckle work, mister," the stranger said, massaging his hand. "First time anybody ever did that to me. My fault though. I started it, I guess." He grinned and sat down. "What can I do for you?" Retief said. "You work for this Culture bunch, do you? Funny. I thought they were all ribbon-counter boys. Never mind. I'm Hank Arapoulous. I'm a farmer. What I wanted to see you about was—" He shifted in his chair. "Well, out on Lovenbroy we've got a serious problem. The wine crop is just about ready. We start picking in another two, three months. 
Now I don't know if you're familiar with the Bacchus vines we grow...?" "No," Retief said. "Have a cigar?" He pushed a box across the desk. Arapoulous took one. "Bacchus vines are an unusual crop," he said, puffing the cigar alight. "Only mature every twelve years. In between, the vines don't need a lot of attention, so our time's mostly our own. We like to farm, though. Spend a lot of time developing new forms. Apples the size of a melon—and sweet—" "Sounds very pleasant," Retief said. "Where does the Libraries and Education Division come in?" Arapoulous leaned forward. "We go in pretty heavy for the arts. Folks can't spend all their time hybridizing plants. We've turned all the land area we've got into parks and farms. Course, we left some sizable forest areas for hunting and such. Lovenbroy's a nice place, Mr. Retief." "It sounds like it, Mr. Arapoulous. Just what—" "Call me Hank. We've got long seasons back home. Five of 'em. Our year's about eighteen Terry months. Cold as hell in winter; eccentric orbit, you know. Blue-black sky, stars visible all day. We do mostly painting and sculpture in the winter. Then Spring; still plenty cold. Lots of skiing, bob-sledding, ice skating; and it's the season for woodworkers. Our furniture—" "I've seen some of your furniture," Retief said. "Beautiful work." Arapoulous nodded. "All local timbers too. Lots of metals in our soil and those sulphates give the woods some color, I'll tell you. Then comes the Monsoon. Rain—it comes down in sheets. But the sun's getting closer. Shines all the time. Ever seen it pouring rain in the sunshine? That's the music-writing season. Then summer. Summer's hot. We stay inside in the daytime and have beach parties all night. Lots of beach on Lovenbroy; we're mostly islands. That's the drama and symphony time. The theatres are set up on the sand, or anchored off-shore. You have the music and the surf and the bonfires and stars—we're close to the center of a globular cluster, you know...." 
"You say it's time now for the wine crop?" "That's right. Autumn's our harvest season. Most years we have just the ordinary crops. Fruit, grain, that kind of thing; getting it in doesn't take long. We spend most of the time on architecture, getting new places ready for the winter or remodeling the older ones. We spend a lot of time in our houses. We like to have them comfortable. But this year's different. This is Wine Year." Arapoulous puffed on his cigar, looked worriedly at Retief. "Our wine crop is our big money crop," he said. "We make enough to keep us going. But this year...." "The crop isn't panning out?" "Oh, the crop's fine. One of the best I can remember. Course, I'm only twenty-eight; I can't remember but two other harvests. The problem's not the crop." "Have you lost your markets? That sounds like a matter for the Commercial—" "Lost our markets? Mister, nobody that ever tasted our wines ever settled for anything else!" "It sounds like I've been missing something," said Retief. "I'll have to try them some time." Arapoulous put his bundle on the desk, pulled off the wrappings. "No time like the present," he said. Retief looked at the two squat bottles, one green, one amber, both dusty, with faded labels, and blackened corks secured by wire. "Drinking on duty is frowned on in the Corps, Mr. Arapoulous," he said. "This isn't drinking . It's just wine." Arapoulous pulled the wire retainer loose, thumbed the cork. It rose slowly, then popped in the air. Arapoulous caught it. Aromatic fumes wafted from the bottle. "Besides, my feelings would be hurt if you didn't join me." He winked. Retief took two thin-walled glasses from a table beside the desk. "Come to think of it, we also have to be careful about violating quaint native customs." Arapoulous filled the glasses. Retief picked one up, sniffed the deep rust-colored fluid, tasted it, then took a healthy swallow. He looked at Arapoulous thoughtfully. "Hmmm. 
It tastes like salted pecans, with an undercurrent of crusted port." "Don't try to describe it, Mr. Retief," Arapoulous said. He took a mouthful of wine, swished it around his teeth, swallowed. "It's Bacchus wine, that's all. Nothing like it in the Galaxy." He pushed the second bottle toward Retief. "The custom back home is to alternate red wine and black." Retief put aside his cigar, pulled the wires loose, nudged the cork, caught it as it popped up. "Bad luck if you miss the cork," Arapoulous said, nodding. "You probably never heard about the trouble we had on Lovenbroy a few years back?" "Can't say that I did, Hank." Retief poured the black wine into two fresh glasses. "Here's to the harvest." "We've got plenty of minerals on Lovenbroy," Arapoulous said, swallowing wine. "But we don't plan to wreck the landscape mining 'em. We like to farm. About ten years back some neighbors of ours landed a force. They figured they knew better what to do with our minerals than we did. Wanted to strip-mine, smelt ore. We convinced 'em otherwise. But it took a year, and we lost a lot of men." "That's too bad," Retief said. "I'd say this one tastes more like roast beef and popcorn over a Riesling base." "It put us in a bad spot," Arapoulous went on. "We had to borrow money from a world called Croanie. Mortgaged our crops. Had to start exporting art work too. Plenty of buyers, but it's not the same when you're doing it for strangers." "Say, this business of alternating drinks is the real McCoy," Retief said. "What's the problem? Croanie about to foreclose?" "Well, the loan's due. The wine crop would put us in the clear. But we need harvest hands. Picking Bacchus grapes isn't a job you can turn over to machinery—and anyway we wouldn't if we could. Vintage season is the high point of living on Lovenbroy. Everybody joins in. First, there's the picking in the fields. Miles and miles of vineyards covering the mountain sides, and crowding the river banks, with gardens here and there. 
Big vines, eight feet high, loaded with fruit, and deep grass growing between. The wine-carriers keep on the run, bringing wine to the pickers. There's prizes for the biggest day's output, bets on who can fill the most baskets in an hour.... The sun's high and bright, and it's just cool enough to give you plenty of energy. Come nightfall, the tables are set up in the garden plots, and the feast is laid on: roast turkeys, beef, hams, all kinds of fowl. Big salads. Plenty of fruit. Fresh-baked bread ... and wine, plenty of wine. The cooking's done by a different crew each night in each garden, and there's prizes for the best crews. "Then the wine-making. We still tramp out the vintage. That's mostly for the young folks but anybody's welcome. That's when things start to get loosened up. Matter of fact, pretty near half our young-uns are born after a vintage. All bets are off then. It keeps a fellow on his toes though. Ever tried to hold onto a gal wearing nothing but a layer of grape juice?" "Never did," Retief said. "You say most of the children are born after a vintage. That would make them only twelve years old by the time—" "Oh, that's Lovenbroy years; they'd be eighteen, Terry reckoning." "I was thinking you looked a little mature for twenty-eight," Retief said. "Forty-two, Terry years," Arapoulous said. "But this year it looks bad. We've got a bumper crop—and we're short-handed. If we don't get a big vintage, Croanie steps in. Lord knows what they'll do to the land. Then next vintage time, with them holding half our grape acreage—" "You hocked the vineyards?" "Yep. Pretty dumb, huh? But we figured twelve years was a long time." "On the whole," Retief said, "I think I prefer the black. But the red is hard to beat...." "What we figured was, maybe you Culture boys could help us out. A loan to see us through the vintage, enough to hire extra hands. Then we'd repay it in sculpture, painting, furniture—" "Sorry, Hank. 
All we do here is work out itineraries for traveling side-shows, that kind of thing. Now, if you needed a troop of Groaci nose-flute players—" "Can they pick grapes?" "Nope. Anyway, they can't stand the daylight. Have you talked this over with the Labor Office?" "Sure did. They said they'd fix us up with all the electronics specialists and computer programmers we wanted—but no field hands. Said it was what they classified as menial drudgery; you'd have thought I was trying to buy slaves." The buzzer sounded. Miss Furkle's features appeared on the desk screen. "You're due at the Intergroup Council in five minutes," she said. "Then afterwards, there are the Bogan students to meet." "Thanks." Retief finished his glass, stood. "I have to run, Hank," he said. "Let me think this over. Maybe I can come up with something. Check with me day after tomorrow. And you'd better leave the bottles here. Cultural exhibits, you know." II As the council meeting broke up, Retief caught the eye of a colleague across the table. "Mr. Whaffle, you mentioned a shipment going to a place called Croanie. What are they getting?" Whaffle blinked. "You're the fellow who's filling in for Magnan, over at MUDDLE," he said. "Properly speaking, equipment grants are the sole concern of the Motorized Equipment Depot, Division of Loans and Exchanges." He pursed his lips. "However, I suppose there's no harm in telling you. They'll be receiving heavy mining equipment." "Drill rigs, that sort of thing?" "Strip mining gear." Whaffle took a slip of paper from a breast pocket, blinked at it. "Bolo Model WV/1 tractors, to be specific. Why is MUDDLE interested in MEDDLE's activities?" "Forgive my curiosity, Mr. Whaffle. It's just that Croanie cropped up earlier today. It seems she holds a mortgage on some vineyards over on—" "That's not MEDDLE's affair, sir," Whaffle cut in. "I have sufficient problems as Chief of MEDDLE without probing into MUDDLE'S business." 
"Speaking of tractors," another man put in, "we over at the Special Committee for Rehabilitation and Overhaul of Under-developed Nations' General Economies have been trying for months to get a request for mining equipment for d'Land through MEDDLE—" "SCROUNGE was late on the scene," Whaffle said. "First come, first served. That's our policy at MEDDLE. Good day, gentlemen." He strode off, briefcase under his arm. "That's the trouble with peaceful worlds," the SCROUNGE committeeman said. "Boge is a troublemaker, so every agency in the Corps is out to pacify her. While my chance to make a record—that is, assist peace-loving d'Land—comes to naught." He shook his head. "What kind of university do they have on d'Land?" asked Retief. "We're sending them two thousand exchange students. It must be quite an institution." "University? D'Land has one under-endowed technical college." "Will all the exchange students be studying at the Technical College?" "Two thousand students? Hah! Two hundred students would overtax the facilities of the college." "I wonder if the Bogans know that?" "The Bogans? Why, most of d'Land's difficulties are due to the unwise trade agreement she entered into with Boge. Two thousand students indeed!" He snorted and walked away. Retief stopped by the office to pick up a short cape, then rode the elevator to the roof of the 230-story Corps HQ building and hailed a cab to the port. The Bogan students had arrived early. Retief saw them lined up on the ramp waiting to go through customs. It would be half an hour before they were cleared through. He turned into the bar and ordered a beer. A tall young fellow on the next stool raised his glass. "Happy days," he said. "And nights to match." "You said it." He gulped half his beer. "My name's Karsh. Mr. Karsh. Yep, Mr. Karsh. Boy, this is a drag, sitting around this place waiting...." "You meeting somebody?" "Yeah. Bunch of babies. Kids. How they expect—Never mind. Have one on me." "Thanks. You a Scoutmaster?" 
"I'll tell you what I am. I'm a cradle-robber. You know—" he turned to Retief—"not one of those kids is over eighteen." He hiccupped. "Students, you know. Never saw a student with a beard, did you?" "Lots of times. You're meeting the students, are you?" The young fellow blinked at Retief. "Oh, you know about it, huh?" "I represent MUDDLE." Karsh finished his beer, ordered another. "I came on ahead. Sort of an advance guard for the kids. I trained 'em myself. Treated it like a game, but they can handle a CSU. Don't know how they'll act under pressure. If I had my old platoon—" He looked at his beer glass, pushed it back. "Had enough," he said. "So long, friend. Or are you coming along?" Retief nodded. "Might as well." At the exit to the Customs enclosure, Retief watched as the first of the Bogan students came through, caught sight of Karsh and snapped to attention, his chest out. "Drop that, mister," Karsh snapped. "Is that any way for a student to act?" The youth, a round-faced lad with broad shoulders, grinned. "Heck, no," he said. "Say, uh, Mr. Karsh, are we gonna get to go to town? We fellas were thinking—" "You were, hah? You act like a bunch of school kids! I mean ... no! Now line up!" "We have quarters ready for the students," Retief said. "If you'd like to bring them around to the west side, I have a couple of copters laid on." "Thanks," said Karsh. "They'll stay here until take-off time. Can't have the little dears wandering around loose. Might get ideas about going over the hill." He hiccupped. "I mean they might play hookey." "We've scheduled your re-embarkation for noon tomorrow. That's a long wait. MUDDLE's arranged theater tickets and a dinner." "Sorry," Karsh said. "As soon as the baggage gets here, we're off." He hiccupped again. "Can't travel without our baggage, y'know." "Suit yourself," Retief said. "Where's the baggage now?" "Coming in aboard a Croanie lighter." "Maybe you'd like to arrange for a meal for the students here." "Sure," Karsh said. 
"That's a good idea. Why don't you join us?" Karsh winked. "And bring a few beers." "Not this time," Retief said. He watched the students, still emerging from Customs. "They seem to be all boys," he commented. "No female students?" "Maybe later," Karsh said. "You know, after we see how the first bunch is received." Back at the MUDDLE office, Retief buzzed Miss Furkle. "Do you know the name of the institution these Bogan students are bound for?" "Why, the University at d'Land, of course." "Would that be the Technical College?" Miss Furkle's mouth puckered. "I'm sure I've never pried into these details." "Where does doing your job stop and prying begin, Miss Furkle?" Retief said. "Personally, I'm curious as to just what it is these students are travelling so far to study—at Corps expense." "Mr. Magnan never—" "For the present. Miss Furkle, Mr. Magnan is vacationing. That leaves me with the question of two thousand young male students headed for a world with no classrooms for them ... a world in need of tractors. But the tractors are on their way to Croanie, a world under obligation to Boge. And Croanie holds a mortgage on the best grape acreage on Lovenbroy." "Well!" Miss Furkle snapped, small eyes glaring under unplucked brows. "I hope you're not questioning Mr. Magnan's wisdom!" "About Mr. Magnan's wisdom there can be no question," Retief said. "But never mind. I'd like you to look up an item for me. How many tractors will Croanie be getting under the MEDDLE program?" "Why, that's entirely MEDDLE business," Miss Furkle said. "Mr. Magnan always—" "I'm sure he did. Let me know about the tractors as soon as you can." Miss Furkle sniffed and disappeared from the screen. Retief left the office, descended forty-one stories, followed a corridor to the Corps Library. In the stacks he thumbed through catalogues, pored over indices. "Can I help you?" someone chirped. A tiny librarian stood at his elbow. "Thank you, ma'am," Retief said. 
"I'm looking for information on a mining rig. A Bolo model WV tractor." "You won't find it in the industrial section," the librarian said. "Come along." Retief followed her along the stacks to a well-lit section lettered ARMAMENTS. She took a tape from the shelf, plugged it into the viewer, flipped through and stopped at a squat armored vehicle. "That's the model WV," she said. "It's what is known as a continental siege unit. It carries four men, with a half-megaton/second firepower." "There must be an error somewhere," Retief said. "The Bolo model I want is a tractor. Model WV M-1—" "Oh, the modification was the addition of a bulldozer blade for demolition work. That must be what confused you." "Probably—among other things. Thank you." Miss Furkle was waiting at the office. "I have the information you wanted," she said. "I've had it for over ten minutes. I was under the impression you needed it urgently, and I went to great lengths—" "Sure," Retief said. "Shoot. How many tractors?" "Five hundred." "Are you sure?" Miss Furkle's chins quivered. "Well! If you feel I'm incompetent—" "Just questioning the possibility of a mistake, Miss Furkle. Five hundred tractors is a lot of equipment." "Was there anything further?" Miss Furkle inquired frigidly. "I sincerely hope not," Retief said. III Leaning back in Magnan's padded chair with power swivel and hip-u-matic concontour, Retief leafed through a folder labelled "CERP 7-602-Ba; CROANIE (general)." He paused at a page headed Industry. Still reading, he opened the desk drawer, took out the two bottles of Bacchus wine and two glasses. He poured an inch of wine into each and sipped the black wine meditatively. It would be a pity, he reflected, if anything should interfere with the production of such vintages.... Half an hour later he laid the folder aside, keyed the phone and put through a call to the Croanie Legation. He asked for the Commercial Attache. "Retief here, Corps HQ," he said airily. 
"About the MEDDLE shipment, the tractors. I'm wondering if there's been a slip up. My records show we're shipping five hundred units...." "That's correct. Five hundred." Retief waited. "Ah ... are you there, Retief?" "I'm still here. And I'm still wondering about the five hundred tractors." "It's perfectly in order. I thought it was all settled. Mr. Whaffle—" "One unit would require a good-sized plant to handle its output," Retief said. "Now Croanie subsists on her fisheries. She has perhaps half a dozen pint-sized processing plants. Maybe, in a bind, they could handle the ore ten WV's could scrape up ... if Croanie had any ore. It doesn't. By the way, isn't a WV a poor choice as a mining outfit? I should think—" "See here, Retief! Why all this interest in a few surplus tractors? And in any event, what business is it of yours how we plan to use the equipment? That's an internal affair of my government. Mr. Whaffle—" "I'm not Mr. Whaffle. What are you going to do with the other four hundred and ninety tractors?" "I understood the grant was to be with no strings attached!" "I know it's bad manners to ask questions. It's an old diplomatic tradition that any time you can get anybody to accept anything as a gift, you've scored points in the game. But if Croanie has some scheme cooking—" "Nothing like that, Retief. It's a mere business transaction." "What kind of business do you do with a Bolo WV? With or without a blade attached, it's what's known as a continental siege unit." "Great Heavens, Retief! Don't jump to conclusions! Would you have us branded as warmongers? Frankly—is this a closed line?" "Certainly. You may speak freely." "The tractors are for transshipment. We've gotten ourselves into a difficult situation, balance-of-payments-wise. This is an accommodation to a group with which we have rather strong business ties." "I understand you hold a mortgage on the best land on Lovenbroy," Retief said. "Any connection?" "Why ... ah ... no. Of course not, ha ha." 
"Who gets the tractors eventually?" "Retief, this is unwarranted interference!" "Who gets them?" "They happen to be going to Lovenbroy. But I scarcely see—" "And who's the friend you're helping out with an unauthorized transshipment of grant material?" "Why ... ah ... I've been working with a Mr. Gulver, a Bogan representative." "And when will they be shipped?" "Why, they went out a week ago. They'll be half way there by now. But look here, Retief, this isn't what you're thinking!" "How do you know what I'm thinking? I don't know myself." Retief rang off, buzzed the secretary. "Miss Furkle, I'd like to be notified immediately of any new applications that might come in from the Bogan Consulate for placement of students." "Well, it happens, by coincidence, that I have an application here now. Mr. Gulver of the Consulate brought it in." "Is Mr. Gulver in the office? I'd like to see him." "I'll ask him if he has time." "Great. Thanks." It was half a minute before a thick-necked red-faced man in a tight hat walked in. He wore an old-fashioned suit, a drab shirt, shiny shoes with round toes and an ill-tempered expression. "What is it you wish?" he barked. "I understood in my discussions with the other ... ah ... civilian there'd be no further need for these irritating conferences." "I've just learned you're placing more students abroad, Mr. Gulver. How many this time?" "Two thousand." "And where will they be going?" "Croanie. It's all in the application form I've handed in. Your job is to provide transportation." "Will there be any other students embarking this season?" "Why ... perhaps. That's Boge's business." Gulver looked at Retief with pursed lips. "As a matter of fact, we had in mind dispatching another two thousand to Featherweight." "Another under-populated world—and in the same cluster, I believe," Retief said. "Your people must be unusually interested in that region of space." "If that's all you wanted to know, I'll be on my way. 
I have matters of importance to see to." After Gulver left, Retief called Miss Furkle in. "I'd like to have a break-out of all the student movements that have been planned under the present program," he said. "And see if you can get a summary of what MEDDLE has been shipping lately." Miss Furkle compressed her lips. "If Mr. Magnan were here, I'm sure he wouldn't dream of interfering in the work of other departments. I ... overheard your conversation with the gentleman from the Croanie Legation—" "The lists, Miss Furkle." "I'm not accustomed," Miss Furkle said, "to intruding in matters outside our interest cluster." "That's worse than listening in on phone conversations, eh? But never mind. I need the information, Miss Furkle." "Loyalty to my Chief—" "Loyalty to your pay-check should send you scuttling for the material I've asked for," Retief said. "I'm taking full responsibility. Now scat." The buzzer sounded. Retief flipped a key. "MUDDLE, Retief speaking...." Arapoulous's brown face appeared on the desk screen. "How-do, Retief. Okay if I come up?" "Sure, Hank. I want to talk to you." In the office, Arapoulous took a chair. "Sorry if I'm rushing you, Retief," he said. "But have you got anything for me?" Retief waved at the wine bottles. "What do you know about Croanie?" "Croanie? Not much of a place. Mostly ocean. All right if you like fish, I guess. We import our seafood from there. Nice prawns in monsoon time. Over a foot long." "You on good terms with them?" "Sure, I guess so. Course, they're pretty thick with Boge." "So?" "Didn't I tell you? Boge was the bunch that tried to take us over here a dozen years back. They'd've made it too, if they hadn't had a lot of bad luck. Their armor went in the drink, and without armor they're easy game." Miss Furkle buzzed. "I have your lists," she said shortly. "Bring them in, please." The secretary placed the papers on the desk. Arapoulous caught her eye and grinned. She sniffed and marched from the room. 
"What that gal needs is a slippery time in the grape mash," Arapoulous observed. Retief thumbed through the papers, pausing to read from time to time. He finished and looked at Arapoulous. "How many men do you need for the harvest, Hank?" Retief inquired. Arapoulous sniffed his wine glass and looked thoughtful. "A hundred would help," he said. "A thousand would be better. Cheers." "What would you say to two thousand?" "Two thousand? Retief, you're not fooling?" "I hope not." He picked up the phone, called the Port Authority, asked for the dispatch clerk. "Hello, Jim. Say, I have a favor to ask of you. You know that contingent of Bogan students. They're traveling aboard the two CDT transports. I'm interested in the baggage that goes with the students. Has it arrived yet? Okay, I'll wait." Jim came back to the phone. "Yeah, Retief, it's here. Just arrived. But there's a funny thing. It's not consigned to d'Land. It's ticketed clear through to Lovenbroy." "Listen, Jim," Retief said. "I want you to go over to the warehouse and take a look at that baggage for me." Retief waited while the dispatch clerk carried out the errand. The level in the two bottles had gone down an inch when Jim returned to the phone. "Hey, I took a look at that baggage, Retief. Something funny going on. Guns. 2mm needlers, Mark XII hand blasters, power pistols—" "It's okay, Jim. Nothing to worry about. Just a mix-up. Now, Jim, I'm going to ask you to do something more for me. I'm covering for a friend. It seems he slipped up. I wouldn't want word to get out, you understand. I'll send along a written change order in the morning that will cover you officially. Meanwhile, here's what I want you to do...." Retief gave instructions, then rang off and turned to Arapoulous. "As soon as I get off a couple of TWX's, I think we'd better get down to the port, Hank. I think I'd like to see the students off personally."
### Introduction
Inspired by textual entailment BIBREF0, Xie BIBREF1 introduced the visual-textual entailment (VTE) task, which considers semantic entailment between a premise image and a textual hypothesis. Semantic entailment consists in determining if the hypothesis can be concluded from the premise, and assigning to each pair of (premise image, textual hypothesis) a label among entailment, neutral, and contradiction. In Figure FIGREF3, the label for the first image-sentence pair is entailment, because the hypothesis states that "a bunch of people display different flags", which can clearly be derived from the image. On the contrary, the second image-sentence pair is labelled as contradiction, because the hypothesis stating that "people [are] running a marathon" contradicts the image with static people.

Xie also proposes the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). As a result, the neutral class in SNLI-VE has substantial labelling errors: Vu BIBREF3 estimated ${\sim }31\%$ errors in this class, and ${\sim }1\%$ for the contradiction and entailment classes. Xie BIBREF1 introduced the VTE task under the name of "visual entailment", which could imply recognizing entailment between images only. This paper prefers to follow Suzuki BIBREF4 and call it "visual-textual entailment" instead, as it involves reasoning on image-sentence pairs.

In this work, we first focus on decreasing the error in the neutral class by collecting new labels for the neutral pairs in the validation and test sets of SNLI-VE, using Amazon Mechanical Turk (MTurk).
To ensure high quality annotations, we used a series of quality control measures, such as in-browser checks, inserting trusted examples, and collecting three annotations per instance. Secondly, we re-evaluate current image-text understanding systems, such as the bottom-up top-down attention network (BUTD) BIBREF5, on VTE using our corrected dataset, which we call SNLI-VE-2.0. Thirdly, we introduce the e-SNLI-VE-2.0 corpus, which we form by appending human-written natural language explanations to SNLI-VE-2.0. These explanations were collected in e-SNLI BIBREF6 to support textual entailment for SNLI. For the same reasons as above, we re-annotate the explanations for the neutral pairs in the validation and test sets, while keeping the explanations from e-SNLI for all the rest. Finally, we extend a current VTE model with the capacity of learning from these explanations at training time and outputting an explanation for each predicted label at testing time.

### SNLI-VE-2.0
The goal of VTE is to determine if a textual hypothesis $H_{text}$ can be concluded, given the information in a premise image $P_{image}$ BIBREF1. There are three possible labels:

- Entailment: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is true.
- Contradiction: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is false.
- Neutral: if neither of the earlier two are true.

The SNLI-VE dataset proposed by Xie BIBREF1 is the combination of Flickr30k, a popular image dataset for image captioning BIBREF2, and SNLI, an influential dataset for natural language inference BIBREF0. Textual premises from SNLI are replaced with images from Flickr30k, which is possible, as these premises were originally collected as captions of these images (see Figure FIGREF3).

However, in practice, a considerable proportion of labels are wrong due to the additional information contained in images. This mostly affects neutral pairs, since images may contain the necessary information to ground a hypothesis for which a simple premise caption was not sufficient. An example is shown in Figure FIGREF3. Vu BIBREF3 report that the label is wrong for ${\sim }31\%$ of neutral examples, based on a random subset of 171 neutral points from the test set. We also annotated 150 random neutral examples from the test set and found a similar percentage of 30.6% errors. Our annotations are available at https://github.com/virginie-do/e-SNLI-VE/tree/master/annotations/gt_labels.csv

### SNLI-VE-2.0 ::: Re-annotation details
In this work, we only collect new labels for the neutral pairs in the validation and test sets of SNLI-VE. While the procedure of re-annotation is generic, we limit our re-annotation to these splits as a first step to verify the difference in performance that current models have when evaluated on the corrected test set, as well as the effect of model selection on the corrected validation set. We leave re-annotation of the training set for future work, which would likely lead to training better VTE models. We also chose not to re-annotate the entailment and contradiction classes, as their error rates are much lower ($<$1%, as reported by Vu BIBREF3).

The main question that we want our dataset to answer is: "What is the relationship between the image premise and the sentence hypothesis?". We provide workers with the definitions of entailment, neutral, and contradiction for image-sentence pairs and one example for each label. As shown in Figure FIGREF8, for each image-sentence pair, workers are required to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using at least half of the words that they highlighted. The collected explanations will be presented in more detail in Section SECREF20, as we focus here on the label correction. We point out that requiring an explanation at the same time as a label likely has a positive effect on the correctness of the label, since having to justify the picked label in writing may make workers pay increased attention. Moreover, we implemented additional quality control measures for crowdsourced annotations, such as (a) collecting three annotations for every input, (b) injecting trusted annotations into the task for verification BIBREF7, and (c) restricting to workers with at least 90% previous approval rate.

First, we noticed that some instances in SNLI-VE are ambiguous.
We show some examples in Figure FIGREF3 and in Appendix SECREF43. In order to have a better sense of this ambiguity, three authors of this paper independently annotated 100 random examples. All three authors agreed on 54% of the examples, exactly two authors agreed on 45%, and there was only one example on which all three authors disagreed. We identified the following three major sources of ambiguity:

- mapping an emotion in the hypothesis to a facial expression in the image premise, e.g., "people enjoy talking", "angry people", "sad woman". Even when the face is seen, it may be subjective to infer an emotion from a static image (see Figure FIGREF44 in Appendix SECREF43).
- personal taste, e.g., "the sign is ugly".
- lack of consensus on terms such as "many people" or "crowded".

To account for the ambiguity that the neutral labels seem to present, we considered that an image-sentence pair is too ambiguous and not suitable for a well-defined visual-textual entailment task when three different labels were assigned by the three workers. Hence, we removed these examples from the validation (5.2%) and test (5.5%) sets.

To ensure that our workers are correctly performing the task, we randomly inserted trusted pairs, i.e., pairs among the 54% on which all three authors agreed on the label. For each set of 10 pairs presented to a worker, one trusted pair was introduced at a random location, so that the worker, while being told that there is such a test pair, cannot figure out which one it is. Via an in-browser check, we only allow workers to submit their answers for each set of 10 instances if the trusted pair was correctly labelled. Other in-browser checks were done for the collection of explanations, as we will describe in Section SECREF20. More details about the participants and design of the Mechanical Turk task can be found in Appendix SECREF41.
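The aggregation and quality-control scheme described above (majority vote over three worker labels, discarding three-way disagreements, and inserting one trusted pair at a random position in each set of 10) can be sketched as follows. This is a schematic illustration, not the paper's actual codebase; the function names are ours:

```python
import random
from collections import Counter

def aggregate_label(annotations):
    """Majority vote over three worker labels. Returns None when all
    three disagree, i.e., the pair is dropped as too ambiguous for a
    well-defined visual-textual entailment task."""
    label, count = Counter(annotations).most_common(1)[0]
    return label if count >= 2 else None

def build_task_sets(pairs, trusted_pool, set_size=10, seed=0):
    """Group instances into sets of `set_size`, filling one slot at a
    random position with a trusted (gold-labelled) pair so the worker
    cannot tell which item is the check."""
    rng = random.Random(seed)
    sets = []
    step = set_size - 1  # each set holds (set_size - 1) real pairs + 1 trusted
    for i in range(0, len(pairs), step):
        chunk = list(pairs[i:i + step])
        chunk.insert(rng.randrange(len(chunk) + 1), rng.choice(trusted_pool))
        sets.append(chunk)
    return sets
```

A submission would then be accepted only if the trusted pair in the worker's set was labelled correctly, mirroring the in-browser check.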
After collecting new labels for the neutral instances in the validation and testing sets, we randomly select and annotate 150 instances from the validation set that were neutral in SNLI-VE. Based on this sample, the error rate went down from 31% to 12% in SNLI-VE-2.0. Looking at the 18 instances where we disagreed with the label assigned by MTurk workers, we noticed that 12 were due to ambiguity in the examples, and 6 were due to workers' errors. Further investigation into potentially eliminating ambiguous instances would likely be beneficial. However, we leave it as future work, and we proceed in this work with using our corrected labels, since our error rate is significantly lower than that of the original SNLI-VE.

Finally, we note that only about 62% of the originally neutral pairs remain neutral, while 21% become contradiction and 17% entailment pairs. Therefore, we are now facing an imbalance between the neutral, entailment, and contradiction instances in the validation and testing sets of SNLI-VE-2.0. The neutral class becomes underrepresented, and the label distributions in the corrected validation and testing sets both become E / N / C: 39% / 20% / 41%. To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class.

### SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment
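The balanced accuracy used for this re-evaluation (defined in the previous section as the average of the per-class accuracies) is straightforward to compute; a minimal sketch:

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Average of per-class accuracies, so each of E / N / C counts
    equally despite the class imbalance in the corrected splits."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)
```

On a toy run with gold labels `["E", "E", "N", "C", "C", "C"]` and predictions `["E", "N", "N", "C", "C", "E"]`, the per-class accuracies are 1/2, 1/1, and 2/3, giving a balanced accuracy of about 0.722, whereas plain accuracy would be 4/6.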
Since we decreased the error rate of labels in the validation and test set, we are interested in the performance of a VTE model when using the corrected sets.

### SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Model.
To tackle SNLI-VE, Xie BIBREF1 used EVE (for "Explainable Visual Entailment"), a modified version of the BUTD architecture, the winner of the Visual Question Answering (VQA) challenge in 2017 BIBREF5. Since the EVE implementation is not available at the time of this work, we used the original BUTD architecture, with the same hyperparameters as reported in BIBREF1.

BUTD contains an image processing module and a text processing module. The image processing module encodes each image region proposed by FasterRCNN BIBREF8 into a feature vector using a bottom-up attention mechanism. In the text processing module, the text hypothesis is encoded into a fixed-length vector, which is the last output of a recurrent neural network with 512 GRU units BIBREF9. To input each token into the recurrent network, we use the pretrained GloVe vectors BIBREF10. Finally, a top-down attention mechanism is used between the hypothesis vector and each of the image region vectors to obtain an attention weight for each region. The weighted sum of these image region vectors is then fused with the text hypothesis vector. The multimodal fusion is fed to a multilayer perceptron (MLP) with tanh activations and a final softmax layer to classify the image-sentence relation as entailment, contradiction, or neutral. (We use the implementation from https://github.com/claudiogreco/coling18-gte.)

We use the original training set from SNLI-VE. To see the impact of correcting the validation and test sets, we do the following three experiments:

1. model selection as well as testing are done on the original uncorrected SNLI-VE.
2. model selection is done on the uncorrected SNLI-VE validation set, while testing is done on the corrected SNLI-VE-2.0 test set.
3. model selection as well as testing are done on the corrected SNLI-VE-2.0.

Models are trained with cross-entropy loss optimized by the Adam optimizer BIBREF11 with batch size 64.
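The top-down attention-and-fusion step described above can be sketched at the shape level. This is a schematic numpy illustration of the data flow only, not the trained BUTD network (whose region scoring uses learned gated layers); `w_att` is a hypothetical stand-in for those learned parameters:

```python
import numpy as np

def top_down_fusion(regions, hyp, w_att):
    """regions: (k, d) bottom-up image-region features.
    hyp: (d,) fixed-length hypothesis encoding.
    w_att: (d, d) hypothetical scoring matrix standing in for the
    learned attention layers. Returns the fused vector and the
    attention weight assigned to each region."""
    scores = regions @ (w_att @ hyp)        # one attention logit per region
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over the k regions
    attended = weights @ regions            # attention-weighted sum of regions
    return attended * hyp, weights          # elementwise multimodal fusion
```

In the full model, the fused vector would then pass through the tanh MLP and softmax to produce the three-way entailment / neutral / contradiction prediction.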
The maximum number of training epochs is set to 100, with early stopping when no improvement is observed on validation accuracy for 3 epochs. The final model checkpoint selected for testing is the one with the highest validation accuracy.

### SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Results.
The results of the three experiments enumerated above are reported in Table TABREF18. Surprisingly, we obtained an accuracy of 73.02% on SNLI-VE using BUTD, which is better than the 71.16% reported by Xie BIBREF1 for the EVE system, which was meant to be an improvement over BUTD. It is also better than their reproduction of BUTD, which gave 68.90%.

The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness. When we run the training loop again, this time doing the model selection on the corrected validation set from SNLI-VE-2.0, we obtain a slightly worse performance of 72.52%, although the difference is not clearly significant. Finally, we recall that the training set has not been re-annotated, and hence approximately 31% of image-sentence pairs are wrongly labelled as neutral, which likely affects the performance of the model.

### Visual-Textual Entailment with Natural Language Explanations
In this work, we also introduce e-SNLI-VE-2.0, a dataset combining SNLI-VE-2.0 with human-written explanations from e-SNLI BIBREF6, which were originally collected to support textual entailment. We replace the explanations for the neutral pairs in the validation and test sets with new ones collected at the same time as the new labels. We extend a current VTE model with an explanation module able to learn from these explanations at training time and generate an explanation for each predicted label at testing time. ### Visual-Textual Entailment with Natural Language Explanations ::: e-SNLI-VE-2.0
e-SNLI BIBREF6 is an extension of the SNLI corpus with human-annotated natural language explanations for the ground-truth labels. The authors use the explanations to train models to also generate natural language justifications for their predictions. They collected one explanation for each instance in the training set of SNLI and three explanations for each instance in the validation and testing sets. We randomly selected 100 image-sentence pairs in the validation set of SNLI-VE and their corresponding explanations in e-SNLI and examined how relevant these explanations are for the VTE task. More precisely, we say that an explanation is relevant if it brings information that justifies the relationship between the image and the sentence. We restricted the count to correctly labelled inputs and found that 57% of the explanations were relevant. For example, the explanation for entailment in Figure FIGREF21 (“Cooking in his apartment is cooking”) was counted as irrelevant in our statistics, because it would not be the best explanation for an image-sentence pair, even though it is coherent with the textual pair. We investigate whether these explanations improve a VTE model when enhanced with a component that can process explanations at train time and output them at test time. To form e-SNLI-VE-2.0, we append to SNLI-VE-2.0 the explanations from e-SNLI for all except the neutral pairs in the validation and test sets of SNLI-VE, which we replace with newly crowdsourced explanations collected at the same time as the labels for these splits (see Figure FIGREF21). Statistics of e-SNLI-VE-2.0 are shown in Appendix SECREF39, Table TABREF40. ### Visual-Textual Entailment with Natural Language Explanations ::: Collecting Explanations
As mentioned before, in order to submit the annotation of an image-sentence pair, three steps must be completed: workers must choose a label, highlight words in the hypothesis, and use at least half of the highlighted words to write an explanation for their decision. The last two steps thus follow the quality control of crowd-sourced explanations introduced by Camburu BIBREF6. We also ensured that workers do not simply use a copy of the given hypothesis as explanation. We ensured all the above via in-browser checks before workers' submission. An example of collected explanations is given in Figure FIGREF21. To check the success of our crowdsourcing, we manually assessed the relevance of explanations among a random subset of 100 examples. A marking scale between 0 and 1 was used, assigning a score of $k$/$n$ when $k$ required attributes were given in an explanation out of $n$. We report an 83.5% relevance of explanations from workers. We note that, since our explanations are VTE-specific, they were phrased differently from the ones in e-SNLI, with more specific mentions to the images (e.g., “There is no labcoat in the picture, just a man wearing a blue shirt.”, “There are no apples or oranges shown in the picture, only bananas.”). Therefore, it would likely be beneficial to collect new explanations for all SNLI-VE-2.0 (not only for the neutral pairs in the validation and test sets) such that models can learn to output convincing explanations for the task at hand. However, we leave this as future work, and we show in this work the results that one obtains when using the explanations from e-SNLI-VE-2.0. ### Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations
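The in-browser submission checks described in the collection procedure above can be sketched as follows; whitespace tokenisation and an exact-match copy test are simplifying assumptions:

```python
def valid_submission(hypothesis, highlighted, explanation):
    """Accept a submission only if (a) at least half of the highlighted
    words appear in the explanation, and (b) the explanation is not
    simply a copy of the hypothesis."""
    expl_tokens = set(explanation.lower().split())
    used = sum(1 for w in highlighted if w.lower() in expl_tokens)
    if highlighted and used * 2 < len(highlighted):
        return False
    if explanation.strip().lower() == hypothesis.strip().lower():
        return False
    return True
```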
This section presents two VTE models that generate natural language explanations for their own decisions. We name them PaE-BUTD-VE and EtP-BUTD-VE, where PaE (resp. EtP) is for PredictAndExplain (resp. ExplainThenPredict), two models with similar principles introduced by Camburu BIBREF6. The first system learns to generate an explanation conditioned on the image premise, textual hypothesis, and predicted label. In contrast, the second system learns to first generate an explanation conditioned on the image premise and textual hypothesis, and subsequently makes a prediction solely based on the explanation. ### Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain
PaE-BUTD-VE is a system for solving VTE and generating natural language explanations for the predicted labels. The explanations are conditioned on the image premise, the text hypothesis, and the predicted label (ground-truth label at train time), as shown in Figure FIGREF24. ### Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model.
As described in Section SECREF12, in the BUTD model, the hypothesis vector and the image vector were fused in a fixed-size feature vector f. The vector f was then given as input to an MLP which outputs a probability distribution over the three labels. In PaE-BUTD-VE, in addition to the classification layer, we add a 512-LSTM BIBREF12 decoder to generate an explanation. The decoder takes the feature vector f as initial state. Following Camburu BIBREF6, we prepend the label as a token at the beginning of the explanation to condition the explanation on the label. The ground truth label is provided at training time, whereas the predicted label is given at test time. At test time, we use beam search with a beam width of 3 to decode explanations. For memory and time reduction, we replaced words that appeared less than 15 times among explanations with “#UNK#”. This strategy reduces the output vocabulary size to approximately 8.6k words. ### Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Loss.
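The decoder-side preprocessing just described (pruning words that appear fewer than 15 times to “#UNK#”, and prepending the label as the first token of the explanation) can be sketched as:

```python
from collections import Counter

def build_vocab(explanations, min_count=15, specials=("#UNK#",)):
    """Keep only words seen at least `min_count` times across the
    training explanations; everything else maps to "#UNK#"."""
    counts = Counter(w for e in explanations for w in e.split())
    return set(specials) | {w for w, c in counts.items() if c >= min_count}

def decoder_input(label, explanation, vocab):
    """Prepend the label token (ground truth at train time, predicted at
    test time) to condition the explanation on the label."""
    return [label] + [w if w in vocab else "#UNK#" for w in explanation.split()]
```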
The training loss is a weighted combination of the classification loss and the explanation loss, both computed using softmax cross entropy: $\mathcal {L} = \alpha \mathcal {L}_{label} + (1-\alpha ) \mathcal {L}_{explanation} \; \textrm {;} \; \alpha \in [0,1]$. ### Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model selection.
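The weighted objective above can be sketched as follows, assuming (as an illustrative choice) that the explanation cross-entropy is averaged over decoding steps:

```python
import numpy as np

def cross_entropy(probs, target_idx):
    """Softmax cross-entropy for one prediction: -log p(target)."""
    return -float(np.log(probs[target_idx]))

def combined_loss(label_probs, label_idx, expl_probs_seq, expl_idx_seq, alpha=0.4):
    """L = alpha * L_label + (1 - alpha) * L_explanation, alpha in [0, 1]."""
    l_label = cross_entropy(label_probs, label_idx)
    l_expl = np.mean([cross_entropy(p, i) for p, i in zip(expl_probs_seq, expl_idx_seq)])
    return alpha * l_label + (1 - alpha) * l_expl
```

Setting `alpha=1` recovers a pure classifier, and `alpha=0` a pure explanation generator (used later for the Explain Then Predict variant).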
In this experiment, we are first interested in examining whether a neural network can generate explanations at no cost in label accuracy. Therefore, only the label balanced accuracy is used as the model selection criterion. However, future work can investigate other selection criteria involving a combination of label and explanation performances. We performed a hyperparameter search on $\alpha $, considering values between 0.2 and 0.8 with a step of 0.2. We found $\alpha =0.4$ to produce the best validation balanced accuracy of 72.81%, while BUTD trained without explanations yielded a similar 72.58% validation balanced accuracy. ### Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Results.
As summarised in Table TABREF30, we obtain a test balanced accuracy for PaE-BUTD-VE of 73%, while the same model trained without explanations obtains 72.52%. This is encouraging, since it shows that one can obtain additional natural language explanations without sacrificing performance (and eventually even improving the label performance; however, future work is needed to conclude whether the $0.48\%$ improvement in performance is statistically significant). Camburu BIBREF6 mentioned that the BLEU score was not an appropriate measure for the quality of explanations and suggested human evaluation instead. We therefore manually scored the relevance of 100 explanations that were generated when the model predicted correct labels. We found that only 20% of explanations were relevant. We highlight that the relevance of explanations is in terms of whether the explanation reflects ground-truth reasons supporting the correct label. This is not to be confused with whether an explanation is correctly illustrating the inner working of the model, which is left as future work. It is also important to note that, in a similar experimental setting, Camburu BIBREF6 reported as low as 34.68% correct explanations, training with explanations that were actually collected for their task. Lastly, the model selection criterion at validation time was the prediction balanced accuracy, which may contribute to the low quality of explanations. While we show that adding an explanation module does not harm prediction performance, more work is necessary to get models that output trustable explanations. ### Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict
When assigning a label, an explanation is naturally part of the decision-making process. This motivates the design of a system that explains itself before deciding on a label, called EtP-BUTD-VE. For this system, a first neural network is trained to generate an explanation given an image-sentence input. Separately, a second neural network, called ExplToLabel-VE, is trained to predict a label from an explanation (see Figure FIGREF32). ### Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model.
For the first network, we set $\alpha =0$ in the training loss of the PaE-BUTD-VE model to obtain a system that only learns to generate an explanation from the image-sentence input, without label prediction. Hence, in this setting, no label is prepended before the explanation. For the ExplToLabel-VE model, we use a 512-LSTM followed by an MLP with three 512-unit layers and ReLU activations, and a final softmax to classify the explanation as entailment, contradiction, or neutral. ### Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model selection.
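The ExplToLabel-VE classifier head just described (three 512-unit ReLU layers over the LSTM encoding of the explanation, then a softmax over the three relations) can be sketched as follows; the small weight shapes in the test are purely illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def expl_to_label(expl_vec, layers, W_out):
    """expl_vec: explanation encoding (the last LSTM output).
    layers: list of (W, b) pairs, three 512x512 layers in the model above.
    W_out: output projection to the 3 classes."""
    h = expl_vec
    for W, b in layers:
        h = relu(h @ W + b)
    return softmax(h @ W_out)  # distribution over {entailment, contradiction, neutral}
```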
For ExplToLabel-VE, the best model is selected on balanced accuracy at validation time. For EtP-BUTD-VE, perplexity is used to select the best model parameters at validation time. It is computed between the explanations produced by the LSTM and ground truth explanations from the validation set. ### Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Results.
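The perplexity criterion just described is the exponential of the mean per-token negative log-likelihood that the decoder assigns to the ground-truth explanation tokens; a minimal sketch:

```python
import numpy as np

def perplexity_per_word(token_log_probs):
    """token_log_probs: log-probabilities the model assigns to each
    reference token. Lower perplexity means better explanations."""
    nll = -np.mean(token_log_probs)
    return float(np.exp(nll))
```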
When we train ExplToLabel-VE on e-SNLI-VE-2.0, we obtain a balanced accuracy of 90.55% on the test set. As reported in Table TABREF30, the overall EtP-BUTD-VE system achieves 69.40% balanced accuracy on the test set of e-SNLI-VE-2.0, which is a 3% decrease from the non-explanatory BUTD counterpart (72.52%). However, by setting $\alpha $ to zero and selecting the model that gives the best perplexity per word at validation, the quality of the explanations significantly increased, with 35% relevance, based on manual evaluation. Thus, in our model, generating better explanations involves a small sacrifice in label prediction accuracy, implying a trade-off between explanation generation and accuracy. We note that there is room for improvement in our explanation generation method. For example, one can implement an attention mechanism similar to Xu BIBREF13, so that each generated word relates to a relevant part of the multimodal feature representation. ### Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Qualitative Analysis of Generated Explanations
We complement our quantitative results with a qualitative analysis of the explanations generated by our enhanced VTE systems. In Figures FIGREF36 and FIGREF37, we present examples of the predicted labels and generated explanations. Figure FIGREF36 shows an example where the EtP-BUTD-VE model produces both a correct label and a relevant explanation. The label is contradiction, because in the image, the students are playing with a soccer ball and not a basketball, thus contradicting the text hypothesis. Given the composition of the generated sentence (“Students cannot be playing soccer and baseball at the same time.”), ExplToLabel-VE was able to detect a contradiction in the image-sentence input. In comparison, the explanation from e-SNLI-VE-2.0 is not correct, even if it was valid for e-SNLI when the text premise was given. This emphasizes the difficulty that we are facing with generating proper explanations when training on a noisy dataset. Even when the generated explanations are irrelevant, we noticed that they are on-topic and that most of the time the mistakes come from repetitions of certain sub-phrases. For example, in Figure FIGREF37, PaE-BUTD-VE predicts the label neutral, which is correct, but the explanation contains an erroneous repetition of the n-gram “are in a car”. However, it appears that the system learns to generate a sentence in the form “Just because ...doesn't mean ...”, which is frequently found for the justification of neutral pairs in the training set. The explanation generated by EtP-BUTD-VE adopts the same structure, and the ExplToLabel-VE component correctly classifies the instance as neutral. However, even if the explanation is semantically correct, it is not relevant for the input and fails to explain the classification. ### Conclusion
In this paper, we first presented SNLI-VE-2.0, which corrects the neutral instances in the validation and test sets of SNLI-VE. Secondly, we re-evaluated an existing model on the corrected sets in order to update the estimate of its performance on this task. Thirdly, we introduced e-SNLI-VE-2.0, a dataset which extends SNLI-VE-2.0 with natural language explanations. Finally, we trained two types of models that learn from these explanations at training time, and output such explanations at test time, as a stepping stone in explainable artificial intelligence. Our work is a jumping-off point for both the identification and correction of SNLI-VE, as well as in the extension to explainable VTE. We hope that the community will build on our findings to create more robust as well as explainable multimodal systems. ### Conclusion ::: Acknowledgements.
This work was supported by the Oxford Internet Institute, a JP Morgan PhD Fellowship 2019-2020, an Oxford-DeepMind Graduate Scholarship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, and the AXA Research Fund, as well as DFG-EXC-Nummer 2064/1-Projektnummer 390727645 and the ERC under the Horizon 2020 program (grant agreement No. 853489). ### Appendix ::: Statistics of e-SNLI-VE-2.0
e-SNLI-VE-2.0 is the combination of SNLI-VE-2.0 with explanations from either e-SNLI or our crowdsourced annotations where applicable. The statistics of e-SNLI-VE-2.0, including text hypotheses and explanations, are shown in Table TABREF40. ### Appendix ::: Details of the Mechanical Turk Task
We used Amazon Mechanical Turk (MTurk) to collect new labels and explanations for SNLI-VE. 2,060 workers participated in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54. We required the workers to have a previous approval rate above 90%. No restriction was put on the workers' location. Each assignment consisted of a set of 10 image-sentence pairs. For each pair, the participant was asked to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using a subset of the words that they highlighted. The instructions are shown in Figure FIGREF42. Workers were also guided with three annotated examples, one for each label. For each assignment of 10 questions, one trusted annotation with gold standard label was inserted at a random position, as a measure to control the quality of label annotation. Each assignment was completed by three different workers. An example of question is shown in Figure FIGREF8 in the core paper. ### Appendix ::: Ambiguous Examples from SNLI-VE
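The gold-question quality control just described (one trusted gold-labelled item inserted at a random position in each assignment of 10) can be sketched as follows; the fixed default seed is an assumption for reproducibility:

```python
import random

def build_assignment(pairs, gold_item, size=10, rng=None):
    """pairs: size-1 regular image-sentence pairs; gold_item: a trusted
    annotation with a gold-standard label, inserted at a random position."""
    rng = rng or random.Random(0)
    assert len(pairs) == size - 1
    items = list(pairs)
    items.insert(rng.randrange(size), gold_item)
    return items
```

Disagreement between a worker's answer and the gold label on this item can then be used to flag low-quality annotators.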
Some examples in SNLI-VE were ambiguous and could find correct justifications for incompatible labels, as shown in Figures FIGREF44, FIGREF45, and FIGREF46.

Figure 1. Examples from SNLI-VE-2.0. (a) In red, the neutral label from SNLI-VE is wrong, since the picture clearly shows that the crowd is outdoors. We corrected it to entailment in SNLI-VE-2.0. (b) In green, an ambiguous instance. There is indeed an American flag in the background but it is very hard to see, hence the ambiguity between neutral and entailment, and even contradiction if one cannot spot it. Further, it is not clear whether “they” implies the whole group or the people visible in the image.

Figure 2. MTurk annotation screen. (a) The label contradiction is chosen, (b) the evidence words “man”, “violin”, and “crowd” are highlighted, and (c) an explanation is written with these words.

Table 1. Accuracies obtained with BUTD on SNLI-VE (val-original, test-original) and SNLI-VE-2.0 (val-corrected, test-corrected).

Figure 3. Two image-sentence pairs from e-SNLI-VE-2.0 with (a) at the top, an uninformative explanation from e-SNLI, (b) at the bottom, an explanation collected from our crowdsourcing. We only collected new explanations for the neutral class (along with new labels). The SNLI premise is not included in e-SNLI-VE-2.0.

Figure 4. PAE-BUTD-VE. The generation of explanation is conditioned on the image premise, textual hypothesis, and predicted label.

Table 2. Label balanced accuracies and explanation relevance rates of our two explanatory systems on e-SNLI-VE-2.0. Comparison with their counterparts in e-SNLI [3]. Without the explanation component, the balanced accuracy on SNLI-VE-2.0 is 72.52%.

Figure 5. Architecture of ETP-BUTD-VE. Firstly, an explanation is generated, secondly the label is predicted from the explanation. The two models (in separate dashed rectangles) are not trained jointly.

Figure 6. Both systems PAE-BUTD-VE and ETP-BUTD-VE predict the correct label, but only ETP-BUTD-VE generates a relevant explanation.

Figure 7. Both systems PAE-BUTD-VE and ETP-BUTD-VE predict the correct label, but generate irrelevant explanations.

Figure 8. Instructions given to workers on Mechanical Turk.

Table 3. Summary of e-SNLI-VE-2.0 (= SNLI-VE-2.0 + explanations). Image-sentence pairs labelled as neutral in the training set have not been corrected.

Figure 9. Ambiguous SNLI-VE instance. Some may argue that the woman’s face betrays sadness, but the image is not quite clear. Secondly, even with better resolution, facial expression may not be strong enough evidence to support the hypothesis about the woman’s emotional state.

Figure 10. Ambiguous SNLI-VE instance. The lack of consensus is on whether the man is “leering” at the woman. While it is likely the case, this interpretation in favour of entailment is subjective, and a cautious annotator would prefer to label the instance as neutral.

Figure 11. Ambiguous SNLI-VE instance. Some may argue that it is impossible to certify from the image that the children are kindergarten students, and label the instance as neutral. On the other hand, the furniture may be considered as typical of kindergarten, which would be sufficient evidence for entailment.
balanced accuracy, i.e., the average of the three accuracies on each class
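This balanced accuracy, used throughout the results above, can be sketched as:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, classes=("entailment", "neutral", "contradiction")):
    """Unweighted mean of the per-class accuracies, so that the
    over-represented classes do not dominate the score."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = []
    for c in classes:
        mask = y_true == c
        if mask.any():
            per_class.append((y_pred[mask] == c).mean())
    return float(np.mean(per_class))
```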
What solution does Paul Krugman suggest to address his concerns?
A. Journalists and authors should rely on only a handful of trusted sources.
B. Journalists and authors should show more care in referencing and crediting work done by all parties.
C. Journalists and authors should always fact-check information through Nobel laureates.
D. More media attention should be given to issues of academic plagiarism.
Krugman's Life of Brian

Where it all started: Paul Krugman's "The Legend of Arthur."
Letter from John Cassidy
Paul Krugman replies to John Cassidy
Letter from M. Mitchell Waldrop
Paul Krugman replies to M. Mitchell Waldrop
Letter from Kenneth J. Arrow
Letter from Ted C. Fishman
David Warsh's July 3, 1994, Boston Globe column

Letter from John Cassidy:

Paul Krugman loves to berate journalists for their ignorance of economics, particularly his economics, but on this occasion, I fear, his logic is more addled than usual. I am reluctant to dignify his hatchet job with a lengthy reply, but some of his claims are so defamatory that they should be addressed, if only for the record. 1) Krugman claims that my opening sentence--"In a way, Bill Gates's current troubles with the Justice Department grew out of an economics seminar that took place thirteen years ago, at Harvard's John F. Kennedy School of Government"--is "pure fiction." Perhaps so, but in that case somebody should tell this to Joel Klein, the assistant attorney general in charge of the antitrust division. When I interviewed Klein for my piece about the Microsoft case, he singled out Brian Arthur as the economist who has most influenced his thinking about the way in which high-technology markets operate. It was Klein's words, not those of Arthur, that prompted me to use Arthur in the lead of the story. 2) Krugman wrote: "Cassidy's article tells the story of how Stanford Professor Brian Arthur came up with the idea of increasing returns." I wrote no such thing, and Arthur has never, to my knowledge, claimed any such thing. The notion of increasing returns has been around since Adam Smith, and it was written about at length by Alfred Marshall in 1890. What I did say in my article was that increasing returns was largely ignored by mainstream economists for much of the postwar era, a claim that simply isn't controversial. (As Krugman notes, one reason for this was technical, not ideological.
Allowing for the possibility of increasing returns tends to rob economic models of two properties that economists cherish: simplicity and determinism. As long ago as 1939, Sir John Hicks, one of the founders of modern economics, noted that increasing returns, if tolerated, could lead to the "wreckage" of a large part of economic theory.) 3) Pace Krugman, I also did not claim that Arthur bears principal responsibility for the rediscovery of increasing returns by economists in the 1970s and 1980s. As Krugman notes, several scholars (himself included) who were working in the fields of game theory and international trade published articles incorporating increasing returns before Arthur did. My claim was simply that Arthur applied increasing returns to high-technology markets, and that his work influenced how other economists and government officials think about these markets. Krugman apart, virtually every economist I have spoken to, including Daniel Rubinfeld, a former Berkeley professor who is now the chief economist at the Justice Department's antitrust division, told me this was the case. (Rubinfeld also mentioned several other economists who did influential work, and I cited three of them in the article.) 4) Krugman appears to suggest that I made up some quotes, a charge that, if it came from a more objective source, I would consider to be a serious matter. In effect, he is accusing Brian Arthur, a man he calls a "nice guy," of being a fabricator or a liar. The quotes in question came from Arthur, and they were based on his recollections of two meetings that he attended some years ago. After Krugman's article appeared, the Santa Fe professor called me to say that he still recalled the meetings in question as I described them. Krugman, as he admits, wasn't present at either of the meetings. 5) For a man who takes his own cogitations extremely seriously, Krugman is remarkably cavalier about attributing motives and beliefs to others. 
"Cassidy has made it clear in earlier writing that he does not like mainstream economists, and he may have been overly eager to accept a story that puts them in a bad light," he pronounces. I presume this statement refers to a critical piece I wrote in 1996 about the direction that economic research, principally macroeconomic research, has taken over the past two decades. In response to that article, I received dozens of messages of appreciation from mainstream economists, including from two former presidents of the American Economic Association. Among the sources quoted in that piece were the then-chairman of the White House Council of Economic Advisers (Joseph Stiglitz), a governor of the Federal Reserve Board (Laurence Meyer), and a well-known Harvard professor (Gregory Mankiw). To claim, as Krugman does, that I "don't like mainstream economists" and that I am out to denigrate their work is malicious hogwash. The fact of the matter is that I spend much of my life reading the work of mainstream economists, speaking to them, and trying to find something they have written that might interest the general public. In my experience, most economists appreciate the attention. 6) I might attach more weight to Krugman's criticisms if I hadn't recently reread his informative 1994 book Peddling Prosperity , in which he devotes a chapter to the rediscovery of increasing returns by contemporary economists. Who are the first scholars Krugman mentions in his account? Paul David, an economic historian who wrote a famous paper about how the QWERTYUIOP typewriter keyboard evolved and, you guessed it, Brian Arthur. "Why QWERTYUIOP?" Krugman wrote. "In the early 1980s, Paul David and his Stanford colleague Brian Arthur asked that question, and quickly realized that it led them into surprisingly deep waters. ... 
What Paul David, Brian Arthur, and a growing number of other economists began to realize in the late seventies and early eighties was that stories like that of the typewriter keyboard are, in fact, pervasive in the economy." Evidently, Krugman felt four years ago that Arthur's contribution was important enough to merit a prominent mention in his book. Now, he dismisses the same work, saying it "didn't tell me anything that I didn't already know." Doubtless, this change in attitude on Krugman's part is unconnected to the fact that Arthur has started to receive some public recognition. The eminent MIT professor, whose early academic work received widespread media attention, is far too generous a scholar to succumb to such pettiness. --John Cassidy

Paul Krugman replies to John Cassidy:

I think that David Warsh's 1994 column in the Boston Globe says it all. If other journalists would do as much homework as he did, I wouldn't have had to write that article.

Letter from M. Mitchell Waldrop:

Thanks to Paul Krugman for his lament about credulous reporters who refuse to let facts stand in the way of a good story ("The Legend of Arthur"). As a professional journalist, I found his points well taken--even when he cites my own book, Complexity, as a classic example of the gullibility genre. Among many other things, Complexity tells the story of the Irish-born economist Brian Arthur and how he came to champion a principle known as "increasing returns." The recent New Yorker article explains how that principle has since become the intellectual foundation of the Clinton administration's antitrust case against Microsoft. Krugman's complaint is that the popular press--including Complexity and The New Yorker--is now hailing Brian Arthur as the originator of increasing returns, even though Krugman and many others had worked on the idea long before Arthur did. I leave it for others to decide whether I was too gullible in writing Complexity.
For the record, however, I would like to inject a few facts into Krugman's story, which he summarizes nicely in the final paragraph: When Waldrop's book came out, I wrote him as politely as I could, asking exactly how he had managed to come up with his version of events. He did, to his credit, write back. He explained that while he had become aware of some other people working on increasing returns, trying to put them in would have pulled his story line out of shape. ... So what we really learn from the legend of Arthur is that some journalists like a good story too much to find out whether it is really true. Now, I will admit to many sins, not the least of them being a profound ignorance of graduate-level economics; I spent my graduate-school career in the physics department instead, writing a Ph.D. dissertation on the quantum-field theory of elementary particle collisions at relativistic energies. However, I am not so ignorant of the canons of journalism (and of common sense) that I would take a plausible fellow like Brian Arthur at face value without checking up on him. During my research for Complexity I spoke to a number of economists about his work, including Nobel laureate Kenneth Arrow, co-creator of the General Equilibrium Theory of economics that Brian so eloquently criticizes. They generally agreed that Brian was a maverick in the field--and perhaps a bit too much in love with his own self-image as a misunderstood outsider--but basically sound. None of them warned me that he was usurping credit where credit was not due. Which brings me to Professor Krugman's letter, and my reply. I remember the exchange very well. Obviously, however, my reply failed to make clear what I was really trying to say. So I'll try again: a) During our interviews, Brian went out of his way to impress upon me that many other economists had done work in increasing returns--Paul Krugman among them. He was anxious that they be given due credit in anything I wrote. So was I. 
b) Accordingly, I included a passage in Complexity in which Brian does indeed describe what others had done in the field--Paul Krugman among them. Elsewhere in that same chapter, I tried to make it clear that the concept of increasing returns was already well known to Brian's professors at Berkeley, where he first learned of it. Indeed, I quote Brian pointing out that increasing returns had been extensively discussed by the great English economist Alfred Marshall in 1891. c) So, when I received Krugman's letter shortly after Complexity came out, I was puzzled: He was complaining that I hadn't referenced others in the increasing-returns field--Paul Krugman among them--although I had explicitly done so. d) But, when I checked the published text, I was chagrined to discover that the critical passage mentioning Krugman wasn't there. e) Only then did I realize what had happened. After I had submitted the manuscript, my editor at Simon & Schuster had suggested a number of cuts to streamline what was already a long and involved chapter on Brian's ideas. I accepted some of the cuts, and restored others--including (I thought) the passage that mentioned Krugman. In the rush to get Complexity to press, however, that passage somehow wound up on the cutting-room floor anyway, and I didn't notice until too late. That oversight was my fault entirely, not my editor's, and certainly not Brian Arthur's. I take full responsibility, I regret it, and--if Simon & Schuster only published an errata column--I would happily correct it publicly. However, contrary to what Professor Krugman implies, it was an oversight, not a breezy disregard of facts for the sake of a good story. --M. Mitchell Waldrop, Washington

Paul Krugman replies to M. Mitchell Waldrop:

I am truly sorry that The New Yorker has not yet established a Web presence so that we could include a link directly to the Cassidy piece.
However, you can get a pretty good idea of what the piece said by reading the summary of it presented in "Tasty Bits from the Technology Front." Cassidy did not present a story about one guy among many who worked on increasing returns. On the contrary: He presented a morality play in which a lonely hero struggled to make his ideas heard against the unified opposition of a narrow-minded profession both intellectually and politically conservative. As TBTF's host--not exactly a naive reader--put it, "These ideas were anathema to mainstream economists in 1984 when Arthur first tried to publish them." That morality play--not the question of who deserves credit--was the main point of my column, because it is a pure (and malicious) fantasy that has nonetheless become part of the story line people tell about increasing returns and its relationship to mainstream economics. The fact, which is easily documented, is that during the years that, according to the legend, increasing returns was unacceptable in mainstream economics, papers about increasing returns were in fact being cheerfully published by all the major journals. And as I pointed out in the chronology I provided with the article, even standard reference volumes like the Handbook of International Economics (published in 1984, the year Arthur supposedly met a blank wall of resistance) have long contained chapters on increasing returns. Whatever the reason that Arthur had trouble getting his own paper published, ideological rigidity had nothing to do with it. How did this fantasy come to be so widely believed? I am glad to hear that you tried to tell a more balanced story, Mr. Waldrop, even if sloppy paperwork kept it from seeing the light of day. And I am glad that you talked to Ken Arrow. But Nobel laureates, who have wide responsibilities and much on their mind, are not necessarily on top of what has been going on in research outside their usual field. 
I happen to know of one laureate who, circa 1991, was quite unaware that anyone had thought about increasing returns in either growth or trade. Did you try talking to anyone else--say, to one of the economists who are the straight men in the stories you tell? For example, your book starts with the story of Arthur's meeting in 1987 with Al Fishlow at Berkeley, in which Fishlow supposedly said, "We know that increasing returns can't exist"--and Arthur went away in despair over the unwillingness of economists to think the unthinkable. Did you call Fishlow to ask whether he said it, and what he meant? Since by 1987 Paul Romer's 1986 papers on increasing returns and growth had started an avalanche of derivative work, he was certainly joking--what he probably meant was "Oh no, not you too." And let me say that I simply cannot believe that you could have talked about increasing returns with any significant number of economists outside Santa Fe without Romer's name popping up in the first 30 seconds of every conversation--unless you were very selective about whom you talked to. And oh, by the way, there are such things as libraries, where you can browse actual economics journals and see what they contain. The point is that it's not just a matter of failing to cite a few more people. Your book, like the Cassidy article, didn't just tell the story of Brian Arthur; it also painted a picture of the economics profession, its intellectual bigotry and prejudice, which happens to be a complete fabrication (with some real, named people cast as villains) that somehow someone managed to sell you. I wonder who? Even more to the point: How did Cassidy come by his story? 
Is it possible that he completely misunderstood what Brian Arthur was saying--that the whole business about the seminar at Harvard where nobody would accept increasing returns, about the lonely struggle of Arthur in the face of ideological rigidity, even the quotation from Arthur about economists being unwilling to consider the possibility of imperfect markets because of the Cold War (give me a break!) were all in Cassidy's imagination? Let me say that I am actually quite grateful to Cassidy and The New Yorker . A number of people have long been furious about your book--for example, Victor Norman, whom you portrayed as the first of many economists too dumb or perhaps narrow-minded to understand Arthur's brilliant innovation. Norman e-mailed me to say that "I have read the tales from the Vienna woods before and had hoped that it could be cleared up by someone at some point." Yet up to now there was nothing anyone could do about the situation. The trouble was that while "heroic rebel defies orthodoxy" is a story so good that nobody even tries to check it out, "guy makes minor contribution to well-established field, proclaims himself its founder" is so boring as to be unpublishable. (David Warsh's 1994 series of columns in the Boston Globe on the increasing-returns revolution in economics, the basis for a forthcoming book from Harvard University Press, is far and away the best reporting on the subject; it did include a sympathetic but devastating exposé of Arthur's pretensions--but to little effect. [Click to read Warsh on Arthur.]) Only now did I have a publishable story: "guy makes minor contribution to well-established field, portrays himself as heroic rebel--and The New Yorker believes him." Thank you, Mr. Cassidy. Letter from Kenneth J. Arrow: Paul Krugman's attack on Brian Arthur ("The Legend of Arthur") requires a correction of its misrepresentations of fact. 
Arthur is a reputable and significant scholar whose work is indeed having influence in the field of industrial organization and in particular public policy toward antitrust policy in high-tech industries. Krugman admits that he wrote the article because he was "just pissed off," not a very good state for a judicious statement of facts, as his column shows. His theme is stated in his first paragraph: "Cassidy's article [in The New Yorker of Jan. 12] tells the story of how Stanford Professor Brian Arthur came up with the idea of increasing returns." Cassidy, however, said nothing of the sort. The concept of increasing returns is indeed very old, and Cassidy at no point attributed that idea to Arthur. Indeed, the phrase "increasing returns" appears just once in Cassidy's article and then merely to say that Arthur had used the term while others refer to network externalities. Further, Arthur has never made any such preposterous claim at any other time. On the contrary, his papers have fully cited the history of the field and made references to the previous papers, including those of Paul Krugman. (See Arthur's papers collected in the volume Increasing Returns and Path Dependence in the Economy, especially his preface and my foreword for longer comments on Arthur's work in historic perspective. Click to see the foreword.) Hence, Krugman's whole attack is directed at a statement made neither by Arthur nor by Cassidy. Krugman has not read Cassidy's piece with any care nor has he bothered to review what Arthur has in fact said. What Cassidy in fact did in his article was to trace a line of influence between one of Arthur's early articles and the current claims of the Department of Justice against Microsoft. It appears that Cassidy based his article on several interviews, not just one. The point that Arthur has emphasized and which is influential in the current debates about antitrust policy is the dynamic implication of increasing returns. 
It is the concept of path-dependence, that small events, whether random or the result of corporate strategic choice, may have large consequences because of increasing returns of various kinds. Initial small advantages become magnified, for example, by creating a large installed base, and direct the future, possibly in an inefficient direction. Techniques of production may be locked in at an early stage. Similar considerations apply to regional development and learning. --Kenneth J. Arrow Nobel laureate and Joan Kenney professor of economics emeritus Stanford University Letter from Ted C. Fishman: After reading Paul Krugman vent his spleen against fellow economist Brian Arthur in "The Legend of Arthur," I couldn't help wondering whose reputation he was out to trash, Arthur's or his own. Krugman seems to fear a plot to deny economists their intellectual due. If one exists, Arthur is not a likely suspect. In a series of long interviews with me a year ago (for Worth magazine), I tried, vainly, to get Arthur to tell me how his ideas about increasing returns have encouraged a new strain of economic investigations. Despite much prodding, Arthur obliged only by placing himself in a long line of theorists dating back to Adam Smith and Alfred Marshall. I also found him disarmingly generous in giving credit to the biologists, physicists, and fellow economists who have helped advance his own thinking. Savvy to the journalist's quest for heroes, Arthur urged me to focus on his ideas, not his rank among his peers. Krugman has made a career out of telling other economists to pay better attention to the facts, yet as a chronicler of Arthur's career and inner life, Krugman seems to have listened only to his own demons. --Ted C. Fishman (For additional background on the history of "increasing returns" and Brian Arthur's standing in the field, click for David Warsh's July 3, 1994, Boston Globe article on Brian Arthur)
|
B. Journalists and authors should show more care in referencing and crediting work done by all parties.
|
What can be determined about the language used in the futuristic civilization that Jerome visits?
A. They are lazy, based on the slurring and laws against physical exertion.
B. They are all drunks, based on the slurring.
C. They are all moving at a snail pace, based on the slurring and relaxed tempers.
D. They are all in a hurry, based on the slurring.
|
... and it comes out here By LESTER DEL REY Illustrated by DON SIBLEY [Transcriber's Note: This etext was produced from Galaxy Science Fiction February 1951. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] There is one fact no sane man can quarrel with ... everything has a beginning and an end. But some men aren't sane; thus it isn't always so! No, you're wrong. I'm not your father's ghost, even if I do look a bit like him. But it's a longish story, and you might as well let me in. You will, you know, so why quibble about it? At least, you always have ... or do ... or will. I don't know, verbs get all mixed up. We don't have the right attitude toward tenses for a situation like this. Anyhow, you'll let me in. I did, so you will. Thanks. You think you're crazy, of course, but you'll find out you aren't. It's just that things are a bit confused. And don't look at the machine out there too long—until you get used to it, you'll find it's hard on the eyes, trying to follow where the vanes go. You'll get used to it, of course, but it will take about thirty years. You're wondering whether to give me a drink, as I remember it. Why not? And naturally, since we have the same tastes, you can make the same for me as you're having. Of course we have the same tastes—we're the same person. I'm you thirty years from now, or you're me. I remember just how you feel; I felt the same way when he—that is, of course, I or we—came back to tell me about it, thirty years ago. Here, have one of these. You'll get to like them in a couple more years. And you can look at the revenue stamp date, if you still doubt my story. You'll believe it eventually, though, so it doesn't matter. Right now, you're shocked. It's a real wrench when a man meets himself for the first time. Some kind of telepathy seems to work between two of the same people. You sense things. So I'll simply go ahead talking for half an hour or so, until you get over it. 
After that you'll come along with me. You know, I could try to change things around by telling what happened to me; but he—I—told me what I was going to do, so I might as well do the same. I probably couldn't help telling you the same thing in the same words, even if I tried—and I don't intend to try. I've gotten past that stage in worrying about all this. So let's begin when you get up in half an hour and come out with me. You'll take a closer look at the machine, then. Yes, it'll be pretty obvious it must be a time machine. You'll sense that, too. You've seen it, just a small little cage with two seats, a luggage compartment, and a few buttons on a dash. You'll be puzzling over what I'll tell you, and you'll be getting used to the idea that you are the man who makes atomic power practical. Jerome Boell, just a plain engineer, the man who put atomic power in every home. You won't exactly believe it, but you'll want to go along. I'll be tired of talking by then, and in a hurry to get going. So I cut off your questions, and get you inside. I snap on a green button, and everything seems to cut off around us. You can see a sort of foggy nothing surrounding the cockpit; it is probably the field that prevents passage through time from affecting us. The luggage section isn't protected, though. You start to say something, but by then I'm pressing a black button, and everything outside will disappear. You look for your house, but it isn't there. There is exactly nothing there—in fact, there is no there . You are completely outside of time and space, as best you can guess how things are. You can't feel any motion, of course. You try to reach a hand out through the field into the nothing around you and your hand goes out, all right, but nothing happens. Where the screen ends, your hand just turns over and pokes back at you. Doesn't hurt, and when you pull your arm back, you're still sound and uninjured. But it looks frightening and you don't try it again. 
Then it comes to you slowly that you're actually traveling in time. You turn to me, getting used to the idea. "So this is the fourth dimension?" you ask. Then you feel silly, because you'll remember that I said you'd ask that. Well, I asked it after I was told, then I came back and told it to you, and I still can't help answering when you speak. "Not exactly," I try to explain. "Maybe it's no dimension—or it might be the fifth; if you're going to skip over the so-called fourth without traveling along it, you'd need a fifth. Don't ask me. I didn't invent the machine and I don't understand it." "But...." I let it go, and so do you. If you don't, it's a good way of going crazy. You'll see later why I couldn't have invented the machine. Of course, there may have been a start for all this once. There may have been a time when you did invent the machine—the atomic motor first, then the time-machine. And when you closed the loop by going back and saving yourself the trouble, it got all tangled up. I figured out once that such a universe would need some seven or eight time and space dimensions. It's simpler just to figure that this is the way time got bent back on itself. Maybe there is no machine, and it's just easier for us to imagine it. When you spend thirty years thinking about it, as I did—and you will—you get further and further from an answer. Anyhow, you sit there, watching nothing all around you, and no time, apparently, though there is a time effect back in the luggage space. You look at your watch and it's still running. That means you either carry a small time field with you, or you are catching a small increment of time from the main field. I don't know, and you won't think about that then, either. I'm smoking, and so are you, and the air in the machine is getting a bit stale. You suddenly realize that everything in the machine is wide open, yet you haven't seen any effects of air loss. "Where are we getting our air?" you ask. "Or why don't we lose it?" 
"No place for it to go," I explain. There isn't. Out there is neither time nor space, apparently. How could the air leak out? You still feel gravity, but I can't explain that, either. Maybe the machine has a gravity field built in, or maybe the time that makes your watch run is responsible for gravity. In spite of Einstein, you have always had the idea that time is an effect of gravity, and I sort of agree, still. Then the machine stops—at least, the field around us cuts off. You feel a dankish sort of air replace the stale air, and you breathe easier, though we're in complete darkness, except for the weak light in the machine, which always burns, and a few feet of rough dirty cement floor around. You take another cigaret from me and you get out of the machine, just as I do. I've got a bundle of clothes and I start changing. It's a sort of simple, short-limbed, one-piece affair I put on, but it feels comfortable. "I'm staying here," I tell you. "This is like the things they wear in this century, as near as I can remember it, and I should be able to pass fairly well. I've had all my fortune—the one you make on that atomic generator—invested in such a way I can get it on using some identification I've got with me, so I'll do all right. I know they still use some kind of money, you'll see evidence of that. And it's a pretty easygoing civilization, from what I could see. We'll go up and I'll leave you. I like the looks of things here, so I won't be coming back with you." You nod, remembering I've told you about it. "What century is this, anyway?" I'd told you that, too, but you've forgotten. "As near as I can guess, it's about 2150. He told me, just as I'm telling you, that it's an interstellar civilization." You take another cigaret from me, and follow me. I've got a small flashlight and we grope through a pile of rubbish, out into a corridor. This is a sub-sub-sub-basement. 
We have to walk up a flight of stairs, and there is an elevator waiting, fortunately with the door open. "What about the time machine?" you ask. "Since nobody ever stole it, it's safe." We get in the elevator, and I say "first" to it. It gives out a coughing noise and the basement openings begin to click by us. There's no feeling of acceleration—some kind of false gravity they use in the future. Then the door opens, and the elevator says "first" back at us. It's obviously a service elevator and we're in a dim corridor, with nobody around. I grab your hand and shake it. "You go that way. Don't worry about getting lost; you never did, so you can't. Find the museum, grab the motor, and get out. And good luck to you." You act as if you're dreaming, though you can't believe it's a dream. You nod at me and I move out into the main corridor. A second later, you see me going by, mixed into a crowd that is loafing along toward a restaurant, or something like it, that is just opening. I'm asking questions of a man, who points, and I turn and move off. You come out of the side corridor and go down a hall, away from the restaurant. There are quiet little signs along the hall. You look at them, realizing for the first time that things have changed. Steij:neri, Faunten, Z:rgat Dispenseri. The signs are very quiet and dignified. Some of them can be decoded to stationery shops, fountains, and the like. What a zergot is, you don't know. You stop at a sign that announces: Trav:l Biwrou—F:rst-Clas Twrz—Marz, Viin*s, and x: Trouj:n Planets. Spej:l reits tu aol s*nz wixin 60 lyt iirz! But there is only a single picture of a dull-looking metal sphere, with passengers moving up a ramp, and the office is closed. You begin to get the hang of the spelling they use, though. Now there are people around you, but nobody pays much attention to you. Why should they? You wouldn't care if you saw a man in a leopard-skin suit; you'd figure it was some part in a play and let it go. 
Well, people don't change much. You get up your courage and go up to a boy selling something that might be papers on tapes. "Where can I find the Museum of Science?" "Downayer rien turn lefa the sign. Stoo bloss," he tells you. Around you, you hear some pretty normal English, but there are others using stuff as garbled as his. The educated and uneducated? I don't know. You go right until you find a big sign built into the rubbery surface of the walk: Miuzi:m *v Syens . There's an arrow pointing and you turn left. Ahead of you, two blocks on, you can see a pink building, with faint aqua trimming, bigger than most of the others. They are building lower than they used to, apparently. Twenty floors up seems about the maximum. You head for it, and find the sidewalk is marked with the information that it is the museum. You go up the steps, but you see that it seems to be closed. You hesitate for a moment, then. You're beginning to think the whole affair is complete nonsense, and you should get back to the time machine and go home. But then a guard comes to the gate. Except for the short legs in his suit and the friendly grin on his face, he looks like any other guard. What's more, he speaks pretty clearly. Everyone says things in a sort of drawl, with softer vowels and slurred consonants, but it's rather pleasant. "Help you, sir? Oh, of course. You must be playing in 'Atoms and Axioms.' The museum's closed, but I'll be glad to let you study whatever you need for realism in your role. Nice show. I saw it twice." "Thanks," you mutter, wondering what kind of civilization can produce guards as polite as that. "I—I'm told I should investigate your display of atomic generators." He beams at that. "Of course." The gate is swung to behind you, but obviously he isn't locking it. In fact, there doesn't seem to be a lock. "Must be a new part. You go down that corridor, up one flight of stairs and left. Finest display in all the known worlds. 
We've got the original of the first thirteen models. Professor Jonas was using them to check his latest theory of how they work. Too bad he could not explain the principle, either. Someone will, some day, though. Lord, the genius of that twentieth century inventor! It's quite a hobby with me, sir. I've read everything I could get on the period. Oh—congratulations on your pronunciation. Sounds just like some of our oldest tapes." You get away from him, finally, after some polite thanks. The building seems deserted and you wander up the stairs. There's a room on your right filled with something that proclaims itself the first truly plastic diamond former, and you go up to it. As you come near, it goes through a crazy wiggle inside, stops turning out a continual row of what seem to be bearings, and slips something the size of a penny toward you. "Souvenir," it announces in a well-modulated voice. "This is a typical gem of the twentieth century, properly cut to 58 facets, known technically as a Jaegger diamond, and approximately twenty carats in size. You can have it made into a ring on the third floor during morning hours for one-tenth credit. If you have more than one child, press the red button for the number of stones you desire." You put it in your pocket, gulping a little, and get back to the corridor. You turn left and go past a big room in which models of spaceships—from the original thing that looks like a V-2, and is labeled first Lunar rocket, to a ten-foot globe, complete with miniature manikins—are sailing about in some kind of orbits. Then there is one labeled Wep:nz , filled with everything from a crossbow to a tiny rod four inches long and half the thickness of a pencil, marked Fynal Hand Arm . Beyond is the end of the corridor, and a big place that bears a sign, Mad:lz *v Atamic Pau:r Sorsez . By that time, you're almost convinced. And you've been doing a lot of thinking about what you can do. 
The story I'm telling has been sinking in, but you aren't completely willing to accept it. You notice that the models are all mounted on tables and that they're a lot smaller than you thought. They seem to be in chronological order, and the latest one, marked 2147—Rincs Dyn*pat: , is about the size of a desk telephone. The earlier ones are larger, of course, clumsier, but with variations, probably depending on the power output. A big sign on the ceiling gives a lot of dope on atomic generators, explaining that this is the first invention which leaped full blown into basically final form. You study it, but it mentions casually the inventor, without giving his name. Either they don't know it, or they take it for granted that everyone does, which seems more probable. They call attention to the fact that they have the original model of the first atomic generator built, complete with design drawings, original manuscript on operation, and full patent application. They state that it has all major refinements, operating on any fuel, producing electricity at any desired voltage up to five million, any chosen cyclic rate from direct current to one thousand megacycles, and any amperage up to one thousand, its maximum power output being fifty kilowatts, limited by the current-carrying capacity of the outputs. They also mention that the operating principle is still being investigated, and that only such refinements as better alloys and the addition of magnetric and nucleatric current outlets have been added since the original. So you go to the end and look over the thing. It's simply a square box with a huge plug on each side, and a set of vernier controls on top, plus a little hole marked, in old-style spelling, Drop BBs or wire here . Apparently that's the way it's fueled. It's about one foot on each side. "Nice," the guard says over your shoulder. "It finally wore out one of the cathogrids and we had to replace that, but otherwise it's exactly as the great inventor made it. 
And it still operates as well as ever. Like to have me tell you about it?" "Not particularly," you begin, and then realize bad manners might be conspicuous here. While you're searching for an answer, the guard pulls something out of his pocket and stares at it. "Fine, fine. The mayor of Altasecarba—Centaurian, you know—is arriving, but I'll be back in about ten minutes. He wants to examine some of the weapons for a monograph on Centaurian primitives compared to nineteenth century man. You'll pardon me?" You pardon him pretty eagerly and he wanders off happily. You go up to the head of the line, to that Rinks Dynapattuh, or whatever it transliterates to. That's small and you can carry it. But the darned thing is absolutely fixed. You can't see any bolts, but you can't budge it, either. You work down the line. It'd be foolish to take the early model if you can get one with built-in magnetic current terminals—Ehrenhaft or some other principle?—and nuclear binding-force energy terminals. But they're all held down by the same whatchamaycallem effect. And, finally, you're right back beside the original first model. It's probably bolted down, too, but you try it tentatively and you find it moves. There's a little sign under it, indicating you shouldn't touch it, since the gravostatic plate is being renewed. Well, you won't be able to change the time cycle by doing anything I haven't told you, but a working model such as that is a handy thing. You lift it; it only weighs about fifty pounds! Naturally, it can be carried. You expect a warning bell, but nothing happens. As a matter of fact, if you'd stop drinking so much of that scotch and staring at the time machine out there now, you'd hear what I'm saying and know what will happen to you. But of course, just as I did, you're going to miss a lot of what I say from now on, and have to find out for yourself. But maybe some of it helps. I've tried to remember how much I remembered, after he told me, but I can't be sure. 
So I'll keep on talking. I probably can't help it, anyhow. Pre-set, you might say. Well, you stagger down the corridor, looking out for the guard, but all seems clear. Then you hear his voice from the weapons room. You bend down and try to scurry past, but you know you're in full view. Nothing happens, though. You stumble down the stairs, feeling all the futuristic rays in the world on your back, and still nothing happens. Ahead of you, the gate is closed. You reach it and it opens obligingly by itself. You breathe a quick sigh of relief and start out onto the street. Then there's a yell behind you. You don't wait. You put one leg in front of the other and you begin racing down the walk, ducking past people, who stare at you with expressions you haven't time to see. There's another yell behind you. Something goes over your head and drops on the sidewalk just in front of your feet, with a sudden ringing sound. You don't wait to find out about that, either. Somebody reaches out a hand to catch you and you dart past. The street is pretty clear now and you jolt along, with your arms seeming to come out of the sockets, and that atomic generator getting heavier at every step. Out of nowhere, something in a blue uniform about six feet tall and on the beefy side appears—and the badge hasn't changed much. The cop catches your arm and you know you're not going to get away, so you stop. "You can't exert yourself that hard in this heat, fellow," the cop says. "There are laws against that, without a yellow sticker. Here, let me grab you a taxi." Reaction sets in a bit and your knees begin to buckle, but you shake your head and come up for air. "I—I left my money home," you begin. The cop nods. "Oh, that explains it. Fine, I won't have to give you an appearance schedule. But you should have come to me." He reaches out and taps a pedestrian lightly on the shoulder. "Sir, an emergency request. Would you help this gentleman?" The pedestrian grins, looks at his watch, and nods. 
"How far?" You did notice the name of the building from which you came and you mutter it. The stranger nods again, reaches out and picks up the other side of the generator, blowing a little whistle the cop hands him. Pedestrians begin to move aside, and you and the stranger jog down the street at a trot, with a nice clear path, while the cop stands beaming at you both. That way, it isn't so bad. And you begin to see why I decided I might like to stay in the future. But all the same, the organized cooperation here doesn't look too good. The guard can get the same and be there before you. And he is. He stands just inside the door of the building as you reach it. The stranger lifts an eyebrow and goes off at once when you nod at him, not waiting for thanks. And the guard comes up, holding some dinkus in his hand, about the size of a big folding camera and not too dissimilar in other ways. He snaps it open and you get set to duck. "You forgot the prints, monograph, and patent applications," he says. "They go with the generator—we don't like to have them separated. A good thing I knew the production office of 'Atoms and Axioms' was in this building. Just let us know when you're finished with the model and we'll pick it up." You swallow several sets of tonsils you had removed years before, and take the bundle of papers he hands you out of the little case. He pumps you for some more information, which you give him at random. It seems to satisfy your amiable guard friend. He finally smiles in satisfaction and heads back to the museum. You still don't believe it, but you pick up the atomic generator and the information sheets, and you head down toward the service elevator. There is no button on it. In fact, there's no door there. You start looking for other doors or corridors, but you know this is right. The signs along the halls are the same as they were. Then there's a sort of cough and something dilates in the wall. 
It forms a perfect door and the elevator stands there waiting. You get in, gulping out something about going all the way down, and then wonder how a machine geared for voice operation can make anything of that. What the deuce would that lowest basement be called? But the elevator has closed and is moving downward in a hurry. It coughs again and you're at the original level. You get out—and realize you don't have a light. You'll never know what you stumbled over, but, somehow, you move back in the direction of the time machine, bumping against boxes, staggering here and there, and trying to find the right place by sheer feel. Then a shred of dim light appears; it's the weak light in the time machine. You've located it. You put the atomic generator in the luggage space, throw the papers down beside it, and climb into the cockpit, sweating and mumbling. You reach forward toward the green button and hesitate. There's a red one beside it and you finally decide on that. Suddenly, there's a confused yell from the direction of the elevator and a beam of light strikes against your eyes, with a shout punctuating it. Your finger touches the red button. You'll never know what the shouting was about—whether they finally doped out the fact that they'd been robbed, or whether they were trying to help you. You don't care which it is. The field springs up around you and the next button you touch—the one on the board that hasn't been used so far—sends you off into nothingness. There is no beam of light, you can't hear a thing, and you're safe. It isn't much of a trip back. You sit there smoking and letting your nerves settle back to normal. You notice a third set of buttons, with some pencil marks over them—"Press these to return to yourself 30 years"—and you begin waiting for the air to get stale. It doesn't because there is only one of you this time. Instead, everything flashes off and you're sitting in the machine in your own back yard. 
You'll figure out the cycle in more details later. You get into the machine in front of your house, go to the future in the sub-basement, land in your back yard, and then hop back thirty years to pick up yourself, landing in front of your house. Just that. But right then, you don't care. You jump out and start pulling out that atomic generator and taking it inside. It isn't hard to disassemble, but you don't learn a thing; just some plates of metal, some spiral coils, and a few odds and ends—all things that can be made easily enough, all obviously of common metals. But when you put it together again, about an hour later, you notice something. Everything in it is brand-new and there's one set of copper wires missing! It won't work. You put some #12 house wire in, exactly like the set on the other side, drop in some iron filings, and try it again. And with the controls set at 120 volts, 60 cycles and 15 amperes, you get just that. You don't need the power company any more. And you feel a little happier when you realize that the luggage space wasn't insulated from time effects by a field, so the motor has moved backward in time, somehow, and is back to its original youth—minus the replaced wires the guard mentioned—which probably wore out because of the makeshift job you've just done. But you begin getting more of a jolt when you find that the papers are all in your own writing, that your name is down as the inventor, and that the date of the patent application is 1951. It will begin to soak in, then. You pick up an atomic generator in the future and bring it back to the past—your present—so that it can be put in the museum with you as the inventor so you can steal it to be the inventor. And you do it in a time machine which you bring back to yourself to take yourself into the future to return to take back to yourself.... Who invented what? And who built which? Before long, your riches from the generator are piling in. 
Little kids from school are coming around to stare at the man who changed history and made atomic power so common that no nation could hope to be anything but a democracy and a peaceful one—after some of the worst times in history for a few years. Your name eventually becomes as common as Ampere, or Faraday, or any other spelled without a capital letter. But you're thinking of the puzzle. You can't find any answer. One day you come across an old poem—something about some folks calling it evolution and others calling it God. You go out, make a few provisions for the future, and come back to climb into the time machine that's waiting in the building you had put around it. Then you'll be knocking on your own door, thirty years back—or right now, from your view—and telling your younger self all these things I'm telling you. But now.... Well, the drinks are finished. You're woozy enough to go along with me without protest, and I want to find out just why those people up there came looking for you and shouting, before the time machine left. Let's go.
|
A. They are lazy, based on the slurring and laws against physical exertion.
|
How was Joe able to find an apartment to break into to commit his crime of thievery?
A. Hendricks had left out a book with unsecured addresses.
B. He paid someone to allow him to rob them and then report his crime.
C. He unsuccessfully attempted robbery until he was successful.
D. Hendricks had shown him the apartment that he could rob and be caught for.
|
Going straight meant crooked planning. He'd never make it unless he somehow managed to PICK A CRIME By RICHARD R. SMITH Illustrated by DICK FRANCIS [Transcriber's Note: This etext was produced from Galaxy Science Fiction May 1958. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] The girl was tall, wide-eyed and brunette. She had the right curves in the right places and would have been beautiful if her nose had been smaller, if her mouth had been larger and if her hair had been wavy instead of straight. "Hank said you wanted to see me," she said when she stopped beside Joe's table. "Yeah." Joe nodded at the other chair. "Have a seat." He reached into a pocket, withdrew five ten-dollar bills and handed them to her. "I want you to do a job for me. It'll only take a few minutes." The girl counted the money, then placed it in her purse. Joe noticed a small counterfeit-detector inside the purse before she closed it. "What's the job?" "Tell you later." He gulped the remainder of his drink, almost pouring it down his throat. "Hey. You trying to make yourself sick?" "Not sick. Drunk. Been trying to get drunk all afternoon." As the liquor settled in his stomach, he waited for the warm glow. But the glow didn't come ... the bartender had watered his drink again. "Trying to get drunk?" the girl inquired. "Are you crazy?" "No. It's simple. If I get drunk, I can join the AAA and get free room and board for a month while they give me a treatment." It was easy enough to understand, he reflected, but a lot harder to do. The CPA robot bartenders saw to it that anyone got high if they wanted, but comparatively few got drunk. Each bartender could not only mix drinks but could also judge by a man's actions and speech when he was on the verge of drunkenness. At the proper time—since drunkenness was illegal—a bartender always watered the drinks. 
Joe had tried dozens of times in dozens of bars to outsmart them, but had always failed. And in all of New York's millions, there had been only a hundred cases of intoxication during the previous year. The girl laughed. "If you're that hard up, I don't know if I should take this fifty or not. Why don't you go out and get a job like everyone else?" As an answer, Joe handed her his CPA ID card. She grunted when she saw the large letters that indicated the owner had Dangerous Criminal Tendencies. When she handed the card back, Joe fought an impulse to tear it to pieces. He'd done that once and gone through a mountain of red tape to get another—everyone was required by law to carry a CPA ID card and show it upon request. "I'm sorry," the girl said. "I didn't know you were a DCT." "And who'll hire a guy with criminal tendencies? You know the score. When you try to get a job, they ask to see your ID before they even tell you if there's an opening or not. If your CPA ID says you're a DCT, you're SOL and they tell you there's no openings. Oh, I've had several jobs ... jobs like all DCTs get. I've been a garbage man, street-cleaner, ditch-digger—" On the other side of the room, the jukebox came to life with a roar and a group of teen-agers scrambled to the dance floor. Feeling safe from hidden microphones because of the uproar, he leaned across the table and whispered in the girl's ear, "That's what I want to hire you for. I want you to help me commit a crime. If I get convicted of a crime, I'll be able to get a good job!" The girl's lips formed a bright red circle. "Say! You really got big plans, don't you?" He smiled at her admiration. It was something big to plan a crime. A civilization weary of murder, robbery, kidnapping, counterfeiting, blackmail, rape, arson, and drunkenness had originated the CPA—Crime Prevention Association. There were no longer any prisons—CPA officials had declared loudly and emphatically that their job was to prevent crime, not punish it. 
And prevent it they did, with thousands of ingenious crime-prevention devices and methods. They had made crime almost impossible, and during the previous year, only a few hundred men in the whole country had been convicted of criminal acts. No crime was ever punished. If a man was smart enough to kill someone, for instance, he wasn't sent to prison to be punished; he wasn't punished at all. Instead, he was sent to a hospital where all criminal tendencies were removed from his mind by psychologists, shock treatments, encephalographic devices, a form of prefrontal lobotomy and a dozen other methods. An expensive operation, but since there were few criminals—only ten in New York during the past year—any city could afford the CPA hospitals. The CPA system was, actually, cheaper than previous methods because it did away with the damage caused by countless crimes; did away with prisons and their guards, large police forces, squad cars and weapons. And, ironically, a man who did commit a crime was a sort of hero. He was a hero to the millions of men and women who had suppressed impulses to kill someone, beat their mates, get drunk, or kick a dog. Not only a hero, but because of the CPA Treatment, he was—when he left one of the CPA hospitals—a thoroughly honest and hard-working individual ... a man who could be trusted with any responsibility, any amount of money. And therefore, an EX (a convicted criminal who received the treatment was commonly called an Ex because he was in the strictest sense of the word an Ex-criminal) ... an Ex was always offered the best jobs. "Well," the girl said. "I'm honored. Really. But I got a date at ten. Let's get it over with. You said it'd only take a few minutes." "Okay. Let's go." The girl followed him across the room, around tables, through a door, down a hall, through a back door and into the alley. She followed him up the dark alley until he turned suddenly and ripped her blouse and skirt. 
He surprised her completely, but when she recovered, she backed away, her body poised like a wrestler's. "What's the big idea?" "Scream," Joe said. "Scream as loud as you can, and when the cops get here, tell 'em I tried to rape you." The plan was perfect, he told himself. Attempted rape was one of the few things that was a crime merely because a man attempted it. A crime because it theoretically inflicted psychological injury upon the intended victim—and because millions of women voters had voted it a crime. On the other hand, attempted murder, robbery, kidnapping, etc., were not crimes. They weren't crimes because the DCT didn't complete the act, and if he didn't complete the act, that meant simply that the CPA had once again functioned properly. The girl shook her head vigorously. "Sorry, buddy. Can't help you that way. Why didn't you tell me what you wanted?" "What's the matter?" Joe complained. "I'm not asking you to do anything wrong." "You stupid jerk. What do you think this is—the Middle Ages? Don't you know almost every woman knows how to defend herself? I'm a sergeant in the WSDA!" Joe groaned. The WSDA—Women's Self-Defense Association—a branch of the CPA. The WSDA gave free instruction in judo and jujitsu, even developed new techniques of wrestling and instructed only women in those new techniques. The girl was still shaking her head. "Can't do it, buddy. I'd lose my rank if you were convicted of—" "Do I have to make you scream?" Joe inquired tiredly and advanced toward the girl. "—and that rank carries a lot of weight. Hey! Stop it! " Joe discovered to his dismay that the girl was telling the truth when she said she was a sergeant in the WSDA. He felt her hands on his body, and in the time it takes to blink twice, he was flying through the air. The alley's concrete floor was hard—it had always been hard, but he became acutely aware of its lack of resiliency when his head struck it. 
There was a wonderful moment while the world was filled with beautiful stars and streaks of lightning through which he heard distant police sirens. But the wonderful moment didn't last long and darkness closed in on him. When he awoke, a rough voice was saying, "Okay. Snap out of it." He opened his eyes and recognized the police commissioner's office. It would be hard not to recognize: the room was large, devoid of furniture except for a desk and chairs, but the walls were lined with the controls of television screens, electronic calculators and a hundred other machines that formed New York's mechanical police force. Commissioner Hendricks was a remarkable character. There was something wrong with his glands, and he was a huge, greasy bulk of a man with bushy eyebrows and a double chin. His steel-gray eyes showed something of his intelligence and he would have gone far in politics if fate hadn't made him so ugly, for more than half the voters who elected men to high political positions were women. Anyone who knew Hendricks well liked him, for he was a friendly, likable person. But the millions of women voters who saw his face on posters and on their TV screens saw only the ugly face and heard only the harsh voice. The President of the United States was a capable man, but also a very handsome one, and the fact that a man who looked something like a bulldog had been elected as New York's police commissioner was a credit to Hendricks and millions of women voters. "Where's the girl?" Joe asked. "I processed her while you were out cold. She left. Joe, you—" "Okay," Joe said. "I'll save you the trouble. I admit it. Attempted rape. I confess." Hendricks smiled. "Sorry, Joe. You missed the boat again." He reached out and turned a dial on his desk top. "We had a microphone hidden in that alley. We have a lot of microphones hidden in a lot of alleys. You'd be surprised at the number of conspiracies that take place in alleys!" 
Joe listened numbly to his voice as it came from one of the hundreds of machines on the walls, " Scream. Scream as loud as you can, and when the cops get here, tell 'em I tried to rape you. " And then the girl's voice, " Sorry, buddy. Can't help— " He waved his hand. "Okay. Shut it off. I confess to conspiracy." Hendricks rose from behind the desk, walked leisurely to where Joe was slouched in a chair. "Give me your CPA ID." Joe handed him the card with trembling fingers. He felt as if the world had collapsed beneath him. Conspiracy to commit a crime wasn't a crime. Anyone could conspire. And if the conspirators were prevented from committing a crime, then that meant the CPA had functioned properly once again. That meant the CPA had once again prevented crime, and the CPA didn't punish crimes or attempted crimes, and it didn't attempt to prevent crimes by punishment. If it did, that would be a violation of the New Civil Rights. Hendricks crossed the room, deposited the card in a slot and punched a button. The machine hummed and a new card appeared. When Hendricks handed him the new card, Joe saw that the words DANGEROUS CRIMINAL TENDENCIES were now in red and larger than before. And, in slightly smaller print, the ID card stated that the owner was a DCT First Class. "You've graduated," Hendricks said coldly. "You guys never learn, do you? Now you're a DCT First Class instead of a Second Class. You know what that means?" Hendricks leaned closer until Joe could feel his breath on his face. "That means your case history will be turned over to the newspapers. You'll be the hobby of thousands of amateur cops. You know how it works? It's like this. The Joneses are sitting around tomorrow night and they're bored. Then Mr. Jones says, 'Let's go watch this Joe Harper.' So they look up your record—amateur cops always keep records of First Classes in scrapbooks—and they see that you stop frequently at Walt's Tavern. 
"So they go there and they sit and drink and watch you, trying not to let you know they're watching you. They watch you all night, just hoping you'll do something exciting, like trying to kill someone, so they can be the first ones to yell ' Police! ' They'll watch you because it's exciting to be an amateur cop, and if they ever did prevent you from committing a crime, they'd get a nice reward and they'd be famous." "Lay off," Joe said. "I got a headache. That girl—" Hendricks leaned even closer and glared. "You listen, Joe. This is interesting. You see, it doesn't stop with Mr. and Mrs. Jones. There's thousands of people like them. Years ago, they got their kicks from reading about guys like you, but these days things are dull because it's rare when anyone commits a crime. So every time you walk down the street, there'll be at least a dozen of 'em following you, and no matter where you go, you can bet there'll be some of 'em sitting next to you, standing next to you. "During the day, they'll take your picture with their spy cameras that look like buttons on their coats. At night, they'll peep at you through your keyhole. Your neighbors across the street will watch you through binoculars and—" "Lay off!" Joe squirmed in the chair. He'd been lectured by Hendricks before and it was always an unpleasant experience. The huge man was like a talking machine once he got started, a machine that couldn't be stopped. "And the kids are the worst," Hendricks continued. "They have Junior CPA clubs. They keep records of hoodlums like you in little cardboard boxes. They'll stare at you on the street and stare at you through restaurant windows while you're eating meals. They'll follow you in public rest rooms and watch you out of the corners of their eyes while they wash their little hands, and almost every day when you look back, you'll see a dozen freckle-faced little boys following you half a block behind, giggling and gaping at you. 
They'll follow you until the day you die, because you're a freak!" Joe couldn't stand the breath in his face any longer. He rose and paced the floor. "And it doesn't end there , Joe. It goes on and on. You'll be the object of every do-gooder and parlor psychologist. Strangers will stop you on the street and say, 'I'd like to help you, friend.' Then they'll ask you queer questions like, 'Did your father reject you when you were a child?' 'Do you like girls?' 'How does it feel to be a DCT First Class?' And then there'll be the strangers who hate DCTs. They'll stop you on the street and insult you, call you names, spit on you and—" "Okay, goddam it! Stop it! " Hendricks stopped, wiped the sweat from his face with a handkerchief and lit a cigarette. "I'm doing you a favor, Joe. I'm trying to explain something you're too dumb to realize by yourself. We've taught everyone to hate crime and criminals ... to hate them as nothing has ever been hated before. Today a criminal is a freak, an alien. Your life will be a living hell if you don't leave New York. You should go to some small town where there aren't many people, or be a hermit, or go to Iceland or—" Joe eyed the huge man suspiciously. " Favor , did you say? The day you do me a favor—" Hendricks shrugged his shoulders negligently. "Not entirely a favor. I want to get rid of you. Usually I come up here and sit around and read books. But guys like you are a nuisance and take up my time." "I couldn't leave if I wanted to," Joe said. "I'm flat broke. Thanks to your CPA system, a DCT can't get a decent job." Hendricks reached into a pocket, withdrew several bills and extended them. "I'll loan you some money. You can sign an IOU and pay me back a little at a time." Joe waved the money away. "Listen, why don't you do me a favor? Why don't you frame me? If I'm such a nuisance, pin a crime on me—any crime." "Can't do it. Convicting a man of a crime he didn't commit is a violation of Civil Rights and a crime in itself." "Umm." 
"Why don't you take the free psycho treatment? A man doesn't have to be a DCT. With the free treatment, psychologists can remove all your criminal tendencies and—" "Go to those head-shrinkers ?" Hendricks shrugged again. "Have it your way." Joe laughed. "If your damned CPA is so all-powerful, why can't you make me go?" "Violation of Civil Rights." "Damn it, there must be some way you can help me! We both want the same thing. We both want to see me convicted of a crime." "How can I help you without committing a crime myself?" Hendricks walked to his desk, opened a drawer and removed a small black book. "See this? It contains names and addresses of all the people in New York who aren't properly protected. Every week we find people who aren't protected properly—blind spots in our protection devices. As soon as we find them, we take steps to install anti-robbery devices, but this is a big city and sometimes it takes days to get the work done. "In the meantime, any one of these people could be robbed. But what can I do? I can't hold this book in front of your nose and say, 'Here, Joe, pick a name and go out and rob him.'" He laughed nervously. "If I did that, I'd be committing a crime myself!" He placed the book on the desk top, took a handkerchief from a pocket again and wiped sweat from his face. "Excuse me a minute. I'm dying of thirst. There's a water cooler in the next room." Joe stared at the door to the adjoining office as it closed behind the big man. Hendricks was—unbelievably—offering him a victim, offering him a crime! Almost running to the desk, Joe opened the book, selected a name and address and memorized it: John Gralewski, Apt. 204, 2141 Orange St. When Hendricks came back, Joe said, "Thanks." "Huh? Thanks for what? I didn't do anything." When Joe reached the street, he hurried toward the nearest subway. As a child, he had been frightened of the dark. As a man, he wasn't afraid of the dark itself, but the darkened city always made him feel ill at ease. 
The uneasiness was, more than anything else, caused by his own imagination. He hated the CPA and at night he couldn't shrug the feeling that the CPA lurked in every shadow, watching him, waiting for him to make a mistake. Imagination or not, the CPA was almost everywhere a person went. Twenty-four hours a day, millions of microphones hidden in taverns, alleys, restaurants, subways and every other place imaginable waited for someone to say the wrong thing. Everything the microphones picked up was routed to the CPA Brain, a monster electronic calculator. If the words "Let's see a movie" were received in the Brain, they were discarded. But if the words "Let's roll this guy" were received, the message was traced and a police helicopter would be at the scene in two minutes. And scattered all over the city were not only hidden microphones, but hidden television cameras that relayed visual messages to the Brain, and hidden machines that could detect a knife or a gun in someone's pocket at forty yards. Every place of business from the largest bank to the smallest grocery store was absolutely impenetrable. No one had even tried to rob a place of business for years. Arson was next to impossible because of the heat-detectors—devices placed in every building that could detect, radarlike, any intensity of heat above that caused by a cigarette lighter. Chemical research had made poisoning someone an impossibility. There were no drugs containing poison, and while an ant-poison might kill ants, no concentrated amount of it would kill a human. The FBI had always been a powerful organization, but under the supervision of the CPA, it was a scientific colossus and to think of kidnapping someone or to contemplate the use of narcotics was pointless. A counterfeiter's career was always short-lived: every place of business and millions of individuals had small counterfeit-detectors that could spot a fake and report it directly to the Brain. 
And the percentage of crimes had dwindled even more with the appearance of the robot police officers. Many a criminal in the past had gambled that he could outshoot a pursuing policeman. But the robots were different: they weren't flesh and blood. Bullets bounced off them and their aim was infallible. It was like a fantastic dream come true. Only the dream wasn't fantastic any more. With the huge atomic power plants scattered across the country and supplying endless electrical power at ridiculously low prices, no endeavor that required power was fantastic. The power required to operate the CPA devices cost each taxpayer an average of four dollars a year, and the invention, development and manufacture of the devices had cost even less. And the CPA had attacked crime through society itself, striking at the individual. In every city there were neon signs that blinked subliminally with the statement, CRIME IS FILTH. Listening to a radio or watching television, if a person heard station identification, he invariably heard or saw just below perception the words CRIME IS FILTH. If he went for a walk or a ride, he saw the endless subliminal posters declaring CRIME IS FILTH, and if he read a magazine or newspaper he always found, in those little dead spaces where an editor couldn't fit anything else, the below-perception words CRIME IS FILTH. It was monotonous and, after a while, a person looked at the words and heard them without thinking about them. And they were imprinted on his subconscious over and over, year after year, until he knew that crime was the same as filth and that criminals were filthy things. Except men like Joe Harper. No system is perfect. Along with thousands of other DCTs, Joe refused to believe it, and when he reached apartment 204 at 2141 Orange Street, he felt as if he'd inherited a gold mine. The hall was dimly lit, but when he stood before the door numbered 204, he could see that the wall on either side of it was new . 
That is, instead of being covered with dust, dirt and stains as the other walls were, it was clean. The building was an old one, the hall was wide, and the owner had obviously constructed a wall across the hall, creating another room. If the owner had reported the new room as required by law, it would have been wired with CPA burglarproof devices, but evidently he didn't want to pay for installation. When Joe entered the cubbyhole, he had to stand to one side in order to close the door behind him. The place was barely large enough for the bed, chair and bureau; it was a place where a man could fall down at night and sleep, but where no normal man could live day after day. Fearing that someone might detect him before he actually committed the crime, Joe hurried to the bureau and searched it. He broke out in a sweat when he found nothing but underwear and old magazines. If he stole underwear and magazines, it would still be a crime, but the newspapers would splash satirical headlines. Instead of being respected as a successful criminal, he would be ridiculed. He stopped sweating when he found a watch under a pile of underwear. The crystal was broken, one hand was missing and it wouldn't run, but—perfection itself—engraved on the back was the inscription, To John with Love . His trial would be a clean-cut one: it would be easy for the CPA to prove ownership and that a crime had been committed. Chuckling with joy, he opened the window and shouted, " Thief! Police! Help! " He waited a few seconds and then ran. When he reached the street, a police helicopter landed next to him. Strong metal arms seized him; cameras clicked and recorded the damning evidence. When Joe was securely handcuffed to a seat inside the helicopter, the metal police officers rang doorbells. There was a reward for anyone who reported a crime, but no one admitted shouting the warning. He was having a nightmare when he heard the voice, "Hey. Wake up. Hey!" 
He opened his eyes, saw Hendricks' ugly face and thought for a minute he was still having the nightmare. "I just saw your doctor," Hendricks said. "He says your treatment is over. You can go home now. I thought I'd give you a lift." As Joe dressed, he searched his mind and tried to find some difference. During the treatment, he had been unconscious or drugged, unable to think. Now he could think clearly, but he could find no difference in himself. He felt more relaxed than he'd ever felt before, but that could be an after-effect of all the sedatives he'd been given. And, he noticed when he looked in the mirror, he was paler. The treatment had taken months and he had, between operations, been locked in his room. Hendricks was standing by the window. Joe stared at the massive back. Deliberately goading his mind, he discovered the biggest change: Before, the mere sight of the man had aroused an intense hatred. Now, even when he tried, he succeeded in arousing only a mild hatred. They had toned down his capacity to hate, but not done away with it altogether. "Come here and take a look at your public," said Hendricks. Joe went to the window. Three stories below, a large crowd had gathered on the hospital steps: a band, photographers, television trucks, cameramen and autograph hunters. He'd waited a long time for this day. But now—another change in him— He put the emotion into words: "I don't feel like a hero. Funny, but I don't." "Hero!" Hendricks laughed and, with his powerful lungs, it sounded like a bull snorting. "You think a successful criminal is a hero? You stupid—" He laughed again and waved a hand at the crowd below them. "You think those people are down there because they admire what you did? They're down there waiting for you because they're curious, because they're glad the CPA caught you, and because they're glad you're an Ex. You're an ex -criminal now, and because of your treatment, you'll never be able to commit another crime as long as you live. 
And that's the kind of guy they admire, so they want to see you, shake your hand and get your autograph." Joe didn't understand Hendricks completely, but the part he did understand he didn't believe. A crowd was waiting for him. He could see the people with his own eyes. When he left the hospital, they'd cheer and shout and ask for his autograph. If he wasn't a hero, what was he ? It took half an hour to get through the crowd. Cameras clicked all around him, a hundred kids asked for his autograph, everyone talked at once and cheered, smiled, laughed, patted him on the back and cheered some more. Only one thing confused him during all the excitement: a white-haired old lady with tears in her eyes said, "Thank heaven it was only a watch. Thank heaven you didn't kill someone! God bless you, son." And then the old lady had handed him a box of fudge and left him in total confusion. What she said didn't make sense. If he had killed someone rather than stealing a watch, he would be even more of a hero and the crowd would have cheered even louder. He knew: he had stood outside the CPA hospitals many times and the crowds always cheered louder when an ex-murderer came out. In Hendricks' robot-chauffeured car, he ate the fudge and consoled himself with the thought, People are funny. Who can understand 'em? Feeling happy for one of the few times in his life, he turned toward Hendricks and said, "Thanks for what you did. It turned out great. I'll be able to get a good job now." "That's why I met you at the hospital," Hendricks said. "I want to explain some things. I've known you for a long time and I know you're spectacularly dumb. You can't figure out some things for yourself and I don't want you walking around the rest of your life thinking I did you a favor." Joe frowned. Few men had ever done him a favor and he had rarely thanked anyone for anything. And now ... after thanking the man who'd done him the biggest favor of all, the man was denying it! 
"You robbed Gralewski's apartment," Hendricks said. "Gralewski is a CPA employee and he doesn't live in the apartment you robbed. The CPA pays the rent for that one and he lives in another. We have a lot of places like that. You see, it gives us a way to get rid of saps like you before they do real damage. We use it as a last resort when a DCT First Class won't take the free psycho treatment or—" "Well, it's still a favor." Hendricks' face hardened. "Favor? You wouldn't know a favor if you stumbled over one. I did it because it's standard procedure for your type of case. Anyone can—free of charge—have treatment by the best psychologists. Any DCT can stop being a DCT by simply asking for the treatment and taking it. But you wouldn't do that. You wanted to commit a crime, get caught and be a hero ... an Ex ." The car passed one of the CPA playgrounds. Boys and girls of all ages were laughing, squealing with joy as they played games designed by CPA psychologists to relieve tension. And—despite the treatment, Joe shuddered when he saw the psychologists standing to one side, quietly watching the children. The whole world was filled with CPA employees and volunteer workers. Everywhere you went, it was there, quietly watching you and analyzing you, and if you showed criminal tendencies, it watched you even more closely and analyzed you even more deeply until it took you apart and put you back together again the way it wanted you to be. "Being an Ex, you'll get the kind of job you always wanted," Hendricks continued. "You'll get a good-paying job, but you'll work for it. You'll work eight hours a day, work harder than you've ever worked before in your life, because every time you start to loaf, a voice in your head is going to say, Work! Work! Exes always get good jobs because employers know they're good workers. "But during these next few days, you'll discover what being an Ex is like. 
You see, Joe, the treatment can't possibly take all the criminal tendencies out of a man. So the treatment does the next best thing—you'll find a set of laws written in your mind. You might want to break one now and then, but you won't be able. I'll give you an illustration...." Joe's face reddened as Hendricks proceeded to call him a series of names. He wanted to smash the fat, grinning face, but the muscles in his arm froze before it moved an inch. And worse than that, a brief pain ripped through his skull. A pain so intense that, had it lasted a second longer, he would have screamed in agony. And above the pain, a voice whispered in his head, Unlawful to strike someone except in self-defense . He opened his mouth to tell Hendricks exactly what he thought of him, the CPA, the whole world. But the words stayed in his throat, the pain returned, and the mental voice whispered, Unlawful to curse . He had never heard how the treatment prevented an Ex from committing a crime. And now that he knew, it didn't seem fair. He decided to tell the whole story to the newspapers as soon as he could. And as soon as that decision formed in his mind, his body froze, the pain returned and the voice, Unlawful to divulge CPA procedure . "See what I mean?" Hendricks asked. "A century ago, you would have been locked in a prison and taxpayers' money would have supported you until the day you died. With the CPA system, you're returned to society, a useful citizen, unable to commit the smallest crime. And you've got a big hand in your dirty little mind that's going to slap it every time you get the wrong kind of thought. It'll keep slapping you until you learn. It might take weeks, months or years, but you'll learn sooner or later to not even think about doing anything wrong." He lit a cigarette and blew a smoke ring at the car's plush ceiling. "It's a great system, isn't it, Joe? A true democracy. Even a jerk like you is free to do what he wants, as long as it's legal." 
"I think it's a lousy, filthy system." Joe's head was still tingling with pain and he felt suffocated. The CPA was everywhere, only now it was also inside his head, telling him he couldn't do this, couldn't do that. All his life it had been telling him he couldn't do things he wanted to do and now .... Hendricks laughed. "You'll change your opinion. We live in a clean, wonderful world, Joe. A world of happy, healthy people. Except for freaks like yourself, criminals are—" "Let me out!" Joe grabbed at the door and was on the sidewalk, slamming the door behind him before the car stopped completely. He stared at the car as it pulled away from the curb and glided into the stream of traffic again. He realized he was a prisoner ... a prisoner inside his own body ... made a prisoner by a world that hated him back. He wanted to spit his contempt, but the increasingly familiar pain and voice prevented him. It was unlawful to spit on a sidewalk.
|
A. Hendricks had left out a book with unsecured addresses.
|
Which description is the best representation of Yrtok's role in the story?
A. She figured out what was wrong with Ammet when he fell.
B. She was the reason they had a quality water supply.
C. She found the purple berries, an important source of food for the stranded crew.
D. Her fall leads Kolin to find Ashlew
|
By H. B. Fyfe THE TALKATIVE TREE Dang vines! Beats all how some plants have no manners—but what do you expect, when they used to be men! All things considered—the obscure star, the undetermined damage to the stellar drive and the way the small planet's murky atmosphere defied precision scanners—the pilot made a reasonably good landing. Despite sour feelings for the space service of Haurtoz, steward Peter Kolin had to admit that casualties might have been far worse. Chief Steward Slichow led his little command, less two third-class ration keepers thought to have been trapped in the lower hold, to a point two hundred meters from the steaming hull of the Peace State. He lined them up as if on parade. Kolin made himself inconspicuous. "Since the crew will be on emergency watches repairing the damage," announced the Chief in clipped, aggressive tones, "I have volunteered my section for preliminary scouting, as is suitable. It may be useful to discover temporary sources in this area of natural foods." Volunteered HIS section! thought Kolin rebelliously. Like the Supreme Director of Haurtoz! Being conscripted into this idiotic space fleet that never fights is bad enough without a tin god on jets like Slichow! Prudently, he did not express this resentment overtly. His well-schooled features revealed no trace of the idea—or of any other idea. The Planetary State of Haurtoz had been organized some fifteen light-years from old Earth, but many of the home world's less kindly techniques had been employed. Lack of complete loyalty to the state was likely to result in a siege of treatment that left the subject suitably "re-personalized." Kolin had heard of instances wherein mere unenthusiastic posture had betrayed intentions to harbor treasonable thoughts. "You will scout in five details of three persons each," Chief Slichow said. "Every hour, each detail will send one person in to report, and he will be replaced by one of the five I shall keep here to issue rations." 
Kolin permitted himself to wonder when anyone might get some rest, but assumed a mildly willing look. (Too eager an attitude could arouse suspicion of disguising an improper viewpoint.) The maintenance of a proper viewpoint was a necessity if the Planetary State were to survive the hostile plots of Earth and the latter's decadent colonies. That, at least, was the official line. Kolin found himself in a group with Jak Ammet, a third cook, and Eva Yrtok, powdered foods storekeeper. Since the crew would be eating packaged rations during repairs, Yrtok could be spared to command a scout detail. Each scout was issued a rocket pistol and a plastic water tube. Chief Slichow emphasized that the keepers of rations could hardly, in an emergency, give even the appearance of favoring themselves in regard to food. They would go without. Kolin maintained a standard expression as the Chief's sharp stare measured them. Yrtok, a dark, lean-faced girl, led the way with a quiet monosyllable. She carried the small radio they would be permitted to use for messages of utmost urgency. Ammet followed, and Kolin brought up the rear. To reach their assigned sector, they had to climb a forbidding ridge of rock within half a kilometer. Only a sparse creeper grew along their way, its elongated leaves shimmering with bronze-green reflections against a stony surface; but when they topped the ridge a thick forest was in sight. Yrtok and Ammet paused momentarily before descending. Kolin shared their sense of isolation. They would be out of sight of authority and responsible for their own actions. It was a strange sensation. They marched down into the valley at a brisk pace, becoming more aware of the clouds and atmospheric haze. Distant objects seemed blurred by the mist, taking on a somber, brooding grayness. For all Kolin could tell, he and the others were isolated in a world bounded by the rocky ridge behind them and a semi-circle of damp trees and bushes several hundred meters away. 
He suspected that the hills rising mistily ahead were part of a continuous slope, but could not be sure. Yrtok led the way along the most nearly level ground. Low creepers became more plentiful, interspersed with scrubby thickets of tangled, spike-armored bushes. Occasionally, small flying things flickered among the foliage. Once, a shrub puffed out an enormous cloud of tiny spores. "Be a job to find anything edible here," grunted Ammet, and Kolin agreed. Finally, after a longer hike than he had anticipated, they approached the edge of the deceptively distant forest. Yrtok paused to examine some purple berries glistening dangerously on a low shrub. Kolin regarded the trees with misgiving. "Looks as tough to get through as a tropical jungle," he remarked. "I think the stuff puts out shoots that grow back into the ground to root as they spread," said the woman. "Maybe we can find a way through." In two or three minutes, they reached the abrupt border of the odd-looking trees. Except for one thick trunked giant, all of them were about the same height. They craned their necks to estimate the altitude of the monster, but the top was hidden by the wide spread of branches. The depths behind it looked dark and impenetrable. "We'd better explore along the edge," decided Yrtok. "Ammet, now is the time to go back and tell the Chief which way we're—Ammet!" Kolin looked over his shoulder. Fifty meters away, Ammet sat beside the bush with the purple berries, utterly relaxed. "He must have tasted some!" exclaimed Kolin. "I'll see how he is." He ran back to the cook and shook him by the shoulder. Ammet's head lolled loosely to one side. His rather heavy features were vacant, lending him a doped appearance. Kolin straightened up and beckoned to Yrtok. For some reason, he had trouble attracting her attention. Then he noticed that she was kneeling. "Hope she didn't eat some stupid thing too!" he grumbled, trotting back. 
As he reached her, whatever Yrtok was examining came to life and scooted into the underbrush with a flash of greenish fur. All Kolin saw was that it had several legs too many. He pulled Yrtok to her feet. She pawed at him weakly, eyes as vacant as Ammet's. When he let go in sudden horror, she folded gently to the ground. She lay comfortably on her side, twitching one hand as if to brush something away. When she began to smile dreamily, Kolin backed away. The corners of his mouth felt oddly stiff; they had involuntarily drawn back to expose his clenched teeth. He glanced warily about, but nothing appeared to threaten him. "It's time to end this scout," he told himself. "It's dangerous. One good look and I'm jetting off! What I need is an easy tree to climb." He considered the massive giant. Soaring thirty or forty meters into the thin fog and dwarfing other growth, it seemed the most promising choice. At first, Kolin saw no way, but then the network of vines clinging to the rugged trunk suggested a route. He tried his weight gingerly, then began to climb. "I should have brought Yrtok's radio," he muttered. "Oh, well, I can take it when I come down, if she hasn't snapped out of her spell by then. Funny … I wonder if that green thing bit her." Footholds were plentiful among the interlaced lianas. Kolin progressed rapidly. When he reached the first thick limbs, twice head height, he felt safer. Later, at what he hoped was the halfway mark, he hooked one knee over a branch and paused to wipe sweat from his eyes. Peering down, he discovered the ground to be obscured by foliage. "I should have checked from down there to see how open the top is," he mused. "I wonder how the view will be from up there?" "Depends on what you're looking for, Sonny!" something remarked in a soughing wheeze. Kolin, slipping, grabbed desperately for the branch. His fingers clutched a handful of twigs and leaves, which just barely supported him until he regained a grip with the other hand. 
The branch quivered resentfully under him. "Careful, there!" whooshed the eerie voice. "It took me all summer to grow those!" Kolin could feel the skin crawling along his backbone. "Who are you?" he gasped. The answering sigh of laughter gave him a distinct chill despite its suggestion of amiability. "Name's Johnny Ashlew. Kinda thought you'd start with what I am. Didn't figure you'd ever seen a man grown into a tree before." Kolin looked about, seeing little but leaves and fog. "I have to climb down," he told himself in a reasonable tone. "It's bad enough that the other two passed out without me going space happy too." "What's your hurry?" demanded the voice. "I can talk to you just as easy all the way down, you know. Airholes in my bark—I'm not like an Earth tree." Kolin examined the bark of the crotch in which he sat. It did seem to have assorted holes and hollows in its rough surface. "I never saw an Earth tree," he admitted. "We came from Haurtoz." "Where's that? Oh, never mind—some little planet. I don't bother with them all, since I came here and found out I could be anything I wanted." "What do you mean, anything you wanted?" asked Kolin, testing the firmness of a vertical vine. "Just what I said," continued the voice, sounding closer in his ear as his cheek brushed the ridged bark of the tree trunk. "And, if I do have to remind you, it would be nicer if you said 'Mr. Ashlew,' considering my age." "Your age? How old—?" "Can't really count it in Earth years any more. Lost track. I always figured bein' a tree was a nice, peaceful life; and when I remembered how long some of them live, that settled it. Sonny, this world ain't all it looks like." "It isn't, Mr. Ashlew?" asked Kolin, twisting about in an effort to see what the higher branches might hide. "Nope. Most everything here is run by the Life—that is, by the thing that first grew big enough to do some thinking, and set its roots down all over until it had control. That's the outskirts of it down below." 
"The other trees? That jungle?" "It's more'n a jungle, Sonny. When I landed here, along with the others from the Arcturan Spark, the planet looked pretty empty to me, just like it must have to—Watch it, there, Boy! If I didn't twist that branch over in time, you'd be bouncing off my roots right now!" "Th-thanks!" grunted Kolin, hanging on grimly. "Doggone vine!" commented the windy whisper. "He ain't one of my crowd. Landed years later in a ship from some star towards the center of the galaxy. You should have seen his looks before the Life got in touch with his mind and set up a mental field to help him change form. He looks twice as good as a vine!" "He's very handy," agreed Kolin politely. He groped for a foothold. "Well … matter of fact, I can't get through to him much, even with the Life's mental field helping. Guess he started living with a different way of thinking. It burns me. I thought of being a tree, and then he came along to take advantage of it!" Kolin braced himself securely to stretch tiring muscles. "Maybe I'd better stay a while," he muttered. "I don't know where I am." "You're about fifty feet up," the sighing voice informed him. "You ought to let me tell you how the Life helps you change form. You don't have to be a tree." "No?" "Uh-uh! Some of the boys that landed with me wanted to get around and see things. Lots changed to animals or birds. One even stayed a man—on the outside anyway. Most of them have to change as the bodies wear out, which I don't, and some made bad mistakes tryin' to be things they saw on other planets." "I wouldn't want to do that, Mr. Ashlew." "There's just one thing. The Life don't like taking chances on word about this place gettin' around. It sorta believes in peace and quiet. You might not get back to your ship in any form that could tell tales." "Listen!" Kolin blurted out. "I wasn't so much enjoying being what I was that getting back matters to me!" "Don't like your home planet, whatever the name was?" "Haurtoz. 
It's a rotten place. A Planetary State! You have to think and even look the way that's standard thirty hours a day, asleep or awake. You get scared to sleep for fear you might dream treason and they'd find out somehow." "Whooeee! Heard about them places. Must be tough just to live." Suddenly, Kolin found himself telling the tree about life on Haurtoz, and of the officially announced threats to the Planetary State's planned expansion. He dwelt upon the desperation of having no place to hide in case of trouble with the authorities. A multiple system of such worlds was agonizing to imagine. Somehow, the oddity of talking to a tree wore off. Kolin heard opinions spouting out which he had prudently kept bottled up for years. The more he talked and stormed and complained, the more relaxed he felt. "If there was ever a fellow ready for this planet," decided the tree named Ashlew, "you're it, Sonny! Hang on there while I signal the Life by root!" Kolin sensed a lack of direct attention. The rustle about him was natural, caused by an ordinary breeze. He noticed his hands shaking. "Don't know what got into me, talking that way to a tree," he muttered. "If Yrtok snapped out of it and heard, I'm as good as re-personalized right now." As he brooded upon the sorry choice of arousing a search by hiding where he was or going back to bluff things out, the tree spoke. "Maybe you're all set, Sonny. The Life has been thinkin' of learning about other worlds. If you can think of a safe form to jet off in, you might make yourself a deal. How'd you like to stay here?" "I don't know," said Kolin. "The penalty for desertion—" "Whoosh! Who'd find you? You could be a bird, a tree, even a cloud." Silenced but doubting, Kolin permitted himself to try the dream on for size. He considered what form might most easily escape the notice of search parties and still be tough enough to live a long time without renewal. 
Another factor slipped into his musings: mere hope of escape was unsatisfying after the outburst that had defined his fuming hatred for Haurtoz. I'd better watch myself! he thought. Don't drop diamonds to grab at stars! "What I wish I could do is not just get away but get even for the way they make us live … the whole damn set-up. They could just as easy make peace with the Earth colonies. You know why they don't?" "Why?" wheezed Ashlew. "They're scared that without talk of war, and scouting for Earth fleets that never come, people would have time to think about the way they have to live and who's running things in the Planetary State. Then the gravy train would get blown up—and I mean blown up!" The tree was silent for a moment. Kolin felt the branches stir meditatively. Then Ashlew offered a suggestion. "I could tell the Life your side of it," he hissed. "Once in with us, you can always make thinking connections, no matter how far away. Maybe you could make a deal to kill two birds with one stone, as they used to say on Earth…." Chief Steward Slichow paced up and down beside the ration crate turned up to serve him as a field desk. He scowled in turn, impartially, at his watch and at the weary stewards of his headquarters detail. The latter stumbled about, stacking and distributing small packets of emergency rations. The line of crewmen released temporarily from repair work was transient as to individuals but immutable as to length. Slichow muttered something profane about disregard of orders as he glared at the rocky ridges surrounding the landing place. He was so intent upon planning greetings with which to favor the tardy scouting parties that he failed to notice the loose cloud drifting over the ridge. It was tenuous, almost a haze. Close examination would have revealed it to be made up of myriads of tiny spores. They resembled those cast forth by one of the bushes Kolin's party had passed. 
Along the edges, the haze faded raggedly into thin air, but the units evidently formed a cohesive body. They drifted together, approaching the men as if taking intelligent advantage of the breeze. One of Chief Slichow's staggering flunkies, stealing a few seconds of relaxation on the pretext of dumping an armful of light plastic packing, wandered into the haze. He froze. After a few heartbeats, he dropped the trash and stared at ship and men as if he had never seen either. A hail from his master moved him. "Coming, Chief!" he called but, returning at a moderate pace, he murmured, "My name is Frazer. I'm a second assistant steward. I'll think as Unit One." Throughout the cloud of spores, the mind formerly known as Peter Kolin congratulated itself upon its choice of form. Nearer to the original shape of the Life than Ashlew got, he thought. He paused to consider the state of the tree named Ashlew, half immortal but rooted to one spot, unable to float on a breeze or through space itself on the pressure of light. Especially, it was unable to insinuate any part of itself into the control center of another form of life, as a second spore was taking charge of the body of Chief Slichow at that very instant. There are not enough men, thought Kolin. Some of me must drift through the airlock. In space, I can spread through the air system to the command group. Repairs to the Peace State and the return to Haurtoz passed like weeks to some of the crew but like brief moments in infinity to other units. At last, the ship parted the air above Headquarters City and landed. The unit known as Captain Theodor Kessel hesitated before descending the ramp. He surveyed the field, the city and the waiting team of inspecting officers. "Could hardly be better, could it?" he chuckled to the companion unit called Security Officer Tarth. "Hardly, sir. All ready for the liberation of Haurtoz." "Reformation of the Planetary State," mused the captain, smiling dreamily as he grasped the handrail. 
"And then—formation of the Planetary Mind!" END Transcriber's Note: This e-text was produced from Worlds of If January 1962. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.
|
D. Her fall leads Kolin to find Ashlew
|
When discussing these films, which word best describes the author?
A. vague
B. optimistic
C. knowledgeable
D. biased
|
War and Pieces No movie in the last decade has succeeded in psyching out critics and audiences as fully as the powerful, rambling war epic The Thin Red Line, Terrence Malick's return to cinema after 20 years. I've sat through it twice and am still trying to sort out my responses, which run from awe to mockery and back. Like Saving Private Ryan, the picture wallops you in the gut with brilliant, splattery battle montages and Goyaesque images of hell on earth. But Malick, a certified intellectual and the Pynchonesque figure who directed Badlands and Days of Heaven in the 1970s and then disappeared, is in a different philosophical universe from Steven Spielberg. Post-carnage, his sundry characters philosophize about their experiences in drowsy, runic voice-overs that come at you like slow bean balls: "Why does nature vie with itself? ... Is there an avenging power in nature, not one power but two?" Or "This great evil: Where's it come from? What seed, what root did it grow from? Who's doin' this? Who's killin' us, robbin' us of life and light?" First you get walloped with viscera, then you get beaned by blather. Those existential speculations don't derive from the screenplay's source, an archetypal but otherwise down-to-earth 1962 novel by James Jones (who also wrote From Here to Eternity) about the American invasion of the South Pacific island of Guadalcanal. They're central to Malick's vision of the story, however, and not specious. In the combat genre, the phrase "war is hell" usually means nothing more than that it's a bummer to lose a limb or two, or to see your buddy get his head blown off. A true work of art owes us more than literal horrors, and Malick obliges by making his theater of war the setting for nothing less than a meditation on the existence of God. 
He tells the story solemnly, in three parts, with a big-deal cast (Sean Penn, Nick Nolte, John Cusack) and a few other major stars (John Travolta, Woody Harrelson, George Clooney) dropping by for cameos. After an Edenic prelude, in which a boyishly idealistic absent without leave soldier, Pvt. Witt (Jim Caviezel), swims with native youths to the accompaniment of a heavenly children's choir, the first part sees the arrival of the Allied forces on the island, introduces the principal characters (none of whom amounts to a genuine protagonist), and lays out the movie's geographical and philosophical terrain. The centerpiece--the fighting--goes on for over an hour and features the most frantic and harrowing sequences, chiefly the company's initially unsuccessful frontal assault on a Japanese hilltop bunker. The coda lasts nearly 40 minutes and is mostly talk and cleanup, the rhythms growing more relaxed until a final, incongruous spasm of violence--whereupon the surviving soldiers pack their gear and motor off to another South Pacific battle. In the final shot, a twisted tree grows on the waterline of the beach, the cycle of life beginning anew. The Thin Red Line has a curious sound-scape, as the noise of battle frequently recedes to make room for interior monologues and Hans Zimmer's bump-bump, minimalist New Age music. Pvt. Bell (Ben Chaplin) talks to his curvy, redheaded wife, viewed in deliriously sensual flashbacks. ("Love: Where does it come from? Who lit this flame in us?") Lt. Col. Tall (Nolte), a borderline lunatic passed over one too many times for promotion and itching to win a battle no matter what the human cost, worries groggily about how his men perceive him. The dreamer Witt poses folksy questions about whether we're all a part of one big soul. If the movie has a spine, it's his off-and-on dialogue with Sgt. Welsh (Penn), who's increasingly irritated by the private's beatific, almost Billy Budd-like optimism. 
Says Welsh, "In this world, a man himself is nothin', and there ain't no world but this one." Replies Witt, high cheekbones glinting, "I seen another world." At first it seems as if Witt will indeed be Billy Budd to Welsh's vindictive Claggart. But if Witt is ultimately an ethereal martyr, Welsh turns out to be a Bogart-like romantic who can't stop feeling pain in the face of an absent God. He speaks the movie's epitaph, "Darkness and light, strife and love: Are they the workings of one mind, the feature of the same face? O my soul, let me be in you now. Look out through my eyes. Look out at the things you made, all things shining." Malick puts a lot of shining things on the screen: soldiers, natives, parrots, bats, rodents, visions of Eden by way of National Geographic and of the Fall by way of Alpo. Malick's conception of consciousness distributes it among the animate and inanimate alike; almost every object is held up for rapturous contemplation. I could cite hundreds of images: A soldier in a rocking boat hovers over a letter he's writing, which is crammed from top to bottom and side to side with script. (You don't know the man, but you can feel in an instant his need to cram everything in.) A small, white-bearded Melanesian man strolls nonchalantly past a platoon of tensely trudging grunts who can't believe they're encountering this instead of a hail of Japanese bullets. Two shots bring down the first pair of soldiers to advance on the hill; a second later, the sun plays mystically over the tall, yellow grass that has swallowed their bodies. John Toll's camera rushes in on a captured Japanese garrison: One Japanese soldier shrieks; another, skeletal, laughs and laughs; a third weeps over a dying comrade. The face of a Japanese soldier encased in earth speaks from the dead, "Are you righteous? Know that I was, too." Whether or not these pearllike epiphanies are strung is another matter. 
Malick throws out his overarching theme--is nature two-sided, at war with itself?--in the first few minutes but, for all his startling juxtapositions, he never dramatizes it with anything approaching the clarity of, say, Brian De Palma's Casualties of War (1989). Besides the dialogue between Welsh and Witt, The Thin Red Line's other organizing story involves a wrenching tug of war between Nolte's ambition-crazed Tall and Capt. Staros (Elias Koteas), who refuses an order to send his men on what will surely be a suicidal--and futile--assault on a bunker. But matters of cause and effect don't really interest Malick. Individual acts of conscience can and do save lives, and heroism can win a war or a battle, he acknowledges. But Staros is ultimately sent packing, and Malick never bothers to trace the effect of his action on the Guadalcanal operation. In fact, the entire battle seems to take place in a crazed void. Tall quotes Homer's "rosy-fingered dawn" and orders a meaningless bombardment to "buck the men up--it'll look like the Japs are catching hell." Soldiers shoot at hazy figures, unsure whether they're Japanese or American. Men collide, blow themselves in half with their own mishandled grenades, stab themselves frantically with morphine needles, shove cigarettes up their noses to keep the stench of the dying and the dead at bay. A tiny bird, mortally wounded, flutters in the grass. Malick is convincing--at times overwhelming--on the subject of chaos. It's when he tries to ruminate on order that he gets gummed up, retreating to one of his gaseous multiple mouthpieces: "Where is it that we were together? Who is it that I lived with? Walked with? The brother. ... The friend. ... One mind." I think I'd have an easier time with Malick's metaphysical speculations if I had a sense of some concomitant geopolitical ones--central to any larger musings on forces of nature as viewed through the prism of war. 
Couldn't it be that the German and Japanese fascist orders were profoundly anti-natural, and that the Allies' cause was part of a violent but natural correction? You don't have to buy into Spielberg's Lincolnesque pieties in Saving Private Ryan to believe that there's a difference between World War II and Vietnam (or, for that matter, World War II and the invasion of Grenada or our spats with Iraq). While he was at Harvard, Malick might have peeled himself off the lap of his pointy-headed mentor, Stanley Cavell, the philosopher and film theorist, and checked out a few of Michael Waltzer's lectures on just and unjust wars. Maybe then he'd view Guadalcanal not in an absurdist vacuum (the soldiers come, they kill and are killed, they leave) but in the larger context of a war that was among the most rational (in its aims, if not its methods) fought in the last several centuries. For all his visionary filmmaking, Malick's Zen neutrality sometimes seems like a cultivated--and pretentious--brand of fatuousness. John Travolta's empty nightclub impersonation of Bill Clinton in Primary Colors (1998) had one positive result: It gave him a jump-start on Jan Schlichtmann, the reckless personal injury lawyer at the center of A Civil Action. Travolta's Schlichtmann is much more redolent of Clinton: slick and selfish and corrupt in lots of ways but basically on the side of the angels, too proud and arrogant to change tactics when all is certainly lost. Schlichtmann pursued--and more or less blew--a civil liability case against the corporate giants Beatrice and W.R. Grace over the allegedly carcinogenic water supply of Woburn, Mass. Boston writer Jonathan Harr, in the book the movie is based on, went beyond the poison in the Woburn wells to evoke (stopping just short of libel) the poison of the civil courts, where platoons of overpaid corporate lawyers can drive opponents with pockets less deep and psyches less stable into bankruptcy and hysteria. 
Director Steven Zaillian's version doesn't capture the mounting rage that one experiences while reading Harr's book, or even the juicy legal machinations that Francis Ford Coppola giddily manipulated in his underrated adaptation of John Grisham's The Rainmaker (1997). But A Civil Action is a sturdy piece of work, an old-fashioned conversion narrative with some high-tech zip. Schlichtmann doesn't take this "orphan" case--brought by the parents of several children who died of leukemia--because he wants to do good but because he figures that Grace and Beatrice will fork over huge sums of money to keep the parents from testifying publicly about their children's last days. He might succeed, too, if it weren't for Jerome Facher (Robert Duvall), the Beatrice lawyer who knows how to keep Schlichtmann shadowboxing while his small firm's financial resources dwindle to nothing. Zaillian is at his most assured when he cuts back and forth between Facher's Harvard Law School lectures on what not to do in court and Schlichtmann's fumbling prosecution. The sequence has the extra dimension of good journalism: It dramatizes and comments simultaneously. Plus, it gives Duvall a splendid platform for impish understatement. (Duvall has become more fun to watch than just about anyone in movies.) Elsewhere, Zaillian takes a more surface approach, sticking to legal minutiae and rarely digging for the deeper evil. As in his Searching for Bobby Fischer (1993), the outcome of every scene is predictable, but how Zaillian gets from beat to beat is surprisingly fresh. He also gets sterling bit performances from Sydney Pollack as the spookily sanguine Grace CEO, William H. Macy as Schlichtmann's rabbity accountant, and Kathleen Quinlan as the mother of one of the victims. Quinlan knows that when you're playing a woman who has lost a child you don't need to emote--you reveal the emotion by trying not to emote. 
To the families involved in the Woburn tragedy, the real climax of this story isn't the downbeat ending of the book or the sleight of hand, "let's call the Environmental Protection Agency," upbeat ending of the movie. The climax is the publication of a book that takes the plaintiffs' side and that remains on the best-seller list in hardcover and paperback for years. The climax is the movie starring John Travolta. Beatrice and Grace made out OK legally, but some of us will never use their products again without thinking about Travolta losing his shirt in the name of those wasted-away little kids.
|
C. knowledgeable
|
What didn't this "bodyguard" do for Gabe?
A. tell his wife the truth
B. pulled him out of a helicopter crash
C. chase him across multiple planets
D. stop him from being beaten up
|
Bodyguard By CHRISTOPHER GRIMM Illustrated by CAVAT [Transcriber's Note: This etext was produced from Galaxy Science Fiction February 1956. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] When overwhelming danger is constantly present, of course a man is entitled to have a bodyguard. The annoyance was that he had to do it himself ... and his body would not cooperate! The man at the bar was exceptionally handsome, and he knew it. So did the light-haired girl at his side, and so did the nondescript man in the gray suit who was watching them from a booth in the corner. Everyone in the room was aware of the big young man, and most of the humans present were resentful, for he handled himself consciously and arrogantly, as if his appearance alone were enough to make him superior to anyone. Even the girl with him was growing restless, for she was accustomed to adulation herself, and next to Gabriel Lockard she was almost ordinary-looking. As for the extraterrestrials—it was a free bar—they were merely amused, since to them all men were pathetically and irredeemably hideous. Gabe threw his arm wide in one of his expansive gestures. There was a short man standing next to the pair—young, as most men and women were in that time, thanks to the science which could stave off decay, though not death—but with no other apparent physical virtue, for plastic surgery had not fulfilled its bright promise of the twentieth century. The drink he had been raising to his lips splashed all over his clothing; the glass shattered at his feet. Now he was not only a rather ugly little man, but also a rather ridiculous one—or at least he felt he was, which was what mattered. "Sorry, colleague," Gabe said lazily. "All my fault. You must let me buy you a replacement." He gestured to the bartender. "Another of the same for my fellow-man here." 
The ugly man dabbed futilely at his dripping trousers with a cloth hastily supplied by the management. "You must allow me to pay your cleaning bill," Gabe said, taking out his wallet and extracting several credit notes without seeming to look at them. "Here, have yourself a new suit on me." You could use one was implied. And that, coming on top of Gabriel Lockard's spectacular appearance, was too much. The ugly man picked up the drink the bartender had just set before him and started to hurl it, glass and all, into Lockard's handsome face. Suddenly a restraining hand was laid upon his arm. "Don't do that," the nondescript man who had been sitting in the corner advised. He removed the glass from the little man's slackening grasp. "You wouldn't want to go to jail because of him." The ugly man gave him a bewildered stare. Then, seeing the forces now ranged against him—including his own belated prudence—were too strong, he stumbled off. He hadn't really wanted to fight, only to smash back, and now it was too late for that. Gabe studied the newcomer curiously. "So, it's you again?" The man in the gray suit smiled. "Who else in any world would stand up for you?" "I should think you'd have given up by now. Not that I mind having you around, of course," Gabriel added too quickly. "You do come in useful at times, you know." "So you don't mind having me around?" The nondescript man smiled again. "Then what are you running from, if not me? You can't be running from yourself—you lost yourself a while back, remember?" Gabe ran a hand through his thick blond hair. "Come on, have a drink with me, fellow-man, and let's let bygones be bygones. I owe you something—I admit that. Maybe we can even work this thing out." "I drank with you once too often," the nondescript man said. "And things worked out fine, didn't they? For you." 
His eyes studied the other man's incredibly handsome young face, noted the suggestion of bags under the eyes, the beginning of slackness at the lips, and were not pleased with what they saw. "Watch yourself, colleague," he warned as he left. "Soon you might not be worth the saving." "Who was that, Gabe?" the girl asked. He shrugged. "I never saw him before in my life." Of course, knowing him, she assumed he was lying, but, as a matter of fact, just then he happened to have been telling the truth. Once the illuminators were extinguished in Gabriel Lockard's hotel suite, it seemed reasonably certain to the man in the gray suit, as he watched from the street, that his quarry would not go out again that night. So he went to the nearest airstation. There he inserted a coin in a locker, into which he put most of his personal possessions, reserving only a sum of money. After setting the locker to respond to the letter combination bodyguard , he went out into the street. If he had met with a fatal accident at that point, there would have been nothing on his body to identify him. As a matter of fact, no real identification was possible, for he was no one and had been no one for years. The nondescript man hailed a cruising helicab. "Where to, fellow-man?" the driver asked. "I'm new in the parish," the other man replied and let it hang there. "Oh...? Females...? Narcophagi...? Thrill-mills?" But to each of these questions the nondescript man shook his head. "Games?" the driver finally asked, although he could guess what was wanted by then. "Dice...? Roulette...? Farjeen?" "Is there a good zarquil game in town?" The driver moved so he could see the face of the man behind him in the teleview. A very ordinary face. "Look, colleague, why don't you commit suicide? It's cleaner and quicker." "I can't contact your attitude," the passenger said with a thin smile. "Bet you've never tried the game yourself. Each time it happens, there's a ... 
well, there's no experience to match it at a thrill-mill." He gave a sigh that was almost an audible shudder, and which the driver misinterpreted as an expression of ecstasy. "Each time, eh? You're a dutchman then?" The driver spat out of the window. "If it wasn't for the nibble, I'd throw you right out of the cab. Without even bothering to take it down even. I hate dutchmen ... anybody with any legitimate feelings hates 'em." "But it would be silly to let personal prejudice stand in the way of a commission, wouldn't it?" the other man asked coolly. "Of course. You'll need plenty of foliage, though." "I have sufficient funds. I also have a gun." "You're the dictator," the driver agreed sullenly. II It was a dark and rainy night in early fall. Gabe Lockard was in no condition to drive the helicar. However, he was stubborn. "Let me take the controls, honey," the light-haired girl urged, but he shook his handsome head. "Show you I can do something 'sides look pretty," he said thickly, referring to an earlier and not amicable conversation they had held, and of which she still bore the reminder on one thickly made-up cheek. Fortunately the car was flying low, contrary to regulations, so that when they smashed into the beacon tower on the outskirts of the little town, they didn't have far to fall. And hardly had their car crashed on the ground when the car that had been following them landed, and a short fat man was puffing toward them through the mist. To the girl's indignation, the stranger not only hauled Gabe out onto the dripping grass first, but stopped and deliberately examined the young man by the light of his minilume, almost as if she weren't there at all. Only when she started to struggle out by herself did he seem to remember her existence. He pulled her away from the wreck just a moment before the fuel tank exploded and the 'copter went up in flames. Gabe opened his eyes and saw the fat man gazing down at him speculatively. 
"My guardian angel," he mumbled—shock had sobered him a little, but not enough. He sat up. "Guess I'm not hurt or you'd have thrown me back in." "And that's no joke," the fat man agreed. The girl shivered and at that moment Gabriel suddenly seemed to recall that he had not been alone. "How about Helen? She on course?" "Seems to be," the fat man said. "You all right, miss?" he asked, glancing toward the girl without, she thought, much apparent concern. " Mrs. ," Gabriel corrected. "Allow me to introduce you to Mrs. Gabriel Lockard," he said, bowing from his seated position toward the girl. "Pretty bauble, isn't she?" "I'm delighted to meet you, Mrs. Gabriel Lockard," the fat man said, looking at her intently. His small eyes seemed to strip the make-up from her cheek and examine the livid bruise underneath. "I hope you'll be worthy of the name." The light given off by the flaming car flickered on his face and Gabriel's and, she supposed, hers too. Otherwise, darkness surrounded the three of them. There were no public illuminators this far out—even in town the lights were dimming and not being replaced fast enough nor by the newer models. The town, the civilization, the planet all were old and beginning to slide downhill.... Gabe gave a short laugh, for no reason that she could see. There was the feeling that she had encountered the fat man before, which was, of course, absurd. She had an excellent memory for faces and his was not included in her gallery. The girl pulled her thin jacket closer about her chilly body. "Aren't you going to introduce your—your friend to me, Gabe?" "I don't know who he is," Gabe said almost merrily, "except that he's no friend of mine. Do you have a name, stranger?" "Of course I have a name." The fat man extracted an identification card from his wallet and read it. "Says here I'm Dominic Bianchi, and Dominic Bianchi is a retail milgot dealer.... 
Only he isn't a retail milgot dealer any more; the poor fellow went bankrupt a couple of weeks ago, and now he isn't ... anything." "You saved our lives," the girl said. "I'd like to give you some token of my—of our appreciation." Her hand reached toward her credit-carrier with deliberate insult. He might have saved her life, but only casually, as a by-product of some larger scheme, and her appreciation held little gratitude. The fat man shook his head without rancor. "I have plenty of money, thank you, Mrs. Gabriel Lockard.... Come," he addressed her husband, "if you get up, I'll drive you home. I warn you, be more careful in the future! Sometimes," he added musingly, "I almost wish you would let something happen. Then my problem would not be any problem, would it?" Gabriel shivered. "I'll be careful," he vowed. "I promise—I'll be careful." When he was sure that his charge was safely tucked in for the night, the fat man checked his personal possessions. He then requested a taxi driver to take him to the nearest zarquil game. The driver accepted the commission phlegmatically. Perhaps he was more hardened than the others had been; perhaps he was unaware that the fat man was not a desperate or despairing individual seeking one last chance, but what was known colloquially as a flying dutchman, a man, or woman, who went from one zarquil game to another, loving the thrill of the sport, if you could call it that, for its own sake, and not for the futile hope it extended and which was its sole shred of claim to moral justification. Perhaps—and this was the most likely hypothesis—he just didn't care. Zarquil was extremely illegal, of course—so much so that there were many legitimate citizens who weren't quite sure just what the word implied, knowing merely that it was one of those nameless horrors so deliciously hinted at by the fax sheets under the generic term of "crimes against nature." 
Actually the phrase was more appropriate to zarquil than to most of the other activities to which it was commonly applied. And this was one crime—for it was crime in law as well as nature—in which victim had to be considered as guilty as perpetrator; otherwise the whole legal structure of society would collapse. Playing the game was fabulously expensive; it had to be to make it profitable for the Vinzz to run it. Those odd creatures from Altair's seventh planet cared nothing for the welfare of the completely alien human beings; all they wanted was to feather their own pockets with interstellar credits, so that they could return to Vinau and buy many slaves. For, on Vinau, bodies were of little account, and so to them zarquil was the equivalent of the terrestrial game musical chairs. Which was why they came to Terra to make profits—there has never been big money in musical chairs as such. When the zarquil operators were apprehended, which was not frequent—as they had strange powers, which, not being definable, were beyond the law—they suffered their sentences with equanimity. No Earth court could give an effective prison sentence to a creature whose life spanned approximately two thousand terrestrial years. And capital punishment had become obsolete on Terra, which very possibly saved the terrestrials embarrassment, for it was not certain that their weapons could kill the Vinzz ... or whether, in fact, the Vinzz merely expired after a period of years out of sheer boredom. Fortunately, because trade was more profitable than war, there had always been peace between Vinau and Terra, and, for that reason, Terra could not bar the entrance of apparently respectable citizens of a friendly planet. The taxi driver took the fat man to one of the rather seedy locales in which the zarquil games were usually found, for the Vinzz attempted to conduct their operations with as much unobtrusiveness as was possible. 
But the front door swung open on an interior that lacked the opulence of the usual Vinoz set-up; it was down-right shabby, the dim olive light hinting of squalor rather than forbidden pleasures. That was the trouble in these smaller towns—you ran greater risks of getting involved in games where the players had not been carefully screened. The Vinoz games were usually clean, because that paid off better, but, when profits were lacking, the Vinzz were capable of sliding off into darkside practices. Naturally the small-town houses were more likely to have trouble in making ends meet, because everybody in the parish knew everybody else far too well. The fat man wondered whether that had been his quarry's motive in coming to such desolate, off-trail places—hoping that eventually disaster would hit the one who pursued him. Somehow, such a plan seemed too logical for the man he was haunting. However, beggars could not be choosers. The fat man paid off the heli-driver and entered the zarquil house. "One?" the small green creature in the slightly frayed robe asked. "One," the fat man answered. III The would-be thief fled down the dark alley, with the hot bright rays from the stranger's gun lancing out after him in flamboyant but futile patterns. The stranger, a thin young man with delicate, angular features, made no attempt to follow. Instead, he bent over to examine Gabriel Lockard's form, appropriately outstretched in the gutter. "Only weighted out," he muttered, "he'll be all right. Whatever possessed you two to come out to a place like this?" "I really think Gabriel must be possessed...." the girl said, mostly to herself. "I had no idea of the kind of place it was going to be until he brought me here. The others were bad, but this is even worse. It almost seems as if he went around looking for trouble, doesn't it?" "It does indeed," the stranger agreed, coughing a little. 
It was growing colder and, on this world, the cities had no domes to protect them from the climate, because it was Earth and the air was breathable and it wasn't worth the trouble of fixing up. The girl looked closely at him. "You look different, but you are the same man who pulled us out of that aircar crash, aren't you? And before that the man in the gray suit? And before that...?" The young man's cheekbones protruded as he smiled. "Yes, I'm all of them." "Then what they say about the zarquil games is true? There are people who go around changing their bodies like—like hats?" Automatically she reached to adjust the expensive bit of blue synthetic on her moon-pale hair, for she was always conscious of her appearance; if she had not been so before marriage, Gabriel would have taught her that. He smiled again, but coughed instead of speaking. "But why do you do it? Why! Do you like it? Or is it because of Gabriel?" She was growing a little frantic; there was menace here and she could not understand it nor determine whether or not she was included in its scope. "Do you want to keep him from recognizing you; is that it?" "Ask him." "He won't tell me; he never tells me anything. We just keep running. I didn't recognize it as running at first, but now I realize that's what we've been doing ever since we were married. And running from you, I think?" There was no change of expression on the man's gaunt face, and she wondered how much control he had over a body that, though second- or third- or fourth-hand, must be new to him. How well could he make it respond? What was it like to step into another person's casing? But she must not let herself think that way or she would find herself looking for a zarquil game. It would be one way of escaping Gabriel, but not, she thought, the best way; her body was much too good a one to risk so casually. It was beginning to snow. Light, feathery flakes drifted down on her husband's immobile body. 
She pulled her thick coat—of fur taken from some animal who had lived and died light-years away—more closely about herself. The thin young man began to cough again. Overhead a tiny star seemed to detach itself from the pale flat disk of the Moon and hurl itself upward—one of the interstellar ships embarking on its long voyage to distant suns. She wished that somehow she could be on it, but she was here, on this solitary old world in a barren solar system, with her unconscious husband and a strange man who followed them, and it looked as if here she would stay ... all three of them would stay.... "If you're after Gabriel, planning to hurt him," she asked, "why then do you keep helping him?" "I am not helping him . And he knows that." "You'll change again tonight, won't you?" she babbled. "You always change after you ... meet us? I think I'm beginning to be able to identify you now, even when you're ... wearing a new body; there's something about you that doesn't change." "Too bad he got married," the young man said. "I could have followed him for an eternity and he would never have been able to pick me out from the crowd. Too bad he got married anyway," he added, his voice less impersonal, "for your sake." She had come to the same conclusion in her six months of marriage, but she would not admit that to an outsider. Though this man was hardly an outsider; he was part of their small family group—as long as she had known Gabriel, so long he must have known her. And she began to suspect that he was even more closely involved than that. "Why must you change again?" she persisted, obliquely approaching the subject she feared. "You have a pretty good body there. Why run the risk of getting a bad one?" "This isn't a good body," he said. "It's diseased. Sure, nobody's supposed to play the game who hasn't passed a thorough medical examination. 
But in the places to which your husband has been leading me, they're often not too particular, as long as the player has plenty of foliage." "How—long will it last you?" "Four or five months, if I'm careful." He smiled. "But don't worry, if that's what you're doing; I'll get it passed on before then. It'll be expensive—that's all. Bad landing for the guy who gets it, but then it was tough on me too, wasn't it?" "But how did you get into this ... pursuit?" she asked again. "And why are you doing it?" People didn't have any traffic with Gabriel Lockard for fun, not after they got to know him. And this man certainly should know him better than most. "Ask your husband." The original Gabriel Lockard looked down at the prostrate, snow-powdered figure of the man who had stolen his body and his name, and stirred it with his toe. "I'd better call a cab—he might freeze to death." He signaled and a cab came. "Tell him, when he comes to," he said to the girl as he and the driver lifted the heavy form of her husband into the helicar, "that I'm getting pretty tired of this." He stopped for a long spell of coughing. "Tell him that sometimes I wonder whether cutting off my nose wouldn't, in the long run, be most beneficial for my face." "Sorry," the Vinzz said impersonally, in English that was perfect except for the slight dampening of the sibilants, "but I'm afraid you cannot play." "Why not?" The emaciated young man began to put on his clothes. "You know why. Your body is worthless. And this is a reputable house." "But I have plenty of money." The young man coughed. The Vinzz shrugged. "I'll pay you twice the regular fee." The green one shook his head. "Regrettably, I do mean what I say. This game is really clean." "In a town like this?" "That is the reason we can afford to be honest." The Vinzz' tendrils quivered in what the man had come to recognize as amusement through long, but necessarily superficial acquaintance with the Vinzz. 
His heavy robe of what looked like moss-green velvet, but might have been velvet-green moss, encrusted with oddly faceted alien jewels, swung with him. "We do a lot of business here," he said unnecessarily, for the whole set-up spelled wealth far beyond the dreams of the man, and he was by no means poor when it came to worldly goods. "Why don't you try another town where they're not so particular?" The young man smiled wryly. Just his luck to stumble on a sunny game. He never liked to risk following his quarry in the same configuration. And even though only the girl had actually seen him this time, he wouldn't feel at ease until he had made the usual body-shift. Was he changing because of Gabriel, he wondered, or was he using his own discoverment and identification simply as an excuse to cover the fact that none of the bodies that fell to his lot ever seemed to fit him? Was he activated solely by revenge or as much by the hope that in the hazards of the game he might, impossible though it now seemed, some day win another body that approached perfection as nearly as his original casing had? He didn't know. However, there seemed to be no help for it now; he would have to wait until they reached the next town, unless the girl, seeing him reappear in the same guise, would guess what had happened and tell her husband. He himself had been a fool to admit to her that the hulk he inhabited was a sick one; he still couldn't understand how he could so casually have entrusted her with so vital a piece of information. The Vinzz had been locking antennae with another of his kind. Now they detached, and the first approached the man once more. "There is, as it happens, a body available for a private game," he lisped. "No questions to be asked or answered. All I can tell you is that it is in good health." The man hesitated. "But unable to pass the screening?" he murmured aloud. "A criminal then." The green one's face—if you could call it a face—remained impassive. "Male?" 
"Of course," the Vinzz said primly. His kind did have certain ultimate standards to which they adhered rigidly, and one of those was the curious tabu against mixed games, strictly enforced even though it kept them from tapping a vast source of potential players. There had also never been a recorded instance of humans and extraterrestrials exchanging identities, but whether that was the result of tabu or biological impossibility, no one could tell. It might merely be prudence on the Vinzz' part—if it had ever been proved that an alien life-form had "desecrated" a human body, Earthmen would clamor for war ... for on this planet humanity held its self-bestowed purity of birthright dear—and the Vinzz, despite being unquestionably the stronger, were pragmatic pacifists. It had been undoubtedly some rabid member of the anti-alien groups active on Terra who had started the rumor that the planetary slogan of Vinau was, "Don't beat 'em; cheat 'em." "It would have to be something pretty nuclear for the other guy to take such a risk." The man rubbed his chin thoughtfully. "How much?" "Thirty thousand credits." "Why, that's three times the usual rate!" "The other will pay five times the usual rate." "Oh, all right," the delicate young man gave in. It was a terrific risk he was agreeing to take, because, if the other was a criminal, he himself would, upon assuming the body, assume responsibility for all the crimes it had committed. But there was nothing else he could do. He looked at himself in the mirror and found he had a fine new body; tall and strikingly handsome in a dark, coarse-featured way. Nothing to match the one he had lost, in his opinion, but there were probably many people who might find this one preferable. No identification in the pockets, but it wasn't necessary; he recognized the face. 
Not that it was a very famous or even notorious one, but the dutchman was a careful student of the "wanted" fax that had decorated public buildings from time immemorial, for he was ever mindful of the possibility that he might one day find himself trapped unwittingly in the body of one of the men depicted there. And he knew that this particular man, though not an important criminal in any sense of the word, was one whom the police had been ordered to burn on sight. The abolishing of capital punishment could not abolish the necessity for self-defense, and the man in question was not one who would let himself be captured easily, nor whom the police intended to capture easily. This might be a lucky break for me after all , the new tenant thought, as he tried to adjust himself to the body. It, too, despite its obvious rude health, was not a very comfortable fit. I can do a lot with a hulk like this. And maybe I'm cleverer than the original owner; maybe I'll be able to get away with it. IV "Look, Gabe," the girl said, "don't try to fool me! I know you too well. And I know you have that man's—the real Gabriel Lockard's—body." She put unnecessary stardust on her nose as she watched her husband's reflection in the dressing table mirror. Lockard—Lockard's body, at any rate—sat up and felt his unshaven chin. "That what he tell you?" "No, he didn't tell me anything really—just suggested I ask you whatever I want to know. But why else should he guard somebody he obviously hates the way he hates you? Only because he doesn't want to see his body spoiled." "It is a pretty good body, isn't it?" Gabe flexed softening muscles and made no attempt to deny her charge; very probably he was relieved at having someone with whom to share his secret. "Not as good as it must have been," the girl said, turning and looking at him without admiration. "Not if you keep on the way you're coursing. Gabe, why don't you...?" "Give it back to him, eh?" Lockard regarded his wife appraisingly. 
"You'd like that, wouldn't you? You'd be his wife then. That would be nice—a sound mind in a sound body. But don't you think that's a little more than you deserve?" "I wasn't thinking about that, Gabe," she said truthfully enough, for she hadn't followed the idea to its logical conclusion. "Of course I'd go with you," she went on, now knowing she lied, "when you got your ... old body back." Sure , she thought, I'd keep going with you to farjeen houses and thrill-mills. Actually she had accompanied him to a thrill-mill only once, and from then on, despite all his threats, she had refused to go with him again. But that once had been enough; nothing could ever wash that experience from her mind or her body. "You wouldn't be able to get your old body back, though, would you?" she went on. "You don't know where it's gone, and neither, I suppose, does he?" "I don't want to know!" he spat. "I wouldn't want it if I could get it back. Whoever it adhered to probably killed himself as soon as he looked in a mirror." He swung long legs over the side of his bed. "Christ, anything would be better than that! You can't imagine what a hulk I had!" "Oh, yes, I can," she said incautiously. "You must have had a body to match your character. Pity you could only change one."
|
A. tell his wife the truth
|
What model do they use to classify phonetic segments?
|
### Introduction
Ultrasound tongue imaging (UTI) uses standard medical ultrasound to visualize the tongue surface during speech production. It provides a non-invasive, clinically safe, and increasingly inexpensive method to visualize the vocal tract. Articulatory visual biofeedback of the speech production process, using UTI, can be valuable for speech therapy BIBREF0 , BIBREF1 , BIBREF2 or language learning BIBREF3 , BIBREF4 . Ultrasound visual biofeedback combines auditory information with visual information of the tongue position, allowing users, for example, to correct inaccurate articulations in real-time during therapy or learning. In the context of speech therapy, automatic processing of ultrasound images was used for tongue contour extraction BIBREF5 and the animation of a tongue model BIBREF6 . More broadly, speech recognition and synthesis from articulatory signals BIBREF7 captured using UTI can be used with silent speech interfaces in order to help restore spoken communication for users with speech or motor impairments, or to allow silent spoken communication in situations where audible speech is undesirable BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Similarly, ultrasound images of the tongue have been used for direct estimation of acoustic parameters for speech synthesis BIBREF13 , BIBREF14 , BIBREF15 . Speech and language therapists (SLTs) have found UTI to be very useful in speech therapy. In this work we explore the automatic processing of ultrasound tongue images in order to assist SLTs, who currently largely rely on manual processing when using articulatory imaging in speech therapy. One task that could assist SLTs is the automatic classification of tongue shapes from raw ultrasound. This can facilitate the diagnosis and treatment of speech sound disorders, by allowing SLTs to automatically identify incorrect articulations, or by quantifying patient progress in therapy. 
In addition to being directly useful for speech therapy, the classification of tongue shapes enables further understanding of phonetic variability in ultrasound tongue images. Much of the previous work in this area has focused on speaker-dependent models. In this work we investigate how automatic processing of ultrasound tongue imaging is affected by speaker variation, and how severe degradations in performance can be avoided when applying systems to data from previously unseen speakers through the use of speaker adaptation and speaker normalization approaches. Below, we present the main challenges associated with the automatic processing of ultrasound data, together with a review of speaker-independent models applied to UTI. Following this, we present the experiments that we have performed (Section SECREF2 ), and discuss the results obtained (Section SECREF3 ). Finally we propose some future work and conclude the paper (Sections SECREF4 and SECREF5 ). ### Ultrasound Tongue Imaging
There are several challenges associated with the automatic processing of ultrasound tongue images. Image quality and limitations. UTI output tends to be noisy, with unrelated high-contrast edges, speckle noise, or interruptions of the tongue surface BIBREF16 , BIBREF17 . Additionally, the oral cavity is not entirely visible from the image, missing the lips, the palate, or the pharyngeal wall. Inter-speaker variation. Age and physiology may affect the output, with children imaging better than adults due to more moisture in the mouth and less tissue fat BIBREF16 . However, dry mouths lead to poor imaging, which might occur in speech therapy if a child is nervous during a session. Similarly, the vocal tracts of children across different ages may be more variable than those of adults. Probe placement. Articulators that are orthogonal to the ultrasound beam direction image well, while those at an angle tend to image poorly. Incorrect or variable probe placement during recordings may lead to high variability between otherwise similar tongue shapes. This may be controlled using helmets BIBREF18 , although it is unreasonable to expect the speaker to remain still throughout the recording session, especially if working with children. Therefore, probe displacement should be expected to be a factor in image quality and consistency. Limited data. Although ultrasound imaging is becoming less expensive to acquire, there is still a lack of large publicly available databases to evaluate automatic processing methods. The UltraSuite Repository BIBREF19 , which we use in this work, helps alleviate this issue, but it still does not compare to standard speech recognition or image classification databases, which contain hundreds of hours of speech or millions of images. ### Related Work
Earlier work concerned with speech recognition from ultrasound data has mostly been focused on speaker-dependent systems BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . An exception is the work of Xu et al. BIBREF24 , which investigates the classification of tongue gestures from ultrasound data using convolutional neural networks. Some results are presented for a speaker-independent system, although the investigation is limited to two speakers generalizing to a third. Fabre et al. BIBREF5 present a method for automatic tongue contour extraction from ultrasound data. The system is evaluated in a speaker-independent way by training on data from eight speakers and evaluating on a single held-out speaker. In both of these studies, a large drop in accuracy was observed when using speaker-independent systems in comparison to speaker-dependent systems. Our investigation differs from previous work in that we focus on child speech while using a larger number of speakers (58 children). Additionally, we use cross-validation to evaluate the performance of speaker-independent systems across all speakers, rather than using a small held-out subset. ### Ultrasound Data
We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19 . This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years (31 female, 27 male). The data was aligned at the phone level, according to the methods described in BIBREF19 , BIBREF25 . For this work, we discarded the acoustic data and focused only on the B-Mode ultrasound images capturing a midsagittal view of the tongue. The data was recorded using an Ultrasonix SonixRP machine using Articulate Assistant Advanced (AAA) software at INLINEFORM0 121fps with a 135° field of view. A single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames). For this work, we only use UXTD type A (semantically unrelated words, such as pack, tap, peak, tea, oak, toe) and type B (non-words designed to elicit the articulation of target phones, such as apa, eepee, opo) utterances.
### Data Selection
For this investigation, we define a simplified phonetic segment classification task. We determine four classes corresponding to distinct places of articulation. The first consists of bilabial and labiodental phones (e.g. /p, b, v, f, .../). The second class includes dental, alveolar, and postalveolar phones (e.g. /th, d, t, z, s, sh, .../). The third class consists of velar phones (e.g. /k, g, .../). Finally, the fourth class consists of the alveolar approximant /r/. Figure FIGREF1 shows examples of the four classes for two speakers. For each speaker, we divide all available utterances into disjoint train, development, and test sets. Using the force-aligned phone boundaries, we extract the mid-phone frame for each example across the four classes, which leads to a class imbalance. Therefore, for all utterances in the training set, we randomly sample additional examples within a window of 5 frames around the center phone, until there are at least 50 training examples per class per speaker. It is not always possible to reach the target of 50 examples, however, if no more data is available to sample from. This process gives a total of INLINEFORM0 10700 training examples with roughly 2000 to 3000 examples per class, with each speaker having an average of 185 examples. Because the amount of data varies per speaker, we compute a sampling score, which denotes the proportion of sampled examples to the speaker's total training examples. We expect speakers with high sampling scores (less unique data overall) to underperform when compared with speakers with more varied training examples.
### Preprocessing and Model Architectures
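The per-speaker oversampling and the sampling score from the Data Selection section can be sketched as follows (a simplified toy with a hypothetical helper; the actual pipeline draws real ultrasound frames within a ±5-frame window of the phone midpoint rather than resampling with replacement):

```python
import random

def augment_speaker(frames_by_class, target=50, seed=0):
    """Oversample each class up to `target` examples for one speaker and
    report the sampling score (proportion of resampled examples to the
    speaker's total training examples). `frames_by_class` maps a class
    name to a list of frame features; resampling from the same list
    stands in for drawing frames near the phone midpoint."""
    rng = random.Random(seed)
    augmented, n_sampled, n_total = {}, 0, 0
    for cls, frames in frames_by_class.items():
        out = list(frames)
        while len(out) < target and frames:
            out.append(rng.choice(frames))  # stand-in for a +/-5 frame draw
            n_sampled += 1
        augmented[cls] = out
        n_total += len(out)
    score = n_sampled / n_total if n_total else 0.0
    return augmented, score
```

A speaker with only 10 velar examples would thus receive 40 resampled ones, and the resampled fraction feeds directly into the sampling score.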
For each system, we normalize the training data to zero mean and unit variance. Due to the high dimensionality of the data (63x412 samples per frame), we investigate two preprocessing techniques: principal components analysis (PCA, often called eigentongues in this context) and a 2-dimensional discrete cosine transform (DCT). In this paper, Raw input denotes the mean-variance normalized raw ultrasound frame. PCA applies principal components analysis to the normalized training data and preserves the top 1000 components. DCT applies the 2D DCT to the normalized raw ultrasound frame, and the upper left 40x40 submatrix (1600 coefficients) is flattened and used as input. The first type of classifier we evaluate in this work is the feedforward neural network (DNN), consisting of 3 hidden layers, each with 512 rectified linear units (ReLUs), followed by a softmax output layer. The networks are optimized for 40 epochs with a mini-batch size of 32 samples using stochastic gradient descent. Based on preliminary experiments on the validation set, hyperparameters such as the learning rate, decay rate, and L2 weight vary depending on the input format (Raw, PCA, or DCT). Generally, Raw inputs work better with smaller learning rates and heavier regularization to prevent overfitting to the high-dimensional data. The second type of classifier we evaluate is the convolutional neural network (CNN), with 2 convolutional and max pooling layers, followed by 2 fully-connected ReLU layers with 512 nodes. The convolutional layers use 16 filters with 8x8 and 4x4 kernels respectively, and rectified linear units. The fully-connected layers use dropout with a drop probability of 0.2. Because the CNN systems take longer to converge, they are optimized over 200 epochs. For all systems, at the end of every epoch, the model is evaluated on the development set, and the best model across all epochs is kept.
### Training Scenarios and Speaker Means
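As a concrete illustration of the DCT preprocessing described in the preprocessing section, here is a minimal pure-Python 2-D DCT-II with upper-left truncation. This is a sketch only: in practice one would use an optimized routine such as `scipy.fft.dctn`, and the tiny frames below stand in for the real 63x412 frames with their 40x40 truncation.

```python
import math

def dct1(v):
    """Unnormalised 1-D DCT-II of a list of numbers."""
    N = len(v)
    return [sum(v[i] * math.cos(math.pi * (i + 0.5) * k / N)
                for i in range(N)) for k in range(N)]

def dct2(block):
    """Separable 2-D DCT-II over a list-of-lists image."""
    n, m = len(block), len(block[0])
    rows = [dct1(r) for r in block]
    cols = [dct1([rows[i][j] for i in range(n)]) for j in range(m)]
    return [[cols[j][i] for j in range(m)] for i in range(n)]

def dct_features(frame, keep=40):
    """Keep the upper-left keep x keep coefficients and flatten them,
    mirroring the 40x40 truncation used for the DCT input format."""
    coeffs = dct2(frame)
    return [c for row in coeffs[:keep] for c in row[:keep]]
```

Truncating to the low-frequency corner is what removes high-frequency speckle noise while preserving the coarse 2-D structure of the frame.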
We train speaker-dependent systems separately for each speaker, using all of their training data (an average of 185 examples per speaker). These systems use less data overall than the remaining systems, although we still expect them to perform well, as the data matches in terms of speaker characteristics. Realistically, such systems would not be viable, as it would be unreasonable to collect large amounts of data for every child who is undergoing speech therapy. We further evaluate all trained systems in a multi-speaker scenario. In this configuration, the speaker sets for training, development, and testing are equal. That is, we evaluate on speakers that we have seen at training time, although on different utterances. A more realistic configuration is a speaker-independent scenario, which assumes that the speaker set available for training and development is disjoint from the speaker set used at test time. This scenario is implemented by leave-one-out cross-validation. Finally, we investigate a speaker adaptation scenario, where training data for the target speaker becomes available. This scenario is realistic, for example, if after a session, the therapist were to annotate a small number of training examples. In this work, we use the held-out training data to finetune a pretrained speaker-independent system for an additional 6 epochs in the DNN systems and 20 epochs for the CNN systems. We use all available training data across all training scenarios, and we investigate the effect of the number of samples on one of the top performing systems. This work is primarily concerned with generalizing to unseen speakers. Therefore, we investigate a method to provide models with speaker-specific inputs. A simple approach is to use the speaker mean, which is the pixel-wise mean of all raw frames associated with a given speaker, illustrated in Figure FIGREF8 . 
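A minimal sketch of the speaker-mean computation and the two-channel CNN input (a pure-Python stand-in for illustration; the real frames are 63x412 arrays, and means are computed after mean-variance normalization):

```python
def speaker_mean(frames):
    """Pixel-wise mean over all of a speaker's (normalised) raw frames."""
    n = float(len(frames))
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[i][j] for f in frames) / n for j in range(w)]
            for i in range(h)]

def with_mean_channel(frame, mean):
    """CNN input: the raw frame plus the speaker mean as a second
    channel, so each pixel can be related to its speaker-specific
    average value."""
    return [frame, mean]
```

For the DNN variants, the flattened mean would instead be concatenated onto the input vector.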
The mean frame might capture an overall area of tongue activity, average out noise, and compensate for probe placement differences across speakers. Speaker means are computed after mean-variance normalization. For PCA-based systems, matrix decomposition is applied to the matrix of speaker means for the training data, with 50 components being kept, while the 2D DCT is applied normally to each mean frame. In the DNN systems, the speaker mean is appended to the input vector. In the CNN system, the raw speaker mean is given to the network as a second channel. All model configurations are similar to those described earlier, except for the DNN using Raw input. Earlier experiments have shown that a larger number of parameters is needed for good generalization with a large number of inputs, so we use layers of 1024 nodes rather than 512.
### Results and Discussion
Results for all systems are presented in Table TABREF10 . When comparing preprocessing methods, we observe that PCA underperforms compared with the 2-dimensional DCT or with the raw input. DCT-based systems achieve good results compared with similar model architectures, especially when using smaller amounts of data, as in the speaker-dependent scenario. Compared with raw-input DNNs, the DCT-based systems likely benefit from the reduced dimensionality: lower-dimensional inputs allow the model to generalize better, and the truncation of the DCT matrix helps remove noise from the images. Compared with PCA-based systems, we hypothesize that the observed improvements are due to the DCT's ability to encode the 2-D structure of the image, which is ignored by PCA. However, the DNN-DCT system does not outperform a CNN with raw input, ranking last across adapted systems. When comparing training scenarios, as expected, speaker-independent systems underperform, which illustrates the difficulty of generalizing to unseen speakers. Multi-speaker systems outperform the corresponding speaker-dependent systems, which shows the usefulness of learning from a larger database, even if it is variable across speakers. Adapted systems improve over the dependent systems, except when using DCT. It is unclear why DCT-based systems underperform when adapting pre-trained models. Figure FIGREF11 shows the effect of the size of the adaptation data when finetuning a pre-trained speaker-independent system. As expected, the more data is available, the better the system performs. For the CNN system, with roughly 50 samples, the model outperforms a similar speaker-dependent system trained with roughly three times more examples. Speaker means improve results across all scenarios, and are particularly useful for speaker-independent systems. The ability to generalize to unseen speakers is clear in the CNN system.
Using the mean as a second channel in the convolutional network has the advantage of relating each pixel to its corresponding speaker mean value, allowing the model to better generalize to unseen speakers. Figure FIGREF12 shows pair-wise scatterplots for the CNN system. Training scenarios are compared in terms of the effect on individual speakers. It is observed, for example, that the performance of a speaker-adapted system is similar to a multi-speaker system, with most speakers clustered around the identity line (bottom left subplot). Figure FIGREF12 also illustrates the variability across speakers for each of the training scenarios. The classification task is easier for some speakers than others. In an attempt to understand this variability, we can look at correlation between accuracy scores and various speaker details. For the CNN systems, we have found some correlation (Pearson's product-moment correlation) between accuracy and age for the dependent ( INLINEFORM0 ), multi-speaker ( INLINEFORM1 ), and adapted ( INLINEFORM2 ) systems. A very small correlation ( INLINEFORM3 ) was found for the independent system. Similarly, some correlation was found between accuracy and sampling score ( INLINEFORM4 ) for the dependent system, but not for the remaining scenarios. No correlation was found between accuracy and gender (point biserial correlation). ### Future Work
There are various possible extensions of this work. One is to use all frames assigned to a phone, rather than only the middle frame. Recurrent architectures are natural candidates for such systems. Additionally, if these techniques are used for speech therapy, the audio signal will be available. An extension of these analyses should not be limited to the ultrasound signal, but instead evaluate whether audio and ultrasound can be complementary. Further work should aim to extend the four classes to a more fine-grained set of places of articulation, possibly based on phonological processes. Similarly, investigating which classes lead to classification errors might help explain some of the observed results. Although we have looked at variables such as age, gender, or amount of data to explain speaker variation, there may be additional factors involved, such as the general quality of the ultrasound image. Image quality could be affected by probe placement, dry mouths, or other factors. Automatically identifying or measuring such cases could be beneficial for speech therapy, for example, by signalling to the therapist that the data being collected is sub-optimal.
### Conclusion
In this paper, we have investigated speaker-independent models for the classification of phonetic segments from raw ultrasound data. We have shown that the performance of the models heavily degrades when evaluated on data from unseen speakers. This is a result of the variability in ultrasound images, mostly due to differences across speakers, but also due to shifts in probe placement. Using the mean of all ultrasound frames for a new speaker improves the generalization of the models to unseen data, especially when using convolutional neural networks. We have also shown that adapting a pre-trained speaker-independent system using as few as 50 ultrasound frames can outperform a corresponding speaker-dependent system.
Fig. 1. Ultrasound samples for the four output classes based on place of articulation. The top row contains samples from speaker 12 (male, aged six), and the bottom row from speaker 13 (female, aged eleven). All samples show a midsagittal view of the oral cavity with the tip of the tongue facing right. Each sample is the mid-point frame of a phone uttered in an aCa context (e.g. apa, ata, ara, aka). See the UltraSuite repository for details on interpreting ultrasound tongue images.
Fig. 2. Ultrasound mean image for speaker 12 (top row) and speaker 13 (bottom row). Means on the left column are taken over the training data, while means on the right are taken over the test data.
Table 1. Phonetic segment accuracy for the four training scenarios.
Fig. 3. Accuracy scores for adapted CNN Raw, varying the amount of adaptation examples. We separately restrict training and development data to either n or all examples, whichever is smallest.
Fig. 4. Pair-wise scatterplots for the CNN system without speaker mean. Each sample is a speaker, with axes representing accuracy under a training scenario. Percentages in the top left and bottom right corners indicate the amount of speakers above or below the dashed identity line, respectively.
Speaker accuracies are compared after being rounded to two decimal places.
### Introduction
Semantic parsing aims to map natural language questions to the logical forms of their underlying meanings, which can be regarded as programs and executed to yield answers, aka denotations BIBREF0 . In the past few years, neural network based semantic parsers have achieved promising performance BIBREF1 ; however, their success is limited to settings with rich supervision, which is costly to obtain. There have been recent attempts at low-resource semantic parsing, including data augmentation methods which are learned from a small number of annotated examples BIBREF2 , and methods for adapting to unseen domains while only being trained on annotated examples in other domains. This work investigates neural semantic parsing in a low-resource setting, in which we only have prior knowledge about a limited number of simple mapping rules, including a small amount of domain-independent word-level matching tables if necessary, but have no access to either annotated programs or execution results. Our key idea is to use these rules to collect modest question-program pairs as the starting point, and then leverage automatically generated examples to improve the accuracy and generality of the model. This presents three challenges: how to generate examples efficiently; how to measure the quality of generated examples, which might contain errors and noise; and how to train a semantic parser that makes robust predictions for examples covered by rules and generalizes well to uncovered examples. We address these challenges with a framework consisting of three key components. The first component is a data generator. It includes a neural semantic parsing model, which maps a natural language question to a program, and a neural question generation model, which maps a program to a natural language question.
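Schematically, the data generator's two models bootstrap each other by turning real data on one side into noisy pseudo data for the other. The sketch below is a toy illustration of that data flow: `parse` and `generate` are hypothetical stand-ins for the neural semantic parser and question generator.

```python
def back_translation_round(questions, programs, parse, generate):
    """One round of pseudo-data generation: each model translates real
    target-side data into a noisy source side, yielding pseudo-parallel
    (source, target) pairs used to train the other model."""
    # real questions -> pseudo programs: training data for the generator
    train_generator = [(parse(q), q) for q in questions]
    # programs -> pseudo questions: training data for the parser
    train_parser = [(generate(p), p) for p in programs]
    return train_parser, train_generator
```

The target side of every pair is real data, while only the source side is model-generated and noisy, which matches the intuition that the decoder is more sensitive to the data distribution than the encoder.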
We learn these two models in a back-translation paradigm using pseudo parallel examples, inspired by its big success in unsupervised neural machine translation BIBREF3 , BIBREF4 . The second component is a quality controller, which is used for filtering out noise and errors contained in the pseudo data. We construct a phrase table with frequent mapping patterns, so that noise and errors with low frequency can be filtered out. A similar idea has been applied as posterior regularization in neural machine translation BIBREF5 , BIBREF6 . The third component is a meta learner. Instead of transferring a model pretrained on examples covered by rules to the generated examples, we leverage model-agnostic meta-learning BIBREF7 , an elegant meta-learning algorithm which has been successfully applied to a wide range of tasks including few-shot learning and adaptive control. We regard different data sources as different tasks, and use outputs of the quality controller for stable training. We test our approach on three tasks with different programs, including SQL (and SQL-like) queries for both single-turn and multi-turn questions over web tables BIBREF8 , BIBREF9 , and subject-predicate pairs over a large-scale knowledge graph BIBREF10 . The programs for single-turn SQL queries and for subject-predicate pairs over a knowledge graph are simple, while the programs for multi-turn SQL queries are among the most complex of currently proposed tasks. Results show that our approach yields large improvements over rule-based systems, and that incorporating the different strategies incrementally improves the overall performance. On WikiSQL, our best performing system achieves an execution accuracy of 72.7%, comparable to a strong system learned from denotations BIBREF11 with an accuracy of 74.8%.
### Problem Statement
We focus on the task of executable semantic parsing. The goal is to map a natural language question/utterance INLINEFORM0 to a logical form/program INLINEFORM1 , which can be executed over a world INLINEFORM2 to obtain the correct answer INLINEFORM3 . We consider three tasks. The first task is single-turn table-based semantic parsing, in which case INLINEFORM0 is a self-contained question, INLINEFORM1 is a SQL query in the form of “SELECT agg col INLINEFORM2 WHERE col INLINEFORM3 = val INLINEFORM4 AND ...”, and INLINEFORM5 is a web table consisting of multiple rows and columns. We use WikiSQL BIBREF8 as the testbed for this task. The second task is multi-turn table-based semantic parsing. Compared to the first task, INLINEFORM6 could be a follow-up question, the meaning of which depends on the conversation history. Accordingly, INLINEFORM7 in this task supports additional operations that copy the previous turn's INLINEFORM8 to the current turn. We use SequentialQA BIBREF9 for evaluation. In the third task, we change INLINEFORM9 to a large-scale knowledge graph (i.e. Freebase) and consider knowledge-based question answering for single-turn questions. We use SimpleQuestions BIBREF10 as the testbed, where INLINEFORM10 is in the form of a simple INLINEFORM11 -calculus like INLINEFORM12 , and the generation of INLINEFORM13 is equivalent to the prediction of the predicate and the subject entity. We study the problem in a low-resource setting. In the training process, we do not have annotated logical forms INLINEFORM0 or execution results INLINEFORM1 . Instead, we have a collection of natural language questions for the task, a limited number of simple mapping rules based on our prior knowledge about the task, and possibly a small amount of domain-independent word-level matching tables if necessary. These rules are not perfect, with low coverage, and can even be incorrect in some situations.
For instance, when predicting a SQL command in the first task, we have the prior knowledge that (1) WHERE values potentially have co-occurring words with table cells; (2) the words “more” and “greater” tend to be mapped to the WHERE operator “ INLINEFORM2 ”; (3) within a WHERE clause, header and cell should be in the same column; and (4) the word “average” tends to be mapped to the aggregator “avg”. Similarly, when predicting a INLINEFORM3 -calculus in the third task, the entity name might be present in the question, and among all the predicates connected to the entity, the predicate with the maximum number of co-occurring words might be correct. We would like to study what our model can achieve if we use rules as the starting point.
### Learning Algorithm
We describe our approach for low-resource neural semantic parsing in this section. We propose to train a neural semantic parser using back-translation and meta-learning. The learning process is summarized in Algorithm FIGREF1 . We describe the three components in this section, namely back-translation, quality control, and meta-learning. ### Back-Translation
Following the back-translation paradigm BIBREF3 , BIBREF4 , we have a semantic parser, which maps a natural language question INLINEFORM0 to a logical form INLINEFORM1 , and a question generator, which maps INLINEFORM2 to INLINEFORM3 . The semantic parser works for the primary task, and the question generator mainly works for generating pseudo datapoints. We start the training process by applying the rule INLINEFORM4 to a set of natural language questions INLINEFORM5 . The resulting dataset is considered as the training data to initialize both the semantic parser and the question generator. Afterwards, both models are improved following the back-translation protocol, in which target sequences should follow the real data distribution, yet source sequences can be generated with noise. This is based on the consideration that in an encoder-decoder model, the decoder is more sensitive to the data distribution than the encoder. We use datapoints from both models to train the semantic parser because a logical form is structured and follows a grammar, whose distribution is similar to the ground truth.
### Quality Controller
Directly using generated datapoints as supervised training data is not desirable because those generated datapoints contain noise or errors. To address this, we follow the application of posterior regularization in neural machine translation BIBREF5 , and implement a dictionary-based discriminator which is used to measure the quality of a pseudo datapoint. The basic idea is that although these generated datapoints are not perfect, the frequent patterns of the mapping from a phrase in INLINEFORM0 to a token in INLINEFORM1 are helpful in filtering out low-frequency noise in the generated data BIBREF6 . There are multiple ways to collect the phrase table information, such as using statistical phrase-level alignment algorithms like Giza++ or directly counting the co-occurrence of any question word and logical form token. We use the latter in this work. Further details are described in the appendix.
### Meta-Learning
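The co-occurrence counting behind the quality controller can be sketched as follows. This is a simplified stand-in, not the paper's exact procedure: the `min_count` threshold and the acceptance criterion are assumptions made for the sketch.

```python
from collections import Counter

def build_phrase_table(pairs, min_count=2):
    """Count co-occurrences between question words and logical-form
    tokens over (question, program) pairs; keep only frequent mappings."""
    counts = Counter()
    for question, program in pairs:
        for w in set(question.lower().split()):
            for t in set(program.split()):
                counts[(w, t)] += 1
    return {pair for pair, c in counts.items() if c >= min_count}

def passes_quality_control(question, program, table):
    """Accept a pseudo pair only if it is supported by at least one
    frequent word-to-token mapping from the phrase table."""
    return any((w, t) in table
               for w in set(question.lower().split())
               for t in set(program.split()))
```

Low-frequency mappings, which are more likely to be generation noise, never enter the table and so cannot support a pseudo pair.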
A simple way to update the semantic parser is to merge the datapoints in hand and train a one-size-fits-all model BIBREF2 . However, this will hurt the model's stability on examples covered by rules, and examples of the same task may vary widely BIBREF12 . Dealing with different types of examples requires the model to possess different abilities. For example, tackling examples uncovered by rules in WikiSQL requires the model to have the additional ability to map a column name to a totally different utterance, such as “country” to “nation”. Another simple solution is self-training BIBREF13 . One can train a model with examples covered by rules, use the model as a teacher to make predictions on examples uncovered by rules, and update the model on these predictions. However, self-training is somewhat tautological because the model is trained to make predictions it can already produce. We learn the semantic parser with meta-learning, regarding learning from examples covered by rules and learning from examples uncovered by rules as two (pseudo) tasks. Compared to the aforementioned strategies, the advantage of exploring meta-learning here is two-fold. First, we learn a specific model for each task, which provides guarantees about its stability on examples covered by rules. In the test phase, we can use the rule to detect which task an example belongs to, and use the corresponding task-specific model to make predictions. When dealing with examples covered by rules, we can either directly use rules to make predictions or use the updated model, depending on the accuracy of the learned model on the examples covered by rules on the development set. Second, latent patterns of examples may vary widely in terms of whether or not they are covered by rules.
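To make the two-task meta-learning setup concrete, here is a toy first-order MAML-style update on a single scalar parameter. This is purely illustrative: the paper applies MAML BIBREF7 to full neural parsers, and the quadratic per-task loss below is an assumption made only for the sketch.

```python
def maml_step(theta, task_targets, inner_lr=0.25, outer_lr=0.5):
    """First-order MAML on a scalar parameter with per-task loss
    L_t(theta) = (theta - target_t)**2. Each task takes one inner
    gradient step, and the shared initialisation is updated with the
    gradient evaluated at the adapted parameters."""
    grad = lambda th, tgt: 2.0 * (th - tgt)
    outer = 0.0
    for tgt in task_targets:
        adapted = theta - inner_lr * grad(theta, tgt)   # learner (inner loop)
        outer += grad(adapted, tgt)                     # meta-gradient term
    return theta - outer_lr * outer / len(task_targets)
```

The key property is that the initialisation is scored by how well it performs after adaptation, rather than by its raw loss, so it is pushed toward a point that adapts quickly to both the rule-covered and rule-uncovered tasks.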
Meta-learning is more desirable in this situation because it learns the model's ability to learn, improving the model's versatility, rather than mapping the latent patterns learned from datapoints in one distribution to datapoints in another distribution by force. Figure FIGREF1 is an illustration of data combination, self-training, and meta-learning. Meta-learning includes two optimizations: the learner that learns new tasks, and the meta-learner that trains the learner. In this work, the meta-learner is optimized by finding a good initialization that is highly adaptable. Specifically, we use model-agnostic meta-learning, MAML BIBREF7 , a powerful meta-learning algorithm with desirable properties, including introducing no additional parameters and making no assumptions about the form of the model. In MAML, the task-specific parameter INLINEFORM0 is initialized by INLINEFORM1 , and updated using gradient descent based on the loss function INLINEFORM2 of task INLINEFORM3 . In this work, the loss functions of the two tasks are the same. The updated parameter INLINEFORM4 is then used to calculate the model's performance across tasks to update the parameter INLINEFORM5 . In this work, following the practical suggestions given by BIBREF17 , we update INLINEFORM6 in the inner loop and regard the outputs of the quality controller as the input of both tasks. If we only have examples covered by rules, such as those used in the initialization phase, meta-learning learns a good initial parameter that is evaluated by its usefulness on examples from the same distribution. In the training phase, datapoints from both tasks are generated, and meta-learning learns an initialization parameter which can be quickly and efficiently adapted to examples from both tasks.
### Experiment
We conduct experiments on three tasks to test our approach, including generating SQL (or SQL-like) queries for both single-turn and multi-turn questions over web tables BIBREF8 , BIBREF9 , and predicting subject-predicate pairs over a knowledge graph BIBREF10 . We describe task definition, base models, experiments settings and empirical results for each task, respectively. ### Table-Based Semantic Parsing
Given a natural language question INLINEFORM0 and a table INLINEFORM1 with INLINEFORM2 columns and INLINEFORM3 rows as the input, the task is to output a SQL query INLINEFORM4 , which can be executed on table INLINEFORM5 to yield the correct answer to INLINEFORM6 . We conduct experiments on WikiSQL BIBREF8 , which provides 87,726 annotated question-SQL pairs over 26,375 web tables. In this work, we do not use either SQL queries or answers in the training process. We use execution accuracy as the evaluation metric, which measures the percentage of generated SQL queries that result in the correct answer. We describe our rules for WikiSQL here. We first detect WHERE values, which exactly match table cells. After that, if a cell appears in more than one column, we choose the column name with more words overlapping with the question, with the constraint that the number of co-occurring words is larger than 1. By default, a WHERE operator is INLINEFORM0 , except for the case that the surrounding words of a value contain keywords for INLINEFORM1 and INLINEFORM2 . Then, we deal with the SELECT column, which has the largest number of co-occurring words and cannot be the same as any WHERE column. By default, the SELECT AGG is NONE, except when matching any keywords in Table TABREF8 . The coverage of our rule on the training set is 78.4%, with an execution accuracy of 77.9%. We implement a neural network modular approach as the base model, which includes different modules to predict different SQL constituents. This approach is based on the understanding of the SQL grammar in WikiSQL, namely “SELECT $agg $column WHERE $column $op $value (AND $column $op $value)*”, where tokens starting with “$” are the slots to be predicted BIBREF18 . In practice, modular approaches typically achieve higher accuracy than end-to-end learning approaches. Specifically, at the first step we implement a sequential labeling module to detect WHERE values and link them to table cells.
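A minimal sketch of the WHERE-value part of this rule (a hypothetical helper written for illustration; the operator-keyword check, the co-occurrence threshold, and the SELECT/aggregator steps are omitted):

```python
def rule_where_clauses(question, table):
    """WHERE values are exact matches to table cells; a cell appearing
    in several columns is assigned to the header sharing the most words
    with the question. `table` maps header -> list of cell strings.
    The operator defaults to "=", as in the paper's rule."""
    q = question.lower()
    q_words = set(q.split())
    best = {}  # value -> (header, word overlap between header and question)
    for header, cells in table.items():
        overlap = len(q_words & set(header.lower().split()))
        for cell in cells:
            v = cell.lower()
            if v in q and (v not in best or overlap > best[v][1]):
                best[v] = (header, overlap)
    return [(header, "=", value) for value, (header, _) in best.items()]
```

Given a question such as "what is the nation name for france" over a table where "france" occurs in two columns, the cell is attached to the header with the larger question overlap.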
Advantages of starting from WHERE values include that WHERE values are less ambiguous compared to other slots, and that the number of WHERE clauses can be naturally detected. After that, for each WHERE value, we use the preceding and following contexts in the question to predict its WHERE column and WHERE operator through two unidirectional LSTMs. Column attention BIBREF18 is used for predicting a particular column. Similar LSTM-based classifiers are used to predict the SELECT column and the SELECT aggregator. According to whether the training data can be processed by our rules, we divide it into two parts: a rule-covered part and a rule-uncovered part. For the rule-covered part, we obtain rule-covered training data using our rules. For the rule-uncovered part, we can also obtain training data using the trained base model; we refer to these data as self-inference training data. Furthermore, we can obtain more training data by back-translation; we refer to these data as question-generation training data. For all the settings, the Base Model is initialized with rule-covered training data. In the Base + Self Training method, we finetune the Base Model with self-inference training data. In the Base + Question Generation method, we use question-generation training data to finetune our model. In the Base + BT method, we use both self-inference and question-generation data to finetune our model. In Base + BT + QC, we add our quality controller. In Base + BT + QC + MAML, we further add meta-learning. Results are given in Table TABREF5 . We can see that back-translation, quality control, and MAML incrementally improve the accuracy. Question generation is better than self-training here because the logical forms in WikiSQL are relatively simple, so the distribution of the sampled logical forms is similar to the original one. In the back-translation setting, generated examples come from both self-training and the question generation model.
The model performs better than rules on rule-covered examples, and improves the accuracy on uncovered examples. Figure FIGREF12 shows the learning curves of the COLUMN prediction model with or without using MAML. The model using MAML has a better starting point during training, which reflects the effectiveness of the pre-trained parameter. ### Knowledge-Based Question Answering
We test our approach on question answering over another genre of environment: a knowledge graph consisting of subject-relation-object triples. Given a natural language question and a knowledge graph, the task aims to correctly answer the question with evidence from the knowledge graph. We conduct our study on SimpleQuestions BIBREF10 , which includes 108,442 simple questions, each of which is accompanied by a subject-relation-object triple. Questions are constructed in such a way that the subject and relation are mentioned in the question, and the object is the answer. The task requires predicting the entityId and the relation involved in the question. Our rule for KBQA is simple, without using a curated mapping dictionary. First, we detect an entity from the question using strict string matching, with the constraints that only one entity from the KB has the same surface string and that the question contains only one entity. After that, we get the connected relations of the detected entity, and assign the relation as the one with the maximum number of co-occurring words. The coverage of our rule on the training set is 16.0%, with an accuracy of 97.3% for relation prediction. We follow BIBREF22 , and implement a KBQA pipeline consisting of three modules in this work. At the first step, we use a sequence labeling model, i.e. LSTM-CRF, to detect entity mention words in the question. After that, we use an entity linking model with BM25 built on Elasticsearch. The top-K ranked similar entities are retrieved as a candidate list. Then, we get all the relations connected to entities in the candidate list as candidate relations, and use a relation prediction model, which is based on Match-LSTM BIBREF23 , to predict the relation. Finally, from all the entities connected to the predicted relation, we choose the one with the highest BM25 score as the predicted entity. We use FB2M as the KB, which includes about 2 million triples. The settings are the same as those described in table-based semantic parsing.
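The entity and relation rule can be sketched as follows (a toy stand-in for illustration; the real system matches against the roughly 2 million FB2M triples, and the `kb` dictionary here is a hypothetical miniature):

```python
def kbqa_rule(question, kb):
    """Link the unique entity whose surface string occurs in the
    question, then choose the connected relation sharing the most words
    with the question. `kb` maps entity name -> list of relation names
    (e.g. "people_person_place_of_birth")."""
    q = question.lower()
    matched = [e for e in kb if e.lower() in q]
    if len(matched) != 1:      # the rule requires exactly one matched entity
        return None
    entity = matched[0]
    q_words = set(q.split())
    def overlap(rel):
        return len(set(rel.lower().replace("_", " ").split()) & q_words)
    return entity, max(kb[entity], key=overlap)
```

Returning `None` when zero or several entities match mirrors the rule's low coverage (16.0%) paired with its high relation-prediction accuracy (97.3%) on the examples it does cover.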
Results are given in Table TABREF10 and are consistent with the numbers on WikiSQL. Using back-translation, quality control and MAML incrementally improves the accuracy, and our approach generalizes well to rule-uncovered examples.

### Conversational Table-Based Semantic Parsing
We consider the task of conversational table-based semantic parsing in this part. Compared to single-turn table-based semantic parsing as described in subsection SECREF6, the meaning of a natural language question may also depend on questions from past turns, reflecting the common ellipsis and co-reference phenomena in conversational agents. Given a natural language question at the current turn, a web table, and previous-turn questions in a conversation as the input, the task aims to generate a program (i.e. logical form) which can be executed on the table to obtain the correct answer to the current-turn question. We conduct experiments on SequentialQA BIBREF9, which is derived from the WikiTableQuestions dataset BIBREF19. It contains 6,066 question sequences covering 17,553 question-answer pairs; each sequence includes 2.9 natural language questions on average. Different from WikiSQL, which provides the correct logical form for each question, SequentialQA only annotates the correct answer. This dataset is also harder than the previous two, since it requires complex, highly compositional logical forms to get the answer. Existing approaches are evaluated by question answering accuracy, which measures whether the predicted answer is correct or not. The pipeline of rules in SequentialQA is similar to that of WikiSQL. Compared to the grammar of WikiSQL, the grammar of SequentialQA has additional actions, including copying the previous-turn logical form, no greater than, no more than, and negation. Table TABREF23 shows the additional word-level mapping table used in SequentialQA. The coverage of our rule on the training set is 75.5%, with an accuracy of 38.5%. We implement a modular approach on top of a grammar of derivation rules (actions) as the base model.
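The additional actions above (copying the previous-turn logical form, negation) can be illustrated with a toy rule-style controller. The action names and cue lists are assumptions standing in for a trained model, which would predict the action sequence from encoded question vectors rather than from keyword cues:

```python
# Cue lists and action names are illustrative, not the paper's exact grammar.
FOLLOWUP_CUES = ("of those", "of these", "which ones")
NEGATION_CUES = ("not ", "other than", "besides")

def controller(question, prev_actions):
    """Decide the skeleton of the action sequence for the current turn."""
    q = question.lower()
    actions = []
    if prev_actions and any(cue in q for cue in FOLLOWUP_CUES):
        # Ellipsis/co-reference: reuse the previous turn's logical form.
        actions.append("COPY_PREV")
    else:
        actions.append("SELECT_COL")
    actions += ["WHERE_COL", "WHERE_OP", "WHERE_VAL"]
    if any(cue in q for cue in NEGATION_CUES):
        actions.append("NEGATION")
    return actions
```

A follow-up turn such as "which of those are not in europe?" would copy the previous program and append a negation, while a first turn falls through to the ordinary SELECT/WHERE skeleton.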
Similar to BIBREF9, our grammar consists of predefined actions used for predicting the SELECT column, WHERE column, WHERE operator, WHERE value, and for determining whether to copy the entire action sequence of the previous-turn question. After encoding the current question and previous-turn questions into vectors, we first use a controller module to predict an action sequence consisting of slots, and then use specific modules to predict the argument of each slot. Similar to BIBREF9, we use a recurrent structure as the backbone of each module and a softmax layer for making predictions. The settings are the same as those described in table-based semantic parsing. From Table TABREF20, we can see that question generation does not work well on this task, because of the difficulty of generating sequential questions and complex target logical forms. Applying MAML to examples not coming from question generation performs best. We leave contextual question generation as future work.

### Conclusion and Future Directions
We present an approach to learning a neural semantic parser from simple domain-independent rules, instead of annotated logical forms or denotations. Our approach starts from examples covered by rules, which are used to initialize a semantic parser and a question generator in a back-translation paradigm. Generated examples are measured and filtered based on statistical analysis, and then used with model-agnostic meta-learning, which preserves the model's accuracy and stability on rule-covered examples while acquiring the versatility to generalize well on rule-uncovered examples. We conduct experiments on three datasets for table-based and knowledge-based question answering tasks. Results show that incorporating the different strategies incrementally improves performance. Our best model on WikiSQL achieves accuracy comparable to the system learned from denotations. In the future, we plan to focus on more complex logical forms.

Figure 1: An illustration of the difference between (a) data combination, which learns a monolithic, one-size-fits-all model, (b) self-training, which learns from predictions the model itself produces, and (c) meta-learning, which reuses the acquired ability to learn.
Table 1: Results on the WikiSQL test set. BT stands for back-translation. QC stands for quality control.
Table 2: Token-level dictionary for aggregators (upper group) and operators (lower group) in WikiSQL.
Table 3: Results on the SimpleQuestions test set. BT stands for back-translation. QC stands for quality control.
Figure 2: Learning curve of the WHERE column prediction model on the WikiSQL dev set.
Table 4: Results on the SequentialQA test set. BT stands for back-translation. QC stands for quality control.
Table 5: Token-level dictionary used for additional actions in SequentialQA.
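The model-agnostic meta-learning step referenced above can be sketched minimally with first-order MAML on synthetic linear-regression tasks; the linear model, task construction and step sizes are all illustrative assumptions, far simpler than the neural parsers used in the paper:

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2 * X.T @ (X @ w - y) / len(y)

def fomaml(tasks, w, inner_lr=0.05, outer_lr=0.05, steps=200):
    """First-order MAML: adapt on each task's support set with one inner
    gradient step, then move the shared initialization along the
    post-adaptation gradient computed on the task's query set."""
    for _ in range(steps):
        outer_grad = np.zeros_like(w)
        for X_s, y_s, X_q, y_q in tasks:
            w_task = w - inner_lr * mse_grad(w, X_s, y_s)  # inner step
            outer_grad += mse_grad(w_task, X_q, y_q)       # query gradient
        w = w - outer_lr * outer_grad / len(tasks)         # meta-update
    return w

# Synthetic tasks: each regresses y = X @ w_true with its own w_true.
rng = np.random.default_rng(0)
tasks = []
for _ in range(4):
    w_true = rng.normal(size=3)
    X = rng.normal(size=(20, 3))
    tasks.append((X[:10], X[:10] @ w_true, X[10:], X[10:] @ w_true))

w_meta = fomaml(tasks, np.zeros(3))
```

The meta-learned initialization `w_meta` gives a better starting point than a cold start: one inner gradient step from it yields lower query loss across tasks, mirroring the learning-curve behavior in Figure 2.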
|
applying the rule INLINEFORM4 to a set of natural language questions INLINEFORM5, both models are improved following the back-translation protocol that target sequences should follow the real data distribution
|
How did Orison feel on the first day of her job?
A. confused about her job duties
B. frustrated with the other women that worked there
C. excited about such a large raise
D. in love with the quirkiness of the employees
|
CINDERELLA STORY By ALLEN KIM LANG What a bank! The First Vice-President was a cool cat—the elevator and the money operators all wore earmuffs—was just as phony as a three-dollar bill! [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, May 1961. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] I The First Vice-President of the William Howard Taft National Bank and Trust Company, the gentleman to whom Miss Orison McCall was applying for a job, was not at all the public picture of a banker. His suit of hound's-tooth checks, the scarlet vest peeping above the vee of his jacket, were enough to assure Orison that the Taft Bank was a curious bank indeed. "I gotta say, chick, these references of yours really swing," said the Vice-President, Mr. Wanji. "Your last boss says you come on real cool in the secretary-bit." "He was a very kind employer," Orison said. She tried to keep from staring at the most remarkable item of Mr. Wanji's costume, a pair of furry green earmuffs. It was not cold. Mr. Wanji returned to Orison her letters of reference. "What color bread you got eyes for taking down, baby?" he asked. "Beg pardon?" "What kinda salary you bucking for?" he translated, bouncing up and down on the toes of his rough-leather desert boots. "I was making one-twenty a week in my last position," Miss McCall said. "You're worth more'n that, just to jazz up the decor," Mr. Wanji said. "What you say we pass you a cee-and-a-half a week. Okay?" He caught Orison's look of bewilderment. "One each, a Franklin and a Grant," he explained further. She still looked blank. "Sister, you gonna work in a bank, you gotta know who's picture's on the paper. That's a hunnerd-fifty a week, doll." "That will be most satisfactory, Mr. Wanji," Orison said. It was indeed. "Crazy!" Mr. Wanji grabbed Orison's right hand and shook it with athletic vigor. "You just now joined up with our herd. 
I wanna tell you, chick, it's none too soon we got some decent scenery around this tomb, girlwise." He took her arm and led her toward the bank of elevators. The uniformed operator nodded to Mr. Wanji, bowed slightly to Orison. He, too, she observed, wore earmuffs. His were more formal than Mr. Wanji's, being midnight blue in color. "Lift us to five, Mac," Mr. Wanji said. As the elevator door shut he explained to Orison, "You can make the Taft Bank scene anywhere between the street floor and floor five. Basement and everything higher'n fifth floor is Iron Curtain Country far's you're concerned. Dig, baby?" "Yes, sir," Orison said. She was wondering if she'd be issued earmuffs, now that she'd become an employee of this most peculiar bank. The elevator opened on five to a tiny office, just large enough to hold a single desk and two chairs. On the desk were a telephone and a microphone. Beside them was a double-decked "In" and "Out" basket. "Here's where you'll do your nine-to-five, honey," Mr. Wanji said. "What will I be doing, Mr. Wanji?" Orison asked. The Vice-President pointed to the newspaper folded in the "In" basket. "Flip on the microphone and read the paper to it," he said. "When you get done reading the paper, someone will run you up something new to read. Okay?" "It seems a rather peculiar job," Orison said. "After all, I'm a secretary. Is reading the newspaper aloud supposed to familiarize me with the Bank's operation?" "Don't bug me, kid," Mr. Wanji said. "All you gotta do is read that there paper into this here microphone. Can do?" "Yes, sir," Orison said. "While you're here, Mr. Wanji, I'd like to ask you about my withholding tax, social security, credit union, coffee-breaks, union membership, lunch hour and the like. Shall we take care of these details now? Or would you—" "You just take care of that chicken-flickin' kinda stuff any way seems best to you, kid," Mr. Wanji said. "Yes, sir," Orison said. 
This laissez-faire policy of Taft Bank's might explain why she'd been selected from the Treasury Department's secretarial pool to apply for work here, she thought. Orison McCall, girl Government spy. She picked up the newspaper from the "In" basket, unfolded it to discover the day's Wall Street Journal , and began at the top of column one to read it aloud. Wanji stood before the desk, nodding his head as he listened. "You blowing real good, kid," he said. "The boss is gonna dig you the most." Orison nodded. Holding her newspaper and her microphone, she read the one into the other. Mr. Wanji flicked his fingers in a good-by, then took off upstairs in the elevator. By lunchtime Orison had finished the Wall Street Journal and had begun reading a book an earmuffed page had brought her. The book was a fantastic novel of some sort, named The Hobbit . Reading this peculiar fare into the microphone before her, Miss McCall was more certain than ever that the Taft Bank was, as her boss in Washington had told her, the front for some highly irregular goings-on. An odd business for a Federal Mata Hari, Orison thought, reading a nonsense story into a microphone for an invisible audience. Orison switched off her microphone at noon, marked her place in the book and took the elevator down to the ground floor. The operator was a new man, ears concealed behind scarlet earmuffs. In the car, coming down from the interdicted upper floors, were several gentlemen with briefcases. As though they were members of a ballet-troupe, these gentlemen whipped off their hats with a single motion as Orison stepped aboard the elevator. Each of the chivalrous men, hat pressed to his heart, wore a pair of earmuffs. Orison nodded bemused acknowledgment of their gesture, and got off in the lobby vowing never to put a penny into this curiousest of banks. Lunch at the stand-up counter down the street was a normal interlude. 
Girls from the ground-floor offices of Taft Bank chattered together, eyed Orison with the coolness due so attractive a competitor, and favored her with no gambit to enter their conversations. Orison sighed, finished her tuna salad on whole-wheat, then went back upstairs to her lonely desk and her microphone. By five, Orison had finished the book, reading rapidly and becoming despite herself engrossed in the saga of Bilbo Baggins, Hobbit. She switched off the microphone, put on her light coat, and rode downstairs in an elevator filled with earmuffed, silent, hat-clasping gentlemen. What I need, Orison thought, walking rapidly to the busline, is a double Scotch, followed by a double Scotch. And what the William Howard Taft National Bank and Trust Company needs is a joint raid by forces of the U.S. Treasury Department and the American Psychiatric Association. Earmuffs, indeed. Fairy-tales read into a microphone. A Vice-President with the vocabulary of a racetrack tout. And what goes on in those upper floors? Orison stopped in at the restaurant nearest her apartment house—the Windsor Arms—and ordered a meal and a single Martini. Her boss in Washington had told her that this job of hers, spying on Taft Bank from within, might prove dangerous. Indeed it was, she thought. She was in danger of becoming a solitary drinker. Home in her apartment, Orison set the notes of her first day's observations in order. Presumably Washington would call tonight for her initial report. Item: some of the men at the Bank wore earmuffs, several didn't. Item: the Vice-President's name was Mr. Wanji: Oriental? Item: the top eight floors of the Taft Bank Building seemed to be off-limits to all personnel not wearing earmuffs. Item: she was being employed at a very respectable salary to read newsprint and nonsense into a microphone. Let Washington make sense of that, she thought. In a gloomy mood, Orison McCall showered and dressed for bed. Eleven o'clock. 
Washington should be calling soon, inquiring after the results of her first day's spying. No call. Orison slipped between the sheets at eleven-thirty. The clock was set; the lights were out. Wasn't Washington going to call her? Perhaps, she thought, the Department had discovered that the Earmuffs had her phone tapped. "Testing," a baritone voice muttered. Orison sat up, clutching the sheet around her throat. "Beg pardon?" she said. "Testing," the male voice repeated. "One, two, three; three, two, one. Do you read me? Over." Orison reached under the bed for a shoe. Gripping it like a Scout-ax, she reached for the light cord with her free hand and tugged at it. The room was empty. "Testing," the voice repeated. "What you're testing," Orison said in a firm voice, "is my patience. Who are you?" "Department of Treasury Monitor J-12," the male voice said. "Do you have anything to report, Miss McCall?" "Where are you, Monitor?" she demanded. "That's classified information," the voice said. "Please speak directly to your pillow, Miss McCall." Orison lay down cautiously. "All right," she whispered to her pillow. "Over here," the voice instructed her, coming from the unruffled pillow beside her. Orison transferred her head to the pillow to her left. "A radio?" she asked. "Of a sort," Monitor J-12 agreed. "We have to maintain communications security. Have you anything to report?" "I got the job," Orison said. "Are you ... in that pillow ... all the time?" "No, Miss McCall," the voice said. "Only at report times. Shall we establish our rendezvous here at eleven-fifteen, Central Standard Time, every day?" "You make it sound so improper," Orison said. "I'm far enough away to do you no harm, Miss McCall," the monitor said. "Now, tell me what happened at the bank today." Orison briefed her pillow on the Earmuffs, on her task of reading to a microphone, and on the generally mimsy tone of the William Howard Taft National Bank and Trust Company. "That's about it, so far," she said. 
"Good report," J-12 said from the pillow. "Sounds like you've dropped into a real snakepit, beautiful." "How do you know ... why do you think I'm beautiful?" Orison asked. "Native optimism," the voice said. "Good night." J-12 signed off with a peculiar electronic pop that puzzled Orison for a moment. Then she placed the sound: J-12 had kissed his microphone. Orison flung the shoe and the pillow under her bed, and resolved to write Washington for permission to make her future reports by registered mail. II At ten o'clock the next morning, reading page four of the current Wall Street Journal , Orison was interrupted by the click of a pair of leather heels. The gentleman whose heels had just slammed together was bowing. And she saw with some gratification that he was not wearing earmuffs. "My name," the stranger said, "is Dink Gerding. I am President of this bank, and wish at this time to welcome you to our little family." "I'm Orison McCall," she said. A handsome man, she mused. Twenty-eight? So tall. Could he ever be interested in a girl just five-foot-three? Maybe higher heels? "We're pleased with your work, Miss McCall," Dink Gerding said. He took the chair to the right of her desk. "It's nothing," Orison said, switching off the microphone. "On the contrary, Miss McCall. Your duties are most important," he said. "Reading papers and fairy-tales into this microphone is nothing any reasonably astute sixth-grader couldn't do as well," Orison said. "You'll be reading silently before long," Mr. Gerding said. He smiled, as though this explained everything. "By the way, your official designation is Confidential Secretary. It's me whose confidences you're to keep secret. If I ever need a letter written, may I stop down here and dictate it?" "Please do," Orison said. This bank president, for all his grace and presence, was obviously as kookie as his bank. "Have you ever worked in a bank before, Miss McCall?" Mr. Gerding asked, as though following her train of thought. 
"No, sir," she said. "Though I've been associated with a rather large financial organization." "You may find some of our methods a little strange, but you'll get used to them," he said. "Meanwhile, I'd be most grateful if you'd dispense with calling me 'sir.' My name is Dink. It is ridiculous, but I'd enjoy your using it." "Dink?" she asked. "And I suppose you're to call me Orison?" "That's the drill," he said. "One more question, Orison. Dinner this evening?" Direct, she thought. Perhaps that's why he's president of a bank, and still so young. "We've hardly met," she said. "But we're on a first-name basis already," he pointed out. "Dance?" "I'd love to," Orison said, half expecting an orchestra to march, playing, from the elevator. "Then I'll pick you up at seven. Windsor Arms, if I remember your personnel form correctly." He stood, lean, all bone and muscle, and bowed slightly. West Point? Hardly. His manners were European. Sandhurst, perhaps, or Saint Cyr. Was she supposed to reply with a curtsy? Orison wondered. "Thank you," she said. He was a soldier, or had been: the way, when he turned, his shoulders stayed square. The crisp clicking of his steps, a military metronome, to the elevator. When the door slicked open Orison, staring after Dink, saw that each of the half-dozen men aboard snapped off their hats (but not their earmuffs) and bowed, the earmuffed operator bowing with them. Small bows, true; just head-and-neck. But not to her. To Dink Gerding. Orison finished the Wall Street Journal by early afternoon. A page came up a moment later with fresh reading-matter: a copy of yesterday's Congressional Record . She launched into the Record , thinking as she read of meeting again this evening that handsome madman, that splendid lunatic, that unlikely bank-president. "You read so well , darling," someone said across the desk. Orison looked up. "Oh, hello," she said. "I didn't hear you come up." 
"I walk ever so lightly," the woman said, standing hip-shot in front of the desk, "and pounce ever so hard." She smiled. Opulent, Orison thought. Built like a burlesque queen. No, she thought, I don't like her. Can't. Wouldn't if I could. Never cared for cats. "I'm Orison McCall," she said, and tried to smile back without showing teeth. "Delighted," the visitor said, handing over an undelighted palm. "I'm Auga Vingt. Auga, to my friends." "Won't you sit down, Miss Vingt?" "So kind of you, darling," Auga Vingt said, "but I shan't have time to visit. I just wanted to stop and welcome you as a Taft Bank co-worker. One for all, all for one. Yea, Team. You know." "Thanks," Orison said. "Common courtesy," Miss Vingt explained. "Also, darling, I'd like to draw your attention to one little point. Dink Gerding—you know, the shoulders and muscles and crewcut? Well, he's posted property. Should you throw your starveling charms at my Dink, you'd only get your little eyes scratched out. Word to the wise, n'est-ce pas ?" "Sorry you have to leave so suddenly," Orison said, rolling her Wall Street Journal into a club and standing. "Darling." "So remember, Tiny, Dink Gerding is mine. You're all alone up here. You could get broken nails, fall down the elevator shaft, all sorts of annoyance. Understand me, darling?" "You make it very clear," Orison said. "Now you'd best hurry back to your stanchion, Bossy, before the hay's all gone." "Isn't it lovely, the way you and I reached an understanding right off?" Auga asked. "Well, ta-ta." She turned and walked to the elevator, displaying, Orison thought, a disgraceful amount of ungirdled rhumba motion. The elevator stopped to pick up the odious Auga. A passenger, male, stepped off. "Good morning, Mr. Gerding," Miss Vingt said, bowing. "Carry on, Colonel," the stranger replied. As the elevator door closed, he stepped up to Orison's desk. "Good morning. Miss McCall," he said. "What is this?" Orison demanded. "Visiting-day at the zoo?" 
She paused and shook her head. "Excuse me, sir," she said. "It's just that ... Vingt thing...." "Auga is rather intense," the new Mr. Gerding said. "Yeah, intense," Orison said. "Like a kidney-stone." "I stopped by to welcome you to the William Howard Taft National Bank and Trust Company family, Miss McCall," he said. "I'm Kraft Gerding, Dink's elder brother. I understand you've met Dink already." "Yes, sir," Orison said. The hair of this new Mr. Gerding was cropped even closer than Dink's. His mustache was gray-tipped, like a patch of frosted furze; and his eyes, like Dink's, were cobalt blue. The head, Orison mused, would look quite at home in one of Kaiser Bill's spike-topped Pickelhauben ; but the ears were in evidence, and seemed normal. Mr. Kraft Gerding bowed—what continental manners these bankers had!—and Orison half expected him to free her hand from the rolled-up paper she still clutched and plant a kiss on it. Instead, Kraft Gerding smiled a smile as frosty as his mustache and said, "I understand that my younger brother has been talking with you, Miss McCall. Quite proper, I know. But I must warn you against mixing business with pleasure." Orison jumped up, tossing the paper into her wastebasket. "I quit!" she shouted. "You can take this crazy bank ... into bankruptcy, for all I care. I'm not going to perch up here, target for every uncaged idiot in finance, and listen to another word." "Dearest lady, my humblest pardon," Kraft Gerding said, bowing again, a bit lower. "Your work is splendid; your presence is Taft Bank's most charming asset; my only wish is to serve and protect you. To this end, dear lady, I feel it my duty to warn you against my brother. A word to the wise...." " N'est-ce pas? " Orison said. "Well, Buster, here's a word to the foolish. Get lost." Kraft Gerding bowed and flashed his gelid smile. "Until we meet again?" "I'll hold my breath," Orison promised. "The elevator is just behind you. Push a button, will you? And bon voyage ." 
Kraft Gerding called the elevator, marched aboard, favored Orison with a cold, quick bow, then disappeared into the mysterious heights above fifth floor. First the unspeakable Auga Vingt, then the obnoxious Kraft Gerding. Surely, Orison thought, recovering the Wall Street Journal from her wastebasket and smoothing it, no one would convert a major Midwestern bank into a lunatic asylum. How else, though, could the behavior of the Earmuffs be explained? Could madmen run a bank? Why not, she thought. History is rich in examples of madmen running nations, banks and all. She began again to read the paper into the microphone. If she finished early enough, she might get a chance to prowl those Off-Limits upper floors. Half an hour further into the paper, Orison jumped, startled by the sudden buzz of her telephone. She picked it up. " Wanji e-Kal, Datto. Dink ger-Dink d'summa. " Orison scribbled down this intelligence in bemused Gregg before replying, "I'm a local girl. Try me in English." "Oh. Hi, Miss McCall," the voice said. "Guess I goofed. I'm in kinda clutch. This is Wanji. I got a kite for Mr. Dink Gerding. If you see him, tell him the escudo green is pale. Got that, doll?" "Yes, Mr. Wanji. I'll tell Mr. Gerding." Orison clicked the phone down. What now, Mata Hari? she asked herself. What was the curious language Mr. Wanji had used? She'd have to report the message to Washington by tonight's pillow, and let the polyglots of Treasury Intelligence puzzle it out. Meanwhile, she thought, scooting her chair back from her desk, she had a vague excuse to prowl the upper floors. The Earmuffs could only fire her. Orison folded the paper and put it in the "Out" basket. Someone would be here in a moment with something new to read. She'd best get going. The elevator? No. The operators had surely been instructed to keep her off the upstairs floors. But the building had a stairway. III The door on the sixth floor was locked. Orison went on up the stairs to seven. 
The glass of the door there was painted black on the inside, and the landing was cellar-dark. Orison closed her eyes for a moment. There was a curious sound. The buzzing of a million bees, barely within the fringes of her hearing. Somehow, a very pleasant sound. She opened her eyes and tried the knob. The door opened. Orison was blinded by the lights, brilliant as noonday sun. The room extended through the entire seventh floor, its windows boarded shut, its ceiling a mass of fluorescent lamps. Set about the floor were galvanized steel tanks, rectangular and a little bigger than bathtubs. Orison counted the rows of tanks. Twelve rows, nine tiers. One hundred and eight tanks. She walked closer. The tubs were laced together by strands of angel-hair, delicate white lattices scintillating with pink. She walked to the nearest of the tubs and looked in. It was half full of a greenish fluid, seething with tiny pink bubbles. For a moment Orison thought she saw Benjamin Franklin winking up at her from the liquid. Then she screamed. The pink bubbles, the tiny flesh-colored flecks glinting light from the spun-sugar bridges between the tanks, were spiders. Millions upon millions of spiders, each the size of a mustard-seed; crawling, leaping, swinging, spinning webs, seething in the hundred tanks. Orison put her hands over her ears and screamed again, backing toward the stairway door. Into a pair of arms. "I had hoped you'd be happy here, Miss McCall," Kraft Gerding said. Orison struggled to release herself. She broke free only to have her wrists seized by two Earmuffs that had appeared with the elder Gerding. "It seems that our Pandora doesn't care for spiders," he said. "Really, Miss McCall, our little pets are quite harmless. Were we to toss you into one of these tanks...." Orison struggled against her two sumo -sized captors, whose combined weights exceeded hers by some quarter-ton, without doing more than lifting her feet from the floor. "... 
your flesh would be unharmed, though they spun and darted all around you. Our Microfabridae are petrovorous, Miss McCall. Of course, once they discovered your teeth, and through them a skeleton of calcium, a delicacy they find most toothsome, you'd be filleted within minutes." "Elder Compassion wouldn't like your harming the girl, Sire," one of the earmuffed sumo -wrestlers protested. "Elder Compassion has no rank," Kraft Gerding said. "Miss McCall, you must tell me what you were doing here, or I'll toss you to the spiders." "Dink ... Dink!" Orison shouted. "My beloved younger brother is otherwise engaged than in the rescue of damsels in distress," Kraft said. "Someone, after all, has to mind the bank." "I came to bring a message to Dink," Orison said. "Let me go, you acromegalic apes!" "The message?" Kraft Gerding demanded. "Something about escudo green. Put me down!" Suddenly she was dropped. Her mountainous keepers were on the floor as though struck by lightning, their arms thrown out before them, their faces abject against the floor. Kraft Gerding was slowly lowering himself to one knee. Dink had entered the spider-room. Without questions, he strode between the shiko-ing Earmuffs and put his arms around Orison. "They can't harm you," he said. She turned to press her face against his chest. "You're all right, child. Breathe deep, swallow, and turn your brain back on. All right, now?" "All right," she said, still trembling. "They were going to throw me to the spiders." "Kraft told you that?" Dink Gerding released her and turned to the kneeling man. "Stand up, Elder Brother." "I...." Dink brought his right fist up from hip-level, crashing it into Kraft's jaw. Kraft Gerding joined the Earmuffs on the floor. "If you'd care to stand again, Elder Brother, you may attempt to recover your dignity without regard for the difference in our rank." Kraft struggled to one knee and remained kneeling, gazing up at Dink through half-closed eyes. "No? 
Then get out of here, all of you. Samma! " Kraft Gerding arose, stared for a moment at Dink and Orison, then, with the merest hint of a bow, led his two giant Earmuffs to the elevator. "I wish you hadn't come up here, Orison," Dink said. "Why did you do it?" "Have you read the story of Bluebeard?" Orison asked. She stood close to Dink, keeping her eyes on the nearest spidertank. "I had to see what it was you kept up here so secretly, what it was that I was forbidden to see. My excuse was to have been that I was looking for you, to deliver a message from Mr. Wanji. He said I was to tell you that the escudo green is pale." "You're too curious, and Wanji is too careless," Dink said. "Now, what is this thing you have about spiders?" "I've always been terrified of them," Orison said. "When I was a little girl, I had to stay upstairs all day one Sunday because there was a spider hanging from his thread in the stairway. I waited until Dad came home and took it down with a broom. Even then, I didn't have appetite for supper." "Strange," Dink said. He walked over to the nearest tank and plucked one of the tiny pink creatures from a web-bridge. "This is no spider, Orison," he said. She backed away from Dink Gerding and the minuscule creature he cupped in the palm of his hand. "These are Microfabridae, more nearly related to shellfish than to spiders," he said. "They're stone-and-metal eaters. They literally couldn't harm a fly. Look at it, Orison." He extended his palm. Orison forced herself to look. The little creature, flesh-colored against his flesh, was nearly invisible, scuttling around the bowl of his hand. "Pretty little fellow, isn't he?" Dink asked. "Here. You hold him." "I'd rather not," she protested. "I'd be happier if you did," Dink said. Orison extended her hand as into a furnace. Dink brushed the Microfabridus from his palm to hers. It felt crisp and hard, like a legged grain of sand. 
Dink took a magnifier from his pocket and unfolded it, to hold it over Orison's palm. "He's like a baby crawdad," Orison said. "A sort of crustacean," Dink agreed. "We use them in a commercial process we're developing. That's why we keep this floor closed off and secret. We don't have a patent on the use of Microfabridae, you see." "What do they do?" Orison asked. "That's still a secret," Dink said, smiling. "I can't tell even you that, not yet, even though you're my most confidential secretary." "What's he doing now?" Orison asked, watching the Microfabridus, perched up on the rear four of his six microscopic legs, scratching against her high-school class-ring with his tiny chelae. "They like gold," Dink explained, peering across her shoulder, comfortably close. "They're attracted to it by a chemical tropism, as children are attracted to candy. Toss him back into his tank, Orison. We'd better get you down where you belong." Orison brushed the midget crustacean off her finger into the nearest tank, where he joined the busy boil of his fellows. She felt her ring. It was pitted where the Microfabridus had been nibbling. "Strange, using crawdads in a bank," she said. She stood silent for a moment. "I thought I heard music," she said. "I heard it when I came in. Something like the sighing of wind in winter trees." "That's the hymn of the Microfabridae," Dink said. "They all sing together while they work, a chorus of some twenty million voices." He took her arm. "If you listen very carefully, you'll find the song these little workers sing the most beautiful music in the world." Orison closed her eyes, leaning back into Dink's arms, listening to the music that seemed on the outermost edge of her hearing. Wildness, storm and danger were its theme, counterpointed by promises of peace and harbor. She heard the wash of giant waves in the song, the crash of breakers against granite, cold and insatiable. 
And behind this, the quiet of sheltered tide-pools, the soft lub of sea-arms landlocked. "It's an ancient song," Dink said. "The Microfabridae have been singing it for a million years." He released her, and opened a wood-covered wooden box. He scooped up a cupful of the sand inside. "Hold out your hands," he told Orison. He filled them with the sand. "Throw our singers some supper for their song," he said. Orison went with her cupped hands to the nearest tank and sprinkled the mineral fishfood around inside it. The Microfabridae leaped from the liquid like miniature porpoises, seizing the grains of sand in mid-air. "They're so very strange," Orison said. At the bottom of the tank she thought she saw Ben Franklin again, winking at her through the bubbling life. Nonsense, she thought, brushing her hands.
|
A. confused about her job duties
|
What context do they use?
|
### Introduction
Following a turbulent election season, 2016's cyber world is awash with hate speech. Automatic detection of hate speech has become an urgent need, since human supervision cannot cope with the large quantities of emerging texts. Context information, by our definition, is the text, symbols or any other kind of information related to the original text. While intuitively the context accompanying hate speech is useful for detecting it, context information has been overlooked in existing datasets and automatic detection models. Online hate speech tends to be subtle and creative, which makes context especially important for automatic hate speech detection. For instance, (1) barryswallows: Merkel would never say NO This comment is posted for the news article titled "German lawmakers approve 'no means no' rape law after Cologne assaults". With context, it becomes clear that this comment is a vicious insult towards a female politician. However, almost all publicly available annotated hate speech datasets contain no context information (BIBREF0, BIBREF1, BIBREF2, BIBREF3). We have created a new dataset consisting of 1528 Fox News user comments, which were taken from 10 complete discussion threads for 10 widely read Fox News articles. It differs from previous datasets in two respects. First, it preserves rich context information for each comment, including its user screen name, all comments in the same thread and the news article the comment is written for. Second, there is no biased data selection and all comments in each news comment thread were annotated. In this paper, we explored two types of models, feature-based logistic regression models and neural network models, in order to incorporate context information in automatic hate speech detection. 
First, logistic regression models have been used in several prior hate speech detection studies (BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF0, BIBREF2, BIBREF9), and various features have been tried, including character-level and word-level n-gram features, syntactic features, linguistic features, and comment embedding features. However, all of these features were derived from the to-be-classified text itself. In contrast, we experiment with logistic regression models using features extracted from context text as well. Second, neural network models (BIBREF10, BIBREF11, BIBREF12) have the potential to capture compositional meanings of text, but they had not been well explored for online hate speech detection until recently (BIBREF13). We experiment with neural net models containing separate learning components that model compositional meanings of context information. Furthermore, recognizing the unique strengths of each type of model, we build ensemble models of the two types of models. Evaluation shows that context-aware logistic regression models and neural net models outperform their counterparts that are blind to context information. In particular, the final ensemble models outperform a strong baseline system by around 10% in F1-score. ### Related Works
Recently, a few datasets with human-labeled hate speech have been created; however, most existing datasets do not contain context information. Due to the sparsity of hate speech in everyday posts, researchers tend to sample candidates by bootstrapping instead of random sampling, in order to increase the chance of seeing hate speech. Therefore, the collected data instances are likely to be from distinct contexts. For instance, in the Primary Data Set described in BIBREF14 and later used by BIBREF9, 10% of the dataset is randomly selected while the remainder consists of comments tagged by users and editors. BIBREF15 built a balanced data set of 24.5k tweets by selecting from Twitter accounts that claimed to be racist or were deemed racist using their followed news sources. BIBREF5 collected hateful tweets related to the murder of Drummer Lee Rigby in 2013. BIBREF0 provided a corpus of 16k annotated tweets in which 3.3k are labeled as sexist and 1.9k are labeled as racist; they created this corpus by bootstrapping from certain keywords, specific hashtags and certain prolific users. BIBREF16 created a dataset of 9000 human-labeled paragraphs that were collected using regular expression matching in order to find hate speech targeting Judaism and Israel. BIBREF7 extracted data instances from Instagram that were associated with certain user accounts. BIBREF2 presented a very large corpus containing over 115k Wikipedia comments that include around 37k randomly sampled comments; the remaining 78k comments were selected from Wikipedia blocked comments. Most existing hate speech detection models are feature-based and use features derived from the target text itself. BIBREF5 experimented with different classification methods including Bayesian Logistic Regression, Random Forest Decision Trees and SVMs, using features such as n-grams, reduced n-grams, dependency paths, and hateful terms. BIBREF0 proposed a logistic regression model using character n-gram features. 
BIBREF14 used paragraph2vec for joint modeling of comments and words; the generated embeddings were then used as features in a logistic regression model. BIBREF9 experimented with various syntactic, linguistic and distributional semantic features, including word length, sentence length, part-of-speech tags, and embedding features, in order to improve the performance of logistic regression classifiers. Recently, BIBREF17 surveyed current approaches for hate speech detection, which interestingly also called attention to modeling context information for resolving difficult hate speech instances. ### Corpus Overview
The Fox News User Comments corpus consists of 1528 annotated comments (435 labeled as hateful) that were posted by 678 different users in 10 complete news discussion threads on the Fox News website. The 10 threads were manually selected and represent popular discussion threads during August 2016. All of the comments included in these 10 threads were annotated. The number of comments in each of the 10 threads is roughly equal. Rich context information was kept for each comment, including its user screen name, the comments in the same thread with their nested structure, and the original news article. The data corpus along with annotation guidelines is posted on GitHub. ### Annotation Guidelines
Our annotation guidelines are similar to the guidelines used by BIBREF9 . We define hateful speech to be the language which explicitly or implicitly threatens or demeans a person or a group based upon a facet of their identity such as gender, ethnicity, or sexual orientation. The labeling of hateful speech in our corpus is binary. A comment will be labeled as hateful or non-hateful. ### Annotation Procedure
We identified two native English speakers for annotating online user comments. The two annotators first discussed and practiced before they started annotation. They achieved a surprisingly high Kappa score BIBREF18 of 0.98 on 648 comments from 4 threads. We think that thorough discussion in the training stage was the key to achieving this high inter-annotator agreement. Comments on which the annotators disagreed were labeled as hateful as long as one annotator labeled them as hateful. Then one annotator continued to annotate the remaining 880 comments from the remaining six discussion threads. ### Characteristics in Fox News User Comments corpus
Hateful comments in the Fox News User Comments Corpus are often subtle, creative and implicit. Therefore, context information is necessary in order to accurately identify such hate speech. The hatefulness of many comments depends on understanding their contexts. For instance, (3) mastersundholm: Just remember no trabjo no cervesa This comment is posted for the news "States moving to restore work requirements for food stamp recipients". This comment implies that Latino immigrants abuse the food stamp policy, which is clearly a stereotype. Many hateful comments use implicit and subtle language, containing no clear hate-indicating word or phrase. In order to recognize such hard cases, we hypothesize that neural net models are more suitable because they capture the overall composite meaning of a comment. For instance, the following comment is a typical implicit stereotype against women. (4) MarineAssassin: Hey Brianne - get in the kitchen and make me a samich. Chop Chop 11% of our annotated comments have more than 50 words each. In such long comments, the hateful indicators usually appear in a small region of a comment while the majority of the comment is neutral. For example, (5) TMmckay: I thought ...115 words... Too many blacks winning, must be racist and needs affirmative action to make whites equally win! Certain user screen names indicate hatefulness, which implies that comments posted by these users are likely to contain hate speech. In the following example, commie is a slur for communists. (6) nocommie11: Blah blah blah. Israel is the only civilized nation in the region to keep the unwashed masses at bay. ### Logistic Regression Models
In logistic regression models, we extract four types of features: word-level and character-level n-gram features as well as two types of lexicon-derived features. We extract these four types of features from the target comment first. Then we extract these features from two sources of context text, specifically the title of the news article that the comment was posted for and the screen name of the user who posted the comment. For the logistic regression model implementation, we use l2 loss. We adopt the balanced class weight as described in Scikit-learn. A logistic regression model with character-level n-gram features is presented as a strong baseline for comparison since it was shown to be very effective (BIBREF0, BIBREF9). For character-level n-grams, we extract character-level bigrams, tri-grams and four-grams. For word-level n-grams, we extract unigrams and bigrams. Linguistic Inquiry and Word Count, also called LIWC, has been proven useful for text analysis and classification BIBREF19. In the LIWC dictionary, each word is labeled with several semantic labels. In our experiment, we use the LIWC 2015 dictionary, which contains 125 semantic categories. Each word is converted into a 125-dimension LIWC vector, one dimension per semantic category. The LIWC feature vector for a comment or its context is a 125-dimension vector as well, which is the sum of all its words' LIWC vectors. The NRC emotion lexicon contains a list of English words that were labeled with eight basic emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) and sentiment polarities (negative and positive) BIBREF20. We use the NRC emotion lexicon to capture emotion clues in text. Each word is converted into a 10-dimension emotion vector, corresponding to eight emotion types and two polarity labels. The emotion vector for a comment or its context is a 10-dimension vector as well, which is the sum of all its words' emotion vectors. 
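A minimal sketch of this context-aware feature setup, assuming scikit-learn. The example comments, titles and usernames below are hypothetical, and the LIWC and NRC lexicon features are omitted since those dictionaries are not reproduced here; only the n-gram features from the comment and its two context sources are shown.

```python
# Sketch: character/word n-grams from the comment plus n-gram features from
# two context sources (news title, username), fed to a logistic regression
# with l2 penalty and balanced class weights, as described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack

# Hypothetical training data (not from the actual corpus).
comments = ["get in the kitchen and make me a samich", "great article, thanks"]
titles = ["States moving to restore work requirements", "Local news roundup"]
usernames = ["MarineAssassin", "reader01"]
labels = [1, 0]  # 1 = hateful, 0 = non-hateful

char_vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))   # char 2-4 grams
word_vec = TfidfVectorizer(analyzer="word", ngram_range=(1, 2))   # word uni/bigrams
title_vec = TfidfVectorizer(analyzer="word", ngram_range=(1, 2))  # title context
name_vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))   # username context

# Concatenate comment features with context features into one sparse matrix.
X = hstack([
    char_vec.fit_transform(comments),
    word_vec.fit_transform(comments),
    title_vec.fit_transform(titles),
    name_vec.fit_transform(usernames),
])

clf = LogisticRegression(penalty="l2", class_weight="balanced").fit(X, labels)
preds = clf.predict(X)
```

The context-blind baseline corresponds to dropping the `title_vec` and `name_vec` blocks from the stacked matrix.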
As shown in Table TABREF20, given the comment as the only input, the combination of character n-grams, word n-grams, LIWC features and NRC features achieves the best performance. It shows that, in addition to character-level features, adding more features can improve hate speech detection performance. However, the improvement is limited; compared with the baseline model, the F1 score only improves by 1.3%. In contrast, when context information is taken into account, the performance greatly improves. Specifically, after incorporating features extracted from the news title and username, model performance improved by around 4% in both F1 score and AUC score. This shows that using additional context-based features in logistic regression models is useful for hate speech detection. ### Neural Network Models
Our neural network model mainly consists of three parallel LSTM BIBREF21 layers. It has three different inputs: the target comment, its news title and its username. The comment and news title are encoded into sequences of word embeddings; we use pre-trained word2vec word embeddings. The username is encoded into a sequence of characters using one-hot character encoding. The comment is sent into a bi-directional LSTM with an attention mechanism (BIBREF22). The news title and username are sent into bi-directional LSTMs. Note that we did not apply the attention mechanism to the neural network components for the username and news title because both types of context are relatively short, and attention mechanisms tend to be useful when the text input is long. The three LSTM output layers are concatenated, then connected to a sigmoid layer, which outputs predictions. The number of hidden units in each LSTM used in our model is set to 100. The recurrent dropout rate of the LSTMs is set to 0.2. In addition, we use binary cross entropy as the loss function and a batch size of 128. The neural network models are trained for 30 epochs. As shown in Table TABREF21, given the comment as the only input, the bi-directional LSTM model with the attention mechanism achieves the best performance. Note that the attention mechanism significantly improves the hate speech detection performance of the bi-directional LSTM model. We hypothesize that this is because hate-indicator phrases are often concentrated in a small region of a comment, which is especially the case for long comments. ### Ensemble Models
To study the differences between the logistic regression model and the neural network model, and to potentially gain performance improvements, we build and evaluate ensemble models. As shown in Table TABREF24, both ensemble models significantly improved hate speech detection performance. Figure FIGREF28 shows the system prediction results for comments that were labeled as hateful in the dataset. It can be seen that the two models perform differently. We further examined predicted comments and found that both types of models have unique strengths in identifying certain types of hateful comments. The feature-based logistic regression models are capable of making good use of character-level n-gram features, which are powerful in identifying hateful comments that contain OOV words, capitalized words or misspelled words. We provide two examples from the hateful comments that were labeled only by the logistic regression model: (7) kmawhmf: FBLM. Here FBLM means fuck Black Lives Matter. This hateful comment contains only character-level information, which is exactly what our logistic regression model can exploit. (8) SFgunrmn: what a efen loon, but most femanazis are. This comment deliberately misspells feminazi as femanazis, a derogatory term for feminists. It shows that the logistic regression model is capable of dealing with misspellings. The LSTM with attention mechanism is suitable for identifying specific small regions indicating hatefulness in long comments. In addition, the neural net models are powerful in capturing implicit hateful language as well. The following are two hateful comment examples that were identified only by the neural net model: (9) freedomscout: @LarJass Many religions are poisonous to logic and truth, that much is true...and human beings still remain fallen human beings even they are Redeemed by the Sacrifice of Jesus Christ. So there's that. 
But the fallacies of thinking cannot be limited or attributed to religion but to error inherent in human motivation, the motivation to utter self-centeredness as fallen sinful human beings. Nearly all of the world's many religions are expressions of that utter sinful nature...Christianity and Judaism being the sole exceptions. This comment is expressing the stereotyping against religions which are not Christian or Judaism. The hatefulness is concentrated within the two bolded segments. (10)mamahattheridge: blacks Love being victims. In this comment, the four words themselves are not hateful at all. But when combined together, it is clearly hateful against black people. ### Evaluation
We evaluate our models by 10-fold cross validation on our newly created Fox News User Comments Corpus. Both types of models use the exact same 10 folds of training data and test data. We report experimental results using multiple metrics, including accuracy, precision/recall/F1-score, and area under the curve (AUC). ### Experimental Results
Table TABREF20 shows the performance of the logistic regression models. The first section of Table TABREF20 shows the performance of logistic regression models using features extracted from the target comment only. The results show that the logistic regression model improved on every metric after adding both word-level n-gram features and lexicon-derived features; however, the improvements are moderate. The second section shows the performance of logistic regression models using the four types of features extracted from both the target comment and its contexts. The results show that the logistic regression model using features extracted from the comment and both types of context achieved the best performance, with improvements of 2.8% and 2.5% in AUC score and F1-score respectively. Table TABREF21 shows the performance of the neural network models. The first section of Table TABREF21 shows the performance of several neural network models that use comments as the only input; the model names are self-explanatory. We can see that the attention mechanism coupled with the bi-directional LSTM greatly improved online hate speech detection, by 5.7% in AUC score. The second section of Table TABREF21 shows the performance of the best neural net model (bi-directional LSTM with attention) after adding additional learning components that take context as input. The results show that adding the username and news title can both improve model performance. Using the news title gives the best F1 score while using both the news title and username gives the best AUC score. Table TABREF24 shows the performance of ensemble models that combine the prediction results of the best context-aware logistic regression model and the best context-aware neural network model. We used two strategies to combine the prediction results of the two types of models. 
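The two score-combination strategies can be sketched as follows, assuming each model outputs a per-comment probability of hatefulness; the 0.5 decision threshold is an assumption, not stated in the text.

```python
import numpy as np

def max_score_ensemble(lr_scores, nn_scores, threshold=0.5):
    # Final decision based on the maximum of the two models' scores.
    combined = np.maximum(np.asarray(lr_scores), np.asarray(nn_scores))
    return (combined >= threshold).astype(int)

def average_score_ensemble(lr_scores, nn_scores, threshold=0.5):
    # Final decision based on the average of the two models' scores.
    combined = (np.asarray(lr_scores) + np.asarray(nn_scores)) / 2.0
    return (combined >= threshold).astype(int)

# Hypothetical scores: the third comment shows where the strategies diverge.
lr = np.array([0.9, 0.2, 0.7])
nn = np.array([0.3, 0.1, 0.2])
print(max_score_ensemble(lr, nn))      # -> [1 0 1]
print(average_score_ensemble(lr, nn))  # -> [1 0 0]
```

The max strategy flags a comment when either model is confident, which trades precision for recall relative to averaging.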
Specifically, the Max Score Ensemble model made the final decision based on the maximum of the two scores assigned by the two separate models, while the Average Score Ensemble model used the average score to make final decisions. We can see that both ensemble models further improved hate speech detection performance compared with using either model alone, achieving the best classification performance. Compared with the logistic regression baseline, the Max Score Ensemble model improved recall by more than 20% with comparable precision, and improved the F1 score by around 10%. In addition, the Average Score Ensemble model improved the AUC score by around 7%. ### Conclusion
We demonstrated the importance of utilizing context information for online hate speech detection. We first presented a corpus of hateful speech consisting of full threads of online discussion posts. In addition, we presented two types of models, feature-based logistic regression models and neural network models, in order to incorporate context information for improving hate speech detection performance. Furthermore, we show that ensemble models leveraging strengths of both types of models achieve the best performance for automatic online hate speech detection.

Table 1: Performance of Logistic Regression Models
Table 2: Performance of Neural Network Models
Table 3: Performance of Ensemble Models
Figure 1: System Prediction Results of Comments that were Annotated as Hateful
|
title of the news article, screen name of the user
|
Why doesn't the Earth shoot the spaceship out of the sky?
A. The Earth does not have weapons that are capable of going as high as the spaceship. Nor are their weapons capable of penetrating the spaceship's hull.
B. Bal and Ethaniel are using the spaceship to broadcast a message of peace in all the languages of the world.
C. The combination of the Christmas holiday, aliens that look like angels, and what looks to be the star of Bethlehem, has convinced the people of Earth that Bal and Ethaniel are friends and not foes.
D. The spaceship is lit up as brightly as a star. The light is bright enough to convince the humans that firing upon it would be futile.
|
SECOND LANDING By FLOYD WALLACE A gentle fancy for the Christmas Season—an oft-told tale with a wistful twistful of Something that left the Earth with a wing and a prayer. Earth was so far away that it wasn't visible. Even the sun was only a twinkle. But this vast distance did not mean that isolation could endure forever. Instruments within the ship intercepted radio broadcasts and, within the hour, early TV signals. Machines compiled dictionaries and grammars and began translating the major languages. The history of the planet was tabulated as facts became available. The course of the ship changed slightly; it was not much out of the way to swing nearer Earth. For days the two within the ship listened and watched with little comment. They had to decide soon. "We've got to make or break," said the first alien. "You know what I'm in favor of," said the second. "I can guess," said Ethaniel, who had spoken first. "The place is a complete mess. They've never done anything except fight each other—and invent better weapons." "It's not what they've done," said Bal, the second alien. "It's what they're going to do, with that big bomb." "The more reason for stopping," said Ethaniel. "The big bomb can destroy them. Without our help they may do just that." "I may remind you that in two months twenty-nine days we're due in Willafours," said Bal. "Without looking at the charts I can tell you we still have more than a hundred light-years to go." "A week," said Ethaniel. "We can spare a week and still get there on time." "A week?" said Bal. "To settle their problems? They've had two world wars in one generation and that the third and final one is coming up you can't help feeling in everything they do." "It won't take much," said Ethaniel. "The wrong diplomatic move, or a trigger-happy soldier could set it off. And it wouldn't have to be deliberate. A meteor shower could pass over and their clumsy instruments could interpret it as an all-out enemy attack." "Too bad," said Bal. 
"We'll just have to forget there ever was such a planet as Earth." "Could you? Forget so many people?" "I'm doing it," said Bal. "Just give them a little time and they won't be here to remind me that I have a conscience." "My memory isn't convenient," said Ethaniel. "I ask you to look at them." Bal rustled, flicking the screen intently. "Very much like ourselves," he said at last. "A bit shorter perhaps, and most certainly incomplete. Except for the one thing they lack, and that's quite odd, they seem exactly like us. Is that what you wanted me to say?" "It is. The fact that they are an incomplete version of ourselves touches me. They actually seem defenseless, though I suppose they're not." "Tough," said Bal. "Nothing we can do about it." "There is. We can give them a week." "In a week we can't negate their entire history. We can't begin to undo the effect of the big bomb." "You can't tell," said Ethaniel. "We can look things over." "And then what? How much authority do we have?" "Very little," conceded Ethaniel. "Two minor officials on the way to Willafours—and we run directly into a problem no one knew existed." "And when we get to Willafours we'll be busy. It will be a long time before anyone comes this way again." "A very long time. There's nothing in this region of space our people want," said Ethaniel. "And how long can Earth last? Ten years? Even ten months? The tension is building by the hour." "What can I say?" said Bal. "I suppose we can stop and look them over. We're not committing ourselves by looking." They went much closer to Earth, not intending to commit themselves. For a day they circled the planet, avoiding radar detection, which for them was not difficult, testing, and sampling. Finally Ethaniel looked up from the monitor screen. "Any conclusions?" "What's there to think? It's worse than I imagined." "In what way?" "Well, we knew they had the big bomb. Atmospheric analysis showed that as far away as we were." "I know." 
"We also knew they could deliver the big bomb, presumably by some sort of aircraft." "That was almost a certainty. They'd have no use for the big bomb without aircraft." "What's worse is that I now find they also have missiles, range one thousand miles and upward. They either have or are near a primitive form of space travel." "Bad," said Ethaniel. "Sitting there, wondering when it's going to hit them. Nervousness could set it off." "It could, and the missiles make it worse," said Bal. "What did you find out at your end?" "Nothing worthwhile. I was looking at the people while you were investigating their weapons." "You must think something." "I wish I knew what to think. There's so little time," Ethaniel said. "Language isn't the difficulty. Our machines translate their languages easily and I've taken a cram course in two or three of them. But that's not enough, looking at a few plays, listening to advertisements, music, and news bulletins. I should go down and live among them, read books, talk to scholars, work with them, play." "You could do that and you'd really get to know them. But that takes time—and we don't have it." "I realize that." "A flat yes or no," said Bal. "No. We can't help them," said Ethaniel. "There is nothing we can do for them—but we have to try." "Sure, I knew it before we started," said Bal. "It's happened before. We take the trouble to find out what a people are like and when we can't help them we feel bad. It's going to be that way again." He rose and stretched. "Well, give me an hour to think of some way of going at it." It was longer than that before they met again. In the meantime the ship moved much closer to Earth. They no longer needed instruments to see it. The planet revolved outside the visionports. The southern plains were green, coursed with rivers; the oceans were blue; and much of the northern hemisphere was glistening white. Ragged clouds covered the pole, and a dirty pall spread over the mid-regions of the north. 
"I haven't thought of anything brilliant," said Ethaniel. "Nor I," said Bal. "We're going to have to go down there cold. And it will be cold." "Yes. It's their winter." "I did have an idea," said Bal. "What about going down as supernatural beings?" "Hardly," said Ethaniel. "A hundred years ago it might have worked. Today they have satellites. They are not primitives." "I suppose you're right," said Bal. "I did think we ought to take advantage of our physical differences." "If we could I'd be all for it. But these people are rough and desperate. They wouldn't be fooled by anything that crude." "Well, you're calling it," said Bal. "All right," said Ethaniel. "You take one side and I the other. We'll tell them bluntly what they'll have to do if they're going to survive, how they can keep their planet in one piece so they can live on it." "That'll go over big. Advice is always popular." "Can't help it. That's all we have time for." "Special instructions?" "None. We leave the ship here and go down in separate landing craft. You can talk with me any time you want to through our communications, but don't unless you have to." "They can't intercept the beams we use." "They can't, and even if they did they wouldn't know what to do with our language. I want them to think that we don't need to talk things over." "I get it. Makes us seem better than we are. They think we know exactly what we're doing even though we don't." "If we're lucky they'll think that." Bal looked out of the port at the planet below. "It's going to be cold where I'm going. You too. Sure we don't want to change our plans and land in the southern hemisphere? It's summer there." "I'm afraid not. The great powers are in the north. They are the ones we have to reach to do the job." "Yeah, but I was thinking of that holiday you mentioned. We'll be running straight into it. That won't help us any." "I know, they don't like their holidays interrupted. It can't be helped. We can't wait until it's over." 
"I'm aware of that," said Bal. "Fill me in on that holiday, anything I ought to know. Probably religious in origin. That so?" "It was religious a long time ago," said Ethaniel. "I didn't learn anything exact from radio and TV. Now it seems to be chiefly a time for eating, office parties, and selling merchandise." "I see. It has become a business holiday." "That's a good description. I didn't get as much of it as I ought to have. I was busy studying the people, and they're hard to pin down." "I see. I was thinking there might be some way we could tie ourselves in with this holiday. Make it work for us." "If there is I haven't thought of it." "You ought to know. You're running this one." Bal looked down at the planet. Clouds were beginning to form at the twilight edge. "I hate to go down and leave the ship up here with no one in it." "They can't touch it. No matter how they develop in the next hundred years they still won't be able to get in or damage it in any way." "It's myself I'm thinking about. Down there, alone." "I'll be with you. On the other side of the Earth." "That's not very close. I'd like it better if there were someone in the ship to bring it down in a hurry if things get rough. They don't think much of each other. I don't imagine they'll like aliens any better." "They may be unfriendly," Ethaniel acknowledged. Now he switched a monitor screen until he looked at the slope of a mountain. It was snowing and men were cutting small green trees in the snow. "I've thought of a trick." "If it saves my neck I'm for it." "I don't guarantee anything," said Ethaniel. "This is what I was thinking of: instead of hiding the ship against the sun where there's little chance it will be seen, we'll make sure that they do see it. Let's take it around to the night side of the planet and light it up." "Say, pretty good," said Bal. "They can't imagine that we'd light up an unmanned ship," said Ethaniel. 
"Even if the thought should occur to them they'll have no way of checking it. Also, they won't be eager to harm us with our ship shining down on them." "That's thinking," said Bal, moving to the controls. "I'll move the ship over where they can see it best and then I'll light it up. I'll really light it up." "Don't spare power." "Don't worry about that. They'll see it. Everybody on Earth will see it." Later, with the ship in position, glowing against the darkness of space, pulsating with light, Bal said: "You know, I feel better about this. We may pull it off. Lighting the ship may be just the help we need." "It's not we who need help, but the people of Earth," said Ethaniel. "See you in five days." With that he entered a small landing craft, which left a faintly luminescent trail as it plunged toward Earth. As soon as it was safe to do so, Bal left in another craft, heading for the other side of the planet. And the spaceship circled Earth, unmanned, blazing and pulsing with light. No star in the winter skies of the planet below could equal it in brilliancy. Once a man-made satellite came near but it was dim and was lost sight of by the people below. During the day the ship was visible as a bright spot of light. At evening it seemed to burn through the sunset colors. And the ship circled on, bright, shining, seeming to be a little piece clipped from the center of a star and brought near Earth to illuminate it. Never, or seldom, had Earth seen anything like it. In five days the two small landing craft that had left it arched up from Earth and joined the orbit of the large ship. The two small craft slid inside the large one and doors closed behind them. In a short time the aliens met again. "We did it," said Bal exultantly as he came in. "I don't know how we did it and I thought we were going to fail but at the last minute they came through." Ethaniel smiled. "I'm tired," he said, rustling. "Me too, but mostly I'm cold," said Bal, shivering. "Snow. 
Nothing but snow wherever I went. Miserable climate. And yet you had me go out walking after that first day." "From my own experience it seemed to be a good idea," said Ethaniel. "If I went out walking one day I noticed that the next day the officials were much more cooperative. If it worked for me I thought it might help you." "It did. I don't know why, but it did," said Bal. "Anyway, this agreement they made isn't the best but I think it will keep them from destroying themselves." "It's as much as we can expect," said Ethaniel. "They may have small wars after this, but never the big one. In fifty or a hundred years we can come back and see how much they've learned." "I'm not sure I want to," said Bal. "Say, what's an angel?" "Why?" "When I went out walking people stopped to look. Some knelt in the snow and called me an angel." "Something like that happened to me," said Ethaniel. "I didn't get it but I didn't let it upset me," said Bal. "I smiled at them and went about my business." He shivered again. "It was always cold. I walked out, but sometimes I flew back. I hope that was all right." In the cabin Bal spread his great wings. Renaissance painters had never seen his like but knew exactly how he looked. In their paintings they had pictured him innumerable times. "I don't think it hurt us that you flew," said Ethaniel. "I did so myself occasionally." "But you don't know what an angel is?" "No. I didn't have time to find out. Some creature of their folklore I suppose. You know, except for our wings they're very much like ourselves. Their legends are bound to resemble ours." "Sure," said Bal. "Anyway, peace on Earth." THE END Transcriber's Note: This etext was produced from Amazing Science Fiction Stories January 1960. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
|
C. The combination of the Christmas holiday, aliens that look like angels, and what looks to be the star of Bethlehem, has convinced the people of Earth that Bal and Ethaniel are friends and not foes.
|
What is Major Banes' opinion of Lt. Alice Britton's husband?
A. He thinks highly of his ability but not about his personality
B. He thinks he's very skilled as a pilot and a great husband too
C. He doesn't think much of him at all
D. He thinks he's a talentless impulsive man who bought his way to his position
|
SPATIAL DELIVERY BY RANDALL GARRETT Women on space station assignments shouldn't get pregnant. But there's a first time for everything. Here's the story of such a time——and an historic situation. [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, October 1954. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] One thousand seventy-five miles above the wrinkled surface of Earth, a woman was in pain. There, high in the emptiness of space, Space Station One swung in its orbit. Once every two hours, the artificial satellite looped completely around the planet, watching what went on below. Outside its bright steel hull was the silence of the interplanetary vacuum; inside, in the hospital ward, Lieutenant Alice Britton clutched at the sheets of her bed in pain, then relaxed as it faded away. Major Banes looked at her and smiled a little. "How do you feel, Lieutenant?" She smiled back; she knew the pain wouldn't return for a few minutes yet. "Fine, doctor. It's no worse than I was expecting. How long will it be before we can contact White Sands?" The major looked nervously at his wristwatch. "Nearly an hour. You'll be all right." "Certainly," she agreed, running a hand through her brown hair, "I'll be okay. Just you be on tap when I call." The major's grin broadened. "You don't think I'd miss a historical event like this, do you? You take it easy. We're over Eastern Europe now, but as soon as we get within radio range of New Mexico, I'll beam a call in." He paused, then repeated, "You just take it easy. Call the nurse if anything happens." Then he turned and walked out of the room. Alice Britton closed her eyes. Major Banes was all smiles and cheer now, but he hadn't been that way five months ago. She chuckled softly to herself as she thought of his blistering speech. "Lieutenant Britton, you're either careless or brainless; I don't know which!
Your husband may be the finest rocket jockey in the Space Service, but that doesn't give him the right to come blasting up here on a supply rocket just to get you pregnant!" Alice had said: "I'm sure the thought never entered his mind, doctor. I know it never entered mine." "But that was two and a half months ago! Why didn't you come to me before this? Of all the tom-fool—" His voice had died off in suppressed anger. "I didn't know," she had said stolidly. "You know my medical record." "I know. I know." A puzzled frown had come over his face then, a frown which almost hid the green eyes that contrasted so startlingly with the flaming red of his hair. "The question is: what do we do next? We're not equipped for obstetrics up here." "Send me back down to Earth, of course." And he had looked up at her scathingly. "Lieutenant Britton, it is my personal opinion that you need your head examined, and not by a general practitioner, either! Why, I wouldn't let you get into an airplane, much less land on Earth in a rocket! If you think I'd permit you to subject yourself to eight gravities of acceleration in a rocket landing, you're daffy!" She hadn't thought of it before, but the major was right. The terrible pressure of a rocket landing would increase her effective body weight to nearly half a ton; an adult human being couldn't take that sort of punishment for long, much less the tiny life that was growing within her. So she had stayed on in the Space Station, doing her job as always. As Chief Radar Technician, she was important in the operation of the station. Her pregnancy had never made her uncomfortable; the slow rotation of the wheel-shaped station about its axis gave an effective gravity at the rim only half that of Earth's surface, and the closer to the hub she went, the less her weight became. According to the major, the baby was due sometime around the first of September. "Two hundred and eighty days," he had said. "Luckily, we can pinpoint it almost exactly. 
And at a maximum of half of Earth gravity, you shouldn't weigh more than seventy pounds then. You're to report to me at least once a week, Lieutenant." As the words went through her mind, another spasm of pain hit her, and she clenched her fists tightly on the sheets again. It went away, and she took a deep breath. Everything had been fine until today. And then, only half an hour ago, a meteor had hit the radar room. It had been only a tiny bit of rock, no bigger than a twenty-two bullet, and it hadn't been traveling more than ten miles per second, but it had managed to punch its way through the shielding of the station. The self-sealing walls had closed the tiny hole quickly, but even in that short time, a lot of air had gone whistling out into the vacuum of space. The depressurization hadn't hurt her too much, but the shock had been enough to start labor. The baby was going to come two months early. She relaxed a little more, waiting for the next pain. There was nothing to worry about; she had absolute faith in the red-haired major. The major himself was not so sure. He sat in his office, massaging his fingertips and looking worriedly at the clock on the wall. The Chief Nurse at a nearby desk took off her glasses and looked at him speculatively. "Something wrong, doctor?" "Incubator," he said, without taking his eyes off the clock. "I beg your pardon?" "Incubator. We can't deliver a seven-month preemie without an incubator." The nurse's eyes widened. "Good Lord! I never thought of that! What are you going to do?" "Right now, I can't do anything. I can't beam a radio message through to the Earth. But as soon as we get within radio range of White Sands, I'll ask them to send up an emergency rocket with an incubator. But—" "But what?" "Will we have time? The pains are coming pretty fast now. It will be at least three hours before they can get a ship up here. If they miss us on the next time around, it'll be five hours. She can't hold out that long." 
The Chief Nurse turned her eyes to the slowly moving second hand of the wall clock. She could feel a lump in her throat. Major Banes was in the Communications Center a full five minutes before the coastline of California appeared on the curved horizon of the globe beneath them. He had spent the hour typing out a complete report of what had happened to Alice Britton and a list of what he needed. He handed it to the teletype operator and paced the floor impatiently as he waited for the answer. When the receiver teletype began clacking softly, he leaned over the page, waiting anxiously for every word. WHITE SANDS ROCKET BASE 4 JULY 1984 0913 HRS URGENT TO: MAJ PETER BANES (MC) 0-266118 SS-1 MEDICAL OFFICER FROM: GEN DAVID BARRETT 0-199515 COMMANDING WSRB ROCKET. ORBIT NOW BEING COMPUTED FOR RENDEZVOUS WITH SS-1 AS OF NEXT PASSAGE ABOVE USA. CAPT. JAMES BRITTON PILOTING. MEDICS LOADING SHIP TWELVE WITH INCUBATOR AND OTHER SUPPLIES. BASE OBSTETRICIAN LT COL GATES ALSO COMING TO ASSIST IN DELIVERY. HANG ON. OVER. Banes nodded and turned to the operator. "I want a direct open telephone line to my office in case I have to get another message to the base before we get out of range again." He turned and left through the heavy door. Each room of the space station was protected by airtight doors and individual heating units; if some accident, such as a really large meteor hit, should release the air from one room, nearby rooms would be safe. Banes' next stop was the hospital ward. Alice Britton was resting quietly, but there were lines of strain around her eyes which hadn't been there an hour before. "How's it coming, Lieutenant?" She smiled, but another spasm hit her before she could answer. After a time, she said: "I'm doing fine, but you look as if you'd been through the mill. What's eating you?" He forced a nervous smile. "Nothing but the responsibility. You're going to be a very famous woman, you know. You'll be the mother of the first child born in space. 
And it's my job to see to it that you're both all right." She grinned. "Another Dr. Dafoe?" "Something on that order, I suppose. But it won't be all my glory. Colonel Gates, the O.B. man, was supposed to come up for the delivery in September, so when White Sands contacted us, they said he was coming immediately." He paused, and a genuine smile crossed his face. "Your husband is bringing him up." "Jim! Coming up here? Wonderful! But I'm afraid the colonel will be too late. This isn't going to last that long." Banes had to fight hard to keep his face smiling when she said that, but he managed an easy nod. "We'll see. Don't hurry it, though. Let nature take its course. I'm not such a glory hog that I'd not let Gates have part of it—or all of it, for that matter. Relax and take it easy." He went on talking, trying to keep the conversation light, but his eyes kept wandering to his wristwatch, timing Alice's pain intervals. They were coming too close together to suit him. There was a faint rap, and the heavy airtight door swung open to admit the Chief Nurse. "There's a message for you in your office, doctor. I'll send a nurse in to be with her." He nodded, then turned back to Alice. "Stiff uppah lip, and all that sort of rot," he said in a phony British accent. "Oh, raw ther , old chap," she grinned. Back in his office, Banes picked up the teletype flimsy. WHITE SANDS ROCKET BASE 4 JULY 1984 0928 HRS URGENT TO: MAJ PETER BANES (MC) 0-266118 SS-1 MEDICAL OFFICER FROM: GEN DAVID BARRETT 0-199515 COMMANDING WSRB ROCKET. ORBIT COMPUTED FOR RENDEZVOUS AT 1134 HRS MST. CAPT BRITTON SENDS PERSONAL TO LT BRITTON AS FOLLOWS: HOLD THE FORT, BABY, THE WHOLE WORLD IS PRAYING FOR YOU. OUT. Banes sat on the edge of his desk, pounding a fist into the palm of his left hand. "Two hours. It isn't soon enough. She'll never hold out that long. And we don't have an incubator." His voice was a clipped monotone, timed with the rhythmic slamming of his fist. 
The Chief Nurse said: "Can't we build something that will do until the rocket gets here?" Banes looked at her, his face expressionless. "What would we build it out of? There's not a spare piece of equipment in the station. It costs money to ship material up here, you know. Anything not essential is left on the ground." The phone rang. Banes picked it up and identified himself. The voice at the other end said: "This is Communications, Major. I tape recorded all the monitor pickups from the Earth radio stations, and it looks as though the Space Service has released the information to the public. Lieutenant Britton's husband was right when he said the whole world's praying for her. Do you want to hear the tapes?" "Not now, but thanks for the information." He hung up and looked into the Chief Nurse's eyes. "They've released the news to the public." She frowned. "That really puts you on the spot. If the baby dies, they'll blame you." Banes slammed his fist to the desk. "Do you think I give a tinker's dam about that? I'm interested in saving a life, not in worrying about what people may think!" "Yes, sir. I just thought—" "Well, think about something useful! Think about how we're going to save that baby!" He paused as he saw her eyes. "I'm sorry, Lieutenant. My nerves are all raw, I guess. But, dammit, my field is space medicine. I can handle depressurization, space sickness, and things like that, but I don't know anything about babies! I know what I read in medical school, and I watched a delivery once, but that's all I know. I don't even have any references up here; people aren't supposed to go around having babies on a space station!" "It's all right, doctor. Shall I prepare the delivery room?" His laugh was hard and short. "Delivery room! I wish to Heaven we had one! Prepare the ward room next to the one she's in now, I guess. It's the best we have. "So help me Hannah, I'm going to see some changes made in regulations! A situation like this won't happen again!" 
The nurse left quietly. She knew Banes wasn't really angry at the Brittons; it was simply his way of letting off steam to ease the tension within him. The slow, monotonous rotation of the second hand on the wall clock seemed to drag time grudgingly along with it. Banes wished he could smoke to calm his raw nerves, but it was strictly against regulations. Air was too precious to be used up by smoking. Every bit of air on board had had to be carried up in rockets when the station was built in space. The air purifiers in the hydroponics section could keep the air fresh enough for breathing, but fire of any kind would overtax the system, leaving too little oxygen in the atmosphere. It was a few minutes of ten when he decided he'd better get back to Alice Britton. She was trying to read a book between spasms, but she wasn't getting much read. She dropped it to the floor when he came in. "Am I glad to see you! It won't be long now." She looked at him analytically. "Say! Just what is eating you? You look more haggard than I do!" Again he tried to force a smile, but it didn't come off too well. "Nothing serious. I just want to make sure everything comes out all right." She smiled. "It will. You're all set. You ordered the instruments months ago. Or did you forget something?" That hit home, but he just grinned feebly. "I forgot to get somebody to boil water." "Whatever for?" "Coffee, of course. Didn't you know that? Papa always heats up the water; that keeps him out of the way, and the doctor has coffee afterwards." Alice's hands grasped the sheet again, and Banes glanced at his watch. Ninety seconds! It was long and hard. When the pain had ebbed away, he said: "We've got the delivery room all ready. It won't be much longer now." "I'll say it won't! How about the incubator?" There was a long pause. Finally, he said softly: "There isn't any incubator. I didn't take the possibility of a premature delivery into account. It's my fault. 
I've done what I could, though; the ship is bringing one up. I—I think we'll be able to keep the child alive until—" He stopped. Alice was bubbling up with laughter. "Lieutenant! Lieutenant Britton! Alice! This is no time to get hysterical! Stop it!" Her laughter slowed to a chuckle. " Me get hysterical! That's a good one! What about you? You're so nervous you couldn't sip water out of a bathtub without spilling it!" He blinked. "What do you mean?" Another pain came, and he had to wait until it was over before he got her answer. "Doctor," she said, "I thought you would have figured it out. Ask yourself just one question. Ask yourself, 'Why is a space station like an incubator?'" Space Ship Twelve docked at Space Station One at exactly eleven thirty-four, and two men in spacesuits pushed a large, bulky package through the airlock. Major Peter Banes, haggard but smiling, met Captain Britton in the corridor as he and the colonel entered the hospital ward. Banes nodded to Colonel Gates, then turned to Britton. "I don't know whether to congratulate you or take a poke at you, Captain, but I suppose congratulations come first. Your son, James Edward Britton II, is doing fine, thank you." "You mean— already ?" The colonel said nothing, but he raised an eyebrow. "Over an hour ago," said Banes. "But—but—the incubator—" Banes' grin widened. "We'll put the baby in it, now that we've got it, but it really isn't necessary. Your wife figured that one out. A space station is a kind of incubator itself, you see. It protects us poor, weak humans from the terrible conditions of space. So all we had to do was close up one of the airtight rooms, sterilize it, warm it up, and put in extra oxygen from the emergency tanks. Young James is perfectly comfortable." "Excellent, Major!" said the colonel. "Don't thank me. It was Captain Britton's wife who—" But Captain Britton wasn't listening any more. He was headed toward his wife's room at top speed.
|
A. He thinks highly of his ability but not about his personality
|
Why was zarquil not played often by those in the area?
A. It was an illegal game.
B. It was only played by Dutchmen.
C. It was fabulously expensive.
D. It was dangerous.
|
Bodyguard By CHRISTOPHER GRIMM Illustrated by CAVAT [Transcriber's Note: This etext was produced from Galaxy Science Fiction February 1956. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] When overwhelming danger is constantly present, of course a man is entitled to have a bodyguard. The annoyance was that he had to do it himself ... and his body would not cooperate! The man at the bar was exceptionally handsome, and he knew it. So did the light-haired girl at his side, and so did the nondescript man in the gray suit who was watching them from a booth in the corner. Everyone in the room was aware of the big young man, and most of the humans present were resentful, for he handled himself consciously and arrogantly, as if his appearance alone were enough to make him superior to anyone. Even the girl with him was growing restless, for she was accustomed to adulation herself, and next to Gabriel Lockard she was almost ordinary-looking. As for the extraterrestrials—it was a free bar—they were merely amused, since to them all men were pathetically and irredeemably hideous. Gabe threw his arm wide in one of his expansive gestures. There was a short man standing next to the pair—young, as most men and women were in that time, thanks to the science which could stave off decay, though not death—but with no other apparent physical virtue, for plastic surgery had not fulfilled its bright promise of the twentieth century. The drink he had been raising to his lips splashed all over his clothing; the glass shattered at his feet. Now he was not only a rather ugly little man, but also a rather ridiculous one—or at least he felt he was, which was what mattered. "Sorry, colleague," Gabe said lazily. "All my fault. You must let me buy you a replacement." He gestured to the bartender. "Another of the same for my fellow-man here." 
The ugly man dabbed futilely at his dripping trousers with a cloth hastily supplied by the management. "You must allow me to pay your cleaning bill," Gabe said, taking out his wallet and extracting several credit notes without seeming to look at them. "Here, have yourself a new suit on me." You could use one was implied. And that, coming on top of Gabriel Lockard's spectacular appearance, was too much. The ugly man picked up the drink the bartender had just set before him and started to hurl it, glass and all, into Lockard's handsome face. Suddenly a restraining hand was laid upon his arm. "Don't do that," the nondescript man who had been sitting in the corner advised. He removed the glass from the little man's slackening grasp. "You wouldn't want to go to jail because of him." The ugly man gave him a bewildered stare. Then, seeing the forces now ranged against him—including his own belated prudence—were too strong, he stumbled off. He hadn't really wanted to fight, only to smash back, and now it was too late for that. Gabe studied the newcomer curiously. "So, it's you again?" The man in the gray suit smiled. "Who else in any world would stand up for you?" "I should think you'd have given up by now. Not that I mind having you around, of course," Gabriel added too quickly. "You do come in useful at times, you know." "So you don't mind having me around?" The nondescript man smiled again. "Then what are you running from, if not me? You can't be running from yourself—you lost yourself a while back, remember?" Gabe ran a hand through his thick blond hair. "Come on, have a drink with me, fellow-man, and let's let bygones be bygones. I owe you something—I admit that. Maybe we can even work this thing out." "I drank with you once too often," the nondescript man said. "And things worked out fine, didn't they? For you." 
His eyes studied the other man's incredibly handsome young face, noted the suggestion of bags under the eyes, the beginning of slackness at the lips, and were not pleased with what they saw. "Watch yourself, colleague," he warned as he left. "Soon you might not be worth the saving." "Who was that, Gabe?" the girl asked. He shrugged. "I never saw him before in my life." Of course, knowing him, she assumed he was lying, but, as a matter of fact, just then he happened to have been telling the truth. Once the illuminators were extinguished in Gabriel Lockard's hotel suite, it seemed reasonably certain to the man in the gray suit, as he watched from the street, that his quarry would not go out again that night. So he went to the nearest airstation. There he inserted a coin in a locker, into which he put most of his personal possessions, reserving only a sum of money. After setting the locker to respond to the letter combination bodyguard , he went out into the street. If he had met with a fatal accident at that point, there would have been nothing on his body to identify him. As a matter of fact, no real identification was possible, for he was no one and had been no one for years. The nondescript man hailed a cruising helicab. "Where to, fellow-man?" the driver asked. "I'm new in the parish," the other man replied and let it hang there. "Oh...? Females...? Narcophagi...? Thrill-mills?" But to each of these questions the nondescript man shook his head. "Games?" the driver finally asked, although he could guess what was wanted by then. "Dice...? Roulette...? Farjeen?" "Is there a good zarquil game in town?" The driver moved so he could see the face of the man behind him in the teleview. A very ordinary face. "Look, colleague, why don't you commit suicide? It's cleaner and quicker." "I can't contact your attitude," the passenger said with a thin smile. "Bet you've never tried the game yourself. Each time it happens, there's a ... 
well, there's no experience to match it at a thrill-mill." He gave a sigh that was almost an audible shudder, and which the driver misinterpreted as an expression of ecstasy. "Each time, eh? You're a dutchman then?" The driver spat out of the window. "If it wasn't for the nibble, I'd throw you right out of the cab. Without even bothering to take it down. I hate dutchmen ... anybody with any legitimate feelings hates 'em." "But it would be silly to let personal prejudice stand in the way of a commission, wouldn't it?" the other man asked coolly. "Of course. You'll need plenty of foliage, though." "I have sufficient funds. I also have a gun." "You're the dictator," the driver agreed sullenly. II It was a dark and rainy night in early fall. Gabe Lockard was in no condition to drive the helicar. However, he was stubborn. "Let me take the controls, honey," the light-haired girl urged, but he shook his handsome head. "Show you I can do something 'sides look pretty," he said thickly, referring to an earlier and not amicable conversation they had held, and of which she still bore the reminder on one thickly made-up cheek. Fortunately the car was flying low, contrary to regulations, so that when they smashed into the beacon tower on the outskirts of the little town, they didn't have far to fall. And hardly had their car crashed on the ground when the car that had been following them landed, and a short fat man was puffing toward them through the mist. To the girl's indignation, the stranger not only hauled Gabe out onto the dripping grass first, but stopped and deliberately examined the young man by the light of his minilume, almost as if she weren't there at all. Only when she started to struggle out by herself did he seem to remember her existence. He pulled her away from the wreck just a moment before the fuel tank exploded and the 'copter went up in flames. Gabe opened his eyes and saw the fat man gazing down at him speculatively. 
"My guardian angel," he mumbled—shock had sobered him a little, but not enough. He sat up. "Guess I'm not hurt or you'd have thrown me back in." "And that's no joke," the fat man agreed. The girl shivered and at that moment Gabriel suddenly seemed to recall that he had not been alone. "How about Helen? She on course?" "Seems to be," the fat man said. "You all right, miss?" he asked, glancing toward the girl without, she thought, much apparent concern. " Mrs. ," Gabriel corrected. "Allow me to introduce you to Mrs. Gabriel Lockard," he said, bowing from his seated position toward the girl. "Pretty bauble, isn't she?" "I'm delighted to meet you, Mrs. Gabriel Lockard," the fat man said, looking at her intently. His small eyes seemed to strip the make-up from her cheek and examine the livid bruise underneath. "I hope you'll be worthy of the name." The light given off by the flaming car flickered on his face and Gabriel's and, she supposed, hers too. Otherwise, darkness surrounded the three of them. There were no public illuminators this far out—even in town the lights were dimming and not being replaced fast enough nor by the newer models. The town, the civilization, the planet all were old and beginning to slide downhill.... Gabe gave a short laugh, for no reason that she could see. There was the feeling that she had encountered the fat man before, which was, of course, absurd. She had an excellent memory for faces and his was not included in her gallery. The girl pulled her thin jacket closer about her chilly body. "Aren't you going to introduce your—your friend to me, Gabe?" "I don't know who he is," Gabe said almost merrily, "except that he's no friend of mine. Do you have a name, stranger?" "Of course I have a name." The fat man extracted an identification card from his wallet and read it. "Says here I'm Dominic Bianchi, and Dominic Bianchi is a retail milgot dealer.... 
Only he isn't a retail milgot dealer any more; the poor fellow went bankrupt a couple of weeks ago, and now he isn't ... anything." "You saved our lives," the girl said. "I'd like to give you some token of my—of our appreciation." Her hand reached toward her credit-carrier with deliberate insult. He might have saved her life, but only casually, as a by-product of some larger scheme, and her appreciation held little gratitude. The fat man shook his head without rancor. "I have plenty of money, thank you, Mrs. Gabriel Lockard.... Come," he addressed her husband, "if you get up, I'll drive you home. I warn you, be more careful in the future! Sometimes," he added musingly, "I almost wish you would let something happen. Then my problem would not be any problem, would it?" Gabriel shivered. "I'll be careful," he vowed. "I promise—I'll be careful." When he was sure that his charge was safely tucked in for the night, the fat man checked his personal possessions. He then requested a taxi driver to take him to the nearest zarquil game. The driver accepted the commission phlegmatically. Perhaps he was more hardened than the others had been; perhaps he was unaware that the fat man was not a desperate or despairing individual seeking one last chance, but what was known colloquially as a flying dutchman, a man, or woman, who went from one zarquil game to another, loving the thrill of the sport, if you could call it that, for its own sake, and not for the futile hope it extended and which was its sole shred of claim to moral justification. Perhaps—and this was the most likely hypothesis—he just didn't care. Zarquil was extremely illegal, of course—so much so that there were many legitimate citizens who weren't quite sure just what the word implied, knowing merely that it was one of those nameless horrors so deliciously hinted at by the fax sheets under the generic term of "crimes against nature." 
Actually the phrase was more appropriate to zarquil than to most of the other activities to which it was commonly applied. And this was one crime—for it was crime in law as well as nature—in which victim had to be considered as guilty as perpetrator; otherwise the whole legal structure of society would collapse. Playing the game was fabulously expensive; it had to be to make it profitable for the Vinzz to run it. Those odd creatures from Altair's seventh planet cared nothing for the welfare of the completely alien human beings; all they wanted was to feather their own pockets with interstellar credits, so that they could return to Vinau and buy many slaves. For, on Vinau, bodies were of little account, and so to them zarquil was the equivalent of the terrestrial game musical chairs. Which was why they came to Terra to make profits—there has never been big money in musical chairs as such. When the zarquil operators were apprehended, which was not frequent—as they had strange powers, which, not being definable, were beyond the law—they suffered their sentences with equanimity. No Earth court could give an effective prison sentence to a creature whose life spanned approximately two thousand terrestrial years. And capital punishment had become obsolete on Terra, which very possibly saved the terrestrials embarrassment, for it was not certain that their weapons could kill the Vinzz ... or whether, in fact, the Vinzz merely expired after a period of years out of sheer boredom. Fortunately, because trade was more profitable than war, there had always been peace between Vinau and Terra, and, for that reason, Terra could not bar the entrance of apparently respectable citizens of a friendly planet. The taxi driver took the fat man to one of the rather seedy locales in which the zarquil games were usually found, for the Vinzz attempted to conduct their operations with as much unobtrusiveness as was possible. 
But the front door swung open on an interior that lacked the opulence of the usual Vinoz set-up; it was down-right shabby, the dim olive light hinting of squalor rather than forbidden pleasures. That was the trouble in these smaller towns—you ran greater risks of getting involved in games where the players had not been carefully screened. The Vinoz games were usually clean, because that paid off better, but, when profits were lacking, the Vinzz were capable of sliding off into darkside practices. Naturally the small-town houses were more likely to have trouble in making ends meet, because everybody in the parish knew everybody else far too well. The fat man wondered whether that had been his quarry's motive in coming to such desolate, off-trail places—hoping that eventually disaster would hit the one who pursued him. Somehow, such a plan seemed too logical for the man he was haunting. However, beggars could not be choosers. The fat man paid off the heli-driver and entered the zarquil house. "One?" the small green creature in the slightly frayed robe asked. "One," the fat man answered. III The would-be thief fled down the dark alley, with the hot bright rays from the stranger's gun lancing out after him in flamboyant but futile patterns. The stranger, a thin young man with delicate, angular features, made no attempt to follow. Instead, he bent over to examine Gabriel Lockard's form, appropriately outstretched in the gutter. "Only weighted out," he muttered, "he'll be all right. Whatever possessed you two to come out to a place like this?" "I really think Gabriel must be possessed...." the girl said, mostly to herself. "I had no idea of the kind of place it was going to be until he brought me here. The others were bad, but this is even worse. It almost seems as if he went around looking for trouble, doesn't it?" "It does indeed," the stranger agreed, coughing a little. 
It was growing colder and, on this world, the cities had no domes to protect them from the climate, because it was Earth and the air was breathable and it wasn't worth the trouble of fixing up. The girl looked closely at him. "You look different, but you are the same man who pulled us out of that aircar crash, aren't you? And before that the man in the gray suit? And before that...?" The young man's cheekbones protruded as he smiled. "Yes, I'm all of them." "Then what they say about the zarquil games is true? There are people who go around changing their bodies like—like hats?" Automatically she reached to adjust the expensive bit of blue synthetic on her moon-pale hair, for she was always conscious of her appearance; if she had not been so before marriage, Gabriel would have taught her that. He smiled again, but coughed instead of speaking. "But why do you do it? Why! Do you like it? Or is it because of Gabriel?" She was growing a little frantic; there was menace here and she could not understand it nor determine whether or not she was included in its scope. "Do you want to keep him from recognizing you; is that it?" "Ask him." "He won't tell me; he never tells me anything. We just keep running. I didn't recognize it as running at first, but now I realize that's what we've been doing ever since we were married. And running from you, I think?" There was no change of expression on the man's gaunt face, and she wondered how much control he had over a body that, though second- or third- or fourth-hand, must be new to him. How well could he make it respond? What was it like to step into another person's casing? But she must not let herself think that way or she would find herself looking for a zarquil game. It would be one way of escaping Gabriel, but not, she thought, the best way; her body was much too good a one to risk so casually. It was beginning to snow. Light, feathery flakes drifted down on her husband's immobile body. 
She pulled her thick coat—of fur taken from some animal who had lived and died light-years away—more closely about herself. The thin young man began to cough again. Overhead a tiny star seemed to detach itself from the pale flat disk of the Moon and hurl itself upward—one of the interstellar ships embarking on its long voyage to distant suns. She wished that somehow she could be on it, but she was here, on this solitary old world in a barren solar system, with her unconscious husband and a strange man who followed them, and it looked as if here she would stay ... all three of them would stay.... "If you're after Gabriel, planning to hurt him," she asked, "why then do you keep helping him?" "I am not helping him . And he knows that." "You'll change again tonight, won't you?" she babbled. "You always change after you ... meet us? I think I'm beginning to be able to identify you now, even when you're ... wearing a new body; there's something about you that doesn't change." "Too bad he got married," the young man said. "I could have followed him for an eternity and he would never have been able to pick me out from the crowd. Too bad he got married anyway," he added, his voice less impersonal, "for your sake." She had come to the same conclusion in her six months of marriage, but she would not admit that to an outsider. Though this man was hardly an outsider; he was part of their small family group—as long as she had known Gabriel, so long he must have known her. And she began to suspect that he was even more closely involved than that. "Why must you change again?" she persisted, obliquely approaching the subject she feared. "You have a pretty good body there. Why run the risk of getting a bad one?" "This isn't a good body," he said. "It's diseased. Sure, nobody's supposed to play the game who hasn't passed a thorough medical examination. 
But in the places to which your husband has been leading me, they're often not too particular, as long as the player has plenty of foliage." "How—long will it last you?" "Four or five months, if I'm careful." He smiled. "But don't worry, if that's what you're doing; I'll get it passed on before then. It'll be expensive—that's all. Bad landing for the guy who gets it, but then it was tough on me too, wasn't it?" "But how did you get into this ... pursuit?" she asked again. "And why are you doing it?" People didn't have any traffic with Gabriel Lockard for fun, not after they got to know him. And this man certainly should know him better than most. "Ask your husband." The original Gabriel Lockard looked down at the prostrate, snow-powdered figure of the man who had stolen his body and his name, and stirred it with his toe. "I'd better call a cab—he might freeze to death." He signaled and a cab came. "Tell him, when he comes to," he said to the girl as he and the driver lifted the heavy form of her husband into the helicar, "that I'm getting pretty tired of this." He stopped for a long spell of coughing. "Tell him that sometimes I wonder whether cutting off my nose wouldn't, in the long run, be most beneficial for my face." "Sorry," the Vinzz said impersonally, in English that was perfect except for the slight dampening of the sibilants, "but I'm afraid you cannot play." "Why not?" The emaciated young man began to put on his clothes. "You know why. Your body is worthless. And this is a reputable house." "But I have plenty of money." The young man coughed. The Vinzz shrugged. "I'll pay you twice the regular fee." The green one shook his head. "Regrettably, I do mean what I say. This game is really clean." "In a town like this?" "That is the reason we can afford to be honest." The Vinzz' tendrils quivered in what the man had come to recognize as amusement through long, but necessarily superficial acquaintance with the Vinzz. 
His heavy robe of what looked like moss-green velvet, but might have been velvet-green moss, encrusted with oddly faceted alien jewels, swung with him. "We do a lot of business here," he said unnecessarily, for the whole set-up spelled wealth far beyond the dreams of the man, and he was by no means poor when it came to worldly goods. "Why don't you try another town where they're not so particular?" The young man smiled wryly. Just his luck to stumble on a sunny game. He never liked to risk following his quarry in the same configuration. And even though only the girl had actually seen him this time, he wouldn't feel at ease until he had made the usual body-shift. Was he changing because of Gabriel, he wondered, or was he using his own discovery and identification simply as an excuse to cover the fact that none of the bodies that fell to his lot ever seemed to fit him? Was he activated solely by revenge or as much by the hope that in the hazards of the game he might, impossible though it now seemed, some day win another body that approached perfection as nearly as his original casing had? He didn't know. However, there seemed to be no help for it now; he would have to wait until they reached the next town, unless the girl, seeing him reappear in the same guise, would guess what had happened and tell her husband. He himself had been a fool to admit to her that the hulk he inhabited was a sick one; he still couldn't understand how he could so casually have entrusted her with so vital a piece of information. The Vinzz had been locking antennae with another of his kind. Now they detached, and the first approached the man once more. "There is, as it happens, a body available for a private game," he lisped. "No questions to be asked or answered. All I can tell you is that it is in good health." The man hesitated. "But unable to pass the screening?" he murmured aloud. "A criminal then." The green one's face—if you could call it a face—remained impassive. "Male?"
"Of course," the Vinzz said primly. His kind did have certain ultimate standards to which they adhered rigidly, and one of those was the curious tabu against mixed games, strictly enforced even though it kept them from tapping a vast source of potential players. There had also never been a recorded instance of humans and extraterrestrials exchanging identities, but whether that was the result of tabu or biological impossibility, no one could tell. It might merely be prudence on the Vinzz' part—if it had ever been proved that an alien life-form had "desecrated" a human body, Earthmen would clamor for war ... for on this planet humanity held its self-bestowed purity of birthright dear—and the Vinzz, despite being unquestionably the stronger, were pragmatic pacifists. It had been undoubtedly some rabid member of the anti-alien groups active on Terra who had started the rumor that the planetary slogan of Vinau was, "Don't beat 'em; cheat 'em." "It would have to be something pretty nuclear for the other guy to take such a risk." The man rubbed his chin thoughtfully. "How much?" "Thirty thousand credits." "Why, that's three times the usual rate!" "The other will pay five times the usual rate." "Oh, all right," the delicate young man gave in. It was a terrific risk he was agreeing to take, because, if the other was a criminal, he himself would, upon assuming the body, assume responsibility for all the crimes it had committed. But there was nothing else he could do. He looked at himself in the mirror and found he had a fine new body; tall and strikingly handsome in a dark, coarse-featured way. Nothing to match the one he had lost, in his opinion, but there were probably many people who might find this one preferable. No identification in the pockets, but it wasn't necessary; he recognized the face. 
Not that it was a very famous or even notorious one, but the dutchman was a careful student of the "wanted" fax that had decorated public buildings from time immemorial, for he was ever mindful of the possibility that he might one day find himself trapped unwittingly in the body of one of the men depicted there. And he knew that this particular man, though not an important criminal in any sense of the word, was one whom the police had been ordered to burn on sight. The abolishing of capital punishment could not abolish the necessity for self-defense, and the man in question was not one who would let himself be captured easily, nor whom the police intended to capture easily. This might be a lucky break for me after all , the new tenant thought, as he tried to adjust himself to the body. It, too, despite its obvious rude health, was not a very comfortable fit. I can do a lot with a hulk like this. And maybe I'm cleverer than the original owner; maybe I'll be able to get away with it. IV "Look, Gabe," the girl said, "don't try to fool me! I know you too well. And I know you have that man's—the real Gabriel Lockard's—body." She put unnecessary stardust on her nose as she watched her husband's reflection in the dressing table mirror. Lockard—Lockard's body, at any rate—sat up and felt his unshaven chin. "That what he tell you?" "No, he didn't tell me anything really—just suggested I ask you whatever I want to know. But why else should he guard somebody he obviously hates the way he hates you? Only because he doesn't want to see his body spoiled." "It is a pretty good body, isn't it?" Gabe flexed softening muscles and made no attempt to deny her charge; very probably he was relieved at having someone with whom to share his secret. "Not as good as it must have been," the girl said, turning and looking at him without admiration. "Not if you keep on the way you're coursing. Gabe, why don't you...?" "Give it back to him, eh?" Lockard regarded his wife appraisingly. 
"You'd like that, wouldn't you? You'd be his wife then. That would be nice—a sound mind in a sound body. But don't you think that's a little more than you deserve?" "I wasn't thinking about that, Gabe," she said truthfully enough, for she hadn't followed the idea to its logical conclusion. "Of course I'd go with you," she went on, now knowing she lied, "when you got your ... old body back." Sure , she thought, I'd keep going with you to farjeen houses and thrill-mills. Actually she had accompanied him to a thrill-mill only once, and from then on, despite all his threats, she had refused to go with him again. But that once had been enough; nothing could ever wash that experience from her mind or her body. "You wouldn't be able to get your old body back, though, would you?" she went on. "You don't know where it's gone, and neither, I suppose, does he?" "I don't want to know!" he spat. "I wouldn't want it if I could get it back. Whoever it adhered to probably killed himself as soon as he looked in a mirror." He swung long legs over the side of his bed. "Christ, anything would be better than that! You can't imagine what a hulk I had!" "Oh, yes, I can," she said incautiously. "You must have had a body to match your character. Pity you could only change one."
|
A. It was an illegal game.
|
What is the new task proposed in this work?
|
### Introduction
With the popularity of shared videos, social networks, online courses, etc., the quantity of multimedia and spoken content is growing far beyond what human beings can view or listen to. Accessing large collections of multimedia or spoken content is difficult and time-consuming for humans, even though these materials are more attractive to humans than plain text. Hence, it would be great if machines could automatically listen to and understand spoken content, and even visualize the key information for humans. This paper presents an initial attempt towards that goal: machine comprehension of spoken content. As an initial task, we wish the machine to listen to and understand an audio story, and answer questions related to that audio content. The TOEFL listening comprehension test is designed for human English learners whose native language is not English. This paper reports how today's machines perform on such a test. The listening comprehension task considered here is highly related to Spoken Question Answering (SQA) BIBREF0 , BIBREF1 . In SQA, when users enter questions in either text or spoken form, the machine needs to find the answer from some audio files. SQA usually works with ASR transcripts of the spoken content, and uses information retrieval (IR) techniques BIBREF2 or relies on knowledge bases BIBREF3 to find the proper answer. Sibyl BIBREF4 , a factoid SQA system, used IR techniques and utilized several levels of linguistic information to deal with the task. Question Answering in Speech Transcripts (QAST) BIBREF5 , BIBREF6 , BIBREF7 has been a well-known evaluation program for SQA for years. However, most previous work on SQA focused mainly on factoid questions like "What is the name of the highest mountain in Taiwan?". Such questions may sometimes be answered correctly by simply extracting key terms from a properly chosen utterance, without understanding the given spoken content.
More difficult questions that cannot be answered without understanding the whole spoken content seem to have rarely been dealt with previously. With the fast development of deep learning, neural networks have been successfully applied to speech recognition BIBREF8 , BIBREF9 , BIBREF10 and NLP tasks BIBREF11 , BIBREF12 . A number of recent efforts have explored various ways to understand multimedia in text form BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 . They incorporated attention mechanisms BIBREF16 with Long Short-Term Memory based networks BIBREF19 . In the Question Answering field, most of the work has focused on understanding text documents BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . Even though BIBREF24 tried to answer questions related to movies, they used only the text and images of the movies for that. It seems that none of them has studied or focused on comprehension of spoken content yet. ### Task Definition and Contributions
In this paper, we develop and propose a new task of machine comprehension of spoken content, which to our knowledge has never been addressed before. We take the TOEFL listening comprehension test as the corpus for this work. TOEFL is an English examination which tests the knowledge and skills of academic English for English learners whose native language is not English. In this examination, the subjects first listen to an audio story of around five minutes and then answer several questions according to that story. The story is related to college life, such as a conversation between a student and a professor or a lecture in class. Each question has four choices, of which only one is correct. A real example from the TOEFL examination is shown in Fig. 1 . The upper part is the manual transcription of a small part of the audio story. The questions and four choices are listed too. The correct choice for the question in Fig. 1 is choice A. The questions in TOEFL are not simple even for a human with relatively good knowledge, because a question cannot be answered by simply matching the words in the question and in the choices with those in the story, and key information is usually buried among many irrelevant utterances. To answer questions like "Why does the student go to the professor's office?", the listener has to understand the whole audio story and draw inferences to answer the question correctly. As a result, this task is believed to be very challenging for state-of-the-art spoken language understanding technologies. We propose a listening comprehension model for the task defined above, the Attention-based Multi-hop Recurrent Neural Network (AMRNN) framework, and show that this model is able to perform reasonably well on the task. In the proposed approach, the audio of the stories is first transcribed into text by ASR, and the proposed model processes the transcriptions to select the correct answer out of 4 choices given the question.
The initial experiments showed that the proposed model achieves encouraging scores on the TOEFL listening comprehension test. The attention mechanism proposed in this paper can be applied at either the word or sentence level. We found that sentence-level attention achieved better results on the manual transcriptions without ASR errors, but word-level attention outperformed sentence-level attention on ASR transcriptions with errors. ### Proposed Approach
The overall structure of the proposed model is shown in Fig 2 . The input of the model includes the transcriptions of an audio story, a question and four answer choices, all represented as word sequences. The word sequence of the input question is first represented as a question vector $V_Q$ in Section "Question Representation" . With the question vector $V_Q$ , the attention mechanism is applied to extract the question-related information from the story in Section "Story Attention Module" . The machine then goes through the story with the attention mechanism several times and obtains an answer selection vector $V_{Q_n}$ in Section "Hopping" . This answer selection vector $V_{Q_n}$ is finally used to evaluate the confidence of each choice in Section "Answer Selection" , and the choice with the highest score is taken as the output. All the model parameters in the above procedure are jointly trained with the target set to 1 for the correct choice and 0 otherwise. ### Question Representation
Fig. 3 (A) shows the procedure of encoding the input question into a vector representation $V_Q$ . The input question is a sequence of $T$ words, $w_1,w_2,...,w_T$ , each word $w_{i}$ represented in 1-of-N encoding. A bidirectional Gated Recurrent Unit (GRU) network BIBREF25 , BIBREF26 , BIBREF27 takes one word from the input question at a time. In Fig. 3 (A), the hidden layer output of the forward GRU (green rectangle) at time index $t$ is denoted by $y_{f}(t)$ , and that of the backward GRU (blue rectangle) by $y_{b}(t)$ . After looking through all the words in the question, the hidden layer output of the forward GRU network at the last time index, $y_{f}(T)$ , and that of the backward GRU network at the first time index, $y_{b}(1)$ , are concatenated to form the question vector representation $V_{Q}$ , or $V_{Q} = [y_{f}(T) \Vert y_{b}(1)]$ . ### Story Attention Module
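The same bidirectional pass is reused in this module to build the story-word vectors $S_t$. As a toy sketch of how the forward and backward final states are concatenated into $V_Q = [y_f(T) \Vert y_b(1)]$, here a single tanh recurrence with made-up scalar weights and a 2-dimensional hidden state stands in for the full GRU cell (this is an illustrative assumption, not the paper's parameterization):

```python
import math

# Toy bidirectional encoder. A single tanh recurrence replaces the GRU
# cell; the weights (0.5, 0.3) and the 2-d hidden state are illustrative.

def cell_step(x, h, w_x=0.5, w_h=0.3):
    # next hidden state, elementwise: h' = tanh(w_x * x + w_h * h)
    return [math.tanh(w_x * xi + w_h * hi) for xi, hi in zip(x, h)]

def run_rnn(seq):
    h = [0.0] * len(seq[0])
    outputs = []
    for x in seq:                       # one word vector per time step
        h = cell_step(x, h)
        outputs.append(h)
    return outputs

def encode_question(word_vectors):
    y_f = run_rnn(word_vectors)                  # forward pass over w_1..w_T
    y_b = run_rnn(list(reversed(word_vectors)))  # backward pass over w_T..w_1
    # V_Q = [y_f(T) || y_b(1)]; y_b[-1] is the backward state at time 1
    return y_f[-1] + y_b[-1]

# three 2-d "word embeddings" standing in for encoded words
v_q = encode_question([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

The resulting $V_Q$ has twice the hidden size, since it is the concatenation of one forward and one backward state.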
Fig. 3 (B) shows the attention mechanism which takes the question vector $V_Q$ obtained in Fig. 3 (A) and the story transcriptions as the input to encode the whole story into a story vector representation $V_{S}$ . The story transcription is a very long word sequence with many sentences, so we only show two sentences, each with 4 words, for simplicity. There is a bidirectional GRU in Fig. 3 (B) encoding the whole story into the story vector representation $V_{S}$ . The word vector representation of the $t$ -th word $S_{t}$ is constructed by concatenating the hidden layer outputs of the forward and backward GRU networks, that is $S_t = [y_{f}(t) \Vert y_{b}(t)]$ . Then the attention value $\alpha _t$ for each time index $t$ is the cosine similarity between the question vector $V_{Q}$ and the word vector representation $S_{t}$ of each word, $\alpha _t = \cos (V_Q, S_t)$ . With the attention values $\alpha _t$ , there can be two different attention mechanisms, word-level and sentence-level, to encode the whole story into the story vector representation $V_{S}$ . Word-level Attention: We normalize all the attention values $\alpha _t$ into $\alpha _t^\prime $ such that they sum to one over the whole story. Then all the word vectors $S_{t}$ from the bidirectional GRU network for every word in the story are weighted with this normalized attention value $\alpha _{t}^\prime $ and summed to give the story vector, that is $V_{S} = \sum _{t}\alpha _{t}^{\prime }S_{t}$ . Sentence-level Attention: Sentence-level attention means the model collects the information only at the end of each sentence. Therefore, the normalization is only performed over those words at the end of the sentences to obtain $\alpha _t^{\prime \prime }$ . The story vector representation is then $V_{S} = \sum _{t=eos}\alpha _t^{\prime \prime }*S_{t}$ , where only those words at the end of sentences (eos) contribute to the weighted sum. So $V_{S} = \alpha _4^{\prime \prime }*S_4 + \alpha _8^{\prime \prime }*S_8$ in the example of Fig. 3.
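Assuming the word vectors $S_t$ have already been computed, both attention variants reduce to a cosine-weighted sum. A minimal sketch with toy 2-dimensional vectors (the vectors and the end-of-sentence positions are made up):

```python
import math

# Word-level vs sentence-level attention over precomputed word vectors S_t.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def weighted_sum(vectors, weights):
    return [sum(w * v[i] for w, v in zip(weights, vectors))
            for i in range(len(vectors[0]))]

def word_level_attention(v_q, story):
    alpha = [cosine(v_q, s) for s in story]
    z = sum(alpha)
    return weighted_sum(story, [a / z for a in alpha])  # sum over all words

def sentence_level_attention(v_q, story, eos_positions):
    alpha = {t: cosine(v_q, story[t]) for t in eos_positions}
    z = sum(alpha.values())
    return weighted_sum([story[t] for t in eos_positions],
                        [alpha[t] / z for t in eos_positions])

v_q = [1.0, 0.0]
story = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]   # S_1 .. S_4
v_s_word = word_level_attention(v_q, story)
v_s_sent = sentence_level_attention(v_q, story, eos_positions=[1, 3])
```

In the word-level variant the story vector leans toward the words most similar to the question; in the sentence-level variant only the end-of-sentence states contribute.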
### Hopping
The overall picture of the proposed model is shown in Fig. 2 , in which Fig. 3 (A) and (B) are component modules of the complete proposed model. In the left of Fig. 2 , the input question is first converted into a question vector $V_{Q_0}$ by the module in Fig. 3 (A). This $V_{Q_0}$ is used to compute the attention values $\alpha _{t}$ to obtain the story vector $V_{S_1}$ by the module in Fig. 3 (B). Then $V_{Q_0}$ and $V_{S_1}$ are summed to form a new question vector $V_{Q_1}$ . This process is called the first hop (hop 1) in Fig. 2 . The output of the first hop, $V_{Q_1}$ , can be used to compute new attention values to obtain a new story vector $V_{S_2}$ . This can be considered as the machine going over the story again to re-focus on it with a new question vector. Again, $V_{Q_1}$ and $V_{S_2}$ are summed to form $V_{Q_2}$ (hop 2). After $n$ hops ( $n$ should be pre-defined), the output of the last hop $V_{Q_n}$ is used for the answer selection in Section "Answer Selection" . ### Answer Selection
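The multi-hop loop described above produces $V_{Q_n}$, which the answer selection step described in this section compares against the encoded choices. A combined toy sketch (attend() compresses the attention module of Fig. 3 (B); all vectors and the choice set are made-up values):

```python
import math

# Hopping plus answer selection on toy 2-d vectors.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def attend(v_q, story):
    alpha = [cosine(v_q, s) for s in story]
    z = sum(alpha)                              # normalize attention values
    return [sum((a / z) * s[i] for a, s in zip(alpha, story))
            for i in range(len(v_q))]

def multi_hop(v_q0, story, n_hops):
    v_q = v_q0
    for _ in range(n_hops):                     # hop 1 .. hop n
        v_s = attend(v_q, story)                # re-focus on the story
        v_q = [q + s for q, s in zip(v_q, v_s)] # V_Q_k = V_Q_{k-1} + V_S_k
    return v_q                                  # V_Q_n

def select_answer(v_qn, choice_vectors):
    scores = {c: cosine(v_qn, v) for c, v in choice_vectors.items()}
    return max(scores, key=scores.get)

story = [[0.9, 0.1], [0.2, 0.8]]
choices = {"A": [1.0, 0.2], "B": [0.1, 1.0], "C": [0.5, 0.5], "D": [-0.3, 0.4]}
v_qn = multi_hop([1.0, 0.0], story, n_hops=2)
answer = select_answer(v_qn, choices)
```

Each hop sharpens the question vector toward the question-relevant parts of the story before the final cosine comparison with the four choice vectors.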
As shown in the upper part of Fig. 2 , the same procedure previously used to encode the question into $V_Q$ in Fig. 3 (A) is used here to encode the four choices into choice vector representations $V_A$ , $V_B$ , $V_C$ , $V_D$ . Then the cosine similarity between the output of the last hop $V_{Q_n}$ and each choice vector is computed, and the choice with the highest similarity is chosen. ### Experimental Setup
$\bullet $ Dataset Collection: The collected TOEFL dataset included 963 examples in total (717 for training, 124 for validation, 122 for testing). Each example included a story, a question and 4 choices. Besides the audio recording of each story, the manual transcription of the story is also available. We used the pydub library BIBREF28 to segment the full audio recording into utterances. Each audio recording has 57.9 utterances on average. There are on average 657.7 words in a story, 12.01 words in a question and 10.35 words in each choice. $\bullet $ Speech Recognition: We used the CMU speech recognizer - Sphinx BIBREF29 to transcribe the audio stories. The recognition word error rate (WER) was 34.32%. $\bullet $ Pre-processing: We used a pre-trained 300-dimension GloVe vector model BIBREF30 to obtain the vector representation for each word. Each utterance in the stories, the question and each choice can be represented as a fixed-length vector by adding the vectors of all the component words. Before training, we pruned the utterances in the story whose vector representations had cosine distance far from the question's. The percentage of pruned utterances was determined by the performance of the model on the development set. The vector representations of utterances, questions and choices were only used in this pre-processing stage and in the baseline approaches in Section "Baselines" , not in the proposed model. $\bullet $ Training Details: The size of the hidden layer for both the forward and backward GRU networks was 128. All the bidirectional GRU networks in the proposed model shared the same set of parameters to avoid overfitting. We used RMSProp BIBREF31 with an initial learning rate of 1e-5 and momentum 0.9. Dropout rate was 0.2. Batch size was 40. The number of hops was tuned from 1 to 3 on the development set. ### Baselines
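The summed word-vector representations described in the pre-processing step also drive several of the similarity baselines below. A toy sketch of the pruning step (the vectors and the keep ratio are made up, standing in for GloVe vectors and the development-set-tuned percentage):

```python
import math

# Prune story utterances whose summed word-vector representation is far
# (in cosine terms) from the question's. Toy 2-d vectors throughout.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def sum_vectors(word_vectors):
    # fixed-length utterance vector = sum of its component word vectors
    return [sum(col) for col in zip(*word_vectors)]

def prune_story(utterances, question_words, keep_ratio=0.5):
    v_q = sum_vectors(question_words)
    scored = [(cosine(v_q, sum_vectors(u)), i) for i, u in enumerate(utterances)]
    scored.sort(reverse=True)          # most question-like utterances first
    n_keep = max(1, int(len(scored) * keep_ratio))
    keep = sorted(i for _, i in scored[:n_keep])
    return [utterances[i] for i in keep]

question = [[1.0, 0.0], [0.8, 0.2]]
story = [
    [[0.9, 0.1]],             # on-topic utterance
    [[0.0, 1.0]],             # off-topic utterance
    [[0.7, 0.2], [0.8, 0.1]]  # on-topic utterance
]
kept = prune_story(story, question, keep_ratio=0.67)
```

Here the off-topic utterance is dropped and the two question-like utterances survive, mirroring how the tuned pruning percentage shortens the story before training.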
We compared the proposed model with some commonly used simple baselines from BIBREF24 and the memory network BIBREF16 . $\bullet $ Choice Length: The most naive baseline is to select a choice based on the number of words in it, without listening to the stories or looking at the questions. This included: (i) selecting the longest choice, (ii) selecting the shortest choice or (iii) selecting the choice with the length most different from the rest of the choices. $\bullet $ Within-Choices Similarity: With the vector representations for the choices from the pre-processing of Section "Experimental Setup" , we computed the cosine distance among the four choices and selected the one which is (i) the most similar to or (ii) the most different from the others. $\bullet $ Question and Choice Similarity: With the vector representations for the choices and questions from the pre-processing of Section "Experimental Setup" , the choice with the highest cosine similarity to the question is selected. $\bullet $ Sliding Window BIBREF24 , BIBREF32 : This model tries to find a window of $W$ utterances in the story with the maximum similarity to the question. The similarity between a window of utterances and a question is the averaged cosine similarity of the utterances in the window and the question, computed over their GloVe vector representations. After obtaining the window with the largest cosine similarity to the question, the confidence score of each choice is the average cosine similarity between the utterances in the window and the choice. The choice with the highest score is selected as the answer. $\bullet $ Memory Network BIBREF16 : We implemented the memory network with some modifications for this task to find out whether the memory network was able to deal with it. The original memory network didn't have an embedding module for the choices, so we used the question module of the memory network to embed the choices.
Besides, in order to have the memory network select the answer out of four choices, instead of outputting a word as in its original version, we computed the cosine similarity between the output of the last hop and the choices to select the closest choice as the answer. We shared all the parameters of the embedding layers in the memory network to avoid overfitting. Without this modification, very poor results were obtained on the testing set. The embedding size of the memory network was set to 128, and stochastic gradient descent was used as in BIBREF16 with an initial learning rate of 0.01. Batch size was 40. The number of hops was tuned from 1 to 3 on the development set. ### Results
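As a concrete reference point for the results discussed next, the sliding-window baseline of part (d) reduces to the following sketch (toy vectors standing in for the summed GloVe representations; $W=2$ here for brevity rather than the tuned value):

```python
import math

# Sliding-window baseline: pick the window of W utterance vectors with
# the highest average cosine similarity to the question, then score each
# choice by its average similarity to that window.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def avg_cos(window, v):
    return sum(cosine(u, v) for u in window) / len(window)

def sliding_window_answer(utt_vecs, v_q, choice_vecs, w=2):
    windows = [utt_vecs[i:i + w] for i in range(len(utt_vecs) - w + 1)]
    best = max(windows, key=lambda win: avg_cos(win, v_q))
    scores = {c: avg_cos(best, v) for c, v in choice_vecs.items()}
    return max(scores, key=scores.get)

utts = [[0.1, 0.9], [0.8, 0.2], [0.9, 0.1], [0.2, 0.8]]  # utterance vectors
choices = {"A": [1.0, 0.1], "B": [0.0, 1.0]}
answer = sliding_window_answer(utts, [1.0, 0.0], choices, w=2)
```

The question-like middle window wins, and the choice closest to that window is returned; the baseline never models word order or inference, which is why it struggles on this task.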
We used accuracy (number of questions answered correctly / total number of questions) as our evaluation metric. The results are shown in Table 1 . We trained the model on the manual transcriptions of the stories, while testing the model on the testing set with both manual transcriptions (column labelled "Manual") and ASR transcriptions (column labelled "ASR"). $\bullet $ Choice Length: Part (a) shows the performance of the three models selecting the answer with the longest, shortest or most different length, ranging from 23% to 35%. $\bullet $ Within-Choices Similarity: Part (b) shows the performance of the two models selecting the choice which is most similar to or most different from the others. The accuracies are 36.09% and 27.87% respectively. $\bullet $ Question and Choice Similarity: In part (c), selecting the choice most similar to the question only yielded 24.59%, very close to random guessing. $\bullet $ Sliding Window: Part (d) for the sliding window is the first baseline model considering the transcriptions of the stories. We tried the window sizes {1,2,3,5,10,15,20,30} and found the best window size to be 5 on the development set. This implies that the useful information for answering the questions is probably within 5 sentences. The performance of 31.15% and 33.61% with and without ASR errors respectively shows how ASR errors affected the results, and that the task here is too difficult for this approach to obtain good results. $\bullet $ Memory Network: The results of the memory network in part (e) show this task is relatively difficult for it, even though the memory network was successful in some other tasks. However, the performance of 39.17% accuracy was clearly better than all the approaches mentioned above, and it is interesting that this result was independent of the ASR errors; the reason is under investigation. The performance was 31% accuracy when we didn't use the shared embedding layers in the memory network.
$\bullet $ AMRNN model: The results of the proposed model are listed in part (f), respectively for the attention mechanism at word level and sentence level. Without ASR errors, the proposed model with sentence-level attention gave an accuracy as high as 51.67%, and slightly lower for word-level attention. It is interesting that without ASR errors, sentence-level attention is about 2.5% higher than word-level attention, very possibly because getting the information from the whole sentence is more useful than listening carefully to every word, especially for the conceptual and high-level questions in this task. Paying too much attention to every single word may be a bit noisy. On the other hand, the 34.32% ASR errors affected the model at sentence level more than at word level. This is very possibly because incorrectly recognized words may seriously change the meaning of whole sentences. However, with word-level attention, when a word is incorrectly recognized, the model may be able to pay attention to other correctly recognized words to compensate for the ASR errors and still come up with the correct answer. ### Analysis on a typical example
Fig. 4 shows the visualization of the attention weights obtained for a typical example story in the testing set, with the proposed AMRNN model using word-level or sentence-level attention on manual or ASR transcriptions respectively. The darker the color, the higher the weight. Only a small part of the story is shown, where the responses of the models differed meaningfully. This story was mainly about the thick cloud and some mysteries on Venus. The question for this story is "What is a possible origin of Venus' clouds?" and the correct choice is "Gases released as a result of volcanic activity". In the manual transcription cases (left half of Fig. 4 ), both models, with word-level or sentence-level attention, answered the question correctly and focused on the core words/sentences informative for the question. The sentence-level model successfully captured the sentence including "...volcanic eruptions often emit gases."; while the word-level model captured some important key words like "volcanic eruptions", "emit gases". However, in the ASR cases (right half of Fig. 4 ), the ASR errors misled both models to put some attention on irrelevant words/sentences. The sentence-level model focused on the irrelevant sentence "In other area, you got canyons..."; while the word-level model focused on some irrelevant words like "canyons" and "rift malaise", but still captured some correct important words like "volcanic" or "eruptions" to answer correctly. By the darkness of the color, we can observe that the problem caused by ASR errors was more serious for sentence-level attention when capturing the key concepts needed for the question. This may explain why in part (f) of Table 1 we find the degradation caused by ASR errors was less for the word-level model than for the sentence-level model. ### Conclusions
In this paper we create a new task based on the TOEFL corpus. TOEFL is an English examination in which the learner is asked to listen to a story up to 5 minutes long and then answer several corresponding questions, which require deduction, logical reasoning and summarization. We built a model able to deal with this challenging task. On manual transcriptions, the proposed model achieved 51.56% accuracy, while the very capable memory network reached only 39.17%. Even on ASR transcriptions with a WER of 34.32%, the proposed model still yielded 48.33% accuracy. We also found that although sentence-level attention achieved the best results on the manual transcriptions, word-level attention outperformed sentence-level attention when there were ASR errors. Figure 2: The overall structure of the proposed Attention-based Multi-hop Recurrent Neural Network (AMRNN) model. Figure 3: (A) The Question Vector Representation and (B) The Attention Mechanism. Table 1: Accuracy results of different models Figure 4: Visualization of the attention weights in sentence-level and in word-level on a small section of the manual or ASR transcriptions of an example story given a question. The darker the color, the higher the weights. The question of this story is “What is a possible origin of Venus’ clouds?” and the correct answer choice is “Gases released as a result of volcanic activity”.
|
listening comprehension task
|
Which model generalized the best?
|
### Introduction
Natural Language Inference (NLI) has attracted considerable interest in the NLP community and, recently, a large number of neural network-based systems have been proposed to deal with the task. One can attempt a rough categorization of these systems into: a) sentence encoding systems, and b) other neural network systems. Both have been very successful, with the state of the art on the SNLI and MultiNLI datasets being 90.4%, which is our baseline with BERT BIBREF0 , and 86.7% BIBREF0 respectively. However, a big question with respect to these systems is their ability to generalize outside the specific datasets they are trained and tested on. Recently, BIBREF1 have shown that state-of-the-art NLI systems break surprisingly easily when, instead of being tested on the original SNLI test set, they are tested on a test set constructed by taking premises from the training set and creating several hypotheses from them by changing at most one word within the premise. The results show a very significant drop in accuracy for three of the four systems. The system that was more difficult to break and had the smallest loss in accuracy was that of BIBREF2 , which utilizes external knowledge taken from WordNet BIBREF3 . In this paper we show that NLI systems that have been very successful on specific NLI benchmarks fail to generalize when trained on one NLI dataset and then tested on test sets taken from different NLI benchmarks. The results we get are in line with BIBREF1 , showing that the generalization capability of the individual NLI systems is very limited, but they further show that even the only system that was less prone to breaking in BIBREF1 breaks in the experiments we have conducted. 
We train six different state-of-the-art models on three different NLI datasets and test these trained models on an NLI test set taken from another dataset designed for the same task: identifying, for sentence pairs, whether one sentence entails the other, whether they contradict each other, or whether they are neutral with respect to inferential relationship. One would expect that a model that learns to correctly identify inferential relationships in one dataset would also be able to do so in another dataset designed for the same task. Furthermore, two of the datasets, SNLI BIBREF4 and MultiNLI BIBREF5 , were constructed using the same crowdsourcing approach and annotation instructions BIBREF5 , leading to datasets with the same, or at least very similar, definition of entailment. It is therefore reasonable to expect transfer learning between these datasets to be possible. As the SICK BIBREF6 dataset was machine-constructed, a bigger difference in performance is expected. In this paper we show that, contrary to these expectations, most models fail to generalize across the different datasets. However, our experiments also show that BERT BIBREF0 performs much better than the other models in experiments between SNLI and MultiNLI; nevertheless, even BERT fails when tested on SICK. In addition to these negative results, our experiments further highlight the power of pre-trained language models, like BERT, in NLI. The negative results of this paper are significant for the NLP research community as well as for NLP practice, as we would like our best models not only to perform well on a specific benchmark dataset, but to capture the more general phenomenon the dataset is designed for. The main contribution of this paper is to show that most of the best performing neural network models for NLI fail in this regard. 
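The cross-dataset protocol described above (train on one corpus, test on a test set from another, and compare against the in-domain baseline) can be sketched as a small harness. The majority-class "model" below is a deliberately trivial stand-in, not one of the six systems studied in the paper, and the toy label lists are invented; any classifier exposing the same fit/predict shape could be dropped in.

```python
from collections import Counter

def majority_baseline(train_pairs):
    # Stand-in for a real NLI model: always predicts the most frequent
    # training label. Returned value is a predict(premise, hypothesis)
    # function, the interface a real model would also expose.
    majority = Counter(lbl for _, _, lbl in train_pairs).most_common(1)[0][0]
    return lambda premise, hypothesis: majority

def accuracy(model, test_pairs):
    hits = sum(1 for p, h, lbl in test_pairs if model(p, h) == lbl)
    return hits / len(test_pairs)

# Toy corpora standing in for e.g. SNLI / SICK (three-way labels).
snli_train = [("a", "b", "entailment"), ("c", "d", "entailment"),
              ("e", "f", "neutral")]
snli_test  = [("g", "h", "entailment"), ("i", "j", "neutral")]
sick_test  = [("k", "l", "contradiction"), ("m", "n", "contradiction")]

model = majority_baseline(snli_train)
in_domain = accuracy(model, snli_test)  # baseline: test from same corpus
transfer  = accuracy(model, sick_test)  # test from a different corpus
delta     = transfer - in_domain        # the kind of drop reported later
print(in_domain, transfer, delta)
```

The quantity `delta` corresponds to the accuracy difference discussed in the results: transfer accuracy minus the baseline accuracy for the same training set.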
The second, and equally important, contribution is that our results highlight that the current NLI datasets do not capture the nuances of NLI extensively enough. ### Related Work
The ability of NLI systems to generalize, and related skepticism, has been raised in a number of recent papers. BIBREF1 show that the generalization capability of state-of-the-art NLI systems, in cases where some kind of external lexical knowledge is needed, drops dramatically when the SNLI test set is replaced by a test set where the premise and the hypothesis are otherwise identical except for at most one word. The results show a very significant drop in accuracy. BIBREF7 recognize the generalization problem that comes with training on datasets like SNLI, which tend to be homogeneous and show little linguistic variation. In this context, they propose to better train NLI models by making use of adversarial examples. Multiple papers have reported hidden bias and annotation artifacts in the popular NLI datasets SNLI and MultiNLI, allowing classification based on the hypothesis sentences alone BIBREF8 , BIBREF9 , BIBREF10 . BIBREF11 evaluate the robustness of NLI models using datasets where label-preserving swapping operations have been applied, reporting significant performance drops compared to the results on the original datasets. In these experiments, as in the BreakingNLI experiment, the systems that seem to perform better, i.e. that are less prone to breaking, are those in which some kind of external knowledge is used by the model (KIM by BIBREF2 is one of those systems). On a theoretical and methodological level, there is discussion on the nature of various NLI datasets, as well as on the definition of what counts as NLI and what does not. For example, BIBREF12 , BIBREF13 present an overview of the most standard datasets for NLI and show that the definitions of inference in each of them are actually quite different, capturing only fragments of what seems to be a more general phenomenon. BIBREF4 show that a simple LSTM model trained on the SNLI data fails when tested on SICK. However, their experiment is limited to this single architecture and dataset pair. 
BIBREF5 show that different models that perform well on SNLI have lower accuracy on MultiNLI. However, in their experiments they did not systematically test transfer learning between the two datasets; instead they used separate systems in which the training and test data were drawn from the same corpora. ### Experimental Setup
In this section we describe the datasets and model architectures included in the experiments. ### Data
We chose three different datasets for the experiments: SNLI, MultiNLI and SICK. All of them were designed for NLI involving three-way classification with the labels entailment, neutral and contradiction. We did not include any datasets with two-way classification, e.g. SciTail BIBREF14 . As SICK is a relatively small dataset of only approximately 10k sentence pairs, we did not use it as training data in any experiment. We also trained the models with a combined SNLI + MultiNLI training set. For all the datasets we report the baseline performance where the training and test data are drawn from the same corpus. We then take these trained models and test them on a test set taken from another NLI corpus. For the case where the models are trained on SNLI + MultiNLI we report the baseline using the SNLI test data. All the experimental combinations are listed in Table 1 . Examples from the selected datasets are provided in Table 2 . To be more precise, we vary three things: training dataset, model and testing dataset. We should qualify this, though, since the three datasets can also be grouped by text domain/genre and type of data collection, with MultiNLI and SNLI using the same data collection style, and SNLI and SICK using roughly the same domain/genre. Hopefully, our setup will let us determine which of these factors matters most. We describe the source datasets in more detail below. The Stanford Natural Language Inference (SNLI) corpus BIBREF4 is a dataset of 570k human-written sentence pairs manually labeled with the labels entailment, contradiction, and neutral. The premise sentences in SNLI were sourced from image captions taken from the Flickr30k corpus BIBREF15 . The Multi-Genre Natural Language Inference (MultiNLI) corpus BIBREF5 consists of 433k human-written sentence pairs labeled with entailment, contradiction and neutral. MultiNLI contains sentence pairs from ten distinct genres of both written and spoken English. 
Only five genres are included in the training set. The development and test sets have been divided into matched and mismatched, where the former includes only sentences from the same genres as the training data, and the latter includes sentences from the remaining genres not present in the training data. We used the matched development set (MultiNLI-m) for the experiments. The MultiNLI dataset was annotated using very similar instructions as for the SNLI dataset, so we can assume that the definitions of entailment, contradiction and neutral are the same in these two datasets. SICK BIBREF6 is a dataset originally constructed to test compositional distributional semantics (DS) models. It contains 9,840 examples pertaining to logical inference (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was automatically constructed from pairs of sentences taken from a random subset of the 8K ImageFlickr data set BIBREF15 and the SemEval 2012 STS MSRVideo Description dataset BIBREF16 . ### Model and Training Details
We perform experiments with six high-performing models covering sentence encoding models, cross-sentence attention models as well as fine-tuned pre-trained language models. For the sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with a hidden size of 600D per direction, used e.g. in InferSent BIBREF17 , and HBMP BIBREF18 . For the other models, we chose ESIM BIBREF19 , which includes cross-sentence attention, and KIM BIBREF2 , which has cross-sentence attention and utilizes external knowledge. We also selected two models involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0 . KIM is particularly interesting in this context as it performed significantly better than other models in the Breaking NLI experiment conducted by BIBREF1 . The success of pre-trained language models in multiple NLP tasks makes ESIM + ELMo and BERT interesting additions to this experiment. Table 3 lists the different models used in the experiments. For BiLSTM-max we used the Adam optimizer BIBREF21 , a learning rate of 5e-4 and a batch size of 64. The learning rate was decreased by a factor of 0.2 after each epoch if the model did not improve. Dropout of 0.1 was used between the layers of the multi-layer perceptron classifier, except before the last layer. The BiLSTM-max models were initialized with pre-trained 300-dimensional GloVe 840B word embeddings BIBREF22 , which were fine-tuned during training. Our BiLSTM-max model was implemented in PyTorch. For HBMP, ESIM, KIM and BERT we used the original implementations with the default settings and hyperparameter values as described in BIBREF18 , BIBREF19 , BIBREF2 and BIBREF0 respectively. For BERT we used the uncased 768-dimensional model (BERT-base). For ESIM + ELMo we used the AllenNLP BIBREF23 PyTorch implementation with the default settings and hyperparameter values. ### Experimental Results
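The "max" in the BiLSTM-max encoder described above refers to element-wise max pooling over the per-timestep hidden states, which turns a variable-length sequence into a fixed-size sentence embedding. The following is a minimal sketch of just that pooling step, not the paper's implementation: the LSTM itself is replaced by toy precomputed hidden states, and the dimensions are tiny (a real model would have 600 hidden units per direction).

```python
def max_pool(hidden_states):
    # Element-wise maximum over the sequence dimension: for each of the
    # d dimensions, keep the largest value seen at any timestep.
    dim = len(hidden_states[0])
    return [max(h[i] for h in hidden_states) for i in range(dim)]

# Toy BiLSTM output: 3 timesteps, 2 hidden units per direction, so 4 dims
# per timestep (forward and backward states concatenated).
hidden_states = [
    [0.1, 0.7, 0.3, -0.2],
    [0.5, 0.2, 0.9,  0.4],
    [0.4, 0.1, 0.0,  0.8],
]
sentence_embedding = max_pool(hidden_states)
print(sentence_embedding)  # [0.5, 0.7, 0.9, 0.8]
```

Because each output dimension may come from a different timestep, this pooling lets different hidden units specialize in different parts of the sentence, which is one reason it works well for sentence encoding.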
Table 4 contains all the experimental results. Our experiments show that, while all six models perform well when the test set is drawn from the same corpus as the training and development sets, accuracy is significantly lower when we test these trained models on a test set drawn from a separate NLI corpus, the average difference in accuracy being 24.9 points across all experiments. Accuracy drops the most when a model is tested on SICK: the difference is between 19.0-29.0 points when trained on MultiNLI, between 31.6-33.7 points when trained on SNLI and between 31.1-33.0 points when trained on SNLI + MultiNLI. This was expected, as the method of constructing the sentence pairs was different, and hence there is too much difference between the kinds of sentence pairs included in the training and test sets for transfer learning to work. However, the drop was more dramatic than expected. The most surprising result was that the accuracy of all models drops significantly even when the models were trained on MultiNLI and tested on SNLI (3.6-11.1 points). This is surprising, as both of these datasets were constructed with a similar data collection method using the same definition of entailment, contradiction and neutral. The sentences included in SNLI are also much simpler than those in MultiNLI, as they are taken from the Flickr image captions. This might also explain why the difference in accuracy for all six models is lowest when the models are trained on MultiNLI and tested on SNLI. It is also very surprising that the model with the biggest difference in accuracy was ESIM + ELMo, which includes a pre-trained ELMo language model. BERT performed significantly better than the other models in this experiment, with an accuracy of 80.4% and a difference of only 3.6 points. 
The poor performance of most of the models on the MultiNLI-SNLI dataset pair is also very surprising given that neural network models do not seem to suffer much from the introduction of new genres to the test set that were not included in the training set, as can be seen from the small difference in test accuracies for the matched and mismatched test sets (see e.g. BIBREF5 ). In a sense, SNLI could be seen as a separate genre not included in MultiNLI. This raises the question whether SNLI and MultiNLI have, e.g., different kinds of annotation artifacts, making transfer learning between these datasets more difficult. All the models except BERT perform almost equally poorly across all the experiments. Both BiLSTM-max and HBMP have an average drop in accuracy of 24.4 points, while the average for KIM is 25.5 and for ESIM + ELMo 25.6. ESIM has the highest average difference of 27.0 points. In contrast to the findings of BIBREF1 , utilizing external knowledge did not improve the model's generalization capability, as KIM performed equally poorly across all dataset combinations. Including a pretrained ELMo language model did not improve the results significantly either. The overall performance of BERT was significantly better than that of the other models, with the lowest average difference in accuracy of 22.5 points. Our baselines for SNLI (90.4%) and SNLI + MultiNLI (90.6%) outperform the previous state-of-the-art accuracy for SNLI (90.1%) by BIBREF24 . To understand better the types of errors made by neural network models in NLI we looked at some example failure-pairs for selected models. Tables 5 and 6 contain some randomly selected failure-pairs for two models, BERT and HBMP, and for three set-ups: SNLI $\rightarrow $ SICK, SNLI $\rightarrow $ MultiNLI and MultiNLI $\rightarrow $ SICK. We chose BERT as the current state-of-the-art NLI model. HBMP was selected as a high-performing model of the sentence encoding type. 
Although the listed sentence pairs represent just a small sample of the errors made by these models, they do include some interesting examples. First, it seems that SICK has a narrower notion of contradiction – corresponding more closely to logical contradiction – than SNLI and MultiNLI, where, especially in SNLI, sentences are contradictory if they describe a different state of affairs. This is evident in the sentence pair: A young child is running outside over the fallen leaves and A young child is lying down on a gravel road that is covered with dead leaves, which is predicted by BERT to be contradiction although the gold label is neutral. Another interesting example is the sentence pair: A boat pear with people boarding and disembarking some boats. and people are boarding and disembarking some boats, which is incorrectly predicted by BERT to be contradiction although it has been labeled as entailment. Here the two sentences describe the same event from different points of view, the first one describing a boat pear with some people on it and the second one describing the people directly. Interestingly, the added information about the boat pear seems to confuse the model. ### Discussion and Conclusion
In this paper we have shown that neural network models for NLI fail to generalize across different NLI benchmarks. We experimented with six state-of-the-art models covering sentence encoding approaches, cross-sentence attention models and pre-trained and fine-tuned language models. For all the systems, the accuracy drops between 3.6-33.7 points (the average drop being 24.9 points) when testing with a test set drawn from a separate corpus from that of the training data, as compared to when the test and training data are splits from the same corpus. Our findings, together with the previous negative findings, indicate that the state-of-the-art models fail to capture the semantics of NLI in a way that will enable them to generalize across different NLI situations. The results highlight two issues to be taken into consideration: a) models trained on datasets covering only a fraction of what NLI is will fail when tested on datasets that test a slightly different definition of inference. This is evident when we move from the SNLI to the SICK dataset. b) NLI is to some extent genre/context dependent. Training on SNLI and testing on MultiNLI gives worse results than vice versa. This is particularly evident in the case of BERT. These results highlight that training on multiple genres helps. However, this help is still not enough given that, even in the case of training on MultiNLI (multi-genre) and testing on SNLI (single genre, with the same definition of inference as MultiNLI), accuracy drops significantly. We also found that involving a large pre-trained language model helps with transfer learning when the datasets are similar enough, as is the case with SNLI and MultiNLI. Our results further corroborate the power of pre-trained and fine-tuned language models like BERT in NLI. However, not even BERT is able to generalize from SNLI and MultiNLI to SICK, possibly due to the difference in the kinds of inference relations contained in these datasets. 
Our findings motivate us to look for novel neural network architectures and approaches that better capture the semantics of natural language inference beyond individual datasets. However, there seems to be a need to start with better constructed datasets, i.e. datasets that capture more than fractions of what NLI is in reality. Better NLI systems need to be more versatile in the types of inference they can recognize; otherwise we would be stuck with systems that cover only some aspects of NLI. On a theoretical level, and in connection to the previous point, we need a better understanding of the range of phenomena NLI must be able to cover, and we should focus our future dataset-construction endeavours in this direction. To do this, a more systematic study is needed of the different kinds of entailment relations NLI datasets need to include. Our future work will include a more systematic and broad-coverage analysis of the types of errors the models make and of the kinds of sentence pairs on which they make successful predictions. ### Acknowledgments
The first author is supported by the FoTran project, funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113). The first author also gratefully acknowledges the support of the Academy of Finland through project 314062 from the ICT 2023 call on Computation, Machine Learning and Artificial Intelligence. The second author is supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg. Table 1: Dataset combinations used in the experiments. The rows in bold are baseline experiments, where the test data comes from the same benchmark as the training and development data. Table 2: Example sentence pairs from the three datasets. Table 3: Model architectures used in the experiments. Table 4: Test accuracies (%). For the baseline results (highlighted in bold) the training data and test data have been drawn from the same benchmark corpus. ∆ is the difference between the test accuracy and the baseline accuracy for the same training set. Results marked with * are for the development set, as no annotated test set is openly available. Best scores with respect to accuracy and difference in accuracy are underlined. Table 5: Example failure-pairs for BERT. Table 6: Example failure-pairs for HBMP.
|
BERT
|
What's ironic about the narrator's and Kroger's decision to sign on for the flight scheduled to Venus?
A. The narrator is going to fabricate more events to make his story sound appealing to the general public
B. They have the least amount of technical experience compared to the other members of the Martian crew
C. They were permitted to attend due to their 'experience,' but their experience created a major crisis on Earth
D. The narrator's deadpan tone is not likely to convey the true excitement of the Venusian journey
|
THE DOPE on Mars By JACK SHARKEY Somebody had to get the human angle on this trip ... but what was humane about sending me? Illustrated by WOOD My agent was the one who got me the job of going along to write up the first trip to Mars. He was always getting me things like that—appearances on TV shows, or mentions in writers' magazines. If he didn't sell much of my stuff, at least he sold me . "It'll be the biggest break a writer ever got," he told me, two days before blastoff. "Oh, sure there'll be scientific reports on the trip, but the public doesn't want them; they want the human slant on things." "But, Louie," I said weakly, "I'll probably be locked up for the whole trip. If there are fights or accidents, they won't tell me about them." "Nonsense," said Louie, sipping carefully at a paper cup of scalding coffee. "It'll be just like the public going along vicariously. They'll identify with you." "But, Louie," I said, wiping the dampness from my palms on the knees of my trousers as I sat there, "how'll I go about it? A story? An article? A you-are-there type of report? What?" Louie shrugged. "So keep a diary. It'll be more intimate, like." "But what if nothing happens?" I insisted hopelessly. Louie smiled. "So you fake it." I got up from the chair in his office and stepped to the door. "That's dishonest," I pointed out. "Creative is the word," Louie said. So I went on the first trip to Mars. And I kept a diary. This is it. And it is honest. Honest it is. October 1, 1960 They picked the launching date from the March, 1959, New York Times , which stated that this was the most likely time for launching. Trip time is supposed to take 260 days (that's one way), so we're aimed toward where Mars will be (had better be, or else). There are five of us on board. A pilot, co-pilot, navigator and biochemist. And, of course, me. I've met all but the pilot (he's very busy today), and they seem friendly enough. 
Dwight Kroger, the biochemist, is rather old to take the "rigors of the journey," as he puts it, but the government had a choice between sending a green scientist who could stand the trip or an accomplished man who would probably not survive, so they picked Kroger. We've blasted off, though, and he's still with us. He looks a damn sight better than I feel. He's kind of balding, and very iron-gray-haired and skinny, but his skin is tan as an Indian's, and right now he's telling jokes in the washroom with the co-pilot. Jones (that's the co-pilot; I didn't quite catch his first name) is scarlet-faced, barrel-chested and gives the general appearance of belonging under the spreading chestnut tree, not in a metal bullet flinging itself out into airless space. Come to think of it, who does belong where we are? The navigator's name is Lloyd Streeter, but I haven't seen his face yet. He has a little cubicle behind the pilot's compartment, with all kinds of maps and rulers and things. He keeps bent low over a welded-to-the-wall (they call it the bulkhead, for some reason or other) table, scratching away with a ballpoint pen on the maps, and now and then calling numbers over a microphone to the pilot. His hair is red and curly, and he looks as though he'd be tall if he ever gets to stand up. There are freckles on the backs of his hands, so I think he's probably got them on his face, too. So far, all he's said is, "Scram, I'm busy." Kroger tells me that the pilot's name is Patrick Desmond, but that I can call him Pat when I get to know him better. So far, he's still Captain Desmond to me. I haven't the vaguest idea what he looks like. He was already on board when I got here, with my typewriter and ream of paper, so we didn't meet. My compartment is small but clean. I mean clean now. It wasn't during blastoff. The inertial gravities didn't bother me so much as the gyroscopic spin they put on the ship so we have a sort of artificial gravity to hold us against the curved floor. 
It's that constant whirly feeling that gets me. I get sick on merry-go-rounds, too. They're having pork for dinner today. Not me. October 2, 1960 Feeling much better today. Kroger gave me a box of Dramamine pills. He says they'll help my stomach. So far, so good. Lloyd came by, also. "You play chess?" he asked. "A little," I admitted. "How about a game sometime?" "Sure," I said. "Do you have a board?" He didn't. Lloyd went away then, but the interview wasn't wasted. I learned that he is tall and does have a freckled face. Maybe we can build a chessboard. With my paper and his ballpoint pen and ruler, it should be easy. Don't know what we'll use for pieces, though. Jones (I still haven't learned his first name) has been up with the pilot all day. He passed my room on the way to the galley (the kitchen) for a cup of dark brown coffee (they like it thick) and told me that we were almost past the Moon. I asked to look, but he said not yet; the instrument panel is Top Secret. They'd have to cover it so I could look out the viewing screen, and they still need it for steering or something. I still haven't met the pilot. October 3, 1960 Well, I've met the pilot. He is kind of squat, with a vulturish neck and close-set jet-black eyes that make him look rather mean, but he was pleasant enough, and said I could call him Pat. I still don't know Jones' first name, though Pat spoke to him, and it sounded like Flants. That can't be right. Also, I am one of the first five men in the history of the world to see the opposite side of the Moon, with a bluish blurred crescent beyond it that Pat said was the Earth. The back of the Moon isn't much different from the front. As to the space in front of the ship, well, it's all black with white dots in it, and none of the dots move, except in a circle that Pat says is a "torque" result from the gyroscopic spin we're in. 
Actually, he explained to me, the screen is supposed to keep the image of space locked into place no matter how much we spin. But there's some kind of a "drag." I told him I hoped it didn't mean we'd land on Mars upside down. He just stared at me. I can't say I was too impressed with that 16 x 19 view of outer space. It's been done much better in the movies. There's just no awesomeness to it, no sense of depth or immensity. It's as impressive as a piece of velvet with salt sprinkled on it. Lloyd and I made a chessboard out of a carton. Right now we're using buttons for men. He's one of these fast players who don't stop and think out their moves. And so far I haven't won a game. It looks like a long trip. October 4, 1960 I won a game. Lloyd mistook my queen-button for my bishop-button and left his king in jeopardy, and I checkmated him next move. He said chess was a waste of time and he had important work to do and he went away. I went to the galley for coffee and had a talk about moss with Kroger. He said there was a good chance of lichen on Mars, and I misunderstood and said, "A good chance of liking what on Mars?" and Kroger finished his coffee and went up front. When I got back to my compartment, Lloyd had taken away the chessboard and all his buttons. He told me later he needed it to back up a star map. Pat slept mostly all day in his compartment, and Jones sat and watched the screen revolve. There wasn't much to do, so I wrote a poem, sort of. Mary, Mary, quite contrary, How does your garden grow? With Martian rime, Venusian slime, And a radioactive hoe. I showed it to Kroger. He says it may prove to be environmentally accurate, but that I should stick to prose. October 5, 1960 Learned Jones' first name. He wrote something in the ship's log, and I saw his signature. His name is Fleance, like in "Macbeth." He prefers to be called Jones. Pat uses his first name as a gag. Some fun. And only 255 days to go. 
April 1, 1961 I've skipped over the last 177 days or so, because there's nothing much new. I brought some books with me on the trip, books that I'd always meant to read and never had the time. So now I know all about Vanity Fair , Pride and Prejudice , War and Peace , Gone with the Wind , and Babbitt . They didn't take as long as I thought they would, except for Vanity Fair . It must have been a riot when it first came out. I mean, all those sly digs at the aristocracy, with copious interpolations by Mr. Thackeray in case you didn't get it when he'd pulled a particularly good gag. Some fun. And only 78 days to go. June 1, 1961 Only 17 days to go. I saw Mars on the screen today. It seems to be descending from overhead, but Pat says that that's the "torque" doing it. Actually, it's we who are coming in sideways. We've all grown beards, too. Pat said it was against regulations, but what the hell. We have a contest. Longest whiskers on landing gets a prize. I asked Pat what the prize was and he told me to go to hell. June 18, 1961 Mars has the whole screen filled. Looks like Death Valley. No sign of canals, but Pat says that's because of the dust storm down below. It's nice to have a "down below" again. We're going to land, so I have to go to my bunk. It's all foam rubber, nylon braid supports and magnesium tubing. Might as well be cement for all the good it did me at takeoff. Earth seems awfully far away. June 19, 1961 Well, we're down. We have to wear gas masks with oxygen hook-ups. Kroger says the air is breathable, but thin, and it has too much dust in it to be any fun to inhale. He's all for going out and looking for lichen, but Pat says he's got to set up camp, then get instructions from Earth. So we just have to wait. The air is very cold, but the Sun is hot as hell when it hits you. The sky is a blinding pink, or maybe more of a pale fuchsia. Kroger says it's the dust. The sand underfoot is kind of rose-colored, and not really gritty. 
The particles are round and smooth. No lichen so far. Kroger says maybe in the canals, if there are any canals. Lloyd wants to play chess again. Jones won the beard contest. Pat gave him a cigar he'd smuggled on board (no smoking was allowed on the ship), and Jones threw it away. He doesn't smoke.

June 20, 1961

Got lost today. Pat told me not to go too far from camp, so, when I took a stroll, I made sure every so often that I could still see the rocket behind me. Walked for maybe an hour; then the oxygen gauge got past the halfway mark, so I started back toward the rocket. After maybe ten steps, the rocket disappeared. One minute it was standing there, tall and silvery, the next instant it was gone. Turned on my radio pack and got hold of Pat. Told him what happened, and he told Kroger. Kroger said I had been following a mirage, to step back a bit. I did, and I could see the ship again. Kroger said to try and walk toward where the ship seemed to be, even when it wasn't in view, and meantime they'd come out after me in the jeep, following my footprints. Started walking back, and the ship vanished again. It reappeared, disappeared, but I kept going. Finally saw the real ship, and Lloyd and Jones waving their arms at me. They were shouting through their masks, but I couldn't hear them. The air is too thin to carry sound well. All at once, something gleamed in their hands, and they started shooting at me with their rifles. That's when I heard the noise behind me. I was too scared to turn around, but finally Jones and Lloyd came running over, and I got up enough nerve to look. There was nothing there, but on the sand, paralleling mine, were footprints. At least I think they were footprints. Twice as long as mine, and three times as wide, but kind of featureless because the sand's loose and dry. They doubled back on themselves, spaced considerably farther apart. "What was it?" I asked Lloyd when he got to me. "Damned if I know," he said.
"It was red and scaly, and I think it had a tail. It was two heads taller than you." He shuddered. "Ran off when we fired." "Where," said Jones, "are Pat and Kroger?" I didn't know. I hadn't seen them, nor the jeep, on my trip back. So we followed the wheel tracks for a while, and they veered off from my trail and followed another, very much like the one that had been paralleling mine when Jones and Lloyd had taken a shot at the scaly thing. "We'd better get them on the radio," said Jones, turning back toward the ship. There wasn't anything on the radio but static. Pat and Kroger haven't come back yet, either.

June 21, 1961

We're not alone here. More of the scaly things have come toward the camp, but a few rifle shots send them away. They hop like kangaroos when they're startled. Their attitudes aren't menacing, but their appearance is. And Jones says, "Who knows what's 'menacing' in an alien?" We're going to look for Kroger and Pat today. Jones says we'd better before another windstorm blows away the jeep tracks. Fortunately, the jeep has a leaky oil pan, so we always have the smears to follow, unless they get covered up, too. We're taking extra oxygen, shells, and rifles. Food, too, of course. And we're locking up the ship.

It's later, now. We found the jeep, but no Kroger or Pat. Lots of those big tracks nearby. We're taking the jeep to follow the aliens' tracks. There's some moss around here, on reddish brown rocks that stick up through the sand, just on the shady side, though. Kroger must be happy to have found his lichen. The trail ended at the brink of a deep crevice in the ground. Seems to be an earthquake-type split in solid rock, with the sand sifting over this and the far edge like pink silk cataracts. The bottom is in the shade and can't be seen. The crack seems to extend to our left and right as far as we can look. There looks like a trail down the inside of the crevice, but the Sun's setting, so we're waiting till tomorrow to go down.
Going down was Jones' idea, not mine.

June 22, 1961

Well, we're at the bottom, and there's water here, a shallow stream about thirty feet wide that runs along the center of the canal (we've decided we're in a canal). No sign of Pat or Kroger yet, but the sand here is hard-packed and damp, and there are normal-size footprints mingled with the alien ones, sharp and clear. The aliens seem to have six or seven toes. It varies from print to print. And they're barefoot, too, or else they have the damnedest-looking shoes in creation. The constant shower of sand near the cliff walls is annoying, but it's sandless (shower-wise) near the stream, so we're following the footprints along the bank. Also, the air's better down here. Still thin, but not so bad as on the surface. We're going without masks to save oxygen for the return trip (Jones assures me there'll be a return trip), and the air's only a little bit sandy, but handkerchiefs over nose and mouth solve this. We look like desperadoes, what with the rifles and covered faces. I said as much to Lloyd and he told me to shut up. Moss all over the cliff walls. Swell luck for Kroger. We've found Kroger and Pat, with the help of the aliens. Or maybe I should call them the Martians. Either way, it's better than what Jones calls them. They took away our rifles and brought us right to Kroger and Pat, without our even asking. Jones is mad at the way they got the rifles so easily. When we came upon them (a group of maybe ten, huddling behind a boulder in ambush), he fired, but the shots either bounced off their scales or stuck in their thick hides. Anyway, they took the rifles away and threw them into the stream, and picked us all up and took us into a hole in the cliff wall. The hole went on practically forever, but it didn't get dark. Kroger tells me that there are phosphorescent bacteria living in the mold on the walls. The air has a fresh-dug-grave smell, but it's richer in oxygen than even at the stream.
We're in a small cave that is just off a bigger cave where lots of tunnels come together. I can't remember which one we came in through, and neither can anyone else. Jones asked me what the hell I kept writing in the diary for, did I want to make it a gift to Martian archeologists? But I said where there's life there's hope, and now he won't talk to me. I congratulated Kroger on the lichen I'd seen, but he just said a short and unscientific word and went to sleep. There's a Martian guarding the entrance to our cave. I don't know what they intend to do with us. Feed us, I hope. So far, they've just left us here, and we're out of rations. Kroger tried talking to the guard once, but he (or it) made a whistling kind of sound and flashed a mouthful of teeth. Kroger says the teeth are in multiple rows, like a tiger shark's. I'd rather he hadn't told me.

June 23, 1961, I think

We're either in a docket or a zoo. I can't tell which. There's a rather square platform surrounded on all four sides by running water, maybe twenty feet across, and we're on it. Martians keep coming to the far edge of the water and looking at us and whistling at each other. A little Martian came near the edge of the water and a larger Martian whistled like crazy and dragged it away. "Water must be dangerous to them," said Kroger. "We shoulda brought water pistols," Jones muttered. Pat said maybe we can swim to safety. Kroger told Pat he was crazy, that the little island we're on here underground is bordered by a fast river that goes into the planet. We'd end up drowned in some grotto in the heart of the planet, says Kroger. "What the hell," says Pat, "it's better than starving." It is not.

June 24, 1961, probably

I'm hungry. So is everybody else. Right now I could eat a dinner raw, in a centrifuge, and keep it down. A Martian threw a stone at Jones today, and Jones threw one back at him and broke off a couple of scales. The Martian whistled furiously and went away.
When the crowd thinned out, same as it did yesterday (must be some sort of sleeping cycle here), Kroger talked Lloyd into swimming across the river and getting the red scales. Lloyd started at the upstream part of the current, and was about a hundred yards below this underground island before he made the far side. Sure is a swift current. But he got the scales, walked very far upstream of us, and swam back with them. The stream sides are steep, like in a fjord, and we had to lift him out of the swirling cold water, with the scales gripped in his fist. Or what was left of the scales. They had melted down in the water and left his hand all sticky. Kroger took the gummy things, studied them in the uncertain light, then tasted them and grinned. The Martians are made of sugar.

Later, same day. Kroger said that the Martian metabolism must be like Terran (Earth-type) metabolism, only with no pancreas to make insulin. They store their energy on the outside of their bodies, in the form of scales. He's watched them more closely and seen that they have long rubbery tubes for tongues, and that they now and then suck up water from the stream while they're watching us, being careful not to get their lips (all sugar, of course) wet. He guesses that their "blood" must be almost pure water, and that it washes away (from the inside, of course) the sugar they need for energy. I asked him where the sugar came from, and he said probably their bodies isolated carbon from something (he thought it might be the moss) and combined it with the hydrogen and oxygen in the water (even I knew the formula for water) to make sugar, a common carbohydrate. Like plants, on Earth, he said. Except, instead of using special cells on leaves to form carbohydrates with the help of sunpower, as Earth plants do in photosynthesis (Kroger spelled that word for me), they used the shape of the scales like prisms, to isolate the spectra (another Kroger word) necessary to form the sugar.
"I don't get it," I said politely, when he'd finished his spiel. "Simple," he said, as though he were addressing me by name. "They have a twofold reason to fear water. One: by complete solvency in that medium, they lose all energy and die. Two: even partial sprinkling alters the shape of the scales, and they are unable to use sunpower to form more sugar, and still die, if a bit slower." "Oh," I said, taking it down verbatim. "So now what do we do?" "We remove our boots," said Kroger, sitting on the ground and doing so, "and then we cross this stream, fill the boots with water, and spray our way to freedom." "Which tunnel do we take?" asked Pat, his eyes aglow at the thought of escape. Kroger shrugged. "We'll have to chance taking any that seem to slope upward. In any event, we can always follow it back and start again." "I dunno," said Jones. "Remember those teeth of theirs. They must be for biting something more substantial than moss, Kroger." "We'll risk it," said Pat. "It's better to go down fighting than to die of starvation." The hell it is.

June 24, 1961, for sure

The Martians have coal mines. That's what they use those teeth for. We passed through one and surprised a lot of them chewing gritty hunks of anthracite out of the walls. They came running at us, whistling with those tubelike tongues, and drooling dry coal dust, but Pat swung one of his boots in an arc that splashed all over the ground in front of them, and they turned tail (literally) and clattered off down another tunnel, sounding like a locomotive whistle gone berserk. We made the surface in another hour, back in the canal, and were lucky enough to find our own trail to follow toward the place above which the jeep still waited. Jones got the rifles out of the stream (the Martians had probably thought they were beyond recovery there) and we found the jeep. It was nearly buried in sand, but we got it cleaned off and running, and got back to the ship quickly.
First thing we did on arriving was to break out the stores and have a celebration feast just outside the door of the ship. It was pork again, and I got sick.

June 25, 1961

We're going back. Pat says that a week is all we were allowed to stay and that it's urgent to return and tell what we've learned about Mars (we know there are Martians, and they're made of sugar). "Why," I said, "can't we just tell it on the radio?" "Because," said Pat, "if we tell them now, by the time we get back we'll be yesterday's news. This way we may be lucky and get a parade." "Maybe even money," said Kroger, whose mind wasn't always on science. "But they'll ask why we didn't radio the info, sir," said Jones uneasily. "The radio," said Pat, nodding to Lloyd, "was unfortunately broken shortly after landing." Lloyd blinked, then nodded back and walked around the rocket. I heard a crunching sound and the shattering of glass, not unlike the noise made when one drives a rifle butt through a radio. Well, it's time for takeoff. This time it wasn't so bad. I thought I was getting my space-legs, but Pat says there's less gravity on Mars, so escape velocity didn't have to be so fast, hence a smoother (relatively) trip on our shock-absorbing bunks. Lloyd wants to play chess again. I'll be careful not to win this time. However, if I don't win, maybe this time I'll be the one to quit. Kroger is busy in his cramped lab space trying to classify the little moss he was able to gather, and Jones and Pat are up front watching the white specks revolve on that black velvet again. Guess I'll take a nap.

June 26, 1961

Hell's bells. Kroger says there are two baby Martians loose on board ship. Pat told him he was nuts, but there are certain signs he's right. Like the missing charcoal in the air-filtration-and-reclaiming (AFAR) system. And the water gauges are going down. But the clincher is those two sugar crystals Lloyd had grabbed up when we were in that zoo. They're gone.
Pat has declared a state of emergency. Quick thinking, that's Pat. Lloyd, before he remembered and turned scarlet, suggested we radio Earth for instructions. We can't. Here we are, somewhere in a void headed for Earth, with enough air and water left for maybe three days—if the Martians don't take any more. Kroger is thrilled that he is learning something, maybe, about Martian reproductive processes. When he told Pat, Pat put it to a vote whether or not to jettison Kroger through the airlock. However, it was decided that responsibility was pretty well divided. Lloyd had gotten the crystals, Kroger had only studied them, and Jones had brought them aboard. So Kroger stays, but meanwhile the air is getting worse. Pat suggested Kroger put us all into a state of suspended animation till landing time, eight months away. Kroger said, "How?"

June 27, 1961

Air is foul and I'm very thirsty. Kroger says that at least—when the Martians get bigger—they'll have to show themselves. Pat says what do we do then? We can't afford the water we need to melt them down. Besides, the melted crystals might all turn into little Martians. Jones says he'll go down spitting. Pat says why not dismantle interior of rocket to find out where they're holing up? Fine idea. How do you dismantle riveted metal plates?

June 28, 1961

The AFAR system is no more and the water gauges are still dropping. Kroger suggests baking bread, then slicing it, then toasting it till it turns to carbon, and we can use the carbon in the AFAR system. We'll have to try it, I guess. The Martians ate the bread. Jones came forward to tell us the loaves were cooling, and when he got back they were gone. However, he did find a few of the red crystals on the galley deck (floor). They're good-sized crystals, too. Which means so are the Martians. Kroger says the Martians must be intelligent, otherwise they couldn't have guessed at the carbohydrates present in the bread after a lifelong diet of anthracite.
Pat says let's jettison Kroger. This time the vote went against Kroger, but he got a last-minute reprieve by suggesting the crystals be pulverized and mixed with sulphuric acid. He says this'll produce carbon. I certainly hope so. So does Kroger. Brief reprieve for us. The acid-sugar combination not only produces carbon but water vapor, and the gauge has gone up a notch. That means that we have a quart of water in the tanks for drinking. However, the air's a bit better, and we voted to let Kroger stay inside the rocket. Meantime, we have to catch those Martians.

June 29, 1961

Worse and worse. Lloyd caught one of the Martians in the firing chamber. We had to flood the chamber with acid to subdue the creature, which carbonized nicely. So now we have plenty of air and water again, but besides having another Martian still on the loose, we now don't have enough acid left in the fuel tanks to make a landing. Pat says at least our vector will carry us to Earth and we can die on our home planet, which is better than perishing in space. The hell it is.

March 3, 1962

Earth in sight. The other Martian is still with us. He's where we can't get at him without blow-torches, but he can't get at the carbon in the AFAR system, either, which is a help. However, his tail is prehensile, and now and then it snakes out through an air duct and yanks food right off the table from under our noses. Kroger says watch out. We are made of carbohydrates, too. I'd rather not have known.

March 4, 1962

Earth fills the screen in the control room. Pat says if we're lucky, he might be able to use the bit of fuel we have left to set us in a descending spiral into one of the oceans. The rocket is tighter than a submarine, he insists, and it will float till we're rescued, if the plates don't crack under the impact. We all agreed to try it. Not that we thought it had a good chance of working, but none of us had a better idea.
I guess you know the rest of the story, about how that destroyer spotted us and got us and my diary aboard, and towed the rocket to San Francisco. News of the "captured Martian" leaked out, and we all became nine-day wonders until the dismantling of the rocket. Kroger says he must have dissolved in the water, and wonders what that would do. There are about a thousand of those crystal-scales on a Martian. So last week we found out, when those red-scaled things began clambering out of the sea on every coastal region on Earth. Kroger tried to explain to me about salinity osmosis and hydrostatic pressure and crystalline life, but in no time at all he lost me. The point is, bullets won't stop these things, and wherever a crystal falls, a new Martian springs up in a few weeks. It looks like the five of us have abetted an invasion from Mars. Needless to say, we're no longer heroes. I haven't heard from Pat or Lloyd for a week. Jones was picked up attacking a candy factory yesterday, and Kroger and I were allowed to sign on for the flight to Venus scheduled within the next few days—because of our experience. Kroger says there's only enough fuel for a one-way trip. I don't care. I've always wanted to travel with the President.

—JACK SHARKEY

Transcriber's Note: This etext was produced from Galaxy Magazine June 1960. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. Minor spelling and typographical errors have been corrected without note.
|
C. They were permitted to attend due to their 'experience,' but their experience created a major crisis on Earth
|
What doesn't Baron think was a reason for their failure?
A. McIvers
B. the Major's experience
C. poor mapping
D. faulty equipment
|
Brightside Crossing
by Alan E. Nourse

JAMES BARON was not pleased to hear that he had had a visitor when he reached the Red Lion that evening. He had no stomach for mysteries, vast or trifling, and there were pressing things to think about at this time. Yet the doorman had flagged him as he came in from the street: “A thousand pardons, Mr. Baron. The gentleman—he would leave no name. He said you’d want to see him. He will be back by eight.” Now Baron drummed his fingers on the table top, staring about the quiet lounge. Street trade was discouraged at the Red Lion, gently but persuasively; the patrons were few in number. Across to the right was a group that Baron knew vaguely—Andean climbers, or at least two of them were. Over near the door he recognized old Balmer, who had mapped the first passage to the core of Vulcan Crater on Venus. Baron returned his smile with a nod. Then he settled back and waited impatiently for the intruder who demanded his time without justifying it. Presently a small, grizzled man crossed the room and sat down at Baron’s table. He was short and wiry. His face held no key to his age—he might have been thirty or a thousand—but he looked weary and immensely ugly. His cheeks and forehead were twisted and brown, with scars that were still healing. The stranger said, “I’m glad you waited. I’ve heard you’re planning to attempt the Brightside.” Baron stared at the man for a moment. “I see you can read telecasts,” he said coldly. “The news was correct. We are going to make a Brightside Crossing.” “At perihelion?” “Of course. When else?” The grizzled man searched Baron’s face for a moment without expression. Then he said slowly, “No, I’m afraid you’re not going to make the Crossing.” “Say, who are you, if you don’t mind?” Baron demanded. “The name is Claney,” said the stranger. There was a silence. Then: “Claney? Peter Claney?” “That’s right.” Baron’s eyes were wide with excitement, all trace of anger gone.
“Great balls of fire, man—where have you been hiding? We’ve been trying to contact you for months!” “I know. I was hoping you’d quit looking and chuck the whole idea.” “Quit looking!” Baron bent forward over the table. “My friend, we’d given up hope, but we’ve never quit looking. Here, have a drink. There’s so much you can tell us.” His fingers were trembling. Peter Claney shook his head. “I can’t tell you anything you want to hear.” “But you’ve got to. You’re the only man on Earth who’s attempted a Brightside Crossing and lived through it! And the story you cleared for the news—it was nothing. We need details. Where did your equipment fall down? Where did you miscalculate? What were the trouble spots?” Baron jabbed a finger at Claney’s face. “That, for instance—epithelioma? Why? What was wrong with your glass? Your filters? We’ve got to know those things. If you can tell us, we can make it across where your attempt failed—” “You want to know why we failed?” asked Claney. “Of course we want to know. We have to know.” “It’s simple. We failed because it can’t be done. We couldn’t do it and neither can you. No human beings will ever cross the Brightside alive, not if they try for centuries.” “Nonsense,” Baron declared. “We will.” Claney shrugged. “I was there. I know what I’m saying. You can blame the equipment or the men—there were flaws in both quarters—but we just didn’t know what we were fighting. It was the planet that whipped us, that and the Sun. They’ll whip you, too, if you try it.” “Never,” said Baron. “Let me tell you,” Peter Claney said. I’d been interested in the Brightside for almost as long as I can remember (Claney said). I guess I was about ten when Wyatt and Carpenter made the last attempt—that was in 2082, I think. I followed the news stories like a tri-V serial and then I was heartbroken when they just disappeared.
I know now that they were a pair of idiots, starting off without proper equipment, with practically no knowledge of surface conditions, without any charts—they couldn’t have made a hundred miles—but I didn’t know that then and it was a terrible tragedy. After that, I followed Sanderson’s work in the Twilight Lab up there and began to get Brightside into my blood, sure as death. But it was Mikuta’s idea to attempt a Crossing. Did you ever know Tom Mikuta? I don’t suppose you did. No, not Japanese—Polish-American. He was a major in the Interplanetary Service for some years and hung onto the title after he gave up his commission. He was with Armstrong on Mars during his Service days, did a good deal of the original mapping and surveying for the Colony there. I first met him on Venus; we spent five years together up there doing some of the nastiest exploring since the Matto Grasso. Then he made the attempt on Vulcan Crater that paved the way for Balmer a few years later. I’d always liked the Major—he was big and quiet and cool, the sort of guy who always had things figured a little further ahead than anyone else and always knew what to do in a tight place. Too many men in this game are all nerve and luck, with no judgment. The Major had both. He also had the kind of personality that could take a crew of wild men and make them work like a well-oiled machine across a thousand miles of Venus jungle. I liked him and I trusted him. He contacted me in New York and he was very casual at first. We spent an evening here at the Red Lion, talking about old times; he told me about the Vulcan business, and how he’d been out to see Sanderson and the Twilight Lab on Mercury, and how he preferred a hot trek to a cold one any day of the year—and then he wanted to know what I’d been doing since Venus and what my plans were. “No particular plans,” I told him. “Why?” He looked me over. “How much do you weigh, Peter?” I told him one-thirty-five. “That much!” he said. 
“Well, there can’t be much fat on you, at any rate. How do you take heat?” “You should know,” I said. “Venus was no icebox.” “No, I mean real heat.” Then I began to get it. “You’re planning a trip.” “That’s right. A hot trip.” He grinned at me. “Might be dangerous, too.” “What trip?” “Brightside of Mercury,” the Major said. I whistled cautiously. “At aphelion?” He threw his head back. “Why try a Crossing at aphelion? What have you done then? Four thousand miles of butcherous heat, just to have some joker come along, use your data and drum you out of the glory by crossing at perihelion forty-four days later? No, thanks. I want the Brightside without any nonsense about it.” He leaned across me eagerly. “I want to make a Crossing at perihelion and I want to cross on the surface. If a man can do that, he’s got Mercury. Until then, nobody’s got Mercury. I want Mercury—but I’ll need help getting it.” I’d thought of it a thousand times and never dared consider it. Nobody had, since Wyatt and Carpenter disappeared. Mercury turns on its axis in the same time that it wheels around the Sun, which means that the Brightside is always facing in. That makes the Brightside of Mercury at perihelion the hottest place in the Solar System, with one single exception: the surface of the Sun itself. It would be a hellish trek. Only a few men had ever learned just how hellish and they never came back to tell about it. It was a real hell’s Crossing, but someday, I thought, somebody would cross it. I wanted to be along. The Twilight Lab, near the northern pole of Mercury, was the obvious jumping-off place. The setup there wasn’t very extensive—a rocket landing, the labs and quarters for Sanderson’s crew sunk deep into the crust, and the tower that housed the Solar ’scope that Sanderson had built up there ten years before. 
Twilight Lab wasn’t particularly interested in the Brightside, of course—the Sun was Sanderson’s baby and he’d picked Mercury as the closest chunk of rock to the Sun that could hold his observatory. He’d chosen a good location, too. On Mercury, the Brightside temperature hits 770° F. at perihelion and the Darkside runs pretty constant at -410° F. No permanent installation with a human crew could survive at either extreme. But with Mercury’s wobble, the twilight zone between Brightside and Darkside offers something closer to survival temperatures. Sanderson built the Lab up near the pole, where the zone is about five miles wide, so the temperature only varies 50 to 60 degrees with the libration. The Solar ’scope could take that much change and they’d get good clear observation of the Sun for about seventy out of the eighty-eight days it takes the planet to wheel around. The Major was counting on Sanderson knowing something about Mercury as well as the Sun when we camped at the Lab to make final preparations. Sanderson did. He thought we’d lost our minds and he said so, but he gave us all the help he could. He spent a week briefing Jack Stone, the third member of our party, who had arrived with the supplies and equipment a few days earlier. Poor Jack met us at the rocket landing almost bawling, Sanderson had given him such a gloomy picture of what Brightside was like. Stone was a youngster—hardly twenty-five, I’d say—but he’d been with the Major at Vulcan and had begged to join this trek. I had a funny feeling that Jack really didn’t care for exploring too much, but he thought Mikuta was God, followed him around like a puppy. It didn’t matter to me as long as he knew what he was getting in for. You don’t go asking people in this game why they do it—they’re liable to get awfully uneasy and none of them can ever give you an answer that makes sense. 
Anyway, Stone had borrowed three men from the Lab, and had the supplies and equipment all lined up when we got there, ready to check and test. We dug right in. With plenty of funds—tri-V money and some government cash the Major had talked his way around—our equipment was new and good. Mikuta had done the designing and testing himself, with a big assist from Sanderson. We had four Bugs, three of them the light pillow-tire models, with special lead-cooled cut-in engines when the heat set in, and one heavy-duty tractor model for pulling the sledges. The Major went over them like a kid at the circus. Then he said, “Have you heard anything from McIvers?” “Who’s he?” Stone wanted to know. “He’ll be joining us. He’s a good man—got quite a name for climbing, back home.” The Major turned to me. “You’ve probably heard of him.” I’d heard plenty of stories about Ted McIvers and I wasn’t too happy to hear that he was joining us. “Kind of a daredevil, isn’t he?” “Maybe. He’s lucky and skillful. Where do you draw the line? We’ll need plenty of both.” “Have you ever worked with him?” I asked. “No. Are you worried?” “Not exactly. But Brightside is no place to count on luck.” The Major laughed. “I don’t think we need to worry about McIvers. We understood each other when I talked up the trip to him and we’re going to need each other too much to do any fooling around.” He turned back to the supply list. “Meanwhile, let’s get this stuff listed and packed. We’ll need to cut weight sharply and our time is short. Sanderson says we should leave in three days.” Two days later, McIvers hadn’t arrived. The Major didn’t say much about it. Stone was getting edgy and so was I. We spent the second day studying charts of the Brightside, such as they were. The best available were pretty poor, taken from so far out that the detail dissolved into blurs on blow-up. They showed the biggest ranges of peaks and craters and faults, and that was all. 
Still, we could use them to plan a broad outline of our course. “This range here,” the Major said as we crowded around the board, “is largely inactive, according to Sanderson. But these to the south and west could be active. Seismograph tracings suggest a lot of activity in that region, getting worse down toward the equator—not only volcanic, but sub-surface shifting.” Stone nodded. “Sanderson told me there was probably constant surface activity.” The Major shrugged. “Well, it’s treacherous, there’s no doubt of it. But the only way to avoid it is to travel over the Pole, which would lose us days and offer us no guarantee of less activity to the west. Now we might avoid some if we could find a pass through this range and cut sharp east—” It seemed that the more we considered the problem, the further we got from a solution. We knew there were active volcanoes on the Brightside—even on the Darkside, though surface activity there was pretty much slowed down and localized. But there were problems of atmosphere on Brightside, as well. There was an atmosphere and a constant atmospheric flow from Brightside to Darkside. Not much—the lighter gases had reached escape velocity and disappeared from Brightside millennia ago—but there was CO2, and nitrogen, and traces of other heavier gases. There was also an abundance of sulfur vapor, as well as carbon disulfide and sulfur dioxide. The atmospheric tide moved toward the Darkside, where it condensed, carrying enough volcanic ash with it for Sanderson to estimate the depth and nature of the surface upheavals on Brightside from his samplings. The trick was to find a passage that avoided those upheavals as far as possible. But in the final analysis, we were barely scraping the surface. The only way we would find out what was happening where was to be there. Finally, on the third day, McIvers blew in on a freight rocket from Venus.
He’d missed the ship that the Major and I had taken by a few hours, and had conned his way to Venus in hopes of getting a hop from there. He didn’t seem too upset about it, as though this were his usual way of doing things and he couldn’t see why everyone should get so excited. He was a tall, rangy man with long, wavy hair prematurely gray, and the sort of eyes that looked like a climber’s—half-closed, sleepy, almost indolent, but capable of abrupt alertness. And he never stood still; he was always moving, always doing something with his hands, or talking, or pacing about. Evidently the Major decided not to press the issue of his arrival. There was still work to do, and an hour later we were running the final tests on the pressure suits. That evening, Stone and McIvers were thick as thieves, and everything was set for an early departure after we got some rest. “And that,” said Baron, finishing his drink and signaling the waiter for another pair, “was your first big mistake.” Peter Claney raised his eyebrows. “McIvers?” “Of course.” Claney shrugged, glanced at the small quiet tables around them. “There are lots of bizarre personalities around a place like this, and some of the best wouldn’t seem to be the most reliable at first glance. Anyway, personality problems weren’t our big problem right then. Equipment worried us first and route next.” Baron nodded in agreement. “What kind of suits did you have?” “The best insulating suits ever made,” said Claney. “Each one had an inner lining of a fiberglass modification, to avoid the clumsiness of asbestos, and carried the refrigerating unit and oxygen storage which we recharged from the sledges every eight hours. Outer layer carried a monomolecular chrome reflecting surface that made us glitter like Christmas trees. And we had a half-inch dead-air space under positive pressure between the two layers. 
Warning thermocouples, of course—at 770 degrees, it wouldn’t take much time to fry us to cinders if the suits failed somewhere.” “How about the Bugs?” “They were insulated, too, but we weren’t counting on them too much for protection.” “You weren’t!” Baron exclaimed. “Why not?” “We’d be in and out of them too much. They gave us mobility and storage, but we knew we’d have to do a lot of forward work on foot.” Claney smiled bitterly. “Which meant that we had an inch of fiberglass and a half-inch of dead air between us and a surface temperature where lead flowed like water and zinc was almost at melting point and the pools of sulfur in the shadows were boiling like oatmeal over a campfire.” Baron licked his lips. His fingers stroked the cool, wet glass as he set it down on the tablecloth. “Go on,” he said tautly. “You started on schedule?” “Oh, yes,” said Claney, “we started on schedule, all right. We just didn’t quite end on schedule, that was all. But I’m getting to that.” He settled back in his chair and continued. We jumped off from Twilight on a course due southeast with thirty days to make it to the Center of Brightside. If we could cross an average of seventy miles a day, we could hit Center exactly at perihelion, the point of Mercury’s closest approach to the Sun—which made Center the hottest part of the planet at the hottest it ever gets. The Sun was already huge and yellow over the horizon when we started, twice the size it appears on Earth. Every day that Sun would grow bigger and whiter, and every day the surface would get hotter. But once we reached Center, the job was only half done—we would still have to travel another two thousand miles to the opposite twilight zone. Sanderson was to meet us on the other side in the Laboratory’s scout ship, approximately sixty days from the time we jumped off. That was the plan, in outline. It was up to us to cross those seventy miles a day, no matter how hot it became, no matter what terrain we had to cross. 
Detours would be dangerous and time-consuming. Delays could cost us our lives. We all knew that. The Major briefed us on details an hour before we left. “Peter, you’ll take the lead Bug, the small one we stripped down for you. Stone and I will flank you on either side, giving you a hundred-yard lead. McIvers, you’ll have the job of dragging the sledges, so we’ll have to direct your course pretty closely. Peter’s job is to pick the passage at any given point. If there’s any doubt of safe passage, we’ll all explore ahead on foot before we risk the Bugs. Got that?” McIvers and Stone exchanged glances. McIvers said: “Jack and I were planning to change around. We figured he could take the sledges. That would give me a little more mobility.” The Major looked up sharply at Stone. “Do you buy that, Jack?” Stone shrugged. “I don’t mind. Mac wanted—” McIvers made an impatient gesture with his hands. “It doesn’t matter. I just feel better when I’m on the move. Does it make any difference?” “I guess it doesn’t,” said the Major. “Then you’ll flank Peter along with me. Right?” “Sure, sure.” McIvers pulled at his lower lip. “Who’s going to do the advance scouting?” “It sounds like I am,” I cut in. “We want to keep the lead Bug light as possible.” Mikuta nodded. “That’s right. Peter’s Bug is stripped down to the frame and wheels.” McIvers shook his head. “No, I mean the advance work. You need somebody out ahead—four or five miles, at least—to pick up the big flaws and active surface changes, don’t you?” He stared at the Major. “I mean, how can we tell what sort of a hole we may be moving into, unless we have a scout up ahead?” “That’s what we have the charts for,” the Major said sharply. “Charts! I’m talking about detail work. We don’t need to worry about the major topography. It’s the little faults you can’t see on the pictures that can kill us.” He tossed the charts down excitedly. 
“Look, let me take a Bug out ahead and work reconnaissance, keep five, maybe ten miles ahead of the column. I can stay on good solid ground, of course, but scan the area closely and radio back to Peter where to avoid the flaws. Then—” “No dice,” the Major broke in. “But why not? We could save ourselves days!” “I don’t care what we could save. We stay together. When we get to the Center, I want live men along with me. That means we stay within easy sight of each other at all times. Any climber knows that everybody is safer in a party than one man alone—any time, any place.” McIvers stared at him, his cheeks an angry red. Finally he gave a sullen nod. “Okay. If you say so.” “Well, I say so and I mean it. I don’t want any fancy stuff. We’re going to hit Center together, and finish the Crossing together. Got that?” McIvers nodded. Mikuta then looked at Stone and me and we nodded, too. “All right,” he said slowly. “Now that we’ve got it straight, let’s go.” It was hot. If I forget everything else about that trek, I’ll never forget that huge yellow Sun glaring down, without a break, hotter and hotter with every mile. We knew that the first few days would be the easiest and we were rested and fresh when we started down the long ragged gorge southeast of the Twilight Lab. I moved out first; back over my shoulder, I could see the Major and McIvers crawling out behind me, their pillow tires taking the rugged floor of the gorge smoothly. Behind them, Stone dragged the sledges. Even at only 30 per cent Earth gravity they were a strain on the big tractor, until the ski-blades bit into the fluffy volcanic ash blanketing the valley. We even had a path to follow for the first twenty miles. I kept my eyes pasted to the big polaroid binocs, picking out the track the early research teams had made out into the edge of Brightside. But in a couple of hours we rumbled past Sanderson’s little outpost observatory and the tracks stopped. 
We were in virgin territory and already the Sun was beginning to bite. We didn’t feel the heat so much those first days out. We saw it. The refrig units kept our skins at a nice comfortable seventy-five degrees Fahrenheit inside our suits, but our eyes watched that glaring Sun and the baked yellow rocks going past, and some nerve pathways got twisted up, somehow. We poured sweat as if we were in a superheated furnace. We drove eight hours and slept five. When a sleep period came due, we pulled the Bugs together into a square, threw up a light aluminum sun-shield and lay out in the dust and rocks. The sun-shield cut the temperature down sixty or seventy degrees, for whatever help that was. And then we ate from the forward sledge—sucking through tubes—protein, carbohydrates, bulk gelatin, vitamins. The Major measured water out with an iron hand, because we’d have drunk ourselves into nephritis in a week otherwise. We were constantly, unceasingly thirsty. Ask the physiologists and psychiatrists why—they can give you half a dozen interesting reasons—but all we knew, or cared about, was that it happened to be so. We didn’t sleep the first few stops, as a consequence. Our eyes burned in spite of the filters and we had roaring headaches, but we couldn’t sleep them off. We sat around looking at each other. Then McIvers would say how good a beer would taste, and off we’d go. We’d have murdered our grandmothers for one ice-cold bottle of beer. After a few driving periods, I began to get my bearings at the wheel. We were moving down into desolation that made Earth’s old Death Valley look like a Japanese rose garden. Huge sun-baked cracks opened up in the floor of the gorge, with black cliffs jutting up on either side; the air was filled with a barely visible yellowish mist of sulfur and sulfurous gases. It was a hot, barren hole, no place for any man to go, but the challenge was so powerful you could almost feel it. No one had ever crossed this land before and escaped. 
Those who had tried it had been cruelly punished, but the land was still there, so it had to be crossed. Not the easy way. It had to be crossed the hardest way possible: overland, through anything the land could throw up to us, at the most difficult time possible. Yet we knew that even the land might have been conquered before, except for that Sun. We’d fought absolute cold before and won. We’d never fought heat like this and won. The only worse heat in the Solar System was the surface of the Sun itself. Brightside was worth trying for. We would get it or it would get us. That was the bargain. I learned a lot about Mercury those first few driving periods. The gorge petered out after a hundred miles and we moved onto the slope of a range of ragged craters that ran south and east. This range had shown no activity since the first landing on Mercury forty years before, but beyond it there were active cones. Yellow fumes rose from the craters constantly; their sides were shrouded with heavy ash. We couldn’t detect a wind, but we knew there was a hot, sulfurous breeze sweeping in great continental tides across the face of the planet. Not enough for erosion, though. The craters rose up out of jagged gorges, huge towering spears of rock and rubble. Below were the vast yellow flatlands, smoking and hissing from the gases beneath the crust. Over everything was gray dust—silicates and salts, pumice and limestone and granite ash, filling crevices and declivities—offering a soft, treacherous surface for the Bug’s pillow tires. I learned to read the ground, to tell a covered fault by the sag of the dust; I learned to spot a passable crack, and tell it from an impassable cut. Time after time the Bugs ground to a halt while we explored a passage on foot, tied together with light copper cable, digging, advancing, digging some more until we were sure the surface would carry the machines. It was cruel work; we slept in exhaustion. But it went smoothly, at first. 
Too smoothly, it seemed to me, and the others seemed to think so, too. McIvers’ restlessness was beginning to grate on our nerves. He talked too much, while we were resting or while we were driving; wisecracks, witticisms, unfunny jokes that wore thin with repetition. He took to making side trips from the route now and then, never far, but a little further each time. Jack Stone reacted quite the opposite; he grew quieter with each stop, more reserved and apprehensive. I didn’t like it, but I figured that it would pass off after a while. I was apprehensive enough myself; I just managed to hide it better. And every mile the Sun got bigger and whiter and higher in the sky and hotter. Without our ultra-violet screens and glare filters we would have been blinded; as it was our eyes ached constantly and the skin on our faces itched and tingled at the end of an eight-hour trek. But it took one of those side trips of McIvers’ to deliver the penultimate blow to our already fraying nerves. He had driven down a side-branch of a long canyon running off west of our route and was almost out of sight in a cloud of ash when we heard a sharp cry through our earphones. I wheeled my Bug around with my heart in my throat and spotted him through the binocs, waving frantically from the top of his machine. The Major and I took off, lumbering down the gulch after him as fast as the Bugs could go, with a thousand horrible pictures racing through our minds.... We found him standing stock-still, pointing down the gorge and, for once, he didn’t have anything to say. It was the wreck of a Bug; an old-fashioned half-track model of the sort that hadn’t been in use for years. It was wedged tight in a cut in the rock, an axle broken, its casing split wide open up the middle, half-buried in a rock slide. A dozen feet away were two insulated suits with white bones gleaming through the fiberglass helmets. This was as far as Wyatt and Carpenter had gotten on their Brightside Crossing. 
On the fifth driving period out, the terrain began to change. It looked the same, but every now and then it felt different. On two occasions I felt my wheels spin, with a howl of protest from my engine. Then, quite suddenly, the Bug gave a lurch; I gunned my motor and nothing happened. I could see the dull gray stuff seeping up around the hubs, thick and tenacious, splattering around in steaming gobs as the wheels spun. I knew what had happened the moment the wheels gave and, a few minutes later, they chained me to the tractor and dragged me back out of the mire. It looked for all the world like thick gray mud, but it was a pit of molten lead, steaming under a soft layer of concealing ash. I picked my way more cautiously then. We were getting into an area of recent surface activity; the surface was really treacherous. I caught myself wishing that the Major had okayed McIvers’ scheme for an advanced scout; more dangerous for the individual, maybe, but I was driving blind now and I didn’t like it. One error in judgment could sink us all, but I wasn’t thinking much about the others. I was worried about me , plenty worried. I kept thinking, better McIvers should go than me. It wasn’t healthy thinking and I knew it, but I couldn’t get the thought out of my mind. It was a grueling eight hours and we slept poorly. Back in the Bug again, we moved still more slowly—edging out on a broad flat plateau, dodging a network of gaping surface cracks—winding back and forth in an effort to keep the machines on solid rock. I couldn’t see far ahead, because of the yellow haze rising from the cracks, so I was almost on top of it when I saw a sharp cut ahead where the surface dropped six feet beyond a deep crack. I let out a shout to halt the others; then I edged my Bug forward, peering at the cleft. It was deep and wide. I moved fifty yards to the left, then back to the right. 
There was only one place that looked like a possible crossing; a long, narrow ledge of gray stuff that lay down across a section of the fault like a ramp. Even as I watched it, I could feel the surface crust under the Bug trembling and saw the ledge shift over a few feet.
|
B. the Major's experience
|
How much is performance improved by disabling attention in certain heads?
|
### Introduction
Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN). One of the most popular Transformer-based models is BERT, which learns text representations using a bi-directional Transformer encoder pre-trained on the language modeling task BIBREF2. BERT-based architectures have produced new state-of-the-art performance on a range of NLP tasks of different nature, domain, and complexity, including question answering, sequence tagging, sentiment analysis, and inference. State-of-the-art performance is usually obtained by fine-tuning the pre-trained model on the specific task. In particular, BERT-based models are currently dominating the leaderboards for SQuAD BIBREF3 and GLUE benchmarks BIBREF4. However, the exact mechanisms that contribute to BERT's outstanding performance still remain unclear. We address this problem by selecting a set of linguistic features of interest and conducting a series of experiments that aim to provide insights about how well these features are captured by BERT. This paper makes the following contributions: We propose the methodology and offer the first detailed analysis of BERT's capacity to capture different kinds of linguistic information by encoding it in its self-attention weights. We present the evidence of BERT's overparametrization and suggest a counter-intuitive yet frustratingly simple way of improving its performance, showing absolute gains of up to 3.2%. ### Related work
There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect ones in a masked language modeling task, suggesting some ability to model subject-verb agreement. BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers. BIBREF8 investigated the transferability of contextualized word representations to a number of probing tasks requiring linguistic knowledge. Their findings suggest that (a) the middle layers of Transformer-based architectures are the most transferable to other tasks, and (b) higher layers of Transformers are not as task specific as the ones of RNNs. BIBREF9 argued that models using self-attention outperform CNN- and RNN-based models on a word sense disambiguation task due to their ability to extract semantic features from text. Our work contributes to the above discussion, but rather than examining representations extracted from different layers, we focus on the understanding of the self-attention mechanism itself, since it is the key feature of Transformer-based models. Another research direction that is relevant to our work is neural network pruning. BIBREF10 showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. BIBREF5 observed that the smaller version of BERT achieves better scores on a number of syntax-testing experiments than the larger one. 
BIBREF11 questioned the necessity of computation-heavy neural networks, proving that a simple yet carefully tuned BiLSTM without attention achieves the best or at least competitive results compared to more complex architectures on the document classification task. BIBREF12 presented more evidence of unnecessary complexity of the self-attention mechanism, and proposed a more lightweight and scalable dynamic convolution-based architecture that outperforms the self-attention baseline. These studies suggest a potential direction for future research, and are in good accordance with our observations. ### Methodology
We pose the following research questions: What are the common attention patterns, how do they change during fine-tuning, and how does that impact the performance on a given task? (Sec. SECREF17, SECREF30) What linguistic knowledge is encoded in self-attention weights of the fine-tuned models and what portion of it comes from the pre-trained BERT? (Sec. SECREF25, SECREF34, SECREF36) How different are the self-attention patterns of different heads, and how important are they for a given task? (Sec. SECREF39) The answers to these questions come from a series of experiments with the basic pre-trained or the fine-tuned BERT models, as will be discussed below. All the experiments with the pre-trained BERT were conducted using the model provided with the PyTorch implementation of BERT (bert-base-uncased, 12-layer, 768-hidden, 12-heads, 110M parameters). We chose this smaller version of BERT because it shows competitive, if not better, performance while having fewer layers and heads, which makes it more interpretable. We use the following subset of GLUE tasks BIBREF4 for fine-tuning: MRPC: the Microsoft Research Paraphrase Corpus BIBREF13 STS-B: the Semantic Textual Similarity Benchmark BIBREF14 SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15 QQP: the Quora Question Pairs dataset RTE: the Recognizing Textual Entailment datasets QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3 MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16 Please refer to the original GLUE paper for details on the QQP and RTE datasets BIBREF4. We excluded two tasks: CoLa and the Winograd Schema Challenge. The latter is excluded due to the small size of the dataset. As for CoLa (the task of predicting linguistic acceptability judgments), GLUE authors report that the human performance is only 66.4, which is explained by the problems with the underlying methodology BIBREF17. 
Note also that CoLa is not included in the upcoming version of GLUE BIBREF18. All fine-tuning experiments follow the parameters reported in the original study (a batch size of 32 and 3 epochs, see devlin2018bert). In all these experiments, for a given input, we extract self-attention weights for each head in every layer. This results in a 2D float array of shape $L\times L$, where $L$ is the length of an input sequence. We will refer to such arrays as self-attention maps. Analysis of individual self-attention maps allows us to determine which target tokens are attended to the most as the input is processed token by token. We use these experiments to analyze how BERT processes different kinds of linguistic information, including the processing of different parts of speech (nouns, pronouns, and verbs), syntactic roles (objects, subjects), semantic relations, and negation tokens. ### Experiments
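As a concrete reference for the self-attention maps described above: each one is simply the matrix of scaled dot-product attention weights produced by one head. Below is a minimal numpy sketch for a single head; the random query/key matrices are toy stand-ins, not weights extracted from BERT.

```python
import numpy as np

def self_attention_map(q, k):
    """One head's L x L self-attention map: softmax(Q K^T / sqrt(d))."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (L, L) raw scores
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)       # each row sums to 1

rng = np.random.default_rng(0)
L, d = 6, 8                                        # toy sequence length / head dim
attn = self_attention_map(rng.standard_normal((L, d)),
                          rng.standard_normal((L, d)))
print(attn.shape)                                  # (6, 6)
```

Row $i$ of such a map is the distribution of attention that token $i$ pays to every token in the sequence, which is exactly what the pattern analyses inspect.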
In this section, we present the experiments conducted to address the above research questions. ### Experiments ::: BERT's self-attention patterns
Manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of self-attention map types that are repeatedly encoded across different heads. Consistent with previous observations, we identified five frequently occurring patterns, examples of which are shown in fig:atttypes: Vertical: mainly corresponds to attention to special BERT tokens [CLS] and [SEP]; Diagonal: formed by the attention to the previous/following tokens; Vertical+Diagonal: a mix of the previous two types; Block: intra-sentence attention for the tasks with two distinct sentences (such as RTE or MRPC); Heterogeneous: highly variable depending on the specific input and cannot be characterized by a distinct structure. Whereas the attention to the special tokens is important for cross-sentence reasoning, and the attention to the previous/following token comes from language model pre-training, we hypothesize that the last of the listed types is more likely to capture interpretable linguistic features, necessary for language understanding. To get a rough estimate of the percentage of attention heads that may capture linguistically interpretable information, we manually annotated around 400 sample self-attention maps as belonging to one of the five classes. The self-attention maps were obtained by feeding random input examples from selected tasks into the corresponding fine-tuned BERT model. This produced a somewhat unbalanced dataset, in which the “Vertical” class accounted for 30% of all samples. We then trained a convolutional neural network with 8 convolutional layers and ReLU activation functions to classify input maps into one of these classes. This model achieved an F1 score of 0.86 on the annotated dataset. We used this classifier to estimate the proportion of different self-attention patterns for the target GLUE tasks using up to 1000 examples (where available) from each validation set. 
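The classifier used here is a trained CNN, but as a much cruder, purely illustrative stand-in, the two easiest pattern types can already be separated by where a map concentrates its mass: on the diagonal (previous/following token) or in a single column (a special token such as [SEP]). The heuristic and its threshold below are hypothetical, not the paper's model.

```python
import numpy as np

def crude_pattern_label(attn, threshold=0.5):
    """Toy heuristic: 'diagonal' if most mass sits on the diagonal,
    'vertical' if most mass sits in one column, else 'other'.
    A rough stand-in for the trained CNN classifier, not its equivalent."""
    L = attn.shape[0]
    diag_mass = np.trace(attn) / L          # mean attention to the same position
    col_mass = attn.sum(axis=0).max() / L   # heaviest single column ("stripe")
    if diag_mass > threshold:
        return "diagonal"
    if col_mass > threshold:
        return "vertical"
    return "other"

diag = np.eye(4) * 0.9 + 0.025                    # near-diagonal toy map
vert = np.full((4, 4), 0.05); vert[:, 0] = 0.85   # stripe on token 0 ([CLS], say)
print(crude_pattern_label(diag), crude_pattern_label(vert))  # diagonal vertical
```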
### Experiments ::: BERT's self-attention patterns ::: Results
fig:attentionbydataset shows that the self-attention map types described above are consistently repeated across different heads and tasks. While a large portion of encoded information corresponds to attention to the previous/following token, to the special tokens, or a mixture of the two (the first three classes), the estimated upper bound on all heads in the “Heterogeneous” category (i.e. the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task. We would like to emphasize that this only gives the upper bound on the percentage of attention heads that could potentially capture meaningful structural information beyond adjacency and separator tokens. ### Experiments ::: Relation-specific heads in BERT
In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns. While a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics. Specifically, we focused on whether BERT captures FrameNet's relations between frame-evoking lexical units (predicates) and core frame elements BIBREF19, and whether the links between them produce higher attention weights in certain specific heads. We used pre-trained BERT in these experiments. The data for this experiment comes from FrameNet BIBREF19, a database that contains frame annotations for example sentences for different lexical units. Frame elements correspond to semantic roles for a given frame, for example, “buyer”, “seller”, and “goods” for the “Commercial_transaction” frame evoked by the words “sell” and “spend”, or “topic” and “text” for the “Scrutiny” frame evoked by the verb “address”. fig:framenet shows an example of such annotation. We extracted sample sentences for every lexical unit in the database and identified the corresponding core frame elements. Annotated elements in FrameNet may be rather long, so we considered only the sentences with frame elements of three tokens or fewer. Since each sentence is annotated for only one frame, semantic links from other frames can exist between unmarked elements. We therefore filter out all the sentences longer than 12 tokens, since shorter sentences are less likely to evoke multiple frames. To establish whether BERT attention captures semantic relations that do not simply correspond to the previous/following token, we exclude sentences where the linked objects are less than two tokens apart. This leaves us with 473 annotated sentences. For each of these sentences, we obtain pre-trained BERT's attention weights for each of the 144 heads. 
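Given these per-sentence, per-head attention arrays, scoring each head against the annotated links reduces to a max over the linked token pairs followed by an average over sentences. A sketch of that computation follows; the random arrays and link lists are toy data, not FrameNet annotations.

```python
import numpy as np

def score_heads(attn_per_sentence, links_per_sentence):
    """Average over sentences of the per-head max attention weight
    among annotated (source, target) token pairs.

    attn_per_sentence:  list of (n_heads, L, L) arrays
    links_per_sentence: list of [(i, j), ...] token-index pairs
    """
    n_heads = attn_per_sentence[0].shape[0]
    totals = np.zeros(n_heads)
    for attn, links in zip(attn_per_sentence, links_per_sentence):
        # max absolute weight among the linked pairs, per head
        totals += np.max([np.abs(attn[:, i, j]) for i, j in links], axis=0)
    return totals / len(attn_per_sentence)

rng = np.random.default_rng(1)
attns = [rng.random((2, 5, 5)) for _ in range(3)]  # 3 sentences, 2 heads, L=5
links = [[(0, 3)], [(1, 4)], [(0, 4), (2, 1)]]     # toy "semantic links"
scores = score_heads(attns, links)
print(scores.shape)                                # (2,): one score per head
```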
For every head, we return the maximum absolute attention weight among those token pairs that correspond to the annotated semantic link contained within a given sentence. We then average the derived scores over all the collected examples. This strategy allows us to identify the heads that prioritize the features correlated with frame-semantic relations within a sentence. ### Experiments ::: Relation-specific heads in BERT ::: Results
The heatmap of averaged attention scores over all collected examples (fig:framenetresults) suggests that 2 out of 144 heads tend to attend to the parts of the sentence that FrameNet annotators identified as core elements of the same frame. fig:framenetresults shows an example of this attention pattern for these two heads. Both show high attention weight for “he” while processing “agitated” in the sentence “He was becoming agitated" (the frame “Emotion_directed”). ### Experiments ::: Change in self-attention patterns after fine-tuning
Fine-tuning has a huge effect on performance, and this section attempts to find out why. To study how attention per head changes on average for each of the target GLUE tasks, we calculate cosine similarity between pre-trained and fine-tuned BERT's flattened arrays of attention weights. We average the derived similarities over all the development set examples. To evaluate the contribution of pre-trained BERT to overall performance on the tasks, we consider two configurations of weights initialization, namely, pre-trained BERT weights and weights randomly sampled from a normal distribution. ### Experiments ::: Change in self-attention patterns after fine-tuning ::: Results
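Concretely, the per-head change metric from the previous section is just cosine similarity between the flattened maps. A sketch follows; the random arrays are toys standing in for pre-trained and fine-tuned attention weights, not values from the actual models.

```python
import numpy as np

def attention_cosine(pre, tuned):
    """Cosine similarity between two flattened attention maps for one head."""
    a, b = pre.ravel(), tuned.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
pre = rng.random((12, 12))                 # toy L x L map for one head
unchanged = attention_cosine(pre, pre)     # a head fine-tuning left alone
changed = attention_cosine(pre, rng.random((12, 12)))
print(round(unchanged, 3))                 # 1.0
```

A similarity near 1.0 means fine-tuning barely altered that head's attention; lower values flag the heads (and layers) that changed most.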
fig:cosine shows that for all the tasks except QQP, it is the last two layers that undergo the largest changes compared to the pre-trained BERT model. At the same time, tab:glue-results shows that fine-tuned BERT outperforms pre-trained BERT by a significant margin on all the tasks (with an average of 35.9 points of absolute difference). This leads us to conclude that the last two layers encode the task-specific features that account for the score gains, while earlier layers capture more fundamental and low-level information used in fine-tuned models. Randomly initialized BERT consistently produces lower scores than the ones achieved with pre-trained BERT. In fact, for some tasks (STS-B and QNLI), initialization with random weights gives worse performance than that of pre-trained BERT alone without fine-tuning. This suggests that pre-trained BERT does indeed contain linguistic knowledge that is helpful for solving these GLUE tasks. These results are consistent with similar studies, e.g., BIBREF20's results on fine-tuning a convolutional neural network pre-trained on ImageNet or BIBREF21's results on transfer learning for medical natural language inference. ### Experiments ::: Attention to linguistic features
In this experiment, we investigate whether fine-tuning BERT for a given task creates self-attention patterns which emphasize specific linguistic features. In this case, certain kinds of tokens may get high attention weights from all the other tokens in the sentence, producing vertical stripes on the corresponding attention maps (fig:atttypes). We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, and negation words, and special BERT tokens across the tasks. For every head, we compute the sum of self-attention weights assigned to the token of interest from each input token. Since the weights depend on the number of tokens in the input sequence, this sum is normalized by sequence length. This allows us to aggregate the weights for this feature across different examples. If there are multiple tokens of the same type (e.g. several nouns or negations), we take the maximum value. We disregard input sentences that do not contain a given feature. For each investigated feature, we calculate this aggregated attention score for each head in every layer and build a map in order to detect the heads potentially responsible for this feature. We then compare the obtained maps to the ones derived using the pre-trained BERT model. This comparison enables us to determine if a particular feature is important for a specific task and whether it contributes to some tasks more than to others. ### Experiments ::: Attention to linguistic features ::: Results
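For concreteness, the aggregated score defined above is a length-normalized column sum: column $j$ of an attention map holds the weight every input token assigns to token $j$, and the score takes the max over occurrences of the feature. A toy sketch (the map and feature position are hypothetical, not task data):

```python
import numpy as np

def feature_attention_score(attn, feature_positions):
    """Length-normalized sum of attention flowing to a feature token;
    max over occurrences when the feature appears more than once."""
    L = attn.shape[0]
    # column j = weight each input token assigns to token j
    return max(attn[:, j].sum() / L for j in feature_positions)

attn = np.full((4, 4), 0.1)
attn[:, 2] = 0.7                            # "vertical stripe": all attend token 2
print(round(feature_attention_score(attn, [2]), 3))   # 0.7
```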
Contrary to our initial hypothesis that the vertical attention pattern may be motivated by linguistically meaningful features, we found that it is associated predominantly, if not exclusively, with attention to [CLS] and [SEP] tokens (see Figure FIGREF32). Note that the absolute [SEP] weights for the SST-2 sentiment analysis task are greater than for other tasks, which is explained by the fact that there is only one sentence in the model inputs, i.e. only one [SEP] token instead of two. There is also a clear tendency for earlier layers to pay attention to [CLS] and for later layers to [SEP], and this trend is consistent across all the tasks. We did detect heads that paid increased (compared to the pre-trained BERT) attention to nouns and direct objects of the main predicates (on the MRPC, RTE and QQP tasks), and negation tokens (on the QNLI task), but the attention weights of such tokens were negligible compared to [CLS] and [SEP]. Therefore, we believe that the striped attention maps generally come from BERT pre-training tasks rather than from task-specific linguistic reasoning. ### Experiments ::: Token-to-token attention
To complement the experiments in Sec. SECREF34 and SECREF25, in this section, we investigate the attention patterns between tokens in the same sentence, i.e. whether any of the tokens are particularly important while a given token is being processed. We were interested specifically in the verb-subject relation and the noun-pronoun relation. Also, since BERT uses the representation of the [CLS] token in the last layer to make the prediction, we used the features from the experiment in Sec. SECREF34 in order to check if they get higher attention weights while the model is processing the [CLS] token. ### Experiments ::: Token-to-token attention ::: Results
Our token-to-token attention experiments for detecting heads that prioritize noun-pronoun and verb-subject links resulted in a set of potential head candidates that coincided with diagonally structured attention maps. We believe that this happened due to the inherent property of English syntax where the dependent elements frequently appear close to each other, so it is difficult to distinguish such relations from the previous/following token attention coming from language model pre-training. Our investigation of attention distribution for the [CLS] token in the output layer suggests that for most tasks, with the exception of STS-B, RTE and QNLI, the [SEP] gets attended the most, as shown in fig:cls. Based on manual inspection, for the mentioned remaining tasks, the greatest attention weights correspond to the punctuation tokens, which are in a sense similar to [SEP]. ### Experiments ::: Disabling self-attention heads
Since there does seem to be a certain degree of specialization for different heads, we investigated the effects of disabling different heads in BERT and the resulting effects on task performance. Since BERT relies heavily on the learned attention weights, we define disabling a head as modifying the attention values of a head to be constant $a = \frac{1}{L}$ for every token in the input sentence, where $L$ is the length of the sentence. Thus, every token receives the same attention, effectively disabling the learned attention patterns while maintaining the information flow of the original model. Note that by using this framework, we can disable an arbitrary number of heads, ranging from a single head per model to the whole layer or multiple layers. ### Experiments ::: Disabling self-attention heads ::: Results
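Disabling a head as defined above amounts to overwriting its attention matrix with the constant $a = \frac{1}{L}$; a minimal NumPy sketch (a hypothetical helper, not the original implementation):

```python
import numpy as np

def disable_head(attn):
    """Replace a head's learned attention with uniform attention.

    attn: (L, L) matrix of attention weights for one head.
    Every token then attends to all L tokens with the same weight 1/L,
    which removes the learned pattern while keeping the information
    flow of the original model intact (rows still sum to 1).
    """
    L = attn.shape[-1]
    return np.full_like(attn, 1.0 / L)
```

Applying this to any subset of heads, from a single head up to all heads of one or more layers, yields the ablations reported below.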
Our experiments suggest that certain heads have a detrimental effect on the overall performance of BERT, and this trend holds for all the chosen tasks. Unexpectedly, disabling some heads leads not to a drop in accuracy, as one would expect, but to an increase in performance. This effect varies across tasks and datasets: while disabling some heads improves the results, disabling others hurts them. Importantly, for every task and dataset there are heads whose removal increases performance. The gain from disabling a single head is different for different tasks, ranging from the minimum absolute gain of 0.1% for STS-B, to the maximum of 1.2% for MRPC (see fig:disableheadsall). In fact, for some tasks, such as MRPC and RTE, disabling a random head gives, on average, an increase in performance. Furthermore, disabling a whole layer, that is, all 12 heads in a given layer, also improves the results. fig:disablelayers shows the resulting model performance on the target GLUE tasks when different layers are disabled. Notably, disabling the first layer in the RTE task gives a significant boost, resulting in an absolute performance gain of 3.2%. However, the effect of this operation varies across tasks, and for QNLI and MNLI, it produces a performance drop of up to 0.2%. ### Discussion
In general, our results suggest that even the smaller base BERT model is significantly overparametrized. This is supported by the discovery of repeated self-attention patterns in different heads, as well as the fact that disabling both single and multiple heads is not detrimental to model performance and in some cases even improves it. We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance. The 2 out of 144 heads that seem to be "responsible" for these relations (see Section SECREF25) do not appear to be important in any of the GLUE tasks: disabling either one does not lead to a drop in accuracy. This implies that fine-tuned BERT does not rely on this piece of semantic information and prioritizes other features instead. For instance, we noticed that both STS-B and RTE fine-tuned models rely on attention in the same pair of heads (head 1 in the fourth layer, and head 12 in the second layer), as shown in Figure FIGREF37. We manually checked the attention maps in those heads for a set of random inputs, and established that both of them have high weights for words that appear in both sentences of the input examples. This most likely means that word-by-word comparison of the two sentences provides a solid strategy for making a classification prediction for STS-B and RTE. Unfortunately, we were not able to provide a conceptually similar interpretation of heads important for other tasks. ### Conclusion
In this work, we proposed a set of methods for analyzing the self-attention mechanisms of BERT, comparing attention patterns for the pre-trained and fine-tuned versions of BERT. Our most surprising finding is that, although attention is BERT's key underlying mechanism, the model can benefit from attention "disabling". Moreover, we demonstrated that there is redundancy in the information encoded by different heads and the same patterns get consistently repeated regardless of the target task. We believe that these two findings together suggest a further direction for research on BERT interpretation, namely, model pruning and finding an optimal sub-architecture reducing data repetition. Another direction for future work is to study self-attention patterns in a different language. We think that it would allow us to disentangle attention maps potentially encoding linguistic information from heads that use simple heuristics like attending to the following/previous tokens.
Figure 1: Typical self-attention classes used for training a neural network. Both axes on every image represent BERT tokens of an input example, and colors denote absolute attention weights (darker colors stand for greater weights). The first three types are most likely associated with language model pre-training, while the last two potentially encode semantic and syntactic information.
Figure 2: Estimated percentages of the identified self-attention classes for each of the selected GLUE tasks.
Table 1: GLUE task performance of BERT models with different initialization. We report the scores on the validation, rather than test data, so these results differ from the original BERT paper.
Figure 3: Detection of pre-trained BERT’s heads that encode information correlated to semantic links in the input text. Two heads (middle) demonstrate their ability to capture semantic relations. For a random annotated FrameNet example (bottom) full attention maps with a zoom in the target token attention distribution are shown (leftmost and rightmost).
Figure 4: FrameNet annotation example for the “address” lexical unit with two core frame elements of different types annotated.
Figure 5: Per-head cosine similarity between pre-trained BERT’s and fine-tuned BERT’s self-attention maps for each of the selected GLUE tasks, averaged over validation dataset examples. Darker colors correspond to greater differences.
Figure 6: Per-task attention weights to the [SEP] (top row) and the [CLS] (bottom row) tokens averaged over input sequences’ lengths and over dataset examples. Darker colors correspond to greater absolute weights.
Figure 7: Per-task attention weights corresponding to the [CLS] token averaged over input sequences’ lengths and over dataset examples, and extracted from the final layer. Darker colors correspond to greater absolute weights.
Figure 8: Performance of the model while disabling one head at a time. The orange line indicates the baseline performance with no disabled heads. Darker colors correspond to greater performance scores.
Figure 9: Performance of the model while disabling one layer (that is, all 12 heads in this layer) at a time. The orange line indicates the baseline performance with no disabled layers. Darker colors correspond to greater performance scores.
### Introduction
Natural language inference (NLI), also known as recognizing textual entailment (RTE), has been proposed as a benchmark task for natural language understanding. Given a premise $P$ and a hypothesis $H$ , the task is to determine whether the premise semantically entails the hypothesis BIBREF0 . A number of recent works attempt to test and analyze what type of inferences an NLI model may be performing, focusing on various types of lexical inferences BIBREF1 , BIBREF2 , BIBREF3 and logical inferences BIBREF4 , BIBREF5 . Concerning logical inferences, monotonicity reasoning BIBREF6 , BIBREF7 , which is a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures. Consider the examples in (1) and (2).

(1a) All [ workers $\leavevmode {\color {blue!80!black}\downarrow }$ ] [joined for a French dinner $\leavevmode {\color {red!80!black}\uparrow }$ ]
(1b) All workers joined for a dinner
(1c) All new workers joined for a French dinner

(2a) Not all [new workers $\leavevmode {\color {red!80!black}\uparrow }$ ] joined for a dinner
(2b) Not all workers joined for a dinner

An upward entailing context (shown by [... $\leavevmode {\color {red!80!black}\uparrow }$ ]) allows an inference from (1a) to (1b), where French dinner is replaced by the more general concept dinner. On the other hand, a downward entailing context (shown by [... $\leavevmode {\color {blue!80!black}\downarrow }$ ]) allows an inference from (1a) to (1c), where workers is replaced by the more specific concept new workers. Interestingly, the direction of monotonicity can be reversed again by embedding yet another downward entailing context (e.g., not in (2a)), as witnessed by the fact that (2a) entails (2b).
To properly handle both directions of monotonicity, NLI models must detect monotonicity operators (e.g., all, not) and their arguments from the syntactic structure. For previous datasets containing monotonicity inference problems, FraCaS BIBREF8 and the GLUE diagnostic dataset BIBREF9 are manually-curated datasets for testing a wide range of linguistic phenomena. However, monotonicity problems are limited to very small sizes (FraCaS: 37/346 examples and GLUE: 93/1650 examples). The limited syntactic patterns and vocabularies in previous test sets are obstacles in accurately evaluating NLI models on monotonicity reasoning. To tackle this issue, we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications (Section "Dataset" ). Compared with manual or automatic construction, we can collect naturally-occurring examples by crowdsourcing and well-designed ones from linguistics publications. To enable the evaluation of skills required for monotonicity reasoning, we annotate each example in our dataset with linguistic tags associated with monotonicity reasoning. We measure the performance of state-of-the-art NLI models on monotonicity reasoning and investigate their generalization ability in upward and downward reasoning (Section "Results and Discussion" ). The results show that all models trained with SNLI BIBREF4 and MultiNLI BIBREF10 perform worse on downward inferences than on upward inferences. In addition, we analyzed the performance of models trained with an automatically created monotonicity dataset, HELP BIBREF11 . The analysis with monotonicity data augmentation shows that models tend to perform better in the same direction of monotonicity with the training set, while they perform worse in the opposite direction. 
This indicates that the accuracy on monotonicity reasoning depends solely on the majority direction in the training set, and models might lack the ability to capture the structural relations between monotonicity operators and their arguments. ### Monotonicity
As an example of a monotonicity inference, consider the following example with the determiner every; here the premise $P$ entails the hypothesis $H$ .

$P$ : Every [ $_{\scriptsize \mathsf {NP}}$ person $\leavevmode {\color {blue!80!black}\downarrow }$ ] [ $_{\scriptsize \mathsf {VP}}$ bought a movie ticket $\leavevmode {\color {red!80!black}\uparrow }$ ]
$H$ : Every young person bought a ticket

Every is downward entailing in the first argument ( $\mathsf {NP}$ ) and upward entailing in the second argument ( $\mathsf {VP}$ ), and thus the term person can be made more specific by adding modifiers (person $\sqsupseteq $ young person), replacing it with its hyponym (person $\sqsupseteq $ spectator), or adding conjunction (person $\sqsupseteq $ person and alien). On the other hand, the term buy a ticket can be made more general by removing modifiers (bought a movie ticket $\sqsubseteq $ bought a ticket), replacing it with its hypernym (bought a movie ticket $\sqsubseteq $ bought a show ticket), or adding disjunction (bought a movie ticket $\sqsubseteq $ bought or sold a movie ticket). Table 1 shows determiners modeled as binary operators and their polarities with respect to the first and second arguments. There are various types of downward operators, not limited to determiners (see Table 2 ). As shown in the following example, if a propositional object is embedded in a downward monotonic context (e.g., when), the polarity of words over its scope can be reversed.
$P$ : When [every [ $_{\scriptsize \mathsf {NP}}$ young person $\leavevmode {\color {red!80!black}\uparrow }$ ] [ $_{\scriptsize \mathsf {VP}}$ bought a ticket $\leavevmode {\color {blue!80!black}\downarrow }$ ]], [that shop was open]
$H$ : When [every [ $_{\scriptsize \mathsf {NP}}$ person] [ $_{\scriptsize \mathsf {VP}}$ bought a movie ticket]], [that shop was open]

Thus, the polarity ( $\leavevmode {\color {red!80!black}\uparrow }$ and $\leavevmode {\color {blue!80!black}\downarrow }$ ), where the replacement with more general (specific) phrases licenses entailment, needs to be determined by the interaction of monotonicity properties and syntactic structures; the polarity of each constituent is calculated based on a monotonicity operator of functional expressions (e.g., every, when) and their function-term relations. ### Human-oriented dataset
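Polarity computation of this kind, which the dataset construction below relies on, can be illustrated on a toy operator tree: each operator contributes a polarity per argument position, and embedding under a downward context flips the sign. This is a simplified sketch with hypothetical structures, not the actual interface of the ccg2mono system:

```python
# Monotonicity signatures of a few operators: the polarity each argument
# position receives ('+' for upward, '-' for downward). Illustrative subset.
SIGNATURES = {
    "every": ["-", "+"],   # downward in the NP, upward in the VP
    "some":  ["+", "+"],
    "no":    ["-", "-"],
    "not":   ["-"],
    "when":  ["-", "+"],   # downward inside the when-clause
}

def flip(p):
    return "-" if p == "+" else "+"

def polarities(tree, context="+"):
    """Assign a polarity to every leaf of a toy (operator, arg, ...) tree.

    A '-' context reverses the polarity an operator contributes, which is
    how embedding under another downward operator restores upward
    entailment. Returns a {leaf: polarity} mapping.
    """
    if isinstance(tree, str):
        return {tree: context}
    op, *args = tree
    out = {}
    for arg, pol in zip(args, SIGNATURES[op]):
        child = pol if context == "+" else flip(pol)
        out.update(polarities(arg, child))
    return out
```

On the example above, the every-clause embedded under when gets its polarities reversed: the NP becomes upward entailing and the VP downward entailing.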
To create monotonicity inference problems, we should satisfy three requirements: (a) detect the monotonicity operators and their arguments; (b) based on the syntactic structure, induce the polarity of the argument positions; and (c) replace the phrase in the argument position with a more general or specific phrase in natural and various ways (e.g., by using lexical knowledge or logical connectives). For (a) and (b), we first conduct polarity computation on a syntactic structure for each sentence, and then select premises involving upward/downward expressions. For (c), we use crowdsourcing to narrow or broaden the arguments. The motivation for using crowdsourcing is to collect naturalistic monotonicity inference problems that include various expressions. One problem here is that it is unclear how to instruct workers to create monotonicity inference problems without knowledge of natural language syntax and semantics. We must make the tasks simple for workers to comprehend so that they can provide sound judgements. Moreover, recent studies BIBREF12 , BIBREF3 , BIBREF13 point out that previous crowdsourced datasets, such as SNLI BIBREF14 and MultiNLI BIBREF10 , include hidden biases. As these previous datasets are motivated by approximated entailments, workers are asked to freely write hypotheses given a premise, which does not strictly restrict them to creating logically complex inferences. Taking these concerns into consideration, we designed two-step tasks to be performed via crowdsourcing for creating a monotonicity test set; (i) a hypothesis creation task and (ii) a validation task. The task (i) is to create a hypothesis by making some polarized part of an original sentence more specific. Instead of writing a complete sentence from scratch, workers are asked to rewrite only a relatively short sentence. By restricting workers to rewrite only a polarized part, we can effectively collect monotonicity inference examples.
The task (ii) is to annotate an entailment label for the premise-hypothesis pair generated in (i). Figure 1 summarizes the overview of our human-oriented dataset creation. We used the crowdsourcing platform Figure Eight for both tasks. As a resource, we use declarative sentences with more than five tokens from the Parallel Meaning Bank (PMB) BIBREF15 . The PMB contains syntactically correct sentences annotated with its syntactic category in Combinatory Categorial Grammar (CCG; BIBREF16 , BIBREF16 ) format, which is suitable for our purpose. To get a whole CCG derivation tree, we parse each sentence with the state-of-the-art CCG parser, depccg BIBREF17 . Then, we add a polarity to every constituent of the CCG tree by the polarity computation system ccg2mono BIBREF18 and make the polarized part a blank field. We ran a trial rephrasing task on 500 examples and detected 17 expressions that were too general and thus difficult to rephrase in a natural way (e.g., every one, no time). We removed examples involving such expressions. To collect more downward inference examples, we select examples involving determiners in Table 1 and downward operators in Table 2 . As a result, we selected 1,485 examples involving expressions having arguments with upward monotonicity and 1,982 examples involving expressions having arguments with downward monotonicity. We present crowdworkers with a sentence whose polarized part is underlined, and ask them to replace the underlined part with more specific phrases in three different ways. In the instructions, we showed examples rephrased in various ways: by adding modifiers, by adding conjunction phrases, and by replacing a word with its hyponyms. Workers were paid US$0.05 for each set of substitutions, and each set was assigned to three workers. To remove low-quality examples, we set the minimum time it should take to complete each set to 200 seconds. Entry to our task was restricted to workers from native English-speaking countries.
128 workers contributed to the task, and we created 15,339 hypotheses (7,179 upward examples and 8,160 downward examples). The gold label of each premise-hypothesis pair created in the previous task is automatically determined by monotonicity calculus. That is, a downward inference pair is labeled as entailment, while an upward inference pair is labeled as non-entailment. However, workers sometimes provided some ungrammatical or unnatural sentences such as the case where a rephrased phrase does not satisfy the selectional restrictions (e.g., original: Tom doesn't live in Boston, rephrased: Tom doesn't live in yes), making it difficult to judge their entailment relations. Thus, we performed an annotation task to ensure accurate labeling of gold labels. We asked workers about the entailment relation of each premise-hypothesis pair as well as how natural it is. Worker comprehension of an entailment relation directly affects the quality of inference problems. To avoid worker misunderstandings, we showed workers the following definitions of labels and five examples for each label: entailment: the case where the hypothesis is true under any situation that the premise describes. non-entailment: the case where the hypothesis is not always true under a situation that the premise describes. unnatural: the case where either the premise and/or the hypothesis is ungrammatical or does not make sense. Workers were paid US$0.04 for each question, and each question was assigned to three workers. To collect high-quality annotation results, we imposed ten test questions on each worker, and removed workers who gave more than three wrong answers. We also set the minimum time it should take to complete each question to 200 seconds. 1,237 workers contributed to this task, and we annotated gold labels of 15,339 premise-hypothesis pairs. Table 3 shows the numbers of cases where answers matched gold labels automatically determined by monotonicity calculus. 
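The automatic gold labeling by monotonicity calculus, together with the label reversal used when a pair is swapped to balance the label distribution, reduces to two small rules (hypothetical helper names, written out here for concreteness):

```python
def gold_label(polarity):
    """Gold label under monotonicity calculus for the rephrasing task.

    Workers replace the polarized part with a *more specific* phrase, so
    a downward-entailing position licenses the inference (entailment)
    while an upward-entailing one does not.
    """
    return "entailment" if polarity == "down" else "non-entailment"

def swapped_label(label):
    """Label of the pair obtained by swapping premise and hypothesis,
    used to make the label distribution symmetric."""
    return "non-entailment" if label == "entailment" else "entailment"
```

Only pairs that are strict hyponym/hypernym replacements admit this swap, which is why the paraphrase and non-constituent cases below are excluded from it.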
This table shows that there exist inference pairs whose labels are difficult even for humans to determine; there are 3,354 premise-hypothesis pairs whose gold labels as annotated by polarity computations match with those answered by all workers. We selected these naturalistic monotonicity inference pairs as candidates for the final test set. To make the distribution of gold labels symmetric, we checked these pairs to determine if we can swap the premise and the hypothesis, reverse their gold labels, and create another monotonicity inference pair. In some cases, shown below, the gold label cannot be reversed if we swap the premise and the hypothesis. In the first example, child and kid are not hyponyms but synonyms, and the premise $P$ and the hypothesis $H$ are paraphrases.

$P$ : Tom is no longer a child
$H$ : Tom is no longer a kid

These cases are not strict downward inference problems, in the sense that a phrase is not replaced by its hyponym/hypernym. Consider the second example.

$P$ : The moon has no atmosphere
$H$ : The moon has no atmosphere, and the gravity force is too low

The hypothesis $H$ was created by asking workers to make atmosphere in the premise $P$ more specific. However, the additional phrase and the gravity force is too low does not form a constituent with atmosphere. Thus, such examples are not strict downward monotone inferences. In both kinds of cases, we do not swap the premise and the hypothesis. In the end, we collected 4,068 examples from crowdsourced datasets. ### Linguistics-oriented dataset
We also collect monotonicity inference problems from previous manually curated datasets and linguistics publications. The motivation is that previous linguistics publications related to monotonicity reasoning are expected to contain well-designed inference problems, which might be challenging problems for NLI models. We collected 1,184 examples from 11 linguistics publications BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 . Regarding previous manually-curated datasets, we collected 93 examples for monotonicity reasoning from the GLUE diagnostic dataset, and 37 single-premise problems from FraCaS. Both the GLUE diagnostic dataset and FraCaS categorize problems by their types of monotonicity reasoning, but we found that each dataset has different classification criteria. Thus, following GLUE, we reclassified problems into three types of monotone reasoning (upward, downward, and non-monotone) by checking if they include (i) the target monotonicity operator in both the premise and the hypothesis and (ii) the phrase replacement in its argument position. In the GLUE diagnostic dataset, there are several problems whose gold labels are contradiction. We regard them as non-entailment in that the premise does not semantically entail the hypothesis. ### Statistics
We merged the human-oriented dataset created via crowdsourcing and the linguistics-oriented dataset created from linguistics publications to create the current version of the monotonicity entailment dataset (MED). Table 4 shows some examples from the MED dataset. We can see that our dataset contains various phrase replacements (e.g., conjunction, relative clauses, and comparatives). Table 5 reports the statistics of the MED dataset, including 5,382 premise-hypothesis pairs (1,820 upward examples, 3,270 downward examples, and 292 non-monotone examples). Regarding non-monotone problems, gold labels are always non-entailment, whether a hypothesis is more specific or general than its premise, and thus almost all non-monotone problems are labeled as non-entailment. The size of the word vocabulary in the MED dataset is 4,023, and the overlap ratios of vocabulary with previous standard NLI datasets are 95% with MultiNLI and 90% with SNLI. We assigned a set of annotation tags for linguistic phenomena to each example in the test set. These tags allow us to analyze how well models perform on each linguistic phenomenon related to monotonicity reasoning.
We defined 6 tags (see Table 4 for examples):
- lexical knowledge (2,073 examples): inference problems that require lexical relations (i.e., hypernyms, hyponyms, or synonyms)
- reverse (240 examples): inference problems where a propositional object is embedded in a downward environment more than once
- conjunction (283 examples): inference problems that include the phrase replacement by adding conjunction (and) to the hypothesis
- disjunction (254 examples): inference problems that include the phrase replacement by adding disjunction (or) to the hypothesis
- conditionals (149 examples): inference problems that include conditionals (e.g., if, when, unless) in the hypothesis
- negative polarity items (NPIs) (338 examples): inference problems that include NPIs (e.g., any, ever, at all, anything, anyone, anymore, anyhow, anywhere) in the hypothesis
### Baselines
To test the difficulty of our dataset, we checked the majority class label and the accuracies of five state-of-the-art NLI models adopting different approaches: BiMPM (Bilateral Multi-Perspective Matching Model; BIBREF31 , BIBREF31 ), ESIM (Enhanced Sequential Inference Model; BIBREF32 , BIBREF32 ), Decomposable Attention Model BIBREF33 , KIM (Knowledge-based Inference Model; BIBREF34 , BIBREF34 ), and BERT (Bidirectional Encoder Representations from Transformers model; BIBREF35 , BIBREF35 ). Regarding BERT, we checked the performance of a model pretrained on Wikipedia and BookCorpus for language modeling and trained with SNLI and MultiNLI. For other models, we checked the performance trained with SNLI. In agreement with our dataset, we regarded the prediction label contradiction as non-entailment. Table 6 shows that the accuracies of all models were better on upward inferences, in accordance with the reported results of the GLUE leaderboard. The overall accuracy of each model was low. In particular, all models underperformed the majority baseline on downward inferences, despite some models having rich lexical knowledge from a knowledge base (KIM) or pretraining (BERT). This indicates that downward inferences are difficult to perform even with the expansion of lexical knowledge. In addition, it is interesting to see that if a model performed better on upward inferences, it performed worse on downward inferences. We will investigate these results in detail below. ### Data augmentation for analysis
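The evaluation convention above, where the three-way prediction contradiction is collapsed into non-entailment before scoring, can be sketched as follows (hypothetical helper names):

```python
def to_two_way(label):
    """Collapse three-way NLI predictions to the two-way MED scheme:
    contradiction (and neutral) is treated as non-entailment."""
    return "entailment" if label == "entailment" else "non-entailment"

def accuracy(predictions, golds):
    """Accuracy after mapping three-way predictions to two-way labels."""
    pairs = [(to_two_way(p), g) for p, g in zip(predictions, golds)]
    return sum(p == g for p, g in pairs) / len(pairs)
```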
To explore whether the performance of models on monotonicity reasoning depends on the training set or the model themselves, we conducted further analysis performed by data augmentation with the automatically generated monotonicity dataset HELP BIBREF11 . HELP contains 36K monotonicity inference examples (7,784 upward examples, 21,192 downward examples, and 1,105 non-monotone examples). The size of the HELP word vocabulary is 15K, and the overlap ratio of vocabulary between HELP and MED is 15.2%. We trained BERT on MultiNLI only and on MultiNLI augmented with HELP, and compared their performance. Following BIBREF3 , we also checked the performance of a hypothesis-only model trained with each training set to test whether our test set contains undesired biases. Table 7 shows that the performance of BERT with the hypothesis-only training set dropped around 10-40% as compared with the one with the premise-hypothesis training set, even if we use the data augmentation technique. This indicates that the MED test set does not allow models to predict from hypotheses alone. Data augmentation by HELP improved the overall accuracy to 71.6%, but there is still room for improvement. In addition, while adding HELP increased the accuracy on downward inferences, it slightly decreased accuracy on upward inferences. The size of downward examples in HELP is much larger than that of upward examples. This might improve accuracy on downward inferences, but might decrease accuracy on upward inferences. To investigate the relationship between accuracy on upward inferences and downward inferences, we checked the performance throughout training BERT with only upward and downward inference examples in HELP (Figure 2 (i), (ii)). These two figures show that, as the size of the upward training set increased, BERT performed better on upward inferences but worse on downward inferences, and vice versa. 
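Training-set mixtures with a controlled share of downward examples, as used in these experiments, can be assembled as in the following sketch (a hypothetical helper; the actual sampling procedure may differ):

```python
import random

def mix_training_set(upward, downward, downward_ratio, size, seed=0):
    """Sample a training set with a fixed share of downward examples.

    Draws round(size * downward_ratio) downward examples and fills the
    rest with upward ones, sampling without replacement from each pool.
    """
    rng = random.Random(seed)
    n_down = round(size * downward_ratio)
    return rng.sample(downward, n_down) + rng.sample(upward, size - n_down)
```

Sweeping downward_ratio from 0 to 1 while holding size fixed reproduces the kind of ratio experiment whose results are described next.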
Figure 2 (iii) shows performance on a different ratio of upward and downward inference training sets. When downward inference examples constitute more than half of the training set, accuracies on upward and downward inferences were reversed. As the ratio of downward inferences increased, BERT performed much worse on upward inferences. This indicates that a training set in one direction (upward or downward entailing) of monotonicity might be harmful to models when learning the opposite direction of monotonicity. Previous work using HELP BIBREF11 reported that the BERT trained with MultiNLI and HELP containing both upward and downward inferences improved accuracy on both directions of monotonicity. MultiNLI rarely comes from downward inferences (see Section "Discussion" ), and its size is large enough to be immune to the side-effects of downward inference examples in HELP. This indicates that MultiNLI might act as a buffer against side-effects of the monotonicity-driven data augmentation technique. Table 8 shows the evaluation results by genre. This result shows that inference problems collected from linguistics publications are more challenging than crowdsourced inference problems, even if we add HELP to training sets. As shown in Figure 2 , the change in performance on problems from linguistics publications is milder than that on problems from crowdsourcing. This result also indicates the difficulty of problems from linguistics publications. Regarding non-monotone problems collected via crowdsourcing, there are very few non-monotone problems, so accuracy is 100%. Adding non-monotone problems to our test set is left for future work. Table 9 shows the evaluation results by type of linguistic phenomenon. While accuracy on problems involving NPIs and conditionals was improved on both upward and downward inferences, accuracy on problems involving conjunction and disjunction was improved on only one direction. 
In addition, it is interesting that the change in accuracy on conjunction was the opposite of that on disjunction. Downward inference examples involving disjunction are similar to upward inference ones in that inferences from a sentence to a shorter sentence are valid (e.g., Not many campers have had a sunburn or caught a cold $\Rightarrow $ Not many campers have caught a cold). Thus, these results were also caused by the addition of downward inference examples. Also, accuracy on problems annotated with reverse tags was better without HELP, because all such examples are upward inferences embedded in a downward environment twice. Table 9 also shows that accuracy on conditionals was better on upward inferences than on downward inferences. This indicates that BERT might fail to capture the monotonicity property that conditionals create a downward entailing context in their scope while creating an upward entailing context out of their scope. Regarding lexical knowledge, the data augmentation technique improved performance much more on the downward inferences that do not require lexical knowledge. However, among the 394 problems for which all models provided wrong answers, 244 are non-lexical inference problems. This indicates that some non-lexical inference problems are more difficult than lexical inference problems, even though overall accuracy on non-lexical inference problems was better than that on lexical inference problems. ### Discussion
One of our findings is that there is a type of downward inference to which every model fails to provide correct answers. One such type concerns the contrast between few and a few. Among the 394 problems for which all models provided wrong answers, 148 were downward inference problems involving the downward monotonicity operator few, such as in the following example: $P$ : Few of the books had typical or marginal readers $H$ : Few of the books had some typical readers We transformed these downward inference problems into upward inference problems in two ways: (i) by replacing the downward operator few with the upward operator a few, and (ii) by removing the downward operator few. We tested BERT on these transformed test sets. The results showed that BERT predicted the same answers for the transformed test sets. This suggests that BERT does not understand the difference between the downward operator few and the upward operator a few. The results of the crowdsourcing tasks in Section 3.1.3 showed that some downward inferences can naturally be performed in human reasoning. However, we also found that the MultiNLI training set BIBREF10 , which is one of the datasets created from naturally occurring texts, contains only 77 downward inference problems, including the following one. $P$ : No racin' on the Range $H$ : No horse racing is allowed on the Range One possible reason why there are so few downward inferences is that certain pragmatic factors can block people from drawing a downward inference. For instance, in the case of the inference problem in ( "Discussion" ), unless the added disjunct in $H$ , i.e., a small cat with green eyes, is salient in the context, it would be difficult to draw the conclusion $H$ from the premise $P$ . $P$ : I saw a dog $H$ : I saw a dog or a small cat with green eyes Such pragmatic factors would be one reason why it is difficult to obtain downward inferences from naturally occurring texts. ### Conclusion
We introduced a large monotonicity entailment dataset, called MED. To illustrate the usefulness of MED, we tested state-of-the-art NLI models and found that performance on the new test set was substantially worse for all of them. In addition, accuracy on downward inferences was inversely proportional to that on upward inferences. An experiment with the data augmentation technique showed that accuracy on upward and downward inferences depends on the proportion of upward and downward inferences in the training set, which indicates that current neural models might have limited generalization ability in monotonicity reasoning. We hope that MED will be valuable for future research on more advanced models that are capable of monotonicity reasoning in a proper way. ### Acknowledgement
This work was partially supported by JST AIP-PRISM Grant Number JPMJCR18Y1, Japan, and JSPS KAKENHI Grant Number JP18H03284, Japan. We thank our three anonymous reviewers for helpful suggestions. We are also grateful to Koki Washio, Masashi Yoshikawa, and Thomas McLachlan for helpful discussion.
Table 1: Determiners and their polarities.
Table 2: Examples of downward operators.
Figure 1: Overview of our human-oriented dataset creation. E: entailment, NE: non-entailment.
Table 3: Numbers of cases where answers matched automatically determined gold labels.
Table 4: Examples in the MED dataset. Crowd: problems collected through crowdsourcing, Paper: problems collected from linguistics publications, up: upward monotone, down: downward monotone, non: non-monotone, cond: conditionals, rev: reverse, conj: conjunction, disj: disjunction, lex: lexical knowledge, E: entailment, NE: non-entailment.
Table 5: Statistics for the MED dataset.
Table 6: Accuracies (%) for different models and training datasets.
Table 7: Evaluation results on types of monotonicity reasoning. –Hyp: Hypothesis-only model.
Figure 2: Accuracy throughout training BERT (i) with only upward examples and (ii) with only downward examples. We checked the accuracy at sizes [50, 100, 200, 500, 1000, 2000, 5000] for each direction. (iii) Performance on different ratios of upward/downward training sets. The total size of the training sets was 5,000 examples.
Table 8: Evaluation results by genre. Paper: problems collected from linguistics publications, Crowd: problems via crowdsourcing.
Table 9: Evaluation results by linguistic phenomenon type. (non-)Lexical: problems that (do not) require lexical relations. Numbers in parentheses are numbers of problems.
|
a type of reasoning based on word replacement that requires the ability to capture the interaction between lexical and syntactic structures
|
What is special about this particular UN mission?
A. It was the first political mission with Americans on the team.
B. It was the first high-profile mission in Africa.
C. It was the first attempt at using a specific power.
D. It was the first mission specifically oriented at avoiding nuclear war.
|
Transcriber's Note: This etext was produced from Analog, January 1961. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. THE GREEN BERET By TOM PURDOM It's not so much the decisions a man does make that mark him as a Man—but the ones he refrains from making. Like the decision "I've had enough!" Illustrated by Schoenherr Read locked the door and drew his pistol. Sergeant Rashid handed Premier Umluana the warrant. "We're from the UN Inspector Corps," Sergeant Rashid said. "I'm very sorry, but we have to arrest you and bring you in for trial by the World Court." If Umluana noticed Read's gun, he didn't show it. He read the warrant carefully. When he finished, he said something in Dutch. "I don't know your language," Rashid said. "Then I'll speak English." Umluana was a small man with wrinkled brow, glasses and a mustache. His skin was a shade lighter than Read's. "The Inspector General doesn't have the power to arrest a head of state—especially the Premier of Belderkan. Now, if you'll excuse me, I must return to my party." In the other room people laughed and talked. Glasses clinked in the late afternoon. Read knew two armed men stood just outside the door. "If you leave, Premier, I'll have to shoot you." "I don't think so," Umluana said. "No, if you kill me, all Africa will rise against the world. You don't want me dead. You want me in court." Read clicked off the safety. "Corporal Read is very young," Rashid said, "but he's a crack shot. That's why I brought him with me. I think he likes to shoot, too." Umluana turned back to Rashid a second too soon. He saw the sergeant's upraised hand before it collided with his neck. "Help! Kidnap. " Rashid judo chopped him and swung the inert body over his shoulders. Read pulled a flat grenade from his vest pocket. He dropped it and yellow psycho gas hissed from the valve. "Let's be off," Rashid said. The door lock snapped as they went out the window. 
Two men with rifles plunged into the gas; sighing, they fell to the floor in a catatonic trance. A little car skimmed across the lawn. Bearing the Scourge of Africa, Rashid struggled toward it. Read walked backward, covering their retreat. The car stopped, whirling blades holding it a few inches off the lawn. They climbed in. "How did it go?" The driver and another inspector occupied the front seat. "They'll be after us in half a minute." The other inspector carried a light machine gun and a box of grenades. "I better cover," he said. "Thanks," Rashid said. The inspector slid out of the car and ran to a clump of bushes. The driver pushed in the accelerator. As they swerved toward the south, Read saw a dozen armed men run out of the house. A grenade arced from the bushes and the pursuers recoiled from the cloud that rose before them. "Is he all right?" the driver asked. "I don't think I hurt him." Rashid took a syrette from his vest pocket. "Well, Read, it looks like we're in for a fight. In a few minutes Miaka Station will know we're coming. And God knows what will happen at the Game Preserve." Read wanted to jump out of the car. He could die any minute. But he had set his life on a well-oiled track and he couldn't get off until they reached Geneva. "They don't know who's coming," he said. "They don't make them tough enough to stop this boy." Staring straight ahead, he didn't see the sergeant smile. Two types of recruits are accepted by the UN Inspector Corps: those with a fanatic loyalty to the ideals of peace and world order, and those who are loyal to nothing but themselves. Read was the second type. A tall, lanky Negro he had spent his school days in one of the drab suburbs that ring every prosperous American city. It was the home of factory workers, clerks, semiskilled technicians, all who do the drudge work of civilization and know they will never do more. 
The adults spent their days with television, alcohol and drugs; the young spent their days with gangs, sex, television and alcohol. What else was there? Those who could have told him neither studied nor taught at his schools. What he saw on the concrete fields between the tall apartment houses marked the limits of life's possibilities. He had belonged to a gang called The Golden Spacemen. "Nobody fools with me," he bragged. "When Harry Read's out, there's a tiger running loose." No one knew how many times he nearly ran from other clubs, how carefully he picked the safest spot on the battle line. "A man ought to be a man," he once told a girl. "He ought to do a man's work. Did you ever notice how our fathers look, how they sleep so much? I don't want to be like that. I want to be something proud." He joined the UN Inspector Corps at eighteen, in 1978. The international cops wore green berets, high buttonless boots, bush jackets. They were very special men. For the first time in his life, his father said something about his ambitions. "Don't you like America, Harry? Do you want to be without a country? This is the best country in the world. All my life I've made a good living. Haven't you had everything you ever wanted? I've been a king compared to people overseas. Why, you stay here and go to trade school and in two years you'll be living just like me." "I don't want that," Read said. "What do you mean, you don't want that?" "You could join the American Army," his mother said. "That's as good as a trade school. If you have to be a soldier." "I want to be a UN man. I've already enlisted. I'm in! What do you care what I do?" The UN Inspector Corps had been founded to enforce the Nuclear Disarmament Treaty of 1966. Through the years it had acquired other jobs. UN men no longer went unarmed. Trained to use small arms and gas weapons, they guarded certain borders, bodyguarded diplomats and UN officials, even put down riots that threatened international peace. 
As the UN evolved into a strong world government, the UN Inspector Corps steadily acquired new powers. Read went through six months training on Madagascar. Twice he nearly got expelled for picking fights with smaller men. Rather than resign, he accepted punishment which assigned him to weeks of dull, filthy extra labor. He hated the restrictions and the iron fence of regulations. He hated boredom, loneliness and isolation. And yet he responded with enthusiasm. They had given him a job. A job many people considered important. He took his turn guarding the still disputed borders of Korea. He served on the rescue teams that patrol the busy Polar routes. He mounted guard at the 1980 World's Fair in Rangoon. "I liked Rangoon," he even told a friend. "I even liked Korea. But I think I liked the Pole job best. You sit around playing cards and shooting the bull and then there's a plane crash or something and you go out and win a medal. That's great for me. I'm lazy and I like excitement." One power implied in the UN Charter no Secretary General or Inspector General had ever tried to use. The power to arrest any head of state whose country violated international law. Could the World Court try and imprison a politician who had conspired to attack another nation? For years Africa had been called "The South America of the Old World." Revolution followed revolution. Colonies became democracies. Democracies became dictatorships or dissolved in civil war. Men planted bases on the moon and in four years, 1978-82, ringed the world with matter transmitters; but the black population of Africa still struggled toward political equality. Umluana took control of Belderkan in 1979. The tiny, former Dutch colony, had been a tottering democracy for ten years. The very day he took control the new dictator and his African party began to build up the Belderkan Army. For years he had preached a new Africa, united, free of white masters, the home of a vigorous and perfect Negro society. 
His critics called him a hypocritical racist, an opportunist using the desires of the African people to build himself an empire. He began a propaganda war against neighboring South Africa, promising the liberation of that strife-torn land. Most Negro leaders, having just won representation in the South African Parliament, told him to liberate his own country. They believed they could use their first small voice in the government to win true freedom for their people. But the radio assault and the arms buildup continued. Early in 1982, South Africa claimed the Belderkan Army exceeded the size agreed to in the Disarmament Treaty. The European countries and some African nations joined in the accusation. China called the uproar a vicious slur on a new African nation. The United States and Russia, trying not to get entangled, asked for more investigation by the UN. But the evidence was clear. Umluana was defying world law. If he got away with it, some larger and more dangerous nation might follow his precedent. And the arms race would begin again. The Inspector General decided. They would enter Belderkan, arrest Umluana and try him by due process before the World Court. If the plan succeeded, mankind would be a long step farther from nuclear war. Read didn't know much about the complicated political reasons for the arrest. He liked the Corp and he liked being in the Corp. He went where they sent him and did what they told him to do. The car skimmed above the tree-tops. The driver and his two passengers scanned the sky. A plane would have been a faster way to get out of the country. But then they would have spent hours flying over Africa, with Belderkan fighters in hot pursuit, other nations joining the chase and the world uproar gaining volume. By transmitter, if all went well, they could have Umluana in Geneva in an hour. They were racing toward Miaka, a branch transmitter station. 
From Miaka they would transmit to the Belderkan Preserve, a famous tourist attraction whose station could transmit to any point on the globe. Even now a dozen inspectors were taking over the Game Preserve station and manning its controls. They had made no plans to take over Miaka. They planned to get there before it could be defended. "There's no military base near Miaka," Rashid said. "We might get there before the Belderkans." "Here comes our escort," Read said. A big car rose from the jungle. This one had a recoilless rifle mounted on the roof. The driver and the gunner waved and fell in behind them. "One thing," Read said, "I don't think they'll shoot at us while he's in the car." "Don't be certain, corporal. All these strong-arm movements are alike. I'll bet Umluana's lieutenants are hoping he'll become a dead legend. Then they can become live conquerors." Sergeant Rashid came from Cairo. He had degrees in science and history from Cambridge but only the Corp gave him work that satisfied his conscience. He hated war. It was that simple. Read looked back. He saw three spots of sunlight about two hundred feet up and a good mile behind. "Here they come, Sarge." Rashid turned his head. He waved frantically. The two men in the other car waved back. "Shall I duck under the trees?" the driver asked. "Not yet. Not until we have to." Read fingered the machine gun he had picked up when he got in the car. He had never been shot at. Twice he had faced an unarmed mob, but a few shots had sent them running. Birds flew screaming from their nests. Monkeys screeched and threw things at the noisy, speeding cars. A little cloud of birds surrounded each vehicle. The escort car made a sharp turn and charged their pursuers. The big rifle fired twice. Read saw the Belderkan cars scatter. Suddenly machine-gun bullets cracked and whined beside him. "Evade," Rashid said. "Don't go down." Without losing any forward speed, the driver took them straight up. Read's stomach bounced. 
A shell exploded above them. The car rocked. He raised his eyes and saw a long crack in the roof. "Hit the floor," Rashid said. They knelt on the cramped floor. Rashid put on his gas mask and Read copied him. Umluana breathed like a furnace, still unconscious from the injection Rashid had given him. I can't do anything , Read thought. They're too far away to shoot back. All we can do is run. The sky was clear and blue. The jungle was a noisy bazaar of color. In the distance guns crashed. He listened to shells whistle by and the whipcrack of machine-gun bullets. The car roller-coastered up and down. Every time a shell passed, he crawled in waves down his own back. Another explosion, this time very loud. Rashid raised his eyes above the seat and looked out the rear window. "Two left. Keep down, Read." "Can't we go down?" Read said. "They'll get to Miaka before us." He shut his eyes when he heard another loud explosion. Sergeant Rashid looked out the window again. He swore bitterly in English and Egyptian. Read raised his head. The two cars behind them weren't fighting each other. A long way back the tree-tops burned. "How much farther?" Rashid said. The masks muffled their voices. "There it is now. Shall I take us right in?" "I think you'd better." The station was a glass diamond in a small clearing. The driver slowed down, then crashed through the glass walls and hovered by the transmitter booth. Rashid opened the door and threw out two grenades. Read jumped out and the two of them struggled toward the booth with Umluana. The driver, pistol in hand, ran for the control panel. There were three technicians in the station and no passengers. All three panicked when the psycho gas enveloped them. They ran howling for the jungle. Through the window of his mask, Read saw their pursuers land in the clearing. Machine-gun bullets raked the building. They got Umluana in the booth and hit the floor. Read took aim and opened fire on the largest car. 
"Now, I can shoot back," he said. "Now we'll see what they do." "Are you ready, Rashid?" yelled the driver. "Man, get us out of here!" The booth door shut. When it opened, they were at the Game Preserve. The station jutted from the side of a hill. A glass-walled waiting room surrounded the bank of transmitter booths. Read looked out the door and saw his first battlefield. Directly in front of him, his head shattered by a bullet, a dead inspector lay behind an overturned couch. Read had seen dozens of training films taken during actual battles or after atomic attacks. He had laughed when other recruits complained. "That's the way this world is. You people with the weak stomachs better get used to it." Now he slid against the rear wall of the transmitter booth. A wounded inspector crawled across the floor to the booth. Read couldn't see his wound, only the pain scratched on his face and the blood he deposited on the floor. "Did you get Umluana?" he asked Sergeant Rashid. "He's in the booth. What's going on?" Rashid's Middle East Oxford seemed more clipped than ever. "They hit us with two companies of troops a few minutes ago. I think half our men are wounded." "Can we get out of here?" "They machine-gunned the controls." Rashid swore. "You heard him, Read! Get out there and help those men." He heard the screams of the wounded, the crack of rifles and machine guns, all the terrifying noise of war. But since his eighteenth year he had done everything his superiors told him to do. He started crawling toward an easy-chair that looked like good cover. A bullet cracked above his head, so close he felt the shock wave. He got up, ran panicky, crouched, and dove behind the chair. An inspector cracked the valve on a smoke grenade. A white fog spread through the building. They could see anyone who tried to rush them but the besiegers couldn't pick out targets. Above the noise, he heard Rashid. "I'm calling South Africa Station for a copter. It's the only way out of here. 
Until it comes, we've got to hold them back." Read thought of the green beret he had stuffed in his pocket that morning. He stuck it on his head and cocked it. He didn't need plain clothes anymore and he wanted to wear at least a part of his uniform. Bullets had completely shattered the wall in front of him. He stared through the murk, across the broken glass. He was Corporal Harry Read, UN Inspector Corps—a very special man. If he didn't do a good job here, he wasn't the man he claimed to be. This might be the only real test he would ever face. He heard a shout in rapid French. He turned to his right. Men in red loincloths ran zigzagging toward the station. They carried light automatic rifles. Half of them wore gas masks. "Shoot the masks," he yelled. "Aim for the masks." The machine gun kicked and chattered on his shoulder. He picked a target and squeezed off a burst. Tensely, he hunted for another mask. Three grenades arced through the air and yellow gas spread across the battlefield. The attackers ran through it. A few yards beyond the gas, some of them turned and ran for their own lines. In a moment only half a dozen masked men still advanced. The inspectors fired a long, noisy volley. When they stopped only four attackers remained on their feet. And they were running for cover. The attackers had come straight up a road that led from the Game Preserve to the station. They had not expected any resistance. The UN men had already taken over the station, chased out the passengers and technicians and taken up defense positions; they had met the Belderkans with a dozen grenades and sent them scurrying for cover. The fight so far had been vicious but disorganized. But the Belderkans had a few hundred men and knew they had wrecked the transmitter controls. The first direct attack had been repulsed. They could attack many more times and continue to spray the building with bullets. 
They could also try to go around the hill and attack the station from above; if they did, the inspectors had a good view of the hill and should see them going up. The inspectors had taken up good defensive positions. In spite of their losses, they still had enough firepower to cover the area surrounding the station. Read surveyed his sector of fire. About two hundred yards to his left, he saw the top of a small ditch. Using the ditch for cover, the Belderkans could sneak to the top of the hill. Gas grenades are only three inches long. They hold cubic yards of gas under high pressure. Read unclipped a telescoping rod from his vest pocket. He opened it and a pair of sights flipped up. A thin track ran down one side. He had about a dozen grenades left, three self-propelling. He slid an SP grenade into the rod's track and estimated windage and range. Sighting carefully, not breathing, muscles relaxed, the rod rock steady, he fired and lobbed the little grenade into the ditch. He dropped another grenade beside it. The heavy gas would lie there for hours. Sergeant Rashid ran crouched from man to man. He did what he could to shield the wounded. "Well, corporal, how are you?" "Not too bad, sergeant. See that ditch out there? I put a little gas in it." "Good work. How's your ammunition?" "A dozen grenades. Half a barrel of shells." "The copter will be here in half an hour. We'll put Umluana on, then try to save ourselves. Once he's gone, I think we ought to surrender." "How do you think they'll treat us?" "That we'll have to see." An occasional bullet cracked and whined through the misty room. Near him a man gasped frantically for air. On the sunny field a wounded man screamed for help. "There's a garage downstairs," Rashid said. "In case the copter doesn't get here on time, I've got a man filling wine bottles with gasoline." "We'll stop them, Sarge. Don't worry." Rashid ran off. Read stared across the green land and listened to the pound of his heart. 
What were the Belderkans planning? A mass frontal attack? To sneak in over the top of the hill? He didn't think, anymore than a rabbit thinks when it lies hiding from the fox or a panther thinks when it crouches on a branch above the trail. His skin tightened and relaxed on his body. "Listen," said a German. Far down the hill he heard the deep-throated rumble of a big motor. "Armor," the German said. The earth shook. The tank rounded the bend. Read watched the squat, angular monster until its stubby gun pointed at the station. It stopped less than two hundred yards away. A loud-speaker blared. ATTENTION UN SOLDIERS. ATTENTION UN SOLDIERS. YOU MAY THINK US SAVAGES BUT WE HAVE MODERN WEAPONS. WE HAVE ATOMIC WARHEADS, ALL GASES, ROCKETS AND FLAME THROWERS. IF YOU DO NOT SURRENDER OUR PREMIER, WE WILL DESTROY YOU. "They know we don't have any big weapons," Read said. "They know we have only gas grenades and small arms." He looked nervously from side to side. They couldn't bring the copter in with that thing squatting out there. A few feet away, sprawled behind a barricade of tables, lay a man in advanced shock. His deadly white skin shone like ivory. They wouldn't even look like that. One nuclear shell from that gun and they'd be vaporized. Or perhaps the tank had sonic projectors; then the skin would peel off their bones. Or they might be burned, or cut up by shrapnel, or gassed with some new mist their masks couldn't filter. Read shut his eyes. All around him he heard heavy breathing, mumbled comments, curses. Clothes rustled as men moved restlessly. But already the voice of Sergeant Rashid resounded in the murky room. "We've got to knock that thing out before the copter comes. Otherwise, he can't land. I have six Molotov cocktails here. Who wants to go hunting with me?" For two years Read had served under Sergeant Rashid. To him, the sergeant was everything a UN inspector should be. Rashid's devotion to peace had no limits. 
Read's psych tests said pride alone drove him on. That was good enough for the UN; they only rejected men whose loyalties might conflict with their duties. But an assault on the tank required something more than a hunger for self-respect. Read had seen the inspector who covered their getaway. He had watched their escort charge three-to-one odds. He had seen another inspector stay behind at Miaka Station. And here, in this building, lay battered men and dead men. All UN inspectors. All part of his life. And he was part of their life. Their blood, their sacrifice, and pain, had become a part of him. "I'll take a cocktail, Sarge." "Is that Read?" "Who else did you expect?" "Nobody. Anybody else?" "I'll go," the Frenchman said. "Three should be enough. Give us a good smoke screen." Rashid snapped orders. He put the German inspector in charge of Umluana. Read, the Frenchman and himself, he stationed at thirty-foot intervals along the floor. "Remember," Rashid said. "We have to knock out that gun." Read had given away his machine gun. He held a gas-filled bottle in each hand. His automatic nestled in its shoulder holster. Rashid whistled. Dozens of smoke grenades tumbled through the air. Thick mist engulfed the tank. Read stood up and ran forward. He crouched but didn't zigzag. Speed counted most here. Gunfire shook the hill. The Belderkans couldn't see them but they knew what was going on and they fired systematically into the smoke. Bullets ploughed the ground beside him. He raised his head and found the dim silhouette of the tank. He tried not to think about bullets ploughing through his flesh. A bullet slammed into his hip. He fell on his back, screaming. "Sarge. Sarge. " "I'm hit, too," Rashid said. "Don't stop if you can move." Listen to him. What's he got, a sprained ankle? But he didn't feel any pain. He closed his eyes and threw himself onto his stomach. And nearly fainted from pain. He screamed and quivered. The pain stopped. 
He stretched out his hands, gripping the wine bottles, and inched forward. Pain stabbed him from stomach to knee. "I can't move, Sarge." "Read, you've got to. I think you're the only—" "What?" Guns clattered. Bullets cracked. "Sergeant Rashid! Answer me." He heard nothing but the lonely passage of the bullets in the mist. "I'm a UN man," he mumbled. "You people up there know what a UN man is? You know what happens when you meet one?" When he reached the tank, he had another bullet in his right arm. But they didn't know he was coming and when you get within ten feet of a tank, the men inside can't see you. He just had to stand up and drop the bottle down the gun barrel. That was all—with a broken hip and a wounded right arm. He knew they would see him when he stood up but he didn't think about that. He didn't think about Sergeant Rashid, about the complicated politics of Africa, about crowded market streets. He had to kill the tank. That was all he thought about. He had decided something in the world was more important than himself, but he didn't know it or realize the psychologists would be surprised to see him do this. He had made many decisions in the last few minutes. He had ceased to think about them or anything else. With his cigarette lighter, he lit the rag stuffed in the end of the bottle. Biting his tongue, he pulled himself up the front of the tank. His long arm stretched for the muzzle of the gun. He tossed the bottle down the dark throat. As he fell, the machine-gun bullets hit him in the chest, then in the neck. He didn't feel them. He had fainted the moment he felt the bottle leave his hand. The copter landed ten minutes later. Umluana left in a shower of bullets. A Russian private, the ranking man alive in the station, surrendered the survivors to the Belderkans. His mother hung the Global Medal above the television set. "He must have been brave," she said. "We had a fine son." "He was our only son," her husband said. "What did he volunteer for? 
Couldn't somebody else have done it?" His wife started to cry. Awkwardly, he embraced her. He wondered what his son had wanted that he couldn't get at home. THE END
|
C. It was the first attempt at using a specific power.
|
Why did Mr. & Mrs. Lane agree so quickly to Peggy’s bargain?
A. They didn’t want to argue about it anymore.
B. They didn’t want her to pursue a different career.
C. They understood that she was determined and realistic in her plans.
D. They remembered that she wanted to move to New York since she was young.
|
PEGGY FINDS THE THEATER

I

Dramatic Dialogue

“Of course, this is no surprise to us,” Thomas Lane said to his daughter Peggy, who perched tensely on the edge of a kitchen stool. “We could hardly have helped knowing that you’ve wanted to be an actress since you were out of your cradle. It’s just that decisions like this can’t be made quickly.” “But, Dad!” Peggy almost wailed. “You just finished saying yourself that I’ve been thinking about this and wanting it for years! You can’t follow that by calling it a quick decision!” She turned to her mother, her hazel eyes flashing under a mass of dark chestnut curls. “Mother, you understand, don’t you?” Mrs. Lane smiled gently and placed her soft white hand on her daughter’s lean brown one. “Of course I understand, Margaret, and so does your father. We both want to do what’s best for you, not to stand in your way. The only question is whether the time is right, or if you should wait longer.” “Wait! Mother—Dad—I’m years behind already! The theater is full of beginners a year and even two years younger than I am, and girls of my age have lots of acting credits already. Besides, what is there to wait for?” Peggy’s father put down his coffee cup and leaned back in the kitchen chair until it tilted on two legs against the wall behind him. He took his time before answering. When he finally spoke, his voice was warm and slow. “Peg, I don’t want to hold up your career. I don’t have any objections to your wanting to act. I think—judging from the plays I’ve seen you in at high school and college—that you have a real talent. But I thought that if you would go on with college for three more years and get your degree, you would gain so much worth-while knowledge that you’d use and enjoy for the rest of your life—” “But not acting knowledge!” Peggy cried. “There’s more to life than that,” her father put in.
“There’s history and literature and foreign languages and mathematics and sciences and music and art and philosophy and a lot more—all of them fascinating and all important.” “None of them is as fascinating as acting to me,” Peggy replied, “and none of them is nearly as important to my life.” Mrs. Lane nodded. “Of course, dear. I know just how you feel about it,” she said. “I would have answered just the same way when I was your age, except that for me it was singing instead of acting. But—” and here her pleasant face betrayed a trace of sadness—“but I was never able to be a singer. I guess I wasn’t quite good enough or else I didn’t really want it hard enough—to go on with all the study and practice it needed.” She paused and looked thoughtfully at her daughter’s intense expression, then took a deep breath before going on. “What you must realize, Margaret, is that you may not quite make the grade. We think you’re wonderful, but the theater is full of young girls whose parents thought they were the most talented things alive; girls who won all kinds of applause in high-school and college plays; girls who have everything except luck. You may be one of these girls, and if you are, we want you to be prepared for it. We want you to have something to fall back on, just in case you ever need it.” Mr. Lane, seeing Peggy’s hurt look, was quick to step in with reassurance. “We don’t think you’re going to fail, Peg. We have every confidence in you and your talents. I don’t see how you could miss being the biggest success ever—but I’m your father, not a Broadway critic or a play producer, and I could be wrong. And if I am wrong, I don’t want you to be hurt. All I ask is that you finish college and get a teacher’s certificate so that you can always find useful work if you have to. Then you can try your luck in the theater. Doesn’t that make sense?” Peggy stared at the faded linoleum on the floor for a few moments before answering.
Then, looking first at her mother and then at her father, she replied firmly, “No, it doesn’t! It might make sense if we were talking about anything else but acting, but we’re not. If I’m ever going to try, I’ll have a better chance now than I will in three years. But I can see your point of view, Dad, and I’ll tell you what—I’ll make a bargain with you.” “What sort of bargain, Peg?” her father asked curiously. “If you let me go to New York now, and if I can get into a good drama school there, I’ll study and try to find acting jobs at the same time. That way I’ll still be going to school and I’ll be giving myself a chance. And if I’m not started in a career in one year, I’ll go back to college and get my teacher’s certificate before I try the theater again. How does that sound to you?” “It sounds fair enough,” Tom Lane admitted, “but are you so confident that you’ll see results in one year? After all, some of our top stars worked many times that long before getting any recognition.” “I don’t expect recognition in one year, Dad,” Peggy said. “I’m not that conceited or that silly. All I hope is that I’ll be able to get a part in that time, and maybe be able to make a living out of acting. And that’s probably asking too much. If I have to, I’ll make a living at something else, maybe working in an office or something, while I wait for parts. What I want to prove in this year is that I can act. If I can’t, I’ll come home.” “It seems to me, Tom, that Margaret has a pretty good idea of what she’s doing,” Mrs. Lane said. “She sounds sensible and practical. If she were all starry-eyed and expected to see her name in lights in a few weeks, I’d vote against her going, but I’m beginning to think that maybe she’s right about this being the best time.” “Oh, Mother!” Peggy shouted, jumping down from the stool and throwing her arms about her mother’s neck. “I knew you’d understand! And you understand too, don’t you, Dad?” she appealed.
Her father replied in little puffs as he drew on his pipe to get it started. “I ... never said ... I didn’t ... understand you ... did I?” His pipe satisfactorily sending up thick clouds of fragrant smoke, he took it out of his mouth before continuing more evenly. “Peg, your mother and I are cautious only because we love you so much and want what’s going to make you happy. At the same time, we want to spare you any unnecessary unhappiness along the way. Remember, I’m not a complete stranger to show business. Before I came out here to Rockport to edit the Eagle, I worked as a reporter on one of the best papers in New York. I saw a lot ... I met a lot of actors and actresses ... and I know how hard the city often was for them. But I don’t want to protect you from life. That’s no good either. Just let me think about it a little longer and let me talk to your mother some more.” Mrs. Lane patted Peggy’s arm and said, “We won’t keep you in suspense long, dear. Why don’t you go out for a walk for a while and let us go over the situation quietly? We’ll decide before bedtime.” Peggy nodded silently and walked to the kitchen door, where she paused to say, “I’m just going out to the barn to see if Socks is all right for the night. Then maybe I’ll go down to Jean’s for a while.” As she stepped out into the soft summer dusk she turned to look back just in time to see her mother throw her a comically exaggerated wink of assurance. Feeling much better, Peggy shut the screen door behind her and started for the barn. Ever since she had been a little girl, the barn had been Peggy’s favorite place to go to be by herself and think. Its musty but clean scent of straw and horses and leather made her feel calm and alive. Breathing in its odor gratefully, she walked into the half-dark to Socks’s stall. As the little bay horse heard her coming, she stamped one foot and softly whinnied a greeting.
Peggy stopped first at the bag that hung on the wall among the bridles and halters and took out a lump of sugar as a present. Then, after stroking Socks’s silky nose, she held out her palm with the sugar cube. Socks took it eagerly and pushed her nose against Peggy’s hand in appreciation. As Peggy mixed some oats and barley for her pet and checked to see that there was enough straw in the stall, she thought about her life in Rockport and the new life that she might soon be going to. Rockport, Wisconsin, was a fine place, as pretty a small town as any girl could ask to grow up in. And not too small, either, Peggy thought. Its 16,500 people supported good schools, an excellent library, and two good movie houses. What’s more, the Rockport Community College attracted theater groups and concert artists, so that life in the town had always been stimulating. And of course, all of this was in addition to the usual growing-up pleasures of swimming and sailing, movie dates, and formal dances—everything that a girl could want. Peggy had lived all her life here, knew every tree-shaded street, every country road, field, lake, and stream. All of her friends were here, friends she had known since her earliest baby days. It would be hard to leave them, she knew, but there was no doubt in her mind that she was going to do so. If not now, then as soon as she possibly could. It was not any dissatisfaction with her life, her friends, or her home that made Peggy want to leave Rockport. She was not running away from anything, she reminded herself; she was running to something. To what? To the bright lights, speeding taxis, glittering towers of a make-believe movie-set New York? Would it really be like that? Or would it be something different, something like the dreary side-street world of failure and defeat that she had also seen in movies?
Seeing the image of herself hungry and tired, going from office to office looking for a part in a play, Peggy suddenly laughed aloud and brought herself back to reality, to the warm barn smell and the big, soft-eyed gaze of Socks. She threw her arm around the smooth bay neck and laid her face next to the horse’s cheek. “Socks,” she murmured, “I need some of your horse sense if I’m going to go out on my own! We’ll go for a fast run in the morning and see if some fresh air won’t clear my silly mind!” With a final pat, she left the stall and the barn behind, stepping out into the deepening dusk. It was still too early to go back to the house to see if her parents had reached a decision about her future. Fighting down an impulse to rush right into the kitchen to see how they were coming along, Peggy continued down the driveway and turned left on the slate sidewalk past the front porch of her family’s old farmhouse and down the street toward Jean Wilson’s house at the end of the block. As she walked by her own home, she noticed with a familiar tug at her heart how the lilac bushes on the front lawn broke up the light from the windows behind them into a pattern of leafy lace. For a moment, or maybe a little more, she wondered why she wanted to leave this. What for? What could ever be better?

II

Dramatic Decision

Upstairs at the Wilsons’, Peggy found Jean swathed in bath towels, washing her long, straight red hair, which was now white with lather and piled up in a high, soapy knot. “You just washed it yesterday!” Peggy said. “Are you doing it again—or still?” Jean grinned, her eyes shut tight against the soapsuds. “Again, I’m afraid,” she answered. “Maybe it’s a nervous habit!” “It’s a wonder you’re not bald, with all the rubbing you give your hair,” Peggy said with a laugh. “Well, if I do go bald, at least it will be with a clean scalp!” Jean answered with a humorous crinkle of her freckled nose.
Taking a deep breath and puffing out her cheeks comically, she plunged her head into the basin and rinsed off the soap with a shampoo hose. When she came up at last, dripping-wet hair was tightly plastered to the back of her head. “There!” she announced. “Don’t I look beautiful?” After a brisk rubdown with one towel, Jean rolled another dry towel around her head like an Indian turban. Then, having wrapped herself in an ancient, tattered, plaid bathrobe, she led Peggy out of the steamy room and into her cozy, if somewhat cluttered, bedroom. When they had made themselves comfortable on the pillow-strewn daybeds, Jean came straight to the point. “So the grand debate is still going on, is it? When do you think they’ll make up their minds?” she asked. “How do you know they haven’t decided anything yet?” Peggy said, in a puzzled tone. “Oh, that didn’t take much deduction, my dear Watson,” Jean laughed. “If they had decided against the New York trip, your face would be as long as Socks’s nose, and it’s not half that long. And if the answer was yes, I wouldn’t have to wait to hear about it! You would have been flying around the room and talking a mile a minute. So I figured that nothing was decided yet.” “You know, if I were as smart as you,” Peggy said thoughtfully, “I would have figured out a way to convince Mother and Dad by now.” “Oh, don’t feel bad about being dumb,” Jean said in mock tones of comfort. “If I were as pretty and talented as you are, I wouldn’t need brains, either!” With a hoot of laughter, she rolled quickly aside on the couch to avoid the pillow that Peggy threw at her. A short, breathless pillow fight followed, leaving the girls limp with laughter and with Jean having to retie her towel turban. From her new position, flat on the floor, Peggy looked up at her friend with a rueful smile. “You know, I sometimes think that we haven’t grown up at all!” she said.
“I can hardly blame my parents for thinking twice—and a lot more—before treating me like an adult.” “Nonsense!” Jean replied firmly. “Your parents know a lot better than to confuse being stuffy with being grown-up and responsible. And, besides, I know that they’re not the least bit worried about your being able to take care of yourself. I heard them talking with my folks last night, and they haven’t got a doubt in the world about you. But they know how hard it can be to get a start as an actress, and they want to be sure that you have a profession in case you don’t get a break in show business.” “I know,” Peggy answered. “We had a long talk about it this evening after dinner.” Then she told her friend about the conversation and her proposed “bargain” with her parents. “They both seemed to think it was fair,” she concluded, “and when I went out, they were talking it over. They promised me an answer by bedtime, and I’m over here waiting until the jury comes in with its decision. You know,” she said suddenly, sitting up on the floor and crossing her legs under her, “I bet they wouldn’t hesitate a minute if you would only change your mind and decide to come with me and try it too!” After a moment’s thoughtful silence, Jean answered slowly, “No, Peg. I’ve thought this all out before, and I know it would be as wrong for me as it is right for you. I know we had a lot of fun in the dramatic groups, and I guess I was pretty good as a comedienne in a couple of the plays, but I know I haven’t got the real professional thing—and I know that you have. In fact, the only professional talent I think I do have for the theater is the ability to recognize talent when I see it—and to recognize that it’s not there when it isn’t!” “But, Jean,” Peggy protested, “you can handle comedy and character lines as well as anyone I know!” Jean nodded, accepting the compliment and seeming at the same time to brush it off. “That doesn’t matter.
You know even better than I that there’s a lot more to being an actress—a successful one—than reading lines well. There’s the ability to make the audience sit up and notice you the minute you walk on, whether you have lines or not. And that’s something you can’t learn; you either have it, or you don’t. It’s like being double-jointed. I can make an audience laugh when I have good lines, but you can make them look at you and respond to you and be with you all the way, even with bad lines. That’s why you’re going to go to New York and be an actress. And that’s why I’m not.” “But, Jean—” Peggy began. “No buts!” Jean cut in. “We’ve talked about this enough before, and I’m not going to change my mind. I’m as sure about what I want as you are about what you want. I’m going to finish college and get my certificate as an English teacher.” “And what about acting? Can you get it out of your mind as easily as all that?” Peggy asked. “That’s the dark and devious part of my plan,” Jean answered with a mysterious laugh that ended in a comic witch’s cackle and an unconvincing witch-look that was completely out of place on her round, freckled face. “Once I get into a high school as an English teacher, I’m going to try to teach a special course in the literature of the theater and maybe another one in stagecraft. I’m going to work with the high-school drama group and put on plays. That way, I’ll be in a spot where I can use my special talent of recognizing talent. And that way,” she added, becoming much more serious, “I have a chance really to do something for the theater. If I can help and encourage one or two people with real talent like yours, then I’ll feel that I’ve really done something worth while.” Peggy nodded silently, not trusting herself to speak for fear of saying something foolishly sentimental, or even of crying. Her friend’s earnestness about the importance of her work and her faith in Peggy’s talent had touched her more than she could say.
The silence lasted what seemed a terribly long time, until Jean broke it by suddenly jumping up and flinging a last pillow which she had been hiding behind her back. Running out of the bedroom, she called, “Come on! I’ll race you down to the kitchen for cocoa! By the time we’re finished, it’ll be about time for your big Hour of Decision scene!” It was nearly ten o’clock when Peggy finally felt that her parents had had enough time to talk things out. Leaving the Wilson house, she walked slowly despite her eagerness, trying in all fairness to give her mother and father every minute she could. Reaching her home, she cut across the lawn behind the lilac bushes, to the steps up to the broad porch that fronted the house. As she climbed the steps, she heard her father’s voice raised a little above its normal soft, deep tone, but she could not make out the words. Crossing the porch, she caught sight of him through the window. He was speaking on the telephone, and now she caught his words. “Fine. Yes.... Yes—I think we can. Very well, day after tomorrow, then. That’s right—all three of us. And, May—it’ll be good to see you again, after all these years! Good-by.” As Peggy entered the room, her father put down the phone and turned to Mrs. Lane. “Well, Betty,” he said, “it’s all set.” “What’s all set, Dad?” Peggy said, breaking into a run to her father’s side. “Everything’s all set, Peg,” her father said with a grin. “And it’s set just the way you wanted it! There’s not a man in the world who can hold out against two determined women.” He leaned back against the fireplace mantel, waiting for the explosion he felt sure was to follow his announcement. But Peggy just stood, hardly moving a muscle. Then she walked carefully, as if she were on the deck of a rolling ship, to the big easy chair and slowly sat down. “Well, for goodness’ sake!” her mother cried. “Where’s the enthusiasm?” Peggy swallowed hard before answering.
When her voice came, it sounded strange, about two tones higher than usual. “I ... I’m trying to be sedate ... and poised ... and very grown-up,” she said. “But it’s not easy. All I want to do is to—” and she jumped out of the chair—“to yell whoopee!” She yelled at the top of her lungs. After the kisses, the hugs, and the first excitement, Peggy and her parents adjourned to the kitchen, the favorite household conference room, for cookies and milk and more talk. “Now, tell me, Dad,” Peggy asked, her mouth full of oatmeal cookies, no longer “sedate” or “poised,” but her natural, bubbling self. “Who was that on the phone, and where are the three of us going, and what’s all set?” “One thing at a time,” her father said. “To begin with, we decided almost as soon as you left that we were going to let you go to New York to try a year’s experience in the theater. But then we had to decide just where you would live, and where you should study, and how much money you would need, and a whole lot of other things. So I called New York to talk to an old friend of mine who I felt would be able to give us some help. Her name is May Berriman, and she’s spent all her life in the theater. In fact, she was a very successful actress. Now she’s been retired for some years, but I thought she might give us some good advice.” “And did she?” Peggy asked. “We were luckier than I would have thought possible,” Mrs. Lane put in. “It seems that May bought a big, old-fashioned town house and converted it into a rooming house especially for young actresses. She always wanted a house of her own with a garden in back, but felt it was foolish for a woman living alone. This way, she can afford to run a big place and at the same time not be alone. And best of all, she says she has a room that you can have!” “Oh, Mother! It sounds wonderful!” Peggy exulted. “I’ll be with other girls my own age who are actresses, and living with an experienced actress!
I’ll bet she can teach me loads!” “I’m sure she can,” her father said. “And so can the New York Dramatic Academy.” “Dad!” Peggy shouted, almost choking on a cooky. “Don’t tell me you’ve managed to get me accepted there! That’s the best dramatic school in the country! How—?” “Don’t get too excited, Peg,” Mr. Lane interrupted. “You’re not accepted anywhere yet, but May Berriman told me that the Academy is the best place to study acting, and she said she would set up an audition for you in two days. The term starts in a couple of weeks, so there isn’t much time to lose.” “Two days! Do you mean we’ll be going to New York day after tomorrow, just like that?” “Oh, no,” her mother answered calmly. “We’re going to New York tomorrow on the first plane that we can get seats on. Your father doesn’t believe in wasting time, once his mind is made up.” “Tomorrow?” Peggy repeated, almost unable to believe what she had heard. “What are we sitting here talking for, then? I’ve got a million things to do! I’ve got to get packed ... I’ve got to think of what to read for the audition! I can study on the plane, I guess, but ... oh! I’ll be terrible in a reading unless I can have more time! Oh, Mother, what parts will I do? Where’s the Shakespeare? Where’s—” “Whoa!” Mr. Lane said, catching Peggy’s arm to prevent her from rushing out of the kitchen. “Not now, young lady! We’ll pack in the morning, talk about what you should read, and take an afternoon plane to New York. But tonight, you’d better think of nothing more than getting to bed. This is going to be a busy time for all of us.” Reluctantly, Peggy agreed, recognizing the sense of what her father said. She finished her milk and cookies, kissed her parents good night and went upstairs to bed. But it was one thing to go to bed and another to go to sleep. Peggy lay on her back, staring at the ceiling and the patterns of light and shade cast by the street lamp outside as it shone through the leaves of the big maple tree.
As she watched the shifting shadows, she reviewed the roles she had played since her first time in a high-school play. Which should she refresh herself on? Which ones would she do best? And which ones were most suited to her now? She recognized that she had grown and developed past some of the roles which had once seemed perfectly suited to her talent and her appearance. But both had changed. She was certainly not a mature actress yet, from any point of view, but neither was she a schoolgirl. Her trim figure was well formed; her face had lost the undefined, simple cuteness of the early teens, and had gained character. She didn’t think she should read a young romantic part like Juliet. Not that she couldn’t do it, but perhaps something sharper was called for. Perhaps Viola in Twelfth Night? Or perhaps not Shakespeare at all. Maybe the people at the Academy would think she was too arty or too pretentious? Maybe she should do something dramatic and full of stormy emotion, like Blanche in A Streetcar Named Desire? Or, better for her development and age, a light, brittle, comedy role...? Nothing seemed quite right. Peggy’s thoughts shifted with the shadows overhead. All the plays she had ever seen or read or acted in melted together in a blur, until the characters from one seemed to be talking with the characters from another and moving about in an enormous set made of pieces from two or three different plays. More actors kept coming on in a fantastic assortment of costumes until the stage was full. Then the stage lights dimmed, the actors joined hands across the stage to bow, the curtain slowly descended, the lights went out—and Peggy was fast asleep.
|
C. They understood that she was determined and realistic in her plans.
|
Given the way that the marocca grow, will the narrator and Captain Hannah likely have to make trips back to Mypore II in the future to transport more marocca?
A. Yes, because the marocca plants will not have a very long lifespan on Gloryanna III.
B. No, because the marocca will be so difficult to maintain on Gloryanna III that any hopes of restarting a marocca industry on the planet will be abandoned.
C. No, because the plants grow extraordinarily fast and they reproduce on a large-scale.
D. Yes, because the marocca do not produce many fruits, so more plants will have to be transported to make the plant profitable.
|
CAKEWALK TO GLORYANNA

BY L. J. STECHER, JR.

[Transcriber's Note: This etext was produced from Worlds of Tomorrow June 1963. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.]

The job was easy. The profit was enormous. The only trouble was—the cargo had a will of its own! Captain Hannah climbed painfully down from the Delta Crucis, hobbled across the spaceport to where Beulah and I were waiting to greet him and hit me in the eye. Beulah—that's his elephant, but I have to take care of her for him because Beulah's baby belongs to me and Beulah has to take care of it—kept us apart until we both cooled down a little. Then, although still somewhat dubious about it, she let us go together across the field to the spaceport bar. I didn't ask Captain Hannah why he had socked me. Although he has never been a handsome man, he usually has the weathered and austere dignity that comes from plying the remote reaches among the stars. Call it the Look of Eagles. Captain Hannah had lost the Look of Eagles. His eyes were swollen almost shut; every inch of him that showed was a red mass of welts piled on more welts, as though he had tangled with a hive of misanthropic bees. The gold-braided hat of his trade was not clamped in its usual belligerent position slightly over one eye. It was riding high on his head, apparently held up by more of the ubiquitous swellings. I figured that he figured that I had something to do with the way he looked. "Shipping marocca to Gloryanna III didn't turn out to be a cakewalk after all?" I suggested. He glared at me in silence. "Perhaps you would like a drink first, and then you would be willing to tell me about it?" I decided that his wince was intended for a nod, and ordered rhial. I only drink rhial when I've been exposed to Captain Hannah. It was almost a pleasure to think that I was responsible, for a change, for having him take the therapy.
"A Delta Class freighter can carry almost anything," he said at last, in a travesty of his usual forceful voice. "But some things it should never try." He lapsed back into silence after this uncharacteristic admission. I almost felt sorry for him, but just then Beulah came racking across the field with her two-ton infant in tow, to show her off to Hannah. I walled off my pity. He had foisted those two maudlin mastodons off onto me in one of our earlier deals, and if I had somehow been responsible for his present troubles, it was no more than he deserved. I rated winning for once. "You did succeed in getting the marocca to Gloryanna III?" I asked anxiously, after the elephants had been admired and sent back home. The success of that venture—even if the job had turned out to be more difficult than we had expected—meant an enormous profit to both of us. The fruit of the marocca is delicious and fabulously expensive. The plant grew only on the single planet Mypore II. Transshipped seeds invariably failed to germinate, which explained its rarity. The Myporians were usually, and understandably, bitterly opposed to letting any of the living plants get shipped off their planet. But when I offered them a sizable piece of cash plus a perpetual share of the profits for letting us take a load of marocca plants to Gloryanna III, they relented and, for the first time in history, gave their assent. In fact, they had seemed delighted. "I got them there safely," said Captain Hannah. "And they are growing all right?" I persisted. "When I left, marocca was growing like mad," said Captain Hannah. I relaxed and leaned back in my chair. I no longer felt the need of rhial for myself. "Tell me about it," I suggested. "It was you who said that we should carry those damn plants to Gloryanna III," he said balefully. "I ought to black your other eye." "Simmer down and have some more rhial," I told him. "Sure I get the credit for that. Gloryanna III is almost a twin to Mypore II.
You know that marocca takes a very special kind of environment. Bright sun most of the time—that means an almost cloudless environment. A very equable climate. Days and nights the same length and no seasons—that means no ecliptical and no axial tilt. But our tests showed that the plants had enough tolerance to cause no trouble in the trip in Delta Crucis." A light dawned. "Our tests were no good?" "Your tests were no good," agreed the captain with feeling. "I'll tell you about it first, and then I'll black your other eye," he decided. "You'll remember that I warned you that we should take some marocca out into space and solve any problems we might find before committing ourselves to hauling a full load of it?" asked Captain Hannah. "We couldn't," I protested. "The Myporians gave us a deadline. If we had gone through all of that rigamarole, we would have lost the franchise. Besides, they gave you full written instructions about what to do under all possible circumstances." "Sure. Written in Myporian. A very difficult language to translate. Especially when you're barricaded in the head." I almost asked him why he had been barricaded in the bathroom of the Delta Crucis, but I figured it was safer to let him tell me in his own way, in his own time. "Well," he said, "I got into parking orbit around Mypore without any trouble. The plastic film kept the water in the hydroponic tanks without any trouble, even in a no-gravity condition. And by the time I had lined up for Gloryanna and Jumped, I figured, like you said, that the trip would be a cakewalk. "Do you remember how the plants always keep their leaves facing the sun? They twist on their stems all day, and then they go on twisting them all night, still pointing at the underground sun, so that they're aimed right at sunrise. So the stem looks like a corkscrew?" I nodded. "Sure. That's why they can't stand an axial tilt. They 'remember' the rate and direction of movement, and keep it up during the night time. So what?
We had that problem all figured out." "You think so? That solution was one of yours, too, wasn't it?" He gazed moodily at his beaker of rhial. "I must admit it sounded good to me, too. In Limbo, moving at multiple light-speeds, the whole Universe, of course, turns into a bright glowing spot in our direction of motion, with everything else dark. So I lined up the Delta Crucis perpendicular to her direction of motion, put a once-every-twenty-one hour spin on her to match the rotation rates of Mypore II and Gloryanna III, and uncovered the view ports to let in the light. It gradually brightened until 'noon time', with the ports pointing straight at the light source, and then dimmed until we had ten and one-half hours of darkness. "Of course, it didn't work." "For Heaven's sake, why not?" "For Heaven's sake why should it? With no gravity for reference, how were the plants supposed to know that the 'sun' was supposed to be moving?" "So what did you do?" I asked, when that had sunk in. "If the stem doesn't keep winding, the plants die; and they can only take a few extra hours of night time before they run down." "Oh," said Captain Hannah in quiet tones of controlled desperation, "it was very simple. I just put enough spin on the ship to make artificial gravity, and then I strung a light and moved it every fifteen minutes for ten and one-half hours, until I had gone halfway around the room. Then I could turn the light off and rest for ten and one-half hours. The plants liked it fine. "Of course, first I had to move all the hydroponic tanks from their original positions perpendicular to the axial thrust line of the ship to a radial position. And because somehow we had picked up half of the plants in the northern hemisphere of Mypore and the other half in the southern hemisphere, it turned out that half of the plants had a sinistral corkscrew and the other half had a dextral. 
So I had to set the plants up in two different rooms, and run an artificial sun for each, going clockwise with one, widdershins with the other. "I won't even talk about what I went through while I was shifting the hydroponic tanks, when all the plastic membranes that were supposed to keep the water in place started to break." "I'd like to know," I said sincerely. He stared at me in silence for a moment. "Well, it filled the cabin with great solid bubbles of water. Water bubbles will oscillate and wobble like soap bubbles," he went on dreamily, "but of course, they're not empty, like soap bubbles. The surface acts a little like a membrane, so that sometimes two of the things will touch and gently bounce apart without joining. But just try touching one of them. You could drown—I almost did. Several times. "I got a fire pump—an empty one. You know the kind; a wide cylinder with a piston with a handle, and a hose that you squirt the water out of, or can suck water in with. The way you use it is, you float up on a big ball of water, with the pump piston down—closed. You carefully poke the end of the hose into the ball of water, letting only the metal tip touch. Never the hose. If you let the hose touch, the water runs up it and tries to drown you. Then you pull up on the piston, and draw all the water into the cylinder. Of course, you have to hold the pump with your feet while you pull the handle with your free hand." "Did it work?" I asked eagerly. "Eventually. Then I stopped to think of what to do with the water. It was full of minerals and manure and such, and I didn't want to introduce it into the ship's tanks." "But you solved the problem?" "In a sense," said the captain. "I just emptied the pump back into the air, ignored the bubbles, repositioned the tanks, put spin on the ship and then ladled the liquid back into the tanks with a bucket." "Didn't you bump into a lot of the bubbles and get yourself dunked a good deal while you were working with the tanks?" 
He shrugged. "I couldn't say. By that time I was ignoring them. It was that or suicide. I had begun to get the feeling that they were stalking me. So I drew a blank." "Then after that you were all right, except for the tedium of moving the lights around?" I asked him. I answered myself at once. "No. There must be more. You haven't told me why you hid out in the bathroom, yet." "Not yet," said Captain Hannah. "Like you, I figured I had the situation fairly well under control, but like you, I hadn't thought things through. The plastic membranes hadn't torn when we brought the tanks in board the Delta Crucis . It never occurred to me to hunt around for the reasons for the change. But I wouldn't have had long to hunt anyway, because in a few hours the reasons came looking for me. "They were a tiny skeeter-like thing. A sort of midge or junior grade mosquito. They had apparently been swimming in the water during their larval stage. Instead of making cocoons for themselves, they snipped tiny little pieces of plastic to use as protective covers in the pupal stage. I guess they were more like butterflies than mosquitoes in their habits. And now they were mature. "There were thousands and thousands of them, and each one of them made a tiny, maddening whine as it flew." "And they bit? That explains your bumps?" I asked sympathetically. "Oh, no. These things didn't bite, they itched. And they got down inside of everything they could get down inside, and clung. That included my ears and my eyes and my nose. "I broke out a hand sprayer full of a DDT solution, and sprayed it around me to try to clear the nearby air a little, so that I could have room to think. The midges loved it. But the plants that were in reach died so fast that you could watch their leaves curl up and drop off. "I couldn't figure whether to turn up the fans and dissipate the cloud—by spreading it all through the ship—or whether to try to block off the other plant room, and save it at least. 
So I ended up by not doing anything, which was the right thing to do. No more plants died from the DDT. "So then I did a few experiments, and found that the regular poison spray in the ship's fumigation system worked just fine. It killed the bugs without doing the plants any harm at all. Of course, the fumigation system is designed to work with the fumigator off the ship, because it's poisonous to humans too. "I finally blocked the vents and the door edges in the head, after running some remote controls into there, and then started the fumigation system going. While I was sitting there with nothing much to do, I tried to translate what I could of the Myporian instructions. It was on page eleven that it mentioned casually that the midges—the correct word is carolla—are a necessary part of the life cycle of the marocca. The larvae provide an enzyme without which the plants die. "Of course. I immediately stopped slapping at the relatively few midges that had made their way into the head with me, and started to change the air in the ship to get rid of the poison. I knew it was too late before I started, and for once I was right. "The only live midges left in the ship were the ones that had been with me during the fumigation process. I immediately tried to start a breeding ground for midges, but the midges didn't seem to want to cooperate. Whatever I tried to do, they came back to me. I was the only thing they seemed to love. I didn't dare bathe, or scratch, or even wriggle, for fear of killing more of them. And they kept on itching. It was just about unbearable, but I bore it for three interminable days while the midges died one by one. It was heartbreaking—at least, it was to me. "And it was unnecessary, too. Because apparently the carolla had already laid their eggs, or whatever it is that they do, before I had fumigated them. After my useless days of agony, a new batch came swarming out. 
And this time there were a few of a much larger thing with them—something like an enormous moth. The new thing just blundered around aimlessly. "I lit out for the head again, to keep away from that intolerable whining. This time I took a luxurious shower and got rid of most of the midges that came through the door with me. I felt almost comfortable, in fact, until I resumed my efforts to catch up on my reading. "The mothlike things—they are called dingleburys—also turn out to provide a necessary enzyme. They are supposed to have the same timing of their life cycle as the carolla. Apparently the shaking up I had given their larvae in moving the tanks and dipping the water up in buckets and all that had inhibited them in completing their cycle the first time around. "And the reason they had the same life cycle as the carolla was that the adult dinglebury will eat only the adult carolla, and it has to fill itself full to bursting before it will reproduce. If I had the translation done correctly, they were supposed to dart gracefully around, catching carolla on the wing and stuffing themselves happily. "I had to find out what was wrong with my awkward dingleburys. And that, of course, meant going out into the ship again. But I had to do that anyway, because it was almost 'daylight', and time for me to start shifting the lights again. "The reason for the dingleburys' problem is fairly obvious. When you set up artificial gravity by spinning a ship, the gravity is fine down near the skin where the plants are. But the gravity potential is very high, and it gets very light up where things fly around, going to zero on the middle line of the ship. And the unfamiliar gravity gradient, together with the Coriolis effect and all, makes the poor dingleburys dizzy, so they can't catch carolla. "And if you think I figured all that out about dingleburys getting dizzy at the time, in that madhouse of a ship, then you're crazy. 
What happened was that I saw that there was one of the creatures that didn't seem to be having any trouble, but was acting like the book said it should. I caught it and examined it. The poor thing was blind, and was capturing her prey by sound alone. "So I spent the whole day—along with my usual chore of shifting the lights—blindfolding dingleburys. Which is a hell of a sport for a man who is captain of his own ship." I must say that I agreed with him, but it seemed to be a good time for me to keep my mouth shut. "Well, after the dingleburys had eaten and propagated, they became inquisitive. They explored the whole ship, going into places I wouldn't have believed it to be possible for them to reach, including the inside of the main computer, which promptly shorted out. I finally figured that one of the things had managed to crawl up the cooling air exhaust duct, against the flow of air, to see what was going on inside. "I didn't dare to get rid of the things without checking my book, of course, so it was back to the head for me. 'Night' had come again—and it was the only place I could get any privacy. There were plenty of the carolla left to join me outside. "I showered and swatted and started to read. I got as far as where it said that the dingleburys continued to be of importance, and then I'm afraid I fell asleep. "I got up with the sun the next morning. Hell, I had to, considering that it was I who turned the sun on! I found that the dingleburys immediately got busy opening small buds on the stems of the marocca plants. Apparently they were pollinating them. I felt sure that these buds weren't the marocca blossoms from which the fruit formed—I'd seen a lot of those while we were on Mypore II and they were much bigger and showier than these little acorn-sized buds. "Of course, I should have translated some more of my instruction book, but I was busy. "Anyway, the action of the dingleburys triggered the violent growth phase of the marocca plants. 
Did you know that they plant marocca seedlings, back on Mypore II, at least a hundred feet apart? If you'll recall, a mature field, which was the only kind we ever saw, is one solid mass of green growth. "The book says that it takes just six hours for a marocca field to shift from the seedling stage to the mature stage. It didn't seem that long. You could watch the stuff grow—groping and crawling along; one plant twining with another as they climbed toward the light. "It was then that I began to get worried. If they twined around the light, they would keep me from moving it, and they would shadow it so it wouldn't do its job right. In effect, their growth would put out the sun. "I thought of putting up an electrically charged fence around the light, but the bugs had put most of my loose equipment out of action, so I got a machete. When I took a swing at one of the vines, something bit me on the back of the neck so hard it almost knocked me down. It was one of the dingleburys, and it was as mad as blazes. It seems that one of the things they do is to defend the marocca against marauders. That was the first of my welts, and it put me back in the head in about two seconds. "And what's more, I found that I couldn't kill the damn things. Not if I wanted to save the plants. The growth only stops at the end of six hours, after the blossoms appear and are visited by the dingleburys. No dingleburys, no growth stoppage. "So for the next several hours I had to keep moving those lights, and keep them clear of the vines, and keep the vines from shadowing each other to the point where they curled up and died, and I had to do it gently , surrounded by a bunch of worried dingleburys. "Every time they got a little too worried, or I slipped and bumped into a plant too hard, or looked crosseyed at them, they bit me. If you think I look bad now, you should have seen me just about the time the blossoms started to burst. "I was worried about those blossoms. 
I felt sure that they would smell terrible, or make me sick, or hypnotize me, or something. But they just turned out to be big, white, odorless flowers. They did nothing for me or to me. They drove the dingleburys wild, though, I'm happy to say. Made them forget all about me. "While they were having their orgy, I caught up on my reading. It was necessary for me to cut back the marocca vines. For one thing, I couldn't get up to the area of the bridge. For another, the main computer was completely clogged. I could use the auxiliary, on the bridge, if I could get to it, but it's a poor substitute. For another thing, I would have to cut the stuff way back if I was ever going to get the plants out of the ship. And I was a little anxious to get my Delta Crucis back to normal as soon as possible. But before cutting, I had to translate the gouge. "It turns out that it's all right to cut marocca as soon as it stops growing. To keep the plants from dying, though, you have to mulch the cuttings and then feed them back to the plants, where the roots store whatever they need against the time of the next explosive period of growth. Of course, if you prefer you can wait for the vines to die back naturally, which takes several months. "There was one little catch, of course. The cuttings from the vines will poison the plants if they are fed back to them without having been mixed with a certain amount of processed mulch. Enzymes again. And there was only one special processor on board. "I was the special processor. That's what the instructions said—I translated very carefully—it required an 'organic processor'. "So I had to eat pounds of that horrible tasting stuff every day, and process it the hard way. "I didn't even have time to scratch my bites. I must have lost weight everywhere but in the swollen places, and they looked worse than they do now. The doctor says it may take a year before the bumps all go away—if they ever do—but I have improved a lot already. 
"For a while I must have been out of my head. I got so caught up in the rhythm of the thing that I didn't even notice when we slipped out of Limbo into real space near Gloryanna III. It was three days, the Control Tower on Gloryanna III told me, that they tried continuously to raise me on the communications gear before I heard the alarm bell and answered them, so I had to do a good deal of backtracking before I could get into parking orbit around the planet, and then set Delta Crucis down safely. Even as shaky as I was, Delta Crucis behaved like a lady. "I hadn't chopped off all of the new growth, although I had the plants down to manageable size. Some of the blossoms left on the plants had formed fruit, and the fruit had ripened and dried, and the seeds had developed fully. They were popping and spreading fine dust-like spores all over the ship, those last few hours before I landed. "By that time, though, an occasional sneezing fit and watering eyes didn't bother me any. I was far beyond the point where hay fever could add to my troubles. "When I opened the airlock door, though, the spores drifting outside set the customs inspectors to sneezing and swearing more than seemed reasonable at the time." Captain Hannah inhaled a sip of rhial, and seemed to be enjoying the powerful stuff. He acted as if he thought he had finished. "Well, go on," I urged him. "The marocca plants were still in good shape, weren't they?" Hannah nodded. "They were growing luxuriously." He nodded his head a couple of more times, in spite of the discomfort it must have given him. He said, "They made me burn the entire crop right away, of course. They didn't get all of the carolla or dingleburys, though. Or spores." "Gloryanna III is the original home planet of marocca. They hated the stuff, of course, but they liked the profit. Then, when a plague almost wiped out the dingleburys, they introduced khorram furs as a cash crop. 
It wasn't as lucrative, but it was so much more pleasant that they outlawed marocca. Took them almost fifty years to stamp it out completely. Meanwhile, some clever native shipped a load of the stuff to Mypore II. He took his time, did it without any trouble and made his fortune. And got out again quickly. "The Gloryannans were going to hold my Delta Crucis as security to pay for the cost of stamping out marocca all over again—those spores sprout fast—and for a time I was worried. "Of course, when I showed them our contract—that you alone were responsible for everything once I landed the plants safely on Gloryanna III, they let me go. "They'll send you the bill. They don't figure it will take them more than a few months to complete the job." Captain Hannah stopped talking and stood up, painfully and a little unsteadily. I'm afraid I didn't even notice when he blacked my other eye. I was too busy reaching for the rhial. END
|
C. No, because the plants grow extraordinarily fast and they reproduce on a large-scale.
|
What other interesting correlations are observed?
|
### Introduction
The main motivation of this work started with a question: "What do people do to maintain their health?" Some people follow a balanced diet, some exercise. Among diet plans, some people maintain a vegetarian or vegan diet; among exercises, some people swim, cycle, or do yoga, and some do both. If we want answers to questions such as "How many people follow a diet?", "How many people do yoga?", or "Do yogis follow a vegetarian/vegan diet?", we could ask our acquaintances, but that would provide very little insight into the data. Nowadays people share their interests and thoughts via discussions, tweets, and statuses on social media (e.g., Facebook, Twitter, Instagram). This is a huge amount of data, and it is not possible to go through it all manually. We need to mine the data to get overall statistics, which will also let us find interesting correlations in the data. Several works have been done on prediction of social media content BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. Prieto et al. proposed a method to extract a set of tweets to estimate and track the incidence of health conditions in society BIBREF5. Discovering public health topics and themes in tweets was examined by Prier et al. BIBREF6. Yoon et al. described a practical approach of content mining to analyze tweet contents and illustrated an application of the approach to the topic of physical activity BIBREF7. Twitter data constitutes a rich source that can be used for capturing information about any topic imaginable. In this work, we use text mining to mine Twitter health-related data. Text mining is the application of natural language processing techniques to derive relevant information BIBREF8. Millions of tweets are generated each day on multifarious issues BIBREF9. Twitter mining at large scale has been getting a lot of attention in the last few years.
Lin and Ryaboy discussed the evolution of Twitter infrastructure and the development of capabilities for data mining on "big data" BIBREF10. Pandarachalil et al. provided a scalable and distributed solution for Twitter sentiment analysis using the Parallel Python framework BIBREF9. Large-scale Twitter mining for drug-related adverse events was developed by Bian et al. BIBREF11. In this paper, we use the parallel and distributed technology Apache Kafka BIBREF12 to handle the large stream of Twitter data. Data processing is conducted in parallel with data extraction by integrating Apache Kafka and Spark Streaming. We then use topic modeling to infer the semantic structure of the unstructured data (i.e., tweets). Topic modeling is a text mining technique that automatically discovers the hidden themes in given documents; it is an unsupervised text analytic algorithm used for finding groups of words in the given documents. We build models using three different algorithms, Latent Semantic Analysis (LSA) BIBREF13, Non-negative Matrix Factorization (NMF) BIBREF14, and Latent Dirichlet Allocation (LDA) BIBREF15, and infer the topics of tweets. To observe the model's behavior, we test the model on new tweets. The implication of our work is to annotate unlabeled data using the model and find interesting correlations.

### Data Collection
Tweet messages are retrieved from the Twitter source by utilizing the Twitter API and stored in Kafka topics. The Producer API is used to connect the source (i.e., Twitter) to any Kafka topic as a stream of records for a specific category. We fetch data from the source (Twitter), push it to a message queue, and consume it for further analysis. Fig. FIGREF2 shows the overview of Twitter data collection using Kafka.

### Apache Kafka
In order to handle the large stream of Twitter data, we use parallel and distributed technology from the big data ecosystem. The output of the Twitter crawler is queued in a messaging system called Apache Kafka, a distributed streaming platform created and open-sourced by LinkedIn in 2011 BIBREF12. We write a Producer client which continuously fetches the latest tweets using the Twitter API and pushes them to a single-node Kafka broker, and a Consumer that reads the data from Kafka (Fig. FIGREF2).

### Apache Zookeeper
Apache Zookeeper is a distributed, open-source configuration and synchronization service with a naming registry for distributed applications. Kafka uses Zookeeper to store metadata about the Kafka cluster, as well as consumer client details.

### Data Extraction using Tweepy
The Twitter data was crawled using Tweepy, a Python library for accessing the Twitter API. We use the Twitter streaming API to extract 40k tweets (April 17-19, 2019). For the crawling, we focus on several keywords related to health, processed in a non-case-sensitive way. We filter the stream to all tweets containing the words `yoga', `healthylife', `healthydiet', `diet', `hiking', `swimming', `cycling', `yogi', `fatburn', `weightloss', `pilates', `zumba', `nutritiousfood', `wellness', `fitness', `workout', `vegetarian', `vegan', `lowcarb', `glutenfree', and `calorieburn'. The streaming API returns tweets, as well as several other types of messages (e.g., a tweet deletion notice, a user profile update notice), all in JSON format. We use the Python library json for parsing the data and pandas for data manipulation.

### Data Pre-processing
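The non-case-sensitive keyword filter used in the extraction step above can be sketched in a few lines of plain Python. The keyword list is taken from the text; `matches_keywords` is a hypothetical helper name, since in practice Tweepy's streaming `filter` performs this matching on the server side.

```python
# Minimal sketch of the non-case-sensitive health-keyword filter.
# `matches_keywords` is a hypothetical helper; the real pipeline lets
# the Twitter streaming API do this filtering.
KEYWORDS = [
    "yoga", "healthylife", "healthydiet", "diet", "hiking", "swimming",
    "cycling", "yogi", "fatburn", "weightloss", "pilates", "zumba",
    "nutritiousfood", "wellness", "fitness", "workout", "vegetarian",
    "vegan", "lowcarb", "glutenfree", "calorieburn",
]

def matches_keywords(tweet_text: str) -> bool:
    """Return True if the tweet contains any tracked keyword (case-insensitive)."""
    text = tweet_text.lower()
    return any(kw in text for kw in KEYWORDS)
```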
Data pre-processing is one of the key components in many text mining algorithms BIBREF8, and data cleaning is crucial for generating a useful topic model. As prerequisites, we download the stopword list from NLTK (Natural Language Toolkit) and spaCy's en model for text pre-processing. The parsed full-text tweets contain many emails, `RT' markers, newlines, and extra spaces, which are quite distracting; we use Python regular expressions (the re module) to get rid of them. We then tokenize each text into a list of words and remove punctuation and unnecessary characters. We use the Python Gensim package for further processing: Gensim's simple_preprocess() is used for tokenization and punctuation removal, and Gensim's Phrases model is used to build bigrams. Certain parts of English speech, like conjunctions ("for", "or") or the word "the", are meaningless to a topic model; these terms are called stopwords, and we remove them from the token list. We use the spaCy model for lemmatization, keeping only nouns, adjectives, verbs, and adverbs. Stemming is another common NLP technique to reduce topically similar words to their root. For example, "connect", "connecting", "connected", "connection", and "connections" all have similar meanings; stemming reduces those terms to "connect". The Porter stemming algorithm BIBREF16 is the most widely used method.

### Methodology
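A condensed, stdlib-only sketch of the cleaning steps described above. The actual pipeline uses NLTK's full stopword list, Gensim's simple_preprocess(), and spaCy lemmatization; here a tiny sample stopword set and a bare regex tokenizer stand in for them.

```python
import re

# Stdlib-only stand-in for the cleaning pipeline: strip emails, `RT'
# markers, newlines and extra spaces, tokenize, and drop stopwords.
# The real pipeline uses NLTK's full stopword list; this sample set
# is only illustrative.
STOPWORDS = {"the", "a", "an", "for", "or", "and", "is", "rt", "to", "of"}

def preprocess(tweet: str) -> list:
    tweet = re.sub(r"\S+@\S+", " ", tweet)        # strip emails
    tweet = re.sub(r"\bRT\b", " ", tweet)          # strip retweet markers
    tweet = re.sub(r"\s+", " ", tweet)             # collapse newlines/extra spaces
    tokens = re.findall(r"[a-z]+", tweet.lower())  # tokenize, drop punctuation
    return [t for t in tokens if t not in STOPWORDS]
```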
We use Twitter health-related data for this analysis. Subsections 3.1, 3.2, 3.3, and 3.4 present in detail how we infer the meaning of unstructured data, and Subsection 3.5 describes how we perform manual annotation for ground-truth comparison. Fig. FIGREF6 shows the overall pipeline of correlation mining.

### Construct document-term matrix
The result of the data cleaning stage is texts: a tokenized, stopped, stemmed, and lemmatized list of words from a single tweet. To understand how frequently each term occurs within each tweet, we construct a document-term matrix using Gensim's Dictionary() function; Gensim's doc2bow() function then converts the dictionary into a bag-of-words. In the bag-of-words model, each tweet is represented by a vector in an m-dimensional coordinate space, where m is the number of unique terms across all tweets. This set of terms is called the corpus vocabulary.

### Topic Modeling
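The dictionary and bag-of-words step above can be illustrated with a minimal stdlib-only equivalent of what Gensim's Dictionary() and doc2bow() produce: each tweet becomes a sorted list of (term_id, count) pairs over the corpus vocabulary.

```python
from collections import Counter

# Stdlib-only illustration of Gensim's Dictionary() + doc2bow():
# map each vocabulary term to an integer id, then represent each
# tweet as (term_id, count) pairs.
def build_dictionary(texts):
    vocab = sorted({tok for doc in texts for tok in doc})
    return {tok: i for i, tok in enumerate(vocab)}

def doc2bow(doc, dictionary):
    counts = Counter(doc)
    return sorted((dictionary[tok], n) for tok, n in counts.items())

texts = [["yoga", "vegan", "yoga"], ["diet", "vegan"]]
dictionary = build_dictionary(texts)   # {'diet': 0, 'vegan': 1, 'yoga': 2}
corpus = [doc2bow(d, dictionary) for d in texts]
```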
Topic modeling is a text mining technique which provides methods for identifying co-occurring keywords to summarize collections of textual information. It is used to analyze collections of documents, each of which is represented as a mixture of topics, where each topic is a probability distribution over words BIBREF17. Applying these models to a document collection involves estimating the topic distributions and the weight each topic receives in each document. A number of algorithms exist for solving this problem. We use three unsupervised machine learning algorithms to explore the topics of the tweets: Latent Semantic Analysis (LSA) BIBREF13, Non-negative Matrix Factorization (NMF) BIBREF14, and Latent Dirichlet Allocation (LDA) BIBREF15. Fig. FIGREF7 shows the general idea of the topic modeling methodology. Each tweet is considered a document. LSA, NMF, and LDA all use the Bag of Words (BoW) model, which results in a term-document matrix (occurrence of terms in a document), where rows represent terms (words) and columns represent documents (tweets). After topic modeling, we identify the groups of co-occurring words in tweets; these groups of related words make up the "topics". LSA (Latent Semantic Analysis) BIBREF13, also known as LSI (Latent Semantic Indexing), learns latent topics by performing a matrix decomposition on the document-term matrix using Singular Value Decomposition (SVD) BIBREF18. After corpus creation in Subsection 3.1, we generate an LSA model using Gensim. Non-negative Matrix Factorization (NMF) BIBREF14 is a widely used tool for the analysis of high-dimensional data, as it automatically extracts sparse and meaningful features from a set of non-negative data vectors. It is a matrix factorization method where we constrain the matrices to be non-negative.
We apply term weighting with term frequency-inverse document frequency (TF-IDF) BIBREF19 to improve the usefulness of the document-term matrix (created in Subsection 3.1) by giving more weight to the more "important" terms. In Scikit-learn, we generate a TF-IDF weighted document-term matrix using TfidfVectorizer; we then import the NMF model class from sklearn.decomposition and fit the topic model to the tweets. Latent Dirichlet Allocation (LDA) BIBREF15 is widely used for identifying the topics in a set of documents, building on Probabilistic Latent Semantic Analysis (PLSI) BIBREF20. LDA considers each document as a collection of topics in a certain proportion and each topic as a collection of keywords in a certain proportion. We provide LDA with the number of topics, and it rearranges the topics' distribution within the documents and the keywords' distribution within the topics to obtain a good composition of the topic-keyword distribution. We train the LDA model on the corpus generated in Subsection 3.1, providing the dictionary and the number of topics as well.

### Optimal number of Topics
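The TF-IDF weighting applied above can be sketched in plain Python; sklearn's TfidfVectorizer additionally applies smoothing and normalization, so this shows only the basic tf x idf scheme.

```python
import math
from collections import Counter

# Minimal sketch of TF-IDF term weighting: tf is the in-document count,
# idf is log(N / document frequency). Terms that appear in every
# document get zero weight; rarer terms are weighted up.
def tfidf(texts):
    n_docs = len(texts)
    df = Counter(tok for doc in texts for tok in set(doc))
    weighted = []
    for doc in texts:
        tf = Counter(doc)
        weighted.append({tok: tf[tok] * math.log(n_docs / df[tok]) for tok in tf})
    return weighted

docs = [["yoga", "yoga", "vegan"], ["diet", "vegan"]]
weights = tfidf(docs)
# "vegan" appears in every document, so its idf (and weight) is zero;
# "yoga" appears only in the first document, so it gets a positive weight.
```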
Topic modeling is unsupervised learning, so the set of possible topics is unknown. To find the optimal number of topics, we build many LSA, NMF, and LDA models with different numbers of topics (k) and pick the one that gives the highest coherence score. Choosing a k that marks the end of a rapid growth of topic coherence usually offers meaningful and interpretable topics. We use Gensim's CoherenceModel to calculate topic coherence for the LSA and LDA models. For NMF, we use a topic coherence measure called TC-W2V, which relies on a word embedding model constructed from the corpus; in this step, we use the Gensim implementation of Word2Vec BIBREF21 to build a Word2Vec model based on the collection of tweets. We achieve the highest coherence score of 0.4495 at 2 topics for LSA; for NMF the highest coherence value is 0.6433 at k = 4; and for LDA the highest coherence score is 0.3871, also at 4 topics (see Fig. FIGREF8). For our dataset, we therefore picked k = 2, 4, and 4 for LSA, NMF, and LDA, respectively (Fig. FIGREF8). Table TABREF13 shows the topics and the top-10 keywords of each topic. We get more informative and understandable topics with the LDA model than with LSA. The LSA decomposed matrix is highly dense, so it is difficult to index individual dimensions; LSA is unable to capture the multiple meanings of words and offers lower accuracy than LDA. In the case of NMF, we observe the same keywords repeated in multiple topics: the keywords "go" and "day" are both repeated in Topic 2, Topic 3, and Topic 4 (Table TABREF13); the keyword "yoga" is found in both Topic 1 and Topic 4; and the keyword "eat" appears in Topic 2 and Topic 3 (Table TABREF13). If the same keywords are repeated in multiple topics, it is probably a sign that k is too large, even though we achieve the highest coherence score in NMF at k = 4.
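The model-selection loop described above amounts to taking the argmax of coherence over candidate values of k. In this sketch, `coherence_for_k` is a hypothetical stand-in for training a model and scoring it (e.g., with Gensim's CoherenceModel, or TC-W2V for NMF); of the example scores, only the k = 4 value of 0.3871 is the LDA coherence actually reported above, and the other numbers are made up for illustration.

```python
# Sketch of the "pick k with the highest coherence" loop.
# `coherence_for_k` is a hypothetical callable that would train a topic
# model with k topics and return its coherence score.
def pick_optimal_k(candidate_ks, coherence_for_k):
    scores = {k: coherence_for_k(k) for k in candidate_ks}
    best_k = max(scores, key=scores.get)
    return best_k, scores[best_k]

# Illustrative scores only: just the k=4 value (0.3871) is the LDA
# coherence reported in the text; the others are made-up examples.
example_scores = {2: 0.3100, 3: 0.3500, 4: 0.3871, 5: 0.3600}
best_k, best_score = pick_optimal_k(example_scores, example_scores.get)
```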
We use the LDA model for our further analysis, because LDA is good at identifying coherent topics whereas NMF often gives incoherent ones; in the average case NMF and LDA are similar, but LDA is more consistent.

### Topic Inference
After topic modeling with the three methods LSA, NMF, and LDA, we use LDA for further analysis, i.e., to observe the dominant topic, the 2nd dominant topic, and the percentage contribution of the topics in each tweet of the training data. To observe the model's behavior on new tweets that are not included in the training set, we follow the same procedure on the test data. Table TABREF30 shows some tweets with the corresponding dominant topic, 2nd dominant topic, and percentage contribution of the topics in each tweet.

### Manual Annotation
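Extracting the dominant and 2nd dominant topic from a per-tweet distribution, as described above, can be sketched as follows. The input mirrors the list of (topic_id, probability) pairs a Gensim LDA model returns per document; the example distribution is hypothetical, chosen to match the 61%/18% contributions mentioned for the first table row.

```python
# Rank a per-tweet topic distribution and report the dominant and
# 2nd dominant topic with their percentage contributions.
def dominant_topics(topic_dist):
    ranked = sorted(topic_dist, key=lambda tp: tp[1], reverse=True)
    (t1, p1), (t2, p2) = ranked[0], ranked[1]
    return {"dominant": t1, "dominant_pct": round(p1 * 100),
            "second": t2, "second_pct": round(p2 * 100)}

# Hypothetical distribution: Topic 2 dominant at 61%, Topic 1 second at 18%.
info = dominant_topics([(1, 0.18), (2, 0.61), (3, 0.12), (4, 0.09)])
```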
To calculate the accuracy of the model against ground-truth labels, we selected the top 500 tweets from the training dataset (40k tweets) and extracted 500 new tweets (April 22, 2019) as a test dataset. We manually annotated both the training and test data by choosing one of the 4 topics generated by the LDA model (7th, 8th, 9th, and 10th columns of Table TABREF13) for each tweet, based on the intent of the tweet. Consider the following two tweets: Tweet 1: Learning some traditional yoga with my good friend. Tweet 2: Why You Should #LiftWeights to Lose #BellyFat #Fitness #core #abs #diet #gym #bodybuilding #workout #yoga The intention of Tweet 1 is yoga activity (i.e., learning yoga), while Tweet 2 is about lifting weights to reduce belly fat, which relates to workout. During manual annotation, we therefore assign Topic 2 to Tweet 1 and Topic 1 to Tweet 2. It would not be wise to assign Topic 2 to both tweets based solely on the keyword "yoga"; during annotation, we focus on the functionality of each tweet.

### Visualization
We use LDAvis BIBREF22, a web-based interactive visualization of topics estimated using LDA. Gensim's pyLDAvis is the most commonly used tool to visualize the information contained in a topic model. In Fig. FIGREF21, each bubble on the left-hand side plot represents a topic: the larger the bubble, the more prevalent the topic. A good topic model has fairly big, non-overlapping bubbles scattered throughout the chart instead of being clustered in one quadrant, while a model with too many topics typically has many overlapping, small bubbles clustered in one region of the chart. On the right-hand side, the words represent the salient keywords. If we move the cursor over one of the bubbles (Fig. FIGREF21), the words and bars on the right-hand side are updated to show the top-30 salient keywords that form the selected topic and their estimated term frequencies. We observe interesting hidden correlations in the data. Fig. FIGREF24 has Topic 2 selected. Topic 2 contains the top-4 co-occurring keywords "vegan", "yoga", "job", and "every_woman", which have the highest term frequencies. We can infer different things from this topic, e.g., "women usually practice yoga more than men", "women teach yoga and take it as a job", and "yogis follow a vegan diet". We would say there are noticeable correlations in the data, i.e., `Yoga-Veganism' and `Women-Yoga'.

### Topic Frequency Distribution
Each tweet is composed of multiple topics, but typically only one of them is dominant. We extract the dominant and 2nd dominant topic for each tweet and record the weight of the topic (its percentage contribution to the tweet) and the corresponding keywords. We plot each topic's frequency distribution over tweets as a histogram. Fig. FIGREF25 shows the frequency of dominant topics and Fig. FIGREF25 shows the frequency of 2nd dominant topics on tweets. From Fig. FIGREF25 we observe that Topic 1 is either the dominant or the 2nd dominant topic for most tweets. The 7th column of Table TABREF13 shows the corresponding top-10 keywords of Topic 1. ### Comparison with Ground Truth
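Extracting the dominant and 2nd dominant topic per tweet reduces to sorting the tweet's topic distribution by weight; a minimal sketch, assuming the kind of (topic_id, weight) pairs that gensim's `get_document_topics` returns (the distribution below is illustrative):

```python
def dominant_topics(distribution):
    """Return the (dominant, second-dominant) topics of one tweet as
    (topic_id, weight) pairs; second is None for a single-topic tweet."""
    ranked = sorted(distribution, key=lambda tw: tw[1], reverse=True)
    second = ranked[1] if len(ranked) > 1 else None
    return ranked[0], second

# Illustrative distribution over our 4 topics for one tweet
dist = [(0, 0.61), (1, 0.18), (2, 0.11), (3, 0.10)]
dom, second = dominant_topics(dist)
print(dom, second)  # → (0, 0.61) (1, 0.18)
```

Running this over every tweet and tallying the dominant topic ids gives the histograms of Fig. FIGREF25.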
To compare with the ground truth, we gradually increased the dataset size to 100, 200, 300, 400, and 500 tweets from the train data and the test data (new tweets), and manually annotated both train and test data based on the functionality of the tweets (described in [subsec:3.5]Subsection 3.5). For the accuracy calculation, we consider the dominant topic only. We achieved 66% train accuracy and 51% test accuracy at a dataset size of 500 (Fig. FIGREF28 ). As a baseline, we ran random inference multiple times with different seeds and took the average accuracy. For a dataset of 500 tweets, the baseline accuracy converged towards 25%, which is expected given that we have 4 topics. ### Observation and Future Work
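The accuracy computation and the random-inference baseline can both be sketched in a few lines; the manual labels here are simulated stand-ins, and with 4 topics the averaged baseline converges towards 25%, matching what we observed:

```python
import random

def accuracy(predicted, annotated):
    """Fraction of tweets whose dominant topic matches the manual label."""
    matches = sum(p == a for p, a in zip(predicted, annotated))
    return matches / len(annotated)

# Stand-in manual annotations for 500 tweets over 4 topics (illustrative)
ann_rng = random.Random(999)
annotated = [ann_rng.randrange(4) for _ in range(500)]

# Random-inference baseline: average over multiple seeds, as in the text
runs = []
for seed in range(20):
    rng = random.Random(seed)
    guesses = [rng.randrange(4) for _ in range(500)]
    runs.append(accuracy(guesses, annotated))
baseline = sum(runs) / len(runs)
print(round(baseline, 3))  # converges towards 0.25 (1 in 4 topics)
```

For the reported numbers, `guesses` would be replaced by the model's dominant topic per tweet.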
In Table TABREF30 , we show some observations. For the tweets in the 1st and 2nd rows (Table TABREF30 ), we observed understandable topics. We also noticed a misleading topic and an unrelated topic for a few tweets (3rd and 4th rows of Table TABREF30 ). In the 1st row of Table TABREF30 , we show a tweet from the train data for which Topic 2 is the dominant topic, contributing 61% to the tweet; Topic 1 is the 2nd dominant topic with an 18% contribution. The 2nd row of Table TABREF30 shows a tweet from the test set, with Topic 2 as the dominant topic (33% contribution) and Topic 4 as the 2nd dominant topic (32% contribution). In the 3rd row (Table TABREF30 ), a tweet from the test data has Topic 2 as the dominant topic with a 43% contribution; Topic 3 is 2nd dominant with a 23% contribution, which is a misleading topic. The model misinterprets the words `water in hand' and infers the topic whose keywords include "swimming, swim, pool", whereas it should infer a more reasonable topic here (Topic 1, which has keywords "diet, workout"). For the tweet in the 4th row (Table TABREF30 ), we got Topic 2 as the dominant topic, which is unrelated to this tweet, while the most relevant topic appeared only as the 2nd dominant topic. We therefore think the 2nd dominant topic might be considered during accuracy comparison with the ground truth. In future, we will extract more tweets, train the model, and observe its behavior on test data. As we found misleading and unrelated topics in test cases, it is important to understand the reasons behind the predictions. We will incorporate the Local Interpretable Model-agnostic Explanations (LIME) BIBREF23 method to explain the model's predictions. We will also do predictive causality analysis on tweets. ### Conclusions
It is challenging to analyze social media data for different application purposes. In this work, we explored Twitter health-related data, inferred topics using topic modeling (i.e., LSA, NMF, LDA), observed model behavior on new tweets, compared train/test accuracy with the ground truth, employed different visualizations after information integration, and discovered an interesting correlation (Yoga-Veganism) in the data. In future, we will incorporate the Local Interpretable Model-agnostic Explanations (LIME) method to understand model interpretability.
Figure 2: Methodology of correlation mining of Twitter health data.
Figure 3: Topic Modeling using LSA, NMF, and LDA. After topic modeling we identify topic/topics (circles). Red pentagrams and green triangles represent groups of co-occurring related words of the corresponding topic.
Figure 1: Twitter Data Collection.
Figure 4: Optimal Number of Topics vs Coherence Score. Number of Topics (k) are selected based on the highest coherence score. Graphs are rendered in high resolution and can be zoomed in.
Table 1: Topics and top-10 keywords of the corresponding topic
Figure 5: Visualization using pyLDAVis. Best viewed in electronic format (zoomed in).
Figure 6: Visualization using pyLDAVis. Red bubble in left hand side represents selected Topic which is Topic 2. Red bars in right hand side show estimated term frequencies of top-30 salient keywords that form the Topic 2. Best viewed in electronic format (zoomed in)
Figure 7: Frequency of each topic’s distribution on tweets.
Table 2: The Dominant & 2nd Dominant Topic of a Tweet and corresponding Topic Contribution on that specific Tweet.
Figure 8: Percentage of Accuracy (y-axis) vs Size of Dataset (x-axis). Size of Dataset = 100, 200, 300, 400, and 500 tweets. Blue line shows the accuracy of Train data and Orange line represents Test accuracy. Best viewed in electronic format (zoomed in).
|
Women-Yoga
|
What does the colonel seem to think about the bomb situation at the mental institution?
A. He wants to let Thaddeus create more things to study them
B. He is worried about the perception if others hear about what's happening
C. He wants to keep the story away from the newspapers so that others cannot learn Thaddeus' secrets
D. It figures that this is where this is happening, so he's frustrated for yet another bomb case
|
Transcriber's Note: This etext was produced from Astounding Science Fiction November 1959. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed. A FILBERT IS A NUT BY RICK RAPHAEL That the gentleman in question was a nut was beyond question. He was an institutionalized psychotic. He was nutty enough to think he could make an atom bomb out of modeling clay! Illustrated by Freas Miss Abercrombie, the manual therapist patted the old man on the shoulder. "You're doing just fine, Mr. Lieberman. Show it to me when you have finished." The oldster in the stained convalescent suit gave her a quick, shy smile and went back to his aimless smearing in the finger paints. Miss Abercrombie smoothed her smock down over trim hips and surveyed the other patients working at the long tables in the hospital's arts and crafts shop. Two muscular and bored attendants in spotless whites, lounged beside the locked door and chatted idly about the Dodgers' prospects for the pennant. Through the barred windows of the workshop, rolling green hills were seen, their tree-studded flanks making a pleasant setting for the mental institution. The crafts building was a good mile away from the main buildings of the hospital and the hills blocked the view of the austere complex of buildings that housed the main wards. The therapist strolled down the line of tables, pausing to give a word of advice here, and a suggestion there. She stopped behind a frowning, intense patient, rapidly shaping blobs of clay into odd-sized strips and forms. As he finished each piece, he carefully placed it into a hollow shell hemisphere of clay. "And what are we making today, Mr. Funston?" Miss Abercrombie asked. The flying fingers continued to whip out the bits of shaped clay as the patient ignored the question. He hunched closer to his table as if to draw away from the woman. "We mustn't be antisocial, Mr. Funston," Miss Abercrombie said lightly, but firmly. 
"You've been coming along famously and you must remember to answer when someone talks to you. Now what are you making? It looks very complicated." She stared professionally at the maze of clay parts. Thaddeus Funston continued to mold the clay bits and put them in place. Without looking up from his bench he muttered a reply. "Atom bomb." A puzzled look crossed the therapist's face. "Pardon me, Mr. Funston. I thought you said an 'atom bomb.'" "Did," Funston murmured. Safely behind the patient's back, Miss Abercrombie smiled ever so slightly. "Why that's very good, Mr. Funston. That shows real creative thought. I'm very pleased." She patted him on the shoulder and moved down the line of patients. A few minutes later, one of the attendants glanced at his watch, stood up and stretched. "All right, fellows," he called out, "time to go back. Put up your things." There was a rustle of paint boxes and papers being shuffled and chairs being moved back. A tall, blond patient with a flowing mustache, put one more dab of paint on his canvas and stood back to survey the meaningless smears. He sighed happily and laid down his palette. At the clay table, Funston feverishly fabricated the last odd-shaped bit of clay and slapped it into place. With a furtive glance around him, he clapped the other half of the clay sphere over the filled hemisphere and then stood up. The patients lined up at the door, waiting for the walk back across the green hills to the main hospital. The attendants made a quick count and then unlocked the door. The group shuffled out into the warm, afternoon sunlight and the door closed behind them. Miss Abercrombie gazed around the cluttered room and picked up her chart book of patient progress. Moving slowly down the line of benches, she made short, precise notes on the day's work accomplished by each patient. 
At the clay table, she carefully lifted the top half of the clay ball and stared thoughtfully at the jumbled maze of clay strips laced through the lower hemisphere. She placed the lid back in place and jotted lengthily in her chart book. When she had completed her rounds, she slipped out of the smock, tucked the chart book under her arm and left the crafts building for the day. The late afternoon sun felt warm and comfortable as she walked the mile to the main administration building where her car was parked. As she drove out of the hospital grounds, Thaddeus Funston stood at the barred window of his locked ward and stared vacantly over the hills towards the craft shop. He stood there unmoving until a ward attendant came and took his arm an hour later to lead him off to the patients' mess hall. The sun set, darkness fell over the stilled hospital grounds and the ward lights winked out at nine o'clock, leaving just a single light burning in each ward office. A quiet wind sighed over the still-warm hills. At 3:01 a.m., Thaddeus Funston stirred in his sleep and awakened. He sat up in bed and looked around the dark ward. The quiet breathing and occasional snores of thirty other sleeping patients filled the room. Funston turned to the window and stared out across the black hills that sheltered the deserted crafts building. He gave a quick cry, shut his eyes and clapped his hands over his face. The brilliance of a hundred suns glared in the night and threw stark shadows on the walls of the suddenly-illuminated ward. An instant later, the shattering roar and blast of the explosion struck the hospital buildings in a wave of force and the bursting crash of a thousand windows was lost in the fury of the explosion and the wild screams of the frightened and demented patients. It was over in an instant, and a stunned moment later, recessed ceiling lights began flashing on throughout the big institution. 
Beyond the again-silent hills, a great pillar of smoke, topped by a small mushroom-shaped cloud, rose above the gaping hole that had been the arts and crafts building. Thaddeus Funston took his hands from his face and lay back in his bed with a small, secret smile on his lips. Attendants and nurses scurried through the hospital, seeing how many had been injured in the explosion. None had. The hills had absorbed most of the shock and apart from a welter of broken glass, the damage had been surprisingly slight. The roar and flash of the explosion had lighted and rocked the surrounding countryside. Soon firemen and civil defense disaster units from a half-dozen neighboring communities had gathered at the still-smoking hole that marked the site of the vanished crafts building. Within fifteen minutes, the disaster-trained crews had detected heavy radiation emanating from the crater and there was a scurry of men and equipment back to a safe distance, a few hundred yards away. At 5:30 a.m., a plane landed at a nearby airfield and a platoon of Atomic Energy Commission experts, military intelligence men, four FBI agents and an Army full colonel disembarked. At 5:45 a.m. a cordon was thrown around both the hospital and the blast crater. In Ward 4-C, Thaddeus Funston slept peacefully and happily. "It's impossible and unbelievable," Colonel Thomas Thurgood said for the fifteenth time, later that morning, as he looked around the group of experts gathered in the tent erected on the hill overlooking the crater. "How can an atom bomb go off in a nut house?" "It apparently was a very small bomb, colonel," one of the haggard AEC men offered timidly. "Not over three kilotons." "I don't care if it was the size of a peanut," Thurgood screamed. "How did it get here?" A military intelligence agent spoke up. "If we knew, sir, we wouldn't be standing around here. We don't know, but the fact remains that it WAS an atomic explosion." 
Thurgood turned wearily to the small, white-haired man at his side. "Let's go over it once more, Dr. Crane. Are you sure you knew everything that was in that building?" Thurgood swept his hand in the general direction of the blast crater. "Colonel, I've told you a dozen times," the hospital administrator said with exasperation, "this was our manual therapy room. We gave our patients art work. It was a means of getting out of their systems, through the use of their hands, some of the frustrations and problems that led them to this hospital. They worked with oil and water paints and clay. If you can make an atomic bomb from vermillion pigments, then Madame Curie was a misguided scrubwoman." "All I know is that you say this was a crafts building. O.K. So it was," Thurgood sighed. "I also know that an atomic explosion at 3:02 this morning blew it to hell and gone. "And I've got to find out how it happened." Thurgood slumped into a field chair and gazed tiredly up at the little doctor. "Where's that girl you said was in charge of this place?" "We've already called for Miss Abercrombie and she's on her way here now," the doctor snapped. Outside the tent, a small army of military men and AEC technicians moved around the perimeter of the crater, scintillators in hand, examining every tiny scrap that might have been a part of the building at one time. A jeep raced down the road from the hospital and drew up in front of the tent. An armed MP helped Miss Abercrombie from the vehicle. She walked to the edge of the hill and looked down with a stunned expression. "He did make an atom bomb," she cried. Colonel Thurgood, who had snapped from his chair at her words, leaped forward to catch her as she collapsed in a faint. At 4:00 p.m., the argument was still raging in the long, narrow staff room of the hospital administration building. 
Colonel Thurgood, looking more like a patient every minute, sat on the edge of his chair at the head of a long table and pounded with his fist on the wooden surface, making Miss Abercrombie's chart book bounce with every beat. "It's ridiculous," Thurgood roared. "We'll all be the laughingstocks of the world if this ever gets out. An atomic bomb made out of clay. You are all nuts. You're in the right place, but count me out." At his left, Miss Abercrombie cringed deeper into her chair at the broadside. Down both sides of the long table, psychiatrists, physicists, strategists and radiologists sat in various stages of nerve-shattered weariness. "Miss Abercrombie," one of the physicists spoke up gently, "you say that after the patients had departed the building, you looked again at Funston's work?" The therapist nodded unhappily. "And you say that, to the best of your knowledge," the physicist continued, "there was nothing inside the ball but other pieces of clay." "I'm positive that's all there was in it," Miss Abercrombie cried. There was a renewed buzz of conversation at the table and the senior AEC man present got heads together with the senior intelligence man. They conferred briefly and then the intelligence officer spoke. "That seems to settle it, colonel. We've got to give this Funston another chance to repeat his bomb. But this time under our supervision." Thurgood leaped to his feet, his face purpling. "Are you crazy?" he screamed. "You want to get us all thrown into this filbert factory? Do you know what the newspapers would do to us if they ever got wind of the fact, that for one, tiny fraction of a second, anyone of us here entertained the notion that a paranoidal idiot with the IQ of an ape could make an atomic bomb out of kid's modeling clay? "They'd crucify us, that's what they'd do!" 
At 8:30 that night, Thaddeus Funston, swathed in an Army officer's greatcoat that concealed the strait jacket binding him and with an officer's cap jammed far down over his face, was hustled out of a small side door of the hospital and into a waiting staff car. A few minutes later, the car pulled into the flying field at the nearby community and drove directly to the military transport plane that stood at the end of the runway with propellers turning. Two military policemen and a brace of staff psychiatrists sworn to secrecy under the National Atomic Secrets Act, bundled Thaddeus aboard the plane. They plopped him into a seat directly in front of Miss Abercrombie and with a roar, the plane raced down the runway and into the night skies. The plane landed the next morning at the AEC's atomic testing grounds in the Nevada desert and two hours later, in a small hot, wooden shack miles up the barren desert wastelands, a cluster of scientists and military men huddled around a small wooden table. There was nothing on the table but a bowl of water and a great lump of modeling clay. While the psychiatrists were taking the strait jacket off Thaddeus in the staff car outside, Colonel Thurgood spoke to the weary Miss Abercrombie. "Now you're positive this is just about the same amount and the same kind of clay he used before?" "I brought it along from the same batch we had in the store room at the hospital," she replied, "and it's the same amount." Thurgood signaled to the doctors and they entered the shack with Thaddeus Funston between them. The colonel nudged Miss Abercrombie. She smiled at Funston. "Now isn't this nice, Mr. Funston," she said. "These nice men have brought us way out here just to see you make another atom bomb like the one you made for me yesterday." A flicker of interest lightened Thaddeus' face. He looked around the shack and then spotted the clay on the table. Without hesitation, he walked to the table and sat down. 
His fingers began working the damp clay, making first the hollow, half-round shell while the nation's top atomic scientists watched in fascination. His busy fingers flew through the clay, shaping odd, flat bits and clay parts that were dropped almost aimlessly into the open hemisphere in front of him. Miss Abercrombie stood at his shoulder as Thaddeus hunched over the table just as he had done the previous day. From time to time she glanced at her watch. The maze of clay strips grew and as Funston finished shaping the other half hemisphere of clay, she broke the tense silence. "Time to go back now, Mr. Funston. You can work some more tomorrow." She looked at the men and nodded her head. The two psychiatrists went to Thaddeus' side as he put the upper lid of clay carefully in place. Funston stood up and the doctors escorted him from the shack. There was a moment of hushed silence and then pandemonium burst. The experts converged on the clay ball, instruments blossoming from nowhere and cameras clicking. For two hours they studied and gently probed the mass of child's clay and photographed it from every angle. Then they left for the concrete observatory bunker, several miles down range where Thaddeus and the psychiatrists waited inside a ring of stony-faced military policemen. "I told you this whole thing was asinine," Thurgood snarled as the scientific teams trooped into the bunker. Thaddeus Funston stared out over the heads of the MPs through the open door, looking uprange over the heat-shimmering desert. He gave a sudden cry, shut his eyes and clapped his hands over his face. A brilliance a hundred times brighter than the glaring Nevada sun lit the dim interior of the bunker and the pneumatically-operated door slammed shut just before the wave of the blast hit the structure. Six hours and a jet plane trip later, Thaddeus, once again in his strait jacket, sat between his armed escorts in a small room in the Pentagon. 
Through the window he could see the hurried bustle of traffic over the Potomac and beyond, the domed roof of the Capitol. In the conference room next door, the joint chiefs of staff were closeted with a gray-faced and bone-weary Colonel Thurgood and his baker's dozen of AEC brains. Scraps of the hot and scornful talk drifted across a half-opened transom into the room where Thaddeus Funston sat in a neatly-tied bundle. In the conference room, a red-faced, four-star general cast a chilling glance at the rumpled figure of Colonel Thurgood. "I've listened to some silly stories in my life, colonel," the general said coldly, "but this takes the cake. You come in here with an insane asylum inmate in a strait jacket and you have the colossal gall to sit there and tell me that this poor soul has made not one, but two atomic devices out of modeling clay and then has detonated them." The general paused. "Why don't you just tell me, colonel, that he can also make spaceships out of sponge rubber?" the general added bitingly. In the next room, Thaddeus Funston stared out over the sweeping panorama of the Washington landscape. He stared hard. In the distance, a white cloud began billowing up from the base of the Washington Monument, and with an ear-shattering, glass-splintering roar, the great shaft rose majestically from its base and vanished into space on a tail of flame. THE END
|
B. He is worried about the perception if others hear about what's happening
|
What is the author's least favorite film out of the four reviews?
A. Fight Club
B. Boys Don't Cry
C. Mumford
D. Happy Texas
|
Boys Do Bleed Fight Club is silly stuff, sensationalism that mistakes itself for satire, but it's also a brash and transporting piece of moviemaking, like Raging Bull on acid. The film opens with--literally--a surge of adrenalin, which travels through the bloodstream and into the brain of its protagonist, Jack (Edward Norton), who's viewed, as the camera pulls out of his insides, with a gun stuck in his mouth. How'd he get into this pickle? He's going to tell you, breezily, and the director, David Fincher, is going to illustrate his narrative--violently. Fincher ( Seven , 1995; The Game , 1997) is out to bombard you with so much feverish imagery that you have no choice but to succumb to the movie's reeling, punch-drunk worldview. By the end, you might feel as if you, too, have a mouthful of blood. Not to mention a hole in your head. Fight Club careers from one resonant satirical idea to the next without quite deciding whether its characters are full of crap or are Gen X prophets. It always gives you a rush, though. At first, it goofs on the absurd feminization of an absurdly macho culture. An increasingly desperate insomniac, Jack finds relief (and release) only at meetings for the terminally ill. At a testicular cancer group, he's enfolded in the ample arms of Bob (the singer Meat Loaf Aday), a former bodybuilder who ruined his health with steroids and now has "bitch tits." Jack and Bob subscribe to a new form of male bonding: They cling to each other and sob. But Jack's idyll is rudely disrupted by--wouldn't you know it?--a woman. A dark-eyed, sepulchral head case named Marla Singer (Helena Bonham Carter) begins showing up at all the same disparate meetings for essentially the same voyeuristic ends, and the presence of this "tourist" makes it impossible for Jack to emote. Jack finds another outlet, though. 
On a plane, he meets Tyler Durden (Brad Pitt), a cryptic hipster with a penchant for subversive acts both large (he makes high-priced soaps from liposuctioned human fat) and small (he splices frames from porn flicks into kiddie movies). When Jack's apartment mysteriously explodes--along with his carefully chosen IKEA furniture--he moves into Tyler's squalid warehouse and helps to found a new religion: Fight Club, in which young males gather after hours in the basement of a nightclub to pound one another (and be pounded) to a bloody pulp. That last parenthesis isn't so parenthetical. In some ways, it's the longing to be beaten into oblivion that's the strongest. "Self-improvement," explains Tyler, "is masturbation"; self-destruction is the new way. Tyler's manifesto calls for an end to consumerism ("Things you own end up owning you"), and since society is going down ("Martha Stewart is polishing brass on the Titanic "), the only creative outlet left is annihilation. "It's only after we've lost everything that we're free to do anything," he says. Fincher and his screenwriter, Jim Uhls, seem to think they've broken new ground in Fight Club , that their metaphor for our discontents hits harder than anyone else's. Certainly it produces more bloody splatter. But 20 years ago, the same impulse was called punk and, as Greil Marcus documents in Lipstick Traces , it was other things before that. Yes, the mixture of Johnny Rotten, Jake La Motta, and Jesus is unique; and the Faludi-esque emasculation themes are more explicit. But there's something deeply movie-ish about the whole conceit, as if the novelist and director were weaned on Martin Scorsese pictures and never stopped dreaming of recapturing that first masochistic rush. 
The novel, the first by Chuck Palahniuk (the surname sounds like Eskimo for "palooka"--which somehow fits), walks a line between the straight and ironic--it isn't always clear if its glib sociological pronouncements are meant to be taken straight or as the ravings of a delusional mama's boy. But onscreen, when Pitt announces to the assembled fighters that they are the "middle children of history" with "no purpose and no place"--emasculated on one hand by the lack of a unifying crisis (a world war or depression) and on the other by lack of material wealth as promised by television--he seems meant to be intoning gospel. "We are a generation of men raised by women," Tyler announces, and adds, "If our fathers bail, what does that tell you about God?" (I give up: What?) F ight Club could use a few different perspectives: a woman's, obviously, but also an African-American's--someone who'd have a different take on the "healing" properties of violence. It's also unclear just what has emasculated Jack: Is it that he's a materialist or that the materials themselves (i.e., IKEA's lacquered particle boards) don't measure up to his fantasies of opulence? Is he motivated by spiritual hunger or envy? Tyler's subsequent idea of confining his group's mayhem to franchise coffee bars and corporate-subsidized art is a witty one--it's like a parody of neo-Nazism as re-enacted by yuppies. It might have been a howl if performed by, say, the troupe of artsy German nihilists in Joel and Ethan Coen's The Big Lebowski (1998). Somehow Brad Pitt doesn't have the same piquancy. Actually, Pitt isn't as terrible as usual: He's playing not a character but a conceit, and he can bask in his movie-idol arrogance, which seems to be the most authentic emotion he has. But the film belongs to Norton. As a ferocious skinhead in last year's American History X , Norton was taut and ropy, his long torso curled into a sneer; here, he's skinny and wilting, a quivering pansy. 
Even when he fights he doesn't transform--he's a raging wimp. The performance is marvelous, and it makes poetic sense in light of the movie's climactic twist. But that twist will annoy more people than it will delight, if only because it shifts the drama from the realm of the sociological to that of the psychoanalytic. The finale, scored with the Pixies' great "Where Is My Mind?" comes off facetiously--as if Fincher is throwing the movie away. Until then, however, he has done a fabulous job of keeping it spinning. The most thrilling thing about Fight Club isn't what it says but how Uhls and Fincher pull you into its narrator's head and simulate his adrenalin rushes. A veteran of rock videos, Fincher is one of those filmmakers who helps make the case that MTV--along with digital editing--has transformed cinema for better as well as worse. The syntax has become more intricate. Voice-over narration, once considered uncinematic, is back in style, along with novelistic asides, digressions, fantasies, and flashbacks. To make a point, you can jazzily interject anything--even, as in Three Kings , a shot of a bullet slicing through internal organs. Films like Fight Club might not gel, but they have a breathless, free-associational quality that points to new possibilities in storytelling. Or maybe old possibilities: The language of movies hasn't seemed this unfettered since the pre-sound days of Sergei Eisenstein and Abel Gance. An actress named Hilary Swank gives one of the most rapturous performances I've ever seen as the cross-dressing Brandon Teena (a k a Teena Brandon) in Kimberly Peirce's stark and astonishingly beautiful debut feature, Boys Don't Cry . The movie opens with Teena being shorn of her hated female tresses and becoming "Brandon," who swaggers around in tight jeans and leather jackets. The joy is in watching the actor transform, and I don't just mean Swank: I mean Teena Brandon playing Brandon Teena--the role she has been longing for her whole life. 
In a redneck Nebraska bar, Brandon throws back a shot of whiskey and the gesture--a macho cliché--becomes an act of self-discovery. Every gesture does. "You're gonna have a shiner in the morning," someone tells Brandon after a barroom brawl, and he takes the news with a glee that's almost mystical: "I am????? Oh, shit!!!" he cries, grinning. That might be my favorite moment in the picture, because Swank's ecstatic expression carries us through the next hour, as Brandon acts out his urban-cowboy fantasies--"surfing" from the bumper of a pickup truck, rolling in the mud, and straddling a barstool with one hand on a brewski and the other on the shoulder of a gorgeous babe. That the people with whom Brandon feels most at home would kill him if they knew his true gender is the movie's most tragic irony--and the one that lifts it out of the realm of gay-martyr hagiography and into something more complex and irreducible: a meditation on the irrelevance of gender. Peirce's triumph is to make these scenes at once exuberant (occasionally hilarious) and foreboding, so that all the seeds of Brandon's killing are right there on the screen. John (Peter Sarsgaard), one of his future rapists and murderers, calls him "little buddy" and seems almost attracted to him; Sarsgaard's performance is a finely chiseled study of how unresolved emotion can suddenly resolve itself into violence. Though harrowing, the second half of Boys Don't Cry isn't as great as the first. The early scenes evoke elation and dread simultaneously, the later ones just dread; and the last half-hour is unrelieved torture. What keeps the movie tantalizing is Chloë Sevigny's Lana, who might or might not know that Brandon is a girl but who's entranced by him anyway. With her lank hair, hooded eyes, and air of sleepy sensuality, Sevigny--maybe even more than Swank--embodies the mystery of sex that's at the core of Boys Don't Cry . Everything she does is deliberate, ironic, slightly unreadable--and unyielding. 
She's could be saying, "I'm in this world but not of it. ... You'd never dream what's underneath." I n brief: If a friend tells you you'll love Happy Texas , rethink the friendship. This clunky mistaken-identity comedy about escaped cons who impersonate gay pageant directors doesn't even make sense on its own low farcical terms; it's mostly one lame homo joke after another. The only bright spot is Steve Zahn, who could be the offspring of Michael J. Fox and Crispin Glover if they'd mated on the set of Back to the Future (1985). It's hard to make a serious case for Lawrence Kasdan's Mumford , which has apparently flopped but which you can still catch at second- and third-tier theaters. It looks peculiar--a Norman Rockwell painting with noir shadows. And its tale of a small town healed by a depressive (Loren Dean) posing as a psychologist is full of doddering misconceptions about psychotherapy. I almost don't know why I loved it, but the relaxed pacing and the witty turns by Martin Short, Ted Danson, David Paymer, and Mary McDonnell surely helped. I can't decide if the weirdly affectless Dean is inspired or inept, but my indecision suggests why he works in the role. There's no doubt, however, about his even more depressive love object, Hope Davis, who posseses the cinema's most expressive honking-nasal voice and who slumps through the movie like the world's most lyrical anti-ballerina. Even her puffy cheeks are eloquent: They made me think of Mumford as the home of the psychological mumps.
|
D. Happy Texas
|
How can the description the protagonist’s eyes as “aflame” be understood as symbolic?
A. It is symbolic for his drive to win the war.
B. It is symbolic for his drive to find shelter.
C. It is symbolic for his drive to return home to his wife.
D. It is symbolic for his drive to cross the Rio Grande.
|
HOMECOMING BY MIGUEL HIDALGO What lasts forever? Does love? Does death?... Nothing lasts forever.... Not even forever [Transcriber's Note: This etext was produced from Worlds of If Science Fiction, April 1958. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] The large horse plodded slowly over the shifting sand. The rider was of medium size, with huge, strong hands and seemingly hollow eyes. Strange eyes, alive and aflame. They had no place in the dust-caked, tired body, yet there they were, seeking, always seeking—searching the clear horizon, and never seeming to find what they sought. The horse moved faster now. They were nearing a river; the water would be welcome on tired bodies and dry throats. He spurred his horse, and when they reached the water's edge, he dismounted and unsaddled the horse. Then both man and horse plunged headlong into the waiting torrent, deep into the cool embrace of the clear liquid. They soaked it into their pores and drank deeply of it, feeling life going once more through their veins. Satisfied, they lifted themselves from the water, and the man lay down on the yellow sand of the river bank to sleep. When he awoke, the sun was almost setting. The bright shafts of red light spilled across the sky, making the mountains silent scarlet shadows on the face of the rippling water. Quickly he gathered driftwood, and built a small fire. From his pack he removed some of the coffee he had found in one of the ruined cities. He brought water from the river in the battered coffee-pot he had salvaged, and while he waited for it to boil, he went to his horse, Conqueror, stroking his mane and whispering in his ear. Then he led him silently to a grassy slope where he hobbled him and left him for the night. In the fading light, he ate the hard beef jerky and drank the scalding coffee. 
Refreshed and momentarily content, he sat staring into the dying fire, seeing the bright glowing coals as living fingers clutching at the wood in consuming embrace, taking all and returning nothing but ashes. Slowly his eyelids yielded. His body sagged, and blood seemed to fill his brain, bathing it in a gentle, warm flood. He slept. His brain slept. But the portion of his brain called memory stirred. It was all alone; all else was at rest. Images began to appear, drawn from inexhaustible files, wherein are kept all thoughts, past, present, and future.... It was the night before he was to go overseas. World War III had been declared, and he had enlisted, receiving his old rank of captain. He was with his wife in the living room of their home. They had put the children to bed—their sons—and now sat on the couch, watching the blazing fire. It was then that he had showed it to her. "I've got something to tell you, and something to show you." He had removed the box from his pocket and opened it. And heard her cry of surprised joy. "Oh, a ring, and it's a diamond, too!" she cried in her rich, happy voice which always seemed to send a thrill through his body. "It's for you; so long as you wear it, I'll come back, even from the dead, if need be. Read the inscription." She held the ring up to the light and read aloud, "It is forever." Then she had slipped the ring on her finger and her arms around him. He held her very close, feeling the warmth from her body flowing into his and making him oblivious to everything except that she was there in his arms and that he was sinking deep, deep into a familiar sea, where he had been many times before but each time found something new and unexplored, some vastly different emotion he could never quite explain. "Wait!" she cried. "I've something for you, too." She took off the locket she wore about her neck and held it up to the shimmering light, letting it spin at the end of its chain. 
It caught the shadows of the fire and reflected them, greatly magnified, over the room. It was in the shape of a star, encrusted with emeralds, with one large ruby in the center. When he opened it, he found a picture of her in one side, and in the other a picture of the children. He took her in his arms again, and loosened her long, black hair, burying his face in it for a moment. Then he kissed her, and instantly was drawn down into the abyss which seemed to have no beginning or any end. The next morning had been bleak and gray. The mist clung to the wet, sodden ground, and the air was heavy in his lungs. He had driven off in the jeep the army had sent for him, watching her there on the porch until the mist swirled around her feet and she ran back into the house and slammed the door. His cold fingers found the locket, making a little bulge under his uniform, and the touch of it seemed to warm the blood in his veins. Three days later they had landed in Spain, merged with another division, then crossed the Pyrenees into France, and finally to Paris where the fighting had begun. Already the city was a silent graveyard, littered with the rubble of towers and cathedrals which had once been great. Three years later they were on the road to Moscow. Over a thousand miles lay behind, a dead man on every foot of those miles. Yet victory was near. The Russians had not yet used the H-bomb; the threat of annihilation by the retaliation forces had been too great. He had done well in the war, and had been decorated many times for bravery in action. Now he felt the victory that seemed to be in the air, and he had wished it would come quickly, so that he might return to her. Home. The very feel of the word was everything a battle-weary soldier needed to make him fight harder and live longer. Suddenly he had become aware of a droning, wooshing sound above him. It grew louder and louder until he knew what it was. "Heavy bombers!" 
The alarm had sounded, and the men had headed for their foxholes. But the planes had passed over, the sun glinting on their bellies, reflecting a blinding light. They were bound for bigger, more important targets. When the all-clear had sounded, the men clambered from their shelters. An icy wind swept the field, bringing with it clouds which covered the sun. A strange fear had gripped him then.... Across the Atlantic, over the pole, via Alaska, the great bombers flew. In cities, great and small, the air raid sirens sounded, high screaming noises which had jarred the people from sleep in time to die. The defending planes roared into the sky to intercept the on-rushing bombers. The horrendous battle split the universe. Many bombers fell, victims of fanatical suicide planes, or of missiles that streaked across the sky which none could escape. But too many bombers got through, dropping their deadly cargo upon the helpless cities. And not all the prayers or entreaties to any God had stopped their carnage. First there had been the red flashes that melted buildings into molten streams, and then the great triple-mushroom cloud filled with the poisonous gases that the wind swept away to other cities, where men had not died quickly and mercifully, but had rotted away, leaving shreds of putrid flesh behind to mark the places where they had crawled. The retaliatory forces had roared away to bomb the Russian cities. Few, if any, had returned. Too much blood and life were on their hands. Those who had remained alive had found a resting place on the crown of some distant mountain. Others had preferred the silent peaceful sea, where flesh stayed not long on bones, and only darting fishes and merciful beams of filtered light found their aluminum coffins. The war had ended. To no avail. Neither side had won. Most of the cities and the majority of the population of both countries had been destroyed. Even their governments had vanished, leaving a silent nothingness. 
The armies that remained were without leaders, without sources of supplies, save what they could forage and beg from an unfriendly people. They were alone now, a group of tired, battered men, for whom life held nothing. Their families had long since died, their bodies turned to dust, their spirits fled on the winds to a new world. Yet these remnants of an army must return—or at least try. Their exodus was just beginning. Somehow he had managed to hold together the few men left from his force. He had always nourished the hope that she might still be alive. And now that the war was over he had to return—had to know whether she was still waiting for him. They had started the long trek. Throughout Europe anarchy reigned. He and his men were alone. All they could do now was fight. Finally they reached the seaport city of Calais. With what few men he had left, he had commandeered a small yacht, and they had taken to the sea. After months of storms and bad luck, they had been shipwrecked somewhere off the coast of Mexico. He had managed to swim ashore, and had been found by a fisherman's family. Many months he had spent swimming and fishing, recovering his strength, inquiring about the United States. The Mexicans had spoken with fear of the land across the Rio Grande. All its great cities had been destroyed, and those that had been only partially destroyed were devoid of people. The land across the Rio Grande had become a land of shadows. The winds were poisoned, and the few people who might have survived, were crazed and maimed by the blasts. Few men had dared cross the Rio Grande into "El Mundo gris de Noviembre"—the November world. Those who had, had never returned. In time he had traveled north until he reached the Rio Grande. He had waded into the muddy waters and somehow landed on the American side. In the November world. It was rightly called. The deserts were long. 
All plant life had died, leaving to those once great fertile stretches, nothing but the sad, temporal beauty that comes with death. No people had he seen. Only the ruins of what had once been their cities. He had walked through them, and all that he had seen were the small mutant rodents, and all that he had heard was the occasional swish of the wind as it whisked along what might have been dead leaves, but wasn't. He had been on the trail for a long time. His food was nearly exhausted. The mountains were just beginning, and he hoped to find food there. He had not found food, but his luck had been with him. He had found a horse. Not a normal horse, but a mutation. It was almost twice as large as a regular horse. Its skin seemed to shimmer and was like glassy steel to the touch. From the center of its forehead grew a horn, straight out, as the horn of a unicorn. But most startling of all were the animal's eyes which seemed to speak—a silent mental speech, which he could understand. The horse had looked up as he approached it and seemed to say: "Follow me." And he had followed. Over a mountain, until they came to a pass, and finally to a narrow path which led to an old cabin. He had found it empty, but there were cans of food and a rifle and many shells. He had remained there a long time—how long he could not tell, for he could only measure time by the cycles of the sun and the moon. Finally he had taken the horse, the rifle and what food was left, and once again started the long journey home. The farther north he went, the more life seemed to have survived. He had seen great herds of horses like his own, stampeding across the plains, and strange birds which he could not identify. Yet he had seen no human beings. But he knew he was closer now. Closer to home. He recognized the land. How, he did not know, for it was much changed. A sensing, perhaps, of what it had once been. He could not be more than two days' ride away. 
Once he was through this desert, he would find her, he would be with her once again; all would be well, and his long journey would be over. The images faded. Even memory slept in a flow of warm blood. Body and mind slept into the shadows of the dawn. He awoke and stretched the cramped muscles of his body. At the edge of the water he removed his clothes and stared at himself in the rippling mirror. His muscles were lean and hard, evenly placed throughout the length of his frame. A deep ridge ran down the length of his torso, separating the muscles, making the chest broad. Well satisfied with his body, he plunged into the cold water, deep down, until he thought his lungs would burst; then swiftly returned to the clean air, tingling in every pore. He dried himself and dressed. Conqueror was eating the long grass near the stream. Quickly he saddled him. No time for breakfast. He would ride all day and the next night. And he would be home. Still northward. The hours crawled slower than a dying man. The sun was a torch that pierced his skin, seeming to melt his bones into a burning stream within his body. But day at last gave way to night, and the sun to the moon. The torch became a white pock-marked goddess, with streaming hair called stars. In the moonlight he had not seen the crater until he was at its very edge. Even then he might not have seen it had not the horse stopped suddenly. The wind swirled through its vast emptiness, slapping his face with dusty hands. For a moment he thought he heard voices—mournful, murmuring voices, echoing up from the misty depths. He turned quickly away and did not look back. Night paled into day; day burned into night. There were clouds in the sky now, and a gentle wind caressed the sweat from his tired body. He stopped. There it was! Barely discernible through the moonlight, he saw it. Home. Quickly he dismounted and ran. Now he could see a small light in the window, and he knew they were there. His breath came in hard ragged gulps. 
At the window he peered in, and as his eyes became accustomed to the inner gloom, he saw how bare the room was. No matter. Now that he was home he would build new furniture, and the house would be even better than it had been before. Then he saw her. She was sitting motionless in a straight wooden chair beside the fireplace, the feeble light cast by the embers veiling her in mauve shadows. He waited, wondering if she were.... Presently she stirred like a restless child in sleep, then moved from the chair to the pile of wood near the hearth, and replenished the fire. The wood caught quickly, sending up long tongues of flame, and forming a bright pool of light around her. His blood froze. The creature illuminated by the firelight was a monster. Large greasy scales covered its face and arms, and there was no hair on its head. Its gums were toothless cavities in a sunken, mumbling mouth. The eyes, turned momentarily toward the window, were empty of life. "No, no!" he cried soundlessly. This was not his house. In his delirium he had only imagined he had found it. He had been searching so long. He would go on searching. He was turning wearily away from the window when the movement of the creature beside the fire held his attention. It had taken a ring from one skeleton-like finger and stood, turning the ring slowly as if trying to decipher some inscription inside it. He knew then. He had come home. Slowly he moved toward the door. A great weakness was upon him. His feet were stones, reluctant to leave the earth. His body was a weed, shriveled by thirst. He grasped the doorknob and clung to it, looking up at the night sky and trying to draw strength from the wind that passed over him. It was no use. There was no strength. Only fear—a kind of fear he had never known. He fumbled at his throat, his fingers crawling like cold worms around his neck until he found the locket and the clasp which had held it safely through endless nightmare days and nights. 
He slipped the clasp and the locket fell into his waiting hand. As one in a dream, he opened it, and stared at the pictures, now in the dim moonlight no longer faces of those he loved, but grey ghosts from the past. Even the ruby had lost its glow. What had once been living fire was now a dull glob of darkness. "Nothing is forever!" He thought he had shouted the words, but only a thin sound, the sound of leaves ruffled by the wind, came back to him. He closed the locket and fastened the clasp, and hung it on the doorknob. It moved slowly in the wind, back and forth, like a pendulum. "Forever—forever. Only death is forever." He could have sworn he heard the words. He ran. Away from the house. To the large horse with a horn in the center of its forehead, like a unicorn. Once in the saddle, the spurt of strength left him. His shoulders slumped, his head dropped onto his chest. Conqueror trotted away, the sound of his hooves echoing hollowly in the vast emptiness.
|
C. It is symbolic for his drive to return home to his wife.
|
How are different domains weighted in WDIRL?
|
### Introduction
Sentiment analysis aims to predict the sentiment polarity of user-generated data with emotional orientation, such as movie reviews. The exponential increase of online reviews makes it an interesting topic in both research and industrial areas. However, reviews can span many different domains, and the collection and preprocessing of large amounts of data for new domains is often time-consuming and expensive. Therefore, cross-domain sentiment analysis is currently a hot topic, which aims to transfer knowledge from a label-rich source domain (S) to a label-scarce target domain (T). In recent years, one of the most popular frameworks for cross-domain sentiment analysis is the domain invariant representation learning (DIRL) framework BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. Methods of this framework follow the idea of extracting a domain-invariant feature representation, in which the data distributions of the source and target domains are similar. Based on the resultant representations, they learn the supervised classifier using rich labeled data from the source domain. The main difference among these methods is the applied technique to force the feature representations to be domain-invariant. However, in this work, we discover that applying DIRL may harm domain adaptation in the situation that the label distribution $\rm {P}(\rm {Y})$ shifts across domains. Specifically, let $\rm {X}$ and $\rm {Y}$ denote the input and label random variable, respectively, and $G(\rm {X})$ denote the feature representation of $\rm {X}$. We found out that when $\rm {P}(\rm {Y})$ changes across domains while $\rm {P}(\rm {X}|\rm {Y})$ stays the same, forcing $G(\rm {X})$ to be domain-invariant will make $G(\rm {X})$ uninformative to $\rm {Y}$. This will, in turn, harm the generalization of the supervised classifier to the target domain. 
In addition, for the more general condition that both $\rm {P}(\rm {Y})$ and $\rm {P}(\rm {X}|\rm {Y})$ shift across domains, we deduced a conflict between the object of making the classification error small and that of making $G(\rm {X})$ domain-invariant. We argue that the problem is worth studying since the shift of $\rm {P}(\rm {Y})$ exists in many real-world cross-domain sentiment analysis tasks BIBREF0. For example, the marginal distribution of the sentiment of a product can be affected by the overall social environment and change in different time periods; and for different products, their marginal distributions of the sentiment are naturally considered different. Moreover, there are many factors, such as the original data distribution, data collection time, and data cleaning method, that can affect $\rm {P}(\rm {Y})$ of the collected target domain unlabeled dataset. Note that in real-world cross-domain tasks, we do not know the labels of the collected target domain data. Thus, we cannot align its label distribution $\rm {P}_T(\mathbf {Y})$ with that of source domain labeled data $\rm {P}_S(\mathbf {Y})$ in advance, as done in many previous works BIBREF0, BIBREF2, BIBREF5, BIBREF4, BIBREF6, BIBREF7. To address the problem of DIRL resulting from the shift of $\rm {P}(\rm {Y})$, we propose a modification to DIRL, obtaining a weighted domain-invariant representation learning (WDIRL) framework. This framework additionally introduces a class weight $\mathbf {w}$ to weigh source domain examples by class, hoping to make $\rm {P}(\rm {Y})$ of the weighted source domain close to that of the target domain. Based on $\mathbf {w}$, it resolves domain shift in two steps. 
In the first step, it forces the marginal distribution $\rm {P}(\rm {X})$ to be domain-invariant between the target domain and the weighted source domain instead of the original source domain, obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight $\mathbf {w}$. In the second step, it resolves the shift of $\rm {P}(\rm {Y}|\rm {X})$ by adjusting $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ for label prediction in the target domain. We detail these two steps in §SECREF4. Moreover, we will illustrate how to transfer existing DIRL models to their WDIRL counterparts, taking the representative metric-based CMD model BIBREF3 and the adversarial-learning-based DANN model BIBREF2 as examples. In summary, the contributions of this paper include: ($\mathbf {i}$) We theoretically and empirically analyse the problem of DIRL for domain adaptation when the marginal distribution $\rm {P}(\rm {Y})$ shifts across domains. ($\mathbf {ii}$) We propose a novel method to address the problem and show how to incorporate it with existing DIRL models. ($\mathbf {iii}$) Experimental studies on extensive cross-domain sentiment analysis tasks show that models of our WDIRL framework can greatly outperform their DIRL counterparts. ### Preliminary and Related Work ::: Domain Adaptation
For expression consistency, in this work, we consider domain adaptation in the unsupervised setting (however, we argue that our analysis and solution also apply to the supervised and semi-supervised domain adaptation settings). In the unsupervised domain adaptation setting, there are two different distributions over $\rm {X} \times \rm {Y}$: the source domain $\rm {P}_S(\rm {X},\rm {Y})$ and the target domain $\rm {P}_T(\rm {X},\rm {Y})$. And there is a labeled data set $\mathcal {D}_S$ drawn $i.i.d.$ from $\rm {P}_S(\rm {X},\rm {Y})$ and an unlabeled data set $\mathcal {D}_T$ drawn $i.i.d.$ from the marginal distribution $\rm {P}_T(\rm {X})$. The goal of domain adaptation is to build a classifier $f:\rm {X} \rightarrow \rm {Y}$ that has good performance in the target domain using $\mathcal {D}_S$ and $\mathcal {D}_T$. For this purpose, many approaches have been proposed from different views, such as instance reweighting BIBREF8, pivot-based information passing BIBREF9, spectral feature alignment BIBREF10, subsampling BIBREF11, and of course domain-invariant representation learning BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. ### Preliminary and Related Work ::: Domain Invariant Representation Learning
Domain invariant representation learning (DIRL) is a very popular framework for performing domain adaptation in the cross-domain sentiment analysis field BIBREF23, BIBREF4, BIBREF24, BIBREF7. It is heavily motivated by the following theorem BIBREF25. Theorem 1 For a hypothesis $h$, $\mathcal {L}_T(h) \le \mathcal {L}_S(h) + d_1(\rm {P}_S(\rm {X}), \rm {P}_T(\rm {X})) + \lambda $. Here, $\mathcal {L}_S(h)$ denotes the expected loss with hypothesis $h$ in the source domain, $\mathcal {L}_T(h)$ denotes the counterpart in the target domain, $d_1$ is a measure of divergence between two distributions, and $\lambda $ is the combined error of the ideal joint hypothesis over the two domains. Based on Theorem UNKREF3 and assuming that performing feature transform on $\rm {X}$ will not increase the values of the first and third terms of the right side of Ineq. (DISPLAY_FORM4), methods of the DIRL framework apply a feature map $G$ onto $\rm {X}$, hoping to obtain a feature representation $G(\rm {X})$ that has a lower value of ${d}_{1}(\rm {P}_S(G(\rm {X})), \rm {P}_T(G(\rm {X})))$. To this end, different methods have been proposed. These methods can be roughly divided into two directions. The first direction is to design a differentiable metric to explicitly evaluate the discrepancy between two distributions. We refer to methods of this direction as metric-based DIRL methods. A representative work of this direction is the center-momentum-based model proposed by BIBREF3. In that work, they proposed a central moment discrepancy metric (CMD) to evaluate the discrepancy between two distributions. Specifically, let $\rm {X}_S$ and $\rm {X}_T$ denote $M$-dimensional random vectors on the compact interval $[a; b]^M$ over distributions $\rm {P}_S$ and $\rm {P}_T$, respectively. The CMD loss between $\rm {P}_S$ and $\rm {P}_T$ is defined by: $\text{CMD}_K(\rm {X}_S, \rm {X}_T) = \frac{1}{|b-a|} \Vert \mathbb {E}(\rm {X}_S) - \mathbb {E}(\rm {X}_T) \Vert _2 + \sum _{k=2}^{K} \frac{1}{|b-a|^k} \Vert c_k(\rm {X}_S) - c_k(\rm {X}_T) \Vert _2$. Here, $\mathbb {E}(\rm {X})$ denotes the expectation of $\rm {X}$ over its distribution, and $c_k(\rm {X}) = \big ( \mathbb {E}\big ((\rm {X}_1 - \mathbb {E}(\rm {X}_1))^k\big ), \ldots , \mathbb {E}\big ((\rm {X}_M - \mathbb {E}(\rm {X}_M))^k\big ) \big )$ is the $k$-th central momentum, where $\rm {X}_i$ denotes the $i^{th}$ dimensional variable of $\rm {X}$. The second direction is to perform adversarial training between the feature generator $G$ and a domain discriminator $D$. 
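Before turning to the adversarial direction, here is a minimal NumPy sketch of how the CMD loss above can be estimated from finite samples (our own illustration, not the authors' implementation; the array inputs stand in for draws from each domain):

```python
import numpy as np

def cmd(xs, xt, n_moments=5, a=0.0, b=1.0):
    """Empirical Central Moment Discrepancy between two samples.

    xs, xt: (n, M) arrays of features assumed to lie in [a, b]^M.
    Matches the means, then the central moments of order 2..K,
    with the order-k term scaled by 1 / |b - a|^k.
    """
    span = abs(b - a)
    mean_s, mean_t = xs.mean(axis=0), xt.mean(axis=0)
    loss = np.linalg.norm(mean_s - mean_t) / span
    for k in range(2, n_moments + 1):
        ck_s = ((xs - mean_s) ** k).mean(axis=0)  # k-th central moment per dim
        ck_t = ((xt - mean_t) ** k).mean(axis=0)
        loss += np.linalg.norm(ck_s - ck_t) / span ** k
    return loss
```

Driving this quantity toward zero while minimizing the supervised loss on source labels is the essence of the metric-based DIRL methods.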
We refer to methods of this direction as adversarial-learning-based methods. As a representative, BIBREF2 trained $D$ to distinguish the domain of a given example $x$ based on its representation $G(x)$. At the same time, they encouraged $G$ to deceive $D$, i.e., to make $D$ unable to distinguish the domain of $x$. More specifically, $D$ was trained to minimize the loss $\mathcal {L}_d = - \mathbb {E}_{x \sim \rm {P}_S(\rm {X})} [\log D(G(x))] - \mathbb {E}_{x \sim \rm {P}_T(\rm {X})} [\log (1 - D(G(x)))]$ over its trainable parameters, while in contrast $G$ was trained to maximize $\mathcal {L}_d$. According to the work of BIBREF26, this is equivalent to minimizing the Jensen-Shannon divergence BIBREF27, BIBREF28 $\text{JSD}(\rm {P}_S, \rm {P}_T)$ between $\rm {P}_S(G(\rm {X}))$ and $\rm {P}_T(G(\rm {X}))$ over $G$. Here, for a concise expression, we write $\rm {P}$ as the shorthand for $\rm {P}(G(\rm {X}))$. The task loss is the combination of the supervised learning loss $\mathcal {L}_{sup}$ and the domain-invariant learning loss $\mathcal {L}_{inv}$, which are defined on $\mathcal {D}_S$ only and on the combination of $\mathcal {D}_S$ and $\mathcal {D}_T$, respectively: $\mathcal {L} = \mathcal {L}_{sup} + \alpha \mathcal {L}_{inv}$. Here, $\alpha $ is a hyper-parameter for loss balance, and the aforementioned domain adversarial loss $\text{JSD}(\rm {P}_S, \rm {P}_T)$ and $\text{CMD}_K$ are two concrete forms of $\mathcal {L}_{inv}$. ### Problem of Domain-Invariant Representation Learning
In this work, we found out that applying DIRL may harm domain adaptation in the situation that $\rm {P}(\rm {Y})$ shifts across domains. Specifically, when $\rm {P}_S(\rm {Y})$ differs from $\rm {P}_T(\rm {Y})$, forcing the feature representation $G(\rm {X})$ to be domain-invariant may increase the value of $\mathcal {L}_S(h)$ in Ineq. (DISPLAY_FORM4) and consequently increase the value of $\mathcal {L}_T(h)$, which means a decrease in target domain performance. In the following, we start our analysis under the condition that $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$. Then, we consider the more general condition that $\rm {P}_S(\rm {X}|\rm {Y})$ also differs from $\rm {P}_T(\rm {X}|\rm {Y})$. When $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, we have the following theorem. Theorem 2 Given $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, if $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and a feature map $G$ makes $\rm {P}_S(G(\rm {X}))=\rm {P}_T(G(\rm {X}))$, then $\rm {P}_S(\rm {Y}=i|G(\rm {X}))=\rm {P}_S(\rm {Y}=i)$. Proofs appear in Appendix A. ### Problem of Domain-Invariant Representation Learning ::: Remark.
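As a concrete warm-up for the remark below, the failure mode of Theorem 2 can be simulated in a few lines. In this toy sketch (the Gaussian class-conditionals, priors, and sample sizes are all invented for illustration), $\rm {P}(\rm {X}|\rm {Y})$ is shared across domains while $\rm {P}(\rm {Y})$ shifts, and a perfectly domain-invariant but degenerate (constant) feature map leaves the source-trained classifier predicting only the source majority class:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample(n, p_pos):
    """Both domains share P(X|Y) (feature ~ N(y, 0.5)); only P(Y) differs."""
    y = (rng.random(n) < p_pos).astype(int)
    x = rng.normal(loc=y.astype(float), scale=0.5)
    return x, y

xs, ys = sample(5000, p_pos=0.8)   # source: P_S(Y=1) = 0.8
xt, yt = sample(5000, p_pos=0.3)   # target: P_T(Y=1) = 0.3

# A degenerate but perfectly "domain-invariant" map: G(x) = 0 for all x.
# On G(X), the best a source-supervised classifier can do is output the
# source prior, so every target example receives the source majority label.
majority = int(ys.mean() >= 0.5)           # argmax_y P_S(Y=y) -> class 1
target_accuracy = (yt == majority).mean()  # collapses to P_T(Y=1)
```

The target accuracy collapses to roughly $\rm {P}_T(\rm {Y}=1) \approx 0.3$, even though the raw feature $\rm {X}$ is informative in both domains.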
According to Theorem UNKREF8, we know that when $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$ and $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$, forcing $G(\rm {X})$ to be domain-invariant tends to make data of class $i$ mix with data of other classes in the space of $G(\rm {X})$. This will make it difficult for the supervised classifier to distinguish inputs of class $i$ from inputs of the other classes. Consider the extreme case in which every instance $x$ is mapped to a consistent point $g_0$ in $G(\rm {X})$. In this case, $\rm {P}_S(G(\rm {X})=g_0)= \rm {P}_T(G(\rm {X})=g_0) = 1$. Therefore, $G(\rm {X})$ is domain-invariant. As a result, the supervised classifier will assign the label $y^* = \operatornamewithlimits{arg\,max}_y \rm {P}_S(\rm {Y}=y)$ to all input examples. This is definitely unacceptable. To give a more intuitive illustration of the above analysis, we offer several empirical studies on Theorem UNKREF8 in Appendix B. When $\rm {P}_S(\rm {Y})\ne \rm {P}_T(\rm {Y})$ and $\rm {P}_S(\rm {X}|\rm {Y}) \ne \rm {P}_T(\rm {X}|\rm {Y})$, we could not obtain as strong a conclusion as Theorem UNKREF8. Instead, we deduced a conflict between the object of achieving superior classification performance and that of making features domain-invariant. Suppose that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and instances of class $i$ are completely distinguishable from instances of the rest classes in $G(\rm {X})$, i.e., $\rm {P}(G(\rm {X}=x)|\rm {Y}=i) \cdot \rm {P}(G(\rm {X}=x)|\rm {Y} \ne i) = 0$ for every $x$. In DIRL, we hope that $\rm {P}_S(G(\rm {X})) = \rm {P}_T(G(\rm {X}))$. Consider the region $x \in \mathcal {X}_i$, where $\rm {P}(G(\rm {X}=x)|\rm {Y}=i)>0$. According to the above assumption, we know that $\rm {P}(G(\rm {X}=x \in \mathcal {X}_i)|\rm {Y} \ne i) = 0$. Therefore, applying DIRL will force $\rm {P}_S(\rm {Y}=i) \rm {P}_S(G(\rm {X}=x)|\rm {Y}=i) = \rm {P}_T(\rm {Y}=i) \rm {P}_T(G(\rm {X}=x)|\rm {Y}=i)$ in the region $x \in \mathcal {X}_i$. Taking the integral of $x$ over $\mathcal {X}_i$ for both sides of the equation, we have $\rm {P}_S(\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)$. This deduction contradicts the setting that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$. 
Therefore, $G(\rm {X})$ cannot be fully class-separable while it is domain-invariant. Note that the object of the supervised learning is exactly to make $G(\rm {X})$ class-separable. Thus, this actually indicates a conflict between the supervised learning and the domain-invariant representation learning. Based on the above analysis, we can conclude that it is impossible to obtain a feature representation $G(\rm {X})$ that is class-separable and, at the same time, domain-invariant using the DIRL framework, when $\rm {P}(\rm {Y})$ shifts across domains. However, the shift of $\rm {P}(\rm {Y})$ exists in many cross-domain sentiment analysis tasks. Therefore, this problem of DIRL is worth studying and dealing with. ### Weighted Domain Invariant Representation Learning
According to the above analysis, we proposed a weighted version of DIRL to address the problem caused by the shift of $\rm {P}(\rm {Y})$ to DIRL. The key idea of this framework is to first align $\rm {P}(\rm {Y})$ across domains before performing domain-invariant learning, and then to take into account the shift of $\rm {P}(\rm {Y})$ in the label prediction procedure. Specifically, it introduces a class weight $\mathbf {w}$ to weigh source domain examples by class. Based on the weighted source domain, the domain shift problem is resolved in two steps. In the first step, it applies DIRL on the target domain and the weighted source domain, aiming to alleviate the influence of the shift of $\rm {P}(\rm {Y})$ during the alignment of $\rm {P}(\rm {X}|\rm {Y})$. In the second step, it uses $\mathbf {w}$ to reweigh the supervised classifier $\rm {P}_S(\rm {Y}|\rm {X})$ obtained in the first step for target domain label prediction. We detail these two steps in §SECREF10 and §SECREF14, respectively. ### Weighted Domain Invariant Representation Learning ::: Align $\rm {P}(\rm {X}|\rm {Y})$ with Class Weight
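In code, the change this subsection introduces is small: every source-side statistic in the invariance loss becomes a class-weighted mixture of class-conditional statistics. The sketch below (our own estimator; the function name is ours) shows this for the first-order, mean-matching term only:

```python
import numpy as np

def weighted_mean_discrepancy(xs, ys, xt, w, a=0.0, b=1.0):
    """First-order term of a class-weighted CMD-style loss.

    The plain source mean E[X_S] is replaced by the mixture
    sum_i w[i] * P_S(Y=i) * E[X_S | Y_S=i], estimated from labeled
    source data (xs, ys); the target mean uses unlabeled xt.
    """
    mix = np.zeros(xs.shape[1])
    for i, c in enumerate(np.unique(ys)):
        mask = ys == c
        # mask.mean() estimates P_S(Y=c); xs[mask].mean(...) estimates E[X_S|Y_S=c]
        mix += w[i] * mask.mean() * xs[mask].mean(axis=0)
    return np.linalg.norm(mix - xt.mean(axis=0)) / abs(b - a)
```

With $\mathbf {w} = \mathbf {1}$ this reduces to ordinary mean matching; optimizing $\mathbf {w}$ toward $\rm {P}_T(\rm {Y}=i)/\rm {P}_S(\rm {Y}=i)$ removes the label-shift component of the discrepancy.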
The motivation behind this practice is to adjust the data distribution of the source domain or the target domain to alleviate the shift of $\rm {P}(\rm {Y})$ across domains before applying DIRL. Since we only have labels of source domain data, we choose to adjust the data distribution of the source domain. To achieve this purpose, we introduce a trainable class weight $\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\mathbf {w}_i > 0$. Specifically, we hope that $\mathbf {w}_i \rm {P}_S(\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)$ for every class $i$, and we denote by $\mathbf {w}^*$ the value of $\mathbf {w}$ that makes this equation hold. We shall see that when $\mathbf {w}=\mathbf {w}^*$, DIRL is to align $\rm {P}_S(G(\rm {X})|\rm {Y})$ with $\rm {P}_T(G(\rm {X})|\rm {Y})$ without the shift of $\rm {P}(\rm {Y})$. According to our analysis, we know that due to the shift of $\rm {P}(\rm {Y})$, there is a conflict between the training objects of the supervised learning $\mathcal {L}_{sup}$ and the domain-invariant learning $\mathcal {L}_{inv}$. And the degree of conflict will decrease as $\rm {P}_S(\rm {Y})$ gets close to $\rm {P}_T(\rm {Y})$. Therefore, during model training, $\mathbf {w}$ is expected to be optimized toward $\mathbf {w}^*$ since it will make $\rm {P}(\rm {Y})$ of the weighted source domain close to $\rm {P}_T(\rm {Y})$, so as to resolve the conflict. We now show how to transfer existing DIRL models to their WDIRL counterparts with the above idea. Let $\mathbb {S}:\rm {P} \rightarrow {R}$ denote a statistic function defined over a distribution $\rm {P}$. For example, the expectation function $\mathbb {E}(\rm {X})$ in $\mathbb {E}(\rm {X}_S) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}))$ is a concrete instantiation of $\mathbb {S}$. In general, to transfer models from DIRL to WDIRL, we should replace $\mathbb {S}(\rm {P}_S(\rm {X}))$ defined in $\mathcal {L}_{inv}$ with $\sum _i \mathbf {w}_i \rm {P}_S(\rm {Y}=i) \mathbb {S}(\rm {P}_S(\rm {X}|\rm {Y}=i))$. Take the CMD metric as an example. 
In WDIRL, the revised form of ${\text{CMD}}_K$ is defined by: Here, $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}|\rm {Y}=i))$ denotes the expectation of $\rm {X}$ over the distribution $\rm {P}_S(\rm {X}|\rm {Y}=i)$. Note that both $\rm {P}_S(\rm {Y}=i)$ and $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i)$ can be estimated using source labeled data, and $\mathbb {E}(\rm {X}_T)$ can be estimated using target unlabeled data. As for adversarial-learning-based DIRL methods, e.g., DANN BIBREF2, the revised domain-invariant loss is defined by: During model training, $D$ is optimized to minimize $\hat{\mathcal {L}}_d$, while $G$ and $\mathbf {w}$ are optimized to maximize $\hat{\mathcal {L}}_d$. In the following, we denote by $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$ the equivalent loss defined over $G$ for the revised version of domain adversarial learning. The general task loss in WDIRL is defined by: where $\hat{\mathcal {L}}_{inv}$ is a unified representation of the domain-invariant loss in WDIRL, such as $\widehat{\text{CMD}}_K$ and $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$. ### Weighted Domain Invariant Representation Learning ::: Align @!START@$\rm {P}(\rm {Y}|\rm {X})$@!END@ with Class Weight
In the above step, we align $\rm {P}(\rm {X}|\rm {Y})$ across domains by performing domain-invariant learning on the class-weighted source domain and the original target domain. In this step, we deal with the shift of $\rm {P}(\rm {Y})$. Suppose that we have successfully resolved the shift of $\rm {P}(\rm {X}|\rm {Y})$ with $G$, i.e., $\rm {P}_S(G(\rm {X})|\rm {Y})=\rm {P}_T(G(\rm {X})|\rm {Y})$. Then, according to the work of BIBREF29, we have: where $\gamma (\rm {Y}=i)={\rm {P}_T(\rm {Y}=i)}/{\rm {P}_S(\rm {Y}=i)}$. Of course, in most real-world tasks, we do not know the value of $\gamma (\rm {Y}=i)$. However, note that $\gamma (\rm {Y}=i)$ is exactly the expected class weight $\mathbf {w}^*_i$. Therefore, a natural practice in this step is to estimate $\gamma (\rm {Y}=i)$ with the $\mathbf {w}_i$ obtained in the first step and estimate $\rm {P}_T(\rm {Y}|G(\rm {X}))$ with: In summary, to transfer methods of the DIRL paradigm to WDIRL, we should: first, revise the definition of $\mathcal {L}_{inv}$, obtaining its corresponding WDIRL form $\hat{\mathcal {L}}_{inv}$; then, perform supervised learning and domain-invariant representation learning on $\mathcal {D}_S$ and $\mathcal {D}_T$ according to Eq. (DISPLAY_FORM13), obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight vector $\mathbf {w}$; and finally, adjust $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ according to Eq. (DISPLAY_FORM16) to obtain the target domain classifier $\rm {P}_T(\rm {Y}|\rm {X}; \mathbf {\Phi })$. ### Experiment ::: Experiment Design
Through the experiments, we empirically studied our analysis of DIRL and the effectiveness of our proposed solution in dealing with the problem DIRL suffers from. In addition, we studied the impact of each step described in §SECREF10 and §SECREF14 on our proposed solution. To perform this study, we carried out a performance comparison between the following models:

- SO: the source-only model trained using source domain labeled data without any domain adaptation.
- CMD: the central-moment-discrepancy-based domain adaptation model BIBREF3 of the original DIRL framework, which implements $\mathcal {L}_{inv}$ with $\text{CMD}_K$.
- DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework, which implements $\mathcal {L}_{inv}$ with $\text{JSD}(\rm {P}_S, \rm {P}_T)$.
- $\text{CMD}^\dagger $: the weighted version of the CMD model that applies only the first step (described in §SECREF10) of our proposed method.
- $\text{DANN}^\dagger $: the weighted version of the DANN model that applies only the first step of our proposed method.
- $\text{CMD}^{\dagger \dagger }$: the weighted version of the CMD model that applies both the first and the second (described in §SECREF14) steps of our proposed method.
- $\text{DANN}^{\dagger \dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method.
- $\text{CMD}^{*}$: a variant of $\text{CMD}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ (estimated from target labeled data) to $\mathbf {w}$ and fixes this value during model training.
- $\text{DANN}^{*}$: a variant of $\text{DANN}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ to $\mathbf {w}$ and fixes this value during model training.

Intrinsically, SO provides an empirical lower bound for the domain adaptation methods, while $\text{CMD}^{*}$ and $\text{DANN}^{*}$ provide empirical upper bounds for $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$, respectively.
In addition, by comparing the performance of $\text{CMD}^{*}$ and $\text{DANN}^{*}$ with that of $\text{SO}$, we can assess the effectiveness of the DIRL framework when $\rm {P}(\rm {Y})$ does not shift across domains. By comparing $\text{CMD}^\dagger $ with $\text{CMD}$, or $\text{DANN}^\dagger $ with $\text{DANN}$, we can assess the effectiveness of the first step of our proposed method. By comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}^{\dagger }$, or $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}^{\dagger }$, we can assess the impact of the second step of our proposed method. Finally, by comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}$, or $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}$, we can assess the overall effectiveness of our proposed solution. ### Experiment ::: Dataset and Task Design
We conducted experiments on the Amazon reviews dataset BIBREF9, a benchmark dataset in the cross-domain sentiment analysis field. It contains Amazon product reviews from four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded as a 5,000-dimensional feature vector of bag-of-words unigrams and bigrams. ### Experiment ::: Dataset and Task Design ::: Binary-Class.
From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\rightarrow $D, B$\rightarrow $E, B$\rightarrow $K, D$\rightarrow $B, D$\rightarrow $E, D$\rightarrow $K, E$\rightarrow $B, E$\rightarrow $D, E$\rightarrow $K, K$\rightarrow $B, K$\rightarrow $D, K$\rightarrow $E. Following the setting of previous works, we treated a review as class `1' if it was rated up to 3 stars, and as class `2' if it was rated 4 or 5 stars. For each task, $\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\mathcal {D}_T$ consisted of 1,500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\mathcal {D}_T$ reveals the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\rm {P}(\rm {Y})$ shift, evaluated by the maximum value of $\rm {P}_S(\rm {Y}=i)/\rm {P}_T(\rm {Y}=i), \forall i=1, \cdots , L$. Please refer to Appendix C for more detail on the task design for this study. ### Experiment ::: Dataset and Task Design ::: Multi-Class.
We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\mathcal {D}_S$ contained 1,000 examples of each class, and $\mathcal {D}_T$ consisted of 500 examples of class 1, 1,500 examples of class 2, and 1,000 examples of class 3. Similarly, we controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. ### Experiment ::: Implementation Detail
For all studied models, we implemented $G$ and $f$ using the same architectures as those in BIBREF3. For the DANN-based methods (i.e., DANN, $\text{DANN}^{\dagger }$, $\text{DANN}^{\dagger \dagger }$, and $\text{DANN}^{*}$), we implemented the discriminator $D$ using a 50-dimensional hidden layer with ReLU activation functions and a linear classification layer. The hyper-parameter $K$ of $\text{CMD}_K$ and $\widehat{\text{CMD}}_K$ was set to 5, as suggested by BIBREF3. Model optimization was performed using RmsProp BIBREF30. The initial learning rate of $\mathbf {w}$ was set to 0.01, while that of the other parameters was set to 0.005 for all tasks. The hyper-parameter $\alpha $ was set to 1 for all of the tested models. We searched for this value in the range $\alpha =[1, \cdots , 10]$ on task B $\rightarrow $ K. Within the search, the label distribution was set to be uniform, i.e., $\rm {P}(\rm {Y}=i)=1/L$, for both domains B and K. We chose the value that maximized the performance of CMD on testing data of domain K. One may notice that this practice conflicts with the setting of unsupervised domain adaptation, in which we have no labeled data of the target domain for training or development. However, we argue that this practice does not make the model comparison unfair, since all of the tested models shared the same value of $\alpha $ and $\alpha $ was not directly fine-tuned on any tested task. With the same consideration, for every tested model, we reported its best performance achieved on testing data of the target domain during training. To initialize $\mathbf {w}$, we used the label predictions of the source-only model. Specifically, let $\rm {P}_{SO}(\rm {Y}|\rm {X}; \mathbf {\theta }_{SO})$ denote the trained source-only model. We initialized $\mathbf {w}_i$ by: Here, $\mathbb {I}$ denotes the indicator function.
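A minimal sketch of this initialization (our own code, since the equation itself is elided here; it assumes the indicator-based formula reduces to dividing the hard-prediction class frequencies on unlabeled target data by the source class prior):

```python
import numpy as np

def init_class_weights(target_probs, p_s):
    """Assumed form: w_i^0 = [(1/|D_T|) sum_x I(argmax_j P_SO(Y=j|x) = i)] / P_S(Y=i),
    where I is the indicator function and target_probs holds the
    source-only model's posteriors on target examples (shape N x L)."""
    target_probs = np.asarray(target_probs)
    preds = target_probs.argmax(axis=1)          # hard predictions on D_T
    L = target_probs.shape[1]
    p_t_hat = np.bincount(preds, minlength=L) / len(preds)
    return p_t_hat / np.asarray(p_s)
```

When the source-only model is reasonably accurate, this starts $\mathbf {w}$ near $\mathbf {w}^* = \rm {P}_T(\rm {Y})/\rm {P}_S(\rm {Y})$.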
To offer an intuitive understanding of this strategy, we report the performance of WCMD$^{\dagger \dagger }$ over different initializations of $\mathbf {w}$ on 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) binary-class domain adaptation tasks in Figure FIGREF33. Here, we say that domains B and D form one group, and domains E and K form another, since B and D are similar, as are E and K, but the two groups differ from one another BIBREF9. Note that $\rm {P}_{S}(\rm {Y}=1)=0.5$ is a constant, estimated using source labeled data. From the figure, we can draw three main observations. First, WCMD$^{\dagger \dagger }$ generally outperformed its CMD counterparts under different initializations of $\mathbf {w}$. Second, it was better to initialize $\mathbf {w}$ with a relatively balanced value, i.e., $\mathbf {w}_i \rm {P}_S(\rm {Y}=i) \rightarrow \frac{1}{L}$ (in this experiment, $L=2$). Finally, $\mathbf {w}^0$ was often a good initialization of $\mathbf {w}$, indicating the effectiveness of the above strategy. ### Experiment ::: Main Result
Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can draw the following observations. First, CMD and DANN underperformed the source-only model (SO) on all 12 tested tasks, indicating that DIRL in the studied situation degrades domain adaptation performance rather than improving it. This observation confirms our analysis. Second, $\text{CMD}^{\dagger \dagger }$ consistently outperformed CMD and SO. This shows the effectiveness of our proposed method for addressing the problem of the DIRL framework in the studied situation. A similar conclusion can be drawn by comparing the performance of $\text{DANN}^{\dagger \dagger }$ with that of DANN and SO. Third, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ consistently outperformed $\text{CMD}$ and DANN, respectively, which shows the effectiveness of the first step of our proposed method. Finally, on most of the tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ outperformed $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. Figure FIGREF35 depicts the relative improvement, e.g., $(\text{Acc}(\text{CMD})-\text{Acc}(\text{SO}))/\text{Acc}(\text{SO})$, of the domain adaptation methods over the SO baseline under different degrees of $\rm {P}(\rm {Y})$ shift, on two binary-class domain adaptation tasks (refer to Appendix C for results of the other models on other tasks). From the figure, we can see that the performance of CMD generally degraded as the $\rm {P}(\rm {Y})$ shift increased. In contrast, our proposed model $\text{CMD}^{\dagger \dagger }$ was robust to varying degrees of $\rm {P}(\rm {Y})$ shift. Moreover, it achieved nearly the upper-bound performance characterized by $\text{CMD}^{*}$. This again verifies the effectiveness of our solution.
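The plotted quantity is simply the relative gain over the source-only baseline; as a one-line sketch (function name ours):

```python
def relative_improvement(acc_model, acc_so):
    """Relative improvement over the SO baseline,
    e.g. (Acc(CMD) - Acc(SO)) / Acc(SO)."""
    return (acc_model - acc_so) / acc_so
```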
Table TABREF34 reports model performance on the 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and the 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) multi-class domain adaptation tasks (refer to Appendix D for results on the other tasks). From this table, we observe that on some tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ did not greatly outperform, or even slightly underperformed, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. A possible explanation of this phenomenon is that the distribution of $\mathcal {D}_T$ also differs from that of the target domain testing dataset. Therefore, the value of $\mathbf {w}$ estimated or learned using $\mathcal {D}_T$ is not fully suitable for the testing dataset. This explanation is supported by the observation that $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ also slightly outperformed $\text{CMD}^{*}$ and $\text{DANN}^{*}$ on these tasks, respectively. ### Conclusion
In this paper, we studied the problem that the popular domain-invariant representation learning (DIRL) framework suffers from in domain adaptation when $\rm {P}(\rm {Y})$ changes across domains. To address the problem, we proposed a weighted version of DIRL (WDIRL). We showed that existing methods of the DIRL framework can be easily transferred to our WDIRL framework. Extensive experimental studies on benchmark cross-domain sentiment analysis datasets verified our analysis and showed the effectiveness of our proposed solution.

Table 1: Mean accuracy ± standard deviation over five runs on the 12 binary-class cross-domain tasks.

Figure 1: Mean accuracy of WCMD†† over different initializations of w. The empirical optimum value of w makes w1PS(Y = 1) = 0.75. The dotted line in the same color denotes the performance of the CMD model, and `w0' annotates the performance of WCMD†† when initializing w with w0.

Figure 2: Relative improvement over the SO baseline under different degrees of P(Y) shift on the B→D and B→K binary-class domain adaptation tasks.

Table 2: Mean accuracy ± standard deviation over five runs on the 2 within-group and 2 cross-group multi-class domain adaptation tasks.
### Introduction
To understand the progress towards multimedia vision and language understanding, a visual Turing test was proposed by BIBREF0, aimed at visual question answering BIBREF1. Visual Dialog BIBREF2 is a natural extension of VQA. Current dialog systems, as evaluated in BIBREF3, show that AI-AI dialog systems improve when trained between bots, but this does not translate into actual improvement for human-AI dialog. This is because the questions generated by bots are not natural (human-like) and therefore do not lead to improved human dialog. It is therefore imperative to improve the quality of questions so that dialog agents perform well in human interactions. Further, BIBREF4 show that unanswered questions can be used to improve VQA, image captioning, and object classification. An interesting line of work in this respect is that of BIBREF5. Here the authors proposed the challenging task of generating natural questions for an image. One aspect central to a question is the context relevant to generating it. However, this context changes for every image. As can be seen in Figure FIGREF1, an image with a person on a skateboard would result in questions related to the event, whereas for a little girl, the questions could be related to age rather than the action. How can one provide widely varying context for generating questions? To solve this problem, we use the context obtained from exemplars; specifically, we use the difference between relevant and irrelevant exemplars. We consider different contexts in the form of Location, Caption, and Part-of-Speech tags. The human-annotated questions are (b) for the first image and (a) for the second image. Our method implicitly uses a differential context, obtained through supporting and contrasting exemplars, to obtain a differentiable embedding. This embedding is used by a question decoder to decode the appropriate question.
As discussed below, we observe that this implicit differential context performs better than an explicit keyword-based context. The difference between the two approaches is illustrated in Figure FIGREF2. This also allows for better optimization, as we can backpropagate through the whole network. We provide detailed empirical evidence to support our hypothesis. As seen in Figure FIGREF1, our method generates natural questions and improves over the state-of-the-art techniques for this problem. To summarize, we propose a multimodal differential network to solve the task of visual question generation. Our contributions are: (1) a method to incorporate exemplars to learn differential embeddings that capture the subtle differences between supporting and contrasting examples and aid in generating natural questions; (2) multimodal differential embeddings, since image or text alone does not capture the whole context, and we show that these embeddings outperform ablations that incorporate cues such as image-only, tag, or place information; (3) a thorough comparison of the proposed network against state-of-the-art benchmarks, along with a user study and statistical significance test. ### Related Work
Generating a natural and engaging question is an interesting and challenging task for a smart robot (like a chat-bot). It is a step towards having a natural visual dialog, instead of the widely prevalent visual question answering bots. Further, the ability to ask natural questions based on different contexts is also useful for artificial agents that interact with visually impaired people. While the task of generating questions automatically is well studied in the NLP community, it has been relatively less studied for image-related natural questions. This is still a difficult task BIBREF5 that has gained recent interest in the community. Recently, there have been many deep-learning-based approaches for solving the text-based question generation task, such as BIBREF6. Further, BIBREF7 proposed a method to generate a factoid question based on a triplet set {subject, relation, object} to capture the structural representation of text and the corresponding generated question. These methods, however, were limited to text-based question generation. There has been extensive work in the vision and language domain on image captioning, paragraph generation, Visual Question Answering (VQA), and Visual Dialog. BIBREF8, BIBREF9, BIBREF10 proposed conventional machine learning methods for image description. BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18 generated descriptive sentences from images with the help of deep networks. There have been many works on Visual Dialog BIBREF19, BIBREF20, BIBREF2, BIBREF21, BIBREF22. A variety of methods have been proposed by BIBREF23, BIBREF24, BIBREF1, BIBREF25, BIBREF26, BIBREF27 for the VQA task, including attention-based methods BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34.
However, Visual Question Generation (VQG) is a separate task of interest in its own right that has not been as well explored BIBREF5. It is a novel vision-based task aimed at generating natural and engaging questions for an image. BIBREF35 proposed a method for continuously generating questions from an image and subsequently answering those questions. The works most closely related to ours are those of BIBREF5 and BIBREF36. In the former, the authors used an encoder-decoder based framework, whereas in the latter, the authors extend it by using a variational-autoencoder-based sequential routine to obtain natural questions by sampling the latent variable. ### Approach
In this section, we clarify the basis for our approach of using exemplars for question generation. To use exemplars in our method, we need to ensure that they provide context and that our method selects valid exemplars. We first analyze whether the exemplars are valid, as illustrated in Figure FIGREF3. We used a pre-trained ResNet-101 BIBREF37 object classification network on the target, supporting, and contrasting images. We observed that the supporting image and target image have quite similar probability scores, while the contrasting exemplar image has completely different probability scores. Exemplars aim to provide appropriate context. To better understand the context, we experimented by analyzing the questions generated through an exemplar. We observed that a supporting exemplar could indeed identify relevant tags (cows in Figure FIGREF3) for generating questions. We improve the use of exemplars through a triplet network. This network ensures that the joint image-caption embedding for the supporting exemplar is closer to that of the target image-caption, and vice-versa. We empirically evaluated whether question generation benefits more from an explicit approach that uses the differential set of tags as a one-hot encoding, or from the implicit embedding obtained via the triplet network. We observed that the implicit multimodal differential network empirically provided better context for generating questions. Our understanding of this phenomenon is that both target and supporting exemplars generate similar questions, whereas contrasting exemplars generate questions very different from the target question. The triplet network that enhances the joint embedding thus helps improve generation of the target question. These results are better than those with the explicitly obtained context tags, as can be seen in Figure FIGREF2. We now explain our method in detail. ### Method
The task in visual question generation (VQG) is to generate a natural language question INLINEFORM0 for an image INLINEFORM1 . We consider a set of pre-generated context INLINEFORM2 from image INLINEFORM3 . We maximize the conditional probability of the generated question given the image and context as follows: DISPLAYFORM0 where INLINEFORM0 is a vector of all parameters of our model and INLINEFORM1 is the ground truth question. The log probability of the question is calculated as a joint probability over INLINEFORM2 via the chain rule. For a particular question, the above term is obtained as: INLINEFORM3 where INLINEFORM0 is the length of the sequence and INLINEFORM1 is the INLINEFORM2 word of the question. We have dropped INLINEFORM3 for simplicity. Our method is based on a sequence-to-sequence network BIBREF38 , BIBREF12 , BIBREF39 . A sequence-to-sequence network takes a text sequence as input and output; in our method, we take an image as input and generate a natural question as output. The architecture of our model is shown in Figure FIGREF4 . Our model contains three main modules: (a) a Representation Module that extracts multimodal features, (b) a Mixture Module that fuses the multimodal representation, and (c) a Decoder that generates the question using an LSTM-based language model. During inference, we sample a question word INLINEFORM0 from the softmax distribution and continue sampling until the end token or the maximum question length is reached. We experimented with both sampling and argmax and found that argmax works better. This result is provided in the supplementary material. ### Multimodal Differential Network
The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module. We used an efficient KNN-based approach (a k-d tree) with the Euclidean metric to obtain the exemplars. This is done through a coarse quantization of the nearest neighbors of the training examples into 50 clusters, selecting the nearest as the supporting exemplar and the farthest as the contrasting exemplar. We experimented with ITML-based metric learning BIBREF40 for image features; surprisingly, the KNN-based approach outperforms it. We also tried random exemplars and different numbers of exemplars and found that INLINEFORM0 works best. We provide these results in the supplementary material. We use a triplet network BIBREF41 , BIBREF42 in our representation module. We referred to the similar work in BIBREF34 when building our triplet network. The triplet network consists of three sub-parts: the target, supporting, and contrasting networks. All three networks share the same parameters. Given an image INLINEFORM0 we obtain an embedding INLINEFORM1 using a CNN parameterized by a function INLINEFORM2 where INLINEFORM3 are the weights of the CNN. The caption INLINEFORM4 results in a caption embedding INLINEFORM5 through an LSTM parameterized by a function INLINEFORM6 where INLINEFORM7 are the weights of the LSTM. This is shown in part 1 of Figure FIGREF4 . Similarly we obtain image embeddings INLINEFORM8 & INLINEFORM9 and caption embeddings INLINEFORM10 & INLINEFORM11 . DISPLAYFORM0 The Mixture module brings the image and caption embeddings into a joint feature embedding space. The input to the module is the embeddings obtained from the representation module. We evaluated four different approaches for fusion, viz., joint, element-wise addition, Hadamard, and attention. Each of these variants receives the image features INLINEFORM0 & the caption embedding INLINEFORM1 , and outputs a fixed-dimensional feature vector INLINEFORM2 .
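Stepping back to the exemplar retrieval and the triplet objective described in this section, both can be sketched minimally as follows (all names are ours; we use plain nearest/farthest selection without the 50-cluster quantization, and a standard hinge-form triplet loss as an assumed stand-in for the elided equations):

```python
import numpy as np

def pick_exemplars(features, target_idx):
    """The nearest other example in feature space supports the target;
    the farthest one contrasts it (cluster quantization omitted)."""
    d = np.linalg.norm(features - features[target_idx], axis=1)
    d[target_idx] = np.inf            # exclude the target itself
    supporting = int(np.argmin(d))
    d[target_idx] = -np.inf
    contrasting = int(np.argmax(d))
    return supporting, contrasting

def triplet_loss(target, supporting, contrasting, margin=1.0):
    """Hinge triplet loss: pull the supporting embedding toward the
    target and push the contrasting one at least `margin` farther."""
    d_pos = np.linalg.norm(target - supporting)
    d_neg = np.linalg.norm(target - contrasting)
    return max(d_pos - d_neg + margin, 0.0)
```

The loss is zero once the contrasting embedding is already `margin` farther from the target than the supporting one, matching the intuition that only violating triplets drive the joint embedding.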
The Joint method concatenates INLINEFORM3 & INLINEFORM4 and maps them to a fixed-length feature vector INLINEFORM5 as follows: DISPLAYFORM0 where INLINEFORM0 is the 4096-dimensional convolutional feature from the FC7 layer of a pretrained VGG-19 net BIBREF43 , INLINEFORM1 are the weights and INLINEFORM2 is the bias for the different layers, and INLINEFORM3 is the concatenation operator. Similarly, we obtain context vectors INLINEFORM0 & INLINEFORM1 for the supporting and contrasting exemplars. Details of the other fusion methods are given in the supplementary material. The aim of the triplet network BIBREF44 is to obtain context vectors that bring the supporting exemplar embeddings closer to the target embedding and vice-versa. This is obtained as follows: DISPLAYFORM0 where INLINEFORM0 is the Euclidean distance between two embeddings INLINEFORM1 and INLINEFORM2 , and M is the training dataset containing all possible triplets. INLINEFORM3 is the triplet loss function. It decomposes into two terms, one that brings the supporting sample closer and one that pushes the contrasting sample further away. This is given by DISPLAYFORM0 Here INLINEFORM0 represent the Euclidean distances between the target and supporting sample, and the target and contrasting sample, respectively. The parameter INLINEFORM1 controls the separation margin between these and is obtained through validation data. ### Decoder: Question Generator
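The decoding procedure described below, greedy argmax generation until the STOP token, can be sketched independently of the LSTM internals (`step_fn` is an assumed single-step interface of our own, not from the paper):

```python
def greedy_decode(step_fn, start_token, stop_token, max_len=20):
    """Feed the most probable token back in at each step (argmax
    decoding) until STOP or the length limit.
    step_fn(token, state) -> (probs, new_state)."""
    tokens, state, tok = [], None, start_token
    for _ in range(max_len):
        probs, state = step_fn(tok, state)
        tok = max(range(len(probs)), key=probs.__getitem__)
        if tok == stop_token:
            break
        tokens.append(tok)
    return tokens
```

Swapping the `max` for a draw from `probs` gives the sampling variant the paper compares against; the authors report that argmax works better.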
The role of the decoder is to predict the probability of a question, given INLINEFORM0 . An RNN provides a convenient way to condition on the previous state value using a fixed-length hidden vector. The conditional probability of a question token at a particular time step INLINEFORM1 is modeled using an LSTM, as used in machine translation BIBREF38 . At time step INLINEFORM2 , the conditional probability is denoted by INLINEFORM3 , where INLINEFORM4 is the hidden state of the LSTM cell at time step INLINEFORM5 , which is conditioned on all the previously generated words INLINEFORM6 . The word with maximum probability in the distribution of the LSTM cell at step INLINEFORM7 is fed as input to the LSTM cell at step INLINEFORM8 , as shown in part 3 of Figure FIGREF4 . At INLINEFORM9 , we feed the output of the mixture module to the LSTM. INLINEFORM10 are the predicted question tokens for the input image INLINEFORM11 . Here, we use INLINEFORM12 and INLINEFORM13 as the special START and STOP tokens, respectively. The softmax probability for the predicted question token at different time steps is given by the following equations, where LSTM refers to the standard LSTM cell equations: INLINEFORM14 where INLINEFORM0 is the probability distribution over all question tokens and INLINEFORM1 is the cross-entropy loss. ### Cost function
Our objective is to minimize the total loss, that is, the sum of the cross-entropy loss and the triplet loss over all training examples. The total loss is: DISPLAYFORM0 where INLINEFORM0 is the total number of samples and INLINEFORM1 is a constant that balances the two losses. INLINEFORM2 is the triplet loss function EQREF13 . INLINEFORM3 is the cross-entropy loss between the predicted and ground truth questions and is given by: INLINEFORM4 where INLINEFORM0 is the total number of question tokens and INLINEFORM1 is the ground truth label. The code for the MDN-VQG model is provided. ### Variations of Proposed Method
While we advocate the use of the multimodal differential network for generating embeddings that can be used by the decoder for generating questions, we also evaluate several variants of this architecture. These are as follows: Tag Net: In this variant, we extract the part-of-speech (POS) tags for the words present in the caption and obtain a tag embedding by considering different methods of combining the one-hot vectors. Further details and experimental results are in the supplementary material. This tag embedding is then combined with the image embedding and provided to the decoder network. Place Net: In this variant, we explore obtaining embeddings based on visual scene understanding. This is obtained using a pre-trained PlaceCNN BIBREF45 trained to classify 365 different types of scene categories. We then combine the activation map for the input image and the VGG-19-based place embedding to obtain the joint embedding used by the decoder. Differential Image Network: Instead of using the multimodal differential network for generating embeddings, we also evaluate a differential image network for the same purpose. In this case, the embedding does not include the caption but is based only on the image feature. We also experimented with using multiple exemplars and random exemplars. Further details, pseudocode, and results regarding these are in the supplementary material. ### Dataset
We conduct our experiments on the Visual Question Generation (VQG) dataset BIBREF5 , which contains human-annotated questions based on images of the MS-COCO dataset. This dataset was developed for generating natural and engaging questions based on common-sense reasoning. We use the VQG-COCO dataset for our experiments, which contains a total of 2,500 training images, 1,250 validation images, and 1,250 testing images. Each image in the dataset has five natural questions and five ground truth captions. It is worth noting that the work of BIBREF36 also used questions from the VQA dataset BIBREF1 for training, whereas the work of BIBREF5 uses only the VQG-COCO dataset. The VQA-1.0 dataset is also built on images from the MS-COCO dataset. It contains a total of 82,783 images for training, 40,504 for validation, and 81,434 for testing. Each image is associated with 3 questions. We used a pretrained caption generation model BIBREF13 to extract captions for the VQA dataset, as human-annotated captions are not available in it. We also obtain good results on the VQA dataset (as shown in Table TABREF26 ), which shows that our method does not require the presence of ground truth captions. We train our model separately for the VQG-COCO and VQA datasets. ### Inference
We used the 1250 validation images to tune the hyperparameters and report results on the test set of the VQG-COCO dataset. During inference, we use the representation module to find the embeddings for the image and the ground-truth caption without using the supporting and contrasting exemplars. The mixture module provides the joint representation of the target image and ground-truth caption. Finally, the decoder takes the joint features and generates the question. We also experimented with captions generated by an image-captioning network BIBREF13 for the VQG-COCO dataset; those results, along with training details, are given in the supplementary material.

### Experiments
We evaluate our proposed MDN method in the following ways: first, against the other variants described in sections SECREF19 and SECREF10; second, against state-of-the-art methods on the VQA 1.0 and VQG-COCO datasets. We perform a user study to gauge human opinion on the naturalness of the generated questions and analyze the word statistics in Figure FIGREF22. This is an important test, as humans are the best judges of naturalness. We further assess the statistical significance of the various ablations as well as the state-of-the-art models. The quantitative evaluation is conducted using standard metrics: BLEU BIBREF46, METEOR BIBREF47, ROUGE BIBREF48 and CIDEr BIBREF49. Although these metrics have not been shown to correlate with the `naturalness' of a question, they still provide a reasonable quantitative basis for comparison. Here we only provide BLEU1 scores; the remaining BLEU-n scores are given in the supplementary material. We observe that the proposed MDN provides improved embeddings to the decoder. We believe these embeddings capture instance-specific differential information that helps guide question generation. Details regarding the metrics are given in the supplementary material.

### Ablation Analysis
We considered the different variations of our method mentioned in section SECREF19 and the various ways of obtaining the joint multimodal embedding described in section SECREF10. The results for the VQG-COCO test set are given in table TABREF24. In this table, every block provides the results for one variation of obtaining the embeddings together with the different ways of combining them. We observe that the Joint Method (JM) of combining the embeddings works best in all cases except for the tag embeddings. Among the ablations, the proposed MDN performs considerably better than the other variants, improving BLEU, METEOR and ROUGE by 6%, 12% and 18% respectively over the best other variant.

### Baseline and State-of-the-Art
The comparison of our method with various baselines and state-of-the-art methods is provided in table TABREF26 for VQA 1.0 and table TABREF27 for the VQG-COCO dataset. The comparable baselines for our method are the image-based and caption-based models, in which we use only the image embedding or only the caption embedding to generate the question. In both tables, the first block contains the current state-of-the-art methods on that dataset and the second contains the baselines. We observe that for the VQA dataset we achieve an improvement of 8% in BLEU and 7% in METEOR scores over the baselines, whereas for the VQG-COCO dataset the improvement is 15% for both metrics. We improve over the previous state-of-the-art BIBREF35 for the VQA dataset by around 6% in BLEU score and 10% in METEOR score. On the VQG-COCO dataset, we improve over BIBREF5 by 3.7% and over BIBREF36 by 3.5% in terms of METEOR scores.

### Statistical Significance Analysis
We analysed the statistical significance BIBREF50 of our MDN model for VQG, both for the different variations of the mixture module mentioned in section SECREF10 and against the state-of-the-art methods. The critical difference (CD) of the Nemenyi test BIBREF51 depends on the given INLINEFORM0 (confidence level, 0.05 in our case) for the average ranks and on N (the number of tested datasets). If the difference between the average ranks of two methods lies within the CD, they are not significantly different, and vice versa. Figure FIGREF29 visualizes the post-hoc analysis using the CD diagram. From the figure, it is clear that MDN-Joint works best and is statistically significantly different from the state-of-the-art methods.

### Perceptual Realism
Humans are the best judges of the naturalness of a question. We evaluated our proposed MDN method using a `naturalness' Turing test BIBREF52 with 175 people. Each person was shown an image with two questions, as in figure FIGREF1, and asked to rate the naturalness of both questions on a scale of 1 to 5, where 1 means `least natural' and 5 `most natural'. We provided the 175 people with 100 such images from the VQG-COCO validation set, which has 1250 images. Figure FIGREF30 indicates the number of people who were fooled, i.e., who rated the generated question equal to or higher than the ground-truth question. Over the 100 images, on average 59.7% of the people were fooled, which shows that our model is able to generate natural questions.

### Conclusion
In this paper we have proposed a novel method for generating natural questions for an image. The approach relies on obtaining multimodal differential embeddings from an image and its caption. We provide an ablation analysis and a detailed comparison with state-of-the-art methods, perform a user study to evaluate the naturalness of our generated questions, and ensure that the results are statistically significant. In future work, we would like to analyse means of obtaining composite embeddings. We also aim to consider the generalisation of this approach to other vision-and-language tasks.

Supplementary Material

Section SECREF8 provides details about the training configuration for MDN, and Section SECREF9 explains the various proposed methods; we also provide a discussion of some important questions related to our method. We report BLEU1, BLEU2, BLEU3, BLEU4, METEOR, ROUGE and CIDEr metric scores for the VQG-COCO dataset. We present different experiments with the Tag Net in which we explore the performance of various tags (noun, verb and question tags) and different ways of combining them to obtain the context vectors.

Algorithm: Multimodal Differential Network
MDN INLINEFORM0
  Find exemplars: INLINEFORM1 INLINEFORM2
  Compute triplet embedding: INLINEFORM3 INLINEFORM4
  Compute triplet fusion embedding: INLINEFORM5 INLINEFORM6 INLINEFORM7
  Compute triplet loss: INLINEFORM8
  Decode question sentence: INLINEFORM9 INLINEFORM10

Triplet Fusion INLINEFORM11, INLINEFORM12
  INLINEFORM13: image feature, 14x14x512; INLINEFORM14: caption feature, 1x512
  Match dimensions: INLINEFORM15, 196x512; INLINEFORM16, 196x512
  If flag == Joint Fusion: INLINEFORM17 INLINEFORM18, [INLINEFORM19 (MDN-Mul), INLINEFORM20 (MDN-Add)]
  If flag == Attention Fusion: INLINEFORM21 Semb INLINEFORM22

Dataset and Training Details

Dataset: We conduct our experiments on two types of datasets. The first is the VQA dataset BIBREF1, which contains human-annotated questions based on images from the MS-COCO dataset.
The second is the VQG-COCO dataset of natural questions BIBREF55.

VQA dataset: The VQA dataset BIBREF1 is built on complex images from the MS-COCO dataset. It contains a total of 204721 images, of which 82783 are for training, 40504 for validation and 81434 for testing. Each image is associated with 3 questions, and each question has 10 possible answers. So there are 248349 QA pairs for training, 121512 QA pairs for validation and 244302 QA pairs for testing. We used a pre-trained caption generation model BIBREF53 to extract captions for the VQA dataset.

VQG dataset: The VQG-COCO dataset BIBREF55 was developed for generating natural and engaging questions based on common sense reasoning. It contains 2500 training images, 1250 validation images and 1250 testing images. Each image in the dataset has 5 natural questions.

Training Configuration: We used the RMSPROP optimizer to update the model parameters, with hyper-parameter values configured as follows: INLINEFORM23 to train the classification network. To train the triplet model, we likewise used RMSPROP to optimize the triplet model parameters, with hyper-parameter values: INLINEFORM24. We also used learning-rate decay, decreasing the learning rate every epoch by a factor given by INLINEFORM25, where the values a=1500 and b=1250 are set empirically.

Ablation Analysis of Model

While we advocate the use of the multimodal differential network (MDN) for generating the embeddings used by the decoder to generate questions, we also evaluate several variants of this architecture, namely (a) the differential image network, (b) the tag net and (c) the place net. These are described in detail as follows.

Differential Image Network: For obtaining the exemplar-image-based context embedding, we propose a triplet network consisting of three networks: a target net, a supporting net and an opposing net.
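As a minimal sketch of the triplet objective such a network is trained with (a standard hinge formulation; the margin value here is a hypothetical choice, not taken from the paper):

```python
import numpy as np

def triplet_loss(target, supporting, opposing, margin=1.0):
    """Hinge triplet loss: pull the supporting exemplar's embedding toward the
    target embedding and push the opposing embedding at least `margin` farther away.

    Each argument is a 1-D embedding vector, e.g. a CNN output g(x; W).
    """
    d_sup = np.sum((target - supporting) ** 2)  # squared distance to supporting exemplar
    d_opp = np.sum((target - opposing) ** 2)    # squared distance to opposing exemplar
    return max(0.0, d_sup - d_opp + margin)
```

When the opposing embedding is already farther than the supporting one by at least the margin, the loss is zero and the triplet contributes no gradient.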
All three networks are designed as convolutional neural networks sharing the same parameters, and their weights are learnt end-to-end using a triplet loss. The aim is to obtain latent weight vectors that bring the supporting exemplar close to the target image and enhance the difference from the opposing example. More formally, given an image INLINEFORM26 we obtain an embedding INLINEFORM27 using a CNN that we parameterize through a function INLINEFORM28, where INLINEFORM29 are the weights of the CNN. This is illustrated in figure FIGREF43.

Tag net: The tag net consists of two parts, a context extractor and a tag embedding net. This is illustrated in figure FIGREF45.

Extract Context: The first step is to extract the caption of the image using the NeuralTalk2 BIBREF53 model. We then find the part-of-speech (POS) tags present in the caption. POS taggers have been developed for two well-known corpora, the Brown Corpus and the Penn Treebank; for our work, we use the Brown Corpus tags. The tags are clustered into three categories, namely noun tags, verb tags and question tags (What, Where, ...). The noun tag consists of all nouns and pronouns present in the caption, and similarly the verb tag consists of the verbs and adverbs. The question tags consist of the seven well-known question words, i.e., why, how, what, when, where, who and which. Each tag token is represented as a one-hot vector of the dimension of the vocabulary size. For generalization, we consider 5 tokens from each tag category.

Tag Embedding Net: The embedding network consists of word embedding followed by temporal convolutions and a max-pooling layer. In the first step, the sparse high-dimensional one-hot vectors are transformed into dense low-dimensional vectors using word embedding. After this, we apply temporal convolution on the word embedding vectors.
The uni-gram, bi-gram and tri-gram features are computed by applying convolution filters of size 1, 2 and 3 respectively. Finally, we apply max-pooling to obtain a vector representation of the tags, as shown in figure FIGREF45. We concatenate all the tag features, followed by a fully connected layer, to obtain a feature dimension of 512. We also explored joint networks based on concatenation of all the tags, on element-wise addition and on element-wise multiplication of the tag vectors; however, we observed that convolution with max-pooling followed by joint concatenation gives better performance based on the CIDEr score. INLINEFORM30, where T_CNN is a temporal convolutional neural network applied to the word embedding vectors with kernel size three.

Place net: Visual object and scene recognition play a crucial role in understanding an image. Here, places in the image are labeled with scene semantic categories BIBREF45, comprising the large and diverse types of environments in the world, such as amusement park, tower, swimming pool, shoe shop, cafeteria, rain-forest, conference center, fish pond, etc. We therefore use the scene semantic categories present in the image as place-based context to generate natural questions. Places365 is a convolutional neural network modeled to classify 365 types of scene categories; it is trained on the Places2 dataset, which consists of 1.8 million scene images. We use a pre-trained VGG16-places365 network to obtain the place-based context embedding for the scene categories present in the image. The context feature INLINEFORM31 is obtained by INLINEFORM32, where INLINEFORM33 is Place365_CNN. We extract INLINEFORM34 features of dimension 14x14x512 for the attention model and FC8 features of dimension 365 for the joint, addition and Hadamard models of Places365. Finally, we use a linear transformation to obtain a 512-dimensional vector.
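The tag embedding's uni-/bi-/tri-gram temporal convolutions with max-pooling, described above, can be sketched as follows (toy dimensions; random filters stand in for learned weights):

```python
import numpy as np

rng = np.random.default_rng(0)

def ngram_conv_maxpool(word_emb, kernel_size, n_filters=4):
    """1-D (temporal) convolution of width `kernel_size` over a sequence of
    word embeddings, followed by max-pooling over time."""
    seq_len, emb_dim = word_emb.shape
    filters = rng.standard_normal((n_filters, kernel_size, emb_dim))
    n_windows = seq_len - kernel_size + 1
    feats = np.empty((n_windows, n_filters))
    for i in range(n_windows):
        window = word_emb[i:i + kernel_size]  # (kernel_size, emb_dim)
        # Each filter is dotted with the window, giving one value per filter.
        feats[i] = np.tensordot(filters, window, axes=([1, 2], [0, 1]))
    return feats.max(axis=0)                  # max over time -> (n_filters,)

# 5 tag tokens embedded into 8 dimensions (toy sizes).
emb = rng.standard_normal((5, 8))
# Uni-, bi- and tri-gram features, concatenated as in the tag embedding net.
tag_vec = np.concatenate([ngram_conv_maxpool(emb, k) for k in (1, 2, 3)])
```

Here `tag_vec` is 12-dimensional; in the paper, the concatenated n-gram features are mapped to 512 dimensions by a fully connected layer.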
We explored the CONV5 layer (feature dimension 14x14x512), FC7 (4096) and FC8 (365) of Places365.

Ablation Analysis

Sampling Exemplars: KNN vs ITML. Our method is aimed at efficient exemplar-based retrieval techniques. We experimented with various exemplar methods, such as ITML BIBREF40 based metric learning on image features and KNN-based approaches. We observed that a KNN-based approach (K-D tree) with the Euclidean metric is an efficient method for finding exemplars, and that ITML is computationally expensive and depends on the training procedure. The table provides the experimental results for the differential image network variant with k (the number of exemplars) = 2 and the Hadamard method.

Question Generation Approaches: Sampling vs Argmax. We obtained the decoding using the standard practice followed in the literature BIBREF38, which selects the argmax sentence. We also evaluated our method by sampling from the probability distributions, and provide those results for our proposed MDN-Joint method on the VQG dataset.

How do exemplars improve the embedding? In the multimodal differential network, we use exemplars and train them using a triplet loss. It is known that with a triplet network we can learn a representation that accentuates how the image is closer to a supporting exemplar as against the opposing exemplar BIBREF42, BIBREF41. The joint embedding is obtained between the image and language representations; the improved representation therefore helps in obtaining an improved context vector. We further show that this also results in improved VQG.

Are exemplars required? We had similar concerns and validated this point by using random exemplars in place of nearest neighbors for MDN (k=R in table TABREF35). In this case the method performs similarly to the baseline, which suggests that with random exemplars the model learns to ignore the cue.

Are captions necessary for our method? They are not actually necessary.
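A brute-force sketch of the KNN exemplar lookup discussed above (random vectors stand in for CNN image features; a K-D tree returns the same nearest neighbours more efficiently, and taking the farthest images as opposing exemplars is an illustrative choice, not necessarily the paper's):

```python
import numpy as np

def knn_exemplars(features, target_idx, k):
    """Return the k nearest (supporting) and k farthest (opposing) images for a
    target image under the Euclidean metric."""
    d = np.linalg.norm(features - features[target_idx], axis=1)
    order = np.argsort(d)
    supporting = order[1:k + 1]     # skip index 0: the target itself
    opposing = order[::-1][:k]      # farthest images as contrasting exemplars
    return supporting, opposing

rng = np.random.default_rng(1)
feats = rng.standard_normal((100, 16))   # 100 images, 16-d toy features
sup, opp = knn_exemplars(feats, target_idx=0, k=2)
```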
In our method, we use an existing image-captioning method BIBREF13 to generate captions for images that do not have them. For the VQG dataset captions were available and we used them, but for the VQA dataset captions were not available and we generated them during training. We provide detailed evidence with respect to caption-question pairs to ensure that we generate novel questions: while the caption gives a scene description, our proposed method generates semantically meaningful and novel questions. Examples for Figure 1 of the main paper: First image: Caption: A young man skateboarding around little cones. Our question: Is this a skateboard competition? Second image: Caption: A small child is standing on a pair of skis. Our question: How old is that little girl?

Intuition behind the Triplet Network: The intuition behind the use of triplet networks follows BIBREF41, which first advocated their use. When we learn distance functions that are "close" for similar and "far" for dissimilar representations, it is not clear with respect to what measure "close" and "far" are defined. By incorporating a triplet, we learn distance functions that capture that "A is more similar to B than to C". Learning such measures allows us to bring target image-caption joint embeddings closer to supporting exemplars than to contrasting exemplars.

Analysis of Network

Analysis of Tag Context: Tags are language-based context. These tags are extracted from the caption, except for the question tags, which are fixed as the 7 'Wh' words (What, Why, Where, Who, When, Which and How). We experimented with noun tags, verb tags and 'Wh-word' tags as shown in the tables, and within each tag category we varied the number of tags from 1 to 7. We combined the different tags using 1D convolution, concatenation and addition of all the tags, and observed that the concatenation mechanism gives better results.
As we can see in table TABREF33, taking nouns, verbs and Wh-words as context yields significant improvements in the BLEU, METEOR and CIDEr scores over the basic models, which take only the image and the caption respectively. Taking nouns generated from the captions and questions of the corresponding training example as context, we achieve an increase of 1.6% in BLEU score, 2% in METEOR and 34.4% in CIDEr score over the basic image model. Similarly, taking verbs as context gives an increase of 1.3% in BLEU score, 2.1% in METEOR and 33.5% in CIDEr score over the basic image model. The best result is obtained when we take 3 Wh-words as context and apply the Hadamard model with concatenation of the 3 Wh-words. In table TABREF34 we show the results when more than one word is taken as context: for 3 words, i.e., 3 nouns, 3 verbs and 3 Wh-words, the concatenation model performs best. In this table, the conv model uses 1D convolution to combine the tags, and the joint model combines all the tags.

Analysis of Context: Exemplars. In the multimodal differential network and the differential image network, we use exemplar images (target, supporting and opposing images) to obtain the differential context. We performed experiments with a single exemplar (K=1), i.e., one supporting and one opposing image along with the target image, and with two exemplars (K=2), i.e., two supporting and two opposing images along with a single target image. Similarly, we performed experiments for K=3 and K=4, as shown in table TABREF35.

Mixture Module: Other Variations. The Hadamard method uses element-wise multiplication and the Addition method uses element-wise addition in place of the concatenation operator of the Joint method. The Hadamard method finds a correlation between the image feature and caption feature vectors, while the Addition method learns a resultant vector.
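The Joint, Hadamard and Addition mixture variants can be sketched as follows (the tanh nonlinearity and the random projection matrix are illustrative assumptions, not the paper's exact parameterization):

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.standard_normal(512)        # image embedding (toy 512-d vector)
cap = rng.standard_normal(512)        # caption embedding
W = rng.standard_normal((512, 1024))  # stand-in for a learned projection

# Joint: concatenate the two embeddings, then project back to 512-d.
joint_emb = np.tanh(W @ np.concatenate([img, cap]))
# Hadamard: element-wise product, capturing feature-wise correlation.
hadamard_emb = np.tanh(img * cap)
# Addition: element-wise sum, learning a resultant vector.
addition_emb = np.tanh(img + cap)
```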
In the attention method, the output INLINEFORM35 is the weighted average of the attention probability vector INLINEFORM36 and the convolutional features INLINEFORM37. The attention probability vector weights the contribution of each convolutional feature based on the caption vector. This attention method is similar to the stacked attention method BIBREF54. The attention mechanism is given by: DISPLAYFORM0 where INLINEFORM38 is the 14x14x512-dimensional convolution feature map from the fifth convolution layer of VGG-19 Net for image INLINEFORM39, and INLINEFORM40 is the caption context vector. The attention probability vector INLINEFORM41 is a 196-dimensional vector. INLINEFORM42 are the weights and INLINEFORM43 is the bias for the different layers. We evaluate the different approaches and provide results for each. Here INLINEFORM44 represents element-wise addition.

Evaluation Metrics

Our task is similar to the encoder-decoder framework of machine translation, so we use the same evaluation metrics used in machine translation. BLEU BIBREF46 is the first metric we use to measure the correlation between a generated question and the ground-truth question. It measures precision, i.e., how many words in the predicted question appear in the reference question; the BLEU-n score measures n-gram precision by counting co-occurrences in the reference sentences. We evaluate BLEU scores for n from 1 to 4. The mechanism of the ROUGE-n BIBREF48 score is similar to BLEU-n, except that it measures recall instead of precision, i.e., how many words in the reference question appear in the predicted question. Another version of the ROUGE metric is ROUGE-L, which measures the longest common subsequence present in the generated question. The METEOR BIBREF47 score is another useful evaluation metric for computing the similarity between the generated question and the reference by considering synonyms, stemming and paraphrases.
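A sketch of the attention variant described at the start of this section (196 = 14x14 spatial locations of the VGG-19 conv5 map; all weights are random stand-ins for learned parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
conv_feats = rng.standard_normal((196, 512))  # 14x14x512 conv5 map, flattened over space
cap_vec = rng.standard_normal(512)            # caption context vector
W_f = rng.standard_normal((256, 512))         # projection of the conv features
W_c = rng.standard_normal((256, 512))         # projection of the caption vector
w_a = rng.standard_normal(256)                # scoring vector

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Attention probabilities over the 196 regions, conditioned on the caption.
scores = np.tanh(conv_feats @ W_f.T + (W_c @ cap_vec)) @ w_a
alpha = softmax(scores)          # 196-dim attention probability vector
attended = alpha @ conv_feats    # weighted average of the conv features
```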
The METEOR score measures word matches between the predicted question and the reference questions; in VQG, it computes the match score between the predicted question and the five reference questions. CIDEr BIBREF49 is a consensus-based evaluation metric. It measures human-likeness, i.e., whether the sentence looks as if it was written by a human. Consensus measures how often n-grams in the predicted question appear in the reference questions; n-grams that appear very frequently across reference questions are less informative and lead to a lower CIDEr score. We provide our results using all these metrics and compare them with existing baselines.

Figure 1: Can you guess which among the given questions is human annotated and which is machine generated?
Figure 2: Here we provide intuition for using implicit embeddings instead of explicit ones. As explained in section 1, the questions obtained via the implicit embeddings are more natural and holistic than the explicit ones.
Figure 3: An illustrative example shows the validity of our obtained exemplars with the help of an object classification network, RESNET-101. We see that the probability scores of the target and supporting exemplar image are similar; that is not the case with the contrasting exemplar. The corresponding generated questions when considering the individual images are also shown.
Figure 4: An overview of our Multimodal Differential Network for Visual Question Generation. It consists of a Representation Module which extracts multimodal features, a Mixture Module that fuses the multimodal representation, and a Decoder that generates the question using an LSTM-based language model. In this figure, we have shown the Joint Mixture Module. We train our network with a Cross-Entropy and Triplet Loss.
Figure 5: Examples from the VQG-COCO dataset which provide a comparison between our generated questions and human annotated questions. (a) is the human annotated question for all the images. More qualitative results are present in the supplementary material.
Figure 6: Sunburst plot for VQG-COCO: the ith ring captures the frequency distribution over words for the ith word of the generated question. The angle subtended at the center is proportional to the frequency of the word. While some words have high frequency, the outer rings illustrate a fine blend of words. We have restricted the plot to 5 rings for easy readability. Best viewed in color.
Table 1: Analysis of variants of our proposed method on the VQG-COCO dataset as mentioned in section 4.4, and different ways of getting a joint embedding (Attention (AtM), Hadamard (HM), Addition (AM) and Joint (JM) method as given in section 4.1.3) for each method. Refer to section 5.1 for more details. B1 is BLEU1.
Table 2: State-of-the-art comparison on the VQA-1.0 dataset. The first block consists of the state-of-the-art results, the second block refers to the baselines mentioned in section 5.2, and the third block provides the results for the variants of the mixture module present in section 4.1.3.
Figure 8: Perceptual realism plot for the human survey. Every question has a different number of responses, and hence the threshold, which is half of the total responses for each question, varies. This plot is only for 50 of the 100 questions involved in the survey. See section 5.4 for more details.
Table 3: State-of-the-art (SOTA) comparison on the VQG-COCO dataset. The first block consists of the SOTA results, the second block refers to the baselines mentioned in section 5.2, and the third block shows the results for the best method for different ablations mentioned in table 1.
Figure 7: The mean ranks of all the models on the basis of METEOR score are plotted on the x-axis. Here Joint refers to our MDN-Joint model and the others are the different variations described in section 4.1.3, plus Natural (Mostafazadeh et al., 2016) and Creative (Jain et al., 2017). The colored lines between two models indicate that they are not significantly different from each other.
### Introduction
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs). It extracts structured patient medical data such as whether the patient has specific symptoms or diseases, what the tumor size is, how far from the tumor the resection margin is during surgery, or what a specific laboratory test result is. Extracting structured data from clinical text is important because biomedical systems and biomedical research rely greatly on structured data but cannot obtain it directly. In addition, clinical text often contains abundant healthcare information, so CTS can provide large-scale extracted structured data for numerous downstream clinical research tasks. However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g., tumor stage), strings from the original text (e.g., the result of a laboratory test) and values inferred from part of the original text (e.g., a calculated tumor size). Researchers have to construct different models for each, which is already costly, and each model in turn calls for a large amount of labeled data. Moreover, labeling the amount of data necessary for training a neural network requires expensive labor. To handle this, researchers often turn to rule-based structuring methods, which have lower labeling cost. Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training datasets. Pipeline methods break down the entire process into several pieces, which improves performance and generality.
However, when the pipeline depth grows, error propagation has a greater impact on performance. To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from the original paragraph text. In some cases, this text is indeed already the final answer (e.g., when extracting a sub-string), while in other cases several further steps, such as entity name conversion and negative word recognition, are needed to obtain the final answer. Our QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows. We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and makes datasets shareable. We also propose an effective model to integrate clinical named entity information into a pre-trained language model. Experimental results show that the QA-CTS task leads to significant improvement due to the shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we show that the two-stage training mechanism brings a great improvement on the QA-CTS task. The rest of the paper is organized as follows. We briefly review related work on clinical text structuring in Section SECREF2. We then present the question answering based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations of the key issues of our proposed model. Finally, conclusions are given in Section SECREF6.

### Related Work ::: Clinical Text Structuring
Clinical text structuring is a downstream problem highly related to practical applications. Most existing studies are case-by-case; few are developed for the general-purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods, and pipeline methods. Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely heavily on heuristics and handcrafted extraction rules, which is more of an art than a science and incurs extensive trial-and-error experimentation. Fukuda et al. BIBREF0 identified protein names in biological papers using dictionaries and several features of protein names. Wang et al. BIBREF1 developed linguistic rules (i.e., normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique that expands the bio-entity dictionary by combining different data sources and improves the recall rate through a shortest-path edit-distance algorithm. This kind of approach features interpretability and easy modifiability. However, as the number of rules increases, adding new rules to an existing system can turn into a rule disaster. Task-specific end-to-end methods BIBREF3, BIBREF4 use large amounts of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five outputs. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of these models can be used for another task, due to output format differences; this makes building a new model for a new task costly. Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al.
BIBREF7 focused on attribute extraction, relying mainly on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components, such as noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, a sentence splitter, named entity linking BIBREF19, BIBREF20, BIBREF21 and relation extraction BIBREF22, BIBREF23. This kind of method focuses on language itself, so it can handle tasks more generally. However, as the depth of the pipeline grows, error propagation becomes more and more serious. Conversely, using fewer components to decrease the pipeline depth leads to poor performance, so the upper limit of this approach depends mainly on its worst component.

### Related Work ::: Pre-trained Language Model
Recently, some works have focused on pre-trained language representation models that capture language information from text and then utilize it to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27, which makes the language model a component shared by all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language models. Peters et al. BIBREF25 proposed ELMo, which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag, which causes the pretrain-finetune discrepancy that BERT is subject to. The main motivation for introducing pre-trained language models is to address the shortage of labeled data and the polysemy problem. Although polysemy is not a common phenomenon in the biomedical domain, the shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT to large-scale unannotated biomedical data and achieved improvements on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT to multi-type named entity recognition and discovered new entities. Both works demonstrate the usefulness of introducing pre-trained language models into the biomedical domain.

### Question Answering based Clinical Text Structuring
Given a paragraph text $X=<x_1, x_2, ..., x_n>$, clinical text structuring (CTS) can be regarded as extracting or generating a key-value pair, where the key $Q$ is typically a query term such as proximal resection margin and the value $V$ is the result for query term $Q$ according to the paragraph text $X$. Generally, researchers solve the CTS problem in two steps. First, the answer-related text is picked out. Then several steps such as entity name conversion and negation word recognition are applied to generate the final answer. While the final answer varies from task to task, causing non-uniform output formats, finding the answer-related text is an action common to all tasks. Traditional methods treat both steps as a whole. In this paper, we focus on finding the answer-related substring $X_s = <x_i, x_{i+1}, x_{i+2}, ..., x_j>$ $(1 \le i < j \le n)$ from the paragraph text $X$. For example, given the sentence “远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm" (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and the query “上切缘距离" (proximal resection margin), the answer should be 6.0cm, which is located in the original text from index 32 to 37. This definition unifies the output format of CTS tasks and therefore makes the training data shareable, reducing the amount of training data required. Since BERT BIBREF26 has already demonstrated the usefulness of shared models, we suppose that extracting the commonality of this problem and unifying the output format will make the model more powerful than a dedicated model and, for a specific clinical task, allow data from other tasks to supplement the training data. ### The Proposed Model for QA-CTS Task
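The span formulation above can be sketched in a few lines (an illustrative toy, not the paper's code; the report string is an English rendering of the paper's example):

```python
def locate_answer(paragraph: str, answer: str):
    """Return (start, end) character indices of the answer span, or None if absent."""
    start = paragraph.find(answer)
    if start == -1:
        return None
    return start, start + len(answer) - 1

# The query "proximal resection margin" should map to the span "6.0cm".
report = "Distal gastrectomy specimen: 6.0cm from the proximal resection margin."
span = locate_answer(report, "6.0cm")  # (29, 33)
```

Whatever the query type, the output is always a pair of indices into the original text, which is what makes training data shareable across tasks.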
In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS) task. As shown in Fig. FIGREF8, the paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequences for the query text ($I_{nq}$) and the paragraph text ($I_{nt}$) with the BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to a contextualized representation model, here the pre-trained language model BERT BIBREF26, to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed-forward network to calculate the start and end indices of the answer-related text. We define this calculation as a classification of each word as the start or end word. ### The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text
For any clinical free-text paragraph $X$ and query $Q$, contextualized representation generates an encoded vector of both. Here we use the pre-trained language model BERT-base BIBREF26 to capture contextual information. The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For a Chinese sentence, each character in this input is mapped to a pre-trained embedding $e_i$. To tell the model that $Q$ and $X$ are two different sentences, a sentence-type input is generated: a binary label sequence denoting which sentence each character in the input belongs to. A positional encoding and a mask matrix are also constructed automatically to bring in absolute position information and eliminate the impact of zero padding, respectively. A hidden vector $V_s$ containing both query and text information is then generated by the BERT-base model. ### The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information
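A minimal sketch of this input packing (the toy vocabulary and function name are ours, not BERT's real WordPiece vocabulary):

```python
def build_bert_input(query, text, vocab, max_len):
    """Pack '[CLS] Q [SEP] X [SEP]' into id / sentence-type / mask sequences."""
    tokens = ["[CLS]"] + list(query) + ["[SEP]"] + list(text) + ["[SEP]"]
    ids = [vocab.get(t, vocab["[UNK]"]) for t in tokens]
    # Sentence-type ids: 0 covers [CLS] + query + first [SEP]; 1 covers text + [SEP].
    seg = [0] * (len(query) + 2) + [1] * (len(text) + 1)
    mask = [1] * len(ids)            # 1 = real token, 0 = zero padding
    pad = max_len - len(ids)
    return ids + [0] * pad, seg + [0] * pad, mask + [0] * pad

vocab = {"[PAD]": 0, "[UNK]": 1, "[CLS]": 2, "[SEP]": 3, "q": 4, "a": 5, "b": 6}
ids, seg, mask = build_bert_input("q", "ab", vocab, max_len=8)
```

The mask lets attention ignore the padded positions, matching the "eliminate the impact of zero padding" role described above.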
Since BERT is trained on a general corpus, its performance in the biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model. The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts in Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags: each character of the original sentence is tagged with a label following a tag scheme. In this paper we recognize the entities with the model of our previous work BIBREF12, but trained on another corpus with 44 entity types including operations, numbers, unit words, examinations, symptoms, negation words, etc. An illustrative example of a named entity information sequence is shown in Table TABREF2, where “远端胃切除" is tagged as an operation, `11.5' as a number word and `cm' as a unit word. The named entity tag sequence is organized in one-hot form. We denote the sequences for the clinical sentence and the query term as $I_{nt}$ and $I_{nq}$, respectively. ### The Proposed Model for QA-CTS Task ::: Integration Method
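The one-hot organization of such a tag sequence can be sketched as follows (the tag names here are invented for illustration; the real tagger uses 44 entity types under the BIEOS scheme):

```python
def one_hot_tag_sequence(tags, tag_vocab):
    """Encode a BIEOS-style tag sequence as one-hot vectors (one row per character)."""
    index = {tag: i for i, tag in enumerate(tag_vocab)}
    return [[1 if index[tag] == j else 0 for j in range(len(tag_vocab))]
            for tag in tags]

# Toy tag vocabulary mirroring the operation / number / unit example above.
tag_vocab = ["O", "B-op", "I-op", "E-op", "S-num", "S-unit"]
encoded = one_hot_tag_sequence(["B-op", "E-op", "S-num", "S-unit"], tag_vocab)
```

Running the same encoder over the query characters yields $I_{nq}$; over the paragraph characters, $I_{nt}$.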
There are two ways to integrate the two named entity information vectors $I_{nt}$ and $I_{nq}$, or the hidden contextualized representation $V_s$ and the named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first is to concatenate them, since they are sequence outputs with a common dimension. The second is to transform them into a new hidden representation. For the concatenation method, the integrated representation is described as follows. For the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as follows, where $h$ is the number of heads and $W_o$ projects the concatenated matrix back to the original dimension. $Attention$ denotes traditional attention and can be defined as follows, where $d_k$ is the length of the hidden vector. ### The Proposed Model for QA-CTS Task ::: Final Prediction
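The two integration options can be sketched as follows; this is a single-head, pure-Python illustration of scaled dot-product attention and of concatenation, not the paper's Keras implementation:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over row-lists: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

def integrate_by_concat(Vs, In):
    """Concatenation variant: join the feature vectors of aligned positions."""
    return [vs + ni for vs, ni in zip(Vs, In)]

# Toy sequences: 5 positions, 8-dim contextualized features, 4-dim entity features.
Vs = [[0.1 * (i + j) for j in range(8)] for i in range(5)]
In = [[1.0 if j == i % 4 else 0.0 for j in range(4)] for i in range(5)]
H = integrate_by_concat(Vs, In)   # 5 positions x 12 features
```

Multi-head attention would run $h$ such attentions on learned projections and concatenate their outputs before the $W_o$ projection; concatenation simply widens the feature axis.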
The final step is to use the integrated representation $H_i$ to predict the start and end indices of the answer-related text. We define this calculation as a classification of each word as the start or end word. We use a feed-forward network (FFN) to compress and calculate the score of each word, $H_f$, reducing the dimension to $\left\langle l_s, 2\right\rangle $, where $l_s$ denotes the length of the sequence. We then permute the two dimensions for the softmax calculation. The loss function can be defined as follows, where $O_s = softmax(permute(H_f)_0)$ denotes the probability of each word being the start word and, similarly, $O_e = softmax(permute(H_f)_1)$ the end word. $y_s$ and $y_e$ denote the ground-truth start and end words, respectively. ### The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism
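A sketch of this start/end classification loss, assuming the ⟨l_s, 2⟩ score layout described above (pure Python, not the paper's implementation):

```python
import math

def span_loss(H_f, y_s, y_e):
    """Cross-entropy for start/end classification over sequence positions.

    H_f is an l_s x 2 score matrix: column 0 scores each word as the start
    word, column 1 as the end word. Permute to 2 x l_s, softmax over positions.
    """
    start_scores = [row[0] for row in H_f]   # permute(H_f)_0
    end_scores = [row[1] for row in H_f]     # permute(H_f)_1

    def softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    O_s, O_e = softmax(start_scores), softmax(end_scores)
    return -(math.log(O_s[y_s]) + math.log(O_e[y_e]))

# A confident model: large score at the true start (pos 1) and end (pos 3).
H_conf = [[0.0, 0.0] for _ in range(5)]
H_conf[1][0] = 10.0
H_conf[3][1] = 10.0
confident_loss = span_loss(H_conf, y_s=1, y_e=3)   # near zero
```

With uniform scores the loss reduces to $2\log l_s$, the entropy of guessing both endpoints at random.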
The two-stage training mechanism was previously applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in such a model: one is trained first for coarse-grained features while the parameters of the other are frozen; then the other is unfrozen and the entire model is trained at a low learning rate to capture fine-grained features. Inspired by this, and because of the large number of parameters in the BERT model, we first fine-tune the BERT model with the new prediction layer to achieve better contextualized representation and speed up training. We then deploy the proposed model, load the fine-tuned BERT weights, attach the named entity information layers and retrain the model. ### Experimental Studies
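Schematically, the mechanism amounts to a two-stage trainable-parameter schedule (the parameter-group names below are hypothetical, for illustration only):

```python
def two_stage_schedule(bert_params, entity_params, head_params):
    """Return the set of trainable parameter groups for each stage.

    Stage 1: fine-tune BERT together with the new prediction layer only.
    Stage 2: reload the stage-1 BERT weights, attach the named entity
    information layers, and retrain everything at a low learning rate.
    """
    stage1 = set(bert_params) | set(head_params)
    stage2 = set(bert_params) | set(entity_params) | set(head_params)
    return stage1, stage2

# Hypothetical parameter-group names, not the paper's actual layer names.
s1, s2 = two_stage_schedule({"bert.encoder", "bert.embeddings"},
                            {"ner.projection"},
                            {"head.ffn"})
```

The point of the split is that the expensive BERT fine-tuning converges first, so the entity layers start from a strong contextual representation.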
In this section, we experimentally evaluate our proposed task and approach. The best results in tables are shown in bold. ### Experimental Studies ::: Dataset and Evaluation Metrics
Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs were annotated and reviewed by four clinicians, with three types of questions: tumor size, proximal resection margin and distal resection margin. These annotated instances are partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of the different types of entities are listed in Table TABREF20. In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F$_1$-score metric is a looser metric that measures the average overlap between the prediction and the ground truth answer. ### Experimental Studies ::: Experimental Settings
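The two metrics can be sketched as follows (characters serve as tokens for F$_1$ here, matching character-level Chinese; the referenced evaluation scripts may differ in normalization details):

```python
from collections import Counter

def em_score(pred, truths):
    """Exact Match: 1.0 if the prediction equals any ground-truth answer exactly."""
    return float(any(pred == t for t in truths))

def f1_score(pred, truth):
    """Token-overlap F1, with characters as tokens."""
    p, t = list(pred), list(truth)
    common = sum((Counter(p) & Counter(t)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(t)
    return 2 * precision * recall / (precision + recall)
```

EM rewards only perfect spans, while F$_1$ gives partial credit for overlapping ones, which is why the two can move in opposite directions across models.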
To implement the deep neural network models, we use the Keras library BIBREF36 with the TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained with the Adam optimization algorithm BIBREF38, whose parameters are the default settings except for the learning rate, set to $5\times 10^{-5}$. The batch size is set to 3 or 4 due to limited GPU memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training a BERT language model, we directly adopt the parameters pre-trained by Google on a Chinese general corpus. Named entity recognition is applied to both pathology report texts and query texts. ### Experimental Studies ::: Comparison with State-of-the-art Methods
Since BERT has already achieved state-of-the-art performance on question answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e., QANet BIBREF39) and BERT-Base BIBREF26. BERT has two versions, BERT-Base and BERT-Large; due to limited computational resources, we can only compare with BERT-Base. A prediction layer is attached at the end of the original BERT-Base model and we fine-tune it on our dataset. In this section, the named entity integration method is pure concatenation (the named entity information for the pathology report text and the query text is concatenated first, and then the contextualized representation and the concatenated named entity information are concatenated). Comparative results are summarized in Table TABREF23, which indicates that our proposed model achieved the best performance in both EM-score and F$_1$-score, with an EM-score of 91.84% and an F$_1$-score of 93.75%. QANet outperformed BERT-Base by 3.56% in F$_1$-score but underperformed it by 0.75% in EM-score. Compared with BERT-Base, our model led to a 5.64% improvement in EM-score and 3.69% in F$_1$-score. Although our model only slightly outperformed QANet in F$_1$-score (by 0.13%), it significantly outperformed it by 6.39% in EM-score. ### Experimental Studies ::: Ablation Analysis
To further investigate the effects of named entity information and the two-stage training mechanism, we perform an ablation analysis to see the improvement brought by each, where $\times $ refers to removing that part from our model. As demonstrated in Table TABREF25, with named entity information enabled, the two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without the two-stage training mechanism, named entity information led to an improvement of 1.28% in EM-score but also a slight deterioration of 0.12% in F$_1$-score. With both enabled, our proposed model achieved a 5.64% improvement in EM-score and a 3.69% improvement in F$_1$-score. The experimental results show that both named entity information and the two-stage training mechanism are helpful to our model. ### Experimental Studies ::: Comparisons Between Two Integration Methods
There are two methods to integrate named entity information into the existing model, and we compare them experimentally. Since named entity recognition is applied to both the pathology report text and the query text, there are two integration steps: one for the two named entity information sequences, and the other for the contextualized representation and the integrated named entity information. For multi-head attention BIBREF30, we set the number of heads $h = 16$ with a 256-dimension hidden vector for each head. From Table TABREF27, we can observe that applying concatenation at both steps achieved the best performance in both EM-score and F$_1$-score. Unfortunately, applying multi-head attention at both step one and step two did not converge in our experiments, probably because it makes the model too complex to train. The other two methods differ in the order of concatenation and multi-head attention. Applying multi-head attention to the two named entity information sequences $I_{nt}$ and $I_{nq}$ first achieved the better performance, with 89.87% in EM-score and 92.88% in F$_1$-score, while applying concatenation first achieved only 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size: BERT's output has been transformed through many layers, but the named entity information representation is very close to the input. With the large number of parameters in multi-head attention, massive training is required to find the optimal parameters; however, our dataset is significantly smaller than what pre-trained BERT uses. This probably also explains why applying multi-head attention at both steps did not converge. Although Table TABREF27 shows that the best integration method is concatenation, multi-head attention still has great potential. Due to limited computational resources, our experiments fixed the head number and hidden vector size.
However, tuning these hyperparameters may affect the result. Tuning the integration method and utilizing larger datasets may help improve performance. ### Experimental Studies ::: Data Integration Analysis
To investigate how a shared task and a shared model can benefit performance, we split our dataset by query type, train our proposed model with the different datasets, and measure performance on each. First, we investigate the performance of the model without two-stage training and named entity information. As indicated in Table TABREF30, the model trained on mixed data outperforms the dedicated models on 2 of the 3 original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score, but both remained above 90%. The shared model brought 0.69% and 0.37% EM-score improvements for proximal and distal resection margin prediction, while the F$_1$-scores for those two tasks declined by 3.11% and 0.77%. We then investigate the performance of the model with two-stage training and named entity information. In this experiment, the pre-training process uses only the task-specific dataset, not the mixed data. From Table TABREF31 we can observe that proximal and distal resection margin achieved their best performance in both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score, while the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Other results also generally improved substantially. This again confirms the usefulness of two-stage training and named entity information. Lastly, we fine-tune the model for each task starting from pre-trained parameters; Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters significantly improves model performance over training on task-specific data alone.
Except for tumor size, the results improved by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin. This shows that mixed-data pre-trained parameters can greatly benefit a specific task. Meanwhile, the model's performance on tasks not trained in the final stage also improved from around 0 to 60 or 70 percent. This shows that there is commonality between the different tasks and that our proposed QA-CTS task makes it learnable. In conclusion, to achieve the best performance on a specific dataset, pre-training the model on multiple datasets and then fine-tuning it on the specific dataset is the best approach. ### Conclusion
In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and allows different datasets to be utilized together. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to the QA-CTS task. First, the sequential results of named entity recognition on both the paragraph and query texts are integrated together. Contextualized representations of both texts are produced by a pre-trained language model. Then, the integrated named entity information and the contextualized representation are combined and fed into a feed-forward network for the final prediction. Experimental results on a real-world dataset demonstrate that our proposed model compares favorably with strong baseline models on all three specific tasks. The shared task and shared model introduced by the QA-CTS task have also proved useful for improving performance on most of the task-specific datasets. In conclusion, the best way to achieve top performance on a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset. ### Acknowledgment
We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital), who helped us greatly with data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research" (No. 2018YFC0910500).
Fig. 1. An illustrative example of QA-CTS task.
TABLE I. An illustrative example of named entity feature tags.
Fig. 2. The architecture of our proposed model for QA-CTS task.
TABLE II. Statistics of different types of question answer instances.
TABLE V. Comparative results for different integration method of our proposed model.
TABLE III. Comparative results between BERT and our proposed model.
TABLE VI. Comparative results for data integration analysis (without two-stage training and named entity information).
TABLE VII. Comparative results for data integration analysis (with two-stage training and named entity information).
TABLE VIII. Comparative results for data integration analysis (using mixed-data pre-trained parameters).
|
three types of questions, namely tumor size, proximal resection margin and distal resection margin
|
What is the size of the dataset?
|
### Introduction
Lemmatization is the process of finding the base form (or lemma) of a word by considering its inflected forms. The lemma is also called the dictionary form, or citation form, and it stands for all inflected words sharing the same meaning. Lemmatization is an important preprocessing step for many applications of text mining and question-answering systems, and research on Arabic Information Retrieval (IR) systems shows the need for representing Arabic words at the lemma level for many applications, including keyphrase extraction BIBREF0 and machine translation BIBREF1 . In addition, lemmatization provides a productive way to generate generic keywords for search engines (SE) or labels for concept maps BIBREF2 . A word's stem is the core part of the word that never changes even under morphological inflection; the part that remains after prefix and suffix removal. Sometimes the stem of a word is different from its lemma, for example the words: believe, believed, believing, and unbelievable share the stem (believ-), and have the normalized word form (believe) standing for the infinitive of the verb (believe). While stemming tries to remove prefixes and suffixes from words that appear with inflections in free text, lemmatization tries to replace word suffixes with a (typically) different suffix to get the lemma. This extended abstract is organized as follows: Section SECREF2 shows some complexities in Arabic lemmatization and surveys prior work on Arabic stemming and lemmatization; Section SECREF3 introduces the dataset that we created to test lemmatization accuracy; Section SECREF4 describes the algorithm of the system that we built, and results and error analysis are reported in Section SECREF5 ; Section SECREF6 discusses the results and concludes the abstract. ### Background
Arabic is the largest Semitic language, spoken by more than 400 million people. It is one of the six official languages of the United Nations, and the fifth most widely spoken language after Chinese, Spanish, English, and Hindi. Arabic has a very rich morphology, both derivational and inflectional. Generally, Arabic words are derived from a root that uses three or more consonants to define a broad meaning or concept, and they follow templatic morphological patterns. By adding vowels, prefixes and suffixes to the root, word inflections are generated. For instance, the word وسيفتحون (wsyftHwn) “and they will open” has the triliteral root فتح (ftH), which has the basic meaning of opening, the prefix وس (ws) “and will”, the suffix ون (wn) “they”, the stem يفتح (yftH) “open”, and the lemma فتح (ftH) “the concept of opening”. IR systems typically cluster words into groups at three main levels: root, stem, or lemma. The root level is considered by many researchers in the IR field; it leads to high recall but low precision due to language complexity. For example, the words كتب، مكتبة، كتاب (ktb, mktbp, ktAb) “wrote, library, book” share the root كتب (ktb) with the basic meaning of writing, so searching for any of these words by root retrieves the others, which may not be desirable for many users. Other researchers show the importance of the stem level for improving retrieval precision and recall, as stems capture semantic similarity between inflected words. However, in Arabic, stem patterns may not capture words having the same semantic meaning. For example, stem patterns for broken plurals differ from their singular patterns, e.g. the plural أقلام (AqlAm) “pens” will not match the stem of its singular form قلم (qlm) “pen”. The same applies to many imperfect verbs that have different stem patterns than their perfect verbs, e.g.
the verbs استطاع، يستطيع (AstTAE, ystTyE) “he could, he can” will not match because they have different stems. Indexing using lemmatization can enhance the performance of Arabic IR systems. A lot of work has been done on word stemming and lemmatization in different languages, for example the famous Porter stemmer for English, but for Arabic little work has been done, especially on lemmatization, and there is no open-source code or recent test data that other researchers can use for word lemmatization. Xerox Arabic Morphological Analysis and Generation BIBREF3 is one of the early Arabic stemmers; it uses morphological rules to obtain stems for nouns and verbs by looking into a table of thousands of roots. Khoja's stemmer BIBREF4 and the Buckwalter morphological analyzer BIBREF5 are other root-based analyzers and stemmers which use tables of valid combinations between prefixes and suffixes, prefixes and stems, and stems and suffixes. Recently, the MADAMIRA BIBREF6 system was evaluated using a blind testset (25K words of Modern Standard Arabic (MSA) selected from the Penn Arabic Treebank (PATB)), and the reported accuracy was 96.2%, measured as the percentage of words where the chosen analysis (provided by the SAMA morphological analyzer BIBREF7 ) has the correct lemma. In this paper, we present open-source Java code to extract Arabic word lemmas, and a new publicly available testset for lemmatization, allowing researchers to evaluate on the same dataset that we used and reproduce the same experiments. ### Data Description
To make the annotated data publicly available, we selected 70 news articles from the Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from 2013 to 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture). The articles contain 18,300 words, evenly distributed among these 7 genres with 10 articles each. Words are white-space and punctuation separated, and some spelling errors were corrected (1.33% of the total words) to obtain very clean test cases. Lemmatization was done by an expert Arabic linguist; spelling corrections are marked, and lemmas are provided with full diacritization, as shown in Figure FIGREF2 . As MSA is usually written without diacritics, and IR systems normally remove all diacritics from search queries and indexed data as a basic preprocessing step, another column for the undiacritized lemma is added; it is used for evaluating our lemmatizer and comparing it with the state-of-the-art system for lemmatization, MADAMIRA. ### System Description
We were inspired by the work of BIBREF8 on segmenting Arabic words out of context. They achieved an accuracy of almost 99%, slightly better than the state-of-the-art system for segmentation (MADAMIRA), which considers surrounding context and many linguistic features. That system shows enhancements in both Machine Translation and Information Retrieval tasks BIBREF9 . Our work can be considered an extension of word segmentation. From a large diacritized corpus, we constructed a dictionary of words and their possible diacritizations, ordered by the number of occurrences of each diacritized form. This diacritized corpus was created by a commercial vendor and contains 9.7 million words with almost 200K unique surface words. About 73% of the corpus is MSA and covers a variety of genres such as politics, economy, sports, society, etc.; the remaining part is mostly religious texts written in Classical Arabic (CA). The effectiveness of using this corpus to build a state-of-the-art diacritizer was proven in BIBREF10 . For example, the word وبنود (wbnwd) “and items” is found 4 times in this corpus with two fully diacritized forms وَبُنُودِ، وَبُنُودٍ (wabunudi, wabunudK) “items, with different grammatical case endings”, which appeared 3 times and once, respectively. All unique undiacritized words in this corpus were analyzed using the Buckwalter morphological analyzer, which gives all possible word diacritizations, and their segmentation, POS tag and lemma, as shown in Figure FIGREF3 . The idea is to take the most frequent diacritized form of a word appearing in this corpus, and find the morphological analysis with the highest matching score between its diacritized form and the corpus word. This means that we search for the most common diacritization of the word regardless of its surrounding context. In the above example, the first solution is preferred, and hence its lemma بند (banod; bnd after diacritics removal) “item”.
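The core selection step can be sketched as follows, with toy entries in Buckwalter transliteration (the second analysis and its lemma are invented for illustration, and `SequenceMatcher` stands in for the paper's matching score):

```python
from collections import Counter
from difflib import SequenceMatcher

def lemmatize(corpus_forms, analyses):
    """Return the lemma of the analysis whose diacritized form best matches
    the most frequent diacritization of the word in the corpus.

    corpus_forms: Counter of diacritized surface forms observed in the corpus.
    analyses: list of (diacritized_form, lemma) pairs from a morphological analyzer.
    """
    top_form, _ = corpus_forms.most_common(1)[0]
    best_form, best_lemma = max(
        analyses, key=lambda a: SequenceMatcher(None, a[0], top_form).ratio())
    return best_lemma

# Toy version of the paper's 'wbnwd' example, in Buckwalter transliteration;
# the second analysis is a hypothetical competitor.
forms = Counter({"wabunudi": 3, "wabunudK": 1})
analyses = [("wabunudi", "bnd"), ("wubanida", "wbn")]
lemma = lemmatize(forms, analyses)  # 'bnd'
```

Because the choice depends only on corpus frequencies, the lookup is context-free, which is what makes the approach so fast compared to context-aware systems like MADAMIRA.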
While comparing two diacritized forms from the corpus and the Buckwalter analysis, special cases were applied to solve inconsistencies between the two diacritization schemes. For example, while words are fully diacritized in the corpus, the Buckwalter analysis gives diacritics without case endings (i.e. without context), and removes short vowels in some cases, for example before long vowels and after the definite article ال (Al) “the”, etc. It is worth mentioning that in many cases in the Buckwalter analysis, an input word has two or more identical diacritizations with different lemmas, and the analyses of such words are provided without any meaningful order. For example, the word سيارة (syArp) “car” has two morphological analyses with different lemmas, namely سيار (syAr) “walker” and سيارة (syArp) “car”, in this order, while the second lemma is the most common one. To solve this problem, all such words were reported, the most frequent words were revised, and the order of lemmas was changed according to actual usage in the modern language. The lemmatization algorithm is summarized in Figure FIGREF4 , and the online system can be tested through the site http://alt.qcri.org/farasa/segmenter.html ### Evaluation
The data was formatted as plain text where sentences are written on separate lines and words are separated by spaces, and the outputs of MADAMIRA and our system are compared against the undiacritized lemma for each word. For accurate results, all differences were revised manually to accept cases that should not be counted as errors (different spellings of foreign named entities, for example هونغ كونغ، هونج كونج (hwng kwng, hwnj kwnj) “Hong Kong”, or more than one accepted lemma for some function words, e.g. the lemmas في، فيما (fy, fymA) are both valid for the function word فيما (fymA) “while”). Table TABREF5 shows the results of testing our system and MADAMIRA on the WikiNews testset (for undiacritized lemmas). Our approach gives a +7% relative gain over MADAMIRA on the lemmatization task. In terms of speed, our system was able to lemmatize 7.4 million words on a personal laptop in almost 2 minutes, compared to 2.5 hours for MADAMIRA, i.e. 75 times faster. The code is written entirely in Java without any external dependencies, which makes its integration into other systems quite simple. ### Error Analysis
Most of the lemmatization errors in our system are due to the fact that we use the most common diacritization of a word without considering its context, which cannot resolve the ambiguity in cases like nouns and adjectives that share the same diacritized form. For example, the word أكاديمية (AkAdymyp) can be either a noun whose lemma is أكاديمية (AkAdymyp) “academy”, or an adjective whose lemma is أكاديمي (AkAdymy) “academic”. MADAMIRA's errors, in turn, mainly involve selecting the wrong part-of-speech (POS) for ambiguous words, and foreign named entities. In the full paper, we will quantify the error cases in our lemmatizer and MADAMIRA and give examples of each case, which can help in enhancing both systems. ### Discussion
In this paper, we introduce a new dataset for Arabic lemmatization and a very fast and accurate lemmatization algorithm that performs better than the state-of-the-art system, MADAMIRA. Both the dataset and the code will be publicly available. We show that to build an effective IR system for complex derivational languages like Arabic, there is a big need for very fast and accurate lemmatization algorithms, and we show that this can be achieved by considering only the most frequent diacritized form of a word and matching this form with the morphological analysis with the highest similarity score. We plan to study the performance of the algorithm if modified to provide diacritized lemmas, which can be useful for other applications.
Table 1: Examples of complex verb lemmatization cases
Table 2: Examples of complex noun lemmatization cases
Figure 2: Buckwalter analysis (diacritization forms and lemmas are highlighted)
Figure 1: Lemmatization of WikiNews corpus
Table 3: Lemmatization accuracy using WikiNews testset
Figure 4: Lemmatization online demo (part of Farasa Arabic NLP tools)
|
Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each
|
What is so unique about the cockatoos on this planet?
A. They are able to copy speech.
B. They live in abundance in the Baldric, despite it being a dangerous area.
C. They are identical to Earth parrots, despite being on a different planet.
D. They are able to physically mimic any picture.
|
DOUBLE TROUBLE by CARL JACOBI Grannie Annie, that waspish science-fiction writer, was in a jam again. What with red-spot fever, talking cockatoos and flagpole trees, I was running in circles—especially since Grannie became twins every now and then. [Transcriber's Note: This etext was produced from Planet Stories Spring 1945. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] We had left the offices of Interstellar Voice three days ago, Earth time, and now as the immense disc of Jupiter flamed across the sky, entered the outer limits of the Baldric. Grannie Annie strode in the lead, her absurd long-skirted black dress looking as out of place in this desert as the trees. Flagpole trees. They rose straight up like enormous cat-tails, with only a melon-shaped protuberance at the top to show they were a form of vegetation. Everything else was blanketed by the sand and the powerful wind that blew from all quarters. As we reached the first of those trees, Grannie came to a halt. "This is the Baldric all right. If my calculations are right, we've hit it at its narrowest spot." Ezra Karn took a greasy pipe from his lips and spat. "It looks like the rest of this God-forsaken moon," he said, "'ceptin for them sticks." Xartal, the Martian illustrator, said nothing. He was like that, taciturn, speaking only when spoken to. He could be excused this time, however, for this was only our third day on Jupiter's Eighth Moon, and the country was still strange to us. When Annabella C. Flowers, that renowned writer of science fiction, visiphoned me at Crater City, Mars, to meet her here, I had thought she was crazy. But Miss Flowers, known to her friends as Grannie Annie, had always been mildly crazy. If you haven't read her books, you've missed something. She's the author of Lady of the Green Flames , Lady of the Runaway Planet , Lady of the Crimson Space-Beast , and other works of science fiction. 
Blood-and-thunder as these books are, however, they have one redeeming feature—authenticity of background. Grannie Annie was the original research digger-upper, and when she laid the setting of a yarn on a star of the sixth magnitude, only a transportation-velocity of less than light could prevent her from visiting her "stage" in person. Therefore when she asked me to meet her at the landing field of Interstellar Voice on Jupiter's Eighth Moon, I knew she had another novel in the state of embryo. What I didn't expect was Ezra Karn. He was an old prospector Grannie had met, and he had become so attached to the authoress he now followed her wherever she went. As for Xartal, he was a Martian and was slated to do the illustrations for Grannie's new book. Five minutes after my ship had blasted down, the four of us met in the offices of Interstellar Voice . And then I was shaking hands with Antlers Park, the manager of I. V. himself. "Glad to meet you," he said cordially. "I've just been trying to persuade Miss Flowers not to attempt a trip into the Baldric." "What's the Baldric?" I had asked. Antlers Park flicked the ash from his cheroot and shrugged. "Will you believe me, sir," he said, "when I tell you I've been out here on this forsaken moon five years and don't rightly know myself?" I scowled at that; it didn't make sense. "However, as you perhaps know, the only reason for colonial activities here at all is because of the presence of an ore known as Acoustix. It's no use to the people of Earth but of untold value on Mars. I'm not up on the scientific reasons, but it seems that life on the red planet has developed with a supersonic method of vocal communication. The Martian speaks as the Earthman does, but he amplifies his thoughts' transmission by way of wave lengths as high as three million vibrations per second. The trouble is that by the time the average Martian reaches middle age, his ability to produce those vibrations steadily decreases. 
Then it was found that this ore, Acoustix, revitalized their sounding apparatus, and the rush was on." "What do you mean?" Park leaned back. "The rush to find more of the ore," he explained. "But up until now this moon is the only place where it can be found. "There are two companies here," he continued, " Interstellar Voice and Larynx Incorporated . Chap by the name of Jimmy Baker runs that. However, the point is, between the properties of these two companies stretches a band or belt which has become known as the Baldric. "There are two principal forms of life in the Baldric; flagpole trees and a species of ornithoid resembling cockatoos. So far no one has crossed the Baldric without trouble." "What sort of trouble?" Grannie Annie had demanded. And when Antlers Park stuttered evasively, the old lady snorted, "Fiddlesticks, I never saw trouble yet that couldn't be explained. We leave in an hour." So now here we were at the outer reaches of the Baldric, four travelers on foot with only the barest necessities in the way of equipment and supplies. I walked forward to get a closer view of one of the flagpole trees. And then abruptly I saw something else. A queer-looking bird squatted there in the sand, looking up at me. Silver in plumage, it resembled a parrot with a crest; and yet it didn't. In some strange way the thing was a hideous caricature. "Look what I found," I yelled. "What I found," said the cockatoo in a very human voice. "Thunder, it talks," I said amazed. "Talks," repeated the bird, blinking its eyes. The cockatoo repeated my last statement again, then rose on its short legs, flapped its wings once and soared off into the sky. Xartal, the Martian illustrator, already had a notebook in his hands and was sketching a likeness of the creature. Ten minutes later we were on the move again. We saw more silver cockatoos and more flagpole trees. Above us, the great disc of Jupiter began to descend toward the horizon. 
And then all at once Grannie stopped again, this time at the top of a high ridge. She shielded her eyes and stared off into the plain we had just crossed. "Billy-boy," she said to me in a strange voice, "look down there and tell me what you see." I followed the direction of her hand and a shock went through me from head to foot. Down there, slowly toiling across the sand, advanced a party of four persons. In the lead was a little old lady in a black dress. Behind her strode a grizzled Earth man in a flop-brimmed hat, another Earth man, and a Martian. Detail for detail they were a duplicate of ourselves! "A mirage!" said Ezra Karn. But it wasn't a mirage. As the party came closer, we could see that their lips were moving, and their voices became audible. I listened in awe. The duplicate of myself was talking to the duplicate of Grannie Annie, and she was replying in the most natural way. Steadily the four travelers approached. Then, when a dozen yards away, they suddenly faded like a negative exposed to light and disappeared. "What do you make of it?" I said in a hushed voice. Grannie shook her head. "Might be a form of mass hypnosis superinduced by some chemical radiations," she replied. "Whatever it is, we'd better watch our step. There's no telling what might lie ahead." We walked after that with taut nerves and watchful eyes, but we saw no repetition of the "mirage." The wind continued to blow ceaselessly, and the sand seemed to grow more and more powdery. For some time I had fixed my gaze on a dot in the sky which I supposed to be a high-flying cockatoo. As that dot continued to move across the heavens in a single direction, I called Grannie's attention to it. "It's a kite," she nodded. "There should be a car attached to it somewhere." She offered no further explanation, but a quarter of an hour later as we topped another rise a curious elliptical car with a long slanting windscreen came into view. 
Attached to its hood was a taut wire which slanted up into the sky to connect with the kite. A man was driving and when he saw us, he waved. Five minutes later Grannie was shaking his hand vigorously and mumbling introductions. "This is Jimmy Baker," she said. "He manages Larynx Incorporated , and he's the real reason we're here." I decided I liked Baker the moment I saw him. In his middle thirties, he was tall and lean, with pleasant blue eyes which even his sand goggles could not conceal. "I can't tell you how glad I am you're here, Grannie," he said. "If anybody can help me, you can." Grannie's eyes glittered. "Trouble with the mine laborers?" she questioned. Jimmy Baker nodded. He told his story over the roar of the wind as we headed back across the desert. Occasionally he touched a stud on an electric windlass to which the kite wire was attached. Apparently these adjustments moved planes or fins on the kite and accounted for the car's ability to move in any direction. "If I weren't a realist, I'd say that Larynx Incorporated has been bewitched," he began slowly. "We pay our men high wages and give them excellent living conditions with a vacation on Callisto every year. Up until a short time ago most of them were in excellent health and spirits. Then the Red Spot Fever got them." "Red Spot Fever?" Grannie looked at him curiously. Jimmy Baker nodded. "The first symptoms are a tendency to garrulousness on the part of the patient. Then they disappear." He paused to make an adjustment of the windlass. "They walk out into the Baldric," he continued, "and nothing can stop them. We tried following them, of course, but it was no go. As soon as they realize they're being followed, they stop. But the moment our eyes are turned, they give us the slip." "But surely you must have some idea of where they go," Grannie said. Baker lit a cigarette. "There's all kinds of rumors," he replied, "but none of them will hold water. By the way, there's a cockatoo eyrie ahead of us." 
I followed his gaze and saw a curious structure suspended between a rude circle of flagpole trees. A strange web-like formation of translucent gauzy material, it was. Fully two hundred cockatoos were perched upon it. They watched us with their mild eyes as we passed, but they didn't move. After that we were rolling up the driveway that led to the offices of Larynx Incorporated . As Jimmy Baker led the way up the inclined ramp, a door in the central building opened, and a man emerged. His face was drawn. "Mr. Baker," he said breathlessly, "seventy-five workers at Shaft Four have headed out into the Baldric." Baker dropped his cigarette and ground his heel on it savagely. "Shaft Four, eh?" he repeated. "That's our principal mine. If the fever spreads there, I'm licked." He motioned us into his office and strode across to a desk. Silent Xartal, the Martian illustrator, took a chair in a corner and got his notebook out, sketching the room's interior. Grannie Annie remained standing. Presently the old lady walked across to the desk and helped herself to the bottle of Martian whiskey there. "There must be ways of stopping this," she said. "Have you called in any physicians? Why don't you call an enforced vacation and send the men away until the plague has died down?" Baker shook his head. "Three doctors from Callisto were here last month. They were as much at loss as I am. As for sending the men away, I may have to do that, but when I do, it means quits. Our company is chartered with Spacolonial, and you know what that means. Failure to produce during a period of thirty days or more, and you lose all rights." A visiphone bell sounded, and Baker walked across to the instrument. A man's face formed in the vision plate. Baker listened, said "Okay" and threw off the switch. "The entire crew of Shaft Four have gone out into the Baldric," he said slowly. There was a large map hanging on the wall back of Baker's desk. 
Grannie Annie walked across to it and began to study its markings. "Shaft Four is at the outer edge of the Baldric at a point where that corridor is at its widest," she said. Baker looked up. "That's right. We only began operations there a comparatively short time ago. Struck a rich vein of Acoustix that runs deep in. If that vein holds out, we'll double the output of Interstellar Voice , our rival, in a year." Grannie nodded. "I think you and I and Xartal had better take a run up there," she said. "But first I want to see your laboratory." There was no refusing her. Jimmy Baker led the way down to a lower level where a huge laboratory and experimental shop ran the length of the building. Grannie seized a light weight carry-case and began dropping articles into it. A pontocated glass lens, three or four Wellington radite bulbs, each with a spectroscopic filament, a small dynamo that would operate on a kite windlass, and a quantity of wire and other items. The kite car was brought out again, and the old woman, Baker and the Martian took their places in it. Then Jimmy waved, and the car began to roll down the ramp. Not until they had vanished in the desert haze did I sense the loneliness of this outpost. With that loneliness came a sudden sense of foreboding. Had I been a fool to let Grannie go? I thought of her, an old woman who should be in a rocking chair, knitting socks. If anything happened to Annabella C. Flowers, I would never forgive myself and neither would her millions of readers. Ezra Karn and I went back into the office. The old prospector chuckled. "Dang human dynamo. Got more energy than a runaway comet." A connecting door on the far side of the office opened onto a long corridor which ended at a staircase. "Let's look around," I said. We passed down the corridor and climbed the staircase to the second floor. Here were the general offices of Larynx Incorporated , and through glass doors I could see clerks busy with counting machines and report tapes. 
In another chamber the extremely light Acoustix ore was being packed into big cases and marked for shipment. At the far end a door to a small room stood open. Inside a young man was tilted back in a swivel chair before a complicated instrument panel. "C'mon in," he said, seeing us. "If you want a look at your friends, here they are." He flicked a stud, and the entire wall above the panel underwent a slow change of colors. Those colors whirled kaleidescopically, then coalesced into a three-dimensional scene. It was a scene of a rapidly unfolding desert country as seen from the rear of a kite car. Directly behind the windscreen, backs turned to me, were Jimmy Baker, Grannie, and Xartal. It was as if I were standing directly behind them. "It's Mr. Baker's own invention," the operator said. "An improvement on the visiphone." "Do you mean to say you can follow the movements of that car and its passengers wherever it goes? Can you hear them talk too?" "Sure." The operator turned another dial, and Grannie's falsetto voice entered the room. It stopped abruptly. "The machine uses a lot of power," the operator said, "and as yet we haven't got much." The cloud of anxiety which had wrapped itself about me disappeared somewhat as I viewed this device. At least I could now keep myself posted of Grannie's movements. Karn and I went down to the commissary where we ate our supper. When we returned to Jimmy Baker's office, the visiphone bell was ringing. I went over to it and turned it on, and to my surprise the face of Antlers Park flashed on the screen. "Hello," he said in his friendly way. "I see you arrived all right. Is Miss Flowers there?" "Miss Flowers left with Mr. Baker for Shaft Four," I said. "There's trouble up there. Red spot fever." "Fever, eh?" repeated Park. "That's a shame. Is there anything I can do?" "Tell me," I said, "has your company had any trouble with this plague?" "A little. But up until yesterday the fever's been confined to the other side of the Baldric. 
We had one partial case, but my chemists gave the chap an antitoxin that seems to have worked. Come to think of it, I might drive over to Shaft Four and give Jimmy Baker the formula. I haven't been out in the Baldric for years, but if you didn't have any trouble, I shouldn't either." We exchanged a few more pleasantries, and then he rang off. In exactly an hour I went upstairs to the visiscreen room. Then once more I was directly behind my friends, listening in on their conversation. The view through the windscreen showed an irregular array of flagpole trees, with the sky dotted by high-flying cockatoos. "There's an eyrie over there," Jimmy Baker was saying. "We might as well camp beside it." Moments later a rude circle of flagpole trees loomed ahead. Across the top of them was stretched a translucent web. Jimmy and Grannie got out of the car and began making camp. Xartal remained in his seat. He was drawing pictures on large pieces of pasteboard, and as I stood there in the visiscreen room, I watched him. There was no doubt about it, the Martian was clever. He would make a few rapid lines on one of the pasteboards, rub it a little to get the proper shading and then go on to the next. In swift rotation likenesses of Ezra Karn, of myself, of Jimmy Baker, and of Antlers Park took form. Ezra spoke over my shoulder. "He's doing scenes for Grannie's new book," he said. "The old lady figures on using the events here for a plot. Look at that damned nosy bird! " A silver cockatoo had alighted on the kite car and was surveying curiously Xartal's work. As each drawing was completed, the bird scanned it with rapt attention. Abruptly it flew to the top of the eyrie, where it seemed to be having a consultation with its bird companions. And then abruptly it happened. The cockatoos took off in mass flight. A group of Earth people suddenly materialized on the eyrie, talking and moving about as if it were the most natural thing in the world. 
With a shock I saw the likeness of myself; I saw Ezra Karn; and I saw the image of Jimmy Baker. The real Jimmy Baker stood next to Grannie, staring up at this incredible mirage. Grannie let out a whoop. "I've got it!" she said. "Those things we see up there are nothing more than mental images. They're Xartal's drawings!" "Don't you see," the lady continued. "Everything that Xartal put on paper has been seen by one or more of these cockatoos. The cockatoos are like Earth parrots all right, but not only have they the power of copying speech, they also have the ability to recreate a mental image of what they have seen. In other words their brains form a powerful photographic impression of the object. That impression is then transmitted simultaneously in telepathic wavelengths to common foci. That eyrie might be likened to a cinema screen, receiving brain vibrations from a hundred different sources that blend into the light field to form what are apparently three-dimensional images." The Larynx manager nodded slowly. "I see," he said. "But why don't the birds reconstruct images from the actual person. Why use drawings?" "Probably because the drawings are exaggerated in certain details and made a greater impression on their brains," Grannie replied. Up on the eyrie a strange performance was taking place. The duplicate of Grannie Annie was bowing to the duplicate of Jimmy Baker, and the image of Ezra Karn was playing leap frog with the image of Antlers Park. Then abruptly the screen before me blurred and went blank. "Sorry," the operator said. "I've used too much power already. Have to give the generators a chance to build it up again." Nodding, I turned and motioned to Karn. We went back downstairs. "That explains something at any rate," the old prospector said. "But how about that Red spot fever?" On Jimmy Baker's desk was a large file marked: FEVER VICTIMS. I opened it and found it contained the case histories of those men who had been attacked by the strange malady. 
Reading them over, I was struck by one detail. Each patient had received the first symptoms, not while working in the mines, but while sleeping or lounging in the barracks. Five minutes later Karn and I were striding down a white ramp that led to the nearest barracks. The building came into sight, a low rectangular structure, dome-roofed to withstand the violent winds. Inside double tiers of bunks stretched along either wall. In those bunks some thirty men lay sleeping. The far wall was taken up by a huge window of denvo-quartz. As I stood there, something suddenly caught Ezra Karn's eye. He began to walk toward that window. "Look here," he said. Six feet up on that window a small almost imperceptible button of dull metal had been wedged into an aperture cut in the quartz. The central part of the button appeared to be a powerful lens of some kind, and as I seized it and pulled it loose, I felt the hum of tiny clock work. All at once I had it! Red spot fever. Heat fever from the infra-red rays of Jupiter's great spot. Someone had constructed this lens to concentrate and amplify the power of those rays. The internal clockwork served a double purpose. It opened a shutter, and it rotated the lens slowly so that it played for a time on each of the sleeping men. I slid the metal button in my pocket and left the barracks at a run. Back in the visiscreen room, I snapped to the operator: "Turn it on!" The kite car swam into view in the screen above the instrument panel. I stared with open eyes. Jimmy Baker no longer was in the car, nor was Xartal, the Martian. Grannie Annie was there, but seated at the controls was Antlers Park, the manager of Interstellar Voice. Ezra Karn jabbed my elbow. "Grannie's coming back. I thought she'd be getting sick of this blamed moon." It didn't make sense. In all the years I'd known Annabella C. Flowers, never yet had I seen her desert a case until she had woven the clues and facts to a logical conclusion. 
"Ezra," I said, "we're going to drive out and meet them. There's something screwy here." Ten minutes later in another kite car we were driving at a fast clip through the powdery sands of the Baldric. And before long we saw another car approaching. It was Grannie. As the car drew up alongside I saw her sitting in her prim way next to Antlers Park. Park said: "We left the others at the mine. Miss Flowers is going back with me to my offices to help me improve the formula for that new antitoxin." He waved his hand, and the car moved off. I watched it as it sped across the desert, and a growing suspicion began to form in my mind. Then, like a knife thrust, the truth struck me. "Ezra!" I yelled, swinging the car. "That wasn't Grannie! That was one of those damned cockatoo images. We've got to catch him." The other car was some distance ahead now. Park looked back and saw us following. He did something to the kite wire, and his car leaped ahead. I threw the speed indicator hard over. Our kite was a huge box affair with a steady powerful pull to the connecting wire. Park's vehicle was drawn by a flat triangular kite that dove and fluttered with each variance of the wind. Steadily we began to close in. The manager of Interstellar Voice turned again, and something glinted in his hand. There was a flash of purple flame, and a round hole appeared in our windscreen inches above Karn's head. "Heat gun!" Ezra yelled. Now we were rocketing over the sand dunes, winding in and out between the flagpole trees. I had to catch that car I told myself. Grannie Annie's very life might be at stake, not to mention the lives of hundreds of mine workers. Again Park took aim and again a hole shattered our windscreen. The wind shifted and blew from another quarter. The box kite soared, but the triangular kite faltered. Taking advantage of Park's loss of speed, I raced alongside. The I. V. manager lifted his weapon frantically. 
But before he could use it a third time, Ezra Karn had whipped a lariat from his belt and sent it coiling across the intervening space. The thong yanked tight about the manager's throat. Park did the only thing he could do. He shut off power, and the two cars coasted to a halt. Then I was across in the other seat, wrenching the weapon free from his grasp. "What have you done with Miss Flowers?" I demanded. The manager's eyes glittered with fear as he saw my finger tense on the trigger. Weakly he lifted an arm and pointed to the northwest. "Val-ley. Thir-ty miles. Entrance hidden by wall of ... flagpole trees." I leaped into the driver's seat and gave the kite its head. And now the country began to undergo a subtle change. The trees seemed to group themselves in a long flanking corridor in a northwesterly direction, as if to hide some secret that lay beyond. Twice I attempted to penetrate that wall, only to find my way blocked by those curious growths. Then a corridor opened before me; a mile forward and the desert began again. But it was a new desert this time: the sand packed hard as granite, the way ahead utterly devoid of vegetation. In the distance black bulging hills extended to right and left, with a narrow chasm or doorway between. I headed for that entrance, and when I reached it, I shut off power with an exclamation of astonishment. There was a huge chair-shaped rock there, and seated upon it was Grannie Annie. She had a tablet in her hands, and she was writing. "Grannie!" I yelled. "What're you doing here? Where's Mr. Baker?" She rose to her feet and clambered down the rock. "Getting back Jimmy's mine laborers," she said, a twinkle in her eyes. "I see you've got Antlers Park. I'm glad of that. It saves me a lot of trouble." She took off her spectacles and wiped them on her sleeve. "Don't look so fuddled, Billy-boy. Come along, and I'll show you." She led the way through the narrow passage into the valley. 
A deep gorge, it was, with the black sheer cliffs on either side pressing close. Ten feet forward, I stopped short, staring in amazement. Advancing toward me like a column of infantry came a long line of Larynx miners. They walked slowly, looking straight ahead, moving down the center of the gorge toward the entrance. But there was more! A kite car was drawn up to the side. The windscreen had been removed, and mounted on the hood was a large bullet-like contrivance that looked not unlike a search lamp. A blinding shaft of bluish radiance spewed from its open end. Playing it back and forth upon the marching men were Jimmy Baker and Xartal, the Martian. "Ultra violet," Grannie Annie explained. "The opposite end of the vibratory scale and the only thing that will combat the infra-red rays that cause red spot fever. Those men won't stop walking until they've reached Shaft Four." Grannie Annie told her story during the long ride back to Shaft Four. We drove slowly, keeping the line of marching Larynx miners always ahead of us. Jimmy Baker had struck a new big lode of Acoustix, a lode which if worked successfully would see Larynx Incorporated become a far more powerful exporting concern than Interstellar Voice . Antlers Park didn't want that. It was he or his agents who placed those lens buttons in the Larynx barracks. For he knew that just as Jupiter's great spot was responsible for a climate and atmosphere suitable for an Earthman on this Eighth Moon, so also was that spot a deadly power in itself, capable when its rays were concentrated of causing a fatal sickness. Then suddenly becoming fearful of Grannie's prying, Antlers Park strove to head her off before she reached Shaft Four. He did head her off and managed to lure her and Baker and Xartal into the Shaft barracks where they would be exposed to the rays from the lens button. But Grannie only pretended to contract the plague. 
Park then attempted to outwit Ezra Karn and me by returning in Jimmy Baker's kite car with a cockatoo image of Grannie.
|
D. They are able to physically mimic any picture.
|
Why was human cloning banned?
A. It was a preemptive measure. It's too complex to allow it to be explored unregulated.
B. It is objectively immoral and "evil."
C. It was an easy political stance for Bill Clinton to take.
D. There was no real research behind it, so there was no pushback on a ban.
|
Human Clones: Why Not? If you can clone a sheep, you can almost certainly clone a human being. Some of the most powerful people in the world have felt compelled to act against this threat. President Clinton swiftly imposed a ban on federal funding for human-cloning research. Bills are in the works in both houses of Congress to outlaw human cloning--a step urged on all governments by the pope himself. Cloning humans is taken to be either 1) a fundamentally evil thing that must be stopped or, at the very least, 2) a complex ethical issue that needs legislation and regulation. But what, exactly, is so bad about it? Start by asking whether human beings have a right to reproduce. I say "yes." I have no moral right to tell other people they shouldn't be able to have children, and I don't see that Bill Clinton has that right either. When Clinton says, "Let us resist the temptation to copy ourselves," it comes from a man not known for resisting other temptations of the flesh. And for a politician, making noise about cloning is pretty close to a fleshly temptation itself. It's an easy way to show sound-bite leadership on an issue that everybody is talking about, without much risk of bitter consequences. After all, how much federally funded research was stopped by this ban? Probably almost none, because Clinton has maintained Ronald Reagan's policy of minimizing federal grants for research in human reproduction. Besides, most researchers thought cloning humans was impossible--so, for the moment, there's unlikely to be a grant-request backlog. There is nothing like banning the nonexistent to show true leadership. The pope, unlike the president, is known for resisting temptation. He also openly claims the authority to decide how people reproduce. I respect the pope's freedom to lead his religion, and his followers' freedom to follow his dictate. 
But calling for secular governments to implement a ban, thus extending his power beyond those he can persuade, shows rather explicitly that the pope does not respect the freedom of others. The basic religious doctrine he follows was set down some two millennia ago. Sheep feature prominently in the Bible, but cloning does not. So the pope's views on cloning are 1st century rules applied using 15th century religious thinking to a 21st century issue. If humans have a right to reproduce, what right does society have to limit the means? Essentially all reproduction is done these days with medical help--at delivery, and often before. Truly natural human reproduction would mean 50 percent infant mortality and make pregnancy-related death the No. 1 killer of adult women. True, some forms of medical help are more invasive than others. With in vitro fertilization, the sperm and egg are combined in the lab and surgically implanted in the womb. Less than two decades ago, a similar concern was raised over the ethical issues involved in "test-tube babies." To date, nearly 30,000 such babies have been born in the United States alone. Many would-be parents have been made happy. Who has been harmed? The cloning procedure is similar to IVF. The only difference is that the DNA of sperm and egg would be replaced by DNA from an adult cell. What law or principle--secular, humanist, or religious--says that one combination of genetic material in a flask is OK, but another is not? No matter how closely you study the 1st century texts, I don't think you'll find the answer. Even if people have the right to do it, is cloning a good idea? Suppose that every prospective parent in the world stopped having children naturally, and instead produced clones of themselves. What would the world be like in another 20 or 30 years? The answer is: much like today. Cloning would only copy the genetic aspects of people who are already here. Hating a world of clones is hating the current populace. 
Never before was Pogo so right: We have met the enemy, and he is us ! A different scare scenario is a world filled with copies of famous people only. We'll treat celebrity DNA like designer clothes, hankering for Michael Jordan's genes the way we covet his Nike sneakers today. But even celebrity infatuation has its limits. People are not more taken with celebrities than they are with themselves. Besides, such a trend would correct itself in a generation or two, because celebrity is closely linked to rarity. The world seems amused by one Howard Stern, but give us a hundred or a million of them, and they'll seem a lot less endearing. Clones already exist. About one in every 1,000 births results in a pair of babies with the same DNA. We know them as identical twins. Scientific studies on such twins--reared together or apart--show that they share many characteristics. Just how many they share is a contentious topic in human biology. But genetic determinism is largely irrelevant to the cloning issue. Despite how many or how few individual characteristics twins--or other clones--have in common, they are different people in the most fundamental sense . They have their own identities, their own thoughts, and their own rights. Should you be confused on this point, just ask a twin. Suppose that Unsolved Mysteries called you with news of a long-lost identical twin. Would that suddenly make you less of a person, less of an individual? It is hard to see how. So, why would a clone be different? Your clone would be raised in a different era by different people--like the lost identical twin, only younger than you. A person's basic humanity is not governed by how he or she came into this world, or whether somebody else happens to have the same DNA. Twins aren't the only clones in everyday life. Think about seedless grapes or navel oranges--if there are no seeds, where did they come from? 
It's the plant equivalent of virgin birth--which is to say that they are all clones, propagated by cutting a shoot and planting it. Wine is almost entirely a cloned product. The grapes used for wine have seeds, but they've been cloned from shoots for more than a hundred years in the case of many vineyards. The same is true for many flowers. Go to a garden store, and you'll find products with delightful names like "Olivia's Cloning Compound," a mix of hormones to dunk on the cut end of a shoot to help it take root. One recurring image in anti-cloning propaganda is of some evil dictator raising an army of cloned warriors. Excuse me, but who is going to raise such an army ("raise" in the sense used by parents)? Clones start out life as babies . Armies are far easier to raise the old fashioned way--by recruiting or drafting naive young adults. Dulce et decorum est pro patria mori has worked well enough to send countless young men to their deaths through the ages. Why mess with success? Remember that cloning is not the same as genetic engineering. We don't get to make superman--we have to find him first. Maybe we could clone the superwarrior from Congressional Medal of Honor winners. Their bravery might--or might not--be genetically determined. But, suppose that it is. You might end up with such a brave battalion of heroes that when a grenade lands in their midst, there is a competition to see who gets to jump on it to save the others. Admirable perhaps, but not necessarily the way to win a war. And what about the supply sergeants? The army has a lot more of them than heroes. You could try to breed an expert for every job, including the petty bureaucrats, but what's the point? There's not exactly a shortage of them. What if Saddam Hussein clones were to rule Iraq for another thousand years? Sounds bad, but Saddam's natural son Uday is reputed to make his father seem saintly by comparison. 
We have no more to fear from a clone of Saddam, or of Hitler, than we do from their natural-born kin--which is to say, we don't have much to fear: Dictators' kids rarely pose a problem. Stalin's daughter retired to Arizona, and Kim Jong Il of North Korea is laughable as Great Leader, Version 2.0. The notion of an 80-year-old man cloning himself to cheat death is quaint, but it is unrealistic. First, the baby wouldn't really be him. Second, is the old duffer really up to changing diapers? A persistent octogenarian might convince a younger couple to have his clone and raise it, but that is not much different from fathering a child via a surrogate mother. Fear of clones is just another form of racism. We all agree it is wrong to discriminate against people based on a set of genetic characteristics known as "race." Calls for a ban on cloning amount to discrimination against people based on another genetic trait--the fact that somebody already has an identical DNA sequence. The most extreme form of discrimination is genocide--seeking to eliminate that which is different. In this case, the genocide is pre-emptive--clones are so scary that we must eliminate them before they exist with a ban on their creation. What is so special about natural reproduction anyway? Cloning is the only predictable way to reproduce, because it creates the identical twin of a known adult. Sexual reproduction is a crap shoot by comparison--some random mix of mom and dad. In evolutionary theory, this combination is thought to help stir the gene pool, so to speak. However, evolution for humans is essentially over, because we use medical science to control the death rate. Whatever the temptations of cloning, the process of natural reproduction will always remain a lot more fun. An expensive and uncomfortable lab procedure will never offer any real competition for sex. The people most likely to clone will be those in special circumstances--infertile couples who must endure IVF anyway, for example. 
Even there, many will mix genetics to mimic nature. Another special case is where one member of a couple has a severe genetic disease. They might choose a clone of the healthy parent, rather than burden their child with a joint heritage that could be fatal. The most upsetting possibility in human cloning isn't superwarriors or dictators. It's that rich people with big egos will clone themselves. The common practice of giving a boy the same name as his father or choosing a family name for a child of either sex reflects our hunger for vicarious immortality. Clones may resonate with this instinct and cause some people to reproduce this way. So what? Rich and egotistic folks do all sorts of annoying things, and the law is hardly the means with which to try and stop them. The "deep ethical issues" about cloning mainly boil down to jealousy. Economic jealousy is bad enough, and it is a factor here, but the thing that truly drives people crazy is sexual jealousy. Eons of evolution through sexual selection have made the average man or woman insanely jealous of any interloper who gains a reproductive advantage--say by diddling your spouse. Cloning is less personal than cuckoldry, but it strikes a similar chord: Someone has got the reproductive edge on you. Once the fuss has died down and further animal research has paved the way, direct human cloning will be one more option among many specialized medical interventions in human reproduction, affecting only a tiny fraction of the population. Research into this area could bring far wider benefits. Clinton's knee-jerk policy changes nothing in the short run, but it is ultimately a giant step backward. In using an adult cell to create a clone, the "cellular clock" that determines the difference between an embryo and adult was somehow reset. Work in this area might help elucidate the process by which aging occurs and yield a way to reset the clocks in some of our own cells, allowing us to regenerate. 
Selfishly speaking, that would be more exciting to me than cloning, because it would help me . That's a lot more directly useful than letting me sire an identical twin 40 years my junior. To some, the scientist laboring away to unlock the mysteries of life is a source of evil, never to be trusted. To others, including me, the scientist is the ray of light, illuminating the processes that make the universe work and making us better through that knowledge. Various arguments can be advanced toward either view, but one key statistic is squarely on my side. The vast majority of people, including those who rail against science, owe their very lives to previous medical discoveries. They embody the fruits of science. Don't let the forces of darkness, ignorance, and fear turn us back from research. Instead, let us raise--and yes, even clone--new generations of hapless ingrates, who can whine and rail against the discoveries of the next age.
|
A. It was a preemptive measure. It's too complex to allow it to be explored unregulated.
|
How much faster is training time for MGNC-CNN over the baselines?
|
### Introduction
Neural models have recently gained popularity for Natural Language Processing (NLP) tasks BIBREF0 , BIBREF1 , BIBREF2 . For sentence classification, in particular, Convolution Neural Networks (CNN) have realized impressive performance BIBREF3 , BIBREF4 . These models operate over word embeddings, i.e., dense, low dimensional vector representations of words that aim to capture salient semantic and syntactic properties BIBREF1 . An important consideration for such models is the specification of the word embeddings. Several options exist. For example, Kalchbrenner et al. kalchbrenner2014convolutional initialize word vectors to random low-dimensional vectors to be fit during training, while Johnson and Zhang johnson2014effective use fixed, one-hot encodings for each word. By contrast, Kim kim2014convolutional initializes word vectors to those estimated via the word2vec model trained on 100 billion words of Google News BIBREF5 ; these are then updated during training. Initializing embeddings to pre-trained word vectors is intuitively appealing because it allows transfer of learned distributional semantics. This has allowed a relatively simple CNN architecture to achieve remarkably strong results. Many pre-trained word embeddings are now readily available on the web, induced using different models, corpora, and processing steps. Different embeddings may encode different aspects of language BIBREF6 , BIBREF7 , BIBREF8 : those based on bag-of-words (BoW) statistics tend to capture associations (doctor and hospital), while embeddings based on dependency-parses encode similarity in terms of use (doctor and surgeon). It is natural to consider how these embeddings might be combined to improve NLP models in general and CNNs in particular. Contributions. We propose MGNC-CNN, a novel, simple, scalable CNN architecture that can accommodate multiple off-the-shelf embeddings of variable sizes. 
Our model treats different word embeddings as distinct groups, and applies CNNs independently to each, thus generating corresponding feature vectors (one per embedding) which are then concatenated at the classification layer. Inspired by prior work exploiting regularization to encode structure for NLP tasks BIBREF9 , BIBREF10 , we impose different regularization penalties on weights for features generated from the respective word embedding sets. Our approach enjoys the following advantages compared to the only existing comparable model BIBREF11 : (i) It can leverage diverse, readily available word embeddings with different dimensions, thus providing flexibility. (ii) It is comparatively simple, and does not, for example, require mutual learning or pre-training. (iii) It is an order of magnitude more efficient in terms of training time. ### Related Work
Prior work has considered combining latent representations of words that capture syntactic and semantic properties BIBREF12 , and inducing multi-modal embeddings BIBREF13 for general NLP tasks. And recently, Luo et al. luo2014pre proposed a framework that combines multiple word embeddings to measure text similarity; however, their focus was not on classification. More similar to our work, Yin and Schütze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. These are then treated as separate `channels', analogous to RGB channels in images. Filters consider all channels simultaneously. MVCNN achieved state-of-the-art performance on multiple sentence classification tasks. However, this model has practical drawbacks. (i) MVCNN requires that input word embeddings have the same dimensionality. Thus, to incorporate a second set of word vectors trained on a corpus (or using a model) of interest, one needs to either find embeddings that happen to have a set number of dimensions or to estimate embeddings from scratch. (ii) The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement). ### Model Description
We first review standard one-layer CNN (which exploits a single set of embeddings) for sentence classification BIBREF3 , and then propose our augmentations, which exploit multiple embedding sets. Basic CNN. In this model we first replace each word in a sentence with its vector representation, resulting in a sentence matrix INLINEFORM0 , where INLINEFORM1 is the (zero-padded) sentence length, and INLINEFORM2 is the dimensionality of the embeddings. We apply a convolution operation between linear filters with parameters INLINEFORM3 and the sentence matrix. For each INLINEFORM4 , where INLINEFORM5 denotes `height', we slide filter INLINEFORM6 across INLINEFORM7 , considering `local regions' of INLINEFORM8 adjacent rows at a time. At each local region, we perform element-wise multiplication and then take the element-wise sum between the filter and the (flattened) sub-matrix of INLINEFORM9 , producing a scalar. We do this for each sub-region of INLINEFORM10 that the filter spans, resulting in a feature map vector INLINEFORM11 . We can use multiple filter sizes with different heights, and for each filter size we can have multiple filters. Thus the model comprises INLINEFORM12 weight vectors INLINEFORM13 , each of which is associated with an instantiation of a specific filter size. These in turn generate corresponding feature maps INLINEFORM14 , with dimensions varying with filter size. A 1-max pooling operation is applied to each feature map, extracting the largest number INLINEFORM15 from each feature map INLINEFORM16 . Finally, we combine all INLINEFORM17 together to form a feature vector INLINEFORM18 to be fed through a softmax function for classification. We regularize weights at this level in two ways. (1) Dropout, in which we randomly set elements in INLINEFORM19 to zero during the training phase with probability INLINEFORM20 , and multiply INLINEFORM21 with the parameters trained in INLINEFORM22 at test time. 
(2) An l2 norm penalty, for which we set a threshold INLINEFORM23 for the l2 norm of INLINEFORM24 during training; if this is exceeded, we rescale the vector accordingly. For more details, see BIBREF4 . MG-CNN. Assuming we have INLINEFORM0 word embeddings with corresponding dimensions INLINEFORM1 , we can simply treat each word embedding independently. In this case, the input to the CNN comprises multiple sentence matrices INLINEFORM2 , where each INLINEFORM3 may have its own width INLINEFORM4 . We then apply different groups of filters INLINEFORM5 independently to each INLINEFORM6 , where INLINEFORM7 denotes the set of filters for INLINEFORM8 . As in basic CNN, INLINEFORM9 may have multiple filter sizes, and multiple filters of each size may be introduced. At the classification layer we then obtain a feature vector INLINEFORM10 for each embedding set, and we can simply concatenate these together to form the final feature vector INLINEFORM11 to feed into the softmax function, where INLINEFORM12 . This representation contains feature vectors generated from all sets of embeddings under consideration. We call this method multiple group CNN (MG-CNN). Here groups refer to the features generated from different embeddings. Note that this differs from `multi-channel' models because at the convolution layer we use different filters on each word embedding matrix independently, whereas in a standard multi-channel approach each filter would consider all channels simultaneously and generate a scalar from all channels at each local region. As above, we impose a max l2 norm constraint on the final feature vector INLINEFORM13 for regularization. Figure FIGREF1 illustrates this approach. MGNC-CNN. We propose an augmentation of MG-CNN, Multi-Group Norm Constraint CNN (MGNC-CNN), which differs in its regularization strategy. 
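As a concrete reference point for both variants, the shared forward pass (independent filter groups per embedding set, 1-max pooling, and concatenation into the final feature vector) can be sketched in NumPy. The dimensions, filter counts, and random values below are illustrative assumptions, not the authors' implementation or settings:

```python
import numpy as np

def group_features(S, filters):
    """One CNN group: slide each filter over sentence matrix S (s x d_g),
    then apply 1-max pooling, yielding one scalar per filter."""
    s, _ = S.shape
    feats = []
    for w in filters:
        h = w.shape[0]                     # filter height: words per window
        fmap = [np.sum(S[i:i + h] * w) for i in range(s - h + 1)]
        feats.append(max(fmap))            # 1-max pooling over the feature map
    return np.array(feats)

rng = np.random.default_rng(0)
# The same 7-word sentence under two embedding sets of different widths
S1 = rng.standard_normal((7, 6))           # stand-in for, e.g., a 300-d set
S2 = rng.standard_normal((7, 3))           # stand-in for a smaller set
filters1 = [rng.standard_normal((h, 6)) for h in (3, 4, 5)]
filters2 = [rng.standard_normal((h, 3)) for h in (3, 4, 5)]

# Concatenate the per-group feature vectors: o = [o1; o2]
o = np.concatenate([group_features(S1, filters1),
                    group_features(S2, filters2)])
print(o.shape)                             # (6,): 3 filters per group, 2 groups
```

In the full model each group would use multiple filters per size, and o would feed into the softmax classification layer.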
Specifically, in this variant we impose grouped regularization constraints, independently regularizing subcomponents INLINEFORM0 derived from the respective embeddings, i.e., we impose separate max norm constraints INLINEFORM1 for each INLINEFORM2 (where INLINEFORM3 again indexes embedding sets); these INLINEFORM4 hyper-parameters are to be tuned on a validation set. Intuitively, this method aims to better capitalize on features derived from word embeddings that capture discriminative properties of text for the task at hand by penalizing larger weight estimates for features derived from less discriminative embeddings. ### Datasets
Stanford Sentiment Treebank (SST) BIBREF14 . This concerns predicting movie review sentiment. Two datasets are derived from this corpus: (1) SST-1, containing five classes: very negative, negative, neutral, positive, and very positive. (2) SST-2, which has only two classes: negative and positive. For both, we remove phrases of length less than 4 from the training set. Subj BIBREF15 . The aim here is to classify sentences as either subjective or objective. This comprises 5000 instances of each. TREC BIBREF16 . A question classification dataset containing six classes: abbreviation, entity, description, human, location and numeric. There are 5500 training and 500 test instances. Irony BIBREF17 . This dataset contains 16,006 sentences from reddit labeled as ironic (or not). The dataset is imbalanced (relatively few sentences are ironic). Thus before training, we under-sampled negative instances to make class sizes equal. Note that for this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced. ### Pre-trained Word Embeddings
We consider three sets of word embeddings for our experiments: (i) word2vec is trained on 100 billion tokens of Google News dataset; (ii) GloVe BIBREF18 is trained on aggregated global word-word co-occurrence statistics from Common Crawl (840B tokens); and (iii) syntactic word embeddings trained on dependency-parsed corpora. These three embedding sets happen to all be 300-dimensional, but our model could accommodate arbitrary and variable sizes. We pre-trained our own syntactic embeddings following BIBREF8 . We parsed the ukWaC corpus BIBREF19 using the Stanford Dependency Parser v3.5.2 with Stanford Dependencies BIBREF20 and extracted (word, relation+context) pairs from parse trees. We "collapsed" nodes with prepositions and notated inverse relations separately, e.g., "dog barks" emits two tuples: (barks, nsubj_dog) and (dog, nsubj INLINEFORM0 _barks). We filter words and contexts that appear fewer than 100 times, resulting in INLINEFORM1 173k words and 1M contexts. We trained 300d vectors using word2vecf with default parameters. ### Setup
We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN (C-CNN). For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embeddings: word2vec+Glove, and word2vec+syntactic, and one combined set of three embeddings: word2vec+Glove+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 , and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 . We used standard train/test splits for those datasets that had them. Otherwise, we performed 10-fold cross validation, creating nested development sets with which to tune hyperparameters. For all experiments we used filter sizes of 3, 4 and 5 and we created 100 feature maps for each filter size. We applied 1-max pooling and dropout (rate: 0.5) at the classification layer. For training we used back-propagation in mini-batches and used AdaDelta as the stochastic gradient descent (SGD) update rule, and set the mini-batch size to 50. In this work, we treat word embeddings as part of the parameters of the model, and update them as well during training. In all our experiments, we only tuned the max norm constraint(s), fixing all other hyperparameters. ### Results and Discussion
We repeated each experiment 10 times and report the mean and ranges across these. This replication is important because training is stochastic and thus introduces variance in performance BIBREF4 . Results are shown in Table TABREF2 , and the corresponding best norm constraint value is shown in Table TABREF2 . We also show results on Subj, SST-1 and SST-2 achieved by the more complex model of BIBREF11 for comparison; this represents the state-of-the-art on the three datasets other than TREC. We can see that MGNC-CNN and MG-CNN always outperform baseline methods (including C-CNN), and MGNC-CNN is usually better than MG-CNN. And on the Subj dataset, MG-CNN actually achieves slightly better results than BIBREF11 , with far less complexity and required training time (MGNC-CNN performs comparably, although no better, here). On the TREC dataset, the best-ever accuracy we are aware of is 96.0% BIBREF21 , which falls within the range of the result of our MGNC-CNN model with three word embeddings. On the irony dataset, our model with three embeddings achieves a 4% improvement (in terms of AUC) compared to the baseline model. On SST-1 and SST-2, our model performs slightly worse than BIBREF11 . However, we again note that their performance is achieved using a much more complex model which involves pre-training and mutual-learning steps. This model takes days to train, whereas our model requires on the order of an hour. We note that the method proposed by Astudillo et al. astudillo2015learning is able to accommodate multiple embedding sets with different dimensions by projecting the original word embeddings into a lower-dimensional space. However, this work requires training the optimal projection matrix on labeled data first, which again incurs large overhead. Of course, our model also has its own limitations: in MGNC-CNN, we need to tune the norm constraint hyperparameter for all the word embeddings. 
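The hyperparameters in question control the group-wise max-norm rescaling from the Model Description: whenever a group's feature vector exceeds its own l2 threshold, it is scaled back onto the ball. A minimal, generic sketch (toy vectors and thresholds, not the authors' code):

```python
import numpy as np

def apply_group_max_norm(groups, lambdas):
    """Rescale each group's vector so its l2 norm does not exceed its own
    threshold lambda_g; vectors already inside the ball are left unchanged."""
    out = []
    for o_g, lam in zip(groups, lambdas):
        norm = np.linalg.norm(o_g)
        out.append(o_g * (lam / norm) if norm > lam else o_g)
    return out

o1 = np.array([3.0, 4.0])                  # l2 norm 5.0: exceeds its threshold
o2 = np.array([0.3, 0.4])                  # l2 norm 0.5: within its threshold
c1, c2 = apply_group_max_norm([o1, o2], lambdas=[1.0, 2.0])
print(np.linalg.norm(c1), np.linalg.norm(c2))   # ~1.0 and 0.5
```

With one lambda per embedding set, less discriminative embeddings can be squeezed harder than useful ones, which is the intuition behind MGNC-CNN's regularization.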
As the number of word embeddings increases, so does the running time. However, this tuning procedure is embarrassingly parallel. ### Conclusions
We have proposed MGNC-CNN: a simple, flexible CNN architecture for sentence classification that can exploit multiple, variable sized word embeddings. We demonstrated that this consistently achieves better results than a baseline architecture that exploits only a single set of word embeddings, and also a naive concatenation approach to capitalizing on multiple embeddings. Furthermore, our results are comparable to those achieved with a recently proposed model BIBREF11 that is much more complex. However, our simple model is easy to implement and requires an order of magnitude less training time. Furthermore, our model is much more flexible than previous approaches, because it can accommodate variable-size word embeddings. ### Acknowledgments
This work was supported in part by the Army Research Office (grant W911NF-14-1-0442) and by The Foundation for Science and Technology, Portugal (grant UTAP-EXPL/EEIESS/0031/2014). This work was also made possible by the support of the Texas Advanced Computing Center (TACC) at UT Austin. Figure 1: Illustration of MG-CNN and MGNC-CNN. The filters applied to the respective embeddings are completely independent. MG-CNN applies a max norm constraint to o, while MGNC-CNN applies max norm constraints on o1 and o2 independently (group regularization). Note that one may easily extend the approach to handle more than two embeddings at once. Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these. Table 2: Best λ2 value on the validation set for each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding.
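As a supplementary illustration of the syntactic-embedding pipeline from the Pre-trained Word Embeddings section, the (word, relation+context) pair extraction from dependency arcs can be sketched in plain Python. The arc format and the `_inv` marker (standing in for the paper's inverse-relation notation) are assumptions for illustration, not the authors' extraction code:

```python
def dependency_pairs(arcs):
    """Turn (head, relation, dependent) arcs into (word, context) training
    pairs: one forward pair per arc plus one inverse pair, notated separately."""
    pairs = []
    for head, rel, dep in arcs:
        pairs.append((head, f"{rel}_{dep}"))      # e.g. barks predicts nsubj_dog
        pairs.append((dep, f"{rel}_inv_{head}"))  # inverse relation for the dependent
    return pairs

print(dependency_pairs([("barks", "nsubj", "dog")]))
# [('barks', 'nsubj_dog'), ('dog', 'nsubj_inv_barks')]
```

Pairs like these, filtered by frequency, are what word2vecf consumes in place of linear bag-of-words contexts.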
|
It is an order of magnitude more efficient in terms of training time. This model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour.
|
Which aesthetic emotions are formalized?
|
Thomas Haider$^{1,3}$, Steffen Eger$^2$, Evgeny Kim$^3$, Roman Klinger$^3$, Winfried Menninghaus$^1$ $^{1}$Department of Language and Literature, Max Planck Institute for Empirical Aesthetics $^{2}$NLLG, Department of Computer Science, Technische Universität Darmstadt $^{3}$Institut für Maschinelle Sprachverarbeitung, University of Stuttgart {thomas.haider, w.m}@ae.mpg.de, [email protected] {roman.klinger, evgeny.kim}@ims.uni-stuttgart.de Most approaches to emotion analysis regarding social media, literature, news, and other domains focus exclusively on basic emotion categories as defined by Ekman or Plutchik. However, art (such as literature) enables engagement in a broader range of more complex and subtle emotions that have been shown to also include mixed emotional responses. We consider emotions as they are elicited in the reader, rather than what is expressed in the text or intended by the author. Thus, we conceptualize a set of aesthetic emotions that are predictive of aesthetic appreciation in the reader, and allow the annotation of multiple labels per line to capture mixed emotions within context. We evaluate this novel setting in an annotation experiment both with carefully trained experts and via crowdsourcing. Our annotation with experts leads to an acceptable agreement of $\kappa =.70$, resulting in a consistent dataset for future large-scale analysis. Finally, we conduct first emotion classification experiments based on BERT, showing that identifying aesthetic emotions is challenging in our data, with up to .52 F1-micro on the German subset. Data and resources are available at https://github.com/tnhaider/poetry-emotion. Emotion, Aesthetic Emotions, Literature, Poetry, Annotation, Corpora, Emotion Recognition, Multi-Label ### Introduction
Emotions are central to human experience, creativity and behavior. Models of affect and emotion, both in psychology and natural language processing, commonly operate on predefined categories, designated either by continuous scales of, e.g., Valence, Arousal and Dominance BIBREF0 or discrete emotion labels (which can also vary in intensity). Discrete sets of emotions often have been motivated by theories of basic emotions, as proposed by Ekman1992—Anger, Fear, Joy, Disgust, Surprise, Sadness—and Plutchik1991, who added Trust and Anticipation. These categories are likely to have evolved as they motivate behavior that is directly relevant for survival. However, art reception typically presupposes a situation of safety and therefore offers special opportunities to engage in a broader range of more complex and subtle emotions. These differences between real-life and art contexts have not been considered in natural language processing work so far. To emotionally move readers is considered a prime goal of literature since Latin antiquity BIBREF1, BIBREF2, BIBREF3. Deeply moved readers shed tears or get chills and goosebumps even in lab settings BIBREF4. In cases like these, the emotional response actually implies an aesthetic evaluation: narratives that have the capacity to move readers are evaluated as good and powerful texts for this very reason. Similarly, feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking). Emotions that exhibit this dual capacity have been defined as “aesthetic emotions” BIBREF2. Contrary to the negativity bias of classical emotion catalogues, emotion terms used for aesthetic evaluation purposes include far more positive than negative emotions. 
At the same time, many overall positive aesthetic emotions encompass negative or mixed emotional ingredients BIBREF2, e.g., feelings of suspense include both hopeful and fearful anticipations. For these reasons, we argue that the analysis of literature (with a focus on poetry) should rely on specifically selected emotion items rather than on the narrow range of basic emotions only. Our selection is based on previous research on this issue in psychological studies on art reception and, specifically, on poetry. For instance, knoop2016mapping found that Beauty is a major factor in poetry reception. We primarily adopt and adapt emotion terms that schindler2017measuring have identified as aesthetic emotions in their study on how to measure and categorize such particular affective states. Further, we consider the aspect that, when selecting specific emotion labels, the perspective of annotators plays a major role. Whether emotions are elicited in the reader, expressed in the text, or intended by the author largely changes the permissible labels. For example, feelings of Disgust or Love might be intended or expressed in the text, but the text might still fail to elicit corresponding feelings as these concepts presume a strong reaction in the reader. Our focus here was on the actual emotional experience of the readers rather than on hypothetical intentions of authors. We opted for this reader perspective based on previous research in NLP BIBREF5, BIBREF6 and work in empirical aesthetics BIBREF7, that specifically measured the reception of poetry. Our final set of emotion labels consists of Beauty/Joy, Sadness, Uneasiness, Vitality, Suspense, Awe/Sublime, Humor, Annoyance, and Nostalgia. In addition to selecting an adapted set of emotions, the annotation of poetry brings further challenges, one of which is the choice of the appropriate unit of annotation. 
Previous work considers words BIBREF8 , BIBREF9 , sentences BIBREF10 , BIBREF11 , utterances BIBREF12 , sentence triples BIBREF13 , or paragraphs BIBREF14 as the units of annotation. For poetry, reasonable units follow the logical document structure of poems, i.e., verse (line), stanza, and, owing to its relative shortness, the complete text. The more coarse-grained the unit, the more difficult the annotation is likely to be, but the more it may also enable the annotation of emotions in context. We find that annotating fine-grained units (lines) that are hierarchically ordered within a larger context (stanza, poem) caters to the specific structure of poems, where emotions are regularly mixed and are more interpretable within the whole poem. Consequently, we allow the mixing of emotions already at line level through multi-label annotation. The remainder of this paper includes (1) a report of the annotation process that takes these challenges into consideration, (2) a description of our annotated corpora, and (3) an implementation of baseline models for the novel task of aesthetic emotion annotation in poetry. In a first study, the annotators work on the annotations in a closely supervised fashion, carefully reading each verse, stanza, and poem. In a second study, the annotations are performed via crowdsourcing within relatively short time periods with annotators not seeing the entire poem while reading the stanza. Using these two settings, we aim at obtaining a better understanding of the advantages and disadvantages of an expert vs. crowdsourcing setting in this novel annotation task. Particularly, we are interested in estimating the potential of a crowdsourcing environment for the task of self-perceived emotion annotation in poetry, given the time and cost overhead associated with an in-house annotation process (which usually involves training and close supervision of the annotators). 
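Agreement levels like the $\kappa =.70$ reported above can be checked with a short Cohen's kappa routine. The sketch below is generic and single-label (the study itself uses multi-label annotation), and the label sequences are toy examples rather than annotations from the corpus:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences: observed agreement
    corrected by the agreement expected from each annotator's label distribution."""
    assert len(a) == len(b)
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[label] * cb[label] for label in set(a) | set(b)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

ann1 = ["Beauty/Joy", "Sadness", "Sadness", "Suspense", "Beauty/Joy", "Humor"]
ann2 = ["Beauty/Joy", "Sadness", "Uneasiness", "Suspense", "Beauty/Joy", "Humor"]
print(round(cohens_kappa(ann1, ann2), 2))   # 0.79 for this toy pair
```

Multi-label agreement, as used in the study, would instead be computed per emotion category or with a set-overlap statistic.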
We provide the final datasets of German and English language poems annotated with reader emotions on verse level at https://github.com/tnhaider/poetry-emotion. ### Related Work ::: Poetry in Natural Language Processing
Natural language understanding research on poetry has investigated stylistic variation BIBREF15, BIBREF16, BIBREF17, with a focus on broadly accepted formal features such as meter BIBREF18, BIBREF19, BIBREF20 and rhyme BIBREF21, BIBREF22, as well as enjambement BIBREF23, BIBREF24 and metaphor BIBREF25, BIBREF26. Recent work has also explored the relationship of poetry and prose, mainly on a syntactic level BIBREF27, BIBREF28. Furthermore, poetry also lends itself well to semantic (change) analysis BIBREF29, BIBREF30, as linguistic invention BIBREF31, BIBREF32 and succinctness BIBREF33 are at the core of poetic production. Corpus-based analysis of emotions in poetry has been considered, but there is no work on German, and little on English. kao2015computational analyze English poems with word associations from the Harvard Inquirer and LIWC, within the categories positive/negative outlook, positive/negative emotion and phys./psych. well-being. hou-frank-2015-analyzing examine the binary sentiment polarity of Chinese poems with a weighted personalized PageRank algorithm. barros2013automatic followed a tagging approach with a thesaurus to annotate words that are similar to the words `Joy', `Anger', `Fear' and `Sadness' (moreover translating these from English to Spanish). With these word lists, they distinguish the categories `Love', `Songs to Lisi', `Satire' and `Philosophical-Moral-Religious' in Quevedo's poetry. Similarly, alsharif2013emotion classify unique Arabic `emotional text forms' based on word unigrams. Mohanty2018 create a corpus of 788 poems in the Indian Odia language, annotate it on text (poem) level with binary negative and positive sentiment, and are able to distinguish these with moderate success. Sreeja2019 construct a corpus of 736 Indian language poems and annotate the texts on Ekman's six categories + Love + Courage. They achieve a Fleiss Kappa of .48. 
In contrast to our work, these studies focus on basic emotions and binary sentiment polarity only, rather than addressing aesthetic emotions. Moreover, they annotate on the level of complete poems (instead of fine-grained verse and stanza-level). ### Related Work ::: Emotion Annotation
Emotion corpora have been created for different tasks and with different annotation strategies, with different units of analysis and different foci of emotion perspective (reader, writer, text). Examples include the ISEAR dataset BIBREF34 (document-level); emotion annotation in children stories BIBREF10 and news headlines BIBREF35 (sentence-level); and fine-grained emotion annotation in literature by Kim2018 (phrase- and word-level). We refer the interested reader to an overview paper on existing corpora BIBREF36. We are only aware of a limited number of publications which look in more depth into the emotion perspective. buechel-hahn-2017-emobank report on an annotation study that focuses both on writer's and reader's emotions associated with English sentences. The results show that the reader perspective yields better inter-annotator agreement. Yang2009 also study the difference between writer and reader emotions, but not with a modeling perspective. The authors find that positive reader emotions tend to be linked to positive writer emotions in online blogs. ### Related Work ::: Emotion Classification
The task of emotion classification has been tackled before using rule-based and machine learning approaches. Rule-based emotion classification typically relies on lexical resources of emotionally charged words BIBREF9, BIBREF37, BIBREF8 and offers a straightforward and transparent way to detect emotions in text. In contrast to rule-based approaches, current models for emotion classification are often based on neural networks and commonly use word embeddings as features. Schuff2017 apply CNN, BiLSTM, and LSTM models and compare them to linear classifiers (SVM and MaxEnt); the BiLSTM shows the best results, with the most balanced precision and recall. AbdulMageed2017 claim the highest F$_1$ with gated recurrent unit networks BIBREF38 for Plutchik's emotion model. More recently, shared tasks on emotion analysis BIBREF39, BIBREF40 triggered a set of more advanced deep learning approaches, including BERT BIBREF41 and other transfer learning methods BIBREF42. ### Data Collection
For our annotation and modeling studies, we build on top of two poetry corpora (in English and German), which we refer to as PO-EMO. This collection represents important contributions to the literary canon over the last 400 years. We make this resource available in TEI P5 XML and an easy-to-use tab separated format. Table TABREF9 shows a size overview of these data sets. Figure FIGREF8 shows the distribution of our data over time via density plots. Note that both corpora show a relative underrepresentation before the onset of the romantic period (around 1750). ### Data Collection ::: German
The German corpus contains poems available from the website lyrik.antikoerperchen.de (ANTI-K), which provides a platform for students to upload essays about poems. The data is available in the Hypertext Markup Language, with clean line and stanza segmentation. ANTI-K also has extensive metadata, including author names, years of publication, numbers of sentences, poetic genres, and literary periods, that enable us to gauge the distribution of poems according to periods. The 158 poems we consider (731 stanzas) are dispersed over 51 authors and the New High German timeline (1575–1936 A.D.). This data has been annotated, besides emotions, for meter, rhythm, and rhyme in other studies BIBREF22, BIBREF43. ### Data Collection ::: English
The English corpus contains 64 poems of popular English writers. It was partly collected from Project Gutenberg with the GutenTag tool, and, in addition, includes a number of hand selected poems from the modern period and represents a cross section of popular English poets. We took care to include a number of female authors, who would have been underrepresented in a uniform sample. Time stamps in the corpus are organized by the birth year of the author, as assigned in Project Gutenberg. ### Expert Annotation
In the following, we explain how we compiled and annotated three data subsets: (1) 48 German poems with gold annotation, originally annotated by three annotators; the labels were aggregated by majority voting and by discussion among the annotators, and finally curated to include only one gold annotation; (2) the remaining 110 German poems, which are used to compute the agreement in Table TABREF20; and (3) 64 English poems. Subsets (2) and (3) contain the raw annotations from two annotators. We report the genesis of our annotation guidelines, including the emotion classes. With the intention to provide a language resource for the computational analysis of emotion in poetry, we aimed at maximizing the consistency of our annotation while doing justice to the diversity of poetry. We iteratively improved the guidelines and the annotation workflow by annotating in batches, cleaning the class set, and compiling a gold standard. The final overall cost of producing this expert-annotated dataset amounts to approximately 3,500. ### Expert Annotation ::: Workflow
The annotation process was initially conducted by three female university students majoring in linguistics and/or literary studies, whom we refer to as our “expert annotators”. We used the INCePTION platform for annotation BIBREF44. Starting with the German poems, we annotated in batches of about 16 (and later in some cases 32) poems. After each batch, we computed agreement statistics including heatmaps, and provided this feedback to the annotators. For the first three batches, the three annotators produced a gold standard using a majority vote for each line. Where this was inconclusive, they developed an adjudicated annotation based on discussion. Where necessary, we encouraged the annotators to aim for more consistency, as most of the frequent switching of emotions within a stanza could not be reconstructed or justified. In poems, emotions are regularly mixed (already on line level) and are more interpretable within the whole poem. We therefore annotate lines hierarchically within the larger context of stanzas and the whole poem. Hence, we instructed the annotators to read a complete stanza or full poem, and then annotate each line in the context of its stanza. To reflect the emotional complexity of poetry, we allow a maximum of two labels per line. To avoid heavy label fluctuations and `empty' annotations, we encouraged annotators to reflect on their feelings and advised them to use fewer labels and more consistent annotation. This additional constraint is necessary to avoid “wild”, non-reconstructable or non-justified annotations. All subsequent batches (all except the first three) were annotated by only two of the three initial annotators, coincidentally the two who had the lowest initial agreement with each other. We asked these two experts to use the generated gold standard (48 poems; majority votes of 3 annotators plus manual curation) as a reference (“if in doubt, annotate according to the gold standard”).
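The per-line aggregation used for the gold standard (majority vote with a discussion fallback) can be sketched as follows. This is a minimal illustration under our own assumptions about the data layout, not the project's actual tooling:

```python
from collections import Counter

def aggregate_line(annotations):
    """Majority vote over one line's multi-label annotations.

    annotations: one label set per annotator (here: three experts).
    Returns (labels, needs_discussion): the labels chosen by at least
    two annotators, or an empty set flagged for adjudication.
    """
    counts = Counter(label for ann in annotations for label in ann)
    majority = {label for label, c in counts.items() if c >= 2}
    if majority:
        return majority, False
    return set(), True  # inconclusive -> adjudicate in a discussion round
```

For example, annotations {Sadness}, {Sadness, Nostalgia}, {Beauty/Joy} resolve to the majority label Sadness, while three disjoint choices are flagged for discussion.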
This eliminated some systematic differences between them and markedly improved the agreement levels, roughly from 0.3–0.5 Cohen's $\kappa $ in the first three batches to around 0.6–0.8 $\kappa $ for all subsequent batches. This annotation procedure relaxes the reader perspective, as we encourage annotators (if in doubt) to annotate how they think the other annotators would annotate. However, we found that this formulation improves the usability of the data and leads to a more consistent annotation. ### Expert Annotation ::: Emotion Labels
We opt for measuring the reader perspective rather than the text surface or author's intent. To define and support the conceptualization of our labels, we use particular `items', as they are used in psychological self-evaluations. These items consist of adjectives, verbs or short phrases. We build on top of schindler2017measuring, who proposed 43 items that were then grouped by a factor analysis based on self-evaluations of participants. The resulting factors are shown in Table TABREF17. We attempt to cover all identified factors and supplement with basic emotions BIBREF46, BIBREF47, where possible. We started with a larger set of labels, then deleted and substituted (toned down) labels during the initial annotation process to avoid infrequent classes and inconsistencies. Further, we conflated labels if they showed considerable confusion with each other. These iterative improvements particularly affected Confusion, Boredom and Other, which were annotated very infrequently and had little agreement among annotators ($\kappa <.2$). For German, we also removed Nostalgia ($\kappa =.218$) after gold standard creation, but, after consideration, added it back for English, where it then reached agreement. Nostalgia is still available in the gold standard (there with a second label, Beauty/Joy or Sadness, to keep consistency). However, Confusion, Boredom and Other are not available in any sub-corpus. Our final set consists of nine classes, i.e., (in order of frequency) Beauty/Joy, Sadness, Uneasiness, Vitality, Suspense, Awe/Sublime, Humor, Annoyance, and Nostalgia. In the following, we describe the labels and give further details on the aggregation process. Annoyance (annoys me/angers me/felt frustrated): Annoyance implies feeling annoyed, frustrated or even angry while reading the line/stanza. We subsume the class Anger here, as Anger alone was found to be too strong in intensity.
Awe/Sublime (found it overwhelming/sense of greatness): Awe/Sublime implies being overwhelmed by the line/stanza, i.e., if one gets the impression of facing something sublime or if the line/stanza inspires one with awe (or that the expression itself is sublime). Such emotions are often associated with subjects like god, death, life, truth, etc. The term Sublime originated with kant2000critique as one of the first aesthetic emotion terms. Awe is a more common English term. Beauty/Joy (found it beautiful/pleasing/makes me happy/joyful): kant2000critique already spoke of a “feeling of beauty”, and it should be noted that it is not a `merely pleasing emotion'. Therefore, in our pilot annotations, Beauty and Joy were separate labels. However, schindler2017measuring found that items for Beauty and Joy load into the same factors. Furthermore, our pilot annotations revealed, while Beauty is the more dominant and frequent feeling, both labels regularly accompany each other, and they often get confused across annotators. Therefore, we add Joy to form an inclusive label Beauty/Joy that increases annotation consistency. Humor (found it funny/amusing): Implies feeling amused by the line/stanza or if it makes one laugh. Nostalgia (makes me nostalgic): Nostalgia is defined as a sentimental longing for things, persons or situations in the past. It often carries both positive and negative feelings. However, since this label is quite infrequent, and not available in all subsets of the data, we annotated it with an additional Beauty/Joy or Sadness label to ensure annotation consistency. Sadness (makes me sad/touches me): If the line/stanza makes one feel sad. It also includes a more general `being touched / moved'. Suspense (found it gripping/sparked my interest): Choose Suspense if the line/stanza keeps one in suspense (if the line/stanza excites one or triggers one's curiosity). 
We further removed Anticipation from Suspense/Anticipation, as Anticipation appeared to us as being a more cognitive prediction whereas Suspense is a far more straightforward emotion item. Uneasiness (found it ugly/unsettling/disturbing / frightening/distasteful): This label covers situations when one feels discomfort about the line/stanza (if the line/stanza feels distasteful/ugly, unsettling/disturbing or frightens one). The labels Ugliness and Disgust were conflated into Uneasiness, as both are seldom felt in poetry (being inadequate/too strong/high in arousal), and typically lead to Uneasiness. Vitality (found it invigorating/spurs me on/inspires me): This label is meant for a line/stanza that has an inciting, encouraging effect (if the line/stanza conveys a feeling of movement, energy and vitality which animates to action). Similar terms are Activation and Stimulation. ### Expert Annotation ::: Agreement
Table TABREF20 shows the Cohen's $\kappa $ agreement scores among our two expert annotators for each emotion category $e$ as follows. We assign each instance (a line in a poem) a binary label indicating whether or not the annotator has annotated the emotion category $e$ in question. From this, we obtain vectors $v_i^e$, for annotators $i=0,1$, where each entry of $v_i^e$ holds the binary value for the corresponding line. We then apply the $\kappa $ statistics to the two binary vectors $v_i^e$. In addition to averaged $\kappa $, we report in Table TABREF21 micro-F1 values between the multi-label annotations of both expert annotators, as well as the micro-F1 scores of a random baseline and of the majority emotion baseline (which labels each line as Beauty/Joy). We find that Cohen's $\kappa $ agreement ranges from .84 for Uneasiness in the English data, .81 for Humor and Nostalgia, down to German Suspense (.65), Awe/Sublime (.61) and Vitality for both languages (.50 English, .63 German). Both annotators have a similar emotion frequency profile, where the ranking is almost identical, especially for German. However, for English, Annotator 2 annotates more Vitality than Uneasiness. Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps. Notably, Beauty/Joy and Sadness are confused across annotators more often than other labels. This is topical for poetry, and therefore not surprising: one might argue that the beauty of beings and situations is only beautiful because it is not enduring, and therefore cannot be divorced from the sadness of its vanishing BIBREF48. We also find considerable confusion of Sadness with Awe/Sublime and Vitality, while the latter is also regularly confused with Beauty/Joy. Furthermore, as shown in Figure FIGREF23, we find that no single poem aggregates to more than six emotion labels, while no stanza aggregates to more than four emotion labels. However, most lines and stanzas prefer one or two labels.
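The per-emotion agreement computation described above reduces to Cohen's $\kappa $ on two binary vectors. A minimal self-contained sketch (our illustration, not the paper's released code):

```python
def cohen_kappa(v0, v1):
    """Cohen's kappa for two equal-length binary vectors: one entry per
    line, 1 iff the annotator assigned the emotion category in question."""
    n = len(v0)
    p_obs = sum(a == b for a, b in zip(v0, v1)) / n   # observed agreement
    p0, p1 = sum(v0) / n, sum(v1) / n                 # per-annotator label rates
    p_exp = p0 * p1 + (1 - p0) * (1 - p1)             # chance agreement
    return 1.0 if p_exp == 1 else (p_obs - p_exp) / (1 - p_exp)
```

Applying this to the vectors $v_0^e$ and $v_1^e$ for each emotion $e$ yields the per-category scores reported in Table TABREF20.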
German poems seem more emotionally diverse: more poems carry three labels than two, while the majority of English poems have only two labels. This is, however, attributable to the generally shorter English texts. ### Crowdsourcing Annotation
After concluding the expert annotation, we performed a focused crowdsourcing experiment, based on the final label set and items as they are listed in Table TABREF27 and Section SECREF19. With this experiment, we aim to understand whether it is possible to collect reliable judgements for aesthetic perception of poetry from a crowdsourcing platform. A second goal is to see whether we can replicate the expensive expert annotations with less costly crowd annotations. We opted for a maximally simple annotation environment, where we asked participants to annotate English 4-line stanzas with self-perceived reader emotions. We choose English due to the higher availability of English language annotators on crowdsourcing platforms. Each annotator rates each stanza independently of surrounding context. ### Crowdsourcing Annotation ::: Data and Setup
For consistency and to simplify the task for the annotators, we opt for a trade-off between completeness and granularity of the annotation. Specifically, we subselect stanzas composed of four verses from the corpus of 64 hand selected English poems. The resulting selection of 59 stanzas is uploaded to Figure Eight for annotation. The annotators are asked to answer the following questions for each instance. Question 1 (single-choice): Read the following stanza and decide for yourself which emotions it evokes. Question 2 (multiple-choice): Which additional emotions does the stanza evoke? The answers to both questions correspond to the emotion labels we defined to use in our annotation, as described in Section SECREF19. We add an additional answer choice “None” to Question 2 to allow annotators to say that a stanza does not evoke any additional emotions. Each instance is annotated by ten people. We restrict the task geographically to the United Kingdom and Ireland and set the internal parameters on Figure Eight to only include the highest quality annotators to join the task. We pay 0.09 per instance. The final cost of the crowdsourcing experiment is 74. ### Crowdsourcing Annotation ::: Results
In the following, we determine the best aggregation strategy regarding the 10 annotators with bootstrap resampling. For instance, one could assign the label of a specific emotion to an instance if just one annotator picks it, or one could assign the label only if all annotators agree on this emotion. To evaluate this, we repeatedly pick two sets of 5 annotators each out of the 10 annotators for each of the 59 stanzas, 1000 times overall (i.e., 1000$\times $59 times, bootstrap resampling). For each of these repetitions, we compare the agreement of these two groups of 5 annotators. Each group is assigned an adjudicated emotion, which is accepted if at least one annotator picks it, at least two annotators pick it, etc., up to all five picking it. We show the results in Table TABREF27. The $\kappa $ scores show the average agreement between the two groups of five annotators, when the adjudicated class is picked based on the particular threshold of annotators with the same label choice. We see that some emotions tend to have higher agreement scores than others, namely Annoyance (.66), Sadness (up to .52), and Awe/Sublime, Beauty/Joy, Humor (all .46). The maximum agreement is reached mostly with a threshold of 2 (4 times) or 3 (3 times). We further show in the same table the average numbers of labels from each strategy. Obviously, a lower threshold leads to higher numbers (corresponding to a disjunction of annotations for each emotion). The drop in label counts is comparably drastic, with on average 18 labels per class. Overall, the best average $\kappa $ agreement (.32) is less than half of what we saw for the expert annotators (roughly .70). Crowds especially disagree on the more intricate emotion labels (Uneasiness, Vitality, Nostalgia, Suspense). We visualize how often two emotions are used to label an instance in a confusion table in Figure FIGREF18.
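One iteration of this bootstrap procedure can be sketched as follows. This is our own illustration of the described scheme; function names and data layout are assumptions:

```python
import random

def adjudicate(group, emotion, threshold):
    """1 if at least `threshold` annotators in `group` (a list of label
    sets for one stanza) chose `emotion`, else 0."""
    return int(sum(emotion in labels for labels in group) >= threshold)

def split_into_two_groups(annotators, rng):
    """One bootstrap step: randomly split the ten annotators into two
    disjoint groups of five."""
    shuffled = rng.sample(annotators, len(annotators))
    return shuffled[:5], shuffled[5:]
```

Per repetition, the adjudicated labels of the two groups are then compared (e.g., via $\kappa $) for each threshold from 1 to 5.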
Sadness is used most often to annotate a stanza, and it is often confused with Suspense, Uneasiness, and Nostalgia. Further, Beauty/Joy partially overlaps with Awe/Sublime, Nostalgia, and Sadness. On average, each crowd annotator uses two emotion labels per stanza (56% of cases); only in 36% of the cases the annotators use one label, and in 6% and 1% of the cases three and four labels, respectively. This contrasts with the expert annotators, who use one label in about 70% of the cases and two labels in 30% of the cases for the same 59 four-liners. Concerning frequency distribution for emotion labels, both experts and crowds name Sadness and Beauty/Joy as the most frequent emotions (for the `best' threshold of 3) and Nostalgia as one of the least frequent emotions. The Spearman rank correlation between experts and crowds is about 0.55 with respect to the label frequency distribution, indicating that crowds could replace experts to a moderate degree when it comes to extracting, e.g., emotion distributions for an author or time period. Now, we further compare crowds and experts in terms of whether crowds could replicate expert annotations also on a finer stanza level (rather than only on a distributional level). ### Crowdsourcing Annotation ::: Comparing Experts with Crowds
To gauge the quality of the crowd annotations in comparison with our experts, we calculate agreement on the emotions between experts and an increasing group size from the crowd. For each stanza instance $s$, we pick $N$ crowd workers, where $N\in \lbrace 4,6,8,10\rbrace $, then pick their majority emotion for $s$, and additionally pick their second ranked majority emotion if at least $\frac{N}{2}-1$ workers have chosen it. For the experts, we aggregate their emotion labels on stanza level, then perform the same strategy for selection of emotion labels. Thus, for $s$, both crowds and experts have 1 or 2 emotions. For each emotion, we then compute Cohen's $\kappa $ as before. Note that, compared to our previous experiments in Section SECREF26 with a threshold, each stanza now receives an emotion annotation (exactly one or two emotion labels), both by the experts and the crowd-workers. In Figure FIGREF30, we plot agreement between experts and crowds on stanza level as we vary the number $N$ of crowd workers involved. On average, there is roughly a steady linear increase in agreement as $N$ grows, which may indicate that $N=20$ or $N=30$ would still lead to better agreement. Concerning individual emotions, Nostalgia is the emotion with the least agreement, as opposed to Sadness (in our sample of 59 four-liners): the agreement for this emotion grows from $.47$ $\kappa $ with $N=4$ to $.65$ $\kappa $ with $N=10$. Sadness is also the most frequent emotion, both according to experts and crowds. Other emotions for which a reasonable agreement is achieved are Annoyance, Awe/Sublime, Beauty/Joy, Humor ($\kappa $ > 0.2). Emotions with little agreement are Vitality, Uneasiness, Suspense, Nostalgia ($\kappa $ < 0.2). By and large, we note from Figure FIGREF18 that expert annotation is more restrictive, with experts agreeing more often on particular emotion labels (seen in the darker diagonal). 
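The selection rule above (majority emotion, plus the second-ranked emotion if at least $\frac{N}{2}-1$ workers chose it) can be sketched as follows; this is our illustration, with ties broken by insertion order rather than any rule stated in the paper:

```python
from collections import Counter

def crowd_emotions(worker_labels):
    """worker_labels: one label set per crowd worker for a stanza.
    Returns the majority emotion, plus the second-ranked emotion if at
    least N/2 - 1 workers chose it, so each stanza gets 1 or 2 labels."""
    n = len(worker_labels)
    ranked = Counter(l for labels in worker_labels for l in labels).most_common()
    chosen = [ranked[0][0]]
    if len(ranked) > 1 and ranked[1][1] >= n / 2 - 1:
        chosen.append(ranked[1][0])
    return chosen
```

The same selection is applied to the aggregated expert labels, so that crowds and experts are compared on identically shaped stanza-level annotations.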
The results of the crowdsourcing experiment, on the other hand, are a mixed bag as evidenced by a much sparser distribution of emotion labels. However, we note that these differences can be caused by 1) the disparate training procedure for the experts and crowds, and 2) the lack of opportunities for close supervision and on-going training of the crowds, as opposed to the in-house expert annotators. In general, however, we find that substituting experts with crowds is possible to a certain degree. Even though the crowds' labels look inconsistent at first view, there appears to be a good signal in their aggregated annotations, helping to approximate expert annotations to a certain degree. The average $\kappa $ agreement (with the experts) we get from $N=10$ crowd workers (0.24) is still considerably below the agreement among the experts (0.70). ### Modeling
To estimate the difficulty of automatic classification of our data set, we perform multi-label document classification (of stanzas) with BERT BIBREF41. For this experiment we aggregate all labels for a stanza and sort them by frequency, both for the gold standard and the raw expert annotations. As can be seen in Figure FIGREF23, a stanza bears a minimum of one and a maximum of four emotions. Unfortunately, the label Nostalgia occurs only 16 times in the German data (the gold standard), as a second label (as discussed in Section SECREF19). None of our models was able to learn this label for German. Therefore we omit it, leaving us with eight proper labels. We use the code and the pre-trained BERT models of Farm, provided by deepset.ai. We test the multilingual-uncased model (Multiling), the german-base-cased model (Base), the german-dbmdz-uncased model (Dbmdz), and we tune the Base model on 80k stanzas of the German Poetry Corpus DLK BIBREF30 for 2 epochs, both on token (masked words) and sequence (next line) prediction (Base$_{\textsc {Tuned}}$). We split the randomized German dataset so that each label occurs at least 10 times in the validation set (63 instances, 113 labels) and at least 10 times in the test set (56 instances, 108 labels), and leave the rest for training (617 instances, 946 labels). We train BERT for 10 epochs (with a batch size of 8), optimize with cross-entropy loss, and report micro-F1 on the test set. See Table TABREF36 for the results. We find that the multilingual model cannot handle infrequent categories, i.e., Awe/Sublime, Suspense and Humor. However, increasing the dataset with English data improves the results, suggesting that the classification would largely benefit from more annotated data. The best model overall is DBMDZ (.520), showing a balanced response on both validation and test set. See Table TABREF37 for a breakdown of all emotions as predicted by this model. Precision is mostly higher than recall.
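For the multi-label setup, the aggregated stanza labels are turned into a fixed-length target vector, one slot per emotion class. A minimal sketch with the eight German labels (the encoding shown is our assumption, not necessarily the released code):

```python
# The eight labels kept for German, in the paper's frequency order
# (Nostalgia omitted, as it could not be learned).
LABELS = ["Beauty/Joy", "Sadness", "Uneasiness", "Vitality",
          "Suspense", "Awe/Sublime", "Humor", "Annoyance"]

def multi_hot(stanza_labels):
    """Encode a stanza's aggregated emotion labels as a multi-hot
    target vector for multi-label classification."""
    return [int(label in stanza_labels) for label in LABELS]
```

Each stanza thus yields a vector with one to four entries set, matching the label counts in Figure FIGREF23.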
The labels Awe/Sublime, Suspense and Humor are harder to predict than the other labels. The BASE and BASE$_{\textsc {TUNED}}$ models perform slightly worse than DBMDZ. The effect of tuning the BASE model is questionable, probably because of the restricted vocabulary (30k). We found that tuning on poetry does not yield obvious improvements. Lastly, we find that models trained on lines (instead of stanzas) do not achieve the same F1 (~.42 for the German models). ### Concluding Remarks
In this paper, we presented a dataset of German and English poetry annotated with reader response to reading poetry. We argued that basic emotions as proposed by psychologists (such as Ekman and Plutchik), which are often used in emotion analysis from text, are of little use for the annotation of poetry reception. We instead conceptualized aesthetic emotion labels and showed that a closely supervised annotation task results in substantial agreement (in terms of $\kappa $ score) on the final dataset. The task of collecting reader-perceived emotion response to poetry in a crowdsourcing setting is not straightforward. In contrast to expert annotators, who were closely supervised and reflected upon the task, the annotators on crowdsourcing platforms are difficult to control and may lack the necessary background knowledge to perform the task at hand. However, using a larger number of crowd annotators may lead to finding an aggregation strategy with a better trade-off between quality and quantity of adjudicated labels. For future work, we thus propose to repeat the experiment with a larger number of crowdworkers, and to develop an improved training strategy that would suit the crowdsourcing environment. The dataset presented in this paper can be of use for different application scenarios, including multi-label emotion classification, style-conditioned poetry generation, investigating the influence of rhythm/prosodic features on emotion, or analysis of authors, genres and diachronic variation (e.g., how emotions are represented differently in certain periods). Further, though our modeling experiments are still rudimentary, we propose that this data set can be used to investigate intra-poem relations, either through multi-task learning BIBREF49 and/or with the help of hierarchical sequence classification approaches. ### Acknowledgements
A special thanks goes to Gesine Fuhrmann, who created the guidelines and tirelessly documented the annotation progress. Also thanks to Annika Palm and Debby Trzeciak who annotated and gave lively feedback. For help with the conceptualization of labels we thank Ines Schindler. This research has been partially conducted within the CRETA center (http://www.creta.uni-stuttgart.de/) which is funded by the German Ministry for Education and Research (BMBF) and partially funded by the German Research Council (DFG), projects SEAT (Structured Multi-Domain Emotion Analysis from Text, KL 2869/1-1). This work has also been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) at the Technische Universität Darmstadt under grant No. GRK 1994/1. ### Appendix
We illustrate two examples of our German gold standard annotation, a poem each by Friedrich Hölderlin and Georg Trakl, and an English poem by Walt Whitman. Hölderlin's text stands out, because the mood changes starkly from the first stanza to the second, from Beauty/Joy to Sadness. Trakl's text is a bit more complex with bits of Nostalgia and, most importantly, a mixture of Uneasiness with Awe/Sublime. Whitman's poem is an example of Vitality and its mixing with Sadness. The English annotation was unified by us for space constraints. For the full annotation please see https://github.com/tnhaider/poetry-emotion/ ### Appendix ::: Friedrich Hölderlin: Hälfte des Lebens (1804)
### Appendix ::: Georg Trakl: In den Nachmittag geflüstert (1912)
### Appendix ::: Walt Whitman: O Captain! My Captain! (1865)
Figure 1: Temporal distribution of poetry corpora (Kernel Density Plots with bandwidth = 0.2).
Table 1: Statistics on our poetry corpora PO-EMO.
Table 2: Aesthetic Emotion Factors (Schindler et al., 2017).
Table 3: Cohen's kappa agreement levels and normalized line-level emotion frequencies for expert annotators (Nostalgia is not available in the German data).
Table 4: Top: averaged kappa scores and micro-F1 agreement scores, taking one annotator as gold. Bottom: Baselines.
Figure 2: Emotion cooccurrence matrices for the German and English expert annotation experiments and the English crowdsourcing experiment.
Figure 3: Distribution of number of distinct emotion labels per logical document level in the expert-based annotation. No whole poem has more than 6 emotions. No stanza has more than 4 emotions.
Table 5: Results obtained via bootstrapping for annotation aggregation. The row Threshold shows how many people within a group of five annotators should agree on a particular emotion. The column labeled Counts shows the average number of times a certain emotion was assigned to a stanza given the threshold. Cells with ‘–’ mean that neither of the two groups satisfied the threshold.
Figure 4: Agreement between experts and crowds as a function of the number N of crowd workers.
Table 6: BERT-based multi-label classification on stanza level.
Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. ‘Support’ signifies the number of labels.
Feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking). Emotions that exhibit this dual capacity have been defined as “aesthetic emotions”.
Why can't the Captain find Purnie?
A. Purnie lost consciousness outside of time.
B. Purnie drowned in the ocean.
C. Purnie is covered by the petrified logs and too weak to call out for help.
D. Purnie lost consciousness and is now invisible.
BEACH SCENE By MARSHALL KING Illustrated by WOOD [Transcriber's Note: This etext was produced from Galaxy Magazine October 1960. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.] It was a fine day at the beach for Purnie's game—but his new friends played very rough! Purnie ran laughing and shouting through the forest until he could run no more. He fell headlong into a patch of blue moss and whooped with delight in having this day free for exploring. He was free to see the ocean at last. When he had caught his breath, he looked back through the forest. No sign of the village; he had left it far behind. Safe from the scrutiny of brothers and parents, there was nothing now to stop him from going to the ocean. This was the moment to stop time. "On your mark!" he shouted to the rippling stream and its orange whirlpools. He glanced furtively from side to side, pretending that some object might try to get a head start. "Get set!" he challenged the thin-winged bees that hovered over the abundant foliage. "Stop!" He shrieked this command upward toward the dense, low-hanging purple clouds that perennially raced across the treetops, making one wonder how tall the trees really were. His eyes took quick inventory. It was exactly as he knew it would be: the milky-orange stream had become motionless and its minute whirlpools had stopped whirling; a nearby bee hung suspended over a paka plant, its transparent wings frozen in position for a downward stroke; and the heavy purple fluid overhead held fast in its manufacture of whorls and nimbi. With everything around him in a state of perfect tableau, Purnie hurried toward the ocean. If only the days weren't so short! he thought. There was so much to see and so little time. It seemed that everyone except him had seen the wonders of the beach country. The stories he had heard from his brothers and their friends had taunted him for as long as he could remember. 
So many times had he heard these thrilling tales that now, as he ran along, he could clearly picture the wonderland as though he were already there. There would be a rockslide of petrified logs to play on, the ocean itself with waves higher than a house, the comical three-legged tripons who never stopped munching on seaweed, and many kinds of other wonderful creatures found only at the ocean. He bounced through the forest as though the world was reserved this day just for him. And who could say it wasn't? he thought. Wasn't this his fifth birthday? He ran along feeling sorry for four-year-olds, and even for those who were only four and a half, for they were babies and wouldn't dare try slipping away to the ocean alone. But five! "I'll set you free, Mr. Bee—just wait and see!" As he passed one of the many motionless pollen-gathering insects he met on the way, he took care not to brush against it or disturb its interrupted task. When Purnie had stopped time, the bees—like all the other creatures he met—had been arrested in their native activities, and he knew that as soon as he resumed time, everything would pick up where it had left off. When he smelled an acid sweetness that told him the ocean was not far off, his pulse quickened in anticipation. Rather than spoil what was clearly going to be a perfect day, he chose to ignore the fact that he had been forbidden to use time-stopping as a convenience for journeying far from home. He chose to ignore the oft-repeated statement that an hour of time-stopping consumed more energy than a week of foot-racing. He chose to ignore the negative maxim that "small children who stop time without an adult being present, may not live to regret it." He chose, instead, to picture the beaming praise of family and friends when they learned of his brave journey. The journey was long, the clock stood still. He stopped long enough to gather some fruit that grew along the path. It would serve as his lunch during this day of promise. 
With it under his arm he bounded along a dozen more steps, then stopped abruptly in his tracks. He found himself atop a rocky knoll, overlooking the mighty sea! He was so overpowered by the vista before him that his "Hurrah!" came out as a weak squeak. The ocean lay at the ready, its stilled waves awaiting his command to resume their tidal sweep. The breakers along the shoreline hung in varying stages of disarray, some having already exploded into towering white spray while others were poised in smooth orange curls waiting to start that action. And there were new friends everywhere! Overhead, a flock of spora were frozen in a steep glide, preparatory to a beach landing. Purnie had heard of these playful creatures many times. Today, with his brothers in school, he would have the pets all to himself. Further down the beach was a pair of two-legged animals poised in mid-step, facing the spot where Purnie now stood. Some distance behind them were eight more, each of whom were motionless in a curious pose of interrupted animation. And down in the water, where the ocean ran itself into thin nothingness upon the sand, he saw standing here and there the comical tripons, those three-legged marine buffoons who made handsome careers of munching seaweed. "Hi there!" Purnie called. When he got no reaction, he remembered that he himself was "dead" to the living world: he was still in a zone of time-stopping, on the inside looking out. For him, the world would continue to be a tableau of mannikins until he resumed time. "Hi there!" he called again; but now his mental attitude was that he expected time to resume. It did! Immediately he was surrounded by activity. He heard the roar of the crashing orange breakers, he tasted the dew of acid that floated from the spray, and he saw his new friends continue the actions which he had stopped while back in the forest. 
He knew, too, that at this moment, in the forest, the little brook picked up its flow where it had left off, the purple clouds resumed their leeward journey up the valley, and the bees continued their pollen-gathering without having missed a single stroke of their delicate wings. The brook, the clouds, and the insects had not been interrupted in the least; their respective tasks had been performed with continuing sureness. It was time itself that Purnie had stopped, not the world around him. He scampered around the rockpile and down the sandy cliff to meet the tripons who, to him, had just come to life. "I can stand on my head!" He set down his lunch and balanced himself bottoms-up while his legs pawed the air in an effort to hold him in position. He knew it was probably the worst head-stand he had ever done, for he felt weak and dizzy. Already time-stopping had left its mark on his strength. But his spirits ran on unchecked. The tripon thought Purnie's feat was superb. It stopped munching long enough to give him a salutory wag of its rump before returning to its repast. Purnie ran from pillar to post, trying to see and do everything at once. He looked around to greet the flock of spora, but they had glided to a spot further along the shore. Then, bouncing up to the first of the two-legged animals, he started to burst forth with his habitual "Hi there!" when he heard them making sounds of their own. "... will be no limit to my operations now, Benson. This planet makes seventeen. Seventeen planets I can claim as my own!" "My, my. Seventeen planets. And tell me, Forbes, just what the hell are you going to do with them—mount them on the wall of your den back in San Diego?" "Hi there, wanna play?" Purnie's invitation got nothing more than a startled glance from the animals who quickly returned to their chatter. He scampered up the beach, picked up his lunch, and ran back to them, tagging along at their heels. "I've got my lunch, want some?"
"Benson, you'd better tell your men back there to stop gawking at the scenery and get to work. Time is money. I didn't pay for this expedition just to give your flunkies a vacation." The animals stopped so suddenly that Purnie nearly tangled himself in their heels. "All right, Forbes, just hold it a minute. Listen to me. Sure, it's your money that put us here; it's your expedition all the way. But you hired me to get you here with the best crew on earth, and that's just what I've done. My job isn't over yet. I'm responsible for the safety of the men while we're here, and for the safe trip home." "Precisely. And since you're responsible, get 'em working. Tell 'em to bring along the flag. Look at the damn fools back there, playing in the ocean with a three-legged ostrich!" "Good God, man, aren't you human? We've only been on this planet twenty minutes! Naturally they want to look around. They half expected to find wild animals or worse, and here we are surrounded by quaint little creatures that run up to us like we're long-lost brothers. Let the men look around a minute or two before we stake out your claim." "Bah! Bunch of damn children." As Purnie followed along, a leg shot out at him and missed. "Benson, will you get this bug-eyed kangaroo away from me!" Purnie shrieked with joy at this new frolic and promptly stood on his head. In this position he got an upside down view of them walking away. He gave up trying to stay with them. Why did they move so fast, anyway? What was the hurry? As he sat down and began eating his lunch, three more of the creatures came along making excited noises, apparently trying to catch up to the first two. As they passed him, he held out his lunch. "Want some?" No response. Playing held more promise than eating. He left his lunch half eaten and went down to where they had stopped further along the beach. "Captain Benson, sir! Miles has detected strong radiation in the vicinity. He's trying to locate it now." "There you are, Forbes. 
Your new piece of real estate is going to make you so rich that you can buy your next planet. That'll make eighteen, I believe." "Radiation, bah! We've found low-grade ore on every planet I've discovered so far, and this one'll be no different. Now how about that flag? Let's get it up, Benson. And the cornerstone, and the plaque." "All right, lads. The sooner we get Mr. Forbes's pennant raised and his claim staked out, the sooner we can take time to look around. Lively now!" When the three animals went back to join the rest of their group, the first two resumed walking. Purnie followed along. "Well, Benson, you won't have to look far for materials to use for the base of the flag pole. Look at that rockpile up there." "Can't use them. They're petrified logs. The ones on top are too high to carry down, and if we move those on the bottom, the whole works will slide down on top of us." "Well—that's your problem. Just remember, I want this flag pole to be solid. It's got to stand at least—" "Don't worry, Forbes, we'll get your monument erected. What's this with the flag? There must be more to staking a claim than just putting up a flag." "There is, there is. Much more. I've taken care of all requirements set down by law to make my claim. But the flag? Well, you might say it represents an empire, Benson. The Forbes Empire. On each of my flags is the word FORBES, a symbol of development and progress. Call it sentiment if you will." "Don't worry, I won't. I've seen real-estate flags before." "Damn it all, will you stop referring to this as a real-estate deal? What I'm doing is big, man. Big! This is pioneering." "Of course. And if I'm not mistaken, you've set up a neat little escrow system so that you not only own the planets, but you will virtually own the people who are foolish enough to buy land on them." "I could have your hide for talking to me like this. Damn you, man! It's people like me who pay your way.
It's people like me who give your space ships some place to go. It's people like me who pour good money into a chancey job like this, so that people like you can get away from thirteen-story tenement houses. Did you ever think of that?" "I imagine you'll triple your money in six months." When they stopped, Purnie stopped. At first he had been interested in the strange sounds they were making, but as he grew used to them, and as they in turn ignored his presence, he hopped alongside chattering to himself, content to be in their company. He heard more of these sounds coming from behind, and he turned to see the remainder of the group running toward them. "Captain Benson! Here's the flag, sir. And here's Miles with the scintillometer. He says the radiation's getting stronger over this way!" "How about that, Miles?" "This thing's going wild, Captain. It's almost off scale." Purnie saw one of the animals hovering around him with a little box. Thankful for the attention, he stood on his head. "Can you do this?" He was overjoyed at the reaction. They all started making wonderful noises, and he felt most satisfied. "Stand back, Captain! Here's the source right here! This little chuck-walla's hotter than a plutonium pile!" "Let me see that, Miles. Well, I'll be damned! Now what do you suppose—" By now they had formed a widening circle around him, and he was hard put to think of an encore. He gambled on trying a brand new trick: he stood on one leg. "Benson, I must have that animal! Put him in a box." "Now wait a minute, Forbes. Universal Law forbids—" "This is my planet and I am the law. Put him in a box!" "With my crew as witness, I officially protest—" "Good God, what a specimen to take back. Radio-active animals! Why, they can reproduce themselves, of course! There must be thousands of these creatures around here someplace. And to think of those damn fools on Earth with their plutonium piles! Hah! Now I'll have investors flocking to me. 
How about it, Benson—does pioneering pay off or doesn't it?" "Not so fast. Since this little fellow is radioactive, there may be great danger to the crew—" "Now look here! You had planned to put mineral specimens in a lead box, so what's the difference? Put him in a box." "He'll die." "I have you under contract, Benson! You are responsible to me, and what's more, you are on my property. Put him in a box." Purnie was tired. First the time-stopping, then this. While this day had brought more fun and excitement than he could have hoped for, the strain was beginning to tell. He lay in the center of the circle happily exhausted, hoping that his friends would show him some of their own tricks. He didn't have to wait long. The animals forming the circle stepped back and made way for two others who came through carrying a box. Purnie sat up to watch the show. "Hell, Captain, why don't I just pick him up? Looks like he has no intention of running away." "Better not, Cabot. Even though you're shielded, no telling what powers the little fella has. Play it safe and use the rope." "I swear he knows what we're saying. Look at those eyes." "All right, careful now with that line." "Come on, baby. Here you go. That's a boy!" Purnie took in these sounds with perplexed concern. He sensed the imploring quality of the creature with the rope, but he didn't know what he was supposed to do. He cocked his head to one side as he wiggled in anticipation. He saw the noose spinning down toward his head, and, before he knew it, he had scooted out of the circle and up the sandy beach. He was surprised at himself for running away. Why had he done it? He wondered. Never before had he felt this fleeting twinge that made him want to protect himself. He watched the animals huddle around the box on the beach, their attention apparently diverted to something else. He wished now that he had not run away; he felt he had lost his chance to join in their fun. "Wait!" 
He ran over to his half-eaten lunch, picked it up, and ran back into the little crowd. "I've got my lunch, want some?" The party came to life once more. His friends ran this way and that, and at last Purnie knew that the idea was to get him into the box. He picked up the spirit of the tease, and deliberately ran within a few feet of the lead box, then, just as the nearest pursuer was about to push him in, he sidestepped onto safer ground. Then he heard a deafening roar and felt a warm, wet sting in one of his legs. "Forbes, you fool! Put away that gun!" "There you are, boys. It's all in knowing how. Just winged him, that's all. Now pick him up." The pang in his leg was nothing: Purnie's misery lay in his confusion. What had he done wrong? When he saw the noose spinning toward him again, he involuntarily stopped time. He knew better than to use this power carelessly, but his action now was reflex. In that split second following the sharp sting in his leg, his mind had grasped in all directions to find an acceptable course of action. Finding none, it had ordered the stoppage of time. The scene around him became a tableau once more. The noose hung motionless over his head while the rest of the rope snaked its way in transverse waves back to one of the two-legged animals. Purnie dragged himself through the congregation, whimpering from his inability to understand. As he worked his way past one creature after another, he tried at first to not look them in the eye, for he felt sure he had done something wrong. Then he thought that by sneaking a glance at them as he passed, he might see a sign pointing to their purpose. He limped by one who had in his hand a small shiny object that had been emitting smoke from one end; the smoke now billowed in lifeless curls about the animal's head. He hobbled by another who held a small box that had previously made a hissing sound whenever Purnie was near. These things told him nothing. 
Before starting his climb up the knoll, he passed a tripon which, true to its reputation, was comical even in fright. Startled by the loud explosion, it had jumped four feet into the air before Purnie had stopped time. Now it hung there, its beak stuffed with seaweed and its three legs drawn up into a squatting position. Leaving the assorted statues behind, he limped his way up the knoll, torn between leaving and staying. What an odd place, this ocean country! He wondered why he had not heard more detail about the beach animals. Reaching the top of the bluff, he looked down upon his silent friends with a feeling of deep sorrow. How he wished he were down there playing with them. But he knew at last that theirs was a game he didn't fit into. Now there was nothing left but to resume time and start the long walk home. Even though the short day was nearly over, he knew he didn't dare use time-stopping to get himself home in nothing flat. His fatigued body and clouded mind were strong signals that he had already abused this faculty. When Purnie started time again, the animal with the noose stood in open-mouthed disbelief as the rope fell harmlessly to the sand—on the spot where Purnie had been standing. "My God, he's—he's gone." Then another of the animals, the one with the smoking thing in his hand, ran a few steps toward the noose, stopped and gaped at the rope. "All right, you people, what's going on here? Get him in that box. What did you do with him?" The resumption of time meant nothing at all to those on the beach, for to them time had never stopped. The only thing they could be sure of was that at one moment there had been a fuzzy creature hopping around in front of them, and the next moment he was gone. "Is he invisible, Captain? Where is he?" "Up there, Captain! On those rocks. Isn't that him?" "Well, I'll be damned!" "Benson, I'm holding you personally responsible for this! Now that you've botched it up, I'll bring him down my own way." 
"Just a minute, Forbes, let me think. There's something about that fuzzy little devil that we should.... Forbes! I warned you about that gun!" Purnie moved across the top of the rockpile for a last look at his friends. His weight on the end of the first log started the slide. Slowly at first, the giant pencils began cascading down the short distance to the sand. Purnie fell back onto solid ground, horrified at the spectacle before him. The agonizing screams of the animals below filled him with hysteria. The boulders caught most of them as they stood ankle-deep in the surf. Others were pinned down on the sand. "I didn't mean it!" Purnie screamed. "I'm sorry! Can't you hear?" He hopped back and forth near the edge of the rise, torn with panic and shame. "Get up! Please get up!" He was horrified by the moans reaching his ears from the beach. "You're getting all wet! Did you hear me? Please get up." He was choked with rage and sorrow. How could he have done this? He wanted his friends to get up and shake themselves off, tell him it was all right. But it was beyond his power to bring it about. The lapping tide threatened to cover those in the orange surf. Purnie worked his way down the hill, imploring them to save themselves. The sounds they made carried a new tone, a desperate foreboding of death. "Rhodes! Cabot! Can you hear me?" "I—I can't move, Captain. My leg, it's.... My God, we're going to drown!" "Look around you, Cabot. Can you see anyone moving?" "The men on the beach are nearly buried, Captain. And the rest of us here in the water—" "Forbes. Can you see Forbes? Maybe he's—" His sounds were cut off by a wavelet gently rolling over his head. Purnie could wait no longer. The tides were all but covering one of the animals, and soon the others would be in the same plight. Disregarding the consequences, he ordered time to stop. Wading down into the surf, he worked a log off one victim, then he tugged the animal up to the sand. 
Through blinding tears, Purnie worked slowly and carefully. He knew there was no hurry—at least, not as far as his friends' safety was concerned. No matter what their condition of life or death was at this moment, it would stay the same way until he started time again. He made his way deeper into the orange liquid, where a raised hand signalled the location of a submerged body. The hand was clutching a large white banner that was tangled among the logs. Purnie worked the animal free and pulled it ashore. It was the one who had been carrying the shiny object that spit smoke. Scarcely noticing his own injured leg, he ferried one victim after another until there were no more in the surf. Up on the beach, he started unraveling the logs that pinned down the animals caught there. He removed a log from the lap of one, who then remained in a sitting position, his face contorted into a frozen mask of agony and shock. Another, with the weight removed, rolled over like an iron statue into a new position. Purnie whimpered in black misery as he surveyed the chaotic scene before him. At last he could do no more; he felt consciousness slipping away from him. He instinctively knew that if he lost his senses during a period of time-stopping, events would pick up where they had left off ... without him. For Purnie, this would be death. If he had to lose consciousness, he knew he must first resume time. Step by step he plodded up the little hill, pausing every now and then to consider if this were the moment to start time before it was too late. With his energy fast draining away, he reached the top of the knoll, and he turned to look down once more on the group below. Then he knew how much his mind and body had suffered: when he ordered time to resume, nothing happened. His heart sank. He wasn't afraid of death, and he knew that if he died the oceans would roll again and his friends would move about. But he wanted to see them safe. He tried to clear his mind for supreme effort. 
There was no urging time to start. He knew he couldn't persuade it by bits and pieces, first slowly then full ahead. Time either progressed or it didn't. He had to take one viewpoint or the other. Then, without knowing exactly when it happened, his mind took command.... His friends came to life. The first one he saw stir lay on his stomach and pounded his fists on the beach. A flood of relief settled over Purnie as sounds came from the animal. "What's the matter with me? Somebody tell me! Am I nuts? Miles! Schick! What's happening?" "I'm coming, Rhodes! Heaven help us, man—I saw it, too. We're either crazy or those damn logs are alive!" "It's not the logs. How about us? How'd we get out of the water? Miles, we're both cracking." "I'm telling you, man, it's the logs, or rocks or whatever they are. I was looking right at them. First they're on top of me, then they're piled up over there!" "Damnit, the logs didn't pick us up out of the ocean, did they? Captain Benson!" "Are you men all right?" "Yes sir, but—" "Who saw exactly what happened?" "I'm afraid we're not seeing right, Captain. Those logs—" "I know, I know. Now get hold of yourselves. We've got to round up the others and get out of here while time is on our side." "But what happened, Captain?" "Hell, Rhodes, don't you think I'd like to know? Those logs are so old they're petrified. The whole bunch of us couldn't lift one. It would take super-human energy to move one of those things." "I haven't seen anything super-human. Those ostriches down there are so busy eating seaweed—" "All right, let's bear a hand here with the others. Some of them can't walk. Where's Forbes?" "He's sitting down there in the water, Captain, crying like a baby. Or laughing. I can't tell which." "We'll have to get him. Miles, Schick, come along. Forbes! You all right?" "Ho-ho-ho! Seventeen! Seventeen! Seventeen planets, Benson, and they'll do anything I say! This one's got a mind of its own. Did you see that little trick with the rocks? 
Ho-ho!" "See if you can find his gun, Schick; he'll either kill himself or one of us. Tie his hands and take him back to the ship. We'll be along shortly." "Hah-hah-hah! Seventeen! Benson, I'm holding you personally responsible for this. Hee-hee!" Purnie opened his eyes as consciousness returned. Had his friends gone? He pulled himself along on his stomach to a position between two rocks, where he could see without being seen. By the light of the twin moons he saw that they were leaving, marching away in groups of two and three, the weak helping the weaker. As they disappeared around the curving shoreline, the voices of the last two, bringing up the rear far behind the others, fell faintly on his ears over the sound of the surf. "Is it possible that we're all crazy, Captain?" "It's possible, but we're not." "I wish I could be sure." "See Forbes up ahead there? What do you think of him?" "I still can't believe it." "He'll never be the same." "Tell me something. What was the most unusual thing you noticed back there?" "You must be kidding, sir. Why, the way those logs were off of us suddenly—" "Yes, of course. But I mean beside that." "Well, I guess I was kind of busy. You know, scared and mixed up." "But didn't you notice our little pop-eyed friend?" "Oh, him. I'm afraid not, Captain. I—I guess I was thinking mostly of myself." "Hmmm. If I could only be sure I saw him. If only someone else saw him too." "I'm afraid I don't follow you, sir." "Well, damn it all, you know that Forbes took a pot shot at him. Got him in the leg. That being the case, why would the fuzzy little devil come back to his tormentors—back to us—when we were trapped under those logs?" "Well, I guess as long as we were trapped, he figured we couldn't do him any more harm.... I'm sorry, that was a stupid answer. I guess I'm still a little shaky." "Forget it. Look, you go ahead to the ship and make ready for take-off. I'll join you in a few minutes. I think I'll go back and look around. You know. 
Make sure we haven't left anyone." "No need to do that. They're all ahead of us. I've checked." "That's my responsibility, Cabot, not yours. Now go on." As Purnie lay gathering strength for the long trek home, he saw through glazed eyes one of the animals coming back along the beach. When it was nearly directly below him, he could hear it making sounds that by now had become familiar. "Where are you?" Purnie paid little attention to the antics of his friend; he was beyond understanding. He wondered what they would say at home when he returned. "We've made a terrible mistake. We—" The sounds faded in and out on Purnie's ears as the creature turned slowly and called in different directions. He watched the animal walk over to the pile of scattered logs and peer around and under them. "If you're hurt I'd like to help!" The twin moons were high in the sky now, and where their light broke through the swirling clouds a double shadow was cast around the animal. With foggy awareness, Purnie watched the creature shake its head slowly, then walk away in the direction of the others. Purnie's eyes stared, without seeing, at the panorama before him. The beach was deserted now, and his gaze was transfixed on a shimmering white square floating on the ocean. Across it, the last thing Purnie ever saw, was emblazoned the word FORBES.
### Introduction
Recent NLP studies have thrived on the distributional hypothesis. More recently, there have been efforts to apply the same intuition to larger semantic units, such as sentences or documents. However, approaches based on distributional semantics are limited by the grounding problem BIBREF0 , which calls for techniques that ground certain conceptual knowledge in perceptual information. Both the NLP and vision communities have proposed various multi-modal learning methods to bridge the gap between language and vision. However, how general sentence representations can benefit from visual grounding has not been fully explored yet. Very recently, BIBREF1 proposed a multi-modal encoder-decoder framework that, given an image caption, jointly predicts another caption and the features of the associated image. The work showed promising results for further improving general sentence representations by grounding them visually. However, in that model, visual association occurs only at the final hidden state of the encoder, potentially limiting the effect of visual grounding. Attention mechanisms help neural networks focus on the specific input features relevant to the output. In the case of a visually grounded multi-modal framework, applying such an attention mechanism could help the encoder identify visually significant words or phrases. We hypothesize that a language-attentive multi-modal framework has an intuitive basis in how humans mentally visualize certain concepts in sentences during language comprehension. In this paper, we propose an enhanced multi-modal encoder-decoder model, in which the encoder attends to the input sentence and the decoders predict the image features and the target sentence. We train the model on images and their respective captions from the COCO5K dataset BIBREF2 . We augment state-of-the-art sentence representations with those produced by our model and conduct a series of experiments on transfer tasks to test the quality of the sentence representations.
Through detailed analysis, we confirm our hypothesis that self-attention helps our model produce more feature-rich, visually grounded sentence representations.

### Related Work
Sentence Representations. Since the inception of word embeddings BIBREF3 , extensive work has emerged on representations for larger semantic units, such as sentences and paragraphs. These works range from deep neural models BIBREF4 to log-bilinear models BIBREF5 , BIBREF6 . A recent work proposed using supervised learning of a specific task as leverage to obtain general sentence representations BIBREF7 . Joint Learning of Language and Vision. Convergence between computer vision and NLP research has become increasingly common. Image captioning BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 and image synthesis BIBREF12 are two common tasks. Significant studies have focused on improving word embeddings BIBREF13 , BIBREF14 , phrase embeddings BIBREF15 , sentence embeddings BIBREF1 , BIBREF16 , and language models BIBREF17 through multi-modal learning of vision and language. Among these studies, BIBREF1 is the first to apply a skip-gram-like intuition (predicting multiple modalities from language) to the joint learning of language and vision from the perspective of general sentence representations. Attention Mechanism in Multi-Modal Semantics. Attention mechanisms were first introduced in BIBREF18 for neural machine translation. Similar intuitions have since been applied to various NLP BIBREF19 , BIBREF20 , BIBREF21 and vision tasks BIBREF8 . BIBREF8 applied an attention mechanism to images to bind specific visual features to language. Recently, the self-attention mechanism BIBREF21 was proposed for situations where there is no extra source of information to "guide the extraction of sentence embedding". In this work, we propose a novel sentence encoder for the multi-modal encoder-decoder framework that leverages the self-attention mechanism. To the best of our knowledge, this is the first such attempt among studies on the joint learning of language and vision.

### Proposed Method
Given a data sample $(x, y, v)$, where $x$ is the source caption, $y$ is the target caption, and $v$ is the hidden representation of the associated image, our goal is to predict $y$ and $v$ from $x$, and the hidden representation in the middle serves as the general sentence representation.

### Visually Grounded Encoder-Decoder Framework
We base our model on the encoder-decoder framework introduced in BIBREF1 . A bidirectional Long Short-Term Memory (LSTM) BIBREF22 encodes an input sentence and produces a sentence representation for the input. A pair of LSTM cells encodes the input sequence in both directions and produces two final hidden states: $\overrightarrow{h}$ and $\overleftarrow{h}$. The hidden representation of the entire sequence is produced by selecting the maximum elements between the two hidden states: $h = \max(\overrightarrow{h}, \overleftarrow{h})$. The decoder calculates the probability of a target word $y_t$ at each time step $t$, conditional on the sentence representation $h$ and all target words before $t$: $p(y_t \mid h, y_{<t})$. The objective of the basic encoder-decoder model is thus the negative log-likelihood of the target sentence given all model parameters: $\mathcal{L}_c = -\sum_{t} \log p(y_t \mid h, y_{<t})$.

### Visual Grounding
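As a concrete illustration of the encoder-decoder computation just described, here is a minimal NumPy sketch; the directional hidden states and per-step word probabilities are random stand-ins for a real trained LSTM, not an actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden dimensionality of each directional LSTM (illustrative)

# Final hidden states of the forward and backward passes
# (stand-ins for the real recurrent computation).
h_fwd = rng.standard_normal(d)
h_bwd = rng.standard_normal(d)

# Element-wise maximum combines the two directions into a single
# sentence representation, as described above.
h = np.maximum(h_fwd, h_bwd)

# Decoder objective: negative log-likelihood of the target words.
# p_t[t] stands for the probability the decoder assigns to the
# correct target word at timestep t, conditioned on h and y_{<t}.
p_t = np.array([0.7, 0.4, 0.9, 0.6])
nll = -np.sum(np.log(p_t))
```

Note that the element-wise max (rather than concatenation) keeps the sentence representation at the dimensionality of a single directional state.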
Given the source caption representation $h$ and the relevant image representation $v$, we associate the two representations by projecting $h$ into image feature space. We train the model to rank the similarity between the predicted image features $\hat{v}$ and the target image features $v$ higher than that of other pairs, which is achieved by a ranking loss function. Although margin ranking loss has been the dominant choice for training cross-modal feature matching BIBREF17 , BIBREF1 , BIBREF23 , we find that log-exp-sum pairwise ranking BIBREF24 yields better results in terms of evaluation performance and efficiency. Thus, the objective for ranking is $L_{rank} = \log\big(1 + \sum_{v' \in \mathcal{N}} \exp(s(\hat{v}, v') - s(\hat{v}, v))\big)$, where $\mathcal{N}$ is the set of negative examples and $s(\cdot, \cdot)$ is cosine similarity.

### Visual Grounding with Self-Attention
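A pairwise ranking loss of this log-exp-sum form can be sketched in NumPy. The exact formulation in the cited work may differ in details; the function and the example vectors below are illustrative assumptions.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def log_exp_sum_rank_loss(pred, pos, negatives):
    # log(1 + sum_n exp(s(pred, v_n) - s(pred, v_pos))):
    # small when the positive pair is ranked above all negatives.
    s_pos = cosine(pred, pos)
    margins = np.array([cosine(pred, n) - s_pos for n in negatives])
    return float(np.log1p(np.exp(margins).sum()))

pred = np.array([1.0, 0.0, 0.0])
good_target = np.array([0.9, 0.1, 0.0])  # well-aligned positive
bad_target = np.array([0.0, 1.0, 0.0])   # poorly aligned positive
negatives = [np.array([0.0, 0.8, 0.6]), np.array([0.0, 0.0, 1.0])]

low = log_exp_sum_rank_loss(pred, good_target, negatives)
high = log_exp_sum_rank_loss(pred, bad_target, negatives)
```

Unlike a margin loss, every violated pair contributes smoothly through the exponential, which is one plausible reason such a formulation can train more efficiently.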
Let $h_t \in \mathbb{R}^{2u}$ be the encoder hidden state at timestep $t$ concatenated from two opposite directional LSTMs ($2u$ is the dimensionality of sentence representations). Let $H$ be the hidden state matrix whose $t$-th column is $h_t$. The self-attention mechanism aims to learn an attention weight $a_t$, i.e. how much attention must be paid to hidden state $h_t$, based on all hidden states $H$. Since there could be multiple ways to attend depending on desired features, we allow multiple attention vectors to be learned. The attention matrix $A$ is a stack of $r$ attention vectors, obtained through attention layers: $A = \mathrm{softmax}(W_2 \tanh(W_1 H))$, where $W_1$ and $W_2$ are attention parameters and $r$ is a hyperparameter. The context matrix $C$ is obtained by $C = A H^\top$. Finally, we compress the context matrix into a fixed-size representation $m$ by max-pooling all context vectors: $m = \max(c_1, \ldots, c_r)$. The attended representation $m$ and the encoder-decoder representation $h$ are concatenated into the final self-attentive sentence representation $[h; m]$. This hybrid representation replaces $h$ and is used to predict image features (Section SECREF2 ) and the target caption (Section SECREF1 ).

### Learning Objectives
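The attention computation just described can be sketched with toy dimensions (NumPy; the real model uses much larger sizes, and the weights here are random stand-ins for learned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

two_u, n, d_a, r = 8, 5, 4, 3  # toy sizes: 2u, sequence length, attention dim, num attentions
H = rng.normal(size=(two_u, n))     # hidden-state matrix, one column per timestep
W1 = rng.normal(size=(d_a, two_u))  # first attention layer
W2 = rng.normal(size=(r, d_a))      # second attention layer

A = softmax(W2 @ np.tanh(W1 @ H), axis=1)  # (r, n): r attention distributions over timesteps
C = A @ H.T                                # (r, 2u): one context vector per attention
m = C.max(axis=0)                          # (2u,): max-pooled attended representation
```

Each row of `A` is a probability distribution over timesteps, so each row of `C` is a weighted average of hidden states; max-pooling over the `r` context vectors then yields a fixed-size vector regardless of `r`.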
Following the experimental design of BIBREF1 , we conduct experiments on three different learning objectives: Cap2All, Cap2Cap and Cap2Img. Under Cap2All, the model is trained to predict both the target caption and the associated image: $L = L_{cap} + L_{rank}$. Under Cap2Cap, the model is trained to predict only the target caption ($L = L_{cap}$) and, under Cap2Img, only the associated image ($L = L_{rank}$).

### Implementation Details
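The three objectives differ only in which loss terms are active; a trivial sketch (the function name and string labels are illustrative, not from the paper):

```python
def total_loss(l_cap, l_rank, objective):
    # Select the active loss terms for the given training objective.
    if objective == "cap2all":
        return l_cap + l_rank
    if objective == "cap2cap":
        return l_cap
    if objective == "cap2img":
        return l_rank
    raise ValueError(f"unknown objective: {objective}")
```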
Word embeddings are initialized with GloVe BIBREF25 . The hidden dimension of each encoder and decoder LSTM cell ($u$) is 1024. We use the Adam optimizer BIBREF26 and clip the gradients to between -5 and 5. The number of layers, dropout rate and non-linearity for the image-feature prediction layers are 4, 0.3 and ReLU BIBREF27 , respectively. The dimensionality of the hidden attention layer is 350 and the number of attention vectors ($r$) is 30. We employ orthogonal initialization BIBREF28 for recurrent weights and Xavier initialization BIBREF29 for all others. For the datasets, we use Karpathy and Fei-Fei's split of the MS-COCO dataset BIBREF10 . Image features are prepared by extracting hidden representations at the final layer of ResNet-101 BIBREF30 . We evaluate sentence representation quality using SentEval BIBREF7 , BIBREF1 scripts. The mini-batch size is 128 and negative samples are drawn from the remaining data samples in the same mini-batch.

### Evaluation
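Two of these details, element-wise gradient clipping to [-5, 5] and orthogonal initialization of recurrent weights, can be sketched as follows (a NumPy illustration of the underlying operations, not the actual training code):

```python
import numpy as np

def clip_gradient(grad, lo=-5.0, hi=5.0):
    # Element-wise clipping of a gradient tensor to [lo, hi].
    return np.clip(grad, lo, hi)

def orthogonal_init(n, rng):
    # QR decomposition of a random Gaussian matrix yields an orthogonal factor,
    # a standard way to initialize square recurrent weight matrices.
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

rng = np.random.default_rng(0)
W_rec = orthogonal_init(4, rng)
g = clip_gradient(np.array([-7.0, 0.5, 6.0]))
```

Orthogonal recurrent weights preserve the norm of hidden states under repeated multiplication, which helps mitigate exploding and vanishing gradients in recurrent networks.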
Adhering to the experimental settings of BIBREF1 , we concatenate sentence representations produced from our model with those obtained from the state-of-the-art unsupervised learning model (Layer Normalized Skip-Thoughts, ST-LN) BIBREF31 . We evaluate the quality of sentence representations produced from different variants of our encoders on well-known transfer tasks: movie review sentiment (MR) BIBREF32 , customer reviews (CR) BIBREF33 , subjectivity (SUBJ) BIBREF34 , opinion polarity (MPQA) BIBREF35 , paraphrase identification (MSRP) BIBREF36 , binary sentiment classification (SST) BIBREF37 , SICK entailment and SICK relatedness BIBREF38 .

### Results
Results are shown in Table TABREF11 . They show that incorporating the self-attention mechanism in the encoder is beneficial for most tasks. However, the original models were better on some tasks (CR, MPQA, MRPC), suggesting that the self-attention mechanism can sometimes introduce noise into sentence features. Overall, utilizing self-attentive sentence representations further improves performance in 5 out of 8 tasks. Considering that the models with self-attention employ smaller LSTM cells (1024) than those without (2048) (Section SECREF6 ), the performance improvements are significant. Results on COCO5K image and caption retrieval tasks (not included in the paper due to limited space) show performance comparable to other, more specialized methods BIBREF10 , BIBREF39 .

### Attention Mechanism at Work
In order to study the effects of incorporating the self-attention mechanism in joint prediction of image and language features, we examine attention vectors for selected samples from the MS-COCO dataset and compare them to the associated images (Figure FIGREF13 ). For example, given the sentence “man in black shirt is playing guitar”, our model identifies words associated with strong visual imagery, such as “man”, “black” and “guitar”. Given the second sentence, our model learned to attend to visually significant words such as “cat” and “bowl”. These findings show that visually grounding self-attended sentence representations helps to expose word-level visual features in sentence representations BIBREF1 .

### Conclusion and Future Work
In this paper, we proposed a novel encoder that exploits the self-attention mechanism. We trained the model on the MS-COCO dataset and evaluated the sentence representations produced by our model (combined with universal sentence representations) on several transfer tasks. Results show that the self-attention mechanism not only improves the quality of general sentence representations but also guides the encoder to emphasize certain visually associable words, which helps to make visual features more prominent in the sentence representations. As future work, we intend to explore a cross-modal attention mechanism to further intertwine language and visual information for the purpose of improving sentence representation quality.

Table 1: Classification performance on transfer tasks. We report F1-score for MRPC, Pearson coefficient for SICK-R and accuracy for most others. All sentence representations have been concatenated with ST-LN embeddings. Note that the discrepancy between results reported in this paper and the referenced paper is likely due to differences in minor implementation details and experimental environment. Our models are denoted by †.

Figure 1: Activated attention weights on two samples from the MS-COCO dataset. The vertical axis shows attention vectors learned by our model (compressed due to space limit). Note how the sentence encoder learned to identify words with strong visual associations.
|
Karpathy and Fei-Fei's split for MS-COCO dataset BIBREF10
|
Members of western society in 1996 are _________ expletives compared to members of western society from three decades prior.
A. more offended by
B. more creative in their use of
C. less offended by
D. less creative in their use of
|
Maledict oratory: The high costs of low language. Sunday, Jan. 14, 1996: A day that will live in--well, not infamy, exactly. Blasphemy would be closer to it. Early that afternoon, the Pittsburgh Steelers defeated the Indianapolis Colts to win the American Football Conference championship. Linebacker Greg Lloyd, accepting the trophy in front of a national television audience, responded with enthusiasm. "Let's see if we can bring this damn thing back here next year," he said, "along with the [expletive] Super Bowl." A few hours later, Michael Irvin of the Dallas Cowboys offered this spirited defense of his coach on TV after his team won the National Football Conference title: "Nobody deserves it more than Barry Switzer. He took all of this [expletive] ." I watched those episodes, and, incongruous as it may sound, I thought of Kenneth Tynan. Britain's great postwar drama critic was no fan of American football, but he was a fan of swearing. Thirty years earlier, almost to the week, Tynan was interviewed on BBC television in his capacity as literary director of Britain's National Theater and asked if he would allow the theater to present a play in which sex took place on stage. "Certainly," he replied. "I think there are very few rational people in this world to whom the word '[expletive]' is particularly diabolical or revolting or totally forbidden." It turned out there were a few more than Tynan thought. Within 24 hours, resolutions had been introduced in the House of Commons calling for his prosecution on charges of obscenity, for his removal as a theater official, and for censure of the network for allowing an obscene word to go out on the airwaves. Tynan escaped punishment, but he acquired a public reputation for tastelessness that he carried for the rest of his life. To much of ordinary Britain, he became the man who had said "[expletive]" on the BBC. Neither Greg Lloyd nor Michael Irvin was so stigmatized.
"It's live television," NBC Vice President Ed Markey said, rationalizing the outbursts. "It's an emotional moment. These things happen." Irvin wasn't about to let that stand. "I knew exactly what I was saying," he insisted later. "Those of you who can't believe I said it--believe it." Swearing isn't the only public act that Western civilization condones today but didn't 30 years ago. But it is one of the most interesting. It is everywhere, impossible to avoid or tune out. I am sitting in a meeting at the office, talking with a colleague about a business circumstance that may possibly go against us. "In that case, we're [expletive] ," he says. Five years ago, he would have said "screwed." Twenty years ago, he would have said, "We're in big trouble." Societal tolerance of profanity requires us to increase our dosage as time goes on. I am walking along a suburban street, trailing a class of pre-schoolers who are linked to each other by a rope. A pair of teen-agers passes us in the other direction. By the time they have reached the end of the line of children, they have tossed off a whole catalog of obscenities I did not even hear until I was well into adolescence, let alone use in casual conversation on a public street. I am talking to a distinguished professor of public policy about a foundation grant. I tell her something she wasn't aware of before. In 1965, the appropriate response was "no kidding." In 1996, you do not say "no kidding." It is limp and ineffectual. If you are surprised at all, you say what she says: "No shit." What word is taboo in middle-class America in 1996? There are a couple of credible candidates: The four-letter word for "vagina" remains off-limits in polite conversation (although that has more to do with feminism than with profanity), and the slang expression for those who engage in oral sex with males is not yet acceptable by the standards of office-meeting etiquette. 
But aside from a few exceptions, the supply of genuinely offensive language has dwindled almost to nothing as the 20th century comes to an end; the currency of swearing has been inflated to the brink of worthlessness. When almost anything can be said in public, profanity ceases to exist in any meaningful way at all. That most of the forbidden words of the 1950s are no longer forbidden will come as news to nobody: The steady debasement of the common language is only one of many social strictures that have loosened from the previous generation to the current. What is important is that profanity served a variety of purposes for a long time in Western culture. It does not serve those purposes any more. What purposes? There are a couple of plausible answers. One of them is emotional release. Robert Graves, who wrote a book in the 1920s called The Future of Swearing , thought that profanity was the adult replacement for childhood tears. There comes a point in life, he wrote, when "wailing is rightly discouraged, and groans are also considered a signal of extreme weakness. Silence under suffering is usually impossible." So one reaches back for a word one does not normally use, and utters it without undue embarrassment or guilt. And one feels better--even stimulated. The anthropologist Ashley Montagu, whose Anatomy of Swearing , published in 1967, is the definitive modern take on the subject, saw profanity as a safety valve rather than a stimulant, a verbal substitute for physical aggression. When someone swears, Montagu wrote, "potentially noxious energy is converted into a form that renders it comparatively innocuous." One could point out, in arguing against the safety-valve theory, that as America has grown more profane in the past 30 years, it has also grown more violent, not less. But this is too simple. It isn't just the supply of dirty words that matters, it's their emotive power. 
If they have lost that power through overuse, it's perfectly plausible to say that their capacity to deter aggressive behavior has weakened as well. But there is something else important to say about swearing--that it represents the invocation of those ideas a society considers powerful, awesome, and a little scary. I'm not sure there is an easy way to convey to anybody under 30, for example, the sheer emotive force that the word "[expletive]" possessed in the urban childhood culture of 40 years ago. It was the verbal link to a secret act none of us understood but that was known to carry enormous consequences in the adult world. It was the embodiment of both pleasure and danger. It was not a word or an idea to mess with. When it was used, it was used, as Ashley Montagu said, "sotto voce , like a smuggler cautiously making his way across a forbidden frontier." In that culture, the word "[expletive]" was not only obscene, it was profane, in the original sense: It took an important idea in vain. Profanity can be an act of religious defiance, but it doesn't have to be. The Greeks tempted fate by invoking the names of their superiors on Mount Olympus; they also swore upon everyday objects whose properties they respected but did not fully understand. "By the Cabbage!" Socrates is supposed to have said in moments of stress, and that was for good reason. He believed that cabbage cured hangovers, and as such, carried sufficient power and mystery to invest any moment with the requisite emotional charge. These days, none of us believes in cabbage in the way Socrates did, or in the gods in the way most Athenians did. Most Americans tell poll-takers that they believe in God, but few of them in a way that would make it impossible to take His name in vain: That requires an Old Testament piety that disappeared from American middle-class life a long time ago. 
Nor do we believe in sex any more the way most American children and millions of adults believed in it a generation ago: as an act of profound mystery and importance that one did not engage in, or discuss, or even invoke, without a certain amount of excitement and risk. We have trivialized and routinized sex to the point where it just doesn't carry the emotional freight it carried in the schoolyards and bedrooms of the 1950s. Many enlightened people consider this to be a great improvement over a society in which sex generated not only emotion and power, but fear. For the moment, I wish to insist only on this one point: When sexuality loses its power to awe, it loses its power to create genuine swearing. When we convert it into a casual form of recreation, we shouldn't be surprised to hear linebackers using the word "[expletive]" on national television. To profane something, in other words, one must believe in it. The cheapening of profanity in modern America represents, more than anything else, the crumbling of belief. There are very few ideas left at this point that are awesome or frightening enough for us to enforce a taboo against them. The instinctive response of most educated people to the disappearance of any taboo is to applaud it, but this is wrong. Healthy societies need a decent supply of verbal taboos and prohibitions, if only as yardsticks by which ordinary people can measure and define themselves. By violating these taboos over and over, some succeed in defining themselves as rebels. Others violate them on special occasions to derive an emotional release. Forbidden language is one of the ways we remind children that there are rules to everyday life, and consequences for breaking them. When we forget this principle, or cease to accept it, it is not just our language that begins to fray at the edges. What do we do about it? Well, we could pass a law against swearing. Mussolini actually did that. 
He decreed that trains and buses, in addition to running on time, had to carry signs that read "Non bestemmiare per l'onore d'Italia." ("Do not swear for the honor of Italy.") The commuters of Rome reacted to those signs exactly as you would expect: They cursed them. What Mussolini could not do, I am reasonably sure that American governments of the 1990s cannot do, nor would I wish it. I merely predict that sometime in the coming generation, profanity will return in a meaningful way. It served too many purposes for too many years of American life to disappear on a permanent basis. We need it. And so I am reasonably sure that when my children have children, there will once again be words so awesome that they cannot be uttered without important consequences. This will not only represent a new stage of linguistic evolution, it will be a token of moral revival. What the dirty words will be, God only knows.
|
C. less offended by
|